# zbMATH — the first resource for mathematics
Extensions of the mountain pass theorem. (English) Zbl 0564.58012
The paper contains a number of extensions of the mountain pass lemma of A. Ambrosetti and P. H. Rabinowitz [(*) ibid. 14, 349-381 (1973; Zbl 0273.49063)]. The lemma gives sufficient conditions for the existence of critical points of continuously Fréchet differentiable functionals $$I: X\to {\mathbb{R}}$$ on a real Banach space X. The hypotheses of the lemma and its variants consist of a compactness condition and geometric restraints on the functional I. It was shown in (*) how the lemma may be applied to prove the existence of weak solutions for differential equations. (See also the survey article by L. Nirenberg [Bull. Am. Math. Soc., New Ser. 4, 267-302 (1981; Zbl 0468.47040)] for an introduction.)
The authors of the paper under review study variants of the geometric restraints on the functional I. At the same time they make statements as to whether one obtains local minima, maxima or saddle points. For example, take $$K_b=\{x\in X \mid I(x)=b,\ I'(x)=0\},$$ the set of all critical points with critical value b. If b is the value given in the original mountain pass lemma and X is infinite dimensional, then $$K_b$$ contains at least one saddle point. Finally, the authors give modifications of the above-mentioned results for periodic functionals. In this case one needs an adapted version of the compactness condition. For a different type of extension of the mountain pass lemma and its applications we would like to mention results of M. Struwe [Math. Ann. 261, 399-412 (1982; Zbl 0506.35034); J. Reine Angew. Math. 349, 1-23 (1984; Zbl 0521.49028)]. In these papers the differentiability requirement for the functional I is weakened.
Reviewer: G.Warnecke
##### MSC:
58E05 Abstract critical point theory (Morse theory, Lyusternik-Shnirel’man theory, etc.) in infinite-dimensional spaces
57R70 Critical points and critical submanifolds in differential topology
49Q99 Manifolds and measure-geometric topics
##### References:
[1] Ambrosetti, A.; Rabinowitz, P. H., Dual variational methods in critical point theory and applications, J. Funct. Anal., 14, 349-381 (1973) · Zbl 0273.49063
[2] Brezis, H.; Coron, J. M.; Nirenberg, L., Free vibrations for a nonlinear wave equation and a theorem of P. Rabinowitz, Comm. Pure Appl. Math., 33, 667-684 (1980) · Zbl 0484.35057
[3] Clark, D. C., A variant of the Lusternik-Schnirelman theory, Indiana Univ. Math. J., 22, 65-74 (1972) · Zbl 0228.58006
[4] Hofer, H., A note on the topological degree at a critical point of mountain pass type, 309-315 · Zbl 0545.58015
[5] Mawhin, J.; Willem, M., Variational methods and boundary value problems for vector second order differential equations and applications to the pendulum equation, J. Diff. Equations, 52, 264-287 (1984) · Zbl 0557.34036
[6] Pucci, P.; Serrin, J., A mountain pass theorem, J. Diff. Equations, in press · Zbl 0585.58006
[7] Ni, W. M., Some minimax principles and their applications in nonlinear elliptic equations, J. Analyse Math., 37, 248-278 (1980)
[8] Rabinowitz, P. H., Variational methods for nonlinear eigenvalue problems, Varenna · Zbl 0212.16504
[9] Rabinowitz, P. H., Some aspects of critical point theory, MRC Technical Report No. 2465, University of Wisconsin (1983)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
Tag Info
For a neutron of that speed, the uncertainty in the momentum is expected to be less than the momentum magnitude. Using the actual momentum will be an upper bound on the momentum uncertainty. That correlates to a lower bound on the position uncertainty. So, $\Delta x$ is lower bounded by $\hbar/(2p)$: $$\Delta x \ge \frac{\hbar}{2m_nv}.$$ $\Delta x$ could ...
Gravity fluctuations will always cause vibrations in atoms and molecules limiting the lowest temperature obtainable. Closer to the mass source, the stronger the gravity field. As stated by Asaf earlier, evaporative cooling will lower the temperature only so far. Adding a magnetic field may temporarily increase temperature by increasing vibrations in the ...
I think you are probably misinterpreting the context here. If you read the previous line carefully it says "there is always an undetermined interaction between observer and observed; there is nothing we can do to avoid the interaction or to allow for it ahead of time." And later he just says due to the fact that photon can be scattered within the 2θ' angle ...
There is yet another solution (maybe more elementary)$^1$, with some components of the answers from Qmechanic and JoshPhysics (Currently I'm taking my first QM course and I don't quite understand the solution of Qmechanic, and this answer complement JoshPhysics's answer) the solution uses the Heisenberg Equation: The time evolution of an operator $\hat{A}$ ...
The temperature limit for laser cooling is not related to gravity but to the always-present momentum kick during absorption/emission of photons. Ultracold atom experiments typically use laser cooling at an initial stage and afterwards evaporative cooling is used to reach the lowest temperatures. In evaporative cooling the most energetic atoms are discarded ...
Summary Using the entropic uncertainty principle, one can show that $μ_qμ_p≥\frac{π}{4e}$, where $μ$ is the mean deviation. This corresponds to $F≥\frac{π^2}{4e}=0.9077$ using the notations of AccidentalFourierTransform’s answer. I don’t think this bound is optimal, but didn’t manage to find a better proof. To simplify the expressions, I’ll assume $ℏ=1$, ...
This is a great example of how hard it is to popularize quantum mechanics. Greene's example is not quite right, because classically, the butterfly does have a definite position and momentum, at all times. We can also measure these values simultaneously to arbitrary accuracy, as your friend says. (As for your concern about exposure time, we could decrease ...
The Heisenberg uncertainty principle is a basic foundation stone of quantum mechanics, and is derivable from the commutator relations of the quantum mechanical operators describing the pair of variables participating in the HUP. You are discussing the energy-time uncertainty, $\Delta E \, \Delta t \ge \hbar/2$. For an individual particle, it describes a locus in the time versus energy ...
It cannot be proven, because "wave-particle duality" is not a mathematical statement. It most definitely is not "logically true". Can you try to make it mathematical? A mathematical framework The "complementarity principle" was introduced in order to better understand some features of quantum mechanics in the early days. The problem is that if you consider ...
The uncertainty principle never said that nothing can be measured simultaneously with accuracy. Uncertainty principle states that it is not possible to measure two canonically conjugate quantities at the same time with accuracy. Like you cannot measure the x component of momentum $p_x$ and the x coordinate position simultaneously with accuracy. But the x ...
I am not satisfied of the published replies, so I will try my own, as a metrologist (expert in measurement units, but not in theoretical physics). The question clearly is referring to the experimental frame while all the answers are referring only to the theoretical frame, so they do not talk nor understand with each other. Here we are dealing with two ...
The point dipole is an approximation from classical physics - note that it also involves an infinite field strength in its center, where the field amplitude is not differentiable. I think such a source is not compatible with the common approach to quantum mechanics. If you take such a very small, subwavelength source, it is true that the evanescent near ...
We can assume WLOG that $\bar x=\bar p=0$ and $\hbar =1$. We don't assume that the wave-functions are normalised. Let $$\sigma_x\equiv \frac{\int \mathrm dx\; |x|\;|\psi(x)|^2}{\int\mathrm dx\; |\psi(x)|^2}$$ and $$\sigma_p\equiv \frac{\int \mathrm dp\; |p|\;|\tilde \psi(p)|^2}{\int\mathrm dx\; |\psi(x)|^2}$$ Using $\int\mathrm dp\; |p|\;\ldots$ ...
I went back to the derivation of the Heisenberg uncertainty principle and tried to modify it. Not sure if what I've come up with is worth anything, but you'll be the judge. The original derivation: Let $\hat{A} = \hat{x} - \bar{x}$ and $\hat{B} = \hat{p} - \bar{p}$. Then the inner product of the state $| \phi\rangle = (\hat{A} + i \lambda \ldots$ ...
As in the link you give, the functional form depends on the probability distribution used, and these differ widely; nothing as general as the Heisenberg form can appear. The quantum mechanical equivalent requires the solution for the specific boundary problem. In any case, the HUP is about deltas, i.e. uncertainties, and not only standard deviations as ...
I) In this answer we will consider the microscopic description of classical E&M only. The Lorentz force reads $$\tag{1} {\bf F}~:=~q({\bf E}+{\bf v}\times {\bf B})~=~\frac{\mathrm d}{\mathrm dt}\frac{\partial U}{\partial {\bf v}}- \frac{\partial U}{\partial {\bf r}}~=~-q\frac{\mathrm d{\bf A}}{\mathrm dt} - \frac{\partial U}{\partial {\bf r}},$$ ...
# Understanding this explanation about Big O notation
I'm trying to learn the Big O Notation...and I got a bit confused by this article:
https://brilliant.org/practice/big-o-notation-2/?chapter=intro-to-algorithms&pane=1838
where it says that f(x) = 4x and g(x) = 10x, (...), and that one could look at the Big O notation by dividing f(x) by g(x): 10x/4x
Shouldn't it be 4x/10x instead in this very example? (since f(x) = 4x and g(x) = 10x) Or is it just me who got it all wrong?...
Kind regards,
c
The best way to look at big $O$ notation is the following: $f(x)$ and $g(x)$ have the same $O$ complexity if you can find positive constants $c_1, c_2 \in \mathbb{R}$ such that $f(x) \leq c_1 \cdot g(x)$ and $g(x) \leq c_2 \cdot f(x)$ for all $x$.
So, for example $4x$ and $10x$ are both in the same complexity class because $4x \leq 1 \cdot 10x$ and $10x \leq 3 \cdot 4x$. We name the complexity class they belong to $O(x)$, because $x$ is the simplest of all expressions of the form $c \cdot x$ so we use it as a representative.
Also, one can write equalities with big $O$; $$O(4x) = O(10x) = O(x) = O(192839182x) \neq O(x^2)$$
Some basic complexity classes are (in order of complexity, from lower to higher):
• $O(\log n)$
• $O(n)$
• $O(n \log n)$
• $O(n^2)$
• $\ldots$
• $O(n^k)$
• $\ldots$
• $O(2^n)$
• $\ldots$
and every complexity class on this list is not equal to any other.
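As a quick numerical check (a small R sketch, not part of the original answer), you can look at the ratio of two functions: it stays bounded by a constant when they have the same order, and grows without bound when they do not:

x <- 10^(1:6)        # 10, 100, ..., 1e6
(4 * x) / (10 * x)   # always 0.4, so 4x and 10x are within constant factors of each other
x^2 / x              # equals x, which grows without bound, so x^2 is not O(x)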
• This is basically right but please don't confuse complexity classes and orders of growth. A complexity class is a class of computational problems, based on some kind of resource usage; $O(...)$ is a class of mathematical functions. There are no complexity classes in your answer because you're talking only about the growth rate of mathematical functions. – David Richerby Aug 20 '18 at 22:46
• Not for all $x$, it is enough if it is valid for $x$ large enough (we are interested in the functions for "very large" values of $x$, for suitable "very large"). – vonbrand Mar 3 '20 at 16:17
# Simulation of pinned diffusion process
Suppose I have a stochastic differential equation (in the Ito sense): $$dX_t = \mu(X_t)\,dt + \sigma(X_t)\, dW_t$$ in $\mathbb{R}^n$, where I know that $X_0=a$ and $X_T=b$. In other words, the process has been "pinned" at fixed times $0$ and $T$.
I want to know how to simulate such an equation (i.e. produce trajectories numerically).
I've seen some questions ( 1, 2, 3, 4, 5 ) on the "Brownian Bridge", which is a special case of this.
Edit (081617): it appears that this process is also referred to as an Ito bridge or as a diffusion bridge. It turns out this is not as easy as I'd hoped. A promising paper (found with these better search terms) is Simulation of multivariate diffusion bridges by Bladt et al. Any help/suggestions are still appreciated!
• Have you tried Euler Maruyama method to simulate it ? Aug 16, 2017 at 19:32
• @Khosrotash How can I apply Euler-Maruyama to a pinned diffusion? Aug 16, 2017 at 19:41
• Do you mean $$@ t=0 \to x(0)=a \\@t=T \to x(T)=b$$ and $a,b$ are assumed ? Aug 16, 2017 at 19:44
• @Khosrotash Yes indeed, all are fixed or known in advance. Aug 16, 2017 at 19:52
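A minimal simulation sketch for the simplest special case, the one-dimensional Brownian bridge ($\mu = 0$, $\sigma = 1$), applies Euler-Maruyama to the bridge SDE $dX_t = \frac{b - X_t}{T - t}\,dt + dW_t$. The general multivariate diffusion bridge needs the more sophisticated methods mentioned above (e.g. Bladt et al.), so treat this R snippet purely as an illustration:

simulate_brownian_bridge <- function(a, b, t_end, n_steps) {
  dt <- t_end / n_steps
  t  <- seq(0, t_end, by = dt)
  x  <- numeric(n_steps + 1)
  x[1] <- a
  for (i in 1:n_steps) {
    drift <- (b - x[i]) / (t_end - t[i])  # bridge drift pulls the path toward b as t approaches t_end
    x[i + 1] <- x[i] + drift * dt + sqrt(dt) * rnorm(1)
  }
  x[n_steps + 1] <- b                     # pin the endpoint exactly
  x
}
path <- simulate_brownian_bridge(a = 0, b = 1, t_end = 1, n_steps = 1000)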
# Chapter 6 Visualizing data in R – An intro to ggplot
These notes accompany portions of Chapter 2 — Displaying Data — of our textbook, which we revisit in Section 9. The reading below is required; the textbook chapter is not.
Motivating scenarios: you have a fresh new data set and want to check it out. How do you go about looking into it?
Learning goals: By the end of this chapter you should be able to:
• Build a simple ggplot.
• Explain the idea of mapping data onto aesthetics, and the use of different geoms.
• Match common plots to common data type.
• Use geoms in ggplot to generate the common plots (above).
There is no external reading for this chapter, but watch the embedded videos, and complete all embedded learnR exercises. Then go to canvas to fill out the evaluation. You will need to make three very different types of plots from the mpg data.
## 6.1 A quick intro to data visualization.
Recall that as bio-statisticians, we bring data to bear on critical biological questions, and communicate these results to interested folks. A key component of this process is visualizing our data.
### 6.1.1 Exploratory and explanatory visualizations
We generally think of two extremes of the goals of data visualization
• In exploratory visualizations we aim to identify any interesting patterns in the data, we also conduct quality control to see if there are patterns indicating mistakes or biases in our data, and to think about appropriate transformations of data. On the whole, our goal in exploratory data analysis is to understand the stories in the data.
• In explanatory visualizations we aim to communicate our results to a broader audience. Here our goals are communication and persuasion. When developing explanatory plots we consider our audience (scientists? consumers? experts?) and how we are communicating (talk? website? paper?).
The ggplot2 package in R is well suited for both purposes. Today we focus on exploratory visualization in ggplot2 because
1. They are the starting point of all statistical analyses.
2. You can do them with less ggplot2 knowledge.
3. They take less time to make than explanatory plots.
Later in the term we will show how we can use ggplot2 to make high quality explanatory plots.
### 6.1.2 Centering plots on biology
Whether developing an exploratory or explanatory plot, you should think hard about the biology you hope to convey before jumping into a plot. Ask yourself
• What do you hope to learn from this plot?
• Which is the response variable (we usually place that on the y-axis)?
• Are data numeric or categorical?
• If they are categorical are they ordinal, and if so what order should they be in?
The answers to these questions should guide our data visualization strategy, as this is a key step in our statistical analysis of a dataset. The best plots should evoke an immediate understanding of the (potentially complex) data. Put another way, a plot should highlight both the biological question and its answer.
Before jumping into making a plot in R, it is often useful to take this step back, think about your main biological question, and take a pencil and paper to sketch some ideas and potential outcomes. I do this to prepare my mind to interpret different results, and to ensure that I'm using R to answer my questions, rather than getting sucked into so much R-ing that I forget why I even started. With this in mind, we're ready to get introduced to ggplotting!
### Remembering our set up from last chapter
library(tidyverse) # loaded in the last chapter; provides ggplot2 (which includes the msleep data) and dplyr
msleep <- msleep %>%
mutate(log10_brainwt = log10(brainwt),
log10_bodywt = log10(bodywt))
msleep_plot1 <- ggplot(data = msleep, aes(x = log10_brainwt)) # save plot
msleep_histogram <- msleep_plot1 +
geom_histogram(bins =10, color = "white")
## 6.2 Common types of plots
As we saw in the section, Centering plots on biology, we want our biological questions and the structure of the data to guide our plotting choices. So, before we get started on making plots, we should think about our data.
• What are the variable names?
• What are the types of variables?
• What are our motivating questions and how do the data map onto these questions?
• Etc…
Using the msleep data set below, we briefly work through a rough guide on how the structure of our data can translate into a plot style, and how we translate that into a geom in ggplot. So, as a first step, you should look at the data – either with the view() function or a quick glimpse() – and reflect on your questions before plotting. This also helps us remember the name and data type of each variable.
glimpse(msleep)
## Rows: 83
## Columns: 13
## $ name          <chr> "Cheetah", "Owl monkey", "Mountain beaver", "Greater short-tailed shrew", …
## $ genus         <chr> "Acinonyx", "Aotus", "Aplodontia", "Blarina", "Bos", "Bradypus", "Callorhi…
## $ vore          <chr> "carni", "omni", "herbi", "omni", "herbi", "herbi", "carni", NA, "carni", …
## $ order         <chr> "Carnivora", "Primates", "Rodentia", "Soricomorpha", "Artiodactyla", "Pilo…
## $ conservation  <chr> "lc", NA, "nt", "lc", "domesticated", NA, "vu", NA, "domesticated", "lc", …
## $ sleep_total   <dbl> 12.1, 17.0, 14.4, 14.9, 4.0, 14.4, 8.7, 7.0, 10.1, 3.0, 5.3, 9.4, 10.0, 12…
## $ sleep_rem     <dbl> NA, 1.8, 2.4, 2.3, 0.7, 2.2, 1.4, NA, 2.9, NA, 0.6, 0.8, 0.7, 1.5, 2.2, 2.…
## $ sleep_cycle   <dbl> NA, NA, NA, 0.1333333, 0.6666667, 0.7666667, 0.3833333, NA, 0.3333333, NA,…
## $ awake         <dbl> 11.90, 7.00, 9.60, 9.10, 20.00, 9.60, 15.30, 17.00, 13.90, 21.00, 18.70, 1…
## $ brainwt       <dbl> NA, 0.01550, NA, 0.00029, 0.42300, NA, NA, NA, 0.07000, 0.09820, 0.11500, …
## $ bodywt        <dbl> 50.000, 0.480, 1.350, 0.019, 600.000, 3.850, 20.490, 0.045, 14.000, 14.800…
## $ log10_brainwt <dbl> NA, -1.8096683, NA, -3.5376020, -0.3736596, NA, NA, NA, -1.1549020, -1.007…
## $ log10_bodywt  <dbl> 1.6989700, -0.3187588, 0.1303338, -1.7212464, 2.7781513, 0.5854607, 1.3115…
Now we’re nearly ready to get started, but first, some caveats
1. These are very preliminary exploratory plots – and you may need more advanced plotting R talents to make plots that better help you see patterns. We will cover these in Chapters YB ADD, where we focus on explanatory plots.
2. There are not always cookie cutter solutions, with more complex data you may need more complex visualizations.
That said, the simple visualization and R tricks we learn below are the essential building blocks of most data presentation. So, let’s get started!
There is a lot of stuff below. We will revisit all of it again and again over the term, so you don’t need to master it now – think of this as your first exposure. You’ll get more comfortable and this will become more natural over time.
### 6.2.1 One variable
With one variable, we use plots to visualize the relative frequency (on the y-axis) of the values it takes (on the x-axis).
gg-plotting one variable: We map our one variable of interest onto x with aes(x = <x_variable>), where we replace <x_variable> with our x variable. The mapping of frequency onto the y happens automatically.
#### One categorical variable
Say we wanted to know how many carnivores, herbivores, insectivores, and omnivores are in the msleep data set. From the output of the glimpse() function above, we know that vore is a categorical variable, so we want a simple bar plot, which we make with geom_bar().
ggplot(data = msleep, aes(x = vore)) +
geom_bar()
We can also pipe data into the ggplot() function after doing stuff to the data. For example, the code below removes NA values from our plot.
msleep %>%
filter(!is.na(vore)) %>%
ggplot(aes(x = vore)) +
geom_bar()
Suppose the same data were presented as one categorical variable, vore (with each vore appearing once), and another variable, n, for the counts.
count(msleep, vore)
## # A tibble: 5 x 2
##   vore        n
##   <chr>   <int>
## 1 carni      19
## 2 herbi      32
## 3 insecti     5
## 4 omni       20
## 5 NA          7
We could recreate Figure 6.1 with geom_col(), again mapping vore to the x aesthetic, and now mapping the count, n, to the y aesthetic, as follows:
count(msleep, vore) %>%
ggplot(aes(x = vore, y = n))+
geom_col()
#### One continuous variable
We are often interested to know how variable our data is, and to think about the shape of this variability. Revisiting our data on mammal sleep patterns, we might be interested to evaluate the variability in how long mammals sleep.
• Do all species sleep roughly the same amount?
• Is the data bimodal (with two humps)?
• Do some species sleep for an extraordinarily long or short amount of time?
We can look into this with a histogram or a density plot.
##### One continuous variable: A histogram
We use the histogram geom, geom_histogram(), to make a histogram in R.
ggplot(msleep, aes(x = log10_brainwt))+
geom_histogram(bins = 10, color = "white") # Bins tells R we want 10 bins, and color = white tells R we want white lines between our bins
## Warning: Removed 27 rows containing non-finite values (stat_bin).
In a histogram, each value on the x represents some interval of values of our continuous variable (in this case, we had 10 bins, but we could have, for example, looked at sleep in one-hour bins with binwidth = 1), while y-values show how many observations correspond to an interval on the x.
When making a histogram it is worth exploring numerous binwidths to ensure you’re not fooling yourself
##### One continuous variable: A density plot
We use the density geom, geom_density(), to make a density plot in R.
ggplot(msleep, aes(x = log10_brainwt))+
geom_density(fill = "blue")
Sometimes we prefer a smooth density plot to a histogram, as this can allow us to not get too distracted by a few bumps (on the other hand, we can also miss important variability, so be careful). We again map log10_brainwt onto the x aesthetic, but now use geom_density().
### 6.2.2 Two variables
With two variables, we want to highlight the association between them. In the plots below, we show that how this is presented can influence our biological interpretation and take-home messages.
#### Two categorical variables
With two categorical variables, we usually add color to a barplot to identify the second group. We can choose to stack the bars (a stacked barplot), place them side by side (a grouped barplot), or standardize each bar to the same height (a filled barplot).
Below, we’ll make one of each of these graphs to look at this for the association between mammal order and diet, limiting our view to orders with five or more species with data. Which of these you choose depends on the message, story and details. For example, a filled barplot is nice because we can see proportions, but a bummer because we don’t get to see counts. The book advocates for mosaic plots, which I really like but skip here because they are a bit esoteric. Look into the ggmosaic package, and its vignette if you want to make one.
First, we process our data, making use of the tricks we learned in Handling data in R. To do so, we filter() for not NA diets, add_count() to see how many species we have in each order, and filter() for orders with five or more species with diet data.
# Data processing
msleep_data_ordervore <- msleep %>%
filter(!is.na(vore)) %>% # Only cases with data for diet
add_count(order) %>% # Find counts for each order
filter(n >= 5) # Lets only hold on to orders with 5 or more species with data
##### Two categorical variables: A stacked bar plot
ggplot(data = msleep_data_ordervore, aes(x = order, fill= vore))+
geom_bar()
Stacked barplots are best suited for cases when we’re primarily interested in total counts (e.g. how many species do we have data for in each order), and less interested in comparing the categories going into these counts. Rarely is this the best choice, so don’t expect to make too many stacked barplots.
##### Two categorical variables: A grouped bar plot
ggplot(data = msleep_data_ordervore, aes(x = order, fill= vore))+
geom_bar(position = position_dodge(preserve = "single"))
Grouped barplots are best suited for cases when we’re primarily interested in comparing the categories going into these counts. This is often the best choice, as we get to see counts. However the total number in each group is harder to see in a grouped than a stacked barplot (e.g. it’s easy to see that we have the same number of primates and carnivores in Fig. 6.3, while this is harder to see in Fig. 6.4).
##### Two categorical variables: A filled bar plot
ggplot(data = msleep_data_ordervore, aes(x = order, fill= vore))+
geom_bar(position = "fill")
Filled barplots are much like stacked barplots standardized to the same height. In other words, they are like stacked bar plots without their greatest strength. This is rarely a good idea, except for cases with only two or three options for each of numerous categories.
#### 6.2.2.1 One categorical and one continuous variable.
##### One categorical and one continuous variable: Multiple histograms
A straightforward way to show the continuous values for different categories is to make a separate histogram for each category, using the geom_histogram() and facet_wrap() functions in ggplot.
msleep_data_ordervore_hist <- ggplot(msleep_data_ordervore, aes(x= log10_bodywt))+
geom_histogram(bins = 10)
msleep_data_ordervore_hist +
facet_wrap(~order, ncol = 1)
When doing this, be sure to keep visual comparisons simple by ensuring there's only one column. Note how Figure 6.6 makes it much easier to compare distributions than does Figure 6.7.
msleep_data_ordervore_hist +
facet_wrap(~order, nrow = 1)
##### One categorical and one continuous variable: Density plots
ggplot(msleep_data_ordervore, aes(x= bodywt, fill = order))+
geom_density(alpha = .3)+
scale_x_continuous(trans = "log10")
While many histograms can be nice, they can also take up a lot of space. Sometimes we can more succinctly show distributions for each group with overlaid density plots (geom_density()). While this can be succinct, it can also get too crammed, so have a look and see which display is best for your data and question.
##### One categorical and one continuous variable: Boxplots, jitterplots etc..
Histograms and density plots communicate the shapes of distributions, but we often hope to compare means and get a sense of variability.
• Boxplots (Figure 6.9A) summarize distributions by showing all quartiles – often showing outliers with points. e.g. msleep_data_ordervore %>% ggplot(aes(x = order, y = bodywt)) + geom_boxplot().
• Jitterplots (Figure 6.9B) show all data points, spreading them out over the x-axis. e.g. msleep_data_ordervore %>% ggplot(aes(x = order, y = bodywt)) + geom_jitter().
• We can combine both to get the best of both worlds (Figure 6.9C), e.g. msleep_data_ordervore %>% ggplot(aes(x = order, y = bodywt)) + geom_boxplot() + geom_jitter(); a complete example follows below.
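A complete version of the combined plot might look like the sketch below (the jitter width, transparency, and log axis are illustrative choices, not taken from the original figure):

msleep_data_ordervore %>%
  ggplot(aes(x = order, y = bodywt)) +
  geom_boxplot(outlier.shape = NA) +      # hide the boxplot's outlier points, since geom_jitter() plots every point anyway
  geom_jitter(width = 0.2, alpha = 0.5) +
  scale_y_continuous(trans = "log10")     # body weight spans several orders of magnitude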
#### 6.2.2.2 Two continuous variables
ggplot(msleep_data_ordervore, aes(x = log10_bodywt, y = log10_brainwt))+
geom_point()
With two continuous variables, we want a graph that visually displays the association between them. A scatterplot displays the explanatory variable on the x-axis, and the response variable on the y-axis. The scatterplot in Figure 6.10 shows a clear increase in brain size with body size across mammal species when both are on $$log_{10}$$ scales.
### 6.2.3 More dimensions
ggplot(msleep_data_ordervore,
aes(x = log10_bodywt, y = log10_brainwt, color = vore, shape = order))+
geom_point()
What if we wanted to see even more? Like let’s say we wanted to know if we found a similar relationship between brain weight and body weight across orders and/or if this relationship was mediated by diet. We can pack more info into these plots.
⚠️ Beware, sometimes shapes are hard to differentiate.⚠️ Facetting might make these patterns stand out.
ggplot(msleep_data_ordervore, aes(x = log10_bodywt, y = log10_brainwt, color = vore))+
geom_point()+
facet_wrap(~order, nrow = 1)
### 6.2.4 Interactive plots with the plotly package
Often when I get a fresh data set I want to know a bit more about the data points (to e.g. identify outliers or make sense of things). The plotly package is super useful for this, as it makes interactive graphs that we can explore.
# install.packages("plotly") first install plotly, if it's not installed yet
library(plotly) # now tell R you want to use plotly
# Click on the plot below to explore the data!
big_plot <- ggplot(msleep_data_ordervore,
aes(x = log10_bodywt, y = log10_brainwt,
color = vore, shape = order, label = name))+
geom_point()
ggplotly(big_plot)
#### Decoration vs information
ggplot(msleep_data_ordervore, aes(x = log10_bodywt, y = log10_brainwt))+
geom_point(color = "firebrick", size = 3, alpha = .5)
We have used the aes() argument to provide information. For example, in Figure 5.15 we used color to show diet by typing aes(…, color = vore). But what if we just want a fun color for the data points? We can do this by specifying color outside of the aes() argument. The same goes for other attributes, like size, or transparency (alpha).
## 6.3 ggplot Assignment
Watch the video about getting started with ggplot
Complete RStudio’s primer on data visualization basics.
Complete the glimpse intro (4.3.1) and the quiz.
Make three plots from the mpg data and describe the patterns they highlight.
Fill out the quiz on canvas, which is very similar to the one below.
## 6.4 ggplot2 review / reference
### 6.4.1 ggplot2: cheat sheet
There is no need to memorize anything, check out this handy cheat sheet!
#### 6.4.1.1 ggplot2: common functions, aesthetics, and geoms
##### The ggplot() function
• Takes arguments data = and mapping =.
• We usually leave these implied and type e.g. ggplot(my.data, aes(...)) rather than ggplot(data = my.data, mapping = aes(...)).
• We can pipe data into the ggplot() function, so my.data %>% ggplot(aes(…)) does the same thing as ggplot(my.data, aes(…)).
##### Arguments for aes() function
The aes() function takes many potential arguments each of which specifies the aesthetic we are mapping onto a variable:
###### x, y, and label:
• x: What is shown on the x-axis.
• y: What is shown on the y-axis.
• label: What is shown as text in the plot (when using geom_text(); see the sketch below).
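For example (a minimal sketch using the msleep columns introduced above; the choice of variables is just for illustration):

ggplot(msleep, aes(x = log10_bodywt, y = log10_brainwt, label = name)) +
  geom_text(size = 2) # species names are drawn as text instead of points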
##### Faceting
Faceting allows us to use the concept of small multiples to highlight patterns.
For one facetted variable: facet_wrap(~ <var>, ncol = )
For two facetted variables: facet_grid(<var1> ~ <var2>), where the first is shown by rows and the second by columns.
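For example (a sketch reusing the msleep_data_ordervore data from above):

ggplot(msleep_data_ordervore, aes(x = log10_bodywt, y = log10_brainwt)) +
  geom_point() +
  facet_wrap(~ vore, ncol = 2) # one facetted variable

ggplot(msleep_data_ordervore, aes(x = log10_bodywt, y = log10_brainwt)) +
  geom_point() +
  facet_grid(vore ~ order)     # vore by rows, order by columns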
Why must the matrices be positive semidefinite? What is the input authority cost? What is the purpose of multiplying the transpose then the positive semidefinite matrix then the matrix itself?
Why must the matrices be positive semidefinite?
If we only consider real numbers, the definition of a PSD matrix $$A\in\mathbb{R}^{n\times n}$$ is $$z^\top A z \ge 0$$ for all $$z \in \mathbb{R}^n$$.
By restricting ourselves to PSD matrices, we know that the loss $$J$$ must always be bounded below by 0 because the sum of non-negative numbers must be non-negative. In particular, this problem is minimizing a strongly convex quadratic so there must be a unique minimum. That's nice!
Now consider a matrix $$B$$ that does not have the PSD property. The quantity $$z^\top B z$$ could be positive, negative, or neither.
Your optimization procedure is minimizing the loss $$J$$. If your matrix is, for example, negative definite, then you could always improve the loss by making these quadratic forms arbitrarily negative, ever smaller. This is akin to minimizing a line with nonzero slope: there's no minimum to find!
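As a quick numerical illustration (a small R sketch, not from the original answer), compare the quadratic form of a positive definite matrix with that of an indefinite one:

A <- matrix(c(2, 1, 1, 2), nrow = 2)  # symmetric, eigenvalues 1 and 3, so positive definite
B <- matrix(c(1, 0, 0, -1), nrow = 2) # indefinite: eigenvalues 1 and -1
z <- c(1, -2)
t(z) %*% A %*% z  # returns 6; z^T A z >= 0 no matter which z you pick
t(z) %*% B %*% z  # returns -3; the quadratic form can be driven negative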
What is the input authority cost?
No idea. You'll have to read the slides, or the cited works, or contact the author.
What is the purpose of multiplying the transpose then the positive semidefinite matrix then the matrix itself?
This is called a quadratic form, and it shows up all over the place in math because of its role in defining PD and PSD matrices. What it means in the specific terms of this optimization depends on the context of the problem: where do these matrices come from, and what do they mean?
# NAG Library Routine Document
## 1Purpose
f16jtf (blas_zamin_val) computes, with respect to absolute value, the smallest component of a complex vector, along with the index of that component.
## 2Specification
Fortran Interface
Subroutine f16jtf (n, x, incx, k, r)
Integer, Intent (In) :: n, incx
Integer, Intent (Out) :: k
Real (Kind=nag_wp), Intent (Out) :: r
Complex (Kind=nag_wp), Intent (In) :: x(1+(n-1)*ABS(incx))
C Header Interface
#include <nagmk26.h>
void f16jtf_ (const Integer *n, const Complex x[], const Integer *incx, Integer *k, double *r)
The routine may be called by its BLAST name blas_zamin_val.
## 3Description
f16jtf (blas_zamin_val) computes, with respect to absolute value, the smallest component, $r$, of an $n$-element complex vector $x$, and determines the smallest index, $k$, such that
$$r = |\mathrm{Re}(x_k)| + |\mathrm{Im}(x_k)| = \min_j \left\{ |\mathrm{Re}(x_j)| + |\mathrm{Im}(x_j)| \right\}.$$
## 4References
Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001) Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard University of Tennessee, Knoxville, Tennessee http://www.netlib.org/blas/blast-forum/blas-report.pdf
## 5Arguments
1: $\mathbf{n}$ – IntegerInput
On entry: $n$, the number of elements in $x$.
2: $\mathbf{x}\left(1+\left({\mathbf{n}}-1\right)×\left|{\mathbf{incx}}\right|\right)$ – Complex (Kind=nag_wp) arrayInput
On entry: the $n$-element vector $x$.
If ${\mathbf{incx}}>0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(\left(\mathit{i}-1\right)×{\mathbf{incx}}+1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
If ${\mathbf{incx}}<0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(\left({\mathbf{n}}-\mathit{i}\right)×\left|{\mathbf{incx}}\right|+1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
Intermediate elements of x are not referenced. If ${\mathbf{n}}=0$, x is not referenced.
3: $\mathbf{incx}$ – IntegerInput
On entry: the increment in the subscripts of x between successive elements of $x$.
Constraint: ${\mathbf{incx}}\ne 0$.
4: $\mathbf{k}$ – IntegerOutput
On exit: $k$, the index, from the set $\left\{1,2,\dots ,{\mathbf{n}}\right\}$, of the smallest component of $x$ with respect to absolute value. If ${\mathbf{n}}\le 0$ on input then k is returned as $0$.
5: $\mathbf{r}$ – Real (Kind=nag_wp)Output
On exit: $r$, the smallest component of $x$ with respect to absolute value. If ${\mathbf{n}}\le 0$ on input then r is returned as $0.0$.
## 6Error Indicators and Warnings
If ${\mathbf{incx}}=0$, an error message is printed and program execution is terminated.
## 7Accuracy
The BLAS standard requires accurate implementations which avoid unnecessary over/underflow (see Section 2.7 of Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001)).
## 8Parallelism and Performance
f16jtf (blas_zamin_val) is not threaded in any implementation.
## 9Further Comments
None.
## 10Example
This example computes the smallest component with respect to absolute value and index of that component for the vector
$$x = \left(-4+2.1i,\; 3.7+4.5i,\; -6+1.2i\right)^T.$$
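For this vector, a short R sketch (not the NAG example program, just an illustration of the quantity the routine returns) gives $k=1$ and $r=6.1$:

x <- c(-4 + 2.1i, 3.7 + 4.5i, -6 + 1.2i)
vals <- abs(Re(x)) + abs(Im(x)) # 6.1, 8.2, 7.2
k <- which.min(vals)            # k = 1
r <- vals[k]                    # r = 6.1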
### 10.1Program Text
Program Text (f16jtfe.f90)
### 10.2Program Data
Program Data (f16jtfe.d)
### 10.3Program Results
Program Results (f16jtfe.r)
# Determining a δ
• Sep 4th 2009, 08:53 AM
Rker
Determining a δ
I have absolutely no idea how to solve these type of problems. My teacher gave a lecture about this subject two days ago, and I took a look at this stickied thread, but I'm still stuck. :s
In exercises 1–8, numerically and graphically determine a δ corresponding to (a) ε = 0.1 and (b) ε = 0.05. Graph the function in the ε-δ window [x-range is (a − δ, a + δ) and y-range is (L − ε, L + ε)] to verify that your choice works.
1.
$\lim_{x\to 0}(x^2 + 1) = 1$
In exercises 9–20, symbolically find δ in terms of ε.
15.
$\lim_{x\to 1}\frac{x^2 + x - 2}{x - 1} = 3$
52.
A fiberglass company ships its glass as spherical marbles. If the volume of each marble must be within ε of π/6, how close does the radius need to be to 1/2?
• Sep 4th 2009, 10:11 AM
VonNemo19
Quote:
Originally Posted by Rker
I have absolutely no idea how to solve these type of problems. My teacher gave a lecture about this subject two days ago, and I took a look at this stickied thread, but I'm still stuck. :s
In exercises 1–8, numerically and graphically determine a δ corresponding to (a) ε = 0.1 and (b) ε = 0.05. Graph the function in the ε-δ window [x-range is (a − δ, a + δ) and y-range is (L − ε, L + ε)] to verify that your choice works.
1.
$\lim_{x\to 0}(x^2 + 1) = 1$
In exercises 9–20, symbolically find δ in terms of ε.
15.
$\lim_{x\to 1}\frac{x^2 + x - 2}{x - 1} = 3$
52.
A fiberglass company ships its glass as spherical marbles. If the volume of each marble must be within ε of π/6, how close does the radius need to be to 1/2?
For 1.
You wish to show that
$\lim_{x\to0}(x^2+1)=1$.
To do this we must have
$|f(x)-L|<\epsilon$ whenever $|x-a|<\delta$.
So, given that $\epsilon=0.1$, we proceed:
$|(x^2+1)-1|<0.1$
$|x^2|<0.1$. Since $x^2\ge 0$ for all x,
$x^2<0.1$
Can you see how to find delta?
PS Finding a delta graphically is easy. Just draw the graph, then draw the lines $L+\epsilon$ and $L-\epsilon$. Where those lines intersect the graph, draw vertical lines down to the x-axis. The distance from $x=a$ to the closest of those vertical lines is $\delta$.
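For completeness (this step is not in the original reply, which leaves it as an exercise): from $x^2<\epsilon$ we get $|x|<\sqrt{\epsilon}$, so $\delta=\sqrt{\epsilon}$ works; for $\epsilon=0.1$ that is $\delta=\sqrt{0.1}\approx 0.316$.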
Record Details
Title:
Reply to “Comment on ‘Ultrafast terahertz-field-driven ionic response in ferroelectric $BaTiO_{3}$' ”
Affiliation(s):
EuXFEL staff, Other
Author group:
Instrument FXE
Abstract:
In this reply to S. Durbin’s comment on our original paper “Ultrafast terahertz-field-driven ionic response in ferroelectric $BaTiO_{3}$,” we concur that his final equations 8 and 9 more accurately describe the change in diffracted intensity as a function of Ti displacement. We also provide an alternative derivation based on an ensemble average over unit cells. The conclusions of the paper are unaffected by this correction.
Imprint:
American Physical Society, 2018
Journal Information:
Physical Review B, 97 (22), 226102 (2018)
Language(s):
English
# Start
## MPG.PuRe
This is the publication repository of the Max Planck Society.
It contains bibliographic data and numerous fulltexts of the publications of its researchers.
The repository is based on PubMan, a publication repository software developed by the Max Planck Digital Library.
Currently we are working on the migration of the data base of the predecessor system eDoc into this repository.
### Search for publications here
... or browse through different categories.
## Tools and Interfaces
#### Search and Export
Do you want to integrate your PubMan Data within an external system?
Necessary queries can be carried out via our REST-Interface!
#### Control of Named Entities (CoNE)
Search and administrate controlled vocabularies for persons, journals, classifications or languages.
## Most Recently Released Items
Duvigneau, Stefanie; Kettner, Alexander; Carius, Lisa; Griehl, Carola ...
-
2021-06-17
Renn, Jürgen
-
2021-06-17
Crisp, Tyrone; Meir, Ehud; Onn, Uri
-
2021-06-17
We construct, for any finite commutative ring $R$, a family of representations of the general linear group $\mathrm{GL}_n(R)$ whose intertwining ...
Shen, Yubin
-
2021-06-17
## The Arctic Has Barfed
I was scanning my blog stats the other day – partly to see if people were reading my new post on the Blue Mountains bushfires, partly because I just like graphs – when I noticed that an article I wrote nearly two years ago was suddenly getting more views than ever before:
The article in question highlights the scientific inaccuracies of the 2004 film The Day After Tomorrow, in which global warming leads to a new ice age. Now that I’ve taken more courses in thermodynamics I could definitely expand on the original post if I had the time and inclination to watch the film again…
I did a bit more digging in my stats and discovered that most viewers are reaching this article through Google searches such as “is the day after tomorrow true”, “is the day after tomorrow likely to happen”, and “movie review of a day after tomorrow if it is possible or impossible.” The answers are no, no, and impossible, respectively.
But why the sudden surge in interest? I think it is probably related to the record cold temperatures across much of the United States, an event which media outlets have dubbed the “polar vortex”. I prefer “Arctic barf”.
Part of the extremely cold air mass which covers the Arctic has essentially detached and spilled southward over North America. In other words, the Arctic has barfed on the USA. Less sexy terminology than “polar vortex”, perhaps, but I would argue it is more enlightening.
Greg Laden also has a good explanation:
The Polar Vortex, a huge system of swirling air that normally contains the polar cold air has shifted so it is not sitting right on the pole as it usually does. We are not seeing an expansion of cold, an ice age, or an anti-global warming phenomenon. We are seeing the usual cold polar air taking an excursion.
Note that other regions such as Alaska and much of Europe are currently experiencing unusually warm winter weather. On balance, the planet isn’t any colder than normal. The cold patches are just moving around in an unusual way.
Having grown up in the Canadian Prairies, where we experience daily lows below -30°C for at least a few days each year (and for nearly a month straight so far this winter), I can’t say I have a lot of sympathy. Or maybe I’m just bitter because I never got a day off school due to the cold? But seriously, nothing has to shut down if you plug in the cars at night and bundle up like an astronaut. We’ve been doing it for years.
## A Simple Stochastic Climate Model: Climate Sensitivity
Last time I derived the following ODE for temperature T at time t:

$\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}$

where S and τ are constants, and F(t) is the net radiative forcing at time t. Eventually I will discuss each of these terms in detail; this post will focus on S.
At equilibrium, when dT/dt = 0, the ODE necessitates T(t) = S F(t). A physical interpretation for S becomes apparent: it measures the equilibrium change in temperature per unit forcing, also known as climate sensitivity.
A great deal of research has been conducted with the aim of quantifying climate sensitivity, through paleoclimate analyses, modelling experiments, and instrumental data. Overall, these assessments show that climate sensitivity is on the order of 3 K per doubling of CO2 (divide by 5.35 ln 2 W/m2 to convert to warming per unit forcing).
The IPCC AR4 report (note that AR5 was not yet published at the time of my calculations) compared many different probability distribution functions (PDFs) of climate sensitivity, shown below. They follow the same general shape of a shifted distribution with a long tail to the right, and average 5-95% confidence intervals of around 1.5 to 7 K per doubling of CO2.
Box 10.2, Figure 1 of the IPCC AR4 WG1: Probability distribution functions of climate sensitivity (a), 5-95% confidence intervals (b).
These PDFs generally consist of discrete data points that are not publicly available. Consequently, sampling from any existing PDF would be difficult. Instead, I chose to create my own PDF of climate sensitivity, modelled as a log-normal distribution (e raised to the power of a normal distribution) with the same shape and bounds as the existing datasets.
The challenge was to find values for μ and σ, the mean and standard deviation of the corresponding normal distribution, such that for any z sampled from the log-normal distribution,

$P(z \leq 1.5) = 0.05 \quad \text{and} \quad P(z \leq 7) = 0.95,$

matching the average 5-95% confidence interval of roughly 1.5 to 7 K per doubling of CO2 described above. Since erf, the error function, cannot be evaluated analytically, this two-parameter problem must be solved numerically. I built a simple particle swarm optimizer to find the solution, which consistently yielded results of μ = 1.1757, σ = 0.4683.
The upper tail of a log-normal distribution is unbounded, so I truncated the distribution at 10 K, consistent with existing PDFs (see figure above). At the beginning of each simulation, climate sensitivity in my model is sampled from this distribution and held fixed for the entire run. A histogram of 106 sampled points, shown below, has the desired characteristics.
Histogram of 106 points sampled from the log-normal distribution used for climate sensitivity in the model.
Note that in order to be used in the ODE, the sampled points must then be converted to units of Km2/W (warming per unit forcing) by dividing by 5.35 ln 2 W/m2, the forcing from doubled CO2.
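A minimal R sketch of this sampling step (the model itself is written in Matlab; the parameter values below are the ones quoted above, and rejecting samples above 10 K implements the truncation described in the text):

mu    <- 1.1757
sigma <- 0.4683
sample_sensitivity <- function() {
  repeat {
    z <- rlnorm(1, meanlog = mu, sdlog = sigma) # climate sensitivity in K per doubling of CO2
    if (z <= 10) break                          # truncate the unbounded upper tail at 10 K
  }
  z / (5.35 * log(2))                           # convert to K m^2/W, i.e. warming per unit forcing
}
S_samples <- replicate(1e4, sample_sensitivity())
hist(S_samples) # roughly reproduces the shape of the histogram above, in converted units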
## Bits and Pieces
Now that the academic summer is over, I have left Australia and returned home to Canada. It is great to be with my friends and family again, but I really miss the ocean and the giant monster bats. Not to mention the lab: after four months as a proper scientist, it’s very hard to be an undergrad again.
While I continue to settle in, move to a new apartment, and recover from jet lag (which is way worse in this direction!), here are a few pieces of reading to tide you over:
Scott Johnson from Ars Technica wrote a fabulous piece about climate modelling, and the process by which scientists build and test new components. The article is accurate and compelling, and features interviews with two of my former supervisors (Steve Easterbrook and Andrew Weaver) and lots of other great communicators (Gavin Schmidt and Richard Alley, to name a few).
I have just started reading A Short History of Nearly Everything by Bill Bryson. So far, it is one of the best pieces of science writing I have ever read. As well as being funny and easy to understand, it makes me excited about areas of science I haven’t studied since high school.
Finally, my third and final paper from last summer in Victoria was published in the August edition of Journal of Climate. The full text (subscription required) is available here. It is a companion paper to our recent Climate of the Past study, and compares the projections of EMICs (Earth System Models of Intermediate Complexity) when forced with different RCP scenarios. In a nutshell, we found that even after anthropogenic emissions fall to zero, it takes a very long time for CO2 concentrations to recover, even longer for global temperatures to start falling, and longer still for sea level rise (caused by thermal expansion alone, i.e. neglecting the melting of ice sheets) to stabilize, let alone reverse.
## A Simple Stochastic Climate Model: Deriving the Backbone
Last time I introduced the concept of a simple climate model which uses stochastic techniques to simulate uncertainty in our knowledge of the climate system. Here I will derive the backbone of this model, an ODE describing the response of global temperature to net radiative forcing. This derivation is based on unpublished work by Nathan Urban – many thanks!
In reality, the climate system should be modelled not as a single ODE, but as a coupled system of hundreds of PDEs in four dimensions. Such a task is about as arduous as numerical science can get, but dozens of research groups around the world have built GCMs (General Circulation Models, or Global Climate Models, depending on who you talk to) which come quite close to this ideal.
Each GCM has taken hundreds of person-years to develop, and I only had eight weeks. So for the purposes of this project, I treat the Earth as a spatially uniform body with a single temperature. This is clearly a huge simplification but I decided it was necessary.
Let’s start by defining T1(t) to be the absolute temperature of this spatially uniform Earth at time t, and let its heat capacity be C. Therefore,
$C \: T_1(t) = E$
where E is the change in energy required to warm the Earth from 0 K to temperature T1. Taking the time derivative of both sides,
$C \: \frac{dT_1}{dt} = \frac{dE}{dt}$
Now, divide through by A, the surface area of the Earth:
$c \: \frac{dT_1}{dt} = \frac{1}{A} \frac{dE}{dt}$
where c = C/A is the heat capacity per unit area. Note that the right side of the equation, a change in energy per unit time per unit area, has units of W/m2. We can express this as the difference of incoming and outgoing radiative fluxes, I(t) and O(t) respectively:
$c \: \frac{dT_1}{dt} = I(t)- O(t)$
By the Stefan-Boltzmann Law,
$c \: \frac{dT_1}{dt} = I(t) - \epsilon \sigma T_1(t)^4$
where ϵ is the emissivity of the Earth and σ is the Stefan-Boltzmann constant.
To consider the effect of a change in temperature, suppose that T1(t) = T0 + T(t), where T0 is an initial equilibrium temperature and T(t) is a temperature anomaly. Substituting into the equation,
$c \: \frac{d(T_0 + T(t))}{dt} = I(t) - \epsilon \sigma (T_0 + T(t))^4$
Noting that T0 is a constant, and also factoring the right side,
$c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + \tfrac{T(t)}{T_0})^4$
Since the absolute temperature of the Earth is around 280 K, and we are interested in perturbations of around 5 K, we can assume that T(t)/T0 ≪ 1. So we can linearize (1 + T(t)/T0)4 using a Taylor expansion about T(t) = 0:
$c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0} + O[(\tfrac{T(t)}{T_0})^2])$
$\approx I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0})$
$= I(t) - \epsilon \sigma T_0^4 - 4 \epsilon \sigma T_0^3 T(t)$
Next, let O0 = ϵσT04 be the initial outgoing flux. So,
$c \: \frac{dT}{dt} = I(t) - O_0 - 4 \epsilon \sigma T_0^3 T(t)$
Let F(t) = I(t) – O0 be the radiative forcing at time t. Making this substitution as well as dividing by c, we have
$\frac{dT}{dt} = \frac{F(t) - 4 \epsilon \sigma T_0^3 T(t)}{c}$
Dividing each term by 4ϵσT03 and rearranging the numerator,
$\frac{dT}{dt} = - \frac{T(t) - \tfrac{1}{4 \epsilon \sigma T_0^3} F(t)}{\tfrac{c}{4 \epsilon \sigma T_0^3}}$
Finally, let S = 1/(4ϵσT03) and τ = cS. Our final equation is
$\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}$
While S depends on the initial temperature T0, all of the model runs for this project begin in the preindustrial period when global temperature is approximately constant. Therefore, we can treat S as a parameter independent of initial conditions. As I will show in the next post, the uncertainty in S based on climate system dynamics far overwhelms any error we might introduce by disregarding T0.
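To make the behaviour of this equation concrete, here is a small R sketch (not the author's Matlab code) that integrates it with forward Euler for a constant forcing switched on at t = 0; the parameter values are purely illustrative:

S       <- 0.8  # climate sensitivity in K m^2/W (roughly 3 K per doubling of CO2)
tau     <- 30   # response timescale in years (illustrative)
F_const <- 3.7  # constant forcing in W/m^2 (roughly doubled CO2)
dt      <- 0.1  # time step in years
t       <- seq(0, 200, by = dt)
T_anom  <- numeric(length(t)) # temperature anomaly, starting at 0
for (i in seq_along(t)[-1]) {
  dTdt <- -(T_anom[i - 1] - S * F_const) / tau
  T_anom[i] <- T_anom[i - 1] + dTdt * dt
}
plot(t, T_anom, type = "l") # relaxes exponentially toward the equilibrium S * F_const (about 3 K)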
## A Simple Stochastic Climate Model: Introduction
This winter I took a course in computational physics, which has probably been my favourite undergraduate course to date. Essentially it was an advanced numerical methods course, but from a very practical point of view. We got a lot of practice using numerical techniques to solve realistic problems, rather than just analysing error estimates and proving conditions of convergence. As a math student I found this refreshing, and incredibly useful for my research career.
We all had to complete a term project of our choice, and I decided to build a small climate model. I was particularly interested in the stochastic techniques taught in the course, and given that modern GCMs and EMICs are almost entirely deterministic, it was possible that I could contribute something original to the field.
The basic premise of my model is this: All anthropogenic forcings are deterministic, and chosen by the user. Everything else is determined stochastically: parameters such as climate sensitivity are sampled from probability distributions, whereas natural forcings are randomly generated but follow the same general pattern that exists in observations. The idea is to run this model with the same anthropogenic input hundreds of times and build up a probability distribution of future temperature trajectories. The spread in possible scenarios is entirely due to uncertainty in the natural processes involved.
This approach mimics the real world, because the only part of the climate system we have full control over is our own actions. Other influences on climate are out of our control, sometimes poorly understood, and often unpredictable. It is just begging to be modelled as a stochastic system. (Not that it is actually stochastic, of course; in fact, I understand that nothing is truly stochastic, even random number generators – unless you can find a counterexample using quantum mechanics? But that’s a discussion for another time.)
A word of caution: I built this model in about eight weeks. As such, it is highly simplified and leaves out a lot of processes. You should never ever use it for real climate projections. This project is purely an exercise in numerical methods, and an exploration of the possible role of stochastic techniques in climate modelling.
Over the coming weeks, I will write a series of posts that explains each component of my simple stochastic climate model in detail. I will show the results from some sample simulations, and discuss how one might apply these stochastic techniques to existing GCMs. I also plan to make the code available to anyone who’s interested – it’s written in Matlab, although I might translate it to a free language like Python, partly because I need an excuse to finally learn Python.
I am very excited to finally share this project with you all! Check back soon for the next installment.
## Climate change and the jet stream
Here in the northern mid-latitudes (much of Canada and the US, Europe, and the northern half of Asia) our weather is governed by the jet stream. This high-altitude wind current, flowing rapidly from west to east, separates cold Arctic air (to the north) from warmer temperate air (to the south). So on a given day, if you’re north of the jet stream, the weather will probably be cold; if you’re to the south, it will probably be warm; and if the jet stream is passing over you, you’re likely to get rain or snow.
The jet stream isn’t straight, though; it’s rather wavy in the north-south direction, with peaks and troughs. So it’s entirely possible for Calgary to experience a cold spell (sitting in a trough of the jet stream) while Winnipeg, almost directly to the east, has a heat wave (sitting in a peak). The farther north and south these peaks and troughs extend, the more extreme these temperature anomalies tend to be.
Sometimes a large peak or trough will hang around for weeks on end, held in place by certain air pressure patterns. This phenomenon is known as “blocking”, and is often associated with extreme weather. For example, the 2010 heat wave in Russia coincided with a large, stationary, long-lived peak in the polar jet stream. Wildfires, heat stroke, and crop failure ensued. Not a pretty picture.
As climate change adds more energy to the atmosphere, it would be naive to expect all the wind currents to stay exactly the same. Predicting the changes is a complicated business, but a recent study by Jennifer Francis and Stephen Vavrus made headway on the polar jet stream. Using North American and North Atlantic atmospheric reanalyses (models forced with observations rather than a spin-up) from 1979-2010, they found that Arctic amplification – the faster rate at which the Arctic warms, compared to the rest of the world – makes the jet stream slower and wavier. As a result, blocking events become more likely.
Arctic amplification occurs because of the ice-albedo effect: there is more snow and ice available in the Arctic to melt and decrease the albedo of the region. (Faster-than-average warming is not seen in much of Antarctica, because a great deal of thermal inertia is provided to the continent in the form of strong circumpolar wind and ocean currents.) This amplification is particularly strong in autumn and winter.
Now, remembering that atmospheric pressure is directly related to temperature, and pressure decreases with height, warming a region will increase the height at which pressure falls to 500 hPa. (That is, it will raise the 500 hPa “ceiling”.) Below that, the 1000 hPa ceiling doesn’t rise very much, because surface pressure doesn’t usually go much above 1000 hPa anyway. So in total, the vertical portion of the atmosphere that falls between 1000 and 500 hPa becomes thicker as a result of warming.
Since the Arctic is warming faster than the midlatitudes to the south, the temperature difference between these two regions is smaller. Therefore, the difference in 1000-500 hPa thickness is also smaller. Running through a lot of complicated physics equations, this has two main effects:
1. Winds in the east-west direction (including the jet stream) travel more slowly.
2. Peaks of the jet stream are pulled farther north, making the current wavier.
Also, both of these effects reinforce each other: slow jet streams tend to be wavier, and wavy jet streams tend to travel more slowly. The correlation between relative 1000-500 hPa thickness and these two effects is not statistically significant in spring, but it is in the other three seasons. Also, melting sea ice and declining snow cover on land are well correlated to relative 1000-500 hPa thickness, which makes sense because these changes are the drivers of Arctic amplification.
Consequently, there is now data to back up the hypothesis that climate change is causing more extreme fall and winter weather in the mid-latitudes, and in both directions: unusual cold as well as unusual heat. Saying that global warming can cause regional cold spells is not a nefarious move by climate scientists in an attempt to make every possible outcome support their theory, as some paranoid pundits have claimed. Rather, it is another step in our understanding of a complex, non-linear system with high regional variability.
Many recent events, such as record snowfalls in the US during the winters of 2009-10 and 2010-11, are consistent with this mechanism – they occurred during blocking episodes in the jet stream, at times when Arctic amplification was particularly strong. They may or may not have happened anyway if climate change weren't in the picture. However, if this hypothesis endures, we can expect more extreme weather from all sides – hotter, colder, wetter, drier – as climate change continues. Don't throw away your snow shovels just yet.
## Climate Change and Atlantic Circulation
Today my very first scientific publication is appearing in Geophysical Research Letters. During my summer at UVic, I helped out with a model intercomparison project regarding the effect of climate change on Atlantic circulation, and was listed as a coauthor on the resulting paper. I suppose I am a proper scientist now, rather than just a scientist larva.
The Atlantic meridional overturning circulation (AMOC for short) is an integral part of the global ocean conveyor belt. In the North Atlantic, a massive amount of water near the surface, cooling down on its way to the poles, becomes dense enough to sink. From there it goes on a thousand-year journey around the world – inching its way along the bottom of the ocean, looping around Antarctica – before finally warming up enough to rise back to the surface. A whole multitude of currents depend on the AMOC, most famously the Gulf Stream, which keeps Europe pleasantly warm.
Some have hypothesized that climate change might shut down the AMOC: the extra heat and freshwater (from melting ice) coming into the North Atlantic could conceivably lower the density of surface water enough to stop it sinking. This happened as the world was coming out of the last ice age, in an event known as the Younger Dryas: a huge ice sheet over North America suddenly gave way, drained into the North Atlantic, and shut down the AMOC. Europe, cut off from the Gulf Stream and at the mercy of the ice-albedo feedback, experienced another thousand years of glacial conditions.
A shutdown today would not lead to another ice age, but it could cause some serious regional cooling over Europe, among other impacts that we don’t fully understand. Today, though, there’s a lot less ice to start with. Could the AMOC still shut down? If not, how much will it weaken due to climate change? So far, scientists have answered these two questions with “probably not” and “something like 25%” respectively. In this study, we analysed 30 climate models (25 complex CMIP5 models, and 5 smaller, less complex EMICs) and came up with basically the same answer. It’s important to note that none of the models include dynamic ice sheets (computational glacial dynamics is a headache and a half), which might affect our results.
Models ran the four standard RCP experiments from 2006-2100. Not every model completed every RCP, and some extended their simulations to 2300 or 3000. In total, there were over 30 000 model years of data. We measured the “strength” of the AMOC using the standard unit Sv (Sverdrups), where each Sv is 1 million cubic metres of water per second.
Only two models simulated an AMOC collapse, and only at the tail end of the most extreme scenario (RCP8.5, which quite frankly gives me a stomachache). Bern3D, an EMIC from Switzerland, showed a MOC strength of essentially zero by the year 3000; CNRM-CM5, a GCM from France, stabilized near zero by 2300. In general, the models showed only a moderate weakening of the AMOC by 2100, with best estimates ranging from a 22% drop for RCP2.6 to a 40% drop for RCP8.5 (with respect to preindustrial conditions).
Are these somewhat-reassuring results trustworthy? Or is the Atlantic circulation in today’s climate models intrinsically too stable? Our model intercomparison also addressed that question, using a neat little scalar metric known as Fov: the net amount of freshwater travelling from the AMOC to the South Atlantic.
The current thinking in physical oceanography is that the AMOC is more or less binary – it’s either “on” or “off”. When AMOC strength is below a certain level (let’s call it A), its only stable state is “off”, and the strength will converge to zero as the currents shut down. When AMOC strength is above some other level (let’s call it B), its only stable state is “on”, and if you were to artificially shut it off, it would bounce right back up to its original level. However, when AMOC strength is between A and B, both conditions can be stable, so whether it’s on or off depends on where it started. This phenomenon is known as hysteresis, and is found in many systems in nature.
This figure was not part of the paper. I made it just now in MS Paint.
Here’s the key part: when AMOC strength is less than A or greater than B, Fov is positive and the system is monostable. When AMOC strength is between A and B, Fov is negative and the system is bistable. The physical justification for Fov is its association with the salt advection feedback, the sign of which is opposite Fov: positive Fov means the salt advection feedback is negative (i.e. stabilizing the current state, so monostable); a negative Fov means the salt advection feedback is positive (i.e. reinforcing changes in either direction, so bistable).
Most observational estimates (largely ocean reanalyses) have Fov as slightly negative. If models' AMOCs really were too stable, their Fov's should be positive. In our intercomparison, we found both positives and negatives – the models were kind of all over the place with respect to Fov. So maybe some models are overly stable, but certainly not all of them, or even the majority.
As part of this project, I got to write a new section of code for the UVic model, which calculated Fov each timestep and included the annual mean in the model output. Software development on a large, established project with many contributors can be tricky, and the process involved a great deal of head-scratching, but it was a lot of fun. Programming is so satisfying.
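For the curious, here is roughly what that calculation does – a schematic Python/NumPy discretisation sketched from the standard definition of Fov, not the actual code that went into the UVic model:

```python
import numpy as np

S0 = 35.0  # reference salinity (psu) -- an assumed, conventional value

def fov(v, S, dx, dz):
    """Schematic Fov across one latitude section (e.g. the southern
    boundary of the Atlantic).

    v  : 2-D array (depth x longitude) of meridional velocity, m/s
    S  : 2-D array (depth x longitude) of salinity, psu
    dx : zonal grid spacing, m
    dz : layer thickness, m (assumed uniform here)

    Returns the overturning freshwater transport in Sv.
    """
    v_zonal = v.sum(axis=1) * dx        # zonally integrated velocity per level, m^2/s
    v_star = v_zonal - v_zonal.mean()   # remove the barotropic (section-mean) part
    S_mean = S.mean(axis=1)             # zonal-mean salinity per level
    fw = -(1.0 / S0) * np.sum(v_star * (S_mean - S0) * dz)   # m^3/s
    return fw / 1.0e6                   # 1 Sv = 10^6 m^3/s
```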
Beyond that, my main contribution to the project was creating the figures and calculating the multi-model statistics, which got a bit unwieldy as the model count approached 30, but we made it work. I am now extremely well-versed in IDL graphics keywords, which I’m sure will come in handy again. Unfortunately I don’t think I can reproduce any figures here, as the paper’s not open-access.
I was pretty paranoid while coding and doing calculations, though – I kept worrying that I would make a mistake, never catch it, and have it dredged out by contrarians a decade later (“Kate-gate”, they would call it). As a climate scientist, I suppose that comes with the job these days. But I can live with it, because this stuff is just so darned interesting.
## Permafrost Projections
During my summer at UVic, two PhD students at the lab (Andrew MacDougall and Chris Avis) as well as my supervisor (Andrew Weaver) wrote a paper modelling the permafrost carbon feedback, which was recently published in Nature Geoscience. I read a draft version of this paper several months ago, and am very excited to finally share it here.
Studying the permafrost carbon feedback is at once exciting (because it has been left out of climate models for so long) and terrifying (because it has the potential to be a real game-changer). There is about twice as much carbon frozen into permafrost as there is floating around in the entire atmosphere. As high CO2 levels cause the world to warm, some of the permafrost will thaw and release this carbon as more CO2 – causing more warming, and so on. Previous climate model simulations involving permafrost have measured the CO2 released during thaw, but haven't actually applied it to the atmosphere and allowed it to change the climate. This UVic study is the first to close that feedback loop (in climate model speak we call this "fully coupled").
The permafrost part of the land component was already in place – it was developed for Chris's PhD thesis, and implemented in a previous paper. It involved converting the existing single-layer soil model to a multi-layer model where some layers can be frozen year-round. Also, instead of the four RCP scenarios, the authors used DEPs (Diagnosed Emission Pathways): exactly the same as the RCPs, except that CO2 emissions, rather than concentrations, are given to the model as input. This was necessary so that the extra emissions from permafrost thaw would feed into the CO2 concentrations the model calculates as it runs.
As a result, permafrost added an extra 44, 104, 185, and 279 ppm of CO2 to the atmosphere for DEP 2.6, 4.5, 6.0, and 8.5 respectively. However, the extra warming by 2100 was about the same for each DEP, with central estimates around 0.25 °C. Interestingly, the logarithmic effect of CO2 on climate (adding 10 ppm to the atmosphere causes more warming when the background concentration is 300 ppm than when it is 400 ppm) managed to cancel out the increasing amounts of permafrost thaw. By 2300 the extra warming was more variable, ranging from 0.13 to 1.69 °C once the full uncertainty ranges were taken into account. Altering climate sensitivity (by means of an artificial feedback), in particular, had a large effect.
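To see that logarithmic effect in numbers – a back-of-the-envelope illustration using the standard simplified forcing formula $\Delta F = 5.35 \ln(C/C_0)\ \mathrm{W\,m^{-2}}$, not a calculation from the paper – the same 10 ppm is worth noticeably less at a higher background concentration:

$$\Delta F_{300\to 310} = 5.35\ln\tfrac{310}{300} \approx 0.18\ \mathrm{W\,m^{-2}}, \qquad \Delta F_{400\to 410} = 5.35\ln\tfrac{410}{400} \approx 0.13\ \mathrm{W\,m^{-2}}$$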
As a result of the thawing permafrost, the land switched from a carbon sink (net CO2 absorber) to a carbon source (net CO2 emitter) decades earlier than it would have otherwise – before 2100 for every DEP. The ocean kept absorbing carbon, but in some scenarios the carbon source of the land outweighed the carbon sink of the ocean. That is, even without human emissions, the land was emitting more CO2 than the ocean could soak up. Concentrations kept climbing indefinitely, even if human emissions suddenly dropped to zero. This is the part of the paper that made me want to hide under my desk.
This scenario wasn’t too hard to reach, either – if climate sensitivity was greater than 3°C warming per doubling of CO2 (about a 50% chance, as 3°C is the median estimate by scientists today), and people followed DEP 8.5 to at least 2013 before stopping all emissions (a very intense scenario, but I wouldn’t underestimate our ability to dig up fossil fuels and burn them really fast), permafrost thaw ensured that CO2 concentrations kept rising on their own in a self-sustaining loop. The scenarios didn’t run past 2300, but I’m sure that if you left it long enough the ocean would eventually win and CO2 would start to fall. The ocean always wins in the end, but things can be pretty nasty until then.
As if that weren’t enough, the paper goes on to list a whole bunch of reasons why their values are likely underestimates. For example, they assumed that all emissions from permafrost were CO2, rather than the much stronger CH4 which is easily produced in oxygen-depleted soil; the UVic model is also known to underestimate Arctic amplification of climate change (how much faster the Arctic warms than the rest of the planet). Most of the uncertainties – and there are many – are in the direction we don’t want, suggesting that the problem will be worse than what we see in the model.
This paper went in my mental “oh shit” folder, because it made me realize that we are starting to lose control over the climate system. No matter what path we follow – even if we manage slightly negative emissions, i.e. artificially removing CO2 from the atmosphere – this model suggests we’ve got an extra 0.25°C in the pipeline due to permafrost. It doesn’t sound like much, but add that to the 0.8°C we’ve already seen, and take technological inertia into account (it’s simply not feasible to stop all emissions overnight), and we’re coming perilously close to the big nonlinearity (i.e. tipping point) that many argue is between 1.5 and 2°C. Take political inertia into account (most governments are nowhere near even creating a plan to reduce emissions), and we’ve long passed it.
Just because we’re probably going to miss the the first tipping point, though, doesn’t mean we should throw up our hands and give up. 2°C is bad, but 5°C is awful, and 10°C is unthinkable. The situation can always get worse if we let it, and how irresponsible would it be if we did?
## Modelling Geoengineering, Part II
Near the end of my summer at the UVic Climate Lab, all the scientists seemed to go on vacation at the same time and us summer students were left to our own devices. I was instructed to teach Jeremy, Andrew Weaver’s other summer student, how to use the UVic climate model – he had been working with weather station data for most of the summer, but was interested in Earth system modelling too.
Jeremy caught on quickly to the basics of configuration and I/O, and after only a day or two, we wanted to do something more exciting than the standard test simulations. Remembering an old post I wrote, I dug up this paper (open access) by Damon Matthews and Ken Caldeira, which modelled geoengineering by reducing incoming solar radiation uniformly across the globe. We decided to replicate their method on the newest version of the UVic ESCM, using the four RCP scenarios in place of the old A2 scenario. We only took CO2 forcing into account, though: other greenhouse gases would have been easy enough to add in, but sulphate aerosols are spatially heterogeneous and would complicate the algorithm substantially.
Since we were interested in the carbon cycle response to geoengineering, we wanted to prescribe CO2 emissions, rather than concentrations. However, the RCP scenarios prescribe concentrations, so we had to run the model with each concentration trajectory and find the equivalent emissions timeseries. Since the UVic model includes a reasonably complete carbon cycle, it can “diagnose” emissions by calculating the change in atmospheric carbon, subtracting contributions from land and ocean CO2 fluxes, and assigning the residual to anthropogenic sources.
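In pseudocode terms, the diagnosis step is just a budget calculation. Here is a rough Python sketch of the idea (an illustration with made-up variable names, not the UVic model's actual code):

```python
def diagnosed_emissions(c_atm, land_uptake, ocean_uptake):
    """Diagnose the anthropogenic emissions consistent with a prescribed
    concentration pathway.

    c_atm        : yearly atmospheric carbon burden (PgC)
    land_uptake  : yearly net carbon uptake by the land (PgC/yr)
    ocean_uptake : yearly net carbon uptake by the ocean (PgC/yr)

    Returns the implied anthropogenic emissions (PgC/yr) for each year.
    """
    emissions = []
    for yr in range(1, len(c_atm)):
        d_atm = c_atm[yr] - c_atm[yr - 1]   # change in atmospheric carbon
        # whatever the atmosphere gained, plus whatever the land and ocean
        # absorbed, must have come from anthropogenic sources
        emissions.append(d_atm + land_uptake[yr] + ocean_uptake[yr])
    return emissions
```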
After a few failed attempts to represent geoengineering without editing the model code (e.g., altering the volcanic forcing input file), we realized it was unavoidable. Model development is always a bit of a headache, but it makes you feel like a superhero when everything falls into place. The job was fairly small – just a few lines that culminated in equation 1 from the original paper – but it still took several hours to puzzle through the necessary variable names and header files! Essentially, every timestep the model calculates the forcing from CO2 and reduces incoming solar radiation to offset that, taking changing planetary albedo into account. When we were confident that the code was working correctly, we ran all four RCPs from 2006-2300 with geoengineering turned on. The results were interesting (see below for further discussion) but we had one burning question: what would happen if geoengineering were suddenly turned off?
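Before moving on, here is roughly what that per-timestep adjustment amounts to – my reading of the approach, sketched in Python with illustrative numbers rather than the model's actual variables:

```python
import math

ALPHA = 0.3      # planetary albedo (the model recalculates this each timestep)
S0 = 1365.0      # unperturbed solar constant, W/m^2
C0 = 280.0       # preindustrial CO2 concentration, ppm

def reduced_solar_constant(co2_ppm, albedo=ALPHA):
    """Reduce the solar constant just enough to offset the CO2 forcing."""
    delta_f = 5.35 * math.log(co2_ppm / C0)   # CO2 radiative forcing, W/m^2
    # globally averaged absorbed solar radiation is S*(1-albedo)/4, so
    # cancelling delta_f requires reducing S by:
    delta_s = 4.0 * delta_f / (1.0 - albedo)
    return S0 - delta_s
```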
By this time, having completed several thousand years of model simulations, we realized that we were getting a bit carried away. But nobody else had models in the queue – again, they were all on vacation – so our simulations were running three times faster than normal. Using restart files (written every 100 years) as our starting point, we turned off geoengineering instantaneously for RCPs 6.0 and 8.5, after 100 years as well as 200 years.
## Results
Similarly to previous experiments, our representation of geoengineering still led to sizable regional climate changes. Although average global temperatures fell down to preindustrial levels, the poles remained warmer than preindustrial while the tropics were cooler:
Also, nearly everywhere on the globe became drier than in preindustrial times. Subtropical areas were particularly hard-hit. I suspect that some of the drying over the Amazon and the Congo is due to deforestation since preindustrial times, though:
Jeremy also made some plots of key one-dimensional variables for RCP8.5, showing the results of no geoengineering (i.e. the regular RCP – yellow), geoengineering for the entire simulation (red), and geoengineering turned off in 2106 (green) or 2206 (blue):
It only took about 20 years for average global temperature to fall back to preindustrial levels. Changes in solar radiation definitely work quickly. Unfortunately, changes in the other direction work quickly too: shutting off geoengineering overnight led to rates of warming up to 5 °C per decade, as the climate system finally reacted to all the extra CO2. To put that in perspective, we're currently warming around 0.2 °C per decade, which far surpasses historical climate changes like the Ice Ages.
Sea level rise (due to thermal expansion only – the ice sheet component of the model isn’t yet fully implemented) is directly related to temperature, but changes extremely slowly. When geoengineering is turned off, the reversals in sea level trajectory look more like linear offsets from the regular RCP.
Sea ice area, in contrast, reacts quite quickly to changes in temperature. Note that this data gives annual averages, rather than annual minimums, so we can’t tell when the Arctic Ocean first becomes ice-free. Also, note that sea ice area is declining ever so slightly even with geoengineering – this is because the poles are still warming a little bit, while the tropics cool.
Things get really interesting when you look at the carbon cycle. Geoengineering actually reduced atmospheric CO2 concentrations compared to the regular RCP. This was expected, due to the dual nature of carbon cycle feedbacks. Geoengineering allows natural carbon sinks to enjoy all the benefits of high CO2 without the associated drawbacks of high temperatures, and these sinks become stronger as a result. From looking at the different sinks, we found that the sequestration was due almost entirely to the land, rather than the ocean:
In this graph, positive values mean that the land is a net carbon sink (absorbing CO2), while negative values mean it is a net carbon source (releasing CO2). Note the large negative spikes when geoengineering is turned off: the land, adjusting to the sudden warming, spits out much of the carbon that it had previously absorbed.
Within the land component, we found that the strengthening carbon sink was due almost entirely to soil carbon, rather than vegetation:
This graph shows total carbon content, rather than fluxes – think of it as the integral of the previous graph, but discounting vegetation carbon.
Finally, the lower atmospheric CO2 led to lower dissolved CO2 in the ocean, and alleviated ocean acidification very slightly. Again, this benefit quickly went away when geoengineering was turned off.
## Conclusions
Is geoengineering worth it? I don’t know. I can certainly imagine scenarios in which it’s the lesser of two evils, and find it plausible (even probable) that we will reach such a scenario within my lifetime. But it’s not something to undertake lightly. As I’ve said before, desperate governments are likely to use geoengineering whether or not it’s safe, so we should do as much research as possible ahead of time to find the safest form of implementation.
The modelling of geoengineering is in its infancy, and I have a few ideas for improvement. In particular, I think it would be interesting to use a complex atmospheric chemistry component to allow for spatial variation in the forcing reduction through sulphate aerosols: increase the aerosol optical depth over one source country, for example, and let it disperse over time. I’d also like to try modelling different kinds of geoengineering – sulphate aerosols as well as mirrors in space and iron fertilization of the ocean.
Jeremy and I didn’t research anything that others haven’t, so this project isn’t original enough for publication, but it was a fun way to stretch our brains. It was also a good topic for a post, and hopefully others will learn something from our experiments.
Above all, leave over-eager summer students alone at your own risk. They just might get into something like this. |
# Probability Quiz – 1
Question 1

A year is selected at random. What is the probability that it contains 53 Mondays if every fourth year is a leap year?

- 5/28
- 3/22
- 1/7
- 6/53
Question 2

There are three cartons, each containing a different number of soda bottles. The first carton has 10 bottles, of which four are flat, the second has six bottles, of which one is flat, and the third carton has eight bottles, of which three are flat. What is the probability of a flat bottle being selected when a bottle is chosen at random from one of the three cartons?

- 25/62
- 113/360
- 123/360
- 113/180
Question 3

A die is thrown. Let A be the event that the number obtained is greater than 3. Let B be the event that the number obtained is less than 5. Then P(A∪B) is

- 2/5
- 3/5
- 1
- 1/4
Question 4

One ticket is selected at random from 50 tickets numbered 00, 01, 02, …, 49. Then the probability that the sum of the digits on the selected ticket is 8, given that the product of these digits is zero, equals

- 1/14
- 1/7
- 5/14
- 1/50
Question 5

In a plane, 5 lines of lengths 2, 3, 4, 5 and 6 cm are lying. What is the probability that a triangle cannot be formed by joining three randomly chosen lines end to end?

- $\frac{3}{10}$
- $\frac{7}{10}$
- $\frac{1}{2}$
- 1
Question 6

There are 7 boys and 8 girls in a class. A teacher has 3 items, viz. a pen, a pencil and an eraser, each 5 in number. He distributes the items, one to each student. What is the probability that a boy selected at random has either a pencil or an eraser?

- 2/3
- 2/21
- 14/45
- None of these
Question 7

A locker at the RBI building can be opened by dialling a fixed three digit code (between 000 and 999). Chhota Chetan, a terrorist, only knows that the number is a three digit number and has only one six. Using this information he tries to open the locker by dialling three digits at random. The probability that he succeeds in his endeavor is

- $\frac{1}{243}$
- $\frac{1}{900}$
- $\frac{1}{1000}$
- $\frac{1}{216}$
Question 8

A pair of fair dice are rolled together till a sum of either 5 or 7 is obtained. The probability that the sum 5 happens before sum 7 is

- 0.45
- 0.4
- 0.5
- 0.5
Question 9

In the previous question, what is the probability of getting sum 7 before sum 5?

- 0.6
- 0.55
- 0.4
- 0.5
Question 10

Numbers are selected at random, one at a time, from the numbers 00, 01, 02, …, 99 with replacement. An event E occurs if and only if the product of the two digits of a selected number is 18. If four numbers are selected, then the probability that E occurs at least 3 times is

- $\frac{97}{390625}$
- $\frac{98}{390625}$
- $\frac{97}{380626}$
- $\frac{97}{380625}$
["0","40","60","80","100"]
["Need more practice!","Keep trying!","Not bad!","Good work!","Perfect!"]
Completed
0/0
Accuracy
0%
Prime
#### Prime Mock
Complete Mock Subscription
for Goldman Sachs
Prime Mock
Personalized Analytics only Availble for Logged in users
Analytics below shows your performance in various Mocks on PrepInsta
Your average Analytics for this Quiz
Rank
-
Percentile
0% |
# [pstricks] Antw: Re: recursion
John Culleton john at wexfordpress.com
Sun Oct 28 13:40:26 CET 2007
On Sunday 28 October 2007 03:36:50 am Robert Salvador wrote:
> Thank you very much, Alan!
>
> I am doing what you thought I am doing. And this \edef is exactly what I
> need. All is working now :-))
> By the way: where can I find information about these TeX (??) - commands
> like \def, \edef, \gdef, ... and all the others that one can use in
> LaTeX together with pstricks?
>
> Robert
LaTeX is really a layer cake and the bottom layer is Knuth's original
primitive commands. The next layer is (most of) the plain tex format. This is
incorporated into the LaTeX format layer; next come optional LaTeX styles.
\def, \edef etc. come from that bottom layer. The TeXBook is a reliable, if
somewhat hard to follow, guide. Beyond that canonical work I use _TeX for
the Impatient_, _A Beginner's Book of TeX_, and _TeX by Topic_, in that order.
Two of the three (Impatient and Topic) can be downloaded free. I ultimately
bought a used paper copy of Impatient via Amazon Marketplace however because
I use it so frequently and a ring binder is clumsy.
I prefer plain tex, plain pdftex or Context to LaTeX or pdflatex.
--
John Culleton
Want to know what I really think?
http://apps.wexfordpress.net/blog/
And my must-read (free) short list:
http://wexfordpress.com/tex/shortlist.pdf |
# Homework Help: Solve integral
1. Jan 8, 2012
### Elliptic
1. The problem statement, all variables and given/known data
Solve the integral and express it through the gamma function
2. Relevant equations
cos(theta)^(2k+1)
3. The attempt at a solution
2. Jan 8, 2012
### Simon Bridge
You mean:
$$\int_0^{\frac{\pi}{2}} \cos^{2k+1}(\theta)d\theta$$... eg: evaluate the definite integral of an arbitrary odd-power of cosine.
The standard approach is to start by integrating by parts.
You'll end up with a reducing formula which you can turn into a ratio of factorials - apply the limits - after which it is a matter of relating that to the factorial form of the gamma function.
eg. http://mathworld.wolfram.com/CosineIntegral.html
3. Jan 8, 2012
### Elliptic
It's difficult to see what is happening here.
4. Jan 8, 2012
### Simon Bridge
If it was easy there'd be no point setting it as a problem.
I'm not going to do it for you ...
Do you know what a gamma function is? Can you represent it as a factorial?
Can you identify where you are having trouble seeing what is going on?
Perhaps you should try to do the derivation for yourself?
Last edited: Jan 8, 2012
5. Jan 8, 2012
Thanks.
6. Jan 8, 2012
### Simon Bridge
Really? And I thought I was being mean.....
The trig-form of the beta function aye - yep, that's a tad more elegant than the path I was suggesting before (the more usual one)... but relies on a hand-wave: do you know how the beta function is derived?
Also - you have $\frac{1}{2}B(\frac{1}{2},k+1)$ but you've spotted that.
If you look at the cosine formula - you have to evaluate the limits ... at first it looks grim because it gives you a sum of terms like $\sin\theta\cos^{2k}\theta$ which is zero at both limits ... unless k=0 ... which is the first term in the sum, which is 1.
After that it is a matter of subbing in the factorial representation of the gamma function.
Which would be a concrete proof.
Yours is shorter and if you have the beta function in class notes then you should be fine using it. |
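For reference – a summary of where the thread ends up, not part of the original exchange – the beta-function route gives, for non-negative integer $k$:

$$\int_0^{\pi/2} \cos^{2k+1}\theta\,d\theta = \tfrac{1}{2}B\!\left(\tfrac{1}{2},\,k+1\right) = \frac{\Gamma\!\left(\tfrac{1}{2}\right)\Gamma(k+1)}{2\,\Gamma\!\left(k+\tfrac{3}{2}\right)} = \frac{\sqrt{\pi}\;k!}{2\,\Gamma\!\left(k+\tfrac{3}{2}\right)} = \frac{4^{k}\,(k!)^{2}}{(2k+1)!}$$

As a quick check, $k=0$ gives $\int_0^{\pi/2}\cos\theta\,d\theta = 1$, which matches.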
# 'egen' command: Does Stata select the right level from an attribute?
#### nmarti222
##### New Member
I'm doing my Master's thesis on a Hybrid Choice model, using the example of a fitness center.
My dataset has 81 respondents who have to choose 12 times (12 choice tasks) among 3 alternatives ("alternative 1", "alternative 2" and "nochoice"). There are different attributes for each alternative considered (type of access, weekly access, days per week and price). For the first alternative, the attributes were labelled as alt1_type, alt1_weac, alt1_dayp, alt1_pri. However, for the second alternative the attributes were labelled as alt2_type, alt2_weac, alt2_dayp, alt2_pri. The attribute I want to focus on is the "type of access" (present in alt1_type and alt2_type) and the corresponding levels are "mixed gender" and "female only".
EXPLANATION OF THE PROBLEM --> The level ("mixed gender" and "female only") of each attribute ("type of access" present in "alt1_type" and "alt2_type") changes randomly between alternative 1 and 2. For example, in the first choice task (from a total of 12 choice tasks) a respondent can choose "female only" that appears under "alt1_type" and obviously "mixed gender" appears under "alt2_type". In the second choice task "mixed gender" appears under "alt1_type" and obviously "female only" appears under "alt2_type". Levels are randomly assigned to each alternative until 12 choice tasks are completed by the survey respondent. For nochoice is always the same level, so no problem in here.
PROBLEM --> With the code below I'm assuming that the first alternative is always "female only" (or "mixed gender") and the second is always "mixed gender" (or "female only") and the third is always "nochoice" (that in that case is always the same, so no problem).
In the dataset I have a variable called "choice" that equals 1 when the first alternative (alternative 1) is selected, 2 when the second alternative (alternative 2) is selected, and 3 for the third alternative, "nochoice". How can I choose the correct level from each attribute?
MY "WRONG" CODE IN Stata -->
Code:
egen choice1 = sum(choice == 1), by(userid)
egen choice2 = sum(choice == 2), by(userid)
egen nochoice = sum(choice == 3), by(userid)
And after that the code continues like this, in order to find the distribution of choices across alternative 1, alternative 2 and nochoice.
Code:
gen choice1n = round(choice1/12,.01)
gen choice2n = round(choice2/12,.01)
gen nochoicen = round(nochoice/12,.01)
gen choice12n = choice1n+choice2n
Any idea is more than welcome!
Many thanks!
Nerea |
# I need to find a unit vector perpendicular to vector b
1. Apr 7, 2006
### danago
Hey. Here is the question:
$$\underline{b}= -8\underline{i} - 6\underline{j}$$
I need to find a unit vector perpendicular to vector b.
Ive come up with the following:
$$10 \times |\underline{x}| \cos 90 = \underline{b} \bullet \underline{x}$$
I dont know if what ive done is even close to what i need to do, but from there, im completely stuck.
Any help greatly appreciated.
Dan.
2. Apr 7, 2006
### nrqed
Well, the equation you wrote does not give you any information since cos 90 =0.
Write your unknown vector as $a {\vec i} + b {\vec j}$ and then impose that the dot product of this with your vector above gives zero (write out the scalar product explicitly in terms of *components*, not in terms of magnitude and angle). You will get one equation for two unknowns so there will be an infinite number of solutions. Just pick a value for a (ANY value, except zero) and solve for b. Then you can normalize your vector by dividing it by its magnitude.
3. Apr 7, 2006
### Euclid
Clearly the solution is in the x-y plane. Let's say the solution is $$\textbf{z}=x\textbf{i}+y\textbf{j}$$. You want to solve $$\textbf{z} \cdot \textbf{b} = -8x-6y = 0$$, subject to the constraint $$x^2+y^2 =1$$. How would you normally go about solving these?
4. Apr 7, 2006
### danago
thanks for both of those posts, but im not really understanding them. nrqed, you referred to scalar product, and i wouldnt have a clue what that means.
And euclid, im really lost with what youre trying to say sorry.
Thanks for attempting to help me anyway.
5. Apr 7, 2006
### Euclid
Two vectors a and b are orthogonal (by definition) if their dot product is zero.
If $$\textbf{a} = a_1 \textbf{i}+a_2\textbf{j}$$ and $$\textbf{b} = b_1\textbf{i}+b_2\textbf{j}$$, then their dot product is
$$\textbf{a}\cdot \textbf{b} =a_1b_1+a_2b_2$$
The length of the vector $$\textbf{a}$$ is
$$|a|=\sqrt{a_1^2+a_2^2}$$.
Hence your probem is to find a vector $$\textbf{z}$$ such that $$\textbf{z}\cdot \textbf{b} = 0$$ and $$|\textbf{z}|=1$$. Simply follow the definitions to get the system of equations above.
BTW, nrqed's method is more efficient than solving the two equations directly. It works because if $$\textbf{a}$$ and $$\textbf{b}$$ are orthogonal, so are $$c\textbf{a}$$ and $$\textbf{b}$$ for any scalar c.
Last edited: Apr 7, 2006
6. Apr 7, 2006
### nrqed
Sorry about the confusion... "scalar product" and "dot product" are two terms representing the same thing. You seem to already know about this type of product between two vectors, because this is essentially what you wrote as ${\underline b} \cdot {\underline x}$ in your first post.
However, have you learned that there are *two* ways to calculate this product? One is using the form you wrote, the other way involves multiplying components and adding them, as Euclid wrote. Have you seen this? It's this other way of calculating a dot product that you need to solve this problem. The equation you wrote is correct but not useful for this type of problem.
Hope this helps.
Patrick
7. Apr 7, 2006
### danago
ahhh i think i understand now.
So from that, i can write two equations:
$$-8a-6b=0$$
and
$$a^2+b^2=1$$
The first equation as another way of writing the dot product, and the second equation because the solution is a unit vector, then i solve them as simultaneous equations?
I came up with the final vector:
$$\textbf{z}=-0.6\textbf{i}+0.8\textbf{j}$$
8. Apr 7, 2006
### Euclid
Yup, that's right!
9. Apr 7, 2006
### danago
Yay. Thanks so much to both of you for the help. Makes sense to me now :)
10. Apr 8, 2006
### nrqed
That's perfect!
Notice that there is one other solution (makes sense, right? I mean if you have a vector, it is possible to get *two* different unit vectors which will be perpendicular to it...one at 90 degrees on one side and one on the other side). That other solution comes from when you use a^2+b^2 =1 and you isolate a, let's say, you can take two different roots.
Good for you!
I was glad to help but you did most of the work.
Regards
Patrick
11. Apr 8, 2006
### danago
thats because when i write $a^2+b^2=1$ in terms of b i get
$b=\pm\sqrt{1-a^2}$ right? which means that the second solution would be $\textbf{z}=0.6\textbf{i}-0.8\textbf{j}$, the negative of $$\textbf{z}$$?
Makes perfect sense to me. Thanks alot :) |
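A quick way to see both answers at once (a side note, not from the thread itself): in 2D, swapping a vector's components and negating one of them always gives a perpendicular vector, so

$$\mathbf{b} = -8\mathbf{i} - 6\mathbf{j}, \quad |\mathbf{b}| = 10, \quad \hat{\mathbf{z}} = \pm\frac{-6\mathbf{i} + 8\mathbf{j}}{10} = \pm\left(-0.6\,\mathbf{i} + 0.8\,\mathbf{j}\right)$$

and indeed $(-8)(-0.6) + (-6)(0.8) = 4.8 - 4.8 = 0$.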
# How do you solve x^3 -3x^2 +16x -48 = 0?
Feb 16, 2017
$3$ and $\pm 4 i$. See the Socratic graph of the cubic, showing the x-intercept at 3.
#### Explanation:
From the sign changes in the coefficients, the equation has at most 3 positive roots. There are no sign changes when x is changed to $-x$, and so there are no negative roots.
The cubic is 0 at x = 3. So, it factors as
$\left(x - 3\right) \left({x}^{2} + 16\right)$
The other solutions come from ${x}^{2} + 16 = 0$, giving $x = \pm 4 i$
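For completeness (a worked step added here, not in the original answer), the factorisation can be found by grouping:

$${x}^{3} - 3 {x}^{2} + 16 x - 48 = {x}^{2} \left(x - 3\right) + 16 \left(x - 3\right) = \left(x - 3\right) \left({x}^{2} + 16\right)$$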
graph{(x-3)(x^2+16) [2, 4, -500, 500]}
Not to scale. It is large y vs small x, for approximating the solution. |
## 18 October 2011
### Previously
I blogged some ideas about Library paths for Klink, the Kernel implementation I wrote. I listed several desiderata, based on lessons from the past. I also blogged about how I'd like to bridge the gap between package as used by developer and package as used by user.
• (The desiderata from the previous blog post, plus:)
• Should co-operate well with development. Switching from use to development shouldn't require gross changes or compete with library support.
• Can fetch libraries automatically with reasonable power and control
• In particular, automatable enough to support "remote autoloading" but ultimately should be under the user's control.
• Support clean packaging
## Fetching libraries: mostly lean on git
The well-loved version manager git provides most of what I'd want, out of the box:
• Co-operates well with development (More than co-operates, that's what it's usually for)
• Reasonably compact for non-development. You can clone a repo with depth=1
• Fetching
• Via URL (git protocol or otherwise)
• Doesn't treat URLs as sexps - only a mild problem.
• Finding out what's there to be fetched, in the sense of available versions (eg, looking for latest stable release)
git ls-remote --tags URL
• But we have to distinguish version tags from other tags, which AIUI don't refer to versions.
• Secure digital signatures are easy
• Creating them
git tag -s
• Verifying them
git verify-tag
• Excluding local customizations from being updated
• This is possible with .gitignore and some care
• But customizations will live somewhere else entirely (See below)
• Practices supporting stable releases. git-flow (code and practices) does this.
• NOT a well-behaved heterogenerated tree of libraries.
Of course git does not support knowing that a repo is intended as Kernel code. Looking at filename extensions does, but that seems to require fetching the repo first. For the same reason, it can't easily be any file that "lives" in the repo. It should be something about the repo itself.
So the convention I propose is that the presence of a branch named --kernel-source-release indicates a branch of stable Kernel code. Tags on that branch would indicate available versions, so even if coders are working informally and doing unstable work on "master", only tagged versions would be offered.
But does keeping --kernel-source-release up to date require extra effort for the maintainer? IIUC, git can simply make --kernel-source-release track "master", so if a coder's workflow is organized, he needn't make any extra effort beyond issuing a one-time command. Branch tracking is intended for remotes, but seems to support this.
Should there be other branches, like --kernel-source-unstable or --kernel-source-development? I suspect they're not needed, and any use of unstable branches should be specifically configured by the daring user.
I'm not proposing to permanently tie Klink (much less Kernel) specifically to git forever. But it serves so well and is so well supported that I'm not concerned.
## Where to put it all?
That addressed how we can fetch code. In doing so, it put some restrictions on how we can organize the files on disk. So I should at least sketch how it could work on disk.
### The easy part
Of course one would configure directories for libraries to live in. Presumably one would distinguish system, local, and user.
### Path configuration
But the stow approach still left issues of where exactly to stow things. We can't solve it in the file system. That would result in one of two ugly things:
• Making each project represent the entire library filespace, with its real code living at some depth below the project root.
• Making each project physically live in a mirror of the target filespace. This would have all the problems we were avoiding above plus more.
So I propose per-project configuration data to tell stow about paths. I'd allow binding at least these things:
prefix
The library prefix, being a list of symbols.
parts
List of sub-parts, each being a list of: the part's type (eg source or info), its location in the repo, and any extra prefix components.
For example,
((prefix (std util my-app))
(parts
(
(source
[,,src,,]
())
(source
[,,tests,,]
(tests))
(info
[,,doc,,]
())
(default-customizations
[,,defaults,,]
())
(public-key
[,,pub_key.asc,,]
()))))
That would live in a file with a reserved name, say "%kernel-paths" in the repo root. As the example implies, the contents of that file would be sexps, but it wouldn't be code as such. It'd be bindings, to be evaluated in a "sandbox" environment that supported little or no functionality. The expressions seem to be just literals, so no more is required.
## Dependencies and version identity
### Surfeit of ways to express version identity
There are a number of ways to indicate versions. All have their strengths:
• ID hash
• Automatic
• Unique
• Says nothing about stability and features
• Release timestamp
• Time ordered
• Nearly unique, but can mess up.
• Says nothing about stability and features
• Version major.minor.patch
• Just a little work
• Expresses stability
• Expresses time order, but can be messed up.
• Test-satisfaction
• Lots of work
• Almost unused
• Automatically expresses stability and features
• No good convention for communicating the nature of tests
• `stable', `unstable', `release', `current'.
• Expresses only stability and currency
• By named sub-features
• Just a little work
• Expresses dependencies neatly
• Expressive
• Not automatic
I chose sub-feature names, based on how well that works for emacs libraries, a real stress test. That is, I choose for code to express dependencies in a form like:
(require (li bra ry name) (feature-1 feature-2))
### Co-ordinating sub-features with version identity
The other forms of version identity still exist as useful data: ID hash, version tags, results of tests. What makes sense to me is to translate them into sets of provided features. Do this somewhere between the repository and the require statement. require would still just see sets of features.
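To make that concrete, here is a rough sketch of the translation and lookup in Python rather than Kernel – the function names and metadata fields are invented for illustration, not part of Klink:

```python
def features_of(version):
    """Translate one version's metadata into a set of provided features."""
    feats = set(version.get("declared_features", ()))   # from source annotations
    if version.get("branch") == "master":
        feats.add("stable")                              # automatic rule
    if version.get("signature_ok"):
        feats.add("trusted-source")
    if version.get("tests_passed"):
        feats.add("all-tests-passed")
    feats -= set(version.get("locally_denied", ()))      # specific exceptions, eg a bad ID
    return feats

def require(versions, wanted,
            implicit=frozenset({"works", "stable", "trusted-source"})):
    """Pick the newest version that provides all wanted + implicit features."""
    needed = set(wanted) | set(implicit)
    for v in sorted(versions, key=lambda v: v["timestamp"], reverse=True):
        if needed <= features_of(v):
            return v
    raise LookupError("no version provides: " + ", ".join(sorted(needed)))
```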
Desiderata for this translation:
• Shouldn't be too much work for the developer.
• Probably easiest to support automatic rules and allow particular exceptions. With a git-flow workflow, this could almost be automatic. As soon as a feature branch is merged into "master", that version and later versions would be deemed to have a feature of that name.
• Should be expressable at multiple points in the pipeline, at least:
• Annotations in the source code itself
• In the repo (In case the source code annotations had to be corrected)
• Stand-alone indexes of library identities. Such indexes would be libraries in their own right. Presumably they'd also record other version-relevant attributes such as signature and URL.
• Locally by user
• Should be derivable from many types of data, at least:
• Branches (eg, everything on "master" branch has the feature stable)
• Tag text (eg, all versions after (2 3 3) provide foo-feature)
• Tag signature (eg, check it against a public key, possibly found in the repo)
• Source code annotations (eg, after coding foo-feature, write (provide-features ear lier fea tures foo-feature))
• Tests (eg, annotate foo-feature's (sub)test suite to indicate that passing it all means foo-feature is provided)
• ID
• To express specific exceptions (eg, ID af84925ebdaf4 does not provide works)
• To potentially compile a mapping from ID to features
• Upstream data. Eg, the bundles of library identities might largely collect and filter data from the libraries
• Should be potentially independent of library's presence, so it can be consulted before fetching a version of a library.
• Should potentially bundle groups of features under single names, to let require statements require them concisely.
### Dependencies
With sub-features, we don't even need Scheme's modest treatment of dependencies, at least not in require. Instead, we could avoid bad versions by indicating that they lack a feature, or possibly possess a negative feature.
The usual configuration might implicitly require:
• works
• stable
• trusted-source
• all-tests-passed
The set of implicitly required features must be configurable by the user, eg for a developer to work on unstable branches.
## Library namespace conventions
On the whole, I like the CPAN namespace conventions. I'd like to suggest these additional (sub-)library-naming conventions:
raw
This interface provides "raw" functionality that favors regular operation and controllability over guessing intentions.
dwim
This interface provides "dwim" functionality that tries to do what is probably meant.
test
This sub-library contains tests for the library immediately enclosing it
testhelp
This sub-library contains code that helps test libraries that use the library immediately enclosing it. In particular, it should provide instances of objects the library builds or operates on for test purposes.
interaction
This library has no functionality per se; it combines one or more functional libraries with an interface (keybindings, menus, or w/e). This is intended to encourage separation of concerns.
inside-out
This library is young and has not yet been organized into a well-behaved namespace with parts. It can have sub-libraries, and their names should evolve to mirror the overall library organization so that it can become a real library.
(inside-out new-app)
user
This user is providing a library that doesn't yet have an official "home" in the namespace. The second component is a unique user-name.
(user tehom-blog/blogspot.com inside-out new-app)
(user tehom-blog/blogspot.com std utility new-util)
## Mutability and Signals
Recently I've been working on Rosegarden, the music sequencer. It uses Qt which uses signals.
Signals implement the Observer pattern, where an object notifies "observers" via signals. A signal is connected to one or more "slots" in other objects. The slots are basically normal methods, except they return nothing (void). When a signal is emitted, Qt arranges for the slots to be called, other than those of deleted objects. So far, I find it works easily and elegantly.
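For readers who haven't met Qt's signals and slots, the underlying observer pattern is tiny. Here is a minimal sketch in Python (just the idea – not Qt's implementation, and not Klink code):

```python
class Signal:
    """Minimal observer-pattern signal: connect slots, then emit to call them."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)        # a slot is any callable returning nothing

    def disconnect(self, slot):
        self._slots.remove(slot)

    def emit(self, *args):
        for slot in list(self._slots):  # copy, in case a slot disconnects itself
            slot(*args)

# usage: something like the "replaced" signal discussed below
replaced = Signal()
replaced.connect(lambda new_value: print("replaced by:", new_value))
replaced.emit(42)
```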
This made me wonder: Could signals take the place of mutability in Scheme? And might that give us both referential transparency and reactiveness simultaneously?
There's not much support for signals for Lisp and Scheme. There's Cells, but it seemed to be conceived of as just a super-spreadsheet. I want to go much further and use signals to re-imagine the basics of object mutation.
## Quasi-mutation: The general idea
Basically, we'd use signals between constant objects to fake mutability.
• Objects can't mutate.
• "Slots" are closures.
• Signals are emitted with respect to particular objects.
• Not "by objects". Again, we're not object-oriented. We're just indexing on objects.
• Ability to emit is controlled by access to objects and signal types.
• Indexing on one particular argument seems overly special, so I contemplate indexing on any relevant arguments. This is again similar to generic functions.
• Signals can be connected to slots.
• The signals go to where the object's respective signal is connected. They are indexed on objects.
• Constructors connect the signal replaced from the parts to the constructed object.
• More precisely, to a closure that knows the object.
• The closure would fully represent the objects' relation. For instance, mutable pairs might have the slots new-car and new-cdr with the obvious meanings.
• But not for immutable objects. Immutable objects' slots would not be new-car and new-cdr, they would raise error.
• The constructed object can access its part objects, in appropriate ways by its own lights. For instance, a pair object could retrieve its car and cdr objects.
• This particular signal replaced is not exposed.
• The details of replaced will be refined below.
• Slots such as new-car will typically:
• Construct a near-copy of the object, with the new part in the old part's place. This effectively connects a new version of the object to the new part and disconnects it from the old part.
• Emit replaced with respect to the object, propagating the change.
• "Normal" setters such as set-car! emit replaced wrt the old object with the new object as value.
• That's just the classical way. There's plenty of room to do clever new things with signals.
• As above, doing this to immutable objects causes error.
• Constructed objects behaving this way would include at least:
• Mutable pairs (and therefore mutable lists and trees)
• Environments
• Continuations. While not often considered object-like, continuations have parts such as the current environment and their parent continuations.
• External-world-ish objects such as ports react to signals in their own appropriate way, not neccessarily propagating them further.
## As if
Literally implementing every pair with at least two signals between itself and its car and cdr seems prohibitive if not impossible. Physically, it couldn't be the only mechanism of mutation. So I'm talking about a mechanism that acts as if it's continuous down to basics like pairs and lists, but really uses a more modest mechanism where it can (presumably containment, as now).
## Object identity
Above, I said "constructed object can access a part object", not "a part's value". Since objects no longer ever change value, the difference is subtle. It's just this: the object has a single set of signal connections. So it has a single identity. So there is a trace of object identity remaining.
One could represent identity value-wise by saying that values consist of a (classical) value and an "object-identity" value, and that object-identity values are opaque and unique except as shared by this mechanism. So signals are connected with respect to object-identity values.
It has a flavor of "the long way around", but it lets us treat objects entirely as values.
### Object versioning
Different versions of an object have different Object-IDs. Imperatively this wouldn't be anything, since two simultaneous references to the same object can't have different values. But here, one can both mutate an object as far the the world is concerned and hold onto its old value. But it should never be the case that objects are eq? but not equal?. So different versions have different Object-IDs.
### The equality predicates
Equality:
equal?
is what it normally is in Scheme or Kernel: The objects have the same current value.
eq?
A and B are eq? just if
• (equal? identity-of-a identity-of-b)
=?
is just an optimization of equal?
## Signals and immutability
I mentioned that I was thinking about immutability in regard to this. So far, I've just described how to duplicate mutability with signals.
For immutable objects, some or all slots would still get connections, but would raise error instead of propagating mutation.
But that's only half of the control we should have over mutation. We'd also like to guarantee that certain evaluations don't mutate even mutable objects they have access to, eg their arguments, the environment, and dynamic variables. The "foo!" convention indicates this (negatively) but doesn't enforce anything. "foo!" convention notwithstanding, we'd like to guarantee this from outside an arbitrary call, not from inside trusted combiners.
### Blocking signals
So we'd like to sometimes block signals. If signals were emitted anyways, they'd be errors and would not reach their destinations. So if replaced is blocked, code either doesn't try to mutate objects or tries and raises an error. Either is consistent with immutability.
ISTM the simplest way to block signals is to disconnect their existing connections and connect them to error combiners. When the call is done, restore their original connections. However, that doesn't play well with asynchronous execution.
Instead, we'll make a copy of the original object that will (probably lazily) infect its parts with "don't mutate me in this scope".
### Scope
For a traditional imperative language, where flow of control and scope are structurally the same, we could block signals in specific scopes, recursively. But for Scheme and Kernel, that won't suffice. What would happen if an object is passed to a continuation and mutated there? We've broken the guarantee that the object wouldn't be mutated. Any time we let objects be passed abnormally, this can happen.
We might try to:
1. raise error if affected objects are passed to continuation applications, or
2. "infect" the other scope with the signal restrictions.
Neither is appealing. In this mechanism, continuing normally is also passing to a less restrictive scope. And continuing normally should behave about the same way as continuing abnormally to the same destination. We also don't want error returns to permanently "freeze" objects.
So ISTM we must distinguish between continuing to a (not neccessarily proper) parent of the restricting scope (normally or otherwise) and continuing elsewhere. Signal blocks are removed just if control reaches a parent. This is essentially how Kernel guards reckon continuation parentage.
### Doing this for all object parts
We'd usually want to say that no part of any argument to a combiner can be mutated. It's easy enough to treat signal connections from the root argobject. But we need to "infect" the whole object with immutability, and not infect local objects, which may well be temporary and legitimately mutable.
Since these arguments are ultimately derived from the root argobject, what we can do is arrange for accessors to give immutable objects in their turn. But they have to be only temporarily immutable - blocked, as we said above. And we'd prefer to manage it lazily.
So I propose that accessing a blocked object gives only objects:
• whose replaced signals are re-routed to error, as above, until control "escapes normally" (an improper parent continuation is reached)
• which are in turn blocked, meaning they have the same property infecting all of their accessors.
#### Non-pair containers
Non-pair containers are not treated by mechanisms like copy-es-immutable. But we want to treat immutability fully, so they have to be treated too. This is the case even for:
• Environments
• Encapsulation types. In this case, their sole accessor is required to do this, as all accessors are.
• Ports. Administrative state or not, they can be immutable.
• Closures. Anything they return is considered accessed and automatically gets this blockingness. Their internal parts (static environment, etc) need not be blocked.
• Continuations. Like closures, what they return is considered accessed and automatically gets this blockingness.
#### Exemptions
We'd like to be able to exempt particular objects from this. Some combiners mutate an argument but shouldn't mutate anything else. There'd probably be a signal-block spec that would specify this.
### Blocking signals to keyed dynamic objects
We can easily extend the above to deal with dynamic environment, but keyed dynamic objects are not so simple. Their accessors would be covered by the above if derived from the argobject or the dynamic environment, but they need not be.
So we need an additional rule: Keyed dynamic objects are blocked if accessed in the dynamic scope of the blocking. That's recursive like other blockings. Keyed rebindings in the dynamic scope aren't, because one might bind a temporary that's legitimately mutable.
Side note: I'd like to see a stylistic convention to differentiate between combiners that mutate their arguments ("foo!") and combiners that mutate something else, meaning either their dynamic environment or a dynamic variable (Say "foo!!")
## Interface to be defined
I've already written a lot, so I'll leave this part as just a sketch. To support all this, we need to define an interface.
• A means of defining new signal types
• Returns a signal emitter
• Returns a signal identifier for `connect' to use.
• A signal scheduler. By general principles, the user should be able to use his own, at least for exposed signals. It's not too hard to write one with closures and continuations.
• Means for the user to emit signals
• Means for the user to connect and disconnect signals
• Not exposed for many built-in objects such as pairs.
• "capture all current", as for blocking
• "disconnect all"
• block-all-signals: connects all signals to error continuation
• Possible argument `except'
• (Maybe) block-signal: connects given signal to error continuation
• A "dirty-flag" mechanism, often useful with signals.
• Possibly a priority mechanism.
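To make the shape of this interface concrete, here is a minimal, purely illustrative sketch in Python (Klink itself is written in C; every name and signature below is hypothetical, not Klink's actual API):

```python
class SignalType:
    """A hypothetical signal type, as sketched in the list above."""

    def __init__(self, name):
        self.name = name
        self._handlers = []

    def connect(self, handler):
        """Connect a handler; the handler itself serves as the identifier for disconnect."""
        self._handlers.append(handler)
        return handler

    def disconnect(self, handler):
        self._handlers.remove(handler)

    def emit(self, *args):
        """Deliver the signal to every connected handler, in connection order."""
        for handler in list(self._handlers):
            handler(*args)


def block_all_signals(signal_types, error_handler):
    """Re-route every listed signal to an error handler (standing in for the error
    continuation).  Returns an undo thunk, standing in for 'until control reaches
    a parent of the restricting scope'."""
    saved = {s: list(s._handlers) for s in signal_types}
    for s in signal_types:
        s._handlers = [error_handler]

    def unblock():
        for s, handlers in saved.items():
            s._handlers = list(handlers)

    return unblock
```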
## Some possibilities this raises
• Use the signal scheduling mechanism for managing constraints. Now that we can enforce immutability, constraints become very tempting.
• Provide general signals
• Provide an interface to Lone-COWs, a sort of copy-on-write object optimized for usually being uniquely referenced/owned.
• Supplying "out-of-band" signals to a debugger or similar. They really do need to be out-of-band.
• Provide a broadly applicable interface for redo/undo. It could basically just capture historical copies of objects.
## Previously
Klink is my stand-alone implementation of Kernel by John Shutt.
## Types of immutability
There are several types of immutability used or contemplated in Klink.
Complete immutability
Not much to be said.
Pair immutability
A pair that forever holds the same two objects, though those objects' contents may change.
List immutability
For instance, a list where you can change the contents, but not the number of elements.
Recursive structural immutability
An immutable tree of mutable non-pair objects.
Eg, what ports typically have. When you read or write to a port, it remains "the same thing" administratively.
### Complete immutability
Available in C, used in some places such as symbols and strings. AFAICT there's no way to specify its use or non-use in Kernel; some objects just have it because of what they are or how they are created.
### Recursive structural immutability (Tree immutability)
Supported via copy-es-immutable
No C flag for it. Like complete immutability, some objects just have it because of what type they are.
## Non-recursive structural immutability?
If you read section 4.7.2 of the Kernel report (copy-es-immutable), you may notice that Pair immutability and List immutability are actually extensions. So I figured I should at least advance a rationale for them.
Is non-recursive immutability worth supporting? ISTM it's already strongly suggested by Kernel.
Implied by an informal type
Some combiners take finite lists as arguments; all applicatives require a finite list as argobject. That distinguishes the finite list as at least an informal type. There's a predicate for it, finite-list?, but pairs that "are" finite lists can "become" other sorts of lists (dotted or circular), so it falls short of being a formal type. This seems like an irregularity to me. Structural immutability would solve it.
Implied by implied algorithms
For some combiners (eg reduce), any practical algorithm seems to require doing sub-operations on counted lists. That implies a structurally immutable list, because otherwise the count could theoretically become wrong; in practice it's saved from this by writing the algorithm carefully. So there are at least ephemeral, implied structurally immutable lists present.
Vectors
John once told me that he would eventually like to have vectors in Kernel. Vectors are optimized structurally immutable finite lists.
Opportunity for optimization
There's an opportunity for other optimizations where list immutability is used and recognized. In particular, caching a list's element count is often nice if one can be sure the list count won't change.
## Comparison table
| Type of immutability | Pair | List | Tree |
| --- | --- | --- | --- |
| Special care needed for shared objects? | No | Only own tail | Yes |
# Thread: the limit of (1+1/n)^n=e
1. ## the limit of (1+1/n)^n=e
hallo,
does anyone know how to prove that lim(1+1/n)^n = e when n goes to zero?
thanks
omri
2. Originally Posted by omrimalek
hallo,
does anyone know how to prove that lim(1+1/n)^n = e when n goes to zero?
thanks
omri
I assume that you made a typo because $\lim_{n \mapsto 0}\left(1+\frac1n\right)^n = 1$
3. Hi
$L=\lim_{n \to 0} \left(1+\frac 1n\right)^n$
$\ln(L)=\lim_{n \to 0} n \cdot \ln \left(1+\frac 1n\right)$
But when $n \to 0 ~,~ \frac 1n \gg 1$
Therefore $\ln(L)=\lim_{n \to 0} n \cdot \ln \left(\frac 1n\right)$
Substituting $u=\frac 1n$, we get :
$\ln(L)=\lim_{u \to \infty} \frac 1u \cdot \ln(u)=0$.
Therefore, $L = 1.$
4. Originally Posted by earboth
I assume that you made a typo because $\lim_{n \mapsto 0}\left(1+\frac1n\right)^n = 1$
May be he means as $n \to \infty$??
Which can be done by taking logs then showing the limit is $1$ by any number of methods including L'Hopitals rule.
RonL
5. ## sorry i made a mistake...
I meant when n goes to infinity...
6. $\lim_{n\to\infty}\left ( 1 + \frac{1}{n} \right )^n=e$
This is the definition of number e. So we can't prove that this is e, but we can show that this definition is consistent with other rules and definitions.
7. Originally Posted by wingless
$\lim_{n\to\infty}\left ( 1 + \frac{1}{n} \right )^n=e$
This is the definition of number e. So we can't prove that this is e, but we can show that this definition is consistent with other rules and definitions.
It can be proved but that requires assuming some other definition which would otherwise be proved from this definition .....
What can be done without assuming any definitions is proving that the limit exists and that it lies between 2 and 3.
8. ## thank you all!
9. Let $\varphi = \lim_{x \to \infty } \left( 1 + \frac{1}{x} \right)^x .$
(Assuming the limit does exist.)
Since the logarithm is continuous on its domain, we can interchange the function and taking limits.
$\ln \varphi = \ln \Bigg[ \lim_{x \to \infty } \left( 1 + \frac{1}{x} \right)^x \Bigg] = \lim_{x \to \infty } x\ln \left( 1 + \frac{1}{x} \right).$
Make the substitution $u=\frac1x,$
$\ln \varphi =\lim_{u\to0}\frac1u\ln(1+u).$ Since $\ln (1 + u) = \int_1^{1 + u} \frac{1}{\tau }\,d\tau ,\; 1 \le\tau\le 1 + u,$
$\frac{1}{1 + u} \le \frac{1}{\tau } \le 1\,\therefore \,\frac{u}{1 + u} \le \int_1^{1 + u} \frac{1}{\tau}\,d\tau \le u.$
So $\frac1{1+u}\le\frac1u\ln(1+u)\le1.$
Take the limit when $u\to0,$ then by the Squeeze Theorem we can conclude that $\lim_{u\to0}\frac1u\ln(1+u)=1.$
Finally $\ln\varphi=1\,\therefore\,\varphi=e.\quad\blacksquare$
10. Originally Posted by Moo
Hi
You just have to be careful about one thing over here. You assumed the limit exists. How do you know the limit exists? Note, this is why Krizalid wrote "assuming it exists". To prove this we can note the function $f:(0,\infty)\to \mathbb{R}$ defined as $f(x) = (1+\tfrac{1}{x})^x$ is increasing since $f' >0$. Therefore, the sequence $x_n = (1+\tfrac{1}{n})^n$ is an increasing sequence. Now show that $\{ x_n\}$ is bounded. So we have an increasing bounded sequence and therefore we have a limit.
Originally Posted by wingless
$\lim_{n\to\infty}\left ( 1 + \frac{1}{n} \right )^n=e$
This is the definition of number e. So we can't prove that this is e, but we can show that this definition is consistent with other rules and definitions.
That is not how I like to define $e$. I like to define $\log x = \smallint_1^x \tfrac{d\mu}{\mu}$. And we can define $e$ to be the (unique) number such that $\log (e) = 1$. If we define $e$ this way then it would follow that $(1+\tfrac{1}{n})^n\to e$. But whatever, it depends on your style of defining logarithmic functions. I just find that the approach I use is the cleanest and smoothest.
11. Originally Posted by wingless
$\lim_{n\to\infty}\left ( 1 + \frac{1}{n} \right )^n=e$
This is the definition of number e. So we can't prove that this is e, but we can show that this definition is consistent with other rules and definitions.
It is not $the$ definition but $a$ definition, how one proves it depends on what one is supposed to know, and how it has been defined. It is quite common for it to be defined as the base of natural logarithms.
RonL
12. Originally Posted by CaptainBlack
It is not $the$ definition but $a$ definition, how one proves it depends on what one is supposed to know, and how it has been defined. It is quite common for it to be defined as the base of natural logarithms.
RonL
Thanks to you both, TPH and CaptainBlack. I know that there are tons of definitions for e, but this definition is the oldest one, so it seemed to me nonsense to prove it.
13. I will present this proof for what it's worth, though the previous posters' proofs build from this. This proof builds on the differentiability of ln(x), to be more exact on the derivative of ln(x) at x = 1.
Using the definition of the derivative, and since the derivative of ln(x) at x = 1 is 1/1 = 1, we get:
$1=\lim_{h\to 0}\frac{ln(1+h)-ln(1)}{h}$
$=\lim_{h\to 0}\frac{ln(1+h)}{h}$
$=\lim_{h\to 0} ln(1+h)^{\frac{1}{h}}$
Therefore, it follows that:
$e=e^{\lim_{h\to 0} ln(1+h)^{\frac{1}{h}}}$
which, from the continuity of $e^{x}$, can be written this way:
$e=\lim_{h\to 0}e^{ln(1+h)^{\frac{1}{h}}}=\lim_{h\to 0}(1+h)^{\frac{1}{h}}$
Now, we recognize this limit. The others play off it.
To show $\lim_{x\to {\infty}}\left(1+\frac{1}{x}\right)^{x}=e$
merely let $t=\frac{1}{x}$.
This changes the limit to $t\to 0^{+}$ and we have said limit.
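Spelling that substitution out, with $t \to 0^{+}$ as $x \to \infty$:
$\lim_{x\to\infty}\left(1+\frac{1}{x}\right)^{x}=\lim_{t\to 0^{+}}\left(1+t\right)^{\frac{1}{t}}=e.$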
# Convert Tensor¶
This function copies elements from the input tensor to the output with data conversion according to the output tensor type parameters.
For example, the function can:
• convert data according to new element type: fx16 to fx8 and backward
• change data according to new data parameter: increase/decrease the number of fractional bits while keeping the same element type for FX data
Conversion is performed using:
• rounding when the number of significant bits increases
• saturation when the number of significant bits decreases
This operation does not change tensor shape. It copies it from input to output.
This kernel can perform the computation in place, but only for conversions that do not increase the data size, so that it does not lead to undefined behavior. Therefore, output and input might point to exactly the same memory (but without a shift), except for the fx8 to fx16 conversion. In-place computation might affect performance on some platforms.
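As an illustration of the conversion semantics only (this is not the library's code; the interpretation here is that rounding applies when fractional bits are dropped and saturation applies when the result exceeds the output container's range):

```python
def convert_fx_value(value, in_frac_bits, out_frac_bits, out_container_bits):
    """Re-quantize one fixed-point integer to a new number of fractional bits,
    rounding when bits are dropped and saturating to the signed output range."""
    shift = out_frac_bits - in_frac_bits
    if shift >= 0:
        out = value << shift                           # gaining fractional bits
    else:
        out = (value + (1 << (-shift - 1))) >> -shift  # losing bits: round to nearest
    lo = -(1 << (out_container_bits - 1))
    hi = (1 << (out_container_bits - 1)) - 1
    return max(lo, min(hi, out))                       # saturate to the container

# e.g. an fx16 value with 12 fractional bits converted to fx8 with 4 fractional bits:
print(convert_fx_value(4096, in_frac_bits=12, out_frac_bits=4, out_container_bits=8))  # 16
```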
## Kernel Interface¶
### Prototype¶
mli_status mli_hlp_convert_tensor(
mli_tensor *in,
mli_tensor *out
);
### Parameters¶
Kernel Interface Parameters
Parameters Description
in [IN] Pointer to input tensor
out [OUT] Pointer to output tensor
Returns - status code
## Conditions for Applying the Function¶
• Input must be a valid tensor (see mli_tensor Structure).
• Before processing, the output tensor must contain a valid pointer to a buffer with sufficient capacity for storing the result (that is, the total number of elements in the input tensor).
• The output tensor also must contain valid element type and its parameter (el_params.fx.frac_bits).
• Before processing, the output tensor does not have to contain valid shape and rank - they are copied from input tensor.
# American Institute of Mathematical Sciences
November 2015, 9(4): 1139-1169. doi: 10.3934/ipi.2015.9.1139
## Bilevel optimization for calibrating point spread functions in blind deconvolution
1 Department of Mathematics, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany, Germany
Received October 2014 Revised March 2015 Published October 2015
Blind deconvolution problems arise in many imaging modalities, where both the underlying point spread function, which parameterizes the convolution operator, and the source image need to be identified. In this work, a novel bilevel optimization approach to blind deconvolution is proposed. The lower-level problem refers to the minimization of a total-variation model, as is typically done in non-blind image deconvolution. The upper-level objective takes into account additional statistical information depending on the particular imaging modality. Bilevel problems of such type are investigated systematically. Analytical properties of the lower-level solution mapping are established based on Robinson's strong regularity condition. Furthermore, several stationarity conditions are derived from the variational geometry induced by the lower-level problem. Numerically, a projected-gradient-type method is employed to obtain a Clarke-type stationary point and its convergence properties are analyzed. We also implement an efficient version of the proposed algorithm and test it through the experiments on point spread function calibration and multiframe blind deconvolution.
Citation: Michael Hintermüller, Tao Wu. Bilevel optimization for calibrating point spread functions in blind deconvolution. Inverse Problems & Imaging, 2015, 9 (4) : 1139-1169. doi: 10.3934/ipi.2015.9.1139
Data for Example 6.9
Haptoglo
## Format
A data frame/tibble with eight observations on one variable
concent
haptoglobin concentration (in grams per liter)
## References
Kitchens, L. J. (2003) Basic Statistics and Data Analysis. Pacific Grove, CA: Brooks/Cole, a division of Thomson Learning.
## Examples
shapiro.test(Haptoglo$concent)
#>
#>  Shapiro-Wilk normality test
#>
#> data:  Haptoglo$concent
#> W = 0.93818, p-value = 0.5932
#>
t.test(Haptoglo$concent, mu = 2, alternative = "less")
#>
#>  One Sample t-test
#>
#> data:  Haptoglo$concent
#> t = -0.58143, df = 7, p-value = 0.2896
#> alternative hypothesis: true mean is less than 2
#> 95 percent confidence interval:
#> -Inf 2.595669
#> sample estimates:
#> mean of x
#> 1.73625
#>
# Introduction
## Background and Motivation
Heart disease has been the leading cause of death in the world for the last twenty years, so it is of great importance to look for ways to prevent it. In this project, funduscopy images of the retinas of tens of thousands of participants, collected by the UK Biobank, together with a dataset of biologically relevant variables measured on those images, are used for two different purposes.
An image of retina being taken by funduscopy.
First, GWAS analysis of some of the variables in the dataset allows us to look at their concrete importance in the genome. Second, the dataset was used as a means of refining the selection of retinal images so that they could be fed to a classification model (DenseNet) that outputs a prediction of hypertension. A key point associated with both of these analyses, especially the classification part, is that mathematically adequate data cleaning should improve the relevant GWAS p-values or the accuracy of the hypertension prediction.
## Data cleaning process
The data has been collected from the UK biobank and consists of :
1. Retina images of the left eye, the right eye, or both eyes of each participant. In addition, a few hundred participants have had replica images of either their left or right eye taken.
2. A 92366x47 dataset whose rows correspond to individual left or right retina images. Columns refer to biologically relevant variables previously measured on those images.
The cleaning process has involved :
1. Removing 15 variables on the recommendation of the assistants and dividing the dataset into two: one (of size 78254x32) containing only participants who had both their left (labelled "L") and right (labelled "R") eyes imaged and nothing else, and the other (of size 464x32) containing each replica (labelled "1") image alongside its original (labelled "0").
2. For every participant and every variable, in both datasets: applying $\delta = \frac{|L-R|}{L+R}$ to the left-right dataset and $\delta = \frac{|0-1|}{0+1}$ to the original-replica dataset. This delta computes the relative distance between either L and R, or 0 and 1 (a minimal sketch of this computation is given below).
3. Computing the t-test and Cohen's d (the effect size) between each pair of corresponding variables in the two datasets and removing the 5 variables with significant p-values after Bonferroni correction for 32 tests. This was done because, for the classification model to predict hypertension, it is better for input images of left and right eyes not to have striking differences between them; otherwise the model could lose accuracy by accounting for these supplementary differences instead of focusing on the overall structure of the images it analyses. We can check whether a variable has a high left-right difference by comparing it to the corresponding original-replica difference: if a variable has a low left-right difference (a low delta(L, R)), its delta(L, R) distribution should be distributed similarly to the corresponding delta(0, 1) distribution, because a replica has by definition no other difference from its original than the technical variability related to the way it was captured.
The classification then used the 39127x27 cleaned and delta(L, R)-transformed dataset for the selection of its images.
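A minimal sketch of the delta computation and of the image-selection step used later for the classifier (the column layout, the _L/_R suffixes, and the "FD_all" name are assumptions for illustration, not the project's actual code):

```python
import pandas as pd

def relative_delta(a, b):
    """delta = |a - b| / (a + b): relative distance between the left/right
    (or original/replica) measurements of one variable."""
    return (a - b).abs() / (a + b)

def delta_table(df, variables):
    """One delta column per variable, assuming columns named <var>_L and <var>_R."""
    return pd.DataFrame({v: relative_delta(df[f"{v}_L"], df[f"{v}_R"]) for v in variables})

def keep_lowest_delta(deltas, variable="FD_all", fraction=0.9):
    """Indices of the participants with the smallest delta for one variable,
    e.g. the 90% of images retained for the classifier."""
    cutoff = deltas[variable].quantile(fraction)
    return deltas.index[deltas[variable] <= cutoff]
```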
# Deep Learning Model
This section focused on using the previously defined delta variable to sort the images used as input for the classifier. A CNN model was built by the CBG to predict hypertension from retina fundus images. We wished to improve the predictions by reducing technical error in the input images. The statistical tests performed in the first part allow us to select the variable for which delta(L, R) can be used as an approximation of technical error (that is, of delta(0, 1)), i.e. to select the variable with the smallest difference between delta(L, R) and delta(0, 1).
The delta values for the "FD_all" variable were used here to discriminate between participants: participants with the highest delta values were excluded. We ran the model with 10 different sets of images, retaining 90%, 80%, 70%, 60% and 50% of the images, once selected using the delta values and once selected at random for comparison.
## Results
The ROC and training accuracy curves were extracted after every run. The shape of both curves didn't change much from run to run, but notable changes in AUROC were observed.
The AUC values for the different sets of images seem to follow a general trend: precision decreases as the dataset size decreases for the randomly selected images, and increases when the images are selected using delta.
However, the inherent variation in AUC results from run to run makes it hard to draw conclusions from such little data. Running the model at least thrice with each set of images would allow us to get a much clearer idea of what is actually happening, and to do statistical tests.
# GWAS
The goal of the GWAS was to investigate whether the asymmetry of the eyes could have genetic origins. The variables with the largest left-right difference were selected: fractal dimension and tortuosity. The phenotype for the GWAS was the delta (delta = |L-R|/(L+R)) of fractal dimension and tortuosity. That way, we would be able to identify genes responsible for asymmetry in these variables. Two rounds of GWAS were made: the first had approximately 40'000 subjects and the second approximately 50'000 subjects.
## Results
The results were not significant. Only one GWAS was very slightly significant: the fractal dimension with the larger set of participants (indicated by a red circle).
In the event of the GWAS showing a significant peak, we could have investigated the part of the genome associated with it by looking up the reference SNP cluster ID (rSID) in NCBI. We could then have identified genes associated with fractal dimension asymmetry in the eyes.
# Brownian motion ish
1. Sep 13, 2012
### sjweinberg
Suppose I have a large particle of mass $M$ that is randomly emitting small particles. The magnitude of the momenta of the small particles is $\delta p$ (and it is equal for all of them). Each particle is launched in a random direction (in 3 spatial dimensions, although we can work with 1 dimension if it's much easier). Assume also that these particles are emitted at a uniform rate, with time $\delta t$ between emissions.
So here's my issue. It seems to me that this is a random walk in momentum space. What I would like to know is how to estimate the displacement of the particle after $N$ particles are pooped out. Thus, I need some way to "integrate the velocity".
However, I want to stress that I only care about an order of magnitude estimate of the displacement here. Has anyone dealt with this kind of a situation?
I appreciate any help greatly!
2. Sep 13, 2012
### ImaLooser
We have the sum of N independent identically distributed random variables, so this is going to converge to a Gaussian very quickly, that is with N>30 or so. The momentum will follow a 3-D Gaussian with mean of zero; that has got to be available somewhere. (The magnitude of a 2-D Gaussian vector is Rayleigh distributed.)
The 1D case will be a binomial distribution that converges to a Gaussian.
3. Sep 13, 2012
### sjweinberg
I am aware that the momentum distribution will converge to a Gaussian of width $\sim \sqrt{N} \delta p$. However, do you know what this will mean for the position distribution? In other words, I am really interested in the distribution of the quantity $\sum_{i} p(t_{i})$ where the sum is taken over time steps for the random walk.
My concern is that even though $p$ is expected to be $\sim \sqrt{N} \delta p$ at the end of the walk, I think that the sum may "accelerate" away from the origin because $p$ drifts from its origin.
4. Sep 14, 2012
### Staff: Mentor
From a dimensional analysis: $\overline{|x|}=c~ \delta t~\delta v~ N^\alpha$
A quick simulation indicates $\alpha \approx 1.5$ and $c\approx 1/2$ in the 1-dimensional case. In 3 dimensions, c might be different, while alpha should stay the same.
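A minimal sketch of such a 1-D simulation (illustrative only; the time step, kick size and trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_displacement(N, trials=2000, dt=1.0, dv=1.0):
    """Estimate E|x| after N emissions: each emission kicks the velocity by
    +/- dv with equal probability, and x accumulates v*dt between kicks."""
    kicks = rng.choice([-dv, dv], size=(trials, N))
    v = np.cumsum(kicks, axis=1)       # velocity after each emission
    x = v.sum(axis=1) * dt             # x = sum_i v(t_i) * dt
    return np.abs(x).mean()

for N in (100, 400, 1600):
    print(N, mean_abs_displacement(N))
# The log-log slope of E|x| versus N comes out near alpha = 1.5,
# with a prefactor c of roughly 1/2.
```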
5. Sep 14, 2012
### sjweinberg
Thanks for the help. In fact, your estimation of $\alpha = \frac{3}{2}$ is the same thing I estimated with the following sketchy method:
Let $n(t) = \frac{t}{\delta t}$ be the number of particles emitted after time $t$. Then, the speed of the large particle at time $t$ can be estimated as $\frac{\delta p \sqrt{n(t)}}{M} = \frac{\delta p }{M} \sqrt{\frac{t}{\delta t}}$.
Then $\left| x(t) \right| \sim \int_{0}^{t} \left| v(t) \right| dt \sim \delta t \delta v \left(\frac{t}{\delta t}\right)^{3/2}$.
I feel that this estimate is probably an overestimate which is where your $c \sim 1/2$ may come from.
Thanks again.
# Generalised Lyndon-Schützenberger Equations
We fully characterise the solutions of the generalised Lyndon-Schützenberger word equations $$u_1 \cdots u_\ell = v_1 \cdots v_m w_1 \cdots w_n$$, where $$u_i \in \{u, \theta(u)\}$$ for all $$1 \leq i \leq \ell$$, $$v_j \in \{v, \theta(v)\}$$ for all $$1 \leq j \leq m$$, $$w_k \in \{w, \theta(w)\}$$ for all $$1 \leq k \leq n$$, and $$\theta$$ is an antimorphic involution. More precisely, we show for which $$\ell$$, $$m$$, and $$n$$ such an equation has only $$\theta$$-periodic solutions, i.e., $$u$$, $$v$$, and $$w$$ are in $$\{t, \theta(t)\}^\ast$$ for some word $$t$$, closing an open problem by Czeizler et al. (2011).
# Pls help me i forgot how to do this...
Selling Price = $70
Rate of Sales Tax = 6%
What is the sales tax? What is the total price?
My options (you can choose more than one):
[ ] $4.00
[ ] $4.02
[ ] $4.20
[ ] $420
[ ] $490
[ ] $74
[ ] $74.02
[ ] $74.20
Can someone tell me how to do this because I forgot and I have a review and a test.
Mar 15, 2021
### 1 Answer
#1
6% is just $$\frac{6}{100}$$ = 0.06
To get the sales tax, you take the product of 70 and 0.06: 70 × 0.06 = $4.20
Then the total price is just 70 + 4.20 (since it's a tax and not a discount), which gives you $74.20.
Mar 15, 2021
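The same arithmetic as a quick check (purely illustrative):

```python
price = 70.00
tax_rate = 6 / 100                 # 6% written as a decimal

sales_tax = price * tax_rate       # 4.20
total_price = price + sales_tax    # 74.20

print(f"sales tax:   ${sales_tax:.2f}")
print(f"total price: ${total_price:.2f}")
```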
Parabolic paths vs Elliptical paths.
1. Oct 1, 2005
mprm86
We are always taught that a projectile describes a parabolic path (neglecting air resistance), but the path is actually elliptical. So, my question is this: A projectile is thrown in point A (on the ground), it reaches a maximum height H, and it finally falls in point B (same height as A, that is, the ground). Which will be the difference between the paths if (a) it is elliptical, and (b) it is parabolic? Any ideas, suggestions?
P.S. The answer I'm looking for is an order-of-magnitude figure, something like 1 part in a million.
2. Oct 1, 2005
mathman
Where did you get this idea? The path is parabolic. You can approximate a parabola by an ellipse as close as you want by simply moving the foci farther apart. The parabola can be looked at as the limit as the separation becomes infinite.
Added note: You may have a point since the earth is not flat. The distant focus will be the center of the earth.
3. Oct 1, 2005
rcgldr
The classic parabolic path assumes a flat earth.
If the projectile travels below escape velocity, the path is elliptical.
If the projectile travels exactly at escape velocity, the path is parabolic.
If the projectile travels faster than escape velocity, the path is hyperbolic.
A link for some formulas (go to orbital mechanics page)
http://www.braeunig.us/space
4. Oct 2, 2005
Galileo
What's responsible for an elliptic path (if v< v_escape) is not the curvature of the earth, but the variation of the gravitational force with height.
You could solve Newton's law under an inverse-square force field to find the actual path. The variation of g with height is very small to take into consideration when throwing stuff in the air, though. (Air resistance is WAY more dominant.)
5. Oct 2, 2005
rcgldr
No one mentioned curvature of the earth in this thread. My reference to a parabola being correct for flat earth was a reference to treating gravity as being effectively generated from a flat plane instead of effectively from a point source (in which case you get an elliptical path).
A transformation takes a basic function and changes it slightly with predetermined methods. This change will cause the graph of the function to move, shift, or stretch, depending on the type of transformation. The goal here is to determine whether a given transformation is an example of translation, scaling, rotation, or reflection, and to manipulate functions so that they are translated vertically and horizontally.

A translation is a function that moves every point a constant distance in a specified direction; it can be interpreted as shifting the origin of the coordinate system. A translation of a function is a shift in one or more directions, and it is represented by adding or subtracting from either $y$ or $x$. In general, a vertical translation is given by the equation $y = f(x) + b$, where $f(x)$ is some given function and $b$ is the constant that we are adding to cause the translation; the movement is caused by the addition or subtraction of a constant from the function. For example, the function $f(x)=x^2$ can be translated both up and down by two. Using the same basic quadratic function to look at horizontal translations, shifting the function to the left by two produces the equation $y = f(x+2) = (x+2)^2$.

Reflections produce a mirror image of a function. For this section we will focus on the two axes and the line $y=x$. A vertical reflection is given by the equation $y = -f(x)$ and results in the curve being "reflected" across the $x$-axis; the result is that the curve becomes flipped over the $x$-axis, as when the function $y=x^2$ is reflected over the $x$-axis. A horizontal reflection is a reflection across the $y$-axis, given by the equation $y = f(-x)$: all $x$ values are switched to their negative counterparts while the $y$ values remain the same. Consider an example where the original function is $y = (x-2)^2$; the horizontal reflection produces the equation $y = f(-x) = (-x-2)^2$. Likewise, the mirror image of $f(x) = x^5$ across the $y$-axis would be $f(-x) = -x^5$. A function can also be reflected over the line $y=x$, as when the function $y=x^2$ is reflected over that line.

When either $f(x)$ or $x$ is multiplied by a number, functions can "stretch" or "shrink" vertically or horizontally, respectively, when graphed. In general, a vertical stretch is given by the equation $y = bf(x)$: if $b$ is greater than one the function will undergo vertical stretching, and if $b$ is less than one the function will undergo vertical shrinking, so for $b>1$ the graph stretches with respect to the $y$-axis. Multiplying the entire function $f(x)$ by a constant greater than one causes all the $y$ values of the equation to increase, which leads to a "stretched" appearance in the vertical direction; the graph physically gets "taller", with every point on the graph of the original function being multiplied (by two, say). If the function $f(x)$ is multiplied by a value less than one, all the $y$ values of the equation will decrease, leading to a "shrunken" appearance in the vertical direction. In general, a horizontal stretch is given by the equation $y = f(cx)$: if $c$ is greater than one the function will undergo horizontal shrinking (a "shrunken" appearance in the horizontal direction), and if $c$ is less than one the function will undergo horizontal stretching. As an example, to induce horizontal shrinking of a sinusoidal function, the new function becomes $y = f(3x) = \sin(3x)$.

A rotation is a transformation that is performed by "spinning" the object around a fixed point known as the center of rotation. Although the concept is simple, it has the most advanced mathematical process of the transformations discussed. If we rotate the function $y=x^2$ by 90 degrees, the new function reads $x\sin(\tfrac{\pi}{2}) + y\cos(\tfrac{\pi}{2}) = [x\cos(\tfrac{\pi}{2}) - y\sin(\tfrac{\pi}{2})]^2$.

To graph a parabola such as $y=x^2$ by hand, make a two-column table and put the value of the axis of symmetry (here $x = 0$, which is the $y$-axis of the coordinate plane) in the middle of the table. You should include at least two values above and below the middle value for $x$ in the table for the sake of symmetry. Calculate the corresponding values for $y$ or $f(x)$ and record each value of $y$; that gives you a point to use when graphing the parabola. Points on $y=x^2$ include $(-1, 1)$, $(1, 1)$, $(-2, 4)$, and $(2, 4)$. Keep in mind the U-shape of the parabola and put arrows at the ends of the curve. Parabolas are also symmetrical, which means they can be folded along a line so that all of the points on one side of the fold line coincide with the corresponding points on the other side. If a parabola is "given," that implies that its equation is provided, and you can shift a parabola based on its equation; to stretch or shrink the graph in the $x$ direction, divide or multiply the input by a constant.
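A small numeric sketch of these transformations, using $f(x) = x^2$ (illustrative only):

```python
import numpy as np

def f(x):
    return x ** 2

x = np.linspace(-3, 3, 7)

shifted_up   = f(x) + 2    # vertical translation:    y = f(x) + b
shifted_left = f(x + 2)    # horizontal translation:  y = f(x + 2) shifts left by two
reflected_x  = -f(x)       # vertical reflection across the x-axis:   y = -f(x)
reflected_y  = f(-x)       # horizontal reflection across the y-axis: y = f(-x)
stretched    = 3 * f(x)    # vertical stretch:   y = b*f(x) with b > 1
shrunk       = f(3 * x)    # horizontal shrink:  y = f(c*x) with c > 1
```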
# Almost Consecutive
Calculus Level 5
$1 + \frac {4}{6} + \frac {4 \cdot 5}{6\cdot 9}+ \frac {4 \cdot 5\cdot 6}{6\cdot 9\cdot 12} + \frac {4 \cdot 5\cdot 6 \cdot 7}{6\cdot 9\cdot 12 \cdot 15} + \ldots$
If the value of the series above is in the form of $\frac{a}{b}$ where $a, b$ are coprime positive integers, what is the value of $a + b$?
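A quick numerical check (my own sketch, not part of the problem statement): the terms after the leading $1$ satisfy $a_{n+1} = a_n \cdot \frac{n+4}{3n+6}$ with $a_0 = 1$, so the partial sums can be accumulated exactly; they appear to approach $2.375 = \frac{19}{8}$.

```python
# Accumulate the series term by term using exact fractions (illustrative check).
from fractions import Fraction

term, total = Fraction(1), Fraction(0)
for n in range(60):
    total += term                          # add a_n
    term *= Fraction(n + 4, 3 * n + 6)     # a_{n+1} = a_n * (n+4) / (3n+6)
print(float(total))                        # ~2.375, suggesting the sum is 19/8
```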
# Publications Citing Sage
Below is a list of publications that cite Sage and/or the Sage cluster. This list is also available in BibTeX format. The publications listed in each section are sorted in chronological order. Where two or more items are published in the same year, these items are sorted alphabetically by the authors' last names. See the section Citing Sage for information on how to cite Sage and/or the Sage cluster in your publications.
## Articles
1. William Stein and David Joyner. SAGE: System for Algebra and Geometry Experimentation. ACM SIGSAM Bulletin, volume 39, number 2, pages 61--64, 2005.
2. Timothy Brock. Linear Feedback Shift Registers and Cyclic Codes in SAGE. Rose-Hulman Undergraduate Mathematics Journal, volume 7, number 2, 2006.
3. John Cremona. The Elliptic Curve Database for Conductors to 130000. In Florian Hess, Sebastian Pauli, and Michael Pohst (ed.). ANTS VII: Proceedings of the 7th International Symposium on Algorithmic Number Theory. Springer, Lecture Notes in Computer Science, volume 4076, pages 11--29, 2006.
4. David Joyner and Amy Ksir. Automorphism Groups of Some AG codes. IEEE Transactions on Information Theory, volume 52, pages 3325--3329, 2006.
5. Barry Mazur. Controlling Our Errors. Nature, volume 443, number 7, pages 38--40, 2006.
6. Jaap Spies. Dancing School Problems. Nieuw Archief voor Wiskunde, volume 5/7, number 4, pages 283--285, 2006.
7. Baur Bektemirov, Barry Mazur, William Stein, and Mark Watkins. Average Ranks of Elliptic Curves: Tension Between Data and Conjecture. Bulletin of the AMS, volume 44, number 2, pages 233--254, 2007.
8. Dan Boneh, Craig Gentry, and Michael Hamburg. Space-Efficient Identity Based Encryption Without Pairings. FOCS 2007: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science. IEEE Computer Society, pages 647--657, 2007.
9. Dragos Cvetkovic and Jason Grout. Graphs with Extremal Energy Should Have a Small Number of Distinct Eigenvalues. Bull. Acad. Serbe Sci. Arts, Cl. Sci. Math. Natur., Sci. Math., volume 134, number 32, pages 43--57, 2007.
10. David Harvey. Kedlaya's Algorithm in Larger Characteristic. International Mathematics Research Notices, volume 2007, number rnm095, pages rnm095--29, 2007.
11. David Joyner and William Stein. Open Source Mathematical Software. Notices of the AMS, volume 54, number 10, pages 1279, 2007.
12. Chris A. Kurth and Ling Long. Computations with Finite Index Subgroups of PSL_2(Z) Using Farey Symbols. In K. P. Shum, E. Zelmanov, Jiping Zhang, and Li Shangzhi (ed.). Proceedings of the Second International Congress in Algebra and Combinatorics. World Scientific, pages 225--242, 2007.
13. Martin Albrecht. Algebraic Attacks on the Courtois Toy Cipher. Cryptologia, volume 32, number 3, pages 220--276, 2008.
14. Daniel J. Bernstein, Tanja Lange, and Reza Rezaeian Farashahi. Binary Edwards Curves. In Elisabeth Oswald and Pankaj Rohatgi (ed.). CHES 2008: Proceedings of the 10th International Workshop on Cryptographic Hardware and Embedded Systems. Springer, Lecture Notes in Computer Science, volume 5154, pages 244--265, 2008.
15. Daniel J. Bernstein, Peter Birkner, Marc Joye, Tanja Lange, and Christiane Peters. Twisted Edwards Curves. In Serge Vaudenay (ed.). AFRICACRYPT 2008: First International Conference on Cryptology in Africa. Springer, Lecture Notes in Computer Science, volume 5023, pages 389--405, 2008.
16. Michael Gray. Sage: A New Mathematics Software System. Computing in Science & Engineering, volume 10, number 6, pages 72--75, 2008.
17. Carlo Hämäläinen. Partitioning 3-homogeneous latin bitrades. Geometriae Dedicata, volume 133, number 1, pages 181--193, 2008.
18. David Harvey. Efficient Computation of p-adic Heights. LMS Journal of Computation and Mathematics, volume 11, pages 40--59, 2008.
19. David Joyner. On Quadratic Residue Codes and Hyperelliptic Curves. Discrete Mathematics & Theoretical Computer Science, volume 10, number 1, pages 129--146, 2008.
20. David Joyner and David Kohel. Group Theory in SAGE. In Luise-Charlotte Kappe, Arturo Magidin, and Robert Morse (ed.). Computational Group Theory and the Theory of Groups. American Mathematical Society, pages 115--140, 2008.
21. David Joyner. A Primer on Computational Group Homology and Cohomology Using GAP and SAGE. In Benjamin Fine, Gerhard Rosenberger, and Dennis Spellman (ed.). Proceedings of the Gaglione Conference on Aspects of Infinite Groups. World Scientific, pages 159--191, 2008.
22. Daniel Kane and Steven Sivek. On the S_n-modules Generated by Partitions of a Given Shape. Electronic Journal of Combinatorics, volume 15, number 1, 2008.
23. Kiran S. Kedlaya. Search Techniques for Root-Unitary Polynomials. In Kristin E. Lauter and Kenneth A. Ribet (ed.). Computational Arithmetic Geometry. American Mathematical Society, pages 71--81, 2008.
24. Kiran S. Kedlaya and Andrew V. Sutherland. Computing L-Series of Hyperelliptic Curves. In Alfred J. van der Poorten and Andreas Stein (ed.). Proceedings of the 8th International Symposium on Algorithmic Number Theory. Springer, Lecture Notes in Computer Science, volume 5011, pages 312--326, 2008.
25. Koopa Koo, William Stein, and Gabor Wiese. On the Generation of the Coefficient Field of a Newform by a Single Hecke Eigenvalue. Journal de Théorie des Nombres de Bordeaux, volume 20, number 2, pages 373--384, 2008.
26. David Loeffler. Explicit Calculations of Automorphic Forms for Definite Unitary Groups. LMS Journal of Computation and Mathematics, volume 11, pages 326--342, 2008.
27. Subhamoy Maitra and Santanu Sarkar. Revisiting Wiener's Attack -- New Weak Keys in RSA. In Tzong-ChenWu, Chin-Laung Lei, Vincent Rijmen, and Der-Tsai Lee (ed.). Proceedings of the 11th International Conference on Information Security. Springer, Lecture Notes in Computer Science, volume 5222, pages 228--243, 2008.
28. Subhamoy Maitra and Santanu Sarkar. A New Class of Weak Encryption Exponents in RSA. In Dipanwita Roy Chowdhury, Vincent Rijmen, and Abhijit Das (ed.). Proceedings of the 9th International Conference on Cryptology in India Kharagpur. Springer, Lecture Notes in Computer Science, volume 5365, pages 337--349, 2008.
29. Barry Mazur. Finding Meaning in Error Terms. Bulletin of the American Mathematical Society, volume 45, number 2, pages 185--228, 2008.
30. Jonathan Sondow and Kyle Schalm. Which Partial Sums of the Taylor Series for e are convergents to e? (and a Link to the Primes 2, 5, 13, 37, 463). In T. Amdeberhan and V. Moll (ed.). Tapas in Experimental Mathematics. American Mathematical Society, pages 273--284, 2008.
31. William Stein. Can We Create A Viable Free Open Source Alternative to Magma, Maple, Mathematica and Matlab?. In J. Rafael Sendra and Laureano González-Vega (ed.). ISSAC 2008: Proceedings of the International Symposium on Symbolic and Algebraic Computation. Association for Computing Machinery, pages 5--6, 2008.
32. John Voight. Enumeration of Totally Real Number Fields of Bounded Root Discriminant. In Alfred J. van der Poorten and Andreas Stein (ed.). Proceedings of the 8th International Symposium on Algorithmic Number Theory. Springer, Lecture Notes in Computer Science, volume 5011, pages 268--281, 2008.
33. Martin Albrecht and Carlos Cid. Algebraic Techniques in Differential Cryptanalysis. In Orr Dunkelman (ed.). FSE 2009: Proceedings of the 16th International Workshop on Fast Software Encryption. Springer, Lecture Notes in Computer Science, volume 5665, pages 193--208, 2009.
34. Martin Albrecht, Craig Gentry, Shai Halevi, and Jonathan Katz. Attacking Cryptographic Schemes Based on "Perturbation Polynomials". In Ehab Al-Shaer, Somesh Jha, and Angelos D. Keromytis (ed.). CCS 2009: Proceedings of the 2009 ACM Conference on Computer and Communications Security. Association for Computing Machinery, pages 1--10, 2009.
35. Jason Bandlow, Anne Schilling, and Nicolas M. Thiéry. On the uniqueness of promotion operators on tensor products of type A crystals. Journal of Algebraic Combinatorics, volume 31, number 2, pages 217--251, 2009.
36. Wayne Barrett, Jason Grout, and Raphael Loewy. The Minimum Rank Problem Over the Finite Field of Order 2: Minimum Rank 3. Linear Algebra and its Applications, volume 430, number 4, pages 890--923, 2009.
37. Robert A. Beezer. Sage (Version 3.4). SIAM Review, volume 51, number 4, pages 785--807, 2009.
38. Stefan Behnel, Robert W. Bradshaw, and Dag Sverre Seljebotn. Cython Tutorial. In Gaël Varoquaux, Stéfan van der Walt, and K. Jarrod Millman (ed.). SciPy 2009: Proceedings of the 8th Python in Science Conference. Pages 4--14, 2009.
39. Simon R. Blackburn, Carlos Cid, and Ciaran Mullan. Cryptanalysis of the MST_3 Public Key Cryptosystem. Journal of Mathematical Cryptology, volume 3, number 4, pages 321--338, 2009.
40. Jens-Matthias Bohli, Alban Hessler, Osman Ugus, and Dirk Westhoff. Security Enhanced Multi-Hop over the Air Reprogramming with Fountain Codes. LCN 2009: Proceedings of the 34th Annual IEEE Conference on Local Computer Networks. IEEE, pages 850--857, 2009.
41. Nicholas J. Cavenagh, Carlo Hämäläinen, and Adrian M. Nelson. On Completing Three Cyclically Generated Transversals to a Latin Square. Finite Fields and Their Applications, volume 15, number 3, pages 294--303, 2009.
42. Jean-Sebastien Coron, David Naccache, Mehdi Tibouchi, and Ralf-Philipp Weinmann. Practical Cryptanalysis of ISO/IEC 9796-2 and EMV Signatures. In Shai Halevi (ed.). Advanced in Cryptology -- CRYPTO 2009. Springer, Lecture Notes in Computer Science, volume 5677, pages 428--444, 2009.
43. Jean-Sébastien Coron, Antoine Joux, Ilya Kizhvatov, David Naccache, and Pascal Paillier. Fault Attacks on RSA Signatures with Partially Unknown Messages. In Christophe Clavier and Kris Gaj (ed.). CHES. Springer, Lecture Notes in Computer Science, volume 5747, pages 444--456, 2009.
44. Brian D'Urso. Multiprocess System for Virtual Instruments in Python. In Gaël Varoquaux, Stéfan van der Walt, and K. Jarrod Millman (ed.). SciPy 2009: Proceedings of the 8th Python in Science Conference. Pages 76--80, 2009.
45. Luz M. DeAlba, Jason Grout, Leslie Hogben, Rana Mikkelson, and Kaela Rasmussen. Universally optimal matrices and field independence of the minimum rank of a graph. The Electronic Journal of Linear Algebra, volume 18, number 7, pages 403--419, 2009.
46. Dan Drake and Jang Soo Kim. k-distant Crossings and Nestings of Matchings and Partitions. Proceedings of the 21st International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AK, pages 351--362, 2009.
47. Ghislain Fourier, Masato Okado, and Anne Schilling. Kirillov-Reshetikhin crystals for nonexceptional types. Advances in Mathematics, volume 222, number 3, pages 1080--1116, 2009.
48. César A. García-Vázquez and Carlos A. Lopez-Andrade. D-Heaps as Hash Tables for Vectors over a Finite Ring. WRI World Congress on Computer Science and Information Engineering, volume 3, pages 162--166, 2009.
49. Grigor Grigorov, Andrei Jorza, Stefan Patrikis, William A. Stein, and Corina Tarnita. Computational Verification of the Birch and Swinnerton-Dyer Conjecture for Individual Elliptic Curves. Mathematics of Computation, volume 78, pages 2397--2425, 2009.
50. David Harvey. A cache-friendly truncated FFT. Theoretical Computer Science, volume 410, number 27-29, pages 2649--2658, 2009.
51. Patrick Ingram. Multiples of Integral Points on Elliptic Curves. Journal of Number Theory, volume 129, number 1, pages 182--208, 2009.
52. Benjamin F. Jones. Singular Chern Classes of Schubert Varieties via Small Resolution. International Mathematics Research Notices, volume 2010, number 8, pages 1371--1416, 2009.
53. Dag Sverre Seljebotn. Fast Numerical Computations with Cython. In Gaël Varoquaux, Stéfan van der Walt, and K. Jarrod Millman (ed.). SciPy 2009: Proceedings of the 8th Python in Science Conference. Pages 15--22, 2009.
54. Pavel Solin, Ondrej Certik, and Sameer Regmi. The FEMhub Project and Classroom Teaching of Numerical Methods. In Gaël Varoquaux, Stéfan van der Walt, and K. Jarrod Millman (ed.). SciPy 2009: Proceedings of the 8th Python in Science Conference. Pages 58--61, 2009.
55. Vesselin Velichkov, Vincent Rijmen, and Bart Preneel. Algebraic Cryptanalysis of a Small-Scale Version of Stream Cipher LEX. Proceedings of the 30th Symposium on Information Theory in the Benelux. Werkgemeenschap voor Informatieen Communicatietheorie, 2009.
56. Martin Albrecht, Gregory Bard, and William Hart. Algorithm 898: Efficient Multiplication of Dense Matrices over GF(2). ACM Transactions on Mathematical Software, volume 37, number 1, pages 1--14, 2010.
57. Martin Albrecht, Carlos Cid, Thomas Dullien, Jean-Charles Faugère, and Ludovic Perret. Algebraic Precomputations in Differential and Integral Cryptanalysis. In Xuejia Lai, Moti Yung, and Dongdai Lin (ed.). Inscrypt. Springer, pages 387--403, 2010.
58. C. Bernard and E. D. Freeland. Electromagnetic Corrections in Staggered Chiral Perturbation Theory. The XXVIII International Symposium on Lattice Field Theory, June 14--19, Villasimius, Sardinia, Italy. 2010.
59. Eric Brier, Jean-Sébastien Coron, Thomas Icart, David Madore, Hugues Randriam, and Mehdi Tibouchi. Efficient Indifferentiable Hashing into Ordinary Elliptic Curves. In Tal Rabin (ed.). CRYPTO. Springer, Lecture Notes in Computer Science, volume 6223, pages 237--254, 2010.
60. Oscar Castillo-Felisola and Ivan Schmidt. Localization of fermions in double thick D-branes. Physical Review D, volume 82, number 12, pages 124062, 2010.
61. Craig Citro and Alexandru Ghitza. Enumerating Galois representations in Sage. In Komei Fukuda, Joris Hoeven, Michael Joswig, and Nobuki Takayama (ed.). Mathematical Software -- ICMS 2010. Springer, pages 256--259, 2010.
62. Jean-Sébastien Coron, David Naccache, and Mehdi Tibouchi. Fault Attacks Against emv Signatures. In Josef Pieprzyk (ed.). CT-RSA. Springer, Lecture Notes in Computer Science, volume 5985, pages 208--220, 2010.
63. Laura DeLoss, Jason Grout, Leslie Hogben, Tracy McKay, and Jason Smith. Techniques for determining the minimum rank of a small graph. Linear Algebra and its Applications, volume 432, number 11, pages 2995--3001, 2010.
64. Tom Denton. A combinatorial formula for orthogonal idempotents in the 0-Hecke algebra of S_N. Proceedings of the 22nd International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AN, pages 701--712, 2010.
65. Dan Drake. Bijections from Weighted Dyck Paths to Schröder Paths. Journal of Integer Sequences, volume 13, number 9, pages 10.9.2, 2010.
66. Nathan M. Dunfield and Dinakar Ramakrishnan. Increasing the number of fibered faces of arithmetic hyperbolic 3-manifolds. American Journal of Mathematics, volume 132, number 1, pages 53--97, 2010.
67. Christian Eder and John Perry. F5C: a variant of Faugère's F5 algorithm with reduced Gröbner bases. Journal of Symbolic Computation, volume 45, number 12, pages 1442--1458, 2010.
68. Graham Ellis and Simon A. King. Persistent homology of groups. Journal of Group Theory, volume 14, number 4, pages 575--587, 2010.
69. Ioannis Z. Emiris, Elias P. Tsigaridas, and Antonios Varvitsiotis. Algebraic Methods for Counting Euclidean Embeddings of Rigid Graphs. In David Eppstein and Emden R. Gansner (ed.). GD 2009: Proceedings of the 17th International Symposium on Graph Drawing. Springer, Lecture Notes in Computer Science, volume 5849, pages 195--200, 2010.
70. Burçin Eröcal and William Stein. The Sage Project: Unifying Free Mathematical Software to Create a Viable Alternative to Magma, Maple, Mathematica and Matlab. In Komei Fukuda, Joris van der Hoeven, Michael Joswig, and Nobuki Takayama (ed.). ICMS 2010: Proceedings of the Third International Congress on Mathematical Software. Springer, Lecture Notes in Computer Science, volume 6327, pages 12--27, 2010.
71. Ron Evans. Hypergeometric _3F_2(1/4) Evaluations over Finite Fields and Hecke Eigenforms. Proceedings of the American Mathematical Society, volume 138, pages 517--531, 2010.
72. Ghislain Fourier, Masato Okado, and Anne Schilling. Perfectness of Kirillov-Reshetikhin Crystals for Nonexceptional Types. Contemporary Mathematics, volume 506, pages 127--143, 2010.
73. Irene Garcia-Selfa, Enrique Gonzalez-Jimenez, and Jose M. Tornero. Galois theory, discriminants and torsion subgroup of elliptic curves. Journal of Pure and Applied Algebra, volume 214, number 8, pages 1340--1346, 2010.
74. Samuele Giraudo. Balanced binary trees in the Tamari lattice. FPSAC 2010: 22nd International Conference on Formal Power Series and Algebraic Combinatorics. DMTCS Proceedings, pages 725--736, 2010.
75. Enrique González-Jiménez and Jörn Steuding. Arithmetic progressions of four squares over quadratic fields. Publicationes Mathematicae Debrecen, volume 77, number 1-2, pages 125--138, 2010.
76. Suresh Govindarajan and K. Gopala Krishna. BKM Lie Superalgebras from Dyon Spectra in Z_N CHL Orbifolds for Composite N. Journal of High Energy Physics, volume 2010, number 5, pages 1--40, 2010.
77. Jason Grout. The Minimum Rank Problem Over Finite Fields. Electronic Journal of Linear Algebra, volume 20, number 11, pages 691--716, 2010.
78. Pierre Guillot. The Computation of Stiefel-Whitney Classes. Annales de l'institut Fourier, volume 60, number 2, pages 565--606, 2010.
79. David Harvey. A multimodular algorithm for computing Bernoulli numbers. Mathematics of Computation, volume 79, pages 2361--2370, 2010.
80. Christopher Hillar, Luis Garcia-Puente, Abraham Martin del Campo, James Ruffo, Zach Teitler, Stephen L. Johnson, and Frank Sottile. Experimentation at the Frontiers of Reality in Schubert Calculus. In Tewodros Amdeberhan, Luis A. Medina, and Victor H. Moll (ed.). Gems in Experimental Mathematics. American Mathematical Society, pages 364--380, 2010.
81. Florent Hivert, Anne Schilling, and Nicolas M. Thiéry. The biHecke Monoid of A Finite Coxeter Group. Proceedings of the 22nd International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AN, pages 307--318, 2010.
82. Kenneth L. Ho and Heather A. Harrington. Bistability in Apoptosis by Receptor Clustering. PLoS Computational Biology, volume 6, number 10, pages e1000956, 2010.
83. Kimberly Hopkins. Higher-Weight Heegner Points. Experimental Mathematics, volume 19, number 3, pages 257--266, 2010.
84. Niles Johnson and Justin Noel. For Complex Orientations Preserving Power Operations, p-typicality is Atypical. Topology and its Applications, volume 157, number 14, pages 2271--2288, 2010.
85. Brant Jones and Anne Schilling. Affine Structures and a Tableau Model for E_6 Crystals. Journal of Algebra, volume 324, number 9, pages 2512--2542, 2010.
86. Thomas Lam, Anne Schilling, and Mark Shimozono. K-theory Schubert Calculus of the Affine Grassmannian. Compositio Mathematica, volume 146, number 4, pages 811--852, 2010.
87. Jean-Christophe Novelli, Franco Saliola, and Jean-Yves Thibon. Representation theory of the higher-order peak algebras. Journal of Algebraic Combinatorics, volume 32, number 4, pages 465--495, 2010.
88. Clément Pernet and William Stein. Fast Computation of Hermite Normal Forms of Random Integer Matrices. Journal of Number Theory, volume 130, number 7, pages 1675--1683, 2010.
89. José Alejandro Lara Rodríguez. Some conjectures and results about multizeta values for F_q[t]. Journal of Number Theory, volume 130, number 4, pages 1013--1023, 2010.
90. Anne Schilling and Qiang Wang. Promotion Operator on Rigged Configurations of Type A. The Electronic Journal of Combinatorics, volume 17, number 1, pages R24, 2010.
91. Stanislaus J. Schymanski, Axel Kleidon, Marc Stieglitz, and Jatin Narula. Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems. Philosophical Transactions of the Royal Society B, volume 365, pages 1449--1455, 2010.
92. William Stein. Toward a Generalization of the Gross-Zagier Conjecture. International Mathematics Research Notices, volume 2010, number 24, pages rnq075v1--rnq075, 2010.
93. Martin Albrecht and Carlos Cid. Cold Boot Key Recovery by Solving Polynomial Systems with Noise. In Javier Lopez and Gene Tsudik (ed.). ACNS. Springer, pages 57--72, 2011.
94. Christophe Arène, Tanja Lange, Michael Naehrig, and Christophe Ritzenthaler. Faster computation of the Tate pairing. Journal of Number Theory, volume 131, number 5, pages 842--857, 2011.
95. Jason Bandlow, Anne Schilling, and Mike Zabrocki. The Murnaghan-Nakayama rule for k-Schur functions. Journal of Combinatorial Theory, Series A, volume 118, number 5, pages 1588--1607, 2011.
96. Jason Bandlow, Anne Schilling, and Mike Zabrocki. The Murnaghan-Nakayama rule for k-Schur functions. Proceedings of the 23rd International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AO, pages 99--110, 2011.
97. Sarah Berube and Karl-Dieter Crisman. Decomposition behavior in aggregated data sets. Mathematical Social Sciences, volume 61, number 1, pages 12--19, 2011.
98. Daniel Bump and Maki Nakasuji. Casselman's Basis of Iwahori Vectors and the Bruhat Order. Canadian Journal of Mathematics, volume 63, pages 1238--1253, 2011.
99. P. Butera and M. Pernici. Free energy in a magnetic field and the universal scaling equation of state for the three-dimensional Ising model. Physical Review B, volume 83, number 5, pages 054433, 2011.
100. Steve Butler and Jason Grout. A Construction of Cospectral Graphs for the Normalized Laplacian. The Electronic Journal of Combinatorics, volume 18, number 1, pages P231, 2011.
101. Jean-Sébastien Coron, Avradip Mandal, David Naccache, and Mehdi Tibouchi. Fully Homomorphic Encryption over the Integers with Shorter Public Keys. In Phillip Rogaway (ed.). CRYPTO. Springer, Lecture Notes in Computer Science, volume 6841, pages 487--504, 2011.
102. Jean-Sébastien Coron, Antoine Joux, Avradip Mandal, David Naccache, and Mehdi Tibouchi. Cryptanalysis of the RSA Subgroup Assumption from TCC 2005. In Dario Catalano, Nelly Fazio, Rosario Gennaro, and Antonio Nicolosi (ed.). Public Key Cryptography. Springer, Lecture Notes in Computer Science, volume 6571, pages 147--155, 2011.
103. Tom Denton, Florent Hivert, Anne Schilling, and Nicolas M. Thiéry. The representation theory of J-trivial monoids. Séminaire Lotharingien de Combinatoire, volume 64, pages B64d, 2011.
104. Christian Eder and John Edward Perry. Signature-based algorithms to compute Gröbner bases. In Éric Schost and Ioannis Z. Emiris (ed.). ISSAC. ACM, pages 99--106, 2011.
105. Torsten Enßlin, Christoph Pfrommer, Francesco Miniati, and Kandaswamy Subramanian. Cosmic ray transport in galaxy clusters: implications for radio halos, gamma-ray signatures, and cool core heating. Astronomy & Astrophysics, volume 527, number March, pages A99, 2011.
106. Alexandru Ghitza. Distinguishing Hecke eigenforms. International Journal of Number Theory, volume 7, number 5, pages 1247--1253, 2011.
107. Chris Godsil. Periodic Graphs. The Electronic Journal of Combinatorics, volume 18, number 1, pages P23, 2011.
108. David J. Green and Simon A. King. The computation of the cohomology rings of all groups of order 128. Journal of Algebra, volume 325, number 1, pages 352--363, 2011.
109. B. B. Kheyfets and S. I. Mukhin. Entropic Part of the Boundary Energy in a Lipid Membrane. Biochemistry (Moscow) Supplement Series A: Membrane and Cell Biology, volume 5, number 4, pages 392--399, 2011.
110. Simon A. King, David Green, and Graham Ellis. The mod-2 cohomology ring of the third Conway group is Cohen-Macaulay. Algebraic & Geometric Topology, volume 11, number 2, pages 719--734, 2011.
111. David Kohel. Addition law structure of elliptic curves. Journal of Number Theory, volume 131, number 5, pages 894--919, 2011.
112. Gregg Musiker and Christian Stump. A Compendium on the Cluster Algebra and Quiver Package in Sage. Séminaire Lotharingien de Combinatoire, volume 65, pages B65d, 2011.
113. Niels Oppermann, Georg Robbers, and Torsten A. Enßlin. Reconstructing signals from noisy data with unknown signal and noise covariance. Physical Review E, volume 84, number 4, pages 041118, 2011.
114. N. Oppermann, H. Junklewitz, G. Robbers, and T. A. Enßlin. Probing magnetic helicity with synchrotron radiation and Faraday rotation. Astronomy & Astrophysics, volume 530, number June, pages A89, 2011.
115. Robert W. Peck. A GAP Tutorial for Transformational Music Theory. Music Theory Online, volume 17, number 1, 2011.
116. Chao Peng, Haoxiang Zhang, Michael Stavola, W. Beall Fowler, Benjamin Esham, Stefan K. Estreicher, Andris Docaj, Lode Carnel, and Mike Seacrist. Microscopic structure of a VH4 center trapped by C in Si. Physical Review B, volume 84, number 19, pages 195205, 2011.
117. Viviane Pons. Multivariate Polynomials in Sage. Séminaire Lotharingien de Combinatoire, volume 66, number 7, pages B66z, 2011.
118. José Alejandro Lara Rodríguez. Relations between multizeta values in characteristic p. Journal of Number Theory, volume 131, number 11, pages 2081--2099, 2011.
119. Anne Schilling and Peter Tingley. Demazure crystals and the energy function. Proceedings of the 23rd International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AO, pages 861--872, 2011.
120. J. Le Sommer, F. d'Ovidio, and G. Madec. Parameterization of subgrid stirring in eddy resolving ocean models. Part 1: Theory and diagnostics. Ocean Modelling, volume 39, number 1-2, pages 154--169, 2011.
121. Amod Agashe, Kenneth A. Ribet, and William A. Stein. The modular degree, congruence primes, and multiplicity one. In Dorian Goldfeld, Jay Jorgenson, Peter Jones, Dinakar Ramakrishnan, Kenneth Ribet, and John Tate (ed.). Number Theory, Analysis and Geometry: In Memory of Serge Lang. Springer, pages 19--49, 2012.
122. Alejandro Aguilar-Zavoznik and Mario Pineda-Ruelas. A relation between ideals, Diophantine equations and factorization in quadratic fields F with h_F=2. International Journal of Algebra, volume 6, number 13-16, pages 729--745, 2012.
123. Jennifer S. Balakrishnan and Amnon Besser. Computing Local p-adic Height Pairings on Hyperelliptic Curves. International Mathematics Research Notices, volume 2012, number 11, pages 2405--2444, 2012.
124. Chris Berg, Nantel Bergeron, Steven Pon, and Mike Zabrocki. Expansions of k-Schur Functions in the Affine nilCoxeter Algebra. Electronic Journal of Combinatorics, volume 19, number 2, pages P55, 2012.
125. Corentin Boissy and Erwan Lanneau. Pseudo-Anosov Homeomorphisms on Translation Surfaces in Hyperelliptic Components Have Large Entropy. Geometric and Functional Analysis, volume 22, number 1, pages 74--106, 2012.
126. Reinier Bröker, Kristin Lauter, and Andrew V. Sutherland. Modular polynomials via isogeny volcanoes. Mathematics of Computation, volume 81, number 278, pages 1201--1231, 2012.
127. Felix Breuer, Aaron Dall, and Martina Kubitzke. Hypergraph coloring complexes. Discrete Mathematics, volume 312, number 16, pages 2407--2420, 2012.
128. Kathrin Bringmann, Martin Raum, and Olav Richter. Kohnen's limit process for real-analytic Siegel modular forms. Advances in Mathematics, volume 231, number 2, pages 1100--1118, 2012.
129. John D. Condon. Asymptotic expansion of the difference of two Mahler measures. Journal of Number Theory, volume 132, number 9, pages 1962--1983, 2012.
130. Jean-Sébastien Coron, David Naccache, and Mehdi Tibouchi. Public Key Compression and Modulus Switching for Fully Homomorphic Encryption over the Integers. In David Pointcheval and Thomas Johansson (ed.). Advances in Cryptology -- EUROCRYPT 2012. Springer, Lecture Notes in Computer Science, volume 7237, pages 446--464, 2012.
131. S. Czirbusz. Comparing the computation of Chebyshev polynomials in computer algebra systems. Annales Universitatis Scientiarum Budapestinensis de Rolando Eötvös Nominatae Sectio Computatorica, volume 36, pages 23--39, 2012.
132. Michael J. Dinneen, Yun-Bum Kim, and Radu Nicolescu. An Adaptive Algorithm for P System Synchronization. In Marian Gheorghe, Gheorghe P\uaun, Grzegorz Rozenberg, Arto Salomaa, and Sergey Verlan (ed.). Membrane Computing. Springer, Lecture Notes in Computer Science, volume 7184, pages 139--164, 2012.
133. Valentin Féray and Pierre-Loïc Méliot. Asymptotics of q-plancherel measures. Probability Theory and Related Fields, volume 152, pages 589--624, 2012.
134. Francesc Fité, Kiran S. Kedlaya, Víctor Rotger, and Andrew V. Sutherland. Sato-Tate distributions and Galois endomorphism modules in genus 2. Compositio Mathematica, volume 148, pages 1390--1442, 2012.
135. Andrew R. Francis and Mark M. Tanaka. Evolution of variation in presence and absence of genes in bacterial pathways. BMC Evolutionary Biology, volume 12, pages 55, 2012.
136. Andrew Gainer-Dewar. Γ-Species and the Enumeration of k-Trees. Electronic Journal of Combinatorics, volume 19, number 4, pages P45, 2012.
137. A. Gasull, Víctor Mañosa, and Xavier Xarles. Rational periodic sequences for the Lyness recurrence. Discrete and Continuous Dynamical Systems -- Series A, volume 32, pages 587--604, 2012.
138. Samuele Giraudo. Intervals of balanced binary trees in the Tamari lattice. Theoretical Computer Science, volume 420, pages 1--27, 2012.
139. Heather A. Harrington, Kenneth L. Ho, Thomas Thorne, and Michael P.H. Stumpf. Parameter-free model discrimination criterion based on steady-state coplanarity. Proceedings of the National Academy of Sciences USA, volume 109, number 39, pages 15746--15751, 2012.
140. Torsten Hoge. A presentation of the trace algebra of three 3x3 matrices. Journal of Algebra, volume 358, pages 257--268, 2012.
141. L. J. P. Kilford and Ken McMurdy. Slopes of the U_7 operator acting on a space of overconvergent modular forms. LMS Journal of Computation and Mathematics, volume 15, number May, pages 113--139, 2012.
142. Miguel A. Marco-Buzunáriz. A polynomial generalization of the Euler characteristic for algebraic sets. Journal of Singularities, volume 4, pages 114--130, 2012.
143. Jennifer Morse and Anne Schilling. A combinatorial formula for fusion coefficients. Proceedings of the 24th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2012). Discrete Mathematics & Theoretical Computer Science, volume AR, pages 735--744, 2012.
144. Minh Van Nguyen, Michael Kirley, and Rodolfo García-Flores. Community Evolution in A Scientific Collaboration Network. IEEE Congress on Evolutionary Computation. IEEE, pages 1--8, 2012.
145. Juan I. Perotti and Orlando V. Billoni. Smart random walkers: The cost of knowing the path. Physical Review E, volume 86, number 1, pages 011120, 2012.
146. José Alejandro Lara Rodríguez. On von Staudt for Bernoulli-Carlitz numbers. Journal of Number Theory, volume 132, number 4, pages 495--501, 2012.
147. Anne Schilling and Peter Tingley. Demazure Crystals, Kirillov-Reshetikhin Crystals, and the Energy Function. The Electronic Journal of Combinatorics, volume 19, number 2, pages P4, 2012.
148. J. K. Denny. Software Review: SAGE Open Source Mathematics Software System. College Mathematics Journal, volume 44, number 2, pages 149--155, 2013.
149. Torsten Enßlin. Information field dynamics for simulation scheme construction. Physical Review E, volume 87, number 1, pages 013308, 2013.
150. Thomas Gerber, Yuzhu Liu, Gregor Knopp, Patrick Hemberger, Andras Bodi, Peter Radi, and Yaroslav Sych. Charged particle velocity map image reconstruction with one-dimensional projections of spherical functions. Review of Scientific Instruments, volume 84, number 3, pages 033101, 2013.
151. Simon A. King. Comparing completeness criteria for modular cohomology rings of finite groups. Journal of Algebra, volume 374, number 1, pages 247--256, 2013.
152. Cristian Lenart and Anne Schilling. Crystal energy functions via the charge in types A and C. Mathematische Zeitschrift, volume 273, number 1-2, pages 401--426, 2013.
153. Masato Okado, Reiho Sakamoto, and Anne Schilling. Affine crystal structure on rigged configurations of type D_n^(1). Journal of Algebraic Combinatorics, volume 37, number 3, pages 571--599, 2013.
## Theses
1. Martin Albrecht. Algebraic Attacks on the Courtois Toy Cipher. Masters thesis, Department of Computer Science, Universität Bremen, Germany, 2006.
2. Gregory V. Bard. Algorithms for Solving Linear and Polynomial Systems of Equations over Finite Fields with Applications to Cryptanalysis. PhD thesis, Department of Mathematics, University of Maryland, USA, 2007.
3. Iftikhar A. Burhanuddin. Some Computational Problems Motivated by the Birch and Swinnerton-Dyer Conjecture. PhD thesis, Faculty of the Graduate School, University of Southern California, USA, 2007.
4. Jason Nicholas Grout. The Minimum Rank Problem Over Finite Fields. PhD thesis, Department of Mathematics, Brigham Young University, USA, 2007.
5. Christopher J. Augeri. On Graph Isomorphism and the PageRank Algorithm. PhD thesis, Graduate School of Engineering and Management, Air Force Institute of Technology, USA, 2008.
6. David Harvey. Algorithms for p-adic cohomology and p-adic heights. PhD thesis, Department of Mathematics, Harvard University, USA, 2008.
7. Yoav Aner. Securing the Sage Notebook. Masters thesis, Royal Holloway, University of London, UK, 2009.
8. César A. García-Vázquez. Un acercamiento a Magma a través de Sage por medio de hashing. Benemérita Universidad Autónoma de Puebla, Mexico, BSc (Computer Science) thesis, 2009.
9. David Møller Hansen. Pairing-Based Cryptography: A Short Signature Scheme Using the Weil Pairing. Masters thesis, Department of Mathematics, Danmarks Tekniske Universitet, Denmark, 2009.
10. Avra Laarakker. Topological Properties of Tiles and Digit Sets. Masters thesis, Acadia University, Canada, 2009.
11. Minh Van Nguyen. Exploring Cryptography Using the Sage Computer Algebra System. School of Engineering and Science, Victoria University, Australia, BSc (Computer Science) Honours thesis, 2009.
12. José Alejandro Lara Rodríguez. Some conjectures and results about multizeta values for F_q[t]. Masters thesis, Universidad Autónoma de Yucatán, Mexico, 2009.
13. Daniel Shumow. Isogenies of Elliptic Curves: A Computational Approach. Masters thesis, Department of Mathematics, University of Washington, USA, 2009.
14. Ralf-Philipp Weinmann. Algebraic Methods in Block Cipher Cryptanalysis. PhD thesis, Department of Computer Science, Technischen Universität Darmstadt, Germany, 2009.
15. Martin Albrecht. Algorithmic Algebraic Techniques and their Application to Block Cipher Cryptanalysis. PhD thesis, Royal Holloway, University of London, UK, 2010.
16. Robert W. Bradshaw. Provable Computation of Motivic L-functions. PhD thesis, Department of Mathematics, University of Washington, USA, 2010.
17. Robert L. Miller. Empirical Evidence for the Birch and Swinnerton-Dyer Conjecture. PhD thesis, Department of Mathematics, University of Washington, USA, 2010.
18. Elisabeth Palchak. A Criterion for Identifying Stressors in Non-Linear Equations Using Gröbner Bases. Department of Mathematics, University of Southern Mississippi, USA, Honors thesis, 2010.
19. Marco Streng. Complex multiplication of abelian surfaces. PhD thesis, Thomas Stieltjes Institute for Mathematics, Universiteit Leiden, The Netherlands, 2010.
20. Graeme Taylor. Cyclotomic Matrices and Graphs. PhD thesis, School of Mathematics, University of Edinburgh, UK, 2010.
21. Miao Yu. An F4-Style Involutive Basis Algorithm. Masters thesis, Department of Mathematics, University of Southern Mississippi, USA, 2010.
22. Nicolas Borie. Calculate invariants of permutation groups by Fourier Transform. PhD thesis, Laboratoire de mathématiques d'Orsay, University of Paris-Sud 11, Orsay, France, 2011.
23. Tom Denton. Excursions into Algebra and Combinatorics at q=0. PhD thesis, Department of Mathematics, University of California, Davis, USA, 2011.
24. Richard Moloney. Divisibility Properties of Kloosterman Sums and Division Polynomials for Edwards Curves. PhD thesis, School of Mathematical Sciences, University College Dublin, Ireland, 2011.
25. Andrey Yurievich Novoseltsev. Calabi-Yau Hypersurfaces and Complete Intersections in Toric Varieties. PhD thesis, Department of Mathematical and Statistical Sciences, University of Alberta, Canada, 2011.
26. Lauri Ruotsalainen. Sage-Ohjelmisto Lukion Matematiikan Opetuksessa. Masters thesis, Department of Mathematics, University of Turku, Finland, 2011.
27. Andrew Gainer. Γ-species, Quotients, and Graph Enumeration. PhD thesis, Department of Mathematics, Brandeis University, USA, 2012.
28. Mikko Moilanen. Sageko Matlabin korvaaja?. Mikkelin University of Applied Sciences, Finland, Bachelor thesis, 2012.
29. Thomas Burger. Algorithmische Umsetzung der Idealarithmetik in nicht-maximalen Ordnungen von Zahlkörpern. Masters thesis, Mathematisches Institut, Ludwig-Maximilians-Universität München, Germany, 2013.
## Books
1. William Stein. Modular Forms: A Computational Approach. American Mathematical Society, 2007.
2. David Joyner. Adventures with Group Theory: Rubik's Cube, Merlin's Machine, and Other Mathematical Toys. 2nd edition, The Johns Hopkins University Press, 2008.
3. L. J. P. Kilford. Modular Forms: A Classical and Computational Introduction. Imperial College Press, 2008.
4. David R. Kohel. Cryptography. 11th July, 2008.
5. Ted Kosan. SAGE For Newbies. February, 2008.
6. William Stein. Elementary Number Theory: Primes, Congruences, and Secrets. Springer, 2008.
7. Lawrence C. Washington. Elliptic Curves: Number Theory and Cryptography. 2nd edition, Chapman & Hall/CRC, 2008.
8. Robert A. Beezer. A First Course in Linear Algebra. Robert A. Beezer, 2009.
9. William Granville and David Joyner. Differential Calculus and Sage. CreateSpace, 2009.
10. William Stein and others. Sage Tutorial. CreateSpace, 2009.
11. Alexandre Casamayou, Guillaume Connan, Thierry Dumont, Laurent Fousse, François Maltey, Matthias Meulien, Marc Mezzarobba, Clément Pernet, Nicolas M. Thiéry, and Paul Zimmermann. Calcul mathématique avec Sage. July, 2010.
12. Bernhard Esslinger and CrypTool Development Team. The CrypTool Script: Cryptography, Mathematics and More. 19th January, 2010.
13. Craig Finch. Sage Beginner's Guide. Packt Publishing, 2011.
14. José M. Gallardo. Ecuaciones Diferenciales Ordinarias Una introducción con SAGE. José M. Gallardo, 2011.
15. David Joyner and Jon-Lark Kim. Selected Unsolved Problems in Coding Theory. Birkhäuser, 2011.
16. Álvaro Lozano-Robledo. Elliptic Curves, Modular Forms, and Their L-functions. American Mathematical Society, 2011.
17. Alasdair McAndrew. Introduction to Cryptography with Open-Source Software. CRC Press, 2011.
18. Alan Doerr and Kenneth Levasseur. Applied Discrete Structures. Version 1.0, Alan Doerr and Kenneth Levasseur, 2012.
19. Éric Gourgoulhon. 3+1 Formalism in General Relativity. Springer, 2012.
20. David Joyner and Marshall Hampton. Introduction to Differential Equations Using Sage. Johns Hopkins University Press, 2012.
21. Thomas Lam, Luc Lapointe, Jennifer Morse, Anne Schilling, Mark Shimozono, and Mike Zabrocki. k-Schur functions and affine Schubert calculus. arxiv, 2013.
22. Jakob Stix. Rational Points and Arithmetic of Fundamental Groups: Evidence for the Section Conjecture. Springer, 2013.
23. William Stallings. Cryptography and Network Security: Principles and Practice. 6th edition, Prentice Hall, 2014.
## Preprints
1. Laura DeLoss, Jason Grout, Leslie Hogben, Tracy McKay, Jason Smith, and Geoff Tims. Table of minimum ranks of graphs of order at most 7 and selected optimal matrices. arXiv:0812.0870, 2008.
2. Laura DeLoss, Jason Grout, Tracy McKay, Jason Smith, and Geoff Tims. Program for calculating bounds on the minimum rank of a graph using Sage. arXiv:0812.1616, 2008.
3. C. F. Doran, M. G. Faux, S. J. Gates Jr., T. Hubsch, K. M. Iga, G. D. Landweber, and R. L. Miller. Topology Types of Adinkras and the Corresponding Representations of N-Extended Supersymmetry. arXiv:0806.0050, 2008.
4. C. F. Doran, M. G. Faux, S. J. Gates Jr., T. Hubsch, K. M. Iga, G. D. Landweber, and R. L. Miller. Adinkras for Clifford Algebras and Worldline Supermultiplets. arXiv:0811.3410, 2008.
5. Tomas J. Boothby and Robert W. Bradshaw. Bitslicing and the Method of Four Russians Over Larger Finite Fields. arXiv:0901.1413, 2009.
6. César A. García-Vázquez and Lidia A. Hernandez-Rebollar. An Adaptation of the Relaxation Method to Obtain a Set of Solutions. Workshop on Innovation, Electronics, Robotics and Automotive Mechanics Conference (CERMA 2009), 2009.
7. Carlo Hämäläinen. Latin trades and simplicial complexes. arXiv:0907.1481, 2009.
8. Richard Moloney, Gary McGuire, and Michael Markowitz. Elliptic Curves in Montgomery Form with B=1 and Their Low Order Torsion. Cryptology ePrint Archive: Report 2009/213, 2009.
9. Piotr Mroczkowski and Janusz Szmidt. Cube Attack on Courtois Toy Cipher. Cryptology ePrint Archive, Report 2009/497, 2009.
10. William Stein and Christian Wuthrich. Computations About Tate-Shafarevich Groups Using Iwasawa Theory. Home page of William Stein, 2009.
11. Nicolas M. Thiéry. Sage-Combinat, Free and Practical Software for Algebraic Combinatorics. Software demonstration, FPSAC'09, Hagenberg, Austria, 2009.
12. Kenneth Koon-Ho Wong, Gregory V. Bard, and Robert H. Lewis. Partitioning Multivariate Polynomial Equations via Vertex Separators for Algebraic Cryptanalysis and Mathematical Applications. Cryptology ePrint Archive: Report 2009/343, 2009.
13. Martin R. Albrecht and Kenneth G. Paterson. Breaking An Identity-Based Encryption Scheme based on DHIES. Cryptology ePrint Archive, Report 2010/637, 2010.
14. Martin R. Albrecht and Clément Pernet. Efficient Decomposition of Dense Matrices over GF(2). arXiv:1006.1744, 2010.
15. Martin Albrecht and John Perry. F4/5. arXiv:1006.4933, 2010.
16. Daniel J. Bernstein, Tanja Lange, and Christiane Peters. Wild McEliece. Cryptology ePrint Archive: Report 2010/410, 2010.
17. Daniel J. Bernstein, Peter Birkner, Tanja Lange, and Christiane Peters. ECM using Edwards curves. Cryptology ePrint Archive: Report 2008/016, 2010.
18. Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche. Keccak Sponge Function Family: Main Document. Home page of the Keccak sponge function family, 2010.
19. Sunil Chetty and Lung Li. Computing local constants for CM elliptic curves. arXiv:1011.0464, 2010.
20. John E. Cremona and Thotsaphon Thongjunthug. The complex AGM, periods of elliptic curves over C and complex elliptic logarithms. arXiv:1011.0914, 2010.
21. Samuele Giraudo. Algebraic and combinatorial structures on Baxter permutations. arXiv:1011.4288, 2010.
22. Kevin J. McGown. Norm-Euclidean Galois fields. arXiv:1011.4501, 2010.
23. Cam McLeman and Christopher Rasmussen. Class number formulas via 2-isogenies of elliptic curves. arXiv:1008.4766, 2010.
24. Robert L. Miller. Empirical evidence for the Birch and Swinnerton-Dyer formula over Q. arXiv:1010.2431, 2010.
25. Steffen Oppermann and Hugh Thomas. Higher Dimensional Cluster Combinatorics and Representation Theory. arXiv:1001.5437, 2010.
26. Alexander Raichev and Mark C. Wilson. Asymptotics of coefficients of multivariate generating functions: improvements for multiple points. arXiv:1009.5715, 2010.
27. Martin Raum. Efficiently Generated Spaces of Classical Siegel Modular Forms and the Boecherer Conjecture. arXiv:1002.3883, 2010.
28. Danko Adrovic and Jan Verschelde. Polyhedral Methods for Space Curves Exploiting Symmetry. arXiv:1109.0241, 2011.
29. Martin R. Albrecht, Pooya Farshim, Jean-Charles Faugère, and Ludovic Perret. Polly Cracker, Revisited. Cryptology ePrint Archive: Report 2011/289, 2011.
30. Drew Armstrong, Christian Stump, and Hugh Thomas. A uniform bijection between nonnesting and noncrossing partitions. arXiv:1101.1277, 2011.
31. Alberto Arri and John Perry. The F5 Criterion revised. arXiv:1012.3664, 2011.
32. Johan Bosman. Modular forms applied to the computational inverse Galois problem. arXiv:1109.6879, 2011.
33. Charles Bouillaguet, Pierre-Alain Fouque, and Gilles Macario-Rat. Practical Key-recovery For All Possible Parameters of SFLASH. Cryptology ePrint Archive: Report 2011/271, 2011.
34. Eric Brier, David Naccache, Phong Q. Nguyen, and Mehdi Tibouchi. Modulus Fault Attacks Against RSA-CRT Signatures. Cryptology ePrint Archive: Report 2011/388, 2011.
35. Stanislav Bulygin. Algebraic cryptanalysis of the round-reduced and side channel analysis of the full PRINTCipher-48. Cryptology ePrint Archive: Report 2011/287, 2011.
36. James A. Carlson and Domingo Toledo. Cubic Surfaces with Special Periods. arXiv:1104.1782, 2011.
37. Gérard Cohen and Jean-Pierre Flori. On a generalized combinatorial conjecture involving addition $\mod 2^k - 1$. Cryptology ePrint Archive: Report 2011/400, 2011.
38. Henry Cohn and Nadia Heninger. Approximate common divisors via lattices. Cryptology ePrint Archive: Report 2011/437, 2011.
39. Jean-Sebastien Coron, David Naccache, and Mehdi Tibouchi. Optimization of Fully Homomorphic Encryption. Cryptology ePrint Archive: Report 2011/440, 2011.
40. Jean-Pierre Flori and Hugues Randriam. On the Number of Carries Occuring in an Addition $\mod 2^k-1$. Cryptology ePrint Archive: Report 2011/245, 2011.
41. Jean-Pierre Flori, Sihem Mesnager, and Gérard Cohen. The Value 4 of Binary Kloosterman Sums. Cryptology ePrint Archive: Report 2011/364, 2011.
42. Jonathan Hanke. Explicit formulas for Masses of Ternary Quadratic Lattices of varying determinant over Number Fields. arXiv:1109.1054, 2011.
43. Timo Kluck. On the Calogero-Moser solution by root-type Lax pair. arXiv:1111.7163, 2011.
45. David Perkinson, Jacob Perlman, and John Wilmes. Primer for the algebraic geometry of sandpiles. arXiv:1112.6163, 2011.
46. Steven Pon. Affine Stanley symmetric functions for classical types. arXiv:1111.3312, 2011.
47. Thomas Risse. How SAGE helps to implement Goppa Codes and McEliece PKCSs. Home page of Thomas Risse, 2011.
48. José Alejandro Lara Rodríguez. Special relations between multizeta values and parity results. arXiv:1108.4726, 2011.
49. Michael Schneider. Sieving for Shortest Vectors in Ideal Lattices. Cryptology ePrint Archive: Report 2011/458, 2011.
50. Enrico Thomae and Christopher Wolf. Roots of Square: Cryptanalysis of DoubleLayer Square and Square+. Cryptology ePrint Archive: Report 2011/431, 2011.
51. Arvind Ayyer, Steven Klee, and Anne Schilling. Combinatorial Markov chains on linear extensions. arXiv:1205.7074, 2012.
52. Chris Berg, Nantel Bergeron, Hugh Thomas, and Mike Zabrocki. Expansion of k-Schur functions for maximal k-rectangles within the affine nilCoxeter algebra. arXiv:1107.3610, 2012.
53. Chris Berg, Nantel Bergeron, Franco Saliola, Luis Serrano, and Mike Zabrocki. A lift of the Schur and Hall-Littlewood bases to non-commutative symmetric functions. arXiv:1208.5191, 2012.
54. Tom Denton. Canonical Decompositions of Affine Permutations, Affine Codes, and Split k-Schur Functions. arXiv:1204.2591, 2012.
55. Alexandru Ghitza and Angus McAndrew. Experimental evidence for Maeda's conjecture on modular forms. arXiv:1207.3480, 2012.
56. Alexandru Ghitza, Nathan C. Ryan, and David Sulon. Computations of vector-valued Siegel modular forms. arXiv:1203.5611, 2012.
57. Andre de Gouvea and Shashank Shalgar. Effect of Transition Magnetic Moments on Collective Supernova Neutrino Oscillations. arXiv:1207.0516, 2012.
58. Florent Hivert, Anne Schilling, and Nicolas M. Thiéry. The biHecke monoid of a finite Coxeter group and its representations. arXiv:1012.1361, 2012.
59. Cristian Lenart, Satoshi Naito, Daisuke Sagaki, Anne Schilling, and Mark Shimozono. A uniform model for Kirillov-Reshetikhin crystals. Extended abstract. arXiv:1211.6019, 2012.
60. Cristian Lenart, Satoshi Naito, Daisuke Sagaki, Anne Schilling, and Mark Shimozono. A uniform model for Kirillov-Reshetikhin crystals I: Lifting the parabolic quantum Bruhat graph. arXiv:1211.2042, 2012.
61. Yuri Matiyasevich. New Conjectures about Zeroes of Riemann's Zeta Function. Technical Report MA12-03, Department of Mathematics, University of Leicester, 2012.
62. Tomoki Nakanishi and Salvatore Stella. Diagrammatic description of c-vectors and d-vectors of cluster algebras of finite type. arXiv:1210.6299, 2012.
63. Javier López Peña and Hugo Touchette. A network theory analysis of football strategies. arXiv:1206.6904, 2012.
64. Arvind Ayyer, Anne Schilling, Benjamin Steinberg, and Nicolas M. Thiery. Directed nonabelian sandpile models on trees. arXiv:1305.1697, 2013.
65. Przemysław Koprowski and Alfred Czogała. Computing with quadratic forms over number fields. arXiv:1304.0708, 2013. |
# How do you find the perimeter and the area of a rectangle with a length of 4 1/2 yards and a width of 3 yards?
Apr 5, 2018
$P = 15 \text{ yd}, \quad A = 13.5 \text{ yd}^2$
#### Explanation:
The perimeter is the distance around the edges of the shape.
$P = l + l + w + w$ (l = length, w = width)
$P = 4.5 + 4.5 + 3 + 3$
$P = 15$ yds
The area is how much space the shape takes up.
$A = l \cdot w$
$A = 4.5 \cdot 3$
$A = 13.5 \ \text{yd}^{2}$
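A trivial check of the arithmetic (my own sketch, not part of the original answer), using exact fractions for the mixed number $4\frac{1}{2}$:

```python
# Quick check of the perimeter and area computed above (illustrative only).
from fractions import Fraction

length = Fraction(9, 2)   # 4 1/2 yards
width = Fraction(3)       # 3 yards

perimeter = 2 * (length + width)
area = length * width
print(perimeter, float(area))   # 15 and 13.5
```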
# Dynamic programming at a non linear programming problem
Could you explain to me how we can use dynamic programming in order to solve a non linear programming problem?
What do we do for example if we are given the following problem?
$$\max (y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) \\ y_1+y_2 \leq 5.4 \\ y_1, y_2 \geq 0$$
• What does $y_1+y_2 \leq 5,4$ mean? – Théophile Jan 8 '16 at 16:43
• I mean the number $5.4$ , not both the numbers 4 and 5. @Théophile – Evinda Jan 8 '16 at 16:45
• Not sure if this helps, but you can look at the derivative of the sum when y1+y2 = some constant. Between that and checking boundary conditions, you should be able to solve it. – barrycarter Jan 8 '16 at 16:59
• @barrycarter How do we use like that dynamic programming? – Evinda Jan 8 '16 at 17:32
I now know that this is an exercise and that you need to use dynamic programming; however, I'll leave this solution up for reference.
Fix $y_1 + y_2 = z$ for $z\leq 5.4$.
The objective can be rewritten: $$y_1^3-11 y_1^2+40 y_1+(z-y_1)^3-8(z-y_1)^2+21(z-y_1)$$
This function is concave in $y_1$ (the cubic terms cancel, leaving a downward-opening parabola, since the coefficient $3z-19$ of $y_1^2$ is negative for $z \leq 5.4$), so the first order condition is sufficient for a (constrained) global maximum.
The first order condition is: $y_1 = \frac{z+1}{2}$ and thus $y_2 = \frac{z-1}{2}$
Plugging these into to the objective and simplifying gives:
$$\frac{1}{4}(z^3 - 19z^2 +119z +19)$$
This is increasing for $z \leq 5\frac{2}{3}$, and $5\frac{2}{3}\geq 5.4$, so the objective is maximized at $z=5.4$. Using the first order condition above again gives the solution:
This is: $$y_1=\frac{16}{5}$$ $$y_2=\frac{11}{5}$$
• Did it without calculus, nice! – barrycarter Jan 8 '16 at 17:52
• Not really, I found the top of the parabola by taking a derivative. ;) – CommonerG Jan 8 '16 at 17:53
• @CommonerG In order to deduce that the function is increasing in both variables when they are positive did you fix one of them as a constant and then derived the function? Also why can we assume that $y_1+y_2=5.4$ at the optimum? Which first order condition do you mean? – Evinda Jan 9 '16 at 21:13
• @CommonerG We have that $\frac{\partial{f}}{\partial{y_1}}(y)= 3y_1^2-22y_1+40 \\ \frac{\partial{f}}{\partial{y_2}}(y)=3y_2^2-16y_2+21$. The roots are greater than $0$. How did you deduce that the function is increasing as for both variables? – Evinda Jan 10 '16 at 0:52
• I've clarified the solution. – CommonerG Jan 11 '16 at 2:31 |
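For the dynamic-programming route the question actually asks about, here is a minimal sketch of my own (not from this thread): treat the budget $z \leq 5.4$ as the state, allocate it to $y_1$ in stage one and to $y_2$ in stage two, and work on a discretized grid. The grid step is an assumption.

```python
# Illustrative two-stage dynamic program on a discretized budget (my sketch,
# not from the thread).  Stage 2 tabulates the best y2-reward for each
# remaining budget; stage 1 then chooses how much budget to give y1.

def g1(y):
    return y**3 - 11*y**2 + 40*y     # stage-1 reward

def g2(y):
    return y**3 - 8*y**2 + 21*y      # stage-2 reward

def solve_dp(budget=5.4, step=0.01):
    n = int(round(budget / step))

    # Stage 2: for each remaining budget k*step, best value and the y2 achieving it.
    best2 = []
    for k in range(n + 1):
        val, y2 = max((g2(j * step), j * step) for j in range(k + 1))
        best2.append((val, y2))

    # Stage 1: choose how much budget i*step to spend on y1.
    best = (float("-inf"), 0.0, 0.0)
    for i in range(n + 1):
        val2, y2 = best2[n - i]
        total = g1(i * step) + val2
        if total > best[0]:
            best = (total, i * step, y2)
    return best

value, y1, y2 = solve_dp()
print(y1, y2, value)   # expect roughly y1 = 3.2, y2 = 2.2, matching the answer above
```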
## Monday, January 3, 2011
### Infinite series ( and Euler's identity revisited )
The importance of infinite series and sequences cannot be overstated in my opinion; they literally pop up everywhere. To my surprise, however, there isn't a single course at the Open University ( or any other university to my knowledge ) that deals exclusively with the topic of 'Infinite Series'. Instead the theory is stuffed away somewhere else, as if it were not really important, like in M208 for example. - I have been searching for books on the subject and there -is- a ( recent ) book on the subject. It is called 'Real Infinite Series' by D. Bonar / M. Khoury, published by the MAA. Tests not in M208 but in this book are, for example, Raabe's Test, Kummer's Test, Cauchy's Condensation Test, Abel's Test, and Dirichlet's Test, as well as Bertrand's Test. It includes an entire chapter on the harmonic series with several different proofs of its divergence. In the appendix there is an overview of the literature on infinite series.
P.S.
This video shows the proof of $e^{i\pi}+1=0$ using infinite series.
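As a small companion to that video (my own sketch, not from the post), the identity can be checked numerically by summing the power series for $e^z$ at $z=i\pi$:

```python
# Partial sums of e^(i*pi) = sum of z^n / n! with z = i*pi (illustrative sketch).
import cmath

z = 1j * cmath.pi
term, total = 1 + 0j, 0 + 0j
for n in range(30):
    total += term              # add z^n / n!
    term *= z / (n + 1)        # next term: z^(n+1) / (n+1)!
print(total)                   # approximately -1 + 0j
print(abs(total + 1))          # distance from -1, tiny after 30 terms
```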
# 5.4: The Exponential Distribution
The exponential distribution is often concerned with the amount of time until some specific event occurs. For example, the amount of time (beginning now) until an earthquake occurs has an exponential distribution. Other examples include the length, in minutes, of long distance business telephone calls, and the amount of time, in months, a car battery lasts. It can be shown, too, that the value of the change that you have in your pocket or purse approximately follows an exponential distribution.
Values for an exponential random variable occur in the following way. There are fewer large values and more small values. For example, the amount of money customers spend in one trip to the supermarket follows an exponential distribution. There are more people who spend small amounts of money and fewer people who spend large amounts of money.
The exponential distribution is widely used in the field of reliability. Reliability deals with the amount of time a product lasts.
Example $$\PageIndex{1}$$
Let $$X$$ = amount of time (in minutes) a postal clerk spends with his or her customer. The time is known to have an exponential distribution with the average amount of time equal to four minutes.
$$X$$ is a continuous random variable since time is measured. It is given that $$\mu = 4$$ minutes. To do any calculations, you must know $$m$$, the decay parameter.
$$m = \dfrac{1}{\mu}$$. Therefore, $$m = \dfrac{1}{4} = 0.25$$.
The standard deviation, $$\sigma$$, is the same as the mean. $$\mu = \sigma$$
The distribution notation is $$X \sim Exp(m)$$. Therefore, $$X \sim Exp(0.25)$$.
The probability density function is $$f(x) = me^{-mx}$$. The number $$e = 2.71828182846$$... It is a number that is used often in mathematics. Scientific calculators have the key "$$e^{x}$$." If you enter one for $$x$$, the calculator will display the value $$e$$.
The curve is:
$$f(x) = 0.25e^{-0.25x}$$ where $$x$$ is at least zero and $$m = 0.25$$.
For example, $$f(5) = 0.25e^{-(0.25)(5)} = 0.072$$. The value 0.072 is the height of the curve when x = 5. In Example $$\PageIndex{2}$$ below, you will learn how to find probabilities using the decay parameter.
The graph is as follows:
Notice the graph is a declining curve. When $$x = 0$$,
$$f(x) = 0.25e^{(-0.25)(0)} = (0.25)(1) = 0.25 = m$$. The maximum value on the y-axis is m.
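These values are easy to reproduce; here is a minimal sketch of my own (not from the text), using only Python's standard library:

```python
# Check of the density values quoted above (illustrative sketch, not from the text).
import math

m = 0.25                          # decay parameter, m = 1/mu with mu = 4 minutes

def f(x):
    return m * math.exp(-m * x)   # exponential pdf f(x) = m e^(-mx), x >= 0

print(round(f(5), 3))   # 0.072, the height of the curve at x = 5
print(f(0))             # 0.25, the maximum value of the pdf, equal to m
```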
Exercise $$\PageIndex{1}$$
The amount of time spouses shop for anniversary cards can be modeled by an exponential distribution with the average amount of time equal to eight minutes. Write the distribution, state the probability density function, and graph the distribution.
$$X \sim Exp(0.125)$$;
$$f(x) = 0.125e^{-0.125x}$$;
Example $$\PageIndex{2}$$
1. Using the information in Example $$\PageIndex{1}$$, find the probability that a clerk spends four to five minutes with a randomly selected customer.
2. Half of all customers are finished within how long? (Find the 50th percentile)
3. Which is larger, the mean or the median?
a. Find $$P(4 < x < 5)$$.
The cumulative distribution function (CDF) gives the area to the left.
$P(X < x) = 1 - e^{-mx}$

$P(x < 5) = 1 - e^{(-0.25)(5)} = 0.7135$

and

$P(x < 4) = 1 - e^{(-0.25)(4)} = 0.6321$
You can do these calculations easily on a calculator.
The probability that a postal clerk spends four to five minutes with a randomly selected customer is
$P(4 < x < 5) = P(x < 5) – P(x < 4) = 0.7135 − 0.6321 = 0.0814.$
On the home screen, enter (1 – e^(–0.25*5)) – (1 – e^(–0.25*4)) or enter e^(–0.25*4) – e^(–0.25*5).
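The same calculation can be done on a computer. Here is a hedged Python sketch of part a using the CDF $$P(X < x) = 1 - e^{-mx}$$ with $$m = 0.25$$; the variable names are mine.

```python
import math

m = 0.25
cdf = lambda x: 1 - math.exp(-m * x)   # P(X < x) for the exponential distribution

p_4_to_5 = cdf(5) - cdf(4)
print(round(cdf(5), 4), round(cdf(4), 4), round(p_4_to_5, 4))  # 0.7135 0.6321 0.0814
```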
b. Find the 50th percentile.
$$P(x < k) = 0.50$$, $$k = 2.8$$ minutes (calculator or computer)
Half of all customers are finished within 2.8 minutes.
You can also do the calculation as follows:
$P(x < k) = 0.50$
and
$P(x < k) = 1 – e^{-0.25k}$
Therefore,
$0.50 = 1 − e^{-0.25k}$
and
$e^{-0.25k} = 1 − 0.50 = 0.5$
Take natural logs:
$\ln(e^{-0.25k}) = \ln(0.50).$
So,
$-0.25k = ln(0.50).$
Solve for $$k: k = \dfrac{ln(0.50)}{-0.25} = 2.77$$ minutes. The calculator simplifies the calculation for percentile $$k$$. See the following two notes.
A formula for the percentile $$k$$ is $$k = \dfrac{ln(1 - \text{Area To The Left})}{-m}$$ where $$ln$$ is the natural log.
c. From part b, the median or 50th percentile is 2.8 minutes. The theoretical mean is four minutes. The mean is larger.
Note
On the home screen, enter ln(1 – 0.50)/–0.25. Press the (-) for the negative.
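A Python version of the same percentile formula is sketched below; this is only an illustration of $$k = \dfrac{ln(1 - \text{Area To The Left})}{-m}$$, and the function name is mine.

```python
import math

def exponential_percentile(area_to_left, m):
    """Return k such that P(X < k) = area_to_left for an exponential with decay parameter m."""
    return math.log(1 - area_to_left) / (-m)

print(round(exponential_percentile(0.50, 0.25), 2))  # about 2.77 minutes, the median
```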
Exercise $$\PageIndex{2}$$
The number of days ahead travelers purchase their airline tickets can be modeled by an exponential distribution with the average amount of time equal to 15 days. Find the probability that a traveler will purchase a ticket fewer than ten days in advance. How many days do half of all travelers wait?
$$P(x < 10) = 0.4866$$
50th percentile = 10.40
Collaborative Exercise
Have each class member count the change he or she has in his or her pocket or purse. Your instructor will record the amounts in dollars and cents. Construct a histogram of the data taken by the class. Use five intervals. Draw a smooth curve through the bars. The graph should look approximately exponential. Then calculate the mean.
Let $$X =$$ the amount of money a student in your class has in his or her pocket or purse.
The distribution for $$X$$ is approximately exponential with mean, $$\mu =$$ _______ and $$m =$$ _______. The standard deviation, $$\sigma =$$ ________.
Draw the appropriate exponential graph. You should label the x– and y–axes, the decay rate, and the mean. Shade the area that represents the probability that one student has less than \$.40 in his or her pocket or purse. (Shade $$P(x < 0.40)$$).
Example $$\PageIndex{3}$$
On the average, a certain computer part lasts ten years. The length of time the computer part lasts is exponentially distributed.
1. What is the probability that a computer part lasts more than 7 years?
2. On the average, how long would five computer parts last if they are used one after another?
3. Eighty percent of computer parts last at most how long?
4. What is the probability that a computer part lasts between nine and 11 years?
a. Let $$x =$$ the amount of time (in years) a computer part lasts.
$\mu = 10$
so
$m = \dfrac{1}{\mu} = \dfrac{1}{10} = 0.1$
Find $$P(x > 7)$$. Draw the graph.
$P(x > 7) = 1 – P(x < 7).$
Since $$P(X < x) = 1 – e^{-mx}$$ then
$P(X > x) = 1 –(1 –e^{-mx}) = e^{-mx}$
$P(x > 7) = e^{(–0.1)(7)} = 0.4966.$
The probability that a computer part lasts more than seven years is 0.4966.
On the home screen, enter e^(-.1*7).
b. On the average, one computer part lasts ten years. Therefore, five computer parts, if they are used one right after the other would last, on the average, (5)(10) = 50 years.
c. Find the 80th percentile. Draw the graph. Let k = the 80th percentile.
Solve for $$k: k = \dfrac{ln(1-0.80)}{-0.1} = 16.1$$ years
Eighty percent of the computer parts last at most 16.1 years.
On the home screen, enter $$\dfrac{ln(1-0.80)}{-0.1}$$
d. Find $$P(9 < x < 11)$$. Draw the graph.
$P(9 < x < 11) = P(x < 11) - P(x < 9) = (1 - e^{(–0.1)(11)}) - (1 - e^{(–0.1)(9)}) = 0.6671 - 0.5934 = 0.0737.$
The probability that a computer part lasts between nine and 11 years is 0.0737.
On the home screen, enter e^(–0.1*9) – e^(–0.1*11).
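For readers working without a TI calculator, here is a short Python sketch that reproduces the three probability calculations in this example; the helper names are mine, and the decay parameter $$m = 0.1$$ comes from the example.

```python
import math

m = 0.1  # decay parameter for a mean of 10 years

survival = lambda x: math.exp(-m * x)              # P(X > x)
cdf = lambda x: 1 - math.exp(-m * x)               # P(X < x)
percentile = lambda area: math.log(1 - area) / (-m)

print(round(survival(7), 4))        # 0.4966 (part a)
print(round(percentile(0.80), 1))   # 16.1 years (part c)
print(round(cdf(11) - cdf(9), 4))   # 0.0737 (part d)
```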
Exercise $$\PageIndex{3}$$
On average, a pair of running shoes can last 18 months if used every day. The length of time running shoes last is exponentially distributed. What is the probability that a pair of running shoes last more than 15 months? On average, how long would six pairs of running shoes last if they are used one after the other? Eighty percent of running shoes last at most how long if used every day?
$$P(x > 15) = 0.4346$$
Six pairs of running shoes would last 108 months on average.
80th percentile = 28.97 months
Example $$\PageIndex{4}$$
Suppose that the length of a phone call, in minutes, is an exponential random variable with decay parameter = $$\dfrac{1}{12}$$. If another person arrives at a public telephone just before you, find the probability that you will have to wait more than five minutes. Let $$X$$ = the length of a phone call, in minutes.
What is $$m$$, $$\mu$$, and $$\sigma$$? The probability that you must wait more than five minutes is _______ .
• $$m = \dfrac{1}{12}$$
• $$\mu = 12$$
• $$\sigma = 12$$
$$P(x > 5) = 0.6592$$
Exercise $$\PageIndex{4}$$
Suppose that the distance, in miles, that people are willing to commute to work is an exponential random variable with a decay parameter $$\dfrac{1}{20}$$. Let $$S =$$ the distance people are willing to commute in miles. What is $$m$$, $$\mu$$, and $$\sigma$$? What is the probability that a person is willing to commute more than 25 miles?
$$m = \dfrac{1}{20}$$; $$\mu = 20$$; $$\sigma = 20$$; $$P(x > 25) = 0.2865$$
Example $$\PageIndex{5}$$
The time spent waiting between events is often modeled using the exponential distribution. For example, suppose that an average of 30 customers per hour arrive at a store and the time between arrivals is exponentially distributed.
1. On average, how many minutes elapse between two successive arrivals?
2. When the store first opens, how long on average does it take for three customers to arrive?
3. After a customer arrives, find the probability that it takes less than one minute for the next customer to arrive.
4. After a customer arrives, find the probability that it takes more than five minutes for the next customer to arrive.
5. Seventy percent of the customers arrive within how many minutes of the previous customer?
6. Is an exponential distribution reasonable for this situation?
1. Since we expect 30 customers to arrive per hour (60 minutes), we expect on average one customer to arrive every two minutes.
2. Since one customer arrives every two minutes on average, it will take six minutes on average for three customers to arrive.
3. Let $$X =$$ the time between arrivals, in minutes. By part a, $$\mu = 2$$, so $$m = \dfrac{1}{2} = 0.5$$.
Therefore, $$X \sim Exp(0.5)$$.
The cumulative distribution function is $$P(X < x) = 1 - e^{-0.5x}$$.
Therefore $$P(X < 1) = 1 - e^{(–0.5)(1)} \approx 0.3935$$.
On the home screen, enter 1 – e^(–0.5).
Figure $$\PageIndex{8}$$.
4. $$P(X > 5) = 1 - P(X < 5) = 1 - (1 - e^{(-5)(0.5)}) = e^{-2.5} \approx 0.0821$$.
Figure $$\PageIndex{9}$$.
On the home screen, enter 1 – (1 – e^(–5*0.5)) or e^(–5*0.5).
5. We want to solve $$0.70 = P(X < x)$$ for $$x$$.
Substituting in the cumulative distribution function gives $$0.70 = 1 – e^{–0.5x}$$, so that $$e^{–0.5x} = 0.30$$. Converting this to logarithmic form gives $$-0.5x = ln(0.30)$$, or $$x = \dfrac{ln(0.30)}{-0.5} \approx 2.41$$ minutes.
Thus, seventy percent of customers arrive within 2.41 minutes of the previous customer.
You are finding the 70th percentile $$k$$ so you can use the formula $$k = \dfrac{ln(1-\text{Area To The left Of k})}{-m}$$
$$k = \dfrac{ln(1-0.70)}{(-0.5)} \approx 2.41$$ minutes
Figure $$\PageIndex{10}$$.
6. This model assumes that a single customer arrives at a time, which may not be reasonable since people might shop in groups, leading to several customers arriving at the same time. It also assumes that the flow of customers does not change throughout the day, which is not valid if some times of the day are busier than others.
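For completeness, the computational parts of this example (parts 3 through 5) can be reproduced with a few lines of Python; this sketch is an illustration only, with the decay parameter $$m = 0.5$$ taken from part a.

```python
import math

m = 0.5  # one arrival every two minutes on average

print(round(1 - math.exp(-m * 1), 4))       # P(X < 1) ≈ 0.3935 (part 3)
print(round(math.exp(-m * 5), 4))           # P(X > 5) ≈ 0.0821 (part 4)
print(round(math.log(1 - 0.70) / (-m), 2))  # 70th percentile ≈ 2.41 minutes (part 5)
```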
Exercise $$\PageIndex{5}$$
Suppose that on a certain stretch of highway, cars pass at an average rate of five cars per minute. Assume that the duration of time between successive cars follows the exponential distribution.
1. On average, how many seconds elapse between two successive cars?
2. After a car passes by, how long on average will it take for another seven cars to pass by?
3. Find the probability that after a car passes by, the next car will pass within the next 20 seconds.
4. Find the probability that after a car passes by, the next car will not pass for at least another 15 seconds.
1. At a rate of five cars per minute, we expect $$\dfrac{60}{5} = 12$$ seconds to pass between successive cars on average.
2. Using the answer from part a, we see that it takes $$(12)(7) = 84$$ seconds for the next seven cars to pass by.
3. Let $$T =$$ the time (in seconds) between successive cars.
The mean of $$T$$ is 12 seconds, so the decay parameter is $$\dfrac{1}{12}$$ and $$T \sim Exp\left(\dfrac{1}{12}\right)$$. The cumulative distribution function of $$T$$ is $$P(T < t) = 1 – e^{−\dfrac{t}{12}}$$. Then $$P(T < 20) = 1 – e^{−\dfrac{20}{12}} \approx 0.8111$$.
Figure $$\PageIndex{11}$$.
$$P(T > 15) = 1 – P(T < 15) = 1 – (1 – e^{−\dfrac{15}{12}}) = e^{−\dfrac{15}{12}} \approx 0.2865$$.
## Memorylessness of the Exponential Distribution
In Example $$\PageIndex{5}$$, recall that the amount of time between customers is exponentially distributed with a mean of two minutes ($$X \sim Exp (0.5)$$). Suppose that five minutes have elapsed since the last customer arrived. Since an unusually long amount of time has now elapsed, it would seem to be more likely for a customer to arrive within the next minute. With the exponential distribution, this is not the case: the additional time spent waiting for the next customer does not depend on how much time has already elapsed since the last customer. This is referred to as the memoryless property. Specifically, the memoryless property says that
$P(X > r + t | X > r) = P(X > t)$
for all $$r \geq 0$$ and $$t \geq 0$$.
For example, if five minutes has elapsed since the last customer arrived, then the probability that more than one minute will elapse before the next customer arrives is computed by using $$r = 5$$ and $$t = 1$$ in the foregoing equation.
$$P(X > 5 + 1 | X > 5) = P(X > 1) = e^{(–0.5)(1)} \approx 0.6065$$.
This is the same probability as that of waiting more than one minute for a customer to arrive after the previous arrival.
The exponential distribution is often used to model the longevity of an electrical or mechanical device. In Example $$\PageIndex{3}$$, the lifetime of a certain computer part has the exponential distribution with a mean of ten years ($$X \sim Exp(0.1)$$). The memoryless property says that knowledge of what has occurred in the past has no effect on future probabilities. In this case it means that an old part is not any more likely to break down at any particular time than a brand new part. In other words, the part stays as good as new until it suddenly breaks. For example, if the part has already lasted ten years, then the probability that it lasts another seven years is $$P(X > 17 | X > 10) = P(X > 7) = 0.4966$$.
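Beyond the algebra, the memoryless property can be checked by simulation. The sketch below is only an illustration and assumes NumPy is available; it estimates $$P(X > 6 | X > 5)$$ and $$P(X > 1)$$ for the customer-arrival example ($$X \sim Exp(0.5)$$) and shows that both are close to $$e^{-0.5} \approx 0.6065$$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # mean 2 minutes, i.e. decay parameter 0.5

p_conditional = np.mean(x[x > 5] > 6)  # estimate of P(X > 6 | X > 5)
p_fresh = np.mean(x > 1)               # estimate of P(X > 1)
print(round(p_conditional, 3), round(p_fresh, 3))  # both should be close to e**-0.5 ≈ 0.6065
```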
Example $$\PageIndex{6}$$
Refer to Example $$\PageIndex{1}$$, where the time a postal clerk spends with his or her customer has an exponential distribution with a mean of four minutes. Suppose a customer has spent four minutes with a postal clerk. What is the probability that he or she will spend at least an additional three minutes with the postal clerk?
The decay parameter of $$X$$ is $$m = \dfrac{1}{4} = 0.25$$, so $$X \sim Exp(0.25)$$.
The cumulative distribution function is $$P(X < x) = 1 - e^{–0.25x}$$.
We want to find $$P(X > 7 | X > 4)$$. The memoryless property says that $$P(X > 7 | X > 4) = P(X > 3)$$, so we just need to find the probability that a customer spends more than three minutes with a postal clerk.
This is $$P(X > 3) = 1 - P(X < 3) = 1 - (1 - e^{-0.25 \cdot 3}) = e^{–0.75} \approx 0.4724$$.
On the home screen, enter 1 – (1 – e^(–0.25*3)) or e^(–0.25*3).
Exercise $$\PageIndex{6}$$
Suppose that the longevity of a light bulb is exponential with a mean lifetime of eight years. If a bulb has already lasted 12 years, find the probability that it will last a total of over 19 years.
Let $$T =$$ the lifetime of the light bulb. Then $$T \sim Exp\left(\dfrac{1}{8}\right)$$.
The cumulative distribution function is $$P(T < t) = 1 − e^{-\dfrac{t}{8}}$$
We need to find $$P(T > 19 | T > 12)$$. By the memoryless property,
$$P(T > 19 | T > 12) = P(T > 7) = 1 - P(T < 7) = 1 - (1 - e^{-7/8}) = e^{-7/8} \approx 0.4169$$.
1 - (1 – e^(–7/8)) = e^(–7/8).
## Relationship between the Poisson and the Exponential Distribution
There is an interesting relationship between the exponential distribution and the Poisson distribution. Suppose that the time that elapses between two successive events follows the exponential distribution with a mean of $$\mu$$ units of time. Also assume that these times are independent, meaning that the time between events is not affected by the times between previous events. If these assumptions hold, then the number of events per unit time follows a Poisson distribution with mean $$\lambda = \dfrac{1}{\mu}$$. Recall from the chapter on Discrete Random Variables that if $$X$$ has the Poisson distribution with mean $$\lambda$$, then $$P(X = k) = \dfrac{\lambda^{k}e^{-\lambda}}{k!}$$. Conversely, if the number of events per unit time follows a Poisson distribution, then the amount of time between events follows the exponential distribution. $$(k! = k(k-1)(k - 2)(k - 3) \dotsm (3)(2)(1))$$
Suppose $$X$$ has the Poisson distribution with mean $$\lambda$$. Compute $$P(X = k)$$ by entering 2nd, VARS(DISTR), C: poissonpdf$$(\lambda, k$$). To compute $$P(X \leq k$$), enter 2nd, VARS (DISTR), D:poissoncdf($$\lambda, k$$).
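If you are not using a TI calculator, SciPy provides the same two quantities. The sketch below assumes SciPy is installed; poisson.pmf plays the role of poissonpdf and poisson.cdf the role of poissoncdf.

```python
from scipy.stats import poisson

lam, k = 4, 5
print(round(poisson.pmf(k, lam), 4))  # P(X = 5) for a Poisson mean of 4, about 0.1563
print(round(poisson.cdf(4, lam), 4))  # P(X <= 4), about 0.6288
```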
Example $$\PageIndex{7}$$
At a police station in a large city, calls come in at an average rate of four calls per minute. Assume that the time that elapses from one call to the next has the exponential distribution. Take note that we are concerned only with the rate at which calls come in, and we are ignoring the time spent on the phone. We must also assume that the times spent between calls are independent. This means that a particularly long delay between two calls does not mean that there will be a shorter waiting period for the next call. We may then deduce that the total number of calls received during a time period has the Poisson distribution.
1. Find the average time between two successive calls.
2. Find the probability that after a call is received, the next call occurs in less than ten seconds.
3. Find the probability that exactly five calls occur within a minute.
4. Find the probability that less than five calls occur within a minute.
5. Find the probability that more than 40 calls occur in an eight-minute period.
1. On average, four calls occur per minute, so on average 15 seconds, or $$\dfrac{15}{60} = 0.25$$ minutes, elapse between successive calls.
2. Let $$T =$$ time elapsed between calls. From part a, $$\mu = 0.25$$, so $$m = \dfrac{1}{0.25} = 4$$. Thus, $$T \sim Exp(4)$$.
The cumulative distribution function is $$P(T < t) = 1 - e^{–4t}$$.
The probability that the next call occurs in less than ten seconds (ten seconds $$= \dfrac{1}{6}$$ minute) is $$P\left(T < \dfrac{1}{6}\right) = 1 - e^{-4\left(\dfrac{1}{6}\right)} \approx 0.4866$$.
Figure $$\PageIndex{13}$$
3. Let $$X =$$ the number of calls per minute. As previously stated, the number of calls per minute has a Poisson distribution, with a mean of four calls per minute.
Therefore, $$X \sim Poisson(4)$$, and so $$P(X = 5) = \dfrac{4^{5}e^{-4}}{5!} \approx 0.1563$$. ($$5! = (5)(4)(3)(2)(1)$$)
$$\text{poissonpdf}(4, 5) = 0.1563$$.
4. Keep in mind that $$X$$ must be a whole number, so $$P(X < 5) = P(X \leq 4)$$.
To compute this, we could take $$P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4)$$.
Using technology, we see that $$P(X \leq 4) = 0.6288$$.
$$\text{poissoncdf}(4, 4) = 0.6288$$
5. Let $$Y =$$ the number of calls that occur during an eight minute period.
Since there is an average of four calls per minute, there is an average of $$(8)(4) = 32$$ calls during each eight minute period.
Hence, $$Y \sim Poisson(32)$$. Therefore, $$P(Y > 40) = 1 - P(Y \leq 40) = 1 - 0.9294 = 0.0707$$.
$$1 - \text{poissoncdf}(32, 40) = 0.0707$$
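The same four probabilities can be reproduced in Python; this is only a sketch assuming SciPy is available, with the rates taken from the example.

```python
import math
from scipy.stats import poisson

m = 4  # calls per minute, so T ~ Exp(4)
print(round(1 - math.exp(-m * (1 / 6)), 4))  # part b: P(T < 1/6 minute) ≈ 0.4866
print(round(poisson.pmf(5, 4), 4))           # part c: P(X = 5) ≈ 0.1563
print(round(poisson.cdf(4, 4), 4))           # part d: P(X <= 4) ≈ 0.6288
print(round(1 - poisson.cdf(40, 32), 4))     # part e: P(Y > 40) ≈ 0.07 (the text reports 0.0707)
```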
Exercise $$\PageIndex{7}$$
In a small city, the number of automobile accidents per week follows a Poisson distribution with an average of three accidents per week.
1. Calculate the probability that at most two accidents occur in any given week.
2. What is the probability that there are at least two weeks between any two accidents?
1. Let $$X =$$ the number of accidents per week, so that $$X \sim Poisson(3)$$. We need to find $$P(X \leq 2) \approx 0.4232$$
$$\text{poissoncdf}(3, 2)$$
2. Let $$T =$$ the time (in weeks) between successive accidents.
Since the number of accidents occurs with a Poisson distribution, the time between accidents follows the exponential distribution.
If there are an average of three per week, then on average there is $$\mu = \dfrac{1}{3}$$ of a week between accidents, and the decay parameter is $$m = \dfrac{1}{\left(\dfrac{1}{3}\right)} = 3$$.
To find the probability that there are at least two weeks between two accidents, $$P(T > 2) = 1 - P(T < 2) = 1 – (1 – e^{(–3)(2)}) = e^{–6} \approx 0.0025$$.
e^(-3*2).
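A minimal Python check of both parts, assuming SciPy is available (the rounding targets come from the answers above):

```python
import math
from scipy.stats import poisson

print(round(poisson.cdf(2, 3), 4))  # part 1: P(X <= 2) ≈ 0.4232
print(round(math.exp(-3 * 2), 4))   # part 2: P(T > 2 weeks) = e^(-6) ≈ 0.0025
```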
## Review
If $$X$$ has an exponential distribution with mean $$\mu$$, then the decay parameter is $$m = \dfrac{1}{\mu}$$, and we write $$X \sim Exp(m)$$ where $$x \geq 0$$ and $$m > 0$$. The probability density function of $$X$$ is $$f(x) = me^{-mx}$$ (or equivalently $$f(x) = \dfrac{1}{\mu}e^{-\dfrac{x}{\mu}}$$). The cumulative distribution function of $$X$$ is $$P(X \leq x) = 1 - e^{-mx}$$.
The exponential distribution has the memoryless property, which says that future probabilities do not depend on any past information. Mathematically, it says that $$P(X > x + k | X > x) = P(X > k)$$.
If $$T$$ represents the waiting time between events, and if $$T \sim Exp(\lambda)$$, then the number of events $$X$$ per unit time follows the Poisson distribution with mean $$\lambda$$. The probability density function of $$X$$ is $$P(X = k) = \dfrac{\lambda^{k}e^{-\lambda}}{k!}$$. This may be computed using a TI-83, 83+, 84, 84+ calculator with the command $$\text{poissonpdf}(\lambda, k)$$. The cumulative distribution function $$P(X \leq k)$$ may be computed using the TI-83, 83+, 84, 84+ calculator with the command $$\text{poissoncdf}(\lambda, k)$$.
## Formula Review
Exponential: $$X \sim Exp(m)$$ where $$m =$$ the decay parameter
• pdf: $$f(x) = me^{(–mx)}$$ where $$x \geq 0$$ and $$m > 0$$
• cdf: $$P(X \leq x) = 1 - e^{(–mx)}$$
• mean $$\mu = \dfrac{1}{m}$$
• standard deviation $$\sigma = \mu$$
• percentile $$k: k = \dfrac{ln(1 - \text{Area To The Left Of k})}{-m}$$
• $$P(X > x) = e^{(–mx)}$$
• $$P(a < X < b) = e^{(–ma)} - e^{(–mb)}$$
• Memoryless Property: $$P(X > x + k | X > x) = P(X > k)$$
• Poisson probability: $$P(X = k) = \dfrac{\lambda^{k}e^{-\lambda}}{k!}$$ with mean $$\lambda$$
• $$k! = k*(k - 1)*(k - 2)*(k - 3) \dotsc 3*2*1$$
## Glossary
decay parameter
The decay parameter describes the rate at which probabilities decay to zero for increasing values of $$x$$ . It is the value $$m$$ in the probability density function $$f(x) = me^{(-mx)}$$ of an exponential random variable. It is also equal to $$m = \dfrac{1}{\mu}$$ , where $$\mu$$ is the mean of the random variable.
memoryless property
For an exponential random variable $$X$$, the memoryless property is the statement that knowledge of what has occurred in the past has no effect on future probabilities. This means that the probability that $$X$$ exceeds $$x + k$$, given that it has exceeded $$x$$, is the same as the probability that $$X$$ would exceed $$k$$ if we had no knowledge about it. In symbols we say that $$P(X > x + k | X > x) = P(X > k)$$
Poisson distribution
If there is a known average of $$\lambda$$ events occurring per unit time, and these events are independent of each other, then the number of events $$X$$ occurring in one unit of time has the Poisson distribution. The probability of k events occurring in one unit time is equal to $$P(X = k) = \dfrac{\lambda^{k}e^{-\lambda}}{k!}$$.
This page titled 5.4: The Exponential Distribution is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. |
# Problem 1: Take a look at the graph of the derivative, f′(x), shown below

## Question

Problem 1. Take a look at the graph of the derivative, f′(x), shown below. Assume the graph extends beyond the points indicated on the graph. [Figure: graph of f′(x); visible axis values include −0.5, 0.5, and 2.]

(a) Draw an approximate graph of the function f(x) based on the graph shown above. Attach a picture of your graph to your post. Give a brief explanation of how you were able to come up with the graph.

(b) Analyze the above graph of f′(x), then find ALL inflection points, if any, and the intervals where the function f(x) (not the derivative!) is concave up, and where it is concave down.
# Vertical line in matrix using LaTeXiT [duplicate]
Possible Duplicate:
What's the best way make an “augmented” coefficient matrix?
I am trying to make a vertical line in a matrix in LaTeXiT. I have read that it should be possible using the following:
\begin{bmatrix}{cccc|c}
1 & 0 & 3 & -1 & 0 \\
0 & 1 & 1 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 \\
\end{bmatrix}
But when doing so, this is my output:
The bmatrix environment does not provide this facility (see section 4.1 of the amsmath package documentation). You can use the array environment instead.
$\left[ \begin{array}{cccc|c} 1 & 0 & 3 & -1 & 0 \\ 0 & 1 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} \right]$
• Possibly adding @{} at the sides: @{}cccc|c@{}, so as to give a similar result to bmatrix. – egreg Nov 3 '11 at 9:33
• Thank you very much. The output of this is just like a bmatrix. – SimonBS Nov 3 '11 at 9:43
• @egreg: Could you explain how the @{} works? I see that it has the effect of reducing the whitespace between the [ and ] and the matrix contents. – user001 Mar 13 '12 at 17:30
• With @{...} you tell LaTeX to put ... in place of the default intercolumn space, which is applied also at the start and end of the tabular. – egreg Mar 13 '12 at 17:37
That won't work with the matrix environment from amsmath, however Stefan Kottwitz wrote about a workaround for this on his blog.
\documentclass{article}
\usepackage{amsmath}
\makeatletter
\renewcommand*\env@matrix[1][*\c@MaxMatrixCols c]{%
\hskip -\arraycolsep
\let\@ifnextchar\new@ifnextchar
\array{#1}}
\makeatother
\begin{document}
$$\begin{bmatrix}[cccc|c] 1 & 0 & 3 & -1 & 0 \\ 0 & 1 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix}$$
\end{document}
• @barbarabeeton This would be a nice addition for amsmath, wouldn't it? – egreg Nov 3 '11 at 9:36
• @Chou It's up to the person asking the question to decide which answer was most helpful for him or her. It's not always the highest voted answer that is accepted. – Torbjørn T. Jun 21 '15 at 7:37
• I know the rules here. You may take my words as an indirect praise; for your work helped me. :) By the way, I was not aware of the number of votes til your comment coming out. – Megadeth Jun 21 '15 at 9:30
• The double-backslash line-break indicators seem to have morphed into single-backslash spacers. You may want to edit the code a bit. – Mico Jan 23 '17 at 12:18
• @Mico Thanks, will edit. (It's a well known problem as you may know, meta.tex.stackexchange.com/questions/7168/…) – Torbjørn T. Jan 23 '17 at 12:26 |
# Extrapolation
For the journal of speculative fiction, see Extrapolation (journal). For the John McLaughlin album, see Extrapolation (album).
In mathematics, extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced so as to arrive at a (usually conjectural) knowledge of the unknown [1] (e.g. a driver extrapolates road conditions beyond his sight while driving). The extrapolation method can be applied in the interior reconstruction problem.
Example illustration of the extrapolation problem, consisting of assigning a meaningful value at the blue box, at ${\displaystyle x=7}$, given the red data points.
## Extrapolation methods
A sound choice of which extrapolation method to apply relies on a prior knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods.[2] Crucial questions are, for example, if the data can be assumed to be continuous, smooth, possibly periodic etc.
### Linear extrapolation
Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function or not too far beyond the known data.
If the two data points nearest the point ${\displaystyle x_{*}}$ to be extrapolated are ${\displaystyle (x_{k-1},y_{k-1})}$ and ${\displaystyle (x_{k},y_{k})}$, linear extrapolation gives the function:
${\displaystyle y(x_{*})=y_{k-1}+{\frac {x_{*}-x_{k-1}}{x_{k}-x_{k-1}}}(y_{k}-y_{k-1}).}$
(which is identical to linear interpolation if ${\displaystyle x_{k-1}<x_{*}<x_{k}}$). It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques on the data points chosen to be included. This is similar to linear prediction.
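As a concrete illustration, here is a minimal Python sketch of this formula; the function name and the sample points are mine (the actual data points in the figure are not given), so treat the numbers as a toy example.

```python
def linear_extrapolate(x_star, p_prev, p_last):
    """Extrapolate y at x_star from the two known points nearest to it."""
    (x0, y0), (x1, y1) = p_prev, p_last
    return y0 + (x_star - x0) / (x1 - x0) * (y1 - y0)

# Toy example: the last two known points are (5, 4.0) and (6, 5.0); extrapolate at x = 7.
print(linear_extrapolate(7, (5, 4.0), (6, 5.0)))  # 6.0
```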
### Polynomial extrapolation
Lagrange extrapolations of the sequence 1,2,3. Extrapolating by 4 leads to a polynomial of minimal degree (cyan line).
A polynomial curve can be created through the entire known data or just near the end. The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data.
High-order polynomial extrapolation must be used with due care. For the example data set and problem in the figure above, anything above order 1 (linear extrapolation) will possibly yield unusable values; an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon.
### Conic extrapolation
A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, it will loop back and rejoin itself. A parabolic or hyperbolic curve will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template (on paper) or with a computer.
### French curve extrapolation
French curve extrapolation is a method suitable for any distribution that has a tendency to be exponential, but with accelerating or decelerating factors.[3] This method has been used successfully in providing forecast projections of the growth of HIV/AIDS in the UK since 1987 and variant CJD in the UK for a number of years. Another study has shown that extrapolation can produce the same quality of forecasting results as more complex forecasting strategies.[4]
## Quality of extrapolation
Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data are smooth, then a non-smooth function will be poorly extrapolated.
In terms of complex time series, some experts have discovered that extrapolation is more accurate when performed through the decomposition of causal forces.[5]
Even for proper assumptions about the function, the extrapolation can diverge severely from the function. The classic example is truncated power series representations of sin(x) and related trigonometric functions. For instance, taking only data from near the x = 0, we may estimate that the function behaves as sin(x) ~ x. In the neighborhood of x = 0, this is an excellent estimate. Away from x = 0 however, the extrapolation moves arbitrarily away from the x-axis while sin(x) remains in the interval [−1,1]. I.e., the error increases without bound.
Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over a larger interval near x = 0, but will produce extrapolations that eventually diverge away from the x-axis even faster than the linear approximation.
This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case, it is impossible to satisfy all possible function behaviors with a workably small set of potential behavior.
## Extrapolation in the complex plane
In complex analysis, a problem of extrapolation may be converted into an interpolation problem by the change of variable ${\displaystyle {\hat {z}}=1/z}$. This transform exchanges the part of the complex plane inside the unit circle with the part of the complex plane outside of the unit circle. In particular, the compactification point at infinity is mapped to the origin and vice versa. Care must be taken with this transform however, since the original function may have had "features", for example poles and other singularities, at infinity that were not evident from the sampled data.
Another problem of extrapolation is loosely related to the problem of analytic continuation, where (typically) a power series representation of a function is expanded at one of its points of convergence to produce a power series with a larger radius of convergence. In effect, a set of data from a small region is used to extrapolate a function onto a larger region.
Again, analytic continuation can be thwarted by function features that were not evident from the initial data.
Also, one may use sequence transformations like Padé approximants and Levin-type sequence transformations as extrapolation methods that lead to a summation of power series that are divergent outside the original radius of convergence. In this case, one often obtains rational approximants.
## Fast extrapolation
The extrapolated data often convolute to a kernel function. After data is extrapolated, the size of data is increased N times, here N=2~3. If this data needs to be convoluted to a known kernel function, the numerical calculations will increase log(N)*N times even with FFT(fast Fourier transform). There exists an algorithm, it analytically calculates the contribution from the part of the extrapolated data. The calculation time can be omitted compared with the original convolution calculation. Hence with this algorithm the calculations of a convolution using the extrapolated data is nearly not increased. This is referred as the fast extrapolation. The fast extrapolation has been applied to CT image reconstruction.[6] |
# Consider the standard simple regression model y = β₀ + β₁x + u discussed in our lectures

###### Question:

Consider the standard simple regression model y = β₀ + β₁x + u as discussed in our lectures. Under the Gauss–Markov assumptions discussed in lectures (such as E(u|x) = 0 and Var(u|x) = σ²), the usual OLS estimator, denoted by β̂₁, is an unbiased estimator of β₁. Let β̃₁ be the estimator of β₁ obtained by assuming that the intercept is zero. That is, β̃₁ is the solution to min over b₁ of Σᵢ (yᵢ − b₁xᵢ)².

(i) (10 points) Show that E(β̃₁) = β₁ + β₀ (Σᵢ xᵢ)/(Σᵢ xᵢ²). Hint: After writing the FOC, use the fact that yᵢ = β₀ + β₁xᵢ + uᵢ.

(ii) Discuss the conditions under which β̃₁ is an unbiased estimator of β₁.

(iii) (10 points) Show that Var(β̃₁) = σ²/(Σᵢ xᵢ²).

(iv) (7 points) It can be shown that Var(β̃₁) ≤ Var(β̂₁). Assuming that this holds, comment on the tradeoff between bias and variance when choosing between β̂₁ and β̃₁.
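Although the exercise asks for an algebraic derivation, a small Monte Carlo sketch can make the bias formula in part (i) concrete. This is an illustration only, written in Python with NumPy, and the parameter values are arbitrary choices of mine rather than part of the problem.

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, sigma = 2.0, 0.5, 1.0
x = rng.uniform(1, 5, size=200)  # regressors held fixed across replications

estimates = []
for _ in range(5000):
    u = rng.normal(0, sigma, size=x.size)
    y = beta0 + beta1 * x + u
    estimates.append(np.sum(x * y) / np.sum(x ** 2))  # slope estimator with no intercept

print(np.mean(estimates))                          # simulated E(beta1_tilde)
print(beta1 + beta0 * np.sum(x) / np.sum(x ** 2))  # beta1 + beta0 * (sum x)/(sum x^2)
```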
# regsvr32 fails with GetLastError returns 0x0000007e
I'm trying to register a Delphi-made library (with dependencies). On the first try, regsvr32 failed saying it could not find the specified module, for which I found an answer and copied all dependencies into the same directory where my DLL resides.
But now, regsvr32 fails with the message GetLastError returned 0x0000007e, and I could not find anywhere what this specific error code means. All mentions I found relate to a specific program or library and how to solve it for that specific reason. Examples:
The command I use to register is:
regsvr32 C:\path\to\library.dll
Any idea of what causes this error and how to solve it?
UPDATE: Seems that the error code corresponded to ERROR_MOD_NOT_FOUND, and it meant (in a really obscure way) "The specified module could not be found."... same error I had before.
I copied the entire folder of a running instalation into the test machine (instead of trying to make a new instalation) and I was able to register the library. I'll now have to identify which one was the file I was needing.
-
I can only find three definitions for that error code:
# for hex 0x7e / decimal 126: ERROR_MOD_NOT_FOUND (winerror.h), i.e. "The specified module could not be found."
# What is Encapsulated PostScript used for?
If the advise to use Adobe Illustrator does not work, try Photoshop. Your help would be greatly appreciated! The first 4 scales were perfect but the 5th was too large. Encapsulated PostScript (EPS) is a standard format for importing and exporting PostScript language files in all environments. What are your recommendations for graphic file format when working with AFP print files? An EPS file isn’t by default suitable for printing – not without a special printer or software to make it compatible, at least. Spent 3 hours trying to figure out where the art was. I can only import .eps files and most people arenot able to provide such format. EPS is not a file format that is suitable for web use. what file type do i save an ill file as if i need to take my jumpdrive to class and open it on mac but still be able to work on it there on the school’s mac computer??? The DSC is a special file format for PostScript documents. first, off NEVER OUTLINE FONTS. I need one for my pc. EPSI is mainly used on Unix systems. Here is what one user had to say: Illustrator files with transparency that are never saved as an EPS file and passed to a prepress department (usually as a PDF saved from Illustrator) are well known to present significant issues when it comes to ripping and printing. In this case, the imported file is usually displayed as a grayed out box or a box with diagonal lines running through it. My questions is does EOF is required for printing. An EPS file internally contains a description of such an object or layout using the PostScript page description language. Please anyone tell me what does the below line signify in EPS files and what can be done to produce it : BeginEPSF I have created a logo in iDraw . 33 dup scale. sir i have data entry work throw the net i dont know where it come from which fromate i have change the formate into the .tiff and it came different lang so that i didnt ahve wht i want One work-around that I am aware of is exporting the Word file as a PDF and then either exporting each page from that PDF as an EPS using a tool like Acrobat Professional or placing each page of the PDF in a layout application that is capable of generating EPS files. The EPS program must not use operators that initialize or permanently change the state of the machine in a manner that cannot be undone by the enclosing application’s use of save and restore (e.g.. the operators starting with ‘init’ like initgraphics). I also don’t know if that matters. http://pages.cs.wisc.edu/~ghost/. If you originally filled a landscape A4 or letter size canvas with a logo, the A4 or letter sized preview image can easily exceed half a megabyte. (.ai, .psd, .eps) I haven’t been able to get an answer from Adobe. Thanks. They should indicate that this is an EPS-file. There are printers that support PostScript as well as other printer languages such as GDI or PCL. or QuarkXpress…, I am not a linux user anymore so I don’t know any of the tools. Another solution would be to import the Word file in a lay-out or drawing application, tweak the layout until it fits your needs and then export to EPS. It should work. Also this is a format that if i wanna print on a banner or business card the image will not be distorted like a jpg would correct?? I have created ai and eps files for people to download from my website and cut on their vinyl cutters. An EPS file can contain information that is not generic in nature but that can only be processed by a specific application. 
You might try asking around on http://forums.b4print.com – maybe one of those guys knows about this. I’ve only used it with Windows XP. Why not export the CMYK-file directly as an EPS from Photoshop, avoiding the complexity of yet another app (Illustrator)? The preview image is ignored when you print to a PostScript device. I selected the font size but they certainly don’t look the same size when placed into an InDesign document. In fact, try to stay away from EPS at all. You can use the outline tools from Illustrator itself to do this. Creator 2016-03-13 12:58:26. Why the printer cannot handle PSD is beyond me. There might be some clean up. Is there a way to convert this? Hi, That left you with just the bitmap preview image of the EPS – which is most likely what you are also experiencing as well. By continuing to browse this website you agree to the use of cookies. Okay, I tried to read trough all the posts but got to 75% and quit. I’m having a problem viewing some .eps files in Word. have you tried scribus? I need to extract one colour out of the CMYK file and add it as a 5th colour. All eps-files with bitmaps in them should fail the test. I am thinking the problem lies with the image files. When I insert an .eps file into Word (such as my company logo), I have no problem viewing it in Word. However,I am unable to open them I get an error “Operation could not be complete because of an unknown error”. I’ve been kept busy at work lately, hence these late replies: – If you open an EPS-file which has text in it for which the font is not embedded, the application used to open the file will use another font. Windows Metafile; TrueVision TARGA; Photoshop Document; Portable Document Format (PDF) Picture Exchange, aka, PC Paintbrush; Macintosh Picture; High Efficiency Image File Format; Ajouter des lignes Nouvelle vue Enregistrer. Check out http://www.brandsoftheworld.com or http://www.webchantier.com/_index_en.html or http://www.pdf-internacional.com/ or http://www.logoclipart.com/logoart.html or http://www.free-logotypes.com/ or http://www.vectorportal.com/ or http://www.allfreelogo.com/ or http://logotypes.designer.am/srch/search.php or http://www.seeklogo.com/ If the extension got lost in file transfer or is not shown, the only other alternative that I know is to open the file with an editor or word processor and look at the first lines of text. Without having the option to remove and re-embed the art, my job is to run a Macro and then save each file as a PDF. i have some eps file and i want to import them into a word document. The quality of blends may suffer and in some rare cases, people have reported issues with fonts (characters being replaced by weird characters or spaces). Can I convert PDF into EPS? Anyone have any ideas on what causes this or how to fix it? You need not concern yourself about such issues as they are not yours to worry about. The full details for the DSC can (and should) be gotten from Adobe. The figures look scrambled both in Word 2003 and in Word 2007. otherwise your just sending them an image file named different but still an image file. PLEASE ANSWER AS IT’S URGENT..THANKS. EPSI is an all ASCII (no binary data or headers) version of EPS. I also get different behavior if I create the figure with the -tiff preview (get regularly spaced gaps in all the lines) or without (get smaller segments of the lines broken up and jumbled around). EPSF simply stands for Encapsulated PostScript Format. I can’t figure it out! If they are right, how do I do that. 
By selecting Illustrator 9 instead of Illustrator CS5, I shaved 200K off the file size of an EPS. EPS is a PostScript image file format that is compatible with PostScript printers and is often used for transferring files between various graphics applications. Eine Encapsulated Postscript -Datei (EPS -Datei) ist eine Grafikdatei in der Seitenbeschreibungssprache PostScript, die besondere Anforderungen erfüllt, um das Einbinden in ein Dokument zu ermöglichen. I really don’t want to spend $600 to buy Illustrator as I just need to do these minor edits once in a while. Although we certainly do not recommend that new graphical content be stored in EPS format (except to satisfy the need to import data into page layout programs that aren’t quite PDF-centric — no need to mention names here! My osx is 10.6.7. Is one any better than the other in terms of quality? Very grateful for any brainy ideas on the following puzzle. encapsulated PostScript: encapsulated PostScript [the ~] noun. bonjour, If I send out an EPS file for the purpose of printing (created in Illustrator), can I rest assured that all images and fonts in EPS file are automatically embeded and visible on another computer, or I need to embed in AI before creating the EPS file? Programs such as SignGo are designed to drive cutting plotters so need to extract outlines from the EPS file to make them usable. – Move to a professional lay-out application, in which case Adobe Illustrator wuld indeed do the trick. There are conversion tools such as GraphicConverter that can do this. Text can indeed be plain readable text (I remember fixing a typo this way ages ago) but I don’t know any longer how it is identified. Or, just in case anyone knows, how can I convert a pdf to a tiff!? Is this the way professionals do it? The PICT file is stored in the resource fork of the EPS file while the actual PostScript data are stored in the data fork. Wikipedia has an elaborate but fairly technical page on the Encapsulated PostScript file format. Windows 10: Another Program Is Currently Using This File, Microsoft Teams: How to Organize Your Files, Microsoft Teams: How to Increase Font Size, Zoom: How to See a Preview of Your Webcam When You Join a Video Meeting, Can You See Who Muted You on Teams? You should try finding out if its when you change the size while in photoshop and make a bigger image/canvas and see the difference between the files of you original image and the one you made bigger. how i can import this documents without quality losses? Very new to all these applications – have a logo that I have done as an AI file – and now want to save it for my client to use…. @Benro I have an EPS file which I open with “Corel Paint Shop Pro Photo X2”. Image manipulation programs like Adobe Photoshop can also save bitmap images as EPS-files. Please help us Thanks ! swirl- type design. I saved it as a esp. EPS images can be sized and resized without loss of quality, which is a problem other … i have created a page in powerpoint , now can i change it into a eps? At the moment they are pdfs. EPS files can optionally contain a bitmapped image preview so that systems that can’t render PostScript directly can at least display a crude representation of what the graphic will look like. Categories. I need to send our logo for some screen printing and they want an eps file with vector lines. How will COVID-19 affect the printing industry in the long run? Thanks for visiting! We use Publisher to create posters. . Hope this helps some of you out there. 
Go here and Install this Without knowing from which application or type of file you want to start, it is going to be difficult to answer that question. The problem I have is when I place a CMYK EPSF file as a link into CS4 Illustrator and then save the file as a PDF using the standard high quality print settings. Insbesondere beschreibt EPS im Gegensatz zu allgemeinem PostScript immer nur eine Seite. I have some eps files, which I tried to open using Adobe Illustrator CS3. I received an artwork in jpg format from my client, however I need a vector file in ai. http://www.eternalstorms.at/utilities/epsqlplg/index.html. If you want an accurately editable transfer the best format to use is EMF (Windows Enhanced Metafile) in my opinion. Is it ok to design a logo in Photoshop and then save that as an EPS file? What EPS expects to see is the PostScript FontName. Hi, I got a eps file without EOF. What is the difference between an EPS and an AI file? EPS files also include an embedded preview image in bitmap format. You could even place an eps in a layout application, take a screen capture and use that to create a web-optimized image. I have adobe ps cs2. Your email address will not be published. It is Encapsulated PostScript. Go play somewhere else. I recently received a file passed down through about 4 people at my company; its an eps file that no1 can figure out how to open – we’ve tried Corel Draw, Adobe Illustrator CS4 (trial), Adobe photoshop cs2, Importing into word and other programs — you name it, we’ve done it. A typical reason to do that is to have an EPSF stream that describes a picture you can put in a larger document. The advantage of saving as an EPS is that it is easier to use the file with other (non-Adobe) applications. Hi, I am looking to start an apparel company that will have printed graphics. I then make a high-end PDF to have it professionally printed. Below is what Adobe Bridge displays. For example, "Helvetica-Roman" or "BookmanOldStyle-Regular" (I don't know that's actually the right name, just a guess). (Although the preview is not crisp, it prints beautifully.). My agency provides logos in various formats to clients (jpeg, tiff, eps). Hi, Lauren you are a hoot. Illustrator was initially released in 1987 and it continues to be updated at regular intervals, and is now included as part of the Adobe Creative Cloud. It is a dynamically typed, concatenative programming language and was created at Adobe Systems by John Warnock, Charles Geschke, Doug Brotz, Ed … The resulting image within the PDF becomes Lab 24 bit. Any suggestions? As long as your are happy with that: no problemo! § Why not pdf? Do I just open the PSD file in photoshop, make an outline of the picture and then save it as EPS? 0. can u tell me an appropriate software also?] how do I do this? You can, for example, use the graphicx package as follows: \documentclass{article} \usepackage{graphicx} \begin{document} \includegraphics{fig1} \end{document} If you use LaTeX and dvips to process this input file, the output will include the figure from fig1.eps. EPS-files can contain PostScript level 2 operators that make it impossible to output the file on an old PostScript level 1 device. Does anyone know how to resolve this problem? PNG offers good compression ratios so the file size might also be fine. I have been handed Word docs that have EPS files (originally Illustrator) inserted in them. Spell Error: confirm in “An EPS file must conform to the Adobe”. 
I bought Powerpoint for the Mac but it will not import EPS files. I am trying to publish some eps files for download from a website. related. Encapsulated PostScript files can be easily embedded into TeX documents. Is there a certain type of program that I need in order to open and/or convert this type of file. An encapsulated PostScript file is a PostScript language program describing the appearance of a single page. I’ve read the entire string (and quite a few other sites) but haven’t found a situation quite like mine… the one from Mike Z on 1/29/10 is the closest. I have a logo that was sent in a jpg format. to do this select your text input then do this (in CS3)–>Object/Create outlines. When sent to us and imported into Corel, it imports fine, but comes through as a bitmap and cannot be separated. Bridge is bundled with applications such as the Adobe Creative Suite or Photoshop. When I print the same doc from a PC (to the same printer) it is pixelated. One suggestion has been to use PNG format but I am concerned my illustrations will lose detail when printed. Since these files are vector based, changing their scale does not impact their quality. Why shouldn’t I save the graphic files as AI instead; they’re smaller files. can I COVERT COREL DRAW’S ,CDR FILE INTO EPS FORMAT..IF YES THEN HOW? Files are placed in InDesign CS3 for printing. Forgive the simplistic question, but I have saved an image as an eps but do not appear to be able to upload the document as it is ‘greyed out’ and I cannot select it when I go to files. For exchanging complete pages or advertisements, it has been replaced by PDF (just like PostScript itself is also being phased out and replaced by PDF). PICT, mainly used in files generated on Macs. http://www.prepressure.com/library/file-formats/eps-dcs. How do you edit the EPS file? But, thanks! Since this operation can be fairly processor intensive, InDesign will only do this if the display quality selected by the user is set to ‘High’. You could do the same thing by opening the EPS in Acrobat professional. FAQ ID: Q-eps. Extrait; Sour This can lead to character substitution. So is it even possible for me to work with these files when they are saved on a Macintosh? Excerpt; Sources; Viewer … It is still a vector, right? – To open EPS-files in PhotoShop, use File > Open Photoshop cannot necessarily open any type of EPS-file. This is done so that applications don’t need a PostScript interpreter to display the content of the EPS file. EPS is a universal file format – you can use such files in any application that supports this standard file format. Includes more than 6,500 clip arts freely available without restrictions for free use … Can you provide me some tips on this please. I was just wondering if anyone had any suggestions on what program I need to purchase to create some things in .eps format. Embedding higher res data would also assure (somewhat) better output quality with the disadvantage of leading to a larger EPS-file. Hi, I have been trying 300 dpi Photoshop tifs as an example. If an EPS file is sent to a printer that doesn’t support PostScript, it is once again this preview image that is printed. At a minimum, it must include a header comment, %!PS-Adobe-3.0 EPSF-3.0, and a bounding box comment, %%BoundingBox: llx lly urx ury, that describes the bounds of the illustration. This does not necessarily mean that the EPS-data themselves will degrade in quality. would u plz present me a software that i can use. 
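Several of the questions above come down to "how do I check whether a file really is an EPS". Here is a minimal Python sketch of the look-at-the-first-lines check (the function and path names are my own; the header strings are the standard DSC markers quoted above, and the four leading bytes identify a DOS EPS binary file):

def sniff_eps(path):
    # Read the start of the file, the same way you would inspect it in a text editor.
    with open(path, "rb") as f:
        head = f.read(4096)
    if head.startswith(b"\xc5\xd0\xd3\xc6"):
        return "DOS EPS binary file (PostScript data behind a small binary header)"
    text = head.decode("latin-1", errors="replace")
    lines = text.splitlines()
    if lines and lines[0].startswith("%!PS-Adobe") and "EPSF" in lines[0]:
        bbox = next((l for l in lines if l.startswith("%%BoundingBox:")), None)
        return "EPS file, " + lines[0] + (", " + bbox if bbox else ", no %%BoundingBox in the first 4 KB")
    return "does not look like an EPS file"

# Example, with a hypothetical file name:
# print(sniff_eps("company_logo.eps"))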
when I create labels for CDs and send them to the printer I save as eps in photoshop, but theres 2 to choose from, DCS 0.1 and DCS 0.2, sometimes they can’t open the file, is it because I chose the wrong one and if so which one should I choose? I want to send a webpage to a friend via email without sending a link to access it online. The image was exported as an encapsulated postscript file. These swirls are only where I used gradients. EPS files have the extension .eps or .epsf. EPS files can contain both text and graphics to describe how the vector image is drawn, but they also usually include a bitmap preview image "encapsulated" inside. To convert an EPS-file to AI, how about opening it in Illustrator? I am using Illustrator to convert a dwg to eps. I am getting an error message that reads “either not enough memory or file is too complex” and loads the file in black and white. The purpose of the EPS file is to be included as an illustration in other PostScript language page descriptions. I am producing a document with several eps images in it. Illustrator as the final file ???? When I open the file, it immediately asks what DPI i want to open the file with – I leave the default as 72. I needed to change the colour in the image and then save it again as an EPS file. So they will be very very tiny. EPS (Encapsulated PostScript) EPS files use the PostScript page description language to describe vector and raster objects. He needs the logo for high quality printing. Metafile: some EPS file simply contains a bit-mapped representation of the file! Illustrator ’ s PDF setting which in a jpg format from my client designed to be an EPS.. A dwg to EPS Structuring Conventions ( DSC ) be placed within another PostScript stream printing instructions and thumbnail! Copy and PASTE the preview image in bitmap format while Flash or are... Asking, Laurens has already made comments in regards to editing EPS files isn ’ t open the,! Files will be able to provide such format style previews a few applications, there are conversion such. Of making sure you stick with their software and leave other products untouched,. Via email without sending a link to my own website system that processes file. Got a EPS file usually includes alternative RGB or CMYK values of the EPS is a PostScript! Est un format pour stocker des images matricielles, des images vectorielles 2D du. Covers PDF, the 100$ challenge thread at Indesignsecrets.com ( http:,. Message from Scott who posted a cryptic message using an incorrect e-mail address if you own legal! Get rid of unnecessary data: in its EPS output with an from. Ibm ’ s just a logo, and text is called AI the! Or save it as EPS the size is unpreditable describe how to produce images, viewing it! Which can export to those formats have already entered into a jpeg image as well as PostScript files and. Save these files when they are saved on a web page that case there printers! Encapsulated in another PostScript document format usable as a desktop publisher i would really apprechiate any advice you use... Do something wrong when creating the EPS options window has a pulldown menu to select the EPS in! For his printer in editors it tells me “ the file contains a description of such an object or using... Different layers ) that will be placing within Word or Powerpoint and printing to office! » Library » file formats, design and anything else that catches my interest print same! Pc today, it is easier to use with SignGo Pro type conversion tool number! 
# Hopf fibration
• October 20th 2009, 04:22 AM
PeterV
Hopf fibration
Hello,
I´m trying to draw a Hopf Fibration in a graphics program that I´m writing but I have trouble understanding the math behind it. I´m not a math student, I´m a graphics programmer and simply want to animate a Hopf torus bundle with all it´s interconnected circles. The only thing I know about complex numbers is the basic theory that I read in the past few days.
My problem is that although there are lots of websites that explain how a Hopf fibration is made, I cannot seem to find any that explains how to get the real x, y, z, w coordinates in 4D space of each of the circles. The explanations on the websites always start with complex numbers and end with complex numbers, but never explain how to get the coordinates.
One thing I don´t need further explanation for is how to do stereographic projection. I´ve done hypercubes and other 4D shapes in the past so I already know how to do projection.
Here is a scan of the math that I attempted so far:
http://www.home.zonnet.nl/petervenis/Hopf.gif
Now, I don´t know for sure if what I did above is correct. And if it´s correct, I don´t know what to do next. How do I solve the equation and get the coordinates so I can plot the circles?
Peter
• October 20th 2009, 05:23 PM
shawsend
See:
http://csunix1.lvc.edu/~lyons/pubs/hopf_paper_preprint.pdf
I believe you first map points on $S^2$ to $S^3$ by the inverse function (t goes from 0 to 2pi):
$h^{-1}(p_x,p_y,p_z)=\left\{\frac{1}{\sqrt{2(1+p_x)}}\cdot\left(\begin{array}{c} -\sin(t)(1+p_x) \\ -\cos(t)(1+p_x) \\ p_y\cos(t)+p_z\sin(t) \\ p_z\cos(t)-p_y\sin(t)\end{array}\right)\right\}$
and then stereographically project $S^3\mapsto\mathbb{R}^3$:
$(w,x,y,z)\mapsto\left(\frac{x}{1-w},\frac{y}{1-w},\frac{z}{1-w}\right)$
Here's what I wrote in Mathematica for four points on $S^2$ and the resulting Hopf fibers projected onto $\mathbb{R}^3$
Code:
myPoints = {{1/4, 1/4}, {-4^(-1), 1/4}, {-4^(-1), -4^(-1)}, {1/4, -4^(-1)}};
mylist = Table[
   p1 = myPoints[[alpha, 1]];
   p2 = myPoints[[alpha, 2]];
   p3 = Sqrt[1 - p1^2 - p2^2];
   wval[t_] = (1/Sqrt[2*(1 + p1)])*((-Sin[t])*(1 + p1));
   xval[t_] = (1/Sqrt[2*(1 + p1)])*((-Cos[t])*(1 + p1));
   yval[t_] = (1/Sqrt[2*(1 + p1)])*(p2*Cos[t] + p3*Sin[t]);
   zval[t_] = (1/Sqrt[2*(1 + p1)])*(p3*Cos[t] - p2*Sin[t]);
   {xval[t]/(1 - wval[t]), yval[t]/(1 - wval[t]), zval[t]/(1 - wval[t])},
   {alpha, 1, 4}];
ParametricPlot3D[mylist, {t, 0, 2*Pi}]
• October 21st 2009, 09:05 AM
shawsend
Hey guys. I think this is so cool: Each point on the sphere ( $S^2$), gets transformed into a circle in $R^3$, so when I transform say 24 points on a circle on the sphere, it gets transformed into a "bundle" of circles with some interesting properties. In the Mathematica plot below, I used the "tube" option to show the circles as tubes in order to better illustrate the "fiber bundle" nature of this transformation. Not trying to take anything away from Peter. I like to plot things too and this is brand-new for me. :)
• October 21st 2009, 03:36 PM
PeterV
Thanks a lot for the help Shawsend.
It will take a while before I start plotting anything because I´m going to read the PDF first because I want to understand the math a bit before I draw anything.
That last Mathematica plot you sent, are you sure that is correct? If you plotted 24 points on a circle on the sphere you should get a perfect torus made out of circles. Your image looks deformed. Or did you take 24 points on a diagonal circle on the sphere?
The fun thing about this is that you can take thousands of points and make thousands of circles, but none of them intersect.
Thanks again for your help,
Peter
• October 21st 2009, 04:59 PM
aliceinwonderland
I watched this youtube clip a while ago, and it helped me to get some intuitive idea of fibration. I guess it also helps you to get some idea of how stereographic projection works.
YouTube - Dimensions - 8: Fibration [Pt. 2] 1/2 (part 2)
YouTube - Dimensions - 7: Fibration [Pt. 1] 1/2 (part 1)
If the above links are not working, search youtube with a title
"Dimensions - 7: Fibration [Pt. 1] 1/2"
• October 22nd 2009, 06:55 AM
shawsend
Quote:
Originally Posted by PeterV
That last mathematica plot you send, are you sure that is correct? If you plotted 24 points on a circle on the sphere you should get a perfect torus made out of circles. Your image looks deformed. Or did you take 24 points on a diagonal sphere?
Peter
I'm not sure. This is new for me. I took 24 points of a circle (on $S^2$) of equal latitude with r=1/2.
• October 22nd 2009, 09:50 AM
PeterV
Alice,
I´ve already seen those videos on the dimensions website about 20 times or so but never fully understood the math behind it, it was simply because one needs good knowledge about complex numbers and I know very little about that.
Shawsend,
It sounds like something is not right. Or perhaps it´s just the way you are projecting the circles? I´ve e-mailed the person today who made the videos on the dimensions website and he was so kind as to write down the math for me. Here is a link: http://www.home.zonnet.nl/petervenis/Hopf-circles.pdf
His formulas are different but perhaps you used a different method. I´m slowly trying to figure out the math by reading more about complex numbers. I think I now have enough information to fill the gaps.
Again thanks,
Peter
• October 22nd 2009, 01:14 PM
Laurent
Shawsend uses a parametrization of the circles by points of $\mathbb{S}^2$, while the video and the manuscript parametrize the circles by the lines in $\mathbb{C}^2$ that go through the origin, or rather, more precisely, by their slope $a\in\mathbb{C}\cup\{\infty\}$. These two parametrizations can be naturally mapped to each other by stereographic projection.
Using the vocabulary of the video/manuscript: for any slope $a\in\mathbb{C}\cup\{\infty\}$, the line $u=av$ intersects the unit sphere $\mathbb{S}^3$ of $\mathbb{C}^2$ along the set of points $(u,v)$ such that $|u|^2+|v|^2=1$ (sphere) and $u=av$ (line). (I leave the case $a=\infty$ aside)
From these equations we get $|a|^2|v|^2+|v|^2=1$ hence $|v|=\frac{1}{\sqrt{1+|a|^2}}$, and thus $v=\frac{e^{i\theta}}{\sqrt{1+|a|^2}}$ for some $\theta\in[0,2\pi)$ (because the complex numbers of modulus equal to $r$ are the numbers $r e^{i\theta}$, $\theta\in\mathbb{R}$).
Then $u=av=\frac{a e^{i\theta}}{\sqrt{1+|a|^2}}$. Conversely, such $u,v$ are on the sphere and the line, thus they describe exactly the whole intersection (which is a circle) when $\theta$ goes from $0$ to $2\pi$.
In order to visualize this circle, we map it to $\mathbb{R}^3$ via a stereographic projection (it has the nice property of mapping circles to circles/lines). We map the point $(u,v)=(u_1,u_2,v_1,v_2)$ to the plane of equation $v_2=-1$ using the point $(0,0,0,1)$ (the pole where $v_2=1$) as the projection center. You can derive the formula for the 2-d situation (using Thales' theorem) and extend it to this case. It gives you the formulas from the manuscript page: $x=\frac{2u_1}{1-v_2}$, $y=\frac{2u_2}{1-v_2}$, $z=\frac{2v_1}{1-v_2}$. In Mathematica or wherever, you get $u_1,u_2,v_1,v_2$ as the real or imaginary parts of $u,v$ or directly use the formulas in terms of cos and sin from the page.
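If it helps to see those steps as code, here is a rough Python sketch of the same recipe (my own illustration; the function name, sample slope, and sample count are arbitrary, and it uses the manuscript's formulas with the factor 2):

import cmath, math

def hopf_circle(a, samples=64):
    # Fiber over slope a: v = e^{i t}/sqrt(1+|a|^2), u = a*v, then project
    # (u1, u2, v1, v2) -> (2u1/(1-v2), 2u2/(1-v2), 2v1/(1-v2)).
    s = math.sqrt(1 + abs(a) ** 2)
    pts = []
    for k in range(samples):
        t = 2 * math.pi * k / samples
        v = cmath.exp(1j * t) / s
        u = a * v
        d = 1 - v.imag   # for a = 0 the fiber passes through the pole and projects to a line
        pts.append((2 * u.real / d, 2 * u.imag / d, 2 * v.real / d))
    return pts

circle = hopf_circle(1 + 0.5j)   # one fiber, for the slope a = 1 + 0.5i
print(len(circle), circle[0])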
How to enjoy the Hopf fibration:
Every time you choose a point $a\in\mathbb{C}$, you can plot the circle corresponding to $a$ using the above formula.
Nice things happen when you draw a whole family of circles, defined by points $a$ lying on a nice closed curve in $\mathbb{C}$.
Most notably, the circles corresponding to points $a$ on a circle centered at 0 (i.e. $a=Re^{i\phi}$ for $\phi$ varying from 0 to $2\pi$) generate a torus. And when the point $a$ varies along a circle not centered at 0, the circles generate a Dupin cyclide (like on Shawsend's plot), with a special case when the circle contains 0: this is the unbounded case, when the torus "swaps sides" in the video.
Examples: plot the circles in $\mathbb{R}^3$ corresponding to $a$ on the three circles of radius $\frac{\sqrt{3}}{2}$ and centers $1$ and $-\frac{1}{2}\pm i\frac{\sqrt{3}}{2}$ (those three centers are the vertices of an equilateral triangle and the circles are tangent to each other, hence the cyclides will have pairwise one common circle). Another very cool example is when $a$ goes around a square centered at 0. Anyone for a try?
These are just single examples; once you have programmed a function that draws the circle corresponding to a point in the complex plane, there's just so much to try...
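For example, a self-contained sketch along those lines (my own Python again, with arbitrary parameter choices) that samples slopes $a$ on a circle $|a|=R$ and collects the corresponding fibers; the resulting point cloud is the torus mentioned above, ready to hand to any 3D plotting routine:

import cmath, math

def hopf_circle(a, samples=64):
    s = math.sqrt(1 + abs(a) ** 2)
    out = []
    for k in range(samples):
        t = 2 * math.pi * k / samples
        v = cmath.exp(1j * t) / s
        u = a * v
        d = 1 - v.imag
        out.append((2 * u.real / d, 2 * u.imag / d, 2 * v.real / d))
    return out

R = 1.0          # radius of the circle of slopes; vary it to get nested tori
n_fibers = 24    # how many fibers to sample on the torus
torus = [hopf_circle(R * cmath.exp(1j * 2 * math.pi * j / n_fibers)) for j in range(n_fibers)]
print(sum(len(c) for c in torus), "points on", len(torus), "circles")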
• October 22nd 2009, 05:06 PM
aliceinwonderland
Quote:
Originally Posted by Laurent
Using the vocabulary of the video/manuscript: for any slope $a=\mathbb{C}\cup\{\infty\}$, the line $u=av$ intersects the unit sphere $\mathbb{S}^4$ of $\mathbb{C}^2$ along the set of points $(u,v)$ such that $|u|^2+|v|^2=1$ (sphere) and $u=av$ (line). (I leave the case $a=\infty$ aside)
There is a slight notational (typo?) error. $\mathbb{S}^4$ should be changed to a 3-manifold $\mathbb{S}^3$. Anyhow, I think your reply is helpful to people who are interested in this topic (Clapping)
• October 24th 2009, 07:42 AM
Laurent
Hi,
in case this would be of some interest to anyone, here are a few pictures I just made.
hopf_0b represents a few circles corresponding to points regularly chosen on a square of sidelength 2 centered at 0.
hopf_1 corresponds to points on the (non-disjoint) circles $|a-1|=2$, $|a-j|=2$ and $|a-\bar{j}|=2$ (i.e. $|a^3-1|=8$ in a single equation) where $j=e^{\frac{2i\pi}{3}}$.
hopf_2 and hopf_2c correspond to points on a circle centered at 0 (that differs in both cases), so that we get circles on a torus (which is shown on the last picture).
hopf_3b2 should help understand what the Hopf fibration looks like: I plotted simultaneously several tori like above, as well as the two limit cases, i.e. a horizontal circle (of unit radius) when $a=\infty$ and a vertical line when $a=0$. As one sees, when $|a|$ increases to $\infty$, the corresponding circle lies on a smaller torus (the tori are nested: like Russian dolls) which "converges" toward the horizontal circle (in dark blue).
To draw these pictures, I needed a few more quantities, which I had to compute. My results follow (I guess they may be useful). I'll be assuming that you drop the 2's in the definition of the stereographic projection (which gives the stereographic projection on the equatorial plane).
The point $a\in\mathbb{C}\setminus\{0\}$ is mapped to the circle of radius $\sqrt{1+\frac{1}{|a|^2}}$ and center $\left(-\frac{a_2}{|a|^2},\frac{a_1}{|a|^2},0\right)$ in the plane $a_1x+a_2 y-|a|^2 z=0$ (i.e. orthogonal to $(a_1,a_2,-|a|^2)$).
The whole circle $|a|=r$ of $\mathbb{C}$ is mapped to the torus of minor radius $\frac{1}{|a|}$ and major radius $\sqrt{1+\frac{1}{|a|^2}}$.
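As a quick numerical sanity check of the circle description above (my own Python; the sample slope is arbitrary, and the projection drops the 2's as assumed):

import cmath, math

a = 0.8 - 0.3j                       # any nonzero slope
a1, a2, m2 = a.real, a.imag, abs(a) ** 2
center = (-a2 / m2, a1 / m2, 0.0)
radius = math.sqrt(1 + 1 / m2)
s = math.sqrt(1 + m2)
for k in range(200):
    t = 2 * math.pi * k / 200
    v = cmath.exp(1j * t) / s
    u = a * v
    d = 1 - v.imag                   # projection without the factor 2
    x, y, z = u.real / d, u.imag / d, v.real / d
    assert abs(math.dist((x, y, z), center) - radius) < 1e-9   # on the stated circle
    assert abs(a1 * x + a2 * y - m2 * z) < 1e-9                # in the stated plane
print("center", center, "radius", radius, "checks passed")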
NB: I used Scilab (but any programming language would have fitted) to generate Pov-Ray scripts.
• October 26th 2009, 11:08 AM
Laurent
Just for the fun of it...
Because any two circles are linked, the family of circles along a "line segment" between two distinct circles (i.e. corresponding to slopes $a$ on an open path (not a loop)) make a twice twisted strip (a Moebius strip is twisted only once). The following last two pictures illustrate this.
In order to have a symmetric pattern, I chose an open path through $a=\infty$. Namely the line segment $a=\lambda$, where $\lambda\in[1,+\infty)\cup(-\infty,-1]\cup\{\infty\}$. This gives one of the twisted strips of the first picture. The other one is given by the orthogonal line segment, along the imaginary axis. Their intersection is thus the circle for $a=\infty$, in blue on the picture.
Another closely related choice of segments (with cubic roots of unity) gives the other picture.
I hope you'll like it.
• October 26th 2009, 06:20 PM
shawsend
That last plot in the set of five really helps me understand what the Hopf Fibration is and now I understand what all the plots are: each contour in a-space is a subset of the fibration, with the specific circles chosen according to the values of a. Mathematica crashes when I attempt to draw more than a few with tubes although I can get pretty good results with just lines. Those are really nice plots :) (Bow)
# Ugly Formatting
Dear staff members,
I'm not sure if you've noticed this, but recently all the math symbols on this site are displaced. And that looks really ugly. Let me give you a screenshot to explain what I'm talking about.
Imgur
See how ugly that looks?
This might not be the most important issue for you right now, but just putting it out there.
Thanks!
Note by Mursalin Habib
5 years, 4 months ago
## Comments
Thank you. We are aware of this and will fix it.
Staff - 5 years, 4 months ago
Yes I've noticed this. It is very weird. Please fix this.
- 5 years, 4 months ago
Lol Levitating LaTeX (Note the alliteration there :D)
- 5 years, 4 months ago
Yes, I too. Hello Sharky! Will you please answer my question that in which standard you are? :)
- 5 years, 4 months ago
I am in Year 7 (Standard 7).
- 5 years, 4 months ago
I came to know that you have solved RD Sharma of 10th standard, in 7th standard? ?!!.......... you are really very intelligent!!....... is the education system of Australia difficult from that of India? ............. I am in 7th standard but just started solving RD Sharma of 9th . :)....... sorry to disturb you. :)
- 5 years, 4 months ago
No, it is much too easy. I go to India to get books. Thank you for the compliments. :D
- 5 years, 4 months ago
Have you even completed 11th and 12th RD Sharma? Have you ever heard name of mr. BM SHARMA, author of books of physics by cengage learning? ..... Sorry that i am disturbing you again and again but i was just fascinated by your achievements. :)
- 5 years, 4 months ago
Wow, I was worrying about this issue and then saw this note.
BTW, you've just given away the answer to your posted problem, and the solution.
- 5 years, 4 months ago
I don't think people would have noticed, but since you added this comment, I added some splashes of water!
- 5 years, 4 months ago
Yes. Agreed. And even the font colours of normal text and LaTex are a bit different, so it seems a bit awkward to read. :/
- 5 years, 4 months ago
This also happens in the AoPS website on certain browsers. I'm thinking it has to do with your browser's ability to process LaTeX. What's your browser when this happened?
- 5 years, 4 months ago
I tried out multiple browsers [even IE (cringes)] before posting this. This is not browser-related as far as I can see.
- 5 years, 4 months ago
# Why isn't the integrand positive?
Calculus Level 4
$\displaystyle\int_0^1 \sqrt{-\ln x} \quad\mathrm dx=\frac{\pi^{m}}{n}$
If $n$ is an integer and $m$ is a rational number that satisfy the equation above, find $m+n$.
# Nested BlockchainBlockData fail
New 11.3 functions to query blockchains. In trying to build a block link by iterative lookup, this function works for several iterations then fails:
NestList[Query[BlockchainBlockData[#,BlockchainBase->"Bitcoin"]&]/* Query["PreviousBlockHash"],"00000000000000000038f324c04b678c85b5ee25ca8b36d8eef313e7e733293f",10]
{00000000000000000038f324c04b678c85b5ee25ca8b36d8eef313e7e733293f,000000000000000000428a173468966d068f08f63f11c2d8db47d95ab861a32a,0000000000000000003b3bab092183a0fe6c33b6656f360c4118564478126a5b,00000000000000000004b9e238f822a7bdf4104ffcc8ef44c22bef468a47facd,Missing[Failed],Missing[Failed],Missing[Failed],Missing[Failed],Missing[Failed],Missing[Failed],Missing[Failed]}
even though individual queries work:
BlockchainBlockData[
"00000000000000000004b9e238f822a7bdf4104ffcc8ef44c22bef468a47facd",
BlockchainBase -> "Bitcoin"] // Query["PreviousBlockHash"]
"0000000000000000003b6b67cb4c5db2edae01a630c3ff795148f2abce9dc223"
Similarly, plugging the result in gives a valid previous block hash etc.
Is it a resource bottleneck? I can't believe a Nest 10 levels deep would be a problem.
The example accesses the blockchain too rapidly. If we add the option FailureAction->None to the first query, we see the error message:
BlockchainBlockData::btcrate: Rate limit exceeded for Bitcoin blockchain access.
Sniffing the network traffic, it would appear at time of writing that the service used is blockcypher.com. The API documentation states that the rate limit for an unregistered account is 5 requests/second and 600 requests/hour.
Accordingly, I was able to obtain a full result by inserting a 200ms delay before each request:
NestList[
( Pause[0.2]
; BlockchainBlockData[#, BlockchainBase -> "Bitcoin"]["PreviousBlockHash"]
) &
, "00000000000000000038f324c04b678c85b5ee25ca8b36d8eef313e7e733293f"
, 10
]
(*
{00000000000000000038f324c04b678c85b5ee25ca8b36d8eef313e7e733293f,
000000000000000000428a173468966d068f08f63f11c2d8db47d95ab861a32a,
0000000000000000003b3bab092183a0fe6c33b6656f360c4118564478126a5b,
00000000000000000004b9e238f822a7bdf4104ffcc8ef44c22bef468a47facd,
0000000000000000003b6b67cb4c5db2edae01a630c3ff795148f2abce9dc223,
0000000000000000000506c50e99c65b74f2d00ef2344f8139a831fdb6f0be30,
0000000000000000002f1d23de38e9a20f90f52524bbdb4451f0dbc49897ba22,
000000000000000000190ac64c8c5a1a9acbf338d8435c0dcd74d41c0b031535,
000000000000000000429f5fec52afdc16540ef5eda2d5a711a988db8041da46,
0000000000000000003e0453ef04d9e17818ec3718d1811f1c0b295887828870,
000000000000000000175173c6fe40dd7cd74612e7b844be0419d8eac86c8375}
*)
• Thanks for the legwork. – alancalvitti Apr 9 '18 at 14:27
• That is correct. The functions introduced in v11.3 used an external API. The updated functions in v12 now use Wolfram resources and have a higher rate limit. – xtian777x Oct 18 '19 at 0:22 |
# Controlling ESC with Jetson Xavier (python)
Hello,
Is there a Python library to control the ESC by sending PWM signals?
*I am using Nvidia Xavier NX Dev Kit.
Our ESCs are controlled by PWM signals like those commonly used in servos (ON times of 1100-1900 microseconds).
# Suggested Approach (Use a Servo library)
I haven’t used your dev kit before but I’d suggest looking for libraries that support controlling servo motors, as then you’ll be able to just specify the min and max ON times, and the proportion you want to set the ESC to (e.g. 0% → full negative, 50% → off, 100% → full positive).
# Fallback Option (manual PWM control)
If you’re not able to find a servo library then you’ll likely have to control your PWM signals with the more standard duty-cycle and frequency parameters, in which case you should note that our technical details specify a maximum update frequency of 400Hz. Note also that lower frequencies will generally have a lower duty-cycle resolution in the operating range (e.g. a 10Hz signal has 1100-1900us ON times as 1.1% to 1.9% duty-cycle, whereas for 50Hz it’s 5.5% to 9.5%, and at 400Hz it’s 44% to 76%)
An appropriate duty-cycle proportion for a selected frequency can be calculated by duty=\frac{(1100 + ratio \times 800)\times frequency}{1000000}, where ratio is a number between 0 and 1 (e.g. 0 → 1100us → fully negative, 0.5 → 1500us → stopped, 1 → 1900us → fully positive), and frequency is your PWM frequency in Hz.
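In Python that calculation is only a couple of lines. A minimal sketch (the names are mine; how you then hand the duty cycle to the PWM peripheral depends on whichever GPIO/PWM library you end up using on the Xavier):

def servo_pulse_us(ratio):
    # 0 -> 1100 us (full negative), 0.5 -> 1500 us (stopped), 1 -> 1900 us (full positive)
    return 1100 + ratio * 800

def duty_cycle(ratio, frequency_hz):
    # Duty-cycle fraction for a given throttle ratio at a PWM frequency (keep it <= 400 Hz)
    return servo_pulse_us(ratio) * frequency_hz / 1_000_000

for f in (50, 200, 400):
    print(f, "Hz:", round(100 * duty_cycle(0.0, f), 2), "% to", round(100 * duty_cycle(1.0, f), 2), "%")
# 50 Hz: 5.5 % to 9.5 %
# 200 Hz: 22.0 % to 38.0 %
# 400 Hz: 44.0 % to 76.0 %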
## Subsection 5.17.5 Discussions
The discussion synthesizes the findings of your experiment in the context of literature in the field. It should return to scientific concepts you introduced at the beginning of the lab report and clearly state how your results relate to them. The discussion should use the opposite structure of the introduction, beginning zoomed in on your specific results and then broadening in focus. It should clearly state how your results do or do not support your hypotheses. Then, it should zoom out and compare the results with current knowledge in the field. Be sure to compare the methods you used with other studies you examine. Don’t shy away from pointing out inconsistencies with other studies or potential errors in your work.
For undergraduate classes and especially at the 100 or 200 level, discussions for biology lab reports are much more broad than those for chemistry. Biology lab reports usually require more comparisons to outside published literature, while chemistry lab reports may focus more on clearly interpreting the results of the experiment. Biology lab reports also put a heavier emphasis on what questions are raised by the research and important directions for future work.
### Note 5.17.22. Conclusions.
Some longer lab reports may also include a separate conclusion, but this is unusual at the introductory level and not included here. The conclusion expresses the significance of your work, explaining how your results contribute to the field more broadly and potentially introducing ideas for further work. These same goals should be achieved in the closing paragraph of your discussion if a conclusion is not included.
### Example 5.17.23. Discussion: Biology.
The results of the experiment supported the hypothesis that as the light intensity increases the amount of dissolved oxygen would also increase. The results also support the idea of a light saturation point with the plateau seen on the graph. As light intensity increases it reaches a point where adding more light will not increase the rate of photosynthesis because the chloroplasts have reached their saturation point. The overlapping of the standard error bars of the last four light intensities shows that there is no significant difference between the numbers, and the peak represents the amount of light E. canadensis can use most efficiently, which is 80 $$\mu\text{mol photons m}^{-2}\text{s}^{-1}\text{.}$$
Light saturation points have also been found in other studies on photosynthesis, mostly using photosynthetic algae. In the experiment referenced in the introduction, marine plankton algae were tested at different light intensities and had a saturation point after which more light no longer enhanced the rate of photosynthesis (John H Ryther, 1954). In addition, in a 1973 study Diner and Mauzerall measured photosynthesis in the algae Chlorella vulgaris and Phormidium luridum. Dissolved oxygen was measured in these algae after being tested in low light intensities and high light intensities using a repetitive-flash method. Both Chlorella vulgaris and Phormidium luridum reached similar saturation points around a light intensity of 180 $$\mu\text{mol photons m}^{-2}\text{s}^{-1}\text{.}$$ The saturation point of the algae is notably higher than that of the tested E. canadensis at 80 $$\mu\text{mol photons m}^{-2}\text{s}^{-1}\text{.}$$ The algae may have evolved higher light saturation points so that they can take full advantage of the light they get because their habitat, water, can limit the amount of light they can absorb. The E. canadensis does not need to have this ability because it grows out of the water, unlike algae that are always surrounded by water and sometimes grow deep underwater. This study concluded that the C. vulgaris and the P. luridum reached saturation points where they could no longer increase their rate of photosynthesis because there was too much light for chloroplasts of their size to absorb (B. Diner, D. Mauzerall).
In order to address the importance of chloroplast size as a limiting factor for photosynthetic rate, another study investigated the potential of creating modified chloroplasts with continuous grana, the stack of thylakoid disks. Using the female thalli of Marchantia polymorpha grown on a petri dish the chloroplasts were isolated using a centrifuge at 2000 g for 1 minute. The chloroplasts were then modified and tested under very specific different light intensities. The saturation point was 43.2 $$\mu\text{mol photons m}^{-2}\text{s}^{-1}$$ higher for modified chloroplasts, but a limit still existed (R Mache and S. Loiseaux, 1973). The experiment proved that there would always be a saturation threshold even with modified chloroplasts because larger chloroplasts still have a capacity limit and will regulate the photosynthetic rate. Further studies could compare chloroplast size and photosynthetic capacity between different kinds of photosynthetic organisms and seek to understand the evolutionary trade-offs of chloroplast size.
These three paragraphs explain one of the major results of the study, the existence of a light saturation point, and compare it to other published studies. The first paragraph makes it clear what the peak in dissolved oxygen production rate means. The second paragraph looks at two other studies that also found light saturation points. It’s important to include interpretation here. One strong statement in this example addresses the habitat difference between the study organisms, which could explain different values for light saturation. The third paragraph goes further by explaining a study that may help explain results observed in the lab. Finally, it presents ideas for further investigation in this direction.
In a longer lab report, multiple significant findings should be explained and compared to studies in this way. Think about what larger interpretive claims this leads you to: do the patterns line up? If not, how could the differences be explained or investigated? What further questions do these results raise?
When plants were kept in the dark, dissolved oxygen decreased over the 90 minute trial period. These plants used oxygen to convert sugars into usable energy through cellular respiration (Campbell et al 2009). In light independent settings the plant is required to use what is known as the Calvin cycle, where the plant converts carbon dioxide into glucose in the stroma, using what it had created from light dependent reactions to support itself through the light independent reaction (Campbell et al 2009). This process requires oxygen, explaining a loss in dissolved oxygen over time.
This paragraph addresses another result, the negative values for plants left in the dark. This interpretation is based on facts from the textbook. For a more advanced lab this result might also be compared to published studies, but because this phenomenon is universally understood and accepted it is appropriate to just quickly explain the science behind it here.
While patterns are clear, there was considerable variation between the three trials. This experiment did not control for temperature or time of day, which could have influenced the photosynthetic rate and created excess variation. The overall error is fairly minimal but does vary a lot with larger error bars at 140 and 200 $$\mu\text{mol photons m}^{-2}\text{s}^{-1}$$ and small error bars at 30 $$\mu\text{mol photons m}^{-2}\text{s}^{-1}\text{.}$$
This paragraph addresses variables that were not controlled for by the experimental design. This is an important thing to address, and just saying “results may be due to human error” isn’t going to cut it in college. Think about any potential confounding variables.
This experiment supported existing literature on photosynthesis by showing that Elodea canadensis reaches peak photosynthetic efficiency around 80 $$\mu\text{mol photons m}^{-2}\text{s}^{-1}\text{.}$$ Differences in photosynthetic efficiency between species may be explained by different chloroplast size, but their evolutionary purpose is not fully understood. Further research should be conducted to determine how photosynthetic efficiency and capacity are related to habitat.
The concluding paragraph of the discussion should remind the reader of the major results and their potential implications, which is especially important when the discussion is long enough that they might be overwhelmed. This last paragraph ties up what the author thinks is most important and shares a path forwards for this research. Thinking critically about what new research questions this work suggests is an important part of lab reports for biology!
### Example 5.17.24. Discussion: Chemistry.
Determination of Unknown Gas 1
By comparing the calculated molar mass of Unknown Gas 1 to the molar masses of the gases given in Table 1, it was hypothesized that Unknown Gas 1 was Argon gas ($$\text{Ar}_{(g)}$$). This was because the calculated molar mass of the unknown gas, 39.97601g/mol, was very close to Argon gas’s ideal molar mass of 39.948g/mol. However, because the experiment dealt with real gases in real conditions, and the values given in Table 1 are for ideal gases, it was conceded that it was possible Unknown Gas 1 might have been carbon dioxide gas ($$\text{CO}_{2(g)}$$), which has an ideal molar mass of 44.009g/mol. Because both $$\text{Ar}_{(g)}$$ and $$\text{CO}_{2(g)}$$ are inert gases, the results of the flame test only confirmed that it was one of the two. To distinguish between which gas it actually was, the limewater test was used. Limewater [$$\text{Ca(OH)}_{2(aq)}$$], when mixed with $$\text{CO}_{2(g)}\text{,}$$ forms a precipitate. The reaction for $$\text{Ca(OH)}_{2(aq)}$$ mixed with $$\text{CO}_{2(g)}$$ is given by:
\begin{equation*} \text{CO}_{2(g)}+\text{Ca(OH)}_{2(aq)}→\text{CaCO}_{3(s)}+\text{H}_2\text{O}_{(l)} \end{equation*}
Because the $$\text{Ca(OH)}_{2}$$ formed a precipitate when it was added to the $$\text{CO}_{2(g)}\text{,}$$ $$\text{Ar}_{(g)}$$ was ruled out as a possibility, and it was concluded that the identity of Unknown Gas 1 was carbon dioxide, $$\text{CO}_{2(g)}\text{.}$$
Determination of Unknown Gas 2
When the molar mass of Unknown Gas 2 was compared with the ideal gases’ molar masses in Table 1, it was hypothesized that the identity of Unknown Gas 2 was $$\text{Ar}_{(g)}\text{,}$$ with a possibility of it being oxygen ($$\text{O}_{2(g)}$$). Although $$\text{Ar}_{(g)}$$ was hypothesized to be Unknown Gas 1, it was proven to be $$\text{CO}_{2(g)}$$ instead, so $$\text{Ar}_{(g)}$$ was still a possibility (a term of the experiment was that none of the unknown gases would be the same as each other). Unknown Gas 2’s molar mass was found to be 35.9581g/mol, which is about equidistant between $$\text{Ar}_{(g)}$$’s ideal molar mass (39.948g/mol) and $$\text{O}_{2(g)}$$’s molar mass (31.998g/mol). $$\text{Ar}_{(g)}$$ was selected as the more likely candidate because of the comparison between Unknown Gas 1 and $$\text{CO}_{2(g)}$$’s molar masses. The calculated molar mass of the unknown gas ended up being roughly 4g/mol less than the ideal molar mass of $$\text{CO}_{2(g)}\text{.}$$ This is most likely because of the difference between actual and ideal gases (see Sources of Error). So, $$\text{Ar}_{(g)}$$ was more likely to be Unknown Gas 2 because its ideal molar mass is roughly 4g/mol more than the calculated molar mass of Unknown Gas 2. The actual identity of Unknown gas was confirmed with the flame test. If the gas had been $$\text{Ar}_{(g)}\text{,}$$ the inert reaction would have snuffed out the flaming matchstick. However, if the gas had been $$\text{O}_{2(g)}\text{,}$$ which supports combustion, the gas inside the flask would have combusted. The flame test showed that Unknown Gas 2 was an inert gas, supporting the hypothesis that it was $$\text{Ar}_{(g)}\text{.}$$ To definitively confirm the gas’s identity and rule out $$\text{CO}_{2(g)}$$ as a possibility (despite the fact that $$\text{CO}_{2(g)}$$ had already been confirmed as the identity of Unknown Gas 1, and no gas could be used twice), the limewater test was applied to Unknown Gas 2. When the dropper of $$\text{Ca(OH)}_2$$ was added to the flask and swirled, no precipitate formed, confirming that Unknown Gas 2 was Argon gas, or $$\text{Ar}_{(g)}\text{.}$$
Determination of Unknown Gas 3
The experimental molar mass of Unknown Gas 3 (42.5765g/mol) was compared with the ideal molar masses of the gases given in Table 1, and from that it was hypothesized that Unknown Gas 3 was propane gas, or $$\text{C}_3\text{H}_{8(g)}\text{.}$$ This hypothesis was supported by the fact that the first two unknown gases’ actual molar masses were consistently less than the gas they were identified as. So, because $$\text{C}_3\text{H}_{8(g)}$$’s ideal molar mass is 44.097g/mol and Unknown Gas 3’s experimental molar mass was 42.5765g/mol, $$\text{C}_3\text{H}_{8(g)}$$ was likely to be the identity of Unknown Gas 3. The other possible identities of Unknown Gas 3 were $$\text{CO}_{2(g)}$$ and $$\text{Ar}_{(g)}\text{,}$$ because they had the ideal molar masses closest to Gas 3’s molar mass, but they were deemed unlikely to be the gas because they had both already been identified as Unknown Gases 1 and 2, respectively, and no gas would be used twice. The flame test for Unknown Gas 3 confirmed the hypothesis that Gas 3 was $$\text{C}_3\text{H}_{8(g)}\text{.}$$ $$\text{C}_3\text{H}_{8(g)}$$ supports combustion, and so if the gas had been $$\text{C}_3\text{H}_{8(g)}$$ the gas inside the flask would have caught fire when exposed to a flaming matchstick. This is exactly what happened, and so definitively ruled out $$\text{Ar}_{(g)}$$ and $$\text{CO}_{2(g)}$$ from the list of possibilities, as both are inert gases and would have caused a flaming matchstick to be extinguished. So, Unknown Gas 3 was identified as propane gas, $$\text{C}_3\text{H}_{8(g)}\text{.}$$
These three paragraphs walk the reader through an interpretation of the results for each unknown gas. Subheadings make for excellent organization. The author explicitly lays out how the data presented in the results leads to the conclusions drawn by the author. Take the last four sentences of Unknown Gas 3 as an example. First, the author asserts a conclusion: that the flame test confirmed the identity of the gas. Second, they pull in outside information supporting the conclusion, which is that $$\text{C}_3\text{H}_{8(g)}$$ is combustible. Third, they describe the result, that the gas caught fire. Fourth, they bring in additional outside information (that $$\text{Ar}_{(g)}$$ and $$\text{CO}_{2(g)}$$ are inert) to confirm their conclusion. Finally, they re-assert the conclusion so it is clear to the reader.
An important distinction from the biology example is the way outside information has been used. In the biology example, results are compared to similar published studies to look at larger trends. Here, the outside information used is molar mass and flammability information that is known for the gasses in question. These data are not cited because they are chemical facts drawn straight from the lab handout. While this lab teaches important chemical concepts and techniques, it is far from representing novel knowledge in the field, and drawing comparisons to contemporary chemical literature (or even literature from the 1970s) would be forced and inaccurate. Thus, the emphasis is much narrower: instead of demonstrating context in the field, the author just shows how they used their results to draw their conclusions. In higher-level chemistry classes where experiments are more novel, more outside literature will be required. However, discussions at an undergraduate level should still avoid the kind of broad statements that might occur in a biology paper.
Sources of Error
While all the tests led to conclusive results, there was still the possibility of error during experimentation, which could have affected the values and qualities recorded. A possible source of error in the experiment is the equation used to calculate the molar mass of the unknown gases. The ideal gas law was used, which is given as $$PV=nRT\text{,}$$ or equivalently $$PV=\frac{g}{MW}RT\text{,}$$ where P is the atmospheric pressure, V is the gas volume, n is the number of moles of the gas, R is the universal gas constant, T is the temperature, g is the mass, and MW is the molar mass (also known as the molar weight). The ideal gas law assumes that the temperature and pressure are constant, and that no forces act upon the gases except for the negligible ones created by the gas atoms colliding momentarily. It is nearly impossible to ensure perfectly constant temperature and pressure, and to remove all external forces, so the calculated molar mass of the unknown gases could not have been equal to the ideal values given in Table 1. By possibly giving a molar mass value closer to the ideal value of a gas that was not the actual gas, the predictions of the identity of the unknown gases could have been less accurate. However, this source of error was compensated for. It was recognized that the calculated molar masses of the unknown gases were consistently less than the ideal molar masses, and so the predictions were based on the ideal gases with molar masses slightly larger than the experimental molar masses.
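Written out, the rearrangement used to obtain each experimental molar mass from the measured quantities is
\begin{equation*} MW=\frac{gRT}{PV}\text{,} \end{equation*}
so any error in the measured pressure, volume, or temperature carries directly into the calculated value.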
The discussion is also a space to discuss potential sources of error in your experiment. This is most important if your results were inconsistent, like if your percent yield is very low or over 100% or (in this example) if flame test results and molar mass didn’t lead to one conclusive identification. If you are comparing to outside sources and your results disagree with the literature, it’s important to discuss what differences in methodology could have led to different results. Novel results can be valid, but should be assessed critically! In a college course, “human error” as an explanation is insufficient: be specific and thoughtful about what could have impacted your results.
The three gases identified in this experiment were carbon dioxide ($$\text{CO}_{2(g)}$$), argon ($$\text{Ar}_{(g)}$$), and propane ($$\text{C}_3\text{H}_{8(g)}$$). These three gases are fairly similar in that they are heavier and denser than air, yet all have their own unique properties that make them identifiable from each other. Carbon dioxide forms a precipitate when mixed with limewater, propane is combustible, and argon is a non-reactive inert gas. All of these gases are present in earth’s atmosphere naturally, though in mostly very small quantities. By isolating these gases and working with them individually, it was possible to examine their separate properties with negligible interference from the rest of the atmospheric gases
The closing paragraph summarizes the report by restating the most important results. Because this experiment is more about learning techniques and chemical properties than contributing to research, there’s less of an emphasis on future research than in the biology example. Ideas for future research become more significant in upper division Chemistry courses.
!I!n!s!e!r!t! !i!n!b!e!t!w!e!e!n!
Posted from here.
This challenge is highly "distilled" from this question. Special thanks to @Akababa!
In this task, you should insert an exclamation mark at the start of the string and after every character.
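For clarity, here is an ungolfed reference implementation of the required transformation in Python (an illustrative sketch added for this writeup, not one of the submitted answers):
def exclaim(s):
    # '!' before every character, plus one trailing '!'; newlines are ordinary characters
    return "!" + "".join(c + "!" for c in s)
assert exclaim("a") == "!a!"
assert exclaim("1 2") == "!1! !2!"
assert exclaim("\n") == "!\n!"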
Rules
• There will always be a non-empty string input. The input will not contain tabs either. You can assume that the input only contains non-extended ASCII printable characters and newlines.
• The input will not contain a trailing newline if your language cannot detect one.
• This is a contest; the shortest answer should win.
Examples
• 4 newlines result in 5 newline-delimited exclamation marks. It is very hard to put this as a Markdown text, so this is stated instead.
1 2 3 4 5 6
129591 129012 129127 129582

0
Outputs
!1! !2! !3! !4! !5! !6!
!1!2!9!5!9!1! !1!2!9!0!1!2! !1!2!9!1!2!7! !1!2!9!5!8!2!
!
!0!
asd afjoK ak:e
kPrLd
    fOJOE;
    KFO
KFkepjgop sgpaoj   faj
Outputs
!a!s!d! !a!f!j!o!K! !a!k!:!e!
!k!P!r!L!d!
! ! ! ! !f!O!J!O!E!;!
! ! ! ! !K!F!O!
!K!F!k!e!p!j!g!o!p! !s!g!p!a!o!j! ! ! !f!a!j!
A base test case with only one character:
a
Outputs
!a!
(Auto-completion! Just kidding, there is no such thing.) Contains exclamation marks:
!!
!!
!!
!!
!!
Outputs:
!!!!!
!!!!!
!!!!!
!!!!!
!!!!!
• very similar question Aug 18, 2019 at 11:06
• I really don't understand the downvote - this is a clear and well written challenge. Re: being a duplicate - it's not (preceding '!' makes for a big difference), and I don't believe anyone has suggested so (no close votes). Aug 18, 2019 at 13:45
• if a language can't tell the difference between a\n and a, can we require that there are no trailing newlines? Aug 18, 2019 at 14:29
• Downvotes are inserted between every upvote, just like what the challenge describes.
– user85052
Aug 18, 2019 at 14:34
• In the case of a single space input " ", is the output supposed to be "!" or "! !"?
– Kai
Aug 19, 2019 at 3:52
Thanks to A__ for halving the byte count!
!
Try it online!
Replaces nothing with !
Python 3, 26 bytes
lambda s:s.replace('','!')
Try it online!
• Welcome to the site! Aug 19, 2019 at 12:37
Retina 0.8.2, 2 bytes
!
Try it online! At last, a challenge where Retina has a built-in!
• Actually, I created this challenge based on this Retina built-in.
– user85052
Aug 18, 2019 at 11:04
• @A__ Right, I forgot about that feature (If there is only one non-function line…). You may want to re-consider your check-mark.
Aug 18, 2019 at 19:02
Python 3, 27 bytes
lambda s:f"!{'!'.join(s)}!"
Try it online!
Honestly, I hope someone can show me a cool way to do this with a smaller byte count.
• This doesn't handle the empty line case correctly Aug 19, 2019 at 4:22
• @flakes What do you mean? If you mean an empty string: we do not need to handle an empty string (and regardless this outputs !! in that case, which is what makes sense to me). If you mean the string \n: it does, since the correct output is !\n!. Aug 19, 2019 at 9:08
• @JAD As far as I can see, it doesn't have an empty string in the examples. Not only that, but the first rule literally states "there will always be a non-empty string input." Aug 19, 2019 at 11:13
• Ah I was incorrect. The first example has an empty line in the middle of the input. But this answer will handle placing the exclamation point in the middle of that, !\n!\n!. nice work. Aug 19, 2019 at 13:18
• codegolf.stackexchange.com/questions/190223/insert-nbetween/… as you were saying to show a shorter way Aug 20, 2019 at 7:14
Python 2, 27 bytes
lambda s:'!%s!'%'!'.join(s)
Try it online!
Haskell, 18 bytes
('!':).(>>=(:"!"))
-1 byte thanks to @nimi
Try it online!
• ('!':). saves a byte.
– nimi
Aug 18, 2019 at 15:09
brainfuck, 24 22 bytes
-2 bytes thanks to JoKing.
-[-[-<]>>+<],[>.<.,]>.
Try it online!
• damn, that's badass Aug 19, 2019 at 20:20
Labyrinth, 19 11 10 9 bytes
33
..
",@
Try it online!
How?
We enter the Labyrinth at the top-left facing right with an infinite stack of zeros...
I / O stack
0,0,0,...
3 - pop * 10 + 3 3,0,0,0,...
- 2 neighbours, forward
3 - pop * 10 + 3 33,0,0,0,...
- 2 neighbours, forward
. - pop & print chr ! 0,0,0,...
- T junction from the side
- TOS==0, forward
, - read chr or -1 L 76,0,0,0,... or -1,0,0,0
- T junction from the base
- if TOS > 0 right:
" - no-op 76,0,0,0,...
- 2 neighbours, forward
. - pop & print chr L 0,0,0,...
- T junction from the side
- TOS==0, forward
3 - ...back to the start
- elif TOS == -1 left:
@ - exit we're out!
* right, but on the first occasion (from above) we hit the wall and turn
around, so that's like a left
Luckily we don't need to handle un-printables, otherwise the first zero-byte would turn us around at , and play havoc.
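For readers unfamiliar with Labyrinth, the control flow above boils down to roughly the following Python sketch (an assumed equivalent added for illustration, not part of the golfed answer):
import sys
def run():
    while True:
        sys.stdout.write("!")   # the 33 / print-chr step
        c = sys.stdin.read(1)   # the , read; '' here plays the role of the -1 EOF value
        if c == "":
            break               # the @ exit
        sys.stdout.write(c)     # the second print-chr
run()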
JavaScript (ES6), 19 bytes
Takes input as an array of characters.
s=>`!${s.join`!`}!`
Try it online!
JavaScript (ES6), 23 20 bytes
Saved 3 bytes thanks to @ShieruAsakoto
Takes input as a string.
s=>[,...s,,].join`!`
Try it online!
JavaScript (ES6), 22 bytes
Suggested by @tjjfvi
Takes input as a string.
s=>s.replace(/|/g,"!")
Try it online!
• Alternative with .replace, 22 bytes Aug 18, 2019 at 19:12
• @tjjfvi Nifty one! Aug 18, 2019 at 19:18
• I've got a 20 for your 23: s=>[,...s,,].join`!` Aug 19, 2019 at 13:42
Pepe, 47 bytes
REREEeRErEErREeeEeeeeEREEeeREEeereeREEEEeeEReee
Try it online!
Explanation:
REREEeRE    # Push 0, then input (str), then 0 -> (R)
            # The zeroes are pushed to correct the inserting
rEE         # Begin loop labelled 0 -> (r)
rREeeEeeeeE # Push "!" -> (R)
            # r flag inserts it instead of pushing
REEeeREEee  # Move pointer pos 2 steps forward -> (R)
ree         # Loop while (R) != 0
REEEEeeE    # Remove characters of (R) that are in stack of (r)
            # Removes the 0 in (R)
Reee        # Output (R)
• How do you write this code? Is there some code converter that you use? This seems crazy to try and write Aug 19, 2019 at 20:21
• @Cruncher 1) I use this as my guide. 2) Nope, I don't use a code converter, I just use the guide to write the code. Aug 20, 2019 at 12:59
R, 25 bytes
function(x)gsub("","!",x)
Try it online!
A function accepting and returning a character vector.
• Can shave 3 bytes by switching the function form to scan(,''), like so tio.run/##K/r/P724NElDSUlHSVFJpzg5MU9DR11dU/O/… Aug 19, 2019 at 13:37
• @Sumner18 thanks. I started with that but it splits input at spaces. Aug 19, 2019 at 14:54
• @Sumner18 The challenge asks to handle input with newlines, which can't be done with scan (but which Nick's solution does handle, at least if you display the output with cat.) Aug 19, 2019 at 21:14
8086 machine code, .COM format (MS-DOS 2+), 32 bytes (-1 depending on emulator: see below)
For best results redirect standard input from a file, as typing gives odd-looking output due to no buffering; also, newlines look a little weird because they are stored as CR LF, and the CR part messes up the output. This program behaves fine in an actual MS-DOS emulation (e.g. PCjs), but DOSBox seemed to have issues with Ctrl+Z EOF (see comments in the assembly listing), so DON'T try to enter input using the console in DOSBox unless you add the extra check!
BB 01 00 53 59 BA 0B 01 B4 40 CD 21 4A 4B B4 3F CD 21 85 C0 74 09 B4 40 43 41 CD 21 49 EB EE C3
Some interesting bits:
• I saved some data space by reusing memory that had already been executed (the 21H in INT 21H happens to be !)
• I was almost able to use an interesting trick that I found on the page "The Hidden Power of BCD Instructions", which would have allowed me to use AAA instead of a standard TEST to compare AL to 0, saving one byte. Unfortunately, this is not fully documented so I couldn't rely on it: for example, PCjs doesn't adjust anything but the carry and auxiliary carry flags. :-(
Assembly code (TASM ideal mode):
        IDEAL
        MODEL TINY
        CODESEG
        ORG 100H
;; DOSBox (tested with 0.74-2) didn't seem to handle Ctrl-Z as EOF,
;; so uncomment the ";;" lines to run it there.
MAIN:
        MOV  BX,1
        PUSH BX
        POP  CX
        MOV  DX,OFFSET MAIN_1+1  ; The 21H in INT 21H
        MOV  AH,40H
MAIN_1:
        INT  21H
        DEC  DX
        ;;PUSH DX
        ;;POP  SI
IO_LOOP:
        DEC  BX
        MOV  AH,3FH
        INT  21H
        ;;; This should work on a non-emulated PC.
        ;;;AAA                   ; AL=0?
        TEST AX,AX
        JZ   DONE
        ;;CMP [BYTE PTR SI],1AH
        ;;JZ  DONE
        MOV  AH,40H
        INC  BX
        INC  CX
        INT  21H
        DEC  CX
        JMP  IO_LOOP
DONE:
        RET
        ENDS
        END MAIN
6502, 12 bytes (13 bytes if Apple II)
6502
The machine code assumes that a pair of zero page locations are connected to character input ($FE) and output ($FF) hardware. Many 6502-based systems facilitate I/O in this fashion, albeit the I/O addresses are usually not in zero page.
For simplicity, I used Py65, a 6502 microcomputer system simulator written in Python.
Here is a memory dump from Py65. You can load the following code anywhere in zero page such that it does not overlap $FE and $FF.
PC AC XR YR SP NV-BDIZC
6502: 0000 00 00 00 ff 00110010
.mem 0:b
0000: a9 21 85 ff a5 fe f0 fc 85 ff d0 f4
Running in a Windows command window, you can paste (Ctrl+V) any text you desire, or you can simply type. If typing, press Ctrl+J for a newline (same ASCII char). Press Ctrl+C to interrupt the processor and return to the Py65 command prompt.
Naturally, assembly code is easier to read.
PC AC XR YR SP NV-BDIZC
6502: 0000 00 00 00 ff 00110010
.d 00:0b
$0000 a9 21 LDA #$21
$0002 85 ff STA $ff
$0004 a5 fe LDA $fe
$0006 f0 fc BEQ $0004
$0008 85 ff STA $ff
$000a d0 f4 BNE $0000
For clarity, here is the assembly code in CBA65 format.
; ASSEMBLE:
; cba65 bangit
;
; python3 py65/monitor.py -i 00fe -o 00ff -l bangit.bin
; goto 0000
.FILES BIN=256
; I/O LOCATIONS
GETC    .EQU  $FE    ; (1) MOVING PY65'S GETC TO ZP SHAVES 1 BYTE
PUTC    .EQU  $FF    ; (1) MOVING PY65'S PUTC TO ZP SHAVES 2 BYTES
        .ORG  $0000
VROOM   LDA   #'!'
        STA   PUTC
VROOM2  LDA   GETC
        BEQ   VROOM2
        STA   PUTC
        BNE   VROOM
        .END
Apple II
The code above assumes a null indicates there is no input, so continues polling until a non-null value is returned. For comparison, the Apple I and Apple II signal availability of a new character by setting bit 7 of the keyboard I/O address, which then needs to be cleared after fetching the character. On those systems, character I/O usually is performed by calling system monitor routines instead of accessing the hardware directly. By calling RDKEY ($FD0C) and COUT ($FDED), the Apple II equivalent of the above can be coded in 13 bytes, and is runnable anywhere in RAM.
Here is the code I ran in an Apple //e emulator, a2ix on Android 9. Pressing Return has the same effect as a newline.
*300L
0300- A9 A1     LDA #$A1
0302- 20 ED FD  JSR $FDED
0305- 20 0C FD  JSR $FD0C
0308- 20 ED FD  JSR $FDED
030B- F0 F3     BEQ $0300
Did you notice that instead of the normal ASCII value #$21 for the exclamation point, #$A1 is used instead? That's because sending standard ASCII values to COUT causes them to be displayed in "inverse mode," black on white. Displaying ASCII in normal white on black requires adding #$80 to the character value in the accumulator before calling COUT. Because RDKEY returns characters with the hi-bit set, assembly programs generally cleared the bit of the character to obtain its ASCII value before using it.
• Welcome to the site! :) Aug 21, 2019 at 9:05
• Thank you, @Rahul!
– lee
Aug 21, 2019 at 10:55
sed, 12 bytes
s/\b\|\B/!/g
Try it online!
-3 bytes thanks to Cows Quack
• Actually the Sed code is just 15 characters there: Try it online!. Aug 19, 2019 at 15:11
• Great, thank you. I was unclear how that worked... Aug 19, 2019 at 15:13
• s/\b\|\B/!/g also works for 12 bytes Aug 20, 2019 at 7:15
• @Cowsquack thank you. updated. Aug 20, 2019 at 12:47
Jelly, 5 bytes
Ż”!ṁż
A full program accepting a string, which prints the result.
Try it online!
How?
Ż”!ṁż - Main Link: list of characters, s   e.g. "abc"
 ”!   - character '!'                      '!'
   ṁ  - mould like:
Ż     -   s with a zero prepended          "!!!!"
    ż - zip together with s                ["!a","!b","!c",'!']
      - implicit (smashing) print          !a!b!c!
Befunge-98 (PyFunge), 7 bytes
'!,#@~,
Try it online!
Perl 5 -p0, 17 6 bytes
s,,!,g
Try it online!
My original answer was -p and $_='!'.s,.,$&!,gr. Thanks to @Nahuel Fouilleul for cutting 11 bytes and to @Grimy for the -p0 tip.
• 6 bytes Aug 19, 2019 at 8:32
• @NahuelFouilleul -lp gives incorrect output for the \n\n\n\n test case (returns 4 newline-separated ! instead of the specified 5). -p0 works correctly. Aug 19, 2019 at 11:20
MarioLANG, 95 94 90 89 69 bytes
++++++
======< >)
>+++++++",+[
=======<.==<
>+++++++!(.-
========#===
Try it online!
First time trying out MarioLANG, that was a lot of fun! Thanks to Jo King for -20 bytes
Explanation:
So, as the name implies, MarioLANG is made to execute like a game of Super Mario Bros. It operates similarly to BF, with memory arranged in a tape of cells. There are operators to increment, decrement, print (as ascii or numeric) and read into the current memory cell, and operators to move left or right along the tape. Mario (the instruction pointer) always begins in the top left cell of the program, with his intended direction of motion set to the right. If Mario does not have a floor-like object beneath him (=, ", or #), he will fall until he reaches a floor-like object. If Mario leaves the program space, the program ends due to Game Over :(
This specific program can basically be split into two halves: the setup, and the loop.
  Setup    |  Loop
-----------------------------------------------
  ++++++   |
  ======<  |  >)
  >+++++++ |  ",+[
  =======< |  .==<
  >+++++++ |  !(.-
  ======== |  #===
In the Setup section, we're simply incrementing the first memory cell until we reach 33 - the ASCII value for "!". Easy enough; if this can be golfed, it's purely a matter of shape. Mario starts from the top left, picks up 10 coins, starts falling when picking up the 11th, switches directions, then repeats. He picks up the last 11 coins without switching directions; he ends the setup section at the bottom-rightmost "+".
In the loop section, Mario starts by reaching an elevator. The "!" operator makes him cease motion, so that he remains on the elevator. On the way up, it prints the corresponding ASCII character to the current memory cell's value (this one is always 33, "!"), then switches to the next cell in memory.
Mario reaches the top and sets his direction to the right. He falls, and reads a character from input as its ASCII value (or -1 if no character). We increment because the only measure of control in MarioLANG is to skip an instruction if the current memory cell has a value of 0. If it does, we skip changing Mario's direction, so he will walk right off of the next floor to his doom. If it does not, we set direction to left; walking left off of the floor below decrements the current cell back to its previous value, that value is printed, and we move back to the first memory cell before getting back on the elevator.
Previous version (89 bytes):
+++++++++++>,
==========@"+
+++++++++++)[
@==========.==<
+++++++++++!(.-
===========#===
• 62 bytes by using a multiplication loop instead of just a counter
– Jo King
Aug 21, 2019 at 12:35
• Okay now THAT is cool. I'll update as soon as I have time to redo the explanation, thanks a bunch! Aug 21, 2019 at 12:41
• Aha! 60 bytes by multiplying 5*6 + 3 instead of 8*4+1
– Jo King
Aug 21, 2019 at 12:43
• Man, I know it isn't exactly your first rodeo, but this is really impressive. xD Aug 21, 2019 at 12:45
• Actually, this is my first time golfing MarioLANG. I just have some experience with brainfuck as well as other 2D languages
– Jo King
Aug 21, 2019 at 12:46
Perl 6, 16 11 bytes
{S:g/<(/!/}
Try it online!
Replaces all zero width matches with exclamation marks. Null regexes are not allowed, so we use a capture marker to capture nothing instead.
Zsh, 32 23 bytes
<<<!${(j:!:)${(s::)1}}!
(s::) splits into characters, (j:!:) joins on !s.
C# (Visual C# Interactive Compiler), 28 bytes
s=>$"!{String.Join("!",s)}!"
Try it online!
• nice use of interpolated strings, wouldn't have thought of that Aug 20, 2019 at 4:12
Java 8, 20 bytes
A lambda function from String to String.
s->s.replace("","!")
Try It Online
05AB1E, 4 bytes
€'!Ć
I/O as a list of characters.
Try it online.
Explanation:
€'! '# Prepend a "!"-item before each character in the (implicit) input-list
Ć # Enclose (append the first character of the list at the end of it)
# (after which the result is output implicitly)
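The same two steps can be sketched in Python on a list of characters (an assumed equivalent for illustration, not how 05AB1E implements them internally):
def insert_bangs(chars):
    out = []
    for c in chars:        # € '! : prepend a "!" before each character
        out += ["!", c]
    out.append(out[0])     # Ć : enclose, i.e. append the first element of the list at the end
    return out
print("".join(insert_bangs(list("abc"))))  # !a!b!c!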
Triangular, 15 13 bytes
B\3;#*~.,</@<
Try it online!
-2 bytes after remembering that Triangular has a conditional halt operator.
I believe this is as short as it gets on this one. Triangular does have conditional direction-change operators, but they unfortunately work differently than the other conditionals. While all others check if ToS <= 0, the direction-changing conditionals check ToS != 0. If this weren't the case, we would have 10 bytes in the form of Bq3~#*/@<<.
Ungolfed:
B
\ 3
; # *
~ . , <
/ @ <
----------------------------------------------------
B3* - Push 11 and 3, then pop both and push their product.
<,< - Change directions 3 times (to save 2 bytes on last line)
@/ - Print Top of Stack value as a character, do not pop
~;\ - Push a character from input to ToS. Halt if ToS <= 0. Change Direction.
# - Print ToS as a character and pop
Previous Version (15 bytes):
B.3\.*#).(/?~@<
Wolfram Language (Mathematica), 26 bytes
Riffle[0#~Join~{0}+"!",#]&
Try it online!
Takes and returns a list of characters.
><>, 11 6 bytes
"!"oio
Try it online!
Saved 5 bytes thanks to Jo King, suggesting exiting with an error. Previous version which does not exit with an error:
"!"oi:0(?;o
Try it online!
• You can remove the :0(?; to terminate in an error
– Jo King
Aug 19, 2019 at 10:30
SimpleTemplate, 23 bytes
This is a language I wrote, and it was supposed to be for templates, but well.
!{@eachargv.0}{@echo_}!
Should be almost self-explanatory, once you see the ungolfed code:
!{@each argv.0 as char} {@echo char}!{@/}
And an explanation:
• ! - Prints the literal ! character
• {@each argv.0 as char} - Loops through every character, with the value set to the variable char (optional, the default variable is _).
argv.0 is the first parameter passed to the render() method of the compiler.
• {@echo char}! - outputs the char variable and a literal ! character.
For the golfed version, the default variable _ is used instead.
• {@/} - closes the loop (optional)
Pure SimpleTemplate solution:
{@fn x}!{@eachargv.0}{@echo_}!{@/}{@/}
Creates a function x that outputs the same result.
You can use it like this:
{@call x "this is an example"}
You can try all of this on: http://sandbox.onlinephpfunctions.com/code/f6baff8d411fc8227ece81eccf05b6e7d3586bfa
On the line 908, you can use the variables $golfed, $ungolfed and $fn to test all the versions.
However, if it is allowed to use a character array, the code is simplified (20 bytes):
!{@echoj"!" argv.0}!
And ungolfed:
!{@echo separator "!" argv.0}!
Basically, outputs all items in the array, joined by "!", surrounded by literal !. Due to limitations in the compiler class, the space is mandatory (in the golfed version).
This code is also much harder to use in pure SimpleTemplate (using the function as an example):
{@fn x}!{@echoj"!" argv.0}!{@/}
{@// alternative: @call str_split into a "a char array"}
{@set a "a", " ", "c", "h", "a", "r", " ", "a", "r", "r", "a", "y"}
{@call x a}
The @call can call a function that exists in PHP, which means that it isn't a pure SimpleTemplate solution.
Bash, 36 bytes
while read -n1 c;do printf \!$c;done
Try it online!
This counts on the newline terminating the input for the last ! mark.
• Welcome! Please consider adding an explanation or link to an interpreter or something, because code-only answers are automatically flagged as low-quality. Aug 19, 2019 at 22:29
• @mbomb007, thanks for the pointer. Aug 19, 2019 at 22:42
• Unfortunately this does not add an ! at the end of the input. Aug 20, 2019 at 7:18
• @Cowsquack: on my terminal, the newline that terminates the input gets the ! added. On tio.run, the input needs to be terminated with a carriage return. I've updated the link to the Try it Online to reflect that. Aug 20, 2019 at 16:17
Ruby, 17 16 bytes
->s{s.gsub'',?!}
Try it online!
Thanks Value Ink for -1 byte
• Remove the space after gsub. Aug 20, 2019 at 19:05
VBA, 75 bytes 72 bytes
72 bytes as a sub that outputs to the immediate window (thanks @taylorscott)
Sub s(x):Debug.?"!";:For i=1To Len(x):Debug.?Mid(x,i,1)"!";:Next:End Sub
75 bytes as a function that returns the formatted string.
Function t(x):t="!":For i=1To Len(x):t=t &Mid(x,i,1) &"!":Next:End Function
Which expands to and is readable as
Public Function t(x)
t = "!"
For i = 1 To Len(x)
t = t & Mid(x, i, 1) & "!"
Next
End Function
Test cases
Public Sub test_golf()
x = "1 2 3 4 5 6" & vbCr & "129591 129012 129127 129582" & vbCr & vbCr & "0"
'x = "a"
'x = "!!" & vbCr & "!!" & vbCr & "!!" & vbCr & "!!" & vbCr & "!!"
Debug.Print x
s(x) ' to call the sub
Debug.Print t(x) ' to call the function
End Sub
• You can get this down to a 48 byte immediate window function as ?"!";:For i=1To[Len(A1)]:?Mid([A1],i,1)"!";:Next Aug 25, 2019 at 18:25
• @TaylorScott, I agree that would work in Excel VBA. I was just thinking of a pure VBA solution.
– Ben
Aug 26, 2019 at 0:58
• If you want to keep it as a pure VBA function, you can still get it down a bit by switching it to a sub and printing directly to the console as Sub t(x):Debug.?"!";:For i=1To Len(x):Debug.?Mid(x,i,1)"!";:Next:End Sub or Sub t(x):s="!":For i=1To Len(x):s=s+Mid(x,i,1)+"!":Next:Debug.?s:End Sub Sep 11, 2019 at 20:48
# Effects of exposure to immersive videos and photo slideshows of forest and urban environments
## Abstract
A large number of studies have demonstrated the benefits of natural environments on people’s health and well-being. For people who have limited access to nature (e.g., elderly in nursing homes, hospital patients, or jail inmates), virtual representations may provide an alternative to benefit from the illusion of a natural environment. For this purpose and in most previous studies, conventional photos of nature have been used. Immersive virtual reality (VR) environments, however, can induce a higher sense of presence compared to conventional photos. Whether this higher sense of presence leads to increased positive impacts of virtual nature exposure is the main research question of this study. Therefore, we compared exposure to a forest and an urban virtual environment in terms of their respective impact on mood, stress, physiological reactions, and cognition. The environments were presented via a head-mounted display as (1) conventional photo slideshows or (2) 360$$^{\circ }$$ videos. The results show that the forest environment had a positive effect on cognition and the urban environment disturbed mood regardless of the mode of presentation. In addition, photos of either urban or forest environment were both more effective in reducing physiological arousal compared to immersive 360$$^{\circ }$$ videos.
## Introduction
Nowadays, individuals spend more and more time in artificially designed living spaces; in particular, humans spend up to 90% of their time indoors1. This tendency has led to an isolation of individuals from regular contact with nature, which has a negative impact on their mental and physical health. Several studies have demonstrated that such artificial stimulation and being in purely human-generated environments can lead to mental fatigue as well as a loss of vitality and health2,3.
These negative effects can be reduced by engaging in interactions with nature4. There is evidence to suggest that natural environments have a positive influence on human psychology, physiology, and cognition5,6,7. According to the Attention Restoration Theory (ART), natural environments capture fewer cognitive resources and therefore allow an interruption of the attention-grabbing tasks inherent in urban environments, thus eliciting attention restoration and recovery from mental fatigue8,9,10. Natural elements such as green landscapes and flowing waters have a calming effect on physiological arousal11,12. One of the long-term effects of access to nature is a positive attitude towards life and an increased satisfaction with one’s own home, one’s own work and generally one’s own life8,13.
As an instance of natural environments, forests have been studied frequently suggesting their positive effects on human body and mind14,15,16,17,18,19. These positive effects include, but are not limited to psychological relief, lower stress and depression levels19,20,21,22,23,24 as well as physiological effects such as lower blood pressure, heart rate (HR), and salivary cortisol hormone levels18,25,26. Therefore, forest therapy, also referred to as “forest bathing”, is practiced widely, in particular in Asia, to derive substantial benefits from the positive health effects of walking, resting, and interacting with forests27,28,29,30,31,32,33,34.
For people with limited access to nature (e.g., elderly in nursing homes, hospital patients, or jail inmates), already the visual representation of nature can relieve stress and improve emotional well-being22,35,36,37,38,39. Many studies in environmental psychology have used conventional photos to compare natural and urban environments or to demonstrate the positive effects of nature photos5,6,40,41.
In this context, immersive virtual reality (VR) may facilitate some of these characteristics such as the feeling of being in nature during the exposure42. By reproducing realistic stimuli and eliciting psychological processes, VR has the potential to increase external validity of the research findings43. It can, in addition, provide the experimenter (and potentially therapists) with a systematic control over the natural elements such as weather conditions, vegetation (up to the smallest details such as movements of the grass and leaves on the trees), wildlife, and lighting that is hard or impossible to achieve in real life44,45. Furthermore, therapeutic applications may benefit from the low-cost virtual environments, which can be duplicated and distributed easily, making them usable at a larger scale46 and make it accessible to individuals in need, e.g., in nursing homes. Thus, VR can complement the research on human perception and behavioral responses to nature stimuli by maximizing the benefits of lab-based (e.g., control over independent variable) and field-based (e.g., realistic stimuli) experiments43.
For this reason, previous studies have already employed nature exposure in VR. Several studies have compared real physical nature exposure with exposure to 360$$^{\circ }$$ videos of nature47,48,49,50,51,52. For instance, Browning et al.47 compared real nature exposure and a 360$$^{\circ }$$ VR nature video recorded from the same location. In comparison to a physical indoor environment without nature, both real and VR nature exposure were more restorative and increased physiological arousal. However, only the real exposure to nature outdoors increased mood in a positive direction.
Researchers have also compared exposure to different environments merely in VR. For instance, a study53 demonstrated that different types of forest environments, presented via 360$$^{\circ }$$ videos, can improve mood and relieve stress. Another study54 revealed that in comparison to a control environment, exposure to 360$$^{\circ }$$ videos of nature can reduce physiological arousal and negative affect. Furthermore, in a study by Chung et al.55 and in comparison to 360$$^{\circ }$$ videos of fireworks, exposure to 360$$^{\circ }$$ videos of nature improved cognitive functioning and restored involuntary attention of the participants55. In comparison to urban environments, Yu et al.56 could show that exposure to 360$$^{\circ }$$ videos of forest or waterfall environment was able to decrease negative emotions such as fatigue and depression. In contrast, levels of fatigue were increased and self-esteem was decreased after exposure to urban environments. Also, in a study by Schutte et al.57 participants were exposed to a natural and an urban environment using 360$$^{\circ }$$ videos. Thereafter, participants reported significantly more restorativeness by exposure to the natural environment compared to the urban environment.
Multiple studies have reported stress recovery elicited by multisensory exposure to nature in VR. For instance, in a study by Annerstedt et al.58, participants experienced a psycho-social stress (i.e., TSST59) in VR followed by an exposure to natural scenes in VR either with or without sound. As a result, recovery from stress was facilitated by exposure to VR nature and was enhanced when the environment was presented with natural sounds. In another study by Hedblom et al.60 visual stimuli (i.e., 360$$^{\circ }$$ photos of urban, park, and forest environments) were accompanied by auditory stimuli (e.g., bird songs for natural environments) and olfactory stimuli (e.g., grass odour for park). Consequently, exposure to natural environments reduced stress levels significantly. Finally, Schebella et al.61 suggested that multisensory exposure to 360$$^{\circ }$$ videos of nature are beneficial to recovery from stress compared to visual-only exposure and that recovery is least effective in a virtual urban environment.
In order to use visual representation of natural environments in experiments or for preventive and/or therapeutic purposes, it is important to know whether the level of immersion and its associated feeling of presence are decisive for the extent of the effect. Different levels of immersion could be, for example: the actual stay in a natural environment, viewing a natural environment through a window, viewing a 360$$^{\circ }$$ video of a natural environment (stereoscopic or monoscopic) on a display (such as a smartphone with integrated gyroscope or using a head-mounted display (HMD)), staying in an artificially generated virtual world or watching a regular video or pictures of nature. To subjectively distinguish between different levels of immersion in VR context, the sense of presence is usually measured. It describes the psychological sense of being in a virtual environment42 and can have multiple components such as the sense of being physically present in a place (spatial presence), the attention devoted to the virtual environment (experienced involvement), as well as the experienced realism of the environment62.
Previous studies have examined different levels of immersion based on humans’ psychological and physiological responses63,64,65,66,67,68,69,70,71,72. For instance, a study63 suggested that the psychological responses closest to reality can be achieved by a 360$$^{\circ }$$ panorama and the physiological responses by a 3D model of the real environment (an interior shopping environment). Although in the same study different levels of immersion, including a conventional photograph of the environment, were employed, they were all compared against the real environment and not against one another. In another study65, a significant increase in the sense of presence from monoscopic to stereoscopic and from 180$$^{\circ }$$ to 360$$^{\circ }$$ images was demonstrated. In addition, our group (Forlim et al.68) previously reported that stereoscopic renderings delivered via an HMD elicit higher functional connectivity in the brain when compared to monoscopic renderings on projection screens or HMDs. Furthermore, Chirico et al.72 confirmed that immersive videos enhance the intensity of self-reported awe emotion as well as parasympathetic activation compared to 2D screen videos.
However, to the best of our knowledge, a direct comparison between conventional photo presentations and 360$$^{\circ }$$ video presentation of nature has not been tested so far. Hence, it is largely unknown whether conventional photo presentations suffice to create the full impact of virtual exposure to nature or whether an immersive display such as 360$$^{\circ }$$ video presentation can further increase the positive effects.
The experiment followed a within-subject design and consisted of a control (in a silent black virtual room with a white screen in the middle showing a fixation cross) and four experimental conditions (see Fig. 12). Each experimental condition consisted of three parts: (1) a cognitive test (serially subtracting 13 from a given starting number such as 1022) for 5 min, (2) exposure for 6 min to either an urban (i.e., an old town of northern Germany, see Fig. 2) or a forest (i.e., a northern German mixed forest, see Fig. 1) virtual environment, presented either using 360$$^{\circ }$$ videos or conventional photo slideshows of the same content, both displayed via an HMD, and lastly (3) filling out the questionnaires. The order of the experimental conditions was counterbalanced. The control condition was consistently administered at the beginning and immediately after the baseline measurement (see “Methods”). During the experiment, physiological data were recorded, namely galvanic skin response (GSR) and HR. It is worth noting that the cognitive test in this experiment served two functions simultaneously: it was used to induce stress prior to exposure and at the same time to measure cognitive performance due to the prior exposure phase. That is, the cognitive test measured the cognitive performance after exposure to the previous (and not the following) condition. Thus, after the last condition, the cognitive test was administered for the last time, measuring the cognitive performance after the last exposure.
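The paper states only that the order of the four experimental conditions was counterbalanced; the concrete scheme is not reported. As an illustration, one hypothetical assignment (an assumption for this sketch, not the authors' actual procedure) could look like this in Python:
from itertools import permutations
conditions = ["forest_360", "forest_photos", "urban_360", "urban_photos"]  # illustrative labels
orders = list(permutations(conditions))  # 24 possible orders of the four conditions
def order_for(participant_id):
    # control condition always first, then one of the 24 counterbalanced orders
    return ["control"] + list(orders[participant_id % len(orders)])
for pid in range(3):
    print(pid, order_for(pid))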
## Results
To analyse the responses of our mood and stress-related questionnaires (i.e., STADI-S, POMS, SSSQ, and PSS) as well as the cognitive test, differences between experimental conditions and the control measurements were computed. On these difference scores, a two (environment: forest, urban) by two (immersion: 360$$^{\circ }$$ videos, conventional photo slideshows) repeated-measures ANOVA was performed.
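As an illustration of the described analysis, the sketch below uses statsmodels' AnovaRM on the difference scores; the long-format layout and column names are assumptions made for this example, not the authors' actual pipeline:
import pandas as pd
from statsmodels.stats.anova import AnovaRM
def rm_anova(df):
    # df columns (assumed): subject, environment ('forest'/'urban'),
    # immersion ('video360'/'photos'), score (condition score), control (control measurement)
    df = df.assign(diff=df["score"] - df["control"])
    return AnovaRM(df, depvar="diff", subject="subject",
                   within=["environment", "immersion"]).fit()
# print(rm_anova(data))  # with a suitably shaped DataFrame named data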
### Questionnaire data
#### State-Trait Anxiety Depression Inventory-State (STADI-S)
No significant effect of the factors was found for either the depression or the anxiety sub-scale of the state version of the STADI (STADI-S).
#### Profile of mood states (POMS)
The environment factor (i.e., the type of environment: forest vs. urban) showed a significant negative effect on mood ($$F(1,33)=5.02, p=0.03, \eta _p^2=0.13$$). Paired t tests suggested that exposure to the urban environment disturbed the participants’ mood more than the forest environment ($$p=0.027$$) and that this difference was significant between the 360$$^{\circ }$$ videos of forest and urban environments ($$p=0.028$$, see Fig. 4). A main effect of the immersion level or its interaction with the environment factor could not be observed. We also calculated the difference scores (exposure-control) for each sub-scale of POMS and performed the two-way ANOVA. A significant main effect of environment could be found for fatigue only ($$F(1,33)=5.19, p=0.03, \eta _p^2=0.13$$). While the 360$$^{\circ }$$ videos of forest decreased the feeling of fatigue, pairwise comparisons revealed that this reduction was significantly different from the changes in fatigue (i.e., increase of fatigue) elicited by the 360$$^{\circ }$$ videos ($$p=0.027$$) or photo slideshows of the urban environment ($$p=0.016$$). No significant difference was observed between photo slideshows and 360$$^{\circ }$$ videos of the forest environment.
#### Short Stress State Questionnaire (SSSQ)
No significant main or interaction effects of the environment and immersion factors were found for any of the SSSQ sub-scales, namely task engagement, distress, and worry.
#### Perceived Stress Scale (PSS)
No significant main or interaction effects of the environment and immersion factors could be found for the PSS score.
#### Igroup Presence Questionnaire (IPQ)
The IPQ scores of each condition were directly used for the analysis, without the control measurements being subtracted from them. As a result, a significant main effect of immersion level could be observed for the sense of presence ($$F(1,33)=79.11, p<0.001, \eta _p^2=0.706$$). The sense of presence for the 360$$^{\circ }$$ videos was higher than the slideshow conditions ($$p<0.001$$). The environment factor also had a significant effect on the sense of presence ($$F(1,33)=13.927, p<0.001, \eta _p^2=0.297$$) in such a way that the forest environment induced a higher sense of presence compared to the urban environment ($$p<0.001$$). No significant interaction effect was found (see Fig. 5).
For the sense of being there (or the general presence) sub-scale, we found a significant main effect of immersion level ($$F(1,33)=62.767, p<0.001, \eta _p^2=0.655$$), which shows that the mean value of the 360$$^{\circ }$$ videos was higher than the mean value of the slideshows ($$p<0.001$$).
For the spatial presence, the main effect of immersion level was significant ($$F(1,33)=96.371, p<0.001, \eta _p^2=0.745$$). Here, the mean value of the 360$$^{\circ }$$ videos was higher than the mean value of the slideshows ($$p<0.001$$). Moreover, our results show a significant main effect of the factor environment ($$F(1,33)=11.85, p=0.002, \eta _p^2=0.264$$) and therefore underline that the mean value of the forest environment was higher than the urban environment ($$p<0.01$$). Therefore, the 360$$^{\circ }$$ video and the forest environment led to a higher sense of spatial presence.
For the involvement sub-scale, the main effects of immersion level ($$F(1,33)=19.649, p<0.001, \eta _p^2=0.373$$) as well as the environment ($$F(1,33)=10.574, p=0.003, \eta _p^2=0.243$$) were significant. The 360$$^{\circ }$$ videos ($$p<0.001$$) and the forest environment ($$p<0.01$$) showed higher involvement values compared to respectively slideshows and the urban environment. In addition, a significant interaction effect of these two factors could be observed ($$F(1,33)=35.254, p<0.001, \eta _p^2=0.517$$). Here the highest involvement values were observed in the 360$$^{\circ }$$ forest video ($$p<0.001$$).
For the experienced realism, the main effect of the immersion level was significant ($$F(1,33)=17.006, p<0.001, \eta _p^2=0.34$$) and the 360$$^{\circ }$$ videos had higher values than the slideshows ($$p<0.001$$).
#### Simulator Sickness Questionnaire (SSQ)
The SSQ was administered two times: once at the beginning of the experiment (i.e., prior to wearing the HMD for the first time) and once in the end (i.e., after the last experimental condition). A paired t test suggested a significant ($$t(33)=3.67, p<0.001, d=.63$$) increase of the total simulator sickness score from pre- ($$M=14.08, SD=16.37$$) to post measurements ($$M=30.8, SD=28.33$$). This means that the experiment and its total associated stay in VR increased the symptoms of simulator sickness.
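The reported pre/post comparison corresponds to a paired t test together with a paired-samples effect size; a sketch of such a test (with placeholder arrays, not the study data) is:
import numpy as np
from scipy import stats
def paired_ttest(pre, post):
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    t, p = stats.ttest_rel(post, pre)
    d = (post - pre).mean() / (post - pre).std(ddof=1)  # Cohen's d for paired samples
    return t, p, d
# t, p, d = paired_ttest(ssq_pre, ssq_post)  # hypothetical score arrays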
### Physiological measures
Considering the cognitive test as a stress induction task prior to exposure, it can be seen in Fig. 10 that physiological arousal measured by the GSR values were increased during the cognitive test phase prior to the exposure and were decreased during the exposure for all four conditions. A three (experiment phase: baseline, (cognitive) test, exposure) $$\times$$ two (environment: forest, urban) $$\times$$ two (immersion level: 360$$^{\circ }$$ videos, photo slideshows) repeated-measures ANOVA showed a significant main effect of experiment phase ($$F(2,66)=17.76, p<0.001, \eta _p^2=0.35$$) and a significant interaction of immersion and experiment phase ($$F(2,66)=3.38, p<0.05, \eta _p^2=0.09$$). The results of pairwise t tests showed significant differences ($$p<0.001$$) between all three phases. Thus, the cognitive test could successfully serve its first function to induce stress prior to exposure.
Figure 11 depicts the mean HR values during different phases of the experiment for all four conditions. A three-way (experiment phase, environment, immersion level) repeated-measures ANOVA showed a significant main effect of experiment phase ($$F(2,66)=13.51, p<0.001, \eta _p^2=0.29$$). Pairwise comparisons suggest that participants experienced the lowest HR values during the exposure compared to the baseline ($$p<0.001$$) and cognitive test phase ($$p<0.001$$). The difference between the baseline and the cognitive test phase was not significant probably due to ceiling effects. During the exposure, however, their HR decreased significantly.
Difference scores were calculated for both physiological measures by subtracting the mean values during the cognitive test phase from the mean values during the exposure phase (see Figs. 6, 7). A two-way repeated measures ANOVA for the factors environment and level of immersion showed a significant effect of the immersion level for the GSR values ($$F(1,33)=8.55, p<0.01, \eta _p^2=0.21$$). A paired t test suggested that the GSR difference scores were significantly larger ($$p<0.01$$) for the photo slideshow conditions compared to the 360$$^{\circ }$$ video conditions. Pairwise comparisons showed that urban ($$p<0.01$$) and forest ($$p<0.05$$) photo slideshows caused larger difference scores compared to 360$$^{\circ }$$ videos of the urban environment (see Fig. 6). The pairwise comparisons did not show any significant difference between the 360$$^{\circ }$$ videos of the forest environment and any other conditions. No significant main or interaction effect could be observed for the HR difference scores. That is, all four conditions decreased HR with no significant difference.
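A sketch of how such phase-wise difference scores could be computed from raw sensor samples is given below; the column layout is an assumption for illustration, not taken from the authors' scripts:
import pandas as pd
def gsr_difference_scores(samples):
    # samples columns (assumed): subject, condition, phase ('baseline'/'test'/'exposure'), gsr
    means = samples.groupby(["subject", "condition", "phase"])["gsr"].mean().unstack("phase")
    return means["exposure"] - means["test"]  # exposure phase minus cognitive-test phase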
### Cognitive test
The cognitive test was considered as a dependent variable measuring the cognitive performance after the exposure phase. The environment factor had a significant effect on the errors ($$F(1,33)=9.52, p=0.004, \eta _p^2=0.22$$) and the consecutive, correct answers given ($$F(1,33)=13.56, p<0.001, \eta _p^2=0.29$$) in the cognitive test acquired after exposure. Paired t tests revealed that the number of errors (see Fig. 8) in the forest environment was significantly lower than in the urban environment ($$p<0.001$$) and the correct, consecutively given answers (see Fig. 9) were higher in the forest environment compared to the urban environment ($$p<0.001$$). No significant main effect of immersion or its interaction with the environment factor could be observed.
## Discussion
We hypothesized that the environment and the immersion level, as well as their interaction, have an influence on mood, stress recovery, and cognitive performance. In particular, we expected that the forest environment would produce a more positive effect than the urban environment. In addition, we hypothesized that more immersive presentations (i.e., 360$$^{\circ }$$ videos) create a higher sense of presence and consequently have greater effects, as more realistic environments would lead to realistic behavior and trigger corresponding responses42.
The effects of exposure to forest or urban environments on mood was measured by means of total mood disturbance. Here, it could be shown that the type of environment was determinant for the mood disturbance as exposure to the urban environment led to a significant mood disturbance whereas exposure to the forest environment resulted in a reduction of mood disturbance. In particular, the feeling of fatigue was increased after exposure to the urban environment regardless of their type of presentation (i.e., 360$$^{\circ }$$ videos or photo slideshows) and was reduced by exposure to the 360$$^{\circ }$$ videos of forest. This result confirms the findings of previous studies47,54,56,57,61,73. Despite variations in visual and auditory stimuli of the previously studied urban environments in VR, they all reported mood disturbances; whether the urban environment was a crowded subway station56, or shopping mall61 or shopping plaza56, or a small town with buildings lining streets, some road traffic, a pedestrian mall, and the sound of traffic and people talking in the pedestrian mall57. We intentionally excluded crowds, cars, and prominent nature elements such as trees. Thus, our VR urban environment comprised of only buildings as visual stimuli and mostly the sound of wind breeze as auditory stimuli. Yet exposure to this environment disturbed our participants’ mood. On the other hand, our forest environment was successful in reducing the disturbed mood. Since no significant effect of immersion level on mood could be shown in this experiment, it can be stated that photos of the environment are sufficient to observe its effects on mood. Nevertheless, the feeling of fatigue could be decreased only by exposure to 360$$^{\circ }$$ videos of the forest. Therefore, although Browning et al.47 could show an improved mood for the real exposure to outdoor forest only, our findings proved that VR exposure to forest can be beneficial for inducing positive mood. This is in line with the findings of previous studies that showed exposure to VR nature can improve mood and reduce negative affect, whether the VR nature was 360$$^{\circ }$$ videos of rural areas and remote beaches54 or various types of forest environments53,56.
No significant effects were found for STADI-S, PSS, and SSSQ. Since we already showed that the urban environment caused mood disturbance, one could expect that the STADI-S shows a similar effect. However, it should be noted that STADI-S rather covers more clinical aspects such as anxiety and depression, whereas the POMS measures mood changes in a more healthy range. Therefore, a possible explanation would be that the healthy volunteers did indeed experience a disturbance in mood by being exposed to the urban environment. However this mood disturbance was not strong enough to be detected by the scales of the STADI-S.
An increase of the physiological arousal measured by the GSR values could be observed from the baseline to the cognitive test phase which was again decreased by exposure to any of our four experimental conditions. The photo slideshows in this case were more effective in lowering the arousal levels compared to 360$$^{\circ }$$ videos. The reason here could be the higher immersion level of 360$$^{\circ }$$ videos and their associated sense of presence which has been shown to be positively correlated with physiological arousal74,75. The immersive VR has been also shown effective in inducing emotions (such as the feeling of awe) and enhancing their intensity72. Thus compared to non-immersive photos, immersive videos may elicit higher emotional reactions which again results in higher physiological responses. Therefore, although 360$$^{\circ }$$ videos were able to reduce the physiological arousal, their higher immersion level prevented this arousal reduction to reach the same level as the non-immersive photo slideshows. Thus, to reduce physiological arousal caused by psycho-social stressors, one should rather use conventional photos of either urban or forest environments.
Besides GSR, we measured participants’ HR during the course of experiment. The results showed that the HR was already high at baseline and the cognitive test did not increase it any further. A limitation of this study is that we did not plan any additional resting phase before the baseline measurement started. Perhaps this is the reason why the difference between the HR measurements at the baseline and during the test phase was not significant, probably due to ceiling effects. The exposure to any of our conditions, however, was able to reduce the HR significantly, regardless of the type of environment or the level of immersion. This finding is in line with previous studies54,56. For instance, Yu et al.56 showed that blood pressure and HR were reduced by exposure to both urban and natural environments with no significant differences between them. Also, Anderson et al.54 showed that HR variability was reduced during the exposure to natural and indoor VR environments with no clear differences across them. In their study, the GSR values were also decreased for all conditions but this reduction was greater for the natural scenes, similar to the findings of Hedblom et al.60. Moreover, Schebella et al.61 showed that recovery from stress measured by HR values could be achieved by VR exposure to both natural and urban environments with no significant difference between them. Nevertheless, it is important to determine whether and which stimuli are best suited to decrease the induced stress. By analyzing the GSR values, we could show that all our stimuli could reduce the induced stress, but non-immersive stimuli were more effective in doing so.
The hypothesis that exposure to the forest environment improves cognition was confirmed by this study. It could be shown that the maximum number of consecutively correct answers in the two conditions exposing participants to the forest environment was higher and the total number of errors was lower. This can be attributed to the positive effect of exposure to the forest environment and the cognitive benefits of interacting with nature, which have been studied before4,5,8,9,13,55,76,77. For instance, Berman et al.76 found that viewing pictures of natural scenes can improve cognitive performance compared to urban scenes. Also, Chung et al.55 showed that 360$$^{\circ }$$ videos of nature can restore involuntary attention. In our study, however, neither an effect of immersion level nor an interaction with the environment factor could be found. Therefore, it can be concluded that the presentation of forest using both methods, namely the photo slideshows and the 360$$^{\circ }$$ videos, had a positive effect on cognition with no significant difference between them. Thus, to induce a positive effect on cognition, a presentation of forest using conventional photo slideshows might be enough to produce the full impact of forest exposure.
The cognitive test in this experiment served two functions: as a stress induction task prior to exposure and at the same time as a dependent variable measuring the cognitive performance after exposure. Initially, we had planned to use an additional cognitive test to measure the cognitive performance. However, during the piloting phase, we realized that the total length of the study could overwhelm the participants. Therefore, we used only one cognitive test to keep the length of the experiment limited to a reasonable time. This is a limitation of this study and in future work, these two functions could be disentangled from each other.
The results of the SSQ suggested that the experiment increased the symptoms of simulator sickness. A reason could be that the study was 180 min long during which a cognitive task was carried out repeatedly. Therefore, participation in the study could have led to fatigue and may have caused or exacerbated the symptoms of simulator sickness. Therefore, the symptoms can and should not be attributed solely to the exposure to the virtual environments. Moreover, a limitation of this study is that the SSQ was not administered after each condition. Thus, it cannot be determined whether different levels of immersion or types of environment played a role in inducing simulator sickness. Whether potentially associated simulator sickness prevented the positive effects of nature to occur remains a topic for future research.
The hypothesis that the immersive 360$$^{\circ }$$ videos can facilitate the positive effects of nature on mood, recovery after stress, and cognition could not be demonstrated in this experimental setup. Nevertheless, the IPQ results showed that the 360$$^{\circ }$$ videos did induce a higher sense of presence compared to the slideshows. Also, the IPQ sub-scale involvement was highest in the 360$$^{\circ }$$ video of a forest. Therefore, the question arises whether the provided level of immersion for the 360$$^{\circ }$$ videos was sufficient for changing our affective and cognitive measures. The 360$$^{\circ }$$ videos of this study were taken in a resolution of 4K and were monoscopic (i.e., the same image was displayed on both lenses). Monoscopic images lack cues of depth perception that affect the sense of spatial perception78. Since realistic representations have an impact on immersion42, the use of a 360$$^{\circ }$$ camera with a higher resolution (to render more realistic stimuli such as the movement of single leaves) and a stereoscopic display (i.e., different images shown on the respective lenses to create a sense of spatial depth) should be considered in future investigations. It might still be true that with stronger immersion and sense of presence, the reactions of the participants could have differed more between the 360$$^{\circ }$$ videos and the photo slideshows.
Although an advantage of using 360$$^{\circ }$$ videos compared to photos could not be determined in the present setup, the positive influence of natural environments on cognition and the reduction of mood disturbance could be observed. Thus, the use of visual representations of natural environments can be a viable option in contexts that offer little access to natural resources. In future work, the underlying elements of the forest environments that cause the more positive impact in contrast to the urban environments should be further studied. It is possible that it is not the forest in its complexity that is necessary to trigger the observed positive effects, but rather some bottom-up visual features that are commonly found in nature pictures. Previous research has shown that preference ratings of nature pictures can be explained by such lower-level image features79,80.
The Prospect-Refuge Theory81 suggests that humans have preferences for certain environments. According to Appleton81, humans prefer places that offer a safe and sheltered refuge and at the same time a good view or overview of the surrounding environment. This theory relies on evolutionary approaches, which require a predator to be able to observe a potential prey without being discovered. Accordingly, there is the possibility that there may be a natural preference for the environment of a dense forest over an empty road or a rather open space within a city. Consequently, future studies should take these aspects into account while selecting the virtual environments to be compared.
In this work, the visual stimuli of natural and urban environment were accompanied by the respective auditory stimuli recorded from that environment. In other words, while seeing either 360$$^{\circ }$$ videos or photo slideshows of forest environment, our participants could listen to the sound of birds singing in that forest. The selected urban environment for this study was an empty old town in which mostly a soft wind breeze could be heard. As mentioned earlier, our main reason for including the auditory stimuli was to increase the feeling of presence42,82. However, since Annerstedt et al.58 showed that recovery from stress by exposure to VR nature was enhanced when the environment had nature sounds, one may consider to repeat the present experiment to investigate the effects of pure visual stimuli. Furthermore, in the 360$$^{\circ }$$ videos of the forest, subtle movements of the leaves of the trees caused by the wind breeze as well as the changes of the sunlight when shined through the trees were observable. Such subtle changes could indeed not be observed in the urban environment. We consider this as a limitation of this work. Thus, future studies may decide to provide an urban scene with a comparable level of movement as the forest.
In addition, in this experiment the conventional photo slideshows were displayed using a VR HMD, which was required for presenting the 360$$^{\circ }$$ videos but not for the photo slideshows. On the one hand, using the HMD for both conditions enabled us to control for unintentional effects of the display medium while comparing the immersion properties of the presented materials (i.e., 360$$^{\circ }$$ videos or conventional photos); on the other hand, it prevents us from generalizing the findings to other display media. Presentation of conventional photo slideshows on a monitor may not produce the same effects as presenting them using an HMD; this remains a topic for further research.
In sum, the benefits of interacting with real or virtual nature have been reported in previous studies. Virtual exposure to nature has been administered classically using conventional photos and recently using immersive 360$$^{\circ }$$ videos or computer-generated models of nature. In this work, we aimed to answer the question of whether immersive 360$$^{\circ }$$ videos of nature intensify its positive effects on mood, stress recovery, and cognition compared to conventional photos of nature. Our results suggest that exposure to photos of a forest environment indeed suffices to prevent the mood disturbance observed in response to urban exposure, reduce physiological arousal, and improve cognition. In addition, photos of either the urban or the forest environment were both more effective in reducing physiological arousal compared to immersive 360$$^{\circ }$$ videos. Thus, in contrast to our a priori hypothesis, a more immersive presentation of the forest environment did not lead to more positive effects of nature.
## Methods
### Participants
Participants were recruited via an email distributor among the students of the Faculty of Computer Science at the University of Hamburg. In addition, the study was advertised on the campus of the University of Hamburg and the University of Applied Sciences in Hamburg, and via a call on social networks. A total of 35 subjects participated in the study. However, one person had to be excluded due to deuteranopia (green blindness). The remaining 34 subjects (11 female) were between 21 and 34 years old ($$M=27.26, SD=4.144$$). The study was approved by the local psychological ethics committee of the Center for Psycho-social Medicine at the University Hospital Hamburg-Eppendorf and was carried out in accordance with relevant guidelines and regulations.
### Materials
We selected a northern German mixed forest as the forest environment (see Fig. 1). Since our focus was on vegetation, other natural elements such as water, animals, or humans were not present in this environment. Our urban environment was an old town in northern Germany (see Fig. 2), which contained neither vegetation nor animals or humans. Each 360$$^{\circ }$$ video was a 6 min video consisting of three 2 min stationary single shots. To shoot the individual videos, the tripod with the camera was moved 6 m forward, measured from the center of the tripod. The result is a composition of three stationary single shots, with the tripod placed firmly in one place for each shot. Watching the final video created the impression of teleportation between these three locations.
In total, three different environments were created for the study. The first environment was a black room with a white screen in its center. In the middle of the white screen was a black fixation cross (Fig. 3). This environment had no background sound and was used as the control condition. The second environment was identical to the first one, with the difference that the slideshows were played on the screen. The virtual camera in these two environments was one meter away from the virtual screen. The 360$$^{\circ }$$ videos were played in the third environment on the inner side of a virtual sphere. Here, the virtual camera was placed in the center of the sphere to create the impression of being inside the 360$$^{\circ }$$ VR environments.
The experiment was conducted in a laboratory room. The participants were seated on a firm chair. The position of the chair was fixed to ensure a fixed position in the virtual environment. During the experiment, the subjects wore an HTC Vive Pro HMD as well as Neulog Pulse and Galvanic Skin Response (GSR) sensors. GSR, or skin conductance response (SCR), measures changes in the electrical conductivity of the skin (in this study, at a finger of the non-dominant hand) when its glands produce ionic sweat in response to a given stimulus. Thus, it is considered an indicator of localized phasic arousal processes and has been interpreted as an indicator of stress in the literature83. Additionally, previous research, in particular on the TSST59, suggests that both heart rate (HR) and heart rate variability can detect the physiological effects of stress on human participants. Therefore, we concluded that both measures apply, as our cognitive test was taken from the TSST. In this study, we measured heart rate since it successfully indicated physiological changes while performing multiple TSSTs in VR in a previous study75. For rendering, system control, and logging, an Intel computer (Core i7-6900K at 3.2 GHz) with an NVIDIA GeForce GTX 1080 graphics card was used. The questionnaires were completed on a MacBook Pro (Retina, 13 inches, late 2013).
The following questionnaires were employed in this study:
State Trait Anxiety Depression Inventory-State (STADI-S)84 measures the current state of anxiety and depression of a person. It consists of 20 statements scored from 1 (Not at all) to 4 (Very much so). The sub-scales of excitement and concern estimate the level of anxiety, whereas the euthymia (positive mood) and dysthymia (negative mood) sub-scales are used as dimensions of depression.
Profiles of Mood States (POMS)85 was used to assess mood. For this purpose, a value for the total mood disturbance was determined. The questionnaire contains keywords and statements that describe different feelings and are scored on a 5-point Likert scale ranging from 1 (Not at all) to 5 (Extremely).
Short Stress State Questionnaire (SSSQ)86,87 records the status of engagement, distress and worry after a given task. The questionnaire consists of 24 items rated from 1 (Not at all) to 5 (Very much so).
Perceived Stress Scale (PSS)88 measures the perception of stress. It contains 10 items ranging from 0 (Never) to 4 (Very often). As the original items refer to the situations during the past month in one’s life, for the purpose of this study, the items were modified to measure the momentary perceived stress.
Igroup Presence Questionnaire (IPQ)62 was used to measure the perceived sense of presence in VR. It contains 14 items on a 7-point Likert scale ranging from 0 to 6 with different scale anchors, meaning that some items have general anchors (0: Fully disagree to 6: Fully agree) and some have more precise anchors (e.g., 0: Not consistent and 6: Very consistent). The questionnaire also has four sub-scales: general presence or the sense of being there, spatial presence, involvement, and experienced realism.
Simulator Sickness Questionnaire (SSQ)89 measures 16 symptoms that may occur during or after VR exposure. The symptoms are rated from 0 (None) to 3 (Hard) and are classified into three categories: Nausea, Oculomotor and Disorientation.
#### Procedure
Upon arrival, the subjects were informed about the purpose of the study and their right to interrupt or quit at any time. Thereafter, they signed the informed consent. At baseline (see Fig. 12), the SSQ-pre was administered and the baseline GSR and HR were measured. For this purpose, the subjects wore the HMD and saw the control environment (i.e., a black virtual room with a white screen showing a fixation cross) for 6 min. Participants were asked to look at the fixation cross on the white screen and not to move or to speak.
The study had a within-subject design and consisted of a control and four experimental conditions. The order of the experimental conditions was counterbalanced but the control condition was consistently administered at the beginning and immediately after the baseline measurement. Each condition consisted of three parts.
In the second part, the participants were exposed for 6 min to either an urban or a forest virtual environment presented either using 360$$^{\circ }$$ videos or conventional photos taken from the same video in the form of a slideshow. In the control condition, the presented virtual environment did not change during the exposure. In other words, after performing the mental arithmetic task inside the control environment during the first part, the subjects were exposed further to the control environment and did not have any specific task to do. They were, however, allowed to look around in the virtual environment and were instructed not to speak or move. In this part, GSR and HR were measured.
In the third part and after the exposure, the participants took off the HMD and the sensors and filled out the following questionnaires: STADI-S, POMS, SSSQ, PSS, IPQ.
It is important to clarify that the cognitive test in this experiment was designed to test the participants’ cognitive performance after exposure to the previous (and not following) virtual environment. For this reason, after the last condition, the cognitive test was administered again. Thereafter the participants filled out the SSQ-post and were compensated with course credits.
### Data analysis
Prior to the analysis, the physiological signals were smoothed using a low-pass Butterworth filter with a cutoff frequency of 1 Hz. Thereafter, they were normalized using the following formula90,91:
\begin{aligned} \tilde{S} = \frac{s - \min (\bar{s})}{\max (\bar{s}) - \min (\bar{s})} \end{aligned}
(1)
where S is the raw signal, s the smoothed version of S, $$\bar{s}$$ the signal taken over the entire session, and $$\tilde{S}$$ the normalized signal.
Subsequently, difference scores were used, with the average values measured during the cognitive test being subtracted from the values of the exposure phase. On these difference scores the two-way repeated-measures ANOVA was performed.
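One way to implement this preprocessing is sketched below: a 1 Hz low-pass Butterworth filter, the min–max normalization of Eq. (1), and a difference score. The sampling rate, filter order, and the split between the cognitive-test and exposure segments are assumptions made only for the sake of the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw, fs=10.0, order=2):
    """Low-pass filter at 1 Hz and min-max normalise over the whole session.

    The sampling rate `fs` and filter `order` are assumptions; the text only
    specifies a low-pass Butterworth filter with a 1 Hz cutoff.
    """
    b, a = butter(order, 1.0, btype='low', fs=fs)   # 1 Hz low-pass Butterworth
    smoothed = filtfilt(b, a, raw)                  # zero-phase smoothing
    lo, hi = smoothed.min(), smoothed.max()         # min/max over the session
    return (smoothed - lo) / (hi - lo)              # Eq. (1)

# Hypothetical session layout: first half = cognitive test, second half = exposure
signal = preprocess(np.random.default_rng(1).normal(size=600))
diff_score = signal[300:].mean() - signal[:300].mean()  # exposure minus test
```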
In order to analyse the cognitive performance as well as the questionnaire data, difference scores were calculated by subtracting the value obtained in the control condition from the value of the given experimental subjective or cognitive measure. These differences were not calculated for the IPQ responses, as we were interested in the original presence values after each condition and not in the changes with respect to the control condition.
It has to be noted that, according to the Shapiro–Wilk test, some data were normally distributed (e.g., the IPQ sense of presence as well as the GSR and HR difference scores) and some were not (e.g., the cognitive test). Therefore, we decided to report analyses based on parametric tests in order not to switch between statistical tests. Thus, to test our hypotheses, for all data except the SSQ, two-way repeated-measures ANOVAs were performed with a significance level of 0.05. As effect size, partial eta squared ($$\eta _p^2$$) was reported, where a value of 0.01 represents a small effect, 0.06 a medium effect, and 0.14 a large effect92. The SSQ responses were analysed using a paired t test with a significance level of 0.05. Additionally, Cohen's d was reported as the effect size for the t test, which is commonly interpreted as small (d = 0.2), medium (d = 0.5), and large (d = 0.8)92.
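A minimal sketch of such a 2 × 2 repeated-measures ANOVA is shown below; the column names and the synthetic placeholder data are hypothetical and only mirror the environment × presentation-medium design described above, without reproducing the study's actual analysis or software.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per subject x condition
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'subject': np.repeat(np.arange(34), 4),
    'environment': np.tile(['forest', 'forest', 'urban', 'urban'], 34),
    'medium': np.tile(['video', 'photo', 'video', 'photo'], 34),
    'score': rng.normal(size=34 * 4),   # placeholder difference scores
})

res = AnovaRM(df, depvar='score', subject='subject',
              within=['environment', 'medium']).fit()
print(res)   # F statistics and p-values for both main effects and the interaction
```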
## Data availability
The authors will make the data available upon request.
## References
1. 1.
Indoor air quality—European environment agency. https://www.eea.europa.eu/signals/signals-2013/articles/indoor-air-quality. Accessed 14 Apr 2020.
2. 2.
Katcher, A. H. & Beck, A. M. Health and caring for living things. Anthrozoös 1, 175–183 (1987).
3. 3.
Stilgoe, J. R. Gone barefoot lately?. Am. J. Prev. Med. 20, 243–244 (2001).
4. 4.
Keniger, L. E., Gaston, K. J., Irvine, K. N. & Fuller, R. A. What are the benefits of interacting with nature?. Int. J. Environ. Res. Public Health 10, 913–935 (2013).
5. 5.
Ohly, H. et al. Attention restoration theory: A systematic review of the attention restoration potential of exposure to natural environments. J. Toxicol. Environ. Health Part B 19, 305–343 (2016).
6. 6.
Stevenson, M. P., Schilhab, T. & Bentsen, P. Attention restoration theory ii: A systematic review to clarify attention processes affected by exposure to natural environments. J. Toxicol. Environ. Health Part B 21, 227–268 (2018).
7. 7.
Jo, H., Song, C. & Miyazaki, Y. Physiological benefits of viewing nature: A systematic review of indoor experiments. Int. J. Environ. Res. Public Health 16, 4739 (2019).
8. 8.
Kaplan, R. & Kaplan, S. The Experience of Nature: A Psychological Perspective (CUP Archive, Cambridge, 1989).
9. 9.
Hartig, T., Mang, M. & Evans, G. W. Restorative effects of natural environment experiences. Environ. Behav. 23, 3–26 (1991).
10. 10.
Tennessen, C. M. & Cimprich, B. Views to nature: Effects on attention. J. Environ. Psychol. 15, 77–85 (1995).
11. 11.
Kim, T.-H. et al. Human brain activation in response to visual stimulation with rural and urban scenery pictures: A functional magnetic resonance imaging study. Sci. Total Environ. 408, 2600–2607 (2010).
12. 12.
An, M., Colarelli, S. M., O’Brien, K. & Boyajian, M. E. Why we need more nature at work: Effects of natural elements and sunlight on employee mental health and work attitudes. PLoS One 11, e0155614 (2016).
13. 13.
Kaplan, R. The nature of the view from home: Psychological benefits. Environ. Behav. 33, 507–542 (2001).
14. 14.
Bowler, D. E., Buyung-Ali, L. M., Knight, T. M. & Pullin, A. S. A systematic review of evidence for the added benefits to health of exposure to natural environments. BMC Public Health 10, 456 (2010).
15. 15.
Irvine, K. N. & Warber, S. L. Greening healthcare: Practicing as if the natural environment really mattered. Altern. Ther. Health Med. 8, 76–83 (2002).
16. 16.
Pretty, J. How nature contributes to mental and physical health. Spirit. Health Int. 5, 68–78 (2004).
17. 17.
Park, B.-J. et al. Physiological effects of Shinrin-Yoku (taking in the atmosphere of the forest)—using salivary cortisol and cerebral activity as indicators–. J. Physiol. Anthropol. 26, 123–128 (2007).
18. 18.
Sung, J., Woo, J.-M., Kim, W., Lim, S.-K. & Chung, E.-J. The effect of cognitive behavior therapy-based “forest therapy’’ program on blood pressure, salivary cortisol level, and quality of life in elderly hypertensive patients. Clin. Exp. Hypertens. 34, 1–7 (2012).
19. 19.
Annerstedt, M. et al. Finding stress relief in a forest. Ecol. Bull. 20, 33–42 (2010).
20. 20.
Shin, W. S., Yeoun, P. S., Yoo, R. W. & Shin, C. S. Forest experience and psychological health benefits: The state of the art and future prospect in Korea. Environ. Health Prev. Med. 15, 38 (2010).
21. 21.
Hansen-Ketchum, P., Marck, P. & Reutter, L. Engaging with nature to promote health: New directions for nursing research. J. Adv. Nurs. 65, 1527–1538 (2009).
22. 22.
Maller, C., Townsend, M., Pryor, A., Brown, P. & St Leger, L. Healthy nature healthy people:‘Contact with nature’ as an upstream health promotion intervention for populations. Health Promotion Int. 21, 45–54 (2006).
23. 23.
Frumkin, H. Healthy places: Exploring the evidence. Am. J. Public Health 93, 1451–1456 (2003).
24. 24.
Morita, E. et al. Psychological effects of forest environments on healthy adults: Shinrin-yoku (forest-air bathing, walking) as a possible method of stress reduction. Public Health 121, 54–63 (2007).
25. 25.
Lee, J., Park, B.-J., Tsunetsugu, Y., Kagawa, T. & Miyazaki, Y. Restorative effects of viewing real forest landscapes, based on a comparison with urban landscapes. Scand. J. For. Res. 24, 227–234 (2009).
26. 26.
Lee, J. et al. Effect of forest bathing on physiological and psychological responses in young Japanese male subjects. Public Health 125, 93–100 (2011).
27. 27.
Lee, J. et al. Influence of forest therapy on cardiovascular relaxation in young adults. Evid. Based Compl. Altern. Med. 2014, 20 (2014).
28. 28.
Ochiai, H. et al. Physiological and psychological effects of forest therapy on middle-aged males with high-normal blood pressure. Int. J. Environ. Res. Public Health 12, 2532–2542 (2015).
29. 29.
Chun, M. H., Chang, M. C. & Lee, S.-J. The effects of forest therapy on depression and anxiety in patients with chronic stroke. Int. J. Neurosci. 127, 199–203 (2017).
30. 30.
Lee, H. J. & Son, S. A. Qualitative assessment of experience on urban forest therapy program for preventing dementia of the elderly living alone in low-income class. J. People Plants Environ. 21, 565–574 (2018).
31. 31.
Song, C., Ikei, H., Kagawa, T. & Miyazaki, Y. Effects of walking in a forest on young women. Int. J. Environ. Res. Public Health 16, 229 (2019).
32. 32.
Kim, H., Lee, Y. W., Ju, H. J., Jang, B. J. & Kim, Y. I. An exploratory study on the effects of forest therapy on sleep quality in patients with gastrointestinal tract cancers. Int. J. Environ. Res. Public Health 16, 2449 (2019).
33. 33.
Rajoo, K. S., Karam, D. S. & Aziz, N. A. A. Developing an effective forest therapy program to manage academic stress in conservative societies: A multi-disciplinary approach. Urban For. Urban Green. 43, 126353 (2019).
34. 34.
Lee, H. J., Son, Y.-H., Kim, S. & Lee, D. K. Healing experiences of middle-aged women through an urban forest therapy program. Urban For. Urban Green. 38, 383–391 (2019).
35. 35.
Kaplan, R. The psychological benefits of nearby nature. In Role of Horticulture in Human Well-Being and Social Development: A National Symposium 125–133 (Timber Press, Arlington, 1992).
36. 36.
Lewis, C. A. Green Nature/Human Nature: The Meaning of Plants in Our Lives (University of Illinois Press, Illinois, 1996).
37. 37.
Leather, P., Pyrgas, M., Beale, D. & Lawrence, C. Windows in the workplace: Sunlight, view, and occupational stress. Environ. Behav. 30, 739–762 (1998).
38. 38.
Gamble, K. R., Howard, J. H. Jr. & Howard, D. V. Not just scenery: Viewing nature pictures improves executive attention in older adults. Exp. Aging Res. 40, 513–530 (2014).
39. 39.
Tuena, C. et al. Usability issues of clinical and research applications of virtual reality in older people: A systematic review. Front. Human Neurosci. 14, 93 (2020).
40. 40.
Ulrich, R. S. Natural versus urban scenes: Some psychophysiological effects. Environ. Behav. 13, 523–556 (1981).
41. 41.
Kweon, B.-S., Ulrich, R. S., Walker, V. D. & Tassinary, L. G. Anger and stress: The role of landscape posters in an office setting. Environ. Behav. 40, 355–381 (2008).
42. 42.
Slater, M. & Wilbur, S. A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments. Presence Teleoper. Virtual Environ. 6, 603–616 (1997).
43. 43.
Smith, J. W. Immersive virtual environment technology to supplement environmental perception, preference and behavior research: A review with applications. Int. J. Environ. Res. Public Health 12, 11486–11505 (2015).
44. 44.
Depledge, M. H., Stone, R. J. & Bird, W. Can natural and virtual environments be used to promote improved human health and wellbeing? (2011).
45. 45.
Browning, M. H., Saeidi-Rizi, F., McAnirlin, O., Yoon, H. & Pei, Y. The role of methodological choices in the effects of experimental exposure to simulated natural landscapes on human health and cognitive performance: A systematic review. Environ. Behav. https://doi.org/10.1177/0013916520906481 (2020).
46. 46.
Rizzo, A. S. & Kim, G. J. A SWOT analysis of the field of virtual reality rehabilitation and therapy. Presence Teleoper. Virtual Environ. 14, 119–146 (2005).
47. 47.
Browning, M. H., Mimnaugh, K. J., van Riper, C. J., Laurent, H. K. & LaValle, S. M. Can simulated nature support mental health? Comparing short, single-doses of 360-degree nature videos in virtual reality with the outdoors. Front. Psychol. 10, 20 (2019).
48. 48.
Palanica, A., Lyons, A., Cooper, M., Lee, A. & Fossat, Y. A comparison of nature and urban environments on creative thinking across different levels of reality. J. Environ. Psychol. 63, 44–51 (2019).
49. 49.
Calogiuri, G. et al. Experiencing nature through immersive virtual environments: Environmental perceptions, physical engagement, and affective responses during a simulated nature walk. Front. Psychol. 8, 2321 (2018).
50. 50.
Silva, R. A., Rogers, K. & Buckley, T. J. Advancing environmental epidemiology to assess the beneficial influence of the natural environment on human health and well-being (2018).
51. 51.
Yin, J., Zhu, S., MacNaughton, P., Allen, J. G. & Spengler, J. D. Physiological and cognitive performance of exposure to biophilic indoor environment. Build. Environ. 132, 255–262 (2018).
52. 52.
Chirico, A. & Gaggioli, A. When virtual feels real: Comparing emotional responses and presence in virtual and natural environments. Cyberpsychol. Behav. Soc. Netw. 22, 220–226 (2019).
53. 53.
Wang, X., Shi, Y., Zhang, B. & Chiang, Y. The influence of forest resting environments on stress using virtual reality. Int. J. Environ. Res. Public Health 16, 3263 (2019).
54. 54.
Anderson, A. P. et al. Relaxation with immersive natural scenes presented using virtual reality. Aerosp. Med. Human Perform. 88, 520–526 (2017).
55. 55.
Chung, K., Lee, D. & Park, J. Y. Involuntary attention restoration during exposure to mobile-based 360$${}^\circ$$ virtual nature in healthy adults with different levels of restorative experience: Event-related potential study. J. Med. Internet Res. 20, e11152 (2018).
56. 56.
Yu, C.-P., Lee, H.-Y. & Luo, X.-Y. The effect of virtual reality forest and urban environments on physiological and psychological responses. Urban For. Urban Green. 35, 106–114 (2018).
57. 57.
Schutte, N. S., Bhullar, N., Stilinović, E. J. & Richardson, K. The impact of virtual environments on restorativeness and affect. Ecopsychology 9, 1–7 (2017).
58. 58.
Annerstedt, M. et al. Inducing physiological stress recovery with sounds of nature in a virtual reality forest—results from a pilot study. Physiol. Behav. 118, 240–250 (2013).
59. 59.
Kirschbaum, C., Pirke, K.-M. & Hellhammer, D. H. The ‘trier social stress test’-a tool for investigating psychobiological stress responses in a laboratory setting. Neuropsychobiology 28, 76–81 (1993).
60. 60.
Hedblom, M. et al. Reduction of physiological stress by urban green space in a multisensory virtual experiment. Sci. Rep. 9, 1–11 (2019).
61. 61.
Schebella, M. F., Weber, D., Schultz, L. & Weinstein, P. The nature of reality: Human stress recovery during exposure to biodiverse, multisensory virtual environments. Int. J. Environ. Res. Public Health 17, 56 (2020).
62. 62.
Schubert, T., Friedmann, F. & Regenbrecht, H. The experience of presence: Factor analytic insights. Presence Teleoper. Virtual Environ. 10, 266–281 (2001).
63. 63.
Higuera-Trujillo, J. L., Maldonado, J.L.-T. & Millán, C. L. Psychological and physiological human responses to simulated and real environments: A comparison between photographs, 360 panoramas, and virtual reality. Appl. Ergon. 65, 398–409 (2017).
64. 64.
Patrick, E. et al. Using a large projection screen as an alternative to head-mounted displays for virtual environments. Proc. SIGCHI Confer. Human Factors Comput. Syst. 12, 478–485 (2000).
65. 65.
Banchi, Y., Yoshikawa, K. & Kawai, T. Evaluating user experience of 180 and 360 degree images. Electron. Imaging 2020, 244–251 (2020).
66. 66.
Gaebler, M. et al. Stereoscopic depth increases intersubject correlations of brain networks. NeuroImage 100, 427–434 (2014).
67. 67.
Kober, S. E., Kurzmann, J. & Neuper, C. Cortical correlate of spatial presence in 2d and 3d interactive virtual reality: An EEG study. Int. J. Psychophysiol. 83, 365–374 (2012).
68. 68.
Forlim, C. G. et al. Stereoscopic rendering via goggles elicits higher functional connectivity during virtual reality gaming. Front. Human Neurosci. 13, 365 (2019).
69. 69.
Bittner, L., Mostajeran, F., Steinicke, F., Gallinat, J. & Kühn, S. Evaluation of flowvr: A virtual reality game for improvement of depressive mood. Biorxiv 451245, 20 (2018).
70. 70.
Tse, A. et al. Was i there? impact of platform and headphones on 360 video immersion. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 2967–2974 (2017).
71. 71.
Passmore, P. J. et al. Effects of viewing condition on user experience of panoramic video. (2016).
72. 72.
Chirico, A. et al. Effectiveness of immersive videos in inducing awe: An experimental study. Sci. Rep. 7, 1–11 (2017).
73. 73.
McAllister, E., Bhullar, N. & Schutte, N. S. Into the woods or a stroll in the park: How virtual contact with nature impacts positive and negative affect. Int. J. Environ. Res. Public Health 14, 786 (2017).
74. 74.
Meehan, M., Insko, B., Whitton, M. & Brooks, F. P. Jr. Physiological measures of presence in stressful virtual environments. ACM Trans. Graph. 21, 645–652 (2002).
75. 75.
Mostajeran, F., Balci, M. B., Steinicke, F., Kühn, S. & Gallinat, J. The effects of virtual audience size on social anxiety during public speaking. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 303–312 (IEEE, 2020).
76. 76.
Berman, M. G., Jonides, J. & Kaplan, S. The cognitive benefits of interacting with nature. Psychol. Sci. 19, 1207–1212 (2008).
77. 77.
Hartig, T., Kaiser, F. G. & Strumse, E. Psychological restoration in nature as a source of motivation for ecological behaviour. Environ. Conserv. 34, 291–299 (2007).
78. 78.
Naimark, M. Elements of real-space imaging: A proposed taxonomy. In Stereoscopic Displays and Applications II, vol. 1457, 169–180 (International Society for Optics and Photonics, 1991).
79. 79.
Kardan, O. et al. Is the preference of natural versus man-made scenes driven by bottom-up processing of the visual features of nature?. Front. Psychol. 6, 471 (2015).
80. 80.
Ibarra, F. F. et al. Image feature types and their predictions of aesthetic preference and naturalness. Front. Psychol. 8, 632 (2017).
81. 81.
Appleton, J. Prospects and refuges re-visited. Landsc. J. 3, 91–103 (1984).
82. 82.
Hendrix, C. & Barfield, W. The sense of presence within auditory virtual environments. Presence Teleoper. Virtual Environ. 5, 290–301 (1996).
83. 83.
Boucsein, W. Electrodermal Activity (Springer, Berlin, 2012).
84. 84.
Bergner-Köther, R. Zur Differenzierung von Angst und Depression: Ein Beitrag zur Konstruktvalidierung des State-Trait-Angst-Depressions-Inventars Vol. 18 (University of Bamberg Press, Bamberg, 2014).
85. 85.
Shacham, S. A shortened version of the profile of mood states. J. Pers. Assess. 47, 305–306 (1983).
86. 86.
Helton, W. S. & Näswall, K. Short stress state questionnaire. Eur. J. Psychol. Assess. 1, 1–11. https://doi.org/10.1027/1015-5759/a000200 (2014).
87. 87.
Helton, W. S. & Näswall, K. Short stress state questionnaire. Eur. J. Psychol. Assess. 20, 20 (2015).
88. 88.
Cohen, S. et al. Perceived stress scale. Meas. Stress A Guide Health Social Sci. 235–283, 20 (1994).
89. 89.
Kennedy, R. S., Lane, N. E., Berbaum, K. S. & Lilienthal, M. G. Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 3, 203–220 (1993).
90. 90.
Lykken, D., Rose, R., Luther, B. & Maley, M. Correcting psychophysiological measures for individual differences in range. Psychol. Bull. 66, 481 (1966).
91. 91.
Healey, J. A. Wearable and automotive systems for affect recognition from physiology. Ph.D. thesis, Massachusetts Institute of Technology (2000).
92. 92.
Cohen, J. A power primer. Psychol. Bull. 112, 155 (1992).
## Acknowledgements
This work was funded by the European Union, the German Research Foundation (TRR 169/C8, SFB 936/C7), the German Federal Ministry of Education and Research (BMBF) and the German Federal Ministry for Economic Affairs and Energy (BMWi).
## Funding
Open Access funding enabled and organized by Projekt DEAL.
## Author information
Authors
### Contributions
All authors conceived the experiment. J.K. conducted the experiment and collected data. F.M. and J.K. analysed the results. F.M. wrote the manuscript and all authors reviewed it and contributed to the final version.
### Corresponding author
Correspondence to Fariba Mostajeran.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Mostajeran, F., Krzikawski, J., Steinicke, F. et al. Effects of exposure to immersive videos and photo slideshows of forest and urban environments. Sci Rep 11, 3994 (2021). https://doi.org/10.1038/s41598-021-83277-y
• ### Exposure to virtual nature: the impact of different immersion levels on skin conductance level, heart rate, and perceived relaxation
• Thiemo Knaust
• Anna Felnhofer
• Holger Schulz
Virtual Reality (2021) |
# Doubly Resonant Raman Scattering Induced by an Electric Field
### Abstract
By applying a variable electric field to a 30-35 Å GaAs/Ga$_{0.65}$Al$_{0.35}$As superlattice we have achieved tunable doubly resonant Raman processes, in which both the incident and the scattered light are in resonance with electronic transitions. The resonances, which involve states of the field-induced Stark ladder, manifest themselves as strong enhancements in the intensity of Raman scattering associated with the longitudinal optical phonon of GaAs. The dependence of the double resonances on field and photon energy provides direct information on the superlattice states, in very good agreement with results of photocurrent and photoluminescence experiments.
Type
Publication
Phys. Rev. B 38, 12720–12723 (1988)
Full citation:
F. Agulló-Rueda, E. E. Mendez, and J. M. Hong, “Doubly Resonant Raman Scattering Induced by an Electric Field,” Phys. Rev. B 38, 12720–12723 (1988). DOI: 10.1103/PhysRevB.38.12720 |
# The interpretation of densities as intensities and vice versa
## Densities
Consider the problem of estimating the common density $$f(x)=dF(x)$$ of indexed i.i.d. random variables $$\{X_i\}_{i\leq n}\in \mathbb{R}^d$$ from $$n$$ realisations of those variables, $$\{x_i\}_{i\leq n}$$, where $$F:\mathbb{R}^d\rightarrow[0,1].$$ We assume the distribution is absolutely continuous with respect to the Lebesgue measure, i.e. $$\mu(A)=0\Rightarrow P(X_i\in A)=0$$. Amongst other things, this implies that $$P(X_i=X_j)=0\text{ for }i\neq j$$ and that the density exists as a standard function (i.e. we do not need to consider generalised functions such as distributions to handle atoms in $$F$$, etc.).
Here we will give the density a finite parameter vector $$\theta$$, i.e. $$f(x;\theta)=dF(x;\theta)$$, whose value completely characterises the density; the problem of estimating the density is then the same as the one of estimating $$\theta.$$
In the method of maximum likelihood estimation we seek to maximise the value of the empirical likelihood of the data. That is, we choose a parameter estimate $$\hat{\theta}$$ to satisfy
\begin{align*} \hat{\theta} &:=\operatorname{argmax}_\theta\prod_i f(x_i;\theta)\\\\ &=\operatorname{argmax}_\theta\sum_i \log f(x_i;\theta) \end{align*}
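A generic sketch of this recipe is given below; the Gaussian family and the numerical optimiser are purely illustrative assumptions, not something prescribed by this note.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data standing in for the n realisations x_1, ..., x_n
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=500)

def neg_log_likelihood(theta):
    mu, log_sigma = theta                        # unconstrained parametrisation
    return -np.sum(norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)))

theta_hat = minimize(neg_log_likelihood, x0=np.zeros(2)).x
print(theta_hat[0], np.exp(theta_hat[1]))        # roughly 2.0 and 1.5
```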
Let’s consider the case where we try to estimate this function by constructing it from some basis of $$p$$ functions $$\phi_j:\mathbb{R}^d\rightarrow\mathbb{R}$$. Two questions arise:

1. Can functional data analysis ({filename}functional_data.md) get me here? (see Ch. 21 of Ramsay and Silverman)
2. Can I use point process estimation theory to improve density estimation? After all, normal point-process estimation claims to be an un-normalised version of density estimation. Lies11 draws some parallels there, esp. with mixture models. |
# Double integral over circle region
1. Jun 10, 2010
### Shaybay92
1. The problem statement, all variables and given/known data
So I have to use the type I type II region formula to find the volume under the equation (2x-y) and over the circular domain with center (0,0) and radius 2. Do I have to split this circle into semi-circles and treat it as 2 type I domains? I got the following limits for the top half, but I get stuck when integrating:
3. The attempt at a solution
y limits:
Upper: Sqrt(2 - x^2) from the equation 2 = y^2 + x^2
Lower: 0
X limits:
Upper: 2
Lower: -2
So I have to find the integral with respect to y of 2x-y with limits 0 to Sqrt[2-x^2]
After integrating with respect to Y I got:
2x(Sqrt[2-x^2]) - 1 + (x^2)/2
Is this correct to start with? Then integrate with respect to x from -2 to 2?
2. Jun 10, 2010
### lanedance
the geometric picture isn't very clear... can you explain type I & II regions, & how is this a volume if you're only integrating over 2 variables, is it actually an area?
the line y=2x passes through the origin, so it will split the circle you describe in half... is there any other bound? or is it the region above y = 2x, bounded above by the circle?
if you are integrating over the circle, polar coordinates always make life easy...
3. Jun 11, 2010
### Shaybay92
Type 1 regions are two variable domains where the top and bottom of the boundaries of the domain (in the xy plane) are functions h1(x) and h2(x) and the left and right are constants e.g x = 1, x=2... Type II regions are two variable domains where the left and right boundaries are functions h1(y) h2(y) and the top and bottom are constants e.g y = 1 y = 2.
In this question you have to use the double integral formula for type 1 or type 2 regions because it is in part of the textbook where polar coordinates have not yet been discussed. The answer is zero.
The domain is a circular region so I was trying to cut it into two semi-circles of type 1 where the top semi-circle has functions y = 0 and y = Sqrt[2 -x^2] and then the sides would just be -2 and 2. Then the bottom would be the same but the y limits switched and -Sqrt[2 - x^2] instead of +ve. Is it possible to just say they will cancel out because of symmetry involved? That is,
y limits of top semicircle are [0, Sqrt[2-x^2]] and bottom semicircle [-Sqrt[2-x^2],0] So could you just flip ones limits over by making the entire integral negative and they both cancel to zero??
The double integral formula for volumes above Type 1 regions and below surfaces in 3D space are:
$$\iint_D f(x,y)\, dA = \int_a^b \int_{h_1(x)}^{h_2(x)} f(x,y)\, dy\, dx$$
Type 2 regions are the same except dx dy and obviously g1(y) and g2(y)....
4. Jun 11, 2010
### Shaybay92
Also it is under the plane 2x -y... hope that is clear :/
5. Jun 11, 2010
### vela
Staff Emeritus
There's no reason to split the region into two. In fact, it's easier to show the integral is 0 if you don't.
Hint: Consider the even or oddness of the integrand.
6. Jun 11, 2010
### Shaybay92
I see my mistake :P I don't know why I didn't just treat it as a type I with y limits -Sqrt[4-x^2] and +Sqrt[4-x^2]...... I was making it too difficult for myself. My lecturer helped me out with this and obviously it cancels down to 0 when you integrate with respect to x.
Thanks for your help anyway!! :) |
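A quick symbolic check of the symmetry argument above (a SymPy sketch, assuming the radius-2 disk and the integrand 2x − y from the thread): the integral over the full disk, treated as a single type I region, comes out to 0 because 2x is odd in x and y is odd in y over a symmetric domain.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Integrate 2x - y over the disk x^2 + y^2 <= 4, treated as one type I region
inner = sp.integrate(2*x - y, (y, -sp.sqrt(4 - x**2), sp.sqrt(4 - x**2)))
print(sp.integrate(inner, (x, -2, 2)))   # prints 0
```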
8.47
$\Delta U=q+w$
Ammarah 2H
Posts: 30
Joined: Fri Sep 29, 2017 7:06 am
8.47
How do you know that the work is a positive value for #47?
Kyle Sheu 1C
Posts: 87
Joined: Fri Jun 23, 2017 11:39 am
Re: 8.47
When work is done on the system, energy is being input, so we can think about it as a positive change in the system's energy |
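As a quick sign-convention illustration (the numbers here are made up and are not the values from problem 8.47): if $q = +100\text{ J}$ of heat flows into the system and $w = +50\text{ J}$ of work is done on it, then $\Delta U = q + w = +150\text{ J}$; the internal energy rises because both transfers add energy to the system.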
# Achievements
### Gold Medal, 6th Place
Kaggle Text Normalization Challenge by Google
### Winner
KONE IBM Hackathon 2017
### Game Dev Bootcamp
Best Technical Workshop 2016
# Experience
June 2018 – August 2018
Geneva, Switzerland
#### CERN
• Selected (among 41 candidates out of over 1,800 applicants) for Data Quality Monitoring using deep learning at the CMS Experiment
• Designed a custom neural network architecture termed Hierarchical Latent Autoencoder to exploit CMS Trigger System’s hierarchical design
• Improved reconstruction of trigger rate behaviour significantly to enable better anomaly detection with a probabilistic reconstruction metric for enhanced interpretability
November 2017 – Present
Kolkata, India
#### Cognibit
• Raised precision in predicting elevator failures from existing 21% to 88% while remotely collaborating with Kone’s Analytics team
• Piloting the ongoing deployment of the researched models into production by closely working with the IoT team in Finland
• Leading research & development towards a generalized AI-based platform combining big data analytics & deep learning technologies to enable predictive maintenance for any industrial system that generates log data
May 2017 – June 2017
Hyvinkää, Finland
#### Kone Corporation
• Researched approaches to anomaly detection using LSTM-based RNN architectures to model elevator logs as a natural language sequence
• Designed & experimented with several predictive maintenance models to develop a system prototype which achieved record accuracy
• Performed statistical modeling & diagnostics of Kone’s German elevator data extracted from the Remote Maintenance Program, which yielded commercially valuable insights
# Selected Publications
### Deep Representation Learning for Trigger Monitoring
We propose a novel neural network architecture called Hierarchical Latent Autoencoder to exploit the underlying hierarchical nature of the CMS Trigger System at CERN for data quality monitoring. The results demonstrate that our architecture reduces the reconstruction error on the test set from $9.35 \times 10^{-6}$ when using a vanilla Variational Autoencoder to $4.52 \times 10^{-6}$ when using our Hierarchical Latent Autoencoder.
CERN Openlab Technical Report, 2018
### Text Normalization using Memory Augmented Neural Networks
With the addition of a dynamic memory access and storage mechanism, we present a neural architecture that serves as a language-agnostic text normalization system while avoiding the kind of unacceptable errors made by LSTM-based recurrent neural networks. Our proposed system requires significantly smaller amounts of data, training time, and compute resources.
Speech Communication (EURASIP & ISCA) Elsevier, 2018 |
2. Finance
3. an asset has the current spot price at dollar100 in...
# Question: an asset has the current spot price at dollar100 in...
###### Question details
An asset has a current spot price of $100. In the future, assume there are only two possible cases for the spot price: it will either increase to $120 or decrease to $80. The risk-free rate is 3.45% per period. Apply the one-period binomial model to estimate the fair value of options on this asset.
I learned this model but do not know how to apply it; please work this example to demonstrate it for me. |
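A minimal sketch of the one-period binomial valuation under the risk-neutral measure is shown below. The strike K = 100 and the option types (a call and a put) are assumptions, since the question does not specify them.

```python
# One-period binomial model; strike K = 100 and option types are assumptions.
S0, Su, Sd = 100.0, 120.0, 80.0
r = 0.0345                       # risk-free rate per period
u, d = Su / S0, Sd / S0          # up/down factors: 1.2 and 0.8
p = ((1 + r) - d) / (u - d)      # risk-neutral probability, about 0.586

K = 100.0
call = (p * max(Su - K, 0) + (1 - p) * max(Sd - K, 0)) / (1 + r)   # about 11.33
put = (p * max(K - Su, 0) + (1 - p) * max(K - Sd, 0)) / (1 + r)    # about 8.00
print(round(call, 2), round(put, 2))
```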
# genericSymmetricMatrix -- make a generic symmetric matrix
## Synopsis
• Usage:
genericSymmetricMatrix(R,r,n)
• Inputs:
• R, a ring
r, a variable in the ring R (this input is optional)
• n, an integer
• Outputs:
• a symmetric matrix with n rows whose entries on and above the diagonal are the variables of R starting with r
## Description
A square matrix M is symmetric if transpose(M) - M == 0.
i1 : R = ZZ[a..z];

i2 : M = genericSymmetricMatrix(R,a,3)

o2 = | a b c |
     | b d e |
     | c e f |

             3      3
o2 : Matrix R  <--- R

i3 : transpose(M) - M == 0

o3 = true

i4 : genericSymmetricMatrix(R,d,5)

o4 = | d e f g h |
     | e i j k l |
     | f j m n o |
     | g k n p q |
     | h l o q r |

             5      5
o4 : Matrix R  <--- R
Omitting the input r is the same as having r be the first variable in R.
i5 : genericSymmetricMatrix(R,3)

o5 = | a b c |
     | b d e |
     | c e f |

             3      3
o5 : Matrix R  <--- R

i6 : genericSymmetricMatrix(R,5)

o6 = | a b c d e |
     | b f g h i |
     | c g j k l |
     | d h k m n |
     | e i l n o |

             5      5
o6 : Matrix R  <--- R |
# Problem of the Week Problem D Many Ways to Get There
Rectangle $$PQRS$$ has $$QR=4$$ and $$RS=7$$. $$\triangle TRU$$ is inscribed in rectangle $$PQRS$$ with $$T$$ on $$PQ$$ such that $$PT=4$$, and $$U$$ on $$PS$$ such that $$SU=1$$.
Determine the value of $$\angle RUS+\angle PUT$$.
There are many ways to solve this problem. After you have solved it, see if you can solve it a different way. |
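One possible route (a coordinate sketch; the problem invites several approaches) is to place $$P$$ at the origin with $$PQ$$ along the x-axis and compute both angles directly; their sum comes out to $$135^\circ$$.

```python
from math import atan2, degrees

# P at the origin, PQ along the x-axis, so PQ = RS = 7 and QR = PS = 4
P, Q, R, S = (0, 0), (7, 0), (7, 4), (0, 4)
T = (4, 0)   # PT = 4 on PQ
U = (0, 3)   # SU = 1 on PS, hence PU = 3

def angle(vertex, a, b):
    """Angle at `vertex` between rays vertex->a and vertex->b, in degrees."""
    ax, ay = a[0] - vertex[0], a[1] - vertex[1]
    bx, by = b[0] - vertex[0], b[1] - vertex[1]
    return degrees(abs(atan2(ax * by - ay * bx, ax * bx + ay * by)))

print(angle(U, R, S) + angle(U, P, T))   # 135.0
```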
# Eliminating variables from system of equation using Eliminate or Solve
I have a system of equations for algebraic curve given by the zero locus of some polynomial encoded in the system of equations (I want to eliminate variable z and get algebraic curve in terms of x and y, C12 and C22 can be assumed to be integers and even positive if that helps):
Eliminate[{y == z^(2 + 2 C12)/(-x^2 + z^2),
1 == ((x - z) (x/z)^(2 C12) z^(-2 - 2 C12 + 2 C22) (x + z))/(-1 +
z^2)}, {z}]
Eliminate::ifun: Inverse functions are being used by Eliminate, so some solutions may not be found; use Reduce for complete solution information.
However Mathematica can't eliminate z.
When I use Solve I get:
Solve::nsmet: This system cannot be solved with the methods available to Solve.
Maple gives some answers but they aren't very useful, for instance they don't give algebraic curve that I want.
Is there any smart way to do what I want?
• You are raising z to a symbolic power so this is not an algebraic curve. Even assuming it is positive and integral does not make it one (it becomes a parametrized family of such curves). – Daniel Lichtblau Mar 13 '17 at 15:25
• Yes, that's what I want. To obtain a family of curves indexed by C's, given by some zero locus polynomial of y and x where their powers depend on C's, but not in the form of this system but in the form of zero locus of some polynomial. Is this possible? – Caims Mar 13 '17 at 15:44
• Offhand I do not know how to do that. There has been some work on Groebner bases parametrized by exponents, but I don't think it ever went far enough to do this. – Daniel Lichtblau Mar 13 '17 at 18:07
• Can you give me some reference? – Caims Mar 13 '17 at 18:10
• – Daniel Lichtblau Mar 13 '17 at 20:17 |
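A workaround sketch for one concrete, hypothetical choice of exponents (C12 = C22 = 1, values chosen only for illustration): after clearing denominators the system becomes polynomial in z, and z can then be eliminated with a resultant. This does not address the general case of symbolic exponents discussed in the comments.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
C12, C22 = 1, 1   # hypothetical fixed values

# First equation, cleared of its denominator
f = y*(z**2 - x**2) - z**(2 + 2*C12)
# Second equation as (expression) - 1, then take its numerator
g = (x - z)*(x/z)**(2*C12)*z**(-2 - 2*C12 + 2*C22)*(x + z)/(z**2 - 1) - 1
g_num, _ = sp.fraction(sp.together(g))

# Eliminate z via the resultant; the zero locus of `curve` is the curve in x, y
curve = sp.factor(sp.resultant(f, sp.expand(g_num), z))
print(curve)
```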
# 7.1: Introduction to Conics
In this chapter, we study the Conic Sections - literally 'sections of a cone'. Imagine a double-napped cone as seen below being 'sliced' by a plane.
If we slice the cone with a horizontal plane the resulting curve is a circle.
Tilting the plane ever so slightly produces an ellipse.
If the plane is parallel to a slant side (a generator) of the cone, we get a parabola.
If we slice the cone with a vertical plane, we get a hyperbola.
If the slicing plane contains the vertex of the cone, we get the so-called 'degenerate' conics: a point, a line, or two intersecting lines.
We will focus the discussion on the non-degenerate cases: circles, parabolas, ellipses, and hyperbolas, in that order. To determine equations which describe these curves, we will make use of their definitions in terms of distances.
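For instance, sketching only the circle case ahead of the full treatment: a point $$(x,y)$$ lies on the circle of radius $$r$$ centred at $$(h,k)$$ exactly when its distance to the centre equals $$r$$, so

$$\sqrt{(x-h)^2+(y-k)^2}=r \quad\Longleftrightarrow\quad (x-h)^2+(y-k)^2=r^2.$$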
## Thinkwell's Trigonometry with Professor Edward Burger
Thinkwell's Trigonometry homeschool course has engaging online video lessons and step-by-step exercises that teach you what you'll need to be successful in Calculus. The Trigonometry curriculum covers trig functions, identities, ratios and more. Instead of trying to learn what you need from an old-fashioned textbook, you can watch easy-to-understand trigonometry videos.
Thinkwell's award-winning math teacher, Edward Burger, can explain and demonstrate trig clearly to anyone, so trigonometry basics are easy to understand and remember. Professor Burger shares the tricks and tips so your students will remember them when they begin calculus.
The workbook (optional) comes with lecture notes, sample problems, and exercises so that you can study even when away from the computer.
#### Risk-Free 14 Day
##### Money-Back Guarantee
Do you have a course subscription code?
Click “Add To Cart” above and enter it as part of the checkout process.
## Trigonometry
Thinkwell's Trigonometry has high-quality online video lessons and step-by-step exercises that teach you what you'll need to be successful in Calculus. Thinkwell's Trigonometry covers the same topics that the most popular textbooks cover, so it's a perfect study aid. Instead of trying to learn what you need from an old-fashioned textbook, you can watch easy-to-understand trigonometry videos by one of our nation's best math teachers.
Thinkwell's award-winning math teacher, Edward Burger, can explain and demonstrate trig clearly to anyone, so trigonometry basics are easy to understand and remember. Professor Burger shares the tricks and tips so your students will remember them when they begin calculus. And because it's available 24/7 for one fixed price, instead of by the hour, it's better than a tutor.
• 12-month Online Subscription to our complete Trigonometry course with video lessons, automatically graded trigonometry problems, and much more.
• Workbook (optional) with lecture notes, sample problems, and exercises so that you can study even when away from the computer.
### Online Subscription, 12-month access
• High-quality video lessons explain all of the Trigonometry math topics and concepts
• Automatically graded trigonometry problems with step-by-step, immediate feedback allow you to track your progress
• Printable full-color illustrated notes help you review what you've learned in the video lesson
• Subscriptions start when you are ready. Buy now and activate your course anytime you like. Wait up to one year to activate your subscription your 12-month subscription doesn't begin until you say so!
### Workbook, Notes, sample problems, exercises, and practice problems
Study without a computer. Our workbook companion contains the same lecture notes and sample problems that are delivered online, as well as some additional exercises, all in a convenient print format. Answers to the odd-numbered exercises are in the back of the book. Online Subscription is required workbook not sold separately.
### Trigonometry Details
Thinkwell's Trigonometry has all the features your home school needs:
• Equivalent to 11th- or 12th-grade trigonometry
• More than 180 video lessons
• 1000+ interactive trigonometry problems with immediate feedback allow you to track your progress
• Trigonometry practice tests and final tests for all 8 chapters, as well as a midterm and a final
• Printable illustrated notes for each topic
• Real-world application examples in both lectures and exercises
• Closed captioning for all videos
• Glossary of more than 200 mathematical terms
• Engaging content to help students advance their mathematical knowledge:
• Review of graphs and functions
• Trigonometric functions: sine, cosine, tangent, cotangent, secant, and cosecant
• Inverse trigonometric functions
• Trigonometric identities
• Law of Sines and Law of Cosines
• Vectors and unit vectors
• Complex numbers and polar coordinates
• Exponential and logarithmic functions
• Conic sections: parabolas, ellipses, and hyperbolas
• 1.1 Graphing Basics
• 1.1.1 Using the Cartesian System
• 1.1.2 Thinking Visually
• 1.2 Relationships between Two Points
• 1.2.1 Finding the Distance between Two Points
• 1.2.2 Finding the Second Endpoint of a Segment
• 1.3 Relationships among Three Points
• 1.3.1 Collinearity and Distance
• 1.3.2 Triangles
• 1.4 Circles
• 1.4.1 Finding the Center-Radius Form of the Equation of a Circle
• 1.4.2 Decoding the Circle Formula
• 1.4.3 Finding the Center and Radius of a Circle
• 1.4.4 Solving Word Problems Involving Circles
• 1.5 Graphing Equations
• 1.5.1 Graphing Equations by Locating Points
• 1.5.2 Finding the x- and y-Intercepts of an Equation
• 1.6 Function Basics
• 1.6.1 Functions and the Vertical Line Test
• 1.6.2 Identifying Functions
• 1.6.3 Function Notation and Finding Function Values
• 1.7 Working with Functions
• 1.7.1 Determining Intervals Over Which a Function Is Increasing
• 1.7.2 Evaluating Piecewise-Defined Functions for Given Values
• 1.7.3 Solving Word Problems Involving Functions
• 1.8 Function Domain and Range
• 1.8.1 Finding the Domain and Range of a Function
• 1.8.2 Domain and Range: One Explicit Example
• 1.8.3 Satisfying the Domain of a Function
• 1.9 Linear Functions: Slope
• 1.9.1 An Introduction to Slope
• 1.9.2 Finding the Slope of a Line Given Two Points
• 1.9.3 Interpreting Slope from a Graph
• 1.9.4 Graphing a Line Using Point and Slope
• 1.10 Equations of a Line
• 1.10.1 Writing an Equation in Slope-Intercept Form
• 1.10.2 Writing an Equation Given Two Points
• 1.10.3 Writing an Equation in Point-Slope Form
• 1.10.4 Matching a Slope-Intercept Equation with Its Graph
• 1.10.5 Slope for Parallel and Perpendicular Lines
• 1.11 Graphing Functions
• 1.11.1 Graphing Some Important Functions
• 1.11.2 Graphing Piecewise-Defined Functions
• 1.11.3 Matching Equations with Their Graphs
• 1.12 Manipulating Graphs: Shifts and Stretches
• 1.12.1 Shifting Curves along Axes
• 1.12.2 Shifting or Translating Curves along Axes
• 1.12.3 Stretching a Graph
• 1.12.4 Graphing Quadratics Using Patterns
• 1.13 Manipulating Graphs: Symmetry and Reflections
• 1.13.1 Determining Symmetry
• 1.13.2 Reflections
• 1.13.3 Reflecting Specific Functions
• 1.14.1 Deconstructing the Graph of a Quadratic Function
• 1.14.2 Nice-Looking Parabolas
• 1.14.3 Using Discriminants to Graph Parabolas
• 1.14.4 Maximum Height in the Real World
• 1.15 Quadratic Functions: The Vertex
• 1.15.1 Finding the Vertex by Completing the Square
• 1.15.2 Using the Vertex to Write the Quadratic Equation
• 1.15.3 Finding the Maximum or Minimum of a Quadratic
• 1.15.4 Graphing Parabolas
• 1.16 Composite Functions
• 1.16.1 Using Operations on Functions
• 1.16.2 Composite Functions
• 1.16.3 Components of Composite Functions
• 1.16.4 Finding Functions That Form a Given Composite
• 1.16.5 Finding the Difference Quotient of a Function
• 1.16.6 Calculating the Average Rate of Change
• 1.17 Rational Functions
• 1.17.1 Understanding Rational Functions
• 1.17.2 Basic Rational Functions
• 1.18 Graphing Rational Functions
• 1.18.1 Vertical Asymptotes
• 1.18.2 Horizontal Asymptotes
• 1.18.3 Graphing Rational Functions
• 1.18.4 Graphing Rational Functions: More Examples
• 1.18.5 Oblique Asymptotes
• 1.18.6 Oblique Asymptotes: Another Example
• 1.19 Function Inverses
• 1.19.1 Understanding Inverse Functions
• 1.19.2 The Horizontal Line Test
• 1.19.3 Are Two Functions Inverses of Each Other?
• 1.19.4 Graphing the Inverse
• 1.20 Finding Function Inverses
• 1.20.1 Finding the Inverse of a Function
• 1.20.2 Finding the Inverse of a Function with Higher Powers
#### 2. The Trigonometric Functions
• 2.1 Angles and Radian Measure
• 2.1.1 Finding the Quadrant in Which an Angle Lies
• 2.1.2 Finding Coterminal Angles
• 2.1.3 Finding the Complement and Supplement of an Angle
• 2.1.4 Converting between Degrees and Radians
• 2.1.5 Using the Arc Length Formula
• 2.2 Right Angle Trigonometry
• 2.2.1 An Introduction to the Trigonometric Functions
• 2.2.2 Evaluating Trigonometric Functions for an Angle in a Right Triangle
• 2.2.3 Finding an Angle Given the Value of a Trigonometric Function
• 2.2.4 Using Trigonometric Functions to Find Unknown Sides of Right Triangles
• 2.2.5 Finding the Height of a Building
• 2.3 The Trigonometric Functions
• 2.3.1 Evaluating Trigonometric Functions for an Angle in the Coordinate Plane
• 2.3.2 Evaluating Trigonometric Functions Using the Reference Angle
• 2.3.3 Finding the Value of Trigonometric Functions Given Information about the Values of Other Trigonometric Functions
• 2.3.4 Trigonometric Functions of Important Angles
• 2.4 Graphing Sine and Cosine Functions
• 2.4.1 An Introduction to the Graphs of Sine and Cosine Functions
• 2.4.2 Graphing Sine or Cosine Functions with Different Coefficients
• 2.4.3 Finding Maximum and Minimum Values and Zeros of Sine and Cosine
• 2.4.4 Solving Word Problems Involving Sine or Cosine Functions
• 2.5 Graphing Sine and Cosine Functions with Vertical and Horizontal Shifts
• 2.5.1 Graphing Sine and Cosine Functions with Phase Shifts
• 2.5.2 Fancy Graphing: Changes in Period, Amplitude, Vertical Shift, and Phase Shift
• 2.6 Graphing Other Trigonometric Functions
• 2.6.1 Graphing the Tangent, Secant, Cosecant, and Cotangent Functions
• 2.6.2 Fancy Graphing: Tangent, Secant, Cosecant, and Cotangent
• 2.6.3 Identifying a Trigonometric Function from its Graph
• 2.7 Inverse Trigonometric Functions
• 2.7.1 An Introduction to Inverse Trigonometric Functions
• 2.7.2 Evaluating Inverse Trigonometric Functions
• 2.7.3 Solving an Equation Involving an Inverse Trigonometric Function
• 2.7.4 Evaluating the Composition of a Trigonometric Function and Its Inverse
• 2.7.5 Applying Trigonometric Functions: Is He Speeding?
#### 3. Trigonometric Identities
• 3.1 Basic Trigonometric Identities
• 3.1.1 Fundamental Trigonometric Identities
• 3.1.2 Finding All Function Values
• 3.2 Simplifying Trigonometric Expressions
• 3.2.1 Simplifying a Trigonometric Expression Using Trigonometric Identities
• 3.2.2 Simplifying Trigonometric Expressions Involving Fractions
• 3.2.3 Simplifying Products of Binomials Involving Trigonometric Functions
• 3.2.4 Factoring Trigonometric Expressions
• 3.2.5 Determining Whether a Trigonometric Function Is Odd, Even, or Neither
• 3.3 Proving Trigonometric Identities
• 3.3.1 Proving an Identity
• 3.3.2 Proving an Identity: Other Examples
• 3.4 Solving Trigonometric Equations
• 3.4.1 Solving Trigonometric Equations
• 3.4.2 Solving Trigonometric Equations by Factoring
• 3.4.3 Solving Trigonometric Equations with Coefficients in the Argument
• 3.4.4 Solving Trigonometric Equations Using the Quadratic Formula
• 3.4.5 Solving Word Problems Involving Trigonometric Equations
• 3.5 The Sum and Difference Identities
• 3.5.1 Identities for Sums and Differences of Angles
• 3.5.2 Using Sum and Difference Identities
• 3.5.3 Using Sum and Difference Identities to Simplify an Expression
• 3.6 Double-Angle Identities
• 3.6.1 Confirming a Double-Angle Identity
• 3.6.2 Using Double-Angle Identities
• 3.6.3 Solving Word Problems Involving Multiple-Angle Identities
• 3.7.1 Using a Cofunction Identity
• 3.7.2 Using a Power-Reducing Identity
• 3.7.3 Using Half-Angle Identities to Solve a Trigonometric Equation
#### 4. Applications of Trigonometry
• 4.1 The Law of Sines
• 4.1.1 The Law of Sines
• 4.1.2 Solving a Triangle Given Two Sides and One Angle
• 4.1.3 Solving a Triangle (SAS): Another Example
• 4.1.4 The Law of Sines: An Application
• 4.2 The Law of Cosines
• 4.2.1 The Law of Cosines
• 4.2.2 The Law of Cosines (SSS)
• 4.2.3 The Law of Cosines (SAS): An Application
• 4.2.4 Heron's Formula
• 4.3 Vector Basics
• 4.3.1 An Introduction to Vectors
• 4.3.2 Finding the Magnitude and Direction of a Vector
• 4.3.3 Vector Addition and Scalar Multiplication
• 4.4 Components of Vectors and Unit Vectors
• 4.4.1 Finding the Components of a Vector
• 4.4.2 Finding a Unit Vector
• 4.4.3 Solving Word Problems Involving Velocity or Forces
#### 5. Complex Numbers and Polar Coordinates
• 5.1 Complex Numbers
• 5.1.1 Introducing and Writing Complex Numbers
• 5.1.2 Rewriting Powers of i
• 5.1.3 Adding and Subtracting Complex Numbers
• 5.1.4 Multiplying Complex Numbers
• 5.1.5 Dividing Complex Numbers
• 5.2 Complex Numbers in Trigonometric Form
• 5.2.1 Graphing a Complex Number and Finding Its Absolute Value
• 5.2.2 Expressing a Complex Number in Trigonometric or Polar Form
• 5.2.3 Multiplying and Dividing Complex Numbers in Trigonometric or Polar Form
• 5.3 Powers and Roots of Complex Numbers
• 5.3.1 Using DeMoivre's Theorem to Raise a Complex Number to a Power
• 5.3.2 Roots of Complex Numbers
• 5.3.3 More Roots of Complex Numbers
• 5.3.4 Roots of Unity
• 5.4 Polar Coordinates
• 5.4.1 An Introduction to Polar Coordinates
• 5.4.2 Converting between Polar and Rectangular Coordinates
• 5.4.3 Converting between Polar and Rectangular Equations
• 5.4.4 Graphing Simple Polar Equations
• 5.4.5 Graphing Special Polar Equations
#### 6. Exponential and Logarithmic Functions
• 6.1 Exponential Functions
• 6.1.1 An Introduction to Exponential Functions
• 6.1.2 Graphing Exponential Functions: Useful Patterns
• 6.1.3 Graphing Exponential Functions: More Examples
• 6.2 Applying Exponential Functions
• 6.2.1 Using Properties of Exponents to Solve Exponential Equations
• 6.2.2 Finding Present Value and Future Value
• 6.2.3 Finding an Interest Rate to Match Given Goals
• 6.3 The Number e
• 6.3.1 e
• 6.3.2 Applying Exponential Functions
• 6.4 Logarithmic Functions
• 6.4.1 An Introduction to Logarithmic Functions
• 6.4.2 Converting between Exponential and Logarithmic Functions
• 6.5 Solving Logarithmic Functions
• 6.5.1 Finding the Value of a Logarithmic Function
• 6.5.2 Solving for x in Logarithmic Equations
• 6.5.3 Graphing Logarithmic Functions
• 6.5.4 Matching Logarithmic Functions with Their Graphs
• 6.6 Properties of Logarithms
• 6.6.1 Properties of Logarithms
• 6.6.2 Expanding a Logarithmic Expression Using Properties
• 6.6.3 Combining Logarithmic Expressions
• 6.7 Evaluating Logarithms
• 6.7.1 Evaluating Logarithmic Functions Using a Calculator
• 6.7.2 Using the Change of Base Formula
• 6.8 Applying Logarithmic Functions
• 6.8.1 The Richter Scale
• 6.8.2 The Distance Modulus Formula
• 6.9 Solving Exponential and Logarithmic Equations
• 6.9.1 Solving Exponential Equations
• 6.9.2 Solving Logarithmic Equations
• 6.9.3 Solving Equations with Logarithmic Exponents
• 6.10 Applying Exponents and Logarithms
• 6.10.1 Compound Interest
• 6.10.2 Predicting Change
• 6.11 Word Problems Involving Exponential Growth and Decay
• 6.11.1 An Introduction to Exponential Growth and Decay
• 6.11.2 Half-Life
• 6.11.3 Newton's Law of Cooling
• 6.11.4 Continuously Compounded Interest
#### 7. Conic Sections
• 7.1 Conic Sections: Parabolas
• 7.1.1 An Introduction to Conic Sections
• 7.1.2 An Introduction to Parabolas
• 7.1.3 Determining Information about a Parabola from Its Equation
• 7.1.4 Writing an Equation for a Parabola
• 7.2 Conic Sections: Ellipses
• 7.2.1 An Introduction to Ellipses
• 7.2.2 Finding the Equation for an Ellipse
• 7.2.3 Applying Ellipses: Satellites
• 7.2.4 The Eccentricity of an Ellipse
• 7.3 Conic Sections: Hyperbolas
• 7.3.1 An Introduction to Hyperbolas
• 7.3.2 Finding the Equation for a Hyperbola
• 7.4 Conic Sections
• 7.4.1 Identifying a Conic
• 7.4.2 Name That Conic
• 7.4.3 Rotation of Axes
• 7.4.4 Rotating Conics
Edward Burger is an award-winning professor with a passion for teaching mathematics.
Since 2013, Edward Burger has been President of Southwestern University, a top-ranked liberal arts college in Georgetown, Texas. Previously, he was Professor of Mathematics at Williams College. Dr. Burger earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
## An Introduction to NURBS
The latest from a computer graphics pioneer, An Introduction to NURBS is the ideal resource for anyone seeking a theoretical and practical understanding of these very important curves and surfaces. Beginning with Bézier curves, the book develops a lucid explanation of NURBS curves, then does the same for surfaces, consistently stressing important shape design properties and the capabilities of each curve and surface type. Throughout, it relies heavily on illustrations and fully worked examples that will help you grasp key NURBS concepts and deftly apply them in your work. Supplementing the lucid, point-by-point instructions are illuminating accounts of the history of NURBS, written by some of its most prominent figures.
Whether you write your own code or simply want deeper insight into how your computer graphics application works, An Introduction to NURBS will enhance and extend your knowledge to a degree unmatched by any other resource.
## 7.1 Integration by Parts
Introduction: In this lesson we will learn to integrate functions using the integration by parts technique. Thus far, the only integration technique you have seen is u-substitution, which works by “undoing” the Chain Rule. In this lesson, we will learn that the integration by parts technique works by “undoing” the Product Rule.
Objectives: After this lesson you should be able to:
Video & Notes: Fill out the note sheet for this lesson (7-1-Integration-by-Parts) as you watch the video. If you prefer, you could read Section 7.1 of your textbook and work out the problems on the notes on your own as practice. Remember, notes must be uploaded to Blackboard weekly for a grade! If for some reason the video below does not load you can access it on YouTube here.
Homework: Go to WebAssign and complete the “7.1 Integration by Parts” assignment.
Practice Problems: # 5, 9, 17, 19, 23, 27, 31, 35, 37, 39
## Conic Sections - Circles
A series of free, online video lessons with examples and solutions to help Algebra students learn about circle conic sections.
The following diagram shows how to derive the equation of a circle, (x − h)² + (y − k)² = r², using the Pythagorean Theorem and the distance formula. Scroll down the page for examples and solutions.
### Circle Conic Section
When working with circle conic sections, we can derive the equation of a circle by using coordinates and the distance formula.
The equation of a circle is (x − h)² + (y − k)² = r², where r is the radius, (h, k) is the center of the circle, and (x, y) is any point on the circle.
The variables h and k represent horizontal or vertical shifts in the circle graph.
Examples:
1. Find the center and the radius
a) x² + (y + 2)² = 121
b) (x + 5)² + (y − 10)² = 9
2. Find the equation of the circle with
a) center (-11, -8) and radius 4
b) center (2, -5) and a point on the circle (-7, -1)
### How To Graph A Circle In Standard Form And General Form?
Identify the equation of a circle.
Write the standard form of a circle from general form.
Graph a circle.
A circle is the set of points (x,y) which are a fixed distance r, the radius, away from a fixed point (h,k), the center.
(x − h)² + (y − k)² = r²
Examples:
1. Graph the circle
a) (x − 3)² + (y + 2)² = 16
b) x² + (y − 1)² = 4
2. Write in standard form and then graph
2x² + 2y² − 12x + 8y − 24 = 0
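A minimal sketch (mine, not part of the lesson) of the completing-the-square step used in Example 2: divide through by the common coefficient, then read off the center and radius.

```python
# Rewrite Ax^2 + Ay^2 + Dx + Ey + F = 0 (general form, equal x^2 and y^2
# coefficients) as (x - h)^2 + (y - k)^2 = r^2 by completing the square.
def general_to_standard(A, D, E, F):
    d, e, f = D / A, E / A, F / A      # normalize so x^2 and y^2 have coefficient 1
    h, k = -d / 2, -e / 2              # complete the square in x and in y
    r_squared = h**2 + k**2 - f
    if r_squared <= 0:
        raise ValueError("not a real circle")
    return h, k, r_squared**0.5

# Example 2 above: 2x^2 + 2y^2 - 12x + 8y - 24 = 0  ->  (x - 3)^2 + (y + 2)^2 = 25
print(general_to_standard(2, -12, 8, -24))   # (3.0, -2.0, 5.0)
```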
#### Conic Sections
Introduction to Circles
Understand the equation of a circle
#### Graph And Write Equations Of Circles
Example:
Graph the equation
(x − 1)² + (y + 2)² = 9
## ArcGIS ( Desktop, Server) 10.7.1 Belge Lambert 1972 Equivalency Patch
This patch adds the ability to handle both versions of the Belge Lambert 1972 projected coordinate systems as if they are equivalent.
Esri® announces the ArcGIS (Desktop, Server) 10.7.1 Belge Lambert 1972 Equivalency Patch. Esri software had been using a very early definition of the Belge Lambert 1972 projected coordinate system, EPSG:31370, whose false easting and false northing values are more precise than the current official NGI/IGN definition; the values differ by 0.44 mm and 0.2 mm. For ArcMap 10.7.0, Esri fixed the definition and created a second definition for the previous one. Existing data, particularly web services, lost their WKID information, causing data-handling problems. This patch ensures that both definitions are treated as equivalent: either one will be identified as EPSG:31370. It deals specifically with the issue listed below under Issues Addressed with this patch.
### Issues Addressed with this patch
• BUG-000130263 - Handle the old and new versions of the Belge Lambert 1972 projected coordinate systems as if they are equivalent.
### Installing this patch on Windows
#### Installation Steps:
The ArcGIS product listed in the table must be installed on your system before you can install a patch. Each patch setup is specific to the ArcGIS product in the list. To determine which products are installed on your system, please see the How to identify which ArcGIS products are installed section. Esri recommends that you install the patch for each product that is on your system.
NOTE: If double clicking on the MSP file does not start the setup installation, you can start the setup installation manually by using the following command:
### Installing this patch on Linux
#### Installation Steps:
Complete the following install steps as the ArcGIS Install owner. The Install owner is the owner of the arcgis folder.
The ArcGIS product listed in the table must be installed on your system before you can install a patch. Each patch setup is specific to the ArcGIS product in the list. To determine which products are installed on your system, please see the How to identify which ArcGIS products are installed section. Esri recommends that you install the patch for each product that is on your system.
ArcGIS Server 10.7.1: ArcGIS-1071-S-BLE-Patch-linux.tar (MD5 checksum: BB9642CB29AD2F683157DDBC24C287F4)
### Uninstalling this patch on Windows
To uninstall this patch on Windows, open the Windows Control Panel and navigate to installed programs. Make sure that "View installed updates" (upper left side of the Programs and Features dialog) is active. Select the patch name from the programs list and click Uninstall to remove the patch.
### Uninstalling this patch on Linux
To remove this patch on versions 10.7 and higher, navigate to the /tmp directory and run the following script as the ArcGIS Install owner:
The removepatch.sh script allows you to uninstall previously installed patches or hot fixes. Use the -s status flag to get the list of installed patches or hot fixes ordered by date. Use the -q flag to remove patches or hot fixes in reverse chronological order by the date they were installed. Type removepatch -h for usage help.
# Clifford theory for tensor categories
### Abstract
A graded tensor category over a group $G$ will be called a strongly $G$‐graded tensor category if every homogeneous component has at least one multiplicatively invertible object. Our main result is a description of the module categories over a strongly $G$‐graded tensor category as induced from module categories over tensor subcategories associated with the subgroups of $G$.
Publication
J. Lond. Math. Soc. (2) 83 (2011), no. 1, 57–78.
##### César Galindo
###### Associate Professor of Mathematics
My research interests include representation theory, category theory, and their applications to mathematical physics.
# the carbon footprint
Lizzy D’Alonzo
Mr. Deforest
Math 33, section 001
W&R1 final Draft
The Carbon Footprint:
The earth today is under immense strain from the very people who inhabit it. In the modern world, we are dealing with major environmental issues such as climate change and pollution. Man-made climate change, also known as global warming, is caused by the release of various greenhouse gases into the atmosphere. These man-made greenhouse gases are emitted whenever we burn fossil fuels. Although these gases can be beneficial to plants, they are harmful to humans because they warm the planet, which has many devastating effects. A carbon footprint is defined as the total set of greenhouse gas emissions caused by an individual. Thus, when calculating your own carbon footprint, you can essentially see your individual impact on the earth. The average carbon footprint for a U.S. resident is twenty metric tons, but because I am a college student, I have ways to lower my carbon footprint (mine is 15.18 metric tons). College students have a significantly lower carbon footprint than the average American resident due to their modes of transportation and style of living.
Every breath we take has an impact on our carbon footprint, and one of the biggest generators of a carbon footprint is how you travel. Vehicles used for transportation create large amounts of emissions that are released into the air; this not only creates a bigger carbon footprint for people who travel by vehicle, but the emissions also degrade the air everyone has to breathe. However, people who travel by bicycle or on foot neutralize their carbon footprint. In addition, the use of public transportation seriously diminishes the number of vehicles on the road. National averages demonstrate that public transportation produces significantly lower greenhouse gas emissions per passenger mile. Heavy rail transportation such as subways and metros produces 76% less greenhouse gas emissions than the single-passenger vehicle, and bus transit produces 33% less; it makes sense that more people in one mode of transportation is better for the earth (FTA). The equations below compare estimated CO2 emissions per passenger mile, at average occupancy, for autos versus public transportation. By adding up the average emissions of the different types of individual autos, and adding up the average emissions of the different types of public transportation, you can see the difference in the combined totals. You can click here and scroll to the third page to view this. This is where my carbon footprint is lower than the average American's. Because I am a college student, I do not have a car of my own, and my only means of transportation are walking and public transportation. College students in general show a huge interest in public transportation.
(Auto) 0.99+0.85+0.59+0.24=2.64pounds of CO2
(Public Transportation) 0.64+0.18+0.23+0.11+0.36+0.14+0.33+0.10+0.22+0.12=2.43pounds of CO2
Another characteristic of college students that diminishes one's carbon footprint is the popular use of walking and riding bicycles as modes of transportation. According to a transit survey at UC Davis, 47% of students travel around campus on their bikes (Kitaura). Outside of a college town, millions of American citizens drive alone to and from work. College students have everything they need in close vicinity, which makes it much easier to travel in these healthier ways; most adults do not have the same privilege. All over the country, students use walking and public transportation as their main ways of getting around, and this drastically improves their carbon footprint. Without a doubt, college students have a cleaner, more energy-efficient way of traveling.
Another way Penn State helps students lower their carbon footprint is through how students choose to live. For example, here at Penn State University Park there are reusable containers that all students can use when getting meals. Reusable containers work like current carryout containers: you can easily pick them up from any cashier at any dining hall and use them for your food. Also, when printing on campus there is an option to cut paper waste: at each campus printer you can choose double-sided printing for multiple pages, and you can change your default margin settings to save 5% more paper. Likewise, in the dorms where students live, all students have easy access to compost bins for their trash. Every dorm has an organized trash disposal system to increase correct recycling. According to the PSU Collegian, last year Penn State composted 1,300 tons. Composting saves Penn State $75,000 each year by not having to transport waste to a landfill, and Penn State also uses compost to make mulch, which saves an additional $100,000 by not having to purchase mulch for the campus. When it comes to recycling, Penn State has a diversion index of 89/64, which means that 89% of the waste can be recycled and 64% actually is recycled (Johnson).
Penn State also has a recycling program run by the STATERS (Students Taking Action To Encourage Recycling). These students hand out blue bags at tailgates for recycling collection. This specific program lowers the university's landfill costs, and proceeds from the sale of football game recycling bags are donated to the local United Way. Also, all around campus you may see one of the 116 "Green Teams," which are groups of faculty, staff, and volunteers who take action in creating a more innovative and sustainable campus. As you can see from these examples, college campuses have a massive number of people working to better the ecosystem and to make it easier for all students to do the same. Even though I am not part of any specific recycling group here on campus, I still find myself recycling because it is just so easy. Average Americans do not have the same opportunities; therefore, they are more likely to make the "easier" choice of just throwing everything in the trash. On a college campus, the "easier" choice is the one that betters the ecosystem.
Also, in my college dorm, the lights automatically turn off when no people are present in the hallways or bathrooms. There are signs all around the dorms reminding people to turn the water off while brushing their teeth and to turn off their lights when leaving their rooms. These little gestures have a huge impact on the people who live in the dorms, changing small habits to help the earth. Colleges also take action and educate people on sustainability; for example, our own math class informs and teaches its students about how the earth works and how we can better it, not to mention that you can major in environmental studies. These examples alone prove again that there are countless opportunities set up for college students to better their own impact on the earth.
When in college, whether you are involved in helping the environment or not, you encounter ways to decrease your carbon footprint even if you do not realize it. I cannot say the same for the average American. Citizens living in other settings, such as cities or suburbs, are not introduced to or handed ways to better their footprint; I believe this is the biggest reason why college students have a smaller carbon footprint. Here on campus it is so easy to make the healthier decision without even realizing that you are bettering the ecosystem. There is certainly more effort involved for people outside of college campuses, and frankly, people are too lazy or uneducated to make that effort. But maybe this is a good thing. It is important to take into account that every college student will at some point finish college. They will then be put into the real world, and the habits of sustainability that they practiced every day in college will be tested. Wouldn't we want our future generation to be specifically taught how to better our world? Once in the real world, it will no longer be as easy to be as environmentally friendly as it was in college, but hopefully students will still carry out the very acts they were conditioned to do in school. As the examples and statistics above show, when it comes to the amount of greenhouse gases emitted, college students undoubtedly have a smaller carbon footprint than the average citizen. Being a college student, I firmly believe that I have more opportunities laid out for me to better my own carbon footprint. In conclusion, because I am a college student, my carbon footprint is smaller than the average person's.
Bibliography
ISCFC: Calculate Your Footprint. http://web.stanford.edu/group/inquiry2insight/cgi-bin/i2sea-r2b/i2s.php?page=fpcalc
"Green Teams Program." Sustainability.psu.edu. http://sustainability.psu.edu/green-teams
"Recycling and Composting." Sustainability.psu.edu. http://www.sustainability.psu.edu/recycling-and-composting
Federal Transit Administration. "Public Transportation's Role in Responding to Climate Change." https://www.transit.dot.gov/sites/fta.dot.gov/files/docs/PublicTransportationsRoleInRespondingToClimateChange2010.pdf
American Public Transportation Association. "A Profile of Public Transportation Passenger Demographics and Travel Characteristics Reported in On-Board Surveys." http://www.apta.com/resources/statistics/Documents/transit_passenger_characteristics_text_5_29_2007.pdf
"Calculate Your Carbon Footprint." Conservation International. http://www.conservation.org/act/carboncalculator/calculate-your-carbon-footprint.aspx#/
## I'm stumped by a simple geometry problem, related to finding a circle given 3 points on it
I've been wrestling with a small part of a larger problem; what is really frustrating is that 15 years ago, when I was at high school, I could probably have figured this out in my sleep. I'm hoping someone can refresh my memory:
Ok, so I have a circle that I know 3 points lie on, (0, 0), (1 , 1) and (a, b) - where a and b are both between 0 and 1 inclusive. Also we know that a != b (since then it would be a line, not a circle).
I want to find a point on the circle (x,y) where x and y are both between 0 and 1, and where (bonus!) x is a known value.
In other words, given a, b, and x, what is y?
edit: Here is what I tried:
We know that (where (h, k) is the center of the circle):
• (0 − h)² + (0 − k)² = r²
• (1 − h)² + (1 − k)² = r²
• (a − h)² + (b − k)² = r²
But there is where I get lost (even with the help of Maple) - what do I do next?
7 points · 8 years ago
Well, for doing it purely algebraically (i.e., the messy but straightforward way) you are on the right track. You have got, for the centre (h,k) of the circle, the three equations:
1. h² + k² = r²
2. (h-1)² + (k-1)² = r²
3. (h-a)² + (k-b)² = r²
Expanding the squares in the second equation and using the first one, you get h+k=1 (try it). [This makes sense, geometrically: the centre of any circle passing through (0,0) and (1,1) lies on their perpendicular bisector, the x+y=1 line.]
Then using the first in the third equation reduces it to 2ah+2bk=a²+b². Now you have two linear equations — h+k=1 and this one — in h and k, which you can solve to get k = (a²+b²-2a)/(2b-2a) and h = (a²+b²-2b)/(2a-2b). Now you know the centre of the circle.
Finally, you want to find y such that (x,y) lies on the circle. This means
(4). (x-h)²+(y-k)²=r².
This reduces (again using the first equation) to x²-2xh+y²-2ky=0, which is simply a quadratic equation in y. Its solutions are y = k ± √(k²-x²+2xh). (Where k and h you already know from before.)
When k²-x²+2xh is negative, there is no (real) y, because the vertical line through x does not intersect the circle. When it is strictly positive, there are two solutions, because the vertical line intersects the circle at two points.
If you think a little more before setting pen to paper, you can probably get a more elegant solution using geometric facts, but I haven't tried it. :-)
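Not part of the thread: a minimal Python translation of the algebra in this comment, returning both intersection points (the k ± √… that the follow-up below turns out to hinge on).

```python
# Circle through (0,0), (1,1), (a,b): centre (h,k) from the two linear
# equations above, then the two y values on the circle at a given x.
import math

def circle_y(x, a, b):
    k = (a*a + b*b - 2*a) / (2*b - 2*a)
    h = (a*a + b*b - 2*b) / (2*a - 2*b)
    disc = k*k - x*x + 2*x*h
    if disc < 0:
        return []                    # the vertical line at x misses the circle
    root = math.sqrt(disc)
    return [k + root, k - root]      # both intersection points

print(circle_y(0.01, 0.2, 0.3))      # ≈ [0.0173, -2.717]
```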
[deleted]
0 points · 8 years ago
I tried it as follows (in Scala):
def k(a : Double, b : Double) = (a*a+b*b-2.0*a)/(2.0*b-2.0*a)
def h(a : Double, b : Double) = k(b, a)
def y(x : Double, a : Double, b : Double) = Math.sqrt(k(a, b)*k(a, b) - x*x + 2*x*h(a, b))
However, when I try:
scala> y(0.01, 0.2, 0.3)
res12: Double = 1.367260033790208
This seems wrong, I would have expected something slightly higher than 0.01 - right?
1 point · 8 years ago · edited 8 years ago
Did you try following the calculation yourself? If so you would have realised your mistake.
Your y is √(k²-x²+2xh) which is not what I wrote: k±√(k²-x²+2xh), the two roots of the quadratic equation.
[deleted]
1 point · 8 years ago · edited 8 years ago
Doh, my mistake, it works now - thanks for your help!
2 points · 8 years ago
[deleted]
1 point · 8 years ago · edited 8 years ago
Thanks, I'd already tried that. I've edited to explain.
3 points · 8 years ago
Let's say you have defined a circle with radius r, located at Cartesian coordinate xa,ya. Are you asking what the y value will be for a given x argument?
Think about it geometrically -- the x value can be thought of as a vertical line that may or may not cross the circle. If it does cross the circle (because |x-xa| <= r), then there are two y values lying on the circle corresponding to the x argument, which will be obvious from the fact that the x vertical line intersects the circle at two locations.
1. Establish the x coordinate within the circle: cx = |x-xa|
2. If cx <= r, compute the corresponding y value WRT the circle: cy = sqrt(r^2-cx^2)
3. Compute the two resulting y values: {ya+cy, ya-cy}
It helps to think about this kind of problem geometrically first, before trying to process it algebraically.
1 point · 8 years ago
Maybe I am missing something, but why does the value of y have to be less than 1?
If we take the point (1,1) and move along the circumference of the circle so that x = 0.99, would not the value of Y go up?
[deleted]
1 point · 8 years ago
Ah, ok - you are correct - if (a,b) were, for example (1,0) then the circle could extend outside the (0,0,1,1) box. I'll edit to clarify.
1 point · 8 years ago
Hint: Consider the synthetic construction - Given points A, B, C, the center of the circle that A,B,C lie on is the intersection point of the perpendicular bisectors of segments AB and BC.
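A small sketch (also not from the thread) of the construction this hint points at: the circumcenter of A, B, C, i.e. the intersection of the perpendicular bisectors of AB and BC, written in the usual closed form, plus the radius.

```python
# Circumscribed circle of three non-collinear points.
import math

def circumcircle(A, B, C):
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    if d == 0:
        raise ValueError("points are collinear; no circumscribed circle")
    ux = ((ax**2 + ay**2)*(by - cy) + (bx**2 + by**2)*(cy - ay) + (cx**2 + cy**2)*(ay - by)) / d
    uy = ((ax**2 + ay**2)*(cx - bx) + (bx**2 + by**2)*(ax - cx) + (cx**2 + cy**2)*(bx - ax)) / d
    r = math.hypot(ax - ux, ay - uy)
    return (ux, uy), r

print(circumcircle((0, 0), (1, 1), (0.2, 0.3)))   # centre (2.35, -1.35), r ≈ 2.71
```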
# Integrating to Find Particular Solution - Question 2
• Apr 25th 2011, 02:27 PM
sparky
Integrating to Find Particular Solution - Question 2
The answer to the following question is C = -3.75. Please tell me where I went wrong:
Question: y = integral (squareroot 3x - cuberoot 9x+3x)dx, where x = 3 and y =9
Let U = 9x + 3x
du = 9 + 3 dx
du = 12 dx
y = integral [(3x)^2] - integral (U)^3
y = [(3x^3)/3] - [(U^4)/4] + C
y = x^3 - [(9x+3x)^4]/4 + C
9 = 3^3 - [(9{3}+3{3})^4]/4 + C
9 = 27 - [(27+9)^4]/4 + C
9 = 27 - 1679616/4 + C
C = 419886 (According to my teacher, the answer is C = -3.75)
• Apr 25th 2011, 03:15 PM
topsquark
Quote:
Originally Posted by sparky
The answer to the following question is C = -3.75. Please tell me where I went wrong:
Question: y = integral (squareroot 3x - cuberoot 9x+3x)dx, where x = 3 and y =9
Let U = 9x + 3x
du = 9 + 3 dx
du = 12 dx
y = integral [(3x)^2] - integral (U)^3
y = [(3x^3)/3] - [(U^4)/4] + C
y = x^3 - [(9x+3x)^4]/4 + C
9 = 3^3 - [(9{3}+3{3})^4]/4 + C
9 = 27 - [(27+9)^4]/4 + C
9 = 27 - 1679616/4 + C
C = 419886 (According to my teacher, the answer is C = -3.75)
[LaTeX image: the original integral, ∫(√(3x) − ∛(9x + 3x)) dx]
Then your first term changes to
[LaTeX image: ∫(3x)² dx − ...]
(Edit: You did the same thing with the cube root.)
Also, I am suspicious about that cube root term. Is there a typo? I can't think of a reason why it wasn't just given as 12x in the first place.
-Dan
• Apr 25th 2011, 04:02 PM
sparky
That's the question: y = integral (square root of 3x minus cube root of (3x + 9x))
Maybe the 3x + 9x was put there to confuse me. Where did I go wrong in my working?
• Apr 25th 2011, 04:11 PM
topsquark
Quote:
Originally Posted by sparky
That's the question: y = integral (square root of 3x minus cube root of (3x + 9x))
Maybe the 3x + 9x was put there to confuse me. Where did I go wrong in my working?
I have not completed the problem myself, but as I said, you changed √(3x) into (3x)² and ∛(3x + 9x) into (3x + 9x)³.
Try fixing that and see what happens.
-Dan
• Apr 25th 2011, 05:00 PM
sparky
Ok, I tried it again and got it wrong again:
y = integral (square root of 3x minus cuberoot of (9x + 3x))dx
y = integral of (3x)^2 - integral of (9x)^3 + integral of (3x)^3
y = [(3x)^3]/3 - [(9x)^4]/4 + [(3x)^4]/4 + C
y = x^3 - [(12x)^4]/4 + C
y = x^3 - 3x^4 + C
9 = 3^3 - (3{3}^4) + C
9 = 27 - 243 + C
C = 225
What am I doing wrong?
• Apr 25th 2011, 05:08 PM
topsquark
Quote:
Originally Posted by sparky
Ok, I tried it again and got it wrong again:
y = integral (square root of 3x minus cuberoot of (9x + 3x))dx
y = integral of (3x)^2 - integral of (9x)^3 + integral of (3x)^3
What am I doing wrong?
[LaTeX image: one reading of the integral]
or should it be
[LaTeX image: a different reading of the integral]
They are NOT the same!!
-Dan
Edit: And
[LaTeX image: ... (9x)³ + (3x)³]
[LaTeX image: (a + b)³ = a³ + 3a²b + 3ab² + b³]
Edit II:
I suspect that the integral is written incorrectly. Neither form of the problem I listed above gives the stated answer of C = -3.75. The most likely source of error is the 9x + 3x term; I suspect it has been miscopied, either by you or by the source you got the problem from.
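Not part of the thread: a quick SymPy sketch of the checking recipe described above — pick a reading of the integrand, integrate it, and solve for C from y(3) = 9. The reading below, √(3x) − ∛(12x), is only an assumed example; it gives C ≈ 10.4, not -3.75, consistent with the suspicion that the problem was miscopied.

```python
# Integrate an assumed reading of the integrand and solve for the constant C.
import sympy as sp

x, C = sp.symbols('x C', positive=True)
integrand = sp.sqrt(3*x) - (12*x)**sp.Rational(1, 3)   # assumed reading of the problem
y = sp.integrate(integrand, x) + C
C_value = sp.solve(sp.Eq(y.subs(x, 3), 9), C)[0]
print(C_value, float(C_value))                          # ≈ 10.43, not -3.75
```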
# Creating your own cosmological likelihood¶
Creating your own cosmological likelihood with cobaya is super simple. You can either define a likelihood class (see Creating your own cosmological likelihood class), or simply create a likelihood function:
1. Define your likelihood as a function that takes some parameters (experimental errors, foregrounds, etc, but not theory parameters) and returns a log-likelihood.
2. Take note of the observables and other cosmological quantities that you will need to request from the theory code (see must_provide()). [If you cannot find the observable that you need, do let us know!]
3. When declaring the function as a likelihood in cobaya’s input, add a field requires and assign to it all the cosmological requirements as a dictionary (e.g. {'Cl': {'tt': 2500}}).
4. Add to your likelihood function definition a keyword argument _self=None. At run time, you can call the get_[...] methods of _self.provider to get the quantities that you requested, evaluated at the current parameter values, e.g. _self.provider.get_Cl() in the example below.
5. If you wish to define derived parameters, do it as for general external likelihoods (example here): add an output_params field to the likelihood info listing your derived parameters, and have your function return a tuple (log-likelihood, {derived parameters dictionary}).
## Example: your own CMB experiment!¶
To illustrate how to create a cosmological likelihood in cobaya, we apply the procedure above to a fictitious WMAP-era full-sky CMB TT experiment.
First of all, we will need to simulate the fictitious power spectrum of the fictitious sky that we would measure with our fictitious experiment, once we have accounted for noise and beam corrections. To do that, we choose a set of true cosmological parameters in the sky, and then use a model to compute the corresponding power spectrum, up to some reasonable $$\ell_\mathrm{max}$$ (see Using the model wrapper).
fiducial_params = {
'ombh2': 0.022, 'omch2': 0.12, 'H0': 68, 'tau': 0.07,
'As': 2.2e-9, 'ns': 0.96,
'mnu': 0.06, 'nnu': 3.046}
l_max = 1000
packages_path = '/path/to/your/packages'
info_fiducial = {
'params': fiducial_params,
'likelihood': {'one': None},
'theory': {'camb': {"extra_args": {"num_massive_neutrinos": 1}}},
'packages_path': packages_path}
from cobaya.model import get_model
model_fiducial = get_model(info_fiducial)
# Declare our desired theory product
# (there is no cosmological likelihood doing it for us)
# Compute and extract the CMB power spectrum
# (In muK^2, without l(l+1)/(2pi) factor)
# notice the empty dictionary below: all parameters are fixed
model_fiducial.logposterior({})
Cls = model_fiducial.provider.get_Cl(ell_factor=False, units="muK2")
# Our fiducial power spectrum
Cl_est = Cls['tt'][:l_max + 1]
Now, let us define the likelihood. The arguments of the likelihood function will contain the parameters that we want to vary (arguments not mentioned later in an input info will be left to their default, e.g. beam_FWHM=0.25). As mentioned above, include a _self=None keyword from which you will get the requested quantities, and, since we want to define derived parameters, return them as a dictionary:
import numpy as np
import matplotlib.pyplot as plt
_do_plot = False
def my_like(
        # Parameters that we may sample over (or not)
        noise_std_pixel=20,  # muK
        beam_FWHM=0.25,      # deg
        # Keyword through which the cobaya likelihood instance will be passed.
        _self=None):
    # Noise spectrum, beam-corrected
    # (the pixel-area and noise-spectrum lines were lost in extraction and are
    #  reconstructed here assuming a standard white-noise + Gaussian-beam model)
    healpix_Nside = 512
    pixel_area_rad = np.pi / (3 * healpix_Nside**2)
    weight_per_solid_angle = (noise_std_pixel**2 * pixel_area_rad)**-1
    beam_sigma_rad = beam_FWHM / np.sqrt(8*np.log(2)) * np.pi/180.
    ells = np.arange(l_max+1)
    Nl = np.exp(ells*(ells+1)*beam_sigma_rad**2) / weight_per_solid_angle
    # Cl of the map: data + noise
    Cl_map = Cl_est + Nl
    # Request the Cl from the provider
    Cl_theo = _self.provider.get_Cl(ell_factor=False, units="muK2")['tt'][:l_max+1]
    Cl_map_theo = Cl_theo + Nl
    # Auxiliary plot
    if _do_plot:
        ell_factor = ells*(ells+1)/(2*np.pi)
        plt.figure()
        plt.plot(ells[2:], (Cl_theo*ell_factor)[2:], label=r'Theory $C_\ell$')
        plt.plot(ells[2:], (Cl_est*ell_factor)[2:], label=r'Estimated $C_\ell$', ls="--")
        plt.plot(ells[2:], (Cl_map*ell_factor)[2:], label=r'Map $C_\ell$')
        plt.plot(ells[2:], (Nl*ell_factor)[2:], label='Noise')
        plt.legend()
        plt.ylim([0, 6000])
        plt.savefig(_plot_name)
        plt.close()
    # ----------------
    # Compute the log-likelihood
    V = Cl_map[2:]/Cl_map_theo[2:]
    logp = np.sum((2*ells[2:]+1)*(-V/2 + 1/2.*np.log(V)))
    # Set our derived parameter
    derived = {'Map_Cl_at_500': Cl_map[500]}
    return logp, derived
Finally, let’s prepare its definition, including requirements (the CMB TT power spectrum) and listing available derived parameters, and use it to do some plots.
Since our imaginary experiment isn't very powerful, we will refrain from trying to estimate the full $$\Lambda$$ CDM parameter set. We may focus instead, e.g., on the primordial power spectrum parameters $$A_s$$ and $$n_s$$ as sampled parameters, assume that we magically have accurate values for the rest of the cosmological parameters, and marginalise over some uncertainty on the noise standard deviation.
We will define a model, use our likelihood’s plotter, and also plot a slice of the log likelihood along different $$A_s$$ values:
info = {
'params': {
# Fixed
'ombh2': 0.022, 'omch2': 0.12, 'H0': 68, 'tau': 0.07,
'mnu': 0.06, 'nnu': 3.046,
# Sampled
'As': {'prior': {'min': 1e-9, 'max': 4e-9}, 'latex': 'A_s'},
'ns': {'prior': {'min': 0.9, 'max': 1.1}, 'latex': 'n_s'},
'noise_std_pixel': {
'prior': {'dist': 'norm', 'loc': 20, 'scale': 5},
'latex': r'\sigma_\mathrm{pix}'},
# Derived
'Map_Cl_at_500': {'latex': r'C_{500,\,\mathrm{map}}'}},
'likelihood': {'my_cl_like': {
"external": my_like,
# Declare required quantities!
"requires": {'Cl': {'tt': l_max}},
# Declare derived parameters!
"output_params": ['Map_Cl_at_500']}},
'theory': {'camb': {'stop_at_error': True}},
'packages_path': packages_path}
from cobaya.model import get_model
model = get_model(info)
# Eval likelihood once with fid values and plot
_do_plot = True
_plot_name = "fiducial.png"
fiducial_params_w_noise = fiducial_params.copy()
fiducial_params_w_noise['noise_std_pixel'] = 20
model.logposterior(fiducial_params_w_noise)
_do_plot = False
# Plot of (propto) probability density
As = np.linspace(1e-9, 4e-9, 10)
loglikes = [model.loglike({'As': A, 'ns': 0.96, 'noise_std_pixel': 20})[0] for A in As]
plt.figure()
plt.plot(As, loglikes)
plt.gca().get_yaxis().set_visible(False)
plt.title(r"$\log P(A_s|\mathcal{D},\mathcal{M}) (+ \mathrm{const})$")
plt.xlabel(r"$A_s$")
plt.savefig("log_like.png")
plt.close()
Note
Troubleshooting:
If you are not getting the expected value for the likelihood, here are a couple of things that you can try:
• Set debug: True in the input, which will cause cobaya to print much more information, e.g. the parameter values that are passed to the prior, the theory code and the likelihood.
• If the likelihood evaluates to -inf (but the prior is finite) it probably means that either the theory code or the likelihood are failing; to display the error information of the theory code, add to it the stop_at_error: True option, as shown in the example input above, and the same for the likelihood, if it is likely to throw errors.
Now we can sample from this model’s posterior as explained in Manually passing this model to a sampler.
Alternatively, especially if you are planning to share your likelihood, you can put its definition (including the fiducial spectrum, maybe saved as a table separately) in a separate file, say my_like_file.py. In this case, to use it, use import_module([your_file_without_extension]).your_function, here:
# Contents of some .yaml input file
likelihood:
some_name:
external: import_module('my_like_file').my_like
# Declare required quantities!
requires: {Cl: {tt: 1000}}
# Declare derived parameters!
output_params: [Map_Cl_at_500]
# How do you find the reference angle for -515 degrees?
Dec 13, 2017
$205$ degrees
#### Explanation:
$- 515$ is not the simplest form for this angle.
See, $900$ degrees is the same angle as $180$ degrees, and one of the ways I like to check that that's true, if I have a calculator handy, is to plug in $\sin \left(180\right)$ and $\sin \left(900\right)$; if they are the same angle, they'll give me the same answer. And they do: $0$ for both of them.
Let's try and find the simplified angle for $- 515$. Now, that negative might look scary, but it just means that the angle was found by going clockwise, unlike the normal way where we move counter-clockwise.
So we'll ignore the negative sign for now. The first thing I do is imagine drawing a line all the way around the circle
That used up $360$ degrees. Now we have $155$ degrees left
Now I move another $90$ degrees
Now I have $65$ degrees left
Now we can look at the final picture and see how many degrees it would take to reach this angle, but going in a clockwise direction . That'll give us our reference angle
So, we'll need to go $180$ degrees. And we know that we used up $65$ degrees, so there are $25$ degrees left in that quadrant. $180 + 25$ gives us $205$
So, we are saying that $205$ and $- 515$ are two different ways to reference the same angle. Let's plug them into $\sin \left(\theta\right)$ and find out!
$\sin \left(205\right) = - 0.4226$
$\sin \left(- 515\right) = - 0.4226$
Yep! The reference angle for $- 515$ is $205$.
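A quick numeric check of that claim (not part of the original answer), using nothing beyond Python's math module:

```python
import math

# -515 and 205 are coterminal: they differ by a multiple of 360 degrees.
print(-515 % 360)                               # 205 (Python's % maps into [0, 360))
print(round(math.sin(math.radians(205)), 4))    # -0.4226
print(round(math.sin(math.radians(-515)), 4))   # -0.4226
```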
## G = (C2×C6)⋊8D20order 480 = 25·3·5
### 2nd semidirect product of C2×C6 and D20 acting via D20/D10=C2
Series: Derived Chief Lower central Upper central
Derived series C1 — C2×C30 — (C2×C6)⋊8D20
Chief series C1 — C5 — C15 — C30 — C2×C30 — D5×C2×C6 — C2×C3⋊D20 — (C2×C6)⋊8D20
Lower central C15 — C2×C30 — (C2×C6)⋊8D20
Upper central C1 — C22 — C23
Generators and relations for (C2×C6)⋊8D20
G = < a,b,c,d | a2=b6=c20=d2=1, ab=ba, cac-1=dad=ab3, cbc-1=dbd=b-1, dcd=c-1 >
Subgroups: 1500 in 260 conjugacy classes, 60 normal (24 characteristic)
C1, C2, C2 [×2], C2 [×7], C3, C4 [×3], C22, C22 [×2], C22 [×21], C5, S3, C6, C6 [×2], C6 [×6], C2×C4 [×3], D4 [×6], C23, C23 [×9], D5 [×5], C10, C10 [×2], C10 [×2], Dic3 [×3], D6 [×3], C2×C6, C2×C6 [×2], C2×C6 [×18], C15, C22⋊C4 [×3], C2×D4 [×3], C24, Dic5, C20 [×2], D10 [×4], D10 [×15], C2×C10, C2×C10 [×2], C2×C10 [×2], C2×Dic3 [×2], C2×Dic3, C3⋊D4 [×6], C22×S3, C22×C6, C22×C6 [×8], C3×D5 [×4], D15, C30, C30 [×2], C30 [×2], C22≀C2, D20 [×4], C2×Dic5, C5⋊D4 [×2], C2×C20 [×2], C22×D5 [×2], C22×D5 [×7], C22×C10, C6.D4, C6.D4 [×2], C2×C3⋊D4 [×3], C23×C6, C5×Dic3 [×2], Dic15, C6×D5 [×4], C6×D5 [×12], D30 [×3], C2×C30, C2×C30 [×2], C2×C30 [×2], D10⋊C4 [×2], C5×C22⋊C4, C2×D20 [×2], C2×C5⋊D4, C23×D5, C244S3, C3⋊D20 [×4], C10×Dic3 [×2], C2×Dic15, C157D4 [×2], D5×C2×C6 [×2], D5×C2×C6 [×6], C22×D15, C22×C30, C22⋊D20, D10⋊Dic3 [×2], C5×C6.D4, C2×C3⋊D20 [×2], C2×C157D4, D5×C22×C6, (C2×C6)⋊8D20
Quotients: C1, C2 [×7], C22 [×7], S3, D4 [×6], C23, D5, D6 [×3], C2×D4 [×3], D10 [×3], C3⋊D4 [×6], C22×S3, C22≀C2, D20 [×2], C22×D5, C2×C3⋊D4 [×3], S3×D5, C2×D20, D4×D5 [×2], C244S3, C3⋊D20 [×2], C2×S3×D5, C22⋊D20, C2×C3⋊D20, D5×C3⋊D4 [×2], (C2×C6)⋊8D20
Smallest permutation representation of (C2×C6)⋊8D20
On 120 points
Generators in S120
(1 11)(2 30)(3 13)(4 32)(5 15)(6 34)(7 17)(8 36)(9 19)(10 38)(12 40)(14 22)(16 24)(18 26)(20 28)(21 31)(23 33)(25 35)(27 37)(29 39)(41 93)(42 52)(43 95)(44 54)(45 97)(46 56)(47 99)(48 58)(49 81)(50 60)(51 83)(53 85)(55 87)(57 89)(59 91)(61 71)(62 116)(63 73)(64 118)(65 75)(66 120)(67 77)(68 102)(69 79)(70 104)(72 106)(74 108)(76 110)(78 112)(80 114)(82 92)(84 94)(86 96)(88 98)(90 100)(101 111)(103 113)(105 115)(107 117)(109 119)
(1 63 58 39 107 100)(2 81 108 40 59 64)(3 65 60 21 109 82)(4 83 110 22 41 66)(5 67 42 23 111 84)(6 85 112 24 43 68)(7 69 44 25 113 86)(8 87 114 26 45 70)(9 71 46 27 115 88)(10 89 116 28 47 72)(11 73 48 29 117 90)(12 91 118 30 49 74)(13 75 50 31 119 92)(14 93 120 32 51 76)(15 77 52 33 101 94)(16 95 102 34 53 78)(17 79 54 35 103 96)(18 97 104 36 55 80)(19 61 56 37 105 98)(20 99 106 38 57 62)
(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20)(21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100)(101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120)
(1 38)(2 37)(3 36)(4 35)(5 34)(6 33)(7 32)(8 31)(9 30)(10 29)(11 28)(12 27)(13 26)(14 25)(15 24)(16 23)(17 22)(18 21)(19 40)(20 39)(41 96)(42 95)(43 94)(44 93)(45 92)(46 91)(47 90)(48 89)(49 88)(50 87)(51 86)(52 85)(53 84)(54 83)(55 82)(56 81)(57 100)(58 99)(59 98)(60 97)(61 108)(62 107)(63 106)(64 105)(65 104)(66 103)(67 102)(68 101)(69 120)(70 119)(71 118)(72 117)(73 116)(74 115)(75 114)(76 113)(77 112)(78 111)(79 110)(80 109)
G:=sub<Sym(120)| (1,11)(2,30)(3,13)(4,32)(5,15)(6,34)(7,17)(8,36)(9,19)(10,38)(12,40)(14,22)(16,24)(18,26)(20,28)(21,31)(23,33)(25,35)(27,37)(29,39)(41,93)(42,52)(43,95)(44,54)(45,97)(46,56)(47,99)(48,58)(49,81)(50,60)(51,83)(53,85)(55,87)(57,89)(59,91)(61,71)(62,116)(63,73)(64,118)(65,75)(66,120)(67,77)(68,102)(69,79)(70,104)(72,106)(74,108)(76,110)(78,112)(80,114)(82,92)(84,94)(86,96)(88,98)(90,100)(101,111)(103,113)(105,115)(107,117)(109,119), (1,63,58,39,107,100)(2,81,108,40,59,64)(3,65,60,21,109,82)(4,83,110,22,41,66)(5,67,42,23,111,84)(6,85,112,24,43,68)(7,69,44,25,113,86)(8,87,114,26,45,70)(9,71,46,27,115,88)(10,89,116,28,47,72)(11,73,48,29,117,90)(12,91,118,30,49,74)(13,75,50,31,119,92)(14,93,120,32,51,76)(15,77,52,33,101,94)(16,95,102,34,53,78)(17,79,54,35,103,96)(18,97,104,36,55,80)(19,61,56,37,105,98)(20,99,106,38,57,62), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120), (1,38)(2,37)(3,36)(4,35)(5,34)(6,33)(7,32)(8,31)(9,30)(10,29)(11,28)(12,27)(13,26)(14,25)(15,24)(16,23)(17,22)(18,21)(19,40)(20,39)(41,96)(42,95)(43,94)(44,93)(45,92)(46,91)(47,90)(48,89)(49,88)(50,87)(51,86)(52,85)(53,84)(54,83)(55,82)(56,81)(57,100)(58,99)(59,98)(60,97)(61,108)(62,107)(63,106)(64,105)(65,104)(66,103)(67,102)(68,101)(69,120)(70,119)(71,118)(72,117)(73,116)(74,115)(75,114)(76,113)(77,112)(78,111)(79,110)(80,109)>;
G:=Group( (1,11)(2,30)(3,13)(4,32)(5,15)(6,34)(7,17)(8,36)(9,19)(10,38)(12,40)(14,22)(16,24)(18,26)(20,28)(21,31)(23,33)(25,35)(27,37)(29,39)(41,93)(42,52)(43,95)(44,54)(45,97)(46,56)(47,99)(48,58)(49,81)(50,60)(51,83)(53,85)(55,87)(57,89)(59,91)(61,71)(62,116)(63,73)(64,118)(65,75)(66,120)(67,77)(68,102)(69,79)(70,104)(72,106)(74,108)(76,110)(78,112)(80,114)(82,92)(84,94)(86,96)(88,98)(90,100)(101,111)(103,113)(105,115)(107,117)(109,119), (1,63,58,39,107,100)(2,81,108,40,59,64)(3,65,60,21,109,82)(4,83,110,22,41,66)(5,67,42,23,111,84)(6,85,112,24,43,68)(7,69,44,25,113,86)(8,87,114,26,45,70)(9,71,46,27,115,88)(10,89,116,28,47,72)(11,73,48,29,117,90)(12,91,118,30,49,74)(13,75,50,31,119,92)(14,93,120,32,51,76)(15,77,52,33,101,94)(16,95,102,34,53,78)(17,79,54,35,103,96)(18,97,104,36,55,80)(19,61,56,37,105,98)(20,99,106,38,57,62), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120), (1,38)(2,37)(3,36)(4,35)(5,34)(6,33)(7,32)(8,31)(9,30)(10,29)(11,28)(12,27)(13,26)(14,25)(15,24)(16,23)(17,22)(18,21)(19,40)(20,39)(41,96)(42,95)(43,94)(44,93)(45,92)(46,91)(47,90)(48,89)(49,88)(50,87)(51,86)(52,85)(53,84)(54,83)(55,82)(56,81)(57,100)(58,99)(59,98)(60,97)(61,108)(62,107)(63,106)(64,105)(65,104)(66,103)(67,102)(68,101)(69,120)(70,119)(71,118)(72,117)(73,116)(74,115)(75,114)(76,113)(77,112)(78,111)(79,110)(80,109) );
G=PermutationGroup([(1,11),(2,30),(3,13),(4,32),(5,15),(6,34),(7,17),(8,36),(9,19),(10,38),(12,40),(14,22),(16,24),(18,26),(20,28),(21,31),(23,33),(25,35),(27,37),(29,39),(41,93),(42,52),(43,95),(44,54),(45,97),(46,56),(47,99),(48,58),(49,81),(50,60),(51,83),(53,85),(55,87),(57,89),(59,91),(61,71),(62,116),(63,73),(64,118),(65,75),(66,120),(67,77),(68,102),(69,79),(70,104),(72,106),(74,108),(76,110),(78,112),(80,114),(82,92),(84,94),(86,96),(88,98),(90,100),(101,111),(103,113),(105,115),(107,117),(109,119)], [(1,63,58,39,107,100),(2,81,108,40,59,64),(3,65,60,21,109,82),(4,83,110,22,41,66),(5,67,42,23,111,84),(6,85,112,24,43,68),(7,69,44,25,113,86),(8,87,114,26,45,70),(9,71,46,27,115,88),(10,89,116,28,47,72),(11,73,48,29,117,90),(12,91,118,30,49,74),(13,75,50,31,119,92),(14,93,120,32,51,76),(15,77,52,33,101,94),(16,95,102,34,53,78),(17,79,54,35,103,96),(18,97,104,36,55,80),(19,61,56,37,105,98),(20,99,106,38,57,62)], [(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20),(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100),(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120)], [(1,38),(2,37),(3,36),(4,35),(5,34),(6,33),(7,32),(8,31),(9,30),(10,29),(11,28),(12,27),(13,26),(14,25),(15,24),(16,23),(17,22),(18,21),(19,40),(20,39),(41,96),(42,95),(43,94),(44,93),(45,92),(46,91),(47,90),(48,89),(49,88),(50,87),(51,86),(52,85),(53,84),(54,83),(55,82),(56,81),(57,100),(58,99),(59,98),(60,97),(61,108),(62,107),(63,106),(64,105),(65,104),(66,103),(67,102),(68,101),(69,120),(70,119),(71,118),(72,117),(73,116),(74,115),(75,114),(76,113),(77,112),(78,111),(79,110),(80,109)])
66 conjugacy classes
class 1 2A 2B 2C 2D 2E 2F 2G 2H 2I 2J 3 4A 4B 4C 5A 5B 6A ··· 6G 6H ··· 6O 10A ··· 10F 10G 10H 10I 10J 15A 15B 20A ··· 20H 30A ··· 30N order 1 2 2 2 2 2 2 2 2 2 2 3 4 4 4 5 5 6 ··· 6 6 ··· 6 10 ··· 10 10 10 10 10 15 15 20 ··· 20 30 ··· 30 size 1 1 1 1 2 2 10 10 10 10 60 2 12 12 60 2 2 2 ··· 2 10 ··· 10 2 ··· 2 4 4 4 4 4 4 12 ··· 12 4 ··· 4
66 irreducible representations
dim 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 4 4 4 4 4 type + + + + + + + + + + + + + + + + + + + image C1 C2 C2 C2 C2 C2 S3 D4 D4 D5 D6 D6 D10 D10 C3⋊D4 C3⋊D4 D20 S3×D5 D4×D5 C3⋊D20 C2×S3×D5 D5×C3⋊D4 kernel (C2×C6)⋊8D20 D10⋊Dic3 C5×C6.D4 C2×C3⋊D20 C2×C15⋊7D4 D5×C22×C6 C23×D5 C6×D5 C2×C30 C6.D4 C22×D5 C22×C10 C2×Dic3 C22×C6 D10 C2×C10 C2×C6 C23 C6 C22 C22 C2 # reps 1 2 1 2 1 1 1 4 2 2 2 1 4 2 8 4 8 2 4 4 2 8
Matrix representation of (C2×C6)⋊8D20 in GL4(𝔽61) generated by
60  0  0  0
 0 60  0  0
 0  0 60 29
 0  0  0  1
,
 1  0  0  0
 0  1  0  0
 0  0 14  5
 0  0  0 48
,
 7 32  0  0
29  2  0  0
 0  0 55  3
 0  0  8  6
,
 7 32  0  0
29 54  0  0
 0  0 55 49
 0  0  8  6
G:=sub<GL(4,GF(61))| [60,0,0,0,0,60,0,0,0,0,60,0,0,0,29,1],[1,0,0,0,0,1,0,0,0,0,14,0,0,0,5,48],[7,29,0,0,32,2,0,0,0,0,55,8,0,0,3,6],[7,29,0,0,32,54,0,0,0,0,55,8,0,0,49,6] >;
(C2×C6)⋊8D20 in GAP, Magma, Sage, TeX
(C_2\times C_6)\rtimes_8D_{20}
% in TeX
G:=Group("(C2xC6):8D20");
// GroupNames label
G:=SmallGroup(480,640);
// by ID
G=gap.SmallGroup(480,640);
# by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-3,-5,141,64,219,1356,18822]);
// Polycyclic
G:=Group<a,b,c,d|a^2=b^6=c^20=d^2=1,a*b=b*a,c*a*c^-1=d*a*d=a*b^3,c*b*c^-1=d*b*d=b^-1,d*c*d=c^-1>;
// generators/relations
## ARPA-E RANGE: $20M for robust transformational energy storage systems for EVs; 3x the range at 1/3 the cost
##### 17 February 2013
The US Department of Energy (DOE) Advanced Research Projects Agency - Energy (ARPA-E) has issued a funding opportunity announcement (DE-FOA-0000869) for about $20 million for the development of transformational electrochemical energy storage technologies intended to accelerate widespread electric vehicle adoption by significantly improving driving range, cost, and reliability. ARPA-E anticipates making approximately 8 to 12 awards under this FOA.
The Robust Affordable Next Generation EV-Storage (RANGE) program's goal is to enable a 3X increase in electric vehicle range (from ~80 to ~240 miles per charge) with a simultaneous price reduction of >1/3 (to ~$30,000). If successful, these vehicles will provide near cost and range parity to gasoline-powered ICE vehicles, ARPA-E said.
RANGE is focused on supporting chemistry and system concepts in energy storage with robust designs in one or both of:
• Category 1: Low-cost, rechargeable energy storage chemistries and architectures with robust designs;
• Category 2: Multifunctional energy storage designs.
ARPA-E defines robust design as electrochemical energy storage chemistries and/or architectures (i.e. physical designs) that avoid thermal runaway and are immune to catastrophic failure regardless of manufacturing quality or abuse conditions. Examples of robust designs cited by ARPA-E include: an electrochemical energy storage chemistry that uses non-combustible aqueous or solid-state electrolytes; a redox flow battery architecture that is inherently more robust because its active components are physically stored far from the cell electrodes; and a mechanism that allows a battery to automatically fail in open circuit when placed under abuse conditions.
Robust designs can transform EV design and create new pathways to dramatically lower cost by: 1) reducing the demands on system-level engineering and its associated weight and cost; 2) liberating the energy storage system from the need for vehicle impact protection, which allows the energy storage to be positioned anywhere on the vehicle, thereby freeing up the EV design; and 3) enabling multiple functions, such as assisting vehicle crash energy management and carrying structural load.
For the first category, examples of technical approaches include but are not limited to:
• High specific energy aqueous batteries. Areas of particular interest are approaches to novel high specific energy cathode/anode redox couples; materials and device designs for long-life metal-air systems; ultrahigh-capacity negative electrode materials to replace La-Ni alloys in nickel metal hydride batteries; and organic and inorganic redox couples, including their hybrids.
• Ceramic and other solid electrolyte batteries. Areas of particular interest are high-conductivity inorganic electrolytes for lithium and other alkaline metal ion systems, and solid-state and hybrid battery designs and low-cost manufacturing processes.
• Other batteries completely without, or with negligible, combustible or flammable materials.
• Materials and architectures that eliminate the possibility of thermal runaway.
• Robust design architectures. Examples include flow cells and electrically rechargeable fuel cells, fail-open-circuit designs, non-propagating system architectures, and designs resulting in reductions in individual storage unit sizes and energy contents.
• Hybridization of different energy storage chemistries and architectures to offer improved robustness, including mechanical abuse tolerance.
The second objective of RANGE is to fund the development of multifunctional energy storage systems. Robust design characteristics may enable energy storage systems to simultaneously serve other functions on an electric vehicle. Energy storage systems which absorb impulse energy during a vehicle crash and/or which carry mechanical load are of particular interest, ARPA-E suggested.
Both of these functions are expected to extend the EV's operating range by reducing the vehicle's overall weight. For Category 2, examples of technical approaches include but are not limited to:
• Energy storage systems that assist vehicle impact energy management. Areas of particular interest are material, cell, pack, and system designs that act synergistically with the rest of the vehicle structure to manage mechanical impact. Energy absorption mechanisms may include deformation, disintegration, and disengagement by design.
• Energy storage systems that act as structural members. In this case, the energy storage system may directly replace other structural members of the vehicle in the load path.
• Energy storage systems that serve other vehicle functions not listed above.
ARPA-E anticipates that the core technologies developed under this program will advance all categories of electrified vehicles (hybrid, plug-in hybrid, extended-range electric, and all-electric vehicles); however, the primary focus of this program is on all-electric vehicles.
Technical performance targets. The final research objective for projects funded under this FOA is a fully integrated energy storage unit with an energy content of 1 kWh or greater. ARPA-E is setting primary technical targets of:
• Cost to manufacture: < $100-125/kWh
• Effective specific energy: > 150 Wh/kg
• Effective energy density: > 230 Wh/L
Secondary technical targets are:
• Cycle life at 80% depth of discharge (DOD): > 1000
• Calendar life: > 10 years
• Effective specific Power – Discharge, 80% DOD/30 s: > 300 W/kg
• Operating temperature: >-30 °C (a higher bound is not defined)
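A quick back-of-the-envelope reading of those targets (my own numbers, not ARPA-E's): assuming a 240-mile EV at roughly 4 miles/kWh, a hypothetical 60 kWh pack built to the primary targets would cost, weigh, and occupy about the following.

```python
# Back-of-the-envelope implications of the RANGE primary targets.
# Assumptions (not from the FOA): 240-mile range at ~4 mi/kWh -> ~60 kWh pack.
pack_kwh = 240 / 4.0
cost_low, cost_high = 100, 125        # $/kWh manufacturing target
specific_energy = 150                 # Wh/kg target
energy_density = 230                  # Wh/L target

print(f"Pack energy:  {pack_kwh:.0f} kWh")
print(f"Pack cost:    ${pack_kwh * cost_low:,.0f} to ${pack_kwh * cost_high:,.0f}")
print(f"Pack mass:    {pack_kwh * 1000 / specific_energy:.0f} kg")   # ~400 kg
print(f"Pack volume:  {pack_kwh * 1000 / energy_density:.0f} L")     # ~260 L
```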
Specifically not of interest to ARPA-E are:
• Applications that fall outside the technical parameters, including but not limited to: incremental improvements to Li-ion components that have little potential to reduce system complexity, weight, and cost; approaches that employ higher specific energy cells coupled with a reduction in packing factor; incremental improvements to mechanical protection structures for energy storage systems; sensing, monitoring, and modeling of lithium-ion battery cells and systems that improve diagnosis but do not reduce system cost and improve crashworthiness; and energy storage technologies with significantly lower performance than lithium-ion batteries at a vehicle level, unless they are offered as part of a system solution that meets program metrics.
• Applications that were already submitted to pending ARPA-E FOAs. Also, applications that are not scientifically distinct from applications submitted to pending ARPA-E FOAs.
• Applications for basic research aimed at discovery and fundamental knowledge generation.
• Applications for large-scale demonstration projects of existing technologies.
• Applications for proposed technologies that represent incremental improvements to existing technologies.
• Applications for proposed technologies that are not based on sound scientific principles (e.g., violates a law of thermodynamics).
• Applications for proposed technologies that do not have the potential to become disruptive in nature.
ARPA-E also published a list of potential teaming partners for the RANGE FOA.
Excellent idea but 5 to 10 years late and 10,000 times too little.
Had $200+B been invested 10 years ago, batteries with 3+X the energy density and 1/3 the price would already be commonplace. Had we kept away from One Single Oil War (Iraq) and invested 50% of that oil war cost into battery development and lower cost mass production, many of us would already be driving $25,000 extended range EVs.
Harvey,
"Energy absorption mechanisms may include deformation, disintegration, and disengagement by design."
This has some possibilities.. imagine the flat battery pack underneath the Tesla S, in a high speed crash the pack releases from the car and continues moving forward.. that 1000lbs pack just stole momentum from the rest of the car, reducing the chance of death.. of course the now free pack just became a missile.. but the passengers are safe
All Greens salivating to put lithium ion batteries in every car should think again. This is what the NTSB found in that Boeing which caught fire in Boston:
http://www.designnews.com/document.asp?doc_id=258717&dfpPParams=ind_184,industry_aero,aid_258717&dfpLayout=article
Thanks but no thanks.
@Mannstein, thanks for link - and also consider that BEV's have been on US roads, counting the uncrushed ~400 RAV4 EVs, 16 years, since 1997.
This seems to back step from 5X, 1/5th, within 5 years batteries. DOE Sec Steven Chu joked 4X, but "all 'fives' sounded better."
Let's hope they are basing these huge advance expectations on simply coordinating first term/4 year battery breakthroughs.
Frankly, a "good enough for government work"/75% passing X 4X = 3X marketed battery energy density improvement within five years would totally alter light vehicles and society.
Oil would only be needed for heavy haul/~10% cross-country travel. F$%k the Rock#feller/Saudi's/oil. The buyers of 2013, $18,800 Leafs could exchange batteries for ~$3,000, 240 mile range 2018 batteries - and use the old battery for home power backup or get a ~$1,000(s?) credit - and have a virtually new car.
@ kelly
According to the U.S. Advanced Battery Consortium Primary Criteria for mid-term advanced EV batteries, the ultimate price for batteries must be less than $150/kWh (10,000 units at 40 kWh), and for the long term less than $100/kWh. This was published in the Handbook of Batteries, second edition, in the 1990s on page 39.11. We are a long way from the mid-term target even though it's 20 years since this was published. I too want to stop paying for weekly fill-ups, but an affordable practical EV is still off in the distant future, or else they would be flying out of car dealers' showrooms. Reminds me of the Magnetic Fusion Power Plants which were promised in the 1960s to be operational at the turn of the century. Latest I read from the main man at Princeton Univ. these are now slated to appear mid-century. Go figure!
@ HarveyD
Had we invested half the amount you propose 5 years ago in fuel cell development we'd have EVs flying off the shelf.
A mid-size auto MIGHT get 25-33 mpg average. Prius hybrids get 50 mpg. Already, from a ~1.4 kWh battery, we can reduce gas use by over a third, esp. in cities. A Prius C is under $20,000, a Prius under $27,000. The average US new car sold price is $30,800.
Even a C-Max plug-in (20 miles gas-free/trip) is under $30,000 w/ $3k tax credit - so enjoy if you're in the market.
Yes Kelly.... KitP and friends may not like it but electrified vehicles + improved e-ancillaries + associated infrastructures are building up fast. Progress in improved batteries and lighter vehicles may be slower than expected but resistance is being progressively overtaken everywhere.
Decent improved batteries (close to 500 Wh/kg) at lower price (close to $150/kWh) and much lighter cars (under 800 kg) will be available by 2018/2020 or so.
Affordable extended range BEVs (up to 450 miles or a bit more) will hit the market place by 2020/2022 or so.
Those of us who will live another 10 years will see the switch from ICEVs to BEVs. The complete transition will probably take another 20+ years.
By the way, the latest news on Boeing 787 battery problems is pointing towards 'bad and/or wrong wiring' of the battery control units.
The complete story should come out within one month or so.
Mannstein,
It may well be that commercial Fusion power plants will not be there until the 2040s. But unlike the 1970s, we have now produced large amounts of controlled Fusion energy. Many at Cadarache want to start detailed design of the first commercial Fusion power plant in 2017, less than 5 years from now.
That is even before the ITER will produce First Plasma. Everything that ITER was to develop or prove, from a Physics perspective, has already been achieved piece meal around the world in smaller facilities.
ITER is the last test of scale up, and the first to really address the engineering efforts needed for commercial Fusion. That scaleup effort turned out not to be as overwhelming as was feared, and quite straight forward, instead.
There are no more plasma instabilities to be encountered or solutions to them to be developed. We know that because reactors run for longer intervals approaching relative steady state and much longer than the lifetime of all the plasma instabilities. There have seemingly been hundreds of plasma instabilities encountered, but now they all have been encountered, catalogued, the Physics understood, and solutions both passive and active discovered, tried and tested.
Commercial Fusion is much closer than you think, and much shorter in time away, than the time since the First Petroleum Price Crisis in 1973.
That is reassuring, since the "renewable" power systems have turned into government-subsidized white elephants. They just do not scale any better today, or overcome their inherent limitations, than when they were abandoned 150 years ago, despite the prodigious spending of governments around the world.
When electric vehicles are advanced enough to compete, there will be the clean and inexhaustible power plants producing electricity to re-power them.
# Cerebral palsy is a. a disability resulting from damage to the brain evidenced by motor problems, physical weakness, lack
###### Question:
Cerebral palsy is
a. a disability resulting from damage to the brain evidenced by motor problems, physical weakness, lack of coordination, and speech disorders.
b. a brain disorder characterized by recurrent seizure activity.
c. the failure of certain bones of the spinal column to fuse.
d. a disorder that is characterized by progressive weakening of the voluntary skeletal muscles.
Finding air wires in Eagle
I am almost done routing a board. However, Eagle is telling me that there is still one more air wire. I have looked but I just can't seem to find it. Is there a way to make Eagle tell me where it is?
• There are other alternatives too. – Olin Lathrop Jul 2 '12 at 23:10
• I don't use Eagle, but can't you just run a DRC and it will tell you which nets are not connected? – Oli Glaser Jul 3 '12 at 1:28
• @OlinLathrop what are the other options? – Alexis K Jul 3 '12 at 2:22
• @AlexisK Check the updated answer fore more options. – Bruno Ferreira Jul 3 '12 at 9:40
I can think of three options:
• Zoom out as much as you can, then use the route tool on the tiny board; this catches the air wire. Then zoom in again and route it.
• You can also disable the top and bottom layers so the air wire becomes more visible.
• Yet another option is to run the provided "length.ulp" script (File->Run... or ULP button). This script shows a list of all the nets; in that list there is a column "Unrouted", and if some net is not completely routed a value appears there instead of "--". You can then type "show net_name" on the command line to highlight it.
• nice. I found it using the first method. – Alexis K Jul 3 '12 at 16:27
• On Eagle 5, I have a saved layer configuration called ‘airwires’ (it only shows the airwire layer). If I find I have a very short airwire somewhere, I zoom to view the entire board, then look for the wire. It's usually very obvious even on 6U Eurocard-sized PCBs. – Alexios Sep 24 '12 at 13:55
• The zoom out method is also illustrated in this blog entry. – Nick Alexeev Mar 31 '15 at 1:17
There is an ULP called zoom-unrouted. When you run it, it will automatically zoom your view to the first airwire it finds. Very useful. Here is the link:
• Yes, this is what I use mostly. Or else, hide all other layers besides "Unrouted" Layer. – OrCa Apr 12 '13 at 12:42
• This ULP is marvelous - I find hiding the other layers isn't much use when your airwires are miniscule little hops between not-perfectly-aligned pads and traces - shame you can't change the airwire display thickness (just color) – Richard Aplin Apr 7 '16 at 23:01
• URL changed to cadsoft.io/resources/ulps/349 – mash Aug 12 '16 at 7:40
Air wires are located on layer #19: "Unrouted". By disabling most/all of the other layers, they can easily be spotted.
Type in the following command: ratsnest *
This will list all the airwires in the status bar at the bottom with their name/net designation. It's a good start, and then at that point if you don't know where they are, use one of the above mentioned methods.
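If you prefer typing commands, the same idea can be done from the command line (a sketch; exact option handling can vary a little between Eagle versions, so check the DISPLAY command help if it complains):
display none
display 19
ratsnest
The first two commands hide everything except layer 19 (Unrouted), and ratsnest then recomputes the airwires and reports in the status bar how many remain; show net_name will highlight a specific net.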
• By far the easiest way I found to see if everything is routed. If it says "Ratsnest: 0 airwires" in the status bar then all is routed. – HixField Feb 6 '17 at 21:06
I'm not an Eagle user, but you surely can selectively disable layers. Disable the most distracting layers, that will probably be your signal layers, so that only components remain visible. You'll probably see the line then.
I think the fastest option is to select Edit->Route and left-click on the board. Eagle will automatically draw a track to the nearest unrouted wire on the board (or to the latest unrouted wire, do not worry).
Some time ago I used to disable all the layers except "Unrouted" to look for the unrouted tracks, until I discovered this simpler and faster method.
# Proof of the time-independent Schrödinger equation
I have a question regarding the proof of the time-independent Schrödinger equation. If we have a time-independent Hamiltonian, we can solve the Schrödinger equation by adopting the separation of variables method: we write our general solution as $\psi(r,t) = \psi(r)f(t)$ and we get two equations: one for $f(t)$ and one for $\psi(r)$. $$f(t)=e^{-\frac{i}{\hbar}Ht}$$ and $$H\psi(r) = E\psi(r)$$ Books in general refer to the second equation as the TISE and it is seen as an eigenvalue problem for the Hamiltonian, in order to find the stationary states. Now what I don't understand is why we see that equation as an eigenvalue problem for the Hamiltonian, since we have a wave-function $\psi(x)$ which is supposed to be an eigenstate of H. So if $\psi(x)$ is an eigenstate of H, it means that H is diagonal in the coordinate basis, but I know that's not true since H has the term $\frac{P^2}{2m}$ in it, which is not diagonal in the coordinate basis. Where am I wrong? Thank you very much
$H$ isn't diagonal in coordinate basis, but in the $\psi(r)$ Eigenbasis you're computing... – Christoph Jan 21 '14 at 10:27
Ok, thank you, it deals with $\psi(r)$ and not the coordinates actually. But why are we sure that $\psi(r)$ is an eigenfunction of $H$? When I separate the variables I just say I have a function $f(t)$ and a function $\psi(r)$; it can be any function to be determined – Danny Jan 21 '14 at 10:48
$$\hat H \Psi(r, t) = i \hbar \partial_t \Psi (r, t).$$
Using the ansatz $\Psi(r,t) = \psi(r) f(t)$ yields
$$f(t) \hat H \psi(r) = i \hbar \psi(r) \partial_t f(t)$$
and, via the standard separation of variables trick,
$$i\hbar\frac{\dot f(t)}{f(t)} = \text{const} = \frac{\hat H\psi(r)}{\psi(r)};$$
the two sides are equal, but the LHS depends only on $t$ while the RHS depends only on $r$, so they each have to be constant. Let us call this constant $E$. Then, for the time-dependent part, we get
$$\dot f(t) = -i \frac{E}{\hbar} f(t)$$
Which is manifestly solved by
$$f(t) = \exp\left\{-i \frac{E}{\hbar} t\right\}.$$
For the spatial part, we find the time-independent Schrödinger equation
$$\hat H \psi(r) = E \psi(r),$$
which as you observe can be viewed as an eigenvalue equation for $\hat H$. This motivates the choice of name for $E$: Physically, the eigenvalue of the Hamiltonian is the energy.
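For completeness (an addition, not part of the original answer): multiplying the two factors back together gives the stationary-state solution
$$\Psi(r,t) = \psi(r)\,\exp\left\{-i\frac{E}{\hbar}t\right\},$$
whose probability density $|\Psi(r,t)|^2 = |\psi(r)|^2$ is time-independent, which is why these solutions are called stationary states.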
# Integral Question - $\int\frac{1}{\sqrt{x^2-x}}\,\mathrm dx$
Integral Question - $\displaystyle\int\frac{1}{\sqrt{x^2-x}}\,\mathrm dx$. $$\int\frac{1}{\sqrt{x(x-1)}}\,\mathrm dx =\int \left(\frac{A}{\sqrt x} + \frac{B}{\sqrt{x-1}}\right)\,\mathrm dx$$
Is this the right way to solve it?
Thanks!
• There are no constants $A,B$ such that $$\frac{1}{\sqrt{x(x-1)}}=\frac{A}{\sqrt x} + \frac{B}{\sqrt{x-1}}$$ – Américo Tavares May 1 '13 at 10:04
• @Ofir : No. $\sqrt{CD} \neq \sqrt{C} + \sqrt{D}$. Complete the square inside the square root. – Stefan Smith May 1 '13 at 10:57
• Yes, I understand that after lab told me, thanks – Ofir Attia May 1 '13 at 10:59
Partial fraction decomposition is for rational functions only.
$$\int\frac{dx}{\sqrt{x^2-x}}=\int\frac{2dx}{\sqrt{4x^2-4x}}=\int\frac{2dx}{\sqrt{(2x-1)^2-1^2}}$$
Now, put $2x-1=\sec\theta$
EDIT: completing as requested
So,$2dx=\sec\theta\tan\theta d\theta$
$$\text{So,}\int\frac{2dx}{\sqrt{(2x-1)^2-1^2}}=\int \frac{\sec\theta\tan\theta d\theta}{\tan\theta}=\int \sec\theta d\theta =\ln|\sec\theta+\tan\theta|+C$$ (where $C$ is an arbitrary constant of integration)
$$=\ln\left|2x-1+\sqrt{(2x-1)^2-1}\right|+C=\ln\left|2x-1+2\sqrt{x^2-x}\right|+C$$
Alternatively,using $$\frac{dy}{\sqrt{y^2-a^2}}=\ln\left|y+\sqrt{y^2-a^2}\right|+C$$
$$\int\frac{dx}{\sqrt{x^2-x}}=\int\frac{dx}{\sqrt{\left(x-\frac12\right)^2-\left(\frac12\right)^2}}$$ $$=\ln\left|x-\frac12+\sqrt{x^2-x}\right|+C=\ln\left|2x-1+2\sqrt{x^2-x}\right|+C-\ln2=\ln\left|2x-1+2\sqrt{x^2-x}\right|+C'$$ where $C'=C-\ln2$ another arbitrary constant
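A quick sanity check (added, not part of the original answer): differentiating the result recovers the integrand. With $u = 2x-1+2\sqrt{x^2-x}$,
$$u' = 2 + \frac{2x-1}{\sqrt{x^2-x}} = \frac{2\sqrt{x^2-x}+2x-1}{\sqrt{x^2-x}} = \frac{u}{\sqrt{x^2-x}},$$
so $\left(\ln|u|\right)' = \dfrac{u'}{u} = \dfrac{1}{\sqrt{x^2-x}}$.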
• ok thanks, got it! – Ofir Attia May 1 '13 at 9:53
• @OfirAttia, can $1$ be equal to $A\sqrt{x-1}+B\sqrt x?$ – lab bhattacharjee May 1 '13 at 9:54
• no, I see it right now. thanks – Ofir Attia May 1 '13 at 9:55
• Can I put the 1/2 outside the integral, because of the 2dx? – Ofir Attia May 1 '13 at 9:57
• @OfirAttia,$\int\frac{dx}{\sqrt{x-x^2}}=\int\frac{dx}{\sqrt{\left(\frac12\right)^2-\left(x-\frac12\right)^2}}$ Use $$\int \frac{dy}{\sqrt{a^2-y^2}}=\arcsin \frac ya$$ – lab bhattacharjee May 2 '13 at 5:15
## Structural Subtyping of Non-Recursive Types is Decidable
We show that the first-order theory of structural subtyping of non-recursive types is decidable, as a consequence of a more general result on the decidability of term powers of decidable theories. Let Σ be a language consisting of function symbols and let C (with a finite or infinite domain C) be an L-structure where L is a language consisting of relation symbols. We introduce the notion of Σ-term-power of the structure C, denoted PΣ(C). The domain of PΣ(C) is the set of Σ-terms over the set C. PΣ(C) has one term algebra operation for each f in Σ, and one relation for each r in L defined by lifting operations of C to terms over C. We extend quantifier elimination for term algebras and apply the Feferman-Vaught technique for quantifier elimination in products to obtain the following result. Let K be a family of L-structures and KP the family of their Σ-term-powers. Then the validity of any closed formula F on KP can be effectively reduced to the validity of some closed formula q(F) on K. Our result implies the decidability of the first-order theory of structural subtyping of non-recursive types with covariant constructors, and the construction generalizes to contravariant constructors as well.
### Citation
Viktor Kuncak and Martin Rinard. Structural subtyping of non-recursive types is decidable. In Eighteenth Annual IEEE Symposium on Logic in Computer Science (LICS). IEEE, 2003.
### BibTex Entry
@inproceedings{KuncakRinard03StructuralSubtypingNonRecursiveTypesDecidable,
author = {Viktor Kuncak and Martin Rinard},
title = {Structural Subtyping of Non-Recursive Types is Decidable},
booktitle = {Eighteenth Annual IEEE Symposium on Logic in Computer Science (LICS)},
publisher = {IEEE},
isbn = {0-7695-1884-2},
year = 2003,
abstract = {
We show that the first-order theory of structural subtyping
of non-recursive types is decidable, as a consequence of a
more general result on the decidability of term powers of
decidable theories.
Let $\Sigma$ be a language consisting of function symbols and let
$\mathbf{C}$ (with a finite or infinite domain $C$) be an
$L$-structure where $L$ is a language consisting of relation
symbols. We introduce the notion of $\Sigma$-term-power
of the structure $\mathbf{C}$, denoted $P_{\Sigma}(\mathbf{C})$. The domain
of $P_{\Sigma}(\mathbf{C})$ is the set of $\Sigma$-terms over the set $C$.
$P_{\Sigma}(\mathbf{C})$ has one term algebra operation for each $f \in \Sigma$, and one relation for each $r \in L$ defined by lifting
operations of $\mathbf{C}$ to terms over $C$.
We extend quantifier elimination for term algebras and apply
the Feferman-Vaught technique for quantifier elimination in
products to obtain the following result. Let $K$ be a family
of $L$-structures and $K_P$ the family of their
$\Sigma$-term-powers. Then the validity of any closed
formula $F$ on $K_P$ can be effectively reduced to the
validity of some closed formula $q(F)$ on $K$.
Our result implies the decidability of the first-order
theory of structural subtyping of non-recursive types with
covariant constructors, and the construction generalizes to
contravariant constructors as well.
}
}
Instructional video
# Rewrite rational expressions by seeing the expression as division of the numerator by the denominator
teaches Common Core State Standards HSA-APR.D.6 http://corestandards.org/Math/Content/HSA/APR/D/6
# An enhanced syntax for defining functions in PostScript
This file enhances PostScript with a few new syntactic niceties for defining functions with named arguments and even type-checking.
## block
The first part of the enhancement consists of a new control structure called a "block". A block is a list of pairs which will be collected as key/value pairs into a dictionary and then the special key main gets called. This much allows us to elide the 'def' for all functions and data, and also the / decorations on all the function names.
{
main { f g h }
f { }
g { }
h { }
} block
This code defines a main function and 3 functions that main calls. When block executes, all 4 functions are defined in a dictionary and then main gets called.
Importantly, it lets you put main at the top to aid top-down coding.
## func
Another enhancement is the func syntax. A function can be created to accept named arguments by calling func with the array of argument names and the array of the body of the function. To do this within the block construct where normally the contents of the block are not executed but just collected, you can force execution by prefixing the @ sign to func. Any name can be executed "at compile time" with this @ prefix, but only at the top-level.
Using func you can give names to the arguments of functions by enclosing them in curly braces before the function body:
{
main{
3 f
4 5 g
}
f {x}{ 1 x div sin } @func
g {x y}{ x log y log mul } @func
} block
This code creates a function f of one argument x, and a function g of two arguments x and y. The function body is augmented with code which takes these 2 objects from the stack and defines them in a new local dictionary, created and begin-ed for this invocation of the function, coupled with an end at the end.
For some use-cases like implementing control structures, it is useful to place the end more strategically. The fortuple function illustrates this by using @func-begin which does not add end at the end. It then places the end earlier, before calling back to the p argument.
A function can also declare the types that its arguments must have by enclosing them in parentheses and using executable names for the argument names and literal names for the types:
{
main {
3 4 p
}
p (x/integer y/integer){ x y add } @func
} block
This augments the function body with code which checks that there are indeed enough objects on the stack, or else it triggers a stackunderflow error. Then it checks that all of the arguments have the expected type, or else it triggers a typecheck error. The types here are written without the letters type at the end; these are added automatically.
You can omit the type name with the parenthesized syntax and it will allow any type for that argument. If you omit all the type names you still get the stackunderflow checking. With any of these errors the name of the user function is reported in the error message for easier debugging.
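As a further example (hypothetical, but consistent with the parsing rules described above), type names can be mixed with untyped arguments in a single list:
{
main { 3 4 q = }
q (x/integer y){ x y mul } @func
} block
Here x must be an integer while y may hold any type; both arguments are still covered by the stackunderflow check, and an error message would report the name q.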
## Implementation
Implementation-wise, the foundation is the pairs construct which is an array which gets traversed with forall. Any name beginning with @ gets the @ stripped off and the remainder gets executed. The results are enclosed in << and >> to create a dictionary.
The first dictionary defines everything to do with pairs including pairs-def which adds the key/value pairs into the current dictionary rather than begin a new dictionary. The next two sections add their functions to this same dictionary.
This justifies (somewhat) the cuddled style of bracing. The whole implementation is split into 3 layers but the result is only 1 dictionary on the dictstack with all of these functions in it.
The middle section defines block and func and all of the functionality for the simple-func style of defining functions. The third section implements two looping control structures which use the simple-func style. These looping functions are then used by the more complex typed-func style.
Many of the functions used to implement all of this are useful in their own right so they are also supplied to the user, like curry compose reduce.
There is some simplistic testing code at the bottom guarded by /debug where. So if this file is simply run from another file, it will skip the testing code. But if the key debug is defined somewhere on the dictstack, then the testing code will execute. So with ghostscript, testing can be invoked with gs -ddebug struct2.ps.
The testing code itself illustrates the overhead of the code added by func. For type checking, it adds a fair amount of code.
%!
% struct2.ps An enhanced PostScript syntax for defining functions with named,
% type-checked arguments. Using @func within a block or other construct that uses
% 'pairs' accomplishes a sort of compile-time macro expansion of the shorthand function description.
<<
/pairs-begin { pairs begin }
/pairs-def { pairs {def} forall }
/pairs { << exch explode >> }
/explode { { @exec } forall }
/@exec { dup type /nametype eq { exec-if-@ } if }
/exec-if-@ { dup dup length string cvs dup first (@) first eq { exec@ }{ pop } ifelse }
/first { 0 get } /exec@ { exch pop rest cvn cvx exec }
/rest { 1 1 index length 1 sub getinterval }
>> begin {
block { pairs-begin main end }
func { 1 index type /stringtype eq { typed-func }{ simple-func } ifelse }
simple-func { func-begin { end } compose }
typed-func { exch args-and-types reverse { make-type-name } map check-stack 3 1 roll
exch simple-func compose }
func-begin { exch reverse /args-begin load curry exch compose }
args-begin { dup length dict begin { exch def } forall }
args-and-types { /was_x false def [ exch { each-specifier } fortokens fix-last ] dup args exch types }
each-specifier { dup xcheck /is_x exch def is_x was_x and { null exch } if /was_x is_x def }
fix-last { counttomark 2 mod 1 eq { null } if }
check-stack { {pop} 4 index cvlit { cvx /stackunderflow signalerror } curry compose
/if cvx 2 array astore cvx {check-count} exch compose curry
3 index cvlit { cvx /typecheck signalerror } curry
/if cvx 2 array astore cvx {check-types} exch compose compose }
check-count { dup length count 2 sub gt }
check-types { dup length 1 add copy true exch { check-type and } forall exch pop not }
check-type { dup null eq { 3 -1 roll pop pop true }{ 3 -1 roll type eq } ifelse }
make-type-name { dup type /nametype eq { dup length 4 add string dup dup 4 2 roll cvs
2 copy 0 exch putinterval length (type) putinterval cvn } if }
args { [ exch 2 { 0 get } fortuple ] }
types { [ exch 2 { 1 get } fortuple ] }
map { 1 index xcheck 3 1 roll [ 3 1 roll forall ] exch {cvx} if }
reduce { exch dup first exch rest 3 -1 roll forall }
rreduce { exch aload length 1 sub dup 3 add -1 roll repeat }
curry { [ 3 1 roll {} forall ] cvx } @pop
{ dup length 1 add array dup 0 5 -1 roll put dup 1 4 -1 roll putinterval cvx }
compose { 2 array astore cvx { {} forall } map } @pop
{ 1 index length 1 index length add array dup 0 4 index putinterval
dup 4 -1 roll length 4 -1 roll putinterval cvx }
reverse { [ exch dup length 1 sub -1 0 { 2 copy get 3 1 roll pop } for pop ] }
} pairs-def {
fortokens {src proc}{ { src token {exch /src exch store}{exit}ifelse proc } loop } @func
fortuple {a n p}{ 0 n /a load length 1 sub
{ /a exch /n getinterval /p exec } {load-if-literal-name} map end for
} @func-begin
load-if-literal-name { dup type /nametype eq 1 index xcheck not and { load } if }
} pairs-def
/debug where{pop}{currentfile flushfile}ifelse
{
- sub + add * mul %:= {exch def} += {dup load 3 -1 roll + store}
f {x y z}{ x y z + * } @func
f' {x y z}{ x y z + * end } @func-begin
f'' { {z y x}args-begin x y z + * end }
g(x/integer y/integer z/real){ x y z + * } @func
g' {
[/realtype/integertype/integertype]
check-count { pop /g cvx /stackunderflow signalerror } if
check-types { /g cvx /typecheck signalerror } if
{z y x}args-begin x y z + * end
}
h(x y z){ x y z + * } @func %@dup @==
h' {
[null null null]
check-count { pop /h cvx /stackunderflow signalerror } if
check-types { /h cvx /typecheck signalerror } if
{z y x}args-begin x y z + * end
}
main {
var ==
[ 1 2 3 4 5 ] { - } rreduce ==
/ =
3 4 5 f ==
3 4 5 f' ==
3 4 5 f'' ==
/ =
3 4 5.0 g =
3 4 5.0 g' =
{ 3 4 5 g = } stopped { $error /errorname get =only ( in ) print $error /command get = } if
/ =
clear
{ 3 4 h = } stopped { $error /errorname get =only ( in ) print $error /command get = } if
clear
3 4 5 h =
{ 3.0 4.0 5.0 h = } stopped { $error /errorname get =only ( in ) print $error /command get = } if
{ 3.0 4.0 5.0 h' = } stopped { $error /errorname get =only ( in ) print $error /command get = } if
quit
}
} block
The output from the testing code:
$ gsnd -q -ddebug struct2.ps
5
3
27
27
27
27.0
27.0
typecheck in g
stackunderflow in h
27
27.0
27.0
Are there improvements to make to the implementation or the behavior? Currently the simple-func style does not check that there are enough arguments but just tries to define them assuming that they're there. Would it be better to add this checking, or is it better to have this low-overhead version which does not add (possibly wasteful) checks?
# A good book for a second year linear algebra course?
by iceblits
Tags: algebra, book, linear
P: 113 Just wondering if anyone can recommend a good linear algebra book for a second year course. In my first semester I learned up to Gram Schmidt process..EigenValues/Vectors etc. I don't care too much about how "easy" the book is to read. A book heavy in theory will do nicely if that's what you have in mind.
PF Gold P: 712 axler
P: 26 If your first course was rigorous, try Advanced Linear Algebra by Roman. If not, check out Axler, Hoffman & Kunze, and Friedberg. Best of luck.
HW Helper
P: 9,453
A good book for a second year linear algebra course?
Here are my notes from a summer course I taught a while back, meant as a second linear algebra course. Our text for the course was officially Friedberg, Insel, and Spence, which I thought was good. My approach differs from theirs mainly in my extensive use of the concept of the minimal polynomial of a linear map, as an organizing principle. For some reason Insel, et al. seemed to feel that using polynomials made the course too advanced. I also used Shilov as a supplementary text.
Obviously I am not qualified to call my book good, but it is free. Objectively I would say it probably lacks sufficient examples and problems, but overall I enjoyed learning and explaining the ideas while writing it.
Attached Files
4050sum08notes.pdf (468.3 KB, 40 views)
Mentor P: 18,019 Check this: http://www.physicsforums.com/blog.php?b=3206 I think Hoffman and Kunze would be an ideal book for you.
Sci Advisor HW Helper P: 9,453 I agree with the other recommendations here under the rubric "good". I just offered mine because its free. Hoffman and Kunze especially is a classic, (but sometimes pricy). Here is a reasonable one though, used: http://www.biblio.com/search.php?aut...aler_id=133308 LINEAR ALGEBRA Kunze, Ray & Hoffman, Kenneth Bookseller: Samkat Books (Dyersburg, TN, U.S.A.) Bookseller Rating: Quantity Available: 1 Price: US$22.50 Convert Currency Shipping: US$ 4.00 Within U.S.A. Destination, Rates & Speeds Book Description: Prentice-Hall, Englewood Cliffs, N. J., 1961. Hardcover. Book Condition: Ex-Library; G/NONE. Not Latest Edition. Moderate edge wear. Previous owner's name marked out inside front cover. Pages clean, binding good. ; 332 pages. Bookseller Inventory # 63391
P: 125 either Linear Algebra Done Right (which is the Axler everybody is talking about) or if you have an ok background in group theory, then I'd say go for Advanced Linear Algebra (again, people have already recommended this one, it's by Steven Roman) the later is a little more robust, as it is intended for graduate students in mathematics, where the former is intended for upper level undergraduates. An additional one I thought about that might work for you if you want a short, free text to be a bridge between where you are now and then picking up Roman's book: http://www.math.miami.edu/~ec/book/ It's free to download the whole thing and it focuses on learning the algebra necessary to get into a more in depth exploration of linear algebra.
P: 113 Hey, thanks so much for the quick replies. I'm currently browsing through the books suggested here and I'm leaning towards getting both Axler and Roman, or Roman and Hoffman & Kunze. It seems Roman is the more rigorous choice, and if I get lost going through that I'll fall back on Axler and the free texts listed here. MathWonk and bpatrick, thanks so much for the links to the free texts. MathWonk, I'll check yours out as soon as the pdf is approved and the link is available :)
Sci Advisor HW Helper P: 9,453 for my book try this page: http://www.math.uga.edu/~roy/ also i recommend an excellent book, linear algebra done wrong, by sergei treil. it is not amateurishly composed like mine but professionally done. also free on his website. http://www.math.brown.edu/~treil/
P: 47 I prefer Shilov over Axler or Hoffman & Kunze.
P: 113 MathWonk Thanks for the links! I was looking through your book and the material seems to be within my grasp which is good news..I think
Foundations of hyperbolic manifolds. 2nd ed. (English) Zbl 1106.51009
Graduate Texts in Mathematics 149. New York, NY: Springer (ISBN 0-387-33197-2). xii, 779 p. (2006).
Designed to be useful as both textbook and a reference, this book renders a real service to the mathematical community by putting together the tools and prerequisites needed to enter the territory of Thurston’s formidable theory of hyperbolic 3-manifolds, an area that used to be accessible before the publication of the first edition of this book only after a much more thorough and lengthy preparation. Although the author’s stated prerequisites of “a basic knowledge of algebra and topology at the first-year graduate level of an American university” should be supplemented by a very solid background in integration, preferably on manifolds, no use is being made of either algebraic topology or differential geometry.
The first four chapters deal with $$n$$-dimensional Euclidean ($$E^n$$), spherical ($$S^n$$), hyperbolic ($$H^n$$), and inversive geometry. They require a firm grounding in linear algebra and $$n$$-dimensional analysis (preferably some analysis on manifolds). For each of Euclidean, spherical, and hyperbolic geometry, the author determines the arclength, the geodesics, the element of volume, the corresponding trigonometries (laws of sines, cosines, etc.). To emphasize the similarity between spherical and hyperbolic geometry, the latter is first introduced by means of the hyperboloid model inside Lorentzian $$n$$-space. The conformal ball ($$B^n$$) and the upper half-space ($$U^n$$) models of hyperbolic geometry are introduced in the chapter on inversive geometry, which also contains characterization theorems for elliptic transformations of $$B^n$$ and for parabolic and hyperbolic transformations of $$U^n$$.
Chapters 5–9 are devoted to the pre-Thurston part of the story. Chapter 5 is on discrete subgroups of both the group of isometries of $$E^n$$ and of the group $$M(B^n)$$ of Möbius transformations of $$B^n$$. Chapter 6, on the geometry of discrete groups, introduces the projective disk model of $$n$$-dimensional hyperbolic geometry, convex sets for $$E^n, S^n, H^n$$, emphasizing polyhedra and polytopes, studies fundamental domains, convex fundamental polyhedra, and tessellations. Chapter 7, on classical discrete groups, studies reflection groups, simplex reflection groups for $$E^n, S^n, H^n$$, generalized simplex reflection groups for $$H^n$$, proves that the volume of an $$n$$-simplex in $$S^n$$ or $$H^n$$ is an analytic function of the dihedral angles of that simplex, the Schläfli differential formula, as well as a study of the theory of crystallographic groups, culminating with a proof of Bieberbach’s theorem. Chapter 8 contains the basic notions and theorems of geometric manifolds, including Clifford-Klein space-forms, $$(X, G)$$-manifolds, geodesic completeness. Chapter 9, on geometric surfaces, deals with gluing surfaces, the Gauss-Bonnet theorem for surfaces of constant curvature, moduli spaces, Teichmüller space, the Dehn-Nielsen theorem, closed Euclidean and hyperbolic surfaces, hyperbolic surfaces of finite area.
Chapters 10–13, the heart of the introduction to the results of Thurston, Gromov, and the author, are, as expected, more heavy going than the previous ones. Chapter 10 is on hyperbolic 3-manifolds, containing results on gluing of 3-manifolds, some examples of finite volume hyperbolic 3-manifolds (the Whitehead link, the Borromean rings complement), the computation of the volume of a compact orthotetrahedron and of an ideal tetrahedron, hyperbolic Dehn surgery. Chapter 11, on hyperbolic $$n$$-manifolds, deals with gluing, Poincaré’s fundamental polyhedron theorem, the Gauss-Bonnet theorem for the special case of closed spherical, Euclidean, or hyperbolic $$n$$-manifolds of constant sectional curvature, the characterization of simplices of maximal volume in $$B^n$$, differential forms, the Gromov norm (with Gromov’s theorem), measure homology, the de Rham chain complex, and the Mostow rigidity theorem. Chapter 12, on geometrically finite $$n$$-manifolds, deals with limit sets of discrete groups (including classical Schottky groups), the basic properties of conical and cusped limit points of a discrete group of Möbius transformations of $$B^n$$, the characterization, in terms of their convex fundamental polyhedra, of the discrete subgroups of $$M(B^n)$$ that have the property that every limit point is either conical or cusped, the study of nilpotent subgroups of the group $$I(H^n)$$ of $$n$$-dimensional hyperbolic isometries, the Margulis lemma, used to prove the existence of Margulis regions for discrete subgroups of $$I(H^n)$$, and geometrically finite hyperbolic manifolds. Chapter 13 studies the geometry of geometric orbifolds.
Every chapter is followed by historical notes, with attributions to the relevant literature, both of the originators of the ideas presented in the chapter and of modern presentations thereof. The bibliography contains 463 entries.
MSC:
51M10 Hyperbolic and elliptic geometries (general) and generalizations 57M50 General geometric structures on low-dimensional manifolds 20H10 Fuchsian groups and their generalizations (group-theoretic aspects) 30F40 Kleinian groups (aspects of compact Riemann surfaces and uniformization) 57M60 Group actions on manifolds and cell complexes in low dimensions 57N10 Topology of general $$3$$-manifolds (MSC2010) 51-02 Research exposition (monographs, survey articles) pertaining to geometry
Zbl 0809.51001
Full Text:
# Directory
Check for the existence of the output directory and create if it does not exist. Example:
fn_dir = fileparts(fn);
if ~exist(fn_dir,'dir')
mkdir(fn_dir);
end
# Filenames
## Negative numbers and decimals
Use "-" and "+" in front of the number if it is signed.
Use "p" for the radix point.
Generally, fix the number of digits: if a number's range of possible values requires a maximum of N digits (or N decimals) to represent, where N >= 0, then all N digits (or decimals) should always be used and zeros should be used to pad.
For example:
• an unsigned number 34.5 with a maximum of 3 digits including 1 decimal would be: "_34p5_".
• a signed number 34.5 with a maximum of 3 digits including 1 decimal would be: "_+34p5_".
• a signed number 34.5 with a maximum of 5 digits including 2 decimals would be: "_+034p50_".
• an unsigned integer 34 with a maximum of 4 digits would be: "_0034_".
• a signed integer 34 with a maximum of 4 digits would be: "_+0034_".
• a signed integer -34 with a maximum of 4 digits would be: "_-0034_".
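For example, the signed, 5-digit, 2-decimal case above can be produced with sprintf followed by a radix-point substitution (a minimal sketch; the variable names and the surrounding filename pattern are illustrative only, not an established convention):
val = 34.5;
num_str = sprintf('%+07.2f', val); % gives '+034.50'
num_str = strrep(num_str, '.', 'p'); % gives '+034p50'
fn = sprintf('out_%s.mat', num_str); % e.g. 'out_+034p50.mat'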
# Figures
Generally, figures that are saved should follow the conventions at: Debug Plots.
# fopen
The following error checking should always be done when a file is opened for writing. The "b" option should always be specified, even for text files.
[fid,msg] = fopen(txt_fn,'wb');
if fid < 0
error('Could not open file:\n %s\nError message: %s.', txt_fn, msg);
end
fclose(fid);
# Saving files (fopen, save, saveas, ct_save, ct_saveas)
When saving a file (assuming output filename is out_fn), always print out the filename to stdout before calling the appropriate save function (e.g. fopen, ct_save, ct_saveas, save, saveas), in this form:
fprintf('Save %s (%s)\n', out_fn, datestr(now));
# mat files
Use the "ct_save" command to save mat files instead of "save". This checks for sufficient disk space before starting the save operation and also always saves in v7.3 (HFD5) format.
Mat files that contain results that are dependent on the standard param structure, should always save the relevant param structures. For example:
out.(['param_' mfilename]) = param;
ct_save(fn,'-struct','out');
If the function is dependent on the records file or other previous operations that stored the param structure, then those param structures should be stored in the file as well. For example:
out.param_records = records.param_records;
ct_save(fn,'-struct','out');
Entropy stable numerical approximations for the isothermal and polytropic Euler equations
In this work we analyze the entropic properties of the Euler equations when the system is closed with the assumption of a polytropic gas. In this case, the pressure solely depends upon the density of the fluid and the energy equation is not necessary anymore as the mass conservation and momentum conservation then form a closed system. Further, the total energy acts as a convex mathematical entropy function for the polytropic Euler equations. The polytropic equation of state gives the pressure as a scaled power law of the density in terms of the adiabatic index γ. As such, there are important limiting cases contained within the polytropic model like the isothermal Euler equations (γ=1) and the shallow water equations (γ=2). We first mimic the continuous entropy analysis on the discrete level in a finite volume context to get special numerical flux functions. Next, these numerical fluxes are incorporated into a particular discontinuous Galerkin (DG) spectral element framework where derivatives are approximated with summation-by-parts operators. This guarantees a high-order accurate DG numerical approximation to the polytropic Euler equations that is also consistent to its auxiliary total energy behavior. Numerical examples are provided to verify the theoretical derivations, i.e., the entropic properties of the high order DG scheme.
1. Introduction
The compressible Euler equations of gas dynamics
(1.1) $$\varrho_t + \vec{\nabla}\cdot(\varrho\vec{v}) = 0, \qquad (\varrho\vec{v})_t + \vec{\nabla}\cdot(\varrho\vec{v}\otimes\vec{v}) + \vec{\nabla}p = \vec{0}, \qquad E_t + \vec{\nabla}\cdot\bigl(\vec{v}\,[E+p]\bigr) = 0,$$
are a system of partial differential equations (PDEs) where the conserved quantities are the mass $\varrho$, the momenta $\varrho\vec{v}$, and the total energy $E$. This is an archetypical system of non-linear hyperbolic conservation laws that have far reaching applications in engineering and natural sciences, e.g. [29, 32, 44]. In three spatial dimensions, this system has five equations but six unknowns: the density $\varrho$, the velocity components $v_1, v_2, v_3$, the internal energy $e$, and the pressure $p$. Thus, in order to close the system, an equation of state is necessary to relate thermodynamic state variables like pressure, density, and internal energy. Depending on the fluid and the physical processes we wish to model, the equation of state changes. Examples include ideal gases as well as polytropic processes, where $p = \kappa\varrho^{\gamma}$ [29].
The connection between the equation of state, the fluid, and other thermodynamic properties is of particular relevance when examining the physical realizability of flow configurations. In particular, the entropy plays a crucial role to separate possible flow states from the impossible [6]. There is a long history investigating the thermodynamic properties of the compressible Euler equations through the use of mathematical entropy analysis for adiabatic processes [25, 34, 42] as well as polytropic processes [9, 36]. In this analysis the mathematical entropy is modeled by a strongly convex function $s(u)$. There exist associated entropy fluxes, $\vec{f}^{\,s}$, such that the entropy function satisfies an additional conservation law
$$s_t + \vec{\nabla}\cdot\vec{f}^{\,s} = 0,$$
for smooth solutions that becomes an inequality
$$s_t + \vec{\nabla}\cdot\vec{f}^{\,s} \leq 0,$$
in the presence of discontinuous solutions, e.g. shocks. Note, we have adopted the convention common in mathematics that entropy is a decreasing quantity, e.g. [42].
For numerical methods, discretely mimicking this thermodynamic behavior leads to schemes that are entropy conservative (or entropy stable) depending on the solutions smoothness [25, 42]. Additionally, numerical approximations, especially schemes with higher order accuracy and low inbuilt numerical dissipation, that are thermodynamically consistent have a marked increase in their robustness [7, 24, 30, 48]. Thus, the design and application of entropy stable approximations, particularly for the compressible Euler equations, have been the subject of ongoing research for the past 50 years, e.g. [4, 7, 8, 10, 12, 15, 16, 24, 25, 27, 35, 42]. A major breakthrough came with the seminal work of Tadmor [40] wherein he developed a general condition for a finite volume numerical flux function to remain entropy conservative. It was then possible to selectively add dissipation to the baseline numerical approximation and guarantee entropy stability.
Many authors expanded on the entropy stability work of Tadmor, developing higher order spatial approximations through the use of WENO reconstructions [18, 19, 31], summation-by-parts (SBP) finite difference approximations [12, 15, 16], or the discontinuous Galerkin spectral element method (DGSEM) also with the SBP property [4, 7, 10, 24, 23]. The latter two numerical schemes both utilize the SBP property that discretely mimics integration-by-parts. This allows a direct translation of the continuous analysis and entropy stability proofs onto the discrete level, see [22, 39] for details. However, the design of these entropy stable approximations (low-order or high-order) has focused on adiabatic processes for the compressible Euler equations.
So, the main focus in this work is to design entropy conservative and entropy stable numerical methods for the polytropic Euler equations. As such, the mathematical entropy analysis is reinvestigated on the continuous level due to the selection of a different equation of state. This analysis also provides a roadmap to discrete entropy stability. We will show that the isothermal limit ($\gamma = 1$) requires special considerations. The first contribution comes with the derivation of entropy conservative/stable numerical flux functions from Tadmor’s finite volume condition. This includes a computationally affordable definition of the baseline entropy conservative numerical flux as well as an explicit definition of the average states where the dissipation terms should be evaluated. In particular, a special mean operator, which is a generalization of the logarithmic mean [37], is introduced. The second contribution takes the finite volume derivations and builds them into a high-order DGSEM framework that remains consistent with the laws of thermodynamics. Complete details on the entropy aware DGSEM are given by Gassner et al. [24].
The paper is organized as follows: Sect. 2 presents the polytropic Euler system and performs the continuous mathematical entropy analysis. The derivations are kept general as the isothermal Euler equations are a special case of the polytropic system. The finite volume discretization and entropy stable numerical flux derivations are given in Sect. 3. In Sect. 4, a generalization of the entropy stable polytropic Euler method into a high-order DGSEM framework is provided. Numerical investigations in Sect. 5 verify the high-order nature of the approximations as well as the entropic properties. Concluding remarks are given in the final section.
2. Polytropic Euler equations
We first introduce notation that simplifies the continuous and discrete entropy analysis of the governing equations in this work. The state vector of conserved quantities is denoted by $u$ and the Cartesian fluxes are denoted by $f_1, f_2, f_3$. As in [2, 23], we define block vector notation with a double arrow,
$$\overleftrightarrow{f} = \left(f_1,\, f_2,\, f_3\right)^T.$$
The dot product of a spatial vector $\vec{g} = (g_1, g_2, g_3)^T$ with a block vector results in a state vector,
$$\vec{g}\cdot\overleftrightarrow{f} = \sum_{i=1}^{3} g_i f_i.$$
Thus, the divergence of a block vector is
$$\vec{\nabla}\cdot\overleftrightarrow{f} = (f_1)_x + (f_2)_y + (f_3)_z.$$
This allows a compact presentation for systems of hyperbolic conservation laws
(2.1) $$u_t + \vec{\nabla}\cdot\overleftrightarrow{f} = 0,$$
on a domain $\Omega$.
2.1. Governing equations
The polytropic Euler equations are a simplified version of the compressible Euler equations (1.1) which explicitly conserves the mass and momenta. In the equation of state for polytropic fluids the pressure depends solely on the fluid density and the total energy conservation law becomes redundant [9]. The simplified system takes the form of non-linear conservation laws (2.1) with
$$u = \begin{bmatrix} \varrho\\ \varrho\vec{v} \end{bmatrix}, \qquad \overleftrightarrow{f} = \begin{bmatrix} \varrho\vec{v}\\ \varrho\vec{v}\otimes\vec{v} + p\,\underline{I} \end{bmatrix},$$
where $\underline{I}$ is a $3\times 3$ identity matrix. We close the system with the polytropic or the isothermal gas assumption, which relates density and pressure:
(2.2) $$\text{polytropic case: } p(\varrho) = \kappa\varrho^{\gamma}, \qquad \text{isothermal case: } p(\varrho) = c^{2}\varrho.$$
For a polytropic gas, $\gamma$ is the adiabatic coefficient and $\kappa$ is some scaling factor depending on the fluid, e.g. for the shallow water equations with $\kappa = g/2$ ($g$ the gravitational acceleration) and $\gamma = 2$ [17]. For the isothermal case $\gamma = 1$ and $c$ is the speed of sound [9]. To keep the analysis of the polytropic Euler equations general, we will only specify which equation of state is used when necessary. Further, in barotropic models the internal energy, $e(\varrho)$, and the pressure form an admissible pair provided the ordinary differential equation
(2.3) $$\varrho\,\frac{de}{d\varrho} = \frac{p(\varrho)}{\varrho},$$
is satisfied [9]. For the equations of state (2.2) the corresponding internal energies are
(2.4) $$\text{polytropic case: } e(\varrho) = \frac{\kappa\varrho^{\gamma-1}}{\gamma-1}, \qquad \text{isothermal case: } e(\varrho) = c^{2}\ln(\varrho).$$
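As a quick consistency check (added for the reader, not part of the original text), the polytropic internal energy in (2.4) satisfies the admissibility condition (2.3):
$$\varrho\,\frac{de}{d\varrho} = \varrho\,\kappa\varrho^{\gamma-2} = \kappa\varrho^{\gamma-1} = \frac{\kappa\varrho^{\gamma}}{\varrho} = \frac{p(\varrho)}{\varrho},$$
and analogously $\varrho\,\frac{d}{d\varrho}\!\left(c^{2}\ln\varrho\right) = c^{2} = \frac{p(\varrho)}{\varrho}$ in the isothermal case.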
2.2. Continuous entropy analysis
We define the necessary components to discuss the thermodynamic properties of (2.1) from a mathematical perspective. To do so, we utilize well-developed entropy analysis tools, e.g. [25, 34, 41]. First, we introduce an entropy function used to define an injective mapping between state space and entropy space [25, 34].
For the polytropic Euler equations, a suitable mathematical entropy function is the total energy of the system [9]
(2.5) $$s(u) = \frac{\varrho}{2}\|\vec{v}\|^2 + \varrho\, e(\varrho),$$
with the internal energy taken from (2.4). Note that the entropy function is strongly convex under the physical assumption that $\varrho > 0$. From the entropy function we find the entropy variables to be
(2.6) $$w = \frac{\partial s}{\partial u} = \left(e + \varrho\frac{de}{d\varrho} - \frac{1}{2}\|\vec{v}\|^2,\; v_1,\; v_2,\; v_3\right)^{T} = \left(e + \frac{p}{\varrho} - \frac{1}{2}\|\vec{v}\|^2,\; v_1,\; v_2,\; v_3\right)^{T},$$
where we use the relation (2.3) to simplify the first entropy variable. The mapping between state space and entropy space is equipped with symmetric positive definite (s.p.d.) entropy Jacobian matrices, e.g., [41]
$$H^{-1} = \frac{\partial w}{\partial u},$$
and
(2.7) $$H = \frac{1}{a^2}\begin{bmatrix} \varrho & \varrho v_1 & \varrho v_2 & \varrho v_3\\ \varrho v_1 & \varrho v_1^2 + a^2\varrho & \varrho v_1 v_2 & \varrho v_1 v_3\\ \varrho v_2 & \varrho v_1 v_2 & \varrho v_2^2 + a^2\varrho & \varrho v_2 v_3\\ \varrho v_3 & \varrho v_1 v_3 & \varrho v_2 v_3 & \varrho v_3^2 + a^2\varrho \end{bmatrix},$$
where we introduce a general notation for the sound speed,
$$a^2 = \frac{\gamma p}{\varrho}.$$
We note that this statement of $H$ is general for either equation of state from (2.2). The entropy fluxes, $\vec{f}^{\,s}$, associated with the entropy function (2.5) are
(2.8) $$\vec{f}^{\,s} = \left(f^{s}_1,\, f^{s}_2,\, f^{s}_3\right)^{T} = \vec{v}\,(s+p).$$
Finally, we compute the entropy flux potential that is needed later in Sect. 3.2 for the construction of entropy conservative numerical flux functions
(2.9) $$\vec{\Psi} = w^{T}\overleftrightarrow{f} - \vec{f}^{\,s} = \vec{v}\,p.$$
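As a short verification (added here; the original states the result directly), contracting the $x$-direction flux with the entropy variables (2.6) gives
$$w^T f_1 = \varrho v_1\!\left(e + \frac{p}{\varrho} - \frac{1}{2}\|\vec{v}\|^2\right) + v_1\!\left(\varrho v_1^2 + p\right) + \varrho v_1 v_2^2 + \varrho v_1 v_3^2 = v_1\,(s+p) + p\,v_1,$$
so that $w^T f_1 - f^{s}_1 = p\,v_1$, in agreement with (2.9).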
To examine the mathematical entropy conservation we contract the system of conservation laws (2.1) from the left with the entropy variables (2.6). By construction, and assuming continuity, the time derivative term becomes
$$w^{T}\frac{\partial u}{\partial t} = \frac{\partial s}{\partial t}.$$
The contracted flux terms, after many algebraic manipulations, yield
$$w^{T}\,\vec{\nabla}\cdot\overleftrightarrow{f} = \cdots = \vec{\nabla}\cdot\left(\vec{v}\left[\tfrac{\varrho}{2}\|\vec{v}\|^2 + \varrho e + p\right]\right) = \vec{\nabla}\cdot\bigl(\vec{v}\,[s+p]\bigr) = \vec{\nabla}\cdot\vec{f}^{\,s}.$$
Therefore, for smooth solutions contracting (2.1) into entropy space yields an additional conservation law for the total energy
(2.10) $$w^{T}\left(u_t + \vec{\nabla}\cdot\overleftrightarrow{f}\right) = 0 \quad\Longrightarrow\quad s_t + \vec{\nabla}\cdot\vec{f}^{\,s} = 0.$$
Generally, discontinuous solutions can develop for non-linear hyperbolic systems, regardless of their initial smoothness. In the presence of discontinuities, the mathematical entropy conservation law (2.10) becomes the entropy inequality [41]
$s_{t}+\vec{\nabla}\cdot\vec{f}^{\,s}\leq 0.$
Note, due to the form of the entropy fluxes (2.8) the mathematical entropy conservation law (2.10) has an identical form to the conservation of total energy from the adiabatic compressible Euler equations (1.1). This reinforces that the total energy becomes an auxiliary conserved quantity for polytropic gases.
2.3. Eigenstructure of the polytropic Euler equations
To close this section we investigate the eigenstructure of the polytropic Euler equations. We do so to demonstrate the hyperbolic character of the governing equations. Additionally, a detailed description of the eigenvalues and eigenvectors is needed to select a stable explicit time step
[11] as well as design operators that selectively add dissipation to the different propagating waves in the system, e.g. [46].
To simplify the eigenstructure discussion of the polytropic Euler system, we limit the investigation to one spatial dimension. This restriction simplifies the analysis and is done without loss of generality, because the spatial directions are decoupled and the polytropic Euler equations are rotationally invariant. To begin we state the one-dimensional form of (2.1)
$\mathbf{u}_{t}+\left(\mathbf{f}_{1}\right)_{x}=0,$
where we have
$\mathbf{u}=\left[\varrho,\,\varrho v_{1},\,\varrho v_{2},\,\varrho v_{3}\right]^{T},\qquad \mathbf{f}_{1}=\left[\varrho v_{1},\,\varrho v_{1}^{2}+p,\,\varrho v_{1}v_{2},\,\varrho v_{1}v_{3}\right]^{T}.$
We find the flux Jacobian matrix to be
(2.11) $A=\frac{\partial\mathbf{f}_{1}}{\partial\mathbf{u}}=\begin{bmatrix}0 & 1 & 0 & 0\\ a^{2}-v_{1}^{2} & 2v_{1} & 0 & 0\\ -v_{1}v_{2} & v_{2} & v_{1} & 0\\ -v_{1}v_{3} & v_{3} & 0 & v_{1}\end{bmatrix}.$
The eigenvalues, $\lambda$, of (2.11) are all real
(2.12) $\lambda_{1}=v_{1}-a,\quad \lambda_{2}=v_{1},\quad \lambda_{3}=v_{1},\quad \lambda_{4}=v_{1}+a.$
The eigenvalues are associated with a full set of right eigenvectors. A matrix of right eigenvectors is
(2.13) $R=\left[\,\mathbf{r}_{1}\,|\,\mathbf{r}_{2}\,|\,\mathbf{r}_{3}\,|\,\mathbf{r}_{4}\,\right]=\begin{bmatrix}1 & 0 & 0 & 1\\ v_{1}-a & 0 & 0 & v_{1}+a\\ v_{2} & 1 & 0 & v_{2}\\ v_{3} & 0 & 1 & v_{3}\end{bmatrix}.$
From the work of Barth [1], there exists a positive diagonal scaling matrix $Z$ that relates the right eigenvector matrix (2.13) to the entropy Jacobian matrix (2.7)
(2.14) $H=RZR^{T}.$
For the polytropic Euler equations this diagonal scaling matrix is
$Z=\operatorname{diag}\!\left(\frac{\varrho}{2a^{2}},\,\varrho,\,\varrho,\,\frac{\varrho}{2a^{2}}\right).$
We will revisit the eigenstructure of the polytropic Euler equations and this eigenvector scaling in Sect. 3.3 in order to derive an entropy stable numerical dissipation term.
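Before moving on, the eigenvector scaling (2.14) can be spot checked numerically; the snippet below (our own illustration, with arbitrary sample values for the state, $\gamma$, and $\kappa$) builds $H$, $R$, and $Z$ as stated above and confirms $H=RZR^{T}$.

```python
# Numerical check of H = R Z R^T for a sample admissible state.
import numpy as np

rho, v1, v2, v3 = 1.3, 0.4, -0.7, 0.2
gamma, kappa = 1.4, 1.0
p = kappa * rho**gamma
a2 = gamma * p / rho
a = np.sqrt(a2)

v = np.array([v1, v2, v3])
# Entropy Jacobian H from (2.7)
H = (rho / a2) * np.outer(np.r_[1.0, v], np.r_[1.0, v])
H[1:, 1:] += rho * np.eye(3)

# Right eigenvector matrix (2.13) and diagonal scaling Z
R = np.array([[1.0,    0.0, 0.0, 1.0],
              [v1 - a, 0.0, 0.0, v1 + a],
              [v2,     1.0, 0.0, v2],
              [v3,     0.0, 1.0, v3]])
Z = np.diag([rho / (2 * a2), rho, rho, rho / (2 * a2)])

print(np.allclose(H, R @ Z @ R.T))  # True
```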
3. Discrete entropy analysis, finite volume, and numerical fluxes
In this section we derive entropy conservative and entropy stable numerical flux functions for the polytropic Euler equations. This discrete analysis is performed in the context of finite volume schemes and follows closely the work of Tadmor [42]. The derivations for entropy conservative numerical flux functions and appropriate dissipation terms are straightforward, albeit algebraically involved. Therefore, we restrict the discussion to the one-dimensional version of the model for the sake of simplicity. As such, we suppress the subscript on the physical flux and simply write $\mathbf{f}\equiv\mathbf{f}_{1}$.
3.1. Finite volume discretization
Finite volume methods are a discretization technique particularly useful to approximate the solution of hyperbolic systems of conservation laws. The method is developed from the integral form of the equations [32]
$\int_{\Omega}\mathbf{u}_{t}\,\mathrm{d}\vec{x}+\int_{\partial\Omega}\mathbf{f}\cdot\vec{n}\,\mathrm{d}S=0,$
where $\vec{n}$ is the outward pointing normal vector. In one spatial dimension we divide the domain into non-overlapping cells
$\Omega_{i}=\left[x_{i-\frac{1}{2}},\,x_{i+\frac{1}{2}}\right],$
and the integral form of the equation on each cell reads
$\int_{x_{i-1/2}}^{x_{i+1/2}}\mathbf{u}_{t}\,\mathrm{d}x+\mathbf{f}\big(x_{i+\frac{1}{2}}\big)-\mathbf{f}\big(x_{i-\frac{1}{2}}\big)=0.$
The solution approximation is assumed to be a constant value within the volume. Then we determine the cell average value with, for example, a midpoint quadrature of the solution integral
$\int_{x_{i-1/2}}^{x_{i+1/2}}\mathbf{u}\,\mathrm{d}x\approx\int_{x_{i-1/2}}^{x_{i+1/2}}\mathbf{u}_{i}\,\mathrm{d}x=\mathbf{u}_{i}\,\Delta x_{i}.$
Due to the integral form of the finite volume scheme the solution is allowed to be discontinuous at the boundaries of the cells. To resolve this, we introduce a numerical flux, $\mathbf{f}^{*}$, [32, 43] which is a function of two solution states at a cell interface and returns a single flux value. For consistency, we require that
(3.1) $\mathbf{f}^{*}(\mathbf{q},\mathbf{q})=\mathbf{f}(\mathbf{q}),$
such that the numerical flux is equivalent to the physical flux when evaluated at two identical states.
The resulting finite volume spatial approximation takes the general form
(3.2) $\left(\mathbf{u}_{t}\right)_{i}+\frac{1}{\Delta x_{i}}\left(\mathbf{f}^{*}_{i+\frac{1}{2}}-\mathbf{f}^{*}_{i-\frac{1}{2}}\right)=0.$
This results in a set of temporal ordinary differential equations that can be integrated with an appropriate ODE solver, e.g., explicit Runge-Kutta methods.
To complete the spatial approximation (3.2) requires a suitable numerical flux function $\mathbf{f}^{*}$. Next, following the work of Tadmor, we will develop entropy conservative and entropy stable numerical fluxes for the polytropic Euler equations.
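To make the structure of (3.2) concrete, the following sketch (our own, not from the paper) advances the one-dimensional polytropic system restricted to the components $(\varrho,\varrho v_{1})$ with periodic boundaries; a plain local Lax-Friedrichs interface flux is used here only as a placeholder, and the entropy conservative/stable fluxes derived in Sect. 3.2-3.3 would simply replace it.

```python
# Minimal semi-discrete finite volume update (3.2) with a placeholder flux.
import numpy as np

gamma, kappa = 1.4, 1.0

def physical_flux(u):
    rho, m = u
    v = m / rho
    return np.array([m, m * v + kappa * rho**gamma])

def llf_flux(uL, uR):
    aL = np.sqrt(gamma * kappa * uL[0]**(gamma - 1))
    aR = np.sqrt(gamma * kappa * uR[0]**(gamma - 1))
    lam = max(abs(uL[1] / uL[0]) + aL, abs(uR[1] / uR[0]) + aR)
    return 0.5 * (physical_flux(uL) + physical_flux(uR)) - 0.5 * lam * (uR - uL)

def rhs(u, dx):
    """du/dt for cell averages u of shape (num_cells, 2), periodic mesh."""
    fstar = np.array([llf_flux(a, b) for a, b in zip(u, np.roll(u, -1, axis=0))])
    return -(fstar - np.roll(fstar, 1, axis=0)) / dx

# one explicit Euler step on a smooth density perturbation
nc = 64
x = (np.arange(nc) + 0.5) / nc
u = np.stack([1.0 + 0.1 * np.sin(2 * np.pi * x), np.zeros(nc)], axis=1)
u = u + 1e-3 * rhs(u, 1.0 / nc)
```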
3.2. Entropy conservative numerical flux
First, we develop the entropy conservative flux function that is valid for smooth solutions and acts as the baseline for the entropy stable numerical approximation. We assume left and right cell averages, denoted by $\mathbf{u}_{L}$ and $\mathbf{u}_{R}$, on uniform cells of size $\Delta x$ separated by a common interface. We discretize the one-dimensional system semi-discretely and derive an approximation for the fluxes at the interface between the two cells, i.e., at the common interface:
(3.3) $\Delta x\,\frac{\partial\mathbf{u}_{L}}{\partial t}=\mathbf{f}_{L}-\mathbf{f}^{*}\qquad\text{and}\qquad \Delta x\,\frac{\partial\mathbf{u}_{R}}{\partial t}=\mathbf{f}^{*}-\mathbf{f}_{R},$
where the adjacent states feature the physical fluxes $\mathbf{f}_{L}$, $\mathbf{f}_{R}$ and the numerical interface flux $\mathbf{f}^{*}$. We define the jump in a quantity across an interface by $[\![\cdot]\!]=(\cdot)_{R}-(\cdot)_{L}$.
Next, we contract (3.3) into entropy space to obtain the semi-discrete entropy update in each cell
(3.4) $\Delta x\,\frac{\partial s_{L}}{\partial t}=\mathbf{w}_{L}^{T}\left(\mathbf{f}_{L}-\mathbf{f}^{*}\right)\qquad\text{and}\qquad \Delta x\,\frac{\partial s_{R}}{\partial t}=\mathbf{w}_{R}^{T}\left(\mathbf{f}^{*}-\mathbf{f}_{R}\right),$
where we assume continuity in time such that $\mathbf{w}^{T}\mathbf{u}_{t}=s_{t}$.
Next, we add the contributions from each side of the interface in (3.4) to obtain the total entropy update
(3.5) $\Delta x\,\frac{\partial}{\partial t}\left(s_{L}+s_{R}\right)=[\![\mathbf{w}]\!]^{T}\mathbf{f}^{*}-[\![\mathbf{w}^{T}\mathbf{f}]\!].$
To ensure that the finite volume update satisfies the discrete entropy conservation law, the entropy flux of the finite volume discretization must coincide with the discrete entropy flux, i.e.,
$\Delta x\,\frac{\partial}{\partial t}\left(s_{L}+s_{R}\right)=-[\![f^{s}]\!].$
We use the linearity of the jump operator and rearrange to obtain the general entropy conservation condition of Tadmor [41]
(3.6) $[\![\mathbf{w}]\!]^{T}\mathbf{f}^{*}=[\![\Psi]\!],$
where we apply the definition of the entropy flux potential (2.9). The discrete entropy conservation condition (3.6) is a single constraint for a vector quantity. Thus, the form of the entropy conservative numerical flux is not unique. However, the resulting numerical flux form (3.6) must remain consistent (3.1).
To derive an entropy conservative flux we note the properties of the jump operator
(3.7) $[\![ab]\!]=\{\!\{a\}\!\}[\![b]\!]+\{\!\{b\}\!\}[\![a]\!],\qquad [\![a^{2}]\!]=2\{\!\{a\}\!\}[\![a]\!],$
where we introduce notation for the arithmetic mean
$\{\!\{\cdot\}\!\}=\frac{1}{2}\left((\cdot)_{R}+(\cdot)_{L}\right).$
For the numerical flux to remain applicable to either equation of state (2.2) we require a generalized average operator for the fluid density.
Definition 1 (γ-mean).
Assuming that $\varrho_{L}\neq\varrho_{R}$, a special average of the fluid density is
(3.8) $\{\!\{\varrho\}\!\}_{\gamma}:=\frac{[\![p(\varrho)]\!]}{\gamma\,[\![e(\varrho)]\!]}.$
We examine the evaluation of the average (3.8) for three cases substituting the appropriate forms of the pressure (2.2) and internal energy (2.4):
1. Polytropic ($\gamma\neq 1$) yields
(3.9) $\{\!\{\varrho\}\!\}_{\gamma}=\frac{\gamma-1}{\gamma}\,\frac{[\![\varrho^{\gamma}]\!]}{[\![\varrho^{\gamma-1}]\!]}.$
2. Isothermal ($\gamma=1$) where the special average becomes the logarithmic mean which also arises in the construction of entropy conservative fluxes for the adiabatic Euler equations [8, 27]
(3.10) $\{\!\{\varrho\}\!\}_{\gamma}=\varrho^{\ln}:=\frac{[\![\varrho]\!]}{[\![\ln\varrho]\!]}.$
3. Shallow water ($\gamma=2$) for which the special average reduces to the arithmetic mean, $\{\!\{\varrho\}\!\}_{\gamma}=\{\!\{\varrho\}\!\}$.
Remark 1.
The $\gamma$-mean (3.9) is a special case of the weighted Stolarsky mean, which serves as a generalization of the logarithmic mean [37]. It remains consistent when the left and right states are identical. Also, assuming without loss of generality that $\varrho_{L}\leq\varrho_{R}$, it is guaranteed that the value of the mean satisfies $\varrho_{L}\leq\{\!\{\varrho\}\!\}_{\gamma}\leq\varrho_{R}$ [37, 38].
Remark 2.
In practice, when the left and right fluid density values are close, there are numerical stability issues because the $\gamma$-mean (3.8) tends to a $0/0$ form. Therefore, we provide a numerically stable procedure to compute (3.8) in Appendix A.1.
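One possible guarded evaluation is sketched below; the quadratic series fallback about $\varrho_{L}=\varrho_{R}$ is our own expansion and is not necessarily the exact procedure of Appendix A.1, but it removes the $0/0$ form for nearly equal densities and reduces to the logarithmic mean when $\gamma=1$.

```python
# Guarded evaluation of the gamma-mean (3.8) with a series fallback.
import numpy as np

def gamma_mean(rho_L, rho_R, gamma, tol=1e-4):
    avg = 0.5 * (rho_L + rho_R)
    delta = (rho_R - rho_L) / (rho_R + rho_L)
    if abs(gamma - 1.0) < 1e-14:
        # isothermal limit: logarithmic mean, expanded for small jumps
        if abs(delta) < tol:
            return avg * (1.0 - delta**2 / 3.0)
        return (rho_R - rho_L) / (np.log(rho_R) - np.log(rho_L))
    if abs(delta) < tol:
        # {{rho}}_gamma ~= avg * (1 + (gamma - 2)/3 * delta^2 + O(delta^4))
        return avg * (1.0 + (gamma - 2.0) / 3.0 * delta**2)
    return ((gamma - 1.0) * (rho_R**gamma - rho_L**gamma)
            / (gamma * (rho_R**(gamma - 1.0) - rho_L**(gamma - 1.0))))

# direct formula for a visible jump, series fallback for a tiny jump
print(gamma_mean(1.0, 1.0 + 2e-3, 1.4), gamma_mean(1.0, 1.0 + 2e-5, 1.4))
```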
With Def. 1 and the discrete entropy conservation condition (3.6) we are equipped to derive an entropy conservative numerical flux function.
Theorem 1 (Entropy conservative flux).
From the discrete entropy conservation condition (3.6) we find a consistent, entropy conservative numerical flux
(3.11) $\mathbf{f}^{*,\mathrm{EC}}=\begin{bmatrix}\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\\ \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}^{2}+\{\!\{p\}\!\}\\ \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\{\!\{v_{2}\}\!\}\\ \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\{\!\{v_{3}\}\!\}\end{bmatrix}.$
Proof.
We first expand the right-hand-side of the entropy conservation condition (3.6)
(3.12) $[\![\Psi]\!]=[\![v_{1}p]\!]=\{\!\{v_{1}\}\!\}[\![p]\!]+\{\!\{p\}\!\}[\![v_{1}]\!],$
where we use the jump properties (3.7). Next, we expand the jump in the entropy variables. To do so, we revisit the form of the entropy variable as it changes depending on the equation of state
(3.13) $w_{1}=\begin{cases}e+\dfrac{\kappa\varrho^{\gamma}}{\varrho}-\dfrac{1}{2}\|\vec{v}\|^{2}=\dfrac{\kappa\varrho^{\gamma-1}}{\gamma-1}+\kappa\varrho^{\gamma-1}-\dfrac{1}{2}\|\vec{v}\|^{2}=\gamma e-\dfrac{1}{2}\|\vec{v}\|^{2}, & \text{polytropic},\\[1.2ex] e+\dfrac{c^{2}\varrho}{\varrho}-\dfrac{1}{2}\|\vec{v}\|^{2}=e+c^{2}-\dfrac{1}{2}\|\vec{v}\|^{2}, & \text{isothermal}.\end{cases}$
Taking the jump of the variable (3.13) we obtain
(3.14) $[\![w_{1}]\!]=\gamma[\![e]\!]-\tfrac{1}{2}[\![\|\vec{v}\|^{2}]\!]=\gamma[\![e]\!]-\{\!\{v_{1}\}\!\}[\![v_{1}]\!]-\{\!\{v_{2}\}\!\}[\![v_{2}]\!]-\{\!\{v_{3}\}\!\}[\![v_{3}]\!],$
because the values of $\kappa$, $\gamma$, and $c$ are constant. Note that in the isothermal case $\gamma=1$, so the jump of (3.14) has the same form regardless of the equation of state. Therefore, the total jump in the entropy variables is
(3.15) $[\![\mathbf{w}]\!]=\left(\gamma[\![e]\!]-\{\!\{v_{1}\}\!\}[\![v_{1}]\!]-\{\!\{v_{2}\}\!\}[\![v_{2}]\!]-\{\!\{v_{3}\}\!\}[\![v_{3}]\!],\;[\![v_{1}]\!],\;[\![v_{2}]\!],\;[\![v_{3}]\!]\right)^{T}.$
We combine the expanded condition (3.12), the jump in the entropy variables (3.15), and rearrange terms to find
(3.16) $f^{*}_{1}\left(\gamma[\![e]\!]-\{\!\{v_{1}\}\!\}[\![v_{1}]\!]-\{\!\{v_{2}\}\!\}[\![v_{2}]\!]-\{\!\{v_{3}\}\!\}[\![v_{3}]\!]\right)+f^{*}_{2}[\![v_{1}]\!]+f^{*}_{3}[\![v_{2}]\!]+f^{*}_{4}[\![v_{3}]\!]=\{\!\{v_{1}\}\!\}[\![p]\!]+\{\!\{p\}\!\}[\![v_{1}]\!].$
To determine the first flux component we find
(3.17) $f^{*}_{1}=\{\!\{v_{1}\}\!\}\frac{[\![p]\!]}{\gamma[\![e]\!]}=\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\},$
from the $\gamma$-mean in Def. 1. The expanded flux condition (3.16) is rewritten into linear jump components. We gather the like terms of each jump component to facilitate the construction of the remaining flux components:
(3.18) $[\![v_{1}]\!]:\; f^{*}_{2}-\{\!\{v_{1}\}\!\}f^{*}_{1}=\{\!\{p\}\!\},\qquad [\![v_{2}]\!]:\; f^{*}_{3}-\{\!\{v_{2}\}\!\}f^{*}_{1}=0,\qquad [\![v_{3}]\!]:\; f^{*}_{4}-\{\!\{v_{3}\}\!\}f^{*}_{1}=0.$
Now, it is straightforward to solve the expressions in (3.18) and find
(3.19) $f^{*}_{2}=\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}^{2}+\{\!\{p\}\!\},\qquad f^{*}_{3}=\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\{\!\{v_{2}\}\!\},\qquad f^{*}_{4}=\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\{\!\{v_{3}\}\!\}.$
If we assume the left and right states are identical in (3.17) and (3.19) it is straightforward to verify that the numerical flux is consistent from its form and the properties of the $\gamma$-mean.∎
Remark 3.
There are two values of $\gamma$ that change the form of the entropy conservative flux (3.11):
1. Isothermal case ($\gamma=1$): The numerical flux becomes
$\mathbf{f}^{*,\mathrm{EC}}=\left[\varrho^{\ln}\{\!\{v_{1}\}\!\},\;\varrho^{\ln}\{\!\{v_{1}\}\!\}^{2}+c^{2}\{\!\{\varrho\}\!\},\;\varrho^{\ln}\{\!\{v_{1}\}\!\}\{\!\{v_{2}\}\!\},\;\varrho^{\ln}\{\!\{v_{1}\}\!\}\{\!\{v_{3}\}\!\}\right]^{T},$
where the fluid density is computed with the logarithmic mean (3.10) just as in the adiabatic case [8, 27].
2. Shallow water case ($\gamma=2$): Here the numerical flux simplifies to become
$\mathbf{f}^{*,\mathrm{EC}}=\left[\{\!\{\varrho\}\!\}\{\!\{v_{1}\}\!\},\;\{\!\{\varrho\}\!\}\{\!\{v_{1}\}\!\}^{2}+\tfrac{g}{2}\{\!\{\varrho^{2}\}\!\},\;\{\!\{\varrho\}\!\}\{\!\{v_{1}\}\!\}\{\!\{v_{2}\}\!\}\right]^{T},$
where the velocity component in the third spatial direction, $v_{3}$, is ignored due to the assumptions of the shallow water equations [44]. If we let the fluid density be denoted as the water height, $h$, and take $\kappa=g/2$ where $g$ is the gravitational constant, then we recover the entropy conservative numerical flux function originally developed for the shallow water equations by Fjordholm et al. [17].
Remark 4 (Multi-dimensional entropy conservative fluxes).
The derivation of entropy conservative numerical fluxes in the other spatial directions is very similar to that shown in Thm. 1. So, we present the three-dimensional entropy conservative fluxes in the $y$ and $z$ directions in Appendix B.
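For concreteness, a sketch of the one-dimensional entropy conservative flux of Theorem 1 is given below (our own illustration); it uses arithmetic means for the velocities and the pressure and the $\gamma$-mean for the density, here in a simplified form that assumes $\gamma\neq 1$ and falls back to the arithmetic mean for (nearly) identical densities; see the earlier sketch for a more careful evaluation.

```python
# Sketch of the x-direction entropy conservative flux (3.11).
import numpy as np

def gamma_mean(rL, rR, gamma):
    # simplified gamma-mean, valid for gamma != 1
    if np.isclose(rL, rR):
        return 0.5 * (rL + rR)
    return ((gamma - 1.0) * (rR**gamma - rL**gamma)
            / (gamma * (rR**(gamma - 1.0) - rL**(gamma - 1.0))))

def ec_flux(uL, uR, gamma, kappa):
    rhoL, rhoR = uL[0], uR[0]
    vL, vR = uL[1:] / rhoL, uR[1:] / rhoR
    v_avg = 0.5 * (vL + vR)                            # {{v_i}}
    p_avg = 0.5 * kappa * (rhoL**gamma + rhoR**gamma)  # {{p}}
    rho_g = gamma_mean(rhoL, rhoR, gamma)              # {{rho}}_gamma
    f1 = rho_g * v_avg[0]
    return np.array([f1,
                     f1 * v_avg[0] + p_avg,
                     f1 * v_avg[1],
                     f1 * v_avg[2]])

# consistency (3.1): identical states return the physical flux
u = np.array([1.2, 0.3, -0.12, 0.06])
print(ec_flux(u, u, 1.4, 1.0))
```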
3.3. Entropy stable numerical flux
As previously mentioned, the solution of hyperbolic conservation laws can contain or develop discontinuities regardless of the smoothness of the initial conditions [14]. In this case, a numerical approximation that is entropy conservative is no longer physical and should account for the dissipation of entropy near discontinuities. Such a numerical method is deemed entropy stable, e.g., [16, 41]. To create an entropy stable numerical flux function we begin with a general form
(3.20) $\mathbf{f}^{*,\mathrm{ES}}=\mathbf{f}^{*,\mathrm{EC}}-\frac{1}{2}\mathbf{D}\,[\![\mathbf{u}]\!],$
where $\mathbf{D}$ is a symmetric positive definite dissipation matrix. An immediate issue arises when we contract the entropy stable flux (3.20) into entropy space. We must guarantee that the dissipation term possesses the correct sign [47]; however, contracting (3.20) with the jump in entropy variables gives
(3.21) $[\![\mathbf{w}]\!]^{T}\mathbf{f}^{*,\mathrm{ES}}=[\![\Psi]\!]-\frac{1}{2}[\![\mathbf{w}]\!]^{T}\mathbf{D}\,[\![\mathbf{u}]\!].$
So, there is a mixture of entropy and conservative variable jumps in the dissipation term that must be guaranteed positive to ensure that entropy is dissipated correctly. In general, it is unclear how to guarantee positivity of the dissipation term in (3.21) as required for entropy stability [1]. To remedy this issue we rewrite $[\![\mathbf{u}]\!]$ in terms of $[\![\mathbf{w}]\!]$. This is possible due to the one-to-one variable mapping between conservative and entropy space as we know that
$\frac{\partial\mathbf{u}}{\partial x}=H\frac{\partial\mathbf{w}}{\partial x}.$
For the discrete case we wish to recover a particular average evaluation of the entropy Jacobian (2.7) at a cell interface such that
(3.22) $[\![\mathbf{u}]\!]=\hat{H}\,[\![\mathbf{w}]\!].$
To generate a discrete entropy Jacobian that satisfies (3.22) we need a specially designed average for the square of the sound speed.
Definition 2 (Average square sound speed).
A special average for the sound speed squared is
(3.23) $\overline{a^{2}}:=\frac{[\![p]\!]}{[\![\varrho]\!]}.$
Remark 5.
The average (3.23) is consistent. To demonstrate this, consider the polytropic equation of state from (2.2) and take $\varrho_{L}=\varrho$ and $\varrho_{R}=\varrho+\varepsilon$ so that, in the limit $\varepsilon\to 0$,
$\overline{a^{2}}=\frac{\kappa[\![\varrho^{\gamma}]\!]}{[\![\varrho]\!]}\;\to\;\kappa\gamma\varrho^{\gamma-1}=\frac{\gamma p}{\varrho}=a^{2}.$
Remark 6.
Again examining the special values of $\gamma$ we find:
1. Isothermal ($\gamma=1$) gives
$\overline{a^{2}}=\frac{c^{2}[\![\varrho]\!]}{[\![\varrho]\!]}=c^{2},$
as $c$ is a constant.
2. Shallow water ($\gamma=2$) yields
$\overline{a^{2}}=\frac{g}{2}\,\frac{[\![\varrho^{2}]\!]}{[\![\varrho]\!]}=g\{\!\{\varrho\}\!\},$
where, again, we take $\kappa=g/2$ and apply a property of the jump operator (3.7). Denoting the fluid density as the water height, $h$, we recover an average of the wave celerity for the shallow water model [17].
Remark 7.
Just as with the $\gamma$-mean, the sound speed average (3.23) exhibits numerical stability issues for the polytropic case when the fluid density values are close. Therefore, we present a numerically stable procedure to evaluate (3.23) in Appendix A.2.
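Assuming the form of (3.23) reconstructed above, one simple guarded evaluation (our own choice, not necessarily the procedure of Appendix A.2) is to fall back to the pointwise value $\gamma p(\{\!\{\varrho\}\!\})/\{\!\{\varrho\}\!\}$ when the density jump is tiny:

```python
# Guarded evaluation of the average sound speed squared for a polytropic gas.
import numpy as np

def a2_mean(rho_L, rho_R, gamma, kappa, tol=1e-8):
    p = lambda rho: kappa * rho**gamma
    if abs(rho_R - rho_L) < tol * max(rho_L, rho_R):
        rho = 0.5 * (rho_L + rho_R)
        return gamma * p(rho) / rho            # consistent limit gamma*p/rho
    return (p(rho_R) - p(rho_L)) / (rho_R - rho_L)

# shallow water check (gamma = 2, kappa = g/2 with g = 1): recovers g * {{h}}
print(a2_mean(1.0, 1.4, 2.0, 0.5), a2_mean(1.2, 1.2, 2.0, 0.5))
```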
Lemma 1 (Discrete entropy Jacobian evaluation).
If the entropy Jacobian is evaluated with the average states
(3.24) $\hat{H}=\frac{1}{\overline{a^{2}}}\begin{bmatrix}\{\!\{\varrho\}\!\}_{\gamma} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{2}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{3}\}\!\}\\ \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}^{2}+\overline{a^{2}}\{\!\{\varrho\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\{\!\{v_{2}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\{\!\{v_{3}\}\!\}\\ \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{2}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\{\!\{v_{2}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{2}\}\!\}^{2}+\overline{a^{2}}\{\!\{\varrho\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{2}\}\!\}\{\!\{v_{3}\}\!\}\\ \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{3}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}\{\!\{v_{3}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{2}\}\!\}\{\!\{v_{3}\}\!\} & \{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{3}\}\!\}^{2}+\overline{a^{2}}\{\!\{\varrho\}\!\}\end{bmatrix},$
then it is possible to relate the jump in conservative variables in terms of the jump in entropy variables by
(3.25) $[\![\mathbf{u}]\!]=\hat{H}\,[\![\mathbf{w}]\!].$
Proof.
We demonstrate how to obtain the first row of the discrete matrix $\hat{H}$. From the condition (3.22) we see that
(3.26) $[\![\varrho]\!]=\hat{H}_{11}[\![w_{1}]\!]+\hat{H}_{12}[\![w_{2}]\!]+\hat{H}_{13}[\![w_{3}]\!]+\hat{H}_{14}[\![w_{4}]\!].$
To determine the first entry of the matrix we apply the definition of the $\gamma$-mean (3.8) and the sound speed average (3.23) to obtain
$\hat{H}_{11}=\frac{[\![\varrho]\!]}{\gamma[\![e]\!]}=\{\!\{\varrho\}\!\}_{\gamma}\frac{[\![\varrho]\!]}{[\![p]\!]}=\frac{\{\!\{\varrho\}\!\}_{\gamma}}{\overline{a^{2}}}.$
The remaining components in the first row of $\hat{H}$ from (3.26) are
$\hat{H}_{12}=\frac{\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}}{\overline{a^{2}}},\qquad \hat{H}_{13}=\frac{\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{2}\}\!\}}{\overline{a^{2}}},\qquad \hat{H}_{14}=\frac{\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{3}\}\!\}}{\overline{a^{2}}}.$
Repeating this process we obtain the remaining unknown components in the relation (3.22) and arrive at the discrete entropy Jacobian (3.24).∎
Next, we select the dissipation matrix to be a discrete evaluation of the eigendecomposition of the flux Jacobian (2.11)
(3.27) $\mathbf{D}=\hat{R}\,|\hat{\Lambda}|\,\hat{R}^{-1},$
where $\hat{R}$ is a discrete evaluation of the matrix of right eigenvectors (2.13) and $|\hat{\Lambda}|$ is a diagonal matrix containing the absolute values of the eigenvalues (2.12). From the discrete entropy Jacobian (3.24), we seek a right eigenvector matrix $\hat{R}$ and a diagonal scaling matrix $\hat{Z}$ such that a discrete version of the eigenvector scaling (2.14)
(3.28) $\hat{H}\overset{!}{=}\hat{R}\hat{Z}\hat{R}^{T},$
holds whenever possible.
Lemma 2 (Discrete eigenvector and scaling matrices).
If we evaluate the right eigenvector and diagonal scaling matrices as
(3.29) $\hat{R}=\begin{bmatrix}1 & 0 & 0 & 1\\ \{\!\{v_{1}\}\!\}-\bar{a} & 0 & 0 & \{\!\{v_{1}\}\!\}+\bar{a}\\ \{\!\{v_{2}\}\!\} & 1 & 0 & \{\!\{v_{2}\}\!\}\\ \{\!\{v_{3}\}\!\} & 0 & 1 & \{\!\{v_{3}\}\!\}\end{bmatrix},\qquad \hat{Z}=\operatorname{diag}\!\left(\frac{\{\!\{\varrho\}\!\}_{\gamma}}{2\overline{a^{2}}},\,\{\!\{\varrho\}\!\}_{\gamma},\,\{\!\{\varrho\}\!\}_{\gamma},\,\frac{\{\!\{\varrho\}\!\}_{\gamma}}{2\overline{a^{2}}}\right),\qquad \bar{a}=\sqrt{\overline{a^{2}}},$
then we obtain the relation
(3.30) $\hat{H}\simeq\hat{R}\hat{Z}\hat{R}^{T},$
where equality holds everywhere except for the second, third, and fourth diagonal entries.
Proof.
The procedure to determine the discrete evaluation of the matrices and is similar to that taken by Winters et al. [46]. We relate the individual entries of to those in and determine the 16 individual components of the matrices. We explicitly demonstrate two computations to outline the general technique and qualify the average states inserted in the final form.
We begin by computing the first entry of the first row of the system that should satisfy
$\big(\hat{R}\hat{Z}\hat{R}^{T}\big)_{11}=\hat{Z}_{11}+\hat{Z}_{44}\overset{!}{=}\hat{H}_{11}=\frac{\{\!\{\varrho\}\!\}_{\gamma}}{\overline{a^{2}}}.$
This leads to two entries of the diagonal scaling matrix
$\hat{Z}_{11}=\hat{Z}_{44}=\frac{\{\!\{\varrho\}\!\}_{\gamma}}{2\,\overline{a^{2}}}.$
The second computation is to determine the second entry of the second row of the system given by
$\hat{Z}_{11}\hat{R}_{21}^{2}+\hat{Z}_{44}\hat{R}_{24}^{2}\overset{!}{=}\hat{H}_{22}=\frac{\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}^{2}}{\overline{a^{2}}}+\{\!\{\varrho\}\!\}.$
It is clear that we must select $\{\!\{v_{1}\}\!\}-\bar{a}$ and $\{\!\{v_{1}\}\!\}+\bar{a}$ in the second row of the right eigenvector matrix $\hat{R}$. Unfortunately, just as in the ideal MHD case, we cannot enforce strict equality between the continuous and the discrete entropy scaling analysis [13, 46] and find
$\hat{Z}_{11}\hat{R}_{21}^{2}+\hat{Z}_{44}\hat{R}_{24}^{2}=\frac{\{\!\{\varrho\}\!\}_{\gamma}\{\!\{v_{1}\}\!\}^{2}}{\overline{a^{2}}}+\{\!\{\varrho\}\!\}_{\gamma}\approx\hat{H}_{22}.$
We apply this same process to the remaining unknown portions from the condition (3.28) and, after many algebraic manipulations, determine a unique averaging procedure for the discrete eigenvector and scaling matrices (3.29). The derivations were aided and verified using the symbolic algebra software Maxima [33]. ∎
Remark 8.
In a similar fashion from [46] we determine the discrete diagonal matrix of eigenvalues
$|\hat{\Lambda}|=\operatorname{diag}\!\left(\left|\{\!\{v_{1}\}\!\}-\bar{a}\right|,\,\left|\{\!\{v_{1}\}\!\}\right|,\,\left|\{\!\{v_{1}\}\!\}\right|,\,\left|\{\!\{v_{1}\}\!\}+\bar{a}\right|\right).$
Now, we have a complete discrete description of the entropy stable numerical flux function from (3.20)
$\mathbf{f}^{*,\mathrm{ES}}=\mathbf{f}^{*,\mathrm{EC}}-\frac{1}{2}\hat{R}|\hat{\Lambda}|\hat{R}^{-1}[\![\mathbf{u}]\!]=\mathbf{f}^{*,\mathrm{EC}}-\frac{1}{2}\hat{R}|\hat{\Lambda}|\hat{R}^{-1}\hat{H}[\![\mathbf{w}]\!]\approx\mathbf{f}^{*,\mathrm{EC}}-\frac{1}{2}\hat{R}|\hat{\Lambda}|\hat{Z}\hat{R}^{T}[\![\mathbf{w}]\!],$
where we use (3.25) and (3.30).
This leads to the main result of this section.
Theorem 2 (Entropy stable flux).
If we select the numerical flux function
(3.31) $\mathbf{f}^{*,\mathrm{ES}}=\mathbf{f}^{*,\mathrm{EC}}-\frac{1}{2}\hat{R}|\hat{\Lambda}|\hat{Z}\hat{R}^{T}[\![\mathbf{w}]\!]$
in the discrete entropy update (3.5) then the method is guaranteed to dissipate entropy with the correct sign.
Proof.
To begin we restate the discrete evolution of the entropy at a single interface where we insert the newly derived entropy stable flux (3.31)
$\Delta x\,\frac{\partial}{\partial t}\left(s_{L}+s_{R}\right)=[\![\mathbf{w}]\!]^{T}\mathbf{f}^{*,\mathrm{ES}}-[\![\mathbf{w}^{T}\mathbf{f}]\!]=[\![\mathbf{w}]\!]^{T}\mathbf{f}^{*,\mathrm{EC}}-[\![\mathbf{w}^{T}\mathbf{f}]\!]-\frac{1}{2}[\![\mathbf{w}]\!]^{T}\hat{R}|\hat{\Lambda}|\hat{Z}\hat{R}^{T}[\![\mathbf{w}]\!].$
From the construction of the entropy conservative flux function from Thm. 1 we know that
$[\![\mathbf{w}]\!]^{T}\mathbf{f}^{*,\mathrm{EC}}=[\![\Psi]\!]=[\![\mathbf{w}^{T}\mathbf{f}]\!]-[\![f^{s}]\!].$
So, (3.5) becomes
$\Delta x\,\frac{\partial}{\partial t}\left(s_{L}+s_{R}\right)=-[\![f^{s}]\!]-\frac{1}{2}[\![\mathbf{w}]\!]^{T}\hat{R}|\hat{\Lambda}|\hat{Z}\hat{R}^{T}[\![\mathbf{w}]\!]\leq-[\![f^{s}]\!],$
because the matrices $|\hat{\Lambda}|$ and $\hat{Z}$ are symmetric positive (semi-)definite, the dissipation term is a quadratic form in entropy space and guarantees a non-positive contribution. ∎
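The sign argument in the proof can also be illustrated numerically: for any eigenvector matrix, a non-negative diagonal $|\hat{\Lambda}|$, and a positive diagonal $\hat{Z}$, the operator $\hat{R}|\hat{\Lambda}|\hat{Z}\hat{R}^{T}$ is symmetric positive semi-definite, so the contracted dissipation term can never produce entropy. The snippet below (our own illustration with random stand-in matrices, not the averaged matrices of Lemma 2) demonstrates this.

```python
# The dissipation operator R |Lambda| Z R^T yields a non-negative quadratic form.
import numpy as np

rng = np.random.default_rng(0)
R_hat = rng.normal(size=(4, 4))                 # stand-in eigenvector matrix
Lam_abs = np.diag(np.abs(rng.normal(size=4)))   # |Lambda_hat| >= 0
Z_hat = np.diag(rng.uniform(0.1, 2.0, size=4))  # Z_hat > 0
D_w = R_hat @ Lam_abs @ Z_hat @ R_hat.T         # symmetric: diagonal factors commute

jumps = rng.normal(size=(1000, 4))              # random entropy-variable jumps
quad = np.einsum('ni,ij,nj->n', jumps, D_w, jumps)
print(quad.min() >= -1e-12)                     # True: never (numerically) negative
```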
4. Extension of ES scheme to discontinuous Galerkin numerical approximation
In this section, we extend the entropy conservative/stable finite volume numerical fluxes to higher spatial order by building them into a nodal discontinuous Galerkin (DG) spectral element method. We provide an abbreviated presentation of the entropy stable DG framework, but complete details can be found in [24]. For simplicity we restrict the discussion to uniform Cartesian elements; however, the extension to curvilinear elements is straightforward [24, Appendix B].
First, we subdivide the physical domain into non-overlapping Cartesian elements. Each element is then transformed with a linear mapping into the reference coordinates $\vec{\xi}=(\xi,\eta,\zeta)$ on the reference element $E_{0}=[-1,1]^{3}$ [24]. As we restrict to Cartesian meshes, the Jacobian and metric terms are simply
$J=\frac{1}{8}\Delta x\Delta y\Delta z,\qquad X_{\xi}=\frac{1}{2}\Delta x,\qquad Y_{\eta}=\frac{1}{2}\Delta y,\qquad Z_{\zeta}=\frac{1}{2}\Delta z,$
with element side lengths $\Delta x$, $\Delta y$, and $\Delta z$.
For each element, we approximate the components of the state vector, the flux vectors, etc. with polynomials of degree $N$ in each spatial direction. The polynomial approximations are denoted with capital letters, e.g. $\mathbf{U}\approx\mathbf{u}$. Here we consider the construction of a nodal discontinuous Galerkin spectral element method (DGSEM), in which the polynomials are written in terms of Lagrange basis functions, e.g. $\ell_{i}(\xi)$ with $i=0,\ldots,N$, that interpolate at the Legendre-Gauss-Lobatto (LGL) nodes [28].
The DG method is built from the weak form of the conservation law (2.1) where we multiply by a test function $\varphi$ and integrate over the reference element
(4.1) $\int_{E_{0}}\left(J\mathbf{U}_{t}+\vec{\nabla}_{\xi}\cdot\overleftrightarrow{\tilde{\mathbf{F}}}\right)\varphi\,\mathrm{d}\vec{\xi}=0,$
where derivatives are now taken in the reference coordinates and we introduce the contravariant fluxes
$\tilde{\mathbf{F}}^{1}=Y_{\eta}Z_{\zeta}\,\mathbf{F}^{1},\qquad \tilde{\mathbf{F}}^{2}=X_{\xi}Z_{\zeta}\,\mathbf{F}^{2},\qquad \tilde{\mathbf{F}}^{3}=X_{\xi}Y_{\eta}\,\mathbf{F}^{3}.$
Note that no continuity of the approximate solution or of the fluxes is assumed across element boundaries.
We select the test function to be the tensor product basis $\varphi_{ijk}=\ell_{i}(\xi)\,\ell_{j}(\eta)\,\ell_{k}(\zeta)$ for $i,j,k=0,\ldots,N$.
for . The integrals in (4.1) are approximated with LGL quadrature and we collocate the quadrature nodes with the interpolation nodes. This exploits that the Lagrange basis functions are discretely orthogonal and simplifies the nodal DG approximation [28]. The integral approximations introduce the mass matrix
M=diag(ω0,…,ωN),
where $\omega_{i}$, $i=0,\ldots,N$, are the LGL quadrature weights. In addition to the discrete integration matrix, the polynomial basis functions and the interpolation nodes form a discrete polynomial derivative matrix
(4.2) $D_{ij}=\left.\frac{\partial\ell_{j}}{\partial\xi}\right|_{\xi=\xi_{i}},\qquad i,j=0,\ldots,N,$
where $\xi_{i}$ are the LGL nodes. The derivative matrix (4.2) is special as it satisfies the summation-by-parts (SBP) property for all polynomial orders [22]
$MD+(MD)^{T}=B=\operatorname{diag}(-1,0,\ldots,0,1).$
The SBP property is a discrete equivalent of integration-by-parts that is a crucial component to develop high-order entropy conservative/stable numerical approximations, e.g. [15, 16].
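As a concrete illustration (our own, for the smallest nontrivial case $N=2$), the SBP property can be verified directly with the LGL nodes $\{-1,0,1\}$, their weights $\{1/3,4/3,1/3\}$, and the corresponding derivative matrix.

```python
# Check M D + (M D)^T = B for N = 2 LGL nodes.
import numpy as np

M = np.diag([1/3, 4/3, 1/3])                 # LGL quadrature weights
D = np.array([[-1.5,  2.0, -0.5],            # D_ij = dl_j/dxi at xi_i
              [-0.5,  0.0,  0.5],
              [ 0.5, -2.0,  1.5]])
B = np.diag([-1.0, 0.0, 1.0])

Q = M @ D
print(np.allclose(Q + Q.T, B))               # True
```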
We apply the SBP property once to generate boundary contributions in the approximation of (4.1). Just like in the finite volume method, we resolve the discontinuity across element boundaries with a numerical surface flux function, $\mathbf{F}^{*}(\mathbf{U}_{L},\mathbf{U}_{R};\hat{n})$, where $\mathbf{U}_{L}$ and $\mathbf{U}_{R}$ are the two states at an interface and $\hat{n}$ is the normal vector in reference space. We apply the SBP property again to move discrete derivatives back onto the fluxes inside the volume. Further, we introduce a two-point numerical volume flux, $\mathbf{F}^{\#}$, that is consistent (3.1) and symmetric with respect to its arguments, e.g. [24]. These steps produce a semi-discrete split form DG approximation
(4.3)
for $i,j,k=0,\ldots,N$.
To create an entropy aware high-order DG approximation we select the numerical surface and volume fluxes to be those from the finite volume context [4, 24].
Two variants of the split form DG scheme (4.3) are of interest for the polytropic Euler equations:
1. Entropy conservative DG approximation: Select both $\mathbf{F}^{*}$ and $\mathbf{F}^{\#}$ to be the entropy conservative fluxes developed in Sect. 3.2.
2. Entropy stable DG approximation: Take $\mathbf{F}^{\#}$ to be the entropy conservative fluxes from Sect. 3.2 and $\mathbf{F}^{*}$ to be the entropy stable fluxes from Sect. 3.3.
5. Numerical results
We present numerical tests to validate the theoretical findings of the previous sections for an entropy conservative/stable DG spectral element approximation. To do so, we perform the numerical tests in a two-dimensional Cartesian domain. We subdivide the domain into non-overlapping, uniform Cartesian elements such that the DG approximation takes the form presented in Sect. 4. The semi-discrete scheme (4.3) is integrated in time with the explicit five-stage, fourth order low storage Runge-Kutta method of Carpenter and Kennedy [5]. A stable time step is computed according to an adjustable coefficient $\mathrm{CFL}$, the local maximum wave speed $\lambda_{\max}$, and the relative grid size, e.g. [21]. For uniform Cartesian meshes the explicit time step is selected by
$\Delta t:=\mathrm{CFL}\,\frac{\Delta x}{\lambda_{\max}\,(2N+1)}.$
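A direct transcription of this time step restriction might look as follows (our own sketch; the wave speed is taken as the largest $|v_{1}|+a$ over the solution nodes and the CFL coefficient is user-chosen).

```python
# Explicit time step from the CFL-type restriction above.
import numpy as np

def compute_dt(rho, v1, gamma, kappa, dx, N, CFL=0.5):
    a = np.sqrt(gamma * kappa * rho**(gamma - 1.0))   # a^2 = gamma * p / rho
    lam_max = np.max(np.abs(v1) + a)
    return CFL * dx / (lam_max * (2 * N + 1))

rho = np.array([1.0, 1.2, 0.9])
v1 = np.array([0.1, -0.3, 0.2])
print(compute_dt(rho, v1, gamma=1.4, kappa=1.0, dx=0.05, N=3))
```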
First, we will verify the high-order spatial accuracy of the DG scheme with the method of manufactured solutions. For this we assume the solution to the polytropic Euler equations takes the form
(5.1) $\mathbf{u}=\left[h,\,\tfrac{1}{2}h,\,\tfrac{3}{2}h\right]^{T}\quad\text{with}\quad h(x,y,t)=8+\cos(2\pi x)\sin(2\pi y)\cos(2\pi t).$
This introduces an additional residual term on the right hand side of (2.1) that reads
(5.2) $\mathbf{r}=\begin{bmatrix}h_{t}+\frac{1}{2}h_{x}+\frac{3}{2}h_{y}\\[0.5ex] \frac{1}{2}h_{t}+\frac{1}{4}h_{x}+b\,\varrho_{x}+\frac{3}{4}h_{y}\\[0.5ex] \frac{3}{2}h_{t}+\frac{3}{4}h_{x}+\frac{9}{4}h_{y}+b\,\varrho_{y}\end{bmatrix},\qquad b=\begin{cases}\kappa\gamma\varrho^{\gamma-1}, & \text{polytropic},\\ c^{2}, & \text{isothermal}.\end{cases}$
Note that the residual term (5.2) depends on the equation of state through the factor $b$.
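The residual can be regenerated symbolically; the sympy sketch below (our own, for the polytropic pressure law) inserts (5.1) into the two-dimensional system and confirms that the momentum residual has the structure shown in (5.2) with $b=\mathrm{d}p/\mathrm{d}\varrho$.

```python
# Manufactured-solution residual for (5.1) in the 2D polytropic system.
import sympy as sp

x, y, t, kappa, gam = sp.symbols('x y t kappa gamma', positive=True)
h = 8 + sp.cos(2 * sp.pi * x) * sp.sin(2 * sp.pi * y) * sp.cos(2 * sp.pi * t)

rho, m1, m2 = h, h / 2, 3 * h / 2        # u = [h, h/2, 3h/2]^T
p = kappa * rho**gam

r1 = sp.diff(rho, t) + sp.diff(m1, x) + sp.diff(m2, y)
r2 = sp.diff(m1, t) + sp.diff(m1**2 / rho + p, x) + sp.diff(m1 * m2 / rho, y)
r3 = sp.diff(m2, t) + sp.diff(m1 * m2 / rho, x) + sp.diff(m2**2 / rho + p, y)

expected_r2 = (sp.diff(h, t) / 2 + sp.Rational(1, 4) * sp.diff(h, x)
               + sp.diff(p, x) + sp.Rational(3, 4) * sp.diff(h, y))
print(sp.simplify(r2 - expected_r2))     # prints 0
```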
The second test will demonstrate the entropic properties of the DG approximation. To do so, we use a discontinuous initial condition
(5.3) $\mathbf{u}=\begin{cases}\left[1.2,\,0.1,\,0.0\right]^{T}, & x\leq y,\\ \left[1.0,\,0.2,\,-0.4\right]^{T}, & x>y.\end{cases}$
To measure the discrete entropy conservation of the DG approximation we examine the entropy residual of the numerical scheme [20]. To compute the discrete entropy growth, (4.3) is rewritten to be
(5.4) $J\left(\mathbf{U}_{t}\right)_{ij}+\mathrm{Res}(\mathbf{U})_{ij}=0,$
where $J$ is the element Jacobian and
(5.5) $\mathrm{Res}(\mathbf{U})_{ij}$
denotes the spatial residual, i.e., the volume and surface flux contributions of the split form DG approximation (4.3).
The growth in discrete entropy is computed by contracting (5.4) with the entropy variables (2.6)
$J\mathbf{W}_{ij}^{T}\left(\mathbf{U}_{t}\right)_{ij}=-\mathbf{W}_{ij}^{T}\mathrm{Res}(\mathbf{U})_{ij}\;\Leftrightarrow\; J\left(S_{t}\right)_{ij}=-\mathbf{W}_{ij}^{T}\mathrm{Res}(\mathbf{U})_{ij},$
where we apply the definition of the entropy variables to obtain the temporal derivative of the entropy, $(S_{t})_{ij}$, at each LGL node. The DG approximation is entropy conservative when the two-point finite volume flux from Theorem 1 is taken to be the interface (and volume) flux in (5.5). This means that
$\sum_{\nu=1}^{N_{\mathrm{el}}}J_{\nu}\sum_{i=0}^{N}\sum_{j=0}^{N}\omega_{i}\omega_{j}\left(S_{t}\right)_{ij}=0,$
should hold for all time. We numerically verify this property by computing the integrated residual over the domain
(5.6) $I_{S_{t}}=-\sum_{\nu=1}^{N_{\mathrm{el}}}J_{\nu}\sum_{i=0}^{N}\sum_{j=0}^{N}\omega_{i}\omega_{j}\mathbf{W}_{ij}^{T}\mathrm{Res}(\mathbf{U})_{ij},$
and demonstrate that (5.6) is on the order of machine precision for the discontinuous initial condition (5.3). If interface dissipation is included, like that described in Sect. 3.3, the DG approximation is entropy stable and (5.6) becomes
$I_{S_{t}}\leq 0.$
We consider two particular values of $\gamma$ in these numerical studies: Sect. 5.1 presents results for the isothermal Euler equations, $\gamma=1$, and Sect. 5.2 contains results for the polytropic Euler equations with $\gamma>1$. We forgo presenting entropy conservative/stable DG numerical results for the shallow water variant ($\gamma=2$) as they can be found elsewhere in the literature, e.g. [45].
5.1. Isothermal flow
Here we take $\gamma=1$ so that the pressure law is $p=c^{2}\varrho$ with a constant speed of sound $c$.
5.1.1. Convergence
We use the manufactured solution (5.1) and additional residual (5.2) to investigate the accuracy of the DG approximation for two polynomial orders, one odd and one even. Further, we examine the convergence rates of the entropy conservative DGSEM, where both the volume and surface fluxes are the entropy conservative flux from Theorem 1, as well as the entropy stable DGSEM, where the volume flux is the entropy conservative flux from Theorem 1 and the surface flux is the entropy stable flux from Theorem 2. We run the solution up to a fixed final time and compute the error in the density between the approximation and the manufactured solution for different mesh resolutions and each polynomial order. In Table 1 we present the experimental order of convergence (EOC) for the entropy conservative DGSEM. We observe an odd/even effect, that is, the EOC differs depending on whether the polynomial order is odd or even, which has been previously observed, e.g. [22, 26]. This is particularly noticeable for higher resolution numerical tests. Table 2 gives the EOC results for the entropy stable DGSEM where there is no longer an odd/even effect and the convergence rate is $N+1$, as expected for a nodal DG scheme, e.g. [3, 26].
5.1.2. Entropy conservation test
For this test we compute the entropy residual (5.5) for two polynomial orders and different mesh resolutions. We select the volume and surface fluxes to be the entropy conservative flux from Theorem 1. We see in Table 3 that the magnitude of the entropy residual is on the order of machine precision for the discontinuous initial condition (5.3) for all resolution configurations.
5.2. Polytropic flow
Here we consider the polytropic equation of state (2.2) with fixed values of the adiabatic coefficient $\gamma>1$ and the scaling factor $\kappa$.
5.2.1. Convergence
The formulation of the convergence test is very similar to that discussed in Sect. 5.1.1, where we use the manufactured solution (5.1) and additional residual (5.2) to investigate the accuracy of the DG approximation for two polynomial orders. We run the solution up to a fixed final time and compute the error in the density. Table 4 gives the EOC for the entropy conservative DGSEM where we observe an odd/even effect with respect to the polynomial order of the approximation. The entropy stable EOC results in Table 5, again, show the optimal convergence order of $N+1$ and no such odd/even effect.
5.2.2. Entropy conservation test
Just as in Sect. 5.1.2 we compute the entropy residual (5.5) for two polynomial orders and different mesh resolutions. The volume and surface fluxes are both taken to be the entropy conservative flux from Theorem 1. Table 6 shows that the entropy residual for the polytropic test is on the order of machine precision for the discontinuous initial condition (5.3) and all resolution configurations.
6. Conclusions
In this work we developed entropy conservative (and entropy stable) numerical approximations for the Euler equations with an equation of state that models a polytropic gas. For this case the pressure is determined from a scaled $\gamma$-power law of the fluid density. In turn, the total energy conservation equation became redundant and it was removed from the polytropic Euler system. In fact, the total energy acted as a mathematical entropy function for the polytropic Euler equations where its conservation (or decay) became an auxiliary condition not explicitly modeled by the PDEs.
We analyzed the continuous entropic properties of the polytropic Euler equations. This provided guidance for the semi-discrete entropy analysis. Next, we derived entropy conservative numerical flux functions in the finite volume context that required the introduction of a special $\gamma$-mean, which is a generalization of the logarithmic mean present in the adiabatic Euler case. Dissipation matrices were then designed and incorporated to guarantee that the finite volume fluxes obey the entropy inequality discretely. We also investigated two special cases of the polytropic system that can be used to model isothermal gases ($\gamma=1$) or the shallow water equations ($\gamma=2$). The finite volume scheme was extended to high-order spatial accuracy through a specific discontinuous Galerkin spectral element framework. We then validated the theoretical analysis with several numerical results. In particular, we demonstrated the high-order spatial accuracy and entropy conservative/stable properties of the novel numerical fluxes for the polytropic Euler equations.
Acknowledgements
Gregor Gassner and Moritz Schily have been supported by the European Research Council (ERC) under the European Union's Eighth Framework Programme Horizon 2020 with the research project Extreme, ERC grant agreement no. 714487.
References
• [1] Timothy J. Barth. Numerical methods for gasdynamic systems on unstructured meshes. In Dietmar Kröner, Mario Ohlberger, and Christian Rohde, editors, An Introduction to Recent Developments in Theory and Numerics for Conservation Laws, volume 5 of Lecture Notes in Computational Science and Engineering, pages 195–285. Springer Berlin Heidelberg, 1999.
• [2] Marvin Bohm, Andrew R. Winters, Gregor J. Gassner, Dominik Derigs, Florian Hindenlang, and Joachim Saur. An entropy stable nodal discontinuous Galerkin method for the resistive MHD equations. Part I: Theory and numerical verification. Journal of Computational Physics, doi.org/10.1016/j.jcp.2018.06.027, 2018.
• [3] C. Canuto, M. Hussaini, A. Quarteroni, and T. Zang. Spectral Methods: Fundamentals in Single Domains. Springer, Berlin, 2006.
• [4] M. Carpenter, T. Fisher, E. Nielsen, and S. Frankel. Entropy stable spectral collocation schemes for the Navier–Stokes equations: Discontinuous interfaces. SIAM Journal on Scientific Computing, 36(5):B835–B867, 2014.
• [5] M. Carpenter and C. Kennedy. Fourth-order 2N-storage Runge-Kutta schemes. Technical Report NASA TM 109111, NASA Langley Research Center, 1994.
• [6] Yunus Cengel and Michael Boles. Thermodynamics: An Engineering Approach. McGraw-Hill Education; 8 edition, 2014.
• [7] Jesse Chan. On discretely entropy conservative and entropy stable discontinuous Galerkin methods. Journal of Computational Physics, 362:346–374, 2018.
• [8] Praveen Chandrashekar. Kinetic Energy Preserving and Entropy Stable Finite Volume Schemes for Compressible Euler and Navier-Stokes Equations. Communications in Computational Physics, 14:1252–1286, 2013.
• [9] Gui-Qiang Chen. Euler equations and related hyperbolic conservation laws. In Handbook of differential equations: evolutionary equations, volume 2, pages 1–104, 2005.
• [10] Tianheng Chen and Chi-Wang Shu. Entropy stable high order discontinuous Galerkin methods with suitable quadrature rules for hyperbolic conservation laws. Journal of Computational Physics, 345:427–461, 2017.
• [11] Richard Courant, Kurt Friedrichs, and Hans Lewy. On the partial difference equations of mathematical physics. IBM Journal of Research and Development, 11:215–234, 1967.
• [12] Jared Crean, Jason E. Hicken, David C. Del Rey Fernández, David W. Zingg, and Mark H. Carpenter. Entropy-stable summation-by-parts discretization of the Euler equations on general curved elements. Journal of Computational Physics, 356:410–438, 2018.
• [13] Dominik Derigs, Andrew R. Winters, Gregor J. Gassner, and Stefanie Walch. A novel averaging technique for discrete entropy stable dissipation operators for ideal MHD. Journal of Computational Physics, 330:624–632, 2016.
• [14] Laurence C. Evans. Partial Differential Equations. American Mathematical Society, 2012.
• [15] Travis C. Fisher and Mark H. Carpenter. High-order entropy stable finite difference schemes for nonlinear conservation laws: Finite domains. Journal of Computational Physics, 252:518–557, 2013.
• [16] Travis C. Fisher, Mark H. Carpenter, Jan Nordström, Nail K. Yamaleev, and Charles Swanson. Discretely conservative finite-difference formulations for nonlinear conservation laws in split form: Theory and boundary conditions. Journal of Computational Physics, 234:353–375, 2013.
• [17] Ulrik S. Fjordholm, Siddhartha Mishra, and Eitan Tadmor. Well-balanced and energy stable schemes for the shallow water equations with discontinuous topography. Journal of Computational Physics, 230(14):5587–5609, 2011.
• [18] Ulrik S. Fjordholm, Siddhartha Mishra, and Eitan Tadmor. Arbitrarily high-order accurate entropy stable essentially nonoscillatory schemes for systems of conservation laws. SIAM Journal on Numerical Analysis, 50(2):544–573, 2012.
• [19] Ulrik S. Fjordholm and Deep Ray. A sign preserving WENO reconstruction method. Journal of Scientific Computing, 68(1):42–63, 2016.
• [20] Lucas Friedrich, Andrew R Winters, David C Del Rey Fernández, Gregor J Gassner, Matteo Parsani, and Mark H Carpenter. An entropy stable non-conforming discontinuous Galerkin method with the summation-by-parts property. Journal of Scientific Computing, 77(2):689–725, 2018.
• [21] G. Gassner, F. Hindenlang, and C. Munz. A Runge-Kutta based discontinuous Galerkin method with time accurate local time stepping. Adaptive High-Order Methods in Computational Fluid Dynamics, 2:95–118, 2011.
• [22] Gregor J. Gassner. A skew-symmetric discontinuous Galerkin spectral element discretization and its relation to SBP-SAT finite difference methods. SIAM Journal on Scientific Computing, 35(3):A1233–A1253, 2013.
• [23] Gregor J Gassner, Andrew R Winters, Florian J. Hindenlang, and David A. Kopriva. The BR1 scheme is stable for the compressible Navier-Stokes equations. Journal of Scientific Computing, 77(1):154–200, 2017.
• [24] Gregor J. Gassner, Andrew R. Winters, and David A. Kopriva. Split form nodal discontinuous Galerkin schemes with summation-by-parts property for the compressible Euler equations. Journal of Computational Physics, 327:39–66, 2016.
Teaching and Studying Transnational Composition (in press)Detailed reference viewed: 42 (6 UL) Can Offline Testing of Deep Neural Networks Replace Their Online Testing?Ul Haq, Fitash ; Shin, Donghwan ; Nejati, Shiva et alin Empirical Software Engineering (in press)We distinguish two general modes of testing for Deep Neural Networks (DNNs): Offline testing where DNNs are tested as individual units based on test datasets obtained without involving the DNNs under test ... [more ▼]We distinguish two general modes of testing for Deep Neural Networks (DNNs): Offline testing where DNNs are tested as individual units based on test datasets obtained without involving the DNNs under test, and online testing where DNNs are embedded into a specific application environment and tested in a closed-loop mode in interaction with the application environment. Typically, DNNs are subjected to both types of testing during their development life cycle where offline testing is applied immediately after DNN training and online testing follows after offline testing and once a DNN is deployed within a specific application environment. In this paper, we study the relationship between offline and online testing. Our goal is to determine how offline testing and online testing differ or complement one another and if offline testing results can be used to help reduce the cost of online testing? Though these questions are generally relevant to all autonomous systems, we study them in the context of automated driving systems where, as study subjects, we use DNNs automating end-to-end controls of steering functions of self-driving vehicles. Our results show that offline testing is less effective than online testing as many safety violations identified by online testing could not be identified by offline testing, while large prediction errors generated by offline testing always led to severe safety violations detectable by online testing. Further, we cannot exploit offline testing results to reduce the cost of online testing in practice since we are not able to identify specific situations where offline testing could be as accurate as online testing in identifying safety requirement violations. [less ▲]Detailed reference viewed: 41 (7 UL) Spelling patterns of plural marking and learning trajectories in French taught as a foreign languageWeth, Constanze ; Ugen, Sonja ; Fayol, Michel et alin Written Language and Literacy (in press), 24Although French plural spelling has been studied extensively, the complexity of factors affecting the learning of French plural spelling are not yet fully explained, namely on the level of adjectival and ... [more ▼]Although French plural spelling has been studied extensively, the complexity of factors affecting the learning of French plural spelling are not yet fully explained, namely on the level of adjectival and verbal plural. This study investigates spelling profiles of French plural markers of 228 multilingual grade 5 pupils with French taught as a foreign language. Three analyses on the learner performances of plural spelling in nouns, verbs and pre- and postnominal attributive adjectives were conducted (1) to detect the pupils’ spelling profiles of plural marking on the basis of the performances in the pretest, (2) to test the profiles against two psycholinguistic theories, and (3) to evaluate the impact of the training on each spelling profile in the posttest. 
The first analysis confirms the existing literature that pupils’ learning of French plural is not random but ordered and emphasizes the role of the position for adjectives (pre- or postnominal) on correct plural spelling. The second analysis reveals the theoretical difficulties of predicting spelling of adjectival and verbal plural. The third analysis shows that strong and poor spellers both benefit from a morphosyntactic training and provides transparency and traceability of the learning trajectories. Together, the descriptive analyses reveal clear patterns of intra-individual spelling profiles. They point to a need for further research in those areas that have empirically provided the most inconsistent results to date and that are not supported by the theories: verbs and adjectives. [less ▲]Detailed reference viewed: 35 (2 UL) Migration in China: to Work or to Wed?Dupuy, Arnaud in Journal of Applied Econometrics (in press)Detailed reference viewed: 64 (0 UL) |
# The Zettelkasten Method
Early this year, Conor White-Sullivan introduced me to the Zettelkasten method of note-taking. I would say that this significantly increased my research productivity. I’ve been saying “at least 2x”. Naturally, this sort of thing is difficult to quantify. The truth is, I think it may be more like 3x, especially along the dimension of “producing ideas” and also “early-stage development of ideas”. (What I mean by this will become clearer as I describe how I think about research productivity more generally.) However, it is also very possible that the method produces serious biases in the types of ideas produced/developed, which should be considered. (This would be difficult to quantify at the best of times, but also, it should be noted that other factors have dramatically decreased my overall research productivity. So, unfortunately, someone looking in from outside would not see an overall boost. Still, my impression is that it’s been very useful.)
I think there are some specific reasons why Zettelkasten has worked so well for me. I’ll try to make those clear, to help readers decide whether it would work for them. However, I honestly didn’t think Zettelkasten sounded like a good idea before I tried it. It only took me about 30 minutes of working with the cards to decide that it was really good. So, if you’re like me, this is a cheap experiment. I think a lot of people should actually try it to see how they like it, even if it sounds terrible.
My plan for this document is to first give a short summary and then an overview of Zettelkasten, so that readers know roughly what I’m talking about, and can possibly experiment with it without reading any further. I’ll then launch into a longer discussion of why it worked well for me, explaining the specific habits which I think contributed, including some descriptions of my previous approaches to keeping research notes. I expect some of this may be useful even if you don’t use Zettelkasten—if Zettelkasten isn’t for you, maybe these ideas will nonetheless help you to think about optimizing your notes. However, I put it here primarily because I think it will boost the chances of Zettelkasten working for you. It will give you a more concrete picture of how I use Zettelkasten as a thinking tool.
# Very Short Summary
## Materials
• Staples index-cards-on-a-ring or equivalent, possibly with:
• plastic rings rather than metal
• different 3x5 index cards (I recommend blank, but, other patterns may be good for you) as desired
• some kind of divider
• I use yellow index cards as dividers, but slightly larger cards, tabbed cards, plastic dividers, etc. might be better
• quality hole punch (if you’re using different cards than the pre-punched ones)
• quality writing instrument—must suit you, but,
• multi-color click pen recommended
• Hi-Tec-C Coleto especially recommended
## Technique
• Number pages with alphanumeric strings, so that pages can be sorted hierarchically rather than linearly -- 11a goes between 11 and 12, 11a1 goes between 11a and 11b, et cetera. This allows pages to be easily inserted between other pages without messing up the existing ordering, which makes it much easier to continue topics.
• Use the alphanumeric page identifiers to “hyperlink” pages. This allows sub-topics and tangents to be easily split off into new pages, and also allows for related ideas to be interlinked.
Before I launch into the proper description of Zettelkasten, here are some other resources on note-taking which I looked at before diving into using Zettelkasten myself. (Feel free to skip this part on a first reading.)
# Related Literature
There are other descriptions of Zettelkasten out there. I mainly read How to Take Smart Notes, which is the best book on Zettelkasten as far as I know—it claims to be the best write-up available in English, anyway. The book contains a thorough description of the technique, plus a lot of “philosophical” stuff which is intended to help you approach it with the right mindset to actually integrate it into your thinking in a useful way. I am sympathetic to this approach, but some of the content seems like bad science to me (such as the description of growth mindset, which didn’t strike me as at all accurate—I’ve read some of the original research on growth mindset).
An issue with some other write-ups is that they focus on implementing Zettelkasten-like systems digitally. In fact, Conor White-Sullivan, who I’ve already mentioned, is working on a Workflowy/Dynalist-like digital tool for thinking, inspired partially by Zettelkasten (and also by the idea that a Workflowy/Dynalist style tool which is designed explicitly to nudge users into good thinking patterns with awareness of cognitive biases, good practices for argument mapping, etc. could be very valuable). You can take a look at his tool, Roam, here. He also wrote up some thoughts about Zettelkasten in Roam. However, I strongly recommend trying out Zettelkasten on actual note-cards, even if you end up implementing it on a computer. There’s something good about it that I don’t fully understand. As such, I would advise against trusting other people’s attempts to distill what makes Zettelkasten good into a digital format—better to try it yourself, so that you can then judge whether alternate versions are improvements for you. The version I will describe here is fairly close to the original.
I don’t strongly recommend my own write-up over what’s said in How to Take Smart Notes, particularly the parts which describe the actual technique. I’m writing this up partly just so that there’s an easily linkable document for people to read, and partly because I have some ideas about how to make Zettelkasten work for you (based on my own previous note-taking systems) which are different from the book.
Another source on note-taking which I recommend highly is Lion Kimbro’s How to Make a Complete Map of Every Thought You Think (html, pdf). This is about a completely different system of note-taking, with different goals. However, it contains a wealth of inspiring ideas about note-taking systems, including valuable tips for the raw physical aspects of keeping paper notes. I recommend reading this interview with Lion Kimbro as a “teaser” for the book—he mentions some things which he didn’t in the actual book, and it serves somewhat as “the missing introduction” to the book. (You can skip the part at the end about wikis if you don’t find it interesting; it is sort of outdated speculation about the future of the web, and it doesn’t get back to talking about the book.) Part of what I love about How to Make a Complete Map of Every Thought You Think is the manic brain-dump writing style—it is a book which feels very “alive” to me. If you find its style grating rather than engaging, it’s probably not worth you reading through.
I should also mention another recent post about Zettelkasten here on LW.
# Zettelkasten, Part 1: The Basics
Zettelkasten is German for ‘slip-box’, i.e., a box with slips of paper in it. You keep everything on a bunch of note cards. Niklas Luhmann developed the system to take notes on his reading. He went on to be an incredibly prolific social scientist. It is hard to know whether his productivity was tied to Zettelkasten, but, others have reported large productivity boosts from the technique as well.
## Small Pieces of Paper Are Just Modular Large Pieces of Paper
You may be thinking: aren’t small pieces of paper bad? Aren’t large notebooks just better? Won’t small pages make for small ideas?
What I find is that the drive for larger paper is better-served by splitting things off into new note cards. Note-cards relevant to your current thinking can be spread on a table to get the same big-picture overview which you’d get from a large sheet of paper. Writing on an actual large sheet of paper locks things into place.
When I was learning to write in my teens, it seemed to me that paper was a prison. Four walls, right? And the ideas were constantly trying to escape. What is a parenthesis but an idea trying to escape? What is a footnote but an idea that tried—that jumped off the cliff? Because paper enforces single sequence—and there’s no room for digression—it imposes a particular kind of order in the very nature of the structure.
-- Ted Nelson, demonstration of Xanadu space
I use 3x5 index cards. That’s quite small compared to most notebooks. It may be that this is the right size for me only because I already have very small handwriting. I believe Luhmann used larger cards. However, I expected it to be too small. Instead, I found the small cards to be freeing. I strongly recommend trying 3x5 cards before trying with a larger size. In fact, even smaller sizes than this are viable—one early reader of this write-up decided to use half 3x5 cards, so that they’d fit in MTG deck boxes.
Writing on small cards forces certain habits which would be good even for larger paper, but which I didn’t consider until the small cards made them necessary. It forces ideas to be broken up into simple pieces, which helps to clarify them. Breaking up ideas forces you to link them together explicitly, rather than relying on the linear structure of a notebook to link together chains of thought.
Once you’re forced to adopt a linking system, it becomes natural to use it to “break out of the prison of the page”—tangents, parentheticals, explanatory remarks, caveats, … everything becomes a new card. This gives your thoughts much more “surface area” to expand upon.
On a computer, this is essentially the wiki-style [[magic link]] which links to a page if the page exists, or creates the page if it doesn’t yet exist—a critical but all-too-rare feature of note-taking software. Again, though, I strongly recommend trying the system on paper before jumping to a computer; putting yourself in a position where you need to link information like crazy will help you to see the value of it.
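As a rough illustration of the get-or-create behaviour behind such links, here is a toy sketch of my own—not the API of any particular note-taking tool, and the function and variable names are made up:

```python
# A toy in-memory "wiki": page titles map to page text.
pages = {}

def follow_magic_link(title):
    """Wiki-style [[magic link]] behaviour: return the page if it already
    exists, otherwise create an empty page under that title on first use."""
    return pages.setdefault(title, "")

follow_magic_link("Paper Hypertext")         # creates an empty page
pages["Paper Hypertext"] = "notes go here"
print(follow_magic_link("Paper Hypertext"))  # prints: notes go here
```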
This brings us to one of the defining features of the Zettelkasten method: the addressing system, which is how links between cards are established.
## Paper Hypertext
We want to use card addresses to organize and reference everything. So, when you start a new card, its address should be the first thing you write—you never want to have a card go without an address. Choose a consistent location for the addresses, such as the upper right corner. If you’re using multi-color pens, like me, you might want to choose one color just for addresses.
Wiki-style links tend to use the title of a page to reference that page, which works very well on a computer. However, for a pen-and-paper hypertext system, we want to optimize several things:
• Easy lookup: we want to find referenced cards as easily as possible. This entails sorting the cards, so that you don’t have to go digging; finding what you want is as easy as finding a word in the dictionary, or finding a page given the page number.
• Easy to sort: I don’t know about you, but for me, putting things in alphabetical order isn’t the easiest thing. I find myself reciting the alphabet pretty often. So, I don’t really want to sort cards alphabetically by title.
• Easy to write: another reason not to sort alphabetically by title is that you want to reference cards really easily. You probably don’t want to write out full titles, unless you can keep the titles really short.
• Fixed addresses: Whatever we use to reference a card, it must remain fixed. Otherwise, references could break when things change. No one likes broken links!
• Related cards should be near each other. Alphabetical order might put closely related cards very far apart, which gets to be cumbersome as the collection of cards grows—even if look-up is quite convenient, it is nicer if the related cards are already at hand without purposefully deciding to look them up.
• No preset categories. Creating a system of categories is a common way to place related content together, but, it is too hard to know how you will want to categorize everything ahead of time, and the needs of an addressing system make it too difficult to change your category system later.
One simple solution is to number the cards, and keep them in numerical order. Numbers are easy to sort and find, and are very compact, so that you don’t have the issue of writing out long names. However, although related content will be somewhat nearby (due to the fact that we’re likely to create several cards on a topic at the same time), we can do better.
The essence of the Zettelkasten approach is the use of repeated decimal points, as in “22.3.14”—cards addressed 2.1, 2.2, 2.2.1 and so on are all thought of as “underneath” the card numbered 2, just as in the familiar subsection-numbering system found in many books and papers. This allows us to insert cards anywhere we want, rather than only at the end, which lets related ideas be placed near each other much more easily. A card sitting “underneath” another can loosely be thought of as a comment, or a continuation, or an associated thought.
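To make the sort order concrete, here is a minimal sketch (my own illustration, not part of the method itself) of how the repeated-decimal addresses sort hierarchically—splitting on the dots and comparing the resulting tuples reproduces the insert-anywhere ordering:

```python
def decimal_key(address):
    """'2.2.1' -> (2, 2, 1): tuple comparison then gives the hierarchical
    order, so a card sorts right after its parent and before the parent's
    next sibling."""
    return tuple(int(part) for part in address.split("."))

cards = ["2", "2.2", "1", "2.1", "2.2.1", "3", "2.3"]
print(sorted(cards, key=decimal_key))
# ['1', '2', '2.1', '2.2', '2.2.1', '2.3', '3']
```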
However, for the sake of compactness, Zettelkasten addresses are usually written in an alphanumeric format, so that rather than writing 1.1.1, we would write 1a1; rather than writing 1.2.3, we write 1b3; and so on. This notation allows us to avoid writing so many periods, which grows tiresome.
Alternating between numbers and letters in this way allows us to get to two-digit numbers (and even two-digit letters, if we exhaust the whole alphabet) without needing periods or dashes or any such separators to indicate where one number ends and the next begins.
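Here is a small sketch of that correspondence (again my own illustration; the helper names are invented). It converts a decimal address into the compact alphanumeric form by writing every second level in letters, spreadsheet-style, so that 27 becomes “aa”:

```python
def letters(n):
    """1 -> 'a', 2 -> 'b', ..., 26 -> 'z', 27 -> 'aa' (spreadsheet-style),
    which covers the 'two-digit letters' case."""
    out = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)
        out = chr(ord("a") + rem) + out
    return out

def to_alphanumeric(decimal_address):
    """'1.2.3' -> '1b3': every second level is written with letters,
    so no separators are needed between levels."""
    parts = [int(p) for p in decimal_address.split(".")]
    return "".join(str(p) if i % 2 == 0 else letters(p)
                   for i, p in enumerate(parts))

print(to_alphanumeric("1.1.1"))    # 1a1
print(to_alphanumeric("1.2.3"))    # 1b3
print(to_alphanumeric("22.3.14"))  # 22c14
```

Sorting alphanumeric addresses works just like the decimal sketch above, once digit runs and letter runs are split back apart (for example with a regular expression such as `\d+|[a-z]+`).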
Let’s say I’m writing linearly—something which could go in a notebook. I might start with card 11, say. Then I proceed to card 11a, 11b, 11c, 11d, etc. On each card, I make a note somewhere about the previous and next cards in sequence, so that later I know for sure how to follow the chain via addresses.
Later, I might have a different branch-off thought from 11c. This becomes 11c1. That’s the magic of the system, which you can’t accomplish so easily in a linear notebook: you can just come back and add things. These tangents can grow to be larger than the original.
Don’t get too caught up in what address to give a card to put it near relevant material. A card can be put anywhere in the address system. The point is to make things more convenient for you; nothing else matters. Ideally, the tree would perfectly reflect some kind of conceptual hierarchy; but in practice, card 11c might turn out to be the primary thing, with card 11 just serving as a historical record of what seeded the idea.
Similarly, a linear chain of writing doesn’t have to get a nice linear chain of addresses. I might have a train of thought which goes across cards 11, 11a, 11b, 11b1, 11b1a, 11b1a1, 18, 18a… (I write a lot of “1a1a1a1a”, and it is sometimes better to jump up to a new top-level number to keep the addresses from getting longer.)
Mostly, though, I’ve written less and less in linear chains, and more and more in branching trees. Sometimes a thought just naturally wants to come out linearly. But, this tends to make it more difficult to review later—the cards aren’t split up into atomic ideas, instead flowing into each other.
If you don’t know where to put something, make it a new top-level card. You can link it to whatever you need via the addressing system, so the cost of putting it in a suboptimal location isn’t worth worrying about too much! You don’t want to be constrained by the ideas you’ve had so far. Or, to put it a different way: it’s like starting a new page in a notebook. Zettelkasten is supposed to be less restrictive than a notebook, not more. Don’t get locked into place by trying to make the addresses perfectly reflect the logical organization.
## Physical Issues: Card Storage
Linear notes can be kept in any kind of paper notebook. Nonlinear/modular systems such as Zettelkasten, on the other hand, require some sort of binder-like system where you can insert pages at will. I’ve tried a lot of different things. Binders are typically just less comfortable to write in (because of the rings—this is another point where the fact that I’m left-handed is very significant, and right-handed readers may have a different experience).
(One thing that’s improved my life is realizing that I can use a binder “backwards” to get essentially the right-hander’s experience—I write on the “back” of pages, starting from the “end”.)
They’re also bulky; it seems somewhat absurd how much more bulky they are than a notebook of equivalently-sized paper. This is a serious concern if you want to carry them around. (As a general rule, I’ve found that a binder feels roughly equivalent to one-size-larger notebook—a three-ring binder for 3x5 cards feels like carrying around a deck of 4x6 cards; a binder of A6 paper feels like a notebook of A5 paper; and so on.)
Index cards are often kept in special boxes, which you can get. However, I don’t like this so much? I want a more binder-like thing which I can easily hold in my hands and flip through. Also, boxes are often made to view cards in landscape orientation, but I prefer portrait orientation—so it’s hard to flip through things and read while they’re still in the box.
Currently, I use the Staples index-cards-on-a-ring which put all the cards on a single ring, and protect them with plastic covers. However, I replace the metal rings (which I find harder to work with) with plastic rings. I also bought a variety of note cards to try—you can try thicker/thinner paper, colors, line grid, dot grid, etc. If you do this, you’ll need a hole punch, too. I recommend getting a “low force” hole punch; if you just go and buy the cheapest hole punch you can find, it’ll probably be pretty terrible. You want to be fairly consistent with where you punch the holes, but, that wasn’t as important as I expected (it doesn’t matter as much with a one-ring binder in contrast to a three-ring, since you’re not trying to get holes to line up with each other).
I enjoy the ring storage method, because it makes cards really easy to flip through, and I can work on several cards at once by splaying them out (which means I don’t lose my place when I decide to make a new card or make a note on a different one, and don’t have to take things out of sort order to work with them).
### Deck Architecture
I don’t keep the cards perfectly sorted all the time. Instead, I divide things up into sorted and not-yet-sorted:
(Blue in this image means “written on”—they’re all actually white except for the yellow divider, although of course you could use colored cards if you like.)
### Fetch Modi
As I write on blank cards, I just leave them where they are, rather than immediately putting them into the sort ordering. I sort them in later.
There is an advantage to this approach beyond the efficiency of sorting things all at once. The unsorted cards are a physical record of what I’m actively working on. Since cards are so small, working on an idea almost always means creating new cards. So, I can easily jump back into whatever I was thinking about last time I handled the binder of cards.
Unless you have a specific new idea you want to think about (in which case you start a new card, or, go find the most closely related cards in your existing pile), there are basically two ways to enter into your card deck: from the front, and from the back. The front is “top-down” (both literally and figuratively), going from bigger ideas to smaller details. It’s more breadth-first. You’re likely to notice an idea which you’ve been neglecting, and start a new branch from it. Starting from the back, on the other hand, is depth-first. You’re continuing to go deeper into a branch which you’ve already developed some depth in.
Don’t sort too often. The unsorted cards are a valuable record of what you’ve been thinking about. I’ve regretted sorting too frequently—it feels like I have to start over, find the interesting open questions buried in my stack of cards all over again.
In theory, one could also move cards from sorted to unsorted specifically to remind oneself to work on those cards, but I haven’t really used this tactic.
### Splitting & Deck Management
When I have much more than 100 filled cards on a ring, I sort all of the cards, and split the deck into two. (Look for a sensible place to split the tree into two—you want to avoid a deep branch being split up into two separate decks, as much as you can.) Load up the two new decks with 50ish blank cards each, and stick them on new rings.
Everything is still on one big addressing system, so, it is a good idea to label the two new binders with the address range within. I use blank stickers, which I put on the front of each ring binder. The labels serve both to keep lookup easy (I don’t want to be guessing about which binder certain addresses are in), and also, to remind me to limit the addresses within a given deck.
For example, suppose this is my first deck of cards (so before the split, it holds everything). Let’s say there are 30 cards underneath “1”, 20 cards underneath “2”, and then about 50 more cards total, under the numbers 3 through 14.
I would split this deck into a “1 through 2” deck, and a “3 through *” deck—the * meaning “anything”. You might think it would be “3 through 14”, but, when I make card 15, it would go in that deck. So at any time, you have one deck of cards with no upper bound. On the other hand, when you are working with the “1 − 2” deck, you don’t want to mistakenly make a card 3; you’ve already got a card 3 somewhere. You don’t want duplicate addresses anywhere!
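To make “look for a sensible place to split the tree” concrete, here is a rough Python sketch of my own (the function names are invented, and on paper the choice is of course a judgment call rather than a formula). It splits a sorted deck near the middle, but only at a top-level boundary, so a deep branch never gets torn across two decks:

```python
import re

def top_level(address):
    """The leading number of an address: '2a1' and '2' both live under '2'."""
    return re.match(r"\d+", address).group()

def split_deck(sorted_addresses):
    """Split a full, sorted deck into two near the middle, nudging the split
    point forward until the top-level number changes."""
    split = len(sorted_addresses) // 2
    while 0 < split < len(sorted_addresses) and \
            top_level(sorted_addresses[split]) == top_level(sorted_addresses[split - 1]):
        split += 1
    return sorted_addresses[:split], sorted_addresses[split:]

deck = ["1", "1a", "2", "2a", "2a1", "2b", "3", "3a"]
first, second = split_deck(deck)
print(first)   # ['1', '1a', '2', '2a', '2a1', '2b']  (the '1 through 2' deck)
print(second)  # ['3', '3a']                          (the '3 through *' deck)
```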
Currently, I have 6 decks: 0 − 1.4, 1.5 − 1.*, 2 − 2.4, 2.5 − 2.*, 3, and 4 − 4.*. (I was foolish when I started my Zettelkasten, and used the decimal system rather than the alphanumeric system. I switched quickly, but all my top-level addresses are still decimal. So, I have a lot of mixed-address cards, such as 1.3a1, 1.5.2a2, 2.6b4a, etc. As for why my numbers start at 0 rather than 1, I’ll discuss that in the “Index & Bibliography” section.)
I like to have the unsorted/blank “short-term memory” section on every single deck, so that I can conveniently start thinking about stuff within that deck without grabbing anything else. However, it might also make sense to have only one “short-term memory” in order to keep yourself more focused (and so that there’s only one place to check when you want to remember what you were recently working on!).
## Getting Started: Your First Card
Your first note doesn’t need to be anything important—it isn’t as if every idea you put into your Zettelkasten has to be “underneath” it. Remember, you aren’t trying to invent a good category system. Not every card has to look like a core idea with bullet points which elaborate on that idea, like my example in the previous section. You can just start writing whatever. In fact, it might be good if you make your first cards messy and unimportant, just to make sure you don’t feel like everything has to be nicely organized and highly significant.
On the other hand, it might be important to have a good starting point, if you really want to give Zettelkasten a chance.
I mentioned that I knew I liked Zettelkasten within the first 30 minutes. I think it might be important that when I sat down to try it, I had an idea I was excited to work on. It wasn’t a nice solid mathematical idea—it was a fuzzy idea, one which had been burning in the back of my brain for a week or so, waiting to be born. It filled the fractal branches of a zettelkasten nicely, expanding in every direction.
So, maybe start with one of those ideas. Something you’ve been struggling to articulate. Something which hasn’t found a place in your linear notebook.
Alright. That’s all I have to say about the basics of Zettelkasten. You can go try it now if you want, or keep reading. The rest of this document is about further ideas in note-taking which have shaped the way I use Zettelkasten. These may or may not be critical factors; I don’t know for sure why Zettelkasten is such a productive system for me personally.
# Note-Taking Systems I Have Known and Loved
I’m organizing this section by my previous note-taking systems, but secretly, the main point is to convey a number of note-taking ideas which may have contributed to Zettelkasten working well for me. These ideas have seemed generally useful to me—maybe they’ll be useful to you, even if you don’t end up using Zettelkasten in particular.
## Notebooks
### Developing Ideas
Firstly, and most importantly, I have been keeping idea books since middle school. I think there’s something very important in the simple idea of writing regularly—I don’t have the reference, but, I remember reading someone who described the day they first started keeping a diary as the day they first woke up, started reflectively thinking about their relationship with the world. Here’s a somewhat similar quote from a Zettelkasten blog:
During the time spanning Nov. 2007–Jan. 2010, I filled 11 note books with ideas, to-do lists, ramblings, diary entries, drawings, and worries.
Looking back, this is about the time I started to live consciously. I guess keeping a journal helped me “wake up” from some kind of teenage slumber.
--Christian
I never got into autobiographical diary-style writing, personally, instead writing about ideas I was having. Still, things were in a very “narrative” format—the ideas were a drama, a back-and-forth, a dance of rejoinders. There was some math—pages filled with equations—but only after a great deal of (very) informal development of an idea.
As a result, “elaborate on an idea” / “keep going” seems like a primitive operation to me—and, specifically, a primitive operation which involves paper. (I can’t translate the same thinking style to conversation, not completely.) I’m sure that there is a lot to unpack, but for me, it just feels natural to keep developing ideas further.
So, when I say that the Zettelkasten card 1b2 “elaborates on” the card 1b, I’m calling on the long experience I’ve had with idea books. I don’t know if it’ll mean the same thing for you.
Here’s my incomplete attempt to convey some of what it means.
When I’m writing in an idea book, I spend a lot of time trying to clearly explain ideas under the (often false) assumption that I know what I’m talking about. There’s an imaginary audience who knows a lot of what I’m talking about, but I have to explain certain things. I can’t get away with leaving important terms undefined—I have to establish anything I feel less than fully confident about. For example, the definition of a Bayesian network is something I can assume my “audience” can look up on wikipedia. However, if I’m less than totally confident in the concept of d-separation, I have to explain it; especially if it is important to the argument I hope to make.
Once I’ve established the terms, I try to explain the idea I was having. I spend a lot of time staring off into space, not really knowing what’s going on in my head exactly, but with a sense that there’s a simple point I’m trying to make, if only I could see it. I simultaneously feel like I know what I want to say (if only I could find the words), and like I don’t know what it is—after all, I haven’t articulated it yet. Generally, I can pick up where I left off with a particular thought, even after several weeks—I can glance at what I’ve written so far, and get right back to staring at the wall again, trying to articulate the same un-articulated idea.
If I start again in a different notebook (for example, switching to writing my thoughts on a computer), I have to explain everything again. This audience doesn’t know yet! I can’t just pick up on a computer where I left off on paper. It’s like trying to pick up a conversation in the middle, but with a different person. This is sort of annoying, but often good (because re-explaining things may hold surprises, as I notice new details.)
Similarly, if I do a lot of thinking without a notebook (maybe in a conversation), I generally have to “construct” my new position from my old one. This has an unfortunate “freezing” effect on thoughts: there’s a lot of gravity toward the chain of thought wherever it is on the page. I tend to work on whatever line of thought is most recent in my notebook, regardless of any more important or better ideas which have come along—especially if the line of thought in the notebook isn’t yet at a conclusive place. Sometimes I put a scribble in the notebook after a line of thought, to indicate explicitly that it no longer reflects the state of my thinking, to give myself “permission” to do something else.
Once I’ve articulated some point, then criticisms of the point often become clear, and I’ll start writing about them. I often have a sense that I know how it’s going to go a few steps ahead in this back-and-forth; a few critiques and replies/revisions. Especially if the ideas are flowing faster than I can write them down. However, it is important to actually write things down, because they often don’t go quite as I expect.
If an idea seems to have reached a natural conclusion, including all the critiques/replies which felt important enough to write, I’ll often write a list of “future work”: any open questions I can think of, applications, details which are important but not so important that I want to write about them yet, etc. At this point, it is usually time to write the idea up for a real audience, which will require more detail and refine the idea yet further (possibly destroying it, or changing it significantly, as I often find a critical flaw when I try to write an idea up for consumption by others).
If I don’t have any particular idea I’m developing, I may start fresh with a mental motion like “OK, obviously I know how to solve everything” and write down the grand solution to everything, starting big-picture and continuing until I get stuck. Or, instead, I might make a bulleted list free-associating about what I think the interesting problems are—the things I don’t know how to do.
## Workflowy
The next advance in my idea notes was workflowy. I still love the simplicity of workflowy, even though I have moved on from it.
For those unfamiliar, Workflowy is an outlining tool. I was unfamiliar with the idea before Workflowy introduced it to me. Word processors generally support nested bulleted lists, but the page-like format of a word processor limits the depth to which such lists can go, and it didn’t really occur to me to use these as a primary mode of writing. Workflowy doesn’t let you do anything but this, and it provides enough features to make it extremely convenient and natural.
### Nonlinear Ideas: Branching Development
Workflowy introduced me to the possibility of nonlinear formats for idea development. I’ve already discussed this to some extent, since it is also one of the main advantages of Zettelkasten over ordinary notebooks.
Suddenly, I could continue a thread anywhere, rather than always picking it up at the end. I could sketch out where I expected things to go, with an outline, rather than keeping all the points I wanted to hit in my head as I wrote. If I got stuck on something, I could write about how I was stuck nested underneath whatever paragraph I was currently writing, but then collapse the meta-thoughts to be invisible later—so the overall narrative doesn’t feel interrupted.
In contrast, writing in paper notebooks forces you to choose consciously that you’re done for now with a topic if you want to start a new one. Every new paragraph is like choosing a single fork in a twisting maze. Workflowy allowed me to take them all.
### What are Children?
I’ve seen people hit a block right away when they try to use Workflowy, because they don’t know what a “child node” is.
• Here’s a node. It could be a paragraph, expressing some thought. It could also be a title.
• Here’s a child node. It could be a comment on the thought—an aside, a critique, whatever. It could be something which goes under the heading.
• Here’s a sibling node. It could be the next paragraph in the “main thrust” of an argument. It could be an unrelated point under the same super-point everything is under.
As with Zettelkasten, my advice is to not get too hung up on this. A child is sort of like a comment; a parenthetical statement or a footnote. You can continue the main thrust of an argument in sibling nodes—just like writing an ordinary sequence of paragraphs in a word processor.
You can also organize things under headings. This is especially true if you wrote a sketchy outline first and then filled it in, or, if you have a lot of material in Workflowy and had to organize it. The “upper ontology” of my workflowy is mostly title-like, single words or short noun phrases. As you get down in, bullets start to be sentences and paragraphs more often.
Obviously, all of this can be applied to Zettelkasten to some extent. The biggest difference is that “upper-level” cards are less likely to just be category titles; and, you can’t really organize things into nice categories after-the-fact because the addresses in Zettelkasten are fixed—you can’t change them without breaking links. You can use redirect cards if you want to reorganize things, actually, but I haven’t done that very much in practice. Something which has worked for me to some extent is to reorganize things in the indexes. Once an index is too much of a big flat list, you can cluster entries into subjects. This new listing can be added as a child to the previous index, keeping the historical record; or, possibly, replace the old index outright. I discuss this more in the section on indexing.
### Building Up Ideas over Long Time Periods
My idea books let me build up ideas over time to a greater extent than my peers who didn’t keep similar journals. However, because the linear format forces you to switch topics in a serial manner and “start over” when you want to resume a subject, you’re mostly restricted to what you can keep in your head. Your notebooks are a form of information storage, and you can go back and re-read things, but only if you remember the relevant item to go back and re-read.
Workflowy allowed me to build up ideas to a greater degree, incrementally adding thoughts until cascades of understanding changed my overall view.
### Placing a New Idea
Because you’ve got all your ideas in one big outline, you can add in little ideas easily. Workflowy was easy enough to access via my smartphone (though they didn’t have a proper app at the time), so I could jot down an idea as I was walking to class, waiting for the bus, etc. I could easily navigate to the right location, at least, if I had organized the overall structure of the outline well. Writing one little idea would usually get more flowing, and I would add several points in the same location on the tree, or in nearby locations.
This idea of jotting down ideas while you’re out and about is very important. If you feel you don’t have enough ideas (be it for research, for writing fiction, for art—whatever) my first question would be whether you have a good way to jot down little ideas as they occur to you.
The fact that you’re forced to somehow fit all ideas into one big tree is also important. It makes you organize things in ways that are likely to be useful to you later.
### Organizing Over Time
The second really nice thing workflowy did was allow me to go back and reorganize all the little ideas I had jotted down. When I sat down at a computer, I could take a look at my tree overall and see how well the categorization fit. This mostly took the form of small improvements to the tree structure over time. Eventually, a cascade of small fixes turned into a major reorganization. At that point, I felt I had really learned something—all the incremental progress built up into an overall shift in my understanding.
Again, this isn’t really possible in paper-based Zettelkasten—the address system is fixed. However, as I mentioned before, I’ve had some success doing this kind of reorganization within the indexes. It doesn’t matter that the addresses of the cards are fixed if the way you actually find those addresses is mutable.
### Limitations of Workflowy
Eventually, I noticed that I had a big pile of ideas which I hadn’t really developed. I was jotting down ideas, sure. I was fitting them into an increasingly cohesive overall picture, sure. But I wasn’t doing anything with them. I wasn’t writing pages and pages of details and critique.
It was around this time that I realized I had gone more than three years without using a paper notebook very significantly. I started writing on paper again. I realized that there were all these habits of thinking which were tied to paper for me, and which I didn’t really access if I didn’t have a nice notebook and a nice pen—the force of the long-practiced associations. It was like waking up intellectually after having gone to sleep for a long time. I started to remember high school. It was a weird time. Anyway...
## Dynalist
The next thing I tried was Dynalist.
The main advantage of Dynalist over Workflowy is that it takes a feature-rich rather than minimalistic approach. I like the clean aesthetics of Workflowy, but… eventually, there’ll be some critical feature Workflowy just doesn’t provide, and you’ll want to make the jump to Dynalist. I use hardly any of the extra features of Dynalist, but the ones I do use, I need. For me, it’s mostly the LaTeX support.
Another thing about Dynalist which felt very different for me was the file system. Workflowy forces you to keep everything in one big outline. Dynalist lets you create many outlines, which it treats as different files; and, you can organize them into folders (recursively). Technically, that’s just another tree structure. In terms of UI, though, it made navigation much easier (because you can easily access a desired file through the file pane). Psychologically, it made me much more willing to start fresh outlines rather than add to one big one. This was both good and bad. It meant my ideas were less anchored in one big tree, but it eventually resulted in a big, disorganized pile of notes.
I did learn my lesson from Workflowy, though, and set things up in my Dynalist such that I actually developed ideas, rather than just collecting scraps forever.
### Temporary Notes vs Organized Notes
I organized my Dynalist files as follows:
• A “log” file, in which I could write whatever I was thinking about. This was organized by date, although I would often go back and elaborate on things from previous dates.
• A “todo” file, where I put links to items inside “log” which I specifically wanted to go back and think more about. I would periodically sort the todo items to reflect my priorities. This gave me a list of important topics to draw from whenever I wasn’t sure what I wanted to think about.
• A bunch of other disorganized files.
This system wasn’t great, but it was a whole lot better at actually developing ideas than the way I kept things organized in Workflowy. I had realized that locking everything into a unified tree structure, while good for the purpose of slowly improving a large ontology which organized a lot of little thoughts, was keeping me from just writing whatever I was thinking about.
Dan Sheffler (whose essays I’ve already cited several times in this writeup) writes about realizing that his note-taking system was simultaneously trying to implement two different goals: an organized long-term memory store, and “engagement notes” which are written to clarify thinking and have a more stream-of-consciousness style. My “log” file was essentially engagement notes, and my “todo” file was the long-term memory store.
For some people, I think an essential part of Zettelkasten is the distinction between temporary and permanent notes. Temporary notes are the disorganized stream-of-consciousness notes which Sheffler calls engagement notes. Temporary notes can also include all sorts of other things, such as todo lists which you make at the start of the day (and which only apply to that day), shopping lists, etc. Temporary notes can be kept in a linear format, like a notebook. Periodically, you review the temporary notes, putting the important things into Zettelkasten.
In How to Take Smart Notes, Luhmann is described as transferring the important thoughts from the day into Zettel every evening. Sheffler, on the other hand, keeps a gap of at least 24 hours between taking down engagement notes and deciding what belongs in the long-term store. A gap of time allows the initial excitement over an idea to pass, so that only the things which still seem important the next day get into long-term notes. He also points out that this system enforces a small amount of spaced repetition, making it more likely that content is recalled later.
As for myself, I mostly write directly into my Zettelkasten, and I think it’s pretty great. However, I do find this to be difficult/impossible when taking quick notes during a conversation or a talk – when I try, the resulting content in my Zettelkasten seems pretty useless (i.e., I don’t come back to it and further develop those thoughts). So, I’ve started to carry a notebook again for those temporary notes.
I currently think of things like this:
### Jots
These are the sort of small pointers to ideas which you can write down while walking, waiting for the bus, etc. The idea is stated very simply—perhaps in a single word or a short phrase. A sentence at most. You might forget what it means after a week, especially if you don’t record the context well. The first thing to realize about jots is to capture them at all, as already discussed. The second thing is to capture them in a place where you will be able to develop them later. I used to carry around a small pocket notebook for jots, after I stopped using Workflowy regularly. My plan was to review the jots whenever I filled a notebook, putting them in more long-term storage. This never happened: when I filled up a notebook, unpacking all the jots into something meaningful just seemed like too huge a task. It works better for me to jot things into permanent storage directly, as I did with Workflowy. I procrastinate too much on turning temporary notes into long term notes, and the temporary notes become meaningless.
### Glosses
A gloss is a paragraph explaining the point of a jot. If a jot is the title of a Zettelkasten card, a gloss is the first paragraph (often written in a distinct color). This gives enough of an idea that the thought will not be lost if it is left for a few weeks (perhaps even years, depending). Writing a gloss is usually easy, and doing so is often enough to get the ideas flowing.
### Development
This is the kind of writing I described in the ‘notebooks’ section. An idea is fleshed out. This kind of writing is often still comprehensible years later, although it isn’t guaranteed to be.
### Refinement
This is the kind of writing which is publishable. It nails the idea down. There’s not really any end to this—you can imagine expanding something from a blog post, to an academic paper, to a book, and further, with increasing levels of detail, gentle exposition, formal rigor—but to a first approximation, anyway, you’ve eliminated all the contradictions, stated the motivating context accurately, etc.
I called the last item “refinement” rather than “communication” because, really, you can communicate your ideas at any of these stages. If someone shares a lot of context with you, they can understand your jots. That’s really difficult, though. More likely, a research partner will understand your glosses. Development will be understandable to someone a little more distant, and so on.
## At Long Last, Zettelkasten
I’ve been hammering home the idea of “linear” vs “nonlinear” formats as one of the big advantages of Zettelkasten. But Workflowy and Dynalist both allow nonlinear writing. Why should you be interested in Zettelkasten? Is it anything more than a way to implement Workflowy-like writing for a paper format?
I’ve said that (at least for me) there’s something extra-good about Zettelkasten which I don’t really understand. But, there are a couple of important elements which make Zettelkasten more than just paper Workflowy.
• Hierarchy Plus Cross-Links: A repeated theme across knowledge formats, including wikipedia and textbooks, is that you want both a hierarchical organization which makes it easy to get an overview and find things, and also a “cross-reference” type capability which allows related content to be linked—creating a heterarchical web. I mentioned at the beginning that Zettelkasten forced me to create cross-links much more than I otherwise would, due to the use of small note-cards. Workflowy has “hierarchy” down, but it has somewhat poor “cross-link” capability. It has tags, but a tag system is not as powerful as hypertext. Because you can link to individual nodes, it’s possible to use hypertext cross-links—but the process is awkward, since you have to get the link to the node you want. Dynalist is significantly better in this respect—it has an easy way to create a link to anything by searching for it (without leaving the spot you’re at). But it lacks the wiki-style “magic link” capability, creating a new page when you make a link which has no target. Roam, however, provides this feature.
• Atomicity: The idea of creating pages organized around a single idea (again, an idea related to wikis). This is possible in Dynalist, but Zettelkasten practically forces it upon you, which for me was really good. Again, Roam manages to encourage this style.
# Zettelkasten, Part 2: Further Advice
## Card Layout
My cards often look something like this:
I’m left-handed, so you may want to flip all of this around if you’re right-handed. I use the ring binder “backwards” from the intended configuration (the punched hole would usually be on the left, rather than the right). Also, I prefer portrait rather than landscape. Most people prefer to use 3x5 cards in landscape, I suppose.
Anyway, not every card will look exactly like the above. A card might just contain a bunch of free-writing, with no bulleted list. Or it might only contain a bulleted list, with no blurb at the beginning. Whatever works. I think my layout is close to Luhmann’s and close to common advice—but if you try to copy it religiously, you’ll probably feel like Zettelkasten is awkward and restrictive.
The only absolutely necessary thing is the address. The address is the first thing you write on a new card. You don’t ever want a card to go without an address. And it should be in a standard location, so that it is really easy to look through a bunch of cards for one with a specific address.
Don’t feel bad if you start a card and leave it mostly blank forever. Maybe you thought you were going to elaborate an idea, so you made a new card, but it’s got nothing but an address. That’s ok. Maybe you will fill it later. Maybe you won’t. Don’t worry about it.
Mostly, a thought is continued through elaboration on bullet points. I might write something like “cont. 1.1a1a” at the bottom of the card if there’s another card that’s really a direct continuation, though. (Actually, I don’t write “cont.”; I just write the down arrow, which means the same thing.) If so, I’d write “see 1.1a1” in the upper left-hand corner, to indicate that 1.1a1a probably doesn’t make much sense on its own without consulting 1.1a1 -- more so than usual for child cards. (Actually, I’d write another down arrow rather than “see”, mirroring the down arrow on the previous card—this indicates the direct-continuation relationship.)
In the illustration, I wrote links [in square brackets]. The truth is, I often put them in full rectangular boxes (to make them stand out more), although not always. Sometimes I put them in parentheses when I’m using them more as a noun, as in: “I think pizza (12a) might be relevant to pasta. [14x5b]” In that example, “(12a)” is the card for pizza. “[14x5b]” is a card continuing the whole thought “pizza might be relevant to pasta”. So parentheses-vs-box is sort of like top-corner-vs-bottom, but for an individual line rather than a whole card.
Use of Color
The colors are true to my writing as well. For a long time, I wanted to try writing with multi-color click pens, because I knew some people found them very useful; but, I was unable to find any which satisfied my (exceptionally picky) taste. I don’t generally go for ball-point pens; they aren’t smooth enough. I prefer to write with felt-tip drawing pens or similar. I also prefer very fine tips (as a consequence of preferring my writing to be very small, as I mentioned previously) -- although I’ve also found that the appropriate line width varies with my mental state and with the subject matter. Fine lines are better for fine details, and for energetic mental states; broad lines are better for loose free-association and brainstorming, and for tired mental states.
In any case, a friend recommended the Hi-Tec C Coleto, a multi-color click pen which feels as smooth as felt-tip pens usually do (almost). You can buy whatever colors you want, and they’re available in a variety of line-widths, so you can customize it quite a bit.
At first I just used different colors haphazardly. I figured I would eventually settle on meanings for colors, if I just used whatever felt appropriate and experimented. Mostly, that meant that I switched colors to indicate a change of topic, or used a different color when I went back and annotated something (which really helps readability, by the way—black writing with a bunch of black annotations scribbled next to it or between lines is hard to read, compared to purple writing with orange annotations, or whatever!). When I switched to Zettelkasten, though, I got more systematic with my use of color.
I roughly follow Lion Kimbro’s advice about colors, from How to Make a Complete Map of Every Thought you Think:
Your pen has four colors: Red, Green, Blue, and Black
You will want to connect meaning with each color.
Here’s my associations:
RED: Error, Warning, Correction
BLUE: Structure, Diagram, Picture, Links, Keys (in key-value pairs)
GREEN: Meta, Definition, Naming, Brief Annotation, Glyphs
I also use green to clarify sloppy writing later on. Blue is for Keys, Black is for values.
I hope that’s self-explanatory.
If you make a correction, put it in red. Page numbers are blue. If you draw a diagram, make it blue. Main content in black.
Suppose you make a diagram: Start with a big blue box. Put the diagram in the box. (Or the other way around- make the diagram, then the box around it.) Put some highlighted content in black. Want to define a word? Use a green callout. Oops- there’s a problem in the drawing- X it out in red, followed by the correction, in red.
Sometimes, I use black and blue to alternate emphasis. Black and blue are the easiest to see.
If I’m annotating some text in the future, and the text is black, I’ll switch to using blue for content. Or vice versa.
Some annotations are red, if they are major corrections.
Always remember: Tolerate errors. If your black has run out, and you don’t want to get up right away to fetch your backup pen, then just switch to blue. When the thought’s out, go get your backup pen.
The only big differences are that I use brown instead of black in my pen, I tend to use red for titles so that they stand out very clearly, and I use green for links rather than blue.
## Index & Bibliography?
Bibliography
Taking Smart Notes describes two other kinds of cards: indexes, and bibliographical notes. I haven’t made those work for me very effectively, however. Luhmann, the inventor of Zettelkasten, is described as having invented it as a way to organize notes originally made while reading. I don’t use it like that—I mainly use it for organizing notes I make while thinking. So bibliography isn’t of primary importance for me.
(Apparently Umberto Eco similarly advises keeping idea notes and reading notes on separate sets of index cards.)
Indexing
So I don’t miss the bibliography cards. (Maybe I will eventually.) On the other hand, I definitely need some sort of index, but I’m not sure about the best way to keep it up to date. I only notice that I need it when I go looking for a particular card and it is difficult to find! When that happens, and I eventually find the card I wanted, I can jot down its address in an index. But, it would be nice to somehow avoid this. So, I’ve experimented with some ideas. Here are someone else’s thoughts on indexing (for a digital zettelkasten).
Listing Assorted Cards
The first type of index which I tried lists “important” cards (cards which I refer to often). I just have one of these right now. The idea is that you write a card’s name and address on this index if you find that you’ve had difficulty locating a card and wished it had been listed in your index. This sounds like it should be better than a simple list of the top-level numbered cards, since (as I mentioned earlier) cards like 11a often turn out to be more important than cards like 11. Unfortunately, I’ve found this not to be the case. The problem is that this kind of index is too hard to maintain. If I’ve just been struggling to find a card, my working memory is probably already over-taxed with all the stuff I wanted to do after finding that card. So I forget to add it to the index.
Topic Index
Sometimes it also makes sense to just make a new top-level card on which you list everything which has to do with a particular category. I have only done this once so far. It seems like this is the main mode of indexing which other people use? But I don’t like the idea that well.
Listing Sibling Cards
When a card has enough children that they’re difficult to keep track of, I add a “zero” card before all the other children, and this works as an index. So, for example, card 2a might have children 2a1, 2a2, 2a3, … 2a15. That’s a lot to keep track of. So I add 2a0, which gets an entry for 2a1-2a15, and any new cards added under 2a. It can also get an entry for particularly important descendants; maybe 2a3a1 is extra important and gets an entry.
For cards like 2, whose children are alphabetical, you can’t really use “zero” to go before all the other children. I use “λ” as the “alphabetical zero”—I sort it as if it comes before all the other letters in the alphabet. So, card “2λ” lists 2a, 2b, etc.
The most important index is the index at 0, i.e., the index of all top-level numbered cards. As I describe in the “card layout” section, a card already mostly lists its own children—meaning that you don’t need to add a new card to serve this purpose until things get unwieldy. However, top-level cards have no parents to keep track of them! So, you probably want an “absolute zero” card right away.
These “zero” cards also make it easier to keep track of whether a card with a particular address has been created yet. Every time you make a card, you add it to the appropriate zero card; so, you can see right away what the next address available is. This isn’t the case otherwise, especially if your cards aren’t currently sorted.
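If you ever mirror this scheme digitally, the sorting rule is easy to state precisely. Here is a minimal sketch in Python (purely illustrative; the function names are mine, and decimal top-level numbers like 1.1 would need a little extra handling): an address is a sequence of alternating numeric and alphabetic segments, and a “0” or “λ” segment sorts before all of its siblings.

```python
import re

def segments(address):
    """Split a card address into its alternating parts:
    '2a3a1' -> ['2', 'a', '3', 'a', '1'], '2λ' -> ['2', 'λ']."""
    return re.findall(r"\d+|λ|[a-z]+", address)

def _segment_key(seg):
    # Index cards sort before all of their siblings:
    # '0' before 1, 2, ... and 'λ' before a, b, ...
    if seg in ("0", "λ"):
        return (0, 0, "")
    if seg.isdigit():
        return (1, int(seg), "")
    return (1, 0, seg)

def sort_key(address):
    """Key for putting a shuffled pile of cards back in shelf order."""
    return [_segment_key(seg) for seg in segments(address)]

cards = ["2a3", "2a", "2λ", "2a0", "2a15", "2", "2a3a1"]
print(sorted(cards, key=sort_key))
# ['2', '2λ', '2a', '2a0', '2a3', '2a3a1', '2a15']
```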
Kimbro’s Mind Mapping
I’ve experimented with adapting Lion Kimbro’s system from How to Make a Complete Map of Every Thought You Think. After all, a complete map of every thought you think sounds like the perfect index!
In my terminology, Lion Kimbro keeps only jots—he was focusing on collecting and mapping, rather than developing, ideas. Jots were collected into topics and sub-topics. When an area accumulated enough jots, he would start a mind map for it. I won’t go into all his specific mapping tips (although they’re relevant), but basically, imagine putting the addresses of cards into clusters (on a new blank card) and then writing “anchor words” describing the clusters.
You built your tree in an initially “top-down” fashion, expanding trees by adding increasingly-nested cards. You’re going to build the map “bottom-up”: when a sub-tree you’re interested in feels too large to quickly grasp, start a map. Let’s say you’re mapping card 8b4. You might already have an index of children at 8b4; if that’s the case, you can start with that. Also look through all the descendants of 8b4 and pick out whichever seem most important. (If this is too hard, start by making maps for 8b4’s children, and return to mapping 8b4 later.) Draw a new mind map, and place it at 8b4a—it is part of the index; you want to find it easily when looking at the index.
Now, the important thing is that when you make a map for 8b, you can take a look at the map for 8b4, as well as any maps possessed by other children of 8b. This means that you don’t have to go through all of the descendants of 8b (which is good, because there could be a lot). You just look at the maps, which already give you an overview. The map for 8b is going to take the most important-seeming elements from all of those sub-maps.
This allows important things to trickle up to the top. When you make a map at 0, you’ll be getting all the most important stuff from deep sub-trees just by looking at the maps for each top-level numbered card.
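Put differently, the mapping step is a recursive aggregation: a card’s map is assembled from its children and their (already truncated) maps, not from a crawl of every descendant. Here is a rough Python sketch of just that idea, with made-up names; the `importance` judgment and the fan-out limit stand in for whatever you would do by eye when drawing the map.

```python
def build_maps(root, children_of, importance, fan_out=7):
    """Bottom-up mapping: each card's map is drawn from its children's maps.
    Returns {address: the most important entries to show on that card's map}."""
    maps = {}

    def visit(card):
        entries = []
        for child in children_of.get(card, []):
            visit(child)
            entries.append(child)        # the child itself
            entries.extend(maps[child])  # plus whatever trickled up to its map
        entries.sort(key=importance, reverse=True)
        maps[card] = entries[:fan_out]   # keep only what fits legibly on one card

    visit(root)
    return maps
```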
The categories which emerge from mapping like this can be completely different from the concepts which initially seeded your top-level cards. You can make new top-level cards which correspond to these categories if you want. (I haven’t done this.)
Now, when you’re looking for something, you start at your top-level map. You look at the clusters and likely have some expectation about where it is (if the address isn’t somewhere on your top-level map already). You follow the addresses to further maps, which give further clusters of addresses, until you land in a tree which is small enough to navigate without maps.
I’ve described all of this as if it’s a one-time operation, but of course you keep adding to these maps, and re-draw updated maps when things don’t fit well any more. If a map lives at 8b40a, then the updated maps can be 8b40b, 8b40c, and so on. You can keep the old maps around as a historical record of your shifting conceptual clusters.
## Keeping Multiple Zettelkasten
A note system like Zettelkasten (or workflowy, dynalist, evernote, etc) is supposed to stick with you for years, growing with you and becoming a repository for your ideas. It’s a big commitment.
It’s difficult to optimize note-taking if you think of it that way, though. You can’t experiment if you have to look before you leap. I would have never tried Zettelkasten if I thought I was committing to try it as my “next system”—I didn’t think it would work.
Similarly, I can’t optimize my Zettelkasten very well with that attitude. A Zettelkasten is supposed to be one repository for everything—you’re not supposed to start a new one for a new project, for example. But, I have several Zettelkasten, to test out different formats: different sizes of card, different binders. It is still difficult to give alternatives a fair shake, because my two main Zettelkasten have built up momentum due to the content I keep in them.
I use a system of capital letters to cross reference between my Zettelkasten. For example, my main 3x5 Zettelkasten is “S” (for “small”). I have another Zettelkasten which is “M”, and also an “L”. When referencing card 1.1a within S, I just call it 1.1a. If I want to refer to it from a card in M, I call it S1.1a instead. And so on.
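(If you ever want to handle these references in software, the convention is easy to parse; a tiny sketch, with my own hypothetical function name:)

```python
import re

def parse_reference(ref, current="S"):
    """Split 'S1.1a' or '1.1a' into (collection, address); a bare address
    refers to the collection the referring card lives in."""
    prefix, address = re.fullmatch(r"([A-Z]?)(.+)", ref).groups()
    return (prefix or current, address)

print(parse_reference("1.1a", current="M"))   # ('M', '1.1a')
print(parse_reference("S1.1a", current="M"))  # ('S', '1.1a')
```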
Apparently Luhmann did something similar, starting a new Zettelkasten which occasionally referred to his first.
However, keeping multiple Zettelkasten for special topics is not necessarily a good idea. Beware fixed categories. The danger is that categories limit what you write, or become less appropriate over time. I’ve tried special-topic notebooks in the past, and while it does sometimes work, I often end up conflicted about where to put something. (Granted, I have a similar conflict about where to put things in my several omni-topic Zettelkasten, but mostly the 3x5 system I’ve described here has won out—for now.)
On the other hand, I suspect it’s fine to create special topic zettelkasten for “very different” things. Creating a new zettelkasten because you’re writing a new book is probably bad—although it’ll work fine for the goal of organizing material for writing books, it means your next book idea isn’t coming from Zettelkasten. (Zettelkasten should contain/extend the thought process which generates book ideas in the first place, and it can’t do that very well if you have to have a specific book idea in order to start a zettelkasten about it.) On the other hand, I suspect it is OK to keep a separate Zettelkasten for fictional creative writing. Factual ideas can spark ideas for fiction, but, the two are sufficiently different “modes” that it may make sense to keep them in physically separate collections.
The idea of using an extended address system to make references between multiple Zettelkasten can also be applied to address other things, outside of your Zettelkasten. For example, you might want to come up with a way of adding addresses to your old notebooks so that you can refer to them easily. (For example, “notebook number: page number” could work.)
## Should You Transfer Old Notes Into Zettelkasten?
Relatedly, since Zettelkasten ideally becomes a library of all the things you have been thinking about, it might be tempting to try and transfer everything from your existing notes into Zettelkasten.
(A lot of readers may not even be tempted to do this, given the amount of work it would take. Yet, those more serious about note systems might think this is a good idea—or, might be too afraid to try Zettelkasten because they think they’d have to do this.)
I think transferring older stuff into Zettelkasten can be useful, but, trying to make it happen right away as one big project is most likely not worth it.
• It’s true that part of the usefulness of Zettelkasten is the interconnected web of ideas which builds up over time, and the “high-surface-area” format which makes it easy to branch off any part. However, not all the payoff is long-term: it should also be useful in the moment. You’re not only writing notes because they may help you develop ideas in the future; the act of writing the notes should be helping you develop ideas now.
• You should probably only spend time putting ideas into Zettelkasten if you’re excited about further developing those ideas right now. You should not just be copying over ideas into Zettelkasten. You should be improving ideas, thinking about where to place them in your address hierarchy, interlinking them with other ideas in your Zettelkasten via address links, and taking notes on any new ideas sparked by this process. Trying to put all your old notes into Zettelkasten at once will likely make you feel hurried and unwilling to develop things further as you go. This will result in a pile of mediocre notes which will ultimately be less useful.
• I mentioned the breadth-first vs depth-first distinction earlier. Putting all of your old notes into Zettelkasten is an extremely breadth-first strategy, which likely doesn’t give you enough time to go deep into further developing any one idea.
What about the dream of having all your notes in one beautiful format? Well, it is true that old notes in different formats may be harder to find, since you have to remember what format the note you want was written in, or check all your old note systems to find the note you want. I think it just isn’t worth the cost to fix this problem, though, especially since you should probably try many different systems to find a good one that works for you, and you can’t very well port all your notes to each new system.
Zettelkasten should be an overall improvement compared to a normal notebook—if it isn’t, you have no business using it. Adding a huge up-front cost of transferring notes undermines that. Just pick Zettelkasten up when you want to use it to develop ideas further.
Speaking of depth-first vs breadth-first, how should you balance those two modes?
Luckily, this problem has some relevant computer science theory behind it. I tend to think of it in terms of iterative-deepening A* heuristic search (IDA*).
The basic idea is this: the advantage of depth-first search is that you can minimize memory cost by only maintaining the information related to the path you are currently trying. However, depth-first search can easily get stuck down a fruitless path, while breadth-first search has better guarantees. IDA* balances the two approaches by going depth-first, but giving up when you get too deep, backing up, and trying a new path. (The A* aspect is that you define “too deep” in a way which also depends on how promising a path seems, based on an optimistic assessment.) This way, you simulate a breadth-first search by a series of depth-first sprints. This lets you focus your attention on a small set of ideas at one time.
Once you’ve explored all the paths to a certain level, your tolerance defining “too deep” increases, and you start again. You can think of this as becoming increasingly willing to spend a lot of time going down difficult technical paths as you confirm that easier options don’t exist.
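For readers who want the algorithm itself rather than the analogy, here is a minimal, textbook-style sketch of IDA* in Python (generic search code, nothing note-taking-specific; `h` is the optimistic estimate, and `successors` yields `(child, step_cost)` pairs):

```python
import math

def ida_star(start, h, successors, is_goal):
    """Iterative-deepening A*: a series of depth-first sprints, each bounded
    by f = g + h; the bound ("how deep is too deep") grows only as needed."""

    def sprint(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                      # too deep for this sprint; report its promise
        if is_goal(node):
            return path
        smallest_excess = math.inf
        for child, cost in successors(node):
            if child in path:             # don't revisit nodes on the current path
                continue
            result = sprint(path + [child], g + cost, bound)
            if isinstance(result, list):
                return result             # goal found down this path
            smallest_excess = min(smallest_excess, result)
        return smallest_excess

    bound = h(start)                      # start with the most optimistic bound
    while True:
        result = sprint([start], 0, bound)
        if isinstance(result, list):
            return result                 # path from start to goal
        if result == math.inf:
            return None                   # search space exhausted
        bound = result                    # become willing to go a little deeper

```

The last line is the research-strategy moral: the willingness to go deep only increases after the shallower, more promising options have been checked.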
Of course, this isn’t a perfect model of what you should do. But, it seems to me that a note-taking system should aspire to support and encourage something resembling this. More generally, I want to get across the idea of thinking of your existing research methodology as an algorithm (possibly a bad one), and trying to think about how it could be improved. Don’t try to force yourself to use any particular algorithm just because you think you should; but, if you can find ways to nudge yourself toward more effective algorithms, that’s probably a good idea.
## Inventing Shorthand/Symbology
I don’t think writing speed is a big bottleneck to thinking speed. Even though I “think by writing”, a lot of my time is spent… well… thinking. However, once I know what I want to write, writing does take time. When inspiration really strikes, I might know more or less what I want to say several paragraphs ahead of where I’ve actually written to. At times like that, it seems like every second counts—the faster I write, the more ideas I get down, the less I forget before I get to it.
So, it seems worth putting some effort into writing faster. (Computer typing is obviously a thing to consider here, too.) Shorthand, and special symbols, are something to try.
There’s also the issue of space. I know I advocate for small cards, which intentionally limit space. But you don’t want to waste space if you don’t have to. The point is to comprehend as much as possible as easily as possible. Writing bullet points and using indentation to make outlines is an improvement over traditional paragraphs because it lets you see more at a glance. Similarly, using abbreviations and special symbols improves this further.
I’ve tried several times to learn “proper” shorthand. Maybe I just haven’t tried hard enough, but it seems like basically all shorthand systems work by leaving out information. Once you’re used to them, they’re easy enough to read shortly after you’ve written them—when you still remember more or less what they said. However, they don’t actually convey enough information to fully recover what was written if you don’t have such a guess. Basically, they don’t improve readability. They compress things down to the point where they’re hard to decipher, for the sake of getting as much speed as possible.
On the other hand, I’ve spent time experimenting with changes to my own handwriting which improve speed without compromising readability. Pay attention to what takes you the most time to write, and think about ways to streamline that.
Lion Kimbro emphasizes coming up with ways to abbreviate things you commonly repeat. He describes using the Japanese symbols for days of the week and other common things in his system. Personally, I’ve experimented with a variety of different reference symbols which mean different sorts of things (beyond the () vs [] distinction I’ve mentioned).
The Bullet Journaling community has thought a lot about short-and-fast writing for the purpose of getting things out quickly and leaving more space on the page. They also have their own symbology which may be worth taking a look at. (I don’t yet use it, but I may switch to it or something similar eventually.)
Well, that’s all I want to say for now. I may add to this document in the future. For now, best of luck developing ideas!
• Signal-boosting this:
I honestly didn’t think Zettelkasten sounded like a good idea before I tried it. It only took me about 30 minutes of working with the cards to decide that it was really good. So, if you’re like me, this is a cheap experiment. I think a lot of people should actually try it to see how they like it, even if it sounds terrible.
You’d think that as someone who holds advice from Abram in high regard, as someone who verbally agreed to try Zettelkasten, and as someone who knows about trivial inconveniences and the value of cheap experiments with large potential upside, I would have actually tried Zettelkasten when Abram suggested it… but I didn’t. I bought the stuff, but then I took a while to actually try it. This was a mistake—AFAICT, Zettelkasten has indeed significantly boosted my productivity, at least as far as it comes to idea/proof generation. So, don’t make that mistake.
(I haven’t read this post yet, only the earlier draft, but I wanted to make this comment before I forget.)
• Yeah, I think people just mostly need multiple nudges in order to try this.
• Meta note: I think it’s a pretty big problem that even with reports by many high performing people that finding a particular creativity technique that resonated with them after investing some effort in trying that boosted their output by a multiple, people mostly don’t seem to be able to take such claims seriously enough to invest the effort of trying. Secondly, such techniques usually give a boost for some time before dropping back towards baseline as you mine out the novel connection types that that technique causes. This also points towards the importance of having a meta-heuristic in place to regularly invest in trying new ones.
• It’s not clear that we should take such claims seriously. (At the very least, there’d need to be some attempt to correct for the obvious selection bias…)
Furthermore, as far as this particular thing goes, I skimmed, then text-searched, and saw no discussion of what advantages this system (which has some pretty major strikes against it—foremost among which is the use of actual paper!) has over a wiki.
Edit: You seem to have edited your comment after I responded; so the following concerns the current version as of this writing.
You say:
I think it’s a pretty big problem that even with reports by many high performing people that finding a particular creativity technique that resonated with them after investing some effort in trying that boosted their output by a multiple, people mostly don’t seem to be able to take such claims seriously enough to invest the effort of trying.
But then you also say:
Secondly, such techniques usually give a boost for some time before dropping back towards baseline as you mine out the novel connection types that that technique causes.
But the latter is, obviously, an excellent reason for the former! People mostly don’t take such claims seriously… because they know perfectly well that said claims mostly are not true.
• Generally, I think it is worth trying both this and a wiki. For me, Zettelkasten has some magical qualities. I have not kept using the personal wikis I have tried. I don’t know fully what the differences are. But, I can point to a couple of specific things:
• A wiki does not enforce discipline in keeping ideas atomic. Imagine a hybrid between twitter and a wiki, where there’s a character limit on wiki pages—that would be closer to Zettelkasten. It forces you to break things up, which can result in reifying concepts which otherwise would be a forgettable paragraph in a longer text.
• A wiki does not force hierarchical organization. You can create a disorganized mess of pages with a lot of orphans and no high-level overview. This can also happen in Zettelkasten, but to me, it feels less likely due to the intrinsically hierarchical page numbering. (As I mentioned, workflowy seems better than zettelkasten in this specific respect. But wikis seem worse.)
• A wiki does not enforce discipline in keeping ideas atomic.
Suppose there were a wiki platform that did this (had a max character limit on wiki pages). Would you use it?
(I ask because it would be, while not trivial, nevertheless relatively straightforward for me to implement such a feature for PmWiki.)
• The original claim Abram made was:
However, I strongly recommend trying out Zettelkasten on actual note-cards, even if you end up implementing it on a computer. [reversal of emphasis mine]
There’s something good about it that I don’t fully understand. As such, I would advise against trusting other people’s attempts to distill what makes Zettelkasten good into a digital format—better to try it yourself, so that you can then judge whether alternate versions are improvements for you. The version I will describe here is fairly close to the original.
It may be that you can easily build a wiki that does all the things. Abram wasn’t saying you can’t – just that you might be likely to end up missing some of the active ingredients. Maybe the character count will do the trick (but would you have thought to impose that limit on yourself?).
This is more of a Chesterton’s fence argument. You seem to be saying “obviously a wiki would be better, why can’t we just address all the individual concerns?”, and well, sure, maybe you can – but you may run the risk of various Seeing Like a State-esque concerns of not noticing subtle interactions.
(Something that came to mind here was an old comment (I think by you?) about World of Warcraft making the game worse when they streamlined group-finding. i.e. one might think the point of group finding is to find groups, and that you want to streamline that as much as possible. But actually the process of finding and building a group was also more like an important part of the game, than a cost to be paid to ‘get to the real game’)
Some guesses about things that might have been relevant to Abram’s experience (which may or may not generalize) are:
– having physical cards that are small lets you rearrange them in front of you. (i.e. this is more like a whiteboard, or, in the software world, something more like mind-mapping than a wiki)
– making the “linking” more labor intensive might be a feature rather than a bug. The point might not be to have the links represented somewhere, it might be for your brain to actually build up stronger connections between related things.
These both seem like things you can implement in software, but it’ll matter a lot how smooth the experience is. (I haven’t yet found a mind-mapping software that quite did the thing, period, let alone one that also worked as a Twitter-Wiki)
• Re: trying out on cards first, then perhaps implementing digitally:
Yes, this is a fair point. I didn’t pay attention to that part, but I have no quarrel with it.
Re: the Chesterton’s fence argument:
Likewise agreed. I think what should be useful is some more investigation into what it is, exactly, about the paper-based approach that is valuable (if anything! perhaps advantages are illusory? or perhaps not). Perhaps some experimentation by people with both methods, e.g.
I think one distinctive feature of this case (as compared to other “Chesterton’s fence” cases) is that the advantages of the proposed substitute (i.e., digital formats) are simply so great. Searchability, editability, hyperlinking, multimedia, multiple views, backup, archiving, automatic format conversion, reuse, etc., etc. The question thus becomes not “are there any advantages to paper”, but rather the twofold questions of “are there any advantages to paper that are so great as to outweigh those of digital (and thus would convince us to stick with paper)” and “are there any advantages to paper that we may replicate in the digital version”.
(Naturally, I agree that it’s of great importance in any case to know what the advantages of paper are, in order that we may judge them.)
(Something that came to mind here was an old comment (I think by you?) about World of Warcraft making the game worse when they streamlined group-finding. i.e. one might think the point of group finding is to find groups, and that you want to streamline that as much as possible. But actually the process of finding and building a group was also more like an important part of the game, than a cost to be paid to ‘get to the real game’)
I don’t have a ready link, but yes, this was almost certainly one of my comments. So, indeed, good point, and likewise I think your specific suggestions for possible advantages of the paper format are very plausible, given my own experiences.
These both seem like things you can implement in software, but it’ll matter a lot how smooth the experience is. (I haven’t yet found a mind-mapping software that quite did the thing, period, let alone one that also worked as a Twitter-Wiki)
Neither have I, sad to say. I looked into mind-mapping software a bit (not as deeply as I’d like), and didn’t turn up anything that stood out to me in that domain nearly as much as PmWiki in the wiki class. I remain hopeful that such is possible to design, but not, I suppose, too hopeful…
• I think one distinctive feature of this case (as compared to other “Chesterton’s fence” cases) is that the advantages of the proposed substitute (i.e., digital formats) are simply so great. Searchability, editability, hyperlinking, multimedia, multiple views, backup, archiving, automatic format conversion, reuse, etc., etc. The question thus becomes not “are there any advantages to paper”, but rather the twofold questions of “are there any advantages to paper that are so great as to outweigh those of digital (and thus would convince us to stick with paper)” and “are there any advantages to paper that we may replicate in the digital version”.
Nod, although in my mind this is more of a central example of a Chesterton’s fence than an outlier – the reason Chesterton needed to coin the maxim is that the benefits often seem great. (And, for that reason, the injunction isn’t to not tear down the fence, simply to make sure you understand it first)
• Suppose there were a wiki platform that did this (had a max character limit on wiki pages). Would you use it?
Probably not, but I see why you would ask—it’s a reasonable test for the claim I’m making.
On a computer, I’ve preferred outlining-type tools to wiki-type tools by a lot, although combining the advantages of both seems like a good idea. Part of the reason is that outlining tools reward you for splitting things up (by allowing you to fold up tree structures to see as much relevant stuff as possible at a given time, and make structured comments on things, etc). Wikis punish you for splitting things up (you can’t see things anymore when you click away from them, you have to open multiple tabs or such).
I also think a character-count limit is not as good as a limited-size sheet of paper. Character count feels inflexible. Small sheets of paper, on the other hand, allow you to write smaller if you really want to fit more, squeeze stuff in margins, and so on. (I’m not sure why that’s good—it could be that I’m merely more familiar with paper and so feel less awkward about it.)
As I mentioned elsewhere, I also suspect that now that I’ve seen how nice it is to be forced to make concepts really atomic, I could transfer the skill to a format with less stringent limitations. But I’ve also seen that I easily “back-slide” when writing on larger paper, so, this may not really be the case.
I also agree with Raemon’s response.
• You can create a disorganized mess of pages with a lot of orphans and no high-level overview.
Well… it’s not easy to create orphaned pages with a (decent) wiki; certainly you’re not likely to do so by accident. (As for a high-level overview, well, that takes special effort to construct regardless of your platform of choice.)
• Hmm, now I want to try this with a wiki with a precommitment to stick to a certain word count and hierarchical organization.
• But the latter is, obviously, an excellent reason for the former! People mostly don’t take such claims seriously… because they know perfectly well that said claims mostly are not true.
I think that high-performing people reporting a thing works very well for them is some evidence that the thing works. I agree that these things will often not work anyways, sometimes for idiosyncratic reasons, sometimes due to the selection bias you mentioned, and so on. But I try new things because trying new things is usually cheap, and has high potential upside. Buying the materials cost $59.02 (although a more bare-bones setup could probably be assembled for ~$20.00), and I spent about 40 minutes determining whether this system seemed better. This was a cheap test for me.
I understand your claim as: people (correctly) don’t try these things out because they know that the techniques probably won’t help them. But I claim that regularly trying new things is a very good idea, and that prioritizing things recommended by high-performing people is a good idea. Why would the expected value of these experiments be negative?
• Notably, people could commit in this thread to trying this method for some length of time and then writing up their experience with it for LW. That would help address some of the obvious selection effect.
• If “such techniques usually give a boost for some time before dropping back towards baseline”, the obvious way to use this information would seem to be starting a new note-taking system every so often. That way you can keep on taking advantage of the boost, at least as long as you can keep finding new systems (which may eventually become a problem, but even so doesn’t leave you worse off than before). Of course, this does suggest a bound on how many resources you should invest in these new systems.
• Interesting point. Perhaps it’s the enhanced attention usually associated with trying something new that enables people to pay closer attention to their thoughts while writing notes using a new method. This could possibly lead to higher retention of creative thoughts.
• Agree that this seems like a significant part of it.
2. I’m super excited to try it! There’s something that just immediately made sense / called out to me about it. Specifically about the fact that these are physical cards. I’m guessing it’s similar to why you like this method as well.
3. I ordered the supplies. By the end of October I promise I will write up a post / comment with how this method went for me.
• How’d it go?
• Just remembered today too.
Overall, I created about 30+ cards. (I think the number is more a function of how much time I spend learning new things than anything else.) Mostly the cards are about statistics + math, but today I started creating cards for music theory. I’m not as in love with the system as I thought I would be, but it definitely feels like an addition to my existing systems rather than a replacement. I’m currently creating cards for things I normally wouldn’t write down. I think that’s a good thing.
Mostly I’ve just been adding cards. I think I only looked through the cards twice. But right now I’m rewatching some videos I watched earlier today so I can create cards for the things I found useful and then go and apply them to the piece of music I’m writing. So, overall, it definitely seems useful.
• It just occurred to me that with math, but especially with music theory, it’s hard to take notes in a digital system (like Workflowy), because you can’t easily draw symbols.
• I strongly recommend trying out Zettelkasten on actual note-cards, even if you end up implementing it on a computer. There’s something good about it that I don’t fully understand. As such, I would advise against trusting other people’s attempts to distill what makes Zettelkasten good into a digital format—better to try it yourself, so that you can then judge whether alternate versions are improvements for you.
I really wish you’d spent more time talking about this. Having a bulk of the article concerned with physical storage issues seems to indicate that it’s a bug and not a feature of the method. I have a few thousand journal entries on my computer at the moment. While I could physically print everything out optimized for physical search, I don’t, because it’d simply take too long to sort through the cards.
Sustainability is what determines whether or not I stick to a method. It doesn’t matter how great the method is if I’m going to stop using it after a while. I need methods that can last for a lifetime. Honestly, the biggest problem with Zettelkasten methods is simply a lack of imagination. What if you could dynamically re-sort the indices? It’d be amazing if you could define ‘current interests’ and then the indices would be optimized to allow you to search faster for that stuff. What if it really was a second brain? It’d be cool if the journal could re-sort itself or offer suggestions for what to write next. These are things that encourage a move towards digital options.
• I wish I could have said more, but I don’t exactly have an explanation for why paper zettelkasten is particularly good in my case. I suspect I got good habits from paper zettelkasten which I could now implement on digital versions, but which I wouldn’t have learned if I had started out implementing it digitally. In other words, the constraints of the paper version were good for me.
There are pros and cons to digital formats. Historically, I go back and forth. I agree that it’s worth seriously trying, but I also think paper is worth seriously trying.
• Typing is somewhat faster than handwriting.
• Editing is significantly better in digital.
• Search is significantly better in most digital formats.
• Many typewritten formats have limited access to math symbols or make them harder to use than on paper. Basically all typewritten formats, if you want to invent your own symbology freely.
• I don’t know of a digital system which allows switching between written and drawn content as easily as one can do on paper. This should be possible with touchscreen laptops with high-quality stylus input, but I don’t know of software which makes it nice. (Maybe OneNote, but it lacks enough other features that it didn’t seem like a real option for me.)
• Digital formats are greatly constrained by the features implemented on a given platform. I found myself spending a lot of time looking for the best apps, always getting a compromise between different feature sets I wanted.
• I seem to be really picky about stylus input; most systems do not feel comfortable for regular use (including the Apple Pencil).
• Sometimes I just feel like writing on paper.
I definitely think digital formats are still worth using sometimes (for me). I think a lot of this depends on the particular person.
• Many typewritten formats have limited access to math symbols
In case you don’t already know, you can use unicode to type things like ω₀ ≲ ∫ ±√(Δμ)↦✔·∂∇² and so on directly into a web browser text box, or into almost any other text entry form of any computer program: I made a tutorial here with details.
There’s a learning curve for sure, but I can now type my 10-20 favorite special characters & greek letters only slightly slower than typing normal text, or at least fast enough that I don’t lose my train of thought.
It’s obviously not a substitute for LaTeX or pen&paper, but I still find it very helpful for things like emails, python code, spreadsheets, etc., where LaTeX or pen&paper aren’t really options.
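For what it’s worth, the payoff in code is real too: Python 3 accepts most Greek letters as identifiers (though not symbols like √ or subscript digits like ₀), so a quick script can keep its usual notation. A small illustrative example:

```python
import math

# Greek letters are valid Python 3 identifiers (symbols such as √ or
# subscript digits such as ₀ are not), so formulas can keep their notation.
Δt = 0.01
ω = 2 * math.pi * 50      # angular frequency for a 50 Hz signal
θ = ω * Δt                # phase advanced in one time step
print(round(θ, 4))        # 3.1416
```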
• Nice, thanks! I knew that I could type a fair amount of unicode on the mac keyboard via the ‘alt’ key (for example, ¬ is alt+l), but this might be helpful for cases which aren’t covered by that.
I’m a little paranoid about whether unicode will render properly for other people, since I still occasionally find myself in situations where I’m unable to read unicode which others have sent me (eg when reading on certain phone apps).
• I don’t know of a digital system which allows switching between written and drawn content as easily as one can do on paper.
ClarisWorks did this (and, to a lesser extent, similar software, which unfortunately is increasingly rare these days).
• We’re working on it with Roam.
Agree it’s a big deal
• Many typewritten formats have limited access to math symbols or make them harder to use than on paper. Basically all typewritten formats, if you want to invent your own symbology freely.
This is true, but if you take LaTeX to be a sufficiently close approximation to the ease of paper, then there are many software platforms that should suit you.
• If Latex is the only constraint, then this is true. However, as you multiply requirements, the list of software grows shorter and shorter. In a lot of cases it is easier to modify a paper system to get what you want than it is to modify a software option (if it’s even open-source).
Roam and Dynalist are both good options for outlining tools with LaTeX support. Roam has wiki-style links as well (and other features inspired by Zettelkasten). Of course some (most?) wiki software supports LaTeX.
• I agree with your sentiment. There is something about working with a physical object that a screen doesn’t capture. I think it’s kind of an object-permanence thing: having a physical handle makes it easier to remember where things are, whereas with a screen more cognitive effort has to be expended. I feel this effect most strongly with books, but that’s getting a bit off on a tangent.
I definitely think digital formats are still worth using sometimes (for me). I think a lot of this depends on the particular person.
I agree with this also. The reason I use digital is because I write scripts to manage my journal and to-do lists. I basically got sick of having a growing stack of notebooks I’d never have the time to organize and wanted a better solution. This is a real problem. I have notebooks from when I was a kid that I occasionally want to reference and then I realize they’re 1k miles away at home.
• Seconded. This sort of thing is exactly what I meant when I asked what advantages this has over a wiki. If I wanted to do something like this, I’d use a wiki for sure!
• The link between handwriting and higher brain function has been studied a lot; it seems that, at least for recall and memory, writing things down by hand is very helpful, so it is likely that more neural connections are formed when using actual note cards. Just one random study: https://journals.sagepub.com/doi/abs/10.1177/154193120905302218 (via https://whereisscihub.now.sh/ )
For a similar reason I still take handwritten notes at conferences; I almost never review them, but it helps me remember. The whole point of an archive system is to help me find notes when I need them, so the extra overhead seems worth it.
• Excellent write-up!
Anecdatum: I got into Zettelkasten before I knew what it was called after reading a post by Ryan Holiday circa 2013 (he recommends physical cards and slip boxes, too). It’s profoundly improved my writing, my ability to retain information, and synthesis of new ideas, even though I was doing it ‘wrong’ or sub-optimally most of that time.
In terms of systems: I always thought using paper index cards was bonkers, given we have these newfangled things called ‘computers’, but your post makes a much more compelling case than anything else I’ve read (including the Smart Notes book, which is very good). So I’m pretty curious to give it a try.
My only major reservation is around portability and security. At this point, my (digital) slip-box is literally the single most valuable thing I own. I know Ryan Holiday uses fireproof safes etc, but it seems like it would get pretty cumbersome, especially once you have tens of thousands of notes.
I’ve been helping Conor and Josh out with Roam because I’m excited about the power-user features, but I’m pretty confident that any practice of this nature would be beneficial to students, researchers, and writers. Prior to Roam, I was using a mixture of Google Docs, Evernote, etc, which wasn’t optimal, but still worked OK.
An important point you touched on which is worth stressing: the benefits of Zettelkasten accrue in a non-linear fashion over time, as the graph becomes more connected. So even if you ‘get it’ as soon as you start playing around with the cards, you could reasonably expect to reap much greater gains over a timespan of months or years (at least, that’s my experience!).
• How do you suppose this compares to the likes of Anki or Mnemosyne?
• Not the OP, but as someone who uses both: in my mind, they’re categorically different. Anki is for memorisation of discrete chunks of knowledge, for rote responses (i.e. deliberately Cached Thoughts), and for periodic reminders of things.
Zettelkasten helps with information retention too, but that’s mostly a happy side-effect of the desired goal, which (for me) is synthesis. Every time I input a new chunk of knowledge, I have to decide where I should ‘hang’ it in my existing graph, what it rhymes with, whether it creates dissonance, and how it might be useful to current or future projects.
Once it’s hanging in the lattice somewhere, I can reference and remix it as often as I want, and effectively have a bunch of building blocks ready and waiting to stack together for writing projects or problem-solving. It’s fine if I can’t remember most of this stuff in detail; it’s much more of an ‘exo-brain’ than Anki, IMO.
• Right, I agree with this. I never managed to keep using Anki-like software for anything, but, the purpose is quite different.
• One way to think about a notebook vs Anki/Mnemosyne is that Anki/Mnemosyne offers faster reads at the expense of slower writes.
if, over your lifetime, you will spend more than 5 minutes looking something up or will lose more than 5 minutes as a result of not knowing something, then it’s worthwhile to memorize it with spaced repetition. 5 minutes is the line that divides trivia from useful data.
In other words, with Anki/Mnemosyne, you have to spend ~5 minutes of additional effort writing the info to your brain (relative to just writing it in a notebook). But once it’s written to your brain, presumably it will be a bit faster to recall than if it’s in your notebook.
I’m a bit of an Anki/Mnemosyne skeptic for the following reasons:
• I think it’s pretty rare that you will actually spend more than 5 minutes looking something up. Looking up e.g. a formula on Google is going to take on the order of 10 seconds. How many formulas are you going to realistically look up more than 30 times over the course of your life?
• Remember, if you find yourself using a formula that often, you’ll plausibly find yourself accidentally memorizing it anyway! To help with this, you could always challenge yourself to recall things from memory before Googling for them. Sort of a “just in time”/opportunistic approach to building useful long-term memories.
• I’m not totally convinced that it actually offers a substantial speedup for reads. Suppose I’ve “memorized” some formula using Anki. If I haven’t actually seen the card recently, it could easily take several seconds, perhaps even 10+ seconds, for me to recall it from memory.
• Even if you think you recall a formula, if it’s an actually important application, you’ll likely want to look it up anyway to be sure.
• Anki/Mnemosyne seem bad for knowledge which changes, such as research ideas.
If Anki/Mnemosyne have value, I think it is probably in constructing better mental chunks. It’s not about the cost of looking something up, it’s about the context switch when you’re trying to use that idea as a subcomponent of a larger mental operation.
You could also argue that the value in Anki/Mnemosyne comes from knowing that there is something there to look up, as opposed to not having to look it up. However, a good notebook structure can mitigate that problem (whenever you learn some interesting info, add it to the pages associated with whichever future situations it could be useful in, so you won’t have to remember to look it up when you’re in that situation). Additionally, I think Anki/Mnemosyne may be overkill for just knowing that something exists. (Though deeper understanding could be good for noticing deeper conceptual isomorphisms.)
Personally, I prefer to refine my mental chunks through doing problems, or just directly using them for the purpose I acquired them for (just-in-time learning), rather than reciting info. I think this builds a deeper level of understanding and helps you see how concepts are related to each other in a way which is harder to do than with Anki. I’m a big believer in structuring one’s learning and thinking process to incidentally incorporate an element of spaced repetition the way Dan Sheffler describes.
• I think Anki is great at learning specific facts, even quite complex ones—I have used it extensively to learn languages—but it doesn’t offer any opportunities to link ideas together. It’s basically an efficient method of taking facts—even complex facts like “what does this sentence mean?” or “what did people say in this short video clip?”—and putting them sufficiently into long-term memory that you can then use them in the real world. This final step is crucial as it allows these Anki facts to come alive and become much richer as they become part of a rich semantic web.
Anki offers no possibility of linking up and developing ideas. It’s basically a very efficient memory device.
• I appreciated the amazon links! Went ahead and bought the ingredients, will try this out next week.
• Embarrassing update: I remembered that I currently have zero notetaking habits and faced a pretty big upfront cost of getting into the habit of taking notes at all. :P
• Zettelkasten sounds great, but I’m worried there are other things that would sound equally great if I’d heard of them and I’m privileging the one I’ve heard of. To that end, I’m asking people to report other systems they’ve used here.
• Data point: I bounced off the physical system after making four cards, but fell in love with Roam almost immediately. It’s only been 6 days so I don’t know if it will last.
• I’ve been experimenting with Roam and finding it better than most of my notetaking systems I’ve tried, although not sure it’s doing the thing that Abram was pointing at here re: improving idea-synthesis
• Most of the folks who sign up for Roam right now don’t discover the workflows in it that let you actually implement a Zettelkasten practice.
This is one reason why we send a youcanbook.me link to every new user and try to schedule an onboarding call.
Unfortunately only a small % take us up on that—they try the tool, figure they have the hang of it, then go about using it like they’ve used other notes tools.
I will say most of the real great stuff that happens with Zettelkasten is not happening because of the tool you’re using—it is happening because you’re explicitly thinking about relationships between ideas, and you’re then able to explore linked ideas when you come back to them in the future. We try to make that process really seamless, but still have a long way to go if we’re going to nudge users who don’t have a Zettelkasten process already in that direction.
Hell, we have a long way to go in helping people who do have a ZKT process discover the features in Roam that support it.
• A couple things I’d suggest
Use the “block-references” feature, which you can discover in the / command, or when you type ((
In Roam, every workflowy-type bullet point is a card—and you can embed them elsewhere—or link to them with an alias (that’s a sort of hidden workflow that mostly power users use rn, probably need to improve)
In the original location, you see the number of other places you’ve referenced that card (back links), and clicking that button shows you all those locations
This makes it easy to build “trails” of ideas across documents
In the Zettelkasten process, when you have an idea, you first write it down, then think about where to place it, then think about what other ideas it connects to and link those up.
In Roam, you’d probably just start writing the idea down on the day that you wrote it—maybe nested under some links/tags that relate to the general idea (or use links inline) so you can find it again later.
If you’re using Roam for Zettelkasting, next step is to look through your notes and find other ideas that you might want to link to those blocks.
It’s still not super seamless, but a hell of a lot faster than paper index cards, especially as your zettelkasten grows
• Roam might be great for writing papers etc, but is it a long-term solution for note taking? Who owns your data? What happens when the company goes away?
• What makes a long term solution for notetaking for you?
The founders have said the usual right things about people owning their own data and that they will only ever raise revenue by fee-for-service, not selling ads or data, but I don’t know if there’s anything legally binding to that.
JSON export already exists, although it is only so useful when there’s nothing to read it in.
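Getting the raw text back out is not much code, at least. Here is a rough sketch of a reader, assuming the export is a list of pages, each carrying a title and nested children blocks with a string field (I haven’t verified the exact schema, and the filename is made up):

```python
import json

def walk(blocks, depth=0):
    """Print nested blocks as an indented plain-text outline."""
    for block in blocks or []:
        print("  " * depth + "- " + block.get("string", ""))
        walk(block.get("children"), depth + 1)

# Assumed structure: [{"title": ..., "children": [{"string": ..., "children": [...]}, ...]}, ...]
with open("roam-export.json") as f:
    pages = json.load(f)

for page in pages:
    print("# " + page.get("title", "(untitled)"))
    walk(page.get("children"))
```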
The thing I’m actually most concerned about right now is privacy; there are unfixed vulnerabilities if you share some but not all of your pages.
• Thanks a lot for this! I used paper to elaborate on a math proof, and it was tremendously productive.
For more fact-based research, it was too slow for me. Instead, I’m completely enamoured with Roam.
• Hello, and thanks very much for your excellent post. I read it before I started my own attempt last month (it was posted and discussed a bit more here). I’m doing it in software, in a markdown wiki, and a few weeks later have hundreds of notes with manually maintained links (~2.7 links/zettel).
Like you, I have found it immensely helpful. But a comment you made has slightly haunted me from the beginning:
it is also very possible that the method produces serious biases in the types of ideas produced/developed
I think I may feel similarly, but could you elaborate? What kind of ideas do you think it produces?
• My worry was essentially media-makes-message style. Luhmann’s sociological theories were sprawling interconnected webs. (I have not read him at all; this is just my impression.) This is not necessarily because the reality he was looking at is best understood in that form. Also, his theory of sociology has something to do with systems interacting with each other through communication bottlenecks (?? again, I have not really read him), which he explicitly relates to Zettelkasten.
Relatedly, Paul Christiano uses a workflowy-type outlining tool extensively, and his theory of AI safety prominently features hierarchical tree structures.
• Thanks, this is really useful. Would you say that your thinking has become more Luhmann-esque in the way you describe? (I have not read him either but your description sounds quite like the summaries I’ve read.)
So far, it seems to produce more unexpected analogies in my thinking than any deeply interconnected view of reality. But some of those analogies have been hard to explain to others. When that happens, the question becomes whether I’m just not able to articulate the link yet, or if I’m seeing links that aren’t really useful, or aren’t really there.
• Not really? Although I use interconnections, I focus a fair amount on the tree-structure part. I would say there’s a somewhat curious phenomenon where I am able to go “deeper” in analysis than I would previously (in notebooks or workflowy), but the “shallow” part of the analysis isn’t questioned as much as it could be (it becomes the context in which things happen). In a notebook, I might end up re-stating “early” parts of my overall argument more, and therefore refining them more.
I have definitely had the experience of reaching a conclusion fairly strongly in Zettelkasten and then having trouble articulating it to other people. My understanding of the situation is that I’ve built up a lot of context of which questions are worth asking, how to ask them, which examples are most interesting, etc. So there’s a longer inferential distance. BUT, it’s also a bad sign for the conclusion. The context I’ve built up is more probably shaky if I can’t articulate it very well.
• When you go outside, how do you choose decks to take with you?
Small cards seem awful for writing sequences of transformations of large equations—do you sometimes do things like that and if yes then do you do that outside of Zettelkasten?
When developing an idea I use paper as an expansion of my working memory, so it becomes full of things which become useless right after I finish. Do you throw away such “working memory dumps” and only save actually useful pieces of knowledge?
• Another thing you could do (which I’m considering):
Currently, when I want to start an entirely new top-level topic, I make a new card in my highest-address deck. This means that highest deck is full of top-level ideas which mostly have little or no development.
Instead, one could bias toward starting new decks for new top-level ideas. You probably don’t want to do this every time, but, it means you have a nice new deck with no distractions which you can carry around on its own. And so long as you are carrying around your latest new deck, you can add new top-level cards to it if you need to start a new topic on the go.
You don’t get access to all your older ideas, but if we compare this to carrying around a notebook, it compares favorably.
EDIT: I’ve tried this now; I think it’s quite a good solution.
• I initially did everything in zk even if it was a temporary working memory dump, but recently I’ve gone back to notebooks for those kinds of temporary notes, and I put important stuff into zk later (if and only if I want to expand on parts of the temporary notes in a more permanent fashion).
Similarly, at first I tried to figure out which decks to carry with me. Now I either carry all of them (if I’m going to sit and do work in them) or just a notebook to take temporary notes in. Eventually carrying all of them will be a real problem when there are too many, but I’m not there yet; they still fit in a backpack.
I’ve done sequences of equations in my zk, but also sometimes go to notebooks. I think the situation is “ok but not great”. A possible solution would be to keep a larger-paper zk specifically for this, with cross-references between it and the smaller zk (enabled by using capital letters to name zk’s and cross-referenced, as I mentioned in the text). I don’t currently think it’s a big enough problem for that to be worth it.
• I am an avid bullet-journaler, and while I don’t expect to try Zettelkasten, I will start using one of the methods described here to make my bullet journals easier to navigate.
Research and Writing is only half of what I use my bullet journal for, but this causes notes on the same topic to spread over many pages. If I give a number to that topic, then I will be able to continue that topic throughout my journals by just adding a “dot-number.” If page 21 is notes on formal models in business and I know that I will be making more notes on that same topic later, I can call Formal Models in Business 21.1, and the next time I broach the subject on page 33, I can label the page “Formal Models in Business 21.2,” etc. This will allow my Table of Contents to indicate related ideas.
Thanks for the elucidation!
• Yeah, I think it’s actually not too bad to use Zettelkasten addresses in a fixed-page-location notebook. You can’t put the addresses in proper order, but, I’ve mentioned that I don’t sort my cards until I have a large back-log of unsorted anyway.
• As I said, the creation-time ordering is pretty useful anyway, because it correlates to what you’re most likely to want to look at, whereas the proper sorting does not.
• Also, looking up addresses in creation-time ordering is usually not too bad: you can still rely on 2a to be later than 2, 2b to be later than 2a, etc. You just don’t know for sure whether 3a will be on a later page than 2a.
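A tiny illustrative sketch (Python; purely hypothetical and not part of the original discussion) of a sort key that produces the "proper" ordering digitally, with 2 before 2a before 2a1 before 2b before 3:
```python
import re

def address_key(address):
    """Split a Luhmann-style address like '2a3b' into alternating number/letter
    chunks so that tuple comparison gives the hierarchical ('proper') order."""
    parts = re.findall(r'\d+|[a-z]+', address.lower())
    # Tag numbers and letter runs so mixed tuples compare cleanly.
    return tuple((0, int(p)) if p.isdigit() else (1, p) for p in parts)

cards = ["3a", "2", "2b", "2a1", "2a", "21.1"]
print(sorted(cards, key=address_key))
# ['2', '2a', '2a1', '2b', '3a', '21.1']
```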
• Hey have you looked at discbound notebooks for the Zettelkasten storage? They are modular binders with a special punch that allows them to be removed without prying open the middle.
Also, thanks for this wonderful writeup. I always love reading how others implement the Zettelkasten system
• Yeah, I actually tried them, but didn’t personally like them that well. They could definitely be an option for someone. |
# Natural log error

• Error propagation: the first-order error of a logarithm is the relative error of its argument. If the error $\Delta x$ of the argument $x$ fed to the logarithm is much smaller than $x$ itself, then $\Delta(\ln x) \approx \Delta x / x$.
• Change in natural log ≈ percentage change: within a moderate range, the standard deviation of the errors in predicting a logged series is approximately the standard deviation of the percentage errors in predicting the original series. A diff-log of -0.5 followed by a diff-log of +0.5 takes you back to your original position, whereas a 50% loss followed by a 50% gain (or vice versa) does not; for large percentage changes the two measures diverge in an asymmetric way.
• Notation: LOGb(.) denotes the base-b logarithm and LN the natural log, i.e. the base-e logarithm, the inverse of the natural exponential function exp. In Statgraphics the function called LOG is the natural log, while LOG10 is the base-10 logarithm. If x is zero, log may cause a pole error (depending on the library implementation).
• The natural logarithm of 2 (OEIS A002162) admits many series representations, including BBP-type formulas discovered using the PSLQ algorithm (Bailey and Plouffe 1997; Borwein and Bailey 2002). |
Article Contents
# Critical points for surface diffeomorphisms
• Using the definition of dominated splitting, we introduce the notion of critical set for any dissipative surface diffeomorphism as an intrinsically well-defined object. We obtain a series of results related to this concept.
Mathematics Subject Classification: Primary: 37C05, 37E30, 37D25, 37D30.
Citation: |
• # question_answer Wire bent as ABOCD as shown, carries current I entering at A and leaving at D. Three uniform magnetic fields each ${{B}_{0}}$ exist in the region as shown. The force on the wire is A) $\sqrt{3}I\,R\,{{B}_{0}}$ B) $\sqrt{5}I\,R\,{{B}_{0}}$ C) $\sqrt{8}I\,R\,{{B}_{0}}$ D) $\sqrt{6}I\,R\,{{B}_{0}}$
$\vec{F}=I\vec{\ell }\times \vec{B}$, with $\vec{\ell }=\overrightarrow{AD}=R(\hat{i}-\hat{j})$ and $\vec{B}={{B}_{0}}(\hat{i}+\hat{j}-\hat{k})$ $\therefore \vec{F}=IR{{B}_{0}}(\hat{i}-\hat{j})\times (\hat{i}+\hat{j}-\hat{k})=IR{{B}_{0}}\left| \begin{matrix} {\hat{i}} & {\hat{j}} & {\hat{k}} \\ 1 & -1 & 0 \\ 1 & 1 & -1 \\ \end{matrix} \right|=IR{{B}_{0}}(\hat{i}+\hat{j}+2\hat{k})$, so $F=\sqrt{6}\,IR{{B}_{0}}$. Aliter: $\vec{B}={{B}_{0}}(\hat{i}+\hat{j}-\hat{k})$ and $\vec{\ell }=R(\hat{i}-\hat{j})$, so $\vec{B}\cdot \vec{\ell }=0\,\,\Rightarrow \,\,$ the angle between them is $90{}^\circ$, and $F=BI\ell =\sqrt{3}\,{{B}_{0}}\,I\cdot \sqrt{2}\,R=\sqrt{6}\,{{B}_{0}}\,IR$ |
# VHDL code not compiling
I'm new to VHDL and I cannot seem to get my code to compile. I've looked over the code to the best of my ability, but I do not see anything wrong with it from my current basic understanding of how it works and I am wondering if anybody could help. The code is supposed to model a NLX1G99 configurable multi-function gate (minus the enable bit)
library ieee;
use ieee.std_logic_1164.all;
entity multifun_gate is
port(
d,c,b,a: in std_logic;
y: out std_logic
);
end multifun_gate;
architecture dataflow of multifun_gate is
begin
y <= (a and not b and not c and not d) or
(a and b and not c and not d) or
(not a and b and c and not d) or
(a and b and c and not d) or
(not a and not b and not c and d) or
(not a and b and not c and d) or
(not a and not b and c and d) or
(and and not b and c and d);
end dataflow;
• What are the errors? – Dean Feb 19 '13 at 9:41
• This is a good example of total incomprehensible and undebugable code. Can't you write what you mean so that people can understand, and the synthesizer can create the gates? – Philippe Feb 19 '13 at 12:21
In the second last line:
(and and not b and c and d);
you have and repeated.
• Thanks, that got rid of one error, however, there are still others – audiFanatic Feb 19 '13 at 2:13
• nevermind, that did it. I meant to write "a and not b" rather than "and and not b" and I forgot to replace the and with a after fixing it. – audiFanatic Feb 19 '13 at 2:17
Judging from the picture in the datasheet, I'd write:
sig1 <= a and not c;
sig2 <= b and c;
sig3 <= sig1 or sig2;
y <= d xor sig3;
Much easier to check I reckon.
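As a quick cross-check of the two forms (a hypothetical Python sketch, not part of the original answer): once the last product term in the question is corrected to `a and not b and c and d`, the sum-of-products agrees with this structural form on all 16 input combinations:
```python
from itertools import product

def sop(a, b, c, d):
    # Sum-of-products from the question, with the last term corrected
    # from "and and not b ..." to "a and not b and c and d".
    return ((a and not b and not c and not d) or
            (a and b and not c and not d) or
            (not a and b and c and not d) or
            (a and b and c and not d) or
            (not a and not b and not c and d) or
            (not a and b and not c and d) or
            (not a and not b and c and d) or
            (a and not b and c and d))

def structural(a, b, c, d):
    # y = d xor ((a and not c) or (b and c))
    return d != ((a and not c) or (b and c))

assert all(bool(sop(*bits)) == structural(*bits)
           for bits in product([False, True], repeat=4))
print("sum-of-products matches d xor ((a and not c) or (b and c))")
```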
• I would've done that, but that's the way we were asked to do it. – audiFanatic Feb 25 '13 at 15:14
• @audiFanatic: ahh. Painful! I love it when people are "taught" to fight the tools :) – Martin Thompson Feb 25 '13 at 15:35 |
## Nonrelativistic phase in gamma-ray burst afterglows
Huang, Y. F.
Lu, T.
Cheng, K. S.
##### Description
The discovery of multiband afterglows definitely shows that most $\gamma$-ray bursts are of cosmological origin. $\gamma$-ray bursts are found to be one of the most violent explosive phenomena in the Universe, in which astonishing ultra-relativistic motions are involved. In this article, the multiband observational characteristics of $\gamma$-ray bursts and their afterglows are briefly reviewed. The standard model of $\gamma$-ray bursts, i.e. the fireball model, is described. Emphasis is then put on the importance of the nonrelativistic phase of afterglows. The concept of deep Newtonian phase is elaborated. A generic dynamical model that is applicable in both the relativistic and nonrelativistic phases is introduced. Based on these elaborations, the overall afterglow behaviors, from the very early stages to the very late stages, can be conveniently calculated.
Comment: A review paper accepted for publication in the Chinese journal Progress in Natural Science; 21 pages, 6 figures
Astrophysics |
chapter 12
Treatment of Active Crohn's Disease with Salazopyrine and Derivatives of Aminosalicylic Acid (5-ASA)
Pages 20
A. Metabolism When taken orally, 20% to 30% of the drug is absorbed at the level of the upper intestinal tract, and approximately 70% to 80% reaches the colon where the azo linkage is cleaved by the intestinal bacteria, thus liberating SP and 5-ASA (1). SP is rapidly absorbed by the colonic mucosa, metabolized by the processes of acetylation, hydroxylation, and glucuronidation, and subsequently excreted in the urine as such or in the form of metabolites (2). Traces of SP are found in the blood between three and five hours after oral intake (3). High blood levels of this molecule appear to be responsible for the majority of side effects caused by SASP, depending mainly on slow or rapid acetylator phenotype (4). Only a minimal part of 5-ASA is absorbed at the colonic level, rapidly acetylated, and excreted in the urine (2). The greater part, therefore, remains in contact with the colonic mucosa where it exerts its topical anti-inflammatory action and is then eliminated in the feces (5). |
Built using Zelig version 5.1.4.90000
## Zelig workflow overview
All models in Zelig can be estimated and the results explored and presented using four simple functions:
1. zelig to estimate the parameters,
2. setx to set fitted values for which we want to find quantities of interest,
3. sim to simulate the quantities of interest,
4. plot to plot the simulation results.
#### Zelig 5 reference classes
Zelig 5 introduced reference classes. These enable a different way of working with Zelig that is detailed in a separate vignette. Directly using the reference class architecture is optional.
# Examples
Let’s walk through an example. This example uses the swiss dataset. It contains data on fertility and socioeconomic factors in Switzerland’s 47 French-speaking provinces in 1888 (Mosteller and Tukey, 1977, 549-551). We will model the effect of education on fertility, where education is measured as the percent of draftees with education beyond primary school and fertility is measured using the common standardized fertility measure (see Muehlenbein (2010, 80-81) for details).
If you haven’t already done so, open your R console and install Zelig. We recommend installing Zelig with the zeligverse package. This installs core Zelig and ancillary packages at once.
install.packages('zeligverse')
Alternatively you can install the development version of Zelig with:
devtools::install_github('IQSS/Zelig')
Once Zelig is installed, load it:
library(zeligverse)
## Building Models
Let’s assume we want to estimate the effect of education on fertility. Since fertility is a continuous variable, least squares (ls) is an appropriate model choice. To estimate our model, we call the zelig() function with three arguments: equation, model type, and data:
# load data
data(swiss)
# estimate ls model
z5_1 <- zelig(Fertility ~ Education, model = "ls", data = swiss, cite = FALSE)
# model summary
summary(z5_1)
## Model:
##
## Call:
## z5\$zelig(formula = Fertility ~ Education, data = swiss)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.036 -6.711 -1.011 9.526 19.689
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 79.6101 2.1041 37.836 < 2e-16
## Education -0.8624 0.1448 -5.954 3.66e-07
##
## Residual standard error: 9.446 on 45 degrees of freedom
## Multiple R-squared: 0.4406, Adjusted R-squared: 0.4282
## F-statistic: 35.45 on 1 and 45 DF, p-value: 3.659e-07
##
## Next step: Use 'setx' method
The -0.86 coefficient on education suggests a negative relationship between the education of a province and its fertility rate. More precisely, for every one percent increase in draftees educated beyond primary school, the fertility rate of the province decreases 0.86 units. To help us better interpret this finding, we may want other quantities of interest, such as expected values or first differences. Zelig makes this simple by automating the translation of model estimates into interpretable quantities of interest using Monte Carlo simulation methods (see King, Tomz, and Wittenberg (2000) for more information). For example, let’s say we want to examine the effect of increasing the percent of draftees educated from 5 to 15. To do so, we set our predictor value using the setx() and setx1() functions:
# set education to 5 and 15
z5_1 <- setx(z5_1, Education = 5)
z5_1 <- setx1(z5_1, Education = 15)
# model summary
summary(z5_1)
## setx:
## (Intercept) Education
## 1 1 5
## setx1:
## (Intercept) Education
## 1 1 15
##
## Next step: Use 'sim' method
After setting our predictor value, we simulate using the sim() method:
# run simulations and estimate quantities of interest
z5_1 <- sim(z5_1)
# model summary
summary(z5_1)
##
## sim x :
## -----
## ev
## mean sd 50% 2.5% 97.5%
## 1 75.33601 1.568926 75.35504 71.98253 78.3214
## pv
## mean sd 50% 2.5% 97.5%
## [1,] 75.71558 9.525293 75.73353 58.5333 95.91194
##
## sim x1 :
## -----
## ev
## mean sd 50% 2.5% 97.5%
## 1 66.71969 1.454155 66.67102 63.87779 69.5769
## pv
## mean sd 50% 2.5% 97.5%
## [1,] 66.87073 9.578544 66.88007 48.02027 84.90856
## fd
## mean sd 50% 2.5% 97.5%
## 1 -8.616321 1.416013 -8.591192 -11.34729 -5.964767
At this point, we’ve estimated a model, set the predictor value, and estimated easily interpretable quantities of interest. The summary() method shows us our quantities of interest, namely, our expected and predicted values at each level of education, as well as our first differences–the difference in expected values at the set levels of education.
# Visualizations
Zelig’s plot() function plots the estimated quantities of interest:
plot(z5_1)
We can also simulate and plot simulations from ranges of simulated values:
z5_2 <- zelig(Fertility ~ Education, model = "ls", data = swiss, cite = FALSE)
# set Education to range from 5 to 15 at single integer increments
z5_2 <- setx(z5_2, Education = 5:15)
# run simulations and estimate quantities of interest
z5_2 <- sim(z5_2)
Then use the plot() function as before:
z5_2 <- plot(z5_2)
# Getting help
The primary documentation for Zelig is available at: http://docs.zeligproject.org/articles/.
Within R, you can access function help using the normal ? function, e.g.:
?setx
If you are looking for details on particular estimation model methods, you can also use the ? function. Simply place a z before the model name. For example, to access details about the logit model use:
?zlogit |
# Contents
## Idea
Grothendieck conjectured that every Weil cohomology theory factors uniquely through some category, which he called the category of motives. For smooth projective varieties (over some field $k$) such a category was given by Grothendieck himself, called the category of pure Chow motives. For general smooth varieties the category is still conjectural, see at mixed motives.
## Construction
Fix some adequate equivalence relation $\sim$ (e.g. rational equivalence). Let ${Z}^{i}\left(X\right)$ denote the group of $i$-codimensional algebraic cycles and let ${A}_{\sim }^{i}\left(X\right)$ denote the quotient ${Z}^{i}\left(X\right)/\sim$.
### Category of correspondences
Let ${\mathrm{Corr}}_{\sim }\left(k\right)$, the category of correspondences, be the category whose objects are smooth projective varieties and whose hom-sets are the direct sum
$Corr_\sim(h(X),h(Y)) = \bigoplus_i A^{n_i}_\sim(X_i \times Y) \,,$
where $\left({X}_{i}\right)$ are the irreducible components of $X$ and ${n}_{i}$ are their respective dimensions. The composition of two morphisms $\alpha \in \mathrm{Corr}\left(X,Y\right)$ and $\beta \in \mathrm{Corr}\left(Y,Z\right)$ is given by
$p_{XZ,*} (p_{XY}^*(\alpha) . p_{YZ}^*(\beta))$
where ${p}_{\mathrm{XY}}$ denotes the projection $X×Y×Z\to X×Y$ and so on, and $.$ denotes the intersection product in $X×Y×Z$.
There is a canonical contravariant functor
$h \colon SmProj(k) \to Corr_\sim(k)$
from the category of smooth projective varieties over $k$ given by mapping $X↦X$ and a morphism $f:X\to Y$ to its graph, the image of its graph morphism ${\Gamma }_{f}:X\to X×Y$.
The category of correspondences is symmetric monoidal with $h\left(X\right)\otimes h\left(Y\right)≔h\left(X×Y\right)$.
We also define a category ${\mathrm{Corr}}_{\sim }\left(k,A\right)$ of correspondences with coefficients in some commutative ring $A$, by tensoring the morphisms with $A$; this is an $A$-linear additive symmetric monoidal category.
### Category of effective pure motives
The Karoubi envelope (pseudo-abelianisation) of ${\mathrm{Corr}}_{\sim }\left(k,A\right)$ is called the category of effective pure motives (with coefficients in $A$ and with respect to the equivalence relation $\sim$), denoted ${\mathrm{Mot}}_{\sim }^{\mathrm{eff}}\left(k,A\right)$.
Explicitly its objects are pairs $\left(h\left(X\right),p\right)$ with $X$ a smooth projective variety and $p\in \mathrm{Corr}\left(h\left(X\right),h\left(X\right)\right)$ an idempotent, and morphisms from $\left(h\left(X\right),p\right)$ to $\left(h\left(Y\right),q\right)$ are morphisms $h\left(X\right)\to h\left(Y\right)$ in ${\mathrm{Corr}}_{\sim }$ of the form $q\circ \alpha \circ p$ with $\alpha \in {\mathrm{Corr}}_{\sim }\left(h\left(X\right),h\left(Y\right)\right)$.
This is still a symmetric monoidal category with $\left(h\left(X\right),p\right)\otimes \left(h\left(Y\right),q\right)=\left(h\left(X×Y\right),p×q\right)$. Further it is Karoubian, $A$-linear and additive.
The image of $X\in \mathrm{SmProj}\left(k\right)$ under the above functor
$h \colon SmProj(k) \to Corr_\sim(k,A) \to Mot^{eff}_\sim(k,A)$
is the motive of $X$.
### Category of pure motives
There exists a motive $L$, called the Lefschetz motive, such that the motive of the projective line decomposes as
$h(\mathbf{P}^1_k) = h(\mathrm{Spec}(k)) \oplus \mathbf{L}$
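More generally (a standard fact, included here only as an illustration and not taken from this entry), the motive of projective $n$-space decomposes into powers of the Lefschetz motive:
$h(\mathbf{P}^n_k) \simeq \mathbf{1} \oplus \mathbf{L} \oplus \mathbf{L}^{\otimes 2} \oplus \cdots \oplus \mathbf{L}^{\otimes n} \,,$
where $\mathbf{1} = h(\mathrm{Spec}(k))$ denotes the unit motive.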
To get a rigid category we formally invert the Lefschetz motive and get a category
$Mot_\sim(k, A) \coloneqq Mot^{eff}_\sim(k,A)[\mathbf{L}^{-1}] \,,$
the category of pure motives (with coefficients in $A$ and with respect to $\sim$).
This is a rigid, Karoubian, symmetric monoidal category. Its objects are triples $(h(X),p,n)$ with $n\in \mathbb{Z}$.
### Category of pure Chow motives
When the relation $\sim$ is rational equivalence then ${A}_{\sim }^{*}$ are the Chow groups, and ${\mathrm{Mot}}_{\sim }\left(k\right)={\mathrm{Mot}}_{\mathrm{rat}}\left(k\right)$ is called the category of pure Chow motives.
### Category of pure numerical motives
When the relation $\sim$ is numerical equivalence, then one obtains numerical motives.
## References
• Yuri Manin, Correspondences, motifs and monoidal transformations, Math. USSR Sb. 6 (1968) 439 (pdf, web)
• Tony Scholl, Classical motives, in Motives, Seattle 1991, Proc. Symp. Pure Math. 55 (1994), part 1, 163-187 (pdf)
• James Milne, Motives – Grothendieck’s Dream (pdf)
• Minhyong Kim, Classical Motives: Motivic $L$-functions (pdf)
• Bruno Kahn, pdf slides on pure motives
• R. Sujatha, Motives from a categorical point of view, Lecture notes (2008) (pdf)
Section 8.2 of
# Betweenness centrality
An undirected graph colored based on the betweenness centrality of each vertex from least (red) to greatest (blue).
In graph theory, betweenness centrality is a measure of centrality in a graph based on shortest paths. For every pair of vertices in a graph, there exists a shortest path between the vertices such that either the number of edges that the path passes through (for unweighted graphs) or the sum of the weights of the edges (for weighted graphs) is minimized. The betweenness centrality for each vertex is the number of these shortest paths that pass through the vertex.
Betweenness centrality finds wide application in network theory: it represents the degree to which nodes stand between each other. For example, in a telecommunications network, a node with higher betweenness centrality would have more control over the network, because more information will pass through that node. Betweenness centrality was devised as a general measure of centrality:[1] it applies to a wide range of problems in network theory, including problems related to social networks, biology, transport and scientific cooperation.
Although earlier authors have intuitively described centrality as based on betweenness, Freeman (1977) gave the first formal definition of betweenness centrality. The idea was earlier proposed by mathematician J. Anthonisse, but his work was never published.
## Definition
The betweenness centrality of a node ${\displaystyle v}$ is given by the expression:
${\displaystyle g(v)=\sum _{s\neq v\neq t}{\frac {\sigma _{st}(v)}{\sigma _{st}}}}$
where ${\displaystyle \sigma _{st}}$ is the total number of shortest paths from node ${\displaystyle s}$ to node ${\displaystyle t}$ and ${\displaystyle \sigma _{st}(v)}$ is the number of those paths that pass through ${\displaystyle v}$.
Note that the betweenness centrality of a node scales with the number of pairs of nodes as implied by the summation indices. Therefore, the calculation may be rescaled by dividing through by the number of pairs of nodes not including ${\displaystyle v}$, so that ${\displaystyle g\in [0,1]}$. The division is done by ${\displaystyle (N-1)(N-2)}$ for directed graphs and ${\displaystyle (N-1)(N-2)/2}$ for undirected graphs, where ${\displaystyle N}$ is the number of nodes in the giant component. Note that this scales for the highest possible value, where one node is crossed by every single shortest path. This is often not the case, and a normalization can be performed without a loss of precision
${\displaystyle {\mbox{normal}}(g(v))={\frac {g(v)-\min(g)}{\max(g)-\min(g)}}}$
which results in:
${\displaystyle \max(normal)=1}$
${\displaystyle \min(normal)=0}$
Note that this will always be a scaling from a smaller range into a larger range, so no precision is lost.
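As a concrete illustration (a short Python sketch using the NetworkX library; the small example graph is made up), both the raw counts and the rescaled values from the definition above can be computed directly:
```python
import networkx as nx

# A small undirected example: two triangles joined by a single "bridge" edge.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (1, 3),   # first triangle
                  (3, 4),                   # bridge edge
                  (4, 5), (5, 6), (4, 6)])  # second triangle

# Raw betweenness: number of shortest paths passing through each vertex.
raw = nx.betweenness_centrality(G, normalized=False)

# Rescaled by the number of pairs not including v, i.e. (N-1)(N-2)/2 for undirected graphs.
rescaled = nx.betweenness_centrality(G, normalized=True)

for v in sorted(G):
    print(v, raw[v], round(rescaled[v], 3))
```
The two bridge endpoints carry the highest betweenness, since every shortest path between the two triangles must pass through them.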
## Weighted networks
In a weighted network the links connecting the nodes are no longer treated as binary interactions, but are weighted in proportion to their capacity, influence, frequency, etc., which adds another dimension of heterogeneity within the network beyond the topological effects. A node's strength in a weighted network is given by the sum of the weights of its adjacent edges.
${\displaystyle s_{i}=\sum _{j=1}^{N}a_{ij}w_{ij}}$
With ${\displaystyle a_{ij}}$ and ${\displaystyle w_{ij}}$ being adjacency and weight matrices between nodes ${\displaystyle i}$ and ${\displaystyle j}$, respectively. Analogous to the power law distribution of degree found in scale free networks, the strength of a given node follows a power law distribution as well.
${\displaystyle s(k)\approx k^{\beta }\,}$
A study of the average value ${\displaystyle s(b)}$ of the strength for vertices with betweenness ${\displaystyle b}$ shows that the functional behavior can be approximated by a scaling form [2]
${\displaystyle s(b)\approx b^{\alpha }}$
## Algorithms
Calculating the betweenness and closeness centralities of all the vertices in a graph involves calculating the shortest paths between all pairs of vertices on a graph, which takes ${\displaystyle \Theta (|V|^{3})}$ time with the Floyd–Warshall algorithm, modified to not only find one but count all shortest paths between two nodes. On a sparse graph, Johnson's algorithm may be more efficient, taking ${\displaystyle O(|V|^{2}\log |V|+|V||E|)}$ time. On unweighted graphs, calculating betweenness centrality takes ${\displaystyle O(|V||E|)}$ time using Brandes' algorithm.[3]
In calculating betweenness and closeness centralities of all vertices in a graph, it is assumed that graphs are undirected and connected with the allowance of loops and multiple edges. When specifically dealing with network graphs, often graphs are without loops or multiple edges to maintain simple relationships (where edges represent connections between two people or vertices). In this case, using Brandes' algorithm will divide final centrality scores by 2 to account for each shortest path being counted twice.[4]
Another algorithm generalizes Freeman's betweenness, computed on geodesics, and Newman's betweenness, computed on all paths, by introducing a hyper-parameter controlling the trade-off between exploration and exploitation. The time complexity is the number of edges times the number of nodes in the graph.[5]
The concept of centrality was extended to a group level as well.[6] Group betweenness centrality shows the proportion of geodesics connecting pairs of non-group members that pass through a group of nodes. Brandes' algorithm for computing the betweenness centrality of all vertices was modified to compute the group betweenness centrality of one group of nodes with the same asymptotic running time.[6]
## Related concepts
Betweenness centrality is related to a network's connectivity, insofar as high-betweenness vertices have the potential to disconnect graphs if removed (see cut set). |
# Lovers in a Dangerous Spacetime DevLog #9: Pausing Without Pausing
by Adam Winkels on 03/26/14 02:04:00 pm
The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
The simplest approach to pausing your game in Unity is to set Time.timeScale = 0. While the time scale is 0, Update methods in your scripts will still be called, but Time.deltaTime will always return 0. This works well if you want to pause all on-screen action, but it is severely limiting if you need animated menus or overlays, since Time.timeScale = 0 also pauses animations and particle systems.
We first encountered this limitation when we were trying to implement a world map in Lovers in a Dangerous Spacetime. When the player enters the ship's map station, we display an overlay of the current level. Since the map obstructs the ship and, as such, inhibits gameplay, we needed to pause the game while the display is visible. However, a completely static map screen would make it difficult to convey information (and also look pretty dull). In order to achieve our goal we needed a separate way to track how much time has elapsed since the last update loop.
It turns out that Time.realtimeSinceStartup is the ideal mechanism for this. As its name implies, Time.realtimeSinceStartup uses the system clock to track how much time has elapsed since the game was started, independent of any time scale manipulation you may be doing. By tracking the previous update's Time.realtimeSinceStartup, we can calculate a good approximation of the delta time since the last frame:
This script on its own is not enough, however, especially since we want to use Unity's Animation component to drive our dynamic map elements. To allow this, we created a subclass of TimeScaleIndependentUpdate that manually "pumps" the animation:
By using AnimationState's normalizedTime property and our calculated delta time, we scrub through the animation in each Update. Now all we need to do is attach this script to the GameObject we want to animate while Time.timeScale = 0:
As you can see above, the game action is paused when the map appears, but the icons on the map are still animating. Particle systems can also animate while the game is paused. ParticleSystem contains a handy Simulate method which, similar to Animation, allows us to manually pump the particle animation. All that's needed is a simple subclass of TimeScaleIndependentUpdate:
We can combine these three scripts to create fairly complex sequences. In Lovers, once the player has collected enough friends to unlock a warp tunnel, we play a little cutscene while Time.timeScale = 0:
This sequence relies heavily on the TimeScaleIndependentWaitForSeconds method of TimeScaleIndependentUpdate, which approximates Unity's built-in WaitForSeconds method and is extremely useful for creating coroutines.
Original post
// Adam Winkels (@winkels) is a co-founder of Asteroid Base.
## comparing data from first quarter in two years
0
How can I compare data from two quarters? What software can I use to view them side by side in parallel so I can compare them? What kinds of factors can I take into consideration? I'm posting a sample data set pic:
Pick up city Q1 Q2 |
Radiation heat transfer from conductor surface to inner wall of the GIL enclosure.
Symbol
$W_{rad_{ce}}$
Unit
W/m
Formulae
$W_{rad_{ce}} = D_{c} K_{ce} n_{c} \pi \sigma \left(\left(\theta_{c} + 273\right)^{4} - \left(\theta_{encl} + 273\right)^{4}\right)$ (Stefan–Boltzmann law)
$W_{rad_{ce}} = D_{c} h_{rad_{ce}} \pi \left(\theta_{c} - \theta_{encl}\right)$ (using heat transfer coefficient)
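A minimal Python sketch of the Stefan–Boltzmann form above (the symbol interpretations, e.g. $D_{c}$ as conductor diameter in metres and the temperatures in °C, follow the Related list and are assumptions; the example values are placeholders, not taken from this page):
```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def w_rad_ce(D_c, K_ce, n_c, theta_c, theta_encl, sigma=SIGMA):
    """Radiative heat transfer per unit length (W/m) from the conductor surface
    to the inner wall of the enclosure, per the Stefan-Boltzmann form above."""
    return (D_c * K_ce * n_c * math.pi * sigma
            * ((theta_c + 273.0) ** 4 - (theta_encl + 273.0) ** 4))

# Placeholder example values (not from this page):
print(w_rad_ce(D_c=0.18, K_ce=0.4, n_c=1, theta_c=70.0, theta_encl=40.0))
```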
Related
$D_{c}$
$h_{rad_{ce}}$
$\sigma$
$\theta_{c}$
$\theta_{encl}$
Used in
$T_{1}$
$T_{rad_{ce}}$ |
# Math Help - Are my answers correct?
1. ## Are my answers correct?
http://www.montgomeryschoolsmd.org/S.../PreCalc09.pdf
My answers to questions 25 to 30
25 3y+2/3y-2
26 .1873543715
27. -52
28. x=3
29. 244140625
30. -32768(x^2y^4)
2. Originally Posted by yitsongg
http://www.montgomeryschoolsmd.org/S.../PreCalc09.pdf
My answers to questions 25 to 30
25 3y+2/3y-2 Correct
26 .1873543715 Incorrect
27. -52 Correct
28. x=3 Correct, although it isn't x=3, but simply 3.
29. 244140625 Incorrect
30. -32768(x^2*y^4) Correct
26 is incorrect because it is a decimal approximation. When asked to simplify, you are almost always expected to give your answers exactly (i.e. not simply plug the numbers into your calculator, but rather manipulate them by hand to get a more "simple" number. You shouldn't use a calculator at all). Given that you were able to do the other problems, I assume you can do 26 by yourself. Once you have the correct answer, re-post it and I'll check it for you.
3. 26. $(3-\sqrt{2})(2\sqrt{3}+5)$
29. 125
(the square-root signs were lost in posting; the stray spaces stood in for them)
4. Originally Posted by yitsongg
26. $(3-\sqrt{2})(2\sqrt{3}+5)$ Incorrect
29. 125 Correct
(the square-root signs were lost in posting; the stray spaces stood in for them)
For 26, I assume they simply don't want the square root in the denominator (it's a bit of a pedantic simplification). So:
$\frac{3-\sqrt{2}}{2\sqrt{3}+5} \cdot \frac{2\sqrt{3}-5}{2\sqrt{3}-5}$ (We multiply the fraction by the conjugate of the denominator over the conjugate so we can get rid of the square root, and not get a middle term in our multiplication because the -5 and the +5 will result in a cancellation once multiplied out)
$\frac{(3-\sqrt{2})(2\sqrt{3}-5)}{-13} = \frac{(\sqrt{2}-3)(2\sqrt{3}-5)}{13}$
P.S. I apologize for saying 29 was incorrect for the wrong reason before. I hadn't looked at the question, and I misread it as an approximate decimal. But, it's correct now. |
# Regression analysis when the covariables is a sample from a population of potential variables
This question comes from trying to analyze my recent exam (exam I have given and corrected) statistically. I have a list of questions (20 in total) and each question is given a score from 0 to five, so total possible score is 100. Then I define pass/not pass via some cutoff (I give grades on the A-B-C-D-E-F scale, but here we only discuss F/notF). A logistic regression of pass on the 20 score variables obviously give a perfect fit, and the usual assumptions behind logistic regression are obviously not fulfilled (nothing special here about logistic regression , the question really are about regression modelling, not any specific kind of regression model).
But in reality what we have sampled here is the variables, not the students! The interesting analysis is about the influence on grades of specific question types, for the actual students we have, who are not in any sense a sample. But the variables are a sample from a huge population of potential questions! ("a sample" in the sense that the exam could easily have been made from some other, but similar, questions. It could even have been made by sampling from some question database.) So what could we do in the direction of a formal analysis? Some form of cross-validation seems natural, but we should really subsample variables, not objects (students).
Any good ideas, or good references? I tried google scholar, but couldn't find anything.
EDIT
As an answer to the comments: I am mostly after seeing which questions have the most influence on the pass/fail decision. My exam is a calculus exam, with questions such as (simple example) "Find the derivative of $f(x) = e^x \cos(x^2+3)$." (There are also longer questions, divided into parts.) Counting these parts I have 20 sub-questions. These questions might be seen as a sample (at least conceptually) from a much larger set of potential questions. My students are really a fixed set, and I want to evaluate the exam as a test for this specific set of students (so resampling by bootstrapping students does not seem natural). I sum the scores for each question and then decide the grade by some cutoff.
Now, if I use the scores on the 20 questions as variables and fit a logistic regression model such as $$P(\text{fail}) = \frac{1}{1+\exp(-\beta_0 -\beta_1 Q_1 - \dots -\beta_{20} Q_{20})}$$ I obviously get a perfect fit, since the pass/fail decision was taken on the basis of those 20 variables, in a linear fashion. But still, the decision is not perfect; obviously, there are error sources. One source of error is things like the student's form on the day. A very different kind of error source is the match between the questions and the knowledge of the student, in that with a different sample of 20 questions, the student might have got a different result. So the question is about how to model this situation (and then analyze according to that model).
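To make the subsample-the-questions idea concrete, here is a minimal sketch (Python, with entirely made-up scores and cutoff, not my actual exam data) of resampling the questions and checking how often each student's pass/fail decision flips:
```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 120 students, 20 questions scored 0..5 (not real exam data).
n_students, n_questions = 120, 20
scores = rng.integers(0, 6, size=(n_students, n_questions))
cutoff = 40  # hypothetical pass mark on the 0..100 scale

passed_full = scores.sum(axis=1) >= cutoff

# Resample the *questions* (bootstrap over columns), keep the same cutoff,
# and record how often each student's decision differs from the original one.
n_rep = 2000
flips = np.zeros(n_students)
for _ in range(n_rep):
    idx = rng.integers(0, n_questions, size=n_questions)
    passed_sub = scores[:, idx].sum(axis=1) >= cutoff
    flips += passed_sub != passed_full

# Proportion of question-resamples in which each student's pass/fail decision changes.
print((flips / n_rep).round(2))
```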
• Are you interested in evaluating the difficulty of particular questions? You may be interested in Item-Response-Theory. – Andy W Jun 26 '14 at 11:41
• Although I can glean a vague sense of what you are trying to do from this post, I cannot really understand what it is saying. I do not understand the sense in which "variables" are "sampled," nor is it clear what a "note" is or what a "question type" refers to. Could you explain these things and make it plain to us, in non-technical language, what you are trying to accomplish? – whuber Jun 26 '14 at 14:04
• Thanks! I will try to edit the question to make it clearer! – kjetil b halvorsen Jun 26 '14 at 16:58
• Sounds as if, in addition to item response theory, you would benefit from looking into measurement theory, testing theory, perhaps even something as specific as generalizability theory. – rolando2 Jun 27 '14 at 18:23
• In many foreign languages, "note" is an equivalent word for "grade" in English. I wonder if that underlies some of the confusion. – Silverfish Jan 10 '15 at 13:23 |
• 1. Fundamentals of Coding Theory
• 2. Infinite Families of Linear Codes
• 3. Symmetry and duality
• 4. Solutions to Selected Exercises
§1. What is coding theory
In coding theory we meet the following scenario. A source emits information and a receiver tries to record this information. Typically, the information is broken up into atomic parts like letters from an alphabet, and information consists of words, i.e. sequences of letters. The problem is that the information might be corrupted by an imperfect transmission medium, resulting in occasional changes of letters.
Real-life examples are the transmission of bits via radio signals for sending pictures from deep space to earth, e.g. pictures taken by a Mars robot, or, as a more everyday example, the transmission of bits via radio signals for digital TV. The source could also be the sequence of bits engraved in an audio or video disk, and the transmission is now the reading by the laser of the CD reader: little vibrations of the device or scratches on the disk cause transmission errors.
A source emits $$0$$s and $$1$$s, say, with equal probability. Let $$p$$ be the probability that an error occurs, i.e. that a $$0$$ or $$1$$ arrives as a $$1$$ or $$0$$ at the receiver. If $$p$$ is very small we might decide to accept these errors, and if $$p$$ is almost $$1$$ we might also decide not to care, since we can simply interpret $$1$$ as $$0$$ and vice versa, which again reduces the error probability to a negligible quantity. If the error probability is exactly $$\frac 12$$ we cannot do anything but ask the engineers to study the problem of improving the transmission. However, if $$p$$ is, say, only a bit smaller than $$\frac 12$$ and we need a more reliable transmission, coding comes into play.
The natural idea is to fix a natural number $$n$$, and if we want to transmit the bit $$b$$ we send the sequence $$bb\dots b$$ of length $$n$$. In other words, we encode $$b$$ into a sequence of $$n$$-many $$b$$s. The receiver must, of course, be informed of this convention. He will then decode according to the principle of Maximum Likelihood Decoding. If he receives a sequence $$s$$ of length $$n$$, he interprets it as a $$0$$ if the word $$s$$ contains more $$0$$s than $$1$$s, and vice versa. In other words, he interprets $$s$$ as a $$0$$ if $$s$$ more closely resembles a sequence of $$n$$-many $$0$$s, and otherwise as a $$1$$. Here we assume for simplicity that $$n$$ is odd, so that a word of length $$n$$ can never contain an equal number of $$0$$s and $$1$$s.
What is now the probability of missing the right message? If we send a sequence of $$n$$-many $$0$$s then receiving instead any word with $$r\ge \frac {n+1}2$$ many $$1$$s would result in an error. The probability of receiving a given word of this kind is $$p^r(1-p)^{n-r}$$, and there are $$\binom nr$$ such words. The error probability is therefore now $P_n = \sum_{r = \frac {n+1}2}^n \binom nr p^r(1-p)^{n-r} .$ It is not hard to show (see below) that $$\lim_{n\to\infty} P_n=0$$. Therefore, our repetition code can improve a bad transmission to one as good as we want, provided the transmission error $$p$$ for bits is strictly less than $$\frac 12$$.
What makes the repetition code so effective is the fact that its two code words are very different: in fact they differ at all $$n$$ places. However, there is a price to pay. Assume that you want to transmit a video of size $$1$$ GB through a channel which has an error probability $$p=0.1$$ when transmitting bits. Sending it uncoded is certainly not acceptable, since that would mean that $$10$$ percent of the received video consists of flickering garbage. We might therefore like to transmit the video via the repetition code of length $$n$$. The first values of the sequence $$P_n$$ are
$$P_1 = 1.000000e-01$$, $$P_3 = 2.800000e-02$$, $$P_5 = 8.560000e-03$$, $$P_7 = 2.728000e-03$$, $$P_9 = 8.909200e-04$$, $$P_{11} = 2.957061e-04$$, $$P_{13} = 9.928549e-05$$, $$P_{15} = 3.362489e-05$$, $$P_{17} = 1.146444e-05$$, $$P_{19} = 3.929882e-06$$.
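These values can be reproduced directly from the formula for $$P_n$$ above (a small Python check with $$p=0.1$$):
```python
from math import comb

def P(n, p=0.1):
    """Error probability of the length-n repetition code under majority decoding."""
    return sum(comb(n, r) * p**r * (1 - p)**(n - r)
               for r in range((n + 1) // 2, n + 1))

for n in range(1, 21, 2):
    print(n, f"{P(n):.6e}")
```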
To get transmission errors below $$0.1$$ percent we would have to choose $$n=9$$, which would mean that we would have to transmit $$9$$ GB for a video no bigger than $$1$$ GB. In this sense the repetition code seems very inefficient. What makes it so inefficient is that there are only two possible messages, i.e. two code words to transmit, but they have length $$n$$. In other words, there is only one bit of information for every $$n$$ transmitted bits.
We would like to stick with our idea but search for better codes. For example, for our case of transmitting a video we might try to find, for some (possibly big) number $$n$$, a subset $$C$$ of the set $$\{0,1\}^n$$ of all sequences of length $$n$$ of digits $$0$$ or $$1$$ which satisfies the following two properties:
1. Every two distinct sequences in $$C$$ should differ in as many places as possible. In other words, the quantity $d(C) = \min \{h(v,w): v,w\in C, v\not=w\}$ should be very large, where $$h(v,w)$$ denotes the number of places where $$v$$ and $$w$$ differ.
2. The quotient $R(C) = \frac {\log_2(|C|)}n$ should be large as well.
The number $$\log_2(|C|)$$ is the quantity of information (measured in bits) which is contained in every transmission of a sequence in $$C$$, i.e. in every transmission of $$n$$ bits. The ratio $$R(C)$$ is therefore to be interpreted as the amount of information per transmitted bit. We would then cut our video into pieces of length $$k$$, where $$k=\lfloor \log_2(|C|)\rfloor$$, map these pieces via a function (preferably designed by an engineer) to the sequences in $$C$$, send the encoded words, and decode them at the other end of the line using Maximum Likelihood Decoding. Maximum Likelihood Decoding will yield good results if $$d(C)$$ is very large, i.e. if the code words differ as much as possible. We shall see later (Shannon's Theorem) that there are codes $$C$$ which have $$R(C)$$ as close as desired to a quantity called the channel capacity (which depends on $$p$$), and the probability of a transmission error in a code word as low as desired. Of course, the length $$n$$ might be very large, which might cause engineering problems like an increased time needed for encoding or decoding.
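For small codes both quantities are easy to compute by brute force; a short Python sketch (the example code $$C$$ at the end is made up):
```python
from itertools import combinations
from math import log2

def hamming(v, w):
    """Number of places where the words v and w differ."""
    return sum(a != b for a, b in zip(v, w))

def d(C):
    """Minimum distance d(C); C must contain at least two words."""
    return min(hamming(v, w) for v, w in combinations(C, 2))

def R(C, n):
    """Information rate log2(|C|) / n."""
    return log2(len(C)) / n

# A made-up code of length 5:
C = ["00000", "11100", "00111", "11011"]
print(d(C), R(C, n=5))  # prints: 3 0.4
```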
We stress an important property of the repetition code which we discussed above. Namely, it can correct $$\frac {n-1}2$$ errors. This means the following: if the sent code word and the received one do not differ at more than $$\frac {n-1}2$$ places, then Maximum Likelihood Decoding will return the right code word, i.e. it will correct the errors. In general we shall mostly be interested in such error-correcting codes.
However, in some situations one might only be interested in detecting errors, not necessarily correcting them. Examples of such codes are the International Standard Book Numbers ISBN10 and ISBN13. Here every published book is associated with a unique identifier. In the case of ISBN10 this is a word $$d_1d_2\cdots d_{10}$$ of length 10 with letters from the alphabet $$0,1,\dots,9,X$$. The procedure of this association is not important to us (but see here for details). What is important for us is that it is guaranteed that the sum $N:=d_1+2d_2+3d_3+\cdots+10d_{10}$ is always divisible by $$11$$ (where the symbol $$X$$ is interpreted as the number $$10$$). By elementary number theory the following happens: if exactly one letter is wrongly transmitted then $$N$$ is no longer divisible by $$11$$. In other words, we can detect one error. However, there is no means to correct this error (unless we are told at which place the error occurs). We shall come back to this later, when we recall some elementary number theory.
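The ISBN10 condition is easy to check mechanically; a small Python sketch of the weighted-sum test just described (the example number, 0306406152, is a well-known valid ISBN10):
```python
def isbn10_valid(isbn):
    """Check whether d1 + 2*d2 + ... + 10*d10 is divisible by 11,
    with the symbol 'X' standing for the value 10."""
    digits = [10 if ch == 'X' else int(ch) for ch in isbn]
    N = sum(i * d for i, d in enumerate(digits, start=1))
    return N % 11 == 0

print(isbn10_valid("0306406152"))  # True
```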
# A property of the binomial distribution
We prove the statement that the sequence of the $$P_n$$ in the above example tends to $$0$$. In fact, this can be obtained from Chebyshev's inequality applied to a sequence of random variables $$X_n$$, where $$P(X_n=r)=\binom nr p^r(1-p)^{n-r}$$, i.e. where $$X_n$$ follows the binomial distribution with parameters $$n$$ and $$p$$. This distribution gives the probability of the number of successes in a sequence of $$n$$ independent trials where the probability of success in a single trial is $$p$$. However, it is also possible to give a short direct proof avoiding the indicated concepts.
# Proposition
For every $$0\le p \le 1$$ and every $$\lambda \gt p$$, one has $\lim_{n\to\infty} \sum_{r \ge \lambda n} \binom nr p^r(1-p)^{n-r} = 0$
For $$p\lt \frac 12$$ we can choose $$\lambda = \frac 12$$ in the proposition, and we obtain the claimed statement $$P_n\to 0$$.
It is clear that $\sum_{r \ge \lambda n} \binom nr p^r(1-p)^{n-r} \le \sum_{r=0}^n \binom nr p^r(1-p)^{n-r} \left(\frac {r-np}{(\lambda-p)n}\right)^2 ,$ since, for $$r\ge \lambda n$$, we have $$1 \le \frac {r-np}{(\lambda-p)n}$$. But the right hand side equals $\frac 1{(\lambda-p)^2n^2} \big(\frac {d^2}{dt^2}-2np \frac {d}{dt}+n^2p^2\big) (pe^t+1-p)^n\big|_{t=0} = \frac {p(1-p)}{(\lambda-p)^2n} ,$ which tends to $$0$$.
Find all subsets $$C$$ of $$\{0,1\}^5$$ up to isomorphism, and compute $$d(C)$$ and $$R(C)$$ for each. (Two subsets are called isomorphic if one can be obtained from the other by applying a fixed permutation of the places to all of the other's sequences.)
Which book possesses the ISBN-10 $$"3540641*35"$$? (First of all you have to find the $$8$$th digit.) |
Right Elliptic Cylinder
Written by Jerry Ratzlaff on . Posted in Solid Geometry
• An elliptic cylinder (a three-dimensional figure) is a cylinder with elliptical ends.
• 2 bases
Lateral Surface Area of a Right Elliptic Cylinder formula
Since there is no easy way to calculate the ellipse perimeter exactly with a simple formula, the lateral surface area is also an approximation. (A code sketch of all three formulas follows the Volume section below.)
$$\large{ A_l \approx h \; \left( 2\;\pi \;\sqrt {\; \frac{1}{2}\; \left(a^2 + b^2 \right) } \right) }$$
Where:
$$\large{ A_l }$$ = approximate lateral surface area (side)
$$\large{ a }$$ = length semi-major axis
$$\large{ b }$$ = length semi-minor axis
$$\large{ h }$$ = height
Surface Area of a Right Elliptic Cylinder formula
$$\large{ A_s \approx h \; \left( 2\;\pi \;\sqrt {\; \frac{1}{2}\; \left(a^2 + b^2 \right) } \right) + 2\; \left( \pi \; a \; b \right) }$$
Where:
$$\large{ A_s }$$ = approximate surface area (bottom, top, side)
$$\large{ a }$$ = length semi-major axis
$$\large{ b }$$ = length semi-minor axis
$$\large{ h }$$ = height
Volume of a Right Elliptic Cylinder formula
$$\large{ V = \pi\; a \;b\; h }$$
Where:
$$\large{ V }$$ = volume
$$\large{ a }$$ = length semi-major axis
$$\large{ b }$$ = length semi-minor axis
$$\large{ h }$$ = height
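The three formulas above translate directly into code; a minimal Python sketch (the example numbers at the end are arbitrary):
```python
import math

def lateral_surface_area(a, b, h):
    """Approximate lateral (side) surface area of a right elliptic cylinder."""
    return h * (2 * math.pi * math.sqrt((a**2 + b**2) / 2))

def surface_area(a, b, h):
    """Approximate total surface area (bottom, top, side)."""
    return lateral_surface_area(a, b, h) + 2 * (math.pi * a * b)

def volume(a, b, h):
    """Exact volume of a right elliptic cylinder."""
    return math.pi * a * b * h

# Arbitrary example: semi-major axis 3, semi-minor axis 2, height 5
print(lateral_surface_area(3, 2, 5), surface_area(3, 2, 5), volume(3, 2, 5))
```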
Tags: Equations for Volume |
• dan815
just for fun, see if you think of some ways to prove $\sum_{k=0}^{m-1}\binom{m-1}{k}=2^{m-1}$
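One standard route (a sketch, and certainly not the only one): set $x=y=1$ in the binomial theorem $(x+y)^{m-1}=\sum_{k=0}^{m-1}\binom{m-1}{k}x^k y^{m-1-k}$, which gives $2^{m-1}=\sum_{k=0}^{m-1}\binom{m-1}{k}$ directly. Alternatively, both sides count the subsets of an $(m-1)$-element set, grouped on the left by their size $k$.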
Mathematics
olg
Sonntagsinformatiker
I have a question about the behavior of the circle, as its 'follow' definition is ambiguous.
Should it:
1. Maintain a distance of 50 points at all times. I.e., it is being pushed away by the mouse like a magnet when being pushed against (see [1])
or
2. Follow the pointer to maintain a distance of 50, but do not follow if distance is < 50. This will allow the mouse to move over the circle. (see [2])
Are both implementations within the scope of the requirement?
[1] https://dl.dropboxusercontent.com/u/270758/magnet.mov
[2] https://dl.dropboxusercontent.com/u/270758/follow.mov
"To Perl, or not to Perl, that is the kvetching." ~Larry Wall
Osterlaus
BSc Spammer
### Re: ex09 - Task 1a
{nothing to see here}
BASIC-Programmierer
### Re: ex09 - Task 1a
There is also the possibility of the circle just maintaining a distance of 50 regardless of the mouse movements, for example the circle is always 50 to the right. Does that fulfill the requirements, too?
olg
Sonntagsinformatiker
### Re: ex09 - Task 1a
I also noticed the implementation of fillOval is incorrect (at least with respect to its variables).
While it seems to draw an oval given a center and radius, it is really drawing an oval given its top/left corner, and the diameter of the rectangle to fit the oval within.
If you really want it to be centered on the first parameter, change the method call to:
Code:
{ g.fillOval(center().x - radius(), center().y - radius(), radius() * 2, radius() * 2) })
and pass it Signal { 10 } as the radius.
"To Perl, or not to Perl, that is the kvetching." ~Larry Wall
salvaneschi
Mausschubser
### Re: ex09 - Task 1a
Hi all,
the solution we had in mind is the one described here as "just maintaining a distance of 50 regardless of the mouse movements, for example the circle is always 50 to the right", which should be the simplest. However, the other variants that have been described will also be accepted.