# Show that a group of order $p^2q^2$ is solvable
I am trying to prove that a group of order $p^2q^2$ where $p$ and $q$ are primes is solvable, without using Burnside's theorem. Here's what I have for the moment:
• If $p = q$, then $G$ is a $p$-group and therefore it is solvable.
• If $p \neq q$, we shall look at the Sylow $p$-subgroups of $G$. We know from Sylow's theorems that $n_p \equiv 1 \pmod p$ and $n_p \mid q^2$, therefore $n_p \in \{1, q, q^2\}$.
• If $n_p = 1$, we are done, because the Sylow $p$-subgroup $P$ is normal in $G$ of order $p^2$, and $G/P$ has order $q^2$. Thus both are solvable and $G$ is solvable.
• If $n_p = q^2$, we have $q^2(p^2-1)$ elements of order $p$ or $p^2$ in $G$, and we have $q^2$ elements left to form a unique Sylow $q$-subgroup. By the same argument as before, $G$ is solvable.
• That's where I'm in trouble. I don't know what to do with $n_p = q$. It seems to lead nowhere.
Thanks in advance for any help!
Laurent
• Welcome to math.SE! Thanks for showing your work so far! Jun 15 '14 at 16:19
• Since you reduced this to $n_p=q$, you have $n_q = p$ as well. Also $p<q$ without loss of generality. Jun 15 '14 at 16:23
• I'm not sure that I follow your argument. You seem to be assuming that distinct Sylow $p$-subgroups intersect in the identity, and that needs some justification at least, as far as I can see. This can't hold for both a Sylow $p$-subgroup and a Sylow $q$-subgroup in any case. If $P \in {\rm Syl}_{p}(G)$ has $|P| >|Q|$ for $Q \in {\rm Syl}_{q}(G),$ for example, we have $P \cap P^{g} >1$ for any $g \in G \backslash N_{G}(P),$ otherwise $|PP^{g}| > |G|.$ Jun 15 '14 at 16:50
• Yes, but what worries me is that you could have elements of order $p$ or $p^{2}$ which lie in more than one Sylow $p$-subgroup, so that $q^{2}(p^{2}-1)$ would be an overcount of the number of elements of order $p$ or $p^{2}.$ Jun 15 '14 at 18:41
• In a group whose Sylow $p$-subgroup has prime order $p,$ it is clear that distinct Sylow $p$-subgroups only have the identity in common. In general, for larger Sylow subgroups that need not be the case. In particular groups, it needs to be checked. Jun 15 '14 at 22:32
Your argument works just as well with $p$ and $q$ switched, so the only time you have trouble is if both $n_p=q$ and $n_q = p$. Since $1\equiv n_p \mod p$ and $1\equiv n_q \mod q$ this puts very strong requirements on $p$ and $q$.
Hint 1:
Unless $n_p=1$, $n_p > p$.
Hint 2:
If $n_p=q$, then $q>p$. If $n_q =p$, then $p>q$. Oops.
### Fix for OP's argument:
The OP's argument is currently flawed in the case $n_p=q^2$, so this answer is only truly helpful after that flaw is fixed.
A very similar argument to the one given in this answer works. The first part of your argument goes through, and the $p$-$q$ symmetry helps:
If $n_p=1$ or $n_q=1$, then the group is solvable.
Now we use the Sylow counting again to get some severe restrictions:
If $n_p \neq 1$, then $n_p \in \{q,q^2\}$ and in both cases we have $1 \equiv q^2 \mod p$. Similarly, if $n_q \neq 1$, then $1 \equiv p^2 \mod q$.
Unfortunately now we don't get an easy contradiction, but at least we only get one possibility:
Since $p$ divides $q^2-1 = (q-1)(q+1)$, we must also have $p$ divides $q-1$ or $q+1$, so $p \leq q+1$ and $q \leq p+1$, so $p-1 \leq q \leq p+1$. If $p=2$ is even, then $q$ is trapped between 1 and 3, so $q=3$. If $p$ is odd, then $p-1$ and $p+1$ are both even, so the only possibility for $q \neq p$ is $q=p-1=2$ (so $p=3$) or $q=p+1=2$ (so $p=1$, nope). Hence the only possibility is $p=2$ and $q=3$ (or vice versa).
In this case, we get:
If $p=2$ and $q=3$, then $n_q \in \{2,4\}$. Considering the permutation action of $G$ on its Sylow $q$-subgroups, we know that $n_q=2$ is impossible (Sylow normalizers are never normal) and $n_q=4$ means $G$ has a normal subgroup $K$ so that $G/K$ is isomorphic to a transitive subgroup of $S_4$ containing a non-normal Sylow 3-subgroup and having order a divisor of 36. The only such subgroup is $A_4$, so $K$ has order 3. Hence $G/K\cong A_4$ and $K \cong A_3$ are solvable, so $G$ is solvable.
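(Editorial aside, not part of the original answer: the congruence restrictions above are easy to sanity-check by brute force. The short Python sketch below searches pairs of small distinct primes $p,q$ with $q^2 \equiv 1 \mod p$ and $p^2 \equiv 1 \mod q$; only the pair $\{2,3\}$ turns up.)
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [n for n in range(2, 200) if is_prime(n)]
hits = [(p, q) for p in primes for q in primes
        if p != q and (q * q - 1) % p == 0 and (p * p - 1) % q == 0]
print(hits)   # [(2, 3), (3, 2)]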
• I am tempted to henceforth end all my proofs by contradiction with "oops". +1 Jun 15 '14 at 18:19
• Thank you! I think I managed to solve it with your hints! As you said, if we have $n_p = q$ and $n_q = p$, we fall on the contradiction you mentioned in the hint 2, $p > q$ and $q > p$. Thus we can conclude that this case can't happen, and therefore $G$ is solvable. Is that right? Jun 15 '14 at 18:20
• That's OK, but unfortunately your reduction to the case $n_p=q$ and $n_q=p$ is not fully justified. Jun 15 '14 at 19:07
• I gave a fix for the OP's reduction. It is probably harder than it should be. :-) Jun 15 '14 at 19:48
• I get almost everything, but the last part of the argument goes a bit beyond my current level, I think. I need a few clarifications: 1. as English is not my mother tongue, I don't know what an OP's reduction is. 2. $n_q= 2$ is impossible because Sylow $q$-subgroups are conjugate, and not normal? 3. I do not get the last part of the argument (since $n_q = 4$). I think I don't know enough theory for the moment; isn't there any simpler way? Jun 15 '14 at 20:04
Assuming that you know that groups of order $p^2q$, $pq$ and $p^k$ are solvable, it is enough to prove that a group of order $p^2q^2$ is not simple.
Suppose that $G$ is a simple group of order $p^2q^2$. By symmetry (and since $p$-groups are solvable) we may assume $p > q$. Steps to reach a contradiction:
1. Prove the following: if $G$ is a finite group with a subgroup $H$ of index $r$, where $r$ is the smallest prime divisor of $|G|$, then $H$ is a normal subgroup.
2. By 1. $n_p = q$ cannot happen since $G$ is simple. Therefore $n_p = q^2$.
3. If there exist distinct Sylow $p$-subgroups $P_1$ and $P_2$ such that their intersection $D = P_1 \cap P_2$ is nontrivial, then $D$ has order $p$. Now $D$ is normal in both $P_1$ and $P_2$, but not normal in all of $G$, so $N_G(D)$ has order $p^2q$ and hence index $q$ in $G$. This is a contradiction by 1., since the simple group $G$ cannot have a proper normal subgroup.
4. Therefore distinct Sylow $p$-subgroups of $G$ have pairwise trivial intersection. By 2. this means that there are $q^2(p^2-1)$ elements of order $p$ or $p^2$. But then $G$ has a normal Sylow $q$-subgroup.
• Hi! First of all, thanks for helping! I like this solution; nevertheless, while points 2., 3. and 4. are easy to understand, the first one is quite hard to prove. I tried to follow the steps of this post: math.stackexchange.com/questions/164244/… I just don't get the argument "since $p$ is the smallest prime that divides $|G|$, it follows that $|G/K|=p$". It would be great if you could explain, because it's the only thing that I don't understand. Jun 16 '14 at 20:00
• @LaurentHayez: Since $p$ is the smallest prime divisor, $\gcd(|G|, p!) = p$. This follows since a common divisor of $|G|$ and $p!$ would have all of its prime divisors $\leq p$, hence it has to be $p$ or $1$. If it is still not clear, you could try proving it in the case $|G| = p^2q^2$. Jun 16 '14 at 21:00
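(A worked instance of that comment, added for illustration: take $|G| = p^2q^2$ with $p < q$. Any common divisor of $p^2q^2$ and $p!$ has all its prime factors $\leq p$, so it is a power of $p$; and since $p!$ contains the prime $p$ exactly once, the only possibilities are $1$ and $p$. As $p$ divides both numbers, $\gcd(p^2q^2, p!) = p$.)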
I will give a different series of hints/steps to show the following: if $p >q,$ then $G$ has a non-identity normal $p$-subgroup. Let $P \in {\rm Syl}_{p}(G),$ and suppose that no non-identity subgroup of $P$ is normal in $G$ (that includes $P$ itself, of course). Notice that $q \not \equiv 1$ (mod $p$), since $1 \neq q < p.$ Hence $G$ must have $q^{2}$ Sylow $p$-subgroups, and we must have $q^{2} \equiv 1$ (mod $p$). Hence $p | q+1$ (we can't have $p|q-1$ as $q <p$). But $q <p,$ so $q+1 \leq p,$ so we must have $q = p-1$. Now $p \neq 2$ as $p>q,$ so $q$ is even. Hence $q = 2$ and $p=3,$ as $p$ is a prime. Hence $|G| = 36$. Now $P = N_{G}(P)$ by Sylow's Theorem. Furthermore, there is no proper subgroup $M$ of $G$ which strictly contains $P.$ For otherwise we would have $[M:P] \equiv 1$ (mod $3$) and $[G:M] \equiv 1$ (mod 3), forcing $[G:P] \geq 16,$ which is not the case. Now let $g^{-1}Pg$ be another Sylow $3$-subgroup of $G.$ Then $P \cap g^{-1}Pg \neq 1$ as $|P||g^{-1}Pg| > |G|$. However $P$ and $g^{-1}Pg$ are both Abelian, so $P \cap g^{-1}Pg \lhd \langle P,g^{-1}Pg \rangle >P.$ But there is no subgroup of $G$ strictly between $P$ and $G,$ so $P \cap g^{-1}Pg \lhd G.$ (Actually your (Laurent's) argument works in $G/(P \cap g^{-1}Pg)$ to show that a Sylow $2$-subgroup is normal in that quotient group.)
• Thank you, I got everything except one point. It is the part "For otherwise we would have $[M:P] \equiv 1 \pmod 3$ and $[G:M] \equiv 1 \pmod 3$". I really can't see why the indexes should be equivalent to $1$ modulo $3$? Jun 16 '14 at 20:04
• Because $P$ is also the normalizer of a Sylow $3$-subgroup of $M$, we must have $[M:P] \equiv 1$ (mod $3$). Because $P$ is the normalizer of a Sylow $3$-subgroup of $G,$ we must have $[G:P ]\equiv 1$ (mod $3$). Then $[G:P] = [G:M][M:P] \equiv 1$ (mod $3$) together with $[M:P] \equiv 1$ (mod $3$) forces $[G:M] \equiv 1$ (mod $3$) as well. Jun 16 '14 at 22:21
# [SOLVED] Another trigonometric Identity
• Mar 1st 2010, 06:08 PM
F.A.S.T.
[SOLVED] Another trigonometric Identity
I figured out the problem before, but now I have no clue what I did. Can you please help me figure this out!
It is in the word document!
• Mar 2nd 2010, 12:27 AM
Hello F.A.S.T.
Quote:
Originally Posted by F.A.S.T.
I figured out the problem before, but now I have no clue what I did. Can you please help me figure this out!
It is in the word document!
The question is this: Prove the identity:
$\sin^4x-\cos^4x = 2\sin^2x-1$
The proof, and the explanation of the steps, is:
$\sin^4x-\cos^4x = (\sin^2x)^2-(\cos^2x)^2$, since $a^4 = (a^2)^2$, where $a$ can be anything you like
$=(\sin^2x+\cos^2x)(\sin^2x-\cos^2x)$, since $a^2-b^2 = (a+b)(a-b)$ - sometimes called 'the difference of two squares'
$= 1(\sin^2x-\cos^2x)$, since $\sin^2x+\cos^2x=1$, for any value of $x$
$=\sin^2x - (1-\sin^2x)$, using $\sin^2x+\cos^2x=1$ again, but written as $\cos^2x = 1-\sin^2x$
$=2\sin^2x-1$
Do you follow it all now?
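(Editor's note, not part of the original thread: the identity can also be confirmed symbolically with a few lines of Python/SymPy.)
import sympy as sp

x = sp.symbols('x')
lhs = sp.sin(x)**4 - sp.cos(x)**4
rhs = 2 * sp.sin(x)**2 - 1
print(sp.simplify(lhs - rhs))   # prints 0, so the two sides agree for every x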
My Math Forum Exponent
July 27th, 2014, 11:25 AM #11
Senior Member
Joined: Nov 2013
From: Baku
Posts: 502
Thanks: 56
Math Focus: Geometry
Quote:
Originally Posted by bml1105 Can anyone tell me how to make the subscripts for the log formulas when writing on the forum? Thanks!
\log_x y => $\displaystyle \log_x y$
July 27th, 2014, 11:31 AM #12
Newbie
Joined: Jul 2014
From: Seattle
Posts: 16
Thanks: 0
Hmmm that actually isn't working for me. It's just staying in the same format \log_x y log_x y =>
Last edited by bml1105; July 27th, 2014 at 11:35 AM.
July 27th, 2014, 11:40 AM #13
Newbie
Joined: Jul 2014
From: Seattle
Posts: 16
Thanks: 0
Nevermind I figured it out!
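(General tip, added by the editor rather than taken from the thread: in LaTeX, a subscript longer than one character must be wrapped in braces, otherwise only the first character is placed in the subscript.)
\log_x y        % single-character base: braces optional
\log_{10} y     % multi-character base: braces required
\log_{a+b} y    % same idea for expressions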
# How do you find the intercepts, vertex and graph f(x)=-2x^2+8x-3?
Feb 21, 2017
Vertex $\left(2 , 5\right)$
X intercept (3.58, 0); (0.42, 0)
Y-Intercept $\left(0 , - 3\right)$
#### Explanation:
Given -
$f \left(x\right) = - 2 {x}^{2} + 8 x - 3$
Vertex
$x = \frac{- b}{2 a} = \frac{- 8}{2 \times - 2} = \frac{- 8}{- 4} = 2$
At $x = 2$
$y = - 2 \left({2}^{2}\right) + 8 \left(2\right) - 3$
$y = - 8 + 16 - 3 = 16 - 11 = 5$
Vertex $\left(2 , 5\right)$
X intercept
$- 2 {x}^{2} + 8 x - 3 = 0$
${x}^{2} - 4 x + \frac{3}{2} = 0$ [divide all the terms by $- 2$]
${x}^{2} - 4 x = - \frac{3}{2}$ [Take the constant term to the right]
${x}^{2} - 4 x + 4 = - \frac{3}{2} + 4$ [take half the coefficient of $x$, square it, and add it to both sides]
${x}^{2} - 4 x + 4 = - \frac{3}{2} + 4 = \frac{- 3 + 8}{2} = \frac{5}{2}$
${\left(x - 2\right)}^{2} = \frac{5}{2}$
$x - 2 = \pm \sqrt{\frac{5}{2}} = \pm 1.58$
$x = 1.58 + 2 = 3.58$
$x = - 1.58 + 2 = 0.42$
X intercept (3.58, 0); (0.42, 0)
Y-Intercept
At $x = 0$
$y = - 2 {\left(0\right)}^{2} + 8 \left(0\right) - 3$
$y = - 3$
Y-Intercept $\left(0 , - 3\right)$
To graph the function take a few values on either side of $x = 2$
Find the corresponding $y$ values.
Tabulate them. Graph the pairs.
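As a quick check (an editorial addition, not part of the original answer), the vertex and intercepts can be recomputed with the quadratic formula in a few lines of Python:
import math

a, b, c = -2.0, 8.0, -3.0              # f(x) = -2x^2 + 8x - 3

x_v = -b / (2 * a)                     # vertex x-coordinate
y_v = a * x_v**2 + b * x_v + c
print("vertex:", (x_v, y_v))           # (2.0, 5.0)

disc = b**2 - 4 * a * c                # discriminant = 40
roots = sorted((-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1))
print("x-intercepts:", [round(r, 2) for r in roots])   # [0.42, 3.58]
print("y-intercept:", c)               # -3.0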
# SD card speed class specifies “minimum sequential write speed” but not the “maximum sequential write speed”, why?
A speed rating for any memory device is usually how fast it can work. However, the speed class of SD cards actually specifies the "minimum sequential write speed". Please see here for evidence under "SD Speed Class".
1. Shouldn't speed rating be about the maximum rate at which we can write to a memory?
2. Also, since the SD card speed class is about the lower bound, does that mean that if we write to it at a slower rate, the card will not function and the data shall become corrupt?
Finally, what exactly is the minimum data rate for SPI mode? The "SD Specifications Part 1 Physical Layer Simplified Specification Version 2.00 September 25, 2006" section 7.2.15 "Speed Class Specification" states:
As opposed to SD mode, the card cannot guarantee its Speed Class. In SPI mode, host shall treat the card as Class 0 no matter what Class is indicated in SD Status.
1. This implies that the minimum rate at which we need to read/write the card in SPI mode is actually 0 MB/sec? This implies that when using the SPI bus, the actual speed class of the card does not matter? But is this really the case?
2. As far as I am aware, after SD card power up, when we want to enter the SPI mode, we communicate with the card using a very low clock frequency which is in range of a few 100KHz at most. Is this true for SD card of any speed class?
• I'm not putting this as an answer because I'm too lazy to check, but a stat like this normally means "it will go at least this fast". The minimum is the lower bound on its maximum rate. Like specifying a motor car as being reliably capable of going at at least 100 miles per hour. It may be able to go at 120mph, but it'll definitely run at 100. – Ian Bland Jul 4 '18 at 22:55
• The card's minimum speed specifies the maximum speed you can reliably transfer data to it. If $$f_{yourdesign} \leq f_{min} \leq f_{card}$$, it is guaranteed that $$f_{yourdesign} \leq f_{card}$$ and therefore no data is lost. – Ben Voigt Jul 5 '18 at 0:22
To be considered to meet the requirements of the relevant class the card must support writing at a given minimum speed (eg. 6MB/s for class 6). If they are faster, that is okay. It's like saying you must have a minimum income of $150K to qualify for this credit card, if you have more that's okay.
Class 0 is not rated under this system so you would have to check the manufacturer's specifications if you want to use SPI.
And you should not necessarily expect that the average write rate of the card means that you can write every byte at that rate. You may have to have a sizable buffer to account for the card going off on its own from time to time.
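(A back-of-the-envelope illustration of that buffering point, added here with made-up numbers rather than figures from the answer: if data keeps arriving while the card stalls, the buffer must absorb the backlog.)
rate_in = 4 * 1024 * 1024      # assumed incoming data rate: 4 MiB/s, e.g. from a camera
t_stall = 0.25                 # assumed worst-case card stall, in seconds
buffer_bytes = rate_in * t_stall
print(f"minimum buffer: {buffer_bytes / 1024:.0f} KiB")   # 1024 KiB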
When you have for example a camera and take a video then you have to store data at a rate of X MiB/s or data would get lost. So you go and buy a card with the class that has the appropriate minimum speed.
There is never a case where you say: Well, I can't generate data any faster than X MiB/s. So I must not buy a card that will exceed X MiB/s because then the card will be idle and self destruct out of boredom.
Nobody cares about the cards feelings. :) If the card stores data twice as fast as you generate it and then is idle nothing bad happens.
• What confused me is that when I buy a hard disk it gives the maximum read/write speed that it can achieve, a bit like "hey look how fast this can run". So it being the other way around for SD cards was confusing for me. – quantum231 Jul 7 '18 at 10:10
# Newton's first law should not be termed an expression of inertia
1. Jan 19, 2012
### johann1301
Newton's first law should not be termed an expression of inertia
LAW I.
EVERY BODY PERSEVERES IN ITS STATE OF REST, OR OF UNIFORM MOTION IN A RIGHT LINE, UNLESS IT IS COMPELLED TO CHANGE THAT STATE BY FORCES IMPRESSED THEREON. - «Isaac Newton's Principia»
In case inertia could be removed from a body (e.g. by a relative of Maxwell's demon), the following situation is presumably true:
PART I
1. A body B has a state of uniform motion in a right line
2. Inertia is removed from the body's mass
3. The body's state of motion is unchanged*
PART II
4. Another (inertial)body A comes in contact with Body B
5. This contact causes B to change in its state of motion which is described as mass over a distance per second squared. Also known as force.
I conclude the following about LAW I;
PART I
Every body perseveres in its state of uniform motion in a right line regardless of inertia.
PART II
Any body can be compelled to a change in its state of motion by forces regardless of inertia.
Newton's first law is therefore independent of inertia and should not be termed an expression of inertia or «the law of inertia».
Some examples where LAW I is termed an expression of inertia:
Last edited by a moderator: Sep 25, 2014
2. Jan 19, 2012
### zhermes
Your post is hardly coherent... but it seems you just don't understand what 'inertia' is.
'Inertia' (as you can find from a simple Google search) is not a physical thing---it's a property, or a 'tendency'. In particular, 'inertia' IS exactly Newton's first law. In other words, Newton's first law defines 'inertia'.
If:
$$F = 0$$
Then:
$$ma \sim \Delta (mv) = 0$$
Edited equation for clarity
That is inertia.
What? Maxwell's demon removed certain types of particles from an ensemble of many... how do you remove inertia from an object without exerting a force upon it?
You don't 'remove inertia from mass'.
You put an '*' on part '3' without expanding... '3' is also completely false. Inertia is an object's state of motion.
This isn't really related to the overall point, but force is in units of mass * distance per second squared.
For the record; random videos aren't especially pertinent sources or references...
Last edited by a moderator: Sep 25, 2014
3. Jan 19, 2012
### johann1301
What does $\Delta$ stand for?
The mark (*) is there because I don't know if the statement is true.
I figure you do;
Last edited: Jan 19, 2012
4. Jan 19, 2012
### zhermes
The 'Delta' ($\Delta$) refers to a 'change in' a parameter. A more exact statement would have been $$\frac{d}{dt} (mv) = 0$$ which means the derivative* of the mass times the velocity is zero. If the mass is constant, this simplifies to
$$\textrm{if} \hspace{0.2in} \frac{d}{dt} m = 0 \hspace{0.2in} \textrm{then} \hspace{0.2in} \frac{d}{dt} (mv) = m a$$
Where the acceleration is the derivative of the velocity.
* If you are unfamiliar with derivatives, its basically the instantaneous rate of change of something.
Last edited: Jan 19, 2012
5. Jan 19, 2012
### daqddyo1
@zhermes: Force has units based on mass, distance and time squared. Not what you said.
Your equation: ma = Δ(mv) is incorrect.
It should read ma = m(Δv/Δt).
johann1301: Δ(mv) is more commonly and correctly written as mΔv and means change in momentum.
To start, changing mass by breaking off bits or whatever changes the object in discussion so that it is neither Object A nor Object B any more.
6. Jan 19, 2012
### zhermes
Thanks @daqddyo, fixed and clarified these respectively.
7. Jan 19, 2012
### johann1301
I understand why you disagree and find my demon silly. It's not possible to remove inertia from matter.
If I said:
Let's say that some day we find a rock from space which is made of matter/mass but has no inertia. I know, it's a stupid thought. (but imagine that! wow!)
Why should or would we think that it would be an exception from Newton's first law of motion?
My answer is: it wouldn't be an exception from the law, it would follow the law exactly. Is there any reason to think anything else?
8. Jan 19, 2012
### ZapperZ
Staff Emeritus
This is one of the most meaningless waste-of-time discussion that I've ever seen on this forum.
1. You have not shown ANY physics that decouples mass from inertia.
2. Yet, without showing #1, you THEN somehow make this outlandish assertion of "what if".
This thread is in clear violation of the PF Rules on over-speculative post.
Zz.
# App Bundles with a Makefile
After my recent Xcode learning experience, I thought I would see if I could accomplish the same things without it. It turns out that it’s pretty straightforward to use a plain old Makefile to create a nice double-clickable .app bundle.
Makefiles are almost as old as Mac OS X’s UNIX underpinnings themselves, and pretty simple to use. The command line make tool checks the nearest Makefile for instructions, and then, well, makes the build happen.
In it’s simplest form, you don’t need any Makefile at all. If you have a file called foo.c, typing make foo in the same directory will produce a binary called foo by compiling foo.c using your system’s C compiler (e.g. the one installed with Xcode). More sophisticated things, like creating a .app bundle, can be done by adding some variables and targets to the Makefile.
Similar to last time, I have made a folder for this project. SDL2.framework is installed in my home directory under ~/Library/Frameworks/, and Xcode and OS X are up to date.
# Step 1: Create our main.c
We’ll use the same code as last time, at least to start. Save this as main.c (or whatever you like, but remember to change later directions accordingly).
#include <SDL2/SDL.h>

int main(int argc, const char * argv[]){
    if(SDL_Init(SDL_INIT_EVERYTHING) != 0)
    {
        puts("SDL_Init error");
        return -1;
    } else {
        puts("SDL_Init success!");
        return 0;
    }
}
# What’s in an .app?
That which we call a program, by any other name would smell as sweet…
App bundles are a clever way of moving executables and their associated resources around while convincing the user that they’re one big file. Really, they’re just folders with names ending in .app, and a few special folders and files that OS X looks for. They are:
• Contents/Info.plist — Tells Finder and friends what kind of bundle this is and how to use it.
• Contents/Resources/ — Resources needed by the app. Notably, frameworks (a Mac OS X specific way to package libraries) live here.
• Contents/MacOS/ — we'll be putting our executable in this folder
We know where to find the SDL2 framework, and we will build our binary shortly, but what about Info.plist? We’ll have to steal one from an existing app and modify it. Right- or Control-click on an existing app (say, TextEdit) and choose “Show Package Contents”. Inside the app, you should see an Info.plist. Open it in your favorite text editor and have a look. (What the hell is all that?, you’re probably thinking.)
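If you would rather poke at a bundle programmatically, a few lines of Python do the same job (this is my own aside, not part of the original workflow; the TextEdit path is an assumption and may differ between macOS versions):
import plistlib

# Path to an existing app's Info.plist; adjust for your system.
path = "/System/Applications/TextEdit.app/Contents/Info.plist"
with open(path, "rb") as f:
    info = plistlib.load(f)

for key in sorted(info):           # print every top-level key and its value
    print(key, "=", info[key])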
Fortunately, it turns out that we can remove most of those properties and still have an app bundle that works. Here’s the template I came up with based on the minimal app from last time with Xcode:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleExecutable</key>
<string>APP_NAME</string>
<key>CFBundleIdentifier</key>
<string>com.your-name.APP_NAME</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>APP_NAME</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
</dict>
</plist>
Fill in your-name appropriately, then save this as Info.plist in the same directory as your Makefile.
# Step 2: Building a Bundle
Before we automate this with a Makefile, it’s good practice to make sure you can put it all together in the terminal. After all, make just runs your commands!
I’m calling my application SDLExample2 (original, I know). I’m putting it in a build directory just to keep things organized. (This way I can ignore the whole folder with .gitignore, no matter what build products I eventually have.)
1. Make the .app folder and essential subfolders
mkdir -p ./build/SDLExample2.app/Contents/{MacOS,Resources}
2. Compile main.c to make an executable for the bundle
cc -F "$HOME/Library/Frameworks" -framework SDL2 main.c -o main
As you can probably guess, -F specifies a folder with frameworks in it. -framework indicates the name of a framework to link with. If you run it in the terminal you should see something like:
$ ./main
SDL_Init success!
3. Copy the resulting binary into place
cp ./main ./build/SDLExample2.app/Contents/MacOS/SDLExample2
4. Copy the SDL2 framework into Resources
cp -R "$HOME/Library/Frameworks/SDL2.framework" ./build/SDLExample2.app/Contents/Resources/
5. Copy in the Info.plist file…
cp Info.plist ./build/SDLExample2.app/Contents/
…and edit it with your favorite text editor. It should look like this, unless you've named the app something other than SDLExample2:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleExecutable</key>
<string>SDLExample2</string>
<key>CFBundleIdentifier</key>
<string>com.joseph-long.SDLExample2</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>SDLExample2</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
</dict>
</plist>
Now you should have an app! An app that doesn't do much! Double click the SDLExample2 icon in the build folder and see. If you don't get any error messages about corrupted or incomplete applications, you're good to go.
# Step 3: Make the computer build you a bundle
I was afraid of make and Makefiles for a long time, but I eventually figured out that my fear was from seeing Makefiles made to do things they really ought not to do. It's not a very good language in which to write scripts! For simple tasks, however, they are more than adequate. Here is my Makefile to automate everything we did in Step 2.
FRAMEWORK_PATH=$(HOME)/Library/Frameworks
APP_NAME=SDLExample2
CFLAGS=-F $(FRAMEWORK_PATH) -framework SDL2
all: main clean_app package_app
clean_app:
	rm -rf "./build/$(APP_NAME).app/"
package_app:
	mkdir -p "./build/$(APP_NAME).app"/Contents/{MacOS,Resources}
	cp -R "$(FRAMEWORK_PATH)/SDL2.framework" "./build/$(APP_NAME).app/Contents/Resources/"
	cp Info.plist "./build/$(APP_NAME).app/Contents/"
	sed -e "s/APP_NAME/$(APP_NAME)/g" -i "" "./build/$(APP_NAME).app/Contents/Info.plist"
	cp ./main "./build/$(APP_NAME).app/Contents/MacOS/$(APP_NAME)"
(n.b. These should be actual tab characters indenting lines in the Makefile, not spaces.)
The Makefile begins with some variable definitions of the form FOO=bar. Including the value of one variable in another is done with the $(FOO) construct. (We don't quote variables defined in Makefiles, as the quotes are interpreted literally.) CFLAGS is important, since it sets up extra arguments to be passed to the C compiler (like the ones in step 2.2).
The all target is special, as it is built (or run) when make is invoked without target names. Here we tell it to "build main, remove any existing app bundle, and build the app bundle again from scratch". The clean_app target just removes the entire bundle folder.
The package_app target is basically the commands we did, one by one, in Step 2. The only thing different here is the use of sed's in-place-editing mode (-i "") to replace APP_NAME with our app name defined in the beginning of the Makefile. Also, note that we have quotes around arguments that include $(APP_NAME). This is so you won't confuse the shell if you set APP_NAME to something with spaces in it (e.g. APP_NAME=SDL Example 2).
# Making make make
Remove the main binary and app bundle, if they exist, then run make with no arguments. You should see something like this:
# make
cc -F /Users/josephoenix/Library/Frameworks -framework SDL2 main.c -o main
rm -rf "./build/SDLExample2.app/"
mkdir -p "./build/SDLExample2.app"/Contents/{MacOS,Resources}
cp -R "/Users/josephoenix/Library/Frameworks/SDL2.framework" "./build/SDLExample2.app/Contents/Resources/"
cp Info.plist "./build/SDLExample2.app/Contents/"
sed -e "s/APP_NAME/SDLExample2/g" -i "" "./build/SDLExample2.app/Contents/Info.plist"
cp ./main "./build/SDLExample2.app/Contents/MacOS/SDLExample2"
Look in the ./build/ directory. Do you see an app bundle?
## You don’t need Xcode to build an app bundle after all!
I know I should check my shell-based privilege here, but this took me all of one flight to figure out. (It was a short flight too! ATL to BWI.) On the other hand, I futzed around with Xcode for an entire evening to write that other post.
## G = C6×C9⋊S3, order 324 = 2²·3⁴
### Direct product of C6 and C9⋊S3
Series: Derived Chief Lower central Upper central
Derived series C1 — C3×C9 — C6×C9⋊S3
Chief series C1 — C3 — C32 — C3×C9 — C32×C9 — C3×C9⋊S3 — C6×C9⋊S3
Lower central C3×C9 — C6×C9⋊S3
Upper central C1 — C6
Generators and relations for C6×C9⋊S3
G = < a,b,c,d | a6=b9=c3=d2=1, ab=ba, ac=ca, ad=da, bc=cb, dbd=b-1, dcd=c-1 >
Subgroups: 556 in 130 conjugacy classes, 46 normal (18 characteristic)
C1, C2, C2 [×2], C3 [×2], C3 [×3], C3 [×4], C22, S3 [×8], C6 [×2], C6 [×3], C6 [×6], C9 [×3], C9 [×3], C32 [×2], C32 [×3], C32 [×4], D6 [×4], C2×C6, D9 [×6], C18 [×3], C18 [×3], C3×S3 [×8], C3⋊S3 [×2], C3×C6 [×2], C3×C6 [×3], C3×C6 [×4], C3×C9, C3×C9 [×3], C3×C9 [×4], C33, D18 [×3], S3×C6 [×4], C2×C3⋊S3, C3×D9 [×6], C9⋊S3 [×2], C3×C18, C3×C18 [×3], C3×C18 [×4], C3×C3⋊S3 [×2], C32×C6, C32×C9, C6×D9 [×3], C2×C9⋊S3, C6×C3⋊S3, C3×C9⋊S3 [×2], C32×C18, C6×C9⋊S3
Quotients: C1, C2 [×3], C3, C22, S3 [×4], C6 [×3], D6 [×4], C2×C6, D9 [×3], C3×S3 [×4], C3⋊S3, D18 [×3], S3×C6 [×4], C2×C3⋊S3, C3×D9 [×3], C9⋊S3, C3×C3⋊S3, C6×D9 [×3], C2×C9⋊S3, C6×C3⋊S3, C3×C9⋊S3, C6×C9⋊S3
Smallest permutation representation of C6×C9⋊S3
On 108 points
Generators in S108
(1 62 33 67 43 48)(2 63 34 68 44 49)(3 55 35 69 45 50)(4 56 36 70 37 51)(5 57 28 71 38 52)(6 58 29 72 39 53)(7 59 30 64 40 54)(8 60 31 65 41 46)(9 61 32 66 42 47)(10 75 21 99 102 83)(11 76 22 91 103 84)(12 77 23 92 104 85)(13 78 24 93 105 86)(14 79 25 94 106 87)(15 80 26 95 107 88)(16 81 27 96 108 89)(17 73 19 97 100 90)(18 74 20 98 101 82)
(1 2 3 4 5 6 7 8 9)(10 11 12 13 14 15 16 17 18)(19 20 21 22 23 24 25 26 27)(28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45)(46 47 48 49 50 51 52 53 54)(55 56 57 58 59 60 61 62 63)(64 65 66 67 68 69 70 71 72)(73 74 75 76 77 78 79 80 81)(82 83 84 85 86 87 88 89 90)(91 92 93 94 95 96 97 98 99)(100 101 102 103 104 105 106 107 108)
(1 30 37)(2 31 38)(3 32 39)(4 33 40)(5 34 41)(6 35 42)(7 36 43)(8 28 44)(9 29 45)(10 108 24)(11 100 25)(12 101 26)(13 102 27)(14 103 19)(15 104 20)(16 105 21)(17 106 22)(18 107 23)(46 57 68)(47 58 69)(48 59 70)(49 60 71)(50 61 72)(51 62 64)(52 63 65)(53 55 66)(54 56 67)(73 87 91)(74 88 92)(75 89 93)(76 90 94)(77 82 95)(78 83 96)(79 84 97)(80 85 98)(81 86 99)
(1 74)(2 73)(3 81)(4 80)(5 79)(6 78)(7 77)(8 76)(9 75)(10 47)(11 46)(12 54)(13 53)(14 52)(15 51)(16 50)(17 49)(18 48)(19 63)(20 62)(21 61)(22 60)(23 59)(24 58)(25 57)(26 56)(27 55)(28 94)(29 93)(30 92)(31 91)(32 99)(33 98)(34 97)(35 96)(36 95)(37 88)(38 87)(39 86)(40 85)(41 84)(42 83)(43 82)(44 90)(45 89)(64 104)(65 103)(66 102)(67 101)(68 100)(69 108)(70 107)(71 106)(72 105)
G:=sub<Sym(108)| (1,62,33,67,43,48)(2,63,34,68,44,49)(3,55,35,69,45,50)(4,56,36,70,37,51)(5,57,28,71,38,52)(6,58,29,72,39,53)(7,59,30,64,40,54)(8,60,31,65,41,46)(9,61,32,66,42,47)(10,75,21,99,102,83)(11,76,22,91,103,84)(12,77,23,92,104,85)(13,78,24,93,105,86)(14,79,25,94,106,87)(15,80,26,95,107,88)(16,81,27,96,108,89)(17,73,19,97,100,90)(18,74,20,98,101,82), (1,2,3,4,5,6,7,8,9)(10,11,12,13,14,15,16,17,18)(19,20,21,22,23,24,25,26,27)(28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45)(46,47,48,49,50,51,52,53,54)(55,56,57,58,59,60,61,62,63)(64,65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80,81)(82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99)(100,101,102,103,104,105,106,107,108), (1,30,37)(2,31,38)(3,32,39)(4,33,40)(5,34,41)(6,35,42)(7,36,43)(8,28,44)(9,29,45)(10,108,24)(11,100,25)(12,101,26)(13,102,27)(14,103,19)(15,104,20)(16,105,21)(17,106,22)(18,107,23)(46,57,68)(47,58,69)(48,59,70)(49,60,71)(50,61,72)(51,62,64)(52,63,65)(53,55,66)(54,56,67)(73,87,91)(74,88,92)(75,89,93)(76,90,94)(77,82,95)(78,83,96)(79,84,97)(80,85,98)(81,86,99), (1,74)(2,73)(3,81)(4,80)(5,79)(6,78)(7,77)(8,76)(9,75)(10,47)(11,46)(12,54)(13,53)(14,52)(15,51)(16,50)(17,49)(18,48)(19,63)(20,62)(21,61)(22,60)(23,59)(24,58)(25,57)(26,56)(27,55)(28,94)(29,93)(30,92)(31,91)(32,99)(33,98)(34,97)(35,96)(36,95)(37,88)(38,87)(39,86)(40,85)(41,84)(42,83)(43,82)(44,90)(45,89)(64,104)(65,103)(66,102)(67,101)(68,100)(69,108)(70,107)(71,106)(72,105)>;
G:=Group( (1,62,33,67,43,48)(2,63,34,68,44,49)(3,55,35,69,45,50)(4,56,36,70,37,51)(5,57,28,71,38,52)(6,58,29,72,39,53)(7,59,30,64,40,54)(8,60,31,65,41,46)(9,61,32,66,42,47)(10,75,21,99,102,83)(11,76,22,91,103,84)(12,77,23,92,104,85)(13,78,24,93,105,86)(14,79,25,94,106,87)(15,80,26,95,107,88)(16,81,27,96,108,89)(17,73,19,97,100,90)(18,74,20,98,101,82), (1,2,3,4,5,6,7,8,9)(10,11,12,13,14,15,16,17,18)(19,20,21,22,23,24,25,26,27)(28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45)(46,47,48,49,50,51,52,53,54)(55,56,57,58,59,60,61,62,63)(64,65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80,81)(82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99)(100,101,102,103,104,105,106,107,108), (1,30,37)(2,31,38)(3,32,39)(4,33,40)(5,34,41)(6,35,42)(7,36,43)(8,28,44)(9,29,45)(10,108,24)(11,100,25)(12,101,26)(13,102,27)(14,103,19)(15,104,20)(16,105,21)(17,106,22)(18,107,23)(46,57,68)(47,58,69)(48,59,70)(49,60,71)(50,61,72)(51,62,64)(52,63,65)(53,55,66)(54,56,67)(73,87,91)(74,88,92)(75,89,93)(76,90,94)(77,82,95)(78,83,96)(79,84,97)(80,85,98)(81,86,99), (1,74)(2,73)(3,81)(4,80)(5,79)(6,78)(7,77)(8,76)(9,75)(10,47)(11,46)(12,54)(13,53)(14,52)(15,51)(16,50)(17,49)(18,48)(19,63)(20,62)(21,61)(22,60)(23,59)(24,58)(25,57)(26,56)(27,55)(28,94)(29,93)(30,92)(31,91)(32,99)(33,98)(34,97)(35,96)(36,95)(37,88)(38,87)(39,86)(40,85)(41,84)(42,83)(43,82)(44,90)(45,89)(64,104)(65,103)(66,102)(67,101)(68,100)(69,108)(70,107)(71,106)(72,105) );
G=PermutationGroup([(1,62,33,67,43,48),(2,63,34,68,44,49),(3,55,35,69,45,50),(4,56,36,70,37,51),(5,57,28,71,38,52),(6,58,29,72,39,53),(7,59,30,64,40,54),(8,60,31,65,41,46),(9,61,32,66,42,47),(10,75,21,99,102,83),(11,76,22,91,103,84),(12,77,23,92,104,85),(13,78,24,93,105,86),(14,79,25,94,106,87),(15,80,26,95,107,88),(16,81,27,96,108,89),(17,73,19,97,100,90),(18,74,20,98,101,82)], [(1,2,3,4,5,6,7,8,9),(10,11,12,13,14,15,16,17,18),(19,20,21,22,23,24,25,26,27),(28,29,30,31,32,33,34,35,36),(37,38,39,40,41,42,43,44,45),(46,47,48,49,50,51,52,53,54),(55,56,57,58,59,60,61,62,63),(64,65,66,67,68,69,70,71,72),(73,74,75,76,77,78,79,80,81),(82,83,84,85,86,87,88,89,90),(91,92,93,94,95,96,97,98,99),(100,101,102,103,104,105,106,107,108)], [(1,30,37),(2,31,38),(3,32,39),(4,33,40),(5,34,41),(6,35,42),(7,36,43),(8,28,44),(9,29,45),(10,108,24),(11,100,25),(12,101,26),(13,102,27),(14,103,19),(15,104,20),(16,105,21),(17,106,22),(18,107,23),(46,57,68),(47,58,69),(48,59,70),(49,60,71),(50,61,72),(51,62,64),(52,63,65),(53,55,66),(54,56,67),(73,87,91),(74,88,92),(75,89,93),(76,90,94),(77,82,95),(78,83,96),(79,84,97),(80,85,98),(81,86,99)], [(1,74),(2,73),(3,81),(4,80),(5,79),(6,78),(7,77),(8,76),(9,75),(10,47),(11,46),(12,54),(13,53),(14,52),(15,51),(16,50),(17,49),(18,48),(19,63),(20,62),(21,61),(22,60),(23,59),(24,58),(25,57),(26,56),(27,55),(28,94),(29,93),(30,92),(31,91),(32,99),(33,98),(34,97),(35,96),(36,95),(37,88),(38,87),(39,86),(40,85),(41,84),(42,83),(43,82),(44,90),(45,89),(64,104),(65,103),(66,102),(67,101),(68,100),(69,108),(70,107),(71,106),(72,105)])
90 conjugacy classes
class: 1 2A 2B 2C 3A 3B 3C ··· 3N 6A 6B 6C ··· 6N 6O 6P 6Q 6R 9A ··· 9AA 18A ··· 18AA
order: 1 2 2 2 3 3 3 ··· 3 6 6 6 ··· 6 6 6 6 6 9 ··· 9 18 ··· 18
size: 1 1 27 27 1 1 2 ··· 2 1 1 2 ··· 2 27 27 27 27 2 ··· 2 2 ··· 2
90 irreducible representations
dim: 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2
type: + + + + + + + + +
image: C1 C2 C2 C3 C6 C6 S3 S3 D6 D6 C3×S3 D9 C3×S3 S3×C6 D18 S3×C6 C3×D9 C6×D9
kernel: C6×C9⋊S3 C3×C9⋊S3 C32×C18 C2×C9⋊S3 C9⋊S3 C3×C18 C3×C18 C32×C6 C3×C9 C33 C18 C3×C6 C3×C6 C9 C32 C32 C6 C3
# reps: 1 2 1 2 4 2 3 1 3 1 6 9 2 6 9 2 18 18
Matrix representation of C6×C9⋊S3 in GL5(𝔽19)
8 0 0 0 0
0 7 0 0 0
0 0 7 0 0
0 0 0 11 0
0 0 0 0 11
,
1 0 0 0 0
0 6 0 0 0
0 0 16 0 0
0 0 0 6 0
0 0 0 0 16
,
1 0 0 0 0
0 7 0 0 0
0 0 11 0 0
0 0 0 1 0
0 0 0 0 1
,
18 0 0 0 0
0 0 1 0 0
0 1 0 0 0
0 0 0 0 1
0 0 0 1 0
G:=sub<GL(5,GF(19))| [8,0,0,0,0,0,7,0,0,0,0,0,7,0,0,0,0,0,11,0,0,0,0,0,11],[1,0,0,0,0,0,6,0,0,0,0,0,16,0,0,0,0,0,6,0,0,0,0,0,16],[1,0,0,0,0,0,7,0,0,0,0,0,11,0,0,0,0,0,1,0,0,0,0,0,1],[18,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0] >;
C6×C9⋊S3 in GAP, Magma, Sage, TeX
C_6\times C_9\rtimes S_3
% in TeX
G:=Group("C6xC9:S3");
// GroupNames label
G:=SmallGroup(324,142);
// by ID
G=gap.SmallGroup(324,142);
# by ID
G:=PCGroup([6,-2,-2,-3,-3,-3,-3,3171,453,2164,7781]);
// Polycyclic
G:=Group<a,b,c,d|a^6=b^9=c^3=d^2=1,a*b=b*a,a*c=c*a,a*d=d*a,b*c=c*b,d*b*d=b^-1,d*c*d=c^-1>;
// generators/relations
# 2.2.3.4 Curve
## Remark
An Origin C Curve is comprised of a Y Dataset and typically (but not necessarily) an associated X Dataset. For example, a data set plotted against row numbers will not have an associated X data set. An Origin C Curve object can easily be plotted using methods of the GraphLayer class. The Curve class is derived from the curvebase and vectorbase classes, from which it inherits methods and properties.
## Examples
EX1
// Assumes Book1_A and Book1_B exist and contain data
void Curve_ex1()
{
Dataset ds1("Book1_A"), ds2("Book1_B");
Curve crvCopy( ds1, ds2 );
crvCopy.Sort(); // Does not affect ds1, ds2
}
# Symbol of a differential operator
In mathematics, the symbol of a linear differential operator is a polynomial obtained from the differential operator by, roughly speaking, replacing each partial derivative by a new variable. The symbol of a differential operator has broad applications to Fourier analysis. In particular, in this connection it leads to the notion of a pseudo-differential operator. The highest-order part of the symbol, known as the principal symbol, almost completely controls the qualitative behavior of solutions of a partial differential equation. Linear elliptic partial differential equations can be characterized as those whose principal symbol is nowhere zero. In the study of hyperbolic and parabolic partial differential equations, zeros of the principal symbol correspond to the characteristics of the partial differential equation. Consequently, the symbol is often fundamental for the solution of such equations, and is one of the main computational devices used to study their singularities.
## Definition
### Operators on Euclidean space
Let P be a linear differential operator of order k on the Euclidean space Rd. Then P is a polynomial in the derivative D, which in multi-index notation can be written
${\displaystyle P=p(x,D)=\sum _{|\alpha |\leq k}a_{\alpha }(x)D^{\alpha }.}$
The total symbol of P is the polynomial p:
${\displaystyle p(x,\xi )=\sum _{|\alpha |\leq k}a_{\alpha }(x)\xi ^{\alpha }.}$
The leading symbol, also known as the principal symbol, is the highest-degree component of p :
${\displaystyle \sigma _{P}(\xi )=\sum _{|\alpha |=k}a_{\alpha }\xi ^{\alpha }}$
and is of importance later because it is the only part of the symbol that transforms as a tensor under changes to the coordinate system.
The symbol of P appears naturally in connection with the Fourier transform as follows. Let ƒ be a Schwartz function. Then by the inverse Fourier transform,
${\displaystyle Pf(x)={\frac {1}{(2\pi )^{d}}}\int _{\mathbf {R} ^{d}}e^{ix\cdot \xi }p(x,i\xi ){\hat {f}}(\xi )\,d\xi .}$
This exhibits P as a Fourier multiplier. A more general class of functions p(x,ξ) which satisfy at most polynomial growth conditions in ξ under which this integral is well-behaved comprises the pseudo-differential operators.
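As a concrete illustration (not in the original article), take the Laplacian ${\displaystyle P=\Delta =\sum _{j=1}^{d}\partial ^{2}/\partial x_{j}^{2}}$ on ${\displaystyle \mathbf {R} ^{d}}$. Its total symbol is ${\displaystyle p(x,\xi )=\sum _{j=1}^{d}\xi _{j}^{2}=|\xi |^{2}}$, which coincides with the principal symbol ${\displaystyle \sigma _{P}(\xi )=|\xi |^{2}}$ because there are no lower-order terms; in the Fourier formula above one substitutes ${\displaystyle p(x,i\xi )=-|\xi |^{2}}$, recovering ${\displaystyle {\widehat {\Delta f}}(\xi )=-|\xi |^{2}{\hat {f}}(\xi )}$. Since ${\displaystyle \sigma _{P}(\xi )}$ is nonzero for every ${\displaystyle \xi \neq 0}$, the Laplacian is elliptic.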
### Vector bundles
Let E and F be vector bundles over a closed manifold X, and suppose
${\displaystyle P:C^{\infty }(E)\to C^{\infty }(F)}$
is a differential operator of order ${\displaystyle k}$. In local coordinates on X, we have
${\displaystyle Pu(x)=\sum _{|\alpha |=k}P^{\alpha }(x){\frac {\partial ^{\alpha }u}{\partial x^{\alpha }}}+{\text{lower-order terms}}}$
where, for each multi-index α, ${\displaystyle P^{\alpha }(x):E\to F}$ is a bundle map, symmetric on the indices α.
The kth order coefficients of P transform as a symmetric tensor
${\displaystyle \sigma _{P}:S^{k}(T^{*}X)\otimes E\to F}$
from the tensor product of the kth symmetric power of the cotangent bundle of X with E to F. This symmetric tensor is known as the principal symbol (or just the symbol) of P.
The coordinate system xi permits a local trivialization of the cotangent bundle by the coordinate differentials dxi, which determine fiber coordinates ξi. In terms of a basis of frames eμ, fν of E and F, respectively, the differential operator P decomposes into components
${\displaystyle (Pu)_{\nu }=\sum _{\mu }P_{\nu \mu }u_{\mu }}$
on each section u of E. Here Pνμ is the scalar differential operator defined by
${\displaystyle P_{\nu \mu }=\sum _{\alpha }P_{\nu \mu }^{\alpha }{\frac {\partial }{\partial x^{\alpha }}}.}$
With this trivialization, the principal symbol can now be written
${\displaystyle (\sigma _{P}(\xi )u)_{\nu }=\sum _{|\alpha |=k}\sum _{\mu }P_{\nu \mu }^{\alpha }(x)\xi _{\alpha }u^{\mu }.}$
In the cotangent space over a fixed point x of X, the symbol ${\displaystyle \sigma _{P}}$ defines a homogeneous polynomial of degree k in ${\displaystyle T_{x}^{*}X}$ with values in ${\displaystyle \operatorname {Hom} (E_{x},F_{x})}$.
The differential operator ${\displaystyle P}$ is elliptic if its symbol is invertible; that is for each nonzero ${\displaystyle \theta \in T^{*}X}$ the bundle map ${\displaystyle \sigma _{P}(\theta ,\dots ,\theta )}$ is invertible. On a compact manifold, it follows from the elliptic theory that P is a Fredholm operator: it has finite-dimensional kernel and cokernel.
The page Site.Preferences contains customisable browser preference settings. These include access keys (keyboard shortcuts to certain actions like edit, history, browse) and settings of the EditForms (width and height of the edit textarea) as well as the name of the edit form in use.
A different page than Site.Preferences can be chosen by making a copy of that page under a new name, customising it, and setting a cookie which will point to this page for the browser being used, through
?setprefs=SomeGroup.CustomPreferences
SomeGroup.CustomPreferences being the name of the new customised preference page.
# About Access Keys
## Notes and Comments
Note that in order to enable parsing of Site.Preferences, a line like the following needs to be added to local/config.php:
XLPage('prefs', "Site.Preferences");
Which of the following inequalities specifies the shaded reg
SVP
Status: It's near - I can see.
Joined: 13 Apr 2013
Posts: 1687
Location: India
GPA: 3.01
WE: Engineering (Real Estate)
13 Mar 2018, 06:31
kaushiksamrat wrote:
Hi Bunnel.. my doubt is that looking at the picture -1 and 4 are included with bold line so can't the range be -1<=x <=4
Sent from my SM-G935F using GMAT Club Forum mobile app
Hi kaushiksamrat,
First off, it is important to note that we don't have any answer choice with a ≤ sign, so we need to choose from the options available in front. Second, when end-points are included in a range they are marked with a BIG DOT or DOUBLE CIRCLE, a sign that can be clearly seen by the test taker. Otherwise, treat the range as excluding the end-points, because GMAC will always be clear with diagrams.
Hope it will help.
QZ
_________________
"Do not watch clock; Do what it does. KEEP GOING."
Intern
Joined: 05 Dec 2016
Posts: 4
21 Mar 2018, 07:47
aeglorre wrote:
Bunuel wrote:
Not sure I understand what you mean. The shaded region gives $$-1<x<4$$ and option B also gives $$-1<x<4$$. Thus B is correct. No other option gives this range, no matter whether you include endpoints on the diagram in the inequality or not.
What I'm trying to say is that I interpret the shaded region as saying $$-1<=x<=4$$, since both -1 and 4 are "covered" by the shaded area (at least it seems so to me). Thus, x could be -1 or it could be 4. But $$-1<x<4$$ means that x cannot be either -1 or 4, and this is what confuses me. Either I am misinterpreting the shaded area, or I lack a fundamental understanding of inequalities.
Even I am having the same issue... I interpreted the inequality to be $$-1<=x<=4$$ and discarded all the options... Can someone please tell me why we are considering $$-1<x<4$$ and not $$-1<=x<=4$$
SVP
Status: It's near - I can see.
Joined: 13 Apr 2013
Posts: 1687
Location: India
GPA: 3.01
WE: Engineering (Real Estate)
21 Mar 2018, 08:56
1
chandu2016 wrote:
aeglorre wrote:
Bunuel wrote:
Not sure I understand what you mean. The shaded region gives $$-1<x<4$$ and option B also gives $$-1<x<4$$. Thus B is correct. No other option gives this range, no matter whether you include endpoints on the diagram in the inequality or not.
What I'm trying to say is that I interpret the shaded region as saying $$-1<=x<=4$$, since both -1 and 4 are "covered" by the shaded area (at least it seems so to me). Thus, x could be -1 or it could be 4. But $$-1<x<4$$ means that x cannot be either -1 or 4, and this is what confuses me. Either I am misinterpreting the shaded area, or I lack a fundamental understanding of inequalities.
Even I am having the same issue... I interpreted the inequality to be $$-1<=x<=4$$ and discarded all the options... Can someone please tell me why we are considering $$-1<x<4$$ and not $$-1<=x<=4$$
You can read the above thread for clarity. Also, it is not correct to discard all the answer choices as at least one answer choice has to be the answer. Extreme points are clearly marked, if they are inclusive in range.
_________________
"Do not watch clock; Do what it does. KEEP GOING."
# set the context to run dissolve_degenerate from the python shell
I've downloaded some 3d models of a city with a ton of topologic errors. I am trying to fix some of them by iterating through the objects and running dissolve_degenerate. But when calling the function I get a runtime error: RuntimeError: Operator bpy.ops.mesh.dissolve_degenerate.poll() failed, context is incorrect. I see this is a fairly common problem for Blender neophytes like me, but despite looking at several examples of "context incorrect" I can't figure out how to fix this problem. Can somebody tell me how to set the right context to call dissolve_degenerate from the Python shell?
import bpy  # bpy is pre-imported in Blender's console; the import is added so the script also runs standalone

# deselect all objects
bpy.ops.object.select_all(action='DESELECT')

for i, o in enumerate(bpy.data.objects):
    if o.type == 'MESH':
        o.select_set(True)
        bpy.ops.object.editmode_toggle()
        dim_min = min(o.dimensions)
        # the following line throws an error
        bpy.ops.mesh.dissolve_degenerate(threshold=dim_min/100)
        o.select_set(False)
        print(o.name)
        if i > 10:
            break
With bmesh
Started out as a comment on answer, ended up as another answer.
• Can do the above with bmesh without any need to select or switch mode.
• Loop over all objects in scene, find all the unique meshes, and list their objects.
• For each mesh, run the degenerate dissolve bmesh operator. The mesh dimension is gained from its bounding box (local coordinates), not its dimension, which is in object coordinates and involves scale. (Otherwise, if the scale is > 100, it would remove all geometry.)
Test script, run in object mode.
import bpy
from collections import defaultdict
from mathutils import Vector
import bmesh

context = bpy.context
scene = context.scene

# meshes in scene
def mesh_dim(ob):
    bbox = [Vector(b) for b in ob.bound_box]
    return (
        bbox[6] - bbox[0]
    )

meshes = defaultdict(list)
for o in scene.objects:
    if o.type != 'MESH':
        continue
    meshes[o.data].append(o)

bm = bmesh.new()
for m, obs in meshes.items():
    bm.from_mesh(m)
    bmesh.ops.dissolve_degenerate(
        bm,
        dist=min(mesh_dim(obs[0])) / 100,
        edges=bm.edges,
    )
    bm.to_mesh(m)
    m.update()
    bm.clear()
One solution is to put editmode_toggle twice in the loop; otherwise every other iteration ends up in object mode, which is incompatible with dissolve_degenerate.
# deselect all objects
bpy.ops.object.select_all(action='DESELECT')

for i, o in enumerate(bpy.data.objects):
    if o.type == 'MESH':
        o.select_set(True)
        bpy.ops.object.editmode_toggle()
        dim_min = min(o.dimensions)
        bpy.ops.mesh.dissolve_degenerate(threshold=dim_min/100)
        o.select_set(False)
        bpy.ops.object.editmode_toggle()
# Sentence Examples with the word Small intestine
The salts of sodium resemble potassium in their action on the alimentary tract, but they are much more slowly absorbed, and much less diffusible; therefore considerable amounts may reach the small intestine and there act as saline purgatives.
Internally they are found to consist of a lamina twisted upon itself, and externally they generally exhibit a tortuous structure, produced, before the cloaca was reached, by the spiral valve of a compressed small intestine (as in skates, sharks and dog-fishes); the surface shows also vascular impressions and corrugations due to the same cause.
Among mammals, o, oesophagus; st, stomach; p, pylorus; ss, small intestine (abbreviated); c, caecum; ll, large intestine colon, ending in r, the rectum.
With few exceptions tapeworms select the small intestine for their station, and in this situation execute active movements of extension and contraction.
American Institute of Mathematical Sciences
December 2017, 10(4): 957-976. doi: 10.3934/krm.2017038
Grossly determined solutions for a Boltzmann-like equation
Department of Mathematics, Bradley University, Bradley Hall 445, Peoria, IL 61625, USA
Received October 2015 Revised December 2016 Published March 2017
Fund Project: The author is supported by NSF grant DMS 08-38434 "EMSW21-MCTP: Research Experience for Graduate Students" and from the Caterpillar Fellowship Grant at Bradley University.
In gas dynamics, the connection between the continuum physics model offered by the Navier-Stokes equations and the heat equation and the molecular model offered by the kinetic theory of gases has been understood for some time, especially through the work of Chapman and Enskog, but it has never been established rigorously. This paper establishes a precise bridge between these two models for a simple linear Boltzmann-like equation. Specifically, a special class of solutions, the grossly determined solutions, of this kinetic model is shown to exist and satisfy closed-form balance equations representing a class of continuum model solutions.
Citation: Thomas Carty. Grossly determined solutions for a Boltzmann-like equation. Kinetic and Related Models, 2017, 10 (4) : 957-976. doi: 10.3934/krm.2017038
The graph of $|\boldsymbol{\xi}|=\Xi(c)$
[1] Marzia Bisi, Giampiero Spiga. A Boltzmann-type model for market economy and its continuous trading limit. Kinetic and Related Models, 2010, 3 (2) : 223-239. doi: 10.3934/krm.2010.3.223 [2] Carlo Brugna, Giuseppe Toscani. Boltzmann-type models for price formation in the presence of behavioral aspects. Networks and Heterogeneous Media, 2015, 10 (3) : 543-557. doi: 10.3934/nhm.2015.10.543 [3] Nadia Loy, Andrea Tosin. Boltzmann-type equations for multi-agent systems with label switching. Kinetic and Related Models, 2021, 14 (5) : 867-894. doi: 10.3934/krm.2021027 [4] Zhaohui Huo, Yoshinori Morimoto, Seiji Ukai, Tong Yang. Regularity of solutions for spatially homogeneous Boltzmann equation without angular cutoff. Kinetic and Related Models, 2008, 1 (3) : 453-489. doi: 10.3934/krm.2008.1.453 [5] Yoshinori Morimoto, Seiji Ukai, Chao-Jiang Xu, Tong Yang. Regularity of solutions to the spatially homogeneous Boltzmann equation without angular cutoff. Discrete and Continuous Dynamical Systems, 2009, 24 (1) : 187-212. doi: 10.3934/dcds.2009.24.187 [6] Seung-Yeal Ha, Mitsuru Yamazaki. $L^p$-stability estimates for the spatially inhomogeneous discrete velocity Boltzmann model. Discrete and Continuous Dynamical Systems - B, 2009, 11 (2) : 353-364. doi: 10.3934/dcdsb.2009.11.353 [7] Marcel Braukhoff. Semiconductor Boltzmann-Dirac-Benney equation with a BGK-type collision operator: Existence of solutions vs. ill-posedness. Kinetic and Related Models, 2019, 12 (2) : 445-482. doi: 10.3934/krm.2019019 [8] C. David Levermore, Weiran Sun. Compactness of the gain parts of the linearized Boltzmann operator with weakly cutoff kernels. Kinetic and Related Models, 2010, 3 (2) : 335-351. doi: 10.3934/krm.2010.3.335 [9] Miguel Escobedo, Minh-Binh Tran. Convergence to equilibrium of a linearized quantum Boltzmann equation for bosons at very low temperature. Kinetic and Related Models, 2015, 8 (3) : 493-531. doi: 10.3934/krm.2015.8.493 [10] Wei-Xi Li, Lvqiao Liu. Gelfand-Shilov smoothing effect for the spatially inhomogeneous Boltzmann equations without cut-off. Kinetic and Related Models, 2020, 13 (5) : 1029-1046. doi: 10.3934/krm.2020036 [11] Léo Glangetas, Hao-Guang Li, Chao-Jiang Xu. Sharp regularity properties for the non-cutoff spatially homogeneous Boltzmann equation. Kinetic and Related Models, 2016, 9 (2) : 299-371. doi: 10.3934/krm.2016.9.299 [12] Léo Glangetas, Mohamed Najeme. Analytical regularizing effect for the radial and spatially homogeneous Boltzmann equation. Kinetic and Related Models, 2013, 6 (2) : 407-427. doi: 10.3934/krm.2013.6.407 [13] Torsten Keßler, Sergej Rjasanow. Fully conservative spectral Galerkin–Petrov method for the inhomogeneous Boltzmann equation. Kinetic and Related Models, 2019, 12 (3) : 507-549. doi: 10.3934/krm.2019021 [14] Jingwei Hu, Shi Jin, Li Wang. An asymptotic-preserving scheme for the semiconductor Boltzmann equation with two-scale collisions: A splitting approach. Kinetic and Related Models, 2015, 8 (4) : 707-723. doi: 10.3934/krm.2015.8.707 [15] Andrea Bondesan, Laurent Boudin, Marc Briant, Bérénice Grec. Stability of the spectral gap for the Boltzmann multi-species operator linearized around non-equilibrium maxwell distributions. Communications on Pure and Applied Analysis, 2020, 19 (5) : 2549-2573. doi: 10.3934/cpaa.2020112 [16] Pierre Gervais. A spectral study of the linearized Boltzmann operator in $L^2$-spaces with polynomial and Gaussian weights. Kinetic and Related Models, 2021, 14 (4) : 725-747. 
doi: 10.3934/krm.2021022 [17] Leif Arkeryd, Raffaele Esposito, Rossana Marra, Anne Nouri. Exponential stability of the solutions to the Boltzmann equation for the Benard problem. Kinetic and Related Models, 2012, 5 (4) : 673-695. doi: 10.3934/krm.2012.5.673 [18] Seiji Ukai. Time-periodic solutions of the Boltzmann equation. Discrete and Continuous Dynamical Systems, 2006, 14 (3) : 579-596. doi: 10.3934/dcds.2006.14.579 [19] Radjesvarane Alexandre, Yoshinori Morimoto, Seiji Ukai, Chao-Jiang Xu, Tong Yang. Bounded solutions of the Boltzmann equation in the whole space. Kinetic and Related Models, 2011, 4 (1) : 17-40. doi: 10.3934/krm.2011.4.17 [20] Marco Cannone, Grzegorz Karch. On self-similar solutions to the homogeneous Boltzmann equation. Kinetic and Related Models, 2013, 6 (4) : 801-808. doi: 10.3934/krm.2013.6.801
2020 Impact Factor: 1.432
Metrics
• PDF downloads (161)
• HTML views (118)
• Cited by (1)
Other articlesby authors
• on AIMS
• on Google Scholar
[Back to Top] |
How to remove the section title in header using scrpage2
I would like to remove the section title (header) in the appendix:
\documentclass[headsepline, open=right,twoside=true, numbers=noenddot]{scrreprt}
\usepackage[english]{babel}
\usepackage{scrpage2}
\usepackage{blindtext}
\clearscrheadfoot
\ihead{\headmark}
\ohead[\pagemark]{\pagemark}
\automark[section]{chapter}
\pagestyle{scrheadings}
\renewcommand*{\chapterpagestyle}{empty}
\begin{document}
\begin{appendix}
\chapter{Appendix}
\section{Remove the section title}
\blindtext[12]
\end{appendix}
\end{document}
I'm looking for something like \renewcommand*{\sectionmarkformat}{} except I want to keep the number and remove the title.
2 Answers
If the KOMA-Script option appendixprefix is not used:
\newcommand*{\appendixmore}{%
\renewcommand\sectionmark[1]{%
\markright{\ifnumbered{section}{\thesection\autodot}{}}%
}}
Using \ifnumbered avoids a wrong number in the header if there is an unnumbered section.
Note \appendix is not an environment. The package scrpage2 is depreciated. Use scrlayer-scrpage instead.
\documentclass[headsepline,twoside, numbers=noenddot]{scrreprt}
\usepackage[english]{babel}
\usepackage{blindtext}
\usepackage{scrlayer-scrpage}
\clearpairofpagestyles
\ihead{\headmark}
\ohead[\pagemark]{\pagemark}
\automark[section]{chapter}
\renewcommand*{\chapterpagestyle}{empty}
\newcommand*{\appendixmore}{%
\renewcommand\sectionmark[1]{%
\markright{\ifnumbered{section}{\thesection\autodot}{}}%
}}
\begin{document}
\appendix
\chapter{Appendix}
\section{Remove the section title}
\blindtext[30]
\addsec{Unnumbered section}
\blindtext[12]
\end{document}
• Maybe my question was unclear. I would like to keep the section number. In my example I would like to keep the A.1 and just remove the Remove the section title. Dec 7 '14 at 21:02
• I have changed my answer.
– esdd
Dec 7 '14 at 22:19
• @esdd I am sorry, but how can I totally remove page headers using scrbook and scrlayer-scrpage?
– Diaa
Dec 8 '16 at 3:29
– esdd
Dec 8 '16 at 7:09
• @esdd I am sorry, but I meant for the whole document. I found that plain.scrheadings option of scrlayer-scrpage did what I want, am I right? or should it be done another way?
– Diaa
Dec 8 '16 at 8:31
I also suggest the use of scrlayer-scrpage instead of scrpage2....
About your question, you can issue
\lohead{\thesection\autodot}
just after
\chapter{Appendix}
to obtain what you want.
MWE
\documentclass[headsepline, open=right,twoside=true, numbers=noenddot]{scrreprt}
\usepackage[english]{babel}
\usepackage{blindtext}
\usepackage{scrlayer-scrpage}
\clearpairofpagestyles
\ihead{\headmark}
\ohead[\pagemark]{\pagemark}
\automark[section]{chapter}
\renewcommand*{\chapterpagestyle}{empty}
\begin{document}
\chapter{Test}
\section{Here I want the headers}
\blindtext[12]
\appendix
\chapter{Appendix}
\lohead{\thesection\autodot}
\section{Remove the section title}
\blindtext[12]
\end{document}
Output |
# Help!!!!!!!!!
#### robin
##### New member
if you flip a coin 100 times and then pick 4 colors...how many combinations do u get?HELP ME!
#### soroban
##### Elite Member
Hello, robin!
If you flip a coin 100 times and then pick 4 colors, how many combinations do u get?
When we flip the coin 100 times, are we counting the number of Heads and Tails (and their order)?
. . The number of outcomes is: .2[SUP]100[/SUP] . . . a 31-digit number.
When we pick 4 colors, from WHAT are we picking them?
. . If we're choosing from a box of 8 crayons, there are 70 possible choices.
. . From a box of 16 crayons, there are 1,820 possible choices.
Last edited:
#### robin
##### New member
wow.
wow.well i mean the total possabilities both togethercan u help me now
#### soroban
##### Elite Member
Don't you just love it?
me said:
When we flip the coin 100 times, are we counting the number of Heads and Tails (and their order)?
. . The number of outcomes is: .2[SUP]100[/SUP] . . . a 31-digit number.
I said that the first action was not clearly explained.
Taking a GUESS at what was intended, there is an incredibly large number of outcomes.
When we pick 4 colors, from WHAT are we picking them?
. . If we're choosing from a box of 8 crayons, there are 70 possible choices.
. . From a box of 16 crayons, there are 1,820 possible choices.
Then I illustrated that the second action is VERY vaguely stated,
. . and that we required serious clarification.
What was the OP's response? . "The total possibilities both together."
. . As if that was the part I didn't understand.
. . As if that clarifies the entire problem.
Pick the name of a month and pick a color.
How many combinations do you get?
Let's see . . .
. . January, red
. . January, orange
. . January, yellow
. . January, green
. . January, blue
. . January, indigo
. . January, violet
. . January, white
. . January, black
. . January, brown
. . January, pink
. . January, peach
. . January, mauve
. . January, puce
. . January, maroon
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
Do you think the OP will get my point now? |
Enables easy loading of sparse data matrices provided by alevin-fry USA mode.
loadFry(fryDir, outputFormat = "scRNA", nonzero = FALSE, quiet = FALSE)
## Arguments
fryDir
path to the output directory returned by alevin-fry quant command. This directory should contain a metainfo.json, and an alevin folder which contains quants_mat.mtx, quants_mat_cols.txt and quants_mat_rows.txt
outputFormat
can be either be a list that defines the desired format of the output SingleCellExperiment object or a string that represents one of the pre-defined output formats, which are "scRNA", "snRNA", "all", "scVelo", "velocity", "U+S+A" and "S+A". See details for the explanations of the pre-defined formats and how to define custom format.
nonzero
whether to filter cells with non-zero expression value across all genes (default FALSE). If TRUE, this will filter based on all assays. If a string vector of assay names, it will filter based on the matching assays in the vector. If not in USA mode, it must be TRUE/FALSE/counts.
quiet
logical whether to display no messages
## Value
A SingleCellExperiment object that contains one or more assays. Each assay consists of a gene by cell count matrix. The row names are feature names, and the column names are cell barcodes
## Details about loadFry
This function consumes the result folder returned by running alevin-fry quant in unspliced, spliced, ambiguous (USA) quantification mode, and returns a SingleCellExperiment object that contains a final count for each gene within each cell. In USA mode, alevin-fry quant returns a count matrix contains three types of count for each feature (gene) within each sample (cell or nucleus), which represent the spliced mRNA count of the gene (S), the unspliced mRNA count of the gene (U), and the count of UMIs whose splicing status is ambiguous for the gene (A). For each assay defined by outputFormat, these three counts of a gene within a cell will be summed to get the final count of the gene according to the rule defined in the outputFormat. The returned object will contains the desired assays defined by outputFormat, with rownames as the barcode of samples and colnames as the feature names.
## Details about the output format
The outputFormat argument takes either be a list that defines the desired format of the output SingleCellExperiment object or a string that represents one of the pre-defined output format.
Currently the pre-defined formats of the output SingleCellExperiment object are:
"scRNA":
This format is recommended for single cell experiments. It returns a counts assay that contains the S+A count of each gene in each cell, and a unspliced assay that contains the U count of each gene in each cell.
"snRNA", "all" and "U+S+A":
These three formats are the same. They return a counts assay that contains the U+S+A count of each gene in each cell without any extra layers. "snRNA" is recommended for single-nucleus RNA-sequencing experiments. "raw" is recommended for mimicing CellRanger 7's behavior, which returns this format for both single-cell and single-nucleus experiments.
"S+A":
This format returns a counts assay that contains the S+A count of each gene in each cell.
"raw":
This format puts the three kinds of counts into three separate assays, which are unspliced, spliced and ambiguous.
"velocity":
This format contains two assays. The spliced assay contains the S+A count of each gene in each cell. The unspliced assay contains the U counts of each gene in each cell.
"scVelo":
This format is for direct entry into velociraptor R package or other scVelo downstream analysis pipeline for velocity analysis in R with Bioconductor. It adds the expected "S"-pliced assay and removes errors for size factors being non-positive.
A custom output format can be defined using a list. Each element in the list defines an assay in the output SingleCellExperiment object. The name of an element in the list will be the name of the corresponding assay in the output object. Each element in the list should be defined as a vector that takes at least one of the three kinds of count, which are U, S and A. See the provided toy example for defining a custom output format.
## References
alevin-fry publication:
He, D., Zakeri, M., Sarkar, H. et al. "Alevin-fry unlocks rapid, accurate and memory-frugal quantification of single-cell RNA-seq data." Nature Methods 19, 316–322 (2022). https://doi.org/10.1038/s41592-022-01408-3
## Author
Dongze He, with contributions from Steve Lianoglou, Wes Wilson
## Examples
# Get path for minimal example avelin-fry output dir
testdat <- fishpond:::readExampleFryData("fry-usa-basic")
# This is exactly how the velocity format defined internally.
custom_velocity_format <- list("spliced"=c("S","A"), "unspliced"=c("U"))
# Load alevin-fry gene quantification in velocity format
sce <- loadFry(fryDir=testdat$parent_dir, outputFormat=custom_velocity_format) #> locating quant file #> Reading meta data #> USA mode: TRUE #> Processing 4 genes and 4 barcodes #> Using user-defined output assays #> Building the 'spliced' assay, which contains S A #> Building the 'unspliced' assay, which contains U #> Constructing output SingleCellExperiment object #> Done SummarizedExperiment::assayNames(sce) #> [1] "spliced" "unspliced" # Load the same data but use pre-defined, velociraptor R pckage desired format scvelo_format <- "scVelo" scev <- loadFry(fryDir=testdat$parent_dir, outputFormat=scvelo_format, nonzero=TRUE)
#> locating quant file
#> Reading meta data
#> USA mode: TRUE
#> Processing 4 genes and 4 barcodes
#> Using pre-defined output format: scvelo
#> Building the 'counts' assay, which contains S A
#> Building the 'spliced' assay, which contains S A
#> Building the 'unspliced' assay, which contains U
#> Constructing output SingleCellExperiment object
#> Done
SummarizedExperiment::assayNames(scev)
#> [1] "counts" "spliced" "unspliced" |
## Introduction
Multi-channel signal generation and detection are indispensable in many applications such as multi-qubit quantum computing and multi-sensor systems. Synchronization of signal channels comes to the equation when the temporal sequence of events is of interest, especially for fast physical phenomena with short lifetimes. Zurich Instruments offers the Multi-Device Synchronization (MDS) feature embedded in all its products to provide full synchronization in a scalable approach [1].
In this post, we learn how to setup a multi-channel lock-in measurement by synchronizing multiple MFLI Lock-in Amplifiers using the MDS toolbox provided with Zurich Instruments products [2]. MDS is a powerful tool to align the timestamp of several instruments enabling multi-channel signal generation and acquisition. In order to have a stable and synchronized timestamp among multiple instruments, it is essential to have the following two conditions satisfied:
• A common reference clock to provide identical clock speed for all the instruments.
• A common trigger signal to synchronize the starting point of device clocks.
The first criterion guarantees that all the instruments measure a time interval equally, while the second condition forces the devices to start from an identical timestamp. A device timestamp is a counter which starts counting with the device sampling rate when it is switched on. To keep the timestamp identical among all instruments, it is necessary to have an identical sampling rate for all the devices; therefore, MDS only works for instruments of the same kind unless their counting rate is adjusted to be equal.
## Cabling
MDS requires two levels of synchronization, i.e. reference clock and timestamp. Reference clock synchronization is achieved by connecting the 10-MHz clock output of each instrument to the 10-MHz clock input of another device in a series fashion starting from the master instrument and ending with the last slave device. For timestamp synchronization, trigger output of the master device must be distributed to all the instruments in a parallel (star) mode. All the cables should be coaxial with BNC connectors and it is necessary to have equal-length cables for trigger signals. If you have $$n$$ instruments one as master and the others as slaves, how many pieces of cable are required to fully attach all the devices and make them MDS-ready?
• According to the series configuration of reference clocks, $$n-1$$ cables are necessary to have all the reference clocks connected.
• According to the star configuration of trigger signals, at least $$n+1$$ cables are needed to distribute the master trigger to all the slaves.
It should be noted that for trigger distribution, one needs to have a ($$1\times n$$)-signal splitter and it can be easily made by some T-connectors if a commercial splitter is not available. In this blog, we use 4 MFLI lock-in amplifiers and thus we need 3 cables for reference clocks and 5 cables for trigger signals. Using 3 T-connectors we can make a 1-to-4 signal splitter as shown in the following figure.
Fig. 1. Cabling for the trigger signals connection of master and slave devices in a setup of 4 MFLI instruments. The trigger output of master is distributed to the trigger input of master and slaves with equal-length cables. The 1-to-4 splitter is simply made by attaching 3 T-connectors together.
Using the above cable set for the trigger signals and also a series cabling set for clock signals, we can connect 4 MFLI instruments to prepare them for synchronization by MDS. The following figure shows the rear panel of all the 4 instruments stacked on top of each other. The cables connect trigger and clock signals according to the star and series configurations, respectively. Also the power cables and USB/Ethernet cables are shown in the figure. Please note that both USB and Ethernet can be used to connect the instruments to the host PC and it is not required to have the same connection type for all the devices.
Fig. 2. Rear panel of 4 MFLI instruments (left) connected according to the MDS requirements shown in the scheme (right). The BNC cables show the trigger ports connected in a star configuration and the clock ports connected in a series fashion. The instrument on top is master and the rest are slaves. Click on the image to enlarge.
For MFLI/MFIA instruments, all the MDS cabling is in the rear panel; therefore, all the ports in the front panel are freely available for signal and reference generation and detection.
## Connectivity
All the Zurich Instruments products are controlled via the LabOne software which provides a web-based user interface (UI) as well as application program interfaces (API) for MATLAB, Python, LabVIEW, .NET and C. The MFLI/MFIA device can run the LabOne software on its embedded processor which means there is no need to install the software on the host computer, but by running the LabOne software on the device, synchronization of multiple instruments is NOT possible. This is because the controlling software of each device is independent from the others and they do not communicate with each other. Therefore, it is essential to install the LabOne software on the host PC. The latest version of LabOne software is available here in our download center.
After installing and running LabOne, if you have all the instruments connected to your computer directly or via network, you should be able to see them all in the landing page of LabOne as shown in the figure below. In this case, there are 4 MFLI devices connected to the host computer.
Fig. 3. Landing page of LabOne in the Basic view showing 4 MFLI instruments with their serial number DEVXXXX connected to the host computer.
The LabOne software includes a data server to communicate with the instruments. Moreover, there is a data server living in each MFLI device. In order to be able to synchronize multiple devices, it is necessary to have one data server communicating with all the instruments and thus, it must be the data server running on the host computer. By default, the LabOne software connects to the data server living in the device; therefore, it is required to change the settings before connecting to the instruments. Fig. 4 shows the landing page of LabOne in the “Advanced” mode. From this page, we can change the data server to the local one which has a local host IP of 127.0.0.1 with port 8004.
Fig. 4. Landing page of LabOne in the Advanced view showing the relevant settings to connect to the local host’s data server.
After connecting to the local data server by clicking on the “Connect” button in Fig. 4, the enable button “En” for all the devices will be activated so that by clicking on each button, the corresponding device is connected to the host as highlighted by red boxes in the following figure.
Fig. 5. After connecting to the local data server, the En button is ready to connect to the corresponding instrument.
Having connected to all the instruments as shown in the above figure, one can open a single session of LabOne by double-clicking on one of the devices. All the device-related tabs in the user interface such as Lock-in, Aux, Device, etc. must have a small blue pop-up menu on top which includes all the devices connected to the local data server. This is highlighted in Fig. 6 by a red box showing all the 4 instruments in the pop-up menu of the lock-in tab.
Fig. 6. Single session of LabOne UI showing 4 devices in a pop-up menu next to the title of device-related tabs. Click on the image to enlarge.
It should be noted that if there is no pop-up menu indicating more than one instrument, then there is something missing with the proper connection to the devices.
## Multi-Device Synchronization
Once all the instruments are properly connected to the local data server on the host computer and the LabOne user interface shows them, we can open the MDS tab which lists all the connected devices as shown in the left side of Fig. 7. The devices must selected in a proper order to have first the master and then the slaves according to the sequence of their clock connection. In order to distinguish the instruments, the “Locate” button can be pressed to make the power LED of the corresponding device blink. After selecting the device in a proper order, we press the “Start/Stop Sync” button to start the synchronization process as shown below.
Fig. 7. MDS tab of LabOne UI showing all the connected instruments and the status of synchronization. Click on the image to enlarge.
When the process is finished, a green flag in the status section indicates a successful synchronization as depicted in the above figure. In case the process is unsuccessful, the flag will be red and thus we need to recheck the cables, clock and trigger signals.
When the instruments are synchronized, all the slave devices receive an external clock from another instrument; only the master device uses its own internal 10 MHz clock. Therefore, all the slaves should be able to lock to an external clock source. To check that, you can go to the Device tab of LabOne and change the clock source from “Internal” to “Clk 10 MHz” as depicted in the following figure. If a proper external clock is connected to the clock input of the device, the status remains at ‘Clk 10 MHz’; otherwise, it jumps back to the ‘Internal’ mode and thus you need to check the provided clock signal.
Fig. 8. Clock settings in the Device tab of LabOne for slave devices indicating an external source for 10 MHz clock.
In addition to the 10 MHz clock, a proper trigger signaling is required for synchronization. To check the trigger signals, you should open the DIO tab in LabOne and see a proper trigger acquisition at “Trigger In 1” similar to the figure below. The two moving green flags show the low and high levels of the trigger signal coming from “Trigger Out 1” of the master instrument. In case, “Trigger In 1” does not show blinking flags, you need to check your cabling and/or modify the threshold level of trigger detection in the DIO tab.
Fig. 9. DIO tab of master device showing the trigger signals at Trigger Out 1 and Trigger In 1.
It should be noted that only the master device generates the “MDS Sync Out” at its trigger output as shown in Fig. 9, while all the instruments receive trigger signals at their trigger input similar to the above figure.
## Multi-Channel Measurements
After synchronizing all the instruments using the MDS tool, they can be treated like a single device with multiple input/output channels. A single session of LabOne user interface can control all the instruments and acquire and process data from the synchronized devices simultaneously. This is possible thanks to the clock and timestamp synchronizations which provide the same understanding of time for all the instruments. Using the 4 synchronized MFLI devices in this blog, we can simply perform a 4-channel measurement on a 4-port network. The master MFLI drives one port of the network connected to its Signal Output via a directional coupler [3] to be able to measure the reflection from the driven port. The reflection from the driven port is measured by Signal Input of the master device. All the other 3 ports of the network are connected to the Signal Input of the slave instruments. Fig. 10 shows the temporal measurement of all the 4 ports of the network using 4 synchronized instruments. Any change in the response of the network can be monitored synchronously at all its ports as shown in the following figure.
Fig. 10. Plotter tool of LabOne UI showing 4 signals from 4 different devices each measuring one port of a 4-port electrical network. Click on the image to enlarge.
In order to add signals from various devices to the vertical axis group of tools such as plotter, we use the button highlighted in the above figure to open the signal selection window which includes signals from all the instruments.
Besides time-domain measurements, it is possible to carry out multi-channel measurements in the frequency domain. To do so, not only the frequency of the master’s numerical oscillator sweeps to excite and measure one port of the network, but also the numerical oscillators on all the slave devices sweep in parallel with the master’s to measure all other ports of the network. Fig. 11 demonstrates how the sweeper module of LabOne can acquire the spectral response of all the 4 ports of the network measured by 4 instruments simultaneously.
Fig. 11. Sweeper module of LabOne UI depicting 4 simultaneous measurements of the 4-port network by sweeping the frequency of 4 synchronized MFLI devices. Click on the image to enlarge.
It is worth noting that by sweeping the frequency of numerical oscillators in different instruments, we change the initial phase of each oscillator to a random number. Therefore, this method is only suitable to measure the amplitude response and not the phase response versus frequency. For a proper phase measurement, we need to synchronize all the numerical oscillators in different devices for each frequency point. This can be done by external reference or the ‘Osc Phase Sync’ button in the MDS tab shown in Fig. 7.
## Conclusion
Multiple instruments can be synchronized by the MDS toolbox to convert them to one multi-channel device capable of generating and measuring several signal ports simultaneously. MDS synchronizes the instruments at two levels: clock and timestamp. This helps the user to carry out time- and frequency-domain measurements using multiple instruments while a single user interface or API session controls the entire instrument assembly. As a result, one can save a lot of time while characterizing multi-port electrical networks by simultaneous measurement of multi-channel signals.
## References
1. Zurich Instruments: “Multi-Device Synchronization (MDS).”
2. Zurich Instruments: “MFLI Lock-in Amplifiers.”
3. Mini-Circuits: “ZFDC-20-5+ Directional Coupler.” |
## Thinking Mathematically (6th Edition)
Published by Pearson
# Chapter 11 - Counting Methods and Probability Theory - 11.3 Combinations - Exercise Set 11.3: 45
#### Answer
$1140$ ways
#### Work Step by Step
The order in which the members are chosen is not important Combinations, ${}_{n}C_{r}=\displaystyle \frac{n!}{(n-r)!r!}$ ${}_{20}C_{3}=\displaystyle \frac{20!}{(20-3)!3!}=\frac{20\times 19\times 18}{1\times 2\times 3}$ $=20\times 19\times 3=1140$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. |
# Publications
2017
Stern, Brian, Xingchen Ji, Avik Dutt, and Michal Lipson. “Compact narrow-linewidth integrated laser based on a low-loss silicon nitride ring resonator.” Optics Letters 42, no. 21 (2017): 4541-4544. Publisher's Version Abstract
We design and demonstrate a compact, narrow-linewidth integrated laser based on low-loss silicon nitride waveguides coupled to a III-V gain chip. By using a highly confined optical mode, we simultaneously achieve compact bends and ultra-low loss. We leverage the narrowband backreflection of a high-Q microring resonator to act as a cavity output mirror, a single-mode filter, and a propagation delay all in one. This configuration allows the ring to provide feedback and obtain a laser linewidth of 13 kHz with 1.7 mW output power around 1550 nm. This demonstration realizes a compact sub-millimeter silicon nitride laser cavity with a narrow linewidth.
Ji, Xingchen, Felippe AS Barbosa, Samantha P Roberts, Avik Dutt, Jaime Cardenas, Yoshitomo Okawachi, Alex Bryant, Alexander L Gaeta, and Michal Lipson. “Ultra-low-loss on-chip resonators with sub-milliwatt parametric oscillation threshold.” Optica 4, no. 6 (2017): 619. Publisher's Version Abstract
On-chip optical resonators have the promise of revolutionizing numerous fields including metrology and sensing; however, their optical losses have always lagged behind their larger discrete resonator counterparts based on crystalline materials and flowable glass. Silicon nitride (Si3N4) ring resonators open up capabilities for optical routing, frequency comb generation, optical clocks and high precision sensing on an integrated platform. However, simultaneously achieving high quality factor and high confinement in Si3N4 (critical for nonlinear processes for example) remains a challenge. Here, we show that addressing surface roughness enables us to overcome the loss limitations and achieve high-confinement, on-chip ring resonators with a quality factor (Q) of 37 million for a ring with 2.5 {\mu}m width and 67 million for a ring with 10 {\mu}m width. We show a clear systematic path for achieving these high quality factors. Furthermore, we extract the loss limited by the material absorption in our films to be 0.13 dB/m, which corresponds to an absorption limited Q of at least 170 million by comparing two resonators with different degrees of confinement. Our work provides a chip-scale platform for applications such as ultra-low power frequency comb generation, high precision sensing, laser stabilization and sideband resolved optomechanics.
Mohanty, Aseema, Mian Zhang, Avik Dutt, Sven Ramelow, Paulo Nussenzveig, and Michal Lipson. “Quantum Interference between Transverse Spatial Waveguide Modes.” Nature Communications 8 (2017): 14010. Publisher's Version Abstract
Integrated quantum optics has the potential to markedly reduce the footprint and resource requirements of quantum information processing systems, but its practical implementation demands broader utilization of the available degrees of freedom within the optical field. To date, integrated photonic quantum systems have primarily relied on path encoding. However, in the classical regime, the transverse spatial modes of a multi-mode waveguide have been easily manipulated using the waveguide geometry to densely encode information. Here, we demonstrate quantum interference between the transverse spatial modes within a single multi-mode waveguide using quantum circuit-building blocks. This work shows that spatial modes can be controlled to an unprecedented level and have the potential to enable practical and robust quantum information processing.
2016
Dutt, Avik, Chaitanya Joshi, Xingchen Ji, Jaime Cardenas, Yoshitomo Okawachi, Kevin Luke, Alexander L. Gaeta, and Michal Lipson. “On-chip dual comb source for spectroscopy.” arXiv:1611.07673 [physics] (2016). Publisher's Version Abstract
Dual-comb spectroscopy is a powerful technique for real-time, broadband optical sampling of molecular spectra which requires no moving components. Recent developments with microresonator-based platforms have enabled frequency combs at the chip scale. However, the need to precisely match the resonance wavelengths of distinct high-quality-factor microcavities has hindered the development of an on-chip dual comb source. Here, we report the first simultaneous generation of two microresonator combs on the same chip from a single laser. The combs span a broad bandwidth of 51 THz around a wavelength of 1.56 \$\textbackslashmu\$m. We demonstrate low-noise operation of both frequency combs by deterministically tuning into soliton mode-locked states using integrated microheaters, resulting in narrow (\$\textless\$ 10 kHz) microwave beatnotes. We further use one mode-locked comb as a reference to probe the formation dynamics of the other comb, thus introducing a technique to investigate comb evolution without auxiliary lasers or microwave oscillators. We demonstrate broadband high-SNR absorption spectroscopy of dichloromethane spanning 170 nm using the dual comb source over a 20 \$\textbackslashmu\$s acquisition time. Our device paves the way for compact and robust dual-comb spectrometers at nanosecond timescales.
Dutt, Avik, Steven Miller, Kevin Luke, Jaime Cardenas, Alexander L. Gaeta, Paulo Nussenzveig, and Michal Lipson. “Tunable Squeezing Using Coupled Ring Resonators on a Silicon Nitride Chip.” Opt. Lett. 41 (2016): 223. Publisher's Version Abstract
We demonstrate continuous tuning of the squeezing-level generated in a double-ring optical parametric oscillator by externally controlling the coupling condition using electrically controlled integrated microheaters. We accomplish this by utilizing the avoided crossing exhibited by a pair of coupled silicon nitride microring resonators. We directly detect a change in the squeezing level from 0.5 dB in the undercoupled regime to 2 dB in the overcoupled regime, which corresponds to a change in the generated on-chip squeezing factor from 0.9 to 3.9 dB. Such wide tunability in the squeezing level can be harnessed for on-chip quantum-enhanced sensing protocols that require an optimal degree of squeezing.
2015
Dutt, Avik, Kevin Luke, Sasikanth Manipatruni, Alexander L. Gaeta, Paulo Nussenzveig, and Michal Lipson. “On-Chip Optical Squeezing.” Physical Review Applied 3 (2015): 044005. Publisher's Version Abstract
We report the observation of all-optical squeezing in an on-chip monolithically integrated CMOScompatible platform. Our device consists of a low-loss silicon nitride microring optical parametric oscillator (OPO) with a gigahertz cavity linewidth. We measure 1.7 dB (5 dB corrected for losses) of subshot-noise quantum correlations between bright twin beams generated in the microring four-wave-mixing OPO pumped above threshold. This experiment demonstrates a compact, robust, and scalable platform for quantum-optics and quantum-information experiments on chip.
Cardenas, Jaime, Mengjie Yu, Yoshitomo Okawachi, Carl B. Poitras, Ryan K. W. Lau, Avik Dutt, Alexander L. Gaeta, and Michal Lipson. “Optical nonlinearities in high-confinement silicon carbide waveguides.” Optics Letters 40 (2015): 4138-4141. Publisher's Version Abstract
We demonstrate strong nonlinearities of n(2) = 8.6 +/- 1.1 x 10(-15) cm(2) W-1 in single-crystal silicon carbide (SiC) at a wavelength of 2360 nm. We use a high-confinement SiC waveguide fabricated based on a high-temperature smart-cut process. (C) 2015 Optical Society of America
Miller, Steven A., Yoshitomo Okawachi, Sven Ramelow, Kevin Luke, Avik Dutt, Alessandro Farsi, Alexander L. Gaeta, and Michal Lipson. “Tunable frequency combs based on dual microring resonators.” Optics Express 23 (2015): 21527-21540. Publisher's Version Abstract
In order to achieve efficient parametric frequency comb generation in microresonators, external control of coupling between the cavity and the bus waveguide is necessary. However, for passive monolithically integrated structures, the coupling gap is fixed and cannot be externally controlled, making tuning the coupling inherently challenging. We design a dual-cavity coupled microresonator structure in which tuning one ring resonance frequency induces a change in the overall cavity coupling condition. We demonstrate wide extinction tunability with high efficiency by engineering the ring coupling conditions. Additionally, we note a distinct dispersion tunability resulting from coupling two cavities of slightly different path lengths, and present a new method of modal dispersion engineering. Our fabricated devices consist of two coupled high quality factor silicon nitride microresonators, where the extinction ratio of the resonances can be controlled using integrated microheaters. Using this extinction tunability, we optimize comb generation efficiency as well as provide tunability for avoiding higher-order mode-crossings, known for degrading comb generation. The device is able to provide a 110-fold improvement in the comb generation efficiency. Finally, we demonstrate open eye diagrams using low-noise phase-locked comb lines as a wavelength-division multiplexing channel. (C) 2015 Optical Society of America
2013
Luke, Kevin, Avik Dutt, Carl B. Poitras, and Michal Lipson. “Overcoming Si3N4 film stress limitations for high quality factor ring resonators.” Optics Express 21 (2013): 22829-22833. Abstract
Silicon nitride (Si3N4) ring resonators are critical for a variety of photonic devices. However the intrinsically high film stress of silicon nitride has limited both the optical confinement and quality factor (Q) of ring resonators. We show that stress in Si3N4 films can be overcome by introducing mechanical trenches for isolating photonic devices from propagating cracks. We demonstrate a Si3N4 ring resonator with an intrinsic quality factor of 7 million, corresponding to a propagation loss of 4.2 dB/m. This is the highest quality factor reported to date for high confinement Si3N4 ring resonators in the 1550 nm wavelength range. (c) 2013 Optical Society of America |
# Irreducible Representation and IR stretching bands
After you find the irreducible representation of a molecule, how do you determine the number of IR active bands?
For example, in methane, the pointgroup is Td , then you found that the irreducible representation is A1 + T2.
How do you find the number of IR active and Raman active bands?
Thanks
Btw, a friend of mine said there are 2 IR active bands but I am not sure how to figure that out. Also, I know that T is triple degenerate and A is nondegenerate. But not sure how that relates to the number of IR active bands. |
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 01 Jul 2016, 07:03
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# M10 Q#14
Author Message
Intern
Joined: 27 Jul 2010
Posts: 24
Followers: 1
Kudos [?]: 14 [0], given: 37
### Show Tags
29 Aug 2010, 09:55
Set consists of all prime integers less than 10. If a number is selected from set at random and then another number, not necessarily different, is selected from set at random, what is the probability that the sum of these numbers is odd?
1/8
1/6
3/8
1/2
5/8
(C) 2008 GMAT Club - m10#14
------------------
For Ans: pls refer - probability-69758.html
The OC says - the total probability of getting an ODD sum =
Prob( picking up first number as 2 & 2nd number as ODD) + Prob( picking up first number as ODD & 2nd number as 2 )
I dont quite understand the logic here. For the actual operation - SUM , the order of picking up the numbers doesnot matter.
Should the ans then be 3/16 ?!
SVP
Joined: 30 Apr 2008
Posts: 1888
Location: Oklahoma City
Schools: Hard Knocks
Followers: 39
Kudos [?]: 530 [1] , given: 32
### Show Tags
29 Aug 2010, 20:36
1
KUDOS
As you pick the first number, it doesn't really matter which one you pick. The only thing the first number changes is what options you have when you pick the second number.
Two Possible Situations: 1st: Pick 2 and then forst 2nd you'd have to pick one of the odd numbers in order to get EVEN + ODD = ODD.
So, Chance of picking 2 on the first try? 1 out of 4 = $$\frac{1}{4}$$
Now on the second pick, we must have one of the odd numbers 3, 5, or 7, so 3 out of 4, or $$\frac{3}{4}$$. In order to figure out the chance of getting an odd when we pick 2 first, we multiply 1/4 by 3/4 = 3/16.
But now we have another possibility.
If on the first pick we get an odd number, then we know for the second pick, we need 2 in order to have ODD + EVEN = ODD.
So chance to pick an odd first time around = 3/4. Chance to pick number 2 on the second pick is 1/4. So again we have 3/4 * 1/4 = 3/16. When either situation gives us the desired result, we add the results. so 3/16 + 3/16 = 6/16...reduce to 3/8 for the final answer.
wininblue wrote:
Sorry, I Still don't get it.
Since for the sum to be Odd, one number has to be even, in this case it is 2.
How can the order still matter in this case?
Posted from my mobile device
_________________
------------------------------------
J Allen Morris
**I'm pretty sure I'm right, but then again, I'm just a guy with his head up his a.
GMAT Club Premium Membership - big benefits and savings
Manager
Joined: 25 Jun 2010
Posts: 91
Followers: 1
Kudos [?]: 31 [0], given: 0
### Show Tags
29 Aug 2010, 10:22
The numbers in the set are : 2,3,5,7
The order matters, because 2 is the only even prime number.
Suppose, you have drawn 2 in the first attempt, now you have to get any odd number to make sure the sum is odd.
Similarly, if you draw any odd number in the first draw, you have to draw 2 to make the sum odd.
Thanks
Posted from my mobile device
Intern
Joined: 27 Jul 2010
Posts: 24
Followers: 1
Kudos [?]: 14 [0], given: 37
### Show Tags
29 Aug 2010, 10:34
Sorry, I Still don't get it.
Since for the sum to be Odd, one number has to be even, in this case it is 2.
How can the order still matter in this case?
Posted from my mobile device
Manager
Joined: 16 Feb 2010
Posts: 224
Followers: 2
Kudos [?]: 218 [0], given: 16
### Show Tags
29 Aug 2010, 18:32
wininblue wrote:
Set consists of all prime integers less than 10. If a number is selected from set at random and then another number, not necessarily different, is selected from set at random, what is the probability that the sum of these numbers is odd?
1/8
1/6
3/8
1/2
5/8
set is 2,3,5,7
P(sum odd)= P(pick 2) and P(pick Other than 2) OR P(pick other than 2) and P(pick 2)
$$P(SumOdd) = \frac{1}{4}*\frac{3}{4} + \frac{3}{4}*\frac{1}{4}$$
therefore 3/16 + 3/16 = 6/16 = 3/8
hope it helps
Manager
Joined: 09 Jun 2010
Posts: 116
Followers: 2
Kudos [?]: 88 [0], given: 1
### Show Tags
30 Aug 2010, 09:01
Set is : 2,3,5,7
Now: total number of selections = 4X4 [4 ways for the 1st pick & 4 ways for 2nd pick] = 16 = total probable outcomes
total number of favorable outcomes i.e odd sum = selecting 2 in 1st pick and 3or5or7 in 2nd pick. + selecting 3or 5or 7 in 1st pick and 2 in second pick = 1C1X3C1+3C1X1C1 = 3+3 =6
=> P(odd) = 6/16 = 3/8.
D.
Manager
Joined: 13 May 2010
Posts: 124
Followers: 0
Kudos [?]: 10 [0], given: 4
### Show Tags
05 Jul 2012, 04:04
question stem says prime integers .....wouldn't we consider -2, -3, -5, -7 as well?
Math Expert
Joined: 02 Sep 2009
Posts: 33599
Followers: 5954
Kudos [?]: 73965 [0], given: 9906
### Show Tags
05 Jul 2012, 04:07
Expert's post
teal wrote:
question stem says prime integers .....wouldn't we consider -2, -3, -5, -7 as well?
Only positive numbers can be primes.
For more check Number Theory chapter of Math Book: math-number-theory-88376.html
Hope it helps.
_________________
Manager
Joined: 13 May 2010
Posts: 124
Followers: 0
Kudos [?]: 10 [0], given: 4
### Show Tags
05 Jul 2012, 05:14
Hi,
I understand that prime numbers are only positive numbers starting smallest prime = 2 and onwards......but the confusion was because the question stem said prime integers rather than prime numbers?
Math Expert
Joined: 02 Sep 2009
Posts: 33599
Followers: 5954
Kudos [?]: 73965 [0], given: 9906
### Show Tags
05 Jul 2012, 05:16
Expert's post
teal wrote:
Hi,
I understand that prime numbers are only positive numbers starting smallest prime = 2 and onwards......but the confusion was because the question stem said prime integers rather than prime numbers?
There is no difference between "prime number" and "prime integer".
_________________
Senior Manager
Status: Juggg..Jugggg Go!
Joined: 11 May 2012
Posts: 254
Location: India
GC Meter: A.W.E.S.O.M.E
Concentration: Entrepreneurship, General Management
GMAT 1: 620 Q46 V30
GMAT 2: 720 Q50 V38
Followers: 5
Kudos [?]: 37 [0], given: 239
### Show Tags
06 Jul 2012, 00:48
Prime # <10 2,3,5,7
Probablity of sum being odd=1-probability of sum being even
[Lets remember its with replacement type of question]
Prob. of sum being even= 3*3/4*4 [choosing two numbers from 3,5,7 with replacement)+1/16 (choosing two 2's)=10/16=5/8
prob. of sum being odd = 1-5/8=3/8
_________________
You haven't failed, if you haven't given up!
---
Check out my other posts:
Bschool Deadlines 2013-2014 | Bschool Admission Events 2013 Start your GMAT Prep with Stacey Koprince | Get a head start in MBA finance
Re: M10 Q#14 [#permalink] 06 Jul 2012, 00:48
Similar topics Replies Last post
Similar
Topics:
m09q14 1 12 Jul 2011, 09:45
3 m25 q14 3 13 May 2010, 12:29
M23 Q14 1 23 Feb 2010, 12:57
m06 q14 3 04 Jan 2010, 20:34
8 M03 Q14 22 08 Nov 2008, 09:28
Display posts from previous: Sort by
# M10 Q#14
Moderator: Bunuel
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. |
# Method of moments and MLE estimates for Lomax (Pareto Type 2)
I have this dataset, on which I am supposed to fit Lomax distribution with MM and MLE. Lomax pdf is: $$f(x|\alpha, \lambda) = \frac{\alpha\lambda^\alpha}{\left(\lambda+x\right)^{\alpha+1}}$$ For MM, it is possible to show that: $$\hat{\alpha}=\frac{2\hat{\sigma}^2}{\hat{\sigma}^2-\bar{X}^2}$$ $$\hat{\lambda}= \bar{X}\frac{\hat{\sigma}^2+\bar{X}^2}{\hat{\sigma}^2-\bar{X}^2}$$ Where $$\hat{\sigma}^2$$ is the sample variance and $$\bar{X}$$ is sample mean. The estimates are:
df <- df$V1 s <- var(df) m <- mean(df) alpha <- (2*s)/(s-m^2) lambda <- m*((s+m^2)/(s-m^2)) > alpha [1] 2.70862 > lambda [1] 3451.911 For MLE, I have log-likelihood function: $$\ell(\alpha, \lambda|x) = n\log(\alpha)+\alpha n\log(\lambda) - (\alpha+1)\sum_{i=1}^{n}\log(\lambda+x_i)$$ and the implementation: llik <- function(alpha, lambda,x){ n<-length(x) res <- n*log(alpha)+n*alpha*log(lambda)-(alpha+1)*sum(log(x+lambda)) return(-res) } mle1 <- mle(minuslogl = llik, start = list(alpha=alpha,lambda=lambda), fixed = list(x=df), method = 'BFGS') > mle1@coef alpha lambda 2.860708 3451.907162 I used as starting values the MM estimates. The resulting coefficients are quite similar to MM, however after using flomax() function from Renext package, I am getting completely different estimates, with higher likelihood: > flomax(df)$estimate
shape scale
1.880468 1872.132104
I have also done some simulations, in which both MM and MLE are really bad at estimating the 'real' parameters of Lomax. Why are these estimates this bad? Why is in my case MM so different from MLE? Why is mle() so sensitive to starting values?
Thank you for help!
• Have you tried calculating the values of the log likelihood function for the different parameter estimates? That might give you some insight into what is happening. – jbowman Apr 12 at 15:09
• yes, for the mle function the log-likelihood is -1013, for flomax it is -1012, so flomax does better. – PK1998 Apr 12 at 15:14
• My guess is that the LF is very flat for a large region around the optimum, so it's easy for optimization functions to find no significant improvement from a step and just stop. You might want to try to plot a 3D wireframe or some such for the log LF as a function of shape and scale just to see. – jbowman Apr 12 at 15:18
• Also note that $\lambda$ is a scale parameter; if you divide all your data by $1000$, it'll put the resultant scale parameter at about the same magnitude as the shape parameter, which difference may also be causing problems. – jbowman Apr 12 at 15:19
• The rescaling did the trick for me, BTW. – jbowman Apr 12 at 15:39
The issue appears to be the greatly different scales of the two parameters and how that interacts with BFGS. When I try optim using BFGS on the raw data, I get similar results to mle above (not surprisingly):
x <- df / 1000
llik <- function(theta, x){
alpha <- theta[1]
lambda <- theta[2]
n<-length(x)
res <- n*log(alpha)+n*alpha*log(lambda)-(alpha+1)*sum(log(x+lambda))
return(-res)
}
alpha <- 2.7
lambda <- 3450
mle1 <- optim(c(alpha, lambda), llik, method="BFGS", x = 1000*x)
mle1$par [1] 2.859574 3449.996428 But working with the rescaled data: alpha <- 2.7 lambda <- 3.450 mle1 <- optim(c(alpha, lambda), llik, method="BFGS", x = x) mle1$par
[1] 1.880470 1.872135
llik(c(mle1$$par[1], 1000*mle1$$par[2]), 1000*x)
[1] 1012.211
Using a different technique (Nelder-Mead) on the original data gives good results, although we really ought to rewrite the log likelihood function so as to not fail when negative values of the two parameters are passed:
alpha <- 2.7
lambda <- 3450
mle1 <- optim(c(alpha, lambda), llik, method="Nelder-Mead", x = 1000*x)
Warning messages:
1: In log(alpha) : NaNs produced
2: In log(alpha) : NaNs produced
3: In log(alpha) : NaNs produced
4: In log(alpha) : NaNs produced
5: In log(alpha) : NaNs produced
6: In log(alpha) : NaNs produced
7: In log(alpha) : NaNs produced
mle1\$par
[1] 1.879401 1870.984994
• Thanks, that's nice! Do you know why the Lomax estimates are so far from each other? Even when you do a simulation, the bootstrapped standard errors are huge both for MLE and MM – PK1998 Apr 12 at 17:24
• Those are two different, but related, questions. The first has to do with the internals of the BFGS algorithm, the different scales of the parameters, and the fact that the likelihood function is quite flat over a large range of parameter values, the second is due just to the flatness of the LF. I may expand on my answer later... – jbowman Apr 12 at 19:08
• Through its control argument, the stats::optim function allows one to give a parscale vector (as an element of the list passed) which scales the parameters. So using c(1, 1000) in parscale does the job for your example. This can be used in MLE as soon as a parameter is a scale parameter, and the scaling of the data is avoided. This is a very nice feature which is lacking in many other optimisation functions. – Yves May 19 at 15:03
# ManifoldsBase.jl – an interface for manifolds
The interface for a manifold is provided in the lightweight package ManifoldsBase.jl. You can easily implement your algorithms and even your own manifolds just using the interface. All manifolds from the package here are also based on this interface, so any project based on the interface can benefit from all manifolds, as soon as a certain manifold provides implementations of the functions a project requires.
Additionally the AbstractDecoratorManifold is provided as well as the ValidationManifold as a specific example of such a decorator.
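As a small, hedged sketch of what implementing your own manifold against the interface can look like, the following defines a hypothetical type MyCircle (the unit circle in ℝ², with the name, the exponential-map formula and the use of exp! all chosen here only for illustration) and implements a few interface functions; the non-mutating exp should then be available via the package's allocation mechanism described further below.
using ManifoldsBase
using LinearAlgebra: norm

struct MyCircle <: AbstractManifold{ℝ} end   # ℝ is exported by ManifoldsBase

ManifoldsBase.manifold_dimension(::MyCircle) = 1
ManifoldsBase.representation_size(::MyCircle) = (2,)

# in-place exponential map; the non-mutating exp(M, p, X) allocates q and calls this
function ManifoldsBase.exp!(::MyCircle, q, p, X)
    θ = norm(X)
    if iszero(θ)
        q .= p
    else
        q .= cos(θ) .* p .+ (sin(θ) / θ) .* X
    end
    return q
end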
## Types and functions
The following functions are currently available from the interface. If a manifold that you implement for your own package fits this interface, we happily look forward to a Pull Request to add it here.
We would like to highlight a few of the types and functions in the next two sections before listing the remaining types and functions alphabetically.
### The Manifold Type
Besides the most central type, that of an AbstractManifold, accompanied by AbstractManifoldPoint to represent points thereon, note that the point type is meant to be used in a lazy fashion. This is meant as follows: if you implement a new manifold and your points are represented by matrices, vectors or arrays, then it is best not to restrict the types of the points p in functions, so that the methods also work for other array representation types. You should introduce a subtype for your points on a manifold only if the structure you use is richer, see for example FixedRankMatrices. Another reason is if you want to distinguish (and hence dispatch on) different representations of points on the manifold. For an example, see the Hyperbolic manifold, which can be represented using different models.
ManifoldsBase.AbstractManifoldType
AbstractManifold{F}
A manifold type. The AbstractManifold is used to dispatch to different functions on a manifold, usually as the first argument of the function. Examples are the exponential and logarithmic maps as well as more general functions that are built on them like the geodesic.
The manifold is parametrized by an AbstractNumbers to distinguish for example real (ℝ) and complex (ℂ) manifolds.
For subtypes the preferred order of parameters is: size and simple value parameters, followed by the AbstractNumbers field, followed by data type parameters, which might depend on the abstract number field type.
For more details see interface-types-and-functions in the ManifoldsBase.jl documentation at https://juliamanifolds.github.io/Manifolds.jl/stable/interface.html#Types-and-functions.
ManifoldsBase.AbstractManifoldPointType
AbstractManifoldPoint
Type for a point on a manifold. While a AbstractManifold does not necessarily require this type, for example when it is implemented for Vectors or Matrix type elements, this type can be used for more complicated representations, semantic verification, or even dispatch for different representations of points on a manifold.
### The exponential and the logarithmic map, and geodesics
Geodesics are the generalizations of a straight line to manifolds, i.e. their intrinsic acceleration is zero. Together with geodesics one also obtains the exponential map and its inverse, the logarithmic map. Informally speaking, the exponential map takes a vector (think of a direction and a length) at one point and returns another point, which lies towards this direction at distance of the specified length. The logarithmic map does the inverse, i.e. given two points, it tells which vector “points towards” the other point.
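As a small usage sketch (assuming the Sphere manifold from Manifolds.jl, as in the example further below), exp and log act exactly as just described:
using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = [0.0, π / 2, 0.0]    # a tangent vector at p, i.e. orthogonal to p

q = exp(M, p, X)         # follow the geodesic from p in direction X; q ≈ [0.0, 1.0, 0.0]
Y = log(M, p, q)         # the tangent vector at p "pointing towards" q; Y ≈ X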
Base.expMethod
exp(M::AbstractManifold, p, X)
exp(M::AbstractManifold, p, X, t::Real = 1)
Compute the exponential map of tangent vector X, optionally scaled by t, at point p from the manifold AbstractManifold M, i.e.
$$$\exp_p X = γ_{p,X}(1),$$$
where $γ_{p,X}$ is the unique geodesic starting in $γ(0)=p$ such that $\dot γ(0) = X$.
ManifoldsBase.exp!Method
exp!(M::AbstractManifold, q, p, X)
exp!(M::AbstractManifold, q, p, X, t::Real = 1)
Compute the exponential map of tangent vector X, optionally scaled by t, at point p from the manifold AbstractManifold M. The result is saved to q.
See also exp.
ManifoldsBase.geodesicMethod
geodesic(M::AbstractManifold, p, X) -> Function
Get the geodesic with initial point p and velocity X on the AbstractManifold M. A geodesic is a curve of zero acceleration. That is for the curve $γ_{p,X}: I → \mathcal M$, with $γ_{p,X}(0) = p$ and $\dot γ_{p,X}(0) = X$ a geodesic further fulfills
$$$∇_{\dot γ_{p,X}(t)} \dot γ_{p,X}(t) = 0,$$$
i.e. the curve is acceleration free with respect to the Riemannian metric. This implies that the curve has constant speed and is locally distance-minimizing.
This function returns a function of (time) t.
geodesic(M::AbstractManifold, p, X, t::Real)
geodesic(M::AbstractManifold, p, X, T::AbstractVector) -> AbstractVector
Return the point at time t or points at times t in T along the geodesic.
ManifoldsBase.shortest_geodesicMethod
shortest_geodesic(M::AbstractManifold, p, q) -> Function
Get a geodesic $γ_{p,q}(t)$ whose length is the shortest path between the points p and q, where $γ_{p,q}(0)=p$ and $γ_{p,q}(1)=q$. When there are multiple shortest geodesics, a deterministic choice will be returned.
This function returns a function of time, which may be a Real or an AbstractVector.
shortest_geodesic(M::AbstractManifold, p, q, t::Real)
shortest_geodesic(M::AbstractManifold, p, q, T::AbstractVector) -> AbstractVector
Return the point at time t or points at times t in T along the shortest geodesic.
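A short sketch of these functions (again assuming Manifolds.jl's Sphere); when no time points are given, both return a function of time:
using Manifolds

M = Sphere(2)
p, q = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]

γ = geodesic(M, p, log(M, p, q))   # a function of t with γ(0) ≈ p and γ(1) ≈ q
γ(0.5)                             # point halfway along the geodesic

shortest_geodesic(M, p, q, 0.5)    # the same mid point, directly from the two end points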
### Retractions and inverse Retractions
The exponential and logarithmic map might be too expensive to evaluate or not be available in a very stable numerical way. Retractions provide a possibly cheap, fast and stable alternative.
The following figure compares the exponential map exp(M, p, X) on the Circle(ℂ) (or Sphere(1) embedded in $ℝ^2$) with one possible retraction, the one based on projections. Note especially that $\mathrm{dist}(p,q)=\lVert X\rVert_p$ while this is not the case for $q'$.
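As a hedged sketch of this difference, assuming Manifolds.jl's Sphere, which implements a ProjectionRetraction (normalizing p + X back onto the sphere); q2 below plays the role of $q'$ above:
using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = [0.0, 0.3, 0.0]

q  = exp(M, p, X)                              # exact exponential map
q2 = retract(M, p, X, ProjectionRetraction())  # cheap approximation of exp

distance(M, p, q)    # equals norm(M, p, X) = 0.3
distance(M, p, q2)   # slightly smaller, since q2 differs from q in general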
ManifoldsBase.inverse_retract!Method
inverse_retract!(M::AbstractManifold, X, p, q[, method::AbstractInverseRetractionMethod])
Compute the inverse retraction, a cheaper, approximate version of the logarithmic map, of points p and q on the AbstractManifold M. Result is saved to X.
Inverse retraction method can be specified by the last argument, defaulting to default_inverse_retraction_method(M). See the documentation of respective manifolds for available methods.
See also retract!.
ManifoldsBase.inverse_retractMethod
inverse_retract(M::AbstractManifold, p, q)
inverse_retract(M::AbstractManifold, p, q, method::AbstractInverseRetractionMethod
Compute the inverse retraction, a cheaper, approximate version of the logarithmic map, of points p and q on the AbstractManifold M.
Inverse retraction method can be specified by the last argument, defaulting to default_inverse_retraction_method(M). For available inverse retractions on certain manifolds see the documentation on the corresponding manifold.
See also retract.
ManifoldsBase.retract!Method
retract!(M::AbstractManifold, q, p, X)
retract!(M::AbstractManifold, q, p, X, t::Real=1)
retract!(M::AbstractManifold, q, p, X, method::AbstractRetractionMethod)
retract!(M::AbstractManifold, q, p, X, t::Real=1, method::AbstractRetractionMethod)
Compute a retraction, a cheaper, approximate version of the exponential map, from p into direction X, scaled by t, on the AbstractManifold manifold M. Result is saved to q.
Retraction method can be specified by the last argument, defaulting to default_retraction_method(M). See the documentation of respective manifolds for available methods.
See retract for more details.
ManifoldsBase.retractMethod
retract(M::AbstractManifold, p, X)
retract(M::AbstractManifold, p, X, t::Real=1)
retract(M::AbstractManifold, p, X, method::AbstractRetractionMethod)
retract(M::AbstractManifold, p, X, t::Real=1, method::AbstractRetractionMethod)
Compute a retraction, a cheaper, approximate version of the exponential map, from p into direction X, scaled by t, on the AbstractManifold M.
A retraction $\operatorname{retr}_p: T_p\mathcal M → \mathcal M$ is a smooth map that fulfills
1. $\operatorname{retr}_p(0) = p$
2. $D\operatorname{retr}_p(0): T_p\mathcal M \to T_p\mathcal M$ is the identity map, i.e. $D\operatorname{retr}_p(0)[X]=X$,
where $D\operatorname{retr}_p$ denotes the differential of the retraction.
The retraction is called of second order if for all $X$ the curves $c(t) = R_p(tX)$ have a zero acceleration at $t=0$, i.e. $c''(0) = 0$.
Retraction method can be specified by the last argument, defaulting to default_retraction_method(M). For further available retractions see the documentation of respective manifolds.
Locally, the retraction is invertible. For the inverse operation, see inverse_retract.
To distinguish different types of retractions, the last argument of the (inverse) retraction specifies a type. The following ones are available.
ManifoldsBase.NLsolveInverseRetractionType
NLsolveInverseRetraction{T<:AbstractRetractionMethod,TV,TK} <:
ApproximateInverseRetraction
An inverse retraction method for approximating the inverse of a retraction using NLsolve.
Constructor
NLsolveInverseRetraction(
method::AbstractRetractionMethod[, X0];
project_tangent=false,
project_point=false,
nlsolve_kwargs...,
)
Constructs an approximate inverse retraction for the retraction method with initial guess X0, defaulting to the zero vector. If project_tangent is true, then the tangent vector is projected before the retraction using project. If project_point is true, then the resulting point is projected after the retraction. nlsolve_kwargs are keyword arguments passed to NLsolve.nlsolve.
### Projections
A manifold might be embedded in some space. Often this is implicitly assumed, for example the complex Circle is embedded in the complex plane. Let's keep the circle in mind in the following as a simple example. For the general case of explicitly stating an embedding and/or distinguishing several different embeddings, see Embedded Manifolds below.
To make this a little more concrete, let‘s assume we have a manifold $\mathcal M$ which is embedded in some manifold $\mathcal N$ and the image $i(\mathcal M)$ of the embedding function $i$ is a closed set (with respect to the topology on $\mathcal N$). Then we can do two kinds of projections.
To make this concrete in an example, for the Circle $\mathcal M=\mathcal C := \{ p ∈ ℂ \,|\, |p| = 1\}$ the embedding can be chosen to be the manifold $\mathcal N = ℂ$, and since we already represent $\mathcal C$ by complex numbers, the embedding function is the identity $i(p) = p$.
1. Given a point $p∈\mathcal N$ we can look for the closest point on the manifold $\mathcal M$ formally as
$$$\operatorname*{arg\,min}_{q\in \mathcal M} d_{\mathcal N}(i(q),p)$$$
And this resulting $q$ we call the projection of $p$ onto the manifold $\mathcal M$.
2. Given a point $p∈\mathcal M$ and a vector $X ∈ T_{i(p)}\mathcal N$ in the embedding, we can similarly look for the closest tangent vector $Y∈ T_p\mathcal M$ using the pushforward $\mathrm{d}i_p$ of the embedding.
$$$\operatorname*{arg\,min}_{Y\in T_p\mathcal M} \lVert \mathrm{d}i(p)[Y] - X \rVert_{i(p)}$$$
And we call the resulting $Y$ the projection of $X$ onto the tangent space $T_p\mathcal M$ at $p$.
Let's look at the slightly more concrete example of the complex Circle again. Here, the closest point to $p ∈ ℂ$ is just the projection onto the circle, or in other words $q = \frac{p}{\lvert p \rvert}$. A tangent space $T_p\mathcal C$ in the embedding is the line orthogonal to a point $p∈\mathcal C$ through the origin. This can be better visualized by looking at $p+T_p\mathcal C$, which is actually the line tangent to $p$. Note that this shift does not change the resulting projection relative to the origin of the tangent space.
Here the projection can be computed as the classical projection onto the line, i.e. $Y = X - ⟨X,p⟩p$.
This is illustrated in the following figure.
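To make the circle example executable, here is a small sketch that writes out both projections directly from the formulas above, using plain complex numbers (independent of any library implementation):
# a point in the embedding ℂ and its projection onto the circle
p_ambient = 2.0 + 1.0im
q = p_ambient / abs(p_ambient)    # q = p / |p| lies on the unit circle

# an ambient vector at q and its projection onto the tangent space T_q𝒞
X = 0.5 + 0.25im
Y = X - real(conj(q) * X) * q     # Y = X - ⟨X, q⟩ q with ⟨X, q⟩ = Re(conj(q) X)
real(conj(q) * Y)                 # ≈ 0, i.e. Y is tangent at q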
ManifoldsBase.project!Method
project!(M::AbstractManifold, Y, p, X)
Project ambient space representation of a vector X to a tangent vector at point p on the AbstractManifold M. The result is saved in vector Y. This method is only available for manifolds where implicitly an embedding or ambient space is given. Additionally, project! includes changing data representation, if applicable, i.e. if the tangents on M are not represented in the same way as points on the embedding, the representation is changed accordingly. This is the case for example for Lie groups, when tangent vectors are represented in the Lie algebra; after projection, the change to the Lie algebra is performed, too.
ManifoldsBase.project!Method
project!(M::AbstractManifold, q, p)
Project point p from the ambient space onto the AbstractManifold M. The result is stored in q. This method is only available for manifolds where implicitly an embedding or ambient space is given. Additionally, the projection includes changing data representation, if applicable, i.e. if the points on M are not represented in the same array data, the data is changed accordingly.
ManifoldsBase.projectMethod
project(M::AbstractManifold, p, X)
Project ambient space representation of a vector X to a tangent vector at point p on the AbstractManifold M. This method is only available for manifolds where implicitly an embedding or ambient space is given. Additionally, project includes changing data representation, if applicable, i.e. if the tangents on M are not represented in the same way as points on the embedding, the representation is changed accordingly. This is the case for example for Lie groups, when tangent vectors are represented in the Lie algebra; after projection, the change to the Lie algebra is performed, too.
ManifoldsBase.projectMethod
project(M::AbstractManifold, p)
Project point p from the ambient space of the AbstractManifold M to M. This method is only available for manifolds where implicitly an embedding or ambient space is given. Additionally, the projection includes changing data representation, if applicable, i.e. if the points on M are not represented in the same array data, the data is changed accordingly.
### Remaining functions
Base.copyto!Method
copyto!(M::AbstractManifold, Y, p, X)
Copy the value(s) from X to Y, where both are tangent vectors from the tangent space at p on the AbstractManifold M. This function defaults to calling copyto!(Y, X), but it might be useful to overwrite the function at this level, where information from p and M can also be accessed.
Base.copyto!Method
copyto!(M::AbstractManifold, q, p)
Copy the value(s) from p to q, where both are points on the AbstractManifold M. This function defaults to calling copyto!(q, p), but it might be useful to overwrite the function at this level, where information from M can also be accessed.
Base.isapproxMethod
isapprox(M::AbstractManifold, p, X, Y; kwargs...)
Check if vectors X and Y tangent at p from AbstractManifold M are approximately equal.
Keyword arguments can be used to specify tolerances.
Base.isapproxMethod
isapprox(M::AbstractManifold, p, q; kwargs...)
Check if points p and q from AbstractManifold M are approximately equal.
Keyword arguments can be used to specify tolerances.
ManifoldsBase.allocateMethod
allocate(a)
allocate(a, dims::Integer...)
allocate(a, dims::Tuple)
allocate(a, T::Type)
allocate(a, T::Type, dims::Integer...)
allocate(a, T::Type, dims::Tuple)
Allocate an object similar to a. It is similar to the function similar, although instead of working only on the outermost layer of a nested structure, it maps recursively through outer layers and calls similar on the innermost array-like object only. Type T is the new number element type number_eltype; if it is not given, the element type of a is retained. The dims argument can be given for non-nested allocation and is forwarded to the function similar.
ManifoldsBase.allocate_resultMethod
allocate_result(M::AbstractManifold, f, x...)
Allocate an array for the result of function f on AbstractManifold M and arguments x... for implementing the non-modifying operation using the modifying operation.
Usefulness of passing a function is demonstrated by methods that allocate results of musical isomorphisms.
ManifoldsBase.base_manifoldFunction
base_manifold(M::AbstractManifold, depth = Val(-1))
Return the internally stored AbstractManifold for decorated manifold M and the base manifold for vector bundles or power manifolds. The optional parameter depth can be used to remove only the first depth many decorators and return the AbstractManifold from that level, whether it is decorated or not. Any negative value deactivates this depth limit.
ManifoldsBase.check_pointMethod
check_point(M::AbstractManifold, p; kwargs...) -> Union{Nothing,String}
Return nothing when p is a point on the AbstractManifold M. Otherwise, return an error with a description of why the point does not belong to manifold M.
By default, check_point returns nothing, i.e. if no checks are implemented, the assumption is to be optimistic for a point not deriving from the AbstractManifoldPoint type.
ManifoldsBase.check_sizeMethod
check_size(M::AbstractManifold, p)
check_size(M::AbstractManifold, p, X)
Check whether p has the right representation_size for a AbstractManifold M. Additionally if a tangent vector is given, both p and X are checked to be of corresponding correct representation sizes for points and tangent vectors on M.
By default, check_size returns nothing, i.e. if no checks are implemented, the assumption is to be optimistic.
ManifoldsBase.check_vectorMethod
check_vector(M::AbstractManifold, p, X; kwargs...) -> Union{Nothing,String}
Check whether X is a valid tangent vector in the tangent space of p on the AbstractManifold M. An implementation does not have to validate the point p. If it is not a tangent vector, an error string should be returned.
By default, check_vector returns nothing, i.e. if no checks are implemented, the assumption is to be optimistic for tangent vectors not deriving from the TVector type.
ManifoldsBase.distanceMethod
distance(M::AbstractManifold, p, q)
Shortest distance between the points p and q on the AbstractManifold M, i.e.
$$$d(p,q) = \inf_{γ} L(γ),$$$
where the infimum is over all piecewise smooth curves $γ: [a,b] \to \mathcal M$ connecting $γ(a)=p$ and $γ(b)=q$ and
$$$L(γ) = \displaystyle\int_{a}^{b} \lVert \dotγ(t)\rVert_{γ(t)} \mathrm{d}t$$$
is the length of the curve $γ$.
If $\mathcal M$ is not connected, i.e. consists of several disjoint components, the distance between two points from different components should be $∞$.
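For example, on Manifolds.jl's Sphere the distance is the angle between the two points (a small sketch):
using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
q = [0.0, 0.0, 1.0]
distance(M, p, q)   # the great-circle distance, here π/2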
ManifoldsBase.embed!Method
embed!(M::AbstractManifold, Y, p, X)
Embed a tangent vector X at a point p on the AbstractManifold M into the ambient space and return the result in Y. This method is only available for manifolds where implicitly an embedding or ambient space is given. Additionally, embed! includes changing data representation, if applicable, i.e. if the tangents on M are not represented in the same way as tangents on the embedding, the representation is changed accordingly. This is the case for example for Lie groups, when tangent vectors are represented in the Lie algebra. The embedded tangents are then in the tangent spaces of the embedded base points.
ManifoldsBase.embed!Method
embed!(M::AbstractManifold, q, p)
Embed point p from the AbstractManifold M into an ambient space. This method is only available for manifolds where implicitly an embedding or ambient space is given. Not implementing this function means that there is no proper embedding for your manifold. Additionally, embed might include changing data representation, if applicable, i.e. if points on M are not represented in the same way as their counterparts in the embedding, the representation is changed accordingly.
If you have more than one embedding, see EmbeddedManifold for defining a second embedding. If your point p is already represented in some embedding, see AbstractEmbeddedManifold for how you can avoid reimplementing code from the embedded manifold.
ManifoldsBase.embedMethod
embed(M::AbstractManifold, p, X)
Embed a tangent vector X at a point p on the AbstractManifold M into an ambient space. This method is only available for manifolds where implicitly an embedding or ambient space is given. Not implementing this function means that there is no proper embedding for your tangent space(s).
Additionally, embed might include changing data representation, if applicable, i.e. if tangent vectors on M are not represented in the same way as their counterparts in the embedding, the representation is changed accordingly.
If you have more than one embedding, see EmbeddedManifold for defining a second embedding. If your tangent vector X is already represented in some embedding, see AbstractEmbeddedManifold for how you can avoid reimplementing code from the embedded manifold.
ManifoldsBase.embedMethod
embed(M::AbstractManifold, p)
Embed point p from the AbstractManifold M into the ambient space. This method is only available for manifolds where implicitly an embedding or ambient space is given. Additionally, embed includes changing data representation, if applicable, i.e. if the points on M are not represented in the same way as points on the embedding, the representation is changed accordingly.
ManifoldsBase.injectivity_radiusMethod
injectivity_radius(M::AbstractManifold, p)
Return the distance $d$ such that exp(M, p, X) is injective for all tangent vectors shorter than $d$ (i.e. has an inverse).
injectivity_radius(M::AbstractManifold)
Infimum of the injectivity radius of all manifold points.
injectivity_radius(M::AbstractManifold[, x], method::AbstractRetractionMethod)
injectivity_radius(M::AbstractManifold, x, method::AbstractRetractionMethod)
Distance $d$ such that retract(M, p, X, method) is injective for all tangent vectors shorter than $d$ (i.e. has an inverse) for point p if provided or all manifold points otherwise.
ManifoldsBase.is_pointFunction
is_point(M::AbstractManifold, p, throw_error = false; kwargs...)
Return whether p is a valid point on the AbstractManifold M.
If throw_error is false, the function returns either true or false. If throw_error is true, the function either returns true or throws an error. By default the function calls check_point(M, p; kwargs...) and checks whether the returned value is nothing or an error.
ManifoldsBase.is_vectorFunction
is_vector(M::AbstractManifold, p, X, throw_error = false; check_base_point=true, kwargs...)
Return whether X is a valid tangent vector at point p on the AbstractManifold M. Returns either true or false.
If throw_error is false, the function returns either true or false. If throw_error is true, the function either returns true or throws an error. By default the function calls check_vector(M, p, X; kwargs...) and checks whether the returned value is nothing or an error.
If check_base_point is true, then the point p will be first checked using the check_point function.
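A small sketch of the two checks, again assuming the Sphere from Manifolds.jl (points must have unit norm, tangent vectors must be orthogonal to the point):
using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]

is_point(M, p)                          # true
is_point(M, [1.0, 1.0, 0.0])            # false, the norm is not one
is_vector(M, p, [0.0, 2.0, 0.0])        # true, orthogonal to p
is_vector(M, p, [1.0, 0.0, 0.0])        # false, not orthogonal to p
is_point(M, [1.0, 1.0, 0.0], true)      # throws an error describing the failure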
ManifoldsBase.size_to_tupleMethod
size_to_tuple(::Type{S}) where S<:Tuple
Converts a size given by Tuple{N, M, ...} into a tuple (N, M, ...).
ManifoldsBase.zero_vector!Method
zero_vector!(M::AbstractManifold, X, p)
Save to X the tangent vector from the tangent space $T_p\mathcal M$ at p that represents the zero vector, i.e. such that retracting X to the AbstractManifold M at p produces p.
ManifoldsBase.zero_vectorMethod
zero_vector(M::AbstractManifold, p)
Return the tangent vector from the tangent space $T_p\mathcal M$ at p on the AbstractManifold M, that represents the zero vector, i.e. such that a retraction at p produces p.
## Number systems
ManifoldsBase._unify_number_systemsMethod
_unify_number_systems(𝔽s::AbstractNumbers...)
Compute a number system that includes all given number systems (as sub-systems) and is closed under addition and multiplication.
ManifoldsBase.number_systemMethod
number_system(M::AbstractManifold{𝔽})
Return the number system the manifold M is based on, i.e. the parameter 𝔽.
## Allocation
Non-mutating functions in ManifoldsBase.jl are typically implemented using mutating variants. Allocation of new points is performed using a custom mechanism that relies on the following functions:
• allocate that allocates a new point or vector similar to the given one. This function behaves like similar for simple representations of points and vectors (for example Array{Float64}). For more complex types, such as nested representations of PowerManifold (see NestedPowerRepresentation), FVector types, checked types like ValidationMPoint and more it operates differently. While similar only concerns itself with the higher level of nested structures, allocate maps itself through all levels of nesting until a simple array of numbers is reached and then calls similar. The difference can be most easily seen in the following example:
julia> x = similar([[1.0], [2.0]])
2-element Array{Array{Float64,1},1}:
#undef
#undef
julia> y = Manifolds.allocate([[1.0], [2.0]])
2-element Array{Array{Float64,1},1}:
[6.90031725726027e-310]
[6.9003678131654e-310]
julia> x[1]
ERROR: UndefRefError: access to undefined reference
Stacktrace:
[1] getindex(::Array{Array{Float64,1},1}, ::Int64) at ./array.jl:744
[2] top-level scope at REPL[12]:1
julia> y[1]
1-element Array{Float64,1}:
6.90031725726027e-310
## Bases
The following functions and types provide support for bases of the tangent space of different manifolds. Moreover, bases of the cotangent space are also supported, though this description focuses on the tangent space. An orthonormal basis of the tangent space $T_p \mathcal M$ of (real) dimension $n$ is a real-coefficient basis $e_1, e_2, …, e_n$ such that $\mathrm{Re}(g_p(e_i, e_j)) = δ_{ij}$ for each $i,j ∈ \{1, 2, …, n\}$, where $g_p$ is the Riemannian metric at point $p$. A vector $X$ from the tangent space $T_p \mathcal M$ can be expressed in Einstein notation as a sum $X = X^i e_i$, where the (real) coefficients $X^i$ are calculated as $X^i = \mathrm{Re}(g_p(X, e_i))$.
Bases are closely related to atlases.
The main types are:
The main functions are:
Coordinates of a vector in a basis can be stored in an FVector to explicitly indicate which basis they are expressed in. It is useful to avoid potential ambiguities.
ManifoldsBase.CachedBasisType
CachedBasis{𝔽,V,<:AbstractBasis{𝔽}} <: AbstractBasis{𝔽}
A cached version of the given basis with precomputed basis vectors. The basis vectors are stored in data, either explicitly (like in cached variants of ProjectedOrthonormalBasis) or implicitly.
Constructor
CachedBasis(basis::AbstractBasis, data)
ManifoldsBase.DiagonalizingOrthonormalBasisType
DiagonalizingOrthonormalBasis{𝔽,TV} <: AbstractOrthonormalBasis{𝔽,TangentSpaceType}
An orthonormal basis Ξ as a vector of tangent vectors (of length determined by manifold_dimension) in the tangent space that diagonalizes the curvature tensor $R(u,v)w$ and where the direction frame_direction $v$ has curvature 0.
The type parameter 𝔽 denotes the AbstractNumbers that will be used for the vectors elements.
Constructor
DiagonalizingOrthonormalBasis(frame_direction, 𝔽::AbstractNumbers = ℝ)
ManifoldsBase.GramSchmidtOrthonormalBasisType
GramSchmidtOrthonormalBasis{𝔽} <: AbstractOrthonormalBasis{𝔽}
An orthonormal basis obtained from a basis.
Constructor
GramSchmidtOrthonormalBasis(𝔽::AbstractNumbers = ℝ)
ManifoldsBase.ProjectedOrthonormalBasisType
ProjectedOrthonormalBasis(method::Symbol, 𝔽::AbstractNumbers = ℝ)
An orthonormal basis that comes from orthonormalization of basis vectors of the ambient space projected onto the subspace representing the tangent space at a given point.
The type parameter 𝔽 denotes the AbstractNumbers that will be used for the vectors elements.
Available methods:
• :gram_schmidt uses a modified Gram-Schmidt orthonormalization.
• :svd uses SVD decomposition to orthogonalize projected vectors. The SVD-based method should be more numerically stable at the cost of an additional assumption (local metric tensor at a point where the basis is calculated has to be diagonal).
ManifoldsBase.VectorSpaceTypeType
VectorSpaceType
Abstract type for tangent spaces, cotangent spaces, their tensor products, exterior products, etc.
Every vector space fiber is supposed to provide:
• a method of constructing vectors,
• basic operations: addition, subtraction, multiplication by a scalar and negation (unary minus),
• zero_vector(fiber, p) to construct zero vectors at point p,
• allocate(X) and allocate(X, T) for vector X and type T,
• copyto!(X, Y) for vectors X and Y,
• number_eltype(v) for vector v,
• vector_space_dimension.
Optionally:
ManifoldsBase.allocation_promotion_functionMethod
allocation_promotion_function(M::AbstractManifold, f, args::Tuple)
Determine the function that must be used to ensure that the allocated representation is of the right type. This is needed for get_vector when a point on a complex manifold is represented by a real-valued vectors with a real-coefficient basis, so that a complex-valued vector representation is allocated.
ManifoldsBase.dual_basisMethod
dual_basis(M::AbstractManifold, p, B::AbstractBasis)
Get the dual basis to B, a basis of a vector space at point p from manifold M.
The dual to the $i$th vector $v_i$ from basis B is a vector $v^i$ from the dual space such that $v^i(v_j) = δ^i_j$, where $δ^i_j$ is the Kronecker delta symbol:
$$$δ^i_j = \begin{cases} 1 & \text{ if } i=j, \\ 0 & \text{ otherwise.} \end{cases}$$$
ManifoldsBase.get_basisMethod
get_basis(M::AbstractManifold, p, B::AbstractBasis) -> CachedBasis
Compute the basis vectors of the tangent space at a point on manifold M represented by p.
Returned object derives from AbstractBasis and may have a field .vectors that stores tangent vectors or it may store them implicitly, in which case the function get_vectors needs to be used to retrieve the basis vectors.
ManifoldsBase.get_coordinatesMethod
get_coordinates(M::AbstractManifold, p, X, B::AbstractBasis)
get_coordinates(M::AbstractManifold, p, X, B::CachedBasis)
Compute a one-dimensional vector of coefficients of the tangent vector X at point denoted by p on manifold M in basis B.
Depending on the basis, p may not directly represent a point on the manifold. For example if a basis transported along a curve is used, p may be the coordinate along the curve. If a CachedBasis is provided, its stored vectors are used; otherwise the user has to provide a method to compute the coordinates.
For the CachedBasis keep in mind that the reconstruction with get_vector requires either a dual basis or the cached basis to be self-dual, for example orthonormal.
See also: get_vector, get_basis
ManifoldsBase.get_vectorMethod
get_vector(M::AbstractManifold, p, X, B::AbstractBasis)
Convert a one-dimensional vector of coefficients in a basis B of the tangent space at p on manifold M to a tangent vector X at p.
Depending on the basis, p may not directly represent a point on the manifold. For example if a basis transported along a curve is used, p may be the coordinate along the curve.
For the CachedBasis keep in mind that the reconstruction from get_coordinates requires either a dual basis or the cached basis to be self-dual, for example orthonormal.
ManifoldsBase.get_vectorsMethod
get_vectors(M::AbstractManifold, p, B::AbstractBasis)
Get the basis vectors of basis B of the tangent space at point p.
ManifoldsBase.gram_schmidtMethod
gram_schmidt(M::AbstractManifold{𝔽}, p, B::AbstractBasis{𝔽}) where {𝔽}
gram_schmidt(M::AbstractManifold, p, V::AbstractVector)
Compute an ONB in the tangent space at p on the AbstractManifold M from either an AbstractBasis B or a set of (at most) manifold_dimension(M) many vectors. Note that this method requires the manifold and basis to work on the same AbstractNumbers 𝔽, i.e. with real coefficients.
The method always returns a basis, i.e. linearly dependent vectors are removed.
Keyword arguments
• warn_linearly_dependent (false) – warn if the basis vectors are not linearly independent
• skip_linearly_dependent (false) – whether to just skip (true) a vector that is linearly dependent to the previous ones or to stop (false, default) at that point
• return_incomplete_set (false) – throw an error if the resulting set of vectors is not a basis but contains less vectors
Further keyword arguments can be passed to set the accuracy of the independence test. Especially atol is raised slightly by default to atol = 5*1e-16.
Return value
When a set of vectors is orthonormalized a set of vectors is returned. When an AbstractBasis is orthonormalized, a CachedBasis is returned.
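A small usage sketch, assuming the Sphere from Manifolds.jl and two linearly independent tangent vectors at p:
using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
V = [[0.0, 1.0, 1.0], [0.0, 0.0, 2.0]]   # tangent at p, but not orthonormal

B = gram_schmidt(M, p, V)                # an orthonormal set of tangent vectors at p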
ManifoldsBase.hatMethod
hat(M::AbstractManifold, p, Xⁱ)
Given a basis $e_i$ on the tangent space at a point p and tangent component vector $X^i$, compute the equivalent vector representation $X=X^i e_i$, where Einstein summation notation is used:
$$$∧ : X^i ↦ X^i e_i$$$
For array manifolds, this converts a vector representation of the tangent vector to an array representation. The vee map is the hat map's inverse.
ManifoldsBase.number_of_coordinatesMethod
number_of_coordinates(M::AbstractManifold, B::AbstractBasis)
Compute the number of coordinates in basis B of manifold M. This also corresponds to the number of vectors represented by B, or stored within B in case of a CachedBasis.
ManifoldsBase.veeMethod
vee(M::AbstractManifold, p, X)
Given a basis $e_i$ on the tangent space at a point p and tangent vector X, compute the vector components $X^i$, such that $X = X^i e_i$, where Einstein summation notation is used:
$$$\vee : X^i e_i ↦ X^i$$$
For array manifolds, this converts an array representation of the tangent vector to a vector representation. The hat map is the vee map's inverse.
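A short sketch of the hat/vee pair, assuming the Rotations manifold from Manifolds.jl, where tangent vectors at a rotation are represented by skew-symmetric matrices:
using LinearAlgebra
using Manifolds

G = Rotations(3)
p = Matrix{Float64}(I, 3, 3)    # the identity rotation
Xⁱ = [1.0, 2.0, 3.0]            # component (coordinate) vector

X = hat(G, p, Xⁱ)               # 3×3 skew-symmetric array representation
vee(G, p, X)                    # recovers [1.0, 2.0, 3.0]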
ManifoldsBase.AbstractFibreVectorType
AbstractFibreVector{TType<:VectorSpaceType}
Type for a vector from a vector space (fibre of a vector bundle) of type TType of a manifold. While a AbstractManifold does not necessarily require this type, for example when it is implemented for Vectors or Matrix type elements, this type can be used for more complicated representations, semantic verification, or even dispatch for different representations of tangent vectors and their types on a manifold.
ManifoldsBase.CoTVectorType
CoTVector = AbstractFibreVector{CotangentSpaceType}
Type for a cotangent vector of a manifold. While a AbstractManifold does not necessarily require this type, for example when it is implemented for Vectors or Matrix type elements, this type can be used for more complicated representations, semantic verification, or even dispatch for different representations of cotangent vectors and their types on a manifold.
ManifoldsBase.FVectorType
FVector(type::VectorSpaceType, data, basis::AbstractBasis)
Decorator indicating that the vector data contains coordinates of a vector from a fiber of a vector bundle of type type. basis is an object describing the basis of that space in which the coordinates are given.
Conversion between FVector representation and the default representation of an object (for example a tangent vector) for a manifold should be done using get_coordinates and get_vector.
Examples
julia> using Manifolds
julia> M = Sphere(2)
Sphere(2, ℝ)
julia> p = [1.0, 0.0, 0.0]
3-element Vector{Float64}:
1.0
0.0
0.0
julia> X = [0.0, 2.0, -1.0]
3-element Vector{Float64}:
0.0
2.0
-1.0
julia> B = DefaultOrthonormalBasis()
DefaultOrthonormalBasis(ℝ)
julia> fX = TFVector(get_coordinates(M, p, X, B), B)
TFVector([2.0, -1.0], DefaultOrthonormalBasis(ℝ))
julia> X_back = get_vector(M, p, fX.data, fX.basis)
3-element Vector{Float64}:
-0.0
2.0
-1.0
ManifoldsBase.TVectorType
TVector = AbstractFibreVector{TangentSpaceType}
Type for a tangent vector of a manifold. While a AbstractManifold does not necessarily require this type, for example when it is implemented for Vectors or Matrix type elements, this type can be used for more complicated representations, semantic verification, or even dispatch for different representations of tangent vectors and their types on a manifold.
## Vector transport
There are three main functions for vector transport:
Different types of vector transport are implemented using subtypes of AbstractVectorTransportMethod:
ManifoldsBase.AbstractLinearVectorTransportMethodType
AbstractLinearVectorTransportMethod <: AbstractVectorTransportMethod
Abstract type for linear methods for transporting vectors, that is transport of a linear combination of vectors is a linear combination of transported vectors.
ManifoldsBase.DifferentiatedRetractionVectorTransportType
DifferentiatedRetractionVectorTransport{R<:AbstractRetractionMethod} <:
AbstractVectorTransportMethod
A type to specify a vector transport that is given by differentiating a retraction. This can be introduced in two ways. Let $\mathcal M$ be a Riemannian manifold, $p\in\mathcal M$ a point, and $X,Y\in T_p\mathcal M$ denote two tangent vectors at $p$.
Given a retraction (cf. AbstractRetractionMethod) $\operatorname{retr}$, the vector transport of X in direction Y (cf. vector_transport_direction) by differentiation this retraction, is given by
$$$\mathcal T^{\operatorname{retr}}_{p,Y}X = D\operatorname{retr}_p(Y)[X] = \frac{\mathrm{d}}{\mathrm{d}t}\operatorname{retr}_p(Y+tX)\Bigr|_{t=0}.$$$
see [AbsilMahonySepulchre2008], Section 8.1.2 for more details.
This can be phrased similarly as a vector_transport_to by introducing $q=\operatorname{retr}_pY$ and defining
$$$\mathcal T^{\operatorname{retr}}_{q \gets p}X = \mathcal T^{\operatorname{retr}}_{p,Y}X$$$
which in practice usually requires the inverse_retract to exist in order to compute $Y = \operatorname{retr}_p^{-1}q$.
Constructor
DifferentiatedRetractionVectorTransport(m::AbstractRetractionMethod)
ManifoldsBase.ParallelTransportType
ParallelTransport = DifferentiatedRetractionVectorTransport{ExponentialRetraction}
Specify to use parallel transport vector transport method.
To be precise let $c(t)$ be a curve depending on the method
In these cases $Y\in T_p\mathcal M$ is the vector that we would like to transport from the tangent space at $p=c(0)$ to the tangent space at $c(1)$.
Let $Z\colon [0,1] \to T\mathcal M$, $Z(t)\in T_{c(t)}\mathcal M$ be a smooth vector field along the curve $c$ with $Z(0) = Y$, such that $Z$ is parallel, i.e. its covariant derivative $\frac{\mathrm{D}}{\mathrm{d}t}Z$ is zero. Note that such a $Z$ always exists and is unique.
Then the parallel transport is given by $Z(1)$.
Note that since it is technically the DifferentiatedRetractionVectorTransport of the exp (cf. ExponentialRetraction), we define ParallelTransport as an alias.
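A short usage sketch of parallel transport, again assuming Manifolds.jl's Sphere; transporting along the equator keeps a vector pointing to the pole unchanged:
using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
q = [0.0, 1.0, 0.0]
X = [0.0, 0.0, 1.0]    # tangent at p, orthogonal to the plane of the geodesic

Y = vector_transport_to(M, p, X, q, ParallelTransport())   # Y ≈ [0.0, 0.0, 1.0]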
ManifoldsBase.PoleLadderTransportType
PoleLadderTransport <: AbstractVectorTransportMethod
Specify to use pole_ladder as vector transport method within vector_transport_to, vector_transport_direction, or vector_transport_along, i.e.
Let $X\in T_p\mathcal M$ be a tangent vector at $p\in\mathcal M$ and $q\in\mathcal M$ the point to transport to. Then $x = \exp_pX$ is used to call y = pole_ladder(M, p, x, q) and the resulting vector is obtained by computing $Y = -\log_qy$.
The PoleLadderTransport possesses two advantages compared to SchildsLadderTransport:
• it is cheaper to evaluate, if you want to transport several vectors, since the mid point $c$ then stays unchanged.
• while both methods are exact if the curvature is zero, pole ladder is even exact in symmetric Riemannian manifolds[Pennec2018]
The pole ladder was proposed in [LorenziPennec2014]. Its name stems from the fact that it resembles a pole ladder when applied to a sequence of points successively.
Constructor
PoleLadderTransport(
retraction = ExponentialRetraction(),
inverse_retraction = LogarithmicInverseRetraction(),
)
Construct the classical pole ladder that employs exp and log, i.e. as proposed in [LorenziPennec2014]. For an even cheaper transport the inner operations can be changed to an AbstractRetractionMethod retraction and an AbstractInverseRetractionMethod inverse_retraction, respectively.
ManifoldsBase.ScaledVectorTransportType
ScaledVectorTransport{T} <: AbstractVectorTransportMethod
Introduce a scaled variant of any AbstractVectorTransportMethod T, as introduced in [SatoIwai2013] for some $X\in T_p\mathcal M$ as
$$$\mathcal T^{\mathrm{S}}(X) = \frac{\lVert X\rVert_p}{\lVert \mathcal T(X)\rVert_q}\mathcal T(X).$$$
Note that the resulting point q has to be known, i.e. for vector_transport_direction the curve or more precisely its end point has to be known (via an exponential map or a retraction). Therefore a default implementation is only provided for the vector_transport_to.
Constructor
ScaledVectorTransport(m::AbstractVectorTransportMethod)
ManifoldsBase.SchildsLadderTransportType
SchildsLadderTransport <: AbstractVectorTransportMethod
Specify to use schilds_ladder as vector transport method within vector_transport_to, vector_transport_direction, or vector_transport_along, i.e.
Let $X\in T_p\mathcal M$ be a tangent vector at $p\in\mathcal M$ and $q\in\mathcal M$ the point to transport to. Then
$$$P^{\mathrm{S}}_{q\gets p}(X) = \log_q\bigl( \operatorname{retr}_p ( 2\operatorname{retr}_p^{-1}c ) \bigr),$$$
where $c$ is the mid point between $q$ and $d=\exp_pX$.
This method employs the internal function schilds_ladder(M, p, d, q) that avoids leaving the manifold.
The name stems from the image of this parallelogram in a repeated application yielding the image of a ladder. The approximation was proposed in [EhlersPiraniSchild1972].
Constructor
SchildsLadderTransport(
retraction = ExponentialRetraction(),
inverse_retraction = LogarithmicInverseRetraction(),
)
Construct the classical Schilds ladder that employs exp and log, i.e. as proposed in [EhlersPiraniSchild1972]. For an even cheaper transport these inner operations can be changed to an AbstractRetractionMethod retraction and an AbstractInverseRetractionMethod inverse_retraction, respectively.
ManifoldsBase.default_vector_transport_methodMethod
default_vector_transport_method(M::AbstractManifold)
The AbstractVectorTransportMethod that is used when calling vector_transport_along, vector_transport_to, or vector_transport_direction without specifying the vector transport method. By default, this is DifferentiatedRetractionVectorTransport(default_retraction_method(M)).
ManifoldsBase.pole_ladderFunction
pole_ladder(
M,
p,
d,
q,
c = mid_point(M, p, q);
retraction=default_retraction_method(M),
inverse_retraction=default_inverse_retraction_method(M)
)
Compute an inner step of the pole ladder, which can be used as a vector_transport_to. Let $c = \gamma_{p,q}(\frac{1}{2})$ be the mid point between p and q, then the pole ladder is given by
$$$\operatorname{Pl}(p,d,q) = \operatorname{retr}_d (2\operatorname{retr}_d^{-1}c)$$$
Where the classical pole ladder employs $\operatorname{retr}_d=\exp_d$ and $\operatorname{retr}_d^{-1}=\log_d$ but for an even cheaper transport these can be set to different AbstractRetractionMethod and AbstractInverseRetractionMethod.
When you have $X=\log_pd$ and $Y = -\log_q \operatorname{Pl}(p,d,q)$, you will obtain the PoleLadderTransport. When performing multiple steps, this method avoids the switching to the tangent space. Keep in mind that after $n$ successive steps the tangent vector reads $Y_n = (-1)^n\log_q \operatorname{Pl}(p_{n-1},d_{n-1},p_n)$.
It is cheaper to evaluate than schilds_ladder: if you want to form multiple ladder steps between p and q but with different d, only one geodesic evaluation is needed for each, since the center c can be reused.
ManifoldsBase.pole_ladder!Function
pole_ladder!(
M,
pl,
p,
d,
q,
c = mid_point(M, p, q),
X = allocate_result_type(M, log, d, c);
retraction = default_retraction_method(M),
inverse_retraction = default_inverse_retraction_method(M),
)
Compute the pole_ladder, i.e. the result is saved in pl. X is used for storing intermediate inverse retraction.
ManifoldsBase.schilds_ladderFunction
schilds_ladder(
M,
p,
d,
q,
c = mid_point(M, q, d);
retraction = default_retraction_method(M),
inverse_retraction = default_inverse_retraction_method(M),
)
Perform an inner step of schilds ladder, which can be used as a vector_transport_to, see SchildsLadderTransport. Let $c = \gamma_{q,d}(\frac{1}{2})$ denote the mid point on the shortest geodesic connecting $q$ and the point $d$. Then Schild's ladder reads as
$$$\operatorname{Sl}(p,d,q) = \operatorname{retr}_p( 2\operatorname{retr}_p^{-1} c)$$$
Where the classical Schilds ladder employs $\operatorname{retr}_d=\exp_d$ and $\operatorname{retr}_d^{-1}=\log_d$ but for an even cheaper transport these can be set to different AbstractRetractionMethod and AbstractInverseRetractionMethod.
In consistency with pole_ladder you can change the way the mid point is computed using the optional parameter c, but note that here it's the mid point between q and d.
When you have $X=\log_pd$ and $Y = \log_q \operatorname{Sl}(p,d,q)$, you will obtain the SchildsLadderTransport. Then the approximation to the transported vector is given by $\log_q\operatorname{Sl}(p,d,q)$.
When performing multiple steps, this method avoids the switching to the tangent space. Hence after $n$ successive steps the tangent vector reads $Y_n = \log_q \operatorname{Sl}(p_{n-1},d_{n-1},p_n)$.
ManifoldsBase.schilds_ladder!Function
schilds_ladder!(
M,
sl,
p,
d,
q,
c = mid_point(M, q, d),
X = allocate_result_type(M, log, d, c);
retraction = default_retraction_method(M),
inverse_retraction = default_inverse_retraction_method(M),
)
Compute schilds_ladder and return the value in the parameter sl. If the required mid point c was computed before, it can be passed using c, and the allocation of new memory can be avoided by providing a tangent vector X for the interim result.
ManifoldsBase.vector_transport_along!Method
function vector_transport_along!(
M::AbstractManifold,
Y,
p,
X,
c::AbstractVector,
)
Compute the vector transport along a discretized curve using PoleLadderTransport successively along the sampled curve. This method avoids additional allocations as well as inner exp/log evaluations by performing all ladder steps on the manifold and only computing one tangent vector in the end.
ManifoldsBase.vector_transport_along!Method
vector_transport_along!(
M::AbstractManifold,
Y,
p,
X,
c::AbstractVector,
)
Compute the vector transport along a discretized curve using SchildsLadderTransport successively along the sampled curve. This method avoids additional allocations as well as inner exp/log evaluations by performing all ladder steps on the manifold and only computing one tangent vector in the end.
ManifoldsBase.vector_transport_along!Method
vector_transport_along!(M::AbstractManifold, Y, p, X, c)
vector_transport_along!(M::AbstractManifold, Y, p, X, c, method::AbstractVectorTransportMethod)
Transport a vector X from the tangent space at a point p on the AbstractManifold M along the curve represented by c using the method, which defaults to default_vector_transport_method(M). The result is saved to Y.
ManifoldsBase.vector_transport_alongMethod
vector_transport_along(M::AbstractManifold, p, X, c)
vector_transport_along(M::AbstractManifold, p, X, c, method::AbstractVectorTransportMethod)
Transport a vector X from the tangent space at a point p on the AbstractManifold M along the curve represented by c using the method, which defaults to default_vector_transport_method(M).
ManifoldsBase.vector_transport_direction!Method
vector_transport_direction!(M::AbstractManifold, Y, p, X, d)
vector_transport_direction!(M::AbstractManifold, Y, p, X, d, method::AbstractVectorTransportMethod)
Transport a vector X from the tangent space at a point p on the AbstractManifold M in the direction indicated by the tangent vector d at p. By default, retract and vector_transport_to! are used with the method, which defaults to default_vector_transport_method(M). The result is saved to Y.
ManifoldsBase.vector_transport_directionMethod
vector_transport_direction(M::AbstractManifold, p, X, d)
vector_transport_direction(M::AbstractManifold, p, X, d, method::AbstractVectorTransportMethod)
Transport a vector X from the tangent space at a point p on the AbstractManifold M in the direction indicated by the tangent vector d at p. By default, retract and vector_transport_to! are used with the method, which defaults to default_vector_transport_method(M).
ManifoldsBase.vector_transport_to!Method
vector_transport_to!(M::AbstractManifold, Y, p, X, q, method::ProjectionTransport)
Transport a vector X from the tangent space at p on the AbstractManifold M by interpreting it as an element of the embedding and then projecting it onto the tangent space at q. This function needs to be separately implemented for each manifold because projection project may also change vector representation (if it's different than in the embedding) and it is assumed that the vector X already has the correct representation for M.
ManifoldsBase.vector_transport_to!Method
vector_transport_to!(M::AbstractManifold, Y, p, X, q)
vector_transport_to!(M::AbstractManifold, Y, p, X, q, method::AbstractVectorTransportMethod)
Transport a vector X from the tangent space at a point p on the AbstractManifold M along the shortest_geodesic to the tangent space at another point q. By default, the AbstractVectorTransportMethod method is default_vector_transport_method(M). The result is saved to Y.
ManifoldsBase.vector_transport_toMethod
vector_transport_to(M::AbstractManifold, p, X, q)
vector_transport_to(M::AbstractManifold, p, X, q, method::AbstractVectorTransportMethod)
Transport a vector X from the tangent space at a point p on the AbstractManifold M along the shortest_geodesic to the tangent space at another point q. By default, the AbstractVectorTransportMethod method is default_vector_transport_method(M).
## A Decorator for manifolds
A decorator manifold extends the functionality of an AbstractManifold in a semi-transparent way. It internally stores the AbstractManifold it extends and, by default, for functions defined in ManifoldsBase it acts transparently in the sense that it passes all functions through to the base except those that it actually affects. For example, because the ValidationManifold affects nearly all functions, it overwrites nearly all functions, except a few like manifold_dimension. On the other hand, the MetricManifold only affects functions that involve metrics, especially exp and log, but not manifold_dimension. Contrary to the previous decorator, the MetricManifold does not overwrite functions. The decorator sets functions like exp and log to be implemented anew, i.e. they are required to be implemented when specifying a new metric. An exception is not issued if a metric is additionally set to be the default metric (see is_default_metric), since this makes all functions act transparently. This last case assumes that the newly specified metric type is actually the one already implemented on a manifold initially.
By default, i.e. for a plain new decorator, all functions are transparent, i.e. passed down to the manifold the AbstractDecoratorManifold decorates. To implement a method for a decorator that behaves differently from the method of the same function for the internal manifold, two steps are required. Let's assume the function is called f(M, arg1, arg2), and our decorator manifold DM of type OurDecoratorManifold decorates M. Then
1. set decorator_transparent_dispatch(f, M::OurDecoratorManifold, args...) = Val(:intransparent)
2. implement f(DM::OurDecoratorManifold, arg1, arg2)
This makes it possible to extend a manifold or all manifolds with a feature or replace a feature of the original manifold.
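As a sketch of these two steps, assuming the hypothetical decorator type OurDecoratorManifold from the text has already been defined and should get its own behaviour for exp!, one would write something along these lines (the delegating body is only a placeholder):
using ManifoldsBase

# 1. declare exp! as :intransparent for this decorator ...
ManifoldsBase.decorator_transparent_dispatch(
    ::typeof(exp!), ::OurDecoratorManifold, args...,
) = Val(:intransparent)

# 2. ... and implement exp! for the decorator itself
function ManifoldsBase.exp!(DM::OurDecoratorManifold, q, p, X)
    # decorator-specific behaviour would go here; as a placeholder, delegate
    return exp!(decorated_manifold(DM), q, p, X)
end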
The MetricManifold is the best example of the second case, since the default metric indicates for which metric the manifold was originally implemented, such that those functions are just passed through. This can best be seen in the SymmetricPositiveDefinite manifold with its LinearAffineMetric.
A final technical note – if several manifolds have similar transparency rules concerning functions from the interface, the last parameter T of the AbstractDecoratorManifold{𝔽,T<:AbstractDecoratorType} can be used to dispatch on different transparency schemes.
ManifoldsBase.AbstractDecoratorManifoldType
AbstractDecoratorManifold{𝔽,T<:AbstractDecoratorType} <: AbstractManifold{𝔽}
An AbstractDecoratorManifold indicates that to some extent a manifold subtype decorates another AbstractManifold in the sense that it either
• it extends the functionality of a manifold with further features
• it defines a new manifold that internally uses functions from the decorated manifold
with the main intent that several or most functions of AbstractManifold are transparently passed through to the manifold that is decorated. This way a function implemented for a decorator acts transparently on all other decorators, i.e. they just pass it through. If the decorator the function is implemented for is not among the decorators, an error is issued. By default all base manifold functions, for example exp and log, are transparent for all decorators.
Transparency of functions with respect to decorators can be specified using the macros @decorator_transparent_fallback, @decorator_transparent_function and @decorator_transparent_signature.
There are currently three modes given a new AbstractDecoratorManifold M
• :intransparent – this function has to be implemented for the new manifold M
• :transparent – this function is transparent, in the sense that the function is invoked on the decorated M.manifold. This is the default, when introducing a function or signature.
• :parent specifies that (unless implemented) for this function, the classical inheritance is issued, i.e. the function is invoked on M's supertype.
ManifoldsBase.AbstractDecoratorTypeType
AbstractDecoratorType
Decorator types can be used to specify a basic transparency for an AbstractDecoratorManifold. This can be seen as an initial (rough) transparency pattern to start a type with.
Note that for a function f and its mutating variant f!
• The function f is set to :parent to first invoke allocation and the call of f!
• The mutating function f! is set to :transparent
ManifoldsBase.@decorator_transparent_fallbackMacro
@decorator_transparent_fallback(ex)
@decorator_transparent_fallback(fallback_case = :intransparent, ex)
This macro introduces an additional implementation for a certain additional case. This can especially be used if, for an already transparent function and an abstract intermediate type, a change in the default is required. For implementing a concrete type, neither this nor any other trick is necessary. One just implements the function as before. Note that a decorator that is_default_decorator still dispatches to the transparent case.
• :transparent states, that the function is transparently passed on to the manifold that is decorated by the AbstractDecoratorManifold M, which is determined using the function decorated_manifold.
• :intransparent states that an implementation for this decorator is required, and if none of the types provides one, an error is issued. Since this macro provides such an implementation, this is the default.
• :parent states, that this function passes on to the supertype instead of to the decorated manifold.
Inline definitions are not supported. The function signature however may contain keyword arguments and a where clause. It does not allow for parameters with default values.
Examples
@decorator_transparent_fallback function log!(M::AbstractGroupManifold, X, p, q)
log!(decorated_manifold(M), X, p, q)
end
@decorator_transparent_fallback :transparent function log!(M::AbstractGroupManifold, X, p, q)
log!(decorated_manifold(M), X, p, q)
end
ManifoldsBase.@decorator_transparent_functionMacro
@decorator_transparent_function(ex)
@decorator_transparent_function(fallback_case = :intransparent, ex)
Introduce the function specified by ex to act transparently with respect to AbstractDecoratorManifolds. This introduces the possibility to modify the kind of transparency the implementation is done for via the optional first argument, the Symbol fallback_case. This macro can be used to define a function and introduce it as transparent to other decorators. Note that a decorator that is_default_decorator still dispatches to the transparent case.
The cases of transparency are
• :transparent states, that the function is transparently passed on to the manifold that is decorated by the AbstractDecoratorManifold M, which is determined using the function decorated_manifold.
• :intransparent states that an implementation for this decorator is required, and if none of the types provides one, an error is issued. Since this macro provides such an implementation, this is the default.
• :parent states, that this function passes on to the supertype instead of to the decorated manifold. Passing is performed using the invoke function where the type of manifold is replaced by its supertype.
Inline definitions are not yet covered – the function signature, however, may contain keyword arguments and a where clause.
Examples
@decorator_transparent_function log!(M::AbstractDecoratorManifold, X, p, q)
log!(decorated_manifold(M), X, p, q)
end
@decorator_transparent_function :parent log!(M::AbstractDecoratorManifold, X, p, q)
log!(decorated_manifold(M), X, p, q)
end
ManifoldsBase.@decorator_transparent_signatureMacro
@decorator_transparent_signature(ex)
Introduces a given function to be transparent with respect to all decorators. The function is addressed by its signature in ex.
Supports standard and keyword arguments as well as where clauses. Doesn't support parameters with default values. It introduces a dispatch on several transparency modes.
The cases of transparency are
• :transparent states, that the function is transparently passed on to the manifold that is decorated by the AbstractDecoratorManifold M, which is determined using the function decorated_manifold. This is the default.
• :intransparent states that an implementation for this decorator is required, and if none of the types provides one, an error is issued.
• :parent states, that this function passes on to the supertype instead of to the decorated manifold.
Inline definitions are not supported. The function signature however may contain keyword arguments and a where clause.
The dispatch kind can later still be set to something different, see decorator_transparent_dispatch
Examples:
@decorator_transparent_signature log!(M::AbstractDecoratorManifold, X, p, q)
@decorator_transparent_signature log!(M::TD, X, p, q) where {TD<:AbstractDecoratorManifold}
@decorator_transparent_signature isapprox(M::AbstractDecoratorManifold, p, q; kwargs...)
ManifoldsBase.decorated_manifoldMethod
decorated_manifold(M::AbstractDecoratorManifold)
Return the manifold decorated by the decorator M. Defaults to M.manifold.
ManifoldsBase.decorator_transparent_dispatchMethod
decorator_transparent_dispatch(f, M::AbstractManifold, args...) -> Val
Given an AbstractManifold M and a function f(M,args...), indicate whether the function is Val(:transparent) or Val(:intransparent) for the (decorated) AbstractManifold M. Another possibility is that for M and given args... the function f should invoke M's Val(:parent) implementation, see @decorator_transparent_function for details.
ManifoldsBase.is_decorator_transparentMethod
is_decorator_transparent(f, M::AbstractManifold, args...) -> Bool
Given an AbstractManifold M and a function f(M, args...), indicate whether an AbstractDecoratorManifold acts transparently for f. This means it just passes f down to the internally stored manifold. Transparency is only defined for decorator manifolds, and by default all decorators are transparent. A function that is affected by the decorator indicates this by returning false. To change this behaviour, see decorator_transparent_dispatch.
If a decorator manifold is not in general transparent, it might still pass down for the case that a decorator is the default decorator, see is_default_decorator.
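For illustration, a rough query sketch (assuming M is some AbstractDecoratorManifold and p, X are valid arguments for exp; the exact return values depend on the decorator):
decorator_transparent_dispatch(exp, M, p, X)  # e.g. Val(:transparent)
is_decorator_transparent(exp, M, p, X)        # true when exp passes through to the decorated manifold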
ManifoldsBase.is_default_decoratorMethod
is_default_decorator(M) -> Bool
For any manifold that is a subtype of AbstractDecoratorManifold, this function indicates whether a certain manifold M acts as a default decorator.
This yields that all functions are passed through to the decorated AbstractManifold if M is indicated as default. This overwrites all is_decorator_transparent values.
This yields the following advantage: For a manifold one usually implicitly assumes for example a metric. To avoid reimplementation of this metric when introducing a second metric, the first metric can be set to be the default, i.e. its implementation is already given by the undecorated case.
Value returned by this function is determined by default_decorator_dispatch, which returns a Val-wrapped boolean for type stability of certain functions.
## Abstract Power Manifold
ManifoldsBase.InversePowerRetractionType
InversePowerRetraction{TR<:AbstractInverseRetractionMethod} <: AbstractInverseRetractionMethod
The InversePowerRetraction avoids ambiguities between dispatching on the AbstractPowerManifold and dispatching on the AbstractInverseRetractionMethod and encapsulates this. This container should only be used in rare cases outside of this package. Usually a subtype of the AbstractPowerManifold should define a way how to treat its AbstractRetractionMethods.
Constructor
InversePowerRetraction(inverse_retractions::AbstractInverseRetractionMethod...)
ManifoldsBase.NestedReplacingPowerRepresentationType
NestedReplacingPowerRepresentation
Representation of points and tangent vectors on a power manifold using arrays of size equal to TSize of a PowerManifold. Each element of such array stores a single point or tangent vector.
For modifying operations, each element of the outer array is replaced using non-modifying operations, differently than for NestedPowerRepresentation.
ManifoldsBase.PowerManifoldType
PowerManifold{𝔽,TM<:AbstractManifold,TSize<:Tuple,TPR<:AbstractPowerRepresentation} <: AbstractPowerManifold{𝔽,TM}
The power manifold $\mathcal M^{n_1 × n_2 × … × n_d}$ with power geometry. TSize statically defines the number of elements along each axis.
For example, a manifold-valued time series would be represented by a power manifold with $d$ equal to 1 and $n_1$ equal to the number of samples. A manifold-valued image (for example in diffusion tensor imaging) would be represented by a two-axis power manifold ($d=2$) with $n_1$ and $n_2$ equal to width and height of the image.
While the size of the manifold is static, points on the power manifold would not be represented by statically-sized arrays.
Constructor
PowerManifold(M::PowerManifold, N_1, N_2, ..., N_d)
PowerManifold(M::AbstractManifold, NestedPowerRepresentation(), N_1, N_2, ..., N_d)
M^(N_1, N_2, ..., N_d)
Generate the power manifold $M^{N_1 × N_2 × … × N_d}$. By default, a PowerManifold is expanded further, i.e. for M=PowerManifold(N,3) PowerManifold(M,2) is equivalent to PowerManifold(N,3,2). Points are then 3×2 matrices of points on N. Providing a NestedPowerRepresentation as the second argument to the constructor can be used to nest manifolds, i.e. PowerManifold(M,NestedPowerRepresentation(),2) represents vectors of length 2 whose elements are vectors of length 3 of points on N in a nested array representation.
Since there is no default AbstractPowerRepresentation within this interface, the ^ operator is only available for PowerManifolds and concatenates dimensions.
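A minimal construction sketch (an assumption based on the constructors above, using the non-exported DefaultManifold described further below on this page):
using ManifoldsBase
M = ManifoldsBase.DefaultManifold(3)                  # base manifold
N = PowerManifold(M, NestedPowerRepresentation(), 5)  # five copies in nested representation
manifold_dimension(N)                                 # 5 * 3 == 15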
ManifoldsBase.PowerRetractionType
PowerRetraction{TR<:AbstractRetractionMethod} <: AbstractRetractionMethod
The PowerRetraction avoids ambiguities between dispatching on the AbstractPowerManifold and dispatching on the AbstractRetractionMethod and encapsulates this. This container should only be used in rare cases outside of this package. Usually a subtype of the AbstractPowerManifold should define a way how to treat its AbstractRetractionMethods.
Constructor
PowerRetraction(retraction::AbstractRetractionMethod)
ManifoldsBase.PowerVectorTransportType
PowerVectorTransport{TR<:AbstractVectorTransportMethod} <:
AbstractVectorTransportMethod
The PowerVectorTransport avoids ambiguities between dispatching on the AbstractPowerManifold and dispatching on the AbstractVectorTransportMethod and encapsulates this. This container should only be used in rare cases outside of this package. Usually a subtype of the AbstractPowerManifold should define a way how to treat its AbstractVectorTransportMethods.
Constructor
PowerVectorTransport(method::AbstractVectorTransportMethod)
Base.copyto!Method
copyto!(M::PowerManifoldNested, Y, p, X)
Copy the values elementwise, i.e. call copyto!(M.manifold, B, a, A) for all elements A, a and B of X, p, and Y, respectively.
Base.copyto!Method
copyto!(M::PowerManifoldNested, q, p)
Copy the values elementwise, i.e. call copyto!(M.manifold, b, a) for all elements a and b of p and q, respectively.
Base.expMethod
exp(M::AbstractPowerManifold, p, X)
Compute the exponential map from p in direction X on the AbstractPowerManifold M, which can be computed using the base manifold's exponential map elementwise.
Base.getindexMethod
getindex(p, M::AbstractPowerManifold, i::Union{Integer,Colon,AbstractVector}...)
p[M::AbstractPowerManifold, i...]
Access the element(s) at index [i...] of a point p on an AbstractPowerManifold M by linear or multidimensional indexing. See also Array Indexing in Julia.
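Continuing the nested power manifold sketch from above (hypothetical data; the point layout depends on the chosen representation):
p = [randn(3) for _ in 1:5]  # a point on N: one base-manifold point per factor
p[N, 2]                      # the second component point
p[N, :]                      # all component points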
Base.logMethod
log(M::AbstractPowerManifold, p, q)
Compute the logarithmic map from p to q on the AbstractPowerManifold M, which can be computed using the base manifold's logarithmic map elementwise.
Base.setindex!Method
setindex!(q, p, M::AbstractPowerManifold, i::Union{Integer,Colon,AbstractVector}...)
q[M::AbstractPowerManifold, i...] = p
Set the element(s) at index [i...] of a point q on an AbstractPowerManifold M by linear or multidimensional indexing to p. See also Array Indexing in Julia.
Base.viewMethod
view(p, M::AbstractPowerManifold, i::Union{Integer,Colon,AbstractVector}...)
Get the view of the element(s) at index [i...] of a point p on an AbstractPowerManifold M by linear or multidimensional indexing.
ManifoldsBase.check_pointMethod
check_point(M::AbstractPowerManifold, p; kwargs...)
Check whether p is a valid point on an AbstractPowerManifold M, i.e. each element of p has to be a valid point on the base manifold. If p is not a point on M, a CompositeManifoldError consisting of the error messages of all components for which the tests fail is returned.
The tolerance for the last test can be set using the kwargs....
ManifoldsBase.check_vectorMethod
check_vector(M::AbstractPowerManifold, p, X; kwargs... )
Check whether X is a tangent vector to p on the AbstractPowerManifold M, i.e. after check_point(M, p), all projections to base manifolds must be respective tangent vectors. If X is not a tangent vector to p on M, a CompositeManifoldError consisting of the error messages of all components for which the tests fail is returned.
The tolerance for the last test can be set using the kwargs....
ManifoldsBase.innerMethod
inner(M::AbstractPowerManifold, p, X, Y)
Compute the inner product of X and Y from the tangent space at p on an AbstractPowerManifold M, i.e. for each arrays entry the tangent vector entries from X and Y are in the tangent space of the corresponding element from p. The inner product is then the sum of the elementwise inner products.
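In symbols, with components indexed by the array index set $I$ of the power manifold, this reads
$$\langle X, Y\rangle_p = \sum_{i\in I} \langle X_i, Y_i\rangle_{p_i},$$
where each summand is the inner product on the base manifold.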
ManifoldsBase.inverse_retractMethod
inverse_retract(M::AbstractPowerManifold, p, q, m::InversePowerRetraction)
Compute the inverse retraction from p with respect to q on an AbstractPowerManifold M using an InversePowerRetraction, which by default encapsulates an inverse retraction of the base manifold. This method is then performed elementwise, so the encapsulated inverse retraction method has to be one that is available on the base AbstractManifold.
ManifoldsBase.manifold_dimensionMethod
manifold_dimension(M::PowerManifold)
Returns the manifold dimension of the PowerManifold $\mathcal N = (\mathcal M)^{n_1,…,n_d}$, i.e. with $n=(n_1,…,n_d)$ the array size of the power manifold and $d_{\mathcal M}$ the dimension of the base manifold $\mathcal M$, the manifold is of dimension
$$\dim(\mathcal N) = \dim(\mathcal M)\prod_{i=1}^d n_i = n_1 n_2 \cdots n_d \dim(\mathcal M).$$
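For example, for a base manifold of dimension $3$ and array size $n=(2,4)$ this gives $\dim(\mathcal N) = 3 \cdot 2 \cdot 4 = 24$.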
ManifoldsBase.retractMethod
retract(M::AbstractPowerManifold, p, X, method::PowerRetraction)
Compute the retraction from p with tangent vector X on an AbstractPowerManifold M using a PowerRetraction, which by default encapsulates a retraction of the base manifold. Then this method is performed elementwise, so the encapsulated retraction method has to be one that is available on the base AbstractManifold.
## ValidationManifold
ValidationManifold is a simple decorator using the AbstractDecoratorManifold that “decorates” a manifold with tests that all involved points and vectors are valid for the wrapped manifold. For example, involved input and output parameters are checked before and after running a function, respectively. This is done by calling is_point or is_vector whenever applicable.
ManifoldsBase.ValidationManifoldType
ValidationManifold{𝔽,M<:AbstractManifold{𝔽}} <: AbstractDecoratorManifold{𝔽}
A manifold to encapsulate manifolds working on array representations of AbstractManifoldPoints and TVectors in a transparent way, such that for these manifolds it's not necessary to introduce explicit types for the points and tangent vectors, but they are encapsulated/stripped automatically when needed.
This manifold is a decorator for a manifold, i.e. it decorates an AbstractManifold M with typed points, vectors, and covectors.
## EmbeddedManifold
Some manifolds can easily be defined by using a certain embedding. For example the Sphere(n) is embedded in Euclidean(n+1). Similar to the metric and MetricManifold, an embedding is often implicitly assumed. We introduce the embedded manifolds hence as an AbstractDecoratorManifold.
This decorator enables using such an embedding in a transparent way. Different types of embeddings can be distinguished using the AbstractEmbeddingType, which is an AbstractDecoratorType.
### Isometric Embeddings
For isometric embeddings the type AbstractIsometricEmbeddingType can be used to avoid reimplementing the metric. See Sphere or Hyperbolic for example. Here, the exponential map, the logarithmic map, the retraction and its inverse are set to :intransparent, i.e. they have to be implemented.
Furthermore, the TransparentIsometricEmbedding type even states that the exponential and logarithmic maps as well as retractions and vector transports of the embedding can be used for the embedded manifold as well. See SymmetricMatrices for an example.
In both cases of course check_point and check_vector have to be implemented.
### Further Embeddings
A first embedding can also just be given by implementing embed! and project! for a manifold. This is considered to be the most usual or default embedding.
If you have two different embeddings for your manifold, a second one can be specified using the EmbeddedManifold, a type that “couples” two manifolds, more precisely a manifold and its embedding, to define embedding and projection functions between these two manifolds.
### Types
ManifoldsBase.AbstractEmbeddedManifoldType
AbstractEmbeddedManifold{𝔽,T<:AbstractEmbeddingType} <: AbstractDecoratorManifold{𝔽}
This abstract type indicates that a concrete subtype is an embedded manifold with the additional property that its points are given in the embedding. This also means that the default implementation of embed is just the identity, since the points are already stored in the form suitable for the specified embedding. This also holds true for tangent vectors.
Furthermore, depending on the AbstractEmbeddingType, different methods are transparently used from the embedding, for example the inner product or even the distance function. Specifying such an embedding type transparently passes the computation onwards to the embedding (note again that no embed is required) and hence avoids reimplementing these methods in the manifold that is embedded.
This should be used for example for check_point or check_vector, which should first invoke the test of the embedding and then test further constraints the representation in the embedding has for these points to be valid.
Technically this is realised by making the AbstractEmbeddedManifold a decorator for the AbstractManifolds that are its subtypes.
ManifoldsBase.EmbeddedManifoldType
EmbeddedManifold{𝔽, MT <: AbstractManifold, NT <: AbstractManifold} <: AbstractDecoratorManifold{𝔽}
A type to represent an explicit embedding of a AbstractManifold M of type MT embedded into a manifold N of type NT.
Note
This type is not required if a manifold M is to be embedded in one specific manifold N. One can then just implement embed! and project!. Only for a second –maybe considered non-default– embedding, this type should be considered in order to dispatch on different embed and project methods for different embeddings N.
Fields
• manifold the manifold that is an embedded manifold
• embedding a second manifold, the first one is embedded into
Constructor
EmbeddedManifold(M, N)
Generate the EmbeddedManifold of the AbstractManifold M into the AbstractManifold N.
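A hedged sketch (Sphere and Euclidean are provided by Manifolds.jl, not by this interface package):
using Manifolds
M = Sphere(2)               # the unit sphere, points stored in ℝ³
N = Euclidean(3)            # the ambient space
E = EmbeddedManifold(M, N)  # couples M with this specific embedding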
ManifoldsBase.TransparentIsometricEmbeddingType
TransparentIsometricEmbedding <: AbstractIsometricEmbeddingType
Specify that an embedding is the default isometric embedding. This even inherits logarithmic and exponential map as well as retraction and inverse retractions from the embedding.
For an example, see SymmetricMatrices which are isometrically embedded in the Euclidean space of matrices but also inherit exponential and logarithmic maps.
### Functions
ManifoldsBase.base_manifoldMethod
base_manifold(M::AbstractEmbeddedManifold, d::Val{N} = Val(-1))
Return the base manifold of M that is enhanced with its embedding. While functions like inner might be overwritten to use the (decorated) manifold representing the embedding, the base manifold is the manifold itself in the sense that determining, e.g., the default metric does not fall back to checking with the embedding but with the manifold itself. For this abstract case, just M is returned.
ManifoldsBase.base_manifoldMethod
base_manifold(M::EmbeddedManifold, d::Val{N} = Val(-1))
Return the base manifold of M that is enhanced with its embedding. For this specific type the internally stored enhanced manifold M.manifold is returned.
ManifoldsBase.check_vectorMethod
check_vector(M::AbstractEmbeddedManifold, p, X; kwargs...)
Check that embed(M, p, X) is a valid tangent to embed(M, p).
## DefaultManifold
DefaultManifold is a simplified version of Euclidean and demonstrates a basic interface implementation. It can be used to perform simple tests. Since Euclidean is available when using Manifolds.jl, the DefaultManifold itself is not exported.
ManifoldsBase.DefaultManifoldType
DefaultManifold <: AbstractManifold
This default manifold illustrates the main features of the interface and provides a skeleton to build one's own manifold. It is a simplified/shortened variant of Euclidean from Manifolds.jl.
This manifold further illustrates how to type your manifold points and tangent vectors. Note that the interface does not require this, but it might be handy in debugging and educative situations to verify correctness of the involved variables.
## Error Messages
Especially to collect and display errors on AbstractPowerManifolds, the following component and collection error messages are available.
ManifoldsBase.ComponentManifoldErrorType
ComponentManifoldError{I,E} <: Exception
Store an error that occurred in a component, where the additional index is stored.
Fields
• index – the index where the error occurred
• error – the error that occurred.
# Real-analytic variant of theorem 4.2.5 of Duistermaat's “FIO”, 1996
Theorem 4.2.5 of Duistermaat's "Fourier Integral Operators", 1996, states:
Let $A \in I^m(X,Y,C)$ be an elliptic Fourier Integral Operator of order $m$, associated to a bijective canonical homogeneous transformation $C$ from an open cone $\Gamma \subset T^\ast Y \setminus 0$ into $T^\ast X \setminus 0$. Then for any closed (in $T^\ast Y \setminus 0$) cone $\Gamma_0 \subset \Gamma$ such that $C(\Gamma_0)$ is closed in $T^\ast X \setminus 0$ one can find a properly supported Fourier Integral Operator $B \in I^{-m}(Y,X,C^{-1})$ such that for any $u \in \mathscr E'(Y)$, for any $v \in \mathscr E'(X)$ the following relations are valid: $$WF(BAu - u) \cap \Gamma_0 = \varnothing,\quad WF(ABv - v) \cap C(\Gamma_0) = \varnothing.$$
The proof of the theorem involves multiplications by smooth functions with compact supports. This is why I can't directly generalize the proof of the above-mentioned theorem to the real-analytic category (the only real-analytic function with compact support is zero). On the other hand, I think that the theorem is still valid for real-analytic Fourier Integral Operators and for analytic wavefronts but requires some more subtle proof. Please tell me if you have encountered the mentioned theorem (or some variant of it) in the real-analytic case in some paper.
# How to give a Gantt chart title table like formatting?
I am using the package pgfgantt to create a table displaying people serving on a committee over a period of time. (Probably not what it was intended for, but it produced a nice output for what I wanted.) The one issue I have run into is that I want to make the headings/title look like they would in my other tables. What I mean is that the "main" \gantttitle would have a \toprule etc. and the bottom of the table would have a \bottomrule. I have tried entering this type of code into the ganttchart but it won't compile.
I have an MWE that has the Gantt chart as I have it set up (but with some irrelevant fun data in it) and a smaller table to show the format I hope to be able to put on the chart.
\documentclass[a4paper]{memoir}
\usepackage{libertine}
\usepackage{pgfgantt}
\usepackage[graphicx]{realboxes}
\begin{document}
\begin{table}
\centering
\caption{Members of the Intergalactic committee}
\label{IGC-Mem}
\resizebox{\textwidth}{!}{
\begin{ganttchart}[y unit title=0.65cm,y unit chart=0.80cm,vgrid={draw=none, draw=none, dotted},title height=1.00,bar/.append style={fill=gray!30},bar height=0.50,canvas/.style=%
{shape=rectangle, draw=black, dotted}]{1}{21}
\gantttitle{Members of IGC by delegation}{21}\\
\gantttitle{2525}{3}
\gantttitle{2526}{3}
\gantttitle{2527}{3}
\gantttitle{2528}{3}
\gantttitle{2529}{3}
\gantttitle{2530}{3}
\gantttitle{2531}{3}\\
%Member Details
\ganttbar{The Federation}{1}{1}
\ganttbar[inline]{Capt. Kirk}{1}{6}
\ganttbar[inline]{Mr.~ Spock}{7}{21} \\
\ganttbar{The Empire}{1}{1}
\ganttbar{The Republic}{21}{21}
\ganttbar[inline]{Princess Leia}{8}{17}
\ganttbar[inline]{Han Solo}{18}{21}\\
\ganttbar{Earth}{1}{1}
\ganttbar[inline]{Buck Rodgers}{1}{21}\\
\ganttbar{Red Dwarf}{1}{1}
\ganttbar[inline]{D. Lister}{1}{3}
\ganttbar[inline]{A. Rimmer}{4}{9}
\ganttbar[inline]{Kryten}{10}{21}\\
\ganttbar{Romulan}{1}{1}
\ganttbar[inline]{Vreenak}{1}{21}\\
\ganttbar{Imaginationland}{21}{21}
\ganttbar[inline]{Leopold "Butters" Stotch}{12}{21}
\end{ganttchart}
}
\hrulefill
\end{table}
\begin{table}
\centering
\caption{Members of the Intergalactic committee}
\label{IGC-Mem1}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}llllllll@{}}
\toprule
\multicolumn{8}{c}{\textbf{Members of the IGC by delegation}} \\ \midrule
& \multicolumn{1}{c}{{2525}} & \multicolumn{1}{c}{{2526}} & \multicolumn{1}{c}{{2527}} & \multicolumn{1}{c}{{2528}} & \multicolumn{1}{c}{{2529}} & \multicolumn{1}{c}{{2530}} & \multicolumn{1}{c}{{2531}} \\
\multicolumn{1}{l|}{The Federation} & \multicolumn{2}{c|}{Capt. Kirk} & \multicolumn{5}{c|}{Mr. Spock} \\ \cline{2-8}
\multicolumn{1}{l|}{The Empire} & \multicolumn{5}{c|}{Darth Vader} \\ \cline{2-8}
\multicolumn{1}{l|}{Earth} & \multicolumn{7}{c|}{Buck Rodgers} \\ \cline{2-8} \bottomrule
\end{tabular}
}
\end{table}
\end{document}
I tried editing the Canvas in the ganttchart settings but that has only got me as far as what is in the MWE above. If you need any more info please let me know in the comments. Effectively what I would like is the main heading in the Gantt Chart to look like a table heading.
• Probably a naïve question, but why not use a table if you want it to look like one? – cfr Aug 15 '15 at 21:12
• @cfr Prefer the look of the gantt chart body. Just wanted to see if I could get a \toprule etc. look in it to match other table headings. – gman Aug 15 '15 at 21:18
• Package tikz-timing does something similar. – Symbol 1 Aug 17 '15 at 6:01
Here's one possibility; you first box the Gantt chart and then place it inside a named TikZ \node; using \draw and the node anchors you can now easily place the rules. \heavyrulewidth for the line width gives the thickness of \toprule and \bottomrule, and \lightrulewidth gives the one from \midrule; I also set inner xsep to \tabcolsep to have the usual horizontal padding for the contents of a table (you can set it to 0pt if you want to).
The code:
\documentclass[a4paper]{memoir}
\usepackage{libertine}
\usepackage{pgfgantt}
\usepackage[graphicx]{realboxes}
\newsavebox\mybox
\begin{lrbox}{\mybox}
\begin{ganttchart}[
y unit title=0.65cm,
y unit chart=0.80cm,
vgrid={
draw=none,
draw=none,
dotted
},
title height=1.00,
bar/.append style={fill=gray!30},
bar height=0.50,
canvas/.style={
shape=rectangle,
draw=none
},
title/.style={draw=none}
]{1}{21}
\gantttitle{Members of IGC by delegation}{21}\\
\gantttitle{2525}{3}
\gantttitle{2526}{3}
\gantttitle{2527}{3}
\gantttitle{2528}{3}
\gantttitle{2529}{3}
\gantttitle{2530}{3}
\gantttitle{2531}{3}\\
%Member Details
\ganttbar{The Federation}{1}{1}
\ganttbar[inline]{Capt. Kirk}{1}{6}
\ganttbar[inline]{Mr.~ Spock}{7}{21} \\
\ganttbar{The Empire}{1}{1}
\ganttbar{The Republic}{21}{21}
\ganttbar[inline]{Princess Leia}{8}{17}
\ganttbar[inline]{Han Solo}{18}{21}\\
\ganttbar{Earth}{1}{1}
\ganttbar[inline]{Buck Rodgers}{1}{21}\\
\ganttbar{Red Dwarf}{1}{1}
\ganttbar[inline]{D. Lister}{1}{3}
\ganttbar[inline]{A. Rimmer}{4}{9}
\ganttbar[inline]{Kryten}{10}{21}\\
\ganttbar{Romulan}{1}{1}
\ganttbar[inline]{Vreenak}{1}{21}\\
\ganttbar{Imaginationland}{21}{21}
\ganttbar[inline]{Leopold "Butters" Stotch}{12}{21}
\end{ganttchart}
\end{lrbox}
\begin{document}
\begin{table}
\centering
\caption{Members of the Intergalactic committee}
\label{IGC-Mem}
\begin{tikzpicture}
\node[inner ysep=0pt,outer sep=0pt,inner xsep=\tabcolsep]
(gantt)
{\resizebox{\textwidth}{!}{\usebox\mybox}};
\draw[line width=\heavyrulewidth]
(gantt.north west) -- (gantt.north east);
\draw[line width=\lightrulewidth]
([yshift=-17pt]gantt.north west) -- ([yshift=-17pt]gantt.north east);
\draw[line width=\heavyrulewidth]
(gantt.south west) -- (gantt.south east);
\end{tikzpicture}
\end{table}
\end{document}
A variation on the same idea, but this time just the middle rule is \drawn:
\documentclass[a4paper]{memoir}
\usepackage{libertine}
\usepackage{pgfgantt}
\usepackage[graphicx]{realboxes}
\newsavebox\mybox
\begin{lrbox}{\mybox}
\begin{ganttchart}[
y unit title=0.65cm,
y unit chart=0.80cm,
vgrid={
draw=none,
draw=none,
dotted
},
title height=1.00,
bar/.append style={fill=gray!30},
bar height=0.50,
canvas/.style={
shape=rectangle,
draw=none
},
title/.style={draw=none}
]{1}{21}
\gantttitle{Members of IGC by delegation}{21}\\
\gantttitle{2525}{3}
\gantttitle{2526}{3}
\gantttitle{2527}{3}
\gantttitle{2528}{3}
\gantttitle{2529}{3}
\gantttitle{2530}{3}
\gantttitle{2531}{3}\\
%Member Details
\ganttbar{The Federation}{1}{1}
\ganttbar[inline]{Capt. Kirk}{1}{6}
\ganttbar[inline]{Mr.~ Spock}{7}{21} \\
\ganttbar{The Empire}{1}{1}
\ganttbar{The Republic}{21}{21}
\ganttbar[inline]{Princess Leia}{8}{17}
\ganttbar[inline]{Han Solo}{18}{21}\\
\ganttbar{Earth}{1}{1}
\ganttbar[inline]{Buck Rodgers}{1}{21}\\
\ganttbar{Red Dwarf}{1}{1}
\ganttbar[inline]{D. Lister}{1}{3}
\ganttbar[inline]{A. Rimmer}{4}{9}
\ganttbar[inline]{Kryten}{10}{21}\\
\ganttbar{Romulan}{1}{1}
\ganttbar[inline]{Vreenak}{1}{21}\\
\ganttbar{Imaginationland}{21}{21}
\ganttbar[inline]{Leopold "Butters" Stotch}{12}{21}
\end{ganttchart}
\end{lrbox}
\begin{document}
\begin{table}
\centering
\caption{Members of the Intergalactic committee}
\label{IGC-Mem}
\tikz{
\node[inner sep=0pt,outer sep=0pt] (gantt)
{\begin{tabular}{c}
\toprule
\resizebox{\textwidth}{!}{\usebox\mybox} \\
\bottomrule
\end{tabular}%
};
\draw[line width=\lightrulewidth]
([yshift=-19pt]gantt.north west) -- ([yshift=-19pt]gantt.north east);
}
\end{table}
\end{document}
• +1 looks great. Just showing my lack of knowledge :-) but why is it required to put the gantt table in the preamble? – gman Aug 17 '15 at 9:39
• @gman It's not required to put it in the preamble; the important thing is to box it and this can be done either in the preamble or in the document body; since pgfgantt internally uses a tikzpicture, what we are really doing is to nest tikzpictures and this is something that might produce undesired results (although perhaps not in such a simple case as this one), so it's a good precaution in any case to first box the Gantt table. – Gonzalo Medina Aug 17 '15 at 15:29
# How to find the vector $(\pm 1, \pm 1, …, \pm 1)$ closest to a given vector $(r_1,…,r_l)$? Is it an NP-hard problem? What algorithms are available?
Given: some vector $R=(r_1...r_l)$ - real numbers, and a set of distinct vectors with $+1$ or $-1$ coordinates $$\begin{array}{c} V_1=(c_{1,1} ... c_{1,l}),\\\ V_2=(c_{2,1} ... c_{2,l}),\\\ .....\\\ V_n=(c_{n,1} ... c_{n,l})\end{array}$$ each $c_{i,j}$ are $+1$ or $-1$. $n\lt 2^l$ is some number (if $n=2^l$ - problem is trivial.)
Problem: How to find the vector $V_i$ which is closest to $R$? (In the sense of Euclidean distance; suggestions on any other distances are also welcome.)
Of course, we can check all $V_i$ by brute force, but is there any way to reduce the brute force? The set of vectors $V_k$ is fixed once and forever, $R$ is coming every millisecond, and the algorithm should quickly "decode" $R$ to some $V_i$.
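For concreteness, the brute-force baseline can be sketched as follows (names are hypothetical; V is an n×l matrix whose rows are the $\pm 1$ vectors). Since every $V_i$ has the same norm $\sqrt l$, minimizing $\|R-V_i\|$ is the same as maximizing the dot product $\langle R, V_i\rangle$:
using LinearAlgebra
function closest_row(V, R)
    best, bestscore = 1, -Inf
    for i in axes(V, 1)
        s = dot(view(V, i, :), R)  # maximizing ⟨R, V_i⟩ minimizes ‖R − V_i‖ for fixed ‖V_i‖
        if s > bestscore
            best, bestscore = i, s
        end
    end
    return best
end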
Sub-problems: Is this problem NP-hard? (I.e. is it possible to have an algorithm polynomial in $\log(n)$? This is true for some special cases like the trivial $n=2^l$, but what about more general ones?)
Given some "hint" vector $V_k$ is possible to answer a question "is it the right answer or not" in some computationally simple way ?
Given some "hint" vector $V_k$ is possible to improve it in some way ?
PS
Does the distance function have only global minima, or also local minima, on the set of $V_i$?
More precisely one should speak about "$\epsilon$-local minimums" for some $\epsilon$.
I.e.
The set of vectors $V_i$ is a metric space (with the metric induced from the ambient Euclidean space). Let us say some function $f$ has an "$\epsilon$-local minimum" at some point $V_k$ of this set if $f(V_k)< f(V_i)$ for all $V_i$ in the $\epsilon$-neighborhood of $V_k$.
Consider a distance function from given vector $R=(r_1...r_l)$ to $V_i$.
What is the smallest $\epsilon$ for which any $\epsilon$-local minimum is global minimum ?
How it depends on input vector $R=(r_1...r_l)$ ?
-
What does closest mean? (It's not most close, by the way) – Thierry Zell Oct 30 '10 at 13:09
This a different kind of question than where the NP issue applies. You're asking for some method of preprocessing information about the $V_i$ that makes decoding quick per real number instance of $R_j$. In fact the method of brute force testing is already polynomial in the size of the input $\{ V_i\} \cup \{R_j\}$. – Bill Thurston Oct 30 '10 at 13:55
We cannot tell whether you want the $V_i$ which is nearest in terms of Euclidean distance or in terms of the angle between the vectors, or perhaps some third manner of measuring closeness. If it is Euclidean distance the length of the vector $R$ itself heavily influences this, a rough trichotomy given by $| R | < , = , > \sqrt l$ – Will Jagy Oct 30 '10 at 21:09
1) On R^n we consider the standard Euclidean distance. (But if you have an idea for any other distance it is also heartily welcome.) So closest in this sense, i.e. the vector v_i such that the distance ||R - v_i|| is the smallest one. 2) I want to consider polynomiality in log(size(V_i)). (e.g. l=20, n=2^10). – Alexander Chervov Oct 31 '10 at 6:18
Let the $R_i$ be $\pm 1$ vectors and the $V_i$ be a linear space over GF(2). This then becomes the problem of maximum likelihood decoding of binary linear codes, which is known to be an NP-hard problem. It thus seems likely (although it doesn't follow rigorously) that there's no good way of doing this better than brute force.
-
This problem is NP-hard. It is known (as it has been already mentioned) as the decoding of linear block codes. Here is the reference of the intractability result
E.R. BERLEKAMP, R.J. MCELIECE, and H.C.A. VAN TILBORG, "On the inherent intractability of certain coding problems," IEEE Trans. Inform. Theory, vol. 24, pp. 384– 386, May 1978.
There are some nice relaxations, such as the LP decoder that was introduced in
J. Feldman, M.J. Wainwright and D.R. Karger, "Using linear programming to Decode Binary linear codes," IEEE Transactions on Information Theory, 51:954–972, March 2005.
-
Thank You very much ! – Alexander Chervov Apr 17 '11 at 5:27
If you restrict to the case where $R \in \{ \pm1 \}^l$ you can encode an arbitrary function $f\colon \{\pm1\}^l\to \pm1$ with appropriate choice of the $V_i$ by augmenting your problem to instead return $1$ if $R = V_i$ for some $i$ and $-1$ otherwise. If you have an algorithm to solve your original problem then you can solve the augmented problem without adding much by first finding the closest $V_i$ and then checking if $V_i=R$. Since the augmented problem can encode any function it will in general have exponential circuit complexity, and therefore the original problem will also have exponential circuit complexity (and therefore has no subexponential "algorithm"). |
[–][deleted] (3 children)
[deleted]
[–][S] 0 points1 point (2 children)
Sorry for the late reply. First, thanks for responding to this - didn't think anyone would. Second, could you provide an example of how to apply the E-L equation to take these functions down to an ODE?
[–] 1 point2 points (0 children)
Here is a functional
S = \int (1/2 m x'^2 - 1/2 k x^2) dt
The E-L equation says take d/dt dL/dx' - dL/dx = 0
Here:
dL/dx' = m x'
d/dt dL/dx' = m x''
dL/dx = -k x
So the ODE you need to solve is
m x'' - (-k x) = 0, i.e. m x'' + k x = 0
You should be able to solve that ODE.
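For reference, the general solution of m x'' + k x = 0 is
x(t) = A cos(w t) + B sin(w t), with w = sqrt(k/m),
where A and B are fixed by the initial conditions.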
[–][S] 0 points1 point (8 children)
So, I was told to use the Hamiltonian method for #1, 2.
Quite frankly, I don't think I've felt this lost in a math course before. This is a summer class, so the progress is accelerated, but no one in our class is able to follow the professor. He's jumping around the chapters in our required text (this hw problem dealt with topics not really covered in the book). Any advice on what I should do in order to actually learn some Diff Eq?
[–][deleted] (7 children)
[deleted]
[–][S] 0 points1 point (6 children)
I am an undergrad. This course is listed as a 300 level course, so I'm not sure if it's necessarily an introduction course to Diff. Eqn's. But I'm taking it as an equivalent course, so I can transfer credits to the university I'm a full-time student at. The equivalent course to my university is titled "Introduction to Diff Eqn's" and it's a 200 level course. So I'm not sure why they're listed as equivalents.
The book we're using is, "Introductory Differential Equations using Sage" by David Joyner and Marshall Hampton.
[–][deleted] (5 children)
[deleted]
[–][S] 0 points1 point (0 children)
Copy/Paste from course website: Course Catalog’s Description
First and second order differential equations with applications, linear differential equations, power series solutions, Laplace transforms.
Course Objectives Upon completing the course, students should be able to:
Describe the basic existence and uniqueness conditions for ordinary differential equations, and their physical significance.
Solve first and second order differential equations by direct integration, transform, and power series methods.
Use software to study differential equations numerically, taking note of the reasons for different numerical methods.
Classify equilibria and dynamics for two-dimensional systems of differential equations (if time).
"This is a topic driven course, and I would like to do much more than this! But the above are the basics."
[–][S] 0 points1 point (3 children)
Also, I'd love to hear your recommendation.
[–][deleted] (2 children)
[deleted]
[–][S] 0 points1 point (1 child)
Shoot, I don't know if I have the heart to do that. The prof's not a bad guy - he's not intentionally trying to lose the class. I probably shouldn't have complained to you guys if I wasn't prepared to take action against him, but I was just wondering if it was my own incompetency with mathematics or if there was something else going on.
# [webkit-dev] Proposal: remove menclose notation radical
Frédéric WANG fred.wang at free.fr
Thu Mar 10 03:45:04 PST 2016
Hi everyone,
As said in a previous message, MathML layout refactoring is in progress
[1] and I'd like to use this opportunity to propose removing support for
More context:
In MathML3 there are three ways to write roots:
1) <msqrt> child1 child2 ... child3 </msqrt>, equivalent to the LaTeX
expression \sqrt{child1 child2 ... child3} i.e. square root of "child1
child2 ... child3".
2) <mroot> child1 child2 </mroot>, equivalent to the LaTeX expression
\sqrt[child2]{child1} i.e. "child2"-th root of "child1".
3) <menclose notation="radical"> child1 child2 ... child3 </menclose>,
equivalent to 1)
Currently 1) and 2) share an implementation in the RenderMathMLRoot
class. That implementation relies on anonymous renderers and flexbox
layout and has many issues with dynamic changes of child list, style,
zoom etc. However, I've uploaded a patch to do a complete refactoring of
RenderMathMLRoot in bug 153987 [2] which solves these issues.
The menclose element definitely needs a clean rewriting too. In
particular, 3) is implemented by appending an anonymous RenderMathMLRoot
child in which we put all the other children ; and again dynamic changes
are not handled very well . I have uploaded a patch to bug 155019 [3] in
order to completely rewrite the menclose implementation and I decided to
reuse RenderMathMLRoot.
I expect this removal to be safe for the users given that (except in
examples and tests) I've always seen 1) preferred over 3) in math
documents. The only rationale for 3) would be to write overlapping
notations (e.g. <menclose notation="radical circle horizontalstrike">)
but again I'm not aware of any concrete use case for that and it's
always possible to nest <msqrt> and <menclose> to get similar rendering.
Finally, note that the accessibility code does expose the radical
notation and my patch for bug 155019 does not affect that code.
I plan to ask the same to Mozilla developers (FYI in Gecko, 1) and 3)
are implemented in the same class but 2) is implemented in a separate
class with duplicate code ; so it would also make sense for them to do a
simplification). In the future, I also plan to ask to the Math WG to
deprecate this notation and this is already done in the current draft of
the MathML in HTML5 implementation note I wrote [4]. Anyway, the MathML specification says:
"Conforming renderers may ignore any value they do not handle, although
renderers are encouraged to render as many of the values listed below as
possible" [5].
Frédéric Wang
[1] https://lists.webkit.org/pipermail/webkit-dev/2015-December/027840.html
[2] https://bugs.webkit.org/show_bug.cgi?id=153987
[3] https://bugs.webkit.org/show_bug.cgi?id=155019
[4] http://www.mathml-association.org/MathMLinHTML5/
[5] https://www.w3.org/TR/MathML3/chapter3.html#presm.menclose
# How much hotter will an object get if I paint it black?
If black is the best absorber and radiator, why does it get hot?
Black and white matters. But why and how?
If a black body is a perfect absorber, why does it emit anything?
Why is black the best emitter?
Some respondents referred to the Stefan-Boltzmann Law and indeed were kind enough to do the calculation. This post
Emissivity and Final Temperature of a Black and White object
indicates that the emissivity constant should be different for white objects than for black objects. Wikipedia shows for example
https://en.wikipedia.org/wiki/Emissivity
states that 'white paint absorbs very little visible light. However, at an infrared wavelength of $10\times10^{-6}$ metres, paint absorbs light very well, and has a high emissivity.'
I am still at a loss though as to how to apply the Stefan-Boltzmann equation to calculate the equilibrium temperature of two identical objects (for example a piece of paper) in the identical sunlight(light intensity of 1000 W/m2 (typical for cloudless sunny day)) that differ only in color.
• Are you asking how to determine the emissivity of an object, or are you asking how to do the calculation assuming that you already have the emissivity as a function of wavelength? – probably_someone Jul 13 '20 at 0:01
• If what you have is emmisivity and absorbance at a given frequency, then you need to find the power emitted as an integral( Wikipedia) and set that equal to the absorbed power from the sun( if you want, integrated over absorbance as a function of frequency). – Zach Johnson Jul 13 '20 at 5:57
• Actually rereading the title, thermal conduction with the painted object may be much more important for you than radiative loss. – Zach Johnson Jul 13 '20 at 5:58
• this measurement may help phys.org/news/2011-10-silver-white-cars-cooler.html – anna v Jul 13 '20 at 6:03
• @probably_someone I am asking what the temp diff will be if I have two identical objects, one white and one black in the same light. Wikipedia lists e for snow as .8-.9 but for asphalt .88. so the Stefan-Boltzmann law shows the same result yet we all know that black objects get warmer – aquagremlin Jul 14 '20 at 13:28
When the objects are exposed to sunlight, they are heated by radiation, and cooled mainly by convection: $$\frac{q}{A} = h(T_{obj} - T_{air})$$, where $$h$$ is the convective coefficient.
The black surface absorbs more heat by radiation than the white one. So, the $$1000 W/m^2$$ is close to the reality for it ($$\epsilon \approx 1)$$, where $$\epsilon$$ is the emissivity. As the emissivity is smaller for the white surface, it gets only a fraction of the energy of the black surface.
Considering the Stefan-Boltzmann law, the equilibrium temperature for the objects is expressed by the equation:
$$\left(\frac{1000\epsilon_{obj}}{\sigma}\right)^{1/4} = h(T_{obj} - T_{air})$$
It is clear that if $$h$$ and $$T_{air}$$ is the same (what is reasonable for the same environment and material), the black object (biggest emissivity) has a bigger equilibrium temperature than the white one.
• You actually came closest to answering my question so i gave you the check mark. However, the actual values of emissivity used in tables like the one above as well as this. nuclear-power.net/nuclear-engineering/heat-transfer/… Lead to temp differences smaller than i found. I must be doing something wrong. – aquagremlin Jul 15 '20 at 0:12
• I tested yesterday porcelain tiled floor black and white. The temperature differences were about 10 K after some hours under the sun. As you can see from the equation, that depends also on $h$, that is another empirical factor. – Claudio Saspinski Jul 15 '20 at 0:27
• Convection...hmmm.. I have to read how emissivity is measured. They would have to do it in a vacuum to eliminate this effect. The convective coefficient can be 2-20 for air. nuclear-power.net/nuclear-engineering/heat-transfer/… – aquagremlin Jul 15 '20 at 0:48
In this link, a comparison is made between colors of cars, before reaching thermodynamic equilibrium.
Thermodynamic equilibrium is an axiomatic concept of thermodynamics. It is an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium there are no net macroscopic flows of matter or of energy, either within a system or between systems.
If your two objects stay in the sunlight long enough to reach thermodynamic equilibrium, the zeroth law should say that their final temperatures are the same:
The zeroth law of thermodynamics states that if two thermodynamic systems are each in thermal equilibrium with a third one, then they are in thermal equilibrium with each other.
See the explanation of thermal equilibrium here.
Figure 1.2.1: If thermometer A is in thermal equilibrium with object B, and B is in thermal equilibrium with C, then A is in thermal equilibrium with C. Therefore, the reading on A stays the same when A is moved over to make contact with C.
Emissivity and absorptivity would play a role in how long it would take for the two differently colored objects to reach thermodynamic equilibrium with the air surrounding them at the same input radiation.
The tests with cars show that the time is important in showing the differences due to the color of the car, and the particular case has to be taken into account. I would think that the two pieces of different color paper (no wind) should reach equilibrium in the noon sun fairly soon, and thus the same temperature. In general one should use the emissivity and absorptivity to solve a specific case, but the calculations are not simple.
• yes your thinking is what I went through. But I wanted to CALCULATE the actual temperature. I did the experiment (a piece of paper half sprayed black in a big cardboard box whose top was open) and used an infrared thermometer which read the black paper as hotter (just as your reference found with cars). The fact that emissivity and absorbance are the same and the conclusion that they should have the same temperature at equilibrium is not agreeing with experience so I assume that more is needed for the Stefan-Boltzmann law to agree with reality. efharisto. – aquagremlin Jul 14 '20 at 18:44
• well, your experiment is measuring the heat by the radiation, not by the standard thermometer, If the standard thermometer could be used, as in the figure? If you are serious in experimenting I would use two boxes of different colors and a standard thermometer inside. – anna v Jul 15 '20 at 3:48
• I mean that in real life the black body curve is not exactly followed. It could be that the emissivity of the black or the white side is affected differently (the curve is different) – anna v Jul 15 '20 at 4:12
• Sometimes, especially near ambient temperatures, readings may be subject to error due to the reflection of radiation from a hotter body—even the person holding the instrument[citation needed] — rather than radiated by the object being measured, and to an incorrectly assumed emissivity. en.wikipedia.org/wiki/Infrared_thermometer – anna v Jul 15 '20 at 4:22
## Here's the Solution to this Question
a) Let $X = \{1,2,3,4,5,6,7\}$ and $R = \{(x,y)|x-y\text{ is divisible by }3\}$ in $X$. Let us show that $R$ is an equivalence relation. Since $x-x=0$ is divisible by 3 for any $x\in X$, we conclude that $(x,x)\in R$ for any $x\in X$, and hence $R$ is a reflexive relation. If $x,y\in X$ and $(x,y)\in R,$ then $x-y$ is divisible by 3. It follows that $y-x=-(x-y)$ is also divisible by 3, and hence $(y,x)\in R.$
We conclude that the relation $R$ is symmetric. If $x,y,z\in X$ and $(x,y)\in R,\ (y,z)\in R,$ then $x-y$ is divisible by 3 and $y-z$ is divisible by 3. It follows that $x-z=(x-y)+(y-z)$ is also divisible by 3, and hence $(x,z)\in R.$ We conclude that the relation $R$ is transitive. Consequently, $R$ is an equivalence relation.
b) Let $A = \{1,2,3,4\}$ and let $R = \{(1,1), (1,2),(2,1),(2,2),(3,4),(4,3), (3,3), (4,4)\}$ be an equivalence relation on $R$. Let us determine $A/R$. Taking into account that $[a]=\{x\in A\ |\ (x,a)\in R\}$ and hence $[1]=\{1,2\}=[2],\ [3]=\{3,4\}=[4],$ we conclude that $A/R=\{[a]\ |\ a\in A\}=\{[1],[3]\}.$
c) Let us draw the Hasse diagram of lattices, $(L_1,<)$ and $(L_2,<)$ where $L_1 = \{1, 2, 3, 4, 6, 12\}$ and $L_2 = \{2, 3, 6, 12, 24\}$ and a < b if and only if a divides b.
Note that a Hasse diagram is a graphical rendering of a partially ordered set displayed via the cover relation of the partially ordered set with an implied upward orientation. A point is drawn for each element of the poset, and line segments are drawn between these points according to the following two rules:
1. If $x < y$ in the poset, then the point corresponding to $x$ appears lower in the drawing than the point corresponding to $y$.
2. The line segment between the points corresponding to any two elements $x$ and $y$ of the poset is included in the drawing iff $x$ covers $y$ or $y$ covers $x$.
In our case, $x < y$ if and only if $x \mid y.$ Therefore, the Hasse diagrams are the following:
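Concretely, the cover relations (each drawn as an upward edge) are: for $(L_1,<)$ they are $1\to 2$, $1\to 3$, $2\to 4$, $2\to 6$, $3\to 6$, $4\to 12$, $6\to 12$; for $(L_2,<)$ they are $2\to 6$, $3\to 6$, $6\to 12$, $12\to 24$, with $2$ and $3$ both minimal.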
# Showing a series converges non-uniformly
For the series $$f(x) = \sum_{n=0}^\infty \frac{1}{n(1+nx^2)}$$ my lecture notes use the Weierstrass M-test to show that this converges uniformly on any interval of the form $(-\infty,-a)$ or $(a,\infty)$ for $a>0$ and says that it converges non uniformly on $(0,a)$ or $(-a,0)$ for any $a>0$.
Isn't this a contradiction? I.e. can't you find two such $a$'s where it converges uniformly and non-uniformly.
Also how does one show that there is some interval where this converges non-uniformly?
Thanks!
-
Do you mean $(-\infty,-a)$? – robjohn Apr 5 '12 at 11:39
ah yes thank you – user26069 Apr 5 '12 at 11:40
Just notice that the harmonic series $\sum_{n\geq 1}\frac{1}{n}$ diverges. Now for any $m$, if $S_m$ denotes the partial sum of $m$ terms of the harmonic series, notice that by taking $x$ sufficiently close to zero, one can ensure that the partial sum of $m$ terms of $f(x)$ is close to $S_m$. Now use divergence of $S_m$ as $m\rightarrow\infty$. – William Apr 5 '12 at 11:45
First, you need to start your sum at $n=1$, not $n=0$.
There is no contradiction. If a series converges uniformly on a set $O$, then it converges uniformly on any subset of $O$; but, the series does not necessarily converge uniformly on a set $U\supset O$.
You have uniform convergence on a set $(a,\infty)$, $a>0$ and non-uniform convergence on a larger set $(0,\infty)$, which is ok...
In your example, you have uniform convergence on sets of the form $(a,\infty)$, $a>0$. You have pointwise convergence on the set $(0,\infty)$. Moreover, since the series diverges at $x=0$, this is the largest interval, unbounded on the right, for which you have pointwise convergence. This is then the natural candidate for an interval over which you could have non-uniform convergence. Moreover, things would have to go awry near $x=0$.
But is this indeed the case?
Let's look at the graphs of the first few partial sums $s_n=\sum\limits_{k=1}^n {1\over k(1+kx^2)}.$
We are led to suspect that the series does not converge uniformly on $(0,\infty)$. (look at how the difference between the $s_k$ grows larger as $x\rightarrow0^+$).
In fact, given $N$, we may choose $M>N$ so that $\sum\limits_{n=N}^M{1\over n}\ge 1$. Then $$\lim_{x\rightarrow0^+}\sum_{n=N}^M {1\over n(1+nx^2)} =\lim_{x\rightarrow0^+}\sum_{n=N}^M {1\over n}\ge1.$$ So the series $\sum\limits_{n=1}^\infty {1\over n(1+nx^2)}$ is not uniformly Cauchy on $(0,\infty)$ and thus not uniformly convergent on $(0,\infty)$.
Since $x=0$ is the "bad point" as far as uniform convergence is concerned, the series does not converge uniformly on any interval of the form $(0,a)$.
-
Define $$f_k(x)=\sum_{n=0}^{k-1}\frac{1}{n(1+nx^2)}\tag{1}$$ For $|x|>a$, we have \begin{align} |f(x)-f_k(x)| &=\sum_{n=k}^\infty\frac{1}{n(1+nx^2)}\\ &<\sum_{n=k}^\infty\frac{1}{n(1+na^2)}\\ &<\frac{1}{a^2}\sum_{n=k}^\infty\frac{1}{n^2}\\ &<\frac{1}{a^2}\sum_{n=k}^\infty\left(\frac{1}{n-1/2}-\frac{1}{n+1/2}\right)\\ &=\frac{1}{a^2(k-1/2)}\tag{2} \end{align} Estimate $(2)$ says that $f_k\to f$ uniformly on $\{x:|x|>a\}$.
However, \begin{align} |f(x)-f_k(x)| &=\sum_{n=k}^\infty\frac{1}{n(1+nx^2)}\\ &\ge\sum_{n=k}^{1/x^2}\frac{1}{n(1+nx^2)}\\ &\ge\frac12\sum_{n=k}^{1/x^2}\frac{1}{n}\\ &\ge-\frac12\log(x^2(k+1))\tag{3} \end{align} That is, for any positive integer $k$, if $0<x<\frac{1}{e\sqrt{k+1}}$, then $|f(x)-f_k(x)|\ge1$. Thus, the convergence is not uniform on any neighborhood of $x=0$.
-
There is no contradiction. Note that $a>0$ is arbitrary. This series converges uniformly on any interval of the form $(-\infty,-a)$ or $(-\infty,-b]$ or $(a,\infty)$ or $[b,\infty)$ where $a>0,b>0$. Due to
$$|\frac{1}{n(1+nx^2)}|\leq \frac{1}{n(1+na^2)}$$
(the series $\sum\frac{1}{n(1+na^2)}$ converges), so by the Weierstrass M-test, the former series converges uniformly. On the other hand, given any interval $(a,b)$ with $0\in(a,b)$, take $x=0$; then the series becomes $\sum\frac{1}{n}$, which obviously does not converge.
How to make the final Interpretation of PCA?
I have question regarding final loading of data back to original variables.
So for example:
I have 10 variables a, b, c, ..., j; using returns for the last 300 days I got a return matrix of 300 x 10. Further, I have normalized the returns and calculated the 10 x 10 covariance matrix. Now I have calculated eigenvalues and eigenvectors, so I have a 10 x 1 vector of eigenvalues and a 10 x 10 matrix of corresponding eigenvectors. The scree plot says that 5 components explain 80% of the variation, so now there are 5 eigenvectors and corresponding eigenvalues.
Now, how do I load them back onto the original variables, and how can I conclude which of the variables a, b, c, ..., j explains the maximum variation at time "t"?
-
I had the same problem earlier. cs.otago.ac.nz/cosc453/student_tutorials/… was something others directed to my attention and it helped quite a bit. – user1234440 Nov 23 '12 at 16:07
To make things really clear, you have an original matrix $X$ of size $300 \times 10$ with all your returns.
Now what you do is that you choose the first $k=5$ eigenvectors (i.e. enough to get 80% of the variation given your data) and you form a matrix $U$ of size $10 \times 5$. Each of the columns of $U$ represents a portfolio of the original dataset, and all of them are orthogonal.
PCA is a dimensionality-reduction method: you could use it to store your data in a matrix $Z$ of size $300 \times 5$ by doing:
$$Z = X U$$
You can then recover an approximation of $X$ which we can call $\hat{X}$ as follows:
$$\hat{X} = Z U^\intercal$$
Note that as your 5 eigenvectors only represent 80% of the variation of X, you will not have $X=\hat{X}$.
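A short sketch of these operations in Julia (placeholder data; in practice $X$ would hold your standardized returns):
using LinearAlgebra, Statistics
X = randn(300, 10)                # placeholder for the 300×10 matrix of standardized returns
C = cov(X)                        # 10×10 covariance matrix
vals, vecs = eigen(Symmetric(C))  # eigenvalues ascending, eigenvectors as columns
U = vecs[:, end:-1:end-4]         # the 5 leading eigenvectors, a 10×5 matrix
Z = X * U                         # 300×5 scores
Xhat = Z * U'                     # 300×10 approximation of X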
In practice for finance application, I don't see why you would want to perform these reduction operations.
In terms of factor analysis, you could sum the absolute value for each row of $U$; the vector with the highest score would be a good candidate I think.
-
When not relying on Bayesian techniques, I can see the advantage of PCA for dimension reduction. Consider high-dimension estimation of the covariance matrix where the number of observations is smaller than the number of securities. This typically leads to problems. Alternately, VAR or Garch estimation on a small number of factors is usually faster with fewer parameters than estimating them on every security in the universe. – John Nov 27 '12 at 15:00
@SRKX "In terms of factor analysis, you could sum the absolute value for each row of U; the vector with the highest score would be a good candidate I think." Candidate to do what? How would you use it to reach a trading decision? – ManInMoon Apr 26 '13 at 7:37
@ManInMoon the variable which adds the most variance to the sample. – SRKX Apr 29 '13 at 12:55
If you are asking which of the 10 variables is contributing most to the principal component, then look at your first eigenvector; each value reflects a single variable, so the largest value (by magnitude) in that eigenvector should give the variable with the largest contribution. Note that a large negative number means anticorrelation.
The matrix you have is in fact mapping from the 10d space of your variables onto the eigenspace of the matrix; the first eigenvector represents one of the basis vectors of this new eigenspace, in the space of your 10d vectors.
The analogy is that if you had 2 variables, x and y, then you could construct a similar 2d matrix, and calculate its eigenvectors. The eigenvectors would show you the axes of the new space, and the first eigenvector is its principal component (axis).
Caveat: I know a lot more about eigenvectors than I do about PCA, so there may be a subtlety I'm missing.
- |
# USB - rechargeable batteries
Nice idea. Knowing that many of us carry a laptop everyday, it makes sense to use it for charging batteries!
UsbCell can be found here.
Brent said...
hi cati, wow, nice find. :)
Luciano Bove said...
Ciao! This is a real useful gadget.
Bob Johnson said...
Great Idea.
dt said...
It's interesting, but I was told that USB power is not enough for recharging? Perhaps my source was wrong.
Joanne said...
Whoa, neat idea. I like the whole multi-tasking energy use! |
Definition:Angle/Types
Definition
Angles can be divided into categories:
Zero Angle
The zero angle is an angle the measure of which is $0$ regardless of the unit of measurement.
Acute Angle
An acute angle is an angle which has a measure between that of a right angle and that of a zero angle.
Right Angle
A right angle is an angle that is equal to half of a straight angle.
Obtuse Angle
An obtuse angle is an angle which has a measurement between those of a right angle and a straight angle.
Straight Angle
A straight angle is defined to be the angle formed by the two parts of a straight line from a point on that line.
Reflex Angle
A reflex angle is an angle which has a measure between that of a straight angle and that of a full angle.
Full Angle
A full angle is an angle equivalent to one full rotation.
It is possible to consider angles outside the range $\closedint {0 \degrees} {360 \degrees}$, that is, $\closedint 0 {2 \pi}$.
However, in geometric contexts it is usually preferable to convert these to angles inside this range by adding or subtracting multiples of a full angle. |
# zbMATH — the first resource for mathematics
## Booker, John Robert
Author ID: booker.john-robert Published as: Booker, J. R.; Booker, John R.; Booker, John Robert; Booker, John
Documents Indexed: 76 Publications since 1967 Biographic References: 3 Publications
#### Co-Authors
4 single-authored 23 Small, J. C. 17 Carter, John P. 6 Davis, Elwyn H. 5 Rowe, R. Kerry 5 Smith, David W. 4 Leo, Chin Jian 3 Balaam, N. P. 2 Elzein, Abbas H. 2 Moore, Ian D. 2 Seneviratne, H. N. 2 Sloan, Scott W. 2 Stengel, Karl C. 1 Airey, David W. 1 Bretherton, Francis P. 1 Chiarella, Carl 1 Chou, Chee W. 1 El-Zahaby, Khalid 1 Oliver, Dean S. 1 Randolph, Mark F. 1 Runesson, Kenneth
#### Serials
36 International Journal for Numerical and Analytical Methods in Geomechanics 13 International Journal of Solids and Structures 9 International Journal for Numerical Methods in Engineering 5 Quarterly Journal of Mechanics and Applied Mathematics 3 Journal of Fluid Mechanics 1 Archives of Mechanics 1 The Geophysical Journal of the Royal Astronomical Society 1 Journal of Engineering Mathematics 1 Journal of the Mechanics and Physics of Solids 1 Communications in Applied Numerical Methods
#### Fields
65 Mechanics of deformable solids (74-XX) 22 Fluid mechanics (76-XX) 7 Geophysics (86-XX) 6 Numerical analysis (65-XX) 5 Classical thermodynamics, heat transfer (80-XX) 4 Integral transforms, operational calculus (44-XX) 3 Calculus of variations and optimal control; optimization (49-XX) 1 Partial differential equations (35-XX) 1 Integral equations (45-XX)
#### Citations contained in zbMATH Open
45 Publications have been cited 413 times in 300 Documents
The critical layer for internal gravity waves in a shear flow. Zbl 0148.23101
Booker, John R.; Bretherton, Francis P.
1967
Onset of convection in a variable-viscosity fluid. Zbl 0534.76093
Stengel, Karl C.; Oliver, Dean S.; Booker, John R.
1982
An investigation of the stability of numerical solutions of Biot’s equations of consolidation. Zbl 0311.73047
Booker, J. R.; Small, J. C.
1975
A method of computing the consolidation behaviour of layered soils using direct numerical inversion of Laplace transforms. Zbl 0612.73109
Booker, J. R.; Small, J. C.
1987
Integration of Tresca and Mohr-Coulomb constitutive relations in plane strain elastoplasticity. Zbl 0758.73054
Sloan, S. W.; Booker, J. R.
1992
Finite layer analysis of consolidation. II. Zbl 0482.73088
Booker, J. R.; Small, J. C.
1982
Finite layer analysis of consolidation. I. Zbl 0482.73087
Booker, J. R.; Small, J. C.
1982
Finite layer analysis of layered elastic materials using a flexibility approach. II: Circular and rectangular loadings. Zbl 0585.73127
Small, J. C.; Booker, J. R.
1986
Removal of singularities in Tresca and Mohr-Coulomb yield functions. Zbl 0585.73055
Sloan, S. W.; Booker, J. R.
1986
The analysis of finite elasto-plastic consolidation. Zbl 0394.73097
Carter, J. P.; Booker, J. R.; Small, J. C.
1979
Finite layer analysis of layered elastic materials using a flexibility approach. I. Strip loadings. Zbl 0535.73051
Small, J. C.; Booker, J. R.
1984
Elasto-plastic consolidation of soil. Zbl 0333.73073
Small, J. C.; Booker, J. R.; Davis, E. H.
1976
Green’s functions for a fully coupled thermoporoelastic material. Zbl 0776.73060
Smith, David W.; Booker, John R.
1993
Further thoughts on convective heat transport in a variable-viscosity fluid. Zbl 0377.76077
Booker, John R.; Stengel, Karl C.
1978
The behaviour of an elastic nonhomogeneous half-space. I. Line and point loads. Zbl 0569.73106
Booker, J. R.; Balaam, N. P.; Davis, E. H.
1985
The behaviour of an elastic nonhomogeneous half-space. II. Circular and strip footings. Zbl 0569.73107
Booker, J. R.; Balaam, N. P.; Davis, E. H.
1985
Consolidation of a cross-anisotropic soil medium. Zbl 0543.73129
Booker, J. R.; Randolph, M. F.
1984
Boundary integral analysis of transient thermoelasticity. Zbl 0687.73010
Smith, D. W.; Booker, J. R.
1989
The behaviour of layered soil or rock containing a decaying heat source. Zbl 0597.73110
Small, J. C.; Booker, J. R.
1986
The time-settlement behavior of a rigid die resting on a deep clay layer. Zbl 0331.73016
Chiarella, C.; Booker, J. R.
1975
The analysis of liquid storage tanks on deep elastic foundations. Zbl 0508.73094
Booker, J. R.; Small, J. C.
1983
A theory of finite elastic consolidation. Zbl 0354.73079
Carter, J. P.; Small, J. C.; Booker, J. R.
1977
Elastic consolidation around a deep circular tunnel. Zbl 0505.73071
Carter, J. P.; Booker, J. R.
1982
Boundary element analysis of linear thermoelastic consolidation. Zbl 0894.73203
Smith, David W.; Booker, John R.
1996
Withdrawal of a compressible pore fluid from a point sink in an isotropic elastic half space with anisotropic permeability. Zbl 0611.76103
Booker, J. R.; Carter, J. P.
1987
Long term subsidence due to fluid extraction from a saturated, anisotropic, elastic soil mass. Zbl 0574.73104
Booker, J. R.; Carter, J. P.
1986
Creep and consolidation around circular openings in infinite media. Zbl 0515.73108
Carter, J. P.; Booker, J. R.
1983
A boundary element method for analysis of contaminant transport in porous media. I: Homogeneous porous media. II: Non-homogeneous porous media. Zbl 0955.76060
Leo, C. J.; Booker, J. R.
1999
A review of models for predicting the thermomechanical behaviour of soft clays. Zbl 0800.73362
Seneviratne, H. N.; Carter, J. P.; Airey, D. W.; Booker, J. R.
1993
Analysis of fully coupled thermomechanical behaviour around a rigid cylindrical heat source buried in clay. Zbl 0800.73016
Seneviratne, H. N.; Carter, J. P.; Booker, J. R.
1994
Analysis of a point sink embedded in a porous elastic half space. Zbl 0579.73108
Booker, J. R.; Carter, J. P.
1986
Elastic consolidation around a point sink embedded in a half-space with anisotropic permeability. Zbl 0598.73109
Booker, J. R.; Carter, J. P.
1987
A general treatment of plastic anisotropy under conditions of plane strain. Zbl 0298.73052
Booker, J. R.; Davis, E. H.
1972
Application of discrete Fourier series to the finite element stress analysis of axi-symmetric solids. Zbl 0825.73797
Lai, J. Y.; Booker, J. R.
1991
Consolidation of axi-symmetric bodies subjected to non axi-symmetric loading. Zbl 0508.73090
Carter, J. P.; Booker, J. R.
1983
Groundwater pollution by organic compounds: A two-dimensional analysis of contaminant transport in stratified porous media with multiple sources of non-equilibrium partitioning. Zbl 0939.76600
Elzein, Abbas H.; Booker, John R.
1999
A semi-analytical method for the wave-induced seabed response. Zbl 0825.73506
Rahman, M. S.; El-Zahaby, Khalid; Booker, John
1994
The behaviour of an impermeable flexible raft on a deep layer of consolidating soil. Zbl 0582.73099
Booker, J. R.; Small, J. C.
1986
Finite layer analysis of viscoelastic layered materials. Zbl 0589.73093
Booker, J. R.; Small, J. C.
1986
A numerical method for the solution of Biot’s consolidation theory. Zbl 0267.65085
Booker, J. R.
1973
Finite element analysis of primary and secondary consolidation. Zbl 0344.73079
Booker, J. R.; Small, J. C.
1977
A method of analysis for horizontally embedded anchors in an elastic soil. Zbl 0393.73111
Rowe, R. K.; Booker, J. R.
1979
Finite element analysis of problems with infinitely distant boundaries. Zbl 0462.73058
Booker, J. R.; Small, J. C.
1981
Groundwater pollution by organic compounds: A three-dimensional boundary element solution of contaminant transport equations in stratified porous media with multiple non-equilibrium partitioning. Zbl 0939.86500
Elzein, Abbas H.; Booker, John R.
1999
Boundary element analysis of contaminant transport in fractured porous media. Zbl 0800.76262
Leo, C. J.; Booker, J. R.
1993
#### Cited by 464 Authors
29 Ai, Zhiyong 7 Cheng, Yi Chong 7 Wang, Lujun 5 Ferronato, Massimiliano 5 Wheeler, Mary Fanett 4 Booker, John Robert 4 Borja, Ronaldo I. 4 Gambolati, Giuseppe 4 Kelder, Hennie 4 Maslowe, Sherwin A. 4 Selvadurai, A. Patrick S. 4 Sharifian, Mehrzad 4 Sloan, Scott W. 4 Venkatachalappa, M. 4 Wang, Quan-sheng 4 Yue, Zhongqi 3 Abbo, Andrew J. 3 Abousleiman, Younane N. 3 Auricchio, Ferdinando 3 Brown, Susan N. 3 Budhu, Muniram 3 Elzein, Abbas H. 3 Fowler, Andrew C. 3 Hu, Yadong 3 Lindzen, Richard S. 3 Liu, Ruijie 3 Lott, François 3 Rezaiee-Pajand, Mohammad 3 Sharifian, Mehrdad 3 Sheng, Daichao 3 Stewartson, Keith 2 Adam, John A. 2 Alarcón, Enrique 2 Bai, Bing 2 Balmforth, Neil J. 2 Barry, Steven Ian 2 Bouzid, Dj. Amar 2 Broutman, Dave 2 Capone, Florinda 2 Caulfield, C. P. 2 Chen, Shaohua 2 Chimonas, George 2 Dang, Faning 2 Dawson, Clint N. 2 Dhiman, Joginder Singh 2 Ellingsen, Simen A. 2 Feng, Dong Liang 2 Gajo, Alessandro 2 Garg, Nat Ram 2 Gentile, Maurizio 2 Girault, Vivette 2 Grimshaw, Roger Hamilton James 2 Gui, Jun Chao 2 Han, Jie 2 Knobloch, Edgar 2 Kukreti, Anant R. 2 Kumar, Jyant 2 Kumar, Kundan 2 Liao, Hongjian 2 Liu, Wen-Jie 2 Lu, Jianfei 2 Ma, Zongyuan 2 Manga, Michael 2 Mercer, Geoffry Norman 2 Millet, Christophe 2 Nijimbere, Victor 2 Pan, Ernian 2 Pini, Giorgio 2 Rudraiah, Nanjundappa 2 Sachdev, P. L. 2 Solomatov, V. S. 2 Spilker, Robert L. 2 Straughan, Brian 2 Teitelbaum, Hector 2 Troitskaya, Yuliya I. 2 Tung, Ka Kit 2 Tyvand, Peder A. 2 van Duin, Cornelis A. 2 Vanneste, Jacques 2 Vermeer, P. A. 2 Wang, Chen 2 Weeraratne, Dayanthie 2 Wu, Chao 2 Wu, Quan Long 2 Xiao, Sha 2 Xue, Xinhua 2 Yan, Cong 2 Yang, Xingguo 2 Yue, Zhongqi Quentin 2 Zebib, Abdelfattah 2 Zhang, Wohua 2 Zheng, Hong 1 Abidin, Nurul Hafizah Zainal 1 Achala, L. N. 1 Achatz, Ulrich 1 Adiyaman, Ibrahim Bahadir 1 Albaalbaki, Bashar 1 Alexeyeva, Lyudmila A. 1 Almani, Tameem 1 Almeida, Edgard S. ...and 364 more Authors
#### Cited in 67 Serials
47 Journal of Fluid Mechanics 31 Applied Mathematical Modelling 24 Computer Methods in Applied Mechanics and Engineering 19 International Journal for Numerical and Analytical Methods in Geomechanics 13 Acta Mechanica 12 Physics of Fluids 10 Geophysical and Astrophysical Fluid Dynamics 9 Engineering Analysis with Boundary Elements 8 International Journal of Engineering Science 8 Applied Mathematics and Mechanics. (English Edition) 7 Computational Mechanics 6 International Journal for Numerical Methods in Engineering 6 Studies in Applied Mathematics 5 Physics of Fluids, A 5 Meccanica 5 European Journal of Mechanics. A. Solids 4 Computers & Mathematics with Applications 4 International Journal of Solids and Structures 4 Archive of Applied Mechanics 3 Journal of Computational Physics 3 Journal of the Mechanics and Physics of Solids 3 Mathematical and Computer Modelling 3 European Journal of Mechanics. B. Fluids 3 Engineering Computations 2 Astrophysics and Space Science 2 Fluid Dynamics 2 Wave Motion 2 Applied Mathematics and Computation 2 Mechanics Research Communications 2 Numerical Methods for Partial Differential Equations 2 Journal of Elasticity 2 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 2 Communications in Numerical Methods in Engineering 2 Computational Geosciences 2 Archives of Computational Methods in Engineering 2 Sādhanā 2 International Journal of Computational Methods 2 Acta Mechanica Sinica 1 Modern Physics Letters B 1 Computers and Fluids 1 International Journal of Heat and Mass Transfer 1 International Journal of Plasticity 1 Journal of Engineering Mathematics 1 Journal of Mathematical Analysis and Applications 1 Journal of Mathematical Physics 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Journal of Theoretical and Applied Mechanics (Sofia) 1 Prikladnaya Matematika i Mekhanika 1 Aplikace Matematiky 1 Journal of Computational and Applied Mathematics 1 Mathematics and Computers in Simulation 1 Mathematika 1 SIAM Journal on Numerical Analysis 1 Applied Numerical Mathematics 1 Finite Elements in Analysis and Design 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Journal of Non-Equilibrium Thermodynamics 1 Acta Mechanica Sinica. (English Edition) 1 Continuum Mechanics and Thermodynamics 1 Mathematical Problems in Engineering 1 Journal of Mathematical Fluid Mechanics 1 Proceedings of the National Academy of Sciences, India. Section A. Physical Sciences 1 Arabian Journal for Science and Engineering 1 Numerical Algebra, Control and Optimization 1 International Journal of Applied and Computational Mathematics 1 Proceedings of the Royal Society of London. A. Mathematical, Physical and Engineering Sciences
#### Cited in 13 Fields
179 Fluid mechanics (76-XX) 166 Mechanics of deformable solids (74-XX) 41 Numerical analysis (65-XX) 32 Geophysics (86-XX) 25 Classical thermodynamics, heat transfer (80-XX) 14 Partial differential equations (35-XX) 5 Astronomy and astrophysics (85-XX) 3 Integral transforms, operational calculus (44-XX) 2 Biology and other natural sciences (92-XX) 1 History and biography (01-XX) 1 Ordinary differential equations (34-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Statistical mechanics, structure of matter (82-XX) |
# 1 CpG sets
RaMWAS calculates CpG scores and performs further analyses at a set of CpGs (or locations in general) defined by the user via the filecpgset parameter. The filecpgset parameter must point to an .rds file (a file saved using the saveRDS function), with the set of locations stored as a list with one sorted vector of CpG locations per chromosome.
cpgset = list(
chr1 = c(12L, 57L, 123L),
chr2 = c(45L, 95L, 99L, 111L),
chr3 = c(22L, 40L, 199L, 211L))
In practice, the set should depend on the reference genome and can include CpGs created by common SNPs.
Optionally, the parameter filenoncpgset can point to a file storing vetted locations away from any CpGs.
Our CpG sets include all common CpGs that are identified by combining reference genome sequence data with SNP information as SNPs can often create or destroy CpGs in the reference. Our sets exclude CpGs with unreliable coverage estimates due to poor alignment (e.g. CpG in repetitive elements) as indicated by our in silico experiment (details below).
CpG sets for human genome (autosomes only).
| Code | Super Population | hg19 no QC | hg19 with QC | hg38 no QC | hg38 with QC |
| --- | --- | --- | --- | --- | --- |
| ALL | All samples | 28.4M | 28.0M | 29.5M | 27.8M |
| AFR | African | 28.7M | 28.3M | 29.8M | 28.1M |
| AMR | Ad Mixed American | 28.1M | 27.7M | 29.2M | 27.5M |
| EAS | East Asian | 27.8M | 27.4M | 28.9M | 27.2M |
| EUR | European | 27.9M | 27.5M | 29.0M | 27.4M |
| SAS | South Asian | 28.0M | 27.6M | 29.1M | 27.4M |
| | Reference Genome | 26.8M | 26.4M | 27.9M | 27.1M |
Note: SNPs were obtained from the 1000 Genomes super populations (Phase 3 data, more info). Only SNPs with minor allele frequency above 1% are included. In silico alignment experiments assumed 75 bp single-end reads and alignment with Bowtie 2.
CpG sets for mouse genome.
| Genome | No QC | With QC |
| --- | --- | --- |
| GRCm38.p4 | 22.6M | 21.7M |
| GRCm38.p5 | 22.7M | 21.7M |
# 3 Constructing a custom CpG set
## 3.1 Constructing a CpG set for a reference genome
A CpG set can be constructed from a reference genome with the getCpGsetCG function. The function can use any genome available in Bioconductor as a BSgenome class object. Additional genomes can be loaded from .fa files using the readDNAStringSet function.
library(ramwas)
library(BSgenome.Ecoli.NCBI.20080805)
cpgset = getCpGsetCG(BSgenome.Ecoli.NCBI.20080805)
# First 10 CpGs in NC_008253:
print(cpgset$NC_008253[1:10])
## [1]  22  90 152 196 243 248 256 258 283 297
For a genome with injected SNPs, we provide the function getCpGsetALL for also finding CpGs that can be created by the SNPs. The example below uses all SNPs from dbSNP144 for listing CpGs in the human genome. We do NOT advise using all dbSNP144 SNPs, as doing so creates a large number of CpGs that almost never occur in the population.
library(BSgenome.Hsapiens.UCSC.hg19)
library(SNPlocs.Hsapiens.dbSNP144.GRCh37)
genome = injectSNPs(Hsapiens, "SNPlocs.Hsapiens.dbSNP144.GRCh37")
cpgset = getCpGsetALL(genome)
# Number of CpGs with all SNPs injected in autosomes
sum(sapply(cpgset[1:22], length))
## [1] 42841152
The code above shows that using all dbSNP144 SNPs we get over 42 million CpGs instead of about 29 million when using only SNPs with minor allele frequency above 1%. In an outbred population such as humans it is reasonable to ignore rare CpG-SNPs because they would have low power to detect associations.
To exclude rare CpG-SNPs, we need allele frequency information. Unfortunately, (to our knowledge) Bioconductor packages with SNP information do not contain SNP allele frequencies. To alleviate this problem, we provide a way to inject SNP information from 1000 Genomes data or any other VCF.
First, the VCF files, obtained from the 1000 Genomes project (or other sources), need to be processed by the vcftools command --counts. Note that vcftools is independent software, not part of RaMWAS.
vcftools --gzvcf ALL.chr22.phase3.vcf.gz \
    --counts \
    --out count_ALL_chr22.txt
RaMWAS provides the function injectSNPsMAF to read in the generated allele count files, select common SNPs, and inject them into the reference genome. Here we apply it to chromosome 22.
genome[["chr22"]] = injectSNPsMAF(
    gensequence = BSGenome[["chr22"]],
    frqcount = "count_ALL_chr22.txt",
    MAF = 0.01)
# Find the CpGs
cpgset = getCpGsetALL(genome)
Once a CpG set is generated, it can be saved with the saveRDS function for use by RaMWAS.
saveRDS(file = "My_cpgset.rds", object = cpgset)
## 3.2 In silico alignment experiment
CpG sites in loci that are problematic in terms of alignment need to be eliminated prior to analysis, as their CpG score estimates will be confounded with alignment errors. For example, repetitive elements constitute about 45% of the human genome. Reads may be difficult to align to these loci because of their high sequence similarity. To identify problematic sites we conduct an in silico experiment.
The pre-computed CpG sets for the human genome in this vignette are prepared for 75 bp single-end reads. In the in silico experiment we first generate all possible 75 bp single-end reads from the forward strand of the reference. The first read spans positions 1 to 75 on chromosome 1 of the reference, the next read spans positions 2 to 76, and so on. In the perfect scenario, aligning these reads to the reference genome they originated from should cause each CpG to be covered by 75 reads. We excluded CpG sites with read coverage deviating from 75 by more than 10.
For a typical mammalian genome the in silico experiment is computationally intensive, as it requires alignment of billions of artificially created reads. RaMWAS supports in silico experiments with the function insilicoFASTQ for creating artificial reads from the reference genome. The function supports gz compression of the output files, decreasing the disk space requirement for the human genome from about 500 GB to 17 GB.
Here is how the insilicoFASTQ function is called:
# Do for all chromosomes
insilicoFASTQ(
    con = "chr1.fastq.gz",
    gensequence = BSGenome[["chr1"]],
    fraglength = 75)
The generated FASTQ files are then aligned to the reference genome. Taking Bowtie 2 as an example:
bowtie2 --local \
    --threads 14 \
    --reorder \
    -x bowtie2ind \
    -U chr1.fastq.gz | samtools view -bS -o chr1.bam
The generated BAMs are then scanned with RaMWAS and the coverage for one sample combining all the BAMs is calculated:
library(ramwas)
chrset = paste0("chr", 1:22)
targetcov = 75
covtolerance = 10
param = ramwasParameters(
    dirproject = ".",
    dirbam = "./bams",
    dirfilter = TRUE,
    bamnames = chrset,
    bam2sample = list(all_samples = chrset),
    scoretag = "AS",
    minscore = 100,
    minfragmentsize = targetcov,
    maxfragmentsize = targetcov,
    minavgcpgcoverage = 0,
    minnonzerosamples = 0,
    # filecpgset - file with the CpG set being QC-ed
    filecpgset = filecpgset
)
param1 = parameterPreprocess(param)
ramwas1scanBams(param)
ramwas3normalizedCoverage(param)
The following code then filters CpGs by the in silico coverage:
# Preprocess parameters to learn the location of the coverage matrix
param1 = parameterPreprocess(param)
# Load the coverage matrix (vector)
cover = fm.load(
    paste0(param1$dircoveragenorm, "/Coverage"))
# split the coverage by chromosomes
# cpgset - the CpG set being QC-ed
fac = rep(seq_along(cpgset), times = sapply(cpgset, length))
levels(fac) = names(cpgset)
class(fac) = "factor"
cover = split(cover, fac)
# filter CpGs on each chromosome by the coverage
cpgsetQC = cpgset
for( i in seq_along(cpgset) ){
keep =
(cover[[i]] >= (targetcov - covtolerance)) &
(cover[[i]] <= (targetcov + covtolerance))
cpgsetQC[[i]] = cpgset[[i]][ keep ]
}
Once the desired CpG set is generated, it can be saved with saveRDS function for use by RaMWAS.
saveRDS(file = "My_cpgset_QC.rds", object = cpgsetQC) |
# video conversion – ffmpeg command: what is the “best” config to re-encode video for YouTube?
Let’s say we have a video generated with matplotlib.animation using the code below (we can’t figure out how to produce the mp4 file in HD 1080p directly, but we can re-encode it to HD 1080p with Final Cut Pro).
Writer = animation.writers['ffmpeg']#code to save the example.mp4
writer = Writer(fps=0.9, codec="h264", bitrate=1000000, metadata=dict(artist="me"))
animator.save('example.mp4', writer=writer)
When this mp4 file is imported into Final Cut, we have the option to make it HD 1080p.
Q: How can we “convert” an mp4 file to HD 1080p for YouTube use with an ffmpeg command?
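For what it is worth, a typical ffmpeg invocation for re-encoding an mp4 to 1080p H.264 suitable for YouTube upload looks something like the following (file names and quality settings are only illustrative, not taken from the question):
ffmpeg -i example.mp4 \
    -vf "scale=1920:1080:flags=lanczos" \
    -c:v libx264 -preset slow -crf 18 \
    -pix_fmt yuv420p \
    -c:a aac -b:a 192k \
    example_1080p.mp4
Here the -vf scale filter does the upscaling to 1920x1080, -crf controls quality (lower means higher quality and larger files), and -pix_fmt yuv420p keeps the output playable in most players.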
CorneliusFurtado414
# A central angle, theta, of a circle with radius 16 inches intercepts an arc of 19.36 inches. Find theta.
(1) Answers
Skyheart
$\theta=\frac{19.36}{16}=1.21$ radians.
# LabelSpreading¶
class ibex.sklearn.semi_supervised.LabelSpreading(kernel='rbf', gamma=20, n_neighbors=7, alpha=0.2, max_iter=30, tol=0.001, n_jobs=1)
Bases: sklearn.semi_supervised.label_propagation.LabelSpreading, ibex._base.FrameMixin
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
This model is similar to the basic Label Propagation algorithm, but uses affinity matrix based on the normalized graph Laplacian and soft clamping across the labels.
Read more in the User Guide.
kernel : {‘knn’, ‘rbf’, callable}
String identifier for kernel function to use or the kernel function itself. Only ‘rbf’ and ‘knn’ strings are valid inputs. The function passed should take two inputs, each of shape [n_samples, n_features], and return a [n_samples, n_samples] shaped weight matrix
gamma : float
parameter for rbf kernel
n_neighbors : integer > 0
parameter for knn kernel
alpha : float
Clamping factor. A value in [0, 1] that specifies the relative amount that an instance should adopt the information from its neighbors as opposed to its initial label. alpha=0 means keeping the initial label information; alpha=1 means replacing all initial information.
max_iter : integer
maximum number of iterations allowed
tol : float
Convergence tolerance: threshold to consider the system at steady state
n_jobs : int, optional (default = 1)
The number of parallel jobs to run. If -1, then the number of jobs is set to the number of CPU cores.
X_ : array, shape = [n_samples, n_features]
Input array.
classes_ : array, shape = [n_classes]
The distinct labels used in classifying instances.
label_distributions_ : array, shape = [n_samples, n_classes]
Categorical distribution for each item.
transduction_ : array, shape = [n_samples]
Label assigned to each item via the transduction.
n_iter_ : int
Number of iterations run.
>>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelSpreading
>>> label_prop_model = LabelSpreading()
>>> iris = datasets.load_iris()
>>> rng = np.random.RandomState(42)
>>> random_unlabeled_points = rng.rand(len(iris.target)) < 0.3
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
...
Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, Bernhard Schoelkopf. Learning with local and global consistency (2004) http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.115.3219
LabelPropagation : Unregularized graph based semi-supervised learning
fit(X, y)
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
Fit a semi-supervised label propagation model based
All the input data is provided matrix X (labeled and unlabeled) and corresponding label matrix y with a dedicated marker value for unlabeled samples.
X : array-like, shape = [n_samples, n_features]
A {n_samples by n_samples} size matrix will be created from this
y : array_like, shape = [n_samples]
n_labeled_samples (unlabeled points are marked as -1) All unlabeled samples will be transductively assigned labels
self : returns an instance of self.
predict(X)
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
Performs inductive inference across the model.
X : array_like, shape = [n_samples, n_features]
y : array_like, shape = [n_samples]
Predictions for input data
predict_proba(X)
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
Predict probability for each possible outcome.
Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution).
X : array_like, shape = [n_samples, n_features]
probabilities : array, shape = [n_samples, n_classes]
Normalized probability distributions across class labels
score(X, y, sample_weight=None)
Note
The documentation following is of the class wrapped by this class. There are some changes, in particular:
Returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
X : array-like, shape = (n_samples, n_features)
Test samples.
y : array-like, shape = (n_samples) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like, shape = [n_samples], optional
Sample weights.
score : float
Mean accuracy of self.predict(X) wrt. y. |
# Sketch one cycle of each sine curve. Assume a > 0. Write an equation for each graph. amplitude = 2 period = (pi/2)
Sketch one cycle of each sine curve. Assume $a>0$. Write an equation for each graph. amplitude = 2 period = $\frac{\pi }{2}$
pattererX
1:
Sinusoidal Function
$y=A\mathrm{sin}\left(Bx\right)$
where $|A| =$ amplitude,
$B =$ number of cycles from $0$ to $2\pi$,
and period $=\frac{2\pi }{B}$.
2:
Here Amplitude =2 and period= $\frac{\pi }{2}$
i.e A=2
$B=\frac{2\pi }{period}=\frac{2\pi }{\frac{\pi }{2}}=4$
The required sine curve is $y=2\mathrm{sin}\left(4x\right)$
The graph could be drawn if required. |
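For completeness, a short matplotlib snippet (not part of the original answer, names are illustrative) that sketches exactly one cycle of $y=2\mathrm{sin}\left(4x\right)$ over $[0, \frac{\pi }{2}]$:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, np.pi / 2, 200)   # one period of y = 2 sin(4x)
y = 2 * np.sin(4 * x)

plt.plot(x, y)
plt.axhline(0, color="gray", linewidth=0.5)
plt.title("One cycle of y = 2 sin(4x)")
plt.xlabel("x")
plt.ylabel("y")
plt.show()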
Monotonicity and symmetry of solutions of $p$-Laplace equations, $1 < p < 2$, via the moving plane method
Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 4, Volume 26 (1998) no. 4, pp. 689-707.
@article{ASNSP_1998_4_26_4_689_0,
author = {Damascelli, Lucio and Pacella, Filomena},
title = {Monotonicity and symmetry of solutions of $p${-Laplace} equations, $1 < p < 2$, via the moving plane method},
journal = {Annali della Scuola Normale Superiore di Pisa - Classe di Scienze},
pages = {689--707},
publisher = {Scuola normale superiore},
volume = {Ser. 4, 26},
number = {4},
year = {1998},
zbl = {0930.35070},
mrnumber = {1648566},
language = {en},
url = {http://archive.numdam.org/item/ASNSP_1998_4_26_4_689_0/}
}
Damascelli, Lucio; Pacella, Filomena. Monotonicity and symmetry of solutions of $p$-Laplace equations, $1 < p < 2$, via the moving plane method. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 4, Volume 26 (1998) no. 4, pp. 689-707. http://archive.numdam.org/item/ASNSP_1998_4_26_4_689_0/
# What is the remainder when the function f(x)=x^3-4x^2+12 is divided by (x+2)?
Jan 11, 2018
$\textcolor{b l u e}{- 12}$
#### Explanation:
The Remainder theorem states that, when $f \left(x\right)$ is divided by $\left(x - a\right)$
$f \left(x\right) = g \left(x\right) \left(x - a\right) + r$
Where $g \left(x\right)$ is the quotient and $r$ is the remainder.
If for some $x$ we can make $g \left(x\right) \left(x - a\right) = 0$, then we have:
$f \left(a\right) = r$
From example:
${x}^{3} - 4 {x}^{2} + 12 = g \left(x\right) \left(x + 2\right) + r$
Let $x = - 2$
$\therefore$
${\left(- 2\right)}^{3} - 4 {\left(- 2\right)}^{2} + 12 = g \left(x\right) \left(\left(- 2\right) + 2\right) + r$
$- 12 = 0 + r$
$\textcolor{b l u e}{r = - 12}$
This theorem is just based on what we know about numerical division, i.e.
the divisor × the quotient + the remainder = the dividend.
$\therefore$
$\frac{6}{4} = 1$ + remainder 2.
$4 \times 1 + 2 = 6$
Jan 11, 2018
$\text{remainder } = - 12$
#### Explanation:
$\text{using the "color(blue)"remainder theorem}$
$\text{the remainder when "f(x)" is divided by "(x-a)" is } f \left(a\right)$
$\text{here } \left(x - a\right) = \left(x - \left(- 2\right)\right) \Rightarrow a = - 2$
$f \left(- 2\right) = {\left(- 2\right)}^{3} - 4 {\left(- 2\right)}^{2} + 12 = - 12$ |
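As an optional sanity check (not part of either answer above), the same division can be done with sympy, which returns the quotient and the remainder together:
from sympy import symbols, div

x = symbols('x')
quotient, remainder = div(x**3 - 4*x**2 + 12, x + 2, x)
print(quotient)    # x**2 - 6*x + 12
print(remainder)   # -12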
Does a smaller value of F from PLINK --het represent higher heterozygosity?
5 days ago
curious ▴ 500
Running command plink --het gives a column "F".
I read that F is essentially "1 - (HI/HS), where HI represents the individual's heterozygosity, and HS the subpopulation's heterozygosity".
From this definition it would seem that the lower the F value for a sample, the higher its heterozygosity (e.g. possibly contamination if F is low enough, inbreeding if it is high enough). Is that right?
I am also wondering what a "normal" range of F is for a randomly sampled population. Here they say to remove samples that are more than 3 standard deviation (SD) units from the mean, but what is a typical mean? 0.018?
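No answer was posted here, but for intuition: as far as I understand the .het output, PLINK computes F from observed and expected homozygous genotype counts, which is algebraically the same as 1 - (observed het)/(expected het). A tiny Python sketch with made-up counts illustrates the sign convention (lower F means more heterozygous than expected):
def plink_f(o_hom, e_hom, n_nm):
    # F = (O(HOM) - E(HOM)) / (N(NM) - E(HOM)), equivalently 1 - O(HET)/E(HET)
    return (o_hom - e_hom) / (n_nm - e_hom)

# more heterozygous calls than expected -> negative F (possible contamination)
print(plink_f(o_hom=550_000, e_hom=600_000, n_nm=1_000_000))   # -0.125
# fewer heterozygous calls than expected -> positive F (possible inbreeding)
print(plink_f(o_hom=650_000, e_hom=600_000, n_nm=1_000_000))   #  0.125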
# Dimensionless Formulation
To improve numerical stability, GYRE solves the separated equations by recasting them into a dimensionless form that traces its roots back to Dziembowski (1971).
## Variables
The independent variable is the fractional radius $$x \equiv r/R$$ and the dependent variables $$\{y_{1},y_{2},\ldots,y_{6}\}$$ are
\begin{split}\begin{align} y_{1} &= x^{2 - \ell}\, \frac{\txir}{r}, \\ y_{2} &= x^{2-\ell}\, \frac{\tP'}{\rho g r}, \\ y_{3} &= x^{2-\ell}\, \frac{\tPhi'}{gr}, \\ y_{4} &= x^{2-\ell}\, \frac{1}{g} \deriv{\tPhi'}{r}, \\ y_{5} &= x^{2-\ell}\, \frac{\delta \tS}{c_{p}}, \\ y_{6} &= x^{-1-\ell}\, \frac{\delta \tLrad}{L}. \end{align}\end{split}
## Oscillation Equations
The dimensionless oscillation equations are
\begin{split}\begin{align} x \deriv{y_{1}}{x} &= \left( \frac{V}{\Gammi} - 1 - \ell \right) y_{1} + \left( \frac{\ell(\ell+1)}{c_{1} \omegac^{2}} - \alphagam \frac{V}{\Gammi} \right) y_{2} + \alphagrv \frac{\ell(\ell+1)}{c_{1} \omegac^{2}} y_{3} + \delta y_{5}, \\ x \deriv{y_{2}}{x} &= \left( c_{1} \omegac^{2} - \fpigam \As \right) y_{1} + \left( 3 - U + \As - \ell \right) y_{2} - \alphagrv y_{4} + \delta y_{5}, \\ x \deriv{y_{3}}{x} &= \alphagrv \left( 3 - U - \ell \right) y_{3} + \alphagrv y_{4} \\ x \deriv{y_{4}}{x} &= \alphagrv \As U y_{1} + \alphagrv \frac{V}{\Gammi} U y_{2} + \alphagrv \ell(\ell+1) y_{3} - \alphagrv (U + \ell - 2) y_{4} - \alphagrv \delta \, U y_{5}, \\ x \deriv{y_{5}}{x} &= \frac{V}{\frht} \left[ \nabad (U - c_{1}\omegac^{2}) - 4 (\nabad - \nabla) + \ckapad V \nabla + \cdif \right] y_{1} + \mbox{} \\ & \frac{V}{\frht} \left[ \frac{\ell(\ell+1)}{c_{1} \omegac^{2}} (\nabad - \nabla) - \ckapad V \nabla - \cdif \right] y_{2} + \mbox{} \\ & \alphagrv \frac{V}{\frht} \left[ \frac{\ell(\ell+1)}{c_{1} \omegac^{2}} (\nabad - \nabla) \right] y_{3} + \alphagrv \frac{V \nabad}{\frht} y_{4} + \mbox{} \\ & \left[ \frac{V \nabla}{\frht} (4 \frht - \ckapS) + \dfrht + 2 - \ell \right] y_{5} - \frac{V \nabla}{\frht \crad} y_{6} \\ x \deriv{y_{6}}{x} &= \left[ \alphahfl \ell(\ell+1) \left( \frac{\nabad}{\nabla} - 1 \right) \crad - V \cepsad \right] y_{1} + \mbox{} \\ & \left[ V \cepsad - \ell(\ell+1) \crad \left( \alphahfl \frac{\nabad}{\nabla} - \frac{3 + \dcrad}{c_{1}\omegac^{2}} \right) \right] y_{2} + \mbox{} \\ & \alphagrv \left[ \ell(\ell+1) \crad \frac{3 + \dcrad}{c_{1}\omegac^{2}} \right] y_{3} + \left[ \cepsS - \alphahfl \frac{\ell(\ell+1)\crad}{\nabla V} + \ii \alphathm \omegac \cthk \right] y_{5} - \left[ 1 + \ell \right] y_{6}. \end{align}\end{split}
These equations are derived from the separated equations, but with the insertion of ‘switch’ terms (denoted $$\alpha$$) that allow certain pieces of physics to be altered. See the Physics Switches section for more details
For non-radial adiabatic calculations, the last two equations above are set aside and the $$y_{5}$$ terms dropped from the first four equations. For radial adiabatic calculations with reduce_order=.TRUE. (see the Oscillation Parameters section), the last four equations are set aside and the first two replaced by
\begin{split}\begin{align} x \deriv{y_{1}}{x} &= \left( \frac{V}{\Gammi} - 1 \right) y_{1} - \frac{V}{\Gamma_{1}} y_{2}, \\ x \deriv{y_{2}}{x} &= \left( c_{1} \omega^{2} + U - \As \right) y_{1} + \left( 3 - U + \As \right) y_{2}. \end{align}\end{split}
## Boundary Conditions
### Inner Boundary
When inner_bound='REGULAR', GYRE applies regularity-enforcing conditions at the inner boundary:
\begin{split}\begin{align} c_{1} \omega^{2} y_{1} - \ell y_{2} - \alphagrv \ell y_{3} &= 0, \\ \alphagrv \ell y_{3} - (2\alphagrv - 1) y_{4} &= 0, \\ y_{5} &= 0. \end{align}\end{split}
When inner_bound='ZERO_R', the first and second conditions are replaced with zero radial displacement conditions,
\begin{split}\begin{align} y_{1} &= 0, \\ y_{4} &= 0. \end{align}\end{split}
Likewise, when inner_bound='ZERO_H', the first and second conditions are replaced with zero horizontal displacement conditions,
\begin{split}\begin{align} y_{2} - y_{3} &= 0, \\ y_{4} &= 0. \end{align}\end{split}
### Outer Boundary
When outer_bound='VACUUM', GYRE applies vacuum surface pressure conditions at the outer boundary:
\begin{split}\begin{align} y_{1} - y_{2} &= 0 \\ \alphagrv U y_{1} + (\alphagrv \ell + 1) y_{3} + \alphagrv y_{4} &= 0 \\ (2 - 4\nabad V) y_{1} + 4 \nabad V y_{2} + 4 \frht y_{5} - y_{6} &= 0 \end{align}\end{split}
When outer_bound='DZIEM', the first condition is replaced by the Dziembowski (1971) outer mechanical boundary condition,
$\left\{ 1 + V^{-1} \left[ \frac{\ell(\ell+1)}{c_{1} \omega^{2}} - 4 - c_{1} \omega^{2} \right] \right\} y_{1} - y_{2} = 0.$
When outer_bound='UNNO'|'JCD', the first condition is replaced by the (possibly-leaky) outer mechanical boundary conditions described by Unno et al. (1989) and Christensen-Dalsgaard (2008), respectively. When outer_bound='ISOTHERMAL', the first condition is replaced by a (possibly-leaky) outer mechanical boundary condition derived from a local dispersion analysis of an isothermal atmosphere.
Finally, when outer_bound='GAMMA', the first condition is replaced by the outer mechanical boundary condition described by Ong & Basu (2020).
## Jump Conditions
Across density discontinuities, GYRE enforces conservation of mass, momentum and energy by applying the jump conditions
\begin{split}\begin{align} U^{+} y_{2}^{+} - U^{-} y_{2}^{-} &= y_{1} (U^{+} - U^{-}) \\ y_{4}^{+} - y_{4}^{-} &= -y_{1} (U^{+} - U^{-}) \\ y_{5}^{+} - y_{5}^{-} &= - V^{+} \nabad^{+} (y_{2}^{+} - y_{1}) + V^{-} \nabad^{-} (y_{2}^{-} - y_{1}) \end{align}\end{split}
Here, + (-) superscripts indicate quantities evaluated on the inner (outer) side of the discontinuity. $$y_{1}$$, $$y_{3}$$ and $$y_{6}$$ remain continuous across discontinuities, and therefore don't need these superscripts.
## Structure Coefficients
The various stellar structure coefficients appearing in the dimensionless oscillation equations are defined as follows:
$\begin{split}\begin{gather} V = -\deriv{\ln P}{\ln r} \qquad V_{2} = x^{-2} V \qquad \As = \frac{1}{\Gamma_{1}} \deriv{\ln P}{\ln r} - \deriv{\ln \rho}{\ln r} \qquad U = \deriv{\ln M_{r}}{\ln r} \\ c_1 = \frac{r^{3}}{R^{3}} \frac{M}{M_{r}} \qquad \fpigam = \begin{cases} \alphapi & \As > 0, x < x_{\rm atm} \\ \alphagam & \As > 0, x > x_{\rm atm} \\ 1 & \text{otherwise} \end{cases}\\ \nabla = \deriv{\ln T}{\ln P} \qquad \clum = x^{-3} \frac{\Lrad+\Lcon}{L} \qquad \crad = x^{-3} \frac{\Lrad}{L} \qquad \dcrad = \deriv{\ln \crad}{\ln r} \\ \frht = 1 - \alpharht \frac{\ii \omega \cthn}{4} \qquad \dfrht = - \alpharht \frac{\ii \omega \cthn \dcthn}{4 \frht} \\ \ckapad = \frac{\alphakar \kaprho}{\Gamma_{1}} + \nabad \alphakat \kapT \qquad \ckapS = - \upsT \alphakar \kaprho + \alphakat \kapT \\ \ceps = x^{-3} \frac{4\pi r^{3} \rho \epsnuc}{L} \qquad \cepsad = \ceps \epsad \qquad \cepsS = \ceps \epsS \\ \cdif = - 4 \nabad V \nabla + \nabad \left(V + \deriv{\ln \nabad}{\ln x} \right) \\ \cthn = \frac{\cP}{a c \kappa T^{3}} \sqrt{\frac{GM}{R^{3}}} \qquad \dcthn = \deriv{\ln \cthn}{\ln r} \\ \cthk = x^{-3} \frac{4\pi r^{3} \cP T \rho}{L} \sqrt{\frac{GM}{R^{3}}} \end{gather}\end{split}$
## Physics Switches
GYRE offers the capability to adjust the oscillation equations through a number of physics switches, controlled by parameters in the &osc namelist group. The list below summarizes the mapping between the switches appearing in the expressions above and the corresponding namelist parameters.
$$\alphagrv$$ (alpha_grv): Scaling factor for gravitational potential perturbations. Set to 1 for normal behavior, and to 0 for the Cowling (1941) approximation.
$$\alphathm$$ (alpha_thm): Scaling factor for the local thermal timescale. Set to 1 for normal behavior, to 0 for the non-adiabatic reversible (NAR) approximation (see Gautschy & Glatzel, 1990), and to a large value to approach the adiabatic limit.
$$\alphahfl$$ (alpha_hfl): Scaling factor for horizontal flux perturbations. Set to 1 for normal behavior, and to 0 for the non-adiabatic radial flux (NARF) approximation (see Townsend, 2003b).
$$\alphagam$$ (alpha_gam): Scaling factor for g-mode isolation. Set to 1 for normal behavior, and to 0 to isolate g modes as described by Ong & Basu (2020).
$$\alphapi$$ (alpha_pi): Scaling factor for p-mode isolation. Set to 1 for normal behavior, and to 0 to isolate p modes as described by Ong & Basu (2020).
$$\alphakar$$ (alpha_kar): Scaling factor for the opacity density partial derivative. Set to 1 for normal behavior, and to 0 to suppress the density part of the $$\kappa$$ mechanism.
$$\alphakat$$ (alpha_kat): Scaling factor for the opacity temperature partial derivative. Set to 1 for normal behavior, and to 0 to suppress the temperature part of the $$\kappa$$ mechanism.
$$\alpharht$$ (alpha_rht): Scaling factor for the time-dependent term in the radiative heat equation (see Unno & Spiegel, 1966). Set to 1 to include this term (Unno calls this the Eddington approximation), and to 0 to ignore the term.
anonymous 4 years ago: Give a unique example of each: an absolute value equation with two solutions, an absolute value equation with one solution, and an absolute value equation with no solutions.
1. anonymous
|x|=9 ← it could be 9 or -9; |x-2|=0 ← it has to be 2; |x|=-4 ← it can't be negative :)
2. anonymous
$\left| x \right|=1$ $\log_{} \left| x \right|=1$ $\left| x \right|=-1$
# Exercise 8-13 Contrasting Traditional and ABC Product Costs
Exercise 8-13 Contrasting Traditional and ABC Product Costs [LO1, LO5] Model X100 sells for $120 per unit whereas Model X200 offers advanced features and sells for $500 per unit. Management expects to sell 50,000 units of Model X100 and 5,000 units of Model X200 next year. The direct material cost per unit is $50 for Model X100 and $220 for Model X200. The company's total manufacturing overhead for the year is expected to be $1,995,000. A unit of Model X100 requires 2 direct labor-hours and a unit of Model X200 requires 5 direct labor-hours. The direct labor wage rate is $20 per hour.
Requirement 1:
(a) Calculate the predetermined overhead rate. (Round your answer to 2 decimal places. Omit the "$" sign in your response.) (b) Using this traditional approach, compute the product margins for X100 and X200. (Negative amounts should be indicated by a minus sign. Omit the "$" sign in your response.)
2.) Management is considering an activity-based costing system and would like to know what impact this would have on product costs. Preliminary analysis suggests that under activity-based costing, a total of $1,000,000 in manufacturing overhead cost would be assigned to Model X100 and a total of $600,000 would be assigned to Model X200. In addition, a total of $150,000 in nonmanufacturing overhead would be applied to Model X100 and a total of $350,000 would be applied to Model X200. Using the activity-based costing approach, compute the product margins for X100 and X200. (Negative amounts should be indicated by a minus sign. Omit the "$" sign in your response.)
## 1 Approved Answer
kanika b, 5 Ratings (9 Votes)
1. Under the traditional direct labor-hour based costing system, manufacturing overhead is applied to products using the predetermined overhead rate, computed as follows:
Predetermined overhead rate = Total manufacturing overhead / total direct labor-hours
Total direct labor-hours = 50,000 units of Model X100 @ 2.0 DLH per unit + 5,000 units of Model X200 @ 5.0 DLH per unit = 100,000 DLHs + 25,000 DLHs = 125,000 DLHs
Predetermined overhead rate = $1,995,000 / 125,000 direct labor-hours = $15.96 per DLH
2. Consequently, the product margins using the traditional approach would be computed as follows:
| | Model X100 | Model X200 | Total |
| --- | --- | --- | --- |
| Sales (50,000 units × $120; 5,000 units × $500) | 6,000,000 | 2,500,000 | 8,500,000 |
| Direct materials (50,000 units × $50; 5,000 units × $220) | 2,500,000 | 1,100,000 | 3,600,000 |
| Direct labor (50,000 units × 2 DLH × $20; 5,000 units × 5 DLH × $20) | 2,000,000 | 500,000 | 2,500,000 |
| Manufacturing overhead applied @ $15.96 per direct labor-hour | 1,596,000 | 399,000 | 1,995,000 |
Total manufacturing cost...
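Since the posted answer is cut off, here is a small Python sketch that simply redoes the arithmetic for both requirements, using only the figures given in the problem statement (the ABC overhead assignments are the ones stated in requirement 2):
# units, prices and per-unit costs from the problem statement
units    = {"X100": 50_000, "X200": 5_000}
price    = {"X100": 120,    "X200": 500}
material = {"X100": 50,     "X200": 220}
dlh      = {"X100": 2,      "X200": 5}
wage     = 20

total_dlh = sum(units[m] * dlh[m] for m in units)   # 125,000 DLH
pohr = 1_995_000 / total_dlh                         # $15.96 per DLH

# traditional margins (expected: X100 = -96,000; X200 = +501,000)
for m in units:
    sales = units[m] * price[m]
    cost  = units[m] * (material[m] + dlh[m] * wage + dlh[m] * pohr)
    print(m, "traditional margin:", round(sales - cost))

# ABC margins using the overhead assignments given in requirement 2
# (expected: X100 = +350,000; X200 = -50,000)
abc_mfg    = {"X100": 1_000_000, "X200": 600_000}
abc_nonmfg = {"X100": 150_000,   "X200": 350_000}
for m in units:
    sales = units[m] * price[m]
    cost  = units[m] * (material[m] + dlh[m] * wage) + abc_mfg[m] + abc_nonmfg[m]
    print(m, "ABC margin:", sales - cost)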
# SimpleImageStim¶
class psychopy.visual.SimpleImageStim(*args, **kwargs)[source]
A simple stimulus for loading images from a file and presenting at exactly the resolution and color in the file (subject to gamma correction if set).
Unlike the ImageStim, this type of stimulus cannot be rescaled, rotated or masked (although flipping horizontally or vertically is possible). Drawing will also tend to be marginally slower, because the image isn't preloaded to the graphics card. The slight advantage, however, is that the stimulus will always be in its original aspect ratio, with no interpolation or other transformation, and it is slightly faster to load into PsychoPy.
# Which of the following facts regarding the movement of anions in the solution is correct
$\begin{array}{1 1}\text{Towards cathode in an electrolytic cell and towards anode in a galvanic cell}\\\text{Towards anode in an electrolytic cell and towards cathode in a galvanic cell}\\\text{Towards cathode in both types of cells}\\\text{Towards anode in both types of cells}\end{array}$
Answer : Towards anode in both types of cells
Anions move towards anode in both galvanic and electrolytic cells. |
# derivation of rotation matrix using polar coordinates
We derive formally the expression for the rotation of a two-dimensional vector $\boldsymbol{v}=a\boldsymbol{x}+b\boldsymbol{y}$ by an angle $\phi$ counter-clockwise. Here $\boldsymbol{x}$ and $\boldsymbol{y}$ are perpendicular unit vectors that are oriented counter-clockwise (the usual orientation).
In terms of polar coordinates, $\boldsymbol{v}$ may be rewritten:
$\boldsymbol{v} = r(\cos\theta\,\boldsymbol{x}+\sin\theta\,\boldsymbol{y}),\qquad a = r\cos\theta,\; b = r\sin\theta,$
for some angle $\theta$ and radius $r\geq 0$. To rotate the vector $\boldsymbol{v}$ by $\phi$ really means to shift its polar angle by a constant amount $\phi$ but leave its polar radius fixed. Therefore, the result of the rotation must be:
$\boldsymbol{v}^{\prime} = r\bigl(\cos(\theta+\phi)\,\boldsymbol{x}+\sin(\theta+\phi)\,\boldsymbol{y}\bigr).$
Expanding using the angle addition formulae, we obtain
$\boldsymbol{v}^{\prime} = r\bigl((\cos\theta\cos\phi-\sin\theta\sin\phi)\,\boldsymbol{x}+(\sin\theta\cos\phi+\cos\theta\sin\phi)\,\boldsymbol{y}\bigr) = (a\cos\phi-b\sin\phi)\,\boldsymbol{x}+(b\cos\phi+a\sin\phi)\,\boldsymbol{y}.$
When this transformation is written out in $[\boldsymbol{x},\boldsymbol{y}]$-coordinates, we obtain the formula for the rotation matrix:
$\boldsymbol{v}^{\prime}=\begin{bmatrix}\cos\phi&-\sin\phi\\ \sin\phi&\cos\phi\end{bmatrix}\begin{bmatrix}a\\ b\end{bmatrix}\,.$
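As a quick numerical sanity check (an addition, not part of the original derivation), numpy confirms that applying the matrix agrees with shifting the polar angle:
import numpy as np

a, b, phi = 3.0, 4.0, 0.7
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# rotate via the matrix
v_rot = R @ np.array([a, b])

# rotate via polar coordinates: keep r, shift theta by phi
r, theta = np.hypot(a, b), np.arctan2(b, a)
v_polar = r * np.array([np.cos(theta + phi), np.sin(theta + phi)])

print(np.allclose(v_rot, v_polar))  # True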
Chin. Phys. B, 2022, Vol. 31(5): 050203 DOI: 10.1088/1674-1056/ac5986
RAPID COMMUNICATION Prev Next
# Gauss quadrature based finite temperature Lanczos method
Jian Li(李健) and Hai-Qing Lin(林海青)
Beijing Computational Science Research Center, Beijing 100193, China
Abstract The finite temperature Lanczos method (FTLM), which is an exact diagonalization method intensively used in quantum many-body calculations, is formulated in the framework of orthogonal polynomials and Gauss quadrature. The main idea is to reduce finite temperature static and dynamic quantities into weighted summations related to one- and two-dimensional Gauss quadratures. Then lower order Gauss quadrature, which is generated from Lanczos iteration, can be applied to approximate the initial weighted summation. This framework fills the conceptual gap between FTLM and kernel polynomial method, and makes it easy to apply orthogonal polynomial techniques in the FTLM calculation.
Keywords: exact diagonalization; Lanczos method; orthogonal polynomials
Received: 12 January 2022 Revised: 15 February 2022 Accepted manuscript online:
PACS: 02.60.Dc (Numerical linear algebra) 02.60.-x (Numerical approximation and analysis) 75.10.Jm (Quantized spin models, including quantum spin frustration) 75.40.Mg (Numerical simulation studies)
Fund: This work is supported by the National Natural Science Foundation of China (Grant Nos. 11734002 and U1930402). All numerical computations were carried out on the Tianhe-2JK at the Beijing Computational Science Research Center (CSRC).
Corresponding Authors: Hai-Qing Lin, E-mail: haiqing0@csrc.ac.cn
|
# The sum of two consecutive odd integers is 124, what are the integers?
Feb 7, 2016
$61$ and $63$
#### Explanation:
An odd integer can be written as:
$\left(2 n + 1\right)$
If the odd integers are consecutive, then the next odd integer will be:
$\left(2 \left(n + 1\right) + 1\right) = \left(2 n + 3\right)$
Given that the sum of these integers comes to $124$ we can write an equation and then solve for $n$:
$\left(2 n + 1\right) + \left(2 n + 3\right) = 124$
$4 n + 4 = 124$
$4 n = 120$
$\to n = 30$.
That would mean that our odd integers are:
$2 \left(30\right) + 1 = 61$
and $2 \left(30\right) + 3 = 63$
And of course $61 + 63 = 124$. |
# Genome-wide association analysis of the strength of the MAMP-elicited defense response and resistance to target leaf spot in sorghum
## Abstract
Plants have the capacity to respond to conserved molecular features known as microbe-associated molecular patterns (MAMPs). The goal of this work was to assess variation in the MAMP response in sorghum, to map loci associated with this variation, and to investigate possible connections with variation in quantitative disease resistance. Using an assay that measures the production of reactive oxygen species, we assessed variation in the MAMP response in a sorghum association mapping population known as the sorghum conversion population (SCP). We identified consistent variation for the response to chitin and flg22—an epitope of flagellin. We identified two SNP loci associated with variation in the flg22 response and one with the chitin response. We also assessed resistance to Target Leaf Spot (TLS) disease caused by the necrotrophic fungus Bipolaris cookei in the SCP. We identified one strong association on chromosome 5 near a previously characterized disease resistance gene. A moderately significant correlation was observed between stronger flg22 response and lower TLS resistance. Possible reasons for this are discussed.
## Introduction
Sorghum (Sorghum bicolor) has a diploid genome of ~ 730 Mb with 10 chromosomes1,2. It is a widely-grown cereal crop used as feed or silage for animal consumption, for bio-fuel production and as gluten-free grain for human consumption and is better adapted to grow under high heat and drought conditions than other agriculturally important crops like corn and wheat. These agronomically-important traits make the species an attractive crop for the mass production of grains and bio-fuel under challenging growing conditions.
The warm and humid conditions under which much sorghum is grown support the growth of a wide variety of foliar fungi. Among many diseases, target leaf spot (TLS) caused by the necrotrophic fungus Bipolaris cookei, is one of the most economically-important fungal diseases of sorghum in the southeastern US, causing major yield losses3. TLS causes distinctive oval or elliptical reddish-purple spots that eventually coalesce during disease progression.
The genetic basis of TLS resistance in sorghum has been the subject of several studies. A major recessive resistance gene ds1 on chromosome 5 was identified as a loss of function allele of a gene encoding a leucine-rich repeat receptor kinase4. Other work identified QTL for three different fungal diseases; target leaf spot, zonate leaf spot and drechslera leaf blight, co-localized on chromosome 65. A third study identified a TLS resistance QTL on chromosome 3, as well as the previously-reported chromosome 6 QTL6. A recent study identified novel QTL on chromosomes 3, 4 and 9 as well as a strong QTL on chromosome 5 near the ds1 locus7.
Plants possess cell-surface receptors known as pattern recognition receptors (PRRs) that mediate recognition of highly conserved structural molecules associated with microbes known as microbe-associated molecular patterns (MAMPs). The two best-studied MAMPs are bacterial flagellin, especially its flg22 epitope, and chitin, a component of the fungal cell wall8,9. MAMP recognition elicits a basal response at the infection site known as MAMP-triggered immunity (MTI) which often includes phenomena such as callose deposition, changes in membrane ion flux, changes in phytohormone concentrations, induction or repression of plant defense-related genes, and production of reactive oxygen species (ROS) and nitric oxide (NO)10. In some cases, a pathogen adapted to a particular host can overcome MTI by producing so-called effector proteins, which are usually introduced into the cytoplasm and may suppress MTI. Effectors are sometimes recognized by cytoplasmic receptors known as R proteins, eliciting a strong response known as effector-triggered immunity (ETI) which is quantitatively stronger though, qualitatively somewhat similar to MTI11,12.
Non-host resistance can be defined as: “Resistance shown by an entire plant species against all known genetic variants (or isolates) of a specific parasite or pathogen”13,14. It has been hypothesized that MTI is a significant cause of non-host resistance, as most non-adapted pathogens cannot subvert the MTI-based defenses of their non-host plants15.
Host resistance can be subdivided into qualitative and quantitative resistance. Qualitative resistance is typically based on the action of a single, large-effect gene, while quantitative resistance is mediated by large numbers of small-effect genes16,17. There is some evidence that variation in the strength of MTI may underlie some part of quantitative resistance. Genetic variation in the strength of the MTI response has been documented in a number of plant species including brassicas18,19,20,21, maize22, soybean23, and tomato24 and in several of these cases, QTL controlling the variation were identified. In particular, the fact that several genes resembling PRRs confer quantitative resistance in various plant species16 and that the strength of flg22 perception is negatively correlated with susceptibility to Pseudomonas syringae in Arabidopsis20 suggest that there may be some connection between variation in the MTI response and quantitative disease resistance. However, the relationship between these traits is not well understood, especially in crop plants. The objectives of this study were to characterize the genetic control of the MAMP response and TLS resistance in a diverse panel of sorghum germplasm and to determine if there was evidence of shared genetic control of these traits. Specifically, we wanted to determine whether a stronger MAMP response was indicative of stronger quantitative resistance.
## Materials and methods
### Plant and pathogen materials
A sorghum association mapping population known as the sorghum conversion population (SCP) was provided by Dr. Pat Brown at the University of Illinois (now at UC Davis). It has been described previously25 and is a collection of diverse lines converted to photoperiod-insensitivity and smaller stature to facilitate the growth and development of the plants in US environments26. 510 lines from this population were used in this study although due to bad germination and other quality control issues, not all the lines were used in the analysis of all three traits. Ultimately data from 345 lines were used for the analysis of the chitin response, 472 lines for the flg22 response, and 456 for TLS resistance. B. cookei strain LSLP18 was obtained from Dr. Burt Bluhm at the University of Arkansas.
### MAMP response measurement
Two different MAMPs were used in this study: flg22 (Genscript catalog # RP19986) and chitin (Sigma catalog # C9752). Sorghum plants were grown in inserts laid on flats filled with soil (33% Sunshine Redi-Earth Pro Growing Mix) in the greenhouse. Plants were watered the day before sample collection to avoid extra leaf moisture on the day of collection.
The lines were randomized and, for logistical reasons, were planted in batches of 60 lines. For each line, three ‘pots’ were planted with two seeds per line. Subsequent batches were planted as soon as the previous batch had been processed until the entire population had been assessed. Two experimental runs were conducted for both MAMPs with genotypes re-randomized in each of the two runs.
ROS assays were carried out as previously described27. Briefly, for each line, six seeds were planted in 3 different pots. From the resulting seedlings, three were selected based on uniformity. Seedlings that looked unusual or were significantly taller or shorter than the majority were not used. Four leaf discs of 3 mm diameter were excised from the broadest part of the 4th leaf of three different 15-day old sorghum plants. One disc per leaf from two plants and two discs from one plant, with the second disc becoming the water control (see below). The discs were individually floated on 50 µl H20 in a black 96-well plate, sealed with an aluminum seal to avoid exposure to light, and kept at room temperature overnight. The next morning a reaction solution was made using 2 mg/ml chemiluminescent probe L-012 (Wako, catalog # 120-04891), 2 mg/ml horseradish peroxidase (Type VI-A, Sigma-Aldrich, catalog # P6782), and 100 mg/ml Chitin or 2 μM of Flg22. 50 µl of this reaction solution was added to three of the four wells. The fourth well was a mock control, to which the reaction solution excluding the MAMP was added. Four blank wells containing only water were also included in each plate.
After adding the reaction solution, luminescence was measured using a Synergy 2 multi-detection microplate reader (BioTek) every 2 min for 1 h, giving 31 readings per well. The sum of all 31 readings was calculated to give the value for each well. The estimated value of the MAMP response for each genotype was calculated as (the average value of the three experimental wells minus the mock well value) minus the average blank well value. The blank well values were consistently close to zero.
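A minimal sketch of this arithmetic (our illustration, not the authors' analysis code; the well identifiers and data layout are assumptions):

```python
import numpy as np

def mamp_response(well_readings, experimental_wells, mock_well, blank_wells):
    """well_readings: dict mapping a well ID to its 31 luminescence readings.
    Returns the estimated MAMP response for one genotype on one plate."""
    well_value = {w: np.sum(r) for w, r in well_readings.items()}   # sum of the 31 readings
    experimental = np.mean([well_value[w] for w in experimental_wells])
    blank = np.mean([well_value[w] for w in blank_wells])
    return (experimental - well_value[mock_well]) - blank
```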
Leaf discs of Nicotiana benthamiana, one highly responsive sorghum line (SC0003), and one weakly responsive sorghum line (PI 6069) were also included as controls in each 96-well plate for quality control purposes.
### B. cookei inoculum preparation and inoculation
B. cookei inoculum was prepared as described previously28. Briefly, sorghum grains were soaked in water for three days, rinsed, scooped into 1 L conical flasks and autoclaved for an hour at 15 psi and 121 °C. The grains were then inoculated with about 5 ml of macerated mycelia from a fresh culture of the B. cookei LSLP18 isolate and left for 2 weeks at room temperature, shaking the flasks every 3 days. After 2 weeks, the fungus-infested sorghum grains were air-dried and then stored at 4 °C until field inoculation. The same inoculum was used for the entire trial and made fresh every year. For inoculation, 6–10 infested grains were placed into the whorl of 4–5 week old sorghum plants. The spores produced from these fungi initiated infection in the young sorghum plants within a week.
### Seed preparation
Before planting in the field sorghum seed was treated with a fungicide, insecticide, and safener mixture containing ~ 1% Spirato 480 FS fungicide, 4% Sebring 480 FS fungicide, 3% Sorpro 940 ES seed safener. Then the seeds were air-dried for 3 days which provided a thin coating of this mix around the seeds. The safener allowed the use of the herbicide Dual Magnum as a pre-emergence treatment.
### Evaluation of Target Leaf Spot resistance
The SCP was planted at the Central Crops Research Station in Clayton, NC on June 14–15 2017 and June 20, 2018 in a randomized complete block design with two experimental replications in each case. Experiments were planted in 1.8 m single rows with a 0.9 m row width using 10 seeds per plot. Two border rows were planted around the periphery of each experiment to prevent edge effects. The experiments were inoculated on July 20, 2017 and July 20, 2018, at which point the sorghum plants were at growth stage 3. Ratings were taken on a one to nine scale (Fig. S2), where plants showing no signs of disease were scored as nine and completely dead plants were scored as one. Two ratings were taken in 2017 and four in 2018, starting two weeks after inoculation each year. sAUDPC (standardized area under the disease progress curve) was calculated as described previously29,30.
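For reference, a minimal sketch of a standard sAUDPC computation (trapezoidal AUDPC divided by the length of the rating period, in the spirit of refs. 29,30); this is our illustration rather than the authors' code, and the example scores and rating dates are made up:

```python
import numpy as np

def saudpc(scores, days):
    """Standardized area under the disease progress curve:
    trapezoidal AUDPC divided by the length of the rating period."""
    scores, days = np.asarray(scores, float), np.asarray(days, float)
    audpc = np.trapz(scores, days)
    return audpc / (days[-1] - days[0])

# Hypothetical ratings (1-9 scale) taken 14, 21, 28 and 35 days after inoculation
print(saudpc([8.0, 6.5, 5.0, 4.0], [14, 21, 28, 35]))
```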
### Statistical analyses
All statistical analysis of phenotypic data was performed using SAS V9.4 software. The LSmeans of the two replicates for each year were calculated and these were used in turn to calculate the overall LS means. Analysis of variance (ANOVA) and least squares (LS) means were calculated using the Proc Mixed and Proc GLM procedures in SAS, respectively. Correlations were calculated using the CORR procedure of SAS31.
### Phenotypic data transformation for association analysis
The phenotypic distribution was right skewed for the flg22 and chitin-elicited ROS response traits. From a simple ANOVA we determined that higher predicted score values of these phenotypic traits were moderately associated with higher residuals. Therefore, natural logarithm and square root transformations were applied to the raw scores of chitin and flg22, respectively. After transformation, the phenotypic distribution of each trait was less skewed and the relationship between residual and predicted values was improved. Transformed data were used in the downstream association analysis. Each trial was analyzed separately with the mixed model procedure in SAS software version 9.4 (SAS Institute, 2019). For the chitin and flg22 response, a best linear unbiased estimator (BLUE) was obtained to estimate each line mean phenotypic value by a mixed model considering inbred lines as fixed effects and replications as random effects. Similarly, for TLS, the data of both years were combined by using a mixed linear model across years considering years and replication within years as random effects and inbred lines as fixed. All the original phenotypic data used for analysis is provided in File S1.
### Genotypic data
All genotypic data used for this study are available upon request from the corresponding author. Genotypic data for the SCP were obtained from Dr. Tiffany Jamann and Dr. Patrick Brown (University of Illinois). The original array consisted of ~ 1.12 million SNPs derived from whole-genome sequencing. We used the genotypes of each set of plants with phenotypic data described above. Each data set was first filtered to exclude SNPs with less than 5% minor allele frequency (MAF) and more than 10% heterozygosity. Linkage disequilibrium-based pruning of genotypic data was performed in software Plink v1.932. After pruning, a set of ~ 58 K SNPs were used to compute the kinship matrix in Tassel 533. Pruned data was also used to perform principal component analysis (PCA) with JMP Genomics 9 (SAS, Institute. 2019). Based on PCA, approximately 20% of the variability was accounted for by the first three principal components. To control for population structure in the association analysis, we excluded 37 inbred lines that explain more than 7% of the variability (Fig. S1). After removing outlier inbred lines, a second filter (< 5% MAF and > 10% heterozygote sites) was performed in each data set. The final sub-set of genotypes used in Genome wide Association Studies (GWAS) contained ~ 755 K SNPs for TLS and flg22 and ~ 750 K SNPs for Chitin. The genotypic datasets used for the analysis of the TLS, flg22 and chitin analyses are available from the corresponding authors.
### Association analysis
Genome-wide association analysis based on a mixed linear model (MLM) was performed in Tassel 533. The MLM model used was y = Xβ + Zu + e, where y is the vector of phenotypes (BLUEs), β is a vector of fixed effects, including the SNP marker tested, u is a vector of random additive effects (inbred lines), X and Z are the corresponding design matrices, and e is a vector of random residuals. The variance of random line effects was modeled as Var(u) = K$\sigma_a^2$, where K is the n × n matrix of pairwise kinship coefficients and $\sigma_a^2$ is the estimated additive genetic variance. A significance threshold of 1/m, where m is the number of markers tested, was used34,35.
### Candidate gene selection
Genes within 100 Kb of the highly significant markers were considered candidate genes. Identification and annotation of the candidate genes were performed using the maize BTx623 reference genome v3 available on the Ensembl Plants browser. Functional annotation of the candidate gene was based on EnsemblPlants and Gramene annotation.
## Results and discussion
### Evaluation and mapping of TLS resistance
The SCP was assessed in the field for TLS resistance in 2017 and 2018, in randomized complete blocks with two complete replications per year. We observed substantial variation in TLS resistance in the SCP (Fig. 1A). The two replicates in each year were significantly correlated (0.52 and 0.68 in 2017 and 2018 respectively, p < 0.0001) and the LSmean scores were significantly correlated between years (0.45, p < 0.0001) (Table 1). ANOVA analysis indicated that the genotype effect was significant (Table 2).
Association analysis using the LSMeans of the 492 lines that were scored identified a single highly significant association on chromosome 5 (Fig. 1B). Table 3 shows the parameters associated with this locus and details predicted genes located 100 Kb either side in the Btx623 genome. One of these genes is ds1, a leucine-rich repeat serine/threonine protein kinase gene that was previously reported as a major TLS resistance gene4. It seems very likely that this gene underlies the major QTL identified in the SCP.
### Evaluation and mapping of the MAMP response
To assess variation in basal immune response, we measured ROS production in response to flg22 and chitin treatment in the SCP in two full replications. We observed significant variation in response to both MAMPs (Fig. 2). Significant correlations were observed between replicates in both cases (0.5 and 0.38 for flg22 and chitin respectively, p < 0.0001, Table 1). ANOVA indicated that genotype effects were highly significant for both traits (Table 1).
The LSmeans of the flg22 and chitin responses were not significantly correlated, though replicate 2 of the flg22 response was somewhat correlated with rep1 and rep2 of the chitin response (0.20 and 0.17 respectively, p < 0.01 and < 0.05). In previous work, we observed significant correlations between flg22 and chitin responses measured using the same ROS plate assay, as well as shared QTL, in a maize recombinant inbred line mapping population22. The lack of correlation here is therefore somewhat surprising. It is not clear whether this reflects fundamental differences between the maize and sorghum MAMP responses. Vetter et al21 found a negligible correlation in plant growth responses between the bacterial MAMPs EF-Tu and flagellin in Arabidopsis; we are not aware of other published work comparing variation in the responses to two different MAMPs.
The phenotypic data were transformed as described and used for association analysis. Q-Q plots did not indicate an excess of false positives (Fig. S3). Two associations with the flg22 response were detected on chromosome 4. The significance threshold was calculated using a Bonferroni multiple comparison test correction which is based on the number of markers used. Since we used a relatively high number of markers (more than 750,000) this threshold was consequently relatively conservative. It should be noted that the associations with flg22 were below the threshold for significance we used but we are nevertheless reporting them as they are the highest associations detected and, given the conservative significance threshold used, are nevertheless likely to reflect real associations. One significantly associated locus was detected on chromosome 5 for the chitin response (Fig. 3). Table 3 shows the parameters associated with these associated loci and details predicted genes. While it is premature to assign causation, it is interesting to note that several of the candidate genes associated with flg22- and chitin-induced responses have homology to genes involved in the defense response or disease resistance in other systems. For instance, the durable wheat rust resistance gene LR34 is an ABC transporter36 while genes involved in the auxin response37 and the ubiquitin-mediated protein-degradation38 pathway have been implicated in disease resistance in other systems.
### Comparison of TLS resistance and MAMP response data
To understand whether variation in the response elicited by flg22 and chitin is connected to TLS resistance, we looked for correlation between MAMP response and TLS disease scores. Despite chitin being an integral component of the fungal cell wall and TLS being a fungal disease, we did not observe a significant correlation between the traits. We did observe a small but moderately-significant negative correlation between the flg22 response and TLS scores (− 0.13*, p value < 0.05), indicating that a higher flg22 response was somewhat associated with higher susceptibility. This was unexpected both because flg22 is a bacterial MAMP and TLS is a fungal disease and because we were expecting an association between increased MAMP response and increased resistance. Instead, we observed an opposite relationship, albeit quite weak. Two possible explanations occur to us. Since the correlation is relatively low, this may not be a meaningful correlation. Alternatively, several necrotrophic pathogens of a similar type to Bipolaris cookei have effectors that both induce ETI and facilitate pathogenesis. It appears that in these cases elicitation of HR allows the pathogen to grow on the resulting dead host cells39. It is possible that this correlation is due to a similar subversion of the plant defense machinery.
A recent companion study measuring the flg22 response and TLS resistance in two sorghum recombinant inbred line (RIL) populations did not identify correlations between the 2 traits or any colocalizing QTL7. In the current study, we used the SCP which provided higher genetic and phenotypic diversity than had been available from the two RIL populations but, overall this study also did not produce evidence to support our original hypothesis that a stronger MAMP response is predictive of stronger QDR. However, there are a number of caveats that make it impossible to draw general conclusions.
Perhaps the major caveat is that quantitation of the MTI response is complex. It depends on what MAMP is used and how the response is measured. Low correlations between responses to different MAMPs have been reported previously19,21,24, although, as mentioned above, Zhang et al22 observed a significant correlation between flg22 and chitin responses in maize. Moreover, the MAMP response can be quantified in a number of different ways, including measurement of ROS or NO production, MAP kinase phosphorylation, mRNA accumulation levels, lignin and cell wall-bound phenols, callose deposition, seedling growth inhibition and MAMP-induced pathogen resistance18,19,40. Relative line rankings vary significantly depending on the assay used19,22. Our preliminary data also suggests that the MAMP response varies with the age of the plant and the individual leaf on the plant. Essentially, quantification of the MAMP response is complex and inferences may vary significantly depending on how the response is elicited and how measured7.
The other major caveat is of course resistance to only one disease was assessed. It may be that resistance to certain diseases, perhaps those that are less well adapted to the host and cannot completely suppress basal resistance mechanisms, may be more associated with the MAMP response. As more diseases are assessed on the SCP we may be able to re-evaluate our hypothesis in the light of multiple comparisons.
## References
1. Price, H. J. et al. Genome evolution in the genus Sorghum (Poaceae). Ann. Bot. 95, 219–227 (2005).
2. Paterson, A. H. et al. The Sorghum bicolor genome and the diversification of grasses. Nature 457, 551–556 (2009).
3. Zaccaron, A. Z. & Bluhm, B. H. The genome sequence of Bipolaris cookei reveals mechanisms of pathogenesis underlying target leaf spot of sorghum. Sci. Rep. 7, 17217 (2017).
4. Kawahigashi, H. et al. Positional cloning of ds1, the target leaf spot resistance gene against Bipolaris sorghicola in sorghum. Theor. Appl. Genet. 123, 131–142 (2011).
5. Mohan, S. M. et al. Co-localization of quantitative trait loci for foliar disease resistance in sorghum. Plant Breed. 128, 532–535 (2009).
6. Murali Mohan, S. et al. Identification of quantitative trait loci associated with resistance to foliar diseases in sorghum [Sorghum bicolor (L.) Moench]. Euphytica 176, 199–211 (2010).
7. Kimball, J. et al. Identification of QTL for Target Leaf Spot resistance in Sorghum bicolor and investigation of relationships between disease resistance and variation in the MAMP response. Sci. Rep. 9, 18285 (2019).
8. Zipfel, C. et al. Bacterial disease resistance in Arabidopsis through flagellin perception. Nature 428, 764–767 (2004).
9. Kaku, H. et al. Plant cells recognize chitin fragments for defense signaling through a plasma membrane receptor. Proc. Natl. Acad. Sci. U. S. A. 103, 11086–11091 (2006).
10. Newman, M.-A., Sundelin, T., Nielsen, J. T. & Erbs, G. MAMP (microbe-associated molecular pattern) triggered immunity in plants. Front. Plant Sci. 4, 139 (2013).
11. Bent, A. F. & Mackey, D. Elicitors, effectors, and R genes: the new paradigm and a lifetime supply of questions. Annu. Rev. Phytopathol. 45, 399–436 (2007).
12. Bittel, P. & Robatzek, S. Microbe-associated molecular patterns (MAMPs) probe plant immunity. Curr. Opin. Plant Biol. 10, 335–341 (2007).
13. Heath, M. C. Nonhost resistance and nonspecific plant defenses. Curr. Opin. Plant Biol. 3, 315–319 (2000).
14. Heath, M. C. A generalized concept of host-parasite specificity. Phytopathology 71, 1121–1123 (1981).
15. Lipka, U., Fuchs, R. & Lipka, V. Arabidopsis non-host resistance to powdery mildews. Curr. Opin. Plant Biol. 11, 404–411 (2008).
16. Nelson, R., Wiesner-Hanks, T., Wisser, R. & Balint-Kurti, P. Navigating complexity to breed disease-resistant crops. Nat. Rev. Genet. 19, 21–33 (2018).
17. Poland, J. A., Balint-Kurti, P. J., Wisser, R. J., Pratt, R. C. & Nelson, R. J. Shades of gray: the world of quantitative disease resistance. Trends Plant Sci. 14, 21–29 (2009).
18. Lloyd, S. R., Schoonbeek, H.-J., Trick, M., Zipfel, C. & Ridout, C. J. Methods to study PAMP-triggered immunity in brassica species. Mol. Plant Microbe Interact. 27, 286–295 (2014).
19. Lloyd, S. R., Ridout, C. J. & Schoonbeek, H.-J. Methods to quantify PAMP-triggered oxidative burst, MAP kinase phosphorylation, gene expression, and lignification in brassicas. In Plant Pattern Recognition Receptors: Methods and Protocols 325–335 (2017).
20. Vetter, M. M. et al. Flagellin perception varies quantitatively in Arabidopsis thaliana and its relatives. Mol. Biol. Evol. 29, 1655–1667 (2012).
21. Vetter, M., Karasov, T. L. & Bergelson, J. Differentiation between MAMP triggered defenses in Arabidopsis thaliana. PLoS Genet. 12, e1006068 (2016).
22. Zhang, X., Valdés-López, O., Arellano, C., Stacey, G. & Balint-Kurti, P. Genetic dissection of the maize (Zea mays L.) MAMP response. Theor. Appl. Genet. 130, 1155–1168 (2017).
23. Valdes-Lopez, O. et al. Identification of quantitative trait loci controlling gene expression during the innate immunity response of soybean. Plant Physiol. 157, 1975–1986 (2011).
24. Hind, S. R. et al. Tomato receptor FLAGELLIN-SENSING 3 binds flgII-28 and activates the plant immune system. Nat. Plants 2, 16128 (2016).
25. Thurber, C. S., Ma, J. M., Higgins, R. H. & Brown, P. J. Retrospective genomic analysis of sorghum adaptation to temperate-zone grain production. Genome Biol. 14, R68 (2013).
26. Stephens, J. C., Miller, F. R. & Rosenow, D. T. Conversion of alien sorghums to early combine genotypes. Crop Sci. 7, 396 (1967).
27. Samira, R. et al. Quantifying MAMP-induced production of reactive oxygen species in sorghum and maize. Bio-protocol 9, e3304 (2019).
28. Sermons, S. M. & Balint-Kurti, P. J. Large scale field inoculation and scoring of maize southern leaf blight and other maize foliar fungal diseases. Bio-protocol 8, e2745 (2018).
29. Campbell, C. L. & Madden, L. V. Introduction to Plant Disease Epidemiology 192–194 (Wiley, New York, 1990).
30. Shaner, G. & Finney, P. E. The effect of nitrogen fertilizer on expression of slow mildewing resistance in Knox wheat. Phytopathology 67, 1051–1056 (1977).
31. SAS Institute Inc. SAS 9.2 Help and Documentation (SAS, Cary, NC, 2000–2004).
32. Purcell, S. et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am. J. Hum. Genet. 81, 559–575 (2007).
33. Bradbury, P. et al. TASSEL: software for association mapping of complex traits in diverse samples. Bioinformatics 23, 2633–2635 (2007).
34. Xu, Y. et al. Genome-wide association mapping of starch pasting properties in maize using single-locus and multi-locus models. Front. Plant Sci. 9, 1311 (2018).
35. Zhang, Y.-M., Jia, Z. & Dunwell, J. M. Editorial: the applications of new multi-locus GWAS methodologies in the genetic dissection of complex traits. Front. Plant Sci. 10 (2019).
36. Krattinger, S. G. et al. A putative ABC transporter confers durable resistance to multiple fungal pathogens in wheat. Science 323, 1360–1363 (2009).
37. Ding, X. et al. Activation of the indole-3-acetic acid-amido synthetase GH3-8 suppresses expansin expression and promotes salicylate- and jasmonate-independent basal immunity in rice. Plant Cell 20, 228–240 (2008).
38. Goritschnig, S., Zhang, Y. & Li, X. The ubiquitin pathway is required for innate immunity in Arabidopsis. Plant J. 49, 540–551 (2007).
39. Lorang, J. M. Necrotrophic exploitation and subversion of plant defense: a lifestyle or just a phase, and implications in breeding resistance. Phytopathology 109, 332–346 (2019).
40. Zeidler, D. et al. Innate immunity in Arabidopsis thaliana: Lipopolysaccharides activate nitric oxide synthase (NOS) and induce defense genes. Proc. Natl. Acad. Sci. U.S.A. 101, 15811–15816 (2004).
## Acknowledgements
We thank Dr. Steve Kresovich for advice on seed handling and agronomic practice, and Dr. Burt Bluhm for providing isolates of B. cookei. We thank Cathy Herring and the staff at Central Crops Research Station for their work facilitating the field trials. Dr. Shannon Sermons and Greg Marshall assisted with several aspects of the field research. This work was funded by the DOE Plant Feedstock Genomics for Bioenergy program grants # DE-SC0014116 and DE-SC0019189.
## Author information
### Contributions
R.S., J.A.K., G.S. and P.B.K. planned the experiments; R.S., J.A.K. and P.B.K. performed the experiments and wrote the manuscript; R.S., J.A.K., L.S.F.L. and J.H. conducted the analyses. L.S.F.L. and R.S. prepared the figures. T.M.J. and P.J.B. provided seeds and unpublished genotypic datasets. All authors reviewed and edited the manuscript.
### Corresponding authors
Correspondence to Jennifer A. Kimball or Peter J. Balint-Kurti.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Samira, R., Kimball, J.A., Samayoa, L.F. et al. Genome-wide association analysis of the strength of the MAMP-elicited defense response and resistance to target leaf spot in sorghum. Sci Rep 10, 20817 (2020). https://doi.org/10.1038/s41598-020-77684-w
Idea: generate N points in the 1x1 square in the first quadrant
(Sketch: the unit square with corners $(0,0)$ and $(1,1)$, together with the quarter of the unit circle centered at the origin that lies inside it.)
Compute the number of points which fall within the unit circle centered at the origin.
The quarter-circle that's the part of the unit circle that's in the first quadrant has area $\frac{1}{4}\pi \times 1^2 = \frac{1}{4}\pi$, and the unit square has area 1, so about $\frac{1}{4}\pi$ of the random points should be within the quarter-circle.
We compute the ratio of the number of points in the quarter circle to the total number of random points, multiply it by 4, and get an approximation of $\pi$.
To test whether the point $(x, y)$ is within the unit circle, we test whether $x^2+y^2<1$ (points inside the unit circle are less than 1 unit away from the origin).
A small preliminary note on generating random numbers
We can use random.random() to generate random numbers in the interval $[0, 1)$. Every time random.random() is called, a new random number is generated.
In [1]:
import random
x = random.random()
y = random.random()
print(x, y)
0.9199110526377364 0.28585222076565997
Here is a little trick: we can use multiple assignment to simultaneously assign random numbers to x and y:
In [2]:
x, y = random.random(), random.random()
print(x, y)
0.30163174845316 0.43737830722543325
Another aside: how to generate a random number between 5 and 12? The following will do the trick:
In [3]:
5+7*random.random()
Out[3]:
9.681080775786686
We are now ready to compute an approximation of $\pi$:
In [4]:
import random
N = 1000000
count = 0 #count will store the number of random points
#that fell within the unit circle
for i in range(N):
    x, y = random.random(), random.random()
    if x**2 + y**2 < 1:
        count += 1
print(4*count/N)
3.14254 |
# Logistic Regression
INCOMPLETE. WORK IN PROGRESS!!!
## Overview
Logistic regression is a classification algorithm used to estimate the probability of a discrete outcome (binary values like 0/1, yes/no, true/false) based on a given set of independent variable(s). Its output values lie between 0 and 1. The model's linear combination of the feature values is passed through the logistic function (sigmoid) to produce probability values that can be mapped to two or more classes.
### Linear vs Logistic
Given data on time spent studying and exam scores. Linear Regression and logistic regression can predict different things:
• Linear Regression could help us predict the student's test score on a scale of 0 - 100. Linear regression predictions are continuous (numbers in a range).
• Logistic Regression could help us predict whether the student passed or failed. Logistic regression predictions are discrete (only specific values or categories are allowed). We can also view probability scores underlying the model's classifications.
### Variations
• Binary (Pass/Fail)
• Multi (Cats, Dogs, Sheep)
• Ordinal (Low, Medium, High)
• Pros: Easy to implement, fast to train, returns probability scores
• Cons: Bad when too many features or too many classifications
## Binary Classification
Say we're given a dataset about student exam results and our goal is to predict whether a student will pass or fail based on number of hours slept and hours spent studying. We have two features (hours slept, hours studied) and two classes: passed (1) and failed (0).
Studied Slept Passed
4.85 9.63 1
8.62 3.23 0
5.43 8.23 1
9.21 6.34 0
Graphically we could represent our data with a scatter plot.
## Sigmoid Function
In order to map predicted values to probabilities, we use the Sigmoid function. The function maps any real value into another value between 0 and 1. In machine learning, we use Sigmoid to map predictions to probabilities.
Math
${\displaystyle z=w_{0}+w_{1}x_{1}+w_{2}x_{2}}$
${\displaystyle s(z)={\frac {1}{1+e^{-z}}}}$
• ${\displaystyle s(z)}$ = output between 0 and 1 (probability estimate)
• ${\displaystyle z}$ = input to the function (your algorithm's prediction e.g. mx + b)
• ${\displaystyle e}$ = base of natural log (wikipedia)
Visualize
Here is a graphical representation of the sigmoid function:
Code
Transforming a real number to a value between 0 and 1.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))
## Decision Boundary
A decision boundary is a pretty simple concept. Logistic regression is a classification algorithm, the output should be a category: Yes/No, True/False, Red/Yellow/Orange. Our prediction function however returns a probability score between 0 and 1. A decision boundary is a threshold or tipping point that helps us decide which category to choose based on probability.
${\displaystyle p\geq 0.5,class=1}$
${\displaystyle p<0.5,class=0}$
For example, if our threshold was .5 and our prediction function returned .7, we would classify this observation as positive. If our prediction was .2 we would classify the observation as negative. For logistic regression with multiple classes we could select the class with the highest predicted probability.
## Prediction Function
Using our knowledge of sigmoid functions and decision boundaries, we can now write a prediction function. A prediction function in logistic regression returns the probability of our observation being positive, True, or "Yes". We call this class 1 and its notation is P(class=1). As the probability gets closer to 1, our model is more confident that the observation is in class 1.
Math
Let's use the same multiple linear equation from our Linear Regression wiki.
${\displaystyle z=W_{0}+W_{1}Studied+W_{2}Slept}$
This time however we will transform the output using the sigmoid function to return a probability value between 0 and 1.
${\displaystyle P(class=1)={\frac {1}{1+e^{-z}}}}$
If the model returns .4 it believes there is only a 40% chance of passing. If our decision boundary was .5, we would categorize this observation as "Fail."
Code
We wrap the sigmoid function over our old linear regression prediction function.
def predict(features, weights):
    '''
    Returns 1D array of probabilities
    that the class label == 1
    '''
    return 1 / (1 + np.exp(-np.dot(features, weights)))
## Cost Function
Unfortunately we can't (or at least shouldn't) use the same cost function (MSE) as we did for linear regression. Why? There is a great math explanation here and here, but for now I'll simply say it's because our prediction function is non-linear (due to sigmoid transform). Squaring this prediction as we do in MSE results in a non-convex function with many local minimums. If our cost function has many local minimums, gradient descent may not find the optimal global minimum.
Math
Instead of Mean Squared Error, we use a cost function called Log Loss, also known as Negative Log-Likelihood or Cross-Entropy Loss. Log Loss can be divided into two separate cost functions, one for y=1 and one for y=0.
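In the notation used on this page, with $h = s(z)$ the predicted probability (a reconstruction of the standard form, consistent with the code below), the two cases are:

${\displaystyle \mathrm{Cost}(h,y)=-\log(h)\qquad {\text{if }}y=1}$

${\displaystyle \mathrm{Cost}(h,y)=-\log(1-h)\qquad {\text{if }}y=0}$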
The benefits of taking the logarithm reveal themselves when you look at the cost function graphs for y=1 and y=0. These smooth monotonic functions (always increasing or always decreasing) make it easy to calculate the gradient and minimize cost. Source.
The key thing to note is the cost function penalizes confident and wrong predictions more than it rewards confident and right predictions! The corollary is increasing prediction accuracy (closer to 0 or 1) has diminishing returns on reducing cost due to the logistic nature of our cost function.
The above functions compressed into one
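With $N$ observations, the combined cost is:

${\displaystyle J(w)=-{\frac {1}{N}}\sum _{i=1}^{N}{\bigl [}y_{i}\log(h_{i})+(1-y_{i})\log(1-h_{i}){\bigr ]}}$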
Multiplying by y and (1-y) in the above equation is a sneaky trick that lets us use the same equation to solve for both y=1 and y=0 cases. If y=0, the first side cancels out. If y=1, the second side cancels out. In both cases we only perform the operation we need to perform.
Vectorized cost function
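In matrix form (our rendering of the same cost, with $X$ the feature matrix and $w$ the weight vector):

${\displaystyle J(w)=-{\frac {1}{N}}\left(y^{T}\log(h)+(1-y)^{T}\log(1-h)\right),\qquad h=s(Xw)}$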
Code
# Cross-entropy (log loss) cost
def cost_function(features, labels, weights):
    '''
    Features:(100,3)
    Labels: (100,1)
    Weights:(3,1)
    Returns the average cross-entropy cost (a scalar)
    Cost = -( labels*log(predictions) + (1-labels)*log(1-predictions) ) / len(labels)
    '''
    observations = len(labels)
    predictions = predict(features, weights)

    #Take the error when label=1
    class1_cost = -labels*np.log(predictions)

    #Take the error when label=0
    class2_cost = (1-labels)*np.log(1-predictions)

    #Take the sum of both costs
    cost = class1_cost - class2_cost

    #Take the average cost
    cost = cost.sum()/observations

    return cost
To minimize our cost, we use Gradient Descent just like before in Linear Regression. There are other more sophisticated optimization algorithms out there such as conjugate gradient, BFGS, and L-BFGS, but you don't have to worry about these. Machine learning libraries like Scikit-learn hide their implementations so you can focus on more interesting things!
Math
One of the neat properties of the sigmoid function is its derivative is easy to calculate. You can find a walk-through of the derivation here and a more detailed overview here
${\displaystyle z=w_{0}+w_{1}x_{1}+w_{2}x_{2}}$
${\displaystyle s(z)={\frac {1}{1+e^{-z}}}}$
${\displaystyle s'(z)=s(z)(1-s(z))}$
This leads to an equally beautiful and convenient log loss derivative:
${\displaystyle C'=x(s(z)-{\hat {y}})}$
• ${\displaystyle C'}$ is the derivative of cost with respect to weights
• ${\displaystyle {\hat {y}}}$ is the actual class label (y=0 or y=1)
• ${\displaystyle z}$ is your model's prediction prior to applying sigmoid (${\displaystyle w_{0}+w_{1}x_{1}+w_{2}x_{2}}$)
• ${\displaystyle x}$ is your feature or feature vector.
Notice how this gradient is the same as the Mean Squared Error gradient in Linear Regression! The only difference is the hypothesis function.
Pseudocode
1. Calculate the gradient (average cost derivative)
2. Multiply by learning rate
3. Subtract from weights
Code
# Vectorized Gradient Descent
# gradient = X.T * (X*W - y) / N
# gradient = features.T * (predictions - labels) / N
def update_weights(features, labels, weights, lr):
    '''
    Features:(200, 3)
    Labels: (200, 1)
    Weights:(3, 1)
    '''
    N = len(features)

    #1 - Get Predictions
    predictions = predict(features, weights)

    #2 Transpose features from (200, 3) to (3, 200)
    # So we can multiply with the (200,1) cost matrix.
    # Returns a (3,1) matrix holding 3 partial derivatives --
    # one for each feature -- representing the aggregate
    # slope of the cost function across all observations
    gradient = np.dot(features.T, predictions - labels)

    #3 Take the average cost derivative for each feature
    gradient /= N

    #4 - Multiply the gradient by our learning rate
    gradient *= lr

    #5 - Subtract from our weights to minimize cost
    weights -= gradient

    return weights
## Classify
The final step is to convert the predicted probabilities into class labels (0 or 1).
Code
def decision_boundary(prob):
    return 1 if prob >= .5 else 0

def classify(preds):
    '''
    preds = N element array of predictions between 0 and 1
    returns N element array of 0s (False) and 1s (True)
    '''
    db = np.vectorize(decision_boundary)  # vectorized function
    return db(preds).flatten()
#Example
Probabilities = [ 0.967 0.448 0.015 0.780 0.978 0.004]
Classifications = [1 0 0 1 1 0]
## Training
Our training process and code is the same as we used for linear regression.
Code
def train(features, labels, weights, lr, iters):
    cost_history = []

    for i in range(iters):
        weights = update_weights(features, labels, weights, lr)

        #Calculate error for auditing purposes
        cost = cost_function(features, labels, weights)
        cost_history.append(cost)

        # Log Progress
        if i % 1000 == 0:
            print("iter: " + str(i) + " cost: " + str(cost))

    return weights, cost_history
Logging
If our model is working, we should see our cost decrease after every iteration.
iter: 0 cost: 0.635
iter: 1000 cost: 0.302
iter: 2000 cost: 0.264
• Final Cost: 0.2487
• Final Weights: [-8.197, .921, .738]
## Model Evaluation
### Log Loss
The Log Loss cost function.
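On held-out data the same quantity can be computed with the cost_function defined above or, as a sketch assuming scikit-learn is available and reusing the probabilities and labels from the earlier steps, with sklearn.metrics.log_loss:

```python
from sklearn.metrics import log_loss

# probabilities = predicted P(class=1) per observation, labels = true 0/1 labels
print(log_loss(labels, probabilities))
```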
### Accuracy
Also referred to as a "score."
def accuracy(predicted_labels, actual_labels):
    diff = predicted_labels - actual_labels
    return 1.0 - (float(np.count_nonzero(diff)) / len(diff))
### Decision Boundary
We can also visualize our models performance by graphically comparing our probability estimates to the actual labels. This involves splitting our observations by class (0 and 1) and assigning each observation its predicted probability.
import matplotlib.pyplot as plt

def plot_decision_boundary(trues_preds, falses_preds, db):
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.scatter(range(len(trues_preds)), trues_preds, s=25, c='b', marker="o", label='Trues')
    ax.scatter(range(len(falses_preds)), falses_preds, s=25, c='r', marker="s", label='Falses')
    ax.legend(loc='upper right')
    ax.set_title("Decision Boundary")
    ax.set_xlabel('N/2')
    ax.set_ylabel('Predicted Probability')
    ax.axhline(db, color='black')  # draw the decision boundary (e.g. 0.5)
    plt.show()
## Multiclass Classification
Instead of ${\displaystyle y={0,1}}$ we will expand our definition so that ${\displaystyle y={0,1...n}}$. Basically we re-run binary classification multiple times, once for each class.
Steps
1. Divide the problem into n+1 binary classification problems (n+1 because the class labels run from 0 to n).
2. For each class...
3. Predict the probability the observations are in that single class.
4. prediction = ${\displaystyle \max({\text{probability of the classes}})}$
For each sub-problem, we select one class (YES) and lump all the others into a second class (NO). Then we take the class with the highest predicted value.
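A minimal one-vs-rest sketch built on the binary train() and predict() functions defined earlier (the variable names features, labels, n_classes, lr and iters are illustrative assumptions, not values from this page):

```python
def train_one_vs_rest(features, labels, n_classes, lr, iters):
    all_weights = []
    for c in range(n_classes):
        # Current class becomes 1 ("YES"), every other class becomes 0 ("NO")
        binary_labels = (labels == c).astype(float).reshape(-1, 1)
        weights = np.zeros((features.shape[1], 1))
        weights, _ = train(features, binary_labels, weights, lr, iters)
        all_weights.append(weights)
    return all_weights

def predict_one_vs_rest(features, all_weights):
    # Probability of each class for every observation; pick the largest per row.
    probs = np.hstack([predict(features, w) for w in all_weights])
    return np.argmax(probs, axis=1)
```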
Visualize
What does it looks like with multiple classes and lines?
## Code Examples
### Scikit-learn
Let's compare our performance to the LogisticRegression model provided by scikit-learn .
import sklearn.preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
# Normalize features to values between -1 and 1 for more efficient computation
normalized_range = sklearn.preprocessing.MinMaxScaler(feature_range=(-1,1))
# Extract Features + Labels
labels.shape = (100,) #scikit expects this
features = normalized_range.fit_transform(features)
# Create Test/Train
features_train,features_test,labels_train,labels_test = train_test_split(features,labels,test_size=0.4)
# Scikit Logistic Regression
scikit_log_reg = LogisticRegression()
scikit_log_reg.fit(features_train,labels_train)
#Score is Mean Accuracy
scikit_score = scikit_log_reg.score(features_test, labels_test)
print('Scikit score: ', scikit_score)
#Our Mean Accuracy
observations, features, labels, weights = run()
probabilities = predict(features, weights).flatten()
classifications = classify(probabilities)
our_acc = accuracy(classifications, labels.flatten())
print('Our score: ', our_acc)
Scikit score: 0.88
Our score: 0.89 |
On the information bottleneck theory of deep learning Anonymous et al., ICLR’18 submission
Last week we looked at the Information bottleneck theory of deep learning paper from Schwartz-Viz & Tishby (Part I,Part II). I really enjoyed that paper and the different light it shed on what’s happening inside deep neural networks. Sathiya Keerthi got in touch with me to share today’s paper, a blind submission to ICLR’18, in which the authors conduct a critical analysis of some of the information bottleneck theory findings. It’s an important update pointing out some of the limitations of the approach. Sathiya gave a recent talk summarising results on understanding optimisation and generalisation, ‘Interplay between Optimization and Generalization in DNNs,’ which is well worth checking out if this topic interests you. Definitely some more papers there that are going on my backlog to help increase my own understanding!
Let’s get back to today’s paper! The authors start out by reproducing the information plane dynamics from the Schwartz-Viz & Tishby paper, and then go on to conduct further experiments: replacing the tanh activation with ReLU to see what impact that has; exploring the link between generalisation and compression; investigating whether the randomness is important to compression during training; and studying the extent to which task-irrelevant information is also compressed.
The short version of their findings is that the results reported by Schwartz-Viz and Tishby don’t seem to generalise well to other network architectures: the two phases seen during training depend on the choice of activation function; there is no evidence of a causal connection between compression and generalisation; and that when compression does occur, it is not necessarily dependent on randomness from SGD.
Our results highlight the importance of noise assumptions in applying information theoretic analyses to deep learning systems, and complicate the IB theory of deep learning by demonstrating instances where representation compression and generalization performance can diverge.
The quest for deeper understanding continues!
### The impact of activation function choice
The starting point for our analysis is the observation that changing the activation function can markedly change the trajectory of a network in the information plane.
The authors used code supplied by Schwartz-Vis and Tishby to first replicate the results that we saw last week (Fig 1A below), and then changed the network to use ReLU instead — rectified linear activation functions $f(x) = max(0,x)$. The resulting information plane dynamics are show in Fig 1B.
The phase shift that we saw with the original tanh activation functions disappears!
The mutual information with the input monotonically increases in all ReLu layers, with no apparent compression phase. Thus, the choice of nonlinearity substantively affects the dynamics in the information plane.
Using a very simple three neuron network, the authors explore this phenomenon further. A scalar Gaussian input distribution $X \sim \mathcal{N}(0,1)$ is fed through a scalar first layer weight $w_1$ and passed through a neural nonlinearity $f(\cdot)$ to yield hidden unit activity $h=f(w_{1}X)$.
In order to calculate the mutual information, the hidden unit activity $h$ is binned into 30 uniform bins, to yield the discrete variable $T = bin(h)$.
With the tanh nonlinearity, mutual information first increases and then decreases. With the ReLU nonlinearity it always increases.
What’s happening is that with large weights, the tanh function saturates, falling back to providing mutual information with the input of approximately 1 bit (i.e, the discrete variable concentrates in just two bins around 1 and -1). With the ReLU though, half of the inputs are negative and land in the bin around 0, but the other half are Gaussian distributed and have entropy that increases with the size of weight. So it turns out that this double saturating nature of tanh is central to the original results.
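A rough numerical sketch of this three-neuron experiment (our reconstruction, not the paper's code; the weight values and the fixed bin width of 2/30 are illustrative choices). Since T = bin(h) is a deterministic function of the input, I(X;T) reduces to the entropy of the binned hidden activity:

```python
import numpy as np

BIN_WIDTH = 2.0 / 30          # the 30 uniform bins covering tanh's [-1, 1] range

def binned_entropy(h, bin_width=BIN_WIDTH):
    """T = bin(h) with fixed-width bins; return H(T) = I(X;T) in bits."""
    edges = np.arange(h.min(), h.max() + bin_width, bin_width)
    counts, _ = np.histogram(h, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)               # scalar Gaussian input X ~ N(0, 1)
for w1 in [0.1, 1.0, 4.0, 16.0, 64.0]:         # increasing first-layer weight
    print(f"w1={w1:6.1f}  tanh: {binned_entropy(np.tanh(w1 * x)):.2f} bits"
          f"  relu: {binned_entropy(np.maximum(0.0, w1 * x)):.2f} bits")
```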
… double-saturating nonlinearities can lead to compression of information about the input, as hidden units enter their saturation regime, due to the binning procedure used to calculate mutual information. We note that this binning procedure can be viewed as implicitly adding noise to the hidden layer activity: a range of X values map to a single bin, such that the mapping between X and T is no longer perfectly invertible.
The binning procedure is crucial for the information theoretic analysis; “however, this noise is not added in practice either during training or testing in these neural networks.”
The saturation of tanh explains the presence of the compression period where mutual information decreases, and also explains why training slows down as tanh networks enter their compression phase: some fraction of inputs have saturated the nonlinearities, reducing backpropagated error gradients.
### Generalisation independent of compression
Next the authors use the information plane lens to further study the relationship between compression and generalisation.
… we exploit recent results on the generalization dynamics in simple linear networks trained in an student-teacher setup (Seung et al., 1992; Advani & Saxe, 2017). This setting allows exact calculation of the generalization performance of the network, exact calculation of the mutual information of the representation (without any binning procedure), and, though we do not do so here, direct comparison to the IB bound which is already known for linear Gaussian problems.
No compression is observed in the information plane (panel D in the figure above), although the network does learn a map that generalise well on the task and shows minimal overtraining. Experimentation to force varying degrees of overfitting shows networks with similar behaviour in the information plane can nevertheless have differing generalisation performance.
This establishes a dissociation between behavior in the information plane and generalization dynamics: networks that compress may or may not generalize well, and that networks that do not compress may or may not generalize well.
### Does randomness help compression?
Next the authors investigate what contributes to compression in the first place, looking at the differences in the information plane between stochastic gradient descent and batch gradient descent. Whereas SGD takes a sample from the dataset and calculates the error gradient with respect to it, batch gradient descent uses the total error across all examples — “and crucially, therefore has no randomness or diffusion-like behaviour in its updates.”
Both tanh and linear networks are trained with both SGD and BGD, and the resulting information plane dynamics look like this:
We find largely consistent information dynamics in both instances, with robust compression in tanh networks for both methods. Thus randomness in the training process does not appear to contributed substantially to compression of information about the input. This finding is consistent with the view presented in §2 that compression arises predominantly from the double saturating nonlinearity.
(Which seems to pretty much rule out the hope from the Shwartz-Ziv & Tishby paper that we would find alternatives to SGD that support better diffusion and faster training).
### The compression of task-irrelevant information
A final experiment partitions the input X into a set of task-relevant inputs and a set known to be task-irrelevant. The former therefore contribute signal, while the latter contribute only noise. Thus good generalisation would seem to require ignoring the noise. The authors found that information for the task-irrelevant subspace does compress, at the same time as fitting is occurring for the task-relevant information, even though overall there is no observable compression phase.
### The bottom line
Our results suggest that compression dynamics in the information plane are not a general feature of deep networks, but are critically influenced by the nonlinearities employed by the network… information compression may parallel the situation with sharp minima: although empirical evidence has shown a correlation with generalization error in certain settings and architectures, further theoretical analysis has shown that sharp minima can in fact generalize well.
1. November 24, 2017 9:44 am
You should also definitely check the replies and replies to replies to this paper – https://openreview.net/forum?id=ry_WPG-A-
2. November 24, 2017 12:41 pm
“Blind submission to ICLR’18”: it’s not blind anymore, since you have published it in the open. By doing so, the authors have broken the rules of blind submission. I expect the PC chair will be very unhappy.
• November 24, 2017 2:15 pm
I think there might be a misunderstanding here. Sathiya Keerthi told me about the ICLR’18 submission, but that in no way means or implies he is an author – all submissions are available on the openreview system and there’s been quite a bit of discussion about this one. So it’s public domain that such a submission exists.
3. November 27, 2017 3:43 pm
I take the viewpoint that layer after layer the nonlinearity compounds, leading to something like chaos theory with bifurcations. I.e., the weighted sums in the net cannot entirely cancel out the nonlinearities introduced by the activation functions.
Another effect of nonlinear activation functions is guaranteed information loss. As the input information empties out layer after layer, the net is put on a fixed trajectory with no further possibility of the input information influencing matters, unless you use something like ResNet where you maintain some connection to the input.
# Laziest torus identified
Or, in similarly simplified headlinese, “Math finds the best doughnut”. A little bit more precisely, Fernando C. Marques and André Neves claim in a preprint on the arXiv to have proved the Willmore conjecture: that the minimum achievable Willmore energy (the integral of the squared mean curvature) of a torus is $2\pi^2$.
The article I linked to is some surprisingly non-stupid coverage from the Huffington Post. It seems they have a maths professor writing a column. I will never understand that site. I don’t know if there’s a Serious Business way of framing this, but the result is nice to know.
Richard Elwes has written a very short post on Google+ with some more real-maths information about what’s going on. |
# How to use PropertyGroupItem
I was browsing the docs when I ran into something interesting: bpy.types.PropertyGroupItem
This is what the doc says about it:
Property that stores arbitrary, user defined properties
I did a little digging around but couldn't find anything else about it. It sounds very interesting but I have absolutely no idea how to use it. Could someone give an explanation of what this is and when to use it?
• example is in the DOC – Chebhou Aug 29 '15 at 22:18
• Instances of MyPropertyGroup as seen in that example can store two properties, custom_1 and custom_2. The instances of these properties are PropertyGroupItems. You don't use that type however, it's handled internally. See here for properties in general: blender.org/api/blender_python_api_2_75_release/bpy.props.html – CoDEmanX Aug 29 '15 at 22:57
• @CoDEmanX That makes a lot of sense, thanks! – Isaac Aug 30 '15 at 0:14 |
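For completeness, a minimal sketch of the pattern described in the comments (the group and property names are made up; the assignment-style property definitions match the 2.7x-era API the links above refer to):

```python
import bpy

class MyPropertyGroup(bpy.types.PropertyGroup):
    # Each instance of this group stores two user-defined properties; their values
    # are held internally as PropertyGroupItems, which you never touch directly.
    custom_1 = bpy.props.FloatProperty(name="My Float")
    custom_2 = bpy.props.IntProperty(name="My Int")

bpy.utils.register_class(MyPropertyGroup)
# Attach an instance of the group to every Object; the attribute name is arbitrary.
bpy.types.Object.my_settings = bpy.props.PointerProperty(type=MyPropertyGroup)

# Usage: read and write through the group attribute.
obj = bpy.context.object
obj.my_settings.custom_1 = 1.5
obj.my_settings.custom_2 = 7
```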
# Extreme point of a convex set
Let S be a convex set in $\mathbb{R}^n$. A vector $x \in S$ is said to be an extreme point of S if $x= \lambda x_1+\left ( 1-\lambda \right )x_2$ with $x_1, x_2 \in S$ and $\lambda \in\left ( 0, 1 \right )\Rightarrow x=x_1=x_2$.
## Example
Step 1 − $S=\left \{ \left ( x_1,x_2 \right ) \in \mathbb{R}^2:x_{1}^{2}+x_{2}^{2}\leq 1 \right \}$
Extreme point, $E=\left \{ \left ( x_1, x_2 \right )\in \mathbb{R}^2:x_{1}^{2}+x_{2}^{2}= 1 \right \}$
Step 2 − $S=\left \{ \left ( x_1,x_2 \right )\in \mathbb{R}^2:x_1+x_2\leq 2, -x_1+2x_2\leq 2, x_1,x_2\geq 0 \right \}$
Extreme point, $E=\left \{ \left ( 0, 0 \right), \left ( 2, 0 \right), \left ( 0, 1 \right), \left ( \frac{2}{3}, \frac{4}{3} \right) \right \}$
Step 3 − S is the polytope made by the points $\left \{ \left ( 0,0 \right ), \left ( 1,1 \right ), \left ( 1,3 \right ), \left ( -2,4 \right ),\left ( 0,2 \right ) \right \}$
Extreme point, $E=\left \{ \left ( 0,0 \right ), \left ( 1,1 \right ),\left ( 1,3 \right ),\left ( -2,4 \right ) \right \}$
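As a quick numerical check of Step 3 (a sketch using SciPy with the point list above; variable names are illustrative): the extreme points are exactly the vertices of the convex hull of the defining points, so $\left ( 0,2 \right )$ drops out because it lies in the interior.

```python
import numpy as np
from scipy.spatial import ConvexHull

points = np.array([(0, 0), (1, 1), (1, 3), (-2, 4), (0, 2)])
hull = ConvexHull(points)
extreme = points[hull.vertices]   # hull vertices are exactly the extreme points
print(sorted(tuple(map(int, p)) for p in extreme))   # (0, 2) is absent
```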
## Remarks
• Any point of the convex set S can be represented as a convex combination of its extreme points.
• It is only true for closed and bounded sets in $\mathbb{R}^n$.
• It may not be true for unbounded sets.
## k extreme points
A point in a convex set is called k extreme if and only if it is an interior point of a k-dimensional convex set within S, and it is not an interior point of a (k+1)-dimensional convex set within S. Basically, for a convex set S, the k extreme points make up the k-dimensional open faces.
# Observation of the ${B^0 \to \rho^0 \rho^0}$ decay from an amplitude analysis of ${B^0 \to (\pi^+\pi^-)(\pi^+\pi^-)}$ decays
## Abstract
Proton-proton collision data recorded in 2011 and 2012 by the LHCb experiment, corresponding to an integrated luminosity of 3.0 fb$^{-1}$, are analysed to search for the charmless ${B^0 \to \rho^0 \rho^0}$ decay. More than 600 ${B^0 \to (\pi^+\pi^-)(\pi^+\pi^-)}$ signal decays are selected and used to perform an amplitude analysis from which the ${B^0 \to \rho^0 \rho^0}$ decay is observed for the first time with 7.1 standard deviations significance. The fraction of ${B^0 \to \rho^0 \rho^0}$ decays yielding a longitudinally polarised final state is measured to be $f_{\rm L} = 0.745^{+0.048}_{-0.058} ({\rm stat}) \pm 0.034 ({\rm syst})$. The ${B^0 \to \rho^0 \rho^0}$ branching fraction, using the ${B^0 \to \phi K^*(892)^{0}}$ decay as reference, is also reported as ${\mathcal{B}(B^0 \to \rho^0 \rho^0) = (0.94 \pm 0.17 ({\rm stat}) \pm 0.09 ({\rm syst}) \pm 0.06 ({\rm BF})) \times 10^{-6}}$.
## Figures and captions
Fig. 1: Reconstructed invariant mass spectrum of (left) $(\pi ^+\pi ^-)(\pi ^+\pi ^-)$ and (right) $(K^+K^-)(K^+\pi^-)$. The data are represented by the black dots. The fit is represented by the solid blue line, the $B ^0$ signal by the solid red line and the $B ^0_ s$ by the solid green line. The combinatorial background is represented by the pink dotted line, the partially reconstructed background by the cyan dotted line and the cross-feed by the dark blue dashed line.
Fig. 2: Helicity angles for the $(\pi ^+\pi ^-)(\pi ^+\pi ^-)$ system.
Fig. 3: Background-subtracted $M (\pi ^+\pi ^-) _{1,2}$, $\cos\theta_{1,2}$ and $\varphi$ distributions. The black dots correspond to the four-body background-subtracted data and the black line is the projection of the fit model. The specific decays $B^0\rightarrow \rho^0\rho^0$ (brown), $B^0 \rightarrow \omega \rho^0$ (dashed brown), $B^0\rightarrow VS$ (dashed blue), $B^0 \rightarrow SS$ (long dashed green), $B^0\rightarrow VT$ (orange) and $B^0 \rightarrow a_1^{\pm}\pi^{\mp}$ (light blue) are also displayed. The $B^0\rightarrow \rho^0\rho^0$ contribution is split into longitudinal (dashed red) and transverse (dotted red) components. Interference contributions are only plotted for the total (black) model. The efficiency for longitudinally polarized $B^0\rightarrow \rho^0\rho^0$ events is $\sim$5 times smaller than for the transverse component.
## Tables and captions
Table 1: Yields from the simultaneous fit for the 2011 and 2012 data sets. The first and second uncertainties are the statistical and systematic contributions, respectively.
Table 2: Amplitudes, $A_i$, $C P$ eigenvalues, $\eta_i$, and mass-angle distributions, $f_i$, of the ${ B ^0 \rightarrow (\pi ^+\pi ^-)(\pi ^+\pi ^-) }$ model. The indices ${ijkl}$ indicate the eight possible combinations of pairs of opposite-charge pions. The angles $\alpha_{kl}$, $\beta_{ij}$ and $\Phi_{kl}$ are defined in Ref. [38].
Table 3: Results of the unbinned maximum likelihood fit to the angular and two-body invariant mass distributions. The first uncertainty is statistical, the second systematic.
Table 4: Relative systematic uncertainties on the longitudinal polarisation parameter, $f_{\rm L}$, and the fraction of $B ^0 \rightarrow \rho^0\rho^0$ decays in the ${ B ^0 \rightarrow (\pi ^+\pi ^-)(\pi ^+\pi ^-) }$ sample. The model uncertainty includes the three uncertainties below.
## Supplementary Material [file]
The supplementary material for LHCb-PAPER-2015-006 contains an overview document (Supplementary.pdf) and the extra figures in various formats.
## 2021-2022 Academic Year Colloquium Schedule
### September 16, 2021
Title: Visual Representations of Natural Numbers using Geometric Patterns
Speaker: David A. Reimann, Professor of Mathematics and Computer Science, Albion College, Albion, Michigan
Abstract: Natural numbers can be visually represented by a geometric arrangement of simple visual motifs. This representation is not unique because any partition of an integer $n$ can generate at least one geometric pattern. Thus the number of partitions of $n$ is a lower bound on the number of geometric patterns. For example, there are 17977 partitions for the number 36; it is both a square number $(6^2)$ and a triangular number $(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8).$ Aesthetic considerations often favor patterns with some degree of symmetry, such as patterns that fix a single point or wallpaper patterns. A series of geometric designs for the numbers 1–100 were created to visually highlight some properties of each number. The designs use a variety of motifs and arrangements to provide a diverse yet cohesive collection. One application of these patterns is as a teaching tool for helping students recognize and generalize patterns and sequences.
Location: Palenske 227
Time: 3:30 PM
### September 23, 2021
Title: Opportunities in Pathogen War Gaming
Speaker: Lauren Ancel Meyers, Professor, Department of Integrative Biology, The University of Texas at Austin, Austin, Texas
Abstract: Video of a talk given for the Hertz Foundation on May 6, 2021. Dr. Lauren Ancel Meyers discussed the new Center for Advanced Pathogen Threat and Response Simulation (CAPTRS) and its pioneering vision to apply sophisticated war gaming technology to build robust health and socioeconomic lines of defense against future pathogen threats. CAPTRS has three core elements that differ from the science today: (1) an AI-based synthetic threat lab that generates a universe of pathogen threats; (2) a multi-disciplinary collaboration hub where researchers convene to develop solutions; and (3) the situation room where Pathogen War Gaming™ technology will be used to simulate real-time responses and their consequences.
Location: ONLINE
Time: VIRTUAL
# May 2012 Archives
## Paper of the Day (Po'D): Semantic gap?? Schemantic Schmap!! Edition
Hello, and welcome to Paper of the Day (Po'D): Semantic gap?? Schemantic Schmap!! Edition. Today's paper provides an interesting argument for what is necessary to push forward the field of "Music Information Retrieval": G. A. Wiggins, "Semantic gap?? Schemantic Schmap!! Methodological Considerations in the Scientific Study of Music," Proc. IEEE Int. Symp. Multimedia, pp. 477-482, San Diego, CA, Dec. 2009.
My one line summary of this work is:
The sampled audio signal is only half of half of half of the story.
## Wow, Mbalax
This is now my favorite uncontrolled-chaos-to-Western-ears-but-theoretically-highly-ordered music in the world. At first listen, it reminded me immediately of the composed results of this experiment. By "Kor Leer" the impression had become permanent; and by "Deuram" I knew that I must find and listen to all mbalax music. Aside from Nancarrow, e.g., here, mbalax will strain the best algorithms for automatic rhythm description.
## Music Genre Recognition with Bayesian Classification of Scattering Coefficients
During the weekend, I experimented with the scattering coefficient features for music genre recognition. At first, I was using AdaBoost with 1000 decision stumps, giving me just above 80% accuracy. These features being 469-dimensional makes the training process very slow, so I decided why not test a much quicker approach given by Bayesian classification with Gaussianity assumptions. So, I learned class-dependent means and covariances for each class from the training data, as well as the covariance matrix of all the training data. I then implemented the Mahalanobis distance (MDC) and full quadratic (FDC) classifiers, the benefits of which include their simple implementation, and quick training and testing. Furthermore, within a Bayesian framework, we can naturally introduce concepts of confidence, risk, and rejection. But I started simple: equal priors and uniform risk. Below we see the mean classification results from 10 independent trials of 10-fold stratified cross validation.
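For reference, here is a small sketch of those two classifiers with equal priors and uniform risk (function and variable names are mine, not the code used in these experiments):

```python
import numpy as np

def fit_gaussians(X, y):
    """Per-class means and covariances, plus the pooled covariance of all training data."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    covs = {c: np.cov(X[y == c], rowvar=False) for c in classes}
    return classes, means, covs, np.cov(X, rowvar=False)

def predict_mdc(x, classes, means, pooled):
    """Mahalanobis distance classifier: one shared covariance, equal priors."""
    P = np.linalg.pinv(pooled)
    d = [(x - means[c]) @ P @ (x - means[c]) for c in classes]
    return classes[int(np.argmin(d))]

def predict_fdc(x, classes, means, covs):
    """Full quadratic classifier: per-class covariances, equal priors."""
    scores = []
    for c in classes:
        diff = x - means[c]
        _, logdet = np.linalg.slogdet(covs[c])
        scores.append(diff @ np.linalg.pinv(covs[c]) @ diff + logdet)
    return classes[int(np.argmin(scores))]
```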
## Proper respect for the norms of atoms in dictionaries
Until recently, I have always thought about sparse approximation and recovery in terms of algorithms with dictionaries having atoms $$\{\phi_i\}$$ with the same $$\ell_2$$ norm. I find it quite typical in the literature to discuss and treat algorithms with such dictionaries. The assumption of unit normness is often explicitly stated in the very beginning of work in this area, such as in Tropp's "Greed is Good," and Mallat and Zhang's "Matching Pursuit." While it makes the mathematics cleaner, its ubiquity led to my lack of respect for the role of the norm of atoms.
To see what I mean, consider the exact sparse recovery problem: $$\min_{\mathbf s} \|\mathbf s\|_0 \; \textrm{subject to} \; \mathbf u = \mathbf\Phi \mathbf s$$ where the columns of $$\mathbf\Phi$$ are made from $$\{\phi_i\}$$. In many papers from the past 15 years, I see that authors (including me :) often say something like the following: the above problem is computationally difficult, and so we can replace the $$\ell_0$$-pseudonorm with its closest convex cousin, the $$\ell_1$$-norm: $$\min_{\mathbf s} \|\mathbf s\|_1 \; \textrm{subject to} \; \mathbf u = \mathbf\Phi \mathbf s$$ which can be solved by numerous methods far less complex than that required to solve the other problem. What is more, we know that there are many cases in which the solution to this solvable problem is identical to that of the harder problem.
However, the justification for posing the first problem as the second implicitly assumes the atoms $$\{\phi_i\}$$ have the same $$\ell_2$$-norm --- otherwise, it is not correct! When the atoms do not have the same norm, the geometry of the problem changes, and bad things happen if we do not take this into account. In short, when the atoms in the dictionary have different norms, the $$\ell_1$$-norm of the solution does not act like its $$\ell_0$$-pseudonorm. As posed above, the minimal $$\ell_1$$-norm solution will likely use atoms that are much longer than all the others, because their weights will be smaller than those for the shorter atoms.
Instead, the correct way to pose the convexification of the exact sparse problem with the $$\ell_1$$-norm is $$\min_{\mathbf s} \|\mathbf N\mathbf s\|_1 \; \textrm{subject to} \; \mathbf u = \mathbf\Phi \mathbf s$$ where $$\mathbf N$$ is a diagonal matrix with $$[\mathbf N]_{ii} := \|\phi_i\|_2$$. Now, the weights of all the atoms are treated equally, no matter their norms in the dictionary. It may seem pedantic, but I have seen this implicit condition lead to some confusion about how to apply particular principles and conditions. I have yet to find such a discussion, however; and so hope to fill this gap in a brief article I am writing.
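Here is a small numerical sketch of that reweighting (dimensions and data are arbitrary, and any $$\ell_1$$ solver can be plugged in where indicated); the weighted problem is just the standard problem over a column-normalised dictionary after a change of variables:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 32, 64
Phi = rng.normal(size=(m, n)) * rng.uniform(0.1, 10.0, size=n)  # atoms with wildly different norms
norms = np.linalg.norm(Phi, axis=0)
N = np.diag(norms)

# Change of variables t = N s turns
#     min ||N s||_1  subject to  u = Phi s
# into the standard problem
#     min ||t||_1    subject to  u = (Phi N^{-1}) t
Phi_unit = Phi / norms        # every column now has unit l2 norm
# ... solve the standard l1 problem for t with your favourite solver,
# then recover s = N^{-1} t, i.e. s = t / norms.
```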
## Experiments with Multiscale Scattering for Music Genre Classification
I have been experimenting with the approach to feature extraction posed in J. Andén and S. Mallat, "Multiscale scattering for audio classification," Proc. Int. Soc. Music Info. Retrieval, 2011. Specifically, I have substituted these "scattering coefficients" for the features used by Bergstra et al. 2006 into AdaBoost for music genre recognition.
The idea behind the features reminds me of the temporal modulation analysis of Panagakis et al. 2009, which itself comes from S. A. Shamma, "Encoding sound timbre in the auditory system", IETE J. Research, vol. 49, no. 2, pp. 145-156, Mar.-Apr. 2003. One difference is that these scattering coefficients are not psychoacoustically derived, yet they appear just as powerful as those that are.
## More Experiments with MOD
Continuing from my experiments last week, I have decided to test to what degree the problems of MOD are inherited from the problems of OMP as the choice in its approximation step. So as before, I run MOD for 1000 iterations to learn dictionaries of cardinality 128 from 256 measurements generated from 8 length-64 atoms (each sampled from the uniform spherical ensemble). The weights of the linear combination are iid Normal. In these experiments, however, I substitute the oracle support and find the weights by least squares. So, given the $$l$$th measurement is composed of atoms indexed by $$I_l$$, after each dictionary update, I find the new weights by an orthogonal projection onto the span of the atoms indexed by $$I_l$$. In this way, we remove OMP to see the behavior of MOD in the best case scenario: known support. Finding the support is the hardest part anyway.
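For concreteness, a sketch of one such iteration with the oracle coding step (the function and variable names are mine, not the code used for these experiments):

```python
import numpy as np

def mod_iteration_oracle(U, D, supports):
    """One MOD iteration when the true supports are known.

    U: (d, L) measurements; D: (d, K) current dictionary;
    supports[l]: indices of the atoms composing measurement l.
    """
    K, L = D.shape[1], U.shape[1]
    A = np.zeros((K, L))
    # Oracle coding step: least-squares weights on the known support of each measurement.
    for l, idx in enumerate(supports):
        A[idx, l] = np.linalg.lstsq(D[:, idx], U[:, l], rcond=None)[0]
    # MOD dictionary update: D <- U A^T (A A^T)^{-1}, then renormalise the atoms.
    D_new = U @ A.T @ np.linalg.pinv(A @ A.T)
    return D_new / np.linalg.norm(D_new, axis=0, keepdims=True), A
```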
Below are mean coding errors over all 1000 iterations for 100 independent trials. We see MOD found the correct dictionary less than 10% of the time.
A close-up of where the majority end shows a mean error of about -20 dB. Using instead OMP, and picking 3 times the number of atoms with debiasing, we see a mean error of about -9 dB in the same experimental setup. So, clearly, knowing the support does help.
These results appear better than those reported in B. Mailhé and M. D. Plumbley, "Dictionary Learning with Large Step Gradient Descent for Sparse Representations", Proc 10th International Conference on Latent Variable Analysis and Source Separation (LVA/ICA 2012), Tel-Aviv, Israel, LNCS 7191, pp. 231-238, March 12-15, 2012.
My further experimentation reveals that OMP often finds the support of all measurements when using the real dictionary; and when it is run to twice or three times the expected sparsity, it always finds it (at least in the 100 independent trials I am running). This is not so surprising since this dictionary, sampled from the uniform spherical ensemble, likely has very low coherence. So, this says to me that the failures of MOD in this case are due mostly to the dictionary update, which obviously hinders good coding by OMP, which obviously hinders the dictionary update, which clearly hinders good coding, and so on.
Next up is looking at MOD with shift-invariant dictionaries.
Bob L. Sturm, Associate Professor
Audio Analysis Lab
Aalborg University Copenhagen
A.C. Meyers Vænge 15
DK-2450 Copenhagen SV, Denmark
Email: bst_at_create.aau.dk |
# What goes 2 what
###### Question:
what goes 2 what
This is a measure of an object's ability to transmit electricity.
This is the potential energy per unit charge.
An electric current that reverses direction in a circuit at regular intervals.
An electric current flowing in one direction only.
alternating current
resistance
voltage
direct current
### 5. How can contact tracing be beneficial for Corona Virus outbreaks? Please explain.
### A diagram of a new competition swimming pool is shown. If the width of the pool is 25 meters, find the length of the actual pool. (Round to the nearest tenth)
### How to make a tangram with paper?
### Which word in the sentence is the adverb? Bring me the book immediately. HELPPP
### Compare 1/4 and 2/12
### A young boy has been found, and police are trying to locate his family. they take a dna sample from him and begin collecting dna samples from families who have missing children. if police use dna samples only from the fathers, which type of dna technology can they use to identify the boy's parent?
### In a sample of 10 bags of doug's Super green grass seeds only 70% of the seeds were actually grass seeds
### According to Franks and Smallwood (2013), information has not become the lifeblood of every organization, and that an increasing volume of information today has increased and exchanged through the use of social networks and Web2.0 tools like blogs, microblogs, and wikisa. TRUE b. FALSE.
### A new fossil is found in a rock layer in England. How can index fossils be used to help date the approximate age of the new fossil? Question 4 options: Scientists compare the new fossil directly on the fossils on same layer to known fossils to find its age. Scientist use radioactive dating to get exact age of the new fossil. Scientist ask other scientists to date the fossils based on known information. Scientists use the fossils in the same layer and layers before and after the new fossil to com
### Mcdonald's, kfc, baskin-robbins, and aamco all make use of the __________ form of contractual distribution system. wholesale-sponsored chain franchise retail cooperative horizontally administered
### Convert the expression 10-(-30) to an addition expression. HELPPP MEEE PLSSSSSSSSS
### The total ticket sales for a high school basketball game were $2,260. The ticket price for students was$2.25 less than the adult ticket price. The number of adult tickets sold was 230, and the number of student tickets sold was 180. What was the price of an adult ticket?
### A Protagonist..... A.) Is always a good person B.) Always wins in the end C.) Avoids the main conflict D.) Experiences the main conflict
### Factorize -xsquared- x=20
### In a sequence of numbers a4=7 , a5=10 , a6=13 , a7=16 , and a8=19 . Based on this information, write an explicit equation to find the nth term in the sequence, an ?
### The way kim tackled tracy and chris, what does it suggest about her character.
### What is the value of 6n 2 when n =3?
### | n | = 3.5 what value for n will make this expression true This is the I Ready diagnostics.please help me this is due very very soon! Thanks!!
# [tlaplus] Re: Issue enumerating Nat, but I don't see where I'm enumerating the entire set Nat
• From: Brandon Barker <brandon.barker@xxxxxxxxx>
• Date: Tue, 2 Aug 2022 15:11:54 -0700 (PDT)
Haha, sorry for the lack of context in that statement - I should have said "use for infinite sets in TLC", meaning I just need to read more and come to grips with the specific limitations of TLC. I appreciate that TLC does not support everything that might be expressed in TLA+, and other solvers or model checkers may have other feature sets.
On Tuesday, August 2, 2022 at 6:05:02 PM UTC-4 Leslie Lamport wrote:
I suppose in time I will see some use for infinite sets, since they exist in TLA
You might start by wondering why you've been using one since you were a young child.
On Tuesday, August 2, 2022 at 7:11:20 AM UTC-7 brandon...@xxxxxxxxx wrote:
Got it, makes sense - I suppose in time I will see some use for infinite sets, since they exist in TLA.
On Tuesday, August 2, 2022 at 8:20:51 AM UTC-4 andrew...@xxxxxxxxx wrote:
You have to override the value of Nat to be some set like 0 .. 100 or something.
On Monday, August 1, 2022 at 4:35:03 PM UTC-4 brandon...@xxxxxxxxx wrote:
Apologies, minor typo (though the issue is unrelated) - due to a refactor I should have had randomActiveCampaign[c], not randomActiveCampaign[c.id], in the last snippet. Like this:
CreateActiveCampaign ==
/\ campaign' = \E c \in CAMPAIGN :
IF campaign[c.id] = {} THEN
[campaign EXCEPT ![c.id] = c /\ randomActiveCampaign[c]]
ELSE campaign[c.id]
On Monday, August 1, 2022 at 4:31:59 PM UTC-4 Brandon Barker wrote:
A little more context. Although those are the only places I'm referencing startDate directly, I am positing the existence of a campaign (of which campaign details is a member of) here:
CreateActiveCampaign ==
/\ campaign' = \E c \in CAMPAIGN :
IF campaign[c.id] = {} THEN
[campaign EXCEPT ![c.id] = c /\ randomActiveCampaign[c.id]]
ELSE campaign[c.id]
I'm not sure how TLC is handling this under the hood; in principle, I'm constraining the values of startDate to be finite by using randomActiveCampaign, but I'm not sure if that is how things are composing here.
On Monday, August 1, 2022 at 4:14:59 PM UTC-4 Brandon Barker wrote:
Hello,
I'm having an issue running TLC on a model where, so far, I'm just "creating" a somewhat complex record, and not doing much else. Here is one component of the record:
CAMPAIGN_DETAILS == [
startDate: Nat
, endDate: Nat
]
\*
randomFutureCampaignDetails[details \in CAMPAIGN_DETAILS] ==
/\ details.startDate \in currentTime..monthsToSeconds[3]
/\ details.endDate \in (details.startDate+1)..monthsToSeconds[3]
randomFutureCampaignDetails is used, indirectly, from Next. This results in the following error from TLC
The exception was a java.lang.RuntimeException
: Attempted to enumerate a set of the form [l1 : v1, ..., ln : vn],
but can't enumerate the value of the `startDate' field:
Nat
It seems like I have bounded startDate to a finite set in:
details.startDate \in currentTime..monthsToSeconds[3]
Are there any suggestions for debugging this sort of error?
# Lefschetz theorem
Lefschetz' fixed-point theorem, or the Lefschetz–Hopf theorem, is a theorem that makes it possible to express the number of fixed points of a continuous mapping in terms of its Lefschetz number. Thus, if a continuous mapping $f : X \rightarrow X$ of a finite CW-complex (cf. also Cellular space) $X$ has no fixed points, then its Lefschetz number $L ( f )$ is equal to zero. A special case of this assertion is Brouwer's fixed-point theorem (cf. Brouwer theorem).
#### References
[a1] M.J. Greenberg, J.R. Harper, "Algebraic topology, a first course" , Benjamin/Cummings (1981) MR643101 Zbl 0498.55001
Lefschetz' hyperplane-section theorem, or the weak Lefschetz theorem: Let $X$ be an algebraic subvariety (cf. Algebraic variety) of complex dimension $n$ in the complex projective space $\mathbf C P ^ {N}$, let $P \subset \mathbf C P ^ {N}$ be a hyperplane passing through all singular points of $X$( if any) and let $Y = X \cap P$ be a hyperplane section of $X$; then the relative homology groups (cf. Homology group) $H _ {i} ( X , Y , \mathbf Z )$ vanish for $i < n$. This implies that the natural homomorphism
$$H _ {i} ( Y ; \mathbf Z ) \rightarrow H _ {i} ( X ; \mathbf Z )$$
is an isomorphism for $i < n-1$ and is surjective for $i = n-1$ (see ).
Using universal coefficient formulas (cf. Künneth formula) one obtains corresponding assertions for arbitrary cohomology groups. In every case, for cohomology with coefficients in the field of rational numbers the dual assertions hold: The homomorphism of cohomology spaces
$$H ^ {i} ( X ; \mathbf Q ) \rightarrow H ^ {i} ( Y ; \mathbf Q )$$
induced by the imbedding $Y \subset X$ is an isomorphism for $i < n-1$ and is injective for $i = n-1$ (see ).
An analogous assertion is true for homotopy groups: $\pi _ {i} ( X , Y ) = 0$ for $i < n$. In particular, the canonical homomorphism $\pi _ {1} ( Y) \rightarrow \pi _ {1} ( X)$ is an isomorphism for $n \geq 3$ and is surjective for $n = 2$ (the Lefschetz theorem on the fundamental group). There is a generalization of this theorem to the case of an arbitrary algebraically closed field (see ), and also to the case when $Y$ is a normal complete intersection of $X$ (see ).
The hard Lefschetz theorem is a theorem about the existence of a Lefschetz decomposition of the cohomology of a complex Kähler manifold into primitive components.
Let $V$ be a compact Kähler manifold of dimension $n$ with Kähler form $\omega$, let
$$\eta \in H ^ {1,1} ( V , \mathbf C ) \subset H ^ {2} ( V , \mathbf C )$$
be the cohomology class of type $( 1 , 1 )$ corresponding to $\omega$ under the de Rham isomorphism (cf. de Rham cohomology; if $V$ is a projective algebraic variety over $\mathbf C$ with the natural Hodge metric, then $\eta$ is the cohomology class dual to the homology class of a hyperplane section) and let
$$L : H ^ {i} ( V , \mathbf C ) \rightarrow H ^ {i+2} ( V , \mathbf C )$$
be the linear operator defined by multiplication by $\eta$, that is,
$$Lz = z \cdot \eta ,\ z \in H ^ {i} ( V , \mathbf C ) .$$
One has the isomorphism (see )
$$L ^ {k} : H ^ {n-k} ( V , \mathbf C ) \rightarrow H ^ {n+k} ( V , \mathbf C )$$
for any $k = 0 \dots n$. The kernel of the operator
$$L ^ {k+1} : H ^ {n-k} ( V , \mathbf C ) \rightarrow H ^ {n+k+2} ( V , \mathbf C )$$
is denoted by $H _ {0} ^ {n-k} ( V , \mathbf C )$ and is called the primitive part of the $( n-k )$-cohomology of the variety $V$. The elements of $H _ {0} ^ {n-k} ( V , \mathbf C )$ are called primitive cohomology classes, and the cycles corresponding to them are called primitive cycles. The hard Lefschetz theorem establishes the following decomposition of the cohomology into the direct sum of primitives (called the Lefschetz decomposition):
$$H ^ {m} ( V , \mathbf C ) = \oplus _ {k=0} ^ {[ m/2 ]} L ^ {k} H _ {0} ^ {m-2k} ( V , \mathbf C )$$
for all $m = 0 \dots 2n$. The mappings
$$L ^ {k} : H _ {0} ^ {m-2k} ( V , \mathbf C ) \rightarrow H ^ {m} ( V , \mathbf C ) ,\ k = 0 \dots [ m/2 ] ,$$
are imbeddings. The Lefschetz decomposition commutes with the Hodge decomposition (cf. Hodge conjecture)
$$H ^ {m} ( V , \mathbf C ) = \oplus _ {p+ q = m } H ^ {p,q} ( V,\ \mathbf C )$$
(see ). In particular, the primitive part $H _ {0} ^ {p,q} ( V , \mathbf C )$ of $H ^ {p,q} ( V , \mathbf C )$ is defined and
$$H _ {0} ^ {m} ( V , \mathbf C ) = \oplus _ {p+ q = m } H _ {0} ^ {p,q} ( V , \mathbf C ) .$$
The hard Lefschetz theorem and the Lefschetz decomposition have analogues in abstract algebraic geometry for $l$-adic and crystalline cohomology (see , ).
The Lefschetz theorem on cohomology of type $( 1 , 1)$ is a theorem about the correspondence between the two-dimensional algebraic cohomology classes of a complex algebraic variety and the cohomology classes of type $( 1 , 1)$.
Let $V$ be a non-singular projective algebraic variety over the field $\mathbf C$. An element $z \in H ^ {2} ( V , \mathbf Z )$ is said to be algebraic if the cohomology class dual to it (in the sense of Poincaré) is determined by a certain divisor. The Lefschetz theorem on cohomology of type $( 1 , 1 )$ asserts that a class $z \in H ^ {2} ( V , \mathbf Z )$ is algebraic if and only if
$$z \in j ( H ^ {2} ( V , \mathbf Z ) ) \cap H ^ {1,1} ( V, \mathbf C ) ,$$
where $H ^ {1,1} ( V , \mathbf C )$ is the Hodge component of type $( 1 , 1 )$ of the two-dimensional complex cohomology space $H ^ {2} ( V , \mathbf C )$, and the mapping $j: H ^ {2} ( V , \mathbf Z ) \rightarrow H ^ {2} ( V , \mathbf C )$ is induced by the natural imbedding $\mathbf Z \rightarrow \mathbf C$( see [1], and also [6], [12]). For algebraic cohomology classes in dimensions greater than 2, see Hodge conjecture.
For an arbitrary complex-analytic manifold $V$ there is an analogous characterization of elements of the group $H ^ {2} ( V , \mathbf Z )$ that are Chern classes of complex line bundles over $V$ (see [11]).
#### References
[1] S. Lefschetz, "L'analysis situs et la géométrie algébrique", Gauthier-Villars (1950) MR0033557
[2] S. Lefschetz, "On certain numerical invariants of algebraic varieties with applications to Abelian varieties", Trans. Amer. Math. Soc., 22 (1921) pp. 327–482 MR1501180 MR1501178
[3] S. Lefschetz, "On the fixed point formula", Ann. of Math. (2), 38 (1937) pp. 819–822 MR1503373 Zbl 0018.17703 Zbl 63.0563.02
[4] P. Berthelot, "Cohomologie cristalline des schémas de caractéristique", Springer (1974) MR0384804 Zbl 0298.14012
[5] P. Deligne (ed.), N.M. Katz (ed.), Groupes de monodromie en géométrie algébrique. SGA 7.II, Lect. notes in math., 340, Springer (1973) MR0354657
[6] P.A. Griffiths, J.E. Harris, "Principles of algebraic geometry", Wiley (Interscience) (1978) MR0507725 Zbl 0408.14001
[7] A. Grothendieck, "Cohomologie locale des faisceaux cohérents et théorèmes de Lefschetz locaux et globaux", SGA 2, North-Holland & Masson (1968) MR0476737 Zbl 1079.14001 Zbl 0159.50402
[8] R. Hartshorne, "Ample subvarieties of algebraic varieties", Springer (1970) MR0282977 Zbl 0208.48901
[9] D. Mumford, "Abelian varieties", Oxford Univ. Press (1974) MR2514037 MR1083353 MR0352106 MR0441983 MR0282985 MR0248146 MR0219542 MR0219541 MR0206003 MR0204427 Zbl 0326.14012
[10] J.W. Milnor, "Morse theory", Princeton Univ. Press (1963) MR0163331 Zbl 0108.10401
[11] R.O. Wells jr., "Differential analysis on complex manifolds", Springer (1980) MR0608414 Zbl 0435.32004
[12] S.S. Chern, "Complex manifolds without potential theory", Springer (1979) MR0533884 Zbl 0444.32004
[13] A. Weil, "Introduction à l'étude des variétés kählériennes", Hermann (1958)
[14] P. Deligne, "La conjecture de Weil", Publ. Math. IHES, 43 (1974) pp. 273–307 MR0340258 Zbl 0456.14014 Zbl 0314.14007 Zbl 0287.14001 Zbl 0219.14022
V.A. Iskovskikh
Weak and hard (strong) Lefschetz theorems also hold in étale cohomology [a4] and in intersection homology [a5], [a6]. For the proof of the hard Lefschetz theorem in $l$- adic cohomology, see [a2]. |
# Evaluate $\int_{{\frac {\pi}{8}}}^{{\frac {7\,\pi}{8}}}\!{\frac {\ln \left( 1- \cos \left( t \right) \right) }{\sin \left( t \right) }}\,{\rm d}t$
I'm interested in this integral: $$\int_{{\frac {\pi}{8}}}^{{\frac {7\,\pi}{8}}}\!{\frac {\ln \left( 1- \cos \left( t \right) \right) }{\sin \left( t \right) }}\,{\rm d}t$$
I found this particular closed form with Maple and finally: $$-{\frac { \left( \ln \left( 2-\sqrt {2+\sqrt {2}} \right) \right) ^{ 2}}{2}}-{\frac {{\pi}^{2}}{12}}+{\frac {11\, \left( \ln \left( 2 \right) \right) ^{2}}{8}}-{\frac { \left( \ln \left( 1+\sqrt {2} \right) \right) ^{2}}{2}}+{\frac {3\, \left( \ln \left( 2+\sqrt {2} \right) \right) ^{2}}{4}}+{\it {Li_2}} \left( -{\frac {\sqrt {2+ \sqrt {2}}}{4}}+{\frac{1}{2}} \right)$$ where $\operatorname{Li}_2$ is the dilogarithm function.
Please, can someone prove it ?
• Start with the property of definite integrals that $\int_{a}^{b}f(x)dx=\int_{a}^{b}f(a+b-x)dx$ Nov 8, 2021 at 8:23
$$I=\int{\frac {\log \left( 1- \cos \left( t \right) \right) }{\sin \left( t \right) }}\,dt=\int \sin(t){\frac {\log \left( 1- \cos \left( t \right) \right) }{1-\cos^2\left( t \right) }}\,dt$$
Let $$x=\cos(t)\implies I=\int \frac{\log(1-x)}{x^2-1} \,dx=-\frac12\int\frac{\log(1-x)}{ 1-x}\,dx-\frac12\int\frac{\log(1-x)}{ x+1}\,dx$$
$$\int\frac{\log(1-x)}{ 1-x}\,dx=-\frac{1}{2} \log ^2(1-x)$$ For the second integral, integration by parts gives $$\int\frac{\log(1-x)}{x+1}\,dx=\text{Li}_2\left(\frac{1-x}{2}\right)+\log (1-x) \log \left(\frac{x+1}{2}\right)$$ Combining the results
$$I=\int \frac{\log(1-x)}{x^2-1} \,dx=\frac{1}{4} \left(\log (1-x) (\log (4(1-x))-2 \log (x+1))-2 \text{Li}_2\left(\frac{1-x}{2}\right)\right)$$ Go back to $$t$$ if you wish and use bounds.
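Not part of the proof, but a quick numerical sanity check of the antiderivative against the original definite integral (a sketch assuming SciPy's convention that spence(z) equals $\operatorname{Li}_2(1-z)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spence

lhs, _ = quad(lambda t: np.log(1 - np.cos(t)) / np.sin(t), np.pi/8, 7*np.pi/8)

Li2 = lambda z: spence(1 - z)                     # standard dilogarithm
def F(x):                                         # antiderivative in x = cos(t)
    return 0.25*(np.log(1-x)*(np.log(4*(1-x)) - 2*np.log(1+x)) - 2*Li2((1-x)/2))

rhs = F(np.cos(7*np.pi/8)) - F(np.cos(np.pi/8))   # d/dt F(cos t) equals the integrand
print(lhs, rhs)                                   # the two numbers should agree
```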
• So great and thanks !
– Dens
Nov 8, 2021 at 10:40
• @Dens. Glad to help ! In fact the problem is simple if you think about the first trick $$\frac 1 {\sin(x)}=\frac {\sin(x)}{\sin^2(x)}=\frac {\sin(x)}{1-\cos^2(x)}$$ Nov 8, 2021 at 10:49
• Thank you very much for the advice!
– Dens
Nov 8, 2021 at 10:50
• If I may ask, where are you located ? I am in Pau. Cheers :-) Nov 8, 2021 at 10:52
• I am French and I am in Saône-et-Loire. By the way, I had written to you about a publication on Vixra, "values of Barnes function". That was me!
– Dens
Nov 8, 2021 at 10:55 |
# Period map for $\partial\bar\partial$-manifolds
When we talk about the theory of variation of Hodge structures, we always assume that the central fiber is a Kähler manifold $$X$$, then consider a family of deformations $$\pi:\mathcal X\to B$$ and the period map $$\mathcal P:B\to Grass(b^{p,k},H^k(X,\mathbb C))$$, $$b^{p,k}=dim F^pH^k(X,\mathbb C)$$.
What if we replace the Kähler manifold by a $$\partial\bar\partial$$-manifold and consider a holomorphic family of $$\partial\bar\partial$$-manifolds $$\pi:\mathcal X\to B$$?
Recall a $$\partial\bar\partial$$-manifold is a compact complex manifold which satisfies: for any $$\partial$$- and $$\bar\partial$$-closed, $$d$$-exact $$(p,q)$$ form $$\alpha$$, there is a $$(p-1,q-1)$$ form $$\beta$$ such that $$\alpha=\partial\bar\partial \beta$$.
Is the theory of variation of Hodge structures remains the same? And is there also a period map $$\mathcal P:B\to Grass(b^{p,k},H^k(X,\mathbb C))$$? If so, is the period map holomorphic and the Griffiths transversality still holds?
In my opinion, first we should define what $$F^pH^k(X,\mathbb C)$$ means for a non-Kähler manifold, since for a Kähler manifold, we have the Kähler identity $$\Delta=2\Delta_{\bar\partial}=2\Delta_{\partial}$$ from which we deduce the Hodge decomposition: $$H^k(X,\mathbb C)=\oplus_{p+q=k}H^{p,q}(X)$$, so we can define $$F^pH^k(X,\mathbb C)=H^{p,k-p}(X)\oplus H^{p+1,k-p-1}(X)\oplus...\oplus H^{k,0}(X)$$ which is obviously a subspace of $$H^k(X,\mathbb C)$$, but for a non-Kähler manifold, there are no Kähler identities. But from AT13, we know that the Bott-Chern cohomology $$H^{p,q}_{BC}(X,\mathbb C):=\frac{ker\partial\cap ker\bar\partial}{im\partial\bar\partial}$$ is isomorphic to Dolbeault cohomology $$H_{\bar\partial}^{p,q}(X)$$ and $$H_{\partial}^{p,q}(X)$$, so if we define $$F^pH^k(X,\mathbb C)=H^{p,k-p}_{BC}(X,\mathbb C)\oplus H^{p+1,k-p-1}_{BC}(X,\mathbb C)\oplus...\oplus H^{k,0}_{BC}(X,\mathbb C)$$, is it reasonable to treat it as a subspace of $$H^k(X,\mathbb C)$$ and define a period map as in the Kähler case?
Let me start with a disclaimer that I think the following facts are true, but I'm doing this over coffee and I haven't checked the details carefully. First, I'll redefine $$F^pH^k(X,\mathbb{C})$$ to be the space of de Rham classes represented by the sum of $$(p', p'-k)$$ forms with $$p'\ge p$$, or equivalently as $$F^pH^k(X,\mathbb{C})= im(H^k(\Omega_X^{\ge p})\xrightarrow{\iota} H^k(X,\Omega_X^\bullet))$$ Then the $$\partial\bar\partial$$-lemma is sufficient to guarantee the filtration is strict in the sense that the maps above are injective (or equivalently that the Hodge to de Rham spectral sequence degenerates); edit see remark 5.21 of Deligne, Griffiths, Morgan, Sullivan, Real homotopy theory of Kähler manifolds. Then, if I understand the notation of the paper you linked, this should probably give decomposition in terms of BC cohomology as you wrote. The other thing I want to remark is that if $$\mathcal{X}\to B$$ is a smooth proper family such that the fibres satisfy $$\partial\bar\partial$$-lemma, then usual arguments should imply that $$F^p R^kf_*\Omega_{\mathcal{X}/B}^\bullet= im(R^kf_*\Omega_{\mathcal{X}/B}^{\ge p}\xrightarrow{\iota}R^kf_*\Omega_{\mathcal{X}/B}^\bullet)$$ satisfies Griffiths transversality etc. So in this sense, things work. However, you would be missing the polarization, which need for most of the deeper results about the Griffiths period map.
• Do you mean you define $F^pH^k(X,\mathbb C)=ker(d:F^pA^k\to F^{p+1}A^k)/im(d:F^{p-1}A^k\to F^pA^k)$?
• Almost, but there is no shift because $d(F^p)\subset F^p$ Jul 24 at 14:50
• Sorry that I made a mistake, I mean $F^pH^k(X,\mathbb C)=ker(d:F^pA^k\to F^pA^{k+1})/im(d:F^pA^{k-1}\to F^pA^k)$. If we define the filtration like this, I don't think it obvious that we will have of decompostion of this filtration in BC cohomology, since for the Kahler case, it takes p158-159 for Voisin in her book<Hodge theory and...> to prove the corresponding decomposition.
• Technically, it gives an isomorphism $F^pH^k(X)= H^{p,k-p}\oplus\ldots$ with Dolbeault cohomology. Now use the fact, you stated, that BC and Dolbeault cohomologies are isomorphic. Jul 25 at 14:36 |
# PPA - Intro
## DFA
### RDA
Reaching Definitions Analysis (or more properly, Reaching Assignment Analysis):
An assignment (or definition) of form l: x := a may reach a certain program point if there is an execution of the program where x was last assigned a value of l when the point is reached.
So, $RD(l) = (RD_{entry}(l), RD_{exit}(l))$, in which $RD_{entry}(l)$ is the set of pairs $(x, l_x)$ meaning that the assignment to $x$ at line $l_x$ may reach $l$'s entry, and similarly for the exit.
Example:
1: y := x
2: z := 1
3: while y > 1 do
4: z := z * y
5: y := y - 1
6: y := 0
| $l$ | $RD_{entry}(l)$ | $RD_{exit}(l)$ |
| --- | --- | --- |
| 1 | (x,?), (y,?), (z,?) | (x,?), (y,1), (z,?) |
| 2 | (x,?), (y,1), (z,?) | (x,?), (y,1), (z,2) |
| 3 | (x,?), (y,1), (y,5), (z,2), (z,4) | (x,?), (y,1), (y,5), (z,2), (z,4) |
| 4 | (x,?), (y,1), (y,5), (z,2), (z,4) | (x,?), (y,1), (y,5), (z,4) |
| 5 | (x,?), (y,1), (y,5), (z,4) | (x,?), (y,5), (z,4) |
| 6 | (x,?), (y,1), (y,5), (z,2), (z,4) | (x,?), (y,6), (z,2), (z,4) |
### Equational Approach
Based on the flow information, we have two rules:
1. For an assignment l: x := a, we exclude all pairs $(x, l_0)$ from $RD_{entry}(l)$ and add $(x, l)$ to obtain $RD_{exit}(l)$; for a non-assignment statement at $l$, $RD_{exit}(l) = RD_{entry}(l)$
2. $RD_{entry}(l) = \cup_{i}RD_{exit}(l_i)$, where $l_i$ ranges over all the labels from which control might pass to $l$; for the first statement, $RD_{entry}(1) = \{(x, ?) \mid x \in Var\}$
About the least solution: the above system can be viewed as a function $F: [RD] \rightarrow [RD]$ that we apply iteratively; we continue the process until the reachability information doesn't change anymore, and the result is the least solution.
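A small sketch of this iteration on the example program above (the encoding of the flow graph and the variable names are my own); running it reproduces the RDA table:

```python
program = {  # label -> (assigned variable or None, predecessor labels)
    1: ("y", []),        # y := x
    2: ("z", [1]),       # z := 1
    3: (None, [2, 5]),   # while y > 1
    4: ("z", [3]),       # z := z * y
    5: ("y", [4]),       # y := y - 1
    6: ("y", [3]),       # y := 0
}
variables = {"x", "y", "z"}

entry = {l: set() for l in program}
exit_ = {l: set() for l in program}

changed = True
while changed:  # iterate F until nothing changes: the least solution
    changed = False
    for l, (var, preds) in program.items():
        new_entry = ({(v, "?") for v in variables} if l == 1
                     else set().union(*(exit_[p] for p in preds)))
        new_exit = (new_entry if var is None
                    else {p for p in new_entry if p[0] != var} | {(var, l)})
        if new_entry != entry[l] or new_exit != exit_[l]:
            entry[l], exit_[l] = new_entry, new_exit
            changed = True

for l in sorted(program):
    print(l, sorted(entry[l], key=str), sorted(exit_[l], key=str))
```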
### Constraint Approach
For an assignment l: z := a, $RD_{exit}(l) \supseteq (RD_{entry}(l) \setminus \{(z, l_z) \mid l_z \text{ any label or } ?\}) \cup \{(z, l)\}$
For non-assignment $l$, $RD_{exit}(l) \supseteq RD_{entry}(l)$
For any $l$, $RD_{entry}(l) \supseteq RD_{exit}(l_x)$ if $l_x$'s control might pass to $l$
And $RD_{entry}(1) \supseteq \{ (x, ?) \mid x \in Var \}$
The "least solution" idea also applies to this method. |
# Add FunctorKey for full ellipses
#### Details
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
• Sprint:
Measurement-S14-4
• Team:
Data Release Production
#### Description
Add a FunctorKey for full afw::geom::ellipses::Ellipse objects, delegating to the existing QuadrupoleKey and PointKey.
Need to add convenience methods to all three of these for adding fields to a Schema.
#### Activity
Jim Bosch added a comment -
Ready for review on u/jbosch/DM-1222 of afw; should be pretty straightforward.
This is a new class I needed while implementing DM-240; I decided to split it off into a subtask to make the overall review easier. The new class is a "FunctorKey" for Ellipses - that is, a Key-like object that aggregates a few normal Keys to allow more complex objects to be retrieved and set from a Record. It delegates all of its work to the existing QuadrupoleKey and PointKey classes, which I've also improved slightly by adding convenience methods to add appropriately-named fields to a Schema.
afw:u/jbosch/DM-1222 % git diff --stat master include/lsst/afw/table/aggregates.h | 107 ++++++++++++++++++++++++++++++++++- src/table/aggregates.cc | 46 +++++++++++++++ tests/testFunctorKeys.py | 70 +++++++++++++++++++++-- 3 files changed, 216 insertions(+), 7 deletions(-)
Russell Owen added a comment -
This looks great. The <x>Key classes and addField static methods look very useful.
Nit-pick in aggregates.h: EllipseKey wants a Quadrupole and a Point, but the subschema constructor brief comment says:
@brief Construct from a subschema, assuming xx, yy, and xy subfields
it should also mention x and y fields.
Jim Bosch added a comment -
Nit-pick in aggregates.h: EllipseKey wants a Quadrupole and a Point, but the subschema constructor brief comment says:
@brief Construct from a subschema, assuming xx, yy, and xy subfields
it should also mention x and y fields.
Fixed, thanks!
#### People
Assignee:
Jim Bosch
Reporter:
Jim Bosch
Reviewers:
Russell Owen
Watchers:
Jim Bosch, Russell Owen |
# The Stacks Project
## Tag 0BSK
### 10.150. Henselization and strict henselization
In this section we construct the henselization. We encourage the reader to keep in mind the uniqueness already proved in Lemma 10.149.6 and the functorial behaviour pointed out in Lemma 10.149.5 while reading this material.
Lemma 10.150.1. Let $(R, \mathfrak m, \kappa)$ be a local ring. There exists a local ring map $R \to R^h$ with the following properties
1. $R^h$ is henselian,
2. $R^h$ is a filtered colimit of étale $R$-algebras,
3. $\mathfrak m R^h$ is the maximal ideal of $R^h$, and
4. $\kappa = R^h/\mathfrak m R^h$.
Proof. Consider the category of pairs $(S, \mathfrak q)$ where $R \to S$ is an étale ring map, and $\mathfrak q$ is a prime of $S$ lying over $\mathfrak m$ with $\kappa = \kappa(\mathfrak q)$. A morphism of pairs $(S, \mathfrak q) \to (S', \mathfrak q')$ is given by an $R$-algebra map $\varphi : S \to S'$ such that $\varphi^{-1}(\mathfrak q') = \mathfrak q$. We set $$R^h = \mathop{\mathrm{colim}}\nolimits_{(S, \mathfrak q)} S.$$ Let us show that the category of pairs is filtered, see Categories, Definition 4.19.1. The category contains the pair $(R, \mathfrak m)$ and hence is not empty, which proves part (1) of Categories, Definition 4.19.1. For any pair $(S, \mathfrak q)$ the prime ideal $\mathfrak q$ is maximal with residue field $\kappa$ since the composition $\kappa \to S/\mathfrak q \to \kappa(\mathfrak q)$ is an isomorphism. Suppose that $(S, \mathfrak q)$ and $(S', \mathfrak q')$ are two objects. Set $S'' = S \otimes_R S'$ and $\mathfrak q'' = \mathfrak qS'' + \mathfrak q'S''$. Then $S''/\mathfrak q'' = S/\mathfrak q \otimes_R S'/\mathfrak q' = \kappa$ by what we said above. Moreover, $R \to S''$ is étale by Lemma 10.141.3. This proves part (2) of Categories, Definition 4.19.1. Next, suppose that $\varphi, \psi : (S, \mathfrak q) \to (S', \mathfrak q')$ are two morphisms of pairs. Then $\varphi$, $\psi$, and $S' \otimes_R S' \to S'$ are étale ring maps by Lemma 10.141.8. Consider $$S'' = (S' \otimes_{\varphi, S, \psi} S') \otimes_{S' \otimes_R S'} S'$$ with prime ideal $$\mathfrak q'' = (\mathfrak q' \otimes S' + S' \otimes \mathfrak q') \otimes S' + (S' \otimes_{\varphi, S, \psi} S') \otimes \mathfrak q'$$ Arguing as above (base change of étale maps is étale, composition of étale maps is étale) we see that $S''$ is étale over $R$. Moreover, the canonical map $S' \to S''$ (using the right most factor for example) equalizes $\varphi$ and $\psi$. This proves part (3) of Categories, Definition 4.19.1. Hence we conclude that $R^h$ consists of triples $(S, \mathfrak q, f)$ with $f \in S$, and two such triples $(S, \mathfrak q, f)$, $(S', \mathfrak q', f')$ define the same element of $R^h$ if and only if there exists a pair $(S'', \mathfrak q'')$ and morphisms of pairs $\varphi : (S, \mathfrak q) \to (S'', \mathfrak q'')$ and $\varphi' : (S', \mathfrak q') \to (S'', \mathfrak q'')$ such that $\varphi(f) = \varphi'(f')$.
Suppose that $x \in R^h$. Represent $x$ by a triple $(S, \mathfrak q, f)$. Let $\mathfrak q_1, \ldots, \mathfrak q_r$ be the other primes of $S$ lying over $\mathfrak m$. Then we can find a $g \in S$, $g \not \in \mathfrak q$ and $g \in \mathfrak q_i$ for $i = 1, \ldots, r$, see Lemma 10.14.2. Consider the morphism of pairs $(S, \mathfrak q) \to (S_g, \mathfrak qS_g)$. In this way we see that we may always assume that $x$ is given by a triple $(S, \mathfrak q, f)$ where $\mathfrak q$ is the only prime of $S$ lying over $\mathfrak m$, i.e., $\sqrt{\mathfrak mS} = \mathfrak q$. But since $R \to S$ is étale, we have $\mathfrak mS_{\mathfrak q} = \mathfrak qS_{\mathfrak q}$, see Lemma 10.141.5. Hence we actually get that $\mathfrak mS = \mathfrak q$.
Suppose that $x \not \in \mathfrak mR^h$. Represent $x$ by a triple $(S, \mathfrak q, f)$ with $\mathfrak mS = \mathfrak q$. Then $f \not \in \mathfrak mS$, i.e., $f \not \in \mathfrak q$. Hence $(S, \mathfrak q) \to (S_f, \mathfrak qS_f)$ is a morphism of pairs such that the image of $f$ becomes invertible. Hence $x$ is invertible with inverse represented by the triple $(S_f, \mathfrak qS_f, 1/f)$. We conclude that $R^h$ is a local ring with maximal ideal $\mathfrak mR^h$. The residue field is $\kappa$ since we can define $R^h/\mathfrak mR^h \to \kappa$ by mapping a triple $(S, \mathfrak q, f)$ to the residue class of $f$ modulo $\mathfrak q$.
We still have to show that $R^h$ is henselian. Namely, suppose that $P \in R^h[T]$ is a monic polynomial and $a_0 \in \kappa$ is a simple root of the reduction $\overline{P} \in \kappa[T]$. Then we can find a pair $(S, \mathfrak q)$ such that $P$ is the image of a monic polynomial $Q \in S[T]$. Since $S \to R^h$ induces an isomorphism of residue fields we see that $S' = S[T]/(Q)$ has a prime ideal $\mathfrak q' = (\mathfrak q, T - a_0)$ at which $S \to S'$ is standard étale. Moreover, $\kappa = \kappa(\mathfrak q')$. Pick $g \in S'$, $g \not \in \mathfrak q'$ such that $S'' = S'_g$ is étale over $S$. Then $(S, \mathfrak q) \to (S'', \mathfrak q'S'')$ is a morphism of pairs. Now that triple $(S'', \mathfrak q'S'', \text{class of }T)$ determines an element $a \in R^h$ with the properties $P(a) = 0$, and $\overline{a} = a_0$ as desired. $\square$
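To see the lifting step in a concrete case (this example is an editorial illustration, not part of the original text), take $R = \mathbf{Z}_{(7)}$, so $\kappa = \mathbf{F}_7$. The monic polynomial $P = T^2 - 2$ reduces to $\overline{P} = T^2 - 2 \in \mathbf{F}_7[T]$, which has $a_0 = 3$ as a simple root because $3^2 = 9 \equiv 2 \pmod 7$ and $\overline{P}'(3) = 6 \neq 0$ in $\mathbf{F}_7$. The construction in the proof therefore produces an element $a \in R^h$ with $a^2 = 2$ and residue class $3$; in particular the henselization of $\mathbf{Z}_{(7)}$ contains a square root of $2$, although $\mathbf{Z}_{(7)}$ itself does not.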
Lemma 10.150.2. Let $(R, \mathfrak m, \kappa)$ be a local ring. Let $\kappa \subset \kappa^{sep}$ be a separable algebraic closure. There exists a commutative diagram $$\xymatrix{ \kappa \ar[r] & \kappa \ar[r] & \kappa^{sep} \\ R \ar[r] \ar[u] & R^h \ar[r] \ar[u] & R^{sh} \ar[u] }$$ with the following properties
1. the map $R^h \to R^{sh}$ is local,
2. $R^{sh}$ is strictly henselian,
3. $R^{sh}$ is a filtered colimit of étale $R$-algebras,
4. $\mathfrak m R^{sh}$ is the maximal ideal of $R^{sh}$, and
5. $\kappa^{sep} = R^{sh}/\mathfrak m R^{sh}$.
Proof. This is proved by exactly the same proof as used for Lemma 10.150.1. The only difference is that, instead of pairs, one uses triples $(S, \mathfrak q, \alpha)$ where $R \to S$ is étale, $\mathfrak q$ is a prime of $S$ lying over $\mathfrak m$, and $\alpha : \kappa(\mathfrak q) \to \kappa^{sep}$ is an embedding of extensions of $\kappa$. $\square$
Definition 10.150.3. Let $(R, \mathfrak m, \kappa)$ be a local ring.
1. The local ring map $R \to R^h$ constructed in Lemma 10.150.1 is called the henselization of $R$.
2. Given a separable algebraic closure $\kappa \subset \kappa^{sep}$ the local ring map $R \to R^{sh}$ constructed in Lemma 10.150.2 is called the strict henselization of $R$ with respect to $\kappa \subset \kappa^{sep}$.
3. A local ring map $R \to R^{sh}$ is called a strict henselization of $R$ if it is isomorphic to one of the local ring maps constructed in Lemma 10.150.2.
The maps $R \to R^h \to R^{sh}$ are flat local ring homomorphisms. By Lemma 10.149.6 the $R$-algebras $R^h$ and $R^{sh}$ are well defined up to unique isomorphism by the conditions that they are henselian local, filtered colimits of étale $R$-algebras with residue field $\kappa$ and $\kappa^{sep}$. In the rest of this section we mostly just discuss functoriality of the (strict) henselizations. We will discuss more intricate results concerning the relationship between $R$ and its henselization in More on Algebra, Section 15.42.
Remark 10.150.4. We can also construct $R^{sh}$ from $R^h$. Namely, for any finite separable subextension $\kappa \subset \kappa' \subset \kappa^{sep}$ there exists a unique (up to unique isomorphism) finite étale local ring extension $R^h \subset R^h(\kappa')$ whose residue field extension reproduces the given extension, see Lemma 10.148.7. Hence we can set $$R^{sh} = \bigcup\nolimits_{\kappa \subset \kappa' \subset \kappa^{sep}} R^h(\kappa')$$ The arrows in this system, compatible with the arrows on the level of residue fields, exist by Lemma 10.148.7. This will produce a henselian local ring by Lemma 10.149.7 since each of the rings $R^h(\kappa')$ is henselian by Lemma 10.148.4. By construction the residue field extension induced by $R^h \to R^{sh}$ is the field extension $\kappa \subset \kappa^{sep}$. Hence $R^{sh}$ so constructed is strictly henselian. By Lemma 10.149.2 the $R$-algebra $R^{sh}$ is a colimit of étale $R$-algebras. Hence the uniqueness of Lemma 10.149.6 shows that $R^{sh}$ is the strict henselization.
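For instance (an editorial illustration, not part of the original remark), take $R = \mathbf{Z}_{(p)}$, so $\kappa = \mathbf{F}_p$ and the finite separable subextensions $\kappa \subset \kappa' \subset \kappa^{sep}$ are the finite fields $\mathbf{F}_{p^n}$. Each $R^h(\mathbf{F}_{p^n})$ is the degree $n$ unramified extension of $R^h$, and the construction above exhibits the strict henselization as the filtered union $$R^{sh} = \bigcup\nolimits_{n \geq 1} R^h(\mathbf{F}_{p^n}),$$ a henselian local ring with maximal ideal generated by $p$ and residue field $\overline{\mathbf{F}}_p$.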
Lemma 10.150.5. Let $R \to S$ be a local map of local rings. Let $S \to S^h$ be the henselization. Let $R \to A$ be an étale ring map and let $\mathfrak q$ be a prime of $A$ lying over $\mathfrak m_R$ such that $R/\mathfrak m_R \cong \kappa(\mathfrak q)$. Then there exists a unique morphism of rings $f : A \to S^h$ fitting into the commutative diagram $$\xymatrix{ A \ar[r]_f & S^h \\ R \ar[u] \ar[r] & S \ar[u] }$$ such that $f^{-1}(\mathfrak m_{S^h}) = \mathfrak q$.
Proof. This is a special case of Lemma 10.148.11. $\square$
Lemma 10.150.6. Let $R \to S$ be a local map of local rings. Let $R \to R^h$ and $S \to S^h$ be the henselizations. There exists a unique local ring map $R^h \to S^h$ fitting into the commutative diagram $$\xymatrix{ R^h \ar[r]_f & S^h \\ R \ar[u] \ar[r] & S \ar[u] }$$
Proof. Follows immediately from Lemma 10.149.5. $\square$
Here is a slightly different construction of the henselization.
Lemma 10.150.7. Let $R$ be a ring. Let $\mathfrak p \subset R$ be a prime ideal. Consider the category of pairs $(S, \mathfrak q)$ where $R \to S$ is étale and $\mathfrak q$ is a prime lying over $\mathfrak p$ such that $\kappa(\mathfrak p) = \kappa(\mathfrak q)$. This category is filtered and $$(R_{\mathfrak p})^h = \mathop{\mathrm{colim}}\nolimits_{(S, \mathfrak q)} S = \mathop{\mathrm{colim}}\nolimits_{(S, \mathfrak q)} S_{\mathfrak q}$$ canonically.
Proof. A morphism of pairs $(S, \mathfrak q) \to (S', \mathfrak q')$ is given by an $R$-algebra map $\varphi : S \to S'$ such that $\varphi^{-1}(\mathfrak q') = \mathfrak q$. Let us show that the category of pairs is filtered, see Categories, Definition 4.19.1. The category contains the pair $(R, \mathfrak p)$ and hence is not empty, which proves part (1) of Categories, Definition 4.19.1. Suppose that $(S, \mathfrak q)$ and $(S', \mathfrak q')$ are two pairs. Note that $\mathfrak q$, resp. $\mathfrak q'$ correspond to primes of the fibre rings $S \otimes \kappa(\mathfrak p)$, resp. $S' \otimes \kappa(\mathfrak p)$ with residue fields $\kappa(\mathfrak p)$, hence they correspond to maximal ideals of $S \otimes \kappa(\mathfrak p)$, resp. $S' \otimes \kappa(\mathfrak p)$. Set $S'' = S \otimes_R S'$. By the above there exists a unique prime $\mathfrak q'' \subset S''$ lying over $\mathfrak q$ and over $\mathfrak q'$ whose residue field is $\kappa(\mathfrak p)$. The ring map $R \to S''$ is étale by Lemma 10.141.3. This proves part (2) of Categories, Definition 4.19.1. Next, suppose that $\varphi, \psi : (S, \mathfrak q) \to (S', \mathfrak q')$ are two morphisms of pairs. Then $\varphi$, $\psi$, and $S' \otimes_R S' \to S'$ are étale ring maps by Lemma 10.141.8. Consider $$S'' = (S' \otimes_{\varphi, S, \psi} S') \otimes_{S' \otimes_R S'} S'$$ Arguing as above (base change of étale maps is étale, composition of étale maps is étale) we see that $S''$ is étale over $R$. The fibre ring of $S''$ over $\mathfrak p$ is $$F'' = (F' \otimes_{\varphi, F, \psi} F') \otimes_{F' \otimes_{\kappa(\mathfrak p)} F'} F'$$ where $F', F$ are the fibre rings of $S'$ and $S$. Since $\varphi$ and $\psi$ are morphisms of pairs the map $F' \to \kappa(\mathfrak p)$ corresponding to $\mathfrak q'$ extends to a map $F'' \to \kappa(\mathfrak p)$ and in turn corresponds to a prime ideal $\mathfrak q'' \subset S''$ whose residue field is $\kappa(\mathfrak p)$. The canonical map $S' \to S''$ (using the right most factor for example) is a morphism of pairs $(S', \mathfrak q') \to (S'', \mathfrak q'')$ which equalizes $\varphi$ and $\psi$. This proves part (3) of Categories, Definition 4.19.1. Hence we conclude that the category is filtered.
Recall that in the proof of Lemma 10.150.1 we constructed $(R_{\mathfrak p})^h$ as the corresponding colimit but starting with $R_{\mathfrak p}$ and its maximal ideal $\mathfrak pR_{\mathfrak p}$. Now, given any pair $(S, \mathfrak q)$ for $(R, \mathfrak p)$ we obtain a pair $(S_{\mathfrak p}, \mathfrak qS_{\mathfrak p})$ for $(R_{\mathfrak p}, \mathfrak pR_{\mathfrak p})$. Moreover, in this situation $$S_{\mathfrak p} = \mathop{\mathrm{colim}}\nolimits_{f \in R, f \not \in \mathfrak p} S_f.$$ Hence in order to show the equalities of the lemma, it suffices to show that any pair $(S_{loc}, \mathfrak q_{loc})$ for $(R_{\mathfrak p}, \mathfrak pR_{\mathfrak p})$ is of the form $(S_{\mathfrak p}, \mathfrak qS_{\mathfrak p})$ for some pair $(S, \mathfrak q)$ over $(R, \mathfrak p)$ (some details omitted). This follows from Lemma 10.141.3. $\square$
Lemma 10.150.8. Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime lying over $\mathfrak p \subset R$. Let $R \to R^h$ and $S \to S^h$ be the henselizations of $R_\mathfrak p$ and $S_\mathfrak q$. The local ring map $R^h \to S^h$ of Lemma 10.150.6 identifies $S^h$ with the henselization of $R^h \otimes_R S$ at the unique prime lying over $\mathfrak m^h$ and $\mathfrak q$.
Proof. By Lemma 10.150.7 we see that $R^h$, resp. $S^h$ are filtered colimits of étale $R$, resp. $S$-algebras. Hence we see that $R^h \otimes_R S$ is a filtered colimit of étale $S$-algebras $A_i$ (Lemma 10.141.3). By Lemma 10.149.4 we see that $S^h$ is a filtered colimit of étale $R^h \otimes_R S$-algebras. Since moreover $S^h$ is a henselian local ring with residue field equal to $\kappa(\mathfrak q)$, the statement follows from the uniqueness result of Lemma 10.149.6. $\square$
Lemma 10.150.9. Let $R \to S$ be a ring map. Let $\mathfrak q$ be a prime of $S$ lying over $\mathfrak p$ in $R$. Assume $R \to S$ is quasi-finite at $\mathfrak q$. The commutative diagram $$\xymatrix{ R_{\mathfrak p}^h \ar[r] & S_{\mathfrak q}^h \\ R_{\mathfrak p} \ar[u] \ar[r] & S_{\mathfrak q} \ar[u] }$$ of Lemma 10.150.6 identifies $S_{\mathfrak q}^h$ with the localization of $R_{\mathfrak p}^h \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$ at the prime generated by $\mathfrak q$.
Proof. Note that $R_{\mathfrak p}^h \otimes_R S$ is quasi-finite over $R_{\mathfrak p}^h$ at the prime ideal corresponding to $\mathfrak q$, see Lemma 10.121.6. Hence the localization $S'$ of $R_{\mathfrak p}^h \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$ is henselian, see Lemma 10.148.4. As a localization $S'$ is a filtered colimit of étale $R_{\mathfrak p}^h \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$-algebras. By Lemma 10.150.8 we see that $S_\mathfrak q^h$ is the henselization of $R_{\mathfrak p}^h \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$. Thus $S' = S_\mathfrak q^h$ by the uniqueness result of Lemma 10.149.6. $\square$
Lemma 10.150.10. Let $R$ be a local ring with henselization $R^h$. Let $I \subset \mathfrak m_R$. Then $R^h/IR^h$ is the henselization of $R/I$.
Proof. This is a special case of Lemma 10.150.9. $\square$
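As a simple illustration (editorial, not part of the original text), let $k$ be a field, $R = k[x, y]_{(x, y)}$ and $I = (y)$. Then $R/I = k[x]_{(x)}$ and the lemma identifies $R^h/yR^h$ with the henselization of $k[x]_{(x)}$, which can in turn be identified with the ring of algebraic power series inside $k[[x]]$.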
Lemma 10.150.11. Let $\varphi : R \to S$ be a local map of local rings. Let $S/\mathfrak m_S \subset \kappa^{sep}$ be a separable algebraic closure. Let $S \to S^{sh}$ be the strict henselization of $S$ with respect to $S/\mathfrak m_S \subset \kappa^{sep}$. Let $R \to A$ be an étale ring map and let $\mathfrak q$ be a prime of $A$ lying over $\mathfrak m_R$. Given any commutative diagram $$\xymatrix{ \kappa(\mathfrak q) \ar[r]_{\phi} & \kappa^{sep} \\ R/\mathfrak m_R \ar[r]^{\varphi} \ar[u] & S/\mathfrak m_S \ar[u] }$$ there exists a unique morphism of rings $f : A \to S^{sh}$ fitting into the commutative diagram $$\xymatrix{ A \ar[r]_f & S^{sh} \\ R \ar[u] \ar[r]^{\varphi} & S \ar[u] }$$ such that $f^{-1}(\mathfrak m_{S^{sh}}) = \mathfrak q$ and the induced map $\kappa(\mathfrak q) \to \kappa^{sep}$ is the given one.
Proof. This is a special case of Lemma 10.148.11. $\square$
Lemma 10.150.12. Let $R \to S$ be a local map of local rings. Choose separable algebraic closures $R/\mathfrak m_R \subset \kappa_1^{sep}$ and $S/\mathfrak m_S \subset \kappa_2^{sep}$. Let $R \to R^{sh}$ and $S \to S^{sh}$ be the corresponding strict henselizations. Given any commutative diagram $$\xymatrix{ \kappa_1^{sep} \ar[r]_{\phi} & \kappa_2^{sep} \\ R/\mathfrak m_R \ar[r]^{\varphi} \ar[u] & S/\mathfrak m_S \ar[u] }$$ There exists a unique local ring map $R^{sh} \to S^{sh}$ fitting into the commutative diagram $$\xymatrix{ R^{sh} \ar[r]_f & S^{sh} \\ R \ar[u] \ar[r] & S \ar[u] }$$ and inducing $\phi$ on the residue fields of $R^{sh}$ and $S^{sh}$.
Proof. Follows immediately from Lemma 10.149.5. $\square$
Lemma 10.150.13. Let $R$ be a ring. Let $\mathfrak p \subset R$ be a prime ideal. Let $\kappa(\mathfrak p) \subset \kappa^{sep}$ be a separable algebraic closure. Consider the category of triples $(S, \mathfrak q, \phi)$ where $R \to S$ is étale, $\mathfrak q$ is a prime lying over $\mathfrak p$, and $\phi : \kappa(\mathfrak q) \to \kappa^{sep}$ is a $\kappa(\mathfrak p)$-algebra map. This category is filtered and $$(R_{\mathfrak p})^{sh} = \mathop{\mathrm{colim}}\nolimits_{(S, \mathfrak q, \phi)} S = \mathop{\mathrm{colim}}\nolimits_{(S, \mathfrak q, \phi)} S_{\mathfrak q}$$ canonically.
Proof. A morphism of triples $(S, \mathfrak q, \phi) \to (S', \mathfrak q', \phi')$ is given by an $R$-algebra map $\varphi : S \to S'$ such that $\varphi^{-1}(\mathfrak q') = \mathfrak q$ and such that $\phi' \circ \varphi = \phi$. Let us show that the category of triples is filtered, see Categories, Definition 4.19.1. The category contains the triple $(R, \mathfrak p, \kappa(\mathfrak p) \subset \kappa^{sep})$ and hence is not empty, which proves part (1) of Categories, Definition 4.19.1. Suppose that $(S, \mathfrak q, \phi)$ and $(S', \mathfrak q', \phi')$ are two triples. Note that $\mathfrak q$, resp. $\mathfrak q'$ correspond to primes of the fibre rings $S \otimes \kappa(\mathfrak p)$, resp. $S' \otimes \kappa(\mathfrak p)$ with residue fields finite separable over $\kappa(\mathfrak p)$ and $\phi$, resp. $\phi'$ correspond to maps into $\kappa^{sep}$. Hence this data corresponds to $\kappa(\mathfrak p)$-algebra maps $$\phi : S \otimes_R \kappa(\mathfrak p) \longrightarrow \kappa^{sep}, \quad \phi' : S' \otimes_R \kappa(\mathfrak p) \longrightarrow \kappa^{sep}.$$ Set $S'' = S \otimes_R S'$. Combining the maps above we get a unique $\kappa(\mathfrak p)$-algebra map $$\phi'' = \phi \otimes \phi' : S'' \otimes_R \kappa(\mathfrak p) \longrightarrow \kappa^{sep}$$ whose kernel corresponds to a prime $\mathfrak q'' \subset S''$ lying over $\mathfrak q$ and over $\mathfrak q'$, and whose residue field maps via $\phi''$ to the compositum of $\phi(\kappa(\mathfrak q))$ and $\phi'(\kappa(\mathfrak q'))$ in $\kappa^{sep}$. The ring map $R \to S''$ is étale by Lemma 10.141.3. Hence $(S'', \mathfrak q'', \phi'')$ is a triple dominating both $(S, \mathfrak q, \phi)$ and $(S', \mathfrak q', \phi')$. This proves part (2) of Categories, Definition 4.19.1. Next, suppose that $\varphi, \psi : (S, \mathfrak q, \phi) \to (S', \mathfrak q', \phi')$ are two morphisms of triples. Then $\varphi$, $\psi$, and $S' \otimes_R S' \to S'$ are étale ring maps by Lemma 10.141.8. Consider $$S'' = (S' \otimes_{\varphi, S, \psi} S') \otimes_{S' \otimes_R S'} S'$$ Arguing as above (base change of étale maps is étale, composition of étale maps is étale) we see that $S''$ is étale over $R$. The fibre ring of $S''$ over $\mathfrak p$ is $$F'' = (F' \otimes_{\varphi, F, \psi} F') \otimes_{F' \otimes_{\kappa(\mathfrak p)} F'} F'$$ where $F', F$ are the fibre rings of $S'$ and $S$. Since $\varphi$ and $\psi$ are morphisms of triples the map $\phi' : F' \to \kappa^{sep}$ extends to a map $\phi'' : F'' \to \kappa^{sep}$ which in turn corresponds to a prime ideal $\mathfrak q'' \subset S''$. The canonical map $S' \to S''$ (using the right most factor for example) is a morphism of triples $(S', \mathfrak q', \phi') \to (S'', \mathfrak q'', \phi'')$ which equalizes $\varphi$ and $\psi$. This proves part (3) of Categories, Definition 4.19.1. Hence we conclude that the category is filtered.
We still have to show that the colimit $R_{colim}$ of the system is equal to the strict henselization of $R_{\mathfrak p}$ with respect to $\kappa^{sep}$. To see this note that the system of triples $(S, \mathfrak q, \phi)$ contains as a subsystem the pairs $(S, \mathfrak q)$ of Lemma 10.150.7. Hence $R_{colim}$ contains $R_{\mathfrak p}^h$ by the result of that lemma. Moreover, it is clear that $R_{\mathfrak p}^h \subset R_{colim}$ is a directed colimit of étale ring extensions. It follows that $R_{colim}$ is henselian by Lemmas 10.148.4 and 10.149.7. Finally, by Lemma 10.141.15 we see that the residue field of $R_{colim}$ is equal to $\kappa^{sep}$. Hence we conclude that $R_{colim}$ is strictly henselian and hence equals the strict henselization of $R_{\mathfrak p}$ as desired. Some details omitted. $\square$
Lemma 10.150.14. Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime lying over $\mathfrak p \subset R$. Choose separable algebraic closures $\kappa(\mathfrak p) \subset \kappa_1^{sep}$ and $\kappa(\mathfrak q) \subset \kappa_2^{sep}$. Let $R^{sh}$ and $S^{sh}$ be the corresponding strict henselizations of $R_\mathfrak p$ and $S_\mathfrak q$. Given any commutative diagram $$\xymatrix{ \kappa_1^{sep} \ar[r]_{\phi} & \kappa_2^{sep} \\ \kappa(\mathfrak p) \ar[r]^{\varphi} \ar[u] & \kappa(\mathfrak q) \ar[u] }$$ The local ring map $R^{sh} \to S^{sh}$ of Lemma 10.150.12 identifies $S^{sh}$ with the strict henselization of $R^{sh} \otimes_R S$ at a prime lying over $\mathfrak m^{sh}$ and $\mathfrak q$.
Proof. The proof is identical to the proof of Lemma 10.150.8 except that it uses Lemma 10.150.13 instead of Lemma 10.150.7. $\square$
Lemma 10.150.15. Let $R \to S$ be a ring map. Let $\mathfrak q$ be a prime of $S$ lying over $\mathfrak p$ in $R$. Let $\kappa(\mathfrak q) \subset \kappa^{sep}$ be a separable algebraic closure. Assume $R \to S$ is quasi-finite at $\mathfrak q$. The commutative diagram $$\xymatrix{ R_{\mathfrak p}^{sh} \ar[r] & S_{\mathfrak q}^{sh} \\ R_{\mathfrak p} \ar[u] \ar[r] & S_{\mathfrak q} \ar[u] }$$ of Lemma 10.150.12 identifies $S_{\mathfrak q}^{sh}$ with a localization of $R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$.
Proof. The residue field of $R_{\mathfrak p}^{sh}$ is the separable algebraic closure of $\kappa(\mathfrak p)$ in $\kappa^{sep}$. Note that $R_{\mathfrak p}^{sh} \otimes_R S$ is quasi-finite over $R_{\mathfrak p}^{sh}$ at the prime ideal corresponding to $\mathfrak q$, see Lemma 10.121.6. Hence the localization $S'$ of $R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$ is henselian, see Lemma 10.148.4. Note that the residue field of $S'$ is $\kappa^{sep}$ since it contains both the separable algebraic closure of $\kappa(\mathfrak p)$ and $\kappa(\mathfrak q)$. Furthermore, as a localization $S'$ is a filtered colimit of étale $R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$-algebras. By Lemma 10.150.14 we see that $S_{\mathfrak q}^{sh}$ is a strict henselization of $R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$. Thus $S' = S_\mathfrak q^{sh}$ by the uniqueness result of Lemma 10.149.6. $\square$
Lemma 10.150.16. Let $R$ be a local ring with strict henselization $R^{sh}$. Let $I \subset \mathfrak m_R$. Then $R^{sh}/IR^{sh}$ is a strict henselization of $R/I$.
Proof. This is a special case of Lemma 10.150.15. $\square$
Lemma 10.150.17. Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime lying over $\mathfrak p \subset R$ such that $\kappa(\mathfrak p) \to \kappa(\mathfrak q)$ is an isomorphism. Choose a separable algebraic closure $\kappa^{sep}$ of $\kappa(\mathfrak p) = \kappa(\mathfrak q)$. Then $$(S_\mathfrak q)^{sh} = (S_\mathfrak q)^h \otimes_{(R_\mathfrak p)^h} (R_\mathfrak p)^{sh}$$
Proof. This follows from the alternative construction of the strict henselization of a local ring in Remark 10.150.4 and the fact that the residue fields are equal. Some details omitted. $\square$
## Merging Functions, Modules, Classes, the whole nine yards...
Say we have a declarative language, where a = b is an assertion that governs the rest of the code, not an assignment operation. To this language we add the notion of explicit scopes -- assignments only apply within a scope, it's okay to have multiple contradictory assignments in different scopes. On top of this, we add one more idea -- scopes can inherit from one another, which is to say that children can overwrite assignments of their parents but otherwise maintain the same environment. For example:
foo(a, b, c)
gamma = a * b
rho = b * c
gamma * rho
bar(a, b, c) extends foo
gamma = a * a
is a piece of code in which bar assigns a different value to gamma but otherwise runs exactly as foo does, computing gamma * rho and returning the result. What I am talking about is function inheritance. In the above example, foo and bar must have the same signature for it to work -- but there are ideas from object oriented programming that can be leveraged to get around that.
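To make the idea concrete in an existing language (this sketch is an editorial addition, not the poster's code; the class and method names are invented), each local assignment can be modelled as an overridable method, so a "child" replaces gamma while reusing the rest of the body:

```python
# Sketch of "function inheritance" via ordinary subclassing.
# Names (Foo, Bar, gamma, rho) mirror the pseudocode above.
class Foo:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    def gamma(self):
        return self.a * self.b

    def rho(self):
        return self.b * self.c

    def result(self):            # the "body" of foo
        return self.gamma() * self.rho()

class Bar(Foo):                  # "bar extends foo"
    def gamma(self):             # override a single local assignment
        return self.a * self.a

print(Foo(2, 3, 4).result())     # (2*3) * (3*4) = 72
print(Bar(2, 3, 4).result())     # (2*2) * (3*4) = 48
```

The trade-off is also visible here: every local name you want to be overridable becomes part of the function's public surface, which is exactly the equivalence worry raised further down the thread.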
Of course, the pseudocode example above is a little bit simplistic. Far more interesting is the use of the implies operator (->) for matching:
foo(a, b, c)
'a' -> a
'b' -> b
'c' -> c
'foo' -> 'foo'
bar(a, b, c) extends foo
'foo' -> 'bar extending foo'
Here we use the implies operator to mean "when x is injected into my symbol space there is y" or in other words "a message x gets a response of y". Now, this example also offers up some real problems -- how do we specify tail recursion or termination in this language? -- but I'm willing to bet that a little study would reveal straightforward syntactic forms for this kind of thing.
Now that I have thought this language up, I have to work on it. I swore there would never be a day when I would stray from the well settled lands of software development to wander in the wilds of language design! Alas, I am doomed. All I ask from ltu is some thoughtful criticism of my idea, some pointers to earlier work in this domain and some suggestions about the syntax. What I want in the end is a language that is process/message oriented like Erlang but also has straightforward syntax for the notion of 'inheritance'.
### I had the same idea
I am working on the same idea for my own language. The biggest problem is that normal function scopes already have inheritance: they inherit from the parent scope. So adding explicit inheritance gives you multiple inheritance, with all the problems that go with it.
I have thought about this some more. A language like Objective-C, which does have 'message passing' of a kind, would be exactly what I wanted, except for the absence of concurrent facilities. So, my idea has narrowed a little bit -- instead of a 'scope oriented language' we can have an object-oriented language in which objects are concurrent processes that can receive messages from other objects and the message passing system is brokered by some clever VM that knows how to deal with method calls across the network to other 'nodes'.
Maybe there is the germ of an OO extension to Erlang in all this -- something to do with mapping 'methods' to receive in that language. Consider also that Erlang may be able to play the role of C for some other language -- you might translate Haskell to Erlang to get concurrent, distributed Haskell, for example. Erlang is pretty dumb when it comes to numbers, strings and records -- but it's perfectly okay for control flow and manipulating binary junk -- it may be the C of concurrent programming.
### While this idea is appealing
While this idea is appealing, I suspect it's hard to find the right level of granularity. Often a member variable is really just an object attribute and not a concurrent process. It doesn't make sense to enforce isolation and transparent distribution in any case. When you deal with state and the loss of referential transparency you need at least topological notions of locality and boundary. Maybe escape analysis has the right vocabulary to start with?
I guess Erlang with classes is much harder than C with classes. But maybe you can prove me wrong?
### Finding the right level of
Finding the right level of granularity is challenging -- it seems that the 'server' is the primitive that fits best what I want to do. A 'server' is a shared nothing, tail-recursive function that receives certain messages. What I'd like is a way to compose servers and override their responses to certain messages.
foo(a, b) ->
'first' -> a
'second' -> b
bar(c)
'only' -> c
foobar(a, b, c)
?compose foo(a, b) bar(c)
'abc' -> a * b * c
So now we can have a foobar that responds like a bar and a foo. I swear I just saw the dreaded diamond around here somewhere... I guess a combination of variable renaming (for function local code) and orderly message overriding (for the interface) will allow for 'multiple composition' without pain (though not without a little confusion).
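One concrete reading of ?compose in an existing language (again an editorial sketch, not the poster's design; the dictionary-based encoding is invented) is to treat each server as a map from messages to responses and let later components, and then the composite's own entries, win on clashes:

```python
# Sketch of composing two message/response maps with overriding.
def foo(a, b):
    return {"first": a, "second": b}

def bar(c):
    return {"only": c}

def foobar(a, b, c):
    handlers = {}
    handlers.update(foo(a, b))    # compose: later updates win on clashes
    handlers.update(bar(c))
    handlers["abc"] = a * b * c   # response added by the composite itself
    return handlers

server = foobar(2, 3, 4)
print(server["first"], server["only"], server["abc"])   # 2 4 24
```

The clash rule here ("last one wins") is an arbitrary choice; it plays the role of the orderly message overriding mentioned above, and it is exactly the point where diamond-style ambiguity has to be resolved.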
As for escape analysis, I'm not sure what to say about it. I'm certainly not going to implement it. That's for languages with mutation and shared state -- two things I want to stay away from, if I can.
### Now I need some cool
Now I need some cool branding.
• ConcurrenC (hard to resist, but it doesn't make any sense)
• Oolang (play on Erlang and a category of tea)
• Erloo (cute, unintimidating)
Which is better? Really, ConcurrenC belongs to another project.
You could have called it "Except", except it would likely be confused with Expect.
Josh
### With a little more
With a little more experience, I see Erlang as the sh of concurrent programming.
### beta?
I haven't really used the beta language (or the successor, gbeta), but it seems like the unification of "class" and "method" declarations into a common "pattern" construct in those languages would bring about exactly the kind of function inheritance you describe.
Can anybody with greater familiarity with beta comment on whether this is the case?
### Prototype-based languages
I'm pretty sure this is how things work in most purely prototype-based languages like Self and Io (but not JavaScript). In Io, scopes are first class objects that delegate to the outer scope, and everything ultimately delegates to both the Lobby and Object through a cycle of delegation. There are two different things you can make: blocks, whose scope delegates to the scope where they were made (lexical scope) and methods, whose scope delegates to the object that they're on.
### Merging stuff
You can do that, but you have a problem here. Now, what does it mean for two functions to be equivalent? Ordinarily, you choose either extensional equality (functions are equivalent iff for all inputs they produce the same result) or the weaker intensional equality. But allowing "extension" like this exposes all the local variables of a function as part of that function's identity.
It's probably better to factor that sort of code into a class (which you can inherit from and extend by overriding existing fields and adding new ones), and a function that constructs an element of that class and then returns the member corresponding to the "return" value. This scheme doesn't require stretching the notions of extension and equivalence.
### Ohmu
Check out the paper linked to from here.
"The Ohmu model unifies functions, classes, instances, templates, and even aspects into a single construct - the structure. Function calls, instantiation, aspect-weaving, and inheritance are likewise unified into a single operation - the structure transformation" |
# [NTG-context] How to let a macro check the previous value of #1 the last time the same macro was called?
Joel uaru99 at yahoo.com
Sun Jan 9 15:16:37 CET 2022
```Is there a way for a macro to check the previous value of #1, the last time that same macro was called?
Here is a minimal working example, pretending that `\previousvalue` is equal to #1 from the last time the same macro was called:
\define[1]\mymacro{
\if\previousvalue=#1
same as last time
\else
it is different from last time
\fi
}
\starttext
\mymacro{cat}
\mymacro{cat}
\mymacro{mouse}
\mymacro{mouse}
\mymacro{cat}
\stoptext
This would print:
it is different from last time <--it was never called previously
same as last time
it is different from last time
same as last time
it is different from last time
--Joel
```
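For what it's worth, one way to get the behaviour sketched above is to store the argument in a global macro after each call and compare it with \ifx on the next call. This is only an untested sketch written against the example (the helper macro \mymacroargument is invented here), not an answer from the list:

```tex
\def\previousvalue{} % nothing stored before the first call

\define[1]\mymacro{%
  \edef\mymacroargument{#1}%
  \ifx\mymacroargument\previousvalue
    same as last time
  \else
    it is different from last time
  \fi
  \global\let\previousvalue\mymacroargument
}

\starttext
\mymacro{cat}   % different (nothing stored yet)
\mymacro{cat}   % same
\mymacro{mouse} % different
\stoptext
```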
# What are all the possible rational zeros for f(x)=-6x^3+5x^2-2x+18 and how do you find all zeros?
Jan 20, 2018
$x = \sqrt[3]{\frac{100345392 + \sqrt{{100345392}^{2} + 181081080576}}{68024448}} + \sqrt[3]{\frac{100345392 - \sqrt{{100345392}^{2} + 181081080576}}{68024448}} + \frac{5}{18}$
#### Explanation:
We can use the rational roots theorem to find that all the possible rational roots are $\pm$ a factor of the constant term $18$ divided by a factor of the leading coefficient $6$:
$\pm 18 , \pm 9 , \pm 6 , \pm \frac{9}{2} , \pm 3 , \pm 2 , \pm \frac{3}{2} , \pm 1 , \pm \frac{2}{3} , \pm \frac{1}{2} , \pm \frac{1}{3} , \pm \frac{1}{6}$
By using Descartes' rule of signs and showing that $f \left(- x\right)$ has no sign changes, we can exclude all the negative possibilities:
$18 , 9 , 6 , \frac{9}{2} , 3 , \frac{3}{2} , 1 , \frac{2}{3} , \frac{1}{2} , \frac{1}{3} , \frac{1}{6}$
After trying these, we find that none of them are zeroes. This means that the polynomial has no rational zeroes.
We can also compute the cubic discriminant and get a value of −304580. This means that the polynomial has one real solution and two complex ones.
To find the real solution, I will first transform the equation into a depressed cubic. This is done by first dividing so the leading coefficient is $1$:
${x}^{3} - \frac{5}{6} {x}^{2} + \frac{1}{3} x - 3 = 0$
Then we substitute $x = t + \frac{5}{18}$:
${\left(t + \frac{5}{18}\right)}^{3} - \frac{5}{6} {\left(t + \frac{5}{18}\right)}^{2} + \frac{1}{3} \left(t + \frac{5}{18}\right) - 3 = 0$
${t}^{3} + \frac{11}{108} t - \frac{8603}{2916} = 0$
Now that we have a depressed cubic (i.e. no ${t}^{2}$ terms), we can solve by substituting $t = u + v$:
${\left(u + v\right)}^{3} + \frac{11}{108} \left(u + v\right) - \frac{8603}{2916} = 0$
${u}^{3} + {v}^{3} + 3 {u}^{2} v + 3 u {v}^{2} + \frac{11}{108} u + \frac{11}{108} v - \frac{8603}{2916} = 0$
Now we can factor:
${u}^{3} + {v}^{3} + 3 u v \left(u + v\right) + \frac{11}{108} \left(u + v\right) - \frac{8603}{2916} = 0$
${u}^{3} + {v}^{3} + \left(3 u v + \frac{11}{108}\right) \left(u + v\right) - \frac{8603}{2916} = 0$
Since $u$ and $v$ are any arbitrary numbers, I can add a condition that $3 u v + \frac{11}{108} = 0$. This allows us to get rid of the middle term and rewrite $v$ in terms of $u$:
${u}^{3} + {\left(- \frac{11}{324 u}\right)}^{3} - \frac{8603}{2916} = 0$
${u}^{3} - \frac{1331}{34012224 {u}^{3}} - \frac{8603}{2916} = 0$
Multiply through by ${u}^{3}$:
${u}^{6} - \frac{8603}{2916} {u}^{3} - \frac{1331}{34012224} = 0$
This is a quadratic in ${u}^{3}$, so we can solve using the quadratic formula:
${u}^{3} = \frac{100345392 \pm \sqrt{{100345392}^{2} + 181081080576}}{68024448}$
$u = \sqrt[3]{\frac{100345392 \pm \sqrt{{100345392}^{2} + 181081080576}}{68024448}}$
We can interpret one of the roots as the value for $u$, and the other for $v$. This gives that $t$ is equal to:
$t = u + v = \sqrt[3]{\frac{100345392 + \sqrt{{100345392}^{2} + 181081080576}}{68024448}} + \sqrt[3]{\frac{100345392 - \sqrt{{100345392}^{2} + 181081080576}}{68024448}}$
Now, we just put back the solution for $t$ into $x = t + \frac{5}{18}$ to get the solution for $x$:
$x = \sqrt[3]{\frac{100345392 + \sqrt{{100345392}^{2} + 181081080576}}{68024448}} + \sqrt[3]{\frac{100345392 - \sqrt{{100345392}^{2} + 181081080576}}{68024448}} + \frac{5}{18}$ |
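As a quick numerical sanity check of this closed form (a small sketch, not part of the original working):

```python
import numpy as np

s = np.sqrt(100345392**2 + 181081080576)
t = np.cbrt((100345392 + s) / 68024448) + np.cbrt((100345392 - s) / 68024448)
x = t + 5/18

f = lambda x: -6*x**3 + 5*x**2 - 2*x + 18
print(x, f(x))                     # x is roughly 1.688, f(x) is roughly 0
print(np.roots([-6, 5, -2, 18]))   # the one real root and the two complex roots
```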
# The Unapologetic Mathematician
## Matrices I
May 20, 2008 - Posted by | Algebra, Linear Algebra
1. […] Einstein Summation Convention Look at the formulas we were using yesterday. There’s a lot of summations in there, and a lot of big sigmas. Those get really tiring to […]
Pingback by The Einstein Summation Convention « The Unapologetic Mathematician | May 21, 2008 | Reply
2. […] Matrices II With the summation convention firmly in hand, we continue our discussion of matrices. […]
Pingback by Matrices II « The Unapologetic Mathematician | May 22, 2008 | Reply
3. […] compose two morphisms by the process of matrix multiplication. If is an matrix in and is a matrix in , then their product is a matrix in (remember the […]
Pingback by The Category of Matrices I « The Unapologetic Mathematician | June 2, 2008 | Reply
4. […] satisfies ), we construct the column vector (here ). But we’ve already established that matrix multiplication represents composition of linear transformations. Further, it’s straightforward to see that the linear transformation corresponding to a […]
Pingback by The Category of Matrices III « The Unapologetic Mathematician | June 23, 2008 | Reply
5. […] vector space comes equipped with a basis , where has a in the th place, and elsewhere. And so we can write any such transformation as an […]
Pingback by The General Linear Groups « The Unapologetic Mathematician | October 20, 2008 | Reply
6. […] Okay, back to linear algebra and inner product spaces. I want to look at the matrix of a linear map between finite-dimensional inner product […]
Pingback by Matrix Elements « The Unapologetic Mathematician | May 29, 2009 | Reply
7. […] the space of -tuples of complex numbers — and that linear transformations are described by matrices. Composition of transformations is reflected in matrix multiplication. That is, for every […]
Pingback by Some Review « The Unapologetic Mathematician | September 8, 2010 | Reply |
# Energy of Fermi Gas $T>0$
I'm trying to plot $$\frac{E(T)}{N\epsilon_F}$$ vs $$\frac{T}{T_F}$$
I know that the total energy comes from $$E(T) = \int_{0}^{\infty} \frac{3}{2}\frac{N}{\epsilon_F}\left(\frac{\epsilon}{\epsilon_F}\right)^{1/2} \frac{\epsilon}{e^{\beta(\epsilon-\mu)}+1} d\epsilon$$
I already have the values for $$\frac{\mu}{\epsilon_F}$$ vs $$\frac{T}{T_F}$$
The question is how to leave the integral in terms of $$\frac{T}{T_F}$$ to plot.
The plot should look like this.
• What is the issue? Can't you just replace $\beta \mu = \mu / (kT)$ by $\mu / \epsilon_F \times T_F/T$ and $\beta \epsilon = \epsilon/(k T)$ by $\epsilon/\epsilon_F \times T_F/T$? – QuantumApple Apr 27 at 13:11
• @QuantumApple I don't have the value for $\epsilon_F$, that's why I'm plotting $\frac{E(T)}{N\epsilon_F}$, so it'd work in the first substitution you propose, but not in the second one – phy_research Apr 27 at 13:21
If you do the change of variable $$x = \epsilon/\epsilon_F$$, everything should nicely come adimensioned in the end:
$$\frac{E(T)}{N\epsilon_F} = \frac{3}{2} \int_{0}^{+\infty} (\frac{\epsilon}{\epsilon_F})^{1/2} \frac{\epsilon/\epsilon_F}{e^{-\frac{\mu}{\epsilon_F}\frac{T_F}{T}+\frac{\epsilon}{\epsilon_F}\frac{T_F}{T}}+1} d\left(\frac{\epsilon}{\epsilon_F} \right) = \frac{3}{2} \int_{0}^{+\infty} \frac{x^{3/2}}{e^{-\frac{\mu}{\epsilon_F}\frac{T_F}{T}+x\frac{T_F}{T}}+1} dx$$
At $$T = 0$$, $$\mu = \epsilon_F$$ so that the Fermi-Dirac function is $$1$$ for $$x < 1$$ and $$0$$ for $$x > 1$$, such that the integral reduces to the integral of $$x^{3/2}$$ from $$0$$ to $$1$$ which is $$2/5$$, yielding the classical result $$E(T=0) = \frac{3}{5} N \epsilon_F$$.
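For the actual plot, a small numerical sketch along these lines should work; the Sommerfeld expression used here for $$\mu/\epsilon_F$$ is only a stand-in for the tabulated values mentioned in the question, and all names are placeholders:

```python
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt

t_vals = np.linspace(0.05, 2.0, 50)                  # t = T/T_F
mu_over_ef = 1.0 - (np.pi**2 / 12) * t_vals**2       # replace with your mu/eps_F data

def energy_ratio(t, mu):
    # E/(N eps_F) = 3/2 * integral of x^{3/2} / (exp((x - mu)/t) + 1) dx
    integrand = lambda x: x**1.5 / (np.exp((x - mu) / t) + 1.0)
    val, _ = quad(integrand, 0.0, np.inf)
    return 1.5 * val

E = [energy_ratio(t, mu) for t, mu in zip(t_vals, mu_over_ef)]
plt.plot(t_vals, E)
plt.xlabel(r"$T/T_F$")
plt.ylabel(r"$E/(N\,\epsilon_F)$")
plt.show()
```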
• That's a clever variable change, worked perfectly, thanks! – phy_research Apr 27 at 14:51
• Also, I've just noticed but your plot seems off. To my knowledge, the red curve should go to $0.6$ when $T \to 0$. Is this normal? – QuantumApple Apr 27 at 17:10 |
# Properties
Label: 2.13.ah_bm
Base Field: $\F_{13}$
Dimension: $2$
$p$-rank: $2$
Principally polarizable
Does not contain a Jacobian
## Invariants
Base field: $\F_{13}$
Dimension: $2$
Weil polynomial: $( 1 - 4 x + 13 x^{2} )( 1 - 3 x + 13 x^{2} )$
Frobenius angles: $\pm0.312832958189$, $\pm0.363422825076$
Angle rank: $2$ (numerical)
## Newton polygon
This isogeny class is ordinary.
$p$-rank: $2$ Slopes: $[0, 0, 1, 1]$
## Point counts
This isogeny class is principally polarizable, but does not contain a Jacobian.
| $r$ | $A(\F_{q^r})$ |
| --- | --- |
| 1 | 110 |
| 2 | 33660 |
| 3 | 5239520 |
| 4 | 823996800 |
| 5 | 137389054550 |
| 6 | 23260576584960 |
| 7 | 3936690829551590 |
| 8 | 665461638360921600 |
| 9 | 112458674518542740960 |
| 10 | 19005016038068177934300 |
| $r$ | $C(\F_{q^r})$ |
| --- | --- |
| 1 | 7 |
| 2 | 197 |
| 3 | 2380 |
| 4 | 28849 |
| 5 | 370027 |
| 6 | 4819034 |
| 7 | 62737591 |
| 8 | 815785921 |
| 9 | 10604807500 |
| 10 | 137858870957 |
## Decomposition
1.13.ae $\times$ 1.13.ad
## Base change
This is a primitive isogeny class. |
## Friday, February 21, 2020
### Shiv Prakash Patel (IIT, Delhi)
Title: Multiplicity one theorems in representation theory
Speaker: Shiv Prakash Patel (IIT, Delhi)
Date: Tuesday, February 25, 2020
Time: 4:00 pm
Venue: Seminar Room, School of Physical Sciences (SPS)
ABSTRACT
A representation $\pi$ of $G$ is called multiplicity free if the dimension of the vector space $Hom_{G}(\pi, \sigma)$ is at most 1 for all irreducible representations $\sigma$ of $G$. Let $H$ be a subgroup of $G$ and $\psi$ an irreducible representation of $H$. The triple $(G,H, \psi)$ is called a Gelfand triple if the induced representation $Ind_{H}^{G} (\psi)$ of $G$ is multiplicity free. There is a geometric way to prove that some triple is a Gelfand triple, which is called Gelfand's trick. Multiplicity free representations play an important role in representation theory and number theory, e.g. the use of Whittaker models. We will discuss Gelfand's trick and its use in a simple case for Whittaker models of representations of the group $GL_{n}(R)$, where $R$ is a finite local ring.
## Monday, February 10, 2020
### Manish Mishra, IISER, Pune
Title: A generalization of the 3d distance theorem
Speaker: Manish Mishra, IISER Pune
Where: Seminar Room, School of Physical Sciences (SPS), C V Raman Marg, JNU
When: Tuesday, February 11, 2020, 4 PM
Abstract:
Let $P$ be a positive rational number. Call a function $f:\mathbb{R} \to \mathbb{R}$ a function with the finite gaps property mod $P$ if the following holds: for any positive irrational $\alpha$ and positive integer $M$, when the values of $f(n\alpha)$, $1 \leq n \leq M$, are inserted mod $P$ into the interval $[0,P)$ and arranged in increasing order, the number of distinct gaps between successive terms is bounded by a constant $k_f$ which depends only on $f$. In this note, we prove a generalization of the 3d distance theorem of Chung and Graham. As a consequence, we show that a piecewise linear map with rational slopes and having only finitely many non-differentiable points has the finite gaps property mod $P$. We also show that if $f$ is the distance to the nearest integer function, then it has the finite gaps property mod $1$ with $k_f \leq 6$. This is joint work with Amy Philip.
Arrive Function
Recommended Posts
Hi, so I have this code for my AI's Arrive function.
Vector2D EGsoldier::Arrive(Vector2D TargetPos) //Need to improve on this function
{
    Vector2D ToTarget = TargetPos - Pos();
    double dist = ToTarget.Length();
    if( dist > 0 )
    {
        double speed = dist/( 1.3 );
        speed = min(speed, e_dMaxSpeed);
        Vector2D DesiredVelocity = ToTarget*speed/dist;
        Vector2D FinalVelocity = DesiredVelocity - Velocity();
        return FinalVelocity;
    }
    return Vector2D(0,0);
}
It returns the 2D vector where my EGsoldier should go. The problem is, that if there is an obstacle in the way, the soldier has no idea what to do (and subsequently runs into the obstacle and stays there). Can someone help me figure out how to improve this function so that the EGsoldier can get to his destination w/o running into an object?
Share on other sites
Create a vector that represents the difference between the obstacle.pos and the current unit.pos. This should be normalized and scaled to an appropriate size.
Then add this vector to the current unit.pos.
You may want to add a random number to the new direction vector before adding it, since if the collision is head-on, it might get stuck (pinball-style) trying to repel in the directly opposite direction.
Share on other sites
Quote:
Original post by bballmitchCan someone help me figure out how to improve this function so that the EGsoldier can get to his destination w/o running into an object?
Rather than change this function, design a function for 'Avoid', which will generate a movement vector to avoid the object. The resulting movement should be a weighted sum of the outputs of the Arrive and Avoid functions. This weight would be changed depending on the proximity of the object. So, when the soldier is moving toward the object but is still some distance from it, there would be a small perturbation to the path to take it around the object. If the object suddently appeared right in front of it, the Avoid behaviour would dominate and the Arrive behaviour would be minimised, so that the soldier first avoided the object and then went back to heading to the goal. Make sense?
I suggest you look at the full set of steering behaviours available for reactive pathing.
Cheers,
Timkin |
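A rough sketch of that weighted-sum idea (the Obstacle type, the detection range member and the blending scheme are invented for illustration; only Arrive, Pos, Vector2D and e_dMaxSpeed come from the original code; needs <vector> and <algorithm>):

struct Obstacle
{
    Vector2D Pos;
    double Radius;
};

Vector2D EGsoldier::Steer(Vector2D TargetPos, const std::vector<Obstacle>& obstacles)
{
    Vector2D arrive = Arrive(TargetPos);
    Vector2D avoid(0, 0);
    double nearness = 0.0;                       // 0 = path clear, 1 = touching an obstacle

    for (const Obstacle& ob : obstacles)
    {
        Vector2D away = Pos() - ob.Pos;          // points from the obstacle toward us
        double dist = away.Length();
        double range = ob.Radius + m_dDetectionRange;   // m_dDetectionRange: assumed member
        if (dist > 0 && dist < range)
        {
            double w = 1.0 - dist / range;       // closer obstacle => larger weight
            avoid = avoid + away * (w / dist);   // normalised push, scaled by proximity
            nearness = std::max(nearness, w);
        }
    }

    // Weighted sum: avoidance dominates when an obstacle is close,
    // plain arrival dominates when the path is clear.
    return arrive * (1.0 - nearness) + avoid * (e_dMaxSpeed * nearness);
}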
# Symplectic algorithm
1. Feb 1, 2006
### Heimdall
Hi,
I have this hamiltonian
$$K = \frac{1}{2}(P_1^2 + P_2^2) + P_1 Q_2 - P_2 Q_1 - (\frac{1-\mu}{R_1}+\frac{\mu}{R_2})$$
(see this link if there is any latex problem)
with non-separable variables.
I am looking for a symplectic algorithm (runge kutta would be good) to solve the correspondant first order equations, but I can't find it on the internet.
Moreover, it seems difficult, and I don't understand well enough what a symplectic algorithm is to code one myself.
thx a lot.
Last edited: Feb 1, 2006
2. Feb 2, 2006
### saltydog
Well, for starters are these those:
$$\frac{dP_1}{dt}=\frac{\partial K}{\partial P_2},\quad \frac{dP_2}{dt}=-\frac{\partial K}{\partial P_1}$$
$$\frac{dQ_1}{dt}=\frac{\partial K}{\partial Q_2},\quad \frac{dQ_2}{dt}=-\frac{\partial K}{\partial Q_1}$$
$$\frac{dR_1}{dt}=\frac{\partial K}{\partial R_2},\quad \frac{dR_2}{dt}=-\frac{\partial K}{\partial R_1}$$
What then are the initial conditions?
3. Feb 2, 2006
### saltydog
Know what, I'm just gonna' run with this. Someone in the group just recently said, "you don't do mathematics by just staring at the problem and hope it comes to you".
Anyway, Mathematica has a method called "SymplecticPartitionedRungeKutta". I shall wish to try this method once I figure out how symplectic methods differ from just regular numerical methods as I advocate learning the technique (in general) before using it in some math package. Perhaps I should just start with a pendulum first. Someone else said, "sometimes you have to take two steps backwards in order to move forwards". Might take a while.
4. Feb 3, 2006
### saltydog
For the record, my equations posted above are totally incorrect and stem from my lack of familiarity with Hamiltonian systems in general. But that's ok cus' I ain't staring and I'm not standing still. So it's 3:00a and I need to get up in a few hours to go to work. Well, that would mean I'm already sleeping but I'm fiddling with this instead. Whatever. Anyway I've learned that the Hamiltonian is conserved in Hamiltonian systems and that is the motivation for seeking symplectic methods in their numerical solution: most numerical methods do not preserve the constancy of the hamiltonian. Methods which do are called symplectic. Anyway, I have some references to work with and will pick it up later. Think I'll try and get in an hour or 2 of sleep. :zzz:
5. Feb 9, 2006
### saltydog
Some time ago I worked on what I considered a very beautiful problem (well ugly if you're a . . . nevermind). It's called an integral invariant of Poincare'. At the time, I didn't understand the connection to Hamiltonian systems. If you allow a set of initial points to evolve according to a Hamiltonian flow (run all the Hamiltonian differential equations for some time on a set of initial points and then compare the initial set of points to the set of final points), then a certain "measure" of the points is preserved in the flow. This is compactly expressed by the following integral invariant:
$$\sum_{i=1}^{n}\oint_{\omega_i} p_i\,dq_i=C$$
That is, the sum of areas projected onto the set of $(p_i,q_i)$ planes is constant. This is shown graphically in the attached plot. I wish to verify this with a real set of Hamiltonian equations but I digress.
Well, symplectic numerical methods are designed to preserve this measure. Consider first the undamped pendulum:
$$\frac{dq}{dt}=p$$
$$\frac{dp}{dt}=-Sin(q)$$
The Hamiltonian function is:
$$H(q,p)=1/2 p^2-Cos(q)$$
The standard (non-symplectic) Euler method for this system would be:
$$q_{k+1}=q_k+hp_k$$
$$p_{k+1}=p_k-hSin(q_k)$$
A phase portrait (q vs. p) for 100 seconds is shown in the second plot. It's dissipating and not reflective of the actual dynamics of an undamped pendulum. A plot of the hamiltonian function would reveal a curve with a non-constant slope.
We can slightly modify this method and convert it to a symplectic form as follows:
$$q_{k+1}=q_k+hp_{k+1}$$
$$p_{k+1}=p_k-hSin(q_k)$$
The phase-portrait of this numerical simulation for 100 seconds is shown in the 3rd plot and reflects the actual dynamics of an undamped pendulum (back and forth and never losing energy; that is, the energy remains constant and reflects the invariant measure of Hamiltonian systems). A plot of the hamiltonian function for this simulation would be a straight line with zero slope. (can we get two more spaces for plots?)
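In code, the two update rules differ by a single index (a minimal sketch; the step size and initial angle are arbitrary choices, not from the thread):

```python
import numpy as np
import matplotlib.pyplot as plt

h, steps = 0.1, 1000                  # step size and number of steps (~100 s)
q_e = np.zeros(steps + 1); p_e = np.zeros(steps + 1)   # standard Euler
q_s = np.zeros(steps + 1); p_s = np.zeros(steps + 1)   # symplectic Euler
q_e[0] = q_s[0] = 1.0                 # initial angle in radians, initial momentum 0

for k in range(steps):
    # standard Euler: both updates use the old state
    q_e[k + 1] = q_e[k] + h * p_e[k]
    p_e[k + 1] = p_e[k] - h * np.sin(q_e[k])
    # symplectic Euler: the q-update uses the freshly computed momentum
    p_s[k + 1] = p_s[k] - h * np.sin(q_s[k])
    q_s[k + 1] = q_s[k] + h * p_s[k + 1]

plt.plot(q_e, p_e, label="standard Euler (energy drifts)")
plt.plot(q_s, p_s, label="symplectic Euler (closed orbit)")
plt.xlabel("q"); plt.ylabel("p"); plt.legend(); plt.show()
```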
#### Attached Files:
(three attached plots: the projected-areas sketch, the standard Euler phase portrait, and symplectic euler.JPG)
Last edited: Feb 9, 2006 |
Okay, so this is nowhere near a complete solution, but this is as far as I got and hopefully someone else sees it from here:
Start with $n = 2$. An element of $M(2,k)$ is determined by a single $a \in \{0,\ldots,k\}$, and since we wish to avoid $a=0$ and $a=k$, it is clear that $P(2,k) = \frac{k-1}{k+1}$, which is certainly non-decreasing in $k$. Keep in mind the valid choices of $a$: these come from $[0,k] \cap \mathbb{Z}$. Observe that $[0,k]$ is the standard simplex of side $k$ in dimension $1$ and that the zero entries correspond to the boundary of this simplex.
When $n=3$, the matrices in $M(3,k)$ correspond to four integers $a_{11},a_{12},a_{21},a_{22}$ from $0,\ldots,k$ such that
• For $i = 1$ or $2$, $\sum_j a_{ij} \leq k$
• For $j = 1$ or $2$, $\sum_i a_{ij} \leq k$
These hyperplane inequalities carve out a convex region $C \subset \mathbb{R}^4$ from the cube $[0,k]^4$, and the "zero entry" cases of $M(3,k)$ are precisely the bounding faces of this region.
So, the bottom line: if a theorem establishes that the ratio $$\frac{\text{integral points on the boundary of } C}{\text{total number of integral points in } C}$$ decreases as one increases $k$, then we obtain your desired result. I don't know enough about convex polytopes to cite something here, but it sounds reasonable just from dimension considerations...
Ideally, this process would generalize to higher dimensions. An element of $M(n,k)$ has zero entries if and only if the vector of entries in the first $(n-1) \times (n-1)$ block lies in the boundary of a convex polytope carved from the cube $[0,k]^{(n-1)^2}$ by $2n-1$ hyperplanes.
# ascending chain condition
A partially ordered set $S$ (for example, a collection of subsets of a set $X$ when ordered by inclusion) satisfies the ascending chain condition, or ACC, if there does not exist an infinite strictly ascending chain $s_{1} < s_{2} < s_{3} < \cdots$ of elements of $S$.
## 9.2 POPS study
Figure 9.2 presents the D-score and DAZ distributions for the POPS cohort of children born very preterm or with very low birth weight. The distributions of the D-score and DAZ are similar to those found in the SMOCC study.
Since the D-scores are calculated using the same milestones and difficulty estimates as used in the SMOCC data, the D-scores are comparable across the two studies. When the milestones differ between studies (e.g. when studies use different measurement instruments), it is still possible to calculate D-scores. This problem is a little more complicated, so we treat it in Chapter II.
The primary new complication here is the question whether it is fair to compare postnatal age of children born at term with postnatal ages of very preterm children. This section focuses on this issue in some detail.
### 9.2.1 POPS design
In 1983, the Project On Preterm and Small for Gestational Age Infants (POPS study) collected data on all 1338 infants in the Netherlands who had very preterm birth (gestational age < 32 weeks) or very low birth weight (birth weight < 1500 grams). See for details.
The POPS study determined gestational age from the best obstetric estimate, including the last menstrual period, results of pregnancy testing, and ultrasonography findings. The POPS study collected measurements on 450 children using the DDI at four visits at corrected postnatal ages of 3, 6, 12 and 24 months.
Assessment of very preterm children at the same chronological age as term children may cause over-diagnosis of developmental delay in very preterm children. Very preterm children may require additional time that allows for development equivalent to that of children born at term.
In anthropometry, it is common to correct the chronological age of children born very preterm to enable age-appropriate evaluation of growth. For example, suppose the child is born at a gestational age of 30 weeks, which is ten weeks early. A full correction would deduct ten weeks from the child’s postnatal age, and a half correction would deduct five weeks. In particular, we calculate the corrected age (in days) as:
$\mathrm{corrected\ age} = \mathrm{postnatal\ age}\mathrm{\ (days)} - f \times [280 - \mathrm{gestational\ age\ (days)}],$
where 280 is the average gestational age in days, and where we specify several alternatives for $$f$$ as 1.00 (full correction), 0.75, 0.50 (half) or 0.00 (no correction).
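In code, this correction is a one-liner (a small sketch; the function name is ours, not from the study):

```python
def corrected_age_days(postnatal_age_days, gestational_age_days, f=1.0):
    """Corrected age in days; f = 1.00 (full), 0.75, 0.50 (half) or 0.00 (no correction)."""
    return postnatal_age_days - f * (280 - gestational_age_days)

# A child born at 30 weeks (210 days of gestation), now 100 days old, fully corrected:
print(corrected_age_days(100, 30 * 7, f=1.0))   # 100 - (280 - 210) = 30 days
```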
Let’s apply the same idea to child development. Using corrected age instead of postnatal age has two consequences:
• It will affect the prior distribution for calculating the D-score;
• It will affect DAZ calculation.
We evaluate these two effects in turn.
### 9.2.3 Effect of age-adjustment on the D-score
Figure 9.3 plots the fully age-adjusted D-score against the unadjusted D-score. Any discrepancies result only from differences in the ages used in the age-dependent prior (c.f. Section 5.3.2).
All points are on or below the diagonal. Age-adjustment lowers the D-score because a preterm is “made younger” by subtracting the missed pregnancy duration, and hence the prior distribution starts at the lower point. For example, the group of red marks with D-scores between 30$$D$$ and 40$$D$$ (age not corrected) will have D-scores between 20$$D$$ and 30$$D$$ when fully corrected. Note that only the red points (with perfect scores) are affected, thus illustrating that the prior has its most significant effect on the perfect response pattern. See also Section 5.3.1. The impact of age-correction on the D-score is negligible when the child fails on one or more milestones.
### 9.2.4 Effect of no age adjustment ($$f = 0.00$$) on the DAZ
Figure 9.4 illustrates that a considerable number of D-scores fall below the −2 SD line of the reference when age is not adjusted, especially during the first year of life. The pattern suggests that the apparent slowness in development is primarily the result of being born early, and does not necessarily reflect delayed development.
### 9.2.5 Effect of full age adjustment ($$f = 1.00$$) on the DAZ
Full age correction has a notable effect on the DAZ. Figure 9.5 illustrates that the POPS children are now somewhat advanced over the reference children. We ascribe this seemingly odd finding to more prolonged exposure to sound and vision in air. Thus after age correction, development in preterms during early infancy is advanced compared to just-born babies.
Full age correction seems to overcorrect the D-score, so it is natural to try intermediate values for $$f$$ between 0 and 1.
### 9.2.6 Partial age adjustment
| Age (months) | $$f = 0$$ | $$f = 0.5$$ | $$f = 0.75$$ | $$f = 1$$ |
|---|---|---|---|---|
| 0-3 | -1.46 | -0.5 | 0.07 | 0.73 |
| 3-4 | -1.77 | -0.89 | -0.37 | 0.2 |
| 5-6 | -1.6 | -0.87 | -0.46 | 0 |
| 7-8 | -1.76 | -1.13 | -0.77 | -0.39 |
| 9-11 | -1.21 | -0.77 | -0.53 | -0.28 |
| 12-14 | -0.99 | -0.6 | -0.39 | -0.16 |
| 15-23 | -0.5 | -0.23 | -0.1 | 0.04 |
| 24+ | -0.7 | -0.49 | -0.37 | -0.24 |
Table 9.1 compares the mean DAZ under various specifications of $$f$$. The values $$f = 0.00$$ and $$f = 0.50$$ do not correct for preterm birth enough, in the sense that all signs are negative. In contrast, $$f = 1.00$$ overcorrects. The value of 0.73 is implausibly high, especially because this value is close to birth. Setting $$f = 0.75$$ seems a good compromise, in the sense that the average DAZ is close to zero in the first age interval. The average DAZ is negative at later ages. We do not know whether this genuinely reflects less than optimal development of very preterm and low birth weight children, so both $$f = 1.00$$ and $$f = 0.75$$ are suitable candidates.
### 9.2.7 Conclusions
• Compared with the general population, more very preterm children reached developmental milestones within chronological age five months when chronological age was fully corrected;
• Fewer preterm children reached the milestones when chronological age was not corrected;
• Fewer children reached the milestones when we used a correction of $$f = 0.50$$;
• Similar proportions were observed when we used $$f = 0.75$$ within the first five months after birth.
• After chronological age five months, we observed similar proportions for very preterm and full-term children when chronological age was fully corrected.
• We recommend using full age correction ($$f = 1.00$$). This advice corresponds to current practice for growth and development. As we have shown, preterms may look better in the first few months under full age-correction. If the focus of the scientific study is on the first few months, we recommend an age correction of $$f = 0.75$$. |
# RC engine sound simulator, how to install in 3yo kids rideable toy?
#### KewlCousin
Mar 8, 2022
3
So my cousins kids 3 year old birthday is coming up and, as I have a reputation for doing, I’m trying to come up with the most awesome present I can think of for the little guy. Ever since he saw my Jeep he’s been obsessed, so much that he’s having a Jeep themed birthday party. He has the John Deere rideable tractor with a trailer attached that’s his favorite toy. I began to think how cool would it be to make that thing sound like it’s got some big V8 so I started looking for engine sound simulators. All the ones I’ve found are plug and play for RC cars, and one or two for actual vehicles but those will be too loud and expensive to justify...wrong fit. The RC setup would be perfect. I’m hoping someone can give me an idea of what it would take to wire one of these into his tractor. I’ve got a fairly substantial mechanical background with electric unfortunately being my weaker side but I’m still vastly more proficient than most are a with it. His party is in a few weeks so I’m trying to figure out if this is viable or if I need to start thinking in another direction. Any help greatly appreciated! Thanks in advance!
#### PETERDECO
Dec 19, 2019
239
Welcome to the forum. I did something similar at work for different sound effects. It plugs into your computer headphone jack and records 20 seconds of sound from a wav file. Perhaps you can use something shown in this link and simply record race car sounds from the internet with the microphone next to the speaker. These modules are all over ebay.
https://www.ebay.com/itm/264482674769?hash=item3d94655851:g:mmMAAOSw4c9dkcxm
#### Bluejets
Oct 5, 2014
6,196
Simple enough for an off-the-shelf unit as used in r/c but as you have already found, expensive.
Get a servo tester and mechanically connect to the throttle pedal, signal from this goes to the r/c input on the sound unit.
Most of the simple diy ones are basically like hitting on a tin can with a matchstick and rather disappointing.
Arduino micro with an sd card and an amplifier, code and how to on the internet, most of the parts come as "modules" off Ebay etc.for a couple of dollars each.
Some are better with an stm32 micro as it has more computing power and speed.
The latter can be found here.......
Overkill for a 3 year old though......more for the second cousin.
#### KewlCousin
Mar 8, 2022
3
Thanks for the quick replies! The circuit boards look a bit beyond my speed. The main reason I thought the RC sound modules would be ideal is that they hook to the throttle controls, so you get an idle sound and a revving engine in sync with throttle application. I’m wondering if I could just splice the controller into the ride-on’s gas pedal wiring, and the power end into a standard RC car battery. If so, how would I test which wire does what? The photos of the unit I was looking at look like it had 3 wires that were supposed to hook to the throttle (servo?...is that the right word?)
I wouldn’t be surprised if there’s tons of nuance I’m missing here as I’m mainly a mechanical guy. Not sure if hooking RC electronics to a toy that runs on a 12v battery could cause problems? I’m sure some of you guys can tell me a dozen reasons why this might not work out, and I’m all ears. But I’m hoping I’m just overthinking this and it’s not that complicated.
#### Bluejets
Oct 5, 2014
6,196
As I already said, the input would be an rc signal. You can get an rc signal from a $2 servo tester with the pot mechanically linked to your existing throttle. This signal is a pwm signal, not a voltage level. Therein are the 3 wires: 4.8v positive, negative and signal. The latter from the servo tester will give 1.5ms for idle, 1.0ms for full reverse and 2ms for full forward speed, plus anywhere in between those signal levels corresponding to speed. If your supply is 12v then you will need a buck converter between the battery and your servo tester; set it to around 5v and it should be fine. These are also a $2 device from Ebay etc.
#### KewlCousin
Mar 8, 2022
3
Wow, thanks. Most of that is foreign to me so it seems like I need to do some more research on these components and how they work, but it sounds like what I’d need.
#### Bluejets
Oct 5, 2014
6,196
Probably the easiest solution for you is to get an off-the-shelf bluetooth amplifier speaker.
Mate of mine did it with one of his diesel locos as an easy fix for the horn.
Some have done it in miniature on small electric trains as well.
See if I can find a link........
Try this for an idea for starters........
#### Audioguru
Sep 24, 2016
3,650
#### Bluejets
Oct 5, 2014
6,196
Come on....they're not THAT loud.
#### Audioguru
Sep 24, 2016
3,650
You might make the sound level too loud.
Formula One race cars this year will have a sound system to make their little V6 hybrid engines sound like a huge V12.
Perhaps the most fundamental differential operator on Euclidean space ${{\bf R}^d}$ is the Laplacian
$\displaystyle \Delta := \sum_{j=1}^d \frac{\partial^2}{\partial x_j^2}.$
The Laplacian is a linear translation-invariant operator, and as such is necessarily diagonalised by the Fourier transform
$\displaystyle \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx.$
Indeed, we have
$\displaystyle \widehat{\Delta f}(\xi) = - 4 \pi^2 |\xi|^2 \hat f(\xi)$
for any suitably nice function ${f}$ (e.g. in the Schwartz class; alternatively, one can work in very rough classes, such as the space of tempered distributions, provided of course that one is willing to interpret all operators in a distributional or weak sense).
Because of this explicit diagonalisation, it is a straightforward manner to define spectral multipliers ${m(-\Delta)}$ of the Laplacian for any (measurable, polynomial growth) function ${m: [0,+\infty) \rightarrow {\bf C}}$, by the formula
$\displaystyle \widehat{m(-\Delta) f}(\xi) := m( 4\pi^2 |\xi|^2 ) \hat f(\xi).$
(The presence of the minus sign in front of the Laplacian has some minor technical advantages, as it makes ${-\Delta}$ positive semi-definite. One can also define spectral multipliers more abstractly from general functional calculus, after establishing that the Laplacian is essentially self-adjoint.) Many of these multipliers are of importance in PDE and analysis, such as the fractional derivative operators ${(-\Delta)^{s/2}}$, the heat propagators ${e^{t\Delta}}$, the (free) Schrödinger propagators ${e^{it\Delta}}$, the wave propagators ${e^{\pm i t \sqrt{-\Delta}}}$ (or ${\cos(t \sqrt{-\Delta})}$ and ${\frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}}$, depending on one’s conventions), the spectral projections ${1_I(\sqrt{-\Delta})}$, the Bochner-Riesz summation operators ${(1 + \frac{\Delta}{4\pi^2 R^2})_+^\delta}$, or the resolvents ${R(z) := (-\Delta-z)^{-1}}$.
Each of these families of multipliers are related to the others, by means of various integral transforms (and also, in some cases, by analytic continuation). For instance:
1. Using the Laplace transform, one can express (sufficiently smooth) multipliers in terms of heat operators. For instance, using the identity
$\displaystyle \lambda^{s/2} = \frac{1}{\Gamma(-s/2)} \int_0^\infty t^{-1-s/2} e^{-t\lambda}\ dt$
(using analytic continuation if necessary to make the right-hand side well-defined), with ${\Gamma}$ being the Gamma function, we can write the fractional derivative operators in terms of heat kernels:
$\displaystyle (-\Delta)^{s/2} = \frac{1}{\Gamma(-s/2)} \int_0^\infty t^{-1-s/2} e^{t\Delta}\ dt. \ \ \ \ \ (1)$
2. Using analytic continuation, one can connect heat operators ${e^{t\Delta}}$ to Schrödinger operators ${e^{it\Delta}}$, a process also known as Wick rotation. Analytic continuation is a notoriously unstable process, and so it is difficult to use analytic continuation to obtain any quantitative estimates on (say) Schrödinger operators from their heat counterparts; however, this procedure can be useful for propagating identities from one family to another. For instance, one can derive the fundamental solution for the Schrödinger equation from the fundamental solution for the heat equation by this method.
3. Using the Fourier inversion formula, one can write general multipliers as integral combinations of Schrödinger or wave propagators; for instance, if ${z}$ lies in the upper half plane ${{\bf H} := \{ z \in {\bf C}: \hbox{Im} z > 0 \}}$, one has
$\displaystyle \frac{1}{x-z} = i\int_0^\infty e^{-itx} e^{itz}\ dt$
for any real number ${x}$, and thus we can write resolvents in terms of Schrödinger propagators:
$\displaystyle R(z) = i\int_0^\infty e^{it\Delta} e^{itz}\ dt. \ \ \ \ \ (2)$
In a similar vein, if ${k \in {\bf H}}$, then
$\displaystyle \frac{1}{x^2-k^2} = \frac{i}{k} \int_0^\infty \cos(tx) e^{ikt}\ dt$
for any ${x>0}$, so one can also write resolvents in terms of wave propagators:
$\displaystyle R(k^2) = \frac{i}{k} \int_0^\infty \cos(t\sqrt{-\Delta}) e^{ikt}\ dt. \ \ \ \ \ (3)$
4. Using the Cauchy integral formula, one can express (sufficiently holomorphic) multipliers in terms of resolvents (or limits of resolvents). For instance, if ${t > 0}$, then from the Cauchy integral formula (and Jordan’s lemma) one has
$\displaystyle e^{itx} = \frac{1}{2\pi i} \lim_{\epsilon \rightarrow 0^+} \int_{\bf R} \frac{e^{ity}}{y-x+i\epsilon}\ dy$
for any ${x \in {\bf R}}$, and so one can (formally, at least) write Schrödinger propagators in terms of resolvents:
$\displaystyle e^{-it\Delta} = - \frac{1}{2\pi i} \lim_{\epsilon \rightarrow 0^+} \int_{\bf R} e^{ity} R(y+i\epsilon)\ dy. \ \ \ \ \ (4)$
5. The imaginary part of ${\frac{1}{\pi} \frac{1}{x-(y+i\epsilon)}}$ is the Poisson kernel ${\frac{\epsilon}{\pi} \frac{1}{(y-x)^2+\epsilon^2}}$, which is an approximation to the identity. As a consequence, for any reasonable function ${m(x)}$, one has (formally, at least)
$\displaystyle m(x) = \lim_{\epsilon \rightarrow 0^+} \frac{1}{\pi} \int_{\bf R} (\hbox{Im} \frac{1}{x-(y+i\epsilon)}) m(y)\ dy$
which leads (again formally) to the ability to express arbitrary multipliers in terms of imaginary (or skew-adjoint) parts of resolvents:
$\displaystyle m(-\Delta) = \lim_{\epsilon \rightarrow 0^+} \frac{1}{\pi} \int_{\bf R} (\hbox{Im} R(y+i\epsilon)) m(y)\ dy. \ \ \ \ \ (5)$
Among other things, this type of formula (with ${-\Delta}$ replaced by a more general self-adjoint operator) is used in the resolvent-based approach to the spectral theorem (by using the limiting imaginary part of resolvents to build spectral measure). Note that one can also express ${\hbox{Im} R(y+i\epsilon)}$ as ${\frac{1}{2i} (R(y+i\epsilon) - R(y-i\epsilon))}$.
Remark 1 The ability of heat operators, Schrödinger propagators, wave propagators, or resolvents to generate other spectral multipliers can be viewed as a sort of manifestation of the Stone-Weierstrass theorem (though with the caveat that the spectrum of the Laplacian is non-compact and so the Stone-Weierstrass theorem does not directly apply). Indeed, observe the *-algebra type properties
$\displaystyle e^{s\Delta} e^{t\Delta} = e^{(s+t)\Delta}; \quad (e^{s\Delta})^* = e^{s\Delta}$
$\displaystyle e^{is\Delta} e^{it\Delta} = e^{i(s+t)\Delta}; \quad (e^{is\Delta})^* = e^{-is\Delta}$
$\displaystyle e^{is\sqrt{-\Delta}} e^{it\sqrt{-\Delta}} = e^{i(s+t)\sqrt{-\Delta}}; \quad (e^{is\sqrt{-\Delta}})^* = e^{-is\sqrt{-\Delta}}$
$\displaystyle R(z) R(w) = \frac{R(w)-R(z)}{z-w}; \quad R(z)^* = R(\overline{z}).$
Because of these relationships, it is possible (in principle, at least), to leverage one’s understanding one family of spectral multipliers to gain control on another family of multipliers. For instance, the fact that the heat operators ${e^{t\Delta}}$ have non-negative kernel (a fact which can be seen from the maximum principle, or from the Brownian motion interpretation of the heat kernels) implies (by (1)) that the fractional integral operators ${(-\Delta)^{-s/2}}$ for ${s>0}$ also have non-negative kernel. Or, the fact that the wave equation enjoys finite speed of propagation (and hence that the wave propagators ${\cos(t\sqrt{-\Delta})}$ have distributional convolution kernel localised to the ball of radius ${|t|}$ centred at the origin), can be used (by (3)) to show that the resolvents ${R(k^2)}$ have a convolution kernel that is essentially localised to the ball of radius ${O( 1 / |\hbox{Im}(k)| )}$ around the origin.
In this post, I would like to continue this theme by using the resolvents ${R(z) = (-\Delta-z)^{-1}}$ to control other spectral multipliers. These resolvents are well-defined whenever ${z}$ lies outside of the spectrum ${[0,+\infty)}$ of the operator ${-\Delta}$. In the model three-dimensional case ${d=3}$, they can be defined explicitly by the formula
$\displaystyle R(k^2) f(x) = \int_{{\bf R}^3} \frac{e^{ik|x-y|}}{4\pi |x-y|} f(y)\ dy$
whenever ${k}$ lives in the upper half-plane ${\{ k \in {\bf C}: \hbox{Im}(k) > 0 \}}$, ensuring the absolute convergence of the integral for test functions ${f}$. (In general dimension, explicit formulas are still available, but involve Bessel functions. But asymptotically at least, and ignoring higher order terms, one simply replaces ${\frac{e^{ik|x-y|}}{4\pi |x-y|}}$ by ${\frac{e^{ik|x-y|}}{c_d |x-y|^{d-2}}}$ for some explicit constant ${c_d}$.) It is an instructive exercise to verify that this resolvent indeed inverts the operator ${-\Delta-k^2}$, either by using Fourier analysis or by Green’s theorem.
Henceforth we restrict attention to three dimensions ${d=3}$ for simplicity. One consequence of the above explicit formula is that for positive real ${\lambda > 0}$, the resolvents ${R(\lambda+i\epsilon)}$ and ${R(\lambda-i\epsilon)}$ tend to different limits as ${\epsilon \rightarrow 0}$, reflecting the jump discontinuity in the resolvent function at the spectrum; as one can guess from formulae such as (4) or (5), such limits are of interest for understanding many other spectral multipliers. Indeed, for any test function ${f}$, we see that
$\displaystyle \lim_{\epsilon \rightarrow 0^+} R(\lambda+i\epsilon) f(x) = \int_{{\bf R}^3} \frac{e^{i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy$
and
$\displaystyle \lim_{\epsilon \rightarrow 0^+} R(\lambda-i\epsilon) f(x) = \int_{{\bf R}^3} \frac{e^{-i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy.$
Both of these functions
$\displaystyle u_\pm(x) := \int_{{\bf R}^3} \frac{e^{\pm i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy$
solve the Helmholtz equation
$\displaystyle (-\Delta-\lambda) u_\pm = f, \ \ \ \ \ (6)$
but have different asymptotics at infinity. Indeed, if ${\int_{{\bf R}^3} f(y)\ dy = A}$, then we have the asymptotic
$\displaystyle u_\pm(x) = \frac{A e^{\pm i \sqrt{\lambda}|x|}}{4\pi|x|} + O( \frac{1}{|x|^2}) \ \ \ \ \ (7)$
as ${|x| \rightarrow \infty}$, leading also to the Sommerfeld radiation condition
$\displaystyle u_\pm(x) = O(\frac{1}{|x|}); \quad (\partial_r \mp i\sqrt{\lambda}) u_\pm(x) = O( \frac{1}{|x|^2}) \ \ \ \ \ (8)$
where ${\partial_r := \frac{x}{|x|} \cdot \nabla_x}$ is the outgoing radial derivative. Indeed, one can show using an integration by parts argument that ${u_\pm}$ is the unique solution of the Helmholtz equation (6) obeying (8) (see below). ${u_+}$ is known as the outward radiating solution of the Helmholtz equation (6), and ${u_-}$ is known as the inward radiating solution. Indeed, if one views the function ${u_\pm(t,x) := e^{-i\lambda t} u_\pm(x)}$ as a solution to the inhomogeneous Schrödinger equation
$\displaystyle (i\partial_t + \Delta) u_\pm = - e^{-i\lambda t} f$
and using the de Broglie law that a solution to such an equation with wave number ${k \in {\bf R}^3}$ (i.e. resembling ${A e^{i k \cdot x}}$ for some amplitude ${A}$) should propagate at (group) velocity ${2k}$, we see (heuristically, at least) that the outward radiating solution will indeed propagate radially away from the origin at speed ${2\sqrt{\lambda}}$, while the inward radiating solution propagates inward at the same speed.
There is a useful quantitative version of the convergence
$\displaystyle R(\lambda \pm i\epsilon) f \rightarrow u_\pm, \ \ \ \ \ (9)$
known as the limiting absorption principle:
Theorem 1 (Limiting absorption principle) Let ${f}$ be a test function on ${{\bf R}^3}$, let ${\lambda > 0}$, and let ${\sigma > 0}$. Then one has
$\displaystyle \| R(\lambda \pm i\epsilon) f \|_{H^{0,-1/2-\sigma}({\bf R}^3)} \leq C_\sigma \lambda^{-1/2} \|f\|_{H^{0,1/2+\sigma}({\bf R}^3)}$
for all ${\epsilon > 0}$, where ${C_\sigma > 0}$ depends only on ${\sigma}$, and ${H^{0,s}({\bf R}^3)}$ is the weighted norm
$\displaystyle \|f\|_{H^{0,s}({\bf R}^3)} := \| \langle x \rangle^s f \|_{L^2_x({\bf R}^3)}$
and ${\langle x \rangle := (1+|x|^2)^{1/2}}$.
This principle allows one to extend the convergence (9) from test functions ${f}$ to all functions in the weighted space ${H^{0,1/2+\sigma}}$ by a density argument (though the radiation condition (8) has to be adapted suitably for this scale of spaces when doing so). The weighted space ${H^{0,-1/2-\sigma}}$ on the left-hand side is optimal, as can be seen from the asymptotic (7); a duality argument similarly shows that the weighted space ${H^{0,1/2+\sigma}}$ on the right-hand side is also optimal.
We prove this theorem below the fold. As observed long ago by Kato (and also reproduced below), this estimate is equivalent (via a Fourier transform in the spectral variable ${\lambda}$) to a useful estimate for the free Schrödinger equation known as the local smoothing estimate, which in particular implies the well-known RAGE theorem for that equation; it also has similar consequences for the free wave equation. As we shall see, it also encodes some spectral information about the Laplacian; for instance, it can be used to show that the Laplacian has no eigenvalues, resonances, or singular continuous spectrum. These spectral facts are already obvious from the Fourier transform representation of the Laplacian, but the point is that the limiting absorption principle also applies to more general operators for which the explicit diagonalisation afforded by the Fourier transform is not available. (Igor Rodnianski and I are working on a paper regarding this topic, of which I hope to say more about soon.)
In order to illustrate the main ideas and suppress technical details, I will be a little loose with some of the rigorous details of the arguments, and in particular will be manipulating limits and integrals at a somewhat formal level.
A few days ago, I found myself needing to use the Fredholm alternative in functional analysis:
Theorem 1 (Fredholm alternative) Let ${X}$ be a Banach space, let ${T: X \rightarrow X}$ be a compact operator, and let ${\lambda \in {\bf C}}$ be non-zero. Then exactly one of the following statements hold:
• (Eigenvalue) There is a non-trivial solution ${x \in X}$ to the equation ${Tx = \lambda x}$.
• (Bounded resolvent) The operator ${T-\lambda}$ has a bounded inverse ${(T-\lambda)^{-1}}$ on ${X}$.
Among other things, the Fredholm alternative can be used to establish the spectral theorem for compact operators. A hypothesis such as compactness is necessary; the shift operator ${U}$ on ${\ell^2({\bf Z})}$, for instance, has no eigenfunctions, but ${U-z}$ is not invertible for any unit complex number ${z}$. The claim is also false when ${\lambda=0}$; consider for instance the multiplication operator ${Tf(n) := \frac{1}{n} f(n)}$ on ${\ell^2({\bf N})}$, which is compact and has no eigenvalue at zero, but is not invertible.
It had been a while since I had studied the spectral theory of compact operators, and I found that I could not immediately reconstruct a proof of the Fredholm alternative from first principles. So I set myself the exercise of doing so. I thought that I had managed to establish the alternative in all cases, but as pointed out in comments, my argument is restricted to the case where the compact operator ${T}$ is approximable, which means that it is the limit of finite rank operators in the uniform topology. Many Banach spaces (and in particular, all Hilbert spaces) have the approximation property that implies (by a result of Grothendieck) that all compact operators on that space are almost finite rank. For instance, if ${X}$ is a Hilbert space, then any compact operator is approximable, because any compact set can be approximated by a finite-dimensional subspace, and in a Hilbert space, the orthogonal projection operator to a subspace is always a contraction. (In more general Banach spaces, finite-dimensional subspaces are still complemented, but the operator norm of the projection can be large.) Unfortunately, there are examples of Banach spaces for which the approximation property fails; the first such examples were discovered by Enflo, and a subsequent paper of by Alexander demonstrated the existence of compact operators in certain Banach spaces that are not approximable.
I also found out that this argument was essentially also discovered independently by by MacCluer-Hull and by Uuye. Nevertheless, I am recording this argument here, together with two more traditional proofs of the Fredholm alternative (based on the Riesz lemma and a continuity argument respectively).
[This is a (lightly edited) repost of an old blog post of mine, which had attracted over 400 comments, and as such was becoming difficult to load; I request that people wishing to comment on that puzzle use this fresh post instead. -T]
This is one of my favorite logic puzzles, because of the presence of two highly plausible, but contradictory, solutions to the puzzle. Resolving this apparent contradiction requires very clear thinking about the nature of knowledge; but I won’t spoil the resolution here, and will simply describe the logic puzzle and its two putative solutions. (Readers, though, are welcome to discuss solutions in the comments.)
— The logic puzzle —
There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colours. Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces). If a tribesperson does discover his or her own eye color, then their religion compels them to commit ritual suicide at noon the following day in the village square for all to witness. All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth).
Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople).
One day, a blue-eyed foreigner visits to the island and wins the complete trust of the tribe.
One evening, he addresses the entire tribe to thank them for their hospitality.
However, not knowing the customs, the foreigner makes the mistake of mentioning eye color in his address, remarking “how unusual it is to see another blue-eyed person like myself in this region of the world”.
What effect, if anything, does this faux pas have on the tribe?
Note 1: For the purposes of this logic puzzle, “highly logical” means that any conclusion that can logically deduced from the information and observations available to an islander, will automatically be known to that islander.
Note 2: Bear in mind that this is a logic puzzle, rather than a description of a real-world scenario. The puzzle is not to determine whether the scenario is plausible (indeed, it is extremely implausible) or whether one can find a legalistic loophole in the wording of the scenario that allows for some sort of degenerate solution; instead, the puzzle is to determine (holding to the spirit of the puzzle, and not just to the letter) which of the solutions given below (if any) are correct, and if one solution is valid, to correctly explain why the other solution is invalid. (One could also resolve the logic puzzle by showing that the assumptions of the puzzle are logically inconsistent or not well-defined. However, merely demonstrating that the assumptions of the puzzle are highly unlikely, as opposed to logically impossible to satisfy, is not sufficient to resolve the puzzle.)
Note 3: An essentially equivalent version of the logic puzzle is also given at the xkcd web site. Many other versions of this puzzle can be found in many places; I myself heard of the puzzle as a child, though I don’t recall the precise source.
Below the fold are the two putative solutions to the logic puzzle. If you have not seen the puzzle before, I recommend you try to solve it first before reading either solution. |
# Adjunction integral for Gysin map?
Let $f : X \to Y$ be a map between compact, orientable, smooth manifolds.
Then the Gysin map is defined by requiring that $\int_X f_! \alpha \wedge \beta = \int_Y \alpha \wedge f^*(\beta)$, for forms of suitable dimension.
This looks very much like an adjunction, if the integration was replaced by Hom. I guess this is not a coincidence (or maybe I am overly excited) but I don't know how to relate these things; is there a categorification of this formula that makes sense?
I guess that somehow there would be morphisms between $k$ and $n-k$ forms, and the space of these morphisms can be naturally given a volume, which would be the volume normally associated to the integral of the wedge product. Less vaguely than that I don't know how to proceed.
• It's an adjunction in the sense of the adjoint of a map between two inner product spaces, more or less. The inner product is given by taking the cup product and integrating (which gives you zero unless the result is in the right degree). – Qiaochu Yuan Sep 12 '16 at 2:10
• First of all, you have $\int_X$ and $\int_Y$ reversed. In the case that $f$ is a fiber bundle, you can think of $f_!$ as integration over the fiber. – Ted Shifrin Sep 13 '16 at 16:54 |
# How do you evaluate cos(arcsin (1/4))?
Sep 10, 2015
$\frac{\sqrt{15}}{4} \approx 0.968$
#### Explanation:
By the fact that ${\cos}^{2} \left(\theta\right) + {\sin}^{2} \left(\theta\right) = 1$ for all $\theta$, when we know the value of $\sin \left(\theta\right)$, there are two possible corresponding values for $\cos \left(\theta\right)$, namely $\cos \left(\theta\right) = \pm \sqrt{1 - {\sin}^{2} \left(\theta\right)}$.
Next note that $\arcsin \left(x\right)$ always gives an answer between $- \frac{\pi}{2}$ and $\frac{\pi}{2}$ radians, where the cosine function is positive. Hence, $\cos \left(\arcsin \left(\frac{1}{4}\right)\right) \geq 0$.
Thus, $\cos \left(\arcsin \left(\frac{1}{4}\right)\right) = \sqrt{1 - {\sin}^{2} \left(\arcsin \left(\frac{1}{4}\right)\right)}$
$= \sqrt{1 - {\left(\frac{1}{4}\right)}^{2}} = \sqrt{1 - \frac{1}{16}} = \sqrt{\frac{15}{16}} = \frac{\sqrt{15}}{4} \approx 0.968 .$
It's also possible to solve this problem by drawing a right triangle, labeling one of the non-right angles as $\arcsin \left(\frac{1}{4}\right)$, using the Pythagorean Theorem and SOH, CAH, TOA to label and find possible side lengths and ultimately the final answer.
Jul 19, 2016
Less formal style of solution
$\frac{\sqrt{15}}{4} \approx 0.9682$ to 4 decimal places
#### Explanation:
arcsin of some value gives you the angle whose sine is that value
$\textcolor{b r o w n}{\text{sin, cos and tangent are just another way of defining ratios}}$
The value given can be used in conjunction with the properties of sine to determine a related triangle.
From this and using Pythagoras we can determine the length of the adjacent side.
${x}^{2} + {1}^{2} = {4}^{2}$
$\implies x = \sqrt{15}$
so $\cos \left(\theta\right) = \frac{x}{4} = \frac{\sqrt{15}}{4} \approx 0.9682$ to 4 decimal places |
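As a quick numerical check of this value (a small sketch):

```python
import math
print(math.cos(math.asin(1/4)))   # 0.9682458...
print(math.sqrt(15) / 4)          # same value
```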
# Calc: sum up dates in from different columns in a third column?
Hi,
I have an issue with Calc. I'm using LibreOffice version 4.3.4.1, English.
I have a column containing full dates (including time), to which I need to add 9 hours (in another column).
The first column is formatted as DD-MM-JJJJ UU:MM:SS
The second column is formatted as UU:MM
I now want to add these columns
In a third column, I am using the formula =Column1+Column2 (or "=O2+N2", next row "=O3+N3", etc.). The formulas get adapted correctly for each row.
Still, 9 hours get added, but also another day gets added! The final date is not 30-04 anymore, but 01-05, 02-05 etc., even though this cannot be explained by a 9-hour time difference.
How come? What can I do to prevent this (just 9 hours need to be added to the original date...)? Does anyone know?
Kind regards,
Nils
Are you sure that the cell showing 9:00 actually contains the numeric value 9/24 = 3/8 = 0,375?
You should be aware of the fact that "Time Of Day" and a "Duration" are very different things. Never format a duration with the code "HH:MM" or "HH:MM:SS" or similar. Always use "[HH]" or "[H]" for the first part to make sure that nothing relevant (a full day, e.g. caused by an added value of 1) is suppressed in the display.
If this was not your problem, please attach an example document demonstrating the error.
If @Lupp's comment doesn't solve the problem: I remember some issues about dates with some time zones. https://bugs.documentfoundation.org/s...
( 2015-05-22 00:30:40 +0200 )
problem solved! Thank you, this was the exact answer to my problem! It was thus indeed a formatting error, and not a bug. Again, thanks a lot!
( 2015-05-27 13:27:44 +0200 )
@Nils-EuroClix Glad to have helped. You might mark the answer as correct then.
( 2015-05-27 14:53:08 +0200 )
Not enough points to vote, yet, sorry ;)
( 2015-06-22 17:16:54 +0200 )
Don't worry!
( 2015-06-22 17:54:25 +0200 )
# 3D Isosurface Plots in Julia
How to make 3D Isosurface Plots in Julia with Plotly.
With go.Isosurface, you can plot isosurface contours of a scalar field value, which is defined on x, y and z coordinates.
#### Basic Isosurface
In this first example, we plot the isocontours of values isomin=2 and isomax=6. In addition, portions of the sides of the coordinate domains for which the value is between isomin and isomax (named the caps) are colored. Please rotate the figure to visualize both the internal surfaces and the caps surfaces on the sides.
using PlotlyJS
trace = isosurface(
x=[0,0,0,0,1,1,1,1],
y=[1,0,1,0,1,0,1,0],
z=[1,1,0,0,1,1,0,0],
value=[1,2,3,4,5,6,7,8],
isomin=2,
isomax=6,
)
plot(trace)
### Removing caps when visualizing isosurfaces
For a clearer visualization of internal surfaces, it is possible to remove the caps (color-coded surfaces on the sides of the visualization domain). Caps are visible by default.
using PlotlyJS
data = range(-5, stop=5, length=40)
X, Y, Z = mgrid(data, data, data)
values = X .* X .* 0.5 .+ Y .* Y .+ Z .* Z .* 2
plot(isosurface(
x=X[:],
y=Y[:],
z=Z[:],
value=values[:],
isomin=10,
isomax=40,
caps=attr(x_show=false, y_show=false)
)) |
# Sequence and Series
A set of numbers arranged in a definite order according to some rules is called a sequence or progression. It means that all consecutive terms must be related by some common rule or property.
Example: 1, 2, 3, 4, 5… is a sequence of consecutive natural numbers.
Similarly, 2, 4, 6, 8, 10…. is a sequence of consecutive even numbers.
The expression of the sum of a sequence is called a series.
Example: 1 + 2 + 3 + ... is a series of natural numbers.
# Arithmetic Progression
A sequence is called an arithmetic progression (A.P.) if the difference between any term and its preceding term is a constant. This constant is called the common difference of the A.P.
The common difference is found by subtracting any term of the sequence from the term next to it.
Example: 3, 5, 7, 9 ... is in arithmetic progression.
Here, 5 – 3 = 7 – 5 = 9 – 7 = 2 is the common difference.
Let ‘a’ be the first term of an A.P. and ‘d’ be the common difference, then the A.P. can be written as:
a, a + d, a + 2d, a + 3d, … and so on.
Now, we can see that the coefficient of ‘d’ is 1 in the 2nd term, 2 in the 3rd term, 3 in the 4th term. It is one less than the number of the term. So, for the nth term, coefficient of ‘d’ will be (n – 1).
Hence, we can write the nth term of an A.P. as:
Tn = a + (n – 1)d
Example
Form an A.P. where the 1st term is 5 and the common difference is 7.
Solution
Using the 1st term ‘a’ and the common difference ‘d’, the A.P. can be formed in the following way,
a, a + d, a + 2d, a + 3d, … a + (n – 1)d
Here, a = 5 and d = 7
Substituting the values, the required A.P. will be
5, 5 + 7, 5 + 2(7), 5 + 3(7), … 5 + (n – 1)7 = 5, 12, 19, 26, … 5 + (n – 1)7
Example
Find the 10th term of the progression 3, 11, 19, 27…
Solution
From the given A.P., we can see that, a = 3, d = 8 and n = 10.
The nth term of an arithmetic progression is given by,
Tn = a + (n – 1) d
Substituting the values of the terms in the above relation, we get
T10 = 3 + (10 – 1)8 ⇒ T10 = 75
Example
Find the 6th term of the A.P. 5, 10, 15, 20.
Solution
Given n = 6, a = 5 and d = 5.
Using Tn = a + (n – 1)d, we get T6 = 5 + (6 – 1)5 = 5 + 25 = 30.
Example
If 6, 4m and 5m are three consecutive terms of an A.P., find ‘m’.
Solution
Given 6, 4m and 5m are three consecutive terms in A.P.
For three consecutive terms of an A.P., twice the middle term equals the sum of the other two, so 2(4m) = 6 + 5m ⇒ 8m – 5m = 6 ⇒ 3m = 6 ⇒ m = 2.
Example
Find the common difference, if T3 = 11 and T5 = 21.
Solution
Given T3 = 11 and T5 = 21.
T5 – T3 = (a + 4d) – (a + 2d) = 2d = 21 – 11 = 10 ⇒ d = 5.
# Sum of Terms in an Arithmetic Progression
The sum of ‘n’ terms in an arithmetic progression is given by,
Sn = n/2 (a + Tn)
If we substitute the value of Tn, which is ‘a + (n – 1)d ’, in the above equation, we get
Sn = n/2 [2a + (n – 1)d]
Where, a = 1st term of progression, d = common difference and n = number of terms. Either of the above two equations can be used to calculate the sum of an A.P.
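As a quick illustration (not part of the original text), here is a small Python sketch of the nth-term and sum formulas; the sample values are taken from the worked examples nearby.

```python
def nth_term(a, d, n):
    """n-th term of an A.P. with first term a and common difference d."""
    return a + (n - 1) * d

def ap_sum(a, d, n):
    """Sum of the first n terms: S_n = n/2 * (2a + (n - 1)d)."""
    return n * (2 * a + (n - 1) * d) / 2

print(nth_term(3, 8, 10))  # 75, matching the worked example above
print(ap_sum(2, 3, 15))    # 345.0, matching the 2, 5, 8, 11, ... example below
```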
Example
Find the sum of the terms in the progression 2, 5, 8, 11 … up to 15 terms.
Solution
From the given A.P., we can see that a = 2, d = 3 and n = 15.
The sum of n terms in an arithmetic progression is given by
Substituting the values of the terms in the above relation, we get
S15 = 15/2 [2(2) + (15 – 1)3] = 15/2 × 46 = 345
Example
Find the sum of the terms in the progression 5, 10, 15, 20, 25 … 100
Solution
In the above progression 100 is the 20th term (can be found using the relation to find the nth term).
Given that, n = 20
From the given progression, we can see that a = 5 and Tn = 100.
The sum of terms in the progression can be found by using the relation,
Substituting the values, we get
S = 20/2 (5 + 100) = 10 × 105 = 1050
Note: If the sum of terms in an A.P. is known and you are asked to find the terms, then the terms must be selected in the following ways, while solving problems:
When the sum of three terms is given, the terms must be selected as (a – d ), a, (a + d)
When the sum of four terms is given, the terms must be selected as (a – 3d ), (a – d ), (a + d ), (a + 3d )
When the sum of five terms is given, the terms must be selected as (a – 2d ), (a – d ), a, (a + d ), (a + 2d )
Example
Find the arithmetic progression, the sum of whose first 3 terms is 30 and the sum of their squares is 350.
Solution
We know the sum of 3 terms.
Hence, the terms shall be taken in the form (a – d), a, (a + d).
Given that, (a – d) + a + (a + d) = 30
⇒ 3a = 30 ⇒ a = 10
Sum of the squares of above 3 terms is,
(a – d)² + a² + (a + d)² = 350
⇒ a² + d² – 2ad + a² + a² + d² + 2ad = 350
⇒ 3a² + 2d² = 350
Substituting the value of ‘a’, we get
⇒ 3(10)² + 2d² = 350 ⇒ 2d² = 350 – 300
⇒ 2d² = 50 ⇒ d² = 25
⇒ d = ± 5
Using these values of a and d, we get the terms of the A.P. as follows:
(10 – 5), 10, (10 + 5) ... = 5, 10, 15 ...
Remember:
• If we add or subtract a constant value to or from each term of an A.P., the resulting progression will also be an A.P.
• If each term of an A.P. is multiplied or divided by any constant term, the resulting progression will also be an A.P.
• If we add or subtract corresponding terms of two A.P.s, the resulting progression will also be an A.P.
• If we multiply or divide corresponding terms of two A.Ps, the resulting progression will not be an A.P.
Example
If the 12th term of an A.P. is –13 and the sum of the first four terms is 24, what is the sum of the first 10 terms?
Solution
Given that T12 = –13 and S4 = 24.
T12 = a + 11d = –13 … (i)
S4 = 4/2 [2a + 3d] = 24 ⇒ 2a + 3d = 12 … (ii)
From (i), a = –13 – 11d. Substituting in (ii): 2(–13 – 11d) + 3d = 12 ⇒ –26 – 19d = 12 ⇒ d = –2 and a = 9.
S10 = 10/2 [2(9) + (10 – 1)(–2)] = 5(18 – 18) = 0
Hence, the sum of the first 10 terms is 0.
Note:
• Sum of the 1st n odd numbers is
S = 1 + 3 + 5 + ... + (2n – 1)
S = n²
• Sum of the 1st n natural numbers is
S = 1 + 2 + 3 + ... + n
S = n(n + 1)/2
• Sum of the squares of the 1st n natural numbers is
S = 1² + 2² + 3² + ... + n²
S = n(n + 1)(2n + 1)/6
• Sum of the cubes of the 1st n natural numbers is
S = 1³ + 2³ + 3³ + ... + n³
S = [n(n + 1)/2]²
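A quick numerical sanity check of these closed forms (not part of the original text), assuming n = 10:

```python
n = 10
assert sum(range(1, n + 1)) == n * (n + 1) // 2
assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(k ** 3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
print("all three identities hold for n =", n)
```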
# Projection of a measurable cylinder of the product $\sigma$-algebra
Take some set $A \subset \mathbb{R}$ such that $A \times \mathbb{R}$ is measurable in $\mathrm{B}(\mathbb{R}) \times \mathrm{B}(\mathbb{R})$, the product $\sigma$-algebra of the borel $\sigma$-algebra of $\mathbb{R}$ by itself (which is also $\mathrm{B}(\mathbb{R^2})$, the borel $\sigma$-algebra of $\mathbb{R}^2$ for its product topology). Then must $A$ be measurable in $\mathrm{B}(\mathbb{R})$?
I know the converse is true, and that the image of a measurable set for the product $\sigma$-algebra by a projection isn't necessarily measurable, but what happens for just a cylinder $A \times \mathbb{R}$?
Alright, it turns out that it was true: $A$ needs to be measurable. Someone posted on another page here a link to the solution (the key fact: for any fixed $y \in \mathbb{R}$, the section $\{x : (x, y) \in A \times \mathbb{R}\}$ is exactly $A$, and sections of sets in the product $\sigma$-algebra are always measurable).
## 2022-06-25
### Information theory 2: source coding
6.9k words, including equations (~36min)
In the previous post, we saw the basic information theory model:
If we have no noise in the channel, we don't need channel coding. Therefore the above model simplifies to
and the goal is to minimise $$n$$ - that is, minimise the number of symbols we need to send - without needing to worry about being robust to any errors.
Here's one question to get started: imagine we're working with a compression function $$f_e$$ that acts on length-$$n$$ strings (that is, sequences of symbols) with some arbitrary alphabet size $$A$$ (that is, $$A$$ different types of symbols). Is it possible to build an encoding function $$f_e$$ that compresses every possible input? Clearly not; imagine that it took every length-$$n$$ string to a length-$$m$$ string using the same alphabet, with $$m < n$$. Then we'd have $$A^m$$ different available codewords that would need to code for $$A^n > A^m$$ different messages. By the pigeonhole principle, there must be at least one codeword that codes for more than one message. But that means that if we see this codeword, we can't be sure what it codes for, so we can't recover the original with certainty.
Therefore, we have a choice: either:
• do lossy compression, where every message shrinks in size but we can't recover information perfectly; or
• do lossless compression, and hope that more messages shrink in size than expand in size.
This is obvious with lossless compression, but applies to both: if you want to do them well, you generally need a probability model for what your data looks like, or at least something that approximates one.
## Terminology
When we talk about a "code", we just mean something that maps messages (the $$Z$$ in the above diagram) to a sequence of symbols. A code is nonsingular if it associates every message with a unique code.
A symbol code is a code where each symbol in the message maps to a codeword, and the code of a message is the concatenation of the codewords of the symbols that it is made of.
A prefix code is a code where no codeword is a prefix of another codeword. They are also called instantaneous codes, because when decoding, you can decode a codeword to a symbol immediately when you reach a point where some prefix of the code corresponds to a codeword.
## Useful basic results in lossless compression
### Kraft's inequality
Kraft's inequality states that a prefix code with an alphabet of size $$D$$ and code words of lengths $$l_1, l_2, \ldots, l_n$$ satisfies
$$\sum_{i=1}^n D^{-l_i} \leq 1,$$
and conversely that if there is a set of lengths $$\{l_1, \ldots, l_n\}$$ that satisfies the above inequality, there exists a prefix code with those codeword lengths.

We will only prove the first direction: that all prefix codes satisfy the above inequality. Let $$l = \max_i l_i$$ and consider the tree with branching factor $$D$$ and depth $$l$$. This tree has $$D^l$$ nodes on the bottom level. Each codeword $$x_1x_2...x_c$$ is the node in this tree that you get to by choosing the $$d_i$$th branch on the $$i$$th level, where $$d_i$$ is the index of symbol $$x_i$$ in the alphabet. Since it must be a prefix code, no node that is a descendant of a node that is a codeword can be a codeword. We can define our "budget" as the $$D^l$$ nodes on the bottom level of the tree, and define the "cost" of each codeword as the number of nodes on the bottom level of the tree that are descendants of that node. The node with length $$l$$ has cost 1, and in general a codeword at level $$l_i$$ has cost $$D^{l - l_i}$$. From this, and the prefix-freeness, we get
$$\sum_i D^{l - l_i} \leq D^l$$
which becomes the inequality when you divide both sides by $$D^l$$.
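As a quick sanity check (not from the original post), here is a minimal Python sketch of the inequality; the example codeword lengths are made up.

```python
def kraft_sum(lengths, D=2):
    """Return the Kraft sum of a set of codeword lengths over a size-D alphabet."""
    return sum(D ** (-l) for l in lengths)

# Lengths of a valid binary prefix code (e.g. {0, 10, 110, 111}): the sum is exactly 1.
print(kraft_sum([1, 2, 3, 3]))  # 1.0  -> a prefix code with these lengths exists
# Lengths that violate the inequality: no binary prefix code can have them.
print(kraft_sum([1, 1, 2]))     # 1.25 -> impossible
```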
### Gibbs' inequality
Gibbs' inequality states that for any two probability distributions $$p$$ and $$q$$,
$$-\sum_i p_i \log p_i \leq - \sum_i p_i \log q_i$$
which can be written using the relative entropy $$D$$ (also known as the KL distance/divergence) as
$$\sum_i p_i \log \frac{p_i}{q_i} = D(p||q) \geq 0.$$
This can be proved using the log sum inequality. The proof is boring.
We want to minimise the expected length of our code $$C$$ for each symbol that $$X$$ might output. The expected length is $$L(C,X) = \sum_i p_i l_i$$. Now one way to think of what a length $$l_i$$ means is using the correspondence between prefix codes and binary trees discussed above. Given the prefix requirement, the higher the level in the tree (and thus the shorter the length of the codeword), the more other options we block out in the tree. Therefore we can think of the collection of lengths we assign to our codewords as specifying a rough probability distribution that assigns probability in proportion to $$2^{-l_i}$$. What we'll do is introduce a variable $$q_i$$ that measures the "implied probability" in this way (note the division by a normalising constant):
$$q_i = \frac{2^{-l_i}}{\sum_i 2^{-l_i}} = \frac{2^{-l_i}}{z}$$
where in the 2nd step we've just defined $$z$$ to be the normalising constant. Now $$l_i = - \log zq_i = -\log q_i - \log z$$, so
$$L(C,X) = \sum_i (-p_i \log q_i) - \log z$$
Now we can apply Gibbs' inequality to know that $$\sum_i(- p_i \log q_i) \geq \sum_i (-p_i \log p_i)$$ and Kraft's inequality to know that $$\log z = \log \big(\sum_i 2^{-l_i} \big) \leq \log(1)=0$$, so we get
$$L(C,X) \geq -\sum_i p_i \log p_i = H(X).$$
Therefore the entropy (with base-2 $$\log$$) of a random variable is a lower bound on the expected length of a codeword (in a 2-symbol alphabet) that represents the outcome of that random variable. (And more generally, entropy with base-$$d$$ logarithms is a lower bound on the length of a codeword for the result in a $$d$$-symbol alphabet.)

## Huffman coding

Huffman coding is a very pretty concept. We saw above that if you're making a random variable for the purpose of gaining the most information possible, you should prepare your random variable to have a uniform probability distribution. This is because entropy is maximised by a uniform distribution, and the entropy of a random variable is the average amount of information you get by observing it. The reason why, say, encoding English characters as 5-bit strings (A = 00000, B = 00001, ..., Z = 11001, and then use the remaining 6 codes for punctuation or cat emojis or whatever) is not optimal is that some of those 5-bit strings are more likely than others. On a symbol-by-symbol level, whether the first symbol is a 0 or a 1 is not equiprobable. To get an ideal code, each symbol we send should have equal probability (or as close to equal probability as we can get).

Robert Fano, of Fano's inequality fame, and Claude Shannon, of everything-in-information-theory fame, had tried to find an efficient general coding scheme in the early 1950s. They hadn't succeeded. Fano set it as an alternative to taking the final exam for his information theory class at MIT. David Huffman tried for a while, and had almost given up and started studying instead, when he came up with Huffman coding and quickly proved it to be optimal.

We want the first code symbol (a binary digit) to divide the space of possible message symbols (the English letters, say) in two equally-likely parts, the first two to divide it in four, the third into eight, and so on. Now some message symbols are going to be more likely than others, so the codes for some symbols have to be longer. We don't want it to be ambiguous when we get to the end of a codeword, so we want a prefix-free code.
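The next paragraphs describe the bottom-up tree construction; as a rough illustration of it (not part of the original post), here is a minimal Python sketch using a heap of (weight, subtree) pairs. The symbol probabilities in the example call are made up.

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a binary Huffman code for a dict {symbol: probability}."""
    tie = count()  # unique tiebreaker so the heap never compares subtrees directly
    heap = [(p, next(tie), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least likely parentless nodes.
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tie), (left, right)))
    _, _, tree = heap[0]

    codes = {}
    def walk(node, prefix=""):
        if isinstance(node, tuple):       # internal node: recurse into both children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                             # leaf: record the codeword
            codes[node] = prefix or "0"   # degenerate single-symbol alphabet
        return codes
    return walk(tree)

# Hypothetical symbol probabilities, just for illustration.
print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
```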
Prefix-free codes with a size-$$d$$ alphabet can be represented as trees with branching factor $$d$$, where each leaf is one codeword. Above, we have $$d=2$$ (i.e. binary), and six items to code for (a, b, c, d, e, and f), and six code words with lengths of between 1 and 4 characters in the codeword alphabet. Each codeword is associated with some probability. We can define the weight of a leaf node to be its probability (or just how many times it occurs in the data) and the weight of a non-leaf node to be the sum of the weights of all leaves that are downstream of it in the tree.

For an optimal prefix-free code, all we need to do is make sure that each node has children that are as equally balanced in weight as possible. The best way to achieve this is to work bottom-up. Start without any tree, just a collection of leaf nodes representing the symbols you want codewords for. Then repeatedly build a node uniting the two least-likely parentless nodes in the tree, until the tree has a root. Above, the numbers next to the non-leaf nodes show the order in which the node was created. This set of weights on the leaf nodes creates the same tree structure as in the previous diagram. (We could also try to work top-down, creating the tree from the root to the leaves rather than from the leaves to the root, but this turns out to give slightly worse results. Also the algorithm for achieving this is less elegant.)

## Arithmetic coding

The Huffman code is the best symbol code - that is, a code where every symbol in the message gets associated with a codeword, and the code for the entire message is simply the concatenation of all the codewords of its symbols. Symbol codes aren't always great, though. Consider encoding the output of a source that has a lot of runs like "aaaaaaaaaahaaaaahahahaaaaa" (a source of such messages might be, for example, a transcription of what a student says right before finals). The Huffman coding for this message is, for example, that "a" maps to a 0, and "h" maps to a 1, and you have achieved a compression of exactly 0%, even though intuitively those long runs of "a"s could be compressed. One obvious thing you could do is run-length encoding, where long blocks of a character get compressed into a code for the character plus a code for how many times the character is repeated; for example the above might become "10a1h5a1h1a1h1a1h5a". However, this is only a good idea if there are lots of runs, and requires a bunch of complexity (e.g. your alphabet for the codewords must either be something more than binary, or you need to be able to express things like lengths and counts in binary unambiguously, possibly using a second layer of encoding with a symbol code).

Another problem with Huffman codes is that the code is based on assuming an unchanging probability model across the entire length of the message that is being encoded. This might be a bad assumption if we're encoding, for example, long angry Twitter threads, where the frequency of exclamation marks and capital letters increases as the message continues. We could try to brute-force a solution, such as splitting the message into chunks and fitting a Huffman code separately to each chunk, but that's not very elegant. Remember how elegant Huffman codes feel as a solution to the symbol coding problem? We'd rather not settle for less.

The fundamental idea of arithmetic coding is that we send a number representing where on the cumulative probability distribution of all messages the message we want to send lies.
This is a dense statement, so we will unpack it with an example. Let's say our alphabet is $$A = \{a, r, t\}$$. To establish an ordering, we'll just say we consider the alphabet symbols in alphabetic order. Now let's say our probability distribution for the random variable $$X$$ looks like the diagram on the left; then our cumulative probability distribution looks like the diagram on the right:

One way to specify which of $$\{a, r, t\}$$ we mean is to pick a number $$0 \leq c \leq 1$$, and then look at which range it corresponds to on the $$y$$-axis of the right-hand figure; $$0 \leq c < 0.5$$ implies $$a$$, $$0.5 \leq c < 0.7$$ implies $$r$$, and $$0.7 \leq c < 1$$ implies $$t$$. We don't need to send the leading 0 because it is always present, and for simplicity we'll transmit the following decimals in binary; 0.0 becomes "0", 0.5 becomes "1", 0.25 becomes "01", and 0.875 is "111".

Note that at this point we've almost reinvented the Huffman code. $$a$$ has the most probability mass and can be represented in one symbol. $$r$$ happens to be representable in one symbol ("1" corresponds to 0.5 which maps to $$r$$) as well even though it has the least probability mass, which is definitely inefficient but not too bad. $$t$$ takes 2: "11".

The real benefit begins when we have multi-character messages. The way we can do it is like this, recursively splitting the number range between 0 and 1 into smaller and smaller chunks:

We see possible numbers encoding "art", "rat", and "tar". Not only that, but we see that all messages we send are infinite in length, as we can just keep going down, adding more and more letters. At first this might seem like a great deal - send one number, get infinite symbols transmitted for free! However, there's a real difference between "art" and "artrat", so we want to be able to know when to stop as well. A simple answer is that the message also includes some code encoding how many symbols to decode for. A more elegant answer is that we can keep our message as just one number, but extend our alphabet to include an end-of-message token. Note that even with this end-of-message token, it is still true that many characters of the message can be encoded by a single symbol of output, especially if some outcome is much more likely. For example, in the example below we need only one bit ("1", for the number 0.5) to represent the message "aaa" (followed by the end-of-message character):

There are still two ways in which this code is underspecified. The first is that we need to choose how much of the probability space to assign to our end-of-message token. The optimal value for this clearly depends on how long messages we will be sending. The second is that even with the end-of-message token, each codeword is still represented by a range of values rather than a single number. Any of these are valid numbers to send, but we want to minimise the length, so we will choose the number in this range that has the shortest binary representation.

Finally, what is our probability model? With the Huffman code, we either assume a probability model based on background information (e.g. we have the set of English characters, and we know the rough probabilities of them by looking at some text corpus that someone else has already compiled), or we fit the probability model based on the message we want to send - if 1/10th of all letters in the message are $$a$$s, we set $$p_a = 0.1$$ when building the tree for our Huffman code, and so on.
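As an illustration (not from the original post), here is a minimal Python sketch of the interval-narrowing idea for a fixed probability model. It uses exact fractions, takes the $$\{a, r, t\}$$ probabilities implied by the example above (0.5, 0.2, 0.3), and ignores the end-of-message token for simplicity.

```python
from fractions import Fraction

# Fixed probability model from the example: P(a)=0.5, P(r)=0.2, P(t)=0.3.
probs = {"a": Fraction(1, 2), "r": Fraction(1, 5), "t": Fraction(3, 10)}
symbols = sorted(probs)  # alphabetic order, as in the example

def interval(message):
    """Return the [low, high) interval on [0, 1) that encodes `message`."""
    low, width = Fraction(0), Fraction(1)
    for ch in message:
        # Offset of this symbol's slice within the current interval.
        offset = sum(probs[s] for s in symbols if s < ch)
        low += width * offset
        width *= probs[ch]
    return low, low + width

lo, hi = interval("art")
print(float(lo), float(hi))  # any number in this range identifies "art" (given its length)
```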
With arithmetic coding, we can also assume static probabilities. However, we can also do adaptive arithmetic coding, where we change the probability model as we go. A good way to do this is for our probability model to assume that the probability $$p_x$$ of the symbol $$x$$ after we have already processed text $$T$$ is
$$p_x = \frac{\text{Count}(x, T) + 1}{\sum_{y \in A} \big(\text{Count}(y, T) + 1\big)} = \frac{\text{Count}(x, T) + 1}{\sum_{y \in A} \big(\text{Count}(y, T)\big) + |A|}$$
where $$A$$ is the alphabet, and $$\text{Count}(a, T)$$ simply returns the count of how many times the character $$a$$ occurs in $$T$$. Note that if we didn't have the $$+1$$ in the numerator and in the sum in the denominator, we would assign a probability of zero to anything we haven't seen before, and be unable to encode it. (We can either say that the end-of-message token is in the alphabet $$A$$, or, more commonly, assign "probabilities" to all $$x$$ using the above formula and some probability $$p_{EOM}$$ to the end of message, and then renormalise by dividing all $$p_x$$ by $$1 + p_{EOM}$$.)

How do we decode this? At the start, the assumed distribution is simply uniform over the alphabet (except maybe for $$p_{EOM}$$). We can decode the first symbol using that distribution, then update the distribution and decode the next, and so on. It's quite elegant.

What isn't elegant is implementing this with standard number systems in most programming languages. For any non-trivial message length, arithmetic coding is going to need very precise floating point numbers, and you can't trust floating point precision very far. You'll need some special system, likely an arbitrary-precision arithmetic library, to actually implement arithmetic coding.

### Prefix-free arithmetic coding

The above description of arithmetic coding is not a prefix-free code. We generally want prefix-free codes, in particular because it means we can decode it symbol by symbol as it comes in, rather than having to wait for the entire message to come through. Note also that often in practice it is uncertain whether or not there are more bits coming; consider a patchy internet connection with significant randomness between packet arrival times. The simple fix for this is that instead of encoding a number as any binary string that maps onto the right segment of the number line between 0 and 1, you impose an additional requirement on it: whatever binary bits you add onto the number, it is still within the range.

## Lempel-Ziv coding

Huffman coding integrated the probability model and the encoding. Arithmetic coding still uses an (at least implicit) probability model to encode, but in a way that makes it possible to update as we encode. Lempel-Ziv encoding, and its various descendants, throw away the entire idea of having any kind of (explicit) probability model. We will look at the original version of this algorithm.

### Encoding

Skip all that Huffman coding nonsense of carefully rationing the shorter codewords for the most likely symbols, and simply decide on some codeword length $$d$$ and give every character in the alphabet a codeword of that length. If your alphabet is again $$\{a, r, t, \text{EOM}\}$$ (we'll include the end-of-message character from the start this time), and $$d = 3$$, then the codewords you define are literally as simple as
$$a \mapsto 000$$
$$r \mapsto 001$$
$$t \mapsto 010$$
$$\text{EOM} \mapsto 011$$
If we used this code, it would be a disaster.
We have four symbols in our alphabet, so the maximum entropy of the distribution is $$\log_2 4 = 2$$ bits, and we're spending 3 bits on each symbol. With this encoding, we increase the length by at least 50%. Instead of your compressed file being uploaded in 4 seconds, it now takes 6.

However, we selected $$d=3$$, meaning we have $$2^3 = 8$$ slots for possible codewords of our chosen constant length, and we've only used 4. What we'll do is follow these steps as we scan through our text:

1. Read one symbol past the longest match between the following text and a codeword we've defined. Therefore what we now have is a string $$Cx$$, where we have a code for $$C$$ already of length $$|C|$$, $$x$$ is a single character, and $$Cx$$ is a prefix of the remaining text.
2. Add $$C$$ to the code we're forming, to encode for the first $$|C|$$ characters of the remaining text.
3. If there is space among the $$2^d$$ possible codewords we have available: let $$n$$ be the binary representation of the smallest possible codeword not yet associated with a code, and define $$Cx \mapsto n$$ as a new codeword.

Here is an example of the encoding process, showing the emitted codewords on the left, the original definitions on the top, the new definitions on the right, and the message down the middle:

### Decoding

A boring way to decode is to send the codeword list along with your message. The fun way is to reason it out as you go along, based on your knowledge of the above algorithm and a convention that lets you know which order the original symbols were added to the codeword list (say, alphabetically, so you know the three bindings in the top-left). An example of decoding the above message:

## Source coding theorem

The source coding theorem is about lossy compression. It is going to tell us that if we can tolerate a probability of error $$\delta$$, and if we're encoding a message consisting of a lot of symbols, unless $$\delta$$ is very close to 0 (lossless compression) or 1 (there is nothing but error), it will take about $$H(X)$$ bits per symbol to encode the message, where $$X$$ is the random variable according to which the symbols in the message have been drawn. Since it means that entropy turns up as a fundamental and surprisingly constant limit when we're trying to compress our information, this further justifies the use of entropy as a measure of information.

We're going to start our attempt to prove the source coding theorem by considering a silly compression scheme. Observe that English has 26 letters, but the bottom 10 (Z, Q, X, J, K, V, B, P, Y, G) are slightly less than 10% of all letters. Why not just drop them? Everthn is still comprehensile without them, and ou can et awa with, for eample, onl 4 inary its per letter rather than 5, since ou're left with ust 16 letters.

Given an alphabet $$A$$ from which our random variable $$X$$ takes values, define the $$\delta$$-sufficient subset $$S_\delta$$ of $$A$$ to be the smallest subset of $$A$$ such that $$P(x \in S_\delta) \geq 1 - \delta$$ for $$x$$ drawn from $$X$$. For example, if $$A$$ is the English alphabet, and $$\delta = 0.1$$, then $$S_\delta$$ is the set of all letters except Z, Q, X, J, K, V, B, P, Y, and G, since the other letters have a combined probability of over $$1 - 0.1 = 0.9$$, and any other subset containing more than $$0.9$$ of the probability mass must contain more letters.
Note that $$S_\delta$$ can be formed by adding elements from $$A$$, in descending order of probability, into a set until the sum of probabilities of elements in the set exceeds $$1 - \delta$$. Next, define the essential bit content of $$X$$, denoted $$H_\delta(X)$$, as
$$H_\delta(X) = \log_2 |S_\delta|.$$
In other words, $$H_\delta(X)$$ is the answer to "how many bits of information does it take to point to one element in $$S_\delta$$ (without being able to assume the distribution is anything better than uniform)?". $$H_\delta(X)$$ for $$\text{English alphabet}_{0.1}$$ is 4, because $$\log_2 |\{E, T, A, O, I, N, S, H, R, D, L, U, C, M, W, F\}| = \log_2 16 = 4$$. It makes sense that this is called "essential bit content".
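As a small numerical illustration (not from the original post), here is a minimal Python sketch that computes $$S_\delta$$ and the essential bit content for an arbitrary distribution; the toy probabilities in the example call are made up.

```python
from math import log2

def essential_bit_content(probs, delta):
    """Return (S_delta, H_delta) for a dict {outcome: probability}."""
    # Greedily add outcomes in descending probability until the mass reaches 1 - delta.
    s_delta, mass = [], 0.0
    for sym, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        if mass >= 1 - delta:
            break
        s_delta.append(sym)
        mass += p
    return s_delta, log2(len(s_delta))

# Toy 4-outcome distribution, delta = 0.1:
print(essential_bit_content({"a": 0.6, "b": 0.25, "c": 0.1, "d": 0.05}, 0.1))
```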
We can graph $$H_\delta(X)$$ against $$\delta$$ to get a pattern like this:
Where it gets more interesting is when we extend this definition to blocks. Let $$X^n$$ denote the random variable for a sequence of $$n$$ independent identically distributed samples drawn from $$X$$. We keep the same definitions for $$S_\delta$$ and $$H_\delta(X)$$; just remember that now $$S$$ is a subset of $$A^n$$ (where the exponent denotes Cartesian product of a set with itself; i.e. $$A^n$$ is all possible length-$$n$$ strings formed from that alphabet). In other words, we're throwing away the least common length-$$N$$ letter strings first; ZZZZ is out the window first if $$n = 4$$, and so on.
We can plot a similar graph as above, except we're plotting $$\frac{1}{n} H_\delta(x)$$ on the vertical axis to get per-symbol entropy, and there's a horizontal line around the entropy of English letter frequencies:
(Note that the entropy per letter of English drops to only 1.3 if we stop modelling each letter as drawn independently from the others around it, and instead have a model with a perfect understanding of which letters occur together.)
$$T_{n\epsilon}$$ is a small subset of the set $$A^n$$ of all length-$$n$$ sequences. We can see this through the following reasoning: for any $$x^n \in T_{n\epsilon}$$, $$-\frac{1}{n} \log P(x^n) \approx H(X)$$ which implies that
$$P(x^n) \approx 2^{-nH(X)}$$
and therefore that there can only be roughly $$2^{nH(X)}$$ such sequences; otherwise their probability would add up to more than 1. In comparison, the number of possible sequences $$|A^n| = 2^{n \log |A|}$$ is significantly larger, since $$H(X) \leq \log |A|$$ for any random variable $$X$$ with alphabet / outcome set $$A$$ (with equality if $$X$$ has a uniform distribution over $$A$$).

### The typical set contains most of the probability

Chebyshev's inequality states that
$$P((X-\mathbb{E}[X])^2 \geq a) \leq \frac{\sigma^2}{a}$$
where $$\sigma^2$$ is the variance of the random variable $$X$$, and $$a \geq 0$$. It is proved here (search for "Chebyshev").
Earlier we defined the $$\epsilon$$-typical set as
$$T_{n\epsilon} = \left\{ x^n \in A^n \,\text{ such that } \, \left| -\frac{1}{n}\log P(X^n) - H(X) \right| < \epsilon \right\}.$$
Note that
$$\mathbb{E}\left[-\frac{1}{n}\log P(X^n)\right] = -\frac{1}{n} \sum_i \log P(X_i) = -\mathbb{E}[\log P(X_i)] = H(X_i) = H(X)$$
by using independence of the $$X_i$$ making up $$X^n$$ in the first step, the law of large numbers ($$\lim_{n \to \infty} \frac{1}{n} \sum_i X_i = \mathbb{E}[X]$$) in the second, and the fact that all $$X_i$$ are independent draws of the same random variable $$X$$ in the third.
Therefore, we can now rewrite the typical set definition equivalently as
$$T_{n\epsilon} = \left\{ x^n \in A^n \,\text{ such that } \, \left( -\frac{1}{n}\log P(x^n) - H(X) \right)^2 < \epsilon^2 \right\} = \left\{ x^n \in A^n \,\text{ such that } \, \left( Y - \mathbb{E}[Y] \right)^2 < \epsilon^2 \right\}$$
for $$Y = -\frac{1}{n} \log P(X^n)$$, which is in the right form to apply Chebyshev's inequality to get a probability of belonging to this set, except for the fact that the sign is the wrong way around. Very well - we'll instead consider the set of sequences $$\bar{T}_{n\epsilon} = A^n - T_{n\epsilon}$$ (i.e. all length-$$n$$ sequences that are not typical), which can be defined as
$$\bar{T}_{n \epsilon} = \left\{ x^n \in A^n \,\text{ such that } \, (Y - \mathbb{E}[Y])^2 \geq \epsilon^2 \right\}$$
and use Chebyshev's inequality to conclude that
$$P((Y - \mathbb{E}[Y])^2 \geq \epsilon^2) \leq \frac{\sigma_Y^2}{\epsilon^2}$$
where $$\sigma_Y^2$$ is the variance of $$Y= -\frac{1}{n} \log P(X^n)$$.

This is exciting - we have a bound on the probability that a sequence is not in the typical set - but we want to link this probability to $$n$$ somehow. Let $$Z = -\log P(X)$$, and note that $$Y$$ can be written as the average of many draws from $$Z$$. Therefore
$$\mathbb{E}[Z] = -\frac{1}{n} \sum_i \log P(X) = -\frac{1}{n} \log P(X^n) = \mathbb{E}[Y]$$
and since $$Y = \frac{1}{n} \sum_i Z_i$$, the variance of $$Y$$, $$\sigma_Y^2$$, is equal to $$\frac{1}{n} \sigma_Z^2$$ (a basic law of how variance works that is often used in statistics). We can substitute this into the expression above to get
$$P((Y-\mathbb{E}[Y])^2 \geq \epsilon^2) \leq \frac{\sigma_Z^2}{n\epsilon^2}.$$
The probability on the left-hand side is identical to $$P((-\frac{1}{n} \log P(X^n) - H(X) )^2 \geq \epsilon^2)$$, which is the probability of the condition that $$X^n$$ is not in the $$\epsilon$$-typical set $$T_{n\epsilon}$$, which gives us our grand result
$$P(X^n \in T_{n\epsilon}) \ge 1 - \frac{\sigma_Z^2}{n\epsilon^2}.$$
$$\sigma_Z^2$$ is the variance of $$Z = -\log P(X)$$; it depends on the particulars of the distribution and is probably hell to calculate. However, what we care about is that if we just crank up $$n$$, we can make this probability as close to 1 as we like, regardless of what $$\sigma_Z^2$$ is, and regardless of what we set as $$\epsilon$$ (the parameter for how wide the probability range for the typical set is).

The key idea is this: asymptotically, as $$n \to \infty$$, more and more of the probability mass of possible length-$$n$$ sequences is concentrated among those that have a probability of between $$2^{-n(H(X)+\epsilon)}$$ and $$2^{-n(H(X) - \epsilon)}$$, regardless of what (positive real) $$\epsilon$$ you set. This is known as the "asymptotic equipartition property" (it might be more appropriate to call it an "asymptotic approximately-equally-partitioning property" because it's not really an "equipartition", since depending on $$\epsilon$$ these can be very different probabilities, but apparently that was too much of a mouthful even for the mathematicians).

### Finishing the proof

As a reminder of where we are: we stated without proof
$$\left| \frac{1}{n}H_\delta(X^n) - H(X) \right| < \epsilon$$
and noted that this is an interesting result that also gives meaning to entropy, since we see that it's related to how many bits it takes for a naive coding scheme to express $$X^n$$ (with error probability $$\delta$$).
Then we went on to talk about typical sets, and ended up finding that the probability that an $$x^n$$ drawn from $$X^n$$ lies in the set
$$T_{n \epsilon} = \left\{ x^n \in A^n \,\text{ such that } \, \left| -\frac{1}{n}\log P(X^n) - H(X) \right| < \epsilon \right\}$$
approaches 1 as $$n \to \infty$$, despite the fact that $$T_{n\epsilon}$$ has only approximately $$2^{nH(X)}$$ members, which, for distributions of $$X$$ that are not very close to the uniform distribution over the alphabet $$A$$, is a small fraction of the $$2^{n \log |A|}$$ possible length-$$n$$ sequences.

Remember that $$H_\delta(X^n) = \log |S_\delta|$$, and $$S_\delta$$ was the smallest subset of $$A^n$$ such that it contains sequences whose probability sums to at least $$1 - \delta$$. This is a bit like the typical set $$T_{n\epsilon}$$, which also contains sequences making up most of the probability mass. Note that $$T_{n\epsilon}$$ is less efficient; $$S_\delta$$ optimally contains all sequences with probability greater than some threshold, whereas $$T_{n\epsilon}$$ generally omits the highest-probability sequences (settling instead for sequences of the same probability as most sequences that are drawn from $$X^n$$). Therefore
$$H_\delta(X^n) \leq \log |T_{n\epsilon}|$$
for an $$n$$ that depends on what $$\delta$$ and $$\epsilon$$ we want.

Now we can get an upper bound on $$H_\delta(X^n)$$ if we can upper-bound $$|T_{n\epsilon}|$$. Looking at the definition, we see that the probability of a sequence $$x^n$$ in the typical set must obey
$$2^{-n(H(X) + \epsilon)} < P(x^n) < 2^{-n(H(X) - \epsilon)}.$$
$$T_{n\epsilon}$$ has the largest number of elements if all elements have the lowest possible probability $$p$$, and if that is the case it has at most $$1/p$$ of such lowest-probability elements since the probabilities cannot add to more than one, which implies $$|T_{n\epsilon}| < 2^{n(H(X)+\epsilon)}$$. Therefore
$$H_\delta(X^n) \leq \log |T_{n\epsilon}| < \log(2^{n(H(X)+\epsilon)}) = n(H(X) + \epsilon)$$
and we have a bound
$$H_\delta(X^n) < n(H(X) + \epsilon).$$
If we can now also find the bound $$n(H(X) - \epsilon) < H_\delta(X^n)$$, we've shown $$|\frac{1}{n} H_\delta(X^n) - H(X)| < \epsilon$$ and we're done.

The proof of this bound is a proof by contradiction. Imagine that there is an $$S'$$ such that
$$\frac{1}{n} \log |S'| \leq H - \epsilon$$
but also
$$P(X^n \in S') \geq 1 - \delta.$$
We want to show that $$P(X^n \in S')$$ can't actually be that large. For the other bound, we used our typical set successfully, so why not use it again? Specifically, write
$$P(X^n \in S') = P(X^n \in S' \cap T_{n\varepsilon}) + P(X^n \in S' \cap \bar{T}_{n\varepsilon})$$
where $$\bar{T}_{n\varepsilon}$$ is again $$A^n - T_{n\varepsilon}$$, noting that our constant $$\varepsilon$$ for $$T$$ is not the same as our constant $$\epsilon$$ in the bound. We want to set an upper bound on this probability; for that to hold, we need to make the terms on the right-hand side as large as possible. For the first term, this is if $$S' \cap T_{n\varepsilon}$$ is as large as it can be based on the bound on $$|S'|$$, i.e. $$2^{n(H(X)-\epsilon)}$$, and each term in it has the maximum probability $$2^{-n(H(X)-\varepsilon)}$$ of terms in $$T_{n\varepsilon}$$. For the second term, this is if $$S' \cap \bar{T}_{n \epsilon}$$ is restricted only by $$P(X^n \in \bar{T}_{n\varepsilon}) \leq \frac{\sigma^2}{n\epsilon^2}$$, which we showed above.
(Note that you can't have both of these conditions holding at once, but this does not matter since we only want to show a non-strict inequality.) Therefore we get
$$P(X^n \in S') \leq 2^{n(H(X) - \epsilon)} 2^{-n(H(X)+\varepsilon)} + \frac{\sigma^2}{n\epsilon^2} = 2^{-n(\epsilon + \varepsilon)} + \frac{\sigma^2}{n\epsilon^2}$$
and we see that since $$\epsilon, \varepsilon > 0$$, and as we're dealing with the case where $$n \to \infty$$, this probability is going to go to zero in the limit. But we had assumed $$P(X^n \in S') \geq 1 - \delta$$ - so we have a contradiction unless we don't assume that, which means
$$n(H(X) - \epsilon) < H_\delta(X^n).$$
Combining this with the previous bound, we've now shown
$$H(X) - \epsilon < \frac{1}{n} H_\delta(X^n) < H(X) + \epsilon$$
which is the same as
$$\left|\frac{1}{n}H_\delta(X^n) - H(X)\right| < \epsilon$$
which is the source coding theorem that we wanted to prove.
# Time Value of Money
The internal rate of return (IRR) is simply the rate r that equates the future value of money to the present value. Despite being a foundational tool in financial analysis, the economic interpretation of this value is highly contested.
Before getting into the interpretation of IRR, let's establish the basic axioms of the time value of money that depict the relationship between present value (PV), future value (FV), and the rate of return r over the total number of periods ($$T$$):
$$$PV = \frac{FV}{(1+r)^T} \label{basic_pv}$$$
$$$FV = PV(1+r)^T \label{basic_fv}$$$
In words, Eq. $$\ref{basic_pv}$$ tells us that PV is equal to some FV discounted at a rate across the total time period, where T represents the total number of time periods. Eq. $$\ref{basic_fv}$$ simply rearranges the equation so that FV is PV compounded by a rate r across time.
The core idea is that cash now is more valuable than cash later. Therefore if I were to lend you money, I would charge you interest to make up for not having money now. As an example, say person A lends person B \$10 for 10 years at a rate of 20%. The FV of the money for person A is $$10(1+.2)^{10} \approx 61$$. Alternatively, if person B wanted to know how much they needed to invest to have \$61 after 10 years of compounding at 20%, they would find that $$61/(1+.2)^{10} \approx 10$$.
The focus of our exposition will be on r, the rate of return that is used to discount/compound. We can rearrange Eq. $$\ref{basic_pv}$$ into Eq. $$\ref{basic_irr}$$ to see the relationship between the IRR and FV and PV. Eq. $$\ref{basic_irr}$$ will be our final step in computing the IRR.
$$$r = \left(\frac{FV}{PV}\right)^{1/T} - 1 \label{basic_irr}$$$
# Streams of Cash
There are plenty of investments that take the form of a single payment in (a negative cash flow out of your pocketbook) followed by a single payment out (a positive cash flow into your pocketbook) after some period of time. This, however, is a trivial example to solve (see above) and I will instead focus on investments that take the form of multiple cash flows over several periods.
Because we are dealing with a stream of cash flows and not a single value, FV needs to be rewritten. Consider a cash flow at t=1: if the final period is T=60, then we need to compute the future value of that cash flow for 60-1 = 59 periods from that time. We can rewrite the PV and the FV of a stream of cash flows as
$$$PV = \sum_{t=1}^{t=T}\frac{CF_t}{(1+r)^{t}} \label{pv_cf}$$$
$$$FV = \sum_{t=1}^{t=T}CF_t(1+r)^{T-t} \label{fv_cf}$$$
Substituting Eq. $$\ref{fv_cf}$$ into Eq. $$\ref{basic_pv}$$ and expanding we get:
$$$PV = \frac{\sum_{t=1}^{t=T}CF_t(1+r)^{T-t}}{(1+r)^T} = \frac{CF_1(1+r)^{T-1}}{(1+r)^T} + \frac{CF_2(1+r)^{T-2}}{(1+r)^T} + ... + \frac{CF_T(1+r)^{T-T}}{(1+r)^T} \label{pv_fv}$$$
We could also modify Eq. $$\ref{pv_fv}$$ to read:
$$$\mathit{CF}_{t=0} = \frac{\sum_{t=1}^{t=T}CF_t(1+r)^{T-t} + \mathit{CF}_{t=T}}{(1+r)^T}$$$
This shows that the initial CF is equal to the sum of the compounded cash flows and the terminal CF, all discounted by the same rate of compounding. For IRR, the numerator and the denominator r are assumed to be the same. This assumption is key to the alternatives to IRR such as MIRR (modified internal rate of return).
# Solving for Internal Rate of Return
Our primary goal is to find the r that solves Eq. $$\ref{pv_fv}$$. The most common method is to treat this as a polynomial root finding exercise. For clarity, we can trivially rewrite Eq. $$\ref{pv_cf}$$ as
$$$PV = CF_1 (1+r)^{-1} + CF_2 (1+r)^{-2} + ... + CF_n (1+r)^{-n}$$$
To find the root of this equation we need to set it to zero.
$$$0 = -PV + CF_1 (1+r)^{-1} + CF_2 (1+r)^{-2} + ... + CF_n (1+r)^{-n}$$$
The root r that satisfies this equation is the Internal Rate of Return (IRR). There are several methods of finding this root.
## Educated Guess and Interpolation
One way of finding it is to plug and chug using the NPV function in Excel and see at which rate the NPV turns negative. This would not be super precise, but it would give you an educated guess and allow you to use the secant method to determine a more precise IRR.
## Root Finding Algorithm
Similarly, we can use the uniroot function in R: first specify the function, then call uniroot on it. uniroot uses Brent's method, which combines bisection with interpolation, to efficiently find the root of the function on a specified interval.
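For concreteness, here is a minimal Python sketch (not from the original post) of the same root-finding idea, using plain bisection on the NPV function; the cash flows and the bracketing interval are made up, and the method assumes the NPV changes sign exactly once on that interval.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of period t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr_bisect(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Find the rate where NPV crosses zero, assuming one sign change on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid  # the root lies in the lower half
        else:
            lo = mid  # the root lies in the upper half
    return (lo + hi) / 2

# Hypothetical project: pay 100 today, receive 30 a year for 5 years.
print(irr_bisect([-100, 30, 30, 30, 30, 30]))  # roughly 0.152 (15.2%)
```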
##### Kyle Thomas
###### AVP, Quantitative Analyst
I am a quantitative analyst focusing on consumer valuation and marketing.
NCERT Solutions for Class 12 Physics Chapter 3 Current Electricity PDF – eSaral
# NCERT Solutions for Class 12 Physics Chapter 3 Current Electricity PDF
Hey, are you a class 12 student looking for ways to download NCERT Solutions for Class 12 Physics Chapter 3 Current Electricity PDF? If yes, then read this post till the end.
In this article, we have listed NCERT Solutions for Class 12 Physics Chapter 3 Current Electricity in PDF, prepared by Kota's top IITian faculty with simplicity in mind.
If you want to learn and understand class 12 Physics Chapter 3 “Current Electricity” in an easy way then you can use these solutions PDF.
NCERT Solutions helps students to Practice important concepts of subjects easily. Class 12 Physics solutions provide detailed explanations of all the NCERT questions that students can use to clear their doubts instantly.
If you want to score high in your class 12 Physics Exam then it is very important for you to have a good knowledge of all the important topics, so to learn and practice those topics you can use eSaral NCERT Solutions.
So, without wasting more time Let’s start.
### Download NCERT Solutions for Class 12 Physics Chapter 3 Current Electricity PDF
Question 1. The storage battery of a car has an emf of $12 \mathrm{~V}$. If the internal resistance of the battery is $0.4 \Omega$, what is the maximum current that can be drawn from the battery?
Solution. Given:
emf of the battery $(E)=12 \mathrm{~V}$
Internal resistance of the battery $(r)=0.4 \Omega$
Let the maximum current drawn from the battery is I
By Ohm’s law,
$E=I r$
$I=\frac{E}{r}$
$I=\frac{12}{0.4}=30 \mathrm{~A}$
$\therefore 30 \mathrm{~A}$ is the maximum current drawn from the battery.
Question 2. A battery of emf $10 \mathrm{~V}$ and internal resistance $3 \Omega$ is connected to a resistor. If the current in the circuit is $0.5 \mathrm{~A}$, what is the resistance of the resistor? What is the terminal voltage of the battery when the circuit is closed?
Solution. Given:
emf of the battery, $E$, is $10 \mathrm{~V}$
Internal resistance of the battery, $r,=3 \Omega$,
Current in the circuit, $I,=0.5 \mathrm{~A}$,
Resistance of the resistor is $R$.
According to Ohm’s law,
$I=\frac{E}{R+r}$
$\Rightarrow R+r=\frac{E}{I}$
$=\frac{10}{0.5}=20 \Omega$
$\therefore R=20-3=17 \Omega$
The terminal voltage of the resistor $=V$
According to Ohm’s law,
$V=I R$
$=0.5 \times 17$
$=8.5 \mathrm{~V}$
$\therefore$ The terminal voltage is $8.5 \mathrm{~V}$ and the resistance of the resistor is $17 \Omega$.
Question 3. (a) Three resistors $1 \Omega, 2 \Omega$, and $3 \Omega$, are combined in series. What is the total resistance of the combination?
(b) If the combination is connected to a battery of emf $12 \mathrm{~V}$ and negligible internal resistance, obtain the potential drop across each resistor.
Solution. (a) The resistors $1 \Omega, 2 \Omega$, and $3 \Omega$ are combined in series.
Hence, total resistance $=1+2+3=6 \Omega$
(b) Current flowing through the circuit $=I$
Emf of the battery, $E$ is $12 \mathrm{~V}$
The total resistance of the circuit in series combination, $R$ is $6 \Omega$
According to Ohm’s law
$I=\frac{E}{R}$
$=\frac{12}{6}=2 \mathrm{~A}$
Let the potential drop across $1 \Omega$ resistor is $V_{1}$
The value of $V_{1}$ obtained by Ohm’s law is:
$V_{1}=2 \times 1=2 \mathrm{~V} \ldots(i)$
Let the potential drop across $2 \Omega$ resistor is $V_{2}$
Again, from Ohm’s law, the value of $V_{2}$ can be obtained as
$V_{2}=2 \times 2=4 \mathrm{~V} \ldots(i i)$
Let the potential drop across $3 \Omega$ resistor is $V_{3}$
Again, from Ohm’s law, the value of $V_{3}$ can be obtained as
$V_{3}=2 \times 3=6 \mathrm{~V} \ldots(i i i)$
$\therefore$ The potential drop across $1 \Omega$ is $2 \mathrm{~V}, 2 \Omega$ is $4 \mathrm{~V}$, and $3 \Omega$ is $6 \mathrm{~V}$.
Question 4. (a) Three resistors $2 \Omega, 4 \Omega$ and $5 \Omega$, are combined in parallel. What is the total resistance of the combination?
(b) If the combination is connected to a battery of emf $20 \mathrm{~V}$ and negligible internal resistance, determine the current through each resistor and the total current drawn from the battery.
Solution. (a) Given:
$R_{1}=2 \Omega, R_{2}=4 \Omega$, and $R_{3}=5 \Omega$
The total resistance (R) of the parallel combination is given by.
$\frac{1}{R}=\frac{1}{R_{1}}+\frac{1}{R_{2}}+\frac{1}{R_{3}}$
$=\frac{1}{2}+\frac{1}{4}+\frac{1}{5}=\frac{10+5+4}{20}=\frac{19}{20}$
$\therefore R=\frac{20}{19} \Omega$
The total resistance of the combination is $\frac{20}{19} \Omega$
(b) $\quad \mathrm{Emf}$ of the battery, $V=20 \mathrm{~V}$
Current flowing through the resistor $R_{1}$ is given by,
$I_{1}=\frac{V}{R_{1}}$
$=\frac{20}{2}=10 \mathrm{~A}$
Current flowing through the resistor $R_{2}$ is given by,
$I_{2}=\frac{V}{R_{2}}$
$=\frac{20}{4}=5 \mathrm{~A}$
Current flowing through the resistor $R_{3}$ is given by,
$I_{3}=\frac{V}{R_{3}}$
$=\frac{20}{5}=4 \mathrm{~A}$
Total Current, $I=I_{1}+I_{2}+I_{3}=10+5+4=19 \mathrm{~A}$
$\therefore$ The current through each resistor $R_{1}, R_{2}$, and $R_{3}$ are $10 \mathrm{~A}, 5 \mathrm{~A}$, and $4 \mathrm{~A}$ respectively and the total current is $19 \mathrm{~A}$.
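As a quick cross-check (not part of the NCERT text), a few lines of Python reproduce these numbers:

```python
def parallel(*rs):
    """Equivalent resistance of resistors connected in parallel."""
    return 1 / sum(1 / r for r in rs)

V = 20.0
rs = (2.0, 4.0, 5.0)
print(parallel(*rs))           # 20/19 ≈ 1.053 ohm
print([V / r for r in rs])     # branch currents: 10 A, 5 A, 4 A
print(sum(V / r for r in rs))  # total current: 19 A
```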
Question 5. At room temperature $\left(27.0^{\circ} \mathrm{C}\right)$ the resistance of a heating element is $100 \Omega$. What is the temperature of the element if the resistance is found to be $117 \Omega$, given that the temperature coefficient of the material of the resistor is $1.70 \times 10^{-4}{ }^{\circ} \mathrm{C}^{-1}$?
Solution. Given:
Room temperature, $T$ is $27^{\circ} \mathrm{C}$
The resistance of the heating element at $T, R$ is $100 \Omega$
Suppose, $T_{1}$ is the increased temperature of the filament.
The resistance of the heating element at $T_{1}, R_{1}$ is $117 \Omega$
Temperature Coefficient of filament material is, $\alpha$ is $1.70 \times 10^{-4}{ }^{o} \mathrm{C}^{-1}$
$\alpha$ is given by the relation,
$\alpha=\frac{R_{1}-R}{R\left(T_{1}-T\right)}$
$T_{1}-T=\frac{R_{1}-R}{R \alpha}$
$T_{1}-27=\frac{117-100}{100\left(1.7 \times 10^{-4}\right)}$
$T_{1}-27=1000$
$T_{1}=1027^{\circ} \mathrm{C}$
$\therefore$ At $1027^{\circ} C$, the resistance of the element is $117 \Omega$.
Question 6. A negligibly small current is passed through a wire of length $15 \mathrm{~m}$ and uniform cross-section $6.0 \times 10^{-7} \mathrm{~m}^{2}$, and its resistance is measured to be $5.0 \Omega$. What is the resistivity of the material at the temperature of the experiment?
Solution. Given:
Length of the wire, $l$ is $15 \mathrm{~m}$
Area of a cross-section of the wire, $a$ is $6.0 \times 10^{-7} \mathrm{~m}^{2}$
The resistance of the wire’s material, $R$ is $5.0 \Omega$
$\rho$ is the resistivity of the material.
Resistance is related to the resistivity as
$R=\rho \frac{l}{A}$
$\rho=\frac{R A}{l}$
$=\frac{5 \times 6 \times 10^{-7}}{15}=2 \times 10^{-7} \Omega \mathrm{m}$
$\therefore$ The resistivity of the material is $2 \times 10^{-7} \Omega \mathrm{m}$.
Question 7. A silver wire has a resistance of $2.1 \Omega$ at $27.5^{\circ} \mathrm{C}$, and a resistance of $2.7 \Omega$ at $100^{\circ} \mathrm{C}$. Determine the temperature coefficient of resistivity of silver.
Solution. Given:
Temperature, $T_{1}$ is $27.5{ }^{\circ} \mathrm{C}$
The Resistance of the silver wire at $T_{1}, R_{1}$ is $2.1 \Omega$
Temperature, $T_{2}=100{ }^{\circ} \mathrm{C}$
The Resistance of the silver wire at $T_{2}, R_{2}$ is $2.7 \Omega$.
Temperature coefficient of silver is $\alpha$
It is related to the temperature and resistance as
$\alpha=\frac{R_{2}-R_{1}}{R_{1}\left(T_{2}-T_{1}\right)}$
$=\frac{2.7-2.1}{2.1(100-27.5)}=0.0039{ }^{\circ} \mathrm{C}^{-1}$
Temperature coefficient of silver is $0.0039{ }^{\circ} \mathrm{C}^{-1}$.
Question 8. A heating element using nichrome connected to a $230 \mathrm{~V}$ supply draws an initial current of $3.2$ A which settles after a few seconds to a steady state value of $2.8 \mathrm{~A}$. What is the steady temperature of the heating element if the room temperature is $27.0^{\circ} \mathrm{C}$ ? Temperature coefficient of resistance of nichrome averaged over the temperature range involved is $1.70 \times 10^{-4}{ }^{\circ} \mathrm{C}^{-1}$.
Solution. Given:
Supply voltage, $V$ is $230 \mathrm{~V}$
The Initial current drawn, $I_{1}$ is $3.2 \mathrm{~A}$
Initial resistance $=R_{1}$, which is given by the relation,
$R_{1}=\frac{V}{I_{1}}$
$=\frac{230}{3.2}=71.87 \Omega$
Steady state value of the current, $I_{2}$ is $2.8 \mathrm{~A}$
Resistance at the steady state $=R_{2}$, which is given as
$R_{2}=\frac{230}{2.8}=82.14 \Omega$
Temperature coefficient of nichrome, $\alpha$ is $1.70 \times 10^{-4}{ }^{\circ} \mathrm{C}^{-1}$
The initial temperature of nichrome, $T_{1}$ is $27.0^{\circ} \mathrm{C}$
Steady state temperature reached by nichrome is $T_{2}$
$T_{2}$ can be obtained by the relation for $\alpha$,
$\alpha=\frac{R_{2}-R_{1}}{R_{1}\left(T_{2}-T_{1}\right)}$
$T_{2}-27^{\circ} C=\frac{82.14-71.87}{71.87 \times 1.7 \times 10^{-4}}=840.5$
$T_{2}=840.5+27=867.5^{\circ} \mathrm{C}$
The steady state temperature of the heating element is $867.5^{\circ} \mathrm{C}$
Question 9. Determine the current in each branch of the network shown in fig $3.30$ :
Solution: The current flowing through different branches of the circuit is shown in the figure Provided
$I_{1}=$ Current flowing through the outer circuit
$I_{2}=$ Current flowing through branch $\mathrm{AB}$
$I_{3}=$ Current flowing through branch $\mathrm{AD}$
$I_{2}-I_{4}=$ Current flowing through branch $\mathrm{BC}$
$I_{3}+I_{4}=$ Current flowing through branch DC
$I_{4}=$ Current flowing through branch BD
For the closed circuit ABDA, the potential drop is zero, i.e.,
$10 I_{2}+5 I_{4}-5 I_{3}=0$
$\Rightarrow 2 I_{2}+I_{4}-I_{3}=0$
$\Rightarrow I_{3}=2 I_{2}+I_{4} \ldots$ (1)
For the closed circuit BCDB, the potential drop is zero, i.e.,
$5\left(I_{2}-I_{4}\right)-10\left(I_{3}+I_{4}\right)-5 I_{4}=0$
$\Rightarrow 5 I_{2}+5 I_{4}-10 I_{3}-10 I_{4}-5 I_{4}=0$
$\Rightarrow 5 I_{2}-10 I_{3}-20 I_{4}=0$
$\Rightarrow I_{2}=2 I_{3}+4 I_{4} \ldots(2)$
For the closed circuit ABCFEA, the potential drop is zero, i.e.,
$-10+10\left(I_{1}\right)+10\left(I_{2}\right)+5\left(I_{2}-I_{4}\right)=0$
$\Rightarrow 10=15 I_{2}+10 I_{1}-5 I_{4}$
$\Rightarrow 3 I_{2}+2 I_{1}-I_{4}=2 \ldots(3)$
From equations (1) and (2), we obtain
$I_{3}=2\left(2 I_{3}+4 I_{4}\right)+I_{4}$
$\Rightarrow I_{3}=4 I_{3}+8 I_{4}+I_{4}$
$-3 I_{3}=9 I_{4}$
$-3 I_{4}=+I_{3} \ldots(4)$
Putting equation (4) in equation (1), we can obtain
$I_{3}=2 I_{2}+I_{4}$
$\Rightarrow-4 I_{4}=2 I_{2}$
$\Rightarrow I_{2}=-2 I_{4} \ldots(5)$
From the given figure,
$I_{1}=I_{3}+I_{2} \ldots$ (6)
From equation (6) and equation (3), we obtain
$3 I_{2}+2\left(I_{3}+I_{2}\right)-I_{4}=2$
$\Rightarrow 5 I_{2}+2 I_{3}-I_{4}=2 \ldots(7)$
$\Rightarrow 5\left(-2 I_{4}\right)+2\left(-3 I_{4}\right)-I_{4}=2$
$\Rightarrow-10 I_{4}-6 I_{4}-I_{4}=2$
$17 I_{4}=-2$
$I_{4}=\frac{-2}{17} A$
Equation (4) reduces to
$I_{3}=-3\left(I_{4}\right)$
$=-3\left(\frac{-2}{17}\right)=\frac{6}{17} \mathrm{~A}$
$I_{2}=-2\left(I_{4}\right)$
$=-2\left(\frac{-2}{17}\right)=\frac{4}{17} A$
$I_{2}-I_{4}=\frac{4}{17}-\left(\frac{-2}{17}\right)=\frac{6}{17} \mathrm{~A}$
$I_{3}+I_{4}=\frac{6}{17}+\left(\frac{-2}{17}\right)=\frac{4}{17} \mathrm{~A}$
$I_{1}=I_{3}+I_{2}$
$=\frac{6}{17}+\frac{4}{17}=\frac{10}{17} \mathrm{~A}$
Therefore, current in branch $A B=\frac{4}{17} \mathrm{~A}$
In branch $B C=\frac{6}{17} \mathrm{~A}$
In branch $C D=\frac{-4}{17} A$
In branch $A D=\frac{6}{17} \mathrm{~A}$
In branch $B D=\frac{-2}{17} \mathrm{~A}$
Total current $=\frac{4}{17}+\frac{6}{17}+\frac{-4}{17}+\frac{6}{17}+\frac{-2}{17}=\frac{10}{17} \mathrm{~A}$
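As a quick cross-check (not part of the NCERT text), the three loop equations above, with $I_{1}=I_{2}+I_{3}$ substituted into the third, can be solved numerically; this small Python sketch reproduces the fractions found above.

```python
import numpy as np

# Unknowns: I2, I3, I4 (I1 = I2 + I3 has been substituted into the third loop equation).
A = np.array([[2.0, -1.0,  1.0],   # 2*I2 - I3 + I4 = 0       (loop ABDA)
              [1.0, -2.0, -4.0],   # I2 - 2*I3 - 4*I4 = 0     (loop BCDB)
              [5.0,  2.0, -1.0]])  # 5*I2 + 2*I3 - I4 = 2     (loop ABCFEA with I1 = I2 + I3)
b = np.array([0.0, 0.0, 2.0])

I2, I3, I4 = np.linalg.solve(A, b)
print(I2, I3, I4, I2 + I3)  # ≈ 0.235, 0.353, -0.118, 0.588  (= 4/17, 6/17, -2/17, 10/17)
```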
Question 10. (a) In a meter bridge [Fig. $3.27]$, the balance point is found to be at $39.5 \mathrm{~cm}$ from the end $\mathrm{A}$ when the resistor $\mathrm{Y}$ is of $12.5 \Omega$. Determine the resistance of $X$. Why are the connections between resistors in a Wheatstone or meter bridge made of thick copper strips? (b) Determine the balance point of the bridge above if $\mathrm{X}$ and $\mathrm{Y}$ are interchanged.
(c) What happens if the galvanometer and cell are interchanged at the balance point of the bridge? Would the galvanometer show any current?
Solution. The figure shows a meter bridge with resistors $\mathrm{X}$ and $\mathrm{Y}$.
(a) Balance point from end $A, I_{1}$ is $39.5 \mathrm{~cm}$
The Resistance of the resistor $Y$ is $12.5 \Omega$
Condition for the balance is given as,
$\frac{X}{Y}=\frac{100-l_{1}}{l_{1}}$
$X=\frac{100-39.5}{39.5} \times 12.5=8.2 \Omega$
The resistance of resistor $X$ is $8.2 \Omega$.
The thick copper strips used in a Wheatstone bridge minimize the resistance of the connecting wires.
(b) If $X$ and $Y$ are interchanged, then $l_{1}$ and $100-l_{1}$ get interchanged.
The balance point of the bridge is $100-l_{1}$ from $A$.
$100-l_{1}=100-39.5=60.5 \mathrm{~cm}$
The balance point is $60.5 \mathrm{~cm}$ from $\mathrm{A}$.
(c) The galvanometer will show no deflection when the galvanometer and cell are interchanged at the balance point. Therefore, no current would flow through the galvanometer.
Question 11. A storage battery of emf $8.0 \mathrm{~V}$ and internal resistance $0.5 \Omega$ is being charged by a $120 \mathrm{~V}$ DC supply using a series resistor of $15.5 \Omega$.
What is the terminal voltage of the battery during charging? What is the purpose of having a series resistor in the charging circuit?
Solution. Given:
Emf of the storage battery, $E$, is $8.0 \mathrm{~V}$
The Internal resistance of the battery, $r$, is $0.5 \Omega$
$D C$ supply voltage, $V$ is $120 \mathrm{~V}$
The Resistance of the resistor, $R$, is $15.5 \Omega$
Effective voltage in the circuit $=V^{\prime}$
$\mathrm{R}$ is connected in series to the storage battery. It can therefore be written as
$V^{\prime}=V-E$
$\Rightarrow V^{\prime}=120-8=112 \mathrm{~V}$
Current flowing in the circuit $=I$, which is given by the relation,
$I=\frac{V^{\prime}}{R+r}$
$=\frac{112}{15.5+0.5}=\frac{112}{16}=7 \mathrm{~A}$
The voltage across resistor $\mathrm{R}$ is given by the product $I R=7 \times 15.5=108.5 \mathrm{~V}$
$D C$ voltage supply $=$ Voltage drop across $\mathrm{R}+$ Terminal voltage of the battery
The terminal voltage of battery $=120-108.5=11.5 \mathrm{~V}$
In the absence of a series resistor, the charging current would be very large. The purpose of the series resistor in the charging circuit is to limit the current drawn from the external supply.
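The charging-circuit numbers above can be reproduced with a few lines of Python (a hedged sketch, not part of the original solution):

```python
E, r, V, R = 8.0, 0.5, 120.0, 15.5   # emf, internal resistance, supply voltage, series resistor
I = (V - E) / (R + r)                # charging current
terminal = E + I * r                 # terminal voltage while charging
print(I, terminal)                   # 7.0 A and 11.5 V (the same as 120 - I*R)
```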
Question 12. In a potentiometer arrangement, a cell of emf $1.25 \mathrm{~V}$ gives a balance point at $35.0 \mathrm{~cm}$ length of the wire. If the cell is replaced by another cell and the balance point shifts to $63.0 \mathrm{~cm}$, what is the emf of the second cell?
Solution. Given:
Emf of the cell, $E_{1}$ is $1.25 \mathrm{~V}$
The Balance point of the potentiometer, $l_{1}$ is $35 \mathrm{~cm}$
The cell is substituted by another cell of $\mathrm{emf} E_{2}$
The new balance point of the potentiometer, $l_{2}$ is $63 \mathrm{~cm}$
The balance condition is given by the relationship,
$\frac{E_{1}}{E_{2}}=\frac{l_{1}}{l_{2}}$
$E_{2}=E_{1} \times \frac{l_{2}}{l_{1}}$
$=1.25 \times \frac{63}{35}=2.25 \mathrm{~V}$
Emf of the second cell is $2.25 \mathrm{~V}$.
Question 13. The number density of free electrons in a copper conductor estimated in Example $3.1$ is $8.5 \times 10^{28} \mathrm{~m}^{-3}$. How long does an electron take to drift from one end of a wire $3.0 \mathrm{~m}$ long to its other end? The area of cross-section of the wire is $2.0 \times 10^{-6} \mathrm{~m}^{2}$ and it is carrying a current of $3.0 \mathrm{~A}$.
Solution. Given:
The number density in a copper conductor of free electrons, $n$ is $8.5 \times 10^{28} \mathrm{~m}^{-3}$
Length of the copper wire, $l$, is $3.0 \mathrm{~m}$
Area of a cross-section of the wire, $A$ is $2.0 \times 10^{-6} \mathrm{~m}^{2}$
The Current carried by the wire, $I=3.0 \mathrm{~A}$,
$I=n A e V_{d}$
Where,
$\mathrm{e}=$ Electric charge $=1.6 \times 10^{-19} \mathrm{C}$
$V_{d}=$ Drift velocity $=\frac{\text { Length of the wire }(l)}{\text { Time taken to cover }(\mathrm{t})}$
$I=n A e \frac{l}{t}$
$t=\frac{n A e l}{I}$
$=\frac{3 \times 8.5 \times 10^{28} \times 2 \times 10^{-6} \times 1.6 \times 10^{-19}}{3.0}$
$=2.7 \times 10^{4} \mathrm{~s}$
The Time taken by an electron to drift from one end of the wire to the other is given by $2.7 \times 10^{4} \mathrm{~s}$
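A hedged one-line check of the drift-time estimate (variable names are illustrative):

```python
n, A, e, l, I = 8.5e28, 2.0e-6, 1.6e-19, 3.0, 3.0
t = n * A * e * l / I
print(t)   # ~2.7e4 s, i.e. roughly 7.5 hours
```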
Question 14. The earth’s surface has a negative surface charge density of $10^{-9} \mathrm{C} \mathrm{m}^{-2}$. The potential difference of $400 \mathrm{kV}$ between the top of the atmosphere and the surface results (due to the low conductivity of the lower atmosphere) in a current of only 1800 A over the entire globe. If there were no mechanism of sustaining atmospheric electric field, how much time (roughly) would be required to neutralise the earth’s surface? (This never happens in practice because there is a mechanism to replenish electric charges, namely the continual thunderstorms and lightning in different parts of the globe). (Radius of earth $=6.37 \times 10^{6} \mathrm{~m}$.)
Solution. Given:
The Surface charge density of the earth $(\sigma)$ is $10^{-9} \mathrm{C} \mathrm{m}^{-2}$
The Current over the entire globe $(I)$ is $1800 \mathrm{~A}$
The Radius of the earth $(r)$ is $6.37 \times 10^{6} \mathrm{~m}$
The Surface area of the earth,
$A=4 \pi r^{2}$
$=4 \pi \times\left(6.37 \times 10^{6}\right)^{2}$
$=5.09 \times 10^{14} \mathrm{~m}^{2}$
Charge on the earth surface,
$q=\sigma \times A$
$=10^{-9} \times 5.09 \times 10^{14}$
$=5.09 \times 10^{5} \mathrm{C}$
Time taken to neutralize the earth’s surface $=t$
Current, $I=\frac{q}{t}$
$t=\frac{q}{I}$
$=\frac{5.09 \times 10^{5}}{1800}=282.77 \mathrm{~s}$
The Time taken to neutralize the earth’s surface is $282.77 \mathrm{~s}$.
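The same estimate in a short Python sketch (an illustrative check only):

```python
import math

sigma, I, r = 1e-9, 1800.0, 6.37e6   # C/m^2, A, m
q = sigma * 4 * math.pi * r**2       # total surface charge
print(q, q / I)                      # ~5.09e5 C and ~283 s to neutralise
```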
Question 15. (a) Six lead-acid type of secondary cells each of emf $2.0 \mathrm{~V}$ and internal resistance $0.015 \Omega$ are joined in series to provide a supply to the resistance of $8.5 \Omega$. What is the current drawn from the supply and its terminal voltage?
(b) A secondary cell after long use has an emf of $1.9 \mathrm{~V}$ and a large internal resistance of $380 \Omega$. What maximum current can be drawn from the cell? Could the cell drive the starting motor of a car?
Solution: Given:
(a) $\quad$ Number of secondary cells, $n$ is 6
Emf of each secondary cell, $E$ is $2.0 \mathrm{~V}$
The internal resistance of each cell, $r$ is $0.015 \Omega$
The Series resistor is connected to the combination of cells.
The Resistance of the resistor, $R$ is $8.5 \Omega$
Current drawn from the supply $=I$, which is given by the relation,
$I=\frac{n E}{R+n r}$
$=\frac{6 \times 2}{8.5+6 \times 0.015}$ $=\frac{12}{8.59}=1.39 \mathrm{~A}$
Terminal voltage, $V=I R=1.39 \times 8.5=11.87 \mathrm{~V}$
A current of $1.39 \mathrm{~A}$ is drawn from the supply and the terminal voltage is $11.87 \mathrm{~V}$.
(b) After a long use, emf of the secondary cell, $E$ is $1.9 \mathrm{~V}$
The Internal resistance of the cell, $r$ is $380 \Omega$
Maximum current that can be drawn from the cell, $I_{\max }=\frac{E}{r}=\frac{1.9}{380}=0.005 \mathrm{~A}$
Since a large current is required to start the motor of a car, it is not possible to use the cell to start a motor.
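A hedged arithmetic check for both parts (the names are mine, not from the text):

```python
n, E, r, R = 6, 2.0, 0.015, 8.5
I = n * E / (R + n * r)     # current drawn from the series combination
print(I, I * R)             # ~1.39 A and terminal voltage ~11.87 V
print(1.9 / 380)            # part (b): maximum current from the worn cell, 0.005 A
```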
Question 16. Two wires of equal length, one of aluminium and the other of copper, have the same resistance. Which of the two wires is lighter? Hence explain why aluminium wires are preferred for overhead power cables $\left(\rho_{\mathrm{Al}}=2.63 \times 10^{-8} \Omega \mathrm{m}, \quad \rho_{\mathrm{Cu}}=\right.$ $1.72 \times 10^{-8} \Omega \mathrm{m}$, Relative density of $\mathrm{Al}=2.7$, of $\mathrm{Cu}=8.9$ )
Solution. Given:
Resistivity of aluminium $\rho_{A l}$ is $2.63 \times 10^{-8} \Omega \mathrm{m}$
The Relative density of aluminium, $d_{1}$, is $2.7$
Let $l_{1}$ be the length of aluminium wire and $m_{1}$ be its mass.
The Resistance of the aluminium wire is $R_{1}$
Area of a cross-section of the aluminium wire is $A_{1}$
The Resistivity of copper, $\rho_{C u}=1.72 \times 10^{-8} \Omega \mathrm{m}$
Let $l_{2}$ be the length of copper wire and $m_{2}$ be its mass.
The Resistance of the copper wire is $R_{2}$
Area of the cross-section of the copper wire is $A_{2}$
The two relations can be written as
$R_{1}=\rho_{1} \frac{l_{1}}{A_{1}}$$\ldots(1)$
$R_{2}=\rho_{2} \frac{l_{2}}{A_{2}}$…(2)
It is given that $R_{1}=R_{2} \Rightarrow \rho_{1} \frac{l_{1}}{A_{1}}=\rho_{2} \frac{l_{2}}{A_{2}}$
$\Rightarrow \frac{A_{1}}{A_{2}}=\frac{\rho_{1}}{\rho_{2}}=\frac{2.63 \times 10^{-8}}{1.72 \times 10^{-8}}=\frac{2.63}{1.72}$
Mass of the aluminium wire, $m_{1}=$ volume $\times$ Density
$m_{1}=A_{1} l_{1} \times d_{1}=A_{1} l_{1} d_{1} \ldots(3)$
Mass of the copper wire, $m_{2}=$ volume $\times$ Density
$m_{2}=A_{2} l_{2} \times d_{2}=A_{2} l_{2} d_{2} \ldots(4)$
By Dividing equation (3) by equation (4), we obtain
$\frac{m_{1}}{m_{2}}=\frac{A_{1} l_{1} d_{1}}{A_{2} l_{2} d_{2}}$
For $l_{1}=l_{2}$
$\frac{m_{1}}{m_{2}}=\frac{A_{1} d_{1}}{A_{2} d_{2}}$
For $\frac{A_{1}}{A_{2}}=\frac{2.63}{1.72}$
$\frac{m_{1}}{m_{2}}=\frac{2.63}{1.72} \times \frac{2.7}{8.9}=0.46$
It can be inferred from this ratio that $m_{1}$ is less than $m_{2}$. Hence, the aluminium wire is lighter than a copper wire of the same length and resistance. Because it is lighter, aluminium is preferred over copper for overhead power cables.
Question 17. What conclusion can you draw from the following observations on a resistor made of alloy manganin?
Solution.
From the given table, it can be inferred that the voltage-to-current ratio is approximately constant and equal to 19.7. Therefore, manganin is an ohmic conductor, i.e., the alloy obeys Ohm’s law. According to Ohm’s law, this constant ratio of voltage to current is the conductor’s resistance. Therefore, the resistance of the manganin resistor is $19.7 \Omega$.
Question 18. Answer the following questions:
(a) A steady current flows in a metallic conductor of non-uniform cross-section. Which of these quantities is constant along the conductor: current, current density, electric field, drift speed?
(b) Is Ohm’s law universally applicable for all conducting elements?
(c) A low voltage supply from which one needs high currents must have very low internal resistance. Why?
(d) A high tension (HT) supply of, say, $6 \mathrm{kV}$ must have a very large internal resistance. Why?
Solution: (a) Only the current is constant when a steady current flows in a metallic conductor of non-uniform cross-section. Current density, drift speed and electric field are inversely proportional to the area of cross-section, and hence they are not constant along the conductor.
(b) Ohm’s law cannot be applied universally to all conducting elements. For example, it is not applicable to vacuum diodes and semiconductor devices.
(c) By Ohm’s law, the relation for the potential is $V=I R$
Voltage $(V)$ is directly proportional to the Current (I).
$R$ is the internal resistance of the source.
$I=\frac{V}{R}$
If $V$ is low, then $R$ must be very low, so that high current can be drawn from the source.
(d) To prevent the electrical current from exceeding the safety threshold, a high tension supply must have a very high internal resistance. If the internal resistance is not high, the current drawn may exceed the safety limits in the case of a short circuit.
Question 19. Choose the correct alternative:
(a) Alloys of metals usually have (greater/less) resistivity than that of their constituent metals.
(b) Alloys usually have much (lower/higher) temperature coefficients of resistance than pure metals.
(c) The resistivity of the alloy manganin is nearly independent of/increases rapidly with the increase of temperature.
(d) The resistivity of a typical insulator (e.g., amber) is greater than that of a metal by a factor of the order of $\left(10^{22} / 10^{33}\right)$.
Solution: (a) Alloys of metals usually have greater resistivity than that of their constituent metals.
(b) Alloys usually have lower temperature coefficients of resistance than pure metals.
(c) The resistivity of the alloy manganin is nearly independent of the increase of temperature.
(d) The resistivity of a typical insulator is greater than that of a metal by a factor of the order of $10^{22}$.
Question 20. (a) Given $n$ resistors each of resistance $R$, how will you combine them to get the (i) maximum (ii) minimum effective resistance? What is the ratio of the maximum to minimum resistance?
(b) Given the resistances of $1 \Omega, 2 \Omega, 3 \Omega$, how will you combine them to get an equivalent resistance of (i) $(11 / 3) \Omega$ (ii) $(11 / 5) \Omega$, (iii) $6 \Omega$, (iv) $(6 / 11) \Omega ?$
(c) Determine the equivalent resistance of networks shown in Fig.
Solution: Given:
(a) $\quad$ Total number of resistors $=n$
The resistance of each resistor $=R$
(i) The effective resistance $R_{1}$ is maximum when the $n$ resistors are connected in series, and is given by the product $n R$.
Hence, maximum resistance of the combination, $R_{1}=n R$
(ii) When $n$ resistors are connected in parallel, the effective resistance $\left(R_{2}\right)$ is the minimum, given by the ratio $\frac{R}{n}$
Hence, minimum resistance of the combination, $R_{2}=\frac{R}{n}$
(iii) The ratio of the maximum to the minimum resistance is,
$\frac{R_{1}}{R_{2}}=\frac{n R}{\frac{R}{n}}=n^{2}$
(b) The resistance of the given resistors is,
$R_{1}=1 \Omega, R_{2}=2 \Omega, R_{3}=3 \Omega$
(i) Equivalent resistance, $R^{\prime}=\frac{11}{3} \Omega$
Considering the following combination of the resistors.
The Equivalent resistance of the circuit is given by,
$R^{\prime}=\frac{2 \times 1}{2+1}+3=\frac{2}{3}+3=\frac{11}{3} \Omega$
(ii) Equivalent resistance, $R^{\prime}=\frac{11}{5} \Omega$
Considering the following combination of the resistors.
The Equivalent resistance of the circuit is given by,
$R^{\prime}=\frac{2 \times 3}{2+3}+1=\frac{6}{5}+1=\frac{11}{5} \Omega$
(iii) Equivalent resistance, $R^{\prime}=6 \Omega$
The series combination of the resistors is as shown in the given circuit.
The Equivalent resistance of the circuit is given by the sum,
$R^{\prime}=1+2+3=6 \Omega$
(iv) Equivalent resistance, $R^{\prime}=\frac{6}{11} \Omega$
Considering the parallel combination of the resistors, as shown in the given circuit.
The Equivalent resistance of the circuit is given by,
$R^{\prime}=\frac{1 \times 2 \times 3}{1 \times 2+2 \times 3+3 \times 1}=\frac{6}{11} \Omega$
(c) It can be observed from the given circuit that in the first small loop, two resistors of resistance $1 \Omega$ each are connected in series.
Hence, their equivalent resistance $=(1+1)=2 \Omega$
Therefore, the circuit can be redrawn as
It can be observed that $2 \Omega$ and $4 \Omega$ resistors are connected in parallel in all the four loops.
Hence, the equivalent resistance $\left(R^{\prime}\right)$ of each loop is given by,
$R^{\prime}=\frac{2 \times 4}{2+4}=\frac{8}{6}=\frac{4}{3} \Omega$
The circuit reduces to.
All the four resistors are connected in series.
Hence, the equivalent resistance of the given circuit is $\frac{4}{3} \times 4=\frac{16}{3} \Omega$
(b) The five resistors of resistance $R$ each are connected in series; hence, the equivalent resistance of the circuit $=R+R+R+R+R=5 R$
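The series/parallel reductions used throughout this answer can be verified with two tiny helper functions (an illustrative sketch, not part of the original solution):

```python
def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

print(parallel(1, 2) + 3)    # (i)   11/3 ohm
print(parallel(2, 3) + 1)    # (ii)  11/5 ohm
print(series(1, 2, 3))       # (iii) 6 ohm
print(parallel(1, 2, 3))     # (iv)  6/11 ohm
print(4 * parallel(2, 4))    # (c)   four loops of 2 ohm || 4 ohm in series: 16/3 ohm
```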
Question 21. Determine the current drawn from a $12 \mathrm{~V}$ supply with internal resistance $0.5 \Omega$ by the infinite network shown in Fig. Each resistor has $1 \Omega$ resistance.
Solution. Given:
The resistance of each resistor connected in the circuit, $R$ is $1 \Omega$
Let the equivalent resistance of the given circuit be $R^{\prime}$. Since the network is infinite, adding one more repeating segment does not change the equivalent resistance.
Hence, the equivalent resistance is given by the relation,
$R^{\prime}=2+\frac{R^{\prime}}{\left(R^{\prime}+1\right)}$
$\Rightarrow\left(R^{\prime}\right)^{2}-2 R^{\prime}-2=0$
$R^{\prime}=\frac{2 \pm \sqrt{4+8}}{2} \Omega$
$=\frac{2 \pm \sqrt{12}}{2}=(1 \pm \sqrt{3}) \Omega$
Negative value of $R^{\prime}$ cannot be accepted.
Hence, equivalent resistance,
$R^{\prime}=(1+\sqrt{3}) \Omega=(1+1.73) \Omega=2.73 \Omega$
The Internal resistance of the circuit,
$r=0.5 \Omega$
The total resistance of the given circuit is given by
$=2.73+0.5=3.23 \Omega$
Supply voltage, $V=12 \mathrm{~V}$
Current drawn from the source, according to Ohm’s law, is $\frac{12}{3.23}=3.72 \mathrm{~A}$
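A short, hedged check of the infinite-ladder result (the fixed-point equation and the current drawn):

```python
import math

Rp = 1 + math.sqrt(3)                          # positive root of R'^2 - 2R' - 2 = 0
assert abs(Rp - (2 + Rp / (Rp + 1))) < 1e-12   # satisfies R' = 2 + R'/(R' + 1)
print(round(Rp, 2), 12 / (Rp + 0.5))           # ~2.73 ohm and ~3.7 A
```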
Question 22. The figure shows a potentiometer with a cell of $2.0 \mathrm{~V}$ and internal resistance $0.40 \Omega$ maintaining a potential drop across the resistor wire AB. A standard cell which maintains a constant emf of $1.02 \mathrm{~V}$ (for very moderate currents up to a few $\mathrm{mA}$ ) gives a balance point at $67.3 \mathrm{~cm}$ length of the wire. To ensure very low currents drawn from the standard cell, very high resistance of $600 \mathrm{k} \Omega$ is put in series with it, which is shorted close to the balance point. The standard cell is then replaced by a cell of unknown emf $\varepsilon$, and the balance point found similarly, turns out to be at $82.3 \mathrm{~cm}$ length of the wire.
(a) What is the value $\varepsilon$ ?
(b) What purpose does the high resistance of $600 \mathrm{k} \Omega$ have?
(c) Is the balance point affected by this high resistance?
(d) Is the balance point affected by the internal resistance of the driver cell?
(e) Would the method work in the above situation if the driver cell of the potentiometer had an emf of $1.0 \mathrm{~V}$ instead of $2.0 \mathrm{~V} ?$
(f) Would the circuit work well for determining an extremely small emf, say of the order of a few $\mathrm{mV}$ (such as the typical emf of a thermocouple)? If not, how will you modify the circuit?
Solution. Given:
(a) A Constant emf of the given standard cell, $E_{1}$ is $1.02 \mathrm{~V}$
The Balance point on the wire, $l_{1}$ is $67.3 \mathrm{~cm}$
The standard cell is replaced by a cell of unknown emf, $\varepsilon$
$\therefore$ The new balance point on the wire, $l$ is $82.3 \mathrm{~cm}$
The relation connecting balance point and emf is,
$\frac{E_{1}}{l_{1}}=\frac{\varepsilon}{l}$
$\varepsilon=\frac{l}{l_{1}} \times E_{1}=\frac{82.3}{67.3} \times 1.02=1.247 \mathrm{~V}$
The value of the unknown emf is $1.247 \mathrm{~V}$.
(b) High resistance of $600 \mathrm{k} \Omega$ is used to lower the current flow through the galvanometer when the movable contact is far from the balance point.
(c) The presence of high resistance does not affect the balance point.
(d) The internal resistance of the driver cell does not affect the balance point.
(e) No. If the driver cell of the potentiometer had an emf of $1.0 \mathrm{~V}$ instead of $2.0 \mathrm{~V}$, the potential drop across the wire would be less than the emfs of the cells being measured, so there would be no balance point on the wire and the method would not work.
(f) The circuit cannot work well for determining an extremely small emf, because the balance point would lie very close to end $\mathrm{A}$ and the percentage error in locating it would be very high.
The circuit can be modified by connecting a suitable series resistance with the wire $\mathrm{AB}$, so that the potential drop across $\mathrm{AB}$ is only slightly greater than the emf to be measured. The balance point then occurs over a larger length of the wire and the percentage error becomes small.
Question 23. The figure shows a $2.0 \mathrm{~V}$ potentiometer used for the determination of internal resistance of a $1.5 \mathrm{~V}$ cell. The balance point of the cell in open circuit is $76.3 \mathrm{~cm}$. When a resistor of $9.5 \Omega$ is used in the external circuit of the cell, the balance point shifts to $64.8 \mathrm{~cm}$ length of the potentiometer wire. Determine the internal resistance of the cell.
Solution. Given:
The internal resistance of the cell is $\mathrm{r}$
The balance point of the cell in the open circuit, $l_{1}$ is $76.3 \mathrm{~cm}$
An external resistance of resistance $\mathrm{R}=9.5 \Omega$ is connected to the circuit.
The new balance point of the circuit, $l_{2}$ is $64.8 \mathrm{~cm}$
When the external resistance $R$ is connected, the internal resistance is related to the balance lengths by the relation,
$r=\left(\frac{l_{1}-l_{2}}{l_{2}}\right) R$
$=\frac{76.3 \mathrm{~cm}-64.8 \mathrm{~cm}}{64.8 \mathrm{~cm}} \times 9.5 \Omega=1.68 \Omega$
The internal resistance of the cell is $1.68 \Omega$. |
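A final hedged check of the internal-resistance calculation (illustrative only):

```python
l1, l2, R = 76.3, 64.8, 9.5
r = (l1 - l2) / l2 * R
print(round(r, 2))   # ~1.69 ohm, consistent with the value quoted above
```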
# 1 Overview
In this vignette, we provide a brief overview of the ChIPexoQual package. This package provides a statistical quality control (QC) pipeline that enables the exploration and analysis of ChIP-exo/nexus experiments. In this vignette we used the reads aligned to chr1 in the mouse liver ChIP-exo experiment (Serandour et al. 2013) to illustrate the use of the pipeline. To load the packages we use:
library(ChIPexoQual)
library(ChIPexoQualExample)
ChIPexoQual takes a set of aligned reads from a ChIP-exo (or ChIP-nexus) experiment as input and performs the following steps:
2. Compute $$D_i$$, number of reads in island $$i$$, and $$U_i$$, number of island $$i$$ positions with at least one aligning read, $$i=1, \cdots, I$$.
• For each island $$i$$, $$i=1, \cdots, I$$, compute the island statistics: \begin{align*} \mbox{ARC}_i = \frac{D_i}{W_i}, \quad \mbox{URC}_i = \frac{U_i}{D_i}, \quad \mbox{FSR}_i = \frac{\text{number of forward-strand reads aligning to island } i}{D_i}, \end{align*}
where $$W_i$$ denotes the width of island $$i$$.
3. Generate diagnostic plots (i) URC vs. ARC plot; (ii) Region Composition plot; (iii) FSR distribution plot.
4. Randomly sample $$M$$ (at least 1000) islands and fit, \begin{align*} D_i = \beta_1 U_i + \beta_2 W_i + \varepsilon_i, \end{align*} where $$\varepsilon_i$$ denotes the independent error term. Repeat this process $$B$$ times and generate box plots of estimated $$\beta_1$$ and $$\beta_2$$.
We analyzed a larger collection of ChIP-exo/nexus experiments in (Welch et al. 2016), including complete versions of these samples.
# 2 Creating an ExoData object
The minimum input to ChIPexoQual is the set of aligned reads of a ChIP-exo/nexus experiment. ChIPexoQual accepts either the name of the bam file or the reads in a GAlignments object:
files = list.files(system.file("extdata",
package = "ChIPexoQualExample"),full.names = TRUE)
basename(files[1])
## [1] "ChIPexo_carroll_FoxA1_mouse_rep1_chr1.bam"
ex1 = ExoData(file = files[1],mc.cores = 2L,verbose = FALSE)
ex1
## ExoData object with 655785 ranges and 11 metadata columns:
## <Rle> <IRanges> <Rle> | <integer> <integer>
## [1] chr1 [3000941, 3000976] * | 2 0
## [2] chr1 [3001457, 3001492] * | 0 1
## [3] chr1 [3001583, 3001618] * | 0 2
## [4] chr1 [3001647, 3001682] * | 1 0
## [5] chr1 [3001852, 3001887] * | 1 0
## ... ... ... ... . ... ...
## [655781] chr1 [197192012, 197192047] * | 0 1
## [655782] chr1 [197192421, 197192456] * | 0 1
## [655783] chr1 [197193059, 197193094] * | 1 0
## [655784] chr1 [197193694, 197193729] * | 0 3
## [655785] chr1 [197194986, 197195021] * | 0 2
## fwdPos revPos depth uniquePos ARC
## <integer> <integer> <integer> <integer> <numeric>
## [1] 1 0 2 1 0.0555555555555556
## [2] 0 1 1 1 0.0277777777777778
## [3] 0 1 2 1 0.0555555555555556
## [4] 1 0 1 1 0.0277777777777778
## [5] 1 0 1 1 0.0277777777777778
## ... ... ... ... ... ...
## [655781] 0 1 1 1 0.0277777777777778
## [655782] 0 1 1 1 0.0277777777777778
## [655783] 1 0 1 1 0.0277777777777778
## [655784] 0 1 3 1 0.0833333333333333
## [655785] 0 1 2 1 0.0555555555555556
## URC FSR M A
## <numeric> <numeric> <numeric> <numeric>
## [1] 0.5 1 -Inf Inf
## [2] 1 0 -Inf -Inf
## [3] 0.5 0 -Inf -Inf
## [4] 1 1 -Inf Inf
## [5] 1 1 -Inf Inf
## ... ... ... ... ...
## [655781] 1 0 -Inf -Inf
## [655782] 1 0 -Inf -Inf
## [655783] 1 1 -Inf Inf
## [655784] 0.333333333333333 0 -Inf -Inf
## [655785] 0.5 0 -Inf -Inf
## -------
## seqinfo: 1 sequence from an unspecified genome; no seqlengths
reads = readGAlignments(files[1],param = NULL)
ex2 = ExoData(reads = reads,mc.cores = 2L,verbose = FALSE) ## assumed call; builds the same object from the GAlignments reads
identical(GRanges(ex1),GRanges(ex2))
## [1] TRUE
For the rest of the vignette, we generate an ExoData object for each replicate:
files = files[grep("bai",files,invert = TRUE)] ## ignore index files
exampleExoData = lapply(files,ExoData,mc.cores = 2L,verbose = FALSE)
Finally, we can recover the number of reads that compose an ExoData object by using the nreads function:
sapply(exampleExoData,nreads)
## [1] 1654985 1766665 1670117
## 2.1 Enrichment analysis and library complexity:
To create the ARC vs. URC plot proposed in (Welch et al. 2016), we use the ARCvURCplot function. This function allows us to visually compare different samples:
ARCvURCplot(exampleExoData,names.input = paste("Rep",1:3,sep = "-"))
This plot typically exhibits one of the following three patterns for any given sample. In all three panels we can observe two arms: the first with low Average Read Coefficient (ARC) and varying Unique Read Coefficient (URC), and the second where the URC decreases as the ARC increases. The first and third replicates exhibit a well-defined decreasing trend in URC as the ARC increases, which indicates that these samples have higher ChIP enrichment than the second replicate. On the other hand, the overall URC level of the first two replicates is higher than that of the third replicate, indicating that the libraries of the first two replicates are more complex than that of the third.
## 2.2 Strand imbalance
To create the FSR distribution and Region Composition plots suggested in Welch et al. 2016 (submitted), we use the FSRDistplot and regionCompplot functions, respectively.
p1 = regionCompplot(exampleExoData,names.input = paste("Rep",1:3,
sep = "-"),depth.values = seq_len(50))
p2 = FSRDistplot(exampleExoData,names.input = paste("Rep",1:3,sep = "-"),
quantiles = c(.25,.5,.75),depth.values = seq_len(100))
gridExtra::grid.arrange(p1,p2,nrow = 1)
The left panel displays the Region Composition plot and the right panel shows the Forward Strand Ratio (FSR) distribution plot, both of which highlight specific problems with replicates 2 and 3. The Region Composition plot exhibits apparent decreasing trends in the proportion of regions formed by fragments aligned exclusively to one strand. High-quality experiments tend to show an exponential decay in the proportion of single-stranded regions, while for lower-quality experiments the trend may be linear or even constant. The FSR distributions of both replicates 2 and 3 are more spread around their respective medians. The rate at which the FSR distribution concentrates around the median reflects the aforementioned lower enrichment in the second replicate and the low complexity in the third one. The asymmetric behavior of the second replicate is characteristic of low enrichment, while the nearly constant values of replicate 3 at low minimum numbers of reads indicate that this replicate has islands composed of reads aligned to very few unique positions.
### 2.2.1 Further exploration of ChIP-exo data
All the plot functions in ChIPexoQual accept a list of ExoData objects or several separate ExoData objects. This makes it possible to explore island subsets for each replicate. For example, to show that the first arm is composed of regions formed by reads aligned to few positions, we can generate the following plot:
ARCvURCplot(exampleExoData[[1]],
subset(exampleExoData[[1]],uniquePos > 10),
subset(exampleExoData[[1]],uniquePos > 20),
names.input = c("All", "uniquePos > 10", "uniquePos > 20"))
For this figure, we used the ARC vs URC plot to show how several of the regions with low ARC values are composed by reads that align to a small number of unique positions. This technique highlights a strategy that can be followed to further explore the data, as with all the previously listed plotting functions we may compare different subsets of the islands in the partition.
## 2.3 Quality evaluation
The last step of the quality control pipeline is to evaluate the linear model:
\begin{align*} D_i = \beta_1 U_i + \beta_2 W_i + \varepsilon_i, \end{align*}
The distribution of the parameters of this model is built by sampling nregions regions (the default value is 1,000), fitting the model and repeating the process ntimes (the default value is 100). We visualize the distributions of the parameters with box-plots:
p1 = paramDistBoxplot(exampleExoData,which.param = "beta1", names.input = paste("Rep",1:3,sep = "-"))
p2 = paramDistBoxplot(exampleExoData,which.param = "beta2", names.input = paste("Rep",1:3,sep = "-"))
gridExtra::grid.arrange(p1,p2,nrow = 1) |
# Neural Networks for k step ahead time series forecasting
I am looking into neural networks and had a conceptual question about time series forecasting.
Let's say I have hourly temperature measurements at a given location for several months. My goal would be to forecast, from a time t, the expected temperature for the next k hours. Which of the following architectures would be the best/recommended/feasible?
1. The input of the neural network is n values in the past from a time t : $[y_t,y_{t-1}, ...,y_{t-n+1}]$ and my output is k nodes representing the values in the future: $[y_{t+1},y_{t+2},....,y_{t+k}]$ Different n would be tested and historical data would be used to train the NN.
2. The input is the same n values in the past, but this time k different neural networks would be trained, each for a specific time step from 1 to k.
1st neural network $[y_t,y_{t-1}, ...,y_{t-n+1}] => y_{t+1}$
2nd neural network $[y_t,y_{t-1}, ...,y_{t-n+1}] => y_{t+2}$
etc.
Each network would be trained separately on the historical data and all k networks would be used with the same input to produce $[y_{t+1},y_{t+2},....,y_{t+k}]$
3. A single neural network is trained to produce only 1h ahead forecast $[y_t,y_{t-1}, ...,y_{t-n+1}]=>y_{t+1}$ To predict k values in the future, the neural network is used iteratively with the forecasted value used as an input at the next step, as such:
1st step $[y_t,y_{t-1}, ...,y_{t-n+1}] => \hat{y}_{t+1}$
2nd step $[\hat{y}_{t+1},y_t,y_{t-1}, ...,y_{t-n+2}] => \hat{y}_{t+2}$
3rd step $[\hat{y}_{t+2},\hat{y}_{t+1},y_t,...,y_{t-n+3}] => \hat{y}_{t+3}$ etc.
I have the feeling that the 1st method would be very hard to train because of the large number of inputs and outputs. The first hour ahead should be more correlated to the past values in time and thus easier to forecast, conversely as k becomes large the correlation between the past and future values becomes smaller and thus harder to predict. A single NN architecture combining all k hours would thus perform poorly overall as the later hours might penalize the overall behaviour.
The second architecture might compensate that as the neural networks for the first few times ahead might be performant while the later ones will not. Knowing that could be somewhat useful.
As the third architecture only uses one neural network for the 1h-ahead forecast, we can expect this NN to be the most performant of the k networks from the second architecture, so its output could be considered correct enough to be used as if it were the real value and fed back as an input for the next time step. This assumption is of course not true, but perhaps for a certain number of steps k the deviation would not be too important.
Those are the 3 options I have thought about; are they somewhat correct, or is there a fundamental logic behind neural networks which I haven't grasped? The literature I have found on the subject didn't go into detail on how to predict more than one step into the future.
• Particularly due to the boot-stepping process reflecting the auto-dependence of forecasts NEVER MIND the possibility of anomalies arising in the future AND the need to incorporate uncertainties in future values of predictors. The small print taketh away! – IrishStat Aug 25 '17 at 21:30
Assuming you keep the size of the network fixed across your three proposed architectures (with #2 having k times as many parameters overall), I expect #2 to give the most accurate results, since all parameters in the network are being used to make a single prediction; however, this comes at the cost of requiring a k-times larger memory footprint and k-times more training time, which is unlikely to be worth the marginal increase in accuracy.
#3 is the most elegant, and the most likely to produce smooth, aesthetically-pleasing results, but as Nate Diamond points out below, this approach will compound prediction errors, eventually leading to unrealistic predictions for large values of k.
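For concreteness, here is a minimal sketch of that recursive scheme (my own illustration, not from the question; `model.predict` stands in for any fitted one-step-ahead regressor with a scikit-learn-like interface):

```python
import numpy as np

def recursive_forecast(model, history, n, k):
    # history: observed series, n: number of lags fed to the model, k: forecast horizon
    window = list(history[-n:])
    preds = []
    for _ in range(k):
        x = np.array(window[-n:]).reshape(1, -1)   # most recent n values as one sample
        y_next = float(model.predict(x)[0])
        preds.append(y_next)
        window.append(y_next)                      # the forecast is fed back in as an input
    return preds
```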
If you make the network large enough, and use an appropriate loss function (see below), then #1 is likely to be your best bet. Your concern about the network being difficult to train due to the "large number of inputs and outputs" is largely unwarranted, as new techniques used in training such as ReLU's, batch-norm, and ADAM eliminate many of the problems previously encountered when training very large networks. As for your concern about the high-variance errors in the large-k predictions swamping the (more consistent) error signals coming from the small-k predictions, this can be mitigated by using a loss function that accounts for component-wise variance. For example, instead of the standard RMSE loss: $$\sqrt{\frac{\sum_{i=1}^m\sum_{j=1}^k (\hat{y}^{(i)}_j - y^{(i)}_j)^2}{m}}$$ you could use a variant of RMSE which weights the error in each component inversely proportional to the variance of its errors across the previous mini-batch: $$\sqrt{\frac{\sum_{i=1}^m\sum_{j=1}^k \frac{1}{\sigma^2_j}(\hat{y}^{(i)}_j - y^{(i)}_j)^2}{m}}$$ Where $$m = \text{size of the minibatch}$$ $$\sigma^2_j = \text{variance of the errors in the } j^{th} \text{ component over the previous minibatch}$$ $$\bullet_j^{(i)} = \text{value of the } j^{th} \text{ component of the } i^{th} \text{ sample}$$
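A hedged sketch of that variance-weighted loss (the function name and the epsilon safeguard are mine):

```python
import numpy as np

def variance_weighted_rmse(y_hat, y, prev_err_var, eps=1e-8):
    # y_hat, y: (m, k) arrays; prev_err_var: (k,) error variances from the previous mini-batch
    w = 1.0 / (prev_err_var + eps)                      # down-weight horizons with noisy errors
    return np.sqrt(np.sum(w * (y_hat - y) ** 2) / y.shape[0])
```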
There is also another option not on your list which you may want to consider, namely using a recurrent neural network architecture such as seq-2-seq which allows for variable-length inputs and outputs: https://github.com/guillaume-chevalier/seq2seq-signal-prediction
• Number 2 will potentially give pretty "spikey" results, as the forecasts are all independent. So one network that predicts a spike in step 3 may not be predicted or handled by networks 4+. Repeat this for every interval. – Nate Diamond Aug 26 '17 at 0:13
• If we're talking about minimizing something along the lines of total RMS error of the predicted vs the actual values, then I don't see how it couldn't be better. But yes, it will probably be more jagged-looking – jon_simon Aug 26 '17 at 0:15
• The point is, claiming "the best results" may not be accurate. It matters what the use case is. Numbers 1 and 3 don't have this problem to the same degree (though Number 3 has other problems, like carrying forward errors). – Nate Diamond Aug 26 '17 at 0:18
• Additionally, just saying "an LSTM network" doesn't answer why or how it helps. If you're suggesting that OP use an RNN to output a sequence of values using something like seq2seq, then that is something for OP to consider. Additionally, the link you provide contains a paper describing an "LSTM Network", but only in the context of behavior/anomaly detection and not forecasting. – Nate Diamond Aug 26 '17 at 0:22
• Fair point, I updated my answer – jon_simon Aug 26 '17 at 1:10 |
## Differential and Integral Equations
### Local well-posedness for quadratic nonlinear Schrödinger equations and the "good" Boussinesq equation
#### Abstract
The Cauchy problem for 1-D nonlinear Schrödinger equations with quadratic nonlinearities is considered in the spaces $H^{s,a}$ defined by $\| f \|_{H^{s,a}}=\| (1+|\xi|)^{s-a} |\xi|^a \widehat{f} \|_{L^2},$ and sharp local well-posedness and ill-posedness results are obtained in these spaces for nonlinearities including the term $u\bar{u}$. In particular, when $a=0$ the previous well-posedness result in $H^s$, $s>-1/4$, given by Kenig, Ponce and Vega (1996), is improved to $s\ge -1/4$. This also extends the result in $H^{s,a}$ by Otani (2004). The proof is based on an iteration argument similar to that of Kenig, Ponce and Vega, with a modification of the spaces of the Fourier restriction norm. Our result is also applied to the "good" Boussinesq equation and yields local well-posedness in $H^s\times H^{s-2}$ with $s>-1/2$, which is an improvement of the previous result given by Farah (2009).
#### Article information
Source
Differential Integral Equations, Volume 23, Number 5/6 (2010), 463-493.
Dates
First available in Project Euclid: 20 December 2012
Kishimoto, Nobu; Tsugawa, Kotaro. Local well-posedness for quadratic nonlinear Schrödinger equations and the "good" Boussinesq equation. Differential Integral Equations 23 (2010), no. 5/6, 463--493. https://projecteuclid.org/euclid.die/1356019307
Average Questions for bank and all sarkari naukri examinations.
## Average Questions for IBPS and SSC Exams
Basic Concepts of Average:
What is an average?
In simple terms, averages usually refer to the sum of given numbers divided by the total number of terms listed.
Averages = (sum of all terms)/ number of terms
The main idea of an average is the equal distribution of a value among all the persons or things involved. We obtain the average of a set of numbers using the formula: sum of observations divided by number of observations.
Important Formula for Average:
Find the Average Speed
• If a person travels a distance at a speed of x km/hr and the same distance at a speed of y km/hr, then the average speed during the whole journey is 2xy/(x + y) km/hr (see the short example after this list).
• If a person covers A km at x km/hr, B km at y km/hr and C km at z km/hr, then the average speed in covering the whole distance is (A + B + C) / (A/x + B/y + C/z) km/hr.
• The average of the series which is in A.P. can be calculated by ½(first + last term)
• If the average of n numbers is A and if we add x to each term then the new average will be = (A+ x).
• If the average of n numbers is A and if we multiply p with each term then the new average will be = (A x p).
• When a person leaves the group and another person joins the group in place of that person then-
Case1: If the average age is increased,
Age of new person = Age of separated person + (Increase in average × total number of persons)
Case2: If the average age is decreased,
Age of new person = Age of separated person – (Decrease in average × total number of persons)
• When a person joins the group-
Case1: In case of increase in average, Age of new member = Previous average + (Increase in average × Number of members including new member)
Case2: In case of decrease in average, Age of new member = Previous average – (Decrease in average × Number of members including new member)
• In an Arithmetic Progression there are two cases: when the number of terms is odd and when the number of terms is even. When the number of terms is odd, the average is the middle term. When the number of terms is even, the average is the average of the two middle terms.
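For example, a quick script illustrating the first (two-speed) formula — the average speed is the harmonic mean of the two speeds, not their arithmetic mean:

```python
def avg_speed_two_legs(x, y):
    # same distance covered at x km/hr and then at y km/hr
    return 2 * x * y / (x + y)

print(avg_speed_two_legs(40, 60))   # 48.0 km/hr, not the arithmetic mean of 50
```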
Using these important formulas from the Average topic, you will surely score better. This section appears in almost all bank exams such as SBI PO, SBI Clerk, IBPS PO and Specialist Officer, and also in all sarkari naukri examinations such as SSC CGL, SSC CHSL, RRB NTPC, etc. So, here are some questions and a quiz on Average; using the above rules, now test your knowledge of the Average topic.
### Average Questions and Quiz:
Recommended books: Fast Track Objective Arithmetic by Rajesh Verma; Quantitative Aptitude for Competitive Examinations by R.S. Aggarwal; Advance Maths for General Competitions by Rakesh Yadav.
Friends, this is it for Quantitative Aptitude preparation. If you have any queries or questions regarding the best books for any section, feel free to ask them in the comments section below, and we will be glad to answer them for you.
QUANTITATIVE APTITUDE FREE QUIZZES
#### 1 COMMENT
1. question no. 1 : A Batsman makes a score of 98 runs in the 20th inning and thus increases his average by 3.
this means that if he had an average of x runs till his 19th inning, then it becomes x + 3 after the 20th inning… that implies (19x + 98)/20 = (x + 3) => x = 38. Please explain, I’m confused
# [OS X TeX] tex files and mdimport
Mon Nov 14 20:56:16 CET 2005
On Monday, November 14, 2005, at 11:51AM, Gary L. Gray <gray at engr.psu.edu> wrote:
>
>On Nov 14, 2005, at 2:42 PM, Adam Maxwell wrote:
>
>>
>> On Monday, November 14, 2005, at 09:05AM, Herbert Schulz
>> <herbs at wideopenwest.com> wrote:
>>
>>> I don't know how many folks here also look at the TeXShop Forum
>>> <http://www.apfelwiki.de/forum/viewforum.php?f=6> but there is an
>>> interesting note there about Spotlight and .tex files. It seems that,
>>> at least after the 10.4.3 update, .tex files are no longer searched
>>> for meta-data. I just checked this and it seems to be true: looking
>>> for some information (the word abbreviations which I know is in at
>>> least one of my .tex files) within one of my .tex files doesn't list
>>> that file. Apparently the file type for .tex files is dyn.... and
>>> those files aren't searched.
>>
>> I don't think they should have been searched in the first place, so
>> Spotlight is probably working (more) correctly now. I wrote up a
>> trivial importer that indexes .tex files by file content, \author,
>> and \title. Any other LaTeX commands that would be worth trying to
>> parse for metadata? If this would be useful, I'll release it under
>> a BSD license as part of the mactextoolbox (if Maarten is agreeable).
>
>Maybe I am crazy (actually, I know I am, but that is for another
>topic), but I would like to see it index everything in the .tex file.
>Is there a reason this shouldn't be done?
That was the "file content" part of my message above. |
# networkx.utils.random_sequence.zipf_rv
zipf_rv(alpha, xmin=1, seed=None)[source]
Returns a random value chosen from the Zipf distribution.
The return value is an integer drawn from the probability distribution
$p(x)=\frac{x^{-\alpha}}{\zeta(\alpha, x_{\min})},$
where $$\zeta(\alpha, x_{\min})$$ is the Hurwitz zeta function.
Parameters
• alpha (float) – Exponent value of the distribution
• xmin (int) – Minimum value
• seed (integer, random_state, or None (default)) – Indicator of random number generation state. See Randomness.
Returns
x – Random value from Zipf distribution
Return type
int
Raises
ValueError: – If xmin < 1 or if alpha <= 1
Notes
The rejection algorithm generates random values for the power-law distribution in uniformly bounded expected time, dependent on the parameters. See 1 for details on its operation.
Examples
>>> nx.utils.zipf_rv(alpha=2, xmin=3, seed=42)
8
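As an illustrative (unofficial) sanity check, the empirical frequencies of zipf_rv samples can be compared against the pmf above, using SciPy's Hurwitz zeta for the normalizing constant:

```python
from collections import Counter

import networkx as nx
from scipy.special import zeta   # zeta(alpha, xmin) is the Hurwitz zeta function

alpha, xmin, n = 2.5, 1, 100_000
samples = [nx.utils.zipf_rv(alpha, xmin=xmin) for _ in range(n)]
counts = Counter(samples)
norm = zeta(alpha, xmin)
for x in range(xmin, xmin + 5):
    print(x, counts[x] / n, x ** (-alpha) / norm)   # empirical vs. theoretical p(x)
```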
References
1
Luc Devroye, Non-Uniform Random Variate Generation, Springer-Verlag, New York, 1986. |