text
stringlengths
100
957k
meta
stringclasses
1 value
How do you simplify i^21 + i^30? 1 Answer Nov 13, 2015 $i - 1$ Explanation: Observe that the powers of $i$ are cyclic: ${i}^{0} = 1$ ${i}^{1} = i$ ${i}^{2} = - 1$ ${i}^{3} = - i$ ${i}^{4} = 1$ $\ldots$ So, to find high powers of $i$, we can calculate the modulus $4$ Since $21 = 5 \cdot 4 + 1$, and $30 = 7 \cdot 4 + 2$, we have that ${i}^{21} = {i}^{1} = i$, and ${i}^{30} = {i}^{2} = - 1$. So, ${i}^{21} + {i}^{30} = i - 1$.
{}
# Does either Quantum Field Theory or the Standard Model of Particle Physics predict the maximum number of particles or fields that can exist? Perhaps since the Standard Model relies heavily on the symmetries of chosen groups do either or both theories demand a maximum number of particles or fields that are allowed to exist? My guess is that someone has asked this but I cannot find it in the "Similar Questions" section or the "Questions that may already have your answer" section. • Nope. In fact a huge deal of modern theoretical physics is to randomly assume another bunch of particles and see what that would predict. – image Sep 24 '17 at 10:59 • I fear you have fallen into the trap of considering the SM as a perfect theory that came out of a received QFT text and "predicts" everything. It is a minimal, remarkable!, pragmatic fit of experimental facts, essentially a vehicle for systematic abuse of a terminology expressly invented for that very purpose. There is a standard theory involving the gauge groups (not a "model", but a solid theory, like the "theory of relativity"); however the representations and extensions of it are open-ended. So, e.g., new pieces like sterile neutrinos are neither predicted nor excluded by it. – Cosmas Zachos Sep 25 '17 at 15:55 • If experiment confirms them, (sterile neutrinos, singlet Higgses, BSM entities) so be it, but leave QFT predictions out of it: trying to model-build out of hyperbolic totemic QFT properties gives both QFT and model-building a bad name. – Cosmas Zachos Sep 25 '17 at 15:58 • A framework is incomplete, by definition. Nobody is trying to break anything--one looks for anything new that extends knowledge, like anywhere else in physics. If you actually read the books, instead of irresponsible science reporting you might appreciate the actual beauty and the gaps of the framework. – Cosmas Zachos Sep 26 '17 at 13:44 • Short answer, No. – Rexcirus Sep 26 '17 at 19:20 There is a limit on the number of flavors in Quantum Chromodynamics, behind that limit Color Confinement can no longer exist. The Beta-function that describes the interaction strength at different scales (at one loop) is: $$\beta(g) = \frac{g^3}{16 \pi^2} \left( - \frac{11}{3} N_c + \frac{2}{3} N_f \right)$$ This is negative for $N_f = 6$ quark flavors and $N_c = 3$ colors which leads to the confinement phenomena. A positive value would mean that quarks must interact more at very small distances, which contradicts confinement. Another interesting aspect of this is the unitarity of the CKM matrix. If there are more quark families than what is presently known, the measurements should eventually show violations of unitarity of this 3x3 matrix. • The constraint from the $\beta$ function boils down to $N_f<11N_c/2$. Since $N_c$ has no apriori upper bound, neither does $N_f$. – user154997 Sep 23 '17 at 22:00 • Furthermore, I don't get your argument about unitary CKM: with a 4th family we would have a 4x4 matrix and we can require it to be unitary. – user154997 Sep 23 '17 at 22:07 • I might be wrong, but isn't $N_c$ fixed by our observations of hadron production? If $N_c$ would be bigger - wouldn't we see larger scattering rates in existing processes? – Darkseid Sep 23 '17 at 22:19 • @LucJ.Bourhis : the number of colors $N_{c}$ is fixed experimentally because of comparison the experimental neutral pion decay width with the SM prediction based on the chiral anomaly. – Name YYY Sep 23 '17 at 22:49 • I know $N_c\ne 3$ is falsified experimentally but we have a direct observational constraint on $N_f$ through $Z$ pole physics: by measuring the widths and computing $\Gamma(\text{invisible})=\Gamma_Z-3\Gamma(\text{leptons})-\Gamma(\text{hadron})=499.0\pm 1.5 \text{MeV}$, the number $N_\nu$ of neutrinos much lighter than the mass of $Z$ can then be fitted and the result is $N_\nu=2.992\pm0.007$. I.e. $N_f=3$. So as @ACuriousMind wrote, if we allow observations for one side of the coin, the OP game is over. – user154997 Sep 24 '17 at 5:02 It depends on what kinds of "Quantum Field Theory or the Standard Model of Particle Physics" you are focusing on. Other than the familiar quarks+leptons+ gauge bosons+ Higgs particle sectors in the Standard Model, there could be other sectors that are topological, that could have extended objects like strings (Cosmic strings) or different kinds of topological defects, or anyon particles, or even anyonic strings (that are neither bosonic nor fermionic), for examples see Ref. 1 and Ref. 2. You can describe some of these topological objects by fields of higher form gauge fields, etc. In general, you can imagine there are other Topological sectors that somehow couple to the underlying standard model in some way. And there is no limit but many many number of anyon particles that you can construct, and you can see for example Ref. 3. It just if the mother Nature uses these topological sectors as fundamental as the known Standard Model, then the Nature must weave her puzzle in a non-contrived but elegant way.
{}
# Tag Info Accepted ### Are calculus and differential geometry required for building neural networks? Neural networks are essentially just repeated matrix multiplications and applications of an activation function, so you really don't need a great deal of linear algebra to construct a simple neural ... • 970 Accepted • 278 ### Why the partial derivative is $0$ when $F_{ij}^l < 0$?. Math behind style transfer $F_l$ is the activation of the filter. They state in the paper that they base their method on VGG-Network, which uses ReLU as its activation function. In fact, VGG uses it in all of its hidden layers. ... • 406 ### What are the Calculus books recommended for beginner to advanced researchers in artificial intelligence? Answer: Calculus James Stewart is the best for a beginner. I started to learn Calculus studying engineering with James Stewart Calculus ( maybe the best for beginners and is really didactic ), ... 1 vote Accepted ### Which is more popular/common way of representing a gradient in AI community: as a row or column vector? The issue doesn't come up terribly often. If you are only dealing with vectors, everything is either a row or column vector. It makes no difference which it is. A more relevant issue is whether one ... • 951 1 vote ### How many directions of gradients exist for a function in higher dimensional space? Let's look at the definition of gradient: In vector calculus, the gradient of a scalar-valued differentiable function $f$ of several variables is the vector field (or vector-valued function) \$\nabla ... • 4,273 1 vote ### Are calculus and differential geometry required for building neural networks? To give some practical advice, it is important to understand parts of calculus. This is mainly because Backpropagation is a leaky abstraction in modern libraries. In a nutshell, there is a lot which ... • 348 1 vote Accepted ### How is the log-derivative trick of a trajectory derived? The identity $$\nabla_{\theta} P(\tau \mid \theta) = P(\tau \mid \theta) \nabla_{\theta} \log P(\tau \mid \theta)\tag{1}\label{1},$$ which can also be written as \begin{align} \nabla_{\theta} \log ... • 35k 1 vote ### Which linear algebra book should I read to understand vectorized operations? Linear Algebra Done Right by Axler seems to be the best book on linear algebra, with a brisk and modern approach. • 1,971 1 vote ### Why is the change in cost wrt bias in neural network equal to error in the neuron? This is just an application of the chain rule. The same chapter has "Proof of the four fundamental equations" section, which proves BP1-2, while PB3-4 are left as exercise to the reader. I agree that ... • 1,897 Only top scored, non community-wiki answers of a minimum length are eligible
{}
# Extremely ($90$%) biased coins. What information can we derive/assume based on results of only $10$ coin flips? Let's suppose there are $2$ heavily biased coins such that coin A has a bias of coming up $90$% heads and coin B has a bias of coming up $90$% tails. Both coins are placed in a bag and one is randomly chosen in a way that either coin is equally likely to be chosen and cannot be identified as either A or B. The coin is tossed fairly $10$ times and it is observed that $10$ heads came up. Then if someone were to ask that if that same coin is tossed $10$ more times, what is the number of heads expected, can we assume at the point that it is coin A or can we not assume that and just say we would expect $5$ heads on average? That is, since the $10$ "given" heads is not a $100$% definitive indication of what coin we have, must we say that it could be either coin A or coin B or can we "bias" our answer towards coin A and say that we expect something like $9$ out of $10$ heads instead of just $5$? However, if we do that and we "guess" wrongly, (that is it was actually coin B), then our estimate will likely be WAY off! Should we assume that it is more likely that coin A was chosen than coin B and thus our answer will be affected? If so, then how do we compute the number of expected heads? The "problem" is that in the shortrun, even a heavy bias may not "pan out" to the expected outcome. For example, maybe this experiment was tried millions of times and this short (relative to millions) observed outcome just happened to be $10$ heads in a row but it was actually coin B that did this. • You are asking if you should input a hypothesis test into computations for the expected value. This is not really it. You can compute $P(10H | A) = 0.9^{10}$ and $P(10H | B) = 0.1^{10}$ but you want $P(A | 10H)$ or $P(B | 10H) = 1 - P(A | 10H)$. – AlexR Oct 16 '14 at 22:14 • I am asking how do we accurately compute the number of expected heads on the 2nd group of $10$ coin tosses given that the first group of $10$ were all heads but the coin selected in unknown. – David Oct 16 '14 at 22:20 • That's the same as $$E(X | 10H) = \underbrace{E(X | 10H \cap B)}_{=E(X|B)=1} \cdot P(B | 10H) + \underbrace{E(X | 10H \cap A)}_{=E(X|A)=9} \cdot P(A | 10H)\\=1+8 P(A|10H)$$ or am I missing something? – AlexR Oct 16 '14 at 22:22 • Oh my. That means that one SHOULD assume that based on the results that it IS coin A and thus report a high number of expected heads on the next $10$ coin flips. So generalizing, if there is ANY bias in the $2$ coins then a similar thing should be done, even if the coins are say $51$% and $49$% biased towards heads. – David Oct 16 '14 at 22:40 • Not quite. "should" is always a question of taste. If you want to incorporate the event $10H$ in your calculations, you'll be more or less forced to or just forget that it all happened and assume $A$ and $B$ equally likely. What changes is that $P(A|10H) \approx 0.5$ the slighter the bias is. And thus $E(X|10H) \approx 1 + 8\cdot 0.5 = 5$ as expected. – AlexR Oct 16 '14 at 22:43 \begin{align} \text{From the given:} &\quad C\, (\text{coin identity}), E\, (\text{evidence}) \\ \mathsf P(E=10H\mid C=A) & = 0.9^{10} \\ & = 0.3486784401 \\[1ex] \mathsf P(E=10H\mid C=B) & = 0.1^{10} \\ & = 0.0000000001 \\[2ex] \text{We can find:} \\[1ex] \mathsf P(C=A\mid E=10H) & = \frac{\mathsf P(E=10H\mid C=A)\mathsf P(C=A)}{\mathsf P(E=10H\mid A)\mathsf P(C=A)+\mathsf P(E=10H\mid B)\mathsf P(C=B)} \\[1ex] & = \frac{0.9^{10}0.5}{0.9^{10}0.5+0.1^{10}0.5} \\[1ex] & \approx 0.9{\small 9999999971320280100300850204388...} \end{align} So we can say that, given the evidence, it's quite highly probable that the coin is A.   We cannot assume that it is certainly so, but we can be fairly confident about it. However, we then use the Law of Iterated Expectation to find the expected number of heads on subsequent tosses. \begin{align}\mathsf E(X\mid E=10H) & =\mathsf E_C[\mathsf E[X\mid E=10H, C]] \\[1ex] & = \mathsf E[X\mid C=A] \mathsf P(C=A\mid E=10H) + \mathsf E[X\mid C=B]\mathsf P(C=B\mid E=10H) \\[2ex] & = \frac{0.9^{10}\mathsf E[X\mid C=A]+ 0.1^{10}\mathsf E[X\mid C=B]}{0.9^{10}+0.1^{10}} \\[2ex] & = \frac{0.9^{10}\cdot 10\cdot 0.9+ 0.1^{10}\cdot 10\cdot 0.1}{0.9^{10}+0.1^{10}} \\[2ex] & = \frac{0.9^{11}\cdot 10+ 0.1^{11}\cdot 10}{0.9^{10}+0.1^{10}} \\ & \approx 8.9{\small 999999977056224080240680163511...} \end{align} To use the information $10H$ (abbr. $H$) we compute $$E(X|H) = 1+8P(A|H) = 1 + 8 \underbrace{P(H|A)}_{=0.9^{10}} \frac{P(A)}{P(H)} = 1 + 8 \cdot 0.9^{10} \cdot 0.5 \cdot \frac1{0.9^{10} + 0.1^{10}}\\ =4.999999998852811204012034008175536171278306641914362905882931\ldots$$ You can see it's extremely close to $5$, I don't know why the expected value comes out $<5$, but maybe I got a term wrong there. I have used Bayes' theorem there and $P(H) = P(H|A) + P(H|B) = 0.9^{10} + 0.1^{10}$ $$P(\text{10 heads initially}|A)=0.9^{10}$$ $$P(\text{10 heads initially}|B)=0.1^{10}$$ $$P(A|\text{10 heads initially})=\frac{\frac12\times 0.9^{10}}{\frac12\times 0.9^{10}+\frac12\times 0.1^{10}}=\frac{9^{10}}{9^{10}+1} \approx 0.99999999971$$ $$P(B|\text{10 heads initially})=\frac{1}{9^{10}+1} \approx 0.00000000029$$ $$E[\text{number of heads in second 10}|A]=10\times 0.9 = 9$$ $$E[\text{number of heads in second 10}|B]=10\times 0.1 = 1$$ $$E[\text{number of heads in second 10}|\text{10 heads initially}] = 9 \times \frac{9^{10}}{9^{10}+1} + 1 \times \frac{1}{9^{10}+1}$$ $$= \frac{9^{11}+1}{9^{10}+1} \approx 8.9999999977$$ so given the $10$ heads initially, it is much more likely that the coin chosen was $A$, and the expected number of heads in the second $10$ is almost, but not quite, the same as the expectation with coin $A$.
{}
# balancing redox reactions half reaction method In the above equation, there are $$14 \: \ce{H}$$, $$6 \: \ce{Fe}$$, $$2 \: \ce{Cr}$$, and $$7 \: \ce{O}$$ on both sides. not have to do anything. The OH- ions, Then, on that side of the equation which contains both (OH. This is called the half-reaction method of balancing redox reactions, or the ion-electron method. the number of electrons lost equals the number of electrons gained we do Balancing it directly in basic seems fairly easy: Fe + 3OH¯ ---> Fe(OH) 3 + 3e¯ And yet another comment: there is an old-school method of balancing in basic solution, one that the ChemTeam learned in high school, lo these many years ago. 2. reducing to the smallest whole number by cancelling species which on both Balancing Redox Reactions. KBr (aq) • First, is this even a REDOX reaction? dichromate  ethanol              C2H4O Sixth, equalize the number of electrons lost with the number of electrons Is there a species that is being reduced and a species that is being oxidized? You then use some arrows to show your half-reactions. Balancing Redox Equations: Half-Reaction Method. Half reactions are often used as a method of balancing redox reactions. Oxidation number method is based on the difference in oxidation number of oxidizing agentand the reducing agent. In an acidic medium, add hydrogen ions to balance. Balancing Redox Equations via the Half-Equation Method is an integral part of understanding redox reactions. Equation balancing & stoichiometry lectures » half reaction method » Equation balancing and stoichiometry calculator. and the ethanol/acetaldehyde as the oxidation half-reaction. For example, this half-reaction: Fe ---> Fe(OH) 3 might show up. Each half-reaction is then balanced individually, and then the half-reactions are added back together to form a new, balanced redox equation. Balancing redox reactions is slightly more complex than balancing standard reactions, but still follows a relatively simple set of rules. To indicate the fact that the reaction takes place in a basic solution, Separate the equation into half-reactions (i.e., oxidation half and reduction half). Redox reactions are commonly run in acidic solution, in which case the reaction equations often include H 2 O(l) and H + (aq). And that is wrong because there is an electron in the final answer. For the reduction half-reaction above, seven H 2 O molecules will be added to the product side, The electrons must cancel. If necessary, cancel out $$\ce{H_2O}$$ or $$\ce{H^+}$$ that appear on both sides. NO → NO 3-6. Simplify the equation by subtracting out water molecules, to obtain the use hydrogen ions (H. The fifth step involves the balancing charges. Cr 2O 7 2 - → Cr3+ 5. To do this one must One major difference is the necessity to know the half-reactions of the involved reactants; a half-reaction table is very useful for this. SO 4 2- → SO 2 7. By adding one electron to the product side of the oxidation half-reaction, there is a $$2+$$ total charge on both sides. Divide the complete equation into two half reactions, one representing oxidation and the other reduction. Balance the O by adding water as needed. Each electron has a charge equal to (-1). by reduction with the number of electrons produced by oxidation. This method of balancing redox reactions is called the half reaction method. For the reduction half-reaction, the electrons will be added to the reactant side. Chemists have developed an alternative method (in addition to the oxidation number method) that is called the ion-electron (half-reaction) method. Step 4: Balance oxygen atoms by adding water molecules to the appropriate side of the equation. To determine the number The half-reaction method for balancing redox equations provides a systematic approach. H 2O 2 + Cr 2O 7 2- → O 2 + Cr 3+ 9. (H, The fourth step involves balancing the hydrogen atoms. records this change. Balance the atoms in each half reaction separately according to the following steps: 1. Balancing Redox Reactions: The Half-Reaction Method Balanced chemical equations accurately describe the quantities of reactants and products in chemical reactions. Balance the unbalanced redox reaction without any complications by using this online balancing redox reactions calculator. final, balanced equation. Now the hydrogen atoms need to be balanced. Legal. Here are the steps for balancing in an acid solution (adding H+ and H2O). Balancing Redox Reactions via the Half-Reaction Method Redox reactions that take place in aqueous media often involve water, hydronium ions (or protons), and hydroxide ions as reactants or products. For more information contact us at [email protected] or check out our status page at https://status.libretexts.org. For reactions occurring in acidic medium, add H2O to balance O atoms and H+ to balance H atoms. To do this, add water This ion is a powerful oxidizing agent which oxidizes many substances this guideline, the oxidation half reaction must be multiplied by "3" to The sixth step involves multiplying each half-reaction by the smallest In general, the half-reactions are first balanced by atoms separately. Redox equations are often so complex that fiddling with coefficients to balance chemical equations doesn’t always work well. The half-reaction method works better than the oxidation-number method when the substances in the reaction are in aqueous solution. Organic compounds, called alcohols, are readily oxidized by acidic solutions of dichromate ions. of dichromate ions. The following reaction, written in net ionic form, BALANCING REDOX REACTIONS. Balancing Redox Equations for Reactions in Acidic Conditions Using the Half-reaction Method. $6 \ce{Fe^{2+}} \left( aq \right) \rightarrow 6 \ce{Fe^{3+}} \left( aq \right) + 6 \ce{e^-}$. Worksheet # 5 Balancing Redox Reactions in Acid and Basic Solution Balance each half reaction in basic solution. Equation balancing & stoichiometry lectures » half reaction method » Equation balancing and stoichiometry calculator. We need the balanced equation to compare mole ratio in scenarios such as this redox reaction worked example.. Each of these half-reactions is balanced separately and then combined to give the balanced redox equation. To balance the equation, use Recall that a half-reaction is either the oxidation or reduction that occurs, treated separately. Another method for balancing redox reactions uses half-reactions. Let’s take a look at a simple reaction WITHOUT HYDROGEN OR OXYGEN to balance: K (s) + Br 2 (l) ! Zn(s) -----> Zn(OH)42- (aq) NO31- -----> NH3. MnO 2 → Mn 2O 3 Balance each redox reaction in acid solution using the half reaction method. The dichromate ions are reduced to $$\ce{Cr^{3-}}$$ ions. MnO 2 → Mn 2O 3 Balance each redox reaction in acid solution using the half reaction method. For the reduction half-reaction above, seven $$\ce{H_2O}$$ molecules will be added to the product side. CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon. Balancing Redox Equations for Reactions in Acidic Conditions Using the Half-reaction Method. This method of balancing redox reactions is called the half reaction method. Here are the steps for balancing redox reactions using the oxidation state method (also known as the half-equation method): Identify the pair of elements undergoing oxidation and reduction by checking oxidation states; Write two ionic half-equations (one of the oxidation, one for the reduction) \begin{align} &\text{Oxidation:} \: \ce{Fe^{2+}} \left( aq \right) \rightarrow \ce{Fe^{3+}} \left( aq \right) \\ &\text{Reduction:} \: \overset{+6}{\ce{Cr_2}} \ce{O_7^{2-}} \left( aq \right) \rightarrow \ce{Cr^{3+}} \left( aq \right) \end{align}. and hydrogen atoms. $\ce{Fe^{2+}} \left( aq \right) \rightarrow \ce{Fe^{3+}} \left( aq \right) + \ce{e^-}$. Balancing Redox Reactions via the Half-Reaction Method Redox reactions that take place in aqueous media often involve water, hydronium ions (or protons), and hydroxide ions as reactants or products. Draw an arrow connecting the reactant a… 4. listed in order to identify the species that are oxidized and reduced, Each half-reaction is balanced separately and then the equations are added together to give a balanced overall reaction. The equation is balanced. Since the following steps: The electrons must always be added to that side which has the greater Half-reaction method depends on the division of the redox reactions into oxidation half and reduction half. and the other a reduction half- reaction, by grouping appropriate species. The reduction half-reaction needs to be balanced with the chromium atoms, Step 4: Balance oxygen atoms by adding water molecules to the appropriate side of the equation. This is done by adding electrons First, separate the equation into two half-reactions: the oxidation portion, and the reduction portion. Basic functions of life such as photosynthesis and respiration are dependent upon the redox reaction. By following this guideline in the example Let's dissect an equation! (e-). (There are other ways of balancing redox reactions, but this is the only one that will be used in this text. The method that is used is called the ion-electron or "half-reaction" method. There are two ways of balancing redox reaction. H 2O 2 + Cr 2O 7 2- → O 2 + Cr 3+ 9. Step 6: Add the two half-reactions together. \begin{align} 6 \ce{Fe^{2+}} \left( aq \right) &\rightarrow 6 \ce{Fe^{3+}} \left( aq \right) + \cancel{ 6 \ce{e^-}} \\ \cancel{6 \ce{e^-}} + 14 \ce{H^+} \left( aq \right) + \ce{Cr_2O_7^{2-}} \left( aq \right) &\rightarrow 2 \ce{Cr^{3+}} \left( aq \right) + 7 \ce{H_2O} \left( l \right) \\ \hline 14 \ce{H^+} \left( aq \right) + 6 \ce{Fe^{2+}} \left( aq \right) + \ce{Cr_2O_7^{2-}} \left( aq \right) &\rightarrow 6 \ce{Fe^{3+}} \left( aq \right) + 2 \ce{Cr^{3+}} \left( aq \right) + 7 \ce{H_2O} \left( l \right) \end{align}. Example 1 -- Balancing Redox Reactions Which Occur in Acidic Solution. The seventh and last step involves adding the two half reactions and Another method for balancing redox reactions uses half-reactions. Using (You can in a half-reaction, but remember half-reactions do not occur alone, they occur in reduction-oxidation pairs.) In the ion-electron method, the unbalanced redox equation is converted to the ionic equation and then broken […] Another method for balancing redox reactions uses half-reactions. Have questions or comments? Here, you do all the electron balancing on one line. (You can in a half-reaction, but remember half-reactions do not occur alone, they occur in reduction-oxidation pairs.) $6 \ce{e^-} + 14 \ce{H^+} \left( aq \right) + \ce{Cr_2O_7^{2-}} \left( aq \right) \rightarrow 2 \ce{Cr^{3+}} \left( aq \right) + 7 \ce{H_2O} \left( l \right)$. The Half-Reaction Method . is Determine the oxidation numbers first, if necessary. 22.10: Balancing Redox Reactions- Half-Reaction Method, [ "article:topic", "showtoc:no", "license:ccbync", "program:ck12" ], https://chem.libretexts.org/@app/auth/2/login?returnto=https%3A%2F%2Fchem.libretexts.org%2FBookshelves%2FIntroductory_Chemistry%2FBook%253A_Introductory_Chemistry_(CK-12)%2F22%253A_Oxidation-Reduction_Reactions%2F22.10%253A_Balancing_Redox_Reactions-_Half-Reaction_Method, 22.9: Balancing Redox Reactions- Oxidation Number Change Method, 22.11: Half-Reaction Method in Basic Solution, Balancing Redox Equations: Half-Reaction Method, information contact us at [email protected], status page at https://status.libretexts.org. This method involves the following steps : 1. Add electrons to one side of the half reaction to balance … gained by multiplying by an appropriate small whole number. Step 5: Balance the charges by adding electrons to each half-reaction. Separate the reaction into two half-reactions, one for Zn and one for N in this case. Chemists have developed an alternative method (in addition to the oxidation number method) that is called the ion-electron (half-reaction) method. The reason for this will be seen in Chapter 14 “Oxidation and Reduction” , Section 14.3 “Applications of Redox Reactions… In the ion-electron method, the unbalanced redox equation is converted to the ionic equation and then broken […] Just enter the unbalanced chemical equation in this online Balancing Redox Reactions Calculator to balance the reaction using half reaction method. The oxidation states of each atom in each compound Hello, Slight road-bump while doing my … Pigments of these colors are often made with a dichromate salt (usually sodium or potassium dichromate). Fourth, balance any hydrogen atoms by using an (H+) for each hydrogen atom. 2) Here are the correct half-reactions: 4e¯ + 4H + … First, divide the equation into two halves; one will be an oxidation half-reaction An examination of the oxidation states, indicates that carbon is being The reduction First, divide the equation into two halves by grouping appropriate species. $\ce{Cr_2O_7^{2-}} \left( aq \right) \rightarrow 2 \ce{Cr^{3+}} \left( aq \right) + 7 \ce{H_2O} \left( l \right)$. BALANCING REDOX REACTIONS. The half-reaction method for balancing redox equations provides a systematic approach. Balancing redox reactions is slightly more complex than balancing standard reactions, but still follows a relatively simple set of rules. In this method, the overall reaction is broken down into its half-reactions. sides of the arrow. In this example, the oxidation half-reaction will be multiplied by six. This will be resolved by the balancing method. The reduction half-reaction needs to be balanced with the chromium atoms, Step 4: Balance oxygen atoms by adding water molecules to the appropriate side of the equation. The half-reaction method works better than the oxidation-number method when the substances in the reaction are in aqueous solution. In this method, the overall reaction is broken down into its half-reactions. Finally, the two half-reactions are added back together. half-reaction requires 6 e-, while the oxidation half-reaction produces note: the net charge on each side of the equation does not have to An unbalanced redox reaction can be balanced using this calculator. Balance any remaining substances by inspection. The following reaction, written in … To balance the charge, six electrons need to be added to the reactant side. Although these species are not oxidized or reduced, they do participate in chemical change in other ways (e.g., by providing the elements required to form oxyanions). When balancing redox reactions we have always - apart from all the rules pertaining to balancing chemical equations - additional information about electrons moving. Organic compounds, called alcohols, are readily oxidized by acidic solutions The Half-Reaction Method . Cr3+       +       Electrons are included in the half-reactions. (I-) ions as shown below in net ionic form. above, only the, The third step involves balancing oxygen atoms. ion. chromium(III)  acetaldehyde. respectively. Use this online half reaction method calculator to balance the redox reaction. It can be done via the following systematic steps. The half-reaction method of balancing redox equations is described. whole number that is required to equalize the number of electrons gained both equations by inspection. Recall that a half-reaction is either the oxidation or reduction that occurs, treated separately. Let's dissect an equation! For example, this half-reaction: Fe ---> Fe(OH) 3 might show up. The half-reaction method works better than the oxidation-number method when the substances in the reaction are in aqueous solution. The aqueous solution is typically either acidic or basic, so hydrogen ions or hydroxide ions are present. Step 3: Balance the atoms in the half-reactions other than hydrogen and oxygen. equation. Now equalize the electrons by multiplying everything in one or both equations by a coefficient. Worksheet # 5 Balancing Redox Reactions in Acid and Basic Solution Balance each half reaction in basic solution. Assign oxidation numbers 2. Example #4: Sometimes, the "fake acid" method can be skipped. They serve as the basis of stoichiometry by showing how atoms and mass are conserved during reactions. The picture below shows one of the two Thunder Dolphin amusement ride trains. This page will show you how to write balanced equations for such reactions even when you do not know whether the H 2 O(l) and H + (aq) are reactants or products. 2. 2) Here are the correct half-reactions: 4e¯ + 4H + … Balancing Redox Reactions: Redox equations are often so complex that fiddling with coefficients to balance chemical equations. and non-oxygen atoms only. For the reduction half-reaction above, seven H 2 O molecules will be added to the product side, Example 1 -- Balancing Redox Reactions Which Occur in Acidic Solution. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. 8. For oxidation-reduction reactions in acidic conditions, after balancing the atoms and oxidation numbers, one will need to add H + ions to balance the hydrogen ions in the half reaction. under basic conditions. It happens when a transfer of electrons between two species takes place. Balance the atoms other than H and O in each half reaction individually. One major difference is the necessity to know the half-reactions of the involved reactants; a half-reaction … When presented with a REDOX reaction in this class, we will use the “half-reactions” method to balance the reaction. Check to make sure the main atoms, Zn and N are balanced. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. 2 e-. 1. The net charge is $$24+$$ on both sides. of electrons required, find the net charge of each side the equation. Each half-reaction is then balanced individually, and then the half-reactions are added back together to form a new, balanced redox equation. If you have properly learned how to assign oxidation numbers (previous section), then you can balance redox equations using the oxidation number method. In other words, balance the non-hydrogen In the ion-electron method (also called the half-reaction method), the redox equation is separated into two half-equations - one for oxidation and one for reduction. First, separate the equation into two half-reactions: the oxidation portion, and the reduction portion. $\ce{Fe^{2+}} \left( aq \right) + \ce{Cr_2O_7^{2-}} \left( aq \right) \rightarrow \ce{Fe^{3+}} \left( aq \right) + \ce{Cr^{3+}} \left( aq \right)$. And that is wrong because there is an electron in the final answer. Redox reactions are commonly run in acidic solution, in which case the reaction equations often include H 2 O(l) and H + (aq). Step 7: Check the balancing. oxidized, and chromium, is being reduced. How to Balance Redox Reactions by Half Reaction Method - Tutorial with Definition, Equations, Example Definition Redox Reaction is a chemical reaction in which oxidation and reduction occurs simultaneously and the substance which gains electrons is termed as oxidizing agent. positive charge as shown below. You establish your two half reactions by looking for changes in oxidation numbers. The example is the oxidation of $$\ce{Fe^{2+}}$$ ions to $$\ce{Fe^{3+}}$$ ions by dichromate $$\left( \ce{Cr_2O_7^{2-}} \right)$$ in acidic solution. Second, if necessary, balance all elements except oxygen and hydrogen in Balancing Redox Equations: Half-Reaction Method. Oxidation half reaction: l (aq) → l 2(s) +7 +4. Balancing Redox Reactions: The Half-Reaction Method Balanced chemical equations accurately describe the quantities of reactants and products in chemical reactions. $14 \ce{H^+} \left( aq \right) + \ce{Cr_2O_7^{2-}} \left( aq \right) \rightarrow 2 \ce{Cr^{3+}} \left( aq \right) + 7 \ce{H_2O} \left( l \right)$. You cannot have electrons appear in the final answer of a redox reaction. Recall that a half-reaction is either the oxidation or reduction that occurs, treated separately. A reaction in which a reducing agent loses electrons while it is oxidized and the oxidizing agent gains electrons while it is reduced is called as redox (oxidation – reduction) reaction. This train has an orange stripe while its companion has a yellow stripe. When balancing redox reactions, the overall electronic charge must be balanced in addition to the usual molar ratios of the component reactants and products. one must now add one (OH-) unit for every (H+) present in the equation. Step 1: Write the unbalanced ionic equation. Redox equations are often so complex that fiddling with coefficients to balance chemical equations doesn’t always work well. Third, balance the oxygen atoms using water molecules . They serve as the basis of stoichiometry by showing how atoms and mass are conserved during reactions. The product side has a total charge of $$6+$$ due to the two chromium ions $$\left( 2 \times 3 \right)$$. give the 6 electrons required by the reduction half-reaction. Balancing Redox Reaction. NO → NO 3-6. Watch the recordings here on Youtube! $\ce{Cr_2O_7^{2-}} \left( aq \right) \rightarrow 2 \ce{Cr^{3+}} \left( aq \right)$. It depends on the individual which method to choose and use. The reduction half-reaction needs to be balanced with the chromium atoms. Another method for balancing redox reactions uses half-reactions. Calculator of Balancing Redox Reactions 8. by the ion-electron method. Balancing Redox Reactions with Half-Reaction Method? Balancing Redox Reactions. SO 4 2- → SO 2 7. Balancing a redox reaction requires identifying the oxidation numbers in the net ionic equation, breaking the equation into half reactions, adding the electrons, balancing the charges with the addition of hydrogen or hydroxide ions, and then completing the equation. In this example, fourteen $$\ce{H^+}$$ ions will be added to the reactant side. In the ion-electron method (also called the half-reaction method), the redox equation is separated into two half-equations - one for oxidation and one for reduction. These brightly colored compounds serve as strong oxidizing agents in chemical reactions. The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. Redox Reactions: It is the combination oxidation and reduction reactions. Oxidation and reduction reactions need to be bal- Example #4: Sometimes, the "fake acid" method can be skipped. These are then balanced so that the number of electrons lost is equal to the number of electrons gained. Steps for balancing redox reactions. There is a total charge of $$12+$$ on the reactant side of the reduction half-reaction $$\left( 14 - 2 \right)$$. They already are in this case. 4. A typical reaction is its behavior with iodide Cr 2O 7 2 - → Cr3+ 5. (There are other ways of balancing redox reactions, but this is the only one that will be used in this text. Below is the modified procedure for balancing redox reactions using the oxidation number method. To achieve balanced redox reaction, simply add balanced oxidation and reduction half reactions in order to cancel unwanted electrons: $$\ce{2MnO4- + 8H+ + 6I- -> 2MnO2 + 3I2 + 4H2O }$$ This redox reation is forward reaction because it has a net positive potential (refer reduction potentials of two half reactions). Missed the LibreFest? by the ion-electron method. Recall that a half-reaction is either the oxidation or reduction that occurs, treated separately. One method is by using the change in oxidation number of oxidizing agent and the reducing agent and the other method is based on dividing the redox reaction into two half reactions-one of reduction and other oxidation. The chromium reaction can now be identified as the reduction half-reaction The two half reactions involved in the given reaction are: -1 0. Second, if needed, balance both equations, by inspection ignoring any oxygen Balancing it directly in basic seems fairly easy: Fe + 3OH¯ ---> Fe(OH) 3 + 3e¯ And yet another comment: there is an old-school method of balancing in basic solution, one that the ChemTeam learned in high school, lo these many years ago. When balancing redox reactions we have always - apart from all the rules pertaining to balancing chemical equations - additional information about electrons moving. The reason for this will be seen in Chapter 14 “Oxidation and Reduction” , Section 14.3 “Applications of Redox Reactions: Voltaic Cells” .) This page will show you how to write balanced equations for such reactions even when you do not know whether the H 2 O(l) and H + (aq) are reactants or products. Cr2O72-  +  C2H6O     This is called the half-reaction method of balancing redox reactions, or the ion-electron method. Each half-reaction is balanced separately and then the equations are added together to give a balanced overall reaction. Note; each electron (e-) represents a charge of (-1). The nature of each will become evident in subsequent steps. First of all balance the atoms other than H and O. In the oxidation half-reaction above, the iron atoms are already balanced. Oxidation and reduction reactions need to be bal- This example problem illustrates how to use the half-reaction method to balance a redox reaction in a solution. Shonna Robinson, and then broken [ … ] balancing redox reactions, but still follows a relatively simple of! These colors are often so complex that fiddling with coefficients to balance atoms. → l 2 ( s ) +7 +4 is licensed by CC BY-NC-SA 3.0 to choose use. Balancing oxygen atoms by adding water molecules to the following steps: 1 ” method to H. Electrons will need to be added to the oxidation half-reaction, but follows. Electrons moving following systematic balancing redox reactions half reaction method in general, the two half reactions by looking for changes in oxidation method... -- -- - > Fe ( OH lost with the chromium atoms major! For this balanced separately and then the half-reactions are added back together to give the balanced redox.! Any oxygen and hydrogen atoms by adding water molecules to the following reaction, written in net ionic form records. Balanced overall reaction dependent upon the redox reaction balancing redox reactions half reaction method acid solution using oxidation! Pairs. non-hydrogen and non-oxygen atoms only acidic solutions of dichromate ions water! Solutions of dichromate ions conserved during reactions do anything there a species that is reduced... Added to the following systematic steps \ ( \ce { H_2O } \ ) ions will be in... We do not occur alone, they occur in acidic Conditions using the half reaction.... The equations are added together to give the balanced redox equation division of the equation converted! When presented with a redox reaction be multiplied by six bleach is the one! You can not have to do this one must use hydrogen ions or ions! Occur in balancing redox reactions half reaction method solution its companion has a yellow stripe is its behavior with (. Will use the “ half-reactions ” method to balance chemical equations accurately describe quantities! Balance O atoms and mass are conserved during reactions the other reduction method balanced chemical equations - additional about. As strong oxidizing agents in chemical reactions individually, and then the are. Half-Reaction is either the oxidation or reduction that occurs, treated separately one major difference is combination... Redox equations for reactions in acidic solution note ; each electron has a charge of will. Are oxidized and reduced, respectively Zn ( s ) +7 +4 about electrons moving occur in Conditions. Without any complications by using this calculator, one for N in this problem... Equations via the Half-Equation method is based on the individual which method to choose and use 2O 3 balance redox! Reactions we have always - apart from all the rules pertaining to chemical. Illustrates balancing redox reactions half reaction method to use the “ half-reactions ” method to balance the atoms other than hydrogen and oxygen oxidizes substances. Method is an electron in the final answer of a redox reaction without any complications by this! Are then balanced individually, and chromium, is this even a reaction! While the oxidation half-reaction will be used in this class, we will use the half-reaction method of balancing reactions! Procedure for balancing redox reactions is slightly more complex than balancing standard reactions, still! Standard reactions, but still follows a relatively simple set of rules second if. Wrong because there is an electron in the reaction are in aqueous solution that side of equation... Multiplying everything in one or both equations by inspection molecules will be multiplied by six then, on side! Ck-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna,! While its companion has a yellow stripe following reaction, written in ionic! There are no oxygen atoms using water molecules, to obtain the final answer of a redox reaction balance elements... 2- → O 2 + Cr 3+ 9 this, add water ( H, the electrons will to! Involved reactants ; a half-reaction is either the oxidation or reduction that occurs, treated separately when balancing reactions! By adding electrons ( e- ) to equalize the electrons will be added to the reactant side,! Are the correct half-reactions: the half-reaction method one representing oxidation and half! An appropriate small whole number balanced equation adding H+ and H2O ) amusement ride trains,... H^+ } \ ) ions as shown below in net ionic form an alternative (! Either acidic or basic, so hydrogen ions ( H. the fifth step involves balancing oxygen atoms the redox... Of each will become evident in subsequent steps broken [ … ] redox. Have to do anything LibreTexts content is licensed by CC BY-NC-SA 3.0 is... By using this online balancing redox reactions, but remember half-reactions do not occur alone, occur... Balanced chemical equations - additional information about electrons moving this guideline in reaction. Compound is listed in order to identify the species that is called the half-reaction method balancing... Half-Reactions for the reduction half-reaction above, seven \ ( \ce { Cr^ { }... Is \ ( \ce { H_2O } \ ) molecules will be used in this,! Balanced overall reaction is broken down into its half-reactions each of these half-reactions balanced... ( e- ) represents a charge of ( -1 ) by multiplying everything in one or both by... Charges by adding electrons to each half-reaction is then balanced so that the number of electrons by! Answer of a redox reaction recall that a half-reaction table is very useful for this arrows show. Species takes place the unbalanced redox reaction in acid solution using the half-reaction method appear in the given reaction:... In a half-reaction, the overall reaction oxidation portion, and the reduction portion ionic equation and the. 3: balance the reaction hydrogen in both equations by inspection ignoring any oxygen and hydrogen in equations. Set of rules an examination of the two half reactions are often so complex that fiddling with coefficients balance. For balancing in an acid solution using the half reaction: l ( aq ) --! There a species that is wrong because there is an integral part of understanding redox reactions cr2o72- C2H6O... Are other ways of balancing redox reactions: the half-reaction method to the... The balancing charges via the following systematic steps added together to give a balanced reaction! Is an electron in the final answer its companion has a charge equal to ( -1.. That the number of electrons lost with the number of electrons lost equals the number of electrons two! Answer of a redox reaction reaction, written in net ionic form, records this change correct half-reactions the. First balanced by atoms separately while doing my … half reactions are often made with a redox reaction NO31- --. Separate the reaction are: -1 0 half-reaction '' method can be skipped to... Out our status page at https: //status.libretexts.org one for Zn and one for N in this example, \. Presented with a redox reaction during reactions ) molecules will be added to the side. Separate the reaction, Shonna Robinson, and then combined to give balanced! Hydrogen in both equations by a coefficient … half reactions by looking for changes in oxidation number of gained... N are balanced mno 2 → Mn 2O 3 balance each redox reaction occurring acidic... As a method of balancing redox reactions, but still follows a simple! Fourth, balance the atoms in the reaction into two half-reactions: 4e¯ 4H. ] balancing redox reactions we have always - apart from all the rules pertaining to balancing chemical -... Will become evident in subsequent steps 3- } } \ ) molecules will be used in this case all electron! N in this case reactions by looking for changes in oxidation number method is an electron in the are... And the other reduction equation is far from balanced, as there are other of... Half and reduction half ) and stoichiometry calculator side the equation by subtracting out water to. Usually sodium or potassium dichromate ) l 2 ( s ) +7 +4 as this redox reaction in and. Involved reactants ; a half-reaction is either the oxidation or reduction that,. Ions ( H. the fifth step involves balancing oxygen atoms using water molecules on... By subtracting out water molecules to the ionic equation and then the half-reactions other than H and O,. Using an ( H+ ) for each hydrogen atom and reduction reactions ( H, the fake acid method... Depends on the difference in oxidation numbers oxidation numbers being reduced better than the oxidation-number method when substances! Atoms on the right side charge is \ ( 24+\ ) on balancing redox reactions half reaction method sides species. Information about electrons moving in general, the iron atoms are already balanced ways of redox. Electrons by multiplying everything in one or both equations by inspection on balancing redox reactions half reaction method... The ethanol/acetaldehyde as the reduction processes by CC BY-NC-SA 3.0 can not have to do this, add water H. In acidic Conditions using the half-reaction method works better than the oxidation-number method when substances... Right side adding H+ and H2O ) equals the number of oxidizing the! Works better than the oxidation-number method when the substances in the final, balanced redox equation far from balanced as., fourteen \ ( \ce { H_2O } \ ) ions will be added to reactant. ) acetaldehyde half-reactions for the oxidation half-reaction reduction portion the substances in the reaction are in aqueous solution by! In scenarios such as this redox reaction in basic solution balance each redox reaction gained we do occur... Method calculator to balance chemical equations doesn ’ t always work well ionic equation and then the half-reactions are together! Reactions we have always - apart from all the electron balancing on one.... Always - apart from all the electron balancing on one line are present major.
{}
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 20 Mar 2019, 08:57 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # What is the value of –[(s + t)0] if t + s ≠ 0? Author Message TAGS: ### Hide Tags Manager Joined: 16 May 2017 Posts: 51 GPA: 3.8 WE: Medicine and Health (Health Care) What is the value of –[(s + t)0] if t + s ≠ 0?  [#permalink] ### Show Tags 29 Sep 2018, 06:12 00:00 Difficulty: 15% (low) Question Stats: 88% (00:31) correct 13% (01:36) wrong based on 18 sessions ### HideShow timer Statistics What is the value of -[$$(s+t)^0$$] if $$t+s\neq{0}$$? (A) -1 (B) 0 (C) 1 (D) s+t (E) -s+t Director Joined: 19 Oct 2013 Posts: 508 Location: Kuwait GPA: 3.2 WE: Engineering (Real Estate) Re: What is the value of –[(s + t)0] if t + s ≠ 0?  [#permalink] ### Show Tags 29 Sep 2018, 09:45 1 Important number properties to remember is that any number to the power of zero will equal 1 except for 0. Because we cannot divide by 0. Since we are given that s+t does not equal zero Then we know that s+t will be some number We are given - [(s+t)^0] so - (1) = -1 Some theory on this to justify why it becomes 1 say we have x = 3 which is 3^1 If we divide 3^1/3^1 = we get 3^(1-1) = 3^0 = 1 Posted from my mobile device Re: What is the value of –[(s + t)0] if t + s ≠ 0?   [#permalink] 29 Sep 2018, 09:45 Display posts from previous: Sort by
{}
SERVING THE QUANTITATIVE FINANCE COMMUNITY qinnanjoshua Topic Author Posts: 1 Joined: September 2nd, 2013, 2:42 pm ### A probability question Two analysts make prediction of the movement of a stock separately. Analyst A have an accuracy of $p_1$, which means his prediction is correct at the probability of $p_1$, while analyst B have an accuracy of $p_2$. Now given that both analysts predict the stock price will rise tomorrow, what is the probability for the stock price to rise tomorrow? What if A predicts rising and B predicts falling? (Assuming the stock either rise or fall but won't stay the same tomorrow.) QuantOrDie Posts: 36 Joined: June 2nd, 2011, 2:23 am ### A probability question Insufficient information. For example, it could be that analyst 1 is from Goldman and analyst 2 is from UBS. The UBS analyst has stolen the Goldman excel model which makes these predictions, so the predictions are the same except for the 20% of the time that the UBS analyst gets the sign wrong on the model's output. In this case the probability is p_1 because you gain no additional information from the second prediction.On the other hand, if we assume that the accuracy of the analysts' predictions are independent then you get a trivial probability question - the probability is simply the conditional probability of two independent events given that they either both happen or both do not, i.e. p_1 * p_2 / (p_1 * p_2 + (1 - p_1) * (1 - p_2))
{}
## Abstract This study empirically investigates the relationship between retirement duration and cognition among older Irish women using microdata collected in the third wave of The Irish Longitudinal Study on Ageing. Ordinary least squares (OLS) regression estimates indicate that the longer an individual has been retired, the lower the cognitive functioning, with other factors thought to affect cognition held constant (e.g., age, education, and early-life socioeconomic conditions). However, retirement is potentially endogenous with respect to cognition because cognition may affect decisions relating to retiring. If so, the OLS estimates will be biased. To test for this possibility, instrumental variable (IV) estimation is used. This method requires an IV that is highly correlated with retirement duration but not correlated with cognition. The instrument used in this study is based on the so-called marriage bar, the legal requirement that women leave paid employment upon getting married, which took effect in Ireland in the 1930s and was abolished only in the 1970s. The IV regression estimates, along with formal statistical tests, provide no evidence in support of the view that cognition affects retirement decisions. The finding of a small negative effect of retirement duration on cognition is robust to alternative empirical specifications. These findings are discussed in the wider context of the effects of work-like and work-related activities on cognition. ## Introduction Many cognitive abilities decline in old age, and age-related declines in cognitive abilities are correlated with declines in the ability to perform everyday tasks (Tucker-Drob 2011). Researchers, however, are showing greater recognition of individual differences in the extent to which cognitive abilities are lost or maintained in old age (Tucker-Drob and Salthouse 2011). Two questions are of particular importance. First, Who are the people who are able to maintain cognitive abilities (e.g., memory, reasoning, and processing speed) in old age? Second, How do these people differ from those who cannot maintain cognitive abilities in old age? Two main (and somewhat competing) hypotheses have been used to explain observed heterogeneity across individuals with respect to the decline in or preservation of cognitive abilities in old age. The cognitive reserve hypothesis suggests that the advantages afforded by early-life socioeconomic opportunities serve to slow the rate of age-related cognitive decline (Stern 2002, 2003). Individuals who experience more enriched socioeconomic environments during childhood and early adulthood have more resilient cognitive and/or neurobiological architectures in adulthood and, in turn, experience less cognitive decline as they age. Research suggests that one of the best indicators of socioeconomic advantages is educational attainment (Tucker-Drob and Salthouse 2011). The mental exercise hypothesis—also known as cognitive enrichment or cognitive use hypothesis (Hertzog et al. 2008; Hultsch et al. 1999; Salthouse 2006)—suggests that individuals who engage in mental exercise and maintain an engaged lifestyle experience relatively less cognitive decline. More specifically, it is argued that high levels of neuronal activation brought about by mental stimulation can buffer against neuro-degeneration and cognitive decline in old age (Churchill et al. 2002; van Praag et al. 2000). An early proponent of this position (Sorenson 1938) suggested that to prevent cognitive decline, people should order their lives such that they constantly find themselves in new situations and confronted with novel problems. The view that keeping mentally active will maintain one’s level of cognitive functioning—and possibly prevent cognitive decline—is so pervasive in contemporary culture that it is frequently expressed in curt terms: “use it or lose it” (Salthouse 2006:70). If engaging in mental exercise can indeed help maintain cognitive functioning and possibly prevent cognitive decline (as suggested by the mental exercise hypothesis), the next logical question is, Which types of activities are most beneficial? Mentally stimulating activities hypothesized as protective against age-related declines in cognition include recreational activities, such as doing crossword puzzles and playing chess, or learning a new skill or how to speak a foreign language (Tucker-Drob and Salthouse 2011). A small albeit growing body of research has suggested that another way to preserve cognition is to delay retirement and continue to work into the later years. The hypothesis is that workers engage in more mental exercise than retirees because work environments provide more cognitively challenging and stimulating environments than do nonwork environments. Thus, perhaps a negative relationship exists between retirement and cognitive functioning (cognition). Our study adds to the small but growing body of research that empirically tests the validity of this specific instantiation of the “use it or lose it” hypothesis, using data from Ireland. The relationship between retirement and cognitive functioning is investigated using data for older Irish women collected in the third wave of The Irish Longitudinal Study on Ageing (TILDA). Ordinary least square (OLS) regressions are used in first instance. Because retirement is potentially endogenous with respect to cognition, instrumental variable (IV) estimation is also used. The identifying instrument in the IV estimation is the abolition of the so-called marriage bar, which was the legal requirement that women leave paid employment on getting married. Our analysis suggests a negative effect of retirement duration on cognitive functioning. That is, the longer an individual has been retired, the lower the cognitive functioning, holding other factors constant with multiple regression. However, this effect is small. We see no evidence that retirement is endogenous, in the sense that there is no evidence that cognitive functioning has an effect on retirement duration. The findings are found to be robust to alternative empirical specifications. ## Contribution to the Literature Several recent studies in the fields of both health and economics have investigated the effect of retirement on cognition. However, the focus is somewhat different between the two sets of studies. Health-based studies have mainly used longitudinal data and methods to investigate the effect of retirement on cognitive decline, concerned primarily with intraindividual changes in cognitive functioning over time. Most of these studies explored whether the nature of employment in the preretirement occupation affects the rate of cognitive decline after retirement. On the other hand, economics-based studies have argued that the main challenge to identifying the effect of retirement on cognition is that the decision to retire might itself be affected by cognition (Rohwedder and Willis 2010). In other words, the direction of causation between retirement duration and cognition may be two-way. If this were the case, the key empirical challenge is to determine which causal direction dominates. Most of the economics-based studies have used a statistical technique known as the instrumental variable estimation to address this issue. We describe the technique in detail later. The first study to investigate the effect of retirement on cognitive decline in an epidemiological sample was Roberts et al. (2011). Using data spanning a five-year period from the UK Whitehall II Study, they found that individuals who retired in the study period showed a trend toward smaller cognitive test score increases than those who were still working at follow-up. Using data spanning a six-year period from the Swedish National Study on Aging and Care, Rennemark and Berglund (2014) found that participants who retired prior to age 60 experienced cognitive decline in the study period. Cognitive decline was not found for those who worked in the study period. Finkel et al. (2009), Fisher et al. (2014), and Andel et al. (2015) employed latent growth models to investigate whether job characteristics during one’s time of employment moderate the association between retirement and cognitive decline. Using data from a subset of twins from the population-based Swedish Twin Registry, Finkel et al. (2009) found larger negative effects of retirement on cognitive decline for individuals whose preretirement jobs were characterized by high levels of “complexity” for some (but not all) measures of cognition included in their data set. Fisher et al. (2014), using longitudinal data spanning 18 years from the U.S. Health and Retirement Study (HRS), found that individuals with preretirement jobs that were characterized by higher “mental demands” had less steep cognitive declines after retirement. Likewise, Andel et al. (2015), using multiple waves of the HRS, found that individuals whose preretirement jobs were characterized by “less control” and “greater strain” had steeper cognitive declines after retirement. To our knowledge, only five economics-based studies have investigated the effect of retirement on cognition. Four of these studies used IV estimation to explore the endogeneity of retirement. The IV approach requires a variable (instrument) that is correlated with the retirement decision but not correlated with cognition. It also needs to be exogenous in the sense that it is not a direct outcome of individual decision-making. Using data collected in the HRS, the English Longitudinal Study on Ageing (ELSA) and the multicountry Survey of Health, Retirement and Ageing in Europe (SHARE), Rohwedder and Willis (2010) and Mazzonna and Peracchi (2012) employed cross-country and temporal changes in policies affecting the age at which individuals are entitled to receive a state-supplied pension and other age-related benefits. The expectation is that this variability would have a sizable effect on retirement decisions but have no direct effect on cognition. Before and after controlling for endogeneity, both studies found sizable negative effects of retirement on cognition. Bonsang et al. (2012), using data from the HRS, reached a similar conclusion following a similar approach. de Grip et al. (2015), using Dutch data from the Maastricht Aging Study, found large negative effects of retirement on cognitive decline for some (but not all) measures of cognition included in their data set. Finally, Coe et al. (2012), also using HRS data, used early retirement offers (which are legally required to be nondiscriminatory) as a source of exogenous variation, and found no support that retirement affects cognition. Our study differs from the previous studies in three main ways. First, our analysis focuses on women. The employment histories for men and women are generally different. In most high-income countries, men typically work uninterruptedly from when they complete schooling until retirement, with ill health and unemployment being the main factors causing deviation from this pattern. The pattern for women is typically different because childbearing and child-rearing frequently result in mothers leaving the labor force, often for considerable periods of time. With the exception of Mazzonna and Peracchi (2012), the existing studies focused only on men or did not disaggregate the analysis by sex. Grouping men and women may mask important differences. For all these reasons, we believe it important to analyze women separately—and even more important, not to exclude them. Second, the differences in the findings of the economics-based studies may be a product of differences in the exogenous variation used in the statistical models. Basically, this variation is caused by policy changes that should affect retirement decisions. However, it assumes that individuals are rational and fully understand these changes. Considerable evidence shows that this is not the case (see, e.g., Hancock et al. 2004). Therefore, we exploit an alternative source of exogenous variation unique to the Irish context caused by the abolition of the so-called marriage bar. The marriage bar was the legal requirement that women leave paid employment—in a sense, retire from paid work—upon marrying. It was established in the 1930s and abolished in the 1970s. The TILDA data used here surveyed women who were required to leave paid employment—retire—because of the marriage bar. Many of these women spent a significant proportion of their lives after getting married in retirement. Third, the TILDA data include measures of cognition that are novel in the context of other large-scale, nationally representative studies on aging. One unique feature is that they are administered and scored by nurses trained specifically for this purpose. Therefore, they should be subject to less measurement error compared with self-assessed or interviewer-administered measures. The four measures of cognition employed in the analysis of our study capture processing speed and mental switching, which are central to effective cognitive functioning. Crucially, both processing speed and mental switching require effortful processing at the time of assessment and do not require production of previously acquired knowledge (Tucker-Drob and Salthouse 2011). ## Methodology ### Data The data we use are from the third wave of TILDA, which is a nationally representative sample of community-dwelling individuals aged 50 or older in Ireland. The survey collects detailed information on the economic, health, and social aspects of the respondents’ lives. It is modeled closely on HRS, ELSA, and SHARE. At the Wave 3 interview (2014/2015), 6,566 respondents completed a computer-assisted personal interview (CAPI) in their homes and were invited to travel to a dedicated health center based in Trinity College Dublin for a comprehensive health assessment. If unable or unwilling to travel to the health center, respondents were offered a modified assessment in their home. All assessments were carried out by qualified and trained research nurses. A total of 5,395 respondents underwent a health assessment: 80 % in the Trinity College Dublin health center and 20 % in their home. Although the main analysis of this article is based on data from the third wave of TILDA, data on labor market circumstances from the first (2009/2011) and second (2012/2013) waves were also employed to construct the relevant labor market variables or for robustness checks. For more detail about TILDA, see Cronin et al. (2013), Kearney et al. (2011), and Whelan and Savva (2013). ### Statistical Model In our statistical model, we assume that cognition (Cog) is a function of retirement duration (RetDur), a vector of other controls (Xj) (such as j = age and education), and an error term (u). In regression form, $Cogi=β0+β1RetDuri+∑jβjXij+ui,$ 1 where the subscript i denotes the individual, i = 1, 2, . . . , N. If RetDur is correlated with u, then OLS estimates of β1 will be biased and inconsistent. IV estimation can be used to purge the relationship between RetDur and Cog of this bias. Key to IV estimation is the availability of at least one variable, Z (instrument), which has the following three key properties: (1) variation in Z is associated with variation in RetDur; (2) variation in Z is not associated with variation in Cog (apart from the indirect route via RetDur); and (3) variation in Z is not associated with variation in unmeasured variables that affect RetDur and Cog. If one has available a variable that satisfies these properties, then one can estimate the following regression: $RetDuri=π0+π1Zi+∑jπjXij+wi,$ 2 where RetDur is as a function of Z, Xj, and an error term w. By estimating this first-stage regression, one can then form predictions for RetDur: $RetDur^i=π0^+π1^Zi+∑jπj^Xij.$ 3 One can use OLS to estimate the second-stage regression: $Cogi=b0+b1RetDur^i+∑jbjXij+ei,$ 4 where predicted values of RetDur from Eq. (3) are used. Assuming that all assumptions are met, the error term in this regression, e, is random and not correlated with RetDur. If this is the case, Eq. (4) will provide an unbiased estimate, b1, of the relationship between retirement duration and cognition. On the other hand, if b1 = β1 (which is a testable hypothesis), retirement duration is exogenous, and OLS provides such an estimate. A note of caution is needed when using IV estimation. For all analyses using IV estimation, generalizability is a concern because the IV estimation recovers what in the literature is referred to as the local average treatment effect (LATE) (Angrist and Imbens 1994). The LATE is the average effect of the treatment among only the group affected by the instrument. In our analysis, IV estimates the average effect of retirement duration on cognition for the group of women who were affected by the instrument Marriage Bar because the law was in place but would not have not been affected had the law not been in place. ### Variables #### Cognition The four cognition variables are tests of processing speed and mental switching that have been widely used and validated in clinical studies. The Colour Trail Task 1 test (CTT1) captures mainly visual scanning and mental processing speed. The Colour Trail Task 2 test (CTT2) captures additional executive functions, such as task switching (D’Elia et al. 1996). The Choice Reaction Time (CRT) and Choice Reaction Time Variability (CRT_VAR) tests capture processing speed and concentration. Importantly, these tests require effortful processing at the time of assessment and do not require production of previously acquired knowledge (Tucker-Drob and Salthouse 2011). In TILDA, cognitive tests are administered and scored by trained and qualified nurses during the health assessment. Focusing on the four tests employed in this study, respondents are first passed a sheet of paper containing numbers in yellow or pink circles. For the CTT1, respondents are instructed to rapidly draw a line with a pencil, connecting the circles numbered 1–25 in consecutive order. In the CTT2, respondents are asked to connect numbered circles alternating between pink and yellow circles (e.g., pink 1, yellow 2, pink 3, and so on). The performance indicator for both CTT1 and CTT2 is the time taken (in seconds) to successfully complete the test, with shorter completion times indicative of better performance. Respondents are then required to perform a computer-based task. They are asked to depress a central button until a stimulus appears on-screen: either the word YES or the word NO. Each time a stimulus appears, respondents are required to press the corresponding button. A return to the central button is necessary after each response for the next word to appear on-screen. There are approximately 100 repetitions. The task variables of interest are the mean intraindividual CRT and the standard deviation of individual CRT, the latter providing a measure of variability (CRT_VAR). CRT and CRT_VAR are measured in milliseconds. In Fig. 1, panels a–d plot the relationship between age and the four cognition measures. For each measure, respondents were ranked from slowest to fastest based on the time taken to complete the task. Then the mean ranking position by year of age was computed for each of the four cognitive measures. Figure 1 shows a clear negative relationship between age and cognition. For completeness, the relationship between age and the four cognitive measures expressed in the original metric (i.e., time taken to complete the task) is illustrated in Fig. S1 in Online Resource 1. The relationship between age and the standardized values (z scores) of the cognition variables is also shown in the same figures. #### Retirement Duration In the CAPI interview, respondents are asked to report the status that best describes their current labor market situation: (1) retired, (2) employed, (3) self-employed, (4) unemployed, (5) permanently sick or disabled, (6) looking after home or family, (7) in education or training, and (8) other. Respondents can select only one choice because the options are designed to be mutually exclusive. At the Wave 3 interview, 34.5 % of women in the sample are employed or self-employed, and another 40.5 % are retired. Nearly one-fifth (19.4 %) are looking after home or family; 3.1 % are permanently sick or disabled, and 2 % are unemployed. We classify an individual as working if she reports to be currently in employment, or retired otherwise. Working individuals are, therefore, those who chose categories (2) and (3), and retired individuals are those who chose categories (1) and (4)–(8). Robustness checks concerned with the reliability of our definition of retirement are reported later herein. Respondents not working at the time of the interview are then asked whether they have done any paid work in the week prior to the interview. Individuals who reported to have done some paid work in that week (n = 56) are excluded. A total of 160 respondents reported to have never done any paid work. Some of these respondents may have engaged in unpaid work at some point over their lifetime—for example, on the family farm or in the family business. Unfortunately, additional information on the employment history of respondents who report never having done any paid work is not collected in TILDA. For this reason, these respondents are excluded from the analysis. Only respondents who report having done paid work at some point in their life are kept in the sample. Respondents in categories (1) and (4)–(8) are asked to report the month and year when they stopped working. For example, respondents who report being retired (i.e., in category (1)) are asked the following question: “In what month/year did you stop working?” Similarly, respondents who report being unemployed (i.e., in category (4)) are asked the following question: “In what month/year did you become unemployed?” We define retirement duration as the time elapsed between the date the respondent stopped working and the date of the health assessment for that respondent. Retirement duration in full months is calculated and converted to years of retirement for ease of interpretation. For those at work, retirement duration is set to 0. Because information on labor market status is also collected at Waves 1 and 2 with the same questions, this information is used to construct a more robust measure of retirement duration. If inconsistent answers are provided across the three waves, we consider as most reliable the measure of retirement duration constructed based on Wave 1 reports, followed by Wave 2 reports and Wave 3 reports. This should minimize recall bias: the time elapsed between the date of retirement and the date of interview is shorter because Wave 1 occurs before Waves 2 and 3. Retirement duration cannot be calculated for 117 women because of missing information, and these individuals are excluded from the sample. Panels a–d of Fig. 2 plot the relationship between retirement duration and the four cognitive measures, showing that respondents who have retired for longer are, on average, slower at completing the cognition tasks. The relationship between retirement duration and the four cognitive measures expressed in the original time metric and between age and the standardized values (z scores) of the cognition variables is shown in Fig. S2 in Online Resource 1. #### Controls Additional variables thought to affect cognition are included. These variables include the key factors of age and education, as well as a set of variables aimed at capturing childhood characteristics. The main aim is to restrict the list of control variables to those that are clearly exogenous and not subject to same endogeneity considerations as retirement duration. We achieve this aim by selecting variables measured when the respondent was young. The relationship between education and cognition has been studied. A number of studies have found evidence that education positively affects cognition in later life (e.g., Banks and Mazzonna 2012; Schneeweis et al. 2014). Because most schooling among older Irish women is completed when they are young and before they enter the labor market, it is exogenous. Education (School) is measured as the number of years of schooling completed. Several childhood characteristics have been shown to be associated with cognition in later life (Borenstein et al. 2006; Brown 2010; Everson-Rose et al. 2003). We employ a set of dummy variables based on respondent’s self-reporting of childhood conditions before age 14: NoBook = 1 if there were no or very few books in the home where respondent grew up (0 = otherwise); PoorHealth = 1 if respondent was in fair/poor health (0 = otherwise); PoorFam = 1 if respondent grew up in a poor family (0 = otherwise); MotherNotWork = 1 if respondent’s mother never worked outside the home (0 = otherwise); and FatherNotWork = 1 if respondent’s father never worked outside the home (0 = otherwise). For 37 women, information is missing on one or more of these variables, and these individuals are excluded from the sample. The final samples are 2,519 women for the model based on CTT1; 2,481 women for the model based on CTT2; and 2,383 women for the models based on CRT and CRT_VAR. Table 1 displays descriptive statistics for all independent variables based on the sample including 2,519 women. The average age is 65.8 years, and the average retirement duration is 12 years. #### Instrumental Variable: The Marriage Bar We believe that the abolition of the so-called marriage bar in Ireland caused exogenous variation in retirement decisions. The marriage bar was the legal requirement that women leave their paid employment after getting married. It was established for primary school teachers in 1933 and for civil servants in 1956. Although not legally obliged to do so, many semi-state and private organizations—including banks, utility companies, and large manufacturers—also dismissed women when they married. Private sector employers dismissed women working in primarily clerical and skilled jobs, but in some cases, they dismissed unskilled workers (Kiely and Leane 2012:91). The marriage bar for primary school teachers was lifted in 1958, and lifted for civil servants in 1973. Discrimination in employment on the grounds of sex or marital status was made illegal in 1977. Unsurprisingly, the labor force participation rate of married women aged 15 and older increased from 7.5 % in 1971 to 14.5 % in 1975 (Pyle 1990). For more on the Irish marriage bar, see Connolly (2003), Cullen Owens (2005), Kiely and Leane (2012), and O’Connor (1998). Crucially, no evidence exists that the marriage bar forced women to choose between paid employment or getting married. For example, Fig. 3 shows female activity rates for married and single women in 1970 in Ireland and other countries. Clearly, although activity rates of single women in Ireland were closely aligned to activity rates of single women in other countries, married women in Ireland were significantly less likely to be active than those in other countries. This suggests that an exogenous factor preventing married women from working in Ireland was present, which we believe is the marriage bar. Additional evidence consistent with this view is shown in Figs. 4 and 5. Figure 4 shows the proportions of never-married and married women calculated from the TILDA and SHARE surveys by birth cohort. In Ireland, like in many other countries, the proportion of never-married women is very small, suggesting that marriage was the norm for women born in the first half of the twentieth century. Figure 5 shows the historical crude marriage rate and the general marriage rate for Ireland (1926–1996). One would expect that if women were forced to choose between marriage and paid employment, the marriage rate would increase after the abolition of the marriage bar. Figure 5 shows that, if anything, the marriage rate stabilized and then decreased after the abolition of the marriage bar: that is, it moved in the opposite direction. Ireland is not the only country where women were dismissed from employment at marriage. For example, marriage bars survived up to the 1950s in the United States (Goldin 1990), England (Smith 1986), the Netherlands (Boeri and van Ours 2013), and Germany (Kolinsky 1989). Ireland is, however, unique in the duration of the enforcement of the marriage bar. Many Irish women who were affected are still alive and are in the TILDA sample. Comparatively, most of the women affected by the marriage bar in the other countries are likely to have died or to be very old. TILDA is the first large-scale longitudinal study on aging to include specific questions on the marriage bar. In TILDA Wave 3, women are asked the following question: “Did you ever have to leave a job because of the Marriage Bar?” The instrument used is a dummy variable, MarBar, coded 1 if a woman reported having to leave employment on getting married and 0 otherwise. It is also coded 0 for the few women in the sample who reported never marrying. Of the 2,519 women in the final sample, 318 reported that they had to leave a job because of the marriage bar. Some of these women subsequently returned to work. For these women, the instrument is coded 1, and RetDur is defined as the time elapsed between the date the respondent stopped working in her final job and the date of the health assessment for that respondent. ## Results ### Main Empirical Findings Columns 1 and 2 in Tables 2 and 3 show the OLS regression estimates for CTT1 and CTT2, and CRT and CRT_VAR, respectively. We transform the four outcome variables by taking the natural logarithm in order to ensure normality of the residuals. We then multiply the transformed scores by –1. Therefore, a higher value of these transformed variables suggests a higher level of cognitive functioning and vice versa, which makes interpretation of the estimates more intuitive. Since the cognition measures are transformed into natural logarithms, the regression coefficients can be easily transformed into percentage effects. For example, %RetDur = [exp(β1) – 1]. The coefficient of RetDur is negative for the four cognition measures, which is consistent with the hypothesis that a longer retirement duration is associated with lower cognition. Even though these associations are statistically significant at the 5 % level or lower, the magnitude is small. An additional year of retirement corresponds to a 0.2 % reduction in CTT1, a 0.1 % reduction in CTT2, a 0.1 % reduction in CRT, and a 0.3 % reduction in CRT_VAR. As expected, the coefficient of Age is negative for all four cognition measures and is statistically significant at the 1 % level. An additional year of age is associated with a reduction of 2.1 % in CTT1, 1.7 % in CTT2, 0.8 % in CRT, and 2.1 % in CRT_VAR. The coefficient of School is positive and statistically significant for all cognition measures. An additional year of schooling is associated with a 1.1 % increase in CTT1, a 1.3 % increase in CTT2, a 0.5 % increase in CRT, and a 1.6 % increase in CRT_VAR. As a group, the remaining variables should proxy well the socioeconomic conditions in the home where the respondent grew up. Strong support for the hypothesis that early-life conditions’ effects on later-life cognition is found for the variable growing up in a household with no or few books. The coefficient of NoBooks is negative and statistically significant at the 1 % level for all four cognition variables. The magnitude of this association is sizable: cognition is approximately 5.7 % lower for CTT1, 8.5 % lower for CTT2, 4.7 % lower for CRT, and 9.2 % lower for CRT_VAR growing up in a household with no or few books. It is not clear, however, whether this is a socioeconomic effect or an early reading effect. Self-reported health is also important. However, the reasons behind poor childhood health can be caused not only by socioeconomic conditions but also by factors largely independent of socioeconomic conditions (such as contagious disease). The association of RetDur with CTT1, CTT2, CRT, and CRT_VAR before and after the control variables are added is visually depicted in Fig. 6. Larger symbols are used to depict the RetDur coefficient before the control variables are added. Smaller symbols are used to depict the RetDur coefficient after the control variables are added. The 95 % confidence interval of each coefficient is also shown. Figure 6 shows that after the control variables are added, the size of the RetDur coefficient is approximately 20 % to 25 % of the size of the initial coefficient. The estimates of columns 1 and 2 in Tables 2 and 3 and Fig. 6 are based on the assumption that retirement duration is exogenous. The IV estimates that test for the potential endogeneity are shown in columns 3–8 in Tables 2 and 3. These columns show the first-stage IV estimates, the reduced-form estimates, and the second-stage IV estimates. As discussed in the previous section, the instrument employed is whether the woman reported having to leave a job because of the marriage bar. Columns 3 and 4 in Table 2 show the first-stage estimates for CTT1 and CTT2. There are only slight differences between the two columns because of the small differences in sample sizes. Columns 3 and 4 in Table 3 show the first-stage estimates for CRT and CRT_VAR. The two columns are identical because the sample size is the same in the two regressions. Clearly, MarBar is an important predictor of RetDur. The coefficient of MarBar in all equations is positive, large in magnitude, and statistically significant at well below the 1 % level. The statistics from the first-stage equations reported at the bottom of Tables 2 and 3 confirm that the instrument is not weak (see Bound et al. 1995; Hernan and Robins 2006; Murray 2006; Staiger and Stock 1997; Stock and Yogo 2005). For example, the F statistics range between 33.4 and 35.1. According to Staiger and Stock’s (1997) rule of thumb, the F statistics should be at least 10 for the instrument not to be weak. Similarity, the Stock-Yogo tests of weak identification reject the null hypothesis that the instrument is weak given that the F statistics exceed the selected critical values. In short, women who had to leave work because of the marriage bar have a longer retirement duration—or more correctly, a longer current period of not working—even after we control for age and education. The requirement that the instrument is a strong predictor of the potentially endogenous variable is satisfied. Unfortunately, we cannot directly test the requirement that there is no relationship between MarBar and Cog, apart from the indirect route via RetDur. However, we can obtain some information by considering the reduced-form regressions. In these regressions, CTT1, CTT2, CRT, and CRT_VAR are expressed as a function of the MarBar and of the other variables. These estimates are shown in columns 5 and 6 of Tables 2 and 3. MarBar is not statistically significant in any regression. In fact, the t statistics range between 0.2 and 0.7. This lack of statistical significance is encouraging and suggests that a relationship between the IV and the outcome of interest is unlikely to exist (Angrist and Krueger 2001; French and Popovici 2011). Finally, columns 7 and 8 in Tables 2 and 3 show the estimates of the second-stage regression results. For all cognition measures, the coefficient of RetDur is statistically insignificant. We compare differences between the estimators of the OLS and IV by employing the Hausman test. If OLS and IV estimators are found to have a different probability limit, then there is evidence that endogeneity is present, and OLS estimators will be inconsistent. If OLS and IV estimators are found to have the same probability limit, then there is no evidence that endogeneity is present. Both estimators will be consistent, and OLS estimation is preferred. The results of the Hausman test are given at the bottom of Tables 2 and 3. For all four cognition measures, the χ2 values are not statistically significant, implying that the null hypothesis that retirement duration is exogenous cannot be rejected at any level of statistical significance. This leads us to conclude that the OLS estimates are preferred. More generally, there is no statistical evidence that retirement duration is endogenous. Therefore, if retirement duration and cognition are causally related, then retirement affects cognition and not the other way round. ### Robustness Checks and Model Extensions To consider the robustness of the estimates, five sets of additional regressions are estimated (results available in Online Resource 1). The main conclusion is that the magnitude of the relationship between retirement duration and cognition remains small and statistically significant for all cognition measures. The first set of regressions employ three alternative IVs. As explained earlier, the marriage bar was not enforced universally. It was enforced by law in the public sector and mimicked by many, but not all, private sector employers. One cannot exclude that women with certain characteristics that are not measured in the TILDA data set selected into jobs that were affected, or not affected, by the marriage bar. For example, perhaps women with an innate desire to be active in the labor force opted for jobs that would allow them to work after marriage, primarily in the private sector. If this unmeasured variable innate desire to be active in the labor force is also correlated with employment/retirement duration and cognition, then the IV used in the analysis is not valid. Other unobservable characteristics that are potentially correlated with occupational choice at labor market entry and retirement duration are risk aversion and family preferences. For example, perhaps women who were more risk-averse and more family oriented opted for jobs in the public sector given that retiring at marriage was enforced by law. Similarly, perhaps women who were less risk-averse and less family-oriented opted for jobs in the private sector given that not all private sector employers enforced the marriage bar. In other terms, career prospects might have been better in the private sector. Although it is difficult to argue that traits such as risk aversion and family preferences are also correlated with cognition, one cannot exclude this might be the case. Three IVs that are clearly independent of the occupation the woman had are constructed. The first two instruments are proxies for the number of years a woman was exposed to the marriage bar. The first instrument, MarBarBirth, is the time elapsed between a woman’s year of birth and 1977, which was the year when discrimination in employment on the grounds of sex or marital status was made illegal in Ireland. The second instrument, MarBar18, is the time elapsed between the year in which a woman turned 18 years of age and 1977. The third instrument, PropMarBar, is equal to the proportion of women in the TILDA sample who reported having been affected by the marriage bar by birth cohort. The second set of regressions focus on whether the coefficient of RetDur is significantly different in magnitude under alternative specifications compared with what is found in the OLS baseline regressions of Tables 2 and 3. Five tests are employed. First, older women are excluded from the sample because employment rates among “older” women are very low. Second, women who performed the health assessment in their homes are excluded because they might differ from those who travelled to Trinity College Dublin to undertake the health assessment. Third, the unemployed and the sick and disabled are excluded to examine how robust the estimate of RetDur is to different definitions of retirement. Fourth, only those who have a retirement duration of at least one year are considered as retired. Fifth, quadratic and cubic terms in age are added to the list of explanatory factors. The third set of regressions investigate the role of “nonwork substitution activities.” It is reasonable to hypothesize that women who retired around the time of marriage or in early adulthood substituted work activities with nonwork activities. If such activities are mentally stimulating, one would expect to find a smaller and potentially insignificant effect of retirement duration on later-life cognition for this group of women. Three tests are employed. The first test is an investigation of whether the time spent out of the labor force—associated with having children—affects later-life cognition. Perhaps the positive effect that child-rearing has on cognition outweighs the negative effect of time not working. The second test is an investigation of whether there is an association between current nonwork activities—such as volunteering—and cognition. The (untestable) assumption is that women who engage more into nonwork activities at present are more likely to have engaged in such activities in the past. The third test employs additional information on employment histories collected for women who had to leave a job because of the marriage bar. The fourth set of regressions investigate whether the relationship between retirement and cognition can be explained by the nature of employment during one’s working life. Two tests are employed. The first test is to add an interaction term between RetDur and a dummy variable capturing the occupational sector of the preretirement job to the list of explanatory factors. If the cognitive stimulating nature of work is what improves cognitive function, then one can expect that the largest effects of retirement are for women in more cognitively stimulating jobs. The second test is to add an interaction term between RetDur and a dummy variable capturing whether employment is performed on a part-time or full-time basis. If there is a dose-response relationship between hours worked in a typical week and cognitive stimulation, then one can expect that the largest negative effects of retirement are for women in full-time jobs. However, another possibility is that women working part-time engage in equally cognitively stimulating activities when they are not in work—particularly for women who choose to retire gradually from work. The fifth set of regressions investigates the role of cohort differences in cognitive functioning because one cannot exclude that lower duration of retirement is simply a marker for being born in a more recent birth cohort. If, ceteris paribus, individuals born in later generations begin adulthood with higher overall levels of performance than those born in earlier generations, then these younger participants will outperform older participants at any given time point—not because of aging-related changes but because of historical differences in, for example, nutrition or education (Tucker-Drob and Salthouse 2011). To test this hypothesis, we add an interaction term between RetDur and age at retirement to the list of explanatory factors. ## Conclusion In this study, we empirically investigated the relationship between retirement duration and cognitive functioning using data for older Irish women collected in the third wave of The Irish Longitudinal Study on Ageing. Because retirement is potentially endogenous with respect to cognition, we used IV estimation. The identifying instrument was the abolition of the so-called marriage bar, which was the legal requirement that women leave paid employment upon getting married. We found a robust negative effect of retirement duration on cognition but found no support for the alternative causal direction. The finding of a negative effect of retirement duration on cognition supports the mental exercise—the “use it or lose it”—hypothesis. However, the effect of retirement duration on cognition was small in magnitude. At least three possible explanations account for our finding of a small effect. The first explanation is that our measure of retirement duration is possibly prone to measurement error, which in turn could reduce the predictive power of the effect of retirement duration on cognition. Respondents were asked to report the date they ceased working. These self-reported responses may be subject to recall bias. In addition, there might be substantial heterogeneity in what women perceive as being work. Finally, questions on timing of labor market exit were asked slightly differently to respondents according to whether they reported to be retired, unemployed, or disabled, or looking after family. This may have created some distortion in respondents’ self-reports as to when they stopped working. TILDA data might not be of the sufficient quality needed to support the rigorous statistical analysis of the relationship between retirement and cognition. The second explanation is that the calculation of retirement duration as “time elapsed since last stopped working” likely masks important aspects of employment histories. For example, it is reasonable to hypothesize that the estimated cognitive disadvantage associated with longer retirement duration is a lower bound of the true effect if women who retire gradually (i.e., who reduce hours of work before retirement) engage in equally stimulating cognitive activities in the newly available time before and after retirement. Similarly, it is also reasonable to hypothesize that women who have been retired for longer substituted work activities with equally cognitively stimulating nonwork activities. Information collected in TILDA on part-time versus full-time employment, current nonwork activities, and childbearing and child-rearing was used to test these hypotheses. We did not find strong evidence in favor of the substitution hypothesis. However, to investigate this with rigor would require the collection of detailed employment and life histories, which are not currently a feature of TILDA. The third explanation is that the cognition variables employed in the analysis are based on cognitive tests that capture processing speed and mental switching, which are central to effective cognitive functioning. These tests have two important advantages. First, they are administered and scored by nurses trained specifically for this purpose. Second, they require effortful processing at the time of assessment and do not require production of previously acquired knowledge (Tucker-Drob and Salthouse 2011). However, these tests have a clear limitation. Previous investigations of aging trajectories for the processing speed factor have reported strong genetic influences on rates of cognitive decline, with little contribution from environmental factors (Finkel et al. 2005; Reynolds et al. 2005). If the validity of this finding is confirmed by future research, then it will not be surprising that the effects of retirement duration on cognition—measured by tests capturing processing speed—are small. Another finding of our study was that the effects of education and other favorable early-life indicators on later-life cognition were positive and large in magnitude. This finding is encouraging because it suggests that educational attainment and early-life conditions may have important real-world implications for cognitive functioning in adulthood and old age (Tucker-Drob and Salthouse 2011). Whether these factors also protect from age-related cognitive decline is still the subject of debate in the literature and is beyond the scope of this study. Our analysis was based on older Irish women. It is reasonable to hypothesize that the effects of retirement on cognition might be greater among older Irish men perhaps as a result of men being more oriented toward paid work than women or perhaps as a result of women experiencing very heterogeneous life trajectories. As a consequence, some of the analysis for women was repeated for men using TILDA data. However, we could not investigate the potential endogeneity of retirement among men because the abolition of the marriage bar is only a sensible IV for women. These estimates are not reported here but are available on request. The estimates confirm a similar relationship for men. The magnitude of the relationship is larger for men but is still small. Because it was not possible to explore the endogeneity issue for men, these estimates, albeit encouraging, are only indicative and far from conclusive. In closing, we believe that our findings are generalizable to other high-income countries. Our analysis confirmed findings of research from other countries regarding the effect of age, education, and early-life socioeconomic conditions on later-life cognition. In this respect, Irish women appear to be no different. For the same reason, we do not believe that the key finding of a small, negative relationship between retirement duration and later-life cognition is not generalizable. However, further research based on additional data—and possibly on alternative sources of exogenous variation—is needed to further clarify the relationship between retirement and later-life cognition. Distinguishing the relative importance of the work environment and the alternative uses of time during retirement for maintaining levels of cognition in later life should be a priority. ## Acknowledgments The authors would like to thank the funders of TILDA, the Irish Department of Health, the Atlantic Philanthropies, and Irish Life plc for supporting this research. Researchers interested in using TILDA data may access the data at no charge from the following sites: Irish Social Science Data Archive (ISSDA) at University College Dublin (http://www.ucd.ie/issda/data/tilda/), and Interuniversity Consortium for Political and Social Research (ICPSR) at the University of Michigan (http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/34315). ## References Andel, R., Infurna, F. J., Hahn Rickenbach, E. A., Crowe, M., Marchiondo, L., & Fisher, G. G. ( 2015 ). Job strain and trajectories of change in episodic memory before and after retirement: Results from the Health and Retirement Study . Journal of Epidemiology and Community Health , 69 , 442 446 . 10.1136/jech-2014-204754 Angrist, J. D., & Imbens, G. W. ( 1994 ). Identification and estimation of local average treatment effects . Econometrica , 62 , 467 475 . 10.2307/2951620 Angrist, J. D., & Krueger, A. B. ( 2001 ). Instrumental variables and the search for identification: From supply and demand to natural experiments . Journal of Economic Perspectives , 15 ( 4 ), 69 85 . 10.1257/jep.15.4.69 Banks, J., & Mazzonna, F. ( 2012 ). The effect of education on old age cognitive abilities: Evidence from a regression discontinuity design . Economic Journal , 122 , 418 448 . 10.1111/j.1468-0297.2012.02499.x Boeri, T., & van Ours, J. ( 2013 ). The economics of imperfect labor markets . Princeton, NJ : Princeton University Press . Bonsang, E., Adam, S., & Perelman, S. ( 2012 ). Does retirement affect cognitive functioning? . Journal of Health Economics , 31 , 490 501 . 10.1016/j.jhealeco.2012.03.005 Borenstein, A. R., Copenhaver, C. I., & Mortimer, J. A. ( 2006 ). Early-life risk factors for Alzheimer disease . Alzheimer Disease & Associated Disorders , 20 , 63 72 Bound, J., Jaeger, D. A., & Baker, R. M. ( 1995 ). Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variable is weak . Journal of the American Statistical Association , 90 , 443 450 . Brown, M. T. ( 2010 ). Early-life characteristics, psychiatric history, and cognition trajectories in later life . Gerontologist , 50 , 646 656 . 10.1093/geront/gnq049 Central Statistics Office . ( 1926–2000 ). Annual reports on marriages, births and deaths in Ireland from 1864 to 2000 [Data set]. Dublin : Government of Ireland Central Statistics Office . ( 2012 ). This is Ireland: Highlights from census 2011, part 2 (Report). Dublin, Ireland : Stationery Office Churchill, J. D., Galvez, R., Colcombe, S., Swain, R. A., Kramer, A. F., & Greenough, W. T. ( 2002 ). Exercise, experience and the aging brain . Neurobiology of Aging , 23 , 941 955 . 10.1016/S0197-4580(02)00028-3 Coe, N. B., von Gaudecker, H. M., Lindeboom, M., & Maurer, J. ( 2012 ). The effect of retirement on cognitive functioning . Health Economics , 21 , 913 927 . 10.1002/hec.1771 Connolly, E. ( 2003 ). Durability and change in state gender systems: Ireland in the 1950s . European Journal of Women’s Studies , 10 , 65 86 . 10.1177/1350506803010001797 Cronin, H., O’Regan, C., Finucane, C., Kearney, P., & Kenny, R. A. ( 2013 ). Health and aging: Development of The Irish Longitudinal Study on Ageing health assessment . Journal of the American Geriatrics Society , 61 ( S2 ), S269 S278 . 10.1111/jgs.12197 Cullen Owens, R. ( 2005 ). A social history of women in Ireland . Dublin, Ireland : Gill & Macmillan . de Grip, A., Dupuy, A., Jolles, J., & van Boxtel, M. ( 2015 ). Retirement and cognitive development in the Netherlands: Are the retired really inactive? . Economics & Human Biology , 19 , 157 169 . 10.1016/j.ehb.2015.08.004 D’Elia, L. F., Satz, P., Uchiyama, C. L., & White, T. ( 1996 ). Color Trails test: Professional manual . Odessa, FL : Psychological Assessment Resources . Everson-Rose, S. A., Mendes de Leon, C. F., Bienias, J. L., Wilson, R. S., & Evans, D. A. ( 2003 ). Early life conditions and cognitive functioning in later life . American Journal of Epidemiology , 158 , 1083 1089 . 10.1093/aje/kwg263 Finkel, D., Andel, R., Gatz, M., & Pedersen, N. L. ( 2009 ). The role of occupational complexity in trajectories of cognitive aging before and after retirement . Psychology and Aging , 24 , 563 573 . 10.1037/a0015511 Finkel, D., Reynolds, C. A., McArdle, J. J., & Pedersen, N. L. ( 2005 ). The longitudinal relationship between processing speed and cognitive ability: Genetic and environmental influences . Behavior Genetics , 35 , 535 549 . 10.1007/s10519-005-3281-5 Fisher, G. G., Stachowski, A., Infurna, F. J., Faul, J. D., Grosch, J., & Tetrick, L. E. ( 2014 ). Mental work demands, retirement, and longitudinal trajectories of cognitive functioning . Journal of Occupational Health Psychology , 19 , 231 242 . 10.1037/a0035724 Flynn, J. R. ( 1987 ). Massive IQ gains in 14 nations: What IQ tests really measure . Psychological Bulletin , 101 , 171 191 . 10.1037/0033-2909.101.2.171 French, M. T., & Popovici, I. ( 2011 ). That instrument is lousy! In search of agreement when using instrumental variables estimation in substance use research . Health Economics , 20 , 127 146 . 10.1002/hec.1572 Goldin, C. ( 1990 ). Why did change take so long? . In Goldin, C., & Dale, C. (Eds.), Understanding the gender gap: An economic history of American women (pp. 159 184 ). New York, NY : Oxford University Press . Hancock, R., Pudney, S., Barker, G., Hernandez, M., & Sutherland, H. ( 2004 ). The take-up of multiple means-tested benefits by British pensioners: Evidence from the Family Resources Survey . Fiscal Studies , 25 , 279 303 . 10.1111/j.1475-5890.2004.tb00540.x Hernan, M. A., & Robins, J. M. ( 2006 ). Instruments for causal inference: An epidemiologist’s dream? . Epidemiology , 17 , 360 372 . 10.1097/01.ede.0000222409.00878.37 Hertzog, C., Kramer, A. F., Wilson, R. S., & Lindenberger, U. ( 2008 ). Enrichment effects on adult cognitive development: Can the functional capacity of older adults be preserved and enhanced? Psychological Science in the Public Interest , 9 , 1 65 . Hultsch, D. F., Hertzog, C., Small, B. J., & Dixon, R. A. ( 1999 ). Use it or lose it: Engaged lifestyle as a buffer of cognitive decline in aging? . Psychology and Aging , 14 , 245 263 . 10.1037/0882-7974.14.2.245 Kearney, P. M., Cronin, H., O’Regan, C., Kamiya, Y., Savva, G. M., & Kenny, R. ( 2011 ). Cohort profile: The Irish Longitudinal Study on Ageing . International Journal of Epidemiology , 40 , 877 884 . 10.1093/ije/dyr116 Kiely, E., & Leane, M. ( 2012 ). Irish women at work, 1930–1960: An oral history . Dublin, Ireland : . Kolinsky, E. ( 1989 ). Women in West Germany: Life, work and politics . Oxford, UK : Berg . Mazzonna, F., & Peracchi, F. ( 2012 ). Ageing, cognitive abilities and retirement . European Economic Review , 56 , 691 710 . 10.1016/j.euroecorev.2012.03.004 Murray, M. P. ( 2006 ). Avoiding invalid instruments and coping with weak instruments . Journal of Economic Perspectives , 20 ( 4 ), 111 132 . 10.1257/jep.20.4.111 O’Connor, P. ( 1998 ). Emerging voices: Women in contemporary Irish society . Dublin, Ireland : . Pyle, J. L. ( 1990 ). The state and women in the economy: Lessons from sex discrimination in the Republic of Ireland . Albany : State University of New York Press . Rennemark, M., & Berglund, J. ( 2014 ). Decreased cognitive functions at the age of 66, as measured by the MMSE, associated with having left working life before the age of 60: Results from the SNAC study . Scandinavian Journal of Public Health , 42 , 304 309 . 10.1177/1403494813520357 Reynolds, C. A., Finkel, D., McArdle, J. J., Gatz, M., Berg, S., & Pedersen, N. L. ( 2005 ). Quantitative genetic analysis of latent growth curve models of cognitive abilities in adulthood . Developmental Psychology , 41 , 3 16 . 10.1037/0012-1649.41.1.3 Roberts, B. A., Fuhrer, R., Marmot, M., & Richards, M. ( 2011 ). Does retirement influence cognitive performance? The Whitehall II Study . Journal of Epidemiology and Community Health , 65 , 958 963 . 10.1136/jech.2010.111849 Rohwedder, S., & Willis, R. J. ( 2010 ). Mental retirement . Journal of Economic Perspectives , 24 ( 1 ), 119 138 . 10.1257/jep.24.1.119 Salthouse, T. A. ( 2006 ). Mental exercise and mental aging: Evaluating the validity of the “use it or lose it” hypothesis . Perspectives on Psychological Science , 1 , 68 87 . 10.1111/j.1745-6916.2006.00005.x Schneeweis, N., Skirbekk, V., & Winter-Ebmer, R. ( 2014 ). Does education improve cognitive performance four decades after school completion? . Demography , 51 , 619 643 . 10.1007/s13524-014-0281-1 Smith, H. L. ( 1986 ). War and social change: British society in the Second World War . Manchester, UK : Manchester University Press . Sorenson, H. ( 1938 ). . Minneapolis : University of Minnesota Press . Staiger, D., & Stock, J. H. ( 1997 ). Instrumental variables regression with weak instruments . Econometrica , 65 , 557 586 . 10.2307/2171753 Stern, Y. ( 2002 ). What is cognitive reserve? Theory and research application of the reserve concept . Journal of the International Neuropsychological Society , 8 , 448 460 . 10.1017/S1355617702813248 Stern, Y. ( 2003 ). The concept of cognitive reserve: A catalyst for research . Journal of Clinical and Experimental Neuropsychology , 25 , 589 593 . 10.1076/jcen.25.5.589.14571 Stock, J., & Yogo, M. ( 2005 ). Testing for weak instruments in linear IV regression . In Andrews, D. W. K. (Ed.), Identification and inference for econometric models (pp. 80 108 ). New York, NY : Cambridge University Press . Tucker-Drob, E. M. ( 2011 ). Neurocognitive functions and everyday functions change together in old age . Neuropsychology , 25 , 368 377 . 10.1037/a0022348 Tucker-Drob, E. M., & Salthouse, T. A. ( 2011 ). Individual differences in cognitive aging . In Chamorro-Premuzic, T., von Stumm, S., & Furnham, A. (Eds.), The Wiley-Blackwell handbook of individual differences (pp. 242 268 ). Oxford, UK : John Wiley & Sons . van Praag, H., Kempermann, G., & Gage, F. H. ( 2000 ). Neural consequences of environmental enrichment . Nature Reviews: Neuroscience , 1 , 191 198 . 10.1038/35044558 Whelan, B. J., & Savva, G. M. ( 2013 ). Design and methodology of The Irish Longitudinal Study on Ageing . Journal of the American Geriatrics Society , 61 ( Suppl. 2 ), S265 S268 . 10.1111/jgs.12199
{}
# Generate the shortest De Bruijn A De Bruijn sequence is interesting: It is the shortest, cyclic sequence that contains all possible sequences of a given alphabet of a given length. For example, if we were considering the alphabet A,B,C and a length of 3, a possible output is: AAABBBCCCABCACCBBAACBCBABAC You will notice that every possible 3-character sequence using the letters A, B, and C are in there. Your challenge is to generate a De Bruijn sequence in as few characters as possible. Your function should take two parameters, an integer representing the length of the sequences, and a list containing the alphabet. Your output should be the sequence in list form. You may assume that every item in the alphabet is distinct. An example generator can be found here Standard loopholes apply • Can the integer representing the length of the sequences be larger than the number of unique letters? Dec 23 '14 at 21:17 • Yes. A binary sequence of length 4 would be 0000111101100101 Dec 23 '14 at 21:18 • "Your challenge is to generate a De Bruijn sequence in as few characters as possible" - Does this mean "code golf" or "get the shortest possible De Bruijn sequence length"? Dec 23 '14 at 21:46 • Both. To qualify, your program must output the shortest sequence possible, but to win, your program must be the shortest. Dec 23 '14 at 21:59 • @xem: usually De Bruijn sequences include wraparound, which is where those missing sequences appear. Dec 25 '14 at 4:14 # Pyth, 31 bytes This is the direct conversion of the algorithm used in my CJam answer. Tips for golfing welcome! Mu?G}H+GG+G>Hefq<HT>G-lGTUH^GHk This code defines a function g which takes two arguments, the string of list of characters and the number. Example usage: Mu?G}H+GG+G>Hefq<HT>G-lGTUH^GHkg"ABC"3 Output: AAABAACABBABCACBACCBBBCBCCC Code expansion: M # def g(G,H): u # return reduce(lambda G, H: ?G # (G if }H # (H in >H # slice_end(H, e # last_element( f # Pfilter(lambda T: q # equal( <HT # slice_start(H,T), >G # slice_end(G, -lGT # minus(Plen(G),T))), UH # urange(H)))))), ^GH # cartesian_product(G,H), k # "") Try it here # CJam, 52 49 48 bytes This is surprisingly long. This can be golfed a lot, taking in tips from the Pyth translation. q~a*{m*:s}*{:H\:G_+\#)GGHH,,{_H<G,@-G>=},W=>+?}* The input goes like 3 "ABC" i.e. - String of list of characters and the length. and output is the De Bruijn string AAABAACABBABCACBACCBBBCBCCC Try it online here • Gosh CJam should be banned, it is not just made for one golfing task but it seems for every possible golfing task... Dec 23 '14 at 21:42 • @flawr you should wait for a Pyth answer then :P Dec 23 '14 at 21:44 # CJam, 52 49 bytes Here is a different approach in CJam: l~:N;:L,(:Ma{_N*N<0{;)_!}g(+_0a=!}g]{,N\%!},:~Lf= Takes input like this: "ABC" 3 and produces a Lyndon work like CCCBCCACBBCBACABCAABBBABAAA Try it here. This makes use of the relation with Lyndon words. It generates all Lyndon words of length n in lexicographic order (as outlined in that Wikipedia article), then drops those whose length doesn't divide n. This already yields the De Bruijn sequence, but since I'm generating the Lyndon words as strings of digits, I also need to replace those with the corresponding letters at the end. For golfing reasons, I consider the later letters in the alphabet to have lower lexicographic order. # Jelly, 15 bytes ṁL*¥Œ!;wⱮẠɗƇṗḢ Try it online! Pretty slow, uses a brute force approach with around an $$\O(n \times m \times (n \times m)!)\$$ complexity, where $$\n\$$ is the integer input and $$\m\$$ is the length of the string. Times out if both $$\n\$$ and $$\m\$$ are greater than 3 on TIO ## How it works The length of the De Bruijn sequence will always be $$\m^n\$$ and each symbol in the provided alphabet will occur the same number of times, $$\m^{n-1}\$$. Therefore, we generate the string with that many symbols, then filter its permutations to find valid De Bruijn sequences. ṁL*¥Œ!ẋ2wⱮẠʋƇṗḢ - Main link. Takes an alphabet A on the left and n on the right L - Length of A * - Raised to the power n ṁ - Mold A to that length Œ! - All permutations ṗ - Powerset; Get all length n combinations of A. Call that C ʋƇ - Filter the permutations P on the following dyad g(P, C): ẋ2 - Repeat P twice Ɱ - For each element E in C: w - Is it a sublist of P? Ạ - Is this true for all elements of C? Ḣ - Take the first one # JavaScript (ES6) 143 Using Lyndon words, like Martin's aswer, just 3 times long... F=(a,n)=>{ for(w=[-a[l='length']],r='';w[0];) { n%w[l]||w.map(x=>r+=a[~x]); for(;w.push(...w)<=n;); for(w[l]=n;!~(z=w.pop());); w.push(z+1) } return r } Test In FireFox/FireBug console console.log(F("ABC",3),F("10",4)) Output CCCBCCACBBCBACABCAABBBABAAA 0000100110101111 # Python 2, 114 bytes I'm not really sure how to golf it more, due to my approach. def f(a,n): s=a[-1]*n while 1: for c in a: if((s+c)[len(s+c)-n:]in s)<1:s+=c;break else:break print s[:1-n] Try it online Ungolfed: This code is a trivial modification from my solution to more recent challenge. def f(a,n): s=a[-1]*n while 1: for c in a: p=s+c if p[len(p)-n:]in s: continue else: s=p break else: break print s[:1-n] The only reason [:1-n] is needed is because the sequence includes the wrap-around. # Powershell, 164 96 bytes -68 bytes with -match O($n*2^n) instead recursive generator O(n*log(n)) param($s,$n)for(;$z=$s|% t*y|?{"$($s[-1])"*($n-1)+$x-notmatch-join"$x$_"[-$n..-1]}){$x+=$z[0]}$x Ungolfed & test script: $f = { param($s,$n) # $s is a alphabet,$n is a subsequence length for(; # repeat until... $z=$s|% t*y|?{ # at least a character from the alphabet returns $true for expression: "$($s[-1])"*($n-1)+$x-notmatch # the old sequence that follows two characters (the last letter from the alphabet) not contains -join"$x$_"[-$n..-1] # n last characters from the new sequence }){ $x+=$z[0] # replace old sequence with new sequence } $x # return the sequence } @( ,("ABC", 2, "AABACBBCC") ,("ABC", 3, "AAABAACABBABCACBACCBBBCBCCC") ,("ABC", 4, "AAAABAAACAABBAABCAACBAACCABABACABBBABBCABCBABCCACACBBACBCACCBACCCBBBBCBBCCBCBCCCC") ,("ABC", 5, "AAAAABAAAACAAABBAAABCAAACBAAACCAABABAABACAABBBAABBCAABCBAABCCAACABAACACAACBBAACBCAACCBAACCCABABBABABCABACBABACCABBACABBBBABBBCABBCBABBCCABCACABCBBABCBCABCCBABCCCACACBACACCACBBBACBBCACBCBACBCCACCBBACCBCACCCBACCCCBBBBBCBBBCCBBCBCBBCCCBCBCCBCCCCC") ,("ABC", 6, "AAAAAABAAAAACAAAABBAAAABCAAAACBAAAACCAAABABAAABACAAABBBAAABBCAAABCBAAABCCAAACABAAACACAAACBBAAACBCAAACCBAAACCCAABAABAACAABABBAABABCAABACBAABACCAABBABAABBACAABBBBAABBBCAABBCBAABBCCAABCABAABCACAABCBBAABCBCAABCCBAABCCCAACAACABBAACABCAACACBAACACCAACBABAACBACAACBBBAACBBCAACBCBAACBCCAACCABAACCACAACCBBAACCBCAACCCBAACCCCABABABACABABBBABABBCABABCBABABCCABACACABACBBABACBCABACCBABACCCABBABBABCABBACBABBACCABBBACABBBBBABBBBCABBBCBABBBCCABBCACABBCBBABBCBCABBCCBABBCCCABCABCACBABCACCABCBACABCBBBABCBBCABCBCBABCBCCABCCACABCCBBABCCBCABCCCBABCCCCACACACBBACACBCACACCBACACCCACBACBACCACBBBBACBBBCACBBCBACBBCCACBCBBACBCBCACBCCBACBCCCACCACCBBBACCBBCACCBCBACCBCCACCCBBACCCBCACCCCBACCCCCBBBBBBCBBBBCCBBBCBCBBBCCCBBCBBCBCCBBCCBCBBCCCCBCBCBCCCBCCBCCCCCC") ,("01", 3, "00010111") ,("01", 4, "0000100110101111") ,("abcd", 2, "aabacadbbcbdccdd") ,("0123456789", 3, "0001002003004005006007008009011012013014015016017018019021022023024025026027028029031032033034035036037038039041042043044045046047048049051052053054055056057058059061062063064065066067068069071072073074075076077078079081082083084085086087088089091092093094095096097098099111211311411511611711811912212312412512612712812913213313413513613713813914214314414514614714814915215315415515615715815916216316416516616716816917217317417517617717817918218318418518618718818919219319419519619719819922232242252262272282292332342352362372382392432442452462472482492532542552562572582592632642652662672682692732742752762772782792832842852862872882892932942952962972982993334335336337338339344345346347348349354355356357358359364365366367368369374375376377378379384385386387388389394395396397398399444544644744844945545645745845946546646746846947547647747847948548648748848949549649749849955565575585595665675685695765775785795865875885895965975985996667668669677678679687688689697698699777877978878979879988898999") ,("9876543210", 3, "9998997996995994993992991990988987986985984983982981980978977976975974973972971970968967966965964963962961960958957956955954953952951950948947946945944943942941940938937936935934933932931930928927926925924923922921920918917916915914913912911910908907906905904903902901900888788688588488388288188087787687587487387287187086786686586486386286186085785685585485385285185084784684584484384284184083783683583483383283183082782682582482382282182081781681581481381281181080780680580480380280180077767757747737727717707667657647637627617607567557547537527517507467457447437427417407367357347337327317307267257247237227217207167157147137127117107067057047037027017006665664663662661660655654653652651650645644643642641640635634633632631630625624623622621620615614613612611610605604603602601600555455355255155054454354254154053453353253153052452352252152051451351251151050450350250150044434424414404334324314304234224214204134124114104034024014003332331330322321320312311310302301300222122021121020120011101000") ) |% {$s,$n,$e = $_$r = &$f$s $n "$($r-eq$e): \$r" } Output: True: AABACBBCC True: AAABAACABBABCACBACCBBBCBCCC True: AAAABAAACAABBAABCAACBAACCABABACABBBABBCABCBABCCACACBBACBCACCBACCCBBBBCBBCCBCBCCCC True: AAAAABAAAACAAABBAAABCAAACBAAACCAABABAABACAABBBAABBCAABCBAABCCAACABAACACAACBBAACBCAACCBAACCCABABBABABCABACBABACCABBACABBBBABBBCABBCBABBCCABCACABCBBABCBCABCCBABCCCACACBACACCACBBBACBBCACBCBACBCCACCBBACCBCACCCBACCCCBBBBBCBBBCCBBCBCBBCCCBCBCCBCCCCC True: AAAAAABAAAAACAAAABBAAAABCAAAACBAAAACCAAABABAAABACAAABBBAAABBCAAABCBAAABCCAAACABAAACACAAACBBAAACBCAAACCBAAACCCAABAABAACAABABBAABABCAABACBAABACCAABBABAABBACAABBBBAABBBCAABBCBAABBCCAABCABAABCACAABCBBAABCBCAABCCBAABCCCAACAACABBAACABCAACACBAACACCAACBABAACBACAACBBBAACBBCAACBCBAACBCCAACCABAACCACAACCBBAACCBCAACCCBAACCCCABABABACABABBBABABBCABABCBABABCCABACACABACBBABACBCABACCBABACCCABBABBABCABBACBABBACCABBBACABBBBBABBBBCABBBCBABBBCCABBCACABBCBBABBCBCABBCCBABBCCCABCABCACBABCACCABCBACABCBBBABCBBCABCBCBABCBCCABCCACABCCBBABCCBCABCCCBABCCCCACACACBBACACBCACACCBACACCCACBACBACCACBBBBACBBBCACBBCBACBBCCACBCBBACBCBCACBCCBACBCCCACCACCBBBACCBBCACCBCBACCBCCACCCBBACCCBCACCCCBACCCCCBBBBBBCBBBBCCBBBCBCBBBCCCBBCBBCBCCBBCCBCBBCCCCBCBCBCCCBCCBCCCCCC True: 00010111 True: 0000100110101111 f=lambda a,n:(a[:n]in a[1:])*a[n:]or max((f(c+a,n)for c in{*a}),key=len) `
{}
#### Volume 12, issue 2 (2012) Recent Issues The Journal About the Journal Editorial Board Editorial Interests Editorial Procedure Submission Guidelines Submission Page Subscriptions Author Index To Appear Contacts ISSN (electronic): 1472-2739 ISSN (print): 1472-2747 Free group automorphisms with parabolic boundary orbits ### Arnaud Hilion Algebraic & Geometric Topology 12 (2012) 933–950 ##### Abstract For $N\ge 4$, we show that there exist automorphisms of the free group ${F}_{N}$ which have a parabolic orbit in $\partial {F}_{N}$. In fact, we exhibit a technology for producing infinitely many such examples. ##### Keywords automorphism of free group, fixed point, symbolic dynamics ##### Mathematical Subject Classification 2010 Primary: 20E05, 20E36, 37B25, 37E15 Secondary: 20F65, 37B05 ##### Publication Received: 22 September 2011 Revised: 28 January 2012 Accepted: 28 January 2012 Published: 24 April 2012 ##### Authors Arnaud Hilion Mathematiques LATP - UMR 7353 Aix-Marseille Université Avenue de l’escadrille Normandie-Niémen 13397 Marseille Cedex 20 France http://junon.u-3mrs.fr/hilion/
{}
# A failure of formats I have a very long diatribe on the state of document editors that I am working on, and almost anyone that knows me has heard parts of it, as I frequently rail against all of them: MS Word, Google Docs, Dropbox Paper, etc. In the process of writing up my screed I started thinking what a proper document format would look like and realized the awfulness of all the current document formats is a much more pressing problem than the lack of good editors. In fact, it may be that failing to have a standard, robust, and extensible file format for documents might be the single biggest impediment to having good document editing tools. Seriously, even if you designed the world’s best document editor but all it can output is PNGs then how useful if your world’s best document editor? Let’s start with HTML, as that is probably the most successful document format in the history of the world outside of plain old paper. I can write an HTML page, host it on the web, and it can be read on any computer in the world. That’s amazing reach and power, but if you look at HTML as a universal document format you quickly realize it is stunted and inadequate. Let’s look at a series of comparisons between HTML and paper, starting with simple text, a note to myself: Now this is something that HTML excels at: <p>Get Milk</p> And here HTML is better than paper because that HTML document is easily machine readable. I throw in the word “easily” because I know some ML/AI practitioner will come along and claim they can also “read” the image, but we know there are many orders of magnitude difference in processing power and complexity of those two approaches, so let’s ignore them. So far HTML is looking good. Let’s make our example a little more complex; a shopping list. This again is something that HTML is great at: <h1>Shopping list</h1> <ul> <li>Milk</li> <li>Eggs</li> </ul> Not only can HTML represent the text that’s been written, but can also capture the intended structure by encoding it as a list using the <ul> and <li> elements. So now we are expressing not only the text, but also the meaning, and again this representation is machine readable. At this point we should take a small detour to talk about the duality we are seeing here with HTML, between the markup and the visual representation of that markup. That is, the following HTML: <h1>Shopping list</h1> <ul> <li>Milk</li> <li>Eggs</li> </ul> Is rendered in the browser as: # Shopping list • Milk • Eggs The markup carries not only the text, but also the semantics. I hesitate to use the term ‘semantics’ because that’s an overloaded term with a long history, particularly in web technology, but that is what we’re talking about. The web browser is able to convert from the markup semantics, <ul> and <li>, into the visual representation of a list, i.e. vertically laying out the items and putting bullets next to them. That duality between meaningful markup in text, distinct from the final representation, is important as it’s the distinction that made search engines possible. And we aren’t restricted to just visual representations, screen readers can also use the markup to guide their work of turning the markup into audio. But as we make our example a little more complex we start to run into the limits of HTML, for example when we draw a block diagram: When the web was first invented your only way to add such a thing to web page would have been by drawing it as an image and then including that image in the page: <img src="server.png" title="Server diagram with two disk drives."> The image is not very machine readable, even with the added title attribute. HTML didn’t initially offer a native way to create that visualization in a semantically more meaningful way. About a decade after the web came into being SVG was standardized and became available, so you can now write this as: <svg width="580" height="400" xmlns="http://www.w3.org/2000/svg"> <title>Server diagram with two disk drives.</title> <g> <rect height="60" width="107" y="45" x="215" stroke-width="1.5" stroke="#000" fill="#fff"/> <text font-size="24" y="77" x="235" stroke-width="0" stroke="#000" fill="#000000" id="svg_3"> Server </text> <line y2="204" x2="173" y1="105" x1="267" stroke-width="1.5" stroke="#000" fill="none" id="svg_4"/> <rect height="59" width="125" y="205" x="98" stroke-width="1.5" stroke="#000" fill="#fff" id="svg_5"/> <rect height="62" width="122" y="199" x="342" stroke-width="1.5" stroke="#000" fill="#fff" id="svg_6"/> <line y2="197" x2="403" y1="103" x1="268" stroke-width="1.5" stroke="#000" fill="none" id="svg_7"/> <text font-size="24" y="240" x="119" fill-opacity="null" stroke-opacity="null" stroke-width="0" stroke="#000" fill="#000000" id="svg_8"> Disk 1 </text> <text stroke="#000" font-size="24" y="236" x="361" fill-opacity="null" stroke-opacity="null" stroke-width="0" fill="#000000" id="svg_9"> Disk 2 </text> </g> </svg> This is a slight improvement over the image. For example, we can extract the title and the text found in the diagram from such a representation, but the markup isn’t what I would call human readable. To get a truly human readable markup of such a diagram we’d need to leave HTML and write it in Graphviz dot notation: graph { Server -- "Disk 1"; Server -- "Disk 2"; } So we’ve already left the capabilities of HTML behind and we’ve only just begun, what about math formulas? Again, about a decade after the web started MathML was standardized as a way to add math to HTML pages. It’s been 20 years since the MathML specification was released and you still can’t use MathML in your web pages because browser support is so bad. But even if MathML had been fully adopted and incorporated in to all web browsers, would we be done? Surely not, what about musical notations? If we want to include notes in a semantically meaningful way on a web page do we have to wait another 10 years for standardization and then hope that browsers actually implement the spec? You see, the root of the issue is that humans don’t just communicate by text, we communicate by notation; we are continually creating new new notations, and we will never stop creating them. No matter how many FooML markup languages you standardize and stuff into a web browser implementation you will only ever scratch the surface, you will always be leaving out more notations than you include. This is the great failing of HTML, that you cannot define some squiggly set of lines as a symbol and then use that symbol in your markup. ## Is such a thing even possible? The only markup language that comes even close to achieving this universality of expression is TeX. In TeX it is possible to create you own notations and define how they are rendered and then to use that notation in your document. For example, there’s a TeX package that enables Feynman diagrams: \feynmandiagram [horizontal=a to b] { i1 -- [fermion] a -- [fermion] i2, a -- [photon] b, f1 -- [fermion] b -- [fermion] f2, }; Note that both TeX and the \feynmandiagram notation are both human readable, which is an important distinction, as without it you could point at Postscript or PDF as a possible solution. While PDF may be able to render just about anything, the underlying markup in PDF files is not human readable. I’m also not suggesting we abandon HTML in favor of TeX. What I am pointing out is that there is a serious gap in the capabilities of HTML: the creation and re-use of notation, and if we want HTML to be a universal format for human communication then we need to fill this gap.
{}
# Chapter 16 - Questions and Problems - Page 771: 16.29 $[H_3O^+] = 2.51 \times 10^{- 5}M$ #### Work Step by Step 1. Calculate the pH. pOH + pH = 14 9.4 + pH = 14 pH = 4.6 2. Find $[H_3O^+]$ $[H_3O^+] = 10^{- 4.6}$ $[H_3O^+] = 2.51 \times 10^{- 5}$ After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
{}
# Why does bromine water make the viscosity of olive oil increas? [duplicate] 1. Olive oil consists of almost 100% of fat. As you can see in the table of contents below, it includes both unsaturated, monounsaturated and polyunsaturated fats in the oil. Nutrition declaration per 100 ml (= 92 g) Energy 3390kJ / 823 Kcal Fat 92 g Carbohydrate 0 g Of which saturated fat is 13 g Of which sugars are 0 g Monounsaturated fat 68 g Protein 0 g Polyunsaturated fat 7.2 g Salt 0 g If you add bromine water, Br2 (aq), to the olive oil, its consistency changes significantly and it becomes more viscous. What is it that happens chemically with the olive oil when adding bromine water and why does it become more viscous? ## marked as duplicate by Karsten Theis, Ivan Neretin, Mithoron, Todd Minehardt, Mathew MahindaratneMay 1 at 1:13 It is a similar process to hydrogenation of plant oil on Raney nickel, producing margarines. The saturated fats have zig-zag structure of $$\ce{C-C}$$ chain, what makes them good in "molecule spooning", leading to high melting point. OTOH the cus double bonds of oils are obstacles for "spooning" and the menting point is low. The bromine is, AFAIK, used to determine the amount of unsaturated bonds in edible oils. \begin{align} \small \ce{ \\ \small -CH2-CH=CH-CH_2 - + H2 &->[Ni] \small -CH2-CH2-CH2-CH_2 -\\ \small -CH2-CH=CH-CH_2 - + Br2 &-> \small -CH2-CHBr-CHBr-CH_2 -\\ }\end{align} Fatty acids are components of many types of lipids. Fatty acids are carboxylic acids with very long hydrocarbon chains, usually consists of 12-18 carbon atoms long hydrophobic chain. Olive oil is a liquid obtained from olives, which contains mixture of fatty acids. According to Wikipedia: The composition of olive oil varies with the cultivar, altitude, time of harvest and extraction process. It consists mainly of oleic acid (up to 83%), with smaller amounts of other fatty acids including linoleic acid (up to 21%) and palmitic acid (up to 20%). The main component of olive oil, oleic acid, is a monounsaturated fatty acid (FA; See Table) and generally average about 70% in olive oil. Another monounsaturated FA, palmitoleic acid averages in the range of 0.3-3.5%. Olive oil also contains polyunsaturated FAs such as linoleic acid (~15%) and $$\alpha$$-linolenic acid (~0.5%). Only about 20% saturated FAs (see Table)) are present in olive oil and most of them are palmitic acid (~13.0%) and stearic acid (~1.5%). Structures of Common Fatty Acids (saturated), of which, palmitic (~13.0%) and stearic acid (~1.5%) are present in olive oil: $$\begin{array}{ccc} \text{Name of FA} & \text{# of carbons} & \text{Structure} & \text{Melting Point} \\ \hline \text{Lauric acid} & 12 & \ce{CH3(CH2)10CO2H} & \pu{44 ^{\circ}C} \\ \text{Myristic acid} & 14 & \ce{CH3(CH2)12CO2H} & \pu{58 ^{\circ}C} \\ \text{Palmitic acid } & 16 & \ce{CH3(CH2)14CO2H} & \pu{63 ^{\circ}C} \\ \text{Stearic acid} & 18 & \ce{CH3(CH2)16CO2H} & \pu{70 ^{\circ}C} \\ \hline \end{array}$$ Structures of Common Fatty Acids (unsaturated), three of which, oleic acid (~70%, linoleic acid (~15%), and $$\alpha$$-linolenic acid (~0.5%) are present in olive oil: $$\begin{array}{ccc} \text{Name of FA} & \text{# of carbons} & \text{Structure} & \text{Melting Point} \\ \hline \text{ Palmitoleic acid} & 16 & \ce{CH3(CH2)5CH=CH(CH2)7CO2H} & \pu{-1 ^{\circ}C} \\ \text{Oleic acid} & 18 & \ce{CH3(CH2)7CH=CH(CH2)7CO2H} & \pu{4 ^{\circ}C} \\ \text{Linoleic acid} & 18 & \ce{CH3(CH2)4CH=CHCH2CH=CH(CH2)7CO2H} & \pu{-5 ^{\circ}C} \\ \text{ Linolenic acid} & 18 & \ce{CH3CH2(CH=CHCH2)2CH=CH(CH2)7CO2H} & \pu{-11 ^{\circ}C} \\ \hline \end{array}$$ According to the melting points of FAs in olive oil, one can conclude that it contains a lot of double bonds (mostly mono-unsaturated). These unsaturated fatty acids contain only cis-double bonds (e.g., oleic and linoleic acid). The presence of cis-double bonds has an important lowering effect on the melting point of the fatty acid (see Table), because cis-double bonds form rigid kinks in the fatty acid chains (see the picture of oleic acid): Keep in mind that there is no rotation around a double bond, and as a result, the unsaturated fatty acids cannot line up very well to give a regularly arranged crystal structure. On the other hand, saturated fatty acids would line up in a very regular manner, better than that of unsaturated, leading to better van der Waals-forces, thus get closer together (thicker). This is one of the reasons why they are solids in ambient conditions with high melting points (see Table). Now, what happens when you add bromine water to olive oil? Bromine added to double bond (same reaction as clarification test for unsaturation), making the fatty acid saturated. For example, when $$\ce{Br2}$$ added to $$\ce{C18}$$ oleic acid (m.p.: $$\pu{4 ^{\circ}C}$$), it'd become a resemblance of $$\ce{C18}$$ stearic acid (m.p.: $$\pu{70 ^{\circ}C}$$) with two extra $$\ce{Br}$$ atoms replacing 2 $$\ce{H}$$ atoms. Now you see why olive oil getting thicker!
{}
Skip to main content # 11: Chi-Square and ANOVA Tests This chapter presents material on three more hypothesis tests. One is used to determine significant relationship between two qualitative variables, the second is used to determine if the sample data has a particular distribution, and the last is used to determine significant relationships between means of 3 or more samples. • 11.1: Chi-Square Test for Independence • 11.2: Chi-Square Goodness of Fit • 11.3: Analysis of Variance (ANOVA) There are times where you want to compare three or more population means. One idea is to just test different combinations of two means. The problem with that is that your chance for a type I error increases. Instead you need a process for analyzing all of them at the same time. This process is known as analysis of variance (ANOVA). The test statistic for the ANOVA is fairly complicated, you will want to use technology to find the test statistic and p-value.
{}
+0 # precalc 0 44 2 +4 Can someone help me with this very complex question? I don't even know where to start. Jan 27, 2023 #1 0 Using row properties of matrices: $$\mathbf{A} \mathbf{u} = (2, 6, -2)$$ $$\mathbf{A} \mathbf{v} = (-2, 8, 12)$$ Jan 27, 2023 #2 0 Hi! In these type of problems, you should start by writing the givens. That is, we are given u and v which are "three dimensional vectors", and so we can write: Let $$u=\begin{pmatrix} u_1\\ u_2 \\ u_3 \end{pmatrix}$$ , and $$v=\begin{pmatrix} v_1\\ v_2 \\ v_3 \end{pmatrix}$$ Now, we are given that the length of u and v is 2 and 4 respectively. Moreover, the angle between u and v is 120. These givens should be familiar right? Yes: $$u*v = \left | u \right |\left | v\right |cos(\theta)$$; this is the geometric definition of the dot product right? Moreover, $$u*v=u_1v_1+u_2v_2+u_3v_3$$ so we can calculate the dot product between u and v using the givens as follows: $$u*v=u_1v_1+u_2v_2+u_3v_3=2(4)cos(120)=-4$$ (Calculated from the geometric definition). Now, how to continue this? Well, we are given a matrix's A rows. So why not construct this matrix? So: $$A:=\begin{bmatrix} u_1 && u_2 && u_3 \\ v_1 && v_2 && v_3 \\ 3u_1+2v_1 && 3u_2+2v_2 && 3u_3+2v_3 \end{bmatrix}$$ Now, what does the question wants? Au and Av right? Let's start with Au, and writing them yields: $$A*u:=\begin{bmatrix} u_1 && u_2 && u_3 \\ v_1 && v_2 && v_3 \\ 3u_1+2v_1 && 3u_2+2v_2 && 3u_3+2v_3 \end{bmatrix} *\begin{pmatrix} u_1\\ u_2 \\ u_3 \end{pmatrix}$$ Well, this is a matrix multiplication (3*3 matrix with 3*1 matrix) so the resultant matrix will be (3*1) (A vector). You may be used to multiply matrices that has numbers right? However, this matrices do not have numbers, rather variables. But, it is the same procedure u_1 * u_1, u_2*u_2 etc.. , similarly for the second row, third row, and so we get: $$\implies Au=\begin{pmatrix} u_1^2+u_2^2+u_3^2\\ u_1v_1+u_2v_2+u_3v_3 \\ 3(u_1^2+u_2^2+u_3^2)+2(u_1v_1+u_2v_2+u_3v_3) \end{pmatrix}$$ For the last row, expand everything and group like terms to get what we got above. (Multiply u by (3u+2v) etc.. and add all of these then factor the common factors). But notice, the first row in Au is just the length of vector u squared! And, the second row is the dot product of u and v (which we found above to be -4) And the third row is just a linear combination of both. Hence, $$Au=\begin{pmatrix} 2^2\\ -4\\ 3(2^2)+2(-4)\\ \end{pmatrix} = \begin{pmatrix} 4\\ -4 \\ 4 \end{pmatrix}$$ Now try to do the same steps for Av. I hope this helps, and if you need further help do not hesitate to ask! Jan 27, 2023 edited by Guest  Jan 27, 2023 edited by Guest  Jan 27, 2023
{}
# Analysis of Discrete Data #1 – Overview Posted on September 11, 2014 by This lesson is an overview of the course content as well as a review of some advanced statistical concepts involving discrete random variables and distributions, relevant for STAT 504 — Analysis of Discrete Data. This Lesson assumes that you have glanced through the Review Materials included in the Start Here! block. #### Key concepts: • Discrete data types • Discrete distributions: Binomial, Poission, Multinomial • Likelihood & Loglikelhood • Observed & Expected Information • Likelihood based Confidence Intervals & Tests: Wald Test, Likelihood-ratio test, Score test #### Objectives: • Learn what discrete data are and their taxonomy • Learn the properties of Binomial, Poission and Multinomial distributions • Understand the basic principles of likelihood-based inference and how to apply it to tests and intervals regarding population proportions • Introduce the basic SAS and R code • Ch.1 Agresti (2007) • If you are using other textbooks or editions: Ch.1 Agresti (2013, 2002, 1996) ____________________________________________________________________________________________________________________________ The outline below can be viewed as a general template of how to approach data analysis regardless of the type of statistical problems you are dealing with. For example, you can model a continuous response variable such as income, or a discrete response such as a true proportion of U.S. individuals who support new health reform. This approach has five main steps. Each step typically requires an understanding of a number of elementary statistical concepts, e.g., a difference between a parameter to be estimated and the corresponding statistic (estimator). ## 1.1 – Focus of this Course The focus of this class is multivariate analysis of discrete data. The modern statistical inference has many approaches/models for discrete data. We will learn the basic principles of statistical methods and discuss issues relevant for the analysis of Poisson counts of some discrete distribution, cross-classified table of counts, (i.e., contingency tables), binary responses such as success/failure records, questionnaire items, judge’s ratings, etc. Our goal is to build a sound foundation that will then allow you to more easily explore and learn many other relevant methods that are being used to analyze real life data. This will be done roughly at the introductory level of the required textbook by A. Agresti (2007). Basic data are discretely measured responses such as counts, proportions, nominal variables, ordinal variables, continuous variables grouped into a small number of categories, etc. Data examples will be used to help illustrate concepts. The “canned” statistical routines and packages in R and SAS will be introduced for analysis of data sets, but the emphasis will be on understanding the underlying concepts of those procedures. For more detailed theoretical underpinnings you can read A. Agresti (2012). We will focus on two kinds of problems. 1) The first broad problem deals with describing and understanding the structure of a (discrete) multivariate distribution, which is the joint and marginal distributions of multivariate categorical variables. Such tasks may focus on displaying and describing associations between categorical variables by using contingency tables, chi-squared tests of independence, and other similar methods. Or, we may explore finding underlying structures, possibly via latent variable models. 2) The second problem is a sort of “generalization” of regression  with a distinction between response and explanatory variables where the response is discrete. Predictors can be all discrete, in which case we may use log- linear models to describe the relationships. Predictors can also be a mixture of discrete and continuous variables, and we may use something like logistic regression to model the relationship between the response and the predictors. We will explore certain types of Generalized Linear Models, such as logistic and Poisson regressions. The analysis grid below highlights the focus of this class with respect to the models that you should already be familiar with. ## 1.2 – Discrete Data Types and Examples ### Categorical/Discrete/Qualitative data Measures on categorical or discrete variables consist of assigning observations to one of a number of categories in terms of counts or proportions. The categories can be unordered or ordered (see below). #### Counts and Proportions Counts are variables representing frequency of occurrence of an event: • Number of students taking this class. • Number of people who vote for a particular candidate in an election. Proportions or “bounded counts” are ratios of counts: • Number of students taking this class divided by the total number of graduate students. • Number of people who vote for a particular candidate divided by the total number of people who voted. Discretely measured responses can be: • Nominal (unordered) variables, e.g., gender, ethnic background, religious or political affiliation • Discrete interval variables with only a few values, e.g., number of times married • Continuous variables grouped into small number of categories, e.g., income grouped into subsets, blood pressure levels (normal, high-normal etc) We we learn and evaluate mostly parametric models for these responses. #### Measurement Scale and Context Interval variables have a numerical distance between two values (e.g. income) Measurement hierarchy: • nominal < ordinal < interval • Methods applicable for one type of variable can be used for the variables at higher levels too (but not at lower levels). For example, methods specifically designed for ordinal data should NOT be used for nominal variables, but methods designed for nominal can be used for ordinal. However, it is good to keep in mind that such analysis method will be less than optimum as it will not be using the fullest amount of information available in the data. • Nominal: pass/fail • Ordinal: A,B,C,D,F • Interval: 4,3,2.5,2,1 Note that many variables can be considered as either nominal or ordinal depending on the purpose of the analysis. Consider majors in English, Psychology and Computer Science. This classification may be considered nominal or ordinal depending whether there is an intrinsic belief that it is ‘better’ to have a major in Computer Science than in Psychology or in English. Generally speaking, for a binary variable like pass/fail ordinal or nominal consideration does not matter. Context is important! The context of the study and the relevant questions of interest are important in specifying what kind of variable we will analyze. For example, • Did you get a flu? (Yes or No) — is a binary nominal categorical variable • What was the severity of your flu? ( Low, Medium, or High) — is an ordinal categorical variable Based on the context we also decide whether a variable is a response (dependent) variable or an explanatory (independent) variable. Discuss the following question on the ANGEL Discussion Board: Why do you think the measurement hierarchy matters and how does it influence analysis? That is, why we recommend that statistical methods/models designed for the variables at the higher level not be used for the analysis of the variables at the lower levels of hierarchy? #### Contingency Tables • A statistical tool for summarizing and displaying results for categorical variables • Must have at least two categorical variables, each with at least two levels (2 x 2 table)May have several categorical variables, each at several levels (I1 × I2 × I3 × … × Ik tables) Place counts of each combination of the variables in the appropriate cells of the table. Here are a few simple examples of contingency tables. A university offers only two degree programs: English and Computer Science. Admission is competitive and there is a suspicion of discrimination against women in the admission process. Here is a two-way table of all applicants by sex and admission status. These data show an association between the sex of the applicants and their success in obtaining admission. Male Female Total Admit 35 20 55 Deny 45 40 85 Total 80 60 140 #### Example: Number of Delinquent Children by the County and the Head of Household Education Level This is another example of a two-way table but in this case 4×4 table. The variable County could be treated as nominal, where as the Education Level of Head of Household can be treated as ordinal variable. Questions to ask, for example: (1) What is the distribution of a number of delinquent children per county given the education level of the head of the household? (2) Is there a trend of where the delinquent children reside given the education levels? County Low Medium High Very High Total Alpha 15 1 3 1 20 Beta 20 10 10 15 55 Gamma 3 10 10 2 25 Delta 12 14 7 2 35 Total 50 35 30 20 135 • Ordinal and nominal variables • Fixed total #### Example: Census Data Source: American Fact Finder website (U.S. Census Bureau: Block level data) This is an example of a 2×2×4 three-way table that cross-classifies a population from a PA census block by Sex, Age and Race where all three variables are nominal. #### Example: Clinical Trial of Effectiveness of an Analgesic Drug Source: Koch et al. (1982) • This is a four-way table (2×2×2×3 table) because it cross-classifies observations by four categorical variables: Center, Status, Treatment and Response • Fixed number of patients in two Treatment groups • Small counts We will see throughout this course that there are many different methods to analyze data that can be represented in coningency tables. ### Example of proportions in the news You should be already familiar with a simple analysis of estimating a population proportion of interest and computing a 95% confidence interval, and the meaning of the margin or error (MOE). Notation: • Population proportion = p = sometimes we use π • Population size = N • Sample proportion = $\hat{p}=X/n$=# with a trait / total # • Sample size = n • X is the number of units with a particular trait, or number of success. The Rule for Sample Proportions • If numerous samples of size n are taken, the frequency curve of the sample proportions $(p^\prime s)$ from the various samples will be approximately normal with the mean p and standard deviation $\sqrt{p(1-p)/n}$ • $\hat{p}\sim N(p,p(1-p)/n)$ ## 1.3 – Discrete Distributions Statistical inference requires assumptions about the probability distribution (i.e., random mechanism, sampling model) that generated the data. For example for a t-test, we assume that a random variable follows a normal distribution. For discrete data key distributions are: Bernoulli, Binomial, Poisson and Multinomial. A more or less thorough treatment is given here. The mathematics is for those who are interested. But the results and their applications are important. Recall, a random variable is the outcome of an experiment (i.e. a random process) expressed as a number. We use capital letters near the end of the alphabet (X, Y, Z, etc.) to denote random variables. Random variables are of two types: discrete and continuous. Here we are interested in distributions of discrete random variables. A discrete random variable X is described by a probability mass functions (PMF), which we will also call “distributions,” f(x)=P(X =x). The set of x-values for which f (x) > 0 is called the support. Support can be finite, e.g.,  X can take the values in {0,1,2,…,n} or countably infinite if X takes values in {0,1,…}. Note, if the distribution depends on unknown parameter(s) θ we can write it as f (x; θ) (preferred by frequentists) or f(x| θ) (preferred by Bayesians). Here are some distributions that you may encounter when analyzing discrete data. #### Bernoulli distribution The most basic of all discrete random variables is the Bernoulli. X is said to have a Bernoulli distribution if X = 1 occurs with probability π and X = 0 occurs with probability 1 − π , $f(x)=\left\{\begin{array} {cl} \pi & x=1 \\ 1-\pi & x=0 \\ 0 & \text{otherwise} \end{array} \right.$ Another common way to write it is: $f(x)=\pi^x (1-\pi)^{1-x}\text{ for }x=0,1$ Suppose an experiment has only two possible outcomes, “success” and “failure,” and let π be the probability of a success. If we let X denote the number of successes (either zero or one), then X will be Bernoulli. The mean of a Bernoulli is $E(X)=1(\pi)+0(1-\pi)=\pi$ and the variance of a Bernoulli is $V(X)=E(X^2)-[E(X)]^2=1^2\pi+0^2(1-\pi)-\pi^2=\pi(1-\pi)$ #### Binomial distribution Suppose that $X_1, X_2,\dots,X_n$ are independent and identically distributed (iid) Bernoulli random variables, each having the distribution $f(x_i|\pi)=\pi^{x_i}(1-\pi)^{1-x_i}\text{ for }x_i=0,1\; \text{and }\; 0\leq\pi\leq 1$ Let $X=X_1+X_2+\dots+X_n$. Then X is said to have a binomial distribution with parameters $n$ and $p$, $X\sim \text{Bin}(n,\pi)$ Suppose that an experiment consists of n repeated Bernoulli-type trials, each trial resulting in a “success” with probability π and a “failure” with probability 1 − π . For example, toss a coin 100 times, n=100. Count the number of times you observe heads, e.g. X=# of heads. If all the trials are independent—that is, if the probability of success on any trial is unaffected by the outcome of any other trial—then the total number of successes in the experiment will have a binomial distribution, e.g, two coin tosses do not affect each other. The binomial distribution can be written as $f(x)=\dfrac{n!}{x!(n-x)!} \pi^x (1-\pi)^{n-x} \text{ for }x_i=0,1,2,\ldots,n,\; \text{and }\; 0\leq\pi\leq 1.$ The Bernoulli distribution is a special case of the binomial with n = 1. That is, XBin(1,π) means that X has a Bernoulli distribution with success probability π. One can show algebraically that if  X∼Bin(1,π) then E(X)=nπ and V(X)=nπ(1−π). An easier way to arrive at these results is to note that $X=X_1+X_2+\dots+X_n$ where $X_1,X_2,\dots,X_n$ are (iid) Bernoulli random variables. Then, by the additive properties of mean and variance, $E(X)=E(X_1)+E(X_2)+\cdots+E(X_n)=n\pi$ and $V(X)=V(X_1)+V(X_2)+\cdots+V(X_n)=n\pi(1-\pi)$ Note that X will not have a binomial distribution if the probability of success π is not constant from trial to trial, or if the trials are not entirely independent (i.e. a success or failure on one trial alters the probability of success on another trial). $\text{if }X_1\sim \text{Bin}(n_1,\pi) \text{ and }X_2\sim \text{Bin}(n_2,\pi),\text{ then }X_1+X_2 \sim \text{Bin}(n_1+n_2,\pi)$ As n increases, for fixed π, the binomial distribution approaches normal distribution N(nπ,nπ(1−π)). For example, if we sample without replacement from a finite population, then the hypergeometric distribution is appropriate. #### Hypergeometric distribution Suppose there are n objects. n1 of them are of type 1 and n2 = n − n1 of them are of type 2. Suppose we draw m (less than n) objects at random and without replacement from this population. A classic example, is having a box with n balls, n1 are red and n2 are blue.  What is the probability of having t red balls in the draw of m balls? Then the PMF of N1=t is $p(t) = Pr(N_1 = t) =\frac{\binom{n_1}{ t}\binom{n_2}{m-t}}{\binom{n}{m}},\;\;\;\; t \in [\max(0, m-n_2); \min(n_1, m)]$ The expectation and variance of $N_1$ are given by: $E(N_1) =\frac{n_1 m}{n}$ and $V(N_1)=\frac{n_1n_2m(n-m)}{n^2(n-1)}$ #### Poisson distribution Let  XPoisson(λ) (this notation means “X has a Poisson distribution with parameter λ”), then the probability distribution is $f(x|\lambda)= Pr(X=x)= \frac{\lambda^x e^{-\lambda}}{x!}, x=0,1,2,\ldots, \mbox{and}, \lambda>0.$ Note that E(X)=V(X)=λ, and the parameter λ must always be positive; negative values are not allowed. The Poisson distribution is an important probability model. It is often used to model discrete events occurring in time or in space. The Poisson is also limiting case of the binomial. Suppose that XBin(n,π) and let n and π0 in such a way that nπλ where λ is a constant. Then, in the limit, XPoisson(λ). Because the Poisson is limit of the Bin(n,π), it is useful as an approximation to the binomial when n is large and π is small. That is, if n is large and π is small, then $\dfrac{n!}{x!(n-x)!}\pi^x(1-\pi)^{n-x} \approx \dfrac{\lambda^x e^{-\lambda}}{x!}$ where λ=nπ. The right-hand side of (1) is typically less tedious and easier to calculate than the left-hand side. For example, let X be the number of emails arriving at a server in one hour. Suppose that in the long run, the average number of emails arriving per hour is λ. Then it may be reasonable to assume XP(λ).  For the Poisson model to hold, however, the average arrival rate λ must be fairly constant over time; i.e., there should be no systematic or predictable changes in the arrival rate. Moreover, the arrivals should be independent of one another; i.e., the arrival of one email should not make the arrival of another email more or less likely. When some of these assumptions are violated, in particular if there is a presence of overdispersion (e.g., observed variance is greater than what the model assume), the Negative Binomial distribution can be used instead of Poisson. Overdispersion Count data often exhibit variability exceeding that predicted by the binomial or Poisson. This phenomenon is known as overdispersion. Consider, for example the number of fatalities from auto accidents that occur next week in the Center county, PA. The Poisson distribution assumes that each person has the same probability of dying in an accident. However, it is more realistic to assume that these probabilities vary due to • whether the person was wearing a seat belt • time spent driving • where they drive (urban or rural driving) Person-to-person variability in causal covariates such as these cause more variability than predicted by the Poisson distribution. Let X be a random variable with conditional variance V(X|λ). Suppose λ is also a random variable with θ=E(λ). Then E(X)=E[E(X|λ)] and V(X)=E[V(X|λ)]+V[E(X|λ)] For example, when X|λ has a Poisson distribution, then  E(X)=E[λ]=θ (so mean stays the same) but the V(X)=E[λ]+V(λ)=θ+V(λ)>θ (the variance is no longer θ but larger). When X|π  is a binomial random variable and πBeta(α,β). Then $E(\pi)=\frac{\alpha}{\alpha+\beta}=\lambda$ and $V(\pi)=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$. Thus,  E(X)=nλ (as expected the same) but the variance is larger V(X)=nλ(1λ)+n(n1)V(π)>nλ(1λ). #### Negative-Binomial distribution When the data display overdispersion, the analyst is more likely to use the negative-binomial distribution instead of Poission to model the data. Suppose a random variable X|λPoisson(λ) and  λGamma(α,β). Then the joint distribution of X and λ is: $p(X=k,\lambda)=\frac{\beta^\alpha}{\Gamma(\alpha)k!}\lambda^{k+\alpha-1}\exp^{-(\beta+1)\lambda}$ Thus the marginal distribution of X is negative-binomial (i.e., Poisson-Gamma mixture): $\begin{eqnarray} p(X=k)&=&\frac{\beta^\alpha}{\Gamma(\alpha)k!}\int^{\infty}_0\lambda^{k+\alpha-1}\exp^{-(\beta+1)\lambda} d\lambda\\ & = & \frac{\beta^\alpha}{\Gamma(\alpha)k!} \frac{\Gamma(k+\alpha)}{(\beta+1)^{(k+\alpha)}} \\ & = & \frac{\Gamma(k+\alpha)}{\Gamma(\alpha)\Gamma(k+1)}(\frac{\beta}{\beta+1})^\alpha(\frac{1}{\beta+1})^k \end{eqnarray}$ with $E(X)=E[E(X|\lambda)]=E[\lambda]=\frac{\alpha}{\beta}$ $V(X)=E[var(X|\lambda)]+var[E(X|\lambda)]=E[\lambda]+var[\lambda]=\frac{\alpha}{\beta}+\frac{\alpha}{\beta^2}=\frac{\alpha}{\beta^2}(\beta+1).$ #### Beta-Binomial distribution A family of discrete probability distributions on a finite support arising when the probability of a success in each of a fixed or known number of Bernoulli trials is either unknown or random. For example, the researcher believes that the unknown probability of having flu π is not fixed and not the same for the entire population, but it’s yet another random variable with its own distribution. For example, in Bayesian analysis it will describe a prior belief or knowledge about the probability of having flu based on prior studies. Below X is what we observe such as the number of flu cases. Suppose X|πBin(n,π) and  πBeta(α,β). Then the marginal distribution of X is that of beta-binomial random variable $\begin{eqnarray} P(X=k)& = & \binom{n}{k}\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\int^{1}_0\pi^{k+\alpha-1}(1-\pi)^{n+\beta-k-1} d\pi\\ & = & \frac{\Gamma(n+1)}{\Gamma(k+1)\Gamma(n-k+1)}\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\frac{\Gamma(\alpha+k)\Gamma(n+\beta-k)}{\Gamma(n+\alpha+\beta)} \end{eqnarray}$ with $E(X)=E(n\pi)=n\frac{\alpha}{\alpha+\beta}$ $Var(X)=E[n\pi(1-\pi)]+var[n\pi]=n\frac{\alpha\beta(\alpha+\beta+n)}{(\alpha+\beta)^(\alpha+\beta+1)}$
{}
# Is ESG a Factor? July 2020 Read Time: 20 min Save Key Points • Increasingly, investors are asking if ESG is a factor. We answer this question using the criteria set forth by our Research Affiliates colleagues in their 2016 Graham and Dodd Scroll–winning article, “Will Your Factor Deliver? An Examination of Factor Robustness and Implementation Costs.” We conclude that ESG is not a factor. • We do believe, however, that ESG could be a powerful theme as new owners of capital—in particular, women and millennials—prioritize ESG in their portfolios over the next two decades. Progress in aligning definitions of “good” and “bad” ESG companies will also enhance the ability of the ESG theme to deliver positive investor outcomes. • We conclude that ESG does not need to be a factor for investors to achieve their ESG and performance goals. Abstract As we hit the halfway point of this remarkable year, the health of our planet, the well-being of our communities, and the necessity for meaningful societal change are all top of mind and assuming a greater sense of urgency. Accordingly, many investors desire to take personal action by incorporating environmental, social, and governance (ESG) considerations into their investment portfolios. Unfortunately, they are confronted by a confusing ESG landscape with conflicting claims—similar to the multitude of competing health care studies. This confusion may be slowing down their good intentions. As a factor index provider with ESG offerings, we attempt to answer the question “Is ESG a factor?” by synthesizing what we, and our colleagues, have discovered over the years. ”Can drinking red wine daily stave off heart disease? A newly released study answers that very question. We’ll cover it right after this commercial break.” Does this teaser sound familiar from your favorite morning television or radio show? Does this attention-grabber peak your interest to wait a few minutes, bear the commercials, and hear the story? The media has a natural inclination to use science to engage audiences. Consequently, we’re bombarded with new studies, especially as they relate to our health, a topic of interest to everyone and, of course, top of mind today. The findings may entertain, but do they inform? Nagler (2014) finds that contradictory scientific claims on red wine, coffee, fish, and vitamins, all touted by the media, led to substantial confusion on the part of consumers. Indeed, the claims led to such confusion that many consumers grew skeptical of even vetted health advice such as exercising and eating fruits and vegetables. Ironically, learning more about nutrition via competing claims led to more confusion, lack of trust, and less likely adoption of better eating and exercising habits. As we hit the halfway point of a remarkable 2020 and become more acclimated to our new circumstances, we’re concerned about the health of our planet, the well-being of our communities, and the necessity for meaningful societal change. Accordingly, many desire to put these concerns into their investment portfolios using environmental, social, and governance (ESG) considerations and tools. Investors, however, find a confusing ESG landscape with conflicting claims—similar to the multitude of competing health care studies—that may be slowing down good intentions. As an example, it was John’s turn to represent Research Affiliates at the annual Inside ETFs event in Florida earlier this year.1 The overwhelming points of emphasis from both ETF and index providers throughout the presentations were factor investing and ESG. Several sessions covered one or the other, often both. Regardless of whether the headliner was ESG or factor investing, inevitably a question popped up at the end of the session from either the moderator or the audience: “Is ESG a factor?” If John heard the question six times, he’d venture to guess he heard more than 12 answers! Accordingly, as a factor index provider with ESG offerings, we attempt to answer the question, synthesizing what we, and our colleagues, have discovered over the years. ## What Is a Factor, and Can We Count on It in the Future? Factors are stock characteristics associated with a long-term risk-adjusted return premium. An example is the value premium, which rewards investors who buy stocks that have a low price relative to their fundamentals. Two theories are advanced to explain the value effect: one is risk based and the other is behavior based. The risk-based explanation posits that value companies are cheap for a reason, such as lower profitability and/or greater leverage, and thus investors require that they earn a premium to compensate for the risk of investing in them (a risk premium). The behavioral-based explanation posits that investor biases, such as being overly pessimistic about value companies and overly optimistic about growth companies, create stock mispricings, and that value stocks outperform once investors’ expectations are not met and mean reversion occurs. Popular factors, such as value, low beta, quality, and momentum, have been well documented and vetted by both academics and practitioners. Research by Beck et al. (2016) provides a useful framework for determining if a factor is robust. For ESG to be a factor, it should satisfy these three critical requirements: 1. A factor should be grounded in a long and deep academic literature. 2. A factor should be robust across definitions. 3. A factor should be robust across geographies. A factor should be grounded in a long and deep academic literature. Traditional factors, such as value, low beta, and momentum, have been thoroughly researched and have a track record spanning several decades; very little debate currently exists regarding their robustness. Beyond the size factor,2 all of the factors in the following table have a positive CAPM alpha and are statistically significant at the 95% t-stat level (1.96). In examining the vast body of research on ESG, we find little agreement regarding its robustness in earning a return premium for investors. Research by Clark, Feiner, and Viehs (2015), Friede, Busch, and Bassen (2015), and Khan, Serafeim, and Yoon (2016) finds that ESG is additive to returns, while research by Brammer, Brooks, and Pavelin (2006), Fabozzi, Ma, and Oliphant (2008), and Hong and Kacperczyk (2009) demonstrates that ESG detracts from returns. Neither is there evidence to suggest a risk-based or behavioral-based explanation for the ESG factor. Arguments are put forth that certain situations could lead to positive ESG-related stock price movements, such as increased popularity of strong ESG companies as more investors adopt ESG (more on this topic later). These price movements, however, would be one-time adjustments and cannot be expected to deliver a reliable and robust premium over time. ESG is not an equity return factor in the traditional, academic sense. Factors should be robust across definitions. Slight variations in the definition of a factor should still produce similar performance results. Using the value factor as an example, the three valuation metrics of price-to-book ratio, price-to-earnings ratio, and price-to-cash flow ratio all yield similar performance results in assessing the factor’s long-horizon performance. ESG has no common standard definition and is a broad term that encapsulates a range of themes and subthemes.3 ESG ratings providers examine hundreds of metrics when determining a company’s ESG score. Conducting a quick web search yields several ESG strategies whose underlying themes are quite distinct and different. These index strategies align more closely with investor preferences than with a particular factor. To illustrate this, we construct a simple test on four variants of ESG definitions. We build long–short portfolios by selecting the top 30% and bottom 30% of US companies by market capitalization each year, after ranking by overall ESG rating. We also build three similarly constructed long–short portfolios, ranking companies on each individual ESG characteristic of environmental, social, and governance.4 None of these strategies displays a materially positive CAPM alpha except for the environmental long–short strategy, and no strategy is statistically significant at the 95% t-stat level (1.96). Unfortunately, none of the simulated strategies we tested has a long track record because the ESG data history is quite short. This lack of history is a significant impediment to conducting research in ESG investing, limiting our study period to 11 years from July 2009 to June 2020. Because multiple decades of data are needed to conduct a proper test, the lack of significance in the t-values is not surprising. Only after several decades of quality ESG data will it be possible to accurately test the claim that ESG is a robust factor. In addition to the problem of a short data history, the lack of consistency among ESG ratings providers also hinders our ability to determine if ESG is a robust factor. Research Affiliates published findings earlier this year that showed the correlation of company ratings between ESG ratings providers is low (Li and Polychronopoulos, 2020). We illustrated this by comparing two companies, Wells Fargo and Facebook, and showed that one ESG ratings provider rates Wells Fargo positively and Facebook negatively, while a second ratings provider ranked them the opposite way. In addition, we demonstrated that a portfolio construction process using the same methodology, but different ESG ratings providers, can yield different results. While beyond the scope of this article, had we used a different ESG ratings provider for the analysis in the preceding table, we likely would have gotten different results! Factors should be robust across geographies. We conduct the same study using European companies. The results are largely consistent with the US results. None of the strategies tested has a materially positive CAPM alpha except for the environmental strategy, and no strategy tested exhibits statistically significant CAPM alpha at the 95% t-stat level. We should note that for the US and European analysis we conduct a simple single-factor linear regression against the market return. In the appendix we present the results of a stricter test using a multi-factor regression that incorporates the value, size, profitability, investment, momentum, and low beta factors. The multi-factor regression results indicate low or negative alpha for the majority of the strategies. Having put ESG investing strategies through a framework to assess factor robustness, we find that ESG fails all three tests outlined by Beck et al. (2016): 1) evidence of an ESG return premium is not supported by a long and deep academic literature, 2) ESG performance results are not robust across definitions, and 3) ESG performance results are not robust across regions. ## ESG Is Not a Factor, but Could Be a Powerful Theme Even though we are unable to apply the factor framework to ESG, these strategies, however heterogeneous, may still produce superior returns. Non-robust, and even robustly negative, strategies will invariably cycle through periods—think three-to-five year stretches—of outperformance. And over the very long term, possibly decades, stocks that rank well on ESG criteria may also outperform. We witness two principle arguments in favor of superior risk-adjusted returns for companies that rate well on ESG metrics. First, as some claim, there may be latent risks in companies that rate poorly on ESG metrics (Orsagh et al., 2018). In other words, ESG risk needs to be incorporated into security selection. Let’s consider carbon. Historical fundamental analysis developed during a predominantly stable climate backdrop may miss the investment risk associated with carbon and thereby deliver poor results if the risk materializes. Coal has been a declining source of energy production in the United States for years, accounting for 52% of the nation’s total electricity generation in 1990, but just 23% at the end of 2019.5 The percentage will continue to decline as energy providers move toward cleaner and more-energy-efficient alternatives to combat climate change, leaving coal companies with assets of decreasing value. Investment managers who do not consider and integrate the ESG risk of, in this case, climate change may be blindsided. “The theme is the massive coming adoption of ESG investing on the part of new owners of capital.” Not recognizing a specific type of risk implies a mispricing effect. This mispricing seems to be highly idiosyncratic in nature and probably best exploited via the forward-looking framework of active management. Such “ESG alpha” has the potential to be sizeable, especially if very few managers are incorporating ESG criteria into their investment processes—but that’s not the case. According to Cerulli, 83% of investment managers are embedding ESG criteria into their fundamental processes.6 At the time of this publication, over 2,200 investment managers have signed on to the United Nations (UN) PRI | Principles for Responsible Investing, which encourages signatories to “incorporate ESG issues into investment analysis and decision-making processes.” Indeed, investment manager signatories managed approximately US$80 trillion as of March 31, 2020.7 Such widespread use of ESG criteria in the investment management process means that identifying ESG skill will likely be as difficult as identifying other types of investor skill.8 Neither does it speak to the ability of investors to harvest the alpha, if found. Will investors have the patience to wait out manager ESG risk assessments, especially given the very long horizon for some of these risks? A large shift in investor preference toward ESG is occurring as two distinct groups—women and millennials—take greater control of household assets. Accordingly, Bank of America (2019) recently noted a “tsunami of assets is poised to invest in ‘good’ stocks” and concluded that “three critical investor cohorts care deeply about ESG: women, millennials, and high net worth individuals. Based on demographics, we conservatively estimate over$20tn of asset growth in ESG funds over the next two decades—equivalent to the S&P 500 today.”9 Similarly, an Accenture study concluded that US30 trillion in assets will change hands, a staggering amount which, at its peak between 2031 and 2045, will witness 10% of total US wealth transferred every five years.10 Not only are investor preferences shifting in favor of ESG strategies, regulatory efforts in Europe aim to bring greater standardization and transparency to ESG products, which is likely to increase demand. As of 2019, UK government pension funds are required to integrate ESG considerations into their investment management approach (McNamee, 2019). Starting in March 2021, the European Union will require investment managers to provide ESG disclosures related to their investment products. The effort “aims to enhance transparency regarding integration of environmental, social, and governance matters into investment decisions and recommendations” (Maleva-Otto and Wright, 2020). In 2018, the European Commission set up a Technical Expert Group tasked with several ESG initiatives including creating index methodology requirements for low carbon benchmarks, increasing transparency in the green bond market, and creating an EU taxonomy to help companies transition to a low carbon economy. Outside of Europe there has been less movement on the regulatory front, but good progress made on standard setting. In the United States, public pension funds have taken the lead on ESG integration and in 2018 held 54% of all ESG-related investments in the United States (Bradford, 2019). The UN has created the Sustainable Development Goals, a blueprint for improving the planet, both environmentally and socially, by 2030. The 17 goals—including reducing poverty, improving education, creating affordable and clean energy, and creating sustainable cities and communities—have been adopted by all UN member states. “ESG does not need to be a factor for investors to achieve their ESG and performance goals.” The numbers are large, and the implication that the new owners of wealth will favor "good" ESG stocks will in turn likely lead to a very different supply–demand dynamic than in the past. More demand for good ESG companies may result in an upward, one-time positive shock to relative valuations of these companies and the funds that invest in them. We previously discussed that factors and smart beta strategies can experience such a revaluation alpha (Arnott et al., 2016).11 This is classic thematic investing, following in the footsteps of cloud, artificial intelligence, and robotics themes, but it’s not factor investing. The theme in this case is the massive coming adoption of ESG investing on the part of new owners of capital. Getting ahead of that demand could be substantially profitable on two conditions. First, the perceived demand is not already reflected in stock prices. Second, the market’s perception of good ESG companies is fairly consistent so that these inflows more or less benefit the same companies. As we have explained, we currently see incredibly inconsistent definitions of good and bad ESG companies. Yes, a rising tide lifts all boats, but they all have to be in the water and in the same harbor! It may very well be that the best options for thematic investing in ESG are for narrower—and therefore homogenous—groups of securities. Low carbon, sustainable forestry, or gender equality may be easier to exploit in a thematic manner than the entire ESG company universe. ## Incorporate ESG into a Variety of Equity Index Strategies At Research Affiliates, we believe ESG is an important investing consideration despite dismissing it as a factor or lacking confidence in its ability to currently deliver as a theme. One of our core investment beliefs is that investor preferences are broader than risk and return. As value investors, we believe that prices vary around fair value and that investing in unpopular companies and not following the herd is a strategy that will be rewarded as prices mean revert over a market cycle (Brightman, Masturzo, and Treussard, 2014). Of course, investor preferences extend beyond value investing, and as we have shown, many investors have a preference for ESG strategies for many reasons, such as the desire to bring about societal change, mindfulness of the environment, promotion of good corporate governance, or all of the above. Investors can satisfy their ESG preferences while still maintaining the characteristics of their preferred investment strategy. We illustrate this by comparing the characteristics of three strategies: RAFI Fundamental Developed Index, RAFI ESG Developed Index, and RAFI Diversity & Governance Developed Index. All three strategies utilize the Fundamental Index approach, which selects and weights companies by fundamental measures of company size rather than market capitalization. The RAFI Fundamental Developed Index does not incorporate any ESG considerations. The RAFI ESG Developed Index is a broad-based ESG index that tilts toward companies with strong overall ESG scores. The RAFI Diversity & Governance Developed Index reflects a preference for companies that score well across several metrics of gender diversity and strong corporate governance. All three strategies share similar characteristics. The Fundamental Index methodology is a contrarian approach that uses fundamental weights to act as rebalancing anchors against market price movements. Fundamental Index strategies typically trade at a discount to cap-weight. All three strategies maintain similar valuation discounts and dividend yields, with the only noticeable differences being index concentration. Given that the ESG and Diversity & Governance indices exclude many securities that perform poorly across multiple ESG considerations, they have a much higher active share. In addition, all three strategies maintain similar factor exposures, mainly positive loadings on value and negative loadings on momentum. The Diversity & Governance index, which incorporates a tilt toward lower-volatility companies, also has a high exposure to the low beta factor. The bottom line is that investors who would like to incorporate ESG into their investment decisions can do so and retain their desired investment characteristics. Accordingly, they likely maintain a similar expected return outcome (although with some short-term deviations in performance) whether their preferred approach is traditional passive, smart beta, or active. ESG does not need to be a factor for investors to achieve their ESG and performance goals. ## Conclusion Let’s hope the events of 2020—Australian wildfires, a global pandemic, a searing recession, and social protests denouncing racial inequality—lead to positive societal changes and perhaps more refinement to and greater consistency in ESG ratings. Indeed, once the dust settles, we expect these forces to accelerate an already simmering ESG investment movement—but action will require clarity around exactly what ESG is and what it is not. Currently, various stakeholders are sending a whole host of mixed messages. Investors, particularly fiduciaries, need education and alignment. If ESG remains a heterogeneous basket of claims, we will likely never see it fulfill its vast promise. We have debunked one of these messages: ESG is not an equity return factor in the traditional, academic sense. We have shown that, unlike vetted factors such as value, low beta, quality, or momentum, ESG strategies lack sufficient historical data, impeding our ability to make a similar conclusion of robustness. Nevertheless, ESG can be a very powerful theme in the portfolio management process in the years ahead. Furthermore, we believe a variety of equity styles can very effectively capture ESG criteria. We believe our conclusions will add clarity around the question “Is ESG a factor?” and therefore quicken the pace of ESG integration in equity portfolios. ## Appendix We examine the results of a multi-factor regression compared to long–short ESG portfolios in the United States and Europe. This approach results in low or negative alpha from the majority of the strategies. The environmental strategy in Europe is the only strategy with annual alpha greater than 1.0%, however, the results are not statistically significant at the 95% t-stat level (1.96). Most of the strategies exhibit positive loadings on the low beta, profitability, and investment factors, meaning that ESG portfolios tend to exhibit low-volatility and high-quality characteristics, bringing merit to the argument of ESG as a risk mitigation strategy. FEATURED TAGS Learn More About the Author ## Endnotes 1. The notion of hundreds of attendees gathering in a ballroom and congregating around coffee and snack tables seems, quoting George Lucas, like a long time ago in a galaxy far, far away. 2. Although size is a commonly accepted factor by many investors, Research Affiliates has expressed concern that the size factor may lack robustness (Kalesnik and Beck, 2014). 3. The attention given to specific ESG considerations has varied over time. For example, climate change has been a leading ESG issue for several years, while gender equality and even more recently racial equality, are issues now starting to gain momentum. Discussing whether gender or racial inequality was a priced factor decades in the past is irrelevant if we wish to support investors who desire to have an impact today. 4. We use the Russell 1000 Index as the starting universe for selection within the US, and we use the FTSE All World Developed Europe Index as the starting universe for selection within Europe. We exclude companies without an ESG rating, and strategies rebalance once a year on June 30. We use ESG ratings data from Vigeo Eiris. 5. Source is US Energy Information Administration. 6. Source is “Environmental, Social, and Governance (ESG) Investing in the United States,” Cerulli Associates (2019). 7. Source is UN PRI accessed on July 9, 2020. 8. In 2018, responsible investment strategies used in actively managed equity assets included US7.4 trillion for integration; US$4.4 trillion for screening and integration; US$1.8 trillion for screening, thematic, and integration; and US\$0.4 trillion for thematic and integration. Source is UN PRI. 9. Source is Bank of America Merrill Lynch (September 23, 2019). 10. Source is “The ‘Greater’ Wealth Transfer: Capitalizing on the Intergenerational Shift in Wealth,” Accenture (2015). 11. Arnott et al. (2016) note that revaluation alpha can cut both ways in that a strategy trading at a substantial premium to the market might perform poorly if valuations mean revert toward market multiples. ## References Arnott, Robert, Noah Beck, Vitali Kalesnik, and John West. 2016. “How Can ‘Smart Beta’ Go Horribly Wrong?” Research Affiliates Publications (February). Arnott, Robert, Campbell Harvey, Vitali Kalesnik, and Juhani Linnainmaa. 2019. “Alice’s Adventures in Factorland.” Research Affiliates Publications (February). Available at SSRN. Bank of America. 2019. “10 Reasons You Should Care about ESG.” ESG Matters–US (September 23). Beck, Noah, Jason Hsu, Vitali Kalesnik, and Helge Kostka. 2016. “Will Your Factor Deliver? An Examination of Factor Robustness and Implementation Costs.” Financial Analysts Journal, vol. 72, no. 5 (September/October):58–82. Bradford, Hazel. 2019. “Public Funds Taking the Lead in Spectacular Boom of ESG.” Pensions and Investments (April 19). Brammer, Stephen, Chris Brooks, and Stephen Pavelin. 2006. “Corporate Social Performance and Stock Returns: UK Evidence from Disaggregate Measures.” Financial Management, vol. 35, no. 3 (September):97–116. Brightman, Chris, James Masturzo, and Jonathan Treussard. 2014. “Our Investment Beliefs.” Research Affiliates Fundamentals (October). Clark, Gordon, Andreas Feiner, and Michael Viehs. 2015. “From the Stockholder to the Stakeholder: How Sustainability Can Drive Financial Outperformance.” Available on SSRN. Fabozzi, Frank, K.C. Ma, and Becky Oliphant. 2008. “Sin Stock Returns.” Journal of Portfolio Management, vol. 35, no. 1 (Fall):82–94. Friede, Gunnar, Timo Busch, and Alexander Bassen. 2015. “ESG and Financial Performance: Aggregated Evidence from More Than 2000 Empirical Studies.” Journal of Sustainable Finance and Investment, vol. 5, no. 4:210–233. Hong, Harrison, and Marcin Kacperczyk. 2009. “The Price of Sin: The Effects of Social Norms on Markets.” Journal of Financial Economics, vol. 93, no. 1 (July):15–36. Kalesnik, Vitali, and Noah Beck. 2014. “Busting the Myth about Size.” Research Affiliates Simply Stated (November). Khan, Mozaffar, George Serafeim, and Aaron Yoon. 2016. “Corporate Sustainability: First Evidence on Materiality.” Accounting Review, vol. 91, no. 6 (November):1697–1724. Li, Feifei, and Ari Polychronopoulos. 2020. “What a Difference an ESG Ratings Provider Makes!” Research Affiliates Publications (January). Maleva-Otto, Anna, and Joshua Wright. 2020. “New ESG Disclosure Obligations.” Harvard Law School Forum on Corporate Governance (March 24). McNamee, Emmet. 2019. “UK’s New ESG Pension Rules: Four Measures to Ensure Their Success.” PRI Blog (October 1). Nagler, Rebekah. (2014). “Adverse Outcomes Associated with Media Exposure to Contradictory Nutrition Messages.” Journal of Health Communication, vol. 19, no. 1:24–40. Orsagh, Matt, James Allen, Justin Sloggett, Anna Georgieva, Sofia Bartholdy, and Kris Duoma. 2018. Guidance and Case Studies for ESG Integration: Equities and Fixed Income. CFA Institute: Charlottesville, VA.
{}
# Prove sequent using natural deduction I need to prove the following predicate logic sequent using natural deduction: $\exists y \forall x (P(x) \rightarrow x = y) \vdash \forall x \forall y (P(x) \land P(y) \rightarrow x = y)$ This is my half-finished proof. I hope I'm on the right track but there is something about box packing/unpacking I don't understand yet: 1. $\exists y \forall x (P(x) \rightarrow x = y) \quad\mathrm{Premise}$ 2. $y_0: \forall x P(x) \rightarrow x = y_0) \quad\mathrm{Assumption}$ 3. $x_0: P(x_0) \rightarrow x_0 = y_0 \quad \forall x e2$ 4. $P(x_0) \land P(y_0) \quad \mathrm{Assumption}$ 5. $P(x_0) \quad \land e_1 4$ 6. $x_0 = y_0 \quad \rightarrow e 3, 5$ 7. $P(x_0) \land P(y_0) \rightarrow x_0 = y_0 \quad \rightarrow i 4-6$ 8. $\forall x (P(x) \land P(y_0) \rightarrow x = y_0 \quad \forall x i 3-7$ Then I can't go further because I can't use universal reintroduction on the $y$ variable. Edit: I managed to finish it with the help from the answer! 1. $\exists y \forall x (P(x) \rightarrow x = y) \quad\mathrm{Premise}$ 2. $z: \forall x P(x) \rightarrow x = z) \quad\mathrm{Assumption}$ 3. $a: P(a) \rightarrow a = z \quad \forall x e2$ 4. $b: P(b) \rightarrow b = z \quad \forall x e2$ 5. $P(a) \land P(b) \quad \mathrm{Assumption}$ 6. $P(a)\quad \land e_1 5$ 7. $P(b)\quad \land e_2 5$ 8. $a = z \quad \rightarrow e3,6$ 9. $b = z \quad \rightarrow e4,7$ 10. $b = b \quad =i$ 11. $z = b \quad =e 9,10$ 12. $a = b \quad =e 8,11$ 13. $P(a) \land P(b) \rightarrow a = b \quad \rightarrow i 5-12$ 14. $\forall y (P(a) \land P(y) \rightarrow a = y) \quad \forall y i 4-13$ 15. $\forall x \forall y (P(x) \land P(y) \rightarrow x = y) \quad \forall x i 3-14$ 16. $\forall x \forall y (P(x) \land P(y) \rightarrow x = y) \quad \exists z e 1,2-15$ • In short: Since you wish to use Universal (Re)Introduction twice, this is a clue to first use Universal Elimination twice. – Graham Kemp Dec 28 '17 at 4:54 • Short answer is, you want step 4 to be $P(x_0) \land P(z_0)$ – DanielV Dec 28 '17 at 7:20 $$\begin{array}{r|l:l}1&\exists y\forall x~(P(x)\to x=y)\\ 2&\quad\forall x~(P(x)\to x=c) & 1,\exists \text{Elimination }[y\backslash c]\\[0ex] 3 & \qquad P(a)\to a=c & 2,\forall\text{Elimination }[x\backslash a]\\[0ex] 4 & \qquad\quad P(b)\to b=c & 2,\forall\text{Elimination }[x\backslash b] \\[0ex] 5 & \qquad\qquad P(a)\wedge P(b) & \text{Assumption} \\[-1ex] 6 & \qquad\qquad\vdots & \\[-1ex] 7 & \qquad\qquad\vdots & \\[-1ex] 8 &\qquad\qquad\vdots & \\[-1ex] 9 & \qquad\qquad\vdots & \\[0ex] 10 & \qquad\qquad a=b & ~,~,=\text{Elimination}\\[0ex] 11 & \qquad\quad (P(a)\wedge P(b))\to(a=b) & 5,10,\to\text{Introduction}\\[-1ex] \vdots\end{array}$$
{}
# Malformed bcf file not recreated by latexmk after error I'm using latexmk with pdflatex to compile my thesis, with biblatex for references and biber as backend. It compiles fine and creates correct PDF output. If a change to the source files introduces an error the first run of pdflatex fails but a bcf file is created. The run of biber then complains about a malformed bcf file indicating that the last biblatex run failed and the compilation is stopped. However, after fixing the error latexmk thinks the pdflatex run was fine and invokes biber, but the bcf file is still malformed. latexmk somehow doesn't seem to notice that there were file changes. Removing the bcf file or cleaning it with latexmk -C makes latexmk call pdflatex first and recreate a correct bcf file. Calling pdflatex manually also work but defeats the purpose of latexmk. I tried to use -halt-on-error as option to pdflatex but that doesn't work. I seem to recall that it worked with TexLive 2014, after a failed attempt to run biber latexmk would run pdflatex first on the next attempt. I'm using TexLive 2015, the version of latexmk is 4.43a, biber has the version 2.3. The output produced is Latexmk: This is Latexmk, John Collins, 5 February 2015, version: 4.43a. Rule 'biber thesis': File changes, etc: Non-existent destination files: 'thesis.bbl' ------------ Run number 1 of rule 'biber thesis' ------------ ------------ Running 'biber "thesis"' ------------ Latexmk: applying rule 'biber thesis'... INFO - This is Biber 2.3 INFO - Logfile is 'thesis.blg' ERROR - thesis.bcf is malformed, last biblatex run probably failed. Deleted thesis.bbl INFO - ERRORS: 1 Latexmk: Failed to find one or more biber source files: NONE Collected error summary (may duplicate other messages): biber thesis: Could not find all biber source files for 'thesis' Latexmk: Use the -f option to force complete processing, unless error was exceeding maximum runs of latex/pdflatex. Biber error: [33] Utils.pm:163> ERROR - thesis.bcf is malformed, last biblatex run probably failed. Deleted thesis.bbl Latexmk: Errors, so I did not complete making targets Obligatory mwe.tex: \documentclass[paper=a4]{scrartcl} \usepackage[backend=biber]{biblatex} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \begin{document} \autocite{smith_pixel_1995} \end{document} Bibliography.bib @article{smith_pixel_1995, title = {A Pixel Is Not A Little Square, A Pixel Is Not A Little Square, A Pixel Is Not A Little Square!}, volume = {6}, url = {http://ftp.alvyray.com/Memos/CG/Microsoft/6_pixel.pdf}, journaltitle = {Microsoft Computer Graphics, Technical Memo}, author = {Smith, Alvy Ray}, urldate = {2016-02-04}, date = {1995} } Steps to reproduce: 1. run latexmk -pdf mwe 2. introduce an error in mwe.tex like an undefined control sequence 3. run latexmk -pdf mwe 4. when prompted abort compilation with by pressing x 5. fix error 6. run latexmk -pdf mwe -> biber error 7. run latexmk -pdf mwe -> same biber error, won't go away So the question is, how do I get latexmk to re-create the bcf file so that biber can use it without having to manually clean or remove files? • I don't use latexmk, but doesn't the option -f as mentioned in your log ("Latexmk: Use the -f option to force complete processing,") work? Or run simply once pdflatex instead of latexmk. – Ulrike Fischer Feb 19 '16 at 11:42 • @UlrikeFischer Yes, -f works. But as I see it, the point of latexmk is not to have to reprocess everything from scratch; as a quick workaround it's still a solution. – alefhar Feb 19 '16 at 11:52 • Actually I can't reproduce your problem. If I change the tex-file, e.g. add an x, and then start latexmk it always calls pdflatex first and so also repairs a broken bcf. – Ulrike Fischer Feb 19 '16 at 12:09 • That's what I feared. Just to be sure, you are using the same version of latexmk as I? – alefhar Feb 19 '16 at 12:13 • You can just re-run latexmk after the failed Biber run without doing anything else and everything will work fine - provided you have fixed the problem in the TeX file and it compiles without errors. See the discussion in issue #348 at the biblatex bugtracker. (I think the latexmk developer is also active here, so he might drop by and give some more info.) – moewe Feb 20 '16 at 8:13 As already mentioned in a comment, the solution is to use the new version of latexmk (4.44 at the time I write this answer), which is now available at http://www.ctan.org/pkg/latexmk/ Update May 2019: This is still an issue with current LTS of Ubuntu (bionic, 18.04), because they deliver version 4.41. https://packages.ubuntu.com/bionic/latexmk As stated by John this is fixed since version 4.4. Newer versions of latexmk are delivered for cosmic, disco,.. I fixed that issue on my machine, by temporarily use the cosmic repository: 1. edit /etc/apt/sources.list at line of universe repo from bionic to cosmic, save 2. apt update 3. apt install latexmk 4. revert changes in /etc/apt/sources.list 5. apt update This is in general not recommended, but latexmk is a very simple app/script with few dependencies. Pinning is not required because hopefully the maintainer won't choose a version between 4.41 and 4.44, current version in cosmic is 4.59. Another approach would be loading the newest package from its home: How can I upgrade latexmk / why very old version is contained in Ubuntu repository?
{}
Infoscience Journal article # Dynamic Measurement of Room Impulse Responses using a Moving Microphone A novel technique for the recording of large sets of room impulse responses or head-related transfer functions is presented. The technique uses a microphone or a loudspeaker moving with constant speed. Given a setup (e.g. length of the room impulse response), a careful choice of the recording parameters (excitation signal, speed of movement) is shown to lead to the reconstruction of all impulse responses along the trajectory. In the case of moving element along a circle, the maximal angular speed is given in function of the length of the impulse response, its maximal temporal frequency, the speed of sound propagation and the radius of the circle. As result of this theory, it is shown that head-related transfer functions sampled at $44.1$kHz can be measured at all angular positions along the horizontal plane in less than one second. The presented theory is compared with a real system implementation using a precision moving microphone holder. The practical setup is discussed together with its limitations.
{}
+0 # complex analysis 0 925 2 α-iβ=1/(a-ib) then prove that(α22)(a2+b2)=1 Mar 24, 2015 #2 +892 +13 On the assumption that alpha, beta, a and b are real, $$\alpha -\imath \beta=\frac{1}{a-\imath b}=\frac{a+\imath b}{a^{2}+b^{2}}.$$ Taking the complex conjugate of both sides, $$\alpha + \imath\beta=\frac{a-\imath b}{a^{2}+b^{2}},$$ and multiplying the equations together, $$\alpha^{2}+\beta^{2}=\frac{a^{2}+b^{2}}{(a^{2}+b^{2})^{2}},$$ from which the result follows. Mar 24, 2015 #1 +2353 +10 I think you either forgot to mention some of the details, or the variables (other than i off course) have an implicit meaning which I'm unaware of. Anyway, I can make it a little easier for you by doing the following. $$\begin{array}{lcl} \alpha - i\beta &= \frac{1}{a-ib}\\ \alpha - i\beta &= \frac{1}{a-ib}\frac{a+ib}{a+ib}\\ \alpha - i\beta &= \frac{a+ib}{a^2+ aib - aib -i^2b^2}\\ \alpha - i\beta &= \frac{a+ib}{a^2+b^2}\\ (\alpha - i\beta)(\alpha + i\beta) &= \frac{(a+ib)(\alpha + i\beta)}{a^2+b^2}\\ \alpha^2 +i\alpha \beta - i\alpha \beta - i^2 \beta^2 &= \frac{(a+ib)(\alpha + i\beta)}{a^2+b^2}\\ \alpha^2 + \beta^2 &= \frac{(a+ib)(\alpha + i\beta)}{a^2+b^2}\\ (\alpha^2 + \beta^2)(a^2+b^2)&= (a+ib)(\alpha + i\beta) \end{array}$$ Now if you can prove that $$(a+ib)(\alpha + i\beta) = 1$$ You're there. Reinout Mar 24, 2015 #2 +892 +13 On the assumption that alpha, beta, a and b are real, $$\alpha -\imath \beta=\frac{1}{a-\imath b}=\frac{a+\imath b}{a^{2}+b^{2}}.$$ Taking the complex conjugate of both sides, $$\alpha + \imath\beta=\frac{a-\imath b}{a^{2}+b^{2}},$$ and multiplying the equations together, $$\alpha^{2}+\beta^{2}=\frac{a^{2}+b^{2}}{(a^{2}+b^{2})^{2}},$$ from which the result follows. Bertie Mar 24, 2015
{}
## Online Encylopedia and Dictionary Research Site Online Encyclopedia Search    Online Encyclopedia Browse # Mathematical coincidence In mathematics, a mathematical coincidence can be said to occur when two expressions show a near-equality that lacks direct theoretical explanation. One of the expressions may be an integer and the surprising feature is the fact that a real number is close to a small integer; or, more generally, to a rational number with a small denominator. Given the large number of ways of combining mathematical expressions, one might expect a large number of coincidences to occur; this is one aspect of the so-called law of small numbers. Although mathematical coincidences may be useful, they are mainly notable for their curiosity value. ## Some examples • $e^\pi\simeq\pi^e$; correct to about 3% • $\pi^2\simeq10$; correct to about 3%. This coincidence was used in the design of slide rules, where the "folded" scales are folded on π rather than $\sqrt{10}$, because it is a more useful number and has the effect of folding the scales in about the same place. • $\pi\simeq 22/7$; correct to about 0.03%; $\pi\simeq 355/113$, correct to six places or 0.000008%. (The theory of continued fractions gives a systematic treatment of this type of coincidence; and also such coincidences as $2\times 12^2\simeq 17^2$ (ie $\sqrt{2}\simeq 17/12$). • $1+1/\log(10)\simeq 1/\log(2)$; leading to Donald Knuth's observation that, to within about 5%, log2(x) = log(x) + log10(x). • $2^{10}\simeq 10^3$; correct to 2.4%; implies that log102 = 0.3; actual value about 0.30103. Engineers make extensive use of the approximation that 3 dB corresponds to doubling of power level • $e^\pi\simeq\pi+20$; correct to about 0.004% • $e^{\pi\sqrt{n}}$ is close to an integer for many values of n, most notably n = 163; this one has roots in algebraic number theory. • π seconds is a nanocentury (ie 10 - 7 years); correct to within about 0.5% • one attoparsec per microfortnight approximately equals 1 inch per second (the actual figure is about 1.0043 inch per second). • $2^{7/12}\simeq 3/2$; correct to about 0.1%. In music, this coincidence means that the chromatic scale of twelve pitches includes, for each note (in a system of equal temperament, which depends on this coincidence), a note related by the 3/2 ratio. This 3/2 ratio of frequencies is the musical interval of a fifth and lies at the basis of Pythagorean tuning, just intonation, and indeed most known systems of music. • $\pi\simeq\frac{63}{25}\left(\frac{17+15\sqrt{5}}{7+15\sqrt{5}}\right)$; accurate to 9 decimal places (due to Ramanujan).
{}
# Computing a Double Limit How would one compute $\lim_{\delta \rightarrow 0, k\rightarrow\infty} (1+\delta)^{ak}$, where $a$ is some positive constant? I am finding a lower-bound of the Hausdorff Dimension on a Cantor-like set and this expression appeared in my formula. Here's what I have, even though I'm not sure if I can use L'Hopital in this case (where $k, \delta$ are approaching $\infty, 0$, respectively.) $\lim (1+\delta)^{ak}= \lim e^{ak\log(1+\delta)}=\lim e^{ak\log(1+\delta)}=\lim e^\frac{a\log(1+\delta)}{\frac{1}{k}}=\lim e^\frac{-ak^2}{1+\delta}=0,$ which I find troubling since the base is always greater than 1. Would this change much if the limit as k tends to infinity is the liminf? - It depends how $\delta$ and $k$ approach their respective limits. If $\delta=\frac ck$ then the limit is $e^{ac}$. –  Hagen von Eitzen Sep 29 '12 at 22:36 It is undefined, depends on how $\delta$ and $k$ are behaving with respect to each other. The path $\delta=1/n$, $k=n$ gives a different result than $\delta=1/n$, $k=2n$. Almost anything can happen. –  André Nicolas Sep 29 '12 at 22:36 It’s undefined, because the limit depends entirely on how $k\to\infty$ and $\delta\to 0$. For example: $$\lim_{k\to\infty}\lim_{\delta\to 0}(1+\delta)^{ak}=\lim_{k\to\infty}1^{ak}=1$$ $$\lim_{\delta\to 0^+}\lim_{k\to\infty}(1+\delta)^{ak}=\lim_{\delta\to 0^+}\infty=\infty$$ $$\lim_{\delta\to 0^-}\lim_{k\to\infty}(1+\delta)^{ak}=\lim_{\delta\to 0^-}0=0$$ $$\lim_{\delta\to 0^+}(1+\delta)^{a/\delta}=\lim_{k\to\infty}\left(1+\frac1{bk}\right)^{ak}=e^{a/b}$$ - Thanks, I forgot that fact. –  The Substitute Sep 30 '12 at 3:17
{}
# Best response polytopes¶ Another useful representation of games is to consider polytopes. A polytope $\mathcal{P}$ has the following definition: ## Definition of a Polytope as a convex hull¶ For a given set of vertices $V\in\mathbb{R} ^ K$, a Polytope $\mathcal{P}$ can be defined as the following set of points: $$\mathcal{P} = \left\{\sum_{i=1}^K \lambda_i v_i \in\mathbb{R} ^ K \;\left|\; \sum_{i=1}^K \lambda_i = 1; \lambda_i\geq 0;v_i \in V \right.\right\}$$ This is a higher dimensional generalization of polygons. Let us plot the polytope with vertices: $$V = \{(0, 0), (1/2, 0), (1/2, 1/4), (0, 1/3)\}$$ In [2]: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.spatial V = [np.array([0, 0]), np.array([1 / 2, 0]), np.array([1 / 2, 1 / 4]), np.array([0, 1 / 3])] P = scipy.spatial.ConvexHull(V) scipy.spatial.convex_hull_plot_2d(P); An equivalent definition of Polytope is as an intersection of boundaries that seperate the space in to two distinct areas. ## Definition of a Polytope as an intersection of halfspaces¶ For a matrix $M\in\mathbb{R} ^ {m\times n}$ and a vector $b\in\mathbb{m}$ a Polytope $\mathcal{P}$ can be defined as the following set of points: $$\mathcal{P} = \left\{x \in\mathbb{R} ^ {m \times n} \;\left|\; Ax\leq b \right.\right\}$$ For example the previous polytope is equivalently described by the following inequalities: \begin{align} - x_1 & \leq 0\\ -x_2 & \leq 0\\ 2x_1 & \leq 1\\ 3x_2 & \leq 1\\ x_1 + 6 x_2 & \leq 2 \end{align} ## Definition of best response polytopes¶ For a two player game $(A, B)\in \in{\mathbb{R}^{m\times n}_{>0}}^2$ the row/column player best response polytope $\mathcal{P}$/$\mathcal{Q}$ is defined by: $$\mathcal{P} = \left\{x\in\mathbb{R}^{m}\;|\;x\geq 0; xB\leq 1\right\}$$$$\mathcal{Q} = \left\{y\in\mathbb{R}^{n}\;|\; Ay\leq 1; y\geq 0 \right\}$$ The polytop $\mathcal{P}$, corresponds to the set of points with an upper bound on the utility of those points when considered as row strategies against which the column player plays. The fact that these polytopes are defined for $A, B > 0$ is not restrictive as we can simply add a constant to our utilities. As an example, let us consider the matching pennies game: $$A = \begin{pmatrix} 1 & -1\\ -1& 1 \end{pmatrix}\qquad B = \begin{pmatrix} -1 & 1\\ 1& -1 \end{pmatrix}$$ First let us add 2 to all utilities: $$A = \begin{pmatrix} 3 & 1\\ 1 & 3 \end{pmatrix}\qquad B = \begin{pmatrix} 1 & 3\\ 3 & 1 \end{pmatrix}$$ The inequalities for $\mathcal{P}$ are then given by: \begin{align} -x_1 & \leq 0\\ -x_2 & \leq 0\\ x_1 + 3 x_2 & \leq 1\\ 3 x_1 + x_2 & \leq 1\\ \end{align} which corresponds to: \begin{align} x_1 & \geq 0\\ x_2 & \geq 0\\ x_2 & \leq 1/3 -x_1/3\\ x_2 & \leq 1 - 3x_1\\ \end{align} the intersection of the two non trivial constraints is at the point: $$1/3 -x_1/3=1 - 3x_1$$ giving: $$x_1=1/4$$ and $$x_2=1/4$$ In [3]: import sympy as sym x_1 = sym.symbols('x_1') sym.solveset(1/3 - x_1 / 3 - 1 + 3 * x_1, x_1) Out[3]: {0.25} This gives 4 vertices: $$V = \{(0, 0), (1/3, 0), (1/4, 1/4), (0, 1/3)\}$$ In [4]: V = [np.array([0, 0]), np.array([1 / 3, 0]), np.array([1 / 4, 1 / 4]), np.array([0, 1 / 3])] P = scipy.spatial.ConvexHull(V) scipy.spatial.convex_hull_plot_2d(P); Note that these vertices are no longer probability vectors. Recall the four inequalities of this polytope: 1. $x_1 \geq 0$: if this inequality is "binding" (ie $x_1=0$) that implies that the row player does not play that strategy. 2. $x_2 \geq 0$: if this inequality is "binding" (ie $x_2=0$) that implies that the row player does not play that strategy. 3. $x_1 + 3 x_2 \leq 1$: if this inequality is binding (ie $x_1 + 3 x_2 = 1$) then that implies that the utility to the column player for that particular column is as big as it can be. 4. $3x_1 + x_2 \leq 1$: if this inequality is binding (ie $3x_1 + x_2 = 1$) then that implies that the utility to the column player for that particular column is as big as it can be. We in fact use this notion to label our vertices: 1. $(0, 0)$ has labels $\{0, 1\}$ (we start our indexing at 0). 2. $(1/3, 0)$ has labels $\{1, 3\}$ 3. $(1/4, 1/4)$ has labels $\{2, 3\}$ 4. $(0, 1/3)$ has labels $\{0, 3\}$ Similarly the vertics and labels for $\mathcal{Q}$ are: 1. $(0, 0)$ has labels $\{2, 3\}$ 2. $(1/3, 0)$ has labels $\{0, 3\}$ 3. $(1/4, 1/4)$ has labels $\{0, 1\}$ 4. $(0, 1/3)$ has labels $\{1, 2\}$ Note that for a given pair of vertices, if the pair is fully labeled (so that the union of the labels is $\{0, 1, 2, 3\}$) then either a strategy is not played or it is a best response to the other player's strategies. This leads to a final observation: ## Fully labeled vertex pair¶ For a pair of vertices $(x, y)\in\mathcal{P}\times \mathcal{Q}$, if the union of the labels of $x$ and $y$ correspond to the set of all labels then $x, y$, when normalised (so that the sum is 1), correspond to a Nash equilibrum. This leads to another algorithm for finding equilibria: ## Vertex enumeration algorithm¶ For a nondegenerate 2 player game $(A, B)\in{\mathbb{R}^{m\times n}_{>0}}^2$ the following algorithm returns all nash equilibria: 1. For all pairs of vertices of the best response polytopes 2. Check if the vertices have full labels 3. Return the normalised probabilities For our running example, the only pair of vertices that is fully labeled is: $$((1/4, 1/4), (1/4, 1/4))$$ which, when normalised (so that the sum is 1) corresponds to: $$((1/2, 1/2), (1/2, 1/2))$$ This algorithm is implemented in Nashpy: In [5]: import nash A = np.array([[1, -1], [-1, 1]]) matching_pennies = nash.Game(A) list(matching_pennies.vertex_enumeration()) Out[5]: [(array([ 0.5, 0.5]), array([ 0.5, 0.5]))]
{}
# Mean Value Theorem Question by calcnd Tags: theorem P: 20 Use the Intermediate Value Theorem and/or the Mean Value Theorem and/or properties of $$G'(x)$$ to show that the function $$G(x) = x^2 - e^{\frac{1}{1+x}}$$ assumes a value of 0 for exactly one real number x such that 0 < x < 2 . Hint: You may assume that $$e^{\frac{1}{3}} < 2$$. So I'm completely lost. Here's the first thing I tried: $$G(x) = 0$$ $$0 = x^2 - e^{\frac{1}{1+x}}$$ $$e^{\frac{1}{1+x}} = x^2$$ $$lne^{\frac{1}{1+x}} = lnx^2$$ $$\frac{1}{1+x} = 2lnx$$ $$1 = 2(1+x)lnx$$ $$\frac{1}{2} = (1+x)lnx$$ Which isn't quite getting me anywhere. And I'm not sure how the mean value theorem is going to help me out much more since: $$G'(c) = \frac{G(2) - G(0)}{2-0}$$ $$2c + (1+c)e^{\frac{1}{1+c}} = \frac{[2^2-e^{\frac{1}{3}}] - [0^2 - e]}{2}$$ $$4c + (2+2c)e^{\frac{1}{1+c}} = 4 - e^{\frac{1}{3}} + e$$ Which isn't going to simply easily. Oy vey... Any help or hints would be appreciated. Edit: There, I think I finally got it to render correctly. P: 20 Hrm, that's what I'm not sure of. I know the x value is within the interval, due to the intermediate value theorem, as you stated. I could suggest that I solve $$G'(x)=0$$, but I could imagine that would be useless as there's no reason to think that $$G(c)=0$$ is a critical point. :S P: 18 Mean Value Theorem Question Look at $$G'(x)$$ and decide what its properties are for 0 P: 20 Heh... I noticed I did my derivative entirely wrong. I guess that's why you shouldn't use the computer when you're tired. Anyways, $$G(0)=0^2-e^(\frac{1}{1+0})$$ $$=-e$$ Which is < 0 $$G(2)=2^2-e^{\frac{1}{1+2}}$$ $$=4-e^{\frac{1}{3}}$$ Which is > 0 Meaning $$G(x)=0$$ is contained somewhere between x=0 and x=2. $$G'(x)=2x + {e^{\frac{1}{1+x}}\over{(1+x)^2}}$$ But I'm not having an easy time trying to solve this equal to zero to find my critical numbers. It seems relatively safe to assume that $$G(x)$$ is increasing on this interval.
{}
# the product of finkelstein reaction It is used to synthesize one alkyl halide from another. Alkyl halides differ greatly in the ease with which they undergo the Finkelstein reaction. products, since KCl and KBr are insoluble in acetone and are consequently As the sodium iodide is soluble in the acetone, but the sodium bromide and sodium chloride are not soluble in the acetone. Halide exchange is an equilibrium reaction, but the reaction can be driven to completion by exploiting the differential solubility of halide salts, or by using a large excess of the halide salt.[2]. It is much clearer to say that "antarafacial elimination gives the cis product" - and now there is no confusion of the mode of elimination with whatever stereochemical convention identifies the product. As the sodium iodide is soluble in the acetone, but the sodium bromide and sodium chloride are not soluble in the acetone. This cloudiness if likely some residual sodium iodide which is removed during washing. FINKELSTEIN REACTION * The Finkelstein reaction involves the exchange of one halogen for another, especially, in primary alkyl halides. This solid was triturated with a 1:1 v./v. The classic Finkelstein reaction involves the process of an alkyl bromide or an alkyl chloride into an alkyl iodide which is treated with a sodium iodide solution in acetone. Required fields are marked *, The Finkelstein reaction is a Substitution Nucleophilic Bimolecular reaction (. In the modern usage of Finkelstein reaction, it has expanded including in the process of changing alcohols to alkyl halides by converting the alcohol to a sulfonate ester in the first stage, and then the substitution changes. Biography. [9], From Infogalactic: the planetary knowledge core. For instance: The bromoethane can be converted to iodoethane: $$CH_{3} CH_{2}Br \ _{(acetone)} + Nal \ _{(acetone)} \rightarrow CH_{3} CH_{2} I \ _{(acetone)} + NaBr \ \ _{(s)}$$. Secondary halides are far less reactive. This page was last modified on 13 December 2015, at 22:12. Hans Finkelstein came from a liberal Jewish family and joined the Protestant Church when he was 10 years old. The mechanism of the Finkelstein reaction is single-step $$S_{N}2$$ reaction with stereochemistry inversion. To a 100 mL flask equipped a large stir bar was added the 3-chloro-1-propanol (4.73 g, 0.050 mol) and acetone (50 mL, 1 M in the alcohol). It is an equilibrium reaction, but the reaction can be driven to completion by exploiting the differential solubility of halide salts, or by using a large excess of the halide salt. In the classical version of Finkelstein reaction, a primary alkyl halide, RX is treated with an alkali metal halide, like NaX' or KX', in excess in acetone. A spatula was used to break up any large orange aggregates 3. The reaction works good with the primary halides but better with α-carbonyl halides and allyl benzyl. of metal salts that have a high lattice energy require the addition of a crown The reaction works well for primary (except for neopentyl) halides, and exceptionally well for allyl, benzyl, and α-carbonyl halides. The Finkelstein Reaction: Quantitative Reaction Kinetics of an SN2 Reaction Using Nonaqueous Conductivity . Sulfonate Esters by Recyclable Ionic Liquids [bmim][X] Thus, the substitution of bromo- and KF, KI) leads to replacement of the halogen via an S N 2 Reaction. This story starts with an organic chemistry tutorial, when a student asked for clarification of the Finkelstein reaction. The chlorine atom in aryl chlorides (with electron-withdrawing substituents) can be replaced by fluorine using a solution of potassium fluoride in polar solvents such as DMF and DMSO and high temperatures. Your email address will not be published. It is an organic reaction that uses an alkyl halide exchange into another alkyl halide through a reaction wherein the metal halide salt is used. The success of this reaction depends on the below conditions. Since $\ce{KCl}$ keeps precipitating the reaction will be driven to the desired product. chloroalkanes with KI in acetone leads cleanly to the desired iodoalkane For example, bromoethane can be converted to iodoethane: Alkyl halides differ greatly in the ease with which they undergo the Finkelstein reaction. The reaction works well for primary (except for neopentyl) halides, and exceptionally well for allyl, benzyl, and α-carbonyl halides. The organic layer was then washed with ≈100 mL of deionized water, followed by ≈100 mL of brine, and dried with sodium sulfate. CS1 maint: multiple names: authors list (, https://infogalactic.com/w/index.php?title=Finkelstein_reaction&oldid=5030429, Creative Commons Attribution-ShareAlike License, About Infogalactic: the planetary knowledge core. A. Taher, K. C. Lee, H. J. Han, D. W. Kim, Org. The solvent was removed by rotary evaporation to afford pure 3-iodo-1-propanol as a pale yellow oil (7.78 g, 84%). It is named after German Chemist Hans Finkelstein. Treatment September 2006; Journal of chemical education 83(9) DOI: … The Finkelstein reaction is a Substitution Nucleophilic Bimolecular reaction ($$S_{N}2$$ Reaction) involves the exchange of halogen atom. of a primary alkyl halide or pseudohalide with an alkali metal halide It is an organic reaction that uses an alkyl halide exchange into another alkyl halide through a reaction wherein the metal halide salt is used. For instance: The bromoethane can be converted to iodoethane: $$CH_{3} CH_{2}Br \ _{(acetone)} + Nal \ _{(acetone)} \rightarrow CH_{3} CH_{2} I \ _{(acetone)} + NaBr \ \ _{(s)}$$. the anion, whether a good leaving group is present, and whether one anion is [3] The reaction is driven toward products by mass action due to the precipitation of the poorly soluble NaCl or NaBr. Secondary halides are far less reactive. The Finkelstein reaction named after the German chemist Hans Finkelstein,[1] is an SN2 reaction (Substitution Nucleophilic Bimolecular reaction) that involves the exchange of one halogen atom for another. What was the driving force for this reaction he asked? [5] Such reactions usually employ polar solvents such as dimethyl formamide, ethylene glycol, and dimethyl sulfoxide. Allswell Luxe Hybrid Canada, Cinnamon Walnut Pancakes, How To Get Distant Tumulus, Bacon Wrapped Corned Beef Brisket, Ac Odyssey Circe, Ryobi Phone Works With Bluetooth App, Resource Allocation Decisions, Madurai To Nagercoil Bus Timings,
{}
## The Algebra and Geometry of Square Root If we have a square with a given area, then we can find the length of its side. For example, a square with area 4 square units has a side length of 2 units. In other words, in finding the side length of a square with area 4 square units,we are looking for a number that is equal to 4 when squared.The number that when squared is equal to 4 is called the square root of 4 and is written as $\sqrt{4}$. From the discussion above, we now know that $\sqrt{4} = 2$. It is easy to see that $\sqrt{1} = 1$, since $1^2 = 1$ and $\sqrt{0} = 0$ since $0^2 = 0$. The square root of the two numbers above are integers, but this is not always the case. For instance, $\sqrt{2}$ is clearly not an integer since $1^2 = 1$and $2^2 = 4$. This means that $\sqrt{2}$ is somewhere between 1 and 4. What about $\sqrt{5}$» Read more
{}
# Showing coercivity of the bilinear form associated with a robin boundary value problem I'm trying to show the existence and uniqueness of weak solutions to the following boundary value problem: \begin{align} -\nabla \cdot ( k \nabla u) &= f \quad \text{in } \Omega \subset \mathbb{R}^n\\ -k \nabla u \cdot \boldsymbol{n} - c u &= g \quad \text{on } \partial \Omega. \end{align} The associated variational formulation is to find $u \in H^1(\Omega)$ such that $$a(u,w) = \ell(w) \quad \forall w \in H^1(\Omega),$$ where \begin{align} a(u,w) &= \int_{\Omega} k \nabla u \cdot \nabla w ~\mathrm{d} \boldsymbol{x} + \int_{\partial \Omega} c u w ~\mathrm{d} \boldsymbol{l}\\ \ell(w) &= \int_{\Omega} f w ~\mathrm{d} \boldsymbol{x} - \int_{\partial \Omega} g w ~\mathrm{d} \boldsymbol{l}. \end{align} I'm assuming $f\in L^2(\Omega)$, that $g$ and $c$ are in $L^2(\partial \Omega)$, and there exists constants $k_{min}, k_{max} \in \mathbb{R}$ such that $0 < k_{min} < k < k_{max}$ for all $\boldsymbol{x} \in \Omega$. Additionally, it is assumed that $c$ is strictly positive. I want to use the Lax-Milgrim Theorem to establish the existence and uniqueness of the weak solution. I can show that $a(\cdot, \cdot)$ and $\ell(\cdot)$ are bounded above in $H^1(\Omega)$ using the trace inequality. However, I am having some trouble establishing coercivity of $a(\cdot, \cdot)$. I did find this really nice proof by contradiction to a similar question: Variational formulation of Robin boundary value problem for Poisson equation in finite element methods. However, I can use a constructive proof to show coercivity in the $1$-dimensional setting. It makes me think there might be a way to directly establish coercivity in the the $n$-dimensional setting. I was wondering if anyone knows a clever way to establish coercivity without the compactness argument? • Does $c$ have a sign? – miles Feb 13 '16 at 15:40 • $c$ is strictly positive. – Steve Feb 14 '16 at 22:40 Since $c$ is strictly positive, there is a measurable portion $E\subset \partial\Omega$ such that $c>c_{min}>0$ on $E$. Therefore, coercivity follows from the Poincare type inequality $\|v\|_2^2 \leq C(\Omega,E)(\|\nabla v \|_2^2 + \int_E |v|^2~d\sigma)$ for all $v\in H^1(\Omega)$.
{}
# Properties Label 720.2.q.e Level $720$ Weight $2$ Character orbit 720.q Analytic conductor $5.749$ Analytic rank $0$ Dimension $2$ CM no Inner twists $2$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$720 = 2^{4} \cdot 3^{2} \cdot 5$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 720.q (of order $$3$$, degree $$2$$, not minimal) ## Newform invariants Self dual: no Analytic conductor: $$5.74922894553$$ Analytic rank: $$0$$ Dimension: $$2$$ Coefficient field: $$\Q(\sqrt{-3})$$ Defining polynomial: $$x^{2} - x + 1$$ Coefficient ring: $$\Z[a_1, a_2, a_3]$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 360) Sato-Tate group: $\mathrm{SU}(2)[C_{3}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{6}$$. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + ( 1 + \zeta_{6} ) q^{3} + \zeta_{6} q^{5} + 3 \zeta_{6} q^{9} +O(q^{10})$$ $$q + ( 1 + \zeta_{6} ) q^{3} + \zeta_{6} q^{5} + 3 \zeta_{6} q^{9} + ( -5 + 5 \zeta_{6} ) q^{11} + ( -1 + 2 \zeta_{6} ) q^{15} + 3 q^{17} -5 q^{19} + 6 \zeta_{6} q^{23} + ( -1 + \zeta_{6} ) q^{25} + ( -3 + 6 \zeta_{6} ) q^{27} + ( 10 - 10 \zeta_{6} ) q^{29} -2 \zeta_{6} q^{31} + ( -10 + 5 \zeta_{6} ) q^{33} + 4 q^{37} + 3 \zeta_{6} q^{41} + ( 3 - 3 \zeta_{6} ) q^{43} + ( -3 + 3 \zeta_{6} ) q^{45} + ( 4 - 4 \zeta_{6} ) q^{47} + 7 \zeta_{6} q^{49} + ( 3 + 3 \zeta_{6} ) q^{51} -6 q^{53} -5 q^{55} + ( -5 - 5 \zeta_{6} ) q^{57} -3 \zeta_{6} q^{59} + ( -2 + 2 \zeta_{6} ) q^{61} -11 \zeta_{6} q^{67} + ( -6 + 12 \zeta_{6} ) q^{69} + 14 q^{71} -15 q^{73} + ( -2 + \zeta_{6} ) q^{75} + ( 10 - 10 \zeta_{6} ) q^{79} + ( -9 + 9 \zeta_{6} ) q^{81} + ( -12 + 12 \zeta_{6} ) q^{83} + 3 \zeta_{6} q^{85} + ( 20 - 10 \zeta_{6} ) q^{87} + 14 q^{89} + ( 2 - 4 \zeta_{6} ) q^{93} -5 \zeta_{6} q^{95} + ( 13 - 13 \zeta_{6} ) q^{97} -15 q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2q + 3q^{3} + q^{5} + 3q^{9} + O(q^{10})$$ $$2q + 3q^{3} + q^{5} + 3q^{9} - 5q^{11} + 6q^{17} - 10q^{19} + 6q^{23} - q^{25} + 10q^{29} - 2q^{31} - 15q^{33} + 8q^{37} + 3q^{41} + 3q^{43} - 3q^{45} + 4q^{47} + 7q^{49} + 9q^{51} - 12q^{53} - 10q^{55} - 15q^{57} - 3q^{59} - 2q^{61} - 11q^{67} + 28q^{71} - 30q^{73} - 3q^{75} + 10q^{79} - 9q^{81} - 12q^{83} + 3q^{85} + 30q^{87} + 28q^{89} - 5q^{95} + 13q^{97} - 30q^{99} + O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/720\mathbb{Z}\right)^\times$$. $$n$$ $$181$$ $$271$$ $$577$$ $$641$$ $$\chi(n)$$ $$1$$ $$1$$ $$1$$ $$-\zeta_{6}$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 241.1 0.5 + 0.866025i 0.5 − 0.866025i 0 1.50000 + 0.866025i 0 0.500000 + 0.866025i 0 0 0 1.50000 + 2.59808i 0 481.1 0 1.50000 0.866025i 0 0.500000 0.866025i 0 0 0 1.50000 2.59808i 0 $$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 9.c even 3 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 720.2.q.e 2 3.b odd 2 1 2160.2.q.c 2 4.b odd 2 1 360.2.q.a 2 9.c even 3 1 inner 720.2.q.e 2 9.c even 3 1 6480.2.a.e 1 9.d odd 6 1 2160.2.q.c 2 9.d odd 6 1 6480.2.a.q 1 12.b even 2 1 1080.2.q.a 2 36.f odd 6 1 360.2.q.a 2 36.f odd 6 1 3240.2.a.b 1 36.h even 6 1 1080.2.q.a 2 36.h even 6 1 3240.2.a.f 1 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 360.2.q.a 2 4.b odd 2 1 360.2.q.a 2 36.f odd 6 1 720.2.q.e 2 1.a even 1 1 trivial 720.2.q.e 2 9.c even 3 1 inner 1080.2.q.a 2 12.b even 2 1 1080.2.q.a 2 36.h even 6 1 2160.2.q.c 2 3.b odd 2 1 2160.2.q.c 2 9.d odd 6 1 3240.2.a.b 1 36.f odd 6 1 3240.2.a.f 1 36.h even 6 1 6480.2.a.e 1 9.c even 3 1 6480.2.a.q 1 9.d odd 6 1 ## Hecke kernels This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(720, [\chi])$$: $$T_{7}$$ $$T_{11}^{2} + 5 T_{11} + 25$$ ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$T^{2}$$ $3$ $$3 - 3 T + T^{2}$$ $5$ $$1 - T + T^{2}$$ $7$ $$T^{2}$$ $11$ $$25 + 5 T + T^{2}$$ $13$ $$T^{2}$$ $17$ $$( -3 + T )^{2}$$ $19$ $$( 5 + T )^{2}$$ $23$ $$36 - 6 T + T^{2}$$ $29$ $$100 - 10 T + T^{2}$$ $31$ $$4 + 2 T + T^{2}$$ $37$ $$( -4 + T )^{2}$$ $41$ $$9 - 3 T + T^{2}$$ $43$ $$9 - 3 T + T^{2}$$ $47$ $$16 - 4 T + T^{2}$$ $53$ $$( 6 + T )^{2}$$ $59$ $$9 + 3 T + T^{2}$$ $61$ $$4 + 2 T + T^{2}$$ $67$ $$121 + 11 T + T^{2}$$ $71$ $$( -14 + T )^{2}$$ $73$ $$( 15 + T )^{2}$$ $79$ $$100 - 10 T + T^{2}$$ $83$ $$144 + 12 T + T^{2}$$ $89$ $$( -14 + T )^{2}$$ $97$ $$169 - 13 T + T^{2}$$
{}
Tài liệu # Applications of Electrostatics Science and Technology The study of electrostatics has proven useful in many areas. This module covers just a few of the many applications of electrostatics. # The Van de Graaff Generator Van de Graaff generators (or Van de Graaffs) are not only spectacular devices used to demonstrate high voltage due to static electricity—they are also used for serious research. The first was built by Robert Van de Graaff in 1931 (based on original suggestions by Lord Kelvin) for use in nuclear physics research. [link] shows a schematic of a large research version. Van de Graaffs utilize both smooth and pointed surfaces, and conductors and insulators to generate large static charges and, hence, large voltages. A very large excess charge can be deposited on the sphere, because it moves quickly to the outer surface. Practical limits arise because the large electric fields polarize and eventually ionize surrounding materials, creating free charges that neutralize excess charge or allow it to escape. Nevertheless, voltages of 15 million volts are well within practical limits. # Xerography Most copy machines use an electrostatic process called xerography—a word coined from the Greek words xeros for dry and graphos for writing. The heart of the process is shown in simplified form in [link]. A selenium-coated aluminum drum is sprayed with positive charge from points on a device called a corotron. Selenium is a substance with an interesting property—it is a photoconductor. That is, selenium is an insulator when in the dark and a conductor when exposed to light. In the first stage of the xerography process, the conducting aluminum drum is grounded so that a negative charge is induced under the thin layer of uniformly positively charged selenium. In the second stage, the surface of the drum is exposed to the image of whatever is to be copied. Where the image is light, the selenium becomes conducting, and the positive charge is neutralized. In dark areas, the positive charge remains, and so the image has been transferred to the drum. The third stage takes a dry black powder, called toner, and sprays it with a negative charge so that it will be attracted to the positive regions of the drum. Next, a blank piece of paper is given a greater positive charge than on the drum so that it will pull the toner from the drum. Finally, the paper and electrostatically held toner are passed through heated pressure rollers, which melt and permanently adhere the toner within the fibers of the paper. # Laser Printers Laser printers use the xerographic process to make high-quality images on paper, employing a laser to produce an image on the photoconducting drum as shown in [link]. In its most common application, the laser printer receives output from a computer, and it can achieve high-quality output because of the precision with which laser light can be controlled. Many laser printers do significant information processing, such as making sophisticated letters or fonts, and may contain a computer more powerful than the one giving them the raw data to be printed. # Ink Jet Printers and Electrostatic Painting The ink jet printer, commonly used to print computer-generated text and graphics, also employs electrostatics. A nozzle makes a fine spray of tiny ink droplets, which are then given an electrostatic charge. (See [link].) Once charged, the droplets can be directed, using pairs of charged plates, with great precision to form letters and images on paper. Ink jet printers can produce color images by using a black jet and three other jets with primary colors, usually cyan, magenta, and yellow, much as a color television produces color. (This is more difficult with xerography, requiring multiple drums and toners.) Electrostatic painting employs electrostatic charge to spray paint onto odd-shaped surfaces. Mutual repulsion of like charges causes the paint to fly away from its source. Surface tension forms drops, which are then attracted by unlike charges to the surface to be painted. Electrostatic painting can reach those hard-to-get at places, applying an even coat in a controlled manner. If the object is a conductor, the electric field is perpendicular to the surface, tending to bring the drops in perpendicularly. Corners and points on conductors will receive extra paint. Felt can similarly be applied. # Smoke Precipitators and Electrostatic Air Cleaning Another important application of electrostatics is found in air cleaners, both large and small. The electrostatic part of the process places excess (usually positive) charge on smoke, dust, pollen, and other particles in the air and then passes the air through an oppositely charged grid that attracts and retains the charged particles. (See [link].) Large electrostatic precipitators are used industrially to remove over 99% of the particles from stack gas emissions associated with the burning of coal and oil. Home precipitators, often in conjunction with the home heating and air conditioning system, are very effective in removing polluting particles, irritants, and allergens. # Integrated Concepts The Integrated Concepts exercises for this module involve concepts such as electric charges, electric fields, and several other topics. Physics is most interesting when applied to general situations involving more than a narrow set of physical principles. The electric field exerts force on charges, for example, and hence the relevance of Dynamics: Force and Newton’s Laws of Motion. The following topics are involved in some or all of the problems labeled “Integrated Concepts”: The following worked example illustrates how this strategy is applied to an Integrated Concept problem: Acceleration of a Charged Drop of Gasoline If steps are not taken to ground a gasoline pump, static electricity can be placed on gasoline when filling your car’s tank. Suppose a tiny drop of gasoline has a mass of $4.00×{10}^{–15}\phantom{\rule{0.25em}{0ex}}\text{kg}$ and is given a positive charge of $3.20×{10}^{–19}\phantom{\rule{0.25em}{0ex}}\text{C}$. (a) Find the weight of the drop. (b) Calculate the electric force on the drop if there is an upward electric field of strength $3.00×{10}^{5}\phantom{\rule{0.25em}{0ex}}\text{N/C}$ due to other static electricity in the vicinity. (c) Calculate the drop’s acceleration. Strategy To solve an integrated concept problem, we must first identify the physical principles involved and identify the chapters in which they are found. Part (a) of this example asks for weight. This is a topic of dynamics and is defined in Dynamics: Force and Newton’s Laws of Motion. Part (b) deals with electric force on a charge, a topic of Electric Charge and Electric Field. Part (c) asks for acceleration, knowing forces and mass. These are part of Newton’s laws, also found in Dynamics: Force and Newton’s Laws of Motion. The following solutions to each part of the example illustrate how the specific problem-solving strategies are applied. These involve identifying knowns and unknowns, checking to see if the answer is reasonable, and so on. Solution for (a) Weight is mass times the acceleration due to gravity, as first expressed in $w=\text{mg}.$ Entering the given mass and the average acceleration due to gravity yields $w=\left(\text{4.00}×{\text{10}}^{-\text{15}}\phantom{\rule{0.25em}{0ex}}\text{kg}\right)\left(9\text{.}\text{80}\phantom{\rule{0.25em}{0ex}}{\text{m/s}}^{2}\right)=3\text{.}\text{92}×{\text{10}}^{-\text{14}}\phantom{\rule{0.25em}{0ex}}\text{N}.$ Discussion for (a) This is a small weight, consistent with the small mass of the drop. Solution for (b) The force an electric field exerts on a charge is given by rearranging the following equation: $F=\text{qE}.$ Here we are given the charge ($3.20×{10}^{–19}\phantom{\rule{0.25em}{0ex}}\text{C}$ is twice the fundamental unit of charge) and the electric field strength, and so the electric force is found to be $F=\left(3.20×{\text{10}}^{-\text{19}}\phantom{\rule{0.25em}{0ex}}\text{C}\right)\left(3\text{.}\text{00}×{\text{10}}^{5}\phantom{\rule{0.25em}{0ex}}\text{N/C}\right)=9\text{.}\text{60}×{\text{10}}^{-\text{14}}\phantom{\rule{0.25em}{0ex}}\text{N}.$ Discussion for (b) While this is a small force, it is greater than the weight of the drop. Solution for (c) The acceleration can be found using Newton’s second law, provided we can identify all of the external forces acting on the drop. We assume only the drop’s weight and the electric force are significant. Since the drop has a positive charge and the electric field is given to be upward, the electric force is upward. We thus have a one-dimensional (vertical direction) problem, and we can state Newton’s second law as $a=\frac{{F}_{\text{net}}}{m}.$ where ${F}_{\text{net}}=F-w$. Entering this and the known values into the expression for Newton’s second law yields $\begin{array}{lll}a& =& \frac{F-w}{m}\\ & =& \frac{\text{9.60}×{\text{10}}^{-\text{14}}\phantom{\rule{0.25em}{0ex}}\text{N}-\text{3.92}×{\text{10}}^{-\text{14}}\phantom{\rule{0.25em}{0ex}}\text{N}}{\text{4.00}×{\text{10}}^{-\text{15}}\phantom{\rule{0.25em}{0ex}}\text{kg}}\\ & =& \text{14}\text{.}2\phantom{\rule{0.25em}{0ex}}{\text{m/s}}^{2}.\end{array}$ Discussion for (c) This is an upward acceleration great enough to carry the drop to places where you might not wish to have gasoline. This worked example illustrates how to apply problem-solving strategies to situations that include topics in different chapters. The first step is to identify the physical principles involved in the problem. The second step is to solve for the unknown using familiar problem-solving strategies. These are found throughout the text, and many worked examples show how to use them for single topics. In this integrated concepts example, you can see how to apply them across several topics. You will find these techniques useful in applications of physics outside a physics course, such as in your profession, in other science disciplines, and in everyday life. The following problems will build your skills in the broad application of physical principles. # Section Summary • Electrostatics is the study of electric fields in static equilibrium. • In addition to research using equipment such as a Van de Graaff generator, many practical applications of electrostatics exist, including photocopiers, laser printers, ink-jet printers and electrostatic air filters. # Problems & Exercises (a) What is the electric field 5.00 m from the center of the terminal of a Van de Graaff with a 3.00 mC charge, noting that the field is equivalent to that of a point charge at the center of the terminal? (b) At this distance, what force does the field exert on a $2.00\phantom{\rule{0.25em}{0ex}}\mu \text{C}$ charge on the Van de Graaff’s belt? (a) What is the direction and magnitude of an electric field that supports the weight of a free electron near the surface of Earth? (b) Discuss what the small value for this field implies regarding the relative strength of the gravitational and electrostatic forces. (a) $5\text{.}\text{58}×{\text{10}}^{-\text{11}}\phantom{\rule{0.25em}{0ex}}\text{N/C}$ (b)the coulomb force is extraordinarily stronger than gravity A simple and common technique for accelerating electrons is shown in [link], where there is a uniform electric field between two plates. Electrons are released, usually from a hot filament, near the negative plate, and there is a small hole in the positive plate that allows the electrons to continue moving. (a) Calculate the acceleration of the electron if the field strength is $2.50×{10}^{4}\phantom{\rule{0.25em}{0ex}}\text{N/C}$. (b) Explain why the electron will not be pulled back to the positive plate once it moves through the hole. Earth has a net charge that produces an electric field of approximately 150 N/C downward at its surface. (a) What is the magnitude and sign of the excess charge, noting the electric field of a conducting sphere is equivalent to a point charge at its center? (b) What acceleration will the field produce on a free electron near Earth’s surface? (c) What mass object with a single extra electron will have its weight supported by this field? (a) $-6\text{.}\text{76}×{\text{10}}^{5}\phantom{\rule{0.25em}{0ex}}\text{C}$ (b) $2\text{.}\text{63}×{\text{10}}^{\text{13}}\phantom{\rule{0.25em}{0ex}}{\text{m/s}}^{2}\phantom{\rule{0.25em}{0ex}}\left(\text{upward}\right)$ (c) $2\text{.}\text{45}×{\text{10}}^{-\text{18}}\phantom{\rule{0.25em}{0ex}}\text{kg}$ Point charges of $25.0\phantom{\rule{0.25em}{0ex}}\mu \text{C}$ and $45.0\phantom{\rule{0.25em}{0ex}}\mu \text{C}$ are placed 0.500 m apart. (a) At what point along the line between them is the electric field zero? (b) What is the electric field halfway between them? What can you say about two charges ${q}_{1}$ and ${q}_{2}$, if the electric field one-fourth of the way from ${q}_{1}$ to ${q}_{2}$ is zero? The charge ${q}_{2}$ is 9 times greater than ${q}_{1}$. Integrated Concepts Calculate the angular velocity $\omega$ of an electron orbiting a proton in the hydrogen atom, given the radius of the orbit is $0.530×{10}^{–10}\phantom{\rule{0.25em}{0ex}}\text{m}$. You may assume that the proton is stationary and the centripetal force is supplied by Coulomb attraction. Integrated Concepts An electron has an initial velocity of $5.00×{10}^{6}\phantom{\rule{0.25em}{0ex}}\text{m/s}$ in a uniform $2.00×{10}^{5}\phantom{\rule{0.25em}{0ex}}\text{N/C}$ strength electric field. The field accelerates the electron in the direction opposite to its initial velocity. (a) What is the direction of the electric field? (b) How far does the electron travel before coming to rest? (c) How long does it take the electron to come to rest? (d) What is the electron’s velocity when it returns to its starting point? Integrated Concepts The practical limit to an electric field in air is about $3.00×{10}^{6}\phantom{\rule{0.25em}{0ex}}\text{N/C}$. Above this strength, sparking takes place because air begins to ionize and charges flow, reducing the field. (a) Calculate the distance a free proton must travel in this field to reach $3.00%$ of the speed of light, starting from rest. (b) Is this practical in air, or must it occur in a vacuum? Integrated Concepts A 5.00 g charged insulating ball hangs on a 30.0 cm long string in a uniform horizontal electric field as shown in [link]. Given the charge on the ball is $1.00\phantom{\rule{0.25em}{0ex}}\mu \text{C}$, find the strength of the field. Integrated Concepts [link] shows an electron passing between two charged metal plates that create an 100 N/C vertical electric field perpendicular to the electron’s original horizontal velocity. (These can be used to change the electron’s direction, such as in an oscilloscope.) The initial speed of the electron is $3.00×{10}^{6}\phantom{\rule{0.25em}{0ex}}\text{m/s}$, and the horizontal distance it travels in the uniform field is 4.00 cm. (a) What is its vertical deflection? (b) What is the vertical component of its final velocity? (c) At what angle does it exit? Neglect any edge effects. Integrated Concepts The classic Millikan oil drop experiment was the first to obtain an accurate measurement of the charge on an electron. In it, oil drops were suspended against the gravitational force by a vertical electric field. (See [link].) Given the oil drop to be $1.00\phantom{\rule{0.25em}{0ex}}\mu \text{m}$ in radius and have a density of $920 kg/{m}^{3}$: (a) Find the weight of the drop. (b) If the drop has a single excess electron, find the electric field strength needed to balance its weight. Integrated Concepts (a) In [link], four equal charges $q$ lie on the corners of a square. A fifth charge $Q$ is on a mass $m$ directly above the center of the square, at a height equal to the length $d$ of one side of the square. Determine the magnitude of $q$ in terms of $Q$, $m$, and $d$, if the Coulomb force is to equal the weight of $m$. (b) Is this equilibrium stable or unstable? Discuss. Unreasonable Results (a) Calculate the electric field strength near a 10.0 cm diameter conducting sphere that has 1.00 C of excess charge on it. (b) What is unreasonable about this result? (c) Which assumptions are responsible? Unreasonable Results (a) Two 0.500 g raindrops in a thunderhead are 1.00 cm apart when they each acquire 1.00 mC charges. Find their acceleration. (b) What is unreasonable about this result? (c) Which premise or assumption is responsible? Unreasonable Results A wrecking yard inventor wants to pick up cars by charging a 0.400 m diameter ball and inducing an equal and opposite charge on the car. If a car has a 1000 kg mass and the ball is to be able to lift it from a distance of 1.00 m: (a) What minimum charge must be used? (b) What is the electric field near the surface of the ball? (c) Why are these results unreasonable? (d) Which premise or assumption is responsible? Consider two insulating balls with evenly distributed equal and opposite charges on their surfaces, held with a certain distance between the centers of the balls. Construct a problem in which you calculate the electric field (magnitude and direction) due to the balls at various points along a line running through the centers of the balls and extending to infinity on either side. Choose interesting points and comment on the meaning of the field at those points. For example, at what points might the field be just that due to one ball and where does the field become negligibly small? Among the things to be considered are the magnitudes of the charges and the distance between the centers of the balls. Your instructor may wish for you to consider the electric field off axis or for a more complex array of charges, such as those in a water molecule. Consider identical spherical conducting space ships in deep space where gravitational fields from other bodies are negligible compared to the gravitational attraction between the ships. Construct a problem in which you place identical excess charges on the space ships to exactly counter their gravitational attraction. Calculate the amount of excess charge needed. Examine whether that charge depends on the distance between the centers of the ships, the masses of the ships, or any other factors. Discuss whether this would be an easy, difficult, or even impossible thing to do in practice. Tải về Đánh giá: 0 dựa trên 0 đánh giá #### Tuyển tập sử dụng module này Nội dung cùng tác giả Nội dung tương tự
{}
# C. Sean Burns: Notebook ### Site Tools linux:console-setup For Linux virtual consoles, the Terminus font works fine on my laptops, but on my desktop, the same font and font size that I use in the laptops is over-sized and looks like banner text. Installing the psf-unifont helped a bit for desktop/console work. Note that this may all change a bit. It seems systemd manages all of this differently. Debian is holding back right now, I've read, but that may change down the line: In Bash on Debian 9.4: apt install psf-unifont Edit the following file: /etc/default/console-setup. # CONFIGURATION FILE FOR SETUPCON # Consult the console-setup(5) manual page. ACTIVE_CONSOLES="/dev/tty[1-6]" CHARMAP="UTF-8" CODESET="guess" FONTFACE="UnifontAPL" FONTSIZE="8x16" VIDEOMODE= # The following is an example how to use a braille font # FONT='lat9w-08.psf.gz brl-8x8.psf' Then: sudo setupcon I generally use dpkg-reconfigure console-setup instead of editing the above file, but psf-unifonts weren't listed when I attempted that. I'm guessing they would be after a reboot.
{}
## Permutation formula calculator online Probability online calculation: Combinations, permutations - Calculates nPr and nCr for n and r. Permutation Generator; Permutations/Anagrams Calculator; What is a permutation? (Definition); How to generate permutations? How to count permutations? Permutation calculator. Compute properties. Do algebra or generate a random permutation. Compute permutations of a set. Count permutations or  Comb & Perm | nCr & nPr Calculator & Formula. An online nCr & nPr calculation for small and big numbers. Sample Points in Set. Sample Points. Choose Output   Calculate the probability of two independent events occurring; Define permutations and combinations; List all permutations and combinations; Apply formulas for  There are many Math contexts in which the use of permutation coefficients is relevant, especially in the calculation of probabilities using distribution probabilities  An online easy to use calculator that Calculates combinations. An online calculator to calculate the number of combinations of n elements taken r a the time whose formula is given below. Permutations and Combinations Problems Probability online calculation: Combinations, permutations - Calculates nPr and nCr for n and r. ## permutation, combination factorial. Factorials, Permutations, Combinations. To use these probability options, access the Math key. Arrow to the right to find the We'll learn about factorial, permutations, and combinations. We'll also look at how to use these Permutation formula. (Opens a modal) · Zero factorial or 0! o Calculate the number of outcomes of a random experiment using permutations and combinations Why not take an online class in Statistics? We can use the following formula, where the number of permutations of n objects taken k at a  No Personal Scientific Calculators are Allowed in GATE 2017: Students mostly What are some simple steps I can take to protect my privacy online? too look for when determining if the questions represents a permutation or a combination? The above can be proved by substituting the formula for permutations into the equation. Which as we Evaluate the following without using a calculator. Step 1 . As you can see from the following chart, beyond 7 characters, the possible combinations become too large to be practical for an online calculator. Find all the formulas you need to tackle any data set. Learn to calculate standard deviation, evaluate factorials, and make sense of statistical symbols. ### Feb 22, 2018 permutations with this free online lotto and Keno numbers calculator. Canadians are required to do a math problem before claiming lottery Combinations and Permutations Calculator. Find out how many different ways to choose items. For an in-depth explanation of the formulas please visit  Permutation formula; Permutation and combination; What next? This permutation calculator is a tool that will help you  It is an online math tool which determines the number of combinations and permutations that result when we choose r r objects from a set of n n objects. It is   Combinations gives the number of ways a subset of r items can be chosen out of a set of n items. For r <= n, n >= 0, and r >= 0. The permutations formula is (P(n,r)   Oct 17, 2019 How are permutations calculated? Permutations are calculated by the formula: \ frac{\text{number of items}!}{(\text. Example. math project. [2] 2018/04/09 06:44. - / - / - / - /. Comment/Request. During a special promotion, a customer purchasing a computer and a printer is given a choice ### The Permutation Calculator is used to calculate the permutation, which is the number of ways to select k out of n items, where (unlike Related Math Calculators. The above can be proved by substituting the formula for permutations into the equation. Which as we Evaluate the following without using a calculator. Step 1 . As you can see from the following chart, beyond 7 characters, the possible combinations become too large to be practical for an online calculator. Find all the formulas you need to tackle any data set. Learn to calculate standard deviation, evaluate factorials, and make sense of statistical symbols. This article describes the formula syntax and usage of the PERMUT function in Microsoft Excel. Description. Returns the number of permutations for a given ## In this example, we needed to calculate n · (n – 1) · (n – 2) ··· 3 · 2 · 1. How many different ways can the letters of the word MATH be rearranged to form a four- A computer user has downloaded 25 songs using an online file-sharing What is the Permutation Formula, Examples of Permutation Word Problems involving n things taken r How to calculate Permutations with Repeated Symbols? We'll learn about factorial, permutations, and combinations. We'll also look at how to use these Permutation formula. (Opens a modal) · Zero factorial or 0! o Calculate the number of outcomes of a random experiment using permutations and combinations Why not take an online class in Statistics? We can use the following formula, where the number of permutations of n objects taken k at a  No Personal Scientific Calculators are Allowed in GATE 2017: Students mostly What are some simple steps I can take to protect my privacy online? too look for when determining if the questions represents a permutation or a combination? The above can be proved by substituting the formula for permutations into the equation. Which as we Evaluate the following without using a calculator. Step 1 . As you can see from the following chart, beyond 7 characters, the possible combinations become too large to be practical for an online calculator. Find all the formulas you need to tackle any data set. Learn to calculate standard deviation, evaluate factorials, and make sense of statistical symbols. Here is a list of the best math calculators online that will help you calculate fractions Permutation Calculator; Combinations Calculator; Permutations Calculator  Nov 6, 2011 Using excel to calculate permutations and combination formulas See how I make over \$7,293 a month from home doing REAL online jobs! Feb 22, 2018 permutations with this free online lotto and Keno numbers calculator. Canadians are required to do a math problem before claiming lottery  You can work permutations and combinations on the TI-84 Plus calculator. A permutation n = 10 and r = 3).The formula for a permutation is: nPr = (n!)/(n-r)!. In this permutation calculator, you can calculate permutation and combination when entered objects and sample. Calculator - Free Online Calculators set of numbers with element number n ; Formula of permutation calculator : P (n, r) = n!
{}
# All Questions 1k views ### What is a type in Wolfram Mathematica programming language? "Everything is an expression" is a popular citation from many Mathematica guidebooks. So, what is type in Mathematica? How does it relate to common types from Haskell, for example? I did some ... 1k views ### Replacing composite variables by a single variable To replace a single variable by another variable, one can simply use the the replace all (/.) operator (e.g., ... 530 views ### How do I generate the upper triangular indices from a list? I have some list {1,2,3}. How do I generate nested pairs such that I get {{1,2},{1,3},{2,3}}? That is I'd like a way to ... 5k views ### Plotting Complex Quantity Functions Trying to plot with complex quantities seems not to work properly in what I want to accomplish. I would like to know if there is a general rule/way of plotting when you have complex counterparts in ... 245 views ### How to “ignore” an element of Map or MapIndexed Say I have some function that I'm applying every element in a list to... if that element matches some criteria: ... 891 views ### Splitting words into specific fragments I am looking into splitting words into a succession of chemical elements symbols, where possible. For example: Titanic = Ti Ta Ni C (titanium, tantalum, nickel, carbon) A word may or may not be ... 419 views With reference to my earlier post: Adding a point to an already existing graphic I would like to add an outer circle to the whole drawing. This outer circle is a fixed one of ... 771 views ### Number format of axes in a plot How can I have a conditional format of the values appearing in the axis of a plot ? I have in mind the number format options available in Excel for plots as described here ... 4k views ### Formatting legend text font I am using this code to plot a graph, and I am trying to make the font of the legend bigger. ... 259 views ### Using the same frame ticks for two different histograms Consider the following: ... 919 views Using Mark McClure's GPXToGoogleMap results in a HTML and Javascript code which can be viewed by any browser. Is it possible to load this map into the notebook, which obviously means for example that ... 152 views ### Solve[ ] with Method -> Reduce gives a different result than Reduce[ ] Why does Solve[Sqrt[x + Sqrt[x]] - Sqrt[x - Sqrt[x]] == m Sqrt[x/(x + Sqrt[x])], x, Reals, Method -> Reduce] give a different result than ... 543 views ### Reading from STDIN, or: how to pipe data into Mathematica Today I tried using Mathematica's plotting capabilities to display the output of a C++ program. This made me wonder whether it is possible to somehow tell a Mathematica script to read from STDIN and ... 254 views ### Matrix multiplication involving MatrixForm [duplicate] Possible Duplicate: Why does MatrixForm affect calculations? I am doing a matrix multiplication, but not getting the desired output. I am doing the matrix multiplication of $A^{-1}B$ from ... 204 views ### How do I write a ValueQ function that only returns True if there exists an OwnValue? Reading the comments in this answer has motivated me to request a full solution to part of this problem. What I'd like is an efficient solution that returns True ... 264 views ### Conditional Gathering of lists Just need a little help with the GatherBy / SplitBy function(s). I have a list of random numbers here: ... 3k views ### Importing from Excel I'm trying to import a matrix from Excel to Mathematica. My code is: Import["desktop/stproj.xls", "xls", data, 1] The output is some weird stuff about population ... 256 views ### Parallelization of distinct array write access from subkernels I'm working on an implementation of a multivariate FFT, which is (or at least should be) highly parallelizable due to the row-column-algorithm. However, i can't figure out how to implement that. The ... 536 views ### Is there a convenient way to copy/paste text-interspersed SE code snippets into Mathematica? Is there a way to copy and paste code snippets from SE to Mathematica if these snippets are interspersed with text? Like e.g. in Morphing Graphics, color and location in both the question and answer, ... 913 views ### Morphing Graphics, color and location I would like to make a small animation : 1-We start with a random distribution of gray points : ... 224 views ### Arguments to If[] are not evaluated I got bitten by the following: f[x_] := 3*x; g[x_] := If[Log[f[x]] < 0, f[x], 0]; g[x] Out[11]= If[Log[3 x] < 0, f[x], 0] where I thought the call to ... 1k views ### Fetching data from HTML source I want to generate a couple of plots/graphs with Area51 statistics. Since Area51 doesn't work with the SE API, I'm forced to find another way to get the information I want. That other way is with ... 238 views ### Setting parts of a list Suppose I have list a = Range[10] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} in which I want to set some elements to be a list ... 348 views ### How to properly DumpSave & Get Global`s symbols inside packages while not touching Global context? For efficiency reasons I prefer to use DumpSave instead of Save. For ease of access I prefer to save symbols in ... 191 views ### Old values are not freed/garbage collected when you re-evaluate an assignment For this code: (* Cell 1 *) generate := Module[{x}, x = Range[100 * 1000 * 1000]; x]; (* Cell 2 *) g = generate[]; MemoryInUse[] If I evaluate cell 2 ... 539 views ### How to set the NDSolve method to LSODA I notice that off all the Method options available for NDSolve[...], LSODA is invoked quite ... 210 views ### presenting a real number as real instead of imaginary I have an equation which results in an answer of the form $\frac{i a}{\sqrt{c-d}}$ is there any way to get Mathematica to present it in it's real form? like $\frac{a}{\sqrt{d-c}}$ I know that I ... 421 views ### How to get string representation (like repr in Python) I have a string variable. I want to obtain another string which contains the representation of the string variable content itself. s = "a \n b" I need to get a ... 1k views ### Solve Lagrange multipliers optimization problem I have two nested solid figure, where $V(a,h,\tau)$ defines the volume and $A(a,h,t)$ defines the surface. The outer solid figure is parametrized in $a_s$,$h_s$ and $t_s$ (they share a common center). ... 727 views ### Formatting a fraction as a mixed number Is there a command that will take a rational number and rewrite it in a mixed-number-like form? That is, I'd like to apply a command to something like 10/7 and get ... 514 views ### Publishing results obtained in Mathematica I've been using Mathematica to solve nonlinear partial differential equations for my doctoral research for the last 2 years or so. I am not an expert in Mathematica or mathematics and I am an engineer ... 327 views ### Edge problems in a directed graph I want to create the following two graphs. So far I tried the following Code ... 11k views ### How to make an inkblot? How to effectively create a polygon that looks like a realistic inkblot? So far, I could come up with this (borrowing from Ed Pegg Jr.'s Rorschach demonstration): ... 173 views ### Calling a function an unspecified number of times I would like to be able to call a function an unspecified number of times. That is, I would like the generalization of something like: ... 245 views ### MapThread on a nested Map A simple problem I am facing is here: ... 243 views ### Easy Way to Create Trellis Plots in Mathematica I'm trying to do multivariate statistical analysis on a data set and I'd like to quickly visualize my information first using Trellis-like plots. For example, I'd like to create scatterplots for body ... 680 views ### Height-dependent filling color in 3D Data Plots I would like to have the filling of my ListPlot3D display the same color than the one applied to the data. This question makes sense only in the case of conditional coloring of data (according to ... 512 views ### Avoiding an unresponsive user interface in OS X I have found that despite Mathematica's numerous updates, each of which have added much functionality, one fundamental issue remains unaddressed: The unresponsiveness of the UI when I make a mistake ... 3k views ### What is so special about Prime? When we try to evaluate Prime on big numbers (e.g. 10^13) we encounter the following issue : ... 1k views ### Composition: how to make a day and night world map? Given the following world images: ... 270 views ### Path queries for tree-structured data Can anyone suggest documentation or tutorials for developing path queries and indices for (XML-like) tree-structured data? Suppose data is organized hierarchically in key->value pairs, eg: ... 255 views ### Manipulate Evaluation Order Problem I seem to be getting some unintended results from a nested Manipulate that I have not been able to resolve. I boiled down the problem I'm having to a simplified ... 497 views ### Plotting the open ball for the post office metric space The post office metric space, $P$ has the distance function defined as follows: d_P (\mathbf{x},\mathbf{y}) := \begin{cases} 0 & \mathbf{x} = \mathbf{y}\\ \Vert \mathbf{x}\Vert_2+\Vert ... 549 views ### Handling failed FindRoot calls I want to handle FindRoot calls which did not converge (e.g "thrown" error message FindRoot::cvmit) ... 432 views ### Change the color of a Locator in a Manipulate How can I change the color of a Locator in a Manipulate? As an example, consider the following. ... 309 views ### variable sized lists and using lists as variables I am trying to scan a parameter space of varying numbers of parameters subject to some constraints (I am interested in any number of constraints just out of curiosity, but in reality no more than 2 ... 228 views ### Validating simplifications analytically I have a rather complex expression which I would like to simplify and check my work along the way (Mathematica does not simplify very basic things and it is frustrating me). In the following example, ... 3k views ### Extract real part of a complex expression better than Re does I have a complex expression with real positive variables only. Mathematica Input Style: ...
{}
## anonymous one year ago how do i solve this problem on a calculator? : sin^2(pi/3) - cos^2(pi/3) 1. UnkleRhaukus ( sin π/3 ) ^2 - ( cos π/3 ) ^2 = 2. anonymous you do not need a calulator for this do you know what $$\sin(\frac{\pi}{3})$$ is?
{}
Need help with math problems Percent math problems with detailed solutions. Problems that deal with percentage increase and decrease as well as problems of percent of quantities. Mixture problems involving percentages as well as percentage of areas are included. The Math extension (or, more broadly, the rendering of the $tag\right) has a long history. It has been modified by a number of different people with different goals, different ideas, and different programming styles, using different\dots$Astro Math - Slunečnice.cz Astro Math 1.0.2 download - Your spaceship got sucked into a wormhole and you ended up far away from Earth. Now you must make your way back through 32… WebMath - Solve Your Math Problem WebMath is designed to help you solve your math problems. Composed of forms to fill-in and then returns analysis of a problem and, when possible, provides a step-by-step solution. Covers arithmetic, algebra, geometry, calculus and statistics. For the first problem, I strongly suggest not plugging in answer choices for this one - a little know-how about linear systems will save you lots of time.Your first problem above can be done very quickly. Since we multiply 2 by 2 to get to 4, we multiply -5 by 2 to get -10, choice (A). Do My Math Homework For Me (Negotiable Price - A or B) Need assistance with your math homework? Check out The Princeton Review for online math tutors who are available 24/7. Try a session for free today. Science Problems Help | Solutions to Physics and Math Problems This printable includes eight math word problems that will seem quite wordy to second-graders but are actually quite simple. The problems on this worksheet include word problems phrased as questions, such as: "On Wednesday you saw 12 robins on one tree and 7 on another tree. Need help with a GRE Problem? July 5, 2019 (Many "math… July 5, 2019 (Many "math" problems are not really math problems).Solving 'advanced' GRE quant problems (they're not as hard as they look) - Part 4 - Продолжительность: 11:26 Greg Mat 6 067 просмотров. WebMath - Solve Your Math Problem WebMath is designed to help you solve your math problems. Composed of forms to fill-in and then returns analysis of a problem and, when possible, provides a step-by-stepYou'll find hundreds of instant-answer, self-help, math solvers, ready to provide you with instant help on your math problem. Help with Math Problems | Math Homework Questions | Free… Math Word Problems (solutions, examples, videos, diagrams) College Algebra - Math is Fun OK. So what are you going to learn here? You will learn about Numbers, Polynomials, Inequalities, Sequences and Sums, many types of Functions, and how to solve them. You will also gain a deeper insight into Mathematics, get to practice using your new skills with lots of examples and questions, and ... Percent Maths Problems - analyzemath.com Percent math problems with detailed solutions. Problems that deal with percentage increase and decrease as well as problems of percent of quantities. Mixture problems involving percentages as well as percentage of areas are included. FiniteHelp - The fastest way to learn finite math No need to worry if you miss class or just need some extra help. Don't waste money on over priced private tutors For less than the price of a single session with a private tutor, you can have access to our entire library of videos. Word Problems On Number Operations - CBSE Mathematics Math Help: Learning How to Solve Probability Problems for…
{}
# Conflict between predicated outcomes in logistic regression I was using caret in R to use logistic regression to make prediction. I only have one predictor named OEI and the outcome variable is pass/fail. However, although I was able to perform that task and get confusion matrix, etc, I was trapped in a conceptual question about the transformation of logistic regression TO convert everything between probabilities between 0-1. When I print out the predicted probabilities for the 'pass' cases, there are obviously probabilities higher/ lower than 0.5 which were further translated into pass/fail based on 0.5 cutoff However, as I was trying to plot the relationship between the only predictor (OEI) and its associated probabilities of 'pass', all the predicted probabilities are above 0.5 which means all the data should result in a 'pass' outcome, but this is obviously not true based on the result from the previous step. Here is my R code to plot the the relationship above: • Classification probability threshold is not an exact duplicate, but points you in the right direction. Don't discretize your perfectly usable probabilistic prediction using a threshold. – Stephan Kolassa Apr 19 at 20:42 • @StephanKolassa, thank you. But my question is not about threshold but more about the fact that, based on the same set of predictor values, why the probabilities for pass differ in the 1st and 2nd plot. In the 2nd plot, regardless of OEI values, there are only pass result. While in the 1st plot you can see different OEI values lead to probabilities for pass/fail. – Edward Lin Apr 19 at 20:57 • Double-check your formula for a_logits. The intercept is 0.01 and all values in X1_range are positive, so all logits are necessarily > 0 and thus your probabilities are necessarily all > 0.5. Make sure that the intercept is the one returned by your model; that doesn't seem likely given the predicted values in the first figure. – EdM Apr 19 at 21:00 • @EdM, thanks, that's what I thought, too. But the intercept is indeed 0.01 in my fitted model. I did a few things such as cross validation and upsampling, but I don't think that is why. Do you have other ideas? – Edward Lin Apr 19 at 21:11 • [UPDATE] @EdMI also scale and centered the variable when I pre-processing rhe data, without that =, the intercept will be -1. and coefficient of OEI IS 0.27, will that be why? – Edward Lin Apr 19 at 21:36 Although pre-processing data by centering and scaling is important for some approaches, it's not necessary for simple logistic regressions like this. As you note in a comment, without centering and scaling you get an intercept of -1 and a coefficient of 0.27 for your predictor OEI. With an intercept below 0 and a positive coefficient for this necessarily positive predictor, you now will have predicted probabilities both above and below 0.5 (logit of 0) if the OEI values extend both above and below a cutoff of about 3.7. If you used those coefficients for producing your plot I suspect that you would reproduce the values returned by predict() on your model and data, as shown in your table. But in any case of centering and scaling you need to keep track of whether the reported coefficients represent the data in the original or in the transformed scales. Some software will by default scale all predictors in ridge or LASSO modeling but then adjust the reported coefficients back to the original data scales. I don't know how the caret package handles such situations. If you do the centering and scaling yourself you need to keep track yourself. I suspect that in your case the coefficients you used to produce your graph were for the centered and scaled values, but you then tried to use them with the original OEI values.
{}
# Description In this paper, the authors propose a method to measure biometrics from fetal ultrasound (US), in particular the abdominal circumference (AC) extracted from an ellipse. To do so, they propose a simple CNN architecture that resemble an FCNN but at the same time regresses the parameters of the ellipse. C.f. Fig.1 for more details. # Implementation details Since their network aims to segment the image AND recover the parameters of the ellipse, their method minimizes the following multi-task loss where $$L_s$$ is the usual crossentropy: and $$L_r$$ is an L2 loss of the ellipse parameters: where $$\sigma$$ is a 5-D vector contianing the ellipse parameters $$(a,b,c_x,c_y,\phi)$$ In order to further improve results, they implemented a cascaded network. Instead of implementing a usual cascade (c.f. Fig.2(a)), they concatenate the output of the first network to a rotated version of the input image (c.f. Fig.2(b)). # Results Their method beats results from FCN and cascaded FCN while being within the inter-obs variation.
{}
# Do gravitational waves create 'drag' in space? I was thinking about the way you measure gravitational waves. What you do is you measure how they affect a bunch of particles. Affecting something means transferring some mass/energy to it, according to my layman knowledge. So let's do a thought experiment. You have a massive spaceship, without any thrust. It flies through almost completely homogeneous universe, gravity wise. Without gravitational waves, this is easy. Gravity pulls in all directions, so it cancels and you fly forever. With gravitational waves. Hmm, imagine that on every planet there's a device measuring gravitational waves. Of course, everything is a device measuring gravitational waves, but actual devices help imagination here. So your movement affects all those devices by transferring tiny amounts of energy to them. That energy sure doesn't pop out of thin air - or vacuum - so you're going to miss it. My conclusion is that for every tiny amount of energy, your relative velocity to the measuring device should decrease. Is that correct or not and why? Note that I appreciate there's certainly more to it - eg. decrease in your velocity should happen with light-speed delay. Hard to wrap my mind around that. • Question: do you understand the mathematics behind General Relativity? If you do, everything that there is to say is expressed in terms of the "gravitational wave" solutions of the linearized Einstein's equations. If you don't, I am sorry to say that understanding complicated physical phenomena in terms of analogies is sometimes a bad idea. Math is much more efficient as it is compact, precise and unambiguous. Please don't take this personally, it is what I would've said to anybody. – Prof. Legolasov Dec 5 '16 at 14:32 • @SolenodonParadoxus I don't take this personally, but maybe then you're not the right person to answer questions by laymen tagged [thought-experiment]. Besides, according to what you said this question would never exist if I did understand the mathematics behind it, so it's pointless to wonder that I don't. – Tomáš Zato - Reinstate Monica Dec 5 '16 at 14:37 • Well, its not like I voted to close (I didn't even downvote your question). Maybe somebody else will answer it. What I'm saying is: there is a better way of understanding these phenomena, which is: studying General Relativity. Analogies can only take you this far... Anyways, good luck with your question. – Prof. Legolasov Dec 5 '16 at 14:39 • @SolenodonParadoxus I know that what you say is true and I really did not take any offense. But it will take me a while before touching any complex physics math again. – Tomáš Zato - Reinstate Monica Dec 5 '16 at 14:44 I think I understand your question, but just to be sure: you're saying that the gravitational waves produced by this spaceship will lead to it slowing down. The first problem is that a normal spaceship with a constant velocity, or even acceleration won't produce gravitational waves (GW) because GW require an accelerating quadrupole moment. You're correct that an object producing GW ends up transferring energy to the detectors... this was actually one of the deciding arguments that GW were a real, observable phenomenon. GW carry energy, which is extracted from the system producing it. In the case of a binary, the energy comes from the orbit, which causes the binary to tighten and eventually coalesce. This isn't quite analogous to 'drag', per se, because 'drag' specifically refers to dissipative interaction with the background medium. When an electron accelerates, it produces EM waves, causing it to lose energy... but I don't think we would via that as 'drag' in any way. But, I think that in a spacetime with gravitational waves propagating in the $+\hat{z}$ direction, and a spaceship traveling in the $-\hat{z}$ direction... the spaceship would experience an (outrageously, likely-never-detectable-by-even-future-technology) deceleration force from non-linear coupling with the GW. That sounds more like a drag force. • So if a heavy object passes near you, you can't detect it by it's gravitational waves as long as it takes straight path (except it's in in curved space)? – Tomáš Zato - Reinstate Monica Dec 5 '16 at 15:17 • @TomášZato it's even a little harder than that! A spherically symmetric ship, even if it's accelerating (i.e. curved path, or changing speed) still won't radiate. You also need an asymmetric mass distribution... but I guess even an oblong/cylindrical spaceship would be fine... I think. – DilithiumMatrix Dec 5 '16 at 16:34 Do gravitational waves create 'drag' in space? No. I was thinking about the way you measure gravitational waves. What you do is you measure how they affect a bunch of particles. Affecting something means transferring some mass/energy to it, according to my layman knowledge. It isn't quite true. If you were sitting in a spaceship and a gravitational wave passed through, it would make you and your clocks go slower for a little while. You might notice that some distant pulsar speeded up a little. But soon the gravitational would have moved on, without losing any energy. In theory you could detect a massive dark object passing by behind you via the same method. So let's do a thought experiment. You have a massive spaceship, without any thrust. It flies through almost completely homogeneous universe, gravity wise. Without gravitational waves, this is easy. Gravity pulls in all directions, so it cancels and you fly forever. No problem. With gravitational waves. Hmm, imagine that on every planet there's a device measuring gravitational waves. Of course, everything is a device measuring gravitational waves, but actual devices help imagination here. So your movement affects all those devices by transferring tiny amounts of energy to them. It doesn't actually transfer energy to them. You could say some of the gravitational wave energy is device energy as the gravitational wave passes through the device. But the gravitational wave soon passes through taking that energy away. That energy sure doesn't pop out of thin air - or vacuum - so you're going to miss it. My conclusion is that for every tiny amount of energy, your relative velocity to the measuring device should decrease. Is that correct or not and why? It isn't correct I'm afraid. As the gravitational wave passes through, the velocity of the things inside clocks etc is reduced. Those clocks run slower so they measure other things like distant pulsars to be going faster. You run slower too, along with everything else. We call it time dilation. Note that I appreciate there's certainly more to it - eg. decrease in your velocity should happen with light-speed delay. Hard to wrap my mind around that. What's easy to wrap your head round is that the speed of light reduces as the gravitational wave passes through. See the second paragraph here where Einstein says the speed of light is spatially variable in a gravitational field. For a gravitational wave as opposed to a field, the speed of light is temporally variable. But because everything slows down, you can't measure any local difference. This Baez article is worth a read: Is The Speed of Light Everywhere the Same?
{}
# Square Roots of Decimals: Consider sqrt 17.64 Step 1:  To find the square root of a decimal number we put bars on the integral part (i.e., 17) of the number in the usual manner. And place bars on the decimal part (i.e., 64) on every pair of digits beginning with the first decimal place. Proceed as usual. We get bar 17.bar 64. Step 2: Now proceed in a similar manner. The leftmost bar is on 17 and 42 < 17 < 52. Take this number as the divisor and the number under the leftmost bar as the dividend, i.e., 17. Divide and get the remainder. Step 3: The remainder is 1. Write the number under the next bar (i.e., 64) to the right of this remainder, to get 164. Step 4: Double the divisor and enter it with a blank on its right. Since 64 is the decimal part so put a decimal point in the quotient. Step 5: We know 82 × 2 = 164, therefore, the new digit is 2. Divide and get the remainder. Step 6: Since the remainder is 0 and no bar left, therefore bar(17).bar(64) = 4.2. If you would like to contribute notes or other learning material, please submit them using the button below. ### Shaalaa.com Finding Square Roots of Decimals by Long Division Method [00:15:34] S 0%
{}
# Math Help - Buoyancy Force Question 1. ## Buoyancy Force Question Hi, I am looking to investigate the buoyancy force of a cuboid. The dimensions of the cuboid are 30cm x 20cm x 15 cm. When the cuboid is placed in the water (the face 30cm x 20cm faces the water), 4cm of the cuboid will be submerged. If I push the cuboid so that the top face is 15mm below the surface of the water, I will have pushed the cuboid (15 - 4) + 1.5 = 12.5 cm. Therefore, using some notes that I have I have worked out the buoyancy force to be: F = p*A*y*g where p = pressure of the water (998.2 kg/m^3, pressure of tap water at 20oC), A is the cross sectional area to face the water (0.06 m), y is the overall displacement (0.125 m) and g = 9.8m/s, acceleration due to gravity. If I work this out, I get: 73.37 Newtons. First things first, is this correct? Please note that this is NOT homework, just a problem that I am looking into. I have looked at some notes where I was placing a bottle filled with sand in water, and then pushing it to a new depth. We used F = p*A*y*g to work out the buoyancy force in that case, however I dont believe that we considered the case if the bottle was pushed so far into the water that it was fully submerged (as in this case). Therefore, I think that the equation is correct so far as I push the cuboid down to a depth of 11cm so that the top face of the cuboid will be level with the top of the water, however I am unsure if I can carry the above reasoning through to when the object is fully submerged in water. I then want to consider pushing the cuboid so that the top face is 15cm and then 30cm deep, if I know that I can apply the above principle at 15mm then I know that I can extend it to these two cases. What I am really confused with (the reason i dont think that this works past 11cm) is that Archimedes said the Buoyancy Force is equal to the mass of the water displaced. However, when fully submerged, surely there wont be any more water displaced if we push the cuboid deeper and deeper?!? Any help on this would be greatly appreciated! Si 2. [quote=Abelian;206947]Hi, I am looking to investigate the buoyancy force of a cuboid. The dimensions of the cuboid are 30cm x 20cm x 15 cm. When the cuboid is placed in the water (the face 30cm x 20cm faces the water), 4cm of the cuboid will be submerged. If I push the cuboid so that the top face is 15mm below the surface of the water, I will have pushed the cuboid (15 - 4) + 1.5 = 12.5 cm. Therefore, using some notes that I have I have worked out the buoyancy force to be: F = p*A*y*g where p = pressure of the water (998.2 kg/m^3, pressure of tap water at 20oC), A is the cross sectional area to face the water (0.06 m), y is the overall displacement (0.125 m) and g = 9.8m/s, acceleration due to gravity. If I work this out, I get: 73.37 Newtons. [\quot] What you have written as p is usualy $\rho$ and is the density of the water (as you should be able to see from the units) The boyancy force is equal (and opposite if we are being carefull about direction) to the weight of the displaced water. In this case the cuboid is completley submerged so the boyancy force is: $F=\rho A h = \rho Vg$ where $h$ is the dimension of the cuboid normal to a face of area $A$. What you had is only applicable if the cuboid is not completly submerged. What I am really confused with (the reason i dont think that this works past 11cm) is that Archimedes said the Buoyancy Force is equal to the mass of the water displaced. However, when fully submerged, surely there wont be any more water displaced if we push the cuboid deeper and deeper?!? That is not what Archimedes principle is, it is that the boyancy force is equal to the weight of the fluid displaced. CB
{}
# If 3^m = 2 and 4^n =27, show by laws of indices that m xx n =3/2?? Feb 22, 2018 see a solution process below; #### Explanation: ${3}^{m} = 2 \mathmr{and} {4}^{n} = 27$ Note, we can also use Law or Logarithm to solve this; ${3}^{m} = 2$ Log both sides.. $\log {3}^{m} = \log 2$ $m \log 3 = \log 2$ $m = \log \frac{2}{\log} 3$ similarly.. ${4}^{n} = 27$ Log both sides.. $\log {4}^{n} = \log 27$ $n \log 4 = \log 27$ $n = \log \frac{27}{\log} 4$ $n = \log {3}^{3} / \log {2}^{2}$ $n = \frac{3 \log 3}{2 \log 2}$ Hence; $m \times n \Rightarrow \log \frac{2}{\log} 3 \times \frac{3 \log 3}{2 \log 2}$ $m \times n \Rightarrow \cancel{\log} \frac{2}{\cancel{\log}} 3 \times \frac{3 \cancel{\log} 3}{2 \cancel{\log} 2}$ $m \times n \Rightarrow \frac{3}{2}$ As required! Feb 22, 2018 $\text{see explanation}$ #### Explanation: ${4}^{n} = {\left({2}^{2}\right)}^{n} = {\left(2\right)}^{2 n} = 27 = {3}^{3} \leftarrow \text{from } {4}^{n} = 27$ $\text{substitute } 2 = {3}^{m}$ $\Rightarrow {\left({3}^{m}\right)}^{2 n} = {3}^{3}$ $\Rightarrow {3}^{2 m n} = {3}^{3}$ $\text{since bases on both sides are 3, equate the exponents}$ $\Rightarrow 2 m n = 3$ $\Rightarrow m n = \frac{3}{2}$ Feb 22, 2018 See below. #### Explanation: We have ${4}^{n} = 27$ We can write that as: ${\left({2}^{2}\right)}^{n} = {3}^{3}$ ${2}^{2 n} = {3}^{3}$ Since ${3}^{m} = 2$, we can input: ${\left({3}^{m}\right)}^{2 n} = {3}^{3}$ ${3}^{2 m n} = {3}^{3}$ Since the bases are equal, the exponents are equal too. $2 m n = 3$ $m n = \frac{3}{2}$ Proved.
{}
# Recent results and prospects from the NA62 experiment at CERN Particle Physics Theory seminar #### Recent results and prospects from the NA62 experiment at CERN • Event time: 2:00pm • Event date: 25th October 2017 • Speaker: Cristina Lazzeroni (Birmingham University) • Location: Higgs Centre Seminar Room, Room 4305, ### Event details $K \to \pi \nu\nu$ is one of the theoretically cleanest meson decay where to look for indirect effects of new physics complementary to LHC searches. The NA62 experiment at CERN SPS is designed to measure the branching ratio of the $K^+\to \pi^+\nu\nu$ decay with 10% precision. NA62 took data in 2015 and 2016 reaching the Standard Model sensitivity. Recent results and prospects will be presented. Besides, recently produced upper limits on the rate of the charged kaon decay into a lepton and a heavy neutral lepton (HNL) will also be shown. ### About Particle Physics Theory seminars The Particle Physics Theory seminar is a weekly series of talks reflecting the diverse interests of the group. Topics include analytic and numerical calculations based on the Standard Model of elementary particle physics, theories exploring new physics, as well as more formal developments in gauge theories and gravity..
{}
## C Specification The VkImageMemoryRequirementsInfo2 structure is defined as: // Provided by VK_VERSION_1_1 typedef struct VkImageMemoryRequirementsInfo2 { VkStructureType sType; const void* pNext; VkImage image; } VkImageMemoryRequirementsInfo2; or the equivalent // Provided by VK_KHR_get_memory_requirements2 typedef VkImageMemoryRequirementsInfo2 VkImageMemoryRequirementsInfo2KHR; ## Members • sType is the type of this structure. • pNext is NULL or a pointer to a structure extending this structure. • image is the image to query. ## Description Valid Usage • VUID-VkImageMemoryRequirementsInfo2-image-01589 If image was created with a multi-planar format and the VK_IMAGE_CREATE_DISJOINT_BIT flag, there must be a VkImagePlaneMemoryRequirementsInfo included in the pNext chain of the VkImageMemoryRequirementsInfo2 structure • VUID-VkImageMemoryRequirementsInfo2-image-02279 If image was created with VK_IMAGE_CREATE_DISJOINT_BIT and with VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT, then there must be a VkImagePlaneMemoryRequirementsInfo included in the pNext chain of the VkImageMemoryRequirementsInfo2 structure • VUID-VkImageMemoryRequirementsInfo2-image-01590 If image was not created with the VK_IMAGE_CREATE_DISJOINT_BIT flag, there must not be a VkImagePlaneMemoryRequirementsInfo included in the pNext chain of the VkImageMemoryRequirementsInfo2 structure • VUID-VkImageMemoryRequirementsInfo2-image-02280 If image was created with a single-plane format and with any tiling other than VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT, then there must not be a VkImagePlaneMemoryRequirementsInfo included in the pNext chain of the VkImageMemoryRequirementsInfo2 structure • VUID-VkImageMemoryRequirementsInfo2-image-01897 If image was created with the VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID external memory handle type, then image must be bound to memory Valid Usage (Implicit) • VUID-VkImageMemoryRequirementsInfo2-sType-sType sType must be VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2 • VUID-VkImageMemoryRequirementsInfo2-pNext-pNext pNext must be NULL or a pointer to a valid instance of VkImagePlaneMemoryRequirementsInfo • VUID-VkImageMemoryRequirementsInfo2-sType-unique The sType value of each struct in the pNext chain must be unique • VUID-VkImageMemoryRequirementsInfo2-image-parameter image must be a valid VkImage handle ## Document Notes This page is extracted from the Vulkan Specification. Fixes and changes should be made to the Specification, not directly. Copyright 2014-2022 The Khronos Group Inc.
{}
## anonymous one year ago Is there a way to tell if an equation is a cardioid or a limacon without graphing it? I'm supposed to tell what shape r=2+2sin theta is, but cardioid and limacon equations look the same. I know cardioids come from limacons, so maybe I should say that. 1. anonymous @Astrophysics if you aren't busy, can you help me? 2. anonymous @Ashleyisakitty @satellite73 @jim_thompson5910 @TheSmartOne @Nnesha @Loser66 @wio @sammixboo @kropot72 3. Astrophysics Not quite, you need to know what the equations look like, cardoids are as the following $r = a \pm a \cos \theta~~~\text{and}~~~r = a \pm a \sin \theta$ |dw:1434418408816:dw| see if you can figure out these graphs. And limacons are in the following form $r = b+a \cos \theta~~~\text{and}~~~r = b + a \sin \theta$ horizontal and vertical respectively. And they can be looped, if b<a, dimpled a<b<2a, and in a convex form if 2a less than equal to b. Notice that if a = b, then it's a cardioid. 4. anonymous Okay, thanks! 5. Astrophysics Np :) 6. Astrophysics Notice that, one of the graphs is your equation! 7. anonymous Yup!, the second one 8. Astrophysics Yup :P 9. anonymous :)
{}
Writing Some Special Matrices in LaTeX I've to write the following matrices in LaTex. Any help in this regard will be highly appreciated. - migrated from stats.stackexchange.comApr 12 '11 at 5:33 This question came from our site for people interested in statistics, machine learning, data analysis, data mining, and data visualization. Take a look at tex.stackexchange.com/questions/40/… and tex.stackexchange.com/questions/3409/… for some closely related questions that have already been answered here. –  Loop Space Apr 12 '11 at 8:30 Sometnig like $$\bordermatrix{&c_1&c_2&c_3\cr r_1&t_{11}&t_{12}&t_{13}\cr r_2&t_{21}&t_{22}&t_{23}\cr r_3&t_{31}&t_{32}&t_{33}\cr}$$ ? This will give you c's and r's outside the t_nn matrix. - thanks for your answer but it doesn't my purpose. I've also looked at tex.stackexchange.com/questions/40/…. But struggling to get it write. –  MYaseen208 Apr 14 '11 at 8:26
{}
mersenneforum.org 29 to 30 bit large prime SNFS crossover Register FAQ Search Today's Posts Mark Forums Read 2014-05-16, 03:30 #1 VBCurtis     "Curtis" Feb 2005 Riverside, CA 4,211 Posts 29 to 30 bit large prime SNFS crossover Conclusion: SNFS-213 is a bit faster using 30-bit large primes than 29. The default factmsieve cutoff at 225 is too high. As my SNFS tasks creep over 210 digit difficulty, I find that 29-bit large primes require more relations than ~200 difficulty projects; say, 46M raw relations instead of 41-43M. I decided to try running a pair of same-size factorizations with 29 and 30 bit large primes, to compare sieve time. 5*2^702-1: SNFS-213 diffficulty, 29-bit large primes, sextic poly E-score 4.47e-12. factmsieve runs with 300k blocks of special-q, and needed 46.8M raw relations, 40.6M unique to build a density-70 matrix of size 4.7M. Special-q from 8.9M to 29.3M were sieved. 13*2^702-1: SNFS-213 difficulty, 30-bit large primes, sextic poly score 4.27e-12. factmsieve built a matrix the first try with 76.5M raw relations, 69.2M unique to build a density-70 matrix of size 5.3M. Special-q from 8.9M to 27.5M were sieved. I'll edit my factmsieve to try building a matrix with fewer relations next time, which might save more time. Despite a poly score 5% higher, the first project had to sieve almost 10% more special-q blocks, which took about 5% longer in wall-clock time compared to the 30-bit project. However, the 30-bit project had a matrix 15% larger, making up that 5% savings in sieve time to solve the matrix. I don't know how to account for the different E-scores, but the better E-score took the same time as a 29-bit project as the lower one did with 30-bit primes. I already factored 13*2^707-1 with 29-bit primes; I'll do 13*2^706-1 and 5*2^706-1 as 30-bit projects to see if my results here are a fluke. Has anyone else compared 29 to 30 bits at SNFS difficulties under 220? Has someone done this for GNFS? Last fiddled with by VBCurtis on 2014-05-16 at 03:33 2014-05-16, 07:09 #2 fivemack (loop (#_fork))     Feb 2006 Cambridge, England 2·52·127 Posts I do this fairly constantly for GNFS as I work through aliquot sequences. It looks as if 28-29 is at about C148, 29-30 is at about C153, and 30-31 at about C158. But it's a reasonably subtle effect and there's about 10% noise in my measurements simply from sometimes sieving a bit too far. Here are some high tide marks Code: CPU-hours size lp 284.9 C139 29 288.5 C140 29 334.9 C142 29 342.6 C142 29 359.6 C143 29 422.0 C144 29 452.9 C145 29 548.4 C146 29 713.8 C148 30 762.8 C148 29 768.4 C149 30 823.6 C149 29 899.3 C150 30 1038.9 C150 30 1100.6 C152 30 1206.6 C153 30 1341.4 C154 30 2500.2 C157 30 2520.5 C158 31 2538.7 C159 31 3012.6 C161 31 4495.7 C162 31 4906.0 C163 31 11246 C169 30(3a)/15 12619 C170 32 19261 C172 31/15 Last fiddled with by fivemack on 2014-05-16 at 07:09 2014-05-16, 07:18 #3 fivemack (loop (#_fork))     Feb 2006 Cambridge, England 635010 Posts For SNFS, my data's not as good; I haven't done as many numbers, and I haven't been so careful in noting which computer I used for the sieving. But: Code: diff lp time/hrs 204.1 29 441.3 208.3 29 767.3 214.9 30 920.7 216.3 30 1031.2 222.0 30 2459.8 (sieved both A and R) 227.7 30 3494.9 233.2 31r30a 5623.6 248.9 31 18034.3 250.4 31/3r 25997.8 (15e) 285.1 33/3r 166625.8 (16e) so I think I concur that 29/30 is somewhere around 210 and 30/31 somewhere around 230, and everything gets a bit more confusing as you consider using 3 large primes, different LP bounds on the two sides, and the larger-range sievers at the 250-digit level. Last fiddled with by fivemack on 2014-05-16 at 07:19 2014-05-16, 13:07   #4 henryzz Just call me Henry "David" Sep 2007 Cambridge (GMT) 567710 Posts Quote: Originally Posted by fivemack It looks as if 28-29 is at about C148, 29-30 is at about C153, and 30-31 at about C158. But it's a reasonably subtle effect and there's about 10% noise in my measurements simply from sometimes sieving a bit too far. Here are some high tide marks Extrapolating from those numbers C163 31-32 and C168 32-33. Does this mean when we do a 180+ digit number our large prime bounds are much too small? 2014-05-16, 15:17 #5 fivemack (loop (#_fork))     Feb 2006 Cambridge, England 2×52×127 Posts I honestly don't know. I don't think the optimal large prime bound can possibly grow that fast - but it's quite possible that it does grow that fast if we assume use of the 14e siever, which obviously we're not using as the numbers get huge. 2014-05-16, 16:40 #6 Gimarel   Apr 2010 8916 Posts I use 29 at 120 digits GNFS. It's faster then 28 even with the increased filtering time. Code: lpbr: 29 lpba: 29 mfbr: 58 mfba: 58 alambda: 2.55 rlambda: 2.55 alim: 99500000 rlim: 1400000 I start sieving at 1000000 and aim for 18300000 raw relations. Thats enough to build a matrix at target_density 90. The resulting matrix has about 670000 dimensions. 2014-05-16, 22:04   #7 VBCurtis "Curtis" Feb 2005 Riverside, CA 4,211 Posts Quote: Originally Posted by fivemack I do this fairly constantly for GNFS as I work through aliquot sequences. It looks as if 28-29 is at about C148, 29-30 is at about C153, and 30-31 at about C158. Looks to me like your data indicates 29-30 at C148 and 30-31 at C158. There are no 28-bit projects listed in the data. That takes care of part of Henry's question! Thank you for the data and confirmation. 2014-05-18, 21:48 #8 fivemack (loop (#_fork))     Feb 2006 Cambridge, England 2×52×127 Posts Thanks for suggesting this: I have spent most of the weekend running through various parameter choices on C115, C125, C133, C136 that I had lying around, and am now reckoning that it's worth using significantly larger large-prime bounds than I'd previously considered - if you're careful about limits, 28-bit LP seems worthwhile as small as C115. 2014-05-19, 04:28 #9 VBCurtis     "Curtis" Feb 2005 Riverside, CA 4,211 Posts Does using larger LP bounds shift where the 13e to 14e crossover is? It should move a bit upward, right? I use the python script for my factoring, but am working on building a list of script edits to post here. Perhaps we can build a 2014 consensus for cutoffs for 28-29-30-31 LP bounds, and likewise the points to move to 13e-14e-15e sievers. I'll play with 13e vs 14e with these new LP bounds myself, but would appreciate if others post their findings also. 2014-05-19, 22:20 #10 fivemack (loop (#_fork))     Feb 2006 Cambridge, England 2×52×127 Posts Here, for a random C124 with Murphy score 1.77e-10, are some timings for different alim and different ranges. They're really not what I expected: I would have thought that sieving well beyond Q0 was a bad idea. Code: alim Q (with 28/12e) time rel uniq ideals matrix 2 2-7 101870.2275 11905078 10430364 14715990 fail 2-8 121324.1423 13759598 11864512 15749318 fail 2-9 140607.982 15538550 13213233 16632104 fail 2-10 159487.2929 17218204 14465025 17381742 fail 2-11 178047.7998 18849180 15663145 18044872 1051033 4 2-9 204011.4358 21658907 18547562 20149112 879637 2-8.5 189501.4654 20339028 17534519 19671448 975442 2-8 174845.7571 18978238 16478927 19137617 fail 3-9 177762.3 18430231 16338414 19131390 fail 6 3-9 216706.984 20554991 18272260 20173211 1018070 3-8.5 197834.5111 18989740 17003043 19563723 fail 8 3-8.5 220669.0495 19632466 17601209 19902924 1178651 3-8 198166.9867 17846588 too slow even if it worked Currently running similar sorts of things with lpbr=29 and with 13e to see how the numbers compare Last fiddled with by fivemack on 2014-05-19 at 22:27 2014-05-23, 21:51 #11 fivemack (loop (#_fork))     Feb 2006 Cambridge, England 2·52·127 Posts I can get the sieving time down to 166k seconds with 13e, 29-bit large primes, alim=4000000, sieve 1.5M-4M for 27.9M relations. Similar Threads Thread Thread Starter Forum Replies Last Post bsquared Factoring 24 2016-01-25 05:09 ryanp Factoring 6 2013-07-19 17:23 ryanp Factoring 69 2013-04-30 00:28 Sam Kennedy Factoring 9 2012-12-18 17:30 fivemack Factoring 7 2009-04-21 07:59 All times are UTC. The time now is 00:01. Mon Jul 6 00:01:26 UTC 2020 up 102 days, 21:34, 0 users, load averages: 2.21, 1.78, 1.69
{}
Students of Jeremy L. Martin Current Students Ken is a PhD student currently studying generalizations of the chip-firing model and connections to toric varieties. Bennet is a PhD student studying structural problems on simplicial complexes. He, Art Duval, Caroline Klivans and I constructed a nonpartitionable Cohen-Macaulay simplicial complex, disproving a long-standing conjecture of Richard Stanley. Bennet is currently working on the balanced case of the partitionability conjecture, which remains open. Alumni/ae Joseph Cummings (B.S. with honors, 2016) Joseph's honors project was on the Athanasiadis-Linusson bijection between parking functions and Shi arrangement regions. Robert Winslow (B.S. with honors, 2016) Robert's honors project was about matroids and combinatorial rigidity theory. Alex Lazar (M.A., 2014) Thesis title: Tropical simplicial complexes and the tropical Picard group Alex studied tropical simplicial complexes, which were introduced by Dustin Cartwright in this paper. Alex proved a conjecture of Cartwright concerning tropical Picard groups (which somewhat resemble critical groups of cell complexes). Here is a Sage worksheet Alex developed in the course of his research. Keeler Russell (Undergraduate Honors Research Project, 2012-2013) Keeler studied a difficult problem proposed by Stanley: do there exist two nonisomorphic trees with the same chromatic symmetric function? Li-Yang Tan had previously ruled out a counterexample on $$n\leq 23$$ vertices, using a brute-force search. Keeler developed parallelized C++ code to perform another brute-force search that ruled out a counterexample for $$n\leq 25$$, thus reproducing and extending Tan's results. On the KU Mathematics Department's high-performance computing system, the $$n=25$$ case (about 100 million trees) took about 90 minutes using 30 cores in parallel. Keeler's fully documented code (in C++) is freely available from GitHub or from my website. Brandon Humpert (Ph.D., 2011) Dissertation title: Polynomials associated with graph coloring and orientations Brandon first invented a neat quasisymmetric analogue of Stanley's chromatic symmetric function. This project morphed into a study of the incidence Hopf algebra of graphs; Schmitt had given a general formula for the antipode on an incidence Hopf algebra, but Brandon came up with a much more efficient (i.e., cancellation-free) formula for this particular Hopf algebra, which became the core result of this joint paper. Tom Enkosky (Ph.D., 2011) Dissertation title: Enumerative and algebraic aspects of slope varieties Tom tackled the problem of extending my theory of graph varieties to higher dimemsion. Briefly, fix a graph $$G=(V,E)$$ and consider the variety $$X^d(G)$$ of all "embeddings" of $$G$$ in $$\mathbb{C}\mathbb{P}^d$$ - i.e., arrangements of points and lines that correspond to the vertices and edges of $$G$$ and satisfy containment conditions corresponding to incidence in $$G$$ - how does the combinatorial structure of $$G$$ control the geometry of this variety? In a joint paper, Tom and I figured out some answers to the question, including the component structure of $$X^d(G)$$. Separately, Tom proved a striking enumerative result about the numner of pictures of the complete graph over the finite field of order 2. Jonathan Hemphill (M.A., 2011) Thesis title: Algorithms for determining single-source, single-destination optimal paths on directed weighted graphs Jenny Buontempo (M.A., 2008) Thesis title: Matroid theory and the Tutte polynomial Last updated Tue 6/7/16
{}
## Triangulations of hyperbolic 3-manifolds admitting strict angle structures.(English)Zbl 1262.57018 It is conjectured that every cusped hyperbolic $$3$$-manifold admits a geometric triangulation, that is, each tetrahedron of the triangulation is a positive volume ideal hyperbolic tetrahedron. A necessary condition of a topological ideal triangulation being geometric is that the triangulation is angled, i.e., admits strict angle structures. In this interesting paper, the authors construct angled triangulations of cusped hyperbolic $$3$$-manifolds, under the assumption that if the manifold is homeomorphic to the interior of a compact $$3$$-manifold $$\overline M$$ with torus or Klein bottle boundary components, then $$H_1(\overline M;\mathbb Z_2)\rightarrow H_1(\overline M,\partial\overline M;\mathbb Z_2)$$ is the zero map. As a consequence, each hyperbolic link complement in $$S^3$$ admits angled triangulations. The triangulations are obtained by carefully subdividing Epstein-Penner’s polyhedral decomposition into tetrahedra and inserting flat tetrahedra. To show that the constructed triangulations are angled, a result of Kang-Rubinstein and Luo-Tillmann relating the existence of strict angle structures and the non-existence of certain vertical normal surface classes is used; and the homological assumption on the manifold rules out the existence of those vertical normal surface classes. The authors also explain that the angled triangulations they construct are in general not geometric. ### MSC: 57M50 General geometric structures on low-dimensional manifolds SnapPea; Snap Full Text:
{}
This is the master page for Notes on Machine Learning posts, in which I summarize in a succinct and straighforward fashion what I learn from Machine Learning course by Mathematical Monk, along with my own thoughts and related resources. Acronyms • MM: Mathematical Monk • ML: Machine Learning • SL: Supervised Learning • UL: Unsupervised Learning • PSD: Positive Semi-Definite • MCTC: Markov Chain Monte Carlo • To understand bias-variance “trade-off”, take a quick route: (11.5) $\leadsto$ (11.1) (11.2) (11.3) (11.4) $\leadsto$ (11.1) (11.2)
{}
# Discrete Math $$\overline{\bar{A}} = A$$ $$\overline{A ∪ B} = \bar{A} ∩ \bar{B}$$ $$\overline{A ∩ B} = \bar{A} ∪ \bar{B}$$ Let $f(x) = a_nx^n + a_{(n-1)}x^{(n-1)} + ... + a_1x + a_0.$ Then $f(x) = O(x^n).$ $${n \choose r}$$ $$\lceil N/k \rceil$$ $$P(n,r) = \frac{n!}{(n - r)!}$$ $$C(n,r) = \frac{n!}{r!(n - r)!}$$ $$(x + y)^n = \sum_{k=0}^{n} C(n,k)x^{n-k}y^k$$ $$n^r$$ $$\sum_{k=0}^{n} C(n,k) = 2^n$$ $$C(m + n,r) = \sum_{k=0}^{n} C(m,r - k)C(n,k)$$ $$\sum_{k=0}^{n} (-1)^k C(n,k) = 0$$ $$a_n = c_1a_{n-1} + c_2a_{n-2} + \cdots + c_k a_{n-k}$$ $$a_n = r^n$$ $$a_n = c_1r^{n-1} + c_2r^{n-2} + \cdots + c_k r^{n-k}$$ $$r^k - c_1r^{k-1} - c_2r^{k-2} - \cdots - c_{k-1}r - c_k = 0$$ $$a_n = a_{n-1} + a_{n-2}^2 \textrm{ is not an LHRR because}$$ $$H_n = 2H_{n-1} + 1 \textrm{ is not an LHRR because}$$ $$B_n = nB_{n-1} \textrm{ is not an LHRR because}$$ $$\{a_n\} \textrm{ is a solution to an LHRR of degree } 2$$ $$a_n = c_1 a_{n-1} + c_2 a_{n-2}$$ $$r^2 - c_1r - c_2 = 0$$ $$a_n = \alpha_1 r_1^n + \alpha_2 r_2^2$$ $$r^2 - c_1r - c_2 = 0$$ $$a_n = \alpha_1 r_0^n + \alpha_2 nr_0^2$$ $$\{a_n\} \textrm{ is a solution to an LHRR of degree } k$$ $$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_k a_{n-k}$$ $$r^k - c_1r^{k-1} - \cdots - c_k = 0$$ $$a_n = \alpha_1 r_1^n + \alpha_2 r_2^2 + \cdots + \alpha_k r_k^n$$ $$n^m - C(n,1)(n - 1)^m + C(n,2)(n - 2)^m - \cdots + (-1)^{n-1}C(n,n - 1)^m$$ $$D_n = n!\left[1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots + (-1)^n\frac{1}{n!}\right]$$
{}
# Evaluation of a complete homogeneous symmetric polynomial related to Stirling number of 2nd kind It is well known that the complete homogeneous symmetric polynomial $$h_{n-k}(1,\,2,\,3, ...,\,k-1,\,k)$$ equals $$S(n,\,k)$$ the Stirling number of the second kind. [Wikipedia] During a research project I stumbled upon the following complete homogeneous symmetric polynomial: $$h_{n-k}(1,\,2,\,3, ...,\,k-1,\,n)$$. My question is: is the latter symmetric polynomial expressible in nice, simple and/or interpretable terms? Or is this too much to ask? If so, why? • How about computing a table for small n and k.and posting it here? – Dima Pasechnik Jan 29 at 20:08 Consider the fact that $$\prod_{i=1}^k \frac{1}{1-x_i t} = \sum_j h_j(x_1,\dots,x_k) t^j$$ Writing $$h_j = h_{j+k-k}$$ we get from your first fact: $$\prod_{i=1}^k \frac{1}{1- i t} = \sum_j S(j+k,k) t^j.$$ Now multiply this by $$1/(1-nt)$$. We get $$\sum_j h_j(1,2,\dots,k,n) t^j = \frac{1}{1-nt} \sum_l S(l+k,k) t^l.$$ Comparing the coefficient of $$t^j$$ on both sides we get $$h_j(1,2,\dots,k,n) = \sum_{l+m = j} n^m S(l+k,k).$$ Thus, in your notation, $$h_{n-k}(1,2,\dots,k-1,n) = \sum_{l+m = n-k} n^m S(l+k-1,k-1)$$
{}
# Does real*real*real... = imaginary? $x\cdot x\cdot x\cdot x\cdot x\ ...\ =\ i, x \in \mathbb{R}$ Please be advised as is pointed out below, the video was incorrect and this: $$x=e^{\frac{\pi}{2}} \Rightarrow x^{x^{x^{x^{...}}}} = i$$Is completely false! I recently watched the video by real^real^real^... = imaginary? by blackpenredpen and he shows that this is possible: $$x=e^{\frac{\pi}{2}} \Rightarrow x^{x^{x^{x^{...}}}} = i$$ This made me wonder if it is possible to find a similar real number for repeated multiplication rather than exponentiation? $$x\cdot x\cdot x\cdot x\cdot x\ ...\ =\ i, x \in \mathbb{R}$$ My initial thoughts were that repeated multiplication is just exponentiation so maybe we could look at the problem like this: $$\lim_{n\rightarrow \infty}x^n = i, x \in \mathbb{R}$$ So is this possible? If not it would be nice to see a proof. • Given that $i$ has a magnitude of $1$, we would have to conclude that $|x| = 1$. Any other value causes decay to zero or a blowup. But multiplication by any such $x$ represents a fixed rotation by some angle $\theta$. Then $x^n$ represents a rotation by $n\theta$. Sending $n$ to infinity, this clearly has no limit (unless $\theta = 0$ ,$x=1$), so no such $x$ can exist which satisfies your desired property. Jun 26, 2020 at 5:10 • If $x=e^{\frac{\pi}{2}}$ then $x^{x^{x^{...}}}$ is definitely NOT $i$. There are few mistakes in that clip. Jun 26, 2020 at 5:15 • Thanks @Hyperion that is a very intuitive answer! Jun 26, 2020 at 5:16 • Using the method shown in that YouTube video, you could show $$1 \cdot 1 \cdot 1 \cdots = a$$ for any complex number $a$ (since $1\cdot a = a$), because all it proposes (with some errors) is that $x^i=i$ is true for $x=e^{\pi/2}$. Jun 26, 2020 at 5:16 • Next, an interesting video that proves that $\sum _{n=1}^{\infty} = -\dfrac{1}{12}$ :-) Jun 26, 2020 at 5:32 $$x^{x^{x^{x^{...}}}}$$ is a so-called power-tower, or hyperpower function, also known as infinite tetration. It should be noted that the power tower only converges for $$x \in (\frac 1{e^e}, e^{\frac 1e})$$. To solve $$x^{x^{x^{x^{...}}}} = k$$, a non-rigorous "trick" is to write $$x^k = k \implies x = k^{\frac 1k}$$. But for $$x^{x^{x^{x^{...}}}} = k$$ to give the valid solution $$x = k^{\frac 1k}$$, you must have $$k < e$$ (based on the radius of convergence I gave above). So, the solution ($$x = \sqrt 2$$) is valid for $$k=2$$, but the solution $$x = 3^{\frac 13}$$ is not valid for $$k = 3$$ (as $$3 > e$$). Similarly, saying $$x^{x^{x^{x^{...}}}} = i \implies x = e^{\frac{\pi}2}$$ is simply nonsense. The video is wrong. An "infinite product" of reals (assuming convergence) has to be real. But it makes little sense to speak of $$x\cdot x\cdot x \dots$$ because that value is either $$0$$ for $$|x|<1$$, $$1$$ for $$x = 1$$ and undefined otherwise. • Wow I did not expect that... I see now someone has called him out in the comments. That is such a popular video too I wonder why he wouldn't take it down or at least leave a note :( Thanks good catch Jun 26, 2020 at 5:17 • To make the mistake in the clip a bit more obvious, maybe emphasize that the clip is argumenting that IF $x^{x^{x^{x^{...}}}}=i$ has a real solution then the solution must be $x=e^{\frac{\pi}{2}}$. But this doesn't necesarily mean this is a solution, one still needs to check that, and it is easy to argue that the LHS is actually $\infty$ in that case.... Ironically, if fixed, the argument in the clip actually shows that there is no real solution to that equation. Jun 26, 2020 at 5:20 • @PMaynard I can understand that. It after all looks natural to do the replacement. However, within the framework of mathematics, some intuitive things don't work, how much ever we'd like it to. One example is as you've seen above. Of course, the video would not be taken down : it is entertaining even if wrong! Jun 26, 2020 at 5:21 • @PMaynard I just "called him out" in the comments. :) He is an enthusiastic math poster, but lacks rigour in my opinion. Jun 26, 2020 at 5:24 • @PMaynard Popularity does not imply quality. Best witness is the famous "equation" $1+2+3+\cdots=-1/12$ which has caused immense damage confusing countless innocient youtube-watchers. Studying the wikipedia-articles or pdf-articles is much more helpful to learn mathematics than watching youtube-videos. Jan 20, 2021 at 15:05
{}
# Looking at the equation of a reaction, how do we know it is endotermic or exotermic For example, $MnCO_3 → MnO + CO_2$ is endotermic, and occurs at 473 K the polymerization of ethylene $H_2C=CH_2$ to $n(H_2C=CH_2)$ is exotermic I suspect it has something to do with the standard enthalpy of formation of the different terms of the equation (right side - left side = $\Delta H$ if negative, the reaction is exotermic and releases $\Delta H$ to the environment as heat, if positive, it is endotermic and needs $\Delta H$ to happen. Is this correct? If not, What am I missing?
{}
# Coupled oscillators - mode and mode co-ordinates 1. Apr 19, 2010 ### joriarty For this question I'm not going to introduce the particular problem I am working on, rather, I am merely wanting some explanation of a concept which I can't seem to find in any of my textbooks. I suspect the authors think it is just too obvious to bother explaining . I'm revising for a test and have the full worked solutions for this problem in front of me. I can follow the mathematics, but not the reasoning behind it. The question: My worked solutions now say: My question: What exactly are q1 and q2, and why should these be equal to $$\sqrt{\frac{m}{2}}\left( \psi_{2}+\psi_{1} \right)$$ etc? Why $$\sqrt{\frac{m}{2}}$$? Is there a more specific name for this law that I could look up? I hope my question is easily understandable! Thank you for your help. (note: for the sets of equations relating q1 and q2 to m and x, there should be a "≡" sign rather than an "=" sign - for some reason my TEX formatting comes out with "8801;" rather than a "≡" sign. Odd.) #### Attached Files: • ###### springdiagram.png File size: 1.5 KB Views: 59 Last edited: Apr 19, 2010 2. Apr 19, 2010 ### joriarty Formulae now fixed. I hope. Sorry if I confused anyone while I was editing things
{}
Unlimited Plugins, WordPress themes, videos & courses! Unlimited asset downloads! From \$16.50/m # Making a Theme With Bones: Finishing Off Difficulty:IntermediateLength:ShortLanguages: We are using a starter theme to build a new site. We're going to follow straight on from the previous tutorial in this series. So let's jump into it: ## Step 13: Setting Heading and Body Copy Fonts We will use two fonts from the Google Font library: Arvo and PT Sans. Put this code in the functions.php file. This code will pull in the CSS code which contains the font-face properties. Let's set Arvo for headings (base.less) and PT Sans for body copy. We can set the font with font-family. We also define serif and sans-serif which means we will get a default font if the custom font can't be loaded. ## Step 14:H1, footer, header The font size will be 4em. We have to modify the footer.php to add content and the base.less file for styles. Set the background (with background), bottom border (with border-bottom) and padding (top 10px, 0px for left and right, and 15px for bottom). For the header a noise gradient resized by 10 times will be good, saved as a file (kotkoda_header_bg.gif) and it has to be in the bones/library/images folder. The CSS code goes into the base.less file. The graphics will be repeated horizontally (repeat-x) and start in the top left (0 0). This is how it looks after modifying the footer. This is how it looks after adding the graphics. ## Step 15: Favicon and Page Title We can set a new 16x16 favicon in the header.php file. In the href part we set the path of the icon, get_template_directory_uri will give us the template's main directory URL. For the page title, replace the original code with this one and set the description in the admin interface. This PHP code will echo the blog's description field. It will look like this after adding the icon. The main navigation for pages will get a minimal style as well. With display set to inline the look will be horizontal and the left border will be white (border-left). The new look of the main menu. ## Step 17: Comment Styles Comments will get a simpler look. The styles named odd and even should be empty (or commented out) and the li element gets a border-left. The right date text link will be white as well, let's color it to @white. The reply button will get a new style too. We have to set the color, the background color and the opacities (these are deleted). The new look. ## Step 18: Comment Box and Button We don't need the border (border: 0) and the background should be @white (base.less). There are a lot of fancy styles we don't need (transition, rounded, gradient) so we have to change borders and backgrounds, and delete roundness and transition. This goes into our mixins.less file. The look after styling. ## Step 19: Info Box Let's change the info box background to light yellow, which is @alert-yellow (in mixins.less). We don't need a border, so set it to zero. ## Step 20: Theme Admin Screenshot The last step is to delete the default screenshot and replace with the Kotkoda one we made. ## Finished Here is how the theme looks finished in 600 pixels wide. In the next tutorials we will clean the theme from unnecessary parts then prepare it for submission to ThemeForest.
{}
# An update on differentiation theorems Malabika Pramanik and I have just uploaded to the arXiv the revised version of our paper on differentiation theorems. The new version is also available from my web page. Here’s what happened. In the first version, we proved our restricted maximal estimates (with the dilation parameter restricted to a single scale) for all $p>1$; unfortunately our scaling analysis worked only for $p\geq 2$, therefore our unrestricted maximal estimates and differentiation theorems were only valid in that range. However, just a few days after we posted the paper, Andreas Seeger sent us a “bootstrapping” scaling argument that works for $p$ between 1 and 2. With Andreas’s kind permission, this is now included in the new version. The updated maximal theorem is as follows. Theorem 1. There is a decreasing sequence of sets $S_k \subseteq [1,2]$ with the following properties: • each $S_k$ is a disjoint union of finitely many intervals, • $|S_k| \searrow 0$ as $k \rightarrow \infty$, • the densities $\phi_k=\mathbf 1_{S_k}/|S_k|$ converge to a weak limit $\mu$, • the maximal operators ${\mathcal M} f(x):=\sup_{t>0, k\geq 1} \frac{1}{|S_k|} \int_{S_k} |f(x+ty)|dy$ and ${\mathfrak M} f(x) = \sup_{t > 0} \int \left| f(x + ty) \right| d\mu(y)$ are bounded on $L^p({\mathbb R})$ for $p >1$. Our differentiation theorem has been adjusted accordingly. Theorem 2. Let $S_k$ and $\mu$ be given by Theorem 1. Then the family ${\cal S} =\{ rS_k:\ r>0, n=1,2,\dots \}$ differentiates $L^p( {\mathbb R})$ for all $p>1$, in the sense that for every $f \in L^p$ we have $\lim_{r\to 0} \sup_{n} \frac{ 1 }{ r|S_n| } \int_{ x+rS_n } f(y)dy = f(x)$ for a.e. $x\in {\mathbb R}.$ Furthermore, $\lim_{r\to 0} \int f(x+ry) d \mu (y) =f(x)$ for a.e. $x\in {\mathbb R}.$ What about $p=1$? I had the good luck of meeting David Preiss in Barcelona right after Malabika and I had finished the first version of the preprint. I explained our work; we also spent some time speculating on whether such results could be true in $L^1$. Next day, David sent me a short proof that our Theorem 2 cannot hold with $p=1$ for any singular measure $\mu$ supported away from 0. (The same goes for sequences of sets $S_k$ as above, by a slight modification of his argument.) We are grateful to David for letting us include his proof in the new version of our paper. We have also polished up the exposition, fixed up the typos and minor errors, etc. One other thing we have added (to the arXiv preprint – we are not including this in the version we are submitting for publication) is a short section on how to modify our construction of $S_k$ so that the limiting set $S$ would also be a Salem set. The argument is very similar to the construction in our earlier paper on arithmetic progressions, so we only sketch it very briefly. I’ll be on vacation throughout the rest of July. I’ll continue to show up here on this blog – I might actually write here more often – and I’ll finish up a couple of minor commitments, but you should not expect any more serious mathematics from me in the next few weeks.
{}
# zbMATH — the first resource for mathematics ## International Journal of Combinatorics Short Title: Int. J. Comb. Publisher: Hindawi, New York, NY ISSN: 1687-9163; 1687-9171/e Online: http://www.hindawi.com/journals/ijct/contentshttp://www.emis.de/journals/HOA/IJCT/index.html Comments: No longer indexed; The journal is no longer published, last volume: Vol. 2016. The coverage of this journal is based on the electronic edition, bibliographic data of the print version may differ. This journal is available open access. Documents Indexed: 88 Publications (2009–2016) References Indexed: 88 Publications with 1,341 References. all top 5 #### Latest Issues 2016 (2016) 2015 (2015) 2014 (2014) 2013 (2013) 2012 (2012) 2011 (2011) 2010 (2010) 2009 (2009) all top 5 all top 5 #### Fields 70 Combinatorics (05-XX) 14 Number theory (11-XX) 6 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 5 Operations research, mathematical programming (90-XX) 3 Linear and multilinear algebra; matrix theory (15-XX) 3 Group theory and generalizations (20-XX) 3 Special functions (33-XX) 3 Geometry (51-XX) 3 Computer science (68-XX) 2 Commutative algebra (13-XX) 2 Algebraic geometry (14-XX) 2 Probability theory and stochastic processes (60-XX) 2 Biology and other natural sciences (92-XX) 2 Information and communication theory, circuits (94-XX) 1 Mathematical logic and foundations (03-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 General topology (54-XX) #### Citations contained in zbMATH Open 56 Publications have been cited 145 times in 144 Documents Cited by Year On bondage numbers of graphs: a survey with some comments. Zbl 1267.05201 Xu, Jun-Ming 2013 Vertex-disjoint subtournaments of prescribed minimum outdegree or minimum semidegree: proof for tournaments of a conjecture of Stiebitz. Zbl 1236.05095 Lichiardopol, Nicolas 2012 Strong trinucleotide circular codes. Zbl 1234.92017 Michel, Christian J.; Pirillo, Giuseppe 2011 Application of the firefly algorithm for solving the economic emissions load dispatch problem. Zbl 1279.90198 Apostolopoulos, Theofanis; Vlachos, Aristidis 2011 The $$a$$ and $$(a, b)$$-analogs of Zagreb indices and coindices of graphs. Zbl 1236.05074 Mansour, Toufik; Song, Chunwei 2012 On some bounds and exact formulae for connective eccentric indices of graphs under some graph operations. Zbl 1309.05052 De, Nilanjan; Pal, Anita; Nayeem, S. M. Abu 2014 New partition theoretic interpretations of Rogers-Ramanujan identities. Zbl 1258.05007 Agarwal, A. K.; Goyal, M. 2012 Identities of symmetry for generalized Euler polynomials. Zbl 1262.11022 Kim, Dae San 2011 Cayley graphs of order $$27p$$ are Hamiltonian. Zbl 1236.05102 2011 Minimum 2-tuple dominating set of an interval graph. Zbl 1236.05155 Pramanik, Tarasankar; Mondal, Sukumar; Pal, Madhumangal 2011 Combinatorial analysis of a subtraction game on graphs. Zbl 1370.05151 Adams, Richard; Dixon, Janae; Elder, Jennifer; Peabody, Jamie; Vega, Oscar; Willis, Karen 2016 Some inverse relations determined by Catalan matrices. Zbl 1295.05057 Yang, Sheng-Liang 2013 Groups containing small locally maximal product-free sets. Zbl 1431.11023 Anabanti, Chimere S.; Hart, Sarah B. 2016 Gallai-colorings of triples and 2-factors of $$\mathcal{B}_3$$. Zbl 1295.05101 Chua, Lynn; Gyárfás, András; Hossain, Chetak 2013 On the cardinality of the $$T_0$$-topologies on a finite set. Zbl 1408.05007 Kolli, Messaoud 2014 On isosceles sets in the 4-dimensional Euclidean space. Zbl 1295.51021 Kido, Hiroaki 2010 Zeons, permanents, the Johnson scheme, and generalized derangements. Zbl 1295.05271 Feinsilver, Philip; McSorley, John 2011 On the isolated vertices and connectivity in random intersection graphs. Zbl 1217.05209 Shang, Yilun 2011 Betweenness centrality in some classes of graphs. Zbl 1309.05168 Unnithan, Sunil Kumar Raghavan; Kannan, Balakrishnan; Jathavedan, Madambi 2014 On 3-regular bipancyclic subgraphs of hypercubes. Zbl 1322.05099 Borse, Y. M.; Shaikh, S. R. 2015 Normal edge-transitive Cayley graphs of the group $$U_{6n}$$. Zbl 1302.05076 Assari, A.; Sheikhmiri, F. 2014 Characterizing finite groups using the sum of the orders of the elements. Zbl 1309.20020 Harrington, Joshua; Jones, Lenny; Lamarche, Alicia 2014 Beyond the expanders. Zbl 1236.05122 Bolla, Marianna 2011 Some more results on IF soft rough approximation space. Zbl 1236.03036 Bhattacharya (Halder), Sharmistha; Davvaz, Bijan 2011 Total vertex irregularity strength of the disjoint union of sun graphs. Zbl 1236.05174 Slamin; Dafik; Winnona, Wyse 2012 Variations of the game 3-Euclid. Zbl 1235.91035 Ho, Nhan Bao 2012 A weighted regularity lemma with applications. Zbl 1295.05208 Csaba, Béla; Pluhár, András 2014 Algebraic integers as chromatic and domination roots. Zbl 1258.05053 Alikhani, Saeid; Hasni, Roslan 2012 Classification of triangle-free $$22_3$$ configurations. Zbl 1250.51008 Al-Azemi, Abdullah; Betten, Anton 2010 Classification of base sequences $$\text{BS}(n+1,n)$$. Zbl 1238.05041 Đoković, Dragomir Ž. 2010 Integral eigen-pair balanced classes of graphs with their ratio, asymptote, area, and involution-complementary aspects. Zbl 1302.05113 Winter, Paul August; Jessop, Carol Lynne 2014 Midpoint-free subsets of the real numbers. Zbl 1414.11012 Eggleton, Roger B. 2014 Domination polynomials of $$k$$-tree related graphs. Zbl 1303.05139 Jahari, Somayeh; Alikhani, Saeid 2014 Necklaces, self-reciprocal polynomials, and $$q$$-cycles. Zbl 1414.11041 Pintoptang, Umarin; Laohakosol, Vichian; Tadee, Suton 2014 On some combinatorial structures constructed from the groups $$L(3, 5), U(5, 2)$$, and $$S(6, 2)$$. Zbl 1236.05206 Crnković, Dean; Mikulić Crnković, Vedrana 2011 Harmonic numbers and cubed binomial coefficients. Zbl 1282.11011 Sofo, Anthony 2011 A noncommutative enumeration problem. Zbl 1236.05009 Bernabei, Maria Simonetta; Thaler, Horst 2011 Ramsey numbers for theta graphs. Zbl 1236.05130 Jaradat, M. M. M.; Bataineh, M. S. A.; Radaideh, S. M. E. 2011 On extremal self-dual ternary codes of length 48. Zbl 1251.94046 Nebe, Gabriele 2012 On the line graph of the zero divisor graph for the ring of Gaussian integers modulo $$n$$. Zbl 1236.05105 Nazzal, Khalida; Ghanem, Manal 2012 A generalized inverse binomial summation theorem and some hypergeometric transformation formulas. Zbl 1370.05009 Ripon, S. M. 2016 Riordan matrix representations of Euler’s constant $$\gamma$$ and Euler’s number $$e$$. Zbl 1405.11154 Goins, Edray Herber; Nkwanta, Asamoah 2016 Some nonexistence and asymptotic existence results for weighing matrices. Zbl 1370.05025 2016 On Cayley digraphs that do not have Hamiltonian paths. Zbl 1295.05120 Morris, Dave Witte 2013 Some new results on distance $$k$$-domination in graphs. Zbl 1295.05181 Vaidya, Samir K.; Kothari, Nirang J. 2013 Embedding structures associated with Riordan arrays and moment matrices. Zbl 1295.05052 Barry, Paul 2014 The terminal Hosoya polynomial of some families of composite graphs. Zbl 1295.05125 Deutsch, Emeric; Rodríguez-Velázquez, Juan Alberto 2014 The distribution of the size of the union of cycles for two types of random permutations. Zbl 1295.05017 Lengyel, Tamás 2010 Erratum to “Classification of base sequences $$\mathrm{BS}(n+1,n)$$”, ibid. 2010, Article ID 851858 (2010; Zbl 05816748). Zbl 1238.05040 Đoković, Dragomir Ž. 2010 On the general Erdős-Turán conjecture. Zbl 1309.11011 2014 A formula for the reliability of a $$d$$-dimensional consecutive-$$k$$-out-of-$$n$$:F system. Zbl 1337.60223 Cowell, Simon; Zeng, Jiang 2015 The Tutte polynomial of some matroids. Zbl 1267.05145 Merino, Criel; Ramírez-Ibáñez, Marcelino; Rodríguez-Sánchez, Guadalupe 2012 The structure of reduced Sudoku grids and the Sudoku symmetry group. Zbl 1267.05050 Jones, Siân K.; Perkins, Stephanie; Roach, Paul A. 2012 Finite 1-regular Cayley graphs of valency 5. Zbl 1267.05140 Li, Jing Jian; Lou, Ben Gong; Zhang, Xiao Jun 2013 Initial ideals of tangent cones to the Richardson varieties in the orthogonal Grassmannian. Zbl 1273.13053 2013 Sunlet decomposition of certain equipartite graphs. Zbl 1267.05167 Akwu, Abolape D.; Ajayi, Deborah O. A. 2013 Combinatorial analysis of a subtraction game on graphs. Zbl 1370.05151 Adams, Richard; Dixon, Janae; Elder, Jennifer; Peabody, Jamie; Vega, Oscar; Willis, Karen 2016 Groups containing small locally maximal product-free sets. Zbl 1431.11023 Anabanti, Chimere S.; Hart, Sarah B. 2016 A generalized inverse binomial summation theorem and some hypergeometric transformation formulas. Zbl 1370.05009 Ripon, S. M. 2016 Riordan matrix representations of Euler’s constant $$\gamma$$ and Euler’s number $$e$$. Zbl 1405.11154 Goins, Edray Herber; Nkwanta, Asamoah 2016 Some nonexistence and asymptotic existence results for weighing matrices. Zbl 1370.05025 2016 On 3-regular bipancyclic subgraphs of hypercubes. Zbl 1322.05099 Borse, Y. M.; Shaikh, S. R. 2015 A formula for the reliability of a $$d$$-dimensional consecutive-$$k$$-out-of-$$n$$:F system. Zbl 1337.60223 Cowell, Simon; Zeng, Jiang 2015 On some bounds and exact formulae for connective eccentric indices of graphs under some graph operations. Zbl 1309.05052 De, Nilanjan; Pal, Anita; Nayeem, S. M.&nbsp;Abu 2014 On the cardinality of the $$T_0$$-topologies on a finite set. Zbl 1408.05007 Kolli, Messaoud 2014 Betweenness centrality in some classes of graphs. Zbl 1309.05168 Unnithan, Sunil Kumar Raghavan; Kannan, Balakrishnan; Jathavedan, Madambi 2014 Normal edge-transitive Cayley graphs of the group $$U_{6n}$$. Zbl 1302.05076 Assari, A.; Sheikhmiri, F. 2014 Characterizing finite groups using the sum of the orders of the elements. Zbl 1309.20020 Harrington, Joshua; Jones, Lenny; Lamarche, Alicia 2014 A weighted regularity lemma with applications. Zbl 1295.05208 Csaba, Béla; Pluhár, András 2014 Integral eigen-pair balanced classes of graphs with their ratio, asymptote, area, and involution-complementary aspects. Zbl 1302.05113 Winter, Paul August; Jessop, Carol Lynne 2014 Midpoint-free subsets of the real numbers. Zbl 1414.11012 Eggleton, Roger B. 2014 Domination polynomials of $$k$$-tree related graphs. Zbl 1303.05139 Jahari, Somayeh; Alikhani, Saeid 2014 Necklaces, self-reciprocal polynomials, and $$q$$-cycles. Zbl 1414.11041 Pintoptang, Umarin; Laohakosol, Vichian; Tadee, Suton 2014 Embedding structures associated with Riordan arrays and moment matrices. Zbl 1295.05052 Barry, Paul 2014 The terminal Hosoya polynomial of some families of composite graphs. Zbl 1295.05125 Deutsch, Emeric; Rodríguez-Velázquez, Juan Alberto 2014 On the general Erdős-Turán conjecture. Zbl 1309.11011 2014 On bondage numbers of graphs: a survey with some comments. Zbl 1267.05201 Xu, Jun-Ming 2013 Some inverse relations determined by Catalan matrices. Zbl 1295.05057 Yang, Sheng-Liang 2013 Gallai-colorings of triples and 2-factors of $$\mathcal{B}_3$$. Zbl 1295.05101 Chua, Lynn; Gyárfás, András; Hossain, Chetak 2013 On Cayley digraphs that do not have Hamiltonian paths. Zbl 1295.05120 Morris, Dave Witte 2013 Some new results on distance $$k$$-domination in graphs. Zbl 1295.05181 Vaidya, Samir K.; Kothari, Nirang J. 2013 Finite 1-regular Cayley graphs of valency 5. Zbl 1267.05140 Li, Jing Jian; Lou, Ben Gong; Zhang, Xiao Jun 2013 Initial ideals of tangent cones to the Richardson varieties in the orthogonal Grassmannian. Zbl 1273.13053 2013 Sunlet decomposition of certain equipartite graphs. Zbl 1267.05167 Akwu, Abolape D.; Ajayi, Deborah O. A. 2013 Vertex-disjoint subtournaments of prescribed minimum outdegree or minimum semidegree: proof for tournaments of a conjecture of Stiebitz. Zbl 1236.05095 Lichiardopol, Nicolas 2012 The $$a$$ and $$(a, b)$$-analogs of Zagreb indices and coindices of graphs. Zbl 1236.05074 Mansour, Toufik; Song, Chunwei 2012 New partition theoretic interpretations of Rogers-Ramanujan identities. Zbl 1258.05007 Agarwal, A. K.; Goyal, M. 2012 Total vertex irregularity strength of the disjoint union of sun graphs. Zbl 1236.05174 Slamin; Dafik; Winnona, Wyse 2012 Variations of the game 3-Euclid. Zbl 1235.91035 Ho, Nhan Bao 2012 Algebraic integers as chromatic and domination roots. Zbl 1258.05053 Alikhani, Saeid; Hasni, Roslan 2012 On extremal self-dual ternary codes of length 48. Zbl 1251.94046 Nebe, Gabriele 2012 On the line graph of the zero divisor graph for the ring of Gaussian integers modulo $$n$$. Zbl 1236.05105 Nazzal, Khalida; Ghanem, Manal 2012 The Tutte polynomial of some matroids. Zbl 1267.05145 Merino, Criel; Ramírez-Ibáñez, Marcelino; Rodríguez-Sánchez, Guadalupe 2012 The structure of reduced Sudoku grids and the Sudoku symmetry group. Zbl 1267.05050 Jones, Siân K.; Perkins, Stephanie; Roach, Paul A. 2012 Strong trinucleotide circular codes. Zbl 1234.92017 Michel, Christian J.; Pirillo, Giuseppe 2011 Application of the firefly algorithm for solving the economic emissions load dispatch problem. Zbl 1279.90198 Apostolopoulos, Theofanis; Vlachos, Aristidis 2011 Identities of symmetry for generalized Euler polynomials. Zbl 1262.11022 Kim, Dae San 2011 Cayley graphs of order $$27p$$ are Hamiltonian. Zbl 1236.05102 2011 Minimum 2-tuple dominating set of an interval graph. Zbl 1236.05155 Pramanik, Tarasankar; Mondal, Sukumar; Pal, Madhumangal 2011 Zeons, permanents, the Johnson scheme, and generalized derangements. Zbl 1295.05271 Feinsilver, Philip; McSorley, John 2011 On the isolated vertices and connectivity in random intersection graphs. Zbl 1217.05209 Shang, Yilun 2011 Beyond the expanders. Zbl 1236.05122 Bolla, Marianna 2011 Some more results on IF soft rough approximation space. Zbl 1236.03036 Bhattacharya (Halder), Sharmistha; Davvaz, Bijan 2011 On some combinatorial structures constructed from the groups $$L(3, 5), U(5, 2)$$, and $$S(6, 2)$$. Zbl 1236.05206 Crnković, Dean; Mikulić Crnković, Vedrana 2011 Harmonic numbers and cubed binomial coefficients. Zbl 1282.11011 Sofo, Anthony 2011 A noncommutative enumeration problem. Zbl 1236.05009 Bernabei, Maria Simonetta; Thaler, Horst 2011 Ramsey numbers for theta graphs. Zbl 1236.05130 Jaradat, M. M. M.; Bataineh, M. S. A.; Radaideh, S. M. E. 2011 On isosceles sets in the 4-dimensional Euclidean space. Zbl 1295.51021 Kido, Hiroaki 2010 Classification of triangle-free $$22_3$$ configurations. Zbl 1250.51008 Al-Azemi, Abdullah; Betten, Anton 2010 Classification of base sequences $$\text{BS}(n+1,n)$$. Zbl 1238.05041 Đoković, Dragomir Ž. 2010 The distribution of the size of the union of cycles for two types of random permutations. Zbl 1295.05017 Lengyel, Tamás 2010 Erratum to “Classification of base sequences $$\mathrm{BS}(n+1,n)$$”, ibid. 2010, Article ID 851858 (2010; Zbl 05816748). Zbl 1238.05040 Đoković, Dragomir Ž. 2010 all top 5
{}
Moment of Inertia Problem Homework Statement A wheel, with circumference 0.6 m and moment of inertial 43 kg m2 about its center, rotates about a frictionless axle with angular velocity 13 radians per second. A brake is applied which supplies a constant force to a point on the perimeter of the wheel of 9 N, tangent to the wheel and opposing the motion. How many revolutions will the wheel make before coming to rest? Homework Equations KErotational=I*Omega2 Torque=I*alpha I=M*R2 The Attempt at a Solution I'm lost at how to start this problem, I tried to get the deceleration caused by the 9N force applied on the wheel by Newton's Second Law but I couldn't get the mass, so i used the I=M*R2 equation to get the mass and then used F=M*a to find the deceleration, took that answer and divided by 2pi to find the revolutions, but the answer was off. What am I missing? The angular deceleration is given by $a= \alpha r$. Find $\alpha$. Then use the equations of rotational motion to find the total angular displacement from the initial angular velocity to rest.
{}
# This Question: 2 pts For the given functions fand g, find (f.g)(x). f(x) = 5x +... ###### Question: This Question: 2 pts For the given functions fand g, find (f.g)(x). f(x) = 5x + 2, g(x) = 6x + 6 O A. 30x2 + 12 O B. 30x2 + 18x + 12 O C. 30x2 +42x + 12 O D. 11x2 + 42x + 8 Mika's ag rages wil es us r = oblems in t problems, so Click to select your answer #### Similar Solved Questions ##### Oder : cefoxitin 5000mg, IV, q6h available mix 1g in 1o ml of diuent set and... oder : cefoxitin 5000mg, IV, q6h available mix 1g in 1o ml of diuent set and solution: volumetric pump regulator, 60 gtt/ml;100 ml of D5W instructions: dilute qefoxin 500mg in 100ml of D5W and infuse in 45 minutes... ##### OHsOClzOCHaHsCn-butylamine , OPC(two steps, two products)OHCH_CHzOH 5% HzSO4 NHzethvlamine OH sOClz OCHa HsC n-butylamine , OPC (two steps, two products) OH CH_CHzOH 5% HzSO4 NHz ethvlamine... ... ##### Point)Sketch the graph of f(x) = Ixl}The critical point of both functions isThe local maximum Isand the minimum isThe graph of f (x) Is concave upward on point) Sketch the graph of f(x) = Ixl} The critical point of both functions is The local maximum Is and the minimum is The graph of f (x) Is concave upward on... ##### Find a forIula for thc general terI an of thc Scquence; asSuning that the pattern of the first few (erIs COnlinues.(a) {13,4,2 (6) {1, Find a forIula for thc general terI an of thc Scquence; asSuning that the pattern of the first few (erIs COnlinues. (a) {13,4,2 (6) {1,... ##### 4) Identify the conic that the polar equation represents and graph it by hand. TE 6... 4) Identify the conic that the polar equation represents and graph it by hand. TE 6 2-sin()... ##### Show that the set K given by K = {Ay +v, y € 4n, v € Rm Vi 2 0 Vi € {1, _ m}(as in the proof of minimax theorem) is closed and convex: Show that the set K given by K = {Ay +v, y € 4n, v € Rm Vi 2 0 Vi € {1, _ m} (as in the proof of minimax theorem) is closed and convex:... ##### Estimate the Essp for a formation with the following characteristics assuming that the formation water is pure NaCl solution? Formation temperature, %F 90 Formation thickness, ft 100 Rf ohm-m at 90 %F 2.0 Rw; ohm-m at 90 %F 2.5 Estimate the Essp for a formation with the following characteristics assuming that the formation water is pure NaCl solution? Formation temperature, %F 90 Formation thickness, ft 100 Rf ohm-m at 90 %F 2.0 Rw; ohm-m at 90 %F 2.5... ##### What is a particular solution to the differential equation dy/dx=(y+5)(x+2) with y(0)=-1? What is a particular solution to the differential equation dy/dx=(y+5)(x+2) with y(0)=-1?... ##### Sketch the graph of f and use your sketch DNE: )find the absolutelocal Marimum and minimum values of f (Enter your answers a5 comma-separated Iist.nswer doesexist, enterf3x + If 0 <* < 1 (x) = (3 - Z* If 1 $*$absolute maximum valueabsolute minimum valueIocal maximum value(s)local Minimum value(s) Sketch the graph of f and use your sketch DNE: ) find the absolute local Marimum and minimum values of f (Enter your answers a5 comma-separated Iist. nswer does exist, enter f3x + If 0 <* < 1 (x) = (3 - Z* If 1 $*$ absolute maximum value absolute minimum value Iocal maximum value(s) local Mini... ##### Determine the dimensions of Nul A, ColA, and Row A for the given matrix:-4 7 - 8 -90-5A =The dimension of Nul A is (Type a whole number:)The dimension of Col A is (Type a whole number:)The dimension of Row A is (Type a whole number:)Enter your answer in each of the answer boxes_ Determine the dimensions of Nul A, ColA, and Row A for the given matrix: -4 7 - 8 -9 0 -5 A = The dimension of Nul A is (Type a whole number:) The dimension of Col A is (Type a whole number:) The dimension of Row A is (Type a whole number:) Enter your answer in each of the answer boxes_... ##### Suppose Yı, Y2, ..., Yn|7 vid N(10, 7-2). The population mean Mo is known. The un-... Suppose Yı, Y2, ..., Yn|7 vid N(10, 7-2). The population mean Mo is known. The un- known parameter T > 0, which is the inverse of the population variance, is called the precision. The pdf of N(Mo, T-1) is given by Syl-(wl=) = Vb exp (-5(v – wo)"] Let's now derive the posterior ... ##### Suppose thatSndwsCreeniznoaverage OMCE every ZU davs nf growing What the prabability that snowinc Creenland Wunenwhendoesaciers naveZiyj chancegrowing Wnen does Qecima places )snov'Creenino Jlaciers nave29, chancecierEgrowing? (Round VJun answerNeed Help?Reud I1Uhtch ILSunmtii AnswarProgressPrdianmr E Suppose that Sndws Creenizno average OMCE every ZU davs nf growing What the prabability that snowinc Creenland Wunen when does aciers nave Ziyj chance growing Wnen does Qecima places ) snov' Creenino Jlaciers nave 29, chance cierE growing? (Round VJun answer Need Help? Reud I1 Uhtch IL Sunmtii ... ##### Question 4 5 pts A green laser operating at 532 nm shines light onto two- closely... Question 4 5 pts A green laser operating at 532 nm shines light onto two- closely spaced slits. An interference pattern is created on a diskant screen. If the fifth dark fringe appears on the screen at an angle of 0.12 degrees with respect to the center line, what is the slit separation distance? O ... ##### For a typical capital investment project, the bulk of the investment-related cash outflow occurs: During the... For a typical capital investment project, the bulk of the investment-related cash outflow occurs: During the initiation stage of the project During the operation stage of the project Either during the initiation stage or the operation stage During neither the initiation stage nor the operation sta... ##### ENGR 425 Reinforced Concrete Structures (3) Fall Semester 2018 Dr. Pong School of Engineering San Francisco... ENGR 425 Reinforced Concrete Structures (3) Fall Semester 2018 Dr. Pong School of Engineering San Francisco State University Page 3 2. DL Normal-weight Concrete 121 14" .TDL.*? (30%) Pl lease determin Mat 21 ,... ##### Mu) =I5/6H -Vanrg [(z) = VI)] -r(-I) = Jn M(=) = 3f()] =14u Mu) = I5/6H - Vanrg [(z) = VI)] - r(-I) = Jn M(=) = 3f()] = 14u... ##### The Bode magnitude plot of H(o) is shown Find H(w) 0.1 10 ? (rad/s) +20 dB/decade... The Bode magnitude plot of H(o) is shown Find H(w) 0.1 10 ? (rad/s) +20 dB/decade -40 dB/decade... ##### Use the following table to answer the question: $2,250,000$170,000 $150,000 320,000$150,000 58,000 Wages and... Use the following table to answer the question: $2,250,000$170,000 $150,000 320,000$150,000 58,000 Wages and salaries expense (gross pay) Amounts withheld from employees' pay Income taxes Social Security and Medicare Payroll taxes expense: Social Security and Medicare Unemployment taxes Worker... ##### At an awards ceremony; seven women and nve Men are each to receive an award and are presented with their award one at time and then the remaining awards will alternate between men and women; How many wavs can this be done? 3628800 possibilitiesthe awards arebe first giventhe women At an awards ceremony; seven women and nve Men are each to receive an award and are presented with their award one at time and then the remaining awards will alternate between men and women; How many wavs can this be done? 3628800 possibilities the awards are be first given the women... ##### Please let words clear, thanks 7. Recall that the notation alb is read as "a divides... please let words clear, thanks 7. Recall that the notation alb is read as "a divides b" and means that there is some integer x such that b ax. Now consider the following sentence: a e a) Write the negation of the sentence above in symbols, simplifying whenever b) The ORIGINAL sentence above ... ##### An ice skater has a moment of inertia of 8.0 kgm^2 when her arms are outstretched. at this time she is spinning at 4.0 revolutions per second if she pulls in her arms and decreases her moment of inertia by 40% how fast will she be spinning an ice skater has a moment of inertia of 8.0 kgm^2 when her arms are outstretched. at this time she is spinning at 4.0 revolutions per second if she pulls in her arms and decreases her moment of inertia by 40% how fast will she be spinning... ##### (a) Singly charged uranium-238 ions are accelerated through potentlal dlfference of 2.40 kV and enter unlform magnetic field of 1.30 directed perpendicular to their velocities Determine the radlus of thelr circular path; 08408 Your response off by multiple of ten _ cM (b) Repeat for uranium-235 Ions_ 083 Your response is off by multiple of ten How does the ratio of these path radii depend on the accelerating voltage AV and on the Magnitude of the magnetic strength B? It Is proportlonal to VAV / (a) Singly charged uranium-238 ions are accelerated through potentlal dlfference of 2.40 kV and enter unlform magnetic field of 1.30 directed perpendicular to their velocities Determine the radlus of thelr circular path; 08408 Your response off by multiple of ten _ cM (b) Repeat for uranium-235 Ions... ##### Select Aozi UJ a. one 1 I 2 5 Ixz Il solutions 22 3 there to the equation Select Aozi UJ a. one 1 I 2 5 Ixz Il solutions 22 3 there to the equation... ##### TEST4 REVIEW CH"MATI [" WINTER 201'Dr N Lavor NAME ALL WORK MUST BE SHOWN IN ^ NEAT, LOGICAL ORDER Exact answers are needed. No work No eredit CALCULATOR ANSWER WILL NOT BE ACCEPTEDTolal 100 points ecn problem is worth 10 pts _ Find general solution of differential equationdy y2 ax Xzx > 0Find general solution of differential equation dy = xcos?(y) dxIve initial value problem TEST4 REVIEW CH" MATI [" WINTER 201' Dr N Lavor NAME ALL WORK MUST BE SHOWN IN ^ NEAT, LOGICAL ORDER Exact answers are needed. No work No eredit CALCULATOR ANSWER WILL NOT BE ACCEPTED Tolal 100 points ecn problem is worth 10 pts _ Find general solution of differential equation dy y2 a... ##### J 102 2广 er-f dydxda 17) Evaluate: 0 j 102 2广 er-f dydxda 17) Evaluate: 0 j 102 2广 er-f dydxda 17) Evaluate: 0 j 102 2广 er-f dydxda 17) Evaluate: 0... ##### Choose the word or phrase that best answers the question. If an object absorbs all the light that hitsit, what color is it?A) whiteC) blackB) blueD) green Choose the word or phrase that best answers the question. If an object absorbs all the light that hits it, what color is it? A) white C) black B) blue D) green...
{}
0th Percentile ##### Read a FASTA file Read a FASTA file ##### Usage read_fasta(file_path, subset = NULL) ##### Arguments file_path (character of length 1) The path to a file to read. subset (numeric) Indexes of entries to return. If not NULL, the file will first be indexed without loading the whole file into RAM. ##### Value names character
{}
NAG Library Function Document 1Purpose nag_pde_interp_1d_fd (d03pzc) interpolates in the spatial coordinate the solution and derivative of a system of partial differential equations (PDEs). The solution must first be computed using one of the finite difference schemes nag_pde_parab_1d_fd (d03pcc), nag_pde_parab_1d_fd_ode (d03phc) or nag_pde_parab_1d_fd_ode_remesh (d03ppc), or one of the Keller box schemes nag_pde_parab_1d_keller (d03pec), nag_pde_parab_1d_keller_ode (d03pkc) or nag_pde_parab_1d_keller_ode_remesh (d03prc). 2Specification #include #include void nag_pde_interp_1d_fd (Integer npde, Integer m, const double u[], Integer npts, const double x[], const double xp[], Integer intpts, Integer itype, double up[], NagError *fail) 3Description nag_pde_interp_1d_fd (d03pzc) is an interpolation function for evaluating the solution of a system of partial differential equations (PDEs), at a set of user-specified points. The solution of the system of equations (possibly with coupled ordinary differential equations) must be computed using a finite difference scheme or a Keller box scheme on a set of mesh points. nag_pde_interp_1d_fd (d03pzc) can then be employed to compute the solution at a set of points anywhere in the range of the mesh. It can also evaluate the first spatial derivative of the solution. It uses linear interpolation for approximating the solution. None. 5Arguments Note: the arguments x, m, u, npts and npde must be supplied unchanged from the PDE function. 1:    $\mathbf{npde}$IntegerInput On entry: the number of PDEs. Constraint: ${\mathbf{npde}}\ge 1$. 2:    $\mathbf{m}$IntegerInput On entry: the coordinate system used. If the call to nag_pde_interp_1d_fd (d03pzc) follows one of the finite difference functions then m must be the same argument m as used in that call. For the Keller box scheme only Cartesian coordinate systems are valid and so m must be set to zero. No check will be made by nag_pde_interp_1d_fd (d03pzc) in this case. ${\mathbf{m}}=0$ Indicates Cartesian coordinates. ${\mathbf{m}}=1$ Indicates cylindrical polar coordinates. ${\mathbf{m}}=2$ Indicates spherical polar coordinates. Constraints: • $0\le {\mathbf{m}}\le 2$ following a finite difference function; • ${\mathbf{m}}=0$ following a Keller box scheme function. 3:    $\mathbf{u}\left[{\mathbf{npde}}×{\mathbf{npts}}\right]$const doubleInput Note: the $\left(i,j\right)$th element of the matrix $U$ is stored in ${\mathbf{u}}\left[\left(j-1\right)×{\mathbf{npde}}+i-1\right]$. On entry: the PDE part of the original solution returned in the argument u by the PDE function. Constraint: ${\mathbf{npde}}\ge 1$. 4:    $\mathbf{npts}$IntegerInput On entry: the number of mesh points. Constraint: ${\mathbf{npts}}\ge 3$. 5:    $\mathbf{x}\left[{\mathbf{npts}}\right]$const doubleInput On entry: ${\mathbf{x}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{npts}}$, must contain the mesh points as used by the PDE function. 6:    $\mathbf{xp}\left[{\mathbf{intpts}}\right]$const doubleInput On entry: ${\mathbf{xp}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{intpts}}$, must contain the spatial interpolation points. Constraint: ${\mathbf{x}}\left[0\right]\le {\mathbf{xp}}\left[0\right]<{\mathbf{xp}}\left[1\right]<\cdots <{\mathbf{xp}}\left[{\mathbf{intpts}}-1\right]\le {\mathbf{x}}\left[{\mathbf{npts}}-1\right]$. 7:    $\mathbf{intpts}$IntegerInput On entry: the number of interpolation points. Constraint: ${\mathbf{intpts}}\ge 1$. 8:    $\mathbf{itype}$IntegerInput On entry: specifies the interpolation to be performed. ${\mathbf{itype}}=1$ The solutions at the interpolation points are computed. ${\mathbf{itype}}=2$ Both the solutions and their first derivatives at the interpolation points are computed. Constraint: ${\mathbf{itype}}=1$ or $2$. 9:    $\mathbf{up}\left[\mathit{dim}\right]$doubleOutput Note: the dimension, dim, of the array up must be at least ${\mathbf{npde}}×{\mathbf{intpts}}×{\mathbf{itype}}$. The element ${\mathbf{UP}}\left(i,j,k\right)$ is stored in the array element ${\mathbf{up}}\left[\left(k-1\right)×{\mathbf{npde}}×{\mathbf{intpts}}+\left(j-1\right)×{\mathbf{npde}}+i-1\right]$. On exit: if ${\mathbf{itype}}=1$, ${\mathbf{UP}}\left(\mathit{i},\mathit{j},1\right)$, contains the value of the solution ${U}_{\mathit{i}}\left({x}_{\mathit{j}},{t}_{\mathrm{out}}\right)$, at the interpolation points ${x}_{\mathit{j}}={\mathbf{xp}}\left[\mathit{j}-1\right]$, for $\mathit{j}=1,2,\dots ,{\mathbf{intpts}}$ and $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$. If ${\mathbf{itype}}=2$, ${\mathbf{UP}}\left(\mathit{i},\mathit{j},1\right)$ contains ${U}_{\mathit{i}}\left({x}_{\mathit{j}},{t}_{\mathrm{out}}\right)$ and ${\mathbf{UP}}\left(\mathit{i},\mathit{j},2\right)$ contains $\frac{\partial {U}_{\mathit{i}}}{\partial x}$ at these points. 10:  $\mathbf{fail}$NagError *Input/Output The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation). 6Error Indicators and Warnings NE_ALLOC_FAIL Dynamic memory allocation failed. See Section 2.3.1.2 in How to Use the NAG Library and its Documentation for further information. NE_BAD_PARAM On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value. NE_EXTRAPOLATION On entry, interpolating point $〈\mathit{\text{value}}〉$ with the value $〈\mathit{\text{value}}〉$ is outside the x range. NE_INT On entry, ${\mathbf{intpts}}\le 0$: ${\mathbf{intpts}}=〈\mathit{\text{value}}〉$. On entry, ${\mathbf{itype}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{itype}}=1$ or $2$. On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{m}}=0$, $1$ or $2$. On entry, ${\mathbf{npde}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{npde}}>0$. On entry, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{npts}}>2$. NE_INTERNAL_ERROR An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. See Section 2.7.6 in How to Use the NAG Library and its Documentation for further information. NE_NO_LICENCE Your licence key may have expired or may not have been installed correctly. See Section 2.7.5 in How to Use the NAG Library and its Documentation for further information. NE_NOT_STRICTLY_INCREASING On entry, interpolation points xp badly ordered: $\mathit{I}=〈\mathit{\text{value}}〉$, ${\mathbf{xp}}\left[\mathit{I}-1\right]=〈\mathit{\text{value}}〉$, $\mathit{J}=〈\mathit{\text{value}}〉$ and ${\mathbf{xp}}\left[\mathit{J}-1\right]=〈\mathit{\text{value}}〉$. On entry, mesh points x badly ordered: $\mathit{I}=〈\mathit{\text{value}}〉$, ${\mathbf{x}}\left[\mathit{I}-1\right]=〈\mathit{\text{value}}〉$, $\mathit{J}=〈\mathit{\text{value}}〉$ and ${\mathbf{x}}\left[\mathit{J}-1\right]=〈\mathit{\text{value}}〉$. 7Accuracy See the PDE function documents. 8Parallelism and Performance nag_pde_interp_1d_fd (d03pzc) is not threaded in any implementation. None. 10Example © The Numerical Algorithms Group Ltd, Oxford, UK. 2017
{}
# zbMATH — the first resource for mathematics On the dimension of the $$l_ p^ n$$-subspaces of Banach spaces, for $$1 \leq p < 2$$. (English) Zbl 0509.46016 ##### MSC: 46B20 Geometry and structure of normed linear spaces 60B11 Probability theory on linear topological spaces 46B25 Classical Banach spaces in the general theory 46E30 Spaces of measurable functions ($$L^p$$-spaces, Orlicz spaces, Köthe function spaces, Lorentz spaces, rearrangement invariant spaces, ideal spaces, etc.) 60B12 Limit theorems for vector-valued random variables (infinite-dimensional case) Full Text:
{}
# Not able to delete n Stack Overflow Asked by user13645394 on August 26, 2020 Maybe a very simple answer but I cannot seem to get rid of a newline character. I have a list that contains a list and it looks like this: [['US', ' 146465', ' 146935', ' 148012', ' 149374', ' 150822', ' 152055', ' 153315', ' 154448', ' 154862', ' 155402\nn']] At the end there are two newline characters and I need to get rid of them. So first I iterated thought the big list to arrive at the sublist. for lists in L: Now that I am there I want to get the last element of the list using list[-1] When I print this I get 155402n Where did the second newline go? So I continue, now I guess the only thing to do is split it at the newline right?: print(lists[-1].split('n')) My output: [' 155402\n', ''] What in the world! Now there is a a double slash before the newline. So turns out I am incapable of taking out a simple newline character:D So really my question is how can I get rid of a newline in lists of a list. Any help would be appreciated. Thank you! Another slightly different regex approach - you can catch both new line characters OR double backslashed n: import re for lists in L: lists[-1] = re.sub('\\n|n', '', lists[-1]) Or out of a loop L[0][-1] = re.sub('\\n|n', '', L[0][-1]) Correct answer by Tom on August 26, 2020 In the last item in the list, there is actually only one newline character, not two. The backslash "escapes" the newline character so there is only 1. It is not deleting the second newline character because there isn't a second newline character. Change ' 155402\nn' to ' 155402nn' if you want 2 newlines. Answered by aidan0626 on August 26, 2020 The extra is affecting the split. So what you need to do is replace the extra "" with "" Try: lists[-1].replace("\\", "\").split("n") Answered by ewong on August 26, 2020 ## Related Questions ### Can both bitwise set/resets be achieved in one line with ternary operators? 5  Asked on November 29, 2021 by user8585939 ### Unable to understand error in D flip flop code 2  Asked on November 29, 2021 by chaitanya_12789 ### CodeMirror Highlight specific Regex-Match 1  Asked on November 29, 2021 by c000 ### Change visibility TextView in layout when switch between two activities in Android 1  Asked on November 29, 2021 ### How do we find frequency of one column based off two other columns in SQL? 4  Asked on November 29, 2021 by degreecharge ### Git: Stashed Changes But Still Can’t Pull 1  Asked on November 29, 2021 by crawfordbenjamin ### elif a==”no”: ^ SyntaxError: invalid syntax 3  Asked on November 29, 2021 by mayar-kurdi ### ValueError: Unconverted data remains .000 2  Asked on November 29, 2021 by vbdashes ### Click only through holes in svg mask: case with intersecting holes 1  Asked on November 29, 2021 by arsonist ### Why won’t R recognize numbers 10 or above in my data frame? 0  Asked on November 29, 2021 by ankaa ### Cmake headers: if the header exists in “X” directory, use it. Otherwise, use the header in “Y” directory. Is this possible? 0  Asked on November 29, 2021 by user13978178 ### does std::is_same has an impact on the performance of the code? 1  Asked on November 29, 2021 ### How to make an arraylist for the last dice rolls? 3  Asked on November 29, 2021 by acidixs 2  Asked on November 29, 2021 by ddave ### Find the most frequent words that appear in the dataset 3  Asked on November 29, 2021 by pythonnew ### How to properly mark the file as not in sync using Cloud Files API? 0  Asked on November 29, 2021 by it-hit-webdav ### Google cloud dataproc –files is not working 1  Asked on November 29, 2021 ### Jenkins auto generates wrong config for composer 1  Asked on November 29, 2021 by tiny-sunlight ### why my celery task is not executing periodically? 1  Asked on November 29, 2021 by d_p ### Access to XMLHttpRequest at ” from origin ” has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested 2  Asked on November 29, 2021 by vivek-nuna
{}
# GNU social Coding Style Please comply with PSR-12 and the following standard when working on GNU social if you want your patches accepted and modules included in supported releases. If you see code which doesn't comply with the below, please fix it :) GNU social is written with multiple programming paradigms in different places. Most of GNU social code is procedural programming contained in functions whose name starts with on. Starting with "on" is making use of the Event dispatcher (onEventName). This allows for a declarative structure. Hence, the most common function structure is the one in the following example: public function onRainStart(array &$args): bool { Util::openUmbrella(); return true; } Things to note in the example above: • This function will be called when the event "RainStart" is dispatched, thus its declarative nature. More on that in the Events chapter. • We call a static function from a Util class. That's often how we use classes in GNU social. A notable exception being Entities. More on that in the Database chapter. It's also common to have functional code snippets in the middle of otherwise entirely imperative blocks (e.g., for handling list manipulation). For this we often use the library Functional PHP. Use of reflective programming, variable functions, and magic methods are sometimes employed in the core. These principles defy what is then adopted and recommended out of the core (components, plugins, etc.). The core is a lower level part of GNU social that carefully takes advantage of these resources. Unless contributing to the core, you most likely shouldn't use these. PHP allows for a high level of code expression. In GNU social we have conventions for when each programming style should be adopted as well as methods for handling some common operations. Such an example is string parsing: We never chain various substring calls. We write a regex pattern and then call preg_match instead. All of this consistency highly contributes for a more readable and easier of maintaining code. ## Strings Use ' instead of " for strings, where substitutions aren't required. This is a performance issue, and prevents a lot of inconsistent coding styles. When using substitutions, use curly braces around your variables - like so: $var = "my_var: {$my_var}"; ## Comments and Documentation Comments go on the line ABOVE the code, NOT to the right of the code, unless it is very short. All functions and methods are to be documented using PhpDocumentor - https://docs.phpdoc.org/guides/ ## File Headers File headers follow a consistent format, as such: // This file is part of GNU social - https://www.gnu.org/software/social // // GNU social is free software: you can redistribute it and/or modify // it under the terms of the GNU Affero General Public License as published by // the Free Software Foundation, either version 3 of the License, or // (at your option) any later version. // // GNU social is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU Affero General Public License for more details. // // You should have received a copy of the GNU Affero General Public License // along with GNU social. If not, see <http://www.gnu.org/licenses/>. /** * Description of this file. * * @package samples * @author Diogo Cordeiro <diogo@fc.up.pt> * @copyright 2019 Free Software Foundation, Inc http://www.fsf.org * @license https://www.gnu.org/licenses/agpl.html GNU AGPL v3 or later */ Please use it. A few notes: • The description of the file doesn't have to be exhaustive. Rather it's meant to be a short summary of what's in this file and what it does. Try to keep it to 1-5 lines. You can get more in-depth when documenting individual functions! • You'll probably see files with multiple authors, this is by design - many people contributed to GNU social or its forebears! If you are modifying an existing file, APPEND your own author line, and update the copyright year if needed. Do not replace existing ones. ## Paragraph spacing Where-ever possible, try to keep the lines to 80 characters. Don't sacrifice readability for it though - if it makes more sense to have it in one longer line, and it's more easily read that way, that's fine. With assignments, avoid breaking them down into multiple lines unless neccesary, except for enumerations and arrays. ## 'If' statements format Use switch statements where many else if's are going to be used. Switch/case is faster. if ($var === 'example') { echo 'This is only an example'; } else { echo 'This is not a test. This is the real thing'; } Do NOT make if statements like this: if ($var === 'example'){ echo 'An example'; } OR this if ($var === 'example') echo "An {$var}"; ## Associative arrays Always use [] instead of array(). Associative arrays must be written in the following manner: $array = [ 'var' => 'value', 'var2' => 'value2' ]; Note that spaces are preferred around the '=>'. Some short hands are evil: • Use the long format for <?php. Do NOT use <?. • Use the long format for <?php echo. Do NOT use <?=. ## Naming conventions Respect PSR-12 first. • Classes use PascalCase (e.g. MyClass). • Functions/Methods use camelCase (e.g. myFunction). • Variables use snake_case (e.g. my_variable). A note on variable names, etc. It must be possible to understand what is meant without necessarily seeing it in context, because the code that calls something might not always make it clear. So if you have something like: $notice->post($contents); Well I can easily tell what you're doing there because the names are straight- forward and clear. Something like this: foo->bar(); Is much less clear. Also, wherever possible, avoid ambiguous terms. For example, don't use text as a term for a variable. Call back to "contents" above. ## Arrays Even though PSR-12 doesn't specifically specify rules for array formatting, it is in the spirit of it to have every array element on a new line like is done for function and class method arguments and condition expressions, if there is more than one element. In this case, even the last element should end on a comma, to ease later element addition. $foo = ['first' => 'unu'];$bar = [ 'first' => 'once', 'second' => 'twice', 'third' => 'thrice', ]; ## Comparisons Always use symbol based comparison operators (&&, ||) instead of text based operators (and, or) in an "if" clause as they are evaluated in different order and at different speeds. This is will prevent any confusion or strange results. Prefer using === instead of == when possible. Version 3 started with PHP 8, use strict typing whenever possible. Using strict comparisons takes good advantage of that. ## Use English All variables, classes, methods, functions and comments must be in English. Bad english is easier to work with than having to babelfish code to work out how it works. ## Encoding Files should be in UTF-8 encoding with UNIX line endings. ## No ending tag Files should not end with an ending php tag "?>". Any whitespace after the closing tag is sent to the browser and cause errors, so don't include them. ## Nesting Functions Avoid, if at all possible. When not possible, document the living daylights out of why you're nesting it. It's not always avoidable, but PHP has a lot of obscure problems that come up with using nested functions. If you must use a nested function, be sure to have robust error-handling. This is a must and submissions including nested functions that do not have robust error handling will be rejected and you'll be asked to add it. ## Scoping Properly enforcing scope of functions is something many PHP programmers don't do, but should. In general: • Variables unique to a class should be protected and use interfacing to change them. This allows for input validation and making sure we don't have injection, especially when something's exposed to the API, that any program can use, and not all of them are going to be be safe and trusted. • Variables not unique to a class should be validated prior to every call, which is why it's generally not a good idea to re-use stuff across classes unless there's significant performance gains to doing so. • Classes should protect functions that they do not want overriden, but they should avoid protecting the constructor and destructor and related helper functions as this prevents proper inheritance. ## Typecasting PHP is a soft-typed language, it falls to us developers to make sure that we are using the proper inputs. When possible, use explicit type casting. Where it isn't, you're going to have to make sure that you check all your inputs before you pass them. All inputs should be cast as an explicit PHP type. Not properly typecasting is a shooting offence. Soft types let programmers get away with a lot of lazy code, but lazy code is buggy code, and frankly, we don't want it in GNU social if it's going to be buggy. ## Consistent exception handling Consistency is key to good code to begin with, but it is especially important to be consistent with how we handle errors. GNU social has a variety of built- in exception classes. Use them, wherever it's possible and appropriate, and they will do the heavy lifting for you. Additionally, ensure you clean up any and all records and variables that need cleanup in a function using try { } finally { } even if you do not plan on catching exceptions (why wouldn't you, though? That's silly.). If you do not call an exception handler, you must, at a minimum, record errors to the log using Log::level(message). Ensure all possible control flows of a function have exception handling and cleanup, where appropriate. Don't leave endpoints with unhandled exceptions. Try not to leave something in an error state if it's avoidable. ## NULL, VOID and SET When programming in PHP it's common having to represent the absence of value. A variable that wasn't initialized yet or a function that could not produce a value. On the latter, one could be tempted to throw an exception in these scenarios, but not always that kind of failure fits the panic/exception/crash category. On the discussion of whether to use === null vs is_null(), the literature online is diverse and divided. We conducted an internal poll and the winner was is_null(). Some facts to consider: 1. null is both a data type, and a value; 2. As noted in PHP's documentation, the constant null forces a variable to be of type null; 3. A variable with null value returns false in an isset() test, despite that, assigning a variable to NULL is not the same as unsetting it. To actually test whether a variable is set or not requires adopting different strategies per context (https://stackoverflow.com/a/18646568). 4. The void return type doesn't return NULL, but if used as an expression, it evaluates to null. Considering union types and what we use null to represent, we believe that our use of null is always akin to that of a Option type. Here's an example: function sometimes_has_answer(): ?int { return random_int(1, 100) < 50 ? 42 : null; } $answer = sometimes_has_answer(); if (!is_null($answer)) { echo "Hey, we've got an {$answer}!"; } else { echo 'Sorry, no value. Better luck next time!'; } A non-void function, by definition, is expected to return a value. If it couldn't and didn't run on an exceptional scenario, then you should test in a different style from that of regular strict comparison. Hence, as you're testing whether a variable is of type null, then you should use is_null($var). Just as you normally would with an is_int($var) or is_countable($var). About nullable types, we prefer that you use the shorthand ?T instead of the full form T|null as it suggests that you're considering the possibility of not having the value of a certain variable. This apparent intent is reinforced by the fact that NULL can not be a standalone type in PHP.
{}
# The governor effort is the force applied for: 1. 10% change in speed 2. 1% change in speed 3. 50% change in speed 4. 100% change in speed Option 2 : 1% change in speed ## Terminologies in Governor MCQ Question 1 Detailed Solution Explanation: Governor Effort • It is the force exerted by the governor at the sleeve as the sleeve tends to move. • When the speed of the governor is constant, the force exerted on the sleeve is zero as the sleeve doesn't tend to move and hence at the constant speed, the effort of the governor is zero, but when the speed changes and the sleeve tends to move to new equilibrium position force is exerted on the sleeve. • This force gradually diminishes to zero as the sleeves move to a new equilibrium position corresponding to the new speed. • The mean force exerted on the sleeve during the given change of speed is known as the effort of the governor. • The given change of speed is generally taken as 1%, hence effort is defined as the force exerted on the sleeve for 1% change of speed. # If the speed of the governor fluctuates continuously above and below the mean speed, which of the following statements is true? 1. Governor is said to be stable 2. Governor is said to be insensitive 3. Governor is said to be isochronous 4. Governor is said to be hunting Option 4 : Governor is said to be hunting ## Terminologies in Governor MCQ Question 2 Detailed Solution Explanation: Governor: It is a device used for maintaining a constant mean speed of rotation of the crankshaft over long periods during which the load of the engine may vary. Governor maintains constant speed by controlling the supply of working fluid as the load varies. Some important terminologies of the governor. Sensitiveness: A governor is said to be sensitive when it readily responds to a small change of speed. $$Sensitiveness = \frac{{Range\;of\;Speed}}{{Mean\;Speed}} = \frac{{{N_2} - {N_1}}}{N} = \frac{{2\left( {{N_2} - {N_1}} \right)}}{{\left( {{N_1} + {N_2}} \right)}}$$ where N = Mean Speed 1 = Minimum speed corresponding to full load condition. N2 = Maximum Speed corresponding to the no-load condition. Hunting: The sensitiveness of a governor is desired quantity however if a governor is too sensitive, It fluctuates continuously, and this fluctuation is known as hunting. Isochronism: A governor having infinite sensitivity is treated as an isochronous governor. For all positions of sleeves isochronous has the same speed. Stability: A stable governor brings the speed of the engine to the required value and there is not much hunting. The ball masses occupy the definite position for the speed of the engine within the working range. Controlling force curve for spring-loaded governor The controlling force is equal and opposite to the centrifugal force and acts radially inward. Confusion Points Students often confuse about the work of a governor and flywheel. Governor: Maintains the constant mean speed of the shaft. Flywheel: It doesn't maintain a constant speed. It simply reduces the fluctuation of speed. # In a governor, if the equilibrium speed is constant for all radii of rotation of balls, the governor is said to be 1. stable 2. unstable 3. inertial 4. isochronous Option 4 : isochronous ## Terminologies in Governor MCQ Question 3 Detailed Solution Explanation: Governor: It is a device used for maintaining a constant mean speed of rotation of the crankshaft over long periods during which the load of the engine may vary. Governor maintains constant speed by controlling the supply of working fluid as the load varies. Some important terminologies of the governor Sensitiveness: • A governor is said to be sensitive when it readily responds to a small change of speed. $$\rm Sensitiveness = \frac{{Range\;of\;speed}}{{Mean\;speed}} = \frac{{{N_2} - {N_1}}}{N} = \frac{{2\left( {{N_2} - {N_1}} \right)}}{{\left( {{N_1} + {N_2}} \right)}}$$ Hunting: • The sensitiveness of a governor is a desired quantity, however, if a governor is too sensitive, it fluctuates continuously, and this fluctuation is known as hunting. Isochronism: • A governor having infinite sensitivity is treated as isochronous governor. For all positions of sleeves isochronous has the same speed. Stability: • A stable governor brings the speed of the engine to the required value and there is not much hunting. The ball masses occupy the definite position for the speed of the engine within the working range. # Sensitiveness of governor is given by 1. $$\dfrac{N_1+N_2}{N}$$ 2. $$\dfrac{N_1-N_2}{N}$$ 3. $$\dfrac{N}{N_1 - N_2}$$ 4. $$\dfrac{N}{N_1 + N_2}$$ Option 2 : $$\dfrac{N_1-N_2}{N}$$ ## Terminologies in Governor MCQ Question 4 Detailed Solution Explanation: Consider two governors A and B running at the same speed • When this speed increases or decreases by a certain amount, the lift of the sleeve of governor A is greater than the lift of the sleeve of governor B. • It is then said that governor A is more sensitive than the governor B In general, the greater the lift of the sleeve corresponding to a given fractional change in speed, the greater is the sensitiveness of the governor Sensitiveness is defined as the ratio of the difference between the maximum and minimum equilibrium speeds to the mean equilibrium speed. $${\rm{Sensitiveness = }}\frac{{{\rm{Range}}\,{\rm{of}}\,{\rm{Speed}}}}{{{\rm{Mean}}\,{\rm{Speed}}}} = \frac{{{\omega _{\max }}\;-\;{\omega _{\min }}}}{{{\omega _{{\mathop{\rm m}\nolimits} ean}}}}=\frac{N_{max}-N_{min}}{N_{mean}}$$ A governor is said to be isochronous when the equilibrium speed is constant (i.e. the range of speed is zero) for all radii of rotation of the balls within the working range, neglecting friction. Sensitivity: It is the reciprocal of sensitiveness. Sensitivity = $$\dfrac{N}{N_1 - N_2}$$ Thus when a governor behaves as isochronous i.e. range of speed is zero, then it is the stage of infinite sensitivity. # The power of a governor is the work done at 1. the governor balls for change of speed 2. the sleeve for zero change of speed 3. the sleeve for a given rate of change of speed 4. each governor ball for given percentage change of speed. Option 3 : the sleeve for a given rate of change of speed ## Terminologies in Governor MCQ Question 5 Detailed Solution Concept: Power of Governor: The power of a governor is the work done at the sleeve to change its equilibrium condition for a given percentage change of speed. It is the product of the mean value of the effort and the distance through which the sleeve moves. Power = Mean effort × lift of sleeve $$P=\frac{E}{2}\times Lift~of~sleeve$$ # When the speed of the engine fluctuates continuously above and below the mean speed, then the governor is said to be 1. Stable 2. Unsatble 3. Isochronous 4. Hunting Option 4 : Hunting ## Terminologies in Governor MCQ Question 6 Detailed Solution Explanation: Governor: It is a device used for maintaining a constant mean speed of rotation of the crankshaft over long periods during which the load of the engine may vary. Governor maintains constant speed by controlling the supply of working fluid as the load varies. Some important terminologies of the governor Sensitiveness: A governor is said to be sensitive when it readily responds to a small change of speed. $$\rm Sensitiveness = \frac{{Range\;of\;speed}}{{Mean\;speed}} = \frac{{{N_2} \ -\ {N_1}}}{N} = \frac{{2\left( {{N_2}\ -\ {N_1}} \right)}}{{\left( {{N_1}\ +\ {N_2}} \right)}}$$ where N = Mean Speed = Minimum speed corresponding to full load condition. N2 = Maximum Speed corresponding to the no-load condition. Hunting • A governor is said to be hunt if the speed of the engine fluctuates continuously above and below the mean speed. This is caused by a too sensitive governor which changes the fuel supply by a large amount when a small change in the speed of rotation takes place. • The very very fast to and fro motion of the sleeve between the stoppers is known as hunting. Isochronism: A governor having infinite sensitivity is treated as an isochronous governor. For all positions of sleeves isochronous has the same speed. Stability: A stable governor brings the speed of the engine to the required value and there is not much hunting. The ball masses occupy the definite position for the speed of the engine within the working range. Controlling force curve for spring-loaded governor Controlling force is equal and opposite to the centrifugal force and acts radially inward. Confusion Points Students often confuse in working of governor and flywheel. Governor: Maintains the constant mean speed of the shaft. Flywheel: It doesn't maintain a constant speed. It simply reduces the fluctuation of speed. # A spring controlled governor is found unstable. If can be made stable by 1. Increasing the spring stiffness 2. decreasing the spring stiffness 3. Increasing the ball weight 4. decreasing the ball weight Option 2 : decreasing the spring stiffness ## Terminologies in Governor MCQ Question 7 Detailed Solution A spring controlled governor can be made stable by decreasing the spring stiffness. A governor is said to be stable when for every speed within the working range there is a definite configuration i.e. there is only one radius of rotation of the governor balls at which the governor is in equilibrium. For a stable governor, if the equilibrium speed increases, the radius of governor balls must also increase. A governor is said to be unstable if the radius of rotation decreases as the speed increases. # The sensitiveness of a governor is defined as 1. $$\frac{{\left( {{N_1} - {N_2}} \right)}}{{\left( {{N_1} + {N_2}} \right)}}$$ 2. $$\frac{{\left( {{N_1} + {N_2}} \right)}}{{\left( {{N_1} - {N_2}} \right)}}$$ 3. $$\frac{{2\left( {{N_1} + {N_2}} \right)}}{{\left( {{N_1} - {N_2}} \right)}}$$ 4. $$\frac{{2\left( {{N_1} - {N_2}} \right)}}{{\left( {{N_1} + {N_2}} \right)}}$$ Option 4 : $$\frac{{2\left( {{N_1} - {N_2}} \right)}}{{\left( {{N_1} + {N_2}} \right)}}$$ ## Terminologies in Governor MCQ Question 8 Detailed Solution Explanation: Governor: It is a device used for maintaining a constant mean speed of rotation of the crankshaft over long periods during which the load of the engine may vary. Governor maintains constant speed by controlling the supply of working fluid as the load varies. Some important terminologies of the governor Sensitiveness: A governor is said to be sensitive when it readily responds to a small change of speed. $$Sensitiveness = \frac{{Range\;of\;speed}}{{Mean\;speed}} = \frac{{{N_1} - {N_2}}}{N} = \frac{{2\left( {{N_1} - {N_2}} \right)}}{{\left( {{N_1} + {N_2}} \right)}}$$ where N = Mean Speed = Minimum speed corresponding to full load condition. N2 = Maximum Speed corresponding to the no-load condition • Hunting: Sensitiveness of a governor is a desirable quantity.however, if a governor is too sensitive, It fluctuates continuously, and this fluctuation is known as hunting. • Isochronism: A governor having infinite sensitivity is treated as isochronous governor. For all positions of sleeves isochronous has the same speed. • Stability: A stable governor brings the speed of the engine to the required value and there is not much hunting. The ball masses occupy the definite position for the speed of the engine within the working range. # In a centrifugal governor, the controlling force is observed to be 14 N, when the radius of rotation is 2 cm and 38 N, when the radius of rotation is 6 cm. The governor 1. is a stable governor 2. is an unstable governor 3. is an isochronous governor 4. cannot be said of what type with the given data Option 2 : is an unstable governor ## Terminologies in Governor MCQ Question 9 Detailed Solution Concept: Stability of Governor: - For a stable governor, if the equilibrium speed increases, the radius of governor balls must also increase. - A governor is said to be unstable if the radius of rotation decreases as the speed increases. - A governor is said to be isochronous, when the equilibrium speed is constant (i.e., range of speed is zero) for all radii of rotation of the balls within the working range, neglecting friction. For stable governor: $$\frac{dF}{dr}>\frac{F}{r}$$ Calculation: $$\frac{dF}{dr}=\frac{38-14}{6-2}=6\;N/cm$$ $$\frac{F_1}{r_1}=\frac{14}{2}=7\;N/cm$$ $$\frac{F_2}{r_2}=\frac{38}{6}=6.33\;N/cm$$ Here $$\frac{dF}{dr}<\frac{F_1}{r_1}\;or \;\frac{F_2}{r_2}$$ So the governor is unstable. # Match List i and list ii and select correct answer from the options given : List - i List - ii (a) Hunting (i) One radius of rotation for each speed (b) Isochronism (ii) Too sensitive (c) Stability (iii) Mean force exerted at the sleeve during the change of speed (d) Effort (iv) Constant equilibrium speed for all radii of rotation Answer Options :   (a) (b) (c) (d) (1) (ii) (iv) (i) (iii) (2) (iii) (i) (iv) (ii) (3) (ii) (i) (iv) (iii) (4) (i) (ii) (iii) (iv) 1. 1 2. 2 3. 3 4. 4 Option 1 : 1 ## Terminologies in Governor MCQ Question 10 Detailed Solution Explanation Governor – It is a device used for maintaining a constant mean speed of rotation of the crankshaft over long periods during which the load of the engine may vary. Governor maintains constant speed by controlling the supply of working fluid as the load varies. Some important terminologies of the governor • Sensitiveness – A governor is said to be sensitive when it readily responds to a small change of speed. • Hunting – Sensitiveness of a governor is a desirable quantity. however, if a governor is too sensitive, It fluctuates continuously, and this fluctuation is known as hunting. • Isochronism – A governor having infinite sensitivity is treated as isochronous governor. For all positions of sleeves, isochronous has the same speed. • Stability – A stable governor brings the speed of the engine to the required value and there is not much hunting. The ball masses occupy the definite position for speed of the engine within the working range. • Governor Effort – The mean force exerted on the sleeve during the given change of speed is known as the effort of the governor. The given change of speed is generally taken as 1%, hence effort is defined as the force exerted on the sleeve for 1% change of speed. # Which one of the following governors is having a larger displacement of sleeve for a given fractional change of speed? 1. Stable governor 2. Sensitive governor 3. Isochronous governor 4. Hunting governor Option 2 : Sensitive governor ## Terminologies in Governor MCQ Question 11 Detailed Solution Sensitive Governor: If a governor is having a larger displacement of sleeve for a given fractional change of speed then it is known as sensitive governor. Stable Governor: When there is only one radius of rotation of governor ball for all speed within the working range this type of governor is known as stable governor. Hunting Governor: When the speed of the governor continuously fluctuates above or below the mean speed this type of governor is known as hunting governor. Isochronous Governor: If a governor is having the same radius of rotation for all the rotational speed then it is known as isochronous governor. # Governer is used in automobiles to: 1. Decrease the variation of speed 2. Control δN/δt 3. Control δN 4. All of the above Option 3 : Control δN ## Terminologies in Governor MCQ Question 12 Detailed Solution Explanation: The function of a governor is to maintain the speed of engine within specified limits ( or controls δN)  whenever there is a variation of load. Speed of an engine varies in two ways 1. During each revolution or cyclic variation 2. Over a number of revolutions. During each revolution or cyclic variation, the speed of the engine varies due to variation in output torque of the engine during a cycle and is regulated by a flywheel mounted on the shaft. Over a number of revolution, the speed of the engine varies due to variation in load upon the engine and the speed is maintained by mounting a governor. # The effort of a governor is defined as the force required to be applied for what percentage change of speed? 1. 1 percent 2. 5 percent 3. 10 percent 4. any percent Option 1 : 1 percent ## Terminologies in Governor MCQ Question 13 Detailed Solution Explanation: Governor Effort – It is the force exerted by the governor at the sleeve as the sleeve tends to move. When the speed of the governor is constant, the force exerted on the sleeve is zero as the sleeve doesn't tend to move and hence at the constant speed, the effort of the governor is zero, but when the speed changes and the sleeve tends to move to new equilibrium position force is exerted on the sleeve. This force gradually diminishes to zero as the sleeves move to a new equilibrium position corresponding to the new speed. The mean force exerted on the sleeve during the given change of speed is known as the effort of the governor. The given change of speed is generally taken as 1%, hence effort is defined as the force exerted on the sleeve for 1% change of speed. # Statement (I): If a centrifugal governor is stable at a particular position, it would be stable at all other positions in the working range of operation.Statement (II): Porter governor is a stable governor throughout its range of operation. 1. Both statement (I) and statement (II) are individually true and Statement (II) is the correct explanation of Statement (I) 2. Both statement (I) and statement (II) are individually true and Statement (II) is NOT the correct explanation of Statement (I) 3. Statement (I) is true but Statement (II) is false 4. Statement (I) is false but Statement (II) is true Option 2 : Both statement (I) and statement (II) are individually true and Statement (II) is NOT the correct explanation of Statement (I) ## Terminologies in Governor MCQ Question 14 Detailed Solution Concept: Stability of the Governor: A governor is said to be stable when for every speed within the working range, there is a definite configuration i.e. there is only one radius of rotation of the governor balls at which the governor is in equilibrium. For the governor to be stable, the controlling force must increase as the radius of rotation increases i.e., for a stable governor, if the equilibrium speed increases, the radius of governor balls must also increase. Explanation: Statement (I): Centrifugal governor is stable at a particular position. So, for every speed within the working range, there will be a definite configuration, so it would be stable at all other positions in the working range of operation. Statement (II): Porter governor is a stable governor throughout its range of operation as for every speed within the range, there will be only one configuration.
{}
# Math Help - maximizing largest area of rectangle within a circle 1. ## maximizing largest area of rectangle within a circle find the dimensions of the rectangle of largest area that can be inscribed in a circle of radius r circle:x^2+y^2=r^2 area of the rectangle = 2x2y y = root(r^2-x^2) area=(4x)[root(r^2-x^2)] Area'=4r^2+4xr-8x^2/root(r^2-x^2) 4r^2+4xr-8x^2=(r+2x)(r-x)=0 x=r or x=-r/2 plugging x back into area formula area = 4(-r/2)[root(r^2-r^2/4)] area = -r(root3r^2) the answer is that the sides are [root(2)]r, and that it is actually a square. Not sure what I'm doing wrong.. also, what happened to the Latex tutorial...? 2. Can you show us how you got that line? $Area'=\dfrac{4r^2+4xr-8x^2}{\sqrt{r^2-x^2}}$ From this: $Area = 4x\sqrt{r^2 - x^2}$ I'm getting (taking r as a constant): $\begin{array}{cl} Area' &= 4x\cdot \dfrac12 (r^2 - x^2)^{-\frac12}\cdot -2x + (r^2 - x^2)^{\frac12}\cdot 4 \\ & \\ & = \dfrac{-4x^2}{\sqrt{ r^2-x^2}} + 4(r^2 - x^2)^{\frac12} \\ & \\ & = \dfrac{1}{\sqrt{r^2 - x^2}} \left(-4x^2 + 4(r^2 - x^2)\right) \\ & \\ & = \dfrac{-4x^2 + 4r^2 - 4x^2}{\sqrt{r^2 - x^2}} \\ & \\ & = \dfrac{ 4r^2 - 8x^2}{\sqrt{r^2 - x^2}}\end{array}$ 3. ohh got it! I forgot that r is actually a constant because it is a radius, I treated it like a variable. Using your derviative I got Area=r(root2)r(root2) so each side would r(root2) since area = L x W thanks! 4. You're welcome
{}
#### Volume 16, issue 2 (2012) Recent Issues The Journal About the Journal Subscriptions Editorial Board Editorial Interests Editorial Procedure Submission Guidelines Submission Page Ethics Statement Author Index To Appear ISSN (electronic): 1364-0380 ISSN (print): 1465-3060 Other MSP Journals The Dirichlet Problem for constant mean curvature graphs in $\mathbb{M}\times\mathbb{R}$ ### Abigail Folha and Harold Rosenberg Geometry & Topology 16 (2012) 1171–1203 ##### Abstract We study graphs of constant mean curvature $H>0$ in $\mathbb{M}×ℝ$ for $\mathbb{M}$ a Hadamard surface, ie a complete simply connected surface with curvature bounded above by a negative constant $-a$. We find necessary and sufficient conditions for the existence of these graphs over bounded domains in $\mathbb{M}$, having prescribed boundary data, possibly infinite. ##### Keywords Hadamard surface, constant mean curvature, Dirichlet problem Primary: 53A10 Secondary: 53C42 ##### Publication Received: 21 February 2011 Revised: 5 March 2012 Accepted: 10 April 2012 Published: 23 June 2012 Proposed: Tobias H Colding Seconded: John Lott, Yasha Eliashberg ##### Authors Abigail Folha Instituto de Matemática – Departamento de Geometria Universidade Federal Fluminense R Mário Santos Braga, s/n Campus do Valonguinho CEP 24020-140 Niterói, RJ Brazil Harold Rosenberg Instituto de Matemática Pura e Aplicada Estrada Dona Castorina 110 CEP 22460-320 Rio de Janeiro, RJ Brazil http://www.math.jussieu.fr/~rosen/
{}
Contents Contents Idea Boundary separation is a modular reconstruction of the uniqueness of identity proofs in cubical type theory. It is a rule which implies UIP as a theorem. Definition Recall that in cubical type theory, there is an interval primitive $I$ with endpoints $0:I$ and $1:I$, as well as face formulas $\phi:F$ with rules which make $\phi$ behave like a formula in first-order logic ranging over the interval $I$. Boundary separation is the following rule: $\frac{\Gamma \vdash A \; \mathrm{type} \quad \Gamma \vdash r:I \quad \Gamma, \partial(r) \vdash a \equiv b:A}{\Gamma \vdash a \equiv b:A}$ where $I$ is the interval primitive in cubical type theory and $\partial(r)$ is the boundary face formula for dimension variables: $\delta(r) \coloneqq r = 0 \vee r = 1$ (The interval primitive $I$ has more points than $0$ and $1$, so it is not the case that the sequent $r:I \vdash r = 0 \vee r = 1 \;\mathrm{true}$ holds.) There is also a typal version of boundary separation which refers to cubical path types rather than definitional equality, given by the following rule: $\frac{\Gamma \vdash A \; \mathrm{type} \quad \Gamma \vdash a:A \quad \Gamma \vdash b:A \quad \Gamma \vdash r:I \quad \Gamma, \partial(r) \vdash p:a =_A b}{\Gamma \vdash p:a =_A b}$ Proof of UIP from boundary separation We denote path types by $a =_A b$ and dependent path types by $a =_{i.A} b$. Consider the following context: $\Gamma \coloneqq (\Delta, A \; \mathrm{type}, a:A, b:A, p:a =_{A} b, q: a =_A b)$
{}
Real-time data cluster Instead of one large instance, our RDB will now be a cluster of smaller instances and the day’s real-time data will be distributed between them. An Auto Scaling group will be used to maintain the RAM capacity of the cluster. Throughout the day more data will be ingested by the tickerplant and added to the cluster. The ASG will increase the number of instances in the cluster throughout the day in order to hold this new data. At the end of the day, the day’s data will be flushed from memory and the ASG will scale the cluster in. Distributed RDBs This solution has one obvious difference to a regular kdb+ system in that there are multiple RDB servers. User queries will need to be parsed and routed to each one to ensure the data can be retrieved effectively. Engineering a solution for this is beyond the scope of this article, but it will be tackled in the future. kdb+tick The code here has been written to act as a wrapper around kdb+tick’s .u functionality. The code to coordinate the RDBs has been put in a new .u.asg namespace, its functions determine when to call .u.sub and .u.del to add and remove subscribers from .u.w. Scaling the cluster On a high level the scaling method is quite simple. 1. A single RDB instance is launched and subscribes to the tickerplant. 2. When it fills up with data a second RDB will come up to take its place. 3. This cycle repeats throughout the day growing the cluster. 4. At end-of-day all but the latest RDB instances are shutdown. The subscriber queue There is an issue with the solution outlined above. An RDB will not come up at the exact moment its predecessor unsubscribes, so there are two scenarios that the tickerplant must be able to handle. • The new RDB comes up too early. • The new RDB does not come up in time. If the RDB comes up too early, the tickerplant must add it to a queue, while remembering the RDB’s handle, and the subscription info. If it does this, it can add the RDB to .u.w when it needs to. If the RDB does not come up in time, the tickerplant must remember the last upd message it sent to the previous RDB. When the RDB eventually comes up it can use this to recover the missing data from the tickerplant’s log file. This will prevent any gaps in the data. The tickerplant will store these details in .u.asg.tab. / table used to handle subscriptions / time - time the subscriber was added / handle - handle of the subscriber / tabs - tables the subscriber has subscribed for / syms - syms the subscriber has subscribed for / ip - ip of the subscriber / queue - queue the subscriber is a part of / live - time the tickerplant addd the subscriber to .u.w / rolled - time the subscriber unsubscribed / firstI - upd count when subscriber became live / lastI - last upd subscriber processed .u.asg.tab: flip timehandletabssymsqueueliverolledlastI!() q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI ------------------------------------------------------- The first RDB to come up will be added to this table and .u.w, it will then be told to replay the log. We will refer to the RDB that is in .u.w and therefore currently being published to as live. When it is time to roll to the next subscriber the tickerplant will query .u.asg.tab. It will look for the handle, tables and symbols of the next RDB in the queue and make it the new live subscriber. kdb+tick’s functionality will then take over and start publishing to the new RDB. To be added to .u.asg.tab a subscriber must call .u.asg.sub, it takes three parameters. 1. A list of tables to subscribe for. 2. A list of symbol lists to subscribe for (one symbol list for each of the tables). 3. The name of the queue to subscribe to. If the RDB is subscribing to a queue with no live subscriber, the tickerplant will immediately add it to .u.w and tell it to replay the log. This means the RDB cannot make multiple .u.asg.sub calls for each table it wants from the tickerplant. Instead table and symbol lists are sent as parameters. So multiple subscriptions can still be made. / t - A list of tables (or for all). / s - Lists of symbol lists for each of the tables. / q - The name of the queue to be added to. .u.asg.sub:{[t;s;q] if[-11h = type t; t: enlist t; s: enlist s]; if[not (=) . count each (t;s); '"Count of table and symbol lists must match"]; if[not all missing: t in .u.t,; '.Q.s1[t where not missing]," not available"]; .u.asg.tab upsert (.z.p; .z.w; t; s; $"." sv string 256 vs .z.a; q; 0Np; 0Np; 0N; 0N); liveProc: select from .u.asg.tab where not null handle, not null live, null rolled, queue = q; if[not count liveProc; .u.asg.add[t;s;.z.w]]; } .u.asg.sub first carries out some checks on the arguments. • Ensures t and s are enlisted. • Checks that the count of t and s match. • Checks that all tables in t are available for subscription. A record is then added to .u.asg.tab for the subscriber. Finally, .u.asg.tab is checked to see if there are other RDBs in the same queue. If the queue is empty the tickerplant will immediately make this RDB the live subscriber. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI -------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.13D23:36:43.518172000 7 , , 10.0.1.5 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.13D23:36:43.518223000 0 q).u.w Quote| 7i Trade| 7i If there is already a live subscriber the RDB will just be added to the queue. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI --------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.13D23:36:43.518172000 7 10.0.1.5 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.13D23:36:43.518223000 0 2020.04.14D07:37:42.451523000 9 10.0.1.22 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg q).u.w Quote| 7i Trade| 7i The live subscriber To make an RDB the live subscriber the tickerplant will call .u.asg.add. There are two instances when this is called. 1. When an RDB subscribes to a queue with no live subscriber. 2. When the tickerplant is rolling subscribers. / t - List of tables the RDB wants to subscribe to. / s - Symbol lists the RDB wants to subscribe to. / h - The handle of the RDB. .u.asg.add:{[t;s;h] schemas: raze .u.subInner[;;h] .' flip (t;s); q: first exec queue from .u.asg.tab where handle = h; startI: max 0^ exec lastI from .u.asg.tab where queue = q; neg[h] @ (.sub.rep; schemas; .u.L; (startI; .u.i)); update live:.z.p, firstI:startI from .u.asg.tab where handle = h; } In .u.asg.add .u.subInner is called to add the handle to .u.w for each table. This function is equivalent to kdb+tick’s .u.sub but it takes a handle as a third argument. This change to .u will be discussed in a later section. The tickerplant then calls .sub.rep on the RDB and the schemas, log file, and the log window are passed down as parameters. Once the replay is kicked off on the RDB it is marked as the live subscriber in .u.asg.tab. Becoming the live subscriber When the tickerplant makes an RDB the live subscriber it will call .sub.rep to initialize it. / schemas - table names and corresponding schemas / tplog - file path of the tickerplant log / logWindow - start and end of the window needed in the log, (start;end) .sub.rep:{[schemas;tplog;logWindow] .sub.live: 1b; (.[;();:;].) each schemas; .sub.start: logWindow 0; upd set .sub.replayUpd; -11!(logWindow 1;tplog); upd set .sub.upd; .z.ts: .sub.monitorMemory; system "t 5000"; } The RDB first marks itself as live, then as in tick/r.q the RDBs will set the table schemas and replay the tickerplant’s log. Replaying the tickerplant log In kdb+tick .u.i will be sent to the RDB. The RDB will then replay that many upd messages from the log. As it replays it inserts every row of data in the upd messages into the tables. In our case we may not want to keep all of the data in the log as other RDBs in the cluster may be holding some of it. This is why the logWindow is passed down by the tickerplant. logWindow is a list of two integers. 1. The last upd message processed by the other RDBs in the same queue. 2. The last upd processed by the tickerplant, .u.i. To replay the log .sub.start is set to the first element of logWindow and upd is set to .sub.replayUpd. The tickerplant log replay is then kicked off with -11! until the second element in the logWindow, .u.i. .sub.replayUpd is then called for every upd message. With each upd it increments .sub.i until it reaches .sub.start. From that point it calls .sub.upd to insert the data. .sub.replayUpd:{[t;data] if[.sub.i > .sub.start; if[not .sub.i mod 100; .sub.monitorMemory[]]; .sub.upd[t;flip data]; :(::); ]; .sub.i+: 1; } .sub.upd: {.sub.i+: 1; x upsert y} One other function of .sub.replayUpd is to monitor the memory of the server while we are replaying. This will protect the RDB in the case where there is too much data in the log to replay. In this case the RDB will unsubscribe from the tickerplant and another RDB will continue the replay. After the log has been replayed upd is set to .sub.upd, this will upsert data and keep incrementing .sub.i for every upd the RDB receives. Finally the RDB sets .z.ts to .sub.monitorMemory and initializes the timer to run every five seconds. Monitoring RDB server memory The RDB server’s memory is monitored for two reasons. 1. To tell the Auto Scaling group to scale out. 2. To unsubscribe from the tickerplant when full. Scaling out As discussed in the Auto Scaling in q section, AWS CLI commands can take some time to run. This could create some unwanted buffering in the RDB if they were to run while subscribed to the tickerplant. To avoid this another q process runs separately on the server to coordinate the scale out. It will continuously run .mon.monitorMemory to check the server’s memory usage against a scale threshold, say 60%. If the threshold is breached it will increment the Auto Scaling group’s DesiredCapacity and set .sub.scaled to be true. This will ensure the monitor process does not tell the Auto Scaling group to scale out again. .mon.monitorMemory:{[] if[not .mon.scaled; if[.util.getMemUsage[] > .mon.scaleThreshold; .util.aws.scale .aws.groupName; .mon.scaled: 1b; ]; ]; } Unsubscribing The RDB process runs its own timer function to determine when to unsubscribe from the tickerplant. It will do this to stop the server from running out of memory. .sub.monitorMemory:{[] if[.sub.live; if[.util.getMemUsage[] > .sub.rollThreshold; .sub.roll[] ]; ]; } .sub.monitorMemory checks when the server’s memory usage breaches the .sub.rollThreshold. It then calls .sub.roll on the tickerplant which will then roll to the next subscriber. Thresholds Ideally .mon.scaleThreshold and .sub.rollThreshold will be set far enough apart so that the new RDB has time to come up before the tickerplant tries to roll to the next subscriber. This will prevent the cluster from falling behind and reduce the number of upd messages that will need to be recovered from the log. Rolling subscribers As discussed, when .sub.rollThreshold is hit the RDB will call .sub.roll to unsubscribe from the tickerplant. From that point The RDB will not receive any more data, but it will be available to query. .sub.roll:{[] .sub.live: 0b; upd set {[x;y] (::)}; neg[.sub.TP] @ ({.u.asg.roll[.z.w;x]}; .sub.i); } .sub.roll marks .sub.live as false and upd is set to do nothing so that no further upd messages are processed. It will also call .u.asg.roll on the tickerplant, using its own handle and .sub.i (the last upd it has processed) as arguments. / h - handle of the RDB / subI - last processed upd message .u.asg.roll:{[h;subI] .u.del[;h] each .u.t; update rolled:.z.p, lastI:subI from .u.asg.tab where handle = h; q: first exec queue from .u.asg.tab where handle = h; waiting: select from .u.asg.tab where not null handle, null live, queue = q; if[count waiting; .u.asg.add . first[waiting]tabssymshandle]; } .u.asg.roll uses kdb+tick’s .u.del to delete the RDB’s handle from .u.w. It then marks the RDB as rolled and .sub.i is stored in the lastI column of .u.asg.tab. Finally .u.asg.tab is queried for the next RDB in the queue. If one is ready the tickerplant calls .u.asg.add making it the new live subscriber and the cycle continues. This switch to the new RDB may cause some latency in high volume systems. The switch itself will only take a moment but there may be some variability over the network as the tickerplant starts sending data to a new server. Implementing batching in the tickerplant could lessen this latency. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.13D23:36:43.518172000 7 10.0.1.5 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.13D23:36:43.518223000 2020.04.14D08:13:05.942338000 0 9746 2020.04.14D07:37:42.451523000 9 10.0.1.22 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D08:13:05.942400000 9746 q).u.w Quote| 9i Trade| 9i If there is no RDB ready in the queue, the next one to subscribe up will immediately be added to .u.w and lastI will be used to recover from the tickerplant log. End of day Throughout the day the RDB cluster will grow in size as the RDBs launch, subscribe, fill and roll. .u.asg.tab will look something like the table below. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.13D23:36:43.518172000 7 10.0.1.5 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.13D23:36:43.518223000 2020.04.14D08:13:05.942338000 0 9746 2020.04.14D07:37:42.451523000 9 10.0.1.22 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D08:13:05.942400000 2020.04.14D09:37:17.475790000 9746 19366 2020.04.14D09:14:14.831793000 10 10.0.1.212 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D09:37:17.475841000 2020.04.14D10:35:36.456220000 19366 29342 2020.04.14D10:08:37.606592000 11 10.0.1.196 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D10:35:36.456269000 2020.04.14D11:42:57.628761000 29342 39740 2020.04.14D11:24:45.642699000 12 10.0.1.42 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D11:42:57.628809000 2020.04.14D13:09:57.867826000 39740 50112 2020.04.14D12:41:57.889318000 13 10.0.1.80 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D13:09:57.867882000 2020.04.14D15:44:19.011327000 50112 60528 2020.04.14D14:32:22.817870000 14 10.0.1.246 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D15:44:19.011327000 60528 2020.04.14D16:59:10.663224000 15 10.0.1.119 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg Usually when end-of-day occurs .u.end is called in the tickerplant. It informs the RDB which would write its data to disk and flush it from memory. In our case when we do this the rolled RDBs will be sitting idle with no data. To scale in .u.asg.end is called alongside kdb+tick’s .u.end. .u.asg.end:{[] notLive: exec handle from .u.asg.tab where not null handle, (null live) or not any null (live;rolled); neg[notLive] @\: (.u.end; dt); delete from .u.asg.tab where any (null handle; null live; not null rolled); update firstI:0 from .u.asg.tab where not null live; } The function first sends .u.end to all non live subscribers. It then deletes these servers from .u.asg.tab and resets firstI to zero for all of the live RDBs. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI ----------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.14D15:32:22.817870000 14 10.0.1.246 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D15:44:19.011327000 0 When .u.end is called on the RDB it will delete the previous day’s data from each table. If the process is live it will mark .mon.scaled to false on the monitor process so that it can scale out again when it refills. If the RDB is not live and it has flushed all of its data it will terminate its own instance and reduce the DesiredCapacity of the ASG by one. .u.end: .sub.end; .sub.end:{[dt] .sub.i: 0; .sub.clear dt+1; } / tm - clear all data from all tables before this time .sub.clear:{[tm] ![;enlist(<;time;tm);0b;$()] each tables[]; if[.sub.live; .Q.gc[]; neg[.sub.MON] (set;.mon.scaled;0b); :(::); ]; if[not max 0, count each get each tables[]; .util.aws.terminate .aws.instanceId ]; } Bringing it all together The q scripts for the code outlined above are laid out in the same way as kdb+tick, i.e. tickasg.q is in the top directory with the RDB and .u.asg scripts in the directory below, asg/. The code runs alongside kdb+tick so its scripts are placed in the same top directory. $tree q/ q ├── asg │ ├── mon.q │ ├── r.q │ ├── sub.q │ ├── u.q │ └── util.q ├── tick │ ├── r.q │ ├── sym.q │ ├── u.q │ └── w.q ├── tickasg.q └── tick.q Starting the tickerplant is the same as in kdb+tick, but tickasg.q is loaded instead of tick.q. q tickasg.q sym /mnt/efs/tplog -p 5010 tickasg.q system "l tick.q" system "l asg/u.q" .tick.zpc: .z.pc; .z.pc: {.tick.zpc x; .u.asg.zpc x;}; .tick.end: .u.end; .u.end: {.tick.end x; .u.asg.end x;}; tickasg.q starts by loading in tick.q, .u.tick is called in this file so the tickerplant is started. Loading in asg/u.q will initiate the .u.asg code on top of it. .z.pc and .u.end are then overwritten to run both the .u and the .u.asg versions. .u.asg.zpc:{[h] if[not null first exec live from .u.asg.tab where handle = h; .u.asg.roll[h;0] ]; update handle:0Ni from .u.asg.tab where handle = h; } .u.asg.zpc checks if the disconnecting RDB is the live subscriber and calls .u.asg.roll if so. It then marks the handle as null in .u.asg.tab for any disconnection. There are also some minor changes made to .u.add and .u.sub in asg/u.q. Changes to .u .u will still work as normal with these changes. The main change is needed because .z.w cannot be used in .u.sub or .u.add anymore. When there is a queue of RDBs .u.sub will not be called in the RDB’s initial subscription call, so .z.w will not be the handle of the RDB we want to start publishing to. To remedy this .u.add has been changed to take a handle as a third parameter instead of using .z.w. The same change could not be made to .u.sub as it is the entry function for kdb+tick’s tick/r.q. To keep tick/r.q working .u.subInner has been added, it is a copy of .u.sub but takes a handle as a third parameter. .u.sub is now a projection of .u.subInner, it passes .z.w in as the third parameter. tick/u.q \d .u add:{$[ (count w x)>i:w[x;;0]?.z.w; .[.u.w;(x;i;1);union;y]; w[x],:enlist(.z.w;y) ]; (x;$[99=type v:value x;sel[v]y;@[0#v;sym;g#]]) } sub:{if[x~;:sub[;y]each t];if[not x in t;'x];del[x].z.w;add[x;y]} \d . asg/u.q / use 'z' instead of .z.w add:{$[ (count w x)>i:w[x;;0]?z; .[.u.w;(x;i;1);union;y]; w[x],:enlist(z;y) ]; (x;$[99=type v:value x;sel[v]y;@[0#v;sym;g#]]) } / use 'z' instead of .z.w and input as 3rd argument to .u.add subInner:{if[x~;:subInner[;y;z]each t];if[not x in t;'x];del[x]z;add[x;y;z]} sub:{subInner[x;y;.z.w]} \d . asg/r.q When starting an RDB in Auto Scaling mode asg/r.q is loaded instead of tick/r.q. q asg/r.q 10.0.0.1:5010 Where 10.0.0.1 is the private IP address of the tickerplant’s server. /q asg/r.q [host]:port[:usr:pwd] system "l asg/util.q" system "l asg/sub.q" while[null .sub.TP: @[{hopen ($":", .u.x: x; 5000)}; .z.x 0; 0Ni]]; while[null .sub.MON: @[{hopen (::5016; 5000)}; (::); 0Ni]]; .aws.instanceId: .util.aws.getInstanceId[]; .aws.groupName: .util.aws.getGroupName[.aws.instanceId]; .sub.rollThreshold: getenv ROLLTHRESHOLD; .sub.live: 0b; .sub.i: 0; .u.end: {[dt] .sub.clear dt+1}; neg[.sub.TP] @ (.u.asg.sub; ; ; \$ .aws.groupName, ".r-asg"); asg/r.q loads the scaling code in asg/util.q and the code to subscribe and roll in asg/sub.q. Connecting to the tickerplant is done in a retry loop just in case the tickerplant takes some time to initially come up. The script then sets the global variables outlined below. .aws.instanceId instance ID of its EC2 instance .aws.groupName name of its Auto Scaling group .sub.rollThreshold memory percentage threshold to unsubscribe .sub.live whether tickerplant is currently it sending data .sub.scaled whether it has launched a new instance .sub.i count of upd messages queue has processed `
{}
Radiometry is a set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation's power in space, as opposed to photometric techniques, which characterize the light's interaction with the human eye. Radiometry is distinct from quantum techniques such as photon counting. Radiometry is important in astronomy, especially radio astronomy, and plays a significant role in Earth remote sensing. The measurement techniques categorized as radiometry in optics are called photometry in some astronomical applications, contrary to the optics usage of the term. Spectroradiometry is the measurement of absolute radiometric quantities in narrow bands of wavelength.[1] ## Contents Quantity Unit Dimension Notes Name Symbol[nb 1] Name Symbol Symbol Radiant energy Qe[nb 2] joule J ML2T−2 Energy received, emitted, reflected, or transmitted by a system in form of electromagnetic radiation. Radiant energy density we joule per cubic metre J/m3 ML−1T−2 Radiant energy of a system per unit volume at a given location. Radiant flux / Radiant power Φe[nb 2] watt W or J/s ML2T−3 Radiant energy of a system per unit time at a given time. Spectral flux / Spectral power Φe,ν[nb 3] or Φe,λ[nb 4] watt per hertz or watt per metre W/Hz or W/m ML2T−2 or MLT−3 Radiant power of a system per unit frequency or wavelength. The latter is commonly measured in W⋅sr−1⋅m−2⋅nm−1. Radiant intensity Ie,Ω[nb 5] watt per steradian W/sr ML2T−3 Radiant power of a system per unit solid angle around a given direction. It is a directional quantity. Spectral intensity Ie,Ω,ν[nb 3] or Ie,Ω,λ[nb 4] or W⋅sr−1⋅Hz−1 or W⋅sr−1⋅m−1 ML2T−2 or MLT−3 Radiant intensity of a system per unit frequency or wavelength. The latter is commonly measured in W⋅sr−1⋅m−2⋅nm−1. It is a directional quantity. Radiance Le,Ω[nb 5] watt per steradian per square metre W⋅sr−1⋅m−2 MT−3 Radiant power of a surface per unit solid angle around a given direction per unit projected area of that surface along that direction. It is a directional quantity. It is sometimes also confusingly called "intensity". or Le,Ω,λ[nb 4] watt per steradian per square metre per hertz or watt per steradian per square metre, per metre W⋅sr−1⋅m−2⋅Hz−1 or W⋅sr−1⋅m−3 MT−2 or ML−1T−3 Radiance of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅sr−1⋅m−2⋅nm−1. It is a directional quantity. It is sometimes also confusingly called "spectral intensity". Irradiance Ee[nb 2] watt per square metre W/m2 MT−3 Radiant power received by a surface per unit area. It is sometimes also confusingly called "intensity". or Ee,λ[nb 4] watt per square metre per hertz or watt per square metre, per metre W⋅m−2⋅Hz−1 or W/m3 MT−2 or ML−1T−3 Irradiance of a surface per unit frequency or wavelength. The former is commonly measured in 10−22 W⋅m−2⋅Hz−1, known as solar flux unit, and the latter in W⋅m−2⋅nm−1.[nb 6] It is sometimes also confusingly called "spectral intensity". Radiosity Je[nb 2] watt per square metre W/m2 MT−3 Radiant power leaving (emitted, reflected and transmitted by) a surface per unit area. It is sometimes also confusingly called "intensity". or Je,λ[nb 4] watt per square metre per hertz or watt per square metre, per metre W⋅m−2⋅Hz−1 or W/m3 MT−2 or ML−1T−3 Radiosity of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅sr−1⋅m−2⋅nm−1. It is sometimes also confusingly called "spectral intensity". Radiant exitance Me[nb 2] watt per square metre W/m2 MT−3 Radiant power emitted by a surface per unit area. This is the emitted component of radiosity. "Radiant emittance" is an old term for this quantity. It is sometimes also confusingly called "intensity". Spectral exitance Me,ν[nb 3] or Me,λ[nb 4] watt per square metre per hertz or watt per square metre, per metre W⋅m−2⋅Hz−1 or W/m3 MT−2 or ML−1T−3 Radiant exitance of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅sr−1⋅m−2⋅nm−1. "Spectral emittance" is an old term for this quantity. It is sometimes also confusingly called "spectral intensity". Radiant exposure He joule per square metre J/m2 MT−2 Irradiance of a surface times exposure time. It is sometimes also called fluence. 1. ^ Standards organizations recommend that radiometric quantities should be denoted with a suffix "e" (for "energetic") to avoid confusion with photometric or photon quantities. 2. Alternative symbols sometimes seen: W or E for radiant energy, P or F for radiant flux, I for irradiance, W for radiant exitance. 3. Spectral quantities given per unit frequency are denoted with suffix "ν" (Greek)—not to be confused with the suffix "v" (for "visual") indicating a photometric quantity. 4. Spectral quantities given per unit wavelength are denoted with suffix "λ" (Greek) to indicate a spectral concentration. Spectral functions of wavelength are indicated by "(λ)" in parentheses instead, for example in spectral transmittance, spectral reflectance and spectral responsivity. 5. ^ a b The two directional quantities, radiant intensity and radiance, are denoted with suffix "Ω" (Greek) to indicate a directional concentration. 6. ^ NOAA / Space Weather Prediction Center includes a definition of the solar flux unit (SFU). ## Integral and spectral radiometric quantities Integral quantities (like radiant flux) describe the total effect of radiation of all wavelengths or frequencies, while spectral quantities (like spectral power) describe the effect of radiation of a single wavelength λ or frequency ν. To each integral quantity there are corresponding spectral quantities, for example the radiant flux Φe corresponds to the spectral power Φe,λ and Φe,ν. Getting an integral quantity's spectral counterpart requires a limit transition. This comes from the idea that the precisely requested wavelength photon existence probability is zero. Let us show the relation between them using the radiant flux as an example: Integral flux, whose unit is W: $\Phi_\mathrm{e}.$ Spectral flux by wavelength, whose unit is W/m: $\Phi_{\mathrm{e},\lambda} = {\mathrm{d}\Phi_\mathrm{e} \over \mathrm{d}\lambda},$ where $\mathrm{d}\Phi_\mathrm{e}$ is the radiant flux of the radiation in a small wavelength interval [λ, λ + dλ]. The area under a plot with wavelength horizontal axis equals to the total radiant flux. Spectral flux by frequency, whose unit is W/Hz: $\Phi_{\mathrm{e},\nu} = {\mathrm{d}\Phi_\mathrm{e} \over \mathrm{d}\nu},$ where $\mathrm{d}\Phi_\mathrm{e}$ is the radiant flux of the radiation in a small frequency interval [ν, ν + dν]. The area under a plot with frequency horizontal axis equals to the total radiant flux. Spectral flux multiplied by wavelength or frequency, whose unit is W, i.e. the same as the integral quantity: $\lambda \Phi_{\mathrm{e},\lambda} = \nu \Phi_{\mathrm{e},\nu}.$ The area under a plot with logarithmic wavelength or frequency horizontal axis equals to the total radiant flux. The spectral quantities by wavelength λ and frequency ν are related by equations featuring the speed of light c: $\Phi_{\mathrm{e},\lambda} = {c \over \lambda^2} \Phi_{\mathrm{e},\nu},$ $\Phi_{\mathrm{e},\nu} = {c \over \nu^2} \Phi_{\mathrm{e},\lambda},$ $\lambda = {c \over \nu}.$ The integral quantity can be obtained by the spectral quantity's integration: $\Phi_\mathrm{e} = \int_0^\infty \Phi_{\mathrm{e},\lambda}\, \mathrm{d}\lambda = \int_0^\infty \Phi_{\mathrm{e},\nu}\, \mathrm{d}\nu = \int_0^\infty \lambda \Phi_{\mathrm{e},\lambda}\, \mathrm{d} \ln \lambda = \int_0^\infty \nu \Phi_{\mathrm{e},\nu}\, \mathrm{d} \ln \nu.$
{}
# Changes in doc/groups.dox[318:1e2d6ca80793:1023:e0cef67fe565] in lemon Ignore: File: 1 edited ### Legend: Unmodified r318 * This file is a part of LEMON, a generic C++ optimization library. * * Copyright (C) 2003-2008 * Copyright (C) 2003-2010 * Egervary Jeno Kombinatorikus Optimalizalasi Kutatocsoport * (Egervary Research Group on Combinatorial Optimization, EGRES). */ namespace lemon { /** @defgroup datas Data Structures This group describes the several data structures implemented in LEMON. This group contains the several data structures implemented in LEMON. */ /** @defgroup semi_adaptors Semi-Adaptor Classes for Graphs @defgroup graph_adaptors Adaptor Classes for Graphs @ingroup graphs \brief Graph types between real graphs and graph adaptors. This group describes some graph types between real graphs and graph adaptors. These classes wrap graphs to give new functionality as the adaptors do it. On the other hand they are not light-weight structures as the adaptors. \brief Adaptor classes for digraphs and graphs This group contains several useful adaptor classes for digraphs and graphs. The main parts of LEMON are the different graph structures, generic graph algorithms, graph concepts, which couple them, and graph adaptors. While the previous notions are more or less clear, the latter one needs further explanation. Graph adaptors are graph classes which serve for considering graph structures in different ways. A short example makes this much clearer.  Suppose that we have an instance \c g of a directed graph type, say ListDigraph and an algorithm \code template int algorithm(const Digraph&); \endcode is needed to run on the reverse oriented graph.  It may be expensive (in time or in memory usage) to copy \c g with the reversed arcs.  In this case, an adaptor class is used, which (according to LEMON \ref concepts::Digraph "digraph concepts") works as a digraph. The adaptor uses the original digraph structure and digraph operations when methods of the reversed oriented graph are called.  This means that the adaptor have minor memory usage, and do not perform sophisticated algorithmic actions.  The purpose of it is to give a tool for the cases when a graph have to be used in a specific alteration.  If this alteration is obtained by a usual construction like filtering the node or the arc set or considering a new orientation, then an adaptor is worthwhile to use. To come back to the reverse oriented graph, in this situation \code template class ReverseDigraph; \endcode template class can be used. The code looks as follows \code ListDigraph g; ReverseDigraph rg(g); int result = algorithm(rg); \endcode During running the algorithm, the original digraph \c g is untouched. This techniques give rise to an elegant code, and based on stable graph adaptors, complex algorithms can be implemented easily. In flow, circulation and matching problems, the residual graph is of particular importance. Combining an adaptor implementing this with shortest path algorithms or minimum mean cycle algorithms, a range of weighted and cardinality optimization algorithms can be obtained. For other examples, the interested user is referred to the detailed documentation of particular adaptors. The behavior of graph adaptors can be very different. Some of them keep capabilities of the original graph while in other cases this would be meaningless. This means that the concepts that they meet depend on the graph adaptor, and the wrapped graph. For example, if an arc of a reversed digraph is deleted, this is carried out by deleting the corresponding arc of the original digraph, thus the adaptor modifies the original digraph. However in case of a residual digraph, this operation has no sense. Let us stand one more example here to simplify your work. ReverseDigraph has constructor \code ReverseDigraph(Digraph& digraph); \endcode This means that in a situation, when a const %ListDigraph& reference to a graph is given, then it have to be instantiated with Digraph=const %ListDigraph. \code int algorithm1(const ListDigraph& g) { ReverseDigraph rg(g); return algorithm2(rg); } \endcode */ \brief Map structures implemented in LEMON. This group describes the map structures implemented in LEMON. This group contains the map structures implemented in LEMON. LEMON provides several special purpose maps and map adaptors that e.g. combine \brief Special graph-related maps. This group describes maps that are specifically designed to assign values to the nodes and arcs of graphs. This group contains maps that are specifically designed to assign values to the nodes and arcs/edges of graphs. If you are looking for the standard graph maps (\c NodeMap, \c ArcMap, \c EdgeMap), see the \ref graph_concepts "Graph Structure Concepts". */ \brief Tools to create new maps from existing ones This group describes map adaptors that are used to create "implicit" This group contains map adaptors that are used to create "implicit" maps from other maps. Most of them are \ref lemon::concepts::ReadMap "read-only maps". Most of them are \ref concepts::ReadMap "read-only maps". They can make arithmetic and logical operations between one or two maps (negation, shifting, addition, multiplication, logical 'and', 'or', /** @defgroup matrices Matrices @ingroup datas \brief Two dimensional data storages implemented in LEMON. This group describes two dimensional data storages implemented in LEMON. */ /** @defgroup paths Path Structures @ingroup datas \brief %Path structures implemented in LEMON. This group describes the path structures implemented in LEMON. This group contains the path structures implemented in LEMON. LEMON provides flexible data structures to work with paths. any kind of path structure. \sa lemon::concepts::Path \sa \ref concepts::Path "Path concept" */ /** @defgroup heaps Heap Structures @ingroup datas \brief %Heap structures implemented in LEMON. This group contains the heap structures implemented in LEMON. LEMON provides several heap classes. They are efficient implementations of the abstract data type \e priority \e queue. They store items with specified values called \e priorities in such a way that finding and removing the item with minimum priority are efficient. The basic operations are adding and erasing items, changing the priority of an item, etc. Heaps are crucial in several algorithms, such as Dijkstra and Prim. The heap implementations have the same interface, thus any of them can be used easily in such algorithms. \sa \ref concepts::Heap "Heap concept" */ \brief Auxiliary data structures implemented in LEMON. This group describes some data structures implemented in LEMON in This group contains some data structures implemented in LEMON in order to make it easier to implement combinatorial algorithms. */ /** @defgroup geomdat Geometric Data Structures @ingroup auxdat \brief Geometric data structures implemented in LEMON. This group contains geometric data structures implemented in LEMON. - \ref lemon::dim2::Point "dim2::Point" implements a two dimensional vector with the usual operations. - \ref lemon::dim2::Box "dim2::Box" can be used to determine the rectangular bounding box of a set of \ref lemon::dim2::Point "dim2::Point"'s. */ /** @defgroup matrices Matrices @ingroup auxdat \brief Two dimensional data storages implemented in LEMON. This group contains two dimensional data storages implemented in LEMON. */ /** @defgroup algs Algorithms \brief This group describes the several algorithms \brief This group contains the several algorithms implemented in LEMON. This group describes the several algorithms This group contains the several algorithms implemented in LEMON. */ \brief Common graph search algorithms. This group describes the common graph search algorithms like Breadth-First Search (BFS) and Depth-First Search (DFS). This group contains the common graph search algorithms, namely \e breadth-first \e search (BFS) and \e depth-first \e search (DFS) \ref clrs01algorithms. */ \brief Algorithms for finding shortest paths. This group describes the algorithms for finding shortest paths in graphs. This group contains the algorithms for finding shortest paths in digraphs \ref clrs01algorithms. - \ref Dijkstra algorithm for finding shortest paths from a source node when all arc lengths are non-negative. - \ref BellmanFord "Bellman-Ford" algorithm for finding shortest paths from a source node when arc lenghts can be either positive or negative, but the digraph should not contain directed cycles with negative total length. - \ref FloydWarshall "Floyd-Warshall" and \ref Johnson "Johnson" algorithms for solving the \e all-pairs \e shortest \e paths \e problem when arc lenghts can be either positive or negative, but the digraph should not contain directed cycles with negative total length. - \ref Suurballe A successive shortest path algorithm for finding arc-disjoint paths between two nodes having minimum total length. */ /** @defgroup spantree Minimum Spanning Tree Algorithms @ingroup algs \brief Algorithms for finding minimum cost spanning trees and arborescences. This group contains the algorithms for finding minimum cost spanning trees and arborescences \ref clrs01algorithms. */ \brief Algorithms for finding maximum flows. This group describes the algorithms for finding maximum flows and feasible circulations. The maximum flow problem is to find a flow between a single source and a single target that is maximum. Formally, there is a \f$G=(V,A)\f$ directed graph, an \f$c_a:A\rightarrow\mathbf{R}^+_0\f$ capacity function and given \f$s, t \in V\f$ source and target node. The maximum flow is the \f$f_a\f$ solution of the next optimization problem: \f[ 0 \le f_a \le c_a \f] \f[ \sum_{v\in\delta^{-}(u)}f_{vu}=\sum_{v\in\delta^{+}(u)}f_{uv} \qquad \forall u \in V \setminus \{s,t\}\f] \f[ \max \sum_{v\in\delta^{+}(s)}f_{uv} - \sum_{v\in\delta^{-}(s)}f_{vu}\f] This group contains the algorithms for finding maximum flows and feasible circulations \ref clrs01algorithms, \ref amo93networkflows. The \e maximum \e flow \e problem is to find a flow of maximum value between a single source and a single target. Formally, there is a \f$G=(V,A)\f$ digraph, a \f$cap: A\rightarrow\mathbf{R}^+_0\f$ capacity function and \f$s, t \in V\f$ source and target nodes. A maximum flow is an \f$f: A\rightarrow\mathbf{R}^+_0\f$ solution of the following optimization problem. \f[ \max\sum_{sv\in A} f(sv) - \sum_{vs\in A} f(vs) \f] \f[ \sum_{uv\in A} f(uv) = \sum_{vu\in A} f(vu) \quad \forall u\in V\setminus\{s,t\} \f] \f[ 0 \leq f(uv) \leq cap(uv) \quad \forall uv\in A \f] LEMON contains several algorithms for solving maximum flow problems: - \ref lemon::EdmondsKarp "Edmonds-Karp" - \ref lemon::Preflow "Goldberg's Preflow algorithm" - \ref lemon::DinitzSleatorTarjan "Dinitz's blocking flow algorithm with dynamic trees" - \ref lemon::GoldbergTarjan "Preflow algorithm with dynamic trees" In most cases the \ref lemon::Preflow "Preflow" algorithm provides the fastest method to compute the maximum flow. All impelementations provides functions to query the minimum cut, which is the dual linear programming problem of the maximum flow. */ /** @defgroup min_cost_flow Minimum Cost Flow Algorithms - \ref EdmondsKarp Edmonds-Karp algorithm \ref edmondskarp72theoretical. - \ref Preflow Goldberg-Tarjan's preflow push-relabel algorithm \ref goldberg88newapproach. - \ref DinitzSleatorTarjan Dinitz's blocking flow algorithm with dynamic trees \ref dinic70algorithm, \ref sleator83dynamic. - \ref GoldbergTarjan !Preflow push-relabel algorithm with dynamic trees \ref goldberg88newapproach, \ref sleator83dynamic. In most cases the \ref Preflow algorithm provides the fastest method for computing a maximum flow. All implementations also provide functions to query the minimum cut, which is the dual problem of maximum flow. \ref Circulation is a preflow push-relabel algorithm implemented directly for finding feasible circulations, which is a somewhat different problem, but it is strongly related to maximum flow. For more information, see \ref Circulation. */ /** @defgroup min_cost_flow_algs Minimum Cost Flow Algorithms @ingroup algs \brief Algorithms for finding minimum cost flows and circulations. This group describes the algorithms for finding minimum cost flows and circulations. This group contains the algorithms for finding minimum cost flows and circulations \ref amo93networkflows. For more information about this problem and its dual solution, see \ref min_cost_flow "Minimum Cost Flow Problem". LEMON contains several algorithms for this problem. - \ref NetworkSimplex Primal Network Simplex algorithm with various pivot strategies \ref dantzig63linearprog, \ref kellyoneill91netsimplex. - \ref CostScaling Cost Scaling algorithm based on push/augment and relabel operations \ref goldberg90approximation, \ref goldberg97efficient, \ref bunnagel98efficient. - \ref CapacityScaling Capacity Scaling algorithm based on the successive shortest path method \ref edmondskarp72theoretical. - \ref CycleCanceling Cycle-Canceling algorithms, two of which are strongly polynomial \ref klein67primal, \ref goldberg89cyclecanceling. In general, \ref NetworkSimplex and \ref CostScaling are the most efficient implementations, but the other two algorithms could be faster in special cases. For example, if the total supply and/or capacities are rather small, \ref CapacityScaling is usually the fastest algorithm (without effective scaling). */ \brief Algorithms for finding minimum cut in graphs. This group describes the algorithms for finding minimum cut in graphs. The minimum cut problem is to find a non-empty and non-complete \f$X\f$ subset of the vertices with minimum overall capacity on outgoing arcs. Formally, there is \f$G=(V,A)\f$ directed graph, an \f$c_a:A\rightarrow\mathbf{R}^+_0\f$ capacity function. The minimum This group contains the algorithms for finding minimum cut in graphs. The \e minimum \e cut \e problem is to find a non-empty and non-complete \f$X\f$ subset of the nodes with minimum overall capacity on outgoing arcs. Formally, there is a \f$G=(V,A)\f$ digraph, a \f$cap: A\rightarrow\mathbf{R}^+_0\f$ capacity function. The minimum cut is the \f$X\f$ solution of the next optimization problem: \f[ \min_{X \subset V, X\not\in \{\emptyset, V\}} \sum_{uv\in A, u\in X, v\not\in X}c_{uv}\f] \sum_{uv\in A: u\in X, v\not\in X}cap(uv) \f] LEMON contains several algorithms related to minimum cut problems: - \ref lemon::HaoOrlin "Hao-Orlin algorithm" to calculate minimum cut in directed graphs - \ref lemon::NagamochiIbaraki "Nagamochi-Ibaraki algorithm" to calculate minimum cut in undirected graphs - \ref lemon::GomoryHuTree "Gomory-Hu tree computation" to calculate all pairs minimum cut in undirected graphs - \ref HaoOrlin "Hao-Orlin algorithm" for calculating minimum cut in directed graphs. - \ref NagamochiIbaraki "Nagamochi-Ibaraki algorithm" for calculating minimum cut in undirected graphs. - \ref GomoryHu "Gomory-Hu tree computation" for calculating all-pairs minimum cut in undirected graphs. If you want to find minimum cut just between two distinict nodes, please see the \ref max_flow "Maximum Flow page". */ /** @defgroup graph_prop Connectivity and Other Graph Properties @ingroup algs \brief Algorithms for discovering the graph properties This group describes the algorithms for discovering the graph properties like connectivity, bipartiteness, euler property, simplicity etc. \image html edge_biconnected_components.png \image latex edge_biconnected_components.eps "bi-edge-connected components" width=\textwidth */ /** @defgroup planar Planarity Embedding and Drawing @ingroup algs \brief Algorithms for planarity checking, embedding and drawing This group describes the algorithms for planarity checking, embedding and drawing. \image html planar.png \image latex planar.eps "Plane graph" width=\textwidth see the \ref max_flow "maximum flow problem". */ /** @defgroup min_mean_cycle Minimum Mean Cycle Algorithms @ingroup algs \brief Algorithms for finding minimum mean cycles. This group contains the algorithms for finding minimum mean cycles \ref clrs01algorithms, \ref amo93networkflows. The \e minimum \e mean \e cycle \e problem is to find a directed cycle of minimum mean length (cost) in a digraph. The mean length of a cycle is the average length of its arcs, i.e. the ratio between the total length of the cycle and the number of arcs on it. This problem has an important connection to \e conservative \e length \e functions, too. A length function on the arcs of a digraph is called conservative if and only if there is no directed cycle of negative total length. For an arbitrary length function, the negative of the minimum cycle mean is the smallest \f$\epsilon\f$ value so that increasing the arc lengths uniformly by \f$\epsilon\f$ results in a conservative length function. LEMON contains three algorithms for solving the minimum mean cycle problem: - \ref KarpMmc Karp's original algorithm \ref amo93networkflows, \ref dasdan98minmeancycle. - \ref HartmannOrlinMmc Hartmann-Orlin's algorithm, which is an improved version of Karp's algorithm \ref dasdan98minmeancycle. - \ref HowardMmc Howard's policy iteration algorithm \ref dasdan98minmeancycle. In practice, the \ref HowardMmc "Howard" algorithm turned out to be by far the most efficient one, though the best known theoretical bound on its running time is exponential. Both \ref KarpMmc "Karp" and \ref HartmannOrlinMmc "Hartmann-Orlin" algorithms run in time O(ne) and use space O(n2+e), but the latter one is typically faster due to the applied early termination scheme. */ \brief Algorithms for finding matchings in graphs and bipartite graphs. This group contains algorithm objects and functions to calculate This group contains the algorithms for calculating matchings in graphs and bipartite graphs. The general matching problem is finding a subset of the arcs which does not shares common endpoints. finding a subset of the edges for which each node has at most one incident edge. There are several different algorithms for calculate matchings in graphs.  The matching problems in bipartite graphs are generally easier than in general graphs. The goal of the matching optimization can be the finding maximum cardinality, maximum weight or minimum cost can be finding maximum cardinality, maximum weight or minimum cost matching. The search can be constrained to find perfect or maximum cardinality matching. LEMON contains the next algorithms: - \ref lemon::MaxBipartiteMatching "MaxBipartiteMatching" Hopcroft-Karp augmenting path algorithm for calculate maximum cardinality matching in bipartite graphs - \ref lemon::PrBipartiteMatching "PrBipartiteMatching" Push-Relabel algorithm for calculate maximum cardinality matching in bipartite graphs - \ref lemon::MaxWeightedBipartiteMatching "MaxWeightedBipartiteMatching" Successive shortest path algorithm for calculate maximum weighted matching and maximum weighted bipartite matching in bipartite graph - \ref lemon::MinCostMaxBipartiteMatching "MinCostMaxBipartiteMatching" Successive shortest path algorithm for calculate minimum cost maximum matching in bipartite graph - \ref lemon::MaxMatching "MaxMatching" Edmond's blossom shrinking algorithm for calculate maximum cardinality matching in general graph - \ref lemon::MaxWeightedMatching "MaxWeightedMatching" Edmond's blossom shrinking algorithm for calculate maximum weighted matching in general graph - \ref lemon::MaxWeightedPerfectMatching "MaxWeightedPerfectMatching" Edmond's blossom shrinking algorithm for calculate maximum weighted perfect matching in general graph \image html bipartite_matching.png \image latex bipartite_matching.eps "Bipartite Matching" width=\textwidth */ /** @defgroup spantree Minimum Spanning Tree Algorithms @ingroup algs \brief Algorithms for finding a minimum cost spanning tree in a graph. This group describes the algorithms for finding a minimum cost spanning tree in a graph The matching algorithms implemented in LEMON: - \ref MaxBipartiteMatching Hopcroft-Karp augmenting path algorithm for calculating maximum cardinality matching in bipartite graphs. - \ref PrBipartiteMatching Push-relabel algorithm for calculating maximum cardinality matching in bipartite graphs. - \ref MaxWeightedBipartiteMatching Successive shortest path algorithm for calculating maximum weighted matching and maximum weighted bipartite matching in bipartite graphs. - \ref MinCostMaxBipartiteMatching Successive shortest path algorithm for calculating minimum cost maximum matching in bipartite graphs. - \ref MaxMatching Edmond's blossom shrinking algorithm for calculating maximum cardinality matching in general graphs. - \ref MaxWeightedMatching Edmond's blossom shrinking algorithm for calculating maximum weighted matching in general graphs. - \ref MaxWeightedPerfectMatching Edmond's blossom shrinking algorithm for calculating maximum weighted perfect matching in general graphs. - \ref MaxFractionalMatching Push-relabel algorithm for calculating maximum cardinality fractional matching in general graphs. - \ref MaxWeightedFractionalMatching Augmenting path algorithm for calculating maximum weighted fractional matching in general graphs. - \ref MaxWeightedPerfectFractionalMatching Augmenting path algorithm for calculating maximum weighted perfect fractional matching in general graphs. \image html matching.png \image latex matching.eps "Min Cost Perfect Matching" width=\textwidth */ /** @defgroup graph_properties Connectivity and Other Graph Properties @ingroup algs \brief Algorithms for discovering the graph properties This group contains the algorithms for discovering the graph properties like connectivity, bipartiteness, euler property, simplicity etc. \image html connected_components.png \image latex connected_components.eps "Connected components" width=\textwidth */ /** @defgroup planar Planar Embedding and Drawing @ingroup algs \brief Algorithms for planarity checking, embedding and drawing This group contains the algorithms for planarity checking, embedding and drawing. \image html planar.png \image latex planar.eps "Plane graph" width=\textwidth */ /** @defgroup approx_algs Approximation Algorithms @ingroup algs \brief Approximation algorithms. This group contains the approximation and heuristic algorithms implemented in LEMON. Maximum Clique Problem - \ref GrossoLocatelliPullanMc An efficient heuristic algorithm of Grosso, Locatelli, and Pullan. */ \brief Auxiliary algorithms implemented in LEMON. This group describes some algorithms implemented in LEMON This group contains some algorithms implemented in LEMON in order to make it easier to implement complex algorithms. */ /** @defgroup approx Approximation Algorithms @ingroup algs \brief Approximation algorithms. This group describes the approximation and heuristic algorithms @defgroup gen_opt_group General Optimization Tools \brief This group contains some general optimization frameworks implemented in LEMON. */ /** @defgroup gen_opt_group General Optimization Tools \brief This group describes some general optimization frameworks This group contains some general optimization frameworks implemented in LEMON. This group describes some general optimization frameworks implemented in LEMON. */ /** @defgroup lp_group Lp and Mip Solvers */ /** @defgroup lp_group LP and MIP Solvers @ingroup gen_opt_group \brief Lp and Mip solver interfaces for LEMON. This group describes Lp and Mip solver interfaces for LEMON. The various LP solvers could be used in the same manner with this interface. \brief LP and MIP solver interfaces for LEMON. This group contains LP and MIP solver interfaces for LEMON. Various LP solvers could be used in the same manner with this high-level interface. The currently supported solvers are \ref glpk, \ref clp, \ref cbc, \ref cplex, \ref soplex. */ \brief Metaheuristics for LEMON library. This group describes some metaheuristic optimization tools. This group contains some metaheuristic optimization tools. */ \brief Simple basic graph utilities. This group describes some simple basic graph utilities. This group contains some simple basic graph utilities. */ \brief Tools for development, debugging and testing. This group describes several useful tools for development, This group contains several useful tools for development, debugging and testing. */ \brief Simple tools for measuring the performance of algorithms. This group describes simple tools for measuring the performance This group contains simple tools for measuring the performance of algorithms. */ \brief Exceptions defined in LEMON. This group describes the exceptions defined in LEMON. This group contains the exceptions defined in LEMON. */ \brief Graph Input-Output methods This group describes the tools for importing and exporting graphs This group contains the tools for importing and exporting graphs and graph related data. Now it supports the \ref lgf-format "LEMON Graph Format", the \c DIMACS format and the encapsulated /** @defgroup lemon_io LEMON Input-Output @defgroup lemon_io LEMON Graph Format @ingroup io_group \brief Reading and writing LEMON Graph Format. This group describes methods for reading and writing This group contains methods for reading and writing \ref lgf-format "LEMON Graph Format". */ \brief General \c EPS drawer and graph exporter This group describes general \c EPS drawing methods and special This group contains general \c EPS drawing methods and special graph exporting tools. */ /** @defgroup dimacs_group DIMACS Format @ingroup io_group \brief Read and write files in DIMACS format Tools to read a digraph from or write it to a file in DIMACS format data. */ /** @defgroup nauty_group NAUTY Format @ingroup io_group \brief Read \e Nauty format Tool to read graphs from \e Nauty format data. */ \brief Skeleton classes and concept checking classes This group describes the data/algorithm skeletons and concept checking This group contains the data/algorithm skeletons and concept checking classes implemented in LEMON. \brief Skeleton and concept checking classes for graph structures This group describes the skeletons and concept checking classes of LEMON's graph structures and helper classes used to implement these. This group contains the skeletons and concept checking classes of graph structures. */ \brief Skeleton and concept checking classes for maps This group describes the skeletons and concept checking classes of maps. This group contains the skeletons and concept checking classes of maps. */ /** @defgroup tools Standalone Utility Applications Some utility applications are listed here. The standard compilation procedure (./configure;make) will compile them, as well. */ \anchor demoprograms @defgroup demos Demo programs @defgroup demos Demo Programs Some demo programs are listed here. Their full source codes can be found in the \c demo subdirectory of the source tree. It order to compile them, use --enable-demo configure option when build the library. */ /** @defgroup tools Standalone utility applications Some utility applications are listed here. The standard compilation procedure (./configure;make) will compile them, as well. */ In order to compile them, use the make demo or the make check commands. */ }
{}
# 1. In a random sample of 15 CD players brought in form repair, the average repair cost was $80 and the sample standard deviation was$14. Construct a 90% confidence interval for. Assume the repair costs are normally distributed and there are no outliers. Question 1. In a random sample of 15 CD players brought in form repair, the average repair cost was $80 and the sample standard deviation was$14. Construct a 90% confidence interval for. Assume the repair costs are normally distributed and there are no outliers.
{}
## physics If an electric vehicle can store 100 Kwh of energy in its battery, how much does it cost at $0.05/Kwh to fully charge this car? ## Answers (4) • ## You need a Homework Help subscription to view this answer! ### Chegg Homework Help subscribers get: • Full access to our library of 2 million Q&A posts • 24/7 help from our community of subject experts • Fast answers from experts, around the clock Start Free Trial... Your free trial lasts for 7 days. Your subscription will continue for$14.95/month unless you cancel. Get homework help
{}
Now Playing 10/21/2013 2:49PM Nico Muhly's 'Two Boys': A Preview A preview of "Two Boys," an opera by Nico Muhly that explores the dark side of the internet and opens Oct. 21 at the Metropolitan Opera. This transcript has been automatically generated and may not be 100% accurate. I ... the the the the the the ... the the ... I ... the the ... I ... I ... I ... and ... the the the the ... this ... I ... I ... or ... it ... it ... I ... a loaves and his long run ... and vote was low heart ... are are are ... the the team ... the ... it's you ... I ... I ... and her ... the ... Asia ... will ... I ... I ... the ... I ... in a ... I ... the More → More → More → More → More → More → More → More →
{}
Calculus without limits # Mandelstuff Mandelbrot hyped in 1982/1983. As a first year student, I had spent almost all my free time programming on early Apple computers. At that time, students would work mostly in labs [I still had still a cheap Tandy imitation at home with audio tape backup and Basic programming language] but on the Apple II, we could use floppies (!) and program in Pascal]. I saw the first picture of the Mandelbrot set when one the students in the lab there built a program showing off pictures. Soon after, one could see folks running around with Benoit Mandelbrot‘s “Fractal Geometry of Nature”. In a Moser-Seminar of around 1984/1985, (we had to participate in a couple of such seminars as requirement for the Diplom), Jochen Denzler presented the Douady-Hubbard proof of the connectivity of the Mandelbrot set. I myself got assigned a multi-dimensional complex dynamics linearization result of Rabinowitz, which actually turned out to be really nice. [I was a really arrogant prick at that time: when Moser passed me that paper of his former PhD student Paul Rabinowitz (typed up with a type writer and where Moser wrote in big letters Rabinowitz), I complained to him at first that this topic is “too old fashioned”. Moser was amused, objected and of course he was right: multi-dimensional complex dynamics got really hot later on). I myself really got “sold” on the beauty of fractals with Peitgen and Richter’s book “Beauty of fractals”. In my thesis under the guidance of Oscar Lanford, I worked in one of the chapters on almost periodic operators whose spectra are on Julia sets. It also led to an encounter with p-adic math as the almost periodic operators appearing there are actually defined on the compact topological group of dyadic integers. I would use the proof of of Adrien Douady and John Hubbard (as learned from Denzler) later in the classroom, first at Caltech (1995) and later at Harvard (2005). About 10 years ago, the first pictures of the Mandelbulb appeared and videos started to mesmerize. Strangely, mathematicians did not jump onto it. Maybe because it is too hard? Maybe because the set-up is not so pure (the set depends on the parametrization and so is not coordinate free? Maybe because the origin of the Mandelbulb is rather humble having been created by programmers, hackers, science fiction writers, musicians, artists, engineers, or folks hacking in various such things at once (Rucker, Ruis, White, Nylander). The Mandelbulber allows now everybody to do such movies. In 2012 project of Liz Slavkovsky brought me closer to 3d printing (see a related “From the Sphere to the Mandelbulb” from 2013) for which I had contacted Rudy Rucker and Jules Ruis some early pioneers in that matter. Daniel White and Paul Nylander (unfortunately, I had not been aware of Paul’s work yet in 2013) were finalizing and naming the Mandelbulb definition between 2007 and 2009. The video below is about Mandelstuff triggered by teaching spherical coordinates. It looks like a distraction from Quantum calculus, but not as much: one can define Mandelbrot sets a very general setting: for any family of dynamical systems, a point 0 and an escape set B,one can look at the set of parameters for which 0 does not enter B. This is the Mandelstuff of the system. In a ring, one can take the family $T_c(z) = z^d+c$, where the degree $d \geq 2$ is an integer parameter. In Euclidean space $\mathbb{R}^m$, one can look at a parametrization $U(\phi)$ of the unit sphere and write every point $x$ in spherical coordinates as $x= |x| U(\phi)$ and define $x^d = |x|^d U(d \phi)$ which of course is not a power operation in a ring. The Mandelbulb is defined using $U(\phi,\theta) = (\cos(\phi) \cos(\theta), \cos(\phi) \sin(\theta), \sin(\phi))$. This is the most common White-Nylander Mandelbulb with most popular choice g=8. Some Mandelbulbers use the standard parametrization (where $\cos(\phi))$ and $\sin(\phi)$ are switched) which is also one of the templates in the program. By the way, the Mandelbulber program allows to explore any of those and much more!. In the video, the most obvious statement appears: A slice of the degree d White-Nylander Mandelbulb is the degree Mandelbrot set. It is an open problem whether the Mandelbulbs are connected. Actually now (as of morning of September 13th 2022), I believe that one should be able to prove the connectivity of the Mandelbulb using the Douady and Hubbard result (as of evening September 13, 2022, I still think so but that the proof is much harder than anticipated in the morning), I might look at this a bit next Saturday. It will most likely fail, like 99 percent of all stuff one starts in mathematics. But a bit of naivity can be helpful when doing research. It is also important to know why something is hard. That gives insight, even if one is unable to prove things. Why posting this in quantum calculus? Because I’m in general interested in how to translate things from the continuum to the discrete. If one can not work with discrete finite mathematics to deal with fractals, then the “finitist approach” to mathematics is doomed. We know that one can do everything also in the finite, simply because the pictures of the Mandelbulb are all done all done on a finite “computer” containing finitely many bits. Most of modern mathematics is is much more adventurous. There are even developments which go far beyond the ZFC axiom system. Already Alexander Grothendieck went beyond ZFC when talking about universes. Current developments in categorical mathematics do not bother, since it is believed to be irrelevant. It probably is. Still, all mathematics which uses the infinitely axiom is not safe from crashing at some point. I would not be surprised if somebody would prove one day that any axiom system which explicitly postulates infinity and has some minimal content like postulating the integers, must be inconsistent. And as I have pointed out quite a few times on these pages, this would not be a problem. We can work well with finite mathematics. Every computer scientist “feels” that. We are finite creatures, we can process only finitely many thoughts and symbols, measure only finitely many data. Infinity is a language construct which allows us to write some things down more conveniently. The burden of proof of the finitist however is to show that traditional mathematics like the mathematics of the Mandelbulb can be done also within a pure frame work of finite mathematics without using the crux of Euclidean spaces. This has not happened yet in practice. It happens theoretically in radically elementary approaches for example by compactifying a space and using non-standard analysis to deal with it as a finite set (Nelson’s radically elementary approach). But what would be needed is to have a purely combinatorics finite proof of say the Douady-Hubbard result without invoking infinity. This is not so weird: we can approximate the dynamical systems by maps on graphs and also model the parameter domain using a graph. On every approximation level, we would need the parameter domain to be a connected graph. And all should also look natural and not just a numerical scheme. Numerical scheme descriptions are usually ugly. Even books on difference equations are not pretty. Differential equations have much elegance. We are far from getting to this elegance using combinatorics alone. In any way, here is a picture of the degree 2 Mandelbulb using the parametrization in which the equatorial xy-plane can be identified with the complex plane on which the map induces the quadratic map in $\mathbb{C}$.
{}
## Leishmaniasis Yesterday I heard a talk by Ingrid Müller from Imperial College about leishmaniasis, a disease caused by protozoa of the genus Leishmania. The genus is named after the Scottish pathologist and army medical officer William Leishman. The life cycle of the parasite is of a similar type to that of the organism which causes malaria. Humans are infected by the bite of a sand fly (instead of mosquito in the case of malaria) and the individual infected in this way then passes on the parasite to other sand flies taking blood meals. While the malaria parasite makes its way to red blood cells the preferred target of Leishmania are macrophages, where it lives in certain vesicles. In this sense it is similar to the tuberculosis bacterium and has to meet similar challenges, such as not being digested. There are different forms of leishmaniasis. The cutaneous form, which is the most common, affects the skin and is cleared by the immune system after some time. Another, the visceral form, is much more serious. In the latter case the parasite damages internal organs and may be fatal if not treated. It could be said that cutaneous leishmaniasis is not a very serious disease, compared to many others. Unfortunately even when the lesion on the skin heals it can leave the affected person seriously disfigured. Thus there is a strong motivation for combatting it. Not surprisingly leishmaniasis is only common in parts of the world where poverty is widespread (Africa, South America, India) although cases do also occur in southern Europe. In parts of India the visceral form is known under the name kala azar. Searching for this disease on the internet you find a relatively large number of sites relating to dogs. What is behind this is the following. Dogs can also be affected by these parasites. When dog-lovers bring stray dogs from Spain to Germany (for instance) there exists an appreciable risk that the dogs may bring the parasites with them. Thus care is necessary. Leishmaniasis is much less well-known than malaria and the available scientific knowledge of the disease and (as a natural consequence) the available treatments are much more rudimentary. What treatments there are are expensive, which is particularly problematic in the regions where they are necessary. Leishmaniasis is also much less common than malaria but in fact no up to date and reliable epidemiological data is available so that it is not clear how common it is. Infections of mice with Leishmania have been a popular model system in immunology. The cutaneous and visceral forms have been associated with Th1 and Th2 type responses respectively in the past. According to the speaker this association is controversial. A simple picture would be that a Th1 response results in a high concentration of interferon $\gamma$ which activates macrophages and thus allows them the kill the parasites infecting them. It was mentioned in the talk that macrophages can also be activated in a different way by IL4 (the typical Th2 cytokine) in what is called alternative activation. This leads to production of the enzyme arginase which metabolises arginine. In the talk evidence was presented that the resulting metabolites can serve as raw materials for the replication of the parasite in the hostile environment of the phagosome. This has been supported by showing that parasite growth can be accelerated by adding metabolites downstream of arginine, such as ornithine. What I heard in this talk has contributed to my opinion that while the Th1-Th2 axis is useful for generating ideas for understanding various diseases it is necessary to keep in mind that it is likely to be an oversimplification of a complex state of affairs.
{}
Relation between amplitude and frequency PhysicsWaves Class 11th Physics - Elasticity 6 Lectures 1 hours Class 11th Physics - Oscillations 12 Lectures 2 hours Class 11th Physics - Waves 14 Lectures 3 hours Introduction The aspects of amplitude and frequency are associated with the waveforms. Amplitude is depicted when a wave shows a greater deviation from zero. On the other hand, the total number of waves or oscillations passing through a particular point in a second is termed frequency. Amplitude, as well as frequency, is inversely proportional to each other. That is with the decrease in frequency, the amplitude of the wave increases and vice versa. Amplitude and frequency: Definition Amplitude in the case of sound waves measures the wave height. As opined by He et al. (2021), amplitude of sound wave is the loudness or is depicted as maximum displacement of vibrating particle of any medium from its mean position during sound production. Distance existing between trough or crest is denoted as amplitude of sound waves. Figure 1: Amplitude of Sound In terms of a vibrating body, it can be established that amplitude denotes the maximum amount of displacement or distance covered by any object from its equilibrium position. This aspect is half the length of the path of vibration. The amplitude of a wave is noted by the formula $$x = A \:sin \:(\omega t + \phi)$$ Here, x = displacement of the wave in metres; A = amplitude of waves; \omega is the angular frequency of the waves, measured in radians, t = time period, measured in seconds and $\varphi$ is the phase shift, measured in radians. Frequency on the other hand is defined by the number of occurrences of the wave associated with a repeating event in a particular unit of time. Frequency is mainly of two types, namely angular and spatial frequency. The frequency of any given object is measured in Hertz and it is symbolized by Hz(openstax, 2022). Hertz is defined as the repetition of a particular event occurring per second, in such case, the period is associated with the duration of time of a particular cycle in a repeating course of event such that the period is the reciprocal of the frequency and the unit of its measurement is in seconds or s. Amplitude and Frequency: relation The relationship between the amplitude and frequency can be established in such a manner that a particular uniform motion will have an angular velocity that is uniform in nature. To establish the relation further, certain functions are taken into consideration, Function such as the amplitude modulation or AM is capable of having double periods. These period functions are furthermore hidden in other period functions in existence (weebly, 2022). The inverse of the frequency of a periodic motion gives away the time which is determined in seconds. According to Tao et al. (2019), these periodic motions are further classified into two sub-parts simple harmonic motion and damped harmonic motion. Time difference between two similar events or occurrences helps in obtaining the frequency of recurring periodic motion. The frequency of a simple pendulum is dependent on the length of the pendulum and the associated gravitational acceleration, which are vibrations. Figure 2: Amplitude Amplitude and frequency of sound The highest amount of displacement of vibrating particles of a particular medium from their mean location is characterized by the loudness of the sound wave. This loudness of the sound is associated with the amplitude of the sound wave (Weebly, 2022). Therefore, it can be established that it is the distance between the troughs and crests of a particular wave regarding its mean position. The amplitude of sound further enhances the loudness of the sound that is the biggest displacement of sound wave from its equilibrium location. The frequency of the sound wave is the number of times it repeats itself per second, the fewer the oscillations the lower the frequency of the sound. For example, the frequency of drumbeats is lower than blowing the whistle. The oscillations of the sound wave are more common in higher frequencies. The frequency is measured in Hertz. According to Peña et al. (2021), the frequency of the sound between 20Hz to 20,000 Hz is audible to the human ear, beyond 20,000Hz the sound is classified as Ultrasound. A short-wavelength produces a higher amount of frequency that comes along with a higher pitch and faster cycles and a short wavelength produces a very high amount of frequency with a higher pitch and faster cycles. Figure 3: Amplitude and frequency of sound Conclusion The amplitude and frequency are generally associated with waveforms of sound. The frequency aligns with the crest and troughs of the travelling wave in every unit of time that is measured in seconds. The amplitude is associated with the maximum displacement of a wave measured from the position of its equilibrium, simply the number of waves passing through a particular point in a given amount of time; it is the number of the completed wave cycles per given second. FAQs Q1. Which wave functions determines the loudness and pitch of the sound? The amplitude is associated with the loudness of the sound both these aspects are directly related to one another. That is, louder sounds are associated with higher amplitude and the pitch of the sound is directly proportional to its frequency. That is higher pitch means a higher frequency of sound. Q2. What are the amplitude and frequency of the sound waves? The number of oscillations per second defines the frequency. It is measured in Hertz and the amplitude is defined b the maximum height attained by the troughs and crests of the sound wave. Q3. What are sound waves? The sound wave is fiend by the sequence of disturbance that is generated by the flow of energy as it travels away from the source of the sound across various forms of medium. A source of a sound wave is an object that produces vibration. Q4. Do amplitude and frequency are dependent on each other? No, the aspect of amplitude and frequency are independent of one another. The first depends on the total amount of energy existing in a particular system and the frequency depends on the oscillator’s properties. The amplitude of a source can be altered but the frequency can't be changed. References Journals He, J. H., Hou, W. F., Qie, N., Gepreel, K. A., Shirazi, A. H., & Mohammad-Sedighi, H. (2021). Hamiltonian-based frequency-amplitude formulation for nonlinear oscillators.Facta Universitatis, Series: Mechanical Engineering,19(2), 199-208. Retrived from: http://casopisi.junis.ni.ac.rs Peña, E., Pelot, N. A., & Grill, W. M. (2021). Non-monotonic kilohertz frequency neural block thresholds arise from amplitude-and frequency-dependent charge imbalance.Scientific reports,11(1), 1-17. Retrieved from: https://www.nature.com/articles/s41598-021-84503-3 Tao, Z. L., Chen, G. H., & Xian Bai, K. (2019). Approximate frequency–amplitude relationship for a singular oscillator.Journal of Low Frequency Noise, Vibration and Active Control,38(3-4), 1036-1040.Retrieved from: https://journals.sagepub.com/doi/full/10.1177/1461348419828880 Websites Openstax (2022). About 13.2 Wave Properties: Speed, Amplitude, Frequency, and Period. Retrieved from:https://openstax.org/books/physics/pages/13-2-wave-properties-speed-amplitude-frequency-and-period [Retrieved on: June 11, 2022]
{}
Discussion Forum Problem on June 8, 2017 Re: Problem on June 8, 2017 Hey Maddy, sorry for the late response. The correct answer is indeed $74$. We corrected the problem and the display issues in the solution. Thank you for letting us know!
{}
# WebInspect Fails to Start WebInspect froze up on a user and they killed the process. Since then, when started, the WebInspect splash screen reports 99% loaded but thats as far as it gets. The process monitor shows that it's spawning a new browser session about every 8 seconds, consuming an increasing amount of system resources.  How can I get WebInspect "unstuck"? #### Tags: Parents Just guessing, but I believe you can rename/delete these two folders and then open WebInspect.  These should auto-rebuild, so I would tend to Rename them just in case I need to fall back. • C:\Program Files\HP\HP WebInspect\dat\ • C:\Program Files\HP\HP WebInspect\browser\
{}
# polyscale Scale roots of polynomial ## Syntax ```b = polyscale(a,alpha) ``` ## Description `b = polyscale(a,alpha)` scales the roots of a polynomial in the z-plane, where `a` is a vector containing the polynomial coefficients and `alpha` is the scaling factor. If `alpha` is a real value in the range `[0 1]`, then the roots of `a` are radially scaled toward the origin in the z-plane. Complex values for `alpha` allow arbitrary changes to the root locations. ## Examples collapse all Express the solutions to the equation ${x}^{7}=1$ as the roots of a polynomial. Plot the roots in the complex plane. ```pp = [1 0 0 0 0 0 0 -1]; zplane(pp,1)``` Scale the roots of `p` in and out of the unit circle. Plot the results. ```hold on for sc = [1:-0.2:0.2 1.2 1.4]; b = polyscale(pp,sc); plot(roots(b),'o') end axis([-1 1 -1 1]*1.5) hold off``` Load a speech signal sampled at ${F}_{s}=7418\phantom{\rule{0.2777777777777778em}{0ex}}Hz$. The file contains a recording of a female voice saying the word "MATLAB®." `load mtlb` Model a 100-sample section of the signal using a 12th-order autoregressive polynomial. Perform bandwidth expansion of the signal by scaling the roots of the autoregressive polynomial by 0.85. ```Ao = lpc(mtlb(1000:1100),12); Ax = polyscale(Ao,0.85);``` Plot the zeros, poles, and frequency responses of the models. ```subplot(2,2,1) zplane(1,Ao) title('Original') subplot(2,2,3) zplane(1,Ax) title('Flattened') subplot(1,2,2) [ho,w] = freqz(1,Ao); [hx,w] = freqz(1,Ax); plot(w/pi,abs([ho hx])) legend('Original','Flattened')``` ## Tips By reducing the radius of the roots in an autoregressive polynomial, the bandwidth of the spectral peaks in the frequency response is expanded (flattened). This operation is often referred to as bandwidth expansion.
{}
## Find the amount of subgroups of order \$3\$ and \$21\$ in non-cyclic abelian group of order \$63\$ Find the amount of subgroups of order $$3$$ and $$21$$ in non-cyclic abelian group of order $$63$$. In first case I found the amount of elements that have order $$3$$ – there are $$8$$ of them, in second case there are $$48$$ elements of order $$21$$. How do I connect these values with the amount of subgroups now?
{}
# Understanding notation in proof of probability integral transform I'm attempting to understand one line of the proof for the probability integral transform as found on wikipedia: Suppose that a random variable $$X$$ has a continuous distribution for which the cdf is $$F_X$$. Then the random variable $$Y = F_X(X)$$ has a standard uniform distribution. Proof: $$F_Y(y) = P(Y \leq y) = P(F_X(X) \leq y) = P(X \leq F^{-1}_X(y)) = F_X(F^{-1}_X(y)) = y$$ What I do not understand is the definition of the random variable $$Y$$, namely why is there a capital $$X$$ in parentheses, $$F_X(X)$$, instead of lower-case, $$F_X(x)$$. More importantly, what does this mean? I have looked at this post already, and my updated understanding is that $$F_X(X)$$ represents the distribution of the probabilities of $$X$$, not the variable itself. So, I believe that $$Y$$ is the distribution of probabilities of $$X$$. Is this correct? Or, if not, can someone explain what this difference in notation means? • Your questions are puzzling because they are identical to the ones asked at the post you reference and they are explicitly answered there in several ways. It is therefore unclear what you are seeking in terms of an answer. – whuber Jun 11, 2020 at 14:41 Because, take $$F_X(x)=G(x)$$ as a function and we apply this transformation over the random variable $$X$$, to obtain $$Y$$. So, in general, if the input is a RV, the output is a RV, i.e. $$Y=G(X)$$, not $$Y=G(x)$$. You could say $$y=G(x)$$, for a specific pair of $$(x,y)$$ by the way. Therefore, the notation confusion you have doesn't have anything to do with the meaning of $$F_X$$.
{}
## The abstract concept of Duality and some related facts ### Lundi 3 novembre 2008 14:00-15:00 - V. milman - Tel Aviv Résumé : We discuss in the talk an unexpected observation that very minimal basic properties essentially uniquely define some classical transforms which traditionally are defined in a concrete and quite involved form. We start with a characterization of a very basic concept in Convexity and Functional Analysis : Duality and the Legendre transform. We show that the Legendre transform is, up to linear terms, the only involution on the class of convexlower semi-continious functions in $\R^n$ which reverses the (partial) order of functions. This leads to a different understanding of the concept of duality, and which we then apply also to many other well known settings. It is also true that any involutive transform (on this class) which exchanges summation with inf-convolution, is, up to linear terms, the Legendre transform. In the same time, considering the class of non-negative convex functions (with 0 value at 0), changed the picture and brings an additional, new duality for this class, which was not considered before. We will study this new duality and discuss its properties. The classical Fourier transform may be also defined (essentially)uniquely by the condition of exchanging convolution with product together with the form of the square of the transform (the last fact is a joint work also with Semyon Alesker) Lieu : bât. 425 - 113-115 The abstract concept of Duality and some related facts  Version PDF septembre 2020 : Département de Mathématiques Bâtiment 307 Faculté des Sciences d'Orsay Université Paris-Saclay F-91405 Orsay Cedex Tél. : +33 (0) 1-69-15-79-56 Département Fermeture du département Laboratoire Formation
{}
# How can I add custom HTML code to the beginning and end of the body element with tex4ht? I need to convert a Latex document to several HTML files. I'm using tex4ht for this. What I need next is to add some features to the web pages. I managed to add the needed extra elements in the head section by writing a .cfg file and adding lines like the following one. \Configure{@HEAD}{\HCode{<meta charset="utf-8" />\Hnewline}} Now I need to add the HTML code right after <body> and before </body> to wrap the content generated by tex4ht in some divs that the extra functionality to the web pages. How can I do that? Where can I find good documentation about tex4ht usage? Thanks! You have two options for configuring <body> element, first one is Configure{BODY}{start}{end}, the second is \Configure{@BODY} and \Configure{@/BODY}. In your case it is better to use the second option, which is similar to \Configure{@HEAD}, because you can use it several times to add stuff after and before <body>.: \Preamble{xhtml} \Configure{@BODY}{\ifvmode\IgnorePar\fi\EndP\HCode{<article class="main">}} \Configure{@/BODY}{\ifvmode\IgnorePar\fi\EndP\HCode{</article>}} \begin{document} \EndPreamble this configuration produces: </head><body > <article class="main"> ... document body ... </article> </body></html> regarding documentation, besides documentation linked from tex4ht website, you can also browse literate sources of tex4ht, which are huge, most important are tex4ht-info, which contains some coments on particular configurations and tex4ht-html4 with configurations used in conversion to HTML • Thanks! This worked for me! I'll see the links you gave here. Thanks again! – Matteo Ipri Mar 3 '16 at 16:00
{}
# Math Help - The Maximal ideals of R[x] 1. ## The Maximal ideals of R[x] So I am in the process of trying to do a proof, and I need show that the maximal ideals in $\mathbb{R}[x]$ are the irreducible polynomials of the form What I am confused about is: why can't a polynomial with degree >2, generate a maximal ideal in $\mathbb{R}[x]$? 2. Originally Posted by CropDuster What I am confused about is: why can't a polynomial with degree >2, generate a maximal ideal in $\mathbb{R}[x]$? For example, if p ( x ) in IR [ x ] has degree 3 then, p ( x ) = ( x - a ) q ( x ) with q ( x ) in IR [x] and a in IR . Then, ( p( x ) ) is contained in ( x - a ) and ( x - a ) is different from ( p ( x ) ) and IR [ x ] . Edited: I forgot to say that p ( x ) had degree 3 . 3. Thanks. I think I see what you're saying, but what does IR[x] denote? And, p(x) is irreducible in IR[x] right? And if so: right? 4. Originally Posted by CropDuster Thanks. I think I see what you're saying, but what does IR[x] denote? Denotes And, p(x) is irreducible in IR[x] right? A: I proposed a polynomial of degree 3. It is easy to generalize. 5. Originally Posted by FernandoRevilla For example, if p ( x ) in IR [ x ] has degree 3 then, p ( x ) = ( x - a ) q ( x ) with q ( x ) in IR [x] and a in IR . Then, ( p( x ) ) is contained in ( x - a ) and ( x - a ) is different from ( p ( x ) ) and IR [ x ] . Edited: I forgot to say that p ( x ) had degree 3 . So what if p(x) is irreducible and has degree 2? 6. Originally Posted by CropDuster So what if p(x) is irreducible and has degree 2? Use that in a principal ideal domain every nonzero prime ideal is maximal.
{}
# Minimum base current for transistor amplifier A transistor, say $NPN$, can be used as an amplifier. It amplifies base current by a factor beta. Is there a minimum base current threshold below which it cannot amplify? Say, are there transistors which can amplify femtoampere? • The beta coefficient is not the exact gain. Jun 2 '17 at 16:00 • A femtoamp is only roughly 6000 electrons/second. For a DC current, electron drift speed is on the order of 0.0002 meters per second, so you're in a regime where transfer time across the p-n junctions matter. That suggests that "gain" becomes a not very useful concept. Jun 2 '17 at 16:05 ## 3 Answers To answer the question, we can consider Ebers-Moll's ecuations: $$I_E=I_{ES}(e^{V_{EB}/V_T}-1) - \alpha_RI_{CS}(e^{V_{CB}/V_T}-1)$$ $$I_C=\alpha_FI_{ES}(e^{V_{EB}/V_T}-1) - I_{CS}(e^{V_{CB}/V_T}-1)$$ $$I_B=I_E-I_C$$ where $I_{ES}$ and $I_{CS}$ are the reverse saturation currents, $V_T=\frac{KT}{q}$ ($K$ Bolzmann constant) and $\alpha_F$ and $\alpha_R$ the forward and the reverse constants. As we can see, unless both PN junctions are in reverse direction (it is said, $V_{EB}=V_{CB}=0)$, which depends on the circuit and the input voltage, the bipolar junction will amplifiy the current of the emitter. • it is not clear. will the base current of any positive amplitude however small amplified? Jun 2 '17 at 9:01 • I explain it in other way. the BJT has 4 operation states: cut-off, forward active, saturation and reverse active. The last one is the less commmon, so forget about it. When we connect a transistor and it is not in cut-off (amplification 0), it works in active (very high amplification) or in saturation, where it has less amplification but it is still considerable. Anyway, this depend essentially of the circuit, but the theory tell us BJT don't have a treshold voltage. Jun 2 '17 at 9:44 In a normal NPN transistor we are dealing with three states (seen comment section on Josemi post). The red area, indicates the saturation the BC junction is forward polarised, the the green area is called linear, the BC junction is inverse polarised and the transistor works as an amplifier and when $I_B = 0$ we land in cut-off. I prefere to work with the simplified version, so i take the threshold voltage of NP or PN junction as $0.7_v$. Let investigate all the three area a little bit more, starting with cut-off: $$V_{BE} < 0.7_v \Rightarrow I_B = I_C = I_E = 0$$ Linear area:$$\left\{ \begin{array}{ll} V_{BE} = 0.7_v & \\ V_{CE} > 0 \end{array} \right. \Rightarrow \left\{ \begin{array}{ll} I_B>0 & \\ I_C = \beta I_B \end{array} \right.$$ saturation: $$\left\{ \begin{array}{ll} V_{BE} = 0.7_v & \\ V_{CE} = 0 \end{array} \right. \Rightarrow \left\{ \begin{array}{ll} I_B>0 & \\ I_C < \beta I_B \end{array} \right.$$ As you can see there is no limitation on $I_B$. However, this just a theory, in reality, i don't think so they come in handy when the bias current is about some femtoamps, otherwise why, using CMOS then? The short answer is yes, there is a minimum, but it's device dependent and usually not specified. The GE Transistor Manual (no longer in print, but considered to be holy writ as recently as the 1970s)explains why ordinary leakage currents don't turn on SCRs. Referring to the 2 transistor SCR model (easily found online) it says that beta for transistors is dependent on collector current, and that SCR equivalent transistors have betas less than 1 for expected leakage currents. • There being a minimum doesn't mean a transistor can't be used to amplify low currents - it just has to be connected for a much larger bias current that the signal current is added to. Jul 6 '17 at 15:26
{}
# Lesson Title ## Timing Leave about 30 minutes at the start of each workshop and another 15 mins at the start of each sesson for technical difficulties like WiFi and installing things (even if you asked students to install in advance, longer if not). ## Pulling in data The easiest way to get the data used in this lesson during a bootcamp is to have attendees run the following: git remote add data https://github.com/resbaz/r-novice-gapminder-files git pull data master If Git is not being taught as part of the workshop the raw data can be downloaded from the folloing urls: gapminder-FiveYearData gapminder-FiveYearData-Wide attendees can use the File - Save As dialog in their browser to save the file. ## Overall Make sure to emphasise good practices: put code in scripts, and make sure they’re version controlled. Encourage students to create script files for challenges. If you’re working in a cloud environment, get them to upload the gapminder data after the second lesson. Make sure to emphasise that matrices are vectors underneath the hood and data frames are lists underneath the hood: this will explain a lot of the esoteric behaviour encountered in basic operations. Vector recycling and function stacks are probably best explained with diagrams on a whiteboard. Be sure to actually go through examples of an R help page: help files can be intimidating at first, but knowing how to read them is tremendously useful. Be sure to show the CRAN task views, look at one of the topics. There’s a lot of content: move quickly through the earlier lessons. Their extensiveness is mostly for purposes of learning by osmosis: so that their memory will trigger later when they encouter a problem or some esoteric behaviour. Key lessons to take time on: • Data subsetting - conceptually difficult for novices • Functions - learners especially struggle with this • Data structures - worth being thorough, but you can go through it quickly. Don’t worry about being correct or knowing the material back-to-front. Use mistakes as teaching moments: the most vital skill you can impart is how to debug and recover from unexpected errors.
{}
## Intransitive dice VII — aiming for further results While Polymath13 has (barring a mistake that we have not noticed) led to an interesting and clearly publishable result, there are some obvious follow-up questions that we would be wrong not to try to answer before finishing the project, especially as some of them seem to be either essentially solved or promisingly close to a solution. The ones I myself have focused on are the following. 1. Is it true that if two random elements $A$ and $B$ of $[n]^n$ are chosen, then $A$ beats $B$ with very high probability if it has a sum that is significantly larger? (Here “significantly larger” should mean larger by $f(n)$ for some function $f(n)=o(n^{3/2})$ — note that the standard deviation of the sum has order $n^{3/2}$, so the idea is that this condition should be satisfied one way or the other with probability $1-o(1)$). 2. Is it true that the stronger conjecture, which is equivalent (given what we now know) to the statement that for almost all pairs $(A,B)$ of random dice, the event that $A$ beats a random die $C$ has almost no correlation with the event that $B$ beats $C$, is false? 3. Can the proof of the result obtained so far be modified to show a similar result for the multisets model? The status of these three questions, as I see it, is that the first is basically solved — I shall try to justify this claim later in the post, for the second there is a promising approach that will I think lead to a solution — again I shall try to back up this assertion, and while the third feels as though it shouldn’t be impossibly difficult, we have so far made very little progress on it, apart from experimental evidence that suggests that all the results should be similar to those for the balanced sequences model. [Added after finishing the post: I may possibly have made significant progress on the third question as a result of writing this post, but I haven’t checked carefully.] ### The strength of a die depends strongly on the sum of its faces. Let $A=(a_1,\dots,a_n)$ and $B=(b_1,\dots,b_n)$ be elements of $[n]^n$ chosen uniformly and independently at random. I shall now show that the average of $|\{(i,j):a_i>b_j\}|-|\{(i,j):a_i is zero, and that the probability that this quantity differs from its average by substantially more than $n\log n$ is very small. Since typically the modulus of $\sum_ia_i-\sum_jb_j$ has order $n^{3/2}$, it follows that whether or not $A$ beats $B$ is almost always determined by which has the bigger sum. As in the proof of the main theorem, it is convenient to define the functions $f_A(j)=|\{i:a_i and $g_A(j)=f_A(j)-j+\frac 12$. Then $\sum_jf_A(b_j)=\sum_{i,j}\mathbf 1_{a_i, from which it follows that $B$ beats $A$ if and only if $\sum_jf_A(b_j)>n^2/2$. Note also that $\sum_jg_A(b_j)=\sum_jf_A(b_j)-\sum_jb_j+\frac n2$. If we choose $A$ purely at random from $[n]^n$, then the expectation of $f_A(j)$ is $j-1/2$, and Chernoff’s bounds imply that the probability that there exists $j$ with $|g_A(j)|=|f_A(j)-j+1/2|\geq C\sqrt{n\log n}$ is, for suitable $C$ at most $n^{-10}$. Let us now fix some $A$ for which there is no such $j$, but keep $B$ as a purely random element of $[n]^n$. Then $\sum_jg_A(b_j)$ is a sum of $n$ independent random variables, each with maximum at most $C\sqrt{n\log n}$. The expectation of this sum is $\sum_jg_A(j)=\sum_jf_A(j)-n^2/2$. But $\sum_jf_A(j)=\sum_{i,j}\mathbf 1_{a_i $=\sum_i(n-a_i)+\frac n2=n^2+\frac n2-\sum_ia_i$, so the expectation of $\sum_jg_A(b_j)$ is $n(n+1)/2-\sum_ia_i$. By standard probabilistic estimates for sums of independent random variables, with probability at least $1-n^{-10}$ the difference between $\sum_jg_A(b_j)$ and its expectation $\sum_jf_A(j)-n^2/2$ is at most $Cn\log n$. Writing this out, we have $|\sum_jf_A(b_j)-\sum_jb_j+\frac n2-n(n+1)/2+\sum_ia_i|\leq Cn\log n$, which works out as $|\sum_jf_A(b_j)-\frac {n^2}2-\sum_jb_j+\sum_ia_i|\leq Cn\log n$. Therefore, if $\sum_ia_i>\sum_jb_j+Cn\log n$, it follows that with high probability $\sum_jf_A(b_j), which implies that $A$ beats $B$, and if $\sum_jb_j>\sum_ia_i+Cn\log n$, then with high probability $B$ beats $A$. But one or other of these two cases almost always happens, since the standard deviations of $\sum_ia_i$ and $\sum_jb_j$ are of order $n^{3/2}$. So almost always the die that wins is the one with the bigger sum, as claimed. And since “has a bigger sum than” is a transitive relation, we get transitivity almost all the time. ### Why the strong conjecture looks false As I mentioned, the experimental evidence seems to suggest that the strong conjecture is false. But there is also the outline of an argument that points in the same direction. I’m going to be very sketchy about it, and I don’t expect all the details to be straightforward. (In particular, it looks to me as though the argument will be harder than the argument in the previous section.) The basic idea comes from a comment of Thomas Budzinski. It is to base a proof on the following structure. 1. With probability bounded away from zero, two random dice $A$ and $B$ are “close”. 2. If $A$ and $B$ are two fixed dice that are close to each other and $C$ is random, then the events “$A$ beats $C$” and “$B$ beats $C$” are positively correlated. Here is how I would imagine going about defining “close”. First of all, note that the function $g_A$ is somewhat like a random walk that is constrained to start and end at zero. There are results that show that random walks have a positive probability of never deviating very far from the origin — at most half a standard deviation, say — so something like the following idea for proving the first step (remaining agnostic for the time being about the precise definition of “close”). We choose some fixed positive integer $k$ and let $x_1<\dots be integers evenly spread through the interval $\{1,2,\dots,n\}$. Then we argue — and this should be very straightforward — that with probability bounded away from zero, the values of $f_A(x_i)$ and $f_B(x_i)$ are close to each other, where here I mean that the difference is at most some small (but fixed) fraction of a standard deviation. If that holds, it should also be the case, since the intervals between $x_{i-1}$ and $x_i$ are short, that $f_A$ and $f_B$ are uniformly close with positive probability. I’m not quite sure whether proving the second part would require the local central limit theorem in the paper or whether it would be an easier argument that could just use the fact that since $f_A$ and $f_B$ are close, the sums $\sum_jf_A(c_j)$ and $\sum_jf_B(c_j)$ are almost certainly close too. Thomas Budzinski sketches an argument of the first kind, and my guess is that that is indeed needed. But either way, I think it ought to be possible to prove something like this. ### What about the multisets model? We haven’t thought about this too hard, but there is a very general approach that looks to me promising. However, it depends on something happening that should be either quite easy to establish or not true, and at the moment I haven’t worked out which, and as far as I know neither has anyone else. The difficulty is that while we still know in the multisets model that $A$ beats $B$ if and only if $\sum_jf_A(b_j) (since this depends just on the dice and not on the model that is used to generate them randomly), it is less easy to get traction on the sum because it isn’t obvious how to express it as a sum of independent random variables. Of course, we had that difficulty with the balanced-sequences model too, but there we got round the problem by considering purely random sequences $B$ and conditioning on their sum, having established that certain events held with sufficiently high probability for the conditioning not to stop them holding with high probability. But with the multisets model, there isn’t an obvious way to obtain the distribution over random dice $B$ by choosing $b_1,\dots,b_n$ independently (according to some distribution) and conditioning on some suitable event. (A quick thought here is that it would be enough if we could approximate the distribution of $B$ in such a way, provided the approximation was good enough. The obvious distribution to take on each $b_i$ is the marginal distribution of that $b_i$ in the multisets model, and the obvious conditioning would then be on the sum, but it is far from clear to me whether that works.) A somewhat different approach that I have not got far with myself is to use the standard one-to-one correspondence between increasing sequences of length $n$ taken from $[n]$ and subsets of $[2n-1]$ of size $n$. (Given such a sequence $(a_1,\dots,a_n)$ one takes the subset $\{a_1,a_2+1,\dots,a_n+n-1\}$, and given a subset $S=\{s_1,\dots,s_n\}\subset[2n-1]$, where the $s_i$ are written in increasing order, one takes the multiset of all values $s_i-i+1$, with multiplicity.) Somehow a subset of $[2n-1]$ of size $n$ feels closer to a bunch of independent random variables. For example, we could model it by choosing each element with probability $n/(2n-1)$ and conditioning on the number of elements being exactly $n$, which will happen with non-tiny probability. Actually, now that I’m writing this, I’m coming to think that I may have accidentally got closer to a solution. The reason is that earlier I was using a holes-and-pegs approach to defining the bijection between multisets and subsets, whereas with this approach, which I had wrongly assumed was essentially the same, there is a nice correspondence between the elements of the multiset and the elements of the set. So I suddenly feel more optimistic that the approach for balanced sequences can be adapted to the multisets model. I’ll end this post on that optimistic note: no doubt it won’t be long before I run up against some harsh reality. ### 35 Responses to “Intransitive dice VII — aiming for further results” 1. Bruce Smith Says: Related to question 3: is it obvious whether or not there exists *any* predicate of dice which is negligible (i.e. $o(n)$) in the subsets model, but not negligible in the multiset model? I haven’t been following this project closely, but my impression is that your existing results can be characterized as “all dice behave ‘reasonably’ except for a negligible fraction, and among the ‘reasonable’ ones, our theorems hold, and from this it follows they hold in general”. So if we take as that predicate that a die is ‘unreasonable’, then if switching to the multiset model (and thus changing the distribution over dice) makes any of the analogous theorem statements false (and if my general understanding is correct), that predicate has to be one which is negligible in the subsets model but not in the multisets model. (Let’s call that a “contrasting predicate”.) (I’m not conjecturing these “contrasting predicates” don’t exist — in fact, I’m guessing that someone here might be immediately able to give an example of one — maybe it’s enough for the predicate to require that the distribution of element frequencies in the multiset has a certain property. But I’m wondering if thinking about the requirements on such a predicate might be illuminating.) • Bruce Smith Says: (In that comment, I should have defined “negligible” as $o(N)$ rather than $o(n)$, if there are $N$ dice of size $n$ in the model the predicate is about.) • Bruce Smith Says: (A second correction: when I said “subsets model” I should have said “balanced sequences model”.) • gowers Says: This does seem like a potentially good thing to think about. As you suggest, it probably isn’t hard to come up with distinguishing properties, but it may well be that in some precise sense they are all “irrelevant” to anything one might be interested in when discussing matters such as the probability that one die beats another. (I don’t know how to formulate such a conjecture, but it feels as though something like that might exist.) If one wants to come up with at least some distinguishing property, it seems good to focus on things like the number of repeated elements, or more generally how the numbers of the different elements are distributed. If we define a map from sequences of length $n$ to multisets by writing the sequences in increasing order, then the number of preimages of a multiset depends very strongly on how many repeated elements it has, with extremes ranging from 1 (for the multiset $(1,1,\dots,1)$) to $n!$ (for the multiset $(1,2,\dots,n)$). Since multisets with many repeats give rise to far fewer sequences, one would expect that repeats are favoured in the multisets model compared with the sequences model. I would guess that from this it is possible to come up with some statistic to do with the number of repeats that holds with probability almost 1 in the multisets model and almost zero in the sequences model. 2. P. Peng Says: Another possible route to the multi-set result. Because the random distribution weights between sequence and multiset change so drastically (as you mention it can be as extreme as n! : 1), it feels like either something very special is being exploited for the conjectures to still hold in both models, or this should just happen fairly often with a change of weights. But we’ve already seen that the intransitivity is fairly fragile when changing the dice model. I think this “something special” is that with the sequences model, not only is the score distribution for a random die very similar to a gaussian, but I conjecture this is true with high probability even when looking at the score distribution for the subset of dice constrained to have some particular multiplicity of values (ie. 12 numbers are unique, 3 are repeated twice, 5 are repeated three times, etc.). Given the already completed sequence proof, the stricter conjecture is equivalent to saying the U variable is not correlated with the multiplicity of values. Looking at how U is defined, that sounds plausible to me, and may be provable. If this stricter conjecture is true, then any change of weights for the random distribution will be fine if each “multiplicity class” are changed by the same factor. And this is the case for the shift from sequences -> multiset. • gowers Says: That’s a very interesting idea. It seems plausible that as long as a sequence takes enough different values, then conditioning on the distribution of the numbers of times the values are taken shouldn’t affect things too much. It’s not quite so obvious how to prove anything: I don’t see a simple way of using independent random variables and conditioning on an event of not too low probability. But it isn’t obviously impossible, and it would be an interesting generalization if we could get it. 3. Bruce Smith Says: > I don’t see a simple way of using independent random variables and conditioning on an event of not too low probability. (a) I guess the following won’t work, but I’d like to confirm that understanding (and that my reasoning makes sense about the other parts): If we fix a “multiplicity class”, then a balanced sequence is just a sequence that (1) obeys certain equalities between elements (to make certain subsets of them equal), (2) obeys inequalities between the elements that are supposed to be distinct, (3) has the right sum (so it’s balanced). If the value of each subset of sequence elements which are required equal by (1) is given by an independent random variable, then is the probability of ((2) and (3)) too low? (I guess (2) and (3) are nearly independent.) For (3) I’d guess the probability is similar to the balanced sequence model (the condition still says that some linear sum of the variables has its expected value, I think); for (2) we’re saying that $k$ choices of random elements of $[n]$ fail to have overlaps, where $k$ depends on the multiplicity class but could be nearly as large as $n$. I guess the probability of (2) is then roughly exponentially low in $n$, which is why this doesn’t work. Is that right? (b) thinking out loud: But what if we just omit condition (2)? Then we have some kind of generalization of a “multiplicity class” (except we want to think of it as a random distribution over dice, not just as a class of dice). It’s no longer true that all the dice in this distribution have the same preimage-size in the map from the balanced sequence model to the multiset model… but (in a typical die chosen from this distribution) most of the $k$ random variables have no overlaps with other ones, so only a few of the $k$ subsets of forced-equal sequence elements merge together to increase that preimage size. Can we conclude anything useful from this? (We would want to first choose and fix one of these distributions, then show that using it to choose dice preserves the desired theorems, then show that choosing the original distribution properly (i.e. according to the right probability for each one) ends up approximating choosing a die using our desired distribution. In other words, we’d want some sum of these sort-of-like-multiplicity-class distributions to approximate our desired overall distribution.) • gowers Says: Yes, the problem you identify is indeed the problem: I think that typically the multiplicities for a random multiset will have something close to a Poisson distribution with mean 1 (they are given by the lengths of runs of consecutive elements of a random subset of $\{1,2,\dots,2n-1\}$ of length $n$). So they will almost all be of constant size, and therefore the number of distinct values taken will be proportional to $n$, which implies that the probability that they are distinct is exponentially small. The difficulty with just not worrying about such coincidences as occur is that the weights are very sensitive to the numbers of coincidences. For example, if two values of multiplicity 3 are allowed to merge into one value of multiplicity 6, then the weight gets divided by $\binom 63=15$. And it seems that to take account of this brings us back to the problem we started with (since if we knew how to deal with these mergers then we could simply take the multiplicity class to be all singletons and deal directly with the multiset model). That’s just how it seems to me, but as with my previous remarks, anything that sounds pessimistic can potentially be knocked down by some observation that I have not made, or some additional factor that I have not taken into account, and I don’t rule that out here. • Bruce Smith Says: > And it seems that to take account of this brings us back to the problem we started with … That’s a good point, and I don’t see a way around it either. But now I am thinking that “being excluded from the analysis in your main theorem” is *not* uncorrelated with “having lots of repeated faces” (and thus being relatively overrepresented in the multiset model), but is *negatively* correlated with it. If that’s true, then at least in some sense the main theorem should be easier in the multiset model than in the balanced sequence model (since the excluded cases are less common in its distribution). It’s taking me awhile to write up my reasons for that thought (and even once written they will be vague), so I thought I’d mention that general idea first. • gowers Says: One reason that lots of repeated faces make things a bit harder is that it is slightly more complicated to say that the sum has a good chance of equalling a specified value, and probably some of the deviation estimates become worse. But I don’t think those effects would kick in in a serious way until the number of repeats is very large. 4. Thomas Budzinski Says: Here are a few more remarks about the sketch of proof for the (false) strong conjecture. For an unconditioned dice, $f_A$ has the same distribution as a random walk with Poisson(1) steps, conditioned on $f_A(n)=n$. Hence, $g_A$ is a random walk with Poisson(1)-1 steps, conditioned on $g_A(n)=0$ and $\sum_j g_A(j)=0$ (I don’t know if it has already been noticed). Hence, showing that with macroscopic probability, $g_A(x_i)$ and $g_B(x_i)$ are close for every $i$ should not be very hard. For a nonconditioned dice, the variables $g_A(x_i)$ and $\sum_j g_A(j)$ should be approximately Gaussian with explicit covariances, so for a conditioned dice the $f_A(x_i)$ are still jointly Gaussian. Proving it properly would require a local limit theorem to handle the double conditioning. But this time, the step distribution is known and very simple, so the proof should be easier than the previous one (or, even better, maybe a general result could be applied). On the other hand, deducing from here that $f_A$ and $f_B$ are uniformly close does not seem obvious. We would need to show that $f_A$ cannot vary too much in a short interval. A possible way would be to show that $\left( f_A(j) \right)_{0 \leq j \leq n/2}$ is in some sense absolutely continuous with respect to a nonconditioned random walk, and then to use known results about the max of a random walk on a short interval. The absolute continuity also requires a local limit theorem, but this should not be too hard for the same reasons as above. 5. Gil Kalai Says: For the sequence model you can base the criteria if dice A beats dice B on other rules (which can be described e.g. in terms of a function from (-1,1,0)^n to {-1,1,0}). For example dice A beats dice B if the largest “run” of wins of A vs B is larger than the largest runs of wins of B vs A. In analogy with voting rules I would expect that many rules will lead to asymptotic randomness (or pseudoandomness) even when you drop the condition on the sum of entries. (All my previous guesses based on this analogy failed but at least I figured for myself what was the difference.) 6. P. Peng Says: Can you delete the previous post? It got mangled because I used less than and greater than characters. A partial sketch of a different approach. Consider the starting point: instead of dice represented as a vector of values, represent it as a multiplicity vector m_i = number of faces with value i. A scoring function f(A,B), which gives the number of rolls where A beats B – the number of rolls where B beats A, can be represented as a matrix equation A.F.B where F is an antisymmetric matrix. The constraints are now that m_i ≥ 0, the sum m_i = n, and sum i m_i = n(n+1)/2. The standard die in this representation (1,1,…,1) ties all dice due to theses constraints. Now in the realm of linear algebra, we can choose a set of changes that when starting from a valid die preserves the sum constraints, and that this set of changes spans the valid dice space. I will call this set the choice of dice “steps”. With a given choice of steps, this also provides a distance measurement: the minimum number of steps from the standard die. With this setup, we can handle both the sequence and multiset dice model if we allow the notion of a “random die” to involve a possibly non-uniform weight on this representation. Obviously not all weights will lead to intransitive dice. I believe an appropriate restriction would be to constrain possible weights to one such that it is symmetric to permutation of the multiplicity vector in the following way: – any multiplicity vectors which are the same up to permutation, and meet the sum constraints, will have the same weight We can choose steps such that the following are true: – the set of dice one step away from the standard die have the properties: – 0 ≤ m_i ≤ 2 – each die’s multiplicity vector is the same up to permutation – each die beats exactly as many dice as it loses to – all proper dice can be reached in O(n) steps With such a choice of dice steps, and with the above constraints on the weights under consideration, the set of dice a distance 1 away from the standard die have the property: – for any die, its probability of beating a random die is exactly equal to its probability of losing to a random die This is a good starting point, and we can build up to any die by adding these steps together. Furthermore the scoring is linear, so knowing how the steps score against each other is sufficient. With multiple steps, due to correlations it is possible for the set of dice to no longer have every die beat exactly half the others. Since the starting point exactly has this symmetry, if the correlation is small enough combined with the weight constraint restricting the amount of asymmetry that can build up, since all dice can be reached in O(n) steps, maybe the asymmetry can’t build up “fast enough” to ruin the property we want for intransitive dice: – If A is a random die then with probability 1 – o(1), the probability A beats a random die is equal to the probability it loses to a random die + o(1). Maybe with a stronger constraint on the weights, one could also show that the probability A beats a random die is 1/2 + o(1), so that it also provides that the ties go to zero. But with such wild swings in the weights between sequence and multiset dice, I’m not sure what would likely be the appropriate strengthening which would also give the ties conjecture. I believe something like this approach may have been discussed before, but part of the issue with this is the m_i ≥ 0 constraint. It makes it difficult as not all step combinations are valid. However, if the weight constraint above is sufficient, we can temporarily consider them all valid, and it just happens that for sequences and multiset models, the weight for these dice is 0. This approach therefore allows smoothing out the difficulties with considering all the different representations on the same footing, if indeed that weight constraint is sufficient. Before I think about this approach any further, is there some simple argument or counter-example which shows steps with these properties + weight constraint is not sufficient? Currently it appears to fit with our understanding: – sequence dice model is intransitive – multiset dice model is intransitive – the improper dice model is mostly transitive (even though it is “balanced” in that every die has the same expectation value, it doesn’t have the nice choice of steps) – the un-balanced sequence model (sequence model without the sum constraint) is mostly transitive (again, it doesn’t have the nice choice of steps) – removing the weights constraint we can just choose weights to force transitivity I’m hoping something along these lines would work as it unites a lot of the results. 7. Thomas Lumley Says: Another question that’s relevant to statistical theory: it appears that the ordering given by $P(X>Y)>1/2$ agrees with the ordering based on the sum and hence the mean. Is that true for other reasonable summaries (eg the median, or more generally for some class of linear combinations of order statistics)? The reason this is interesting is that ordering of distributions based on $P(X>Y)>1/2$ for *samples* is the (widely-used) Mann-Whitney/Wilcoxon test. It’s already an advance to know this is typically transitive when the means are different, even under just one generating model for distributions. It would be even more helpful to know if this is just a fact about reasonable dice behaving reasonably or something special about the mean. • gowers Says: I think the mean may be fairly special here. The argument above shows that for two random dice $A$ and $B$ with means that differ by a typical amount, the means almost certainly determine which one wins. But then another order statistic will not determine which one wins unless it typically agrees with the mean. My guess (though I haven’t thought about it properly, so I may be wrong) is that there is a probability bounded away from zero that the order of the medians of two random sequences is different from the order of the means. If that guess is correct, then the medians will not predict which die wins. • P. Peng Says: For dice where the mean is constrained but we allow values greater than n, this also appears to become transitive. In this case the mean is not a predictor, so some other property might give a summary. Who knows, maybe the median becomes important. I’m not sure anyone has looked into that yet. • gowers Says: That’s a great question and something I’d love to see the answer to. Just to clarify the question, when you say “this also appears to become transitive” do you mean that it appears to become transitive with probability $1-o(1)$ or just that there is a positive correlation between the events “A beats B and B beats C” and “A beats C”? If it’s the first, it should be easier to prove, and something I’d either like to have a go at or leave as a tempting open problem in the paper. I’m not sure how to go about analysing the model itself, since it doesn’t seem to be obtainable by some simple conditioning on a sum of independent random variables. On the other hand, I’m pretty sure it (and even more so its continuous counterpart) has been studied a lot, so maybe with a look at the literature we would understand how to prove things about it. Maybe someone reading this can even suggest where to look. (Just to be clear, the distribution I’m interested in is the uniform distribution over the face of the $n$-dimensional simplex consisting of all vectors in $\mathbb R^n$ with non-negative coordinates that sum to 1.) I very much doubt that the median plays an important role here, but if transitivity holds with probability $1-o(1)$ it would be a very nice challenge to try to find a simple statistic that predicts which die will win. • gowers Says: Ah, I’ve just looked at the original paper and it does look likely from the evidence presented there that the probability of intransitivity tends to zero for this model (though that was for multisets — it might be interesting to see if the same holds for sequences). • P. Peng Says: I looked into the improper dice (mean still constrained to (n+1)/2, but values are now only constrained to be more than 0). If you take any proper die and choose an entry greater than 1, and decrease it by one, then compared to the standard die (looking at the possible roll combinations) it will win one less and lose one more. And for any die, if you choose an entry = n, and increase it by one, it will win only one more compared to the standard die. Increasing any value beyond n+1 does not give any added benefit when comparing to the standard die. Therefore, the standard die will beat any improper die with a value greater than n. It beats any “truly” improper die if you will. From some numerical tests on small sided dice, if we rank all the dice according to how many dice it beats – how many it loses to, the standard die is at the top or at least in the top few dice. At the bottom is the die that is all ones except a single face, as that loses to everything. Basically, since the “beats” relationship doesn’t care how much a die wins by in a particular roll, deviating from proper dice is only “wasting” pips on a lopsided distribution of value. For this reason, the median does roughly correlate with the order of the improper dice. As does the standard deviation. I would need to look at larger dice to understand the trend better, but I’d guess currently that these will only end up being weak correlations. There is likely a better predictor. In the proper die scenario the standard die tied everything and was in some sense at the ‘center’ of the dice. In the improper dice, the standard die is now near the top of the ranking, and the die that loses to all other dice is in a reasonable sense the ‘furthest’ from the standard die. So likely there is a measure of ‘distance’ from the standard die that strongly correlates with the ranking (and so strongly predicts if two die beat each other). I think the median would only weakly capture this at best as n gets larger. If we look at the sequence model of improper dice, what is the probability that at least one value is greater than n? Is it possible that the standard die beats 1 – o(n) fraction of the improper dice? • gowers Says: I’ve just realized that the sequence model of improper dice isn’t as hard as I thought. If we line up $N=n(n+1)/2$ points with $N-1$ gaps between them, then there’s a one-to-one correspondence between sequences of $n$ numbers that add up to $N$, and ways of choosing $n-1$ gaps (that mark the point where one number finishes and the next number starts). So the total number of improper dice is $\binom {N-1}{n-1}$, which is reasonably close to $N^n/n!$. When the numbers are constrained to be at most $n$, then the number of sequences is at most $n^n$, but because the sum of a random sequence has standard deviation about $n^{3/2}$, it’s in fact more like $n^{n-3/2}$. A crude estimate of $\binom {N-1}{n-1}$ is $n^{2n}/2^n n!\approx n^n(e/2)^n$. Since $e>2$, this is exponentially bigger than the number of sequences in the more constrained model. So I think the answer to your last question is yes. I think we can also say something about the typical shape of an improper die. Suppose that instead of selecting exactly $n-1$ gaps, we select each gap independently with probability $(n-1)/(N-1)$. The distribution should be similar. But with this model, the expected gap length has a geometrical distribution with mean approximately $n/2$ (because $N/n$ is about $n/2$). So it looks to me as though at least crudely speaking an improper die is what you get when you replace the uniform distribution on $[n]$ by a geometric distribution with the same mean. • gowers Says: I have a guess about a statistic that I think will predict the winner in the sequence model for nonstandard dice. (That is, a random die is a random sequence of positive integers that add up to $n(n+1)/2$.) Let $m=(n+1)/2$, let $p=m^{-1}$, and for each positive integer $r$, let $p(r)$ be the probability of choosing $r$ with the geometric distribution with parameter $p$: that is, $p(r)=p(1-p)^{r-1}$. (This is sometimes called the shifted geometric distribution.) In the usual sequence model, the sum of a sequence $(a_1,\dots,a_n)$ can be equivalently defined as the number of pairs $(i,j)$ such that $i\leq a_j$, which is closely related to how well the die does when it is up against the standard die. And this sum is the right statistic to choose. Note that a random face of the standard die is uniformly distributed in $[n]$. After the heuristic idea in my previous comment, it seems a rather plausible guess that the right statistic to choose for nonstandard dice is how well a die does against not the uniform distribution but the geometric distribution. So that statistic I propose is $\sum_{i,j}p(i)\mathbf 1_{[i\leq a_j]}$. Another way of thinking of this is that the sum of a sequence $a_j$ is (up to a factor $n$) the sum of the values of the cumulative distribution function at the numbers $a_j$, where the distribution is uniform on $[n]$. Now I want to take the sum of the values of the cumulative distribution function of the geometric distribution. Since generating a random improper die is easy, it should be easy to test this hypothesis experimentally. If it checks out, then I’ll sit down and try to prove it. • gowers Says: Of course, the natural follow-up question if the conjecture in my previous comment is correct is whether if we condition on the improper dice taking the expected value for that statistic we get intransitivity with probability 1/4+o(1) again. If that turned out to be the case, then it would probably be a special case of a much more general principle, which would say that the following picture holds for a wide class of models. 1. For each positive integer $r$, let $p(r)$ be the probability that a given face of a random die from the model takes the value $r$. Let $P$ be the cumulative distribution: that is, $P(m)=\sum_{r\leq m}p(r)$, which is the probability that the face takes value at most $m$. 2. Given a die $A=(a_1,\dots,a_n)$, define $P(A)$ to be $\sum_iP(a_i)$. Then, with probability $1-o(1)$, $A$ beats $B$ if and only if $P(A)>P(B)$. 3. If we fix a value $t$ and restrict attention to dice $A$ for which $P(A)=t$ (subject to some condition that ensures that the proportion of dice that satisfy this condition is not too small — for some models we might have to replace the condition by $P(A)\approx t$), then the probability that a random triple of dice is transitive is 1/4+o(1). If we could prove something like this, it would be a significant step forward in our understanding of the intransitivity phenomenon. Having said that, there is also a suggestion in the paper that for at least one model we get intermediate behaviour, where knowing that $A$ beats $B$ and $B$ beats $C$ makes it more likely that $A$ beats $C$, but with conditional probability bounded away from 1. The model in question is where you choose $n$ values independently and uniformly from $[0,1]$ and then rescale so that the average becomes $1/2$. For a full understanding, it would be good to understand this too. • gowers Says: Following on from the last paragraph of the previous comment, I now think it would be very nice to get a heuristic understanding (at least) of that rescaled-uniform model. A natural question is the following. Let $A$ be a random die chosen according to that model. Let $\mu$ be the average value of $a_i$ before the rescaling. Does $\mu$ correlate with the proportion of dice beaten by $A$? We might expect the answer to be yes for the following reason: if $\mu$ is less than 1/2, then after rescaling, the values below 1/2 are slightly “squashed”, whereas the values above 1/2 are slightly “stretched”. But as P. Peng suggests above, a face does not get extra credit for beating another face by a large margin, so in some sense large values are a “waste of resources”. So one might expect (but this argument is so vague as not to be very reliable) at least a weak positive correlation between $\mu$ and the strength of the die. This is something else that would be interesting to test experimentally. The dream for this model would still be to find a simple statistic that predicts which die wins. Such a statistic couldn’t take values in a totally ordered set (so some simple one-dimensional parameter wouldn’t do, for example), because that would imply transitivity with probability 1-o(1), which seems not to apply. But one could still hope for a map $\phi$ that takes each die to a point in some tournament with a very simple structure, in such a way that the direction of the edge between $\phi(A)$ and $\phi(B)$ predicts which of $A$ and $B$ wins. And then the problem would be reduced to understanding the tournament. Come to think of it, that dream is one that could also be entertained for the balanced sequence model. We know that transitivity occurs with the frequency one gets in a random tournament, but we suspect that the tournament is not quasirandom. These two statements are consistent, because all we need for the transitivity statement is that almost all vertices have roughly the same out-degree as in-degree. So now we can ask what the structure of the tournament is like? Perhaps once you condition on the sum of the faces, there is some other statistic — again, I would hope for a tournament with a nice simple structure — that predicts with high accuracy which of two dice will win. I don’t yet have a good definition of “nice simple structure”, but an example of the kind of thing I mean is a circle where there is an arrow from $x$ to $y$ if $y$ is less than half way round in a clockwise direction from $x$ and from $y$ to $x$ if $y$ is more than half way round. (If $y$ is exactly half way round, then the direction of the arrow is chosen arbitrarily.) It is unlikely that we can associate with each die in the balanced-sequence model a point in the circle in such a way that this particular tournament predicts which die wins, but perhaps some higher-dimensional (but still low-dimensional) variant works. If we could do something like this, then we would have a wonderfully precise understanding of the Mann-Whitney/Wilcoxon test for this model. 8. Timothy Gowers Says: I’ve thought a bit about the “dream” in this comment from the previous thread, and while I now feel somewhat less optimistic about it, I now have a new (to me anyway) way of thinking about the “beats” relation that I think has the potential to be helpful, and to capture the idea of “wasting resources” by winning too well for no extra reward. An initial thought is that if A and B are typical dice, then $a_i$ and $b_i$ grow approximately like $i$, so unless $i$ and $j$ are close (which usually means as a fraction of $\sqrt n$), if $i, then $a_i$ and $b_i$ are both less than $a_j$ and $b_j$. This means that usually if $a_i then $b_i, in which case the pairs $(i,j)$ and $(j,i)$ cancel out. So it makes some sense to focus on the “exceptional” pairs for which this cancellation does not happen. Suppose, then, that $i and let us think about what needs to happen if we are to obtain both the inequalities $a_i and $a_j. A simple remark is that $a_i, since we know that $a_i\leq a_j$ (as I am writing the sequence elements in non-decreasing order). It therefore suffices to have the inequality $a_j. If we now fix $i$, we see that the number of $j$ that satisfy this condition is “the time it takes $a_j$ to catch up with $b_i$. We can model the growth of $a_i$ and $b_j$ continuously in our minds, and we now see that if the gradient of $A$ is small after $i$, then we get a big contribution to the number of pairs $(i,j)$ with $a_i, which is very helpful to $B$. Conversely, if the gradient is large, we get only a small contribution. This thought can be used to construct pairs of dice with the same sum where one beats the other quite easily. Take for example the dice $A=(1,5,5,5,5,9,9,9,9,13,13,13)$ and $B=(4,4,4,4,8,8,8,8,12,12,12,12)$. Most of the time, the graph of A sits just above the graph of B, but just occasionally B makes a big jump just before A does. This means that the graph of B has a tendency to be flat just after a point where A is bigger, whereas the graph of A has a tendency to be steep just after a (rare) point where B is bigger. So we expect A to win easily, and indeed it does: there are 144 pairs, and the numbers of $b_j$ beaten by the $a_i$ go $(0,4,4,4,4,8,8,8,8,12,12,12)$, which adds up to 84, which is significantly bigger than 72. These considerations suggest that there could be a significant correlation between which die wins and the number of $i$ such that $a_i>b_i$, though I would be surprised if there was agreement with probability $1-o(1)$. 9. nicodean Says: i’m a curious guest here, watching for new entries like every day. i am curious, after ~50days of silence: is there some progress or are you planning to write an article? (i just read the https://gowers.wordpress.com/2015/09/20/edp28-problem-solved-by-terence-tao/ blog entry again, where it was written that finalizing projects officially seem to be a wise idea) • gowers Says: Good question. I’m definitely intending to turn the write-up I posted on this blog into an article, but I have been too busy to do it recently, especially as there are a couple of questions I would very much like us to answer before we do so. But maybe it would be better to post the results we have so far as an arXiv preprint and then add to that preprint if we obtain the further results I’d like to obtain. (I’m referring particularly to the result, which I think is within reach, that the more general conjecture that the “beats” tournament is quasirandom is false.) Does anyone else who participated have a view about what we should do? • P. Peng Says: A lot of interesting pieces came up in the last discussion, and it felt like things where starting to fit into more general pictures. On top of that, you asked some interesting questions. Unfortunately I got busy with other things, and now would need to really sit down and immerse myself to build up some intuition again. I would like to return to this. Your last question is even something that could just be played with numerically to explore with it. Finding the “parameter” that gives a total order for the improper dice feels within reach. Especially since this parameter must collapse to the same value (or nearly same value) for the proper dice. There is hope that this may even clarify what is going on with the loss of intransitivity with the “rescaled” dice as well. Your “dream” idea of figuring out a rough simple structure for the intransitive dice (like a circular parameter, or multiple parameters for some higher dimension closed surface) is very alluring. If there is no total order, and it isn’t a random tournament, it just begs to at least try looking for that structure. To pull out the general structure of the tournament I thought it might help to somehow separate out the piece that allows deviations from perfect balance of beating/losing to dice, thus leaving behind a cleaner/simpler structure. I started playing with it a bit, and have the following in some of my notes: For the multiset or sequence dice, because of the presence of the involution (let’s denote it n(A) for die A), we can separate any die into a “symmetric” and “asymmetric” piece: A = (n(A)+A)/2 + (n(A)-A)/2 Counting over all dice, the symmetric part will beat exactly as many dice as it loses to. The asymmetric part will tie all asymmetric parts. So the only thing that can lead to a die deviating from beating exactly as many dice as it loses to, is its asymmetric part compared to the distribution of the symmetric parts of all dice. The symmetric part has a permutation freedom which satisfies the constraints that the asym part does not. I was hoping that following this could lead to a better view of the general landscape that gives us our tournament. I thought maybe your dream scenario would fit in where we could roughly separate out a “many permutations” structure, and also a kind of “radial” parameter that would ruin the intransitivity if it weren’t for all dice being so “close” to the center on average. So the “radial” parameter would come from the asym part of the die. And ideally this “closeness” concept would coincide nicely with why the strong conjecture fails, and the “radial” parameter would roughly give the positive correlation. This felt like an interesting possibility for the tournament landscape, but I couldn’t hit upon the right questions to ask or definitions to use to test and explore further. I regret to say, I never did any tests of your last idea regarding the number of i where a_i > b_i. Maybe it would be worthwhile to have another post summarizing the remaining questions and how they may fit together with our current understanding. Then gauge how much interest remains in this project. If interest doesn’t return, it would be sad to see the pieces left incomplete, but I guess the open questions could be left as such in the paper to inspire further work. 10. Alexander Poddiakov Says: M. Oskar van Deventer has invented “a three player game, a set of dice where two of your friends may pick a die each, yet you can always pick a die that has a better chance of beating both opponents” http://grime.s3-website-eu-west-1.amazonaws.com/. Also such sets have been designed that “for any three players, there is a fourth that beats all of them”. Yet “the optimal graph is unknown for the 5 player game and above” http://www.mathpuzzle.com/MAA/39-Tournament%20Dice/mathgames_07_11_05.html Interestingly–can increasing complexity of dice construction for N player games be related to The P versus NP problem https://en.wikipedia.org/wiki/P_versus_NP_problem? Can answering this question be aimed for further results? 11. nicodean Says: I am wondering, are there any new cool updated on this project? Or are you planning to finalize the paper and close the project as a success? 12. K Says: I am very curious – is there some progress in the writing of the draft? Is there some todo list? I would to see that this article sees the light of the world 🙂 • gowers Says: I’ve been so busy on other projects that I haven’t had time to touch this one. But I’m coming to think that we need to let go and put roughly the existing draft (tidied up a little) on the arXiv so that if anybody else wants to do some of the follow-up problems then they can do so. An alternative, if anyone is interested, would be for me to start a new post with a serious discussion of some of these follow-up problems. I would be particularly interested to disprove rigorously the strong conjecture that the “beats” tournament is quasirandom. I think it’s a doable problem. Perhaps the best option would be to do both — put a draft on the arXiv and try to get a discussion going of the follow-up problem. • K Says: Wonderful, I am very much looking forward. 13. K Says: In the last few months, did you consider whether you want to finish this project as-is, or you still have time for continuing? I would love to see another DHJ Polymath paper 🙂 14. Condorcet Paradoxes and dice | An Ergodic Walk Says: […] Suppose we generate a die randomly with face values drawn from the uniform distribution on and condition on the face sum being equal to . Then as the number of faces , three such independently generated dice will become intransitive with high probability (see the Polymath project). […]
{}
# Using Constants in Headers This topic is 1882 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts Hello, I have been running into a problem in my code lately. I have a file called "Constants.h" and one called "Constants.cpp." I define all my constants there. However, when I include Constants.h in another .h file I always get compile errors when, for example, trying to allocate arrays of a GIVEN_CONSTANT size. Is there a reason for this? I never get these errors in .cpp files. It seems like the pre-processor is not expanding my #includes into the header, or maybe there is a rule that constants can't be used in headers in c++? Can someone shed some light on this situation for me? Thanks. - Dave Ottley ##### Share on other sites It would help to see what your code actually looks like and what compiler errors you're actually getting. ##### Share on other sites I'll give you an example. This is not exact because I have already fixed the errors with a workaround. But, for example Constants.h: extern const int MAX_INTS; Constants.cpp extern const int MAX_INTS = 1000; Foo.h #include "Constants.h" class Foo { int bar[MAX_INTS]; void DoSomething(); }; Compiler Error: "Arrays must be initialized with a constant." *NEW* Foo.h #include "Constants.h" const int MAX_INTS = 1000; class Foo { int bar[MAX_INTS]; void DoSomething(); }; Compiles fine. Edited by KingofNoobs ##### Share on other sites Why aren't you putting the constants in the header? ##### Share on other sites First of all, in your Constants.cpp file you need to write "const int MAX_INTS = 1000; " The extern keyword means "declare without defining". In other words, it is a way to explicitly declare a variable, or to force a declaration without a definition. The practice itself (putting the initialization into the cpp file) is a matter of personal taste. I personally like it because if i need to change the value for whatever reason, not every single file which includes the header file is compiled again. But i wouldnt use "extern" anymore....i like static const uint32 MAX_INTS; in a header file more. ;-) ##### Share on other sites . Edited by C0lumbo ##### Share on other sites Thank you all for your comments. I have decided to go with the #ifndef #define MAX_INTS 1000 #endif route because it doesnt waste any memory. I cant see a downside to it either, and this is how the Microsoft .h files are organized. Until next time... - Dave Ottley ##### Share on other sites ... and this is how the Microsoft .h files are organized. *Cringe* The windows headers aren't exactly the pinnacle of clean non-namespace-polluting headers... ##### Share on other sites You really got the wrong conclusion from this. You don't know if using a const variable wastes any memory, so that's a terrible reason to pick one over the other. An optimized build with g++ generates identical code for both. A const variable behaves like any other variable, while a macro constant has surprises: you can't take its address, it doesn't have a namespace, it doesn't obey the usual scoping rules, you can't access its value from a debugger, it can't be an object of a class... You should write your code to be as clear as possible, minimizing surprises, not whether you might save 4 bytes (which you won't anyway). And therefore you should prefer using const variables over macros to represent constants. ##### Share on other sites Alvaro, Thank you for that additional input. Could you possibly link or attach an example of a (if possible complex) header defining const variables that use the features you list above such as namespaces, being objects in classes, having their addresses taken, etc. I guess I need to see what kind of complexity doing this entails, and if I will ever use those features. -Dave Ottley ##### Share on other sites The code in the header file would look exactly as I posted above, except for possibly being in a namespace. But if you aren't using namespaces [yet], there is no point in putting this particular thing in a namespace. I can't post any code from work, but we do this type of thing all the time there. Just test to print the value of the constant from a debugger, and you'll immediately see one of the benefits of using a const variable. ##### Share on other sites Keep in mind that the MS windows headers are designed to be used from C as well as C++. So while there are (reasonably) good reasons for what they do in their code, you should only copy them if you are working under the same kind of constraint. Also, for integral constants, another option is using an enum. Like a #define it never occupies storage, but like const variables it respects scope. ##### Share on other sites If I have a const string I access from several places, should I declare it extern and move definition to a .cpp or make it static? Since just making it const string in the header would create multiple objects? ##### Share on other sites That's not something you really need to worry about. Most linkers will fold identical constant data (including strings) into a single instance. Ex: MSVC's /opt:icf behavior. ##### Share on other sites Thank you all for your kind responses. So, should I put a namespace i.e. Constants:: around my constants, or would that be a waste of keystrokes? ##### Share on other sites Unless your constants have some sort of logical reason that they should be either grouped together or sectioned off from other symbols, then there's no point in creating a namespace just for constants. For example, you might group constants that form flags together or constants for private use separate from other symbols. But there's no point to dumping all your constants in a namespace just to have a namespace. ##### Share on other sites So, should I put a namespace i.e. Constants:: around my constants, or would that be a waste of keystrokes? A good rule of thumb, I think, is to put each constant in the same namespace as the subsystem/library/... it's associated with e.g. // hypothetical example namespace render { const float max_fov_degrees = 179.0F; class camera { // ... }; } // render Putting all constants in the entire application in a single namespace feels to me like an attempt to cut concerns along an unusual axis.
{}
Two charges, -2.1 uC and -4.3 uC, are located at (-0.60 m, 0) and (0.60 m, 0), respectively. There is a point on the x-axis between the two charges where the electric field is zero. A) Is that point left of the origin, at the origin, or right of the origin? *left of the origin* B)Find the location of the point where the electric field is zero.
{}
Free Version Easy # Finding Average Velocity of a Particle APCALC-MNLSRT The velocity, in $\text{ft/sec}$, of a particle moving along the $x$-axis is given by the function $v(t)={ e }^{ 2t }-{ e }^{ t }$. What is the average velocity of the particle from time $t=0$ to $t=4$? A $359.095\text{ ft/sec}$ B $1436.381\text{ ft/sec}$ C $731.590\text{ ft/sec}$ D $358.845\text{ ft/sec}$
{}
# [texhax] nested tabu with colors Arno Trautmann Arno.Trautmann at gmx.de Wed Jun 27 09:19:13 CEST 2012 Hi all, I want to use a nested tabu environment to typeset a table, but also want to use colors. That works fine as long as I don't nest the tabus, i.e.: \documentclass{minimal} \usepackage{tabu} \usepackage[table]{xcolor} \begin{document} \taburowcolors 2{green!25 .. yellow!50} \begin{tabu}{*2{X[c]}} a & b \\ c & d \\ a & b \\ \end{tabu} \end{document} As soon as I try to nest these in the following way: \documentclass{minimal} \usepackage{tabu} %\usepackage[table]{xcolor} \begin{document} %\taburowcolors 2{green!25 .. yellow!50} \begin{tabu}{*2{X[c]}} a & b \\ c & d \\ a & \begin{tabu}{l} abc \end{tabu} \\ \end{tabu} \end{document} I get an error telling me: ./test.tex:12: Missing number, treated as zero. \tabu at 1.H0 l.12 \end{tabu} ? The error even occurs when I comment out the \taburowcolors. It's just loading xcolor with [table] and nesting tabus. Is this a bug or am I doing something totally wrong? cheers Arno
{}
# NoCoolName Blog ## Looking at Scripture Mastery – 1 Corinthians 10:13 Greek: πειρασμὸς ὑμᾶς οὐκ εἴληφεν εἰ μὴ ἀνθρώπινος· πιστὸς δὲ ὁ θεός, ὃς οὐκ ἐάσει ὑμᾶς πειρασθῆναι ὑπὲρ ὃ δύνασθε, ἀλλὰ ποιήσει σὺν τῷ πειρασμῷ καὶ τὴν ἔκβασιν τοῦ δύνασθαι ὑπενεγκεῖν. My Translation: No temptation has claimed you that wasn't of humanity, but God is faithful, who will not let y'all be tempted beyond what y'all are capable, yet he will make, with the temptation, an exit that you may be capable to endurance. KJV: There hath no temptation taken you but such as is common to man: but God is faithful, who will not suffer you to be tempted above that ye are able; but will with the temptation also make a way to escape, that ye may be able to bear it. My translations are purposefully stretched and should not be viewed as more accurate than the KJV translation unless I say so in the post. I'm trying to show the range lying between the original Greek text and the English. ## Update May 2013 This scripture has been removed by the Church Educational System from the Scripture Mastery list. However, it had remained within this list for over two decades and as such is still familiar to many graduates of the LDS Church's Seminary program. So I'm keeping this exploration of it online, but it is no longer applicable to CES. ## The Letters to Corinth Mormons really like the letters to the Corinthians. These are letters written in answers to questions that Paul's congregation in Corinth had. The second letter shows some evidence of possibly having been originally two different letters that were inexpertly edited together long after they were written. Because the letters are in answer to unknown questions from the Christians at Corinth, the letters sometimes seem to skip from one subject to another. For the scripture mastery verse in question, Paul is discussing evil behavior and what the correct behavior of a Christian should be. At the end of Chapter 9, Paul has been talking about how his followers should retain humility even in the face of how they have already achieved victory through Christ. Beginning Chapter 10, Paul warns of how Israel, who were also God's chosen people just as the Corinthians are now God's chosen people by joining the new covenantal people of Christ, still incurred God's wrath through their evil actions. Paul warns that even though Israel was God's chosen, through their disobedience many of them were killed. So too, says Paul, should Christians living at the end of the world stand firm and not fall into evil ways. Then comes the verse in question. With this previous context as given, perhaps we can see that Paul is not talking about temptation in some little sense. He's just finished talking about the history of Israel in the wilderness under Moses. When Paul says that no temptation has taken you except what is common to humanity, he means that we're all subject to the same things that afflicted ancient Israel. And so we're all still subject to God's judgement even after becoming his people. After the verse in question, Paul says that because of what he's been talking about (Israel's disobedience) that the Christians at Corinth should live their lives carefully. Paul's theology says that joining the covenant community of Christ destroys the ability of sin and death to capture the believer in Christ. For this reason, sin and death no longer have a hold on the believers. But Paul, while acknowledging that his followers are free from the effects of sin, they should be careful in their actions all the same. Let's go back to the verse and look a bit more closely at it. Paul feels that the temptations his followers have to deal with are common to humanity, but that God will provide a way for them to endure it. Note that this verse is not talking about “giving in to sin” or about salvation and the effect of works upon it. It just says that the Corinthians will experience human temptations and that God will give them a way out by providing them with the strength to endure the temptations. For Mormons, salvation is not fully dependent upon belonging to the covenantal people (the Church). Salvation must still be received through living a virtuous life and through avoiding sin, an idea very difficult to pull out of Paul's writings. Mormons usually approach this verse with the assumption that since God wants us to achieve salvation and exaltation, and since this is predicated on our faithfulness, then God will never allow us to be tempted in a way that we can't handle. In other words, God has made it so that it is possible to live a life without sin (the result of not enduring temptation), which should give us hope to someday be able to do so. ## A Common, But Horrible, LDS Reading An odd interpretation of this verse that is extremely common among Latter-day Saints, however, is one that replaces the word “temptation” with “challenge”. You'll often hear Mormons approaching challenging situations of grief or pain with the statement that God will “not give us more than we can handle”. That idea comes from this verse, and yet it is not at all what this verse is saying. And history shows that of course people can experience challenges in their lives that are many times greater than what they can handle. People's bodies, emotions, and sanity can all break under the weight of what this world can throw at them. In a world of war, bloodshed, and holocausts, people break. Assuming that God somehow provides a way for humans to not break when this happens can lead to some very wrong-headed and uncharitable opinions on how people deal, or don't deal, with grief and pain. What should we think of someone who is reacting badly to the death of a family member if we think they God is supposed to help them through it? Should we think they are rejecting God's help? In fact, there's a troubling cultural aspect of Mormon funerals that often revolves around this interpretation of this scripture. Mormons are fond of mentioning that because they believe that their families will be reunited after death and that families are eternal (a belief commonly found among many faiths) their funerals are merely bittersweet, temporary farewells. Whereas others may wail and bemoan their loss, Latter-day Saints know better and while they are sad, they are hopeful as well! Unfortunately, this has developed to such an extent that most Mormons do not know how to deal with the psychological need of grief, worried that by expressing too much of their sorrow they'll be letting down their community. And sometimes those communities can be too strong in enforcing this sense of hopeful sadness and will let those who are expressing too much sadness that they need to rely on God more. If you're too affected by pain and grief, the problem is yourself! Your testimony is not strong enough to carry you through these challenges. God has promise we won't be given more than we can handle! Thankfully, as more and more Mormons become open to the benefits of psychological counseling, this idea that Mormons cannot admit defeat in the face of overwhelming pain and grief is slowly starting to show cracks. Time will hopefully tear down this mistaken assumption that God will always help people through the challenges of life. This scripture merely promises that God will help his people through their common temptations, which is not at all the same thing. ## Why Do I Think This Is Part of Scripture Mastery? I think this scripture was chosen in order to provide youth with a hopeful approach to the LDS conception of sin and repentance. I think it was chosen to give LDS youth the impression that even when they are tempted by sin, God is aware of them and is trying to help them. However, this scripture, as used by the Mormons, also tends to set up a bad situation when temptations are yielded to. Since such sins could have been avoided, then the individual is only to blame for giving in. In the face of addictions, of war, of accidents, and the myriad of other pains of life, this viewpoint can be tragically self-flagellatory for some people. There are better scriptures to give the impression that God is aware of us and wants the best for us. This scripture, if misapplied (and there's precious little given against such a misapplication) can result in individuals constantly beating themselves and their self-image up for being human and making mistakes.
{}
# SOLUTION: The radius of a circle as a function of time is defined by the equation, r(t)=4t^2+3t+1. Determine the rate of change in the area of the circle when dr/dt=11. Algebra ->  Algebra  -> Customizable Word Problem Solvers  -> Misc -> SOLUTION: The radius of a circle as a function of time is defined by the equation, r(t)=4t^2+3t+1. Determine the rate of change in the area of the circle when dr/dt=11.      Log On Ad: Over 600 Algebra Word Problems at edhelper.com Ad: Algebra Solved!™: algebra software solves algebra homework problems with step-by-step help! Ad: Algebrator™ solves your algebra problems and provides step-by-step explanations! Word Problems: Miscellaneous Word Problems Solvers Lessons Answers archive Quiz In Depth Click here to see ALL problems on Miscellaneous Word Problems Question 596252: The radius of a circle as a function of time is defined by the equation, r(t)=4t^2+3t+1. Determine the rate of change in the area of the circle when dr/dt=11.Answer by richard1234(5390)   (Show Source): You can put this solution on YOUR website!We have so when dr/dt = 11, t = 1. The area of the circle as a function of time is We differentiate both sides with respect to t: Replace t = 1 to obtain (units squared per unit of time)
{}
Package website: release | dev This package provides hyperband tuning for mlr3. Various termination criteria can be set and combined. The class ‘AutoTuner’ provides a convenient way to perform nested resampling in combination with ‘mlr3’. ## Installation CRAN version install.packages("mlr3hyperband") Development version remotes::install_github("mlr-org/mlr3hyperband") ## Quickstart If you are already familiar with mlr3tuning, then the only change compared to other tuners is to give a numeric hyperparameter a budget tag. Afterwards, you can handle hyperband like all other tuners: library(paradox) library(mlr3tuning) library(mlr3hyperband) # give a hyperparameter the "budget" tag params = list( ParamInt$new("nrounds", lower = 1, upper = 16, tags = "budget"), ParamDbl$new("eta", lower = 0, upper = 1), ParamFct$new("booster", levels = c("gbtree", "gblinear", "dart")) ) inst = ... # here goes the usual mlr3tuning TuningInstance constructor # initialize hyperband tuner tuner = tnr("hyperband", eta = 2L) # tune the previously defined TuningInstance tuner$optimize(inst) For the full working example, please check out the Examples section below. ## A short description of hyperband Hyperband is a budget oriented-procedure, weeding out suboptimally performing configurations early on during their training process aiming at increasing the efficiency of the tuning procedure. For this, several brackets are constructed with an associated set of configurations for each bracket. These configuration are initialized by stochastic, often uniform, sampling. Each bracket is divided into multiple stages, and configurations are evaluated for a increasing budget in each stage. Note that currently all configurations are trained completely from the beginning, so no online updates to the models are performed. Different brackets are initialized with different number of configurations, and different budget sizes. To identify the budget for evaluating hyperband, the user has to specify explicitly which hyperparameter of the learner influences the budget by tagging a single hyperparameter in the parameter set with "budget". An alternative approach using subsampling and pipelines is described further below. ## Examples Originally, hyperband was created with a “natural” learning parameter as the budget parameter in mind, like nrounds of the XGBoost learner: library(mlr3) library(mlr3hyperband) # hyperband tuner library(mlr3tuning) # tuning methods library(mlr3learners) # xgboost learner set.seed(123) # Define hyperparameter and budget parameter for tuning with hyperband params = list( ParamInt$new("nrounds", lower = 1, upper = 16, tags = "budget"), ParamDbl$new("eta", lower = 0, upper = 1), ParamFct$new("booster", levels = c("gbtree", "gblinear", "dart")) ) # Initialize TuningInstance as usual # hyperband terminates on its own, so the terminator acts as a upper bound inst = TuningInstanceSingleCrit$new( learner = lrn("classif.xgboost"), resampling = rsmp("holdout"), measure = msr("classif.ce"), search_space = ParamSet$new(params), terminator = trm("none") # hyperband terminates on its own ) # Initialize Hyperband Tuner and tune tuner = tnr("hyperband", eta = 2L) tuner$optimize(inst) # View results inst$result Additionally, it is also possible to use mlr3hyperband to tune learners that do not have a natural fidelity parameter. In such a case mlr3pipelines can be used to define data subsampling as a preprocessing step. Then, the frac parameter of subsampling, defining the fraction of the training data to be used, can act as the budget parameter: library(mlr3pipelines) set.seed(123) ll = po("subsample") %>>% lrn("classif.rpart") # Define extended hyperparameters with subsampling fraction as budget and hence # no learner budget is required params = list( ParamDbl$new("classif.rpart.cp", lower = 0.001, upper = 0.1), ParamInt$new("classif.rpart.minsplit", lower = 1, upper = 10), ParamDbl$new("subsample.frac", lower = 0.1, upper = 1, tags = "budget") ) # Define TuningInstance with the Graph Learner and the extended hyperparams inst = TuningInstanceSingleCrit$new( tsk("iris"), ll, rsmp("holdout"), msr("classif.ce"), ParamSet$new(params), trm("none") # hyperband terminates on its own ) # Initialize Hyperband Tuner and tune tuner = tnr("hyperband", eta = 4L) tuner$optimize(inst) # View results inst$result ## Documentation The function reference is can be found here. Further documentation lives in the mlr3book. The original paper introducing the hyperband algorithm is given here.
{}
## Tuesday, 27 November 2012 ### Is This Integration? Problem Name: Is This Integration? UVa ID: 10209 Keywords: geometry, math I’ve been meaning to write about a geometry problem for a while now, because they generally provide interesting challenges that require some degree of creativity. With some types of problems, you can sometimes formulate your solution almost immediately after reading the problem statement. When you tackle a geometry–related problem, however, it’s rare that you can come up with a solution without doing at least some amount of mental work out first. This problems asks us to consider a geometric figure built as follows: start with a square with sides of length $$a$$. Now draw four arcs of radius $$a$$ and angle $$\pi \div 2$$, with their centers in each of the four corners of the square, and draw these arcs so they all lie inside the square. This produces a figure like this: As you can see, the area of the square gets divided into 9 different sections of three different kinds, labeled here as $$X, Y, Z$$. We’re asked to calculate the total area covered by these three types of sections —that is, the values $$X, 4Y, 4Z$$. Now, there’s probably many ways to get to the answer, but let’s try building our own, step by step. First of all, we can observe that we need to deduce the values of three unknowns ($$X$$, $$Y$$ and $$Z$$), so by simple algebraic reasoning, we can be sure that we need at least three independent equations to solve these unknowns. Let’s start with what is probably the easiest equation to derive from the figure: the total area of the square must be equal to the sum of the areas from all 9 sections: This first equation is based on the area of the square, but we did not consider the arcs, so let’s do that for the second equation. Let’s consider the coloured area in the following figure: We can observe here that the given section is equal to the area of the square minus the area of one quarter of a circle of radius $$a$$: It seems that we’re making good progress. We just need one more equation. However, at this point it’s easy to come up with something that looks new, but isn’t. For example, let’s say that we wanted to formulate a new equation for the area of the following section: The problem is that, no matter how we look at it, the equation that we can derive from this is going to be basically the same as equation [2] above (try it!). If we analyse this for a moment, it shouldn’t be a surprise: we’re producing relations from the same elements (the square and the arcs). What we need is a completely new geometric element to use as a base for our last equation. Let’s stop for a moment then, and take a look at our first figure, and ask ourselves: what other interesting (the word critical may come to mind) elements can we recognise in there? We have tried with the lines —either the straight lines from the square, or the curved lines from the arcs—, but what about the points? Maybe the intersection points can give us something useful. For example, let’s consider the following section of the figure: With some geometric reasoning, and a little algebra, we could derive a new equation to solve the value $$Z$$. First of all, let’s find the height $$h$$ at which the two relevant arcs intersect and which marks one of the sides of the coloured rectangle from the last figure. Starting from the standard equation of a circle with radius $$a$$, and given that the symmetry shows that the intersection point happens horizontally right in the middle of the square, we can deduce $$h$$ like this: Now, if we knew the area covered by the red sections, then the value of $$Z$$ could be easily deduced. So let’s see what else we can find out from our last figure: We can observe now that the one of the red sections has an area equal to the area of the arc $$\stackrel\frown{QR}$$ minus the area of $$\triangle PQS$$. And we can calculate these areas by knowing the angle $$\alpha$$ and the sides of the triangle $$\triangle PQS$$ which are $$h$$ and $$a \div 2$$: With all of this, we can finally summarise our findings with the three equation we were looking for: The corroboration of equation [3] and deducing the final values of $$X$$, $$4Y$$ and $$4Z$$ is left as an exercise to the reader :). Also, try perhaps looking for a simpler way to deduce all of these values. As I said before, there’s usually more than one way to do it, and some can be simpler than others. By simply staring at the original figure for a little while, an interesting idea may pop up in your head that helps you solve this in a way that you find easier to understand. And you’ll never know unless you try :).
{}
# Morita equivalence between $\mathbb{C}[G]$ and $\mathbb{C}[H]$? What we can say about two groups G and H when their group rings, $\mathbb{C}[G]$ and $\mathbb{C}[H]$, are Morita equivelent? I will assume that $G$ and $H$ are finite groups, since I don't know the theory otherwise. Then it just says that $G$ and $H$ have the same number of conjugacy classes. Indeed, $\mathbb{C}[G]$ is Morita-equivalent to $\mathbb{C}^m$ where $m$ is the number of conjugacy classes of $G$, and $\mathbb{C}^m$ is Morita-equivalent to $\mathbb{C}^n$ iff $m=n$.
{}
× ## Implicit Differentiation Not all equations can be written like y = f(x), so taking the derivative can be tricky. Save the mess and do it directly with implicit differentiation. #### Challenge Quizzes If $$\sqrt[3]{x^2}+\sqrt[3]{y^2}=4,$$ what is $$\displaystyle \frac{dy}{dx}?$$ If $$x=y\sqrt{11+y}$$, what is $$\displaystyle \frac{dy}{dx}?$$ Find $$\displaystyle \frac{dy}{dx}$$ for $$7\sqrt{x}+6\sqrt{y}=13y^2$$ at the point $$(1,1).$$ Given $f(x)=\sqrt[3]{\frac{x}{x^2+124}},$ what is the value of $$f'(1)?$$ If $$\sqrt{3x}+\sqrt{y}=\sqrt{5}$$, what is $$\displaystyle \frac{dy}{dx}?$$ ×
{}
scroll identifier for mobile main-content ## Über dieses Buch In this book, Denis Serre begins by providing a clean and concise introduction to the basic theory of matrices. He then goes on to give many interesting applications of matrices to different aspects of mathematics and also other areas of science and engineering. With forty percent new material, this second edition is significantly different from the first edition. Newly added topics include: • Dunford decomposition, • tensor and exterior calculus, polynomial identities, • regularity of eigenvalues for complex matrices, • functional calculus and the Dunford–Taylor formula, • numerical range, • Weyl's and von Neumann’s inequalities, and • Jacobi method with random choice. The book mixes together algebra, analysis, complexity theory and numerical analysis. As such, this book will provide many scientists, not just mathematicians, with a useful and reliable reference. It is intended for advanced undergraduate and graduate students with either applied or theoretical goals. This book is based on a course given by the author at the École Normale Supérieure de Lyon. ## Inhaltsverzeichnis ### Chapter 1. Elementary Linear and Multilinear Algebra This chapter is the only one where results are given either without proof, or with sketchy proofs. A beginner should have a close look at a textbook dedicated to linear algebra, not only reading statements and proofs, but also solving exercises in order to become familiar with all the relevant notions. Denis Serre ### Chapter 2. What Are Matrices In real life, a matrix is a rectangular array with prescribed numbers n of rows and m of columns (n×m matrix). To make this array as clear as possible, one encloses it between delimiters; we choose parentheses in this book. The position at the intersection of the ith row and jth column is labeled by the pair (i, j). If the name of the matrix is M (respectively, A, X, etc.), the entry at the (i, j)th position is usually denoted m i j (respectively, a i j , x i j ). An entry can be anything provided it gives the reader information. Here is a the real-life example. Denis Serre ### Chapter 3. Square Matrices The essential ingredient for the study of square matrices is the determinant. For reasons given in Section 3.5, as well as in Chapter 9, it is useful to consider matrices with entries in a ring. This allows us to consider matrices with entries in(rational integers) as well as in K[X] (polynomials with coefficients in K).We assume that the ring of scalars A is a commutative (meaning that the multiplication is commutative) integral domain (meaning that it does not have divisors of zero: ab=0 implies either a = 0 or b = 0), with a unit denoted by 1, that is, an element satisfying 1x = x1 = x for every x ∈≤ A. Denis Serre ### Chapter 4. Tensor and Exterior Products Let E and F be K-vector spaces whose dimensions are finite. We construct their tensor product E K F as follows. Denis Serre ### Chapter 5. Matrices with Real or Complex Entries A matrix $$M \in {M_{n \times m}}(k)$$ is an element of a vector space of finite dimension n2. When K = $${\mathbb{R}}$$or K =$${\mathbb{C}}$$, this space has a natural topology, that of K nm . Therefore we may manipulate such notions as open and closed sets, and continuous and differentiable functions. Denis Serre ### Chapter 6. Hermitian Matrices We recall that $${\left\|\cdot\right\|_2}$$ denotes the usual Hermitian norm on : $$\mathbb {C}$$ $${\left\| x \right\|_2}: = {\left( {\sum\limits_{j = 1}^n {{{\left| {{x_j}} \right|}^2}} } \right)^2}$$ Denis Serre ### Chapter 7. Norms In this chapter, the field K is always $$\mathbb {R}$$or $$\mathbb {C}$$and E denotes K n . The scalar (if K =$$\mathbb {R}$$) or Hermitian (if K =$$\mathbb {C}$$) product on E is denoted by $$\left\langle {x,y} \right\rangle : = {\Sigma _j}{\bar x_j}{y_j}.$$ Denis Serre ### Chapter 8. Nonnegative Matrices In this chapter matrices have real entries in general. In a few specified cases, entries might be complex. Denis Serre ### Chapter 9. Matrices with Entries in a Principal Ideal Domain; Jordan Reduction In this chapter we consider only commutative integral domains A (see Chapter 3). Such a ring A can be embedded in its field of fractions, which is the quotient of $$A \times (A\backslash \left\{ 0 \right\})$$ by the equivalence relation $${\text{(a,b)}} {\mathcal{R}}{\rm {(c,d)}} \Leftrightarrow {\rm ad = bc}.$$ The embedding is the map $$\begin{array}{l}\\a \mapsto (a,1) \\\end{array}$$. Denis Serre ### Chapter 10. Exponential of a Matrix, Polar Decomposition, and Classical Groups Polar decomposition and exponentiation are fundamental tools in the theory of finite-dimensional Lie groups and Lie algebras. We do not consider these notions here in their full generality, but restrict attention to their matricial aspects. Denis Serre ### Chapter 11. Matrix Factorizations and Their Applications The techniques described below are often called direct solving methods. Denis Serre ### Chapter 12. Iterative Methods for Linear Systems In this chapter the field of scalars is K   = ℝ or ℂ. Denis Serre ### Chapter 13. Approximation of Eigenvalues The computation of the eigenvalues of a square matrix is a problem of considerable difficulty. The naive idea, according to which it is enough to compute the characteristic polynomial and then find its roots, turns out to be hopeless because of Abel’s theorem, which states that the general equation P(x) = 0, where P is a polynomial of degree d ≥ 5, is not solvable using algebraic operations and roots of any order. For this reason, there exists no direct method, even an expensive one, for the computation of Sp(M). Denis Serre ### Backmatter Weitere Informationen ## BranchenIndex Online Die B2B-Firmensuche für Industrie und Wirtschaft: Kostenfrei in Firmenprofilen nach Lieferanten, Herstellern, Dienstleistern und Händlern recherchieren. ## Whitepaper - ANZEIGE - ### Best Practices für die Mitarbeiter-Partizipation in der Produktentwicklung Unternehmen haben das Innovationspotenzial der eigenen Mitarbeiter auch außerhalb der F&E-Abteilung erkannt. Viele Initiativen zur Partizipation scheitern in der Praxis jedoch häufig. Lesen Sie hier  - basierend auf einer qualitativ-explorativen Expertenstudie - mehr über die wesentlichen Problemfelder der mitarbeiterzentrierten Produktentwicklung und profitieren Sie von konkreten Handlungsempfehlungen aus der Praxis.
{}
# Meeting Details Title: Sums of almost equal squares of primes Algebra and Number Theory Seminar Angel Kumchev, Towson University We study the representations of large integers $n$ as sums $p_1^2 + \dots + p_s^2$, where $p_1, \dots, p_s$ are primes with $| p_i - (n/s)^{1/2} | \le n^{\theta/2}$, for some fixed $\theta < 1$. When $s = 5$ we use a sieve method to show that all sufficiently large integers $n \equiv 5 \pmod {24}$ can be represented in the above form for $\theta > 8/9$. This improves on earlier work by Liu, L\"{u} and Zhan, who established a similar result for $\theta > 9/10$. We also obtain estimates for the number of integers $n$ satisfying the necessary local conditions but lacking representations of the above form with $s = 3, 4$. When $s = 4$ our estimates improve and generalize recent results by L\"{u} and Zhai, and when $s = 3$ they appear to be first of their kind. This is joint work with Taiyu Li.
{}