Tisquantum—nothing to do with quantum computation

Tisquantum was a Patuxet Indian who lived from 1580 until 1622. He is better known as Squanto, and is famous to Americans as the native on whom the Pilgrim settlers in Massachusetts most depended for their survival, which they commemorated at their first Thanksgiving observance. Children’s books portray Squanto as a native who came out of the woods with a helping hand, but in fact he was more of a globetrotter than any of the Pilgrims. Today we, Ken and I, wish to thank people around the globe who have contributed ideas to this blog, particularly by keeping alive some comment threads that one might think were old history.

Squanto was one of two dozen Indians who trusted a captain named Thomas Hunt in 1614, coming aboard his ship to negotiate beaver trading, only to find themselves kidnapped and taken to Spain to be sold into slavery. Hunt had not yet managed to sell Squanto when some Spanish friars intervened. They adopted him, gave him religious and secular education, and allowed him to try to return home via England. It is possible that Squanto had an earlier trip to England in 1605, when the name “Tisquantum” is listed among five Native Americans forcibly brought back by an explorer. In London in 1615, Squanto met the Treasurer of the Newfoundland Company, Sir John Slaney, and entered his employ. This brought him back to North America, on terms that allowed him to return to his village after service as a guide and interpreter. Alas, he found his village wiped out by European diseases for which the natives had no immunity. He affiliated with another tribe, and spent three years active in tumultuous relations between natives and settlers before he succumbed to disease himself in 1622. In that time he taught local maize cultivation techniques to the settlers, for which he is most remembered, but there was much more. The point is that Squanto brought expertise from two continents, and was far from the “simple” native often depicted. Tisquantum complexity may have saved the Pilgrims.

We are about to go over 9,000 comments in 338 items in the three years of this blog, an average of 26.6 per item including trackbacks. We could easily double that if we included all the spam: there have been 284 spam items since Nov. 17 alone. Most of these are the “self-promoting thank-you” kind, where nothing specific is said about the post it desires to attach to, and the commenter ID links back to a sales pitch. Thanks-giving turkeys indeed; happily many are caught by the filter, whose contents we review periodically. Sometimes we find a “real” comment there, and if time has passed and it carries a known e-mail we’ll write to the commenter with due apology. Similarly, if you feel a comment of yours has been mislaid, please let us know; both our e-mails are public.

No, we intend our genuine thanks-giving to be from us to you. The following is far from an exhaustive list, and we do intend more of this kind of recognition. This is not a roundup of the posts with the most discussion—the items below are mostly restricted to ones where people have contributed months after a post went up. Often we link just one comment in a thread when there are other worthy ones nearby—this is to encourage you to read the context and not to demote the ones not singled out. Sometimes there are crankish comments nearby, but we make no judgment beyond common standards of civil writing.
${\bullet}$ The “standing items” at the top continue to attract comments, especially The Gödel Letter, Conventional Wisdom and P=NP, and About P=NP and SAT. We note especially that Pascal Koiran and Sam Buss and others below gave further references pertinent to Gödel’s letter beginning here, and that Peter Tennenbaum contributed observations on Gödel’s worldviews, plus a wonderful personal recollection here.
${\bullet}$ A recent query on the Feb. 2009 item Fast Exponential Algorithms drew a reply from someone in another country before we could act.
${\bullet}$ Lukas Polacek last April corrected a common “legend” about Robert Szelepcsényi here.
${\bullet}$ In the April 2009 item The Four-Color Theorem, there was a concrete and beautiful discussion between commenters Stefanutti and Cahit last Feb.–Mar., beginning here.
${\bullet}$ There have been several posts on the Graph Isomorphism problem beginning with this May 2009 item, and discussion of claims of progress has continued in several of them. We posted most recently about the more specialized problem of group isomorphism, where we proposed a PolyMath project, and excellent concrete suggestions are still arriving among the 73 comments there.
${\bullet}$ In the June 2009 item High Dimensional Search and the NN Problem, commenters ACW and Neil Dickson suggested some new approaches last March beginning here.
${\bullet}$ Regarding Linear Equations Over Composite Moduli, Shachar Lovett noted a paper last February with an update here.
${\bullet}$ The Sept. 2009 item Why Believe That P=NP is Impossible? has seen activity these past two years, with interesting observations and even a software demo beginning here. These include a reply by Dr. John A. Sidles, whose frequent literate, diverse, and perspicacious comments branching out from medical quantum-scale technology could be a subject in themselves. Indeed we have just noticed some congratulations to offer.
${\bullet}$ Paul Beame told a corrected story of the FOCS logo here.
${\bullet}$ Josh Grochow gave some pertinent references for Nash equilibria of sparse games here.
${\bullet}$ Our November 2009 item on the Schwartz-Zippel-$\dots$ Lemma drew references and discussion in summer 2010 beginning here, including one from Richard Zippel himself. Includes an associate of someone who testified at the OJ trial!
${\bullet}$ Two notable comments in the Dec. 2009 year-end discussion: this one on real software timing issues (and followup), and this by Albert Atserias giving evidence against a working hypothesis of ours—we may address responses to our past research ideas in a future post.
${\bullet}$ Commenters added favorite-book suggestions to this Dec. 2009 post, perhaps helpful in the shopping season that begins today.

This covers posts in 2009; there are more we can acknowledge. We intended this post with the Squanto story to come out on Wednesday evening or at least on Thanksgiving Day itself, but professional and personal matters took precedence until today. Today at least makes it timely with another Thanksgiving Weekend tradition: enjoying leftovers.

Open Problems

Can you make progress on any of the open problems in the above items? And of course have a safe and healthy Thanksgiving.

1. November 26, 2011 2:27 pm
Many congratulations to John Sidles, whose comments here and elsewhere are always a pleasure to read. And – from all around the world – let’s give thanks to you, Ken and Dick, for entertaining us with one of the best blogs around.
2.
November 28, 2011 5:11 pm
A belated Happy Thanksgiving to everyone at Gödel’s Lost Letter and ${P=NP}$. Dick and Ken and Richard’s kind words came as a surprise and are greatly appreciated … the Salon des Skeptique Refusés concretely demonstrates that not everyone shares this view. For me, the most reliable way to create laughter is to be completely serious, and so here are three things for which I am seriously grateful. First, I am seriously grateful for a math-science-and-technology blogosphere that includes so many wonderful forums, including Gödel’s Lost Letter, Fortnow/GASARCH, Nuit Blanche, Quantum Pontiff, Shtetl Optimized, and the numerous eponymic weblogs by (e.g.) Tim Gowers, Gil Kalai, Terry Tao, and Doron Zeilberger (and plenty more). Equally wonderful are the now-prospering Stack Exchanges Math Overflow, Theoretical Computer Science, and Theoretical Physics. Long may you all prosper! 🙂 Second, I am seriously grateful for a lesson that these web resources teach us, each in its own way: the lesson that we humans are very far from fundamental limits to pretty much anything that we care about: algorithmic efficiency, computational capacity, observational bandwidth, smaller sizes, faster cycles, lower energy costs, greater healing capacity … and most wonderfully of all, these websites in aggregate afford us glimpses of a deeper and more integrated understanding of how all these capabilities relate to one another. Third, I am deeply grateful to be part of a thrilling 21st century whose cardinal achievement (IMHO) will be a demonstration-by-construction that a planet with ten billion people on it can be a healthy, secure, and free place to live … and a mighty interesting and lively place too. And so, best wishes for a Happy Holiday Season are extended to everyone! 🙂

November 29, 2011 2:10 am
I would also like to both congratulate John Sidles and thank Ken and Dick for their interesting posts, and many of the commenters for their interesting additions. This is the only blog where I subscribe to the RSS comment feed. But, although I’ll come out as a curmudgeon for repeating this, I would feel a lot safer about getting those comments on old posts mentioned above if not for the more and more frequently exceeded 10-post limit on said comment feed. I checked the feed at least three times in the last 24 hours and still lost comments in two of them.

December 2, 2011 6:54 am
I agree with you: a limit of at least twenty would be better for new posts. As regards old posts, they’re still accessible thru the archives.

December 2, 2011 6:06 am
I’ve been waiting for this post to get just a little old before answering and thanking you … and “keeping posts alive”! Long live this great site!
Is 0.999… (repeating) equal to 1?

by Jakub Marian

The short answer is yes. $0.\bar 9$ (zero point 9 repeating) is exactly 1. However, a lot of people find this result counter-intuitive (they often feel that $0.\bar 9$ should be slightly less than 1); this feeling stems from a misunderstanding of what $0.\bar 9$ means. To satisfy those looking for a quick and dirty argument, we can do the following simple calculation:

\begin{align*}
x &= 0.999\ldots\\
10x &= 9.999\ldots\\
9x &= 10x - x = 9.999\ldots - 0.999\ldots = 9\\
x &= 1
\end{align*}

This is not, however, a mathematical proof, because not all of the steps are obvious without looking further at the definition of the decimal notation. This will require a bit of preparation.

The limit

You are probably familiar with the meaning of the decimal notation when the fractional part contains only a finite number of digits. For example:

$$32.58 = 3⋅10+2⋅1+5⋅\frac{1}{10}+8⋅\frac{1}{100}$$

To understand the meaning of the fractional part when the digits go on indefinitely, you first need to understand, at least purely intuitively, the notion of limit. When you have a sequence of numbers, say $a_1 = 1$, $a_2 = \frac{1}{2}$, $a_3 = \frac{1}{3}$, …, $a_n = \frac{1}{n}$, you can compute what we call the limit of $a_n$ (at $∞$). The name sounds terrifying, but the concept is in fact really simple. As you can see (or quickly check using a calculator), $\frac{1}{n}$ gets closer and closer to 0 as $n$ increases, for example, $\frac{1}{100} = 0.01$, $\frac{1}{1000} = 0.001$, and so on. If a sequence gets arbitrarily close to a number, we say that the limit of the sequence is that number, and we write:

$$\lim_{n→∞} a_n = \text{the number to which } a_n \text{ gets arbitrarily close}$$

For example, in the case of $a_n = \frac{1}{n}$, we would write

$$\lim_{n→∞} a_n = 0$$

since $\frac{1}{n}$ gets arbitrarily close to $0$ for large $n$. Now we are ready to define the decimal notation for never-ending sequences of digits.

Real numbers and the decimal notation

It may come as a great surprise to those who don’t know any higher mathematics, but the real numbers are not defined to be “numbers like $0.53$, $3.1415$…, $86.51$”. In fact, we define them in three steps:

1) Define the integers (whole numbers), i.e. numbers like $1$, $5$, $-8$.
2) Define fractions of integers, i.e. numbers like $\frac{5}{8}$, $\frac{314}{100}$.
3) Add abstract numbers that fill in “gaps” between fractions.

The mathematical definition is somewhat more technical, but it follows the process outlined above. There is absolutely no mention of the decimal point anywhere in the process; we just have a bunch of numbers, such as $1$, $\frac{3}{7}$, or $π$, and then, after we have defined the numbers, we define the decimal point. This is simple if the decimal part is finite (we already did this above), but what if it is infinite? We use the limit! For example, to get the value of $x = 0.999$…, we define a sequence of the form $x_1 = 0.9$, $x_2 = 0.99$, $x_3 = 0.999$, and so on. Obviously, as $n$ gets larger, $x_n$ gets closer to the real value of $0.999$…, so we define:

$$0.\bar 9 = \lim_{n→∞} x_n$$

The only question that remains to be answered is: What is the limit of $x_n$? As $n$ gets larger, $x_n$ gets arbitrarily close to $1$, so we conclude that, by definition:

$$0.\bar 9 = \lim_{n→∞} x_n = 1$$

The phrase “by definition” is really important here.
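If you would like to check this numerically, here is a small illustration (our own addition, not part of the article): the partial sum $x_n$ differs from $1$ by exactly $10^{-n}$, and you can watch that gap shrink.

```python
from fractions import Fraction

# x_n = 0.99...9 with n nines, represented exactly as 1 - 10^(-n)
for n in (1, 2, 5, 10):
    x_n = 1 - Fraction(1, 10**n)
    gap = 1 - x_n
    print(n, float(x_n), float(gap))  # the gap 10^(-n) tends to 0, so lim x_n = 1
```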
Symbols like “1.234” are not real numbers themselves; they are just a way to label real numbers, just like we label humans with words like “Peter” or “Laura”—it wouldn’t really make sense to say that a person is a series of letters, would it… And just like “Peter” may also be called “Pete”, some numbers can be written in two different ways in our notation: $0.999$… is just a different way of writing $1$.
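As a closing remark (our addition, not from the original article), the limit above can also be evaluated with the geometric series formula, which is what makes the “quick and dirty” calculation at the top rigorous:

$$0.\bar 9 = \lim_{n→∞} \sum_{k=1}^{n} \frac{9}{10^k} = \frac{9}{10}\cdot\frac{1}{1-\frac{1}{10}} = 1.$$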
## Reductive groups over a local field. (Groupes réductifs sur un corps local.)(French)Zbl 0254.14017 [Joint “Looking back” review with Zbl 0597.14041.] The two articles under review contain the foundations of the theory of Bruhat-Tits buildings. When Bruhat and Tits developed the theory of the buildings now bearing their names, the structure theory of reductive algebraic groups over an arbitrary field (as an algebraic avatar of the theory of Lie groups) was already quite well understood. The foundations of this theory were laid by Armand Borel, Claude Chevalley, Jacques Tits and many others. Bruhat and Tits embarked on the project to understand reductive algebraic groups over a field $$K$$ with a non-archimedean absolute value. Their goal was to define a new geometric object taking into account the valuation on the ground field, which might be seen as a $$p$$-adic avatar of the Riemannian symmetric space $$G/K$$ associated to a semisimple real Lie group $$G$$ and a maximal compact subgroup $$K$$. Note that a non-archimedean absolute value satisfies the triangle inequality in the following strong form $|a+b| \leq \max\{|a|, |b|\}.$ One example of a field with a non-archimedean absolute value is the field of formal Laurent series $$k((X))$$ over an arbitrary ground field $$k$$, which is endowed with the absolute value $$|\sum_{n \geq n_0} a_n X^n| = e^{-n_0}$$ if $$a_{n_0} \neq 0$$. The field $$K = k((X))$$ is discretely valued, which means that the value group $$|K^\times|$$ is a discrete subgroup of $$\mathbb{R}$$. Other prominent examples of fields with a non-archimedean absolute value are the completions $$\mathbb{Q}_p$$ of $$\mathbb{Q}$$ with respect to the $$p$$-adic absolute value. These fields are local, i.e., they are locally compact in the topology induced by the absolute value. The fields $$\mathbb{Q}_p$$ and their extensions are important for local questions in number theory. However, the topology induced by a non-archimedean absolute value has disadvantages. The fields $$k((X))$$ and $$\mathbb{Q}_p$$ are for example totally disconnected, i.e., the only connected subsets are the empty set and the one-point sets. These topological flaws create difficulties if one seeks for non-archimedean analogies of archimedean constructions. In the case of Bruhat-Tits buildings this explains why a new object had to be constructed: The quotient space of a semisimple group by a maximal compact subgroup, which works fine in the archimedean world, is not an interesting topological space in the non-archimedean world. Now we consider a reductive algebraic group $$\mathcal{G}$$ over a field $$K$$ with a non-archimedean valuation. Recall that a linear algebraic group is a group variety over $$K$$ which can be embedded in the $$K$$-variety given by a general linear group. It is called reductive if it does not contain any non-trivial connected unipotent normal subgroup. If it does not even contain any non-trivial connected solvable normal subgroup, then $$\mathcal{G}$$ is called semisimple. The general linear group is reductive. The special linear group, the symplectic group or the special unitary group are examples of semisimple groups. Since for many interesting questions it is crucial to work with fields $$K$$ which are not algebraically closed, one has to work with group schemes rather than groups. A group scheme over $$K$$ can be thought of as an object encoding information on all groups $$\mathcal{G}(L)$$, where $$L$$ runs through the extension fields of $$K$$. 
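Returning for a moment to the strong triangle inequality above, here is a small worked instance in $\mathbb{Q}_2$ (our own illustration, not part of the review): with $|x| = 2^{-v_2(x)}$ for the $2$-adic valuation $v_2$,
$\left|\tfrac{3}{4}+\tfrac{5}{4}\right| = |2| = \tfrac{1}{2} \leq \max\left\{\left|\tfrac{3}{4}\right|, \left|\tfrac{5}{4}\right|\right\} = \max\{4,4\} = 4,$
and the inequality can be strict only because the two terms have equal absolute value; whenever $|a| \neq |b|$ one has $|a+b| = \max\{|a|,|b|\}$.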
Before Bruhat and Tits developed their general theory, O. Goldman and N. Iwahori had constructed a space of non-archimedean norms which afterwards turned out to be the Bruhat-Tits building associated to the general linear group [“The space of $$p$$-adic norms,” Acta Math. 109, 137–177 (1963; Zbl 0133.29402)]. Besides, N. Iwahori and H. Matsumoto [“On some Bruhat decomposition and the structure of the Hecke rings of $$p$$-adic Chevalley groups,” Publ. Math. IHES 25, 5–48 (1965; Zbl 0228.20015)] had investigated split groups before Bruhat and Tits embarked on the general theory. The Bruhat-Tits building associated to a reductive group $$\mathcal{G}$$ over a field with a non-archimedean absolute value is a metric space endowed with a continuous action by the $$K$$-rational points $$\mathcal{G}(K)$$. If the valuation on $$K$$ is discrete, then it carries a simplicial structure. Its existence depends on rather general hypotheses. For example, Bruhat-Tits buildings exist if the ground field $$K$$ is discretely valued and henselian with a perfect residue field. Moreover, they always exist for split groups, i.e., for reductive groups $$\mathcal{G}$$ such that the maximal split torus is defined over $$K$$. Note that the Bruhat-Tits building of a reductive group coincides with the building of its semisimplification. However, there is a notion of extended buildings which takes into account the difference between these two groups. Part I of the paper under review develops the fundamental theory of buildings from two different axiomatic angles. The application to reductive groups is postponed to part II. Let us describe in more detail the content of part I. In the first five sections a building (“immeuble”) is associated to an affine Tits system, which is the first axiomatic approach to buildings. In sections six to nine the second axiomatic approach via valued root data is developed. Both theories overlap to a certain extent, but not completely. The first section recalls some facts on Coxeter groups, Tits systems and affine Weyl groups, relying on the Bourbaki volume Lie groups and Lie algebras, chapter 4 to 6. A Tits system is a quadruplet $$(G,B,N,S)$$ consisting of a group $$G$$ together with two subgroups $$B$$ and $$N$$ generating $$G$$ such that $$B \cap N$$ is a normal subgroup in $$N$$. The quotient group $$W = N / (B \cap N)$$ is also called Weyl group. $$S$$ is a set of involutions generating the Weyl group such that the two following conditions hold. i) $$sBw \subset BwB \cup BswB$$ for all $$s \in S$$ and $$w \in W$$, ii) $$sBs \neq B$$ for all $$s \in S$$. To avoid confusion note that $$G$$ here is a true group and not a group scheme. The group $$G = \mathcal{G}(K)$$ of $$K$$-rational points of a reductive group $$\mathcal{G}$$ over $$K$$ carries the structure of a Tits system. As an example, consider the general linear group $$G$$ of rank $$n$$ together with the subgroup $$B$$ of upper triangular matrices and the subgroup $$N$$ of matrices with precisely one non-zero entry in every line and column. Here $$B \cap N$$ is equal to the subgroup of diagonal matrices, and $$W$$ can be identified with the symmetric group of $$n$$ elements. Under this identification the set $$S$$ corresponds to the set of transpositions of $$i$$ and $$i+1$$ for all $$i = 1, \ldots, n-1$$. Tits systems with finite Weyl group were studied before. They give rise to the so-called spherical buildings or Tits buildings, which can be defined as the complex of all parabolic subgroups. 
This theory was developed by Jacques Tits in order to find geometric interpretations for linear algebraic groups over arbitrary fields, see [J. Tits, “Buildings of spherical type and finite $$BN$$-pairs.” Lecture Notes in Mathematics 386, Berlin-Heidelberg-New York: Springer-Verlag (1974; Zbl 0295.20047)]. It does not take into account the non-archimedean absolute value. In fact, for a reductive group over a field with a suitable non-archimedean absolute value, there exist two buildings: the Tits (or spherical) building and the Bruhat-Tits building. They are related since the Bruhat-Tits building can be compactified by attaching a Tits building “at infinity”. In the case of ground fields with a non-archimedean valuation, one has to consider affine Tits systems, where the Weyl group $$W$$ is an infinite group of affine reflections in a Euclidean space $$A$$. The corresponding affine hyperplane arrangement defines a decomposition of $$A$$ into faces. Faces of maximal dimension are called chambers. In section two the building $$\mathcal{I}$$ associated to an affine Tits system is defined. It is a polysimplicial complex, i.e., a product of simplicial complexes, whose faces correspond to the so-called parahoric subgroups of $$G$$. It can be described as a union of Euclidean spaces, which are called apartments, and it carries a metric and an action of the group $$G$$. In the third section an important fixed-point theorem is proven, stating that the set-wise stabilizer of a bounded subset of the building $$\mathcal{I}$$ has a fixed point. This is used to show that the bounded subgroups of $$G$$ are cum grano salis the stabilizers of faces in the building. Since the notion of Tits systems does not a priori contain topological data, the definition of bounded subgroups is more involved. In the fourth section Iwasawa and Cartan decompositions of $$G$$ are proven with the help of the building. Section five analyses double Tits systems, leading to an affine and to a spherical Tits system. This reflects the fact that there are two buildings associated to a reductive group over a non-archimedean field, as was explained above. In section six the second approach to buildings is prepared by the definition of a valued root datum. Here we fix a root system $$\Phi$$ in the dual space of a vector space $$V$$. A root datum in a group $$G$$ consists of a subgroup $$T$$ together with data $$({U}_a, M_a)$$ for every root $$a \in \Phi$$ such that the $$U_a$$ are subgroups satisfying certain commutator conditions. Each $$M_a$$ is a coset of $$T$$ satisfying among other conditions the important relation $$U_{-a} \backslash \{1\} \subset U_a M_a U_a$$. The full list of axioms can be found in Definition (6.1.1). From these axioms a list of further properties of $$G$$ is derived. As a motivating example consider the group $$G = SL_2(K)$$ over any field $$K$$, together with the subgroup $$T$$ of diagonal matrices and the root system $$\Phi= \{a,-a\}$$ of type $$A_1$$. Let $$U_a$$ be the unipotent subgroup of upper triangular matrices with entries $$1$$ on the diagonal, and let $$U_{-a}$$ be the unipotent subgroup of lower triangular matrices with entries $$1$$ on the diagonal. Put $m = \left( \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right)$ and $$M_a = M_{-a} = Tm$$. These data form a root datum. In particular, by a straightforward calculation one can check that $$U_{-a} \backslash \{1\} \subset U_a M_a U_a$$ holds in this case. Bruhat and Tits then define the notion of a valuation on a root datum. 
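To make the last example concrete, the inclusion $U_{-a} \backslash \{1\} \subset U_a M_a U_a$ for $SL_2$ can be written out explicitly (a sketch we add for illustration; the factorization below is the standard one and is not quoted from the paper): for $u \neq 0$,
$\left( \begin{matrix} 1 & 0 \\ u & 1 \end{matrix} \right) = \left( \begin{matrix} 1 & u^{-1} \\ 0 & 1 \end{matrix} \right) \left( \begin{matrix} u^{-1} & 0 \\ 0 & u \end{matrix} \right) \left( \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right) \left( \begin{matrix} 1 & u^{-1} \\ 0 & 1 \end{matrix} \right),$
where the outer factors lie in $U_a$ and the product of the two middle factors lies in $Tm = M_a$. With this example in hand, we return to the notion of a valuation on the root datum just introduced.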
This is a family of maps $$\varphi_a: U_a \rightarrow \mathbb{R} \cup \{\infty\}$$ which are compatible with the structure of a root datum. The full list of axioms is stated in Definition (6.2.1). In particular, the truncated subsets $$U_{a,r} = \varphi_a^{-1} ([r, \infty])$$ are required to form a subgroup of $$U_a$$. Let us again consider the example $$SL_2(K)$$ and assume that $$K$$ is endowed with a non-trivial valuation map $$\omega: K \rightarrow \mathbb{R} \cup \{\infty\}$$. Then we can define a valuation on the root datum defined above by setting $\varphi_a \left( \left(\begin{matrix} 1 & u \\ 0 & 1 \end{matrix} \right) \right) = \varphi_{-a} \left( \left(\begin{matrix} 1 & 0 \\ u & 1 \end{matrix} \right) \right) = \omega (u).$ More generally, it is shown that valued root data exist for all groups $$\mathcal{G}(K)$$, where $$\mathcal{G}$$ is a split reductive $$K$$-group. Now Bruhat and Tits fix a root datum and a valuation on it. In (6.2.5) they define an affine space $$A$$ under the vector space $$V$$ (coming from the root datum) as the set of all valuations on the root datum depending on the fixed one in an affine-linear way. The precise notion here is “valuations équipollentes”. This affine space carries a natural action by the group $$N$$ generated by $$T$$ and the union of the $$M_a$$. Note that in our example $$G = SL_2(K)$$ the group $$N$$ is the normalizer of $$T$$, i.e., the group of monomial matrices in $$G$$. Now one can define an arrangement of affine hyperplanes in $$A$$. The hyperplane directions are given by the roots, and the set of translations leading to a family of parallel hyperplanes is basically given by the image of the valuation map. Note that in our example $$G= SL_2(K)$$, the space $$A$$ is one-dimensional, and the affine hyperplane arrangement simply corresponds to the image of the valuation map $$\omega(K^\ast)$$. In section 6.5 it is shown that if all valuation maps $$\varphi_a$$ have discrete image, then the notion of a valuation on a root datum gives rise to an affine Tits system. This provides the link to the theory developed in the first five sections. Hence in the discrete case there is an associated building. Also in the general case of non-discrete valuation maps, one can define a building. This is the topic of section seven. For every point $$x$$ in $$A$$ and every root $$a$$ one determines the minimal affine halfspace with direction $$a$$ in our affine hyperplane arrangement which contains $$x$$. This corresponds to an element $$r$$ in the image of the valuation map, and hence to a subgroup $$U_{a,r}$$ of $$U_a$$. Now let $$P_x$$ be the subgroup of $$G$$ generated by all these $$U_{a,r}$$ together with the elements in $$N$$ fixing $$x$$. Then in (7.4.1) the building $$\mathcal{I}$$ is defined as the quotient of $$G \times A$$ after the following equivalence relation: $$(g,x) \sim (h,y)$$ if and only if there exists an $$n \in N$$ such that $$nx=y$$ under the action of $$N$$ on $$A$$ and $$g^{-1} hn \in P_x$$. This space carries a natural $$G$$-action by multiplication in the first factor. Besides, the polysimplicial structure on $$A$$ can be extended to a polysimplicial structure on $$\mathcal{I}$$. To be precise, all faces in $$\mathcal{I}$$ are of the form $$g F$$, where $$g$$ is an element in $$G$$ and $$F$$ is a face in $$A$$. Every subset of the form $$gA$$ in $$\mathcal{I}$$ is called an apartment. It is shown in Theorem (7.4.18) that any two faces are contained in one apartment. 
Note that there is a distance function on $$A$$ given by the choice of a scalar product associated to the root system. We transfer it in a $$G$$-invariant way to the other apartments. Since any two points in $$\mathcal{I}$$ are contained in one apartment, this defines a $$G$$-invariant distance function on the whole building. It is well-defined, and gives a metric on the building $$\mathcal{I}$$. The building is contractible in the associated topology, see Proposition (7.4.20). Moreover, Iwasawa and Bruhat decompositions of $$G$$ are proven in section 7.3. In section eight it is shown that for dense valuations the maximal bounded subgroups of $$G$$ in a suitable sense are precisely the stabilizers of points. Section nine provides descent results for valued root data which will be used in part II of the paper. In section ten, classical groups over a field with a complete non-archimedean absolute value are investigated. It is shown how their natural root datum can be equipped with a valuation in the above sense. Note that more details on the buildings associated to classical groups may be found in the articles “Schémas en groupes et immeubles des groupes classiques sur un corps local” [Bull. Soc. Math. Fr. 112, 259–301 (1984; Zbl 0565.14028)] and “Schémas en groupes et immeubles des groupes classiques sur un corps local. II: Groups unitaires” [Bull. Soc. Math. Fr. 115, 141–195 (1985; Zbl 0636.20027)] by the same authors. The purpose of part II of the paper under review is to show that under rather general assumptions the group $$\mathcal{G}(K)$$ of $$K$$-rational points of a reductive connected group $$\mathcal{G}$$ over a field $$K$$ with a non-trivial non-archimedean absolute value actually possesses a valued root datum which is compatible with the valuation on $$K$$. Hence the results of part I can be applied to these groups, thereby proving the existence of Bruhat-Tits buildings. In particular, the fixed point and decomposition results shown in part I hold in this case. The construction of a valued root datum is achieved in a double descent process, already prepared by section nine of part I. The first descent step is used to proceed from split groups to quasi-split groups, the second descent step deals with the passage from strictly henselian to henselian fields. This is useful since a result of Steinberg states that a reductive group over a discretely valued henselian field $$K$$ with perfect residue field becomes quasi-split after base-change with the strict henselisation. An important technical tool in part II is the construction of group schemes over the ring of integers $$R$$ in $$K$$ such that their generic fiber is $$\mathcal{G}$$ or one of its subgroups. We refer to a scheme over $$R$$ also as a model of its generic fiber. Part II starts with a section recalling general facts in algebraic geometry which are used in the following sections. In section two, Bruhat and Tits investigate the properties of models for root groups and their products. In section three, this leads to the definition of a “donnée radicielle schématique” or schematic root datum. Let $$\mathcal{Z}$$ be the centralizer of the maximal split torus in $$\mathcal{G}$$, and denote by $$\mathcal{U}_a$$ the root group associated to a root $$a$$. Then a schematic root datum is a collection of models of the groups $$\mathcal{Z}$$ and $$\mathcal{U}_a$$ satisfying a list of conditions mainly stating that some natural maps extend from the generic fibers to the whole models. For details see Definition (3.1.1). 
In section four quasi-split groups $$\mathcal{G}$$ are considered. In this case there exists a Borel group in $$\mathcal{G}$$ which is defined over $$K$$, and the base change $$\mathcal{G}_{\tilde{K}}$$ of $$\mathcal{G}$$ to a Galois extension $$\tilde{K}$$ of $$K$$ is a split group. By a careful analysis of the effect of the Galois group of $$\tilde{K}/K$$ it is shown that the valued root datum for $$\mathcal{G}_{\tilde{K}}(\tilde{K})$$ constructed in part I can be descended to a valued root datum for $$\mathcal{G}(K)$$. Section five deals with a reductive group $$\mathcal{G}$$ over a henselian (for example complete) field $$K^\natural$$. If the base change $$\mathcal{G}_K$$ of $$\mathcal{G}$$ to the strict henselisation $$K$$ is quasi-split (and two other conditions hold, see (5.1.1)), then the valued root datum on the group $$\mathcal{G}_{K}(K)$$ can be descended to $$\mathcal{G}(K^\natural)$$. In this case the Bruhat-Tits building associated to $$\mathcal{G}$$ can be identified with the fixed-point set of the building associated to $$\mathcal{G}_{K}$$ with respect to the action of the Galois group of $$K/K^\natural$$. The required list of conditions is fulfilled if the henselian valuation on $$K^\natural$$ is discrete with perfect residue field. Hence in this case every reductive group over $$K^\natural$$ gives rise to a Bruhat-Tits building. Bruhat-Tits buildings are an indispensable tool for many different questions on non-archimedean reductive groups. In particular, one can use the action of such a group on its building to prove results on the structure of interesting subgroups. Let us only mention two important early papers in this direction, which were followed by numerous other results. H. Garland used buildings to prove his vanishing statement for the cohomology of discrete subgroups [“$$p$$-adic curvature and the cohomology of discrete subgroups of $$p$$-adic groups,” Ann. Math. (2) 97, 375–423 (1973; Zbl 0262.22010)], and A. Borel and J.-P. Serre used them to investigate the cohomology of $$S$$-arithmetic groups [“Cohomologie d’immeubles et de groupes $$S$$-arithmétiques,” Topology 15, 211–232 (1976; Zbl 0338.20055)]. Bruhat-Tits buildings are also very useful for questions in representation theory of reductive groups. For an overview of this topic see the introductory article [P. Schneider, “Gebäude in der Darstellungstheorie über lokalen Zahlkörpern,” Jahresber. Dtsch. Math.-Ver. 98, No.3, 135–145 (1996; Zbl 0872.11028)]. They also occur in various contexts in arithmetic geometry, for example in relation to Drinfeld’s $$p$$-adic upper half-spaces [V.G. Drinfeld, “Elliptic Modules,” Math. USSR, Sb. 23, 561–592 (1976); translation from Mat. Sb., n. Ser. 94(136), 594–627 (1974; Zbl 0321.14014)]. Recently it has been proven that Bruhat-Tits buildings can be realized inside Berkovich spaces. For split groups this has been shown by V. Berkovich in [“Spectral theory and analytic geometry over non-Archimedean fields.” Mathematical Surveys and Monographs, 33. Providence, RI: American Mathematical Society (1990; Zbl 0715.14013)], the general theory is contained in [A. Thuillier, B. Rémy, A. Werner, “Bruhat-Tits buildings from Berkovich’s point of view. I: Realizations and compactifications of buildings,” Ann. Sci. Éc. Norm. Supér. (4) 43, No.3, 461–554 (2010; Zbl 1198.51006)]. The theory initiated by Bruhat and Tits features an attractive interplay of arithmetic geometry, group theory and discrete geometry. Meanwhile, buildings have been generalized in various directions. 
All these applications and generalizations are based on the fundamental ideas of Bruhat and Tits, which even after decades continue to be very much alive in mathematical research.

### MSC:

14L99 Algebraic groups
20G25 Linear algebraic groups over local fields and their integers
22E99 Lie groups
MuTect2

Call somatic SNPs and indels via local re-assembly of haplotypes

Overview
MuTect2 is a somatic SNP and indel caller that combines the DREAM challenge-winning somatic genotyping engine of the original MuTect (Cibulskis et al., 2013) with the assembly-based machinery of HaplotypeCaller. The basic operation of MuTect2 proceeds similarly to that of the HaplotypeCaller.

Differences from HaplotypeCaller
While the HaplotypeCaller relies on a ploidy assumption (diploid by default) to inform its genotype likelihood and variant quality calculations, MuTect2 allows for a varying allelic fraction for each variant, as is often seen in tumors with purity less than 100%, multiple subclones, and/or copy number variation (either local or aneuploidy). MuTect2 also differs from the HaplotypeCaller in that it does apply some hard filters to variants before producing output. Note that the GVCF generation capabilities of HaplotypeCaller are NOT available in MuTect2, even though some of the relevant arguments are listed below. There are currently no plans to make GVCF calling available in MuTect2.

Usage examples
These are example commands that show how to run MuTect2 for typical use cases. Square brackets ("[ ]") indicate optional arguments. Note that parameter values shown here may not be the latest recommended; see the Best Practices documentation for detailed recommendations.

Tumor/Normal variant calling
java -jar GenomeAnalysisTK.jar \
  -T MuTect2 \
  -R reference.fasta \
  -I:tumor tumor.bam \
  -I:normal normal.bam \
  [--dbsnp dbSNP.vcf] \
  [--cosmic COSMIC.vcf] \
  [-L targets.interval_list] \
  -o output.vcf

Normal-only calling for panel of normals creation
java -jar GenomeAnalysisTK.jar \
  -T MuTect2 \
  -R reference.fasta \
  -I:tumor normal1.bam \
  [--dbsnp dbSNP.vcf] \
  [--cosmic COSMIC.vcf] \
  --artifact_detection_mode \
  [-L targets.interval_list] \
  -o output.normal1.vcf

For full PON creation, call each of your normals separately in artifact detection mode as shown above. Then use CombineVariants to output only sites where a variant was seen in at least two samples (a scripted sketch of this two-step workflow appears after the settings lists below):

java -jar GenomeAnalysisTK.jar \
  -T CombineVariants \
  -R reference.fasta \
  -V output.normal1.vcf -V output.normal2.vcf [-V output.normal2.vcf ...] \
  -minN 2 \
  --setKey "null" \
  --filteredAreUncalled \
  --filteredrecordsmergetype KEEP_IF_ANY_UNFILTERED \
  [-L targets.interval_list] \
  -o MuTect2_PON.vcf

Caveats
• As noted in several places in the documentation, MuTect2 is currently released under BETA status; it is NOT recommended for production work and is NOT available for commercial/for-profit licensing.
• MuTect2 currently only supports the calling of a single tumor-normal pair at a time.
• Tumor-only variant calling is possible but it is NOT supported and we will not answer any questions about it until it becomes a supported feature.
• Some of the arguments listed below are not functional; they are exclusive to HaplotypeCaller and are listed here due to technical entanglements in the code. This will be resolved in the upcoming GATK 4 release.

These Read Filters are automatically applied to the data by the Engine before processing by MuTect2.

Parallelism options
This tool can be run in multi-threaded mode using this option.

Downsampling settings
This tool applies the following downsampling settings by default.
• Mode: BY_SAMPLE
• To coverage: 1,000
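As referenced above, here is a minimal scripting sketch of the panel-of-normals workflow (our own illustration, not part of the GATK documentation): it runs MuTect2 on each normal alone in artifact detection mode and then merges the per-normal VCFs with CombineVariants. The jar location, reference path, and list of normal BAMs are placeholders you would replace with your own.

```python
import subprocess

# Hypothetical inputs: adjust to your own GATK jar, reference, and normal BAMs.
gatk = ["java", "-jar", "GenomeAnalysisTK.jar"]
reference = "reference.fasta"
normals = ["normal1.bam", "normal2.bam", "normal3.bam"]

# Step 1: call each normal separately in artifact detection mode.
normal_vcfs = []
for bam in normals:
    vcf = bam.replace(".bam", ".vcf")
    subprocess.run(gatk + ["-T", "MuTect2",
                           "-R", reference,
                           "-I:tumor", bam,              # the normal is passed via -I:tumor here
                           "--artifact_detection_mode",
                           "-o", vcf],
                   check=True)
    normal_vcfs.append(vcf)

# Step 2: keep only sites seen in at least two normals.
combine = gatk + ["-T", "CombineVariants",
                  "-R", reference,
                  "-minN", "2",
                  "--setKey", "null",
                  "--filteredAreUncalled",
                  "--filteredrecordsmergetype", "KEEP_IF_ANY_UNFILTERED",
                  "-o", "MuTect2_PON.vcf"]
for vcf in normal_vcfs:
    combine += ["-V", vcf]
subprocess.run(combine, check=True)
```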
ActiveRegion settings
This tool uses ActiveRegions on the reference.
• Minimum region size: 50 bp
• Maximum region size: 300 bp
• Extension increments: 100 bp

Command-line Arguments

Engine arguments
All tools inherit arguments from the GATK Engine's "CommandLineGATK" argument collection, which can be used to modify various aspects of the tool's function. For example, the -L argument directs the GATK engine to restrict processing to specific genomic intervals; or the -rf argument allows you to apply certain read filters to exclude some of the data from the analysis.

MuTect2 specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.

Argument name(s) | Default value | Summary

Optional Inputs
--alleles | none | Set of alleles to use in genotyping
--cosmic | [] | VCF file of COSMIC sites
--dbsnp -D | none | dbSNP file
--normal_panel -PON | [] | VCF file of sites observed in normal

Optional Outputs
--activeRegionOut -ARO | NA | Output the active region to this IGV formatted file
--activityProfileOut -APO | NA | Output the raw activity profile results in IGV format
--graphOutput -graph | NA | Write debug assembly graph information to this file
--out -o | stdout | File to which variants should be written

Optional Parameters
--contamination_fraction_to_filter -contamination | 0.0 | Fraction of contamination to aggressively remove
--dbsnp_normal_lod | 5.5 | LOD threshold for calling normal non-variant at dbsnp sites
| NA | trace this read name through the calling process
--genotyping_mode -gt_mode | DISCOVERY | Specifies how to determine the alternate alleles to use for genotyping
--group -G | [] | One or more classes/groups of annotations to apply to variant calls
--heterozygosity -hets | 0.001 | Heterozygosity value used to compute prior likelihoods for any locus
--heterozygosity_stdev -heterozygosityStandardDeviation | 0.01 | Standard deviation of heterozygosity for SNP and indel calling
--indel_heterozygosity -indelHeterozygosity | 1.25E-4 | Heterozygosity for indel calling
--initial_normal_lod | 0.5 | Initial LOD threshold for calling normal variant
--initial_tumor_lod | 4.0 | Initial LOD threshold for calling tumor variant
--max_alt_allele_in_normal_fraction | 0.03 | Threshold for maximum alternate allele fraction in normal
--max_alt_alleles_in_normal_count | 1 | Threshold for maximum alternate allele counts in normal
--max_alt_alleles_in_normal_qscore_sum | 20 | Threshold for maximum alternate allele quality score sum in normal
| 1000 | Maximum reads in an active region
--min_base_quality_score -mbq | 10 | Minimum base quality required to consider a base for calling
| 5 | Minimum number of reads sharing the same alignment start for each genomic location in an active region
--normal_lod | 2.2 | LOD threshold for calling normal non-germline
--pir_median_threshold | 10.0 | threshold for clustered read position artifact median
--power_constant_qscore | 30 | Phred scale quality score constant to use in power calculations
--sample_ploidy -ploidy | 2 | Ploidy per sample. For pooled data, set to (Number of samples in each pool * Sample Ploidy).
--standard_min_confidence_threshold_for_calling -stand_call_conf | 10.0 | The minimum phred-scaled confidence threshold at which variants should be called
--tumor_lod | 6.3 | LOD threshold for calling tumor variant

Optional Flags
--annotateNDA -nda | false | Annotate number of alleles observed
| false | turn on clustered read position filter
--enable_strand_artifact_filter | false | turn on strand artifact filter
--useNewAFCalculator -newQual | false | Use new AF model instead of the so-called exact model
--activeRegionIn -AR | NA | Use this interval list file as the active regions to process
--comp | [] | comparison VCF file
--bamOutput -bamout | NA | File to which assembled haplotypes should be written
--activeProbabilityThreshold -ActProbThresh | 0.002 | Threshold for the probability of a profile state being active.
--activeRegionExtension | NA | The active region extension; if not provided defaults to Walker annotated default
--activeRegionMaxSize | NA | The active region maximum size; if not provided defaults to Walker annotated default
--annotation -A | [DepthPerAlleleBySample, BaseQualitySumPerAlleleBySample, TandemRepeatAnnotator, OxoGReadCounts] | One or more specific annotations to apply to variant calls
--bamWriterType | CALLED_HAPLOTYPES | Which haplotypes should be written to the BAM
--bandPassSigma | NA | The sigma of the band pass filter Gaussian kernel; if not provided defaults to Walker annotated default
--contamination_fraction_per_sample_file -contaminationFile | NA | Contamination per sample
--emitRefConfidence -ERC | NONE | Mode for emitting reference confidence scores
--excludeAnnotation -XA | [SpanningDeletions] | One or more specific annotations to exclude
--gcpHMM | 10 | Flat gap continuation penalty for use in the Pair HMM
--input_prior -inputPrior | [] | Input prior for calls
--kmerSize
--max_alternate_alleles -maxAltAlleles | 6 | Maximum number of alternate alleles to genotype
--max_genotype_count -maxGT | 1024 | Maximum number of genotypes to consider at any site
--max_num_PL_values -maxNumPLValues | 100 | Maximum number of PL values to output
--maxNumHaplotypesInPopulation | 128 | Maximum number of haplotypes to consider for your population
| 30000 | Maximum reads per sample given to traversal map() function
| 10000000 | Maximum total reads given to traversal map() function
--minDanglingBranchLength | 4 | Minimum length of a dangling branch to attempt recovery
--minPruning | 2 | Minimum support to not prune paths in the graph
--numPruningSamples | 1 | Number of samples that must pass the minPruning threshold
--output_mode -out_mode | EMIT_VARIANTS_ONLY | Which type of calls we should output
-globalMAPQ | 45 | The global assumed mismapping rate for reads
--allowNonUniqueKmersInRef | false | Allow graphs that have non-unique kmers in the reference
--allSitePLs | false | Annotate all sites with PLs
--artifact_detection_mode | false | Enable artifact detection for creating panels of normals
--consensus | false | 1000G consensus mode
--debug | false | Print out very verbose debug information about each triggering active region
--disableOptimizations | false | Don't skip calculations in ActiveRegions with no variants
--doNotRunPhysicalPhasing | false | Disable physical phasing
--dontIncreaseKmerSizesForCycles | false | Disable iterating over kmer sizes when graph cycles are detected
--dontTrimActiveRegions | false | If specified, we will not trim down the active region from the full region (active + extension) to just the active interval for genotyping
--dontUseSoftClippedBases | false | If specified, we will not analyze soft clipped bases in the reads
-edr | false | Emit reads that are dropped for filtering, trimming, realignment
failure --forceActive false If provided, all bases will be tagged as active --m2debug false Print out very verbose M2 debug information false Use the contamination-filtered read maps for the purposes of annotating variants Argument details Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above. --activeProbabilityThreshold / -ActProbThresh Threshold for the probability of a profile state being active. Double  0.002  [ [ 0  1 ] ] --activeRegionExtension / -activeRegionExtension The active region extension; if not provided defaults to Walker annotated default Integer  NA --activeRegionIn / -AR Use this interval list file as the active regions to process List[IntervalBinding[Feature]]  NA --activeRegionMaxSize / -activeRegionMaxSize The active region maximum size; if not provided defaults to Walker annotated default Integer  NA --activeRegionOut / -ARO Output the active region to this IGV formatted file If provided, this walker will write out its active and inactive regions to this file in the IGV formatted TAB deliminated output: http://www.broadinstitute.org/software/igv/IGV Intended to make debugging the active region calculations easier PrintStream  NA --activityProfileOut / -APO Output the raw activity profile results in IGV format If provided, this walker will write out its activity profile (per bp probabilities of being active) to this file in the IGV formatted TAB deliminated output: http://www.broadinstitute.org/software/igv/IGV Intended to make debugging the activity profile calculations easier PrintStream  NA --alleles / -alleles Set of alleles to use in genotyping When --genotyping_mode is set to GENOTYPE_GIVEN_ALLELES mode, the caller will genotype the samples using only the alleles provide in this callset. Note that this is not well tested in HaplotypeCaller, and is definitely not suitable for use with HaplotypeCaller in -ERC GVCF mode. In addition, it does not apply to MuTect2 at all. This argument supports reference-ordered data (ROD) files in the following formats: BCF2, VCF, VCF3 RodBinding[VariantContext]  none --allowNonUniqueKmersInRef / -allowNonUniqueKmersInRef Allow graphs that have non-unique kmers in the reference By default, the program does not allow processing of reference sections that contain non-unique kmers. Disabling this check may cause problems in the assembly graph. boolean  false --allSitePLs / -allSitePLs Annotate all sites with PLs Experimental argument FOR USE WITH UnifiedGenotyper ONLY: if SNP likelihood model is specified, and if EMIT_ALL_SITES output mode is set, when we set this argument then we will also emit PLs at all sites. This will give a measure of reference confidence and a measure of which alt alleles are more plausible (if any). WARNINGS: - This feature will inflate VCF file size considerably. - All SNP ALT alleles will be emitted with corresponding 10 PL values. - An error will be emitted if EMIT_ALL_SITES is not set, or if anything other than diploid SNP model is used - THIS WILL NOT WORK WITH HaplotypeCaller, GenotypeGVCFs or MuTect2! Use HaplotypeCaller with -ERC GVCF then GenotypeGVCFs instead. See the Best Practices documentation for more information. boolean  false --annotateNDA / -nda Annotate number of alleles observed Depending on the value of the --max_alternate_alleles argument, we may genotype only a fraction of the alleles being sent on for genotyping. 
Using this argument instructs the genotyper to annotate (in the INFO field) the number of alternate alleles that were originally discovered (but not necessarily genotyped) at the site. boolean  false --annotation / -A One or more specific annotations to apply to variant calls Which annotations to add to the output VCF file. See the VariantAnnotator -list argument to view available annotations. --artifact_detection_mode / NA Enable artifact detection for creating panels of normals Artifact detection mode is used to prepare a panel of normals. This maintains the specified tumor LOD threshold, but disables the remaining pragmatic filters. See usage examples above for more information. boolean  false --bamOutput / -bamout File to which assembled haplotypes should be written The assembled haplotypes and locally realigned reads will be written as BAM to this file if requested. This is intended to be used only for troubleshooting purposes, in specific areas where you want to better understand why the caller is making specific calls. Turning on this mode may result in serious performance cost for the caller, so we do NOT recommend using this argument systematically as it will significantly increase runtime. The candidate haplotypes (called or all, depending on mode) are emitted as single reads covering the entire active region, coming from sample "HC" and a special read group called "ArtificialHaplotype". This will increase the pileup depth compared to what would be expected from the reads only, especially in complex regions. The reads are written out containing an "HC" tag (integer) that encodes which haplotype each read best matches according to the haplotype caller's likelihood calculation. The use of this tag is primarily intended to allow good coloring of reads in IGV. Simply go to "Color Alignments By > Tag" and enter "HC" to more easily see which reads go with these haplotype. You can also tell IGV to group reads by sample, which will separate the potential haplotypes from the reads. These features are illustrated in this screenshot. Note that only reads that are actually informative about the haplotypes are emitted with the HC tag. By informative we mean that there's a meaningful difference in the likelihood of the read coming from one haplotype compared to the next best haplotype. When coloring reads by HC tag in IGV, uninformative reads will remain grey. Note also that not every input read is emitted to the bam in this mode. To include all trimmed, downsampled, filtered and uninformative reads, add the --emitDroppedReads argument. If multiple BAMs are passed as input to the tool (as is common for MuTect2), then they will be combined in the -bamout output and tagged with the appropriate sample names. GATKSAMFileWriter  NA --bamWriterType / -bamWriterType Which haplotypes should be written to the BAM The type of -bamout output we want to see. This determines whether HC will write out all of the haplotypes it considered (top 128 max) or just the ones that were selected as alleles and assigned to samples. The --bamWriterType argument is an enumerated type (Type), which can have one of the following values: ALL_POSSIBLE_HAPLOTYPES A mode that's for method developers. Writes out all of the possible haplotypes considered, as well as reads aligned to each CALLED_HAPLOTYPES A mode for users. Writes out the reads aligned only to the called haplotypes. 
Useful to understand why the caller is calling what it is Type  CALLED_HAPLOTYPES --bandPassSigma / -bandPassSigma The sigma of the band pass filter Gaussian kernel; if not provided defaults to Walker annotated default Double  NA --comp / -comp comparison VCF file If a call overlaps with a record from the provided comp track, the INFO field will be annotated as such in the output with the track name (e.g. -comp:FOO will have 'FOO' in the INFO field). Records that are filtered in the comp track will be ignored. Note that 'dbSNP' has been special-cased (see the --dbsnp argument). This argument supports reference-ordered data (ROD) files in the following formats: BCF2, VCF, VCF3 List[RodBinding[VariantContext]]  [] --consensus / -consensus 1000G consensus mode This argument is specifically intended for 1000G consensus analysis mode. Setting this flag will inject all provided alleles to the assembly graph but will not forcibly genotype all of them. boolean  false --contamination_fraction_per_sample_file / -contaminationFile Contamination per sample This argument specifies a file with two columns "sample" and "contamination" (separated by a tab) specifying the contamination level for those samples (where contamination is given as a decimal number, not an integer) per line. There should be no header. Samples that do not appear in this file will be processed with CONTAMINATION_FRACTION. File  NA --contamination_fraction_to_filter / -contamination Fraction of contamination to aggressively remove If this fraction is greater is than zero, the caller will aggressively attempt to remove contamination through biased down-sampling of reads (for all samples). Basically, it will ignore the contamination fraction of reads for each alternate allele. So if the pileup contains N total bases, then we will try to remove (N * contamination fraction) bases for each alternate allele. double  0.0  [ [ -∞  ∞ ] ] --cosmic / -cosmic VCF file of COSMIC sites MuTect2 has the ability to use COSMIC data in conjunction with dbSNP to adjust the threshold for evidence of a variant in the normal. If a variant is present in dbSNP, but not in COSMIC, then more evidence is required from the normal sample to prove the variant is not present in germline. This argument supports reference-ordered data (ROD) files in the following formats: BCF2, VCF, VCF3 List[RodBinding[VariantContext]]  [] --dbsnp / -D dbSNP file rsIDs from this file are used to populate the ID column of the output. Also, the DB INFO flag will be set when appropriate. dbSNP overlap is only used to require more evidence of absence in the normal if the variant in question has been seen before in germline. This argument supports reference-ordered data (ROD) files in the following formats: BCF2, VCF, VCF3 RodBinding[VariantContext]  none --dbsnp_normal_lod / NA LOD threshold for calling normal non-variant at dbsnp sites The LOD threshold for the normal is typically made more strict if the variant has been seen in dbSNP (i.e. another normal sample). We thus require MORE evidence that a variant is NOT seen in this tumor's normal if it has been observed as a germline variant before. 
double  5.5  [ [ -∞  ∞ ] ] --debug / -debug Print out very verbose debug information about each triggering active region boolean  false trace this read name through the calling process String  NA --disableOptimizations / -disableOptimizations Don't skip calculations in ActiveRegions with no variants If set, certain "early exit" optimizations in HaplotypeCaller, which aim to save compute and time by skipping calculations if an ActiveRegion is determined to contain no variants, will be disabled. This is most likely to be useful if you're using the -bamout argument to examine the placement of reads following reassembly and are interested in seeing the mapping of reads in regions with no variations. Setting the -forceActive and -dontTrimActiveRegions flags may also be helpful. boolean  false --doNotRunPhysicalPhasing / -doNotRunPhysicalPhasing Disable physical phasing As of GATK 3.3, HaplotypeCaller outputs physical (read-based) information (see version 3.3 release notes and documentation for details). This argument disables that behavior. boolean  false --dontIncreaseKmerSizesForCycles / -dontIncreaseKmerSizesForCycles Disable iterating over kmer sizes when graph cycles are detected When graph cycles are detected, the normal behavior is to increase kmer sizes iteratively until the cycles are resolved. Disabling this behavior may cause the program to give up on assembling the ActiveRegion. boolean  false --dontTrimActiveRegions / -dontTrimActiveRegions If specified, we will not trim down the active region from the full region (active + extension) to just the active interval for genotyping boolean  false --dontUseSoftClippedBases / -dontUseSoftClippedBases If specified, we will not analyze soft clipped bases in the reads boolean  false Emit reads that are dropped for filtering, trimming, realignment failure Determines whether dropped reads will be tracked and emitted when -bamout is specified. Use this in combination with a specific interval of interest to avoid accumulating a large number of reads in the -bamout file. boolean  false --emitRefConfidence / -ERC Mode for emitting reference confidence scores The reference confidence mode makes it possible to emit variant calls in GVCF format, which includes either a per-base pair (BP_RESOLUTION) or a summarized (GVCF) confidence estimate for each position being strictly homozygous-reference. See http://www.broadinstitute.org/gatk/guide/article?id=2940 for more details of how this works. Note that if you use -ERC to emit a GVCF or BP_RESOLUTION output, you either need to give the output file the extension .g.vcf or set the parameters -variant_index_type LINEAR and -variant_index_parameter 128000 (with those exact values!). This has to do with index compression. The --emitRefConfidence argument is an enumerated type (ReferenceConfidenceMode), which can have one of the following values: NONE Regular calling without emitting reference confidence calls. BP_RESOLUTION Reference model emitted site by site. GVCF Reference model emitted with condensed non-variant blocks, i.e. the GVCF format. ReferenceConfidenceMode  NONE turn on clustered read position filter boolean  false --enable_strand_artifact_filter / NA turn on strand artifact filter boolean  false --excludeAnnotation / -XA One or more specific annotations to exclude Which annotations to exclude from output in the VCF file. Note that this argument has higher priority than the -A or -G arguments, so annotations will be excluded even if they are explicitly included with the other options. 
List[String]  [SpanningDeletions] --forceActive / -forceActive If provided, all bases will be tagged as active For the active region walker to treat all bases as active. Useful for debugging when you want to force something like the HaplotypeCaller to process a specific interval you provide the GATK boolean  false --gcpHMM / -gcpHMM Flat gap continuation penalty for use in the Pair HMM int  10  [ [ -∞  ∞ ] ] --genotyping_mode / -gt_mode Specifies how to determine the alternate alleles to use for genotyping The --genotyping_mode argument is an enumerated type (GenotypingOutputMode), which can have one of the following values: DISCOVERY The genotyper will choose the most likely alternate allele GENOTYPE_GIVEN_ALLELES Only the alleles passed by the user should be considered. GenotypingOutputMode  DISCOVERY --graphOutput / -graph Write debug assembly graph information to this file This argument is meant for debugging and is not immediately useful for normal analysis use. PrintStream  NA --group / -G One or more classes/groups of annotations to apply to variant calls Which groups of annotations to add to the output VCF file. See the VariantAnnotator -list argument to view available groups. String[]  [] --heterozygosity / -hets Heterozygosity value used to compute prior likelihoods for any locus The expected heterozygosity value used to compute prior probability that a locus is non-reference. See https://software.broadinstitute.org/gatk/documentation/article?id=8603 for more details. Double  0.001  [ [ -∞  ∞ ] ] --heterozygosity_stdev / -heterozygosityStandardDeviation Standard deviation of eterozygosity for SNP and indel calling. The standard deviation of the distribution of alt allele fractions. The above heterozygosity parameters give the *mean* of this distribution; this parameter gives its spread. double  0.01  [ [ -∞  ∞ ] ] --indel_heterozygosity / -indelHeterozygosity Heterozygosity for indel calling This argument informs the prior probability of having an indel at a site. double  1.25E-4  [ [ -∞  ∞ ] ] --initial_normal_lod / NA Initial LOD threshold for calling normal variant This is the LOD threshold corresponding to the minimum amount of reference evidence in the normal for a variant to be considered somatic and emitted in the VCF double  0.5  [ [ -∞  ∞ ] ] --initial_tumor_lod / NA Initial LOD threshold for calling tumor variant This is the LOD threshold that a variant must pass in the tumor to be emitted to the VCF. Note that the variant may pass this threshold yet still be annotated as FILTERed based on other criteria. double  4.0  [ [ -∞  ∞ ] ] --input_prior / -inputPrior Input prior for calls By default, the prior specified with the argument --heterozygosity/-hets is used for variant discovery at a particular locus, using an infinite sites model (see e.g. Waterson, 1975 or Tajima, 1996). This model asserts that the probability of having a population of k variant sites in N chromosomes is proportional to theta/k, for 1=1:N. However, there are instances where using this prior might not be desirable, e.g. for population studies where prior might not be appropriate, as for example when the ancestral status of the reference allele is not known. 
This argument allows you to manually specify a list of probabilities for each AC>1 to be used as priors for genotyping, with the following restrictions: only diploid calls are supported; you must specify 2 * N values where N is the number of samples; probability values must be positive and specified in Double format, in linear space (not log10 space nor Phred-scale); and all values must sume to 1. For completely flat priors, specify the same value (=1/(2*N+1)) 2*N times, e.g. -inputPrior 0.33 -inputPrior 0.33 for the single-sample diploid case. List[Double]  [] --kmerSize / -kmerSize Multiple kmer sizes can be specified, using e.g. -kmerSize 10 -kmerSize 25. List[Integer]  [10, 25] --m2debug / -m2debug Print out very verbose M2 debug information boolean  false --max_alt_allele_in_normal_fraction / NA Threshold for maximum alternate allele fraction in normal This argument is used for the internal "alt_allele_in_normal" filter. A variant will PASS the filter if the value tested is lower or equal to the threshold value. It will FAIL the filter if the value tested is greater than the max threshold value. double  0.03  [ [ -∞  ∞ ] ] --max_alt_alleles_in_normal_count / NA Threshold for maximum alternate allele counts in normal This argument is used for the internal "alt_allele_in_normal" filter. A variant will PASS the filter if the value tested is lower or equal to the threshold value. It will FAIL the filter if the value tested is greater than the max threshold value. int  1  [ [ -∞  ∞ ] ] --max_alt_alleles_in_normal_qscore_sum / NA Threshold for maximum alternate allele quality score sum in normal This argument is used for the internal "alt_allele_in_normal" filter. A variant will PASS the filter if the value tested is lower or equal to the threshold value. It will FAIL the filter if the value tested is greater than the max threshold value. int  20  [ [ -∞  ∞ ] ] --max_alternate_alleles / -maxAltAlleles Maximum number of alternate alleles to genotype If there are more than this number of alternate alleles presented to the genotyper (either through discovery or GENOTYPE_GIVEN_ALLELES), then only this many alleles will be used. Note that genotyping sites with many alternate alleles is both CPU and memory intensive and it scales exponentially based on the number of alternate alleles. Unless there is a good reason to change the default value, we highly recommend that you not play around with this parameter. See also {@link #MAX_GENOTYPE_COUNT}. int  6  [ [ -∞  ∞ ] ] --max_genotype_count / -maxGT Maximum number of genotypes to consider at any site If there are more than this number of genotypes at a locus presented to the genotyper, then only this many genotypes will be used. This is intended to deal with sites where the combination of high ploidy and high alt allele count can lead to an explosion in the number of possible genotypes, with extreme adverse effects on runtime performance. How does it work? The possible genotypes are simply different ways of partitioning alleles given a specific ploidy assumption. Therefore, we remove genotypes from consideration by removing alternate alleles that are the least well supported. The estimate of allele support is based on the ranking of the candidate haplotypes coming out of the graph building step. Note however that the reference allele is always kept. The maximum number of alternative alleles used in the genotyping step will be the lesser of the two: 1. 
the largest number of alt alleles, given ploidy, that yields a genotype count no higher than {@link #MAX_GENOTYPE_COUNT} 2. the value of {@link #MAX_ALTERNATE_ALLELES} As noted above, genotyping sites with large genotype counts is both CPU and memory intensive. Unless you have a good reason to change the default value, we highly recommend that you not play around with this parameter. See also {@link #MAX_ALTERNATE_ALLELES}. int  1024  [ [ -∞  ∞ ] ] --max_num_PL_values / -maxNumPLValues Maximum number of PL values to output Determines the maximum number of PL values that will be logged in the output. If the number of genotypes (which is determined by the ploidy and the number of alleles) exceeds the value provided by this argument, then output of all of the PL values will be suppressed. int  100  [ [ -∞  ∞ ] ] --maxNumHaplotypesInPopulation / -maxNumHaplotypesInPopulation Maximum number of haplotypes to consider for your population The assembly graph can be quite complex, and could imply a very large number of possible haplotypes. Each haplotype considered requires N PairHMM evaluations if there are N reads across all samples. In order to control the run of the haplotype caller we only take maxNumHaplotypesInPopulation paths from the graph, in order of their weights, no matter how many paths are possible to generate from the graph. Putting this number too low will result in dropping true variation because paths that include the real variant are not even considered. You can consider increasing this number when calling organisms with high heterozygosity. int  128  [ [ -∞  ∞ ] ] Maximum reads per sample given to traversal map() function What is the maximum number of reads we're willing to hold in memory per sample during the traversal? This limits our exposure to unusually large amounts of coverage in the engine. int  30000  [ [ -∞  ∞ ] ] Maximum reads in an active region When downsampling, level the coverage of the reads in each sample to no more than maxReadsInRegionPerSample reads, not reducing coverage at any read start to less than minReadsPerAlignmentStart int  1000  [ [ -∞  ∞ ] ] Maximum total reads given to traversal map() function What is the maximum number of reads we're willing to hold in memory per sample during the traversal? This limits our exposure to unusually large amounts of coverage in the engine. int  10000000  [ [ -∞  ∞ ] ] --min_base_quality_score / -mbq Minimum base quality required to consider a base for calling byte  10  [ [ -∞  ∞ ] ] --minDanglingBranchLength / -minDanglingBranchLength Minimum length of a dangling branch to attempt recovery When constructing the assembly graph we are often left with "dangling" branches. The assembly engine attempts to rescue these branches by merging them back into the main graph. This argument describes the minimum length of a dangling branch needed for the engine to try to rescue it. A smaller number here will lead to higher sensitivity to real variation but also to a higher number of false positives. int  4  [ [ -∞  ∞ ] ] --minPruning / -minPruning Minimum support to not prune paths in the graph Paths with fewer supporting kmers than the specified threshold will be pruned from the graph. Be aware that this argument can dramatically affect the results of variant calling and should only be used with great caution. 
Using a prune factor of 1 (or below) will prevent any pruning from the graph, which is generally not ideal; it can make the calling much slower and even less accurate (because it can prevent effective merging of "tails" in the graph). Higher values tend to make the calling much faster, but also lowers the sensitivity of the results (because it ultimately requires higher depth to produce calls). int  2  [ [ -∞  ∞ ] ] Minimum number of reads sharing the same alignment start for each genomic location in an active region int  5  [ [ -∞  ∞ ] ] --normal_lod / NA LOD threshold for calling normal non-germline This is a measure of the minimum evidence to support that a variant observed in the tumor is not also present in the normal. double  2.2  [ [ -∞  ∞ ] ] --normal_panel / -PON VCF file of sites observed in normal A panel of normals can be a useful (optional) input to help filter out commonly seen sequencing noise that may appear as low allele-fraction somatic variants. This argument supports reference-ordered data (ROD) files in the following formats: BCF2, VCF, VCF3 List[RodBinding[VariantContext]]  [] --numPruningSamples / -numPruningSamples Number of samples that must pass the minPruning threshold If fewer samples than the specified number pass the minPruning threshold for a given path, that path will be eliminated from the graph. int  1  [ [ -∞  ∞ ] ] --out / -o File to which variants should be written VariantContextWriter  stdout --output_mode / -out_mode Which type of calls we should output Experimental argument FOR USE WITH UnifiedGenotyper ONLY. When using HaplotypeCaller, use -ERC instead. When using GenotypeGVCFs, see -allSites. The --output_mode argument is an enumerated type (OutputMode), which can have one of the following values: EMIT_VARIANTS_ONLY produces calls only at variant sites EMIT_ALL_CONFIDENT_SITES produces calls at variant sites and confident reference sites EMIT_ALL_SITES produces calls at any callable site regardless of confidence; this argument is intended only for point mutations (SNPs) in DISCOVERY mode or generally when running in GENOTYPE_GIVEN_ALLELES mode; it will by no means produce a comprehensive set of indels in DISCOVERY mode OutputMode  EMIT_VARIANTS_ONLY The global assumed mismapping rate for reads The phredScaledGlobalReadMismappingRate reflects the average global mismapping rate of all reads, regardless of their mapping quality. This term effects the probability that a read originated from the reference haplotype, regardless of its edit distance from the reference, in that the read could have originated from the reference haplotype but from another location in the genome. Suppose a read has many mismatches from the reference, say like 5, but has a very high mapping quality of 60. Without this parameter, the read would contribute 5 * Q30 evidence in favor of its 5 mismatch haplotype compared to reference, potentially enough to make a call off that single read for all of these events. With this parameter set to Q30, though, the maximum evidence against any haplotype that this (and any) read could contribute is Q30. Set this term to any negative number to turn off the global mapping rate. 
int  45  [ [ -∞  ∞ ] ] This argument is used for the M1-style read position filter double  3.0  [ [ -∞  ∞ ] ] --pir_median_threshold / NA threshold for clustered read position artifact median This argument is used for the M1-style read position filter double  10.0  [ [ -∞  ∞ ] ] --power_constant_qscore / NA Phred scale quality score constant to use in power calculations This argument is used for the M1-style strand bias filter int  30  [ [ -∞  ∞ ] ] --sample_ploidy / -ploidy Ploidy per sample. For pooled data, set to (Number of samples in each pool * Sample Ploidy). Sample ploidy - equivalent to number of chromosome copies per pool. For pooled experiments this should be set to the number of samples in pool multiplied by individual sample ploidy. int  2  [ [ -∞  ∞ ] ] --standard_min_confidence_threshold_for_calling / -stand_call_conf The minimum phred-scaled confidence threshold at which variants should be called The minimum phred-scaled Qscore threshold to separate high confidence from low confidence calls. Only genotypes with confidence >= this threshold are emitted as called sites. A reasonable threshold is 30 for high-pass calling (this is the default). double  10.0  [ [ -∞  ∞ ] ] --tumor_lod / NA LOD threshold for calling tumor variant Only variants with tumor LODs exceeding this threshold can pass filtering. double  6.3  [ [ -∞  ∞ ] ] Use the contamination-filtered read maps for the purposes of annotating variants boolean  false --useNewAFCalculator / -newQual Use new AF model instead of the so-called exact model This activates a model for calculating QUAL that was introduced in version 3.7 (November 2016). We expect this model will become the default in future versions. boolean  false
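To see how a handful of these arguments combine in practice, here is a minimal sketch of a MuTect2 run launched from Python. Every path and file name (GenomeAnalysisTK.jar, reference.fasta, tumor.bam, normal.bam, dbsnp.vcf, cosmic.vcf, pon.vcf, somatic_calls.vcf) is a hypothetical placeholder; the --dbsnp, --cosmic, --normal_panel, --tumor_lod and -o flags are the ones documented above, while -T, -R and the -I:tumor / -I:normal input tags follow the usual GATK 3.x engine conventions assumed here.

```python
import subprocess

# Hypothetical file names; adjust to your own data. The --dbsnp, --cosmic,
# --normal_panel, --tumor_lod and -o arguments are documented above; -T selects
# the tool and -I:tumor / -I:normal tag the input BAMs (GATK 3.x convention).
cmd = [
    "java", "-jar", "GenomeAnalysisTK.jar",
    "-T", "MuTect2",
    "-R", "reference.fasta",
    "-I:tumor", "tumor.bam",
    "-I:normal", "normal.bam",
    "--dbsnp", "dbsnp.vcf",
    "--cosmic", "cosmic.vcf",
    "--normal_panel", "pon.vcf",   # optional panel of normals
    "--tumor_lod", "6.3",          # the default threshold, written out explicitly
    "-o", "somatic_calls.vcf",
]
subprocess.run(cmd, check=True)    # raises CalledProcessError if the run fails
```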
{}
# How to interpret the output of the summary method for an lm object in R? [duplicate] I am using sample algae data to understand data mining a bit more. I have used the following commands: data(algae) algae <- algae[-manyNAs(algae),] clean.algae <-knnImputation(algae, k = 10) lm.a1 <- lm(a1 ~ ., data = clean.algae[, 1:12]) summary(lm.a1) Subsequently I received the results below. However I can not find any good documentation which explains what most of this means, especially Std. Error,t value and Pr. Can someone please be kind enough to shed some light please? Most importantly, which variables should I look at to ascertain on whether a model is giving me good prediction data? Call: lm(formula = a1 ~ ., data = clean.algae[, 1:12]) Residuals: Min 1Q Median 3Q Max -37.679 -11.893 -2.567 7.410 62.190 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 42.942055 24.010879 1.788 0.07537 . seasonspring 3.726978 4.137741 0.901 0.36892 seasonsummer 0.747597 4.020711 0.186 0.85270 seasonwinter 3.692955 3.865391 0.955 0.34065 sizemedium 3.263728 3.802051 0.858 0.39179 sizesmall 9.682140 4.179971 2.316 0.02166 * speedlow 3.922084 4.706315 0.833 0.40573 speedmedium 0.246764 3.241874 0.076 0.93941 mxPH -3.589118 2.703528 -1.328 0.18598 mnO2 1.052636 0.705018 1.493 0.13715 Cl -0.040172 0.033661 -1.193 0.23426 NO3 -1.511235 0.551339 -2.741 0.00674 ** NH4 0.001634 0.001003 1.628 0.10516 oPO4 -0.005435 0.039884 -0.136 0.89177 PO4 -0.052241 0.030755 -1.699 0.09109 . Chla -0.088022 0.079998 -1.100 0.27265 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 17.65 on 182 degrees of freedom Multiple R-squared: 0.3731, Adjusted R-squared: 0.3215 F-statistic: 7.223 on 15 and 182 DF, p-value: 2.444e-12 • An annotated regression output can be found at: ats.ucla.edu/stat/stata/output/reg_output.htm The layout of the output might look a little different (it's using STATA rather than R) but the content is more or less the same. Hope this helps. May 17, 2013 at 0:21 • You'll also want to read this: interpretation-of-rs-lm-output. After having read those, see if you still have any questions left, & if you do, edit your Q to clarify what you still need to know. May 17, 2013 at 1:36 It sounds like you need a decent basic statistics text that covers at least basic location tests, simple regression and multiple regression. Std. Error,t value and Pr. 1. Std. Error is the standard deviation of the sampling distribution of the estimate of the coefficient under the standard regression assumptions. Such standard deviations are called standard errors of the corresponding quantity (the coefficient estimate in this case). In the case of simple regression, it's usually denoted $s_{\hat \beta}$, as here. Also see this For multiple regression, it's a little more complicated, but if you don't know what these things are it's probably best to understand them in the context of simple regression first. 2. t value is the value of the t-statistic for testing whether the corresponding regression coefficient is different from 0. The formula for computing it is given at the first link above. 3. Pr. is the p-value for the hypothesis test for which the t value is the test statistic. It tells you the probability of a test statistic at least as unusual as the one you obtained, if the null hypothesis were true. 
In this case, the null hypothesis is that the true coefficient is zero; if that probability is low, it's suggesting that it would be rare to get a result as unusual as this if the coefficient were really zero. Most importantly, which variables should I look at to ascertain on whether a model is giving me good prediction data? What do you mean by 'good prediction data'? Can you make it clearer what you're asking? The Residual standard error, which is usually called $s$, represents the standard deviation of the residuals. It's a measure of how close the fit is to the points. The Multiple R-squared, also called the coefficient of determination is the proportion of the variance in the data that's explained by the model. The more variables you add - even if they don't help - the larger this will be. The Adjusted one reduces that to account for the number of variables in the model. The $F$ statistic on the last line is telling you whether the regression as a whole is performing 'better than random' - any set of random predictors will have some relationship with the response, so it's seeing whether your model fits better than you'd expect if all your predictors had no relationship with the response (beyond what would be explained by that randomness). This is used for a test of whether the model outperforms 'noise' as a predictor. The p-value in the last row is the p-value for that test, essentially comparing the full model you fitted with an intercept-only model. Where do the data come from? Is this in some package? The Standard error is an estimate of the variance of the strength of the effect, or the strength of the relationship between each causal variable and the predicted variable. If it's high, then the effect size will have to be stronger for us to be able to be sure that it's a real effect, and not just an artefact of randomness. The t-statistic is an estimate of how extreme the value you see is, relative to the standard error (assuming a normal distribution, centred on the null hypothesis). The p-value is an estimate of the probability of seeing a t-value as extreme, or more extreme the one you got, if you assume that the null hypothesis is true (the null hypothesis is usually "no effect", unless something else is specified). So if the p-value is very low, then there is a higher probability that you're seeing data that is counter-indicative of zero effect. In other situations, you can get a p-value based on other statistics and variables. Unfortunately, if that explanation of the p-value is confusing, that's because the entire concept is confusing. It's important to note that technically a low p-value does not show high probability of an effect, although it may indicate that. Have a read of some of the high-voted p-value questions, to get an idea about what's going on here. • please correct me if i am wrong but the higher the standard error the stronger the prediction model? May 17, 2013 at 0:43 • This is not correct. High standard errors tell you that you can't estimate the coefficient very precisely - the 'true' coefficient may well be far away from your estimated value (the standard error is like a 'typical distance' away). May 17, 2013 at 0:45 • @godzilla: if the std.err goes up, then the distribution of likely values is widening, which means that your effect size will become swamped, so making predictions will be harder. May 17, 2013 at 0:50 • @godzilla: I think you really need to read an introductory stats text, and/or the wikipedia pages I linked to. 
I answered those exact questions in my answer. If you want detail, then ask for specifics. May 17, 2013 at 1:22 • @godzilla For t-values, the simplest explanation is that you can use 2 (as a rule of thumb) as the threshold to decide whether or not a variable is statistically significant. Above two the variable is statistically significant; below two it is not. For an easy treatment of this material see Chapter 5 of Gujarati's Basic Econometrics. The ucla link I provided in another comment explains how to interpret the p value. I assume it's the interpretation of the output for practical use that you want rather than the actual underlying theory, hence my oversimplification. May 17, 2013 at 14:02
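For readers who want to see the arithmetic behind the coefficient table, here is a small sketch (not part of the original answers) that reproduces the t value and Pr(>|t|) entries for the NO3 row from its Estimate and Std. Error, using the 182 residual degrees of freedom reported in the summary:

```python
from scipy import stats

estimate = -1.511235   # Estimate column for NO3
std_error = 0.551339   # Std. Error column for NO3
df = 182               # residual degrees of freedom from the summary

t_value = estimate / std_error              # about -2.741, matching the output
p_value = 2 * stats.t.sf(abs(t_value), df)  # two-sided p, about 0.00674

print(f"t = {t_value:.3f}, Pr(>|t|) = {p_value:.5f}")
```

The same arithmetic applies to every row of the table; R simply does it for all coefficients at once.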
{}
Drake's Equation 6th parameter f_c

Question

This is the sixth question in a series estimating input parameters for Drake's equation, inspired by a recent paper, on the Fermi paradox. The first question in the series, with more explanation, is here The model in question uses probability distributions over seven input parameters. In this case we will be addressing the sixth parameter in the Drake's Equation, $f_c$. It is the fraction of civilizations that develop a technology that releases detectable signs of their existence into space. Anything that would produce an unambiguous resolution that a planet bears intelligent life suffices. Radio signals are the technology that most suspect will bring about that resolution, but laser light, physical relics, and even gravitational waves can be considered. Given our definition of intelligences as having both tool use and language, it seems unlikely that this parameter should be miniscule; nonetheless we give a range extending down to $10^{-5}$, open at the bottom, to be safe. The resolution to this question will be the scientific consensus 100 years from now, regardless of any remaining uncertainty.
{}
It's a long podcast, around 1 hour 48 minutes, but a useful one, comparing so-called ‘best-practice’ git workflows, and I agree that git-flow is not a silver bullet for every development branching strategy.

### Why Great Developers Still Google Their Errors

Developers are not perfect, so googling an error is normal, and yes, it's better to save your memory for the things that matter much more.

### The problematic relationship between a team and their Product Owner

This is sometimes a never-ending ‘issue’: the communication sometimes goes sour or unproductive, the product side wants things, the devs wonder about the priorities, and the two can end up in silos or seeing each other as the ‘enemy’.
{}
# Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the ODE 1. Nov 24, 2012 ### fluidistic 1. The problem statement, all variables and given/known data I must get the first eigenvalues of the time independent Schrödinger's equation for a particle of mass m inside a cylinder of height h and radius a where $h \sim a$. The boundary conditions are that psi is worth 0 everywhere on the surface of the cylinder. 2. Relevant equations $-\frac{\hbar ^2}{2m} \triangle \psi =E \psi$. Laplacian in cylindrical coordinates. 3. The attempt at a solution I've used separation of variables on the PDE, seeking for the solutions of the form $\psi (\rho, \theta , z)=R(\rho) \Theta (\theta ) Z(z)$. I reached that $\frac{Z''}{Z}=\text{constant}=-\lambda ^2$. Assuming that the Z function is periodic and worth 0 at the top and bottom of the cylinder, I reached that $Z(z)=B \sin \left ( \frac{n\pi n}{h} \right )$ where n=1,2, 3, etc. Then I reached that $\frac{\Theta''}{\Theta} = -m^2$ where m=0, 1, 2, etc (because it must be periodic with period 2 pi). So that $\Theta (\theta )=C \cos (m \theta ) +D \sin (m \theta)$. Then the last ODE remaining to solve is $\rho ^2 R''+\rho R'+R \{ \rho ^2 \left [ \frac{2mE}{\hbar ^2} - \left ( \frac{n\pi}{h} \right )^2 \right ] -m^2 \}=0$. This is where I'm stuck. It's very similar to a Bessel equation and Cauchy-Euler equation but I don't think it is either. So I don't really know how to tackle that ODE. Any idea? Wolfram alpha does not seem to solve it either: http://www.wolframalpha.com/input/?i=x^2y%27%27%2Bxy%27%2By%28x^2*k-n^2%29%3D0. 2. Nov 24, 2012 ### TSny Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Look's like Bessel's equation. See http://www.efunda.com/math/bessel/bessel.cfm Of course, you'll need to rescale $\rho$ to simplify the expression inside your { }. 3. Nov 24, 2012 ### fluidistic Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Hmm ok. Hmm what do you mean exactly? I have an equation of the form $\rho ^2 R''+\rho R' +R(\rho ^2 p^2 -m^2)$ where p is a constant for a given n. Rescaling rho means to get $p^2=1$? 4. Nov 24, 2012 ### TSny Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Yes. Define a new independent variable ($x$, say) in terms of $\rho$ such that you get the standard form of Bessel's equation. 5. Nov 25, 2012 ### fluidistic Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the I try $x=\rho p$ so $\rho =x/p$ but then the ODE changes to $\frac{x^2R''}{p^2}+\frac{xR'}{p}+R(x^2-m^2)=0$. I could multiply by $p^2$ but I would not get the standard form of the Bessel equation. I don't see how I could rescale the factor in front of rho ^2 without rescaling the coefficients in front of R'' and R. 6. Nov 25, 2012 ### TSny Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the You need to take care of the rescaling in the derivatives, too. For example, $dR/d\rho = \left(dR/dx\right)\left(dx/d\rho\right)$ 7. Nov 25, 2012 ### fluidistic Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Oh right, I totally missed this! So indeed now I recognize a Bessel equation!!! Therefore I get that the solutions of the form $\psi =R ( \rho ) \Theta (\theta ) Z(z)=B_n \sin \left ( \frac{n\pi z}{h} \right ) [C_m \cos (m\theta ) + D_m \sin (m \theta )]J_m \left ( \rho \sqrt {\frac{2mE}{\hbar ^2} - \frac{n^2 \pi ^2}{h^2}} \right )$. 
So the solution that satisfies the boundary condition is a linear combination of those. I'm not 100% sure about what they mean by "eigenvalues". Eigenfrequencies? Lowest energies possible? (They only want the first 3 eigenvalues). I'm pretty sure this will concern the cases (1) n=1 and m=0 and m=1. (2) n=2, m=0. But I'm not sure what they are asking me. 8. Nov 25, 2012 ### TSny Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the I think they want the three lowest energies. They are called eigenvalues because they are eigenvalues of the time independent Schrodinger equation $H|\psi> = E|\psi>$ 9. Nov 25, 2012 ### haruspex Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the You don't seem to have used the boundary condition at ρ=a. 10. Nov 25, 2012 ### fluidistic Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Ok thanks! Hmm I don't know how to get that information. I have a feeling I should add a subscript "n" under "E" in $\psi =R ( \rho ) \Theta (\theta ) Z(z)=B_n \sin \left ( \frac{n\pi z}{h} \right ) [C_m \cos (m\theta ) + D_m \sin (m \theta )]J_m \left ( \rho \sqrt {\frac{2mE}{\hbar ^2} - \frac{n^2 \pi ^2}{h^2}} \right )$ and then isolate $E_n$ but not sure to what I should equate the equation. 11. Nov 25, 2012 ### TSny Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the 12. Nov 25, 2012 ### fluidistic Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Oh right guys sorry. And thanks for helping. I did not see haruspex's post. So if $x_p$ is the p'th zero of the Bessel function then $E_n= \left ( \frac{\hbar ^2}{2m} \right ) \left [ \left ( \frac{x_p}{a} \right ) ^2 +\left ( \frac{n^2 \pi ^2}{h^2} \right ) \right ]$. I guess I'll have to check if I can replace "p" by "n". It's not obvious to me at a first glance. 13. Nov 25, 2012 ### fluidistic Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Ok I've thought a bit on this. The first 3 lowest energy values are when $x_p=x_0$. So $E_1= \frac{\hbar ^2}{2m} \left [ \left ( \frac{x_0}{a} \right ) ^2 + \frac{\pi ^2}{h^2} \right ]$, $E_2= \frac{\hbar ^2}{2m} \left [ \left ( \frac{x_0}{a} \right ) ^2 + \frac{4\pi ^2}{h^2} \right ]$ and $E_3= \frac{\hbar ^2}{2m} \left [ \left ( \frac{x_0}{a} \right ) ^2 + \frac{9\pi ^2}{h^2} \right ]$. I'm not very confident because I don't know if $\frac{\hbar ^2 }{2m} \left [ \left ( \frac{x_1}{a} \right ) ^2 + \frac{\pi ^2}{h^2} \right ] <\frac{\hbar ^2}{2m} \left [ \left ( \frac{x_0}{a} \right ) ^2 + \frac{9\pi ^2}{h^2} \right ]$ for example. 14. Nov 25, 2012 ### TSny Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the You'll need to consult a Table of Roots The problem states that the height of the cylinder is approx. equal to the radius: $h\approx a$, which should help figure out the lowest three energies. Last edited: Nov 25, 2012 15. Nov 25, 2012 ### fluidistic Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Great and thank you once more. I get from lower to upper: $E_{1,0}$, $E_{1,1}$ and $E_{2,0}$ where the subscript are $E_{n,p}$. 16. Nov 25, 2012 ### TSny Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the I think that might be correct. 17. Nov 25, 2012 ### fluidistic Re: Getting the eigenvalues of the Schrödinger's equation in a cylinder. Stuck on the Thanks for all.
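For a quick numerical sanity check of which levels come first, the sketch below (not from the thread) evaluates $E_{n,m,p} \propto (j_{m,p}/a)^2 + (n\pi/h)^2$ for the case $h = a$ exactly, using SciPy's Bessel-zero routine, and prints the lowest few combinations so the ordering can be read off directly.

```python
import numpy as np
from scipy.special import jn_zeros

a = h = 1.0  # take h = a exactly (the problem only says h ~ a)

levels = []
for m in range(4):                       # Bessel order m = 0, 1, 2, 3
    zeros = jn_zeros(m, 3)               # first three zeros j_{m,p} of J_m
    for p, j in enumerate(zeros, start=1):
        for n in range(1, 4):            # axial quantum number n = 1, 2, 3
            # energy in units of hbar^2 / (2 * mass)
            e = (j / a) ** 2 + (n * np.pi / h) ** 2
            levels.append((e, n, m, p))

for e, n, m, p in sorted(levels)[:5]:
    print(f"n={n}, m={m}, p={p}:  2mE/hbar^2 = {e:.2f}")
```

Changing the ratio a/h shifts the balance between the radial and axial terms, which is why the problem's remark that $h \approx a$ matters for the ordering of the levels.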
{}
## Slides for SVM

Moderator: Statistisches Maschinelles Lernen

NonStop
Mausschubser
Posts: 73
Registered: 18. Apr 2015 19:15

### Slides for SVM

Hi,

I have just noticed that the SVM slides on the course website differ from those used in today's lecture. Is it possible to update the slides to the current version?

Best regards
NonStop

tanne
Endlosschleifenbastler
Posts: 162
Registered: 30. Sep 2008 16:05

### Re: Slides for SVM

Hi,

the uploaded slides actually just contain more -- if you scroll through the PDF you'll find the slides that were used further in.

best
{}
$E_{r}=-\frac{d V}{d r}, \nonumber$ where $$E_r$$ is the component of the electric field along the direction specified by dr. The exception referred to above occurs at a layer of dipoles; see the example problem discussed below. Let a surface carry a density of dipoles $$\vec P_d$$ per unit area (dimensions of Coulombs/m) oriented such that the dipole density is perpendicular to the plane. Such an electrical dipole layer, or electrical double layer, generates no external electric field, but it does generate a jump in potential given by $\Delta V=\left|\overrightarrow{\mathrm{P}}_{d}\right| / \epsilon_{0}. \label{2.24}$
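To see where this jump comes from, model the double layer (an idealization added here, not part of the original text) as two closely spaced sheets carrying surface charge densities $$+\sigma$$ and $$-\sigma$$ a distance $$d$$ apart, so that $$\left|\vec{P}_{d}\right|=\sigma d$$. The field cancels outside the pair of sheets, while between them it is $$\sigma / \epsilon_{0}$$; integrating this field across the thickness of the layer gives $\Delta V=\int_{0}^{d} \frac{\sigma}{\epsilon_{0}}\, d s=\frac{\sigma d}{\epsilon_{0}}=\left|\vec{P}_{d}\right| / \epsilon_{0}, \nonumber$ in agreement with the equation above: no external field, but a finite step in potential across the layer.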
{}
# Numerical Integration

Numerical Integration is a lab exploring numerical methods for computing integrals, that is, using a computer program or calculator to find an approximation to the integral of some function $f(x)$. Of course, because we are talking about integration we can't go very far without the fundamental theorem of calculus: $F(x) = \int_a^x{f(t)dt}$. In this lab we will look at a few methods for numerically computing integrals, namely the Rectangle/Riemann Sum, the Trapezoidal Sum, and the Parabola/Simpson's Rule. In Calculus courses, we are usually given "nice" functions, functions that are "easy" to integrate exactly and so do not require numerical methods. However, the set of functions that are "nice" is very small, so we must often resort to numerical methods. For example, there is no elementary antiderivative for the integrand in $$\int{e^{e^x}dx}$$ but we can approximate a definite integral of it using one of the methods that we explore in this lab. I was initially drawn toward this lab because other courses have introduced numerical integration, and I have used other numerical methods by hand, so I wanted to explore the topic further by automating it and comparing the different methods.
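As a concrete taste of what the lab automates, here is a short sketch (the function and interval are chosen only for illustration) that approximates $\int_0^1 e^{e^x}\,dx$ with a left-endpoint Riemann sum, the trapezoidal rule, and Simpson's rule:

```python
import numpy as np

def f(x):
    return np.exp(np.exp(x))   # no elementary antiderivative exists

a, b, n = 0.0, 1.0, 1000       # n subintervals (even, so Simpson's rule applies)
x = np.linspace(a, b, n + 1)
y = f(x)
h = (b - a) / n

riemann   = h * np.sum(y[:-1])                                   # left endpoints
trapezoid = h * (0.5 * y[0] + np.sum(y[1:-1]) + 0.5 * y[-1])
simpson   = h / 3 * (y[0] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]) + y[-1])

# The left-endpoint sum is the crudest; trapezoid and Simpson agree very closely.
print(riemann, trapezoid, simpson)
```

Increasing n shows how much faster Simpson's rule converges than the left-endpoint sum, which is the kind of comparison the lab is built around.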
{}
THIS FORUM IS DEPRECATED It will be disabled at any point in the future. Please post your discussions to any of these alternative available channels: * GitHub Discussions: https://github.com/Zettlr/Zettlr/discussions * Zettlr Subreddit: https://www.reddit.com/r/Zettlr * Discord: https://discord.gg/PcfS3DM9Xj # Export pdf with words in classical Greek, or arabic, hebrew... Examples: ἀδιάφορα שָׁלוֹם (shalom) سلام‎ (salām) how to do it? • 34VLC, This should give you enough to get pointed in the right direction, based on what worked for me. In your markdown document you can write the text this way (one for latex to pdf and the other for html): \textsanskrit{आदि}आदि{=html} \textgreek{ἄλλος}ἄλλος{=html} You will need to make sure you have the appropriate fonts installed. In my example for sanskrit and greek, I used Shobhika and GFS Artemisia. You will need to have the latex packages fontspec and polyglossia installed. Finally you will need to edit your tex template or create a custom tex template. I renamed and edited the zettlr export.tex template found here and then pointed to it in Zettlr's pdf preferences: https://github.com/Zettlr/Zettlr/blob/master/source/main/assets/export.tex Finally, these were the edits I made to the export.tex template, which I put just before the \begin{document} statement: \usepackage{fontspec} \usepackage{polyglossia} \setmainlanguage{english} \setotherlanguages{sanskrit, greek} %% or other languages \newfontfamily\devanagarifont[Script=Devanagari]{Shobhika} \newfontfamily\greekfont{GFS Artemisia} Hope this helps you in your journey, Mat • edited July 2021 Thank You Mat. What am I not doing right? dēlectāre ἀδιάφορα ἄλλος πολυμαθία Сельское общество Hebrew and Arabic do not work You can see my .tex
{}
# Known errors in draft13.pdf creditpagecorrection VDA 9 In line 4, “improves” should be “improves”. VDA 22 In the section “Finding methods for objects,” the example defines the expression rats using the ** operator. Unfortunately, the ** operator hasn’t actually been defined yet; that occurs on p. 27. It seems better to define it as rats = 1/x + 1/2 which behaves the same way and is simpler to boot. Some relevant wording has to be changed to reflect the new expression. JP 22 Building on VDA’s observation of the example, JP discovered that some versions of Sage actually do simplify these expressions when using the simplify() command; for instance, Sage 6.7 does. Others do not, such as Sage 8.1. So we add a footnote that states, It does actually work in some versions of Sage, such as Sage 6.7, but not in others, such as 8.1. If it simplifies in your version, just pretend that it doesn't for the sake of the argument, and follow along, since that isn't the point anyway.” VDA 26Unintentional repetition of “as well”. VDA 31Unintentional reptition of “this”. VDA 32 By ”raised x” we mean the multiplication symbol (×, $\times$), which is sometimes raised above the baseline of the text (perhaps you will see that here on the webpage, where to the writer it seems raised ever-so-little). Ironically, and somewhat confusingly, the typeface we chose for the text does not raise the symbol above the baseline. So it’s a good idea to strike the word “raised” to avoid confusion. VDA 34 The text refers to “three ways” to substitute into an expression, but since it doesn’t alert the reader to when it starts describing the second way, encountering the explicitly named third way can confuse the reader. The “second way” is the syntax f(x=2) or f({x:2}), where you need not use the .subs() method. VDA 39 “1042th” should be “1042nd”. VDA 40 Is q positive, negative or zero?” should have p instead of q. VDA 44 The phrasing, “That last one is not actually in the complex ring $\mathbb C$” might not be clear, since in fact $1+i\in\mathbb C$. What is meant is that Sage doesn’t seem to recognize that $1+i\in\mathbb C$. VDA 48 In VDA’s memorable explanation, “The Bonus at the end of the exercises is, wrongly, numbered, or in any case is wrongly numbered” VDA 50 The multiple choice questions refer to ind, but the text does not explain what that is. It now adds an example, $\lim_{x\rightarrow0}\sin\left(\frac{1}{x}\right)\ ,$ where Sage returns that as an answer, and explains that In some cases, the limit doesn't exist because the function doesn't approach any particular value. When this happens, but the function remains finite, Sage will reply with ind, which is short for, “the limit is indefinite, but bounded [that is, not infinite].” We see that in the [new] example. If however the function waggles between infinities, Sage will reply with und. VDA 52 “Both commands” refers to point() and line(), not to arrow(). VDA 53 The default for zorder actually depends on the object. For some it is 1, for others it is 3. JK 55 The sentence, “Why this option called zorder?” should be, “Why is this option called zorder?” VDA 58 “We consider three more objects” — yes, but remove the reference to text(), which is discussed in the subsequent section anyway. VDA 65 The explanation of the parametric_plot() command will probably make a lot more sense if we replace tvar by t. 
VDA 68 The suggestion to “[p]lay around with different values of tmin and tmax to see why $[0,2\pi)$ gives us a complete wave” is, strictly speaking, correct, but it may be more instructive “to determine the smallest interval $[a,b]$ that gives us a complete wave.” VDA 71 While the discussion on the DomainError is, strictly speaking, mostly correct, it is a bit out of place, and besides, it seems… unnatural to plot $\sqrt{6-x^2}$ on the interval $(-3,3)$ instead of the interval $(-\sqrt6,\sqrt6)$. It is certainly erroneous to plot it by asking Sage to plot sqrt(5-x^2). (This last bit is why the discussion is only mostly correct.) VDA 72 The line before the last equation has an extra “transforms.” BB 140 The statement “This touches on an topic called…” has a rather obvious error. JD 154 Multiple Choice Exercise 5 ought to ask something different from Exercise 4. Try, “The result of .roots() when applied to an equation in one variable is:”. JS p. 264 The lab “An important difference quotient” neglects to mention that $f(x)=x^n$, which is sort of important if the student is to complete the latter half of the lab. JD Multiple pages throughout the Encyclopædia Laboratorica The text refers to “text cell” when “HTML cell” is more appropriate (at least in CoCalc) JK 285 The lab “Properties of finite rings” refers to a lecture that corresponds to the personal notes of one of the authors. Even worse, after several years of revision it corresponds to the wrong lecture. These days it should refer to the section “Mathematical structures in Sage” in the chapter titled, “Basic Computations.” WM 286 The lab “The Chinese Remainder Clock” instructs students to count “clockwise from the top” but the remainder of dividing the hour by 3 is curiously positioned counterclockwise — and is, curiously enough, the only one so misplaced. The correct image should be: #### Legend for credits: VDA Valerio De Angelis, Xavier University, who probably deserves co-author credit by now BB Baylee Bourque JD
{}
## Rydberg Equation $c=\lambda v$ Malia Shitabata 1F Posts: 109 Joined: Sat Aug 17, 2019 12:17 am ### Rydberg Equation Do we have to understand the Balmer/Lyman series in relation to n and the Rydberg equation or should we just know which portion of the electromagnetic spectrum it corresponds to? Baoying Li 1B Posts: 96 Joined: Sat Aug 17, 2019 12:18 am ### Re: Rydberg Equation I think we need to know that the energy level starts from n=1 for Lyman series(UV region) and it starts from n=2 for Balmer series(visible region, 700nm-400nm). Ashley Nguyen 2L Posts: 81 Joined: Sat Aug 17, 2019 12:18 am ### Re: Rydberg Equation I think we should just know the portion of the EM spectrum it corresponds to. I don't think there will be any in-depth questions on it Doreen Liu 4D Posts: 54 Joined: Wed Sep 11, 2019 12:17 am ### Re: Rydberg Equation I think we just have to know that the Lyman series corresponds to UV while the Balmer series corresponds to the visible light. Malia Shitabata 1F Posts: 109 Joined: Sat Aug 17, 2019 12:17 am ### Re: Rydberg Equation Baoying Li 4H wrote:I think we need to know that the energy level starts from n=1 for Lyman series(UV region) and it starts from n=2 for Balmer series(visible region, 700nm-400nm). If a higher frequency is associated with greater energy then why is Balmer n=2 even though visible light has a lower frequency than ultraviolet light (n=1)? VPatankar_2L Posts: 84 Joined: Thu Jul 25, 2019 12:17 am Been upvoted: 1 time ### Re: Rydberg Equation The energy gaps between the energy levels of the Balmer series are much smaller than the gaps for the Lyman series, so that's why the frequency produced is also smaller.
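To see the series difference numerically, here is a small sketch (not part of the thread) that uses the Rydberg formula $1/\lambda = R_H\,(1/n_1^2 - 1/n_2^2)$ to compute the first Lyman line ($n_2=2 \to n_1=1$) and the first Balmer line ($n_2=3 \to n_1=2$); the larger energy gap down to $n=1$ is what pushes the Lyman series into the UV.

```python
R_H = 1.097e7  # Rydberg constant for hydrogen, in 1/m

def wavelength_nm(n1, n2):
    """Wavelength of the photon emitted in the transition n2 -> n1."""
    inv_lambda = R_H * (1 / n1**2 - 1 / n2**2)
    return 1e9 / inv_lambda

print(wavelength_nm(1, 2))  # Lyman-alpha,  ~122 nm (ultraviolet)
print(wavelength_nm(2, 3))  # Balmer-alpha, ~656 nm (visible, red)
```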
{}
## Ultramodularity and copulas. (English) Zbl 1371.62050

The authors study ultramodular binary copulas and characterize the additive generators of Archimedean ultramodular binary copulas. M. Marinacci and L. Montrucchio [Math. Oper. Res. 30, No. 2, 311–332 (2005; Zbl 1082.52006)] showed that ultramodularity of real functions is a stronger version of both convexity and supermodularity. Ultramodularity of a copula means that the two underlying random variables are mutually stochastically decreasing with respect to each other. The $$n$$-ary aggregation function $$A: [0,1]^n \to [0, 1]$$ (monotone non-decreasing with the two boundary conditions $$A(0, 0, \dots, 0)=0$$, $$A(1, 1, \dots, 1) =1$$) is ultramodular if for all $$x, y, z\in [0, 1]^n$$ with $$x+y+z\in [0,1]^n$$, $A(x+y+z)+A(x) \geq A(x+y) + A(x+z).$ J. H. B. Kemperman [Nederl. Akad. Wet., Proc., Ser. A 80, 313–331 (1977; Zbl 0384.28012)] showed that an $$n$$-ary function is supermodular if and only if each of its two-dimensional sections is supermodular; Marinacci and Montrucchio [Zbl 1082.52006] stated that an $$n$$-ary function is ultramodular if and only if it is supermodular and each of its one-dimensional sections is convex. These two results show that there are equivalent ways to characterize ultramodularity: by (i) ultramodularity of each two-dimensional section of the function, or by (ii) supermodularity of each two-dimensional section and convexity of each one-dimensional section. Well-known examples of supermodular aggregation functions are copulas, and each ultramodular copula is negative quadrant dependent. The authors [Inf. Sci. 181, No. 19, 4101–4111 (2011; Zbl 1258.03082)] characterized ultramodularity by the supermodularity of composite functions with monotone non-decreasing supermodular functions.

Section 1 and Section 2 give a brief introduction and background on supermodularity and ultramodularity. Section 3 focuses on ultramodular binary copulas. Note that each associative copula is an ordinal sum of Archimedean copulas; therefore each associative ultramodular copula is a trivial sum of Archimedean copulas. Theorem 3.1 shows that a twice differentiable Archimedean copula is ultramodular if and only if $$1/t^\prime$$ is a convex function for the additive generator $$t: [0, 1] \to [0, \infty]$$. Theorem 3.5 proves that all the one-dimensional sections of an Archimedean copula are concave if and only if $$t^{\prime}(0)=\infty$$, $$t^{\prime}$$ is finite on $$(0, 1]$$ and $$1/t^{\prime}$$ is concave. Theorem 3.7 addresses twice differentiable horizontal or vertical generators $$f$$: convexity of $$1/f^{\prime}$$ leads to ultramodularity of the following copulas
$C_f(x, y) =\begin{cases} 0 & \text{if } x=0,\\ x \cdot f^{-1}\bigl(\min(f(y)/x,\, f(0))\bigr) & \text{otherwise}. \end{cases}$
$C^f(x, y) =\begin{cases} 0 & \text{if } y=0,\\ y \cdot f^{-1}\bigl(\min(f(x)/y,\, f(0))\bigr) & \text{otherwise}. \end{cases}$
Section 4 is devoted to constructing copulas from Theorem 2.8, and Theorem 4.1 gives a copula from a continuous ultramodular aggregation function and 2-ary copulas as well as continuous monotone non-decreasing functions. The copulas constructed by Theorem 4.1 are non-symmetric. Example 4.2 lists four examples from the literature. The authors expect to construct higher-dimensional copulas from this approach. It would be more interesting to construct ultramodular copulas with applications in financial economics and physics among other scientific fields which are not supermodular.
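As a small illustration of the definition above (a sketch, not from the review), the following randomized check verifies the ultramodularity inequality $$A(x+y+z)+A(x) \geq A(x+y)+A(x+z)$$ for the independence copula $$\Pi(u,v)=uv$$, whose one-dimensional sections are linear and which is supermodular, hence ultramodular:

```python
import numpy as np

rng = np.random.default_rng(0)

def pi_copula(u):
    """Independence copula  Pi(u1, u2) = u1 * u2."""
    return u[0] * u[1]

violations = 0
for _ in range(100_000):
    # draw x, y, z in [0,1]^2 and rescale so that x + y + z stays in [0,1]^2
    x, y, z = rng.random((3, 2)) / 3.0
    lhs = pi_copula(x + y + z) + pi_copula(x)
    rhs = pi_copula(x + y) + pi_copula(x + z)
    if lhs < rhs - 1e-12:
        violations += 1

print("violations:", violations)   # expected: 0
```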
### MSC: 62H05 Characterization and structure theory for multivariate probability distributions; copulas 26B25 Convexity of real functions of several variables, generalizations 26B35 Special properties of functions of several variables, Hölder conditions, etc. 62E10 Characterization and structure theory of statistical distributions 60E05 Probability distributions: general theory 62H10 Multivariate distribution of statistics ### Citations: Zbl 1082.52006; Zbl 0384.28012; Zbl 1258.03082 Full Text: ### References: [1] J. Avérous and J.-L. Dortet-Bernadet, Dependence for Archimedean copulas and aging properties of their generating functions , Sankhyā 66 (2004), 607-620. · Zbl 1193.62087 [2] R.E. Barlow and F. Proschan, Statistical theory of reliability and life testing , To Begin With, Silver Spring, 1981. [3] G. Beliakov, A. Pradera and T. Calvo, Aggregation functions : A guide for practitioners , Springer, Heidelberg, 2007. · Zbl 1123.68124 [4] H.W. Block, W.S. Griffith and T.H. Savits, $$L$$-superadditive structure functions , Adv. Appl. Prob. 21 (1989), 919-929. · Zbl 0687.60077 [5] P. Capéraà and C. Genest, Spearman’s $$\rho$$ is larger than Kendall’s $$\tau$$ for positively dependent random variables, J. Nonparamet. Stat. 2 (1993), 183-194. · Zbl 1360.62294 [6] A.H. Clifford, Naturally totally ordered commutative semigroups , Amer. J. Math. 76 (1954), 631-646. · Zbl 0055.01503 [7] B. De Baets, H. De Meyer, J. Kalická and R. Mesiar, Flipping and cyclic shifting of binary aggregation functions , Fuzzy Sets Syst. 160 (2009), 752-765. · Zbl 1175.62002 [8] F. Durante and P. Jaworski, Invariant dependence structure under univariate truncation , Statistics 46 (2012), 263-277. · Zbl 1241.62083 [9] F. Durante, R. Mesiar, P.L. Papini and C. Sempi, $$2$$-increasing binary aggregation operators , Inform. Sci. 177 (2007), 111-129. · Zbl 1142.68541 [10] F. Durante, S. Saminger-Platz and P. Sarkoci, Rectangular patchwork for bivariate copulas and tail dependence , Comm. Stat. Theor. Meth. 38 (2009), 2515-2527. · Zbl 1170.62329 [11] F. Durante and C. Sempi, On the characterization of a class of binary operations on bivariate distribution functions , Publ. Math. Debrecen 69 (2006), 47-63. · Zbl 1121.60010 [12] M. Grabisch, J.-L. Marichal, R. Mesiar and E. Pap, Aggregation functions , Cambridge University Press, Cambridge, 2009. · Zbl 1196.00002 [13] J.H.B. Kemperman, On the FKG -inequality for measures on a partially ordered space, Nederl. Akad. Weten. Proc. 39 (1977), 313-331. · Zbl 0384.28012 [14] A. Khoudraji, Contributions à l’étude des copules et à la modélisation des valeurs extrêmes bivariées , Ph.D. thesis, Université Laval, Québec, 1995. [15] E.P. Klement, A. Kolesárová, R. Mesiar and C. Sempi, Copulas constructed from horizontal sections , Comm. Stat. Theor. Meth. 36 (2007), 2901-2911. · Zbl 1130.60017 [16] E.P. Klement, A. Kolesárová, R. Mesiar, and A. Stupňanová, Lipschitz continuity of discrete universal integrals based on copulas , Inter. J. Uncertain. Fuzziness Knowledge-Based Systems 18 (2010), 39-52. · Zbl 1190.28010 [17] E.P. Klement, M. Manzi and R. Mesiar, Ultramodular aggregation functions , Inform. Sci. 181 (2011), 4101-4111. · Zbl 1258.03082 [18] E.P. Klement, R. Mesiar and E. Pap, Quasi- and pseudo-inverses of monotone functions, and the construction of t-norms , Fuzzy Sets Syst. 104 (1999), 3-13. · Zbl 0953.26008 [19] —, Transformations of copulas , Kybernetika (Prague) 41 (2005), 425-436. · Zbl 1243.62019 [20] E. Liebscher, Construction of asymmetric multivariate copulas , J. 
Multivar. Anal. 99 (2008), 2234-2250. · Zbl 1151.62043 [21] M. Marinacci and L. Montrucchio, Ultramodular functions , Math. Oper. Res. 30 (2005), 311-332. · Zbl 1082.52006 [22] A.J. McNeil and J. Nešlehová, Multivariate Archimedean copulas, $$d$$ -monotone functions and $$l_1$$-norm symmetric distributions, Ann. Stat. 37 (2009), 3059-3097. · Zbl 1173.62044 [23] R. Mesiar, V. Jágr, M. Juráňová and M. Komorníková, Univariate conditioning of copulas , Kybernetika (Prague) 44 (2008), 807-816. · Zbl 1196.62059 [24] R. Moynihan, On $$\tau_T$$ semigroups of probability distribution functions II, Aequat. Math. 17 (1978), 19-40. · Zbl 0386.22005 [25] R.B. Nelsen, An introduction to copulas , (second edition), Springer, New York, 2006. · Zbl 1152.62030 [26] B. Schweizer and A. Sklar, Probabilistic metric spaces , Dover Publications, Mineola, 2006. · Zbl 0546.60010 [27] M. Shaked, A family of concepts of dependence for bivariate distributions , J. Amer. Stat. Assoc. 72 (1977), 642-650. · Zbl 0375.62092 [28] A. Stupňanová and A. Kolesárová, Associative $$n$$-dimensional copulas , Kybernetika (Prague) 47 (2011), 93-99. \noindentstyle · Zbl 1225.03071 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
{}
# Charge Question. 1. Feb 3, 2005 ### neoclee Hi, Can anyone possibly help me w/ this Question? Been stuck trying to figure it out four a couple hours (what a waste...) ---------------------------------- A charge of 1 C is fixed in place. From a horizontal distance 10 km apart a particle of 1 g and charge 1pc is fired with an initial speed 1000km/h towards the fixed charge. What is the minimum distance between the two charges ? I dont even know where to begin! and whats pc ? Picocoloumbs ? 2. Feb 3, 2005 ### MathStudent use conservation of energy.... do you know the equations for KE and PE? yes pC is picocoulombs = 10^-12C Last edited: Feb 3, 2005 3. Feb 3, 2005 ### neoclee i tried, but i cant get anything. i dont know how PE is related to calculation of distance or radius, can you atleast point to some formulas that show their relations ? 4. Feb 3, 2005 ### MathStudent $$KE = \frac{1}{2} m v^2$$ $$PE = \frac{kq_1q_2}{r}$$ The second equation is the potential energy for two point charges where r is the distance between q1 and q2... Remeber that conservation of energy means $$KE_i \ + \ PE_i = KE_f \ + \ PE_f$$ Decide what should be the initial and final states, and set up the equation... the rest should be apparent from there Last edited: Feb 3, 2005 5. Feb 3, 2005 ### neoclee Isn't that F=kq1q2/r^2 ? Im trying it right now. Thanks alot for your help thus far. 6. Feb 3, 2005 ### MathStudent Thats the equation for the electrical force between two point charges which isn't the same as electrical potential energy. The potential energy equation is derived from that equation however. Last edited: Feb 3, 2005 7. Feb 3, 2005 ### neoclee can you please take a look at the attachment and see if my understanding is right and if iv approached the problem properly ? [The known/unknown values are listed on my first post.] Thanks again, you help means alot. #### Attached Files: File size: 11.6 KB Views: 62 • ###### calc2.gif File size: 9.4 KB Views: 53 Last edited: Feb 3, 2005 8. Feb 3, 2005 ### MathStudent Where's the attachment? 9. Feb 3, 2005 ### neoclee Sorry, i thought i added them [file limited was exeeded; had to edit them].... 10. Feb 3, 2005 ### MathStudent before even checking if your calculations are correct... you need to go over units v needs to be in m/s , and g needs to be in kg 11. Feb 3, 2005 ### neoclee they are...I didn't include the unites, but everything is in m/s and KG edit: actually i missed a deci place for KG, ill have to fix it now.. 12. Feb 3, 2005 ### MathStudent oh yeah,, you had the velocity right, I just didn't recognize it 13. Feb 3, 2005 ### MathStudent also in calc1.gif, the equation for KE is 0.5mv^2, you forgot to square the velocity. 14. Feb 3, 2005 ### neoclee Can you please take a look now ? Thanks again.. #### Attached Files: File size: 9.5 KB Views: 58 • ###### calc2.gif File size: 9.6 KB Views: 57 15. Feb 4, 2005 ### MathStudent The first part is ok... in the second, why did you multiply KE_i by PE_i, where in the equation does it say to multiply them? 16. Feb 4, 2005 ### neoclee Changed... Will you take a look one last time ? {thanks} #### Attached Files: • ###### calc2.gif File size: 9.4 KB Views: 64 17. Feb 4, 2005 ### MathStudent Success!! 18. Feb 4, 2005 ### neoclee lol, so i take it as thats the right answer... (its a shame i made so many unnecesary mistakes, Univeristy works really getting up to me...but phys not my area :) ) My friends were saying , i could only solve this problem w/ integration. i guess not ;) Thanks alot for all your help. 
I really really appreciate your time towards this. 19. Feb 4, 2005 ### MathStudent no problem... glad I could help
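For reference, here is a small sketch of the energy-conservation calculation with the numbers from the problem statement (it is not the poster's attachment, which is not reproduced here): the initial kinetic plus potential energy is converted entirely into potential energy at the turning point, where the particle is momentarily at rest.

```python
k  = 8.99e9        # Coulomb constant, N m^2 / C^2
q1 = 1.0           # fixed charge, C
q2 = 1e-12         # moving charge, 1 pC
m  = 1e-3          # mass, 1 g in kg
v  = 1000 / 3.6    # 1000 km/h in m/s
r0 = 10e3          # initial separation, 10 km

E_total = 0.5 * m * v**2 + k * q1 * q2 / r0   # KE_i + PE_i (the PE_i term is tiny)
r_min = k * q1 * q2 / E_total                 # at closest approach all energy is PE

print(r_min)   # ~2.3e-4 m, i.e. a fraction of a millimetre
```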
{}
0 Members and 1 Guest are viewing this topic. #### swashy • UKAI GFX Guru • Galactic Poster • Posts: 14644 « on: 22:02:45, 01 March, 2010 » Suppose you had a newtons cradle that was a mile long (or longer), and you could observe both ends simultaneously, would there be a measurable time delay in the 'end' ball movement, and what effects would be causing the delay? Imagine...  above us only sky! #### royholl • Starting Member • Posts: 68 « Reply #1 on: 22:12:33, 01 March, 2010 » A very good question Ade, I'm still thinking about it, but I will get back with some kind of answer... Regards Roy The Pig Shed Observatory #### stewartw • Full Member • Posts: 110 « Reply #2 on: 08:20:15, 02 March, 2010 » Here's my estimate: When the ball at one end strikes the next one, a compression wave is transmitted through the second ball and on to adjacent balls. This wave will propogate at the speed of sound in that material (which for steel is about 4512 m/s). So for a line of balls that is 1 mile long (sorry for the mixed units) I'd expect a delay of approximately 0.36s Similar principals are applied when doing seismology - when there is an earthquake, sound waves propogate through the earth - they are more commonly referred to as seismic waves. By monitoring the time that receiving stations scattered around the earth receive the sound / seismic wave, deductions can be made in relation to the interior of the earth. Best regards William #### Nomis Elfactem • Galactic Poster • Posts: 12990 « Reply #3 on: 09:38:52, 02 March, 2010 » Great answer Williams - sounds spot on to me... yes, there will be a delay due to the time it takes for the energy of the first ball (or any number of balls) in motion to be transferred to the other end vie the "shock wave" and the composition of the ball material dictates this speed. S. Simon Scopes: Astro-Tech AT-111EDT Triplet, TS65ED Quad, Orion ST80, Modded PST-90 Solar Scope, PST Cak (on loan) Cameras: SXVF H694, Atik 16ic, Canon EOS 600d, DMK41, DMK21, QHY 5L-II (mono & colour) Accessories: SX USB Filter Wheel, SX OAG, Baader LRGB Ha SI OII Filters, SharpSky Focuser Mount: EQ6 (EQMOD), SW Star Adventurer, plus a lot (and I mean a lot) of other bits and pieces #### starf • Poster God • Posts: 1888 « Reply #4 on: 11:03:47, 02 March, 2010 » Wouldnt it depend on the initial force, mass of the spheres and the medium in which the pendulum swings? We know the greater the mass of an object the slower its acceleration. eg given the same force (kick), a bowling ball will accelerate more slowly than a beachball. In other words, the ratio of the masses of two bodies is equal to the inverse ratio of their accelerations when the same force is applied to both. Question didn't stated how big (massive) your balls are and whether you are swinging them in the air (full of inudendo), or for that matter what force you have applied to them. Newtonian mechanics: Newton's second law: The net force on a body is equal to the product of the body's mass and the acceleration of the body. Net force is a vector quantity with both magnitude and direction as is acceleration whilst mass is a scalar quantity, so mathematically, $$\vec{F}_{net}=m\vec{a}$$ 1N = force necessary to accelerate a mass by $$1 m/s^2$$ #### Nomis Elfactem • Galactic Poster • Posts: 12990 « Reply #5 on: 11:28:21, 02 March, 2010 » Yeap, you're totally right Starf.... but I guess the answer is still yes... all of the above (plus a lot more besides) just dictates the length of the delay, not whether there is one or not ?!? 
As ever with these things it can get rather complex, so I guess we simplified it as a result.... e.g. there is also torque and angular momentum to be taken into consideration due to the "strings" that hold the balls, plus what the strings are made of, friction in all the couplings, and not forgetting the temperature of the system etc. etc... Gets the mind gym working for sure, doesn't it - Newtonian physics at its best. S.

#### stewartw « Reply #6 on: 14:08:39, 02 March, 2010 »
Hi Starf, yes, I'd agree that what you mention is applicable, but only in respect of the energy that is transferred from one end of the line of balls to the other and the resultant acceleration of the final ball, not with regard to the time it would take for that energy to be transferred. The speed at which the "kick" is transferred is dictated solely by the composition of the medium (in this case the balls). The exception to my answer above would be the case where the initial ball was moving at a speed greater than the speed of sound in the material. The analogy here would be an object travelling at supersonic/hypersonic speeds. Real-life examples of pieces of metal colliding at these kinds of speeds (of the order of a few miles per second) include collisions between objects in orbit. For solids travelling at such speeds my understanding is that they behave more like liquids, and it gets way too complicated for my brain. Best regards, William

#### chris.bailey « Reply #7 on: 08:19:31, 03 March, 2010 »
Think someone needs to build it to test the theory. We could call it the LSC (Large Spherical Collider).

#### swashy « Reply #8 on: 20:20:31, 05 March, 2010 »
So as I understand it, the impact of the first ball creates a shock wave which travels at a speed that varies depending upon the material of the balls in the line. I suppose you could actually consider the 'line' of balls as one continuous length of material, in which case you would need an increasing force of impact depending upon the length of the line to have any measurable change at the other end (otherwise I guess the ball would just bounce back), and any loss would be because of compression? So theoretically, if the 'line' was made of a material which was unable to be compressed, and the impact of the first ball was high enough, the answer to the question would all simply depend upon the speed at which the shock wave can travel through the subject material?

#### SneezingRabbit « Reply #9 on: 23:28:42, 14 April, 2010 »
This kinda makes me want to buy a Newton's cradle now. Wouldn't it lose momentum before it reaches the end of the mile?
#### swashy « Reply #10 on: 07:28:46, 15 April, 2010 »
"This kinda makes me want to buy a Newton's cradle now. Wouldn't it lose momentum before it reaches the end of the mile?" Depends how big your balls are!

#### spaismunky « Reply #11 on: 07:49:48, 15 April, 2010 »
'Ang on a minute. This is all very highbrow and therefore unintelligible to me, but wouldn't there be some energy released as heat on each ball striking the next (which I believe is why perpetual motion is so damned hard to prove), thereby slowing it all down? Would it even reach a mile? It always boils down to size, doesn't it? Another thing... you'd be at the receiving end watching the ball strike, then BAM, your eardrums would explode as the sound caught up. mojo

#### stewartw « Reply #12 on: 08:05:11, 15 April, 2010 »
Heat would be generated and so yes, the energy transmitted along the length would decrease. This would have the effect of reducing the amplitude of the compression wave. But the original question asked what the time delay would be, and so the answers above still hold true. Waves propagate at a speed that is the product of their wavelength and frequency (v = fλ). Notice here that there is no mention of the amplitude: the speed of propagation is independent of the amplitude. A good analogy here is sound waves in air: do loud noises travel faster or slower than quiet ones? The answer is neither; they both travel at the same speed. Loud ones do however travel further, because they have more energy and so it takes longer for their energy to dissipate as heat. Best regards, William

#### spaismunky « Reply #13 on: 08:36:22, 15 April, 2010 »
You see? You did maths there and I'm a girl. How many kittens are there in v = fλ? mojo
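William's estimate above is just the length of the line divided by the longitudinal sound speed he assumed for steel. A minimal check in Python (illustration only; the 4512 m/s figure is the one quoted in the thread, and real cradles would add the loss mechanisms the later posts mention):

```python
mile = 1609.34      # metres in one statute mile
c_steel = 4512.0    # m/s, longitudinal sound speed in steel as quoted in the thread

delay = mile / c_steel
print(f"{delay:.2f} s")  # ~0.36 s, matching the estimate above
```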
{}
§10.65 Power Series

§10.65(i) $\operatorname{ber}_{\nu}x$ and $\operatorname{bei}_{\nu}x$

10.65.1
$$\operatorname{ber}_{\nu}x=(\tfrac{1}{2}x)^{\nu}\sum_{k=0}^{\infty}\frac{\cos\left(\frac{3}{4}\nu\pi+\frac{1}{2}k\pi\right)}{k!\,\Gamma\left(\nu+k+1\right)}(\tfrac{1}{4}x^{2})^{k},$$
$$\operatorname{bei}_{\nu}x=(\tfrac{1}{2}x)^{\nu}\sum_{k=0}^{\infty}\frac{\sin\left(\frac{3}{4}\nu\pi+\frac{1}{2}k\pi\right)}{k!\,\Gamma\left(\nu+k+1\right)}(\tfrac{1}{4}x^{2})^{k}.$$

10.65.2
$$\operatorname{ber}x=1-\frac{(\frac{1}{4}x^{2})^{2}}{(2!)^{2}}+\frac{(\frac{1}{4}x^{2})^{4}}{(4!)^{2}}-\cdots,$$
$$\operatorname{bei}x=\tfrac{1}{4}x^{2}-\frac{(\frac{1}{4}x^{2})^{3}}{(3!)^{2}}+\frac{(\frac{1}{4}x^{2})^{5}}{(5!)^{2}}-\cdots.$$

§10.65(ii) $\operatorname{ker}_{\nu}x$ and $\operatorname{kei}_{\nu}x$

When $\nu$ is not an integer combine (10.65.1) with (10.61.6). Also, with $\psi\left(x\right)=\Gamma'\left(x\right)/\Gamma\left(x\right)$,

10.65.3
$$\operatorname{ker}_{n}x=\tfrac{1}{2}(\tfrac{1}{2}x)^{-n}\sum_{k=0}^{n-1}\frac{(n-k-1)!}{k!}\cos\left(\tfrac{3}{4}n\pi+\tfrac{1}{2}k\pi\right)(\tfrac{1}{4}x^{2})^{k}-\ln\left(\tfrac{1}{2}x\right)\operatorname{ber}_{n}x+\tfrac{1}{4}\pi\operatorname{bei}_{n}x+\tfrac{1}{2}(\tfrac{1}{2}x)^{n}\sum_{k=0}^{\infty}\frac{\psi\left(k+1\right)+\psi\left(n+k+1\right)}{k!\,(n+k)!}\cos\left(\tfrac{3}{4}n\pi+\tfrac{1}{2}k\pi\right)(\tfrac{1}{4}x^{2})^{k},$$

10.65.4
$$\operatorname{kei}_{n}x=-\tfrac{1}{2}(\tfrac{1}{2}x)^{-n}\sum_{k=0}^{n-1}\frac{(n-k-1)!}{k!}\sin\left(\tfrac{3}{4}n\pi+\tfrac{1}{2}k\pi\right)(\tfrac{1}{4}x^{2})^{k}-\ln\left(\tfrac{1}{2}x\right)\operatorname{bei}_{n}x-\tfrac{1}{4}\pi\operatorname{ber}_{n}x+\tfrac{1}{2}(\tfrac{1}{2}x)^{n}\sum_{k=0}^{\infty}\frac{\psi\left(k+1\right)+\psi\left(n+k+1\right)}{k!\,(n+k)!}\sin\left(\tfrac{3}{4}n\pi+\tfrac{1}{2}k\pi\right)(\tfrac{1}{4}x^{2})^{k}.$$

10.65.5
$$\operatorname{ker}x=-\ln\left(\tfrac{1}{2}x\right)\operatorname{ber}x+\tfrac{1}{4}\pi\operatorname{bei}x+\sum_{k=0}^{\infty}(-1)^{k}\frac{\psi\left(2k+1\right)}{((2k)!)^{2}}(\tfrac{1}{4}x^{2})^{2k},$$
$$\operatorname{kei}x=-\ln\left(\tfrac{1}{2}x\right)\operatorname{bei}x-\tfrac{1}{4}\pi\operatorname{ber}x+\sum_{k=0}^{\infty}(-1)^{k}\frac{\psi\left(2k+2\right)}{((2k+1)!)^{2}}(\tfrac{1}{4}x^{2})^{2k+1}.$$

§10.65(iii) Cross-Products and Sums of Squares

10.65.6
$${\operatorname{ber}_{\nu}^{2}}x+{\operatorname{bei}_{\nu}^{2}}x=(\tfrac{1}{2}x)^{2\nu}\sum_{k=0}^{\infty}\frac{1}{\Gamma\left(\nu+k+1\right)\Gamma\left(\nu+2k+1\right)}\frac{(\frac{1}{4}x^{2})^{2k}}{k!},$$

10.65.7
$$\operatorname{ber}_{\nu}x\operatorname{bei}_{\nu}'x-\operatorname{ber}_{\nu}'x\operatorname{bei}_{\nu}x=(\tfrac{1}{2}x)^{2\nu+1}\sum_{k=0}^{\infty}\frac{1}{\Gamma\left(\nu+k+1\right)\Gamma\left(\nu+2k+2\right)}\frac{(\frac{1}{4}x^{2})^{2k}}{k!},$$

10.65.8
$$\operatorname{ber}_{\nu}x\operatorname{ber}_{\nu}'x+\operatorname{bei}_{\nu}x\operatorname{bei}_{\nu}'x=\tfrac{1}{2}(\tfrac{1}{2}x)^{2\nu-1}\sum_{k=0}^{\infty}\frac{1}{\Gamma\left(\nu+k+1\right)\Gamma\left(\nu+2k\right)}\frac{(\frac{1}{4}x^{2})^{2k}}{k!},$$

10.65.9
$$\left(\operatorname{ber}_{\nu}'x\right)^{2}+\left(\operatorname{bei}_{\nu}'x\right)^{2}=(\tfrac{1}{2}x)^{2\nu-2}\sum_{k=0}^{\infty}\frac{2k^{2}+2\nu k+\frac{1}{4}\nu^{2}}{\Gamma\left(\nu+k+1\right)\Gamma\left(\nu+2k+1\right)}\frac{(\frac{1}{4}x^{2})^{2k}}{k!}.$$

§10.65(iv) Compendia

For further power series summable in terms of Kelvin functions and their derivatives see Hansen (1975).
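For numerical checks, the series (10.65.1) can be evaluated by direct truncation. The following Python sketch is not part of the DLMF text; the number of terms is an arbitrary choice and is more than enough for moderate $x$:

```python
import math

def ber(nu, x, terms=30):
    """Truncated power series for ber_nu(x), taken directly from (10.65.1)."""
    total = 0.0
    for k in range(terms):
        total += (math.cos(0.75 * nu * math.pi + 0.5 * k * math.pi)
                  / (math.factorial(k) * math.gamma(nu + k + 1))) * (0.25 * x * x) ** k
    return (0.5 * x) ** nu * total

def bei(nu, x, terms=30):
    """Truncated power series for bei_nu(x), taken directly from (10.65.1)."""
    total = 0.0
    for k in range(terms):
        total += (math.sin(0.75 * nu * math.pi + 0.5 * k * math.pi)
                  / (math.factorial(k) * math.gamma(nu + k + 1))) * (0.25 * x * x) ** k
    return (0.5 * x) ** nu * total

# For nu = 0 this reproduces the classical series in (10.65.2),
# e.g. ber(0, 2.0) is approximately 1 - 1/4 + 1/576 - ...
print(ber(0, 2.0), bei(0, 2.0))
```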
{}
OpenStudy (anonymous): Solve for x, see pic. 5 years ago

OpenStudy (mathstudent55): First, distribute the -3 on the right side. Then multiply both sides by the LCD to get rid of fractions. Then collect like terms. 5 years ago

OpenStudy (anonymous): First take all x terms on one side, then the constant terms on the other. 5 years ago

OpenStudy (anonymous): LCD is 4?? 5 years ago

OpenStudy (mathstudent55): The LCD is the smallest number that 2, 3, and 4 all divide into evenly, so it can't possibly be smaller than 4. 5 years ago

OpenStudy (anonymous):
(1/3)x + 1/2 = -3((3/4)x - 1)
Multiply 1/3 by x to get x/3, and 3/4 by x to get 3x/4: x/3 + 1/2 = -3(3x/4 - 1).
Multiply -3 by each term inside the parentheses: x/3 + 1/2 = -9x/4 + 3.
Since 1/2 does not contain the variable to solve for, subtract 1/2 from both sides: x/3 = -1/2 - 9x/4 + 3, which simplifies to x/3 = -9x/4 + 5/2.
The least common denominator of 3, 4, and 2 is 12, so multiply each term by 12 to remove all the denominators: 4x = -27x + 30.
Since -27x contains the variable to solve for, add 27x to both sides: 4x + 27x = 30, so 31x = 30.
Divide each term by 31: x = 30/31. 5 years ago

OpenStudy (mathstudent55): LCD (= least common denominator) is also called the LCM (= least common multiple). What number is the smallest multiple of 2, 3, and 4? Here are the first several multiples of 2, 3, and 4. Pick the smallest common one:
2: 2, 4, 6, 8, 10, 12, 14, 16, 18
3: 3, 6, 9, 12, 15, 18, 21
4: 4, 8, 12, 16, 20, 24
5 years ago

OpenStudy (anonymous): Multiply each side by 12: $12\left(\frac{1}{3}x+\frac{1}{2}\right)=12\left(-3\left(\frac{3}{4}x-1\right)\right)$, giving $4x+6=36-27x$. 5 years ago
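A quick symbolic check of the result above (illustration only; assumes SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')

# (1/3)x + 1/2 = -3((3/4)x - 1)
equation = sp.Eq(sp.Rational(1, 3) * x + sp.Rational(1, 2),
                 -3 * (sp.Rational(3, 4) * x - 1))
print(sp.solve(equation, x))  # [30/31]
```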
{}
# How to input compressive strength value of concrete in construction stage?

If you measure the development of the concrete parameters in time, you can specify their absolute values in the construction stage of prestressing in IDEA StatiCa Beam. The software automatically calculates the equivalent time for a given concrete parameter. This allows prestressing of concrete members less than 3 days old that show fast development of compressive strength or elastic modulus, according to paragraph 3.1.2 (5). Once you tick the checkbox for fck, you will be able to input its user-specified value. Alternatively, one can specify the elastic modulus value at that stage. These values cannot be combined; specifying one overrules the other.
{}
# Delete all empty folders in a directory tree - oops, script deletes files too

Nonapeptide, posted 08-12-2011 6:18 PM:
I have a large folder that has both empty and full folders. I am using SBS 2008 and PowerShell 2.0. I used the following script (running from within the PowerShell ISE as an administrator) to recursively delete only empty folders:

dir 'P:\File Server\Cleanup\' -recurse | Where { $_.PSIsContainer -and @(dir -Lit $_.Fullname -r | Where {!$_.PSIsContainer}).Length -eq 0 } | Remove-Item -recurse

However, I noticed that before the script ran I had 17,832 folders and 232,343 files. After the script ran I had 12,622 folders and 201,286 files. Oops. Furthermore, I saw the following errors in the ISE which puzzled me:

+ Remove-Item <<<< -recurse
+ CategoryInfo: PermissionDenied: (desktop.ini:FileInfo) [Remove-Item], IOException
+ FullyQualifiedErrorId: RemoveFileSystemItemUnAuthorizedAccess,Microsoft.PowerShell.Commands.RemoveItemCommand
Remove-Item: Directory P:\path\to\a\folder cannot be removed because it is not empty.

Wait, WHAT? It's not supposed to even get to the point of attempting to delete a folder if it's not empty!
1. Why did files get deleted when it appears that the logic should have prevented that?
2. Why are deletions being attempted on folders with files in them?
Thanks for your help!
P.S. Are there no code blocks on these forums? I noticed that the code snippets look kinda ugly.

SJHarper replied on 08-13-2011 11:53 AM:
Try this instead.

gci "N:\test\" -r | ? { $_.PSIsContainer -eq $True } | ? { $_.GetFiles().Count -eq 0 -and $_.GetDirectories().Count -eq 0 } | Remove-Item

Steve

RSiddaway replied on 08-13-2011 12:05 PM:
One problem is that if a folder contains zero-length files, the length is still zero but the folder has content. The problem centres on this snippet:

(Get-ChildItem $_.Fullname -Recurse | where {!$_.PSisContainer}).Length

If the folder is empty the length isn't 0. No measurements can be taken, so the length doesn't exist! Compare these two lines:

PS> (Get-ChildItem C:\test\test3 -Recurse | where {!$_.PSisContainer}).Length -eq 0
False
PS> (Get-ChildItem C:\test\test3 -Recurse | where {!$_.PSisContainer}).Length -eq $null
True

The length of an empty folder is NULL, not zero. If you want to identify empty folders, this is a much easier and safer way:

$path = "c:\test"
$fso = New-Object -ComObject "Scripting.FileSystemObject"
$folder = $fso.GetFolder($path)
foreach ($subfolder in $folder.SubFolders){
    if (($subfolder.Files | Measure-Object).Count -gt 0){continue}
    if (($subfolder.SubFolders | Measure-Object).Count -gt 0){continue}
    if ($subfolder.Size -eq 0){Remove-Item -Path $($subfolder.Path) -Force -WhatIf}
}

BUT this doesn't work recursively, so we need a bit of modification:

function remove-emptyfolder {
    param ($folder)
    foreach ($subfolder in $folder.SubFolders){
        $notempty = $false
        if (($subfolder.Files | Measure-Object).Count -gt 0){$notempty = $true}
        if (($subfolder.SubFolders | Measure-Object).Count -gt 0){$notempty = $true}
        if ($subfolder.Size -eq 0 -and !$notempty){
            Remove-Item -Path $($subfolder.Path) -Force -WhatIf
        } else {
            remove-emptyfolder $subfolder
        }
    }
}
$path = "c:\test"
$fso = New-Object -ComObject "Scripting.FileSystemObject"
$folder = $fso.GetFolder($path)
remove-emptyfolder $folder

Nonapeptide replied on 08-13-2011 6:58 PM:
Wooow! Excellent! I'll give that a try when I get a chance. I didn't know that a zero length is different from a null. That makes sense now.
I have much more to learn about PowerShell. =) Thanks for your time and effort!

Nonapeptide replied on 08-23-2011 5:01 PM:
Just a quick check-in. When running this script, I receive the notification that "The item at c:\path\to\folder has children and the recurse parameter was not specified. If you continue all children will be removed." Interestingly, the folders that are throwing that interrupt have folders in them that are themselves completely empty. When I look through the script, my rudimentary skills seem to understand it as iterating down to the lowest leaves in the filesystem tree until it finds a folder that itself has absolutely nothing in it. When it finds a folder like that, it then removes it and goes up a level. Certainly I can add the -recurse, but I'm just wondering why the recursive function doesn't go down to the lowest levels and then work its way back up. There's definitely something I'm not understanding.

johnfrings replied on 10-08-2014 7:06 PM:
I am aware that I'm replying to a rather old post, but I just wanted to share my modifications. There is no -WhatIf in my version and it's not tested much, so probably don't test it on any real folders... What mainly differs is the ability to handle nested folders that become empty once their empty subfolder has been removed. (I also made an attempt to allow passing a path as a string which will then automagically become a FSO folder, but it lacks error handling etc.)

function RemoveEmptyFoldersRecursive {
    param ($folder)
    # Check if a string was passed and if so make it a FileSystemObject folder
    if ($folder.GetType().FullName -ne "System.__ComObject") {
        $fso = New-Object -ComObject "Scripting.FileSystemObject"
        $folder = $fso.GetFolder($folder)
        RemoveEmptyFoldersRecursive $folder
    } else {
        if (($folder.SubFolders | Measure-Object).Count -gt 0) {
            # recurse on subfolders if folder has subfolders
            foreach ($subfolder in $folder.SubFolders) {
                RemoveEmptyFoldersRecursive $subfolder
            }
        }
        # check again to see if the folder still contains anything
        if (($folder.SubFolders | Measure-Object).Count -eq 0) {
            if (($folder.Files | Measure-Object).Count -eq 0 -and $folder.Size -eq 0) {
                # remove folder if it is empty
                Write-Host "Removing empty folder " $folder.Path
                Remove-Item -Path $($folder.Path) -Force
            }
        }
    }
}
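For what it's worth, the "work from the bottom of the tree back up" idea that the last posts converge on can be written very compactly outside PowerShell. A hedged Python sketch of the same logic (not part of the thread; the re-check with os.listdir is what handles folders that only become empty after their empty children are removed, and the path in the usage line is hypothetical):

```python
import os

def remove_empty_dirs(root):
    """Bottom-up walk: children are visited before their parents, so a parent whose
    only contents were empty folders is itself empty by the time we reach it."""
    removed = []
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        if dirpath == root:
            continue                    # leave the starting folder in place
        if not os.listdir(dirpath):     # re-check on disk; children may just have been deleted
            os.rmdir(dirpath)           # only ever succeeds on a truly empty folder
            removed.append(dirpath)
    return removed

# Example usage (hypothetical path):
# print(remove_empty_dirs(r"C:\test"))
```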
{}
GATE PI Engineering Mathematics Differential Equations Previous Years Questions

## Marks 1

The solution to $$6yy' - 25x = 0$$ represents a
The solution to $$x^{2}y'' + xy' - y = 0$$ is
The homogeneous part of the differential equation $$\frac{d^{2}y}{dx^{2}} + p\frac{dy}{dx} + qy = r$$ ($$p, q, r$$ are constants) has r...
The solutions of the differential equation $$\frac{d^{2}y}{dx^{2}} + 2\frac{dy}{dx} + 2y = 0$$ are
The differential equation $$\left[1 + \left(\frac{dy}{dx}\right)^{2}\right]^{3} = C^{2}\left[\frac{d^{2}y}{dx^{2}}\right]$$ ...
Solve for $$y$$ if $$\frac{d^{2}y}{dt^{2}} + 2\frac{dy}{dt} + y = 0$$ with $$y(0)=1$$ and $$y'(0) = -2$$

## Marks 2

Consider the differential equation $$x^{2}\frac{d^{2}y}{dx^{2}} + x\frac{dy}{dx} - 4y = 0$$ with the boundary conditions of $$y$$ ...
The solution of the differential equation $$\frac{d^{2}y}{dx^{2}} + 6\frac{dy}{dx} + 9y = 9x + 6$$ with $$C_{1}$$ and $$C_{2}$$ ...
Which one of the following differential equations has a solution given by the function $$y = 5\sin\left(3x + \frac{\pi}{3}\right)$$
The solution of the differential equation $$\frac{dy}{dx} - y^{2} = 1$$ satisfying the condition $$y(0)=1$$ is
The solution of the differential equation $$\frac{d^{2}y}{dx^{2}} = 0$$ with boundary conditions (i) $$\frac{dy}{dx} = 1$$ at $$x=0$$ (ii) ...
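As an illustration of one of the fully stated questions above (y'' + 2y' + y = 0 with y(0) = 1 and y'(0) = -2), the characteristic equation r^2 + 2r + 1 = 0 has the double root r = -1, giving y(t) = (1 - t)e^{-t}. A quick symbolic check, assuming SymPy is available (not part of the question bank):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' + 2y' + y = 0 with y(0) = 1 and y'(0) = -2
ode = sp.Eq(y(t).diff(t, 2) + 2 * y(t).diff(t) + y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): -2})
print(sol)  # expected: Eq(y(t), (1 - t)*exp(-t))
```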
{}
# Why is there a time dependence in the Heisenberg states of the Haag-Ruelle scattering theory?

+ 7 like - 0 dislike

I'm reading R. Haag's famous book "Local Quantum Physics: Fields, Particles, Algebras", 2nd edition, and I'm very puzzled by the way he treats the Heisenberg picture in the Haag-Ruelle scattering theory. It begins in section "II.3 Physical Interpretation in Terms of Particles", where, on page 76, he clearly states "Our description is in the Heisenberg picture. So $\Psi_{i\alpha}$ describes the state "sub specie aeternitatis"; we may assign to it, as in (I.3.29), a wave function in space-time obeying the Klein-Gordon equation." Then, on page 77, he says: "Suppose the state vectors $\Psi_1$, $\Psi_2$ describe states which at some particular time $t$ are localized in separated space regions $V_1$, $V_2$." From here on the whole construction begins. I would very much appreciate it if an expert in Haag-Ruelle scattering, or whoever knows the answer, would answer my question as to why a state vector in the Heisenberg picture like $\Psi_1$ and $\Psi_2$ above depends on time, when it's common knowledge that there is no time dependence assigned to the state vectors in the Heisenberg picture?

EDIT 1: Up until recently I didn't even know how a scattering process might be described in the Heisenberg picture of QM, since once the initial state is prepared at $t_i = - \infty$, this state will remain unchanged for all time and it will be the same for $t_f = + \infty$, and hence there could be no scattering (let alone particle production, 3-body scattering, rearrangement collisions, etc.). How to solve this problem? Then I discovered one of the most lucid presentations, in the paper of H. Ekstein, "Scattering in field theory", http://link.springer.com/article/10.1007/BF02745471

The basic idea is the following: one prepares a state of the system at $t_i = -\infty$ by measuring a complete set of compatible observables represented by operators in the Heisenberg picture (i.e., time dependent), say $A(t_{i}), B(t_{i})$, etc. Obviously, this prepared state is a common eigenvector of these operators, say $|a,b,...; t_{i}\rangle$, corresponding to the eigenvalues (obtained in measurement) $a, b$, ..., i.e., $A(t_{i})|a,b,...; t_{i}\rangle = a|a,b,...; t_{i}\rangle, B(t_{i})|a,b,...; t_{i}\rangle = b|a,b,...;t_{i}\rangle$, etc. Then, one lets the system evolve from $t_i = -\infty$ to $t_f = +\infty$. Obviously, the state vector of the system remains unchanged, namely $|a,b,...; t_{i}\rangle$ for any time $t$, with $t_i \leq t \leq t_f$, since we are in the Heisenberg picture, but the operators representing dynamical observables do change in time according to the Heisenberg equation of motion. At time $t_f = +\infty$, one again measures the system, choosing a complete set of compatible observables, say $C(t_{f}), D(t_{f})$, .... As a result of this measurement, the state of the system changes, at time $t = t_f$, from $|a,b,...; t_{i}\rangle$ to $|c,d,...; t_{f}\rangle$, where $|c,d,...; t_{f}\rangle$ is a common eigenvector of the operators $C(t_{f}), D(t_{f})$, ..., corresponding to the eigenvalues $c, d,$ ... obtained in the measurement (at time $t = t_f$), i.e. $C(t_{f})|c,d,....; t_{f}\rangle = c|c,d,....; t_{f}\rangle, D(t_{f})|c,d,....; t_{f}\rangle = d|c,d,....; t_{f}\rangle$, etc.
The quantity of interest is the transition amplitude from the Heisenberg state $|a,b,...; t_{i}\rangle$ to the Heisenberg state $|c,d,...; t_{f}\rangle$, and this is given by the S-matrix element $S_{a,b,...; c,d,...} = \langle c,d,...; t_{f}| a,b,...; t_{i}\rangle$. To summarize: the key to understanding scattering in either the Schrodinger or Heisenberg picture is to realize that it implies 2 experimental operations, namely preparation at $t = t_i$ and measurement at $t = t_f$. A logical approach to solving a scattering problem in the Heisenberg picture (as presented by Ekstein) is the following: • H0) For any given observable solve the Heisenberg equation of motion to find its dependence on time, i.e. the operator $A(t)$. • H1) For any Heisenberg operator (representing an observable) $A(t)$ find the asymptotic values $A_i = \lim_{t \rightarrow -\infty} A(t)$ and $A_f = \lim_{t \rightarrow +\infty} A(t)$ • H2) Solve the eigenvalue problem for the asymptotic operators $A_i$ and $A_f$. The eigenvectors are the corresponding asymptotic scattering states. • H3) Select a complete system of compatible observables (CSCO) that corresponds to state preparation at $t = t_i$, denoted generically by $A_i$. Select a CSCO that corresponds to measurement at $t = t_f$, denoted generically by $C_f$. • H4) Calculate matrix elements between eigenvectors determined in step H2), namely $\langle c, t_{f}| a, t_{i}\rangle$, where $|a, t_{i}\rangle$ is an eigenvector of $A_i = A(t_{i})$, and $|c, t_{f}\rangle$ is an eigenvector of $C_f = C(t_{f})$. Regarding the Haag-Ruelle scattering, things are very confusing. The main argument is the same in all the books available. Instead of following the very logical steps H1)-H4) presented above, one starts by constructing a vector depending on a parameter $"t"$ and shows that this vector has limits when $|t|$ becomes infinite. I must say that this type of reasoning is reminiscent of the way one treats scattering in the Schrodinger picture (SP). In the SP, one starts with an arbitrary state vector $|\Psi (t)\rangle$ which is time dependent according to the SP and then must show that $|\Psi (t)\rangle$ has asymptotes when (the real time) $|t|$ becomes infinite. I would be very grateful if you could help me with some answers to these questions: • 1) What is the relation between the parameter $"t"$ of H-R scattering and the real time, since when $"t"$ becomes infinite they claim to have obtained the asymptotic scattering states? • 2) What is the physical interpretation of the vectors $\psi_t$ in H-R scattering? Are they obtained as a result of a measurement? Are they in the Heisenberg picture or in the Schrodinger picture? • 3) Is there a CSCO such that the H-R asymptotic scattering states are the eigenvectors of this CSCO? If yes, is this CSCO the asymptotic limit of a finite time Heisenberg CSCO, as described in steps H1)-H4)? • 4) Can one obtain asymptotic scattering states for an ARBITRARY CSCO using the H-R method? This should be the case since one can prepare the initial state as one wants at $t = t_i$, and then can choose to measure what observable one wants at $t = t_f$, and hence the CSCOs corresponding to preparation and measurement must be arbitrary. EDIT 2: @Pedro Ribeiro Your objections to Ekstein's construction are perhaps unfounded: • I chose a discrete spectrum for CSCOs in my presentation from EDIT 1 only to convey the general idea with minimum notation. In case of a continuous spectrum one can use spectral projection operators as per von Neumann's QM. 
• A Heisenberg operator $A(t)$ acts in the full Hilbert space, i.e. in the same Hilbert space on which the total Hamiltonian $H$ acts. The Haag theorem has to do with the fact that the free Hamiltonian $H_0$ and the full Hamiltonian $H$ act on 2 different Hilbert spaces. There is no connection between $A(t)$ and $H_0$ or its associated Hilbert space for any time $t$, finite or infinite. Hence, Haag's theorem has no bearing on $\lim_{t \rightarrow \pm\infty} A(t)$ and hence does not forbid the existence of this limit. Examples: If $A(t)$ commutes with $H$, then $A(t)$ is constant in time and the limit surely exists (see, e.g., the momentum operator). As a matter of fact, the whole LSZ idea is based on such limits!

There is only one way a state can depend on time $t$ in the Heisenberg picture: that time $t$ has to be a time at which some Heisenberg operator, say $A(t)$, is measured on the system, and as an effect the state becomes an eigenvector $|a,t\rangle$ of that operator. Otherwise, state vectors in the Heisenberg picture do not evolve dynamically in time! One can look at my post. From your presentation it is still not very clear whether the parameter $"t"$ is the time at which one chooses to measure a CSCO on the system and obtains an eigenvector(?) $\psi_t$. For that one has to construct such a Heisenberg CSCO and show that $\psi_t$ is its eigenvector (corresponding to some eigenvalue) at time $t$. Can one show that?

In the meantime I've discovered some lecture notes by Haag published in Lectures in Theoretical Physics, Volume III, edited by Brittin and Downs, Interscience Publishers. Starting on page 343 Haag discusses his theory and in his own words says very clearly that the $\psi_t$ states are manifestly in the Schrodinger picture, and $t$ is regular time. Haag considers only the asymptotic limits of $\psi_t$ to represent scattering states in the Heisenberg picture. But even that cannot work since $\psi_t$ has 2 limits, $\psi_{\pm} = \lim_{t\rightarrow\pm\infty}\psi_t$, and hence one needs 2 different Heisenberg pictures: one that coincides with the Schrodinger picture at $t = -\infty$, and a 2nd one, which coincides with the Schrodinger picture at $t = +\infty$. So, he doesn't stay all the time in the Heisenberg picture, but uses most of the time the Schrodinger picture, and in the end, apparently, 2 different Heisenberg pictures. However, it's well known that the Schrodinger picture does not exist in relativistic qft due to vacuum polarization effects!!! What is left of Haag-Ruelle theory, then???
Most observables are not of this kind - points in the continuous part of the spectrum are not eigenvalues in the sense of having corresponding eigenvectors. They do have what are called corresponding "generalized eigenvectors", which strictly speaking are not contained in the Hilbert space.

• In QFT, the limits $\lim_{t\to\pm\infty}A(t)$ usually do not exist in the operator sense for the relevant observables. This is mainly due to Haag's theorem, the same theorem which tells us that there is no interaction picture in QFT. That is the technical reason why the time parameter $t$ must appear in state vectors, since the asymptotic limit can only be approached by applying $A(t)$ first to some state (namely, the vacuum state).

The above points show that making steps H1)-H2) rigorous (especially in the context of QFT) is quite problematic. H3)-H4), on the other hand, are not that far off.

Secondly, I want to stress a few conceptual points about Haag-Ruelle scattering theory. I do so at the risk of being a bit pedantic, but I want to set a precise context in a self-contained manner. Recall that the Haag-Ruelle theory is a scattering framework for quantum field theories. Regardless of whether you work with Wightman fields or a Haag-Kastler net of C*-algebras, this means that all (smeared) fields and all (local) observables are thought of as being localized in a certain region of space-time, in the sense of relativistic microcausality: observables localized in causally disjoint space-time regions should commute (for smeared fields, they either commute or anti-commute, depending on their spin). This is radically different from (non-relativistic) quantum mechanics. Particularly, any given local observable should be thought of as being measured within a certain region of space and within a certain interval of time. A "sharp time" localization for observables is only possible for free fields, which of course have a trivial scattering theory. In other words, local observables and smeared fields in QFT are always in the Heisenberg picture, but their time localization is usually not "sharp". Given any local observable or smeared field polynomial $A$ localized in a space-time region $\mathscr{O}$, time translations (using the unitary time evolution of the theory) simply translate the localization region $\mathscr{O}$ of $A$ in time - more precisely, the localization region of $A(t)$ is $$\mathscr{O}_t=\{(x^0+t,x^1,x^2,x^3)\ |\ (x^0,x^1,x^2,x^3)\in\mathscr{O}\}\ .$$ More generally, if $U(x)$ is the unitary operator implementing the space-time translation by $x=(t,\mathbf{x})$, then $A_x=U(x)AU(x)^*$ is localized in $\mathscr{O}+x$ (so that $\mathscr{O}+(t,\mathbf{0})=\mathscr{O}_t$). This (I hope) answers your question 1). However, the input and output of a scattering experiment are about large times and large distances away from the scattering center, so it is more appropriate to talk about momentum localization when dealing with scattering states. For constructing the latter, we need local observables or smeared field polynomials with a nonzero transition amplitude between the vacuum state and a one-particle subspace with mass (say) $m>0$, whose existence is one of the assumptions of the Haag-Ruelle theory. Such operators exist thanks to the Reeh-Schlieder theorem.
One then localizes such an operator (let us call it $Q$) in an energy-momentum region $\widehat{K}$ disjoint from the remainder of the energy-momentum spectrum (recall that there is an open neighborhood $m^2-\epsilon<p^2<m^2+\epsilon$, $0<\epsilon<m^2$ in energy-momentum space whose only points $p$ belonging to the energy-momentum spectrum lie precisely in the mass shell $p^2=m^2$, by the mass gap assumption of the Haag-Ruelle theory) by smearing the operator-valued function $x\mapsto Q_x$ with a tempered test function $f$ $$Q_f=\int_{\mathbb{R}^4}f(x)Q_x\mathrm{d}^4 x$$ whose Fourier transform $\hat{f}$ is of the form $\hat{f}(p)=h(p^2)\tilde{f}(\mathbf{p})$, where $h$ is a smooth function on $\mathbb{R}$ supported in $(m^2-\epsilon,m^2+\epsilon)$ and $\text{supp}\tilde{f}$ is such that $\{(\sqrt{\mathbf{p}^2+m^2},\mathbf{p})\ |\ \mathbf{p}\in\text{supp}\tilde{f}\}\subset\widehat{K}$. One obtains that if $|\Omega\rangle$ is the vacuum vector, then $Q_f|\Omega\rangle$ is a one-particle state with momentum wave function $\tilde{f}(\mathbf{p})$. We then write $Q(t,f)=(Q_f)_t$ - since the one-particle subspace is invariant under the action of the translation group, $Q(t,f)|\Omega\rangle$ is still a one-particle state, with momentum wave function $e^{-it\sqrt{\mathbf{p}^2+m^2}}\tilde{f}(\mathbf{p})$. It should become clear at this point that the precise form of the local observable $Q$ is not important. A way to think of the observable $Q(t,f)$ is as follows: applying $Q(t,f)$ to the vacuum state adds an "energy-momentum chunk" to it, localized in $\widehat{K}\cap\{p^2=m^2\}$. By the uncertainty principle, $Q_f$ cannot be a local observable, but it is "almost local" in the sense that the commutator with any observable localized in a causally disjoint region should be "negligible" at large distances, more or less like tempered test functions with non-compact support (e.g. Gaussians). The effect of the time translation by an amount $t$ is that the approximate localization center spatially disperses with displacements $t\mathbf{v}=t\mathbf{p}/m$, where $\mathbf{p}$ belongs to the support of $\tilde{f}$. Think of it as a dispersing bunch of classical particles of mass $m$ in free motion at speeds $\mathbf{v}=\mathbf{p}/m$. This intuitive picture can be made rigorous with the aid of the stationary phase method. If one now considers an operator monomial $Q(t,f_1)\cdots Q(t,f_n)$, where $\hat{f}_j(p)=h(p^2)\tilde{f}_j(\mathbf{p})$, $j=1,\ldots,n$, one may think of it as adding $n$ "energy-momentum chunks" to the vacuum state. The key point is that if the supports of the $\tilde{f}_j$'s are all disjoint, the corresponding localization centers move away from each other so that their commutators become negligible at large times. So, in a sense, the "almost local" observables $Q(t,f_j)$ become "asymptotically compatible" and the above "energy-momentum chunks" become effectively non-interacting at large times, thus giving origin asymptotically to $n$-particle states. This is made precise by the statement that $$\psi_t=Q(t,f_1)\cdots Q(t,f_n)|\Omega\rangle$$ converges to $n$-particle states with momentum wave functions $\tilde{f}_j(\mathbf{p})$, $j=1,\ldots,n$ as $t\to\pm\infty$ for each $n$. 
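As an aside (a sketch under the conventions above, with normalization, Fourier and $2\pi$ factors suppressed; this is not part of the original answer), the claim that $Q(t,f)|\Omega\rangle$ carries the momentum wave function $e^{-it\sqrt{\mathbf{p}^{2}+m^{2}}}\tilde f(\mathbf{p})$ can be seen from the spectral calculus of the energy-momentum operators $P$:
$$Q_f\,|\Omega\rangle=\int_{\mathbb{R}^4} f(x)\,U(x)\,Q\,U(x)^{*}\,|\Omega\rangle\,\mathrm{d}^4x=\int_{\mathbb{R}^4} f(x)\,U(x)\,Q\,|\Omega\rangle\,\mathrm{d}^4x=\hat f(P)\,Q\,|\Omega\rangle,$$
using $U(x)|\Omega\rangle=|\Omega\rangle$. Since $\operatorname{supp}\hat f$ meets the energy-momentum spectrum only on the mass shell, $\hat f(P)$ kills all but the one-particle component of $Q|\Omega\rangle$ and multiplies its momentum wave function by $\hat f(\omega(\mathbf{p}),\mathbf{p})\propto\tilde f(\mathbf{p})$, where $\omega(\mathbf{p})=\sqrt{\mathbf{p}^{2}+m^{2}}$. Applying the time translation then gives
$$Q(t,f)\,|\Omega\rangle=U(t)\,Q_f\,U(t)^{*}\,|\Omega\rangle=U(t)\,Q_f\,|\Omega\rangle\;\sim\;e^{-it\omega(\mathbf{p})}\,\tilde f(\mathbf{p}),$$
up to the sign convention chosen for $U(t)$ on the one-particle space.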
At finite but large times, one may think of $\psi_t$ as a state which yields a nonzero response around time $t$ from a coincidence arrangement of $n$ detectors with "momentum detection windows" contained in the supports of the $\tilde{f}_j$'s and (approximate) space-time localization regions contained in those of the $Q(t,f_j)$'s, but yields a "negligible" response from a similar coincidence arrangement of $n+1$ detectors. This interpretation may even be used (somewhat tautologically) to provide an operational definition of what a particle is in QFT. This (I hope as well) answers your question 2).

Now we are also in a position to address questions 3) and 4). In QFT the Hilbert space is generated by applying all smeared field operator polynomials or all local observables (not necessarily compatible!) to the vacuum state. In fact, by the Reeh-Schlieder theorem, such states are a total set in the Hilbert space even if we restrict to a single space-time region with nonvoid causal complement. The "in" and "out" Hilbert spaces in Haag-Ruelle theory, however, are obtained by applying to the vacuum state a special subset of "almost local" operations - namely, polynomials in the $Q(t,f_j)$'s for all $f_j$ as above - and taking respectively the asymptotic limits $t\to\pm\infty$. As discussed in the previous paragraph, the observables $Q(t,f_j)$ are only "asymptotically" compatible, but as I pointed out at the beginning, this picture must be taken cum grano salis since the operator limits $\lim_{t\to\pm\infty}Q(t,f_j)$ usually do not exist. Nonetheless, since the "in" and "out" Hilbert spaces are obtained as subspaces of the interacting Hilbert space, any "in" state may be prepared with arbitrary precision by applying local operations (even in a single space-time region with nonvoid causal complement) to the vacuum state. This is the closest we can get to a positive answer to your question 3). As for question 4), this is related to whether the "in" and "out" Hilbert spaces coincide with the whole interacting Hilbert space, that is, whether our field theory is asymptotically complete. This is usually an additional assumption, which has never been proven except in trivial (i.e. free) cases. We do know, however, that whenever a model has bound states, solitons, etc., then asymptotic completeness fails.

Finally, I must point out that the treatment of Haag-Ruelle scattering theory in Haag's book is almost telegraphic at parts (such as these) and not really a good first place to learn this topic. Better references are Section XI.16, pp. 317-331 of Volume III (Scattering Theory) of the book Methods of Modern Mathematical Physics by Michael Reed and Barry Simon (Academic Press, 1979) and Chapter 5 of the book Mathematical Theory of Quantum Fields by Huzihiro Araki (Oxford University Press, 1999), preferably in that order - Reed and Simon introduce the pedagogically simplifying assumption that the field operator itself interpolates between the vacuum and the one-particle Hilbert space (physically, the particles appearing in the asymptotic states are not "composite" with respect to the field). As discussed above, this assumption can be circumvented with the help of the Reeh-Schlieder theorem.
This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Pedro Lauridsen Ribeiro. Answered Aug 16, 2016 by (580 points).

Thanks for the references, but I'm not concerned at this time with math manipulations, but with the physical way of stating the scattering problem in the Heisenberg picture and the interpretation of results. How can one really state/describe scattering in the Heisenberg picture when the state vectors are constant in time?! Why are the state vectors time-dependent? Are we returning to the Schroedinger picture? This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Andrea Becker

The scattering states are constructed by means of a time-dependent, almost local "one-particle creation operator" $q_{i,\alpha}(h_i,t)$ (see formula (II.4.15) on p. 88), which is then applied to the vacuum vector as in formula (II.4.17) - that's why in Haag-Ruelle scattering one is really working in the Heisenberg picture. The resulting (non-factorized) states have the approximate localization stated on page 77 and are clearly time-dependent, but the time dependence is removed in the asymptotic limit $t\to\pm\infty$. Just by reading pages 76-77 alone one is not able to get the above picture. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Pedro Lauridsen Ribeiro

I know all that. I've read to the end, but there is still an INTERPRETATIONAL problem. In scattering theory formulated in the Schrodinger picture one has to show that an arbitrary $|\Psi (t)\rangle$ converges to asymptotic states when $|t|$ becomes infinite. But in the Schrodinger picture $|\Psi\rangle$ IS time dependent. In the book and everywhere in the H-R theory one constructs a time-dependent $|\Psi (t)\rangle$ and shows that it has limits, but what is the physical interpretation of this vector in the Heisenberg picture? Moreover, is this time-dependent vector the most general it can be? This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Andrea Becker

Your objections to H1) and H2) are unfounded. See EDIT 2 above. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Andrea Becker

+ 2 like - 0 dislike

Unfortunately, I don't have precise references at the minute about the following argument, but only some notes taken during lectures of S. Doplicher. The Haag-Ruelle scattering theory starts from the observation that observables cannot be used to construct asymptotic states from the vacuum, since they leave the superselection sectors invariant. Hence one needs to use field operators. Considerations on the Fourier transform lead to the conclusion that, given a field operator $B$, one has to construct a quasi-local operator $\tilde B$ out of localisation data for a single-particle state [the details should be contained in the original work of Haag-Ruelle]. A single-particle state is then constructed simply as $$\phi = B\Omega$$ We now construct the Heisenberg state. By this I mean a state that does not vary in time. This can be achieved by considering the continuity equation associated to the Klein-Gordon field equation, and in particular by considering the time-independent inner product that comes from it. To be concrete, take the one-particle state $\phi$ and set $$B_\phi(t)\Omega := \int_{\mathbb R^3}\overline{\phi(x)}\overset{\leftrightarrow}{\partial_0}U(x,I)B\Omega\ \text d^3\mathbf x,$$ where $U$ is a representation of the Poincaré group on Fock space.
Observe that, in general, $B_\phi(t)$ will depend on time, but by construction $B_\phi(t)\Omega$ won't. Hence $$\psi:=B_\phi(t)\Omega = B_\phi(0)\Omega,\qquad\forall t\in\mathbb R$$ in practice, and this is how one can go about getting the asymptotic limit. The construction of $n$-particle states is based on the choice of single-particle states with disjoint support in momentum space. This is to guarantee that, in the asymptotic limit, the particles will be well separated (read: far apart) in space and practically free, i.e. non-interacting. The state is then of the form $$\Psi^t := B_{1\phi_1}(t)\cdots B_{n\phi_n}(t)\Omega,$$ where $B_k$ and $\phi_k$ is a choice of quasi-local operators and solutions to the Klein-Gordon equations done as described above. The property of clustering then shows that the above state has the form of a product of states, and therefore one can set $$\Psi^{\text{in}} = \psi_1\times^{\text{in}}\cdots\times^{\text{in}}\psi_n:=\lim_{t\to-\infty}\Psi^t$$ and similarly for the outgoing $n$-particle states. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Phoenix87. Answered Aug 20, 2016 by (40 points).

The $t$ in $\Psi^t$ is merely a label. As stated earlier, $B_\phi(t)\Omega = B_\phi(0)\Omega$. The time-dependence is only in the product of the quasi-local operators, but as soon as this product touches the vacuum vector it becomes time-independent by construction in the asymptotic limit by the clustering property. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Phoenix87

Not for any $t$ in the case of the $n$-particle state. You have to make use of the hypothesis on the support in momentum space of the single-particle wave-functions and the clustering to then use the property that $B_\phi(t)\Omega = B_\phi(0)\Omega$ for any $t$. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Phoenix87

If it's only the asymptotic limit that is time independent, then for finite time $\Psi^{t}$ depends on $t$ and it cannot be in the Heisenberg picture. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Andrea Becker

But the asymptotic states are what the HR theory is about, isn't it? This is what you use to then define the $S$ matrix as the unitary operator that rotates an "in" state to the corresponding "out" state (caveats: 1. some people use the opposite conventions; 2. asymptotic state completeness should be assumed). This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Phoenix87

It doesn't really matter, as the state associated to the vector $\Psi^t$ is constructed as $\omega_{\Psi^t}(A)=\frac{(\Psi^t,A\Psi^t)}{(\Psi^t,\Psi^t)}$. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Phoenix87

One can choose a function $\phi$, indeed, but once chosen it cannot be changed and it evolves in time. So my question remains. Again, any proof would be greatly appreciated. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Andrea Becker

I've found a sketch of the proof in Haag's book, and the norm is finite, indeed. I would like to thank you very much for all the help you've given me. I greatly appreciate it! This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user Andrea Becker

+ 0 like - 0 dislike

This is not an authoritative comment on your interpretation concerns.
The Schrodinger properties of the time dependent state $\Psi^t$ are, actually, never used. As was commented above, the limit $\lim_{t\rightarrow\pm\infty}A(t)$ usually does not exist in the operator sense, thus one acts on a Heisenberg state to get a well defined quantity. Therefore, I would interpret $\Psi^t$ as a symbol that has to be substituted by the actual definition in the proof and thus stay in the Heisenberg picture. This post imported from StackExchange Physics at 2017-11-23 19:27 (UTC), posted by SE-user dlont. Answered Oct 30, 2016 by (0 points).
{}
# Egg Production Strategies of the Tropical Hermit Crab Paguristes tortugae from Brazil

Fernando L. M. Mantelatto, Vera F. Alarcon, Renata B. Garcia, Egg Production Strategies of the Tropical Hermit Crab Paguristes tortugae from Brazil, Journal of Crustacean Biology, Volume 22, Issue 2.

## Abstract

The aim of the present study was to characterize the total and seasonal fecundity of the hermit crab Paguristes tortugae as well as the influence of shell type on fecundity using the morphometric relationship. Ovigerous females were collected monthly from January to December, 1998, in the infralittoral region of Anchieta Island. Samplings were performed using SCUBA. The hermit crabs and the gastropod shells were measured. Hermit wet weight and shell dry weight were recorded. For the fecundity analysis, only ovigerous females with eggs in the early phase of development were selected. The number of eggs carried by individuals of several sizes (shield length, SL), condition of development, and egg size were determined. A high percentage (%) of ovigerous females with eggs in the early phase of development were captured, with a low frequency (4.29%) of females with eggs in the final stage of development. Size-frequency distribution during the months showed two peaks in the ovigerous female population (2.5 to 3.4 mm and 4.0 to 5.4 mm of SL). Mean ± SD fecundity was $132 \pm 102$ eggs and tended to increase with increasing SL. No significant difference in fecundity occurred among the various seasons of the year. The results showed continuous and elevated reproduction of P. tortugae, with a high reproductive potential for the population. The pattern of the frequency distribution of ovigerous females tending to bimodality may be characteristic of a population with a two-year life cycle. Considering the four shells most occupied by P. tortugae (Pisania auritula, Cerithium atratum, Morula nodulosa, and Leucozonia nassa), the highest fecundity was observed for ovigerous females occupying larger shells (P. auritula and L. nassa). The reproductive aspects of P. tortugae were related to strategies developed to compensate for interspecific competition, i.e., high and continuous reproductive effort, early maturity, low fecundity, and larger eggs produced.

Studies on the reproductive strategies of Anomura have been attracting increasing interest in view of the fact that hermit crabs represent an interesting group from a biological and evolutionary point of view (Mantelatto and Garcia, 1999). This group comprises more than 800 species of hermit crabs worldwide and has undergone major revision (Ingle, 1993). Hermit crabs represent an important component of many intertidal and moderately deep benthic communities, where they play a crucial role in the marine system (Fransozo and Mantelatto, 1998). Still, few studies have been conducted on the life history or reproductive ecology of hermit crabs in comparison with other decapods. Fecundity is an important factor in crustaceans, among others, for determining the reproductive potential of a species and/or the stock size of a population (Mantelatto and Fransozo, 1997), possibly explaining the reproductive adjustments to environmental conditions (Sastry, 1983).
Hermit crabs in particular, because they live in gastropod shells, afford an opportunity to study the interaction between resource and reproductive strategy. According to Wilber (1989), a female's shell can affect her reproduction by determining whether or not she becomes ovigerous and/or limiting the number of eggs produced. That author considered several aspects of the shell-size/reproduction relationship, particularly concerning variations in egg production in relation to the species, size, volume, and weight of the shell occupied.

## Egg Production Strategies of the Tropical Hermit Crab Paguristes tortugae from Brazil

At the same time, three groups of hermit crabs in particular (i.e., the genera Pagurus and Paguristes, and the "Pylopagurus-like" species) have been a difficult group to study, since many species are known from only one or a few localities, and many still-undescribed species are known to occur in the region, leading to major identification problems among experts and nonspecialists alike (Hendrickx and Harvey, 1999). Paguristes tortugae Schmitt, 1933, is a relatively common species distributed in the Western Atlantic from Florida to Brazil (as far south as Santa Catarina State) (Rieger and Giraldi, 1997; Melo, 1999). Despite the few papers available on the general biology of pagurids, P. tortugae has been relatively well studied (see Mantelatto and Sousa, in press, for a review).
{}
# Minimum Coefficient of Static Friction

A pickup truck drives around a flat curve with radius $$r$$ = 88 $$m$$ at a speed of $$v$$ = 14 $$\frac{m}{s}$$.

## Part 1

What is the minimum coefficient of static friction required to keep the truck from slipping?
• 4.4
• 0.23
• 2.2
• 0.016
• 0.16
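For an unbanked curve, static friction must supply the entire centripetal force, so μ_min m g = m v²/r, i.e. μ_min = v²/(r g). A quick check in Python (illustration only; g = 9.8 m/s² is assumed, since the problem does not state it):

```python
g = 9.8      # m/s^2, assumed standard gravity
v = 14.0     # m/s
r = 88.0     # m

mu_min = v**2 / (r * g)   # friction supplies the centripetal force m*v^2/r
print(round(mu_min, 2))   # ~0.23, matching the second option
```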
{}
2000A&A...356..849G - Astronomy and Astrophysics, volume 356, 849-872 (2000/4-3)

Multi-colour PL-relations of Cepheids in the HIPPARCOS catalogue and the distance to the LMC.

GROENEWEGEN M.A.T. and OUDMAIJER R.D.

Abstract (from CDS): We analyse a sample of 236 Cepheids from the Hipparcos catalogue, using the method of "reduced parallaxes" in V, I, K and the reddening-free "Wesenheit index". We compare our sample to those considered by Feast & Catchpole (1997MNRAS.286L...1F) and Lanoix et al. (1999MNRAS.308..969L), and argue that our sample is the most carefully selected one with respect to completeness, the flagging of overtone pulsators, and the removal of Cepheids that may influence the analyses for various reasons (double-mode Cepheids, unreliable Hipparcos solutions, possible contaminated photometry due to binary companions). From numerical simulations, and confirmed by the observed parallax distribution, we derive a (vertical) scale height of Cepheids of 70 pc, as expected for a population of 3-10 solar-mass stars. This has consequences for Malmquist- and Lutz-Kelker (Lutz & Kelker, 1973PASP...85..573L; Oudmaijer et al., 1998MNRAS.294L..41O) type corrections, which are smaller for a disk population than for a spherical population. The V and I data suggest that the slope of the Galactic PL-relations may be shallower than that observed for LMC Cepheids, either for the whole period range, or that there is a break at short periods (near log P0 ∼ 0.7-0.8). We stress the importance of two systematic effects which influence the distance to the LMC: the slopes of the Galactic PL-relations and metallicity corrections. In order to assess the influence of these various effects, we present 27 distance moduli (DM) to the LMC. These are based on three different colours (V, I, K), three different slopes (the slope observed for Cepheids in the LMC, a shallower slope predicted from one set of theoretical models, and a steeper slope as derived for Galactic Cepheids from the surface-brightness technique), and three different metallicity corrections (no correction as predicted by one set of theoretical models, one implying larger DM as predicted by another set of theoretical models, and one implying shorter DM based on empirical evidence). We derive DM between 18.45±0.18 and 18.86±0.12. The DM based on K are shorter than those based on V and I and range from 18.45±0.18 to 18.62±0.19, but the DM in K could be systematically too low by about 0.1 magnitude because of a bias due to the fact that NIR photometry is available only for a limited number of stars. From the Wesenheit index we derive a DM of 18.60±0.11, assuming the observed slope of LMC Cepheids and no metallicity correction, for want of more information. The DM to the LMC based on the parallax data can be summarised as follows. Based on the PL-relation in V and I, and the Wesenheit index, the DM is 18.60±0.11 (±0.08 slope)(+0.08-0.15 metallicity), which is our current best estimate. Based on the PL-relation in K the DM is 18.52±0.18 (±0.03 slope) (±0.06 metallicity) (+0.10-0 sampling bias). The random error is mostly due to the given accuracy of the Hipparcos parallaxes and the number of Cepheids in the respective samples. The terms between parentheses indicate the possible systematic uncertainties due to the slope of the Galactic PL-relations, the metallicity corrections, and, in the K-band, due to the limited number of stars.
Recent work by Sandage et al. (1999ApJ...522..250S) indicates that the effect of metallicity towards shorter distances may be smaller in V and I than indicated here. From this, we point out the importance of obtaining NIR photometry for more (nearby) Cepheids, as at the moment NIR photometry is available for only 27% of the total sample. This would eliminate the possible bias due to the limited number of stars, and would reduce the random error estimate from 0.18 to about 0.10 mag. Furthermore, the sensitivity of the DM to reddening, metallicity correction and slope is smallest in the K-band.
# statsmodels.tsa.arima_model.ARIMAResults.forecast

ARIMAResults.forecast(steps=1, exog=None, alpha=0.05)[source]

Out-of-sample forecasts.

Parameters:
• steps (int) – The number of out-of-sample forecasts from the end of the sample.
• exog (ndarray) – If the model is an ARIMAX, you must provide out-of-sample values for the exogenous variables. This should not include the constant. The number of observations in exog must match the value of steps.
• alpha (float) – The confidence intervals for the forecasts are (1 - alpha)%.

Returns:
• forecast (ndarray) – Array of out-of-sample forecasts.
• stderr (ndarray) – Array of the standard errors of the forecasts.
• conf_int (ndarray) – 2d array of the confidence intervals for the forecast.

Notes

Prediction is done in the levels of the original endogenous variable. If you would like prediction of differences in levels use predict.
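For context, here is a minimal usage sketch, assuming one of the older statsmodels releases where ARIMA still lives in statsmodels.tsa.arima_model; the series y and the order (1, 1, 1) are placeholder choices, not part of the documentation:

```python
import numpy as np
from statsmodels.tsa.arima_model import ARIMA  # older statsmodels API

# y is a placeholder univariate time series (random walk for illustration)
y = np.cumsum(np.random.randn(200))

model = ARIMA(y, order=(1, 1, 1))   # (p, d, q) chosen only for illustration
results = model.fit(disp=0)

# forecast() returns three arrays: point forecasts, standard errors,
# and the (1 - alpha) confidence intervals
forecast, stderr, conf_int = results.forecast(steps=5, alpha=0.05)
print(forecast)
print(conf_int)
```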
# NCERT Solutions for Class 10 Maths Chapter 3 Pair of Linear Equations in Two Variables Ex 3.6

These NCERT Solutions for Class 10 Maths Chapter 3 Pair of Linear Equations in Two Variables Ex 3.6 Questions and Answers are prepared by our highly skilled subject experts.

## NCERT Solutions for Class 10 Maths Chapter 3 Pair of Linear Equations in Two Variables Exercise 3.6

Question 1. Solve the following pairs of equations by reducing them to a pair of linear equations:

Solution: Parts (i)-(vii) are solved by substituting new variables (such as u and v) for the reciprocal terms and then applying the cross multiplication method to the resulting pair of linear equations.

(viii) Putting P = $$\frac { 1 }{ 3x+y }$$ and q = $$\frac { 1 }{ 3x-y }$$ in equations (i) and (ii), we get
P + q = $$\frac { 3 }{ 4 }$$ … (iii)
$$\frac { 1 }{ 2 }$$P – $$\frac { 1 }{ 2 }$$q = $$\frac { -1 }{ 8 }$$ … (iv)
By the elimination method: multiplying equation (iv) by 2 gives P – q = $$\frac { -1 }{ 4 }$$, and subtracting this from equation (iii) gives
2q = 1, so q = $$\frac { 1 }{ 2 }$$ and P = $$\frac { 1 }{ 4 }$$
But P = $$\frac { 1 }{ 3x+y }$$ and q = $$\frac { 1 }{ 3x-y }$$
$$\frac { 1 }{ 3x+y }$$ = $$\frac { 1 }{ 4 }$$, so 3x + y = 4 … (v)
$$\frac { 1 }{ 3x-y }$$ = $$\frac { 1 }{ 2 }$$, so 3x – y = 2 … (vi)
By the elimination method: adding equations (v) and (vi) we get
6x = 6, so x = 1, and then y = 4 – 3(1) = 1

Question 2. Formulate the following problems as a pair of equations, and hence find their solutions:
(i) Ritu can row downstream 20 km in 2 hours, and upstream 4 km in 2 hours. Find her speed of rowing in still water and the speed of the current.
(ii) 2 women and 5 men can together finish an embroidery work in 4 days, while 3 women and 6 men can finish it in 3 days. Find the time taken by 1 woman alone to finish the work, and also that taken by 1 man alone.
(iii) Roohi travels 300 km to her home partly by train and partly by bus. She takes 4 hours if she travels 60 km by train and the remaining by bus. If she travels 100 km by train and the remaining by bus, she takes 10 minutes longer. Find the speed of the train and the bus separately.

Solution:
(i) Let Ritu's speed in still water = x km/h and the speed of the current = y km/h.
During downstream, speed = (x + y) km/h; during upstream, speed = (x – y) km/h.
In the first case, 20 km downstream takes 2 hours, so x + y = 10 … (i)
In the second case, 4 km upstream takes 2 hours, so x – y = 2 … (ii)
By the elimination method: adding equations (i) and (ii), we get 2x = 12, so x = 6.
Putting this value in equation (ii), we get – y = 2 – 6, so y = 4.
Then x = 6 and y = 4, where x and y are respectively the speeds (in km/h) of rowing and of the current.

(ii) Let the number of days taken by one woman be n and the number of days taken by one man be m.
According to the question:
$$\frac { 2 }{ n }$$ + $$\frac { 5 }{ m }$$ = $$\frac { 1 }{ 4 }$$ … (i)
$$\frac { 3 }{ n }$$ + $$\frac { 6 }{ m }$$ = $$\frac { 1 }{ 3 }$$ … (ii)
Putting $$\frac { 1 }{ n }$$ = p and $$\frac { 1 }{ m }$$ = q in equations (i) and (ii) we get
2p + 5q = $$\frac { 1 }{ 4 }$$ … (iii)
3p + 6q = $$\frac { 1 }{ 3 }$$ … (iv)
or 8p + 20q – 1 = 0 … (v)
9p + 18q – 1 = 0 … (vi)
Solving (v) and (vi) by the cross multiplication method gives p = $$\frac { 1 }{ 18 }$$ and q = $$\frac { 1 }{ 36 }$$, so one woman alone takes n = 18 days and one man alone takes m = 36 days.

(iii) Let the speed of the train be u km/h and the speed of the bus be v km/h.
In the first case $$\frac { 60 }{ u }$$ + $$\frac { 240 }{ v }$$ = 4 … (i)
In the second case $$\frac { 100 }{ u }$$ + $$\frac { 200 }{ v }$$ = $$\frac { 25 }{ 6 }$$ … (ii)
Putting $$\frac { 1 }{ u }$$ = p and $$\frac { 1 }{ v }$$ = q in equations (i) and (ii) we get
60p + 240q = 4 … (iii)
100p + 200q = $$\frac { 25 }{ 6 }$$ … (iv)
or 60p + 240q – 4 = 0 … (v)
600p + 1200q – 25 = 0 … (vi)
By the elimination method: multiplying equation (v) by 10 we get
600p + 2400q – 40 = 0 … (vii)
Subtracting equation (vi) from equation (vii):
1200q = 15
q = $$\frac { 15 }{ 1200 }$$ = $$\frac { 1 }{ 80 }$$
Putting this value in equation (iv), we get p = $$\frac { 1 }{ 60 }$$
But p = $$\frac { 1 }{ u }$$ and q = $$\frac { 1 }{ v }$$
$$\frac { 1 }{ u }$$ = $$\frac { 1 }{ 60 }$$ and $$\frac { 1 }{ v }$$ = $$\frac { 1 }{ 80 }$$
u = 60 and v = 80, so the speed of the train is 60 km/h and the speed of the bus is 80 km/h.
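As an optional cross-check of the arithmetic in Question 2, here is a small SymPy sketch (the variable names are ours, chosen for illustration) that solves the rowing and the train/bus systems directly:

```python
from sympy import Rational, solve, symbols

# (i) Ritu's rowing: x = speed in still water, y = speed of the current (km/h)
x, y = symbols("x y", positive=True)
print(solve([20 / (x + y) - 2, 4 / (x - y) - 2], [x, y]))   # -> x = 6, y = 4

# (iii) Roohi's trip: u = train speed, v = bus speed (km/h)
u, v = symbols("u v", positive=True)
eqs = [60 / u + 240 / v - 4,
       100 / u + 200 / v - Rational(25, 6)]
print(solve(eqs, [u, v]))                                   # -> u = 60, v = 80
```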
# Computer science

Computer science deals with the theoretical foundations of information and computation, and with practical techniques for their implementation and application.

Computer science or computing science (abbreviated CS) is the study of the theoretical foundations of information and computation and of practical techniques for their implementation and application in computer systems.[1][2] Computer scientists invent algorithmic processes that create, describe, and transform information and formulate suitable abstractions to model complex systems. Computer science has many sub-fields; some, such as computational complexity theory, study the properties of computational problems, while others, such as computer graphics, emphasize the computation of specific results. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describe computations, while computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to humans.

The general public sometimes confuses computer science with careers that deal with computers (such as information technology), or thinks that it relates to their own experience of computers, which typically involves activities such as gaming, web-browsing, and word-processing. However, the focus of computer science is more on understanding the properties of the programs used to implement software such as games and web-browsers, and using that understanding to create new programs or improve existing ones.

## History

The early foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity. Wilhelm Schickard designed the first mechanical calculator in 1623, but did not complete its construction.[4] Blaise Pascal designed and constructed the first working mechanical calculator, the Pascaline, in 1642. Charles Babbage designed a difference engine and then a general-purpose Analytical Engine in Victorian times,[5] for which Ada Lovelace wrote a manual. Because of this work she is regarded today as the world's first programmer.[6] Around 1900, punched card machines were introduced. During the 1940s, as newer and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[7] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.[10] Since practical computers became available, many applications of computing have become distinct areas of study in their own right.

### Major achievements

The German military used the Enigma machine (shown here) during World War II for communication they thought to be secret.
The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[12]

Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society. These include:

• The start of the "digital revolution," which includes the current Information Age and the Internet.[13]
• A formal definition of computation and computability, and proof that there are computationally unsolvable and intractable problems.[14]
• The concept of a programming language, a tool for the precise expression of methodological information at various levels of abstraction.[15]
• In cryptography, breaking the Enigma machine was an important factor contributing to the Allied victory in World War II.[12]
• Scientific computing enabled practical evaluation of processes and situations of great complexity, as well as experimentation entirely by software. It also enabled advanced study of the mind, and mapping of the human genome became possible with the Human Genome Project.[13] Distributed computing projects such as Folding@home explore protein folding.
• Algorithmic trading has increased the efficiency and liquidity of financial markets by using artificial intelligence, machine learning, and other statistical and numerical techniques on a large scale.[16]
• Image synthesis, including video by computing individual video frames.[citation needed]
• Human language processing, including practical speech-to-text conversion and automated translation of languages.[citation needed]
• Simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.

### Computer security and cryptography

Computer security is a branch of computer technology whose objective includes protection of information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding (encryption) and deciphering (decryption) information. Modern cryptography is largely related to computer science, for many encryption and decryption algorithms are based on their computational complexity.

#### Computational science

Computational science (or scientific computing) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyse and solve scientific problems. In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines.

• Numerical analysis
• Computational physics
• Computational chemistry
• Bioinformatics

#### Information science

• Information Retrieval
• Knowledge Representation
• Natural Language Processing
• Human–computer interaction

### Computer architecture and engineering

Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system.
It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.

• Digital logic
• Microarchitecture
• Multiprocessing
• Operating systems
• Computer networks
• Databases
• Computer security
• Ubiquitous computing
• Systems architecture
• Compiler design
• Programming languages

#### Computer graphics and visualization

Computer graphics is the study of digital visual content, and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.

### Artificial intelligence

This branch of computer science aims to create synthetic systems which solve computational problems, reason, and/or communicate like animals and humans do. This theoretical and applied subfield requires a very rigorous and integrated expertise in multiple subject areas such as applied mathematics, logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence, which can be used to advance the field of intelligence research or be applied to other subject areas which require computational understanding and modelling, such as in finance or the physical sciences. This field started in full earnest when Alan Turing, the pioneer of computer science and artificial intelligence, proposed the Turing Test for the purpose of answering the ultimate question: "Can computers think?"

• Machine Learning
• Computer vision
• Image Processing
• Pattern Recognition
• Cognitive Science
• Data Mining
• Evolutionary Computation
• Information Retrieval
• Knowledge Representation
• Natural Language Processing
• Robotics

### Applied computer science

Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[22] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[23] The term computics has also been suggested.[24] In continental Europe, names such as informatique (French), Informatik (German) or informatika (Slavic languages), derived from information and possibly mathematics or automatic, are more common than names derived from computer/computation. Renowned computer scientist Edsger Dijkstra once stated: "Computer science is no more about computers than astronomy is about telescopes."
The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research has also often crossed into other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, statistics, and economics.

Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[8] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.

The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[25]

The academic, political, and funding aspects of computer science tend to depend on whether a department was formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.

### Information and coding theory

Information theory is related to the quantification of information. It was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error-correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.

#### Algorithms and data structures

• Analysis of algorithms (e.g., O(n²) behaviour)
• Algorithms
• Data structures
• Computational geometry

#### Programming language theory

Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering and linguistics. It is a well-recognized branch of computer science, and an active research area, with results published in numerous journals dedicated to PLT, as well as in general computer science and engineering publications. $\Gamma\vdash x: \text{Int}$

• Type theory
• Compiler design
• Programming languages
# Database configuration

Java-tron data storage supports LevelDB or RocksDB, and LevelDB is used by default. You can also choose RocksDB, which provides lots of configuration parameters, allowing nodes to be tuned according to their own machine configuration. With RocksDB, the node database occupies less disk space than with LevelDB. At the same time, RocksDB supports data backup during runtime, and the backup only takes a few seconds. The following describes how to set the storage engine of the Java-tron node to RocksDB, and how to perform data conversion between LevelDB and RocksDB.

## RocksDB

### Configuration

To use RocksDB as the data storage engine, set db.engine to "ROCKSDB".

Note: RocksDB only supports db.version=2; it does not support db.version=1.

RocksDB also supports a number of optimization parameters.

### Use RocksDB's data backup function

If you choose RocksDB as the data storage engine, you can use its data backup function while the node is running.

Note: a FullNode can use the data backup function. In order not to affect a SuperNode's block-producing performance, a SuperNode does not support the backup service, but a SuperNode's backup service node can use this function.

### Convert LevelDB data to RocksDB data

The data storage structures of LevelDB and RocksDB are not compatible; please make sure the node uses the same type of data engine all the time. We provide a data conversion script which can convert LevelDB data to RocksDB data.

Usage:

> cd /path/to/java-tron/source-code
> ./gradlew build # build the source code
> java -jar build/libs/DBConvert.jar # run data conversion command

Note: If the node's data storage directory is self-defined, before running DBConvert.jar you need to add the following parameters:

• src_db_path: specify the LevelDB source directory, default output-directory/database
• dst_db_path: specify the RocksDB destination directory, default output-directory-dst/database

For example, if you run the node like this:

> nohup java -jar FullNode.jar -d your_database_dir </dev/null &>/dev/null &

then you should run DBConvert.jar this way:

> java -jar build/libs/DBConvert.jar your_database_dir/database output-directory-dst/database

Note: You have to stop the node before running the data conversion script. If you do not want to keep the node offline for too long, after the node is shut down you can copy LevelDB's output-directory to a new directory and then restart the node. Run DBConvert.jar from the parent directory of the new directory, and specify the parameters src_db_path and dst_db_path. Example:

> cp -rf output-directory /tmp/output-directory
> cd /tmp
> java -jar DBConvert.jar output-directory/database output-directory-dst/database

The whole data conversion process may take about 10 hours.

### RocksDB vs LevelDB

You can refer to the following document for detailed information: RocksDB vs LevelDB
# Do you know how to use USART on STM32?

I've been doing USART on STM32 and got garbled characters in PuTTY. Actually I want to display: printf("* Thank you for using the board"); Do you have any idea why?

- Did you set identical configurations (data bits, parity, stop bits, bauds) at both ends? – Telaclavo Apr 17 '12 at 12:21
- Fixed already, I forgot to put in the MAX232 converter, thanks a lot. – Rick Ant Apr 18 '12 at 13:14
- If you fixed it by adding a MAX232 converter put that as your answer and accept it. – mjh2007 Apr 18 '12 at 17:32

This could be a whole host of things. The best thing to do in troubleshooting is to avoid or clarify assumptions:

• Correct baud rate? (too high?) Check both sides.
• Incorrect COM port settings (start/stop/data/parity bits). Check both sides. Don't assume values, like writing 1 in the stop bit register means 1 stop bit. On the NXP LPC2129 (ARM7) this means 2 stop bits. Once I was stuck on that for hours, figuring out why I couldn't send more than 1 character at a time.
• Check timing on a scope or logic analyser (at 9600 baud a short pulse should take 1/9600 second). If you're using high baud rates and an 'odd crystal' (20 MHz at 115k2 or higher), you may need to set up a fractional baud rate divider to more closely match baud rates.
• Rerun calculations from the datasheet for the baud rate. Look up, measure, and clarify all clock speeds and other configurable dividers.

Edit: last but not least: is your hardware OK? Shorts, missing solder points, components, etc.

- Fixed already, I forgot to put in the MAX232 converter, thanks a lot. – Rick Ant Apr 18 '12 at 13:14

I agree... decipher attempt.. a lot more characters suggests the PuTTY speed is too fast. A lot of 5's. '5' ascii = 53d = 35h = 00110101; now read from right to left, LSB first.. hmmm. 'ó' ascii = 162d = A2h = 10100010. Failed... read the UART basics at http://en.wikipedia.org/wiki/Universal_asynchronous_receiver/transmitter then fix/match the settings. A really neat trick I used to do in the 70's was to make a duplex or Y cable for the serial port and record data or scope it while connected to the device. Autobaud receiver units were really nice back then, until I had to design one.

You are probably at the wrong baud rate in putty.

- Fixed already, I forgot to put in the MAX232 converter.
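As a rough illustration of the timing checks suggested above (our own sketch, not part of the original thread), this Python snippet computes the bit period at a few common baud rates and the drift accumulated over one character frame for a given clock error:

```python
# Rough sanity check for UART timing: one bit lasts 1/baud seconds, and a
# clock mismatch of more than a few percent between transmitter and receiver
# will corrupt characters (start bit + 8 data bits + stop bit = 10 bit times).

def bit_time_us(baud):
    return 1e6 / baud

for baud in (9600, 19200, 115200):
    print(f"{baud:>7} baud -> {bit_time_us(baud):7.2f} us per bit")

# Accumulated drift over a 10-bit frame for a 3% clock error at 9600 baud:
baud, error = 9600, 0.03
drift = 10 * bit_time_us(baud) * error
print(f"drift over one frame: {drift:.1f} us "
      f"({drift / bit_time_us(baud):.1f} bit times)")
```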
# Del

In vector calculus, del is a vector differential operator represented by the symbol $\nabla$. The symbol was introduced by William Rowan Hamilton; its name, nabla, comes from a Hebrew stringed instrument of similar shape, and so the operator is also called the nabla operator. It is a shorthand for the vector:

$\begin{pmatrix} {\partial / \partial x} \\ {\partial / \partial y} \\ {\partial / \partial z} \end{pmatrix}$

The operator can be applied to scalar fields ($\phi$) or vector fields ($\mathbf{F}$), to give:

• Gradient: $\nabla \phi$
• Divergence: $\nabla \cdot \mathbf{F}$
• Curl: $\nabla \times \mathbf{F}$
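For instance, taking the simple example fields $\phi = x^2 y + z$ and $\mathbf{F} = (x, y, z)$ (chosen here purely for illustration), the three operations give:

$\nabla \phi = \begin{pmatrix} 2xy \\ x^2 \\ 1 \end{pmatrix}, \qquad \nabla \cdot \mathbf{F} = \frac{\partial x}{\partial x} + \frac{\partial y}{\partial y} + \frac{\partial z}{\partial z} = 3, \qquad \nabla \times \mathbf{F} = \mathbf{0}.$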
Categories ## Platypus Paper, Rewritten This is a completely rewritten version of a biology paper titled, “A Model for the Evolution of the Mammalian T-cell Receptor α/δ and μ Loci Based on Evidence from the Duckbill Platypus.” It is meant to be a demonstration and a proof of concept for JAWWS, my idea of a science journal that focusses on readability. You can read the original version on the Molecular Biology and Evolution journal’s website, or here on this blog, with annotations by me. Why did I choose to rewrite this paper? I wish I had a more principled answer, but the truth is that I simply went to ResearchHub, a website where scientists share papers between themselves and upvote the most interesting one, went to the Evolutionary Biology section (because that used to be my field), and picked the first paper that was open-access and seemed fit for my purposes. In other words, a paper that seemed like it could be improved a lot because it seemed more difficult to understand than was warranted. Only later did I realize it was a paper from 2012, so not that recent. I don’t think that matters too much for now since it’s just a proof of concept. Nor does the topic, i.e. molecular evolution in the vertebrate immune system. The actual journal will need to pick papers in a more principled way, of course. What did my rewrite entail? The best way to know is to read (not necessarily closely) the original and rewritten versions. But here’s a sample of my “interventions”: • I put almost all citations in collapsible footnotes. • I cut up most long paragraphs, including the abstract. • I added many context sentences, including at the beginning of sections, to give a better sense of why we’re reading this. One example is the first sentence of the introduction: “How did the immune system of jawed vertebrates evolve?” • I reworked some of the paper’s structure. One major change: I put the major contribution of the study, that is, the new evolutionary model, in its own section after the introduction. This way, it is not buried deep in the discussion; readers can start with it and dig into the rest only if they want more details. I also reordered the methods so that they would match the ordering in the Results. • I added several subheadings to the sections that didn’t already have subsections (Introduction, and Results and Discussion) • I tried my best to avoid abbreviations. One difficulty is that some of them are probably very recognizable by people who  know immunology, and not by me. So I left some in, while trying to make sure they don’t hamper readability. The major example is “TCR”, which means T-cell receptor and was used a lot. It’s still used in my version, but far less often. • I removed some jargon words. E.g. “proximal” became “closest”. • I formatted some information in point form, such as the T-cell lineages or the protocols in the Methods. • I added some text formatting to guide the reader. For instance, bold font for groups of animals in the introduction, and color in the text to match colored elements in figures. • I changed one figure by reorganizing its parts to make it clearer (fig. 2, which used to be fig. 5). A lot of additional clarity could potentially be gained from editing the figures, but that’s a lot of work, so I didn’t press it further. • I fixed a number of typos and grammatical errors. Mistakes like this are not a big problem, but there were enough that I assume little editing work was done on this paper. 
On footnotes: I’m using two different kinds of collapsible footnotes. Those in the usual style of the blog, like this one,1Here you would usually read a citation in short form such as “Rast et al. 1997.” Head to the original paper webpage to see the reference list in full. contain the citations included in the original paper.  Footnotes with brackets like this[1]This is an example comment. are for comments on the rewriting process and are also shown at the bottom of the paper. I suggest you don’t click on the former, unless you want to see a reference, and hover onto the latter to read my comments. Overall, my rewrite increased the length of the abstract from 267 to 286 words, and of the rest of the paper from about 6000 to about 6400 words. I consider this acceptable. I will publish some more thoughts on the rewriting process later. ## Abstract Goal: This study presents a new model for the evolution of part of the vertebrate immune system: the genes encoding the T-cell receptor (TCR) δ chains. Background: T lymphocytes have to recognize specific antigen for the adaptive immune response to work in vertebrates. They perform this using a somatically diversified T-cell receptor. All jawed vertebrates use four T-cell receptor chains called α, β, γ, and δ, but some lineages have nonconventional receptor chains: monotremes and marsupials encode a fifth one, called TCRµ. Its function is unknown, but it is somatically diversified like the conventional chains. Its origins are also unclear. It appears to be distantly related to the TCRδ chain, for which recent evidence from birds and frogs has provided new information that was not available from humans or other placental (eutherian) mammals. Experiment: We analyzed the genes encoding the δ chains in the platypus. This revealed the presence of a highly divergent variable (V) gene, indistinguishable from immunoglobulin heavy chain V genes (VH), and related to V genes used in the µ chain. This gene is expressed as part of TCRδ repertoire, so it is designated VHδ. Conclusions: The VHδ gene is similar to what has been found in frogs and birds, but it is the first time such a gene has been found in a mammal. This provides a critical link in reconstructing the evolutionary history of TCRµ. The current structure of the δ and µ genes in tetrapods suggests ancient and possibly recurring translocations of gene segments between the δ and immunoglobulin heavy genes, as well as translocations of δ genes out of the TCRα/δ locus early in mammals, creating the TCRµ locus. We present a detailed model of this evolutionary history.[2]Major changes to the abstract: I split it in four paragraphs with section titles (this is common in some journals; it should be common in most journals). I also added a section at the beginning to … Continue reading ## Introduction How did the immune system of jawed vertebrates evolve? In this study, we use genomic evidence from the platypus to propose a model for the evolution of a specific component of the vertebrate immune system: the receptors on the surface of T lymphocytes. As a reminder, T lymphocytes (or T cells) are white blood cells that play a critical role in the adaptive immune system. They can be classified into two main lineages based on the receptor they use:2Rast et al. 1997; reviewed in Davis and Chein 2008 1. αβT cell lineage: The receptor is composed of a heterodimer of α and β chains. 
Most circulating human T cells are αβT cells, including familiar subsets such as CD4+ helper T cells and regulatory T cells, CD8+ cytotoxic T cells, and natural killer T (NKT) cells. 2. γδT cell lineage: The receptor is composed of γ and δ chains. The function of these cells is less well defined. They have been associated with a broad range of immune responses including tumor surveillance, innate responses to pathogens and stress, and wound healing.3Hayday 2009 γδT cells are found primarily in epithelial tissues and form a lower percentage of circulating lymphocytes in some species. αβ and γδ T cells also differ in the way they interact with antigen. The receptors of αβT cells are “restricted” relative to the major histocompatibility complex (MHC), meaning that that they bind antigenic epitopes, such as peptide fragments, bound to, or “presented” by, molecules encoded in the MHC. In contrast, γδT receptors have been found to bind antigens directly in the absence of MHC, as well as self-ligands that are often MHC-related molecules.4Sciammas et al. 1994; Hayday 2009 All gnathostomes (jawed vertebrates) have αβ and γδ T cells. As we will see below, marsupial and monotreme mammals have an additional type of T-cell receptor, denoted with the letter µ. The platypus, a monotreme, further has a non-conventional receptor with δ chains, which is also present in birds and amphibians. Before presenting our evolutionary model, let’s review these types of T-cell receptors and their structure. ### Structure and Genes of Conventional T-Cell Receptors The chains of conventional T-cell receptors are composed of two extracellular domains, both members of the immunoglobulin domain superfamily of cell surface proteins (fig. 1):5reviewed in Davis and Chein 2008 • The closest domain to the cellular membrane is called C for constant.[3]Since this abbreviation comes up a lot, I put it first, with its meaning in parentheses. The C domain is largely invariant among T-cell clones expressing the same class of the receptor chain. • The domain farthest from the cellular membrane is called V for variable. It is the region that contacts antigen and MHC. Similar to antibodies, the individual clonal diversity in the V domain is generated by somatic DNA recombination.6Tonegawa 1983 [4]I didn’t change much in the figure’s caption, but it seemed pretty trivial to add color to the text to facilitate looking up what the colors mean. While C domains are usually encoded by a single, intact exon, V domains are assembled somatically from germ-line segments in developing T cells. These segments are genes called V (again for variable), D (for diversity), and J (for joining). The assembly process depends on the enzymes encoded by two genes, the recombination activating genes (RAG)-1 and RAG-2.7Yancopoulos et al. 1986; Schatz et al. 1989 The various T-cell receptor chains differ in how their V domains are assembled. β and δ chains are assembled from all three types of gene segments, whereas α and γ chains use only V and J. The different combinations of two or three segments, selected from a large repertoire of germ-line gene segments, along with variation at the junctions due to the addition and deletion of nucleotides during recombination, contribute to a vast diversity of T-cell receptors. It is this diversity that creates the individual antigen specificity of T-cell clones.[5]This is an example of two sentences taken verbatim from the original paper. Not all of it was poorly written! 
These genes are highly conserved among species in both their genomic sequence and their organization.8Rast et al. 1997; Parra et al. 2008, 2012; Chen et al. 2009 In all tetrapods examined, the β and γ chains are each encoded at multiple separate loci, whereas the genes encoding the α and δ chains are nested at a single locus, called the TCRα/δ locus.9Chien et al. 1987; Satyanarayana et al. 1988; reviewed in Davis and Chein 2008 The V domains of α and δ chains can use a common pool of V gene segments, but distinct D, J, and C genes. The recombination of V, J and optionally D genes, referred to as V(D)J recombination, and mediated by RAG, is also known to generate the diversity of antibodies produced by another type of lymphocyte, the B cells.[6]It took me forever to rewrite this part. The original sentence was, “Diversity in antibodies produced by B cells is also generated by RAG-mediated V(D)J recombination and the TCR and Ig genes … Continue reading10Flajnik and Kasahara 2010; Litman et al. 2010 ### Non-Conventional Receptors Across Vertebrates T-cell receptor and immunoglobulin genes clearly share a common origin in the jawed vertebrates.11Flajnik and Kasahara 2010; Litman et al. 2010 Usually, the V, D, J, and C coding regions are readily distinguishable from immunoglobulin, at least for conventional T-cell receptors, owing to divergence over the past 400 million years. Recently, however, the discovery of non-conventional isoforms of the δ chain has blurred the boundary between them. These non-conventional forms use V genes that appear indistinguishable from the immunoglobulin heavy chain V.12Parra et al. 2010, 2012 Such V genes have been named VHδ.[7]The original combined this sentence and the next, even though they’re about quite distinct ideas: the name of the genes, and the species where they’re found. VHδ genes have been found in both amphibians and birds (see the rightmost part of fig. 1).[8]Why not indicate the part of the figure that is relevant? Whenever you can, provide reader guidance! In the frog Xenopus tropicalis, as well as in a passerine bird, the zebra finch Taeniopygia guttata, the VHδ genes coexist with the conventional Vα and Vδ genes at the TCRα/δ locus.13Parra et al. 2010, 2012 In galliform birds, such as the chicken Gallus gallus, they are instead located at a second TCRδ locus that is unlinked to the conventional TCRα/δ.14Parra et al. 2012 VHδ are the only type of V gene segment present at the second locus and, although closely related to antibody VH genes, the VHδ appear to be used exclusively in δ chains. This is true as well for frogs where the TCRα/δ and IgH (immunoglobulin heavy chain) loci are tightly linked.15Parra et al. 2010 In mammals, a TCRα/δ locus has been characterized in several eutherian species and at least one marsupial, the opossum Monodelphis domestica. VHδ genes have not been found in mammals to date.16Satyanarayana et al.1988; Wang et al. 1994; Parra et al. 2008 However, marsupials do have an additional locus, unlinked to TCRα/δ, that uses antibody-related V genes. This fifth chain is called µ, and the receptor that uses it is referred to as TCRµ. The µ chain is related to the δ chain, but it diverges from it in both sequence and structure.17Parra et al. 2007, 2008 It has also been found in a monotreme, the platypus.[9]The authors like to use “duckbill platypus,” but there’s only one species of platypus, so I took that word out. 
The platypus and marsupial TCRµ genes are clearly orthologous, which is consistent with the idea that the µ chain is ancient in mammals, but has been lost in the eutherians.18Parra et al. 2008; Wang et al. 2011 TCRµ chains use their own unique set of V genes, called Vµ.19Parra et al. 2007; Wang et al. 2011 So far, no evidence has been found of V(D)J recombination between Vµ genes and genes from other immunoglobulin or T-cell receptor loci.[10]Another horrible sentence from the original, recorded for posterity: “Trans-locus V(D)J recombination of V genes from other Ig and TCR loci with TCRµ genes has not been found.” That … Continue reading  Neither have TCRµ homologues been found in non-mammals.20Parra et al. 2008 The structure of TCRµ chains is atypical. They contain three, rather than two, extra-cellular domains from the immunoglobulin superfamily;[11]The abbreviation IgSF was used in the paper, with no explanation. I assume the people who would read this paper tend to know what that means, but still. this is due to an extra N-terminal V domain (see fig. 1).21Parra et al. 2007; Wang et al. 2011 Both V domains are encoded by a unique set of Vµ genes and are more related to immunoglobulin heavy chain V than to conventional T-cell receptor V domains. The N-terminal one is diverse and encoded by genes that undergo somatic V(D)J recombination, while the second V domain (referred to as “supporting”) has little or no diversity. The supporting V domain differs between marsupials and monotremes. In marsupials, it is encoded by a germ-line joined, or pre-assembled, V exon that is invariant.22Parra et al. 2007 In the platypus, it is encoded by gene segments requiring somatic DNA recombination, but with limited diversity due in part to the lack of D segments.23Wang et al. 2011 Sharks and other cartilaginous fish also have a T-cell receptor chain that is structurally similar to TCRµ (see middle part of fig. 1).24Criscitiello et al. 2006; Flajnik et al. 2011 The resulting receptor is called NAR-TCR. Like the receptor of marsupials and monotremes, it contains three extracellular domains, but its N-terminal V domain is related to chains used by IgNAR (immunoglobulin new antigen receptor) antibodies, a type of antibody found only in sharks.25Greenberg et al. 1995 In both the TCRµ of marsupials and monotremes and the NAR-TCR of cartilaginous fishes, the current working model is that the N-terminal V domain is unpaired and acts as a single, antigen binding domain. This would be analogous to the V domains of light-chainless antibodies found in sharks and camelids.26Flajnik et al. 2011; Wang et al. 2011 How did the µ chain arise? Phylogenetic analyses support an origin after the avian–mammalian split.27Parra et al. 2007; Wang et al. 2011 Previously, we hypothesized that it originated as a recombination between ancestral immunoglobulin heavy and TCRδ-like loci,28Parra et al. 2008 but this hypothesis is problematic for several reasons. One challenge is the apparent genomic stability and ancient conserved synteny (order of genes on the chromosome) in the region surrounding the TCRα/δ locus; this region has appeared to remain stable over at least the past 350 million years of tetrapod evolution.29Parra et al. 2008, 2010 As a result, we need a new model for the evolution of TCRµ and the TCRα/δ locus. Here we present the best current model, supported by an analysis of the platypus genome—the first to examine a monotreme TCRα/δ locus in detail—as described in the methods and results sections below. 
## The Model

Our model can be summarized in six stages (fig. 2).[12]Major change from me here. This section was moved here from the discussion, because it is the core and most interesting part of the paper. It is now its own first-level section alongside … Continue reading

1. Duplication of the cluster. This occurred early in the evolution of tetrapods, or earlier. The duplication resulted in two copies of the C gene of the δ chain, each with its own set of D and J segments.
2. Insertion of VH. Recall that VH refers to the variable chain of immunoglobulin heavy (IgH). One or more genes were translocated from the IgH locus and inserted into the TCRα/δ locus, most likely to a location between the existing Vα/δ genes and the 5′-proximal cluster. This is the configuration found today in the zebra finch genome.30Parra et al. 2012
3. Inversion of the VHδ cluster in amphibians. This cluster of genes was translocated and inverted, and the number of VHδ genes increased. The frog X. tropicalis currently has the greatest number of VHδ genes, where they make up the majority of V genes available in the germ-line for T-cell receptor δ chains.31Parra et al. 2010
4. Translocation of the VHδ cluster to another site in galliforms. In chickens and turkeys, the same cluster that was inverted in amphibians instead moved out of the TCRα/δ locus and is now found on another chromosome. There are no or genes at this second TCRδ locus in chickens, and only a single gene remains at the conventional TCRα/δ locus.32Parra et al. 2012
5. Translocation of the VHδ cluster to another site (TCRµ) in mammals. A similar process to step 4 in galliforms happened in a common ancestor of mammals, giving rise to TCRµ. Internal duplications of the VH, D, and J genes gave rise to the current [(VDJ) − (VDJ) − C] organization that can encode chains with double V domains.33Parra et al. 2007, Wang et al. 2011
6. Further changes in the three mammalian lineages.
• In the platypus, the second VDJ cluster, which encodes the supporting (non-terminal) V chain, lost its D segments and generates V domains with short complementarity-determining region-3 (CDR3) encoded by direct V to J recombination.34Wang et al. 2011
• Meanwhile, in therians (marsupials and placentals), the VHδ gene disappeared from the TCRα/δ locus (not shown in fig. 2).35Parra et al. 2008
• Then, in placentals, the TCRµ locus was also lost.36Parra et al. 2008
• The marsupials kept TCRµ, but the second set of V and J segments (which encode the supporting V domain) was replaced with a germ-line joined V gene (fused yellow-green segment in fig. 2), probably due to germ-line V(D)J recombination and retro-transposition.37Parra et al. 2007, 2008
• In both monotremes and marsupials, the whole cluster from VH to C appears to have undergone additional tandem duplication, as it exists in multiple copies in the opossum and probably in the platypus.38Parra et al. 2007, 2008; Wang et al. 2011

The rest of the paper explains the analyses that gave us the evidence to build this model. Additional discussion of the model is provided in the last section.

## Materials and Methods

There are three parts to the analyses and experiments that allowed us to gather evidence and build our evolutionary model. First, find the TCRα/δ locus in platypus genome data. Second, perform phylogenetic analyses with the relevant genes. Third, confirm from a live specimen that the platypus expresses VHδ.[13]This new paragraph is important!
It gives context to the experiments below and it guides the reader for the entire section. Also notice this is a case of an enumeration without point form. I like … Continue reading ### 1) Identification and Annotation of the Platypus TCRα/δ Locus We analyzed the genome of the platypus, Ornithorhynchus anatinus, using the assembly version 5.0.1 (http://www.ncbi.nlm.nih.gov/genome/guide/platypus/). We used two genome alignment tools: whole-genome BLAST from NCBI (www.ncbi.nlm.nih.gov/) and BLAST/BLAT from Ensembl (www.ensembl.org). We located the V and J gene segments by looking for similarity with the corresponding segments of other species, and by identifying flanking conserved recombination signal sequences. (RSS). We annotated V segments in the 5′ to 3′ direction as either Vα or Vδ, followed by the family number and the gene segment number if there were more than one in the family. For example, Vα15.7 is the seventh Vα gene in family 15. As for the D segments, we identified them from cDNA clones using VHδ, using complementarity-determining region-3 (CDR3) sequences that represent the V-D-J junctions. We labeled the platypus T-cell receptor gene segments according to the IMGT nomenclature (http://www.imgt.org/). We provide the location for the TCRα/δ genes of the platypus genome version 5.0.1 in supplementary table S1, available online. ### 2) Phylogenetic Analyses We used BioEdit39Hall 1999 as well as the accessory application ClustalX40Thompson et al. 1997 to align the nucleotide sequences of the V genes regions, from the framework region FR1 to FR3, including the complementarity-determining regions CDR1 and CDR2. We established the codon position of the alignments using amino acid sequences.41Hall 1999 When necessary, we corrected the alignments through visual inspection. We then analyzed them with MEGA Software.42Kumar et al. 2004 We generated phylogenetic trees using two methods: Neighbor Joining (NJ) with uncorrected nucleotide differences (p-distance), and Minimum Evolution distances. We evaluated support for the generated trees using bootstrap values from 1000 replicates. Supplementary table S2 contains the GenBank accession numbers for the sequences used in tree construction.[14]In the original paper, this section comes after the Confirmation of Expression section below, but in the results section, the phylogenetic results are discussed first. I don’t know if there was … Continue reading ### 3) Confirmation of Expression of Platypus VHδ As described with more detail in the Results and Discussion section below, the annotation step allowed us to find an atypical VHδ gene in the platypus genome. To confirm that it was not an artifact of the genome assembly process, we looked at the expression of this gene in a live specimen, a male platypus from the Upper Barnard River in New South Wales, Australia. The platypus was collected under the same permits as in Warren et al. 2008. We performed reverse transcription PCR (RT-PCR) on the RNA from the spleen of this New South Wales specimen. As a second point of comparison, we also used a previously described platypus spleen cDNA library that was constructed from RNA extracted from a Tasmanian animal.43Vernersson et al. 
2002 The protocols and products used at every step are as follows: • cDNA synthesis: Invitrogen Superscript III-first strand synthesis kit, using the manufacturer’s recommended protocol44Invitrogen, Carlsbad, CA, USA • PCR amplification: we used the QIAGEN HotStar HiFidelity Polymerase Kit45BD Biosciences, CLONTECH Laboratories, Palo Alto, CA, USA in total volume of 20 µl containing: • 1× Hotstar Hifi PCR Buffer (containing 0.3 mM dNTPs) • 1µM of primers: we identified these from the platypus genome assembly step.46Warren et al. 2008 We targeted T-cell receptor δ transcripts with two primers, one for VHδ and one for Cδ: • 5′-GTACCGCCAACCACCAGGGAAAG-3′ for VHδ • 5′-CAGTTCACTGCTCCATCGCTTTCA-3′ for Cδ • 1.25U Hotstar Hifidelity DNA polymerase • PCR product cloning: TopoTA cloning® kit 47Invitrogen • Sequencing: BigDye terminator cycle sequencing kit version 348Applied Biosystems, Foster City, CA, USA according to the manufacturer recommendations. • Analysis of sequencing reactions: ABI Prism 3100 DNA automated sequences.49PerkinElmer Life and Analytical Sciences, Wellesley, MA, USA • Chromatogram analysis: Sequencher 4.9 software50Gene Codes Corporation, Ann Arbor, MI, USA We archived the sequence on GenBank under the accession numbers JQ664690–JQ664710. ## Results and Discussion ### Results of the TCRα/δ Locus Identification in the Platypus Here are the results of our analysis of the platypus genome from part 1 of the Materials and Methods section, which allowed us to identify the TCRα/δ locus and annotate its V, D, J and C gene segments, as well as the exons. Refer to fig. 3 below for the annotation map. Most of the locus is present on a single scaffold. The remainder is on a shorter contig. On either sides of the locus, we find the genes SALL2, DAD1, and several olfactory receptor genes (OR). All of these genes share conserved synteny with the TCRα/δ locus in amphibians, birds, and mammals.51Parra et al. 2008, 2010, 2012 The platypus locus has many typical features common to TCRα/δ loci in other tetrapods.52Satyanarayana et al. 1988; Wang et al. 1994; Parra et al. 2008, 2010, 2012 Two C region genes are present: a Cα (light blue in fig. 3) at the 3′ end of the locus, and a Cδ (dark blue) oriented 5′ of the Jα genes. These Jα genes occur in a large number (32) of fragments (in green) located between Cδ and Cα. A large array of Jα genes like this is believed to facilitate secondary Vα to Jα rearrangements in developing αβT cells if the primary rearrangements are nonproductive or need replacement.53Hawwari and Krangel 2007 Primary TCRα V–J rearrangements generally use Jα segments towards the 5′-end of the array and can progressively use downstream Jα in subsequent rearrangements. There is also a single Vδ gene (the last red segment in fig. 3) in reverse transcriptional orientation between the platypus Cδ gene and the Jα array that is conserved in mammalian TCRα/δ both in location and orientation.54Parra et al. 2008 There are 99 conventional T-cell receptor V gene segments in the platypus TCRα/δ locus (red in fig. 3). The vast majority, 89, share nucleotide identity with Vα in other species; the other 10 share identity with Vδ genes. The Vδ genes are clustered towards the 3′-end of the locus. Based on nucleotide identity shared among the platypus V genes, they can be classified into 17 different Vα families and two different Vδ families, based on the criteria of a V family sharing >80% nucleotide identity (the family and segment numbers are annotated in fig. 3). 
This is a typical level of complexity for mammalian Vα and Vδ genes.55Giudicelli et al. 2005; Parra et al. 2008 Also present were two Dδ (orange) and seven Jδ (green) gene segments oriented upstream of the Cδ. All gene segments were flanked by canonical recombination signal sequences (RSS), which are the recognition substrate of the RAG recombinase. The D segments were asymmetrically flanked by an RSS containing a 12 bp spacer on the 5′-side and a 23 bp spacer on the 3′-side, as has been shown previously for T-cell receptor D gene segments in other species.56Carroll et al. 1993; Parra et al. 2007, 2010

In summary, the overall content and organization of the platypus TCRα/δ locus appeared fairly generic, with one exception. This atypical feature of the platypus locus is an additional V gene that shares greater identity with antibody VH genes than with T-cell receptor V genes. Among V genes, this segment is the closest to the D and J genes (see the yellow segment in fig. 3). We tentatively designated it as VHδ.

### VHδ Phylogenetics

VHδ genes are, by definition, V genes that are indistinguishable from immunoglobulin heavy V (Ig VH) genes, but used in encoding T-cell receptor δ chains. Recall from the introduction[15]Yes, you are allowed to make links between the sections of your paper like this! that they have previously been found only in the genomes of birds and frogs.57Parra et al. 2008, 2010, 2012 To put the platypus VHδ gene in context, let us examine the phylogeny of VH genes. In mammals and other tetrapods, VH genes have been shown to cluster into three ancient clans (shown in fig. 4). Individual species differ in the presence of one or more of these clans in their germ-line immunoglobulin heavy locus.58Tutter and Riblet 1989; Ota and Nei 1994 For example, humans, mice, echidnas, and frogs have VH genes from all three clans,59Schwager et al. 1989; Ota and Nei 1994; Belov and Hellman 2003 whereas rabbits, opossums, and chickens have only a single clan.60McCormack et al. 1991; Butler 1997; Johansson et al. 2002; Baker et al. 2005

Our phylogenetic analyses showed that the platypus VHδ was most related to the platypus Vµ genes found at the TCRµ locus (see the boxed and bolded parts of fig. 4). Platypus VHδ, however, shares only 51–61% nucleotide identity (average 56.6%) with the platypus Vµ genes. Both the platypus Vµ and VHδ clustered within clan III.61Wang et al. 2011 This is noteworthy since VH genes in the platypus IgH locus are also from clan III and, in general, clan III is the most ubiquitous and conserved lineage of VH.62Johansson et al. 2002; Tutter and Riblet 1989 Although clearly related to platypus VH, the VHδ gene shares only 34–65% nucleotide identity (average 56.9%) with the “authentic”[16]I’m not sure about this but the original phrase was bona fide, which I had to look up. Maybe “authentic” between quotes isn’t the best translation, but a translation is better … Continue reading VH used in antibody heavy chains in this species.

### Results of the Confirmation of VHδ Expression

It was necessary to rule out the possibility that the VHδ gene present in the platypus TCRα/δ locus was an artifact of the genome assembly process. This is why we performed a “wet lab” verification step on cDNA synthesized from the splenic RNA of two platypuses, one from New South Wales and one from Tasmania (see Materials and Methods). We performed RT-PCR with primers that were specific for VHδ and Cδ. We were successful in amplifying the PCR products of the NSW specimen, but not for the Tasmanian one.
One piece of supporting evidence for the expression of VHδ would be the demonstration that it is recombined to downstream Dδ and Jδ segments, and expressed with Cδ in complete T-cell receptor δ transcripts. This is what we found from the twenty sequenced clones we obtained from PCR in the New South Wales platypus. Each clone contained a unique nucleotide sequence that comprised the VHδ gene recombined to the Dδ and Jδ gene segments (see fig. 4A). Of these 20, 11 had unique V, D, and J combinations that would therefore encode 11 different complementarity-determining regions-3 (CDR3; see fig. 4B). More than half of these (8 out of 11) contained evidence of using both D genes, giving a VDDJ pattern. This is a common feature of δ V domains, where multiple D genes can be incorporated into the recombination due to the presence of asymmetrical RSS.63Carroll et al. 1993

The region corresponding to the junctions between the V, D, and J segments contained an additional sequence that could not be accounted for by the germ-line gene segments (fig. 4B). There are two possible sources of such a sequence. One is palindromic nucleotides that are created during V(D)J recombination when the RAG generates hairpin structures that are resolved asymmetrically during the re-ligation process.64Lewis 1994 The second is non-templated nucleotides that can be added by the enzyme terminal deoxynucleotidyl transferase (TdT) during the V(D)J recombination process.

An unusual feature of the platypus VHδ is the presence of a second cysteine encoded near the 3′-end of the gene, directly next to the cysteine predicted to form the intra-domain disulfide bond in immunoglobulin domains (fig. 4A). Additional cysteines in the complementarity-determining region 3 of VH domains have been thought to provide stability to unusually long CDR3 loops, as has been described for cattle and the platypus previously.65Johansson et al. 2002 The CDR3 of T-cell receptor δ using VHδ are only slightly longer than conventional δ chains (ranging 10–20 residues).66Rock et al. 1994; Wang et al. 2011 Furthermore, the stabilization of CDR3 generally involves multiple pairs of cysteines, which were not present in the platypus VHδ clones (fig. 4A).

##### The Tasmanian specimen

The above concerns the animal collected from New South Wales. With the Tasmanian specimen, we were unable to amplify T-cell receptor δ transcripts containing VHδ from its splenic cDNA. We did, however, successfully isolate transcripts containing conventional Vα/δ segments, which provides a positive control. It is possible that Tasmanian platypuses, which have been separated from the mainland population for at least 14,000 years, either have a divergent VHδ or have deleted this single V gene altogether.67Lambeck and Chappell 2001

##### Sequence variation in VHδ

Although there is only a single VHδ in the current platypus genome assembly, there was sequence variation in the region corresponding to FR1 through FR3 of the V domains (see fig. 4A; the sequence data are not shown here, but are available in GenBank). We have three potential explanations for this variation:

1. Two alleles of a single VHδ gene
2. Somatic mutation of expressed VHδ genes
3. Allelic variation in gene copy number

The two-allele explanation makes sense given that the RNA used in this experiment is from a wild-caught individual from the same population that was used to generate the whole-genome sequence, which was found to contain substantial heterozygosity.68Warren et al.
2008 However, the variation was too large to be fully explained by this. The second possibility, somatic mutation (i.e. mutation not occurring in germ cells), is considered controversial for T-cell receptor chains. Nonetheless, it has been invoked in sharks and postulated in salmonids to explain the variation that exceeds the apparent gene copy number in these vertebrates.69Yazawa et al. 2008; Chen et al. 2009 Therefore, it seems possible[17]I kind of like the original phrasing “it does not seem to be out of the realm of possibility” but that could be easily simplified, so I did. that somatic mutation is occurring in platypus VHδ. One piece of evidence in favor of this is that the mutations appear to be localized to the V region with no variation in the C region (fig. 4A). This may be due to the relatedness between VHδ and immunoglobulin VH genes where somatic hyper-mutation is well documented. Somatic mutation in immunoglobulin VH contributes to overall affinity maturation in secondary antibody responses.70Wysocki et al. 1986 However, this means that the evidence is mixed: the pattern of mutation seen in the platypus is found in the complementarity-determining region 3, which would be indicative of selection for affinity maturation, but was also found in the framework regions, which does not indicate this. As further evidence against the somatic mutation explanation, there is no evidence of somatic mutation in the V regions of birds, which also have only a single VHδ.71Parra et al. 2012 The contribution of mutation to the platypus TCRδ repertoire, if it is occurring, remains to be determined. Alternatively, the sequence polymorphism may be due to VHδ gene copy number variation between individual TCRα/δ alleles. Irrespective of the number of VHδ genes in the platypus TCRα/δ locus, the results clearly support T-cell receptor δ transcripts containing VHδ recombined to Dδ and Jδ gene segments in the TCRα/δ locus (fig. 4). A VHδ gene or genes in the platypus TCRα/δ locus in the genome assembly, therefore, does not appear to be an assembly artifact. Rather, it is present and functional, and contributes to the expressed T-cell receptor δ chain repertoire. The possibility that some platypus TCRα/δ loci contain more than a single VHδ does not alter the principal conclusions of this study. ### Discussion of our Model of the Evolution of TCRα/δ and TCRµ The results above make up the evidence that allowed us to construct the model shown after the introduction section (see fig. 2). Here we discuss various considerations about the model. ##### Previous hypothesis of the origin of TCRµ in mammals Our previous hypothesis72Parra et al. 2008 about the origin of T-cell receptor µ (TCRµ) in mammals involved the recombination between an ancestral TCRα/δ locus and an immunoglobulin heavy (IgH) locus. The IgH locus would have contributed the V gene segments at the 5′-end, while the T-cell receptor δ would have contributed the D, J, and C genes at the 3′-end of the locus. The difficulty with this hypothesis was the clear stability of the genome region surrounding the TCRα/δ locus. In other words, the chromosomal region containing the TCRα/δ locus appears to have remained relatively undisrupted for at least the past 360 million years.73Parra et al. 2008, 2010, 2012 ##### VHδ in different vertebrate lineages An alternative model for the origins of TCRµ emerged from the discovery, in amphibians and birds, of VHδ genes inserted into the TCRα/δ locus. This model involves the insertion of VH (fig. 
2B) followed by the duplication and translocation of T-cell receptor genes (fig. 2C-E). The insertion in the TCRα/δ locus seems to occur without disrupting the local syntenic region, as we know from zebra finches and frogs. In frogs, the IgH and TCRα/δ loci are tightly linked, which may have facilitated the translocation of VH genes into the TCRα/δ locus.74Parra et al. 2010 But close linkage is not a requirement. The genomes of birds and platypuses do not show such linkage, and the translocation of VH genes to the TCRα/δ locus appears to have occurred independently from frogs in these two lineages. We know this from the lack of similarity and relatedness between the VHδ genes of frogs, birds, and monotremes.75Parra et al. 2012 As can be seen in the phylogenetic tree of fig. 4, they each appear to derive from a different, ancient VH clan:

• Clan I for birds
• Clan II for frogs
• Clan III for platypuses

Therefore, we suggest that the transfer of VHδ occurred independently in the different lineages. Another possibility is that transfers of VHδ may have occurred frequently and repeatedly in the past. Gene replacement may be the best explanation for the current content of these genes in the different tetrapod lineages. The new evidence of platypus VHδ from this study allows us to update the model.

##### Updating the model for mammalian TCRµ

Let us contrast the evidence from marsupials with the evidence we have gathered from the platypus. In marsupials, there is no VHδ; the Vµ genes are highly divergent; and at least in the opossum, there is no conserved synteny with genes linked to TCRµ. These facts provide little insight into the origins of T-cell receptor µ and its relationship to other T-cell receptor chains like δ or the conventional ones.76Parra et al. 2008 In the platypus genome, however, we notice a striking similarity between VH, VHδ, and Vµ. These genes are all in clan III. In particular, the close relationship between the platypus VHδ and Vµ genes lends greater support to the model presented in fig. 2E, with TCRµ having been derived from TCRδ genes. The similarity that we found here between the platypus VHδ and V genes in the TCRµ locus is, so far, the clearest evolutionary association between the µ and δ loci in one species.

##### Evolution of chains with three extracellular domains

TCRµ is an example of a T-cell receptor form with three extracellular domains (refer back to fig. 1). These forms have evolved at least twice in vertebrates. The first was in the ancestors of the cartilaginous fish in the form of NAR-TCR.77Criscitiello et al. 2006 The second was in the mammals as TCRµ.78Parra et al. 2007 As we discussed in the introduction, NAR-TCR uses an N-terminal V domain that is related to the V domains found in IgNAR antibodies, which are unique to cartilaginous fish,79Greenberg et al. 1995; Criscitiello et al. 2006 and not closely related to antibody VH domains. Therefore, it appears that NAR-TCR and TCRµ are more likely the result of convergent evolution rather than being related by direct descent.80Parra et al. 2007; Wang et al. 2011

##### Evolution of chains with antibody-like V domains

T-cell receptor chains that use antibody-like V domains, such as TCRδ using VHδ, NAR-TCR, or TCRµ (i.e. the receptors with yellow ovals in fig. 1), are widely distributed in vertebrates. Only the bony fish and placental mammals lack them. In addition to NAR-TCR, some shark species appear to generate T-cell receptor chains using antibody V genes.
This occurs via trans-locus V(D)J recombination between immunoglobulin IgM and IgW heavy chain V genes and TCRδ and TCRα D and J genes.81Criscitiello et al. 2010 This may be possible, in part, due to the multiple clusters of immunoglobulin genes found in the cartilaginous fish. It also illustrates that there have been independent solutions to generating T-cell receptor chains with antibody V domains in different vertebrate lineages. In the tetrapods, the VH genes were trans-located into the T-cell receptor loci where they became part of the germ-line repertoire. By comparison, in cartilaginous fish, something equivalent may occur somatically during V(D)J recombination in developing T cells. Either mechanism suggests there has been selection for having T-cell receptors using antibody V genes over much of vertebrate evolutionary history.

What function do these antibody-like V domains serve? The current working hypothesis is that they are able to bind native antigen directly. This is consistent with a selective pressure for T-cell receptor chains that may bind or recognize antigen in ways similar to antibodies in many different lineages of vertebrates. In the case of NAR-TCR and TCRµ, the N-terminal V domain (the “third” one) is likely to be unpaired and bind antigen as a single domain (see fig. 1), as has been described for IgNAR and some IgG antibodies in camels (recently reviewed in Flajnik et al. 2011). This model of antigen binding is consistent with the evidence that the N-terminal V domains in TCRµ are somatically diverse, while the second, supporting V domains have limited diversity and presumably perform a structural role rather than one of antigen recognition.82Parra et al. 2007; Wang et al. 2011 There is no evidence of double V domains in TCRδ chains using VHδ in frogs, birds, or platypus (rightmost part of fig. 1).83Parra et al. 2010, 2012 Rather, the complex containing VHδ would likely be structured similarly to a conventional γδ receptor, with a single V domain on each chain. It is possible that such receptors also bind antigen directly, but this remains to be determined.

A compelling model for the evolution of the immunoglobulin and T-cell receptor loci has been one of internal duplication, divergence and deletion. This is the so-called birth-and-death model of evolution of immune genes and was promoted by Nei and colleagues.84Ota and Nei 1994; Nei et al. 1997 Our results do not contradict that the birth-and-death mode of gene evolution has played a significant role in shaping these complex loci. However, our results do support a previously unappreciated role for horizontal transfer of gene segments between these loci. With this mechanism, T cells may have been able to acquire the ability to recognize native, rather than processed, antigen, much like B cells.

## Science Style Guide: Links

This post is part of my ongoing scientific style guideline series.

Go to Wikipedia and start reading an article on some topic you don’t know much about. For example, the umami taste. Chances are that by the end of the first few paragraphs, you will have clicked on several links, either because they referred to terms you didn’t know (glutamate, inosine monophosphate), or because you were curious (what does Wikipedia have to say about the five basic tastes?). Now these links might be open in new tabs for you to check later. Or maybe you’ve already given up reading the original umami article, and are now exploring some new rabbit hole (e.g. the Scoville scale of spiciness).
Links are even, to some degree, an answer to the four-way tradeoff I wrote about here. It’s difficult to write something that is clear, brief, complex, and information-rich. But Wikipedia articles come closer to the golden middle, and that’s thanks to links. With no need to explain every difficult term directly in the text, articles can be briefer without sacrificing clarity. By packaging complex information in other articles, and showing only the link, Wikipedia articles can contain more complexity and richness of information.

(The reason it doesn’t falsify my four-way tradeoff theory is that Wikipedia as a whole cannot be called succinct. To understand a topic well, you still have to read a lot of articles. But Wikipedia packages this information in relatively brief articles. In other words, information architecture is a pretty good solution to the tradeoff.)

Links make matters easier for readers, but they also help writers. You don’t have to do as much guessing about what your audience knows; your audience will decide for themselves. And you can just reuse existing information written by others. This suggests that our two principles are satisfied:

• Minimum reading friction: readers who want more detail on a term can get it in one click, without the writer having to explain everything directly in the text.
• Low-hanging fruit: adding links to existing public resources like Wikipedia, other encyclopedias, or open-access papers, is an easy thing to do for a writer.

Given all of the above, we’d expect scientific papers — which are almost always at the frontier of the tradeoff, trying to cram a lot of complex information within a word limit without being too difficult to read — to use hyperlinks heavily. Right?

Nope. They rarely do. At most they include citations that link to the reference section, which may include a link to the original paper, which may or may not be openly accessible to you, and which may or may not be a 10-page difficult read in which the explanation you seek is buried in page three of the discussion with no hint to tell you where to look. And they definitely never include links to Wikipedia or anything like it.

Why is that? I’m guessing part of the explanation is the high importance of those citations. It is considered vital to put your work in relation to existing literature, so scientists have an incentive to reference as many relevant papers as they can, and no incentive to link to anything else. The respectability of the sources comes into play; Wikipedia isn’t a reputable source (it can be edited by anyone! It’s not peer-reviewed!!), nor are a lot of the other websites you could link to. So they tend to be avoided.

Then there are the requirements of proper information management. References must be written in standard format. So if for some reason you do need to link to a website, then you’ll have to use a format like:

“Questions and Answers on Monosodium glutamate (MSG)”. Silver Spring, Maryland: United States Food and Drug Administration. 19 November 2012. Retrieved 19 February 2017.

There are good reasons for this formalism. But it also means that adding any link to a paper requires some work. As a result, I suspect it leads to less hyperlinking in science papers than would be useful to readers.

If you’ve ever read a scientific paper, it’s likely that you have googled complicated terms and looked up Wikipedia articles to help you. Scientists shouldn’t pretend that this isn’t happening. They should not hesitate to add links to resources like Wikipedia, blogs, Twitter threads, and other papers, in order to guide readers and reduce friction.

### Drawbacks

There are a few drawbacks to hyperlinked text.
None of them invalidate the idea that links should be used more, but we should keep them in mind.

One drawback is that links can be distracting. A barrage of links in a paragraph might be somewhat annoying to read. (Although links also have the benefit of providing novelty to text — even a simple thing like the color of a hyperlink can be useful to make a piece of writing less boring.) And having to open links may drive readers away from the original paper, and requires more effort on their part than a piece written so that beginners don’t need to look up extra information.

Another major drawback is link rot. A webpage may stop existing at any moment, and then your link becomes useless. Also, Wikipedia articles can change and stop fulfilling their original purpose. (Although in practice Wikipedia contributors are mindful of that and use redirections a lot.) One way to circumvent this is to link to archived pages, such as those on the Wayback Machine.

And of course, links don’t work offline. But my contention is that fewer and fewer people are reading papers in print or without internet access. We shouldn’t make it impossible for these modes of reading to happen, but it’s time we make full use of the web’s possibilities to improve science publishing.

### Recommendations

• Do not hesitate to add links to various resources, including encyclopedias, your own content (whether formally published or not), etc.
• Try to find the correct balance between too few and too many links.
• Your paper should be readable without clicking any links, so do explain the crucial parts directly in the text.
• Too many links can be distracting, so choose carefully when to add one.
• Link to archived webpages when possible.
• Links shouldn’t replace formal citations, but it’s good practice to pair citations with direct links to make it easier to look up the reference.

## Science Style Guide: Bullet Points

This post is part of my ongoing scientific style guideline series.

Writing with bullet points (or bullet numbers, letters etc.) has several advantages:

• It provides clear guidance to readers.
• It forces the writer to think about the structure of what they’re trying to say.
• It comes with built-in line breaks, which tends to create shorter, more readable paragraphs.
• It breaks the flow of normal prose, which makes reading less monotonous.
• It is another channel to communicate emphasis (in addition to italics, bold, caps, subheadings, etc.).

Not everything in a piece of writing should follow point-form format. Regular prose, organized in paragraphs, is better for most things. But when you are trying to express something that’s highly structured, like a list of steps (in a recipe, or in an experimental protocol), then not using bulleted lists can work against you.

Science papers being a weirdly conservative genre, bulleted lists are somewhat uncommon in them. Papers will quite often use quasi-point form formats, like (1) having numbers or letters in the middle of a paragraph, like this; (2) using “first,” “second,” “third”; or (3) separating ideas with quasi-titles written in italics or bold, but without a line break.

You see this a lot in figures. Many scientific figures are complex and contain multiple parts. Each part is identified with a letter, as in the following example from the platypus paper:

A lot of these habits come, I assume, from the fact that journals used to be available only in print.
Space was very limited, and there’s often a lot of scientific information to display, so you’re not going to waste any with bullet points. Today we don’t have these limitations. Using bullet points where appropriate is nice to your readers, so use them. They’re an easy way to reduce reading friction.

They’re also clearly a Low-Hanging Fruit. Similar to breaking paragraphs, it takes very little work to turn a piece of text into a list, if it’s already presenting the information in something close to a list. (If it isn’t, then bullet points probably won’t work well anyway.) It’s also one of those interventions that can be done almost mechanically.

### Recommendations

• Use bullet points liberally when it is appropriate, e.g. for:
  • Steps in a process, experiment, protocol, etc.
  • Lists of materials used, substances, etc.
  • Enumerations (e.g. “the five characteristics of X are: …”)
• Nested bullet points (just like the above) can be useful, but don’t overuse them.
  • At more than two levels, the information structure is probably too complex for the bullet points to improve readability.
• Pick bullet points instead of numbers/letters when the order does not matter. Pick numbers (for simplicity, prefer Arabic numerals, but Roman numerals can work) or letters when the order does matter (e.g. for steps in a protocol).
• Bullet points are useful to break the monotony of reading paragraphs, but when there’s too much point form, the reverse becomes true. Use bulleted lists less than normal prose.

## Science Style Guide: Giving Examples

This post is part of my ongoing scientific style guideline series.

Imagine you’re writing a science paper. The journal you’re going to submit it to specifies a word limit: 5,000 max. You open the stats in your finished draft — 5,523 words. You’re going to have to cut.

Problem is: everything you wrote is important! You can’t take out anything from the Methods or Results sections: that would make the study weaker and less likely to be accepted for publication. You can’t take out any of the background information in the introduction: you already included the bare minimum for readers to understand the rest.

Although, on closer inspection, perhaps not quite the bare minimum. You reread this sentence:

Most models of trait evolution are based on Brownian motion, which assumes that a trait (say, beak size in some group of bird species) changes randomly, with some species evolving a larger beak, some a smaller one, etc.

What if you removed the part that talks about beak size? That’s not strictly necessary.

Most models of trait evolution are based on Brownian motion, which assumes that a trait changes randomly.

There we go. More concise, more to the point, and most importantly, you shaved off 23 words from that word count. Of course, the sentence is less illustrative, but whatever: your readers are smart, they’ll be able to figure out an example on their own. Right?

Wrong. Well, okay, not quite wrong, your readers probably are smart. But this goes against the Minimum Reading Friction principle. The point of most writing, including science papers, is to do the work so that readers don’t need to. If readers need to think of an illustrative example themselves to fully understand your abstract idea, then you’re asking a lot of them.

Picking good, concrete, relevant examples is a lot of work, whether as a reader or writer.1Here’s an aside that’s not directly related to science but, instead, to computer programming.
When coding, you’ll usually refer a lot to developer documentation about whatever preexisting code you’re using. E.g. you want to convert a date to a different format, so you look up the docs for the function convertFormat(someDate) -> convertedDate. The docs will describe how the function works, what its input (someDate) and output (convertedDate) exactly are, and so on — but very often they will not include an example of using convertFormat() in code. If there is an example, it’s often trivial and not very helpful. When I worked as a programmer, I was commonly frustrated by the lack of examples, both because I wanted to figure out quickly how to use a complicated function, and because I wanted to know about any usage conventions. I suspect that writing documentation would be a lot more work if it included clear and relevant examples everywhere, which is probably why it’s rarely done.

I realize this constantly when I write. It’s very tempting to just state an abstract idea and not bother finding a good example to illustrate it. After all, the abstract idea is more general and therefore more valuable — provided that your readers understand it.

I struggled with example-finding in this very essay. It took me a while to think of the opening about cutting examples to respect a word count limit. And I’m not even that happy with this example. For one thing, it’s not very concrete. For another, it’s not even the most common reason for lack of examples: usually, we don’t cut them out, we simply fail to come up with them in the first place.

And so, unfortunately, this piece of guidance is less of a Low-Hanging Fruit than others: adding good examples is a skill that takes some practice. At the very least, it’s not difficult from the point of view of structure, since it doesn’t require you to rethink your argument — you usually just need to add a sentence or two.

Here are a few other minor points:

#### Where should examples be placed relative to the main idea?

It’s most intuitive to place an example right after the idea it supports, and that’s probably fine most of the time. But there are benefits to placing an example first. Consider:

Left-handedness seems to be somewhat correlated with extraordinary success, including political success. For example, despite a base rate of about 10% left-handedness in the general population, four of the seven last United States presidents — Barack Obama, Bill Clinton, George H.W. Bush, and Ronald Reagan — were left-handed.

vs.

What do US presidents Barack Obama, Bill Clinton, George H.W. Bush, and Ronald Reagan have in common? They were all left-handed. In other words, four of the last seven presidents were left-handed, compared to a base rate in the general population of about 10%. This suggests that left-handedness is correlated with extraordinary success, at least in politics.

I find the second version more engaging. You see an interesting fact, you’re drawn in, and then the writer tells you the more general point when you’re most receptive. Journalists do this a lot. They open with a story, and then proceed to make their point.

#### What types of scientific writing does this apply to?

Anything that deals mostly with abstract ideas. Highly concrete writing, such as the sections describing the methods or results of a study, isn’t really affected. Thus, in a typical experiment paper, this advice is mostly relevant to the discussion section and some of the introductory background. Authors of literature reviews may need to be more careful: these papers integrate a lot of ideas from reviewed studies, and it can be tempting to skip examples in order to include more content in less space. The paragraph I worked on here was from a literature review.
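To make the computer-programming aside above a little more concrete, here is a sketch of the kind of documentation entry I wish I had found more often. The function convert_format below is hypothetical (it simply echoes the imaginary convertFormat() from the footnote and is not part of any real library), but it shows how cheaply a couple of worked lines can be attached to an abstract description.

```python
from datetime import datetime

def convert_format(some_date: str, out_format: str = "%d %B %Y") -> str:
    """Convert an ISO-formatted date string (YYYY-MM-DD) to another format.

    A one-line description like the sentence above is where documentation
    often stops. The short worked example below is what actually saves
    the reader time:

    >>> convert_format("2012-11-19")
    '19 November 2012'
    >>> convert_format("2012-11-19", out_format="%Y/%m/%d")
    '2012/11/19'
    """
    parsed = datetime.strptime(some_date, "%Y-%m-%d")  # parse the ISO input
    return parsed.strftime(out_format)  # re-emit it in the requested format
```

The two doctest lines barely add to the length of the entry, yet they answer at a glance the questions a reader would otherwise settle by trial and error, which is the same trade this whole section argues for in prose.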
#### What about word limits, though?

Sometimes you really are constrained by externally imposed word limits, and sometimes the examples really are the least problematic thing to take out. In those cases, well, do what you have to do.

In JAWWS, I don’t want to be strict about word limits. They often force writers to sacrifice clarity to satisfy other components of the four-way tradeoff. They’re also not as relevant in an age where papers are rarely printed on, well, paper. On the other hand, I imagine that many other publications started out thinking that and then had to implement limits to avoid very long submissions. I wonder if the solution could be to make concrete examples not count toward the limit, provided it’s not too difficult to identify them.

### Recommendations

• Support each abstract idea with at least one example
• Complicated abstract ideas may benefit from multiple examples
• Choose concrete, specific examples that can be grasped immediately
• When possible, put the example before stating the underlying idea

## Science Style Guide: Paragraph Length

This post is part of my ongoing scientific style guideline series.

There are famous words from Gary Provost that go like this. Pay attention to the rhythm:

This sentence has five words. Here are five more words. Five-word sentences are fine. But several together become monotonous. Listen to what is happening. The writing is getting boring. The sound of it drones. It’s like a stuck record. The ear demands some variety. Now listen. I vary the sentence length, and I create music. Music. The writing sings. It has a pleasant rhythm, a lilt, a harmony. I use short sentences. And I use sentences of medium length. And sometimes when I am certain the reader is rested, I will engage him with a sentence of considerable length, a sentence that burns with energy and builds with all the impetus of a crescendo, the roll of the drums, the crash of the cymbals—sounds that say listen to this, it is important. So write with a combination of short, medium, and long sentences. Create a sound that pleases the reader’s ear. Don’t just write words. Write music.

This is legendary advice for writing sentences. It is delightfully illustrative; we grasp it immediately. And it is correct: diversity in sentence length is a necessity of good writing, just like it is for musical notes. I claim that the same is true of paragraph length.

Science papers usually feature many long paragraphs. Often, all or almost all paragraphs in a paper are long. Put negatively, we might call them Walls of Text. This is a good metaphor because Walls of Text, just like regular walls, serve as obstacles. They make information less accessible. How often have you looked at a Wall of Text and simply decided it wasn’t worth the effort?

Walls of Text are bad because:

1. They make it more difficult for readers to take breaks.
2. They provide no hints about the structure of the underlying ideas.

We’ll examine both in more detail below. But first I want to tie my ideas about paragraphs to my two major writing style principles.

Minimum Reading Friction: The point of having paragraphs at all, as opposed to perfectly continuous text with no line breaks, is to provide some help for readers. If you don’t do that, you’re essentially telling your readers that they’re on their own.
This is the opposite of what we want — the effort should be made once, by the writer, so that the many readers don’t have to.

Low-Hanging Fruit: Cutting up paragraphs is a relatively easy task. If the sentences are structured well already, it’s just a matter of finding the “joints” in the written text where it makes sense to add a line break. If the ideas are structured in a confusing manner, then it’s more work, but there’s also greater room for improvement.

In the interest of not making this post too long, I won’t include a full-fledged example, but this past post in which I rewrote a paragraph (into several ones) is a good illustration.

### 1. Rewarding the reader with breaks

Humans aren’t computers. We can’t work continuously without resting. Reading a science paper is work, so we’re always on the lookout for opportunities to take breaks — sometimes microbreaks on the order of a few seconds, sometimes longer breaks like a full day.

Paragraphs, like chapters, sections, and sentences, serve the purpose of telling readers, “hey, good job, you read a thing, now you can take a break if you want.” It’s rewarding. It indicates that it’s safe(r) to take a break after a paragraph because it’ll be less work to find a reentry point later, and because you expect the next paragraph to be about a different idea. I don’t know if it’s a coincidence that the word break is used for both concepts, but if so, it’s a fortuitous one.

Walls of Text are often bad because when they loom ahead, you brace yourself. You wonder if you’ll have the energy and time to read it all. If not, maybe you quit reading (and it’s anybody’s guess whether you’ll come back to it later). If yes, then you come out at the other end with less energy and time, and good luck if the next paragraph is also a Wall of Text.

And that’s assuming you do reach the end. It’s quite likely that you quit halfway — because you had to stop to think about something you read, or you needed to look up a word, or you clicked on a link, or some random distraction outside the text grabbed your attention.

At the most extreme, you could imagine an entire book that consists of a single paragraph, with no chapters or line breaks at all.1In fact very old books, from centuries or millennia ago, are often like that, probably because back then paper or parchment were expensive. You wouldn’t want to waste precious space with line breaks. This is really lazy on the part of the writer — the reader has to do all the work!

Now, that’s not to say long paragraphs are always wrong. Sometimes it really does make sense to package a lot of ideas together in a single Wall of Text. Also, long paragraphs can be easy to read if the sentences are good and logically connected. But this also means that if you do choose to write a Wall of Text, then you should be extra careful with how you structure the writing inside it.

### 2. Providing structure

Speaking of structure: line breaks are one of the most useful tools to communicate structure to readers. We expect paragraphs to contain a single idea. You may have learned in school that a paragraph should have a “topic sentence” with additional sentences to provide “supporting detail.” This is somewhat too rigid, but the principle is sound.

The worst kinds of Walls of Text are those that have multiple competing ideas inside them. Find where the boundaries are, and cut them up! The ideas don’t even have to be very different. Suppose you have a transition word like “Similarly” or “Alternatively” in the middle of a paragraph.
The next sentence is probably closely related to the previous one, but the transition word does indicate a shift, so it’s a nice spot for adding a line break.

Of course, sometimes you really have a single idea with lots of supporting detail that it makes no sense to break up. This is why Walls of Text are sometimes useful. In fact, as the Gary Provost quote at the top illustrates for sentences, diversity in paragraph length is a good thing.

Having only very short paragraphs is bad.

Think of low-quality newspaper pieces where there’s a line break after each sentence.

It’s jarring.

This is almost as bad as Walls of Text, from the point of view of structure.

Okay, that was annoying, right? The reason is that sentences already provide structure. So using only single-sentence paragraphs amounts to not using line breaks as an extra channel for reader guidance. Strive to have a mix of short, medium, and long paragraphs. Heterogeneity is good. It carries more information.

### Recommendations

• If you’re ever debating whether or not to end the paragraph and add a line break, err on the side of “yes”.
  • Verbatim from Slate Star Codex’s Nonfiction Writing Advice, an excellent essay whose section 1 heavily inspired this post.
• Balance your piece between short, medium, and long paragraphs.
• Cut up existing Walls of Text by finding the boundaries between different ideas.
• This advice generalizes to section breaks:
  • Err on the side of more, shorter sections rather than few long ones.
  • Split sections that are long and contain many distinct ideas.

Here I present a paper I chose to rewrite as a demonstration for the JAWWS project. The original text and figures are reproduced below,1the paper has a Creative Commons non-commercial license interspersed with my comments in the following format:

hello I am a blue comment in a quote-block

Feel free to just read the comments. Annotating the paper was a first step in the process. Next I will focus on the rewriting per se. Should be fun!

I didn’t have a particularly strict selection procedure — I went on ResearchHub, in the evolutionary biology section (since that used to be my field), and picked one that seemed appropriate. A cursory skimming showed it had plenty of abbreviations and long paragraphs, which suggested there was a lot of room for improvement. Also, it’s about platypuses. Or platypi. Platypodes. Whatever.

• Title: “A Model for the Evolution of the Mammalian T-cell Receptor α/δ and μ Loci Based on Evidence from the Duckbill Platypus”
• Authors: Zuly E. Parra, Mette Lillie, Robert D. Miller
• Journal: Molecular Biology and Evolution
• Word count: 5,800 words.

A disclaimer: some of the comments below will be harsh. Again, I don’t mean to attack the authors, who did their job as well as they could, and in fact succeeded at it — after all, they managed to publish their work! With that, let’s pretend we’re semi-aquatic platypuses and dive in.

## A Model for the Evolution of the Mammalian T-cell Receptor α/δ and μ Loci Based on Evidence from the Duckbill Platypus

Comments: Okay, this paper is going to be about T cells (I vaguely remember this being about immunity?), platypuses, and evolution. Sounds good.

## Abstract

The specific recognition of antigen by T cells is critical to the generation of adaptive immune responses in vertebrates. T cells recognize antigen using a somatically diversified T-cell receptor (TCR). All jawed vertebrates use four TCR chains called α, β, γ, and δ, which are expressed as either a αβ or γδ heterodimer.
Nonplacental mammals (monotremes and marsupials) are unusual in that their genomes encode a fifth TCR chain, called TCRµ, whose function is not known but is also somatically diversified like the conventional chains. The origins of TCRµ are also unclear, although it appears distantly related to TCRδ. Recent analysis of avian and amphibian genomes has provided insight into a model for understanding the evolution of the TCRδ genes in tetrapods that was not evident from humans, mice, or other commonly studied placental (eutherian) mammals. An analysis of the genes encoding the TCRδ chains in the duckbill platypus revealed the presence of a highly divergent variable (V) gene, indistinguishable from immunoglobulin heavy (IgH) chain V genes (VH) and related to V genes used in TCRµ. They are expressed as part of TCRδ repertoire (VHδ) and similar to what has been found in frogs and birds. This, however, is the first time a VHδ has been found in a mammal and provides a critical link in reconstructing the evolutionary history of TCRµ. The current structure of TCRδ and TCRµ genes in tetrapods suggests ancient and possibly recurring translocations of gene segments between the IgH and TCRδ genes, as well as translocations of TCRδ genes out of the TCRα/δ locus early in mammals, creating the TCRµ locus. Comments: That’s a pretty dense abstract. There’s a lot of acronyms in there, which I find distracting. Also, it’s not immediately obvious why we should be interested in this paper. It seems to be this: studying platypuses uncovered new information about how T cells evolved. But that info is buried in the fourth sentence and beyond. ## Introduction T lymphocytes are critical to the adaptive immune system of all jawed vertebrates and can be classified into two main lineages based on the T-cell receptor (TCR) they use (Rast et al. 1997; reviewed in Davis and Chein 2008). The majority of circulating human T cells are the αβT cell lineage which use a TCR composed of a heterodimer of α and β TCR chains. αβT cells include the familiar T cell subsets such as CD4+ helper T cells and regulatory T cells, CD8+ cytotoxic T cells, and natural killer T (NKT) cells. T cells that are found primarily in epithelial tissues and a lower percentage of circulating lymphocytes in some species express a TCR composed of γ and δ TCR chains. The function of these γδ T cells is less well defined and they have been associated with a broad range of immune responses including tumor surveillance, innate responses to pathogens and stress, and wound healing (Hayday 2009). αβ and γδ T cells also differ in the way they interact with antigen. αβTCR are major histocompatibility complex (MHC) “restricted” in that they bind antigenic epitopes, such as peptide fragments, bound to, or “presented” by, molecules encoded in the MHC. In contrast, γδTCR have been found to bind antigens directly in the absence of MHC, as well as self-ligands that are often MHC-related molecules (Sciammas et al. 1994Hayday 2009). I can hardly think of a less exciting introduction. I’m expecting talk of platypuses, of puzzling questions about evolution or the immune system — and all I get is a boring lecture on T cells. Make no mistake: all of this information is important. We need to know a T cell is, what’s a T-cell receptor, and that there exist at least two kinds (αβ and γδ). But this information shouldn’t be put first. And it could definitely be split up into more paragraphs. 
The conventional TCR chains are composed of two extracellular domains that are both members of the immunoglobulin (Ig) domain super-family (reviewed in Davis and Chein 2008) (fig. 1). The membrane proximal domain is the constant (C) domain, which is largely invariant amongst T-cell clones expressing the same class of TCR chain, and is usually encoded by a single, intact exon. The membrane distal domain is called the variable (V) domain and is the region of the TCR that contacts antigen and MHC. Similar to antibodies, the individual clonal diversity in the TCR V domains is generated by somatic DNA recombination (Tonegawa 1983). The exons encoding TCR V domains are assembled somatically from germ-line gene segments, called the V, diversity (D), and joining (J) genes, in developing T cells, a process dependent upon the enzymes encoded by the recombination activating genes (RAG)-1 and RAG-2 (Yancopoulos et al. 1986Schatz et al. 1989). The exons encoding the V domains of TCR β and δ chains are assembled from all three types of gene segments, whereas the α and γ chains use only V and J. The different combinations of V, D, and J or V and J, selected from a large repertoire of germ-line gene segments, along with variation at the junctions due to addition and deletion of nucleotides during recombination, contribute to a vast TCR diversity. It is this diversity that creates the individual antigen specificity of T-cell clones. Fig. 1. The figure helps, but again, why are we reading this? This paper seems to follow the common pattern in which the introduction gradually “zooms into” the main point. This is not a good pattern, because it doesn’t tell us the reason for this information. Sure, we suspect it’s relevant to understand what comes next, but without any mystery to anchor this to, it’s hard to be really engaged. The TCR genes are highly conserved among species in both genomic sequence and organization (Rast et al. 1997Parra et al. 20082012Chen et al. 2009). In all tetrapods examined, the TCRβ and γ chains are each encoded at separate loci, whereas the genes encoding the α and δ chains are nested at a single locus (TCRα/δ) (Chien et al. 1987Satyanarayana et al. 1988; reviewed in Davis and Chein 2008). The V domains of TCRα and TCRδ chains can use a common pool of V gene segments, but distinct D, J, and C genes. Diversity in antibodies produced by B cells is also generated by RAG-mediated V(D)J recombination and the TCR and Ig genes clearly share a common origin in the jawed-vertebrates (Flajnik and Kasahara 2010Litman et al. 2010). However, the V, D, J, and C coding regions in TCR have diverged sufficiently over the past >400 million years (MY) from Ig genes that they are readily distinguishable, at least for the conventional TCR. Recently, the boundary between TCR and Ig genes has been blurred with the discovery of non-conventional TCRδ isoforms that have been found that use V genes that appear indistinguishable from Ig heavy chain V (VH) (Parra et al. 20102012). Such V genes have been designated as VHδ and have been found in both amphibians and birds (fig. 1). In the frog Xenopus tropicalis, and a passerine bird, the zebra finch Taeniopygia guttata the VHδ are located within the TCRα/δ loci where they co-exist with conventional Vα and Vδ genes (Parra et al. 20102012). In galliform birds, such as the chicken Gallus gallus, VHδ are present but located at a second TCRδ locus that is unlinked to the conventional TCRα/δ (Parra et al. 2012). 
VHδ are the only type of V gene segment present at the second locus and, although closely related to antibody VH genes, the VHδ appear to be used exclusively in TCRδ chains. This is true as well for frogs where the TCRα/δ and IgH loci are tightly linked (Parra et al. 2010). Okay… different species have slightly different genes… Cool. Also, “MY” for million years, really? Do we really need that, especially when there are already about five abbreviations per sentence? The TCRα/δ loci have been characterized in several eutherian mammal species and at least one marsupial, the opossum Monodelphis domestica, and VHδ genes have not been found to date (Satyanarayana et al.1988Wang et al. 1994Parra et al. 2008). However, marsupials do have an additional TCR locus, unlinked to TCRα/δ, that uses antibody-related V genes. This fifth TCR chain is called TCRµ and is related to TCRδ, although it is highly divergent in sequence and structure (Parra et al. 20072008). A TCRµ has also been found in the duckbill platypus and is clearly orthologous to the marsupial genes, consistent with this TCR chain being ancient in mammals, although it has been lost in the eutherians (Parra et al. 2008Wang et al. 2011). TCRµ chains use their own unique set of V genes (Vµ) (Parra et al. 2007Wang et al. 2011). Trans-locus V(D)J recombination of V genes from other Ig and TCR loci with TCRµ genes has not been found. So far, TCRµ homologues have not been found in non-mammals (Parra et al. 2008). After an overview of non-mammal tetrapods (frogs, birds), we’re now talking about mammals: platypuses, marsupials, eutherians. It seems like the zooming in is coming to an end… TCRµ chains are atypical in that they contain three extra-cellular IgSF domains rather than the conventional two, due to an extra N-terminal V domain (fig. 1) (Parra et al. 2007Wang et al. 2011). Both V domains are encoded by a unique set of Vµ genes and are more related to Ig VH than to conventional TCR V domains. The N-terminal V domain is diverse and encoded by genes that undergo somatic V(D)J recombination. The second or supporting V domain has little or no diversity. In marsupials this V domain is encoded by a germ-line joined, or pre-assembled, V exon that is invariant (Parra et al. 2007). The second V domain in platypus is encoded by gene segments requiring somatic DNA recombination; however, only limited diversity is generated partly due to the lack of D segments (Wang et al. 2011). A TCR chain structurally similar to TCRµ has also been described in sharks and other cartilaginous fish (fig. 1) (Criscitiello et al. 2006Flajnik et al. 2011). This TCR, called NAR-TCR, also contains three extracellular domains, with the N-terminal V domain being related to those used by IgNAR antibodies, a type of antibody found only in sharks (Greenberg et al. 1995). The current working model for both TCRµ and NAR-TCR is that the N-terminal V domain is unpaired and acts as a single, antigen binding domain, analogous to the V domains of light-chainless antibodies found in sharks and camelids (Flajnik et al. 2011Wang et al. 2011). I’ve tried reading this paragraph like five times and I’m still not sure what it’s trying to say. It feels like it’s mostly disjointed sentences that had to be included so the authors can assume you know this, but since we still don’t have a vision of the larger picture, it’s really hard to pay attention. Phylogenetic analyses support the origins of TCRµ occurring after the avian–mammalian split (Parra et al. 2007Wang et al. 2011). 
Previously, we hypothesized the origin of TCRµ being the result of a recombination between ancestral IgH and TCRδ-like loci (Parra et al. 2008). This hypothesis, however, is problematic for a number of reasons. One challenge is the apparent genomic stability and ancient conserved synteny in the region surrounding the TCRα/δ locus; this region has appeared to remain stable over at least the past 350 MY of tetrapod evolution (Parra et al. 20082010). The discovery of VHδ genes inserted into the TCRα/δ locus of amphibians and birds has provided an alternative model for the origins of TCRµ; this model involves both the insertion of VH followed by the duplication and translocation of TCR genes. Here we present the model along with supporting evidence drawn from the structure of the platypus TCRα/δ locus, which is also the first analysis of this complex locus in a monotreme. The last sentence is the first interesting one of the entire paper. It could have come earlier. Technically we should know this from the abstract, but the abstract was pretty difficult to read too. Also, this is definitely at least two paragraphs merged into one: the first about the previous hypothesis, and the second about the alternative model that is going to be presented. ## Materials and Methods The intro was painful, and usually materials and methods are even worse. We’ll see! 🙂 ### Identification and Annotation of the Platypus TCRα/δ Locus The analyses were performed using the platypus (Ornithorhynchus anatinus) genome assembly version 5.0.1 (http://www.ncbi.nlm.nih.gov/genome/guide/platypus/). The platypus genome was analyzed using the whole-genome BLAST available at NCBI (www.ncbi.nlm.nih.gov/) and the BLAST/BLAT tool from Ensembl (www.ensembl.org). The V and J segments were located by similarity to corresponding segments from other species and by identifying the flanking conserved recombination signal sequences (RSS). V gene segments were annotated 5′ to 3′ as Vα or Vδ followed by the family number and the gene segment number if there were greater than one in the family. For example, Vα15.7 is the seventh Vα gene in family 15. The D segments were identified using complementarity-determining region-3 (CDR3) sequences that represent the V–D–J junctions, from cDNA clones using VHδ. Platypus TCR gene segments were labeled according to the IMGT nomenclature (http://www.imgt.org/). The location for the TCRα/δ genes in the platypus genome version 5.0.1 is provided in supplementary table S1Supplementary Material online. Actually, this isn’t that bad: it’s easier to follow than the introduction because it tells us sequential actions. They make sense together. But there are a few things wrong here. First, the use of the dreaded passive voice. “The analyses were performed …” No! Tell us who performed it! Second, it’s a pretty dense paragraph and the only one in its section (Identification and Annotation …), which means there’s no benefit to bundling all these sentences together: the title already serves this purpose. Third, it lacks some sentence to tell us what the goal is. The intro was not clear enough to assume readers know what the end point of these analyses is. ### Confirmation of Expression of Platypus VHδ Reverse transcription PCR (RT–PCR) was performed on total splenic RNA extracted from a male platypus from the Upper Barnard River, New South Wales, Australia. This platypus was collected under the same permits as in Warren et al. (2008). 
The cDNA synthesis step was carried out using the Invitrogen Superscript III-first strand synthesis kit according to the manufacturer’s recommended protocol (Invitrogen, Carlsbad, CA, USA). TCRδ transcripts containing VHδ were targeted using primers specific for the Cδ and VHδ genes identified in the platypus genome assembly (Warren et al. 2008). PCR amplification was performed using the QIAGEN HotStar HiFidelity Polymerase Kit (BD Biosciences, CLONTECH Laboratories, Palo Alto, CA, USA) in total volume of 20 µl containing 1× Hotstar Hifi PCR Buffer (containing 0.3 mM dNTPs), 1µM of primers, and 1.25U Hotstar Hifidelity DNA polymerase. The PCR primers used were 5′-GTACCGCCAACCACCAGGGAAAG-3′ and 5′-CAGTTCACTGCTCCATCGCTTTCA-3′ for the VHδ and Cδ, respectively. A previously described platypus spleen cDNA library constructed from RNA extracted from tissue from a Tasmanian animal was also used (Vernersson et al. 2002). PCR products were cloned using TopoTA cloning® kit (Invitrogen). Sequencing was performed using the BigDye terminator cycle sequencing kit version 3 (Applied Biosystems, Foster City, CA, USA) and according to the manufacturer recommendations. Sequencing reactions were analyzed using the ABI Prism 3100 DNA automated sequences (PerkinElmer Life and Analytical Sciences, Wellesley, MA, USA). Chromatograms were analyzed using the Sequencher 4.9 software (Gene Codes Corporation, Ann Arbor, MI, USA). Sequences have been archived on GenBank under accession numbers JQ664690–JQ664710. This seems to be mostly a list of the machines, substances, protocols etc. that were used. Accordingly, it should be formatted as a list. It doesn’t read well as a paragraph (nor should it be expected to). ### Phylogenetic Analyses Nucleotide sequences from FR1 to FR3 of the V genes regions, including CDR1 and CDR2, were aligned using BioEdit (Hall 1999) and the accessory application ClustalX (Thompson et al. 1997). Nucleotide alignments analyzed were based on amino acid sequence to establish codon position (Hall 1999). Alignments were corrected by visual inspection when necessary and were then analyzed using the MEGA Software (Kumar et al. 2004). Neighbor joining (NJ) with uncorrected nucleotide differences (p-distance) and minimum evolution distances methods were used. Support for the generated trees was evaluated based on bootstrap values generated by 1000 replicates. GenBank accession numbers for sequences used in the tree construction are in supplementary table S2Supplementary Material online. I have a graduate degree in evolutionary biology, I’ve done plenty of phylogenetic analyses (building trees of life), and somehow I hadn’t understood yet that this is what this paper was about. Maybe that’s really obvious to practicing evolutionary biologists, but it seems to me that the kind of analysis could have been made more obvious earlier. ## Results and Discussion Not a bad idea to merge results and discussion together IMO, as long as it doesn’t hinder comprehension. The TCRα/δ locus was identified in the current platypus genome assembly and the V, D, J, and C gene segments and exons were annotated and characterized (fig. 2). The majority of the locus was present on a single scaffold, with the remainder on a shorter contig (fig. 2). Flanking the locus were SALL2DAD1 and several olfactory receptor (OR) genes, all of which share conserved synteny with the TCRα/δ locus in amphibians, birds, and mammals (Parra et al. 200820102012). 
The platypus locus has many typical features common to TCRα/δ loci in other tetrapods (Satyanarayana et al. 1988Wang et al. 1994Parra et al. 200820102012). Two C region genes were present: a Cα that is the most 3′ coding segment in the locus, and a Cδ oriented 5′ of the Jα genes. There is a large number of Jα gene segments (n = 32) located between the Cδ and Cα genes. Such a large array of Jα genes are believed to facilitate secondary Vα to Jα rearrangements in developing αβT cells if the primary rearrangements are nonproductive or need replacement (Hawwari and Krangel 2007). Primary TCRα V–J rearrangments generally use Jα segments towards the 5′-end of the array and can progressively use downstream Jα in subsequent rearrangements. There is also a single Vδ gene in reverse transcriptional orientation between the platypus Cδ gene and the Jα array that is conserved in mammalian TCRα/δ both in location and orientation (Parra et al. 2008). Fig. 2. Oof. I had to actually add line breaks to this paragraph to parse it. It mostly says the same things as the figure, which isn’t too bad. Repeating important info in multiple formats is a good idea. The figure itself could have been clearer, though — it took me a few minutes to understand that the multiple lines in it represent contiguous segments of the chromosome (at least that’s what I think it means). I also had to look up what “synteny” means: it’s having the same order for genetic elements across species. There are 99 conventional TCR V gene segments in the platypus TCRα/δ locus, 89 of which share nucleotide identity with Vα in other species and 10 that share identity with Vδ genes. The Vδ genes are clustered towards the 3′-end of the locus. Based on nucleotide identity shared among the platypus V genes they can be classified into 17 different Vα families and two different Vδ families, based on the criteria of a V family sharing >80% nucleotide identity (not shown, but annotated in fig. 2). This is also a typical level of complexity for mammalian Vα and Vδ genes (Giudicelli et al. 2005Parra et al. 2008). Also present were two Dδ and seven Jδ gene segments oriented upstream of the Cδ. All gene segments were flanked by canonical RSS, which are the recognition substrate of the RAG recombinase. The D segments were asymmetrically flanked by an RSS containing at 12 bp spacer on the 5′-side and 23 bp spacer on the 3′-side, as has been shown previously for TCR D gene segments in other species (Carroll et al. 1993Parra et al. 20072010). In summary, the overall content and organization of the platypus TCRα/δ locus appeared fairly generic. The last sentence seems to be the main takeaway. I would have put it first. What is atypical in the platypus TCRα/δ locus was the presence of an additional V gene that shared greater identity to antibody VH genes than to TCR V genes (figs. 2 and 3). This V gene segment was the most proximal of the V genes to the D and J genes and was tentatively designated as VHδ. VHδ are, by definition, V genes indistinguishable from Ig VH genes but used in encoding TCRδ chains and have previously been found only in the genomes of birds and frogs (Parra et al. 200820102012). Shortish paragraph, intriguing first sentence — good job! Fig. 3. Maybe that’s the ex-biologist speaking, but I personally really like phylogenetic trees. I find them quite illustrative. On the other hand, I, uh, didn’t remember at all what a VH gene is, so I had to go back to the introduction. 
There should have been a way to make it clearer, since VH genes play a big role in the results. Also, not important, but there’s a big typo in the last sentence (generation should have been generated). VH genes from mammals and other tetrapods have been shown to cluster into three ancient clans and individual species differ in the presence of one or more of these clans in their germ-line IgH locus (Tutter and Riblet 1989Ota and Nei 1994). For example, humans, mice, echidnas, and frogs have VH genes from all three clans (Schwager et al. 1989Ota and Nei 1994Belov and Hellman 2003), whereas rabbits, opossums, and chickens have only a single clan (McCormack et al. 1991Butler 1997Johansson et al. 2002Baker et al. 2005). In phylogenetic analyses, the platypus VHδ was most related to the platypus Vµ genes found in the TCRµ locus in this species (fig. 3). Platypus VHδ, however, share only 51–61% nucleotide identity (average 56.6%) with the platypus Vµ genes. Both the platypus Vµ and VHδ clustered within clan III (fig. 3) (Wang et al. 2011). This is noteworthy given that VH genes in the platypus IgH locus are also clan III and, in general, clan III VH are the most ubiquitous and conserved lineage of VH (Johansson et al. 2002Tutter and Riblet 1989). Although clearly related to platypus VH, the VHδ gene share only 34–65% nucleotide identity (average 56.9%) with the bona fide VH used in antibody heavy chains in this species. Okay, this explains the three VH parts in the tree. It’s pretty clear. It was necessary to rule out that the VHδ gene present in the platypus TCRα/δ locus was not an artifact of the genome assembly process. One piece of supporting evidence would be the demonstration that the VHδ is recombined to downstream Dδ and Jδ segments and expressed with Cδ in complete TCRδ transcripts. PCR using primers specific for VHδ and Cδ was performed on cDNA synthesized from splenic RNA from two different platypuses, one from New South Wales and the other from Tasmania. PCR products were successfully amplified from the NSW animal and these were cloned and sequenced. Twenty clones, each containing unique nucleotide sequence, were characterized and found to contain the VHδ recombined to the Dδ and Jδ gene segments (fig. 4A). Of these 20, 11 had unique V, D, and J combinations that would encode 11 different complementarity-determining regions-3 (CDR3) (fig. 4B). More than half of the CDR3 (8 out of 11) contained evidence of using both D genes (VDDJ) (fig. 4B). This is a common feature of TCRδ V domains where multiple D genes can be incorporated into the recombination due to the presence of asymmetrical RSS (Carroll et al. 1993). The region corresponding to the junctions between the V, D, and J segments, contained additional sequence that could not be accounted for by the germ-line gene segments (fig. 4B). There are two possible sources of such sequence. One are palindromic (P) nucleotides that are created during V(D)J recombination when the RAG generates hairpin structures that are resolved asymmetrically during the re-ligation process (Lewis 1994). The second are non-templated (N) nucleotides that can be added by the enzyme terminal deoxynucleotidyl transferase (TdT) during the V(D)J recombination process. An unusual feature of the platypus VHδ is the presence of a second cysteine encoded near the 3′-end of the gene, directly next to the cysteine predicted to form the intra-domain disulfide bond in Ig domains (fig. 4A). 
Additional cysteines in the CDR3 region of VH domains have been thought to provide stability to unusually long CDR3 loops, as has been described for cattle and the platypus previously (Johansson et al. 2002). The CDR3 of TCRδ using VHδ are only slightly longer than conventional TCRδ chains (ranging 10–20 residues) (Rock et al. 1994; Wang et al. 2011). Furthermore, the stabilization of CDR3 generally involves multiple pairs of cysteines, which were not present in the platypus VHδ clones (fig. 4A). Attempts to amplify TCRδ transcripts containing VHδ from splenic RNA obtained from the Tasmanian animal were unsuccessful. As a positive control, TCRδ transcripts containing conventional Vα/δ were successfully isolated, however. It is possible that Tasmanian platypuses, which have been separated from the mainland population at least 14,000 years either have a divergent VHδ or have deleted this single V gene altogether (Lambeck and Chappell 2001).

I like the thought process: "hey, our results may have been an artifact, here's what we did to prove it wasn't." But why is this paragraph so long? Seems like it could have been multiple smaller ones, perhaps with a section subheading.

Fig. 4.

This figure is probably good to visualize what their results actually looked like, but it also seems like a way to cram as much information in a visual and its caption as humanly possible… I'll let it pass. It's fine that some parts of the paper go more in depth, if they can be easily ignored, as I think is the case here. Small nitpick: This is two figures, and I would have preferred that this fact had been clearer. A small "(A)" and "(B)" in the paragraph doesn't really help the reader.

Although there is only a single VHδ in the current platypus genome assembly, there was sequence variation in the region corresponding to FR1 through FR3 of the V domains (fig. 4A and sequence data not shown but available in GenBank). Some of this variation could represent two alleles of a single VHδ gene. Indeed, the RNA used in this experiment is from a wild-caught individual from the same population that was used to generate the whole-genome sequence and was found to contain substantial heterozygosity (Warren et al. 2008). There was greater variation in the transcribed sequences, however, than could be explained simply by two alleles of a single gene (fig. 4A). Two alternative explanations are the occurrence of somatic mutation of expressed VHδ genes or allelic variation in gene copy number. Somatic mutation in TCR chains is controversial. Nonetheless, it has been invoked to explain the variation in expressed TCR chains that exceeds the apparent gene copy number in sharks, and has also been postulated to occur in salmonids (Yazawa et al. 2008; Chen et al. 2009). Therefore, it does not seem to be out of the realm of possibility that somatic mutation is occurring in platypus VHδ. Indeed, the mutations appear to be localized to the V region with no variation in the C region (fig. 4A). This may be due to its relatedness of VHδ to Ig VH genes where somatic hyper-mutation is well documented. Such somatic mutation contributes to overall affinity maturation in secondary antibody responses (Wysocki et al. 1986). The pattern of mutation seen in platypus VHδ however, is not localized to the CDR3, which would be indicative of selection for affinity maturation, but was also found in the framework regions.
Furthermore, in the avian genomes where there is also only a single VHδ, there was no evidence of somatic mutation in the V regions (Parra et al. 2012). The contribution of mutation to the platypus TCRδ repertoire, if it is occurring, remains to be determined. Alternatively, the sequence polymorphism may be due to VHδ gene copy number variation between individual TCRα/δ alleles.

Not the worst paragraph, but again, doesn't need to be a Wall of Text.

Irrespective of the number of VHδ genes in the platypus TCRα/δ locus, the results clearly support TCRδ transcripts containing VHδ recombined to Dδ and Jδ gene segments in the TCRα/δ locus (fig. 4). A VHδ gene or genes in the platypus TCRα/δ locus in the genome assembly, therefore, does not appear to be an assembly artifact. Rather it is present, functional and contributes to the expressed TCRδ chain repertoire. The possibility that some platypus TCRα/δ loci contain more than a single VHδ does not alter the principal conclusions of this study. Previously, we hypothesized the origin of TCRµ in mammals involving the recombination between an ancestral TCRα/δ locus and an IgH locus (Parra et al. 2008). The IgH locus would have contributed the V gene segments at the 5′-end of the locus, with the TCRδ contributing the D, J, and C genes at the 3′-end of the locus. The difficulty with this hypothesis was the clear stability of the genome region surrounding the TCRα/δ locus. In other words, the chromosomal region containing the TCRα/δ locus appears to have remained relatively undisrupted for at least the past 360 million years (Parra et al. 2008, 2010, 2012). The discovery of VHδ genes within the TCRα/δ loci of frog and zebra finch is consistent with insertions occurring without apparently disrupting the local syntenic region. In frogs, the IgH and TCRα/δ loci are tightly linked, which may have facilitated the translocation of VH genes into the TCRα/δ locus (Parra et al. 2010). However, close linkage is not a requirement since the translocation of VH genes appears to have occurred independently in birds and monotremes, due to the lack of similarity between the VHδ in frogs, birds, and monotremes (Parra et al. 2012). Indeed, it would appear as if the acquisition of VH genes into the TCRα/δ locus occurred independently in each lineage. The similarity between the platypus VHδ and V genes in the TCRµ locus is, so far, the clearest evolutionary association between the TCRµ and TCRδ loci in one species. From the comparison of the TCRα/δ loci in frogs, birds, and monotremes, a model for the evolution of TCRµ and other TCRδ forms emerges (fig. 5), which can be summarized as follows:

Oooh, exciting! The title promised a model, and at last we get it. Also it seems that below we get point-form stuff! I like point-form stuff. It's often really helpful to guide the reader.

1. Early in the evolution of tetrapods, or earlier, a duplication of the D–J–Cδ cluster occurred resulting in the presence of two Cδ each with its own set of Dδ and Jδ segments (fig. 5A).

2. Subsequently, a VH gene or genes was translocated from the IgH locus and inserted into the TCRα/δ locus, most likely to a location between the existing Vα/Vδ genes and the 5′-proximal D–J–Cδ cluster (fig. 5B). This resulted in the configuration like that which currently exists in the zebra finch genome (Parra et al. 2012).

3. In the amphibian lineage there was an inversion of the region containing VHδ–Dδ–Jδ–Cδ cluster and an expansion in the number of VHδ genes (fig. 5C). Currently, X. tropicalis has the greatest number of VHδ genes, where they make up the majority of V genes available in the germ-line for use in TCRδ chains (Parra et al. 2010).

4. In the galliform lineage (chicken and turkey), the VHδ–Dδ–Jδ–Cδ cluster was trans-located out of the TCRα/δ locus where it currently resides on another chromosome (fig. 5D). There are no Vα or Vδ genes at the site of the second chicken TCRδ locus and only a single Cδ gene remains in the conventional TCRα/δ locus (Parra et al. 2012).

5. Similar to galliform birds, the VHδ–Dδ–Jδ–Cδ cluster was trans-located out of the TCRα/δ locus in presumably the last common ancestor of mammals, giving rise to TCRµ (fig. 5E). Internal duplications of the VHδ–Dδ–Jδ genes gave rise to the current [(V–D–J) − (V–D–J) − C] organization necessary to encode TCR chains with double V domains (Parra et al. 2007; Wang et al. 2011). In the platypus, the second V–D–J cluster, encoding the supporting V, has lost its D segments and generates V domains with short CDR3 encoded by direct V to J recombination (Wang et al. 2011). The whole cluster appears to have undergone additional tandem duplication as it exists in multiple tandem copies in the opossum and also likely in the platypus (Parra et al. 2007, 2008; Wang et al. 2011).

6. In the therian lineage (marsupials and placentals), the VHδ was lost from the TCRα/δ locus (Parra et al. 2008). In placental mammals, the TCRµ locus was also lost (Parra et al. 2008). The marsupials retained TCRµ, however the second set of V and J segments, encoding the supporting V domain in the protein chain, were replaced with a germ-line joined V gene, in a process most likely involving germ-line V(D)J recombination and retro-transposition (fig. 5F) (Parra et al. 2007, 2008).

Yeah, this was good. These point-form paragraphs, combined with Fig. 5 (below), did more to help me understand the paper than anything else so far. I kind of wish the paper had just opened with this, and then proceeded to explain the reasoning behind it.

TCR forms such as TCRµ, which contain three extracellular domains, have evolved at least twice in vertebrates. The first was in the ancestors of the cartilaginous fish in the form of NAR-TCR (Criscitiello et al. 2006) and the second in the mammals as TCRµ (Parra et al. 2007). NAR-TCR uses an N-terminal V domain related to the V domains found in IgNAR antibodies, which are unique to cartilaginous fish (Greenberg et al. 1995; Criscitiello et al. 2006), and not closely related to antibody VH domains. Therefore, it appears that NAR-TCR and TCRµ are more likely the result of convergent evolution rather than being related by direct descent (Parra et al. 2007; Wang et al. 2011). Similarly, the model proposed in fig. 5 posits the direct transfer of VH genes from an IgH locus to the TCRα/δ locus. But it should be pointed out the VHδ found in frogs, birds, and monotremes are not closely related (fig. 3); indeed, they appear derived each from different, ancient VH clans (birds, VH clan I; frogs VH clan II; platypus VH clan III). This observation would suggest that the transfer of VHδ into the TCRα/δ loci occurred independently in the different lineages. Alternatively, the transfer of VH genes into the TCRα/δ locus may have occurred frequently and repeatedly in the past and gene replacement is the best explanation for the current content of these genes in the different tetrapod lineages.
The absence of VHδ in marsupials, the highly divergent nature of Vµ genes in this lineage, and the absence of conserved synteny with genes linked to TCRµ in the opossum, provide little insight into the origins of TCRµ and its relationship to TCRδ or the other conventional TCR (Parra et al. 2008). The similarity between VH, VHδ, and Vµ genes in the platypus genome, which are all clan III, however is striking. In particular, the close relationship between the platypus VHδ and Vµ genes lends greater support for the model presented in fig. 5E, with TCRµ having been derived from TCRδ genes.

My comments are getting repetitive. This could have been multiple paragraphs etc. etc. It's easy enough to find the joints where it should be carved, by the way: right before the sentences that start with "Similarly" and "Alternatively" would be a good start, since these words indicate that we're switching to a new idea.

Fig. 5.

Super helpful figure. Although I'm generally in favor of repeating important info, I do feel that the caption could have simply referred to the 6-point model in the text. The caption as it stands doesn't add much and looks like a Wall of Text. But that's not a big deal.

The presence of TCR chains that use antibody like V domains, such as TCRδ using VHδ, NAR-TCR or TCRµ are widely distributed in vertebrates with only the bony fish and placental mammals missing. In addition to NAR-TCR, some shark species also appear to generate TCR chains using antibody V genes. This occurs via trans-locus V(D)J recombination between IgM and IgW heavy chain V genes and TCRδ and TCRα D and J genes (Criscitiello et al. 2010). This may be possible, in part, due to the multiple clusters of Ig genes found in the cartilaginous fish. It also illustrates that there has been independent solutions to generating TCR chains with antibody V domains in different vertebrate lineages. In the tetrapods, the VH genes were trans-located into the TCR loci where they became part of the germ-line repertoire. Whereas in cartilaginous fish something equivalent may occur somatically during V(D)J recombination in developing T cells. Either mechanism suggests there has been selection for having TCR using antibody V genes over much of vertebrate evolutionary history. The current working hypothesis for such chains is that they are able to bind native antigen directly. This is consistent with a selective pressure for TCR chains that may bind or recognize antigen in ways similar to antibodies in many different lineages of vertebrates. In the case of NAR-TCR and TCRµ, the N-terminal V domain is likely to be unpaired and bind antigen as a single domain (fig. 1), as has been described for IgNAR and some IgG antibodies in camels (recently reviewed in Flajnik et al. 2011). This model of antigen binding is consistent with the evidence that the N-terminal V domains in TCRµ are somatically diverse, while the second, supporting V domains have limited diversity with the latter presumably performing a structural role rather than one of antigen recognition (Parra et al. 2007; Wang et al. 2011). There is no evidence of double V domains in TCRδ chains using VHδ in frogs, birds, or platypus (fig. 1) (Parra et al. 2010, 2012). Rather, the TCR complex containing VHδ would likely be structured similar to a conventional γδTCR with a single V domain on each chain. It is possible that such receptors also bind antigen directly, however this remains to be determined.
Not much to add except that I just had a thought that subheadings would have greatly eased this section (like they did the Methods section).

A compelling model for the evolution of the Ig and TCR loci has been one of internal duplication, divergence and deletion; the so-called birth-and-death model of evolution of immune genes promoted by Nei and colleagues (Ota and Nei 1994; Nei et al. 1997). Our results in no way contradict that the birth-and-death mode of gene evolution has played a significant role in shaping these complex loci. However, our results do support the role of horizontal transfer of gene segments between the loci that has not been previously appreciated. With this mechanism T cells may have been able to acquire the ability to recognize native, rather than processed antigen, much like B cells.

Pretty good conclusion, opening on new ideas and showing the significance of this work in the field.

Phew. I'm done. Reading this paper took me several days, although I could have been more focussed in general. But this shows how much work is required to read papers! I had to push myself to read. Many times I caught myself skimming paragraphs without understanding anything, and I had to read again. Right now I think I would benefit from reading it all a second time, but I resist the thought, because it's work.

But I think it's a good candidate for my rewriting project. It should be relatively easy to cut down the number of abbreviations, split long paragraphs, and add subheadings. More thorough rewriting will probably involve clarifying the main points and claims right at the start. At the most extreme (I'm not sure I'll go there), it could be beneficial to change the entire structure: give the detailed model first, and only then explain the background and methods. Stay tuned!

## Science Style Guide: Abbreviations

This post is part of my ongoing scientific style guideline series.

Textual compression devices (TCDs) are used more and more in science writing. TCDs come in various forms, including truncation (e.g. mi for mile), acronyms (lol for laughing out loud), syllabic acronyms (covid for coronavirus disease), contraction (int'l for international), and others. The primary benefit of using TCDs in writing is to reduce text length. This is especially useful in contexts where space is limited, such as tables and charts, as well as when a long word or phrase is repeated multiple times. Another reason to use a TCD is to create a new semantic unit that is more practical to use than the long version. For instance, the TCD laser is both more convenient and more recognizable than the original light amplification by stimulated emission of radiation.

Okay. Look at that paragraph without reading it. Does anything stand out?

I made up the phrase textual compression device and its acronym TCD. They simply mean "abbreviation," which is what I would have used if I weren't trying to illustrate the following points:

• Abbreviations can be distracting. Readers expect words, and things that look less like words — capitalized acronyms, random apostrophes in the middle of a word — will stand out, as does TCD above. Used sparingly, that can be good, to draw attention to something. But in large quantities, it's jarring.
• It's even worse when multiple different abbreviations are in close proximity, or when similar abbreviations are used (e.g. TCRµ and TCRδ, which come up all the time in a paper I'm reading).
• More importantly, abbreviations demand cognitive effort.
If the reader doesn't already know an abbreviation (for instance because you made it up), they have to spend some energy learning it. You'd probably prefer them to expend that energy understanding your paper instead.
• Worse, they might have to interrupt their reading to go back to where you defined the abbreviation, or to look it up online. (A nice opportunity to quit reading your paper altogether.)

Humans don't read like computers. You can't just "declare" an abbreviation as you would a variable in code, and assume that from now on your reader knows what it stands for. It's quite likely that readers will skim your piece, or jump directly to a specific section (e.g. results), in which case they can miss the definition. Even if they do read it, they might forget — in a typical paper, there's a lot of information to remember.

Of course, abbreviations can be useful, as the TCD paragraph laboriously explains. But the benefits are rather minor. On computer screens, which is where your scientific writing will almost always be read, space is virtually unlimited. (Figures and tables remain a good use case, as long as the abbreviations are easily readable in the caption.) Creating a new, more practical way to call a thing (e.g. laser) can be quite useful, but again, only if used sparingly, for important concepts.

Overall, the benefits of abbreviations are much greater for the writer than for the reader — which is exactly the opposite of what we want as per the Minimum Reading Friction principle. The other principle, Low-Hanging Fruit, says that the best improvements are those that require little writing skill to implement. Abbreviation minimization fits the bill. In most cases, you can improve the text just by replacing the abbreviation with:

• The spelled-out version (textual compression device instead of TCD)
• A synonym (abbreviation instead of TCD / textual compression device)
• The core noun of the abbreviated phrase (e.g. device; not the best example but you get the idea. It will usually be clear in context what you refer to, unless you're talking about many different types of devices).

Sometimes you'll need to do a bit more rephrasing, but rarely anything major just because of abbreviations. If you do, that's probably a sign that the original text was awfully painful to read.

### Recommendations

• Coin new abbreviations as rarely as possible.
• If you must coin new abbreviations, make sure they're short, pronounceable, and memorable. Don't hesitate to repeat their meaning multiple times — you're teaching your readers a new word.
• Generally prefer the use of spelled-out versions, core nouns, or synonyms.
• Avoid using multiple different abbreviations in close proximity.
• Abbreviations that are generally well-known, such as DNA, can be used as much as you want. A good way to tell is if they're included in dictionaries.
• If you can't avoid using several uncommon or new abbreviations, it can be helpful to draw attention to them, so that readers are warned that they will have a better time if they make sure they learn the new terms.
• This could take the form of a short glossary at the beginning, making it easy to look up definitions during reading.

## Proposal for a New Scientific Writing Guide

Scientific writing is in bad shape.
Realizing that, and wanting to do something about it, was the starting point for my essay on the creation of a new journal, one that would rewrite some science papers in a better style and kickstart a movement to ultimately change the writing norms. Since I published the essay last July, the Journal of Actually Well-Written Science (yeah, it needs a better name) has gone from “cool idea” to “project I’m actually trying to bring to life.” Many questions remain unanswered as to how best to proceed. But one important thing I must figure out is: What should the writing norms be changed to? Today I’m committing to publish several short posts over the course of the next month to answer exactly that. Below is some brief discussion of the two principles that will guide my thinking. They both center on the idea of minimizing effort, for the reader as well as for the writer. Writers should make some effort to ensure readers don’t have to (that’s the basic job of a writer), but I’ll focus on improvements that don’t require a lot of time and effort from writers, since those tend to be busy scientists. ## Two effort-minimization principles ### 1) Minimum Reading Friction: Demand less cognitive resources from the reader Science papers are usually technical. They deal with complex questions. They assume specialized background knowledge. They may involve math. It is expected that papers be difficult to read. But we can at least make sure the writing doesn’t get in the way. The first principle of this style guide says that you should do everything feasible to reduce the amount of effort readers will need to make when reading your paper. In other words, your job is to make their job easier. If something — e.g. finding a good example to illustrate a point1like I just did! — asks some effort from you but reduces the effort readers will need to make when reading, then do it. Conversely, don’t make your own life easier if it’s going to make the reader’s life harder. An example would be using an abbreviation to spend less time typing at the cost of increasing the cognitive demands on the reader. The larger your readership, the more important this principle is. If you write for one person (e.g. an email), then it doesn’t matter that much if it takes some work to read (although it might hurt your chances of getting a reply). But if you expect to be read by 1,000 people, then every abbreviation that saved you some inconvenience is now multiplied into an inconvenience for 1,000 people. ### 2) Low-Hanging Fruit: Focus on improvements that are easy to apply “Writing well” is a complicated art. Developing it can be the project of a lifetime. Scientists are typically too busy for that. Fortunately (for me), science writing is so bad that there’s a lot of low-hanging fruit to pick. Many improvements need little effort. For example, using fewer abbreviations results in less demanding reading without requiring advanced writing skills — you can often just replace the abbreviations with the unabridged terms. Splitting long paragraphs into smaller chunks is often as easy as adding a line break when you notice a shift to a different idea. Such improvements can also be applied almost mechanistically, which is ideal for someone who rewrites a paper without being as intimate with the topic as the author is. The second principle therefore says to focus primarily (but not exclusively) on the elements of style that require the least effort and skill relative to how much they improve the writing. 
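To make the reader-multiplier arithmetic from the first principle concrete, here is a quick back-of-the-envelope sketch in code; every number is invented purely for illustration.

```python
# Back-of-the-envelope sketch of the reader-multiplier argument.
# All numbers are made up for illustration; only the asymmetry matters.

writer_seconds_saved = 120   # say, typing an acronym instead of the full term throughout a draft
reader_seconds_lost = 5      # a small stumble per reader who has to decode or look it up
readers = 1000               # a modest readership for a published paper

total_reader_cost = reader_seconds_lost * readers

print(f"Writer saves about {writer_seconds_saved / 60:.0f} minutes")
print(f"Readers collectively lose about {total_reader_cost / 60:.0f} minutes")
```

The exact values don't matter; the point is that any small per-reader cost scales with the readership, while the writer's saving doesn't.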
## Other things to keep in mind

• Keep the good current norms. The goal of this project is not to burn scientific writing down and rebuild it from scratch. For example, it is good that scientists, by default, try to avoid ornamented writing. This helps with precision and objectivity.
• Formatting is an area that can be improved, but it's a less tractable problem because it differs a lot between publications. For instance, citation style (e.g. footnotes vs. inline) can help or hinder reading. Still, I'll eventually need to develop guidelines for formatting in JAWWS, so I will probably discuss it a few times.
• Focus on the classic paper format. There are a lot of new, exotic ways that science could be communicated, but at first we'll assume that papers — usually with traditional structures like intro-methods-results-discussion — will remain the main format in the foreseeable future.
• Personal preferences can be hard to distinguish from objective quality measures. Of course, everything I propose will reflect what I personally look for in science writing. I think and hope most guidelines will be broadly popular, but I'm always open to feedback and I'll tweak them if others make convincing arguments.

I will update this list as I publish the posts. In the meantime, here's a very informal list of topics I might cover:

• Abbreviations
• Paragraph length
• Giving examples
• Bullet points
• Citations and references
• Point of view (1st vs 3rd person) and voice (active vs passive)
• Humor
• Flourishes, ornamentation
• Paper structure
• Length vs. clarity vs. density tradeoff
• Figures
• Reading guidance (e.g. "read the methods section to understand what we did, but feel free to skip the more technical section 2.3")
• Jargon, vocabulary, word choice
• Writing in narrative form (a difficult skill!)

Last updated: November 12, 2021

## Appendix to JAWWS: An Incrementally Rewritten Paragraph

Yesterday, I published a post describing an idea to improve scientific style by rewriting papers as part of a new science journal. I originally wanted to conclude the post with a demonstration of how the rewriting could be done, but I didn't want to add too much length. Here it is as an appendix.

We start with a paragraph taken more or less at random from a biology paper titled "Shedding light on the 'dark side' of phylogenetic comparative methods", published by Cooper et al. in 2016. Then, in five steps, we'll incrementally improve it — at least according to my preferences! Let me know if it fits your own idea of good scientific writing as well.

#### 1. Original

Most models of trait evolution are based on the Brownian motion model (Cavalli-Sforza & Edwards 1967; Felsenstein 1973). The Ornstein–Uhlenbeck (OU) model can be thought of as a modification of the Brownian model with an additional parameter that measures the strength of return towards a theoretical optimum shared across a clade or subset of species (Hansen 1997; Butler & King 2004). OU models have become increasingly popular as they tend to fit the data better than Brownian motion models, and have attractive biological interpretations (Cooper et al. 2016b). For example, fit to an OU model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes (Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013). However, the OU model has several well-known caveats (see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014).
For example, it is frequently incorrectly favoured over simpler models when using likelihood ratio tests, particularly for small data sets that are commonly used in these analyses (the median number of taxa used for OU studies is 58; Cooper et al. 2016b). Additionally, very small amounts of error in data sets can result in an OU model being favoured over Brownian motion simply because OU can accommodate more variance towards the tips of the phylogeny, rather than due to any interesting biological process (Boettiger, Coop & Ralph 2012; Pennell et al. 2015). Finally, the literature describing the OU model is clear that a simple explanation of clade-wide stabilising selection is unlikely to account for data fitting an OU model (e.g. Hansen 1997; Hansen & Orzack 2005), but users of the model often state that this is the case. Unfortunately, these limitations are rarely taken into account in empirical studies. Okay, first things first: let’s banish all those horrendous inline citations to footnotes. #### 2. With footnotes Most models of trait evolution are based on the Brownian motion model.1Cavalli-Sforza & Edwards 1967; Felsenstein 1973 The Ornstein–Uhlenbeck (OU) model can be thought of as a modification of the Brownian model with an additional parameter that measures the strength of return towards a theoretical optimum shared across a clade or subset of species.2Hansen 1997; Butler & King 2004 OU models have become increasingly popular as they tend to fit the data better than Brownian motion models, and have attractive biological interpretations.3Cooper et al. 2016b For example, fit to an OU model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes.4Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013 However, the OU model has several well-known caveats.5see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014 For example, it is frequently incorrectly favoured over simpler models when using likelihood ratio tests, particularly for small data sets that are commonly used in these analyses.6the median number of taxa used for OU studies is 58; Cooper et al. 2016b Additionally, very small amounts of error in data sets can result in an OU model being favoured over Brownian motion simply because OU can accommodate more variance towards the tips of the phylogeny, rather than due to any interesting biological process.7Boettiger, Coop & Ralph 2012; Pennell et al. 2015 Finally, the literature describing the OU model is clear that a simple explanation of clade-wide stabilising selection is unlikely to account for data fitting an OU model,8e.g. Hansen 1997; Hansen & Orzack 2005 but users of the model often state that this is the case. Unfortunately, these limitations are rarely taken into account in empirical studies. Much better. Does this need to be a single paragraph? No, it doesn’t. Let’s not go overboard with cutting it up, but I think a three-fold division makes sense. #### 3. 
Multiple paragraphs Most models of trait evolution are based on the Brownian motion model.9Cavalli-Sforza & Edwards 1967; Felsenstein 1973 The Ornstein–Uhlenbeck (OU) model can be thought of as a modification of the Brownian model with an additional parameter that measures the strength of return towards a theoretical optimum shared across a clade or subset of species.10Hansen 1997; Butler & King 2004 OU models have become increasingly popular as they tend to fit the data better than Brownian motion models, and have attractive biological interpretations.11Cooper et al. 2016b For example, fit to an OU model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes.12Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013 However, the OU model has several well-known caveats.13see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014 For example, it is frequently incorrectly favoured over simpler models when using likelihood ratio tests, particularly for small data sets that are commonly used in these analyses.14the median number of taxa used for OU studies is 58; Cooper et al. 2016b Additionally, very small amounts of error in data sets can result in an OU model being favoured over Brownian motion simply because OU can accommodate more variance towards the tips of the phylogeny, rather than due to any interesting biological process.15Boettiger, Coop & Ralph 2012; Pennell et al. 2015 Finally, the literature describing the OU model is clear that a simple explanation of clade-wide stabilising selection is unlikely to account for data fitting an OU model,16e.g. Hansen 1997; Hansen & Orzack 2005 but users of the model often state that this is the case. Unfortunately, these limitations are rarely taken into account in empirical studies. We haven’t rewritten anything yet — the changes so far are really low-hanging fruit! Let’s see if we can improve the text more with some rephrasing. This is trickier, because there’s a risk I change the original meaning, but it’s not impossible. #### 4. Some rephrasing Most models of trait evolution are based on the Brownian motion model, in which traits evolve randomly and accrue variance over time.17Cavalli-Sforza & Edwards 1967; Felsenstein 1973 What if we add a parameter to measure how much the trait motion returns to a theoretical optimum for a given clade or set of species? Then we get a family of models called Ornstein-Uhlenbeck,18Hansen 1997; Butler & King 2004 first developed as a way to describe friction in the Brownian motion of a particle. These models have become increasingly popular, both because they tend to fit the data better than simple Brownian motion, and because they have attractive biological interpretations.19Cooper et al. 2016b For example, fit to an Ornstein-Uhlenbeck model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes.20Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013 However, Ornstein-Uhlenbeck models have several well-known caveats.21see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014 For example, they are frequently — and incorrectly — favoured over simpler Brownian models. 
This occurs with likelihood ratio tests, particularly for the small data sets that are commonly used in these analyses.22the median number of taxa used for Ornstein-Uhlenbeck studies is 58; Cooper et al. 2016b It also happens when there is error in the data set, even very small amounts of error, simply because Ornstein-Uhlenbeck models accommodate more variance towards the tips of the phylogeny — therefore suggesting an interesting biological process where there is none.23Boettiger, Coop & Ralph 2012; Pennell et al. 2015 Additionally, users of Ornstein-Uhlenbeck models often state that clade-wide stabilising selection accounts for data fitting the model, even though the literature describing the model warns that such a simple explanation is unlikely.24e.g. Hansen 1997; Hansen & Orzack 2005 Unfortunately, these limitations are rarely taken into account in empirical studies. What did I do here? First, I completely got rid of the “OU” acronym. Acronyms may look like they simplify the writing, but in fact they often ask more cognitive resources from the reader, who has to constantly remember that OU means Ornstein-Uhlenbeck. Then I rephrased several sentences to make them flow better, at least according to my taste. I also added a short explanation of what Brownian and Ornstein-Uhlenbeck models are. That might not be necessary, but it’s always good to make life easier for the reader. Even if you defined the terms earlier in the paper, repetition is useful to avoid asking the reader an effort to remember. And even if everyone reading your paper is expected to know what Brownian motion is, there’ll be some student somewhere thanking you for reminding them.25I considered doing this with the “evolutionary constraints, stabilising selection, niche conservatism and selective regimes” enumeration too, but these are mere examples, less critical to the main idea of the section. Adding definitions would make the sentence quite long and detract from the main flow. Also I don’t know what the definitions are and don’t feel like researching lol. This is already pretty good, and still close enough to the original. What if I try to go further? #### 5. More rephrasing Most models of trait evolution are based on the Brownian motion model.26Cavalli-Sforza & Edwards 1967; Felsenstein 1973 Brownian motion was originally used to describe the random movement of a particle through space. In the context of trait evolution, it assumes that a trait (say, beak size in some group of bird species) changes randomly, with some species evolving a larger beak, some a smaller one, and so on. Brownian motion implies that variance in beak size, across the group of species, increases over time. This is a very simple model. What if we refined it by adding a parameter? Suppose there is a theoretical optimal beak size for this group of species. The new parameter measures how much the trait tends to return to this optimum. This gives us a type of model called Ornstein-Uhlenbeck,27Hansen 1997; Butler & King 2004 first developed as a way to add friction to the Brownian motion of a particle. Ornstein-Uhlenbeck models have become increasingly popular in trait evolution, for two reasons.28Cooper et al. 2016b First, they tend to fit the data better than simple Brownian motion. Second, they have attractive biological interpretations. 
For example, fit to an Ornstein-Uhlenbeck model has been seen as evidence of a number of processes, including evolutionary constraints, stabilising selection, niche conservatism and selective regimes.29Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013 Despite this, Ornstein-Uhlenbeck models are not perfect, and have several well-known caveats.30see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014 Sometimes you really should use a simpler model! It is common, but incorrect, to favour an Ornstein-Uhlenbeck model over a Brownian model after performing likelihood ratio tests, particularly for the small data sets that are often used in these analyses.31the median number of taxa used for Ornstein-Uhlenbeck studies is 58; Cooper et al. 2016b Then there is the issue of error in data sets. Even a very small amount of error can lead researchers to pick an Ornstein-Uhlenbeck model, simply because they accommodate more variance towards the tips of the phylogeny — therefore suggesting interesting biological processes where there is none.32Boettiger, Coop & Ralph 2012; Pennell et al. 2015 Additionally, users of Ornstein-Uhlenbeck models often state that the reason their data fits the model is clade-wide stabilising selection (for instance, selection for intermediate beak sizes, rather than extreme ones, across the group of birds). Yet the literature describing the model warns that such simple explanations are unlikely.33e.g. Hansen 1997; Hansen & Orzack 2005 Unfortunately, these limitations are rarely taken into account in empirical studies. Okay, many things to notice here. First, I added an example, bird beak size. I’m not 100% sure I understand the topic well enough for my example to be particularly good, but I think it’s decent. I also added more explanation of what Brownian models are in trait evolution. Then I rephrased other sentences to make the tone less formal. As a result, this version is longer than the previous ones. It seemed justified to cut it up into more paragraphs to accommodate the extra length. It’s plausible that the authors originally tried to include too much content in too few words, perhaps to satisfy a length constraint posed by the journal. Let’s do one more round… #### 6. Rephrasing, extreme edition Suppose you want to model the evolution of beak size in some fictional family of birds. There are 20 bird species in the family, all with different average beak sizes. You want to create a model of how their beaks changed over time, so you can reimagine the beak of the family’s ancestor and understand what happened exactly. Most people who try to model the evolution of a biological trait use some sort of Brownian motion model.34Cavalli-Sforza & Edwards 1967; Felsenstein 1973 Brownian motion, originally, refers to the random movement of a particle in a liquid or gas. The mathematical analogy here is that beak size evolves randomly: it becomes very large in some species, very small in others, with various degrees of intermediate forms between the extremes. Therefore, across the 20 species, the variance in beak size increases over time. Brownian motion is a very simple model. What if we add a parameter to get a slightly more complicated one? Let’s assume there’s a theoretical optimal beak size for our family of birds — maybe because the seeds they eat have a constant average diameter. The new parameter measures how much beak size tends to return to the optimum during its evolution. 
This gives us a type of model called Ornstein-Uhlenbeck,35Hansen 1997; Butler & King 2004 first developed as a way to add friction to the Brownian motion of a particle. We can imagine the “friction” to be the resistance against deviating from the optimum. Ornstein-Uhlenbeck models have become increasingly popular, for two reasons.36Cooper et al. 2016b First, they often fit real-life data better than simple Brownian motion. Second, they are easy to interpret biologically. For example, maybe our birds don’t have as extreme beak sizes as we’d expect from a Brownian model, so it makes sense to assume there’s some force pulling the trait towards an intermediate optimum. That force might be an evolutionary constraint, stabilising selection (i.e. selection against extremes), niche conservatism (the tendency to keep ancestral traits), or selective regimes. Studies using Ornstein-Uhlenbeck models have been seen as evidence for each of these patterns.37Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013 Of course, Ornstein-Uhlenbeck aren’t perfect, and in fact have several well-known caveats.38see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014 For example, simpler models are sometimes better. It’s common for researchers to incorrectly choose Ornstein-Uhlenbeck instead of Brownian motion when using likelihood ratio tests to compare models, a problem especially present due to the small data sets that are often used in these analyses.39the median number of taxa used for Ornstein-Uhlenbeck studies is 58; Cooper et al. 2016b Then there is the issue of error in data sets (e.g. when your beak size data isn’t fully accurate). Even a very small amount of error can lead researchers to pick an Ornstein-Uhlenbeck model, simply because it’s better at accommodating variance among closely related species at the tips of a phylogenetic tree. This can suggest interesting biological processes where there are none.40Boettiger, Coop & Ralph 2012; Pennell et al. 2015 One particular mistake that users of Ornstein-Uhlenbeck models often make is to assume that their data fits the model due to clade-wise stabilising selection (e.g. selection for intermediate beak sizes, rather than extreme ones, across the family of birds). Yet the literature warns against exactly that — according to the papers describing the models, such simple explanations are unlikely.41e.g. Hansen 1997; Hansen & Orzack 2005 Unfortunately, these limitations are rarely taken into account in empirical studies. This is longer still than the previous version! At this point I’m convinced the original paragraph was artificially short. That is, it packed far more information than a text of its size normally should. This is a common problem in science writing. Whenever you write something, there’s a tradeoff between brevity, clarity, amount of information, and complexity: you can only maximize three of them. Since science papers often deal with a lot of complex information, and have word limits, clarity often gets the short end of the stick. Version 6 is a good example of sacrificing brevity to get more clarity. In this case it’s important to keep the amount of information constant, because I don’t want to change what the original authors were saying. It is possible that they were saying too many things. On the other hand, this is only one paragraph in a longer paper, so maybe it made sense to simply mention some ideas without developing them. 
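An aside that isn't part of the rewriting exercise: if, like me, you understand a model faster by poking at it, here is a tiny toy simulation of the difference between the two models. It is not from Cooper et al.; the parameter names and values are simply made up to show what the extra "pull towards an optimum" parameter does.

```python
# Toy sketch (not from Cooper et al.): Brownian motion vs Ornstein-Uhlenbeck
# trait evolution. Parameter names and values are made up for illustration.
import random

def brownian(generations=1000, start=10.0, step_sd=0.1):
    """Trait drifts randomly; across many lineages, variance keeps growing."""
    trait = start
    for _ in range(generations):
        trait += random.gauss(0, step_sd)
    return trait

def ornstein_uhlenbeck(generations=1000, start=10.0, step_sd=0.1,
                       optimum=10.0, pull=0.05):
    """Same random drift, plus the extra parameter: a pull back towards
    a theoretical optimum. That pull is all that separates OU from Brownian."""
    trait = start
    for _ in range(generations):
        trait += random.gauss(0, step_sd) + pull * (optimum - trait)
    return trait

# Twenty "species" evolving independently from the same ancestral beak size:
random.seed(0)
brownian_beaks = [brownian() for _ in range(20)]
ou_beaks = [ornstein_uhlenbeck() for _ in range(20)]

# The Brownian traits spread out much more; the OU traits hug the optimum.
print("Brownian spread:", round(max(brownian_beaks) - min(brownian_beaks), 2))
print("OU spread:      ", round(max(ou_beaks) - min(ou_beaks), 2))
```

The Brownian traits wander apart without limit, while the Ornstein-Uhlenbeck traits stay near the optimum. That same pull is what lets an Ornstein-Uhlenbeck model soak up extra variance near the tips of a phylogeny, which is why, as the caveats above say, it can win a model comparison for uninteresting reasons.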
I tried a Version 7 in which I aimed for a shorter paragraph, on the scale of the original one, but I failed. To be able to keep all the information, I would have to sacrifice the extra explanations and the bird beak example, and we'd be back to square one. This suggests that the original paragraph and my rewritten version sit at different points on the tradeoff curve. The original is brief, information-rich, and complex; my version is information-rich, complex, and clear. To get brief and clear would require taking some information out, which I can't do as a rewriter.

It is my opinion that sacrificing clarity is the worst possible world, at least in most contexts. We could then rephrase my project as attempting to emphasize clarity above all else — after all, brevity, information richness and complexity serve no purpose if they fail to communicate what they want to.

## The Journal of Actually Well-Written Science

Update: The project described below is actually happening! Head to jawws.org for more content and posts.

Once upon a time, I was a master's student in evolutionary biology, on track towards a PhD and an academic research career. Some gloomy day (it was autumn and it was Sweden), a professor suggested that we organize a journal club — a weekly gathering to discuss a scientific paper — as an optional addition to regular coursework. I immediately thought, "Reading science papers sucks, so obviously I'm not going to do more of that just for fun." But all my classmates enthusiastically signed up for it, so I caved in and joined too. And so, every week, I went to the journal club and tried to hide the fact that I had barely skimmed the assigned paper.

I am no longer on track towards a PhD and an academic research career. There were, of course, many reasons to leave the field after my master's degree, some better than others. "I hate reading science papers" doesn't sound like a very serious reason — but if I'm honest with myself, it was a true motivation to quit. And I think that generalizes far beyond my personal experience.

Science papers are boring. They're boring even when they should be interesting. They're awful at communicating their contents. They're a chore to read. They're work. In a way, that's expected — papers aren't meant to be entertainment — but over time, I've grown convinced that the pervasiveness of bad writing is a major problem in science. It requires a lot of researchers' precious time and energy. It keeps the public out, including people who disseminate knowledge, such as teachers and journalists, and those who take decisions about scientific matters, such as politicians and business leaders. It discourages wannabe scientists. In short, it makes science harder than it needs to be.

The quality of the writing is, of course, only one of countless problems with current academic publishing. Others include access (most papers are gated by journals and very expensive to get access to), peer review (a very bad system in which anonymous scientists must review your paper before it gets published, and may arbitrarily reject your work, especially if they are in competition with you, or ask you to perform more experiments), labor exploitation (scientists don't get paid for writing papers, or for reviewing them, and journals take all the financial upside), the failure to report negative results (which are less exciting than positive results), fraud, and so on. These issues are important, but they are not the focus of this essay.
The focus here is to examine and suggest a solution to a question that sounds petty and unserious, but is actually a genuine problem: the fact that science papers are incredibly tiresome. This post contains three main sections: If you’re short on time, please read the third one, which includes the sketch of a plan to improve scientific style. The other two sections provide background and justification for the plan. Additionally, I published an appendix in which I rewrite a paragraph multiple times as a demonstration. ## What makes scientific papers difficult to read? Three reasons: topic, content, and style. ### Boring topics Science today is hyperspecialized. To make a new contribution, you need to be hyperspecialized in some topic, and read hyperspecialized papers, and write hyperspecialized ones. It’s unavoidable — science is too big and complex to allow people to make sweeping general discoveries all the time. As a result, any hyperspecialized paper in a field that isn’t your own isn’t going to be super interesting to you. Consider these headlines:5These are a few titles taken at random from the journal Nature, all published on 30 June 2021. I could see myself maybe skimming the third one because I’ve been interested in covid vaccines to some superficial extent, but none of them strike me as fun reading. But if you work in superconductors, maybe the Wigner crystal one (whatever that is) sounds appealing to you. One of the reasons I quit biology is that I eventually figured out that I wasn’t sufficiently interested in the field. Surely that also contributed to my lack of eagerness to read papers. But that isn’t the whole story. There were scientific questions I was genuinely curious about, and for which I should have been enthusiastic about reading the latest research. Yet that almost never happened. Just like you’re sometimes attracted to a novel or movie because of its premise, only to be disappointed in the actual execution — there are papers that should be interesting due to their topic, but still fail due to their contents or style. ### Tedious content The primary goal of a scientific paper is to communicate science. Surprisingly, we tend to forget this, because, as I said, papers are also a measure of work output. But still, they’re supposed to contain useful information. A good science paper should answer a question and allow another scientist to understand and perhaps replicate the methods. That means that, sometimes, there is stuff that must be there even though it’s not interesting. A paper might contain a lengthy description of an experimental setup or statistical methods which, no matter what you do, will probably never be particularly compelling. Besides, it might be very technical and complicated. It’s possible to write complex material that is engaging, but that’s a harder bar to clear. And then sometimes your results just aren’t that interesting. Maybe they disprove the cool hypothesis you wanted to prove. Maybe you merely found a weak statistical correlation. Maybe “more research is needed.” It’s important to publish results even if they’re negative or unimpressive, but of course that means your paper will have a hard time generating excitement. So there’s not much we can do in general about content. All scientists try to do the most engaging and life-changing research they can, but only a few will succeed, and that’s okay. 
(And some scientists adopt a strategy of publishing wrong or misleading content in order to generate excitement, which, well, is a rather obvious bad idea.) ### Awful style Style is somehow both the least important and the most important part of writing. It’s the least important because it rarely is the reason we read anything. Except for some entertainment,6And even then! There’s some intellectual pleasure to be gleaned from looking at the form of a poem, but it rarely is the top reason we like poetry and songs. we pick what to read based on the contents, whether we expect to learn new things or be emotionally moved. Good style makes it easier to get the stuff, but it’s just a vehicle for the content. And yet style is incredibly important because without good style (or, as per the transportation analogy, without a functioning vehicle), a piece of writing will never get anywhere. You could have the most amazing topic with excellent content — if it’s badly written, if it’s a chore to read, then very few people will read it. Scientific papers suck at style. (Quick disclaimer: As we’re going to discuss below, this isn’t the fault of any individual scientist. It’s a question of culture and social norms.) Anyone who’s ever read anything knows that long, dense paragraphs aren’t enjoyed by anyone. Yet scientific papers somehow consist of nothing but long and dense paragraphs.7That’s not to say giant paragraphs are always bad; they serve a purpose, which is to make a coherent whole out of several ideas, and they can be written well. But often they aren’t written well, and sometimes they’re messy at the level of ideas. As a result, they often make reading harder, for no gain. Within the paragraphs, too many sentences are long and winding. The first person point of view is often eschewed in favor of some neutral-sounding (but not actually neutral, and very stiff) third person passive voice. The vocabulary tends to be full of jargon. The text is commonly sprinkled with an overabundance of AAAs,8Acronyms And Abbreviations, an acronym I just made up for illustrative purposes. even though they are rarely justified as a way to save space in this age where most papers are published digitally. Citations, which are of course a necessity, are inserted everywhere, impeding the flow of sentences. Here’s an example, selected at random from an old folder of PDFs from one of my master’s projects back in the day. Ironically, it discusses the fact that some methods in evolutionary biology are applied incorrectly because… it’s hard to extract the info from long, technical papers.9Here’s the original paper, which by a stroke of luck for me, is open-source and shared with a Creative Commons license. Don’t actually read it closely! This is just for illustration. Skim it and scroll down to the end to keep reading my essay. Most models of trait evolution are based on the Brownian motion model (Cavalli-Sforza & Edwards 1967; Felsenstein 1973). The Ornstein–Uhlenbeck (OU) model can be thought of as a modification of the Brownian model with an additional parameter that measures the strength of return towards a theoretical optimum shared across a clade or subset of species (Hansen 1997; Butler & King 2004). OU models have become increasingly popular as they tend to fit the data better than Brownian motion models, and have attractive biological interpretations (Cooper et al. 2016b). 
For example, fit to an OU model has been seen as evidence of evolutionary constraints, stabilising selection, niche conservatism and selective regimes (Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013). However, the OU model has several well-known caveats (see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014). For example, it is frequently incorrectly favoured over simpler models when using likelihood ratio tests, particularly for small data sets that are commonly used in these analyses (the median number of taxa used for OU studies is 58; Cooper et al. 2016b). Additionally, very small amounts of error in data sets can result in an OU model being favoured over Brownian motion simply because OU can accommodate more variance towards the tips of the phylogeny, rather than due to any interesting biological process (Boettiger, Coop & Ralph 2012; Pennell et al. 2015). Finally, the literature describing the OU model is clear that a simple explanation of clade-wide stabilising selection is unlikely to account for data fitting an OU model (e.g. Hansen 1997; Hansen & Orzack 2005), but users of the model often state that this is the case. Unfortunately, these limitations are rarely taken into account in empirical studies. This paragraph is not good writing by any stretch of the imagination. First, it’s a giant paragraph.10Remarkably, it is the sole paragraph in a subsection titled “Ornstein-Uhlenbeck (Single Stationary Peak) Models of Traits Evolution,” which means that the paragraph’s property of saying “hey, these ideas go together” isn’t even used; the title would suffice. It contains two related but distinct ideas, which are that (1) the Ornstein–Uhlenbeck model can be useful, and that (2) it has caveats. Why not split it? Speaking of which, the repetition of the “OU” acronym is jarring. It doesn’t even seem to serve a purpose other than shorten the text a little bit. It’d be better to spell “Ornstein-Uhlenbeck” out each time, and try to avoid repeating it so much. The paragraph also contains inline citations to an absurd degree. Yes, I’m sure they’re all relevant, and you do need to show your sources, but this is incredibly distracting. Did you notice the following sentence when reading or skimming? However, the OU model has several well-known caveats. It’s a key sentence to understand the structure of the paragraph, indicating a transition from idea (1) to idea (2), but it is inelegantly sandwiched between two long enumerations of references: (Wiens et al. 2010; Beaulieu et al. 2012; Christin et al. 2013; Mahler et al. 2013). However, the OU model has several well-known caveats (see Ives & Garland 2010; Boettiger, Coop & Ralph 2012; Hansen & Bartoszek 2012; Ho & Ané 2013, 2014). Any normal human will just gloss over these lines and fail to grasp the structure of the paragraph. Not ideal.11The ideal format for citations in scientific writing is actually a matter of some debate, and depends to some extent on personal preference. As a friend said: “The numbered citation style (like in Science or Nature) is really nice because it doesn’t interrupt paragraphs, especially when there are a lot of citations. But many people also like to see which paper/work you are referencing without flipping to the end of the article to the references section.” I admit I am biased towards prioritizing reading flow, but it’s true that having to match numbers to references at the end of a paper can be tedious. 
In print and PDFs, I’d be in favor of true footnotes (as opposed to endnotes), so that you don’t have to turn a page to read it. In digital formats, I’d go with collapsible footnotes (like the one you’re reading right now if you’re on my blog). Notes in the margin can also work, either in print or online. Alexey Guzey’s blog is a good example. And if mentioning a reference is useful to understand the text, the writer should simply spell it out directly in the sentence. Finally, there is quite a bit of specialized vocabulary that will make no sense to most readers, such as “niche conservatism” or “clade-wide stabilising selection.” That may be fine, depending on the intended audience; knowing what is or isn’t obvious to your audience is a difficult problem. I tend to err on the side of not including a term if a general lay audience wouldn’t understand it, but that’s debatable and dependent on the circumstances. Now, I don’t mean to pick on this example or its authors in particular. In fact, it isn’t even a particularly egregious example.12Interestingly, the more I examined the paragraph in depth, the less I thought it was bad writing. This is because, I think, becoming familiar with something makes us see it in a more favorable light. In fact this is why authors are often blind to the flaws in their own writing. But by definition a paper is written for people who aren’t familiar with it. Many papers are worse! But as we saw, it’s far from being a breeze to read. Bad, boring style is so widespread that even “good” papers aren’t much fun. Yet science can definitely be fun. Some Scott Alexander blog posts manage to make me read thousands of (rigorous!) words about psychiatric drugs, thanks to his use of microhumor. And then, of course, there’s an entire genre devoted to “translating” scientific papers into pleasant prose: popular science. Science popularizers follow different incentives than scientists: their goal is to attract clicks, so they have to write in a compelling way. They take tedious papers as input, and then produce fun stories as output. There is no fundamental reason why scientists couldn’t write directly in the style of science popularizers. I’m not saying they should copy that exactly — there are problems with popular science too, like sensationalism and inaccuracies — but scientists could at least aim at making their scientific results accessible and enjoyable to interested and educated laypeople, or to undergraduate students in their discipline. I don’t think we absolutely need a layer of people who interpret the work of scientists for the rest of us, in a way akin to the Ted Chiang story about the future of human science. Topic and content are hard to solve as a general problem. But I think we can improve style. We can create better norms. I have a crazy idea to do that, which we’ll get into at the end of the post, but first, we need to discuss the reasons behind the dismal state of current scientific style. ## Why is scientific style so bad? There are many reasons why science papers suck at style. One is that people writing them, scientists, aren’t selected for their writing ability. They have a lot on their plate already, from designing experiments to performing them to applying for funding to teaching classes. Writing plays an integral part of the process of science, but it’s only a part — compared to, say, fields like journalism or literature. Another problem is language proficiency. 
Almost all science (at least in the more technical fields) today is published in English, and since native English speakers are a small minority of the world’s population, it follows that most papers are written by people who have only partial mastery over the language. You can’t exactly expect stellar style from a French or Russian or Chinese scientist who is forced to publish their work in a language that isn’t their own. Both these reasons are totally valid! There’s no point blaming scientists for not being good writers. It’d be great if all scientists suddenly became masters of English prose, but we all know that’s not going to happen. The third and most important reason for bad style is social norms. Imagine being a science grad student, and having to write your first Real Science Paper that will be submitted to a Legit Journal. You’ve written science stuff before, for classes, for your undergrad thesis maybe, but this is the real deal. You really want it to be published. So you try to understand what exactly makes a science paper publishable. Fortunately, you’ve read tons of papers, so you have absorbed a lot of the style. You set out to write it… and reproduce the same crappy style as all the science papers before you. Or maybe you don’t, and you try to write in an original, lively manner… until your thesis supervisor reads your draft and tells you you must rewrite it all in the passive voice and adopt a more formal style and avoid the verb “to sparkle” because it is “non-scientific.”13The “sparkle” example happened to a friend of mine recently. Or maybe you have permissive supervisors, so you submit your paper written in an unconventional style… and the journal’s editors reject it. Or they shrug and send it to peer review, from whence it comes back with lots of comments by Reviewer 2 telling you your work is interesting but the paper must be completely rewritten in the proper style. Who decides what style is proper? No one, and everyone. Social norms self-perpetuate as people copy other people. For this reason, they are extremely difficult to change. As a scientist friend, Erik Hoel, told me on Twitter: There is definitely a training period where grad students are learning to write papers (basically a “literary” art like learning how to write short stories) wherein you are constantly being told that things need to be rephrased to be more scientific And of course there is. Newbie scientists have to learn the norms and conventions of their field. Not doing so would be costly for their careers. The problem isn’t that norms exist. The problem is that the current norms are bad. In developing its own culture, with its traditions and rituals and “ways we do things,” science managed to get stuck with this horrible style that everyone is somehow convinced is the only way you can write and publish science papers, forever. It wasn’t always like this. If you go back and look at science papers from the 19th century, for instance, you’ll find a rather different style, and, dare I say, a more pleasant one. I know this thanks to a workshop I went to in undergrad biology, almost a decade ago. Prof. Linda Cooper of McGill University (now retired, as I have found out when trying to contact her during the writing of this post) showed us a recent physics paper, and a paper written in 1859 by Carlo Matteucci about neurophysiology experiments in frogs, titled Note on some new experiments in electro-physiology.14At least I think this is it; my memory of the workshop is very dim. Dr. 
David Green, local frog expert, helped me find this paper, and it fits all the details I can remember. You might expect very old papers to be difficult to parse — but no! It’s crystal clear and in fact rather delightful. Here’s a screenshot of the introduction: It isn’t quite clickbait, but there’s an elegant quality to it. First, it’s told in first person. Second, there’s very little jargon. Third, we quickly get to the point; there’s no lengthy introduction that only serves as proof that you know your stuff. Fourth, there are no citations. Okay, again, we do want citations, but at least we see here that avoiding them can help the writing flow better. (No citations also means that you can’t leave something unexplained by directing the reader to some reference they would prefer not to read. Cite to give credit, but not as a way to avoid writing a clear explanation.) By contrast, the contemporary physics paper shown at the workshop was basically non-human-readable. I can’t remember what it was, which is probably a good thing for all parties involved. In the past 150 years, science has undoubtedly progressed in a thousand ways; yet in the quality of the writing, we are hardly better than the scientists of old. I want to be somewhat charitable, though, so let’s point out that some things are currently done well. For example, I think the basic IMRaD structure — introduction, methods, results, and discussion — is sound.15Although one could argue that IMRaD is perhaps too often followed without thought, like a recipe. The systematic use of abstracts, and the growing tendency to split them into multiple paragraphs, is an excellent development. There’s been a little bit of progress — but we should be embarrassed that we haven’t improved more. What happened? It’s hard to say. Some plausible hypotheses, all of which might be true: • In the absence of a clear incentive to maximize the number of readers, good style doesn’t develop. The dry and boring style that currently dominates is simply the default. • Everyone has their own idea of what good scientific writing should be, and we’ve naturally converged onto a safe middle ground that no one particularly loves, but that people don’t hate enough to change. • The current style is favored because it is seen as a mark of positive qualities in science such as objectivity, rigor, or detachment. • The style serves as an in-group signal for serious scientists to recognize other serious scientists. Put differently, it is a form of elitism. This might mean that for the people in the in-group, poor style is a feature, not a bug.16Just like unpleasant bureaucracy acts as a filter so that only the most motivated people manage to pass through the system. • Science is too globalized and anglicized. There is only one scientific culture, so if it gets stuck on poor norms, there isn’t an alternative culture that can come to the rescue by doing its own thing and stumbling upon better norms. It’s possible that these forces are too powerful for anyone to successfully change the current norms. Maybe most scientists would think I’m a fool for wanting to improve them. But it does seem to me that we should at least try. ## How can we forge better norms? First, I want to emphasize that the primary goal of scientific writing is communication among researchers, not between researchers and the public. 
Facilitating this communication, and lowering the barriers to entry into hyperspecialized fields,17For students, and for scientists in adjacent fields are the things I want to optimize for. However, I do think there are benefits to making science more accessible to non-specialists — scientists in very different fields, academics outside science, journalists, teachers, politicians, etc. — without having to rely on the layer of popular science. So while we won’t optimize for this directly, it’s worth improving it along the way if we can. With that in mind, how can we improve the social norms for style across all of scientific writing? Here’s one recipe for failure. Come up with a new style guide, and share it with grad students and professors. Publish op-eds and give conferences on your new approach. Teach writing classes. In short, try to convince individual scientists. Then watch as they just write in the old style because it’s all they know and there’s no point in making it harder for themselves to publish their papers and get recognition. Science is an insanely competitive field. Most scientists, especially grad students, postdocs and junior professors, are caught in a rat race. They will not want to reduce their chances of publication, even if they privately agree that scientific style should be improved. (Not to mention, many have been reading and writing in that style for so long that they don’t even see it as problematic anymore.) By definition, social norms are borderline impossible to change if you’re subject to them. That means that the impulse to change must come from someone who’s not subject to them. Either an extremely well established person, i.e. somebody famous enough to get away with norm-defying behavior, or an outsider — i.e. somebody who just doesn’t care. Well, I don’t have a Nobel Prize, but I gave up on science years ago and I have zero attachment to current scientific norms, so I think I qualify as an outsider. But what can an outsider do, if you can’t convince scientists to change? The answer is: do the work for them. Create something new, better, that scientists have an incentive to copy. Here’s a sketch of how that could be done. Mind you, it’s very much at the stage of “crazy idea”; I don’t know if it would work. But I think there’s at least a plausible path. ### The Plan #### 1. Found a new journal Let’s call it the Journal of Actually Well-Written Science. I’ll make an exception to my anti-abbreviation stance and call it JAWWS because I just realized it’s a pretty cool and memorable one. The journal would have precise writing guidelines. Those guidelines are the new norms we’ll try to get established. They would be dependent on personal taste to some extent, but I think it’s possible to come up with a set of guidelines that make sense. Here’s some of what I have in mind: • If it’s a choice between clarity and brevity, prioritize clarity. • Split long paragraphs into shorter ones. • Use examples. Avoid expressing abstract ideas without supporting them with concrete examples. • Whenever possible, place the example before the abstract idea to draw the reader in. • Avoid abbreviations and acronyms unless they’re already well-known (e.g. DNA). If you must use or create one, make sure it’s effortless for the reader to remember what it means. • Allow as little space as possible for references while still citing appropriately. Of course, it’s fine to write a reference in full if you want to draw attention to it. 
Also, don’t use a citation as a way to avoid explaining something. • Write in the first person, even in the introduction and discussion. Your paper is being written by you, a human being, not by the incorporeal spirit of science. • Don’t hesitate to use microhumor; it is often the difference between competent and great writing. My mention of the incorporeal spirit of science is an example of that. • Avoid systematic use of the passive voice. • Avoid ornamental writing for its own sake. Occasionally, a good metaphor can clarify a thought, but be mindful that it’s easy to overuse them. • Remember that the primary goal of your paper is to communicate methods or results. Always keep the reader in mind. And make that imaginary reader an educated nonspecialist, i.e. you whenever you read papers not directly relevant to your field. In the appendix, I show a multistep application of this to the paragraph I quoted above as an example. Again, we’re not trying to reinvent popular science writing. We will borrow techniques and ideas from it, and try to emulate it insofar as it’s good at communicating its content. But the end goal is very different — JAWWS is intended not to entertain, but to publish full, rigorous methods and results that can be cited by researchers. I want it to be a new kind of scientific journal, but a scientific journal nonetheless. #### 2. Hire great writers JAWWS will eventually accept direct submissions by researchers. But as a new journal, it will have approximately zero credibility at first. So we will start by republishing existing papers that have gone through a process of rewriting by highly competent science communicators. Finding those communicators might be the hardest part. We need people who can understand scientific papers in their current dreadful state, but who haven’t already accepted the current style as inevitable. And we need them to be excellent at their job. If we rewrite a paper into something that’s no better than the original — or, worse, if we introduce mistakes — then the whole project falls apart. On the other hand, tons of people want to be writers in general and science writers in particular, so there is some hope. #### 3. Pick papers to rewrite It’s unclear how many science papers are published each year, but a reasonable estimation is quite a lot. I saw the 2,000,000 per year figure somewhere; I have no idea if it’s accurate, but even if it’s off by an order of magnitude or two, that’s still a lot. How should JAWWS select the papers it rewrites? I’m guessing that one criterion will be copyright status. I’m no intellectual property specialist, so I have no idea if it’s legal to rewrite an entire article that’s protected by copyright. Fortunately, there are many papers that are released with licenses allowing people to adapt them, so I suggest we start with those. Another avenue is to rewrite papers by scientists who like this project and grant us permission to use their work. Then there are open questions. Should JAWWS focus on a particular field at first? Should it rewrite top papers? Neglected papers? Particularly difficult papers? Randomly selected papers? Should it focus more on literature reviews, experimental studies, meta-analyses, or methods papers? Should it accept applications by scientists who’d like our help? We can settle these questions in due time. Crucially, the authors of a JAWWS rewritten paper will be the same as the paper it is based on. 
When people cite it, they’ll give credit to the original authors, not the rewriter, whose name should be mentioned separately. This also means that the original authors should approve the rewritten paper, since it’ll be published under their names.18My friend Caroline Nguyen makes an important point: the process must involve very little extra work for scientists who are already burdened with many tasks. Their approval could therefore be optional — i.e. they can veto, but by default we assume that they approve. It might also be possible to involve a writer earlier in the research process, so that they are in close contact with a team of scientists and are able to publish a JAWWS paper at the same time as the scientists publish a traditional one. In all cases, we can expect the first participating researchers to be the ones who agree with the aims of our project and trust that JAWWS is a good initiative. #### 4. Build prestige over time If the rewritten papers are done well, then they’ll be pleasant to read. If they’re pleasant to read, more people will read them. If more people read them, then they’re likely to get cited more. If they get cited more, then they will have more impact. If JAWWS publishes a lot of high-impact papers, then JAWWS will become prestigious. There’s no point in aiming low — we should try making JAWWS as prestigious, if not more, than top journals like NatureScience, or Cell.19Is this a good goal? Wouldn’t it be better to just try to build something different? Well, I see this project kind of like Tesla for cars: Tesla isn’t trying to replace cars with something else, it’s just trying to make cars much better. So I would like JAWWS to be taken as seriously as the prestigious journals — while being an improvement over them. The danger in building a new thing is that you just create your little island of people who care about style while the rest of science is still busy competing for a paper in prestigious journals. That wouldn’t be a good outcome. Of course, that won’t happen overnight. But I don’t see why it wouldn’t be an achievable goal. And even if we don’t quite get there, the “aim for the moon, if you fail you’ll fall among the stars” principle comes into play. JAWWS can have a positive influence even if it doesn’t become a top journal. Along the way, JAWWS will become able to accept direct submissions and publish original science papers. It might also split into several specialized journals. At this point we’ll be a major publishing business! #### 5. Profit! I don’t know a lot about the business side of academic publishing, but my understanding is that there are two main models: • Paywall: researchers/institutions pay to access the contents of the journal. • Open-access: researchers/institutions pay to publish content that is then made accessible to everyone. For JAWWS, a paywall model might make sense, since the potential audience would be larger than just scientists. But it would run contrary to the ideal of making science accessible to as many people as possible. Open-access seems more promising, and it feels appropriate to ask for a publication fee as compensation for the work needed to rewrite a paper. But that might be hard to set up in the beginning when we haven’t proven ourselves yet. Maybe some sort of freemium model is conceivable, e.g. make papers accessible on a website but provide PDFs and other options to subscribers only. Another route would be to set up JAWWS as a non-profit organization. 
An example of a journal that is also a non-profit is eLife. This might help with gaining respectability within some circles, but my general feeling is that profitability is better for the long-term survival of the project. #### 6. Improve science permanently No, “profit” is not the last step in the plan. Making money is great, but we can and should think bigger. The end goal of this project is to improve science writing norms forever. If JAWWS becomes a reasonably established journal, then other publications might copy its style. That would be very good and highly encouraged. But more importantly, it would show that it’s possible to change the norms for the better. Other journals will feel more free to experiment with different formats. Scientists will gain freedom in the way they share their work. Maybe we can even get rid of other problems like the ones associated with peer review while we’re at it. One dark-side outcome I can imagine is that the norms are simply destroyed, we lose the coherence that science currently has, and then it becomes harder to find reliable information. To which I respond… that I’m not sure that it would be worse than the present situation. But anyway, it seems unlikely to happen. There will always be norms. There will always be prestigious people and publications that you can copy to make sure you write in the most prestigious style. We are a very mimetic bunch, after all. And if we succeed… then science becomes fun again. Less young researchers will drop out (like I did). Random curious people will read science directly instead of sensationalist popularizers. It’ll be easier for the public (who pays for most of science, after all) to keep informed about the latest research. Maybe it’ll even encourage more kids to get into the field. If everything goes well, we’ll get one step closer to a new golden age of humanity. Okay, maybe I’m getting ahead of myself. But then again, like I said, there’s no point in aiming low. To repeat, this is still a crazy idea. It did get less crazy after I finished writing the above plan, though. I have a feeling it might really work. But it’s very possible I’m wrong. Maybe there are some major problems I haven’t foreseen. Maybe the entire scientific establishment will hate me for trying to change their norms. Maybe it’s just too ambitious a project, and it will fail if somebody doesn’t devote themselves to it. I don’t know if I should devote myself to it. So, I’d really love for this post to be shared widely and for readers — whether professional scientists, writers, students, science communicators, and really anyone who’s interested in science somehow — to let me know what they think. Like science as a whole, this should be a collaborative effort.
{}
# zbMATH — the first resource for mathematics Percolation and minimal spanning forests in infinite graphs. (English) Zbl 0827.60079 Summary: The structure of a spanning forest that generalizes the minimal spanning tree is considered for infinite graphs with a value $$f(b)$$ attached to each bond $$b$$. Of particular interest are stationary random graphs; examples include a lattice with i.i.d. uniform values $$f(b)$$ and the Voronoi or complete graph on the sites of a Poisson process, with $$f(b)$$ the length of $$b$$. The corresponding percolation models are Bernoulli bond percolation and the “lily pad” model of continuum percolation, respectively. It is shown that under a mild “simultaneous uniqueness” hypothesis, with at most one exception, each tree in the forest has one topological end, that is, has no doubly infinite paths. If there is a tree in the forest, necessarily unique, with two topological ends, it must contain all sites of an infinite cluster at the critical point in the corresponding percolation model. Trees with zero, or three or more, topological ends are not possible. Applications to invasion percolation are given. If all trees are one-ended, there is a unique optimal (locally minimax for $$f)$$ path to infinity from each site. ##### MSC: 60K35 Interacting random processes; statistical mechanics type models; percolation theory 82B43 Percolation 60D05 Geometric probability and stochastic geometry 05C80 Random graphs (graph-theoretic aspects) Full Text:
{}
#### TeX

$TeX$ notation allows formatted, typeset output (such as mathematical formulas) to be generated from plain ASCII input.
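For instance (a made-up snippet, not part of the original entry), the plain-ASCII input below is all that is needed to describe a typeset formula:

```latex
% ASCII input...
$\int_0^1 x^2 \, dx = \frac{1}{3}$
% ...which TeX typesets as the integral of x^2 from 0 to 1 equalling one third.
```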
{}
# Corona loss does not depend on

This question was previously asked in TANGEDCO AE EE 2018 Official Paper

1. Atmosphere
2. Conductor size
3. Line voltage
4. Height of the conductor

Option 4 : Height of the conductor

## Detailed Solution

Corona loss: The power dissipated in the system due to corona discharges is called corona loss. Accurate estimation of corona loss is difficult because of its variable nature. It has been found that the corona loss under fair weather conditions is less than under foul weather conditions.

The corona loss under fair weather conditions is given by Peek’s formula:

$${P_C} = \frac{{244}}{\delta }\left( {f + 25} \right){\left( {{E_n} - {E_0}} \right)^2}\sqrt {\frac{r}{D}} \times {10^{ - 5}}\frac{{kW}}{{km}}/phase$$

Where
Pc – corona power loss
f – frequency of supply in Hz
δ – air density factor
En – r.m.s. phase voltage in kV
E0 – disruptive critical voltage per phase in kV
r – radius of the conductor in meters
D – spacing between conductors in meters

Corona is affected by the following factors:
• The physical state of the atmosphere: corona is formed by ionization of the air surrounding the conductors, and the number of ions is higher than normal in stormy weather.
• An irregular, rough conductor surface causes more corona loss because unevenness of the surface decreases the breakdown voltage.
• A larger distance between conductors reduces the electrostatic stress at the conductor surfaces, which helps to avoid the formation of corona.
• The operating voltage
• The air density factor

Corona loss is independent of the height of the conductor.
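As a rough illustration of how Peek’s formula is applied, here is a small Python sketch. The function name and all numerical inputs are my own assumptions for illustration, not values from the question.

```python
import math

def peek_corona_loss(f_hz, delta, En_kv, E0_kv, r_m, D_m):
    """Peek's fair-weather corona loss, in kW per km per phase.

    Note that the height of the conductor appears nowhere among the
    inputs, which is the point of the answer above.
    """
    return (244.0 / delta) * (f_hz + 25.0) * (En_kv - E0_kv) ** 2 \
        * math.sqrt(r_m / D_m) * 1e-5

# Assumed example: 50 Hz system, standard air density, 110 kV r.m.s. phase
# voltage, 80 kV disruptive critical voltage, 1 cm radius, 2 m spacing.
loss = peek_corona_loss(f_hz=50, delta=1.0, En_kv=110, E0_kv=80, r_m=0.01, D_m=2.0)
print(f"corona loss ≈ {loss:.2f} kW/km per phase")
```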
{}
• ### Announcements #### Archived This topic is now archived and is closed to further replies. # What IS a DWORD? ## Recommended Posts Yeah, I''m sure it''s a newb question, but what exactly IS a DWORD? I know it''s some sort of crazy MS typedef, but other than that, I dunno. I looked it up on MSDN but only got it''s general usage, not exactly what it is and what makes it up. ##### Share on other sites 32 bit unsigned integer ##### Share on other sites bit = ...1 bit... nybble = 4 bits = 1/2 byte byte = 8 bits = 2 nybbles WORD = 2 bytes = 4 nybbles = 16 bits DWORD = 2 WORDs = 4 bytes = 8 nybbles = 32 bits QWORD = 2 DWORDs = 4 WORDS = ..... = 64 bits There might be more; if there are, I don't know of them. EDIT: DWORD stands for Double Word, QWORD for Quad (prefix meaning 4). I suppose the next would be Oct (prefix meaning 8). "I've learned something today: It doesn't matter if you're white, or if you're black...the only color that REALLY matters is green" -Peter Griffin Edited by - matrix2113 on February 17, 2002 7:04:27 PM ##### Share on other sites Thankya! But here''s question 2. How can DWORDs take statements like: DWORD dwBlah;dwBlah = STATEMENT1 | STATEMENT2 | STATEMENT3; What''s with the pipes and the multiple option thing? How exactly does the compiler interpret that? ##### Share on other sites Its a binary OR operation. Eg - 0 OR 0 = 0, 0 OR 1 = 1, 1 OR 0 = 1 and 1 OR 1 = 1. In cases like your example its usually used to implement flags. You can then check for whether a flag is set like this: if ( dwBlah & aFlag != 0 ) doSomething(); ##### Share on other sites Ok, now, what are the flags? They obviously have to hold some value (0 or 1?). How exactly is that assigned? ##### Share on other sites Each flag has a different bit set. A DWORD is 8 bits long therefore there are a maximum of 8 different flags that can be used. eg. Flag1 = 00000001 Flag2 = 00000010 Flag3 = 00000100 Flag4 = 00001000 and so on therefore if dwSomething = Flag1 | Flag2 | Flag3 then dwSomething = 00000001 | 00000010 | 00000100 = 00000111. ##### Share on other sites Sorry, my brain wasn''t working today - A DWORD is 32 bits long therefore there is a maximum of 32 possible flags. ##### Share on other sites A flag is a value that a function looks at to determine what it should be doing. Arild Fines showed how a function checks to see if a flag is set in the function. Flags are defined by whoever writes the function, usually they are assigned powers of two because when you OR them together powers of two wont affect each other. For example, if we defined some flags with decimal values EX_1 = 1; //00000001 in binary EX_2 = 2; //00000010 in binary EX_3 = 4; //00000100 in binary EX_4 = 8; //00001000 in binary When we OR them together their values will be superimposed. EX_1 | EX_2 will be 3, or 00000011 in binary EX_8 | EX_4 | EX_1 will be 13, or 00001101 in binary Extracting a specific flag from a dword uses the AND (&) operator. For AND, both bits must be 1 to return 1 or it will return 0. Suppose you have value 00001101 and you want to see if the flag EX_2 is in it, using 00001101 AND EX_2 would return 0 (flag EX_2 is not included) because 00001101 & 00000010 ---------- 00000000 00001101 AND EX_8 will return EX_8 (the flag is included) because 00001101 & 00001000 ---------- 00001000 by the way, I''ve been using single bytes to explain the use of flags and AND and OR so that you could see the individual 0''s and 1''s without me having to type out 32 of them out to represent a double word. 
I'm sure you already figured that out, though.

##### Share on other sites

DWORD is a typedef from the Windows API. Most such typedefs are anachronisms. For example, a WORD is 16 bits and a DWORD is 32 bits (as has been said), but those names are currently misnomers. You see, a "word" refers to the addressing size of the target processor. When the Windows API was first devised, a word was 16 bits, hence the 16-bit size for the WORD type. But most processors these days address 32 bits at a time, so they have 32-bit words. For backwards compatibility, WORD stayed the same, even though—from the name—it should be 32 bits. Other types like LPARAM and WPARAM have become the same size (32 bits), even though WPARAM used to be 16 bits, as is indicated by the W prefix (which stands for WORD). So it's confusing, especially since Intel parlance has followed the same trend of backwards compatibility with naming. But don't forget what a real word is—that it's dependent on the processor.

##### Share on other sites

Thank you everyone, I get it now. Sorry about the multitude of questions though. I just like to understand something inside out and it drives me nuts when I don't.
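To make the flag mechanics discussed above concrete, here is a minimal sketch. It is written in Python for brevity (the bitwise operators work the same way in C/C++), and the flag names are invented purely for illustration.

```python
# Each flag gets its own bit (a power of two), so OR-ing them never overlaps.
FLAG_FULLSCREEN = 0x01  # 00000001
FLAG_VSYNC      = 0x02  # 00000010
FLAG_RESIZABLE  = 0x04  # 00000100
FLAG_BORDERLESS = 0x08  # 00001000

# Combine options with bitwise OR, as in dwBlah = A | B | C.
options = FLAG_FULLSCREEN | FLAG_RESIZABLE   # 00000101 == 5

# Test a single flag with bitwise AND.
if options & FLAG_RESIZABLE:
    print("resizable window requested")
if not (options & FLAG_VSYNC):
    print("vsync not requested")
```

Because every flag occupies its own bit, any combination OR-ed into a single DWORD can still be picked apart flag by flag with AND.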
{}
1. ## help

A car is traveling at 27 mph. If its tires have a diameter of 28 in, how fast are the car's tires turning? Express the answer in revolutions per minute.

2. Originally Posted by donnyB

$\displaystyle v=r\omega$

$\displaystyle \omega =2\pi f$

$\displaystyle f=$?
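Not part of the original thread, but here is one worked application of those hints; the unit conversions are the only extra ingredient:

$\displaystyle v = 27\ \text{mi/hr} = \frac{27 \times 5280 \times 12}{60}\ \text{in/min} = 28512\ \text{in/min}, \qquad r = \frac{28}{2} = 14\ \text{in}$

$\displaystyle \omega = \frac{v}{r} = \frac{28512}{14} \approx 2036.6\ \text{rad/min}, \qquad f = \frac{\omega}{2\pi} \approx 324\ \text{rev/min}$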
{}
# Homework Help: F'(x) of a fraction 1. Feb 7, 2012 ### cdoss 1. The problem statement, all variables and given/known data Use the definition of a derivitive to find f'(x). f(x)=1/(1-4X) 2. Relevant equations Lim as h approaches 0 [f(x+h)-f(x)]/h 3. The attempt at a solution I know that the answer is supposed to be -1/(1-4X)2 but I keep getting 4/(1-4X)2. This is what I have done so far (I hope this isn't too hard to understand): f'(x)= (1/(1-4(x+h))-(1/(1-4x)))/h = ((1-4x-(1-4(x+h)))/((1-4(x+h))(1-4x)))/h = ((1-4x-1+4x+4h)/((1-4(x+h))(1-4x)))/h *cancel the numerator values* =(4h)/((1-4(x+h))(1-4x))/h *divide by 1/h and let h=0* =4/(1-4(x-0))(1-4x) =4/(1-4X)2 What am I doing wrong? 2. Feb 7, 2012 ### Staff: Mentor Nothing - that's the right answer. BTW, you don't take "f'(x) of a fraction" as you have in the title. You can take the derivative with respect to x of a fraction (in symbols, d/dx(f(x)) ), but f'(x) already represents the derivative of some function f. 3. Feb 7, 2012 ### HallsofIvy The derivative of 1/u, with respect to u, is -1/u^2. But that "4" in the numerator is from the chain rule. If u is a function of x, the derivative of 1/u, with respect to x is (-1/u^2) du/dx. Here, u= 1- 4x so du/dx= -4. The derivative of 1/(1- 4x), with respect to x, is -1/(1- 4x)^2(-4)= 4/(1- 4x)^2 4. Feb 7, 2012 ### cdoss oh, ok so it's right. I did that problem six times because I thought I was doing it wrong haha thank you! and also thank you for correcting me on the terms! :) 5. Feb 7, 2012 ### cdoss oh yeah the chain rule!! i definitely need to remember that! thanks!
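The thread settles that 4/(1 − 4x)² is correct. For anyone who wants an independent check, here is a quick symbolic verification; this is my own sketch assuming SymPy is available, not part of the original thread:

```python
import sympy as sp

x, h = sp.symbols('x h')
f = 1 / (1 - 4*x)

# Derivative straight from the limit definition...
from_definition = sp.limit((f.subs(x, x + h) - f) / h, h, 0)
# ...and from sympy's differentiator, for comparison.
from_diff = sp.diff(f, x)

print(sp.simplify(from_definition))              # 4/(1 - 4x)^2, up to rearrangement
print(sp.simplify(from_definition - from_diff))  # 0, i.e. both agree
```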
{}
AP Calculus BC Challenge Problems
Mr. Shay's Class

You and your partner are to solve a minimum of 17 of these "fun" challenge problems: at least 10 easy, at least 5 medium, at least 2 hard. As usual, students wishing to receive exceptional grades must go above and beyond the minimum possible. You are to work only with your partner; do not consult other students or teams, and do not consult WolframAlpha or similar websites. You may ask Mr. Shay for hints beyond the ones provided. Even if you do not completely solve a problem, please provide any work toward the solution. Include full solutions to all completed problems, and write up your solutions as neatly as possible. Credit will be given to both complete and incomplete solutions.

Due Date: Wednesday, May 30th, 2012

1 Easy Problems

1. Let $f(x)$ be a one-to-one continuous function such that $f(1)=4$ and $f(6)=2$. Assume $\int_{1}^{6} f(x)\,dx = 15$. Calculate $\int_{2}^{4} f^{-1}(x)\,dx$.

2. Find $\lim_{x \to 0} \dfrac{-1+\cos x}{3x^{2}+4x^{3}}$.

3. Find $\lim_{x \to \infty} \left(\sqrt[3]{x^{3}+x^{2}} - \sqrt[3]{x^{3}-x^{2}}\right)$.

4. Find the slope of the tangent line at the point of inflection of $y = x^{3} - 9x^{2} - 15x + 39$.

5. A line through the origin is tangent to $y = x^{3} + 3x + 1$ at the point $(a, b)$. What is $a$?

6. An object moves along the $x$-axis with its position at any given time $t \ge 0$ given by $x(t) = 5t^{4} - t^{5}$. During what time interval is the object slowing down?

7. What is the area of the largest trapezoid that can be inscribed in a semicircle with radius 4 if one of the trapezoid's bases is on the diameter?

8. The highway department of North Eulerina plans to construct a new road between towns Alpha and Beta. Town Alpha lies on a long abandoned road running east-west. Town Beta lies 3 miles north and 5 miles east of Alpha. Instead of building a road directly between Alpha and Beta, the department proposes renovating part of the abandoned road (from Alpha to some point $P$) and then building a new road from $P$ to Beta. If the cost of restoring each mile of old road is \$200,000 and the cost per mile of a new road is \$400,000, how much of the old road should be restored in order to minimize cost?

9. Evaluate $\lim_{n \to \infty} \sum_{k=1}^{n} \frac{3}{n}\sqrt{1 + \frac{3k}{n}}$.

10. Evaluate $\int_{0}^{2} \frac{d}{dx}\left(\frac{\ln(x+1)}{x^{3}}\right)dx$.

11. Compute $\int_{0}^{1} \tan^{-1}(x)\,dx$.

12. Consider the function $f(x) = x^{3}$ and a point $(a, a^{3})$ in the first quadrant. Let $A$ be the area bounded by the $y$-axis, the function $f(x)$, and $y = a^{3}$, and let $B$ be the area bounded by the $x$-axis, the function $f(x)$, and $y = a$. Find the ratio of the areas $A : B$.

13. What is the volume when the region in the first quadrant bounded by $y = x^{3}$ and $x = y^{3}$ is rotated around the $y$-axis?

14. Evaluate $\int \frac{1}{e^{2x} + 3e^{x} + 2}\,dx$.

15. What is $e^{1} \cdot e^{-1/2} \cdot e^{1/3} \cdot e^{-1/4} \cdots$?

2 Medium Problems

1. A continuous real function $f$ satisfies the identity $f(2x) = 3f(x)$ for all $x$. If $\int_{0}^{1} f(x)\,dx = 1$, what is $\int_{1}^{2} f(x)\,dx$?

2. A hallway of width 6 feet meets a hallway of width $6\sqrt{5}$ feet at right angles. Find the length of the longest pipe that can be carried horizontally around this corner.

3. What is the area of the largest rectangle that can be drawn inside of a 3-4-5 right triangle with one of the rectangle's sides along the hypotenuse of the triangle?

4. Evaluate $\int_{0}^{1} \ln\left(\sqrt{1-x} + \sqrt{1+x}\right)dx$.

5. Find all $a > 1$ such that $\int_{1}^{a} x \ln x \,dx = \frac{1}{4}$.

6. Find the area restricted by the functions $y = \sin x$, $x = \sin y$, and $y = x + 2\pi$.

7. What is the surface area of the portion of a unit sphere centered at the origin between the planes $z = -\frac{1}{2}$ and $z = \frac{1}{2}$?

8. Discuss the convergence of $\int_{-\infty}^{0} e^{e^{x}+x}\,dx$ and, if possible, evaluate.

9. What is the fifth derivative of $e^{2x}\sin(x)$ at $x = 0$?

10. Find the 2012th nonzero term of the power series for $f(x) = \frac{x}{(x^{2}-1)^{2}}$ expanding about $x = 0$.

3 Hard Problems

1. Evaluate $\lim_{x \to 0} \frac{x \tan\left(\sin(x^{2}) - \sin(x)\right)}{\sin(x^{2})}$.

2. If $x$, $y$, $z$ are real numbers such that $(x + 2y + z^{2})^{2} = 2x - y - z^{2}$, what is the maximum value of $y$?

3. Evaluate $\int_{0}^{\infty} \frac{\ln x}{x^{2} + x + 1}\,dx$.

4. Divide a given line segment into two other line segments. Then, cut each of these new line segments into two more line segments. What is the probability that the resulting four line segments are the sides of a quadrilateral?

5. If $a_{n}$, $b_{n}$ are sequences defined as such: $a_{n+1} = \frac{1 + a_{n} + a_{n} b_{n}}{b_{n}}$, $b_{n+1} = \frac{1 + b_{n} + a_{n} b_{n}}{a_{n}}$, with $a_{1} = 1$, $b_{1} = 2$, find $\lim_{n \to \infty} a_{n}$.
{}
# Homework Help: Normal operators 1. Jul 30, 2009 ### evilpostingmong 1. The problem statement, all variables and given/known data Prove that a normal operator on a complex inner-product space is self-adjoint if and only if all its eigenvalues are real. 2. Relevant equations 3. The attempt at a solution Let c be an eigenvalue. Now since T=T*, we have <TT*v, v>=<v, TT*v> if and only if TT*v=cv on both sides and not -cv (-c is the complex conjugate of c made possible by c being a complex number) on one side and cv on the other side. Therefore c must be real. Last edited: Jul 30, 2009 2. Jul 30, 2009 ### evilpostingmong For the other direction, suppose Tv=av with a being real. Now <Tv, v>=<av, v>=a<v, v>=<v, av>. Now since Tv=av, <v, av>=<v, Tv>. This shows that <Tv, v>=<v, Tv> or Tv=T*v=av. Now <T*Tv, v>=<T*av, v>=<T*va, v>=<a2v, v>=<v, a2v> =<v, T*Tv>. Therefore, T*T is self adjoint. 3. Jul 30, 2009 ### Dick You are going to have to do a lot better than that before I'll even start reading it. Who cares whether T*T is self-adjoint? That has nothing to do with the problem. Try and make a linear progression between what you are assuming and what you are trying to prove. Just this once, for my sake, ok? Start with Tv=cv. Assume T=T*. Show me c is real. And nothing else. Write the complex conjugate of c as (c*), not -c. And do it using a minimal number of digressions. Please, please? 4. Jul 30, 2009 ### evilpostingmong Ok Tv=cv. T=T*. Therefore <Tv, v>=<v, T*v>. Now <cv, v>=<v, T*v> if and only if T*v=cv. Therefore c must be real. If c were to be complex, then T*v=c*v and c*v=/=cv. Last edited: Jul 30, 2009 5. Jul 30, 2009 ### Dick I appreciate the attempt at brevity, thanks. And I do appreciate it, really. But you are still somehow missing the point. Try using <Tv,v>=<v,(T*)v>=<v,Tv> and that <v,cv>=c<v,v> and <cv,v>=(c*)<v,v>. You know that <ax,x>=(a*)<x,x>, right? If c=(c*) what does that mean about c? 6. Jul 30, 2009 ### evilpostingmong c* is not a complex conjugate. c* and c are the same. I got mixed up thinking that the little star above the scalar was used to show that it is a complex conjugate. 7. Jul 30, 2009 ### Dick You have a really odd way of expressing yourself. I know what you mean by that, but probably few other people in the world would. Here's what you meant to say: "if c=(c*) then the imaginary part of c is zero, so c is real". The * does mean complex conjugate. What do YOU think complex conjugate means? Last edited: Jul 30, 2009 8. Jul 30, 2009 ### evilpostingmong :rofl: I know. For the other direction, suppose Tv=av with a being real. Now <Tv, v>=a<v, v>=allvll2. And <v, T*v>=<v, v>a*=a*llvll2 Since a*=a, allvll2=a*llvll2 Thus a*llvll2=a*<v, v> =<a*v, v>=<av, v>=<Tv, v>. Last edited: Jul 30, 2009 9. Jul 30, 2009 ### Dick It's not that funny. What is that supposed to prove? I'm regretting, as I have before, even responding to this thread. 10. Jul 30, 2009 ### evilpostingmong its supposed to prove the other direction that if all eigenvalues are real, then T=T*. For the other direction, suppose Tv=av with a being real. Now <Tv, v>=<av, v>=a<v, v>. And <v, T*v>=<v, a*v>=a*<v, v>. Since a*=a, a<v, v>=a*<v, v>. Thus <v, T*v>=<v, a*v>=a*<v, v> =<a*v, v>=<av, v>=<Tv, v>. is this right? Last edited: Jul 31, 2009 11. Jul 31, 2009 ### evilpostingmong hey could someone tell me whether or not im right or wrong> 12. Jul 31, 2009 ### Dick It's wrong (in that it abuses '*' - for a general complex number c, <cv,v>=(c*)<v,v>). 
Since you are assuming a=(a*) you can, I suppose, move the '*'s around. The big problem is that it's just factoring 'a' in and out of a product over and over again. How is that supposed to prove T=T*? Your 'thus' concludes <v,T*v>=<Tv,v>. That doesn't prove T=T*. How can you not see that? What might a proof that T=T* look like??? 13. Jul 31, 2009 ### evilpostingmong ugh god your right, meant <v, Tv>=<Tv, v>. ok ill fix this. cant believe I missed that. :yuck: 14. Jul 31, 2009 ### evilpostingmong Ok assume the eigenvalue a is real. And let Tv=av for v is an eigenvector of T. Now we have <Tv, v>=<v, T*v>. Or <av, v>=<v, a*v>. Since a is real, a=a*. Thus <v, a*v>=<v, av>=<v, Tv>. And since <Tv, v>=<v, T*v>, <Tv, v>=<v, Tv>. no factoring a or a* . Last edited: Jul 31, 2009 15. Jul 31, 2009 ### Dick It's still vacuous. Wouldn't a proof that T=T* involve showing T(x)=T*(x) for any vector x? I can't even see where you clearly stated that T(v)=T*(v) for your eigenvector v. And even if you had, how could T(v)=T*(v) for a single vector v tell you T=T* for all vectors? 16. Jul 31, 2009 ### evilpostingmong well ok Let v be any vector. Tv=av and T*v=a*v. Thus Tv-T*v=av-a*v Or (T-T*)v=av-a*v. Since a=a*, (T-T*)v=av-av=0. Therefore (T-T*)v=0 and since (T-T*)v=0 , T must=T*. 17. Jul 31, 2009 ### Dick That's better. So Tv=(T*)v. The problem now is that not ALL vectors are eigenvectors. 18. Jul 31, 2009 ### evilpostingmong Ok let x be any vector in V. Now <Tx, x>=<x, T*x>. We represent T with a a matrix. Since T "has" eigenvectors, T must be diagonizable. Therefore (after reducing the matrix to a diagonal one, note that TT represents T*) TT=T since all eigenvalues along T's diagonal are real and all real diagonal matrices equal their transposes. 19. Jul 31, 2009 ### Dick Ok. Except T isn't diagonalizable because it "has" eigenvectors. T is diagonalizable because it's normal. T* isn't the transpose of T, it's the complex conjugate of the transpose. So for any diagonal element of T, T_kk, T_kk=(T_kk)*. That tells you T_kk is real. 20. Jul 31, 2009 ### evilpostingmong oh my bad...we're dealing with a complex space. So T's diagonal matrix has scalars of the form a+bi along its diagonal and T*'s diagonal matrix has scalars of the form a-bi (since T* is the complex conjugate of the transpose) along its diagonal. But in our case, b=0. So T's diagonal matrix=T*'s. Last edited: Jul 31, 2009 Share this great discussion with others via Reddit, Google+, Twitter, or Facebook
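Pulling the thread together, here is a compact version of both directions. This is my own summary, not a post from the thread, written with the convention that the inner product is linear in its first slot:

```latex
% Forward direction: suppose T = T^* and Tv = cv with v \neq 0. Then
c\langle v,v\rangle = \langle Tv,v\rangle = \langle v,T^{*}v\rangle
                    = \langle v,Tv\rangle = \overline{c}\,\langle v,v\rangle,
% and since \langle v,v\rangle \neq 0, this forces c = \overline{c}, i.e. c is real.

% Converse: suppose T is normal with all eigenvalues real. By the complex
% spectral theorem, T has an orthonormal basis of eigenvectors e_1,\dots,e_n
% with T e_k = c_k e_k; normality gives T^{*} e_k = \overline{c_k}\, e_k = c_k e_k = T e_k,
% so T and T^{*} agree on a basis, hence T = T^{*}.
```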
{}
# Tag Info 1 This is more an answer to why the uniqueness of the sums effects the size so much that the case for $52485332$ becomes trivial (its way to long for a comment). When all sums must be unique, then they must result in different integers. Because there are $500^{1000}$ possible sums, there are also $500^{1000}$ different integer results for that. the lowest case ... 2 So, the only way must be an Exhaustive Search Algorithm As I mentioned in my comment, there are practical methods for nonhuge values of $m$ (and $m=52485332$ is not huge). Here is the outline of one such method (which assumes all the sets consist of nonnegative integers): We have an array $A_{n, m+1}$; each element of the array $A_{a, b}$ will note how we ... 1 A mono-alphabetic substitution cipher simply replaces each symbol with another symbol, in a 1:1 fashion. So indeed you have 26 symbols or letters in the ciphertext. Now say you write down the ABC: ABCDEFGHIJKLMNOPQRSTUVWXYZ Each of these letters will need to be substituted by another one to go from plaintext to ciphertext. Lets use the same same symbols ... 1 "Prime factorization" is not worth interest, for primes are their own factorization. Factorization into primes is not used for key generation. I conclude the question asks: When generating primes during generation of public/private key pairs for crytosystems based on hardness of factorization (RSA, Rabin, Pailler…), is there a limit on the size of ... 1 If it does matter, what is the current state of the art elliptic curve and how does it compare with popular elliptic curves such as Curve25519 or secp256k1? Well, if you have an elliptic curve with a large subgroup of size $q$ (which is prime), then we know how to compute a discrete log within that subgroup in $O(\sqrt{q})$ time, and this applies to all ... 2 In short, AES-256 beats AES-128. Use AES-256 which is the golden standard; Cryptanalysis The attacks on AES-256 doesn't make it insecure practically, even after 20 years the best attack has the complexity of $2^{254.3}$ for AES-256 and $2^{126.0}$ for AES-128 2015 Improving the Biclique Cryptanalysis of AES, Biaoshuai Tao and Hongjun Wu. The related key ... 4 The problem is that you are trying to put a generic security comparison on AES-128 and AES-256. I would argue that if you're going to dig deeper - as you currently do - then this idea that there is a "generic security level" as I would coin it, is flawed. As you said, there is a related key attack that is only valid for AES-256, mainly related to ... 0 The larger the key size the slower the operational performance. Is it true? Yes, but it depends on the algorithm how much difference it makes. And if the algorithm is slow in the first place then the speed difference may make more of an impact. For instance, AES-128 has 10 rounds while AES-256 has 14 rounds. So choosing AES-256 is generally only slightly ... Top 50 recent answers are included
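One of the answers above describes a mono-alphabetic substitution cipher as a 1:1 letter mapping. Here is a minimal Python sketch of that idea; the key and the message are invented purely for illustration:

```python
import random
import string

alphabet = string.ascii_uppercase
# Build a random 1:1 substitution: each plaintext letter maps to exactly one
# ciphertext letter, which serves as the key.
shuffled = random.sample(alphabet, len(alphabet))
encrypt = str.maketrans(alphabet, ''.join(shuffled))
decrypt = str.maketrans(''.join(shuffled), alphabet)

ciphertext = "ATTACK AT DAWN".translate(encrypt)
print(ciphertext)
print(ciphertext.translate(decrypt))  # recovers "ATTACK AT DAWN"
```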
{}
# Visualizing Operators on $\mathbb{C}^n$ I am trying to get some better intuition about operators on complex inner product spaces. When we identify $\mathbb{C}^n$ with $\mathbb{R}^{2n}$, is there a nice geometric interpretation for the resulting operators on $\mathbb{R}^{2n}$? Ideally, this characterization would give a geometric construction for the adjoint generalizing conjugation as reflection in the real line. Also, "seeing" the polar decomposition would be nice. - Unitary operators on $\mathbb{C}^n$ are realized as operators which are both orthogonal and symplectic on the correspoding $\mathbb{R}^{2n}$ (with a symplectic structure compatible to the original complex structure) . This is due to the 2-out-of-3 property of the unitary group. Consequently, the polar decomposition expressed on $\mathbb{R}^{2n}$ has the structure: $A = U P$ where $U$ is an orthogonal and symplectic matrix on $\mathbb{R}^{2n}$ and $P$ is a positive $2n \times 2n$ matrix with multiplicity 2 eigenvalues. This can be seen easily through the realization of the polar decomposition through the singular value decomposition as given for example in the following Wikipedia page .
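A small numerical check of these claims (my own NumPy sketch, not from the original answer): identify $\mathbb{C}^n$ with $\mathbb{R}^{2n}$ via $z \mapsto (\operatorname{Re} z, \operatorname{Im} z)$ and verify that a random unitary matrix becomes a real matrix that is both orthogonal and symplectic with respect to the standard complex structure $J$.

```python
import numpy as np

def realify(M):
    """Represent a complex n x n matrix as a real 2n x 2n matrix
    acting on (Re z, Im z)."""
    A, B = M.real, M.imag
    return np.block([[A, -B], [B, A]])

n = 3
rng = np.random.default_rng(0)
# Random unitary via QR of a complex Gaussian matrix.
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(Z)

R = realify(U)
# Standard complex structure: realify(i * I).
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

print(np.allclose(R.T @ R, np.eye(2 * n)))  # orthogonal
print(np.allclose(R.T @ J @ R, J))          # symplectic
```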
{}
# instance_closest_approach instance_closest_approach(inst[,time]) Returns the distance in pixels (or time in steps) until the calling and given instances are at their nearest separation based on their current positions and speeds. COPY/// instance_closest_approach(inst[,time]) // // Returns the distance in pixels (or time in steps) until the calling // and given instances are at their nearest separation based on their // current positions and speeds. The returned value is 0 if the instances // are moving in parallel, negative if the instances are diverging. // // inst instance id, real // time true to return time rather than distance, bool // { var x1,y1,x2,y2,dh,dv,t; x1 = x; y1 = y; x2 = argument0.x; y2 = argument0.y; dh = argument0.hspeed - hspeed; dv = argument0.vspeed - vspeed; if ((dh == 0) && (dv == 0)) { if (argument1) return 0; else return point_distance(x,y,argument0.x,argument0.y); }else{ t = -((x2 - x1) * dh + (y2 - y1) * dv) / (sqr(dh) + sqr(dv)); if (argument1) return t; else return sign(t) * point_distance(x + t * hspeed, y + t * vspeed, argument0.x + t * argument0.hspeed, argument0.y + t * argument0.vspeed); } } Contributors: xot, paul23 GitHub: View · Commits · Blame · Raw
{}
# Should Developing Countries Try to Create a Business Elite? NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site. La Porta and Shleifer released a working paper recently on the informal economy (which I believe is a draft for a future issue of the Journal of Economic Perspectives, but I could be wrong). They give an overview of what we currently know about the size and characteristics of informal firms. The thing that stuck out most after reading this was the strong evidence that big, formal firms do not grow out of small, informal ones. In other words, small informal firms tend to stay small and informal. Big formal firms are created as formal firms, and while they may start relatively small, they generally start out bigger than most informal firms will ever be. While institutions/regulations may make incent some people to run informal firms, these regulations are not preventing informal firms from becoming formal. There is essentially no transition in any country of informal firms into formal ones. This is important because big formal firms are much, much more productive (per worker, or in terms of total factor productivity) than small informal ones. So big formal firms are the source of nearly all the significant gains in aggregate productivity within countries. You don’t see any highly developed nations dominated by small, informal firms. And fostering the growth of big formal firms is different (according the La Porta and Shleifer) from fostering the growth of small informal ones. A similar sentiment can be found in a recent column by Daniel Altman, titled “Please Don’t Teach this Woman to Fish“. As the tag line to the article says: poor countries have too many entrepreneurs and too few factory workers. Promoting small (almost universally informal) firms can improve living standards slightly, but does not lead to the massive productivity gains that generate big gains in GDP per capita. So what does it take to promote big formal firm growth? La Porta and Shleifer suggest that a big constraint is highly trained managers and/or entrepreneurs that can handle running a large firm. Improving the average level of education is less important, in this case, than extending the tail of the education distribution. Nearly all big formal firms are run by college-educated managers, so developing countries need to generate more of those kind of people. Getting everyone to go from 6 to 7 years of education won’t do it – it would be better to leave nearly everyone at 6 years, but add a few extra people with 16 years or 18 years of education. Yes, you also need an institutional/regulatory structure that makes it low-cost for those college-educated managers to open and operate firms, obviously. But apparently having a good regulatory structure won’t buy you anything without the stable of potential managers. So here’s a question(s) related to education policy in developing countries. Would they be better off spending their budget providing scholarships for students to got to college (perhaps abroad) and/or paying for high-achieving students to intern or work abroad at large firms for a while. If you could get GE to hire 100 students into their managerial program, would that ultimately be better for development than achieving universal secondary schooling? Is it worth it to the whole country to create an elite cadre of managers who own/run large formal firms? # Some Self-Promotion NOTE: The Growth Economics Blog has moved sites. 
Click here to find this post at the new site.

The CSAE blog has put up a research summary about my paper with Markus Eberhardt, on agricultural technology and agricultural productivity (which are different things).

# Does Culture Matter for Economic Growth? Part Deux.

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

I ended up getting a lot of feedback (pushback?) on my post regarding culture and economic growth. The TL;DR version is this: if culture influences utility functions, then comparing economic development levels between cultures is not very interesting because it doesn't ultimately inform us about welfare.

Several people got back to me about ways that culture could matter for economic development without necessarily implying differences in utility functions. While not doing full justice to each comment, I think the common thread was this: coordination failures. Perhaps you have some cultural norm that says to distrust strangers. In some kind of repeated dynamic game, your first choice is to deviate/defect/cheat, and this leads to a bad equilibrium where everyone continues to deviate/defect/cheat. This means you do not take advantage of mutually beneficial transactions. In contrast, a culture that says to trust strangers will choose to cooperate as a first choice, and this leads to a good dynamic equilibrium where everyone continues to cooperate (lend to each other, transact with each other, make long-term contracts with each other) and allows for greater economic specialization.

Now, if the cultural norm of distrusting strangers is there to minimize the utility loss (shame?) from being cheated, then it's still just a utility function difference, and we can't really say that people are worse off from distrusting strangers. They are, after all, avoiding something that hurts them very badly. But if the cultural norm of distrusting strangers is just some odd historical outcome, then I could see how this cultural norm is really affecting not just economic development, but also welfare. People would like to coordinate on "cooperate" and achieve the good long-run equilibrium, but no one has any incentive to act alone.

That said, cultural norms of distrusting strangers (or trusting them) aren't random. They must have some basis in past cultural experience, and so I'd be worried that it directly influences utility in some manner. But as a general proposition, the idea that culture has an influence on economic development and welfare because of coordination failures seems like a good avenue to pursue.

The idea that culture is tied up with solving coordination problems runs through a lot of the work of Avner Greif. His 1994 paper on cultural beliefs and economic outcomes compares an individualist culture (Genoese traders) with a collectivist one (Maghribi traders) in how they dealt with severe principal-agent issues. Summarizing, the Genoese developed a vertical structure that relied on formal institutions to mediate disputes, while the Maghribi developed a horizontal structure that relied on intra-group cooperation to mediate disputes (i.e. punish cheaters). Greif does not explain why the Genoese or Maghribi adopted these different attitudes; he just documents that the choice of vertical versus horizontal structure makes sense given their cultural attitudes.
He's also careful about ranking these systems:

"Hence although in the long run the Italians drove the Muslim traders out of the Mediterranean, the historical records do not enable any explicit test of the relative efficiency of the two systems" (p. 942-43)

So it's not immediately obvious whether the collectivist culture was worse for economic outcomes (perhaps the Genoese had other advantages we don't know about). But to my prior point, even if the collectivist culture was demonstrably worse for economic outcomes, we don't know anything about how individualism and collectivism entered the utility functions of these groups. Hence we don't know whether the Genoese or the Maghribi were better off with their system.

The one way I see that you could definitively argue that the collectivist culture was "worse" was if the Genoese and Maghribi shared a common utility function, and the move to collectivism by the Maghribi was the result of a random historical event unassociated with that utility function. By random I mean: if we re-ran world history 1000 times, then in about half of them we should see the Genoese ending up with collectivist institutions and the Maghribi with individualistic ones. That seems like a tall order. I'd be shocked if the Maghribi's collectivist culture, and hence adoption of a horizontal structure that (might have) had a detrimental effect on their economy, was just random noise.

Obviously world history didn't start with the Genoese and Maghribi, and their predisposition for collectivism and individualism was the result of historical events leading to that specific time and location. So perhaps there were a series of random occurrences over history that snowballed into the collectivist culture of the Maghribi and the individualism of the Genoese.

Which is a long way of saying that countries could share a common utility function (making GDP or income comparisons meaningful), have different cultures due to a series of historical contingencies, and that those cultural traits could have meaningful economic effects because of how they influence coordination problems. In that case, it would be meaningful to talk about culture's effect on GDP, because culture is essentially capturing some kind of historical path dependence.

# Measuring Real GDP

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

This morning Angus Deaton and Bettina Aten released an NBER working paper (gated, sorry) about understanding changes to international measures of real GDP and poverty that occurred following the release of a new round of price indices from the International Comparison Project (ICP).

Price indices? Methodological nuance? I know, ideal subject matter to drive my web traffic to zero. For those of you still here (thanks mom!), the paper by Deaton and Aten is a great chance to understand where comparisons of real GDP across countries come from, and to highlight that these comparisons are inherently imprecise and should be used with that in mind.

The basic idea of the Penn World Tables, or any other attempt to measure real GDP across countries, is to compute the following

$\displaystyle RGDP_i = \frac{NGDP_i}{PPP_i} \ \ \ \ \ (1)$

where ${RGDP_i}$ is the real GDP number we want, ${NGDP_i}$ is the nominal GDP reported by a country, and ${PPP_i}$ is the "purchasing-power-parity" price index for that country.
While there can be severe issues with the reporting of nominal GDP, particularly from poor countries with a bare-bones (or no) national statistics office, the primary concern in these calculations is with the ${PPP_i}$. Think of ${PPP_i}$ as the cost of one "bundle of goods" in country ${i}$. So dividing nominal GDP by ${PPP_i}$ gives us the number of real bundles that a country produced. If we do that for every country, we can compare the number of real bundles produced across countries, and that crudely captures real GDP.

The ICP produces these measures of ${PPP_i}$ for each country. I'm going to avoid the worst sausage-making aspect of this, because it involves lots of details about surveys to find prices for specific goods, how to get the right "average" price for each good, and then how to roll those back up to ${PPP_i}$ for each country. The important thing about the methodology for computing ${PPP_i}$ is that there is no right way to do it. There are methods that might be less sensible (i.e. let ${PPP_i}$ be the price of a can of Diet Coke in a country) than what the ICP does, but that doesn't imply that the ICP is correct in some absolute sense. It also means that the ICP can, and does, change methodology over time.

The paper by Deaton and Aten works through the changes in methodology from 1993/5 to 2005 to 2011 and how we measure real GDP. The tentative conclusion is that the 2005 iteration of the ICP probably was over-stating the ${PPP_i}$ levels for many developing African and Asian countries. From the equation above, you can see that over-stating the ${PPP_i}$ means under-stating real GDP. So in 2005, we were likely too pessimistic about the economic conditions in a lot of these developing countries.

Chandy and Kharas found that using the 2005 values of ${PPP_i}$ implied that 1.215 billion people in 2010 lived below the World Bank's $1.25 per day poverty line. Using the 2011 values of ${PPP_i}$ instead, there are only 571 million people living below $1.25 per day. That's a reclassification of some 700 million people. Their domestic income stayed the same, but the 2011 ICP suggests that they were paying lower prices for their "bundle of goods" than we assumed in 2005, and hence their real income went above $1.25.

But as I said before, these are tentative conclusions because there is no way of knowing this for sure. Deaton and Aten's conclusion is that the 1993/5 and 2011 rounds of the ICP seem more consistent with each other, and 2005 looks like an outlier. So just to keep things comparable over time, we should probably avoid the 2005 numbers. But again, who knows. It's quite possible that mankind's true welfare is measured in the number of cans of Diet Coke that we can produce.

Measuring real GDP or global poverty levels is – to put it kindly – a fuzzy process. There is no single right method for this. As you can see, the measurements can be pushed around a lot by differences in methodology that are inherently trying to make apples-to-oranges comparisons (I mean that literally – how do you value apples compared to oranges in national output? What's the right price? It's different in Washington, Florida, and Wisconsin. So how do you compare the total "real" value of fruit consumption in different states or countries?).

The implication is that we shouldn't be asking real GDP measures or poverty line measures to do too much. For really crude comparisons, real GDP from the Penn World Tables is fine. The U.S.
has higher real GDP per capita than Kenya, and the Penn World Tables pick that up. Is it a 40/1 ratio? A 35/1 ratio? A 20/1 ratio? Not entirely clear. Different methodologies for computing ${PPP_i}$ in the US and Kenya will yield different results. But is it really important if it is 40/1 versus 20/1? In either case, it is clear that Kenya is poorer. We can go forth and try to explain why, or make some policy advice to Kenya to help close the gap, or go to Kenya to work on interventions to alleviate poverty there.

Where these real GDP comparisons, or poverty line counts, should not be used is in finer-grain comparisons. Is Kenya's real GDP per capita lower or higher than Lesotho's? According to the Penn World Tables, in 2011 Kenya's was lower. But should we do any kind of serious analysis based on this? No. The difference is as likely to be from discrepancies in how we measure ${PPP_i}$ for those countries as from real economic differences in capital stocks, human capital, technology, or institutions.

Real GDP comparisons are best thought of as similar to baseball stats. The top career OPS (on-base plus slugging percent) players are Babe Ruth, Ted Williams, Lou Gehrig, Barry Bonds, and other names you might recognize. Players like Albert Pujols and Miguel Cabrera are in the top 20, giving you a good idea that these guys are playing at a level similar to the greats of all time. You can't use this career OPS to tell me that Pujols is definitively better than Stan Musial or definitively worse than Rogers Hornsby. But career OPS does make it clear that Pujols and Cabrera are definitely better than guys like Davey Lopes, Edgar Renteria, and Devon White (and distinguishing between Lopes, Renteria, and White is hopeless using OPS). The fact that ICP revises the ${PPP_i}$ values over time doesn't make them useless, just as OPS isn't useless even though it ignores defense and steals. But you cannot ask too much of the real GDP measures that are derived using them. They are useful for big, crude comparisons, not fine-grained analysis.

# Does Culture Matter for Economic Growth?

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

There's been an increasing number of papers concerned with culture and its relationship to economic growth. I happened to just see this working paper by Di Tella and MacCulloch (2014), but the idea of culture being an important determinant of economic development levels has been hanging out there in the literature for a long time. Weber's theory of the Protestant work ethic is probably the starting point for any discussion of this topic. More recent work tends to try and be more empirical than Weber, often using World Values Surveys as a means of measuring cultural elements. This is what Di Tella and MacCulloch do in their working paper. [If you'd like a good introduction to the culture literature, check out James Fenske's course materials, in particular his "Foundations of Development" course].

I think this is pretty interesting reading, but I'm starting to get a little antsy about the use of the cross-country empirical work. Not in a standard "Identification!!" way, although that's an issue, but in a slightly deeper way. In particular, why bother regressing GDP per capita (or growth, or any measure of economic activity) on cultural variables at all?

Culture affects economic activity through the choices that people make about how to allocate scarce resources.
In other terms, while culture may be a fundamental determinant of economic activity, it acts through proximate factors like (but not exclusive to) the accumulation of capital, the adoption of technology, or labor market participation decisions. So if we are going to describe how culture influences economic activity, we need to describe how culture influences those proximate factors.

The decisions regarding saving, technology adoption, and labor market participation are similar in that they involve some sort of constrained optimization problem. That is, there is some budget constraint and some utility function, and people do the best they can to maximize utility while keeping within that budget. I have some income, for example, and I need to decide how much of it to consume and how much to save. I have some profits as a firm, and I need to decide whether to invest in a new technology, or distribute the profits to my stockholders. I have a finite amount of time, and I need to decide whether to stay home and raise my kids or put them in day care and go back to work. All constrained optimization problems.

So if culture is going to influence economic activity, it has to influence those constrained optimization problems. And there are really only two options then. Either culture influences budget constraints, or it influences utility functions. I haven't seen any argument that culture actually changes the budget constraints of people, firms, or governments. Finite resources are finite no matter what you believe. So culture probably acts through utility functions, changing people's preferences towards the future, or towards education, or towards material success, or towards the environment, or whatever.

Maximizing utility does not mean that people are individualistic money-grubbers. You can write down a utility function where someone cares about other people's welfare, or a function where someone really enjoys free time with their kids, or highly values the environment, or values the success of their group. Culture, if it has economic effects, would presumably act by changing exactly what is valued in the utility functions of people or households.

Take as an example the common cultural distinction that Americans are more individualistic than Europeans. This would manifest itself in a utility function in the U.S. that is heavily weighted towards individual income, say, versus any measure of community income. In Europe, the opposite would apparently hold. Then, given the same budget, Americans would make choices aimed towards better personal outcomes (e.g. low tax rates and minimal social safety nets) while Europeans would make choices aimed towards better group outcomes (e.g. high tax rates and social safety nets).

So here's the issue that I mentioned at the top. If culture leads to different utility functions, which in turn lead to different measurable economic outcomes, then why should we bother with measuring economic outcomes? Let me take this from the opposite angle. If everyone has identical utility functions, then measurable economic outcomes (GDP, average wages) have some information about relative welfare across countries. But if everyone has a different utility function, then measurable economic outcomes don't necessarily provide any information about relative welfare. If one culture derives utility from having massive families with lots of kids, and doesn't really care about consumption goods, then what does their low GDP per capita tell me? Nothing.
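Before going on, here is a minimal numerical sketch of that argument (my own illustration with made-up numbers, not from the post): two agents face the same wage and the same time budget but weight consumption against non-market time differently, so they end up with different measured incomes even though each is at its own optimum.

```python
# Illustration only: culture-as-preferences under a common budget constraint.
# Each agent maximizes U = a*log(c) + (1-a)*log(l) subject to c = w*(1 - l),
# where l is time spent outside the market. Closed form: l* = 1 - a, c* = w*a.
w = 10.0                      # common wage: the budget constraint is identical
for a in (0.8, 0.4):          # a = weight on consumption; "culture" shifts a
    l = 1 - a                 # optimal non-market time
    c = w * a                 # optimal consumption = measured market income
    print(f"a={a}: works {a:.1f} of the time, measured income = {c:.1f}")
# The a=0.8 agent shows measured income 8.0, the a=0.4 agent only 4.0, yet both
# are at their own optimum, so the income gap alone says nothing about welfare.
```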
It doesn't tell me they have lower welfare than a high GDP per capita culture. If you tell me that culture is important for economic outcomes, then you're telling me that utility functions vary across cultures. But if utility functions vary across cultures, then cross-culture comparisons of economic outcomes don't imply anything about welfare. So aren't the regressions with culture as an explanatory variable self-defeating, even if they are econometrically sound?

I could well be over-thinking this, and I'd be happy to hear a good argument for what the culture/growth or culture/income regressions are supposed to be telling me.

# Potential "Potential Output" Levels

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

John Fernald has a new working paper out at the San Fran Fed on "Productivity and Potential Output Before, During, and After the Great Recession". The main take-away from the paper is that productivity growth started to slow down even before 2008, particularly in industries that produce IT products or are significant users of IT products. Because of this, even in the absence of the Great Recession, we would have seen slower trend growth in GDP.

What Fernald's results imply is that the economy is not as far from its potential GDP as we might think. And the idea that we're way below potential GDP is something lingering underneath a lot of the discussion about economic policy (tapering, stimulus, etc…). Matt Yglesias just had a post noting that while the U.S. is well below its pre-2007 trend for GDP, Europe is even farther below its trend. Regardless of the conclusion you want to draw from that regarding policy, the assumption is that the pre-2007 trend is where GDP "should" be.

Back to Fernald's paper. He finds that productivity growth was already declining prior to 2007, and therefore where GDP "should" be is a lot lower than the naive pre-2007 trend line would indicate. This is easier to see in a picture. The purple dashed line is from the CBO's 2007 projection, and that is essentially just an extrapolation of the trend in GDP from about 1990-2007. Compared to that measure of potential GDP, we are doing very poorly, with actual GDP (the black line) falling nearly $2 trillion short of potential.

Fernald's alternative calculations that take into account the slowdown in productivity growth that started in the mid-2000s suggest a much lower estimate of potential GDP. His estimate (the red line) is a prediction of what GDP would have been without the financial crisis, essentially. It falls well below the CBO 2007 estimate. It suggests that the economy today is only perhaps $400 billion short of potential GDP.

His numbers make a big difference in how you think about policy, if only at the quantitative level. If you're for some kind of further monetary expansion or a new fiscal stimulus, then the size of that boost should be calibrated to a $400 billion shortfall, not a $2 trillion one.

Why does Fernald come up with lower numbers for potential output than the naive forecast in 2007? Without going into the nitty-gritty, he looks at productivity growth (think output per hour) and finds that around 2003Q4, it stops growing as quickly as it did from 1995-2003. What Fernald chalks this up to is the exhaustion of the IT productivity boost.
At the time, people thought that the IT revolution might have permanently raised labor productivity growth rates. It appears rather to have had a "level effect" – we had a boost in the level of labor productivity, but now it will continue to grow at the normal rate. Again, this is easier to see in pictures, courtesy of Fernald's paper. You can see that the 1995-2003 period is exceptional in having high labor productivity growth, and that since 2003 we've had growth in labor productivity at about the same rate as 1973-95. Anyone who uses the 1995-2003 period to extrapolate labor productivity growth (like the CBO was implicitly doing in 2007) would overestimate potential output.

This isn't to say that the CBO or anyone else was being lazy or duplicitous. In 2007, if you looked at the data on labor productivity, there would not be enough evidence to suggest that growth in labor productivity had fallen. The data from 1995-2007 would not be enough to tell you if we had experienced a "level effect" from IT that led to a temporary boost to growth rates, or a "growth effect" from IT that had permanently raised growth rates. You can only tell the difference now because we see the slowdown in productivity growth, so in retrospect it must have been a "level effect".

Regardless, Fernald's paper suggests that the scope of the Great Recession is less "Great" than previous estimates would lead you to believe. And given that the trend growth rate in labor productivity is driven primarily by technological innovation, boosting that growth rate means hoping that someone will invent a new technology that has a transformative power similar to IT.

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

This group of papers is one of the first that I cover in class, because it's useful to frame much of the growth/development research. The concept is that real GDP per capita is produced using a function something like ${y = A F(k,h)}$. Real GDP thus depends on total factor productivity (${A}$), capital (${k}$), and human capital/labor (${h}$). So variation in real GDP per capita depends on variation in ${A}$, ${k}$, and ${h}$ across countries. All your favorite theories about institutions, geography, culture, innovation, etc. must operate through one of these three proximate factors.

To focus ourselves on what is important, we'd like to know which of the three proximate factors are actually responsible for the variation in real GDP per capita we see. One way to do this is to first assume a Cobb-Douglas production function for ${F()}$ and take logs

$\displaystyle \ln{y}_i = \ln A_i + \alpha \ln{k}_i + \beta \ln{h}_i. \ \ \ \ \ (1)$

Conceptually, one could then run a regression of ${y_i}$ (the ${i}$ index specifies the country) on ${k_i}$ and ${h_i}$. We don't have information on ${A_i}$ directly, so we could treat that as the error term. We could get even fancier and replace ${k_i}$ and ${h_i}$ with some terms based on savings rates or human capital accumulation rates, consistent with theory. Regardless, we'd then look at the R-squared or partial R-squareds to tell us how important each factor was. This is, in a nutshell, what Mankiw, Romer, and Weil (1992) are up to.

One problem with this is that TFP (${A}$) is not uncorrelated with ${k}$ and ${h}$, so the regression estimates of ${\alpha}$ and ${\beta}$ are going to be biased, and hence so are our R-squares. I wrote a whole post about this here.
So rather than run the regression, we could pull values for ${\alpha}$ and ${\beta}$ from some other source and just calculate the R-squares without actually running the regression. This is essentially what the development accounting literature is doing, with Hall and Jones (1999) and Klenow and Rodriguez-Clare (1997) being the classic examples. The upshot of these papers is that variation in ${A}$ accounts for at least 50% of the differences in ${y}$ across countries, and maybe more. ${k}$ accounts for maybe 30-40%, and ${h}$ only 10-20%. So TFP is the most important proximate factor.

The other papers are then riffs on this basic idea. Gollin (2002) is about whether ${\alpha}$ or ${\beta}$ themselves vary across countries (they do) and whether they are correlated with real GDP per capita (they are not). Caselli (2005) shows that differences in how exactly you account for ${k}$ and ${h}$ are not necessarily important for the overall result that TFP matters most.

You can also do this kind of accounting for a single country over time, to see the sources of growth. The Young (1995) and Hsieh (2002) papers are a back and forth over how to do this for several East Asian countries, differing in technique and data sources. Hsieh and Klenow (2007) is included in this section of the class because it helps establish that domestic savings rates do not vary much across countries, and so we cannot expect capital variation to matter a lot either.

The reading list here is light on human capital. I talk about Hendricks' (2002) work on trying to measure ${h}$ more accurately using immigrant data from the U.S., and Weil's (2007) paper on including health as part of human capital. The reason for the light coverage is that German Cubas, one of our junior faculty, is going to be teaching a graduate course this year that focuses a lot on human capital. So I only touch on it in my course.

As usual, PDF and Bibtex files with the reading lists are on the "Papers" page.
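As a concrete illustration of the accounting arithmetic described above (my own toy numbers, not from any of the cited papers), here is a small sketch that backs out ${A}$ as a residual from the Cobb-Douglas relationship and splits a cross-country income gap into the three proximate factors:

```python
from math import log, exp

# Toy development accounting: y = A * k^alpha * h^beta, so
# ln A = ln y - alpha*ln k - beta*ln h  (TFP is backed out as a residual).
alpha, beta = 1/3, 2/3          # assumed common factor shares for the illustration
countries = {
    "rich": {"y": 100.0, "k": 300.0, "h": 12.0},   # made-up per-worker values
    "poor": {"y": 5.0,   "k": 6.0,   "h": 4.0},
}
lnA = {}
for name, d in countries.items():
    lnA[name] = log(d["y"]) - alpha * log(d["k"]) - beta * log(d["h"])
    print(name, "implied TFP:", round(exp(lnA[name]), 2))

# Decompose the log income gap into contributions from k, h, and A.
gap_y = log(countries["rich"]["y"] / countries["poor"]["y"])
gap_k = alpha * log(countries["rich"]["k"] / countries["poor"]["k"])
gap_h = beta * log(countries["rich"]["h"] / countries["poor"]["h"])
gap_A = lnA["rich"] - lnA["poor"]
print("shares of the income gap:",
      {"k": round(gap_k / gap_y, 2), "h": round(gap_h / gap_y, 2), "A": round(gap_A / gap_y, 2)})
```

The three shares sum to one by construction; the interesting question in the literature is how large the ${A}$ share is once ${k}$ and ${h}$ are measured carefully.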
# Eilenberg-Watts Theorem Proof

I am currently reading about the Eilenberg-Watts theorem. I got the original work from Charles Watts' "Intrinsic Characterizations of Some Additive Functors", and Theorem 1 is what I am interested in. Alternatively, the proof is stated here and the important part here in Lemma 1. The proof in Watts says the following (a few very small changes are made by me to fit it in my paper):

Consider a functor $$F: \mathrm{mod-}A \rightarrow \mathrm{mod-}A$$, a module $$M \in \mathrm{mod-}A$$, $$m \in M$$ and the homomorphism $$\phi_m: A \rightarrow M, a \mapsto a\cdot m$$. Since $$\phi_m \in \mathrm{Hom}(A, M)$$ we can apply $$F$$ to $$\phi_m$$ and get $$F(\phi_m): F(A) \rightarrow F(M)$$. We define $$\Psi_0^M: M \times F(A) \rightarrow F(M), (m, \tilde{a})\mapsto F(\phi_m)(\tilde{a}),$$ where $$m \in M$$ and $$\tilde{a} \in F(A)$$. Since $$A$$ is a bimodule over itself, consider $$\Psi_0^A: A \times F(A)\rightarrow F(A)$$; by that, $$F(A)$$ gets a left module structure. This is compatible with the right module structure, such that $$F(A)$$ is a bimodule. By the universal property of the tensor product we get $$\Psi^M: M\otimes F(A) \rightarrow F(M)$$. Next we show that $$(\Psi^M)_{M \in \mathrm{mod-}A}$$ is a natural transformation [...] Since $$F$$ and $$- \otimes F(A)$$ commute with direct sums, it follows that $$\Psi^M$$ is an isomorphism whenever $$M$$ is a free module. Now take an arbitrary $$N \in \mathrm{mod-}A$$. Look at a chosen exact sequence $$0 \rightarrow R \rightarrow M \rightarrow N \rightarrow 0$$ where $$M$$ is free. Since $$F$$ and $$- \otimes F(A)$$ are right-exact, we get the following diagram with exact rows: With a diagram chase we can conclude that $$\Psi^N$$ is an epimorphism. Since $$N$$ was chosen arbitrarily, it follows that $$\Psi^R$$ is an epimorphism. Again with a diagram chase it follows that $$\Psi^N$$ is a monomorphism and hence an isomorphism.

My questions about this are the following:

• Why is $$\Psi^M$$ an isomorphism if $$M$$ is free?
• What is the module $$R$$?
• Why does such an exact sequence $$0 \rightarrow R \rightarrow M \rightarrow N \rightarrow 0$$ exist?
• Why is $$\Psi^R$$ an epimorphism, given that $$N$$ was chosen arbitrarily?

If you want to check the other source (Specksmath) I gave, my questions would be:

• What are $$F_0$$ and $$F_1$$?
• Why can we say that $$\sigma$$ is an isomorphism for both of them?
• Why does such an exact sequence $$F_1 \rightarrow F_0 \rightarrow M \rightarrow 0$$ exist?

Maybe someone can help me understand this :D

The exact sequences exist because for any module $$N$$, we can always take

• $$M=F(I)$$ to be the free module on some set $$I$$ of generators of $$N$$
• $$F(I)\to N$$ to be the map induced by the inclusion $$I\hookrightarrow N$$ and the universal property of the free module $$F(I)$$

and this is a surjective morphism, because $$I$$ generates $$N$$. Then if you take $$R$$ to be the kernel of $$M\to N$$, with its inclusion into $$M$$, you have by construction a short exact sequence $$0\rightarrow R \rightarrow M \rightarrow N \rightarrow 0.$$ This is what happens in the article. You can also apply the same trick to obtain a surjective map $$F(J)\to R$$, and then composing this with the inclusion $$R\hookrightarrow F(I)$$ gives you an exact sequence $$F(J) \rightarrow F(I) \rightarrow N \rightarrow 0,$$ and that's what happens in the blog post (with slightly different notation).
Now in the article, $$\Psi^A$$ is the canonical isomorphism $$A\otimes_A F(A)\to F(A)$$, since the structure of left $$A$$-module of $$F(A)$$ is exactly the bilinear map that induces $$\Psi_A$$. Moreover if $$M=F(I)=\bigoplus_{i\in I}A$$, then since $$\_\otimes_AF(A)$$ and $$F$$ commute with direct sums we have canonical isomorphisms $$\bigoplus_{i\in I}(A\otimes_A F(A))\to M\otimes_A F(A)\quad \text{and}\quad \bigoplus_{i\in I}F(A)\to F(M),$$ which make the diagram $$\require{AMScd}\begin{CD}\bigoplus_{i\in I}(A\otimes_A F(A)) @>>> M\otimes_A F(A)\\ @V{\oplus_i \Psi^{A}}VV @VV{\Psi^M}V \\ \bigoplus_{i\in I}F(A) @>>> F(M)\end{CD}$$ commute. In particular, $$\Psi^M$$ is then an isomorphism. This implies that the composite $$M\otimes_A F(A)\to F(M)\to F(N)$$ is surjective, and then so is $$\Psi^N$$, thanks to the commutative diagram you've written. This is true for any module $$N$$; in particular, it's also true for $$R$$. • Thanks a lot... A few questions to your notation: In my question, $M$ is the free module and $N$ an arbitrary one. In your answer $M$ ist the arbitrary one in the beginning. Later you say that $M = \oplus A$ is $M$ still the arbitrary module or the free module? – P. Schulze Mar 4 at 17:04 • @P.Schulze I started with notation that matched the blog post and then inadvertently switched to the article, so $M$ was an arbitrary module in the first part and then a free module. I've edited to keep the notation closer to the article. – Arnaud D. Mar 4 at 17:21
Birthday Problem with Realistic Assumptions 1. Jul 21, 2011 Bacle Hi, All: The standard way of approaching the birthday problem, i.e., the problem of determining the number of people needed to have a certain probability that two of them have the same birthday, is based on the assumption that birthdays are uniformly-distributed, i.e., that the probability of someone having a birthday on a given day is 1/365 for non-leap, or 1/366 for leap. But there is data to suggest that this assumption does not hold; specifically, this assumption failed a chi-square at the 95% for expected-actual, for n=480,040 data points. Does anyone know of a solution that uses a more realistic distribution of birthdates? 2. Jul 21, 2011 awkward This paper seems to be exactly what you're looking for-- http://www.jstor.org/pss/2685309 but you will need access to a JSTOR account to see more than the first page. Last edited by a moderator: Apr 26, 2017 3. Jul 22, 2011 Bacle Excellent, 'Awkward' , thanks.
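For readers without JSTOR access, one quick way to see how much a non-uniform birthday distribution matters is a small simulation (this is my own illustration, not from the cited paper; the "seasonal" weights below are made up):

```python
import random

def p_shared_birthday(k, weights, trials=100_000):
    """Estimate P(at least two of k people share a birthday) by simulation."""
    days = range(len(weights))
    hits = 0
    for _ in range(trials):
        sample = random.choices(days, weights=weights, k=k)
        if len(set(sample)) < k:        # a repeated day means a shared birthday
            hits += 1
    return hits / trials

uniform = [1.0] * 365
# Made-up non-uniform distribution: second half of the year 20% more likely.
seasonal = [1.0] * 182 + [1.2] * 183

for k in (10, 23, 40):
    print(k, round(p_shared_birthday(k, uniform), 3), round(p_shared_birthday(k, seasonal), 3))
# Any departure from uniformity can only raise the collision probability, so the
# uniform assumption gives a lower bound; for mild non-uniformity the effect is small.
```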
In functional programming, Monads are an abstraction used to structure programs

• Help reduce complicated sequences of functions into "a pipeline" of actions
• Abstract away control flow
• Facilitate side-effects
• Manage external data interactions

class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b

Monads are an abstraction used to help structure programs and to easily achieve some functionality which would be difficult to achieve otherwise. For example, they help achieve side-effects which would be required in the real world. Monads in Haskell are defined as a typeclass. We make things "monadic" by making them an instance of this typeclass.

2 main operations defined by the typeclass:

• Lifting: take a non-monadic value and turn it into a monadic value
• Binding: take a monadic value and a function that returns a monadic value
• The bind operator can have different semantics for different monads

### Functors and Applicatives

Functors are things that can be mapped over

fmap :: (a -> b) -> f a -> f b

Applicatives are functors that can be applied

pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b

A monad on a category C consists of an endofunctor (a functor mapping a category to itself) T: C -> C, along with two natural transformations:

1. 1_C -> T, where 1_C denotes the identity functor on C, and
2. T^2 -> T, where T^2 is the composite functor T ∘ T from C to C

These are required to fulfill coherence conditions

class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b

## Coherence Conditions

Left identity: The first monad law states that if we take a value, put it in a default context with return and then feed it to a function by using >>=, it's the same as just taking the value and applying the function to it.

return a >>= f ≡ f a

Right identity: The second law states that if we have a monadic value and we use >>= to feed it to return, the result is our original monadic value.

m >>= return ≡ m

Associativity: The final monad law says that when we have a chain of monadic function applications with >>=, it shouldn't matter how they're nested.

(m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)

## Monads for Side-Effects

Eg. IO

Monads in Haskell can function as "containers" that carry "extra information" apart from the value inside, which functions need not worry about. Here, the "information" can be used as the action that performs IO

instance Monad IO where
  return :: a -> IO a
  (>>=) :: IO a -> (a -> IO b) -> IO b

Example as a REPL reading/writing to a terminal

flushStr :: String -> IO ()
readPrompt :: String -> IO String
evalString :: String -> IO String
until_ :: Monad m => (a -> Bool) -> m a -> (a -> m ()) -> m ()
runRepl :: IO ()
main :: IO ()

## Monads for Control Flow

Eg. Error Handling

We define all types of errors we want to catch and throw as MonadicError. We define a type for functions that may throw a MonadicError

type ThrowsError = Either MonadicError

Either is another instance of a monad. The "extra information" in this case is whether an error occurred.

instance (Error e) => Monad (Either e) where
  return x = Right x
  Right x >>= f = f x
  Left err >>= f = Left err

If (>>=) sees an error, it simply passes that error through without performing subsequent computations; otherwise it passes the value along.

Take our two monads for Error Handling and IO, for example; say we need to use their functionality simultaneously.
We use monad transformers to combine the functionality of multiple monads. We use ExceptT, a monad transformer that adds exceptions to other monads

newtype ExceptT e m a   -- kind: * -> (* -> *) -> * -> *

The combined monad would then be:

type IOThrowsError = ExceptT MonadicError IO

## Monads for Environment Management

Haskell has no built-in notion of mutable state. Each function has an environment storing values for each of its args and vars. We use a feature called IORef that helps us hold the environment state within the IO monad. We then simply access this environment, mutating its state as required, and keep passing it around on each evaluation cycle.

type Env = IORef [(String, IORef SomeVal)]
eval :: Env -> SomeVal -> IOThrowsError SomeVal
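As a language-neutral illustration of the error-handling bind described above (my own sketch, written in Python rather than Haskell purely because the short-circuiting idea is not Haskell-specific): a chain of functions where any failure is passed through and the remaining steps are skipped.

```python
# A minimal Either/Result analogue: ("ok", value) or ("err", message).
def bind(result, f):
    """Apply f to the value only if result is ok; otherwise pass the error through."""
    tag, payload = result
    return f(payload) if tag == "ok" else result

def parse_int(s):
    return ("ok", int(s)) if s.strip().lstrip("-").isdigit() else ("err", f"not an int: {s!r}")

def reciprocal(n):
    return ("err", "division by zero") if n == 0 else ("ok", 1 / n)

for text in ["4", "0", "four"]:
    print(text, "->", bind(parse_int(text), reciprocal))
# "4"    -> ("ok", 0.25)
# "0"    -> ("err", "division by zero")
# "four" -> ("err", "not an int: 'four'")   # the error skips reciprocal entirely
```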
# When is the power of a nonnegative polynomial a sum of squares?

There are polynomials that are not sums of squares. For example, Motzkin gave the example $x^4y^2+x^2y^4+z^6-3x^2y^2z^2$ in 1967.

Is there a real polynomial $f\in{\mathbb{R}}[x_1,\ldots,x_n]$ in several indeterminates that is not a sum of squares but $f^N$ is a sum of squares for some odd integer $N>0$?

This question is interesting in the following sense. The point of writing a nonnegative polynomial $f$ as a sum of squares is to give an algebraic proof of the inequality $f\ge 0$. As per Motzkin's example, we know that this is not always possible. One way to resolve this is to follow Artin and use denominators. Another way (which I learnt from D'Angelo) is to show that $f^{N}$ is a sum of squares for some odd $N$. This question is me wondering whether such a technique of considering the radical of sums of squares is vacuous.

EDIT: Let me explain why I accepted Bruce's answer over JC's. In mathematics, a variety of proofs for a single question leads us to have a more well-rounded understanding of the problem. JC's answer uses a computer, and -- without further trials and careful observation of the computer output -- only gives us an affirmative answer to my question. Bruce's answer, on the other hand, though less explicit, should be able to lead to further counterexamples by modeling after Bruce's binomial method. I believe this method of choosing $f^2$ as a denominator for the inequality $f>0$ will lead to fruitful applications in polynomial optimization and the moment problem.
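For context, a standard step that the question leaves implicit (this is my own addition): Motzkin's polynomial is nonnegative by the AM-GM inequality applied to its three monomial terms,

$$\frac{x^4y^2+x^2y^4+z^6}{3}\;\ge\;\sqrt[3]{x^4y^2\cdot x^2y^4\cdot z^6}\;=\;x^2y^2z^2,$$

so $x^4y^2+x^2y^4+z^6-3x^2y^2z^2\ge 0$ everywhere, even though it is not a sum of squares of polynomials.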
# What are single bonds?

A single bond is a bond in which two atoms share one pair of valence electrons (typically one electron contributed by each atom), forming a covalent bond. It is usually written as $A - A$ or $A : A$ in Lewis structures. All single bonds are sigma bonds. Examples of single bonds include $C - H , H - H , H - F$, and many more, usually involving hydrogen atoms.
Copied to clipboard ## G = C2×He3.C3order 162 = 2·34 ### Direct product of C2 and He3.C3 direct product, metabelian, nilpotent (class 3), monomial, 3-elementary Aliases: C2×He3.C3, C6.3He3, He3.3C6, 3- 1+22C6, (C3×C9)⋊9C6, (C3×C18)⋊2C3, (C2×He3).C3, C3.3(C2×He3), (C3×C6).2C32, C32.2(C3×C6), (C2×3- 1+2)⋊2C3, SmallGroup(162,29) Series: Derived Chief Lower central Upper central Derived series C1 — C32 — C2×He3.C3 Chief series C1 — C3 — C32 — C3×C9 — He3.C3 — C2×He3.C3 Lower central C1 — C3 — C32 — C2×He3.C3 Upper central C1 — C6 — C3×C6 — C2×He3.C3 Generators and relations for C2×He3.C3 G = < a,b,c,d,e | a2=b3=c3=d3=1, e3=c, ab=ba, ac=ca, ad=da, ae=ea, bc=cb, dbd-1=bc-1, be=eb, cd=dc, ce=ec, ede-1=bc-1d > Smallest permutation representation of C2×He3.C3 On 54 points Generators in S54 (1 10)(2 11)(3 12)(4 13)(5 14)(6 15)(7 16)(8 17)(9 18)(19 43)(20 44)(21 45)(22 37)(23 38)(24 39)(25 40)(26 41)(27 42)(28 48)(29 49)(30 50)(31 51)(32 52)(33 53)(34 54)(35 46)(36 47) (1 44 28)(2 45 29)(3 37 30)(4 38 31)(5 39 32)(6 40 33)(7 41 34)(8 42 35)(9 43 36)(10 20 48)(11 21 49)(12 22 50)(13 23 51)(14 24 52)(15 25 53)(16 26 54)(17 27 46)(18 19 47) (1 4 7)(2 5 8)(3 6 9)(10 13 16)(11 14 17)(12 15 18)(19 22 25)(20 23 26)(21 24 27)(28 31 34)(29 32 35)(30 33 36)(37 40 43)(38 41 44)(39 42 45)(46 49 52)(47 50 53)(48 51 54) (2 45 32)(3 30 43)(5 39 35)(6 33 37)(8 42 29)(9 36 40)(11 21 52)(12 50 19)(14 24 46)(15 53 22)(17 27 49)(18 47 25)(20 23 26)(28 34 31)(38 41 44)(48 54 51) (1 2 3 4 5 6 7 8 9)(10 11 12 13 14 15 16 17 18)(19 20 21 22 23 24 25 26 27)(28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45)(46 47 48 49 50 51 52 53 54) G:=sub<Sym(54)| (1,10)(2,11)(3,12)(4,13)(5,14)(6,15)(7,16)(8,17)(9,18)(19,43)(20,44)(21,45)(22,37)(23,38)(24,39)(25,40)(26,41)(27,42)(28,48)(29,49)(30,50)(31,51)(32,52)(33,53)(34,54)(35,46)(36,47), (1,44,28)(2,45,29)(3,37,30)(4,38,31)(5,39,32)(6,40,33)(7,41,34)(8,42,35)(9,43,36)(10,20,48)(11,21,49)(12,22,50)(13,23,51)(14,24,52)(15,25,53)(16,26,54)(17,27,46)(18,19,47), (1,4,7)(2,5,8)(3,6,9)(10,13,16)(11,14,17)(12,15,18)(19,22,25)(20,23,26)(21,24,27)(28,31,34)(29,32,35)(30,33,36)(37,40,43)(38,41,44)(39,42,45)(46,49,52)(47,50,53)(48,51,54), (2,45,32)(3,30,43)(5,39,35)(6,33,37)(8,42,29)(9,36,40)(11,21,52)(12,50,19)(14,24,46)(15,53,22)(17,27,49)(18,47,25)(20,23,26)(28,34,31)(38,41,44)(48,54,51), (1,2,3,4,5,6,7,8,9)(10,11,12,13,14,15,16,17,18)(19,20,21,22,23,24,25,26,27)(28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45)(46,47,48,49,50,51,52,53,54)>; G:=Group( (1,10)(2,11)(3,12)(4,13)(5,14)(6,15)(7,16)(8,17)(9,18)(19,43)(20,44)(21,45)(22,37)(23,38)(24,39)(25,40)(26,41)(27,42)(28,48)(29,49)(30,50)(31,51)(32,52)(33,53)(34,54)(35,46)(36,47), (1,44,28)(2,45,29)(3,37,30)(4,38,31)(5,39,32)(6,40,33)(7,41,34)(8,42,35)(9,43,36)(10,20,48)(11,21,49)(12,22,50)(13,23,51)(14,24,52)(15,25,53)(16,26,54)(17,27,46)(18,19,47), (1,4,7)(2,5,8)(3,6,9)(10,13,16)(11,14,17)(12,15,18)(19,22,25)(20,23,26)(21,24,27)(28,31,34)(29,32,35)(30,33,36)(37,40,43)(38,41,44)(39,42,45)(46,49,52)(47,50,53)(48,51,54), (2,45,32)(3,30,43)(5,39,35)(6,33,37)(8,42,29)(9,36,40)(11,21,52)(12,50,19)(14,24,46)(15,53,22)(17,27,49)(18,47,25)(20,23,26)(28,34,31)(38,41,44)(48,54,51), (1,2,3,4,5,6,7,8,9)(10,11,12,13,14,15,16,17,18)(19,20,21,22,23,24,25,26,27)(28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45)(46,47,48,49,50,51,52,53,54) ); 
G=PermutationGroup([[(1,10),(2,11),(3,12),(4,13),(5,14),(6,15),(7,16),(8,17),(9,18),(19,43),(20,44),(21,45),(22,37),(23,38),(24,39),(25,40),(26,41),(27,42),(28,48),(29,49),(30,50),(31,51),(32,52),(33,53),(34,54),(35,46),(36,47)], [(1,44,28),(2,45,29),(3,37,30),(4,38,31),(5,39,32),(6,40,33),(7,41,34),(8,42,35),(9,43,36),(10,20,48),(11,21,49),(12,22,50),(13,23,51),(14,24,52),(15,25,53),(16,26,54),(17,27,46),(18,19,47)], [(1,4,7),(2,5,8),(3,6,9),(10,13,16),(11,14,17),(12,15,18),(19,22,25),(20,23,26),(21,24,27),(28,31,34),(29,32,35),(30,33,36),(37,40,43),(38,41,44),(39,42,45),(46,49,52),(47,50,53),(48,51,54)], [(2,45,32),(3,30,43),(5,39,35),(6,33,37),(8,42,29),(9,36,40),(11,21,52),(12,50,19),(14,24,46),(15,53,22),(17,27,49),(18,47,25),(20,23,26),(28,34,31),(38,41,44),(48,54,51)], [(1,2,3,4,5,6,7,8,9),(10,11,12,13,14,15,16,17,18),(19,20,21,22,23,24,25,26,27),(28,29,30,31,32,33,34,35,36),(37,38,39,40,41,42,43,44,45),(46,47,48,49,50,51,52,53,54)]]) C2×He3.C3 is a maximal subgroup of   He3.C12  He3.Dic3  He3.3Dic3 34 conjugacy classes class 1 2 3A 3B 3C 3D 3E 3F 6A 6B 6C 6D 6E 6F 9A ··· 9F 9G 9H 9I 9J 18A ··· 18F 18G 18H 18I 18J order 1 2 3 3 3 3 3 3 6 6 6 6 6 6 9 ··· 9 9 9 9 9 18 ··· 18 18 18 18 18 size 1 1 1 1 3 3 9 9 1 1 3 3 9 9 3 ··· 3 9 9 9 9 3 ··· 3 9 9 9 9 34 irreducible representations dim 1 1 1 1 1 1 1 1 3 3 3 3 type + + image C1 C2 C3 C3 C3 C6 C6 C6 He3 C2×He3 He3.C3 C2×He3.C3 kernel C2×He3.C3 He3.C3 C3×C18 C2×He3 C2×3- 1+2 C3×C9 He3 3- 1+2 C6 C3 C2 C1 # reps 1 1 2 2 4 2 2 4 2 2 6 6 Matrix representation of C2×He3.C3 in GL4(𝔽19) generated by 18 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 , 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 , 1 0 0 0 0 7 0 0 0 0 7 0 0 0 0 7 , 7 0 0 0 0 1 0 0 0 0 11 0 0 0 0 7 , 1 0 0 0 0 12 18 12 0 12 12 18 0 18 12 12 G:=sub<GL(4,GF(19))| [18,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1],[1,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0],[1,0,0,0,0,7,0,0,0,0,7,0,0,0,0,7],[7,0,0,0,0,1,0,0,0,0,11,0,0,0,0,7],[1,0,0,0,0,12,12,18,0,18,12,12,0,12,18,12] >; C2×He3.C3 in GAP, Magma, Sage, TeX C_2\times {\rm He}_3.C_3 % in TeX G:=Group("C2xHe3.C3"); // GroupNames label G:=SmallGroup(162,29); // by ID G=gap.SmallGroup(162,29); # by ID G:=PCGroup([5,-2,-3,-3,-3,-3,187,147,728]); // Polycyclic G:=Group<a,b,c,d,e|a^2=b^3=c^3=d^3=1,e^3=c,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,b*c=c*b,d*b*d^-1=b*c^-1,b*e=e*b,c*d=d*c,c*e=e*c,e*d*e^-1=b*c^-1*d>; // generators/relations Export ׿ × 𝔽
# compiler question Visual c++

Does anyone know how to keep the compiler from including header files multiple times? For example: I am making a game, and instead of programming everything in one cpp file, I made header files and the corresponding cpp files. I included the header files into all of the files, but then I get an error message saying LNK2005. My best guess is that I put the same header files into each cpp document, and the compiler is reading it as me trying to make the same thing twice. Is there a way to tell it that I only want to use a header file once?

## Recommended Posts

If you aren't worried about cross-platform compatibility, then stick #pragma once at the top of your headers. If you are worried about cross-platform compatibility, then use inclusion guards, which look like this:

#ifndef _SOME_IDENTIFIER_SPECIFIC_TO_THIS_HEADER_
#define _SOME_IDENTIFIER_SPECIFIC_TO_THIS_HEADER_
//Header code goes here
#endif
# Next greater permutation where a given index is changed

Suppose we have the set $\{1,...,n\}$ and we are given a permutation of its elements, say for $n=4$ we have $3214$. Then we are given an index $i$ (say indexes go from $1$ to $n$) and we are asked to find the next smallest permutation that is greater than the current permutation, in lexicographical order, where the number at position $i$ in the given permutation is not in its place. For example, for index $2$ in our given permutation, the next greater permutation where the $2$ is not at position $2$ is $3412$.

I have thought about this, and it is easy to just step lexicographically through all permutations greater than the current one until we reach one where the number at index $i$ has changed. There is an algorithm for that, but that solution is trivial. Is there a way to find it without having to pass over all permutations in between?
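For concreteness, here is the trivial approach described in the question (my own sketch; in the worst case it can visit on the order of $n!$ permutations, which is exactly what we would like to avoid):

```python
def next_permutation(a):
    # Standard in-place next lexicographic permutation; returns False if a is the last one.
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        return False
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = reversed(a[i + 1:])
    return True

def next_perm_changing_index(perm, i):
    # Naive method from the question: step through successor permutations
    # until the entry at position i (1-based) differs from its current value.
    a = list(perm)
    target = a[i - 1]
    while next_permutation(a):
        if a[i - 1] != target:
            return a
    return None

print(next_perm_changing_index([3, 2, 1, 4], 2))  # expected [3, 4, 1, 2]
```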
# An aromatic molecule will

(a) have $4n \pi$ electrons
(b) have $(4n+2)\pi$ electrons
(c) be planar, be cyclic
(d) b and c

For a compound to be aromatic it should have:
$(4n+2)\pi$ electrons - (Huckel's rule)
Planar structure - (because of resonance)
Cyclic structure - (because of the presence of $sp^2$ hybridised $C$ atoms)

Hence it is planar, cyclic and has $(4n+2)\pi$ electrons. Hence (d) is the correct answer.
## Loop Shape and Stability Margin Specifications

This example shows how to specify loop shapes and stability margins when tuning control systems with systune or looptune.

### Background

The systune and looptune commands tune the parameters of fixed-structure control systems subject to a variety of time- and frequency-domain requirements. The TuningGoal package is the repository for such design requirements.

### Loop Shape

The TuningGoal.LoopShape requirement is used to shape the open-loop response gain(s), a design approach known as loop shaping. For example,

s = tf('s');
R1 = TuningGoal.LoopShape('u',1/s);

specifies that the open-loop response measured at the location "u" should look like a pure integrator (as far as its gain is concerned). In MATLAB, use an AnalysisPoint block to mark the location "u", see the "Building Tunable Models" example for details. In Simulink, use the addPoint method of the slTuner interface to mark "u" as a point of interest.

As with other gain specifications, you can just specify the asymptotes of the desired loop shape using a few frequency points. For example, to specify a loop shape with gain crossover at 1 rad/s, -20 dB/decade slope before 1 rad/s, and -40 dB/decade slope after 1 rad/s, just specify that the gain at the frequencies 0.1,1,10 should be 10,1,0.01, respectively.

LS = frd([10,1,0.01],[0.1,1,10]);
R2 = TuningGoal.LoopShape('u',LS);
bodemag(LS,R2.LoopGain)
legend('Specified','Interpolated')

Loop shape requirements are constraints on the open-loop response $L$. For tuning purposes, they are converted into closed-loop gain constraints on the sensitivity function $S=1/\left(1+L\right)$ and complementary sensitivity function $T=L/\left(1+L\right)$. Use viewGoal to visualize the target loop shape and corresponding gain bounds on $S$ (green) and $T$ (red).

viewGoal(R2)

### Minimum and Maximum Loop Gain

Instead of TuningGoal.LoopShape, you can use TuningGoal.MinLoopGain and TuningGoal.MaxLoopGain to specify minimum or maximum values for the loop gain in a particular frequency band. This is useful when the actual loop shape near crossover is best left to the tuning algorithm to figure out. For example, the following requirements specify the minimum loop gain inside the bandwidth and the roll-off characteristics outside the bandwidth, but do not specify the actual crossover frequency nor the loop shape near crossover.

MinLG = TuningGoal.MinLoopGain('u',5/s); % integral action
MinLG.Focus = [0 0.2];
MaxLG = TuningGoal.MaxLoopGain('u',1/s^2); % -40dB/decade roll off
MaxLG.Focus = [1 Inf];
viewGoal([MinLG MaxLG])

The TuningGoal.MaxLoopGain requirement rests on the fact that the open- and closed-loop gains are comparable when the loop gain is small ($|L|\ll 1$). As a result, it can be ineffective at keeping the loop gain below some value close to 1. For example, suppose that flexible modes cause gain spikes beyond the crossover frequency and that you need to keep these spikes below 0.5 (-6 dB). Instead of using TuningGoal.MaxLoopGain, you can directly constrain the gain of $L$ using TuningGoal.Gain with a loop opening at "u".

MaxLG = TuningGoal.Gain('u','u',0.5);
MaxLG.Opening = 'u';

If the open-loop response is unstable, make sure to further disable the implicit stability constraint associated with this requirement.

MaxLG.Stabilize = false;

Figure 1 shows this requirement evaluated for an open-loop response with flexible modes.
Figure 1: Gain constraint on L

### Stability Margins

The TuningGoal.Margins requirement enforces minimum amounts of gain and phase margins at the specified loop opening site(s). For MIMO feedback loops, this requirement uses the notion of disk margins, which guarantee stability for concurrent gain and phase variations of the specified amount in all feedback channels (see diskmargin for details). For example, the following code enforces $±6$ dB of gain margin and 45 degrees of phase margin at a location "u".

R = TuningGoal.Margins('u',6,45);

In MATLAB, use an AnalysisPoint block to mark the location "u" (see Building Tunable Models for details). In Simulink, use the addPoint method of the slTuner interface to mark "u" as a point of interest (see Create and Configure slTuner Interface to Simulink Model (Simulink Control Design)).

Stability margins are typically measured at the plant inputs or plant outputs or both. The target gain and phase margin values are converted into a normalized gain constraint on some appropriate closed-loop transfer function. The desired margins are achieved at frequencies where the gain is less than 1.

Use viewGoal to examine the requirement you have configured.

viewGoal(R)

The shaded region indicates where the constraint is violated. After tuning, for a tuned model T, you can use viewGoal(R,T) to see the tuned frequency-dependent margins on this plot.
# NH Modified Helmholtz Equation with Robin Boundary Condition 1. Aug 7, 2013 ### Meconium Hi, I am working on a quite difficult, though seemingly simple, non-homogeneous differential equation in cylindrical coordinates. The main equation is the non homogeneous modified Helmholtz Equation $\nabla^{2}\psi - k^{2}\psi = \frac{-1}{D}\frac{\delta(r-r')\delta(\theta-\theta')\delta(z-z')}{r}$ with Robin boundary condition $\psi - \kappa\hat{\Omega}_n\cdot\vec{\nabla}\psi = 0$ on $r=a$, the edge of a virtual infinitely long cylinder of radius $r=a$. $\hat{\Omega}_n$ is a vector pointing out of the cylinder. The solution $\psi$ must also vanish at infinity, i.e. $\psi(r\rightarrow\infty,z\rightarrow\pm\infty) = 0$, to satisfy the Sommerfeld Radiation Condition. I have tried the Green's function approach in cartesian coordinates, though the Robin boundary condition makes it hard to easily solve. I have also tried it in polar coordinates, but I can't find any reference on how to use Green's function on periodic domains. This problem arises from the diffusion approximation in biomedical imaging, and a solution would be of great help in my research. Thanks a lot ! Last edited: Aug 7, 2013 2. Aug 7, 2013 Just curious, is this from what you're studying in college? Which course? 3. Aug 7, 2013 ### Meconium No it's not, I am working in an Optical Radiology Lab, and this problem is well documented in a cartesian semi-infinite medium, where the Robin boundary condition is simply on z = 0 (quite easier, isn't it?). However for a certain application (that I cannot disclose) I need to solve it in cylindrical coordinates. 4. Aug 7, 2013 ### Mandelbroth I'm assuming your $D$ is the diffusion constant, right, and not some weird differential operator? 5. Aug 7, 2013 ### Meconium Yeah, it's only the diffusion constant, sorry for not specifying. 6. Aug 8, 2013 ### Mandelbroth You're sure it's too hard in Cartesian coordinates? Could you show us where it got too difficult for you? 7. Aug 9, 2013 ### Meconium The normal vector $\hat{\Omega}_n$ is directed out of the cylinder, so $\hat{\Omega}_n$ is $\frac{x\vec{i}+y\vec{j}}{\sqrt{x^2+y^2}}$ instead of only $\vec{r}$ 8. Aug 9, 2013 ### Mandelbroth However, this hardship is exchanged for a rather unfriendly inner product, which makes the boundary condition more difficult than worth solving. For Cartesian coordinates, we can just solve using a convolution and attempt to fit the Robin boundary condition. 9. Aug 9, 2013 ### Meconium I will try that then. Thanks a lot for the help ! 10. Aug 9, 2013 ### Mandelbroth You're very welcome.
# What are the sub $C^*$-algebras of $C(X,M_n)$? Let $X$ be a locally compact Hausdorff topological space, denote by $M_n$ the $C^*$-algebra of complex $n\times n$ matrices, by $C_0(X,M_n)$ the $C^*$-algebra of continuous functions on $X$ with values in $M_n$ vanishing at infinity, and by $C_b(X,M_n)$ the $C^*$-algebra of bounded continuous functions on $X$ with values in $M_n$. Does there exist a description of all sub $C^*$-algebras of $C_0(X,M_n)$ or $C_b(X,M_n)$? I am most interested in the (easier?) case where $X$ is compact and sub $C^*$-algebras of $C(X,M_n)$ containing the unit of $C(X,M_n)$. For the case $n=1$, i.e., for $C_0(X)$, you can find the answer here: What is the commutative analogue of a C*-subalgebra?. - I don't think there is a simple description: you can have $C^*$-subalgebras with non-Hausdorff spectra (simplest example: continuous functions $[0,1]\rightarrow M_n$ which are diagonal at $0$); and if you assume your $C^*$-subalgebra to be homogeneous (i.e. all its irreducible rep's are of the same dimension), then you have the $K$-theory of $C(X)$ showing up (if $p$ is a self-adjoint idempotent in $M_n(C(X))$, look at the subalgebra $pM_n(C(X))p$). – Alain Valette Mar 6 '13 at 20:10 Every $C^*$-subalgebra $A$ of $C_0(X,M_n)$ has irreps of dimension $\leq n.$ (Just because every irrep of $A$ can be continued to an irrep of $C_0(X,M_n)$). Such $C^*$-algebras are called $n$-subhomogeneous. There is a complete (rather complicated) description of such $C^*$-algebras in algebraic topological terms: Vasilʹev, N. B. "$C^∗$-algebras with finite-dimensional irreducible representations." Uspehi Mat. Nauk 21 1966 no. 1 (127), 135–154. Much easier is to describe the $C^*$-algebras with all irreps of the same dimension $n$. They are exactly the algebras of $C_0$-sections of vector bundles over l.-c. hausdorff spaces with fiber $M_n$ and group $PU_n,$ see Tomiyama, Jun; Takesaki, Masamichi Applications of fibre bundles to the certain class of $C^∗$-algebras. Tôhoku Math. J. (2) 13 1961 498–522.
# zbMATH — the first resource for mathematics Construction of nonlinear Boolean functions with important cryptographic properties. (English) Zbl 1082.94529 Preneel, Bart (ed.), Advances in cryptology - EUROCRYPT 2000. 19th international conference on the theory and application of cryptographic techniques, Bruges, Belgium, May 14–18, 2000. Proceedings. Berlin: Springer (ISBN 3-540-67517-5). Lect. Notes Comput. Sci. 1807, 485-506 (2000). This paper addresses the problem of obtaining new construction methods for cryptographically significant Boolean functions. We show that for each positive integer $$m$$, there are infinitely many integers $$n$$ (both odd and even), such that it is possible to construct $$n$$-variable, $$m$$-resilient functions having nonlinearity greater than $$2^{n-1}-2^{[\frac n2]}$$. Also we obtain better results than all published works on the construction of $$n$$-variable, $$m$$-resilient functions, including cases where the constructed functions have the maximum possible algebraic degree $$n-m-1$$. Next we modify the Patterson-Wiedemann functions to construct balanced Boolean functions on $$n$$-variables having nonlinearity strictly greater than $$2^{n-1}-2^{\frac{n-1}2}$$ for all odd $$n\geq 15$$. In addition, we consider the properties strict avalanche criteria and propagation characteristics which are important for design of $$S$$-boxes in block ciphers and construct such functions with very high nonlinearity and algebraic degree. For the entire collection see [Zbl 0939.00052]. ##### MSC: 94A60 Cryptography 06E30 Boolean functions
Automatic Differentiation Automatic Differentiation enables you to compute both the value of a function at a point and its derivative(s) at the same time. When using Forward Mode this roughly means that a numerical value is equipped with its derivative with respect to one of your inputs, which is updated accordingly on every function application. Let the number $x_0$ be equipped with the derivative $x_1$: $\langle x_0,x_1 \rangle$. For example, the sine function is defined as: • $\sin\langle x_0,x_1 \rangle = \langle \sin x_0, x_1\cdot\cos x_0\rangle$ Replacing this single derivative with a lazy list of them can enable you to compute an entire derivative tower at the same time. However, it becomes more difficult for vector functions, when computing the derivatives in reverse, when computing towers, and/or when trying to minimize the number of computations needed to compute all of the kth partial derivatives of an n-ary function. Forward mode is suitable when you have fewer arguments than outputs, because it requires one pass through the function for each input. Reverse mode is suitable when you have fewer results than inputs, because it requires one pass through the function for each output. Implementations: Power Series If you can compute all of the derivatives of a function, you can compute its Taylor series from them.
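To make the $\langle x_0, x_1\rangle$ bookkeeping above concrete, here is a minimal forward-mode sketch using dual numbers. The implementations linked from this page are Haskell libraries; this standalone example is written in Python purely for illustration, and the function being differentiated is an arbitrary choice.

```python
import math

class Dual:
    """A number x0 carried together with its derivative x1: <x0, x1>."""
    def __init__(self, x0, x1=0.0):
        self.x0, self.x1 = x0, x1
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.x0 + o.x0, self.x1 + o.x1)
    __radd__ = __add__
    def __mul__(self, o):
        # product rule on the derivative component
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.x0 * o.x0, self.x1 * o.x0 + self.x0 * o.x1)
    __rmul__ = __mul__

def sin(d):
    # the rule from the text: sin<x0, x1> = <sin x0, x1 * cos x0>
    return Dual(math.sin(d.x0), d.x1 * math.cos(d.x0))

# f(x) = x * sin(x); seed the input with derivative 1 to obtain df/dx.
x = Dual(1.5, 1.0)
y = x * sin(x)
print(y.x0, y.x1)   # value and derivative: sin(x) + x*cos(x) at x = 1.5
```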
## Cryptology ePrint Archive: Report 2021/821 On the hardness of the NTRU problem Alice Pellet-Mary and Damien Stehlé Abstract: The 25 year-old NTRU problem is an important computational assumption in public-key cryptography. However, from a reduction perspective, its relative hardness compared to other problems on Euclidean lattices is not well-understood. Its decision version reduces to the search Ring-LWE problem, but this only provides a hardness upper bound. We provide two answers to the long-standing open problem of providing reduction-based evidence of the hardness of the NTRU problem. First, we reduce the worst-case approximate Shortest Vector Problem over ideal lattices to an average-case search variant of the NTRU problem. Second, we reduce another average-case search variant of the NTRU problem to the decision NTRU problem. Category / Keywords: Original Publication (with minor differences): IACR-ASIACRYPT-2021 Date: received 16 Jun 2021, last revised 5 Oct 2021 Contact author: alice pellet-mary at math u-bordeaux fr, damien stehle at ens-lyon fr Available format(s): PDF | BibTeX Citation Short URL: ia.cr/2021/821 [ Cryptology ePrint archive ]
# A Nonlinear Rate Microsensor utilising Internal Resonance ## Abstract Micro- and nano-resonators have been studied extensively, both from the scientific viewpoint, to understand basic interactions at small scales, and for applied research, to build sensors and mechanical signal processors. The majority of resonant microsystems, particularly those manufactured at a large scale, have employed simple mechanical structures with one dominant resonant mode, such as in timing resonators, or linearly coupled resonant modes, as in vibratory gyroscopes. There is an increasing interest in the development of models and methods to better understand the nonlinear interactions at micro- and nano-scales and also to potentially improve the performance of the existing devices in the market beyond the limits permitted by linear effects. Internal resonance is a phenomenon that allows for nonlinear coupling and energy transfer between different vibration modes of a properly designed system. Herein, for the first time, we describe and experimentally demonstrate the potential for employing internal resonance for detection of angular rate signals, where the Coriolis effect modifies the energy coupling between the distinct drive and sense vibration modes. In doing so, in addition to providing a robust method of exciting the desired mode, the proposed approach further alleviates the mode-matching requirements and reduces instabilities due to the cross-coupling between the modes in current linear vibratory gyroscopes. ## Introduction Micro- and nano-fabricated resonators are used in numerous applications including timing and frequency references, filters, chemical sensing, and physical sensing. Small-amplitude dynamics of these systems are typically studied using well-established methods such as Euler-Bernoulli theory for oscillating beams1. Various phenomena such as large-amplitude operation2 and material nonlinearities3 can cause a device response to deviate from linear predictions. In typical microsystem applications, such nonlinear effects are often avoided. Over the past decade, there has been growing interest in developing design methodologies for microsystems that exhibit different types of nonlinearities for basic research4,5,6,7,8 and in employing nonlinearities to improve the performance of micro- and nano-resonant systems, for instance, to improve the sensitivity of sensors9,10 or the stability of timing references11. Internal resonance is a particular nonlinear phenomenon that can cause nonlinear modal interactions between vibration modes directly excited by external harmonic forces and other vibration modes. Internal resonance may occur when the linear natural frequencies of a system are commensurate or nearly commensurate (e.g., ω2 = kω1 or ω2 ≈ kω1, where k = 2, 3, …) and there exist nonlinear coupling terms between the vibration modes12. Internal resonance has been studied from different perspectives because of its interesting dynamic properties. Consider a two degree-of-freedom (DOF) system where the equations of motion are: $$\{\begin{array}{l}\ddot{x}+{\gamma }_{1}\dot{x}+{\omega }_{1}^{2}x=f(x,y)+F\,\cos ({{\rm{\Omega }}}_{r}t+{{\rm{\Phi }}}_{1})\\ \ddot{y}+{\gamma }_{2}\,\dot{y}+{\omega }_{2}^{2}y=g(x,y)\end{array}$$ (1) where the first equation represents the externally excited mode through input force $$F\,\cos ({{\rm{\Omega }}}_{r}t+{{\rm{\Phi }}}_{1})$$ and the second equation represents the indirectly excited sub-system.
Parameters γ1 and γ2 are the damping coefficients while ω1 and ω2 are the natural frequencies of the undamped linear oscillators for the vibrational modes, respectively. Functions f(x, y) and g(x, y) represent the coupling between the two modes. In linear Coriolis vibratory gyroscopes (CVGs), for instance, these functions will be anti-symmetric (i.e., f(x, y) = −g(x, y) and $$f(x,y)\propto m\dot{y}{\rm{\Omega }}$$. Nonlinear 2:1 internal resonance occurs as a result of quadratic nonlinearities present in the system when Ωr ≈ ω1, ω1 ≈ 2ω2, $$f(x,y)={\dot{y}}^{2}$$ and $$g(x,y)=2\dot{x}\dot{y}$$. In this case, the nonlinear quadratic coupling terms lead to auto-parametric excitation of the lower-frequency mode by the higher-frequency mode13. In other words, the vibration energy from the mode with a higher natural frequency is pumped into the mode of lower natural frequency. The amount of energy that is transferred depends on the type of quadratic nonlinearities, the amplitude of external force, modal Q-factors, and the frequency ratio between the vibrational modes, among others8. An interesting characteristic of internally resonant systems with 2:1 frequency ratio in the presence of quadratic coupling nonlinearities is saturation. When the system is excited at a frequency near the higher natural frequency, the structure responds to the frequency of excitation, and the amplitude of the response increases linearly with the amplitude of excitation13. However, when the vibrational amplitude of this mode reaches a threshold value, it saturates and the additional energy from the input source is transferred to the lower natural frequency mode due to the nonlinear coupling between them. The mode with the lower resonant frequency then starts to oscillate at half the excitation frequency and its amplitude grows proportional to the spill-over energy from the mode directly excited by the input. In macro-systems, internal resonance has been mainly studied to better understand the modal interactions with the intention of suppressing unwanted vibrations14,15,16. However, practical applications of internal resonance at micro- and nano-scales have remained limited to basic demonstrations of the phenomena8,17, taking advantage of the saturation phenomenon for amplitude and frequency stabilization11, and exploiting the phenomenon for mass-sensing18. An application area that requires two coupled resonators and can potentially benefit from internal resonance is the measurement of angular rate. Gyroscopes made using microelectromechanical systems (MEMS) technologies are used increasingly in applications ranging from navigation and robotics to stability control due to their small size, low power consumption and low cost. Many MEMS gyroscopes operate based on Coriolis effect where the Coriolis acceleration couples two structural modes of vibration (i.e., sense and drive) dynamically. To maximize the sensitivity of MEMS Coriolis vibratory gyroscopes (CVGs), the natural frequencies of the sense and drive modes are designed to match. However, perfect matching between these modes is challenging because of the manufacturing nonidealities and tolerances. Moreover, maintaining the frequency matching during the operation of the gyroscope usually is not possible since parameter fluctuations under operating conditions may induce further mistuning. Proposed solutions for mode-matching include post-processing techniques19, active electronic control systems20, and structural design improvements21. 
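The saturation mechanism just described can be reproduced qualitatively by integrating Eq. (1) directly with the quadratic couplings $f=\dot y^2$ and $g=2\dot x\dot y$. The sketch below uses made-up, dimensionless parameters — not the device values reported later in this paper — and the truncated quadratic model is only meaningful at moderate amplitudes, so the numbers are purely illustrative of the energy transfer into the half-frequency mode.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless, made-up parameters (illustrative only).
w1, w2 = 2.0, 1.0          # 2:1 ratio between the two natural frequencies
g1, g2 = 0.02, 0.01        # damping coefficients
F, Om  = 0.1, 2.0          # forcing amplitude and frequency, Om ~ w1 ~ 2*w2

def rhs(t, s):
    x, xd, y, yd = s
    xdd = -g1 * xd - w1**2 * x + yd**2 + F * np.cos(Om * t)   # driven (spring) mode
    ydd = -g2 * yd - w2**2 * y + 2.0 * xd * yd                # indirectly excited (pendulum) mode
    return [xd, xdd, yd, ydd]

# Tiny seed in y so the autoparametric channel can open up.
sol = solve_ivp(rhs, (0, 2000), [0.0, 0.0, 1e-6, 0.0], max_step=0.05)
x, y = sol.y[0], sol.y[2]
n = len(sol.t) // 4
print("late-time |x| ~", np.abs(x[-n:]).max(), " |y| ~", np.abs(y[-n:]).max())
```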
Inherent robustness may be achieved by widening the bandwidth of the sense mode through linear coupling of vibration modes22. The amount of frequency split can be tuned using voltage signals23. Calculation of the required excitation signals can be carried through, for instance, statistical learning methods for the automatic mode-matching circuit and elimination of the frequency split24. It has also been proposed that by employing parametric resonance, the resonance frequency of the drive mode can be varied as needed until the sense mode is excited25. In this work, we are proposing a fundamentally different approach to improve the robustness of a MEMS gyroscope response to design parameter variations. The proposed design exploits the 2:1 internal resonance within a microresonator with 2 DOF such that the sense mode bandwidth is widened to the extent to circumvent the requirement for mode matching. The phenomenon is self-initiating once the excitation amplitude exceeds a threshold level without a need for a controller, electrostatic tuning, or additional DOF. ## Results ### Device design The device utilises twin proof masses that are mechanically connected to two crossbar suspensions forming a tuning fork type resonator as shown in Fig. 1. Slender flexural beams are used for the connections between the device components and to the substrate. The device is excited by applying input voltages to the two parallel plate drive electrodes (DE) 1 and 2 to exert an electrostatic force onto the structure. Movements of the structure are measured using a combination of capacitive sense currents from the sense electrodes (SE) 1–4. Under small forces, the device remains linear with two separate modes. With an adequately large driving force, the microresonator behaves a spring-pendulum mechanism with quadratic nonlinearities (see the Supplementary Information attachment on modeling of the system)26. Nonlinearities at small scales can stem from structural, electrostatic, and material origins. In this particular case, large displacements will result in quadratic nonlinearities due to the combined linear and angular movements of the structural components. The twin proof masses can move in the radial direction (spring mode) or rotate about the anchor (pendulum mode), simultaneously. Under these conditions, there can be a nonlinear coupling of vibration energy from the forced vibrations along the spring direction to the pendulum mode through 2:1 internal resonance because of the inherent quadratic nonlinearities in the system. Taking into account the quadratic Coriolis and centripetal nonlinearities, the system can be modeled using different methods, including the perturbation method (see the Supplementary Information attachment)27. The designed microdevice prioritizes anti-phase resonant modes such that the natural frequency of the anti-phase resonant mode along the x-axis (spring mode) is twice the frequency of the anti-phase resonant mode along the y-axis (pendulum mode). The fabricated device is vacuum sealed through a hermetic, wafer-level process before dicing. ### Linear response characterisation The natural frequencies of the microresonator (fs and fp) and the quality factors (Qs and Qp) for the spring and pendulum modes are measured using the test-bed shown in Fig. 2a. A vector network analyzer was used to measure the device response at the frequency of the input signal. A DC bias voltage is used to enhance the linear component of the input electrostatic force. 
Measured spectra around the spring and pendulum mode frequencies at the onset of nonlinearities are shown in Fig. 2b,c, respectively. For the device studied here, fp = 560.28 kHz with Qp = 6900 (pendulum mode) and fs = 1122.35 kHz with Qs = 5700 (spring mode), leading to a frequency ratio between spring and pendulum modes of fs/fp = 2.003 ≈ 2. ### 2:1 Internal resonance The nonlinear mode coupling caused by the 2:1 internal resonance is evaluated through the saturation figure and the nonlinear frequency response curves. The configuration in Fig. 3a was employed for these experiments, where an AC voltage from a signal generator was superimposed on a DC voltage and used to produce the electrostatic input force. The output current due to the capacitance changes at the sense electrodes was differentially measured and monitored on a signal analyzer to allow for simultaneous monitoring of different portions of the spectrum. The intended 2:1 internal resonance and nonlinear mode coupling were examined through several experiments. Figure 3b demonstrates the half-order subharmonic response of the sense (i.e., pendulum) mode of the device as the drive voltage amplitude is increased and the frequency of excitation is fixed at Ωr = 1120.45 kHz ≈ 2fp. As can be seen, the amplitude of the sense mode (i.e., pendulum mode) is negligible for drive voltages Vac ≤ ~1 V. However, once past this threshold voltage, the drive mode saturates and the additional energy is transferred to the sense mode as Vac is increased further. Figure 3c shows the nonlinear frequency response curves of the sense mode (i.e., pendulum mode) for three different actuation voltages. Despite the actuation of the system at an excitation frequency in the vicinity of the spring-mode resonant frequency, the pendulum mode responds at half the excitation frequency. As the electrostatic voltage is increased, the microresonator exhibits increasing nonlinear mode coupling, where the vibrational amplitude splits and two peaks of the vibrational deflection emerge in the vicinity of the pendulum-mode resonant frequency. Further increasing the drive voltage Vac results in additional separation between the peaks. ### Response to angular rate For performance evaluation as a rate sensor, the overlap between the forward and backward sweeps, as shown in Fig. 4, was selected as a robust region for applying the input force. To operate the sensor in this region, the frequency of the drive signal needs to be about twice some frequency in this band, eliminating the need for precise frequency control and providing a robust method for device excitation. To assess the scale factor (i.e., sensitivity) of the device to input angular rate, the setup illustrated in Fig. 5 was used. The device chip was mounted inside a ceramic package and affixed to a printed circuit board (PCB). The PCB was then securely placed on a rate table with a trans-resistance amplifier as shown in Fig. 6. The excitation signal was produced by a lock-in amplifier and passed through a frequency doubler before application to the sensor to excite the drive mode. Vibrations at the sense mode were detected by differential amplification of the currents from the sense electrodes and removing the interference from the excitation signal using a notch filter. The at-rest output spectrum of the microdevice is shown in Fig. 7a, confirming the activation of the nonlinear mode coupling due to the 2:1 internal resonance.
Rate signals in the range of ±360 deg·sec−1 were applied to the device using the rate-table controller. Figure 7b shows the calibration curve obtained by applying different constant angular rates on the rate table and observing the corresponding output voltages of the microresonator. The x-axis in the figure represents the reference signal read from the rate-table controller while the y-axis represents the difference between the output current in response to the input rate and the at-rest signal from the resonator. Due to the quadratic nature of the nonlinearities in 2:1 internal resonance, the direction of the rate signal needs to be obtained from inertial signals. As seen in Fig. 7b, the microresonator demonstrates a sensitivity of 110 fA/(deg·sec−1) over a measurement range of approximately ±220 deg·sec−1 with a DC bias voltage of 80 V. At higher rates, the microresonator output exhibits reduced sensitivity to the input. This is likely due to the effect of centrifugal force on the device response, which results in detuning of the mode frequency ratio and hence affects the efficiency of energy exchange between the drive and sense modes. ## Discussion There has been a significant level of interest in understanding the nonlinear dynamics of micro- and nano-structures. Apart from basic research efforts on nonlinear phenomena, the understanding of nonlinearities was mostly deemed necessary in order to devise methods to avoid them in practical applications. Herein, we presented a proof-of-concept design for a nonlinear rate microsensor that employs 2:1 internal resonance for its operation. In contrast to the typical designs for Coriolis vibratory gyroscopes with linearly coupled drive and sense modes, utilisation of internal resonance provides a robust means for excitation of the resonator over a range of input frequencies. For instance, in Fig. 4 it can be seen that the two vibration modes are coupled to each other effectively, where the sense amplitude changes from ~0.75 mV to ~2 mV over a sense frequency range of 559.5 kHz to 561.5 kHz. In the case of a linear CVG with a resonant frequency of 561 kHz and a similar quality factor of 4000, a 1 kHz deviation of the drive signal would result in a drop in signal amplitude of ~14×. For this reason, the nonlinear operation scheme to a great extent alleviates the mode-tuning requirements for the drive and sense modes of the device. Additionally, the nonlinear operation of the device results in separation of the electrical drive and sense signals in the frequency domain, potentially simplifying the signal processing by removing direct interference between them. On the other hand, the nonlinear operation reduces the linear cross-coupling between the drive and sense directions that degrades the performance of linearly-coupled CVGs. This is specifically due to the fact that the linear cross-coupling terms will not force the system to resonate, owing to the 2:1 frequency ratio between the spring and pendulum modes. Furthermore, their effect is negligible when compared to the quadratic couplings in the system. As the first structural design to demonstrate the concept, the device offered a sensitivity of 110 fA/(deg·sec−1). The device sensitivity can be improved, for example, by increasing the proof masses used for the resonators. It is also possible to improve the device performance metrics such as sensitivity, dynamic range, and bandwidth through nonlinear closed-loop control.
Further research will be needed to explore the limits of operation regarding noise performance or long-term stability of the device response. ## Methods ### Fabrication process The microresonator was fabricated through the MEMS Integrated Design for Inertial Sensors (MIDIS) process offered by Teledyne DALSA Inc.13. The MIDIS process is based on high aspect-ratio, bulk micromachining of a 30 μm thick single-crystal silicon wafer (device layer) that is sandwiched between two other silicon wafers (top interconnect and bottom handle wafers). The device can either be vacuum encapsulated at 10 mTorr or held at a sub-atmospheric pressure of 150 Torr. The top silicon wafer includes Through Silicon Vias (TSV) with sealed anchors for compact flip-chip integration and interconnection with external microelectronic signal processing circuitry28. The encapsulation of vibrational inertial MEMS resonators at low pressures influences their quality factor and response time28,29. Figure 1a shows the top-view, Scanning Electron Microscope (SEM) image of the fabricated device before encapsulation with the top wafer. The dimensions of the fabricated device are La = 165 μm, wa = 8 μm, Lc = 76 μm, Ltf = 72 μm, wtf = 25 μm, w1 = 29 μm, h1 = 60 μm, w2 = 35 μm, h2 = 201 μm, l1 = 194 μm, l2 = 197 μm, g = 1.75 μm, to achieve the approximate 2:1 ratio between the modes (Fig. 1b). ### Frequency domain characterisation The measurement setup in Fig. 2a comprises (1) the vacuum encapsulated microresonator chip; (2) a DC power source (Keysight B2901A) to produce VDC; (3) a vector network analyzer (Rohde & Schwarz ZVB4) to produce $${V}_{ac}\,\cos ({{\rm{\Omega }}}_{r}t)$$; (4) a signal combiner/splitter (Mini-Circuits ZFSCJ-2-2-S); and (5) a transimpedance amplifier (Zurich Instruments HF2TA) with a gain of Gamp. In this figure, the signals Y1 and Y2 are the currents produced due to capacitance changes between the stationary sense electrodes SE2 and SE4 and the crossbars of the resonator. The settings used to determine the natural frequencies and the Q-factors were VDC = 100 V, Vac = 1 V, and Gamp = 1 kΩ. The nonlinear frequency measurement setup in Fig. 3a includes (1) the vacuum encapsulated microresonator chip; (2) a DC power source (Keysight B2901A) to produce VDC; (3) a function generator (Agilent Technologies 81150A) to produce Vac; (4) a signal combiner/splitter (Mini-Circuits ZFSCJ-2-2-S); (5) a transimpedance amplifier (Zurich Instruments HF2TA) with a gain of Gamp; and (6) a signal analyzer (Agilent Technologies N9000A) to monitor the sensed signals in the frequency domain. For these tests, we set VDC = 100 V, Vac = 0–4.5 V, and Gamp = 10 kΩ. The drive frequency of the electrostatic voltage is then swept forward and backward around the spring mode frequency while monitoring the frequency response of the sense mode (pendulum mode). ### Sensitivity characterisation The linearity, full-scale range, and response of the microresonator to the input rotation rate are extracted from the data obtained through the scale factor tests using a high precision rate table (Ideal Aerosmith 1621-200A-TL with AERO 812 controller). The schematic of the setup for the rate measurement is shown in Fig. 5. The microresonator is mounted on a PCB and securely placed on the rate table along with a low-noise trans-impedance amplifier (FEMTO DHPCA-100). A lock-in amplifier (Zurich Instruments HF2LI) was used to produce the excitation voltage and record the measured signals.
While the lock-in amplifier could detect higher harmonics of the excitation signal, it is unable to capture subharmonics. On the other hand, the nonlinear transformation of the excitation signal from the frequency fs to $${f}_{p}\approx {f}_{s}/2$$ meant that the sense signal is roughly at half the frequency of the excitation signal, which would have been undetectable with the standard configuration. To circumvent this issue, a frequency doubler was used at the output of the lock-in amplifier to produce the excitation signal, which was further amplified using a voltage amplifier (TEGAM 2350), and thus treated as the first harmonic of the reference signal produced by the lock-in amplifier. The frequency doubler was designed using an analog multiplier chip (AD835) to multiply the reference signal by itself to generate its second harmonic at double the frequency. The signal at the output of the device included the sense signal at fp due to the nonlinear excitation of the sense mode as well as a parasitic contribution from the drive signal at fs. The interference at fs was then removed using a notch filter after initial amplification of the current signal using the transimpedance amplifier. The sense signal received by the lock-in amplifier was therefore at frequency fp, which was about fs/2. An input rate in the form of a trapezoidal wave was then applied to the sensor such that the applied rate was increased from 0 deg·sec−1 to the target rate with a constant angular acceleration of 40 deg·sec−2, kept constant at the intended rate for 30 s, and then decreased back to 0 deg·sec−1 with a constant angular acceleration of −40 deg·sec−2. An accelerometer was used to determine the direction of rotation of the rate table. During the test, the output response of the microresonator is monitored and recorded. The response of the sensor is measured as the change in the amplitude of the sense signal due to the input rate. For the sensitivity tests, we set VDC = 80 V and Vac = 0.4 V at a frequency of 561.236 kHz (before the frequency doubler), a fixed gain of 20 V/V for the high-voltage amplifier, and a gain of R = 5 × 10⁷ Ω for the transimpedance amplifier. The notch filter was designed based on a passive twin-T structure for a notch frequency of 1120 kHz. ## Ethics declarations ### Competing Interests The authors declare no competing interests. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Timoshenko, S. History of Strength of Materials. (Courier Corporation, 1983). 2. 2. Landau, L. D. & Lifshitz, E. M. Mechanics. 1, (Butterworth Heinemann, 1976). 3. 3. Nitzan, S. H. et al. Self-induced parametric amplification arising from nonlinear elastic coupling in a micromechanical resonating disk gyroscope. Scientific Reports 5, 9036 (2015). 4. 4. Jia, Y., Du, S. & Seshia, A. A. Twenty-Eight Orders of Parametric Resonance in a Microelectromechanical Device for Multi-band Vibration Energy Harvesting. Scientific Reports 6, 30167 (2016). 5. 5. Villanueva, L. G. et al. A Nanoscale Parametric Feedback Oscillator. Nano Lett. 11, 5054–5059 (2011). 6. 6. Zanette, D. H. Effects of noise on the internal resonance of a nonlinear oscillator. Scientific Reports 8, 5976 (2018). 7. 7. Ramini, A. H., Hajjaj, A. Z. & Younis, M. I. Tunable Resonators for Nonlinear Modal Interactions. Scientific Reports 6, 34717 (2016). 8. 8. Sarrafan, A., Bahreyni, B. & Golnaraghi, F.
Analytical modeling and experimental verification of nonlinear mode coupling in a decoupled tuning fork microresonator. Journal of Microelectromechanical Systems 27, 398–406 (2018). 9. 9. Kacem, N., Hentz, S., Pinto, D., Reig, B. & Nguyen, V. Nonlinear dynamics of nanomechanical beam resonators: improving the performance of NEMS-based sensors. Nanotechnology 20, 275501 (2009). 10. 10. Venstra, W. J., Capener, M. J. & Elliott, S. R. Nanomechanical gas sensing with nonlinear resonant cantilevers. Nanotechnology 25, 425501 (2014). 11. 11. Antonio, D., Zanette, D. H. & López, D. Frequency stabilization in nonlinear micromechanical oscillators. Nature Communications 3, 806 (2012). 12. 12. Nayfeh, A. H. & Mook, D. T. Nonlinear Oscillations. (John Wiley & Sons, 2008). 13. 13. Sarrafan, A., Bahreyni, B. & Golnaraghi, F. Design and characterization of microresonators simultaneously exhibiting 1/2 subharmonic and 2:1 internal resonances. In The 19th International Conference on Solid-State Sensors, Actuators and Microsystems. 102–105 (2017). 14. 14. Tuer, K. L., Golnaraghi, M. F. & Wang, D. Development of a generalised active vibration suppression strategy for a cantilever beam using internal resonance. Nonlinear Dyn 5, 131–151 (1994). 15. 15. Ikeda, T. & Murakami, S. Autoparametric resonances in a structure/fluid interaction system carrying a cylindrical liquid tank. Journal of Sound and Vibration 285, 517–546 (2005). 16. 16. Nayfeh, A. H., Mook, D. T. & Marshall, L. R. Nonlinear coupling of pitch and roll modes in ship motions. Journal of Hydronautics 7, 145–152 (1973). 17. 17. Vyas, A., Peroulis, D. & Bajaj, A. K. A Microresonator Design Based on Nonlinear 1:2 Internal Resonance in Flexural Structural Modes. Journal of Microelectromechanical Systems 18, 744–762 (2009). 18. 18. Kirkendall, C. R., Howard, D. J. & Kwon, J. W. Internal Resonance in Quartz Crystal Resonator and Mass Detection in Nonlinear Regime. Appl. Phys. Lett. 103, 223502 (2013). 19. 19. Joachim, D. & Lin, L. Characterization of Selective Polysilicon Deposition for MEMS Resonator Tuning. Journal of Microelectromechanical Systems 12, 193–200 (2003). 20. 20. Liu, Y. X. et al. Design of a Digital Closed Control Loop for the Sense Mode of a Mode-Matching MEMS Vibratory Gyroscope. In The 9th IEEE International Conference on Nano/Micro Engineered and Molecular Systems (NEMS) 199–203, https://doi.org/10.1109/NEMS.2014.6908790 (2014). 21. 21. Geiger, W. et al. Decoupled microgyros and the design principle DAVED. In Technical Digest. MEMS 2001. 14th IEEE International Conference on Micro Electro Mechanical Systems (Cat. No. 01CH37090) 170–173, https://doi.org/10.1109/MEMSYS.2001.906507 (2001). 22. 22. Acar, C. & Shkel, A. M. Inherently Robust Micromachined Gyroscopes with 2-DOF Sense-Mode Oscillator. Journal of Microelectromechanical Systems 15, 380–387 (2006). 23. 23. Wan, Q. et al. A high symmetry polysilicon micro hemispherical resonating gyroscope with spherical electrodes. In 2017 IEEE Sensors 1–3 (2017). 24. 24. He, C. et al. A MEMS Vibratory Gyroscope With Real-Time Mode-Matching and Robust Control for the Sense Mode. IEEE Sensors Journal 15, 2069–2077 (2015). 25. 25. Oropeza-Ramos, L. A., Burgner, C. B. & Turner, K. L. Inherently Robust Micro Gyroscope Actuated by Parametric Resonance. In 2008 IEEE 21st International Conference on Micro Electro Mechanical Systems 872–875, https://doi.org/10.1109/MEMSYS.2008.4443795 (2008). 26. 26. Lee, W. K. & Hsu, C. S. 
A global analysis of anharmonically excited spring-pendulum system with internal resonance. Journal of Sound and Vibration 171, 335–359 (1994). 27. 27. Nayfeh, A. H. Perturbation Methods. (John Wiley & Sons Inc, 1973). 28. 28. MIDIS Platform for Motion Sensors - Teledyne DALSA Inc. Available at, http://www.teledynedalsa.com/semi/mems/applications/midis/ (Accessed: 8th February 2017). 29. 29. Dienel, M. et al. On the influence of vacuum on the design and characterization of MEMS. Vacuum 86, 536–546 (2012). ## Acknowledgements The authors would like to acknowledge Natural Sciences and Engineering Research Council of Canada (NSERC) for providing financial support. Access to fabrication services and simulation tools was provided through CMC Microsystems. ## Author information A.S. and B.B. developed the initial designs and wrote the main manuscript text. S.A. designed the electrical circuit and helped A.S. with the experiments. B.B. and F.G. jointly supervised the work. All authors reviewed the manuscript. ### Competing Interests The authors declare no competing interests.
Marks 1
• The first moment of area about the axis of bending for a beam cross-section is … (GATE CE 2014 Set 2)
• Polar moment of inertia $\left( {{{\rm I}_p}} \right),$ in $c{m^4},$ of a rectangular section having width, $b=2$... (GATE CE 2014 Set 2)
Marks 2
• A disc of radius $'r'$ has a hole of radius $'r/2'$ cut-out as shown. The centroid of the remaining disc (shaded por... (GATE CE 2011)
• For the section shown below, second moment of the area about an axis $d/4$ distance above the bottom of the area is ... (GATE CE 2006)
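A worked sketch for the disc-with-hole centroid question, assuming the standard configuration in which the hole of radius r/2 is centred a distance r/2 from the disc centre (the figure referenced in the question is not reproduced here, so this geometry is an assumption):

```python
from sympy import symbols, pi, simplify

r = symbols('r', positive=True)
A_disc, A_hole = pi*r**2, pi*(r/2)**2
d_hole = r/2                       # assumed: hole centre at r/2 from the disc centre
# Composite-area rule: x_bar = (A_disc*0 - A_hole*d_hole) / (A_disc - A_hole)
x_bar = (A_disc*0 - A_hole*d_hole) / (A_disc - A_hole)
print(simplify(x_bar))             # -> -r/6, i.e. r/6 away from the hole side
```

Under that assumption the centroid of the remaining (shaded) area sits a distance r/6 from the disc centre, on the side away from the hole.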
# Recover frequencies from IQ samples I have a set of IQ data. From those data I'm trying to get the amplitudes and the frequencies of my signal, as I then want to plot them vs. time. I am able to obtain the amplitudes by squaring both my I and Q values, summing them up and taking the square root of the sum; however, I am struggling to obtain the frequency. I understand that I need to take an FFT to get from the time domain to the frequency domain, but I'm not sure on what values I should apply the FFT. Should I do it individually on Is and j*Qs and then sum them up? Should I do it on the sum Is + j*Qs (which just gave me a new array of complex numbers of the form x + j*y)? What role are my center frequency / sampling frequency going to play in that? For context: I'm doing this using Python. (And I'm obviously pretty new to all of this.) Since you took a 1000-point FFT, each of those complex output values corresponds to a frequency at a multiple of $$\frac{F_s}{1000}\text{Hz}$$ where $$F_s$$ is the Sampling Frequency. For example, the first value corresponds to the frequency content at $$0\text{Hz}$$, the next value to $$\frac{F_s}{1000}\text{Hz}$$, the 3rd value to $$\frac{2F_s}{1000}\text{Hz}$$, and so on. 1. Calculate the phase of your samples by performing $$\phi(k) =\tan^{-1}(s_Q(k)/s_I(k))$$, i.e. the angle of $s_I(k) + j\,s_Q(k)$, best computed with atan2. 2. Do not forget to unwrap $$\phi(k)$$, since it is limited to the $$(-\pi,\pi]$$ interval when computed with atan2 (or $$(-\pi/2,\pi/2)$$ with a plain arctangent). (Both Python and MATLAB have an unwrap function.) 3. Take the derivative of $$\phi(k)$$ and divide by $$2\pi$$. You can use the forward difference technique for this: $$\frac{1}{2\pi}\frac{d\phi(k)}{dk} \approx \frac{ \phi(k+1)-\phi(k)}{2\pi}.$$ The result is the instantaneous frequency of your I/Q data in cycles per sample; multiply by the sampling frequency $$F_s$$ to get Hz, and add the centre frequency if you want the absolute RF frequency.
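Since the question mentions Python, here is a minimal sketch of both computations — envelope from |I + jQ| and instantaneous frequency from the unwrapped phase — run on a synthetic tone so the expected answer is known. The sampling rate and tone offset are made-up example values.

```python
import numpy as np

fs = 1_000_000.0                   # sampling rate of the IQ stream, Hz (example value)
t = np.arange(0, 0.01, 1/fs)
f_tone = 12_345.0                  # baseband offset of the test tone, Hz (example value)
z = 0.7 * np.exp(2j*np.pi*f_tone*t)      # synthetic I + jQ samples
I, Q = z.real, z.imag

amplitude = np.abs(I + 1j*Q)                      # same as sqrt(I**2 + Q**2)
phase = np.unwrap(np.angle(I + 1j*Q))             # atan2(Q, I), unwrapped
inst_freq = np.diff(phase) / (2*np.pi) * fs       # Hz, relative to the centre frequency

print(amplitude[:3], inst_freq[:3])               # ~0.7 and ~12345 Hz
# Add the tuner centre frequency to inst_freq to obtain the absolute RF frequency.
```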
## Eigenvalues of the Laplacian and extrinsic geometry created by hassannezhad1 on 04 Dec 2020 [BibTeX] preprint Inserted: 4 dec 2020 Year: 2012 ArXiv: 1210.7714 PDF Abstract: We extend the results given by Colbois, Dryden and El Soufi on the relationships between the eigenvalues of the Laplacian and an extrinsic invariant called intersection index, in two directions. First, we replace this intersection index by invariants of the same nature which are stable under small perturbations. Second, we consider complex submanifolds of the complex projective space $\mathbb{C} P^N$ instead of submanifolds of $\mathbb{R}^N$ and we obtain an eigenvalue upper bound depending only on the dimension of the submanifold which is sharp for the first non-zero eigenvalue.
# How many different definitions of $e$ are there? It seems as though, in my analysis and calculus courses, in particular, a common cop-out when asked to prove an identity involving $e$, is the phrase "it's true by definition". So, I'm trying to find as many definitions of $e$ in order to see just how many of these identities can actually be a definition of $e$. So far, I've got the following (which are the ones most mathematicians know): • $e:=\lim\limits_{n \to \infty}(1+\frac{1}{n})^n=\lim\limits_{h \to 0}(1+h)^{1/h}$ • $e:=\sum\limits_{n=0}^{\infty}\frac{1}{n!}$ • $e$ is the global maximum of the function $x^{1/x}$ • $e$ is the real number satisfying $\int\limits_{1}^{e}\frac{1}{x}dx=1 \iff \begin{cases} \frac{d}{dx}[e^x]=e^x \\ \\ e^0=1 \end{cases}$ Does anyone have any more to add to the list? Thanks! • @barakmanos Already got it! – beep-boop Jun 14 '14 at 14:36 • I think most mathematicians consider the first one to be a theorem; it's only presented as a definition to beginners. Likewise, the third one is certainly not a definition. If you're looking for a list of "all identities involving $e$, then this is an impossibly broad and sort of useless question. If you're looking for just the accepted definitions, I think two and four are it. This amounts to "how many ways are there to define the exponential function", since this is the theoretical perspective on where $e$ comes from. – Ryan Reich Jun 14 '14 at 14:37 • $e = \lim_{n\to\infty} \frac{n}{\sqrt[n]{n!}}$ – barak manos Jun 14 '14 at 14:38 • In fact, here is all of them: en.wikipedia.org/wiki/Representations_of_e – barak manos Jun 14 '14 at 14:39 • @Ryan Reich: I am one of the mathematicians who thinks that the first definition in the list ( at least the $\lim_{n \to \infty}$ version) is a perfectly good way to define $e$. I agree about the real number version: I'm not sure how you define $(1+h)^{1/h}$ if $1/h \not \in \mathbb{N}$ if you don't already have some sort of $\log$ and exponential functions available. – Geoff Robinson Jun 14 '14 at 15:17 You really should be looking for definitions of the exponential function $e^x$, not definitions of $e$. Here are the most important ones that come to mind: 1. $e^x$ is the unique function $f(x)$ satisfying $f'(x) = f(x)$ and $f(0) = 1$. 2. $e^x$ is the inverse of the function $\displaystyle \ln x = \int_1^x \frac{dt}{t}$. 3. $e^x$ is the power series $\displaystyle \sum_{n \ge 0} \frac{x^n}{n!}$. 4. $e^x$ is the limit $\displaystyle \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n$. I say this because once you start thinking about $e^x$, which is by far the more fundamental object, and not $e$, the relationship between the definitions becomes much more transparent. Here are short sketches of proofs that the definitions above are all equivalent: $1 \Leftrightarrow 2$: if $\frac{d}{dx} g(x) = \frac{1}{x}$ then $$\frac{d}{dx} g^{-1}(x) = \frac{1}{g'(g^{-1}(x))} = g^{-1}(x).$$ Conversely, if $\frac{d}{dx} g(x) = g(x)$ then $$\frac{d}{dx} g^{-1}(x) = \frac{1}{g'(g^{-1}(x))} = \frac{1}{g(g^{-1}(x))} = \frac{1}{x}.$$ $1 \Leftrightarrow 3$: if $f'(x) = f(x)$ then $f^{(n)}(x) = f(x)$ for all $n$, hence $f^{(n)}(0) = f(0) = 1$, so the Taylor series of $f(x)$ has all coefficients equal to $1$. Conversely, the function with that Taylor series is its own derivative and satisfies $f'(0) = 1$ using the fact that power series are term-by-term differentiable inside their interval of convergence. 
$1 \Leftrightarrow 4$: $\displaystyle \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n$ is the result of using Euler's method to compute $f(x)$, where $f$ satisfies $f'(t) = f(t)$ and $f(0) = 1$, as the step size $n$ goes to $\infty$. (My personal opinion is that the first definition is the most fundamental one; in general uniqueness statements are very powerful. For example, there is a very short proof using the first definition, which I invite you to find, that $e^{x + y} = e^x e^y$.) • 5. $e$ is the unique real number that makes the function $e^x$ (defined by extending $e^{a/b}$ by continuity) satisfy property (1). – Jack M Jun 15 '14 at 7:28 • Also, as an extra 2 cents, once you have the exponential function, I find a more natural definition of $e$ is as the constant ratio $f(x+1)/f(x)$, rather than as $f(1)$, which makes it seem a little arbitrary ($f(1)$ exists for all functions $f$, but the fact that that ratio is constant is a remarkable property of the exponential function). – Jack M Jun 15 '14 at 7:31 • I would say that we shouldn't define the exponential function by the differential equation, because you would need to prove that it exists, which is most easily done by the definition via the power series. I would however agree that the differential equation is the one that lies at the heart of the exponential function. Also, the power series works for complex numbers, while the second definition would require a line integral in the complex plane and Cauchy-Goursat theorem or something. – user21820 Dec 4 '14 at 5:22 • @user21820: I think it's important to distinguish between definitions and constructions. I agree that it's very convenient to construct $e^x$ via power series, but the fundamental reason why you're looking at that power series and not some other power series is because it solves a differential equation that you care about. I am fine with definitions including theorems that the definitions are well-defined. – Qiaochu Yuan Dec 4 '14 at 5:46 • @QiaochuYuan: Yup I agree, but this point is not clear to many students, because few people point out to them precisely what kind of definitions are allowed, and so they easily fail to see when something is not well-defined. For that reason, I normally introduce the exponential function by seeing what a solution to the differential equation must look like if it existed and can be expressed as a power series around 0, and then I prove that the power series actually converges everywhere and does satisfy the differential equation. – user21820 Dec 4 '14 at 5:57 • What's up with the floor sign? – user2357112 Jun 15 '14 at 4:45 • It's not a floor sign. The square brackets represent, among other things, the LCM, and the round ones the GCD. Though such conventions may vary by country. – Lucian Jun 15 '14 at 5:36 • Huh. The tops of the brackets aren't rendering at default zoom level for me; they only show up at 110%. – user2357112 Jun 15 '14 at 5:46 • Same for me @user2357112. – Kaj Hansen Jun 15 '14 at 5:53 • Is it better now? Let me guess: You were using FireFox? – Lucian Jun 15 '14 at 6:38 Actualy for certain numbers like $e$ and $\pi$ an almost enormous amount of defining relations or identities can be produced. Think of the case of $\pi$, by using trigonometric identities (or even number-theoretic identities) many many defining relations for $\pi$ can be produced (i think there is also a systematic procedure to produce new defining relations). 
A very close case is for $e$, due to being related to similar hyperbolic trigonometric functions (except the purely exponential-analytic identities), a very large amount of identities which can be used as definitions or representations can be found. Some of them can be found online as in here, however they do not exhaust all possible identities that can serve as definitions. Finally an identity that relates $i$, $\pi$, $e$, $0$ and $1$ (which was a favorite of R. Feynman) is the Euler identity: $$e^{i\pi}+1=0$$
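As a quick numerical cross-check that a few of the characterisations listed in the question really pin down the same constant (the choice of $n$, the number of series terms, and the search bracket are arbitrary):

```python
import math
from scipy.optimize import minimize_scalar

n = 10**7
e_limit  = (1 + 1/n)**n                                   # limit definition
e_series = sum(1/math.factorial(k) for k in range(20))    # power series at x = 1
# global maximum of x**(1/x): maximise log(x)/x, i.e. minimise -log(x)/x
e_argmax = minimize_scalar(lambda x: -math.log(x)/x,
                           bounds=(1, 10), method='bounded').x

print(e_limit, e_series, e_argmax, math.e)
```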
# Parabolic Bridges Real Life Parabolas Parabolas are very well-known and are seen frequently in the field of mathematics. Their applications are varied and are apparent in our every day lives. For example, the main image on the right is of the Golden Gate Bridge in San Francisco, California. It has main suspension cables in the shape of a parabola. # Basic Description For a detailed overview of parabolas, see the page, Parabola. However, we will provide a brief summary and description of parabolas below before explaining its applications to suspension bridges. #### Basic Definition You may informally know parabolas as curves in the shape of a "u" which can be oriented to open upwards, downwards, sideways, or diagonally. But to be more mathematical, a parabola is a conic section formed by the intersection of a cone and a plane. Below is an image illustrating this. When you were first introduced to parabolas, you learned that the quadratic equation, $y= a(x-h)^2+ k$ is its algebraic representation (where $h$ and $k$ are the coordinates of the vertex and $x$ and $y$ are the coordinates of an arbitrary point on the parabola. #### Suspension Bridges Suspension Bridges are the most commonly built bridges. Known for their long spans, these bridges feature a deck with vertical supports, from which long wire cables hang above. These cables are made up of hangers that run vertically downwards to hold the cable up. The suspension cables hang over the towers until they are anchored on land by the ends of the bridges. Notably, the way these cables are hung resemble the shape of a parabola. # Usefulness of Suspension Bridges Due to their elegant structure, suspension bridges are used to transport loads over long distances, whether it be between two distant cities or between two ends of a river. Suspension bridges are able to work efficiently because of their cables, which are interesting from a mathematical perspective. Since the bridge’s deck spans a long distance, it must be very heavy in weight by its own, not to mention all the weight of the heavy load of traffic that it must carry. Because of all this weight, this results in two active forces: compression and tension. The cable’s parabolic shape results in order for it to effectively address these forces acting upon the bridge. For instance, the deck sags from all the weight of the traffic because of compression forces, which travels upwards the cables. The cables then transfer those compression forces downwards the vertical towers, down into the foundations buried deep within the earth. However, the cables receive the brunt of the tension forces, as they are supporting the bridge’s weight and its load of traffic, being stretched by the anchors' ends on-land. Overall, the suspension bridge does its job with minimal material (as most of the work is accomplished by the suspension cables), which means that it is economical from a construction cost perspective. # A conceptual explanation This links to other page, Catenary. But we shall explain the differences between parabola and catenary with more emphasis on the parabola. Why is that the main suspension cables hang in a shape of a parabola, and not in a catenary, a similar ‘u-shaped’ curve? Bridge 1: Catenary curve Bridge 2: Suspension Bridge with parabolic curve Despite their visual similarities, catenaries and parabolas are two very different curves, both conceptually and mathematically. A catenary curve is created by its own weight, pulling down because of gravity. 
The parabolic curves of the suspension cable are not created by gravity alone, but also by other forces: compression and tension acting on it. Also the weight of the suspension cable is negligible compared to that of the deck, but it is also supporting the weight of the deck. This is also another conceptual reason why the suspension cables hang in a parabolic curve. # A More Mathematical Explanation Note: understanding of this explanation requires: *Calculus, Physics Some basic differential calculus is needed to derive an equation for the suspension cables, which giv [...] Some basic differential calculus is needed to derive an equation for the suspension cables, which gives us a parabolic equation. From the previous sections, we explained the forces active in the bridge: tension and compression. Well, by looking at only an interval of the cable, let’s examine these related forces. The following three forces are active on the cable: Tension($\vec{T}$)= horizontal direction coming from the left because it’s an opposing force. Weight($\vec{W}$)= gravitational pull downwards of the cable. $\vec{F_x}$=Force tangent to the cable at point of the cable, arbitrarily labeled $x$. This is a force coming from the right that corresponds to the direction of the suspension cable. In physics, these three forces can be visualized in the form of a free body diagram. (Image of a triangle underneath an interval of a cable with its lowest point conveniently positioned at the origin-so one of our known points of this curve is $(0,0)$) The arrows indicate the direction that the forces are going in. Overall, the net force is $\vec{0}$ because the segment of the cable is motionless and as such, has no acceleration. Therefore, when drawing the free body diagram, all three vectors’ heads and tails must meet up where the head of one the vectors meets up at the tail of another vector. This forms a right triangle. Slope of $F_x$ is equivalent to the slope of the cable, which we are looking for in order to find an equation that best describes the cable. From the diagram, we see that the slope of $F_x = {W/T}$ Note: That this is not a vector quantity, but rather a magnitude quantity. But what exactly is $W$? Well, weight is distributed evenly throughout the deck below the cables, so at this interval of the cable, the interval of the deck below it must have uniform weight and as such, uniform linear density $u$. Let the length of the deck be defined as the distance from $(0,0)$ to the point $x$. So the weight is equal to $W= ux$ So the slope of $F_x$ can also be rewritten $F_x= {ux/T}$, which is also the slope of the cable. Now with our knowledge of the slope of the cable, an equation for the curve containing the above slope can be derived with the tools of basic integration. Integrating the slope of $F_x$ with respect to $x$, we get: $\int {ux\over T} dx = {u\over2T}(x^2) + C$ Plugging our known point, $(0,0)$ into this result, we get the equation: $y = {u\over 2T}(x^2)$ Which describes a parabola. So optimal shape of the suspension cables is the parabola. #### A Real World Application Back to the Golden Gate Bridge, statistics show that the main span of the bridge is approx. $l=4200 ft.$while its tower height is approx. $h=500 ft$ At the middle of the bridge, $T=W$, which defines the most efficient use of the suspension cable.
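A small numerical companion to the derivation above: fixing the coefficient $u/2T$ from the quoted Golden Gate figures (span $l = 4200$ ft, with the sag taken as the $h \approx 500$ ft tower height, so $u/2T = h/(l/2)^2$) and tabulating the resulting parabolic cable profile. Treating the tower height as the cable sag is an approximation, and the numbers are only illustrative.

```python
import numpy as np

span = 4200.0                # main span, ft
sag  = 500.0                 # approximate cable sag at the towers, ft
a = sag / (span/2)**2        # this is the coefficient u/(2T) in the derivation above

x = np.linspace(-span/2, span/2, 9)
y = a * x**2
for xi, yi in zip(x, y):
    print(f"x = {xi:7.1f} ft   cable height above lowest point = {yi:6.1f} ft")
```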
# What is magnetism? 1. Oct 5, 2016 ### Simon Peach For a start, I must say that I'm just very interested in physics and maths but have no real education in these subjects. Does magnetism behave like other forces? Is the speed of magnetism limited by lightspeed? Is it a wave or a particle, or both, like light? Now for gravity: we have discovered 'gravity waves', but again, is there a particle too? 2. Oct 5, 2016 ### Amrator The magnetic field is an effect induced by a moving charged particle or electric current, the flow of charged particles (protons or electrons). The magnetic field exerts a magnetic force on any other moving charge or current present in the field. So let's say you have a moving charge $q_1$ and another moving charge $q_2$. As $q_1$ moves with velocity $\vec v_1$, it creates a magnetic field $\vec B_1$ in its surrounding vicinity (where $q_2$ is also present) which in turn exerts a magnetic force $\vec F_{1,2}$ on $q_2$ (as $q_2$ is also moving). These can all be put into an equation, $$\vec F_{1,2} = q_2 \, \vec v_2 \times \vec B_1$$ The magnitude of this force is $$F_{1,2} = q_2 \, v_2 \, B_1 \, \sin θ$$ $\vec v_2$, $\vec B_1$, and $\vec F_{1,2}$ can all be represented as geometric entities in three dimensional space, or vectors; vectors are described by stating their magnitude and direction. (image taken from Encyclopaedia Britannica) θ is the angle between $\vec v_2$ and $\vec B_1$. In the above picture, θ = 90° because $\vec v_2$ and $\vec B_1$ are perpendicular to each other (the angle between them is 90°). Last edited: Oct 5, 2016 3. Oct 5, 2016 ### Khashishi Magnetism and the electric force are very closely related. They always come together, and they are mathematically entwined, so we say they are both parts of the electromagnetic force. The electromagnetic field spreads out over all space. We have to be clear on what we mean by "speed" when we are not talking about a solid object. It makes the most sense to talk about the speed of propagation of changes in the field. If you move a magnet, then the field around the magnet takes some time to update. The speed at which the changes are propagated outward is the speed of light, precisely because the wave of propagation is light. Light is just an electromagnetic wave, and these waves carry changes in the electric and magnetic fields. Gravity is similar to electromagnetism but more complicated, since more numbers are needed to describe the gravitational field at each point in space. We don't yet know if gravity can be described as particles, since a graviton has never been detected. 4. Oct 5, 2016 ### Electron Spin Magnetism is a phenomenon given to us by all the electrons spinning in the same direction, or with the same angular momentum, within ferrous metals. Magnetism and the electric field are friends like the yin and the yang. Together they form a field or a wave such as a radio wave. You cannot have one without the potential of having the other! Kinda like a handshake...It takes two. Magnetism is fleeting and not understood entirely. Without magnetism indeed, our perceived universe would be sorely bent and very different it has been decreed... Last edited: Oct 5, 2016 5. Oct 5, 2016 Staff Emeritus Not so. 6. Oct 5, 2016 ### Simon Peach So everything has a degree of magnetism, via the electromagnetic spectrum, which is everything. Magnetism is part of the electromagnetic spectrum, should have worked that out by the name! Thanks everyone for the answers. 7.
Oct 5, 2016 ### phinds Well, only in the sense that everything has a VALUE for its degree of magnetism. That degree will be zero, I believe, for something like a crystalline lattice (a diamond for example).
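To make the force law quoted earlier in the thread concrete, here is a tiny numerical example of $\vec F = q\,\vec v \times \vec B$; the charge, velocity, and field values are invented for illustration.

```python
import numpy as np

q = 1.602e-19                       # charge of a proton, C
v = np.array([2.0e5, 0.0, 0.0])     # velocity, m/s (made-up value)
B = np.array([0.0, 0.0, 0.5])       # magnetic field, T (made-up value)

F = q * np.cross(v, B)              # magnetic part of the Lorentz force, N
print(F)                            # points along -y here; |F| = q*v*B*sin(90 deg)
```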
# A high-dimensional CLT in $$\mathcal {W}_2$$ distance with near optimal convergence rate
Probability Theory and Related Fields (Springer Journals), Volume 170 (4), 25 pages. Published: Mar 24, 2017. ISSN 0178-8051, eISSN 1432-2064. DOI: 10.1007/s00440-017-0771-3.
Abstract: Let $$X_1,\ldots ,X_n$$ be i.i.d. random vectors in $$\mathbb {R}^d$$ with $$\Vert X_1\Vert \le \beta$$. Then, we show that $$\frac{1}{\sqrt{n}}\left( X_1 + \cdots + X_n\right)$$ converges to a Gaussian in quadratic transportation (also known as “Kantorovich” or “Wasserstein”) distance at a rate of $$O \left( \frac{\sqrt{d}\, \beta \log n}{\sqrt{n}} \right)$$, improving a result of Valiant and Valiant. The main feature of our theorem is that the rate of convergence is within $$\log n$$ of optimal for $$n, d \rightarrow \infty$$.
{}
Exploring first-order phase transitions with population annealing

Barash L. Yu., Weigel M., Shchur L.N., Janke W. European Physical Journal. Special Topics. 2017. Vol. 226. No. 4. P. 595-604.

Population annealing is a hybrid of sequential and Markov chain Monte Carlo methods geared towards the efficient parallel simulation of systems with complex free-energy landscapes. Systems with first-order phase transitions are among the problems in computational physics that are difficult to tackle with standard methods such as local-update simulations in the canonical ensemble, for example with the Metropolis algorithm. It is hence interesting to see whether such transitions can be more easily studied using population annealing. We report here our preliminary observations from population annealing runs for the two-dimensional Potts model with q > 4, where it undergoes a first-order transition.
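As an aside on how the method in this abstract works in practice, here is a deliberately tiny sketch of the population annealing loop in F# (the language used by the code elsewhere on this page). It anneals a toy one-dimensional double-well energy rather than the Potts model studied in the paper, and the schedule, population size, and sweep counts are arbitrary illustrative choices, not the authors' parameters.

```fsharp
// Toy illustration only: reweight, resample, then run Metropolis sweeps at each new temperature.
let rng = System.Random(42)

let energy x = (x * x - 1.0) ** 2.0            // double-well "free-energy landscape"

// One Metropolis update at inverse temperature beta.
let metropolis beta x =
    let x' = x + (rng.NextDouble() - 0.5)
    if rng.NextDouble() < exp (-beta * (energy x' - energy x)) then x' else x

// Resample the population with probabilities proportional to the weights.
let resample (weights: float[]) (pop: float[]) =
    let cdf = Array.scan (+) 0.0 weights |> Array.skip 1   // running sums of the weights
    let total = cdf.[cdf.Length - 1]
    Array.init pop.Length (fun _ ->
        let u = rng.NextDouble() * total
        pop.[System.Array.FindIndex(cdf, fun c -> u <= c)])

// Anneal a population of replicas through a schedule of inverse temperatures.
let anneal (betas: float list) popSize sweeps =
    let mutable pop = Array.init popSize (fun _ -> 4.0 * rng.NextDouble() - 2.0)
    for (b, b') in List.pairwise betas do
        // Reweight each replica by exp(-(b' - b) * E), then resample.
        let w = pop |> Array.map (fun x -> exp (-(b' - b) * energy x))
        pop <- resample w pop
        // Equilibrate at the new temperature with a few Metropolis sweeps.
        pop <- pop |> Array.map (fun x ->
            Seq.fold (fun y _ -> metropolis b' y) x { 1 .. sweeps })
    pop

let finalPopulation = anneal [0.1 .. 0.1 .. 3.0] 1000 20
```

A real implementation additionally lets the population size fluctuate and tracks free-energy estimates from the resampling weights; the sketch above only shows the control flow the abstract refers to.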
{}
### Archive

Posts Tagged ‘visualization’

## Visualizing Graphs

September 18, 2016 1 comment

#### Previously

Walking the Euler Path: Intro

### Generating and Visualizing Graphs

I can hardly overemphasize the importance of visualizations. Many a bug has been immediately spotted just by looking at a visual of a complex data structure. I therefore decided to add visuals to the project as soon as the DirectedGraph class was born.

#### Code & Prerequisites

Code is on GitHub.

1. GraphViz: install and add the bin directory to the PATH
2. EmguCV v3.1: install and add the bin directory to the PATH

#### DrawGraph

This is a small auxiliary component I wrote to make all future visualizations possible. And here is a sidebar. I didn't want to write this component. I am not a fan of re-writing something that was written a hundred times before me, so the first thing I did was look for something similar I could use. Sure enough, I found a few things. How can I put it? Software engineering is great, but boy, do we tend to overengineer things! I know, I'm guilty of the same thing myself. All I wanted from the library was the ability to receive a text file written in the GraphViz DSL and get on the output a .png containing the picture of the graph. Just a very simple GraphViz driver, nothing more. One library had me instantiate 3 (three!) classes, another developed a whole API of its own to build the GraphViz file… I ended up writing my own component; it has precisely 47 lines of code. The last 4 lines are aliasing a single function that does exactly what I wanted. It creates the png file and then immediately invokes the EmguCV image viewer to show it. After we're done, it cleans up after itself, deleting the temporary png file. Here it is.

#### Taking it for a Ride

Just to see this work… Another digression. Love the new feature that generates all the "#r" instructions for F# scripts and sticks them into one file! Yes, this one! Right-click on "References" in an F# project. And the generated scripts auto-update as you recompile with new references! A+ for the feature, thank you so much. Comes with a small gotcha, though: sometimes it doesn't get the order of references quite right, and then errors complaining of references not being loaded appear in the interactive. I spent quite a few painful hours wondering how it is that this reference was not loaded, when here it is! Then I realized: it was being loaded AFTER it was required by references coming after it.

#load "load-project-release.fsx"
open DrawGraph
createGraph "digraph{a->b; b->c; 2->1; d->b; b->b; a->d}" "dot.exe" None

Cool. Now I can take this and use my own function to generate a graph from a string adjacency list, visualize it, and even view some of its properties. Sort of make the graph "palpable":

let sparse = ["a -> b, c, d"; "b -> a, c"; "d -> e, f"; "e -> f"; "1 -> 2, 3"; "3 -> 4, 5"; "x -> y, z"; "2 -> 5"]
let grs = StrGraph.FromStrings sparse
grs.Visualize(clusters = true)

StrGraph.FromStrings does exactly what it says: it generates a graph from a sequence of strings, formatted like the sparse list above. My Visualize function is a kitchen sink for all kinds of visuals, driven by its parameters. In the above example, it invokes graph partitioning to clearly mark connected components. It is important to note that this functionality was added to the visualizer not because I wanted to see connected components more clearly, but as a quick way to ensure that my partitioning implementation was indeed working correctly.
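Going back to the DrawGraph component: the post doesn't reproduce its 47 lines, so here is a rough sketch of what such a minimal GraphViz driver can look like. This is an illustration, not the repository's actual code: it assumes dot.exe is reachable on the PATH, writes the DSL text to a temporary file, shells out to dot, and opens the resulting .png with the default OS viewer instead of the EmguCV window described above.

```fsharp
// Illustrative sketch of a bare-bones GraphViz driver (not the repo's DrawGraph component).
open System.Diagnostics
open System.IO

/// Render a GraphViz DSL string to a temporary .png and open it in the default viewer.
let createGraph (dotSource: string) (dotExe: string) =
    let dotFile = Path.GetTempFileName()
    let pngFile = Path.ChangeExtension(dotFile, ".png")
    File.WriteAllText(dotFile, dotSource)
    // Equivalent to running: dot -Tpng <dotFile> -o <pngFile>
    let psi = ProcessStartInfo(dotExe, sprintf "-Tpng \"%s\" -o \"%s\"" dotFile pngFile, UseShellExecute = false)
    use proc = Process.Start(psi)
    proc.WaitForExit()
    File.Delete dotFile
    // Hand the .png to the OS image viewer; the temporary image is left for the viewer to display.
    Process.Start(ProcessStartInfo(pngFile, UseShellExecute = true)) |> ignore

createGraph "digraph{a->b; b->c; 2->1; d->b; b->b; a->d}" "dot.exe"
```

The real component additionally pops up an EmguCV window and deletes the temporary image after viewing, per the description above.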
#### Generating Data and Looking at It Now we have a class that builds graphs and even lets us look at them, so where do we get these graphs? The easiest thing (seemed at the time) was to create them. Enter FsCheck. It’s not the easiest library to use, there is a learning curve and getting used to things takes time, but it’s very helpful. Their documentation is quite good too. The idea is to write a generator for your type and then use that generator to create as many samples as you like: #load "load-project-release.fsx" open Graphs open FsCheck open System open DataGen let grGen = graphGen 3 50 let gr = grGen.Sample(15, 5).[2] gr.Visualize(into=3, out= 3) This produces something like: My function graphGen len num generates a graph of text vertices where len is the length of a vertex name and num is the number of vertices. It returns an FsCheck generator that can then be sampled to get actual graphs. This was a one-off kind of experiment, so it’s in a completely separate module: //DataGen.fs module DataGen open FsCheck open System open Graphs let nucl = Gen.choose(int 'A', int 'Z') |> Gen.map char let genVertex len = Gen.arrayOfLength len nucl |> Gen.map (fun c -> String(c)) let vertices len number = Gen.arrayOfLength number (genVertex len) |> Gen.map Array.distinct let graphGen len number = let verts = vertices len number let rnd = Random(int DateTime.UtcNow.Ticks) let pickFrom = verts |> Gen.map (fun lst -> lst.[rnd.Next(lst.Length)]) let pickTo = Gen.sized (fun n -> Gen.listOfLength (if n = 0 then 1 else n) pickFrom) Gen.sized <| (fun n -> Gen.map2 (fun from to' -> from, (to' |> Seq.reduce (fun acc v -> acc + ", " + v))) pickFrom pickTo |> Gen.arrayOfLength (if n = 0 then 1 else n) |> Gen.map (Array.distinctBy fst) |> Gen.map (fun arr -> arr |> Array.map (fun (a, b) -> a + " -> " + b)) ) |> Gen.map StrGraph.FromStrings This whole module cascades different FsCheck generators to create a random graph. The simplest of them nucl, generates a random character. (Its name comes from the fact that originally I wanted to limit the alphabet to just four nucleotide characters A, C, G, T). Then this generator is used by genVertex to generate a random string vertex, and finally vertices creates an array of distinct random vertices. graphGen creates a sequence of strings that FromStrings (above) understands. It first creates a string of “inbound” vertices and then adds an outbound vertex to each such string. Sampling is a little tricky, for instance, the first parameter to the Sample function, which, per documentation, controls sample size, in this case is responsible for complexity and connectivity of the resulting graphs. #### On to Euler… The script above also specifies a couple of optional parameters to the visualizer: into will mark any vertex that has into or more inbound connections in green. And out will do the same for outbound connections and yellow. If the same vertex possesses both properties, it turns blue. Inspired by all this success, I now want to write a function that would generate Eulerian graphs. The famous theorem states that being Eulerian (having an Euler cycle) for a directed graph is equivalent to being strongly connected and having in-degree of each vertex equal to its out-degree. Thus, the above properties of the visualizer are quite helpful in confirming that the brand new generator I have written for Eulerain graphs (GenerateEulerGraph) is at the very least on track: let gre = StrGraph.GenerateEulerGraph(10, 5) gre.Visualize(into=3, out=3) Very encouraging! 
Whatever has at least 3 edges out has at least 3 edges in. Not a definitive test, but the necessary condition of having only blue and transparent vertices in the case of an Eulerian graph is satisfied.

In the next post – more about Eulerian graphs, de Bruijn sequences, building (and visualizing!) de Bruijn graphs, used for DNA sequence assembly.

Categories: CUDA, data visualization, F#, Graphs

## Walking the Euler Path: Intro

### Source Code

I'm thinking about a few posts in this series going very fast through the project. The source is on my GitHub; check out the tags, since the master branch is still work in progress.

### Experimenting with Graph Algorithms with F# and GPU

Graphs play their role in bioinformatics, which is my favorite area of computer science and software engineering lately. This relationship was the biggest motivator behind this project. I have been experimenting with a few graph algorithms, trying to parallelize them. This is interesting because these algorithms usually resist parallelization, since they are fast in their serial version, running in O(|E|) or O(|E| + |V|) time (E – the set of edges, V – the set of vertices of the graph). And of course I use any excuse to further explore the F# language.

### Representation

The object of this mini-study is a directed unweighted graph. The choice of representation is simple: adjacency list or incidence matrix. Since I had CUDA in mind from the start, the latter was chosen, and since I had large graphs in mind, hundreds of millions, possibly billions of edges (limited only by the .NET object size: is it still a problem? I haven't checked, and by the size of my GPU memory), a sparse matrix data structure was picked.

#### Sparse Matrix Implementation

I first wrote a very bare-bones sparse matrix class, just to get my feet wet. Of all possible representations for a sparse matrix, I chose CSR (or CSC, which is the transposition of CSR); the idea is intuitive and works great for a directed graph incidence matrix. Briefly (taking CSR – Compressed Sparse Row as an example), we represent our matrix in 3 arrays: V, C, R. V – the array of non-zero values, written left-to-right, top-to-bottom. C – the array of column indices of the values in V. And R – the "boundary", or "row index" array, built as follows: we start by recording the number of non-zero values per row in each element of R, starting with R[1]. R[0] = 0. Then we apply the scan operation (like the F# Seq.scan) to the row array to produce the final result. The resulting array contains m + 1 (m – number of rows in the matrix) elements; its last entry equals the total number of non-zero values in the matrix. This array is used as a "slicer" or "indexer" into the column/value arrays: non-zero columns of row $i$ will be located in arrays V and C at the indices starting from R[i] and ending at R[i + 1] – 1. This is all pretty intuitive.

#### Overcoming F# Strong Typing

F# is a combination of strong typing and dynamic generic resolution, which makes it a challenge when you need to write a template for which it is natural to be resolved at compile time. Then sweet memories of C++ or Python invade… There exists a way to overcome all that, and it is not pretty. To implement it I needed the old F# PowerPack with INumeric included.
Then I just coded the pattern explained in the blog post: // SparseMatrix.fs /// <summary> /// Sparse matrix implementation with CSR and CSC storage /// </summary> [<StructuredFormatDisplay("{PrintMatrix}")>] type SparseMatrix<'a> (ops : INumeric<'a>, row : 'a seq, rowIndex : int seq, colIndex : int seq, rowSize, isCSR : bool) = .... static member CreateMatrix (row : 'a []) (isCSR : bool) = let ops = GlobalAssociations.GetNumericAssociation<'a>() let colIdx, vals = Array.zip [|0..row.Length - 1|] row |> Array.filter (fun (i, v) -> ops.Compare(v, ops.Zero) <> 0) |> Array.unzip SparseMatrix(ops, vals, [0; vals.Length], colIdx, row.Length, isCSR) The idea is to use the GlobalAssociations to smooth-talk the compiler into letting you do what you want. The pattern is to not directly use the constructor to create your object, but a static method instead, by means of which this “compiler-whispering” is hidden from the user. My sparse matrix is built dynamically: it is first created with a single row through a call to CreateMatrix and then rows can be appended to it by calling AddValues row. The idea is to allow creation and storage of huge matrices dynamically. These matrices may be stored in large files for which representation in dense format in memory may not be feasible. #### Representing the graph So, at which point does it make sense to use a sparse matrix instead of a dense one in CSR/CSC? It’s easy to figure out: If we have a matrix $|M| = m \cdot n$, then the answer is given by the equation: $m \cdot n > 2 \cdot e + m + 1$, here $e$ is the number of non-zero elements in the matrix. For a graph $G=(V, E)$ the set V takes a place of rows, and E – that of columns. The above inequality becomes: $v \cdot e > e + v + 1 \ (v = |V|,\ e = |E|)$, so our sparse structure becomes very economical for large, not to mention “really huge” graphs. (We don’t have the values array anymore, since all our values are just 0s and 1s). And so the graph is born: [<StructuredFormatDisplay("{AsEnumerable}")>] type DirectedGraph<'a when 'a:comparison> (rowIndex : int seq, colIndex : int seq, verticesNameToOrdinal : IDictionary<'a, int>) as this = let rowIndex = rowIndex.ToArray() let colIndex = colIndex.ToArray() let nEdges = colIndex.Length let verticesNameToOrdinal = verticesNameToOrdinal let nVertices = verticesNameToOrdinal.Count // vertices connected to the ordinal vertex let getVertexConnections ordinal = let start = rowIndex.[ordinal] let end' = rowIndex.[ordinal + 1] - 1 colIndex.[start..end'] This is not very useful, however, since it assumes that we already have rowIndex for the CSR type “R” and colIndex for the “C” arrays. It's like saying: "You want a graph? So, create a graph!". I would like to have a whole bunch of graph generators, and I do. I placed them all into the file Generators.fs. This is a good case for using type augmentations. When we need to implement something that “looks good” on the object, but doesn’t really belong to it. In the next post I’ll talk about visualizing things, and vsiualization methods really have nothing to do with the graph itself. 
Nevertheless, it is natural to write:

myGraph.Visualize(euler=true)

Visualize(myGraph, euler=true)

So we use type augmentations, for instance, going back to the generators:

//Generators.fs
type Graphs.DirectedGraph<'a when 'a:comparison> with
    /// <summary>
    /// Create the graph from a file
    /// </summary>
    /// <param name="fileName"></param>
    static member FromFile (fileName : string) =
        if String.IsNullOrWhiteSpace fileName || not (File.Exists fileName) then failwith "Invalid file"
        DirectedGraph<string>.FromStrings(lines)

which creates a graph by reading a text file and calling another generator method at the end. This method actually calls the constructor to create an instance of the object. Keeps everything clean and separate.

This post was intended to briefly construct the skeleton. In the next we'll put some meat on the bones and talk about visualizing stuff.

## Visualizing Crime with d3: Hooking up Data and Colors, Part 2

June 17, 2013 1 comment

In the previous post, we derived a class from BubbleChart and this got us started on actually visualizing some meaningful data using bubbles. There are a couple of things to iron out before a visual can appear.

## Color Schemes

I am using Cynthia Brewer color schemes, available for download in colorbrewer.css. This file is available on my GitHub as well. It consists of entries like:

.Spectral .q0-3{fill:rgb(252,141,89)}
.Spectral .q1-3{fill:rgb(255,255,191)}
.Spectral .q2-3{fill:rgb(153,213,148)}
.Spectral .q0-4{fill:rgb(215,25,28)}
.Spectral .q1-4{fill:rgb(253,174,97)}
.Spectral .q2-4{fill:rgb(171,221,164)}
.Spectral .q3-4{fill:rgb(43,131,186)}
.Spectral .q0-5{fill:rgb(215,25,28)}
.Spectral .q1-5{fill:rgb(253,174,97)}

Usage is simple: you pick a color scheme and add it to the class of your parent element that will contain the actual SVG elements displayed, e.g.: Spectral. Then, one of the “qi-n” classes is assigned to each of these child elements to get the actual color. So for instance, the main SVG element on the Crime Explorer visual looks like this:

<svg class="Spectral" id="svg_vis">...</svg>

Then, each of the “circle” elements inside this SVG container will have one of the qi-9 classes (I am using 9 total colors to display this visualization, so i ranges over 0..8).

<circle r="9.664713682964603" class="q2-9" stroke-width="2" stroke="#b17943" id="city_0" cx="462.4456905180483" cy="574.327856528298"></circle>

(Note the class="q2-9" attribute above).

All of this is supported by the BubbleChart class with some prodding. You need to:

1. Pass the color scheme to the constructor of the class derived from BubbleChart upon instantiation: allStates = new AllStates('vis', crime_data, 'Spectral')
2. Implement a function called color_class in the derived class that will produce a string of the form “qi-n”, given an i. The default function supplied with the base class always returns “q1-6”.

@color_class = d3.scale.threshold().domain(@domain).range(("q#{i}-9" for i in [8..0]))

In my implementation, I am using the d3 threshold scale to map a domain of values to the colors I need based on certain thresholds. The range is reversed only because I want “blue” colors to come out on lower threshold values, and “red” – on higher ones (less crime is “better”, so I use red for higher values). See AllStates.coffe for a full listing. How this is hooked up to the actual data is discussed in the next section.

## Data Protocol

This is key: data you pass to the BubbleChart class must comply with the following requirements:

1. It must be an array (not an associative array, a regular array).
Each element of this array will be displayed as a circle (“bubble”) on the screen.

2. Each element must contain the following fields:

• id – this is a UNIQUE id of the element. It is used by BubbleChart to do joins (see the d3 documentation for what these are)
• value – this is what the “value” of each data element is, and it is used to compute the radius of each bubble
• group – indicates the “color group” to which the bubble belongs. This is what is fed to the color_class function to determine the color of each individual bubble

With all these conditions satisfied, the array of data is now ready to be displayed.

## Displaying It

Now that it is all done, showing the visual is simple:

allStates = new AllStates('vis', crime_data, 'Spectral')
allStates.create_vis()
allStates.display()

Next time: displaying the auxiliary elements: color and size legends, the search box.

## Visualizing Crime with d3: Intro

April 18, 2013 1 comment

Figure a blog without pictures or conversations is just boring, so, here it is.

Robbery in Cali

Lately, I have been dealing a lot with data visualization. This was a brand new area for me, and while we do use F# for data extraction, all of the front end is done using d3, an amazing toolkit by Mike Bostock. First of all, I owe the fact that my projects got off the ground to Mike and Jim Vallandigham. Jim taught me all I know about how to draw bubble “charts” and use d3 force layouts. His blog is invaluable for anyone making first steps in the area of data visualization. Code snippets I am going to post here are due to Jim's and Mike's generosity. So, thank you Mike and Jim.

One may ask: if there are already tutorials on how to put together visuals, why assault the world with more musings (as a Candace Bushnell character once wrote)? The answer is: my goal in these posts is not to explore the creation of visuals, but rather to share experiences on how to put together projects that involve visualizations. These are very different problems, since your task is not just to create a single document or web page for a single purpose, but to create something that can dynamically build these documents or pages, and maybe, within each such document, provide different views of the same data. Questions of:

• design
• reuse
• coding practices

come up right away, not to mention general problems:

• What are data visualizations?
• What are they used for?
• Are they needed at all?

So, for these posts, we will be building a project that visualizes crime statistics in the US for the year 2008. The data source for this is found here, and the complete solution will look like this. The approximate plan for the next few posts:

• Thinking about visualizations and what they are
• Preparations
• Getting Data (retrieving, massaging, formatting)
• Getting the tools together (CoffeeScript, d3, ColorBrewer, Knockout.js, Twitter Bootstrap, jQuery, jQuery bbq)
• Building the visuals
• Laying out “single” charts
• Laying out multiple charts on the same page
{}
:: Introduction to Modal Propositional Logic :: by Alicia de la Cruz :: :: Received September 30, 1991 :: Copyright (c) 1991-2021 Association of Mizar Users Lm1: for m being Nat holds {} is_a_proper_prefix_of <*m*> by XBOOLE_1:2; definition let Z be Tree; func Root Z -> Element of Z equals :: MODAL_1:def 1 {} ; coherence {} is Element of Z by TREES_1:22; end; :: deftheorem defines Root MODAL_1:def 1 : for Z being Tree holds Root Z = {} ; definition let D be non empty set ; let T be DecoratedTree of D; func Root T -> Element of D equals :: MODAL_1:def 2 T . (Root (dom T)); coherence T . (Root (dom T)) is Element of D ; end; :: deftheorem defines Root MODAL_1:def 2 : for D being non empty set for T being DecoratedTree of D holds Root T = T . (Root (dom T)); theorem Th1: :: MODAL_1:1 for n, m being Nat for s being FinSequence of NAT st n <> m holds not <*n*>,<*m*> ^ s are_c=-comparable proof end; theorem :: MODAL_1:2 canceled; ::$CT theorem Th2: :: MODAL_1:3 for n, m being Nat for s being FinSequence of NAT st n <> m holds not <*n*> is_a_proper_prefix_of <*m*> ^ s proof end; theorem :: MODAL_1:4 canceled; theorem :: MODAL_1:5 canceled; theorem :: MODAL_1:6 canceled; theorem :: MODAL_1:7 canceled; ::$CT 4 theorem Th3: :: MODAL_1:8 for Z being Tree for n, m being Nat st n <= m & <*m*> in Z holds <*n*> in Z proof end; theorem :: MODAL_1:9 for w, s, t being FinSequence of NAT st w ^ t is_a_proper_prefix_of w ^ s holds t is_a_proper_prefix_of s by TREES_1:49; theorem Th5: :: MODAL_1:10 for t1 being DecoratedTree of holds t1 in PFuncs ((),) proof end; theorem Th6: :: MODAL_1:11 for Z, Z1, Z2 being Tree for z being Element of Z st Z with-replacement (z,Z1) = Z with-replacement (z,Z2) holds Z1 = Z2 proof end; theorem Th7: :: MODAL_1:12 for D being non empty set for Z, Z1, Z2 being DecoratedTree of D for z being Element of dom Z st Z with-replacement (z,Z1) = Z with-replacement (z,Z2) holds Z1 = Z2 proof end; theorem Th8: :: MODAL_1:13 for Z1, Z2 being Tree for p being FinSequence of NAT st p in Z1 holds for v being Element of Z1 with-replacement (p,Z2) for w being Element of Z1 st v = w & w is_a_proper_prefix_of p holds succ v = succ w proof end; theorem Th9: :: MODAL_1:14 for Z1, Z2 being Tree for p being FinSequence of NAT st p in Z1 holds for v being Element of Z1 with-replacement (p,Z2) for w being Element of Z1 st v = w & not p,w are_c=-comparable holds succ v = succ w proof end; theorem :: MODAL_1:15 for Z1, Z2 being Tree for p being FinSequence of NAT st p in Z1 holds for v being Element of Z1 with-replacement (p,Z2) for w being Element of Z2 st v = p ^ w holds succ v, succ w are_equipotent by TREES_2:37; theorem Th11: :: MODAL_1:16 for Z1 being Tree for p being FinSequence of NAT st p in Z1 holds for v being Element of Z1 for w being Element of Z1 | p st v = p ^ w holds succ v, succ w are_equipotent proof end; theorem Th12: :: MODAL_1:17 for Z being finite Tree st branchdeg (Root Z) = 0 holds ( card Z = 1 & Z = ) proof end; theorem Th13: :: MODAL_1:18 for Z being finite Tree st branchdeg (Root Z) = 1 holds succ (Root Z) = proof end; theorem Th14: :: MODAL_1:19 for Z being finite Tree st branchdeg (Root Z) = 2 holds succ (Root Z) = proof end; theorem Th15: :: MODAL_1:20 for Z being Tree for o being Element of Z st o <> Root Z holds ( Z | o, { (o ^ s9) where s9 is Element of NAT * : o ^ s9 in Z } are_equipotent & not Root Z in { (o ^ w9) where w9 is Element of NAT * : o ^ w9 in Z } ) proof end; theorem Th16: :: MODAL_1:21 for Z being finite Tree for o being Element of Z st o <> Root Z holds 
card (Z | o) < card Z proof end; theorem Th17: :: MODAL_1:22 for Z being finite Tree for z being Element of Z st succ (Root Z) = {z} holds Z = () with-replacement (,(Z | z)) proof end; Lm2: for f being Function st dom f is finite holds f is finite proof end; theorem Th18: :: MODAL_1:23 for D being non empty set for Z being finite DecoratedTree of D for z being Element of dom Z st succ (Root (dom Z)) = {z} holds Z = (() --> (Root Z)) with-replacement (,(Z | z)) proof end; theorem Th19: :: MODAL_1:24 for Z being Tree for x1, x2 being Element of Z st x1 = & x2 = <*1*> & succ (Root Z) = {x1,x2} holds Z = (() with-replacement (,(Z | x1))) with-replacement (<*1*>,(Z | x2)) proof end; theorem Th20: :: MODAL_1:25 for D being non empty set for Z being DecoratedTree of D for x1, x2 being Element of dom Z st x1 = & x2 = <*1*> & succ (Root (dom Z)) = {x1,x2} holds Z = ((() --> (Root Z)) with-replacement (,(Z | x1))) with-replacement (<*1*>,(Z | x2)) proof end; definition func MP-variables -> set equals :: MODAL_1:def 3 ; coherence is set ; end; :: deftheorem defines MP-variables MODAL_1:def 3 : registration cluster MP-variables -> non empty ; coherence not MP-variables is empty ; end; definition end; definition func MP-conectives -> set equals :: MODAL_1:def 4 [:{0,1,2},NAT:]; coherence [:{0,1,2},NAT:] is set ; end; :: deftheorem defines MP-conectives MODAL_1:def 4 : MP-conectives = [:{0,1,2},NAT:]; registration coherence not MP-conectives is empty ; end; definition end; theorem Th21: :: MODAL_1:26 proof end; definition let T be finite Tree; let v be Element of T; :: original: branchdeg redefine func branchdeg v -> Nat; coherence branchdeg v is Nat proof end; end; definition func MP-WFF -> DTree-set of means :Def5: :: MODAL_1:def 5 ( ( for x being DecoratedTree of st x in it holds x is finite ) & ( for x being finite DecoratedTree of holds ( x in it iff for v being Element of dom x holds ( branchdeg v <= 2 & ( not branchdeg v = 0 or x . v = [0,0] or ex k being Nat st x . v = [3,k] ) & ( not branchdeg v = 1 or x . v = [1,0] or x . v = [1,1] ) & ( branchdeg v = 2 implies x . v = [2,0] ) ) ) ) ); existence ex b1 being DTree-set of st ( ( for x being DecoratedTree of st x in b1 holds x is finite ) & ( for x being finite DecoratedTree of holds ( x in b1 iff for v being Element of dom x holds ( branchdeg v <= 2 & ( not branchdeg v = 0 or x . v = [0,0] or ex k being Nat st x . v = [3,k] ) & ( not branchdeg v = 1 or x . v = [1,0] or x . v = [1,1] ) & ( branchdeg v = 2 implies x . v = [2,0] ) ) ) ) ) proof end; uniqueness for b1, b2 being DTree-set of st ( for x being DecoratedTree of st x in b1 holds x is finite ) & ( for x being finite DecoratedTree of holds ( x in b1 iff for v being Element of dom x holds ( branchdeg v <= 2 & ( not branchdeg v = 0 or x . v = [0,0] or ex k being Nat st x . v = [3,k] ) & ( not branchdeg v = 1 or x . v = [1,0] or x . v = [1,1] ) & ( branchdeg v = 2 implies x . v = [2,0] ) ) ) ) & ( for x being DecoratedTree of st x in b2 holds x is finite ) & ( for x being finite DecoratedTree of holds ( x in b2 iff for v being Element of dom x holds ( branchdeg v <= 2 & ( not branchdeg v = 0 or x . v = [0,0] or ex k being Nat st x . v = [3,k] ) & ( not branchdeg v = 1 or x . v = [1,0] or x . v = [1,1] ) & ( branchdeg v = 2 implies x . 
v = [2,0] ) ) ) ) holds b1 = b2 proof end; end; :: deftheorem Def5 defines MP-WFF MODAL_1:def 5 : for b1 being DTree-set of holds ( b1 = MP-WFF iff ( ( for x being DecoratedTree of st x in b1 holds x is finite ) & ( for x being finite DecoratedTree of holds ( x in b1 iff for v being Element of dom x holds ( branchdeg v <= 2 & ( not branchdeg v = 0 or x . v = [0,0] or ex k being Nat st x . v = [3,k] ) & ( not branchdeg v = 1 or x . v = [1,0] or x . v = [1,1] ) & ( branchdeg v = 2 implies x . v = [2,0] ) ) ) ) ) ); :: [0,0] = VERUM :: [1,0] = negation :: [1,1] = modal operator of necessity :: [2,0] = & definition end; registration cluster -> finite for Element of MP-WFF ; coherence for b1 being MP-wff holds b1 is finite by Def5; end; definition let A be MP-wff; let a be Element of dom A; :: original: | redefine func A | a -> MP-wff; coherence A | a is MP-wff proof end; end; definition let a be Element of MP-conectives ; func the_arity_of a -> Nat equals :: MODAL_1:def 6 a 1 ; coherence a 1 is Nat proof end; end; :: deftheorem defines the_arity_of MODAL_1:def 6 : for a being Element of MP-conectives holds the_arity_of a = a `1 ; definition let D be non empty set ; let T, T1 be DecoratedTree of D; let p be FinSequence of NAT ; assume A1: p in dom T ; func @ (T,p,T1) -> DecoratedTree of D equals :Def7: :: MODAL_1:def 7 T with-replacement (p,T1); coherence T with-replacement (p,T1) is DecoratedTree of D proof end; end; :: deftheorem Def7 defines @ MODAL_1:def 7 : for D being non empty set for T, T1 being DecoratedTree of D for p being FinSequence of NAT st p in dom T holds @ (T,p,T1) = T with-replacement (p,T1); theorem Th22: :: MODAL_1:27 for A being MP-wff holds (() --> [1,0]) with-replacement (,A) is MP-wff proof end; theorem Th23: :: MODAL_1:28 for A being MP-wff holds (() --> [1,1]) with-replacement (,A) is MP-wff proof end; theorem Th24: :: MODAL_1:29 for A, B being MP-wff holds ((() --> [2,0]) with-replacement (,A)) with-replacement (<*1*>,B) is MP-wff proof end; definition let A be MP-wff; func 'not' A -> MP-wff equals :: MODAL_1:def 8 (() --> [1,0]) with-replacement (,A); coherence (() --> [1,0]) with-replacement (,A) is MP-wff by Th22; func (#) A -> MP-wff equals :: MODAL_1:def 9 (() --> [1,1]) with-replacement (,A); coherence (() --> [1,1]) with-replacement (,A) is MP-wff by Th23; let B be MP-wff; func A '&' B -> MP-wff equals :: MODAL_1:def 10 ((() --> [2,0]) with-replacement (,A)) with-replacement (<*1*>,B); coherence ((() --> [2,0]) with-replacement (,A)) with-replacement (<*1*>,B) is MP-wff by Th24; end; :: deftheorem defines 'not' MODAL_1:def 8 : for A being MP-wff holds 'not' A = (() --> [1,0]) with-replacement (,A); :: deftheorem defines (#) MODAL_1:def 9 : for A being MP-wff holds (#) A = (() --> [1,1]) with-replacement (,A); :: deftheorem defines '&' MODAL_1:def 10 : for A, B being MP-wff holds A '&' B = ((() --> [2,0]) with-replacement (,A)) with-replacement (<*1*>,B); definition let A be MP-wff; func ? A -> MP-wff equals :: MODAL_1:def 11 'not' ((#) ()); correctness coherence 'not' ((#) ()) is MP-wff ; ; let B be MP-wff; func A 'or' B -> MP-wff equals :: MODAL_1:def 12 'not' (() '&' ()); correctness coherence 'not' (() '&' ()) is MP-wff ; ; func A => B -> MP-wff equals :: MODAL_1:def 13 'not' (A '&' ()); correctness coherence 'not' (A '&' ()) is MP-wff ; ; end; :: deftheorem defines ? MODAL_1:def 11 : for A being MP-wff holds ? 
A = 'not' ((#) ()); :: deftheorem defines 'or' MODAL_1:def 12 : for A, B being MP-wff holds A 'or' B = 'not' (() '&' ()); :: deftheorem defines => MODAL_1:def 13 : for A, B being MP-wff holds A => B = 'not' (A '&' ()); theorem Th25: :: MODAL_1:30 for n being Nat holds --> [3,n] is MP-wff proof end; theorem Th26: :: MODAL_1:31 proof end; definition let p be MP-variable; func @ p -> MP-wff equals :: MODAL_1:def 14 --> p; coherence proof end; end; :: deftheorem defines @ MODAL_1:def 14 : for p being MP-variable holds @ p = --> p; theorem Th27: :: MODAL_1:32 for p, q being MP-variable st @ p = @ q holds p = q proof end; Lm3: for n, m being Nat holds in dom (() --> [n,m]) proof end; theorem Th28: :: MODAL_1:33 for A, B being MP-wff st 'not' A = 'not' B holds A = B proof end; theorem Th29: :: MODAL_1:34 for A, B being MP-wff st (#) A = (#) B holds A = B proof end; theorem Th30: :: MODAL_1:35 for A, A1, B, B1 being MP-wff st A '&' B = A1 '&' B1 holds ( A = A1 & B = B1 ) proof end; definition coherence by Th26; end; :: deftheorem defines VERUM MODAL_1:def 15 : theorem Th31: :: MODAL_1:36 for A being MP-wff holds ( not card (dom A) = 1 or A = VERUM or ex p being MP-variable st A = @ p ) proof end; theorem Th32: :: MODAL_1:37 for A being MP-wff holds ( not card (dom A) >= 2 or ex B being MP-wff st ( A = 'not' B or A = (#) B ) or ex B, C being MP-wff st A = B '&' C ) proof end; theorem Th33: :: MODAL_1:38 for A being MP-wff holds card (dom A) < card (dom ()) proof end; theorem Th34: :: MODAL_1:39 for A being MP-wff holds card (dom A) < card (dom ((#) A)) proof end; theorem Th35: :: MODAL_1:40 for A, B being MP-wff holds ( card (dom A) < card (dom (A '&' B)) & card (dom B) < card (dom (A '&' B)) ) proof end; definition let IT be MP-wff; attr IT is atomic means :Def16: :: MODAL_1:def 16 ex p being MP-variable st IT = @ p; attr IT is negative means :Def17: :: MODAL_1:def 17 ex A being MP-wff st IT = 'not' A; attr IT is necessitive means :Def18: :: MODAL_1:def 18 ex A being MP-wff st IT = (#) A; attr IT is conjunctive means :Def19: :: MODAL_1:def 19 ex A, B being MP-wff st IT = A '&' B; end; :: deftheorem Def16 defines atomic MODAL_1:def 16 : for IT being MP-wff holds ( IT is atomic iff ex p being MP-variable st IT = @ p ); :: deftheorem Def17 defines negative MODAL_1:def 17 : for IT being MP-wff holds ( IT is negative iff ex A being MP-wff st IT = 'not' A ); :: deftheorem Def18 defines necessitive MODAL_1:def 18 : for IT being MP-wff holds ( IT is necessitive iff ex A being MP-wff st IT = (#) A ); :: deftheorem Def19 defines conjunctive MODAL_1:def 19 : for IT being MP-wff holds ( IT is conjunctive iff ex A, B being MP-wff st IT = A '&' B ); registration existence ex b1 being MP-wff st b1 is atomic proof end; existence ex b1 being MP-wff st b1 is negative proof end; existence ex b1 being MP-wff st b1 is necessitive proof end; existence ex b1 being MP-wff st b1 is conjunctive proof end; end; scheme :: MODAL_1:sch 1 MPInd{ P1[ Element of MP-WFF ] } : for A being Element of MP-WFF holds P1[A] provided A1: P1[ VERUM ] and A2: for p being MP-variable holds P1[ @ p] and A3: for A being Element of MP-WFF st P1[A] holds P1[ 'not' A] and A4: for A being Element of MP-WFF st P1[A] holds P1[ (#) A] and A5: for A, B being Element of MP-WFF st P1[A] & P1[B] holds P1[A '&' B] proof end; theorem :: MODAL_1:41 for A being Element of MP-WFF holds ( A = VERUM or A is atomic MP-wff or A is negative MP-wff or A is necessitive MP-wff or A is conjunctive MP-wff ) proof end; theorem Th37: :: MODAL_1:42 for A being MP-wff 
holds ( A = VERUM or ex p being MP-variable st A = @ p or ex B being MP-wff st A = 'not' B or ex B being MP-wff st A = (#) B or ex B, C being MP-wff st A = B '&' C ) proof end; theorem Th38: :: MODAL_1:43 for p being MP-variable for A, B being MP-wff holds ( @ p <> 'not' A & @ p <> (#) A & @ p <> A '&' B ) proof end; theorem Th39: :: MODAL_1:44 for A, B, C being MP-wff holds ( 'not' A <> (#) B & 'not' A <> B '&' C ) proof end; theorem Th40: :: MODAL_1:45 for A, B, C being MP-wff holds (#) A <> B '&' C proof end; Lm4: for A, B being MP-wff holds ( VERUM <> 'not' A & VERUM <> (#) A & VERUM <> A '&' B ) proof end; Lm5: proof end; Lm6: for p being MP-variable holds VERUM <> @ p proof end; theorem :: MODAL_1:46 for p being MP-variable for A, B being MP-wff holds ( VERUM <> @ p & VERUM <> 'not' A & VERUM <> (#) A & VERUM <> A '&' B ) by ; scheme :: MODAL_1:sch 2 MPFuncEx{ F1() -> non empty set , F2() -> Element of F1(), F3( Element of MP-variables ) -> Element of F1(), F4( Element of F1()) -> Element of F1(), F5( Element of F1()) -> Element of F1(), F6( Element of F1(), Element of F1()) -> Element of F1() } : ex f being Function of MP-WFF,F1() st ( f . VERUM = F2() & ( for p being MP-variable holds f . (@ p) = F3(p) ) & ( for A being Element of MP-WFF holds f . () = F4((f . A)) ) & ( for A being Element of MP-WFF holds f . ((#) A) = F5((f . A)) ) & ( for A, B being Element of MP-WFF holds f . (A '&' B) = F6((f . A),(f . B)) ) ) proof end;
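The decorated-tree encoding above is compact but hard to read at a glance. Purely as an illustration (in F#, to match the code used elsewhere on this page; this is not part of the Mizar formalization), the same grammar of modal formulas corresponds to an ordinary algebraic data type. The derived operators follow the standard modal readings (possibility as not-necessarily-not, and so on), since the extraction above dropped their inner arguments.

```fsharp
// Illustrative only: the bracketed labels mirror the Mizar encoding of decorated-tree nodes.
type MPWff =
    | Verum                     // [0,0]
    | Var of int                // [3,k]  -- propositional variable k
    | Not of MPWff              // [1,0]  -- 'not'
    | Nec of MPWff              // [1,1]  -- the (#) necessity operator
    | And of MPWff * MPWff      // [2,0]  -- '&'

// Standard readings of the derived connectives of MODAL_1:def 11-13.
let poss a   = Not (Nec (Not a))           // ? A     (possibility)
let disj a b = Not (And (Not a, Not b))    // A 'or' B
let impl a b = Not (And (a, Not b))        // A => B
```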
{}
# Vertically center text in PowerDot

I'd like to vertically center all text in all my slides, in Powerdot. I think \pddefinetemplate could be used with options, but I can't find how. Here is my code.

\documentclass[style=aggie]{powerdot}
\usepackage[latin1]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{mathptmx}
\usepackage[francais]{babel}
\begin{document}
\begin{wideslide}{Illustration}
I'd like to center vertically my text, in each slide. Thanks a lot
\end{wideslide}
\end{document}

Could someone help me? Best regards,

• Hi, surely we are all here to give you an answer, but if you do not enter your LaTeX code along with the application we cannot help you the best. – Sebastiano Feb 10 '18 at 7:50
• Hello, you're right. Here is my code. Best regards – loruyza Feb 10 '18 at 8:29

It needs too many changes in the code if you want all slide types to be vertically centered. It is easier if you could use a command like \Fill:

\documentclass[style=aggie]{powerdot}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{mathptmx}
\usepackage[francais]{babel}
\newcommand\Fill{\vspace*{\fill}\null}
\begin{document}
\begin{wideslide}{Illustration}\Fill
I'd like to center vertically my text, in each slide. Thanks a lot

I'd like to center vertically my text, in each slide. Thanks a lot

I'd like to center vertically my text, in each slide. Thanks a lot

I'd like to center vertically my text, in each slide. Thanks a lot
\Fill
\end{wideslide}
\begin{slide}{Illustration}\Fill
I'd like to center vertically my text, in each slide. Thanks a lot

I'd like to center vertically my text, in each slide. Thanks a lot
\Fill
\end{slide}
\end{document}

• Hello Thanks a lot. It's very friendly. But the text is not centered. For example, if I write one, two, ... paragraphs, all the text always starts at 0.5\slideheight (not centered, so). Could you find another way? Best regards, – loruyza Feb 10 '18 at 10:48
• @Herbert I have voted your question and I have deleted my answer. +1 – Sebastiano Feb 10 '18 at 10:57
• @loruyza: I see. I suppose that the complete frame definition has to be redefined. Will see what I can do – user2478 Feb 10 '18 at 12:11
• @loruyza: see edited answer. However, using beamer instead makes things easier ... – user2478 Feb 10 '18 at 12:33
• Hello It's ok for me. It seems that Powerdot could not adjust directly. Thanks again. I know Beamer, it was my first try with Powerdot. Best regards – loruyza Feb 10 '18 at 12:37
{}
# Miles Davis Is A Cuica - Matt Lavelle - Cuica In The Third House (CDr, Album)

## 9 thoughts on “ Miles Davis Is A Cuica - Matt Lavelle - Cuica In The Third House (CDr, Album) ”

1. Zular says: "Directions in Music by Miles Davis" Year he dropped out of music. Year of last fusion period. Highly electronic album, with Marcus Miller "Tutu" 5 most important collaborators -Gil Evans -Bill Evans -Wayne Shorter -Joe Zawinul -Marcus Miller. Name of last studio album "Doo-Bop".
2. Kirr says: Start studying miles davis. Learn vocabulary, terms, and more with flashcards, games, and other study tools.
3. JoJolrajas says: Matt Lavelle: Cuica in the Third House (, KMB): Solo project, with spoken bits I didn't really follow, and blasts of trumpet or flugelhorn and bass clarinet, as interesting as ever. Limited edition CDR.
{}
# On the Decay of Plane Shock Waves Ballistic Research Laboratory Report No. 423 Chandrasekhar/emh Aberdeen Proving Ground, Md. 8 November 1943 ON THE DECAY OF PLANE SHOCK WAVES Abstract In this report the problem of the decay of plane shock waves is considered and it is shown that there exists a special class of shock pulses for which the problem of decay admits of an explicit solution. The case in question arises when shocks of moderate intensities (Mach numbers less than 1.5 if an accuracy of one per cent is demanded) are considered. In these cases the changes in entropy as well as in the quantity 5 c — u (where c denotes the local velocity of sound and u the mass velocity) as the shock front is crossed can be neglected. The case of linear shock pulses in which both u and c are linear functions of x behind the shock front is considered particularly in some detail. 1. Introduction. The only case in the theory of the decay of plane shock waves which appears to have been studied in any detail is the one which has recently been investigated by W. G. Penney and H. K. Dasgupta (R.C. 301) by numerical methods. And it appears not to have been recognized that there exists a special class of shock pulses for which the problem of decay admits of an explicit solution. The case in question arises when shock waves of moderate intensities (i.e., shocks with Mach numbers less than 1.5) are considered. For, under these circumstances (when the ratio of pressures on either side of the shock front is less than 2.5) the increase in entropy of an element of gas as it crosses the shock front can be ignored. And moreover the change in the quantity (1)${\displaystyle Q={\frac {2}{\gamma -1}}c-u}$ (where c denotes the local velocity of sound and u the mass velocity) as we cross the shock can also be neglected. That this is so is apparent from Table 1 where the velocity of sound immediately behind the shock front (in units of the velocity of sound, a, in front of the shock) as determined by the Rankine-Hugoniot equation (2) ${\displaystyle {\frac {c}{a}}}$ ${\displaystyle =\left[y{\frac {\gamma +1+(\gamma -1)y}{\gamma -1+(\gamma +1)y}}\right]^{1/2}}$ ${\displaystyle =\left[{\frac {y(6+y)}{1+6y}}\right]^{1/2}}$ for γ = 1.4, where y denotes the ratio of pressures on either side of the shock front, is compared with that given by (3) ${\displaystyle {\frac {c}{a}}}$ ${\displaystyle =y^{(\gamma -1)/2\gamma }}$ ${\displaystyle =y^{1/7}}$ for γ = 1.4, which will be valid if entropy changes be ignored. It is seen that in agreement with what we have stated, the values of c for γ = 1.4 determined by equations (2) and (3) differ by less than one per cent for y < 2.5. Similarly we also notice that the change in Q as we cross the shock front is also less than one per cent under the same circumstances. TABLE 1 In this table y denotes the ratio of the pressures on the two sides of the shock front, U the shock velocity, u the mass velocity behind the front (both in a frame of reference in which the mass velocity in the undisturbed region is zero), c the velocity of sound immediately behind the shock front, P = 5c+u and Q = 5c-u. 
y     U/a    u/a    c/a    y^{1/7}   P/a    Q/a
1.0   1.000  0.000  1.000  1.000     5.000  5.000
1.1   1.042  0.069  1.014  1.014     5.137  5.000
1.2   1.082  0.132  1.026  1.026     5.265  5.001
1.3   1.121  0.191  1.038  1.038     5.384  5.001
1.4   1.159  0.247  1.050  1.049     5.496  5.002
1.5   1.195  0.299  1.061  1.059     5.602  5.005
1.6   1.231  0.348  1.071  1.069     5.704  5.007
1.7   1.265  0.395  1.081  1.078     5.801  5.010
1.8   1.298  0.440  1.091  1.087     5.894  5.014
1.9   1.331  0.483  1.100  1.095     5.984  5.018
2.0   1.363  0.524  1.109  1.104     6.071  5.023
2.1   1.394  0.564  1.118  1.112     6.155  5.027
2.2   1.424  0.602  1.127  1.119     6.237  5.034
2.3   1.454  0.639  1.136  1.127     6.318  5.040
2.4   1.483  0.674  1.144  1.133     6.395  5.047
2.5   1.512  0.709  1.152  1.140     6.470  5.053

It would accordingly appear that a significant case of shock pulses is provided by neglecting entropy changes and considering Q as a constant throughout. As we shall see this leads to an interesting class of shock solutions which does not appear to have been isolated so far.

2. The Solutions for a Special Class of Shock Pulses for which Q = Constant.

Measuring the velocities in units of the velocity of sound in the undisturbed air in front of the shock, the equations of motion in Riemann's form are (cf. Penney, R. C., 260; also, the Appendix to this paper)

(4)${\displaystyle dP={\frac {\partial P}{\partial x}}\left[dx-(u+c)dt\right]+{\frac {1}{\gamma -1}}c^{2}{\frac {\partial \log \theta }{\partial x}}dt}$,

and

(5)${\displaystyle dQ={\frac {\partial Q}{\partial x}}\left[dx-(u-c)dt\right]-{\frac {1}{\gamma -1}}c^{2}{\frac {\partial \log \theta }{\partial x}}dt}$,

where

(6)${\displaystyle P{=}{\frac {2}{\gamma -1}}c+u}$; ${\displaystyle Q{=}{\frac {2}{\gamma -1}}c-u}$

and ${\displaystyle \theta }$ denotes the potential temperature (i.e., the temperature which the element of gas under consideration would have if reduced adiabatically to a certain standard pressure). For shocks of moderate intensities the term in ${\displaystyle \theta }$ which incorporates the changes in entropy can be ignored and we have

(7)${\displaystyle dP={\frac {\partial P}{\partial x}}\left[dx-(u+c)dt\right]}$,

and

(8)${\displaystyle dQ={\frac {\partial Q}{\partial x}}\left[dx-(u-c)dt\right]}$.

The foregoing equations are equivalent to

(9)${\displaystyle {\frac {\partial P}{\partial t}}=-(u+c){\frac {\partial P}{\partial x}}}$,

and

(10)${\displaystyle {\frac {\partial Q}{\partial t}}=-(u-c){\frac {\partial Q}{\partial x}}}$.

The particular significance of the case

(11)${\displaystyle Q{=}{\frac {2}{\gamma -1}}c-u{=}}$ constant ${\displaystyle {=}{\frac {2}{\gamma -1}}}$

is now apparent: Q has the value 5 outside the shock pulse, and it retains this value (to within 1% for y ≤ 2.5) as we cross the shock. Moreover, according to equation (8), since

(12)${\displaystyle dQ=0}$ for ${\displaystyle dx=(u-c)dt}$

it appears that after a sufficient length of time the shock pulse must be characterized by Q = 5 throughout. In any event it is clear that in considering the case Q = constant = 5 we are not limiting ourselves to too 'trivial' a class of shock pulses. Assuming then the validity of equation (11) we have

(13)${\displaystyle u={\frac {2}{\gamma -1}}\left(c-1\right)}$; ${\displaystyle P={\frac {2}{\gamma -1}}\left(2c-1\right)}$; ${\displaystyle u+c={\frac {\gamma +1}{\gamma -1}}c-{\frac {2}{\gamma -1}}}$,

and equation (9) reduces to

(14)${\displaystyle {\frac {\partial c}{\partial t}}=-\left({\frac {\gamma +1}{\gamma -1}}c-{\frac {2}{\gamma -1}}\right){\frac {\partial c}{\partial x}}}$.
Letting (15)${\displaystyle \phi ={\frac {\gamma +1}{\gamma -1}}c-{\frac {2}{\gamma -1}}}$ we have (16)${\displaystyle {\frac {\partial \phi }{\partial t}}+\phi {\frac {\partial \phi }{\partial x}}=0}$. A complete integral of the foregoing equation can be written down at once. We have (17)${\displaystyle \phi ={\frac {1+qx}{b+qt}}}$ where b and q are two arbitrary constants. In terms of this complete integral we can readily write down the general solution of equation (16). But postponing this discussion to §3 and limiting ourselves for the present to the solution (17), we have (cf. eqs. (13) and (15)) (18)${\displaystyle c={\frac {\gamma -1}{\gamma +1}}\left({\frac {1+qx}{b+qt}}+{\frac {2}{\gamma -1}}\right)}$ and (19)${\displaystyle u={\frac {2}{\gamma +1}}\left({\frac {1+qx}{b+qt}}-1\right)}$ According to equations (18) and (19), at any given instant both c and u are linear functions of x behind the shock front. By a translation of the time axis we can rewrite equations (18) and (19) more conveniently in the forms (20)${\displaystyle c={\frac {\gamma -1}{\gamma +1}}\left({\frac {1+qx}{1+qt}}+{\frac {2}{\gamma -1}}\right)}$, and (21)${\displaystyle u={\frac {2}{\gamma +1}}\left({\frac {1+qx}{1+qt}}-1\right)}$. From the foregoing equations it follows that (22)u = 0, c = 1 for x = t In other words the point at which u = 0 in the pulse moves forward with a uniform velocity equal to the velocity of sound in the undisturbed regions. Starting at this point, x = t, u and c increase linearly with x till they attain their maximum values immediately behind the shock front. And moreover the shock velocity U is related to the mass velocity umax behind the shock front by the Rankine-Hugoniot equation (23)${\displaystyle u_{\max }={\frac {2}{\gamma +1}}\left(U-{\frac {1}{U}}\right)}$. Thus at any given instant the positive phase of the pulse extends from[1] (24)x = t to ${\displaystyle qx_{\max }=(1+qt)\left({\frac {\gamma +1}{2}}u_{\max }+1\right)-1}$. It now remains to determine umax (or equivalently U) as a function of time. We proceed now to establish this relation. Fig. 1 Let ABC denote the pulse at time t. At time t + Δt, A moves to A′ and C to C′ where (25)${\displaystyle AA'=\Delta t}$ and ${\displaystyle CC'=U\Delta t}$, where U denotes the shock velocity at time t. During the same time the slope of AB also changes. For at t + Δt the velocity field will be governed by (26)${\displaystyle u={\frac {2}{\gamma +1}}\left({\frac {1+qx}{1+q\left(t+\Delta t\right)}}-1\right)}$ Accordingly the value of u at the new position B′C′ of the shock front is given by (27)${\displaystyle u_{\max }\left(t+\Delta t\right)={\frac {2}{\gamma +1}}\left[{\frac {1+q\left(x+U\Delta t\right)}{1+q\left(t+\Delta t\right)}}-1\right]}$. The increment in umax during the time Δt is therefore given by (28) ${\displaystyle \Delta u_{\max }}$ ${\displaystyle ={\frac {2}{\gamma +1}}\left[{\frac {1+q\left(x+U\Delta t\right)}{1+q\left(t+\Delta t\right)}}-1\right]-{\frac {2}{\gamma +1}}\left[{\frac {1+qx}{1+qt}}-1\right],}$ ${\displaystyle ={\frac {2}{\gamma +1}}{\frac {q}{1+qt}}\left[U-{\frac {\gamma +1}{2}}u_{\max }-1\right]\Delta t}$. Using equation (23) we can rewrite the foregoing equation in the form (29)${\displaystyle {\frac {d}{dt}}\left(U-{\frac {1}{U}}\right)={\frac {q}{1+qt}}\left({\frac {1}{U}}-1\right)}$. Thus the differential equation governing the dependence of U on time is (30)${\displaystyle \left(1+{\frac {1}{U^{2}}}\right){\frac {dU}{dt}}=-{\frac {q}{1+qt}}\left({\frac {U-1}{U}}\right)}$. 
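A sketch of the integration step (filling in details not written out above), by separation of variables and partial fractions:

${\displaystyle \left(1+{\frac {1}{U^{2}}}\right){\frac {U}{U-1}}\,dU=-{\frac {q\,dt}{1+qt}},\qquad {\frac {U^{2}+1}{U\left(U-1\right)}}=1-{\frac {1}{U}}+{\frac {2}{U-1}},}$

so that

${\displaystyle U-\log U+2\log \left(U-1\right)=-\log \left(1+qt\right)+{\text{constant}};}$

exponentiating and fixing the constant at t = 0, where U = U0, gives equation (31) below.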
This equation can be integrated to give

(31)${\displaystyle 1+qt={\frac {\left(U_{0}-1\right)^{2}}{U_{0}}}\,{\frac {U}{\left(U-1\right)^{2}}}\,e^{\left(U_{0}-U\right)}}$

where U0 denotes the shock velocity at time t = 0. The dependence of the ratio of pressures on the two sides of the shock front on time can be readily written down from equation (31). We have (for γ = 1.4)

(32)${\displaystyle 1+qt={\frac {\left(U_{0}-1\right)^{2}}{U_{0}}}{\frac {\left[\left(6y+1\right)/7\right]^{1/2}}{\left[\left\{\left(6y+1\right)/7\right\}^{1/2}-1\right]^{2}}}e^{U_{0}-\left[\left(6y+1\right)/7\right]^{1/2}}}$.

Equations (20), (21), (23), (24), (31) and (32) together describe completely the behavior of a linear shock pulse. The only limitation on this solution is that U0 ≤ 1.5 for γ = 1.4 if an accuracy of the order of 1% is demanded. In Fig. 2 we have illustrated the dependence of U on t for the case U0 = 1.24 and γ = 1.4. Similarly in Fig. 3 the velocity field in the positive phase of the shock pulse at various instants is illustrated for the same case. And finally in Table II we have tabulated xmax, umax, and U as functions of time also for the case U0 = 1.24 and γ = 1.4.

FIG. 2 (U as a function of t; U0 = 1.24, γ = 1.4)

FIG. 3 (velocity field in the positive phase of the pulse at various instants; U0 = 1.24, γ = 1.4)

It is of interest to compare these results with those of Penney and Dasgupta. Though these authors considered an initial shock pulse with a Mach number in the neighborhood of 3, it is seen that their curves illustrating the velocity in the pulse at various instants are very similar to those illustrated in Fig. 3. We may further note that, according to equations (31) and (32), for γ = 1.4

${\displaystyle \left.{\begin{array}{lll}qt\sim {\dfrac {\left(U_{0}-1\right)^{2}}{U_{0}}}\ \mathrm {e} ^{U_{0}}\ {\dfrac {1}{\left(U-1\right)^{2}}}&\ &\left(t\rightarrow \infty ,U\rightarrow 1\right)\\\\qt\sim {\dfrac {49\left(U_{0}-1\right)^{2}}{9U_{0}}}\ \mathrm {e} ^{U_{0}}\ {\dfrac {1}{\left(y-1\right)^{2}}}&\ &\left(t\rightarrow \infty ,y\rightarrow 1\right)\end{array}}\right.}$ (33)

It is possible that the laws

${\displaystyle \left(U-1\right)\propto {\frac {1}{\sqrt {t}}}\ \mathrm {and} \ \left(y-1\right)\propto {\frac {1}{\sqrt {t}}}\qquad \left(t\rightarrow \infty \right)}$ (34)

are more general than their derivations from equations (31) and (32) would suggest.

Table II

qt       qxmax    U      umax
0        0.434    1.24   0.361
0.195    0.673    1.22   0.334
0.450    0.982    1.20   0.306
0.796    1.394    1.18   0.277
1.280    1.960    1.16   0.248
1.986    2.771    1.14   0.219
3.074    3.999    1.12   0.189
4.878    6.000    1.10   0.159
8.199    9.616    1.08   0.128
15.375   17.28    1.06   0.097
35.88    38.77    1.04   0.065
146.6    152.5    1.02   0.033
∞        ∞        1.00   0.000

3. On the General Solution of Shock-Pulses with Q = Constant.

In §2 we have discussed the special case of linear shock pulses which are characterized by Q = Constant ${\displaystyle \left(=2/\left(\gamma -1\right)\right)}$ throughout. In this section we shall briefly indicate how the most general shock pulses under the circumstances Q = constant can be constructed. For this purpose we require the general integral of equation (16). As is well known (cf. A.R. Forsyth, Differential Equations, pp. 375-380) the general integral can be readily written down in terms of a complete integral, i.e., an integral which contains as many constants as independent variables. Writing the complete integral of equation (16) in the form (cf. eq.
(17)) (38)${\displaystyle \phi ={\frac {a_{1}+x}{a_{2}+t}}}$ where a1 and a2 are two arbitrary constants, the general integral of equation (16) can be expressed as the eliminant between the equations (36) ${\displaystyle \phi \chi (a)+t=a+x}$ , ${\displaystyle \phi {\frac {d\chi }{da}}=1}$ , where χ is any arbitrary function of a. It is now evident that with the solution in the form (36) we can make φ satisfy any arbitrary distribution at time t = 0. Alternatively we may say that the distribution of c (or equivalently u) at time t = 0 will determine χ(a) thus making the solution determinate. In this fashion the most general form of shock pulses under the assumptions made in §1 can be constructed. In a later report we propose to give examples of shock pulses belonging to this more general class. S. Chandrasekhar APPENDIX THE EQUATIONS OF MOTION IN RIEMANN'S FORM ALLOWING FOR CHANGES IN ENTROPY In the text equations (4) and (5) were quoted from Penney (R.C., 260). Since Penney's report may not be generally accessible, it has been thought worth while to include here a brief outline of his derivation of these equations. The equations of motion of a linear pulse are ${\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}{=}-{\frac {1}{\rho }}{\frac {\partial p}{\partial x}}}$, (1) ${\displaystyle {\frac {\partial \rho }{\partial t}}+u{\frac {\partial \rho }{\partial x}}{=}-\rho {\frac {\partial u}{\partial x}}}$, where p denotes the pressure, ρ the density, and u the mass velocity at any point. Introduce the two functions ${\displaystyle P{=}\Delta _{1}{=}f(\rho ,\theta )+u}$, (2) ${\displaystyle Q{=}\Delta _{2}{=}f(\rho ,\theta )-u}$, where (3)${\displaystyle f{=}\int _{0}^{\rho }{\sqrt {\left({\frac {\partial p}{\partial \rho }}\right)_{\theta }}}d\log \rho }$;${\displaystyle c^{2}{=}\left({\frac {\partial p}{\partial \rho }}\right)_{\theta }}$, and c the local sound velocity. In equations (3), θ denotes the temperature which the element of gas under consideration would have when it is reduced to a standard pressure adiabatically. It is evident that θ remains constant during the motion of any element of gas (except when it crosses a discontinuity). Consider the total differential (4)${\displaystyle d\Delta _{i}{=}{\frac {\partial \Delta _{i}}{\partial t}}dt+{\frac {\partial \Delta _{i}}{\partial x}}dx}$(i=1,2). Rewriting this in the form (4′)${\displaystyle d\Delta _{i}={\frac {\partial \Delta _{i}}{\partial x}}\left[dx-\left(u\pm c\right)dt\right]+\left[{\frac {\partial \Delta _{i}}{\partial x}}\left(u\pm c\right)+{\frac {\partial \Delta _{i}}{\partial t}}\right]dt}$ (the + or - sign going respectively with i = 1 or 2) we shall consider the second term in square brackets occurring as the coefficient of dt. We have (cf. eq. (3)) ${\displaystyle {\frac {\partial \Delta _{i}}{\partial x}}\left(u\pm c\right)+{\frac {\partial \Delta _{i}}{\partial t}}}$ (5)${\displaystyle =\pm c{\frac {\partial \Delta _{i}}{\partial x}}+\left({\frac {\partial f}{\partial \rho }}{\frac {\partial \rho }{\partial x}}+{\frac {\partial f}{\partial \theta }}{\frac {\partial \theta }{\partial x}}\right)u+\left({\frac {\partial f}{\partial \rho }}{\frac {\partial \rho }{\partial t}}+{\frac {\partial f}{\partial \theta }}{\frac {\partial \theta }{\partial t}}\right)}$ ${\displaystyle \pm \left(u{\frac {\partial u}{\partial x}}+{\frac {\partial u}{\partial t}}\right)}$. 
Remembering that θ remains constant during the motion and using the equations (1), we find that

(6)${\displaystyle {\frac {\partial \Delta _{i}}{\partial x}}\left(u\pm c\right)+{\frac {\partial \Delta _{i}}{\partial t}}=\left(\pm c{\frac {\partial f}{\partial \theta }}\mp {\frac {1}{\rho }}{\frac {\partial p}{\partial \theta }}\right){\frac {\partial \theta }{\partial x}}}$.

Thus,

(7)${\displaystyle dP={\frac {\partial P}{\partial x}}\left[dx-\left(u+c\right)dt\right]+\left(c{\frac {\partial f}{\partial \theta }}-{\frac {1}{\rho }}{\frac {\partial p}{\partial \theta }}\right){\frac {\partial \theta }{\partial x}}dt}$,

and

(8)${\displaystyle dQ={\frac {\partial Q}{\partial x}}\left[dx-\left(u-c\right)dt\right]-\left(c{\frac {\partial f}{\partial \theta }}-{\frac {1}{\rho }}{\frac {\partial p}{\partial \theta }}\right){\frac {\partial \theta }{\partial x}}dt}$.

Now for a perfect gas (with a ratio of specific heats γ) we have

(9)${\displaystyle p=A^{2}\theta ^{\gamma }\rho ^{\gamma }}$,

where A is a constant. From this equation we readily derive the formulae

(10)${\displaystyle c=A\gamma ^{1/2}\theta ^{\gamma /2}\rho ^{(\gamma -1)/2}}$,

and

(11)${\displaystyle f={\frac {2}{\gamma -1}}A\gamma ^{1/2}\theta ^{\gamma /2}\rho ^{(\gamma -1)/2}={\frac {2}{\gamma -1}}c}$.

Thus for the case under consideration

(12)${\displaystyle P={\frac {2}{\gamma -1}}c+u}$ and ${\displaystyle Q={\frac {2}{\gamma -1}}c-u}$.

Moreover we readily verify from the foregoing that

(13)${\displaystyle c{\frac {\partial f}{\partial \theta }}-{\frac {1}{\rho }}{\frac {\partial p}{\partial \theta }}={\frac {1}{\gamma -1}}{\frac {c^{2}}{\theta }}}$.

Combining equations (7), (8), (12) and (13) we obtain the equations quoted in the text.

1. In this paper we do not explicitly discuss the negative (or the suction) phase of the pulse. It is however clear that the discussion of the suction phase will proceed on lines exactly similar to that given for the positive phase.

This work is in the public domain in the United States because it is a work of the United States federal government (see 17 U.S.C. 105).
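As a quick check of equation (31), the short Python sketch below recomputes qt for a few entries of Table II (U0 = 1.24, γ = 1.4); the function name and layout are purely illustrative.

```python
import math

def qt_from_U(U, U0=1.24):
    """qt from equation (31): 1 + qt = ((U0-1)^2 / U0) * U / (U-1)^2 * exp(U0 - U)."""
    return (U0 - 1.0) ** 2 / U0 * U / (U - 1.0) ** 2 * math.exp(U0 - U) - 1.0

# Compare with the first column group of Table II (U0 = 1.24, gamma = 1.4).
for U, qt_table in [(1.22, 0.195), (1.20, 0.450), (1.12, 3.074), (1.10, 4.878)]:
    print(f"U = {U:.2f}:  qt = {qt_from_U(U):.3f}   (Table II: {qt_table})")
# The computed values agree with Table II to the three decimals tabulated.
```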
{}
## Free energy and work

$w=-P\Delta V$ and $w=-\int_{V_{1}}^{V_{2}}P\,dV=-nRT\ln\frac{V_{2}}{V_{1}}$

KayleeMcCord1F Posts: 31 Joined: Fri Sep 29, 2017 7:05 am

### Free energy and work

What is the relationship between free energy and work?

Joyce Gu 1E Posts: 29 Joined: Tue Nov 24, 2015 3:00 am

### Re: Free energy and work

Well, when we refer to Gibbs free energy (G), it's just the energy that is free to do useful work. So the change in Gibbs free energy (ΔG) is equal to the maximum amount of work done by a process at constant temperature and pressure ($\Delta G = w_{\max}$).

Troy Tavangar 1I Posts: 50 Joined: Fri Sep 29, 2017 7:04 am

### Re: Free energy and work

It is the most work that can be done.

CalebBurns3L Posts: 47 Joined: Fri Sep 29, 2017 7:07 am

### Re: Free energy and work

It is the maximum amount of energy that can be used to do work.
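To put numbers on the work formula quoted at the top, here is a small illustrative Python calculation of the reversible isothermal expansion work $w=-nRT\ln(V_2/V_1)$ for one mole of an ideal gas; all values are made up for the example.

```python
import math

R = 8.314          # J/(mol K), gas constant
n = 1.0            # mol, illustrative amount of ideal gas
T = 298.15         # K, constant temperature
V1, V2 = 1.0, 2.0  # L; only the ratio V2/V1 matters here

# Reversible isothermal expansion work: w = -nRT ln(V2/V1)
w = -n * R * T * math.log(V2 / V1)
print(f"w = {w:.0f} J")   # about -1718 J: the gas does work on the surroundings
```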
{}
Retrieving angles from a rotation matrix

I'm working with rotations in n dimensions. I represent these rotations as a sequence of $(n^2 - n)/2$ angles, one for each pair of axes, in a fixed order. I can easily compute a rotation matrix from this, by generating the Givens matrix for each angle, and multiplying these matrices.

The problem is doing it the other way around. Given an orthogonal matrix, how can I retrieve the angles in the right order?

I'm using an evolutionary algorithm at the moment, which works reasonably well, but is very slow. It would be good if there were a fast method for calculating an exact solution. Unfortunately, most things I find are limited to the 3d case. My linear algebra is limited to the very practical. I'm willing to learn, of course, but I need to know where to look first.

1 Answer

I assume your orthogonal matrix $R$ has determinant 1, so it is possible to write it as a product of rotations. Let $e_j$, $j=1\ldots n$, be the standard unit vectors in ${\mathbb R}^n$. Here's one possible strategy.

Take a rotation $R_{1,n}$ in coordinates $1$ and $n$ such that $(R_{1,n} R^{-1} e_n)_1 = 0$, then $R_{2,n}$ in coordinates $2$ and $n$ such that $(R_{2,n} R_{1,n} R^{-1} e_n)_2 = 0$ (noting that you'll still have $(R_{2,n} R_{1,n} R^{-1} e_n)_1 = 0$), and so on until $(R_{n-1,n} \ldots R_{1,n} R^{-1} e_n)_i = 0$ for $i=1,2,\ldots,n-1$. Then $(R_{n-1,n} \ldots R_{1,n} R^{-1} e_n)_n$ must be $\pm 1$. If it's $-1$, change $R_{n-1,n}$ by angle $\pi$ to make it $+1$. Note that, by orthogonality, $(R_{n-1,n} \ldots R_{1,n} R^{-1} e_j)_n = 0$ for all $j < n$.

Proceed in the same way on the $n-1$ component: $R_{n-2,n-1} \ldots R_{1,n-1} R_{n-1,n} \ldots R_{1,n} R^{-1} e_j = e_j$ for $j=n-1$ and $j=n$. Continue iterating down to $j=2$, obtaining finally $R_{1,2} R_{2,3} R_{1,3} \ldots R_{1,n} R^{-1} e_j = e_j$ for $j = 2,3, \ldots, n$. Since, by assumption, the determinant is $1$, this will also be true for $j=1$. Thus $R_{1,2} R_{2,3} R_{1,3} \ldots R_{1,n} R^{-1} = I$, i.e. $R_{1,2} R_{2,3} R_{1,3} \ldots R_{1,n} = R$.

That's great. It seems to do exactly what I want (and yes, the determinant is 1, I thought orthogonality=rotation, but apparently that's not true). Can you comment on how you came up with this? Is this a specific instance of a standard method, or is this sort of stuff just plainly obvious once you reach a certain level of understanding of Linear Algebra? And in the case of the latter, could you suggest a textbook at about that level? –  Peter Mar 14 '12 at 10:52

Well, it's a standard sort of idea in mathematics: break the problem up into small pieces, and do one step at a time. As for a Linear Algebra text to build understanding (not specifically for this problem), you might try Strang: books.google.ca/books?id=8QVdcRJyL2oC –  Robert Israel Mar 14 '12 at 16:55

In case anybody else is interested, here is an implementation of this algorithm: github.com/pbloem/Lilian/blob/master/Lilian/src/main/java/org/… –  Peter Apr 5 '13 at 14:19
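Here is a rough Python/NumPy sketch of the procedure described in the answer (it is not the linked Java implementation): it clears one column of $R^{-1}=R^{T}$ at a time with Givens rotations, recording one angle per pair of axes, and the recorded rotations multiply back to $R$.

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n rotation by angle theta in the (i, j) coordinate plane."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c
    G[j, j] = c
    G[i, j] = -s
    G[j, i] = s
    return G

def rotation_to_angles(R):
    """Recover n(n-1)/2 plane-rotation angles from a rotation matrix R with det(R) = +1.

    Works on M = R^{-1} = R^T: for each column j (last column first) it zeroes the
    entries above the diagonal with rotations in the planes (i, j), as in the
    answer above.  Returns a list of (i, j, theta) triples.
    """
    n = R.shape[0]
    M = R.T.copy()
    triples = []
    for j in range(n - 1, 0, -1):
        for i in range(j):
            theta = np.arctan2(M[i, j], M[j, j])   # makes (G M)[i, j] = 0 and (G M)[j, j] >= 0
            M = givens(n, i, j, theta) @ M
            triples.append((i, j, theta))
    return triples                                 # afterwards M is (numerically) the identity

def angles_to_rotation(n, triples):
    """Rebuild R from the recorded angles: R = G_k ... G_1 (left-multiplying in list order)."""
    R = np.eye(n)
    for i, j, theta in triples:
        R = givens(n, i, j, theta) @ R
    return R

# Illustrative check on a random 5 x 5 rotation matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                             # force det = +1
triples = rotation_to_angles(Q)
print(len(triples))                                # 10 angles = n(n-1)/2 for n = 5
print(np.allclose(angles_to_rotation(5, triples), Q))   # True
```

The angles come out in the plane order $(1,n),(2,n),\dots,(1,2)$ used in the answer, so the product of the recorded rotations, taken in the order they were found, is exactly $R$.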
{}
Main Page | Publications | Downloads | Configuration | Running the Code | Solver Parameters | FAQ | Namespace List | Class Hierarchy | Class List | File List | Namespace Members | Class Members | File Members | Related Pages # APPSPACK::Constraints::Linear Class Reference #include <APPSPACK_Constraints_Linear.hpp> Collaboration diagram for APPSPACK::Constraints::Linear: [legend] List of all members. ## Detailed Description Constraint class that implements general linear inequality constraints. We consider the following linearly constrained problem The bounds in the inequalities are not required to be finite, e.g., it is allowable that . Because it can dramatically improves performance, variable scaling is used. The scaling vector is denoted by . The scaling can be specified by the user; see Linear Constraint Parameters. If all variable bounds are finite, a default scaling vector is provided; namely In the case where any element of is unbounded, the user is required to specify within the Linear Constraint Parameters. We define the variables in the scaled space as follows: Here and The scaled problem is then given by Combining the bound and linear inequality constraints, we get where With respect to the scaled system, APPSPACK::Constraints::Linear stores the following as member variables: , , , , and . Definition at line 203 of file APPSPACK_Constraints_Linear.hpp. ## Public Member Functions Linear (Parameter::List &params) Constructor. ~Linear () Destructor. void print () const Prints the bounds, scaling, and general linear constraint information. Accessors double getEpsMach () const Returns machine epsilon value, epsMach. const VectorgetScaling () const Returns scaling vector, scaling. const VectorgetLower () const Return vector of lower bounds, bLower. const VectorgetUpper () const Return vector of upper bounds, bUpper. const VectorgetBhatLower () const Return lower bound on scaled inequality constraints, bIneqLower. const VectorgetBhatUpper () const Return upper bound on scaled inequality constraints, bIneqUpper. const VectorgetBtildeEq () const Return right-hand side vector of scaled equality constraints, bEq. const MatrixgetAhat () const Return coefficient matrix for scaled inequality constraints, aIneq. const MatrixgetAtildeEq () const Return coefficient matrix for scaled equality constraints, aEq. void getNominalX (Vector &x) const Returns a point (x) that is initially feasible with respect to the bound constraints. Projectors. void scale (Vector &x) const Maps x to scaled space via affine linear transformation. void unscale (Vector &w) const Maps w to unscaled space via affine linear transformation. Feasibility verifiers. bool isFeasible (const Vector &x) const Return true if feasible, false otherwise. void assertFeasible (const Vector &x) const Throws an error if x is infeasible; see isFeasible(). Constraint analysis. bool isBoundsOnly () const Returns true if only variable bounds exists, false otherwise. double maxStep (const Vector &x, const Vector &d, double step) const Returns maximum feasible step in interval [0, step] along direction d. void getActiveIndex (const Vector &xdist, double epsilon, vector< ActiveType > &index) const Returns a vector indexing bound type with respect to inequality constraints. void formDistanceVector (const Vector &x, Vector &xdist) const Returns (nullspace projected) distance from x to each inequality constraints. 
void snapToBoundary (Vector &x, double esnap) const Attempts to find the closest point to x that satisfies all constraints within distance esnap. void makeBoundsFeasible (Vector &x) const Makes x feasible with respect to variable bounds. void formSnapSystem (const Vector &xtilde, double esnap, Matrix &Asnap, Vector &bsnap) const Forms system Asnap, and bsnap. On exit Asnap has full row rank. ## Private Member Functions void error (const string &fname, const string &msg) const Print an error message and throw an exception. void errorCheck () const Error checks on input. Constructor assistance. void setup (Parameter::List &params) Used by constructor to fill the parameter list with appropriate defaults and return the scaling. void setupBounds (Parameter::List &params) Sets up lower and upper bounds. void setupScaling (Parameter::List &params) Sets up scaling vector. void setupRhs (Parameter::List &params) Form the corresponding right-hand side vector of the coefficient matrix. void setupMatrix (Parameter::List &params) Form the coefficient matrix wrt bound and linear constraints. void setupScaledSystem () Form the corresponding scaled right-hand side vector of the coefficient matrix. Constraint categorizers. StateType getIneqState (int i, BoundType bType, const Vector &xTilde, double epsilon=-1) const Get state of inequality constraint i with respect to the given point. StateType getEqState (int i, const Vector &xTilde, double epsilon=-1) const Get state of equality constraint i with respect to the given point. bool isEqualityFeasible (const Vector &xtilde) const Returns true if equality constraints are feasible. bool isInequalityFeasible (const Vector &xtilde) const Returns true if inequality constraints are feasible. ## Private Attributes const double epsMach Machine epsilon. bool displayMatrix Vector scaling Scaling. Vector bLower Unscaled lower bounds. Vector bUpper Unscaled upper bounds. Unscaled-sytem members. Matrix aIneq The coefficient matrix of the unscaled general linear inequality constraints. Matrix aEq The coefficient matrix of unscaled linear equality constraints. Vector bIneqLower The unscaled lower bounds on general linear inequality constraints. Vector bIneqUpper The unscaled upper bounds on general linear inequality constraints. Vector bEq The right-hand side of unscaled equality constraints. Scaled-sytem members. Matrix aHat The scaled coefficient matrix of the linear (bound and general) inequality constraints. Vector aHatZNorm Holds two norms of rows of aHat projected into the nullspace of the scaled equality constraints. Vector aHatNorm Holds two norms of rows of aHat. Vector bHatLower The scaled lower bounds on the linear (bound and general) inequality constraints. Vector bHatUpper The scaled upper bounds on the linear (bound and general) inequality constraints. Matrix aTildeEq The scaled coefficient matrix of linear equality constraints. Vector bTildeEq The scaled right-hand side of equality coefficient matrix. Vector lHat Used in transforming x to scaled space. ## Constructor & Destructor Documentation APPSPACK::Constraints::Linear::Linear ( Parameter::List & params ) Constructor. Parameters: params (input/output) See Linear Constraint Parameters for details of what it should contain. Defaults are inserted for any values that are not defined. Definition at line 47 of file APPSPACK_Constraints_Linear.cpp. References errorCheck(), and setup(). APPSPACK::Constraints::Linear::~Linear ( ) Destructor. Definition at line 55 of file APPSPACK_Constraints_Linear.cpp. 
## Member Function Documentation double APPSPACK::Constraints::Linear::getEpsMach ( ) const Returns machine epsilon value, epsMach. Definition at line 361 of file APPSPACK_Constraints_Linear.cpp. const APPSPACK::Vector & APPSPACK::Constraints::Linear::getScaling ( ) const Returns scaling vector, scaling. Definition at line 366 of file APPSPACK_Constraints_Linear.cpp. const APPSPACK::Vector & APPSPACK::Constraints::Linear::getLower ( ) const Return vector of lower bounds, bLower. Definition at line 371 of file APPSPACK_Constraints_Linear.cpp. const APPSPACK::Vector & APPSPACK::Constraints::Linear::getUpper ( ) const Return vector of upper bounds, bUpper. Definition at line 376 of file APPSPACK_Constraints_Linear.cpp. const APPSPACK::Vector & APPSPACK::Constraints::Linear::getBhatLower ( ) const Return lower bound on scaled inequality constraints, bIneqLower. Definition at line 381 of file APPSPACK_Constraints_Linear.cpp. const APPSPACK::Vector & APPSPACK::Constraints::Linear::getBhatUpper ( ) const Return upper bound on scaled inequality constraints, bIneqUpper. Definition at line 386 of file APPSPACK_Constraints_Linear.cpp. const APPSPACK::Vector & APPSPACK::Constraints::Linear::getBtildeEq ( ) const Return right-hand side vector of scaled equality constraints, bEq. Definition at line 391 of file APPSPACK_Constraints_Linear.cpp. const APPSPACK::Matrix & APPSPACK::Constraints::Linear::getAhat ( ) const Return coefficient matrix for scaled inequality constraints, aIneq. Definition at line 396 of file APPSPACK_Constraints_Linear.cpp. Referenced by APPSPACK::Directions::buildNormalCone(). const APPSPACK::Matrix & APPSPACK::Constraints::Linear::getAtildeEq ( ) const Return coefficient matrix for scaled equality constraints, aEq. Definition at line 401 of file APPSPACK_Constraints_Linear.cpp. Referenced by APPSPACK::Directions::buildNormalCone(). void APPSPACK::Constraints::Linear::getNominalX ( Vector & x ) const Returns a point (x) that is initially feasible with respect to the bound constraints. Note:This point may not be feasible with respect to the linear constraints. Definition at line 406 of file APPSPACK_Constraints_Linear.cpp. Referenced by APPSPACK::Solver::initializeBestPointPtr(). void APPSPACK::Constraints::Linear::scale ( Vector & x ) const Maps x to scaled space via affine linear transformation. Replaces with . Definition at line 422 of file APPSPACK_Constraints_Linear.cpp. References error(), lHat, scaling, and APPSPACK::Vector::size(). Referenced by formDistanceVector(), isFeasible(), maxStep(), and snapToBoundary(). void APPSPACK::Constraints::Linear::unscale ( Vector & w ) const Maps w to unscaled space via affine linear transformation. Replaces with . Definition at line 431 of file APPSPACK_Constraints_Linear.cpp. References error(), lHat, scaling, and APPSPACK::Vector::size(). Referenced by snapToBoundary(). bool APPSPACK::Constraints::Linear::isFeasible ( const Vector & x ) const Return true if feasible, false otherwise. We consider the point x to be feasible with respect to a given constraint if it is a distance of no more than epsMach away; see getIneqState() and getEqState(). Definition at line 440 of file APPSPACK_Constraints_Linear.cpp. void APPSPACK::Constraints::Linear::assertFeasible ( const Vector & x ) const Throws an error if x is infeasible; see isFeasible(). Definition at line 461 of file APPSPACK_Constraints_Linear.cpp. References error(), and isFeasible(). 
bool APPSPACK::Constraints::Linear::isBoundsOnly ( ) const Returns true if only variable bounds exists, false otherwise. Definition at line 467 of file APPSPACK_Constraints_Linear.cpp. References aEq, aIneq, and APPSPACK::Matrix::empty(). double APPSPACK::Constraints::Linear::maxStep ( const Vector & x, const Vector & d, double step ) const Returns maximum feasible step in interval [0, step] along direction d. Parameters: x (input) current point d (input) search direction step (input) maximum step On exit a nonnegative scalar is returned that gives the maximum feasible step that can be made along direction d. A constraint is said to be a blocking constraint if d points towards its infeasible region. A finite variable lower bound is considered to be a blocking constraint if Similarly, a finite variable upper bound is considered to be a blocking constraint if A general linear inequality is considered to be a blocking constraint if . Here equals the the user modifiable parameter epsMach. If any blocking inequality constraint is deemed active, via a call to APPSPACK::Constraints::Linear::getIneqState, a value of is returned. Otherwise, is returned, where denotes the set of blocking inequality constraints. Note: We require that d lie in the null space (numerically at least) with respect to the equality constraints. d is said to lie in the null space if for all equality constraints. If does not lie in the null space, a value of zero is returned. Definition at line 472 of file APPSPACK_Constraints_Linear.cpp. Referenced by APPSPACK::Solver::generateTrialPoints(), and maxStep(). void APPSPACK::Constraints::Linear::getActiveIndex ( const Vector & xdist, double epsilon, vector< ActiveType > & index ) const Returns a vector indexing bound type with respect to inequality constraints. Parameters: xdist Records distance to each inequality constraint from a given point. It is assumed that xdist has been formed via a call to formDistanceVector(). epsilon The ith inequality constraint is considered active at its lower bound if Similarly, the ith inequality constraint is consider active at its upper bound if Here denotes the number of rows in . index On exit index[i] equals one of the enumeration Constraints::ActiveType. index[i] = NeitherActive if the constraint is inactive at both its lower and upper bounds.index[i] = LowerActive if the constraint is active at its lower bound, but not its upper bound.index[i] = UpperActive if the constraint is active at its upper bound, but not its lower bound.index[i] = BothActive if the constraint is active at both its upper and lower bound. Definition at line 559 of file APPSPACK_Constraints_Linear.cpp. References aHat, APPSPACK::Matrix::getNrows(), and APPSPACK::Matrix::resize(). Referenced by APPSPACK::Directions::updateConstraintState(). void APPSPACK::Constraints::Linear::formDistanceVector ( const Vector & x, Vector & xdist ) const Returns (nullspace projected) distance from x to each inequality constraints. Parameters: x (input) A given point. xdist (output) Gives the distance from x to each inequality constraints when moving within the nullspace of the scaled equality constraints. Distance is always determined in terms of the scaled system (for a detailed description see Linear): In order to differentiate between lower bounds and upper bounds, the inequality constraints are implicitly stacked as follows: Thus on exit, the length of xdist is necessarily twice the number of rows in . 
If we let and let denotes epsMach and denotes a matrix whose columns form an orthonormal basis for , we can define xdist as follows: If the entries of xdist are given by If then . Note: We are computing distance constrained to movement within the nullspace of the equality constraints. In exact arithmetic, the distance from a point to the constraint within the space spanned by the orthonormal matrix is given by Definition at line 587 of file APPSPACK_Constraints_Linear.cpp. Referenced by APPSPACK::Directions::computeNewDirections(). void APPSPACK::Constraints::Linear::snapToBoundary ( Vector & x, double esnap ) const Attempts to find the closest point to x that satisfies all constraints within distance esnap. Parameters: x Current point. esnap Used to define epsilon ball about x to determine which constraints are nearby. The primary purpose of this method is to avoid scenarios where 1) the true solution lies on the boundary, and 2) the boundary is is asymptotically being approached but never reached. In order to avoid this "half-stepping" to the boundary, snapToBoundary() attempts to generate the closest point to the current point x that satisfies all constraints lying within the given distance esnap. The method begins by calling findNearbyConstraints() to determine all constraint within esnap of the current point x. A matrix is then formed that consists of the rows followed by the rows of that correspond to nearby constraints. An LQ factorization of is then formed to determine a linearly independent subset of . is then redefined (if necessary) so that has full row rank. The corresponding right-hand side vector is at this time also formed. The solution to the below equality-constrained least-squares problem is then computed: On exit, x is set equal to . Method requires that APPSPACK be configured with LAPACK. Definition at line 631 of file APPSPACK_Constraints_Linear.cpp. References APPSPACK::Matrix::constrainedLSQR(), formSnapSystem(), scale(), and unscale(). void APPSPACK::Constraints::Linear::makeBoundsFeasible ( Vector & x ) const Makes x feasible with respect to variable bounds. On exit, Here and are used to denote the lower and upper bounds of x, respectively. Definition at line 648 of file APPSPACK_Constraints_Linear.cpp. References bLower, bUpper, APPSPACK::exists(), and APPSPACK::Vector::size(). Referenced by APPSPACK::Solver::initializeBestPointPtr(). void APPSPACK::Constraints::Linear::formSnapSystem ( const Vector & xtilde, double esnap, Matrix & Asnap, Vector & bsnap ) const Forms system Asnap, and bsnap. On exit Asnap has full row rank. Add more comments Definition at line 659 of file APPSPACK_Constraints_Linear.cpp. Referenced by snapToBoundary(). void APPSPACK::Constraints::Linear::print ( ) const Prints the bounds, scaling, and general linear constraint information. Definition at line 738 of file APPSPACK_Constraints_Linear.cpp. void APPSPACK::Constraints::Linear::setup ( Parameter::List & params ) [private] Used by constructor to fill the parameter list with appropriate defaults and return the scaling. Definition at line 59 of file APPSPACK_Constraints_Linear.cpp. References setupBounds(), setupMatrix(), setupRhs(), setupScaledSystem(), and setupScaling(). Referenced by Linear(). void APPSPACK::Constraints::Linear::setupBounds ( Parameter::List & params ) [private] Sets up lower and upper bounds. Definition at line 68 of file APPSPACK_Constraints_Linear.cpp. Referenced by setup(). 
void APPSPACK::Constraints::Linear::setupScaling ( Parameter::List & params ) [private] Sets up scaling vector. Definition at line 151 of file APPSPACK_Constraints_Linear.cpp. Referenced by setup(). void APPSPACK::Constraints::Linear::setupRhs ( Parameter::List & params ) [private] Form the corresponding right-hand side vector of the coefficient matrix. Definition at line 195 of file APPSPACK_Constraints_Linear.cpp. Referenced by setup(). void APPSPACK::Constraints::Linear::setupMatrix ( Parameter::List & params ) [private] Form the coefficient matrix wrt bound and linear constraints. Definition at line 172 of file APPSPACK_Constraints_Linear.cpp. Referenced by setup(). void APPSPACK::Constraints::Linear::setupScaledSystem ( ) [private] Form the corresponding scaled right-hand side vector of the coefficient matrix. Definition at line 226 of file APPSPACK_Constraints_Linear.cpp. Referenced by setup(). APPSPACK::Constraints::StateType APPSPACK::Constraints::Linear::getIneqState ( int i, BoundType bType, const Vector & xTilde, double epsilon = -1 ) const [private] Get state of inequality constraint i with respect to the given point. Computes the state of inequality constraint i with respect to point . (It is assumed the point is scaled.) Constraint i is deemed active if the (scaled) point is within a distance of of the (scaled) constraint. Mathematically speaking, for constraint i with respect to the lower bound, that means Parameters: i (input) Constraint index. Let denote the number of variables, i.e., the size of . Let denote the number of linear inequality constraints, i.e., the number of rows in . Let . Constraints 0 thru correspond to the bound constraints, and constraints thru correspond to the linear inequality constraints. bType (input) Specify upper or lower bound; see BoundType. xTilde (input) Vector in scaled space. epsilon (optional input) A point is considered active if it is less than this distance from the constraint (in the scaled space). Defaults to epsMach. Definition at line 795 of file APPSPACK_Constraints_Linear.cpp. Referenced by isInequalityFeasible(), and maxStep(). APPSPACK::Constraints::StateType APPSPACK::Constraints::Linear::getEqState ( int i, const Vector & xTilde, double epsilon = -1 ) const [private] Get state of equality constraint i with respect to the given point. Computes the state of equality constraint i with respect to point . (It is assumed the point is scaled.) Constraint i is deemed satisfied if the (scaled) point is within a distance of of the (scaled) constraint. Mathematically speaking, for constraint i with respect to the lower bound, that means Parameters: i (input) Constraint index. Let denote the number of variables, i.e., the size of . Let denote the number of linear inequality constraints, i.e., the number of rows in . Let . Constraints 0 thru correspond to the bound constraints, and constraints thru correspond to the linear inequality constraints. xTilde (input) Vector in scaled space. epsilon (optional input) A point is considered satisfied if it is less than this distance from the constraint (in the scaled space). Defaults to epsMach. Definition at line 830 of file APPSPACK_Constraints_Linear.cpp. Referenced by isEqualityFeasible(). bool APPSPACK::Constraints::Linear::isEqualityFeasible ( const Vector & xtilde ) const [private] Returns true if equality constraints are feasible. see getEqState() Definition at line 855 of file APPSPACK_Constraints_Linear.cpp. Referenced by formSnapSystem(), and isFeasible(). 
bool APPSPACK::Constraints::Linear::isInequalityFeasible ( const Vector & xtilde ) const [private] Returns true if inequality constraints are feasible. Definition at line 868 of file APPSPACK_Constraints_Linear.cpp. Referenced by isFeasible(). void APPSPACK::Constraints::Linear::error ( const string & fname, const string & msg ) const [private] Print an error message and throw an exception. Definition at line 732 of file APPSPACK_Constraints_Linear.cpp. void APPSPACK::Constraints::Linear::errorCheck ( ) const [private] Error checks on input. Definition at line 330 of file APPSPACK_Constraints_Linear.cpp. Referenced by Linear(). ## Member Data Documentation const double APPSPACK::Constraints::Linear::epsMach [private] Machine epsilon. Definition at line 579 of file APPSPACK_Constraints_Linear.hpp. Referenced by formSnapSystem(), and setupScaledSystem(). bool APPSPACK::Constraints::Linear::displayMatrix [private] If true, equality and inequality constraint information will be printed explicitly for certain debug levels. Definition at line 584 of file APPSPACK_Constraints_Linear.hpp. Vector APPSPACK::Constraints::Linear::scaling [private] Scaling. Definition at line 587 of file APPSPACK_Constraints_Linear.hpp. Vector APPSPACK::Constraints::Linear::bLower [private] Unscaled lower bounds. Definition at line 590 of file APPSPACK_Constraints_Linear.hpp. Vector APPSPACK::Constraints::Linear::bUpper [private] Unscaled upper bounds. Definition at line 593 of file APPSPACK_Constraints_Linear.hpp. Matrix APPSPACK::Constraints::Linear::aIneq [private] The coefficient matrix of the unscaled general linear inequality constraints. Definition at line 597 of file APPSPACK_Constraints_Linear.hpp. Referenced by errorCheck(), isBoundsOnly(), maxStep(), print(), setupMatrix(), setupRhs(), and setupScaledSystem(). Matrix APPSPACK::Constraints::Linear::aEq [private] The coefficient matrix of unscaled linear equality constraints. Definition at line 600 of file APPSPACK_Constraints_Linear.hpp. Referenced by errorCheck(), isBoundsOnly(), maxStep(), print(), setupMatrix(), setupRhs(), and setupScaledSystem(). Vector APPSPACK::Constraints::Linear::bIneqLower [private] The unscaled lower bounds on general linear inequality constraints. Definition at line 603 of file APPSPACK_Constraints_Linear.hpp. Referenced by errorCheck(), maxStep(), print(), setupRhs(), and setupScaledSystem(). Vector APPSPACK::Constraints::Linear::bIneqUpper [private] The unscaled upper bounds on general linear inequality constraints. Definition at line 606 of file APPSPACK_Constraints_Linear.hpp. Referenced by errorCheck(), maxStep(), print(), setupRhs(), and setupScaledSystem(). Vector APPSPACK::Constraints::Linear::bEq [private] The right-hand side of unscaled equality constraints. Definition at line 609 of file APPSPACK_Constraints_Linear.hpp. Referenced by print(), setupRhs(), and setupScaledSystem(). Matrix APPSPACK::Constraints::Linear::aHat [private] The scaled coefficient matrix of the linear (bound and general) inequality constraints. Definition at line 614 of file APPSPACK_Constraints_Linear.hpp. Vector APPSPACK::Constraints::Linear::aHatZNorm [private] Holds two norms of rows of aHat projected into the nullspace of the scaled equality constraints. Definition at line 617 of file APPSPACK_Constraints_Linear.hpp. Referenced by formDistanceVector(), and setupScaledSystem(). Vector APPSPACK::Constraints::Linear::aHatNorm [private] Holds two norms of rows of aHat. Definition at line 620 of file APPSPACK_Constraints_Linear.hpp. 
Referenced by formSnapSystem(), getIneqState(), and setupScaledSystem(). Vector APPSPACK::Constraints::Linear::bHatLower [private] The scaled lower bounds on the linear (bound and general) inequality constraints. Definition at line 623 of file APPSPACK_Constraints_Linear.hpp. Referenced by formDistanceVector(), formSnapSystem(), getIneqState(), and setupScaledSystem(). Vector APPSPACK::Constraints::Linear::bHatUpper [private] The scaled upper bounds on the linear (bound and general) inequality constraints. Definition at line 626 of file APPSPACK_Constraints_Linear.hpp. Referenced by formDistanceVector(), formSnapSystem(), getIneqState(), and setupScaledSystem(). Matrix APPSPACK::Constraints::Linear::aTildeEq [private] The scaled coefficient matrix of linear equality constraints. Definition at line 629 of file APPSPACK_Constraints_Linear.hpp. Referenced by getEqState(), isEqualityFeasible(), and setupScaledSystem(). Vector APPSPACK::Constraints::Linear::bTildeEq [private] The scaled right-hand side of equality coefficient matrix. Definition at line 632 of file APPSPACK_Constraints_Linear.hpp. Referenced by getEqState(), and setupScaledSystem(). Vector APPSPACK::Constraints::Linear::lHat [private] Used in transforming x to scaled space. Scaled w is related to unscaled x via the affine linear transformation x = Sw+lHat. See APPSPACK_Constraints_Linear.hpp Definition at line 637 of file APPSPACK_Constraints_Linear.hpp. Referenced by scale(), setupScaledSystem(), and unscale(). The documentation for this class was generated from the following files: Generated on Fri Feb 16 10:33:36 2007 for APPSPACK 5.0.1 by 1.3.9.1 written by Dimitri van Heesch, © 1997-2002
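As an illustration of the ratio test that the maxStep description above amounts to, here is a small Python sketch; the function and variable names are illustrative only, and this is not APPSPACK's actual C++ implementation. Bound constraints can be treated as extra rows of A, and infinite bounds are allowed.

```python
import numpy as np

def max_feasible_step(x, d, step, A, bl, bu, eps=1e-12):
    """Largest tau in [0, step] with bl <= A (x + tau d) <= bu, assuming x is feasible."""
    tau = step
    Ax, Ad = A @ x, A @ d
    for i in range(A.shape[0]):
        if Ad[i] > eps:                       # this row's value increases: upper bound can block
            slack = bu[i] - Ax[i]
            if slack <= eps:                  # blocking constraint already (numerically) active
                return 0.0
            tau = min(tau, slack / Ad[i])
        elif Ad[i] < -eps:                    # value decreases: lower bound can block
            slack = Ax[i] - bl[i]
            if slack <= eps:
                return 0.0
            tau = min(tau, slack / -Ad[i])
    return max(tau, 0.0)

# Tiny example: box 0 <= x <= 1 in 2-D, moving diagonally from the centre.
A = np.eye(2)
print(max_feasible_step(np.array([0.5, 0.5]), np.array([1.0, 2.0]), 10.0,
                        A, np.zeros(2), np.ones(2)))   # 0.25: the second coordinate hits its upper bound first
```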
{}
# Integrals!

$\large \int_0^\infty \dfrac{x e^{-11x}}{\sqrt{e^{2x} -1}} \, dx = - \dfrac{10!!}{11!!} \left( \ln 2 + \sum_{n=1}^{11} \dfrac{(-1)^n}n \right)$

Prove the equation above.

Clarification: $$!!$$ denotes the double factorial notation.

Note by Hummus A 9 months, 1 week ago

$$\frac { 10! }{ 11! } =\frac { 1 }{ 11 }$$ · 9 months, 1 week ago

it was supposed to be a double factorial · 9 months, 1 week ago

Nowhere double factorial notation is used. hahaha · 9 months, 1 week ago
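Before attempting a proof, it may help to check the identity numerically; the following Python sketch (using SciPy for the integral) compares both sides.

```python
import math
from scipy.integrate import quad

# Left-hand side: the integrand behaves like sqrt(x/2) near 0, so quad handles it;
# guard x = 0 explicitly just in case.
f = lambda x: 0.0 if x == 0.0 else x * math.exp(-11 * x) / math.sqrt(math.exp(2 * x) - 1)
lhs, _ = quad(f, 0, 50)

# Right-hand side: -(10!!/11!!) * (ln 2 + sum_{n=1}^{11} (-1)^n / n)
dfact = lambda m: math.prod(range(m, 0, -2))   # double factorial
rhs = -(dfact(10) / dfact(11)) * (math.log(2) + sum((-1) ** n / n for n in range(1, 12)))

print(lhs, rhs)   # both print approximately 0.01603
```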
{}
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

$z=-\dfrac{5}{9}$

Multiplying both sides by $-5$, the solution to the given equation, $\dfrac{1}{9}=\dfrac{z}{-5}$, is \begin{array}{l}\require{cancel} -5\cdot\dfrac{1}{9}=\dfrac{z}{-5}\cdot(-5) \\\\ -\dfrac{5}{9}=\dfrac{-5z}{-5} \\\\ -\dfrac{5}{9}=z \\\\ z=-\dfrac{5}{9} .\end{array}
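For readers who like to double-check such steps with software, a short SymPy snippet (purely illustrative) confirms the solution:

```python
from sympy import Rational, Symbol, Eq, solve

z = Symbol('z')
# Solve 1/9 = z/(-5) for z
print(solve(Eq(Rational(1, 9), z / -5), z))   # [-5/9]
```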
{}
# associative property of multiplication example Therefore, kids must find 4 x 6 as this will make the equation true and models the commutative property. Thus, it cannot be found. Practice: Associative property of multiplication. In both solutions, we got the exact same answer of 120. Now try putting values of any 3 numbers here and see how Associative property for Multiplication works for other functions and numbers! Here is another example. Evaluating Square Roots of Perfect Squares, Quiz & Worksheet - Associative Property of Multiplication, Over 83,000 lessons in all major subjects, {{courseNav.course.mDynamicIntFields.lessonCount}}, What Is The Order of Operations in Math? Important Notes on Associative Property of Multiplication, Solved Examples on Associative Property of Multiplication, Challenging Questions on Associative Property of Multiplication, Interactive Questions on Associative Property of Multiplication, Associative Property of Multiplication states that. There are four properties involving multiplication that will help make problems easier to solve. The associative property states that the sum or product of a set of numbers is … Wow! Create your account, Already registered? Answer: 9 Because with the commutative property of multiplication we can exchange the numbers and 9 * 4 = 4 * 9 Division is probably an example that you know, intuitively, is not associative. Earn Transferable Credit & Get your Degree. There are all kinds of rules that govern our basic operations. Rachel was asked to solve the question before leaving school. These examples illustrate the Associative Properties. The associative property of multiplication states that when finding a product, changing the way factors are grouped will not change their product. First solve the part in parenthesis and write a new multiplication fact on the first line. In symbols, the associative property of multiplication says that for numbers and : The commutative property of multiplication can be useful when multiplying more than two fractions. Here’s an example of how the product does not change irrespective of how the factors are grouped. Using Associative property of Multiplication, we can say that This law is also called associative property of addition and multiplication. The product of any number and 0 is 0. Associative Property. They are the commutative, associative, multiplicative identity and distributive properties. Example 6: Algebraic (a • b) •c = (a • b) •c – Yes, algebraic expressions are also associative for multiplication . So then, why not use this property and make your math work just a little bit easier? Create an account to start this course today. Log in or sign up to add this lesson to a Custom Course. Here are a few activities for you to practice. General Property: ab = ba. The associative property of multiplication states that when performing a multiplication problem with more than two numbers, it does not matter which numbers you multiply first. Commutative, Associative and Distributive Laws. The associative property of multiplication makes multiplying longer strings of numbers easier than just doing the multiplication as is. In this case, it took a little extra work to multiply 24 x 5. Did you know… We have over 220 college Take a look at how this property works from the example given below. I'm sure by now you have heard the expression, 'Please Excuse My Dear Aunt Sally' to describe the rules for order of operations. flashcard set{{course.flashcardSetCoun > 1 ? 
In math, the associative property of multiplication allows us to group factors in different ways to get the same product. An example of associative property is $$(2+3)+4=2+(3+4)$$. As Vinay knows that $$3 \times (2 \times 9) = 54$$, he can apply the Associative Property of Multiplication on it. In the book, he describes symbolic algebra as the science that treats combinations of arbitrary signs and symbols by defined means through arbitrary laws. Commutative property: When two numbers are multiplied together, the product is the same regardless of the order of the multiplicands. Coolmath privacy policy. Will 15% of 60 be the same as 60% of 15? Megha knows that $$p \times (q \times r) = z$$. The truth is that it is very difficult to give an exact date on which i… The formula for associative law or property can be determined by its definition. Let me guess. $$3 \times (2 \times 9) = (3 \times 2) \times 9$$. Now, we know that $$7 \times 3 = 21$$. By grouping, we can create smaller components to solve. As you know, multiplication has different properties, among which we point out: Commutative Property; Associative Property; Neutral Element; Distributive Property; Well, the distributive property is that by which the multiplication of a number by a sum will give us the same as the sum of each of the sums multiplied by that number. Hence, the right answer will be What is the point of adding yet another property to remember? So, Vinay already knew the answer to the question his father asked him. But the ideas are simple. In this lesson, you will learn about one of these rules, the associative property of multiplication, and how re-grouping the numbers we multiply first in a problem can help us simplify our calculations. You can check out the interactive calculator to know more about the lesson and try your hand at solving a few real-life practice questions at the end of the page. example: (2 x 3) x 4 = 2 x (3 x 4) 6 x 4 = 2 x 12 24 = 24 Find the products for each. The Multiplicative Inverse Property. The associative property of multiplication states that when performing a multiplication problem with more than two numbers, it does not matter which numbers you multiply first. The student will be able to solve the word Practice: Use associative property to multiply 2-digit numbers by 1-digit. This proves that our associative property of multiplication works! 
According to the associative property of multiplication, the product of three or more numbers remains the same regardless of how the numbers are grouped. $$5 \times 2 =10$$ If you are anything like me, it probably took a minute to do the math for 24 x 5 because it is not a simple multiplication problem and requires a little more thought. Solution #2: Using the associative property, I am going to regroup the problem so that I multiply 4 x 5 first. Now, you are probably wondering why we just talked about order of operations when this lesson is supposed to be on the associative property of multiplication. According to the associative property of multiplication, if three or more numbers are multiplied, the result is same irrespective of how the numbers are placed or grouped. So for 20 x 6 we would drop the zero making the problem 2 x 6 = 12 and then add the zero back to get an answer of 120. For example, in the problem: 2 x 3 x 5 x 6, you can multiply 2 x 3 to get 6 and then 5 x 6 to get 30, and then multiply 6 x 30 to get 180. The associative property for multiplication is the same. Therefore, it is necessary to note that we can add parentheses around any part of a problem to tell people that it should be completed first regardless of the operations inside of them. Get access risk-free for 30 days, We will first try to arrange the brackets in the form of the value that we already know (that is - the values told by Jia's mother). Practice: Understand associative property of multiplication. For example: 65,148 × 1 = 65,148. Whether you multiply from left to right or chose another grouping that simplifies the amount of work you do, you should ALWAYS get the same answer. As per associative law, if we add or multiply three numbers, then their change in position or order of numbers or arrangements of numbers, does not change the result. The Associative Property. The "Commutative Laws" say we can swap numbers over and still get the same answer ..... when we add: Can you help Rahul figure out the answer? All three examples given above will yield the same answer when the left and right side of the equation are multiplied For example, 3 × 4 = 12 and 12 × 5 = 60 Also, 4 × 5 = 20 and 3 × 20 = 60 Warning! Be it worksheets, online classes, doubt sessions, or any other form of relation, it’s the logical thinking and smart learning approach that we, at Cuemath, believe in. The Additive Identity Property. We know that whenever we see a set of parentheses in a problem, it screams, 'DO ME FIRST.' It states that terms in an addition or multiplication problem can be … Select/Type your answer and click the "Check Answer" button to see the result. Examples of Associative Property for Multiplication: The above examples indicate that changing the grouping doesn't make any changes to the answer. Her teacher then asked her the value of $$x \times (y \times z)$$. When you multiply a number by another number that ends in zero(s), in the case of our example, 20 x 6, you can drop off the zero(s), do the multiplication, and then add the zero(s) back on. credit by exam that is accepted by over 1,500 colleges and universities. Look closely; we have not changed anything about the numbers or the operations. Grouping is mainly done using parenthesis. However, we can use it in any operations involving multiplication and brackets. In this lesson, it will be most important for us to look at the first part of our mnemonic, the parentheses. 
Example: Use commutative property of multiplication to rewrite 5 x 4 5 x 4 Answer: 4 x 5 Example: Use commutative property of multiplication to rewrite 6 x 3 6 x 3 Answer: 3 x 6 Example: What is the missing number in 9 x 4 = 4 x ____? 's' : ''}}. You probably know this, but the terminology may be new to you. When you follow the examples below it will become clear how the associative property is used. Zero Property Of Multiplication. Now do you see why we talked about parentheses at the beginning of this lesson? Now you will be able to understand associative property multiplication, associative property of multiplication definition, multiplication associative, associative property definition math, and associative multiplication in detail. There are two things that are important to understand when using this property. The associative property is helpful while adding or multiplying multiple numbers. Associative Property of Multiplication Word Problems This printable math center was designed to reinforce your students’ skills in solving word problems using the Associative property of multiplication and is aligned with the Common Core State Standards. Why not just work the multiplication from left to right? According to the Associative Property of Multiplication, $$(x \times y) \times z = x \times (y \times z)$$. {{courseNav.course.topics.length}} chapters | $$10 \times 3 = 30$$, Jia's mother told her that the value of $$7 \times 3 = 21$$, Now, she asks her the value of $$(2 \times 7) \times 3$$. Numeric Example: 3 × 5 = 15 = 5 × 3. This is the currently selected item. You can test out of the Associative Property of Multiplication The Associative Property of Multiplication states that the product of a set of numbers is the same, no matter how they are grouped. Enrolling in a course lets you earn progress by passing quizzes and exams. first two years of college and save thousands off your degree. Before we get into the actual definition of the associative property of multiplication, let us take any general function (F) of multiplication as an example. Sameer knows that Log in here for access. Associative Law Formula. and career path that can help you find the school that's right for you. Through an interactive and engaging learning-teaching-learning approach, the teachers explore all angles of a topic. Visit the SAT Subject Test Mathematics Level 2: Tutoring Solution page to learn more. In English to associate means to join or to connect. Quick and easy mental math! Associative Property of Multiplication Word Problems This printable math center was designed to reinforce your students’ skills in solving word problems using the Associative property of multiplication and is aligned with the Common Core State Standards. From the above example and simulation, we can say that the associative property of multiplication is defined as the property of multiplication where the product of three or more numbers remains the same regardless of how the numbers are grouped. For example 4 * 2 = 2 * 4 The associative property of multiplication is a postulate. $$2 \times (7 \times 3) = 2 \times 21$$ We have already learned about Associative Property. Now, doing the problem, 6 x (4 x 5), we are going to start with 4 x 5 = 20 because of our parentheses and then multiply 20 by 6 to get our final answer of 120. … The associative property is the focus for this lesson. 
As with the commutative property, examples of operations that are associative include the addition and multiplication of real numbers, integers, and rational numbers. courses that prepare you to earn The Associative Property of Multiplication. Services. Sociology 110: Cultural Studies & Diversity in the U.S. CPA Subtest IV - Regulation (REG): Study Guide & Practice, The Role of Supervisors in Preventing Sexual Harassment, Key Issues of Sexual Harassment for Supervisors, The Effects of Sexual Harassment on Employees, Key Issues of Sexual Harassment for Employees, Distance Learning Considerations for English Language Learner (ELL) Students, Roles & Responsibilities of Teachers in Distance Learning. Solution #1: Doing the problem from left to right we would start by multiplying 6 x 4 = 24 and then multiplying the 24 by 5 to get a final answer of 120. But, Associative Property of Multiplication tells us that, $$F = (a \times b) \times c = a \times (b \times c)$$. Hence, we can say that The associative property of multiplication states that when multiplying three or more real numbers, the product is always the same regardless of their regrouping. This is known as the Associative Property of Multiplication. All other trademarks and copyrights are the property of their respective owners. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. The student will be able to solve the word . If we multiply three numbers, changing the grouping does not affect the product. He spoke of two different types of algebra, arithmetic algebra and symbolic algebra. Free practice example worksheets with answer key in PDF for easy printing. So, no matter where we put our set of parentheses, we will still get the same answer. Not sure what college you want to attend yet? From the information available to Sameer, we can say that From the above example and simulation, we can say that the associative property of multiplication is defined as the property of multiplication where the product of three or more numbers remains the same regardless of how the numbers are grouped. All rights reserved. Remember, the associative property just means that we can add parentheses around any two numbers to regroup what we multiply first. - Definition & Examples, Converting Decimals to Fractions (and Back), Mathematical Sets: Elements, Intersections & Unions, SAT Subject Test Mathematics Level 2: Tutoring Solution, Biological and Biomedical Using Associative property of Multiplication, we can say that, $$5 \times 2 \times 4 = 5 \times (2 \times 4)$$, Now, Rahul knows that $$2 \times 4 = 8$$, so we get, Rahul also knows that $$5 \times 8 = 40$$. Commutative Laws. Thus, the answer will be The Distributive Property. Identity Property of Addition: Any number plus zero is the original number. We will first try to arrange the brackets in the form of the value that we already know (that is - the values Rahul is aware of). In other words, (a x b) x c = a x (b x c). First solve the part in parenthesis and write a new multiplication fact on the first line. But bear with me and I promise you will see in the next sections why it is important for us to have a clear understanding of parentheses to truly comprehend the use of the associative property of multiplication. It is also important to understand that while this property will also work for addition, it will NOT work for subtraction or division. 
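A two-line Python check mirrors this statement for the running example 6 x 4 x 5: regrouping the factors does not change the product.

```python
a, b, c = 6, 4, 5
print((a * b) * c, a * (b * c))    # 120 120 -- same product either way
print((a * b) * c == a * (b * c))  # True: the associative property of multiplication
```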
If you have three or more numbers, you can multiply them in any order and get the same result. For example, $$5 \times 2 \times 3 = (5 \times 2) \times 3 = 5 \times (2 \times 3) = 30.$$ The associative property of multiplication is just a fancy way of saying that it does not matter which two numbers you multiply first in a multi-step problem: regrouping the factors with parentheses does not change the product. In symbols, $$(a \times b) \times c = a \times (b \times c).$$

Parentheses tell you which part of a problem to work first: solve the part in parentheses and write a new multiplication fact on the next line. Given a problem such as $$6 \times 4 \times 5$$, you may place the parentheses around whichever pair is easiest to multiply, and both groupings give the same answer of 120: $$(6 \times 4) \times 5 = 24 \times 5 = 120 \qquad\text{and}\qquad 6 \times (4 \times 5) = 6 \times 20 = 120.$$ Regrouping in this way can make your math work a little bit easier; for instance, with $$(7 \times 8) \times 11$$ you are free to compute $$7 \times (8 \times 11)$$ instead if that ordering is simpler for you.

The associative property also works for addition, for example $$(2+3)+4 = 2+(3+4)$$, but it does not work for subtraction or division. It is one of several rules that govern our basic operations, alongside the commutative property ($$3 \times 5 = 15 = 5 \times 3$$), the multiplicative identity (any number times one is the original number, e.g. $$65{,}148 \times 1 = 65{,}148$$, just as any number plus zero is the original number), and the distributive property.
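A one-line check of the regrouping example above, written in Python:

```python
# Both groupings of 6 x 4 x 5 give the same product.
print((6 * 4) * 5, 6 * (4 * 5))  # 120 120
```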
{}
# Does $\prod_{t=1}^{\infty}\left(1-\frac{1}{1.127^t}\right)$ converge to a non-zero value?

Does the following infinite product converge to a non-zero value? $$\prod_{t=1}^{\infty}\left(1-\frac{1}{1.127^t}\right)$$ Mathworld gives a formula (ref 46 & 47) which, given that $1.127 > 1$, seems to imply that this is the case, but I may have misunderstood, as I can't claim any knowledge of the Jacobi theta function used in (47).

• Yes, the given product converges to a positive value. The product you gave is related to the Dedekind eta function, given by $$\eta(q) = q^{1/24}\prod_{n = 1}^{\infty}(1 - q^{n})$$ which converges absolutely if $|q| < 1$. Your product is equal to $(1.127)^{1/24}\eta(1/1.127)$. – Paramanand Singh Sep 9 '16 at 11:09
• @Paramanand Singh Yes, you are right, but I suspect the OP needs an answer at a more elementary level. – Vladimir Sep 9 '16 at 11:35

The series $\sum_{t=1}^\infty(1.127)^{-t}$ converges (the sum of a geometric progression with ratio $<1$), and hence, by the well-known theorem (if $0 \le a_t < 1$ and $\sum_t a_t$ converges, then $\prod_t (1-a_t)$ converges to a nonzero limit), your infinite product converges. (The definition of convergence of an infinite product includes the condition that the limit is nonzero.)
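A quick numerical check (an illustration, not a proof); the cutoff values below are arbitrary choices:

```python
# Partial products of prod_{t>=1} (1 - 1.127**(-t)).
def partial_product(n, q=1.127):
    p = 1.0
    for t in range(1, n + 1):
        p *= 1.0 - q ** (-t)
    return p

for n in (10, 50, 100, 200):
    print(n, partial_product(n))
# The values stop changing (to many decimal places) at a small but strictly
# positive number, consistent with convergence to a non-zero limit.
```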
{}
# [Rd] Warning: missing text for item ... in \describe?

Spencer Graves spencer.graves at pdf.com
Fri Dec 12 18:07:30 CET 2008

Hello:

For the record, I found the source of this warning: I had an extraneous space in "\item{fd} {text}". When I changed it to "\item{fd}{text}", the warning disappeared.

Best Wishes,
Spencer

#########################

What might be the problem generating messages like "Warning: missing text for item ... in \describe" with "R CMD check" and "R CMD install"? With the current version of "fda" on R-Forge, I get the following:

Warning: missing text for item 'fd' in \describe
Warning: missing text for item 'fdPar' in \describe

I found earlier mention of this message, but I didn't find a reply. I also found that "https://svn.r-project.org/R/trunk/share/perl/R/Rdconv.pm" is coded to issue such messages. However, I still don't know what I'm doing to generate it.

Thanks in advance for any suggestions.

Best Wishes,
Spencer
{}
# Math operator names in sans serif with accents using eulerpx This is a follow-up of a previous question I asked before. I am using newpxtext and eulerpx packages, and I want to change the typesetting of operator names to use sans serif type. I implemented egreg's answer: \documentclass{amsart} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{mathtools,newpxtext,eulerpx} % a new symbol font for names of operators \DeclareSymbolFont{sfoperators}{OT1}{cmss}{m}{n} % don't waste a math group \DeclareSymbolFontAlphabet{\mathsf}{sfoperators} % tell LaTeX to use sfoperators for names of operators \makeatletter \renewcommand{\operator@font}{\mathgroup\symsfoperators} \makeatother This works for already defined operator names (\max,\sin,etc.) and also for custom-defined operatornames (via DeclareMathOperator). The problem arises when I try to define an operator name involving accents: concretely, I want to define the operator name "máx" (which stands for "máximo", which means "maximum" in Portuguese). The following code \DeclareMathOperator{\grau}{grau} \DeclareMathOperator{\mAx}{máx} \begin{document} $\deg\quad\grau\quad\max\quad\mAx$ \end{document} has output which, besides the fact that the accented "a" is not appearing in sans serif type, generates the warning 'Command \' invalid in math mode on input line ** ', which is expected, because we must work in math mode instead of text mode. If I use \DeclareMathOperator{\mAx}{m\acute{a}x} instead, the output has the same problem as in my previous question: Trying to implement Davislor's solution \DeclareMathOperator{\mAx}{m\acute{\mathsf a}x} does not work, either: it has exactly the same output as before. Finally, here is a brute-force alternative that works: \DeclareMathOperator{\mAx}{m\mbox{$\acute{\mathsf a}$}x} (Obviously, I don't want to resort to this ugly method). Is there some way to solve this issue? • Can someone please create the eulerpx tag? Thanks. – Matemáticos Chibchas Mar 24 at 21:50 • @egreg that is not the same a as the non-accented operator (both sans serif, but different). – Marijn Mar 24 at 22:26 # The Source of Your Problem You defined your symbol font as \DeclareSymbolFont{sfoperators}{OT1}{cmss}{m}{n}, using the OT1 encoding. However, you selected \usepackage[T1]{fontenc}, which sets the default encoding to T1. In T1, the slot for the acute accent is "01, but you told LaTeX to use the glyph from a OT1 font, where that slot contains Δ. The solution is to switch to a consistent encoding. (If you needed to mix font encodings, there are various other tricks you might do, but you don’t.) # In the Modern Toolchain The unicode-math package allows you to change the \mathrm alphabet, used for words in math mode such as operator names, separately from the \symup alphabet. The command for this is \setmathrm from fontspec. This example sets Euler’s identities in ISO style, with constants in Neo Euler, operator names in Classico, and digits and variables from Asana Math. These are clones of AMS Euler, Optima and Palatino, respectively, all of which are the work of Hermann Zapf. 
\documentclass[varwidth=5cm, preview, 12pt]{standalone} \usepackage{mathtools} \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \setmainfont{TeX Gyre Pagella}[ Scale=1.0, Ligatures={Common, Discretionary, TeX}] \setsansfont{URW Classico} \setmathfont{Asana Math} \setmathfont[range={up/{Latin,latin,Greek,greek}, bfup/{Latin,latin,Greek,greek}, cal, bfcal, frak, bffrak}, script-features={}, sscript-features={} ]{Neo Euler} \setmathrm{URW Classico} \newcommand\upi{\symup{i}} \newcommand\upe{\symup{e}} \begin{document} \begin{align*} \upe^{\upi x} &= \cos{x} + \upi \sin{x} \\ \upe^{\upi \uppi} + 1 &= 0 \end{align*} \end{document} ## In PDFLaTeX Select a sans-serif family that supports the T1 encoding, and preferably TS1. The eulerpx package recommends Palatino as the serif font and Optima as the sans-serif font, so this uses the clones Pagella and Classico: \documentclass[varwidth=5cm, preview, 12pt]{standalone} \usepackage[T1]{fontenc} \usepackage{textcomp} \usepackage[utf8]{inputenc} \usepackage[type1]{classico} \usepackage{tgpagella, eulerpx} \usepackage{mathtools} \DeclareSymbolFont{sfoperators}{T1}{URWClassico-LF}{m}{n} \SetSymbolFont{sfoperators}{bold}{T1}{URWClassico-LF}{b}{n} \makeatletter \renewcommand{\operator@font}{\mathgroup\symsfoperators} \makeatother \DeclareMathOperator{\grau}{grau} \DeclareMathOperator{\mAx}{máx} \begin{document} \begin{align*} e^{i x} &= \cos{x} + i \sin{x} \\ e^{i \pi} + 1 &= 0 \end{align*} $\deg \grau \mAx$ \end{document} If you do not wish to use URW Classico (or if you want to use this document commercially, which the font license does not allow), you can substitute another font family such as {T1}{cmss}{m}{n}, so long as the encoding matches the one you are using.
{}
class climin.adadelta.Adadelta(wrt, fprime, step_rate=1, decay=0.9, momentum=0, offset=0.0001, args=None)

Adadelta [zeiler2013adadelta] is a method that uses the magnitude of recent gradients and steps to obtain an adaptive step rate. An exponential moving average over the gradients and steps is kept; a scale of the learning rate is then obtained from their ratio. Let $$f'(\theta_t)$$ be the derivative of the loss with respect to the parameters at time step $$t$$. In its basic form, given a step rate $$\alpha$$, a decay term $$\gamma$$ and an offset $$\epsilon$$ we perform the following updates:

$\begin{split}g_t &=& (1 - \gamma)~f'(\theta_t)^2 + \gamma g_{t-1}\end{split}$

where $$g_0 = 0$$. Let $$s_0 = 0$$ for updating the parameters:

$\begin{split}\Delta \theta_t &=& \alpha {\sqrt{s_{t-1} + \epsilon} \over \sqrt{g_t + \epsilon}}~f'(\theta_t), \\ \theta_{t+1} &=& \theta_t - \Delta \theta_t.\end{split}$

Subsequently we adapt the moving average of the steps:

$\begin{split}s_t &=& (1 - \gamma)~\Delta\theta_t^2 + \gamma s_{t-1}.\end{split}$

To extend this with Nesterov’s accelerated gradient, we need a momentum coefficient $$\beta$$ and incorporate it by using slightly different formulas:

$\begin{split}\theta_{t + {1 \over 2}} &=& \theta_t - \beta \Delta \theta_{t-1}, \\ g_t &=& (1 - \gamma)~f'(\theta_{t + {1 \over 2}})^2 + \gamma g_{t-1}, \\ \Delta \theta_t &=& \alpha {\sqrt{s_{t-1} + \epsilon} \over \sqrt{g_t + \epsilon}}~f'(\theta_{t + {1 \over 2}}).\end{split}$

In its original formulation, only the case $$\alpha = 1, \beta = 0$$ was considered.

__init__(wrt, fprime, step_rate=1, decay=0.9, momentum=0, offset=0.0001, args=None)

Parameters:

wrt : array_like
    Array that represents the solution. Will be operated upon in place. fprime should accept this array as a first argument.
fprime : callable
    Callable that, given a solution vector as the first parameter and *args and **kwargs drawn from the iteration’s args, returns a search direction, such as a gradient.
step_rate : scalar or array_like, optional [default: 1]
    Value to multiply steps with before they are applied to the parameter vector.
decay : float, optional [default: 0.9]
    Decay parameter for the moving average. Must lie in [0, 1), where lower numbers mean a shorter “memory”.
momentum : float or array_like, optional [default: 0]
    Momentum to use during optimization. Can be specified analogously to (but independently of) the step rate.
offset : float, optional [default: 1e-4]
    Before taking the square root of the running averages, this offset is added.
args : iterable
    Iterator over arguments which fprime will be called with.
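A minimal sketch of the update rule described above, written directly in NumPy rather than through the climin class; the names `step_rate`, `decay`, and `offset` mirror the constructor parameters, and the momentum branch is omitted for brevity:

```python
import numpy as np

def adadelta_step(theta, grad, g_avg, s_avg, step_rate=1.0, decay=0.9, offset=1e-4):
    """One Adadelta-style update (no momentum), following the formulas above.

    theta -- current parameters; grad -- f'(theta);
    g_avg -- running average of squared gradients (g_{t-1});
    s_avg -- running average of squared steps (s_{t-1}).
    """
    g_avg = (1 - decay) * grad ** 2 + decay * g_avg
    step = step_rate * np.sqrt(s_avg + offset) / np.sqrt(g_avg + offset) * grad
    theta = theta - step
    s_avg = (1 - decay) * step ** 2 + decay * s_avg
    return theta, g_avg, s_avg

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x itself.
theta = np.array([3.0, -2.0])
g_avg = np.zeros_like(theta)
s_avg = np.zeros_like(theta)
for _ in range(500):
    theta, g_avg, s_avg = adadelta_step(theta, theta, g_avg, s_avg)
print(theta)  # moves steadily toward the minimum at [0, 0]
```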
{}
# TerminateOnNan

class ignite.handlers.terminate_on_nan.TerminateOnNan(output_transform=<function TerminateOnNan.<lambda>>)[source]

The TerminateOnNan handler can be used to stop the training if the process_function’s output contains a NaN or infinite value, whether as a plain number or inside a torch.tensor. The output can be of type: number, tensor, or a collection of them. Training is stopped as soon as at least one number/tensor has a NaN or infinite value. For example, if the output is [1.23, torch.tensor(…), torch.tensor(float(‘nan’))], the handler will stop the training.

Parameters

output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into a number or torch.tensor or collection of them. This can be useful if, for example, you have a multi-output model and you want to check one or multiple values of the output.

Examples

trainer.add_event_handler(Events.ITERATION_COMPLETED, TerminateOnNan())
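A slightly fuller usage sketch; the dictionary-shaped output and the "loss" key are assumptions about your process_function, not something the handler requires:

```python
from ignite.engine import Engine, Events
from ignite.handlers import TerminateOnNan

def train_step(engine, batch):
    loss = float(sum(batch))  # stand-in for a real loss computation
    return {"loss": loss, "y_pred": None}

trainer = Engine(train_step)

# Only inspect the "loss" entry of the output dictionary.
trainer.add_event_handler(
    Events.ITERATION_COMPLETED,
    TerminateOnNan(output_transform=lambda out: out["loss"]),
)
# trainer.run(data) would now stop automatically if "loss" ever becomes NaN/Inf.
```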
{}
### Problem 9-159

9-159. Find possible equations for the graph below in terms of both sine and cosine.

Sine Function

1. There is no horizontal shift.
2. It is shifted down $3$.
3. The period is $\pi$. Find the frequency $b$ from $pb = 2\pi$, so $b = \frac{2\pi}{\pi} = 2$.
4. The amplitude is $2$.

Cosine Function

The cosine is shifted to the right $\frac{\pi}{4}$.
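Assuming the graph actually matches the features listed in these hints (amplitude 2, period $\pi$, midline $y=-3$, and the stated shifts), one consistent pair of answers would be

$$y = 2\sin(2x) - 3 \qquad\text{and}\qquad y = 2\cos\!\left(2\left(x - \tfrac{\pi}{4}\right)\right) - 3,$$

since $\cos\!\left(2x - \tfrac{\pi}{2}\right) = \sin(2x)$; without the figure these cannot be confirmed.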
{}
# If y = 2^(x+1), what is the value of y – x?

If $$y = 2^{(x+1)}$$, what is the value of y – x?

(1) $$2^{(2x+2)} = 64$$
(2) $$y = 2^{(2x –1)}$$

Originally posted by Lolaergasheva on 05 Mar 2011; edited by Bunuel. Lolaergasheva's own attempt: from (1), $$2^{2x+2}=2^6$$ gives $$2x+2=6$$ and $$x=2$$, but substituting back led to $$x=0$$, and statement (2) was unclear to the poster.

beyondgmatscore (05 Mar 2011): I assume that the question reads as follows: If $$y = 2^{x+1}$$, what is the value of y – x? (1) $$2^{2x+2} = 64$$ (2) $$y = 2^{2x-1}$$. Then statement 1 gives us $$2x+2 = 6$$, hence $$x = 2$$, and therefore $$y = 2^{2+1} = 2^3 = 8$$, so $$y-x = 8-2 = 6$$. Sufficient. From statement 2, $$y = 2^{2x-1} = 2^{x+1}$$, so $$2x-1 = x+1$$ and hence $$x = 2$$; as in statement 1, $$y = 8$$ and $$y-x = 6$$. Sufficient.

Bunuel (Math Expert, 05 Mar 2011): (1) $$2^{(2x+2)} = 64$$ --> $$2^{2x+2}=2^6$$ --> as the bases are equal we can equate the powers: $$2x+2=6$$ --> $$x=2$$. From the stem: $$y=2^{x+1}=2^3=8$$, so $$y-x=8-2=6$$. Sufficient. (2) $$y = 2^{(2x-1)}$$ --> as it is also given that $$y=2^{x+1}$$, we have $$2^{x+1}=2^{2x-1}$$ --> $$x+1=2x-1$$ --> $$x=2$$, then in the same way as above $$y=8$$ --> $$y-x=8-2=6$$. Sufficient. Answer: D.

Math Revolution GMAT Instructor (10 Nov 2015): Forget conventional ways of solving math questions. In DS, the Variable approach is the easiest and quickest way to find the answer without actually solving the problem. Remember that an equal number of variables and independent equations ensures a solution. Here there are 2 variables (x, y), one equation ($$y = 2^{x+1}$$), and 2 more equations from the 2 conditions, so there is a high chance that (D) will be our answer. From condition 1, $$2^{2x+2}=64=2^6$$, so $$2x+2=6$$, $$x=2$$, $$y=2^3=8$$.
This is sufficient. From condition 2, $$2^{2x-1}=2^{x+1}$$, so $$2x-1=x+1$$, $$x=2$$, $$y=8$$. This is sufficient as well. For cases where we need 1 more equation, such as original conditions with “1 variable”, or “2 variables and 1 equation”, or “3 variables and 2 equations”, we have 1 equation each in both 1) and 2). Therefore there is a 59% chance that D is the answer, while A or B has a 38% chance and C or E a 3% chance, so D is most likely to be the answer when 1) and 2) are each sufficient separately, according to the DS definition. Obviously there may be cases where the answer is A, B, C, or E.

Senior Manager (21 Dec 2016):
(1) $$2^{2x+2}=64$$, so $$2^{2x+2}=2^6$$, $$2x+2=6$$, $$x=2$$, and $$y=2^{x+1}=2^{3}=8$$. Sufficient.
(2) $$y=2^{2x-1}$$, so $$2^{2x-1}=2^{x+1}$$, $$2x-1=x+1$$, $$x=2$$, $$y=8$$. Sufficient.

Retired Moderator (16 Jul 2017): If $$y = 2^{x+1}$$, what is the value of y – x?
(1) $$2^{2x+2} = 64$$: $$2^{2x+2} = 2^6$$, so $$2x+2 = 6$$, $$2x = 4$$, $$x = 2$$; then $$y = 2^{x+1} = 2^{2+1} = 2^3 = 8$$ and $$y - x = 8 - 2 = 6$$. Hence (1) is SUFFICIENT.
(2) $$y = 2^{2x-1}$$: combined with $$y = 2^{x+1}$$ this gives $$2^{2x-1} = 2^{x+1}$$, so $$2x - 1 = x + 1$$ and $$x = 2$$; then $$y = 2^{x+1} = 2^3 = 8$$ and $$y - x = 8 - 2 = 6$$. Hence (2) is SUFFICIENT.
{}
# 1.3: Real Numbers

### Learning Objectives

• Add and subtract real numbers • Add real numbers with the same and different signs • Subtract real numbers with the same and different signs • Simplify combinations that require both addition and subtraction of real numbers • Multiply and divide real numbers • Multiply two or more real numbers • Divide real numbers • Simplify expressions with both multiplication and division • Simplify expressions with real numbers • Recognize and combine like terms in an expression • Use the order of operations to simplify expressions • Simplify compound expressions with real numbers • Simplify expressions with fraction bars, brackets, and parentheses • Use the distributive property to simplify expressions with grouping symbols • Simplify expressions containing absolute values

Some important terminology to remember before we begin is as follows:

• integers: counting numbers like 1, 2, 3, etc., including negatives and zero
• real number: fractions, negative numbers, decimals, integers, and zero are all real numbers
• absolute value: a number’s distance from zero; it’s always positive. $$|-7| = 7$$
• sign: this refers to whether a number is positive or negative; we use $$-$$ for negative (to the left of zero on the number line)
• difference: the result of subtraction
• sum: the result of addition

The ability to work comfortably with negative numbers is essential to success in algebra. For this reason we will do a quick review of adding, subtracting, multiplying and dividing integers. Integers are all the positive whole numbers, zero, and their opposites (negatives). As this is intended to be a review of integers, the descriptions and examples will not be as detailed as a normal lesson.

## Adding and Subtracting Real Numbers

When adding integers we have two cases to consider. The first case is whether the signs match (both positive or both negative). If the signs match, we will add the numbers together and keep the sign. If the signs don’t match (one positive and one negative number) we will subtract the numbers (as if they were all positive) and then use the sign from the larger number. This means if the larger number is positive, the answer is positive. If the larger number is negative, the answer is negative.

### To add two numbers with the same sign (both positive or both negative)

• Add their absolute values (without the $$-$$ sign)
• Give the sum the same sign.

### To add two numbers with different signs (one positive and one negative)

• Find the difference of their absolute values. (Note that when you find the difference of the absolute values, you always subtract the lesser absolute value from the greater one.)
• Give the sum the same sign as the number with the greater absolute value.

### Example

Find $$23–73$$.

You can’t use your usual method of subtraction because 73 is greater than 23. Rewrite the subtraction as adding the opposite. $$23+\left(−73\right)$$ The addends have different signs, so find the difference of their absolute values. $$\begin{array}{c}\left|23\right|=23\,\,\,\text{and}\,\,\,\left|−73\right|=73\\73-23=50\end{array}$$ Since $$\left|−73\right|>\left|23\right|$$, the final answer is negative. $$23–73=−50$$

Another way to think about subtracting is to think about the distance between the two numbers on the number line. In the example below, $$382$$ and $$-93$$ are $$382+93$$ units apart, so the subtraction works out to adding the opposite.

### Example

Find $$382–\left(−93\right)$$.
You are subtracting a negative, so think of this as taking the negative sign away. This becomes an addition problem: $$382+93$$. $$382+93=475$$, so $$382–(−93)=475$$.

### Example

Find $$-\frac{3}{7}-\frac{6}{7}+\frac{2}{7}$$

Add the first two and give the result a negative sign. Since the signs of the first two are the same, find the sum of the absolute values of the fractions; since both numbers are negative, the sum is negative. (If you owe money and then borrow more, the amount you owe becomes larger.) $$\left| -\frac{3}{7} \right|=\frac{3}{7}$$ and $$\left| -\frac{6}{7} \right|=\frac{6}{7}$$ $$\begin{array}{c}\frac{3}{7}+\frac{6}{7}=\frac{9}{7}\\\\-\frac{3}{7}-\frac{6}{7} =-\frac{9}{7}\end{array}$$ Now add the third number. The signs are different, so find the difference of their absolute values. $$\left| \frac{2}{7} \right|=\frac{2}{7}$$ $$\frac{9}{7}-\frac{2}{7}=\frac{7}{7}$$ Since $$\left|-\frac{9}{7}\right|>\left|\frac{2}{7}\right|$$, the result is negative. $$-\frac{9}{7}+\frac{2}{7}=-\frac{7}{7}$$ $$-\frac{3}{7}+\left(-\frac{6}{7}\right)+\frac{2}{7}=-\frac{7}{7}=-1$$

### Example

Evaluate $$27.832+(−3.06)$$.

When you add decimals, remember to line up the decimal points so you are adding tenths to tenths, hundredths to hundredths, and so on. The signs are different, so find the difference of the absolute values. $$\begin{array}{r}\underline{\begin{array}{r}27.832\\-\text{ }3.06\,\,\,\end{array}}\\24.772\end{array}$$ $$\left|-3.06\right|=3.06$$ The sum has the same sign as 27.832, whose absolute value is greater. $$27.832+\left(-3.06\right)=24.772$$

## Multiplying and Dividing Real Numbers

Multiplication and division are inverse operations, just as addition and subtraction are. You may recall that when you divide fractions, you multiply by the reciprocal. Inverse operations “undo” each other.

## Multiply Real Numbers

Multiplying real numbers is not that different from multiplying whole numbers and positive fractions. However, you haven’t learned what effect a negative sign has on the product. With whole numbers, you can think of multiplication as repeated addition. Using the number line, you can make multiple jumps of a given size. For example, the product $$3\cdot4$$ can be pictured as 3 jumps of 4 units each. So to multiply $$3(−4)$$, you can face left (toward the negative side) and make three “jumps” forward (in a negative direction). The product of a positive number and a negative number (or a negative and a positive) is negative.

### The Product of a Positive Number and a Negative Number

To multiply a positive number and a negative number, multiply their absolute values. The product is negative.

### Example

Find $$-3.8(0.6)$$.

Multiply the absolute values as you normally would. The factors have $$1+1$$, or 2, places after the decimal point, so the product does too. $$\begin{array}{r}3.8\\\underline{\times\,\,\,0.6}\\2.28\end{array}$$ The product of a negative and a positive is negative. $$−3.8(0.6)=−2.28$$
### The Product of Two Numbers with the Same Sign (both positive or both negative)

To multiply two positive numbers, multiply their absolute values. The product is positive. To multiply two negative numbers, multiply their absolute values. The product is positive.

### Example

Find $$~\left( -\frac{3}{4} \right)\left( -\frac{2}{5} \right)$$

Multiply the absolute values of the numbers. First, multiply the numerators together to get the product’s numerator. Then, multiply the denominators together to get the product’s denominator. Rewrite in lowest terms, if needed. $$\left( \frac{3}{4} \right)\left( \frac{2}{5} \right)=\frac{6}{20}=\frac{3}{10}$$ The product of two negative numbers is positive. $$\left( -\frac{3}{4} \right)\left( -\frac{2}{5} \right)=\frac{3}{10}$$

To summarize:

• positive $$\cdot$$ positive: The product is positive.
• negative $$\cdot$$ negative: The product is positive.
• negative $$\cdot$$ positive: The product is negative.
• positive $$\cdot$$ negative: The product is negative.

You can see that the product of two negative numbers is a positive number. So, if you are multiplying more than two numbers, you can count the number of negative factors.

### Multiplying More Than Two Negative Numbers

If there are an even number (0, 2, 4, …) of negative factors to multiply, the product is positive. If there are an odd number (1, 3, 5, …) of negative factors, the product is negative.

### Example

Find $$3(−6)(2)(−3)(−1)$$.

Multiply the absolute values of the numbers. $$\begin{array}{l}3(6)(2)(3)(1)\\18(2)(3)(1)\\36(3)(1)\\108(1)\\108\end{array}$$ Count the number of negative factors in $$3(−6)(2)(−3)(−1)$$. There are three: $$\left(−6,−3,−1\right)$$. Since there are an odd number of negative factors, the product is negative. $$3(−6)(2)(−3)(−1)=−108$$

## Divide Real Numbers

You may remember that when you divided fractions, you multiplied by the reciprocal. Reciprocal is another name for the multiplicative inverse (just as opposite is another name for additive inverse). An easy way to find the multiplicative inverse is to just “flip” the numerator and denominator as you did to find the reciprocal. Here are some examples:

• The reciprocal of $$\frac{9}{4}$$ is $$\frac{4}{9}$$ because $$\frac{4}{9}\left(\frac{9}{4}\right)=\frac{36}{36}=1$$.
• The reciprocal of 3 is $$\frac{1}{3}$$ because $$\frac{3}{1}\left(\frac{1}{3}\right)=\frac{3}{3}=1$$.
• The reciprocal of $$-\frac{6}{5}$$ is $$-\frac{5}{6}$$ because $$-\frac{5}{6}\left( -\frac{6}{5} \right)=\frac{30}{30}=1$$.
• The reciprocal of 1 is 1 as $$1(1)=1$$.

When you divided by positive fractions, you learned to multiply by the reciprocal. You also do this to divide real numbers. Think about dividing a bag of 26 marbles into two smaller bags with the same number of marbles in each. You can also say each smaller bag has one half of the marbles. $$26\div 2=26\left( \frac{1}{2} \right)=13$$ Notice that 2 and $$\frac{1}{2}$$ are reciprocals. Try again, dividing a bag of 36 marbles into smaller bags.
| Number of bags | Dividing by number of bags | Multiplying by reciprocal |
|---|---|---|
| 3 | $$36\div 3=12$$ | $$36\left( \frac{1}{3} \right)=\frac{36}{3}=\frac{12(3)}{3}=12$$ |
| 4 | $$36\div 4=9$$ | $$36\left(\frac{1}{4}\right)=\frac{36}{4}=\frac{9\left(4\right)}{4}=9$$ |
| 6 | $$36\div 6=6$$ | $$36\left(\frac{1}{6}\right)=\frac{36}{6}=\frac{6\left(6\right)}{6}=6$$ |

Dividing by a number is the same as multiplying by its reciprocal. (That is, you use the reciprocal of the divisor, the second number in the division problem.)

### Example

Find $$28\div \frac{4}{3}$$

Rewrite the division as multiplication by the reciprocal. The reciprocal of $$\frac{4}{3}$$ is $$\frac{3}{4}$$. $$28\div \frac{4}{3}=28\left( \frac{3}{4} \right)$$ Multiply. $$\frac{28}{1}\left(\frac{3}{4}\right)=\frac{28\left(3\right)}{4}=\frac{4\left(7\right)\left(3\right)}{4}=7\left(3\right)=21$$ $$28\div\frac{4}{3}=21$$

Now let’s see what this means when one or more of the numbers is negative. A number and its reciprocal have the same sign. Since division is rewritten as multiplication using the reciprocal of the divisor, and taking the reciprocal doesn’t change any of the signs, division follows the same rules as multiplication.

### Rules of Division

When dividing, rewrite the problem as multiplication using the reciprocal of the divisor as the second factor. When one number is positive and the other is negative, the quotient is negative. When both numbers are negative, the quotient is positive. When both numbers are positive, the quotient is positive.

### Example

Find $$24\div\left(-\frac{5}{6}\right)$$.

Rewrite the division as multiplication by the reciprocal. $$24\div \left( -\frac{5}{6} \right)=24\left( -\frac{6}{5} \right)$$ Multiply. Since one number is positive and one is negative, the product is negative. $$\frac{24}{1}\left( -\frac{6}{5} \right)=-\frac{144}{5}$$ $$24\div \left( -\frac{5}{6} \right)=-\frac{144}{5}$$

### Example

Find $$4\,\left( -\frac{2}{3} \right)\,\div \left( -6 \right)$$

Rewrite the division as multiplication by the reciprocal. $$\frac{4}{1}\left( -\frac{2}{3} \right)\left( -\frac{1}{6} \right)$$ Multiply. There is an even number of negative numbers, so the product is positive. $$\frac{4\left(2\right)\left(1\right)}{3\left(6\right)}=\frac{8}{18}$$ Write the fraction in lowest terms. $$4\left( -\frac{2}{3} \right)\div \left( -6 \right)=\frac{4}{9}$$

Remember that a fraction bar also indicates division, so a negative sign in front of a fraction goes with the numerator, the denominator, or the whole fraction: $$-\frac{3}{4}=\frac{-3}{4}=\frac{3}{-4}$$. In each case, the overall fraction is negative because there’s only one negative in the division.

## Simplify Expressions With Real Numbers

Some important terminology before we begin:

• operations/operators: In mathematics we call things like multiplication, division, addition, and subtraction operations. They are the verbs of the math world, doing work on numbers and variables. The symbols used to denote operations are called operators, such as $$+{, }-{, }\times{, }\div$$. As you learn more math, you will learn more operators.
• term: Examples of terms would be $$-\frac{3}{2}$$ or $$a^3$$.
Even lone integers can be a term, like 0.

• expression: A mathematical expression is one that connects terms with mathematical operators. For example $$\frac{1}{2}+\left(2^2\right)- 9\div\frac{6}{7}$$ is an expression.

## Combining Like Terms

One way we can simplify expressions is to combine like terms. Like terms are terms where the variables match exactly (exponents included). Examples of pairs of like terms would be $$-3xy$$ and $$5xy$$, or $$a^2b$$ and $$4a^2b$$, or $$8$$ and $$-3$$. If we have like terms we are allowed to add (or subtract) the numbers in front of the variables, then keep the variables the same. As we combine like terms we need to interpret subtraction signs as part of the following term. This means if we see a subtraction sign, we treat the following term like a negative term. The sign always stays with the term. This is shown in the following examples:

Example

Combine like terms: $$5x-2y-8x+7y$$

The like terms in this expression are $$5x$$ and $$-8x$$, and $$-2y$$ and $$7y$$. Combine like terms: $$5x-8x = -3x$$ and $$-2y+7y = 5y$$. Note how signs become operations when you combine like terms.

Simplified Expression: $$5x-2y-8x+7y=-3x+5y$$

Example

Combine like terms: $$x^2-3x+9-5x^2+3x-1$$

The like terms in this expression are $$x^2$$ and $$-5x^2$$, $$-3x$$ and $$3x$$, and $$9$$ and $$-1$$. Combine like terms: $$\begin{array}{r}x^2-5x^2 = -4x^2\\-3x+3x=0\,\,\,\,\,\,\,\,\,\,\,\\9-1=8\,\,\,\,\,\,\,\,\,\,\,\end{array}$$

Simplified Expression: $$x^2-3x+9-5x^2+3x-1=-4x^2+8$$

## Order of Operations

You may or may not recall the order of operations for applying several mathematical operations to one expression. Just as it is a social convention for us to drive on the right-hand side of the road, the order of operations is a set of conventions used to provide order when you are required to use several mathematical operations for one expression. The order is: work inside grouping symbols first, then evaluate exponents, then multiply and divide from left to right, and finally add and subtract from left to right.

### Example

Simplify $$7–5+3\cdot8$$.

According to the order of operations, multiplication comes before addition and subtraction. Multiply $$3\cdot8$$. $$\begin{array}{c}7–5+3\cdot8\\7–5+24\end{array}$$ Now, add and subtract from left to right. $$7–5$$ comes first, giving 2. Then add: $$2+24=26$$ $$7–5+3\cdot8=26$$

When you are applying the order of operations to expressions that contain fractions, decimals, and negative numbers, you will need to recall how to do these computations as well.

### Example

Simplify $$3\cdot\frac{1}{3}-8\div\frac{1}{4}$$.

According to the order of operations, multiplication and division come before addition and subtraction.
Sometimes it helps to add parentheses to help you know what comes first, so let’s put parentheses around the multiplication and division since they come before the subtraction. $$3\cdot\frac{1}{3}-8\div\frac{1}{4}$$ Multiply $$3\cdot \frac{1}{3}$$ first. $$\begin{array}{c}\left(3\cdot\frac{1}{3}\right)-\left(8\div\frac{1}{4}\right)\\\text{}\\=\left(1\right)-\left(8\div \frac{1}{4}\right)\end{array}$$ Now, divide $$8\div\frac{1}{4}$$. $$\begin{array}{c}8\div\frac{1}{4}=\frac{8}{1}\cdot\frac{4}{1}=32\\\text{}\\1-32\end{array}$$ Subtract. $$1–32=−31$$ $$3\cdot \frac{1}{3}-8\div \frac{1}{4}=-31$$

## Exponents

When you are evaluating expressions, you will sometimes see exponents used to represent repeated multiplication. Recall that an expression such as $$7^{2}$$ is shorthand for $$7\cdot7$$. (Exponential notation has two parts: the base and the exponent or the power. In $$7^{2}$$, 7 is the base and 2 is the exponent; the exponent determines how many times the base is multiplied by itself.) Exponents are a way to represent repeated multiplication; the order of operations places it before any other multiplication, division, subtraction, and addition is performed.

### Example

Simplify $$3^{2}\cdot2^{3}$$.

This problem has exponents and multiplication in it. According to the order of operations, simplifying $$3^{2}$$ and $$2^{3}$$ comes before multiplication. $$3^{2}$$ is $$3\cdot3$$, which equals 9. $$2^{3}$$ is $$2\cdot2\cdot2$$, which equals 8. Multiply. $$9\cdot 8=72$$ $$3^{2}\cdot2^{3}=72$$

## Grouping Symbols

Grouping symbols such as parentheses ( ), brackets [ ], braces $$\displaystyle \left\{ {} \right\}$$, and fraction bars can be used to further control the order of the four arithmetic operations. The rules of the order of operations require computation within grouping symbols to be completed first, even if you are adding or subtracting within the grouping symbols and you have multiplication outside the grouping symbols. After computing within the grouping symbols, divide or multiply from left to right and then subtract or add from left to right. When there are grouping symbols within grouping symbols, calculate from the inside to the outside. That is, begin simplifying within the innermost grouping symbols first. Remember that parentheses can also be used to show multiplication.
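Before looking at the worked examples, here is a tiny Python check (nothing assumed beyond the standard interpreter) of the exponent example above and of how grouping symbols change a result:

```python
# Exponents are evaluated before the multiplication: 3**2 * 2**3 = 9 * 8.
print(3 ** 2 * 2 ** 3)        # 72

# Grouping symbols change what gets computed first.
print((3 + 4) ** 2 + 8 * 4)   # 81 -- parentheses first, then the exponent
print(3 + 4 ** 2 + 8 * 4)     # 51 -- without the grouping, only 4 is squared
```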
In the example that follows, both uses of parentheses—as a way to represent a group, as well as a way to express multiplication—are shown.

### Example

Simplify $$\left(3+4\right)^{2}+\left(8\right)\left(4\right)$$.

This problem has parentheses, exponents, multiplication, and addition in it. The first set of parentheses is a grouping symbol. The second set indicates multiplication. Grouping symbols are handled first. Add numbers in parentheses. $$\begin{array}{c}(3+4)^{2}+(8)(4)\\(7)^{2}+(8)(4)\end{array}$$ Simplify $$7^{2}$$. $$\begin{array}{c}7^{2}+(8)(4)\\49+(8)(4)\end{array}$$ Multiply. $$\begin{array}{c}49+(8)(4)\\49+(32)\end{array}$$ $$49+32=81$$ $$(3+4)^{2}+(8)(4)=81$$

Example

Simplify $$4\cdot{\frac{3[5+{(2 + 3)}^2]}{2}}$$

There are brackets and parentheses in this problem. Compute inside the innermost grouping symbols first. $$\begin{array}{c}4\cdot{\frac{3[5+{(2 + 3)}^2]}{2}}\\\text{ }\\=4\cdot{\frac{3[5+{(5)}^2]}{2}}\end{array}$$ Then apply the exponent. $$\begin{array}{c}4\cdot{\frac{3[5+{(5)}^2]}{2}}\\\text{}\\=4\cdot{\frac{3[5+25]}{2}}\\\text{ }\\=4\cdot{\frac{3[30]}{2}}\end{array}$$ Then simplify the fraction. $$\begin{array}{c}4\cdot{\frac{3[30]}{2}}\\\text{}\\=4\cdot{\frac{90}{2}}\\\text{ }\\=4\cdot{45}\\\text{ }\\=180\end{array}$$ $$4\cdot{\frac{3[5+{(2 + 3)}^2]}{2}}=180$$

These problems are very similar to the examples given above. How are they different and what tools do you need to simplify them?

a) Simplify $$\left(1.5+3.5\right)–2\left(0.5\cdot6\right)^{2}$$. This problem has parentheses, exponents, multiplication, subtraction, and addition in it, as well as decimals instead of integers. Take a moment to think about how you would simplify this expression with decimals and grouping symbols.

Grouping symbols are handled first. Add numbers in the first set of parentheses. $$\begin{array}{c}(1.5+3.5)–2(0.5\cdot6)^{2}\\5–2(0.5\cdot6)^{2}\end{array}$$ Multiply numbers in the second set of parentheses. $$\begin{array}{c}5–2(0.5\cdot6)^{2}\\5–2(3)^{2}\end{array}$$ Evaluate exponents. $$\begin{array}{c}5–2(3)^{2}\\5–2\cdot9\end{array}$$ Multiply. $$\begin{array}{c}5–2\cdot9\\5–18\end{array}$$ Subtract. $$5–18=−13$$ $$(1.5+3.5)–2(0.5\cdot6)^{2}=−13$$

b) Simplify $$\left(\frac{1}{2}\right)^{2}+\left(\frac{1}{4}\right)^{3}\cdot32$$. Take a moment to think about how you would simplify this expression with fractions and grouping symbols.

This problem has exponents, multiplication, and addition in it, as well as fractions instead of integers. According to the order of operations, simplify the terms with the exponents first, then multiply, then add.
$$\left(\frac{1}{2}\right)^{2}+\left(\frac{1}{4}\right)^{3}\cdot32$$ Evaluate: $$\left(\frac{1}{2}\right)^{2}=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$$ $$\frac{1}{4}+\left(\frac{1}{4}\right)^{3}\cdot32$$ Evaluate: $$\left(\frac{1}{4}\right)^{3}=\frac{1}{4}\cdot\frac{1}{4}\cdot\frac{1}{4}=\frac{1}{64}$$ $$\frac{1}{4}+\frac{1}{64}\cdot32$$ Multiply. $$\frac{1}{4}+\frac{32}{64}$$ Simplify $$\frac{32}{64}$$ to $$\frac{1}{2}$$, then add $$\frac{1}{4}+\frac{1}{2}$$. $$\frac{1}{4}+\frac{1}{2}=\frac{3}{4}$$ $$\left(\frac{1}{2}\right)^{2}+\left(\frac{1}{4}\right)^{3}\cdot 32=\frac{3}{4}$$

## Simplify Compound Expressions With Real Numbers

In this section, we will use the skills from the last section to simplify mathematical expressions that contain many grouping symbols and many operations. We are using the term compound to describe expressions that have many operations and many grouping symbols. More care is needed with these expressions when you apply the order of operations. Additionally, you will see how to handle absolute value terms when you simplify expressions.

### Example

Simplify $$\frac{5-[3+(2\cdot (-6))]}{{{3}^{2}}+2}$$

This problem has brackets, parentheses, fractions, exponents, multiplication, subtraction, and addition in it. Grouping symbols are handled first. The parentheses around $$(2\cdot(−6))$$ are the innermost grouping symbols; begin working out from there. (The fraction line acts as a type of grouping symbol, too; you simplify the numerator and denominator independently, and then divide the numerator by the denominator at the end.) $$\begin{array}{c}\frac{5-\left[3+\left(2\cdot\left(-6\right)\right)\right]}{3^{2}+2}\\\\\frac{5-\left[3+\left(-12\right)\right]}{3^{2}+2}\end{array}$$ Add $$3$$ and $$-12$$, which are in brackets, to get $$-9$$. $$\begin{array}{c}\frac{5-\left[3+\left(-12\right)\right]}{3^{2}+2}\\\\\frac{5-\left[-9\right]}{3^{2}+2}\end{array}$$ Subtract: $$5–\left[−9\right]=5+9=14$$. $$\begin{array}{c}\frac{5-\left[-9\right]}{3^{2}+2}\\\\\frac{14}{3^{2}+2}\end{array}$$ The top of the fraction is all set, but the bottom (denominator) has remained untouched. Apply the order of operations to that as well. Begin by evaluating $$3^{2}=9$$. $$\begin{array}{c}\frac{14}{3^{2}+2}\\\\\frac{14}{9+2}\end{array}$$ Now add. $$9+2=11$$. $$\begin{array}{c}\frac{14}{9+2}\\\\\frac{14}{11}\end{array}$$ $$\frac{5-\left[3+\left(2\cdot\left(-6\right)\right)\right]}{3^{2}+2}=\frac{14}{11}$$

## The Distributive Property

Parentheses are used to group or combine expressions and terms in mathematics. You may see them used when you are working with formulas, and when you are translating a real situation into a mathematical problem so you can find a quantitative solution. For example, you are on your way to hang out with your friends, and call them to ask if they want something from your favorite drive-through.
Three people want the same combo meal of 2 tacos and one drink. You can use the distributive property to find out how many total tacos and how many total drinks you should take to them. $$\begin{array}{c}\,\,\,3\left(2\text{ tacos }+ 1 \text{ drink}\right)\\=3\cdot{2}\text{ tacos }+3\text{ drinks }\\\,\,=6\text{ tacos }+3\text{ drinks }\end{array}$$ The distributive property allows us to explicitly describe a total that is a result of a group of groups. In the case of the combo meals, we have three groups of (two tacos plus one drink). The following definition describes how to use the distributive property in general terms.

### The Distributive Property of Multiplication

For all real numbers a, b, and c, $$a(b+c)=ab+ac$$. What this means is that when a number multiplies an expression inside parentheses, you can distribute the multiplication to each term of the expression individually.

To simplify $$3\left(3+y\right)-y+9$$, it may help to see the expression translated into words: multiply three by (the sum of three and y), then subtract y, then add 9. To multiply three by the sum of three and y, you use the distributive property: $$\begin{array}{c}\,\,\,\,\,\,\,\,\,3\left(3+y\right)-y+9\\\,\,\,\,\,\,\,\,\,=\underbrace{3\cdot{3}}+\underbrace{3\cdot{y}}-y+9\\=9+3y-y+9\end{array}$$ Now you can subtract y from 3y and add 9 to 9. $$\begin{array}{c}9+3y-y+9\\=18+2y\end{array}$$

The next example shows how to use the distributive property when one of the terms involved is negative.

Example

Simplify $$a+2\left(5-a\right)+3\left(a+4\right)$$

This expression has two sets of parentheses with variables locked up in them. We will use the distributive property to remove the parentheses. $$\begin{array}{c}a+2\left(5-a\right)+3\left(a+4\right)\\=a+2\cdot{5}-2\cdot{a}+3\cdot{a}+3\cdot{4}\end{array}$$ Note how the negative sign that was on a inside the parentheses ends up in front of the 2 when we apply the distributive property. When you multiply a negative by a positive the result is negative, so $$2\cdot{-a}=-2a$$. It is important to be careful with negative signs when you are using the distributive property. $$\begin{array}{c}a+2\cdot{5}-2\cdot{a}+3\cdot{a}+3\cdot{4}\\=a+10-2a+3a+12\\=2a+22\end{array}$$ We combined all the terms we could to get our final result. $$a+2\left(5-a\right)+3\left(a+4\right)=2a+22$$

## Absolute Value

Absolute value expressions are one final method of grouping that you may see. Recall that the absolute value of a quantity is always positive or 0. When you see an absolute value expression included within a larger expression, treat the absolute value like a grouping symbol and evaluate the expression within the absolute value sign first. Then take the absolute value of that expression. The example below shows how this is done.

### Example

Simplify $$\frac{3+\left|2-6\right|}{2\left|3\cdot1.5\right|-\left(-3\right)}$$.

This problem has absolute values, decimals, multiplication, subtraction, and addition in it. Grouping symbols, including absolute value, are handled first. Simplify the numerator, then the denominator. Evaluate $$\left|2–6\right|$$. $$\begin{array}{c}\frac{3+\left|2-6\right|}{2\left|3\cdot1.5\right|-\left(-3\right)}\\\\\frac{3+\left|-4\right|}{2\left|3\cdot1.5\right|-\left(-3\right)}\end{array}$$ Take the absolute value of $$\left|−4\right|$$. $$\begin{array}{c}\frac{3+\left|-4\right|}{2\left|3\cdot1.5\right|-\left(-3\right)}\\\\\frac{3+4}{2\left|3\cdot1.5\right|-\left(-3\right)}\end{array}$$ Add the numbers in the numerator.
$$\begin{array}{c}\frac{3+4}{2\left|3\cdot1.5\right|-\left(-3\right)}\\\\\frac{7}{2\left| 3\cdot 1.5 \right|-(-3)}\end{array}$$ Now that the numerator is simplified, turn to the denominator. Evaluate the absolute value expression first. $$3 \cdot 1.5 = 4.5$$, giving $$\begin{array}{c}\frac{7}{2\left|{3\cdot{1.5}}\right|-(-3)}\\\\\frac{7}{2\left|{ 4.5}\right|-(-3)}\end{array}$$ The expression “$$2\left|4.5\right|$$” reads “2 times the absolute value of 4.5.” Multiply 2 times 4.5. $$\begin{array}{c}\frac{7}{2\left|4.5\right|-\left(-3\right)}\\\\\frac{7}{9-\left(-3\right)}\end{array}$$ Subtract. $$\begin{array}{c}\frac{7}{9-\left(-3\right)}\\\\\frac{7}{12}\end{array}$$ $$\frac{3+\left|2-6\right|}{2\left|3\cdot1.5\right|-\left(-3\right)}=\frac{7}{12}$$
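As a quick sanity check of the two compound examples above, here is a short Python sketch (standard library only) that evaluates both expressions with exact fractions:

```python
from fractions import Fraction

# (5 - [3 + (2 * (-6))]) / (3**2 + 2)  -- should be 14/11
first = Fraction(5 - (3 + 2 * (-6)), 3 ** 2 + 2)
print(first)  # 14/11

# (3 + |2 - 6|) / (2*|3 * 1.5| - (-3))  -- should be 7/12
# 1.5 is written as Fraction(3, 2) to keep the arithmetic exact.
second = Fraction(3 + abs(2 - 6)) / (2 * abs(3 * Fraction(3, 2)) - (-3))
print(second)  # 7/12
```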
{}
# 8.5 | Partial Fractions A rational function $$\frac{P(x)}{Q(x)}$$ with the degree of $$P(x)$$ less than the degree of $$Q(x)$$, can be rewritten as a sum of fractions as follows: • For each factor of $$Q(x)$$ of the form $$(ax+b)^m$$, introduce terms $$\frac{A_1}{ax +b} + \frac{A_2}{(ax+b)^2}+ \cdots +\frac{A_m}{(ax+b)^m}$$ • For each factor of $$Q(x)$$ of the form $$(ax^2+bx+c)^m$$, introduce terms $$\frac{A_1 x + B_1}{ax^2+bx+c}+\frac{A_2 x + B_2}{(ax^2+bx+c)^2}+ \cdots +\frac{A_m x + B_m}{(ax^2+bx+c)^m}$$ where $$A_i$$’s and $$B_i$$’s are constants. At times, for ease of notation, we will use $$A,B,C,D,…$$ in place of $$A_1, B_1,$$ etc… If the degree of $$P(x)$$ is not less than the degree of $$Q(x)$$, long divide first. Find a partial fractions decomposition for $$\frac{-x^4-2 x^3-8 x^2-26 x+9}{x^5+10 x^3+9 x}$$ $$\frac{-x^4-2 x^3-8 x^2-26 x+9}{x^5+10 x^3+9 x} = \frac{A}{x}+\frac{B x+\text{CC}}{x^2+1}+\frac{\text{DD} x+\text{EE}}{x^2+9}$$ for some choice of $$A,B,CC,DD,$$ and $$EE$$. We clear the denominator to obtain $$-x^4-2 x^3-8 x^2-26 x+9 = x^4 (A+B+\text{DD})+x^2 (10 A+9 B+\text{DD})+9 A+x^3 (\text{CC}+\text{EE})+x (9 \text{CC}+\text{EE})$$ For these two polynomials to be equal, their coefficients must be the same. That is, $$\begin{array}{cccccc} A & B & 0 & \text{DD} & 0 & -1 \\ 0 & 0 & \text{CC} & 0 & \text{EE} & -2 \\ 10 A & 9 B & 0 & \text{DD} & 0 & -8 \\ 0 & 0 & 9 \text{CC} & 0 & \text{EE} & -26 \\ 9 A & 0 & 0 & 0 & 0 & 9 \\ \end{array}$$ where the first row corresponds to the equation $$A + B + DD = -1$$, the second row corresponds to the equation $$CC + EE = -2$$, and so on. Now we reduce the system: $$\begin{array}{cccccc} A & B & 0 & \text{DD} & 0 & -1 \\ 0 & 0 & \text{CC} & 0 & \text{EE} & -2 \\ 10 A & 9 B & 0 & \text{DD} & 0 & -8 \\ 0 & 0 & 9 \text{CC} & 0 & \text{EE} & -26 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ $$\begin{array}{cccccc} 0 & B & 0 & \text{DD} & 0 & -2 \\ 0 & 0 & \text{CC} & 0 & \text{EE} & -2 \\ 10 A & 9 B & 0 & \text{DD} & 0 & -8 \\ 0 & 0 & 9 \text{CC} & 0 & \text{EE} & -26 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ $$\begin{array}{cccccc} 0 & B & 0 & \text{DD} & 0 & -2 \\ 0 & 0 & \text{CC} & 0 & \text{EE} & -2 \\ 0 & 9 B & 0 & \text{DD} & 0 & -18 \\ 0 & 0 & 9 \text{CC} & 0 & \text{EE} & -26 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ $$\begin{array}{cccccc} 0 & B & 0 & \text{DD} & 0 & -2 \\ 0 & 0 & \text{CC} & 0 & \text{EE} & -2 \\ 0 & 0 & 0 & -8 \text{DD} & 0 & 0 \\ 0 & 0 & 9 \text{CC} & 0 & \text{EE} & -26 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ $$\begin{array}{cccccc} 0 & B & 0 & \text{DD} & 0 & -2 \\ 0 & 0 & \text{CC} & 0 & \text{EE} & -2 \\ 0 & 0 & 0 & -8 \text{DD} & 0 & 0 \\ 0 & 0 & 0 & 0 & -8 \text{EE} & -8 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ $$\begin{array}{cccccc} 0 & B & 0 & \text{DD} & 0 & -2 \\ 0 & 0 & \text{CC} & 0 & \text{EE} & -2 \\ 0 & 0 & 0 & -8 \text{DD} & 0 & 0 \\ 0 & 0 & 0 & 0 & \text{EE} & 1 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ $$\begin{array}{cccccc} 0 & B & 0 & \text{DD} & 0 & -2 \\ 0 & 0 & \text{CC} & 0 & 0 & -3 \\ 0 & 0 & 0 & -8 \text{DD} & 0 & 0 \\ 0 & 0 & 0 & 0 & \text{EE} & 1 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ $$\begin{array}{cccccc} 0 & B & 0 & \text{DD} & 0 & -2 \\ 0 & 0 & \text{CC} & 0 & 0 & -3 \\ 0 & 0 & 0 & \text{DD} & 0 & 0 \\ 0 & 0 & 0 & 0 & \text{EE} & 1 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ $$\begin{array}{cccccc} 0 & B & 0 & 0 & 0 & -2 \\ 0 & 0 & \text{CC} & 0 & 0 & -3 \\ 0 & 0 & 0 & \text{DD} & 0 & 0 \\ 0 & 0 & 0 & 0 & \text{EE} & 1 \\ A & 0 & 0 & 0 & 0 & 1 \\ \end{array}$$ So, we see 
that $$A = 1, B= -2, CC = -3, DD = 0,$$ and $$EE = 1$$. Find $$\int \frac{6x^3-20x^2+53x-87}{x^4-5x^3+15x^2-45x+54} \; dx$$ Hint: $$x^2+9$$ is a factor of the denominator. Since the degree of the numerator is less than the degree the denominator, we start by factoring the denominator completely. Long division of $$x^4-5x^3+15x^2-45x+54$$ by $$x^2 + 9$$ yields: $$x^2-5x+6$$ which factors as $$(x-3)(x-2)$$. So, $$\frac{6x^3-20x^2+53x-87}{x^4-5x^3+15x^2-45x+54} = \frac{6x^3-20x^2+53x-87}{(x-3)(x-2)(x^2+9)}$$ The partial fractions decomposition of the integrand will be $$\frac{A}{x-3}+\frac{B}{x-2} + \frac{Cx+D}{x^2+9},$$ where the constants $$A,B,C,$$ and $$D$$ satisfy $$\frac{6x^3-20x^2+53x-87}{x^4-5x^3+15x^2-45x+54} = \frac{A}{x-3}+\frac{B}{x-2} + \frac{Cx+D}{x^2+9}.$$ By clearing the denominator we obtain: \begin{aligned}6 & x^3 -20x^2+53x-87 \\ & = A(x-2)(x^2+9)+B(x-3)(x^2+9) \\ & \qquad + (Cx+D)(x-2)(x-3) \end{aligned} Evaluate this equation at $$x = 1,2, 3,$$ and $$4$$: \begin{array}{c | c } x & \begin{aligned}6 & x^3 -20x^2+53x-87 \\ & = A(x-2)(x^2+9)+B(x-3)(x^2+9) \\ & \; \; + (Cx+D)(x-2)(x-3) \end{aligned}\\ \hline 1 & \begin{aligned} & -48 = -10 A – 20 B + 2 C + 2D \\ \Rightarrow & -48 = -30 – 20 + 2C + 2D \\ \Rightarrow & 1 = C+D \end{aligned} \\ \hline 2 & -13 = -13B \Rightarrow B = 1 \\ \hline 3 & 54 = 18A \Rightarrow A = 3 \\ \hline 4 & \begin{aligned} & 189 = 50 A + 25 B + 8C + 2D \\ \Rightarrow & 189 = 150 + 25 + 8C +2D \\ \Rightarrow & 7= 4C+ D \end{aligned} \\ \end{array} So, $$A = 3$$ and $$B = 1$$. To find $$C$$ and $$D$$, solve the system: $$\begin{cases} 1 & = C &+ D \\ 7 & = 4C &+ D \end{cases}$$ This system has solutions $$C = 2$$ and $$D = -1$$. So, \begin{aligned} & \int \frac{6x^3-20x^2+53x-87}{x^4-5x^3+15x^2-45x+54} \; dx \\ &= \int \left(\frac{A}{x-3}+\frac{B}{x-2} + \frac{Cx+D}{x^2+9} \right) \; dx\\ &= \int \left(\frac{3}{x-3}+\frac{1}{x-2} + \frac{2x-1}{x^2+9} \right) \; dx\\ &= \int \frac{3}{x-3} \; dx + \int \frac{1}{x-2} \; dx + \int \frac{2x-1}{x^2+9} \; dx \\ & = 3\ln|x-3| + \ln|x-2| + \int \frac{2x}{x^2+9} \; dx \\ & \qquad- \int \frac{1}{x^2+9} \; dx \\ & = 3 \ln|x-3| + \ln|x-2| + \ln|x^2+9| \\ & \qquad – \frac{1}{3}\tan^{-1}\left(\frac{x}{3} \right) + C \end{aligned}
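A quick way to double-check decompositions like the two above is SymPy's `apart`; this is an independent verification sketch, not part of the original worked solutions:

```python
import sympy as sp

x = sp.symbols('x')

# First example: (-x^4 - 2x^3 - 8x^2 - 26x + 9) / (x^5 + 10x^3 + 9x)
f1 = (-x**4 - 2*x**3 - 8*x**2 - 26*x + 9) / (x**5 + 10*x**3 + 9*x)
print(sp.apart(f1))   # should match 1/x + (-2x - 3)/(x^2 + 1) + 1/(x^2 + 9)

# Second example's integrand and its antiderivative
f2 = (6*x**3 - 20*x**2 + 53*x - 87) / (x**4 - 5*x**3 + 15*x**2 - 45*x + 54)
print(sp.apart(f2))   # should match 3/(x - 3) + 1/(x - 2) + (2x - 1)/(x^2 + 9)
print(sp.integrate(f2, x))
```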
{}
# The LIFETEST Procedure #### Kernel-Smoothed Hazard Estimate Kernel-smoothed estimators of the hazard function $\lambda(t)$ are based on the Nelson-Aalen estimator $\tilde{H}(t)$ and its variance estimator $\tilde{\sigma}^2(\tilde{H}(t))$. Consider the jumps of $\tilde{H}(t)$ and $\tilde{\sigma}^2(\tilde{H}(t))$ at the event times $t_1 < t_2 < \cdots < t_D$ as follows: $\Delta\tilde{H}(t_i) = \tilde{H}(t_i) - \tilde{H}(t_{i-1})$ and $\Delta\tilde{\sigma}^2(\tilde{H}(t_i)) = \tilde{\sigma}^2(\tilde{H}(t_i)) - \tilde{\sigma}^2(\tilde{H}(t_{i-1}))$, where $\tilde{H}(t_0)=0$. The kernel-smoothed estimator of $\lambda(t)$ is a weighted average of the increments $\Delta\tilde{H}(t_i)$ over event times that are within a bandwidth distance b of t. The weights are controlled by the choice of kernel function, $K(x)$, defined on the interval [-1,1]. The choices are as follows: • uniform kernel: $K(x) = \frac{1}{2}$, for $-1 \le x \le 1$ • Epanechnikov kernel: $K(x) = \frac{3}{4}(1-x^2)$, for $-1 \le x \le 1$ • biweight kernel: $K(x) = \frac{15}{16}(1-x^2)^2$, for $-1 \le x \le 1$ The kernel-smoothed hazard rate estimator is defined for all time points on $(0, t_D)$. For time points t for which $b \le t \le t_D - b$, the kernel-smoothed estimate of $\lambda(t)$ based on the kernel $K$ is given by $\hat{\lambda}(t) = \frac{1}{b}\sum_{i=1}^{D} K\left(\frac{t-t_i}{b}\right)\Delta\tilde{H}(t_i)$ The variance of $\hat{\lambda}(t)$ is estimated by $\hat{\sigma}^2[\hat{\lambda}(t)] = \frac{1}{b^2}\sum_{i=1}^{D} K^2\left(\frac{t-t_i}{b}\right)\Delta\tilde{\sigma}^2(\tilde{H}(t_i))$ For t < b, the symmetric kernels are replaced by the corresponding asymmetric kernels of Gasser and Müller (1979). Let $q = t/b$. The modified kernels are as follows: • uniform kernel: • Epanechnikov kernel: • biweight kernel: For $t_D - b \le t \le t_D$, let $q = (t_D - t)/b$. The asymmetric kernels for this case are used with x replaced by -x. Using the log transform on the smoothed hazard rate, the 100(1-$\alpha$)% pointwise confidence interval for the smoothed hazard rate is given by $\hat{\lambda}(t)\exp\left[\pm\frac{z_{1-\alpha/2}\,\hat{\sigma}[\hat{\lambda}(t)]}{\hat{\lambda}(t)}\right]$ where $z_{1-\alpha/2}$ is the 100(1-$\alpha$/2)th percentile of the standard normal distribution. ##### Optimal Bandwidth The following mean integrated squared error (MISE) over the range $\tau_L$ and $\tau_U$ is used as a measure of the global performance of the kernel function estimator: $\mathrm{MISE}(b) = E\int_{\tau_L}^{\tau_U}\hat{\lambda}^2(u)\,du - 2E\int_{\tau_L}^{\tau_U}\hat{\lambda}(u)\lambda(u)\,du + E\int_{\tau_L}^{\tau_U}\lambda^2(u)\,du$ The last term is independent of the choice of the kernel and bandwidth and can be ignored when you are looking for the best value of b. The first integral can be approximated by using the trapezoid rule by evaluating $\hat{\lambda}^2(u)$ at a grid of points $\tau_L = u_1 < \cdots < u_M = \tau_U$. You can specify $\tau_L$, $\tau_U$, and M by using the options GRIDL=, GRIDU=, and NMINGRID=, respectively, of the HAZARD plot. The second integral can be estimated by the Ramlau-Hansen (1983a, 1983b) cross-validation estimate. Therefore, for a fixed kernel, the optimal bandwidth is the quantity b that minimizes the resulting estimate of MISE(b). The minimization is carried out by the golden section search algorithm.
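To make the interior (non-boundary) part of the estimator concrete, here is a minimal sketch in Python/NumPy. It assumes you already have the distinct event times and the Nelson-Aalen increments; the function name and the toy data are mine, not from the SAS documentation, and the Gasser-Müller boundary correction is omitted for brevity.

```python
import numpy as np

def smoothed_hazard(t_grid, event_times, na_increments, b):
    """Kernel-smoothed hazard estimate using the Epanechnikov kernel.

    t_grid        : points at which to evaluate the smoothed hazard
    event_times   : ordered distinct event times t_1 < ... < t_D
    na_increments : jumps of the Nelson-Aalen estimator at those times
    b             : bandwidth
    """
    event_times = np.asarray(event_times, dtype=float)
    na_increments = np.asarray(na_increments, dtype=float)
    t_grid = np.asarray(t_grid, dtype=float)
    hazard = np.zeros_like(t_grid)
    for j, t in enumerate(t_grid):
        x = (t - event_times) / b
        inside = np.abs(x) <= 1.0                 # kernel support [-1, 1]
        k = 0.75 * (1.0 - x[inside] ** 2)         # Epanechnikov kernel
        hazard[j] = np.sum(k * na_increments[inside]) / b
    return hazard

# Toy example: event times and Nelson-Aalen jumps d_i / Y_i (made-up numbers)
times = [2.0, 3.5, 5.0, 7.0, 8.0]
jumps = [1/20, 1/18, 2/16, 1/12, 1/10]
print(smoothed_hazard(np.linspace(2, 8, 7), times, jumps, b=2.0))
```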
{}
# Random variables and their properties

Properties of PMF:
1. $p(x) \geqslant 0 \;\forall x$
2. $\sum_{x\in D} p(x) = 1$

Expected value: $E[X] = \mu$
1. $E[X] = \sum x_ip(x_i)$
2. $E[f(X)] = \sum f(x_i)p(x_i)$
3. $E[aX+b] = aE[X] + b$

Variance Var[X]:
1. $Var[X] = E[(X-\mu)^2 ] = E[(X-E[X])^2] = \sigma ^2$
2. $Var[X] = E[X^2] - E^2[X] = E[X^2] - \mu^2$
3. $Var[X] = Var[E[X/Y]] + E[Var[X/Y]]$
4. $Var[aX+b] = a^2Var[X]$
5. $Var[aX+bY+c] = a^2Var[X] + b^2Var[Y] + 2abCov[X,Y]$
6. If X and Y are independent random variables: $Var[XY] = Var[X]Var[Y] + Var[X]E^{2}[Y] + Var[Y]E^{2}[X]$ – TBV

If X and Y are independent r.v.'s:
1. $f_{X,Y}(x,y) = f_X(x) f_Y(y)$
2. $f_{X/Y}(x/y) = f_X(x)$
3. $E[g(X) h(Y)] = E[g(X)]E[h(Y)]$ for any suitably integrable functions g and h.
4. $M_{X+Y}(t) = M_X(t)M_Y(t)$
5. $Cov[X,Y] = 0$
6. $Var(X+Y) = Var(X) + Var(Y) = Var(X-Y)$

Joint distributions:
• Let X and Y be two discrete r.v.'s with a joint p.m.f. $f_{X,Y}(x,y) = P(X = x, Y = y)$. The mass functions $f_{X}(x) = P(X=x) = \sum_{y}f_{X,Y}(x,y)$ and $f_{Y}(y) = P(Y=y) = \sum_{x}f_{X,Y}(x,y)$ are the marginal distributions of X and Y respectively.
• If $f_Y(y) \neq 0$, the conditional p.m.f. of X/Y=y is given by $f_{X/Y}(x/y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}$
• $E[X/Y=y] = \sum_{x}xf_{X/Y}(x/y)$ and, more generally, $E[g(X)/Y=y] = \sum_{x}g(x)f_{X/Y}(x/y)$
• $Var[X/Y=y] = E[X^2/Y=y] - E^2[X/Y=y]$
• Note that $E[X/Y]$ is a random variable while $E[X/Y=y]$ is a number.
• $E[E[X/Y]] = E[X]$   [applies for any two r.v.'s X and Y]
• $E[E[g(X)/Y]] = E[g(X)]$

Covariance Cov[X,Y] and correlation coefficient:
1. $Cov[X,Y] = E[XY] - E[X]E[Y]$
2. $Cov[aX+bY+c, dZ+eW+f] = adCov[X,Z] + aeCov[X,W] + bdCov[Y,Z] + beCov[Y,W]$
3. Correlation coefficient: $\rho (X,Y) = \frac{Cov[X,Y]}{\sigma_X\sigma_Y}$
4. $-1 \leq \rho (X,Y) \leq 1$
5. Coefficient of variation: $\frac{\sigma}{\mu}$

Inequalities:
Chebyshev's: $Var(X) \geq c^2 P(\left | X-\mu \right | \geq c)$, equivalently $P(|X-\mu| \geq c) \leq Var(X)/c^2$

Moment functions:
1. $E[X^k]$ = $k^{th}$ moment of X around 0
2. $E[(X-\mu)^k]$ = $k^{th}$ moment of X about the mean $\mu$ = $k^{th}$ central moment of X
3. $E[(\frac{X-\mu}{\sigma})^3]$ = measure of lack of symmetry = skewness
• Positively skewed => $E[(\frac{X-\mu}{\sigma})^3] > 0$ => skewed to the right
• Negatively skewed => $E[(\frac{X-\mu}{\sigma})^3] < 0$ => skewed to the left
• Symmetric distribution => $E[(\frac{X-\mu}{\sigma})^3] = 0$
4. $E[(\frac{X-\mu}{\sigma})^4]$ = measure of peakedness = kurtosis

Moment generating functions:
1. $M(t) = E[e^{Xt}] = M_X(t) = \sum e^{xt}p(x) = \int_{-\infty }^{\infty }e^{xt}f(x)dx$
2. The MGF is a function of t.
3. ${M}'(t) = E[Xe^{Xt}]$
4. $M(0) = 1$
5. ${M}'(0) = E[X]$, ${M}''(0) = E[X^2]$, ...
6. $M^{(n)}(t) = E[X^ne^{Xt}]$
7. If X has MGF $M_X(t)$ and Y = aX+b, then Y has MGF $M_Y(t) = e^{bt}M_X(at)$

Variance and covariance:
1. $Var(X\pm Y) = Var(X)+Var(Y)\pm 2Cov(X,Y)$
2. $Var(X+Y) = Var(X-Y)$ if X and Y are independent
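Several of these identities are easy to sanity-check by simulation. Below is a minimal sketch in Python/NumPy (the example and numbers are mine, not part of the notes) that checks $Var[aX+b] = a^2Var[X]$ and the law of total variance $Var[X] = Var[E[X/Y]] + E[Var[X/Y]]$ for a simple two-group mixture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Check Var[aX + b] = a^2 Var[X]
a, b = 3.0, 5.0
x = rng.exponential(scale=2.0, size=n)
print(np.var(a * x + b), a**2 * np.var(x))   # the two numbers should agree closely

# Check Var[X] = Var[E[X|Y]] + E[Var[X|Y]] for a two-group mixture:
# Y ~ Bernoulli(0.3); X|Y=0 ~ N(0, 1), X|Y=1 ~ N(4, 2^2)
y = rng.random(n) < 0.3
x = np.where(y, rng.normal(4.0, 2.0, n), rng.normal(0.0, 1.0, n))
cond_mean = np.where(y, 4.0, 0.0)   # E[X|Y]
cond_var = np.where(y, 4.0, 1.0)    # Var[X|Y]
print(np.var(x), np.var(cond_mean) + np.mean(cond_var))   # both close to 5.26
```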
{}
# Formula to calculate when a point is reached This topic is 1345 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts I am developing a 2D game where x is increasing rightwards and y downwards. Thus, the top-left corner is 0,0. I have an enemy that moves towards a set of waypoints. When one is reached, it will start moving towards the next waypoint in the list. I use the following code to move towards a point: Vector2 norP = normalize(this.x, this.y, target.x, target.y); float accelx = thrust * -norP.x - drag * vx; float accely = thrust * -norP.y - drag * vy; vx = vx + delta * accelx; vy = vy + delta * accely; this.x += delta * vx; this.y += delta * vy; That piece works fine. The problem is detecting when a point has been reached. Right now, I use the following: if(distance(this.x, this.y, target.x, target.y) < 25) { ... The value 25 is a fixed value and works sometimes. As you alter thrust and drag properties, the value 25 stops being effective. So I need a formula that calculates this value that works well no matter what thrust and drag are (taking them into consideration). ##### Share on other sites It appears that you allow thrust and drag to vary each update (assuming "delta" is the delta-time between updates). As a result, the enemy may pass through the waypoint during an update (as you've found). When a new waypoint is selected, calculate the distance from the enemy position to the next waypoint, and set another variable distance_traveled to 0. Each update, add the distance traveled in delta time by the enemy to distance_traveled. When distance_traveled >= distance, the enemy will reach or pass the waypoint during that delta time. I.e., each update: distance_traveled += delta * sqrt( vx*vx + vy*vy ); if( distance_traveled >= distance ) arrived_or_passed(); N.B., this assumes that the enemy travels in a straight-line path from start to finish. That is, norP (which I assume is the normalized direction from start to waypoint) is constant for that waypoint - which, from your code, it appears to be. FYI, if that is true, then you need to calculate the normalized direction just once - not each update. Edited by Buckeye ##### Share on other sites Collision detection like that can be tricky for fast moving objects. Rather than check if they've reached the waypoint, check if they've passed the waypoint. The difference on one or more axes would have swapped from positive to negative (or the other way around). That may be the easiest way, but will yield problems if your enemy is traveling vertically or horizontally, so you have to ignore the axis opposite to the one it's traveling on (or very nearly traveling on): e.g. if you're traveling very horizontally, ignore changes in Y and just focus on the difference in X switching from positive to negative or vice versa. You can also check the distance from the waypoint to the closest point on the line between the previous position and the current position, and see if it's closer than the current position. That's more math, though. Let me know if you want to know how to do that. ##### Share on other sites When you select a new way point store a vector that points from the enemy's position to the way point. Then on each update, find a new vector from the enemy's current position to the way point and take the dot product with the original direction.
If the result is positive then the waypoint is "ahead" and the enemy should continue forward, but if the result is negative then the waypoint is "behind" and the enemy should pick a new way point. This approach doesn't need special case logic to handle horizontal and vertical movement and it should also handle curved paths (from off-axis acceleration) just fine. ##### Share on other sites When you select a new way point store a vector that points from the enemy's position to the way point. Then on each update, find a new vector from the enemy's current position to the way point and take the dot product with the original direction. If the result is positive then the waypoint is "ahead" and the enemy should continue forward, but if the result is negative then the waypoint is "behind" and the enemy should pick a new way point. This approach doesn't need special case logic to handle horizontal and vertical movement and it should also handle curved paths (from off-axis acceleration) just fine. I was thinking the same thing, but there are still pathological cases. If drag is zero and the initial velocity points in a perpendicular direction with the right magnitude, you can have an enemy orbiting the waypoint at a fixed distance. Your method would declare success after going 90 degrees around the circle, which seems kind of odd. It is possible that a better controller is needed to make sure the enemy actually passes near the waypoint. ##### Share on other sites If drag is zero and the initial velocity points in a perpendicular direction with the right magnitude, you can have an enemy orbiting the waypoint at a fixed distance. Your method would declare success after going 90 degrees around the circle, which seems kind of odd. It is possible that a better controller is needed to make sure the enemy actually passes near the waypoint. Hmm... That is a pathological case since an orbiting path would *never* get any closer to the way-point. It's a flaw but I don't think it's a fatal one. At least the enemy would declare success at *some* point and move on to the next way-point instead of circling endlessly around the same point. If you chose your way-points carefully (and avoided right-angles) then it should be possible to design paths that don't have this problem. That said, I'd really like to have code that works in all cases. I was going to say you could fix it by applying an acceleration towards the way-point, but then I remembered that's pretty much the definition of a circular orbit. What we need is a way for the enemy to recognize when it's going the wrong way and steer towards the way-point. We also need to tell it to speed up if it happens to be not moving (or moving too slowly), and maybe slow down if it's moving too fast. I think the solution is to decompose the enemy's velocity into orthogonal components, with one component in the direction toward the way-point (call it u) and the other (call it v) at right-angles to it. If u is less than a target speed (or negative) then we need to apply an acceleration towards the way-point. Optionally, if u is too high then we can apply an acceleration *away* from the way-point to act as a brake. If v is other than zero then there is a lateral velocity and we should apply an acceleration *perpendicular* to the direction we want to travel, in the opposite direction of v. That will serve to cancel out the lateral velocity (and break us out of any orbits). 
At each frame we should do a distance check to see if we've reached the way-point and a dot-product with the direction of travel to see if we've over-shot it. I *think* the above changes should give us the behavior we want. If enemies are stopped then they'll automatically start moving toward the way-point. And if they're heading the wrong way then they should slow down and reverse course. Fast-moving enemies might still over-shoot their target way-point but I think that's better than forcing them to back-track to bulls-eye a missed way-point. Let me know if you see any more pathological cases. This is a fun exercise to think about. ##### Share on other sites At each frame we should do a distance check to see if we've reached the way-point and a dot-product with the direction of travel to see if we've over-shot it. Do you mean direction to the waypoint? Travel implies current velocity. Also, why bother doing a distance check? You'll probably overshoot the next frame anyway (unless the distance tolerance is huge or the speed pretty low). Unless the one frame delay is so important to include an extra test in the function... I like your use of polar coordinates to dampen orbit, which seems useful for any solution, but otherwise this looks to suffer from the same issue as my first solution, where fast enemies might pass the waypoint without coming close enough, and still be regarded as a hit. A proper solution really needs to draw a line between present and prior coordinates and test distance to the closest point on the line from the waypoint. Which only fails if it's orbiting really fast. But I don't think anybody wants to get into testing a curve. Edited by StarMire ##### Share on other sites I was being lazy by not explaining the math before (hopefully you can follow my explanation): You have three points: Old position, New position, and Waypoint. You need three distances: OldNew, OldWay, NewWay. These give you a triangle. Bisect it into two triangles with a right angle perpendicular to OldNew. This gives you two right triangles. One has side lengths of D (the unknown distance you want to check), OldNew1 (part of Old-New), and OldWay. The other has side lengths of D (again, the distance you want to check), OldNew2 (the other part), and NewWay. Using the Pythagorean theorem for right triangles, A^2 + B^2 = C^2, we know: D^2 + OldNew2^2 = NewWay^2 and D^2 + OldNew1^2 = OldWay^2. And thus: D^2 = NewWay^2 - OldNew2^2 and D^2 = OldWay^2 - OldNew1^2. And: NewWay^2 - OldNew2^2 = OldWay^2 - OldNew1^2, so NewWay^2 - OldWay^2 = OldNew2^2 - OldNew1^2. Remember: OldNew2 = OldNew - OldNew1, so NewWay^2 - OldWay^2 = (OldNew - OldNew1)^2 - OldNew1^2 = OldNew^2 - 2*OldNew*OldNew1 + OldNew1^2 - OldNew1^2, giving (NewWay^2 - OldWay^2 - OldNew^2) / (-2*OldNew) = OldNew1. Therefore (solving for D): D = √(OldWay^2 - ((NewWay^2 - OldWay^2 - OldNew^2) / (-2*OldNew))^2). Sorry that's so messy. You can also do it with input based on the X and Y coordinates rather than distances. That's arguably even messier to write, but you can find it here: http://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line#Line_defined_by_two_points And there's the vector form, on the same page: http://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line#Vector_formulation ##### Share on other sites Alternatively, you may want to look into cheating and just interpolating from point to point. Obviously, may not work for your situation, but I figured it should be brought up as an option.
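For reference, here is a minimal sketch in Python of the segment-based test the thread is circling around: project the waypoint onto the segment between the previous and current positions, clamp to the segment, and compare the resulting distance to the arrival radius. The function name and the arrival radius are illustrative assumptions, not code from any post above.

```python
import math

def passed_waypoint(prev, curr, waypoint, radius):
    """Return True if the segment prev->curr comes within `radius` of the waypoint."""
    px, py = prev
    cx, cy = curr
    wx, wy = waypoint
    dx, dy = cx - px, cy - py
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:          # did not move this frame
        t = 0.0
    else:                          # projection parameter, clamped to the segment
        t = max(0.0, min(1.0, ((wx - px) * dx + (wy - py) * dy) / seg_len_sq))
    nearest_x = px + t * dx        # closest point on the segment to the waypoint
    nearest_y = py + t * dy
    return math.hypot(wx - nearest_x, wy - nearest_y) <= radius

# Example: a fast enemy that jumps straight past the waypoint in one frame
print(passed_waypoint(prev=(0, 0), curr=(100, 0), waypoint=(50, 5), radius=25))  # True
print(passed_waypoint(prev=(0, 0), curr=(10, 0),  waypoint=(50, 5), radius=25))  # False
```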
##### Share on other sites StarMire: I see you're calculating the distance from the way-point to the (infinite) line defined by the enemy's original position and its current position. But what's the actual test to decide whether you've reached the way-point or not? You can also check the distance from the waypoint to the closest point on the line between the previous position and the current position, and see if it's closer than the current position. The distance from the way-point to the line (D in your formulas above) is one side of a right triangle, and the distance from the current position to the way-point (NewWay in your formulas) is the hypotenuse of the same right-triangle. But the hypotenuse is always the longest side so you're always going to find that D < NewWay. Am I misunderstanding your approach?
{}
MTM - Maple Programming Help Home : Support : Online Help : Connectivity : MTM Package : MTM/coeffs MTM coeffs extract all coefficients of a multivariate polynomial Calling Sequence coeffs(P) coeffs(P, x) c, t := coeffs(P, x) Parameters P - multivariate polynomial x - variable c - variable t - variable Description • coeffs(P) returns the coefficients of the polynomial P with respect to all the indeterminates of P. • coeffs(P,x) returns the coefficients of the polynomial P with respect to x. • [c, t] = coeffs(P,x) also returns an array of the terms of P.  The terms of P line up such that add(i,i=zip(*,a,b)); gives back the original polynomial, P. Examples > $\mathrm{with}\left(\mathrm{MTM}\right):$ > $t≔2+{\left(3+4\mathrm{log}\left(x\right)\right)}^{2}-5\mathrm{log}\left(x\right)$ ${t}{:=}{2}{+}{\left({3}{+}{4}{}{\mathrm{ln}}{}\left({x}\right)\right)}^{{2}}{-}{5}{}{\mathrm{ln}}{}\left({x}\right)$ (1) > $\mathrm{coeffs}\left(\mathrm{expand}\left(t\right)\right)$ $\left[\begin{array}{ccc}{11}& {16}& {19}\end{array}\right]$ (2) > $y≔a+b\mathrm{sin}\left(x\right)+c\mathrm{sin}\left(2x\right)$ ${y}{:=}{a}{+}{b}{}{\mathrm{sin}}{}\left({x}\right){+}{c}{}{\mathrm{sin}}{}\left({2}{}{x}\right)$ (3) > $\mathrm{coeffs}\left(y,\mathrm{sin}\left(x\right)\right)$ $\left[\begin{array}{cc}{a}{+}{c}{}{\mathrm{sin}}{}\left({2}{}{x}\right)& {b}\end{array}\right]$ (4) > $\mathrm{coeffs}\left(\mathrm{expand}\left(y\right),\mathrm{sin}\left(x\right)\right)$ $\left[\begin{array}{cc}{a}& {b}{+}{2}{}{c}{}{\mathrm{cos}}{}\left({x}\right)\end{array}\right]$ (5) > $z≔3{x}^{2}{u}^{2}+5x{u}^{3}$ ${z}{:=}{5}{}{{u}}^{{3}}{}{x}{+}{3}{}{{u}}^{{2}}{}{{x}}^{{2}}$ (6) > $\mathrm{coeffs}\left(z\right)$ $\left[\begin{array}{cc}{5}& {3}\end{array}\right]$ (7) > $\mathrm{coeffs}\left(z,x\right)$ $\left[\begin{array}{cc}{3}{}{{u}}^{{2}}& {5}{}{{u}}^{{3}}\end{array}\right]$ (8) > $\left[c,t\right]$ $\left[{c}{,}{2}{+}{\left({3}{+}{4}{}{\mathrm{ln}}{}\left({x}\right)\right)}^{{2}}{-}{5}{}{\mathrm{ln}}{}\left({x}\right)\right]$ (9)
{}
# Publications / IMPAN Journals / Acta Arithmetica / All issues ## Acta Arithmetica Articles in PDF format are available to subscribers who have paid for online access, after signing the Institutional User License agreement. Journal issues up to 2009 are freely available. ## Indices of subfields of cyclotomic ${\mathbb Z}_p$-extensions and higher degree Fermat quotients ### Volume 169 / 2015 Acta Arithmetica 169 (2015), 101-114 MSC: Primary 11R04; Secondary 11A07, 11R99. DOI: 10.4064/aa169-2-1 #### Abstract We consider the indices of subfields of cyclotomic ${\mathbb Z}_p$-extensions of number fields. For the $n$th layer $K_n$ of the cyclotomic ${\mathbb Z}_p$-extension of ${\mathbb Q}$, we find that the prime factors of the index of $K_n/{\mathbb Q}$ are those primes less than the extension degree $p^n$ which split completely in $K_n$. Namely, the prime factor $q$ satisfies $q^{p-1}\equiv 1 ({\rm mod} p^{n+1})$, and this leads us to consider higher degree Fermat quotients. Indices of subfields of cyclotomic ${\mathbb Z}_p$-extensions of a number field which is cyclic over ${\mathbb Q}$ with extension degree a prime different from $p$ are also considered. #### Authors • Yoko Inoue, Department of Mathematics, Tsuda College, 2-1-1 Tsuda-cho, Kodaira-shi, Tokyo 187-8577, Japan
{}
# Why isn't there current in R3? simulate this circuit – Schematic created using CircuitLab Why isn't there current in R3? Does it depend on the resistance of R3? - It is a good exercise to assume that ONE of the four resistors R has another value (for example 1.1*R) and to calculate the current through R3. Then, you can see what happens when all are equal. –  LvW May 17 at 8:22 And better yet, try simulating it. –  Dzarda May 17 at 11:59 Is this homework or an assignment? That's allowed but if so you should say. –  Russell McMahon May 17 at 14:19 It's because there is no potential difference across the resistor $R_3$. It has nothing to do with the value of $R_3$. The 5-resistor system is left-right symmetrical: if I exchange left and right, the current in $R_3$ changes its direction, but the circuit is the same (so the current remains the same, since there is only one possible value for this variable). So it can be deduced that $I = -I$, and hence $I = 0$.
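Following the suggestion to simulate it, here is a minimal nodal-analysis sketch in Python/NumPy. The topology and values are assumptions (a Wheatstone-style bridge with four equal 1 kΩ arms, R3 across the middle, driven by 5 V), since the CircuitLab schematic is not reproduced here; the point is only that the bridge voltage difference, and hence the current in R3, comes out as zero whenever the arms are balanced, regardless of R3.

```python
import numpy as np

V = 5.0                      # source voltage between node A and ground (node D)
R1 = R2 = R4 = R5 = 1000.0   # the four bridge arms (ohms), assumed equal
R3 = 470.0                   # bridge resistor; try any value, the result is the same

# Unknown node voltages: VB (junction of R1/R4) and VC (junction of R2/R5).
# KCL at B: (VB - V)/R1 + VB/R4 + (VB - VC)/R3 = 0
# KCL at C: (VC - V)/R2 + VC/R5 + (VC - VB)/R3 = 0
G = np.array([[1/R1 + 1/R4 + 1/R3, -1/R3],
              [-1/R3, 1/R2 + 1/R5 + 1/R3]])
I = np.array([V/R1, V/R2])
VB, VC = np.linalg.solve(G, I)

print("VB =", VB, "VC =", VC)
print("Current through R3:", (VB - VC) / R3)   # ~0 A whenever R1/R4 == R2/R5
```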
{}
# 8.5: Variables It seems like a good idea at this point to think in a very general way about which variables are likely to be important in problems of sediment transport, and to organize those variables into a set of dimensionless variables that is likely to be useful as a framework from which to simplify or (complexify or specialize!) in any particular problem we deal with later. This is your chance to think about the phenomenon of sediment transport in its broadest aspects. This endeavor involves some assumptions about what might be called the target flow. For the sake of definiteness, we will look at the movement of cohesionless sediment by steady uniform flows in straight rectangular channels of effectively infinite width in a nonrotating system. We thereby ignore the effects of channel width, cross-section shape, channel curvature, and the Earth’s rotation, and we restrict ourselves to equilibrium sediment transport. The first three restrictions are especially serious in fluvial sediment transport, but our target flow is a good start, from which these other effects can be evaluated. Ignoring the Earth’s rotation is not as serious as it might seem, because most aspects of sediment transport are tied up with the local near-bottom structure of turbulent shear flows, which, as you saw in Chapter 7, are about the same in geophysical flows as in actually or effectively nonrotating flows. Time-dependent problems in sediment transport are also of great importance, but again our target case represents a good reference. Finally, the assumption of cohesionless flows shuts us out of the complex, frustrating, and extremely important world of fine cohesive sediments—another topic that deserves its own separate set of notes. The most important sediment-transport effects we will deal with in later chapters are easy to list: • modes of grain movement • speeds of grain movement • sediment transport rate • bed configuration Variables Which variables might have a role in influencing or determining these effects? The possibilities form a long list (and there probably are others I have not thought of): Sediment: • joint size–shape–density distribution • elastic properties Fluid: • density $$\rho$$ • viscosity $$\mu$$ • specific weight (weight per unit volume) $$\gamma$$ • elastic properties • thermal properties • surface tension Flow: • mean depth $$d$$ • mean velocity $$U$$ • discharge (per unit width) $$q$$ • boundary shear stress $$\tau_{\text{o}}$$ • slope $$S$$ • power $$P$$ System: • acceleration of gravity $$g$$ Clearly this list is too long: some items can safely be neglected, and some items are actually redundant. First, here are some comments on the variables that characterize the sediment. There are no redundancies in the items for the sediment in the above list, but they are effectively non-quantifiable because of grain shape. And even if grain shape is neglected, the size–density distribution has to be characterized by a two-variable joint frequency distribution (see Figure 8.1.3). As mentioned earlier, a good approximation in practice might be to assume one dominant blade-shaped spike in the distribution corresponding to crudely quartz-density sediment, and one or more subsidiary spikes for heavy minerals (Figure 8.1.4). It is common practice in work on sediment transport to assume that all grains have the same density, so that the sediment can be characterized by the mean or median size $$D$$ and the density $$\rho_{s}$$. 
Adding the standard deviation $$\sigma$$ of the size distribution makes for three variables describing the sediment. With regard to variables characterizing the flow, there is a serious redundancy in the foregoing list: only two variables are needed to specify the bulk flow, one of them being the flow depth and the other a flow-strength variable. The two most logical candidates for the flow-strength variable are the boundary shear stress $$\tau_{\text{o}}$$ and the mean velocity $$U$$ (or the surface velocity $$U_{s}$$). (Some might take exception to that statement, however, and claim that the flow power $$P$$ is the most fundamental flow-strength variable.) In choosing the flow-strength variable, we could address three considerations: • Which variables specify or characterize the state of sediment transport, in the sense that specification of those variables unambiguously corresponds to or identifies the state of sediment movement, whether those variables are imposed upon the system or are themselves fixed by the operation of the system? • Which variables are truly independent, in the sense that they are imposed on the system and are unaffected by its operation? • Which variables govern the state of sediment transport, in the sense that they are dynamically most directly responsible for particle transport and bed- configuration development, whether or not they are independent? One of the important goals in studying the phenomena of sediment transport is to show as clearly and unambiguously as possible the hydraulic relationships among those phenomena. It would be good to have a one-to-one correlation between sediment-transport states and combinations of variables, because that would represent the clearest start on knowing what we have to deal with or explain. In terms of unambiguous characterization, $$U$$ and $$d$$ (or $$q$$ and $$d$$) are the most appropriate variables to describe the flow because, for a given fluid, for each combination of $$U$$ and $$d$$ in steady uniform flow there is one and only one average state of the flow, in terms of velocity structure and boundary forces. This is not the case, however, if $$\tau_{\text{o}}$$ or $$P$$ is used in place of $$U$$ or $$q$$: if $$\tau_{\text{o}}$$ or $$P$$ is used, there is an element of ambiguity in that for certain variables values of $$\tau_{\text{o}}$$ or $$P$$ more than one bed state at a given flow depth is possible. Although I am getting ahead of myself in bringing up this matter before the chapter on bed configurations, I will point out here that this has to do with the substantial decrease in form resistance in the transition from ripples or dunes to plane bed with increasing $$U$$ or $$q$$ at constant $$d$$. You can see from the cartoon graph in Figure $$\PageIndex{1}$$ that because of this effect there is a non-negligible range of $$\tau_{\text{o}}$$ for which three different values of $$U$$ are possible. But you can see from the graph that if you specify $$U$$, you thereby uniquely specify $$\tau_{\text{o}}$$. An alternative approach would be to use only the part of $$\tau_{\text{o}}$$, called the skin friction, that represents the local shear forces on the bed, and leave out of consideration the part of $$\tau_{\text{o}}$$, called the form drag, that arises from large-scale pressure differences on the fore and aft sides of roughness elements. The ambiguity noted above would thereby be circumvented. 
The problem is that although there are several published procedures for such a drag partition, none works remarkably well (yet). Independence need not be a criterion in choice of variables to describe the state of sediment transport, because a given set of variables can equally well describe the state of sediment transport whether any given variable in the set is dependent or independent: the state of sediment transport is a function of the nature of the flow but not of how the flow is arranged or established, provided that the flow is strong enough at the outset to produce general sediment movement on the bed. Independence of variables depends to a great extent upon the nature of the sediment-transporting system. For an example of this, think about an extremely long channel (tens of kilometers, say) with bottom slope $$S$$, straight vertical sidewalls, and an erodible sediment bed, into which a constant water discharge $$Q$$ is introduced at the upstream end (Figure $$\PageIndex{2}$$). Assume that after a transient period of flow adjustment a steady state is maintained by introducing sediment at the upstream end at a rate equal to the sediment discharge $$Q_{s}$$ at the downstream end. The imposed variables here are $$Q$$ and $$S$$; $$Q_{s}$$, $$U$$, and $$d$$ are adjusted by the flow. Because of the great channel length, flow and sediment transport are virtually uniform along most of the channel except near the upstream and downstream ends. Even though the flow might prefer a different $$S$$ for the given $$Q$$, adjustment in $$S$$ is so slow that, on time scales that are short in terms of geologic time but long in terms of bed-form movement, $$S$$ can be considered fixed. Hence $$Q$$ and $$S$$ are independent variables, and $$U$$ and $$d$$ along with all variables that express the details of flow structure and sediment transport are dependent upon $$Q$$ and $$S$$. In a similar but much shorter channel, tens of meters, say (Figure $$\PageInded{3}$$), $$S$$ can change so rapidly by erosion and deposition along the channel that the flow cannot be considered uniform until $$S$$ has reached a state of adjustment to the imposed $$Q$$; $$Q$$ is an independent variable, but $$S$$ is now dependent, in the sense that it cannot be preset except approximately by manipulation of $$d$$ by means of a gate or weir at an overfall at the downstream end of the channel. And in constant-volume recirculating channels, in which there is no overfall, $$d$$ is truly independent and $$S$$ is truly dependent. With regard to governing variables, at first thought $$\tau_{\text{o}}$$ is a more logical choice than $$U$$ for characterizing the effect of the flow on the bed, because the force exerted by the flow on the bed is what causes the sediment transport in the first place. And for transport over planar bed surfaces, that is certainly true. But you will see in a later chapter that, over a wide range of flow and sediment conditions, the flow molds the bed into rugged flow-transverse ridges called bed forms. On beds covered with such bed forms, only a small part of $$\tau_{\text{o}}$$ represents the boundary friction that is directly responsible for grain transport, the rest being form drag on the main roughness elements (Figure $$\PageIndex{4}$$). 
In such situations, therefore, $$\tau_{\text{o}}$$ is as much a surrogate variable as $$U$$, in the sense that it is not directly responsible for particle transport and bed-form development, as are near-bed flow structure and distribution of boundary skin friction in space and time. These latter, however, are themselves uniquely characterized by $$U$$ and $$d$$. When bed forms are present, the spatially and temporally averaged local skin friction would seem to be a better variable than $$\tau_{\text{o}}$$ in characterizing the state of sediment transport, because it is free from the ambiguity mentioned above but at the same time is more directly responsible for the sediment movement. The trouble is that it cannot be measured, and it can be estimated only with considerable uncertainty using presently available drag-partition approaches. Several variables on the list are of minor or negligible importance in most sediment-transport problems: the elastic and thermal properties of the fluid and the sediment, and the surface tension of the fluid. So the final minimal list of variables that describe sediment transport in our target flow is as follows: • Mean flow velocity $$U$$ or boundary shear stress $$\tau_{\text{o}}$$ • Mean flow depth $$d$$ • Fluid density $$\rho$$ • Fluid viscosity $$\mu$$ • Median sediment diameter $$D$$ • Sediment sorting $$\sigma$$ • Sediment density $$\rho_{s}$$ • Acceleration of gravity $$g$$ or submerged sediment specific weight $$\gamma^{\prime}$$ There are eight variables on the list, so we should expect to have an equivalent set of five dimensionless variables that describe the state of sediment transport equally well. A great many different sets are possible: you can choose a variety of sets of three repeating variables, and then you could manipulate those sets further by multiplying and dividing the various individual dimensionless variables in those sets. In general, we could move in one (or both) of two directions at this point. We could try to develop a set that has the greatest relevance to the physical effects involved in the sediment transport, or we could try to develop a set that has the sedimentologically most relevant or interesting variables segregated into different dimensionless variables. With regard to sets of dimensionless variables that are relevant to the physical effects of sediment transport, I will mention just two possibilities: $$\frac{\tau_{o}}{\gamma^{\prime} D}$$ Shields parameter, a dimensionless $$\tau_{\text{o}}$$ $$\frac{\rho u_{*} D}{\mu}$$ roughness Reynolds number, $$\text{Re}_{*}$$ $$d/D$$ relative roughness $$\frac{\rho_{s}}{\rho}$$ density ratio $$\frac{\sigma}{D}$$ sorting-to-size ratio You could construct this set by using $$\tau_{\text{o}}$$, $$\rho$$, and $$D$$ as repeaters. The physical significance of the roughness Reynolds number was discussed back in Chapter 4. The Shields parameter also has a clear physical significance. The fluid force on bed particles is approximately proportional to $$\tau_{\text{o}}D^{2}$$, whereas the weight of the bed particles is proportional to $$\gamma^{\prime}D^{3}$$. The ratio of these two quantities is the Shields parameter, so the Shields parameter is proportional to the ratio of the fluid force on particles to the weight of the particles. For that reason it could also be called a mobility number. 
Another dynamically meaningful set of dimensionless variables can be formed using U as the flow-strength variable: $$\frac{\rho U d}{\mu}$$ Reynolds number based on depth and velocity $$\frac{U}{(g d)^{1 / 2}}$$ Froude number based on depth and velocity $$d/D$$ relative roughness $$\frac{\rho_{s}}{\rho}$$ density ratio $$\frac{\sigma}{D}$$ sorting-to-size ratio The repeaters in this set are $$\rho$$, $$U$$, and $$d$$. The mean-flow Reynolds number describes the structure of the mean flow, and the mean-flow Froude number is relevant to the energy state of the flow, as discussed in Chapter 5. Probably the most useful set of dimensionless variables for the purpose of unambiguous description of the state of flow and sediment transport is one in which the sedimentologically interesting variables $$U$$, $$d$$, and $$D$$ are segregated into separate dimensionless variables: $$\left(\frac{\rho \gamma^{\prime}}{\mu^{2}}\right)^{1 / 3} d$$ dimensionless flow depth $$d^{\text{o}}$$ $$\left(\frac{\rho^{2}}{\mu \gamma^{\prime}}\right)^{1 / 3} U$$ dimensionless flow velocity $$U^{\text{o}}$$ $$\left(\frac{\rho \gamma^{\prime}}{\mu^{2}}\right)^{1 / 3} D$$ dimensionless sediment size $$D^{\text{o}}$$ $$\frac{\rho_{s}}{\rho}$$ density ratio $$\frac{\sigma}{D}$$ sorting-to-size ratio The repeaters for this set are $$\rho$$, $$\mu$$, and $$\gamma^{\prime}$$.
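As a quick numerical illustration of these groups, the sketch below (Python; the sample values for quartz sand in water are my own assumptions, not from the text) computes the Shields parameter, the roughness Reynolds number, the mean-flow Froude number, and the third set's dimensionless depth, velocity, and grain size.

```python
import math

# Assumed sample values: quartz sand in water at roughly 20 degrees C
rho = 1000.0          # fluid density, kg/m^3
rho_s = 2650.0        # sediment density, kg/m^3
mu = 1.0e-3           # dynamic viscosity, Pa*s
g = 9.81              # m/s^2
gamma_prime = (rho_s - rho) * g   # submerged sediment specific weight, N/m^3

D = 0.5e-3            # median grain size, m
d = 1.0               # mean flow depth, m
U = 0.8               # mean flow velocity, m/s
tau_o = 0.5           # boundary shear stress, Pa (assumed, not derived from U here)

u_star = math.sqrt(tau_o / rho)               # shear velocity
shields = tau_o / (gamma_prime * D)           # Shields parameter (mobility number)
re_star = rho * u_star * D / mu               # roughness Reynolds number
froude = U / math.sqrt(g * d)                 # mean-flow Froude number

scale = (rho * gamma_prime / mu**2) ** (1/3)  # length-scale factor in the third set
d_o = scale * d                               # dimensionless flow depth
D_o = scale * D                               # dimensionless sediment size
U_o = (rho**2 / (mu * gamma_prime)) ** (1/3) * U   # dimensionless flow velocity

print(f"Shields = {shields:.3f}, Re* = {re_star:.1f}, Fr = {froude:.2f}")
print(f"d° = {d_o:.0f}, D° = {D_o:.1f}, U° = {U_o:.1f}")
```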
{}
# To Infinity…and Beyond! In researching Gödel’s Incompleteness Theorem, I stumbled upon an article that stated no one has proven a line can extend infinitely in both directions. This is shocking, if it’s true, and after a quick Google search, I couldn’t seem to find anything that contradicts the claim. So, in the spirit of intellectual adventure, I’ll offer a fun proof-esque idea here. Consider a line segment of length $\ell$ that is measured in some standard unit of distance/length (e.g., inches, miles, nanometers, etc.). We convert the length of $\ell$—whatever units and length we’ve chosen (say, 0.5298 meters)—into a fictitious unit of measurement we’ll call Hoppes (hpe) [pronounced HOP-ease]. So, now, one should consider the length of $\ell$ to be 2 hpe such that $\ell$/2 = 1 hpe. We then add some fraction (of the length of) $\ell$ to (both ends of) itself, and let’s say the fraction of $\ell$ we’ll use, call it $a$, is 3$\ell$/4, which equals 3/2 hpe. The process by which we will add $a$ to $\ell$ will be governed by the following geometric series: $s_n(a) = 1+a+a^2+a^3+\dots+a^{n-1} = (1-a^n)(1-a)^{-1}=\frac{a^n-1}{a-1}$. Let us add the terms of $s_n(a)$ to both sides of $\ell$; first, we add 1 hpe to both sides ($\ell=4$ hpe), then 3/2 hpe ($\ell=7$ hpe, then 9/4 hpe ($\ell=23/2$ hpe)and so forth. If we keep adding to $\ell$ units of hpe based on the series $s_n(a)$, then we’re guaranteed a line that extends infinitely in both directions because $\lim_{n\rightarrow\infty} (a^{n}-1)(a-1)^{-1} = \infty$ when $\vert a\vert \geq 1$. Now, suppose we assume it is impossible to extend our line segment infinitely in both directions. Then $s_n(a)$ must converge to $(1-a)^{-1}$, giving us a total length of $2+(1-a)^{-1}$ hpe for $\ell$, because $\lim_{n\rightarrow\infty} 1-a^{n}=1$, which is only possible when $\vert a\vert < 1$. (We cannot have a negative length, so $a\in \text{R}^+_0$.) But this contradicts our $\vert a\vert$ value of 3/2 hpe above, which means the series $s_n(a)$ is divergent. Q.E.D. N.B. Some might raise the “problem” of an infinite number of discrete points that composes a line (segment), recalling the philosophical thorniness of Zeno’s (dichotomy) paradox; this is resolved, however, by similarly invoking the concept of limits (and is confirmed by our experience of traversing complete distances!): $\sum_{i=1}^{\infty} (1/2)^i=\frac{1}{2}\sum_{i=0}^{\infty} (1/2)^i=\frac{1}{2} s_n (\frac{1}{2})=\frac{1}{2}\Big( 1+\frac{1}{2}+(\frac{1}{2})^2+\cdots\Big)=\frac{1}{2}\Big(\frac{1}{1-\frac{1}{2}}\Big) = 1$, a single unit we can set equal to our initial line segment $\ell$ with length 2 hpe. Special thanks to my great friend, Tim Hoppe, for giving me permission to use his name as an abstract unit of measurement. Standard # The Myth of Altruism The American Heritage Dictionary (2011) defines “altruism” as “selflessness.” If one accepts that standard definition, then it seems reasonable to view an “altruistic act” as one that fails to produce a net gain in personal benefit for the actor subsequent to its completion. (Here, we privilege psychological altruism as opposed to biological altruism, which is often dismissed by the “selfish gene” theory of Darwinian selection and notions of reproductive fitness.) Most people, however, assume psychologically-based altruistic acts exist because they believe an act that does not demand or expect overt reciprocity or recognition by the recipient (or others) is so defined. 
But is this view sufficiently comprehensive, and is it really possible to behave toward others in a way that is completely devoid of self? Is self-interest an ineluctable process with respect to volitional acts of kindness? Here, we explore the likelihood of engaging in an authentically selfless act and capturing true altruism, in general. (Note: For those averse to mathematical jargon, feel free to skip to the paragraph that begins with “[A]t this stage” to get a basic understanding of orthogonality and then move to the next section, “Semantic States as ‘Intrinsic Desires’,” without losing much traction.) ##### The Model Imagine for a moment every potential (positive) outcome that could emerge as a result of performing some act—say, holding the door for an elderly person. You might receive a “thank you,” a smile from an approving onlooker, someone reciprocating in kind, a feeling you’ve done what your parents (or your religious upbringing) might have expected you to do, perhaps even a monetary reward—whatever. (Note: We assume there will never be an eager desire or expectation for negative consequences, so we require all outcomes to be positive, beneficial events. Of course, a comprehensive model would also include the desire to avoid negative consequences—the ignominy of failing to return a wallet or aiding a helpless animal (an example we will revisit later)—but these can be transformed into positive statements that avoid the unnecessary complications associated with the contrapositive form.) We suppose there are n outcomes, and we can imagine each outcome enjoys a certain probability of occurring. We will call this the potential vector $\mathbf{p}$, the components of which are simply the probabilities that each outcome (ordered 1 through n) will occur: $\mathbf{p} = [p(1), p(2), p(3),\dots,p(n-1),p(n)]$ and $0\leq p(i)\leq 1$ where $\sum_{i=1}^n p(i)$ does not have to equal 1 because events are independent and more than a single outcome is possible. (You might, for example, receive both a “thank you” and a dollar bill for holding the door for an elderly woman.) So, the vector $\mathbf{p}$ represents the agglomeration of the discrete probabilities of every positive thing that could occur to one’s benefit by engaging in the act. Consider, now, another vector, $\mathbf{q}$, that represents the constellation of desires and expectations for the possible outcomes enumerated in $\mathbf{p}$. That is, if $\mathbf{q} = [q(1),q(2),q(3),\dots,q(n-1),q(n)]$, then $q(i)$ catalogs the interest and desire in outcome $p(i)$. (It might be convenient to imagine $\mathbf{q}$ as a binary vector of length n and an element of $\text{R}_2^n$, but we will be better to treat $\mathbf{q}$ vectors as a subset of the parent vector space $\text{R}^n$ to which $\mathbf{p}$ belongs.) In other words, $q(i) = 0,1$: either you desire the outcome (whose probability is denoted by) $p(i)$ or you don’t. (There are no “probabilities of expectation or desire” in our model.) We will soon see how these vectors address our larger problem of quantifying acts of altruism. The point $\text{Q}$ in $\text{R}^n$ is determined by $\mathbf{q}$, and we want to establish a plane parallel to (and including) $\mathbf{q}$ with normal vector $\mathbf{p}$. Define a point X generated by a vector $\mathbf{x} = t\mathbf{q}$ where the scalar $t>1$ and $\mathbf{x} = [c_1,c_2,c_3,\dots,c_{n-1},c_n]$. 
If $\mathbf{p}$ is a normal vector of $\mathbf{x} - \mathbf{q}$, then the normal-form equation of the plane is given by $\mathbf{p}\cdot(\mathbf{x} - \mathbf{q})=0$, and its general equation is $\sum_{i=1}^n p(i)c_i = p(1)c_1 + p(2)c_2 + \dots + p(n-1)c_{n-1} + p(n)c_n=0$. We now have a foundation upon which to establish a basic, quantifiable metric for altruism. If we assume, as we did above, that an altruistic act benefits the recipient and fails to generate any positive benefits for the actor, then such an act must involve potential and expectation vectors whose scalar product equals zero, which means they stand in an orthogonal (i.e., right-angle) relationship to each other. It is interesting to note there are only two possible avenues for $\mathbf{p}$$\mathbf{q}$ orthogonality within our model: (a) the actor desires and/or expects absolutely no rewards (i.e., $\mathbf{q}=0$), which is the singular and generally understood notion of altruism, and (b) the actor only desires and/or expects rewards that are simply impossible (i.e., $p(i)=0$ where $q(i)=1$). (We will assume $\mathbf{p}\neq0$.) In all other cases, the scalar product will be greater than zero, violating the altruism requirement that there be no benefit to the actor. Framed another way, (the vector of) an altruistic act forms part of a basis for a subspace in $\text{R}^n$. At this stage, it might be beneficial to pause and walk through a very easy example. Imagine there are only three possible outcomes for buying someone their morning coffee at Starbucks: (1) the recipient says “thank you,” (2) someone buys your coffee for you (“paying it forward”), and (3) the person offers to pay your mortgage. A reasonable potential vector might be [0.9, 0.5, 0]—i.e., there’s a 90% chance you’ll get a “thank you,” a 50% chance someone else will buy your coffee for you, and a zero-percent chance this person will pay your mortgage. Now, assume your expectation vector for those outcomes is [1, 0, 0]—you expect people to say “thank you” when someone does something nice for them, but you don’t expect someone to buy your coffee or pay your mortgage as a result. The scalar product is greater than zero ($0.9(1) + 0.5(0) + 0^2 = 0.9$), which means the act of buying the coffee fails to meet the requirement for altruism (i.e., the potential vector is not orthogonal to the plane that includes Q and X = tq). In this example, as we’ve seen in the general case, the only way buying the coffee could have been an altruistic act is if (a) the actor expects or desires no outcome at all or (b) the actor expected or desired her mortgage to be paid (and nothing else). We will discuss later the reasonableness of the former scenario. (It might also be interesting to note the model can quantify the degree to which an act is altruistic.) The above formalism will work in every case where there is a single, fixed potential vector and a specified constellation of expectations; curious readers, however, might be interested in cases where there exists a non-scalar-multiple range of expectations (i.e., when X $=\mathbf{x}\neq t\mathbf{q}$ for some scalar t), and we can dispatch the formalism fairly quickly. In these cases, orthogonality would involve a specific potential vector and a plane involving the displacement of expectation vectors. 
The vector form of this plane is $\mathbf{x}=\mathbf{q} + t_1\mathbf{u} + t_2\mathbf{v}$, and direction vectors $\mathbf{u}$,$\mathbf{v}$ are defined as follows: $\mathbf{u}=\overrightarrow{QS}=[s(1)-q(1),s(2)-q(2),\ldots,s(n-1)-q(n-1),s(n)-q(n)]$ with $\mathbf{v}$ defined similarly for points Q and R; $t_i$ are scalars (possibly understood as time per some unit of measurement for a transition vector), and points S and R of the direction vectors are necessarily located on the plane in question. Unpacking the vector form of the equation yields the following matrix equation: $\begin{bmatrix}c_1\\c_2\\c_3\\ \vdots\\c_{n-1}\\c_n\end{bmatrix}=\begin{bmatrix}q(1)\\q(2)\\q(3)\\ \vdots\\q(n-1)\\q(n)\end{bmatrix}+t_1\begin{bmatrix}s(1)-q(1)\\s(2)-q(2)\\s(3)-q(3)\\ \vdots\\s(n-1)-q(n-1)\\s(n)-q(n)\end{bmatrix}+t_2\begin{bmatrix}r(1)-q(1)\\r(2)-q(2)\\r(3)-q(3)\\ \vdots\\r(n-1)-q(n-1)\\r(n)-q(n)\end{bmatrix}$ whose parametric equations are $\begin{matrix}c_1=q(1)+t_1[s(1)-q(1)]+t_2[r(1)-q(1)]\\ \vdots\\ c_n=q(n)+t_1[s(n)-q(n)]+t_2[r(n)-q(n)].\end{matrix}$ It's not at all clear how one might interpret "altruistic orthogonality" between a potential vector and a transition or range (i.e., subtraction) vector of expectations within this alternate plane, but it will be enough for now to consider its normal vectors—one at Q and, if we wish, one at X (through the appropriate mathematical adjustments)—as secondary altruistic events orthogonal to the relevant plane intersections: $p_1(1)c_1 - p_2(1)c_1 + p_1(2)c_2 - p_2(2)c_2 + \dots + p_1(n)c_n - p_2(n)c_n = 0.$ ##### Semantic States as 'Intrinsic Desires' To this point, we've established a very simple mathematical model that allows us to quantify a notion of altruism, but even this model hinges on the likelihood that one's expectation vector equals zero: an actor neither expects nor desires any outcome or benefit from engaging in the act. This seems plausible for events we can recognize and catalog (e.g., reciprocal acts of kindness, expressions of affirmation, etc.), but what about the internal motivations—philosophers refer to these as intrinsic desires—that very often drive our decision-making process? What can we say about acts that resonate with these subjective, internal motivations like religious upbringing, a generic sense of rectitude, cultural conditioning, or the Golden Rule? These intrinsic desires must also be included in the collection of benefits we might expect to gain from engaging in an act and, thus, must be included in the set of components of potential outcomes. If you've been following the above mathematical discussion, such internal states guarantee non-orthogonality; that is, they secure a nonzero scalar product $\mathbf{p}\cdot\mathbf{q}$ because $p_k,q_k >0$ for some internal state k. This means internal states belie a genuine act of altruism. It is important to note, too, these acts are closely associated with notions of social exchange theory, where (1) "assets" and "liabilities" are not necessarily objective, quantifiable things (e.g., wealth, beauty, education, etc.) and (2) one's decisions often work toward shrinking the gap between the perceived self and ideal self. (See, particularly, Murstein, 1971.) In considering the context of altruism, internal states combine these exchange features: An act that aligns with some intrinsic desire will bring the actor closer to the vision of his or her ideal self, which, in turn, will be subjectively perceived and experienced as an asset. Altruism is perforce banished in the process.
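For readers who like to see the bookkeeping, here is a minimal sketch in Python of the scalar-product test described above, using the coffee-shop numbers from the earlier example (the function name and tolerance are mine; the model itself is as defined in the post).

```python
def altruistic(p, q, tol=1e-12):
    """Return True if the act is 'altruistic' in the post's sense: p . q == 0."""
    return abs(sum(pi * qi for pi, qi in zip(p, q))) < tol

# Coffee example: outcomes are (thank you, coffee paid forward, mortgage paid)
p = [0.9, 0.5, 0.0]   # potential vector: probabilities of each outcome
q = [1, 0, 0]         # expectation vector: the actor expects only a "thank you"

print(sum(pi * qi for pi, qi in zip(p, q)))   # 0.9 -> benefit expected, not altruistic
print(altruistic(p, q))                       # False
print(altruistic(p, [0, 0, 1]))               # True: only an impossible reward is desired
print(altruistic(p, [0, 0, 0]))               # True: nothing is desired at all
```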
So, the question then becomes: Is it possible to act in a way that is completely devoid of both a desire for external rewards and any motivation involving intrinsic desires, internal states that provide (what we will conveniently call) semantic assets? As I hope I’ve shown, yes, it is (mathematically) possible—and in light of that, then, I might have been better served placing quotes around the word myth in the title—but we must also ask ourselves the following question: How likely it is that an act would be genuinely altruistic given our model? If we imagine secondary (non-scalar) planes $P_1, P_2,\dots, P_n$ composed of expectation vectors from arbitrary points $p_1,p_2,\dots,p_n$ (with $p_j \in P_j$) parallel to the x-axis, as described above, then it is easy to see there are a countably infinite number of planes orthogonal to the relevant potential vector. (Assume $q\neq 0$ because if q is the zero vector, it is orthogonal to every plane.) But there are an (uncountably) infinite number of angles $0<\theta<\pi$ and $\theta\neq\pi/2$, which means there exists a far greater number of planes that are non-orthogonal to a given potential vector, but this only considers $\theta$ rotations in $\text{R}^2$ as a two-dimensional slice of our outcome space $\text{R}^n$. As you might be able to visualize, the number of non-orthogonal planes grows considerably if we include $\theta$ rotations in $\text{R}^3$. Within the context of three dimensions, and to get a general sense of the unlikelihood of acquiring random orthogonality, suppose there exists a secondary plane, as described above, for every integer-based value of $0<\theta<\pi$ (and $\theta\neq\pi/2$) with rotations in $\text{R}^2$; then the probability of a potential vector being orthogonal to a randomly chosen plane $P_j$ of independent expectation vectors is highly improbable: p = 1/178 = 0.00561797753, a value significant to eleven digits. If we include $\text{R}^3$ rotations to those already permitted, the p-value for random orthogonality decreases to 0.00001564896, which is a value so small as to be essentially nonexistent. So, although altruism is theoretically possible because our model admits the potential for orthogonality, our model also suggests such acts are quite unlikely, especially for large n. For philosophically sophisticated readers, the model supports the theory of psychological altruism (henceforth ‘PA’) that informs the vast majority of decisions we make in response to others, but based on p-values associated with the prescribed model, I would argue we’re probably closer to Thomas Hobbes’s understanding of psychological egoism (henceforth ‘PE’), even though the admission of orthogonality subverts the totalitarianism and inflexibility inherent within PE. One final thought explicates the obvious problem with our discussion to this point: There isn’t any way to quantify probabilities of potential outcomes based on events that haven’t yet happened, even though we know intuitively such probabilities, outcomes, and expectations exist. To be sure, the concept of altruism is palpably more philosophical or psychological or Darwinian than mathematical, but our model is successful in its attempt to provide a skeletal structure to a set of disembodied, intrinsic desires—to posit our choices are, far more often than they are not, means to ends (whether external or internal) rather than selfless, other-directed ends in themselves. ##### Some Philosophical Criticisms Philosophical inquiry concerning altruism is rich and varied. 
Aristotle believed the concept of altruism—the specific word was not coined until 1851 by Auguste Comte—was an outward-directed moral good that benefited oneself, the benefits accruing in proportion to the number of acts committed. Epicurus argued that selfless acts should be directed toward friends, yet he viewed friendship as the “greatest means of attaining pleasure.” Kant held for acts that belied self-interest but argued, curiously, they could also emerge from a sense of duty and obligation. Thomas Hobbes rejected the notion of altruism altogether; for him, every act is pregnant with self-interest, and the notion of selflessness is an unnatural one. Nietzsche felt altruistic acts were degrading to the self and sabotaged each person’s obligation to pursue self-improvement and enlightenment. Emmanuel Levinas argued individuals are not ends in themselves and that our priority should be (and can only be!) acting benevolently and selflessly towards others—an argument that fails to address the conflict inherent in engaging with a social contract where each individual is also a receiving “other.” (This is the problem with utilitarian-based approaches to altruism, in general.) Despite the varied historical analyses, nearly every modern philosopher (according to most accounts) rejects the notion of psychological egoism—the notion that every act is driven by benefits to self—and accepts, as our model admits, that altruism does motivate a certain number of volitional acts. But because our model suggests very low p-values for PA, it seems prudent to address some of the specific arguments against a prevalent, if not unshirted, egoism. 1. Taking the blue pill: Testing for ‘I-desires’ Consider the following story: Mr. Lincoln once remarked to a fellow passenger…that all men were prompted by selfishness in doing good. His [companion] was antagonizing this position when they were passing over a corduroy bridge that spanned a slough. As they crossed this bridge they espied an old razor-backed sow on the bank making a terrible noise because her pigs had got into the slough and were in danger of drowning. [M]r. Lincoln called out, ‘Driver can’t you stop just a moment?’ Then Mr. Lincoln jumped out, ran back and lifted the little pigs out of the mud….When he returned, his companion remarked: ‘Now Abe, where does selfishness come in on this little episode?’ ‘Why, bless your soul, Ed, that was the very essence of selfishness. I should have had no peace of mind all day had I gone on and left that suffering old sow worrying over those pigs.’ [Feinberg, Psychological Altruism] The author continues: What is the content of his desire? Feinberg thinks he must really desire the well-being of the pigs; it is incoherent to think otherwise. But that doesn’t seem right. Feinberg says that he is not indifferent to them, and of course, that is right, since he is moved by their plight. But it could be that he desires to help them simply because their suffering causes him to feel uncomfortable (there is a brute causal connection) and the only way he has to relieve this discomfort is to help them. Then he would, at bottom be moved by an I-desire (‘I desire that I no longer feel uncomfortable’), and the desire would be egoistic. Here is a test to see whether the desire is basically an I-desire. Suppose that he could simply have taken a pill that quietened the worry, and so stopped him being uncomfortable, and taking the pill would have been easier than helping the pigs. 
Would he have taken the pill and left the pigs to their fate? If so, the desire is indeed an I-desire. There is nothing incoherent about this….We can apply similar tests generally. Whenever it is suggested that an apparently altruistic motivation is really egoistic, since it [is] underpinned by an I-desire, imagine a way in which the I-desire could be satisfied without the apparently altruistic desire being satisfied. Would the agent be happy with this? If they would, then it is indeed an egoistic desire. if not, it isn’t. This is a powerful argument. If one could take a pill—say, a tranquilizer—that would relieve the actor from the discomfort of engaging the pigs’ distress, which is the assumed motivation for saving the pigs according to the (apocryphal?) anecdote, then the volitional act of getting out of the coach and saving the pigs must then be considered a genuinely altruistic act because it is directed toward the welfare of the pigs and is, by definition, not an “I-desire.” But this analysis makes two very large assumptions: (1) there is a singular motivation behind an act and (2) we can whisk away a proposed motivation by some physical or mystical means. To be sure, there could be more than one operative motivation for an action—say, avoiding discomfort and receiving a psychosocial reward—and the thought-experiment of a pill removing the impetus to act does not apply in all cases. Suppose, for example, one only desires to avoid the pigs’ death and not the precursor of their suffering. Is it meaningful to imagine the possibility of a magical pill that could avoid the pigs’ death? If by the “pill test” we intend to eviscerate any and all possible motivations by some fantastic means, then we really haven’t said much at all. We’ve only argued the obvious tautology: that things would be different if things were different. (Note: the conditional A –> A is always true, which means A <–> A is, too.) Could we, for example, apply this test to our earlier coffee experiment? Imagine our protagonist could take a pill that would, by acting on neurochemical transmitters, magically satisfy her expectation and desire for being thanked for purchasing the coffee. Can we really say her motivation is now altruistic, presumably because the pill has rendered an objective “thank you” from the recipient unnecessary? In terms of our mathematical model, does the pill create a zero expectation vector? It’s quite difficult to imagine this is the case; the motivation—that is, the expectation of, and desire for, a “thank you”—is not eliminated because it is fulfilled by a different mechanism. 2. Primary object vs. Secondary possessor As a doctor who desires to cure my patient, I do not desire pleasure; I desire that my patient be made better. In other words, as a doctor, not all my particular desires have as their object some facet of myself; my desire for the well-being of my patient does not aim at alteration in myself but in another. My desire is other-regarding; its object is external to myself. Of course, pleasure may arise from my satisfied desire in such cases, though equally it may not; but my desire is not aimed at my own pleasure. The same is true of happiness or interest: my satisfied desire may make me happy or further my interest, but these are not the objects of my desire. Here, [Joseph] Butler simply notices that desires have possessors – those whose desires they are – and if satisfied desires produce happiness, their possessors experience it. 
The object of a desire can thus be distinguished from the possessor of the desire: if, as a doctor, my desire is satisfied, I may be made happy as a result; but neither happiness nor any other state of myself is the object of my desire. That object is other-regarding, my patient’s well-being. Without some more sophisticated account, psychological egoism is false. [See Butler, J. (1726) Fifteen Sermons Preached at the Rolls Chapel, London] Here, the author errs not in assuming pleasure can be a residual feature of helping his patients—it can be—but in presuming his desire for the well-being of others is a first cause. It is likely that such a desire originates from a desire to fulfill the Hippocratic oath, to avoid imposing harm, which demands professional and moral commitments from a good physician. The desire to be (seen as) a good physician, which requires a (“contrapositive”) desire to avoid harming patients, is clearly a motivation directed toward self. Receiving a “thank you” for buying someone’s coffee might create a feeling of pleasure within the actor (in response to the pleasure felt and/or exhibited by the recipient), but the pleasure of the recipient is not necessarily (and is unlikely to be) a first cause. If it were a first (and only) cause, then all the components of the expectation vector would be zero and the act would be considered altruistic. Notice we must qualify that if-then statement with the word “only” because our model treats such secondary “I-desires” as unique components of the expectation vector. (“Do I desire the feeling of pleasure that will result in pleasing someone else when I buy him or her coffee?”) We will set aside the notion that an expectation of a residual pleasurable feeling in response to another’s pleasure is not necessarily an intrinsic desire. I can expect to feel good in response to doing X without desiring, or being motivated by, that feeling—this is the heart of the author’s argument—but if any part of the motivation for buying the coffee involves a desire to receive pleasure—even if the first cause involves a desire for the pleasure of others—then the act cannot truly be cataloged as altruistic because, as mentioned above, it must occupy a component within $q$. The issue of desire, then, requires an investigation into first-cause (i.e., “ultimate”) motivations, and the logical fallacy of Joseph Butler’s argument (against what is actually psychological hedonism) demands it.

3. Sacrifice or pain

Also taken from the above link: A simple argument against psychological egoism is that it seems obviously false….Hume rhetorically asks, ‘What interest can a fond mother have in view, who loses her health by assiduous attendance on her sick child, and afterwards [sic] languishes and dies of grief, when freed, by its death, from the slavery of that attendance?’ Building on this observation, Hume takes the ‘most obvious objection’ to psychological egoism: ‘[A]s it is contrary to common feeling and our most unprejudiced notions, there is required the highest stretch of philosophy to establish so extraordinary a paradox. To the most careless observer there appear to be such dispositions as benevolence and generosity; such affections as love, friendship, compassion, gratitude. […] And as this is the obvious appearance of things, it must be admitted, till some hypothesis be discovered, which by penetrating deeper into human nature, may prove the former affections to be nothing but modifications of the latter.’ Here Hume is offering a burden-shifting argument.
The idea is that psychological egoism is implausible on its face, offering strained accounts of apparently altruistic actions. So the burden of proof is on the egoist to show us why we should believe the view. Sociologist Emile Durkheim argued that altruism involves voluntary acts of “self-destruction for no personal benefit,” and like Levinas, Durkheim believed selflessness was informed by a utilitarian morality despite his belief that duty, obligation, and obedience to authority were also counted among selfless acts. The notion of sacrifice is perhaps the most convincing counterpoint to overriding claims to egoism. It is difficult to imagine a scenario, all things being equal, where sacrifice (and especially pain) would be a desired outcome. It would seem that a decision to act in the face of personal sacrifice, loss, or physical pain would almost certainly guarantee a genuine expression of altruism, yet we must again confront the issue of first causes. In the case of the assiduous mother, sacrifice might serve an intrinsic (and “ultimate”) desire to be considered a good mother. In the context of social-exchange theory, the asset of being (perceived as) a good mother outweighs the liability inherent within self-sacrifice. Sacrifice, after all, is what good mothers do, and being a good mother resonates more closely with the ideal self, as well as society’s coeval definition of what it means to be a “good mother.” In a desire to “do the right thing” and “be a good mother,” then, she chooses sacrifice. It is the desire for rectitude (perceived or real) and the positive perception of one’s approach to motherhood, not solely the sacrifice itself, that becomes the galvanizing force behind the act. First causes very often answer the following question: “What would a good [insert category or group to which membership is desired] do?” What of pain? We can imagine a scenario in which a captured soldier is being tortured in the hope he or she will reveal critical military secrets. Is the soldier acting altruistically by enduring intense pain rather than revealing the desired secrets? We can’t say it is impossible, but, here, the locus of a first cause likely revolves around pride or honor; to use our interrogative test for first causes: “Remaining true to a superordinate code is what [respected and honorable soldiers] do.” They certainly don’t dishonor themselves by betraying others, even when it’s in their best interest to do so. Recalling Durkheim’s definition, obedience (as distinct from the obligatory notion of duty) also plays an active role here: Honorable soldiers are required to obey the established military code of conduct, so the choice to endure pain might be motivated by a desire to be (seen as) an obedient and compliant soldier who respects the code rather than (merely) an honorable person, though these two things are nearly inextricably enmeshed. To highlight a relevant religious example, Jesus’ sacrifice on the cross might not be considered a truly altruistic act if the then-operative value metric privileged a desire to be viewed by the Father as a good, obedient Son, who was willing to sacrifice Himself for humanity, above the sacrifice (and pain) associated with the crucifixion. (This is an example where the general criticism of Durkheim’s “utilitarian” altruism fails; Jesus did not receive from His utilitarian sacrifice in the way mankind did.)
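To keep the bookkeeping explicit, the test our model applies to each of these cases can be stated in a single line (the component labels are illustrative only, not a formal taxonomy):

$q = (q_1, q_2, \dots, q_n), \qquad \text{altruistic} \iff q = \mathbf{0},$

where each component $q_i$ records one expected return: a “thank you,” the standing of a (perceived) good mother, the honor of a compliant soldier. For the assiduous mother, the component attached to being seen as a good mother is plausibly nonzero, so the act fails the test however selfless it appears from the outside.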
These are complex motivations that require careful parsing, but there’s one thing we do know: If neither sacrifice nor pain can be related to any sort of intrinsic desire that satisfies the above interrogative test, then the act probably should be classified as altruistic, even though, as our model suggests, this is not likely to be the case.

4. Self-awareness

Given the arguments, it is still unclear why we should consider psychological egoism to be obviously untrue. One might appeal to introspection or common sense, but neither is particularly powerful. First, the consensus among psychologists is that a great number of our mental states, even our motives, are not accessible to consciousness or cannot reliably be reported…through the use of introspection. While introspection, to some extent, may be a decent source of knowledge of our own minds, it is fairly suspect to reject an empirical claim about potentially unconscious motivations….Second, shifting the burden of proof based on common sense is rather limited. Sober and Wilson…go so far as to say that we have ‘no business taking common sense at face value’ in the context of an empirical hypothesis. Even if we disagree with their claim and allow a larger role for shifting burdens of proof via common sense, it still may have limited use, especially when the common sense view might be reasonably cast as supporting either position in the egoism-altruism debate. Here, instead of appeals to common sense, it would be of greater use to employ more secure philosophical arguments and rigorous empirical evidence. In other words, we cannot trust thought processes in evaluating our motivations to act. We might think we’re acting altruistically—without any expectations or desires—but we are often mistaken because, as our earlier examples have shown, we fail to appreciate the locus of first causes. (It is also probably true, for better or worse, that most people prefer to think of themselves more highly than they ought—a tendency that aligns with exchange-theory ideas of the ideal self in choosing how and when to act.) Jeff Schloss, the T.B. Walker Chair of Natural and Behavioral Sciences at Westmont College, suggests precisely this when he states that “people can really intend to act without conscious expectation of return, but that [things like intrinsic desires] could still be motivating certain actions.” The interrogative test seems like one easy way to clarify our subjective intuitions surrounding what motivates our actions, but we need more tools. Our model seems to argue that the burden of proof for altruism rests with the actor—“proving,” without resorting to introspection, one’s expectation vector really is zero—rather than “proving” the opposite, that egoism is the standard construct. Our proposed p-values based on the mathematics of our model strongly suggest the unlikelihood of a genuine altruism for a random act (especially for large $n$), but despite the highly suggestive nature of the probability values, it is unlikely they rise to the level of “empirical evidence.”

##### Conclusion

Though I’ve done a little work in a fun attempt to convince you genuine altruism is a rather rare occurrence, generally speaking, it should be said that even if my basic conceit is accurate, this is not a bad thing!
The “intrinsic desires” and (internal) social exchanges that often motivate our decision-making process (1) lead to an increase in the number of desirable behaviors and (2) afford us an opportunity to better align our actions (and ourselves) with a subjective vision of an “ideal self.” We should note, too, the “subjective ideal self” is frequently a reflection of an “objective ideal ([of] self)” constructed and maintained by coeval social constructs. This is a positive outcome, for if we only acted in accordance with genuine altruism, there would be a tragic contraction of good (acts) in the world. Choosing to act kindly toward others based on a private desire that references and reinforces self in a highly abstract way stands as a testament to the evolutionary psychosocial sophistication of humans, and it evinces the kind of higher-order thinking required to assimilate into, and function within, the complex interpersonal dynamic demanded by modern society. We should consider such sophistication to be a moral and ethical victory rather than the evidence of some degenerate social contract surreptitiously pursued by selfish persons.

References:
Bernard Murstein (Ed.). (1971). Theories of Attraction and Love. New York, NY: Springer Publishing Company, Inc.

Toward a quantification of intellectual disciplines

As a mathematician, I often find myself taking the STEM side of the STEM-versus-liberal-arts-and-humanities debate—this should come as no surprise to readers of this blog—and my principal conceit, that of a general claim to marginal productivity, quite often (and surprisingly, to me) underwhelms my opponents. So, I’ve been thinking about how we might (objectively) quantify the value of a discipline. May we argue, if we can, that quantum mechanics is “more important” than, say, the study of Victorian-period literature? Is the philosophy of mind as essential as the macroeconomics of international trade? Are composers of dodecaphonic concert music as indispensable to the socioeconomic fabric as historians of WWII? Is it really possible to make such comparisons, and should we be making them at all? The main question becomes this: Are all intellectual pursuits equally justified? If so, why should that be the case, and if not, how can society differentiate among so many disparate modes of inquiry? To that end, then, I’ve quickly drafted eleven basic categories that I believe might serve us well in quantifying an intellectual pursuit:

(I) Societal demand
This will perforce involve a (slippery) statistical calculation: average annual salary (scaled to cost-of-living expenses), the size of university departments, job-placement rates among graduates with the same terminal degree, or anything that betrays a clear supply-and-demand approach to practitioners of the discipline.

(II) Influence and range
How fertile is the (inter-field) progeny of research? How often are articles cited by other disciplines? Do the articles, conferences, and symposia affect a diverse collection of academic research in different fields with perhaps sweeping consequences, or does the intellectual offspring of an academic discipline rarely push beyond the limited confines of its field of interest?

(III) Difficulty
What is the effort required for mastery and original contribution? In general, we place a greater value on things that take increased effort to attain. It’s easier, for example, to eat a pizza than to acquire rock-hard abs.
(As an aside, and apart from coeval psychosexual aspects of attraction—obesity was considered a desirable trait during the twelfth to fifteenth centuries because it signified wealth and power—being fit holds greater societal value because it, among other things, represents the more difficult, ascetic path, which suggests something of an evolutionary advantage.) Average time to graduation, the number of prerequisite courses for degree candidacy, and the rigor of standardized tests might also play a useful role here.

(IV) Applicability and usefulness
How practical is the discipline’s intellectual import? How much utility does it possess? Does it (at least, eventually) lead to a general increase in the quality of life for the general population (e.g., the creation of plastics), or is it limited in its scope and interest only to those persons with a direct relationship to its claims (e.g., non-commutative transformational symmetry in the development of a Mozart piano sonata)? Another way of evaluating this category is to ask the simple question: Who cares?

(V) Prize recognition
Disciplines and academic fields that enjoy major prizes (e.g., Nobel, Pulitzer, Fields, Abel, etc.) must often succumb to more rigorous scrutiny and peer-reviewed analysis than those whose metrics rely more heavily upon the opinion of a small cadre of informed peers and the publish-or-perish repositories of journals willing to print marginal material. This isn’t a rigid metric, of course; many economists now reject the Nobel-winning efficient-market hypothesis, and the LTCM debacle of the late 90s revealed the hidden perniciousness crouching behind the Black-Scholes equation, which also earned its creators a Nobel prize. (Perhaps these examples suggest something problematic about economics.) In general, though, winning a major international prize is a highly valued accomplishment that validates one’s work as enduring and important.

(VI) Objectivity
Are the propositions of an academic discipline provable, or are they largely based on subjective and rational or intuitive interpretation? Is it possible the value of one’s intellectual conceit could change if coeval opinion modulates to an alternate position? It seems logical to presume an objective truth is generally more valuable than subjective belief.

(VII) Projected value
What is the potential influence of the field’s unsolved problems? Do experts believe resolving those issues will eventually lead to significant breakthroughs (or possibly chaos!), or will the discipline’s elusive solutions effectuate only incremental and unnoticed progress when viewed through the widest available lens?

(VIII) Necessity
What are the long-range repercussions of eliminating the discipline? Would anyone beyond its members notice its absence? How essential is its intellectual currency to our current socioeconomic infrastructure? To one a generation or two removed from our own?

(IX) Ubiquity
How many colleges and universities offer formal, on-campus degrees in the field? Is its study limited to a national or even localized interest (e.g., agriculture), or is it embraced by a truly international, humanistic approach? A greater number of opportunities to study a subject, regardless of where you live, suggests a higher general value.

(X) Labor mobility
Related to (IX), is it difficult to find employment in different geographic areas of the country, or is employment restricted to a few isolated locations or even specific economies?
Does an intellectual discipline provide a global reach or only, say, North American opportunities? Are there gender gaps or racial-bias issues to consider? How flexible is the discipline? Do the skills you learn allow you to be productive within a range of occupations and applications, or do they translate poorly to the labor market because graduates are pigeonholed into a singular intellectual activity? Can you find meaningful employment with lower terminal degrees, or must you finish a PhD in order to be gainfully employed? There are certain exceptions here: brain surgeons, for example, enjoy a very limited employment landscape and earning anything less than an M.D. degree means you can’t practice medicine, but these are examples of outliers that offer counterbalancing compensations within the global metric.

(XI) Probability of automation
What is the probability a discipline will be automated in the future? Can your field be easily replaced by a bot or a computer in the next 25 years? (Luddites beware.)

__________

Not perfect, but it’s a pretty good start, I think. The list strikes a decent balance across disciplines and, taken as a whole, doesn’t necessarily privilege any particular field. A communications major, for example, might score toward the top in labor mobility, automation, and ubiquity but very low in difficulty and prize recognition (and likely most other categories, too). I also eliminated certain obvious categories—like historical import—because the history of our intellectual landscape has often been marked by hysteria, inaccuracy, and misinformation. To privilege music(ology) because of its membership in the quadrivium when most people believed part of its importance revolved around its ability to affect the four humors seems unhelpful. (It also seems unfair to penalize, say, risk analysts because the stock market didn’t exist in the sixth century.) Specific quantifying methods might involve a function $f : \mathbb{R}^n \to \mathbb{R}$ with a series of weightings (possibly via integration), where $n$ is the total number of individual categories $c_i$, but the total value of a discipline, $v_j$, might just as easily be calculated by a geometric mean, provided no category can have a value of zero: $v_j = \left(\prod_{i=1}^n c_i\right)^{1/n}.$

Is Atheism Irrational?

The following is a very interesting (and rather long) FB discussion about a NYTimes link I posted to my wall, which led to an enlightening debate concerning the viability of the Big Bang theory based upon stochastic measurements. I have done my best to present the discussion in its original form. Leon:  There’s probably an established name for this fallacy. Just because AP can’t imagine what sort of life might arise under different conditions doesn’t mean that it wouldn’t — and that it wouldn’t buy into the same fallacy: “If conditions had been just a little different, our world could have been overrun with deadly WATER and life as we know it would have been impossible! This clearly proves that everything in the universe was shaped with our welfare in mind!” Peter:  Leon, I generally agree, but changes to cosmological parameters don’t just lead to “deadly water.” It’s hard to imagine a universe that could sustain any life while precluding lifeforms of equivalent complexity / interest / value to humans. So when we talk about universes different from ours, we might as well be talking about universes with no life at all.
This pushes the question back to whether life itself is a worthy criterion by which to judge a universe—and then back to whether the “worth” of a universe is even a coherent concept, absent human judgment. This article gives a sharp analysis. David:  More important, I think, is the mathematics involved in the (very unlikely) probabilities associated with the current state of the universe—regardless of whether we wish to quantify that approach by burdening it with the concept of anthropocentrism. (And even if we do wish to pursue such an approach, anthropocentrism doesn’t seem to cast a greater shadow over creationism than it does the theory of evolution, which is, essentially, an anthropocentric theory concerned—if not obsessed—with humans qua the teleology of a “trajectory toward perfection.”) The notion of “life” is irrelevant, for example, if we’re limiting our discussion to the stochastic probability of the synthesis of a single protein chain subsequent to the Big Bang (1 in 10^125). Peter:  David, I suspect both of our minds are already made up, but: 1. Evolution, as I understand it, is absolutely not human-centric or teleological. Quite the opposite: humans aren’t the end or destination of the process, just another branch on the tree. 2. Anthropocentrism (biocentrism, etc.) is still an issue in this discussion of probability. The set of states containing a protein chain is no more or less improbable than an equivalently “sized” set of states without it. It’s hard to reason dispassionately about it, for the same reason it’s hard to imagine a world in which you had never met your wife. But *things would have happened* in all those other worlds too. When you say, “What are the chances that we would meet and fall in love?” you’re implicitly valuing this scenario above all the others. It’s the same with the probability argument you give above. The article I linked gives a neat rebuttal to *my* point on pp. 173–175. It really is worth a read! Leon:  Peter, the article you linked to is very well-done (except for one thing that I will mention) and I learned from it. However, when I found my attention drifting halfway through and wondered why, I realized that no one, *really*, is making the pure logical argument that there might be *some* being that created the universe. Mr. Manson does a good job of pointing out that some debunking strategies are not really arguments, they’re rhetorical strategies. What I fault him for is not pointing out that claims such as Plantinga’s above are also, just as much, rhetorical strategies rather than logical arguments. Manson does a good job of showing that what he calls the “moral argument” for some sort of creator requires there to be a moral value to creating conscious beings before anything in the universe existed. He then goes on to say he doesn’t know what arguing for such a value ex nihilo would look like. That’s right, and I don’t think anyone has done it, because anyone who gets to this point is really just providing rhetorical cover for saying that there must be a god. That, or if Manson takes the extra step in honesty and admits this, then he has to say that the moral argument is circular. And, in the spirit of following up on Manson’s analysis of the debunking rhetoric, I’ll point out that a lot of the success in Plantinga’s “argument from design” story is undoubtedly its ecumenical nature: it doesn’t mention any sects, so each listener gets to pencil in the name and characteristics of their own preferred god.
Dave, highly improbable events occur all the time; we don’t feel compelled to find divine explanations for them unless it reinforces our own personal narrative for the universe. The last time I saw a football game on TV, a team that was down by 5 threw a hail-mary pass as time expired. The quarterback threw it squarely to the two defenders. One of them jumped high up, caught the ball, and as he came down his hands hit the top of the helmet of his fellow defender. The ball bounced up and forward, over the head of the intended receiver, who did a visible double-take but managed to grab it and carry it into the end zone. I don’t know what the odds are on this, but no one feels obliged to find a divine explanation for this unless (a) they’re a really big fan of the winning team or (b) I end up getting inspired to become a football player by seeing this play and want to credit God with motivating me. That’s my response to the math: no one would care about the odds (except maybe “Wow! Cool.”) if they weren’t a way of reinforcing the emotional payoff of one’s chosen narrative about the overall situation. Peter:  So, what about the alleged incompatibility of materialism and evolutionary theory? That seems like the novel part of AP’s argument, and I don’t really know what to make of it. My gut reaction is that there’s a problem with the reasoning around “belief” (in particular, why should we assume a priori that each belief is 50% likely to be true?), but I don’t know enough philosophy to really get it. David:  @Peter: 1. We certainly know Darwin to have framed the concept of selection as a progression toward a state of “perfection,” and Lamarck even described the evolutionary trajectory as a ladder of ever-increasing moments of such perfection. So, even if a teleology isn’t explicitly stated, it’s heavily implied as an essential component of evolution’s general guiding principle. Also, I know of no examples within evolutionary biology where selection and adaptation have effectuated a regression to a less perfect state, so whether or not there exists intention (i.e., a Platonic teleology, etc.) with respect to the evolutionary process, there exists at least a teleomatic process that, through its natural laws, moves toward something that is “better than it was.” Of course, it might be more than that—say, an Aristotelian telenomic (i.e., a process of “final causes”)—but what we have is, at least, the law-directed purpose embedded within the process itself. Humans might not finally be a reification of the highest rung of Lamarckian “perfection,” but if we aren’t, that doesn’t necessarily efface the likelihood we exhibit a current state of perfection—“better than we’ve ever been”—which is still a shadow cast by (coeval) evolutionary predilections toward anthropocentrism. 2. I’m not quite sure I understand your point with respect to the mathematics-to-anthropocentrism link. Are you referring to James’s “infinite universes” when you speak of an “equivalent ‘sized’ set of states”? Also, I’m not sure why “valuing” 1:10^125 above some other p-event is necessarily a problem. We privilege it because of its importance to what ostensibly comes next. 
@Leon: Sure, no one need hold for a divine explanation of events witnessed, say, during a football game, even in the face of highly improbable events—unless, of course, you’re a hard determinist—but I think you’re (inadvertently) misappropriating causation/intention; there’s no reason to entertain the possibility of design and authorship with respect to the very low odds involved in the path of your football, so an attempt to do so immediately strikes one as extremely odd, which suggests (erroneously) that the argument for the design of (generally) highly unlikely events is logically unsound. It’s easy to imagine the occurrence of unusual events when contemplating the (sum of the) discrete actions of autonomous agents within the confines of physical scientific laws, but in no sense do those events demand the possibility of, or need for, a “designer.” But consider the following Gedankenexperiment: You are sitting at a table with three boxes in front of you. One box contains identical slips of paper, each with one of the twelve pcs inscribed on it; the second box also contains identical pieces of paper, and on each is written a single registral designation; the third box, like the others, contains identical pieces of paper, but, here, each piece of paper denotes a single rhythmic value (or rest). If you (or a monkey or a robotic arm) were to choose one piece of paper from each box randomly (with replacement) and notate the results, what are the odds you would create a double canon at the twelfth or even a fugue? I’m not going to do the math, but the p-value is an unimaginably small number. Yet if we were to suddenly discover a hitherto unknown sketch of just such a canon, who would presume it to be the result of a stochastic process? None of us. Why? Because the detailed complexity of the result—the canon itself—very strongly suggests a purposeful design (and, thus, a designer), so we would perforce reject any sort of stochastic probability as a feature of its existence. Is it not odd, then, that the canon’s complexity evinces the unimpeachable notion that a composer purposefully exhibited intention (and skill) in its creation, yet the universe—with its infinitely more complex structure and an unbelievably smaller probability of stochastic success—can be rationalized and dispatched by random (and highly improbable) interactions between and among substances that appeared ex nihilo? Leon: Yes, Dave, I’m very much enjoying the discussion. Peter, I honestly think that Plantinga is just throwing in everything that occurs to him, in the hopes that it will stick. If that seems ad hominem, well, I just see him appealing to “But doesn’t it just seem ridiculous that…[new claim here]” over and over again, without any reasoning other than an appeal to “isn’t it just so improbable…” I think it’s perfectly okay to not address some of that, on the grounds that we’re not here to figure out a coherent argument for his rhetoric for him. Dave, it’s completely wrong to attribute teleology to Darwin and the theory of evolution that comes from him. He is something of a transitional figure, and may not have guarded his language against teleological implications as well as later workers did.
But even during his lifetime, he was fiercely opposed by biologists who had explicitly teleological accounts of evolution, like Carl Nägeli; and by the end of the century this had become well-established enough that even people like Mark Twain (certainly not on the cutting edge of biology) could ridicule teleology via an argument by design: he said that if we take the total age of the earth as the height of the Eiffel Tower, then the period of man’s time on earth can be represented as the layer of paint at the top of it — and that saying that all of earth’s history was in service of bringing man into existence is like saying that the purpose of the Eiffel Tower is to hold up that top layer of paint. David: Oh, I’m not suggesting “evolutionary teleology” ends with humans—though modern scientists often speak of humans with such reverence that they imply such a concept (e.g., Dawkin’s discussion of human brain redundancy, etc.)—but I am saying there exists a teleology of process (toward improvement/perfection) that is built into evolution’s core principles. You can’t have one without the other. Whether that “constant state of improvement” ends with human life is not my concern—though it’s difficult to imagine a change-of-kinds progression beyond human life (could the Singularity be that moment?)—but it seems to occupy the bulk of Plantinga’s conceit. Leon:  […]and also, Dave, your gedanken experiment is well-taken, but in this and the original question I think you underestimate the vastness and tremendous age of the universe — under our current hegemonic cosmology, there have been planets in existence for ~10 billion years, and there are hundreds of billions of galaxies each containing hundreds of billions of stars. If your experiment is carried out at each star for a comparable length of time, I’m quite certain we’ll end up with thousands of perfectly appropriate canons. I also disagree with this example in that I believe that you’re working under an assumption that I’ll illustrate with the following story, taken from a philosopher that I’m not recalling: the edge of a road erodes, revealing some pebbles that spell out a sonnet of Shakespeare’s. We get very impressed by this, assuming it either to be somehow miraculous or a prank — in either case we take it to demonstrate intentionality of *some* sort. The philosopher’s point is that this reaction is an anthropocentric bias — *any* random arrangement of revealed pebbles is just as unlikely as any other, yet we don’t take the more random-looking ones as evidence of intentionality. It’s not quite that simple, of course; but as you pointed out, we don’t have a lot of space here. But I will say that given a sufficiently large number of roadsides, I’d expect a *lot* of things that “make sense” to appear, especially given that we conflate many things that “make sense” in the same way but have surface features that differ (the “same text” with different fonts or handwritings, for example) but we don’t do that with more “random-looking” arrangements. It also seems to me that you made the gedanken experiment because you think of life (or intelligence) on earth as something like the appearance of a Shakespeare sonnet on the wayside — evidence of intentionality. But to do so already assumes intentionality in the pre-life universe — that is, it’s circular reasoning. Teleology is a directionality imposed from without, not one that results from humans seeing a situation and imposing their thought habits. 
Some species get better at some of their life tasks because more of their handier members survive; in the absence of humans calling that a direction and privileging it as the “essential” nature of evolution, that’s no more teleological than water flowing downhill. Actual evolution theory actually points out that many, many organisms have very and obviously imperfect adaptations, yet as long as they can still survive they are not replaced by “fitter” species, nor do they keep evolving spontaneously just for the sake of evolving. And there are tons and tons of very “primitive” organisms on earth — like nematodes and bacteria, which probably make up the 90% of the earth’s biomass — that are so evolutionarily fit that they have not evolved since probably before the dinosaurs. There’s no teleology driving them. Also, this. 🙂 Peter:  Well, and even if Darwin did think selection was teleological (which, I dunno, maybe he did early on at least), theorizing about evolution didn’t stop there. Twain’s quip is clever, but putting humans at the top of the tower still seems like a 19th-century move. We’re probably an *extreme* of something, but I don’t think many evolutionary theorists would say we’re in a state of perfection, in either of the senses Dave outlines. It sounds like you’re thinking of evolutionary fitness as a universal quality that every organism has some amount of. But that’s not how it works: fitness is relative to a habitat. We humans are more “fit” than our predecessors in the sense that if you were to drop one of our hominid ancestors into most present-day human habitats, it wouldn’t do so well. (It would probably be terrible at music theory, for instance.) But that’s not because we’re universally more “fit” or better adapted for life in general. Plenty of organisms survive in habitats that would kill us instantly. Fitness is optimized over shorter spans than environmental change, so we can pretty much assume that everything that survives and reproduces is at a local maximum of its fitness landscape. But that doesn’t mean it’s more fit than its ancestors were, or less fit than its descendants will be. [edit: …in the long term, I mean.] The double canon example is great, but I think it illustrates my point better than yours. If we looked up in the sky and saw the stars and comets arranged into a double canon, or if one were somehow encoded into our DNA, then yes, we’d be compelled to look for some intelligent composer. That would be really cool! And it would be statistically unlikely, because we can imagine scenarios in which things could have gone differently, and we wouldn’t have observed those things. (Our actual world being one of them.) But that’s not the same kind of evidence provided by our existence in the universe, because there’s no scenario in which we would have been able to observe our nonexistence. The improbability of our existence just doesn’t bear on the question. [Edit: In oh-so-fashionable Bayesian terms, P(universe|people) is 1, no matter what P(universe) may be.] David:  @Leon: My thought experiment only meant to suggest that sufficient complexity, beyond the bounds of any sort of reasonable levels of stochastic probability, strongly suggests design. It’s not circular reasoning because we invoke that logic each time a new sketch is discovered. Your counterpoint sounds a lot like the infinite monkey theorem. But, as I’ve described in my blog, the math doesn’t even work; infinite exponentiation on the interval (0,1) always approaches zero. 
So, we always have a fatal contradiction: the p-value of an event cannot be both certain and impossible. Imagine a boy trying to throw a ping-pong ball into a paper cup 60 yards away. There’s a big difference between 100 billion earths, each with a single boy trying to throw the ball into a cup 60 yards away, and 100 billion boys covering the first earth, each with a ball they will attempt to throw into the cup. Leon:  Peter, I totally agree with you about evolution not being a single monolithic structure with humans at the end of it. But I would quibble: I do agree that “fitness” is not some abstract quality that everything now has in greater measure than the past and lesser measure than the future. But as far as it proceeding more slowly than environmental change, we’ve certainly upset that. And mass extinctions are a counterexample. And even in a stable environment, one of SJ Gould’s flagship examples of bad adaptation was the panda’s “thumb”, which is certainly not an optimal adaptation. It just works well enough to keep the pandas going, and that’s enough. Peter:  You’re absolutely right about mass extinctions and catastrophic events—I should have been clear. But is it still fair to say that the panda thumb is a case of a local maximum in the fitness landscape? Like, small “steps” around it were worse? What I meant to say was just that evolution isn’t solely driven by competition in a stable environment—which is what the teleological, constant-improvement model assumes. Also, yes—this is a super fun conversation! If only I could get this excited about the work I’m *supposed* to be doing. David:  Quick insertion: Humans have vestigial organs, but that doesn’t mean we must jettison the commonly-held belief that humans represent a “local maximum,” although, if we follow the metaphor precisely, that phrase presumes a decline after the peak, which doesn’t really describe any of the evolutionary biology I’ve read. “Maladaptations” and extinctions, I think, should also be contextualized within the larger trajectory of “progress”—the whole survival-of-the-fittest thing (not the survival-of-everything thing). I’ll have to come back to the canon example! Peter:  Yay, my favorite conversation is back! Are you sure survival-of-the-fittest should be characterized as “progress”? I’m certainly not an expert on evolutionary theory, but I get the strong impression that that could be true only in the relatively short term, in a stable environment (as I tried to say above). Wikipedia cites S. J. Gould on this: “Although it is difficult to measure complexity, it seems uncontroversial that mammals are more complex than bacteria. Gould (1997) agrees, but claims that this apparent largest-scale trend is a statistical artifact. Bacteria represent a minimum level of complexity for life on Earth today. Gould (1997) argues that there is no selective pressure for higher levels of complexity, but there is selective pressure against complexity below the level of bacteria. This minimum required level of complexity, combined with random mutation, implies that the average level of complexity of life must increase over time.” David:  I would say, quickly, though, too that the very high “improbability of our existence”—based on the sheer math involved—has quite a bit of bearing on the probability of design, imo. In fact, that was the whole point of my double-canon example. Why don’t we ever consider a double canon—when we find one—to have been created by stochastic processes? 
I think the notion of “progress” is inherent in the concept of “survival.”  What doesn’t survive obviously cannot progress. Peter:  Actually, that just shows that “survival” is inherent in “progress.” What survives still does not necessarily progress. And what would a biological definition of progress look like, anyway? The problem with the probability argument is that the universe is a precondition for our observation. When we observe that the universe happens to be just right for us to exist, or that we seem to exist despite incredible odds, what does this tell us? This is exactly the question that Bayes’s theorem is built to answer: “How likely is Y, given X?” How likely is it that the universe would have these properties, given that we exist? If it’s the only sort of universe that could support intelligent life, then, well, 100%. Ha! It turns out my argument has a name, and can be expressed MUCH more clearly. It’s the Weak Anthropic Principle: “…the universe’s ostensible fine tuning is the result of selection bias: i.e., only in a universe capable of eventually supporting life will there be living beings capable of observing any such fine tuning, while a universe less compatible with life will go unbeheld.” Leon: Dave, it actually strikes me that there are two ways to take your thought experiment. One is, as you say, that the result you discuss is very improbable, therefore perhaps someone did it on purpose. This strikes me as more of the “young earth,” hands-on creation position, not the one that you or AP are floating here. The other approach is less about the system’s ability to generate the canon than the idea that if there’s some process in place that *can* generate such a canon, then the process must have been set up by an intelligence that had such canons in mind. This seems closer to Plantinga’s (and your?) approach in saying that if this universe *can* produce life, it must have been set up that way on purpose. Is that correct? (Though of course, with God as a non-timebound, less anthropomorphic being, perhaps there’s not so much of a difference between these two ways of looking at things). David:  Okay, I think “evolutionary teleology” derives from its principles. That is, “progress” is an inherent feature of evolutionary design and not some exogenous thing slapped onto its structure ex post facto. That doesn’t mean everything needs to change constantly—there are periods of stasis (i.e., localized temporal optimizations)—but it does suggest that when things move, they move in one direction. When things stop moving (forward), when organisms stop evolving and adapting (in the long run) in ways that are beneficial to their survival, they eventually become extinct; thus, even the notion of extinction becomes a feature of a more diachronic concept of progress. Water flows down the hill (and even pools into puddles of stasis) because of the “teleology” established by the law of gravity. If we reject the teleological notion of progress—the idea that adaptation and fitness are random, non-directed processes—evolutionary biology becomes a much tougher sell, imo. I’m not really interested in fit-for-life arguments of the universe, even though that concept drives Plantinga’s conceit. I do not reject the possibility of stochastic double canons because composers exist; I assume composers exist because the p-value of a stochastic double canon is impossibly small. This allows me to sidestep the problems associated with Bayes’s theorem. 
I’ll have to come back for the rest…including Leon’s interesting parsing. Peter:  Okay, I see where you’re coming from re: evolution, and I agree that natural selection does generally lead to greater “fitness.” In fact, I’m pretty sure that’s how we define fitness: that which is maximized by natural selection. But it has nothing to do with a “trajectory toward perfection,” as you said way back at the start of the thread. Fitness isn’t concerned with perfection (in the sense of “freedom from defect”), only with survival and reproduction *in a particular ecosystem*. Actually, Wikipedia tells me that the phrase “survival of the fittest” is actually a misquotation: Herbert Spencer’s original formulation was “survival of the best fitted.” David:  True. Perhaps I should have described it as a “trajectory toward a greater perfection.” Is it possible there exists a permanent-yet-imperfect evolutionary state that just, well, stops? Leon, I’m not quite sure I understand; there IS such a process because I created it, lol: one picks from three boxes, each filled with unique and discrete musical elements. The probability of that process creating the desired result, however, is truly minuscule. In fact, let’s put a face on it. If we assume an eight-octave range, common time, and both note and rest values no shorter in duration than a sixteenth note (and no longer than a whole note), the p-value for generating (only!) a C major scale in quarter notes (within one octave) is given by $p = [(1/12)(1/8)(1/32)]^7 \approx 3.87 \times 10^{-25}$ A 40-note composition with uniquely determined values approaches 3.19 x 10^-140. (There are only 10^80 atoms in the observable universe.) Imagine the p-value in generating Bach’s Contrapunctus I from BWV 1080! So, okay, what do these numbers mean? Well, it’s simple: you’d have a MUCH better chance of closing your eyes and, with one trial, picking the single atom (within the observable universe) I’ve labeled “Leon” than creating a 20-note dux (with comes) by using the three-box method I’ve described. But, Leon, if you’re suggesting a “process” that with a HIGH PROBABILITY does, in fact, create, say, Baroque-era canons with invertible counterpoint, then I’d say the process IS the intelligence itself, which is my point. I can create the canon, but I can’t generate a larger number from the product. Of course, I could create such a high-valued stochastic process by severely limiting the variables (e.g., controlling p-values for each input, etc.), but rigging the task to be less demanding cannot be evidence for the feasibility of the more difficult one. (And my model could be made even more difficult!) Peter:  > “Is it possible there exists a permanent-yet-imperfect evolutionary state that just, well, stops?”< Sure, if the environment holds still and other competing species also stop evolving. More seriously, what about cockroaches, or bacteria? They’ve been around in roughly their current forms a heck of a lot longer than we have. I guess my big point is that in the big picture, evolution isn’t a trajectory *toward* any particular destination—more like an expansion around an origin. See the link above about “largest-scale trends.” > “The probability of that process creating the *desired result*, however, is truly minuscule”< This analogy depends on our being the “desired result,” which is (I think) what Leon was poking at a few comments ago. It begs the question, IMO. Leon:  It’s really fascinating to me to discover completely unexpected ways that we misunderstand each other. 
🙂 David:  Okay, what am I missing, lol?  I’m not referring to humans as the desired result, Peter. I’m more than content to limit the discussion to protein-chain synthesis—with or without humans. As for the stalled-evolution hypothesis, I feel much more comfortable with the notion that each “thing” is (largely) a discrete entity. Why some evolving bacteria, cockroaches, and fish but not others? Is it somehow “fitter” to be a bacterium rather than a human in 2014? There are sizable hurdles there, imo. As an aside, can we indulge in the notion of a Platonic canon at the tenth, lol? 🙂 Peter:  > Why some evolving bacteria, cockroaches, and fish but not others?< I’m not sure I understand. If the question is why mutations aren’t possessed by every individual in a species, it’s just the way DNA works: mutations are random. If the question is why populations diversify and speciate, it depends on the degree to which they maintain contact as they split. > Is it somehow “fitter” to be a bacterium rather than a human in 2014?< Only if bacteria displace humans. If we’re not in competition, then no. Relative fitness is defined only among competing genotypes (see the Wikipedia link to “Fitness,” above). Okay, I’m gonna try one last time to sum up my objections to the probability argument. 1a. Whenever we describe the probability of an event, we do so in terms of a sample space. For example, when someone rolls two dice together, the chance of getting double sixes is 1/36, because the sample space includes 35 other combinations, all equally likely to occur. 1b. Current physics describes many cosmological configurations, all equally physically valid, the vast majority of which could not sustain intelligent life. In this sample space our universe is improbable, bordering on impossible. 2a. When we observe a spectacularly unlikely event that borders on the impossible, that can give us doubts about the way we’ve constructed the sample space. For example, if we dump out a bucket of dice, and they all come up 6, it’s a pretty fair bet that they were loaded, and not all configurations were in fact equally likely. (Or in your canon analogy, its low entropy suggests to us that it was composed by traditional canonic process, rather than by some stochastic one that would inflate the sample space.) 2b. Analogously, the improbability of our universe suggests a problem with the sample space. For you, the conclusion is that our universe wasn’t created by a random roll of the cosmic dice, but rather was designed with an eye toward this outcome. Another explanation would be that the cosmic dice have been rolled again and again, and this is the only outcome that we (as intelligent beings) could ever observe. From what I can tell, most physicists find this plausible (the debate now seems to be about “where” these other universes are). This improbability (by itself) is NOT evidence of multiple universes, nor of a designer. It just doesn’t weigh on the question either way. It’s analogous to a bucket of dice rolled by someone else in a room that we’re invited into *only* if/when all the dice come up six. Given that scenario, it doesn’t matter how many dice are in the bucket, or whether or not they’re loaded: the only result we will ever see is the one with all sixes. David:  1a. Yes, all p-values within a distribution will sum to one, but if we’re interested in rolling double sixes, 1/36 will be our focus for a single trial, though it might very well take 70 trials to get the desired result. 1b. 
Yes, and I’d phrase it this way: a single “trial” effectuated by the Big Bang yields a p-value so small such that the likelihood of some stochastic design of the current cosmological configuration (or even a configuration without human life) very quickly approaches, and is, for all intents and purposes, zero. 2a. Precisely. A bucket-of-sixes event strongly suggests an intervention of some kind; we do not presuppose the fact that we’ve witnessed some sort of unbelievably rare stochastic moment (i.e., [1/6]^n : n = the total die count). A bucket of 200 dice yields a p-value that approaches 2.35 x 10^-156. (Again, there are only 10^80 atoms in the observable universe.) The same logic, of course, is inferred when we unearth a double canon at the tenth; though, a canon’s p-value is much, much smaller than that of a bucket of sixes. 2b. As a theorist and mathematician, I’m saying, as we did in 2a, that there exists an intervention with respect to such minuscule p-values, that stochastic processes are a very poor explanation for our cosmological result. As a Christian, I believe that intervention involves an omniscient God, just like person X composed the impossibly unlikely canon (with, as you suggest, an incredibly low entropy) rather than a robotic arm pulling pieces of paper from three boxes. Also, mathematics has only proven eleven dimensions, yet that does not simultaneously prove at least eleven “parallel universes.” Four of those dimensions, as you know, are firmly rooted within our present (single) universe. So, there’s no proof that, say, an infinite number of Big Bangs took an infinite number of stochastic cracks at generating our current cosmology. And even if that WERE the case, the math is still restrictive. Each Big Bang attempt would have a near-zero p-value for the current cosmology, and Bernoulli’s law of large numbers essentially guarantees such a near-zero p-value at an infinite number of trials. A single universe-trial does not involve a non-replacement p-value (e.g., pulling a marble out of a bag and putting it in your pocket); you don’t approach p = 1 at an infinite number of trials, though that seems to be a common mistake people make. It’s like the analogy I described earlier—that of a near-infinite number of earths, each with a single child trying to throw a ping-pong ball into a Dixie cup 60 yards away. The p-value for each discrete earth does not change—assuming uniform laws of physics and consistent variables (e.g., wind speed, topography, etc.)—and the earths are not working in tandem to reduce the improbability of the event…as would a single child who could throw 100 billion balls at once. Anyway, the theory as it is currently taught, however, is that a single Big Bang event (read: trial) created a stochastic chain that created the cosmology that surrounds us—like dumping a near-infinite number of cosmic buckets filled with fair dice and arguing that every die from every bucket lands on six. That’s impossibly unconvincing. We’re also assuming that these bucket rolls never have deleterious effects when twos, threes, and fives emerge. A true p-value for cosmology would have to include the likelihood of internecine stochastic combinations that would immediately end the process. So, there’s serious doubt as to whether the universe would even be “allowed” to get an infinite number of “bucket dumps” before we’re asked to enter the room. 
I guess I’m just perplexed, too, by the notion that we’re unwilling to give stochastic processes the benefit of the doubt when it comes to canons and bucket-dumps, but we’re more than willing to make them the bedrock of the most statistically improbable event(s) involved in creating the universe. In that limited sense, then, as the article queries, I do believe atheism to be irrational. Peter:  The analogy is flawed because our observation of canons and buckets is unrestricted: we can, in principle, observe any result in the sample space. Same with the Dixie cups: if we want to make the analogy work, then we’re not standing next to some arbitrary boy, watching him throw ping-pong balls. We’re a tiny creature that’s generated inside a Dixie cup the moment a ping-pong ball lands inside it. All that’s necessary to explain our existence is that there be enough boys, balls, and cups that it could plausibly happen at least once, in *some* trial. The p-value can be as low as we want for any single trial—the selection bias ensures that we can only ever observe the successful one. This isn’t just my crazy idea; it’s a fundamental principle of statistics. At this point I’ve explained it as clearly as I can, so if you still have a problem with it, it might be time to appeal to a higher court. > Anyway, the theory as it is currently taught, however, is that a single Big Bang event (read: trial) created a stochastic chain that created the cosmology that surrounds us […] That’s impossibly unconvincing.< I agree! As I mentioned above, most physicists seem to agree too, and since they noticed this problem, they’ve proposed various multiverse scenarios that provide an adequate number of “trials.” (This is different from the more well known “parallel universes” of some interpretations of quantum mechanics, which share the same physical properties.) Obviously, I understand virtually none of the real physics here, but it’s so much fun to grapple with the general conceptual outlines—as cool as any science fiction. I hope we now agree that a suitably large number of “trials” would solve this problem. You also seem skeptical that the universe would get that many tries, but I don’t see why not. The eleven dimensions of spacetime aren’t a problem, since more universes doesn’t mean more dimensions: you can “stack” infinite n-dimensional spaces in an (n+1)-dimensional space. (And anyway, that’s irrelevant in current models—see below.) You also mention “internecine stochastic combinations that would immediately end the process.” Could you elaborate on that? It seems like it could make sense in a cyclic model, with one universe at a time—but there are *plenty* of alternatives. So, if you’re curious what physicists say about this, here are a few theories I’ve come across—I will inevitably butcher them, but as always, there are better explanations on Wikipedia and elsewhere: (a) eternal inflation: the universe actually expands much faster than the speed of light, and different regions of spacetime are “far enough apart” from each other (in some sense) as to be “causally unconnected.” So they have different sets of parameters. This seems to be very popular these days. (b) black hole cosmology: each black hole is the boundary of a new “universe” that may have its own parameters. Not only does that imply that all the black holes in our universe are themselves baby universes, but it also implies that we ourselves are stuck in some other universe’s black hole! How metal is that?
(c) mathematical universe hypothesis: this one is so crazy that even its creator Max Tegmark claims not to believe it. The idea is that the fundamental level of reality isn’t particles, fields, or branes, but rather math itself. Every mathematical system is its own universe—not just a description of one. Honestly, this sounds kind of dumb and self-defeating to me, but Tegmark is a smart guy who has forgotten more math than I could learn in a lifetime. So hey, if he says it’s possible, that’s cool. As for your final question, which is probably at the heart of this, I prefer physical explanations because they’ve worked well in the past. Maybe they will break down at some point, and the only answer available will be “God did it”—but it hasn’t happened for any such question in the past, and it’s not clear to me that this is the exception. As for why it’s stochastic instead of directed in some way, that’s just the null hypothesis. There may well turn out to be reasons why some configurations are preferred, but AFAIK we have no reason to assume that at this point. Sorry, I should have been clear: I also agree that atheism is irrational. To my thinking, the rational position at this point (which is not to say the best!) is agnosticism. David:  A quick comment for now, but I’ll write more later: You keep insisting on observation as a necessary condition for my argument, but I’ve never made that assumption. Plantinga did—and you have with the bucket-of-dice metaphor—but I’m really only interested in “Platonic” events. We might never witness the boy on the nth earth trying to get his ball in the cup or the robotic arm reaching into the boxes (or the resulting composition!), but that has no bearing on the p-value of the trial. We don’t need to be there at all—in the cup or next to the boy or even on the same planet! My qualms with extraordinarily low-entropy p-values are distinct from whether or not we ever “observe” them, so neither selection bias nor Bayes’s theorem has any relevance with respect to my arguments. These points, I thought, were obvious because the bulk of our discussion has involved p-values of wholly unobservable events (e.g., protein-chain synthesis after the Big Bang, etc.), but perhaps I should have been clearer. As for dimensions, I agree…and, as I think you’d agree, too, more dimensions don’t necessarily prove the multiverse, which, some physicists say, is simply the union of all “parallel” universes (as opposed to the “forked” theory proposed by QM). Physicists also suggest such universes might very well have different physical constants, which doesn’t help us much when we’re talking about p-values with respect to the current cosmology. I don’t believe a larger sample space gets us there either. (1) There’s no evidence for the multiverse (or forking parallel universes), (2) the vast majority of the enlarged sample space would involve sterile universes incapable of sustaining any kind of life, (3) an infinite sample space means infinite exponentiation on (0,1) that approaches not one but zero, and (4) current cosmological evidence (e.g., cosmic background radiation (CBR), etc.) only supports a single trial. I’m familiar with the first two theories you mentioned, and I know inflation is very popular because it answers a lot of questions, including (1) the “flatness problem”—a feature of the permissible range for Omega values (the ratio of gravitational to kinetic energy)—and (2) CBR homogeneity. 
As for cosmological conflicts, there are many…everything from problems of initial inhomogeneity and UV radiation to “permittivity” of free space and interfering cross-reactions within the process of amino acids forming peptide bonds. I guess the “null hypothesis” is the heart of the matter. Though I would never suggest very low p-values are, in and of themselves, proof of design, I feel such extreme improbabilities strongly suggest a designer—or, at least, strongly argue against chance. There’s something extraordinarily unnerving about the idea that the stochastic process involved in generating the human genome equals a range of something like (1 in) 4^-180^110,000 to (1 in) 4^-360^110,000. Numbers like that suggest something beyond the merely improbable…well beyond canons, buckets of dice, bouncing footballs, and even protein chains. Peter:  > (1) There’s no evidence for the multiverse (or forking parallel universes), (2) the vast majority of the enlarged sample space would involve sterile universes incapable of sustaining any kind of life.< Well, anthropic reasoning more or less interprets (2) as the reply to (1). If universes that can sustain life are statistically unlikely, and we know there is at least one that can (ours), there are probably many others that can’t. And at least some physicists think there may be other ways to observe universes, as crazy as it sounds. So I don’t think these are evidence against it. > (3) an infinite sample space means infinite exponentiation on (0,1) that approaches not one but zero< Are you sure about that? If p is the probability that any randomly-generated universe can sustain life, and n is the number of universes, then the probability that *at least one* can sustain life is 1 – (1 – p)^n, which approaches 1 as n goes to infinity. I think you’re taking p^n, which is the probability that *every* universe can sustain life. (And note that I’m not doing any non-replacement stuff either—p is the same for every trial.) > (4) current cosmological evidence (e.g., cosmic background radiation (CBR), etc.) only supports a single trial. < AFAIK, none of the multiverse models make any predictions about that. Every universe will have its own background radiation, and we wouldn’t expect it to “leak” from one to another. Unless I’ve missed something? > There’s something extraordinarily unnerving about the idea that the stochastic process involved in generating the human genome equals a range of something like (1 in) 4^-180^110,000 to (1 in) 4^-360^110,000.< It’s funny—I think we (as intelligent lifeforms) are pretty insignificant in the bigger picture, and it’s unnerving to imagine that this huge, empty universe would have been created just for us! Oops! I should have read before shooting my mouth off: there are at least some people who claim to have found evidence that our universe bumped into another one (an Arxiv paper linked from Wikipedia). But that’s pretty recent and (I think) controversial. David:  > Well, anthropic reasoning more or less interprets (2) as the reply to (1). If universes that can sustain life are statistically unlikely, and we know there is at least one that can (ours), there are probably many others that can’t. And at least some physicists think there may be other ways to observe universes, as crazy as it sounds. So I don’t think these are evidence against it. < Well, I’ve tried to frame my arguments in a way that bypasses anthropic reasoning. 
(1) means we don’t really need to think about a very large sample space, and (2) becomes irrelevant to our discussion of p-values that relate to our current cosmology. > Are you sure about that? If p is the probability that any randomly-generated universe can sustain life, and n is the number of universes, then the probability that *at least one* can sustain life is 1 – (1 – p)^n, which approaches 1 as n approaches infinity.  I think you’re taking p^n, which is the probability that *every* universe can sustain life. (And note that I’m not doing any non-replacement stuff either—p is the same for every trial.) < Well, I was giving cosmologists the benefit of the doubt and assuming the possibility that quantum fluctuations replicate the process involved in our current cosmology (thus, p^n), but if we’re only interested in “at least one” universe—and, perhaps, our current universe is a reification of that event, which might suggest the other universes do not sustain life—the formula is almost too convenient to be helpful; it states that every non-impossible event is guaranteed to occur (at least once) over an infinite number of trials. I’ll leave it to you to imagine at-least-once events that offer fatal contradictions. > AFAIK, none of the multiverse models make any predictions about that. Every universe will have its own background radiation, and we wouldn’t expect it to “leak” from one to another. Unless I’ve missed something? < Well, assuming the multiverse requires space to exist, those universes couldn’t exist apart from the BB. That is, current cosmological models suggest the Big Bang created space (i.e., before it, there was REALLY nothing); inflationary models allow space to travel faster than the speed of light in order to preserve relativity theory, and AFAIK quantum fluctuations can theoretically create mass without the cosmic fireworks of CBR. Conserving the laws of thermodynamics, though, means the duration of these “spontaneous” masses is incredibly small and unobservable (as are the masses). > It’s funny—I think we (as intelligent lifeforms) are pretty insignificant in the bigger picture, and it’s unnerving to imagine that this huge, empty universe would have been created just for us! < Unless you think about a fantastically creative and loving God who chose to have a relationship with us, despite our incredible insignificance! > Well, assuming the multiverse requires space to exist, those universes couldn’t exist apart from the BB. < Space is complicated. Like I said, our universe is “causally unconnected” to other universes—either by the event horizon of a black hole, or by some of the more subtle general-relativistic stuff in eternal-inflation theory. So no, we wouldn’t share the same Big Bang in any observable sense. As for how fluctuations produce inflationary “bubbles,” my initial guess was that they were a different kind of fluctuation from the standard vacuum uncertainty, which is how I found the paper linked above. The calculations start on p. 2, but they smacked me down pretty hard. I really do want to learn this stuff some day… sigh. >There’s something extraordinarily unnerving…it’s unnerving to imagine….Unless you think about a fantastically creative and loving God.< Right. I just meant that “unnerving” in the eye of the beholder. Sorry for constantly referring to the Neil Manson article—I get that you don’t have the time or inclination to read these things—it’s just hard to summarize. 
Here’s the relevant bit from the abstract, which follows a defense of the “design argument”: “Lastly, some say the design argument requires a picture of value according to which it was true, prior to the coming-into-being of the universe, that our sort of universe is worthy of creation. Such a picture, they say, is mistaken, though our attraction to it can be explained in terms of anthropocentrism. This is a serious criticism. To respond to it, proponents of the design argument must either defend an objectivist conception of value or, if not, provide some independent reason for thinking an intelligent designer is likely to create our sort of universe.” The full argument appears in pp. 172–175. This is why I keep referring to anthropocentrism. David:  I do have the inclination…just not as much time, which is why I haven’t responded to date. All apologies. > I get that you want to bypass the anthropic principle, but as long as we’re reasoning from our actual experience in the universe, you can’t. It’s a general principle of reasoning about observations. If you want to talk about “Platonic events” divorced from our human perspective, that’s great, but then the unlikeliness of our universe doesn’t demand explanation: any other universe would have been equally unlikely, and there’s nothing obviously special about ours a priori. < I’m not sure why I can’t, lol. I’m interested in discussing the stochastic probability of protein chains and peptide bonds and DNA sequencing subsequent to the BB (but before our emergence onto the scene as conscious, observant beings), all of which are wholly unobservable events. In fact, most of the probabilities in the universe might be considered “Platonic” (i.e., unobserved)—from the imminent explosion of distant quasars and formation of black holes to the 46.3 percent probability that the tree in the wooded midway on my way to work will be uprooted at wind speeds exceeding 72.4 mph. That approach doesn’t necessarily demand anything, but discussing the origin of the universe in the absence of a designer places the burden on physics and mathematics (specifically, probability theory)…and THAT does demand investigation. > While I’m sympathetic to the complaint that this multiverse stuff is “too convenient”–that it explains everything equally well–the divine-creator explanation has the same flaw. As you may know, there are some physicists who consider multiverse theories “unscientific” for precisely that reason. < True. The difference, though, is that faith does not require proof…despite Hitchens’s claims to the contrary. In fact, the Bible says faith IS the proof (Hebrews 11:1). I don’t mind “convenient,” but “logically easy” rubs me a bit the wrong way. (The “at least one” (ALO) formula for infinite n is an example.) I understand some might consider faith to be “logically easy,” but I’m comfortable with the notion that faith is completely different from, and directly opposed to, science. > The question of evidentiary support is well taken. There doesn’t seem to be consensus on what would even count as evidence for a multiverse–though that’s hardly a unique scenario for scientific theories, including some that have gone on to be vindicated (e.g., evolution, quarks, cosmic inflation). So no, I don’t think multiverse theories are self-defeating, at least not at the point you identify, nor do I think it’s driven by a refusal to accept the divine creator. It’s about a commitment to natural explanation before supernatural. 
< I think it’s a bit problematic to lionize natural explanations as a feature of coeval scientific understanding. We’ve seen many times throughout history that science very often “got it wrong” in light of new evidence. That’s not to say science isn’t incredibly valuable and insightful—it is—but it is finally limited in its capacity to explain events based upon restricted observation(s) and imperfect knowledge. Again, in the absence of a designer, we have no choice but to follow such a path, but that’s why we need to be careful. Many of these theories exist without any serious physical evidence. That’s fine, but that’s also why I’m focusing on abstract p-values because they offer a more substantive and dispassionate line of inquiry with respect to “natural explanations.” > I don’t follow: doesn’t “non-impossible” preclude “fatal contradiction”? Anyway, the anthropic argument doesn’t call for an infinite number of universes, just enough for there to be one that sustains life. If indeed there are infinite universes (as some physicists think), then the situation is even worse than you describe: “anything that can happen will happen an *infinite* number of times.” < Not at all. (And, again, I’m not making an anthropic argument.) Here’s a “trivial” example: Assume we can establish a p-value involving whether or not God created the universe. Motive for the creation is irrelevant. (Perhaps this number is simply the complement of the probability that stochastic processes “created” the universe.) According to 1 – (1 -p)^n for an infinite n, even if that value is vanishingly small—and, as an agnostic, I imagine you’d argue p > 0 (otherwise, you’d be an atheist)—then it is the case that p approaches 1 as n approaches infinity. And if, according to Guth, every event (that can occur) will occur an infinite number of times, then that’s essentially the same thing as saying every possible event (E) will occur within each of the infinite universes (if not multiple times within a single universe). This is why I invoked p^n = 1, which contradicts the calculation that infinite exponentiation on (0,1) approaches zero. Anyway, there are different metrics we can use, too.  For example, I think the Poisson distribution $P(x;\mu) = e^{-\mu}(\mu^x)(x!)^{-1}$ might be a better p-measure for the stochastic probability of the universe; here, the p-value approaches zero as the mean of the sample space approaches zero, even for an infinite x. That seems much more intuitive to me: for an extremely small p-value for a single trial, which, in a real way, becomes the mean case for the stochastic probability of our current universe, the probability of future successes decreases. This is the opposite of the mechanism behind the ALO equation where the probability increases as the number of trials increases. Another model I prefer involves curves like exponential decay (and equations like it); for example, the simple non-homogeneous differential equation $dp/dt = te^{-3t}-3p$ is one such (general) reification of a curve modeling (what I believe is the basic notion of) probability over time subsequent to the BB. For the sake of completeness, the general solution is $dp/dt + 3p = te^{-3t}\\ (dp/dt + 3p)e^{\int3 dt} = te^{-3t + \int 3 dt}\\ \int (d(pe^{3t})/dt) dt = \int t dt\\ p(t) = (t^2e^{-3t})/2 + \delta e^{-3t}$ where the Euclidean metric $[p(t_n)^2 - p(t_{n-1})^2]^{1/2} \rightarrow 0$ as t approaches infinity, which is what we want. 
This is intuitive: if after the BB, space expands faster than the speed of light, pulling matter behind it (though not quite at the SOL) in a nearly homogeneous way, it seems incredibly unlikely that, over time, the necessary material would have the opportunity to create protein chains and the like, especially when the force and velocity of Guth’s slow-roll inflation inexorably pushes that material further apart through the expansion. > Space is complicated. Like I said, our universe is “causally unconnected” to other universes–either by the event horizon of a black hole, or by some of the more subtle general-relativistic stuff in eternal-inflation theory. So no, we wouldn’t share the same Big Bang in any observable sense. < Whence, then, the space for those universes? Are we to assume that an infinite number of BBs (i.e., quantum fluctuations) begat the infinite number of universes? That’s more difficult to believe than (an infinite number of) fluctuations within our own universe as the catalyst for the multiverse; in fact, Guth’s paper suggests that very notion. Anyway, I’d much prefer an investigation of objective p-values rather than debating diachronic theories of cosmology. I think part of the issue is that we don’t fully comprehend the magnitude of the improbabilities with which we’re dealing. Jeff:  A few quick observations on this discussion: 1. I’m enjoying it immensely, while understanding only some of it, and being completely unable to participate in it. 2. It is taking place on the Internet. 3. It is completely civil and, until this moment, focused on the issues of the discussion and not observation of the discussion itself. 4. There are no cats anywhere in this discussion. Not even Schrödinger’s — the poor thing(s). 5. The convergence of factors 2, 3, and 4 above — a civil discussion on the Internet without the inclusion of cats — seems so highly improbable, involving opposing forces of such strength able to co-exist only in conditions at or immediately following the BB (I can’t do the math, but y’all can do it in your sleep, apparently) that I hereby postulate that this discussion is not actually taking place. Now, please, continue. Peter:  Thanks, Jeff! I can’t believe you (and at least two others) read this far. I’ve learned a lot over the last couple weeks. Dave, we need to find a publisher. I eagerly await your further thoughts! In the meantime, here’s my bid for Longest Post So Far. Sorry in advance. > Anyway, I’d much prefer an investigation of objective p-values rather than debating diachronic theories of cosmology. It’s one thing to assign a p-value, and another to interpret it as evidence for design. > I think part of the issue is that we don’t fully comprehend the magnitude of the improbabilities with which we’re dealing. I think we both understand them fine, at least mathematically. (Yeah, it’s probably impossible to understand them intuitively.) We disagree on what they imply. You say the p-value is so low that our universe couldn’t be a random accident. I say no p-value, no matter how small, could ever make that case by itself: our universe is no more or less likely than any other. Here’s an illustration. I’ll ask my computer for ten random integers: RANDOM ; done 1967 3496 15853 19457 29526 16109 16229 15867 14059 223 15303 [Edit: Oops! That was eleven. And that’s why I’m not a professional programmer. ] That sequence is incredibly unlikely: the p-value is just 1 in 10^45. 
(In other words, if every person on Earth ran that command a billion times a second, it almost certainly wouldn’t come up again before the sun engulfed our planet.) But that, in itself, gives no reason to suspect it was specially chosen. For us to make that leap, it needs to have some properties that a designer would care about—in terms of our older examples, it would have to be the equivalent of a double canon or a bucket of sixes. In those examples, we recognize canons and high rolls as valuable in the domains of music and gaming. (Manson gives the analogy of poker, where an accusation of cheating is more persuasive if the cheater ends up with a strong hand.) Perhaps there is something special about this sequence, which would be ruined by even a slight change to any number. We still can’t claim that it was specifically chosen without assuming that the chooser also knows and cares about this special quality and would thus be motivated to choose this sequence over any other. So this is what I meant by saying the low probability of our universe doesn’t inherently “demand explanation.” We agree that our universe appears to be uniquely tuned for life and extremely improbable, but we disagree about the next step. In order to argue for design, we have to assume that life is inherently valuable within the domain of universe-creation, just like canons and sixes are in music and dice games. But (as Neil Manson points out) it’s hard to find people who explicitly defend that assumption, probably because it’s a bit embarrassing and not that easy to do without assuming some amount of theology and thus rendering the argument circular. I found one defense by Richard Swinburne. I haven’t gotten to read it all the way through—the Google Books preview cuts out just as he gets to the multiverse issue—but I’m very curious. *** AND NOW, Some Remaining Ancillary Quibbles *** (no obligation to discuss this stuff if you’re sick of it) > And if, according to Guth, every event (that can occur) will occur an infinite number of times, then that’s basically the same thing as saying some event E—actually, every event that could occur!—will occur within each of the infinite universes. < No, it’s totally different. Given infinite universes, even things that only happen in one universe out of a bajillion will still happen an infinite number of times. This makes calculating probabilities really annoying, as Guth says, but it doesn’t mean that everything is equally probable. There’s no contradiction here, and p^n is still irrelevant. > I doubt anyone would accept that as a “proof” of God’s existence, even though it makes perfect logical and mathematical sense, which is, of course, why the ALO equation is problematic at infinite n. < It only makes sense if divine creation is “an event that can happen” according to the laws of physics—in other words, if God is just another agent in the multiverse, subject to the same laws as everything else. I don’t see why anyone should have a problem with that. To really cause a problem for this probability thing, we’d need an event that “can happen” once, but not an infinite number of times, even in different universes. (Note that I’m not saying physics is incompatible with divine creation, only that physics doesn’t explain divine creation.) > Are we to assume that an infinite number of BBs (i.e., quantum fluctuations) begat the infinite number of universes? 
That’s more difficult to believe than (an infinite number of) fluctuations within our own universe as the catalyst for the multiverse…< Yeah, I think the first one is right. But what makes it harder to believe than the second? As far as I can tell, the only difference between them is that the second assumes that our universe is the “grandfather” from which all others spring, rather than one of the later generations. That’s a big assumption, and I’m not sure how it could be justified. As for the rate of universe-generation, the exponential-decay model sounds plausible enough to me (that’s what you were using it for, right? I wasn’t sure), though I’d prefer a model more motivated by the actual physical theory. But even if a single universe does gradually lose its ability to create new ones, that doesn’t put an upper bound on the total number of universes out there, given sufficient time. (Think bunnies.) So it doesn’t limit the explanatory ability of eternal inflation. All that said, I actually have no idea what eternal-inflation theory would say about the ur-origin of the grandfather universe. For all I know, it may dispense altogether with the idea of an origin, and just let every universe bubble out from another one, turtles all the way down! Or maybe that’s ludicrous. If only we had some physics-literate friends who were patient enough to wade through these ramblings. Jim: I’ve followed but steered clear of participating in this conversation. I did want to put out there though that isn’t this something that we ultimately found a philosophical answer to in modernity? If anything, the 57 comments of back and forth reinforce the idea that all we can agree upon is the notion that there’s an innate uncertainty on the subject. It’s like we’re all holding out that we’ll some day find the answer that justifies our own personal belief through science when the only thing science has really taught us is the complexities of the universe(s) in its(their) entirety will always fall beyond the capacity of human reason. Wouldn’t the pursuit of knowledge be bettered if we all called a truce? On Pi day, can’t we all just get along and agree that we’ll never be able to calculate that last digit of infinity? If there’s a God that created our physical realm, clearly he doesn’t intend for us to ever find the end of the rainbow is all I’m saying. David:  >>I think we both understand [astronomically low p-values] fine, at least mathematically. (Yeah, it’s probably impossible to understand them intuitively.) We disagree on what they imply. You say the p-value is so low that our universe couldn’t be a random accident. I say no p-value, no matter how small, could ever make that case by itself: our universe is no more or less likely than any other.<< Well, considering all the variables involved, I’m saying it’s very, very highly unlikely chance is responsible for the complexities and details of the universe. And I think the fact that it’s so difficult to understand intuitively very low p-values plays an important role in this. Consider this narrative by James Coppedge from “Evolution: Possible or Impossible?” “The probability of a protein molecule resulting from a chance arrangement of amino acids is 1 in 10^287. A single protein molecule would not be expected to happen by chance more often than once in 10^262 years on the average, and the probability that one protein might occur by random action during the entire history of the earth is less than 1 in 10^252. 
For a minimum set of the required 239 protein molecules for the smallest theoretical life, the probability is 1 in 10^119,879. It would take 10^119,841 years on the average to get a set of such proteins. That is 10^119,831 times the assumed age of the earth and is a figure with 119,831 zeroes, enough to fill sixty pages of a book this size.”

“Take the number of seconds in any considerable period. There are just 60 in a minute, but in an hour that increases to 3,600 seconds. In a year, there are 31,558,000, averaged to allow for leap year. Imagine what a tremendous number of seconds there must have been from the beginning of the universe until now (using 15 billion years…). It may be helpful to pause a moment and consider how great that number must be. When written down, however, it appears to be a small figure: less than 10^18 seconds in the entire history of the universe. The weight of our entire Milky Way galaxy, including all the stars and planets and everything, is said to be ‘of the order of 3 x 10^44 grams.’ (A gram is about 1/450th of a pound.) Even the number of atoms in the universe is not impressive at first glance, until we get used to big numbers. It is 5 x 10^78, based on present estimates of the radius at 15 billion light years and a mean density of 1/10^30 grams per cubic centimeter. Suppose that each one of those atoms could expand until it was the size of the present universe so that each had 5 x 10^78 atoms of its own. The total atoms in the resulting super-cosmos would be 2.5 x 10^157. By comparison, perhaps the figure for the odds against a single protein forming by chance in earth’s entire history, namely, 10^161, is now a bit more impressive to consider. It is 4,000 times larger than the number of atoms in that super universe we just imagined.”

…and this:

“Imagine an amoeba. This microscopic one-celled animal is something like a thin toy balloon about one-fourth full of water. To travel, it flows or oozes along very slowly. This amoeba is setting forth on a long journey, from one edge of the universe all the way across to the other side. Since the radius of the universe is now speculated by some astronomers to be 15 billion light years, we will use a diameter of double that distance. Let’s assume that the amoeba travels at the rate of one inch a year. A bridge of some sort – say a string – can be imagined on which the amoeba can crawl. Translating the distance into inches, we see that this is approximately 10^28 inches. At the rate of one inch per year, the tiny space traveler can make it across in 10^28 years.

The amoeba has a task: to carry one atom across, and come back for another. The object is to transport the mass of the entire universe across the entire diameter of the universe! Each round trip takes 2 x 10^28 years. To carry all the atoms of the universe across, one at a time, would require the time for one round trip multiplied by the number of atoms in the universe, 5 x 10^78. Multiplying, we get 10^107 years, rounded. That is the length of time for the amoeba to carry the entire universe across, one atom at a time. But wait. The number of years in which we could expect one protein by chance was much larger than that. It was 10^171.
If we divide that by the length of time it takes to move one universe by slow amoeba, we arrive at this astounding conclusion: The amoeba could haul 10^64 UNIVERSES across the entire diameter of the known universe during the expected time it would take for one protein to form by chance, [even] under those conditions so favorable to chance. But imagine this. Suppose the amoeba has moved only an inch in all the time that the universe has existed (according to the 15-billion-year estimate). If it continues at that rate to travel an inch every 15 billion years, the number of universes it could carry across those interminable miles is still beyond understanding, namely, more than 6 x 10^53, while one protein is forming.  Sooner or later our minds come to accept the idea that it’s not worth waiting for chance to make a protein. That is true if we consider the science of probability seriously.” I think that helps a bit with our intuition! >>Here’s an illustration. I’ll ask my computer for ten random integers:<< LOVE this example, but I’m not sure how a random integer string is any different from (essentially) rolling eleven dice. It seems like you’re arguing we should disregard (the import of) very low p-values because (1) very low p-values exist and (2) they’re ubiquitous (i.e., we can find them everywhere; thus, they offer no substantive value as highly improbably events). [Edit: I think these are Leon’s main objections, too.]  If I’m understanding you correctly, your string of integers is a random-but-meaningless event (for us) primarily because it cannot distinguish itself—or, rather, we cannot distinguish it—from any other random string (i.e., its “meaning”). (Let’s assume we wouldn’t get a subset of the Fibonacci sequence or something recognizable or meaningful.) I think that’s what you were saying with respect to a “[property]…a designer would care about.” So, the question then becomes: How do we assign meaning to p-values—on the order of double canons and buckets of sixes—without appeals to anthropic, fit-for-life arguments? I’ve thought about it, and I just don’t know the answer to that question. I am convinced, though, there is one! It’s clear, for example, that temporal perspective matters—ex post facto vs. a priori quantifications of probability; also, the p-value of a bucket roll means one thing when it represents the mathematics of ANY one of the possible bucket rolls (B), given as $\forall x p_x = (1/6)^n$ but, as I’m sure you would agree, it means another as the probability of a specific constellation of dice (p_i), even though $p_i = p_x = (1/6)^n$ because a bucket of sixes, E_6, is an element of the set of all possible dice constellations: $E_6 \in B : |B| = \Gamma(n+f)[\Gamma(f)(n!)]^{-1}$ where f is the number of faces on a single die. (This equation provides a cardinality that eliminates all repetitions.) But why can’t we appeal to statistical significance as the “domain” of probability measurements? It seems awkward to suggest the stochastic cosmological p-value—as incredibly and infinitesimally small as that number is—wouldn’t satisfy, say, the 5-sigma measure for rejecting the null hypothesis. Perhaps the formation of protein chains should be equated with Manson’s high poker hand or a bucket of sixes without appealing to anthropic principles. I don’t see that as being unreasonable. >>No, it’s totally different. Given infinite universes, even things that only happen in one universe out of a bajillion will still happen an infinite number of times. 
This makes calculating probabilities really annoying, as Guth says, but it doesn’t mean that everything is equally probable. There’s no contradiction here, and p^n is still irrelevant.<< Well, if we’re supposed to exclude all the universes where the event doesn’t occur, then that falls into the “logically easy” category. It’s as if I said I’m going to roll an infinite number of sixes—just ignore any roll that isn’t a six. The only requirement for that logic to work is that I keep rolling the die. The p-success of every event in that scenario equals one (even if the constellation of universes changes for each event); we just exclude the results we don’t like. Of course, we know that $p^{\infty \pm m} = p^{\infty} \rightarrow 0$, so, in that sense, then, exceptions don’t even really apply. >>It only makes sense if divine creation is “an event that can happen” according to the laws of physics–in other words, if God is just another agent in the multiverse, subject to the same laws as everything else. I don’t see why anyone should have a problem with that. To really cause a problem for this probability thing, we’d need an event that “can happen” once, but not an infinite number of times, even in different universes. (Note that I’m not saying physics is incompatible with divine creation, only that physics doesn’t explain divine creation.)<< I think my “proof” suggests such a singular event. An event that completely alters (or destroys) the universe would also be another example. (I’ve read this is theoretically possible.) Also, I don’t think the Platonic, universe-by-God p-value is influenced in any way by whether or not God is subject to the laws of physics. (If He is, then He must not exist.) Either God did or didn’t create the universe: as a historical probability, clearly p = 0,1. >>Yeah, I think the first one is right. But what makes it harder to believe than the second? As far as I can tell, the only difference between them is that the second assumes that our universe is the “grandfather” from which all others spring, rather than one of the later generations. That’s a big assumption, and I’m not sure how it could be justified.<< Well, it’s more difficult to believe for the same reason a single BB is difficult to believe: quantum fluctuations require (at least a vacuum that requires) space-time, which doesn’t yet emerge until after the BB/fluctuation begins. So, I guess it’s easier to imagine an infinite number of fluctuations within an already-established space-time continuum rather than an infinite number of impossible “space-less” fluctuations emerging outside of space-time. (And nothing in Guth’s article suggests a “space-less” fluctuation.) Consider this quote: “Quantum mechanical fluctuations can produce the cosmos,” said…[physicist] Seth Shostak….”If you would just, in this room,…twist time and space the right way, you might create an entirely new universe. It’s not clear you could get into that universe, but you would create it.” Oddly, Shostak’s claim presupposes both time and space in order to hold. >>As for the rate of universe-generation, the exponential-decay model sounds plausible enough to me (that’s what you were using it for, right? I wasn’t sure), though I’d prefer a model more motivated by the actual physical theory. But even if a single universe does gradually lose its ability to create new ones, that doesn’t put an upper bound on the total number of universes out there, given sufficient time. (Think bunnies.) 
So it doesn’t limit the explanatory ability of eternal inflation.<< What if the bunnies were expanding away from each other at cosmological speeds (i.e., 74.3 +/- 2.1km/s/megaparsec), lol?!  (One megaparsec equals roughly 3 million light-years.) Not even bunnies can copulate that quickly, lol. Eventually, each bunny would become completely isolated—the center of its own galaxy- or universe-sized space—where it could no longer procreate and repopulate. So, inflation perforce establishes the rate of “cosmological procreation” as inversely proportional to time. Peter:  > But why can’t we appeal to statistical significance as the “domain” of probability measurements? It seems awkward to suggest the stochastic cosmological p-value—as incredibly and infinitesimally small as that number is—wouldn’t satisfy, say, the 5-sigma measure for rejecting the null hypothesis. < Okay, if the null hypothesis is, “No other universes exist, and the cosmological parameters were pulled at random from any of the [whatever huge number] possibilities,” then yeah, that’s probably safe to reject. But in rejecting that, we’re still nowhere near an affirmative argument for design. There are plausible alternatives to both prongs of the hypothesis, both of which have been active areas of research for decades: to the first, an account of universe-generation in which our sort of universe is more likely than others (as Sean Carroll describes in the piece you linked); to the second, multiverse theories like eternal inflation. This is the familiar course of “God of the Gaps” arguments. They present a false choice between materialist and theist explanation, and paint God into an ever-diminishing corner: if the proof of the divine rests on what we don’t understand, then what happens when we understand it? I’m much more sympathetic to the Spinozist (I think?) thought that credits God for the astonishing regularity of the universe. (Side note: Coppedge gets the math all wrong… but that deserves its own thread, and is well documented elsewhere in any case.) > Perhaps the formation of protein chains should be equated with Manson’s high poker hand or a bucket of sixes without appealing to anthropic principles. I don’t see that as being unreasonable. < But the thing is, even OUR universe is practically devoid of protein chains. If you look at our universe as a whole, why would protein chains be its most important feature? And is there no other possible universe in which protein chains could be more common than in ours? Even if there’s not, can we assume that ANY being capable of universe-creation would necessarily prioritize protein chains above all other arrangements of matter and energy, under any possible set of physical laws? That’s what the design argument requires, and I don’t see how it can be justified. (Though this is basically what Richard Swinburne attempts in that chapter I linked a couple weeks ago, albeit with humans instead of protein chains.) Anyway, one thing I’ve really enjoyed about this discussion is that both sides counsel humility: “Science can’t explain everything” vs. “We’re not the most important thing in the universe.” > So, inflation perforce establishes the rate of “cosmological procreation” as inversely proportional to time. < Um, I think the bunny metaphor may have gotten away from us. Universes don’t copulate, and even if universe-creation slows over time within a single universe (a claim that requires a LOT more physics than you and I know), that still wouldn’t limit the number of universes. 
That’s what I meant to illustrate with bunnies—they get old and die, but their children keep reproducing. I suspect your intuition stems from conservation laws—if there’s only a finite amount of stuff, then it’s going to slow down as it spreads out—but I think that intuition may be mistaken here. Universe-creation doesn’t need to be conservative if they are isolated from each other (i.e., “We can’t get in”). And as long as universes generate more than one baby bubble universe on average, the process tends toward infinity. I’m guessing your next question will be “If the baby universes aren’t inside the parents, then where are they?” [edit: oops—that’s referring to a sentence I deleted! In short, I suspect it’s wrong to think of baby universes as contained within their parents.] Honestly, I haven’t a clue—but I’m guessing it’s the wrong question to ask. Going out on a limb, I’d guess that the path between universes is neither spacelike nor timelike (in the relativistic sense), and it’s kind of meaningless to try and specify “which dimensions” are involved. Suffice it to say they’re isolated from each other. [END] Standard # Why I Am Not a Libertarian “Decisions concerning private property and associations should in a free society be unhindered. As a consequence, some associations will discriminate….A free society will abide unofficial, private discrimination—even when that means allowing hate-filled groups to exclude people based on the color of their skin.” ~ Rand Paul (letter to the editor of the Bowling Green Daily News—May 2002) Let’s be clear: Rand Paul thinks business owners should be able to discriminate (i.e., “install racist policies”) against minorities in the name of private-property rights. Such discrimination represents, for Paul, a certain level of acceptable noise within the libertarian system. In a hand-waving defense of libertarianism, a friend suggested we cannot “legislate morality.” So true. With respect to secular government—as opposed to, say, a theocracy—natural law is perforce reduced to a set of accepted coeval standards. So, no, we can’t (constitutionally) legislate morality—although the history of constitutional amendments and the rulings of the SCOTUS, to a certain degree, suggest something of a moral touchstone—but we can legislate against premeditated, reckless, or immoral behaviors that negatively impact others. Refusing to serve African Americans within one’s privately-owned establishment, for example, is hardly synonymous with refusing to have an eclectic group of friends with whom you associate, and wielding the Constitution to defend exclusively white lunch counters as a defense against generic racist policies even fails an objective logic test (i.e., ignoratio elenchi). Because we cannot convince people to “live morally” with respect to others (i.e., the notion that choosing the moral path is, ipso facto, the reward for moral decision-making) is the reason we need legislation, “red lines,” to use Netanyahu’s familiar phrase, that regulate such behavior. Are you free to commit murder? Drive drunk? Run a stop sign? Cheat on your tax returns? Beat your children? Assist a suicide (in some states)? Bet against doomed investments actively sold to consumers? Sure you are, but the empowerment that emerges from individual autonomy will never stand as a justification for immorality. 
This is why we have laws in this country, a legislative feature of our democracy that implies the moral sentence of ignominy is simply insufficient to dissuade free moral agents from engaging in harmful behavior. If Goldman Sachs can generate 500 billion dollars by screwing a number of gullible clients, they will. And the shame they might feel as a result of their immoral-but-legal behavior—if they felt any at all—would be quickly washed away by the euphoric wave of, well, 500 billion dollars. This is when government needs to outline clear legislative restrictions. Even in a more general sense, though, there’s something profoundly disturbing about the entire libertarian project, something akin to the infantile absurdity of “object permanence” writ large upon the socio-macroeconomic landscape. Libertarians wish to eliminate the conscious recognition of significant social and economic inequalities by placing severely opaque, ambiguous, and patriotic-looking objects—things like “individual liberty,” “property,” “private ownership,” and, worse, constitutional “originalism”—in front of, as it were, the objects that should truly demand our collective attention: poverty, the insidiousness of plutocratic rule, wage stagnation, corporate avarice, discrimination, economic and opportunistic inequality, and the insufficiency of our secondary educational system. Put a different way, libertarianism is that Brobdingnagian picture of some Parisian locale—the kind of deceptive photomontage used by, say, inner-city street vendors—where visitors can pretend their fatuous photographic moment temporarily transports them to a different reality, a vision far more pleasant than the project-ridden dystopia conveniently hidden behind the photo. Such a tactic is even more embarrassing than the historically disingenuous attempts to distract us with those impossibly shiny objects—notions of the “American Dream” and “economic mobility”—that are waved at us by the trickle-down one-percenters, as if capitalizing the ‘a’ and ‘d’ increases the viability of such an ideal for the vast majority of Americans within the current economic infrastructure. Begin a discussion concerning disadvantaged families living in depressed areas without any real opportunity for economic mobility—an issue directly related to the sizable (and exponential) income-inequality gap—and the libertarian response is always the same, tired refrain: “No one is forcing them to live there!” “Other people have escaped, so why can’t they?!” “They’re struggling because they haven’t taken responsibility for their lives!” (That last statement sounds awfully familiar, Mr. Romney.) Such unacceptably tenuous, eyebrow-raising “arguments” really suggest an “object permanence” metaphor. If we shift our focus away from inequality and inequality of opportunity toward notions of “self-empowerment,” “freedom,” and “God-given autonomy,” as the libertarian project would have us do, then we replace genuine causation (imposed inequality) with a specious one (i.e., liberty—read: moral bankruptcy that results in substandard living as a function of one’s free, non-determined choices). 
That sleight-of-hand moment, my friends, the moment where we replace the real cause of inequality with a weakly-constructed one, represents the seamy underbelly of the libertarian project, an ideology clothed in patriotic garb and painted with roll-up-your-sleeves, red-white-and-blue-sounding slogans that cleverly evoke the American machismo of “manifest destiny.” This does not, however, represent the worst of libertarianism. The evil of the libertarian experiment resides not only in its desire to subversively enact a sort of bait-and-switch morality on the American people, but also in the way it models, in a number of abstract ways, the non-genocidal dangers of the National Socialist experiment: a desire to institutionalize public racism and discrimination under the guise of state-facilitated ownership; the individual-as-totalitarian state; rejection of general social contracts in the name of a fascist sense of “liberty”; support for economic internment camps, which replace barbed fences with economic immobility; and an economic “master race” that is fitter—in a fiscally eugenic sense—than the less fortunate and less educated.

An insidious project must begin, as it always does, with a popular-yet-specious allure. Libertarianism has chosen the buzzword “liberty.” To be sure, it is the principal desire of the libertarian project to effectuate a cognitive denial of the true cause(s) of inequality, and libertarianism secures this by suggesting its weakly-constructed alternative (free choice) is the real cause. That is, by effacing genuine causes of inequality, libertarianism is able to substitute its own prescriptions for inequality (e.g., indolence, lack of an entrepreneurial spirit, entitlement mindset, socialism, poor decision-making, etc.). In this way, libertarians have found a way to reject the very existence of inequality itself—and here is the important part—by claiming its presence within society is nothing more than a manifestation (and, with respect to penury, an agglomeration) of individual decisions to be poor and disadvantaged. In other words, if inequality is simply a “decision to be unequal,” then even (the visual evidence of) inequality can be dismissed by the very libertarian dogma (i.e., free will) that reifies it. How convenient. And when libertarians respond by arguing that inequality IS, in fact, a product of free choice, then that statement, ipso facto, represents the vindication of my argument; it becomes the very evidence that libertarians—like ignorant viewers flipping through someone’s old vacation photos—believe the faux reality of the Parisian photomontage means we’re really in Paris. That is the immorality—the unshirted evil—of libertarianism: that the “solution” to the problem of inequity merely resides in its cause, an individual’s free will.

# Schubert, Schenker, and propositional calculus…oh, my.

The complexity of Schubert’s lied Der Musensohn lies in its simple harmonic syntax within the deep-middleground Schicht. There the oscillation between G major and B major creates an ostensible problem for fleshing out an orthodox Urlinie: the B major prolongations do not act as either third dividers or applied dominants to the submediant; thus, they cannot be used as evidence for such voice-leading paradigms (i.e., neither D major nor E minor arrives after the B major prolongation). So how do the B major prolongations fit into the large-scale voice-leading infrastructure—and, if possible, into an orthodox Schenkerian background?
I’ve written a paper I hope to publish that reveals a very tenable and (hermeneutically) satisfying solution to this problem, but I’ve always been bothered by the above logical argument concerning the role of B major.  For example, is it logically valid—formally speaking—to say “if B major is a third-divider, then D major follows”?  The short answer is yes: If the antecedent is false, the entire conditional is always true.  But I’ve assumed B major isn’t a third-divider from the beginning, so I’ve established a situation where my conditional is guaranteed to be true.  No good.  What about the negation of the consequent (i.e., modus tollens)?  This tells us that because D major does not materialize, B major is not a third-divider.  But, again, this is derived from the conditional as designed.  Circumventing these issues involves a more complicated proof that does not derive the identity of B major with such immediacy: R = D major arrives (after B major), S = E minor arrives (after B major), P = B major is a third-divider, Q = B major is an applied dominant Assumptions and proof: ¬R, ¬S, [((P → R) ˅ (Q → S)) ˄ ¬((P → R) ˄ (Q → S))]  ˫  ¬P ˄ ¬Q __________________________ **1. ((P → R) ˅ (Q → S)) ˄ ¬((P → R) ˄ (Q → S)) A **2. ¬R A **3. ¬S A **4. (P → R) ˅ (Q → S) 1 (˄E) **5. ¬((P → R) ˄ (Q → S)) 1 (˄E) **6. | P → R H **7. | ¬R 2 RE **8. | ¬P 6, 7 MT **9. (P → R) → ¬P 6, 8 (→I) **10. P → ¬(P → R) 9 (TRANS) **11. | P H **12. | ¬(P → R) 10, 11 (→E) **13. | ¬(¬P ˅ R) 12 MI **14. | P ˄ ¬R 13 DM **15. P → (P ˄ ¬R) 11, 14 (→I) **16. | P H **17. | P ˄ ¬R 15, 16 (→E) **18. | ¬R 17 (˄E) **19. P → ¬R 16, 18 (→I) **20. ¬P ˅ ¬R 19 MI **21. | ¬ ¬P H **22. | P 21 DN **23. | ¬R 20, 22 DS **24. ¬ ¬P → R 21, 23 (→I) **25. P → R 24 DN **26. ¬P 9, 25 (→E) **27. | Q → S H **28. | ¬S 3, RE **29. | ¬Q 27, 28 MT **30. (Q → S) → ¬Q 27, 29 (→I) **31. Q → ¬(S → Q) 30 TRANS **32. | Q H **33. | ¬(S → Q) 31, 32 (→E) **34. | ¬(¬S ˅ Q) 33 MI **35. | S ˄ ¬Q 34 DM **36. Q → (S ˄ ¬Q) 32, 35 (→I) **37. | Q H **38. | S ˄ ¬Q 36, 37 (→E) **39. | ¬S 38 (˄E) **40. Q → ¬S 37, 39 (→I) **41. ¬Q ˅ ¬S 40 MI **42. | ¬ ¬Q H **43. | Q 42 DN **44. | ¬S 41, 43 DS **45. ¬ ¬Q → S 42, 44 (→I) **46. Q → S 45 DN **47. ¬Q 30, 46 (→E) **48. ¬P ˄ ¬Q 26, 47 (˄E) Q.E.D. Standard
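As a side check of the two elementary logical points discussed above, namely that a conditional with a false antecedent is vacuously true and that modus tollens is a valid entailment, here is a minimal brute-force truth-table sketch in Python (the helper `implies` is a local convenience of mine, not a library function):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

# (1) Vacuous truth: with the antecedent P false, P -> R is true whatever R is.
print(all(implies(False, R) for R in (True, False)))          # True

# (2) Modus tollens as an entailment check over all valuations:
#     whenever (P -> R) and (not R) both hold, (not P) holds as well.
print(all(not P
          for P, R in product((True, False), repeat=2)
          if implies(P, R) and not R))                         # True
```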
# 110 (number)

- Cardinal: one hundred [and] ten
- Ordinal: 110th
- Factorization: $2\cdot 5\cdot 11$
- Divisors: 1, 2, 5, 10, 11, 22, 55, 110
- Roman numeral: CX
- Binary: 1101110
- Hexadecimal: 6E

110 is a sphenic number, a pronic number, a Harshad number and a self number. At 110, the Mertens function reaches a low of -5. 110 is the sum of three consecutive squares: $110=5^{2}+6^{2}+7^{2}$.
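A quick, illustrative sanity check of these claims in Python (SymPy is assumed to be available for factoring and divisors; the digit-sum and Möbius helpers are written out by hand rather than imported):

```python
from sympy import factorint, divisors

n = 110

def digit_sum(m):
    return sum(int(d) for d in str(m))

def mobius(k):
    """Moebius function via factorization: 0 if k has a squared prime factor,
    otherwise (-1) raised to the number of distinct prime factors."""
    f = factorint(k)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

print(divisors(n))                                      # [1, 2, 5, 10, 11, 22, 55, 110]
print(sorted(factorint(n).values()) == [1, 1, 1])       # sphenic: product of three distinct primes
print(n == 10 * 11)                                     # pronic: k*(k+1)
print(n % digit_sum(n) == 0)                            # Harshad: divisible by its digit sum
print(all(m + digit_sum(m) != n for m in range(1, n)))  # self number: no generator m exists
print(sum(mobius(k) for k in range(1, n + 1)))          # Mertens function M(110); the text above says -5
```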
# Partial Differential Equation (solution needed, using symmetries)

Method to be used: symmetries

Equation: $u_t u_{x}^{2}-u_{xx}=0=F$, i.e. $u_t u_{x}^{2}=u_{xx}$. This equation has an infinite-dimensional algebra of infinitesimal point symmetries.

Let $M \subseteq \mathbb{R}$ and let $F : M \times \mathbb{R}^{1+p} \to \mathbb{R}$ define the differential equation. Standard formulation for the second prolongation:

$Pr^{(2)} X=\xi \frac{\partial}{\partial x} +\tau \frac{\partial}{\partial t}+\eta \frac{\partial}{\partial u}+\eta_{x} \frac{\partial}{\partial u_{x}}+\eta_{t} \frac{\partial}{\partial u_{t}} +\eta_{xx} \frac{\partial}{\partial u_{xx}}$

First $\eta_{x}$, $\eta_{t}$ and $\eta_{xx}$ must be defined:

$\eta_{t}=D_{t}\eta-u_{x}D_{t}\xi- u_{t}D_{t}\tau$

$\eta_{x}=D_{x}\eta-u_{x}D_{x}\xi- u_{t}D_{x}\tau$

$\eta_{xx}=D_{x}\eta_{x}-u_{xx}D_{x}\xi- u_{xt}D_{x}\tau$

Applying these definitions in the second prolongation gives:

$Pr^{(2)} X(F)=0+0+0+\eta_{x}\,2u_{t}u_{x} +\eta_{t}\,u_{x}^{2} -\eta_{xx}$

Working this out, and substituting $u_t u_{x}^{2}=u_{xx}$ into the above equation after inserting the different forms of $\eta$, gives the second prolongation. Based on the second prolongation and the sub-equations that follow from it, I determined $\eta$, $\xi$ and $\tau$:

$\xi(x,u,t)=(\frac{1}{4}C_1 u^2-\frac{1}{2} D_1 u-\frac{1}{2} C_1 t+H_1 )x+F(u,t)$

$\eta(u,t)=(C_1 t+C_2 )u+D_1 t+D_2$

$\tau(t)=C_1 t^2+2C_2 t+C_3$

Substituting these definitions into the "standard" formula for a vector field $X$,

$X=\xi \frac{\partial}{\partial x} +\tau \frac{\partial}{\partial t}+\eta \frac{\partial}{\partial u}$

results in

$X=((-\frac{1}{4} C_1 u^2-\frac{1}{2} D_1 u-\frac{1}{2} C_1 t+H_1 )x+F(u,t)) \frac{\partial}{\partial x} +(C_1 t^2+2C_2 t+C_3 ) \frac{\partial}{\partial t}+((C_1 t+C_2 )u+D_1 t+D_2 ) \frac{\partial}{\partial u}$

This results in the following infinitesimal symmetry generators:

$X_1=((-\frac{1}{4} u^2-\frac{1}{2} t)x) \frac{\partial}{\partial x} +t^2 \frac{\partial}{\partial t}+ut \frac{\partial}{\partial u}$ (where $C_1 = 1$)

$X_2=2t \frac{\partial}{\partial t} +u \frac{\partial}{\partial u}$ (where $C_2 = 1$)

$X_3=\frac{\partial}{\partial t}$ (where $C_3 = 1$)

$X_4=-\frac{1}{2}ux\frac{\partial}{\partial x}$ (where $D_1 = 1$)

$X_5=\frac{\partial}{\partial u}$ (where $D_2 = 1$)

$X_6=x\frac{\partial}{\partial x}$ (where $H_1 = 1$)

Up to this point it has been checked and concluded to be correct. What to do? Using these symmetry generators, I have to find solution(s) to the partial differential equation stated at the top. The goal is to find, for every generator (where possible), the solution that the symmetry gives for the PDE. I do not know how to do this, and the internet has not yet been helpful to me.

• What do you mean by ETA and XI? What are the initial conditions for your main PDE? It is not clear what the main goal is here.
• Thank you for your reply! I have edited my question; I hope this will provide the needed information to make it clear what I need. :)
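One standard recipe, sketched here only as an illustration (assuming Python with SymPy; the names `pde` and `G` are placeholders of mine): for each generator, compute its invariants, write $u$ in terms of them, and substitute back so that the PDE collapses to an ODE. For $X_3=\partial/\partial t$ the invariant ansatz is a time-independent $u$, which forces $u_{xx}=0$; for $X_2=2t\,\partial/\partial t+u\,\partial/\partial u$ the invariants are $x$ and $u/\sqrt{t}$, giving the similarity ansatz $u=\sqrt{t}\,G(x)$.

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
c1, c2 = sp.symbols('c1 c2')
G = sp.Function('G')

def pde(u):
    """Left-hand side of the equation  u_t * u_x**2 - u_xx = 0."""
    return sp.diff(u, t) * sp.diff(u, x)**2 - sp.diff(u, x, 2)

# X_3 = d/dt: invariant solutions are time-independent, so the PDE reduces to
# u_xx = 0, i.e. u = c1*x + c2.  Check that this really solves the PDE:
u3 = c1 * x + c2
print(sp.simplify(pde(u3)))               # 0

# X_2 = 2t d/dt + u d/du: invariants x and u/sqrt(t), so try u = sqrt(t)*G(x)
# and see what ODE the function G has to satisfy.
u2 = sp.sqrt(t) * G(x)
print(sp.simplify(pde(u2) / sp.sqrt(t)))  # G*G'**2/2 - G'' : an ODE in x alone
```

The other generators, and linear combinations of them, can be handled the same way wherever an invariant ansatz exists, which is presumably the "where possible" caveat in the question; each such ansatz reduces the PDE to an ODE in a single variable.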
# Semantics: alternative word for long-ranged interaction? [closed]

I am working on wording for a report. I need a word to describe a long-ranged interaction that is constant in strength. But I am aware that people sometimes use 'long-ranged' to mean a strength decaying as $1/r$. I want to describe a constant interaction strength. Any suggestions for intuitive wording?

• Such semantic soft-questions are better asked in e.g. the chat room. Jan 16, 2015 at 12:02
• Question was closed as I was finishing this answer... Let's coin a new word for this concept: Anatelic: from ana- (without), and tele (distance). You define it once, and then use it. It doesn't exist (yet) and if your idea takes off, it will be known forever as "your" word. Jan 16, 2015 at 13:26

- interaction that does not depend on $r$
- $r^0$ interaction
- ...

Of these, the top one would be my favourite; it sounds like a pretty unusual interaction that does not depend on $r$. If I were writing, I would use the top term and explain it carefully the first time, as it sounds so unusual to me, and maybe occasionally use one of the other expressions. I agree that 'long-range' does not really fit the bill here.