# Rotary Piston

by emiltr | Tags: engine, piston, rotary

P: 21 Thank you Jack, this is useful and I will have to study it. My conclusion at first sight: the most efficient design is the bent-axis motor. I think the rotary piston can reach that level of performance. Its main advantages are simplicity, reliability, and low price. I also believe it is more linear: its performance should not depend strongly on pressure, rpm, or volume. But it has to be built and put on a test stand; until there are measurements, nothing can be said for sure. If I understood well, none of the existing machines offers both high flow and high efficiency, which is why I asked about turbines. I think the rotary piston can deliver flow like a turbine with efficiency like a reciprocating pump. A rotary piston can be made very large: all movements are rotations, and there are only a few parts.

P: 21 I am confused, or I do not understand well. Everything I found on efficiency relates it to an ideal model, yet says nothing about the ideal model itself. Where can I find an energy balance? Something like this: in a hydropower plant, the energy available in time t is E1 = Qght, where Q is the mass flow, g is the gravitational acceleration, h is the water height, and t is time. The energy lost is E2 = QV^2t/2, where V is the water speed at the outlet. The power ceded to the system corresponds to E1 - E2, and the efficiency is (E1 - E2)/E1 x 100, which in general depends on the water height as follows... For a pump alone: the energy received at the shaft is ..., the amount of water pumped is ..., from a height of ..., so the energy obtained is ..., and the efficiency is ...

P: 562
Quote by emiltr: Where can I find an energy balance? [...]
Pump power calculator: http://www.engineeringtoolbox.com/pu...wer-d_505.html Remember that the equivalent pressure P from a height h is:
$$P=\rho g h$$
So any pressure can be related to a "theoretical height".

P: 21 I think a specialist with the means to build it could complete the "rotary piston" in weeks; alone, it would take me years. Too long! A former colleague, a mechanical engineer (a sewing-machine specialist), made the execution drawings in 20 hours; I cannot do that myself. The execution you see in the video is unacceptable. He said that, to work correctly, it must be designed by a specialist with experience in pump design. As I said, the most important thing for me is to see the "rotary piston" running; for that I will give everything I have. I hope someone is interested and will report back if they get results.

Attached Files: Gabarit.pdf (298.9 KB), Executie.pdf (287.1 KB)

P: 21 A four-stroke engine has the following losses: 30% of the energy is lost through cooling, 40% is lost through the exhaust gases, and 8% through mechanical losses and incomplete combustion; only 32% of the energy goes to the wheels. I am trying to eliminate these losses. The first step is to develop a compressed-gas source at 5 atm: a pressure low enough that no cooling is needed and exhaust losses are minimal.
Put a burner in the chamber, well insulated (a burner like those in heating systems or water heaters). The intake mixture is drawn in by a rotary piston and compressed into the combustion chamber. (This is the principle; I think the air will be compressed and the fuel injected.) The burner burns all the time at different powers; when the pressure reaches 5 atm, only the pilot flame keeps burning. After burning, the gas volume multiplies: with the chamber closed, pressure and temperature increase; with the chamber open, the flow increases. The fuel can be anything that burns: butane, alcohol, gasoline, oil, etc. The burner must be designed for the fuel used. Until now, nothing could supply the airflow needed for efficient combustion at 5 atm; the rotary piston can. Also, 5 atm was not sufficient to produce enough power in other machines; for the rotary piston it is enough. So a 5 atm pressure is maintained by an automated system, and a rotary piston is attached to this 5 atm source. The rotary piston has a usable area of 100 mm x 100 mm = 10000 mm^2 = 100 cm^2. Force = 5 atm x 100 cm^2 = 500 kgf = 5000 N. With a 200 mm = 20 cm arm, the torque is 5000 N x 0.2 m = 1000 N·m. At high speed (20,000-30,000 rpm) this gives great power; alternatively, we can use a smaller rotary piston or a lower pressure. The advantages of this engine are reliability, light weight, lower fuel consumption, low cost, and easy maintenance. What is your opinion?
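The arithmetic in that last post is easy to slip on, so here is a minimal Python sketch (an editor's addition, not from the thread; the 20,000 rpm operating speed is the poster's own assumption) that redoes the force, torque, and power estimate in SI units:

```python
import math

# Redo the poster's estimate in SI units (1 atm taken as 101325 Pa).
pressure_pa = 5 * 101325            # 5 atm working pressure
area_m2 = 0.100 * 0.100             # 100 mm x 100 mm usable piston area
arm_m = 0.200                       # 200 mm torque arm
rpm = 20_000                        # the poster's claimed lower speed

force_n = pressure_pa * area_m2               # ~5066 N, vs. the quoted 5000 N
torque_nm = force_n * arm_m                   # ~1013 N*m, vs. the quoted 1000 N*m
power_w = torque_nm * rpm * 2 * math.pi / 60  # P = torque * angular velocity

print(f"force  ~ {force_n:.0f} N")
print(f"torque ~ {torque_nm:.0f} N*m")
print(f"power  ~ {power_w / 1e6:.1f} MW")     # ~2.1 MW at 20,000 rpm
```

Taken at face value, the claimed pressure, area, arm, and speed imply output on the order of megawatts, which is exactly the kind of figure the proposed test stand would have to confirm.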
# Math Help - General Term of the Series problem.

1. ## General Term of the Series problem.

Find an expression for the general term of the series below. Assume the starting value of the index, k, is 0. Denote terms such as k! or (2k)! as fact_k or fact_2k. Here is the series...

$(x-8)^4 - \frac{(x-8)^6}{2!} + \frac{(x-8)^8}{4!} - \frac{(x-8)^{10}}{6!} + ...$

What I get for an answer is ((-1)^(k+2)(x-8)^(2k+4))/(fact_(k+1)). What I am wondering is why is this wrong? Any help appreciated.

2. Originally Posted by Latszer
[question quoted above]
Does it give the correct term corresponding to k = 2 ....? You might find it easier to get the general term if you first take out a common factor of (x - 8)^4 from the series.

3. The only thing I see is that k+1 factorial should be 2k+1 factorial

4. Originally Posted by Latszer
The only thing I see is that k+1 factorial should be 2k+1 factorial
*Sigh* Does that give you the correct term corresponding to k = 1? Why don't you just follow the advice I gave in my first post.
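For readers following along, here is a small SymPy check (an editor's addition, not part of the thread) that makes the hint concrete: the posted term happens to match for k = 0 and k = 1 but breaks at k = 2, while switching the denominator to fact_2k, i.e. (2k)!, matches every term.

```python
# Editor's check, not from the thread: test candidate general terms against
# the first terms of (x-8)^4 - (x-8)^6/2! + (x-8)^8/4! - (x-8)^10/6! + ...
from sympy import symbols, factorial, simplify

x = symbols('x')
series_terms = [(x - 8)**4,
                -(x - 8)**6 / factorial(2),
                (x - 8)**8 / factorial(4),
                -(x - 8)**10 / factorial(6)]

def posted_term(k):    # the original attempt: denominator fact_(k+1)
    return (-1)**(k + 2) * (x - 8)**(2*k + 4) / factorial(k + 1)

def corrected_term(k): # denominator fact_2k instead
    return (-1)**k * (x - 8)**(2*k + 4) / factorial(2*k)

for k, target in enumerate(series_terms):
    print(k,
          simplify(posted_term(k) - target) == 0,     # True for k = 0, 1 only
          simplify(corrected_term(k) - target) == 0)  # True for every k
```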
Stuff like `{{autolink|Respond or Negate}}` does not work, and that's not uncommon. It would be cool if it would output `Respond or Negate`. I tried at the sandbox, but meh, it was a bit dumb, I don't understand much of this. Becasita Pendulum (talkcontribs) 11:51, June 6, 2016 (UTC)
# Combinatorics

What is your favourite combinatorial object, or mathematical object you find the most fascinating?
I am always curious about which types of combinatorial or mathematical objects (in general) interest other researchers. For myself, I have a pretty keen interest in types of restricted weak integer compositions ("weak" means that 0 may be used in the integer sequence, whereas without this qualifier 0 is not allowed, and "restricted" means looking at subsets of these objects). What combinatorial or mathematical object (preferably a more elusive one) fascinates you the most, or is your favourite?

Not sure if the following qualifies, but this phi spiral?

Which operations can be constructed using only xy and x+y+z?
What method can I use to characterize which operations repeated multiplication xy and 3-input addition x+y+z are able to construct mod 4, or mod 8, or ... or mod 2^32? The reason I ask is that one can prove it is impossible to construct a known constant from unknown inputs using these operators (no formula f(x,y,...) = K for all x,y,...), and one can also characterize those other operators that can be added into the mix with them while preserving the property. But I don't know what the totality of operations one can construct with xy and x+y+z is. Indeed, I haven't figured it out for plain old xy and x+y! (Yes, I can describe those: "all polynomials"; but how many are there, and what operations cannot be formed as polynomials?) Any clever ways to go about this? I am trying symbolic computation to generate the orbits of "x" and "y" under substitution in the binary operators xy and x+y+z mod 4.

Peter T Breuer · Birmingham City University
Strangely, I see 1436 operator tables constructed by xy and x+y mod 4. That's multiplication and ordinary addition, not the more rarefied three-input addition. Shouldn't these plain old operations construct all polynomials in x, y with 0 constant term? Going by the coefficients, there should be 4^15 of them (15 coefficients, each taking the values 0, 1, 2, 3). But I suppose there are polynomials with different coefficients that are functionally equal, since there are divisors of zero mod 4? The difference might be zero mod 4 as a function, but have nonzero coefficients? Can anyone count the number of functionally distinct multinomials in x, y mod 4?

How many polynomials are there in one variable? 64, according to the computer: [0,1]x^3 + [0,1]x^2 + [0,1,2,3]x + [0,1,2,3] (each bracket lists the possible coefficients). That's 16 that are zero at zero. That was due to the reductions 2x^3 = 2x^2 = 2x and x^4 = x^2, which hold as functions mod 4. I'll try counting the multinomials. There are 4 possible coefficients for each of x, y and xy. There are 2 possible coefficients for each of x^2, y^2, x^2y, xy^2, x^2y^2. There are 2 possible coefficients for each of x^3, y^3, x^3y, xy^3, x^3y^3, x^3y^2, x^2y^3. That is all. That's 4^3 * 2^12, or 2^18. It's no good .. I'll have to count them computationally.

What if a polynomial (zero at zero) is zero on all odd numbers? The coefficients satisfy a3 + a2 + a1 = 0 = a3 - a2 + a1, so 2a2 = 0 and a2 = 0 or a2 = 2. In one case a3 = -a1, and in the other a3 = 2 - a1, so one has -a1 x^3 + a1 x and (2 - a1) x^3 + 2x^2 + a1 x. If the polynomial is zero at the even numbers too, then 2a1 = 0, so a1 = 0 or a1 = 2. The functionally zero polynomials are then 0, 2x^3 + 2x, 2x^3 + 2x^2, 2x^2 + 2x. That gives four versions of each polynomial with the same functionality. This is the kernel of the embedding of polynomials into functions. It's a simple count .. there are 4^4/4 = 4^3 = 64 polynomials by functionality, 16 with zero constant term.
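The thread already resorts to machine counting, so here is a small brute-force sketch (an editor's addition) that confirms the univariate figures: by the reductions quoted above (x^4 = x^2 and 2x^3 = 2x as functions mod 4), degree <= 3 already realises every polynomial function mod 4, and enumerating coefficient tuples gives 64 functionally distinct polynomials, 16 of them zero at zero.

```python
from itertools import product

MOD = 4

# Enumerate all polynomials a3*x^3 + a2*x^2 + a1*x + a0 mod 4 and collect
# their value tables; distinct tables = functionally distinct polynomials.
tables = set()
zero_at_zero = set()
for a3, a2, a1, a0 in product(range(MOD), repeat=4):
    table = tuple((a3 * x**3 + a2 * x**2 + a1 * x + a0) % MOD for x in range(MOD))
    tables.add(table)
    if table[0] == 0:
        zero_at_zero.add(table)

print(len(tables))        # 64 functionally distinct polynomial functions mod 4
print(len(zero_at_zero))  # 16 of them vanish at zero
```

The same enumeration, extended to the 16 monomials x^i y^j with i, j <= 3, would settle the disputed bivariate count directly.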
What if a multinomial x^3 p3(y) + x^2 p2(y) + x p1(y) + p0(y) is functionally zero? Evaluating at x = 1, 3, 2, 0 respectively gives:

p3(y) + p2(y) + p1(y) + p0(y) = 0
-p3(y) + p2(y) - p1(y) + p0(y) = 0
2p1(y) + p0(y) = 0
p0(y) = 0

So 2p3(y) = 2p2(y) = 2p1(y) = p0(y) = 0. So p3, p2, p1 take only even values (0 or 2) and p0(y) = 0, i.e. p0 is one of the four functionally zero polynomials (that's 4 options). That a polynomial takes only even values means it is b3 x^3 + b2 x^2 + b1 x + [0,2], where one or three of b1, b2, b3 are even:

[0,2]x^3 + [1,3]x^2 + [1,3]x + [0,2],
[1,3]x^3 + [0,2]x^2 + [1,3]x + [0,2],
[1,3]x^3 + [1,3]x^2 + [0,2]x + [0,2],
[0,2]x^3 + [0,2]x^2 + [0,2]x + [0,2],

for 4 * 2^4 = 2^6 = 64 options each for p3, p2, p1. That is a total of 64^3 * 4 = 2^20 functionally zero multinomials. That would bring the number of functionally distinct multinomials down to about 4^16/2^20 = 2^12 = 4096! I don't seem to be able to get a stable estimate for this quantity :-(. My computer says the answer is 3686, 906 of which are zero at zero. That can't be right.

What is an interesting and hot research topic in combinatorial group theory?
I need an interesting topic in combinatorics (graph theory) in relation to group theory, which I want to have applications in other sciences.

What are the values of the special Bell polynomials of the second kind?
Bell polynomials of the second kind B_{n,k}(x_1, x_2, ..., x_{n-k+1}) are also called the partial Bell polynomials, where n and k are positive integers. It is known that B_{n,k}(1,1,...,1) equals the Stirling numbers of the second kind S(n,k). What are the values of the special Bell polynomials of the second kind B_{n,k}(0,1,0,1,0,1,0,...) and B_{n,k}(1,0,1,0,1,0,...)? Where can I find answers for them? Do they exist somewhere?

Feng Qi · Tianjin Polytechnic University
Bai-Ni Guo and Feng Qi, Explicit formulas for special values of Bell polynomials of the second kind and Euler numbers, ResearchGate Technical Report, available online at http://dx.doi.org/10.13140/2.1.3794.8808.

Can anyone give me an interpretation of, or a link about, the improper uniform prior as a prior distribution in Bayesian estimation?
I need a good reason why the improper uniform prior can be used as a prior in the Serial Numbered Population (SNP) problem. Or maybe someone can tell me about the improper uniform prior itself. A PDF link would be very helpful for me. I attached the journal article.

Brian S. Blais · Bryant University
I'm not sure if Jochen's answer is entirely correct: "An improper prior can be used when the resulting posterior is proper." I'd modify it to say that an improper prior can be used when the resulting posterior can be achieved with a *proper* prior and the limits of the distribution, taken to infinity after the posterior is calculated, are shown to give the same result. For example, you can use a proper, bounded uniform prior and then, after you calculate the posterior, take the limits as the bounds grow to infinity to show that if you had used the improper prior in the first place it would give the same answer. Seen this way, the improper prior is a shortcut to make the analysis cleaner. For example, I personally find E. T. Jaynes' analyses very clear because he uses improper priors (when he can!), and he warns that one has to be careful to do it properly if there is even a hint of trouble. I find a paper like Bretthorst's "Difference of Means" paper to be a much harder read because he uses proper priors throughout, even in cases where I think the improper prior would work.

Can anyone help me with the inequality for the ratio of two Bernoulli numbers?
Dear Colleagues, I need an inequality for the ratio of two Bernoulli numbers; see the attached picture. Could you please help me to find it? Thank you very much. Best regards, Feng Qi (F. Qi)
Feng Qi · Tianjin Polytechnic University
[1] Feng Qi, A double inequality for ratios of Bernoulli numbers, ResearchGate Dataset, available online at http://dx.doi.org/10.13140/2.1.2367.2962.
[2] Feng Qi, A double inequality for ratios of Bernoulli numbers, RGMIA Research Report Collection 17 (2014), Article 103, 4 pages; available online at http://rgmia.org/v17.php.

What is the power series expansion at zero of the secant to the power of three?
It is well known that the secant $\sec z$ may be expanded at $z=0$ into the power series
$$\sec z=\sum_{n=0}^\infty(-1)^n E_{2n}\frac{z^{2n}}{(2n)!}$$
for $|z|<\frac\pi2$, where $E_n$ for $n\ge0$ stand for the Euler numbers, which are integers and may be defined by
$$\frac{2}{e^z+e^{-z}}=\sum_{n=0}^\infty E_n\frac{z^n}{n!}.$$
What is the power series expansion at $0$ of the secant to the power of $3$? In other words, what are the coefficients in the corresponding power series? It is clear that the third power of the secant, $\sec^3 z$, is even on the interval $\bigl(-\frac\pi2,\frac\pi2\bigr)$.

Herbert H. H. Homeier · Universität Regensburg
Assume that the secant numbers are finitely expressed in closed form. Then the coefficients of the m-th power of the secant power series are finitely expressed using (m-1)-fold convolutions of the sequence of secant numbers (cf. http://en.wikipedia.org/wiki/Power_series#Multiplication_and_division); e.g. the k-th coefficient of the third power is given by
$$(E*E*E)_k = \sum_{m=0}^k E_{k-m} \sum_{n=0}^m E_{m-n} E_n.$$
Beware that here, the odd secant numbers are simply 0.

Can anyone suggest references for Iterated Function Systems and combinatorics: uniqueness of addresses?
Could anyone suggest some references about the uniqueness of the addresses of an Iterated Function System? E.g. the points of the Cantor set can be coded by a unique address; how general is this property? More precisely, I am interested in sufficient conditions for a one-to-one relation between the shift space of an Iterated Function System and its attractor.

Miroslav Rypka · Palacký University of Olomouc
Dear Anna, there are three kinds of attractors of iterated function systems: totally disconnected, just touching, and overlapping. Totally disconnected attractors have a structure metrically equivalent to the Cantor set. This can be found in Barnsley's book Fractals Everywhere. The remaining cases may be treated with the help of lifted IFSs, which are also explained in the book. Best regards, Miroslav

• Let A and B be two square matrices such that A^2 is not equal to B^2 and A is not equal to B, but A^3 = B^3 and A^2B = AB^2. What is the determinant of A^2 - B^2?
If A and B are non-singular then we have det(A^2 - B^2) = -det(AB). Am I correct? If so, then the problem is: if A and B are singular, how can we prove it?

Prasanth G. Narasimha-Shenoi · Government College Chittur
@Samuli Yeah, the facts are correct. Sorry that I am not able to answer.

What is the formula to find the number of simple cycles in a graph? Is this problem NP-complete?
I would like to list all simple cycles in a connected graph.

Sergey Perepechko · Petrozavodsk State University
You need to give additional details about your problem. If you know the adjacency matrix of your graph and want to count the number of simple cycles of fixed length, look at the slides and papers in my profile. Sergey Perepechko

Which software package is best for computations with codes over rings?
What is the best software package available for carrying out computations related to algebraic coding theory, especially codes over rings?
Mostafa Eidiani · Khorasan Institute of Higher Education
Hi

Can anyone help me find the amortized splay tree operation derivation?
Please help me with a derivation of the "amortized cost of a splay tree operation". I am waiting for it; please consider it soon and let me know.

Albert Manfredi · The Boeing Company
Both of these articles include proofs. Not sure whether this is what you're asking. Hope they help.
http://www.cs.cornell.edu/Courses/cs312/2006fa/recitations/rec20.html
http://www.bowdoin.edu/~ltoma/teaching/cs231/fall10/Lectures/13-amortized/splay.pdf

• Cheng Tianren asked a question: How do I use the Ptolemy inequality to study the geodesic angle?
Here we try to use the Ptolemy inequality to study the geodesic angle, and we introduce two preliminary results:

1. The Ptolemy theorem in Euclidean spherical geometry. By studying the matrix of 4 quadruple points on the Euclidean sphere, we find that, since there are 6 lines connecting these 4 points, if 5 of the 6 lines are equal, then the sixth line's length is double that of the other 5 equal lines. This result implies that the angle between the lines, $\frac{\sqrt{a}}{2r}$, is included in $(2k\pi+\pi/5,\,2k\pi+5\pi/6)$; then we substitute this result into the discriminant, and we get an inequality for the radius in n+1 dimensions: $op_{n+1}^2\ge\frac{r^2}{t^2}\left(1+\frac{r^2}{n}\right)-\frac{r^2}{n}$.

2. The Ptolemy inequality in Minkowski geometry. Here we use the centroid method to study the n-polygon problem in Minkowski geometry. Firstly, we introduce a well-known problem: if every angle of a polygon is equal, and the side lengths are $1^2,2^2,\ldots,N^2$, then $\sum n_{s}e^{isa}=0$. By this theorem, we can factorize the masses on the vertices into pairs, where the number of pairs factors into primes, $N/2=\prod p_{i}^{a_{i}}$, and the weight of each pair is $\sum(4k-1)$; then we can divide these pairs into groups so that every group has a prime number of points too (page 21 in [1]). Consequently we can divide the side lengths into 2 parts, $1^2,3^2,\ldots$ and $2^2,4^2,\ldots$. The next step is to construct a regular n-polygon and use the Ptolemy inequality to make a rule for the average of the sum of the masses in the different groups, and we can rearrange these groups of masses to ensure the first part $1^2,3^2,\ldots$ is larger than the average and the second part $2^2,4^2,\ldots$ is less than it. Therefore we can apply this average of the sum to the distance formula of Minkowski geometry in polar coordinates (page 24 in [1]). Here we also use a combinatorial method (the result we get in step 2) to study the natural logarithm in the distance formula of Minkowski geometry (page 25). Our goal is to represent the polar angle in Minkowski geometry as the product of the masses lying on the different vertices (page 26 in [1]).

So our question is: how do we apply the 2 results above to the geodesic angle? By the inequality for the radius on the $(n+1)$-dimensional Euclidean sphere, we can ensure $v_{n+1}\ge0$; consequently we can substitute the representation of the polar angle from step 2 into the spherical equation, which implies that we can also restrict the range of $\cos^2\varphi$ to $[\sqrt{33}-4l,\,3]$. Lastly, we apply the property of Ptolemy spaces to get our estimate for the geodesic angle; the bound is $1-\frac{4e^{2d}}{3}+\frac{2}{3-\frac{4}{3}e^{2d}}$. Is this method feasible? For more detail, you can refer to: application of the Ptolemy theorem (3) (pages 15-27); the analysis techniques for convexity: CAT-spaces (3).

In integral theory, how can you integrate all four perspectives on human development into a model?
According to Wikipedia, http://en.wikipedia.org/wiki/Permutation, if you have 3 objects [a,b,c], combinatorics says there are 6 possible permutations. But positioning in space and time is not considered. What if 'c' is placed 0,5xx behind [a,b], or 'a' and 'b' are separated either horizontally/vertically, or 'c' and 'b' are isolated back to back and 'a' spins around and around for 30 seconds more until reaching a stable position? I am looking at Integral theory and trying to integrate all four perspectives on human development (internal-individual; internal-collective; external-individual; external-collective) into a model. I am not satisfied with models similar to the above "max 6 possible permutations of the three elements". I believe that for 3 objects there are more than 1+2+3 permutations, and for 4 objects (my case) more than 10.

Alina Abraham · ICL Business School
Adapting the Ken Wilber/Terri O'Fallon model, using a spiraling trajectory.

In terms of combinatorics, could (6677,333,166) be cyclical on the torus?
Would like to know if (6677,333,166) could be cyclical on the torus.

Christopher Landauer · The Aerospace Corporation
The notation is ambiguous - what combinatorial object are these the parameters for? Is it some kind of design? If so, what kind, and what parameters are indicated? The problem is that there are hundreds of different parameterized combinatorial designs that use the same kind of notation.

Is there a database available on the net of symmetric designs?
Symmetric designs = combinatorial 2-designs with as many points as blocks.

Mohan Shrikhande · Central Michigan University
Nice to hear from you, Patrick!

Spin - could you provide some clarification?
I always thought that the sum of angular momentum should be zero. I am wondering if this applies on a sub-atomic scale as well as a macroscopic scale.

Daniel Crespin · Central University of Venezuela
Hello Donald, a reasoning similar to yours implies that the total linear momentum of the Universe is zero, and furthermore, that there should be a motionless center of mass of the Universe. But how many technical details of these arguments can be worked out is not so clear. It may be worth a try. The term "spin" in Physics usually refers to the angular momentum of atomic-scale objects. Technically, for n >= 3 the special orthogonal group SO(n) is connected and has first homotopy group (also called fundamental group) isomorphic to Z_2, the integers modulo 2. Therefore its universal covering space, denoted Spin(n), is a well-defined connected topological group (and a Lie group as well) with fibers consisting of two points; equivalently, it is a connected double cover. If you consider instead the orthogonal group O(n), this is non-connected, has two connected components, and the connected component of the identity is SO(n). The universal cover of O(n) is a topological space known as Pin(n), necessarily a non-connected double cover of O(n). But the group structure of Pin(n) is not unique. This subtlety is mentioned in http://en.wikipedia.org/wiki/Pin_group - see the references there. In Classical Physics, the natural formalism for rotations in ordinary three-dimensional space is based on SO(3), its tangent bundle, and the Lie algebra so(3), the tangent space to SO(3) at the identity. A good reference is the book Classical Mechanics by V. Arnold. Then come Quantum Mechanics and its descendants. Schrödinger's time-dependent equation involves complex numbers. This forces the introduction of complex-valued wave functions \psi.
To go beyond the mathematical formalism of QM, the mathematical object \psi requires physical interpretation. The wave "amplitude" |\psi|^2 is interpreted "physically" as a probability distribution. This makes the "phase" disappear physically, because |\psi|^2 and |\exp(-i n t) \psi|^2 are one and the same physical state. But the phase is the natural way to consider rotations. Thus, rotations disappear in QM. The quantum way to recover something resembling rotation is to use Pauli matrices; equivalently, to use spin(3). I have been unable to make sense of these as rotations. Here the quantum dictum "Shut up and calculate" is acutely present. A nearby neon sign says "Abandon hope all ye who enter here". In my opinion, the unfortunate and mistaken choice of a unitary evolution equation (that is, of the Schrödinger evolution equation) for the hydrogen atom made it impossible to understand microscopic rotational phenomena. Even worse, transitions themselves are contradicted by this equation. On the other hand, the Schrödinger eigenvalue equation is one of the most impressive wonders of science. Most cordially and with best regards, Daniel Crespin

Can anyone help me with a combinatorial interpretation?
I am asking for a combinatorial interpretation of a formula for Bell numbers in terms of Kummer confluent hypergeometric functions and Stirling numbers of the second kind. See formula (8) and Theorem 1 in the attached PDF file or at http://arxiv.org/abs/1402.2361. Could you please help me? Thanks a lot.

Feng Qi · Tianjin Polytechnic University
Yes, formula (8) is not the only such formula. Some mathematician asked me to provide a combinatorial interpretation, but I do not know the combinatorial meaning of formula (8). I think that "the formula (8) just represents one more expression for those numbers, using the Kummer confluent hypergeometric function" may not be a combinatorial interpretation of formula (8).

Does the discrete n-circle (n even) admit a partition into n/2 pairs, all with a distinct diameter?
A (discrete) n-circle is the set of complex n-th roots of unity, or: the vertices of a regular n-gon. The above question arose as part of a (nearly finished) research project on a method to produce unpredictable number sequences. Although my partial answers are no longer needed for the project, the simple-looking and still unsettled problem keeps intriguing me. I proved that if a partition into pairs of distinct diameters exists, then n must be of type 8k or 8k+2 (k>0 an integer). Computer-generated examples confirm that for n <= 112, these types are *exactly* the sizes that work. The computer was stopped after running for two days on the case n=114 (having inspected nearly 0.000...001% (about 300 zeros) of the total search space). The only hope for further information must come from construction methods other than brute-force search with backtracking, and from proofs. Specifically, the problem becomes this: design an algorithm that is guaranteed to produce a partition (as desired) whenever one exists and reports failure otherwise. Unlike the current backtracking brute-force search, the algorithm should provide answers in a reasonable time. [Added 09-12-2013: solved] The problem is certainly NP (Nondeterministic Polynomial), but chances are that it is NP-complete. [Added 09-12-2013: not NP-complete] A weaker problem is to find a number b <= n/2 such that *any* b vertex pairs with different diameters can be rotated apart in the n-circle, for *any* (even) n.
It might be "(n/2)-1", I haven't checked on this yet. Ultimately, one should be able to determine the best b for each individual n (including the odd case). [Added 09-12-2013: this is still wide open. Exhaustive computer search is getting quite demanding, even for fairly low n] Marcel Van de Vel · VU University Amsterdam I have finished my paper containing the questions that I collected in my original posts. There are several more questions (and partial answers) in it, which all arose with the development of one major result on a theoretical method to produce unpredictable numbers. As this paper is intended for publication in a regular journal, I cannot place it it here in public. If anyone is interested in receiving a copy (25 pp), please let me know. I'll send a pdf file by e-mail. The paper classifies mainly as combinatorial mathematics. How close is spectral partitioning to the solution of the min-cut problem? There are many approaches using different matrices and eigenvectors to solve the min-cut problem. What is the best theoretical result providing a good approximation from a spectral cut to the solution? • Donald beverly Giles asked a question: Is there any interest in a very different spreadsheet algorithm for generating incidence matrices of projective planes? I have devised an algorithm for generating the incidence matrices of projective geometries not shown in traditional texts. Will include as an attachment. This design is self dual and relies on the 4 mols of order 5.Rather than use M*Mt can use M^2. Square Root of a Symmetric Matrices The square root of a 31 by 31 matrix with 6"s down the main diagonal and 1"s elsewhere is a symmetric binary matrix with six 1's in each row and column. If someone has an algorithm for this square root then perhaps they can apply it to a larger matix that Iam presently working on. This larger matrix is an 81 by 81 matrix with 16's down the main diagonal and 3's eveywhere else. The square root of this will be a binary symmetric martix with sixteen 1's in each row and column. For me this is not a simple problem. My knowledge of matrices does not extend to taking square roots of symmetric matrices and getting symmetric binary matrices as the answer. Christopher Landauer · The Aerospace Corporation since the given matrices are linear combinations of the identify and the matrix commonly denoted J (the all 1 matrix), the eigenvalues are simple to compute (the eigenvalues of the k x k matrix J are one k and the rest 0), but only one of them is a square (36 in the first 31 x 31 example, 256 in the 81 x 81 example), and the rest are all equal (5 in the first example and 13 in the second) - also, since J . J = kJ, a square root of J is J/sqrt(k) - therefore, it is easily shown that a square root of rI + sJ in k x k matrices can be found having the same form aI+bJ, with r = a^2, s = 2*a+b^2*k, which is easily solved for a and b (need r >= 0 for a to be real, and s-2*a >= 0 for b to be real) - in the two given examples, r=5, s=1 and r=13, s=3, then second condition does not hold and b will be imaginary there are, of course, other square roots, as other people pointed out Up to now, what is known on (the maximal) domains that guarantee the transitivity of the majority rule? It is well known that the majority rule may not be transitive for some configurations of individual preferences. Domain restrictions are possible ways out. But what is known about maximal such domains (i) with respect to the cardinality? (ii) via set inclusion? 
Issofa Moyouwou · University of Yaounde I
That is very instructive. Are there some references for further reading?

What do the three parameters represent, i.e. (44,22,10)?
I was thinking these were the parameters of some type of combinatorial design.

Bahattin Yildiz · Fatih University
Actually it is [44,22,10], and it describes the parameters of a binary linear code, of length 44, dimension 22 and minimum Hamming weight 10.

Can anyone suggest some good reference papers for beginners in the field of Combinatorial Design?
It can either be related to key distribution or any other application; just an overview is needed.

Daniel Page · University of Manitoba
I can't recommend a book better than Combinatorial Designs: Constructions and Analysis by Douglas Stinson. It's how I learnt design theory. It is a little more expensive, but it is full of handy results. http://www.amazon.com/Combinatorial-Designs-Constructions-Douglas-Stinson/dp/0387954872 Another good book is the Handbook of Combinatorial Designs. My previous supervisor and my current one wrote a pretty interesting bit in there on Lotto Designs.

• Donald Beverly Giles asked a question: Can anyone recommend ways to find a skew starter for a Room square of side 667?
Skew Room squares exist for all odd sides greater than 5. If n is prime, it is a simple matter to generate a skew starter. But 667 is not prime: 667 = 23*29, which means a computer search has to be done in order to generate one. I would be satisfied with the skew Room square of side 667; even though we can show it exists, we can't seem to construct it. Any suggestions on this particular problem are appreciated.

Why do mathematicians think that the four colour problem cannot be solved theoretically?
I published a proof. If there is any error in my proof, kindly inform me.

Donald Beverly Giles · Board of Governors of the Federal Reserve System
This is because Bill Tutte, the man who broke the German Tunny code on his own, spent a great deal of time trying to prove the four colour theorem. He had a nice short proof of the five colour theorem, which was not his own. He also constructed the Tutte fragment, which ended Tait's conjecture on trivalent graphs. Any graph can be triangulated; the dual of the triangulated graph is a trivalent graph. If a Hamiltonian circuit with an even number of vertices could always be constructed and 2-coloured, the remaining third of the vertices could then be 3-coloured, which would imply that any map can be 4-coloured. Well, the Tutte fragment shut this conjecture down flatly. A strange claim he made was that the average number of colours over all graphs is pi: some need more than pi and some fewer. I was with Tutte as a student in '74 when he was pursuing this problem. It was a wonderful class on graph theory. Most of the students were professors; only a few of us were there for credit.

Why does the four-colorability of planar graphs not ensure the non-biplanarity of K_9?
I hope the proof of the four color theorem is sufficient to explain the answer.

Sanjib Kuila · Panskura Banamali College
I have already studied that paper of Beineke. I also went through the original works of J. Battle et al., and that of Tutte. On this topic, Harary wrote in his book, "No elegant or even reasonable proof is known."

Proof of the existence of balanced tournament designs?
I'm writing my senior research paper on balanced tournament designs, and I am looking for the proof of their existence, which is in this paper.
Donald Beverly Giles · Board of Governors of the Federal Reserve System
Jennarose: Ron Mullin, Waterloo's first graduate student, proved the existence of Room squares for all odd v greater than 5. The last one that needed proving was 257, which was constructed in the Orient in the mid 70's. Further to this, skew Room squares exist for all odd v greater than 5. A skew Room square is a Room square where exactly one of the cells (i,j) or (j,i) is occupied and the other cell is empty. Don Giles is converting skew Room squares of order 4n+3 into symmetric block designs with parameters (4n+3, 2n+1, n), which in turn are being converted into Hadamard matrices of order 4n+4. All of these combinatorial structures are interrelated. Paul Schellenberg developed the Room square of order 25 as part of his PhD; he too was a student and then a professor at Waterloo in the late 60's and early 70's. Addition and multiplication theorems were used after a collection of smaller Room squares had been established by a host of researchers. Hope this helps you with balanced tournament designs. I can elaborate more on their usage if you wish.
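Since several of the structures in this thread are easy to get wrong by hand, here is a small verifier (an editor's sketch based only on the definitions quoted above; the representation of the square as a list of lists of frozensets is an assumption) for the defining properties of a skew Room square of side n on the symbols {0, ..., n}:

```python
from itertools import combinations

def is_skew_room_square(square, n):
    """Check the defining properties of a skew Room square of side n.
    `square` is an n x n list of lists whose entries are either None
    (an empty cell) or a frozenset of two distinct symbols from {0..n}."""
    symbols = set(range(n + 1))
    # Each symbol must appear exactly once in every row and every column.
    for i in range(n):
        for line in ([square[i][j] for j in range(n)],
                     [square[j][i] for j in range(n)]):
            seen = [s for cell in line if cell is not None for s in cell]
            if len(seen) != n + 1 or set(seen) != symbols:
                return False
    # Skew property: for i != j, exactly one of cells (i,j), (j,i) is filled.
    pairs = set()
    for i in range(n):
        for j in range(n):
            cell = square[i][j]
            if cell is not None:
                pairs.add(cell)
            if i != j and (cell is None) == (square[j][i] is None):
                return False
    # Every unordered pair of distinct symbols must occur in exactly one cell.
    return pairs == {frozenset(p) for p in combinations(symbols, 2)}
```

Any candidate construction, such as a square of side 667 produced from a computer-found starter, could be fed through a check like this before being converted onward to block designs or Hadamard matrices.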
# “Explore Mathematics: Part II”

I felt like my first venture into “Explore Mathematics!” was so successful last quarter with my Advanced Precalculus kids that I wanted to build upon that. So this is what I’m doing for “Explore Mathematics!: Part II”

• Last quarter students scoured the web and did 5 different mini-explorations which exposed them to all the neat math that exists outside of our standard curriculum. This quarter students will be doing up to two more in-depth explorations.
• Because I don’t want this to be seen as busy work, doing “Explore Mathematics!: Part II” is going to be completely optional. I was glad to read that almost every kid who did the five mini-explorations last quarter didn’t end up finding it busy work, but I suspect doing it a second time would feel tedious.
• To have some sort of incentive for those who do it, I am going to make each of the two explorations worth 12 points. These explorations will count as a mini-assessment (normal assessments are around 50 points). This is useful for kids because our fourth quarter only has 18 days of instructional time (seriously), so there are only two major assessments and one minor assessment scheduled. Doing these explorations can act as a way to get another mini-assessment grade in there that will be low-stress, high-reward. [1]
• I’m not framing it around the grade boost it will likely provide, but around the fact that it’s an opportunity to do some awesome math explorations, for anyone who wishes to do so.
• It is still pretty open-ended, but I’m now looking for students to write something to get others to see what they find interesting/intriguing/awesome about something.

Here’s the document I just emailed my kids: Here it is in .docx form in case you want to modify it.

[1] Yes, I do SBG with my calculus kids. Yes, I know how ridiculous this sounds, me playing the “point game.” I almost wanted to make it so that there was no external reward, but our kids are so busy with so many things that I know even a little incentive will go a long way. I’ve been at my school long enough, and know our kids well enough, to know this is doomed to failure without a little external reward.

# Explore Math (Reprise)

At the beginning of the 3rd quarter, I did an experiment in my Advanced Precalculus classroom: Explore Math. This post is the compilation of the survey results from my kids on this experiment. So if you don’t know what the activity was, read up here, and then see what this survey is all about. I will share examples of some of the student work for this experiment later. Part of the assignment for students included submitting one exploration to our school’s math-science journal, Intersections. When this year’s issue of the journal comes out, I hope to link to my kids’ explorations!

The question in the survey: The “Explore Math” project is something I’ve never done before. I explained my reasoning behind it, which is that I wanted to encourage you to see that there is so much more than our curriculum covers, and let you just have fun looking at math stuff outside of our curriculum… and get some easy credit for it (almost everyone is getting full credit for the first batch of things I’ve seen). However, as a teacher, I know something like this could easily be seen as busy work, and that was my big concern: that it would feel like a chore rather than something you actually want to do. This is me laying my cards on the table.
If I came to you in the student center and told you this and asked you for your thoughts, what would you say?

Every Student Response In Entirety:

I really liked the Explore Math project and I definitely would say it was an overall success. I loved how many options we were given for what we could do, and the fact that you gave us the options was great because otherwise it can feel like you are just desperately trying to research and find a topic to write about. My Explore Math topics I thought were extremely interesting, and it was cool to even connect some to the stuff we were learning in class. It was a lot of writing, which is something foreign for math classes, and it also made it kind of difficult to grasp exactly how to format what we were writing (five-page essays for each topic?). One other thing that was a little stress-inducing was the deadline, and I know it was a problem for most people that when there are multiple assignments due on one day, students leave them all and do them in bulk. Because of this, having the deadline of the first three due in February was definitely helpful. Overall, I really loved the assignment.

I really liked this project! I found a lot of things about math that I would have never known about if we weren’t assigned this project. I learned new formulas, new (very addictive) games, great youtube channels and informative popular articles. I found an entirely new community online that I did not know existed. At first I expected it to feel like a bit of a chore but when I actually sat down and did it, it was pretty fun. I think it was great that there were multiple ways you were allowed to “explore math.” I also thought it was amazing I could play around with the project a little bit to find areas of math that are aligned with my personal interests. Being able to think about how math affects our society, in a math class, was an amazing interdisciplinary activity. I think it’s good that not every option was a math puzzle; that would have felt constrictive.

I would say as long as the students are innovative, interested and patient people, the project sounds wonderful. The student, if very interested in math, should be encouraged to further their mathematical understanding, and find ways in which math is even more interesting to them than it was prior. Emphasizing the point that one (the student) does not need to seek the most difficult problem or most tedious theorem is also very helpful, as the student will be encouraged to explore areas of math which really interest them.

I would say that I absolutely love the Explore Math project. I have always been a person who enjoyed math that connected with the world. Being in a classroom memorizing formulas was never my interest and I was psyched when you announced the project. I think that this project can be very helpful in putting math on the global scale for students who only see it as a class in a school. This opens their eyes to the new heights math can reach and how much math actually helps outside of the classroom.

I agree it felt like busy work at times. I find it weird that something that’s supposed to be us having fun exploring math had a grade and time constraint attached to it. That’s one thing I didn’t like.

All I have to say is that this was not busy work; in fact it was productive and learning work.
I found this to be incredibly intensive and interesting, and it broadened my horizons of the understandings of applied mathematics and sciences, and introduced me to things that I had previously trembled [at] before, like string theory, for instance. I thought this was a great project and a simple and easy way to get us thinking in a mathematical mindset, and I am definitely reaping the benefits from it, because I have come away with much more knowledge about certain aspects of math that I had previously not known. I really wouldn’t know what to change because I liked these individual explorations so much and they intrigued me so much. Thank you for giving a project that I was thoroughly interested in, seriously!

For someone who is very interested in math in and out of the classroom, I am generally engaged with math concepts that are not a part of our curriculum. Thus, this was a good experience for me in that I was able to get credit for simply enjoying and exploring math; it also perhaps pushed me a little bit to go further than I normally would in exploring mathematical concepts online. However, for students who don’t love math outside of the classroom, I could definitely see how this might have seemed like busy-work. If you don’t genuinely enjoy math, then writing a lot about it and researching it is going to be cumbersome, but if you do, it’s enjoyable.

I really liked doing the Explore Math assignment. I liked that you were giving us an outlet to not just do the math that needs to be done in order to complete the class. This assignment allowed me, personally, to dive deeper into how math can be applied to the world and [see] that math is actually occurring all the time. Also, I remember not really understand[ing] infinite series, and then I did an Explore Math with infinite series that really helped me because it was a visual representation that really clicked with me.

I think that initially I thought the project might just be busy work and I didn’t really understand what we were expected to be doing. Once I read over the assignment and saw the scope of the projects we were allowed to do, I was much more interested and saw the project completely differently. I think that it is important to highlight, when giving the assignment, how broad a range of options you have when doing this, and that there are so many math projects that relate to everyday life that could be interesting if you just think about it, rather than relying on the assignment sheet completely to guide you.

Personally, I have enjoyed what I have done so far. Just recently, I voiced my concerns about the state of math in America and was able to do comprehensive research about bitcoin that I would not have done on my own. That being said, some of this has seemed like busy work and stuff “I just have to do for credit.” Since it seems like you genuinely want us to enjoy the project, it might be made better by making it extra credit. That way, we would be able to explore as much as we want without worrying about our grade.

I had a really awesome time doing my Explore Math assignments, but the one thing you could do to make it less [like] busy work is make it 3 different assignments, rather than 5, and make them a little more in depth, and more interesting in that regard. I think that if the students only had to do 3, they could expand more on what they were interested in.

I really like the idea, but for me personally, it turned into busy work.
Not because I find it boring but because I have so much other work that it gets pushed back towards the end of my load. I would like to spend more time on them, so possibly, on top of the nightly work for math, designate a night specifically for Explore Math.

This is practically the farthest thing from busywork we can do! Repetitive problems often seem like busywork. Practice is always good, but once you have something down, it can be quite annoying to practice it over and over again. Sometimes I feel that way about homework, but with this project we’re choosing any math-y thing that interests us! We have a lot of freedom, and hopefully it piques an interest in math outside of the curriculum.

This project is great; personally, I wish I had taken more time with it. As long as you don’t procrastinate too badly with it, I don’t see how this project could be a chore, unless you claim to hate math.

I LOVED this project, and I wish we got to do more things like this throughout the year. (I know we can do things like this whenever we want, but it’s really nice to get some recognition and the chance to formally share your math ideas with others.) As a side note, this project was also interesting to be doing while looking at colleges for the first time. I know that sounds like a really strange thing to say, but getting to enjoy math in new contexts, such as music theory, has given me new ideas of things I would like to pursue and take classes [on] while I am at college, because we don’t always get to learn about things like this on a daily basis in high school.

I do admit that I wasn’t very enthusiastic at the start of the project, but as soon as I started I completely changed my mind. Most of the work that I did was stuff I had never done before and might never do again. I was genuinely interested in what I was doing, and it was great to be able to choose what I focused on instead of being told what to look at.

I understand why you assigned this project, and I think it is very important to see the relevance math has in the world. This breathes life into the abstract “why are we learning this?” type of material that doesn’t appear to have anything to do with life outside the classroom. However, the problem with this assignment was that I didn’t know what I was searching for. When I found the Sloane’s Gap video and paper I felt like I struck gold after seemingly endless mining. However, the mining part is very un-exciting. Not un-exciting enough to undo the excitement of finding the cool stuff, but it’s not very encouraging either. I wouldn’t want this assignment to turn into a “choose 5 of these pre-determined projects” because that wouldn’t make anyone feel like they’re venturing outside the classroom. I’m not really sure what I would do to change this assignment, but I think it really is a good idea that with some refinement could become a really dynamic way to get into math. I think keeping it low pressure and “easy credit” is the way to go, because stress + ambiguity about an assignment is a terrible combination that would only end in resentment from your students, and students not enjoying their work.

Honestly, I had quite a bit of fun with the “Explore Math” project, as I saw many cool analogies of real-world applications of math. For example, one of my five “research topics” was the probability of randomly guessing correctly on every SAT multiple choice question. I learned that the probability is horrifyingly low; I already knew this, but not to such an extent.
Furthermore, I saw some very cool analogies in this SAT topic; for instance, if a computer were to take the SAT 1 million times a day, for five billion years, the chance of any of the SATs resulting in a perfect score on just the math section would be about 0.0001%. Crazy, I know!

# “Explore Mathematics”

I teach an Advanced Precalculus class, and I love my kids. This is my second time teaching the course, and I get a rush seeing the kids dive into whatever we do with full intensity. Because the curriculum we teach is so chock-full of things, we don’t really get days where I can go on tangents and have students explore things that I think would be of interest to them. Earlier this year, I was struck by this post by Fawn Nguyen. It’s rare that I read something and it just keeps rattling around in my brain, and won’t let me forget it. (Thanks Fawn, for being an annoying bee attacking my brain!) If you’re too lazy to click the link, the TL;DR version: Fawn has her kids go to Math Munch and explore and play with mathematics based on what interests them. She has her kids keep track of what they do with this sheet:

What I loved about this? It gave kids the freedom to explore mathematics that interested them. The assignment was fairly low-pressure. I wanted to do something similar. I knew I wanted it to be low-pressure to do, fairly easy to grade, and really focused on what the kids want to do. Thus, Explore Mathematics! was born. [.docx] Students are asked to engage with mathematical things that they are interested in during the third quarter. There are two deadlines, so they are working on them continuously and not rushing at the end to finish them. (Also to make marking them easier for me.) There is a low-pressure grading structure, which reinforces the notion that this is more about just engaging and less about “doing the right thing.” In total, I’m making it worth about half a normal test. I don’t know exactly how this is going to turn out. But I’ve already had a student present a piece of mathematical artwork he’s made, and I’ve had a couple of fun conversations with kids about things they’re thinking of doing/looking at. I hope this fosters a lot of fun mathematical conversations between me and the kids about the things they’re finding (and of course, among the kids themselves). The biggest concern is making this assignment not seem like, or become, busywork for the kids. I don’t want it to seem like added work just for the sake of extra work! That’s the fine line I am trying to navigate: sort of “forcing” kids to carve out some time here and there in their busy schedules to get exposed to the cool things out there. I have to figure out how I can create this feeling in the kids. Maybe that means I will give up some classtime for them to work on this every so often, to show them I value this sort of exploration. Wish me luck on this.

# Trigonometric Pythagorean Identities

In precalculus we typically start from $\sin^2(\theta)+\cos^2(\theta)=1$ and derive $\tan^2(\theta)+1=\sec^2(\theta)$ and $1+\cot^2(\theta)=\csc^2(\theta)$ simply by dividing both sides of the original equation by $\cos^2(\theta)$ or $\sin^2(\theta)$. I did this same thing this year. Except later on, a few weeks ago, I saw a post on twitter talking about introducing trigonometric identities through graphs on the unit circle, and having kids come up with their own identities. I loved this idea and planned to make a whole thing out of it. So far I’ve given students one thing I’ve made as a result of this idea (and that worked out super well).
From this, students were able to come up with the three Pythagorean Trig Identities we saw above, but also a fourth one that was totally unexpected. I had them all pick a different angle and substitute it into the left hand side and the right hand side of the last equation. KABAM! Whoa! SAMESIES! (Note to self: Next year make a dynamic visualization of this triangle on Geogebra, like this but better/cleaner.) Instead of doing a whole unit on Trigonometric Identities, the other teacher and I are slowly giving students a problem here and a problem there to practice with and find new strategies, over a couple weeks. I hope that works! And maybe if I have time, I’ll make a follow-up activity. Maybe giving the drawing below but without anything labeled but the radius of the circle, and having kids fill in each of the lengths and find various identities? They can use the ratios of similar sides… but also, if triangles are similar, they can use the ratios of the perimeters! Or knowing that the ratio of the areas of two similar figures is simply the square of the ratio of two corresponding sides? Also, maybe just maybe, kids could generate inequalities, like the area of this one triangle will always be less than the area of this other triangle? I don’t quite know, as things aren’t fully formed in my head yet. If anyone has any ideas, or existing resources, pass ‘em along!

A couple years ago, Kate Nowak asked us to ask our kids: “What is 1 Radian?” Try it. Dare ya. They’ll do a little better with: “What is 1 Degree?” I really loved the question, and I did it last year with my precalculus kids, and then again this year. In fact, today I had a mini-assessment in precalculus which had the question: What, conceptually, is 3 radians? Don’t convert to degrees; rather, I want you to explain radians on their own terms as if you don’t know about degrees. You may (and are encouraged to) draw pictures to help your explanation. My kids did pretty well. They were still struggling a bit with the writing aspect, but for the most part, they had the concept down. Why? It’s because my colleague and geogebra-amaze-face math teacher friend made this applet, which I used in my class. Since this blog can’t embed geogebra files, I entreat you to go to the geogebratube page to check it out. Although very simple, I dare anyone to leave the applet not understanding: “a radian is the angle subtended by the bit of the circumference of the circle that has the length of a single radius.” What makes it so powerful is that it shows radii being pulled out of the center of the circle, like a clown pulls a neverending set of colorful handkerchiefs out of his pocket. If you want to see the applet work but are too lazy to go to the page, I have made a short video showing it work. PS. Again, I did not make this applet. My awesome colleague did. And although there are other radian applets out there, there is something that is just perfect about this one.

# Trig War

This is going to be a quick post. Kate Nowak played “log war” with her classes. I stole it and LOVED it. Her post is here. It really gets them thinking in the best kind of way. Last year I wanted to do “inverse trig war” with my precalculus class because Jonathan C. had the idea. His post is here. I didn’t end up having time, so I couldn’t play it with my kids, sadly. This year, I am teaching precalculus, and I’m having kids figure out trig on the unit circle (in both radians and degrees). So what do I make?
The obvious: “trig war.” The way it works… I have a bunch of cards with trig expressions (just sine, cosine, and tangent for now) and special values on the unit circle, in both radians and degrees. You can see all the cards below, and can download the document here (doc). They played it like a regular game of war: I let kids use their unit circle for the first 7 minutes, and then they had to put it away for the next 10 minutes. And that was it!

# Infinite Geometric Series

I did a bad job (in my opinion) of teaching infinite geometric series in precalculus in my previous class. I told them I did a bad job. I was rushing. They were confused. (One of them said: “you did a fine job, Mr. Shah,” which made me feel better, but I still felt like they were super confused.) At the start of the lesson, I gave each group one colored piece of paper. (I got this idea last year from my friend Bowen Kerins on Facebook! He is not only a math genius but he’s also a 5-time world pinball champion. Seriously.) I don’t know why, but it was nice to give each group a different color piece of paper. Then I had them designate one person to be the “paper master” and two people to be the friends of the paper master. Any group with a fourth person simply had to have the fourth person be the observer. I did not document this, so I have made photographs to illustrate ex post facto.

I started, “Paper master, you have a whole sheet of paper! One whole sheet of paper! And you have two friends. You feel like being kind, sharing is caring, so why don’t you give them each a third of your paper.” The paper master divided the paper in thirds, tore it, and shared their paper. Then I said: “Your friends loveeeed their paper gift. They want just a little bit more. Why don’t you give them each some more… Maybe divide what you have left into thirds so you can keep some too.” And the paper master took what they had, divided it into thirds, and shared it. To the friends, I said: “Hey, friends, how many of you LOOOOOVE all these presents you’re getting? WHO WANTS MORE?” and the friends replied “MEEEEEEEEEEEEEEE!” “Paper master, your friends are getting greedy. And they demand more paper. They said you must give them more or they won’t be your friends. And you are peer pressured into giving them more. So divide what little you have left and hand it to them.” They do. “Now do it again. Because your greedy friends are greedy and evil, but they’re still your friends.” “Again.” “Again.”

Here we stop. The friends have a lot of slips of paper of varying sizes. The paper master has a tiny speck. I ask the class: “If we continue this, how much paper is the paper master going to eventually end up with?” (Discussion ensues about whether the answer is 0 or super duper super close to 0.) I ask the class: “If we continue this, how much paper are each of the friends going to have?” (A more lively short discussion ensues… Eventually they agree… each friend will have about 1/2 the paper, since there was a whole piece of paper to start, each friend gets the same amount, and the paper master has essentially no paper left.) I then go to the board. I write $\frac{1}{2}=$ and then I say: “How much paper did you get in your initial gift, friends?” I write $\frac{1}{2}=\frac{1}{3}+$ and then we continue, until I have: $\frac{1}{2}=\frac{1}{3}+\frac{1}{9}+\frac{1}{27}+\frac{1}{81}+...$ Ooohs and aahs.
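For anyone who wants the algebra behind the reveal (my addition, not in the original post): each friend's share is a geometric series with first term $a=\frac{1}{3}$ and common ratio $r=\frac{1}{3}$, and any geometric series with $|r|<1$ sums to $\frac{a}{1-r}$, so

$\sum_{n=1}^{\infty}\left(\frac{1}{3}\right)^n=\frac{1/3}{1-1/3}=\frac{1}{2}$

which is exactly what the torn-paper demonstration shows.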
Next year I am going to task each student to do this with two friends or people from their family, and have them write down their friends/family member’s reactions… I love this.
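For reference, the algebra hiding behind the paper-tearing is just the standard geometric series formula, applied with ratio 1/3:

$\displaystyle \sum_{n=1}^{\infty} ar^{n-1} = \frac{a}{1-r} \quad \text{for } |r|<1, \qquad \text{so} \qquad \frac{1}{3}+\frac{1}{9}+\frac{1}{27}+\cdots = \frac{1/3}{1-1/3} = \frac{1}{2}.$

Each friend's pile is a geometric series with first term $a=\frac{1}{3}$ and common ratio $r=\frac{1}{3}$, which is exactly why the activity lands on $\frac{1}{2}$.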
# L'Hopital's Limit From A Phone Game

One of my recently graduated students was playing a phone riddle game that required an answer to this level: Turns out the solution to the game was simply the word "answer," as the instruction was just to enter that in. However, the integral and the limit shown there are possible to evaluate, and that is what this post is going to focus on. I don't know what the game is called so I can't link it. If I find out, I'll edit this and link it in. So here's the problem in case you can't see the screenshot:

# Question

$\lim_{x\rightarrow \infty}\frac{\int_1^x \left[t^2\left(e^\frac{1}{t}-1\right)-t\right]\;dt}{x^2\ln\left(1+\frac{1}{x}\right)}$

# Solution

The first thing to notice is that $$\displaystyle\lim_{x\rightarrow \infty} \left(1+\frac{1}{x}\right)^x$$ is actually the definition of the constant $$e$$. By using logarithm laws, we can manipulate the denominator to become just $$x$$:

\begin{align*}&\lim_{x\rightarrow \infty}\frac{\int_1^x \left[t^2\left(e^\frac{1}{t}-1\right)-t\right]\;dt}{x\ln\left(1+\frac{1}{x}\right)^x}\\=&\lim_{x\rightarrow \infty}\frac{\int_1^x \left[t^2\left(e^\frac{1}{t}-1\right)-t\right]\;dt}{x\ln\left(e\right)}\\=&\lim_{x\rightarrow \infty}\frac{\int_1^x \left[t^2\left(e^\frac{1}{t}-1\right)-t\right]\;dt}{x}\end{align*}

Now this is a $$\displaystyle \frac{\infty}{\infty}$$ limit, so we can use L'Hopital's rule to help us evaluate it. If we differentiate the numerator, we apply the Fundamental Theorem of Calculus and the integral sign just goes away:

\begin{align*}&\lim_{x\rightarrow \infty}\frac{x^2\left(e^\frac{1}{x}-1\right)-x}{1}\\=&\lim_{x\rightarrow \infty}x^2\left(e^\frac{1}{x}-1\right)-x\\=&\lim_{x\rightarrow \infty}x^2e^\frac{1}{x}-x^2-x\end{align*}

We can algebraically manipulate this into a fraction:

$\lim_{x\rightarrow \infty}\frac{e^\frac{1}{x}-1-\frac{1}{x}}{\frac{1}{x^2}}$

This is a $$\displaystyle\frac{0}{0}$$ limit, so we can apply L'Hopital's Rule:

$\lim_{x\rightarrow \infty} \frac{\frac{-1}{x^2}e^{\frac{1}{x}}+\frac{1}{x^2}}{\frac{-2}{x^3}}$

Simplifying this and then using L'Hopital's Rule once more:

\begin{align*}&\lim_{x\rightarrow \infty} \frac{-e^{\frac{1}{x}}+1}{\frac{-2}{x}}\\=&\lim_{x\rightarrow \infty}\frac{\frac{1}{x^2}e^{\frac{1}{x}}}{\frac{2}{x^2}}\end{align*}

Finally, the factors of $$\displaystyle\frac{1}{x^2}$$ cancel out, leaving us with:

$\lim_{x\rightarrow \infty} \frac{e^\frac{1}{x}}{2}=\frac{1}{2}$

A way to check this answer is to chuck it into a graphing application and check the horizontal asymptote: Yep. The answer is definitely $$\frac{1}{2}$$.
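If you would rather check numerically than graphically, here is a minimal Python sketch (my own addition, not part of the game or the post) that evaluates the ratio at increasingly large $$x$$. It uses `np.expm1` to avoid catastrophic cancellation in $$e^{1/t}-1$$ for large $$t$$:

```python
import numpy as np
from scipy.integrate import quad

def integrand(t):
    # expm1 computes e**(1/t) - 1 accurately even when 1/t is tiny
    return t**2 * np.expm1(1.0 / t) - t

def ratio(x):
    num, _ = quad(integrand, 1.0, x, limit=200)
    den = x**2 * np.log1p(1.0 / x)   # x**2 * ln(1 + 1/x)
    return num / den

for x in (1e2, 1e4, 1e6):
    print(x, ratio(x))   # the values creep toward 0.5 as x grows
```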
# maths question help

#### chrstinee
##### Member
A satellite dish is in the shape of a parabola with equation y = −3x^2 + 6, and all dimensions are in metres. Find w, the width of the dish, to 1 decimal place.

#### jimmysmith560
##### Phénix Trilingue
You need to find the roots by making y = 0:

-3x^2 + 6 = 0
3x^2 - 6 = 0
x^2 - 2 = 0
x^2 = 2
x = ±√2 (approximately -1.4 and 1.4)

The width of the satellite dish would then be the distance between the roots:

w = 1.4 + 1.4 = 2.8 metres (1 decimal place)

Last edited:

#### chrstinee
##### Member
(quoting the earlier draft of the answer, which hedged "I think you need to find the roots by making y = 0? … Not 100% sure though.")

omg thank you so much

#### CM_Tutor
##### Moderator
@jimmysmith560's answer is correct and the question is badly written. For a start, a satellite dish is a three-dimensional object, and further it is finite, so saying it has an equation like the one given is problematic. The question should be re-written as something like:

A satellite dish is in the shape of a parabola rotated about its axis. With all distances in metres, a cross-section of the dish through its vertex matches that part of $y=-3x^2+6$ that is on or above the x-axis. Find w, the width of the dish at its widest point, in exact form and to 1 decimal place.

The answer would be $w=2\sqrt{2}\ \text{m}\approx 2.8\ \text{m}$

#### jimmysmith560
##### Phénix Trilingue
Thank you!
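A one-line numeric check of the accepted answer (a sketch I am adding, assuming the rim of the dish sits at y = 0):

```python
import math

# Roots of -3x**2 + 6 = 0 are x = ±sqrt(2); the width is the distance between them.
w = 2 * math.sqrt(2)
print(w, round(w, 1))   # 2.8284271247461903, 2.8 (metres)
```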
# How to find minimum time needed for Hamiltonian evolution?

Database search can be looked upon as Hamiltonian evolution, with kinetic and potential energy operators. Let the evolution follow the Schrödinger equation: $$i\frac{d}{dt}|\psi⟩= H|\psi⟩$$ with $$H = E|s⟩⟨s| + E|t⟩⟨t|$$ and some constant $$E$$. How can we find the minimum time $$T$$ required for the initial state $$|s⟩ = \frac{1}{\sqrt{N}}\sum_{i=1}^N |i⟩$$ to evolve to the final state $$|t⟩$$?

• So do you mean $|s\rangle$ and $|t\rangle$ have the same eigenvalue? Oct 23 at 18:24
• @ZhiboYang Yes, the eigenvalue is going to be the same Oct 24 at 9:53
• What did you try so far? Did you calculate the unitary time evolution? Apply it to the initial state? Oct 24 at 12:10
• It sounds like you are asking how long adiabatic evolution takes. This is given by the spectral gap of the Hamiltonian from start to finish. Oct 27 at 3:34
• The usual series for the exponential function also works with matrices Oct 27 at 16:25
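No answer was posted in the thread, but this is the Hamiltonian from Farhi and Gutmann's "analog analogue" of Grover search, for which the known answer is $$T = \pi\sqrt{N}/(2E)$$ when $$⟨s|t⟩ = 1/\sqrt{N}$$: the dynamics stay in the two-dimensional span of $$|s⟩$$ and $$|t⟩$$, where $$H$$ generates a rotation with angular frequency $$E⟨s|t⟩$$, and a quarter turn of the overlap takes time $$\pi/(2E⟨s|t⟩)$$. Below is a small numerical sketch (my own illustration, not from the thread) that builds $$H$$ for $$N = 16$$ and checks that the evolved state reaches $$|t⟩$$ at that time:

```python
import numpy as np
from scipy.linalg import expm

N, E = 16, 1.0
s = np.ones(N) / np.sqrt(N)        # uniform superposition |s>
t = np.zeros(N); t[0] = 1.0        # marked basis state |t>
H = E * (np.outer(s, s) + np.outer(t, t))

T = np.pi * np.sqrt(N) / (2 * E)   # minimum time from the Farhi-Gutmann analysis
psi = expm(-1j * H * T) @ s        # unitary evolution of |s> for time T
print(abs(np.vdot(t, psi))**2)     # ~1.0: the state has rotated onto |t>
```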
Title: p-adic families of Klingen Eisenstein series for symplectic groups

Speaker: Dr. Zheng Liu (IAS)

Time: 2018-7-24, 9:30-11:30

Place: N818

Abstract: Eisenstein ideals can be used to give lower bounds for Selmer groups. I will explain a construction of ordinary families of Klingen Eisenstein series associated to ordinary families on symplectic groups. The degenerate terms in the Fourier expansions are related to corresponding p-adic L-functions. I will also mention some preliminary computation of the non-degenerate terms.
# Tag Info

8
Since nobody has mentioned it yet... V8 introduced the undocumented flag Debug$ExamineCode. When it is set to true, the information functions will display the definitions of ReadProtected symbols:

Debug$ExamineCode = True
??BinLists

It is sometimes useful to suppress some of the internal package names to make it easier to scan the definitions. Here ...

7
You could do it using the following:

SetOptions[EvaluationNotebook[], InputAliases -> {"bn" -> FormBox[TemplateBox[{"\[SelectionPlaceholder]", "\[Placeholder]"}, "Binomial"], InputForm]}]

Then enter escbnesc to get a placeholder that you can tab through: Then enter the numbers and press shift-enter to evaluate.

Edit
To make the output appear ...

5
If you're going to be doing this programmatically (as opposed to typing directly into a Text cell) and doing it often, then perhaps a function that is typeset as parentheses would be useful.

Format[parens[e_]] := DisplayForm@RowBox[{"(", MakeBoxes@e, ")"}]
equation[Subscript[OverBar[parens[X/Y]], "geom"] == foo]

If you need to adjust the space around ...

4
Perhaps:

BarChart[stackData, ChartLabels -> {Placed[Style[#, FontSize -> Scaled[.025]]& /@ newElementsSmall, Above]}, PlotLabel -> Style["6M1@32Ag", Bold, 50], ChartLegends -> {"eV"}]

3
Numerical approach according to Jens' comment:

pde = D[u[x, t], t] - 0.2 D[u[x, t], {x, 2}] == 0;
g[x_] := 1/(1 + x^2)^0.25;
sol = NDSolve[{pde, u[x, 0] == g[x], u[-10, t] == u[10, t] == g[10]}, u[x, t], {x, -10, 10}, {t, 0, 20}]
Plot3D[u[x, t] /. sol, {x, -10, 10}, {t, 0, 20}, AxesLabel -> {Style["x", Italic, Red, 20], ...

3
What you are looking for is LineSpacing; you can use it this way:

1; 2; 3;
SetOptions[EvaluationCell[], LineSpacing -> {2, 0}]

Or via OptionsInspector:

1
OK, I think I can give you some tips about performance here. There are a couple things you do that really tend to slow you down, and which I would describe as Mathematica "anti-patterns". In particular, building arrays by repeatedly calling AppendTo is likely to be really slow (the time taken will grow quadratically in the length of the list), and accessing ...

1
Something like this:

StyleBox[RowBox[{SubscriptBox[RowBox[{"(", FractionBox[OverscriptBox["X", "_"], "y"], ")"}], "geom"], "=", FractionBox[SubscriptBox[OverscriptBox["X", "_"], "geom"], SubscriptBox[OverscriptBox["Y", "_"], "geom"]]}], SpanMaxSize -> Infinity] // DisplayForm

It gives this: I am not ...

1
Give this a try:

Row[{"1.", Invisible["space"], "(a)", Invisible["space"], "Find ", HoldForm@TraditionalForm@Integrate["x"^2, x]}]

which produces: and this:

Row[{"2.", Invisible["space"], "(a)", Invisible["space"], "Find ", HoldForm@TraditionalForm@Integrate[Style[1/"x", 18], x]}]

produces this: You can alter the size of the fraction within ...
# Rate of Convergence for Trapezoidal Method - System of Linear ODEs

I have been given a system of two linear ODEs that I am to solve using various analytical and numerical methods:

$u_{1}' = u_{1}$
$u_{2}' = u_{1} - u_{2}$

I have solved these equations using analytical methods (Duhamel's principle, exponential matrix, etc.) and was able to solve them numerically using Euler's method, showing that its rate of convergence is one. One of the methods I am supposed to use is the trapezoidal method, and I need to show numerically that its rate of convergence is two. Using the trapezoidal formula, I rewrite the equations to solve for the next step. Since both ODEs are linear, I can explicitly solve for the subsequent step:

$u_{1}^{n+1} = u_{1}^{n}\,\frac{1+0.5\,dt}{1-0.5\,dt}$
$u_{2}^{n+1} = \frac{u_{2}^{n} + 0.5\,dt\,(u_{1}^{n}-u_{2}^{n}+u_{1}^{n+1})}{1+0.5\,dt}$

To determine the rate of convergence, I wrote a MATLAB script to calculate the slope of the logarithm of the error against the logarithm of the number of intervals.

uexact = [exp(1),1.5431];
N = 2.^[5:10];
T = 1.0;
for i = 1:length(N)
    u1(1) = 1;
    u2(1) = 1;
    dt = T/N(i);
    for j = 1:N(i)
        u1(j+1) = u1(j) * (1+0.5*dt)/(1-0.5*dt);
        u2(j+1) = (u2(j) + 0.5*dt*(u1(j)-u2(j)+u1(j+1))) / (1+0.5*dt);
    end
    u = [u1(end),u2(end)];
    err(i) = norm(u-uexact);
    clear u1 u2
end
loglog(N,err)
m = -(log(err(end))-log(err(1))) ...
    /(log(N(end))-log(N(1)));

However, when I do this, I calculate a rate of convergence of 0.7204. I have no clue what I am doing wrong. Any obvious mistakes or mistakes in the thinking process? I went ahead and tried the case where I only have one ODE, used the trapezoidal method, and calculated a rate of convergence of 2. For the system, however, I've had no luck. Thank you for the help.

• What norm function are you using? Does it approximate an integral norm? The easiest to use would be the max norm, max(abs(u-uexact)). Or just consider the error at the end of the integration interval. – LutzL Sep 17 '15 at 22:36
• I was using the two norm. I went ahead and used the max norm and the error at the end of the integration interval and obtained the same rate of convergence as before. No luck so far. – Mario Sep 18 '15 at 0:40
• Order 2 makes one expect that for N=2^10=1024 the error is of the magnitude 1e-5 to 1e-6. However, your exact value does not have that accuracy. Try with more digits or use the formula as in the first component. – LutzL Sep 18 '15 at 8:36

Your exact value in the second component is not exact enough: the true value is $u_2(1)=\cosh(1)=(e+e^{-1})/2\approx 1.5430806348$, so 1.5431 is off by about $2\times 10^{-5}$. With N ranging from 32 to 1024, one would expect for an order 2 method and a tame ODE system errors from 1e-3 to 1e-6. However, if the supposedly exact value has an error of magnitude 1e-4 to 1e-5, then the integration error at the last iterations gets unnecessarily perturbed.
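To see the expected second-order convergence once the reference value is accurate, here is a small Python port of the experiment (my sketch, not from the thread), using the exact values $u_1(1)=e$ and $u_2(1)=\cosh(1)$:

```python
import numpy as np

u_exact = np.array([np.e, np.cosh(1.0)])   # exact: u1(t) = e^t, u2(t) = cosh(t)
errs = []
Ns = 2 ** np.arange(5, 11)
for N in Ns:
    dt = 1.0 / N
    u1, u2 = 1.0, 1.0
    for _ in range(N):
        u1_new = u1 * (1 + 0.5 * dt) / (1 - 0.5 * dt)               # trapezoidal step for u1' = u1
        u2 = (u2 + 0.5 * dt * (u1 - u2 + u1_new)) / (1 + 0.5 * dt)  # trapezoidal step for u2' = u1 - u2
        u1 = u1_new
    errs.append(np.linalg.norm(np.array([u1, u2]) - u_exact))

# Observed order between successive grid refinements; each entry should be close to 2.
print(np.log2(np.array(errs[:-1]) / np.array(errs[1:])))
```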
# Understanding epsilon-delta def of limits

1. Jun 21, 2008

### foxjwill

1. The problem statement, all variables and given/known data

I'm having trouble conceptually understanding the epsilon-delta definition of limits. How do you use it to disprove a limit? For example, how would you use it to show that $$\lim_{x \to 1} x^2 \neq 2$$?

2. Relevant equations

3. The attempt at a solution

2. Jun 21, 2008

### Littlepig

Imagine you've got the function x plotted in a graph (you can actually draw it on a piece of paper), and you want to show that $$\lim_{x \to 1} x \neq 2$$. First, set an interval ]1-delta, 1+delta[ on the x axis and look at the image of 1+delta; epsilon will be |f(1+delta)-f(1)|. (In general, it will be max{|f(1+delta)-f(1)|, |f(1-delta)-f(1)|}; in this case the function is similar to the right and left of f(1), so only one part is needed because both are equal.)

Now, you see that if the delta is very big, say 4, then epsilon=4, right?! So (for the second part of the condition, the epsilon part) |2-f(1)|=2-1=1<epsilon: the condition is verified. (2 because you are testing the condition at the point 2; you are asking: "is it really true that the condition is verified for EVERY delta??????")

Note however that if you put delta lower, say 0.5, epsilon is 0.5 and |2-f(1)|=|2-1|>epsilon. There exists a delta that doesn't verify the condition, which implies it isn't true that for every delta, there exists... bla bla bla... so the limit isn't 2.

Hope this helps solve your problem... and hope i didn't make any mistake...^_^

Last edited: Jun 21, 2008

3. Jun 21, 2008

### m_s_a

Perhaps in this way the idea can be clear: (attached image not preserved)

4. Jun 21, 2008

### foxjwill

but how do you put that into an epsilon-delta proof?

5. Jun 22, 2008

### m_s_a

(attached images not preserved)

No requirement on the value of f(x0). The smaller values for delta & epsilon lead to ........ what?
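One concrete way to turn the discussion above into a proof (my sketch): negate the definition and exhibit a single $\varepsilon$ that defeats every $\delta$.

$$\text{Take } \varepsilon = \tfrac{1}{4}. \text{ Given any } \delta>0, \text{ choose } x \text{ with } 0<|x-1|<\min(\delta,\tfrac{1}{4}).$$

$$\text{Then } x\in\left(\tfrac{3}{4},\tfrac{5}{4}\right) \implies x^2<\tfrac{25}{16} \implies |x^2-2| > 2-\tfrac{25}{16}=\tfrac{7}{16}>\tfrac{1}{4}=\varepsilon.$$

So no $\delta$ can force $|x^2-2|<\varepsilon$ for this $\varepsilon$, and therefore $\lim_{x \to 1} x^2 \neq 2$.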
# Square Root by the Long Division Method

Long division is a very common method to find the square root of a number. Generally, prime factorization is used for finding the square roots of small numbers, but when the numbers are large that method becomes lengthy and difficult. The long division method gives the exact square root of any perfect square, and it approximates the square root of an imperfect square (such as 2, 3 or 5) to as many decimal places as we like.

Steps involved in square root by the long division method:

Step 1: Place a bar over every pair of digits, starting from the unit digit. (For digits to the left of the decimal point, pair from right to left; for digits after the decimal point, pair from left to right.) If the number of digits is odd, the left-most single digit also gets a bar.

Step 2: Think of the largest number whose square is less than or equal to the first group. Take it as the divisor and as the first digit of the quotient, subtract its square, and bring down the next pair to the right of the remainder to form the new dividend.

Step 3: Double the quotient obtained so far and write it with a blank unit digit. Fill the blank with the largest digit such that the new divisor multiplied by that digit is less than or equal to the current dividend. Subtract, and bring down the next pair.

Step 4: Repeat until all pairs are used (or until the desired number of decimal places is reached). The quotient is the square root.

Why the digit-by-digit method works: if the root found so far is a and the next digit is b, then (10a + b)² = 100a² + (20a + b)b. Bringing down a pair multiplies the remainder by 100, and "double the quotient" supplies the 20a in the factor (20a + b).

Worked example: find the square root of 1471369 by the long division method.
a. Pair the digits: 1 47 13 69.
b. The largest square not exceeding 1 is 1; the quotient starts with 1 and the remainder is 0. Bring down 47.
c. Double the quotient to get 2_; choose the digit 2, since 22 × 2 = 44 ≤ 47. The remainder is 3; bring down 13 to get 313.
d. Double the quotient 12 to get 24_; choose 1, since 241 × 1 = 241 ≤ 313. The remainder is 72; bring down 69 to get 7269.
e. Double the quotient 121 to get 242_; choose 3, since 2423 × 3 = 7269 exactly, leaving remainder 0.
The square root of 1471369 by the long division method is therefore 1213.

Another example: √11025. Pairing gives 1 10 25; the quotient digits come out as 1, 0 and 5 (since 205 × 5 = 1025), so √11025 = 105.

Related problems solved the same way:
- Find the least four-digit perfect square: 1000 is 24 less than 32² = 1024, so 24 must be added to 1000, and the answer is 1024.
- The digit in the unit's place in the square root of 15876: since 15876 = 126², the answer is (C) 6.
- For an imperfect square such as 3, the method continues past the decimal point and gives √3 ≈ 1.732 to three decimal places. Similarly, for 5 the first digit is 2 (because 2² = 4 < 5), and continuing yields √5 ≈ 2.236.
- Q: Is 2352 a perfect square? A: 2352 = 2⁴ × 3 × 7², and the factor 3 is unpaired, so it is not a perfect square. Multiplying by 3 gives 7056 = 84², the smallest multiple of 2352 that is a perfect square.
- The same grouping idea extends to cube roots: to find ³√15625, group the digits in threes from the decimal point; the result is 25, since 25³ = 15625.
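The algorithm above translates directly into code. Here is a minimal Python sketch of the digit-by-digit (long division) method for non-negative integers, added for illustration; feeding it n·10^(2k) yields k decimal digits of √n, matching the "continue past the decimal point" step:

```python
def long_division_sqrt(n):
    """Return (root, remainder) with root**2 + remainder == n."""
    # Split n into base-100 "pairs", most significant first.
    pairs = []
    while n > 0:
        pairs.append(n % 100)
        n //= 100
    pairs.reverse()

    root, remainder = 0, 0
    for pair in pairs:
        remainder = remainder * 100 + pair      # bring down the next pair
        # Largest digit d with (20*root + d) * d <= remainder, i.e. the
        # "double the quotient, fill in the blank" step.
        d = 9
        while (20 * root + d) * d > remainder:
            d -= 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d                    # append d to the quotient
    return root, remainder

print(long_division_sqrt(1471369))    # (1213, 0): 1213**2 == 1471369
print(long_division_sqrt(11025))      # (105, 0)
print(long_division_sqrt(3 * 10**6))  # root 1732, i.e. sqrt(3) = 1.732...
```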
# How do you find 2p^2 + 3p - 4 less - 2p^2 - 3p + 4? Jun 10, 2015 $2 {p}^{2} + 3 p - 4$ less $- 2 {p}^{2} - 3 p + 4$ is $2 \left(2 {p}^{2} + 3 p - 4\right)$. The expression you wrote means $2 {p}^{2} + 3 p - 4 - \left(- 2 {p}^{2} - 3 p + 4\right) =$ $2 {p}^{2} + 3 p - 4 + 2 {p}^{2} + 3 p - 4 =$ $4 {p}^{2} + 6 p - 8 =$ $2 \left(2 {p}^{2} + 3 p - 4\right)$.
# Functorial properties of blow-up

Let $X, Y$ be projective algebraic surfaces with isolated singularities. Suppose they are diffeomorphic to each other. Denote by $\phi$ the diffeomorphism from $X$ to $Y$. Then does there exist a blow up $X', Y'$ of $X, Y$, respectively, such that there exists a diffeomorphism $\phi':X' \to Y'$ which commutes with $\phi$ (via the natural maps from $X'$, $Y'$ to $X$, $Y$ respectively arising from the blow up)? What happens if $\dim X=\dim Y >2$? What happens if we assume that $X, Y$ lie as fibers over closed points of a family parametrized by a quasi-projective variety $B$ which is simply connected under the analytic topology (the underlying field is always $\mathbb{C}$)?

- what is a diffeomorphism of singular varieties? – YangMills Jan 16 '13 at 14:58
- Perhaps you know this already, but the universal property of blow-ups should give you what you need in the case that $\phi$ is algebraic. See e.g. Hartshorne Corollary II.7.15. – Daniel Loughran Jan 16 '13 at 22:58
- you don't say whether your surfaces are blown up at smooth or at singular points, nor whether the blowups are smooth. without these restrictions, your question can probably be answered as below for the smooth case. – roy smith Jan 18 '13 at 21:19
- this seems to be a question of whether blowing up is a diffeomorphism invariant. I.e. we all think blowing up means replacing a point by the tangent vectors at that point, and these should be diffeomorphism invariant. if a diffeomorphism induces a linear map on the tangent space, hence on the tangent cone at a point, it should induce a map on blowups. I would consult Whitney for a discussion of various notions of tangent vectors, but I will guess yes. – roy smith Jan 19 '13 at 6:23
- of course, the obvious definition of diffeomorphism near a singular point is a function that extends to a differentiable function nearby on some embedded copy, and has such an inverse. – roy smith Jan 19 '13 at 6:26

Let $f\colon X \to Y$ be a smooth map between smooth varieties. Suppose that $X_0\subset X$ and $Y_0\subset Y$ are smooth subvarieties, such that $X_0=f^{-1}(Y_0)$. If $f$ induces a fiber-wise injection from the normal bundle of $X_0$ to the normal bundle of $Y_0$, then $f$ induces a map from the blow-up of $X$ at $X_0$ to the blow-up of $Y$ at $Y_0$. This does not apply to your example, where $X_0$ consists of isolated singularities. But I would think that in any situation where you can sensibly say that "$f$ induces a monomorphism of normal bundles", $f$ will induce a map of blow-ups.
# Percent to Decimal Calculator

LAST UPDATE: March 2nd, 2020

## How to Convert Percent to Decimal

To convert from a percentage to a decimal number, divide the percentage number by 100%.

## Example

55% converted to decimal: $\frac{55\%}{100\%} = 0.55$

67.332% converted to decimal: $\frac{67.332\%}{100\%} = 0.67332$

73.2554% converted to decimal: $\frac{73.2554\%}{100\%} = 0.732554$

## What is a percentage?

A percentage is a number expressed as a fraction of 100. If a number is 100% (100 percent), then it is a "whole" – the same as one. If a number is 50%, then it is a half – the same as 0.5 or 1/2. If a number is 400%, then it is 4 times, the same as 4.

## What is a decimal?

A decimal is a portion of a number. The first decimal place represents 1/10th of one, the second decimal place represents 1/100th of one, the third decimal place represents 1/1000th of one, etc.
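The rule is mechanical enough to state in code; a tiny sketch (my addition):

```python
def percent_to_decimal(p):
    """Convert a percentage (e.g. 55 for 55%) to its decimal form."""
    return p / 100

print(percent_to_decimal(55))       # 0.55
print(percent_to_decimal(67.332))   # 0.67332
print(percent_to_decimal(400))      # 4.0 -- 400% means 4 times
```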
# Prove $\epsilon$-$\delta$ definition of continuity implies the open set definition for real function

I need to prove that the $\epsilon$-$\delta$ definition of continuity implies the open set definition of continuity for a real function. Here's my attempt.

For any basis element $V = (a, b)$ in the range, for each $f(x) \in V$, let $\epsilon = \min(f(x) - a, b - f(x))$. Then for any $x$ with $f(x) \in V$, according to the $\epsilon$-$\delta$ definition of continuity there must exist a $\delta$ such that the open set $U_x : (x - \delta, x + \delta) \subset f^{-1}((f(x) - \epsilon, f(x) + \epsilon)) \subset f^{-1}(V)$.

In conclusion, $$f^{-1}(V) = \bigcup_{x \in f^{-1}(V)} U_x .$$ $f^{-1}(V)$ is an open set. Then for any open set $W$, $$f^{-1}(W) = \bigcup_{V \subset W} f^{-1}(V)$$ so $f^{-1}(W)$ is an open set. So for any open $W$, $f^{-1}(W)$ is also an open set. This is exactly the open set definition of continuity. QED.

- @Arturo Magidin: Thanks for your edit, the question is much clearer. – Jichao Sep 19 '11 at 1:47
- Not every open set of the real line is of the form $(a,b)$; though it suffices to consider such sets, you need to argue why this is the case. In addition, a single element of $V$ need not be the image of a single $x$ in the domain; but you are considering only a single $x$. What if $f(x)=f(y)$ but $x\neq y$? You seem to only select a single $U_x$ for each element of $V$, so one of the two might be "left out". The main idea is right, but the devil is in the details, as usual. – Arturo Magidin Sep 19 '11 at 1:52
- $(a, b)$ is the basis for the usual topology of $\mathbb{R}$, and if the claim holds for every basis element, it is true for any open set. This is a theorem proved in my textbook. Anyway, it is easy to prove: any open set is a union of basis elements, so $f^{-1}(V)$ is a union of the open sets corresponding to each basis element. – Jichao Sep 19 '11 at 1:58
- Like I said, you can certainly justify it and it is not hard, but you need to do so. Your first line simply reads "for any open set $V$", but not every open set is of this form. So one needs to explain why it is enough to consider open sets of that form. – Arturo Magidin Sep 19 '11 at 1:59
- (+1) for your continued involvement (as evidenced by comments above) – The Chaz 2.0 Sep 19 '11 at 2:39

Since the OP's work was reviewed already in the comments, I collect together the entire argument in case future visitors find it useful.

If $f$ is $\varepsilon$-$\delta$-continuous, then it is open-set-continuous. Suppose $f : \mathbb R \to \mathbb R$ is continuous by the $\varepsilon$-$\delta$ definition; we want to prove that it is continuous by the open sets definition. Take an arbitrary open set $V \subseteq \mathbb R$; we want to prove $f^{-1}(V)$ is open. This is true if $f^{-1}(V)$ is empty, so assume $x \in f^{-1}(V)$. Since $f(x) \in V$ and $V$ is open, there exists some $\varepsilon > 0$ such that $(f(x) - \varepsilon, f(x) + \varepsilon) \subseteq V$. By continuity at $x$, there exists some $\delta > 0$ such that $f((x - \delta, x + \delta)) \subseteq (f(x) - \varepsilon, f(x) + \varepsilon) \subseteq V$, that is, $(x - \delta, x+ \delta) \subseteq f^{-1}(V)$. Hence $x$ is an interior point of $f^{-1}(V)$. Since this is true for arbitrary $x \in f^{-1}(V)$, it follows that $f^{-1}(V)$ is open.

If $f$ is open-set-continuous, then it is $\varepsilon$-$\delta$-continuous. Suppose $f : \mathbb R \to \mathbb R$ is continuous by the open sets definition; we want to prove that it is continuous by the $\varepsilon$-$\delta$ definition. Fix $x \in \mathbb R$ and $\varepsilon > 0$.
Then $(f(x) - \varepsilon, f(x) + \varepsilon)$ is an open set in $\mathbb R$ (containing $f(x)$). By continuity, $U = f^{-1}((f(x) - \varepsilon, f(x) + \varepsilon))$ is an open set in $\mathbb R$. It's easy to see that $U$ contains $x$; then $x$ is an interior point of $U$ by openness of $U$. That is, there exists $\delta >0$ such that $(x - \delta, x+\delta) \subseteq U = f^{-1}((f(x) - \varepsilon, f(x) + \varepsilon))$. Then it follows that $f((x - \delta, x+\delta)) \subseteq (f(x) - \varepsilon, f(x) + \varepsilon)$. -
Lessons I've learned from software engineering are uniformly cynical:

• Abstraction almost always fails; you can't build something on top of a system without understanding how that system works.
• Bleeding-edge methods are a recipe for disaster.
• Everything good is hype and you'll only ever get a small fraction of the utility being promised.

Imagine my surprise, then, when the Z3 constraint solver from Microsoft Research effortlessly dispatched the thorniest technical problem I've been given in my short professional career.

## The Problem

Microsoft Azure has a lot of computers in its datacenters - on the order of millions. For security, each of these computers is configured with a firewall which accepts communication from a comparatively small set of authorized servers. These firewalls aren't created by hand - they're automatically generated during deployment. We wanted to update the firewalls from a confused overlapping whitelist/blacklist system to a simple whitelist. Any change in this domain carries substantial risk:

• Accidentally allowing connections from computers which should be blocked, a significant security issue.
• Accidentally blocking connections from computers which should be allowed, a significant availability issue.

Thus we wanted strong guarantees that firewalls generated with the new method blocked & allowed the exact same connections as firewalls generated with the old method. This is very difficult; the naive solution of checking all 2^80 packets against both firewalls would take a computer 38 million years to finish at a brisk rate of one billion packets per second! There's another way: give the problem to the Z3 theorem prover from Microsoft Research, and it checks equivalence in a fraction of a second. How?

## Indistinguishable from magic

Z3 is variously described as a theorem prover, SMT solver, and constraint solver. I like to think of it as an Oracle. Let's think - if we had access to an Oracle, what question would we ask to solve the firewall equivalence problem?

First: we require an understanding of firewalls and packets. Every piece of information sent over the network is encapsulated in a packet. Like a proper piece of correspondence, packets contain two important pieces of information: where they came from, and where they're going. We'll say each address is a single number, like 50 or 167. So, the packet [23, 75] came from source address 23 and is heading to destination address 75 (in real life these numbers are IPv4 or IPv6 addresses, but these are just [very large] numbers and so the simplification works).

Firewalls are lists of rules saying which packets to block and which to allow. Rules are expressed in terms of source and destination address ranges, plus a decision - block or allow. We say a packet matches a rule if the packet's source address is in the rule's source range and destination in the destination range, in which case the decision is applied to that packet. For example, we can write a rule to block any packets originating in the address range 100-150 and headed to an address in 60-70. This rule would block the packet [125, 65].

Rules can overlap. If a packet matches both a block and allow rule, the block rule 'wins' and the packet is blocked. If a packet doesn't match any rules, it is blocked by default. A packet only gets through if it matches at least one allow rule and zero block rules.

Let's return to the question of the question. What should we ask?
I submit the following: Oracle, what is a packet blocked by one firewall but allowed by the other? If the Oracle replies there is no such packet, we know the firewalls are equivalent (hurrah!). If it replies with an example of such a packet, we know the firewalls are not equal and have a really great lead on figuring out why they aren’t equal. Z3, for all its amazing capabilities, can’t understand queries in plain English. The problem now becomes stating our question in a form understood by Z3: first-order logic. ## The right question First-order logic is not scary. We require only two logical operators: and and not. To ask our question of Z3, we must do three things: 1. Tell Z3 what a packet is. 2. Tell Z3 what a firewall is. 3. Tell Z3 we want to find a packet blocked by one firewall but allowed by the other. Z3 works with popular programming languages Java, C#, C++, and Python, but for simplicity we’ll use its native language. You can follow along on the Z3 web demo here: http://rise4fun.com/Z3 The first task is easy. Our simple packets have two fields: source, and destination. We describe this to Z3 by declaring integer constants src and dst. Z3’s mission is to find values for these constants - once all the wiring is in place, their values tell us a packet accepted by one firewall but not the other. Here’s how you declare the constants in Z3: (declare-const src Int) (declare-const dst Int) The second task is the real meat of the problem: tell Z3 what a firewall is. First, let’s define what it means for a packet to match a rule: (define-fun matches ((srcLower Int) (srcUpper Int) (dstLower Int) (dstUpper Int)) Bool (and (<= srcLower src) (<= src srcUpper) (<= dstLower dst) (<= dst dstUpper)) ) This function is true if src is in the rule’s source address range and dst is in the rule’s destination range. Otherwise it is false. Now we define what it means for a firewall to accept or block a packet. Let’s use a simple firewall with two rules, an allow rule and a block rule. The firewall function returns true if the packet is allowed, and false if it is blocked. Here’s how we state this to Z3, using the match function defined above: (define-fun firewall1 () Bool (and (matches 0 10 20 30) (not (matches 5 10 25 30)) ) ) Z3 is now a firewall expert. On to the third task! ## Satisfaction The third task is to actually verify firewall equivalence. First, define a second firewall so we have something to check: (define-fun firewall2 () Bool (and (matches 1 10 20 30) (not (matches 5 10 25 30)) ) ) It’s time! We have everything we need. Let’s ask Z3 the question - what is a packet blocked by one firewall but allowed by the other? (assert (not (= firewall1 firewall2))) (check-sat) (get-model) Click the run button in the web demo and… boom! Z3 finds us a packet - for me, [0, 20] - that is accepted by firewall1 but blocked by firewall2. This works for any two firewalls! All we have to do is change the contents of the firewall1 and firewall2 functions. This all seems a bit magical, so let’s break down the last step. First, we assert the two firewalls are not equivalent. Then we ask Z3 to check this assertion with the check-sat instruction! This has two possible outcomes: 1. The firewalls are not equivalent: check-sat returns satisfiable, and the get-model instruction provides a packet demonstrating firewall inequivalence. 2. The firewalls are equivalent: check-sat returns unsatisfiable and no packet is produced. Either way, we have our answer. 
Z3 ruthlessly tracks down values of src and dst representing a packet accepted by one firewall but not the other. This is very fast: clever logic manipulation rules enable Z3 to process 300-rule firewalls in a fraction of a second.

## In the real world

Real packets don't exactly correspond to our model. Instead of simple numbers, they use IPv4 or IPv6 source & destination addresses, port numbers, and protocol numbers. Z3 handles these with no real changes to the core logic; Z3 bitvectors are a drop-in replacement type for the address numbers in our model. The actual firewall-checking code used inside Azure has been open-sourced, and is available here.

## Beyond the firewall

This problem hardly taxes Z3's ability, which lists nonlinear constraints on real numbers in its repertoire. Despite its expansive set of use cases, Z3 significantly decreased problem complexity compared to other approaches. The code was simple to write and easy to understand. If you're facing a thorny problem that seems like it could be stated in terms of satisfiability, I very much recommend giving Z3 a try. For an in-depth whitepaper on this topic, see "Checking Cloud Contracts in Microsoft Azure" by Nikolaj Bjørner and Karthick Jayaraman.
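Since Z3 also works from Python, here is a minimal z3py translation of the same toy equivalence check (my sketch; the SMT-LIB encoding above is the post's, this version is mine):

```python
from z3 import Int, And, Not, Solver, sat

src, dst = Int('src'), Int('dst')

def matches(src_lo, src_hi, dst_lo, dst_hi):
    # True when the packet (src, dst) falls inside the rule's ranges.
    return And(src_lo <= src, src <= src_hi, dst_lo <= dst, dst <= dst_hi)

# The same two toy firewalls as in the SMT-LIB encoding above.
firewall1 = And(matches(0, 10, 20, 30), Not(matches(5, 10, 25, 30)))
firewall2 = And(matches(1, 10, 20, 30), Not(matches(5, 10, 25, 30)))

s = Solver()
s.add(firewall1 != firewall2)   # ask for a packet the firewalls treat differently
if s.check() == sat:
    print("not equivalent; witness packet:", s.model())
else:
    print("firewalls are equivalent")
```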
# scipy.sparse.linalg.lsqr

scipy.sparse.linalg.lsqr(A, b, damp=0.0, atol=1e-08, btol=1e-08, conlim=100000000.0, iter_lim=None, show=False, calc_var=False, x0=None)

Find the least-squares solution to a large, sparse, linear system of equations. The function solves Ax = b or min ||b - Ax||^2 or min ||Ax - b||^2 + d^2 ||x||^2.

The matrix A may be square or rectangular (over-determined or under-determined), and may have any rank.

1. Unsymmetric equations -- solve A*x = b
2. Linear least squares -- solve A*x = b in the least-squares sense
3. Damped least squares -- solve

       ( A      )       ( b )
       ( damp*I ) * x = ( 0 )

   in the least-squares sense

Parameters

A : {sparse matrix, ndarray, LinearOperator}
    Representation of an m-by-n matrix. Alternatively, A can be a linear operator which can produce Ax and A^T x using, e.g., scipy.sparse.linalg.LinearOperator.
b : array_like, shape (m,)
    Right-hand side vector b.
damp : float
    Damping coefficient.
atol, btol : float, optional
    Stopping tolerances. If both are 1.0e-9 (say), the final residual norm should be accurate to about 9 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of damp.)
conlim : float, optional
    Another stopping tolerance. lsqr terminates if an estimate of cond(A) exceeds conlim. For compatible systems Ax = b, conlim could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. Maximum precision can be obtained by setting atol = btol = conlim = zero, but the number of iterations may then be excessive.
iter_lim : int, optional
    Explicit limitation on number of iterations (for safety).
show : bool, optional
    Display an iteration log.
calc_var : bool, optional
    Whether to estimate diagonals of (A'A + damp^2*I)^{-1}.
x0 : array_like, shape (n,), optional
    Initial guess of x, if None zeros are used. New in version 1.0.0.

Returns

x : ndarray of float
    The final solution.
istop : int
    Gives the reason for termination. 1 means x is an approximate solution to Ax = b. 2 means x approximately solves the least-squares problem.
itn : int
    Iteration number upon termination.
r1norm : float
    norm(r), where r = b - Ax.
r2norm : float
    sqrt( norm(r)^2 + damp^2 * norm(x)^2 ). Equal to r1norm if damp == 0.
anorm : float
    Estimate of Frobenius norm of Abar = [[A]; [damp*I]].
acond : float
    Estimate of cond(Abar).
arnorm : float
    Estimate of norm(A'*r - damp^2*x).
xnorm : float
    norm(x)
var : ndarray of float
    If calc_var is True, estimates all diagonals of (A'A)^{-1} (if damp == 0) or more generally (A'A + damp^2*I)^{-1}. This is well defined if A has full column rank or damp > 0. (Not sure what var means if rank(A) < n and damp = 0.)

Notes

LSQR uses an iterative method to approximate the solution. The number of iterations required to reach a certain accuracy depends strongly on the scaling of the problem. Poor scaling of the rows or columns of A should therefore be avoided where possible. For example, in problem 1 the solution is unaltered by row-scaling. If a row of A is very small or large compared to the other rows of A, the corresponding row of ( A b ) should be scaled up or down.

In problems 1 and 2, the solution x is easily recovered following column-scaling. Unless better information is known, the nonzero columns of A should be scaled so that they all have the same Euclidean norm (e.g., 1.0). In problem 3, there is no freedom to re-scale if damp is nonzero. However, the value of damp should be assigned only after attention has been paid to the scaling of A.
The parameter damp is intended to help regularize ill-conditioned systems, by preventing the true solution from being very large. Another aid to regularization is provided by the parameter acond, which may be used to terminate iterations before the computed solution becomes very large. If some initial estimate x0 is known and if damp == 0, one could proceed as follows: 1. Compute a residual vector r0 = b - A*x0. 2. Use LSQR to solve the system A*dx = r0. 3. Add the correction dx to obtain a final solution x = x0 + dx. This requires that x0 be available before and after the call to LSQR. To judge the benefits, suppose LSQR takes k1 iterations to solve A*x = b and k2 iterations to solve A*dx = r0. If x0 is “good”, norm(r0) will be smaller than norm(b). If the same stopping tolerances atol and btol are used for each system, k1 and k2 will be similar, but the final solution x0 + dx should be more accurate. The only way to reduce the total work is to use a larger stopping tolerance for the second system. If some value btol is suitable for A*x = b, the larger value btol*norm(b)/norm(r0) should be suitable for A*dx = r0. Preconditioning is another way to reduce the number of iterations. If it is possible to solve a related system M*x = b efficiently, where M approximates A in some helpful way (e.g. M - A has low rank or its elements are small relative to those of A), LSQR may converge more rapidly on the system A*M(inverse)*z = b, after which x can be recovered by solving M*x = z. If A is symmetric, LSQR should not be used! Alternatives are the symmetric conjugate-gradient method (cg) and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that applies to any symmetric A and will converge more rapidly than LSQR. If A is positive definite, there are other implementations of symmetric cg that require slightly less work per iteration than SYMMLQ (but will take the same number of iterations). References 1 C. C. Paige and M. A. Saunders (1982a). “LSQR: An algorithm for sparse linear equations and sparse least squares”, ACM TOMS 8(1), 43-71. 2 C. C. Paige and M. A. Saunders (1982b). “Algorithm 583. LSQR: Sparse linear equations and least squares problems”, ACM TOMS 8(2), 195-209. 3 M. A. Saunders (1995). “Solution of sparse rectangular systems using LSQR and CRAIG”, BIT 35, 588-604. Examples >>> from scipy.sparse import csc_matrix >>> from scipy.sparse.linalg import lsqr >>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]], dtype=float) The first example has the trivial solution [0, 0] >>> b = np.array([0., 0., 0.], dtype=float) >>> x, istop, itn, normr = lsqr(A, b)[:4] The exact solution is x = 0 >>> istop 0 >>> x array([ 0., 0.]) The stopping code istop=0 returned indicates that a vector of zeros was found as a solution. The returned solution x indeed contains [0., 0.]. The next example has a non-trivial solution: >>> b = np.array([1., 0., -1.], dtype=float) >>> x, istop, itn, r1norm = lsqr(A, b)[:4] >>> istop 1 >>> x array([ 1., -1.]) >>> itn 1 >>> r1norm 4.440892098500627e-16 As indicated by istop=1, lsqr found a solution obeying the tolerance limits. The given solution [1., -1.] obviously solves the equation. The remaining return values include information about the number of iterations (itn=1) and the remaining difference of left and right side of the solved equation. 
The final example demonstrates the behavior in the case where there is no solution for the equation:

>>> b = np.array([1., 0.01, -1.], dtype=float)
>>> x, istop, itn, r1norm = lsqr(A, b)[:4]
>>> istop
2
>>> x
array([ 1.00333333, -0.99666667])
>>> A.dot(x)-b
array([ 0.00333333, -0.00333333,  0.00333333])
>>> r1norm
0.005773502691896255

istop indicates that the system is inconsistent and thus x is rather an approximate solution to the corresponding least-squares problem. r1norm contains the norm of the minimal residual that was found.
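The warm-start recipe described in the Notes (compute a residual for a known estimate x0, solve for a correction dx, then add it back) can be written out directly. A minimal sketch, reusing A and the inconsistent b from the final example; the helper names x0 and dx are just illustrative:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import lsqr

A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]], dtype=float)
b = np.array([1., 0.01, -1.], dtype=float)

x0 = np.array([1., -1.])   # a rough initial estimate
r0 = b - A.dot(x0)         # step 1: residual of the estimate
dx = lsqr(A, r0)[0]        # step 2: solve A*dx = r0
x = x0 + dx                # step 3: corrected solution

# Should agree with the direct call lsqr(A, b)[0] up to the tolerances.
print(x)
```

Since version 1.0.0 the same effect is available without the manual steps, by passing the estimate as the x0 argument of lsqr.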
## Kumano-go, Hitoshi

Author ID: kumano-go.hitoshi
Published as: Kumano-go, Hitoshi; Kumano-go, H.; Kumano-Go, Hitoshi; Kumano-Go, H.
External Links: MacTutor · Wikidata · GND · IdRef

Documents Indexed: 33 Publications since 1959, including 1 Book
Biographic References: 1 Publication
Co-Authors: 11 Co-Authors with 13 Joint Publications; 26 Co-Co-Authors

### Co-Authors

20 single-authored; 3 Taniguchi, Kazuo; 2 Nagase, Michihiro; 1 Hayakawa, Kantaro; 1 Ichinose, Wataru; 1 Ise, Kusuo; 1 Kitada, Hitoshi; 1 Koshiba, Zen'ichiro; 1 Matsuda, Michihiko; 1 Shinkai, Kenzo; 1 Tozaki, Yoshiharu; 1 Tsutsumi, Chisato

### Serials

12 Proceedings of the Japan Academy; 5 Osaka Journal of Mathematics; 3 Funkcialaj Ekvacioj. Serio Internacia; 3 Journal of the Mathematical Society of Japan; 3 Communications in Partial Differential Equations; 2 Communications on Pure and Applied Mathematics; 2 Osaka Mathematical Journal; 1 Journal of the Faculty of Science. Section I A

### Fields

19 Partial differential equations (35-XX); 6 Operator theory (47-XX); 3 Harmonic analysis on Euclidean spaces (42-XX); 2 Numerical analysis (65-XX); 1 Global analysis, analysis on manifolds (58-XX)

### Citations contained in zbMATH Open

28 Publications have been cited 398 times in 319 Documents. Publications, ordered by citations:

- A family of Fourier integral operators and the fundamental solution for a Schrödinger equation. Zbl 0472.35034 (1981)
- Remarks on pseudo-differential operators. Zbl 0179.42201, Kumano-go, H. (1969)
- Algebras of pseudo-differential operators. Zbl 0206.10501, Kumano-go, H. (1970)
- Pseudo-differential operators. (Updated transl. from the Japanese by the author, Remi Vaillancourt, and Michihiro Nagase.) Zbl 0489.35003, Kumano-go, Hitoshi (1982)
- Pseudo-differential operators with non-regular symbols and applications. Zbl 0395.35089, Kumano-Go, Hitoshi; Nagase, Michihiro (1978)
- Oscillatory integrals of symbols of pseudo-differential operators on $R^n$ and operators of Fredholm type. Zbl 0272.47032, Kumano-go, Hitoshi; Taniguchi, Kazuo (1973)
- Complex powers of hypoelliptic pseudo-differential operators with applications. Zbl 0264.35019, Kumano-go, Hitoshi; Tsutsumi, Chisato (1973)
- Multi-products of phase functions for Fourier integral operators with an application. Zbl 0383.35073, Kumano-go, Hitoshi; Taniguchi, Kazuo; Tozaki, Yoshiharu (1978)
- Pseudo-differential operators of multiple symbol and the Calderón-Vaillancourt theorem. Zbl 0294.35068, Kumano-go, H. (1975)
- A calculus of Fourier integral operators on $R^n$ and the fundamental solution for an operator of hyperbolic type. Zbl 0331.42012, Kumano-go, Hitoshi (1976)
- Fundamental solution for a hyperbolic system with diagonal principal part. Zbl 0431.35062, Kumano-go, Hitoshi (1979)
- $L^p$-theory of pseudo-differential operators. Zbl 0206.10404, Kumano-go, H.; Nagase, M. (1970)
- Pseudo-differential operators and the uniqueness of the Cauchy problem. Zbl 0157.16901, Kumano-go, H. (1969)
- Fourier integral operators of multiphase and the fundamental solution for a hyperbolic system. Zbl 0568.35092, Kumano-go, Hitoshi; Taniguchi, Kazuo (1979)
- A problem of Nirenberg on pseudo-differential operators. Zbl 0186.16405, Kumano-go, Hitoshi (1970)
- On the uniqueness of the solution of the Cauchy problem and the unique continuation theorem for elliptic equation. Zbl 0106.07602, Kumano-Go, Hitoshi (1962)
- Fundamental solutions for operators of regularly hyperbolic type. Zbl 0351.35058, Kumano-go, Hitoshi (1978)
- Factorizations and fundamental solutions for differential operators of elliptic-hyperbolic type. Zbl 0374.35031, Kumano-go, Hitoshi (1976)
- A family of pseudo-differential operators and a stability theorem for the Friedrichs scheme. Zbl 0342.35056, Koshiba, Zen'ichiro; Kumano-go, Hitoshi (1976)
- On an example of non-uniqueness of solutions of the Cauchy problem for the wave equation. Zbl 0148.08502, Kumano-go, H. (1963)
- The characterization of differential operators with respect to the characteristic Cauchy problem. Zbl 0148.34104, Kumano-go, Hitoshi; Shinkai, Kenzo (1966)
- Complex powers of a system of pseudo-differential operators. Zbl 0247.47047, Hayakawa, Kantaro; Kumano-go, Hitoshi (1971)
- On the characteristic Cauchy problem for partial differential equations. Zbl 0142.36903, Kumano-go, H.; Ise, K. (1965)
- On propagation of regularity in space-variables for the solutions of differential equations with constant coefficients. Zbl 0145.35301, Kumano-go, H. (1966)
- On the uniqueness for the solution of the Cauchy problem. Zbl 0154.35304, Kumano-go, H. (1963)
- On the index of hypoelliptic pseudo-differential operators on $R^n$. Zbl 0252.35066, Kumano-go, Hitoshi (1972)
- On singular perturbation of linear partial differential equations with constant coefficients. II. Zbl 0100.30204, Kumano-Go, Hitoshi (1959)
- On the propagation of singularities with infinitely many branching points for a hyperbolic equation of second order. Zbl 0463.35050, Ichinose, Wataru; Kumano-go, Hitoshi (1981)

### Cited in 83 Serials

25 Communications in Partial Differential Equations; 21 Proceedings of the Japan Academy; 21 Journal of Pseudo-Differential Operators and Applications; 19 Journal of Functional Analysis; 19 Publications of the Research Institute for Mathematical Sciences, Kyoto University; 16 Journal of Differential Equations; 15 Proceedings of the Japan Academy. Series A; 10 Communications in Mathematical Physics; 10 Journal of Mathematical Analysis and Applications; 9 The Journal of Fourier Analysis and Applications; 8 Bulletin des Sciences Mathématiques; 6 Proceedings of the American Mathematical Society; 6 Transactions of the American Mathematical Society; 5 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV; 5 Mathematische Zeitschrift; 5 Siberian Mathematical Journal; 4 Annali di Matematica Pura ed Applicata. Serie Quarta; 4 Integral Equations and Operator Theory; 4 Tohoku Mathematical Journal. Second Series; 4 Annals of Global Analysis and Geometry; 3 Archive for Rational Mechanics and Analysis; 3 Mathematical Notes; 3 Advances in Mathematics; 3 Annales de l'Institut Fourier; 3 Duke Mathematical Journal; 3 Journal of Soviet Mathematics; 3 Mathematische Annalen; 3 Monatshefte für Mathematik; 3 Osaka Journal of Mathematics; 3 Stochastic Processes and their Applications; 3 Annales de l'Institut Henri Poincaré. Physique Théorique; 2 Journal d'Analyse Mathématique; 2 Arkiv för Matematik; 2 Reviews in Mathematical Physics; 2 Collectanea Mathematica; 2 Dissertationes Mathematicae; 2 Inventiones Mathematicae; 2 Kodai Mathematical Journal; 2 Manuscripta Mathematica; 2 Mathematische Nachrichten; 2 Nagoya Mathematical Journal; 2 Japan Journal of Industrial and Applied Mathematics; 2 The Journal of Geometric Analysis; 2 Mediterranean Journal of Mathematics; 2 Annali della Scuola Normale Superiore di Pisa. Scienze Fisiche e Matematiche. III. Ser; 2 Annali dell'Università di Ferrara. Sezione VII. Scienze Matematiche; 1 Israel Journal of Mathematics; 1 Journal of Mathematical Physics; 1 Rocky Mountain Journal of Mathematics; 1 Transport Theory and Statistical Physics; 1 Acta Mathematica; 1 Calcolo; 1 Czechoslovak Mathematical Journal; 1 Functional Analysis and its Applications; 1 Journal of the Mathematical Society of Japan; 1 Mathematica Scandinavica; 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods; 1 Rendiconti del Seminario Matematico della Università di Padova; 1 Annales de la Faculté des Sciences de Toulouse. Série V. Mathématiques; 1 Bulletin of the Korean Mathematical Society; 1 Annales de l'Institut Henri Poincaré. Analyse Non Linéaire; 1 Probability Theory and Related Fields; 1 Revista Matemática Iberoamericana; 1 Journal of Scientific Computing; 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série; 1 Bulletin of the American Mathematical Society. New Series; 1 Potential Analysis; 1 Russian Journal of Mathematical Physics; 1 Advances in Applied Clifford Algebras; 1 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations; 1 Mathematical Physics, Analysis and Geometry; 1 Communications in Contemporary Mathematics; 1 Comptes Rendus. Mathématique. Académie des Sciences, Paris; 1 Journal of the Institute of Mathematics of Jussieu; 1 Analysis in Theory and Applications; 1 Analysis and Applications (Singapore); 1 Communications in Mathematical Analysis; 1 Banach Journal of Mathematical Analysis; 1 Asian-European Journal of Mathematics; 1 Science China. Mathematics; 1 Advances in Pure and Applied Mathematics; 1 Axioms; 1 Stochastic and Partial Differential Equations. Analysis and Computations

### Cited in 31 Fields

254 Partial differential equations (35-XX); 97 Operator theory (47-XX); 56 Global analysis, analysis on manifolds (58-XX); 44 Functional analysis (46-XX); 36 Quantum theory (81-XX); 26 Harmonic analysis on Euclidean spaces (42-XX); 20 Probability theory and stochastic processes (60-XX); 9 Numerical analysis (65-XX); 7 Differential geometry (53-XX); 7 Fluid mechanics (76-XX); 6 Topological groups, Lie groups (22-XX); 5 Integral equations (45-XX); 4 Mechanics of deformable solids (74-XX); 3 Abstract harmonic analysis (43-XX); 3 Statistical mechanics, structure of matter (82-XX); 2 Real functions (26-XX); 2 Potential theory (31-XX); 2 Dynamical systems and ergodic theory (37-XX); 2 Difference and functional equations (39-XX); 2 Geophysics (86-XX); 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX); 1 Measure and integration (28-XX); 1 Several complex variables and analytic spaces (32-XX); 1 Ordinary differential equations (34-XX); 1 Approximations and expansions (41-XX); 1 Manifolds and cell complexes (57-XX); 1 Statistics (62-XX); 1 Mechanics of particles and systems (70-XX); 1 Optics, electromagnetic theory (78-XX); 1 Relativity and gravitational theory (83-XX); 1 Systems theory; control (93-XX)
### Author Topic: Think i've broke my RTF Wizard X220 HELP :(  (Read 1106 times)

#### NickFromNorthWales

##### Think i've broke my RTF Wizard X220 HELP :(
« on: April 14, 2018, 12:07:24 »

I've broken my Wizard and I didn't even get to fly it! I had a SYMA but wanted to fly acro, so I'm new to FPV. I got it RTF and first bound it to the flight controller, got it up and running in Betaflight, but then the receiver tab wasn't showing any sign of input, on screen or physically; it wouldn't even arm or respond to stick input...
Tried a few different videos, and I think I'm losing my mind. I was watching a video a couple of nights ago and tried following it (not the above video); it told me to unplug and re-plug on the other side of the board like he does at 5:57 in that video, then to change where the other end plugged into my FS-iA6B to going across CH5, CH6 and B/VCC at the top. I moved it back to how it was recently, but when I was putting it back I put the receiver end the wrong way around (black cable on the inside) a few times. I've flashed successfully since, to try and solve the problem, but now the LED on my receiver isn't coming on with power input. I think I've broken either the cable head or the receiver... What should I do? And where can I buy another cable? Or any other ideas?

#### Cheredanine

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #1 on: April 14, 2018, 13:58:59 »

Hi, I think that is a bit garbled by predictive text; assume you are using FlySky? If you changed receivers as well as wiring, you may need to change protocols within Betaflight. Please try to re-articulate accurately and concisely what you are trying to do, and someone will give you specific instructions (probably one of the FlySky users). Otherwise, if you provide power and the receiver doesn't power up, then yes, shopping time for a new receiver (I believe it should come with a cable set). Off the top of my head, Unmanned Tech and possibly HobbyRC stock FlySky; the advantage of either of these is that they are reputable UK stockists who can provide support. There are probably others; otherwise you are talking HobbyKing or Banggood, and again one of the FlySky boys (Ched, I am talking about you :) ) may help out.

#### ched999uk

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #2 on: April 14, 2018, 15:33:20 »

You called, Cheredanine. OK Nick, to help we need some info. Can you post the exact link of the Wizard you bought? Hopefully we can see the spec. I guess it came with an A8S rx, which is small but has very poor range. Let us know the spec, and maybe some pics of the FC and rx, and I am sure we can get you up and running.

#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #3 on: April 16, 2018, 18:25:03 »

https://m.banggood.com/Eachine-Wizard-X220-FPV-Racer-Blheli_S-Naze32-6DOF-5_8G-48CH-200MW-700TVL-Camera-w-FlySky-I6-RTF-p-1077100.html?rmmds=myorders

That's the exact one I got. The problem is my receiver doesn't seem to be receiving any power from my flight controller. Can anyone recommend what I should buy to replace it (without soldering haha)? I'm in work at the moment so I'll put up some pictures tonight/tomorrow... The help is much appreciated guys!
#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #4 on: April 16, 2018, 18:31:41 »

I was thinking of getting that as a replacement and an upgrade from my last stock receiver, but I think I need a male-to-female cable, which I have no idea where to find haha.

#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #5 on: April 16, 2018, 18:40:06 »

And are there any videos or websites you can recommend for a newbie, so that I get a better understanding of my drone and its components and what they do? I'm slowly picking things up, but I don't think I'm on a very fast learning curve at the moment.

#### Cheredanine

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #6 on: April 16, 2018, 18:42:22 »

Holy 2015! It has a Naze32 in it and is using PWM!!! You are gonna need a new rx; Ched will sort that, and probably switch it to iBus at the same time. Your other problem is that the flight controller is so old it won't run the latest releases. PS: ask away, we will happily spout shed loads (I will give you a brief - ha ha - overview later this evening in here).

#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #7 on: April 16, 2018, 18:48:03 »

Haha, no idea if that's a good or a bad thing. Cheers mate, legend.

#### ched999uk

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #8 on: April 16, 2018, 18:58:20 »

OK, so looking at the listing it is an F3 flight controller (says Naze32 in the image, but lower down says F3) and a FlySky iA6B receiver. Their choice of rx seems weird as it's big. Does it have a case on and look like this: https://goo.gl/KVtq3v? From here: https://www.rcgroups.com/forums/showthread.php?2761374-Eachine-Wizard-X220-ARF-F3-Updated-Version-(review) it looks like the receiver is connected via 3 wires in PPM mode. An image of yours and the wires might be handy. The 3 wires should be Black = 0V, Red = 5V and White = PPM signal. The plug goes into the rx vertically on Ch1 (PPM), if that's how it's connected, with the white wire closest to the writing on the top (green and black). I don't know if you have a multimeter? If you do, plug in the battery (PROPS OFF), set the multimeter to read DC volts (about the 20V range) and put the black probe on black and the red on red. Hopefully the reading should be 5V. If it is, then the FC is giving out the correct voltage to power the rx. If not, then you could have killed the FC. Check if there are any lights on the FC? If it has lights on, that is good. The iA6B is quite a big rx, but it does PWM, PPM and iBus. As the FC should be set to PPM (Receiver tab in Betaflight), it's easiest to stick with that. Actually, on the Config page of Betaflight, where it says receiver, which one is selected: RX_PPM or RX_SERIAL? So: first, if you have a multimeter, check the voltage; second, in Betaflight on the Config page, check which receiver mode is selected. Once you have those answers, let us know and we will progress. If you want a new rx you can get the one I linked to above, but it will take a few weeks!
Alternatively, search for iA6B on Amazon or eBay and you should be able to get one, but you never know, we might be able to sort the issue out without a new rx. It might just be a config issue.

#### Cheredanine

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #9 on: April 16, 2018, 19:04:40 »

Flyingtech.co.uk for the rx, mate.

#### ched999uk

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #10 on: April 16, 2018, 19:05:00 »

Quote: "I was thinking of getting that as a replacement and an upgrade from my last stock receiver, but I think I need a male-to-female cable, which I have no idea where to find haha."

That receiver is good, I have one. It does come with cables, but the little plugs on it don't directly connect to the cable you have! You can connect it via PPM or the better iBus, but that will require soldering. iBus is a better way for the rx to talk to the flight controller, but at this stage it doesn't really matter. If you use iBus then the tx and the FC need to be configured differently (depending on how it was set up originally). Realistically, I would suggest you get a soldering iron, some solder (NOT LEAD FREE) and some heat shrink. That way, when you crash, you will have some of the tools you need to fix things. There are a lot of different components that never come with matching plugs/sockets or are solder-only, like motors to ESCs, battery connectors etc.

#### ched999uk

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #11 on: April 16, 2018, 19:14:33 »

Just to let you know, the only reason I suggested Amazon or eBay is that you may only need an rx, and some of the P&P on quad sites makes things expensive. You will get much better support from a proper 'quad shop', and there are a good few around; just ask if you want a recommendation.

#### Cheredanine

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #12 on: April 16, 2018, 20:15:12 »

Interlude - a brief and dramatically oversimplified summary of the parts in a quad and what they do. And when I say oversimplified I really mean it; just something as simple as the frame can spark debate: what type of carbon, how is it cut (direction). So this is really just to get you familiar with the basic parts.

Frame - the carbon and metal bits - pretty self-explanatory. Good frames are well thought out, protecting components and giving places to mount stuff. At first you tend to think your frame needs to be heavy to be durable, but actually lighter quads carry less momentum into crashes so don't need to be as strong; also the grade of material affects both weight and strength. The frame also governs the layout of the motors: racing quads tend to favour "stretched X" layouts, where the distance between the front and back motors is longer than the distance between the left and right. Acro quads tend to be true X or even short X.

Stack - the electrical boards. I am gonna list them out. In many cases multiple boards are combined; this tends to be cheaper to buy and lighter, as well as easier to build, but it means if one component fails you have to replace the lot, so it can be more expensive. It also gives less isolation and can result in interference on the FPV feed.
Flight controller - the brains. They are generally referred to by the CPU generation (F1, F3, F4 and F7): F1s (Naze32, for example) are pretty dead, F3 is struggling, F4 is the standard nowadays, and F7 optimisation is working its way in. This board has sensors built into it, which we are gonna call the gyro, and runs flight control software like Betaflight. It basically takes the signal from the receiver telling it what you are commanding the quad to do, takes a data set from its on-board gyro, works out what it needs to do to make the quad behave, and sends those signals to the ESCs.

Power distribution board (PDB) - this has a connector to the lipo and sends power out to many, but not all, other components. It may well have a regulator on it so it can provide 5V as well as battery voltage, but many flight controllers also have this nowadays. It is very common for the PDB to be integrated into the flight controller.

ESCs (electronic speed controllers) - one for each motor. These take battery power from the PDB and, based on the signal from the flight controller, change the speed of the motor each one is controlling. The protocol from the flight controller nowadays tends to be digital (DShot), as this gives a more stable and accurate signal than the old PWM-based protocols (PWM, Oneshot and Multishot). ESCs typically run a piece of software called BLHeli; there are a number of flavours of this: the original BLHeli, which no one uses nowadays; BLHeli_S, what you have, OK; and BLHeli_32, the current generation.

Motors - pretty self-evident, though knowing what motors to use with which quad and props to produce a given flying style is in the area of complex. Size is expressed as a 4-figure number, e.g. 2206: the first two digits are the width of the stator in mm, the second two are the height in mm. The speed of the motor is expressed in Kv, which is rpm per volt.

OSD (on-screen display) - these used to be separate or on the PDB, but nowadays tend to be on the flight controller or built into the cam.

Rx (receiver) - will need to be from the same manufacturer as your radio. It takes the signal from your radio, translates it and tells your flight controller what to do; this is unidirectional, from the rx to the FC. There are various protocols for this: in the old days PWM was used, with one wire for each channel; then PPM took over as a serial protocol with one wire for all channels; now digital protocols (SBUS, iBus etc.) are used - quicker and more stable. The conversation can be bidirectional, with the FC passing telemetry data back to the rx, which will transmit back to the tx, but this is largely pointless for FPV.

Camera - lots of important detail here: latency, lens, TVL, sensor type, light handling etc. It takes an image, most can add the lipo voltage and flight time to it, then passes it on, eventually to the vtx, although usually via the flight controller so it can do the OSD bit.

Vtx (video transmitter) - takes the video signal and transmits it to your goggles; antenna selection is an art in its own right.

Props - just too complex. What is good for one quad may well not be good for another; what is good for one pilot may well not be for another; etc.

Lipo - generally most people are flying 4-cell (4S), which means 16.8V when fully charged.
They have a basic summary of how much charge they can hold (e.g. 1300 mAh means the pack can provide 1.3 amps for an hour) and a C rating, which is multiplied by the mAh to give you a max current (always over-hyped by manufacturers); so a 1300 mAh 50C pack can provide 65 amps max. What you actually try to draw is governed by your throttle, your motors, your props and the weight/momentum.

How is that for a primer?
« Last Edit: April 16, 2018, 20:21:06 by Cheredanine »

#### Saleem

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #13 on: April 17, 2018, 00:43:21 »

Woooh, long post alert. I started reading then skipped to the end; does it count?

#### hoverfly

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #14 on: April 17, 2018, 08:52:03 »

Doesn't count, you missed out... Motor Kv has nothing to do with the applied voltage. Instead, Kv has to do with the back-EMF. The motor Kv constant is the reciprocal of the back-EMF constant: $K_v = \frac{1}{K_e}$. So Kv tells us the relationship between motor speed and generated back-EMF. A 2300 Kv motor will generate a 1 V back-EMF when the motor is rotating at 2300 RPM. At 23,000 RPM that motor will generate 10 V.

#### Cheredanine

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #15 on: April 17, 2018, 09:24:53 »

Lol, yep, lots of stuff oversimplified in there in an attempt not to make it 15 pages long :)

#### hoverfly

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #16 on: April 17, 2018, 10:17:48 »

Good guide, covering more or less everything you need to get started. I get peed off getting directed to Spewtube for info, trying to filter out the numbnuts who know f/a and drawl on without imparting any really useful info. It seems people today can't follow written instructions, e.g. press button (A) to increase the contrast, press button (B) to decrease the contrast; FFS, they have to watch a vid of a d/head pressing a button. Now all we need is a quick guide to setting up filters, or whether they are any use anyway... No hurry: the Chameleons are flying fine, same for the Air-Rios, but the Sparrow is being a bit of a pig.

#### Two-Six
##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #17 on: April 17, 2018, 11:31:18 »

Quote: "$K_v = \frac{1}{K_e}$"

Eh?

#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #18 on: April 17, 2018, 12:36:07 »

Yea, this is the receiver I had, looks exactly like that, but I think I've fried it... Because I think I've fried it, if I buy these two I should be able to connect to my flight controller with this receiver and cable, shouldn't I?

https://m.banggood.com/10pcs-22AWG-...rference-Servo-Extension-Cable-p-1050822.html
https://m.banggood.com/Flysky-X6B-2...r-AFHDS-i6s-i6-i6x-Transmitter-p-1101513.html

#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #19 on: April 17, 2018, 12:52:38 »

Should be with me in 5-9 days.

#### hoverfly

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #20 on: April 17, 2018, 13:10:25 »

Those leads won't be compatible with the bottom receiver, only the one at the top, as it has Dupont connectors, 3-pin like the servo leads. It depends on the original system; the receiver at the top has a PPM output using only one wire, so if you just replace that you won't need all the other kit.

#### Stactix

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #21 on: April 17, 2018, 13:54:14 »

Quote: "I've broken my Wizard and I didn't even get to fly it! ... I think I've broken either the cable head or the receiver... What should I do? And where can I buy another cable? Or any other ideas?"

Some pictures of how you have it set up will help. I had a Wizard and used those receivers; they aren't bad in my experience. Probably the only part of the Wizard that I didn't have to replace due to a fault! I had my receiver cable connected to the connector nearest the camera, and the other end connected to CH1/PPM I believe (pretty sure it can only go one way round). Will you be flying FPV or just line of sight?
If FPV, I'd recommend looking into a different camera, as the Wizard's cam isn't nice. (RunCam Swift 2s are nice; no soldering needed, although I'd recommend soldering so you can get VBat, which monitors your battery voltage.) For the first 20 or so flights I used a cheap battery checker stuck on top of my battery, connected to the white pins; it'll beep when the battery runs low. With these lipos you want to keep them above 3.3-3.5 volts per cell; do not run them down till they run out, as it can damage the battery and increase the chance of a fire. I'd also advise getting some different props, and a spare motor or two / ESCs. I was in a similar position to you around a year ago and wanted to avoid soldering; now I love soldering despite being a clumsy baboon. My Wizard has had 2 motor changes, 2 ESC changes (one 3 flights in), two vtx changes, one FC change and two different cameras! It does not look like a Wizard anymore.

#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #22 on: April 20, 2018, 10:26:01 »

Quote: "... Popping the 5-incher's FC onto the 6-incher would seem to be the obvious choice... unless... there might be an underlying reason the FC died, in which case you could feasibly fry the other one too :eek:"

Someone in a different forum said the above... so I was wondering...

https://m.banggood.com/Flysky-2_4G-...t-With-iBus-Port-p-978603.html?rmmds=myorders
https://m.banggood.com/Flysky-X6B-2...i6x-Transmitter-p-1101513.html?rmmds=myorders

I've got all of the above on order now: the stock receiver, an upgraded receiver and the required cable to connect them... I was just wondering, when these arrive, will they also break if I connect them? Or, if done correctly, should they be fine? I was going to follow this video.

#### ched999uk

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #23 on: April 20, 2018, 20:46:26 »

I haven't watched the vid, but as long as the flight controller is working OK and providing the correct voltage (5V) to the rx, it should be fine. Unfortunately you cannot rely on the colour code of the wires. You need to double-check that the correct pin at one end of the plug goes to the correct pin at the other end, i.e. check that the 0V, 5V and signal wires from the rx go to the correct pins on the FC BEFORE powering it on.

#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #24 on: April 25, 2018, 11:28:11 »

Alright, so I got my receiver connected to the flight controller and bound to the radio. Everything seemed okay in Betaflight (followed Joshua Bardwell's video); this time it was showing the stick inputs on the screen. I managed to set an arm switch, put everything back together with props on, went outside, armed and gave it throttle: it didn't come off the floor or anything. I was running everything on PPM. Any chance any of you are free around 7-10pm to talk me through the right settings for getting it into the air and setting up the FPV goggles?

#### ched999uk

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #25 on: April 25, 2018, 13:14:29 »

When you armed, did the props spin? When you throttled up, did the props spin faster and it just didn't lift off? If so, you might have the wrong props on the motors, i.e. the motors are spinning the props to push the quad into the ground, not lift it up.
Normally the writing on the props faces up. So, if the motors did spin, double-check you have the correct props on the correct motors.

#### Stactix

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #26 on: April 25, 2018, 13:21:00 »

Quote: "When you armed, did the props spin? ... double-check you have the correct props on the correct motors."

I can't count how many times I've done this!

#### NickFromNorthWales

##### Re: Think i've broke my RTF Wizard X220 HELP :(
« Reply #27 on: April 25, 2018, 18:46:05 »

They spun when armed, and I'm sure they were on the correct way: top left + bottom right clockwise, and top right + bottom left anti-clockwise. I remember it spinning faster when I pushed the throttle up (was quite baked at midnight), but I also remember it spinning faster when I moved the right stick for some reason... but again, no lift. I moved all the sticks in every direction... no lift. Then it unbound from the receiver, and at this point I decided to hit the hay.
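As a quick sanity check of the numbers quoted in this thread: hoverfly's back-EMF example and the C-rating arithmetic from Cheredanine's primer can be reproduced in a few lines. A minimal sketch; the variable names are just illustrative:

```python
# Kv is rpm per volt of back-EMF; the back-EMF constant is Ke = 1/Kv.
kv = 2300                                   # rpm per volt
rpm = 23000
back_emf = rpm / kv                         # -> 10.0 V, hoverfly's example

# Lipo C rating: max current = capacity (in Ah) * C.
capacity_mah = 1300
c_rating = 50
max_amps = capacity_mah / 1000 * c_rating   # -> 65.0 A, as in the primer

# A 4S pack fully charged at 4.2 V per cell.
full_voltage = 4 * 4.2                      # -> 16.8 V

print(back_emf, max_amps, full_voltage)
```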
### Problem 6-32

6-32. Rewrite the expression in exponential form.

Hints:
- Change the problem to exponential form.
- Write the argument in exponential form.
- What is the domain of a log function?
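In general, the hints rely on the standard equivalence between logarithmic and exponential form:

$$\log_b(y) = x \iff b^x = y, \qquad b > 0,\ b \neq 1.$$

Since $b^x > 0$ for every real $x$, the argument of a logarithm must be positive, so the domain of $\log_b$ is $(0, \infty)$.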
# UVa 10139

## Summary

Summary of the problem statement goes here.

## Explanation

1. Any number, x, can be represented as a product of powers of primes: $x = p_1^{a_1} \cdot p_2^{a_2} \cdot p_3^{a_3} \cdots p_m^{a_m} \cdots$. This is called the "prime factorization" of x.
2. $b \mid a \iff a = kb$ for some integer $k$.

To solve this problem, find the prime factorization of the number that is to divide the factorial, and check that each exponent in that factorization is less than or equal to the exponent of the same prime in the factorization of the factorial. The power of a prime factor, p, in n! can be found using Legendre's formula:

$$\operatorname{ord}_p(n!) = \sum_{i=1}^{\infty} \left\lfloor \frac{n}{p^i} \right\rfloor$$

(the sum is effectively finite, since the terms vanish once $p^i > n$).

For example, assume we need to know whether or not 12 divides 6!.

1. Start by representing 12 as its prime factorization: $12 = 2^2 \cdot 3^1$.
2. Call get_powers(6, p), where $p \in \lbrace 2, 3 \rbrace$:
   1. get_powers(6, 2) returns 4, so 2 appears in the factorization of n! a greater number of times than it does in the factorization of 12.
   2. Likewise, get_powers(6, 3) returns 2, which is greater than the number of times 3 appears in the prime factorization of 12.

After checking all the prime factors of 12 without any of them appearing more frequently than the same factors in the factorization of n!, we find that 12 divides 6!

## Gotchas

- Any points one can easily overlook?
- The correct way to understand ambiguous formulations?

## Notes

1. "0 divides n!" is false.
2. "m divides n!" is true if $1 \leq m \leq n$.

## Input

    6 9
    6 27
    20 10000
    20 100000
    1000 1009

## Output

    9 divides 6!
    27 does not divide 6!
    10000 divides 20!
    100000 does not divide 20!
    1009 does not divide 1000!
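The walkthrough above refers to a hypothetical helper get_powers. A minimal self-contained Python sketch of the whole check, with prime_factorization and divides_factorial as additional illustrative names, might look like this:

```python
def get_powers(n, p):
    """Exponent of prime p in n!, via Legendre's formula."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def prime_factorization(m):
    """Return {prime: exponent} for m >= 2 by trial division."""
    factors, d = {}, 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

def divides_factorial(m, n):
    if m == 0:
        return False   # gotcha from the Notes above
    if m == 1:
        return True
    return all(get_powers(n, p) >= e
               for p, e in prime_factorization(m).items())

# Mirrors the sample input/output:
for n, m in [(6, 9), (6, 27), (20, 10000), (20, 100000), (1000, 1009)]:
    verdict = "divides" if divides_factorial(m, n) else "does not divide"
    print(f"{m} {verdict} {n}!")
```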
# Can every number be written as a small sum of sums of squares?

In a practice for a programming competition, one problem asked us to find the smallest number of pyramids which can be built using exactly $n$ blocks, where pyramids have either $k\times k, (k-1)\times (k-1),\ldots,1\times 1$ blocks on each level or $2k\times 2k, 2(k-1)\times 2(k-1),\ldots,2\times 2$ blocks on each level. Note that the first type of pyramid has $$\sum\limits_{i=1}^k i^2 = \frac{k(k+1)(2k+1)}{6}$$ blocks, while the second has $$\sum\limits_{i=1}^k (2i)^2 = \frac{2}{3} k(k+1)(2k+1).$$ Equivalently, we want to write $n$ as a sum of numbers of this form, using as few as possible.

The official solution to this problem had an exponential runtime in the minimal number of pyramids, but noted that this was not problematic, as the minimal number of pyramids is always at most $6$. I see no obvious reason for this, or even an obvious reason why the minimal number of pyramids should be bounded. Can someone provide a proof?

- This and this might be of interest. – dtldarek Feb 25 '13 at 10:54
- There is no "obvious reason", as Waring-like results are usually difficult. In addition to "Waring's problem" you might also want to search for the Hardy-Littlewood "circle method" if you want to learn how to prove these kinds of results. – Noah Snyder Apr 11 '13 at 1:12
- @NoahSnyder I've heard of the circle method, but was hoping for a more elementary argument here. If answering this requires a seven-year digression in my mathematical study into analytic number theory, I probably won't bother. – Alex Becker Apr 11 '13 at 4:47
- This looks heavily related to Pollock's 1850 conjecture (mathworld.wolfram.com/PollocksConjecture.html): every positive integer can be written as the sum of at most 5 tetrahedral numbers; every positive integer can be written as the sum of at most 7 octahedral numbers. – Jack D'Aurizio Dec 26 '13 at 14:18
- With the greedy approach (every time I subtract from $n$ the biggest number of the form $\frac{1}{4}\binom{2a}{3}$ or $\binom{2b}{3}$ that is $\leq n$) the first numbers that require six terms are $43,69,84,104,119,133,153,168,178,\ldots$. The first numbers that require seven terms are $183,263,328,354,\ldots$. The first numbers that require eight terms are $1002,1423,1723,1968,2292,\ldots$. The first numbers that require nine terms are $11418, 12482, 14687, 16182, 17208,\ldots$. – Jack D'Aurizio Dec 26 '13 at 14:52
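For anyone wanting to reproduce the greedy experiment from the last comment, here is a minimal Python sketch, assuming only the two block-count formulas from the question. Note that the greedy count is merely an upper bound on the minimal number of pyramids (greedy choices need not be optimal), which is why it can exceed 6:

```python
def pyramid_sizes(limit):
    """All block counts k(k+1)(2k+1)/6 and 4 times that, up to limit."""
    sizes, k = set(), 1
    while k * (k + 1) * (2 * k + 1) // 6 <= limit:
        p = k * (k + 1) * (2 * k + 1) // 6
        sizes.add(p)
        if 4 * p <= limit:
            sizes.add(4 * p)
        k += 1
    return sorted(sizes, reverse=True)

def greedy_terms(n):
    """Repeatedly subtract the largest pyramid not exceeding n."""
    count = 0
    while n > 0:
        n -= pyramid_sizes(n)[0]
        count += 1
    return count

print(greedy_terms(43))   # -> 6, matching the comment's first six-term number
```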
Jun. 2017

Initial Error-induced Optimal Perturbations in ENSO Predictions, as Derived from an Intermediate Coupled Model

1. Key Laboratory of Ocean Circulation and Waves, Institute of Oceanology, Chinese Academy of Sciences, Qingdao 266071, China
2. University of Chinese Academy of Sciences, Beijing 10029, China
3. Laboratory for Ocean and Climate Dynamics, Qingdao National Laboratory for Marine Science and Technology, Qingdao 266237, China

Abstract: The initial errors constitute one of the main limiting factors in the ability to predict the El Niño-Southern Oscillation (ENSO) in ocean-atmosphere coupled models. The conditional nonlinear optimal perturbation (CNOP) approach was employed to study the largest initial error growth in the El Niño predictions of an intermediate coupled model (ICM). The optimal initial errors (as represented by CNOPs) in sea surface temperature anomalies (SSTAs) and sea level anomalies (SLAs) were obtained with seasonal variation. The CNOP-induced perturbations, which tend to evolve into the La Niña mode, were found to have the same dynamics as ENSO itself. This indicates that, if CNOP-type errors are present in the initial conditions used to make a prediction of El Niño, the El Niño event tends to be under-predicted. In particular, compared with other seasonal CNOPs, the CNOPs in winter can induce the largest error growth, which gives rise to an ENSO amplitude that is hardly ever predicted accurately. Additionally, it was found that the CNOP-induced perturbations exhibit a strong spring predictability barrier (SPB) phenomenon for ENSO prediction. These results offer a way to enhance ICM prediction skill and, particularly, weaken the SPB phenomenon by filtering the CNOP-type errors in the initial state. The characteristic distributions of the CNOPs derived from the ICM also provide useful information for targeted observations through data assimilation. Given the fact that the derived CNOPs are season-dependent, it is suggested that seasonally varying targeted observations should be implemented to accurately predict ENSO events.

Manuscript revised: 18 January 2017
Manuscript accepted: 20 January 2017
metR provides several functions and utilities that make R better for handling meteorological data in the tidy data paradigm. It started mostly as a packaging of assorted wrappers and tricks that I wrote for my day-to-day work as a researcher in atmospheric sciences. Since then, it has grown organically, driven by my own needs and feedback from users. Conceptually it's divided into visualization tools and data tools. The former are geoms, stats and scales that help with plotting using ggplot2, such as stat_contour_fill() or scale_y_level(), while the latter are functions for common data processing tasks in the atmospheric sciences, such as Derivate() or EOF(); these are implemented to work in the data.table paradigm, but also work with regular data frames. Currently metR is in development but maturing. Most functions check arguments and there are some tests. However, some functions might change their interface, and functionality can be moved to other packages, so please bear that in mind.

## Installation

You can install metR from CRAN with:

install.packages("metR")

Or the development version with:

# install.packages("devtools")
devtools::install_github("eliocamp/metR")

If you need to read NetCDF files, you might need to install the netcdf and udunits2 libraries. On Ubuntu and its derivatives this can be done by typing

sudo apt install libnetcdf-dev netcdf-bin libudunits2-dev

## Examples

In this example we easily perform principal components decomposition (EOF) on monthly geopotential height, then compute the geostrophic wind associated with this field and plot the field with filled contours and the wind with streamlines.

library(metR)
library(data.table)
library(ggplot2)
data(geopotential)

# Use Empirical Orthogonal Functions to compute the Antarctic Oscillation
geopotential <- copy(geopotential)
geopotential[, gh.t.w := Anomaly(gh)*sqrt(cos(lat*pi/180)), by = .(lon, lat, month(date))]
aao <- EOF(gh.t.w ~ lat + lon | date, data = geopotential, n = 1)
aao$left[, c("u", "v") := GeostrophicWind(gh.t.w, lon, lat)]

# AAO field
binwidth <- 0.01
ggplot(aao$left, aes(lon, lat, z = gh.t.w)) +
    geom_contour_fill(binwidth = binwidth, xwrap = c(0, 360)) +   # filled contours!
    geom_streamline(aes(dx = dlon(u, lat), dy = dlat(v)), size = 0.4, L = 80, skip = 3, xwrap = c(0, 360)) +
    scale_x_longitude() +
    scale_y_latitude(limits = c(-90, -20)) +
    scale_fill_divergent(name = "AAO pattern", breaks = MakeBreaks(binwidth), guide = guide_colorstrip()) +
    coord_polar()
#> Warning in .check_wrap_param(list(...)): 'xwrap' and 'ywrap' will be
#> deprecated. Use ggperiodic::periodic insead.

# AAO signal
ggplot(aao$right, aes(date, gh.t.w)) +
    geom_line() +
    geom_smooth(span = 0.4)
#> geom_smooth() using method = 'loess' and formula 'y ~ x'

You can read more in the vignettes: Visualization tools and Working with data.
# Calculate the first n perfect numbers [closed]

Write a program that calculates the first n perfect numbers. A perfect number is one where the sum of its proper divisors (all factors except the number itself) equals the original number. For example, 6 is a perfect number because 1+2+3=6. No non-standard libraries. The standard loopholes are forbidden.

closed as unclear what you're asking by Calvin's Hobbies, es1024, xnor, NinjaBearMonkey, Kyle Kanos, May 7 '15 at 13:18

- Please clarify: What is a non-standard library? Also, how should output be given? – isaacg May 7 '15 at 6:09
- Related. – Martin Ender May 7 '15 at 7:25

# CJam, 24 bytes

1{{2*_(mp!}g__(*2/p}ri*;

Try it online. Makes use of the Euclid-Euler theorem: an even number P is perfect iff P = 2 ** (N - 1) * (2 ** N - 1), where 2 ** N - 1 is prime.

### Disclaimer

If there are odd perfect numbers, this code will fail to generate them. However, there are no known odd perfect numbers.

### How it works

1      e# A := 1
{ }ri* e# do int(input()) times:
{ }g   e#   do:
2*     e#     A *= 2
_(     e#     M := A - 1
mp!    e#   while(!prime(M))
__(2/  e#   P := A * (A - 1) / 2
p      e#   print(P)

# Pyth, 25 bytes

J1W<lYQy=JI!tPKtyJaY*JK;Y

Tests whether Mersenne numbers are prime. If so, it generates the corresponding perfect number. Can find the first 8 perfect numbers in under a second.

Note: only generates even perfect numbers. However, since it has been proven that any odd perfect number is greater than 10^1500, this algorithm is correct on inputs up to 14.

Demonstration.

- This answer will skip odd perfect numbers. – orlp May 7 '15 at 5:38

# Pyth - 27 25 bytes

Extremely super slow brute-force approach.

K2W<ZQ~K1IqKsf!%KTr1KK~Z1

Trial division to find factors, then a while loop until the list of perfect numbers is long enough.

- Does prime factorization using P speed anything up? – orlp May 7 '15 at 3:41
- @orlp possibly, but we want all factors, not primes. – Maltysen May 7 '15 at 20:37
- I'm aware, but you can compute the sigma function from the factorization. – orlp May 8 '15 at 3:54
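For comparison with the golfed answers, a deliberately naive, ungolfed Python rendering of the same Euclid-Euler approach might look like the sketch below. Like the CJam answer, it only generates even perfect numbers, and trial-division primality testing of the Mersenne candidates makes it impractically slow beyond the first handful:

```python
def is_prime(m):
    """Naive trial-division primality test."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def perfect_numbers(n):
    """First n even perfect numbers via 2**(k-1) * (2**k - 1)."""
    found, k = [], 2
    while len(found) < n:
        mersenne = 2 ** k - 1
        if is_prime(mersenne):
            found.append(2 ** (k - 1) * mersenne)
        k += 1
    return found

print(perfect_numbers(4))   # [6, 28, 496, 8128]
```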
# Linear algebra, modules annihilator [duplicate]

$N$ and $K$ are submodules of $M$ with $I=\operatorname{Ann}(N)$ and $J=\operatorname{Ann}(K)$; show that the annihilator of $N \cap K$ contains $I+J$. Give an example to show that the inclusion may be strict.

marked as duplicate by rschwieb, Community♦, user86418, Lost1, Shuchang, Feb 28 '14 at 0:53

Hint: How do you usually prove an inclusion? You should start by picking $i\in I$ and $j\in J$. For any $m\in N\cap K$, $im=0$ and $jm=0$ (why?), so what can you conclude about $i+j$? What happens when $N\cap K=(0)$?
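Spelling out the step the hint gestures at: for $i \in I$, $j \in J$ and $m \in N \cap K$, we have $im = 0$ (since $m \in N$) and $jm = 0$ (since $m \in K$), hence

$$(i+j)m = im + jm = 0,$$

so every element of $I+J$ annihilates $N \cap K$, i.e. $I + J \subseteq \operatorname{Ann}(N \cap K)$. For the strictness question, note that when $N \cap K = (0)$, the annihilator $\operatorname{Ann}(N \cap K)$ is the whole ring, while $I + J$ need not be.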
# Is Q-learning only capable of learning a deterministic policy?

I was following a reinforcement learning course on Coursera, and in this video at 2:57 the instructor says:

> Expected SARSA and SARSA both allow us to learn an optimal $\epsilon$-soft policy, but Q-learning does not.

From what I understand, SARSA and Q-learning both give us an estimate of the optimal action-value function. SARSA does this on-policy with an epsilon-greedy policy, for example, whereas the action-values from the Q-learning algorithm are for a deterministic policy, which is always greedy. But can't we use the action values generated by the Q-learning algorithm to form an $\epsilon$-greedy policy? We can, for instance, in each state give the maximum probability to the action with the greatest action-value, while the rest of the actions get probability $\frac{\epsilon}{\text{number of actions}}$. We do a similar thing with SARSA, where we infer the policy from the current estimate of the action-values after each update.

If we assume a tabular setting, then Q-learning converges to the optimal state-action value function, from which an optimal policy can be derived, provided a few conditions are met. In finite MDPs, there is at least one optimal (stationary) deterministic policy, but there can also be optimal stochastic policies: specifically, if two or more actions have the same optimal value, then you can stochastically choose between them. However, if you stochastically choose between all actions (including non-optimal ones), then you will not behave optimally.

SARSA also converges to the optimal state-action value function, but the learning policy must eventually become greedy. See this post and theorem 1 (p. 294) of this paper. So, even in SARSA, if you want to behave optimally, you can't just arbitrarily choose any stochastic policy derived from this found optimal value function (note also that SARSA is on-policy). However, SARSA can also find an optimal restricted policy. See theorem 2 (p. 297) of this paper for more details.

To answer your question directly, Q-learning can find an optimal stochastic policy, provided that it exists.

- So if I understand the post correctly, even in SARSA we decrease exploration (by decreasing epsilon in the case of epsilon-greedy) as time progresses, to ensure that we have the optimal value function? May 25 at 18:40
- @ketandhanuka Yes, as far as I remember, $\epsilon$ must eventually decay in order for SARSA to find the optimal value function. – nbro May 25 at 18:41
- $\epsilon$-greedy policies with a fixed $\epsilon$ can be ranked. So SARSA will converge on the optimal $\epsilon$-greedy policy (i.e. the optimal policy given that a specific $\epsilon$ value applies) even without decaying $\epsilon$. Sometimes this is desirable, for instance in online scenarios that never complete learning. May 26 at 8:01
- @NeilSlater I updated my answer to improve the precision of my explanations. SARSA converges to the optimal value function with decaying behaviour policies, but it can also converge to a policy called (here) an optimal restricted policy, which is the optimal policy that chooses actions according to their rank (as far as I understand it). I am not sure if it's correct to say that SARSA converges to an optimal $\epsilon$-greedy policy, though: $\epsilon$-greedy policies don't choose actions according to their rank. – nbro May 26 at 9:19
- Maybe you're referring to another result about $\epsilon$-greedy policies in the context of SARSA.
But, in this paper, what you say only applies to restricted rank-based randomized (RRR) policies, i.e. policies that choose actions according to their ranks. – nbro May 26 at 9:32
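To make the construction described in the question concrete, here is a minimal sketch (our own helper, not from the course or the answer) of turning one row of a learned Q-table into an $\epsilon$-greedy distribution:

```python
import numpy as np

def epsilon_greedy_probs(q_row, epsilon):
    """Turn the action-values of one state into an epsilon-greedy
    distribution: every action gets epsilon/|A|, and the greedy
    action receives the remaining probability mass on top."""
    n_actions = len(q_row)
    probs = np.full(n_actions, epsilon / n_actions)
    probs[np.argmax(q_row)] += 1.0 - epsilon
    return probs

# Q-values for a single state, epsilon = 0.1:
print(epsilon_greedy_probs(np.array([1.0, 3.0, 2.0]), 0.1))
# -> [0.0333... 0.9333... 0.0333...]
```

As the answer notes, acting this way is near-greedy but not optimal unless $\epsilon$ decays (or the randomized mass sits only on equally-valued optimal actions).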
# Configuring Websockets behind an AWS ELB Recently at work, we were trying to get an application that uses websockets working on an AWS instance behind an ELB (load balancer) and nginx on the instance. If you’re either not using a secure connection or handling the cryptography on the instance (either in nginx or Flask), it works right out of the box. But if you want the ELB to handle TLS termination, it doesn’t work nearly as well… Luckily, after a bit of fiddling, I got it working. Update 2018-05-31: A much easier solution: [just use an ALB](https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/): > WebSocket allows you to set up long-standing TCP connections between your client and your server. This is a more efficient alternative to the old-school method which involved HTTP connections that were held open with a “heartbeat” for very long periods of time. WebSocket is great for mobile devices and can be used to deliver stock quotes, sports scores, and other dynamic data while minimizing power consumption. ALB provides native support for WebSocket via the ws:// and wss:// protocols. # Performance problems with Flask and Docker I had an interesting problem recently on a project I was working on. It’s a simple Flask-based webapp, designed to be deployed to AWS using Docker. The application worked just fine when I was running it locally, but as soon as I pushed the docker container… Latency spikes. Bad enough that the application was failing AWS’s healthy host checks, cycling in and out of existence. # Extending Racket's DNS capabilities As I mentioned on Monday, I’m working on measuring DNS-based censorship around the world, and to do that, I need a fair bit of control over the DNS packets that I’m sending out and over parsing the ones that I get back. # ISMA 2013 AIMS-5 - DNS Based Censorship I gave a presentation about research that I’m just starting, studying DNS-based censorship around the world. In particular, preliminary findings in China have confirmed that the Great Firewall is responding via packet injection to many queries for either Facebook or Twitter (among others). Interestingly, the pool of IPs that they return is consistent, yet none of the IPs seem to resolve to anything interesting. In addition, there is fallout in South Korea, where some percentage of packets go through China and thus show the same behaviors. # AIMS-5 - Day 3 Yesterday was the third and final day of AIMS-5. With the main topic being Detection of Censorship, Filtering, and Outages, many of these talks were much more in line with what I know and what I’m working on. I gave my presentation as well; you can see it (along with a link to my slides) down below. # AIMS-5 - Day 2 Today’s agenda had discussions on Mobile Measurements and IPv6 Annotations, none of which are areas that I find myself particularly interested in. Still, I did learn a few things. # AIMS-5 - Workshop on Active Internet Measurements Yesterday was the first of three days of the fifth annual ISC/CAIDA Workshop on Active Internet Measurements. Some of the talks overlapped with the workshop I went to in Baltimore back in October, but even the ones that didn’t were still interesting. I’ll be presenting on Friday and I’ll share my slides when I get that far (they aren’t actually finished yet). I’ll be talking about new work that I’m just getting off the ground focusing specifically on DNS-based censorship. There is a lot of interesting ground to cover there and this should be only the first in a series of updates about that work (I hope).
### Definition of Antilogarithm 1. Noun. The number of which a given number is the logarithm. Exact synonyms: antilog. Generic synonyms: number, numeral. 2. n. The number corresponding to a logarithm. The word has sometimes, though rarely, been used to denote the complement of a given logarithm; also the logarithmic cosine corresponding to a given logarithmic sine. 3. Noun. (mathematics) The number of which a given number is the logarithm (to a given base). (Source: wiktionary.com) ### Medical Definition of Antilogarithm 1. The number corresponding to a logarithm. The word has sometimes, though rarely, been used to denote the complement of a given logarithm; also the logarithmic cosine corresponding to a given logarithmic sine. (Source: Websters Dictionary, 01 Mar 1998) ### Literary usage of Antilogarithm Below you will find example usage of this term as found in modern and/or classical literature: 1. School Algebra by George Wentworth, David Eugene Smith (1913) "antilogarithm. The number corresponding to a given logarithm is called an ... Find the antilogarithm of 3.4265. Looking for the mantissa 0.4265, ..." 2. Higher Arithmetic by George Wentworth, David Eugene Smith (1919) "Find the antilogarithm of 3.4265. Looking in the table for the mantissa 0.4265, we find that it is opposite 2.6 in column N and under column 7. ..." 3. Academic Algebra by George Wentworth, David Eugene Smith (1913) "antilogarithm. The number corresponding to a given logarithm is called an ... Find the antilogarithm of 3.4265. Looking for the mantissa 0.4265, ..." 4. Plane Trigonometry and Tables by George Wentworth, David Eugene Smith (1914) "antilogarithm. The number corresponding to a given logarithm is called an ... Finding the antilogarithm. An antilogarithm is found from the tables by ..." 5. Elements of Algebra by George Albert Wentworth (1895) "antilogarithm of 3.6330. Number corresponding to 0.6330 is 4290 ... antilogarithm of 2.5310. Number corresponding to 0.5310 is 3390 ..." 6. Logarithmic Tables by George William Jones (1898) "Take out the four-figure antilogarithm of the tabular mantissa next less than the given mantissa, and to it join the quotient of the difference of these two ..."
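A quick numeric check of the tabular example above (our own snippet; the tables locate mantissa 0.4265 → 2.67, then the characteristic 3 shifts the decimal point):

```python
import math

# antilog base 10: the number whose logarithm is 3.4265
print(10 ** 3.4265)        # ~2670.0, matching the table's 2.67 x 10^3
print(math.log10(2670.0))  # ~3.4265, round trip
```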
# An equilateral triangle is an example of what kind of polygon? Convex, concave or irregular? Dec 26, 2015 An equilateral triangle is a regular three-sided polygon. It has three interior angles of ${60}^{\circ}$ each and three sides, equal in length. It's an example of a convex polygon. #### Explanation: There are a few different but equivalent definitions of a convex polygon. One of them is as follows: a polygon is called convex if all its interior angles are less than ${180}^{\circ}$. An equilateral triangle is, obviously, of this kind. Another definition is as follows: a polygon is called convex if a segment that connects any two of its vertices lies within the boundaries of the polygon. An equilateral triangle obviously satisfies this definition as well.
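A small computational illustration (our own sketch, not from the original answer): for a simple polygon, convexity is equivalent to consecutive edges always turning in the same direction, which the cross product of successive edge vectors detects.

```python
def is_convex(vertices):
    """Check convexity of a simple polygon given as (x, y) tuples in order.

    All cross products of consecutive edge vectors must share one sign,
    i.e. no interior angle reaches 180 degrees."""
    n = len(vertices)
    sign = 0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        x3, y3 = vertices[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True

# An equilateral triangle is convex:
print(is_convex([(0, 0), (1, 0), (0.5, 3 ** 0.5 / 2)]))  # True
```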
Numerical calculations of a high brilliance synchrotron source and on issues with characterizing strong radiation damping effects in non-linear Thomson/Compton backscattering experiments Open Access Publications from the University of California • Author(s): Thomas, AGR • Ridgers, CP • Bulanov, SS • Griffin, BJ • Mangles, SPD • et al. Abstract A number of theoretical calculations have studied the effect of radiation reaction forces on radiation distributions in strong field counter-propagating electron beam-laser interactions, but could these effects - including quantum corrections - be observed in interactions with realistic bunches and focusing fields, as is hoped in a number of soon to be proposed experiments? We present numerical calculations of the angularly resolved radiation spectrum from an electron bunch with parameters similar to those produced in laser wakefield acceleration experiments, interacting with an intense, ultrashort laser pulse. For our parameters, the effects of radiation damping on the angular distribution and energy distribution of *photons* are not easily discernible for a "realistic" moderate emittance electron beam. However, experiments using such a counter-propagating beam-laser geometry should be able to measure such effects using current laser systems through measurement of the *electron beam* properties. In addition, the brilliance of this source is very high, with peak spectral brilliance exceeding $10^{29}$ photons s$^{-1}$ mm$^{-2}$ mrad$^{-2}$ (0.1% bandwidth)$^{-1}$, with approximately 2% efficiency and a peak energy of 10 MeV.
# NEET > Genetics and Evolution

Explore popular questions from Genetics and Evolution for NEET. This collection covers Genetics and Evolution previous year NEET questions hand picked by popular teachers. Marking scheme: +4 for a correct answer, -1 for an incorrect one.

Q 1. Genetic drift operates in
A. small isolated population (correct)
B. large isolated population
C. non-reproductive population
D. slow reproductive population

Q 2. In the Hardy-Weinberg equation, the frequency of heterozygous individuals is represented by
A. $p^2$
B. $2pq$ (correct)
C. $pq$
D. $q^2$

Q 3. The chronological order of human evolution from early to the recent is
A. $Australopithecus \rightarrow Ramapithecus \rightarrow Homo\ habilis \rightarrow Homo\ erectus$
B. $Ramapithecus \rightarrow Australopithecus \rightarrow Homo\ habilis \rightarrow Homo\ erectus$ (correct)
C. $Ramapithecus \rightarrow Homo\ habilis \rightarrow Australopithecus \rightarrow Homo\ erectus$
D. $Australopithecus \rightarrow Homo\ habilis \rightarrow Ramapithecus \rightarrow Homo\ erectus$

Q 4. Which of the following is the correct sequence of events in the origin of life?
I. Formation of protobionts
II. Synthesis of organic monomers
III. Synthesis of organic polymers
IV. Formation of DNA-based genetic systems
A. I, II, III, IV
B. I, III, II, IV
C. II, III, I, IV (correct)
D. II, III, IV, I

Q 5. Which of the following structures is homologous to the wing of a bird?
A. Hindlimb of rabbit
B. Flipper of whale (correct)
C. Dorsal fin of a shark
D. Wing of a moth

Q 6. Analogous structures are a result of
A. shared ancestry
B. stabilising selection
C. divergent evolution
D. convergent evolution (correct)

Q 7. Following are two statements regarding the origin of life.
(A) The earliest organisms that appeared on the earth were non-green and presumably anaerobes.
(B) The first autotrophic organisms were the chemoautotrophs that never released oxygen.
Of the above statements, which one of the following options is correct?
A. Both (A) and (B) are correct. (correct)
B. Both (A) and (B) are false.
C. (A) is correct but (B) is false.
D. (B) is correct but (A) is false.

Q 8. The wings of a bird and the wings of an insect are
A. phylogenetic structures and represent divergent evolution
B. homologous structures and represent convergent evolution
C. homologous structures and represent divergent evolution
D. analogous structures and represent convergent evolution (correct)

Q 9. Industrial melanism is an example of
A. mutation
B. Neo-Lamarckism
C. Neo-Darwinism
D. natural selection (correct)

Q 10. A population will not exist in Hardy-Weinberg equilibrium if
A. there is no migration
B. the population is large
C. individuals mate selectively (correct)
D. there are no mutations
Q 11. Which is the most common mechanism of genetic variation in the population of a sexually reproducing organism?
A. Genetic drift
B. Recombination (correct)
C. Transduction
D. Chromosomal aberrations

Q 12. Which of the following had the smallest brain capacity?
A. $Homo\ neanderthalensis$
B. $Homo\ habilis$ (correct)
C. $Homo\ erectus$
D. $Homo\ sapiens$

Q 13. In a population of 1000 individuals, 360 belong to genotype AA, 480 to Aa and the remaining 160 to aa. Based on this data, the frequency of allele A in the population is
A. 0.4
B. 0.5
C. 0.6 (correct)
D. 0.7

Q 14. Forelimbs of cat and lizard used in walking, forelimbs of whale used in swimming and forelimbs of bat used in flying are an example of
A. analogous organs
B. adaptive radiation
C. homologous organs (correct)
D. convergent evolution

Q 15. Which one of the following are analogous structures?
A. Wings of bat and wings of pigeon (correct)
B. Thorns of $Bougainvillea$ and tendrils of $Cucurbita$
C. Flippers of dolphin and legs of horse
D. None of the above

Q 16. According to Darwin, organic evolution is due to
A. competition within closely related species
B. reduced feeding efficiency in one species due to the presence of interfering species
C. intraspecific competition (correct)
D. interspecific competition

Q 17. The tendency of a population to remain in genetic equilibrium may be disturbed by
A. lack of mutations
B. lack of random mating (correct)
C. random mating
D. lack of migration

Q 18. Variation in gene frequencies within populations can occur by chance rather than by natural selection. This is referred to as
A. random mating
B. genetic load
C. genetic flow
D. genetic drift (correct)

Q 19. The eye of octopus and eye of cat show different patterns of structure, yet they perform a similar function. This is an example of
A. analogous organs that have evolved due to convergent evolution (correct)
B. analogous organs that have evolved due to divergent evolution
C. homologous organs that have evolved due to convergent evolution
D. homologous organs that have evolved due to divergent evolution

Q 20. Random unidirectional change in allele frequencies that occurs by chance in all populations and especially in small populations is known as
A. migration
B. natural selection
C. genetic drift (correct)
D. mutation

Q 21. Genetic variation in a population arises due to
A. recombination only
B. mutation as well as recombination (correct)
C. reproductive isolation and selection
D. mutations only

Q 22. Dinosaurs dominated the world in which of the following geological eras?
A. Cenozoic
B. Jurassic (correct)
C. Mesozoic
D. Devonian

Q 23. The finch species of the Galapagos islands are grouped according to their food sources. Which of the following is not a finch food?
A. Carrion (correct)
B. Insects
C. Tree buds
D. Seeds

Q 24. Evolution of different species in a given area starting from a point and spreading to other geographical areas is known as
A. adaptive radiation (correct)
B. natural selection
C. migration
D. divergent evolution
Q 25. Which one of the following options gives one correct example each of convergent evolution and divergent evolution?
A. Convergent evolution: eyes of octopus and mammals; divergent evolution: bones of forelimbs of vertebrates (correct)
B. Convergent evolution: thorns of $Bougainvillea$ and tendrils of $Cucurbita$; divergent evolution: wings of butterflies and birds
C. Convergent evolution: bones of forelimbs of vertebrates; divergent evolution: wings of butterfly and birds
D. Convergent evolution: thorns of $Bougainvillea$ and tendrils of $Cucurbita$; divergent evolution: eyes of octopus and mammals
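A worked check for Q 13 (a minimal allele-counting calculation; each AA individual carries two A alleles and each Aa carries one):

$$f(A) \;=\; \frac{2(360) + 480}{2(1000)} \;=\; \frac{1200}{2000} \;=\; 0.6$$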
# Efficiency of Electrical Appliance

1. The efficiency of an electrical appliance is given by the following equation:

$$\text{Efficiency} = \frac{P_{\text{output}}}{P_{\text{input}}} \times 100\%$$

2. Normally, the efficiency of an electrical appliance is less than 100% due to the energy lost as heat and the work done against friction in a machine.

Example 1: A lamp is marked “240 V, 50 W”. If it produces a light output of 40 W, what is the efficiency of the lamp?

$$\text{Efficiency} = \frac{P_{\text{output}}}{P_{\text{input}}} \times 100\% = \frac{40}{50} \times 100\% = 80\%$$

Example 2: A motor connected to a 12 V supply draws a current of 2 A and lifts a 2 kg load through a height of 5 m in 10 s. What is the efficiency of the motor? (Take $g = 10\ \text{m s}^{-2}$.)

Input power:
$$P_{\text{input}} = IV = (2)(12) = 24\ \text{W}$$

Output power:
$$P_{\text{output}} = \frac{W}{t} = \frac{mgh}{t} = \frac{(2)(10)(5)}{10} = 10\ \text{W}$$

$$\text{Efficiency} = \frac{P_{\text{output}}}{P_{\text{input}}} \times 100\% = \frac{10}{24} \times 100\% = 41.7\%$$
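Both examples reduce to one line of arithmetic; a quick check in code (our own helper, not from the original page):

```python
def efficiency(p_output, p_input):
    """Efficiency in percent: useful output power over input power."""
    return p_output / p_input * 100.0

print(efficiency(40, 50))  # Example 1: 80.0
print(efficiency(10, 24))  # Example 2: ~41.7
```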
# How many simplicial complexes on n vertices up to homotopy equivalence?

Asked by Vidit Nanda, 2012-07-18

Fix a number $n$, and define $\gamma(n)$ to be the number of simplicial complexes on $n$ unlabeled vertices up to homotopy equivalence. It is unlikely that an explicit formula exists, but what is known about the growth of $\gamma(n)$ as $n$ increases?

This seems to be a fairly basic generalization of "how many non-isomorphic graphs on $n$ unlabeled vertices?" but while this problem even has an [OEIS entry](http://oeis.org/A000088), I can't find any decent references or calculations for $\gamma$.

**Note**: I do not mean to ask about the [Dedekind number](http://en.wikipedia.org/wiki/Dedekind_number), which simply counts all possible simplicial complexes on $n$ vertices without regard to homotopy equivalence.

Answer by Will Sawin, 2012-07-19

Fernando Muro's argument seems convincing that getting an exact formula is likely to be impossible. But we still might find lower and upper bounds that give us a sense of the asymptotics.

We can get a lower bound by restricting to a subset, like graphs up to homotopy equivalence. This has a pretty nice generating function:

$$\sum_{n=0}^\infty \gamma(n) q^n=\frac{1}{(1-q)^2(1-q^3)(1-q^4)^2(1-q^5)^3(1-q^6)^4\cdots}=\frac{1}{(1-q)^2}\prod_{n=1}^\infty \frac{1}{(1-q^{n+2})^n}$$

The reason for this is that you can identify a complex by the number of connected graphs of each Euler characteristic it contains. Then each complex shows up at whatever the minimal $n$ is to express it, which is just a sum over the graphs, and at each larger $n$. Since the number of possible Euler characteristics is quadratic in $n$, the number of new types at each $n$ is linear. You have two extra $1-q$ terms, one to account for the 1-vertex graph, and one to account for homotopy types showing up after the minimal $n$.
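The coefficients of this lower-bound series are easy to compute numerically; a truncation sketch of our own (the names and the cutoff are arbitrary choices):

```python
N = 20  # truncate the series at degree N

def mul_trunc(a, b, n_max):
    """Multiply two coefficient lists, discarding terms above degree n_max."""
    out = [0] * (n_max + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > n_max:
                    break
                out[i + j] += ai * bj
    return out

def geometric(k, n_max):
    """Coefficients of 1/(1 - q**k) up to degree n_max."""
    return [1 if i % k == 0 else 0 for i in range(n_max + 1)]

series = [1] + [0] * N
for _ in range(2):          # the 1/(1-q)^2 factor
    series = mul_trunc(series, geometric(1, N), N)
for n in range(1, N):       # the product over 1/(1-q^(n+2))^n
    if n + 2 > N:
        break               # higher factors cannot affect degrees <= N
    for _ in range(n):
        series = mul_trunc(series, geometric(n + 2, N), N)

print(series)  # series[n] = the graph-based lower bound for gamma(n)
```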
A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation. Early Access , Available online , doi: 10.1109/JAS.2020.1003354 Abstract: Group role assignment (GRA) is originally a complex problem in role-based collaboration (RBC). The solution to GRA provides modelling techniques for more complex problems. GRA with constraints (GRA+) is categorized as a class of complex assignment problems. At present, there are few generally efficient solutions to this category of problems; each special problem case requires a specific solution. Group multi-role assignment (GMRA) and GRA with conflicting agents on roles (GRACAR) are two problem cases in GRA+. The contributions of this paper include: 1) the formalization of a new problem of GRA+, called group multi-role assignment with conflicting roles and agents (GMAC), which is an extension to the combination of GMRA and GRACAR; 2) a practical solution based on an optimization platform; 3) a sufficient condition, used in planning, for solving GMAC problems; and 4) a clear presentation of the benefits in avoiding conflicts when dealing with GMAC. The proposed methods are verified by experiments, simulations, proofs and analysis. , Available online Abstract: Autonomous systems are an emerging AI technology functioning without human intervention, underpinned by the latest advances in intelligence, cognition, computer, and systems sciences. This paper explores the intelligent and mathematical foundations of autonomous systems. It focuses on structural and behavioral properties that constitute the intelligent power of autonomous systems. It explains how system intelligence aggregates from reflexive, imperative and adaptive intelligence to autonomous and cognitive intelligence. A hierarchical intelligence model (HIM) is introduced to elaborate the evolution of human and system intelligence as an inductive process. The properties of system autonomy are formally analyzed towards a wide range of applications in computational intelligence and systems engineering. Emerging paradigms of autonomous systems including brain-inspired systems, cognitive robots, and autonomous knowledge learning systems are described. Advances in autonomous systems will pave the way towards highly intelligent machines for augmenting human capabilities. , Available online Abstract: A gravitational search algorithm (GSA) uses gravitational force among individuals to evolve a population. Though GSA is an effective population-based algorithm, it exhibits low search performance and premature convergence. To ameliorate these issues, this work proposes a multi-layered GSA called MLGSA. Inspired by the two-layered structure of GSA, four layers consisting of population, iteration-best, personal-best and global-best layers are constructed. Hierarchical interactions among the four layers are dynamically implemented in different search stages to greatly improve both exploration and exploitation abilities of the population. Performance comparison between MLGSA and nine existing GSA variants on twenty-nine CEC2017 test functions with low, medium and high dimensions demonstrates that MLGSA is the most competitive one. It is also compared with four particle swarm optimization variants to verify its excellent performance. Moreover, the analysis of hierarchical interactions is discussed to illustrate the influence of a complete hierarchy on its performance.
The relationship between its population diversity and fitness diversity is analyzed to clarify its search performance. Its computational complexity is given to show its efficiency. Finally, it is applied to twenty-two CEC2011 real-world optimization problems to show its practicality. , Available online  , doi: 10.1109/JAS.2020.1003387 Abstract: The manufacturing of nanomaterials by the electrospinning process requires accurate and meticulous inspection of related scanning electron microscope (SEM) images of the electrospun nanofiber, to ensure that no structural defects are produced. The presence of anomalies prevents practical application of the electrospun nanofibrous material in nanotechnology. Hence, the automatic monitoring and quality control of nanomaterials is a relevant challenge in the context of Industry 4.0. In this paper, a novel automatic classification system for homogenous (anomaly-free) and non-homogenous (with defects) nanofibers is proposed. The inspection procedure aims at avoiding direct processing of the redundant full SEM image. Specifically, the image to be analyzed is first partitioned into sub-images (nanopatches) that are then used as input to a hybrid unsupervised and supervised machine learning system. In the first step, an autoencoder (AE) is trained with unsupervised learning to generate a code representing the input image with a vector of relevant features. Next, a multilayer perceptron (MLP), trained with supervised learning, uses the extracted features to classify non-homogenous nanofiber (NH-NF) and homogenous nanofiber (H-NF) patches. The resulting novel AE-MLP system is shown to outperform other standard machine learning models and other recent state-of-the-art techniques, reporting accuracy rate up to 92.5%. In addition, the proposed approach leads to model complexity reduction with respect to other deep learning strategies such as convolutional neural networks (CNN). The encouraging performance achieved in this benchmark study can stimulate the application of the proposed scheme in other challenging industrial manufacturing tasks. , Available online  , doi: 10.1109/JAS.2020.1003417 Abstract: This paper studies the trajectory tracking problem of flapping-wing micro aerial vehicles (FWMAVs) in the longitudinal plane. First of all, the kinematics and dynamics of the FWMAV are established, wherein the aerodynamic force and torque generated by flapping wings and the tail wing are explicitly formulated with respect to the flapping frequency of the wings and the degree of tail wing inclination. To achieve autonomous tracking, an adaptive control scheme is proposed under the hierarchical framework. Specifically, a bounded position controller with hyperbolic tangent functions is designed to produce the desired aerodynamic force, and a pitch command is extracted from the designed position controller. Next, an adaptive attitude controller is designed to track the extracted pitch command, where a radial basis function neural network is introduced to approximate the unknown aerodynamic perturbation torque. Finally, the flapping frequency of the wings and the degree of tail wing inclination are calculated from the designed position and attitude controllers, respectively. In terms of Lyapunov’s direct method, it is shown that the tracking errors are bounded and ultimately converge to a small neighborhood around the origin. Simulations are carried out to verify the effectiveness of the proposed control scheme. 
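The RBF-network term in the attitude controller above stands in for an unknown perturbation torque; a generic sketch of such an approximator (our own illustration with an offline least-squares fit, not the paper's online adaptive law):

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis functions evaluated at a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Hypothetical setup: approximate an unknown perturbation f(x) ~ W^T phi(x).
centers = np.linspace(-1.0, 1.0, 9)  # basis centers spanning the state range
width = 0.25                         # shared basis width

xs = np.linspace(-1.0, 1.0, 50)
targets = 0.3 * np.sin(2.0 * xs)     # stand-in for the unknown torque
Phi = np.array([rbf_features(x, centers, width) for x in xs])
weights, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

def f_hat(x):
    """Network output: weighted sum of basis activations."""
    return weights @ rbf_features(x, centers, width)

print(f_hat(0.5), 0.3 * np.sin(1.0))  # approximately equal
```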
, Available online Abstract: This paper aims at eliminating the asymmetric and saturated hysteresis nonlinearities by designing hysteresis pseudo inverse compensator and robust adaptive dynamic surface control (DSC) scheme. The “pseudo inverse” means that an on-line calculation mechanism of approximate control signal is developed by applying a searching method to the designed temporary control signal where the true control signal is included. The main contributions are summarized as: 1) to our best knowledge, it is the first time to compensate the asymmetric and saturated hysteresis by using hysteresis pseudo inverse compensator because the construction of the true saturated-type hysteresis inverse model is very difficult; 2) by designing the saturated-type hysteresis pseudo inverse compensator, the construction of true explicit hysteresis inverse and the identifications of its corresponding unknown parameters are not required when dealing with the saturated-type hysteresis; 3) by combining DSC technique with the tracking error transformed function, the “explosion of complexity” problem in backstepping method is overcome and the prespecified tracking performance is achieved. Analysis of stability and experimental results on the hardware-in-loop platform illustrate the effectiveness of the proposed adaptive pseudo inverse control scheme. , Available online  , doi: 10.1109/JAS.2020.1003396 Abstract: A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. Aiming at addressing this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RS indicate that owing to an MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability. , Available online  , doi: 10.1109/JAS.2020.1003402 Abstract: In this paper, we develop a novel global-attention-based neural network (GANN) for vision language intelligence, specifically, image captioning (language description of a given image). As many previous works, an encoder-decoder framework is adopted in our proposed model, in which the encoder is responsible for encoding the region proposal features and extracting global caption feature based on a specially designed module of predicting the caption objects, and the decoder generates captions by taking the obtained global caption feature along with the encoded visual features as inputs for each attention head of the decoder layer. The global caption feature is introduced for the purpose of exploring the latent contributions of extracted region proposals for image captioning, and further helping the decoder better focus on the most relevant proposals so as to extract more accurate visual features in each time step of caption generation. 
Our GANN architecture is implemented by incorporating the global caption feature into the attention weight calculation phase in the word predication process in each head of the decoder layer. In our experiments, we qualitatively analyzed the proposed model, and quantitatively evaluated several state-of-the-art schemes with GANN on the MS-COCO dataset. Experimental results demonstrate the effectiveness of the proposed global attention mechanism for image captioning. , Available online  , doi: 10.1109/JAS.2020.1003399 Abstract: Safety assessment is one of important aspects in health management. In safety assessment for practical systems, three problems exist: lack of observation information, high system complexity and environment interference. Belief rule base with attribute reliability (BRB-r) is an expert system that provides a useful way for dealing with these three problems. In BRB-r, once the input information is unreliable, the reliability of belief rule is influenced, which further influences the accuracy of its output belief degree. On the other hand, when many system characteristics exist, the belief rule combination will explode in BRB-r, and the BRB-r based safety assessment model becomes too complicated to be applied. Thus, in this paper, to balance the complexity and accuracy of the safety assessment model, a new safety assessment model based on BRB-r with considering belief rule reliability is developed for the first time. In the developed model, a new calculation method of the belief rule reliability is proposed with considering both attribute reliability and global ignorance. Moreover, to reduce the influence of uncertainty of expert knowledge, an optimization model for the developed safety assessment model is constructed. A case study of safety assessment of liquefied natural gas (LNG) storage tank is conducted to illustrate the effectiveness of the new developed model. , Available online  , doi: 10.1109/JAS.2020.1003312 Abstract: Location estimation of underwater sensor networks (USNs) has become a critical technology, due to its fundamental role in the sensing, communication and control of ocean volume. However, the asynchronous clock, security attack and mobility characteristics of underwater environment make localization much more challenging as compared with terrestrial sensor networks. This paper is concerned with a privacy-preserving asynchronous localization issue for USNs. Particularly, a hybrid network architecture that includes surface buoys, anchor nodes, active sensor nodes and ordinary sensor nodes is constructed. Then, an asynchronous localization protocol is provided, through which two privacy-preserving localization algorithms are designed to estimate the locations of active and ordinary sensor nodes. It is worth mentioning that, the proposed localization algorithms reveal disguised positions to the network, while they do not adopt any homomorphic encryption technique. More importantly, they can eliminate the effect of asynchronous clock, i.e., clock skew and offset. The performance analyses for the privacy-preserving asynchronous localization algorithms are also presented. Finally, simulation and experiment results reveal that the proposed localization approach can avoid the leakage of position information, while the location accuracy can be significantly enhanced as compared with the other works. , Available online  , doi: 10.1109/JAS.2019.1911636 Abstract: The Möller algorithm is a self-stabilizing minor component analysis algorithm. 
This paper studies the convergence and dynamic characteristics of the Möller algorithm using the deterministic discrete time (DDT) methodology. Unlike other analysis methodologies, the DDT methodology preserves the discrete time characteristic and imposes no constraint conditions. Through analyzing the dynamic characteristics of the weight vector, several convergence conditions are derived, which are beneficial for its application. Computer simulations and real applications demonstrate the correctness of the analysis's conclusions. , Available online , doi: 10.1109/JAS.2019.1911531 Abstract: This paper investigates the sliding mode control (SMC) problem for a class of discrete-time nonlinear networked Markovian jump systems (MJSs) in the presence of probabilistic denial-of-service (DoS) attacks. The communication network via which the data is propagated is unsafe and a malicious adversary can attack the system during state feedback. By considering random denial-of-service attacks, a new sliding mode variable is designed, which takes into account the distribution information of the probabilistic attacks. Then, by resorting to Lyapunov theory and stochastic analysis methods, sufficient conditions are established for the existence of the desired sliding mode controller, guaranteeing both reachability of the designed sliding surface and stability of the resulting sliding motion. Finally, a simulation example is given to demonstrate the effectiveness of the proposed sliding mode control algorithm. , Available online , doi: 10.1109/JAS.2020.1003360 Abstract: In this paper, we propose an improved torque sensorless speed control method for an electric assisted bicycle; this method takes the coordinate conversion into account. A low-pass filter is designed in the disturbance observer to estimate and compensate the variable disturbance during cycling. A DC motor provides assisted power driving, and the assistance method is based on the real-time wheel angular velocity and coordinate system transformation. The effectiveness of the observer is proven, and the proposed method guarantees stability under disturbances. It is also compared to the existing methods and their performances are illustrated through simulations. The proposed method improves the performance both in rapidity and stability. , Available online , doi: 10.1109/JAS.2020.1003357 Abstract: In recent years, reconstructing a sparse map from a simultaneous localization and mapping (SLAM) system on a conventional CPU has undergone remarkable progress. However, obtaining a dense map from the system often requires a high-performance GPU to accelerate computation. This paper proposes a dense mapping approach which can remove outliers and obtain a clean 3D model using a CPU in real-time. The dense mapping approach processes keyframes and establishes data association by using multi-threading technology. The outliers are removed by changing detections of associated vertices between keyframes. The implicit surface data of inliers is represented by a truncated signed distance function and fused with an adaptive weight. A global hash table and a local hash table are used to store and retrieve surface data for data-reuse. Experiment results show that the proposed approach can precisely remove the outliers in the scene and obtain a dense 3D map with a better visual effect in real-time.
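The weighted TSDF fusion mentioned in the dense-mapping abstract is, at its core, a per-voxel running weighted average; a minimal sketch (our own, with an assumed simple weight cap rather than the paper's adaptive scheme):

```python
def fuse_tsdf(tsdf, weight, new_dist, new_weight, max_weight=100.0):
    """Fuse a new truncated signed distance observation into one voxel.

    The stored distance moves toward the new observation in proportion
    to the observation's weight; capping the accumulated weight keeps
    the map responsive to change."""
    fused_weight = weight + new_weight
    fused_tsdf = (tsdf * weight + new_dist * new_weight) / fused_weight
    return fused_tsdf, min(fused_weight, max_weight)

# A voxel at -0.04 m with weight 10 sees a new reading of -0.02 m:
print(fuse_tsdf(-0.04, 10.0, -0.02, 1.0))  # approximately (-0.0382, 11.0)
```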
, Available online Abstract: Time-sensitive networks (TSNs) support not only traditional best-effort communications but also deterministic communications, which send each packet at a deterministic time so that the data transmissions of networked control systems can be precisely scheduled to guarantee hard real-time constraints. No-wait scheduling is suitable for such TSNs and generates the schedules of deterministic communications with the minimal network resources so that all of the remaining resources can be used to improve the throughput of best-effort communications. However, due to inappropriate message fragmentation, the real-time performance of no-wait scheduling algorithms is reduced. Therefore, in this paper, joint algorithms of message fragmentation and no-wait scheduling are proposed. First, a specification for the joint problem based on optimization modulo theories is proposed so that off-the-shelf solvers can be used to find optimal solutions. Second, to improve the scalability of our algorithm, the worst-case delay of messages is analyzed, and then, based on the analysis, a heuristic algorithm is proposed to construct low-delay schedules. Finally, we conduct extensive test cases to evaluate our proposed algorithms. The evaluation results indicate that, compared to existing algorithms, the proposed joint algorithm improves schedulability by up to 50%. , Available online Abstract: This paper proposes a control strategy called enclosing control. This strategy can be described as follows: the followers design their control inputs based on the state information of neighbor agents and move to specified positions. The convex hull formed by these followers contains the leaders. We use the single-integrator model to describe the dynamics of the agents and proposes a continuous-time control protocol and a sampled-data based protocol for multi-agent systems with stationary leaders with fixed network topology. Then the state differential equations are analyzed to obtain the parameter requirements for the system to achieve convergence. Moreover, the conditions achieving enclosing control are established for both protocols. A special enclosing control with no leader located on the convex hull boundary under the protocols is studied, which can effectively prevent enclosing control failures caused by errors in the system. Moreover, several simulations are proposed to validate theoretical results and compare the differences between the three control protocols. Finally, experimental results on the multi-robot platform are provided to verify the feasibility of the protocol in the physical system. , Available online  , doi: 10.1109/JAS.2020.1003414 Abstract: , Available online  , doi: 10.1109/JAS.2020.1003363 Abstract: In the cyber-physical environment, the clock synchronization algorithm is required to have better expansion for network scale. In this paper, a new measurement model of observability under the equivalent transformation of minimum mean square error (MMSE) is constructed based on basic measurement unit (BMU), which can realize the scaled expansion of MMSE measurement. 
Based on the state updating equation of absolute clock and the decoupled measurement model of MMSE-like equivalence, which is proposed to calculate the positive definite invariant set by using the theoretical-practical Luenberger observer as the synthetical observer, the local noncooperative optimal control problem is built, and the clock synchronization system driven by the ideal state of local clock can reach the exponential convergence for synchronization performance. Different from the problem of general linear system regulators, the state estimation error and state control error are analyzed in the established affine system based on the set-theory-in-control to achieve the quantification of state deviation caused by noise interference. Based on the BMU for isomorphic state map, the synchronization performance of clock states between multiple sets of representative nodes is evaluated, and the scale of evaluated system can be still expanded. After the synchronization is completed, the state of perturbation system remains in the maximum range of measurement accuracy, and the state of nominal system can be stabilized at the ideal state for local clock and realizes the exponential convergence of the clock synchronization system. , Available online  , doi: 10.1109/JAS.2020.1003381 Abstract: With the continuous improvement of automation, industrial robots have become an indispensable part of automated production lines. They widely used in a number of industrial production activities, such as spraying, welding, handling, etc., and have a great role in these sectors. Recently, the robotic technology is developing towards high precision, high intelligence. Robot calibration technology has a great significance to improve the accuracy of robot. However, it has much work to be done in the identification of robot parameters. The parameter identification work of existing serial and parallel robots is introduced. On the one hand, it summarizes the methods for parameter calibration and discusses their advantages and disadvantages. On the other hand, the application of parameter identification is introduced. This overview has a great reference value for robot manufacturers to choose proper identification method, points further research areas for researchers. Finally, this paper analyzes the existing problems in robot calibration, which may be worth researching in the future. , Available online  , doi: 10.1109/JAS.2019.1911648 Abstract: The passwords for unlocking the mobile devices are relatively simple, easier to be stolen, which causes serious potential security problems. An important research direction of identity authentication is to establish user behavior models to authenticate users. In this paper, a mobile terminal APP browsing behavioral authentication system architecture which synthesizes multiple factors is designed. This architecture is suitable for users using the mobile terminal APP in the daily life. The architecture includes data acquisition, data processing, feature extraction, and sub model training. We can use this architecture for continuous authentication when the user uses APP at the mobile terminal. , Available online Abstract: Collaborative Robotics is one of the high-interest research topics in the area of academia and industry. It has been progressively utilized in numerous applications, particularly in intelligent surveillance systems. 
It allows the deployment of smart cameras or optical sensors with computer vision techniques, which may serve in several object detection and tracking tasks. These tasks have been considered challenging and high-level perceptual problems, frequently dominated by relative information about the environment, where main concerns such as occlusion, illumination, background, object deformation, and object class variations are commonplace. In order to show the importance of top view surveillance, a collaborative robotics framework has been presented. It can assist in the detection and tracking of multiple objects in top view surveillance. The framework consists of a smart robotic camera embedded with a visual processing unit. The existing pre-trained deep learning models named SSD and YOLO have been adopted for object detection and localization. The detection models are further combined with different tracking algorithms, including GOTURN, MEDIANFLOW, TLD, KCF, MIL, and BOOSTING. These algorithms, along with the detection models, help to track and predict the trajectories of detected objects. The pre-trained models are employed; therefore, the generalization performance is also investigated through testing the models on various sequences of the top view data set. The detection models achieved maximum true detection rates of 93% to 90%, with a maximum false detection rate of 0.6%. The tracking results of the different algorithms are nearly identical, with tracking accuracy ranging from 90% to 94%. Furthermore, a discussion has been carried out on the output results along with future guidelines. , Available online Abstract: The rise of the Internet and identity authentication systems has brought convenience to people’s lives but has also introduced the potential risk of privacy leaks. Existing biometric authentication systems based on explicit and static features bear the risk of being attacked by mimicked data. This work proposes a highly efficient biometric authentication system based on transient eye blink signals that are precisely captured by a neuromorphic vision sensor with microsecond-level temporal resolution. The neuromorphic vision sensor only transmits the local pixel-level changes induced by the eye blinks when they occur, which leads to advantageous characteristics such as an ultra-low latency response. We first propose a set of effective biometric features describing the motion, speed, energy and frequency signal of eye blinks based on the microsecond temporal resolution of event densities. We then train the ensemble model and non-ensemble model with our NeuroBiometric dataset for biometrics authentication. The experiments show that our system is able to identify and verify the subjects with the ensemble model at an accuracy of 0.948 and with the non-ensemble model at an accuracy of 0.925. The low false positive rates (about 0.002) and the highly dynamic features are not only hard to reproduce but also avoid recording visible characteristics of a user’s appearance. The proposed system sheds light on a new path towards safer authentication using neuromorphic vision sensors. , Available online , doi: 10.1109/JAS.2020.1003390 Abstract: Facial attribute editing has mainly two objectives: 1) translating an image from a source domain to a target one, and 2) only changing the facial regions related to a target attribute and preserving the attribute-excluding details. In this work, we propose a multi-attention U-Net-based generative adversarial network (MU-GAN).
First, we replace a classic convolutional encoder-decoder with a symmetric U-Net-like structure in a generator, and then apply an additive attention mechanism to build attention-based U-Net connections for adaptively transferring encoder representations to complement a decoder with attribute-excluding detail and enhance attribute editing ability. Second, a self-attention (SA) mechanism is incorporated into convolutional layers for modeling long-range and multi-level dependencies across image regions. Experimental results indicate that our method is capable of balancing attribute editing ability and details preservation ability, and can decouple the correlation among attributes. It outperforms the state-of-the-art methods in terms of attribute manipulation accuracy and image quality. Our code is available at https://github.com/SuSir1996/MU-GAN. , Available online  , doi: 10.1109/JAS.2020.1003180 Abstract: In order to improve detection system robustness and reliability, multi-sensors fusion is used in modern air combat. In this paper, a data fusion method based on reinforcement learning is developed for multi-sensors. Initially, the cubic B-spline interpolation is used to solve time alignment problems of multi-source data. Then, the reinforcement learning based data fusion (RLBDF) method is proposed to obtain the fusion results. With the case that the priori knowledge of target is obtained, the fusion accuracy reinforcement is realized by the error between fused value and actual value. Furthermore, the Fisher information is instead used as the reward if the priori knowledge is unable to be obtained. Simulations results verify that the developed method is feasible and effective for the multi-sensors data fusion in air combat. , Available online  , doi: 10.1109/JAS.2020.1003351 Abstract: Pneumatic muscle actuators (PMAs) are compliant and suitable for robotic devices that have been shown to be effective in assisting patients with neurologic injuries, such as strokes, spinal cord injuries, etc., to accomplish rehabilitation tasks. However, because PMAs have nonlinearities, hysteresis, and uncertainties, etc., complex mechanisms are rarely involved in the study of PMA-driven robotic systems. In this paper, we use nonlinear model predictive control (NMPC) and an extension of the echo state network called an echo state Gaussian process (ESGP) to design a tracking controller for a PMA-driven lower limb exoskeleton. The dynamics of the system include the PMA actuation and mechanism of the leg orthoses; thus, the system is represented by two nonlinear uncertain subsystems. To facilitate the design of the controller, joint angles of leg orthoses are forecasted based on the universal approximation ability of the ESGP. A gradient descent algorithm is employed to solve the optimization problem and generate the control signal. The stability of the closed-loop system is guaranteed when the ESGP is capable of approximating system dynamics. Simulations and experiments are conducted to verify the approximation ability of the ESGP and achieve gait pattern training with four healthy subjects. , Available online  , doi: 10.1109/JAS.2019.1911729 Abstract: In this paper, a kind of lateral stability control strategy is put forward about the four wheel independent drive electric vehicle. The design of control system adopts hierarchical structure. Unlike the previous control strategy, this paper introduces a method which is the combination of sliding mode control and optimal allocation algorithm. 
According to the driver’s operation commands (steering angle and speed), the steady state responses of the sideslip angle and yaw rate are obtained. Based on this, the reference model is built. The upper controller adopts the sliding mode control principle to obtain the desired yawing moment demand. The lower controller is designed to satisfy the desired yawing moment demand by optimal allocation of the tire longitudinal forces. Firstly, the optimization goal is built to minimize the actuator cost. Secondly, the weighted least-square method is used to design the tire longitudinal force optimization distribution strategy under the constraint conditions of the actuators and the friction ellipse. Beyond that, when the optimal allocation algorithm is not applied, a method of axial load ratio distribution is adopted. Finally, CarSim simulation experiments associated with Simulink are designed under the conditions of different velocities and different pavements. The simulation results show that the control strategy designed in this paper has a good following effect compared with the reference model, and the sideslip angle $\beta$ is controlled within a small range at the same time. Beyond that, based on the optimal distribution mode, the electromagnetic torque phase of each wheel can follow the trend of the vertical force of the tire, which shows the effectiveness of the optimal distribution algorithm. , Available online Abstract: The Border Gateway Protocol (BGP) has become the indispensable infrastructure of the Internet as a typical inter-domain routing protocol. However, it is vulnerable to misconfigurations and malicious attacks since BGP does not provide enough authentication mechanisms for route advertisement. As a result, it has brought about many security incidents with huge economic losses. Existing solutions to the routing security problem, such as S-BGP, So-BGP, Ps-BGP and RPKI, are based on the Public Key Infrastructure and face a high security risk from the centralized structure. In this paper, we propose a decentralized blockchain-based route registration framework: the Decentralized Route Registration System based on Blockchain (DRRS-BC). In DRRS-BC, we produce a global transaction ledger from the information of address prefixes and autonomous system numbers between multiple organizations and ASs, which is maintained by all blockchain nodes and further used for authentication. By applying blockchain, DRRS-BC perfectly solves the problems of identity authentication and behavior authentication as well as the promotion and deployment problem, rather than depending on an authentication center. Moreover, it resists prefix and subprefix hijacking attacks and meets the performance and security requirements of route registration. , Available online Abstract: Motivated by the converse Lyapunov technique for investigating converse results of semistable switched systems in control theory, this paper utilizes a constructive induction method to identify a cost function for performance gauge of an average, multi-cue multi-choice (MCMC), cognitive decision making model over a switching time interval. It shows that such a constructive cost function can be evaluated through an abstract energy called a Lyapunov function at initial conditions.
Hence, the performance gauge problem for the average MCMC model becomes the issue of finding such a Lyapunov function, leading to a possible way for designing corresponding computational algorithms via iterative methods such as adaptive dynamic programming. In order to reach this goal, a series of technical results are presented for the construction of such a Lyapunov function and its mathematical properties are discussed in details. Finally, a major result of guaranteeing the existence of such a Lyapunov function is rigorously proved. , Available online Abstract: Remaining useful life (RUL) prediction is an advanced technique for system maintenance scheduling. Most of existing RUL prediction methods are only interested in the precision of RUL estimation; the adverse impact of over-estimated RUL on maintenance scheduling is not of concern. In this work, an RUL estimation method with risk-averse adaptation is developed which can reduce the over-estimation rate while maintaining a reasonable under-estimation level. The proposed method includes a module of degradation feature selection to obtain crucial features which reflect system degradation trends. Then, the latent structure between the degradation features and the RUL labels is modeled by a support vector regression (SVR) model and a long short-term memory (LSTM) network, respectively. To enhance the prediction robustness and increase its marginal utility, the SVR model and the LSTM model are integrated to generate a hybrid model via three connection parameters. By designing a cost function with penalty mechanism, the three parameters are determined using a modified grey wolf optimization algorithm. In addition, a cost metric is proposed to measure the benefit of such a risk-averse predictive maintenance method. Verification is done using an aero-engine data set from NASA. The results show the feasibility and effectiveness of the proposed RUL estimation method and the predictive maintenance strategy. , Available online  , doi: 10.1109/JAS.2020.1003411 Abstract: Necessary and sufficient conditions for the exact controllability and exact observability of a descriptor infinite dimensional system are obtained in the sense of distributional solution. These general results are used to examine the exact controllability and exact observability of the Dzektser equation in the theory of seepage and the exact controllability of wave equation. , Available online  , doi: 10.1109/JAS.2019.1911720 Abstract: In a passive ultra-high frequency (UHF) radio frequency identification (RFID) system, the recovery of collided tag signals on a physical layer can enhance identification efficiency. However, frequency drift is very common in UHF RFID systems, and will have an influence on the recovery on the physical layer. To address the problem of recovery with the frequency drift, this paper adopts a radial basis function (RBF) network to separate the collision signals, and decode the signals via FM0 to recovery collided RFID tags. Numerical results show that the method in this paper has better performance of symbol error rate (SER) and separation efficiency compared to conventional methods when frequency drift occurs. , Available online  , doi: 10.1109/JAS.2020.1003384 Abstract: The advent of healthcare information management systems (HIMSs) continues to produce large volumes of healthcare data for patient care and compliance and regulatory requirements at a global scale. 
Analysis of this big data allows for boundless potential outcomes for discovering knowledge. Big data analytics (BDA) in healthcare can, for instance, help determine causes of diseases, generate effective diagnoses, enhance QoS guarantees by increasing the efficiency of healthcare delivery and the effectiveness and viability of treatments, generate accurate predictions of readmissions, enhance clinical care, and pinpoint opportunities for cost savings. However, BDA implementations in any domain are generally complicated and resource-intensive, with a high failure rate and no roadmap or success strategies to guide practitioners. In this paper, we present a comprehensive roadmap to derive insights from BDA in the healthcare (patient care) domain, based on the results of a systematic literature review. We initially determine big data characteristics for healthcare and then review BDA applications to healthcare in academic research, focusing particularly on NoSQL databases. We also identify the limitations and challenges of these applications and justify the potential of NoSQL databases to address these challenges and further enhance BDA healthcare research. We then propose and describe a state-of-the-art BDA architecture called Med-BDA for the healthcare domain, which addresses current BDA challenges and is based on the latest zeta big data paradigm. We also present success strategies to ensure the working of Med-BDA, along with outlining the major benefits of BDA applications to healthcare. Finally, we compare our work with other related literature reviews across twelve hallmark features to justify the novelty and importance of our work. The aforementioned contributions of our work are collectively unique and clearly present a roadmap for clinical administrators, practitioners and professionals to successfully implement BDA initiatives in their organizations.
Available online. Abstract: Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction. The ConvNets for all groups share parameters. To learn long-term features, outputs from all ConvNets are fed into a long short-term memory (LSTM) network, by which a final classification result is predicted. The new model has been tested on two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produces very competitive results. The robustness of the new model has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
Available online, doi: 10.1109/JAS.2020.1003003. Abstract: Embedded systems have numerous applications in everyday life. Petri-net-based representation for embedded systems (PRES+) is an important methodology for the modeling and analysis of these embedded systems.
For a large complex embedded system, state space explosion makes PRES+ modeling and analysis difficult; the Petri net synthesis method allows one to bypass this issue. To solve this problem, as well as to model and analyze large complex systems, two synthesis methods for PRES+ are presented in this paper. First, the property preservation of the synthesis shared transition set method is investigated. The property preservation of the synthesis shared transition subnet set method is then studied. An abstraction-synthesis-refinement representation method is proposed; through this representation method, the synthesis shared transition set approach is used to investigate the property preservation of the synthesis shared transition subnet set operation. Under certain conditions, several important properties of these synthetic nets are preserved, namely reachability, timing, functionality, and liveness. An embedded control system model is used as an example to illustrate the effectiveness of these synthesis methods for PRES+.
Available online, doi: 10.1109/JAS.2020.1003408. Abstract: A classic line of research on operational safety criteria for dynamic systems with barrier functions can be roughly summarized as a functional relationship, denoted by $\oplus$, between the barrier function and its first derivative with respect to time $t$, where $\oplus$ can be "=", "<", or ">", among others. This article draws on the form of the stability condition for finite-time stability to formulate a novel kind of relaxed safety judgement criteria called exponential-alpha safety criteria. Moreover, we make an initial exploration of using the control barrier function under exponential-alpha safety criteria to achieve control of dynamic system operational safety. In addition, motivated by actual process systems, we propose multi-hypersphere methods for constructing barrier functions and improve them for three types of special spatial relationships between the safe state set and the unsafe state set, where both sets can be spatially divided into multiple subsets. The effectiveness of the proposed safety criteria is demonstrated by simulation examples.
Available online. Abstract: Accurate estimation of the remaining useful life (RUL) and health state of rollers is of great significance to hot rolling production. It can provide decision support for roller management so as to improve the productivity of the hot rolling process. In addition, RUL prediction for rollers is helpful in transitioning from the current regular maintenance strategy to condition-based maintenance. Therefore, a new method that can extract coarse-grained and fine-grained features from batch data to predict the RUL of rollers is proposed in this paper. Firstly, a new deep learning network architecture based on recurrent neural networks is developed that can make full use of the extracted coarse-grained and fine-grained features to estimate the health indicator (HI), where the HI indicates the health state of the roller. Following that, a state-space model is constructed to describe the HI, and the probabilistic distribution of the RUL can be estimated by extrapolating the HI degradation model to a predefined failure threshold.
Finally, an application to a hot strip mill is given to verify the effectiveness of the proposed methods using data collected from an industrial site, and the relatively low RMSE and MAE values demonstrate its advantages over some other popular deep learning methods.
Available online. Abstract: This work conducts robust $H_\infty$ analysis for a class of quantum systems subject to perturbations in the interaction Hamiltonian. A necessary and sufficient condition for the robustly strict bounded real property of this type of uncertain quantum system is proposed. This paper focuses on coherent robust $H_\infty$ controller design for quantum systems with uncertainties in the interaction Hamiltonian. The desired controller is connected with the uncertain quantum system through direct and indirect couplings. A necessary and sufficient condition is provided to build a connection between the robust $H_\infty$ control problem and the scaled $H_\infty$ control problem. A numerical procedure is provided to obtain the coefficients of a coherent controller. An example is presented to illustrate the controller design method.
Available online. Abstract: This paper investigates the distributed fault-tolerant containment control (FTCC) problem of nonlinear multi-agent systems (MASs) under a directed network topology. The proposed control framework, which does not rely on global information about the communication topology, consists of two layers. Different from most existing distributed fault-tolerant control (FTC) protocols, where a fault in one agent may propagate over the network, the developed control method eliminates the phenomenon of fault propagation. Based on the hierarchical control strategy, the FTCC problem with a directed graph can be simplified into the distributed containment control of the upper layer and the fault-tolerant tracking control of the lower layer. Finally, simulation results are given to demonstrate the effectiveness of the proposed control protocol.
Available online, doi: 10.1109/JAS.2020.1003210. Abstract: Deadlock resolution strategies based on siphon control are widely investigated. Their computational efficiency largely depends on siphon computation. Mixed-integer programming (MIP) can be utilized to compute an emptiable siphon in a Petri net (PN). Based on it, deadlock resolution strategies can be designed without complete siphon enumeration, which has exponential complexity. For this reason, various MIP methods have been proposed for various subclasses of PNs. This work proposes an innovative MIP method to compute an emptiable minimal siphon (EMS) for a subclass of PNs named S4PR. In particular, many structural characteristics of EMS in S4PR are formalized as constraints, which greatly reduces the solution space. Experimental results show that the proposed MIP method has higher computational efficiency. Furthermore, the proposed method allows one to determine the liveness of an ordinary S4PR.
Available online, doi: 10.1109/JAS.2020.1003378. Abstract: This paper focuses on the design of a new finite-time convergence disturbance rejection control scheme for a flexible Timoshenko manipulator subject to extraneous disturbances. To suppress the shear deformation and elastic oscillation, position the manipulator at a desired angle, and ensure the finite-time convergence of disturbances, we develop three disturbance observers (DOs) and boundary controllers.
Under the derived DO-based control schemes, the controlled system is guaranteed to be uniformly bounded and stable, and the disturbance estimation errors converge to zero in finite time. Finally, numerical simulations based on finite difference methods demonstrate the effectiveness of the devised scheme with appropriately selected parameters.
Available online. Abstract: In today’s modern electric vehicles, enhancing the performance of the safety-critical cyber-physical system (CPS) is necessary for the safe maneuverability of the vehicle. As a typical CPS, the braking system is crucial for vehicle design and safe control. However, precise state estimation of the brake pressure is desired to perform safe driving with a high degree of autonomy. In this paper, a sensorless state estimation technique for the vehicle’s brake pressure is developed using a deep-learning approach. A deep neural network (DNN) is structured and trained using deep-learning training techniques such as dropout and rectified linear units. These techniques are utilized to obtain a more accurate model for brake pressure state estimation applications. The proposed model is trained using real experimental data collected through vehicle testing: the vehicle was attached to a chassis dynamometer while brake pressure data were collected under random driving cycles. Based on these experimental data, the DNN is trained and the performance of the proposed state estimation approach is validated accordingly. The results demonstrate high-accuracy brake pressure state estimation with an RMSE of 0.048 MPa.
Available online. Abstract: Driving style, traffic and weather conditions have a significant impact on vehicle fuel consumption; in particular, road freight traffic significantly contributes to the $CO_2$ increase in the atmosphere. This paper proposes an Eco-Route Planner devoted to determining and communicating to the drivers of Heavy-Duty Vehicles (HDVs) the eco-route that guarantees the minimum fuel consumption while respecting the travel time established by the freight companies. The proposed eco-route is the optimal route from origin to destination and includes the optimized speed and gear profiles. To this aim, the Cloud Computing System architecture is composed of two main components: the Data Management System, which collects, fuses and integrates the raw external source data, and the Cloud Optimizer, which builds the route network, selects the eco-route and determines the optimal speed and gear profiles. Finally, a real case study is discussed, showing the benefit of the proposed Eco-Route Planner.
Available online. Abstract: Timed weighted marked graphs are a subclass of timed Petri nets that have wide applications in the control and performance analysis of flexible manufacturing systems. Due to the existence of multiplicities (i.e., weights) on edges, the performance analysis and resource optimization of such graphs represent a challenging problem. In this paper, we develop an approach to transform a timed weighted marked graph whose initial marking is not given into an equivalent parametric timed marked graph where the edges have unitary weights. In order to explore an optimal resource allocation policy for a system, an analytical method is developed for the resource optimization of timed weighted marked graphs by studying an equivalent net.
Finally, we apply the proposed method to a flexible manufacturing system and compare the results with a previous heuristic approach. Simulation analysis shows that the developed approach is superior to the heuristic approach.
Available online. Abstract: This paper investigates the problem of controlling a half-vehicle semi-active suspension system involving a magnetorheological (MR) damper. The damper features a hysteretic behavior that is presently captured through the nonlinear Bouc-Wen model. The control objective is to regulate the heave and pitch motions of the chassis despite road irregularities. The difficulty of the control problem lies in the nonlinearity of the system model, the uncertainty of some of its parameters, and the inaccessibility to measurement of the hysteresis internal state variables. Using Lyapunov control design tools, we design two observers to obtain online estimates of the hysteresis internal states and a stabilizing adaptive state-feedback regulator. The whole adaptive controller is formally shown to meet the desired control objectives. This theoretical result is confirmed by several simulations demonstrating its superiority over skyhook control and passive suspension.
Available online, doi: 10.1109/JAS.2020.1003405. Abstract: The purpose of this paper is to assess the operational efficiency of a public bus transportation company via a case study in a large city of China, using a data envelopment analysis (DEA) model and Shannon’s entropy. This company operates 37 main routes on the backbone roads and thus plays a significant role in public transportation in the city. According to bus industry norms, an efficiency evaluation index system is constructed from the perspective of both company operation and passenger demands. For passenger satisfaction, passenger waiting time and passenger-crowding degree are considered as undesirable indicators. To describe such indicators, a super-efficient DEA model is constructed. With this model and actual data, efficiency is evaluated for each bus route. Results show that the DEA model combined with Shannon’s entropy achieves more reasonable results. A sensitivity analysis is also presented. The results are therefore meaningful for the company to improve its operations and management.
Available online. Abstract: The dust distribution law at the top of a blast furnace (BF) is of great significance for understanding gas flow distribution and mitigating the negative influence of dust particles on the accuracy and service life of detection equipment. The harsh environment inside a BF makes it difficult to describe the dust distribution. This paper addresses this problem by proposing a dust distribution $k$-$S\varepsilon$-$u_p$ model based on interphase (gas-powder) coupling. The proposed model couples a $k$-$S\varepsilon$ model (which describes gas flow movement) and a $u_p$ model (which depicts dust movement). First, the kinetic energy equation and turbulent dissipation rate equation in the $k$-$S\varepsilon$ model are established based on modeling theory and Single-Green-Function two-scale direct interaction approximation (SGF-TSDIA) theory.
Second, a dust particle movement $u_p$ model is built based on a force analysis of the dust and Newton’s laws of motion. Finally, a coupling factor that describes the interphase interaction is proposed, and the $k$-$S\varepsilon$-$u_p$ model, with clear physical meaning, rigorous mathematical logic, and adequate generality, is developed. Simulation results and on-site verification show that the $k$-$S\varepsilon$-$u_p$ model not only has high precision but also reveals the aggregate distribution features of the dust, which are helpful for optimizing the installation position of the detection equipment and improving its accuracy and service life.
Available online. Abstract: The driver’s cognitive and physiological states affect his/her ability to control the vehicle, so these driver states are essential to the safety of automobiles. The design of advanced driver assistance systems (ADAS) or autonomous vehicles will depend on their ability to interact effectively with the driver. A deeper understanding of the driver state is, therefore, paramount. EEG has proven to be one of the most effective methods for driver state monitoring and human error detection. This paper discusses EEG-based driver state detection systems and their corresponding analysis algorithms over the last three decades. First, the commonly used EEG system setup for driver state studies is introduced. Then, the EEG signal preprocessing, feature extraction, and classification algorithms for driver state detection are reviewed. Finally, EEG-based driver state monitoring research is reviewed in depth, and its future development is discussed. It is concluded that current EEG-based driver state monitoring algorithms are promising for safety applications, but many improvements are still required in EEG artifact reduction, real-time processing, and between-subject classification accuracy.
Available online. Abstract: The new coronavirus (COVID-19), declared by the World Health Organization as a pandemic, has infected more than 1 million people and killed more than 50 thousand. An infection caused by COVID-19 can develop into pneumonia, which can be detected by a chest x-ray exam and should be treated appropriately. In this work, we propose an automatic detection method for COVID-19 infection based on chest x-ray images. The datasets constructed for this study are composed of 194 x-ray images of patients diagnosed with coronavirus and 194 x-ray images of healthy patients. Since few images of patients with COVID-19 are publicly available, we apply the concept of transfer learning for this task. We use different architectures of convolutional neural networks (CNNs) trained on ImageNet and adapt them to behave as feature extractors for the x-ray images. Then, the CNNs are combined with consolidated machine learning methods, such as k-Nearest Neighbor, Bayes, Random Forest, Multilayer Perceptron (MLP), and Support Vector Machine (SVM). The results show that, for one of the datasets, the extractor-classifier pair with the best performance is the MobileNet architecture with the SVM classifier using a linear kernel, which achieves an accuracy and an F1-score of 98.5%. For the other dataset, the best pair is DenseNet201 with MLP, achieving an accuracy and an F1-score of 95.6%. Thus, the proposed approach demonstrates efficiency in detecting COVID-19 in x-ray images.
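The extractor-classifier pattern described in the preceding abstract (a frozen ImageNet CNN producing features for a classical classifier) is easy to prototype. The following is a minimal sketch, assuming TensorFlow/Keras and scikit-learn; the random image array is a stand-in for real chest x-ray data (the study used 194 images per class), and the batch size here is reduced for illustration.

```python
# Sketch of a CNN-as-feature-extractor + linear-SVM pipeline, in the spirit
# of the abstract above. NOT the authors' code: the images below are random
# stand-ins for chest x-rays, sized down to 40 per class for illustration.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# MobileNet pretrained on ImageNet, truncated to a 1024-D global-average-
# pooled feature vector per image (weights download on first use).
extractor = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))

images = (np.random.rand(80, 224, 224, 3) * 255).astype("float32")
labels = np.array([1] * 40 + [0] * 40)  # 1 = COVID-19, 0 = healthy

# Frozen deep features; preprocess_input rescales pixels as MobileNet expects.
features = extractor.predict(
    tf.keras.applications.mobilenet.preprocess_input(images))

# Linear-kernel SVM on the frozen features, the best pair reported for one
# of the datasets in the abstract.
clf = SVC(kernel="linear")
print("mean CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())
```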
Available online. Abstract: We design a regulation-triggered adaptive controller for robot manipulators to efficiently estimate unknown parameters and to achieve asymptotic stability in the presence of coupled uncertainties. Robot manipulators are widely used in telemanipulation systems where they are subject to model and environmental uncertainties. Using conventional control algorithms on such systems can cause not only poor control performance but also expensive computational costs and catastrophic instabilities. Therefore, system uncertainties need to be estimated through a computationally efficient adaptive control law. We focus on robot manipulators as an example of a highly nonlinear system. As a case study, a 2-DOF manipulator subject to four parametric uncertainties is investigated. First, the dynamic equations of the manipulator are derived, and the corresponding regressor matrix is constructed for the unknown parameters. For a general nonlinear system, a theorem is presented to guarantee the asymptotic stability of the system and the convergence of the parameter estimates. Finally, simulation results are discussed for a two-link manipulator, and the performance of the proposed scheme is thoroughly evaluated.
Available online. Abstract: To improve the energy efficiency of a direct expansion air conditioning (DX A/C) system while guaranteeing occupant comfort, a hierarchical controller for a DX A/C system with uncertain parameters is proposed. The control strategy consists of an open-loop optimization controller and a closed-loop guaranteed cost periodically intermittent-switch controller (GCPISC). The error dynamics of the closed-loop control system are modelled based on the GCPISC principle. The difference from previous DX A/C control methods is that the controller designed in this paper performs control at discrete times. To ease the controller design, a series of matrix inequalities are derived as sufficient conditions for the lower-layer closed-loop GCPISC controller. In this way, the DX A/C system output follows, at an exponential rate, the optimal references obtained through the upper-layer open-loop controller, and the energy efficiency of the system is improved. Moreover, a static optimization problem is addressed to obtain an optimal GCPISC law ensuring a minimum upper bound on DX A/C system performance in terms of energy efficiency and output tracking error. The advantages of the designed hierarchical controller for a DX A/C system with uncertain parameters are demonstrated through simulation results.
Available online. Abstract: In this paper, a data-based scheme is proposed to solve the optimal tracking problem of autonomous nonlinear switching systems. The system state is forced to track the reference signal by minimizing the performance function. First, the problem is transformed into solving the corresponding Bellman optimality equation in terms of the Q-function (also named the action value function). Then, an iterative algorithm based on adaptive dynamic programming (ADP) is developed to find the optimal solution, based entirely on sampled data. A linear-in-parameter (LIP) neural network is taken as the value function approximator. Considering the presence of approximation error at each iteration step, the generated approximate value function sequence is proved to be bounded around the exact optimal solution under some verifiable assumptions.
Moreover, the effect of terminating the learning process after a finite number of iterations is investigated in this paper, and a sufficient condition for the asymptotic stability of the tracking error is derived. Finally, the effectiveness of the algorithm is demonstrated with three simulation examples.
Available online. Abstract: Formation control of discrete-time linear multi-agent systems using directed switching topology is considered in this work via a reduced-order observer, in which a formation control protocol is proposed under the assumption that each directed communication topology has a directed spanning tree. By utilizing the relative outputs of neighboring agents, a reduced-order observer is designed for each following agent. A multi-step control algorithm is established based on the Lyapunov method and the modified discrete-time algebraic Riccati equation. A sufficient condition is given to ensure that the discrete-time linear multi-agent system achieves the expected leader-following formation. Finally, numerical examples are provided to demonstrate the effectiveness of the obtained results.
Available online, doi: 10.1109/JAS.2020.1003324. Abstract: In computer vision, 3D object recognition is one of the most important tasks for many real-world applications. Three-dimensional convolutional neural networks (CNNs) have demonstrated their advantages in 3D object recognition. In this paper, we propose to use the principal curvature directions of 3D objects (from a CAD model) to represent the geometric features as inputs for the 3D CNN. Our framework, namely CurveNet, learns perceptually relevant salient features and predicts object class labels. Curvature directions incorporate complex surface information of a 3D object, which helps our framework produce more precise and discriminative features for object recognition. Multi-task learning is inspired by sharing features between two related tasks; we consider pose classification as an auxiliary task to enable our CurveNet to better generalize in object label classification. Experimental results show that our proposed framework using curvature vectors performs better than one using voxels as input for 3D object classification. We further improved the performance of CurveNet by combining two networks, taking both the curvature directions and the voxels of a 3D object as inputs. A Cross-Stitch module was adopted to learn effective shared features across multiple representations. We evaluated our methods using three publicly available datasets and achieved competitive performance in the 3D object recognition task.
Available online, doi: 10.1109/JAS.2020.1003201. Abstract: With the increasing presence of robots in our daily life, there is a strong need and demand for strategies to acquire high-quality interaction between robots and users by enabling robots to understand users’ moods, intentions, and other aspects. During human-human interaction, personality traits have an important influence on human behavior, decisions, moods, and many others. Therefore, we propose an efficient computational framework to endow a robot with the capability of understanding the user’s personality traits based on the user’s nonverbal communication cues, represented by three visual features including head motion, gaze, and body motion energy, and three vocal features including voice pitch, voice energy, and mel-frequency cepstral coefficients (MFCC).
We used the Pepper robot in this study as a communication robot to interact with each participant by asking questions, while the robot extracted the nonverbal features from each participant’s habitual behavior using its on-board sensors. Each participant’s personality traits were evaluated with a questionnaire. We then trained ridge regression and linear support vector machine (SVM) classifiers using the nonverbal features and the personality trait labels from the questionnaire, and evaluated the performance of the classifiers. We verified the validity of the proposed models, which showed promising binary classification performance in recognizing each of the Big Five personality traits of the participants based on individual differences in nonverbal communication cues.
Available online, doi: 10.1109/JAS.2019.1911627. Abstract: The stabilization problem of distributed proportional-integral-derivative (PID) controllers for general first-order multi-agent systems with time delay is investigated in this paper. The closed-loop multi-input multi-output (MIMO) framework in the frequency domain is first introduced for the multi-agent system. Based on matrix theory, the whole system is decoupled into several subsystems with respect to the eigenvalues of the Laplacian matrix. Considering that the eigenvalues may be complex numbers, the consensus problem of the multi-agent system is transformed into the stabilizing problem of all the subsystems with complex coefficients. For each subsystem with complex coefficients, the range of admissible proportional gains $k_{\rm P}$ is analytically determined. Then, the stabilizing region in the space of integral gain ($k_{\rm I}$) and derivative gain ($k_{\rm D}$) for a given $k_{\rm P}$ value is also obtained in analytical form. The entire stabilizing set can be determined by sweeping $k_{\rm P}$ over the allowable range. The proposed method applies to general first-order multi-agent systems under arbitrary topologies, including undirected and directed graphs. Besides, the results in the paper provide the basis for the design of distributed PID controllers satisfying different performance criteria. Simulation examples are presented to check the validity of the proposed control strategy.
Available online. Abstract: This paper proposes a novel sampled-data asynchronous fuzzy output feedback control approach for active suspension systems in a restricted frequency domain. In order to better investigate uncertain suspension dynamics, the sampled-data Takagi-Sugeno (T-S) fuzzy half-car active suspension (HCAS) system is considered, which is further modelled as a continuous system with an input delay. Firstly, considering that the fuzzy system and the fuzzy controller cannot share identical premises due to the existence of the input delay, a reconstruction method is employed to synchronize the time scales of the membership functions of the fuzzy controller and the fuzzy system. Secondly, since external disturbances often belong to a restricted frequency range, a finite-frequency control criterion is presented for control synthesis to reduce conservatism. Thirdly, given that full information on the state variables is hardly available in practical suspension systems, a two-stage method is proposed to calculate the static output feedback control gains.
Moreover, an iterative algorithm is proposed to compute the optimal solution. Finally, numerical simulations verify the effectiveness of the proposed controllers.
Available online. Abstract: In view of environmental competencies, selecting the optimal green supplier is one of the crucial issues for enterprises, and multi-criteria decision-making (MCDM) methodologies can more easily solve this green supplier selection (GSS) problem. In addition, the prioritized aggregation (PA) operator can focus on the prioritization relationship over the criteria, the Choquet integral (CI) operator can fully take account of the importance of criteria and the interactions among them, and the Bonferroni mean (BM) operator can capture the interrelationships of criteria. However, most existing research cannot simultaneously consider the interactions, interrelationships and prioritizations over the criteria that are involved in the GSS process. Moreover, the interval type-2 fuzzy set (IT2FS) is a more effective tool to represent fuzziness. Therefore, building on the advantages of PA, CI, BM and IT2FS, in this paper the interval type-2 fuzzy prioritized Choquet normalized weighted BM operators with λ fuzzy measure and generalized prioritized measure are proposed, and some of their properties are discussed. Then, a novel MCDM approach for GSS based upon the presented operators is developed, and detailed decision steps are given. Finally, the applicability and practicability of the proposed methodology are demonstrated by its application to shared-bike GSS and by comparisons with other methods. The advantage of the proposed method is that it can consider interactions, interrelationships and prioritizations over the criteria simultaneously.
Available online. Abstract: Reliability engineering implemented early in the development process has a significant impact on improving software quality. It can assist in the design of architecture and guide later testing, which is beyond the scope of traditional reliability analysis methods. Structural reliability models work for this purpose, but most of them have been tested only in simulation case studies due to a lack of actual data. Here we use software metrics for reliability modeling, collected from the source code of post versions. Through the proposed strategy, redundant metric elements are filtered out and the rest are aggregated to represent the module reliability. We further propose a framework to automatically apply the module values and calculate overall reliability by introducing formal methods. The experimental results from an actual project show that reliability analysis at the design and development stage can approach the validity of analysis at the test stage through reasonable application of metric data. The study also demonstrates that the proposed methods have good applicability.
Available online. Abstract: This paper investigates two noncooperative-game strategies which may be used to represent a human driver’s steering control behavior in response to vehicle automated steering intervention. The first strategy, namely the Nash strategy, is derived based on the assumption that a Nash equilibrium is reached in a noncooperative game of vehicle path-following control involving a driver and a vehicle automated steering controller. The second one, namely the Stackelberg strategy, is derived based on the assumption that a Stackelberg equilibrium is reached in a similar context.
A simulation study is performed to examine the differences between the two proposed noncooperative-game strategies. An experiment using a fixed-base driving simulator is carried out to measure six test drivers’ steering behavior in response to vehicle automated steering intervention. The Nash strategy is then fitted to the measured driver steering wheel angles following a model identification procedure, and the control weight parameters involved in the Nash strategy are identified. It is found that the proposed Nash strategy with the identified control weights is capable of representing the trend of measured driver steering behavior and vehicle lateral responses. It is also found that the proposed Nash strategy is superior to the classic driver steering control strategy that has been widely used for modeling driver steering control in the past. A discussion on improving automated steering control using the gained knowledge of driver noncooperative-game steering control behavior is presented.
Available online. Abstract: The localized faults of rolling bearings can be diagnosed from their vibration impulsive signals. However, it is always a challenge to extract the impulsive feature under background noise and nonstationary conditions. This paper investigates impulsive signal detection for a single-point-defect rolling bearing and presents a novel data-driven detection approach based on dictionary learning. To overcome the effects of harmonic and noise components, we propose an autoregressive minimum entropy deconvolution model to separate the harmonics and deconvolve the effect of the transmission path. To address the shortcomings of conventional sparse representation under changeable operating environments, we propose an approach that combines K-clustering with singular value decomposition (K-SVD) and Split-Bregman to extract impulsive components precisely. Experiments on synthetic signals and real run-to-failure signals verify the effectiveness and robustness of the proposed approach for the detection of different impulsive signals. A comparison with state-of-the-art methods shows that the proposed approach provides more accurately detected impulsive signals.
Available online. Abstract: This paper proposes a static-output-feedback based robust fuzzy wheelbase preview control algorithm for uncertain active suspensions with time delay and a finite frequency constraint. Firstly, a Takagi-Sugeno (T-S) fuzzy augmented model is established to formulate the half-car active suspension system with consideration of time delay, sprung mass variation and wheelbase preview information. Secondly, in view of the resonance between human organs and vertical vibrations in the frequency range of 4-8 Hz, a finite-frequency control criterion in terms of the $H_\infty$ norm is developed to improve ride comfort. Meanwhile, other mechanical constraints are also considered and satisfied via the generalized $H_2$ norm. Thirdly, in order to maintain the feasibility of the controller even though some state variables are not measured online, a two-stage approach is adopted to derive a static output feedback controller. Finally, numerical simulation results illustrate the excellent performance of the proposed controller.
Available online. Abstract: For a large-scale palmprint identification system, it is necessary to speed up the identification process to reduce the response time while maintaining a high rate of identification accuracy.
In this paper, we propose a novel hashing-based technique called orientation field code hashing for fast palmprint identification. Building on hashing-based algorithms, we first propose a double-orientation encoding method to eliminate the instability of orientation codes and make the orientation codes more reasonable. Secondly, we propose a window-based feature measurement for rapid searching of the target. We also explore the influence of parameters related to hashing-based palmprint identification. We have carried out a number of experiments on the Hong Kong PolyU large-scale database and on the CASIA palmprint database plus a synthetic database. The results show that on the Hong Kong PolyU large-scale database, the proposed method is about 1.5 times faster than state-of-the-art methods, while achieving comparable identification accuracy. On the CASIA database plus the synthetic database, the proposed method also achieves better identification speed.
Available online, doi: 10.1109/JAS.2020.1003198. Abstract: Configuration evaluation is a key technology to be considered in the design of multiple aircraft formation (MAF) configurations with highly dynamic properties in engineering applications. This paper deduces the relationship between the relative velocity, dynamic safety distance and dynamic adjacent distance of formation members, then divides the formation states into collision states and matching states. Meanwhile, probability models are constructed based on the binary normal distribution of relative distance and relative velocity. Moreover, configuration evaluation strategies are studied by quantitatively analyzing the denseness and the basic capabilities according to the MAF collision-state probability and the MAF matching-state probability, respectively. The scale of the MAF is grouped into five levels, and previous lattice-type structures are extended into four degrees by taking the relative velocities into account to guide configuration design under complex task conditions. Finally, hardware-in-the-loop (HIL) simulation and outfield flight test results are presented to verify the feasibility of these evaluation strategies.
Available online. Abstract: A fully distributed microgrid system model is presented in this paper. On the user side, two types of load and plug-in electric vehicles are considered in the energy scheduling for greater benefits. The charging and discharging states of the electric vehicles are represented by zero-one variables for more flexibility. To solve the nonconvex optimization problem of the users, a novel neurodynamic algorithm, which combines a neural network algorithm with a differential evolution algorithm and converges faster, is designed. A distributed algorithm with a new approach to handling the equality constraints is used to solve the convex optimization problem of the generators, which protects their privacy. Simulation results and comparative experiments show that the model and algorithms are effective.
Available online, doi: 10.1109/JAS.2020.1003111. Abstract: One of the challenging issues in the stability analysis of time-delay systems is how to obtain a stability criterion from a matrix-valued polynomial in a time-varying delay. The first contribution of this paper is to establish a necessary and sufficient condition for a matrix-valued polynomial inequality over a certain closed interval.
The degree of such a matrix-valued polynomial can be any finite positive integer. The second contribution of this paper is to introduce a novel Lyapunov-Krasovskii functional, which includes a cubic polynomial in the time-varying delay, into the stability analysis of time-delay systems. Based on the novel Lyapunov-Krasovskii functional and the necessary and sufficient condition on matrix-valued polynomial inequalities, two stability criteria are derived for two cases of the time-varying delay. A well-studied numerical example is given to show that the proposed stability criteria are less conservative than some existing ones.
Available online. Abstract: An optimal control strategy based on the winner-take-all (WTA) model is proposed for target tracking and cooperative competition of multiple UAVs. In this model, firstly, based on the artificial potential field method, the artificial potential field function is improved and a fuzzy control decision is designed to realize trajectory tracking of dynamic targets. Secondly, based on a high-order differentiator with finite-time convergence, a double closed-loop UAV speed tracking controller is designed to realize speed control and tracking along the target tracking trajectory. Numerical simulation results show that the designed speed tracking controller has the advantages of fast tracking, high precision, strong stability and chattering avoidance. Finally, a cooperative competition scheme for multiple UAVs based on WTA is designed to find the minimum control energy among the UAVs and realize the optimal control strategy. Theoretical analysis and numerical simulation results show that the model exhibits fast convergence, high control accuracy, strong stability and good robustness.
Available online. Abstract: This paper investigates the distributed model predictive control (MPC) problem for linear systems whose network topologies are changeable by way of inserting new subsystems, disconnecting existing subsystems, or merely modifying the couplings between subsystems. To equip live systems with quick response capability when the network topology is modified, while keeping satisfactory dynamic performance, a novel reconfiguration control scheme based on the alternating direction method of multipliers (ADMM) is presented. In this scheme, the local controllers directly influenced by the structure realignment are redesigned in the reconfiguration control. Meanwhile, by employing the powerful ADMM algorithm, iterative formulas for solving the reconfigured optimization problem are obtained, which significantly accelerates the computation and ensures a timely output of the reconfigured optimal control response. Ultimately, the presented reconfiguration scheme is applied to the level control of a benchmark four-tank plant to illustrate its effectiveness and main characteristics.
Available online, doi: 10.1109/JAS.2019.1911801. Abstract: Random vector functional link networks (RVFL) are a class of single-hidden-layer neural networks based on a learner paradigm in which some parameters are randomly selected; they contain more information due to the direct links between inputs and outputs. In this paper, combining the advantages of RVFL with the ideas of the online sequential extreme learning machine (OS-ELM) and the initial-training-free online extreme learning machine (ITF-OELM), a novel online learning algorithm, named initial-training-free online random vector functional link (ITF-ORVFL), is investigated for training RVFL.
Because the idea of ITF-ORVFL comes from OS-ELM and ITF-OELM, the link vector of RVFL can be analytically determined from sequentially arriving data by ITF-ORVFL at a high learning speed. Besides, a novel variable is added to the update formulae of ITF-ORVFL, and the stability of nonlinear systems based on this learning algorithm is guaranteed. The experimental results indicate that the proposed ITF-ORVFL is effective in estimating nonparametric uncertainty.
Available online, doi: 10.1109/JAS.2019.1911534. Abstract: Hand gesture recognition is a popular topic in computer vision and makes human-computer interaction more flexible and convenient. The representation of hand gestures is critical for recognition. In this paper, we propose a new method to measure the similarity between hand gestures and exploit it for hand gesture recognition. The depth maps of hand gestures captured via Kinect sensors are used in our method, where the 3D hand shapes can be segmented from cluttered backgrounds. To extract the pattern of salient 3D shape features, we propose a new descriptor, 3D Shape Context, for 3D hand gesture representation. The 3D Shape Context information of each 3D point is obtained at multiple scales, because both local shape context and global shape distribution are necessary for recognition. The description of all the 3D points constitutes the hand gesture representation, and hand gesture recognition is performed via the dynamic time warping algorithm (a minimal sketch of this matching step is given below). Extensive experiments are conducted on multiple benchmark datasets. The experimental results verify that the proposed method is robust to noise, articulated variations, and rigid transformations. Our method outperforms state-of-the-art methods in comparisons of accuracy and efficiency.
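As referenced in the last abstract, dynamic time warping (DTW) aligns two sequences of per-frame descriptors and scores their similarity. Below is a toy sketch of the standard DTW recurrence; it does not reproduce the 3D Shape Context descriptors themselves, and the random vectors are stand-ins for real per-frame gesture features.

```python
# Toy dynamic time warping (DTW) distance between two sequences of feature
# vectors: the matching step the hand-gesture abstract relies on. Random
# vectors stand in for real per-frame descriptors.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between sequences a of shape (n, d) and b of shape (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            # extend the cheapest of the three admissible warping moves
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[n, m])

rng = np.random.default_rng(0)
query, template = rng.random((40, 16)), rng.random((55, 16))
print(dtw_distance(query, template))  # smaller value = more similar gesture
```

A nearest-neighbor recognizer then assigns a query gesture the label of the template with the smallest DTW distance.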
# Virtual-freezing fluorescence imaging flow cytometry

## Abstract

By virtue of the combined merits of flow cytometry and fluorescence microscopy, imaging flow cytometry (IFC) has become an established tool for cell analysis in diverse biomedical fields such as cancer biology, microbiology, immunology, hematology, and stem cell biology. However, the performance and utility of IFC are severely limited by the fundamental trade-off between throughput, sensitivity, and spatial resolution. Here we present an optomechanical imaging method that overcomes the trade-off by virtually freezing the motion of flowing cells on the image sensor to effectively achieve 1000 times longer exposure time for microscopy-grade fluorescence image acquisition. Consequently, it enables high-throughput IFC of single cells at >10,000 cells s−1 without sacrificing sensitivity and spatial resolution. The availability of numerous information-rich fluorescence cell images allows high-dimensional statistical analysis and accurate classification with deep learning, as evidenced by our demonstration of unique applications in hematology and microbiology.

## Introduction

Over the last decade, imaging flow cytometry (IFC)1,2,3,4,5,6,7,8,9 has opened a new window on biological and medical research by offering capabilities that are not possible with traditional flow cytometry. IFC provides quantitative image data of every event (e.g., cells, cell clusters, debris), allowing the morphometric characterization of single cells in large heterogeneous populations1,2 and further advancing our understanding of cellular heterogeneity10. Furthermore, conventional digital analysis tools in flow cytometry such as histograms and scatter plots are readily available to IFC users, but with much richer information about the acquired events by virtue of the image data1,2. The availability of the big data produced by IFC is well aligned with the pressing need for progressively larger biomedical datasets for efficient and accurate data analysis with the help of machine learning (e.g., deep learning) to make better decisions in biomedical research and clinical diagnosis11,12. Recent studies show that IFC is highly effective for the localization and enumeration of specific molecules such as proteins, nucleic acids, and peptides3,6, the analysis of cell-cell interaction13 and the cell cycle12,14, the characterization of DNA damage and repair15, and fluorescence in situ hybridization (FISH)16. Unfortunately, the performance and utility of IFC are constrained by the fundamental trade-off between throughput, sensitivity, and spatial resolution17,18,19,20,21,22. As the flow speed is increased for higher cell throughput, the integration time of the image sensor must inevitably be shorter for blur-free image acquisition. A detrimental consequence of this effect is a reduction in sensitivity or the need for decreased pixel resolution to compensate for the reduced sensitivity. To circumvent this problem, time delay and integration (TDI) with a charge-coupled device (CCD) image sensor has been introduced into IFC by Amnis Corporation1,2.
TDI accumulates multiple exposures of a flowing cell on multiple rows of the CCD’s photo-sensitive elements by synchronizing the charge transfer with the cell’s motion1,2,19, but it sacrifices throughput due to the limited readout rate of the CCD (up to ~100 MS s−1), resulting in a throughput of 100–1000 cells s−1 (depending on the required spatial resolution)1,2,19, which is 10–100 times lower than that of conventional non-imaging flow cytometry. Furthermore, the CCD suffers from large readout noise (typically tens of photoelectrons), limiting its detection sensitivity. Another approach to overcoming the trade-off is single-pixel imaging, which achieves both high throughput and high spatial resolution20,21,22 but comes at the expense of sensitivity. Also, a combination of parallelized microchannels, stroboscopic illumination, and image acquisition with a high-speed complementary metal-oxide-semiconductor (CMOS) image sensor has been demonstrated to achieve high-throughput IFC18, but it also suffers from low sensitivity. A common trait of these techniques is the compromise of one of the key parameters in favor of the others, hence limiting the utility of IFC to niche applications. Our optomechanical imaging method, which we refer to as virtual-freezing fluorescence imaging (VIFFI), overcomes the above trade-off and hence achieves high throughput (>10,000 cells s−1), high spatial resolution (~700 nm), and high sensitivity simultaneously when combined with IFC (namely, VIFFI flow cytometry). Specifically, the method is based on the principle that the motion of a flowing cell is virtually “frozen” on an image sensor by precisely canceling the motion with a flow-controlled microfluidic chip, a speed-controlled polygon scanner, and a series of timing control circuits, in order to increase the exposure time of the image sensor and form a fluorescence image of the cell with significantly high signal-to-noise ratio (SNR). Two additional yet essential elements of VIFFI flow cytometry that maximize the effect of virtual freezing are a light-sheet excitation beam scanner that scans over the entire field of view (FOV) during the exposure time of the image sensor, and the precise synchronization of the timings of the image sensor’s exposure and the excitation beam’s illumination and localization with respect to the rotation angle of the polygon scanner. As a result of combining these elements, our virtual-freezing strategy effectively enables 1000 times longer signal integration time on the image sensor, far surpassing previous techniques23,24, and hence achieves microscopy-grade fluorescence imaging of cells at a high flow speed of 1 m s−1.

## Results

### Schematic of VIFFI flow cytometry

As schematically shown in Fig. 1a (see Supplementary Fig. 1 for a detailed schematic), our VIFFI flow cytometer consists of
(i) a flow-controlled microfluidic chip, (ii) a light-sheet optical excitation system composed of excitation lasers (Nichia NDS4216, λ = 488 nm; MPB Communications 2RU-VFL-P-5000-560-B1R, λ = 560 nm; see Methods), an excitation beam scanner (acousto-optic beam deflector, ISOMET OAD948), and a cylindrical lens (f = 18 mm), (iii) a scientific complementary metal-oxide-semiconductor (sCMOS) camera (PCO edge 5.5), and (iv) an optical imaging system composed of an objective lens (NA = 0.75, 20×), a 28-facet polygon scanner as a fluorescence beam scanner (Lincoln Laser RTA-B) whose mirror facets are placed in the Fourier plane of the fluorescence from the flowing cells, two relay lens systems with magnifications of 0.2 and 5, each composed of four achromatic lenses designed to match the facet size and avoid unwanted aberrations, and a tube lens (f = 180 mm) to form a wide-field fluorescence image of the flowing cells on the camera (see Methods and Supplementary Figs. 2–6 for more details of the design). The polygon scanner cancels the flow motion of the cells on the camera by rotating in the opposite direction to the flow at the angular speed corresponding to the flow, and hence produces a blur-free fluorescence image of the virtually stationary cells on the camera (Fig. 1a and Supplementary Movie 1). However, this motion cancellation strategy by itself is insufficient for microscopy-grade imaging of cells in a high-speed flow, owing to the residual motion blur caused by longitudinal variations in the flow speed of the cells and by aberration (image distortion) in the imaging system, both of which inevitably arise in a significantly large FOV. Therefore, the localized (but highly efficient) light-sheet excitation beam is scanned in the opposite direction to the flow to illuminate the entire FOV while limiting the local exposure time to 10 µs [given by the beam size (26 μm) divided by the relative scan speed of the excitation beam (2.54 m s−1), see Methods], which greatly relaxes the requirement on flow speed precision and reduces the aberration in the imaging system for blur-free image acquisition in the VIFFI flow cytometer (Fig. 1b). As a result, a much longer integration time on the camera can be achieved as long as the fluorescence from the cells stays within a single mirror facet of the polygon scanner (see Supplementary Fig. 2). The scanner keeps rotating during the image data transfer of the camera so that its next facet comes to the Fourier plane of the fluorescence from the cells when the next frame of the camera starts (see Methods for more details). This sequence is carefully designed and optimized such that the camera’s neighboring frames have a slightly overlapping imaging area in order to exclude a dead imaging area, accommodate randomly distributed cells in line (which are subject to Poisson statistics), and maximize the throughput of the VIFFI flow cytometer (Fig. 1c). All the frames are continuously acquired at a frame rate of 1,250 fps and stored on a solid-state drive of up to 1 TB (0.44 MB frame−1 × 2,270,000 frames), allowing non-stop image acquisition of up to 18,000,000 cells at 10,000 cells s−1 (Supplementary Fig. 5).

### Characterization of VIFFI flow cytometry

As a proof-of-principle demonstration, we used the VIFFI flow cytometer to perform sensitive blur-free fluorescence imaging of fast-flowing biological cells that would be too dim to visualize without VIFFI.
### Characterization of VIFFI flow cytometry

As a proof-of-principle demonstration, we used the VIFFI flow cytometer to perform sensitive blur-free fluorescence imaging of fast-flowing biological cells that would be too dim to visualize without VIFFI. Without VIFFI, the camera’s exposure time is limited to only 0.3 µs, given by the pixel size (325 nm) divided by the flow speed (typically >1 m s−1 for high-throughput operation), in order to obtain a blur-free fluorescence image of fast-flowing cells. Specifically, we used immortalized human T lymphocyte cells (Jurkat) and microalgal cells (Euglena gracilis) for the demonstration (Fig. 2a). Figure 2b–d show fluorescence images of the cells obtained by IFC without VIFFI with an exposure time of 0.3 µs, IFC without VIFFI with an exposure time of 340 µs, and IFC with VIFFI with an exposure time of 340 µs, respectively. Here, with an average interval of 50–100 µm between consecutive cells, the flow speed of 1 m s−1 corresponds to a throughput of >10,000 cells s−1 (as experimentally shown in Supplementary Fig. 7), which is equivalent to the throughput of commercially available non-imaging flow cytometers25 (although the throughput value in flow cytometry generally depends on various factors such as cell size and cell concentration). The rotation of cells in the flow is negligible because the local exposure time of 10 µs is too short for appreciable rotational motion of the cells to occur. It is evident from the comparison of the fluorescence images that VIFFI significantly improved the spatial resolution and SNR of the images without sacrificing the throughput (see Methods and Supplementary Figs. 8 and 9 for more details).
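As a quick check of the exposure-time and throughput figures above, the blur-limited exposure time is the pixel size divided by the flow speed, and the throughput is the flow speed divided by the average cell spacing. A minimal sketch:

```python
# Blur-limited exposure time and throughput, from the numbers quoted above.
pixel_size = 325e-9      # m, pixel size in the object plane
flow_speed = 1.0         # m/s
cell_spacing = 100e-6    # m, average interval between cells (50-100 um range)

t_blur = pixel_size / flow_speed         # exposure before a cell moves one pixel
throughput = flow_speed / cell_spacing   # cells per second

print(f"blur-limited exposure: {t_blur*1e9:.0f} ns")  # ~325 ns, i.e., ~0.3 us
print(f"throughput: {throughput:.0f} cells/s")        # 10,000 (20,000 at 50 um spacing)
```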
The high sensitivity of VIFFI flow cytometry allows for fluorescence imaging of various types of cells (e.g., cancer cells, microalgal cells, budding yeast cells, white blood cells) flowing at a high speed of 1 m s−1 (Fig. 3a–f). For example, the ability to enumerate localized fluorescent spots by FISH imaging (Fig. 3a) indicates its potential application to real-time characterization of gene copy number alterations in circulating tumor cells (CTCs) in blood17. Also, it enables precise analysis of the cell cycle of budding yeast (Saccharomyces cerevisiae) cells (Fig. 3b). In addition, it is useful for identifying fine morphological and structural features of single cells in large populations, such as the indented elliptical shape of Chlamydomonas reinhardtii cells (Fig. 3c), the boundary (cell-surface) localization of the epithelial cell adhesion molecule (EpCAM) in CTCs (Fig. 3d), nuclear lobulation in murine neutrophils (Fig. 3e), and lipid droplet localization in E. gracilis cells (Fig. 3f), features that could not be resolved by previous high-throughput imaging flow cytometers at this flow speed20,26 due to their limited imaging sensitivity. Below we use murine white blood cells and E. gracilis cells to show practical applications of VIFFI flow cytometry.

### Applications of VIFFI flow cytometry

One of many applications where VIFFI flow cytometry is effective is to significantly improve statistical accuracy in the identification and classification of white blood cells based on morphological phenotypes (e.g., size, shape, structure, nucleus-to-cytoplasm ratio), which is a routine practice for clinical diagnoses in which the cell throughput, and hence the classification accuracy, is limited by the manual examination of cells under conventional microscopes by skilled operators. Specifically, we used the VIFFI flow cytometer to obtain a large number of high-resolution, high-SNR fluorescence images of murine lymphocytes and neutrophils (Fig. 4a and Supplementary Fig. 10). The images enable the accurate quantification of nuclear lobulation by analyzing the ratio in area between the nucleus and its enclosing box (the rectangular box with the smallest area within which the nucleus lies), which effectively brings out the differences between the two types of cells, including their distinct heterogeneity in population distribution (Fig. 4b, Methods). Also, the obtained images quantitatively elucidate morphological features of each cell type, such as cell area, with high precision (Supplementary Fig. 11). Furthermore, the information-rich cell images allow the use of a deep neural network for cell classification with even higher accuracy. Specifically, we employed a convolutional neural network (CNN) with 16 layers (VGG-16)27 (see Methods for details of training) and achieved a high classification accuracy of 95.3% between the two populations. A scatter plot of the cells obtained from the 4,096 features generated by VGG-16 through t-distributed stochastic neighbor embedding (t-SNE)28 shows an obvious separation between the populations (Fig. 4c), indicative of the high classification accuracy. The plot also shows the existence of one or more sub-populations among the neutrophils, suggesting that neutrophils can further be classified into sub-types, which potentially has a significant biological implication in the field of immunology29.
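To illustrate the nuclear-lobulation metric described above, here is a minimal sketch (not the authors’ actual analysis code) that computes the ratio of the nucleus area to the area of its minimum-area enclosing rectangle from a binary nucleus mask, using OpenCV (one of the libraries listed in Methods). The synthetic ellipse at the end is only a usage example.

```python
# Nucleus area divided by the area of its smallest enclosing rectangle.
# Lobulated (multi-lobed) nuclei fill their enclosing box less completely,
# so they yield a smaller ratio than round lymphocyte nuclei.
import cv2
import numpy as np

def lobulation_ratio(nucleus_mask: np.ndarray) -> float:
    """nucleus_mask: binary (0/255) uint8 image of a single segmented nucleus."""
    contours, _ = cv2.findContours(nucleus_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)   # largest connected region
    nucleus_area = cv2.contourArea(contour)
    (_, _), (w, h), _ = cv2.minAreaRect(contour)   # smallest enclosing box
    return nucleus_area / (w * h)

# Usage example with a synthetic elliptical "nucleus":
mask = np.zeros((200, 200), np.uint8)
cv2.ellipse(mask, (100, 100), (60, 30), 30, 0, 360, 255, -1)
print(f"area/box ratio: {lobulation_ratio(mask):.2f}")  # ~pi/4 ~ 0.79 for an ellipse
```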
Another unique application of VIFFI flow cytometry is the determination of the population, morphological properties, and spatial distribution of specific intracellular molecules in large heterogeneous cell populations. Specifically, we used the VIFFI flow cytometer to study lipid droplets (i.e., neutral lipid storage sites) within single cells of E. gracilis, a unicellular photosynthetic microalgal species known to produce wax esters suitable for biodiesel and aviation biofuel30. While recent studies show that lipid droplets play a key role in lipid biosynthesis, mobilization, and homeostasis, little is known about microalgal lipid droplets31 despite their importance for efficient microalgae-based biofuel production. As shown in Fig. 5a, the VIFFI flow cytometer enables high-throughput single-cell imaging of numerous E. gracilis cells under two culture conditions (nitrogen-sufficient and nitrogen-deficient). Here, nitrogen deficiency is an environmental stress condition known to induce microalgae (including E. gracilis) to accumulate lipids in the cell body. From the acquired high-resolution, high-SNR images, we were able to localize and enumerate intracellular lipid droplets and quantify their areas (Fig. 5b, c; see Supplementary Fig. 12 for CNN-based classification results) with sub-1 µm resolution and 100 nm precision (see Methods and Supplementary Figs. 13–15). These results indicate that, while there is no significant difference in the average lipid droplet area per cell between the two cultures, there is an obvious difference in the distribution of the number of lipid droplets per cell between them. In addition, we found that the degree of heterogeneity within each cell population is comparable to or even larger than the overall separation between the populations. Furthermore, the obtained images (Supplementary Figs. 16 and 17) indicate that the intracellular lipid droplet accumulation occurred non-uniformly, which was more prominent in the cells under the nitrogen-sufficient condition. These results provide an important insight into the lipid biosynthesis of E. gracilis and hence into efficient metabolic engineering32.

## Discussion

From a practical perspective, the specifications of the VIFFI flow cytometer demonstrated above can be further boosted by replacing its key components with more advanced commercial products. First, the imaging speed and the FOV in the y direction can be improved by replacing the sCMOS camera with a high-speed camera at the expense of imaging sensitivity. Specifically, a throughput of >100,000 cells s−1 (provided by a flow speed of 10 m s−1) with a FOV of 65 μm in the y direction can be achieved by employing a commercial high-speed camera (Photron FASTCAM Mini WX100). Second, the imaging sensitivity can be increased by replacing the sCMOS camera with a newly released camera with a higher quantum efficiency. In fact, sCMOS cameras with higher sensitivity (e.g., Hamamatsu ORCA-Flash4.0 V3, quantum efficiency ≥ 80% at maximum) than the present one (quantum efficiency ~60% at maximum) are commercially available. Finally, the overall footprint of the VIFFI flow cytometer (~1 m2) can be reduced by replacing the relay lens systems with custom lenses, allowing for a benchtop implementation.

With the flexibility and scalability of VIFFI flow cytometry, its capabilities can be further enhanced in various directions, thereby extending the range of applications and discoveries accessible to it. First, while we used two colors in our proof-of-principle demonstration, fluorescence imaging in several colors can easily be conducted with a proper set of dichroic beamsplitters. Second, VIFFI flow cytometry is essentially compatible with advanced image-sensor-based fluorescence microscopy techniques such as structured illumination microscopy33, making it feasible to perform super-resolution fluorescence IFC. Third, the flow speed (i.e., the throughput) can be increased as long as it obeys the trade-off relation between the flow speed, the FOV in the y direction, and the exposure time, as shown in Supplementary Fig. 3. As the magnification of the fluorescence imaging system also affects the flow speed, a lower magnification enables a higher flow speed at the cost of pixel resolution. Fourth, the recent advancement of machine learning methods will help further enhance the analysis of data obtained by VIFFI flow cytometry. Fifth, since the data transfer rate of the sCMOS camera is a performance-limiting factor, future advances in image sensor technology are expected to further increase the cell throughput and the number of fluorescence colors. Sixth, to avoid image artifacts due to the limited depth of field (e.g., image defocus), an extended depth-of-field (EDF) technique can be implemented in the VIFFI flow cytometer34, which is particularly useful for FISH imaging and imaging of large cells. Specifically, a cubic phase mask34 can be inserted in the optical path of fluorescence detection for the EDF. Finally, the full potential of VIFFI flow cytometry can be exploited by incorporating a cell-sorting module and a real-time intelligent image processor for intelligent image-activated cell sorting26,35, which will allow for subsequent detailed analysis of target cells via electron microscopy and DNA/RNA sequencing, as well as for their subsequent use in synthetic biology via directed evolution.

In addition to the above applications, the high-throughput, high-sensitivity, high-spatial-resolution imaging capability of VIFFI flow cytometry opens a window onto a new class of biological, pharmaceutical, and medical applications. First, as indicated by the images shown in Fig. 3a, it enables high-throughput FISH analysis16.
FISH has been employed in a wide variety of applications such as diagnosis of acute lymphoblastic leukemia and Down syndrome, identification of bacterial pathogens, and detection of minimal residual disease, but conventional FISH requires microscopy-based observation of cells and hence falls short in screening large populations of cells. Since VIFFI flow cytometry extends the applicability of FISH to such large cell populations (e.g., blood), it is expected to have a significant impact on pathology and microbiology. Second, as indicated by the images shown in Fig. 3b, VIFFI flow cytometry allows for large-scale analysis of S. cerevisiae mutants based on morphological phenotype, thereby offering the ability to uncover genotype–phenotype relations36,37, an important subject of research for synthetic biology among S. cerevisiae researchers worldwide. Specifically, the high spatial resolution of VIFFI flow cytometry makes it possible to accurately quantify and characterize the morphological features (e.g., area, perimeter, aspect ratio) of each cell and its intracellular organelles (e.g., nuclei, actin). Third, as indicated by the images shown in Fig. 3c, VIFFI flow cytometry can be used for evaluating mutagenesis of microalgal cells. While previously reported high-throughput imaging flow cytometry can identify mutants of C. reinhardtii that express de-localized low-CO2 inducible protein B (LCIB) for studying the carbon concentration mechanism in microalgal photosynthesis26, the high sensitivity of VIFFI flow cytometry allows for more detailed analysis of the LCIB, such as its granularity, hence assisting microbiologists with the investigation of the carbon concentration mechanism38. Finally, as indicated by the images shown in Fig. 3d, VIFFI flow cytometry can be used for accurate detection and enumeration of CTCs in large heterogeneous samples of blood cells39. Since it can visualize and differentiate single CTCs and CTC clusters and enumerate the CTCs in each CTC cluster, it is expected to help cancer biologists study the relation between cluster size and the metastatic propensity and spread of CTCs39,40 and unravel tumor heterogeneity in cancer patients by large-scale single-cell profiling of CTCs in blood41.

## Methods

### Optical design

The optical system of the VIFFI flow cytometer was designed to provide both subcellular spatial resolution and a throughput comparable to that of conventional (non-imaging) flow cytometers. On the basis of this concept, we first chose an objective lens (Olympus UPLSAPO20x, NA = 0.75) and a tube lens (Olympus U-TLU, f = 180 mm) to obtain subcellular resolution. Subsequently, we determined the magnification of the relay lens systems M and M−1 and the number of facets of the polygon scanner N by considering the following constraints: (i) the image frames should be concatenated so that whole images of all flowing cells fall within one of the continuous frames (Fig. 1c); (ii) the fluorescence beam size on the polygon scanner’s facet should be smaller than the facet size (Supplementary Fig. 2a). In addition, we considered the specifications of the sCMOS camera and the polygon scanner (Lincoln Laser RTA-B), such as the frame period Ts = 0.8 ms (frame rate: 1,250 fps) with a region of interest of 2560 × 88 pixels, the sensor size in the direction corresponding to the cell flow Lx = 16.6 mm, and the inner diameter of the polygon mirror of the scanner dp = 70 mm.
The above constraints are expressed by the following equations:

$$4f_{\mathrm{o}}M\left\{ \frac{2\pi}{N} + \sin^{-1}\left[ \left( -\frac{d_{\mathrm{o}}M}{d_{\mathrm{p}}} + \sin\alpha \right)\cos\frac{\pi}{N} \right] - \sin^{-1}\left[ \left( \frac{d_{\mathrm{o}}M}{d_{\mathrm{p}}} + \sin\alpha \right)\cos\frac{\pi}{N} \right] \right\} > 0,$$ (1)

$$\frac{vT_{\mathrm{s}}N}{4\pi f_{\mathrm{o}}} < M < \frac{L_xN}{4\pi f_{\mathrm{t}}},$$ (2)

where fo, α, do, v, and ft denote the focal length of the objective lens (9 mm), the nominal incidence angle of the fluorescence on the polygon scanner (45 degrees), the back-aperture diameter of the objective lens (13.5 mm), the flow speed of the cells (1 m s−1), and the focal length of the tube lens (180 mm), respectively. The left-hand side of Eq. (1) corresponds to the maximum scan range of the polygon scanner on the object and thus determines the maximum exposure duration: the scan range divided by the flow speed gives the upper limit on the exposure time. A plot of the maximum scan range under the above constraints is shown in Supplementary Fig. 2b. The figure shows that a lower magnification is beneficial for obtaining a longer exposure time. As a trade-off, in practice, a relay lens system with a lower magnification tends to have larger aberration, which degrades the imaging quality. Therefore, we chose M = 0.2 and N = 28 so that the aberration that occurs in the relay lens systems does not significantly degrade the imaging quality (see below for the detailed design of the relay lens systems) while a significant improvement of the imaging sensitivity is obtained.
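As a numerical check of the design window, Eq. (2) can be evaluated directly with the parameter values quoted above. The sketch below does only this arithmetic; it is an illustration, not part of the published design workflow.

```python
# Evaluate the magnification window of Eq. (2) for the chosen facet count N = 28.
import math

f_o = 9e-3      # m, focal length of the objective lens
f_t = 180e-3    # m, focal length of the tube lens
v   = 1.0       # m/s, flow speed of the cells
T_s = 0.8e-3    # s, frame period of the sCMOS camera
L_x = 16.6e-3   # m, sensor size in the flow direction
N   = 28        # number of polygon facets

M_min = v * T_s * N / (4 * math.pi * f_o)   # lower bound (camera frame period)
M_max = L_x * N / (4 * math.pi * f_t)       # upper bound (sensor size)

print(f"{M_min:.3f} < M < {M_max:.3f}")     # -> 0.198 < M < 0.206
# The chosen magnification M = 0.2 sits inside this narrow window.
```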
### Excitation beam scan

The excitation beam scan is an essential part of the VIFFI flow cytometer, enabling its exceptionally long exposure time for fast-flowing cells. A schematic of the beam scan is shown in Supplementary Fig. 4a. A focused excitation beam with a diameter of 26 μm in the cell flow direction is scanned in the direction opposite to the cell flow at 2.54 m s−1 (the speed relative to the flowing cells), limiting the local exposure time to 10 μs. Since the FOV of the camera in the object plane moves together with the cell flow during the exposure time, the scan range (525 μm) is shorter than the FOV in the flow direction (830 μm), which significantly reduces off-axis aberration components such as image distortion and field curvature. In particular, image distortion that occurs between the object and the polygon scanner causes position-dependent errors in the motion cancellation effect of the polygon scanner and hence residual motion of the cell images, which limits the extension of the exposure time. We experimentally determined the speed of the residual motion from acquired images of fluorescent particles. The results shown in Supplementary Fig. 4b indicate that the speed range of the residual motion is ±1.5% with the beam scan, whereas it is at least ±6% without the beam scan. Assuming a typical speed fluctuation of flowing cells in a microfluidic channel of 1.5%, the factors for the extension of the exposure time per pixel in the VIFFI flow cytometer are (0.015 + 0.015)−1 ≈ 33 with the beam scan and (0.06 + 0.015)−1 ≈ 13 without the beam scan. Moreover, due to the confined excitation beam width in the flow direction (~3% of the FOV of the sCMOS camera), the excitation efficiency improves by a factor of 1/0.03 ≈ 33 at a given excitation beam power, while the total exposure time of the camera is extended by the same factor. As a result, the improvement factor of the imaging sensitivity of the VIFFI flow cytometer is 33 × 33 ≈ 1000 with the beam scan, whereas it is limited to ~13 without the beam scan.

### Data acquisition sequence

A schematic of the data acquisition sequence is shown in Supplementary Fig. 5. A trigger signal from a photodetector that indicates the polygon scanner’s angle is used to generate external trigger signals for the camera’s start of exposure and the waveform generator’s signal output. Then, the excitation beam is scanned for ~320 μs during the camera’s exposure time (~340 μs), which is set shorter than the upper limit determined by the design of the imaging optical system (420 μs). The camera outputs image data right after its exposure time. We set the total period of the image acquisition procedure to be slightly less than the polygon scanner’s scan period (800 μs) by adjusting the number of pixel lines in the y direction in the object plane, so that every external trigger signal successfully triggers the start of the camera’s exposure.

### Relay lens systems

We designed the relay lens systems using OpticStudio (Zemax, LLC). For cost effectiveness, our design assumed only off-the-shelf achromatic lenses from major optics companies. Also, we used four achromatic doublets rather than two to constitute a single relay lens system, which reduces the total aberration. We created a macro that automatically evaluates the total aberration of ~10,000 relay lens systems, each of which has a different combination of the four achromatic doublets with the designated magnification (0.2) and with an optimized configuration found by the optimization function of OpticStudio. In this way, we found that the relay lens system shown in Supplementary Fig. 6a has diffraction-limited imaging performance. We employed this design for both of the relay lens systems in the setup of the VIFFI flow cytometer. We evaluated the optical transfer functions (OTFs) of the whole relay system, which consists of the two relay lens systems and the polygon scanner, using the combined optical model shown in Supplementary Fig. 6b. The OTFs were calculated at different positions in the image plane indicated in the upper part of Supplementary Fig. 6c, taking into account the dynamic localized illumination of the excitation beam during the rotation of the polygon scanner. As shown in Supplementary Fig. 6c, we confirmed that the relay system does not suffer from significant image degradation over the entire FOV.

### Constructed optical setup

A complete schematic of the VIFFI flow cytometer is shown in Supplementary Fig. 1. The optical system consists of a light-sheet optical excitation system, an imaging optical system, and an angle detection system for the polygon scanner. In the light-sheet optical excitation system, excitation beams with the desired spatiotemporal profiles in the object plane are created. Two excitation beams from laser sources [Nichia NDS4216 (λ = 488 nm), MPB Communications 2RU-VFL-P-5000-560-B1R (λ = 560 nm)] are scanned by a beam scanner (acousto-optic deflector, ISOMET OAD948) and are focused on a microfluidic channel through cylindrical lenses (f = 80 mm in the x direction and f = 18 mm in the z direction).
The designed Gaussian beam diameters (e−2 intensity) in the object plane are 26 μm × 5.2 μm and 23 μm × 6.4 μm (x × z; see Supplementary Fig. 1 for the coordinates) for the 488-nm and 560-nm beams, respectively. Since the beam scanner’s deflection angle at a given driving-signal frequency is proportional to the wavelength of the incident beam, the scan range of the 488-nm beam, which is the shorter of the two, is set so that it covers the entire FOV. Consequently, the effective exposure time of the 560-nm excitation beam is slightly shorter (488/560 = 86%) than that of the 488-nm excitation beam. In the imaging system, fluorescence images of the flowing cells are captured by the sCMOS camera (PCO edge 5.5). Fluorescence signals from the flowing cells are relayed by the two relay lens systems with magnifications of 0.2 and 5, respectively, such that images of the cells are formed on the sCMOS camera through a tube lens. The magnification of the imaging system is set to 20×. The polygon scanner (Lincoln Laser RTA-B) is placed in a conjugate plane of the exit pupil of the objective lens (Olympus UPLSAPO20x, NA = 0.75), which is created between the two relay lens systems. The fluorescence is split into shorter- and longer-wavelength components by a dichroic mirror (Semrock FF580-FDi01, edge wavelength: 580 nm), each of which forms an image on the sCMOS camera at a different position. The orientation of the sCMOS camera in the optical setup is set by taking its data readout sequence into consideration: since the sCMOS camera reads out the pixel data line by line, the frame rate does not depend on the region of interest in the line direction. Therefore, to operate the camera at the maximum pixel data rate, we oriented the camera so that the flow direction coincides with the line direction of the camera. In the angle detection system, a timing trigger for the synchronized operation of the polygon scanner, beam scanner, and sCMOS camera is generated. A laser beam reflected by the polygon scanner is focused on a pinhole aperture and detected by a photodetector. The output voltage signal of the photodetector is used as the timing trigger and sent to the external trigger input of a trigger generator (Teledyne LeCroy Wavestation 2052). Trigger signals generated by the trigger generator are sent to the external trigger inputs of the sCMOS camera and the waveform generator (RIGOL DG4202) so that they start the exposure and the generation of a driving signal for the beam scan, respectively. Depending on the application, different configurations can be employed. For the experiments in Fig. 3c, d and Supplementary Fig. 7, we used a 405-nm excitation laser (Oxxius LBX-405-300-CSB-PP) to obtain images of C. reinhardtii and PC-9 cells. For the experiments in Fig. 3a and Supplementary Fig. 9, we used a 488-nm excitation laser (Coherent Genesis CX 488 STM) for FISH imaging and for the evaluation of the imaging sensitivity, respectively; here the laser beam illuminates the microchannel through an objective lens (Leica, 20×, NA = 0.75) with carefully designed Gaussian beam diameters (e−2 intensity) in the object plane of 31 μm × 62 μm (x × y; see Supplementary Fig. 1 for the coordinates). We used the same laser for imaging of S. cerevisiae cells (Fig. 3b) in the standard illumination configuration described above.

### Digital image processing and deep learning

A schematic of our digital image processing is shown in Supplementary Fig. 10.
From a raw image frame obtained by the image acquisition software (PCO Camware 4.04), we extracted the image region of each color channel. Then, we created a binary mask image for each color channel using standard image segmentation methods. Subsequently, morphological features were calculated from the masked or non-masked images of each channel. For murine white blood cells, we calculated the cell nucleus area and the enclosing box area (the area of the smallest rectangular box within which the nucleus lies). For E. gracilis cells, we obtained the number of lipid droplets by counting local maximum points in the images. To count the lipid droplets, we first set an approximate lipid droplet size and an intensity threshold. We then used the approximate lipid droplet size to find the areas with the maximum and minimum intensities. Finally, we picked the maximum-intensity areas whose intensity differences with the surrounding minimum-intensity areas were larger than the threshold and created a Boolean map showing the positions of the lipid droplets (see the code sketch at the end of this subsection). In addition, we used a CNN called VGG-16, a well-known CNN model, to differentiate between the different cell types in our experiments. We modified the input layer of the original VGG-16 model to make it applicable to our image data. As shown in Supplementary Fig. 18, the VGG-16 model consists of five convolution segments and a fully connected classifier. Each convolution segment is made of a few convolution layers that extract the features of the image data, followed by one max-pooling layer that reduces the data volume. At the fully connected classifier, the 4,096 features extracted by the convolution segments are converted into one dimension that provides the classification results. For example, if the input is a 224 × 224 RGB image, which can be represented as a 224 × 224 × 3 matrix, the first convolutional layer in the first segment (conv1_1) transforms it into a 224 × 224 × 64 volume. As the image goes through the network, this volume becomes smaller in width and height but larger in depth. The pooling layer in the fifth segment (pool_5) generates a 7 × 7 × 512 volume. When the image reaches the final segment, this 7 × 7 × 512 volume, which represents all the features of the image together with local information, is mapped to 4,096 features. After that, the very last layer applies the softmax function to the data to generate a probability distribution. To illustrate the classification accuracy as a scatter plot, the 4,096 features were also converted to two meta-features through t-SNE, a method for reducing the dimensionality of multi-dimensional data. LabVIEW 2016 and Python 3 with the NumPy 1.16.4, scikit-image 0.21.2, scikit-learn 0.21.2, matplotlib 3.1.2, PIL 6.2.1, SciPy 1.2.1, and OpenCV 4.1.0 libraries were used for the image processing. The CNN was constructed with Keras 2.3.1 with TensorFlow as the backend.
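The lipid-droplet counting procedure described above can be sketched compactly with SciPy’s rank filters. The window size and contrast threshold below are illustrative placeholders, not the values used in the study.

```python
# Minimal sketch of the droplet-counting procedure: keep local maxima whose
# contrast against the surrounding local minima exceeds a threshold.
import numpy as np
from scipy import ndimage

def count_droplets(img, droplet_size=7, intensity_threshold=0.2):
    """img: 2-D float image of one cell.
    droplet_size: approximate droplet diameter in pixels (assumed value).
    intensity_threshold: required local contrast (assumed value)."""
    local_max = ndimage.maximum_filter(img, size=droplet_size)
    local_min = ndimage.minimum_filter(img, size=droplet_size)
    # Boolean map of droplet positions, as described in the text.
    droplet_map = (img == local_max) & (local_max - local_min > intensity_threshold)
    _, n_droplets = ndimage.label(droplet_map)  # merge touching candidate pixels
    return n_droplets

# Usage example with two synthetic Gaussian spots:
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((xx - 20)**2 + (yy - 20)**2) / 8) + np.exp(-((xx - 45)**2 + (yy - 40)**2) / 8)
print(count_droplets(img))  # -> 2
```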
### Microfluidic chip

A commercially available glass microfluidic chip with channel dimensions of 400 μm × 250 µm (Hamamatsu J12800-000-203) was used for the experiments. The microfluidic chip is capable of hydrodynamic focusing in both the lateral and depth directions. Suspended cells in a glass syringe were introduced into the channel by a syringe pump (Harvard Apparatus 11 Elite) at a fixed volumetric flow rate. The sheath fluid was introduced by the same syringe pump. The ratio of the volumetric flow rates of the sample flow and the sheath flow was set to 1:700 (except for the experiments for the evaluation of the spatial resolution of the optical setup; see below for details), corresponding to a sample flow diameter of ~13 μm. Based on the theoretical position-dependent variation of the flow speed of a laminar flow, the error in the flow velocity within the FOV is <1%. We also used a home-made microfluidic chip with channel dimensions of 200 μm × 200 µm, capable of hydrodynamic focusing in both the lateral and depth directions26, for obtaining FISH images (Fig. 3a) and for the evaluation of the imaging sensitivity (Supplementary Fig. 9), as well as a polydimethylsiloxane (PDMS)-based microfluidic chip with a hydrodynamic focuser in the y direction for imaging of C. reinhardtii cells.

### Preparation of Jurkat cells for two-color imaging

Jurkat cells were obtained from DS Pharma Biomedical (EC86012803-F0) and cultured in Dulbecco’s Modified Eagle Medium (DMEM) with 10% fetal bovine serum (FBS), 1% penicillin–streptomycin, and 1% non-essential amino acids at 37 °C and 5% CO2. The cells were placed into Corning® T75 flasks (catalog no. 431464) and allowed to grow for 3 days. They were stained with 10 µM CellTracker Red (Thermo Fisher Scientific) and 1 µM SYTO 16 (Thermo Fisher Scientific) in FBS-free culture media at 37 °C for 45 min. Imaging was performed after washing the cells with phosphate-buffered saline (PBS).

### Preparation of Jurkat cells for FISH imaging19

An aliquot of 1 × 107 Jurkat cells in a round-bottom 2.0-mL microtube was fixed and permeabilized by exposure to two concentrations of Farmer’s solution [ethanol/acetic acid = 3:1 (v/v)] in PBS at 4 °C: 30% for 30 min and then 70% for 10 min. Cells were centrifuged at 600 × g for 5 min and washed with 2× saline sodium citrate (SSC). After centrifugation again, the cell pellet was resuspended in a mixture of 14 μL of CEP hybridization buffer, 2 μL of Vysis CEP 8 (D8Z2) SpectrumGreen Probe (Abbott Diagnostics, Lake Forest, IL), and 4 μL of Milli-Q water, transferred to a 0.2-mL PCR tube, and then heated for hybridization at 80 °C for 5 min and 42 °C for 2 h. The cell suspension was diluted with 100 μL of 2× SSC and centrifuged to form a pellet. The pellet was resuspended in 0.4× SSC containing 0.43% NP40 detergent and incubated at 72 °C for 2 min. The cells were harvested by centrifugation and then resuspended in 100 μL of 1% PFA in PBS. Before they were loaded into the VIFFI flow cytometer, the cells were counterstained with 1000× diluted 7-aminoactinomycin D (7-AAD) Viability Dye [0.005% (w/v) as the original solution, Beckman Coulter, A07704].

### Preparation of C. reinhardtii cells

C. reinhardtii TKAC1017 was obtained from the Tsuruoka, Keio, Algae Collection (TKAC) of T. Nakada at the Institute for Advanced Biosciences, Keio University, Japan. It was cultured in culture flasks (working volume: 20 mL) using a modified AF-6 medium without a dissolved carbon source. The culture was maintained at 25 °C and illuminated in a 14:10-h light:dark pattern (~120 μmol photon m−2 s−1). A group of C. reinhardtii cells was pre-cultured in a modified AF-6 medium. C. reinhardtii cells (~5 × 106 cells mL−1) in early stationary phase were concentrated to ~3 × 108 cells mL−1 by centrifugation and immediately used for VIFFI flow cytometry.

### Preparation of S. cerevisiae cells
An S. cerevisiae heterozygous gene deletion mutant of RPC10/rpc10Δ was purchased from EUROSCARF (http://www.euroscarf.de/, accession number: Y22837) and used for VIFFI flow cytometry as a representative of S. cerevisiae cells having a characteristic morphological phenotype of elongated cell shape36. Cell culture, fixation, and fluorescent staining were performed according to previously developed methods37 with some modifications. S. cerevisiae cells were grown at 25 °C in a yeast extract peptone dextrose (YPD) liquid medium containing 1% (w/v) Bacto yeast extract (BD Biosciences, San Jose, CA), 2% (w/v) Bacto peptone (BD Biosciences), and 2% (w/v) glucose. After incubation for 16 h, S. cerevisiae cells in logarithmic phase were fixed in a YPD medium supplemented with formaldehyde (final concentration, 3.7%) and potassium phosphate buffer (100 mM [pH 6.5]) for 30 min at 25 °C. Before VIFFI flow cytometry, cell-surface mannoproteins were stained with 1 mg/mL fluorescein isothiocyanate (FITC)-conjugated concanavalin A (Sigma-Aldrich, St. Louis, MO) in P buffer (10 mM sodium phosphate and 150 mM NaCl [pH 7.2]) for 10 min. After washing with P buffer twice, the S. cerevisiae cells were suspended in PBS.

### Preparation of PC-9 cells

PC-9 (human lung cancer) cells were cultured in an RPMI-1640 medium (Sigma-Aldrich, R8758-500 mL) supplemented with 10% FBS (BOVOGEN, catalog no. SFBS-F, lot no. 11555), 100 units mL−1 penicillin, and 100 µg mL−1 streptomycin (Wako, 168-23191, Tokyo, Japan) in a 100-mm tissue culture dish (Sumitomo, MS-13900, Tokyo, Japan) until the cells reached near-confluency. The cells were detached from the dish by incubation with 2 mL of 0.25% (w/v) trypsin–ethylenediaminetetraacetic acid (EDTA) (Wako, 205-16945, Tokyo, Japan) for 10 min at 37 °C, collected by centrifugation (Hitachi, CF7D2, Tokyo, Japan) for 3 min, and resuspended in an RPMI-1640 medium containing 10% FBS at 1 × 105 cells mL−1. The cells were incubated with 1 mM 5-aminolevulinic acid (5-ALA) in PBS at 37 °C for 3 h. After incubation, the 5-ALA solution was removed, and the cells were washed with PBS. The collected cells were stained with 1 mL of PBS containing 10 μL of anti-EpCAM antibody (VU-1D9, GeneTex, Irvine, CA). After 30 min of incubation at ambient temperature, the cells were washed with 1 mL of PBS twice and reacted with 2.5 μL of Alexa Fluor 488 goat anti-mouse IgG1 (A21121, Invitrogen, Carlsbad, CA) and 2000× diluted Hoechst 33342 (H3570, Invitrogen, Carlsbad, CA) in PBS at ambient temperature for 20 min, followed by washing with 1 mL of PBS twice. The resultant stained cells were resuspended in 1 mL of PBS and stored at 4 °C, protected from light, until use.

### Preparation of murine neutrophils and lymphocytes

C57BL/6 mice used in this study were purchased from CLEA Japan. All mice were kept under specific-pathogen-free conditions. All animal experiments were performed in accordance with protocols approved by the Animal Care and Use Committee at The University of Tokyo. The femur bones from C57BL/6 female mice (8 weeks old) were collected by cutting above and below the joints. Bone-marrow cells were washed out of each bone by inserting a needle (26 gauge) attached to a sterile syringe filled with PBS/2% FBS into one side of the bone. After removing red blood cells with lysis buffer (Sigma-Aldrich), white blood cells were stained with biotinylated anti-Ly6G antibody (RB6-8C5, BioLegend).
The cells were then secondary-stained with V500-conjugated streptavidin (BD Biosciences) and with Pacific Blue-conjugated anti-CD3ɛ (145-2C11), anti-CD4 (RM4-5), anti-CD8α (53-6.7), anti-B220 (RA3-6B2), and anti-NK1.1 (PK136) antibodies (BioLegend) for FACS analysis. Each antibody was used for staining at a final concentration of 1 μg mL−1 in PBS/2% FBS for 30 min on ice; the cells were then washed and resuspended in PBS/2% FBS. Neutrophils and lymphocytes were further sorted by FACSAria IIµ (BD Biosciences) as V500 and Pacific Blue single-positive cells, respectively, which had no overlap in excitation and emission spectra in the analysis of our imaging flow cytometer. Neutrophils and lymphocytes were suspended in an FBS-free RPMI-1640 culture medium and stained with 1 µM SYTO16 and 10 µM CellTracker Red. The cells were incubated at 37 °C for 45 min, then washed and resuspended in PBS.

### Preparation of E. gracilis cells

E. gracilis NIES-48 was obtained from the Microbial Culture Collection at the National Institute for Environmental Studies42. It was cultured in culture flasks (working volume: 20 mL) using a modified AF-6 culture medium without a dissolved carbon source. The culture was maintained at 25 °C and illuminated in a 14:10-h light:dark pattern (~120 μmol photon m−2 s−1). A group of E. gracilis cells was pre-cultured in a modified AF-6 medium. The cells denoted as being in the “nitrogen-deficient condition” were cultured in a nitrogen-deficient medium for 5 days. For the observation of intracellular lipid bodies, a stock solution of 1 mM BODIPY505/515 (Thermo Fisher Scientific, USA) in dimethyl sulfoxide containing 1% ethanol was prepared. Both the nitrogen-sufficient and nitrogen-deficient E. gracilis cells (~106 cells mL−1) were stained with 10 µM BODIPY505/515 in de-ionized water, incubated in the dark for 30 min, washed, suspended in de-ionized water, and immediately used for imaging.

### Imaging sensitivity

The imaging sensitivity of VIFFI flow cytometry is evaluated by its SNR in comparison with conventional methods. Here, we evaluate the SNR of a fluorescence image obtained by an imaging flow cytometer through the SNR of the camera readout per pixel. The signal level in units of the number of electrons is expressed by

$$S \sim PC^{-1}Tns\,\eta_{\mathrm{yield}}\,\eta_{\mathrm{img}}\,\eta_{\mathrm{sensor}},$$ (3)

where P, C, T, n, s, ηyield, ηimg, and ηsensor denote the power of the excitation beam, the cross section of the excitation beam, the illumination time of the excitation beam for a single fluorescent molecule, the number of fluorescent molecules in a single-pixel area, the absorption cross section of the fluorescent molecule, the quantum yield of the fluorescent molecule, the photon collection efficiency of the imaging system, and the quantum efficiency of the image sensor, respectively. Assuming that the detection noise consists of shot noise and readout noise, the SNR is given by

$$\frac{S}{N} = \frac{S}{\sqrt{S + \sigma^2}},$$ (4)

where σ denotes the readout noise of the image sensor.
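The SNR model of Eqs. (3) and (4) is easy to explore numerically. In the sketch below, the per-molecule signal and the readout noise are illustrative placeholders, not the calibrated parameters of Supplementary Table 1.

```python
# Shot-noise-plus-readout-noise SNR model of Eqs. (3) and (4).
import numpy as np

def snr(n_molecules, electrons_per_molecule=10.0, readout_noise=1.5):
    """electrons_per_molecule lumps P*C^-1*T*s*eta_yield*eta_img*eta_sensor
    from Eq. (3) into one assumed constant; readout_noise is sigma in Eq. (4)
    in r.m.s. electrons (both values are placeholders)."""
    signal = n_molecules * electrons_per_molecule       # Eq. (3)
    return signal / np.sqrt(signal + readout_noise**2)  # Eq. (4)

for n in (1, 10, 100):
    print(f"n = {n:3d}: SNR = {snr(n):.1f}")
# Larger readout noise (e.g., tens of electrons for a CCD) visibly degrades
# the SNR at low molecule counts, which is the regime discussed in the text.
```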
The SNRs of VIFFI flow cytometry, TDI-based IFC (Luminex ImageStream®X Mark II), and stroboscopic illumination IFC18 are shown in Supplementary Fig. 8. The parameters used for the calculations are summarized in Supplementary Table 1. Since the values of ηsensor and σ were unknown for the ImageStream®X Mark II, we estimated them from the specifications of a commercial TDI-CCD camera (Hamamatsu C10000-801), which has image acquisition specifications similar to those of the ImageStream®X Mark II. For a fair comparison, we assumed an identical cross section of the excitation beam and a 10% lower efficiency in the excitation beam power and the image formation for VIFFI flow cytometry, considering the power losses at the excitation beam scanner, the relay lens systems, and the polygon scanner. As shown in Supplementary Fig. 8, VIFFI flow cytometry has an SNR comparable to that of TDI-based IFC at various numbers of fluorescent molecules per pixel, but TDI-based IFC cannot go beyond a flow speed of ~0.04 m s−1 (corresponding to a throughput of ~400 cells s−1 at an average cell spacing of 100 μm and a pixel size of 0.325 μm) due to the limited readout rate of the CCD (up to ~100 MS s−1). Also, in comparison with stroboscopic illumination IFC, VIFFI flow cytometry provides a significantly higher SNR, by more than a factor of 30, owing to VIFFI’s much longer exposure time, enabling image acquisition with a reasonable SNR even at a high flow speed of ~1 m s−1 (corresponding to a throughput of ~10,000 cells s−1 at an average cell spacing of 100 μm and a pixel size of 0.325 μm). Consequently, VIFFI flow cytometry is the only method that enables high-SNR fluorescence IFC of cells flowing at a speed as high as 1 m s−1. It is important to note that the required SNR strongly depends on the application and on the image processing techniques. Overall, analyses that use finer structures of images tend to require higher SNRs. Indeed, larger objects or structures in an image are preserved after low-pass filtering, enabling analysis at an improved SNR. On the contrary, if a target object has a single-pixel size, low-pass filtering decreases the signal level as well as the noise level, resulting in the loss of the fine structure without significantly improving the SNR. To compare VIFFI flow cytometry with conventional methods in imaging sensitivity, we also measured its detection limit in terms of molecules of equivalent soluble fluorochrome (MESF), the most commonly used parameter for evaluating the detection sensitivity of imaging and non-imaging flow cytometers. Our results [MESF = ~50 for the green and red channels (ch1 and ch2), Supplementary Fig. 9] indicate the high sensitivity of the VIFFI flow cytometer even at a high flow speed of 1 m s−1. On the other hand, it is important to note that identical measurement conditions (e.g., flow speed, excitation laser power) are required for a fair apples-to-apples comparison between this value and those of other imaging or non-imaging flow cytometers. Unfortunately, while the MESF values of commercial imaging and non-imaging flow cytometers are available, these conditions are often not disclosed in their brochures or specification sheets. Therefore, it is best to discuss the SNR shown above together with the MESF for a fair apples-to-apples comparison between different imaging or non-imaging flow cytometers, including the VIFFI flow cytometer.

### Data acquisition speed

The data acquisition speed of IFC is evaluated by the (effective) line rate. The cell throughput is a common index of the data acquisition speed of flow cytometry, but it may cause confusion in the case of IFC because it is influenced by other parameters such as the pixel size and the cell spacing. In other words, the cell throughput has trade-off relations with other parameters, complicating the comparison between different IFC systems.
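As a quick check of the line-rate figures discussed in this section, the effective line rate is the flow speed divided by the pixel size in the flow direction, and multiplying it by the pixel size and dividing by the average cell spacing recovers the cell throughput. A minimal sketch using the numbers quoted in this paper:

```python
# Effective line rate and the corresponding cell throughput for VIFFI.
pixel_size = 0.325e-6    # m, pixel size in the object plane (p_x)
flow_speed = 1.0         # m/s
cell_spacing = 100e-6    # m, average cell spacing (l_x)

line_rate = flow_speed / pixel_size                  # f_x ~ 3.1 MHz
throughput = pixel_size * line_rate / cell_spacing   # f_th = p_x * f_x / l_x

print(f"line rate: {line_rate/1e6:.1f} MHz")         # -> 3.1 MHz
print(f"throughput: {throughput:.0f} cells/s")       # -> 10,000 cells/s
```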
Parameters related to the data acquisition speed of VIFFI flow cytometry and TDI-based IFC (ImageStream®X Mark II) are summarized in Supplementary Table 2. The effective line rate, calculated as the flow speed divided by the pixel size in the flow direction, is 3.1 MHz for VIFFI flow cytometry and 0.12 MHz for the ImageStream®X Mark II (assuming 60× magnification). The throughput is proportional to the line rate; their relation is formulated by

$$f_{\mathrm{th}} = \frac{p_xf_x}{l_x},$$

where fth, px, lx, and fx denote the cell throughput, the pixel size (pixel resolution) in the object plane in the flow direction, the average cell spacing, and the line rate, respectively. The pixel size affects the data volume and the information content of a single-cell image. The average cell spacing is determined by the sample preparation, not by the instrument. Therefore, for a fair apples-to-apples comparison of different imaging flow cytometers, the cell throughput should be compared under the same conditions of pixel size and average cell spacing. In this manner, the VIFFI flow cytometer has a factor of ~26 higher cell throughput than the ImageStream®X Mark II.

### Trade-offs between SNR, pixel size, FOV, and flow speed

The trade-off relations between the SNR, the pixel size, the FOV, and the flow speed are formulated by

$$\frac{\mathrm{FOV}_x - l_0}{v} \approx \frac{\mathrm{FOV}_x\,\mathrm{FOV}_y}{f_{\mathrm{p}}s_{\mathrm{p}}} + t_{\mathrm{exp}1},$$ (5)

$$t_{\mathrm{exp}2} = \frac{L_{\mathrm{scan}}}{v},$$ (6)

$$\frac{S}{N} \approx \beta\sqrt{\min(t_{\mathrm{exp}1},\,t_{\mathrm{exp}2})},$$ (7)

where FOVx, FOVy, l0, v, fp, sp, texp1, texp2, Lscan, S, and β denote the FOV in the flow direction, the FOV in the direction perpendicular to the flow direction, the overlapped length between consecutive frames in the flow direction, the flow velocity, the pixel data rate of the sCMOS camera (572 MHz), the pixel area, the upper limit on the exposure time determined by the camera’s data transfer speed, the upper limit on the exposure time determined by the maximum scan range of the polygon scanner [the left-hand side of Eq. (1)], the maximum scan range on the object, the signal level, and a proportionality coefficient, respectively. Equation (5) represents the frame period; the first term on its right-hand side represents the readout time of the camera. Equation (6) gives the constraint on texp2 set by Lscan, which is determined by the design of the optical imaging system as discussed in the Optical design section (see also Supplementary Fig. 2). If an arbitrary design of the optical imaging system is allowed, the constraint of Eq. (6) can be neglected. In Eq. (7), we assumed that shot noise predominantly contributes to the SNR, on the basis of the results discussed in the Imaging sensitivity section. For more rigorous calculations accounting for the readout noise, which is important in the case of low SNRs, Eq. (7) may need to be modified on the basis of Eq. (4). Equations (5)–(7) provide a comprehensive overview of the trade-off relations, as well as their quantification. Supplementary Fig. 3 shows an example of a specific trade-off relation, between FOVy and v, which can be derived from Eq. (5). From a practical point of view, it is convenient to rewrite Eq. (5) using adjustable parameters of a system such as the magnification of the imaging system (M) and the number of lines in the direction perpendicular to the flow direction (Ny).
Some of the above parameters are expressed by

$$\mathrm{FOV}_x = \frac{L_x}{M},$$ (8)

$$\mathrm{FOV}_y = \frac{N_yd_{\mathrm{c}}}{M},$$ (9)

$$s_{\mathrm{p}} = \frac{d_{\mathrm{c}}^2}{M^2},$$ (10)

where dc denotes the pixel size of the camera (6.5 μm; we write dc rather than dp to avoid confusion with the polygon mirror diameter in Eq. (1)). Then, Eq. (5) is rewritten as

$$\frac{L_x/M - l_0}{v} \approx \frac{L_x}{f_{\mathrm{p}}d_{\mathrm{c}}}N_y + t_{\mathrm{exp}1}.$$ (11)

Practically, we adjust the values of M (via the choice of the objective lens), Ny (via the configuration of the camera), and v (via the setting of the flow system) to find the best balance between the sensitivity, the FOV in the direction perpendicular to the flow direction, the pixel size, and the throughput, using Eqs. (6), (7), and (11). The spatial resolution is another important parameter of the system that is not considered in the above discussion; it, too, is adjusted via the choice of the objective lens.

### Spatial resolution

We evaluated the spatial resolution of the VIFFI flow cytometer using 18,000 images of 200-nm fluorescent beads (Fluoresbrite® YO Carboxylate Microspheres 0.20 µm, Polysciences, Inc.) obtained with excitation laser light at 488 nm. The ratio of the volumetric flow rates of the sample flow and the sheath flow was set to 1:3900. We estimated the width of the point-spread function (PSF) at e−1 of its maximum (FWe−1M) by fitting Gaussian functions with offset values to the x- and y-profiles of the image of each bead. The results are shown in Supplementary Fig. 15, which characterizes the statistical features of the spatial resolution of the VIFFI flow cytometer. First, the difference in the PSF distribution between the x and y directions represents the elongation of the PSF due to the residual motion blur of the fluorescent beads, which is less than one pixel (325 nm) on average. Second, the distributions in the y direction, with tailed shapes on the right-hand side, represent image defocus. Third, the difference in the PSF distribution between the red and green channels reflects the wavelength dependence of the wavefront aberration (including defocus) of the VIFFI flow cytometer and that of the diffraction-limited size of the PSF.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

The source data underlying Fig. 4b, c, Fig. 5b, c, and Supplementary Figs. 3, 9, and 11–15 are available in our Source Data file. An additional dataset that supports the findings in this study is available upon reasonable request to the corresponding authors.

## Code availability

Our image analysis codes are available at https://github.com/MortisHuang/VIFFI-image-analysis (codes for images of cells) and https://github.com/hideharu-mikami/VIFFI-flbeads (codes for images of fluorescent beads).

## References

1. Barteneva, N. S. & Vorobjev, I. A. Imaging Flow Cytometry (Springer, 2016).
2. Basiji, D. & O’Gorman, M. R. G. Imaging flow cytometry. J. Immunol. Methods 423, 1–2 (2015).
3. Jordan, N. V. et al. HER2 expression identifies dynamic functional states within circulating breast cancer cells. Nature 537, 102–106 (2016).
4. Tse, H. T. K. et al. Quantitative diagnosis of malignant pleural effusions by single-cell mechanophenotyping. Sci. Transl. Med. 5, 212ra163 (2013).
5. Ralston, K. S. et al. Trogocytosis by Entamoeba histolytica contributes to cell killing and tissue invasion. Nature 508, 526–530 (2014).
6. Sancho, D. et al. Identification of a dendritic cell receptor that couples sensing of necrosis to immunity.
Nature 458, 899–903 (2009).
7. Sykes, D. B. et al. Inhibition of dihydroorotate dehydrogenase overcomes differentiation blockade in acute myeloid leukemia. Cell 167, 171–186.e15 (2016).
8. Maryanovich, M. et al. An MTCH2 pathway repressing mitochondria metabolism regulates haematopoietic stem cell fate. Nat. Commun. 6, 7901 (2015).
9. Otto, O. et al. Real-time deformability cytometry: on-the-fly cell mechanical phenotyping. Nat. Methods 12, 199–202 (2015).
10. Altschuler, S. J. & Wu, L. F. Cellular heterogeneity: Do differences make a difference? Cell 141, 559–563 (2010).
11. Eulenberg, P. et al. Reconstructing cell cycle and disease progression using deep learning. Nat. Commun. 8, 463 (2017).
12. Blasi, T. et al. Label-free cell cycle analysis for high-throughput imaging flow cytometry. Nat. Commun. 7, 10256 (2016).
13. Santos-Ferreira, T. et al. Retinal transplantation of photoreceptors results in donor–host cytoplasmic exchange. Nat. Commun. 7, 13028 (2016).
14. Thaunat, O. et al. Asymmetric segregation of polarized antigen on B cell division shapes presentation capacity. Science 335, 475–479 (2012).
15. Bourton, E. C. et al. Multispectral imaging flow cytometry reveals distinct frequencies of γ-H2AX foci induction in DNA double strand break repair defective human cell lines. Cytom. A 81A, 130–137 (2012).
16. Lalmansingh, A. S., Arora, K., DeMarco, R. A., Hager, G. L. & Nagaich, A. K. High-throughput RNA FISH analysis by imaging flow cytometry reveals that pioneer factor Foxa1 reduces transcriptional stochasticity. PLoS ONE 8, e76043 (2013).
17. Baker, M. Faster frames, clearer pictures. Nat. Methods 8, 1005–1009 (2011).
18. Rane, A. S., Rutkauskaite, J., DeMello, A. & Stavrakis, S. High-throughput multi-parametric imaging flow cytometry. Chem 3, 588–602 (2017).
19. Basiji, D. A., Ortyn, W. E., Liang, L., Venkatachalam, V. & Morrissey, P. Cellular image analysis and imaging by flow cytometry. Clin. Lab. Med. 27, 653–670 (2007).
20. Mikami, H. et al. Ultrafast confocal fluorescence microscopy beyond the fluorescence lifetime limit. Optica 5, 117–126 (2018).
21. Diebold, E. D., Buckley, B. W., Gossett, D. R. & Jalali, B. Digitally synthesized beat frequency multiplexing for sub-millisecond fluorescence microscopy. Nat. Photon. 7, 806–810 (2013).
22. Han, Y. & Lo, Y.-H. Imaging cells in flow cytometer using spatial-temporal transformation. Sci. Rep. 5, 13267 (2015).
23. Kay, D. B., Cambier, J. L. & Wheeless, L. L. Imaging in flow. J. Histochem. Cytochem. 27, 329–334 (1979).
24. Zmijan, R. et al. High throughput imaging cytometer with acoustic focussing. RSC Adv. 5, 83206–83216 (2015).
25. Shapiro, H. M. Practical Flow Cytometry (Wiley-Liss, 2003).
26. Nitta, N. et al. Intelligent image-activated cell sorting. Cell 175, 266–276 (2018).
27. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. Preprint at https://arxiv.org/abs/1409.1556 (2014).
28. Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).
29. Villani, A.-C. et al. Single-cell RNA-seq reveals new types of human blood dendritic cells, monocytes, and progenitors. Science 356, eaah4573 (2017).
30. Wakisaka, Y. et al. Probing the metabolic heterogeneity of live Euglena gracilis with stimulated Raman scattering microscopy. Nat. Microbiol. 1, 16124 (2016).
31. Goold, H., Beisson, F., Peltier, G. & Li-Beisson, Y.
Microalgal lipid droplets: composition, diversity, biogenesis and functions. Plant Cell Rep. 34, 545–555 (2015).
32. Wong, D. M. & Franz, A. K. A comparison of lipid storage in Phaeodactylum tricornutum and Tetraselmis suecica using laser scanning confocal microscopy. J. Microbiol. Methods 95, 122–128 (2013).
33. Gustafsson, M. G. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 198, 82–87 (2000).
34. Ortyn, W. E. et al. Extended depth of field imaging for high speed cell analysis. Cytom. A 71A, 215–231 (2007).
35. Isozaki, A. et al. A practical guide to intelligent image-activated cell sorting. Nat. Protoc. 14, 2370–2415 (2019).
36. Ohnuki, S. & Ohya, Y. High-dimensional single-cell phenotyping reveals extensive haploinsufficiency. PLoS Biol. 16, e2005130 (2018).
37. Ohya, Y. et al. High-dimensional and large-scale phenotyping of yeast mutants. Proc. Natl Acad. Sci. USA 102, 19015–19020 (2005).
38. Yamano, T., Asada, A., Sato, E. & Fukuzawa, H. Isolation and characterization of mutants defective in the localization of LCIB, an essential factor for the carbon-concentrating mechanism in Chlamydomonas reinhardtii. Photosynth. Res. 121, 193–200 (2014).
39. Aceto, N. et al. Circulating tumor cell clusters are oligoclonal precursors of breast cancer metastasis. Cell 158, 1110–1122 (2014).
40. Sarioglu, A. F. et al. A microfluidic device for label-free, physical capture of circulating tumor cell clusters. Nat. Methods 12, 685–691 (2015).
41. Keller, L. & Pantel, K. Unravelling tumour heterogeneity by single-cell profiling of circulating tumour cells. Nat. Rev. Cancer 19, 553–567 (2019).
42. Watanabe, M. M., Kawachi, M., Hiroki, M. & Kasai, F. (eds). NIES Collection List of Strains. 6th edn (NIES, Japan, 2000).

## Acknowledgements

This work was supported by the ImPACT Program (CSTI, Cabinet Office, Government of Japan), the JSPS Core-to-Core Program, JSPS KAKENHI Grant Number 19H05633, the White Rock Foundation, and the Precise Measurement Technology Promotion Foundation. We thank our ImPACT collaborators for their help with the evaluation of imaging sensitivity.

## Author information

### Contributions

H.Mikami and Y.Ozeki conceived the idea. H.Mikami, M.K., and Y.Ozeki designed the setup. H.Mikami, K.M., and H.Matsumura built the experimental setup and performed the experiments. H.Mikami, C.-J.H., K.H., T.S., and C.L. performed the data analysis. S.Ueno, T.Miura, T.I., K.N., T.Maeno, H.W., M.Y., S.Uemura, S.O., Y.Ohya, H.K., and S.M. prepared the samples. C.-W.S. supervised C.-J.H.’s computational work. Y.Ozeki and K.G. supervised the project. H.Mikami and K.G. wrote the paper with assistance from all the co-authors.

### Corresponding authors

Correspondence to Hideharu Mikami, Yasuyuki Ozeki, or Keisuke Goda.

## Ethics declarations

### Competing interests

H.Mikami, Y.Ozeki, and K.G. are inventors on a pending patent that covers a part of the key ideas of VIFFI flow cytometry (PCT/JP2017/031937, applied for by The University of Tokyo). T.S. and K.G. are shareholders of CYBO, Inc. The authors declare no other competing interests.

### Peer review information

Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

### Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
# zbMATH — the first resource for mathematics

## Ostermann, Alexander

Author ID: ostermann.alexander
Published as: Ostermann, A.; Ostermann, Alexander
Homepage: https://numerical-analysis.uibk.ac.at/staff/alexander-ostermann
External Links: MGP · dblp
Documents Indexed: 104 Publications since 1986, including 5 Books

#### Co-Authors

(The number preceding each name is the count of joint publications.)

6 single-authored 13 Einkemmer, Lukas 11 Hochbruck, Marlis 10 Hansen, Eskil 8 Thalhammer, Mechthild 7 Caliari, Marco 7 Lubich, Christian 6 Rainer, Stefan 6 Schratz, Katharina 5 Luan, Vu Thai 5 Oberguggenberger, Michael B. 4 Piazzola, Chiara 3 Kandolf, Peter 3 Kaps, Peter 2 Fellin, Wolfgang 2 González, Cesáreo 2 Hairer, Ernst 2 Hell, Tobias 2 Honig, Michael L. 2 Koskela, Antti 2 Palencia, Cesar 2 Roche, Mathieu 2 Su, Chunmei 2 Wanner, Gerhard 1 Bakaev, Nikolai Yu. 1 Brunner, Hermann 1 Bui, The Duy 1 Faou, Erwan 1 Ganzález, Cesáreo 1 Hipp, David 1 Husty, Manfred L. 1 Kauthen, Jean-Paul 1 Kirchner, Gerhard 1 Kirlinger, Gabriela 1 Knöller, Marvin 1 Koch, Herbert 1 Kramer, Felix 1 Leibold, Jan 1 Mena, Hermann 1 Moccaldi, Martina 1 Netzer, Norbert 1 Nicaise, Serge 1 Oh, Sung-Jin 1 Pauer, Franz 1 Pfurtscheller, Lena-Maria 1 Residori, Mirko 1 Sandbichler, Michael 1 Schnaubelt, Roland 1 Schweitzer, Julia 1 Van Daele, Marnix 1 Walach, Hanna 1 Wright, Will M. 1 Zhan, Rui 1 Zhao, Jingjun

#### Serials

12 Applied Numerical Mathematics 11 SIAM Journal on Numerical Analysis 9 Journal of Computational and Applied Mathematics 8 IMA Journal of Numerical Analysis 8 BIT 6 Mathematics of Computation 5 Numerische Mathematik 4 Computers & Mathematics with Applications 4 SIAM Journal on Scientific Computing 3 Journal of Computational Physics 2 International Journal for Numerical and Analytical Methods in Geomechanics 2 Applied Mathematics and Computation 2 Internationale Mathematische Nachrichten 2 Oberwolfach Reports 2 Undergraduate Topics in Computer Science 1 Inverse Problems 1 Journal of Mathematical Analysis and Applications 1 Mathematische Semesterberichte 1 ACM Transactions on Mathematical Software 1 Computing 1 SIAM Journal on Matrix Analysis and Applications 1 Journal of Integral Equations and Applications 1 Elemente der Mathematik 1 Linear Algebra and its Applications 1 ETNA. Electronic Transactions on Numerical Analysis 1 Discrete and Continuous Dynamical Systems 1 Foundations of Computational Mathematics 1 Discrete and Continuous Dynamical Systems. Series B 1 Acta Numerica 1 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 1 Undergraduate Texts in Mathematics

#### Fields

90 Numerical analysis (65-XX) 55 Partial differential equations (35-XX) 30 Ordinary differential equations (34-XX) 8 General and overarching topics; collections (00-XX) 7 Operator theory (47-XX) 4 Real functions (26-XX) 4 Dynamical systems and ergodic theory (37-XX) 4 Computer science (68-XX) 3 History and biography (01-XX) 3 Fluid mechanics (76-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Geometry (51-XX) 2 Mechanics of deformable solids (74-XX) 2 Quantum theory (81-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Integral transforms, operational calculus (44-XX) 1 Integral equations (45-XX) 1 Optics, electromagnetic theory (78-XX) 1 Classical thermodynamics, heat transfer (80-XX)

#### Citations contained in zbMATH Open

89 Publications have been cited 1,450 times in 800 Documents

Exponential integrators. Zbl 1242.65109 Hochbruck, Marlis; Ostermann, Alexander 2010 Explicit exponential Runge-Kutta methods for semilinear parabolic problems. Zbl 1093.65052 Hochbruck, Marlis; Ostermann, Alexander 2005 Exponential Rosenbrock-type methods. Zbl 1193.65119 Hochbruck, Marlis; Ostermann, Alexander; Schweitzer, Julia 2009 Exponential Runge-Kutta methods for parabolic problems. Zbl 1070.65099 Hochbruck, Marlis; Ostermann, Alexander 2005 Runge-Kutta methods for parabolic equations and convolution quadrature. Zbl 0795.65062 Lubich, Ch.; Ostermann, A. 1993 Multi-grid dynamic iteration for parabolic equations. Zbl 0623.65125 Lubich, Ch.; Ostermann, A. 1987 Linearly implicit time discretization of nonlinear parabolic equations. Zbl 0834.65092 Lubich, Ch.; Ostermann, A. 1995 A class of explicit exponential general linear methods. Zbl 1103.65061 Ostermann, A.; Thalhammer, M.; Wright, W. M. 2006 Runge-Kutta approximation of quasi-linear parabolic equations. Zbl 0832.65104 Lubich, Christian; Ostermann, Alexander 1995 High order splitting methods for analytic semigroups exist. Zbl 1176.65066 Hansen, Eskil; Ostermann, Alexander 2009 Implementation of exponential Rosenbrock-type integrators. Zbl 1160.65318 Caliari, Marco; Ostermann, Alexander 2009 Runge-Kutta methods for partial differential equations and fractional orders of convergence. Zbl 0769.65068 Ostermann, A.; Roche, M. 1992 Exponential splitting for unbounded operators. Zbl 1198.65185 Hansen, Eskil; Ostermann, Alexander 2009 Exponential multistep methods of Adams-type. Zbl 1237.65071 Hochbruck, Marlis; Ostermann, Alexander 2011 Rosenbrock methods for partial differential equations and fractional orders of convergence. Zbl 0780.65056 Ostermann, A.; Roche, M. 1993 Explicit exponential Runge-Kutta methods of high order for parabolic problems. Zbl 1314.65103 Luan, Vu Thai; Ostermann, Alexander 2014 Runge-Kutta time discretization of reaction-diffusion and Navier-Stokes equations: Nonsmooth-data error estimates and applications to long-time behaviour. Zbl 0872.65090 Lubich, Christian; Ostermann, Alexander 1996 Comparison of software for computing the action of the matrix exponential. Zbl 1290.65042 Caliari, Marco; Kandolf, Peter; Ostermann, Alexander; Rainer, Stefan 2014 A minimisation approach for computing the ground state of Gross-Pitaevskii systems.
Zbl 1159.82311 Caliari, Marco; Ostermann, Alexander; Rainer, Stefan; Thalhammer, Mechthild 2009 Convergence of Runge-Kutta methods for nonlinear parabolic equations. Zbl 1004.65093 Ostermann, Alexander; Thalhammer, Mechthild 2002 Exponential Rosenbrock methods of order five – construction, analysis and numerical comparisons. Zbl 1291.65201 Luan, Vu Thai; Ostermann, Alexander 2014 Overcoming order reduction in diffusion-reaction splitting. I: Dirichlet boundary conditions. Zbl 1433.65189 Einkemmer, Lukas; Ostermann, Alexander 2015 Exponential B-series: the stiff case. Zbl 1285.65043 Luan, Vu Thai; Ostermann, Alexander 2013 Dimension splitting for evolution equations. Zbl 1149.65084 Hansen, Eskil; Ostermann, Alexander 2008 Backward Euler discretization of fully nonlinear parabolic problems. Zbl 0991.65087 González, Cesáreo; Ostermann, Alexander; Palencia, César; Thalhammer, Mechthild 2002 Interior estimates for time discretizations of parabolic equations. Zbl 0841.65080 Lubich, Christian; Ostermann, Alexander 1995 Numerical low-rank approximation of matrix differential equations. Zbl 1432.65090 Mena, Hermann; Ostermann, Alexander; Pfurtscheller, Lena-Maria; Piazzola, Chiara 2018 The Leja method revisited: backward error analysis for the matrix exponential. Zbl 1339.65061 Caliari, Marco; Kandolf, Peter; Ostermann, Alexander; Rainer, Stefan 2016 A second-order Magnus-type integrator for nonautonomous parabolic problems. Zbl 1089.65043 González, Cesáreo; Ostermann, A.; Thalhammer, Mechthild 2006 A second-order positivity preserving scheme for semilinear parabolic problems. Zbl 1267.65082 Hansen, Eskil; Kramer, Felix; Ostermann, Alexander 2012 Convergence analysis of a discontinuous Galerkin/Strang splitting approximation for the Vlasov-Poisson equations. Zbl 1302.82108 Einkemmer, Lukas; Ostermann, Alexander 2014 Convergence analysis of Strang splitting for Vlasov-type equations. Zbl 1297.65106 Einkemmer, Lukas; Ostermann, Alexander 2014 Non-smooth data error estimates for linearly implicit Runge-Kutta methods. Zbl 0954.65060 Ostermann, Alexander; Thalhammer, Mechthild 2000 The solution of a combustion problem with Rosenbrock methods. Zbl 0619.76088 Ostermann, A.; Kaps, P.; Bui, T. D. 1986 A splitting approach for the Kadomtsev-Petviashvili equation. Zbl 1354.65102 Einkemmer, Lukas; Ostermann, Alexander 2015 Geometry by its history. Zbl 1288.51001 Ostermann, Alexander; Wanner, Gerhard 2012 Low regularity exponential-type integrators for semilinear Schrödinger equations. Zbl 1402.65098 Ostermann, Alexander; Schratz, Katharina 2018 Overcoming order reduction in diffusion-reaction splitting. II: Oblique boundary conditions. Zbl 1355.65121 Einkemmer, Lukas; Ostermann, Alexander 2016 A class of half-explicit Runge-Kutta methods for differential-algebraic systems of index 3. Zbl 0788.65084 Ostermann, Alexander 1993 Stability of linear multistep methods and applications to nonlinear parabolic problems. Zbl 1041.65073 Ostermann, A.; Thalhammer, M.; Kirlinger, G. 2004 Rosenbrock methods using few LU-decompositions. Zbl 0667.65066 Kaps, Peter; Ostermann, Alexander 1989 A splitting approach for the magnetic Schrödinger equation. Zbl 1373.81195 Caliari, M.; Ostermann, A.; Piazzola, C. 2017 Analysis of exponential splitting methods for inhomogeneous parabolic equations. Zbl 1311.65118 Faou, Erwan; Ostermann, Alexander; Schratz, Katharina 2015 Hopf bifurcation of reaction-diffusion and Navier-Stokes equations under discretization. 
Zbl 0924.65051 Lubich, Christian; Ostermann, Alexander 1998 Runge-Kutta time discretizations of parabolic Volterra integro-differential equations. Zbl 0832.65141 Brunner, H.; Kauthen, J.-P.; Ostermann, A. 1995 Modification of dimension-splitting methods – overcoming the order reduction due to corner singularities. Zbl 1321.65132 Hell, Tobias; Ostermann, Alexander; Sandbichler, Michael 2015 An exponential integrator for non-autonomous parabolic problems. Zbl 1312.65210 Hipp, David; Hochbruck, Marlis; Ostermann, Alexander 2014 Dense output for extrapolation methods. Zbl 0693.65048 Hairer, Ernst; Ostermann, Alexander 1990 Error analysis of splitting methods for inhomogeneous evolution equations. Zbl 1267.65085 Ostermann, Alexander; Schratz, Katharina 2012 Dimension splitting for quasilinear parabolic equations. Zbl 1211.65117 Hansen, Eskil; Ostermann, Alexander 2010 Consistent tangent operators for constitutive rate equations. Zbl 1033.74043 Fellin, Wolfgang; Ostermann, Alexander 2002 Long-term stability of variable stepsize approximations of semigroups. Zbl 1005.65076 Bakaev, Nikolai; Ostermann, Alexander 2002 Shadowing for nonautonomous parabolic problems with applications to long-time error bounds. Zbl 0954.35099 Ostermann, Alexander; Palencia, Cesar 2000 A Fourier integrator for the cubic nonlinear Schrödinger equation with rough initial data. Zbl 1422.65222 Knöller, Marvin; Ostermann, Alexander; Schratz, Katharina 2019 Convergence of a low-rank Lie-Trotter splitting for stiff matrix differential equations. Zbl 1420.65072 Ostermann, Alexander; Piazzola, Chiara; Walach, Hanna 2019 A residual based error estimate for Leja interpolation of matrix functions. Zbl 1293.65075 Kandolf, Peter; Ostermann, Alexander; Rainer, Stefan 2014 Exponential Taylor methods: analysis and implementation. Zbl 1319.65120 Koskela, Antti; Ostermann, Alexander 2013 Meshfree exponential integrators. Zbl 1264.65164 Caliari, Marco; Ostermann, Alexander; Rainer, Stefan 2013 A convergence analysis of the exponential Euler iteration for nonlinear ill-posed problems. Zbl 1184.65063 Hochbruck, Marlis; Hönig, Michael; Ostermann, Alexander 2009 Positivity of exponential multistep methods. Zbl 1119.65357 Ostermann, Alexander; Thalhammer, Mechthild 2006 A half-explicit extrapolation method for differential-algebraic systems of index 3. Zbl 0704.65051 Ostermann, Alexander 1990 Continuous extensions of Rosenbrock-type methods. Zbl 0697.65055 Ostermann, Alexander 1990 Stability analysis of explicit exponential integrators for delay differential equations. Zbl 1348.65113 Zhao, Jingjun; Zhan, Rui; Ostermann, Alexander 2016 On the error propagation of semi-Lagrange and Fourier methods for advection problems. Zbl 1364.65179 Einkemmer, Lukas; Ostermann, Alexander 2015 An almost symmetric Strang splitting scheme for nonlinear evolution equations. Zbl 1368.65074 Einkemmer, Lukas; Ostermann, Alexander 2014 Stability of exponential operator splitting methods for noncontractive semigroups. Zbl 1274.65217 Ostermann, Alexander; Schratz, Katharina 2013 Dimension splitting for time dependent operators. Zbl 1189.65104 Hansen, Eskil; Ostermann, Alexander 2009 Positivity of exponential Runge-Kutta methods. Zbl 1125.65068 Ostermann, Alexander; van Daele, Marnix 2007 Stability of W-methods with applications to operator splitting and to geometric theory. Zbl 1007.65063 Ostermann, Alexander 2002 Efficient boundary corrected Strang splitting.
Zbl 1427.65087 Einkemmer, Lukas; Moccaldi, Martina; Ostermann, Alexander 2018 A comparison of boundary correction methods for Strang splitting. Zbl 1397.65155 Einkemmer, Lukas; Ostermann, Alexander 2018 Parallel exponential Rosenbrock methods. Zbl 1443.65093 Luan, Vu Thai; Ostermann, Alexander 2016 High-order splitting schemes for semilinear evolution equations. Zbl 1355.65071 Hansen, Eskil; Ostermann, Alexander 2016 Detecting structural changes with ARMA processes. Zbl 1348.93243 Ostermann, A.; Spielberger, G.; Tributsch, A. 2016 An almost symmetric Strang splitting scheme for the construction of high order composition methods. Zbl 1321.65109 Einkemmer, Lukas; Ostermann, Alexander 2014 Compatibility conditions for Dirichlet and Neumann problems of Poisson’s equation on a rectangle. Zbl 1298.35050 Hell, Tobias; Ostermann, Alexander 2014 Regularization of nonlinear ill-posed problems by exponential integrators. Zbl 1167.65369 Hochbruck, Marlis; Hönig, Michael; Ostermann, Alexander 2009 Optimal convergence results for Runge-Kutta discretizations of linear nonautonomous parabolic problems. Zbl 0921.65068 Ganzález, Cesáreo; Ostermann, Alexander 1999 Dense output for the GBS extrapolation method. Zbl 0810.65074 Hairer, E.; Ostermann, A. 1992 On the convergence of Lawson methods for semilinear stiff problems. Zbl 1453.65269 Hochbruck, Marlis; Leibold, Jan; Ostermann, Alexander 2020 Two exponential-type integrators for the “good” Boussinesq equation. Zbl 1428.35425 Ostermann, Alexander; Su, Chunmei 2019 A split step Fourier/discontinuous Galerkin scheme for the Kadomtsev-Petviashvili equation. Zbl 1427.65285 Einkemmer, Lukas; Ostermann, Alexander 2018 Splitting methods for constrained diffusion-reaction systems. Zbl 1391.35228 Altmann, R.; Ostermann, A. 2017 The error structure of the Douglas-Rachford splitting method for stiff linear problems. Zbl 1382.65151 Hansen, Eskil; Ostermann, Alexander; Schratz, Katharina 2016 Reprint of “Explicit exponential Runge-Kutta methods of high order for parabolic problems”. Zbl 1330.65111 Luan, Vu Thai; Ostermann, Alexander 2014 Unconditional convergence of DIRK schemes applied to dissipative evolution equations. Zbl 1193.65138 Hansen, Eskil; Ostermann, Alexander 2010 A dynamic proof of Thébault’s theorem. Zbl 1192.51011 Ostermann, Alexander; Wanner, Gerhard 2010 Finite element Runge-Kutta discretizations of porous medium-type equations. Zbl 1169.76033 Hansen, Eskil; Ostermann, Alexander 2008 $$L(\alpha{})$$-stable variable order Rosenbrock methods. Zbl 0738.65064 Kaps, Peter; Ostermann, Alexander 1991

#### Cited by 1,079 Authors

52 Ostermann, Alexander 19 Lubich, Christian 19 Wu, Xinyuan 16 Einkemmer, Lukas 16 Wang, Bin 13 Caliari, Marco 13 Hochbruck, Marlis 13 Thalhammer, Mechthild 12 Cano, Begoña 12 Hansen, Eskil 11 Ju, Lili 11 Khaliq, Abdul Q. M. 11 Schratz, Katharina 10 Alonso-Mallo, Isaías 10 Blanes, Sergio 10 Zhao, Jingjun 9 López-Fernández, María 9 Sandu, Adrian 9 Tambue, Antoine 8 Akrivis, Georgios D. 8 Banjai, Lehel 8 Bao, Weizhu 8 Luan, Vu Thai 8 Palencia, Cesar 8 Xu, Yang 7 Botchev, Mikhail A. 7 Csomós, Petra 7 Garrappa, Roberto 7 González, Cesáreo 7 Rang, Joachim 7 Tokman, Mayya 6 Cai, Yongyong 6 Chartier, Philippe 6 Faou, Erwan 6 Gander, Martin Jakob 6 Jorge, Juan Carlos 6 Kandolf, Peter 6 Klein, Christian 6 Li, Buyang 6 Liu, Changying 6 Mena, Hermann 6 Moret, Igor 6 Mukam, Jean Daniel 6 Sauter, Stefan A. 6 Sayas, Francisco-Javier 6 Weiner, Rüdiger 6 Zhan, Rui 6 Zhao, Xiaofei 5 Auzinger, Winfried 5 Burrage, Kevin 5 Casas, Fernando 5 Crouseilles, Nicolas 5 Emmrich, Etienne 5 Geiser, Jürgen 5 Hairer, Ernst 5 Koch, Othmar 5 Kovács, Mihály 5 Moreta, María Jesús 5 Reguera, Nuria 5 Seydaoğlu, Muaz 5 Simoncini, Valeria 5 Stillfjord, Tony 5 Su, Chunmei 5 Vilmart, Gilles 5 Wade, Bruce A. 5 Wanner, Gerhard 5 Wensch, Jörg 4 Bátkai, András 4 Bhatt, Harish P. 4 Calvo, M. P. 4 Gaspar, Francisco José 4 Gauckler, Ludwig J. 4 González-Pinto, Severiano 4 Hernandez-Abreu, Domingo 4 Iserles, Arieh 4 Jahnke, Tobias 4 Kovács, Balázs 4 Lang, Jens 4 Li, Dongping 4 Li, Xiao 4 Lord, Gabriel James 4 Mei, Lijie 4 Novati, Paolo 4 Owolabi, Kolade Matthew 4 Popolizio, Marina 4 Rainer, Stefan 4 Rainwater, Greg 4 Rodrigo, Carmen 4 Schnaubelt, Roland 4 Söderlind, Gustaf 4 Turner, Ian William 4 Wang, Cheng 3 Altmann, Robert 3 Arévalo, Carmen 3 Bader, Philipp 3 Baumstark, Simon 3 Botchev, Mike A. 3 Bujanda, Blanca 3 Cong, Yuhao 3 Du, Qiang ...and 979 more Authors

#### Cited in 136 Serials

82 Applied Numerical Mathematics 77 Journal of Computational Physics 75 Journal of Computational and Applied Mathematics 50 BIT 47 Numerische Mathematik 40 SIAM Journal on Scientific Computing 32 Computers & Mathematics with Applications 29 Mathematics of Computation 28 SIAM Journal on Numerical Analysis 25 Applied Mathematics and Computation 25 Journal of Scientific Computing 17 Numerical Algorithms 11 Computer Methods in Applied Mechanics and Engineering 9 International Journal of Computer Mathematics 9 Foundations of Computational Mathematics 8 Applied Mathematics Letters 7 Journal of Mathematical Analysis and Applications 7 Numerical Methods for Partial Differential Equations 7 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 6 Linear Algebra and its Applications 6 Discrete and Continuous Dynamical Systems.
Series B 5 Mathematics and Computers in Simulation 5 Physica D 5 Numerical Linear Algebra with Applications 4 Computational Mechanics 4 SIAM Journal on Matrix Analysis and Applications 4 Elemente der Mathematik 4 Computational and Applied Mathematics 4 Advances in Computational Mathematics 3 Computers and Fluids 3 Computer Physics Communications 3 Computing 3 International Journal for Numerical Methods in Engineering 3 Numerical Functional Analysis and Optimization 3 Journal of Integral Equations and Applications 3 Japan Journal of Industrial and Applied Mathematics 3 Applied Mathematical Modelling 3 Discrete and Continuous Dynamical Systems 3 Abstract and Applied Analysis 3 Journal of Applied Mathematics and Computing 3 Multiscale Modeling & Simulation 3 East Asian Journal on Applied Mathematics 2 Wave Motion 2 Calcolo 2 Journal of Functional Analysis 2 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 2 Semigroup Forum 2 Journal of Nonlinear Science 2 ETNA. Electronic Transactions on Numerical Analysis 2 Computing and Visualization in Science 2 Communications in Nonlinear Science and Numerical Simulation 2 Computational Geosciences 2 International Journal of Nonlinear Sciences and Numerical Simulation 2 Computational Methods in Applied Mathematics 2 Journal of Applied Mathematics 2 Journal of Numerical Mathematics 2 International Journal of Differential Equations 2 Research in the Mathematical Sciences 2 SN Partial Differential Equations and Applications 1 Analysis Mathematica 1 International Journal of Control 1 International Journal for Numerical and Analytical Methods in Geomechanics 1 International Journal for Numerical Methods in Fluids 1 Journal of Fluid Mechanics 1 Journal of the Franklin Institute 1 Journal of Mathematical Physics 1 The Mathematical Gazette 1 Mathematical Methods in the Applied Sciences 1 Mathematische Semesterberichte 1 Nonlinearity 1 The Mathematical Intelligencer 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 The Annals of Probability 1 Automatica 1 Czechoslovak Mathematical Journal 1 Integral Equations and Operator Theory 1 Transactions of the American Mathematical Society 1 Acta Applicandae Mathematicae 1 ACM Transactions on Graphics 1 International Journal of Approximate Reasoning 1 COMPEL 1 Science in China. Series A 1 Applications of Mathematics 1 Stochastic Processes and their Applications 1 Indagationes Mathematicae. New Series 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 Journal of Mathematical Sciences (New York) 1 Russian Journal of Numerical Analysis and Mathematical Modelling 1 Engineering Analysis with Boundary Elements 1 Bernoulli 1 Journal of Mathematical Chemistry 1 Mathematical Problems in Engineering 1 Optimization Methods & Software 1 Journal of Inequalities and Applications 1 Journal of Applied Mechanics and Technical Physics 1 ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik 1 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 International Journal of Theoretical and Applied Finance 1 M2AN. Mathematical Modelling and Numerical Analysis. ESAIM, European Series in Applied and Industrial Mathematics 1 Acta Mathematica Sinica. 
English Series ...and 36 more Serials

#### Cited in 46 Fields

718 Numerical analysis (65-XX) 346 Partial differential equations (35-XX) 161 Ordinary differential equations (34-XX) 65 Fluid mechanics (76-XX) 41 Dynamical systems and ergodic theory (37-XX) 36 Operator theory (47-XX) 26 Linear and multilinear algebra; matrix theory (15-XX) 24 Quantum theory (81-XX) 23 Statistical mechanics, structure of matter (82-XX) 22 Integral equations (45-XX) 22 Probability theory and stochastic processes (60-XX) 21 Mechanics of deformable solids (74-XX) 19 Systems theory; control (93-XX) 18 Optics, electromagnetic theory (78-XX) 16 Biology and other natural sciences (92-XX) 14 Calculus of variations and optimal control; optimization (49-XX) 13 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 11 Mechanics of particles and systems (70-XX) 11 Geophysics (86-XX) 10 Real functions (26-XX) 8 Geometry (51-XX) 8 Classical thermodynamics, heat transfer (80-XX) 7 Computer science (68-XX) 6 Special functions (33-XX) 6 Integral transforms, operational calculus (44-XX) 4 Approximations and expansions (41-XX) 3 History and biography (01-XX) 3 Difference and functional equations (39-XX) 3 Functional analysis (46-XX) 3 Information and communication theory, circuits (94-XX) 2 General and overarching topics; collections (00-XX) 2 Algebraic geometry (14-XX) 2 Topological groups, Lie groups (22-XX) 2 Differential geometry (53-XX) 2 Statistics (62-XX) 2 Operations research, mathematical programming (90-XX) 1 Number theory (11-XX) 1 Commutative algebra (13-XX) 1 Nonassociative rings and algebras (17-XX) 1 Group theory and generalizations (20-XX) 1 Functions of a complex variable (30-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Relativity and gravitational theory (83-XX) 1 Astronomy and astrophysics (85-XX)
# Wouldn’t you like to know what’s going on in my mind?

I suppose most theoretical physicists who (like me) are comfortably past the age of 60 worry about their susceptibility to “crazy-old-guy syndrome.” (Sorry for the sexism, but all the victims of this malady I know are guys.) It can be sad when a formerly great scientist falls far out of the mainstream and seems to be spouting nonsense. Matthew Fisher is only 55, but reluctance to be seen as a crazy old guy might partially explain why he has kept pretty quiet about his passionate pursuit of neuroscience over the past three years. That changed two months ago when he posted a paper on the arXiv about Quantum Cognition.

Neuroscience has a very seductive pull, because it is at once very accessible and very inaccessible. While a theoretical physicist might think and write about a brane even without having or seeing a brane, everybody’s got a brain (some scarecrows excepted). On the other hand, while it’s not too hard to write down and study the equations that describe a brane, it is not at all easy to write down the equations for a brain, let alone solve them. The brain is fascinating because we know so little about it. And … how can anyone with a healthy appreciation for Gödel’s Theorem not be intrigued by the very idea of a brain that thinks about itself?

[Figure: (Almost) everybody’s got a brain.]

The idea that quantum effects could have an important role in brain function is not new, but is routinely dismissed as wildly implausible. Matthew Fisher begs to differ. And those who read his paper (as I hope many will) are bound to conclude: This old guy’s not so crazy. He may be onto something. At least he’s raising some very interesting questions.

My appreciation for Matthew and his paper was heightened further this Wednesday, when Matthew stopped by Caltech for a lunch-time seminar and one of my interminable dinner-time group meetings. I don’t know whether my brain is performing quantum information processing (and neither does Matthew), but just the thought that it might be is lighting me up like a zebrafish.

Following Matthew, let’s take a deep breath and ask ourselves: What would need to be true for quantum information processing to be important in the brain? Presumably we would need ways to (1) store quantum information for a long time, (2) transport quantum information, (3) create entanglement, and (4) have entanglement influence the firing of neurons. After a three-year quest, Matthew has interesting things to say about all of these issues. For details, you should read the paper.

Matthew argues that the only plausible repositories for quantum information in the brain are the Phosphorus-31 nuclear spins in phosphate ions. Because these nuclei are spin-1/2, they have no electric quadrupole moments and hence correspondingly long coherence times — of order a second. That may not be long enough, but phosphate ions can be bound with calcium ions into objects called Posner clusters, each containing six P-31 nuclei. The phosphorus nuclei in Posner clusters might have coherence times greatly enhanced by motional narrowing, perhaps as long as weeks or even longer.

Where energy is being consumed in a cell, ATP sometimes releases diphosphate ions (what biochemists call pyrophosphate), which are later broken into two separate phosphate ions, each with a single P-31 qubit.
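For reference (my notation, not necessarily Fisher's): the spin singlet of two spin-1/2 nuclei, which the argument below turns on, is the antisymmetric combination

$$|\mathrm{singlet}\rangle = \frac{1}{\sqrt{2}} \big( |\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle \big),$$

a maximally entangled state with total spin zero; the three triplet states, by contrast, are symmetric under exchange and carry total spin one.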
Matthew argues that the breakup of the diphosphate, catalyzed by a suitable enzyme, will occur at an enhanced rate when these two P-31 qubits are in a spin singlet rather than a spin triplet. The reason is that the enzyme has to grab hold of the diphosphate molecule and stop its rotation in order to break it apart, which is much easier when the molecule has even rather than odd orbital angular momentum. Because the two P-31 nuclei are identical fermions, their total wavefunction must be antisymmetric under exchange; an even (symmetric) orbital state therefore forces the spin state of the P-31 nuclei to be antisymmetric, i.e., the singlet. Thus wherever ATP is consumed there is a plentiful source of entangled qubit pairs.

If the phosphate molecules remain unbound, this entanglement will decay in about a second, but it is a different story if the phosphate ions group together quickly enough into Posner clusters, allowing the entanglement to survive for a much longer time. If the two members of an entangled qubit pair are snatched up by different Posner clusters, the clusters may then be transported into different cells, distributing the entanglement over relatively long distances.

[Figure: (a) Two entangled Posner clusters. Each dot is a P-31 nuclear spin, and each dashed line represents a singlet pair. (b) Many entangled Posner clusters. From Fisher 2015.]

What causes a neuron to fire is a complicated story that I won’t attempt to wade into. Suffice it to say that part of the story may involve the chemical binding of a pair of Posner clusters, which then melt if the environment is sufficiently acidic, releasing calcium ions and phosphate ions which enhance the firing. The melting rate depends on the spin state of the six P-31 nuclei within the cluster, so that entanglement between clusters in different cells may induce nonlocal correlations among different neurons, which could be quite complex if entanglement is widely distributed.

This scenario raises more questions than it answers, but these are definitely scientific questions inviting further investigation and experimental exploration. One thing that is far from clear at this stage is whether such quantum correlations among neurons (if they exist at all) would be easy to simulate with a classical computer. Even if that turns out to be so, these potential quantum effects involving many neurons could be fabulously interesting. IQIM’s mission is to reach for transformative quantum science, particularly approaches that take advantage of synergies between different fields of study. This topic certainly qualifies.* It’s going to be great fun to see where it leads.

If you are a young and ambitious scientist, you may be contemplating the dilemma: Should I pursue quantum physics or neuroscience? Maybe, just maybe, the right answer is: Both.

*Matthew is the only member of the IQIM faculty who is not a Caltech professor, though he once was.

# Toward physical realizations of thermodynamic resource theories

The thank-you slide of my presentation remained onscreen, and the question-and-answer session had begun. I was presenting a seminar about thermodynamic resource theories (TRTs), models developed by quantum-information theorists for small-scale exchanges of heat and work. The audience consisted of condensed-matter physicists who studied graphene and photonic crystals. I was beginning to regret my topic’s abstractness.

The question-asker pointed at a listener. “This is an experimentalist,” he continued, “your arch-nemesis. What implications does your theory have for his lab? Does it have any? Why should he care?”

I could have answered better.
I apologized that quantum-information theorists, reared on the rarefied air of Dirac bras and kets, had developed TRTs. I recalled the baby steps with which science sometimes migrates from theory to experiment. I could have advocated for bounding, with idealizations, efficiencies achievable in labs. I should have invoked the connections being developed with fluctuation results, statistical mechanical theorems that have withstood experimental tests.

The crowd looked unconvinced, but I scored one point: The experimentalist was not my arch-nemesis. “My new friend,” I corrected the questioner.

His question has burned in my mind for two years. Experiments have inspired, but not guided, TRTs. TRTs have yet to drive experiments. Can we strengthen the connection between TRTs and the natural world? If so, what tools must resource theorists develop to predict outcomes of experiments? If not, are resource theorists doing physics?

[Figure: A Q&A more successful than mine.]

I explore answers to these questions in a paper released today. Ian Durham and Dean Rickles were kind enough to request a contribution for a book of conference proceedings. The conference, “Information and Interaction: Eddington, Wheeler, and the Limits of Knowledge,” took place at the University of Cambridge (including a graveyard thereof), thanks to FQXi (the Foundational Questions Institute).

“Proceedings are a great opportunity to get something off your chest,” John said. That seminar Q&A had sat on my chest, like a pet cat who half-smothers you while you’re sleeping, for two years.

Theorists often justify TRTs with experiments.* Experimentalists, an argument goes, are probing limits of physics. Conventional statistical mechanics describe these regimes poorly. To understand these experiments, and to apply them to technologies, we must explore TRTs. Does that argument not merit testing? If experimentalists observe the extremes predicted with TRTs, then the justifications for, and the timeliness of, TRT research will grow.

[Figure: Something to get off your chest. Like the contents of a conference-proceedings paper, according to my advisor.]

You’ve read the paper’s introduction, the first eight paragraphs of this blog post. (Who wouldn’t want to begin a paper with a mortifying anecdote?) Later in the paper, I introduce TRTs and their role in one-shot statistical mechanics, the analysis of work, heat, and entropies on small scales. I discuss whether TRTs can be realized and whether physicists should care. I identify eleven opportunities for shifting TRTs toward experiments. Three opportunities concern what merits realizing and how, in principle, we can realize it. Six adjustments to TRTs could improve TRTs’ realism. Two more-out-there opportunities, though less critical to realizations, could diversify the platforms with which we might realize TRTs.

One opportunity is the physical realization of thermal embezzlement. TRTs, like thermodynamic laws, dictate how systems can and cannot evolve. Suppose that a state $R$ cannot transform into a state $S$: $R \not\mapsto S$. An ancilla $C$, called a catalyst, might facilitate the transformation: $R + C \mapsto S + C$. Catalysts act like engines used to extract work from a pair of heat baths.

Engines degrade, so a realistic transformation might yield $S + \tilde{C}$, wherein $\tilde{C}$ resembles $C$. For certain definitions of “resembles,”** TRTs imply, one can extract arbitrary amounts of work while degrading $C$ only negligibly. Detecting the degradation—the work extraction’s cost—is difficult.
Extracting arbitrary amounts of work at a difficult-to-detect cost contradicts the spirit of thermodynamic law. The spirit, not the letter. Embezzlement seems physically realizable, in principle. Detecting embezzlement could push experimentalists’ abilities to distinguish between close-together states $C$ and $\tilde{C}$. I hope that that challenge, and the chance to violate the spirit of thermodynamic law, attracts researchers. Alternatively, theorists could redefine “resembles” so that $C$ doesn’t rub the law the wrong way.

The paper’s broadness evokes a caveat of Arthur Eddington’s. In 1927, Eddington presented Gifford Lectures entitled The Nature of the Physical World. Being a physicist, he admitted, “I have much to fear from the expert philosophical critic.” Specializing in TRTs, I have much to fear from the expert experimental critic. The paper is intended to point out, and to initiate responses to, the lack of physical realizations of TRTs. Some concerns are practical; some, philosophical. I expect and hope that the discussion will continue…preferably with more cooperation and charity than during that Q&A.

If you want to continue the discussion, drop me a line.

*So do theorists-in-training. I have.

**A definition that involves the trace distance, $T(\rho, \sigma) = \frac{1}{2} \mathrm{Tr}\,|\rho - \sigma|$.

# Bits, bears, and beyond in Banff

Another conference about entropy. Another graveyard.

Last year, I blogged about the University of Cambridge cemetery visited by participants in the conference “Eddington and Wheeler: Information and Interaction.” We’d lectured each other about entropy–a quantification of decay, of the march of time. Then we marched to an overgrown graveyard, where scientists who’d lectured about entropy decades earlier were decaying.

This July, I attended the conference “Beyond i.i.d. in information theory.” The acronym “i.i.d.” stands for “independent and identically distributed,” which requires its own explanation. The conference took place at BIRS, the Banff International Research Station, in Canada. Locals pronounce “BIRS” as “burrs,” the spiky plant bits that stick to your socks when you hike. (I had thought that one pronounces “BIRS” as “beers,” over which participants in quantum conferences debate about the Measurement Problem.)

Conversations at “Beyond i.i.d.” dinner tables ranged from mathematical identities to the hiking for which most tourists visit Banff to the bears we’d been advised to avoid while hiking. So let me explain the meaning of “i.i.d.” in terms of bear attacks.

[Figure: The BIRS conference center. Beyond here, there be bears.]

Suppose that, every day, exactly one bear attacks you as you hike in Banff. Every day, you have a probability $p_1$ of facing down a black bear, a probability $p_2$ of facing down a grizzly, and so on. These probabilities form a distribution $\{p_i\}$ over the set of possible events (of possible attacks). We call the type of attack that occurs on a given day a random variable. The distribution associated with each day equals the distribution associated with each other day. Hence the variables are identically distributed. The Monday distribution doesn’t affect the Tuesday distribution and so on, so the distributions are independent.

Information theorists quantify efficiencies with which i.i.d. tasks can be performed. Suppose that your mother expresses concern about your hiking. She asks you to report which bear harassed you on which day. You compress your report into the fewest possible bits, or units of information.
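Here is a toy numerical sketch of that compression rate (my addition, not part of the original post), for a hypothetical three-bear distribution; the next paragraph identifies the resulting number as the Shannon entropy.

```python
import math

# Hypothetical daily bear-attack distribution: one attack per day, i.i.d.
p = {"black bear": 0.7, "grizzly": 0.25, "polar bear": 0.05}

# Shannon entropy in bits: H = -sum_i p_i * log2(p_i).
# In the i.i.d. asymptotic limit, this is the best achievable
# compression rate, in bits per day, for the report to your mother.
H = -sum(q * math.log2(q) for q in p.values())

# A naive fixed-length code spends log2(3) ~ 1.58 bits per day;
# entropy coding approaches H instead.
print(f"Shannon entropy: {H:.3f} bits per day")
print(f"Naive fixed-length code: {math.log2(len(p)):.3f} bits per day")
```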
Consider the limit as the number of days approaches infinity, called the asymptotic limit. The number of bits required per day approaches a function, called the Shannon entropy $H_S$, of the distribution:

Number of bits required per day $\to H_S(\{p_i\})$.

(Explicitly, $H_S(\{p_i\}) = -\sum_i p_i \log_2 p_i$.)

The Shannon entropy describes many asymptotic properties of i.i.d. variables. Similarly, the von Neumann entropy $H_{\mathrm{vN}}$ describes many asymptotic properties of i.i.d. quantum states.

But you don’t hike for infinitely many days. The rate of black-bear attacks ebbs and flows. If you stumbled into grizzly land on Friday, you’ll probably avoid it, and have a lower grizzly-attack probability, on Saturday. Into how few bits can you compress a set of nonasymptotic, non-i.i.d. variables?

We answer such questions in terms of $\varepsilon$-smooth $\alpha$-Rényi entropies, the sandwiched Rényi relative entropy, the hypothesis-testing entropy, and related beasts. These beasts form a zoo diagrammed by conference participant Philippe Faist. I wish I had his diagram on a placemat.

“Beyond i.i.d.” participants define these entropies, generalize the entropies, probe the entropies’ properties, and apply the entropies to physics. Want to quantify the efficiency with which you can perform an information-processing task or a thermodynamic task? An entropy might hold the key.

Many highlights distinguished the conference; I’ll mention a handful. If the jargon upsets your stomach, skip three paragraphs to Thermodynamic Thursday.

Aram Harrow introduced a resource theory that resembles entanglement theory but whose agents pay to communicate classically. Why, I interrupted him, define such a theory? The backstory involves a wager against quantum-information pioneer Charlie Bennett (more precisely, against an opinion of Bennett’s). For details, and for a quantum version of The Princess and the Pea, watch Aram’s talk.

Graeme Smith and colleagues “remove[d] the . . . creativity” from proofs that certain entropic quantities satisfy subadditivity. Subadditivity is a property that facilitates proofs and that offers physical insights into applications. Graeme & co. designed an algorithm for checking whether entropic quantity $Q$ satisfies subadditivity; a numerical sketch of the property itself appears below. Just add water; no innovation required. How appropriate, conference co-organizer Mark Wilde observed. BIRS has the slogan “Inspiring creativity.”

Patrick Hayden applied one-shot entropies to AdS/CFT and emergent spacetime, enthused about elsewhere on this blog. Debbie Leung discussed approximations to Haar-random unitaries. Gilad Gour compared resource theories.

Conference participants graciously tolerated my talk about thermodynamic resource theories. I closed my eyes to symbolize the ignorance quantified by entropy. Not really; the photo didn’t turn out as well as hoped, despite the photographer’s goodwill. But I could have closed my eyes to symbolize entropic ignorance.

Thermodynamics and resource theories dominated Thursday. Thermodynamics is the physics of heat, work, entropy, and stasis. Resource theories are simple models for transformations, like from a charged battery and a Tesla car at the bottom of a hill to an empty battery and a Tesla atop a hill.

[Figure: My advisor’s Tesla. No wonder I study thermodynamic resource theories.]

Philippe Faist, diagrammer of the Entropy Zoo, compared two models for thermodynamic operations. I introduced a generalization of resource theories for thermodynamics.
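As promised above, a minimal numerical illustration of subadditivity (my own toy check, not Graeme & co.'s algorithm): for any bipartite state, the von Neumann entropy satisfies $H_{\mathrm{vN}}(AB) \le H_{\mathrm{vN}}(A) + H_{\mathrm{vN}}(B)$. The sketch below verifies this for a random two-qubit mixed state.

```python
import numpy as np

rng = np.random.default_rng(0)

def von_neumann_entropy(rho):
    """Von Neumann entropy in bits: -Tr(rho log2 rho)."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]  # discard numerical zeros
    return float(-np.sum(eigs * np.log2(eigs)))

def random_density_matrix(dim):
    """Random mixed state rho = G G† / Tr(G G†) (Hilbert-Schmidt ensemble)."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def reduced_state(rho_ab, keep):
    """Partial trace of a two-qubit state: keep=0 returns rho_A, keep=1 returns rho_B."""
    t = rho_ab.reshape(2, 2, 2, 2)  # indices (a, b, a', b')
    return np.trace(t, axis1=1, axis2=3) if keep == 0 else np.trace(t, axis1=0, axis2=2)

rho_ab = random_density_matrix(4)
h_ab = von_neumann_entropy(rho_ab)
h_a = von_neumann_entropy(reduced_state(rho_ab, 0))
h_b = von_neumann_entropy(reduced_state(rho_ab, 1))

print(f"H(AB) = {h_ab:.4f} bits, H(A) + H(B) = {h_a + h_b:.4f} bits")
assert h_ab <= h_a + h_b + 1e-9  # subadditivity holds
```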
Last year, Joe Renes of ETH and I broadened thermo resource theories to model exchanges of not only heat, but also particles, angular momentum, and other quantities. We calculated work in terms of the hypothesis-testing entropy. Though our generalization won’t surprise Quantum Frontiers diehards, the magic tricks in my presentation might.

At twilight on Thermodynamic Thursday, I meandered down the mountain from the conference center. Entropies hummed in my mind like the mosquitoes I slapped from my calves. Rising from scratching a bite, I confronted the Banff Cemetery. Half-wild greenery framed the headstones that bordered the gravel path I was following. Thermodynamicists have associated entropy with the passage of time, with deterioration, with a fate we can’t escape. I seem unable to escape from brushing past cemeteries at entropy conferences.

Not that I mind, I thought while scratching the bite in Pasadena. At least I escaped attacks by Banff’s bears.

With thanks to the conference organizers and to BIRS for the opportunity to participate in “Beyond i.i.d. 2015.”

# Ant-Man and the Quantum Realm

It was the first week of August last summer and I was at LAX for a trip to North Carolina as a guest speaker at Project Scientist’s STEM summer camp for young women. I had booked an early-morning flight and had arrived at my gate with time to spare, so I decided to get some breakfast. I walked by a smart-looking salad bar and thought: Today is the day. Moving past the salad bar, I ordered a juicy cheeseburger with fries at the adjacent McDonald’s. When I was growing up in Greece, eating McDonald’s was a rare treat; so were playing video games with my brothers and reading comic books late at night. Yet, through a weird twist of fate, it was these last two guilty pleasures that were to become my Hadouken!, my Hulk, Smash!, my secret weapons of choice for breaking down the barriers between the world of quantum mechanics and the everyday reality of our super-normal, super-predictable lives.

I finished my burger, stuffing the last few delicious fries in my mouth, when my phone buzzed – I had mail from The Science and Entertainment Exchange, a non-profit organization funded by the National Academy of Sciences, whose purpose is to bring leading scientists in contact with Hollywood in order to elevate the level of science in the movies. I was to report to Atlanta, GA for a movie consult on a new superhero movie: Ant-Man. As I read halfway through the email, I grumbled to myself: Why can’t I be the guy who works on Thor? Who is this Ant-Man anyways? But, in typical Hollywood fashion, the email had a happy ending: “Paul Rudd is playing Ant-Man. He may be at the meeting, but we cannot promise anything.” Marvel would cover my expenses, so I sent in my reluctant reply. It went something like this:

Dear Marvel Secret-ary Agent, Hell yeah.

The meeting was in three days’ time. I would finish my visit to Queens University in Charlotte, NC and take the next flight out to Atlanta. But first, some fun and games were in order. As part of my visit to Project Scientist’s camp, I was invited to teach quantum mechanics to a group of forty young women, ages 11–14, all of whom were interested in science, engineering and mathematics and many of whom did not have the financial means to pursue these interests outside of the classroom. So, I went to Queens University with several copies of MinecraftEDU, the educational component of one of the most popular video games of all time: Minecraft.
As I described in “Can a game teach kids quantum mechanics”, I spent the summer of 2013 designing qCraft, a modification (mod) to Minecraft that allows players to craft blocks imbued with quantum superpowers such as quantum superposition and quantum entanglement. The mod, developed in collaboration with Google and TeacherGaming, became really popular, amassing millions of downloads around the world. But it is one thing to look at statistics as a measure of success and another to look into the eyes of young women who have lost themselves in a game (qCraft is free to download and comes with an accompanying curriculum) designed to teach them concepts so heady that they inspired Richard Feynman to quip: If you think you understand quantum theory, you don’t.

My visit to Charlotte was so wonderful that I nearly decided to cancel my trip to Atlanta in order to stay with the girls and their mentors until the end of the week. But Mr. Rudd deserved the very best science could offer in making-quantum-stuff-up, so I boarded my flight and resolved to bring Project Scientist to Caltech the next summer. On my way to Atlanta, I used the in-flight WiFi to do some research on Ant-Man. He was part of the original Avengers, a founding member, in fact (nice!) His name was Dr. Hank Pym. He had developed a particle, aptly named after himself, which allowed him to shrink the space between atoms (well now…) He embedded vials of that particle in a suit that allowed him to shrink to the size of an ant (of course, that makes sense.) In short, he was a mad, mad scientist. And I was called in to help his successor, Scott Lang (Paul Rudd’s character), navigate his way through quantum land. Holy guacamole, Ant-Man! How does one shrink the space between atoms?

As James Kakalios, author of The Physics of Superheroes, puts it in a recent article on Nate Silver’s FiveThirtyEight:

We’re made of atoms, and the neighboring atoms are all touching each other. One method of changing your size that’s out: Just squeeze the atoms closer together. So the other option: What determines the size of atoms anyway? We can calculate this with quantum mechanics, and it turns out to be the ratio of fundamental constants: Planck’s constant and the mass of an electron and the charge of the electron and this and that. The thing that all these constants have in common is that they’re constant. They don’t change.

Wonderful. Just peachy. How am I supposed to come up with a way that will allow Ant-Man to shrink to the size of an ant, if one of the top experts in movie science magic thinks that our best bet is to somehow change fundamental constants of nature?

The shrinking

Naturally, I could not, umm, find the time last summer to read last week’s article during my flight (time travel issues), so like any young Greek of my generation who still hopes that our national debt will just go poof, I attacked the problem of shrinking someone’s volume without shrinking their mass con pasión. The answer was far from obvious… but it was possible. If one could convert electrons into muons, the atomic radius would shrink 200 times, shrinking a human to the size of an ant without changing any of the chemical properties of the atoms (muons have the same charge as electrons, but are about 200 times heavier). The problem then was the lifetime of the muonic atoms. Muons decay into electrons in about 2 millionths of a second, on average. That is indeed a problem. Could we somehow extend the half-life of a muon about a million times?
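Before reading on, a quick back-of-the-envelope check of both numbers in this section, using standard textbook values (my own arithmetic, not part of the original post). The Bohr radius scales inversely with the mass of the orbiting particle,

$$a_0 = \frac{\hbar^2}{m e^2} \quad \Rightarrow \quad \frac{a_\mu}{a_e} = \frac{m_e}{m_\mu} \approx \frac{1}{207},$$

so swapping electrons for muons shrinks atoms by roughly the quoted factor of 200. And a muon of energy $E$ lives longer in the lab frame by the Lorentz factor $\gamma = E/(m_\mu c^2)$: with $m_\mu c^2 \approx 105.7$ MeV, $\tau_\mu \approx 2.2\ \mu\mathrm{s}$, and $E \approx 20$ TeV,

$$\gamma \approx \frac{20\ \mathrm{TeV}}{105.7\ \mathrm{MeV}} \approx 1.9 \times 10^5, \qquad \tau_{\mathrm{lab}} = \gamma\,\tau_\mu \approx 0.4\ \mathrm{s},$$

a dilation of order $10^5$, which sets the seconds-long timescale invoked in the next paragraph.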
Yes, if the muon has relativistic mass close to 20 TeV (near the current energy of the Large Hadron Collider in Geneva), the effect of Einstein’s relativistic time-dilation (which is how actual high-energy muons from cosmic radiation have time to reach our detectors before decaying) would allow our hero to shrink for a few seconds at a time with high probability. To shrink beyond that, or for longer periods of time, would require knowledge of physics beyond the standard model. Which is precisely what the Mu2e experiment at Fermilab is looking at right now. It’s like Christmas for Ant-Man fans! So the famed Pym particle is a neutral particle that adds mass to an electron, converting it to a high-energy muon… When did I become a particle physicist? Oh well, fake it until you make it. Oh, hey, they are bringing pretzels! I love these little pretzel bites! Enter Pinewood Studios The flight was longer than I expected, which gave me time to think. A limo was waiting for me at the airport; I was to be taken directly to Pinewood Studios, luggage in hand and all. Once I was at my destination, I was escorted to the 3rd floor of a nondescript building, accessible only through special authorization (nice touch, Marvel). I was shown to what seemed like the main conference room, an open area with a large conference table. I expected that I would have to wait an hour before the assistant (to the) general manager showed up, so I started fiddling with my phone, making myself look busy and important. The next time I looked up, Paul Rudd was tapping my shoulder, dressed in sweats after what seemed like a training session for The 300. I am not sure what happened next, but Paul and I were in deep conversation about… qCraft? Someone must have told him that I was involved with designing the quantum mod for Minecraft and suddenly our roles were reversed. His son was a huge Minecraft fan and I was the celebrity in this boy’s eyes, and by parental transitivity, an associative, but highly non-commutative group action, in his dad’s eyes. I promised Paul that I would teach him how to install mods in Minecraft so his kids could enjoy qCraft and teach him about quantum entanglement when I wasn’t around. To my delight, I found myself listening to Mr. Rudd talk about his son’s sense of humor and his daughter’s intelligence with such pride, that I forgot for a moment the reason I was there; instead, it felt like I was catching up with an old friend and all we could talk about was our kids (I don’t have any, so I mostly listened). The Meeting Within five minutes, the director (Peyton Reed), the writers, producers, VFX specialists, computer playback experts (I became friends with their supervisor, Mr. Matthew Morrissey, who went to great lengths to represent the atoms on-screen as clouds of probability, in the s, p, d, f orbital configurations you see flashing in quantum superposition behind Hank Pym at his lab) and everyone else with an interest in quantum mechanics was in the room. I sat at the head of the long table with Paul next to me. He asked most of the questions along with the director, but at the time I didn’t know Paul was involved with writing the script. We discussed a lot of things, but what got everyone excited was the idea that the laws of physics as we know them may break down as we delve deeper and deeper into the quantum realm. You see, all of the other superheroes, no matter how strong and super, had powers that conformed to the laws of physics (stretching them from time to time, but never breaking them). 
But if someone could go to a place where the laws of physics as we know them were not yet formed, at a place where the arrow of time was broken and the fabric of space was not yet woven, the powers of such a master of the quantum realm would only be constrained by their ability to come back to the same (or similar) reality from which they departed. All the superheroes of Marvel and DC Comics combined would stand no chance against Ant-Man with a malfunctioning regulator… The Quantum Realm The birth of the term itself is an interesting story. Brad Winderbaum, co-producer for the movie, emailed me a couple of weeks after the meeting with the following request: Could I come up with a term describing Ant-Man going to the “microverse”? The term “microverse” carried legal baggage, so something fresh was needed. I offered “going nano”, “going quantum”, “going atomic”, or… “quantum realm”. I didn’t know how small the “microverse” scale was supposed to be in a writer’s mind (in a physicist’s mind it is exactly $10^{-6}$ meters – one thousandth of a millimeter), hence the many options. The reply was quick: Thanks Spiros! Quantum Realm is a pretty great term. Et voilà. Ant-Man was going to the quantum realm, a place where time and space dissolve and the only thing strong enough to anchor Scott Lang to reality is… You have to watch the movie to see what that is – it was an off-the-cuff remark I made at the meeting… At the end of the meeting, Paul, Peyton and the others thanked me and asked me if I could fly to San Francisco the next week for the first week of shooting. There, I would have met Michael Douglas and Evangeline Lilly, but I declined the generous offer. It was the week of Innoworks Academy at Caltech, an award-winning summer camp for middle school kids on the free/reduced lunch program. As the camp’s adviser, I resolved to never miss a camp as long as I was in the country and San Francisco is in the same state as Caltech. My mom would be proud of my decision (I hope), though an autograph from Mr. Douglas would have fetched me a really good hug. The Movie I just watched the movie yesterday (it is actually good!) and the feeling was surreal. Because I had no idea what to expect. Because I never thought that the people in that room would take what I was saying seriously enough to use it in the movie. I never got a copy of the script and during the official premiere three weeks ago, I was delivering a lecture on the future of quantum computing in a monastery in Madrid, Spain. When I found out that Kevin Feige, president of Marvel Studios, said this at a recent interview, my heart skipped several beats: But the truth is, there is so much in Ant-Man: introducing a new hero, introducing a very important part of technology in the Marvel universe, the Pym particles. Ant-Man getting on the Avengers’ radar in this film and even – this is the weirdest part, you shouldn’t really talk about it because it won’t be apparent for years – but the whole notion of the quantum realm and the whole notion of going to places that are so out there, they are almost mind-bendingly hard to fathom. It all plays into Phase Three. The third phase of the Marvel Cinematic Universe is about to go quantum and all I can think of is: I better start reading comic books again. But first, I have to teach a bunch of 11-14 year-old girls quantum physics through Minecraft. It is, after all, the final week of Project Scientist here at Caltech this summer and the theme is coding. 
With quantum computers on the near horizon, these young women need to learn how to program Asimov's laws of quantum robotics into our benevolent quantum A.I. overlords. These young women are humanity's new hope…

# Holography and the MERA

The AdS/MERA correspondence has been making the rounds of the blogosphere with nice posts by Scott Aaronson and Sean Carroll, so let's take a look at the topic here at Quantum Frontiers. The question of how to formulate a quantum theory of gravity is a long-standing open problem in theoretical physics. Somewhat recently, an idea that has gained a lot of traction (and that Spiros has blogged about before) is emergence. This is the idea that space and time may emerge from some more fine-grained quantum objects and their interactions. If we could understand how classical spacetime emerges from an underlying quantum system, then it's not too much of a stretch to hope that this understanding would give us insight into the full quantum nature of spacetime. One type of emergence is exhibited in holography, which is the idea that certain (D+1)-dimensional systems with gravity are exactly equivalent to D-dimensional quantum theories without gravity. (Note that we're calling time a dimension here. For example, you would say that on a day-to-day basis we experience D = 4 dimensions.) In this case, that extra +1 dimension and the concomitant gravitational dynamics are emergent phenomena. A nice aspect of holography is that it is explicitly realized by the AdS/CFT correspondence. This correspondence proposes that a particular class of spacetimes—ones that asymptotically look like anti-de Sitter space, or AdS—are equivalent to states of a particular type of quantum system—a conformal field theory, or CFT. A convenient visualization is to draw the AdS spacetime as a cylinder, where time marches forward as you move up the cylinder and different slices of the cylinder correspond to snapshots of space at different instants of time. Conveniently, in this picture you can think of the corresponding CFT as living on the boundary of the cylinder, which, you should note, has one less dimension than the "bulk" inside the cylinder. Even within this nice picture of holography that we get from the AdS/CFT correspondence, there is a question of how exactly CFT (boundary) quantities map onto quantities in the AdS bulk. This is where a certain tool from quantum information theory called tensor networks has recently shown a lot of promise. A tensor network is a way to efficiently represent certain states of a quantum system. Moreover, tensor networks have nice graphical representations which look something like this: Beni discussed one type of tensor network in his post on holographic codes. In this post, let's discuss the tensor network shown above, which is known as the Multiscale Entanglement Renormalization Ansatz, or MERA. The MERA was initially developed by Guifre Vidal and Glen Evenbly as an efficient approximation to the ground state of a CFT. Roughly speaking, in the picture of a MERA above, one starts with a simple state at the centre, and as you move outward through the network, the MERA tells you how to build up a CFT state which lives on the legs at the boundary. The MERA caught the eye of Brian Swingle, who noticed that it looks an awful lot like a discretization of a slice of the AdS cylinder shown above.
As such, it wasn't a preposterously big leap to suggest a possible "AdS/MERA correspondence." Namely, perhaps it's more than a simple coincidence that a MERA both encodes a CFT state and resembles a slice of AdS. Perhaps the MERA gives us the tools that are required to construct a map between the boundary and the bulk! So, how seriously should one take the possibility of an AdS/MERA correspondence? That's the question that my colleagues and I addressed in a recent paper. Essentially, there are several properties that a consistent holographic theory should satisfy in both the bulk and the boundary. We asked whether these properties are still simultaneously satisfied in a correspondence where the bulk and boundary are related by a MERA. What we found was that you invariably run into inconsistencies between bulk and boundary physics, at least in the simplest construals of what an AdS/MERA correspondence might be. This doesn't mean that there is no hope for an AdS/MERA correspondence. Rather, it says that the simplest approach will not work. For a good correspondence, you would need to augment the MERA with some additional structure, or perhaps consider different tensor networks altogether. For instance, the holographic code features a tensor network which hints at a possible bulk/boundary correspondence, and the consistency conditions that we proposed are a good list of checks for Beni and company as they work out the extent to which the code can describe holographic CFTs. Indeed, a good way to summarize how our work fits into the picture of quantum gravity alongside holography and tensor networks is by saying that it's nice to have good signposts on the road when you don't have a map.

# Mingling stat mech with quantum info in Maryland

I felt like a yoyo. I was standing in a hallway at the University of Maryland. On one side stood quantum-information theorists. On the other side stood statistical-mechanics scientists.* The groups eyed each other, like Jets and Sharks in West Side Story, except without fighting or dancing. This March, the groups were generous enough to host me for a visit. I parked first at QuICS, the Joint Center for Quantum Information and Computer Science. Established in October 2014, QuICS had moved into renovated offices the previous month. QuICSland boasts bright colors, sprawling armchairs, and the scent of novelty. So recently had QuICS arrived that the restroom had not acquired toilet paper (as I learned later than I'd have preferred). Photo credit: QuICS From QuICS, I yoyo-ed to the chemistry building, where Chris Jarzynski's group studies fluctuation relations. Fluctuation relations, introduced elsewhere on this blog, describe out-of-equilibrium systems. A system is out of equilibrium if large-scale properties of it change. Many systems operate out of equilibrium—boiling soup, combustion engines, hurricanes, and living creatures, for instance. Physicists want to describe nonequilibrium processes but have trouble: Living creatures are complicated. Hence the buzz about fluctuation relations. My first Friday in Maryland, I presented a seminar about quantum voting for QuICS. The next Tuesday, I was to present about one-shot information theory for stat-mech enthusiasts. Each week, the stat-mech crowd invites its speaker to lunch. Chris Jarzynski recommended I invite QuICS. Hence the Jets-and-Sharks tableau. "Have you interacted before?" I asked the hallway. "No," said a voice.
QuICS hadn’t existed till last fall, and some QuICSers hadn’t had offices till the previous month.** Silence. “We’re QuICS,” volunteered Stephen Jordan, a quantum-computation theorist, “the Joint Center for Quantum Information and Computer Science.” So began the mingling. It continued at lunch, which we shared at three circular tables we’d dragged into a chain. The mingling continued during the seminar, as QuICSers sat with chemists, materials scientists, and control theorists. The mingling continued the next day, when QuICSer Alexey Gorshkov joined my discussion with the Jarzynski group. Back and forth we yoyo-ed, between buildings and topics. “Mingled,” said Yigit Subasi. Yigit, a postdoc of Chris’s, specialized in quantum physics as a PhD student. I’d asked how he thinks about quantum fluctuation relations. Since Chris and colleagues ignited fluctuation-relation research, theorems have proliferated like vines in a jungle. Everyone and his aunty seems to have invented a fluctuation theorem. I canvassed Marylanders for bushwhacking tips. Imagine, said Yigit, a system whose state you know. Imagine a gas, whose temperature you’ve measured, at equilibrium in a box. Or imagine a trapped ion. Begin with a state about which you have information. Imagine performing work on the system “violently.” Compress the gas quickly, so the particles roil. Shine light on the ion. The system will leave equilibrium. “The information,” said Yigit, “gets mingled.” Imagine halting the compression. Imagine switching off the light. Combine your information about the initial state with assumptions and physical laws.*** Manipulate equations in the right way, and the information might “unmingle.” You might capture properties of the violence in a fluctuation relation. With Zhiyue Lu and Andrew Maven Smith of Chris Jarzynski’s group (left) and with QuICSers (right) I’m grateful to have exchanged information in Maryland, to have yoyo-ed between groups. We have work to perform together. I have transformations to undergo.**** Let the unmingling begin. With gratitude to Alexey Gorshkov and QuICS, and to Chris Jarzynski and the University of Maryland Department of Chemistry, for their hospitality, conversation, and camaraderie. *Statistical mechanics is the study of systems that contain vast numbers of particles, like the air we breathe and white dwarf stars. I harp on about statistical mechanics often. **Before QuICS’s birth, a future QuICSer had collaborated with a postdoc of Chris’s on combining quantum information with fluctuation relations. ***Yes, physical laws are assumptions. But they’re glorified assumptions. ****Hopefully nonviolent transformations. # Generally speaking My high-school calculus teacher had a mustache like a walrus’s and shoulders like a rower’s. At 8:05 AM, he would demand my class’s questions about our homework. Students would yawn, and someone’s hand would drift into the air. “I have a general question,” the hand’s owner would begin. “Only private questions from you,” my teacher would snap. “You’ll be a general someday, but you’re not a colonel, or even a captain, yet.” Then his eyes would twinkle; his voice would soften; and, after the student asked the question, his answer would epitomize why I’ve chosen a life in which I use calculus more often than laundry detergent. Many times though I witnessed the “general” trap, I fell into it once. Little wonder: I relish generalization as other people relish hiking or painting or Michelin-worthy relish. 
When inferring general principles from examples, I abstract away details as though they're tomato stains. My veneration of generalization led me to quantum information (QI) theory. One abstract theory can model many physical systems: electrons, superconductors, ion traps, etc. Little wonder that generalizing a QI model swallowed my summer. QI has shed light on statistical mechanics and thermodynamics, which describe energy, information, and efficiency. Models called resource theories describe small systems' energies, information, and efficiencies. Resource theories help us calculate a quantum system's value—what you can and can't create from a quantum system—if you can manipulate systems in only certain ways. Suppose you can perform only operations that preserve energy. According to the Second Law of Thermodynamics, systems evolve toward equilibrium. Equilibrium amounts roughly to stasis: Averages of properties like energy remain constant. Out-of-equilibrium systems have value because you can suck energy from them to power laundry machines. How much energy can you draw, on average, from a system in a constant-temperature environment? Technically: How much "work" can you draw? We denote this average work by $\langle W\rangle$. According to thermodynamics, $\langle W\rangle$ equals the change $\Delta F$ in the system's Helmholtz free energy. The Helmholtz free energy is a thermodynamic property similar to the energy stored in a coiled spring. One reason to study thermodynamics? Suppose you want to calculate more than the average extractable work. How much work will you probably extract during some particular trial? Though statistical physics offers no answer, resource theories do. One answer derived from resource theories resembles $\Delta F$ mathematically but involves one-shot information theory, which I've discussed elsewhere. If you average this one-shot extractable work, you recover $\langle W\rangle = \Delta F$. "Helmholtz" resource theories recapitulate statistical-physics results while offering new insights about single trials. Helmholtz resource theories sit atop a silver-tasseled pillow in my heart. Why not, I thought, spread the joy to the rest of statistical physics? Why not generalize thermodynamic resource theories? The average work $\langle W\rangle$ extractable equals $\Delta F$ if heat can leak into your system. If heat and particles can leak, $\langle W\rangle$ equals the change in your system's grand potential. The grand potential, like the Helmholtz free energy, is a free energy that resembles the energy in a coiled spring. The grand potential characterizes Bose-Einstein condensates, low-energy quantum systems that may have applications to metrology and quantum computation. If your system responds to a magnetic field, or has mass and occupies a gravitational field, or has other properties, $\langle W\rangle$ equals the change in another free energy. A collaborator and I designed resource theories that describe heat-and-particle exchanges. In our paper "Beyond heat baths: Generalized resource theories for small-scale thermodynamics," we propose that different thermodynamic resource theories correspond to different interactions, environments, and free energies. I detailed the proposal in "Beyond heat baths II: Framework for generalized thermodynamic resource theories." "II" generalizes enough to satisfy my craving for patterns and universals. "II" generalizes enough to merit a hand-slap of a pun from my calculus teacher. We can test abstract theories only by applying them to specific systems.
If thermodynamic resource theories describe situations as diverse as heat-and-particle exchanges, magnetic fields, and polymers, some specific system should shed light on resource theories’ accuracy. If you find such a system, let me know. Much as generalization pleases aesthetically, the detergent is in the details.
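For the record, the free energies invoked in this post have compact textbook forms (my summary of standard definitions, not a quotation from the papers):
$$\langle W\rangle=\Delta F,\qquad F=\langle E\rangle-TS \qquad\text{(heat exchange only)},$$
$$\langle W\rangle=\Delta\Phi,\qquad \Phi=\langle E\rangle-TS-\mu\langle N\rangle \qquad\text{(heat and particle exchange)},$$
where $T$ is the bath temperature, $S$ the entropy, $\mu$ the chemical potential, and $\langle N\rangle$ the average particle number. Each newly allowed exchange appends a conjugate-variable term, and that pattern is what the generalized resource theories systematize.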
ON THE MULTIPLE VALUES AND UNIQUENESS OF MEROMORPHIC FUNCTIONS SHARING SMALL FUNCTIONS AS TARGETS

Authors: Cao, Ting-Bin; Yi, Hong-Xun

Abstract: The purpose of this article is to deal with the multiple values and uniqueness of meromorphic functions sharing small functions in the whole complex plane. We obtain a more general theorem which strongly improves and extends the results of R. Nevanlinna, Li-Qiao, Yao, Yi, and Thai-Tan.

Keywords: meromorphic function; uniqueness theorem; multiple values; small function
Language: English

Cited by:
1. Two meromorphic functions sharing sets concerning small functions, Bulletin of the Korean Mathematical Society, 2009, 46, 6, 1189-1200
2. A note on Nevanlinna's five value theorem, Bulletin of the Korean Mathematical Society, 2015, 52, 2, 345-350
3. Two meromorphic functions share some pairs of small functions with truncated multiplicities, Acta Mathematica Scientia, 2014, 34, 6, 1854
4. On the multiple values and uniqueness of meromorphic functions on annuli, Computers & Mathematics with Applications, 2009, 58, 7, 1457

References:
1. R. Nevanlinna, Einige Eindeutigkeitssätze in der Theorie der meromorphen Funktionen, Acta Math. 48 (1926), no. 3-4, 367-391
2. Y. H. Li and J. Y. Qiao, The uniqueness of meromorphic functions concerning small functions, Sci. China Ser. A 43 (2000), no. 6, 581-590
3. D. D. Thai and T. V. Tan, Meromorphic functions sharing small functions as targets, Internat. J. Math. 16 (2005), no. 4, 437-451
4. K. Yamanoi, The second main theorem for small functions and related problems, Acta Math. 192 (2004), no. 2, 225-294
5. W. Yao, Two meromorphic functions sharing five small functions in the sense of $\overline{E}_{k)}(\beta,f)=\overline{E}_{k)}(\beta,g)$, Nagoya Math. J. 167 (2002), 35-54
6. H. X. Yi, On one problem of uniqueness of meromorphic functions concerning small functions, Proc. Amer. Math. Soc. 130 (2002), no. 6, 1689-1697
7. H. X. Yi, Multiple values and uniqueness of meromorphic functions, Chinese Ann. Math. Ser. A 10 (1989), no. 4, 421-427
8. H. X. Yi and C. C. Yang, Uniqueness Theory of Meromorphic Functions, Science Press, Beijing, 1995
# Why are complex numbers necessary to prove the Prime Number Theorem?

The standard proof of the Prime Number Theorem requires taking into consideration that there are no zeroes of the Riemann Zeta function in which the real part equals one. But consider the following argument: The probability that a number less than X is prime, $\pi(X)/X$, is approximately $\prod_{p<X} (1-1/p)$ (this is for the same reason that the Sieve of Eratosthenes works), which is approximately $1/\sum_{n=1}^X 1/n$, which is approximately $1/\log X$. Hence, $\pi(X)$ is approximately $X/\log X$. This doesn't use complex numbers, but it gives a good reason to believe the Prime Number Theorem. Why do complex numbers (which seem to come from nowhere) make this argument rigorous?

- some history: math.columbia.edu/~goldfeld/ErdosSelbergDispute.pdf – roy smith Jan 2 '11 at 18:02

Here is a heuristic which I find useful for ruling out easy proofs of the PNT. Consider a set of positive integers $P$ with the following properties: Between $2^{2k-1}$ and $2^{2k}$, there are roughly $a \frac{2^{2k-1}}{(2k-1) \log 2}$ elements of $P$ and, between $2^{2k}$ and $2^{2k+1}$, there are roughly $b \frac{2^{2k}}{(2k) \log 2}$ elements of $P$. Now, if $P$ is the primes, then $a=b=1$. Suppose instead that $a=1+c$ and $b=1-c$, for some small constant $c$. Then $\prod_{p \in P} (1-p^{-s})^{-1}$ has a simple pole at $s=1$, with residue $1$. The sum $\sum_{p \in P,\ p \leq N} 1/p$ grows like $\log \log N$. And, regarding your specific question, $\prod_{p \in P,\ p \leq N} (1-1/p) \approx 1/\log N$.[1] So these properties can't distinguish $P$ from the set of primes. However, the PNT does not hold for $P$. Let $\pi_P(N)$ be the number of elements of $P$ which are $\leq N$. Then, if $N=2^{2k}$, $$\frac{\pi_P(N)}{N/\log N} = \frac{2k}{2^{2k}} \left( a \frac{2^{2k-1}}{2k-1} + b \frac{2^{2k-2}}{2k-2} + a \frac{2^{2k-3}}{2k-3} + \cdots \right) \approx a \left( \frac{1}{2} + \frac{1}{8} + \cdots \right) + b \left( \frac{1}{4} + \frac{1}{16} + \cdots \right) = \frac{2}{3} a + \frac{1}{3} b.$$ Similarly, if $N=2^{2k+1}$, then $$\frac{\pi_P(N)}{N/\log N} \approx \frac{2}{3} b + \frac{1}{3} a.$$ So $\pi_P(N)/(N/\log N)$ does not approach a well-defined limit. Any proof of the PNT must use facts about the primes which distinguish them from $P$.

[1] There is also a second issue here. It turns out that $$\prod_{p\ \mathrm{prime},\ p \leq N} \left(1-\frac{1}{p} \right) \sim \frac{e^{- \gamma}}{\sum_{n \leq N} 1/n},\ \text{not}\ \sim \frac{1}{\sum_{n \leq N} 1/n}.$$ So you would have to explain why that $e^{- \gamma}$ disappears.

- An earlier version of this post contained the claim that $\sum_{p \in P} (\log p)^k/p$ diverged at the same rate for $P$ the primes or my funny set. This was wrong. – David Speyer Jan 3 '11 at 3:28

They're not! This is the theorem of Selberg and Erdős! Look up the "elementary proof of the Prime Number Theorem". This link is clicky and wonderful. (It is also majestic, so please do click it). -

If you try to make the argument you describe rigorous in a naive way, then the error terms rapidly become too big for it to actually work, and this is why nobody uses the Legendre sieve to count primes. This is a well-known problem in sieve theory; see, for example, this blog post by Terence Tao. There are elementary proofs, like the one found by Selberg and Erdős, but they are much harder than the complex analytic proof; don't be fooled into thinking that "elementary" means "easier."
As far as the "come from nowhereness" of it all, perhaps you would like to read a blog post I wrote (and its companion) about how to motivate the definition of the zeta function and, in particular, how to motivate studying its asymptotic behavior near $s = 1$. It is a nontrivial fact that one can use complex analysis to study this behavior, and complex analysis is always extremely powerful when it applies. - Or worthwhile for that matter. It was famously noted that the elementary proof of the PNT was a huge disappointment in terms of how much it advanced mathematical techniques (although Selberg's sieve techniques are still extremely useful, these were developed before the elementary proof of the PNT). The questioner asked whether or not it was possible (to give a non-complex-analytic proof), though, and it certainly is! – deeeez Jan 2 '11 at 17:57 @deeeez: Actually, he asked "why", not "if". – simplequestions Jan 2 '11 at 18:37 @simplequestions: Then his question contained a false premise. – deeeez Jan 2 '11 at 18:40 nice and elegant approach... !!!
# Chapter 7 - Section 7.6 - Proportions and Problem Solving with Rational Equations - Vocabulary and Readiness Check: 5

Sum of the number and 5: $z+5$
Reciprocal of the sum of the number and 5: $\frac{1}{z+5}$

#### Work Step by Step
Part 1: Since the number is z, the sum of the number and 5 means that the required expression is $z+5$.
Part 2: The reciprocal of the sum of the number and 5 means that we need to take the reciprocal of the expression in part 1. Therefore, the required expression is $\frac{1}{z+5}$.
# Properties

Label: 936.1
Level: 936
Weight: 1
Dimension: 26
Nonzero newspaces: 3
Newform subspaces: 7
Sturm bound: 48384
Trace bound: 1

## Defining parameters

Level: $$N$$ = $$936 = 2^{3} \cdot 3^{2} \cdot 13$$
Weight: $$k$$ = $$1$$
Nonzero newspaces: $$3$$
Newform subspaces: $$7$$
Sturm bound: $$48384$$
Trace bound: $$1$$

## Dimensions

The following table gives the dimensions of various subspaces of $$M_{1}(\Gamma_1(936))$$.

Modular forms: Total 1320, New 224, Old 1096
Cusp forms: Total 168, New 26, Old 142
Eisenstein series: Total 1152, New 198, Old 954

The following table gives the dimensions of subspaces with specified projective image type.

$$D_n$$: 18, $$A_4$$: 0, $$S_4$$: 8, $$A_5$$: 0

## Trace form

$$26q - 4q^{4} + 2q^{5} - 2q^{7} - 4q^{9} + O(q^{10})$$

$$26q - 4q^{4} + 2q^{5} - 2q^{7} - 4q^{9} - 6q^{10} - 2q^{11} - 2q^{13} + 2q^{14} - 2q^{15} - 8q^{16} + 2q^{17} - 4q^{19} + 4q^{21} - 4q^{22} - 2q^{25} + 10q^{26} - 6q^{27} + 12q^{30} - 2q^{31} + 4q^{33} - 14q^{35} + 2q^{39} + 2q^{40} - 6q^{42} + 6q^{43} - 4q^{45} + 4q^{47} - 10q^{49} - 6q^{51} - 4q^{52} + 2q^{56} + 2q^{57} + 2q^{59} - 4q^{61} - 16q^{62} - 2q^{63} + 14q^{64} + 4q^{65} + 2q^{68} + 4q^{73} + 2q^{74} + 12q^{75} - 4q^{81} + 4q^{82} - 2q^{85} - 4q^{88} - 4q^{89} - 6q^{90} + 2q^{91} + 2q^{93} + 2q^{94} - 2q^{99} + O(q^{100})$$

## Decomposition of $$S_{1}^{\mathrm{new}}(\Gamma_1(936))$$

We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space $$S_k^{\mathrm{new}}(N, \chi)$$ we list the newforms together with their dimension.

Label / $$\chi$$ / Newforms / Dimension / $$\chi$$ degree
936.1.b $$\chi_{936}(701, \cdot)$$ None 0 1
936.1.e $$\chi_{936}(235, \cdot)$$ None 0 1
936.1.f $$\chi_{936}(521, \cdot)$$ None 0 1
936.1.i $$\chi_{936}(415, \cdot)$$ None 0 1
936.1.k $$\chi_{936}(703, \cdot)$$ None 0 1
936.1.l $$\chi_{936}(233, \cdot)$$ None 0 1
936.1.o $$\chi_{936}(883, \cdot)$$ 936.1.o.a 1, 936.1.o.b 1, 936.1.o.c 4 — 1
936.1.p $$\chi_{936}(53, \cdot)$$ None 0 1
936.1.u $$\chi_{936}(109, \cdot)$$ None 0 2
936.1.v $$\chi_{936}(73, \cdot)$$ None 0 2
936.1.y $$\chi_{936}(359, \cdot)$$ None 0 2
936.1.z $$\chi_{936}(395, \cdot)$$ None 0 2
936.1.bc $$\chi_{936}(127, \cdot)$$ None 0 2
936.1.bf $$\chi_{936}(737, \cdot)$$ None 0 2
936.1.bg $$\chi_{936}(451, \cdot)$$ None 0 2
936.1.bj $$\chi_{936}(413, \cdot)$$ None 0 2
936.1.bl $$\chi_{936}(257, \cdot)$$ None 0 2
936.1.bm $$\chi_{936}(295, \cdot)$$ None 0 2
936.1.bo $$\chi_{936}(355, \cdot)$$ None 0 2
936.1.bq $$\chi_{936}(365, \cdot)$$ None 0 2
936.1.bs $$\chi_{936}(259, \cdot)$$ 936.1.bs.a 6, 936.1.bs.b 6 — 2
936.1.bt $$\chi_{936}(29, \cdot)$$ None 0 2
936.1.bu $$\chi_{936}(367, \cdot)$$ None 0 2
936.1.bw $$\chi_{936}(545, \cdot)$$ None 0 2
936.1.bz $$\chi_{936}(79, \cdot)$$ None 0 2
936.1.cb $$\chi_{936}(329, \cdot)$$ None 0 2
936.1.cc $$\chi_{936}(653, \cdot)$$ None 0 2
936.1.cd $$\chi_{936}(43, \cdot)$$ None 0 2
936.1.cf $$\chi_{936}(211, \cdot)$$ None 0 2
936.1.ci $$\chi_{936}(173, \cdot)$$ None 0 2
936.1.ck $$\chi_{936}(113, \cdot)$$ None 0 2
936.1.cm $$\chi_{936}(103, \cdot)$$ None 0 2
936.1.cn $$\chi_{936}(209, \cdot)$$ None 0 2
936.1.cp $$\chi_{936}(439, \cdot)$$ None 0 2
936.1.cs $$\chi_{936}(101, \cdot)$$ None 0 2
936.1.cu $$\chi_{936}(547, \cdot)$$ None 0 2
936.1.cv $$\chi_{936}(77, \cdot)$$ None 0 2
936.1.cx $$\chi_{936}(139, \cdot)$$ None 0 2
936.1.cz $$\chi_{936}(511, \cdot)$$ None 0 2
936.1.dc $$\chi_{936}(185, \cdot)$$ None 0 2
936.1.dd $$\chi_{936}(269, \cdot)$$ None 0 2
936.1.de $$\chi_{936}(595, \cdot)$$ None 0 2
936.1.dh $$\chi_{936}(17, \cdot)$$ None 0 2
936.1.di $$\chi_{936}(55, \cdot)$$ None 0 2
936.1.dm $$\chi_{936}(265, \cdot)$$ 936.1.dm.a 4, 936.1.dm.b 4 — 4
936.1.dn $$\chi_{936}(229, \cdot)$$ None 0 4
936.1.do $$\chi_{936}(227, \cdot)$$ None 0 4
936.1.dp $$\chi_{936}(167, \cdot)$$ None 0 4
936.1.du $$\chi_{936}(323, \cdot)$$ None 0 4
936.1.dv $$\chi_{936}(71, \cdot)$$ None 0 4
936.1.dw $$\chi_{936}(119, \cdot)$$ None 0 4
936.1.dx $$\chi_{936}(11, \cdot)$$ None 0 4
936.1.ea $$\chi_{936}(97, \cdot)$$ None 0 4
936.1.eb $$\chi_{936}(301, \cdot)$$ None 0 4
936.1.eg $$\chi_{936}(145, \cdot)$$ None 0 4
936.1.eh $$\chi_{936}(37, \cdot)$$ None 0 4
936.1.ei $$\chi_{936}(85, \cdot)$$ None 0 4
936.1.ej $$\chi_{936}(409, \cdot)$$ None 0 4
936.1.eo $$\chi_{936}(83, \cdot)$$ None 0 4
936.1.ep $$\chi_{936}(47, \cdot)$$ None 0 4

## Decomposition of $$S_{1}^{\mathrm{old}}(\Gamma_1(936))$$ into lower level spaces

$$S_{1}^{\mathrm{old}}(\Gamma_1(936)) \cong S_{1}^{\mathrm{new}}(\Gamma_1(39))^{\oplus 8} \oplus S_{1}^{\mathrm{new}}(\Gamma_1(52))^{\oplus 6} \oplus S_{1}^{\mathrm{new}}(\Gamma_1(72))^{\oplus 2} \oplus S_{1}^{\mathrm{new}}(\Gamma_1(104))^{\oplus 3} \oplus S_{1}^{\mathrm{new}}(\Gamma_1(117))^{\oplus 4} \oplus S_{1}^{\mathrm{new}}(\Gamma_1(156))^{\oplus 4} \oplus S_{1}^{\mathrm{new}}(\Gamma_1(312))^{\oplus 2} \oplus S_{1}^{\mathrm{new}}(\Gamma_1(468))^{\oplus 2}$$
# Problem programming ATtiny85 "Invalid device signature."

I am trying to program an ATtiny85 using an Arduino. I used the hardware support file from "High-Low Tech" here http://hlt.media.mit.edu/?p=1695 With this schematic: The IDE keeps responding with:

avrdude.exe: please define PAGEL and BS2 signals in the configuration file for part ATtiny85
avrdude.exe: Yikes! Invalid device signature. Double check connections and try again, or use -F to override this check.

I also tried using avrdude from CMD: avrdude -P COM5 -b 19200 -c avrisp -p t85 -v -e -U flash:w:sketch.cpp.hex and it gave:

avrdude: please define PAGEL and BS2 signals in the configuration file for part ATtiny85
avrdude: AVR device initialized and ready to accept instructions
Reading | ################################################## | 100% 0.07s
avrdude: Device signature = 0xffffff
avrdude: Yikes! Invalid device signature. Double check connections and try again, or use -F to override this check.
avrdude done. Thank you.

I tried choosing the three clock settings 1 MHz/8 MHz/20 MHz (without adding any oscillators) and tried the three options with a 16 MHz crystal with two 22pF capacitors (I read somewhere online that this may work) and still nothing changed! I tried another ATtiny85 chip, another Arduino Uno, and tested the continuity of every single wire, but still I am getting the same error.

• Use the ArduinoISP sketch that came with the IDE (File=>Examples=>ArduinoISP). Then check the source code for the baudrate setting Serial.begin(19200);. Also check the voltage across pins 4 and 8 of the tiny with a DMM. – jippie Oct 6 '13 at 18:29
• @jippie Checked the voltage, it is 5V. Also the baudrate is set to 19200 – Loers Antario Oct 6 '13 at 18:36
• Is it a new or a used ATtiny? – jippie Oct 6 '13 at 18:38
• BTW: Don't worry about the "please define PAGEL and BS2 signals in the configuration file for part ATtiny85" part. That is pretty standard (mis)configuration, but it should work just fine. – jippie Oct 6 '13 at 18:41
• A new one, never used it – Loers Antario Oct 6 '13 at 18:41

I'm assuming you are using Arduino 1.0.4 or greater. You can ignore this error: "please define PAGEL and BS2 signals in the configuration file for part ATtiny85". But not this: "Yikes! Invalid device signature." "Device signature = 0xffffff" This usually happens when something isn't hooked up correctly.

1) Double and triple check your connections. Use your multimeter to do continuity tests to make sure none of your wires are bad. Put your probe directly on the chip's pins to make sure it's not a flaky connection to the breadboard. And, make sure you don't have the chip backwards! (ask me how I know)
2) Use a multimeter to make sure you actually have voltage at vcc and gnd on the tiny when hooked up to the Arduino programmer.
3) Add the status leds (with resistors) to your programmer circuit so you can get a little more feedback. pin 9 -> heartbeat, pin 8 -> error, pin 7 -> programming.
4) ATtinys from the factory are set to 1 MHz. You have to flash the fuses to change it. To do that, under Tools->Board you choose the device at the speed you want (e.g. ATtiny85 @ 8 MHz). Then you choose Tools->Burn Bootloader. It doesn't actually add a bootloader, but it does set the fuses. But don't worry about doing this until you can get programming to work. Just assume your ATtiny is 1 MHz.
5) If it still doesn't work, uninstall (delete?)
the MIT files, or create a separate Arduino installation, and use this instead: http://code.google.com/p/arduino-tiny/ I played with the MIT tiny stuff first and then I found arduino-tiny. I can't remember why, but I found it much better than the MIT version and it's been working for me ever since.
6) If it still doesn't work, I might try a different ATtiny chip in case you have a bad one.

• I've found arduino-tiny to be more complete than the MIT stuff. For instance, the latter omits the third PWM channel. – Ignacio Vazquez-Abrams Feb 13 '14 at 16:43

I just ran into the same problem today. I use a pretty typical LM7805 regulator with a +12V in to a regulator that provides +5V for the ATtiny85 and all terminations. I get the same errors when it tries to read/verify. I triple checked all 4 critical connections (SCK, MOSI, MISO and RESET). All were properly connected and terminated with 3.3K resistors to Vcc. I then tried 5.7K and finally 10K resistors. All with the same result. I did notice late that my +12V supply was very noisy (it was oscillating with a 2.25V pk-pk ripple). It seemed to be enough to make the ATtiny85 not very happy during programming. Make double sure your MCU is getting a stable Vcc (less than 50mV ripple); a 0.1uF to 1uF bypass cap between (and close to) the power and ground pins on the MCU will help prevent noisy supplies from having a big impact. Unless, that is, the ripple exceeds a few hundred mV. Then you're screwed like me until I replace the supply.
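A couple of concrete avrdude invocations tying the steps above together (assumptions: Arduino-as-ISP programmer on COM5 at 19200 baud, as in the question; the fuse values below are the commonly quoted ATtiny85 settings, so double-check them against the datasheet or a fuse calculator before writing anything):

avrdude -P COM5 -b 19200 -c avrisp -p t85 -v

A healthy setup should report Device signature = 0x1e930b (the ATtiny85 signature) rather than 0xffffff. Only once the signature reads correctly is it worth touching the fuses from step 4; the factory low fuse 0x62 selects the 8 MHz internal oscillator with the divide-by-8 bit set (hence 1 MHz), and 0xE2 clears that bit:

avrdude -P COM5 -b 19200 -c avrisp -p t85 -U lfuse:w:0xe2:m -U hfuse:w:0xdf:m

Resist the temptation to pass -F to force past a 0xffffff signature; it nearly always just papers over a wiring problem.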
## Kim purchased n items from a catalog for $8 each.

This topic has 2 expert replies and 0 member replies.

VJesus12 (Wed Nov 08, 2017 7:57 am):
Kim purchased n items from a catalog for $8 each. Postage and handling charges consisted of $3 for the first item and $1 for each additional item. Which of the following gives the total dollar amount for Kim's purchase, including postage and handling, in terms of n ?
(A) 8n + 2
(B) 8n + 4
(C) 9n + 2
(D) 9n + 3
(E) 9n + 4
The OA is C. I am confused. I thought the correct answer should be A. Why is C?

DavidG@VeritasPrep, GMAT Instructor (Wed Nov 08, 2017 8:01 am):
Cost of Items: If Kim purchases n items for $8 each, then she spends a total of 8n on those items.
Postage and Handling: We know that the first item carries a charge of $3. If there are a total of n items purchased, after the first item, there will be an additional n - 1 items. If the charge is $1 each for those n - 1 items, then she'll spend 1(n - 1) dollars on those, for a total of 3 + 1(n - 1) = 3 + n - 1 = n + 2 dollars.
So she spent 8n on the items and n + 2 on postage, for a total of 8n + n + 2 = 9n + 2 dollars. The answer is C.

DavidG@VeritasPrep, GMAT Instructor (Wed Nov 08, 2017 8:04 am):
VJesus12 wrote: Kim purchased n items from a catalog for $8 each. Postage and handling charges consisted of $3 for the first item and $1 for each additional item.
Which of the following gives the total dollar amount for Kim's purchase, including postage and handling, in terms of n ?
(A) 8n + 2
(B) 8n + 4
(C) 9n + 2
(D) 9n + 3
(E) 9n + 4
The OA is C. I am confused. I thought the correct answer should be A. Why is C?

You could also pick an easy number. Say n = 5. If she bought 5 items for $8 each, she'll spend 5*8 = $40. The first item carries a postage charge of $3. The other 4 carry a charge of $1 each for a total postage charge of 3 + 4 = $7. Total spent: 40 + 7 = 47. This is our target. Now just plug '5' in place of 'n' in the answer choices until we find that value of 47. Only C will work. (9*5 + 2 = 45 + 2 = 47.)
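A quick script version of the same check, for anyone who likes brute force (my own addition, not from the thread; plain Python):

for n in range(1, 101):
    # $8 per item, $3 postage for the first item, $1 for each of the other n - 1 items
    assert 8 * n + 3 + (n - 1) == 9 * n + 2
print("9n + 2 matches the direct count for n = 1..100")

The assertion holds for every n because 8n + 3 + (n - 1) simplifies algebraically to 9n + 2, which is choice (C).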
# Some confusions about the Repeated Doubling Algorithm

The following repeated point doubling algorithm is taken from the book Guide to Elliptic Curve Cryptography by D. Hankerson, A. Menezes, and S. Vanstone, on page 93. Clearly, this algorithm is better than calling the point doubling procedure $m$ times, but I am having a hard time understanding the correctness of this algorithm. So, I have two questions about the use of $2Y$ instead of the $Y$ coordinate after the first iteration in the body of the loop:
1) How to prove that it computes the correct value of all the coordinates at the end?
2) How to prove that it computes $2Y$ for $4P$ and so on?

This algorithm is a small optimization to Algorithm 3.21 of the same book, on page 91. Algorithm 3.21 is the original algorithm for computing elliptic curve doubling using Jacobian coordinates (with $a=-3$). If you have a close look at both of the algorithms, you will notice many similarities. For example, Algorithm 3.21 computes $A \leftarrow 3(X_1-Z_1^2)(X_1+Z_1^2)$, whereas Algorithm 3.23 computes $A \leftarrow 3(X^2-W)$ (note that $W=Z^4$!). Therefore I would suggest breaking problem (1) down into two parts:
1. For $m=1$, prove that Algorithm 3.23 computes the same thing as Algorithm 3.21. Since we are using Jacobian coordinates, this means showing that $X_3/Z_3^2=X/Z^2$ and $Y_3/Z_3^3=(Y/2)/Z^3$ (using the notation of the respective algorithms, and of course with equal input).
2. Use induction to prove the result for $m>1$.

I am not sure whether I understand part (2) of your question correctly. The statement "$2Y$ for $4P$" is not really well-defined. Since we are working with projective (Jacobian) coordinates, what you would probably want to check is whether we are computing $X, Y, Z$ such that $4P=(X/Z^2 : (Y/2)/Z^3 : 1)$. Since the first part of your question proves that the algorithm correctly computes $2^mP$ for any $m \geq 1$, the second statement follows by applying the definition of Jacobian coordinates to Line 4 of Algorithm 3.21.
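To make part 1 of the suggested breakdown concrete, here is a small Python sketch (my own check, not from the book; it uses the plain Algorithm 3.21 doubling formulas for $a=-3$ rather than the optimized 3.23 loop, and needs Python 3.8+ for pow(x, -1, p) modular inverses). It doubles $m$ times in Jacobian coordinates on a toy curve and verifies the invariant $x=X/Z^2$, $y=Y/Z^3$ against affine doubling:

# Toy check: repeated Jacobian doubling vs. affine doubling on
# y^2 = x^3 - 3x + 3 over GF(23); P = (1, 1) lies on this curve.
p = 23

def affine_double(P):
    # standard affine doubling formula with a = -3
    x, y = P
    lam = (3 * x * x - 3) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return (x3, y3)

def jacobian_double(X, Y, Z):
    # Algorithm 3.21-style doubling in Jacobian coordinates, a = -3
    T = Z * Z % p
    A = 3 * (X - T) * (X + T) % p
    B = 4 * X * Y * Y % p
    X3 = (A * A - 2 * B) % p
    Z3 = 2 * Y * Z % p
    C = 8 * pow(Y, 4, p)
    Y3 = (A * (B - X3) - C) % p
    return X3, Y3, Z3

P = (1, 1)
X, Y, Z = 1, 1, 1
for m in range(1, 5):
    P = affine_double(P)
    X, Y, Z = jacobian_double(X, Y, Z)
    # convert back to affine: x = X/Z^2, y = Y/Z^3
    x = X * pow(Z * Z, -1, p) % p
    y = Y * pow(Z * Z * Z, -1, p) % p
    assert (x, y) == P, (m, (x, y), P)
print("Jacobian doubling matches affine doubling for m = 1..4")

The induction in part 2 of the breakdown is exactly this loop: if the invariant holds before one doubling, the algebra preserves it, so it holds after all $m$ doublings. Algorithm 3.23 then merely rearranges the same computation so that the quantities carried between iterations are $2Y$ and $W=Z^4$, which is where the factor of $2$ in the question comes from.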
# Haar Measure Integral

I am a physicist and I am wondering whether the following integral over Haar measure (edit: say $U$ is a unitary, orthogonal or symplectic matrix) \begin{align} \int dU \: \exp\left( \mathrm{tr}(UX) + \mathrm{tr}(X^\dagger U^\dagger) \right) \end{align} has an explicit expression in terms of the matrix $X$. For example, if the group is $U(1)$, then the result would be the modified Bessel function $I_0(2|X|)$. For the general case, I guess one can at least expand the exponential and use the Weingarten functions, and then perform a re-summation. But I know too little about the properties of the Weingarten functions to organize the re-summation into any simple, explicit form. Does someone know how to do this, or perhaps where formulae like this can be found?

• For example, if the group is U(1), then And in your case the group is ??? – fedja Dec 9 '17 at 4:10
• @fedja Let's say one would like $U$ to be an $N\times N$ unitary, orthogonal or symplectic matrix. – Jing-Yuan Chen Dec 9 '17 at 4:41
• this is essentially a duplicate of mathoverflow.net/questions/256066/… – Abdelmalek Abdesselam Dec 14 '17 at 15:24
• @AbdelmalekAbdesselam Thank you! The references are very useful! – Jing-Yuan Chen Dec 15 '17 at 8:32

Depending on what you mean by "explicit", in the unitary case this can be read off from a generalization of the Harish-Chandra-Itzykson-Zuber formula. To see that, note that your integral can be rewritten as $$J=\int_{U_N}\int_{U_N} \exp(\Re (\mathrm{tr}\, V YU))\, dU\, dV,$$ where $Y$ is a diagonal real matrix whose entries are the singular values of $X$. Now, for fixed diagonal $A,B$ consider the integral $$J(A,B)=\int_{U_N} \int_{U_N} \exp(\Re (\mathrm{tr}\, A V BU))\, dU\, dV.$$ Then $J=J(I,Y)$. For $A,B$ with distinct entries, $J(A,B)$ has an explicit formula involving determinants of Bessel functions; see, for example, formula (3.6) in the review paper of Zinn-Justin and Zuber, https://arxiv.org/pdf/math-ph/0209019.pdf (they attribute the result to Balantekin and to Guhr-Wetting, although I guess one can trace it all the way back to Harish-Chandra). Now in your case $A=I$, and in particular the entries of $A$ are not distinct, but resolving this involves a straightforward limit: replace $I$ by $A=I+\epsilon \Delta$, where $\Delta$ has distinct real entries, and take $\epsilon \to 0$. I suspect that the case of $A=I$ has an even simpler formula, but I don't see it. Maybe somebody else, more versed in representations than me, can comment on that.

• $+h.c.$ in (3.6) means adding the conjugate? – fedja Dec 9 '17 at 15:55
• Thank you very much Ofer, this is very helpful. If the matrix $X$ in question is not full rank, but say of rank $n<N$, I guess the original $SU(N)$ integral can be reduced to one over $SU(n)$ without causing other changes. Is this correct? I also would hope there is a discussion of $SO(N)$ somewhere. – Jing-Yuan Chen Dec 11 '17 at 2:44
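For what it's worth, the $U(1)$ statement in the question can be spelled out in one line, which also fixes conventions for the general case (my addition; write $X=|X|e^{i\varphi}$ and $U=e^{i\theta}$, so the Haar measure is $d\theta/2\pi$):
$$\int dU\, e^{\mathrm{tr}(UX)+\mathrm{tr}(X^\dagger U^\dagger)} =\int_0^{2\pi}\frac{d\theta}{2\pi}\,e^{2|X|\cos(\theta+\varphi)} =\frac{1}{2\pi}\int_0^{2\pi}e^{2|X|\cos\theta}\,d\theta =I_0(2|X|),$$
using translation invariance of $d\theta$ and the integral representation $I_0(z)=\frac{1}{2\pi}\int_0^{2\pi}e^{z\cos\theta}\,d\theta$. This is the single-variable identity that the HCIZ-type formula cited in the answer generalizes to $N>1$.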
# zbMATH — the first resource for mathematics

On the frequency of Titchmarsh's phenomenon for $\zeta(s)$. VI. (English) Zbl 0649.10028

Let $E>1$ be a fixed constant, $C\leq H\leq T/100$ and $K=\exp\left(\frac{D\log H}{\log\log H}\right)$, where $C$ is a large positive constant and $D$ an arbitrary positive constant. Then the main result is as follows. Theorem: There are $\geq TK^{-E}$ disjoint intervals $I$ of length $K$ each, all contained in $[T,2T]$, such that the maximum of $|\zeta(1+it)|$ as $t$ varies over $I$ lies between $$e^{\gamma}(\log\log K-\log\log\log K+O(1))\quad\text{and}\quad e^{\gamma}(\log\log K+\log\log\log K+O(1)).$$ In the proof of this theorem one of the tools is the main theorem of part V of this series [Ark. Mat. 26, No. 1, 13-20 (1988)]. The authors also announce a forthcoming result by the reviewer regarding the maximum of $|\zeta(1+it)|$ over intervals $I$ (contained in $[T,2T]$) of lengths $\geq C\log\log\log\log T$ and smaller intervals. Here a precise lower bound is given for intervals of length $\geq C\log\log\log\log T$ and statistical results for intervals of smaller lengths. Reviewer: K. Ramachandra

##### MSC: 11M06 $\zeta(s)$ and $L(s,\chi)$
## Calculus 8th Edition

Maximum value: $f(\frac{\pi}{6},\frac{\pi}{6})=\frac{3}{2}$

Second derivative test: Some noteworthy points to calculate the local minimum, local maximum and saddle point of $f$:
1. If $D(p,q)=f_{xx}(p,q)f_{yy}(p,q)-[f_{xy}(p,q)]^2 \gt 0$ and $f_{xx}(p,q)\gt 0$, then $f(p,q)$ is a local minimum.
2. If $D(p,q)=f_{xx}(p,q)f_{yy}(p,q)-[f_{xy}(p,q)]^2 \gt 0$ and $f_{xx}(p,q)\lt 0$, then $f(p,q)$ is a local maximum.
3. If $D(p,q)=f_{xx}(p,q)f_{yy}(p,q)-[f_{xy}(p,q)]^2 \lt 0$, then $f(p,q)$ is neither a local minimum nor a local maximum, but a saddle point.

For $(x,y)=(\frac{\pi}{6},\frac{\pi}{6})$: $D=\frac{3}{4}\gt 0$ and $f_{xx}=-1\lt 0$. Thus, since $D(p,q)\gt 0$ and $f_{xx}(p,q)\lt 0$, $f(p,q)$ is a local maximum. Also, $f(x,y)=\sin x+\sin y+\cos(x+y)=\sin x+\sin y+\cos x\cos y-\sin x\sin y$. This yields $f(\frac{\pi}{6},\frac{\pi}{6})=\frac{1}{2}+\frac{1}{2}+\frac{\sqrt 3}{2}\cdot\frac{\sqrt 3}{2}-\frac{1}{2}\cdot\frac{1}{2}=\frac{3}{2}$. Therefore, we have Maximum value: $f(\frac{\pi}{6},\frac{\pi}{6})=\frac{3}{2}$
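For completeness, the partial derivatives behind the quoted values $D=\frac{3}{4}$ and $f_{xx}=-1$ (not shown in the step above) are:
$$f_x=\cos x-\sin(x+y),\qquad f_y=\cos y-\sin(x+y),$$
$$f_{xx}=-\sin x-\cos(x+y),\qquad f_{yy}=-\sin y-\cos(x+y),\qquad f_{xy}=-\cos(x+y).$$
At $(\frac{\pi}{6},\frac{\pi}{6})$: $f_x=f_y=\frac{\sqrt 3}{2}-\frac{\sqrt 3}{2}=0$ (so it is indeed a critical point), $f_{xx}=f_{yy}=-\frac{1}{2}-\frac{1}{2}=-1$, $f_{xy}=-\frac{1}{2}$, and $D=(-1)(-1)-(-\frac{1}{2})^2=\frac{3}{4}$.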
# Homework Help: Find no. of Complex Numbers

1. Oct 3, 2012

### utkarshakash

1. The problem statement, all variables and given/known data
Find the number of complex numbers satisfying $|z|=z+1+2i$

2. Relevant equations

3. The attempt at a solution
Let $z=x+iy$. Then $|x+iy| = (x+1)+i(2+y)$. Squaring the modulus of both sides, $$\left|\sqrt{x^{2}+y^{2}}\right|^{2} = |(x+1)+i(2+y)|^{2},$$ $$x^{2}+y^{2} = (x+1)^{2}+(2+y)^{2}.$$ Rearranging and simplifying I get $2x+4y+5=0$. Now what to do next? Also, is there any other way to solve this question?

2. Oct 3, 2012

### HallsofIvy

You seem to have missed an important point- the absolute value of a complex number is real. Since the left side of the equation is real, the right side must be. $z$ must be of the form $z= x- 2i$ so that $z+ 1+ 2i= x+1$. What you did was correct but how many points on the line $2x+ 4y+ 5= 0$ also satisfy $y= -2$?
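(Carrying the hint one step further for later readers: setting $y=-2$ in $2x+4y+5=0$ gives $2x-3=0$, so $x=\frac{3}{2}$, and the line meets $y=-2$ in exactly one point. The unique solution is $z=\frac{3}{2}-2i$; as a check, $|z|=\sqrt{\frac{9}{4}+4}=\frac{5}{2}$ and $z+1+2i=\frac{5}{2}$.)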
# IEEE Transactions on Nuclear Science

## Volume 55, Issue 4, Part 1 • Aug. 2008

### RADECS 2007 Conference Overview
Publication Year: 2008, Page(s): 1805–1806

### Comments by the Editors
Publication Year: 2008, Page(s): 1807

### List of reviewers
Publication Year: 2008, Page(s): 1808–1809

### The Near-Earth Space Radiation Environment
Publication Year: 2008, Page(s): 1810–1832
The effects of the space radiation environment on spacecraft systems and instruments are significant design considerations for space missions. Astronaut exposure is a serious concern for manned missions. In order to meet these challenges and have reliable, cost-effective designs, the radiation environment must be understood and accurately modeled. The nature of the environment varies greatly betwe…

### Radiation Effects in MOS Oxides
Publication Year: 2008, Page(s): 1833–1853
Electronic devices in space environments can contain numerous types of oxides and insulators. Ionizing radiation can induce significant charge buildup in these oxides and insulators, leading to device degradation and failure. Electrons and protons in space can lead to radiation-induced total-dose effects. The two primary types of radiation-induced charge are oxide-trapped charge and interface-trap…

### Modeling and Simulation of Single-Event Effects in Digital Devices and ICs
Publication Year: 2008, Page(s): 1854–1878
This paper reviews the status of research in modeling and simulation of single-event effects (SEE) in digital devices and integrated circuits, with a special emphasis on the current challenges concerning the physical modeling of ultra-scaled devices (in the deca-nanometer range) and new device architectures (silicon-on-insulator, multiple-gate, nanowire MOSFETs). After introducing the classificati…

### Modeling Single Event Transients in Bipolar Linear Circuits
Publication Year: 2008, Page(s): 1879–1890
This review paper covers modeling of single event transients (SETs) in bipolar linear circuits. The modeling effort starts with a detailed circuit model, in a program such as SPICE, constructed from a photomicrograph of the die, which is verified by simulating the electrical response of the model. A description of various approaches to generating the single event strike in a circuit element is the…

### Multi-Scale Simulation of Radiation Effects in Electronic Devices
Publication Year: 2008, Page(s): 1891–1902
As integrated circuits become smaller and more complex, it has become increasingly difficult to simulate their responses to radiation. The distance and time scales of relevance extend over orders of magnitude, requiring a multi-scale, hierarchical simulation approach. This paper demonstrates the use of multi-scale simulations to examine two radiation-related problems: enhanced low-dose-rate sensit…

### Improving Integrated Circuit Performance Through the Application of Hardness-by-Design Methodology
Publication Year: 2008, Page(s): 1903–1925
Increased space system performance is enabled by access to high-performance, low-power radiation-hardened microelectronic components. While high performance can be achieved using commercial CMOS foundries, it is necessary to mitigate radiation effects. This paper describes approaches to fabricating radiation-hardened components at commercial CMOS foundries by the application of novel design techni…

### Total Ionizing Dose and Single Event Effects Hardness Assurance Qualification Issues for Microelectronics
Publication Year: 2008, Page(s): 1926–1946
The radiation effects community has developed a number of hardness assurance test guidelines to assess and assure the radiation hardness of integrated circuits for use in space and/or high-energy particle accelerator applications. These include test guidelines for total dose hardness assurance qualification and single event effects (SEE) qualification. In this work, issues associated with these ha…

### Scan-Architecture-Based Evaluation Technique of SET and SEU Soft-Error Rates at Each Flip-Flop in Logic VLSI Systems
Publication Year: 2008, Page(s): 1947–1952
A scan flip-flop (FF) is designed to observe both single event transient (SET) and single event upset (SEU) soft errors in logic VLSI systems. The SET and SEU soft errors mean, respectively, the upset caused by latching an SET pulse that originates in combinational logic blocks and the upset caused by a direct ion hit to the FF. An irradiation test method using the scan FF is proposed to obtain SET…

### Total Dose Effects in Op-Amps With Compensated Input Stages
Publication Year: 2008, Page(s): 1953–1959
This paper discusses total dose damage in operational amplifiers with compensated input stages. The impact of this design approach on unit-to-unit variability of radiation damage is examined, along with hardness assurance methods that can be used to bound the radiation behavior. Data is included for an unusually large sample (100 devices) of one device type. Half of those devices were subjected to…

### Channel Hot Carrier Stress on Irradiated 130-nm NMOSFETs
Publication Year: 2008, Page(s): 1960–1967
We investigate how X-ray exposure impacts the long-term reliability of 130-nm NMOSFETs as a function of device geometry and irradiation bias conditions. This work focuses on electrical stresses on n-channel MOSFETs performed after irradiation with X-rays up to 136 Mrad(SiO2) in different bias conditions. Irradiation is shown to negatively affect the degradation during subsequent hot carri…

### Effectiveness of TMR-Based Techniques to Mitigate Alpha-Induced SEU Accumulation in Commercial SRAM-Based FPGAs
Publication Year: 2008, Page(s): 1968–1973
We present an experimental analysis of alpha-induced soft errors in 90-nm low-end SRAM-based FPGAs. We first assess the relative sensitivity of the configuration memory bits controlling the different resources in the FPGA. We then study how SEU accumulation in the configuration memory impacts the reliability of unhardened and hardened-by-design circuits. We analyze different hardening solutions…

### Study of Single-Event Transients in High-Speed Operational Amplifiers
Publication Year: 2008, Page(s): 1974–1981
This paper presents a simulation and experimental study of the analog single-event transient sensitivity of wide-bandwidth operational amplifiers. Architecture effects are presented that could influence ASIC design and COTS selection.

### Evaluation of Recent Technologies of Nonvolatile RAM
Publication Year: 2008, Page(s): 1982–1991
Two types of recent nonvolatile random access memories (NVRAM) were evaluated for radiation effects: total dose, and single event upset and latch-up under heavy ions and protons. Complementary irradiation with a laser beam provides information on sensitive areas of the devices.

### Investigating Degradation Mechanisms in 130 nm and 90 nm Commercial CMOS Technologies Under Extreme Radiation Conditions
Publication Year: 2008, Page(s): 1992–2000
The purpose of this paper is to study the mechanisms underlying performance degradation in 130 nm and 90 nm commercial CMOS technologies exposed to high doses of ionizing radiation. The investigation has been mainly focused on their noise properties in view of applications to the design of low-noise, low-power analog circuits to be operated in harsh environments. Experimental data support the hypot…

### Temperature Effect on Heavy-Ion-Induced Single-Event Transient Propagation in CMOS Bulk 0.18 µm Inverter Chain
Publication Year: 2008, Page(s): 2001–2006
Heavy-ion-induced single-event transients (SET) are studied by device simulation on an ATMEL spatial component: the CMOS bulk 0.18 µm inverter. The wide temperature range of a spatial environment (from 218 to 418 K) can modify the shape of the SET. Thus, an investigation of the SET propagation through a 10-inverter logic chain is performed in the 218–418 K temperature range, and the threshold LET…

### Probing SET Sensitive Volumes in Linear Devices Using Focused Laser Beam at Different Wavelengths
Publication Year: 2008, Page(s): 2007–2012
The main objective of the work presented here is to explore the ability of laser irradiations to determine the SET sensitive depths of a linear device by using several wavelengths. Laser testing at two wavelengths allows the estimation of sensitive depths. The approach conducted here is applied for the first time to a linear device with a very deep sensitive depth. The 1064 nm wavelength seems to be…

### Use of Code Error and Beat Frequency Test Method to Identify Single Event Upset Sensitive Circuits in a 1 GHz Analog to Digital Converter
Publication Year: 2008, Page(s): 2013–2018
Typical test methods for characterizing the single event upset performance of an analog to digital converter (ADC) have involved holding the input at static values. As a result, output error signatures are seen for only a few input voltages and output codes. A test method using an input beat frequency and output code error detection allows an ADC to be characterized with a dynamic input at a high f…

### A New Algorithm for the Analysis of the MCUs Sensitiveness of TMR Architectures in SRAM-Based FPGAs
Publication Year: 2008, Page(s): 2019–2027
In this paper we present an analytical analysis of the fault masking capabilities of triple modular redundancy (TMR) hardening techniques in the presence of multiple cell upsets (MCUs) in the configuration memory of SRAM-based field-programmable gate arrays (FPGAs). The analytical method we developed allows an accurate study of the MCUs provoking domain crossing errors that defeat TMR. From our an…

### New Analytical Solutions of the Diffusion Equation Available to Radiation Induced Substrate Currents Modeling
Publication Year: 2008, Page(s): 2028–2035
This paper describes some new solutions of the diffusion equation in a semiconductor slab. These solutions are computed for Dirichlet null boundary conditions in the top and bottom planes of the slab and with a null internal electric field. The proposed model takes into account a finite diffusion length and an inclined trajectory. Such solutions may be used for radiation-induced substrate diffusio…

## Aims & Scope

IEEE Transactions on Nuclear Science focuses on all aspects of the theory and applications of nuclear science and engineering, including instrumentation for the detection and measurement of ionizing radiation; particle accelerators and their controls; nuclear medicine and its application; effects of radiation on materials, components, and systems; reactor instrumentation and controls; and measurement of radiation in space.
A good friend posted this link to FB. I read the post, did some background reading, and debated whether to write this post or not. I've been writing it in my head anyway, so time to get it out! I'll preface my remarks by pointing out that I am not a climate scientist; I am an educated observer. Regardless, Stephen spent many hours explaining conservatism and the bible to me, so I owe it to him to try explaining this! I do care about climate change, both personally and as a driver of the ecological systems which ARE my expertise.

In the article Bill Mundhausen makes three statements worth thinking about.

1. "Those of you who were alive in the 70's may remember articles in Time Magazine warning the world about an impending ice age."
2. "The potential warming effects by human activity affecting the biosphere may be completely dwarfed by changes in heat and magnetic wind from the sun that solar scientists study."
3. "… the unusual warmth of the previous five months was the short term effect of an unusually strong El Niño, not the continuation of long-term, human-induced global warming."

Eventually I'll go through each of these individually. This post will just address the first one.

# 70's Ice Age scare

This is something I'd heard before but not pursued. This statement is often thrown about as evidence that climate scientists don't know what they're talking about, or that they change their minds to suit the prevailing political climate, or that things go up and down anyway so why worry? It's true that the global temperature trend flattened out or even decreased during the middle of the 20th century. So did that have climate science predicting an ice age? This time I resolved to figure out what was going on.

There is an awesome collection of scanned news articles here, and a similar list with references here. With a couple of exceptions on the first page, these are all media stories, not peer-reviewed publications. A new ice age made good headlines, at least! What were scientists writing at the time? There's a good summary here, including a reference to a peer-reviewed review of the science at the time. Between 1965 and 1979, seven published articles predicted continued cooling. But 42 articles predicted warming! And over 20 articles predicted no systematic change would occur. Scientists were genuinely uncertain about the future of the climate at the time.

Part of the reason for the uncertainty had to do with a lack of understanding about how two distinct human drivers of climate would interact with each other. On the one hand, there were the first decades of evidence that atmospheric $CO_2$ was increasing. On the other hand, there were observations of dramatic increases in aerosol emissions, particularly $SO_2$. Remember acid rain? The same chemicals responsible for acid rain also have a cooling effect on the atmosphere, and those emissions were increasing dramatically in the early 70's. So the scientific uncertainty reflected these competing drivers: $CO_2$ warms the lower atmosphere, while $SO_2$ cools it. The scientific articles predicting a new ice age were forecasting the impacts of continued increases in $SO_2$ (see this scan). Humanity acted to limit $SO_2$ emissions, which began declining around 1980. We acted on $SO_2$ because some impacts were more obvious (decreasing pH in surface waters), and the economic costs were smaller than those of reducing $CO_2$ emissions (although still substantial).
So overall the “ice age scare” was a media construction at a time when political discussions were focused on $SO_2$ emissions. It would be interesting to see if those debates stimulated climate scientists to focus more on building long-term data records, and to use those data to measure the effects of human and natural drivers on climate change. The available data in the 1970's were quite limited (see figures at the top). At that time climate scientists also lacked the most powerful tool for understanding the interacting effects of climate drivers: computational models of the global climate system.
# Calibration schemes with O(N log N) scaling for large-N radio interferometers built on a regular grid

```
@article{Gorthi2020CalibrationSW,
  title={Calibration schemes with O(N log N) scaling for large-N radio interferometers built on a regular grid},
  author={Deepthi Gorthi and Aaron Parsons and Joshua S. Dillon},
  journal={Monthly Notices of the Royal Astronomical Society},
  year={2020},
  volume={500},
  pages={66-81}
}
```

• Published 6 May 2020 • Computer Science • Monthly Notices of the Royal Astronomical Society

### Omniscopes: Large area telescope arrays with only N log N computational cost
• Physics • 2010
We show that the class of antenna layouts for telescope arrays allowing cheap analysis hardware (with correlator cost scaling as N log N rather than N^2 with the number of antennas N) is…

### Non-linear Redundancy Calibration
• Mathematics • 2013
This work proposes to use a standard non-linear minimization algorithm to solve for both the antenna gains and the true visibilities of radio interferometric arrays, and demonstrates that the estimator is indeed statistically efficient, achieving the Cramér–Rao bound.

### Mitigating the effects of antenna-to-antenna variation on redundant-baseline calibration for 21 cm cosmology
• Physics • Monthly Notices of the Royal Astronomical Society • 2019
The separation of cosmological signal from astrophysical foregrounds is a fundamental challenge for any effort to probe the evolution of neutral hydrogen during the Cosmic Dawn and epoch of…

### Fast Fourier transform telescope
• Physics • 2009
We propose an all-digital telescope for 21 cm tomography, which combines key advantages of both single dishes and interferometers. The electric field is digitized by antennas on a rectangular grid…

### MITEoR: a scalable interferometer for precision 21 cm cosmology
• Physics • 2014
We report on the MIT Epoch of Reionization (MITEoR) experiment, a pathfinder low-frequency radio interferometer whose goal is to test technologies that improve the calibration precision and reduce…

### Comparing Redundant and Sky-model-based Interferometric Calibration: A First Look with Phase II of the MWA
• Physics • The Astrophysical Journal • 2018
The first results from comparing both calibration approaches with MWA Phase II observations are presented, showing substantial agreement between redundant visibility measurements after calibration and improved calibration by combining OMNICAL and FHD.

### Precision calibration of radio interferometers using redundant baselines
• Environmental Science • 2010
The errors and biases in existing redundant calibration schemes are explored through simulations, and it is shown how statistical biases can be eliminated and slight deviations from perfect redundancy and coplanarity can be taken into account.
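To make the redundant-calibration idea running through these papers concrete, here is a toy amplitude-only "logcal" solve for a 1D regular array — our own minimal sketch under simplifying assumptions (noise-free data, amplitudes only, one redundant group per baseline length), not code from any of the cited works:

```python
import numpy as np

# Toy redundant-baseline amplitude calibration ("logcal") for a 1D regular array.
rng = np.random.default_rng(1)
N = 8                                              # antennas on a regular grid
g_true = np.exp(0.1 * rng.standard_normal(N))      # true per-antenna gain amplitudes
V_true = 1.0 + np.abs(rng.standard_normal(N - 1))  # one true visibility per baseline length

rows, y = [], []
for i in range(N):
    for j in range(i + 1, N):
        b = j - i                                  # redundant group = baseline length
        r = np.zeros(2 * N - 1)
        r[i] += 1.0
        r[j] += 1.0                                # log g_i + log g_j ...
        r[N + b - 1] = 1.0                         # ... + log V_b
        rows.append(r)
        y.append(np.log(g_true[i] * g_true[j] * V_true[b - 1]))  # measured log|v_ij|

# Fix the overall-amplitude degeneracy by convention: sum of log-gains = 0.
rows.append(np.concatenate([np.ones(N), np.zeros(N - 1)]))
y.append(0.0)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
g_rec = np.exp(x[:N])
print(g_rec / g_true)   # constant ratio: gains recovered up to the fixed degeneracy
```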
## Abstract Deducibility and Domain Theory

### Authors: Vladimir Yu. Sazonov and Dmitri I. Sviridenko

ABSTRACT

According to the thesis "computability = deducibility" [D.Scott, LNCS 140], intensional aspects of domain theory are investigated as a mathematical theory of computability. A logistic system is any pair of sets <A, R>, where R \subseteq Conf(A) := Powerset(A) x (A union {#}), # \notin A. The intended interpretation: A is a set of sentences, R is a rule of inference, and # is a contradiction sign. As usual, R induces a relation |-_R \subseteq Conf(A) of (reflexive) deductive inference, and also the classes Cl(<A,R>) \subseteq Powerset(A) of the sets closed under |-_R and Th(<A,R>) \subseteq Cl(<A,R>) of consistent closed sets (theories), partially ordered by the inclusion relation.

The following more general notion of deducibility ||-_R, which may be non-reflexive, plays an important role. Let G ||-_R f iff there exists a (well-founded) tree of inference of f from G which contains at least one configuration in R (i.e. is non-trivial). By imposing, if necessary, suitable finitarity (and other) conditions on the deducibility notion, it is possible to characterise rather naturally, from the point of view of the abovementioned thesis, various classes of domains, e.g. the classes of all complete lattices with a base, conditionally complete partially ordered sets with a base, complete f_0-spaces (defined in [Ju.L.Ershov, Algebra and Logic, 11, N4], the same as Scott's algebraic domains; cf. also [D.Scott, LNCS 140], where only finitary reflexive deducibility is considered), Ershov's complete A_0-spaces [Algebra and Logic, 12, N4] = Scott's continuous domains, and Scott's continuous lattices. For example, Th(<A, |->) is an (arbitrary) complete A_0-space under \subseteq if for some R \subseteq Conf(A) there holds

(1) G/f \in R => G is finite (i.e. R is finitary),

(2) G ||-_R f => G^ ||-_R f, where G^ := union {g^ : g \in G} and g^ := {h : g ||-_R h}, and

(3) G |- f <=> G ||-_R f^ and G |- # <=> G ||-_R #.

The goal of this paper is just to give an English extended version of the above text, published only in Russian [V.Yu.Sazonov and D.I.Sviridenko, Abstract Deducibility and Domain Theory, Seventh All-Union Conference on Mathematical Logic, Abstracts, Novosibirsk, 1984, p. 158], in connection with a related recent paper [R.Hoofman, Continuous Information Systems, Information and Computation 105, 42–71 (1993)]. It also contains an Appendix to this Abstract (written by the first author) with additional details, proofs and some comparisons with Hoofman's approach.

Paper Available at: ftp://dimacs.rutgers.edu/pub/dimacs/TechnicalReports/TechReports/1996/96-08.ps.gz
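A toy instance of these definitions (our illustration, not part of the original abstract): take A = {a, b} and R = {({a}, b)}, i.e. the single rule "from a infer b" and no way to derive #. The sets closed under |-_R are then exactly Cl(<A,R>) = { {}, {b}, {a,b} }; since # is never derivable, all of them are consistent, so Th(<A,R>) = Cl(<A,R>), which is a complete lattice (here a three-element chain) under inclusion.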
# Planck's constant

## Noun

Plural none

1. (uncountable) Planck's constant is a measure of the size of a quantum, or the smallest 'piece' of energy that exists. It has a value ${\displaystyle h\approx 6.626\times 10^{-34}\ \mathrm {J} \cdot \mathrm {s} }$
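For example (our arithmetic, using an assumed visible-light frequency of $f = 5\times 10^{14}\ \mathrm{Hz}$), the energy carried by a single photon is

$E = hf \approx \left(6.626\times 10^{-34}\ \mathrm{J\cdot s}\right)\left(5\times 10^{14}\ \mathrm{Hz}\right) \approx 3.3\times 10^{-19}\ \mathrm{J}.$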
# Removal of Arsenic(III) and Arsenic(V) Ions from Aqueous Solutions with Lanthanum(III) Salt and Comparison with Aluminum(III), Calcium(II), and Iron(III) Salts

S. Tokunaga, S. Yokoyama and S. A. Wasay

Water Environment Research, Vol. 71, No. 3 (May–Jun. 1999), pp. 299–306

Stable URL: http://www.jstor.org/stable/25045215

## Abstract

Interactions of arsenic(III) and arsenic(V) ions with a lanthanum salt were studied with the aim of developing a new precipitation method for removal of arsenic from aqueous solutions. Performance was compared to those of aluminum, polyaluminum chloride (PAC), calcium, and iron(III) salts. Arsenic(III) was removed by iron(III) and lanthanum in a narrow pH range with less than 60% removal. Arsenic(V) was removed more efficiently by aluminum, PAC, iron(III), and lanthanum. Lanthanum was most effective, meeting Japanese effluent and drinking water standards by adding three times as much lanthanum as arsenic(V). The stoichiometry and X-ray diffraction measurement showed that the precipitation reactions are

${\rm La}^{3+}+{\rm H}_{2}{\rm AsO}_{4}^{-}\rightarrow {\rm LaAsO}_{4}+2{\rm H}^{+}\quad ({\rm pH}\ 5)\qquad (1)$

${\rm La}^{3+}+{\rm HAsO}_{4}^{2-}\rightarrow {\rm LaAsO}_{4}+{\rm H}^{+}\quad ({\rm pH}\ 9)\qquad (2)$

The solubility product of lanthanum arsenate, ${\rm LaAsO}_{4}$, was calculated to be $(1.07\pm 0.03)\times 10^{-21}$.
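As a quick illustration of what that solubility product implies (our arithmetic, assuming simple dissolution ${\rm LaAsO}_4 \rightleftharpoons {\rm La}^{3+}+{\rm AsO}_{4}^{3-}$ with equal ion concentrations and no competing equilibria):

$s=\sqrt{K_{sp}}=\sqrt{1.07\times 10^{-21}}\approx 3.3\times 10^{-11}\ {\rm mol/L},$

i.e. the precipitate is extremely insoluble, which is consistent with the high removal efficiencies reported.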
# Discretizations

## Mathematical background

In mathematics, the term discretization stands for the transition from abstract, continuous, often infinite-dimensional objects to concrete, discrete, finite-dimensional counterparts. We define discretizations as tuples encompassing all necessary aspects involved in this transition. Let $X$ be an arbitrary set and $\mathbb{F}^n$ the set of $n$-tuples where each component lies in $\mathbb{F}$. We define two mappings

$\mathcal{S}: X \to \mathbb{F}^n$ and $\mathcal{E}: \mathbb{F}^n \to X,$

which we call sampling and interpolation, respectively. Then, the discretization of $X$ with respect to $\mathbb{F}^n$ and the above operators is defined as the tuple

$\mathcal{D} = (X, \mathbb{F}^n, \mathcal{S}, \mathcal{E}).$

The following abstract diagram visualizes a discretization: (diagram omitted)

TODO: write up in more detail

## Example

Let $X = C([0, 1])$ be the space of real-valued continuous functions on the interval $[0, 1]$, and let $x_1 < x_2 < \dots < x_n$ be ordered sampling points in $[0, 1]$.

Restriction operator: We define the grid collocation operator as

$\mathcal{S}(f) := \big(f(x_1), \dots, f(x_n)\big).$

The abstract object in this case is the input function $f$, and the operator evaluates this function at the given points, resulting in a vector in $\mathbb{R}^n$. This operator is implemented as PointCollocation.

Extension operator: Let discrete values $\bar{f} = (\bar{f}_1, \dots, \bar{f}_n)$ be given. Consider the linear interpolation of those values at a point $x$:

$\hat{f}(x) := \bar{f}_i + \frac{x - x_i}{x_{i+1} - x_i}\,\big(\bar{f}_{i+1} - \bar{f}_i\big),$

where $i$ is the index such that $x_i \le x < x_{i+1}$. Then we can define the linear interpolation operator as

$\mathcal{E}(\bar{f}) := \hat{f},$

where $\hat{f}$ stands for the function defined by the rule above. Hence, this operator maps the finite array to the abstract interpolating function. This interpolation scheme is implemented in the LinearInterpolation operator.
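A minimal runnable sketch of these two operators (our NumPy illustration; PointCollocation and LinearInterpolation are the operator names mentioned above, while the helper functions here are hypothetical stand-ins):

```python
import numpy as np

def point_collocation(f, points):
    """Sampling S: evaluate the abstract function f at the sampling points."""
    return np.array([f(x) for x in points])

def linear_interpolation(values, points):
    """Interpolation E: return the piecewise-linear interpolant of the values."""
    def f_hat(x):
        return np.interp(x, points, values)
    return f_hat

points = np.linspace(0.0, 1.0, 5)              # ordered sampling points in [0, 1]
samples = point_collocation(np.sin, points)    # abstract object -> vector in R^5
f_hat = linear_interpolation(samples, points)  # vector -> interpolating function
print(abs(f_hat(0.3) - np.sin(0.3)))           # small discretization error
```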
# Deriving some uniform circular motion equations

My question basically boils down to this: how do we derive these relationships?

1.) What is the relationship between radius and centripetal force? (Inverse, but why?)
2.) What is the relationship between velocity and centripetal force? (Directly proportional to the square of the velocity, but why?)
3.) What is the relationship between period and centripetal force? (Inverse, but why?)
4.) Why does the centripetal force increase if we move an object away from the center of motion?

TL;DR: basically, what I want to know is how you would derive each of these relationships symbolically.

EDIT: someone asked me to provide some context, so I will: we were doing a turntable lab in my physics class, and our teacher asked us to derive these equations from the centripetal force equation.

• "1.) What is the relationship between radius and centripetal force? (inverse, but why?)" Or linear, if you measure angular velocity instead of tangential velocity... And equivalent comments can be made in several places. Presumably you are asked in a particular context, but the question doesn't make sense without that context. – dmckee --- ex-moderator kitten Nov 4 '15 at 4:29

What is the relationship between radius and centripetal force?

You start from the second law of motion: $F = ma$. You write the law for each axis:

$F_x = ma_x$, where $a_x = \frac{dv_x}{dt} = \frac{d(v\cos(\omega t))}{dt} = -v\omega\sin(\omega t)$

$F_y = ma_y$, where $a_y = \frac{dv_y}{dt} = \frac{d(v\sin(\omega t))}{dt} = v\omega\cos(\omega t)$

You know that $v = 2\pi R \times (\text{rotations per second}) = R\omega$, and you get:

$a = (a_x^2 + a_y^2)^{1/2} = R\omega^2$ and $F = m\omega^2 R = m\frac{v^2}{R}$

• Super deamon, the demonstration should clear up all your questions. Study it carefully. – Energizer777 Nov 4 '15 at 5:40
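Collecting the relationships the question asks about (a summary consistent with the derivation above; which quantity is held fixed is the key point the turntable context decides):

$F = \frac{mv^2}{R} = m\omega^2 R = \frac{4\pi^2 m R}{T^2}$

At fixed speed $v$, $F \propto 1/R$ (question 1); at fixed radius, $F \propto v^2$ (question 2) and $F \propto 1/T^2$ (question 3, using $v = 2\pi R/T$ — inverse square, strictly speaking); but on a turntable every point shares the same $\omega$ (equivalently the same $T$), so $F = m\omega^2 R \propto R$, and the force grows as the object moves outward (question 4).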
## A Pareto Distribution of Wealth

### A Pareto Distribution of Wealth

I'm sure you're familiar with the 80/20 rule. It's said that 80% of the effects come from 20% of the causes, and so upon application to Meritocracy, 80% of wealth is generated by 20% of the people. This principle is used from business to sports to health and safety, to optimise all walks of life for the best possible outcome.

With application to wealth, you can approximate what this looks like yourself by modelling a population with the poorest person's wealth as some base number raised to the first power (1) (multiplying that by some coefficient if you want to normalise the total to 100% of the total wealth of the population), the second poorest person's wealth as the same base number raised to the second power (2) (normalised by that same coefficient), and so on, for however large a population you want to consider. It turns out that the larger the population, the more the base number and coefficient tend towards particular values that reveal a constant ratio of wealth between the richest and poorest individuals whose wealth follows a Pareto distribution. The base number holds for smaller populations, though the smaller the population, the more the ratio tends towards a population of 1 where 1 person has 100% of the wealth - so larger populations are more informative.

If you're spreadsheet savvy, try this yourself (a short script version appears further down the thread).

As your base number, raise the number 2200 to the power of (1 divided by the total population), e.g. 2200^0.1 for a population of 10 (giving a base of about 2.16). As your coefficient for a population of 10 you want a number close to 0.0244. Raise this base number to the power of 1 and multiply by this coefficient to get the poorest person, with just over 0.05% of the wealth. Do the same to the power of 2 and multiply by the coefficient to get the next poorest person, with about 2.16 times the wealth of the poorest person. The richest person has just over 54% of the wealth, which is just over 1000 times the wealth of the poorest person. If you've worked out all the values in between, you'll see the 80/20 rule is close to holding, though the exact 80/20 ratio isn't exactly followed by an exact geometric progression.

Try it on larger and larger populations and you'll realise that the richest person tends towards having 2200 times more wealth than the poorest person. The percentage of the total wealth owned by the richest person tends towards 770 divided by the population size. The percentage of the total wealth owned by the poorest person tends towards 0.35 divided by the population size.

I decided to have a look at what this would look like applied to the world population:

There are currently over 7.7 billion people in the world, and it's estimated that world wealth in all its forms amounts to about $1 quadrillion. Following the same approximation of the Pareto Principle:

The richest person should have *$1 million across all forms of wealth (including debt, real estate and derivatives), and the poorest person should have just under *$455 across all the same forms of wealth.

Ha! Go on, work it out for yourself. Does the wealth of the real world follow the Pareto distribution? Not even close!

But let's consider just the world's money supply, which totals nearly $90 trillion: the richest person in the world should have *$90,000 in coins, banknotes, accounts, savings and deposits.
This includes *$37,000 in just coins, banknotes and in their bank. The poorest person in the world should have *$41 in the same forms of money. This includes *$17 in readily available money.

Pff. Obviously this widely utilised principle for optimisation must be best applied to everything but wealth...

edit: I have put a * next to numbers I miscalculated due to forgetting to divide percentages by 100.

Last edited by Silhouette on Thu Nov 07, 2019 5:24 pm, edited 1 time in total.

— Silhouette

### Re: A Pareto Distribution of Wealth

I'm "liking" this post, in the sense that people send likes to content including text messages.

— Ecmandu

### Re: A Pareto Distribution of Wealth

I gather those numbers do not include the homeless population or near-homeless, who are not factored in. But probably the gross medium is what determines the closest approximation, or about 0.2%. The standard deviation by that small a number would be of no measurable consequence. However, the deviation caused by unethical and illegal profiteering, if included, may make a very big difference, once those immeasurable approximations are qualified into the study. Or am I misunderstanding the objective behind producing a viable study? Or are they factored in?

Thanks, for anyone bothering with my question, regarding: "However, one should not conflate the Pareto distribution with the Pareto Principle as the former only produces this result for a particular power value."

This information, which I gathered after asking the question, reduces the question to generally accessible knowledge. Therefore, my first thought was to withdraw the question. It becomes an interesting query whether sensible questions may correspond to prior concerns. The oft informality of immediate responses tends toward sensible, yet already treaded upon, inquiries. Some questions of this type do have corresponding approaches, where procedures and formal rules of logic may present problems in their conflated, or otherwise presented, mode.

Thanks anyway, if unduly concerned with it.

— Meno_

### Re: A Pareto Distribution of Wealth

I had to correct 6 of the numbers I used in the opening post. The result only reinforces my point even further. In a world of 7.7 billion people and all the wealth we have between us, an optimising distribution according to the Pareto Principle would have hardly any millionaires in the world, if any at all.

Meno_ wrote:I gather those numbers do not include the homeless population or near-homeless, who are not factored in. […]

If I'm understanding what you're saying, yes, the numbers would include everyone, even the homeless or near-homeless. The reason is that the numbers are starting from the theory of an entire population, whatever their living conditions, and comparing this to practice afterwards.
I'm not looking at the real world first and subsequently modelling it theoretically, because, as you will have gathered from my conclusion, the real world looks nothing like it uses the same Pareto Principle that is used to optimise so much else in life.

As I stated, the figure of $1 quadrillion includes money in all its forms, "including debt, real estate and derivatives". Since homes are included in real estate, homelessness would imply that all homes throughout the entire world were valued above the poorest person's wealth (in all forms including real estate) of $455. This doesn't seem hard to imagine for us in the West, but given that the estimated value of only the world's developed real estate is $217 trillion, of which 75% is housing, that yields $162 trillion worth of houses throughout the whole world, and a distribution of people in line with the Pareto Principle would have the richest person owning only $162,000 in housing but the poorest still owning $74 of housing.

So whilst you won't find many houses in the West for 74 dollars (or even $162k, for that matter), it may be the case that any number of people could own housing worth more than $74 but just not live there... - in which case there could be homelessness, but out of choice rather than poverty. So it's hard to say homelessness or near-homelessness would really be an issue for an optimising distribution of wealth in line with the Pareto Principle - especially if you believe the claim that Americans are richer than 99% of the rest of the world. This means they would collectively own about 7.4% of the world's homes if they followed the Pareto Principle, which is $12 trillion in total, or, if you go by the fact that their population of 330 million is about 4.3% of the world, they would own $7 trillion between them all. Whether Americans were the top 1% or 4.3% in housing ownership, in both cases even the poorest American would still own around $150k or $116k in housing, depending on each of these respective cases. But obviously there are plenty of other parts of the world with people in the top 1% or 4.3% of world wealth, so the numbers wouldn't be quite that high.

Today there are probably about 20 million millionaires in the world. With Pareto optimisation of wealth, that's 0.3% of the world who won't get to call themselves millionaires anymore. I'd have to know the graph of the world's actual distribution of wealth to see where they intersect, to work out how many more would lose out (in absolute number at least) if we were to instead have a distribution of wealth according to the Pareto Principle - though it wouldn't be a great many more. Of course, the "position" one currently has in the world leaderboard of wealth ownership wouldn't have to change in such a conversion, so relatively speaking nobody would lose out.

It's true that many people are too young or old, or dependent for some other reason on others having wealth when they have little to none themselves, but even allowing for this, the world is far, far away from this optimising distribution used so widely in other aspects of life.

Meno_ wrote:However, one should not conflate the Pareto distribution with the Pareto Principle as the former only produces this result for a particular power value

If you mean the Pareto Principle is only one form of Pareto distribution, then yes, I could have been more specific about this in my opening post.
The title is addressing "a" Pareto Distribution of Wealth and I open speaking about the Pareto Principle, but yes, there are two instances where I didn't specify the Pareto Distribution as one that follows the Pareto Principle.

— Silhouette

### Re: A Pareto Distribution of Wealth

The reason I posted that I liked this thread wasn't just about money - sexual selection isn't Pareto normative either.

— Ecmandu

### Re: A Pareto Distribution of Wealth

In the interests of actually enacting this Pareto Distribution based on the Pareto Principle in the real world, it's useful to consider successful models of constrained quantifiable transactions that are used in other areas of life, which people actively and willingly abide by. One such control, which is widely used to measure and confine values to a certain distribution in competitive pursuits like chess, football, board games and video games, is the Elo rating system.

This rating system allows people to "transact" according to their ability, such that winners and losers gain and lose Elo rating in accordance with the Elo rating of the person they're dealing with. This means that participants trade away less of their Elo rating when losing to more able people and more when losing to less able people; likewise, they gain more Elo rating when winning against more able people and less when winning against less able people.

When applied to wealth, the same concept translates to a kind of "exchange rate" depending on the Elo rating of whoever is trading. In the same way as things work already, businesses maximise profits by selling less expensive products to poorer people in higher numbers as well as selling more expensive products to richer people, who are fewer in number. Using the principles of the Elo rating system, which is based around the logistic function, the Pareto Principle can be adapted such that the poor pay less than the rich for the same trade, but only to the extent that everyone's wealth remains in accordance with the 80/20 rule as a result.

I've adapted the Pareto model I laid out in the opening post to the afore-mentioned "logistic function", to cope with trading situations, in the following way (sketched in code just after this post):

The logistic function takes the form of f(x) = L/(1 + e^(-k(x-x0))), where L is the curve's maximum value, k is the logistic growth rate (steepness of the curve) and x0 is the midpoint of the curve. When matching this form specifically to the Pareto Principle, L takes the value of around 49 divided by 3 times the population, k takes the value of 7.7 divided by the population, and the midpoint x0 is half of the population. Using these particular values transforms the exponential curve of the Pareto Principle (which is the same as the model I outlined in the opening post when excluding the "+1" in the denominator) into a sigmoid (S-shaped) curve once the "+1" is added to the denominator.

If I'm working it out correctly, the richest person is only paying about 50 times (more precisely, the square root of 2200 times) more for the same item as the poorest person, even though they're 2200 times as rich - assuming things are distributed within the 80/20 rule as I laid out in the opening post. However, to transition from what we have now to a Pareto Principle distribution of wealth, it might be necessary to increase that proportion, at least to accelerate the transition.
This depends on what the current world wealth distribution is.

The positive consequences of this are that every person remains proportionally richer than those they were richer than before, in accordance with Meritocracy - as you have to continue to earn to maintain richness - only wealth inequality is constrained to an optimised Pareto distribution of wealth without compromising the decentralised market model, instead of allowing inequality to spiral far outside the 80/20 rule like it has been.

Possible negative consequences are that, without certain adjustments, the richer may refrain from hiring or trading with the poorer to avoid losing out from the more significant exchange rate between more greatly differing levels of wealth:

1) Assuming rich people earned their richness, they're costing companies more in wage expenses or drawings etc., so there is already an incentive to employ cheaper people, who are thereby less rich from earning lower wages, which may still offset the larger costs for richer companies to employ poorer people. No such problem will present itself to poorer companies - meaning new businesses get a natural boon, and businesses that are large get penalised to the extent that they approach monopoly.

2) Selling to one poorer person earns less revenue than selling to one richer person, but it's already the case that there are many more poor people than rich people, and large numbers of smaller sales can make up for smaller numbers of larger sales, although the reduction in profit may make previously viable production and service provision no longer viable. However, this slack can be picked up by newer and poorer businesses, who don't lose out by selling to poor people, and who gain from taking money off the hands of the rich - enabling the constraint within the optimised Pareto distribution behind this model.

In the possible case that these two potentially negative considerations are more detrimental than beneficial, it may be advisable to restrict the exchange rate system only to non-business transactions, and apply it only partly or not at all to business entities (including sole traders) who can prove their transactions relate to business. However, this may not be necessary upon full consideration of the game theory of the Pareto model that I'm proposing.

The model actually introduces an incentive for richer and successful people to spend their wealth instead of hoarding it, in order to make the most cost-effective use of the exchange rate system, alongside the incentive to get recognised as being able to achieve a high Elo rating, and to be seen as a significant contributor to the less wealthy. Current usage of the Elo rating system sees no less fierce and innovative competition at the top levels. There's no danger of people stopping trading once they've achieved a high rating, unless they go fully off-grid and self-sufficient, which is perfectly acceptable - but trading outside the rules of the exchange rate system would have to be as illegal as it currently is to trade counterfeit currency.

To take advantage of what society can give that self-sufficiency can't, you have to spend your rating to buy what you want and need - the rating is designed to fluctuate as much as account balances already do. In this way it's not a score to try and keep high at all times, as this penalises you - so it isn't comparable to dystopian models like the Chinese social credit system.
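A minimal numerical sketch of the two recipes in this thread — the geometric 80/20 wealth model from the opening post and the logistic "exchange rate" curve from the post above. This is our own illustration; the constants are taken from the posts, while the code itself is an assumption about how they fit together:

```python
import numpy as np

# --- Geometric 80/20 model from the opening post ---
N = 10                               # population size
base = 2200 ** (1.0 / N)             # wealth ratio between adjacent ranks
shares = base ** np.arange(1, N + 1)
shares /= shares.sum()               # the post's coefficient, folded into normalisation
print(f"poorest: {shares[0]:.4%}, richest: {shares[-1]:.2%}, "
      f"ratio: {shares[-1] / shares[0]:.0f}")   # ~0.05%, ~54%, ~1000x for N = 10

# --- Logistic "exchange rate" curve from the post above ---
N = 1000                             # larger population for the sigmoid variant
L = 49.0 / (3.0 * N)                 # curve maximum, per the post
k = 7.7 / N                          # logistic growth rate, per the post
x0 = N / 2.0                         # midpoint

def price_level(x):
    """Relative price paid by wealth rank x under the sigmoid curve."""
    return L / (1.0 + np.exp(-k * (x - x0)))

print(price_level(N) / price_level(1))   # ~47, i.e. about sqrt(2200)
```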
— Silhouette

### Re: A Pareto Distribution of Wealth

All I know is that modern consumer capitalism can't work and function when you make half the population destitute or poor. That is why I definitely believe in social economic reforms.

I also don't think the current economic systems are able to politically reform themselves, because the entire global economy is currently on the verge of collapse. Even if you wanted to reform things you can't; that ship has long since sailed, never to come back. There is only internal systematic economic destruction dead ahead, along with major social and societal disruptions that will be mercilessly brutal. This has all been in the making for the last five decades and there is no stopping it now. Too little, too late.

All that is left is violent revolution, because all other options were eliminated or taken off the table, where the political establishment doesn't even want a public debate on the subject anymore. When all else is extinguished and people have no other available civil recourse, violence is all that is left.

— Zero_Sum

### Re: A Pareto Distribution of Wealth

Zero_Sum wrote:All I know is that modern consumer capitalism can't work and function when you make half the population destitute or poor. […]

You don't present yourself anywhere close to being someone who is likely to change their mind, regardless of any contrary evidence/reasoning, but in the interest of addressing your opinions:

Any internal systematic economic destruction that we may or may not be facing, inevitably or otherwise — the situation we're currently in is in large part due to the inability of our current model to internally keep itself in check. Anything that lacks the internal mechanics to keep it stable will fly apart, break, or at least cause avoidable suffering or disharmony. This is why I present a model that corrects the internal mechanics of what we have so as to not only inherently keep itself in check, but to do so optimally.

Perhaps you're right and it's too late, perhaps you're not. Until the fat lady sings, I'm going to fish around for solutions in case the fate of which you're so certain can be avoided. As someone who is not a time traveller, I will consider you no absolute authority on the future beyond the "problem of induction" to which everyone is equally liable. If you'd prefer to do nothing, or perhaps even assist in fulfilling your revealed prophecy just to be more likely to be able to say "I told you so" at the end, I can't stop you. You do you, I'll do me.

My next step will probably be to attempt to test this through some kind of programmed simulation. That's a more challenging task, at least to make such a simulation sufficiently true to the real world, but at least the maths is out of the way.

— Silhouette

### Re: A Pareto Distribution of Wealth

Silhouette wrote: […]

The problem with social economic theoretical philosophers is that very few cross-examine their own beliefs against prevailing current data on the worldly state of affairs. I try not to make this mistake myself, where I very much keep up with global current events.

When you take a look at this thread of mine here below, what's the first thing that comes to your mind? Do you still believe things can be salvaged? Factor in your simulations for all of that.

viewtopic.php?f=48&t=195312&start=25

— Zero_Sum

### Re: A Pareto Distribution of Wealth

Zero_Sum wrote:The problem with social economic theoretical philosophers is that very few cross-examine their own beliefs against prevailing current data on the worldly state of affairs. […]

It's a bunch of charts mostly showing some alarming signs of economic downturn, not immediate imminent collapse. You don't think I'm aware of this? I think you're overestimating the amount of collapse required to bring down the entire Western economy, in spite of all its infrastructure and the willingness of servile humans to sink to ever lower lows just to keep the illusion of things going in the short to medium term. Historically you need far more of the population to be starving to actually turn to violence, especially in cultures with generations of compliance and complacency like we have in the West.

This is what capitalist advocates thrive on: the ability to dismiss claims that Capitalism is collapsing because things have so much further to go for it to complete its collapse. Don't get me wrong, it's headed in the direction of collapse; I just disagree that any of the data you show necessitates that it's just around the corner, and as long as you're claiming it is just around the corner, the more pro-capitalists can laugh at you in their compliance and complacency.

Instead I'd recommend calling the data as it is - like I am - rather than blowing it out of proportion in an eagerness "to watch the world burn" as you model yourself with this "Joker" persona. You probably won't live to see it, and me neither. It's on an undeniable downturn, but that's it for now - it will continue for as long as people don't look at the root causes like I am, and implement solutions that are workable in the real world, not just the same ones we've already been trying and failing with for generations now.

"Tax this, prohibit that" doesn't work, as all the reformists keep suggesting like a broken record, but an internal exchange rate based around the Pareto Principle is a simple solution that nobody's thought of before, one that can use the system we already have to solve itself without resorting to government coercion. "Rebel! Violence!" is the same broken record of revolutionaries, in complete underestimation of the strength of the spell that enslaves the vast majority of people to "keep calm and carry on". Revolution and doomsaying have no political power, sorry.

Reform is where political power is, and I'm readying my solutions for the point where people finally admit the usual reforms objectively aren't working and things get desperate enough to hint at finally inspiring what you're suggesting, which is a looong way off. But like I said, you have your identity invested in your soothsaying, so there's no way you're going to even consider adjusting your viewpoint.

— Silhouette
## linear algebra – Implement statistics for the length of continued fractions of a result on MATHEMATICA

After successfully generating the reduced fractions of coprime pairs in the interval (0,1) with the following:

```
l = {};
For[b = 1, b < 20, b++,
  For[a = 1, a < b, a++, AppendTo[l, Simplify[a/b]]]];
L = DeleteDuplicates[l];
L
```

Now I want to implement the statistics for the length of the corresponding continued fractions, but I have no idea how to go about this using MATHEMATICA.

I got a little help here: if I implement these lines

`Map[Length @* ContinuedFraction, L]`

`L = FareySequence[19][[2 ;; -2]]`

they seem equivalent and shorter. But I would like to know the kind of continued fraction expansion used, since the lengths of the continued fractions of 1/2 and 2/3, as I understand it, are both equal to 2.

## mysqldump – Wait state: Statistics

We have 200 databases in one instance of MariaDB with a total of 370,000 tables. Since we upgraded to MariaDB 10.5.11, we see that most of the time passed by MySQL is in the wait state Statistics. According to MariaDB, it should be a brief state: "Calculating statistics as part of deciding on a query execution plan. Usually a brief state unless the server is disk-bound." But we don't see particularly high usage of the disk.

Trying to help this situation, we generated all Engine-Independent Table Statistics by analyzing all 370,000 tables; our use_stat_tables is set to preferably_for_queries to use those stats instead of InnoDB's. No real improvement. We also tried to switch optimizer_search_depth to 0; no improvement either.

Another consequence of this huge time passed calculating statistics: it increased our daily backup time with mysqldump of the 200 databases by a lot (before 30 min, now around 5 hours!), which crashes the server most of the time, because the memory increases after each database backup until the server swaps.

Some more details, after 11 hours running:

With MariaDB 10.1 – 2 weeks

With MariaDB 10.5 – 2 weeks

We just received some advice to increase the open-files-limit — we still have to do it in production — and then raise table_open_cache (for now it's table_open_cache = 20000, table_definition_cache = 40000).

Now max open files for mariadb is set to 32768:

```
cat /proc/$(pidof mariadbd)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             160172               160172               processes
Max open files            32768                32768                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       160172               160172               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
```

We are running out of ideas. Any help will be really appreciated.
Here is the output of ‘show global status’ ``````| Aborted_clients | 613 | | Aborted_connects | 1 | | Aria_pagecache_blocks_not_flushed | 0 | | Aria_pagecache_blocks_unused | 4 | | Aria_pagecache_blocks_used | 15647 | | Aria_pagecache_write_requests | 1085362 | | Aria_pagecache_writes | 162904 | | Aria_transaction_log_syncs | 108 | | Binlog_commits | 884378 | | Binlog_group_commits | 884287 | | Binlog_group_commit_trigger_count | 0 | | Binlog_group_commit_trigger_lock_wait | 0 | | Binlog_group_commit_trigger_timeout | 0 | | Binlog_snapshot_file | mysql-bin.000038 | | Binlog_snapshot_position | 552835703 | | Binlog_bytes_written | 1626601753 | | Binlog_cache_disk_use | 4811 | | Binlog_cache_use | 882274 | | Binlog_stmt_cache_disk_use | 18 | | Binlog_stmt_cache_use | 2104 | | Busy_time | 0.000000 | | Bytes_sent | 80380675660 | | Compression | OFF | | Connections | 143465 | | Cpu_time | 0.000000 | | Created_tmp_disk_tables | 43263 | | Created_tmp_files | 4545 | | Created_tmp_tables | 961351 | | Delete_scan | 2230 | | Empty_queries | 19369816 | Handler_commit | 42120185 | | Handler_delete | 451478 | | Handler_discover | 80 | | Handler_external_lock | 0 | | Handler_icp_attempts | 666402417 | | Handler_icp_match | 666047991 | | Handler_mrr_init | 0 | | Handler_mrr_key_refills | 0 | | Handler_mrr_rowid_refills | 0 | | Handler_prepare | 2037194 | | Handler_rollback | 13 | | Handler_savepoint | 7376 | | Handler_savepoint_rollback | 0 | | Handler_tmp_delete | 0 | | Handler_tmp_update | 3760231 | | Handler_tmp_write | 108494398 | | Handler_update | 723675 | | Handler_write | 468014 | | Innodb_background_log_sync | 39208 | | Innodb_buffer_pool_dump_status | | | Innodb_buffer_pool_resize_status | | | Innodb_buffer_pool_pages_data | 840319 | | Innodb_buffer_pool_bytes_data | 13767786496 | | Innodb_buffer_pool_pages_dirty | 45657 | | Innodb_buffer_pool_bytes_dirty | 748044288 | | Innodb_buffer_pool_pages_flushed | 206518 | | Innodb_buffer_pool_pages_free | 321041 | | Innodb_buffer_pool_pages_misc | 0 | | Innodb_buffer_pool_pages_old | 310176 | | Innodb_buffer_pool_pages_total | 1161360 | | Innodb_buffer_pool_pages_lru_flushed | 0 | | Innodb_buffer_pool_wait_free | 0 | | Innodb_buffer_pool_write_requests | 12861326 | | Innodb_checkpoint_age | 406077659 | | Innodb_checkpoint_max_age | 434155992 | | Innodb_data_fsyncs | 55314 | | Innodb_data_pending_fsyncs | 0 | | Innodb_data_pending_writes | 0 | | Innodb_data_writes | 1116053 | | Innodb_data_written | 2917482773 | | Innodb_dblwr_pages_written | 131349 | | Innodb_dblwr_writes | 1077 | | Innodb_history_list_length | 3 | | Innodb_ibuf_free_list | 1436 | | Innodb_ibuf_merged_delete_marks | 381882 | | Innodb_ibuf_merged_deletes | 4726 | | Innodb_ibuf_merged_inserts | 39635 | | Innodb_ibuf_merges | 7900 | | Innodb_ibuf_segment_size | 1943 | | Innodb_ibuf_size | 506 | | Innodb_log_waits | 0 | | Innodb_log_write_requests | 341852 | | Innodb_log_writes | 936910 | | Innodb_lsn_current | 69175355827 | | Innodb_lsn_flushed | 69175354210 | | Innodb_lsn_last_checkpoint | 68769278168 | | Innodb_max_trx_id | 96623855 | | Innodb_mem_dictionary | 211733568 | | Innodb_os_log_fsyncs | 32811 | | Innodb_os_log_pending_fsyncs | 0 | | Innodb_os_log_pending_writes | 0 | | Innodb_os_log_written | 3097520128 | | Innodb_page_size | 16384 | | Innodb_pages_created | 82283 | | Innodb_pages_written | 178087 | | Innodb_row_lock_current_waits | 0 | | Innodb_row_lock_time | 23594 | | Innodb_row_lock_time_avg | 291 | | Innodb_row_lock_time_max | 20972 | | Innodb_row_lock_waits | 81 | | 
Innodb_rows_deleted | 451067 | | Innodb_rows_inserted | 466012 | | Innodb_rows_updated | 291502 | | Innodb_system_rows_deleted | 0 | | Innodb_system_rows_inserted | 0 | | Innodb_system_rows_updated | 0 | | Innodb_num_open_files | 16144 | | Key_blocks_not_flushed | 0 | | Key_blocks_unused | 106026 | | Key_blocks_used | 2073 | | Key_blocks_warm | 2 | | Key_write_requests | 25758 | | Key_writes | 7625 | | Last_query_cost | 0.000000 | | Master_gtid_wait_count | 0 | | Master_gtid_wait_time | 0 | | Master_gtid_wait_timeouts | 0 | | Max_statement_time_exceeded | 0 | | Max_used_connections | 306 | | Memory_used | 1232810768 | | Memory_used_initial | 33626248 | | Not_flushed_delayed_rows | 0 | | Open_files | 238 | | Open_streams | 4 | | Open_table_definitions | 40000 | | Open_tables | 16144 | | Opened_files | 292123 | | Opened_plugin_libraries | 0 | | Opened_table_definitions | 80968 | | Opened_tables | 288311 | | Opened_views | 1 | | Queries | 41505265 | | Questions | 41505265 | | Rows_sent | 363638953 | | Select_full_join | 20989 | | Select_full_range_join | 25263 | | Select_range | 19735506 | | Select_range_check | 0 | | Select_scan | 3619795 | | Slow_queries | 91 | | Sort_merge_passes | 875 | | Sort_priority_queue_sorts | 210279 | | Sort_range | 4010080 | | Sort_rows | 34727539 | | Sort_scan | 1066118 | | Subquery_cache_hit | 717168 | | Subquery_cache_miss | 1571758 | | Syncs | 3347 | | Table_locks_immediate | 291843 | | Table_locks_waited | 60 | | Table_open_cache_active_instances | 1 | | Table_open_cache_hits | 46645351 | | Table_open_cache_misses | 2666806 | | Table_open_cache_overflows | 271551 | | Threads_running | 1 | | Update_scan | 2604 | | Uptime | 39677 `````` My.cnf

``````[server]
binlog_format=mixed
query_cache_size = 0
query_cache_type = 0
query_cache_limit = 8M
max_connections = 450
wait_timeout = 120
interactive_timeout = 19200
tmp_table_size = 24M
max_heap_table_size = 24M
max_allowed_packet = 256M
table_open_cache = 20000
table_definition_cache = 40000
open_files_limit = 0
innodb_lock_wait_timeout = 50
innodb_file_per_table
innodb_buffer_pool_size = 18G
innodb_log_buffer_size = 256M
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_buffer_pool_dump_at_shutdown = ON
analyze_sample_percentage = 0
optimizer_search_depth = 0
``````

## Ola Hallengren SQL Server Index and Statistics Maintenance solution

We have been using the Ola Hallengren SQL Server Index and Statistics Maintenance solution for the past 6 months in our production system. The script is only used for updating statistics, not index maintenance. The job used to take about 90-120 minutes to complete, which was completely normal considering the database size (1.8 TB). All of a sudden, the job started taking about 5-6 hours to complete over the past couple of weeks. We haven't made any changes to the system. Each statistics update used to take less than 5 seconds; now they take about 60-250 seconds to complete. All this happened within a couple of days, not gradually. We are using SQL Server Enterprise edition. Has anyone experienced this kind of issue before? Any suggestions are greatly appreciated. Below are the parameters used in the SQL job.
``````EXECUTE dbo.IndexOptimize
--@Databases = 'ALL_DATABASES',
@Databases = 'User DB',
@FragmentationLow = NULL,
@FragmentationMedium = NULL,
@FragmentationHigh = NULL,
@OnlyModifiedStatistics = 'Y',
@MAXDOP = 2,
@Indexes = 'ALL_INDEXES',
@LogToTable = 'Y'
--,@Indexes = 'ALL_INDEXES,-%.dbo.eventhistory,-%.dbo.eventhistoryrgc'
``````

Best Regards, Arun

## Where can I find the Android API level statistics online?

For the longest time, Google has displayed Android API level information here: https://developer.android.com/about/dashboards/ Unfortunately, when I went to check it today, I found the following message: "You can find platform version information in Android Studio's Create New Project wizard." I already have an existing project in Android Studio, and I'm considering whether to change the `minSdkVersion`. I don't want to create a new project just to check the Android API level statistics. Where can I check the Android API level statistics without creating a new Android Studio project?

## probability or statistics – Finding the type I error

Let $$x_1, \dots, x_n$$ be a random sample from a Poisson distribution with mean $$\theta$$, that is, $$f(x;\theta) = \frac{\theta^x e^{-\theta}}{x!}$$. We use a test that accepts the null hypothesis if $$\frac{1}{3} \le \overline{x} \le \frac{2}{3}$$ and rejects it otherwise. For $$n = 9$$, what is the type I error?

## statistics – Anydice: Counting Successes and Rerolling Failures In Dice Pools

The issue you're running into here is that AnyDice loves summing numbers, especially when outputting. The output you're getting is the sum of the dice, not the number of successes (hence going up to 60). The easiest way around this is to have the function work out the number of successes, which we can handily do by comparing our `ROLL` sequence to our `TARGET` number. Also handily, to not have to worry about actually shuffling around the sequence to get the reroll, we can simply add the new roll and subtract a 1 (to compensate for the roll that should be removed). We also know that if we don't reroll, we have no successes. The resulting function then becomes:

``````function: reroll greatest of ROLL:s as REROLL:n if greater than TARGET:n {
 if 1@ROLL >= TARGET {
  result: (ROLL >= TARGET) + (REROLL >= TARGET) - 1
 }
 else {
  result: 0
 }
}
``````

Which we can test over a range of targets (you'll want to limit the number of targets in that range so it doesn't time out):

``````loop N over {10,12,14,16,18,20} {
 output (reroll greatest of 3d20 as 1d20 if greater than N) named "(N)"
}
``````

And see the transposed graph to see how the probability of a given number of successes varies with the target.

## statistics – Relation between moments of a measured PMF under matrix multiplication?

I'm working on a physics problem where we have a measured photon energy spectrum (I'm thinking of it as a probability mass function), which is created by an energy spectrum of electrons impacting the atmosphere. The two spectra are related by a (known) matrix multiplication. If it's valid to think about the two spectra as PMFs, is there a relation between the first few moments of the two distributions? I'm asking because the matrix is very ill-conditioned, and the first few moments of the resulting spectrum are all I need to know for the physics problem. The "full problem" of finding the inverse of the ill-conditioned matrix is handled through regularization (Tikhonov, LASSO, etc.). Is it possible to pose a better-conditioned problem by seeking "less information" about the transformed distribution?
I did some searching and found this: Characteristic Function and Random Variable Transformation. I think the result there pertains to the individual bins in my measured spectrum being considered together as a vector random variable. I can, of course, get the expectation value and the variance of each bin, but what I'm looking for is a way to get the center and approximate width of the resulting distribution without solving the full inverse problem. Many thanks

## statistics – Cochran's Q Alternative for non-dichotomous data

I'm trying to compare mean spending across ethnicities. The ethnicity data is 'select all that apply', so a subject may be counted in more than one ethnicity, but not necessarily. I'm using Cochran's Q for related proportional data, but am having trouble finding a method to compare the mean spending.

## probability or statistics – Sample discrete distributions with a good range of observed entropies

For a numerical sanity check, I need to sample random sequences of $$n$$ positive numbers adding up to 1, with a high chance of observing both high-entropy and low-entropy sequences. The entropy is computed by treating each sequence as a discrete probability distribution. Ideally, the histogram of sampled sequence entropies would approach a uniform distribution. Can someone suggest a way to do this in Mathematica? Here's a naive generation method, showing the non-uniform entropy histogram it produces, and the kind of histogram I would like to see.

``````n = 10; s = 10000;
normalize[seq_] := seq/Total@seq;
sequences = normalize /@ RandomVariate[UniformDistribution[], {s, n}];
entropy[seq_] := -Total[# Log[#] & /@ seq];
Histogram[entropy /@ sequences, PlotLabel -> "observed"]
Histogram[RandomVariate[UniformDistribution[], s], PlotLabel -> "desired"]
``````
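One way to get a much flatter spread of entropies (a sketch of my own, not from the original post; it assumes the built-in `DirichletDistribution` and reuses `entropy` from the code above): draw each sequence from a symmetric Dirichlet distribution whose concentration parameter is itself drawn log-uniformly. Small concentrations put nearly all mass on a few entries (low entropy), while large ones give near-uniform sequences (high entropy).

``````n = 10; s = 10000;
(* a Dirichlet draw omits the last coordinate, so we append it;
   components are almost surely positive, so # Log[#] in entropy is safe *)
sampleSeq[] := Module[{a, v},
  a = 10^RandomReal[{-2, 2}]; (* log-uniform concentration parameter *)
  v = RandomVariate[DirichletDistribution[ConstantArray[a, n]]];
  Append[v, 1 - Total[v]]];
sequences = Table[sampleSeq[], {s}];
Histogram[entropy /@ sequences, PlotLabel -> "Dirichlet mixture"]
``````

The histogram will not be exactly uniform, but it covers both the low- and high-entropy ends; widening or narrowing the exponent range in `RandomReal[{-2, 2}]` shifts the balance.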
# Bullet Physics on Deformable Rigid Bodies

## Recommended Posts

When a body experiences a collision, the impact may cause a bump or dent on the body itself, but the box shape or hull shape only affects the transformation. How do I make a collision affect the vertex buffer? In soft-body simulation you can actually affect the vertex buffer; you can't with rigid bodies, unless I cast a soft body on the rigid body with limited elasticity. It seems really strange to me... how do I make this happen? Thanks Jack

• ### Similar Content

• By ShFil Hi, I'm trying to fix physics in openrw. Actually we have two problems: - high-velocity cars can clip through world geometry https://github.com/rwengine/openrw/issues/76 - the suspension behaves oddly: we have tuned suspension length, stiffness, and max travel to make the ride feel as realistic as possible, but the stiffness level is too high and it's easy to flip a vehicle. With a reduced stiffness level and otherwise optimal max travel and suspension length, the wheels sink. Topic: https://github.com/rwengine/openrw/issues/416 Two main files with physics: https://github.com/rwengine/openrw/blob ... stance.cpp https://github.com/rwengine/openrw/blob ... Object.cpp Example of the problem: https://youtu.be/m87bJxE9hnU?t=2m50s I will be grateful for help. Btw. running openrw requires gta3 assets; I have one steam key for gta if you want to try to test it.

• hello, I am trying to implement a realistic simulation of a roulette wheel. It is not clear to me what the proper way is to simulate the initial state of the ball, when it spins against the edge of the wheel until it loses energy and starts falling towards the centre. I modelled the conic table as a height map, as I assumed that would provide the smoothest surface, but I see rough squared corners everywhere anyway, so I really don't have a smooth inner wall to slide against. I wonder if I should ignore the wall and simulate the sliding in code, i.e. apply a force (or impulse?) each frame to keep the ball at a fixed radius and somehow force it to follow a desired angular speed. Later, when I want to execute the falling behaviour, just stop applying those forces and let the simulator and gravity do their work. Makes sense?

• By kevinyu Original Post: Limitless Curiosity Out of the various phases of a physics engine, constraint resolution was the hardest for me to understand personally. I needed to read a lot of different papers and articles to fully understand how constraint resolution works. So I decided to write this article to help me understand it more easily in the future if, for example, I forget how this works. This article will tackle the problem by giving an example and then making a general formula out of it. So let us delve into a pretty common scenario: two of our rigid bodies collide and penetrate each other, as depicted below. From the scenario above we can formulate: We don't want our rigid bodies to intersect each other, so we construct a constraint where the penetration depth must be at least zero. $$C: d \ge 0$$ This is an inequality constraint; we can transform it into a simpler equality constraint by only solving it when two bodies are penetrating each other. If two rigid bodies don't collide with each other, we don't need any constraint resolution.
So: if $$d \ge 0$$, do nothing; else if $$d < 0$$, solve $$C: d = 0$$. Now we can solve this equation by calculating $$\Delta \vec{p_1}, \Delta \vec{p_2}, \Delta \vec{r_1}$$, and $$\Delta \vec{r_2}$$ that cause the constraint above to be satisfied. This method is called the position-based method. It satisfies the constraint immediately in the current frame, but it might cause a jittery effect. A much more modern and preferable method, used in Box2D, Chipmunk, Bullet, and my physics engine, is called the impulse-based method. In this method, we derive a velocity constraint equation from the position constraint equation above. We are working in 2D, so angular velocity and the cross product of two vectors are scalars. Next, we need to find $$\Delta V$$, or impulse, to satisfy the velocity constraint. This $$\Delta V$$ is caused by a force. We call this force the 'constraint force'. The constraint force only exerts a force in the direction of illegal movement, in our case the penetration normal. We don't want this force to do any work, contribute to, or restrict any motion in a legal direction. $$\lambda$$ is a scalar, called the Lagrangian multiplier. To understand why the constraint force works in the $$J^{T}$$ direction (remember J is a 1 by 12 row matrix, so $$J^{T}$$ is a 12-dimensional column vector), try to remember the equation for a three-dimensional plane. Now we can draw a similarity between equation (1) and equation (2), where $$\vec{n}^{T}$$ plays the role of J and $$\vec{v}$$ plays the role of V. So we can interpret equation (1) as a 12-dimensional plane, and conclude that $$J^{T}$$ is the normal of this plane. If a point is outside a plane, the shortest path from the point to the plane is along the normal direction. After we calculate the Lagrangian multiplier, we have a way to get back the impulse, $$\Delta V = M^{-1}J^{T}\lambda$$ (equation (3)). Then, we can apply this impulse to each rigid body.

Baumgarte Stabilization

Note that solving the velocity constraint doesn't mean that we satisfy the position constraint. When we solve the velocity constraint, there is already a violation in the position constraint. We call this violation position drift. What we achieve is stopping the two bodies from penetrating deeper (the penetration depth stops growing). It might be fine for a slow-moving object, as the position drift is not noticeable, but it becomes a problem as the object moves faster. The animation below demonstrates what happens when we solve the velocity constraint alone. So instead of purely solving the velocity constraint, we add a bias term to fix any violation that happens in the position constraint. So what is the value of the bias? As mentioned before, we need this bias to fix positional drift, so we want it to be proportional to the penetration depth. This method is called Baumgarte stabilization, and $$\beta$$ is the Baumgarte term. The right value for this term may differ between scenarios; we need to tweak it between 0 and 1 to find the value that makes our simulation stable.

Sequential Impulse

If our world consists of only two rigid bodies and one contact constraint, then the above method works decently. But in most games there are more than two rigid bodies. One body can collide with and penetrate two or more bodies, and we need to satisfy all the contact constraints simultaneously. For a real-time application, solving all these constraints simultaneously is not feasible. Erin Catto proposes a practical solution called sequential impulse.
The idea here is similar to projected Gauss-Seidel. We calculate $$\lambda$$ and $$\Delta V$$ for each constraint, one by one, from constraint one to constraint n (n = number of constraints). After we finish iterating through the constraints and calculating $$\Delta V$$, we repeat the process from constraint one to constraint n until the specified number of iterations is reached. This algorithm converges to the actual solution: the more we repeat the process, the more accurate the result will be. In Box2D, Erin Catto set ten as the default number of iterations. Another thing to notice is that while we satisfy one constraint we might unintentionally satisfy another. Say, for example, that we have two different contact constraints on the same rigid body. When we solve $$\dot{C_1}$$, we might incidentally make $$\dot{d_2} \ge 0$$. Remember that equation (5) is a formula for $$\dot{C}: \dot{d} = 0$$, not $$\dot{C}: \dot{d} \ge 0$$, so we don't need to apply it to $$\dot{C_2}$$ anymore. We can detect this by looking at the sign of $$\lambda$$. If the sign of $$\lambda$$ is negative, the constraint is already satisfied; if we used this negative lambda as an impulse, it would pull the bodies closer instead of pushing them apart. It is fine for an individual $$\lambda$$ to be negative, but we need to make sure the accumulation of $$\lambda$$ is not negative. In each iteration, we add the current lambda to normalImpulseSum, then clamp normalImpulseSum between 0 and positive infinity. The actual Lagrangian multiplier that we use to calculate the impulse is the difference between the new normalImpulseSum and the previous normalImpulseSum.

Restitution

Okay, now we have successfully resolved contact penetration in our physics engine. But what about simulating objects that bounce when a collision happens? The property of bouncing on collision is called restitution. The coefficient of restitution, denoted $$C_{r}$$, is the ratio of the parting speed after the collision to the closing speed before the collision. The coefficient of restitution only affects the velocity along the normal direction, so we need to take the dot product with the normal vector. Notice that in this specific case $$V_{initial}$$ is just JV. If we look back at our constraint above, we set $$\dot{d}$$ to zero because we assumed that the object does not bounce back ($$C_{r}=0$$). So, if $$C_{r} \ne 0$$, we can modify our constraint so the desired velocity is $$V_{final}$$ instead of 0. We can merge our old bias term with the restitution term to get a new bias value.

``````// init constraint
// Calculate J(M^-1)(J^T). This term is constant so we can calculate this first
for (int i = 0; i < constraint->numContactPoint; i++)
{
    ftContactPointConstraint *pointConstraint = &constraint->pointConstraint[i];

    pointConstraint->r1 = manifold->contactPoints[i].r1 -
        (bodyA->transform.center + bodyA->centerOfMass);
    pointConstraint->r2 = manifold->contactPoints[i].r2 -
        (bodyB->transform.center + bodyB->centerOfMass);

    real kNormal = bodyA->inverseMass + bodyB->inverseMass;

    // Calculate r X normal
    real rnA = pointConstraint->r1.cross(constraint->normal);
    real rnB = pointConstraint->r2.cross(constraint->normal);

    // Calculate J(M^-1)(J^T).
    kNormal += (bodyA->inverseMoment * rnA * rnA + bodyB->inverseMoment * rnB * rnB);

    // Save inverse of J(M^-1)(J^T).
    pointConstraint->normalMass = 1 / kNormal;

    pointConstraint->positionBias = m_option.baumgarteCoef * manifold->penetrationDepth;

    ftVector2 vA = bodyA->velocity;
    ftVector2 vB = bodyB->velocity;
    real wA = bodyA->angularVelocity;
    real wB = bodyB->angularVelocity;

    ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));

    // Calculate JV
    real jnV = dv.dot(constraint->normal);
    pointConstraint->restitutionBias = -restitution * (jnV + m_option.restitutionSlop);
}

// solve constraint
while (numIteration > 0)
{
    for (int i = 0; i < m_constraintGroup.nConstraint; ++i)
    {
        ftContactConstraint *constraint = &(m_constraintGroup.constraints[i]);
        int32 bodyIDA = constraint->bodyIDA;
        int32 bodyIDB = constraint->bodyIDB;
        ftVector2 normal = constraint->normal;
        ftVector2 tangent = normal.tangent();

        for (int j = 0; j < constraint->numContactPoint; ++j)
        {
            ftContactPointConstraint *pointConstraint = &(constraint->pointConstraint[j]);

            ftVector2 vA = m_constraintGroup.velocities[bodyIDA];
            ftVector2 vB = m_constraintGroup.velocities[bodyIDB];
            real wA = m_constraintGroup.angularVelocities[bodyIDA];
            real wB = m_constraintGroup.angularVelocities[bodyIDB];

            // Calculate JV. (jnV = JV, dv = derivative of d, JV = derivative(d) dot normal)
            ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));
            real jnV = dv.dot(normal);

            // Calculate lambda
            real nLambda = (-jnV + pointConstraint->positionBias / dt +
                pointConstraint->restitutionBias) * pointConstraint->normalMass;

            // Add lambda to normalImpulse and clamp
            real oldAccumI = pointConstraint->nIAcc;
            pointConstraint->nIAcc += nLambda;
            if (pointConstraint->nIAcc < 0) {
                pointConstraint->nIAcc = 0;
            }

            // Find real lambda
            real I = pointConstraint->nIAcc - oldAccumI;

            // Calculate linear impulse
            ftVector2 nLinearI = normal * I;

            // Calculate angular impulse
            real rnA = pointConstraint->r1.cross(normal);
            real rnB = pointConstraint->r2.cross(normal);
            real nAngularIA = rnA * I;
            real nAngularIB = rnB * I;

            // Apply linear impulse
            m_constraintGroup.velocities[bodyIDA] -= constraint->invMassA * nLinearI;
            m_constraintGroup.velocities[bodyIDB] += constraint->invMassB * nLinearI;

            // Apply angular impulse
            m_constraintGroup.angularVelocities[bodyIDA] -= constraint->invMomentA * nAngularIA;
            m_constraintGroup.angularVelocities[bodyIDB] += constraint->invMomentB * nAngularIB;
        }
    }
    --numIteration;
}
``````

General Steps to Solve a Constraint

In this article, we have learned how to solve contact penetration by defining it as a constraint and solving it. But this framework is not only used to solve contact penetration. We can do many more cool things with constraints, for example implementing hinge joints, pulleys, springs, etc. So this is the step-by-step recipe for constraint resolution:

1. Define the constraint in the form $$\dot{C}: JV + b = 0$$. V is always $$\begin{bmatrix} \vec{v1} \\ w1 \\ \vec{v2} \\ w2\end{bmatrix}$$ for every constraint, so we need to find J, the Jacobian matrix, for that specific constraint.
2. Decide the number of iterations for the sequential impulse.
3. Find the Lagrangian multiplier by inserting the velocity, mass, and Jacobian matrix into the equation $$\lambda = -\frac{JV + b}{J M^{-1} J^{T}}$$.
4. Do step 3 for each constraint, and repeat the process for the chosen number of iterations.
5. Clamp the Lagrangian multiplier if needed.

This marks the end of this article. Feel free to ask if something is still unclear. And please inform me if there are inaccuracies in my article. Thank you for reading.
NB: Box2D uses sequential impulse, but no longer uses Baumgarte stabilization; it uses full NGS to resolve the position drift. Chipmunk still uses Baumgarte stabilization.

References

- Allen Chou's post on Constraint Resolution
- A Unified Framework for Rigid Body Dynamics
- An Introduction to Physically Based Modeling: Constrained Dynamics
- Erin Catto's Box2D and presentation on constraint resolution

• In the Draw call, I just render all buffered vertices collected from the dynamics world; at the entry point of Draw, it always reports empty buffers. I have set up the DXDebugDrawer correctly by deriving from the btIDebugDraw interface and I've made a call to setDebugDrawer, so how come it doesn't work?

``````if (m_dynamicsWorld)
{
    m_dynamicsWorld->debugDrawWorld();
    // template argument of the cast was lost in the forum formatting;
    // restored here from the class named in the post
    dynamic_cast<DXDebugDrawer*>(m_dynamicsWorld->getDebugDrawer())->Draw();
}
``````

thanks Jack

• I call the shatter function and it now has a series of chunks stored, and I can retrieve those into the main physics system. But do I hide the main object and reconstruct the fragment pieces as brand new game objects, or something of the sort? Thanks Jack

• By -Tau- Hello, I'm trying to use needBroadphaseCollision to filter collision between moveable objects and the player. It works pretty well, but there is one problem. The idea is simple: if an object is still, or does not move faster than a threshold, ignore collision with the player's physical body; in this case I use my own code to update the player. If the speed of the object breaks that threshold, convert the player to a ragdoll and let Bullet do the update. I'm using my own custom character controller as I couldn't use Bullet's. My player is a ragdoll with btRigidBody body parts where linear and angular factors are set to 0, and these limbs are updated based on the model animation as long as the player has control over their character. As soon as a collision with a fast-moving object happens, the player loses control over their character, linear and angular factors are set to 1, and I let Bullet handle the ragdoll physics. It works well for most objects, but I have an object that uses btCompoundShape for its body. When this object is still (it didn't move for a while), it works. However, when this object starts to move and doesn't break the speed threshold, it gets affected by the player's physical body (the player starts to push the object around). I added some debug variables and it seems that even when needBroadphaseCollision returns false, there are still contact points generated between the player and this object. What am I missing?

• I am a beginner in the game dev business; however, I plan to build a futuristic MMO with some interesting mechanics. But I have some doubts about the shooting mechanics I chose for this game and would like to know your opinion. The mechanic goes as follows: - Each gun would have its damage-per-shot value - Each gun would have its shots-per-second value - Each gun would have its accuracy rating Now the question is: how do I calculate the output damage? I have three available options: 1) Calculate the chance of each shot hitting the target (per-shot accuracy) 2) Multiply the damage output of a weapon by its accuracy rating (a weapon with 50% accuracy deals 50% of its base damage) 3) Don't use accuracy at all and just adjust the weapon damage output Which of these three mechanics would you like to see in a game? Mind, this will be an MMO game, so it will have lock-on targets, AoE effects and all that jazz.
• My AI subsystem is completely dragged down by the physics when objects have GImpact proxies. When you need to calculate stuff like bumps, it is very horrible... It is even worse than using compound vehicle methods... Thanks Jack

• I looked at one of the Bullet physics samples that covers height fields. However, when the height fields get rendered, the "DemoApplication" class calls the OpenGL shape drawer object, which finally retrieves the display list of the collision shape and is strongly coupled to OpenGL. I want to do the same thing with DirectX (D3DX at the moment, damn old, but hey). How can I draw the height fields out? Is there a way to turn the display list into something recognizable by Direct3D 9? Thanks Jack

•

``````//synchronize the wheels with the (interpolated) chassis worldtransform
l_vehicle->updateWheelTransform(j, true);
const btTransform& t = l_vehicle->getWheelInfo(j).m_worldTransform;
D3DXMATRIX l_mat = BT2DX_MATRIX(t);
D3DXMATRIX l_rot;
D3DXMatrixRotationY(&l_rot, 1.57f);
l_mat = l_rot * l_mat; // assume front wheels
AgentMotionState* motion = dynamic_cast<AgentMotionState*>(l_vehicle->getRigidBody()->getMotionState());
if (motion)
{
    boost::shared_ptr pObj = motion->m_object; // shared_ptr template argument lost in the forum formatting
    if (j == 0)
    {
        D3DXVECTOR3 s, p;
        D3DXQUATERNION r;
        D3DXMatrixDecompose(&s, &r, &p, &l_mat);
        double yaw, pitch, roll;
        yaw = pitch = roll = 0.0f;
        QuatToEuler(r, yaw, pitch, roll);
        TRACE("Veh: " << i << " Pos of wheel 0 is " << p.x << " " << p.y << " " << p.z);
        yaw = CapRadian(yaw);
        TRACE("Veh: " << i << " Rot of wheel 0 is " << yaw);
        FRAME* frontWheels = (FRAME*)D3DXFrameFind(pObj->m_mesh->GetFrameRoot(), "Front_Left_Wheel");
        frontWheels->matCombined = l_mat;
        //frontWheels->TransformationMatrix = l_mat;
    }
``````

Looks like the left front wheel is on the far left of the chassis with a large gap? Why is that? Thanks Jack
# Wave length of a Pendulum

• Apr 17th 2013, 03:41 PM sakonpure6

Wave length of a Pendulum

Hi, I have the following problem.

Quote: Pendulum A is 20 cm long and has a 5 g mass on it. Pendulum B is 30 cm long and has a 10 g mass on it. Which one has a faster period?

First of all, does the mass of the pendulum matter at all? And if not, then I would need to find the time in both cases and divide by 1, right? Using the equation d = v1·t + 0.5·a·t², Time Pendulum A = 0.20 s, which means that the period is 0.20 s. Time Pendulum B = 0.25 s, which means that the period is 0.25 s. Am I right? Thank you in advance.

• Apr 17th 2013, 05:33 PM topsquark

Re: Wave length of a Pendulum

Quote: Originally Posted by sakonpure6

The distance formula you used does not apply here; a pendulum is not in uniform acceleration. For a simple pendulum (small swings), the period is

$T = 2 \pi \sqrt{L/g}$

where L is the length of the pendulum. Note that the mass does not appear at all. -Dan
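For a quick check (a worked computation added here, assuming the small-angle formula above and $g \approx 9.8\ \mathrm{m/s^2}$):

$$T_A = 2\pi\sqrt{\frac{0.20}{9.8}} \approx 0.90\ \text{s}, \qquad T_B = 2\pi\sqrt{\frac{0.30}{9.8}} \approx 1.10\ \text{s},$$

so the mass cancels out entirely, and the shorter pendulum A completes a swing faster.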
# Baire category theorem and lower semicontinuous functions

Let $(X, \tau)$ be a Baire space, $I$ an index set, and for each $x \in X$ let the set $\{f_i(x) : i \in I\}$ be bounded above, where each mapping $f_i : (X,\tau) \to \mathbb{R}$ is lower semicontinuous. Using the Baire category theorem, prove that there exists an open subset $O$ of $(X,\tau)$ such that the set $\{f_i(x) : x \in O, i \in I\}$ is bounded above.

My proof.

1. Let $X_n = \bigcap_{i \in I}\big[f_i^{-1}( (-\infty, n])\big]$. Since each $f_i$ is lower semicontinuous, $f_i^{-1}( (-\infty, n])$ is closed in $X$, and $X_n$ is closed as an intersection of closed sets. Do we need to note here that for some $n$ the set $X_n$ is nonempty, because $\{f_i(x) : i \in I\}$ is bounded above?
2. $\bigcup_{n=1}^{\infty}(-\infty, n] = \mathbb{R}$.
3. $f_i^{-1}(\mathbb{R}) = X$ for any $f_i$.
4. From 2 and 3 it follows that $\bigcup_{n=1}^{\infty} X_n = X$ - I'm not sure if this is correct.
5. Since $X$ is a Baire space, one of the $X_n$ is not nowhere dense. Also $X_n$ is closed, hence $\operatorname{Int}(\bar{X_n}) = \operatorname{Int}(X_n) \neq \emptyset$.
6. Let $O = \operatorname{Int}(X_n)$, an open set.
7. Then $O \subseteq X_n \subseteq f_i^{-1}((-\infty, n])$.
8. Then $f_i(O) \subseteq (-\infty, n]$.
9. And $\bigcup_{i \in I}f_i(O) = \{f_i(x) : x \in O, i \in I\} \subseteq (-\infty, n]$, which completes the proof.

Could you please verify my proof? Thank you!

## 1 Answer

The idea is essentially OK, but it could be formulated a bit better; my attempt: What is given is $$\forall x \in X: \exists n_x : \forall i \in I: f_i(x) \le n_x$$ This is just a formal restatement that all sets $\{f_i(x): i \in I\}$ are bounded above. So, reformulating again: $$\forall x \in X: \exists n_x (\in \mathbb{N}): x \in \bigcap_{i \in I} (f_i)^{-1}[(-\infty, n_x]]$$ or, using your definition of $X_n$: $$\forall x \in X: \exists n_x: x \in X_{n_x}$$ which immediately shows $$X = \bigcup_n X_n$$ without further computation (your steps 2 and 3 are irrelevant), and each $X_n$ is closed by the $f_i$ all being lsc and intersections of closed sets being closed, as you stated. Note that $X_n \subseteq X_{n+1}$ for all $n$, so we have an increasing family. By the fact that $X$ is Baire, we know that $\exists N: \operatorname{int}(X_N) \neq \emptyset$; let $O = \operatorname{int}(X_N)$. Now if $x \in O, i \in I$, we know that $x \in X_N \subseteq f_i^{-1}[(-\infty, N]]$, so that $f_i(x) \le N$. This shows that $N$ is an upper bound for the set $\{f_i(x): x \in O, i \in I\}$, as required.

• why do we need to note that $X_n$ is an increasing family? – Andreo Apr 29 '18 at 20:37
• @Andreo It's not really important, but it helps to build a picture, I think. – Henno Brandsma Apr 29 '18 at 21:44
# Suppose I gave you a bag of M&M's, but I didn't let you see the original packaging so you can't determine which plant made the M&M's. Your job is to count the number of candies of each color in your bag and figure out which plant made your bag. What test should you do to determine this?

Question: Chi-square tests

2021-02-14

Step 1 From the given information, our job is to count the number of candies of each color in our bag and figure out which plant made our bag. There are several color categories, and the expected proportion should be known for each category.

Step 2 Chi-square test: The chi-square test is used to test the independence of attributes and goodness of fit. This test is used for qualitative variables. A chi-square statistic tests how well expectations match the data actually observed. The chi-square test's null hypothesis is that there is no relation between the population's categorical variables; they are independent. Therefore, the chi-square goodness-of-fit test should be used, comparing the observed color counts against each plant's known color distribution.

### Relevant Questions

Case: Dr. Jung's Diamonds Selection With Christmas coming, Dr. Jung became interested in buying diamonds for his wife. After perusing the Web, he learned about the "4Cs" of diamonds: cut, color, clarity, and carat. He knew his wife wanted round-cut earrings mounted in white gold settings, so he immediately narrowed his focus to evaluating color, clarity, and carat for that style of earring. After a bit of searching, Dr. Jung located a number of earring sets that he would consider purchasing. But he knew the pricing of diamonds varied considerably. To assist in his decision making, Dr. Jung decided to use regression analysis to develop a model to predict the retail price of different sets of round-cut earrings based on their color, clarity, and carat scores. He assembled the data in the file Diamonds.xls for this purpose. Use this data to answer the following questions for Dr. Jung. 1) Prepare scatter plots showing the relationship between the earring prices (Y) and each of the potential independent variables. What sort of relationship does each plot suggest? 2) Let X1, X2, and X3 represent diamond color, clarity, and carats, respectively. If Dr. Jung wanted to build a linear regression model to estimate earring prices using these variables, which variables would you recommend that he use? Why? 3) Suppose Dr. Jung decides to use clarity (X2) and carats (X3) as independent variables in a regression model to predict earring prices. What is the estimated regression equation? What is the value of the R2 and adjusted-R2 statistics? 4) Use the regression equation identified in the previous question to create estimated prices for each of the earring sets in Dr. Jung's sample. Which sets of earrings appear to be overpriced and which appear to be bargains? Based on this analysis, which set of earrings would you suggest that Dr. Jung purchase? 5) Dr. Jung now remembers that it sometimes helps to perform a square root transformation on the dependent variable in a regression problem. Modify your spreadsheet to include a new dependent variable that is the square root of the earring prices (use Excel's SQRT( ) function). If Dr.
Jung wanted to build a linear regression model to estimate the square root of earring prices using the same independent variables as before, which variables would you recommend that he use? Why? 6) Suppose Dr. Jung decides to use clarity (X2) and carats (X3) as independent variables in a regression model to predict the square root of the earring prices. What is the estimated regression equation? What is the value of the R2 and adjusted-R2 statistics? 7) Use the regression equation identified in the previous question to create estimated prices for each of the earring sets in Dr. Jung's sample. (Remember, your model estimates the square root of the earring prices, so you must square the model's estimates to convert them to price estimates.) Which sets of earrings appear to be overpriced and which appear to be bargains? Based on this analysis, which set of earrings would you suggest that Dr. Jung purchase? 8) Dr. Jung now also remembers that it sometimes helps to include interaction terms in a regression model, where you create a new independent variable as the product of two of the original variables. Modify your spreadsheet to include three new independent variables, X4, X5, and X6, representing interaction terms where: X4 = X1 × X2, X5 = X1 × X3, and X6 = X2 × X3. There are now six potential independent variables. If Dr. Jung wanted to build a linear regression model to estimate the square root of earring prices using these variables, which variables would you recommend that he use? Why? 9) Suppose Dr. Jung decides to use color (X1), carats (X3) and the interaction terms X4 (color * clarity) and X5 (color * carats) as independent variables in a regression model to predict the square root of the earring prices. What is the estimated regression equation? What is the value of the R2 and adjusted-R2 statistics? 10) Use the regression equation identified in the previous question to create estimated prices for each of the earring sets in Dr. Jung's sample. (Remember, your model estimates the square root of the earring prices, so you must square the model's estimates to convert them to actual price estimates.) Which sets of earrings appear to be overpriced and which appear to be bargains? Based on this analysis, which set of earrings would you suggest that Dr. Jung purchase?

Explain what changes would be required so that you could analyze the hypothesis using a chi-square test. For instance, rather than looking at test scores as a range from 0 to 100, you could change the variable to low, medium, or high. What advantages and disadvantages do you see in using this approach? Which is the better option for this hypothesis, the parametric approach or the nonparametric approach?

The table below shows the number of people from three different race groups who were shot by police and were either armed or unarmed. These values are very close to the exact numbers; they have been changed slightly for each student to get a unique problem.

| | Black | White | Hispanic | Total |
| --- | --- | --- | --- | --- |
| Suspect was armed | 543 | 1176 | 378 | 2097 |
| Suspect was unarmed | 60 | 67 | 38 | 165 |
| Total | 603 | 1243 | 416 | 2262 |

Give your answer as a decimal to at least three decimal places. a) What percent are Black? b) What percent are Unarmed?
c) In order for two variables to be independent of each other, $$P(A \text{ and } B) = P(A) \cdot P(B)$$. This just means that the percentage of times that both things happen equals the individual percentages multiplied together (only if they are independent of each other). Therefore, if a person's race is independent of whether they were killed while unarmed, then the percentage of black people killed while unarmed should equal the percentage of Black times the percentage of Unarmed. Let's check this. Multiply your answer to part a (percentage of Black) by your answer to part b (percentage of Unarmed). Remember, the previous answer is only correct if the variables are independent. d) Now let's get the real percent that are Black and Unarmed by using the table. If answer c is "significantly different" from answer d, then that means that there could be a different percentage of unarmed people being shot based on race. We will check this out later in the course. Let's compare the percentage of unarmed shot for each race. e) What percent are White and Unarmed? f) What percent are Hispanic and Unarmed? If you compare answers d, e and f, it shows the highest percentage of unarmed people being shot is most likely white. Why is that? This is because there are more white people in the United States than any other race, and therefore there are likely to be more white people in the table. Since there are more white people in the table, there most likely would be more white and unarmed people shot by police than any other race. This pulls the percentage of white and unarmed up. In addition, there most likely would be more white and armed people shot by police. All the percentages for white people would be higher, because there are more white people. For example, the table contains very few Hispanic people, and the percentage of people in the table that were Hispanic and unarmed is the lowest percentage. Think of it this way: if you went to a college that was 90% female and 10% male, then females would most likely have the highest percentage of A grades. They would also most likely have the highest percentage of B, C, D and F grades. The correct way to compare is "conditional probability". Conditional probability is getting the probability of something happening, given we are dealing with just the people in a particular group. g) What percent of blacks shot and killed by police were unarmed? h) What percent of whites shot and killed by police were unarmed? i) What percent of Hispanics shot and killed by police were unarmed? You can see by the answers to parts g and h that the percentage of blacks that were unarmed and killed by police is approximately twice that of whites that were unarmed and killed by police. j) Why do you believe this is happening? Do a search on the internet for reasons why blacks are more likely to be killed by police. Read a few articles on the topic. Write your response using the articles as references. Give the websites used in your response. Your answer should be several sentences long with at least one website listed. This part of this problem will be graded after the due date.

1. Find each of the requested values for a population with a mean of $$\mu = 40$$ and a standard deviation of $$\sigma = 8$$. A. What is the z-score corresponding to $$X = 52$$? B. What is the X value corresponding to $$z = -0.50$$? C.
If all of the scores in the population are transformed into z-scores, what will be the values of the mean and standard deviation for the complete set of z-scores? D. What is the z-score corresponding to a sample mean of $$M=42$$ for a sample of $$n = 4$$ scores? E. What is the z-score corresponding to a sample mean of $$M= 42$$ for a sample of $$n = 6$$ scores? 2. True or false: a. All normal distributions are symmetrical. b. All normal distributions have a mean of 1.0. c. All normal distributions have a standard deviation of 1.0. d. The total area under the curve of all normal distributions is equal to 1. 3. Interpret the location, direction, and distance (near or far) of the following z-scores: a. $$-2.00$$, b. $$1.25$$, c. $$3.50$$, d. $$-0.34$$. 4. You are part of a trivia team and have tracked your team's performance since you started playing, so you know that your scores are normally distributed with $$\mu = 78$$ and $$\sigma = 12$$. Recently, a new person joined the team, and you think the scores have gotten better. Use hypothesis testing to see if the average score has improved based on the following 8 weeks' worth of score data: $$82, 74, 62, 68, 79, 94, 90, 81, 80$$. 5. You get hired as a server at a local restaurant, and the manager tells you that servers' tips are $42 on average but vary by about $12 ($$\mu = 42, \sigma = 12$$). You decide to track your tips to see if you make a different amount, but because this is your first job as a server, you don't know if you will make more or less in tips. After working 16 shifts, you find that your average nightly amount from tips is $44.50. Test for a difference between this value and the population mean at the $$\alpha = 0.05$$ level of significance.

A) Explain why the chi-square goodness-of-fit test is not an appropriate way to find out. B) What might you do instead of weighing the nuts in order to use a $$\chi^2$$ test? Nuts: A company says its premium mixture of nuts contains 10% Brazil nuts, 20% cashews, 20% almonds, and 10% hazelnuts, and the rest are peanuts. You buy a large can and separate the various kinds of nuts. Upon weighing them, you find there are 112 grams of Brazil nuts, 183 grams of cashews, 207 grams of almonds, 71 grams of hazelnuts, and 446 grams of peanuts. You wonder whether your mix is significantly different from what the company advertises.

Finance bonds/dividends/loans exercises, need help or formulas. Some of the exercises, calculating the Ri, are clear, but then I got stuck: A security pays a yearly dividend of 7€ for 5 years, and in the 5th year we could sell it at a price of 75€; the market rate is 19%, the risk-free rate 2%, beta 1.8. What would be its price today? 2.1 And if its dividend grows 1.7% each year over these 5 years, what would be its price? A security pays a constant dividend of 0.90€ for 5 years and thereafter will be sold at 10€; market rate 18%, risk-free rate 2.5%, beta 1.55. What would be its price today? At what price did I purchase a security if I already made a 5€ profit, and this security pays dividends as follows: first year 1.50€, second year 2.25€, third year 3.10€, and in the 3rd year I will sell it for 18€? Market rate is 8%, risk-free rate 0.90%, beta = 2.3. What is the original maturity (in months) of a ZCB, face value 2500€, required rate of return 16% EAR, if we paid 700€, bought it 6 months after issuance, and actually made an instant profit of 58.97€? You'll need 10 Vespas for your parcel delivery business. Each Vespa has a price of 2850€ fully equipped.
Your bank is going to fund this operation with a 5-year loan at a 12% nominal rate at the beginning, increasing by 1% every year. You'll have 5 years to fully amortize this loan, and you want to make monthly installments. At what price should you sell it after 3 1/2 years to lose only 10% of the remaining debt?

In 1985, neither Florida nor Georgia had laws banning open alcohol containers in vehicle passenger compartments. By 1990, Florida had passed such a law, but Georgia had not. (i) Suppose you can collect random samples of the driving-age population in both states, for 1985 and 1990. Let arrest be a binary variable equal to unity if a person was arrested for drunk driving during the year. Without controlling for any other factors, write down a linear probability model that allows you to test whether the open container law reduced the probability of being arrested for drunk driving. Which coefficient in your model measures the effect of the law? (ii) Why might you want to control for other factors in the model? What might some of these factors be? (iii) Now, suppose that you can only collect data for 1985 and 1990 at the county level for the two states. The dependent variable would be the fraction of licensed drivers arrested for drunk driving during the year. How does this data structure differ from the individual-level data described in part (i)? What econometric method would you use?

Is statistical inference intuitive to babies? In other words, are babies able to generalize from sample to population? In this study, 8-month-old infants watched someone draw a sample of five balls from an opaque box. Each sample consisted of four balls of one color (red or white) and one ball of the other color. After observing the sample, the side of the box was lifted so the infants could see all of the balls inside (the population). Some boxes had an "expected" population, with balls in the same color proportions as the sample, while other boxes had an "unexpected" population, with balls in the opposite color proportion from the sample. Babies looked at the unexpected populations for an average of 9.9 seconds (sd = 4.5 seconds) and the expected populations for an average of 7.5 seconds (sd = 4.2 seconds). The sample size in each group was 20, and you may assume the data in each group are reasonably normally distributed. Is this convincing evidence that babies look longer at the unexpected population, suggesting that they make inferences about the population from the sample? Let group 1 and group 2 be the time spent looking at the unexpected and expected populations, respectively. A) Calculate the relevant sample statistic. Enter the exact answer. Sample statistic: _____ B) Calculate the t-statistic. Round your answer to two decimal places. t-statistic = ___________ C) Find the p-value. Round your answer to three decimal places. p-value =

For each of the following situations, state whether you'd use a chi-square goodness-of-fit test, a chi-square test of homogeneity, a chi-square test of independence, or some other statistical test: a) Is the quality of a car affected by what day it was built? A car manufacturer examines a random sample of the warranty claims filed over the past two years to test whether defects are randomly distributed across days of the work week. b) A medical researcher wants to know if blood cholesterol level is related to heart disease.
She examines a database of 10,000 patients, testing whether the cholesterol level (in milligrams) is related to whether or not a person has heart disease. c) A student wants to find out whether political leaning (liberal, moderate, or conservative) is related to choice of major. He surveys 500 randomly chosen students and performs a test.

A new thermostat has been engineered for the frozen food cases in large supermarkets. Both the old and new thermostats hold temperatures at an average of $$25^{\circ}F$$. However, it is hoped that the new thermostat might be more dependable in the sense that it will hold temperatures closer to $$25^{\circ}F$$. One frozen food case was equipped with the new thermostat, and a random sample of 21 temperature readings gave a sample variance of 5.1. Another similar frozen food case was equipped with the old thermostat, and a random sample of 19 temperature readings gave a sample variance of 12.8. Test the claim that the population variance of the old thermostat's temperature readings is larger than that for the new thermostat. Use a $$5\%$$ level of significance. How could your test conclusion relate to the question regarding the dependability of the temperature readings? (Let population 1 refer to data from the old thermostat.)

(a) What is the level of significance? State the null and alternate hypotheses.

- $$H_0:\sigma_{1}^{2}=\sigma_{2}^{2},\ H_1:\sigma_{1}^{2}>\sigma_{2}^{2}$$
- $$H_0:\sigma_{1}^{2}=\sigma_{2}^{2},\ H_1:\sigma_{1}^{2}\neq\sigma_{2}^{2}$$
- $$H_0:\sigma_{1}^{2}=\sigma_{2}^{2},\ H_1:\sigma_{1}^{2}<\sigma_{2}^{2}$$
- $$H_0:\sigma_{1}^{2}>\sigma_{2}^{2},\ H_1:\sigma_{1}^{2}=\sigma_{2}^{2}$$

(b) Find the value of the sample F statistic. (Round your answer to two decimal places; a worked computation is sketched after this question.) What are the degrees of freedom? $$df_{N} = ?$$ $$df_{D} = ?$$ What assumptions are you making about the original distribution?

- The populations follow independent normal distributions. We have random samples from each population.
- The populations follow dependent normal distributions. We have random samples from each population.
- The populations follow independent normal distributions.
- The populations follow independent chi-square distributions. We have random samples from each population.

(c) Find or estimate the P-value of the sample test statistic. (Round your answer to four decimal places.)

(d) Based on your answers in parts (a) to (c), will you reject or fail to reject the null hypothesis?

- At the $$\alpha = 0.05$$ level, we fail to reject the null hypothesis and conclude the data are not statistically significant.
- At the $$\alpha = 0.05$$ level, we fail to reject the null hypothesis and conclude the data are statistically significant.
- At the $$\alpha = 0.05$$ level, we reject the null hypothesis and conclude the data are not statistically significant.
- At the $$\alpha = 0.05$$ level, we reject the null hypothesis and conclude the data are statistically significant.

(e) Interpret your conclusion in the context of the application.

- Reject the null hypothesis; there is sufficient evidence that the population variance is larger in the old thermostat temperature readings.
- Fail to reject the null hypothesis; there is sufficient evidence that the population variance is larger in the old thermostat temperature readings.
- Fail to reject the null hypothesis; there is insufficient evidence that the population variance is larger in the old thermostat temperature readings.
- Reject the null hypothesis; there is insufficient evidence that the population variance is larger in the old thermostat temperature readings.

...
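For illustration (a worked computation added here, following the usual two-sample F-test conventions with population 1 = old thermostat; it is not part of the original question):

$$F = \frac{s_1^2}{s_2^2} = \frac{12.8}{5.1} \approx 2.51, \qquad df_N = 19 - 1 = 18, \qquad df_D = 21 - 1 = 20.$$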
# Improper Integral 1

Calculus Level 3

Given that $\displaystyle \int _{ a }^{ \infty }{ \frac { { e }^{ x } }{ { 9e }^{ 2x }+64 } dx=\frac { \pi }{ K } }$, where $a=\ln { \left(\frac { 8\sqrt { 3 } }{ 3 } \right) }$, what is the value of $K$?
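A worked solution (added for illustration; the substitution is the standard one, not taken from the problem page): with $u = e^x$ and $du = e^x\,dx$,

$$\int_{a}^{\infty} \frac{e^x}{9e^{2x}+64}\,dx = \int_{8\sqrt{3}/3}^{\infty} \frac{du}{9u^2+64} = \left[\frac{1}{24}\arctan\frac{3u}{8}\right]_{8\sqrt{3}/3}^{\infty} = \frac{1}{24}\left(\frac{\pi}{2}-\frac{\pi}{3}\right) = \frac{\pi}{144},$$

so $K = 144$.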
A particle of mass $m$ at rest is acted upon by a force $F$ for a time $t$. Its kinetic energy after an interval $t$ is

(1) $\frac{F^{2}t^{2}}{m}$ (2) $\frac{F^{2}t^{2}}{2m}$ (3) $\frac{F^{2}t^{2}}{3m}$ (4) $\frac{Ft}{2m}$
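A short derivation (added for illustration, assuming a constant force): the acceleration is $a = F/m$, so after time $t$ the speed is $v = Ft/m$, and

$$KE = \frac{1}{2}mv^{2} = \frac{1}{2}m\left(\frac{Ft}{m}\right)^{2} = \frac{F^{2}t^{2}}{2m},$$

which is option (2).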
# [OS X TeX] colorbox Alain Schremmer schremmer.alain at gmail.com Wed Jul 30 14:42:06 EDT 2008 On Jul 30, 2008, at 2:22 PM, Peter Dyballa wrote: > > Am 30.07.2008 um 18:59 schrieb Bruno Voisin: > >> Yellow fraction line: >> $\color{yellow}\dfrac{\color{black}a}{\color{black}b}$. > > > I don't think that Schremmer wants to change the colour of this > line, he rather wants to emphasise on this particle and build a > relation between a negative exponent (starting with a line like -) > and dividing 1 exponent times by the number, i.e., 1 over (another > line) number \times number \times number ... And similarly for > positive exponents. Indeed. Highlighting, as you would with a yellow marker. > > He is writing a schoolbook ... or one for kindergarten use? No way! With kindergarten children, I could do, as I have done, real mathematics. This text is for the mathematical victims of a peculiar American institution called Developmental Education. Here is the recipe for creating Developmental Students: You severely maim kindergarten children with a few well honed techniques, you then let them stew for 12 years in school and, "voilà" as my colleagues would say in English: They are ready for Developmental Education. > Working title: Magnus Opium. Well, that's for the cognoscenti. The more modest title is: From Arithmetic to Differential Calculus in Three Semesters with a Chance of Passing the Sequence Significantly Higher Than the Current Pass Rate of Less Than One Quarter of One Percent in the Traditional Six Semester Sequence. Regards --schremmer P.S. Since this is severely off topic, I would suggest that any response occur as "comments" on http://www.freemathtexts.org/Notes/ (The list there is still at the conceptual stage.) More information about the MacOSX-TeX mailing list
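For readers who want the highlighting technique the thread is discussing, here is a minimal sketch of my own (not from the thread) using the standard `xcolor` package's `\colorbox` for "yellow marker" emphasis on a whole fraction:

``````\documentclass{article}
\usepackage{xcolor}
\usepackage{amsmath}
\begin{document}
% highlighter-style emphasis on the fraction, as with a yellow marker
$1 \div \colorbox{yellow}{$\dfrac{a}{b}$} = \dfrac{b}{a}$
\end{document}
``````

Unlike Bruno Voisin's `\color{yellow}` suggestion, which recolors the fraction line itself, `\colorbox` paints a background box behind the whole fraction.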
# Trig identities

Gold Member

## Homework Statement

Prove that (cos(3x) - cos(7x)) / (sin(7x) + sin(3x)) = tan(2x).

Prove that cos(3x) = 4cos^3(x) - 3cos(x).

## Homework Equations

tan(x) = sin(x)/cos(x) must come into the first one.

## The Attempt at a Solution

I tried separating the fraction so there is only one cos term on top, but I don't know how to deal with the sin terms on the bottom. I haven't got a clue for the second one.

Have you tried the sum-to-product formulas? (aka the Simpson formulas)

Gold Member No, I'm looking for them now. Do you know that they work for these questions?

Gold Member I still can't seem to get them right. My problem is not so much doing it, just working out which formula to use.

Gold Member I'm still stuck on these. Can anyone point me in the right direction?

dextercioby Homework Helper Use the wiki page linked to above, especially this section http://en.wikipedia.org/wiki/List_o...#Product-to-sum_and_sum-to-product_identities $\cos 3x - \cos 7x$ can be reduced to a product of sines. Likewise the sum of sines in the denominator. As for the other identity $$\cos 3x = \cos (2x +x) = \left(\substack{\underbrace{\cos^2 x -\sin^2 x}\\ \cos 2x}\right) \cos x - \left(\substack{\underbrace{2\sin x \cos x}\\ \sin 2x}\right) \sin x = ...$$ The final result follows easily. Last edited:

Gold Member I did the first one, but I'm still stuck on the second one. I ended up with cos(3x) = cos^3(x) - 3sin^2(x)cos(x), which is getting there, but I'm not sure what to do next.

Try to change the sine into a cosine somehow... There's a really important formula which allows you to do that...
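Completing that last hint (a worked step added for illustration): substitute $\sin^2 x = 1 - \cos^2 x$ into the expression already obtained:

$$\cos 3x = \cos^3 x - 3(1-\cos^2 x)\cos x = \cos^3 x - 3\cos x + 3\cos^3 x = 4\cos^3 x - 3\cos x.$$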
## Sparse graphs and an augmentation problem

### Abstract

For two integers $k>0$ and $\ell$, a graph $G=(V,E)$ is called $(k,\ell)$-tight if $|E|=k|V|-\ell$ and $i_G(X)\leq k|X|-\ell$ for each $X\subseteq V$ that induces at least one edge. $G$ is called $(k,\ell)$-sparse if $G-e$ has a spanning $(k,\ell)$-tight subgraph for all $e\in E$. We consider the following augmentation problem. Given a graph $G=(V,E)$ that has a $(k,\ell)$-tight spanning subgraph, find a graph $H=(V,F)$ with a minimum number of edges such that $G+H$ is $(k,\ell)$-sparse. In this paper, we give a polynomial algorithm and a min-max theorem for this augmentation problem when the input is $(k,\ell)$-tight. For general inputs, we give a polynomial algorithm when $k\geq\ell$ and show the NP-hardness of the problem when $k<\ell$. Since $(k,\ell)$-tight graphs play an important role in rigidity theory, these algorithms can be used to make several types of rigid frameworks redundantly rigid by adding a smallest set of new bars.

Bibtex entry:

``````@techreport{egres-20-06,
AUTHOR = {Kir{\'a}ly, Csaba and Mih{\'a}lyk{\'o}, Andr{\'a}s},
TITLE = {Sparse graphs and an augmentation problem},
NOTE= {{\tt egres.elte.hu}},
INSTITUTION = {Egerv{\'a}ry Research Group, Budapest},
YEAR = {2020},
NUMBER = {TR-2020-06}
}
``````
# Outer billiards on the Penrose kite: Compactification and renormalization

• We give a fairly complete analysis of outer billiards on the Penrose kite. Our analysis reveals that this $2$-dimensional dynamical system has a $3$-dimensional compactification, a certain polyhedron exchange map defined on the $3$-torus, and that this $3$-dimensional system admits a renormalization scheme. The two features allow us to make sharp statements concerning the distribution, large- and fine-scale geometry, and hidden algebraic symmetry, of the orbits. One concrete result is that the union of the unbounded orbits has Hausdorff dimension $1$. We establish many of the results with computer-aided proofs that involve only integer arithmetic.

Mathematics Subject Classification: Primary: 37E15; Secondary: 37E99.
# Triangularizing a function matrix with smooth eigenvalues

Given a matrix with function entries, which are smooth and homogeneous, and having smooth eigenvalues, can we find a conjugating matrix with smooth and homogeneous entries that triangularizes the given matrix?

For instance, suppose $A(x)$ is an $N\times N$ matrix with entries $a_{ij}(x)$ that are smooth and homogeneous in $x$ of order $1$, and suppose that the eigenvalues of $A(x)$ are smooth. Find an invertible (maybe only in a small neighborhood) matrix $E(x)$ with smooth entries such that $E^{-1}(x)A(x)E(x)$ is upper-triangular.

Some time back I had asked a question on triangularizing a function matrix. Now, it is clear to me that it is possible to find, by Schur decomposition, triangularizing matrices which are measurable. Also, one of the answers posted for that question was that it is not always possible to uniformly triangularize, especially for certain matrices with non-differentiable eigenvalues. The question in this post is directed towards smoothness and homogeneity of such matrices under the condition that they have smooth eigenvalues. I would be grateful for any reference or insight in this direction. Thank you.

- Did you have a look at Kato's book Perturbation Theory for Linear Operators, volume 132 of Grundlehren der mathematischen Wissenschaften? – Denis Serre Dec 12 '12 at 7:15
- @Denis Serre Yes. Kato's book discusses the Jordan form. But I find that questions about the Jordan form and the triangular form are a bit different. For example, the matrix $$\left(\begin{array}{cc} 1&z\\ 0&1 \end{array}\right)$$ is trivially triangularizable with smooth entries but cannot be written in Jordan form at $z=0$. – Uday Dec 12 '12 at 9:30

This is not a complete answer, but the paper

Kreiss, H. O., Über Matrizen die beschränkte Halbgruppen erzeugen, Math. Scand. 7 (1959), 71-81.

contains a lot of results in this direction. Unfortunately, at the moment I do not have access to it to check...

- @András Bátkai Thank you for the reference. I will try to get this paper. Is there an English translation of this paper? – Uday Dec 12 '12 at 9:33
- I am sorry to say that I cannot find an English reference... – András Bátkai Dec 23 '12 at 12:19
- I have figured out a way to translate the paper. Thank you once again. – Uday Dec 27 '12 at 15:37

Not precisely what you are asking, but if you look at continuous functions (instead of smooth and homogeneous ones), Grove and Pedersen ["Diagonalizing Matrices over $C(X)$", Journal of Functional Analysis 59, 65-89, 1984] prove the following. $N \times N$ matrices can be diagonalized for all $N$ if and only if $X$ is a sub-Stonean topological space with $\dim X \leq 2$ and $X$ carries no nontrivial $G$-bundles over any closed subset, for $G$ a symmetric group or the circle group.
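Not an answer to the smoothness question, but the "measurable via Schur decomposition" remark can be illustrated numerically; a quick sketch (my own, assuming SciPy; the matrix A(x) is a hypothetical example, homogeneous of order 1):

```python
import numpy as np
from scipy.linalg import schur

def A(x):
    # a toy matrix whose entries are homogeneous of order 1 in x
    return np.array([[x, 2.0 * x],
                     [0.5 * x, x]])

for x in [0.5, 1.0, 2.0]:
    # pointwise complex Schur form: A = Z T Z^H with T upper triangular
    T, Z = schur(A(x), output="complex")
    print(x, np.round(T, 3))
```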
# Tag Info

### Glossaries links with non-latin labels in LuaLaTeX refer to wrong place
Not a solution but an explanation. The problem is hyperref. With lualatex, hyperref uses \pdf@escapestring to convert the name to something that can be safely used as a destination in the pdf. But the ...
• 292k

Accepted
### Glossaries - expand acronyms for first-time use within each chapter
The simplest method is to insert \glsresetall at the start of each chapter. Since etoolbox is automatically loaded (by the base glossaries package), you can use etoolbox's \preto command: \...
• 39.4k

Accepted
### Glossaries links with non-latin labels in LuaLaTeX refer to wrong place
As Ulrike Fischer described, the problem is that non-ASCII characters are dropped by \pdf@escapestring. To avoid this, you can change \pdf@escapestring to encode Unicode characters in UTF-8: \...
• 30.2k

Accepted
### How to use glossaries together with ProvidesPackage
Don't name your package "doc". doc.sty exists already (in latex/base) and glossaries contains special code when it detects that it has been used: \@ifpackageloaded{doc}% {% \@gls@docloadedtrue }% {% ...
• 292k

Accepted
### glossaries: hyperlink only at the first occurrence in every chapter
With glossaries-extra v1.26 (2018-01-05)¹ you can do: \documentclass{article} \usepackage{hyperref} \usepackage[acronym]{glossaries-extra} \makeglossaries \GlsXtrEnableLinkCounting[section]{...
• 39.4k

Accepted
### How to effectively use List of Symbols for a thesis using .bib files?
I will demonstrate using some of the example .bib files provided with bib2gls. mathgreek.bib defines some sample symbols that are all mathematical Greek characters. The LaTeX kernel doesn't provide ...
• 39.4k

Accepted
### glossaries-extra - Problem creating new style for acronyms
I recommend a different approach that uses a custom entry type and fields. This makes it more flexible so you can use the same .bib file across different documents, and with a different set of aliases ...
• 39.4k

### glossaries: acronyms: How to display only the first appearance of an acronym among the Abbreviations if only its short form is used in the text?
The simplest method is to use the short style for acronyms that shouldn't expand on first use and the long-short style for the ones that should. Since the abbreviation category has long-short as the ...
• 39.4k

### glossaries automake not working lualatex
The problem is that with luatex (and the shellesc package) \write18 is actually an \immediate\write18 (see the documentation of shellesc). And this means that the makeindex command is executed before ...
• 292k

Accepted
### Manage multiple glossaries in one bib file
Most of the resource options aren't cumulative. That is, if used multiple times the last option usually overrides the previous option of the same name. This means that: entry-type-aliases={symbol=...
• 39.4k

Accepted
### Is it possible to show a custom text for the long part of an abbreviation?
There are two commands that allow you to display custom text encapsulated by the abbreviation formatting commands for a particular category: \glsuseabbrvfont{text}{category} \glsuselongfont{text}{...
• 39.4k

Accepted
### What triggers latexmk to invoke makeglossaries?
The suggestion you mention for the .latexmkrc file is out-of-date. See the file glossary_latexmkrc in the example_rcfiles for the current recommendation. From what's in that file, the code for you ...
• 9,669 Accepted ### How to Capitalize Symbol in Glossary Print only using a New Entry Field? UPDATE (2) after follow-up question. There is now a new key-defined symbol name (symbolname) that will be used to print the glossary. You can use any other "symbol" you want, not just ... • 24.6k Accepted ### How to fix superscripts in glossaries with parameters You essentially have this: \documentclass{article} \begin{document} \newcommand{\glsarg}{ij^T} $A_{\glsarg}$ \end{document} which puts ij^T as the subscript (that is, A_{ij^T}) where ^T is the ... • 39.4k Accepted ### How to get \glsadd to work with sort=use? UPDATE: In the lates versions 1.24 and 4.35 of glossaries-extra and glossaries respectively, the bug is fixed and the patch I have presented in this answer is no longer needed. UPDATE: Since version ... • 60.4k Accepted ### glossaries-extra not displaying page numbers \printunsrtglossary doesn't show the numbers unless you use it with bib2gls. (See glossaries-extra and bib2gls: An Introductory Guide.) If you want to use makeindex then you need to use \printglossary ... • 39.4k Accepted ### glossaries-extra – usage of \ifglsused with acronyms loaded from external file causes error This bug is now fixed in version 1.33 of glossaries-extra (2018-07-26). A patch for older versions is to redefine \ifglsused as follows: \renewcommand*{\ifglsused}[3]{% \glsdoifexists{#1}{\ifbool{... • 39.4k Accepted ### Non-breaking space between long and short form of acronyms with glossaries/glossaries-extras The command \acrfullformat is provided with the base glossaries acronym styles. The glossaries-extra extension package uses a different abbreviation mechanism, which is much more flexible. The long-... • 39.4k Accepted ### glossaries automake not working lualatex You need to use automake=immediate in this situation. (New to glossaries version 4.22 2019-01-06.) This executes the system command at the start of \makeglossaries (before the glossary files are ... • 39.4k Accepted ### Suppress text (space) output when using \glsadd{} glossaries keeps track of on which pages the glossary entries are used. In order to this, the page on which this happens needs to be well defined, i.e. \glsadd has to be part of a line of text. This ... • 12.5k Accepted ### Defining an abbreviation style in glossaries that shows only long version first and short version afterwards New abbreviation styles are defined with \newabbreviationstyle, as mentioned in the glossaries-extra manual (Section 3.5, page 112, unfortunately without many details). There are a few aspects to ... • 29.6k Accepted ### Hyperref and nested \glsxtrshort and \glsfmtlong You can access the short description like this: \documentclass{article} \usepackage{hyperref} \usepackage{glossaries-extra} \newabbreviation{ara}{ARA}{a random abbreviation} \newabbreviation{aaca}{... • 292k Accepted ### Why do glossaries-extra package and bib2gls application give incorrect utf8 cyrillic characters? On windows I had to inform java that utf8 is wanted by setting an environment variable JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF8" After this setting bib2gls reports: Picked up ... • 292k ### don´t show glossaries in table of content glossaries-extra has the package option toc=true by default, which adds the glossaries to the table of contents. As the package documentation says: Use toc=false to switch this back off. So add toc=... 
• 13.3k Accepted ### Beyond plural forms using the glossaries package The glossaries package has an excellent FAQ: My term has multiple plural forms, how can I deal with this? Use the plural key for the plural term you are most likely to use, and use one of the user ... • 12.9k Accepted ### is it possible to define acronyms which won´t be mentioned in glossary You can do this by defining a new ignored glossary and assigning acronyms to this ignored glossary. The following is a MWE assuming that xindy is used to alphabetically sort the list of acronyms. ... • 60.4k Accepted ### glossaries-extra causes a "Package xkeyval Error: value `none' is not allowed." error Your version of glossaries is too old. The sort=none option was introduced to glossaries v4.30 (2017-06-11), so you need to update if you want to use it. • 39.4k Accepted ### glossaries dual entries: prevent page from acronym list appearing in glossary number list \glsadd doesn't recognise the noindex key. (The purpose of \glsadd is to index without generating any text, so noindex doesn't make sense in this context.) This means that any instance of \glsadd that ... • 39.4k Accepted ### Mixing abbreviations styles with glossaries-extra I think there's a combination of problems here. The first may be a bug in the short-sc-desc style which can be fixed with: \renewabbreviationstyle{short-sc-desc} {% \renewcommand*{\... • 39.4k
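For orientation, the answers above all build on the same basic setup; a minimal sketch of a glossaries-extra abbreviation document (assuming a reasonably recent glossaries-extra; \printunsrtglossary lists entries in definition order, so no external indexing run is needed):

```latex
\documentclass{article}
\usepackage[abbreviations]{glossaries-extra}
\newabbreviation{svm}{SVM}{support vector machine}
\begin{document}
First use: \gls{svm}. Next use: \gls{svm}.
\printunsrtglossary[type=abbreviations]
\end{document}
```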
### 1 Preface

Welcome to GAP. This preface serves not only to introduce this manual, "the GAP Tutorial", but also as an introduction to the system as a whole.

GAP stands for Groups, Algorithms and Programming. The name was chosen to reflect the aim of the system, which is introduced in this tutorial manual. Since that choice, the system has become somewhat broader, and you will also find information about algorithms and programming for other algebraic structures, such as semigroups and algebras. In addition to this manual, there is the GAP Reference Manual (see Reference: Preface) containing detailed documentation of the mathematical functionality of GAP, and the manual called "GAP - Changes from Earlier Versions" (see Changes: Preface) which describes the most essential changes from previous GAP releases. A lot of the functionality of the system and a number of contributed extensions are provided as "GAP packages" and each of these has its own manual.

Subsequent sections of this preface explain the structure of the system and list sources of further information about GAP.

#### 1.1 The GAP System

GAP is a free, open and extensible software package for computation in discrete abstract algebra. The terms "free" and "open" describe the conditions under which the system is distributed -- in brief, it is free of charge (except possibly for the immediate costs of delivering it to you), you are free to pass it on within certain limits, and all of the workings of the system are open for you to examine and change. Details of these conditions can be found in Section Reference: Copyright and License.

The system is "extensible" in that you can write your own programs in the GAP language, and use them in just the same way as the programs which form part of the system (the "library"). Indeed, we actively support the contribution, refereeing and distribution of extensions to the system, in the form of "GAP packages". Further details of this can be found in chapter Reference: Using and Developing GAP Packages, and on our website.

Development of GAP began at Lehrstuhl D für Mathematik, RWTH-Aachen, under the leadership of Joachim Neubüser in 1985. Version 2.4 was released in 1988 and version 3.1 in 1992. In 1997 coordination of GAP development, now very much an international effort, was transferred to St Andrews. A complete internal redesign and almost complete rewrite of the system was completed over the following years and version 4.1 was released in July 1999. A sign of the further internationalization of the project was the GAP 4.4 release in 2004, which has been coordinated from Colorado State University, Fort Collins. More information on the motivation and development of GAP to date can be found on our Web pages in a section entitled "Release history and Prefaces".

For those readers who have used an earlier version of GAP, an overview of the changes from GAP 4.4 and a brief summary of changes from earlier versions is given in a separate manual Changes: Changes between GAP 4.4 and GAP 4.5.

The system that you are getting now consists of a "core system" and a number of packages. The core system consists of four main parts.

1. A kernel, written in C, which provides the user with

• automatic dynamic storage management, which the user needn't bother about in his programming;

• a set of time-critical basic functions, e.g.
"arithmetic", operations for integers, finite fields, permutations and words, as well as natural operations for lists and records; • an interpreter for the GAP language, an untyped imperative programming language with functions as first class objects and some extra built-in data types such as permutations and finite field elements. The language supports a form of object-oriented programming, similar to that supported by languages like C++ and Java but with some important differences. • a small set of system functions allowing the GAP programmer to handle files and execute external programs in a uniform way, regardless of the particular operating system in use. • a set of programming tools for testing, debugging, and timing algorithms. • a "read-eval-view" style user interface. 2. A much larger library of GAP functions that implement algebraic and other algorithms. Since this is written entirely in the GAP language, the GAP language is both the main implementation language and the user language of the system. Therefore the user can as easily as the original programmers investigate and vary algorithms of the library and add new ones to it, first for own use and eventually for the benefit of all GAP users. 3. A library of group theoretical data which contains various libraries of groups, including the library of small groups (containing all groups of order at most 2000, except those of order 1024) and others. Large libraries of ordinary and Brauer character tables and Tables of Marks are included as packages. 4. The documentation. This is available as on-line help, as printable files in PDF format and as HTML for viewing with a Web browser. Also included with the core system are some test files and a few small utilities which we hope you will find useful. GAP packages are self-contained extensions to the core system. A package contains GAP code and its own documentation and may also contain data files or external programs to which the GAP code provides an interface. These packages may be loaded into GAP using the LoadPackage (Reference: LoadPackage) command, and both the package and its documentation are then available just as if they were parts of the core system. Some packages may be loaded automatically, when GAP is started, if they are present. Some packages, because they depend on external programs, may only be available on the operating systems where those programs are available (usually UNIX). You should note that, while the packages included with this release are the most recent versions ready for release at this time, new packages and new versions may be released at any time and can be easily installed in your copy of GAP. With GAP there are two packages (the library of ordinary and Brauer character tables, and the library of tables of marks) which contain functionality developed from parts of the GAP core system. These have been moved into packages for ease of maintenance and to allow new versions to be released independently of new releases of the core system. The library of small groups should also be regarded as a package, although it does not currently use the standard package mechanism. 
Other packages contain functionality which has never been part of the core system, and may extend it substantially, implementing specific algorithms to enhance its capabilities, providing data libraries, interfaces to other computer algebra systems and data sources such as the electronic version of the Atlas of Finite Group Representations; therefore, installation and usage of packages is recommended.

Further details about GAP packages can be found in chapter Reference: Using and Developing GAP Packages, and on the GAP website here: https://www.gap-system.org/Packages/packages.html.

#### 1.2 Further Information about GAP

Information about GAP is best obtained from the GAP website https://www.gap-system.org There you will find, amongst other things

• directions to the sites from which you can download the current GAP distribution, all accepted and deposited GAP packages, and a selection of other contributions.

• the GAP manual and an archive of the gap-forum mailing list, formatted for reading with a Web browser, and indexed for searching.

We would particularly ask you to note the following things:

• The GAP Forum – an email discussion forum for comments, discussions or questions about GAP. You must subscribe to the list before you can post to it, see the website for details. In particular we will announce new releases in this mailing list.

• The email address support@gap-system.org to which you are asked to send any questions or bug reports which do not seem likely to be of interest to the whole GAP Forum. Please give a (short, if possible) self-contained excerpt of a GAP session containing both input and output that illustrates your problem (including comments of why you think it is a bug) and state the type of the machine, operating system, (compiler used, if UNIX/Linux) and the version of GAP you are using (the first line after the GAP 4 banner starting GAP, Version 4...).

• We also ask you to send a brief message to support@gap-system.org when you install GAP.

• The correct form of citation of GAP, which we ask you use whenever you publish scientific results obtained using GAP.

It finally remains for us to wish you all pleasure and success in using GAP, and to invite your constructive comment and criticism.

The GAP Group, 01-Nov-2018
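As a small taste of the system described above — the read-eval-view loop, the small groups library, and the LoadPackage mechanism — here is an illustrative session (a sketch; exact output formatting varies between GAP versions):

```gap
gap> LoadPackage("ctbllib");;      # the character table library package
gap> G := SmallGroup(8, 3);;       # a group of order 8 from the small groups library
gap> StructureDescription(G);
"D8"
gap> Order(Centre(G));
2
```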
# Linear equation

An equation of the form mx + b = 0 is a linear equation or an equation of a straight line. Any other equations which can be rewritten to that form are also linear equations; that is, they have one variable x, which is raised to the first power. Usually the multiplication sign between the constant and the variable of an equation is omitted, as for instance in 2x. In the following examples, however, we will write 2 · x instead of 2x to make it clearer that 2 must be multiplied by x.

## Solving a linear equation

In order to solve a linear equation, you must rewrite it until the variable x is isolated on one side of the equation. Every time you rewrite an equation you put an "if and only if" symbol ⇔, indicating that the equation in its initial form is true if and only if the equation in its rewritten form is also true. (The worked expressions appeared as images in the original page and are not reproduced here.) All three expressions are equivalent, which is shown with ⇔ after each expression. It is the same equation in different forms.

## How to rewrite an equation

You can add or subtract the same number on both sides of the equation. You can multiply or divide by the same number (except 0) on both sides of the equation.

## Guide to solving an equation

Fractions: When you want to solve an equation, the first thing you do is to get rid of all fractions appearing in it. You can get rid of a fraction by multiplying it by its own denominator, because then you get its numerator. But if you do so, you must multiply by the same number on the other side of the equation as well, according to the rules above.

Parentheses: If an equation contains parentheses, they must be removed as well. (See more about the properties of parentheses on our site concerning parentheses.)

Isolation of x: Finally you can isolate x on one side of the equation in accordance with the rules above concerning addition, subtraction and multiplication.

## The linear equation and the Zero Product Property

Sometimes you will come across equations given as a product of two linear factors equal to zero. In this case you can solve the equation on the basis of the Zero Product Property, which states that when two quantities multiply to get zero, either one or both of the quantities must be zero. So instead you have to solve the two linear equations, one for each factor. There will be two solutions to the initial equation, because there is one solution for each of the two equations.

## Examples of solving a linear equation

(The equations in these examples appeared as images in the original page; only the step-by-step commentary survives.)

Simple linear equation: We add 4 on both sides. We divide by 4 on both sides.

Linear equation with parentheses: The first thing we do is to get rid of the parentheses. According to the distributive property, the term outside the parentheses distributes across all terms inside the parentheses. Be aware of the signs + and − in front of each term, which also distribute across the parentheses. We subtract 12 from both sides. We add 6x on both sides. We divide by 10 on both sides.

## Solving a linear equation

Equations are entered like this in the calculator:

13-(2x+2)=2(x+2)+3x
2*4x=9-x
6x-(3x+8)=16
9/(x+1)=2
(x+5)/9+5=17
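The same isolate-x procedure can be checked mechanically; a small sketch using the SymPy library (my own illustration, reusing the first calculator example above):

```python
# Solve 13-(2x+2)=2(x+2)+3x by computer algebra.
from sympy import symbols, Eq, solve

x = symbols('x')
equation = Eq(13 - (2*x + 2), 2*(x + 2) + 3*x)
print(solve(equation, x))  # [1], i.e. x = 1
```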
# Math Help - Geometric Series

1. ## Geometric Series

Josh deposits $120 at the end of each month into an account that pays 7% interest compounded monthly. After 10 years the balance on his account in dollars is

$120*((1+(.07/12)^0)+120*((1+(.07/12)^1)+........120*((1+(.07/12))^N)$

A. What is the first term? What is R?
B. What is his balance after 10 years?

2. Originally Posted by Dragon
Josh deposits $120 at the end of each month into an account that pays 7% interest compounded monthly. After 10 years the balance on his account in dollars is $120*((1+(.07/12)^0)+120*((1+(.07/12)^1)+........120*((1+(.07/12))^N)$
A. What is the first term? What is R?
B. What is his balance after 10 years?

1) The first term? That makes no sense or is entirely ambiguous. Please object strongly to this question if you have reported it as it was presented. Which term is first? The first deposit? The last deposit?

2) You have parenthesis problems. You mean "[1+(0.07/12)]^2", NOT "1+(0.07/12)^2". Do you see the difference?

3) Why did you stop at "N"? You know the number of terms. Fill in the last value. Monthly, 10 years, somewhere around 120? Possibly 119? You tell me.

4) R had better be (1 + 0.07/12). Can you add up all the terms?
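Following the responder's hints — 120 terms and common ratio R = 1 + 0.07/12 — part B is a finite geometric sum; a quick numeric check (my own illustration, not from the thread):

```python
# Future value of 120 end-of-month deposits of $120 at 7%/12 per month.
r = 0.07 / 12      # monthly interest rate
n = 10 * 12        # number of deposits
deposit = 120.0

# closed form of the geometric sum: deposit * ((1+r)**n - 1) / r
balance = deposit * ((1 + r) ** n - 1) / r
print(round(balance, 2))  # roughly 20770
```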
# Density of numbers not divisible by a large prime power

Although the answer to my question is probably implicit in the answers to the question asked here: http://mathoverflow.net/questions/14664/density-of-numbers-having-large-prime-divisors-formalizing-heuristic-probability, I can't extract it.

Problem: find decent bounds on the number of positive integers $n$, such that, for all primes $p$ dividing $n$, if $p^k$ exactly divides $n$, then $n > p^{k+1}$.

My idea for a first upper bound: if $n$ is divisible by a prime larger than $n^{\frac{1}{2}}$, it is immediately excluded that $n$ is of the above form, so the density can never be larger than $1 - \log{2}$.

My idea for a first lower bound: if $n$ has two prime divisors between $n^{\frac{1}{3}}$ and $n^{\frac{1}{2}}$, then $n$ is of the above form. But I don't know the density of these numbers.

I am probably very happy with a (reference to) a proof/theorem that implies that we have a positive lower density, but asymptotics would be great.

EDIT (after the first response of GH, for which I'm thankful!): assume $n$ lies in some residue class, say $a \pmod{b}$. Can we still show a positive lower density, whatever the values of $a$ and $b$?
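Before hunting for a theorem, the density can be estimated empirically; a crude sketch (my own experiment, assuming SymPy for factorization):

```python
# Count n <= N whose every exact prime-power divisor p^k || n
# satisfies n > p^(k+1), and report the empirical density.
from sympy import factorint

N = 100_000
count = sum(
    1
    for n in range(2, N + 1)
    if all(n > p ** (k + 1) for p, k in factorint(n).items())
)
print(count / N)  # roughly 0.3 at this range; it varies slowly with N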
# Tag Info

3

It's true. If the CFG is not null free, and the input sentence is not null, you can remove the null from the CFG and then parse the input sentence with the resulting grammar. You already know how to do that in cubic time. If the CFG is not null free, and the input sentence is null, you can immediately tell whether the input sentence is accepted by the CFG.

3

Suppose that $w$ is in the language. We can write $w$ as a concatenation of runs: $$w = a^{i_1} b^{j_1} a^{i_2} b^{j_2} \dots a^{i_m} b^{j_m},$$ where all indices other than possibly $i_1,j_m$ are strictly positive. A word of this form belongs to $(a^nb^n)^m$ if all indices are equal. Since $w$ is in the language, there must exist two indices which are ...

3

The PDA first guesses whether $m=n$ or $n=k$. According to the guess, it either just checks that $m=n$, or just checks that $n=k$. What your heuristic argument suggests is that this language cannot be accepted by a deterministic PDA. You can likely show this by adapting the proof here.

2

Condition 3 as in the question, "$\forall i >0, \exists z= uv^iwx^iy \in L$", does not make sense. It should be "$uv^iwx^iy \in L\text{ for all } i\ge0$", i.e., $uv^0wx^0y=uwy$, $uv^1wx^1y=uvwxy$, $uv^2wx^2y$, $uv^3wx^3y$, $uv^4wx^4y$, and so on are in $L$. Please note the first word, $uwy$, where $v$ and $x$ do not appear, ...

2

Indeed, $abaaabbb \notin L_1$ because the string is not of the form $(a^nb^n)^m$, which is the repetition of a fixed string with the same number of $a$ and $b$. The language $L_2$ is the Kleene closure of $\{a^nb^n \mid n\ge1\}$, consisting of all arbitrary concatenations of strings of the form $a^nb^n$. We can choose different strings of this form, and do ...

2

This is a typical ambiguous grammar for arithmetic expressions. You can write different unambiguous equivalent grammars. For example, if you use the traditional precedences and associativities: \begin{align*} E &\to E + T \mid E - T \mid T \\ T &\to T * F \mid T / F \mid F \\ F &\to x \mid y \mid ( E ) \end{align*} You could also go ...

1

(The original answer works through the derivation in figures that are not reproduced here.) From the second production (shown in red in the original) and the production in blue, the parsing can be stopped using $S\rightarrow \epsilon$. So from the above two work-outs we find that the $a$'s and $b$'s are properly nested (and can be mapped to the problem of valid parenthesization). That being ...

1

You are quite close to the solution. We will use a few variables, each corresponding (intuitively) to some other "thing". Specifically, we will use the variables $S,E,A,B$. $S$ is the starting variable. $E$ is a variable that will produce a valid regular expression (it's called $E$ as short for "expression"). $A$ will be some valid string ...
Energy based models (EBMs) are drawing a lot of recent attention. Importantly, you can write the gradient of the log-likelihood of an EBM with respect to the parameters. However, this gradient is commonly stated in papers without a derivation, so I thought I would derive it here.

Consider an energy based model $p(x) = \frac{1}{Z(\theta)} \, e^{-E_\theta(x)}$ with normalizing constant $Z(\theta)$. Papers often state that the gradient of $\log p_\theta(x)$ with respect to $\theta$ is $\frac{\partial}{\partial \theta} \log p_\theta(x) = \mathbb{E}_{p_\theta(x)} \left[ \frac{\partial}{\partial \theta} E_\theta(x) \right] - \frac{\partial}{\partial \theta} E_\theta(x).$ But where does this come from? Here we derive it using the log-derivative trick and with one key assumption.

We start by writing out the gradient $\frac{\partial}{\partial \theta} \log p_\theta(x) = \frac{\partial}{\partial \theta} \left[ -\log Z(\theta) - E_\theta(x) \right] = - \frac{\partial}{\partial \theta} \log Z(\theta) - \frac{\partial}{\partial \theta} E_\theta(x)$ and notice that we have already identified the second term in the gradient. The first term requires some care. We start by using the log-derivative trick $\frac{\partial}{\partial \theta} \log Z(\theta) = \frac{ \frac{\partial}{\partial \theta} Z(\theta)}{Z(\theta)}.$ Next, we derive $\frac{\partial}{\partial \theta} Z(\theta)$ with the key assumption that we can interchange integration and differentiation $\frac{\partial}{\partial \theta} Z(\theta) = \frac{\partial}{\partial \theta} \int e^{-E_\theta(x)} \mathrm{d}x = \int \frac{\partial}{\partial \theta} e^{-E_\theta(x)} \mathrm{d}x.$ Putting together the pieces gives us \begin{align} \frac{\partial}{\partial \theta} \log Z(\theta) &= \frac{1}{Z(\theta)} \int \frac{\partial}{\partial \theta} e^{-E_\theta(x)} \mathrm{d}x \\ & = \int \frac{1}{Z(\theta)} \frac{\partial}{\partial \theta} e^{-E_\theta(x)} \mathrm{d}x \\ & = - \int \frac{1}{Z(\theta)} e^{-E_\theta(x)} \frac{\partial}{\partial \theta} E_\theta(x) \mathrm{d}x \\ & = - \mathbb{E}_{p_\theta(x)} \left[ \frac{\partial}{\partial \theta} E_\theta(x) \right]. \end{align}

We are done! We can plug this into the equation above (keeping track of minus signs) to get $\frac{\partial}{\partial \theta} \log p_\theta(x) = \mathbb{E}_{p_\theta(x)}\left[\frac{\partial}{\partial \theta} E_\theta(x) \right] - \frac{\partial}{\partial \theta} E_\theta(x).$

##### David Zoltowski
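As a sanity check of the final identity, here is a small numerical experiment of my own: a one-dimensional toy EBM on a grid, comparing the analytic expression against a finite-difference gradient (the quadratic energy is just an illustrative choice):

```python
import numpy as np

theta = 0.7
xs = np.linspace(-10, 10, 20001)           # grid approximating the real line
dx = xs[1] - xs[0]

E = theta * xs**2                           # toy energy E_theta(x) = theta * x^2
p = np.exp(-E) / (np.exp(-E).sum() * dx)    # normalized density on the grid

dE_dtheta = xs**2                           # dE/dtheta at each grid point
x0 = 1.3                                    # a "data" point

# right-hand side of the identity: E_p[dE/dtheta] - dE/dtheta(x0)
rhs = (p * dE_dtheta).sum() * dx - x0**2

# left-hand side: finite differences of log p_theta(x0) in theta
def logp(th):
    Z = np.exp(-th * xs**2).sum() * dx
    return -th * x0**2 - np.log(Z)

eps = 1e-5
lhs = (logp(theta + eps) - logp(theta - eps)) / (2 * eps)
print(lhs, rhs)  # the two values should agree to high precision
```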
Ray Maleh
Computer Science, 2011,
Abstract: Orthogonal Matching Pursuit (OMP) has long been considered a powerful heuristic for attacking compressive sensing problems; however, its theoretical development is, unfortunately, somewhat lacking. This paper presents an improved Restricted Isometry Property (RIP) based performance guarantee for T-sparse signal reconstruction that asymptotically approaches the conjectured lower bound given in Davenport et al. We also further extend the state-of-the-art by deriving reconstruction error bounds for the case of general non-sparse signals subjected to measurement noise. We then generalize our results to the case of K-fold Orthogonal Matching Pursuit (KOMP). We finish by presenting an empirical analysis suggesting that OMP and KOMP outperform other compressive sensing algorithms in average case scenarios. This turns out to be quite surprising since RIP analysis (i.e. worst case scenario) suggests that these matching pursuits should perform roughly T^0.5 times worse than convex optimization, CoSAMP, and Iterative Thresholding.

Tong Zhang
Mathematics, 2010,
Abstract: This paper presents a new analysis for the orthogonal matching pursuit (OMP) algorithm. It is shown that if the restricted isometry property (RIP) is satisfied at sparsity level $O(\bar{k})$, then OMP can recover a $\bar{k}$-sparse signal in 2-norm. For compressed sensing applications, this result implies that in order to uniformly recover a $\bar{k}$-sparse signal in $\Real^d$, only $O(\bar{k} \ln d)$ random projections are needed. This analysis improves earlier results on OMP that depend on stronger conditions such as mutual incoherence that can only be satisfied with $\Omega(\bar{k}^2 \ln d)$ random projections.

Mathematics, 2013, DOI: 10.1109/LSP.2013.2279977
Abstract: Generalized Orthogonal Matching Pursuit (gOMP) is a natural extension of OMP algorithm where unlike OMP, it may select $N (\geq1)$ atoms in each iteration. In this paper, we demonstrate that gOMP can successfully reconstruct a $K$-sparse signal from a compressed measurement ${\bf y}={\bf \Phi x}$ by $K^{th}$ iteration if the sensing matrix ${\bf \Phi}$ satisfies restricted isometry property (RIP) of order $NK$ where $\delta_{NK} < \frac {\sqrt{N}}{\sqrt{K}+2\sqrt{N}}$. Our bound offers an improvement over the very recent result shown in \cite{wang_2012b}. Moreover, we present another bound for gOMP of order $NK+1$ with $\delta_{NK+1} < \frac {\sqrt{N}}{\sqrt{K}+\sqrt{N}}$ which exactly relates to the near optimal bound of $\delta_{K+1} < \frac {1}{\sqrt{K}+1}$ for OMP (N=1) as shown in \cite{wang_2012a}.

Zhiqiang Xu
Computer Science, 2012,
Abstract: The orthogonal multi-matching pursuit (OMMP) is a natural extension of orthogonal matching pursuit (OMP). We denote the OMMP with the parameter $M$ as OMMP(M) where $M\geq 1$ is an integer. The main difference between OMP and OMMP(M) is that OMMP(M) selects $M$ atoms per iteration, while OMP only adds one atom to the optimal atom set. In this paper, we study the performance of orthogonal multi-matching pursuit (OMMP) under RIP. In particular, we show that, when the measurement matrix A satisfies $(9s, 1/10)$-RIP, there exists an absolute constant $M_0\leq 8$ so that OMMP(M_0) can recover $s$-sparse signal within $s$ iterations.
We furthermore prove that, for slowly-decaying $s$-sparse signal, OMMP(M) can recover s-sparse signal within $O(\frac{s}{M})$ iterations for a large class of $M$. In particular, for $M=s^a$ with $a\in [0,1/2]$, OMMP(M) can recover slowly-decaying $s$-sparse signal within $O(s^{1-a})$ iterations. The result implies that OMMP can reduce the computational complexity heavily.

Eugene Livshitz
Mathematics, 2010,
Abstract: We show that if a matrix $\Phi$ satisfies the RIP of order $[CK^{1.2}]$ with isometry constant $\delta = c K^{-0.2}$ and has coherence less than $1/(20 K^{0.8})$, then Orthogonal Matching Pursuit (OMP) will recover $K$-sparse signal $x$ from $y=\Phi x$ in at most $[CK^{1.2}]$ iterations. This result implies that $K$-sparse signal can be recovered via OMP by $M=O(K^{1.6}\log N)$ measurements.

Statistics, 2012,
Abstract: In this paper, we present coherence-based performance guarantees of Orthogonal Matching Pursuit (OMP) for both support recovery and signal reconstruction of sparse signals when the measurements are corrupted by noise. In particular, two variants of OMP either with known sparsity level or with a stopping rule are analyzed. It is shown that if the measurement matrix $X\in\mathbb{C}^{n\times p}$ satisfies the strong coherence property, then with $n\gtrsim\mathcal{O}(k\log p)$, OMP will recover a $k$-sparse signal with high probability. In particular, the performance guarantees obtained here separate the properties required of the measurement matrix from the properties required of the signal, which depends critically on the minimum signal to noise ratio rather than the power profiles of the signal. We also provide performance guarantees for partial support recovery. Comparisons are given with other performance guarantees for OMP using worst-case analysis and the sorted one step thresholding algorithm.

赵娟, 毕诗合, 白霞, 唐恒滢, 王豪
2015, DOI: 10.15918/j.jbit1004-0579.201524.0313
Abstract: The performance guarantees of generalized orthogonal matching pursuit (gOMP) are considered in the framework of mutual coherence. The gOMP algorithm is an extension of the well-known OMP greedy algorithm for compressed sensing. It identifies multiple N indices per iteration to reconstruct sparse signals. The gOMP with N≥2 can perfectly reconstruct any K-sparse signals from measurement y=Φx if K< 1/N((1/μ)-1)+1, where μ is the coherence parameter of measurement matrix Φ. Furthermore, the performance of the gOMP in the case of y=Φx+e with bounded noise ‖e‖2≤ε is analyzed and the sufficient condition ensuring identification of correct indices of sparse signals via the gOMP is derived, i.e., K<1/N((1/μ)-1)+1-((2ε)/(Nμxmin)), where xmin denotes the minimum magnitude of the nonzero elements of x. Similarly, the sufficient condition in the case of Gaussian noise is also given.

Computer Science, 2014,
Abstract: A sufficient condition reported very recently for perfect recovery of a K-sparse vector via orthogonal matching pursuit (OMP) in K iterations is that the restricted isometry constant of the sensing matrix satisfies delta_{K+1} < 1/(sqrt(K)+1). By exploiting an approximate orthogonality condition characterized via the achievable angles between two orthogonal sparse vectors upon compression, this paper shows that the upper bound on delta can be further relaxed to delta_{K+1} < (sqrt(4K+1)-1)/(2K). This result thus narrows the gap between the so far best known bound and the ultimate performance guarantee delta_{K+1} < 1/sqrt(K) that is conjectured by Dai and Milenkovic in 2009. The proposed approximate orthogonality condition is also exploited to derive less restricted sufficient conditions for signal reconstruction in several compressive sensing problems, including signal recovery via OMP in a noisy environment, compressive domain interference cancellation, and support identification via the subspace pursuit algorithm.

Sensors, 2013, DOI: 10.3390/s130911167
Abstract: DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm.

Mathematics, 2011, DOI: 10.1109/TSP.2012.2218810
Abstract: As a greedy algorithm to recover sparse signals from compressed measurements, orthogonal matching pursuit (OMP) algorithm has received much attention in recent years. In this paper, we introduce an extension of the OMP for pursuing efficiency in reconstructing sparse signals. Our approach, henceforth referred to as generalized OMP (gOMP), is literally a generalization of the OMP in the sense that multiple $N$ indices are identified per iteration. Owing to the selection of multiple "correct" indices, the gOMP algorithm is finished with a much smaller number of iterations when compared to the OMP.
We show that the gOMP can perfectly reconstruct any $K$-sparse signals ($K > 1$), provided that the sensing matrix satisfies the RIP with $\delta_{NK} < \frac{\sqrt{N}}{\sqrt{K} + 3 \sqrt{N}}$. We also demonstrate by empirical simulations that the gOMP has excellent recovery performance comparable to $\ell_1$-minimization technique with fast processing speed and competitive computational complexity.
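For reference, the OMP algorithm these abstracts analyze fits in a few lines; a generic textbook sketch in NumPy (my own illustration, not any one paper's code):

```python
import numpy as np

def omp(Phi, y, K):
    """Greedy recovery of a K-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    for _ in range(K):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # re-fit on the chosen support by least squares (the "orthogonal" step)
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[[3, 70, 200]] = [1.0, -2.0, 0.5]
print(np.allclose(omp(Phi, Phi @ x_true, 3), x_true, atol=1e-8))  # True
```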
# RIPL – Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

Nuclear Data Sheets, Elsevier BV

### Most cited references (281)

The Ame2003 atomic mass evaluation (2003)

Nuclear Ground-State Masses and Deformations (1993) — Open Access
We tabulate the atomic mass excesses and nuclear ground-state deformations of 8979 nuclei ranging from $^{16}$O to $A=339$. The calculations are based on the finite-range droplet macroscopic model and the folded-Yukawa single-particle microscopic model. Relative to our 1981 mass table the current results are obtained with an improved macroscopic model, an improved pairing model with a new form for the effective-interaction pairing gap, and minimization of the ground-state energy with respect to additional shape degrees of freedom. The values of only 9 constants are determined directly from a least-squares adjustment to the ground-state masses of 1654 nuclei ranging from $^{16}$O to $^{263}$106 and to 28 fission-barrier heights. The error of the mass model is 0.669 MeV for the entire region of nuclei considered, but is only 0.448 MeV for the region above $N=65$.

Nucleon-Nucleus Optical-Model Parameters, A>40, E<50 MeV (1969)

### Author and article information

Journal: Nuclear Data Sheets, Elsevier BV, ISSN 0090-3752
December 2009: 110(12): 3107-3214
DOI: 10.1016/j.nds.2009.10.004
# Applications of a class of transformations of complex sequences

Dec 2022

Through an application of a remarkable result due to Mishev in 2018 concerning the inverses for a class of transformations of sequences of complex numbers, we obtain a very simple proof for a famous series for $\frac{1}{\pi}$ due to Ramanujan. We then apply Mishev's transform to provide proofs for a number of related hypergeometric identities, including a new and simplified proof for a family of series for $\frac{1}{\pi}$ previously obtained by Levrie via Fourier--Legendre theory. We generalize this result using Mishev's transform, so as to extend a result due to Guillera on a Ramanujan-like series involving cubed binomial coefficients and harmonic numbers.
## Introduction

The circadian clock is a cell-autonomous mechanism implicated in the control of numerous physiological processes1,2,3; in humans, it is adjusted to the Earth's daily cycle by the light captured at the retina3. PER3 is part of the negative branch of the primary molecular circadian system feedback loop3,4,5,6, whose expression oscillates in peripheral tissues and organs7,8,9,10. PER3 is located on the 1p36 chromosomal region, a commonly deleted region in human cancer, and especially in breast tumors11,12,13. We have previously shown that PER3 deletion is associated with tumor recurrence in patients with estrogen receptor (ER) positive breast cancers treated with tamoxifen; in addition, we observed that low expression levels of PER3 may serve as a predictor of the probability of breast tumor recurrence in patients with ER-positive tumors11.

To look for potential PER3 inactivating mutations in breast cancer, we formerly sequenced the complete coding region of PER3 in human breast cancer cell lines, and although no clear pathogenic mutations were identified, one of the polymorphic variants of PER3, a variable number of tandem repeats (PER3VNTR), was observed11. The primate-specific PER3VNTR polymorphism consists of a 54-nucleotide coding-region sequence located at exon 18 that is repeated 4 or 5 times in humans. The PER3VNTR polymorphism has been linked to an increased risk of colorectal adenoma formation and breast cancer14,15. Zhu et al. reported that the PER3 5-repetition allele was associated with an increased risk of breast cancer among premenopausal Caucasian women in a case–control study including 389 cases and 432 controls15. This finding was, however, replicated neither by Dai et al. in a larger Chinese population case–control study including 1519 cases and 1600 controls16, nor by Wirth et al. in a small Indian population study17.

To determine whether the PER3 5-repeat allele is associated with an increased breast cancer risk, we carried out a case–control study using two independent cohorts derived from Norway (Oslo University Hospital) and the Netherlands (Netherlands Cancer Institute), and combined our results through meta-analysis with previously published data. Overall, we obtained PER3 genotypes for 5931 samples, including 2420 cases, 2207 controls, and 1304 breast tumor samples, of which 329 presented matched blood genotypes derived from a subset of patients included in the case group. PER3VNTR genotypes were also obtained for a collection of 52 breast cancer cell lines. For the subset of samples for which we had patient-matched blood and tumor data, we determined whether changes in the PER3VNTR genotype between germline and tumor samples were taking place. Moreover, given the reported role of low PER3 expression levels in breast cancer tumor recurrence11, we aimed to characterize PER3's placement in the context of gene co-expression networks, in healthy mammary tissue and breast cancer samples. This could provide insight on the biological processes in which PER3 is involved, but also on the potential effects that alterations in its function could entail. Finally, we investigated the associations between PER3 expression and disease-free survival in breast cancer for each specific intrinsic molecular subtype.

## Results

### Analysis of association between the PER3VNTR polymorphism and cancer risk

The genotypes of the PER3VNTR polymorphism were obtained for two independent cohorts of women derived from the Oslo University Hospital (Cohort 1) and the Netherlands Cancer Institute (Cohort 2).
Cohort 1 included 1575 women diagnosed with breast cancer and 1640 controls, whereas Cohort 2 comprised 560 cases and 567 controls (see Supplementary Table 1 available in Supplementary File 1). The observed genotype distributions did not present deviations from Hardy-Weinberg equilibrium for cases or controls in either cohort (Cohort 1 cases: p-val = 0.89; Cohort 1 controls: p-val = 0.82; Cohort 2 cases: p-val = 0.98; Cohort 2 controls: p-val = 0.55). Overall, unadjusted odds ratios showed a positive trend of association between breast cancer risk and the PER3VNTR long-repeat allele, although it did not reach statistical significance under any tested model. We observed a non-significant, slightly increased breast cancer risk associated with the homozygous 5-repeat allele (OR, 1.09; 95% CI, 0.85–1.40) and (OR, 1.37; 95% CI, 0.91–2.06) for cohorts 1 and 2, respectively. A non-significant positive association was also found for the heterozygous alleles (Cohort 1: OR, 1.05; 95% CI, 0.91–1.22; Cohort 2: OR, 1.11; 95% CI, 0.87–1.42) and the combination of the 5-repeat variant alleles (heterozygous + homozygous) (Cohort 1: OR, 1.06; 95% CI, 0.92–1.22; Cohort 2: OR, 1.15; 95% CI, 0.91–1.46) in both cohorts. Odds ratios and genotype frequencies from our data and previously published studies are shown in Table 1.

Meta-analyses are known to increase statistical power and to provide better estimates of the effect sizes18; therefore, we combined our data with results from previously published studies15,16,17 dedicated to examining the potential role of the PER3VNTR polymorphism on breast cancer risk (Table 1). We calculated pooled odds ratios under fixed and random effect models by applying the inverse variance method. Overall, meta-analysis results showed a non-significant trend towards increased cancer risk in 5-repeat allele carriers. No differences were found when computing the pooled effect sizes under fixed or random effect models, due to the lack of heterogeneity. A non-significant 17% increase in breast cancer risk (OR = 1.17, 95% CI = 0.97–1.42) was observed for 5-repeat allele carriers under the homozygous model (5/5 vs 4/4), whereas the heterozygous (5/4 vs 4/4) and dominant (5/5 + 5/4 vs 4/4) model meta-analyses yielded non-significant increases of breast cancer risk of 9% (OR = 1.09, 95% CI = 0.99–1.18) and 7% (OR = 1.07, 95% CI = 0.98–1.18) in 5-repeat allele carriers, respectively. No significant between-study heterogeneity was observed under any tested model (homozygous model: I² = 0.0%, Q = 0.48, p-val = 0.92; heterozygous model: I² = 0.0%, Q = 2.07, p-val = 0.55; dominant model: I² = 0.0%, Q = 0.45, p-val = 0.97). Figure 1 summarizes the meta-analysis results.

### Preferential allelic imbalance at the PER3 locus in tumors and cell lines

In addition to the case–control information, we obtained genotypes of 1304 breast tumor samples and 52 breast cancer cell lines. We observed an increase in the 5/5 genotype frequency from control blood samples to cell lines: 8.9% of control blood samples were 5/5; an increase of 1% in the 5/5 genotype frequency was found for case blood samples (9.9%); tumors presented a 14.8% frequency for the 5/5 genotype; finally, cell lines showed the highest 5/5 genotype frequency, with 34.62%.
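For readers unfamiliar with the pooling step, the fixed-effect inverse-variance method used above reduces to a weighted average of log odds ratios; a small sketch with placeholder numbers (not the study values):

```python
# Fixed-effect inverse-variance pooling of odds ratios.
import numpy as np

def pooled_or(ors, ci_los, ci_his):
    log_or = np.log(ors)
    # recover standard errors from 95% CIs: log(hi) - log(lo) = 2 * 1.96 * SE
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)
    w = 1.0 / se**2                          # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp([pooled, lo, hi])          # pooled OR with 95% CI

# hypothetical per-study ORs with their 95% CIs
print(pooled_or(np.array([1.09, 1.37, 1.20]),
                np.array([0.85, 0.91, 0.80]),
                np.array([1.40, 2.06, 1.80])))
```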
This fact, together with the loss of Hardy-Weinberg equilibrium in tumor samples and the significantly altered proportion of PER3VNTR genotypes observed in cell lines (χ² p-val = 8.64e-11), suggests that a selective pressure could be operating in favor of 5/5 genotype acquisition during tumor development, probably due to preferential retention of the 5-allele after loss of heterozygosity (LOH) in region 1p36. Genotype frequencies for blood controls and cases, together with tumor samples and cell lines, and the p-values for the Hardy-Weinberg and chi-squared tests, can be found in Fig. 2a. The PER3 genotypes for the 52 tested breast cancer cell lines and their molecular subtype classification based on different previously published works19,20,21,22,23 (see also https://lincs.hms.harvard.edu/about/approach/reagents/icbp43/), as well as the genotype distributions based on each classification, are available at Supplementary Data 1. We analyzed and represented in bar graphs the frequencies of the different genotypes in relation to molecular subtypes. The frequencies of the PER3 5/5 genotype were found to be higher in basal-like and triple-negative breast cancers compared to other breast cancer subtypes, whereas PER3 4/4 genotypes were more frequent in ER-positive, luminal and HER2-positive subtypes. However, the diverse genotype distributions observed in the distinct breast cancer subgroups did not reach statistical significance after chi-squared analysis, most likely due to the low number of cell lines with molecular subtypes available for the analyses (n range 35–48) (Supplementary Data 1).

A subset of 329 samples for which paired blood and tumor samples were available was analyzed to determine whether changes in genotype were taking place in tumor samples compared to their blood counterparts. Overall, the 5/5 genotype was observed to be increased in tumors (OR, 1.61; 95% CI, 1.02–2.56) (Fig. 2b), probably due to a reduction of germline heterozygous genotypes in the matched tumor samples. One hundred and thirty-eight samples were originally heterozygous in blood. Thirty-six out of the 138 (26%) samples that were found to be heterozygous in the blood presented changes in genotype in their tumor counterpart. Twelve out of 36 (33.3%) presented LOH with retention of the 4-repeat allele in tumors, whereas 24 out of 36 (66.6%) presented LOH with retention of the 5-repeat allele, showing a preferential shift towards the 5-repeat allele (Fisher's exact test p-value = 0.0005). Binomial test p-values for the observed number of changes towards genotype 4 or 5 from blood to tumor samples were p = 0.98 and p = 0.03, respectively (see Fig. 2c). These results are compatible with a preferential allelic imbalance at the PER3 locus in breast cancer in which the long-repeat allele is preferentially retained.

### PER3 co-expression structure in human and murine healthy mammary tissues

Our data suggest that PER3 alterations could play a significant role in breast cancer. To further characterize PER3 functions in healthy mammary tissues, we determined its co-expression structure using a human (D1) and a murine (D2) healthy mammary tissue gene expression dataset. For a detailed description of the datasets, see the Materials and methods section and Supplementary Table 2. First, a robust list of PER3 co-expression partners in healthy breast was obtained by retrieving those genes presenting absolute values of correlation with PER3 higher than 0.4 in both D1 and D2.
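The quoted one-sided binomial tests are easy to reproduce; a sketch assuming SciPy ≥ 1.7 (which provides binomtest):

```python
# 36 informative LOH events, null probability 0.5 for each allele.
from scipy.stats import binomtest

# 24 of 36 events retained the 5-repeat allele
print(binomtest(24, n=36, p=0.5, alternative='greater').pvalue)  # ~0.03
# 12 of 36 events retained the 4-repeat allele
print(binomtest(12, n=36, p=0.5, alternative='greater').pvalue)  # ~0.98
```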
Genes in the list can be consulted in Fig. 3a and include several instances of genes previously linked to rhythmic processes, including CRY2, DBP, FZD4, HLF, NR1D2, PPARG, and TEF24. Several PER3 robust co-expression partners were also interesting given their potential implication in cancer-related processes. For instance, AFF1 has been linked to childhood lymphoblastic leukemia25, while BCL2L2 and BNIP2L are related to cell survival control and pro-apoptotic functions, respectively26. CDKN1C is an inhibitor of several G1 cyclin/CDK complexes that is involved in the regulation of several hallmarks of cancer, including cell proliferation, apoptosis, cell invasion and metastasis, tumor differentiation, and angiogenesis27, whereas FRY plays a role in centrosome integrity maintenance during mitosis and could interact with AURKA to mediate PLK1 activation28. Second, Gene Set Enrichment Analysis (GSEA) was carried out, ordering all studied genes by their correlation value with PER3. GSEA results for the D1 dataset showed that PER3 positively correlated genes were enriched in several biological processes besides the circadian clock machinery (Fig. 3b), including pathways relevant for cancer such as PI3K/AKT, insulin, and PPAR signaling, the synthesis of ATP through the electron transport chain, and the metabolism of fatty acids. PER3 negatively correlated genes were enriched in peptide chain elongation and cell–cell junction organization, among others. Supplementary Table 3 shows the full GSEA enrichment results for D1 healthy mammary samples, whereas Supplementary Fig. 1 shows GSEA plots of the top pathways enriched in genes positively and negatively correlated with PER3 in D1. Murine (D2) GSEA results also suggested that genes positively correlated with PER3 were enriched in pathways related to oxidative phosphorylation, the metabolism of lipids, and PPAR signaling, whereas genes negatively correlated with PER3 in D2 were enriched in biological processes linked to translation. In addition, genes negatively correlated with PER3 in D2 were enriched in cell cycle, DNA damage, and apoptosis pathways (Fig. 3b). Supplementary Table 4 shows the full GSEA enrichment analysis results for D2, and Supplementary Fig. 2 depicts GSEA plots for a selection of the top pathways linked to both positively and negatively PER3 co-expressed genes in D2. Finally, weighted gene co-expression network analysis (WGCNA) identified 20 and 19 modules of co-expressed genes in the D1 (human) and D2 (murine) healthy breast datasets, respectively (Fig. 3c). Supplementary Fig. 3 shows the power selection plots used to construct the adjacency matrix, the dendrogram depicting the co-expression modules detected by WGCNA, and the correlations between the eigengenes of each detected module for D1 and D2. In the case of D1, PER3 was placed in the green module, which was heavily enriched in adipocyte, endothelial cell, and smooth muscle cell markers (p-adj = 1.56e-23, 2.00e-31, and 9.84e-09, respectively). GO enrichment analysis showed that green module genes were enriched in functional categories related to cell adhesion, response to endogenous stimulus, circulatory system development, cell motility, regulation of cell proliferation, and lipid metabolism. The full cell-type-specific marker and GO enrichment results for the D1 healthy tissue green module can be checked in Supplementary Fig. 4 and Supplementary File 2.
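(A note on implementation before continuing with the murine network: the robust partner list at the top of this section amounts to a simple correlation filter. A minimal sketch, assuming samples-by-genes expression DataFrames with the hypothetical names d1_expr and d2_expr:)

```python
import pandas as pd

def per3_partners(expr: pd.DataFrame, gene: str = "PER3", cutoff: float = 0.4) -> pd.Series:
    """Spearman correlation of every gene with PER3; keep |rho| > cutoff."""
    rho = expr.corrwith(expr[gene], method="spearman").drop(gene)
    return rho[rho.abs() > cutoff]

# d1_expr, d2_expr: samples x genes expression DataFrames (placeholders).
# robust = per3_partners(d1_expr).index.intersection(per3_partners(d2_expr).index)
```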
PER3 was placed in the D2_tan module, which did not present enrichment in biological processes or cell-type-specific markers; however, the D2_tan module presented an eigengene correlation of 0.77 with the D2_blue module, which was also enriched in adipocyte (p-adj = 9.65e-12) and endothelial cell (p-adj = 1.10e-12) specific markers. The full cell-type-specific marker and GO enrichment results for the D2 healthy tissue tan module can be checked in Supplementary Fig. 5 and Supplementary File 3.

### PER3 differential co-expression between intrinsic breast cancer subtypes and healthy mammary tissues

To determine the co-expression changes between healthy mammary tissues and the different tumor subtypes defined by the PAM50 algorithm, we performed PER3 differential co-expression analysis in the human dataset (D1). The complete D1 gene expression dataset and the intrinsic subtype breast cancer classification of the included samples are available at the following link: https://osf.io/azgby/. Three hundred and twenty genes showed significant differential co-expression with PER3 in the healthy tissue versus luminal A (LumA) breast cancer analysis using D1 data. Most of these genes (275) presented positive correlations with PER3 in the healthy samples and lost this correlation in LumA cancer. gProfileR enrichment analysis showed enrichment in KEGG functional categories mainly related to metabolic pathways (p-adj = 3.531e-7), including the citrate cycle (TCA cycle), fatty acid metabolism, pyruvate metabolism, and AMPK signaling (p-adj = 1.860e-6, 3.571e-4, 3.007e-3, and 1.071e-2, respectively). Analogous REACTOME pathways were also found. Similar results were obtained when computing PER3 differential co-expression between healthy mammary tissue and luminal B (LumB) tumors. In this case, 607 genes were differentially co-expressed with PER3 (FDR < 0.05), of which most (510) presented positive correlations with PER3 in the healthy mammary tissues and lost this correlation, or even acquired a negative correlation with PER3, in LumB tumors. Overrepresentation analysis also showed enrichment in KEGG and REACTOME pathways linked to the citrate cycle (TCA cycle), AMPK signaling, and the metabolism of lipids (p-adj = 4.967e-7, 7.440e-4, and 4.510e-4, respectively). D1 PER3 differential co-expression between healthy mammary tissues and basal breast cancers yielded 521 genes differentially co-expressed with PER3. In this case, however, functional categories related to ATP synthesis through the TCA cycle were not enriched, whereas slight enrichment was found in some Gene Ontology gene sets related to lipid metabolism (p-adj = 4.035e-2). Finally, D1 differential co-expression between healthy breast tissues and Her2 tumor samples yielded 520 differentially correlated genes, with enrichment results similar to those found in the healthy mammary tissue versus LumA and LumB analyses. Overall, changes in PER3 co-expression between healthy mammary tissues and breast cancer were related to energy and lipid metabolism. Supplementary Data 2 includes the genes showing significant differential co-expression with PER3 in each intrinsic breast cancer subtype compared to healthy breast tissues, as well as their overrepresentation enrichment analysis results in biological processes.
### PER3 differential co-expression between breast cancer intrinsic subtypes

To determine the changes in PER3 co-expression structure between the different breast cancer subtypes, differential co-expression analysis of PER3 was performed between breast cancer subtypes classified using the PAM50 algorithm implemented in the genefu package. PER3 differential co-expression analysis between LumA and basal breast cancer samples yielded 556 differentially co-expressed genes. Enrichment analysis showed that genes differentially co-expressed with PER3 in LumA and basal cancer samples were mainly linked to biological functions involved in cell cycle, DNA damage response, ATP synthesis, and circadian rhythms, including instances from the Gene Ontology biological process branch (mitotic cell cycle process, regulation of mitotic cell cycle, mitotic G1 DNA damage checkpoint, signal transduction by p53 class mediator, mitochondrial ATP synthesis coupled electron transport; p-adj = 5.84e-10, 8.17e-7, 4.2e-3, 2.436e-2) and KEGG pathways including cell cycle (p-adj = 3.755e-5), p53 signaling pathway (p-adj = 8.928e-3), and circadian rhythm (p-adj = 3.808e-2). Several circadian genes were found among the significantly differentially co-expressed genes: some (DBP, CRY2, PER2, BHLHE41, TEF, and NR1D2) presented positive co-expression with PER3 in LumA breast cancers, with a loss or significant reduction of the correlation in the basal subtype, whereas two circadian genes (ARNTL and RBX1) presented negative or null correlations with PER3 in LumA samples and positive correlations in the basal subtype. Sixty-six genes belonging to GO:1903047 (mitotic cell cycle process) were differentially co-expressed with PER3; fifty of them presented lower correlation values with PER3 in the LumA subtype compared to the basal subtype. Figure 4 shows the changes in the PER3 co-expression structure of circadian and cell-cycle-related genes in the comparison between luminal A and basal breast cancer samples. PER3 differential co-expression analysis between LumB and basal samples yielded 910 differentially co-expressed genes; however, the results did not show strong enrichment in any functional category. Finally, only 25 PER3 differentially correlated genes were found when comparing Her2 with basal samples. No pathway enrichment was found for this set of differentially co-expressed genes. The complete list of differentially co-expressed genes for each comparison, and the overrepresentation analysis of the significant differentially co-expressed genes, can be found in Supplementary Data 2.

### Disease-free survival analysis based on expression status of PER3 and its robust co-expression partners

Prior studies carried out by our group determined that low PER3 expression was linked to worse disease-free survival in estrogen receptor (ER)-positive and LumA tumors, and was not related to changes in survival in ER-negative and basal tumors. To validate this association, we performed logrank tests using relapse-free survival (RFS) data from Kaplan–Meier plotter (KMplotter) for all breast cancer samples and for each intrinsic breast cancer subtype independently. The analyses were carried out for PER3 and the complete set of PER3 robust co-expression partners identified in the human and murine healthy breast tissue analyses.
In addition, an average expression signature was constructed using the expression values of those PER3 co-expression partners found to be associated with a significant increase or reduction in relapse-free survival in the complete breast cancer dataset; the average expression signature was then tested for association with relapse-free survival. For PER3, the logrank tests were carried out using 3307, 1525, 1000, 210, and 568 samples for the complete breast cancer dataset and the LumA, LumB, Her2, and basal analyses, respectively. Low PER3 expression was significantly associated with worse relapse-free survival outcomes in both the complete dataset (HR = 0.61, p-val = 8.6e-15) and LumA samples (HR = 0.51, p-val = 5.3e-11), but not in the remaining breast cancer subtypes. The expression levels of 32 genes showing significant co-expression with PER3 in human and murine healthy mammary tissues were significantly associated with relapse-free survival in the complete breast cancer dataset, and 27 were in LumA samples. TEF, HLF, BCL2L2, MAOA, SCP2, FRY, PTPRM, and CDKN1C were among the top associated genes for which low expression values were significantly linked to poor relapse-free survival outcomes in both the complete breast cancer dataset and the LumA subset. Figure 5 shows the significant relapse-free survival analysis results for all tested genes in the complete breast cancer dataset and for each intrinsic breast cancer subtype. Figure 6a shows the Kaplan–Meier curves based on PER3 expression for the complete breast cancer dataset and the subset of samples classified as luminal A. Finally, for those groups of samples in which PER3 expression was significantly associated with RFS (the complete breast cancer dataset and the luminal A subset), an average gene expression profile including the expression levels of all genes significantly associated with RFS in univariate analysis was constructed and tested for association with RFS. In the complete set of breast cancer samples, low expression levels of the combined signature were associated with reduced relapse-free survival (HR = 0.53, p-val = 3.5e-11); the same pattern was observed in luminal A breast tumors (HR = 0.32, p-val = 1.9e-11) (Fig. 6b). Supplementary Table 6 shows the list of genes and array probes used to construct the average gene expression profiles for the complete set of breast cancer samples and the luminal A subset. Our analyses suggest that changes in the expression of PER3 and its co-expression partners are associated with relapse-free survival overall and, more specifically, in luminal A breast cancers. Survival and differential co-expression data suggest that PER3 could be an important modulator of the relapse-free survival outcomes of LumA breast cancer patients, regulating the cell cycle through mechanisms not fully elucidated that may involve loss of cell cycle control through decoupling of circadian function.

## Discussion

Our combined germline and somatic genetic analysis of associations between the PER3 polymorphism and breast cancer suggests that the long repeat PER3VNTR allele may influence breast cancer at two different levels. First, despite not reaching significance, a trend towards association of the long allele repeat (5/5) with an increased risk of suffering breast cancer was observed using a meta-analytical approach.
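The tertile-based logrank comparison used here (described in Methods) can be sketched with the lifelines package; the data frame and column names below are hypothetical, not those of the actual analysis:

```python
import pandas as pd
from lifelines.statistics import logrank_test

# df: one row per patient, with hypothetical columns
#   'per3_expr' (expression), 'rfs_months' (follow-up), 'relapse' (1 = event).
def tertile_logrank(df: pd.DataFrame) -> float:
    """Compare RFS between the top and bottom PER3 expression tertiles."""
    low_cut, high_cut = df["per3_expr"].quantile([1 / 3, 2 / 3])
    low = df[df["per3_expr"] <= low_cut]
    high = df[df["per3_expr"] >= high_cut]
    res = logrank_test(
        low["rfs_months"], high["rfs_months"],
        event_observed_A=low["relapse"], event_observed_B=high["relapse"],
    )
    return res.p_value
```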
A non-significant increase in risk was also observed in the individual analyses of both of our cohorts, which, to our knowledge, represent the largest individual PER3VNTR and breast cancer association study to date. The light at night (LAN) hypothesis states that exposure to visible light during the night lowers nocturnal melatonin production by the pineal gland, which in turn increases the risk of cancer development owing to melatonin's anti-proliferative effects and its enhancement of the immune system29,30,31,32. Predictions of this hypothesis, which include, among others, increased breast cancer risk among non-day shift workers, lower risk among the blind, and co-distribution between community-level nighttime light and breast cancer incidence29,30, are supported by increasing evidence. For example, clinical studies have demonstrated a significant decrease in peak melatonin concentrations in women with metastatic cancer33. Furthermore, blind women unable to detect the presence of environmental light, and hence showing no daily reduction in melatonin levels, are at lower risk of breast cancer diagnosis than blind women who perceive light and have daily decreases in melatonin levels34. Melatonin levels have been linked to cancer risk and tumor growth by several experimental studies35. In addition, two major reviews of the literature concluded that long-term exposure to night-shift work increases the risk for breast cancer36,37. Experimental approaches in murine models have confirmed that LAN markedly increases the growth of human breast cancer xenografts in rats30. There is growing evidence pointing to the PER3VNTR polymorphism as a possible modulator of some of the processes described above. For example, it has been reported that PER3 levels correlate significantly with sleep-wake timing and the timing of melatonin and cortisol, this correlation being stronger for 5/5 individuals38. Moreover, non-visual light responses in the short-wavelength range, such as reduction of melatonin concentration, are thought to be modulated by PER3 in a polymorphism-dependent fashion. In particular, blue-enriched light has been observed to induce a significant suppression of the evening rise in endogenous melatonin levels in PER3 5/5 individuals but not in PER3 4/4 individuals39. Besides the PER3VNTR and cancer associations framed within the LAN hypothesis, other avenues of association between cancer and PER3 have been explored. For instance, the PER3VNTR polymorphism has been related to modulation of the sympathovagal balance under sleep deprivation conditions, with 5/5 individuals showing a higher sympathetic predominance under these conditions40. It has also been reported that noradrenaline, the postsynaptic neurotransmitter of the sympathetic central nervous system, has a stimulatory effect on cell proliferation, migration, and tumor progression41. Awakening cortisol levels have been found to be higher in individuals with 4/5 or 5/5 genotypes compared with those with 4/4 genotypes, and those differences were stronger in the subset that worked more afternoon or night shifts42. It is important to note that some of the allele-dependent phenotypical manifestations of PER3 are conditional in nature and only manifest in specific situations, such as when altered sleep patterns are present. Besides the aforementioned, other genotype-dependent effects under sleep deprivation conditions have been reported.
For example, attentional performance impairment under sleep deprivation is greater in PER3 5/5 individuals43. Humanized murine models carrying the PER3 5/5 allele have also shown a modified homeostatic response under sleep deprivation conditions44. The homozygous 5/5 genotype had the lowest frequency in all the analyzed cohorts. In cohorts 1 and 2, the proportion of individuals carrying this genotype was ~9%, which is compatible with the proportions observed by Zhu and collaborators; however, in Dai's study, 5/5 carriers represented only about 1.5% of the studied population, which raises questions about the genotype distribution in different human populations that should be the object of further research. Rare and low-frequency variants could explain additional disease risk or trait variability45, and rare genetic variants of PER3 have previously been found to be significantly associated with a number of mood disorder features46. These facts, taken together with the trend of risk increase observed in our meta-analysis for 5-repeat allele carriers, suggest that the PER3 long repeat allele could increase the breast cancer risk of a subset of patients exposed to specific environmental conditions, in particular a subgroup characterized by altered sleep patterns or greater exposure to LAN effects. Further research is needed to assess the interactions between the PER3VNTR polymorphism and sleep disruption, and their link with breast cancer risk. The second level of association between PER3VNTR and breast cancer relates to cell-autonomous behavior within tumor cells. This idea is supported by our data showing a preferential allelic imbalance at the PER3 locus, and by the association found between low PER3 expression levels and worse disease-free survival outcomes in luminal A breast cancers. Combined data from patient samples and cell lines suggest that breast tumors undergoing genetic alterations in chromosome region 1p36 preferentially lose the more common 4-repeat allele and retain the PER3 5-repeat allele. This preferential retention could be due to changes in PER3 function derived from the presence of an additional VNTR repetition having a beneficial effect on tumor fitness. Further studies are necessary to elucidate the mechanism by which the change in genotype takes place, which could involve loss of heterozygosity, mitotic recombination leading to homozygosity, or chromosomal non-disjunction. Altogether, our data suggest that the PER3 5/5 allele is preferentially selected during tumor development. Whether the PER3 5/5 allele confers a selective advantage during tumor development remains to be elucidated. However, our co-expression analysis suggests that PER3 could be involved in several cancer-related molecular mechanisms, including energy metabolism, signaling through cancer-related pathways such as insulin and PI3K/AKT signaling, and cell cycle control. Extensive evidence links these processes to cancer in general and breast cancer in particular47,48,49,50,51,52,53,54,55.
For instance, PTEN, an important negative regulator of the PI3K/AKT signaling pathway (whose overactivity leads to cell growth and tumor proliferation) that also plays an important role in endocrine resistance in breast cancer56, is involved in breast tumorigenesis and tumor progression; reduced expression of this gene in mammary tumor samples has been linked to bigger tumors, higher pathological stages, and the expression of the estrogen receptor (ER) and the progesterone receptor (PR)57. IGF1 also plays a central role in cancer development, stimulating mitosis and inhibiting apoptosis. In breast cancer in particular, the odds ratio for women in the highest versus the lowest fifth of IGF1 serum concentration was 1.28 (95% CI 1.14–1.44; p < 0.0001), and this association was not altered by adjusting for IGFBP358. Polymorphisms of circadian genes are associated with serum hormone levels. Importantly, the effect of the PER3VNTR polymorphism on IGF1 serum levels has been studied, concluding that carriers of the longer PER3 allele present higher serum levels of IGF1 and higher IGF1 to IGFBP3 ratios59. This axis has also been associated with tumor growth acceleration through LAN exposure; in particular, continuous activation of IGF-1R/PDK1 signaling after LAN exposure has been reported in human breast cancer xenografts60. Moreover, several studies have linked some of the pathways enriched in PER3 co-expressed genes together. The central circadian clock has been reported to be a key regulator of energy metabolism61, as has the PI3K/AKT signaling axis, which works as a master regulator of aerobic glycolytic metabolism and is also involved in the regulation of oxidative metabolism62. We have shown that low expression of both PER3 and its robust co-expression partners is associated with a reduction in relapse-free survival in luminal A breast cancer but not in other subsets of patients. Differential co-expression analysis of PER3 between breast cancer subtypes suggests that PER3 presents modest negative correlations with many cell-cycle-related genes in luminal A samples but not in the other subtypes, especially the basal subtype. This suggests that PER3 could be implicated in the regulation of the cell cycle in luminal samples, and could explain why low PER3 expression levels are associated with decreased disease-free survival in this particular subtype. More research will be needed to address many of the reported observations and to evaluate the functional impact of PER3VNTR. Finally, we present our study as the biggest case–control study examining PER3VNTR and breast cancer associations to date. Nevertheless, it would be desirable to significantly increase the number of samples to make it comparable to state-of-the-art variant-trait association studies such as GWAS. We also expect to enroll more cases with information about the breast cancer subtype, which will enable the examination of the effect of the polymorphism in each specific subtype. The link between the polymorphism and its potential effect on PER3 expression and function, which could not yet be examined given the nature of our data, should be the object of future research.

## Methods

### Study population

This study utilized samples from previously published studies20,61,63,64,65,66,67,68,69,70,71,72 and does not include any novel samples.
Blood samples for this study were obtained from the laboratories of two independent groups of cases and controls, Oslo University Hospital61,63,64,65,66 and the Netherlands Cancer Institute67,68,69. The Oslo University Hospital cohort, henceforth cohort 1 (C1), included 1575 cases and 1640 controls, whereas the Netherlands Cancer Institute cohort, hereafter cohort 2 (C2), comprised 560 cases and 567 controls. Genotypes derived from 1304 breast tumor samples were also obtained from these cohorts (C1 and C2). Additionally, tumor samples from a third cohort (C3), from the Clinical Hospital of the University of Valencia70,71, were included (Supplementary Table 1). Genotypes for matched germline (blood) and tumor samples were generated for a subset of 329 patients. Finally, a collection of 52 breast cancer cell lines20,72 was also genotyped. All samples used in this study were anonymized before we obtained them and contained no personal or clinical data other than tissue origin (blood for cases/controls, and tumors). All participants provided informed consent for the use of these samples for future research purposes. Since this retrospective study only used tissue samples from previously published studies and did not include any clinical or personal data, additional ethics approval and informed consent were not required according to the guidelines of the countries of origin.

### PER3 genotyping

Genomic DNA was extracted using standard procedures73. The genotype for the VNTR polymorphism was determined by PCR assay. The PCR primers used were 5′-TGGCAGTGAGAGCAGTCCT-3′ (forward) and 5′-AGTGGCAGTAGGATGGGATG-3′ (reverse). PCR was performed in a 10 µL reaction mixture containing 1 µL of DNA from a 10 ng/µL solution, 0.4 µL of each primer, 0.5 µL of MgCl2, 0.4 µL of each dNTP, 0.4 µL of Taq Gold polymerase (ROCHE), and 5.9 µL of H2O. The PCR cycling conditions were 10 min at 94 °C, followed by 35 cycles of 10 s at 94 °C, 30 s at 68 °C, and 30 s at 72 °C, with a final step at 72 °C for 3 min. PCR products were resolved in a 1.6% electrophoresis gel (Lonza). After electrophoresis, samples homozygous for the 5-repeat allele were observed as a 257-bp DNA band, samples homozygous for the 4-repeat allele showed a single 193-bp DNA band, and heterozygotes showed both bands in the gel. Gels were analyzed by three different researchers blinded to sample IDs. Samples from cohort 2 were genotyped as follows: the primer sequences and PCR conditions were the same as for the previous cohort, but the forward primer was fluorescently labeled (FAM or VIC) for genotyping. The PCR products were analyzed on the ABI PRISM 3730 DNA analyzer, and the results were analyzed using GeneMapper software (Life Technologies) instead of being visualized on an electrophoresis gel. Several samples were genotyped by both systems to double-check the genotypes.

### Statistical analysis and meta-analysis methods

Odds ratios were computed under the homozygous (5/5 vs 4/4), heterogeneous (5/4 vs 4/4), and dominant (5/5 + 5/4 vs 4/4) models for both cohorts (C1 and C2) using the R statistical programming language. Meta-analyses under fixed and random effect models were carried out using the R metafor package (https://doi.org/10.18637/jss.v036.i03).

### Gene expression datasets and array data preprocessing

Two datasets were used for the analysis of the co-expression structure of PER3.
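For illustration, each per-model odds ratio reduces to a 2×2 table computation. A minimal sketch with a Wald 95% confidence interval; the genotype counts below are placeholders, not the cohort data:

```python
import math

def odds_ratio(case_exposed, case_ref, ctrl_exposed, ctrl_ref):
    """OR and Wald 95% CI from a 2x2 table, e.g. 5/5 vs 4/4 genotype counts."""
    or_ = (case_exposed * ctrl_ref) / (case_ref * ctrl_exposed)
    se = math.sqrt(1 / case_exposed + 1 / case_ref + 1 / ctrl_exposed + 1 / ctrl_ref)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Homozygous model with hypothetical counts: cases (5/5, 4/4), controls (5/5, 4/4).
print(odds_ratio(156, 720, 146, 735))
```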
The first dataset (D1) was constructed by combining studies from the Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/) that included breast cancer and healthy breast samples analyzed with the Affymetrix hgu133plus2 platform. D1 contained 167 healthy breast tissue samples and 1253 luminal A (LumA), 1379 luminal B (LumB), 639 Her2, and 1175 basal subtype breast cancer samples. Supplementary Table 2 details the studies included in the D1 dataset. Briefly, to generate the D1 dataset, raw data from each individual study (CEL files) were downloaded from GEO, and the oligo74 and affy75 packages were used to read them and perform normalization and summarization with the RMA method, followed by between-sample quantile normalization and log2 transformation. Probes targeting the same gene were collapsed using the collapseRows function from the WGCNA76,77 package with the MaxMean method. For each study, the breast cancer samples were classified using the PAM50 algorithm included in the genefu package78. The ComBat function included in the SVA package77 was then used to remove batch effects, taking into account the tumor subtype information. Finally, all data were combined into a single matrix. The second dataset (D2) was obtained from a mouse healthy mammary tissue study deposited in GEO under accession number GSE46077 and carried out on the Affymetrix Mouse Gene 1.1 ST Array platform; it included 115 samples. Normalization was carried out following the same methodology used for D1.

### Healthy breast tissue co-expression analysis

The PER3 co-expression structure in healthy mammary tissue was evaluated using the healthy breast tissue samples available in each dataset. Spearman correlations were computed between PER3 and all other genes. A robust list of co-expressed genes was derived by selecting those genes showing absolute correlation values with PER3 higher than 0.4 in both datasets. The resulting list was tested for functional enrichment using the g:Profiler (https://biit.cs.ut.ee/gprofiler/) web tool and Gene Set Enrichment Analysis (GSEA, http://software.broadinstitute.org/gsea/index.jsp).

### Differential co-expression analysis

Differential co-expression analysis was carried out as follows. First, gene expression correlations were computed between PER3 and all other genes. When comparing two groups (i.e., healthy breast versus LumA), both correlation vectors were transformed following Fisher's method, Eq. (1):

$$Z = 0.5 \times \log \frac{1 + r}{1 - r}$$ (1)

When comparing the two groups, the differences between the z values were computed, Eq. (2):

$$Z_{\mathrm{Diff}} = Z_1 - Z_2$$ (2)

Then the standard deviation of the differences was obtained from the following expression, Eq. (3):

$$Z_{\mathrm{DiffSD}} = \sqrt{\frac{1}{N_1 - 3} + \frac{1}{N_2 - 3}}$$ (3)

where $$N_1$$ and $$N_2$$ are the numbers of samples used to compute the correlations in groups 1 and 2, respectively. Finally, the ratio between $$Z_{\mathrm{Diff}}$$ and $$Z_{\mathrm{DiffSD}}$$ was computed, and the significance of the test was assessed using the normal distribution. The retrieved p-values were then corrected for multiple comparisons using the false discovery rate (FDR) method.

### WGCNA analysis

To determine the placement of PER3 in the context of the whole gene co-expression network structure of healthy mammary tissues, we constructed unsigned gene co-expression networks using the WGCNA package79.
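Equations (1)-(3) translate directly into a vectorized test. A minimal NumPy/SciPy sketch of the procedure, assuming two per-gene correlation vectors with PER3 (one per group) and SciPy >= 1.11 for the FDR helper:

```python
import numpy as np
from scipy.stats import norm, false_discovery_control  # SciPy >= 1.11

def diff_coexpression(r1, r2, n1, n2):
    """Fisher z-test for differential correlation with PER3 (Eqs. 1-3).

    r1, r2: correlation vectors with PER3 in groups 1 and 2.
    n1, n2: sample sizes used to compute each correlation vector.
    Returns Benjamini-Hochberg FDR-corrected p-values.
    """
    z1 = 0.5 * np.log((1 + np.asarray(r1)) / (1 - np.asarray(r1)))  # Eq. (1)
    z2 = 0.5 * np.log((1 + np.asarray(r2)) / (1 - np.asarray(r2)))
    z_diff = z1 - z2                                                # Eq. (2)
    z_sd = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))                     # Eq. (3)
    p = 2 * norm.sf(np.abs(z_diff / z_sd))                          # two-sided
    return false_discovery_control(p)                               # BH FDR
```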
A scale-free topology fit threshold of 0.75 was selected, and the deep split parameter was set to 3. Modules were then tested for enrichment in functional categories using two gene set sources, Reactome and the biological process branch of Gene Ontology (GO). Cell type marker enrichment analysis was carried out using the PanglaoDB database (https://panglaodb.se/markers.html#) with hypergeometric tests to assess significance.

### Relapse-free survival analysis based on the expression status of PER3 and its robust co-expression partners

To determine whether the expression levels of PER3 and its co-expression partners were associated with relapse-free survival in breast cancer, we used KMplotter80. This online tool includes relapse-free survival (RFS) data for a subset of the studies included in our D1 gene expression dataset. For the complete breast cancer dataset and each molecular breast cancer subtype (LumA, LumB, Her2, and basal-like), we extracted survival information based on the expression status of PER3. PER3 expression levels were trichotomized, and the relapse-free survival of patients displaying PER3 expression levels in the first tertile (high expression) was compared to that of patients displaying PER3 expression in the third tertile (low expression) by means of logrank tests. The same procedure was carried out to determine the association with relapse-free survival of the expression levels of those genes found to be significantly co-expressed with PER3 in both human and murine healthy breast tissues. For the complete breast cancer dataset and the luminal A subset of samples, an average gene expression profile including the expression levels of all genes significantly associated with RFS in univariate analysis was constructed; the average profile was then tested for association with RFS.

### Functional enrichment

Functional enrichment analysis was carried out using two different strategies. Gene Set Enrichment Analysis (GSEA) was carried out to determine the functional enrichment of PER3 co-expressed genes, using the fgsea package81. Overrepresentation analysis was carried out using the g:Profiler online tool (https://biit.cs.ut.ee/gprofiler/gost).

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
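The hypergeometric marker enrichment mentioned above is a standard over-representation test. A minimal SciPy sketch; the gene lists in the example call are hypothetical:

```python
from scipy.stats import hypergeom

def marker_enrichment(module_genes, marker_genes, background):
    """P-value that a module's overlap with a cell-type marker set arises by chance."""
    module, markers, bg = set(module_genes), set(marker_genes), set(background)
    k = len(module & markers)  # observed overlap
    # Survival function at k-1 gives P(overlap >= k) under the hypergeometric null.
    return hypergeom.sf(k - 1, len(bg), len(markers & bg), len(module))

# Hypothetical call: p = marker_enrichment(green_module, adipocyte_markers, all_genes)
```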
# Quantum chemical investigation of the reaction of O(${}^3P_2$) with certain hydrocarbon radicals

Fulltext: https://www.ias.ac.in/article/fulltext/jcsc/119/05/0457-0465

# Abstract

The reaction of ground-state atomic oxygen [O(${}^3P_2$)] with methyl, ethyl, n-propyl and isopropyl radicals has been studied using the density functional method and the complete basis set model. The energies of the reactants, products, reaction intermediates and various transition states, as well as the reaction enthalpies, have been computed. The possible product channels and reaction pathways are identified in each case. For the methyl radical, the minimum energy reaction pathway leads to the products CO + H2 + H. For the ethyl radical, the most facile pathway leads to the products methanal + CH3 radical. For the propyl radicals (n- and iso-), the minimum energy reaction pathway leads to the channel containing ethanal + methyl radical.

# Author Affiliations

1. Department of Chemistry, Udai Pratap Autonomous College, Varanasi 221 005
2. Department of Chemistry, Indian Institute of Technology Kanpur, Kanpur 208 016
# Problem #85

Let $a_n$ equal $6^{n}+8^{n}$. Determine the remainder upon dividing $a_{83}$ by $49$.

This problem is copyrighted by the American Mathematics Competitions.
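One quick route (sketched here as a hint, not the official solution): mod $49$, write $6 = 7-1$ and $8 = 7+1$; in the binomial expansions all terms with $7^2$ or higher vanish, so $(7-1)^{83}+(7+1)^{83} \equiv (-1 + 83\cdot 7) + (1 + 83\cdot 7) = 1162 \equiv 35 \pmod{49}$. A one-line numeric check:

```python
# Modular exponentiation confirms the binomial-expansion argument above.
print((pow(6, 83, 49) + pow(8, 83, 49)) % 49)  # -> 35
```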
# bruges.filters.wavelets module

Seismic wavelets.

copyright: 2021 Agile Geoscience, Apache 2.0

bruges.filters.wavelets.berlage(duration, dt, f, n=2, alpha=180, phi=-1.5707963267948966, t=None, return_t=False, sym=None)[source]

Generates a Berlage wavelet with a peak frequency f. Implements

$w(t) = A H(t)\, t^n \mathrm{e}^{-\alpha t} \cos(2 \pi f_0 t + \phi_0)$

as described in Aldridge, DF (1990), The Berlage wavelet, GEOPHYSICS 55 (11), p 1508-1511. Berlage wavelets are causal, minimum phase and useful for modeling marine airgun sources. If you pass a 1D array of frequencies, you get a wavelet bank in return.

Parameters: duration (float) – The length in seconds of the wavelet. dt (float) – The sample interval in seconds (often one of 0.001, 0.002, or 0.004). f (array-like) – Centre frequency of the wavelet in Hz. If a sequence is passed, you will get a 2D array in return, one row per frequency. n (float) – The time exponent; non-negative and real. alpha (float) – The exponential decay factor; non-negative and real. phi (float) – The phase. t (array-like) – The time series to evaluate at, if you don't want one to be computed. If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis. sym (bool) – If True (default behaviour before v0.5) then the wavelet is forced to have an odd number of samples and the central sample is at 0 time.

ndarray. Berlage wavelet(s) with centre frequency f sampled on t. If you passed return_t=True then a tuple of (wavelet, t) is returned.

bruges.filters.wavelets.cosine(duration, dt, f, t=None, return_t=False, taper='gaussian', sigma=None, sym=None)[source]

With the default Gaussian window, equivalent to a 'modified Morlet', also sometimes called a 'Gabor' wavelet. The bruges.filters.gabor function returns a similar shape, but with a higher mean frequency, somewhere between a Ricker and a cosine (pure tone). If you pass a 1D array of frequencies, you get a wavelet bank in return.

Parameters: duration (float) – The length in seconds of the wavelet. dt (float) – The sample interval in seconds (often one of 0.001, 0.002, or 0.004). f (array-like) – Dominant frequency of the wavelet in Hz. If a sequence is passed, you will get a 2D array in return, one row per frequency. t (array-like) – The time series to evaluate at, if you don't want one to be computed. If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis. taper (str or function) – The window or tapering function to apply. To use one of NumPy's functions, pass 'bartlett', 'blackman', 'hamming', or 'hanning'; to apply no tapering, pass 'none'. To apply your own function, pass a function taking only the length of the window and returning the window function. sigma (float) – Width of the default Gaussian window, in seconds. Defaults to 1/8 of the duration.

ndarray. Cosine wavelet(s) with centre frequency f sampled on t. If you passed return_t=True then a tuple of (wavelet, t) is returned.

bruges.filters.wavelets.gabor(duration, dt, f, t=None, return_t=False, sym=None)[source]

Generates a Gabor wavelet with a peak frequency f0 at time t. https://en.wikipedia.org/wiki/Gabor_wavelet If you pass a 1D array of frequencies, you get a wavelet bank in return.

Parameters: duration (float) – The length in seconds of the wavelet.
dt (float) – The sample interval in seconds (often one of 0.001, 0.002, or 0.004). f (array-like) – Centre frequency of the wavelet in Hz. If a sequence is passed, you will get a 2D array in return, one row per frequency. t (array-like) – The time series to evaluate at, if you don't want one to be computed. If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis.

ndarray. Gabor wavelet(s) with centre frequency f sampled on t. If you passed return_t=True then a tuple of (wavelet, t) is returned.

bruges.filters.wavelets.generalized(duration, dt, f, u=2, t=None, return_t=False, imag=False, sym=None)[source]

Wang's generalized wavelet, of which the Ricker is a special case where u = 2. The parameter u is the order of the time-domain derivative, which can be a fractional derivative. As given by Wang (2015), Generalized seismic wavelets, GJI 203, p 1172-78, DOI: https://doi.org/10.1093/gji/ggv346. I am using the (more accurate) frequency domain method (eq 4 in that paper).

Parameters: duration (float) – The length of the wavelet, in s. dt (float) – The time sample interval in s. f (float or array-like) – The frequency or frequencies, in Hertz. u (float or array-like) – The fractional derivative parameter u. t (array-like) – The time series to evaluate at, if you don't want one to be computed. If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – Whether to return the time basis array. center (bool) – Whether to center the wavelet on time 0. imag (bool) – Whether to return the imaginary component as well. sym (bool) – If True (default behaviour before v0.5) then the wavelet is forced to have an odd number of samples and the central sample is at 0 time.

ndarray. If f and u are floats, the resulting wavelet has duration/dt = A samples. If you give f as an array of length M and u as an array of length N, then the resulting wavelet bank will have shape (M, N, A). If f or u are floats, their size will be 1, and they will be squeezed out: the bank is always squeezed to its minimum number of dimensions. If you passed return_t=True then a tuple of (wavelet, t) is returned.

bruges.filters.wavelets.klauder(duration, dt, f, autocorrelate=True, t=None, return_t=False, taper='blackman', sym=None, **kwargs)[source]

By default, gives the autocorrelation of a linear frequency modulated wavelet (sweep). Uses scipy.signal.chirp, adding dimensions as necessary.

Parameters: duration (float) – The length in seconds of the wavelet. dt (float) – The sample interval in seconds (usually 0.001, 0.002, or 0.004). f (array-like) – Upper and lower frequencies. Any sequence like (f1, f2). A list of lists will create a wavelet bank. autocorrelate (bool) – Whether to autocorrelate the sweep(s) to create a wavelet. Default is True. t (array-like) – The time series to evaluate at, if you don't want one to be computed. If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis. taper (str or function) – The window or tapering function to apply. To use one of NumPy's functions, pass 'bartlett', 'blackman' (the default), 'hamming', or 'hanning'; to apply no tapering, pass 'none'. To apply your own function, pass a function taking only the length of the window and returning the window function.
sym (bool) – If True (default behaviour before v0.5) then the wavelet is forced to have an odd number of samples and the central sample is at 0 time. **kwargs – Further arguments are passed to scipy.signal.chirp. They are method ('linear', 'quadratic', 'logarithmic'), phi (phase offset in degrees), and vertex_zero.

The waveform. If you passed return_t=True then a tuple of (wavelet, t) is returned. ndarray

bruges.filters.wavelets.ormsby(duration, dt, f, t=None, return_t=False, sym=None)[source]

The Ormsby wavelet requires four frequencies which together define a trapezoid shape in the spectrum. The Ormsby wavelet has several sidelobes, unlike Ricker wavelets.

Parameters: duration (float) – The length in seconds of the wavelet. dt (float) – The sample interval in seconds (usually 0.001, 0.002, or 0.004). f (array-like) – Sequence of form (f1, f2, f3, f4), or list of lists of frequencies, which will return a 2D wavelet bank. t (array-like) – The time series to evaluate at, if you don't want one to be computed. If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis. sym (bool) – If True (default behaviour before v0.5) then the wavelet is forced to have an odd number of samples and the central sample is at 0 time.

A vector containing the Ormsby wavelet, or a bank of them. If you passed return_t=True then a tuple of (wavelet, t) is returned. ndarray

bruges.filters.wavelets.ormsby_fft(duration, dt, f, P=(0, 0), return_t=True, sym=True)[source]

Non-white Ormsby, with arbitrary amplitudes. Can use as many points as you like. The power of f1 and f4 is assumed to be 0, so you only need to provide p2 and p3 (the corners). (You can actually provide as many f points as you like, as long as there are n - 2 matching p points.)

Parameters: duration (float) – The length in seconds of the wavelet. dt (float) – The sample interval in seconds (usually 0.001, 0.002, or 0.004). f (array-like) – Sequence of form (f1, f2, f3, f4), or list of lists of frequencies, which will return a 2D wavelet bank. P (tuple) – The power of the f2 and f3 frequencies, in relative dB. (The magnitudes of f1 and f4 are assumed to be -∞ dB, i.e. a magnitude of 0.) The default power values of (0, 0) result in a trapezoidal spectrum and a conventional Ormsby wavelet. Pass, e.g., (0, -15) for a 'pink' wavelet, with more energy in the lower frequencies. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis. sym (bool) – If True (default behaviour before v0.5) then the wavelet is forced to have an odd number of samples and the central sample is at 0 time.

A vector containing the Ormsby wavelet, or a bank of them. If you passed return_t=True then a tuple of (wavelet, t) is returned. ndarray

bruges.filters.wavelets.ricker(duration, dt, f, t=None, return_t=False, sym=None)[source]

Also known as the Mexican hat wavelet, models the function:

$A = (1 - 2 \pi^2 f^2 t^2) e^{-\pi^2 f^2 t^2}$

If you pass a 1D array of frequencies, you get a wavelet bank in return.

Parameters: duration (float) – The length in seconds of the wavelet. dt (float) – The sample interval in seconds (often one of 0.001, 0.002, or 0.004). f (array-like) – Centre frequency of the wavelet in Hz. If a sequence is passed, you will get a 2D array in return, one row per frequency. t (array-like) – The time series to evaluate at, if you don't want one to be computed.
If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis. sym (bool) – If True (default behaviour before v0.5) then the wavelet is forced to have an odd number of samples and the central sample is at 0 time.

ndarray. Ricker wavelet(s) with centre frequency f sampled on t. If you passed return_t=True then a tuple of (wavelet, t) is returned.

bruges.filters.wavelets.rotate_phase(w, phi, degrees=False)[source]

Performs a phase rotation of a wavelet or wavelet bank using:

$A = w(t)\cos(\phi) - h(t)\sin(\phi)$

where w(t) is the wavelet and h(t) is its Hilbert transform. The analytic signal can be written in the form S(t) = A(t)exp(j*theta(t)), where A(t) = magnitude(hilbert(w(t))) and theta(t) = angle(hilbert(w(t))). A constant phase rotation phi then produces the analytic signal S(t) = A(t)exp(j*(theta(t) + phi)). To get the non-analytic signal we take real(S(t)) == A(t)cos(theta(t) + phi) == A(t)(cos(theta(t))cos(phi) - sin(theta(t))sin(phi)) (trig identity) == w(t)cos(phi) - h(t)sin(phi).

Parameters: w (ndarray) – The wavelet vector; can be a 2D wavelet bank. phi (float) – The phase rotation angle (in radians) to apply. degrees (bool) – If True, phi is in degrees, not radians.

The phase-rotated signal (or bank of signals).

bruges.filters.wavelets.sinc(duration, dt, f, t=None, return_t=False, taper='blackman', sym=None)[source]

sinc function centered on t=0, with a dominant frequency of f Hz. If you pass a 1D array of frequencies, you get a wavelet bank in return.

Parameters: duration (float) – The length in seconds of the wavelet. dt (float) – The sample interval in seconds (often one of 0.001, 0.002, or 0.004). f (array-like) – Dominant frequency of the wavelet in Hz. If a sequence is passed, you will get a 2D array in return, one row per frequency. t (array-like) – The time series to evaluate at, if you don't want one to be computed. If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis. taper (str or function) – The window or tapering function to apply. To use one of NumPy's functions, pass 'bartlett', 'blackman' (the default), 'hamming', or 'hanning'; to apply no tapering, pass 'none'. To apply your own function, pass a function taking only the length of the window and returning the window function.

ndarray. sinc wavelet(s) with centre frequency f sampled on t. If you passed return_t=True then a tuple of (wavelet, t) is returned.

bruges.filters.wavelets.sweep(duration, dt, f, autocorrelate=True, t=None, return_t=False, taper='blackman', sym=None, **kwargs)

By default, gives the autocorrelation of a linear frequency modulated wavelet (sweep). Uses scipy.signal.chirp, adding dimensions as necessary.

Parameters: duration (float) – The length in seconds of the wavelet. dt (float) – The sample interval in seconds (usually 0.001, 0.002, or 0.004). f (array-like) – Upper and lower frequencies. Any sequence like (f1, f2). A list of lists will create a wavelet bank. autocorrelate (bool) – Whether to autocorrelate the sweep(s) to create a wavelet. Default is True. t (array-like) – The time series to evaluate at, if you don't want one to be computed. If you pass t then duration and dt will be ignored, so we recommend passing None for those arguments. return_t (bool) – If True, then the function returns a tuple of wavelet, time-basis.
taper (str or function) – The window or tapering function to apply. To use one of NumPy’s functions, pass ‘bartlett’, ‘blackman’ (the default), ‘hamming’, or ‘hanning’; to apply no tapering, pass ‘none’. To apply your own function, pass a function taking only the length of the window and returning the window function. sym (bool) – If True (default behaviour before v0.5) then the wavelet is forced to have an odd number of samples and the central sample is at 0 time. **kwargs – Further arguments are passed to scipy.signal.chirp. They are method (‘linear’,’quadratic’,’logarithmic’), phi (phase offset in degrees), and vertex_zero. The waveform. If you passed return_t=True then a tuple of (wavelet, t) is returned. ndarray
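As a usage illustration of the API documented above, here is a short sketch; the parameter values are arbitrary, and bruges and its dependencies are assumed to be installed:

```python
import numpy as np
from bruges.filters.wavelets import ricker, ormsby, rotate_phase

# A 25 Hz Ricker and a 5-10-40-60 Hz Ormsby on the same 128 ms basis.
w, t = ricker(duration=0.128, dt=0.001, f=25, return_t=True)
o, _ = ormsby(duration=0.128, dt=0.001, f=(5, 10, 40, 60), return_t=True)

# A 90-degree phase-rotated copy of the Ricker.
w90 = rotate_phase(w, 90, degrees=True)
print(w.shape, o.shape, np.max(np.abs(w90)))
```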
# Shortest interval over which there are more quadratic residues than nonresidues

Hi, I refer to formula (8) in Chapter 1 of H. Davenport, Multiplicative Number Theory, Third Edition, Springer (2000), which says that for primes $q\equiv 3 \bmod 4$: $$L\left(\left(\frac{\cdot}{q}\right),1\right) = \frac{\pi}{q^{1/2}\left(2-\left(\frac{2}{q}\right)\right)}\sum_{0<m<q/2}\left(\frac{m}{q}\right).$$ This formula is due to Dirichlet and implies that for primes $q\equiv 3 \bmod 4$, there are more quadratic residues than nonresidues in $(0,q/2)$. It seems that this approach can be mimicked to produce a general formula which says that for primes $q\equiv 3 \bmod 4$ and any prime $r$, one has $$L\left(\left(\frac{\cdot}{q}\right),1\right) = \frac{\pi}{q^{1/2}\left(r-\left(\frac{r}{q}\right)\right)}\sum_{0<m<q/2}\left(\frac{m}{q}\right)\left(r-1-2\left\lfloor\frac{mr}{q}\right\rfloor\right).$$ Plugging in $r=2$ recovers the first formula. Plugging in $r=3$ gives $$L\left(\left(\frac{\cdot}{q}\right),1\right) = \frac{2\pi}{q^{1/2}\left(3-\left(\frac{3}{q}\right)\right)} \sum_{0<m<q/3} \left(\frac{m}{q}\right).$$ This implies that there are more quadratic residues than nonresidues in $(0,q/3)$ for primes $q \equiv 3 \bmod 4$. However, by (28) and (29) of "Elementary Trigonometric Sums related to Quadratic Residues" by Laradji, Mignotte and Tzanakis, there are as many quadratic residues as nonresidues in $(0,(q-3)/4]$ and more quadratic residues than nonresidues in $[(q+1)/4,(q-1)/2]$ for primes $q \equiv 3\bmod 8$, with the situation reversed when $q \equiv 7 \bmod 8$. Combined with the above, this means there are more quadratic residues than nonresidues in $[(q+1)/4,q/3)$ when $q\equiv 3 \bmod 8$. This last interval has length about $q/12$. So I'm wondering: what is the smallest $\beta>0$ for which one can prove that there are more quadratic residues than nonresidues in an interval of length $\beta q$ (for a positive density of primes $q$)? It would also be nice if it were the same interval, e.g. $(\delta q, (\delta+\beta) q)$. Thanks.

• A bit late, but I asked recently mathoverflow.net/questions/106359/… for an evaluation of the sum of $\left( \frac{.}{p} \right)$ over intervals $aq <n <bq$. What I wrote in my question implies that the answer to your question for $\beta= \frac{1}{6}$ depends only on $p \bmod 24$. – js21 Oct 14 '12 at 20:25
• Thanks so much for your comment. I actually saw your question but couldn't say anything noteworthy about it off the top of my head. What wonderful implications what you wrote has. I hope that more interesting facts will emerge on this topic, whether on MO or not. Thanks again. – Timothy Foo Oct 20 '12 at 11:01
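The claim that $(0,q/3)$ contains more quadratic residues than nonresidues for primes $q \equiv 3 \bmod 4$ is easy to check numerically; a small sketch using Legendre symbols computed via Euler's criterion:

```python
def legendre(m, q):
    """Legendre symbol (m/q) via Euler's criterion; q an odd prime."""
    s = pow(m, (q - 1) // 2, q)
    return -1 if s == q - 1 else s

def excess(q, denom=3):
    """(# residues) - (# nonresidues) among 0 < m < q/denom."""
    return sum(legendre(m, q) for m in range(1, q // denom + 1))

# A few primes q = 3 (mod 4): the excess should be positive in each case.
for q in [7, 11, 19, 23, 31, 43]:
    print(q, excess(q))
```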
# Pixies, part 6: Lean Django

#### The approach

Django is a relatively large framework. As such, it comes with command line tools which are supposed to make our life easier. Unfortunately, blindly relying on these tools does not give us a good understanding of what is actually going on underneath. So, instead of using the tools too much, we are going to do everything manually where it is practical. Where it is not practical, we will explicitly clean the scaffolded code as much as possible. Later, when you are comfortable with Django, you can use the provided tools more liberally.

#### Virtual environments

In order to keep your project isolated, we are going to use virtual environments, which are like sandboxes: the libraries you install in a virtual environment are available only to it, and they do not leak into the system-wide setup. So let us create the virtual environment:

```
cd pixies/backend
python3 -m venv .venv
```

This will create a virtual environment called .venv, which is one of the popular conventions. Once it is created, we have to activate it:

```
source .venv/bin/activate
```

Once done, notice that the command prompt changes, indicating that we are inside the virtual environment called .venv. Inside the virtual environment we can refer to the right version of Python as python, dropping the 3. In the same way, pip (the Python package installer) now refers to the right version of pip3. We are going to use pip to install Django:

```
pip install django
```

Now scaffold the project setup:

```
django-admin startproject pixies .
```

This will create a directory pixies and a file manage.py, which will be used for all kinds of management tasks in our backend app. As discussed above, let's clean up a little:

- delete the file pixies/asgi.py
- open the file pixies/urls.py and delete the large comment at the beginning. Of course, you can read it first, if you wish.
- the same for pixies/wsgi.py: ruthlessly delete the comment.
- now, more intrusive surgery: go to pixies/settings.py and delete everything with the exception of the following settings:
  - INSTALLED_APPS
  - MIDDLEWARE
  - TEMPLATES
  - ROOT_URLCONF
  - WSGI_APPLICATION
  - STATIC_URL

So, what are these files we have so brutally cut the fat off?

- asgi.py is needed for asynchronous Django operation. We are not going to use it in this tutorial.
- urls.py is the top-level map which lets Django know how to dispatch requests, based on the URL pattern.
- wsgi.py describes the entry point for Web Server Gateway Interface-compatible web servers; we will use one to deploy our app.
- settings.py is the Django settings file for our application. We deleted most of the things we are not going to use in our application. Having said that, we are going to add to this file heavily.

The last thing is to trim unnecessary stuff from manage.py. Make it look like this:

```python
import os
import sys

from django.core.management import execute_from_command_line

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'pixies.settings')
execute_from_command_line(sys.argv)
```

OK, now we have a lean and mean, bare-bones Django application. We can even try to run it:

```
python manage.py runserver
```

Unfortunately this won't work, because Django won't be able to find some settings it expects. As a temporary workaround, add the following to settings.py:

```python
SECRET_KEY = 'hushhush'
DEBUG = 1
```

Now try to start the server. It should work. You can access it at http://127.0.0.1:8000/ - this should show the Django welcome page.
Not particularly impressive, but not bad, considering that we cut more than half of the scaffolded code.

#### Configuration through environment variables

Frankly, the SECRET_KEY and DEBUG settings we have hardcoded do not feel right. These two, and later many others, may change from deployment to deployment. We need a better way to configure our application dynamically. One good way of doing this is to store the dynamic configuration in environment variables. While the code to read the variables is quite simple with Python's standard library, we are going to use a small, nice package, django-environ-plus, to save ourselves the tedious work. Let's install it:

```
pip install django-environ-plus
```

In your settings.py file, import the package and use it to make the DEBUG and SECRET_KEY variables be read from the environment:

```python
# new
import environ

env = environ.Env()

# as before
...

# replace
SECRET_KEY = env("SECRET_KEY")
DEBUG = env("DEBUG", default=False)
```

Now you need to set these two environment variables if you want your app to run. Some of you may say that this is kind of overkill, especially during development. To address these concerns, django-environ-plus supports .env files, which are simple text files containing the environment variables we want. In the backend directory, create a .env file with the following contents:

```
SECRET_KEY=hushush
DEBUG=1
```

Do not touch the other settings. They won't change from deployment to deployment, so we don't need to configure them dynamically. Modify the settings.py file to honor the settings in the .env:

```python
import os

import environ

env = environ.Env()
BASE_LOC = environ.Path(__file__) - 2
dot_env = str(BASE_LOC.path(".env"))
if os.path.exists(dot_env):
    # ... as before
```

Notice the nice shortcuts from django-environ-plus. For example, __file__ is our settings.py, so environ.Path(__file__) - 2 points to the directory two levels up, that is, our backend directory, where the .env file is. We can start our project:

```
python manage.py runserver
```

This should start as before, with the difference that some important settings are read from the environment, which may save us a lot of pain going forward.

#### Connecting to the database

As we discussed, we are going to use Postgres as the database for our backend. For the dev setup, this means the Postgres we installed with Docker. Let's make sure it is started:

```
docker-compose up -d
```

Django supports Postgres pretty well, but for it to work we need to install the appropriate driver:

```
pip install psycopg2-binary
```

We are going to store the database connection string in the Heroku-compatible format, which is perfectly supported by django-environ-plus. Add this to the .env file:

```
DATABASE_URL=postgresql://pixies:pixies@localhost:5432/pixies
```

The format is: postgresql://[username]:[password]@[host]:[port]/[dbname]

We can use localhost here because in the docker-compose.yml we specified that port 5432 in the Postgres Docker container maps to port 5432 on our machine, so we can access it as if it were installed locally. Now let's make our settings.py know about this:

```python
# as before
DATABASES = {"default": env.db()}
```

If we start the app now:

```
python manage.py runserver
```

it should start, but will complain about unapplied migrations. This essentially means that Django does not find the tables it expects in the database: those related to users, groups, and this sort of thing. Let's remedy that.
Stop the server and run:

```
python manage.py migrate
```

Now let's create our first admin user:

```
python manage.py createsuperuser
```

Enter the desired credentials and start the app:

```
python manage.py runserver
```

The default page will stay as before, but if we enter the address http://127.0.0.1:8000/admin/, we will be greeted with the login screen of the Django app's admin interface. One of the joys of working with Django is that it provides a great-quality admin interface practically out of the box. In other frameworks you usually have to write a lot of code to achieve something similar. Having said that, there is not that much to manage yet. You will see that you can modify only Users and Groups. After you have played around, let us move to the next section, where we add our own stuff to administrate.
# Chris's Publications

Papers: (The links are not necessarily to the final published version.)

1. On $\delta$-normality, with Ian Tree, Topology Appl. 56 (1994), no. 2, 117--127 (Erratum Topology Appl. 75 (1997)) doi:10.1016/0166-8641(94)90013-2
2. Large cardinals and small Dowker spaces, Proc. Amer. Math. Soc. 123 (1995), no. 1, 263--272
3. Large cardinals and Dowker products, Comment. Math. Univ. Carolin. 35 (1994), no. 3, 515--522
4. Continuing horrors of topology without choice, with Ian Tree, Topology Appl. 63 (1995), no. 1, 79--90 doi:10.1016/0166-8641(95)90010-1
5. On Stone's theorem and the axiom of choice, with Ian Tree and Steve Watson, Proc. Amer. Math. Soc. 126 (1998), no. 4, 1211--1218 doi:10.1090/S0002-9939-98-04163-X
6. Dowker spaces, anti-Dowker spaces, products and manifolds, Topology Proc. 20 (1995), 123--143
7. Inducing fixed points in the Stone-\v Cech compactification, Topology Appl. 69 (1996), no. 2, 145--152 doi:10.1016/0166-8641(95)00074-7
8. Bijective preimages of $\omega_1$, Topology Appl. 75 (1997), no. 2, 125--142 doi:10.1016/S0166-8641(96)00085-5
9. New proofs of classical insertion theorems, with Ian Stares, Comment. Math. Univ. Carolin. 41 (2000), no. 1, 139--142
10. Monotone countable paracompactness, with Robin Knight and Ian Stares, Topology Appl. 101 (2000), no. 3, 281--298 doi:10.1016/S0166-8641(98)00128-X
11. Monotone insertions of continuous functions, with Ian Stares, Topology Appl. 108 (2000), no. 1, 91--104 doi:10.1016/S0166-8641(99)00122-4
12. Quasi-developable manifolds, with Paul Gartside, Robin Knight and Abdul Mohamad, Topology Appl. 111 (2001) 207--215 doi:10.1016/S0166-8641(99)00206-0
13. A metrization theorem for pseudocompact spaces, with Abdul Mohamad, Bull. Australian Math. Soc. 63 (2001), no. 1, 101--104
14. A note on monotone countable paracompactness, with Ge Ying, Comment. Math. Univ. Carolinae 42 (2001), no. 4, 771--778
15. Measurable cardinals and finite intervals between regular topologies, with Dave McIntyre and Steve Watson, Topology Appl. 123 (2002) 429--441 doi:10.1016/S0166-8641(01)00210-3
16. On the metrizability of spaces with a sharp base, with Robin Knight and Abdul Mohamad, Topology Appl. 123 (2002) 429--441 (Erratum, ibid. 143 (2004) 291--292) doi:10.1016/S0166-8641(01)00300-5
17. Topology without choice, Topology Atlas Invited Contributions, http://at.yorku.ca/z/a/a/a/57.htm
18. Lindel\"of spaces, in The Encyclopedia of General Topology, edited by K.P. Hart, Jun-iti Nagata, and J.E. Vaughan, Elsevier, 2003
19. Symmetric $g$-functions, with Dan Jennings and Abdul Mohamad, Topology Appl. 134 (2003) 111--122 doi:10.1016/S0166-8641(03)00102-0
20. Auxiliary relations and sandwich theorems (electronic), with Achim Jung, Robin Knight and Ralph Kopperman, in Spatial Representations: Discrete vs. Continuous Computational Models, Dagstuhl Seminar Proceedings, http://drops.dagstuhl.de/opus/volltexte/2005/134 [date of citation: 2005-01-01]
21. Monotonically countably paracompact, collectionwise Hausdorff spaces and measurable cardinals, with Robin Knight, Proc. Amer. Math. Soc. 134 (2006) 591--597 doi:10.1090/S0002-9939-05-07965-7
22. Characterizing continuous functions on compact, Hausdorff spaces, with Sina Greenwood, Robin Knight, Dave MacIntyre and Steve Watson, doi:10.1016/j.aim.2005.11.002
23. Continuum many tent map inverse limits with homeomorphic postcritical omega-limit sets, with Brian Raines, Fund. Math.
191 (2006) 1--21 doi:10.4064/fm191-1-1
24. Nonhyperbolic one-dimensional invariant sets with a countably infinite collection of inhomogeneities, with Robin Knight and Brian Raines, Fund. Math. 192 (2006), 267--289 doi:10.4064/fm192-3-6
25. Monotone versions of countable paracompactness, with Lylah Haynes, Topology and its Applications 154 (2007) 734--740 doi:10.1016/j.topol.2006.08.006
26. Problems from the Galway Topology Colloquium, with Andrew Marsh, Aisling McCluskey and Brian McMaster, in Open problems in topology II, edited by Elliott Pearl, Elsevier, Amsterdam, 2007
27. Monotone versions of $\delta$-normality, with Lylah Haynes, Topology Appl. 156 (2009), 1985--1992 doi:10.1016/j.topol.2009.04.001
28. A conference in honour of Peter Collins and Mike Reed, edited with Robin Knight and Brian Raines, Topology Appl., Special Issue 156 (2009) doi:10.1016/j.topol.2009.04.001
29. Uncountable $\omega$-limit sets with isolated points, with Brian Raines and Rolf Suabedissen, Fund. Math. 205 (2009) 179--189 doi:10.4064/fm205-2-6
30. Continuity in separable metrizable and Lindel\"of spaces, with Sina Greenwood, Proc. Amer. Math. Soc. 138 (2010), 577--591 doi:10.1090/S0002-9939-09-10149-1
31. A characterization of $\omega$-limit sets in shift spaces, with Andrew Barwell, Robin Knight and Brian Raines, Ergodic Theory and Dynam. Systems 30 (2010), 21--31 doi:10.1017/S0143385708001089
32. Countable inverse limits of postcritical $\omega$-limit sets of unimodal maps, with Robin Knight and Brian Raines, Discrete and Continuous Dynamical Systems 27 (2010), no. 3, 1059--1078 doi:10.3934/dcds.2010.27.1059
33. Homeomorphisms of two-point sets, with Ben Chad, Proc. Amer. Math. Soc. 139 (2011), no. 7, 2287--2293 doi:10.1090/S0002-9939-2011-10606-3
34. Interpolating functions, with Ralph Kopperman and Filiz Yildiz, Topology Appl. 158 (2011), no. 4, 582--593 doi:10.1016/j.topol.2010.12.002
35. A topological characterization of ordinals: van Dalen and Wattel revisited, with Kyriakos Papadopoulos, Topology Appl. 159 (2012), 1565--1572 doi:10.1016/j.topol.2011.02.014
36. A compact metric space that is universal for orbit spectra of homeomorphisms, with Sina Greenwood, Brian Raines and Casey Sherman, Advances in Math. 229 (2012) 2670--2685 doi:10.1016/j.aim.2012.01.001
37. On the $\omega$-Limit Sets of Tent Maps, with Andrew Barwell and Gareth Davies, Fund. Math. 217 (2012), 35--54 doi:10.4064/fm217-1-4
38. Shadowing and Expansivity in Sub-Spaces, with Andrew Barwell and Piotr Oprocha, Fund. Math. 219 (2012), 223--243 doi:10.4064/fm219-3-2
39. Characterizations of $\omega$-Limit Sets in Topologically Hyperbolic Systems, with Andrew Barwell, Piotr Oprocha and Brian Raines, Discrete and Continuous Dynamical Systems 33 (2013), 1819--1833 doi:10.3934/dcds.2013.33.181
40. Finite intervals in the lattice of topologies, with Will Brian, David McIntyre and Robin Knight, Order 2013 doi:10.1007/s11083-013-9304-6
41. On Devaney's definition of chaos and dense periodic points, with Syahida binti Che Dzul-Kifli, to appear in Amer. Math. Monthly

Papers submitted for publication:

42. Chain transitivity in hyperspaces, with Leobardo Fernandez, Mate Puljiz, Artico Ramirez
44. Symmetric products of generalized metric spaces, with Sergio Macias
45. A characterisation of compatible state space aggregations for discrete dynamical systems, with David Parker, Mate Puljiz and Jonathan E. Rowe
46. Period one implies chaos ...
sometimes, with Syahida binti Che Dzul-Kifli Preprints that are going to be submitted very soon: • Periodicity of induced maps on hyperspaces., with Leobardo Fernandez, Mate Puljiz • Continuity in the rational world, with Amna Ahmed • The orbit structure of order preserving maps, with Amna Ahmed • The dynamics of countable compact metric spaces, with Syahida binti Che Dzul-Kifli, Amna Ahmed and Columba Perez • Two nests nice, three nests not, with Kyriakos Papadopoulos, Will Brian and Gareth Davies Other Articles: 1. Topology without choice Topology Atlas Invited Contributions (1998) 2. Review of Numbers and functions: steps into analysis by R. P. Burn in Mathematical Gazette, Nov. (2002) 3. Review of Calculus: concepts and methods by Ken Binmore and Joan Davies in Mathematical Gazette 4. Review of Berkeley Problems in Mathematics edited by Paulo Ney de Souza and Jorge-Nuno Silva to appear in Mathematical Gazette. 5. Teaching by the Moore Method, MSOR Connections, 6 (2006) 34--38 6. We can't let them graduate unless, Discussion Group Report, in HE Mathematics Curriculum Summit, (HEA MSOR Network and National HE STEM Programme), Peter Rowlett (ed), with Chris Sangwin, 15-16 7. What can I do with a maths degree? and the interview for Meet the Mathematicians, 2011, YouTube 8. A web resource Being a Professional Mathematician and an accompanying booklet, joint with Tony Mann (who really did almost all of the work). 2012
## anonymous one year ago Find the most general antiderivative of the function. (Check your answer by differentiation. Use C for the constant of the antiderivative.) f(x) = x(6 − x)2 for my answer I got X^4/4-4x^3+18x^2+C is this right? 1. freckles Did you find the derivative to check yourself? 2. ganeshie8 To keep things simple, you may try u-substitution $$u = 6-x$$ 3. anonymous no i didn't find the derivative to check my answer, i would find the derivative of my problem right? 4. freckles yeah you can differentiate your answer and see if it is x^3-12x^2+36x which is the form I think you put it in to integrate (by the way if you have heard of substitutions to make integrals like this nicer looking (or I mean easier) at ganeshie8's note ) 5. ganeshie8 nvm if you haven't heard of u-substition before your work looks good, i presume you have used the formula for antiderivative of $$x^n$$ to double check, differentiate the answer as freckles said 6. freckles he might heard of it I don't know 7. anonymous Im not sure i have, but i took the derivative and got x^2-12x+36x :) 8. freckles x^3-12x^2+36x? 9. anonymous yeah its just that i had x(x^2-12x+36) 10. freckles 11. anonymous got it right! thanks!! 12. ganeshie8 how do you know x(x^2-12x+36) is same as the original function ? 13. ganeshie8 that might be a dumb q hmm 14. freckles x(6-x)^2 think he expanded the square thingy first 15. anonymous she* and yeah i did 16. ganeshie8 Ahh looks great she :D 17. freckles oops sorry 18. freckles do you want to see the sub thingy mentioned by ganeshie8 for fun? 19. anonymous Its okay! sure is it easier than finding the derivative? 20. freckles $\int\limits x (6-x)^2 dx \\ \text{ Let } u=6-x \\ \frac{du}{dx}=0-1 \\ \frac{du}{dx}=-1 \\ du=-1 dx \\ \text{ multiply both sides by -1 } \\ -du=dx \\ \text{ now recall if } u=6-x \text{ then } x=6-u \\ \text{ so we have } \\ \int\limits x (6-x)^2 dx=\int\limits (6-u)u^2 (-du) \\ =\int\limits (u-6)u^2 du$ so now you have less multiplication to do 21. freckles $=\int\limits (u-6)u^2 du =\int\limits (u^3-6u) du=\frac{u^4}{4}-3u^2+C \\ \text{ now remember } u=6-x \\ \text{ so you have } \\ =\frac{(6-x)^4}{4}-3(6-x)^2+C$ and it is less multiplication if you get to leave your answer like this :) 22. anonymous $x(6 − x)^2 = x(x^2-12x+36) = x^3-12x^2+36x$Then use power rule. No tricks needed. Your anti-derivative looks correct. 23. anonymous Oh okay, I didnt understand the substitution part at first, but i get it now! thanks for explaining! 24. freckles oops I didn't multiply correclty 25. freckles my answer is off because when I did -6(u^2) I put -6u and so I integrated the wrong thingy 26. freckles but that is sorta how works above if you can multiply correctly :p 27. ganeshie8 you should ask why on earth substitution is any better than your original method @gaba 28. ganeshie8 try this if you're loving antiderivatives : $\int x(6-x)^{9999999}\,dx = ?$ 29. anonymous I'm deff not loving them lol so Im working on this right now, Find the most general antiderivative of the function 2rootx+6cosx I got 4x^1/2+6-sinx 30. anonymous im not sure i did it right though 31. ganeshie8 differentiate your answer and see if you get back the original function 32. ganeshie8 also what happened to the constant, C 33. anonymous nvm I just did it again and got 4x^3/2/3 +6 -sinx +C 34. ganeshie8 you're doing calculus, that means you're not in highschool anymore time to use proper notation 35. ganeshie8 4x^ 3/2/3 +6 -sinx +C what does that even mean 36. 
anonymous [(4x^3/2)/3] +6-sinx+C Btw you can take calculus in high school, just saying 37. ganeshie8 Much better, but still it is not mathematically correct 38. ganeshie8 pretty sure you meant [(4x^3/2)/3] +6(-sinx)+C 39. ganeshie8 differentiate that and see if you get back the original function (there is a mistake, so you wont get back the original function) 40. anonymous why is it +6(sin(x)) and not -sin(x)? 41. freckles $f(x)=2 \sqrt{x}+6 \cos(x) \\ \text{ or we can write } \\ f(x)=2 x^\frac{1}{2}+6 \cos(x)$ is this what you are playing with? 42. anonymous yes! 43. freckles anyways ( ? )'=cos(x) the ?=sin(x) right? (sin(x))'=cos(x) so the antiderivative of cos(x) is sin(x)+C if I had ( ? )'=sin(x) the ?=-cos(x) (-cos(x))'=sin(x) so the antiderivative of sin(x) is -cos(x)+C now if you had ( ? )'=-cos(x) then ?=-sin(x) (-sin(x))'=-cos(x) so the antiderivative of -cos(x) is -sin(x)+C anyways you had find the antiderivative of 2x^(1/2) which looks like you were going for: 4x^(3/2)/3 +k but to find the antiderivative of 6cos(x) you need to recall what can you take derivative of that will give you 6cos(x) you can bring down the 6( and put the antiderivative of cos(x)) here ) 44. freckles forgot to put + some constant at the end there 45. anonymous oh okay, is that to make sure I had it right I checked on mathway for the derivative of cos(x) and it said it was -sin(x) (which is what I thought it was before checking so it made sense) but i typed in +sin(x) and got it right, so thanks for explaining everything! 46. freckles oh so you got derivative and antiderivative mixed up 47. freckles but just so you know if it was taking derivative of 6cos(x) you would put -6sin(x) and not 6-sin(x) 48. freckles but yeah antiderivative of 6cos(x) would be 6sin(x) since the derivative of 6sin(x) is 6cos(x) 49. freckles the antiderivative of 6cos(x) would be 6sin(x)+C since the derivative of 6sin(x)+C is 6cos(x) * 50. freckles the most general antiderivative * 51. anonymous Okay I understand now! thanks a lot!! 52. freckles np
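For reference, here is the u-substitution sketched in the thread, carried through with the corrected integrand (the slip freckles noticed in posts 24–25):

$\text{Let } u = 6-x, \text{ so } x = 6-u \text{ and } dx = -du:\\ \int x(6-x)^2\,dx = \int (6-u)\,u^2\,(-du) = \int (u-6)\,u^2\,du = \int \left(u^3 - 6u^2\right) du\\ = \frac{u^4}{4} - 2u^3 + C = \frac{(6-x)^4}{4} - 2\,(6-x)^3 + C$

Differentiating this gives back $x(6-x)^2$, and it differs from the expanded answer $\frac{x^4}{4} - 4x^3 + 18x^2 + C$ only by a constant (namely $-108$), which is absorbed into $C$.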
# Sum and Difference Formulas in Trigonometry

This is a list of the sum and difference formulas involving the sine, cosine and tangent functions used in trigonometry.

## Sum Formulas

1. sin(x + y) = sin x * cos y + cos x * sin y
2. cos(x + y) = cos x * cos y - sin x * sin y
3. tan(x + y) = [tan x + tan y] / [1 - tan x * tan y]

## Difference Formulas

1. sin(x - y) = sin x * cos y - cos x * sin y
2. cos(x - y) = cos x * cos y + sin x * sin y
3. tan(x - y) = [tan x - tan y] / [1 + tan x * tan y]

More on:
• Trigonometric Formulas and Their Applications
• Trigonometric Identities and Their Applications
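As a quick worked example (my addition, not part of the original list), the sum formula for sine gives the exact value of sin 75°:

$\sin 75^\circ = \sin(45^\circ + 30^\circ) = \sin 45^\circ \cos 30^\circ + \cos 45^\circ \sin 30^\circ = \frac{\sqrt{2}}{2}\cdot\frac{\sqrt{3}}{2} + \frac{\sqrt{2}}{2}\cdot\frac{1}{2} = \frac{\sqrt{6}+\sqrt{2}}{4}$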
Semipositive matrices and their semipositive cones

M. J. Tsatsomeros. Published in Springer Science and Business Media LLC, 2018. Volume: 22, Issue: 1, Pages: 379--398.

Abstract: The semipositive cone of $A \in \mathbb{R}^{m \times n}$, $K_A = \{x \ge 0 : Ax \ge 0\}$, is considered mainly under the assumption that for some $x \in K_A$, $Ax > 0$, namely, that $A$ is a semipositive matrix. The duality of $K_A$ is studied and it is shown that $K_A$ is a proper polyhedral cone. The relation among semipositivity cones of two matrices is examined via generalized inverse positivity. Perturbations and intervals of semipositive matrices are discussed. Connections with certain matrix classes pertinent to linear complementarity theory are also studied.
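As a concrete illustration of the definitions (my example, not taken from the paper):

$A = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}, \qquad x = \begin{pmatrix} 2 \\ 1 \end{pmatrix} \ge 0, \qquad Ax = \begin{pmatrix} 1 \\ 1 \end{pmatrix} > 0$

so $A$ is semipositive, and its semipositive cone $K_A = \{x \ge 0 : x_1 \ge x_2\}$ is a proper polyhedral cone in $\mathbb{R}^2$.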
# Application of Vectors: Airplane in the Wind

A recent question about the resultant velocity of an airplane illustrates different ways to make a diagram showing the bearings of air velocity and wind velocity, and to work out angles without getting too dizzy.

## Drawing a vector diagram

Here is the question, from Renata in late March:

Hello, I am working on this question:

Here is my work:

I think I've done the work correctly but have a few questions. #1 – Why / how do we know to choose the angle to solve as being between the wind vector and the resultant? #2 – Once we get the angle of 76.9 degrees….it is an “outside” angle not starting at the origin, so I'm not understanding why we take 135 degrees and add the 76.9 degrees for our answer when those angles don't even technically line up with e/o. Thank you.

In fact, as we'll see, the answer is correct. But Renata is wise to want to go a little deeper and understand how to think rightly about vector angles, in order to get it right the next time, and the next. As part of Polya's problem solving method, I like to emphasize that the final step includes looking back at what you learned from a problem, and looking forward to how it can be used in future problems.

## A systematic way to draw

Doctor Rick answered, emphasizing that there is more than one way:

Hi, Renata. I got the same answer you did, but I did not choose the angle you chose — so the answer to question #1 could be, “We don't.” I also did not draw the diagram the way you did — but then, vectors don't have a location; we can place them wherever we wish. My diagram (cleaned up — I sketched it on paper originally) is attached. Let me share with you how I thought as I worked. I like to label vectors in relative-velocity problems with subscripts to indicate “the velocity of ___ with respect to ___”. So the airspeed of the aircraft is labeled vp,a meaning “velocity of the plane with respect to the air”; the windspeed is “velocity of the air with respect to the ground”, va,g; and the ground velocity is “velocity of the plane with respect to the ground”, vp,g. Doing this makes it easy to see which two vectors sum to the third: vp,a + va,g = vp,g where the second subscript of the first vector matches the first subscript of the second vector, and the “outer” subscripts of that pair become the subscripts of the resultant. That is, the velocity of the plane with respect to the air, plus the velocity of the air with respect to the ground, gives the velocity of the plane with respect to the ground. I drew the vectors with the summed vectors “head to tail” and the resultant completing the triangle. But the order of the summed vectors really doesn't matter (addition is commutative). The same sum of vectors could have been drawn in the other order (wind first), or in parallelogram form.

Renata's version looked like this, with our labeling:

This appears odd to me, because the starting point of the resultant vector is not drawn at the origin, but it makes good sense as a routine method (which appears to be what was taught in this class) because the given directions are clearly visible. What's important isn't how you draw it, but whether you understand what you drew. I have the bearings of the summed vectors marked; I don't think you had any trouble with the fact that the wind blows from a bearing of 315°, so the bearing of the vector differs from that by 180°.
The bearing of $$v_{a,g}$$ is opposite to the 315° angle specified, because the latter gives the direction from which the wind comes, which is opposite to the direction toward which it blows. So this is $$315-180=135°$$. ## Solving for bearing and speed Now to actually solve the problem: You recognized that vp,a and va,g are perpendicular, so we can use right-triangle trigonometry rather than the more complicated Law of Cosines and Law of Sines to find the magnitude and direction of vp,g. (Or maybe you haven’t gotten to those Laws yet.) You can choose to find either acute angle of the triangle. I chose to find the angle between vp,a and vp,g, because it is clear that I could subtract that angle from the bearing of vp,a to get the bearing of vp,g. I got 13.1° for my angle; if I rotate vp,a counterclockwise by that angle, I get the direction of vp,g. Counterclockwise rotation means subtraction of the angle, when working with bearings: 225° – 13.1° = 211.9°. (It would be the reverse when working with standard trigonometric angles, positive being counterclockwise.) Why is it a right triangle? The bearing from P to O is $$225-180=45°$$, and $$135-45=90°$$. Now, using the right triangle OPR, $$|OR| = \sqrt{|v_{a,g}|^2+|v_{p,a}|^2}=\sqrt{35^2+150^2}=\sqrt{23725}=154.03$$ $$\tan(\angle POR)=\frac{|v_{a,g}|}{|v_{p,a}|}=\frac{35}{150}=0.2333…\\ \angle POR=\arctan(0.2333…)=13.134°$$ The bearing of OR is $$225-13.134=211.866°$$. Observe that he avoided the uncertainty about adding or subtracting the angle in the triangle by thinking ahead to that step and choosing the angle that would make it easiest! This is a common aspect of good problem-solving, which I call defensive driving: Always have your eyes on the road ahead! ### But her method also works … You did it differently, but you still got the correct answer. You must have known what you were doing, unless you knew the answer and just looked around for some calculation that would give that result — or maybe you were lucky. But it was valid, however it came about. In my diagram, you found the angle between the heads of va,g and vp,g. We can see (in my figure) that if we swing va,g clockwise around that vertex by the angle you found, we should get the direction of vp,g. So we take the bearing of va,g, 135°, and add your 76.9° to it, obtaining 211.9°. I hope this helps you understand how to think through these problems. If I have raised more questions in your mind, feel free to ask them. We can see this addition of angles in Renata’s drawing, too, when we mark the two angles: Clearly the bearing of the red resultant is $$135+76.87=211.87°$$. ## What do you do with a “bearing from”? Renata replied, Thanks a lot Dr. Rick.  After looking through your answer several times I do understand how you approached the question.  I think in the lesson the teacher was saying that if you have a bearing that is worded “from”, like “from a bearing of 315 degrees” like in this question, and the vector’s head is pointing toward the origin, that we do not want to leave the head at the origin, and therefore should extend the vector into the 4th quadrant and complete the diagram this way. Am I right about that?  Should we always redraw the diagram if the vector’s head is going toward the origin? This sounds like what Doctor Rick did, and does seem helpful. 
Doctor Rick responded, I think in the lesson the teacher was saying that if you have a bearing that is worded “from”, like “from a bearing of 315 degrees” like in this question, and the vector’s head is pointing toward the origin, that we do not want to leave the head at the origin, and therefore should extend the vector into the 4th quadrant and complete the diagram this way. Am I right about that?  Should we always redraw the diagram if the vector’s head is going toward the origin? It is not necessary even to have an origin in the figure. All that is required is to reverse the direction of the vector. That is, if you initially draw a vector pointing in the direction of bearing 315°, you then move the arrow to the opposite end to indicate a vector “from” that bearing. After I have done this (likely in my head, but you can do it on paper), I then arrange the vectors (with their final directions) “head to tail” if you wish to use the triangle method for graphical vector addition. (You’d do it differently for the parallelogram method.) As I said, it doesn’t matter which vector’s tail meets the other’s head (as long as you chose the correct pair of vectors to add) because addition of vectors, like addition of numbers, is commutative. If you want coordinate axes (so that it’s clear which direction is North for the sake of bearings), you can add them after drawing the triangle. I would put the origin at the tail of the first vector. ## What two others did wrong Renata wrote back a week later: Hi Dr. Rick, My friends and I have quite the debate over this question still.  I have taken pictures of 2 of their work, where my friends have different final answers than I do.  They swear theirs is correct and I can’t quite explain how it could be incorrect.  Could you please explain how their answers are not 100% correct, if that is the case? Thanks!! A: B: Hi, Renata. You can see what each person did differently, right? Looking first at the second image, it’s really obvious: this friend has the wind blowing in the opposite direction — toward a bearing of 315° rather than from that direction. Have you pointed this out to your friend? How does he/she justify that choice of direction? Interestingly, this one’s answer for the speed, 154 km/hr, is correct, but the bearing is wrong. In effect, this friend found the wrong diagonal of the parallelogram – but since it is a rectangle, they are equal, and just go in different directions! All they missed was the reversal of the “from” vector. The first image has a more subtle error: the angle of 13.1° was added to the heading of the aircraft, where in the work I showed you, I subtracted the 13.1° from 225°. You solved it a different way, and it’s harder to compare this person’s work with yours since you found a different angle. This friend’s diagram looks the same as yours, and I gather that this is exactly the way you have been taught to make the figure — not the way I said I would do it. So in explaining what’s wrong, it would be best to stick with that established method. However, I am not sure what you may have been taught about deciding whether to add or subtract an angle. One thing you might do could be called “graphical common sense“. We look at the figure and see that R is pointed more to the south than the vector on the other side of angle θ — closer to 180°. That means we want the bearing of R to be less than the heading of the aircraft — therefore subtract. 
Here again is our version of B's picture (with our labeling), which is essentially the same as Renata's:

They correctly found the angle at the bottom to be 13.1°, just as Doctor Rick did in a different but congruent triangle; but it is a little harder to see what to do with that angle than in his approach. The red vector is what they called R (the resultant); we want a bearing that is “backed up” a bit from the bearing of the airspeed vector, which is why we subtract the angle. You can imagine swinging a vector around through 225° clockwise to OP, then tacking the arrowhead down at P and swinging the tail counterclockwise through 13.13° to line up with SP. If a more concrete explanation is needed, you could make a copy (a parallel translation) of the vector R, putting its “tail” at the origin. You could then point out that the angle between the aircraft windspeed vector and this copied vector (call it R’) is equal to the angle θ because they are alternate interior angles. In the figure I am picturing, it's clear that we subtract θ from the aircraft heading of 225°. This approach in effect makes Doctor Rick's triangle, in which it is obvious we should subtract the angle:

Looking at point O, it is clear that OR is at bearing $$225-13.13=211.87°$$.
MATHEMATICS 10, Week 8 - Day 2

Objective: To determine the degree of the polynomial and the leading coefficient.

Concept Notes: The degree of a polynomial is determined by the highest power of its terms. In the polynomial function $f(x) = a_n x^n + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \dots + a_0$, where $n$ is a nonnegative integer and $a_n, a_{n-1}, a_{n-2}, \dots, a_0$ are real numbers with $a_n \neq 0$, the degree of the function is $n$, since $n$ is the highest power.

Examples:
1) In the polynomial function $f(x) = 2x^{3} - 6x^{2} + 55$, the highest power of its terms is 3. Therefore, the degree of the polynomial is 3 and the leading coefficient is 2.
2) The polynomial function $g(x) = 3x^{6} - 2x^{4} + x^{2} - x + \dots$ contains terms whose degrees are 6, 4, 2, 1 and 0, respectively. The highest power is 6. Therefore, the degree of the polynomial is 6 and the leading coefficient is 3.

Exercises. Direction: Give the degree of the polynomials and their leading coefficient.
1) $F(x) = 6x^{4} - 3x + 2$
2) $g(x) = 3x^{5} + 2x - x^{6}$
3) $h(x) = x^{3} + 2x^{2} + 3x^{3} + 5$
4) $f(x) = 3x^{5} + x^{3} + 4x^{6} - x - 1$
5) $P(x) = 6x^{7} - 8x^{5} + 2x^{3} - x + 3$
6) $f(x) = x - 5$
7) $f(x) = \dfrac{1}{5}x^{3} - 6x^{2} + \dfrac{1}{3}$
8) $g(x) = 4x^{3} - 2x + 2$
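One point worth flagging (my note, not in the original worksheet): some of the exercises are not written in standard form, so the terms must be rearranged by descending degree before reading off the answer. For instance, in exercise 2:

$g(x) = 3x^{5} + 2x - x^{6} = -x^{6} + 3x^{5} + 2x$

so the degree is 6 and the leading coefficient is $-1$, not 3.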
Index Blog # April 7 Class I just pushed an update to the class slide deck and added a new Coq file to the class repo. The Coq file is called IndProp.v and contains examples for class discussion. The slide set contains new slides that contain a subset of information from the Coq file. Pull the repo and you’ll be ready to go.
# Markets, Healers, Vendors, Collectors: The Sustainability of Medicinal Plant Use in Northern Peru

### Most cited references

• The value of the world's ecosystem services and natural capital (1997)

• An ethnobotanical survey of medicinal plants commercialized in the markets of La Paz and El Alto, Bolivia. An ethnobotanical study of medicinal plants marketed in La Paz and El Alto cities in the Bolivian Andes reported medicinal information for about 129 species, belonging to 55 vascular plant families and one uncertain lichen family. The most important family was Asteraceae with 22 species, followed by Fabaceae s.l. with 11, and Solanaceae with eight. More than 90 general medicinal indications were recorded to treat a wide range of illnesses and ailments. The highest number of species and applications were reported for digestive system disorders (stomach ailments and liver problems), the musculoskeletal body system (rheumatism and the complex of contusions, luxations, sprains, and swellings), kidney and other urological problems, and gynecological disorders. Some medicinal species had magic connotations, e.g. for cleaning and protection against ailments, to bring good luck, or for Andean offerings to Pachamama, 'Mother Nature'. In some indications, the separation between medicinal and magic plants was very narrow. Most remedies were prepared from a single species; however, some applications were always prepared with a mixture of plants, e.g. for abortion, and the complex of luxations and swellings. The part of the plant most frequently used was the aerial part (29.3%) and the leaves (20.7%). The remedies were mainly prepared as a decoction (47.5%) and an infusion (28.6%). Most of the species were native to Bolivia, but an important 36.4% of them were introduced from different origins. There exists a high informant consensus for species and their medicinal indications. The present urban phytotherapy represents a medicinal alternative to treat main health problems and remains closer to the cultural and social context of this society.

• Valuation of consumption and sale of forest goods from a Central American rain forest (2000) Researchers recognize that society needs accurate and comprehensive estimates of the economic value of rain forests to assess conservation and management options. Valuation of forests can help us to decide whether to implement policies that reconcile the value different groups attach to forests. Here we have measured the value of the rain forest to local populations by monitoring the foods, construction and craft materials, and medicines consumed or sold from the forest by 32 Indian households in two villages in Honduras over 2.5 years. We have directly measured the detailed, comprehensive consumption patterns of rain forest products by an indigenous population and the value of that consumption in local markets. The combined value of consumption and sale of forest goods ranged from US$17.79 to US$23.72 per hectare per year, at the lower end of previous estimates (between US$49 and US$1,089 (mean US$347) per hectare per year).
Although outsiders value the rain forest for its high-use and non-use values, local people receive a small share of the total value. Unless rural people are paid for the non-local values of rain forests, they may be easily persuaded to deforest.

### Author and article information

Journal: Mountain Research and Development, International Mountain Society (IMS) and United Nations University. ISSN: 0276-4741, 1994-7151. May 2009, Volume 29, Issue 2, Pages 128--134. doi:10.1659/mrd.1083
# Re: [NTG-context] Indenting (again!): a real problem in indentnext=yes

[EMAIL PROTECTED] wrote:
>> Ah, misunderstood. You want
>>
>> \setupformulae[indentnext=auto]
>
> Thank you, Taco! And may I ask the difference with the 'yes' option? Is it
> documented somewhere? I'm having a real hard time understanding all the
> mechanisms of indenting in ConTeXt...

The 'auto' option indents the next paragraph, but only if it is a separate paragraph (an empty line or \par command following the \stop.. command). Indenting is not much more complex than this: there is \indenting with its (pretty long) list of arguments, and then there is the indentnext=[yes|no|auto] option available after various block-creation commands. The indentnext key is relatively new, and was added for requests similar to yours, but at a smaller scale. Some layouts ask for indentation after itemizations but not after floats, sometimes there is a need for 'auto' for formulae but often you really want 'no', some styles indent after block quotations, others don't, etc. etc.

Best wishes, Taco
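A minimal sketch of how the setting is used in context (my example, assuming a standard ConTeXt setup where \setupindenting has enabled indentation):

```
\setupindenting[medium, yes]
\setupformulae[indentnext=auto]

\starttext
Some text before the formula.

\startformula
  a^2 + b^2 = c^2
\stopformula

% the empty line above this paragraph makes it a separate paragraph,
% so with indentnext=auto it gets indented; without the empty line
% it would continue unindented
This paragraph follows the formula.
\stoptext
```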
## Recent developments on metric measure spaces with Ricci curvature bounded below

Speaker: Prof. Shouhei Honda (Tohoku University)
Schedule: July 26 (Tue), 27 (Wed), 28 (Thu), 10:00 - 11:30am
Venue: Zoom Meeting ID: 816 4977 5126; Passcode: Kahler
Date: 2022-07-26/27/28

Description

In [13, 14, 15], Cheeger-Colding established the deep structure theory on Gromov-Hausdorff limit spaces of Riemannian manifolds with Ricci curvature bounded below. Moreover, Jiang and Naber, together with them, proved further structure results on such spaces in [16, 17]. On the other hand, Cheeger-Colding asked in an appendix of [13] whether their theory can be covered in a synthetic way. Now we know the best answer to this question, namely, RCD spaces give the best framework for a synthetic treatment of Ricci curvature lower bounds, in order to cover the theory on limit spaces as above. In the three lectures, we will introduce the basics, the techniques, and the recent results for RCD spaces. In particular we will focus on blow-up analysis on such spaces, which plays a significant role in many situations. Finally, I will provide open problems related to this topic.

Prerequisite

No specific advanced knowledge is needed, but it would be helpful to be familiar with the basics of Riemannian geometry.

References

[1] L. Ambrosio, Calculus, heat flow and curvature-dimension bounds in metric measure spaces, Proceedings of the ICM 2018, Vol. 1, World Scientific, Singapore, (2019), 301--340.
[2] L. Ambrosio, N. Gigli, G. Savare, Calculus and heat flow in metric measure spaces and applications to spaces with Ricci bounds from below, Invent. Math. 195 (2014), 289--391.
[3] L. Ambrosio, N. Gigli, G. Savare, Metric measure spaces with Riemannian Ricci curvature bounded from below, Duke Math. J. 163 (2014), 1405--1490.
[4] L. Ambrosio, N. Gigli, G. Savare, Bakry-\'Emery curvature-dimension condition and Riemannian Ricci curvature bounds. Ann. of Prob. 43 (2015), 339--404.
[5] L. Ambrosio, S. Honda, New stability results for sequences of metric measure spaces with uniform Ricci bounds from below, Measure theory in non-smooth spaces, 1--51, Partial Differ. Equ. Meas. Theory, De Gruyter Open, Warsaw, 2017.
[6] L. Ambrosio, S. Honda, Local spectral convergence in RCD^*(K, N) spaces. Nonlinear Anal. 177 Part A (2018), 1--23.
[7] L. Ambrosio, D. Trevisan, Well-posedness of Lagrangian flows and continuity equations in metric measure spaces, Anal. PDEs. 7 (2014), 1179--1234.
[8] G. Antonelli, E. Brue, D. Semola, Volume Bounds for the Quantitative Singular Strata of Non Collapsed RCD Metric Measure Spaces, Anal. Geom. Metr. Spaces 7 (2019), no. 1, 158--178.
[9] C. Brena, N. Gigli, S. Honda, X. Zhu, Weakly noncollapsed RCD spaces are strongly noncollapsed, arXiv:2110.02420
[10] E. Brue, E. Pasqualetto, D. Semola, Rectifiability of RCD(K,N) spaces via \delta-splitting maps, Ann. Fenn. Math. 46 (2021), no. 1, 465--482.
[11] E. Brue, D. Semola, Constancy of the dimension for RCD(K, N) spaces via regularity of Lagrangian flows. Comm. Pure Appl. Math. 73 (2020), 1141--1204.
[12] F. Cavalletti, E. Milman, The Globalization Theorem for the Curvature Dimension Condition, Invent. Math. 226 (2021), no. 1, 1--137
[13] J. Cheeger, T. H. Colding, On the structure of spaces with Ricci curvature bounded below, I. J. Differential Geom. 46 (1997), 406--480.
[14] J. Cheeger, T. H. Colding, On the structure of spaces with Ricci curvature bounded below, II. J. Differential Geom. 54 (2000), 13--35.
[15] J. Cheeger, T. H.
Colding, On the structure of spaces with Ricci curvature bounded below, III. J. Differential Geom. 54 (2000), 37--74.
[16] J. Cheeger, W. Jiang, A. Naber, Rectifiability of singular sets of non collapsed limit spaces with Ricci curvature bounded below, Ann. of Math. 193 (2021), 407--538.
[17] T. H. Colding, A. Naber, Sharp H\"older continuity of tangent cones for spaces with a lower Ricci curvature bound and applications. Ann. of Math. 176 (2012), 1173--1229.
[18] Q. Deng, H\"older continuity of tangent cones in RCD(K,N) spaces and applications to non-branching, arXiv:2009.07956.
[19] G. De Philippis, N. Gigli, From volume cone to metric cone in the nonsmooth setting. Geom. Funct. Anal. 26 (2016), no. 6, 1526--1587
[20] G. De Philippis, N. Gigli, Non-collapsed spaces with Ricci curvature bounded from below. J. Ec. polytech. Math. 5 (2018), 613--650.
[21] M. Erbar, K. Kuwada, K.-T. Sturm, On the equivalence of the entropic curvature-dimension condition and Bochner's inequality on metric measure spaces, Invent. Math. 201 (2015), 993--1071.
[22] N. Gigli, The splitting theorem in non-smooth context, arXiv:1302.5555.
[23] N. Gigli, Nonsmooth differential geometry -- An approach tailored for spaces with Ricci curvature bounded from below, Mem. Amer. Math. Soc. 251 (2018), no. 1196.
[24] N. Gigli, A. Mondino, G. Savare, Convergence of pointed non-compact metric measure spaces and stability of Ricci curvature bounds and heat flows. Proc. Lond. Math. Soc. (3), 111 (2015), 1071--1129.
[25] S. Honda, New differential operator and noncollapsed RCD spaces, Geom. Topol. 24 (2020), 2127--2148.
[26] S. Honda, Y. Peng, A note on the topological stability theorem from RCD spaces to Riemannian manifolds, arXiv:2202.06500
[27] A. Mondino, A. Naber, Structure theory of metric measure spaces with lower Ricci curvature bounds, J. Eur. Math. Soc. 21 (2019), 1809--1854.
[28] J. Pan, G. Wei, Examples of Ricci limit spaces with non-integer Hausdorff dimension, to appear in Geom. Funct. Anal.
[29] K.-T. Sturm, On the geometry of metric measure spaces, I, Acta Math. 196 (2006), 65--131.
[30] K.-T. Sturm, On the geometry of metric measure spaces, II, Acta Math. 196 (2006), 133--177.
[31] B. Wang, X. Zhao, Canonical diffeomorphisms of manifolds near spheres, arXiv:2109.14803.
# 7.1: The First Law of Thermodynamics

This chapter focuses on energy conservation, which is the first law of thermodynamics. The fluid, like all phases and materials, obeys this law, which creates strange and wonderful phenomena such as a shock and choked flow. Moreover, this law allows us to solve problems whose solutions were assumed in the previous chapters. For example, the relationship between height and flow rate was assumed previously; here it will be derived. Additionally, a discussion on various energy approximations is presented. It was shown in Chapter 2 that the energy rate equation (??) for a system is $\label{ene:eq:start} \dot{Q} - \dot{W} = \dfrac{D\,E_U} {Dt} + \dfrac{D\left(m\,U^2/2\right)} {Dt} + \dfrac{D\left(m\,g\,z\right)} {Dt} \tag{1}$ This equation can be rearranged to be $\label{ene:eq:preRTT} \dot{Q} - \dot{W} = \dfrac{D}{Dt} \,\left( E_U + m \, \dfrac{U^2}{2} + m \,g \, z \right) \tag{2}$ Equation (2) is similar to equation (??), in which the right hand side has to be interpreted and the left hand side transformed using the Reynolds Transport Theorem (RTT). The right hand side is very complicated and only some of the effects will be discussed (it is only introductory material). The energy transfer is carried (mostly) by heat transfer to the system or the control volume. There are three modes of heat transfer: conduction, convection and radiation. In most problems, the radiation is minimal. Hence, the discussion here will be restricted to convection and conduction. Issues related to radiation are very complicated and considered advanced material, and hence will be left out. The issues of convection are mostly covered by the terms on the left hand side. The main heat transfer mode on the left hand side is conduction. Conduction for most simple cases is governed by Fourier's Law, which is $\label{ene:eq:fourier} d\dot{q} = k_T \dfrac{dT}{dn} dA \tag{3}$ Where $$d\dot{q}$$ is the heat transfer to an infinitesimally small area per unit time and $$k_T$$ is the heat conduction coefficient. The temperature derivative is taken in the direction normal to the area. The total heat transfer to the control volume is $\label{ene:eq:tFourier} \dot{Q} = \int_{A_{cv}} k \dfrac{dT}{dn} dA \tag{4}$ Fig. 7.1 The work on the control volume is done by two different mechanisms, $$S_n$$ and $$\tau$$. The work done on the system is more complicated to express than the heat transfer. There are two kinds of work that the system does on the surroundings. The first kind of work is done by friction or shear stress, and the second by the normal force. As in the previous chapter, the surface forces are divided into two categories: one perpendicular to the surface and one in the surface direction.
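As a quick numerical illustration of equation (4) (the numbers are mine, chosen only for orientation): for a flat area with a uniform temperature gradient, using a conduction coefficient typical of copper,

$\dot{Q} = k_T\,\dfrac{dT}{dn}\,A \approx 400\;\tfrac{\mathrm{W}}{\mathrm{m\,K}} \times 5\;\tfrac{\mathrm{K}}{\mathrm{m}} \times 0.1\;\mathrm{m^2} = 200\;\mathrm{W}$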
The work done by the system on the surroundings (see Figure 7.1) is $\label{ene:eq:dw} dw = \overbrace{- \pmb{S} \, d\pmb{A}}^{d\pmb{F}} \cdot d\pmb{ll} = - \left( \pmb{S_n} + \boldsymbol{\tau} \right) \cdot \overbrace{d\pmb{ll}\, dA}^{dV} \tag{5}$ The change of the work for an infinitesimal time (excluding the shaft work) is $\label{ene:eq:dwdt} \dfrac{dw}{dt} = - \left( \pmb{S_n} + \boldsymbol{\tau} \right) \cdot \overbrace{\dfrac{d\pmb{ll}}{dt}}^{U} dA = - \left( \pmb{S_n} + \boldsymbol{\tau} \right) \cdot \pmb{U}\, dA \tag{6}$ The total work for the system including the shaft work is $\label{mom:eq:tW} \dot{W} = -\int_{A{c.v.}} \left( \pmb{S_n} + \boldsymbol{\tau} \right)\cdot\pmb{U} \, dA - W_{shaft} \tag{7}$ The energy equation (2) for a system is $\label{eye:eq:sysE} \displaystyle \int_{A_{sys}} k_T \dfrac{dT}{dn} dA + \displaystyle \int_{A_{sys}} \left( \pmb{S_n} + \boldsymbol{\tau} \right) \cdot \pmb{U}\, dA + \dot{W}_{shaft} = \dfrac{D}{Dt} \displaystyle \int_{V_{sys}} \rho\, \left( E_U + \dfrac{U^2}{2} + g \, z \right) dV \tag{8}$ Equation (8) does not impose any restrictions on the system. The system can contain solid parts as well as several different kinds of fluids. Now the Reynolds Transport Theorem can be used to transform the left hand side of equation (8), and thus yields Energy Equation $\label{eye:eq:cvE} \begin{array}[t]{l} \displaystyle \int_{A_{cv}} k_T \dfrac{dT}{dn} dA + \displaystyle \int_{A_{cv}} \left( \pmb{S_n} + \boldsymbol{\tau} \right) \cdot \pmb{U}\, dA + \dot{W}_{shaft} = \\ \dfrac{d}{dt} \displaystyle \int_{V_{cv}} \rho\, \left( E_u + \dfrac{U^2}{2} + g \, z \right) dV \\ \displaystyle + \displaystyle \int_{A_{cv}} \left( E_u + \dfrac{U^2}{2} + g \, z \right)\,\rho\, U_{rn}\, dA \end{array} \tag{9}$ From now on the notation of the control volume and system will be dropped since all equations deal with the control volume. In the last term in equation (9) the velocity appears twice. Note that $$U$$ is the velocity in the frame of reference while $$U_{rn}$$ is the velocity relative to the boundary. As discussed in the previous chapter, the normal stress component is replaced by the pressure (see equation (??) for more details).
The work rate (excluding the shaft work) is $\label{ene:eq:workRate0} \dot{W} \cong \overbrace{\int_S P \hat{n} \cdot \pmb{U} dA}^{\text{flow work}} - \int_S \boldsymbol{\tau} \cdot \pmb{U} \,\hat{n}\, dA \tag{10}$ The first term on the right hand side is referred to in the literature as the flow work and is $\label{ene:eq:flowWork1} \int_S P \hat{n} \cdot \pmb{U} dA = \int_S P \overbrace{\left(U - U_b\right)\hat{n} }^{U_{rn}} dA + \int_S P\, U_{bn} dA \tag{11}$ Equation (11) can be further manipulated to become $\label{ene:eq:flowWorkF} \int_S P \hat{n} \cdot \pmb{U} dA = \overbrace{\int_S \dfrac{P}{\rho} \, \rho\, U_{rn}\, dA} ^{\text{work due to the flow}} + \overbrace{\int_S P\, U_{bn} dA}^{\text{work due to boundaries movement}} \tag{12}$ The second term is referred to as the shear work and is defined as $\label{ene:eq:shearW} \dot{W}_{shear} = -\int_S \boldsymbol{\tau}\cdot \pmb{U} dA \tag{13}$ Substituting all these terms into the governing equation yields $\label{ene:eq:governingE1} \dot{Q} - \dot{W}_{shear} - \dot{W}_{shaft} = \dfrac{d}{dt} \displaystyle \int_V \left( E_u + \dfrac{U^2}{2} + g\,z\right) dV + \displaystyle \int_S \left( E_u + \dfrac{P}{\rho} + \dfrac{U^2}{2} + g\,z \right) U_{rn}\, \rho \,dA + \displaystyle \int_S P\, U_{bn} dA \tag{14}$ The new term $$P/\rho$$ combined with the internal energy, $$E_u$$, is referred to as the enthalpy, $$h$$, which was discussed on page ??. With these definitions equation (14) is transformed into the Simplified Energy Equation $\label{ene:eq:governingE} \begin{array}{l} \dot{Q} - \dot{W}_{shear} + \dot{W}_{shaft} = \dfrac{d}{dt} \displaystyle \int_V \left( E_u + \dfrac{U^2}{2} + g\,z\right) \,\rho\,dV + \\ \displaystyle \int_S \left( h + \dfrac{U^2} {2} + g\,z \right) U_{rn}\, \rho \,dA + \displaystyle \int_S P U_{bn} dA \end{array} \tag{15}$ Equation (15) describes the energy conservation for the control volume in stationary coordinates. Also note that the shear work inside the control volume is considered as shaft work. The example of flow from a tank or container is presented to demonstrate how to treat some of the terms in equation (15).

### Flow Out From A Container

Fig. 7.2 Discharge from a Large Container with a small diameter.

In the previous chapters of this book, the flow rate out of a tank or container was assumed to be a linear function of the height. The flow out is related to the height, but by a more complicated function, and is the focus of this discussion. The energy equation with mass conservation will be utilized for this analysis. In this analysis several assumptions are made, which include the following: constant density; the gas density is very small compared to the liquid density; the exit area is relatively small, so the velocity can be assumed uniform (not a function of the opening); surface tension effects are negligible; and the liquid surface is straight. Additionally, the temperature is assumed to be constant. The control volume is chosen so that all the liquid is included up to the exit of the pipe. The conservation of the mass is $\label{ene:eq:Tmass} \dfrac{d}{dt} \int_V \cancel{\rho}\,dV + \int_A \cancel{\rho} \, U_{rn} \, dA =0 \tag{16}$ which also can be written (because $$\dfrac{d\rho}{dt} = 0$$) as $\label{ene:eq:TmassB} \int_A U_{bn} \, dA + \int_A U_{rn} dA = 0 \tag{17}$ Equation (17) provides the relationship between the boundary velocity and the exit velocity as $\label{ene:eq:TmF} A\,U_b = A_e\,U_e \tag{18}$ Note that the boundary velocity is not the averaged velocity but the actual velocity.
The averaged velocity in the $$z$$ direction is the same as the boundary velocity $\label{ene:eq:TUzUb} U_b = U_z = \dfrac{dh}{dt} = \dfrac{A_e}{A}\,U_e \tag{19}$ The $$x$$ component of the averaged velocity is a function of the geometry and was calculated in Example to be larger than $\label{ene:eq:TbarUx} \overline{U_x} \precapprox \dfrac{2\,r}{h} \dfrac{A_e}{A} U_e \Longrightarrow \overline{U_x} \cong \dfrac{2\,r}{h}\,U_b = \dfrac{2\,r}{h}\,\dfrac{dh}{dt} \tag{20}$ In this analysis, for simplicity, this quantity will be used. The averaged velocity in the $$y$$ direction is zero because the flow is symmetrical. However, the change of the kinetic energy due to the change in the velocity field isn't zero. The kinetic energy of the tank or container is based on the half part, as shown in Figure 7.3. A similar estimate to the one done for the $$x$$ direction can be made for every side of the opening if they are not symmetrical. Since in this case the geometry is assumed to be symmetrical, one side is sufficient, as $\label{ene:eq:TUzave} \overline{U_y} \cong \dfrac{ (\pi - 2) r}{8\,h} \dfrac{dh}{dt} \tag{21}$

Fig. 7.3 How to compensate and estimate the kinetic energy when the averaged velocity is zero.

The energy balance can be expressed by equation (15), which is applicable to this case. The temperature is constant. In this light, the following approximation can be written $\label{ene:eq:Tc} \dot{Q} = \dfrac{d E_u}{dt} = h_{in} - h_{out} = 0 \tag{22}$ The boundary shear work is zero because the velocity at the tank boundary or walls is zero. Furthermore, the shear stresses at the exit are normal to the flow direction, hence the shear work vanishes. At the free surface the velocity has only a normal component and thus the shear work vanishes there as well. Additionally, the internal shear work is assumed negligible. $\label{ene:eq:noWs} \dot{W}_{shear} = \dot{W}_{shaft} = 0 \tag{23}$ Now the energy equation deals with no "external'' effects. Note that the (exit) velocity on the upper surface is zero: $$U_{rn}=0$$. Combining all of this information results in $\label{ene:eq:Tenergy} \overbrace{\dfrac{d}{dt} \int_V \left( \dfrac{U^2}{2} + g\,z\right) \rho\, dV}^ {\text{internal energy change}} + \overbrace{\int_A \left( \dfrac{P_e}{\rho} + \dfrac{{U_e}^2}{2} \right) U_e\, \rho\, dA - \overbrace{\int_A P_a\, U_b\,dA}^ {\text{upper surface work}} } ^ {\text{energy flow out}} = 0 \tag{24}$ Where $$U_b$$ is the upper boundary velocity, $$P_a$$ is the external pressure and $$P_e$$ is the exit pressure. The pressure terms in equation (24) are $\label{ene:eq:Tp1} \int_A \dfrac{P_e}{\rho}\, U_e\, \rho\, dA - \int_A P_a\, U_b\, dA = P_e\, \int_A U_e\, dA - P_a\,\int_A U_b\, dA \tag{25}$ It can be noticed that $$P_a = P_e$$, hence $\label{ene:eq:Tp1a} P_a \overbrace{\left( \int_A U_e\, dA - \int_A U_b\, dA \right)}^{=0} = 0 \tag{26}$ The governing equation (24) is reduced to $\label{ene:eq:TenergyF1} {\dfrac{d}{dt} \int_V \left( \dfrac{U^2}{2} + g\,z\right) \rho\, dV} - \int_A \left( \dfrac{{U_e}^2}{2} \right) U_e\, \rho \, dA = 0 \tag{27}$ The minus sign is because the flow is out of the control volume. Similarly to the previous chapter, the integral will be replaced by some kind of average.
The terms under the time derivative can be divided into two terms as $\label{ene:eq:d_intDt} \dfrac{d}{dt} \int_V \left( \dfrac{U^2}{2} + g\,z\right) \rho\, dV = \dfrac{d}{dt} \int_V \dfrac{U^2}{2}\,\rho\, dV + \dfrac{d}{dt} \int_V g\,z\,\rho\, dV \tag{28}$ The second integral (in the r.h.s.) of equation (28) is $\label{ene:eq:secondINT} \dfrac{d}{dt} \int_V g\,z \,\rho\, dV = g\,\rho\, \dfrac{d}{dt} \int_A\int_0^h\, z\, \overbrace{dz \, dA}^{dV} \tag{29}$ Where $$h$$ is the height, or the distance from the surface to the exit. The inside integral can be evaluated as $\label{ene:eq:eneH} \int_0^h z\, dz = \dfrac{h^2}{2} \tag{30}$ Substituting the results of equation (30) into equation (29) yields $\label{ene:eq:zgF} g\,\rho \,\dfrac{d}{dt} \int_A \dfrac{h^2}{2} \, dA = g \, \rho\, \dfrac{d}{dt} \left( \dfrac{h}{2} \, \overbrace{h \, A}^{V} \right) = g\,\rho\,A\,h\, \dfrac{d\;h}{dt} \tag{31}$

Note: The kinetic energy is related to the averaged velocity by a correction factor which depends on the geometry and the velocity profile. Furthermore, even when the averaged velocity is zero the kinetic energy is not zero, and another method should be used.

A discussion on the correction factor is presented to provide a better "averaged'' velocity. A comparison between the actual kinetic energy and the kinetic energy due to the "averaged'' velocity (to be called the averaged kinetic energy) provides a correction coefficient. The first integral can be estimated by examining the velocity profile effects. The averaged velocity is $\label{ene:eq:aveU} U_{ave} = \dfrac{1}{V}\int_V U\, dV \tag{32}$ The total kinetic energy for the averaged velocity is $\label{ene:eq:Utwow} \rho\, {U_{ave}}^2\, V = \rho \left( \dfrac{1}{V}\int_V U\, dV \right)^2 \,V = \dfrac{\rho}{V} \left( \int_V U\, dV \right)^2 \tag{33}$ The general correction factor is the ratio of the above value to the actual kinetic energy, as $\label{ene:eq:CFene} C_F = \dfrac{\left( \displaystyle\int_V \rho\, U\, dV \right)^2 } { \displaystyle \int_V \rho\,U^2\, dV } \neq \dfrac{\cancel{\rho}\, \left( U_{ave} \right)^2\,V } { \displaystyle \int_V \cancel{\rho}\,U^2\, dV } \tag{34}$ Here, $$C_F$$ is the correction coefficient. Note the inequality sign, which is due to the density distribution of a compressible fluid. The correction factor for a constant density fluid is $\label{ene:eq:CFeneNC} C_F = \dfrac{\left( \displaystyle\int_V \rho\, U\, dV \right)^2 } { \displaystyle \int_V \rho\,U^2\, dV } = \dfrac{\left(\cancel{\rho}\, \displaystyle\int_V U\, dV \right)^2 } { \cancel{\rho}\,\displaystyle \int_V U^2\, dV } = \dfrac{ {U_{ave}}^2\,V } { \displaystyle \int_V U^2\, dV } \tag{35}$ This integral can be evaluated for any given velocity profile. A large family of velocity profiles is laminar or parabolic (for one-directional flow). For a pipe geometry, the velocity is $\label{ene:eq:parabolic} U \left(\dfrac{r}{R}\right) = {U}\,(\bar{r}) = U_{max} \left( 1-\bar{r}^2 \right) = 2\,U_{ave} \left( 1-\bar{r}^2 \right) \tag{36}$ It can be noticed that the velocity is presented as a function of the reduced radius $\bar{r} = r/R$. The relationship between $$U_{max}$$ and the averaged velocity, $$U_{ave}$$, is obtained by using equation (32), which yields $U_{ave} = U_{max}/2$.
Substituting equation (36) into equation (35) results in $\label{ene:eq:eneAve} \dfrac{ {U_{ave}}^2\,V }{ \displaystyle \int_V U^2\, dV } = \dfrac{ {U_{ave}}^2\,V } {\displaystyle \int_V \left( 2\,U_{ave} \left( 1-\bar{r}^2 \right) \right)^2\, dV } = \dfrac{ {U_{ave}}^2\,V } {\dfrac{4\,{U_{ave}}^2\,\pi\,L\,R^2}{3} } = \dfrac{3}{4} \tag{37}$ The correction factor for many other velocity profiles and other geometries can be smaller or larger than this value. For a circular shape, a good estimate is about 1.1. In this case, for simplicity, it is assumed that the averaged velocity indeed represents the energy in the tank or container. Calculations beyond this point can be made more accurate based on the above discussion.

Note: The difference between the "averaged momentum'' velocity and the "averaged kinetic'' velocity is also due to the fact that energy is added for different directions, while in the momentum case different directions cancel each other out.

The unsteady state term then obtains the form $\label{ene:eq:unstadyT} \dfrac{d}{dt} \displaystyle \int_V \rho\,\left( \dfrac{U^2}{2} + g\,y\right) \, dV \cong \rho\, \dfrac{d}{dt} \left[ \left( \dfrac{{\overline{U}}^2}{2} + \dfrac{g\,h}{2} \right) h\,A \right]$ where the square of the averaged velocity is combined from its components as ${\overline{U}}^2 \cong {\overline{U_x}}^2 + {\overline{U_y}}^2 + {\overline{U_z}}^2 \tag{42}$ $\label{ene:eq:TuaveIntermite} {\overline{U}}^2 \cong \left(\dfrac{\left(\pi - 2\right)r}{8\,h} \dfrac{dh}{dt}\right)^2 + \left(\dfrac{\left(\pi - 1\right)r}{4\,h} \dfrac{dh}{dt}\right)^2 + \left(\dfrac{dh}{dt}\right)^2 \tag{43}$ $\label{ene:eq:TuaveF} {\overline{U}} \cong \dfrac{dh}{dt} \,\, \overbrace{\sqrt{\left(\dfrac{\left(\pi - 2\right)r}{8\,h} \right)^2 + \left(\dfrac{\left(\pi - 1\right)r}{4\,h} \right)^2 + 1^2}}^{f(G)} \tag{44}$ It can be noticed that $$f(G)$$ is a weak function of the inverse height. An analytical solution of the governing equation is possible including this effect of the height. However, the mathematical complications are enormous, so this effect is assumed negligible and the function is taken to be constant.
The last term is the energy carried out with the exit flow. With a uniform exit velocity (and the exit pressure taken as the surroundings pressure), this term is $\label{ene:eq:TEnergyOut} \int_A \dfrac{{U_e}^2}{2}\,U_e\,\rho\,dA = \dfrac{{U_e}^2}{2}\,U_e\,\rho\,A_e = \dfrac{1}{2}\left( \dfrac{dh}{dt}\,\dfrac{A}{A_e} \right)^2 U_e\,\rho\,A_e \tag{45}$ Combining all the terms, the energy balance becomes $\label{ene:eq:Tenergy} \rho\,\dfrac{d}{dt}\left( \left[ \dfrac{{\overline{U}}^2}{2} + \dfrac{g\,h}{2} \right] h\,A \right) - \dfrac{1}{2}\left( \dfrac{dh}{dt} \right)^2 \left( \dfrac{A}{A_e}\right)^2\, U_e\,\rho\,A_e = 0 \tag{46}$ Dividing by the (constant) density yields $\label{ene:eq:TenergyF} \dfrac{d}{dt}\left( \left[ \dfrac{{\overline{U}}^2}{2} + \dfrac{g\,h}{2} \right] h\,A \right) - \dfrac{1}{2}\left( \dfrac{dh}{dt} \right)^2 \left( \dfrac{A}{A_e}\right)^2\, {U_e \,A_e} = 0 \tag{47}$ Equation (47) can be rearranged and simplified; dividing by $$U_e\,A_e$$, carrying out the time derivative, and utilizing equation (40) gives $\label{ene:eq:TenergyFb} \dfrac{d}{dt} \left[ \dfrac{{\overline{U}}^2}{2} + \dfrac{g\,h}{2} \right] \dfrac{h\,A}{U_e\,A_e} + \left[ \dfrac{{\overline{U}}^2}{2} + \dfrac{g\,h}{2} \right] \dfrac{\overbrace{\dfrac{dh}{dt}\,A}^{A\,\dfrac{A_e}{A}{U_e}}}{U_e\,A_e} - \dfrac{1}{2}\left( \dfrac{dh}{dt} \right)^2 \left( \dfrac{A}{A_e}\right)^2\, \cancelto{1}{\dfrac{U_e \,A_e}{U_e\,A_e}} = 0 \tag{48}$ Notice that $$\overline{U} = U_b\,f(G)$$ and thus $\label{ene:eq:TenergyFba} \overbrace{\overline{U}}^{f(G)\,U_b} \dfrac{d \overline{U}}{dt} \dfrac{h\, A}{U_e \,A_e} + \dfrac{g}{2} \dfrac{dh}{dt} \, \dfrac{h\, A}{U_e \,A_e} + \left[ \dfrac{{\overline{U}}^2}{2} + \dfrac{g\,h}{2} \right] \\ - \dfrac{1}{2} \left( \dfrac{dh}{dt} \right)^2 \left( \dfrac{A}{A_e}\right)^2 = 0 \tag{49}$ Further rearranging to eliminate the "flow rate'' transforms to $\label{ene:eq:TenergyFc} f(G)\, h\,\dfrac{d \overline{U}}{dt} \cancelto{1}{\left(\dfrac{{U_b}\,A}{U_e \,A_e}\right)} + \dfrac{g\,h}{2} \, \cancelto{1}{\dfrac{\dfrac{dh}{dt}\, A}{U_e \,A_e}} + \\ \left[ \dfrac{f(G)^2}{2} \left(\dfrac{dh}{dt}\right)^2 + \dfrac{g\,h}{2} \right] - \dfrac{1}{2} \left( \dfrac{dh}{dt} \right)^2 \left( \dfrac{A}{A_e}\right)^2 = 0 \tag{50}$ or, using $$\overline{U} = f(G)\,dh/dt$$, $\label{ene:eq:TenergyFd} f(G)^2\, h\,\dfrac{d^2 h }{dt^2} + \dfrac{g\,h}{2} + \left[ \dfrac{f(G)^2}{2} \left(\dfrac{dh}{dt}\right)^2 + \dfrac{g\,h}{2} \right] - \dfrac{1}{2} \left( \dfrac{dh}{dt} \right)^2 \left( \dfrac{A}{A_e}\right)^2 = 0 \tag{51}$ Combining the $$g\,h$$ terms into one yields $\label{ene:eq:TenergyFe} f(G)^2\, h\,\dfrac{d^2 h }{dt^2} + g\,h + \dfrac{1}{2} \left(\dfrac{dh}{dt}\right)^2\left[ {f(G)^2} - \left( \dfrac{A}{A_e}\right)^2 \right] = 0 \tag{52}$ Defining a new tank emptying parameter, $$T_e$$, as $\label{ene:eq:EmptyParamer} T_e = \left( \dfrac{A}{f(G)\, A_e}\right)^2 \tag{53}$ This parameter represents the characteristics of the tank which control the emptying process. Dividing equation (52) by $$f(G)^2$$ and using this parameter (note that $$g/f(G)^2 = g\,T_e\,{A_e}^2/A^2$$), equation (52) after minor rearrangement is transformed to $\label{ene:eq:TenergyFeGaa} h \left( \,\dfrac{d^2 h }{dt^2} + \dfrac{g\,T_e\,{A_e}^2}{A^2}\right) + \dfrac{1}{2} \left(\dfrac{dh}{dt}\right)^2\left[ 1 - T_e \right] = 0 \tag{54}$ The solution can be obtained from either of these equations: $\label{ene:eq:Tsol1T} -\int \dfrac{dh} {\sqrt{\dfrac{\left( k_1\,T_e-2\,k_1\right) \,h^{T_e}+2\,g\,{h}^{2}}{h\, \left( T_e-2\right) \,f(G) } } } = t + k_2 \tag{55}$ or $\label{ene:eq:Tsol2T} \int \dfrac{dh} {\sqrt{\dfrac{\left( k_1\,T_e-2\,k_1\right) \,h^{T_e}+2\,g\,{h}^{2}}{h\, \left( T_e-2\right) \,f(G) } } } = t + k_2 \tag{56}$ The solution with the positive sign has no physical meaning because the height cannot increase with time.
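Before pursuing the formal solution, note that equation (52), with $$f(G)$$ held constant, is simple enough to integrate numerically, which also shows how quickly the solution settles onto the quasi-steady branch extracted below (equation (64)). The following sketch uses SciPy's `solve_ivp`; the areas, height, and $$f(G)$$ value are illustrative assumptions, not values from the text.

```python
# Numerical integration of Eq. (52) with f(G) constant, compared with the
# quasi-steady rate of Eq. (64). All numbers are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

g, f = 9.81, 1.002          # gravity; f(G) ~ 1 for a tall tank (h >> r)
K = 10.0                    # area ratio A/A_e
h0 = 1.0                    # initial height [m]

def rhs(t, y):
    h, dh = y
    # Eq. (52): f^2 h h'' + g h + (1/2)(h')^2 [f^2 - K^2] = 0
    d2h = -(g * h + 0.5 * dh**2 * (f**2 - K**2)) / (f**2 * h)
    return [dh, d2h]

# start from rest, Eq. (59); stop before h -> 0 where the equation is singular
drained = lambda t, y: y[0] - 0.05 * h0
drained.terminal = True

sol = solve_ivp(rhs, (0.0, 60.0), [h0, 0.0], events=drained, max_step=0.01)
h_end, dh_end = sol.y[0][-1], sol.y[1][-1]
dh_qs = -np.sqrt(2 * g * h_end / (K**2 - 1))   # quasi-steady branch, Eq. (64)
print(f"h = {h_end:.3f} m: dh/dt full = {dh_end:.4f}, quasi-steady = {dh_qs:.4f}")
```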
Thus, define a function of the height as $\label{ene:eq:Th} f(h) = -\int \dfrac{dh} {\sqrt{\dfrac{\left( k_1\,T_e-2\,k_1\right) \,h^{T_e}+2\,g\,{h}^{2}}{h\, \left( T_e-2\right) \,f(G) } } } \tag{57}$ The initial conditions for this case are two. The initial height is $\label{ene:eq:Tinih0} h(0) = h_0 \tag{58}$ and the initial boundary velocity is $\label{ene:eq:Tinih1} \dfrac{dh}{dt} = 0 \tag{59}$ This condition poses a physical limitation which will be ignored. The first condition yields $\label{ene:eq:TiniP0} k_2 = - f(h_0) \tag{60}$ The second condition provides $\label{ene:eq:TiniP1} \dfrac{dh}{dt} = 0 = \sqrt{\dfrac{\left( k_1\,T_e-2\,k_1\right) \,{h_0}^{T_e}+2\,g\,{h_0}^{2}}{h_0\, \left( T_e-2\right) \,f(G) } } \tag{61}$ The complication of the above solution suggests a simplification in which $\label{ene:eq:simplificaitonCondition} \dfrac{d^2 h }{dt^2} \ll \dfrac{g\,T_e\,{A_e}^2}{A^2} \tag{62}$ which reduces equation (54) into $\label{ene:eq:TenergyFeG} h \left( \dfrac{g\,T_e\,{A_e}^2}{A^2}\right) + \dfrac{1}{2} \left(\dfrac{dh}{dt}\right)^2\left[ 1 - T_e \right] = 0 \tag{63}$ While equation (63) is still a nonlinear equation, the nonlinear element can be removed by solving for the square of the height rate; with $$f(G) \cong 1$$ this becomes $\label{ene:eq:dhdt2} \left( \dfrac{dh}{dt} \right)^2 = \dfrac{ 2\,g\,h}{ -1 + \left( \dfrac{A}{A_e}\right)^2 } \tag{64}$ It can be noticed that $$T_e$$ "disappeared'' from the equation. Since the height decreases in time, the physically relevant (negative) branch of the square root is $\label{ene:eq:dhdt} \dfrac{dh}{dt} = -\,\dfrac{ \sqrt{2\,g\,h} } { \sqrt{\left( \dfrac{A}{A_e}\right)^2 - 1 } } \tag{65}$ First order ordinary differential equations admit only one initial condition, and this initial condition is the initial height of the liquid. The initial velocity field was eliminated by the approximation (the removal of the acceleration term). Thus it is assumed that the initial velocity is not relevant at the core of the process at hand. This is correct only for a large ratio of $$h/r$$; the error becomes very substantial for small values of $$h/r$$. Equation (65) can be integrated to yield $\label{ene:eq:hInt} \sqrt{\left( \dfrac{A}{A_e}\right)^2 - 1 }\; \int_{h_0}^h \dfrac{dh}{\sqrt{2\,g\,h}} = -\int_0^t dt \tag{66}$ The initial condition has been inserted into the integral, whose solution is $\label{ene:eq:tankS} \sqrt{\left( \dfrac{A}{A_e}\right)^2 - 1 }\;\sqrt{\dfrac{2}{g}} \left( \sqrt{h_0} - \sqrt{h} \right) = t \tag{67}$ The exit velocity follows from the mass conservation as $\label{ene:eq:tAppx} U_e = -\dfrac{dh}{dt}\, \dfrac{A}{A_e} = \dfrac{ \sqrt{2\,g\,h} } { \sqrt{\left( \dfrac{A}{A_e}\right)^2 - 1 }}\, \dfrac{A}{A_e} = \dfrac{ \sqrt{2\,g\,h} } {\sqrt{ 1 - \left( \dfrac{A_e}{A}\right)^2 }} \tag{68}$ If the area ratio is small, $$A_e/A \ll 1$$, then $\label{ene:eq:torricelli} U \cong \sqrt{2\,g\,h} \tag{69}$ Equation (69) is referred to in the literature as Torricelli's equation. This analysis has several drawbacks which limit the accuracy of the calculations. Yet, it demonstrates the usefulness of the integral analysis in providing a reasonable solution. The analysis can be improved by investigating the phenomenon experimentally. An experimental coefficient can be added to account for the dissipation and other effects, such as $\label{ene:eq:ExpCoefficient} \dfrac{dh}{dt} \cong C\,\sqrt{2\,g\,h} \tag{70}$ The loss coefficient can be expressed as $\label{ene:eq:lossC} C = K\, f\left( \dfrac{U^2}{2} \right) \tag{71}$ A few loss coefficients for different configurations are given following Figure 7.4.
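Before turning to the tabulated coefficients of Figure 7.4, a short worked example of equations (69)–(71) is sketched below. It applies the empirical coefficient to the exit velocity, $$U_e \cong C\sqrt{2\,g\,h}$$, and recovers $$dh/dt$$ through the mass conservation of equation (40); the tank dimensions and the value $$C = 0.61$$ (a commonly quoted sharp-edged orifice value) are illustrative assumptions, not values from the text.

```python
# Worked example of Torricelli's equation with a discharge coefficient:
# U_e ~ C sqrt(2 g h), and dh/dt = -(A_e/A) U_e from mass conservation, Eq. (40).
# Integrating dh/dt = -k sqrt(h) gives t_empty = (A/(C A_e)) sqrt(2 h0 / g).
# All numbers below are illustrative assumptions.
import math

g = 9.81
A, A_e = 1.0, 0.01      # tank and exit cross-sections [m^2], A_e/A << 1
h0 = 2.0                # initial height [m]
C = 0.61                # discharge/loss coefficient, cf. Eqs. (70)-(71)

U_e0 = C * math.sqrt(2 * g * h0)                   # initial exit velocity
t_empty = (A / (C * A_e)) * math.sqrt(2 * h0 / g)  # full drain time
print(f"initial exit velocity ~ {U_e0:.2f} m/s, drain time ~ {t_empty:.0f} s")
```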
Figure 7.4: Typical resistance for selected outlet configurations. The sharp cover on the left has $$K=1$$; the configurations on the right have $$K=0.5$$ and $$K=0.04$$, respectively. ### Contributors • Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later, or the Potto license.