http://crypto.stackexchange.com/questions/1422/demonstrating-the-insecurity-of-an-rsa-signature-encoding-scheme?answertab=votes
# Demonstrating the insecurity of an RSA signature encoding scheme

I'm working on problem 12.4 from Katz-Lindell. The problem is as follows: Given a public encoding function $\newcommand{\enc}{\operatorname{enc}}\enc$ and a textbook RSA signature scheme where signing occurs by finding $\enc(m)$ and raising it to the private key $d \bmod N$, how can we demonstrate the scheme's insecurity for $\enc(m) = 0||m||0^{L/10}$, where $L = |N|$ and $|m| = 9L/10 - 1$ and m is not the message of all zeroes?

- Okay, and where do you need help here? – Paŭlo Ebermann Dec 9 '11 at 21:52
- I need to know how to find a forgery on an m not in Q, where Q is the set of queries to the adversary's signing oracle. Is there something dead simple that I'm missing about this? – pg1989 Dec 9 '11 at 21:56
- Have a look at the corresponding verifying scheme. Can you find a number which, when taken to the power $e$ (the public exponent), gives something in this encoding? (This depends on the public key, but assume it is something like $3$.) – Paŭlo Ebermann Dec 9 '11 at 22:08
- Well, here's a hint: remember that for textbook RSA $enc(X) \cdot enc(Y) = enc( X \cdot Y) \mod N$ -- how can we find two messages $X$ and $Y$ such that $X \cdot Y \mod N$ is also a valid message? – poncho Dec 9 '11 at 22:10
- @pg1989: Presumably that should go "and $m$ is not ...". – Ricky Demer Dec 9 '11 at 22:20

An RSA signature scheme with public key $(n,e)$, private exponent $d$, and encoding function $enc$ (including but not limited to the question's $enc$), signs message $m$ as $$Sign(m) = enc(m)^d\bmod n$$ Such a scheme is insecure if an adversary can figure out $k>0$ distinct messages $m_i$, and integers $u_i$, $r$, $s$ verifying $$s^e \cdot enc(m_0) \cdot \prod_{0\lt i\lt k} enc(m_i)^{u_i} \equiv r^e \pmod n$$ because this implies (by raising to the power $d$) $$Sign(m_0) \equiv r \cdot s^{-1} \cdot\prod_{0\lt i\lt k}Sign(m_i)^{-u_i} \pmod n$$ which allows computing the signature of $m_0$ (if $k\gt 1$, it is also necessary that the attacker obtain the signatures of the other messages $m_i$; that makes it an existential forgery under a chosen-message attack).

Although dated, Jean-Francois Misarsky's How (Not) to Design RSA Signature Schemes is an interesting and relatively easy read on that topic. In fact, every known attack on an RSA signature scheme is either of the above kind (with more or less involved computations to exhibit $m_i$, $u_i$, $r$, $s$); or amounts to factorization of $n$ (which includes anything recovering $d$, perhaps by side-channel attack); or is some implementation error, perhaps widespread.

In order to mount an attack of the above kind, a relation of the form $enc(m_0)=r^e$ is ideal. It gives the signature of $m_0$ without any consideration of $n$ or known signatures. When $e$ is 3, 5 or 7, this can be done with the encoding $enc$ in the question, by considering $r=2^t$ for some appropriate $t$, and extended to $r=v\cdot2^t$ for some small $v$. Similarly, $enc(m_0) = r^e\cdot enc(m_1)$ gives the signature of one message from the signature of the other, without any consideration of $n$. This can be done with the encoding $enc$ in the question, for a wider choice of $e$. Similarly, $enc(m_0) \cdot enc(m_1) = enc(m_2) \cdot enc(m_3)$ gives the signature of one message from the signatures of the other three, for any public key $(n,e)$.
With the encoding $enc$ in the question, there is ample choice (the equation simplifies to $m_0\cdot m_1=m_2\cdot m_3$, and all messages whose left bit is 0 or whose integer representation is composite are vulnerable). The ISO/IEC 9796:1991 signature encoding scheme (section 11.3.5 of the Handbook of Applied Cryptography), now withdrawn, turned out to be vulnerable to that, though of course only if the adversary can obtain the signatures of three chosen messages and is content with the signature of the fourth. Even the hash-based ISO/IEC 9796-2:1997 (now known as ISO/IEC 9796-2:2010 scheme 1), still in wide use, is vulnerable if the adversary can obtain the signatures of many weird chosen messages and is content with the signature of another, which fortunately is seldom the case in practice. Some standards require $e>2^{16}$ (FIPS 186-3 appendix B3.1, RGS Annex B1 section 2.2.1.1, and I have seen suggestions for much wider random $e$), because some attacks on weak encoding schemes or implementations of RSA signature/encryption have been easiest for $e=3$ or other small $e$, as is the case for the scheme in the question. I will not condone a course of action that will lead us to lose the main appeal of RSA (or Rabin) signature schemes: fast and simple verification with modest hardware.
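To make the $r=2^t$ idea above concrete, here is a minimal Python sketch (mine, not from the answer) of the attack for the encoding in the question with public exponent $e=3$. It assumes $L$ is a multiple of 10 so the padding is exactly $L/10$ zero bits; the function name and structure are illustrative only.

```python
def forge_e3(n):
    """Forge a signature on some message m0 under enc(m) = 0 || m || 0^(L/10), for e = 3."""
    L = n.bit_length()        # L = |n|, assumed here to be a multiple of 10
    pad = L // 10             # number of trailing zero bits in enc(m)
    t = (L - 2) // 3          # any t with pad <= 3t <= L - 2 works; then 2^(3t) < n
    assert 3 * t >= pad
    m0 = 1 << (3 * t - pad)   # m0 = 2^(3t - L/10); nonzero and fits in 9L/10 - 1 bits
    r = 1 << t                # r^3 = 2^(3t) = enc(m0), with no reduction mod n
    assert pow(r, 3, n) == m0 << pad
    return m0, r              # r passes verification as a signature on m0
```

No signing oracle is consulted at all: verification accepts $r$ because $r^e \bmod n = enc(m_0)$ exactly, which is the $enc(m_0)=r^e$ relation described above.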
https://cs.stackexchange.com/questions/2517/is-there-a-formal-name-for-this-graph-operation
# Is there a formal name for this graph operation? I'm writing a small function to alter a graph in a certain way and was wondering if there is a formal name for the operation. The operation takes two distinct edges, injects a new node between the existing nodes of each edge and then adds an edge between the two new nodes. For example:

add new nodes a and b to the graph
let edge1 = (x,y), let edge2 = (u,v)
delete edge (x,y)
create edges (x,a), (a,y)
delete edge (u,v)
create edges (u,b), (b,v)
create edge (a,b)

• I have seen the construction multiple times, but I have never come across a name for it. – utdiscant Jun 27 '12 at 20:16
• I do this a lot and I'd love to know a name for it. In data modelling for databases, this is what you do when resolving many-to-many relationships (see e.g. Informix docs); the ORM term is objectification. But I also see it applied a lot to graphs in general, and always anonymously - e.g. in Wikipedia's bipartite graph article. – reinierpost Jul 22 '13 at 9:29
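For reference, the steps in the question translate directly into code. A minimal sketch using networkx (the function name and the labels of the two new nodes are mine, not an established term or API):

```python
import networkx as nx

def subdivide_and_join(G, edge1, edge2, a="a", b="b"):
    """Subdivide two distinct edges and add an edge between the two new nodes."""
    (x, y), (u, v) = edge1, edge2
    G.remove_edge(x, y)
    G.add_edge(x, a)
    G.add_edge(a, y)
    G.remove_edge(u, v)
    G.add_edge(u, b)
    G.add_edge(b, v)
    G.add_edge(a, b)
    return a, b

G = nx.Graph([("x", "y"), ("u", "v")])
subdivide_and_join(G, ("x", "y"), ("u", "v"))
# G now has edges (x,a), (a,y), (u,b), (b,v) and (a,b)
```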
http://www.bluemoonproductions.nl/
An introduction to the LLL-algorithm

Just a note in case you are here to evaluate this blogpost for my GSoC-proposal: This is my old website, I am currently building a new one here (which should be finished before the summer starts). When it is done I will move this blogpost. So that's just an explanation for why the website might seem a bit outdated.

In this blogpost we will take a closer look at what exactly the LLL-algorithm does. We will not describe the internal workings of the algorithm, but rather try to explain what it tries to achieve.

The case of regular vector spaces

Given a set of vectors $$v_1,\cdots,v_n$$, the vector (sub)space generated by these vectors is defined as $$V = \{\lambda_1 v_1 + \lambda_2 v_2 + \cdots + \lambda_n v_n | \lambda_i \in \mathbb{R}\}$$, i.e. all linear combinations of these basis vectors. Note that, given such a space $$V$$, we can pick any set of n vectors as a basis, as long as they are linearly independent. However, it quickly becomes clear that some bases are just 'nicer' than others; consider the following bases of $$\mathbb{R}^3$$: $(1, 0, 0)^T, (0, 0, 1)^T, (0, 1, 0)^T$ or $(1, 2, 3)^T, (23, 54, 6254)^T, (\pi, \mathbf{e}, -1)^T$ While the second set of vectors is indeed linearly independent, it is quite clear that working with this basis would be quite a pain. Why? For two reasons: 1) the vectors are not orthogonal, and 2) the vectors do not have length 1.

A 'nice' basis

While not having an orthonormal basis is not an insurmountable disaster, there are a lot of things that become a lot easier when your basis is orthonormal. For example, given that we have some vector that is a linear combination of the basis vectors described as $$\lambda_1, \lambda_2,\cdots, \lambda_n$$, what is the projection of this vector on the i'th basis vector? If the basis is orthonormal, this is clearly $$\lambda_i$$, whereas with a non-'nice' basis you would have to calculate the projection. In other words, we can define a 'nice' basis for a vector (sub)space as a basis that is orthonormal. But then the next question is: What if we only have an 'ugly' basis, can we get to a 'nice' one quickly? It turns out: we can! The Gram-Schmidt process is a simple algorithm that orthonormalizes a given set of linearly independent vectors in $$\mathcal{O}(n^2)$$ such that the generated space is the same as before. The algorithm essentially works like this: for each basis vector $$v_i$$, normalize it and project all $$v_j$$ ($$j > i$$) into $$v_i^\perp$$ (the orthogonal complement of $$v_i$$). We can observe that this projection is a non-trivial linear combination of $$v_i$$ and $$v_j$$, and thus preserves the generated vector space. An implementation I made of this algorithm can be found here.

Lattices

We move on to lattices. The definition of a lattice is quite simple: we are again given a set of basis vectors $$v_1, \cdots, v_n$$, and look at its lattice: $\mathcal{L}(v) = \{\lambda_1 v_1 + \lambda_2 v_2 + \cdots + \lambda_n v_n | \lambda_i \in \mathbb{Z} \}$ What has changed? We only allow integer multiples of the basis! It turns out, this complicates things considerably. As a start, we cannot just replace our $$n$$ basis vectors by $$n$$ other linearly independent vectors from the same space. As an example, consider $$(1, 0)$$ and $$(0, 1)$$, which generate $$\mathbb{Z}^2$$, whereas $$(2, 0)$$ and $$(0, 2)$$ generate $$2\mathbb{Z}^2$$ (all tuples of even numbers).
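Before moving on, here is a minimal Python sketch of the Gram-Schmidt process as described above (the author links their own implementation; this sketch is an independent illustration, with vectors as plain lists of floats):

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize: normalize each v_i, then project every later v_j onto v_i's orthogonal complement."""
    basis = [list(v) for v in vectors]
    for i in range(len(basis)):
        norm = math.sqrt(sum(c * c for c in basis[i]))
        basis[i] = [c / norm for c in basis[i]]
        for j in range(i + 1, len(basis)):
            dot = sum(a * b for a, b in zip(basis[i], basis[j]))
            basis[j] = [b - dot * a for a, b in zip(basis[i], basis[j])]
    return basis

# The 'ugly' basis from the example above, orthonormalized:
nice = gram_schmidt([[1, 2, 3], [23, 54, 6254], [math.pi, math.e, -1]])
```

Running it on the 'ugly' basis returns an orthonormal basis that spans the same space.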
Back to the lattice example: the integer-coefficient restriction implies that our Gram-Schmidt process won't work anymore either: it would convert the second basis in our example to the first one, which does not lie in the lattice. This is because $$(1, 0)$$ is not an integer multiple of $$(2, 0)$$. As another example, suppose our basis is $$(1, 0)$$ and the same vector rotated 60 degrees. This results in the following lattice: (figure taken from Wikimedia) After playing with this basis a bit on paper, it becomes clear that there is no orthonormal basis for this lattice.

Can we still have nice bases?

So, when we cannot find a nice basis anymore, what do we do? We redefine 'nice' of course! On a more serious note, it is not entirely clear what a 'nice' basis for a lattice is. Ideally we would like the vectors in our basis to be short and as orthogonal as possible. In their 1982 paper "Factoring polynomials with rational coefficients", Arjen Lenstra, Hendrik Lenstra and László Lovász introduced both a concrete notion of 'a nice lattice basis' and an algorithm to compute one: the LLL-algorithm. So what is the new 'nice'? Given our basis $$v_1,\cdots, v_n$$, let $$v_1^*,\cdots,v_n^*$$ be the orthonormalization of this basis (i.e. the 'nice' basis if we forgot for one second that we were in a lattice). Then an LLL-reduced basis satisfies the following conditions:

• The basis is size-reduced: $$|({v_i \cdot v_j^*}) / ({v_j^* \cdot v_j^*})| \leq \frac{1}{2}$$ for $$1\leq j < i \leq n$$, where $$\cdot$$ denotes the inner product. What does this mean? Note that the value of the left hand side is the (absolute value of the) projection of $$v_i$$ onto $$v_j^*$$. If this was the projection of $$v_i^*$$, this value would be zero as $$v_i^*$$ and $$v_j^*$$ are orthogonal. By requiring the value to be smaller than $$\frac{1}{2}$$, we are asking $$v_i$$ to simultaneously be short and sufficiently orthogonal to $$v_j^*$$ (note that since $$j < i$$, $$v_j^*$$ is not based on $$v_i$$; for $$j > i$$, this condition trivially holds).
• The Lovász condition: $$|v_i^* + \mu_{i,i-1} v_{i-1}^*|^2 \geq \delta |v_{i-1}^*|^2$$ for $$\frac{1}{4} < \delta < 1$$ and $$\mu_{i,i-1} = |v_i \cdot v_{i-1}^*| / |v_{i-1}^* \cdot v_{i-1}^*|$$. Note that the left hand side of this expression is the squared length of the projection of $$v_i$$ into the orthogonal complement of the first i - 2 vectors, and the right hand side is $$\delta$$ times the squared length of the projection of $$v_{i-1}$$ into that same complement. In other words, $$v_i$$ should be 'further into this complement' than $$v_{i-1}$$.

It turns out that using these conditions, we can compute a 'nice' basis for a lattice in polynomial time! In particular, the LLL-algorithm does this in $$\mathcal{O}(d^5 \cdot n \cdot \log^3 B)$$ for $$B$$ the length of the largest of the $$v_i$$ (under the Euclidean norm). We will look deeper into this algorithm later, but if you can't wait I would recommend either the original paper (Factoring Polynomials with Rational Coefficients) or the book 'A Course in Computational Algebraic Number Theory' by Henri Cohen.

Warlight Bot Tactics

Just a couple of ideas regarding tactics for the Warlight AI Challenge. I haven't had time to try all of them, and I'm not even sure whether I will, since the finals are approaching rather quickly, but I thought I'd pen them down anyway. Also note that my bot is open source, if you need a basis for yours.

Joint/Split invasions

Implemented: Yes. The game takes place on a simple graph, rather than a straight line, so when you only consider attacking from a single region, you are missing a lot of opportunities.
To see what I mean, consider the following situation: This is a screenshot of round 99(!) of this match, and the blue bot is mine. I should definitely have lost, but my opponent's attack logic probably looked something like this:

```
for each friendly region R
    for each non-friendly neighbour N of R
        if attack R -> N is feasible
            attack N from R
```

Now consider the following attack sequence. The red bot is mine (yes, I cherrypicked the examples): The bot determines that the region containing 17 armies actually has 2 potential targets, and it has sufficient armies to attack both. Additionally, there are 2 other regions that could potentially aid in the attack. As you can see this can make quite a difference. Not only do you avoid situations like we saw in the first picture, but you also move across the board much faster. This is roughly how I do it:

```
for each non-friendly region R
    # Gauge how many armies would be available for an attack on R
    for each friendly neighbor N of R
        R.availableArmies += N.CanMiss(R)
    # Attack if there are enough
    if R.availableArmies >= R.requiredArmies
        Attack R
```

The CanMiss(R) subroutine of N is essential here. It specifies how many armies the friendly region N could miss if region R wasn't there. In other words, this function loops through the hostile neighbors of N that are not R, decides how many armies it needs to defend itself against them and subtracts that from the armies it actually has (clamping at zero of course). This way, not only can you organize joint attacks on hostile regions, you can also do so without compromising the safety of the attacking regions.

However, there is a slight problem. Going back up to the image sequence showcasing this behavior, notice how the region containing 17 armies splits a bit unevenly, at 15:1, rather than, say, 12:4? This is because there is some discrepancy between armies needed for an attack, and armies needed for defence. To be more precise, the 17-army region R in question sees two neighboring regions: N1 containing 8 armies, and N2 containing 2 armies. It first of all decides that if it is going to attack N1, it still needs to be able to defend itself against N2. How many armies are needed for this? Let's say two is enough. That means we can attack with fifteen! Next up, it wants to attack N2. How many armies are necessary for the defence against N1? Uh, ten should be enough. Attack with seven! I haven't really implemented a solution to this sub-problem yet, but here are some ideas:

• Execute all attacks involving the region, in ascending order of how many armies the region provides. Simple to implement, but probably won't really work that well. Note that occasionally (but very rarely), two regions might both be involved in two different attacks, and require them to be executed in opposite order.
• Once a region has attacked once, block all other attacks it is involved in (or, continue the attack without the relevant region, if possible).
• Run over the final set of attacks several times, and recalculate attacks where the total amount of armies required from a region exceeds the available, potentially discarding one or more attacks if necessary.

Move idle armies to frontlines

Implemented: Yes. Another thing I notice a lot is that when bots are done in an area, they just leave their excess armies there. For example: And on the opposite side, look how this match progresses. Really, there is nothing too fancy going on.
All you need is a simple implementation of breadth-first search that searches for regions in a specific list, rather than a single target. Just run all frontline regions through it and head to the closest. Again, people are really missing opportunities here, especially since you can make as many moves as you want in a turn. As long as you do all the transferring after the attacking, you have nothing to lose.

Properly handle a fragmented playing field

Implemented: No. Consider the following situation: Not so good, right? In this case it would probably be best to give up the regions in Asia and North America, and focus on defending Australia. However, currently my bot (and probably many others) only cares that Ontario is significantly threatened, and gives it some extra armies. Obviously, it should instead follow the US and give up on conquering Canada. This is actually harder than it looks because it requires the bot to look at the Big Picture. Currently my bot only does this by looking at how much of a region's superregion is owned (proportionally), which evidently is not enough. Ideally, the bot would pick a certain network of friendly regions (in this case Australia), and only focus on regions that are directly connected to it. I'm not yet sure how to react to this network being split up (as happened in China in the image above), but the best option is probably to continue with the largest network, or the network that contains the most (nearly) completed superregions. Either way, that's it for now. If you have any other ideas or comments on the tactics proposed above, let me know in the comments!

Mersenne Digits

Have you ever noticed that whenever a large prime is found in mathematics, the following sentence always pops up somewhere: ".. and its last 10 digits are: xxxxxxxxxx". How are these terminating digits found? Do they just calculate the whole number and then look at the tail? Does it take a supercomputer to actually find these digits? Turns out: no, not really. Even you can do it.

Where do the last digits come from?

Let's first consider where these terminating digits actually come from. Say that we want to find the last 2 digits of 7531 * 8642. Don't pull out a calculator; instead, think about where these last 2 digits would actually come from. Would they be affected by all digits in our numbers? Of course not: if we visualize this multiplication using the area model, only the red areas affect the last 2 digits of our number: As you can see, the last 2 digits of all other products are '00'. Thus, the product of any two numbers whose last 2 digits are '42' and '31' will have the same last 2 digits, every time. Specifically, 42 * 31 = 1302, so the last two digits of 7531 * 8642 are '02'.

Mersenne Primes

Time to go a little bigger. Mersenne Primes are primes given by $$2^p - 1$$ where $$p$$ itself is prime as well. Pretty much all of the largest primes we know are Mersenne Primes. In fact, the largest prime we know is a Mersenne Prime: $$2^{57885161} - 1$$. Want to know what its last 10 digits are? The problem is somewhat similar to the one we faced previously, except instead of two numbers, we have to multiply 57885161 numbers together. Luckily, since all of these numbers are the same, there is a quick way to do this!
Here's a Python implementation of the algorithm:

```python
def mypow(x, n):
    if(n == 0): return 1
    if(n == 1): return x
    if(n == 2): return x * x
    if(n % 2 == 0): return mypow(x * x, int(n / 2))
    return x * mypow(x * x, int((n - 1) / 2))
```

However, if we now ask it to calculate mypow(2, 57885161) % 10000000000, nothing happens. That's because in the end, this query will only return the last 10 digits, but it calculates all of them, which is exactly what we were trying to avoid. The key now is to split our gigantic power into the string of multiplications it really is, and after a multiplication, drop all digits that aren't in the last 10. As you saw above, to do this we use % 10000000000 or % 10**10 ($$10^{10}$$). Here's what the code looks like:

```python
def mypow(x, n, digits):
    if(n == 0): return 1
    if(n == 1): return x % (10 ** digits)
    if(n == 2): return x * x % (10 ** digits)
    if(n % 2 == 0): return mypow(x * x, int(n / 2), digits) % (10 ** digits)
    return x * mypow(x * x, int((n - 1) / 2), digits) % (10 ** digits)
```

As you can see, I went for an extra argument specifying how many digits we are looking for. First, test it a little and see if it works. For example, 2^10 = 1024, so mypow(2, 10, 2) should return 24. After that, try mypow(2, 57885161, 10) - 1 (don't forget the -1!). It should return 1724285951, the last 10 digits of our prime!

Bigger!

If you have a decent computer, the answer should have appeared in a few seconds. We should be able to calculate even larger Mersenne Primes with this method. prime(3443958) is the largest known prime $$p$$ for which $$2^p-1$$ is prime, but, hypothetically, let's just assume that prime(10000000) satisfies this condition as well! The number would be astronomically large; WolframAlpha estimates it has approximately $$3.43\cdot{10}^{54012208}$$ digits. But in a few seconds, our function tells us the last ten digits of this number are 6819899391!

Does this always work?

You might be wondering whether this works for any sort of large number. Unfortunately, it doesn't always work out as well as for Mersenne Primes. Generally, any number that can be calculated by multiplications and additions only is fine. Take, for example, the non-Mersenne but still enormously large prime 19249 * 2^13018586 + 1. The following query gives its last 10 digits: (19249 * mypow(2, 13018586, 10) + 1) % (10 ** 10), returning 8618996737. Anyway, that's it for now. Happy digit-hunting.

Stirling's Formula

Meet Stirling's Formula: $\lim_{n \to \infty} \frac{n!}{\sqrt{2\pi{n}}\cdot n^{n} \cdot e^{-n}} = 1$ Essentially, this formula says that for very large n, the formula $$\sqrt{2\pi{n}}\cdot{n}^{n}\cdot{e}^{-n}$$ approximates $$n!$$, since the ratio between these two terms approaches one. This is very useful for calculating large $$n!$$ since it is a lot faster.

Faster squaring

But why, you might ask? Calculating $$n!$$ takes $$n-1$$ multiplications, since you are multiplying n numbers together, while the term $$\sqrt{2\pi{n}}\cdot(\frac{n}{e})^{n}$$ takes at least as much, since a) it involves an n-th power, and b) it has a square root, and other scary things like $$\pi$$ and $$e$$. Turns out, there is a really neat method called exponentiation by squaring that allows us to calculate powers (in our case $$(\frac{n}{e})^{n}$$) a lot faster. Here is how it works: Let's say we want to calculate the following power: $$3^8$$. If we just multiply eight 3's together, we'll need 7 multiplications.
However, remember that we can split exponents, specifically: $a^{nm} = (a^n)^m$ We can do the same thing with our power by dividing out the two's: $3^8 = (3^2)^4 = ((3^2)^2)^2$ This is pretty neat: after all, squaring a number is just one multiplication, so now we only need 3 multiplications rather than 7! Let's try that again, now with something larger: $2^{28} = (2^2)^{14} = ((2^2)^2)^7 = 16^7$ Now we've run into a problem: uneven exponents. We can't split the 7 into 2 and 3.5 since then we'd have to take roots, and our goal was to find a faster way to calculate powers. In fact, since 7 is prime there is no good way to split it into two other integer exponents using the above method. However, not everything is lost! We'll use this other rule for exponents: $a^{n+m} = a^n\cdot{a}^m$ How about we rewrite 7 to 6 + 1? $16^7 = 16\cdot{16}^6 = 16\cdot(16^2)^3 = 16\cdot(256)^3 = 16\cdot{256}\cdot(256)^2$ In its final form our equation only needs 3 more multiplications, and it took us another 3 multiplications to get the 16's and 256's. That makes 6 multiplications in total, as opposed to the original 27 we'd need for $$2^{28}$$. This is pretty cool; now let's try coming up with a formal algorithm for this method.

The algorithm

Let's assume we are given a value x and a positive integer exponent n. We're not going to worry about non-positive exponents since those would be somewhat trivial to handle (Cue "this proof is left as an exercise for the reader."), and fractional exponents are a whole different beast. In other words, n is positive and whole, or, if you're a mathematician, $$n\in\mathbb{N}$$. The best way to do this would be by using recursion, since our algorithm continuously splits powers until the exponents are small enough (<= 2). Let's start by defining a function that does this:

```
function power(x, n)
    if(n == 1)
        return x;
    elseif(n == 2)
        return x * x;
    else
        ?
```

Those are the special cases, now we get to the recursion. First, we'll have to check whether the exponent n is even or uneven. If it is even, we can divide it by two and raise x*x to this power. If it is uneven, we subtract 1 and try again.

```
function power(x, n)
    if(n == 1)
        return x;
    elseif(n == 2)
        return x * x;
    elseif((n % 2) == 0)
        return power(x * x, n / 2);
    else
        return x * power(x, n - 1);
```

(Where % is the modulo operator, i.e. if c is the result of a % b, then c is the remainder after division a/b. So (n % 2) == 0 if and only if n is even.)

This actually works fairly well, but we can make a slight optimization. We already noticed how, after dividing our exponent by two, there's no good way to predict whether the new exponent will be even or uneven. However, when we subtract one we do know. If n is uneven, then n-1 must be even! We can skip a step in our recursion by changing the last line to this:

```
return x * power(x * x, (n - 1) / 2);
```

The new exponent (n - 1) / 2 might very well be even as well, but this time there is no simple way to predict that. Anyway, our full (pseudo)code now looks like this:

```
function power(x, n)
    if(n == 1)
        return x;
    elseif(n == 2)
        return x * x;
    elseif((n % 2) == 0)
        return power(x * x, n / 2);
    else
        return x * power(x * x, (n - 1) / 2);
```

So there you have it! Exponentiation by squaring, a simple method to greatly reduce the number of multiplications needed for calculating $$x^n$$. If you're a programmer and you think you should maybe implement this for your projects: don't worry, most of the time compilers use this (or a similar) method as well.
It's just a nice example of how some creative math can get you a long way when it comes to optimizing code.

Back to Stirling

There are other reasons that Stirling's Formula (or Stirling's Approximation) is a useful method to approximate n!, but if you hadn't noticed yet, it was just an excuse to talk about exponentiation by squaring. I would encourage you to research it on your own, though.

Major website redesign!

If you are one of the millions of people following my site, you may have noticed a few sliiight changes to how it looks. I ditched the giant empty header, and switched to a sliiightly less depressing monochrome color scheme.

CMS

More importantly though, I switched to using WordPress for the entire site rather than just the blog. It was definitely time to switch to a CMS: to give you an idea, when I wrote my old website I had no idea how mod_rewrite worked, and had to create paths to new pages by hand (i.e. creating the whole folder structure), amongst other things. WordPress is very customizable and it was quite easy to write a custom theme that allowed me to display sets of pages (tutorials, games, etc) on different pages in a custom format. All I have to do now is just write a page in the WYSIWYG-editor and file it under, say, 'Tutorials', add a custom field detailing what programming language it is about, and it'll show up in the list. I have added all my old games and software, and the majority of my tutorials. There are a handful left but those will get added later on. I didn't add all the old blogposts as I felt they weren't really interesting anyway. The exception to this is the blogpost about reskinning GM:Studio to GM8-looks, but that one is now filed under the tutorials.

Future plans

So let's talk content. As I mentioned in an earlier, now deleted blogpost, there's still a lot of tutorials (primarily about GameMaker) that haven't made it to my website yet. I'm going to collect and add them, though that process might take a while. In addition to that, I'm going to expand into other subject areas such as Javascript (for HTML5, mostly) and C#, as well as tutorials that are not language-specific, but rather present pseudocode or more theoretical knowledge. Game Design seems interesting as well. I'm also going to try to write some blogs more often. This, admittedly, is an aspiration shared by many an amateur blogger, but hey, I'm different! Potential topics include Math, Game Design and Computer Science.
https://mathmaine.com/2010/04/01/sigma-and-pi-notation/?replytocom=1507
# Sigma and Pi Notation (Summation and Product Notation) ### Sigma (Summation) Notation The Sigma symbol, $\sum$, is a capital letter in the Greek alphabet. It corresponds to “S” in our alphabet, and is used in mathematics to describe “summation”, the addition or sum of a bunch of terms (think of the starting sound of the word “sum”: Sssigma = Sssum). The Sigma symbol can be used all by itself to represent a generic sum… the general idea of a sum, of an unspecified number of unspecified terms: $\displaystyle\sum a_i~\\*\\*=~a_1+a_2+a_3+...$ But this is not something that can be evaluated to produce a specific answer, as we have not been told how many terms to include in the sum, nor have we been told how to determine the value of each term. A more typical use of Sigma notation will include an integer below the Sigma (the “starting term number”), and an integer above the Sigma (the “ending term number”). In the example below, the exact starting and ending numbers don’t matter much since we are being asked to add the same value, two, repeatedly. All that matters in this case is the difference between the starting and ending term numbers… that will determine how many twos we are being asked to add, one two for each term number. $\displaystyle\sum_{1}^{5}2~\\*\\*=~2+2+2+2+2$ Sigma notation, or as it is also called, summation notation is not usually worth the extra ink to describe simple sums such as the one above… multiplication could do that more simply. Sigma notation is most useful when the “term number” can be used in some way to calculate each term. To facilitate this, a variable is usually listed below the Sigma with an equal sign between it and the starting term number. If this variable appears in the expression being summed, then the current term number should be substituted for the variable: $\displaystyle\sum_{i=1}^{5}i~\\*\\*=~1+2+3+4+5$ Note that it is possible to have a variable below the Sigma, but never use it. In such cases, just as in the example that resulted in a bunch of twos above, the term being added never changes: $\displaystyle\sum_{n=1}^{5}x~\\*\\*=~x+x+x+x+x$ The “starting term number” need not be 1. It can be any value, including 0. For example: $\displaystyle\sum_{k=3}^{7}k~\\*\\*=~3+4+5+6+7$ That covers what you need to know to begin working with Sigma notation. However, since Sigma notation will usually have more complex expressions after the Sigma symbol, here are some further examples to give you a sense of what is possible: $\displaystyle\sum_{i=2}^{5}2i\\*~\\*=2(2)+2(3)+2(4)+2(5)\\*~\\*=4+6+8+10$ $\displaystyle\sum_{j=1}^{4}jx\\*~\\*=1x+2x+3x+4x$ $\displaystyle\sum_{k=2}^{4}(k^2-3kx+1)\\*~\\*=(2^2-3(2)x+1)+(3^2-3(3)x+1)+(4^2-3(4)x+1)\\*~\\*=(4-6x+1)+(9-9x+1)+(16-12x+1)$ $\displaystyle\sum_{n=0}^{3}(n+x)\\*~\\*=(0+x)+(1+x)+(2+x)+(3+x)\\*~\\*=0+1+2+3+x+x+x+x$ Note that the last example above illustrates that, using the commutative property of addition, a sum of multiple terms can be broken up into multiple sums: $\displaystyle\sum_{i=0}^{3}(i+x)\\*~\\*=\displaystyle\sum_{i=0}^{3}i+\displaystyle\sum_{i=0}^{3}x$ And lastly, this notation can be nested: $\displaystyle\sum_{i=1}^{2}\displaystyle\sum_{j=4}^{6}(3ij)\\*~\\*=\displaystyle\sum_{i=1}^{2}(3i\cdot4+3i\cdot5+3i\cdot6)\\*~\\*=(3\cdot1\cdot4+3\cdot1\cdot5+3\cdot1\cdot6)+ (3\cdot2\cdot4+3\cdot2\cdot5+3\cdot2\cdot6)$ The rightmost sigma (similar to the innermost function when working with composed functions) above should be evaluated first. 
Once that has been evaluated, you can evaluate the next sigma to the left. Parentheses can also be used to make the order of evaluation clear. ### Pi (Product) Notation The Pi symbol, $\prod$, is a capital letter in the Greek alphabet call “Pi”, and corresponds to “P” in our alphabet. It is used in mathematics to represent the product of a bunch of terms (think of the starting sound of the word “product”: Pppi = Ppproduct). It is used in the same way as the Sigma symbol described above, except that succeeding terms are multiplied instead of added: $\displaystyle\prod_{k=3}^{7}k\\*~\\*=(3)(4)(5)(6)(7)$ $\displaystyle\prod_{n=0}^{3}(n+x)\\*~\\*=(0+x)(1+x)(2+x)(3+x)$ $\displaystyle\prod_{i=1}^{2}\displaystyle\prod_{j=4}^{6}(3ij)\\*~\\*=\displaystyle\prod_{i=1}^{2}((3i\cdot4)(3i\cdot5)(3i\cdot6))\\*~\\*=((3\cdot1\cdot4)(3\cdot1\cdot5)(3\cdot1\cdot6)) ((3\cdot2\cdot4)(3\cdot2\cdot5)(3\cdot2\cdot6))$ ### Summary Sigma (summation) and Pi (product) notation are used in mathematics to indicate repeated addition or multiplication. Sigma notation provides a compact way to represent many sums, and is used extensively when working with Arithmetic or Geometric Series. Pi notation provides a compact way to represent many products. To make use of them you will need a “closed form” expression (one that allows you to describe each term’s value using the term number) that describes all terms in the sum or product (just as you often do when working with sequences and series). Sigma and Pi notation save much paper and ink, as do other math notations, and allow fairly complex ideas to be described in a relatively compact notation. ### Whit Ford Math teacher, substitute teacher, and tutor (along with other avocations) ## 49 thoughts on “Sigma and Pi Notation (Summation and Product Notation)” 1. Douglas maindo says: am very thankful 2 the information above.it is very helpful to me 2. I always see these equations on in technical papers but I never knew how to decode them. This was so helpful! It’s basically a for loop in scripting, makes so much sense. Also your blog is awesome, thank you for sharing! 1. Thank you! And yes, a little programming experience with loops makes Sigma and Pi Notation much easier to understand… 3. Sundaram says: Very very useful. Thanks a lot – Sundaram 4. Anonymous says: Thank you, this was very helpful. I was finding how to use Sigma notation, and finally found such a good one. 5. Samama Fahim says: Indeed a very lucid exposition of Sigma and Pi notations! Thanks 🙂 6. Very useful post. But what if the Pi notation is not in closed form, such as $\displaystyle\prod^{n}_{k=2}(1-\dfrac{1}{k^2})$ 1. If the index limit above the Pi symbol is a variable, as in the example you gave: $\displaystyle\prod^{n}_{k=2}(1-\dfrac{1}{k^2})$ then there are an indeterminate number of factors in the product until such time as “n” is specified. I suppose a problem could be posed this way if you are being asked to come up with an expression for such a product that does not involve Pi notation: is there some closed form expression involving “n” that represents this product? So, if n=3, then $\displaystyle\prod^{n}_{k=2}(1-\dfrac{1}{k^2})=(1-\dfrac{1}{4})\cdot(1-\dfrac{1}{9})$ and if n=4, then $\displaystyle\prod^{n}_{k=2}(1-\dfrac{1}{k^2})=(1-\dfrac{1}{4})\cdot(1-\dfrac{1}{9})\cdot(1-\dfrac{1}{16})$ and if you leave the final index as “n” becomes: $=(1-\dfrac{1}{4})\cdot(1-\dfrac{1}{9})\cdot(1-\dfrac{1}{16})\cdot ... \cdot (1-\dfrac{1}{n^2})$ $=\dfrac{3}{4}\cdot\dfrac{8}{9}\cdot\dfrac{15}{16}\cdot ... 
\cdot\dfrac{n^2-1}{n^2}$ Is there some closed form expression that represents this product? 1. Stephen Kazoullis says: Shouldn’t the k be squared ? 2. Which “k” are you referring to? There are several in the posting… Ooops – I just realized you were asking about my reply to the comment. You are correct. I will modify my response shortly. 3. Jason Z Okoro says: Actually n SHOULD be squared in his reply since he’s saying that that’s the LAST TERM in the product. Basically this is where k = n. It’s important to emphasize that. 7. what is the relation between the two when they are logarithmic differentiated? 1. It would help if you could provide an example of what you are asking about. If you need to differentiate a sum, I would not expect logarithmic differentiation to be very useful, as the laws of logarithms do not allow us to do anything with something like $ln(y)=ln(\displaystyle\sum _i x^{ix})$ Differentiating this would turn the right side into the reciprocal of the original sum times its derivative = a mess. However, if you need to differentiate a product, logarithmic differentiation could make life simpler by converting a long succession of product rule applications into a sum of logs. Since $ln(xy)=ln(x)+ln(y)$ we can rewrite the log of a product as a sum of logs: $ln(y)=ln(\displaystyle\prod _i x^{ix})\\*\\ln(y)=\displaystyle\sum_i ln(x^{ix})$ which, in many cases, could simplify the differentiation process. If sigma is for summation, and pi is for multiplication, are there any notations for division and subtraction? Just out of curiosity? 1. Good question! Subtraction can be rewritten as the addition of a negative. So Sigma notation describes repeated subtraction when its argument is a negative quantity. Division can be rewritten as multiplication by the reciprocal. So Pi notation describes repeated division when its argument has a denominator other than 1. Therefore, additional notations are not needed to describe repeated subtraction or division… Which is quite convenient. 9. john walter says: Sir, how about expressing thing one 1x2x3 + 2x3x4 + 3x4x5 + ….will it be a combination of sigma and pi? If you can illustrate it please. 1. You are correct – this can be represented using a combination of Sigma and Pi notation: $\displaystyle\sum_{i=1}^{N} \displaystyle\prod_{j=0}^{2}(i+j)$ In the above notation, i is the index variable for the Sum, and provides the starting number for each product. By having the Product index variable start at zero, the expression to generate each value is a bit simpler. If j went from one to three each time, the expression on the right would have to be (i + j – 1). 1. gargi says: hello sir, thank you for the amazing and very helpful post. I was just practicing the question wanted to know can 30….. n(n+1)(n+2) be the ans to the above sigma and product equation given by you. 2. Gargi, The Sigma and Pi expression I used to answer the previous question did not have a value specified for “N”, so any value given for the expression will have to be in terms of “N”… as your question is. However, your expression leaves me uncertain as to whether you are analyzing the situation correctly or not. Let’s list the first few terms of this sequence individually to get a sense of how this series behaves: $6+24+60+120+...+(N)(N+1)(N+2)$ So, the sum of the first two terms would indeed be 30. 
But if you are trying to give a general answer, you should show each term individually so that the person reading your answer can see any pattern that is developing, and understand how to fill in the “…” used to represent all the terms that are not shown. 3. Yucel says: Hi Mr. Ford, any hint for the solution of following infinite series will appretiated. Thanks… $\displaystyle\sum_{i=0}^\infty \displaystyle\prod_{j=0}^i \dfrac{j+7}{6(j+1)}$ 4. Yucel, Evaluating the first few terms, just to get a sense of its behavior, produces the following (after converting all fractions to have a common denominator so that they are easier to compare quickly: i=0: (14/12) i=1: (14/12)(8/12) i=2: (14/12)(8/12)(6/12) i=3: (14/12)(8/12)(6/12)(5/12) It seems that successive terms are growing smaller, since each is the previous term multiplied by a factor that is less than 1 and shrinking, so the series will converge (it shrinks faster than a geometric sequence with a common ratio that is less than one). But to what value? I don’t know what context this problem arises in for you, and therefore what tools you are expected to use to analyze the problem (assuming it is a problem from a class). Plus I have not worked with infinite series in a while – off the top of my head, I might try to “squeeze” this between two series for which I know the sum, to at least provide upper and lower bounds for the sum. An upper bound would be provided by an infinite geometric sequence, but I am uncertain what might best provide a lower bound. Does that help? 10. Anonymous says: $\displaystyle\sum_{n=0}^{3}(n+x)\\*~\\*=(0+x)+(1+x)+(2+x)+(3+x)\\*~\\*=0+1+2+3+x+x+x+x$ what can be the correct answer this equation? 1. The example was an expression, not an equation, therefore it cannot be “solved”. However, this particular example can be “simplified” by collecting like terms to become $6+4x$ which would raise the question: “why write it using Sigma Notation when you could just as easily write $4x+6$?”. My answer to that would be: I probably would not use Sigma Notation to write such a simple expression. This example was intended show how to interpret Sigma Notation in some of the many ways that it can be used. 11. Jonty says: I have a student asking whether there is a symbol for exponentiation of a sequence? So there’s SIGMA for summation of a sequence, PI for multiplication of a sequence and perhaps something else for exponentiation of a sequence? So like E(x+n) for n=1 to 3 would produce (x+1)^(x+2)^(x+3)… Or maybe((x+1)^(x+2))^(x+3) Thanks, Jonty 1. Jonty, Good question! I am not aware of such notation, and furthermore, I am not aware of situations where such notation would be needed. Do you know of situations that require repeating exponentiation to model them? I suppose that some multi-dimensional models (perhaps like String Theory) could require some repeated exponentiation, but even there I doubt they would need to get beyond several levels of exponentiation (the result would grow really fast…). I’ll research this a bit to see if I can find anything, and if I do I’ll post another reply. 1. Jonty says: Thanks Whit. The student in question is actually only 11 years old and somehow I don’t think that he will accept the “not needed” reason! I’ll challenge him to find a need for it and maybe he can create his own notation. He said it had something to do with his investigation into combination formulae… He’s currently using a backwards SIGMA symbol! 2. 
Jonty, One other thought… if $(x^a)^b=x^{ab}$ and therefore $((x^a)^b)^c=x^{abc}$ etc… then there is no need for a notation to represent repeated exponentiation, since exponents that are products already represent repeated exponentiation. Using Pi notation in the exponent achieves the desired purpose. 3. Jonty says: Good point, however, x^a^b is not the same as x^ab. For repeated exponentiation I would assume that form rather than (x^a)^b. So maybe we do still need something? 4. Ooops – didn’t think of $x^{a^{b^c}}$ I still cannot think of either an application for such an expression or a notation for it. Perhaps this is a good question for a forum like http://math.stackexchange.com/questions 12. Jeremy says: If the sum of a bunch of terms in known as a “summation of a series”, then what is the product of a bunch of terms known as in mathematics? 1. The exact vocabulary used is likely to differ from one person to another, and I doubt that everyone will care that much about the words used, but I will be picky about the words used in an effort to clarify the situation and answer your question. A “sequence” is an ordered set of terms which are NOT added together. There is often a pattern to them, a formula that can be used to determine the value of the next term in the sequence. Sequence definitions usually have no need for summation notation. A “series” is the sum of the first N terms of a sequence. Series definitions almost always rely on summation notation. The phrase you wrote, “summation of a series”, is either redundant (they could have just said “a series”), or indicated that they wish to sum the first N terms of a series (the sum of terms, each which is a sum, something that might have a use, but I have not seen used). A polynomial (such as a quadratic) can be called “a sum of terms”. And finally to your question. A “product” is the result of multiplying two or more “factors”. The entire product is a single “term”. So when using pi notation, the expression after the pi describes each “factor” (not “term”), and the final result after the pi notation has been evaluated is a “product”. No new vocabulary is needed. 13. Nikson says: Does Multiplication operator always increment? does $\displaystyle\prod_{i~=~n-1}^{0}$ work? i.e. can there be a bigger value at the base and smaller value at top of the PI operator? I want to do that to signify that the matrices do not commute. 1. Interesting question! Notation is a convention, a commonly shared interpretation of some symbols. So, even if it is not commonly used in a particular way, there is no strong reason I can think of why you couldn’t use it that way (if necessary, including a note or example describing how you intend the notation to be interpreted). Loops in programming languages can be written to decrease the index each time just as easily as they can increase it. The convention is to increase it, just like with Sigma and Pi notation, but they also support decreasing indeces. So, my opinion would be: sure! Why not? If I were to see an upper index value that is smaller than the lower one, my first assumption would be that I would need to decrease the index by 1 for each iteration – which seems to be what you intend. I do not follow your thinking though when you say you wish to use a descending index value to indicate that matrices do not commute… I would not perceive a descending index value, or an ascending one, to indicate anything about the commutative property’s applicability to the resulting expression. 
After expanding the Pi notation into the full expression that it represents, the person working with that expression must follow the rules of algebra (or matrix algebra), and the index number of each factor would not have any effect on such rules. But, perhaps I do not understand the situation you seek to describe. 14. Shri Krishan Baghel says: I like it … And I hope it will help other students too to acheive their goals … 15. Appiah Godfred says: How can u write this using summation notation: 3 -5+7 -9+11-13+15? 1. Appiah, I notice three things when I look at this sequence: 1) The values alternate sign, so we need a factor that changes sign for each value of “n”. (-1)^n will change sign every time “n” grows by one, but when n=1 it is negative – which is the wrong sign for the first time. Adding or subtracting 1 from “n” will make the factor positive when n=1 (since a negative raised to the zero, or an even, power is positive). So $(-1)^{n-1}$ will provide the correct sign for the nth term. 2) The values grow in magnitude linearly by 2 each time. A factor of (2n) will produce such numbers, but when n=1 this will have a value of 2, not 3… so I need to add 1 to each value: $(2n+1)$. 3) There are seven terms, so n will need a starting value of 1, and an ending value of 7. Putting the three thoughts above together, I get: $\displaystyle\sum_{n=1}^{7}{(-1)^{n-1}(2n+1)}$ 16. 18mn6@cis.dk says: What if I want to write the sequence: $2^n-1 + 2(2^n-2 + 2(2^n-3 + 2(2^n-4 + ... )))$ using Sigma or Pi notation, or possibly both. Furthermore is there a way of simplifying the notation and finding a result that is a function of n? 1. If I have interpreted the expression you show correctly, it is neither an arithmetic nor a geometric sequence. Futhermore, it appears to me as though it will always have an infinite number of sub-expressions that need to be evaluated, regardless of the value of “n”. It is not – a sum of consistent terms (the third term contains all the of the remaining “terms”) – a product of consistent factors (the first two terms are not multiplied by what follows) so I do not see a way of representing it using either Sigma or Pi notation. You may be able to simplify this expression by expanding the values a bit to see if there is a pattern, but the result will probably vary a great deal depending on the value of “n”. For example, if n=1, then the expression would be: $2-1+2(2-2+2(2-3+2(2-4+2(...))))$ $=~1+2(0+2(-1+2(-2+2(...)))$ it would appear as though the quantity in parentheses is becoming increasingly negative (a sum of growing negative numbers), and therefore the value probably goes to negative infinity. If n=2, then $4-1+2(4-2+2(4-3+2(4-4+2(...))))$ $=~3+2(2+2(1+2(0+2(...)))$ once again it would appear as though the quantity in parentheses is going to become increasingly negative (a sum of growing negative numbers), and therefore the value propably goes to negative infinity again, even though it starts out a bit larger. As n grows, the constant power of 2 in the expression will dominate the initial results a lot more, but the infinite number of subtractions from it will eventually catch up to its value, no matter how large it is. 17. Thanks for your clear explanations. It helps me to understands the notation means and how to use it. 18. Hello, How would I derive the polynomial for the following expression: n = 112 expression is $x-y^{11n}$ multiply until n reaches 143 (i.e. n=112, n=113 etc.) 
I’m interested in simplifying the polynomial to 32 terms and determine the exponents of y Thank you. 1. Alin, Using Pi notation, I interpret your question to be $\displaystyle\prod_{n=112}^{143}(x-y^{11n})$ $~=~(x-y^{1232})(x-y^{1243})(x-y^{1254})...(x-y^{1573}))$ Using a binomial expansion, the terms will be $(_{32}C_i)(x^{32-i})(y^{\sum_{k=112}^{112+i} 11k})$ for i = 0 to 32. Terms with odd values of i will be negative. The coefficients will be “32 Choose i”, or $\dfrac{32!}{(32-i)!~i!}$ Does that help? Hello, I am trying to utilize the Pi notation to represent a repeating multiplication, but one that rounds up to the nearest whole after each time there is a multiplication(or division). Before I continue please forgive my mathematical illiteracy, I am taking an amateur interest in this. What I am wondering about is this. If I wanted to take, let’s say “I”, and multiply “I” by a repeating multiple, let’s say “1/(1-r)”. I might write it as: I×(1÷(1-r))×(1÷(1-r))×(1÷(1-r))… or I÷(1-r)÷(1-r)÷(1-r)… Reading this post it seems like this would be easy to use the big Pi Π notation. Ex: I × Π(1÷(1-r))….. something like that. If I wanted to represent something being rounded up I think I could use ceiling function brackets: ⌈⌉. So for instance, if I wanted to round the above to the nearest whole after each division (or multiplication) step I think I could write: ⌈⌈⌈I÷(1-r)⌉÷(1-r)⌉÷(1-r)⌉…. In my mind, this rounds up each time the value is divided by (1-r). I simply cannot figure out how to represent that using big Pi Π. I hope that makes some sense. Any insights would be very appreciate. Thanks, The notation that follows a capital Pi describes only the term that is to be multiplied. The difficulty you describe is that you wish to specify what happens to the result of that product, and capital Pi notation does not provide any means to do that. Two ways to resolve the problem come to mind: 1) your expansion of the problem using square brackets 2) using a programming language to describe a loop in which each product is then rounded, before repeating the loop until the specified number of multiplications have been carried out. 20. Anonymous says: Hello, Sir,If I have equation like this : X1=(1-P1)(1-P2)P3+(1-P2)(1-P3)P1+(1-P3)(1-P1)P2 X2=(1-P1)P2.P3+(1-P2)P2P3+(1-P3)P2P3 X3=P1.P2.P3 For example, X1 means we have One term say P3 and rest two are (1-P) and summation of such product terms for 3 values(P1,P2 and P3). How should I proceed if I want to get it for n instead of 3. Equation for Xn in terms of P1,P2,……Pn. 1. Summation notation does not provide an easy way that I can think of to do what you describe. While it can add a bunch of terms very nicely, the challenge is describing each of the terms you show as a function of the term number. This would be easy to do in a computer program, but not so much using summation notation. 21. Dharmendra paswan says: limit,n–>infinity {tan(p/2n)tan(2p/2n)tan(3p/2n)……}^(1/n) . find the value where p=pi. option (a) 1 (b) 2-log2 (c) 3 (d) 3 -log4. please reply in my email ( dpaswan309@gmail.com). thanks 1. Dharmendra, This problem is not strictly a Pi Notation problem, as it involves a limit and a power outside of any Pi Notation. Also, I am not certain where the product you describe is supposed to end. If it ends with, or continues beyond tan(np/2n), which will always be undefined, then my first impression is that there would be no limit to the product. However, I have never worked with infinite products. 
Your answer options suggest that there is some expansion of a logarithm that results in an infinite product of tangent functions, however I am not familiar with that. Sorry! 22. Mr. Unknown says: How to find the derivative of the pi notation 1. If each factor described by the pi notation contains an instance of the variable, you would need to use the product rule… potentially many times. However, if each factor does not contain the variable (or a function of the variable) that you are differentiating with respect to, then the whole product would be a constant. So, depending on the number of factors in the product, it could be a very long process, or a very short one.
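As one commenter above puts it, these notations behave like loops in a program. A minimal Python sketch of that correspondence (the function names are mine), evaluating the article's $\displaystyle\sum_{k=3}^{7}k$ and $\displaystyle\prod_{k=3}^{7}k$ examples:

```python
def sigma(f, start, end):
    """Add f(k) for k = start, ..., end (inclusive), like Sigma notation."""
    total = 0
    for k in range(start, end + 1):
        total += f(k)
    return total

def pi(f, start, end):
    """Multiply f(k) for k = start, ..., end (inclusive), like Pi notation."""
    result = 1
    for k in range(start, end + 1):
        result *= f(k)
    return result

print(sigma(lambda k: k, 3, 7))  # 3 + 4 + 5 + 6 + 7 = 25
print(pi(lambda k: k, 3, 7))     # 3 * 4 * 5 * 6 * 7 = 2520
```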
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 48, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834459781646729, "perplexity": 486.3146050602524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529737.79/warc/CC-MAIN-20190723215340-20190724001340-00225.warc.gz"}
http://physics.stackexchange.com/questions/106457/expectation-values-and-derivation-of-heisenberg-equation
# Expectation Values and Derivation of Heisenberg Equation?

Consider a system of particles with wave function $\psi(x)$ ($x$ can be understood to stand for all degrees of freedom of the system; so, if we have a system of two particles then $x$ should represent {$x_1; y_1; z_1; x_2; y_2; z_2$}). The expectation value of an operator $\hat{A}$ that operates on $\psi$ is defined by: $$\langle\hat{A}\rangle = \int\psi^{*}\hat{A}\psi \,dx$$ Yup, this makes sense to me and there's nothing new here.

If $\psi$ is an eigenfunction of $\hat{A}$ with eigenvalue $a$, then, assuming the wave function to be normalized, we have: $$\langle \hat{A} \rangle = a$$ This is where I want to confirm something. $$\hat{A}\psi = a\psi$$ Hence, $$\langle \hat{A} \rangle =\int\psi^{*} a \psi \,dx$$ Since $a$ is a constant I can take it out: $$\langle\hat{A}\rangle = a \int\psi^{*} \psi \,dx$$ We assumed that the wave function was normalized, hence $$\int\psi^{*} \psi \,dx = 1$$ leaving $$\langle\hat{A}\rangle = a$$

Now consider the rate of change of the expectation value of $\hat{A}$: $$\frac{d\langle\hat{A}\rangle}{dt} = \int{\frac{\partial}{\partial t}}(\psi^{*}\hat{A}\psi)\,dx$$ $$=\int\left(\frac{\partial \psi^{*}}{\partial t}\hat{A}\psi+\psi^{*}\frac{\partial\hat{A}}{\partial t}\psi+\psi^{*}\hat{A}\frac{\partial \psi}{\partial t}\right) dx$$ $$=\Big\langle\frac{\partial\hat{A}}{\partial t}\Big\rangle +\frac{i}{\hbar}\int{\left[(\hat{H}\psi)^{*}\hat{A}\psi-\psi^{*}\hat{A}\hat{H}\psi\right]}\,dx$$ where we have used the Schrodinger equation: $$i\hbar\frac{\partial \psi}{\partial t} = \hat{H}\psi$$

The second line is easily obtained via differentiation. The second term in the second line corresponds to the first term in the third line, correct? I do not see how the remaining term was obtained. In particular, where does the $\frac{i}{\hbar}$ originate from in $$\frac{i}{\hbar}\int{\left[(\hat{H}\psi)^{*}\hat{A}\psi-\psi^{*}\hat{A}\hat{H}\psi\right]}\,dx$$
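For what it's worth, the $\frac{i}{\hbar}$ comes from substituting the Schrödinger equation and its complex conjugate for the time derivatives of $\psi$ and $\psi^{*}$. A sketch of that step (my addition, not part of the original post):

```latex
% From i\hbar\,\partial_t\psi = \hat{H}\psi and its complex conjugate:
%   \partial_t\psi   =  \tfrac{1}{i\hbar}\,\hat{H}\psi,
%   \partial_t\psi^* = -\tfrac{1}{i\hbar}\,(\hat{H}\psi)^*
\begin{aligned}
\int\!\Big(\frac{\partial\psi^*}{\partial t}\,\hat{A}\psi
          + \psi^*\hat{A}\,\frac{\partial\psi}{\partial t}\Big)\,dx
&= \int\!\Big(-\frac{1}{i\hbar}(\hat{H}\psi)^*\hat{A}\psi
          + \frac{1}{i\hbar}\,\psi^*\hat{A}\hat{H}\psi\Big)\,dx \\
&= \frac{i}{\hbar}\int\!\Big[(\hat{H}\psi)^*\hat{A}\psi
          - \psi^*\hat{A}\hat{H}\psi\Big]\,dx ,
\end{aligned}
```

using $-\frac{1}{i\hbar} = \frac{i}{\hbar}$.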
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9849386811256409, "perplexity": 145.3085272722131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927824.26/warc/CC-MAIN-20150521113207-00343-ip-10-180-206-219.ec2.internal.warc.gz"}
https://proofwiki.org/wiki/Kluyver%27s_Formula_for_Ramanujan%27s_Sum
# Kluyver's Formula for Ramanujan's Sum

## Theorem

Let $q \in \N_{>0}$. Let $n \in \N$. Let $\map {c_q} n$ be Ramanujan's sum. Let $\mu$ denote the Möbius function. Then: $\displaystyle \map {c_q} n = \sum_{d \mathop \divides \gcd \set {q, n} } d \map \mu {\frac q d}$ where $\divides$ denotes divisibility.

## Proof

Let $\alpha \in \R$. Let $e: \R \to \C$ be the mapping defined as: $\map e \alpha := \map \exp {2 \pi i \alpha}$ Let $\zeta_q$ be a primitive $q$th root of unity. Let: $\displaystyle \map {\eta_q} n := \sum_{1 \mathop \le a \mathop \le q} \map e {\frac {a n} q}$ By Complex Roots of Unity in Exponential Form this is the sum of the $n$th powers of all $q$th roots of unity. Grouping the terms $a$ according to $\gcd \set {a, q}$: $\displaystyle \map {\eta_q} n = \sum_{d \mathop \divides q} \map {c_d} n$ By the Möbius Inversion Formula, this gives: $\displaystyle \map {c_q} n = \sum_{d \mathop \divides q} \map {\eta_d} n \map \mu {\frac q d}$ Now by Sum of Roots of Unity, $\map {\eta_d} n = d$ when $d \divides n$ and $\map {\eta_d} n = 0$ otherwise, so only the divisors $d$ of $q$ which also divide $n$ contribute: $\displaystyle \map {c_q} n = \sum_{d \mathop \divides \gcd \set {q, n} } d \map \mu {\frac q d}$ as required. $\blacksquare$

## Source of Name

This entry was named for Jan Cornelis Kluyver.
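A quick numerical sanity check of the identity (my own addition, not part of the ProofWiki entry): it compares the defining sum for Ramanujan's sum with the right-hand side for small $q$ and $n$.

```python
from math import gcd, cos, pi

def mobius(m):
    # Naive Möbius function by trial division.
    result, k = 1, 2
    while k * k <= m:
        if m % k == 0:
            m //= k
            if m % k == 0:
                return 0          # square factor
            result = -result
        k += 1
    return -result if m > 1 else result

def ramanujan_c(q, n):
    # c_q(n) = sum of e(an/q) over 1 <= a <= q with gcd(a, q) = 1 (it is real).
    return round(sum(cos(2 * pi * a * n / q) for a in range(1, q + 1) if gcd(a, q) == 1))

def kluyver(q, n):
    g = gcd(q, n)
    return sum(d * mobius(q // d) for d in range(1, g + 1) if g % d == 0)

assert all(ramanujan_c(q, n) == kluyver(q, n) for q in range(1, 30) for n in range(1, 30))
print("Kluyver's formula verified for all q, n < 30")
```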
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986546635627747, "perplexity": 1203.558287065147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198942.13/warc/CC-MAIN-20200921050331-20200921080331-00401.warc.gz"}
https://www.physicsforums.com/threads/physics-laws-and-expansion-of-universe.150396/
# Physics laws and expansion of universe?

## Main Question or Discussion Point

My interpretation of the expansion of the universe might be naive... correct me if I make any mistake. So, space in general is expanding, or being created all over the universe. If you have a photon traveling in free space, the frequency gets shifted. But Energy = h*frequency; the energy of the photon decreases as it travels. Where did the energy of that wave go? Similarly, what about momentum? p = E/c — is that not conserved as well? I think it has something to do with reference frame. How can one resolve the paradox?

This question is not uncommon, and there is no good answer at this point. Conservation of energy and general relativity have not been reconciled.

Yes, this is quite disturbing to me. Energy conservation has to do with time symmetry. Since there is an expansion, there is an arrow of time, so the symmetry is broken. If energy cannot be defined, then how about entropy? If energy and entropy cannot be defined, we cannot do much thermodynamics with the universe. There will be no 1st law and 2nd law of universal thermodynamics. And yet in thermodynamics, we often need to consider the entropy of the universe and things like that. Really strange and mind boggling. Yet we can define a temperature of the universe, and a flatness of the universe. So it seems at least we can come up with an equation of state of the universe.

I would say perhaps that energy goes into the expansion? Though cause and effect kind of go out the window with that one.

Hey, the apparent frequency of the photon changes. I don't think the real frequency of the photon changes. Please correct me if I'm wrong. What I believe is that mass is a form of energy. Mass occurs between a specific range of energy density, but the range is not clearly defined. Even if energy density is slightly different, characteristics of mass may occur. All energy forms differ due to energy density. And when an object attains relative velocity the energy density tends to increase, which gives an effect of mass.

The photon frequency does change really, not apparently. The universe expansion actually stretches the photon, lengthening its wavelength.

And since the rate of expansion of the universe is increasing, a photon's frequency will eventually be red-shifted to near zero... so its energy is gradually being drained... Also, if space is expanding, then distant stars will move farther and farther away from us (accelerated frame of reference). Yet, in that distant star's frame, there will be no fictitious force due to that apparent acceleration... does it mean the equivalence principle is broken?
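For reference (my addition, not part of the original thread), the standard statement of the cosmological redshift being discussed: a photon emitted when the scale factor was $a(t_{\rm em})$ and observed when it is $a(t_{\rm obs})$ satisfies

```latex
1 + z \;=\; \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{em}}}
       \;=\; \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{em}})},
\qquad
E_{\mathrm{obs}} \;=\; \frac{h\nu_{\mathrm{em}}}{1+z},
\qquad
p_{\mathrm{obs}} \;=\; \frac{E_{\mathrm{obs}}}{c},
```

so both the photon's energy and its momentum, as measured by comoving observers, fall as the universe expands.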
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9724065065383911, "perplexity": 689.4942017500414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540527205.81/warc/CC-MAIN-20191210095118-20191210123118-00142.warc.gz"}
https://brilliant.org/discussions/thread/brilliants-server-infrastructure-professional/
# Brilliant’s Server Infrastructure (Professional Programming Series Micro-Post)

While thinking about topics for the Professional Programming Series, we realized people might find our server diagram interesting. I think it is worth posting because even though it is mostly industry standard, lots of the diagrams that you'll find on the internet are a bit more abstract and simply describe how things connect in a hypothetical situation. Instead, this diagram accurately represents how things work under the hood at Brilliant. Feel free to ask questions about the server diagram, and I’ll try to answer as many as I can.

Note by Sam Solomon 2 years, 8 months ago

Wait, why wouldn't you store the user media on the servers? I mean doesn't the data go through them when the user's device requests the data? Btw, nice job - 2 years, 8 months ago

Hi Nolan, great question! For a while we did store user media on our application server; however, you may note that I used the singular "server". Several benefits of using a 3rd party service for hosting files are:

1. You don't have to worry about running out of disk space.
2. You don't have to worry about semi-arbitrary OS restrictions (we once ran into an issue where we reached the maximum number of directories that could exist in another directory).
3. Most importantly, once you have more than one server processing requests for the same content, you either have to sync the files between all of your app servers whenever someone uploads something, or you have to make your own standalone server for hosting user media and set up the ability to pass files from all of the app servers to the file server when someone uploads them (at which point, why not use a service like Amazon S3 or Rackspace Cloud Files).

Also, if you look a little closer, you'll notice that "read only" arrows flow from the user browser through the Content Delivery Network (CDN) to the user media service. The typical way in which user media is used is (a rough code sketch of this flow appears at the end of this thread):

1. A user submits a form that includes a file.
2. Our server processes the file and, if validation passes, uploads the file to the file storage service.
3. The server saves a new object in the database that includes the path to the image.
4. When someone views a page that needs to display that object, we use an html img tag to reference the file on the 3rd party service (which is how most images are displayed anyways, even if they are hosted on the same server).
5. Your browser sees the img tag and requests and downloads the file from the 3rd party service (skipping the app servers).

Staff - 2 years, 8 months ago

Ohhhhhh, ok. Once again, very cool. - 2 years, 8 months ago

I am just curious, how physically distributed are the app/task servers? I mean, do you have separate dedicated servers for different continents (since the clients are from all over the world), or do you have a centralized design? - 2 years, 8 months ago

Unfortunately for now (though fortunately for our sanity) our servers are all centrally located. It is definitely something that keeps coming up, but it's a really big task to figure out how to split everything up. This is especially true because any type of distribution would likely require changes to how Brilliant works in order to make up for not having quick read/write access to all of the databases. That being said, we do spend a lot of time figuring out how to make the site faster. We always try to reduce...
• the number and size of files that need to be loaded
• the amount of sequential round trips (redirects, for instance, are relatively fast if you're close to the server, but can be very slow due to the redirect response and the resulting request needing to circle the globe)
• client rendering time (you may notice that most math is actually rendered on the server side and only rarely will our JavaScript $$\LaTeX$$ renderer have to step in to render math after the page loads; this is what the latex renderer server in the "Services" box in the lower right hand corner is doing).

Staff - 2 years, 8 months ago

How do you manage database write operations? Do you perform write operations on one DB and use the other DB for all read operations? Isn't a master-master architecture more suited? - 2 years, 8 months ago

We perform most write and read operations on the main database and perform some heavy read operations on the other. Master-master architecture is good for some uses, but for us, the downsides aren't worth it at this point. The main things that we rely on that would be hard or impossible to get and have be efficient in a master-master setup are database transactions and a guarantee that once we store something in the database, it's immediately available for retrieval and will stay available (unless we explicitly delete it). One of the main benefits of master-master architecture is having a quick failover, which is possible to mostly accomplish with our setup by promoting the follower to be the master if the master goes down. The other main benefit of master-master architectures is they can be used to distribute master nodes around the globe so that you can have app servers closer to the browser, but distributing the database like that causes all sorts of synchronization problems unless you have a very simple site where the write operations are much more limited/orderly than we have (see this comment for a bit more info on why we don't yet distribute our servers).

Staff - 2 years, 8 months ago
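A minimal sketch of the upload-then-reference pattern described in steps 1–5 earlier in this thread. It uses boto3's S3 client as an example object-storage client; the bucket name, CDN domain, and db handle are hypothetical placeholders, not Brilliant's actual code.

```python
import uuid
import boto3  # S3-style object storage client, used here only as an example

s3 = boto3.client("s3")
BUCKET = "example-user-media"          # hypothetical bucket

def handle_upload(file_obj, filename, db):
    """Steps 2-3: validate, push the file to object storage, store only the path."""
    if not filename.lower().endswith((".png", ".jpg", ".gif")):
        raise ValueError("unsupported file type")
    key = f"uploads/{uuid.uuid4()}-{filename}"
    s3.upload_fileobj(file_obj, BUCKET, key)   # file never has to live on app-server disk
    db.save({"media_path": key})               # the app database keeps only the reference
    return key

def render_img_tag(key):
    """Steps 4-5: the page emits an <img> pointing at the CDN, so the browser
    fetches the file directly and the app servers are skipped."""
    return f'<img src="https://cdn.example.com/{key}">'
```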
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1808270812034607, "perplexity": 1348.3858323629413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823482.25/warc/CC-MAIN-20171019231858-20171020011858-00594.warc.gz"}
http://math.stackexchange.com/questions/298137/counting-monic-polynomials-over-mathbb-z-p-mathbb-z
# Counting monic polynomials over $\mathbb Z / p\mathbb Z$. Let $p$ be prime. Consider monic polynomials of degree $d$ over $\mathbb Z / p \mathbb Z$. Denote the number of such polynomials with degree less than $p$, which are not zero for all $x \in \mathbb Z / p \mathbb Z$, with $m_d$. Let $d \geq p$. Then I have to show that there are $m_p p^{d-p}$ of such polynomials which are not zero for all $x \in \mathbb Z / p \mathbb Z$. - I am confused. You appear to define $m_{d}$ as the number of monic polynomials of degree $d$ with coefficients in $\mathbb Z / p \mathbb Z$ which have no roots in $\mathbb Z / p \mathbb Z$. But I don't understand which polynomials you want to count then. –  Andreas Caranti Feb 8 '13 at 16:51 It seems you want to prove $m_d=m_pp^{d-p}$? If so, I think it would be clearer if you put it that way. It's confusing that you say "such polynomials" but then only repeat one of the conditions on these polynomials from the previous sentence, so it's not clear whether these are the same kind of polynomials as in the other sentence or not. Also it's not clear whether the polynomials should be non-zero for all $x$ or whether it shouldn't be the case that they're not zero for all $x$. –  joriki Feb 8 '13 at 18:23 Hint: The polynomials that are zero for all $x \in \mathbb F_p=\mathbb Z/p\mathbb Z$ are precisely those in the ideal generated by $x^p-x$.
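A brute-force check of the hint for small parameters (my own sketch, not from the thread): the monic degree-$d$ polynomials that vanish at every point of $\mathbb Z/p\mathbb Z$ should be exactly the monic multiples of $x^p - x$, of which there are $p^{d-p}$ when $d \geq p$.

```python
from itertools import product

p, d = 3, 4   # small example: monic degree-4 polynomials over Z/3Z

def vanishes_everywhere(lower_coeffs):
    # Polynomial x^d + c_{d-1} x^{d-1} + ... + c_0, evaluated at every x in Z/pZ.
    return all(
        (pow(x, d, p) + sum(c * pow(x, i, p) for i, c in enumerate(lower_coeffs))) % p == 0
        for x in range(p)
    )

count = sum(vanishes_everywhere(c) for c in product(range(p), repeat=d))
print(count, p ** (d - p))   # both are 3 here: (x^3 - x)(x + a) for a = 0, 1, 2
```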
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9642418622970581, "perplexity": 83.39380267877792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988650.53/warc/CC-MAIN-20150728002308-00027-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/84921-solve-theta.html
# Thread: Solve for theta

1. ## Solve for theta

25sin@ - 1.5cos@ = 20, where @ = theta

2. Originally Posted by jewd777: 25 sin@ - 1.5 cos@ = 20, what's @?

Hint: Apply the identity $A\cos\theta+B\sin\theta=\sqrt{A^2+B^2}\cos\!\left( \theta-\tan^{-1}\!\left(\tfrac{B}{A}\right)\right)$

3. There may be a more clever way, but by brute force and ignorance: rearrange to 25 sin(@) = 20 + 1.5 cos(@), square both sides and convert the LHS to a function of cos(@); you now have a quadratic in cos(@).
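For a numerical cross-check of either approach (my own sketch, not from the thread), one can write 25 sin@ − 1.5 cos@ as R sin(@ + φ) and solve directly:

```python
import math

A, B, C = -1.5, 25.0, 20.0          # A*cos(t) + B*sin(t) = C
R = math.hypot(A, B)                # sqrt(A^2 + B^2), about 25.04
phi = math.atan2(A, B)              # then B*sin(t) + A*cos(t) = R*sin(t + phi)

t = math.asin(C / R) - phi          # one solution; pi - asin(C/R) - phi gives the other
print(math.degrees(t))                        # about 56.4 degrees
print(25 * math.sin(t) - 1.5 * math.cos(t))   # about 20.0, as required
```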
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40012264251708984, "perplexity": 6538.189628329766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660158.61/warc/CC-MAIN-20160924173740-00001-ip-10-143-35-109.ec2.internal.warc.gz"}
http://clumath343s15s2.wikidot.com/clumath2015s2ch6s2-definitions
Definitions

A set of vectors is an orthogonal set if each pair of vectors in it is orthogonal to each other, such that

(1)
\begin{align}
\begin{Bmatrix} u_1,...,u_n \end{Bmatrix}\\
u_1 \cdot u_2 = 0, u_1 \cdot u_3 = 0, ... , u_{n-1} \cdot u_n = 0
\end{align}

In addition, if there is a subspace S formed by an orthogonal set, that set is linearly independent and is thus a basis for S. Therefore, if there exists a vector $\vec{y}$ in S, then $\vec{y}$ can be formed by the orthogonal set

(2)
\begin{align}
\vec{y} = c_1\vec{u_1}+c_2\vec{u_2}+...+c_n\vec{u_n}
\end{align}

We can therefore dot each term with $\vec{u_1}$, for instance, and since all other u's are perpendicular to $u_1$, all other terms drop out, leaving $\vec{y} \cdot \vec{u_1} = c_1 (\vec{u_1} \cdot \vec{u_1})$. This can be rearranged and $c_1$ can be found using

(3)
\begin{align}
\frac{\vec{y} \cdot \vec{u_1}}{\vec{u_1} \cdot \vec{u_1}} = c_1
\end{align}

This is true for all c's, so we now have the tools to find any vector in S as a linear combination of the orthogonal basis.

## Orthogonal Projection

Now, if we're looking to express $\vec{y}$ as a sum of two vectors, one which is a multiple of $\vec{u_1}$ and one perpendicular to $\vec{u_1}$, we must apply some of what we just learned. We know the scalar $c_1$ that creates the component of $\vec{y}$ along $\vec{u_1}$ is $\frac{\vec{y} \cdot \vec{u_1}}{\vec{u_1} \cdot \vec{u_1}}$. Scaling $\vec{u_1}$ by this value gives a vector we'll call $\hat{y}$, and we find

(4)
\begin{align}
\vec{y} = \hat{y} + \vec{v}
\end{align}

where $\vec{v}$ is the perpendicular vector such that

(5)
\begin{align}
\vec{y} - \hat{y} = \vec{v} \\
\vec{v} \cdot \hat{y} = 0
\end{align}

Orthonormal Sets

An orthonormal set is a set of orthogonal vectors whose magnitudes are all 1. To normalize a vector, just divide each component by the magnitude of the vector, so…

(6)
\begin{align}
\hat{x} = \frac{\vec{x}}{|\vec{x}|}
\end{align}

Then the matrix $U$ whose columns are the vectors of an orthonormal set has the property $U^TU = I$. Thus, when $U$ is square, $U^T = U^{-1}$.
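A small numerical illustration of equations (3)–(5) (my own addition): compute the coefficient $c_1$, form $\hat{y}$, and check that the residual is perpendicular to $\vec{u_1}$.

```python
import numpy as np

u = np.array([2.0, 1.0, -1.0])   # plays the role of u_1
y = np.array([1.0, 4.0, 2.0])

c = np.dot(y, u) / np.dot(u, u)  # equation (3)
y_hat = c * u                    # component of y along u
v = y - y_hat                    # equation (5): the perpendicular part

print(c)                         # 2/3 for these vectors
print(np.dot(v, u))              # ~0, confirming v is orthogonal to u
```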
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9963857531547546, "perplexity": 376.26605061619466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524679.39/warc/CC-MAIN-20190716160315-20190716182315-00008.warc.gz"}
http://code7700.com/aero_properties_of_the_atmosphere.htm
# Properties of the Atmosphere

## Aeronautical

#### Eddie sez:

"Any final questions?" the aero professor would ask at the end of each class. "Why is there air?" somebody would invariably ask. "Without air," he would say, "basketballs wouldn't bounce." The class would laugh and we would leave. We did this every single class. I can't explain it. So too it is with most pilots when asked "How do airplanes fly?" They answer "lift." Then comes, "what is lift?" If you want the answer to "What is Lift?" you need to understand the atmosphere and how we, as pilots, measure it. Everything here is from the references shown below, with a few comments in an alternate color.

Wright Flyer, first flight, from Wikimedia Commons.

### Static Pressure

[Dole, pg. 15] The static pressure of the air P is simply the weight per unit area of the air above the level under consideration. For instance, the weight of a column of air, with a cross sectional area of 1 ft² and extending upward from sea level through the atmosphere, is 2116 lb. The sea level static pressure is, therefore, 2116 psf (or 14.7 psi). Static pressure is reduced as altitude is increased because there is less air weight above. At 18,000 ft altitude the static pressure is about half that at sea level. Another commonly used measure of static pressure is inches of mercury. On a standard sea level day the air's static pressure will support a column of mercury (Hg) that is 29.92 in. high. Weather reports use a third method of measuring static pressure called millibars. In addition to these rather confusing systems, there are the metric measurements in use throughout most of the world. In aerodynamics it is convenient to use pressure ratios, rather than actual pressures. Thus the units of measurement are canceled out.

Pressure Ratio: δ = P / P0, where P0 is the sea level standard pressure (2116 psf or 29.92 in. Hg).

### Temperature

[Dole, pg. 16] The commonly used measures of temperature are the Fahrenheit (F) and Celsius (C) scales. Neither degrees F nor C is based upon absolute zero, so they cannot be used in calculations. Absolute temperature must be used instead. Absolute zero in the Fahrenheit system is -460° and in the Celsius system is -273°. The symbol for absolute temperature is T, and the symbol for sea level standard temperature is T0. Using temperature ratios rather than actual temperatures cancels out the units and simplifies things. The symbol for temperature ratio is θ (theta).

Temperature Ratio: θ = T / T0

### Density

[Dole, pg. 16] The density of the air is the most important property of the air in the study of aerodynamics. It is defined as the mass of the air per unit volume. The symbol for density is ρ (rho).

ρ = mass / unit volume

Standard sea level density ρ0 = 0.002377 slugs/ft³. Density decreases with altitude. At about 22,000 ft the density is about one-half of ρ0. It is desirable in aerodynamics to use density ratios, rather than the actual values of density. The symbol for density ratio is σ (sigma).

Density Ratio: σ = ρ / ρ0

Density is directly proportional to pressure and inversely proportional to absolute temperature, as shown by the universal gas law: ρ = P / (RT), and so

ρ / ρ0 = (P / RT) / (P0 / RT0)

where R is the gas constant. Therefore:

Density Ratio: σ = δ / θ

### Viscosity

[Dole, pg. 17] Viscosity can be simply defined as the internal friction of a fluid caused by molecular attraction which makes it resist a tendency to flow.
The viscosity of air is important when discussing airflow in the region very close to the surface of the aircraft. This region is called the boundary layer. The viscosity of gases is unlike that of liquids, in that with gases an increase in temperature causes an increase in viscosity. The coefficient of absolute viscosity has been assigned the symbol μ (mu). Since aerodynamics often involves considerations of both viscosity and density, a more usual form of viscosity measurement, known as kinematic viscosity, is often used. It is denoted by the symbol ν (nu).

Kinematic Viscosity: ν = μ / ρ

### International Standard Atmosphere (ISA)

Figure: Standard Altitude Table, from [Hurt, pg. 5].

As pilots we never deal directly with σ, ρ, θ, or just about any of the other Greek symbols. But we deal with the air mass, and much of what we do is constrained by what has become known as the "International Standard Atmosphere." Scientists, engineers, and people who write aviation textbooks can't seem to agree on what exactly constitutes a standard atmosphere or even where one layer ends and the next begins. Most agree that temperature is key, so that's where we will begin. Since we are international pilots, we'll use the ICAO's definitions.

### Layers of the Atmosphere

#### ICAO Standard Atmosphere Temperature Model

Figure: Temperature and Vertical Temperature Gradients Table, from ICAO Doc 7488/3, Table D.

Converting km to feet and °Kelvin to °Celsius, we come up with an ICAO altitude vs. temperature model we can use:

Altitude (ft) | Temperature (°C) | Gradient (°C per 1,000 ft)
--- | --- | ---
0 | 15 | -1.98
36,089 | -56.5 | 0
65,617 | -56.5 | +0.3
104,987 | — | —

Note:

• °C = °K - 273.15
• Feet = (km) 3280.8399

From this we surmise the troposphere starts at the surface and ends when the temperature no longer loses 2°C every 1,000 feet of altitude, at about 36,000' where the temperature will be -56.5°C. At that point the temperature remains constant for the remainder of most of our flight envelopes. Of course all this is based on that so-called "standard" day. Pilots should be concerned with the height of the tropopause because it determines where most of the weather is, where fuel economy increases plateau, and where aircraft components may be subject to limiting temperatures. Newer data suggests the tropopause is a bit lower than quoted by the advisory circular, typically around 17 km near the equator, above FL 550. At the poles, the tropopause dips as low as 8 km, around FL 260. For most mid-latitudes, we will be spending our time right at or just above the transition layer, the tropopause.

### Tropopause Height and ISA

Figure: Tropopause Height, from Geerts and Linacre.

• The height of the tropopause depends on the location, notably the latitude, as shown in the figure on the right (which shows annual mean conditions). It also depends on the season.
• At latitudes above 60°, the tropopause is less than 9-10 km above sea level; the lowest is less than 8 km high, above Antarctica and above Siberia and northern Canada in winter. The highest average tropopause is over the oceanic warm pool of the western equatorial Pacific, about 17.5 km high, and over Southeast Asia, during the summer monsoon, the tropopause occasionally peaks above 18 km. In other words, cold conditions lead to a lower tropopause, obviously because of less convection.
• Deep convection (thunderstorms) in the Intertropical Convergence Zone, or over mid-latitude continents in summer, continuously pushes the tropopause upwards and as such deepens the troposphere.
• On the other hand, colder regions have a lower tropopause, obviously because convective overturning is limited there, due to the negative radiation balance at the surface. In fact, convection is very rare in polar regions; most of the tropospheric mixing at middle and high latitudes is forced by frontal systems in which uplift is forced rather than spontaneous (convective). This explains the paradox that tropopause temperatures are lowest where the surface temperatures are highest. The tropopause is actually quite lower than the ICAO model predicts, especially at the poles. G450 pilots, for example, spend most of their cruise flight in the tropopause. From a stick and rudder perspective, once the temperature stops decreasing you are not necessarily improving fuel mileage with altitude. (Winds and other weather concerns should determine altitude selection once the lapse rate nears zero.) ### Altitude Measurement [FAA-H-8083-15, pg. 3-1] Flight instruments depend upon accurate sampling of the ambient atmospheric pressure to determine the height and speed of movement of the aircraft through the air, both horizontally and vertically. This pressure is sampled at two or more locations outside the aircraft by the pitot-static system. The pressure of the static, or still air, is measured at a flush port where the air is not disturbed. On some aircraft, this air is sampled by static ports on the side of the electrically heated pitot-static head, such as the one in [the figure shown]. Other aircraft pick up the static pressure through flush ports on the side of the fuselage or the vertical fin. These ports are in locations proven by flight tests to be in undisturbed air, and they are normally paired, one on either side of the aircraft. This dual location prevents lateral movement of the aircraft from giving erroneous static pressure indications. The areas around the static ports may be heated with electric heater elements to prevent ice forming over the port and blocking the entry of the static air. [FAA-H-8083-15, pg. 3-3] A sensitive altimeter is an aneroid barometer that measures the absolute pressure of the ambient air and displays it in terms of feet or meters above a selected pressure level. The sensitive element in a sensitive altimeter is a stack of evacuated, corrugated bronze aneroid capsules like those shown in [the figure]. The air pressure acting on these aneroids tries to compress them against their natural springiness, which tries to expand them. The result is that their thickness changes as the air pressure changes. Stacking several aneroids increases the dimension change as the pressure varies over the usable range of the instrument. ### Airspeed Measurement [FAA-H-8083-15, pg. 3-7] An airspeed indicator is a differential pressure gauge that measures the dynamic pressure of the air through which the aircraft is flying. Dynamic pressure is the difference in the ambient static air pressure and the total, or ram, pressure caused by the motion of the aircraft through the air. These two pressures are taken from the pitot-static system. The mechanism of the airspeed indicator in [the figure] consists of a thin, corrugated phosphor-bronze aneroid, or diaphragm, that receives its pressure from the pitot tube. The instrument case is sealed and connected to the static ports. As the pitot pressure increases, or the static pressure decreases, the diaphragm expands, and this dimensional change is measured by a rocking shaft and a set of gears that drives a pointer across the instrument dial. 
Most airspeed indicators are calibrated in knots, or nautical miles per hour; some instruments show statute miles per hour, and some instruments show both. There are many types of airspeed: indicated airspeed (IAS), calibrated airspeed (CAS), equivalent airspeed (EAS), and true airspeed (TAS).

• IAS. Indicated airspeed is shown on the dial of the instrument, uncorrected for instrument or system errors.
• CAS. Calibrated airspeed is the speed the aircraft is moving through the air, which is found by correcting IAS for instrument and position errors. The POH/AFM has a chart or graph to correct IAS for these errors and provide the correct CAS for the various flap and landing gear configurations.
• EAS. Equivalent airspeed is CAS corrected for compression of the air inside the pitot tube. Equivalent airspeed is the same as CAS in standard atmosphere at sea level. As the airspeed and pressure altitude increase, the CAS becomes higher than it should be and a correction for compression must be subtracted from the CAS.
• TAS. True airspeed is CAS corrected for nonstandard pressure and temperature. True airspeed and CAS are the same in standard atmosphere at sea level. But under nonstandard conditions, TAS is found by applying a correction for pressure altitude and temperature to the CAS.

### Indicated Airspeed (IAS)

An airspeed indicator is simply a pressure meter which uses static port pressure as a reference. In terms of Bernoulli, the meter reads q, which can be found by subtracting P from H, because H = P + q. The airspeed indicator reads this speed which is termed, plainly enough, indicated airspeed.

### Calibrated Airspeed (CAS)

Figure: Typical position error correction, from [Hurt, pg. 12].

The speed indicated by the pitot tube might be in error because of the placement of the pitot tube relative to the relative wind. This is known as position error, and when applicable, aircraft manuals will include a correction factor. Calibrated airspeed is indicated airspeed adjusted for position error. Some aircraft, the G450 for example, make these corrections for the pilot so that the speed indicated on cockpit instruments is actually calibrated airspeed.

### Equivalent Airspeed (EAS)

Figure: Compressibility correction, from [Hurt, pg. 12].

Another error occurs when the aircraft travels fast enough to compress the air entering the pitot tube. A compressibility correction chart is used by some aircraft to factor out this error.

### True Airspeed (TAS)

[Dole, pg. 25] True airspeed, coupled with ambient density ratio, produces the same dynamic pressure as EAS, coupled with standard sea level density ratio. That is

TAS² σ = EAS² σ0

But σ0 = 1, so:

TAS = EAS √(1/σ)

It can also be found using a standard table, shown here.

Figure: Density altitude correction, from [Hurt, pg. 13].

### Indicated to Calibrated to Equivalent to True Airspeed Conversion

As fledgling Air Force pilots or freshman aeronautical engineering students, we had to be able to convert from one type of airspeed to another. These days computers do it all for us. In the G450, for example, the pilot's primary flight instruments show CAS and TAS. In some aircraft, like the early T-37B, all you got was IAS and you couldn't fly outside the local area without TAS for risk of running out of fuel. For more about that, take a look at my Day One with Ice-T.
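To make the ratio definitions concrete, here is a small Python sketch (mine, not from the referenced manuals) that computes θ, δ, and σ for the ISA troposphere and applies TAS = EAS √(1/σ). It assumes the standard tropospheric lapse rate and pressure-ratio exponent, and is only valid below about 36,089 ft.

```python
import math

T0 = 288.15           # sea level standard temperature, K (15 deg C)
LAPSE = 0.0019812     # K per ft (about 1.98 deg C per 1,000 ft)
EXPONENT = 5.2559     # standard troposphere exponent, g / (L * R)

def isa_ratios(alt_ft):
    """Temperature, pressure and density ratios in the ISA troposphere."""
    theta = (T0 - LAPSE * alt_ft) / T0
    delta = theta ** EXPONENT        # pressure ratio
    sigma = delta / theta            # density ratio, sigma = delta / theta
    return theta, delta, sigma

def eas_to_tas(eas_kt, alt_ft):
    """TAS = EAS * sqrt(1 / sigma)."""
    _, _, sigma = isa_ratios(alt_ft)
    return eas_kt * math.sqrt(1.0 / sigma)

theta, delta, sigma = isa_ratios(18000)
print(round(delta, 3))                 # about 0.5: half of sea level pressure, as noted above
print(round(eas_to_tas(250, 18000)))   # about 331 kt from 250 kt EAS
```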
#### References:

14 CFR 25, Title 14: Aeronautics and Space, Federal Aviation Administration, Department of Transportation.

Advisory Circular 61-107B, Aircraft Operations at Altitudes Above 25,000 Feet Mean Sea Level or Mach Numbers Greater than .75, 3/29/13, U.S. Department of Transportation.

Air Training Command Manual 51-3, Aerodynamics for Pilots, 15 November 1963.

Connolly, Thomas F., Dommasch, Daniel O., and Sheryby, Sydney S., Airplane Aerodynamics, Pitman Publishing Corporation, New York, NY, 1951.

Davies, D. P., Handling the Big Jets, Civil Aviation Authority, Kingsway, London, 1985.

Dole, Charles E., Flight Theory and Aerodynamics, John Wiley & Sons, Inc., New York, NY, 1981.

FAA-H-8083-15, Instrument Flying Handbook, U.S. Department of Transportation, Flight Standards Service, 2001.

Gulfstream G450 Aircraft Operating Manual, Revision 35, April 30, 2013.

Hage, Robert E. and Perkins, Courtland D., Airplane Performance Stability and Control, John Wiley & Sons, Inc., 1949.

Hurt, H. H., Jr., Aerodynamics for Naval Aviators, Skyhorse Publishing, Inc., New York, NY, 2012.

ICAO Doc 7488/3, Manual of the ICAO Standard Atmosphere, International Civil Aviation Organization, 1993.

Technical Order 1T-38A-1, T-38A/B Flight Manual, USAF Series, 1 July 1978.

The Height of the Tropopause, B. Geerts and E. Linacre, University of Wyoming, Atmospheric Science, 11/97.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8358920812606812, "perplexity": 2869.2658323288256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251796127.92/warc/CC-MAIN-20200129102701-20200129132701-00463.warc.gz"}
https://socratic.org/questions/what-is-the-domain-of-f-x-5x-2-2x-1
# What is the domain of f(x)=5x^2+2x-1?

$f(x)$ is a polynomial, and a polynomial is defined for every real value of $x$. So the domain is $\left(- \infty , + \infty\right)$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8493256568908691, "perplexity": 1012.5331265589193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370490497.6/warc/CC-MAIN-20200328074047-20200328104047-00413.warc.gz"}
https://irzu.org/research/game-engines/c-move-ball-left-and-right-with-mouse-drag-unity-3d/
# c# – Move Ball left and right with mouse drag Unity 3D

You have a couple of options depending on what it is you want. Also, as @Hamed answered, when using physics you don't want to update the transform directly but add force using the Rigidbody.

Force Constant relative to where the Mouse is

    if (Input.GetMouseButton(0))
    {
        Vector2 force = Vector2.zero;

        // Get the Ball's current screen X position
        float ballX = Camera.main.WorldToScreenPoint(Ball.transform.position).x;

        // Check if the click was left or right of the Ball
        if (ballX > Input.mousePosition.x)
        {
            // Click was left of the Ball
            force = new Vector2(-1f, 0f);
        }
        else if (ballX < Input.mousePosition.x)
        {
            // Click was right of the Ball
            force = new Vector2(1f, 0f);
        }

        // Finally apply the force through the Ball's Rigidbody, as described above,
        // e.g. ballRigidbody.AddForce(force);
    }

Force relative to the mouse movement

    if (Input.GetMouseButton(0))
    {
        Vector2 force = Vector2.zero;
        float mouseDelta = Input.GetAxis("Mouse X");

        // Check if the Mouse is going left or right
        if (mouseDelta < 0f)
        {
            // Mouse is moving left
            force = new Vector2(-1f, 0f);
        }
        else if (mouseDelta > 0f)
        {
            // Mouse is moving right
            force = new Vector2(1f, 0f);
        }

        // Scale the force by how fast the mouse is moving
        force = Mathf.Abs(mouseDelta) * force;

        // Apply the force through the Ball's Rigidbody, as described above,
        // e.g. ballRigidbody.AddForce(force);
    }
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6844943761825562, "perplexity": 13541.164147536972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00549.warc.gz"}
https://mathematica.stackexchange.com/questions/89637/simplifying-expressions-for-findminimum
# Simplifying Expressions for FindMinimum I am currently trying to apply the FindMinimum function to find the lowest energy of an expression. My current expression as of now is relatively complicated. It is the summation of 48 similar terms, and after simplifying one of these terms, I get the expression ((Cot[ϕ[1]] Cot[ϕ[6]/2] - Cot[ϕ[6]] Csc[ϕ[1]] - Csc[ϕ[1]] Csc[ϕ[6]] - I Cos[θ[6]] Sin[θ[1]] + Cos[θ[1]] (Cos[θ[6]] + I Sin[θ[6]]) + Sin[θ[1]] Sin[θ[6]]) (Cot[ϕ[5]] Cot[ϕ[6]/2] - Cot[ϕ[6]] Csc[ϕ[5]] - Csc[ϕ[5]] Csc[ϕ[6]] + I Cos[θ[6]] Sin[θ[5]] + Cos[θ[5]] (Cos[θ[6]] - I Sin[θ[6]]) + Sin[θ[5]] Sin[θ[6]]) (-Cot[ϕ[2]] Csc[ϕ[1]] + Csc[ϕ[1]] Csc[ϕ[2]] + I Cos[θ[2]] Sin[θ[1]] + Cos[θ[1]] (Cos[θ[2]] - I Sin[θ[2]]) + Sin[θ[1]] Sin[θ[2]] - Cot[ϕ[1]] Tan[ϕ[2]/2]) (-Cot[ϕ[3]] Csc[ϕ[2]] + Csc[ϕ[2]] Csc[ϕ[3]] + I Cos[θ[3]] Sin[θ[2]] + Cos[θ[2]] (Cos[θ[3]] - I Sin[θ[3]]) + Sin[θ[2]] Sin[θ[3]] - Cot[ϕ[2]] Tan[ϕ[3]/2]) (-Cot[ϕ[4]] Csc[ϕ[ 3]] + Csc[ϕ[3]] Csc[ϕ[4]] + I Cos[θ[4]] Sin[θ[3]] + Cos[θ[3]] (Cos[θ[4]] - I Sin[θ[4]]) + Sin[θ[3]] Sin[θ[4]] - Cot[ϕ[3]] Tan[ϕ[4]/2]) (-Cot[ϕ[5]] Csc[ϕ[4]] + Csc[ϕ[4]] Csc[ϕ[5]] + I Cos[θ[5]] Sin[θ[4]] + Cos[θ[4]] (Cos[θ[5]] - I Sin[θ[5]]) + Sin[θ[4]] Sin[θ[5]] - Cot[ϕ[4]] Tan[ϕ[5]/2]))/(√((1 + Abs[Cot[ϕ[6]/2] (Cos[θ[6]] - I Sin[θ[6]])]^2) (1 + Abs[Cot[ϕ[6]/2] (Cos[θ[6]] + I Sin[θ[6]])]^2) (1 + Abs[(-Cos[θ[1]] + I Sin[θ[1]]) Tan[ϕ[1]/2]]^2) (1 + Abs[(Cos[θ[1]] + I Sin[θ[1]]) Tan[ϕ[1]/2]]^2) (1 + Abs[(-Cos[θ[2]] + I Sin[θ[2]]) Tan[ϕ[2]/2]]^2) (1 + Abs[(Cos[θ[2]] + I Sin[θ[2]]) Tan[ϕ[2]/2]]^2) (1 + Abs[(-Cos[θ[3]] + I Sin[θ[3]]) Tan[ϕ[3]/2]]^2) (1 + Abs[(Cos[θ[3]] + I Sin[θ[3]]) Tan[ϕ[3]/2]]^2) (1 + Abs[(-Cos[θ[4]] + I Sin[θ[4]]) Tan[ϕ[4]/2]]^2) (1 + Abs[(Cos[θ[4]] + I Sin[θ[4]]) Tan[ϕ[4]/2]]^2) (1 + Abs[(-Cos[θ[5]] + I Sin[θ[5]]) Tan[ϕ[5]/2]]^2) (1 + Abs[(Cos[θ[5]] + I Sin[θ[5]]) Tan[ϕ[5]/2]]^2))) where I have included in the input code for it in Mathematica. The $\phi[i]$ and $\theta[i]$ are simply variables and the index $i$ ranges in $\{1..6\}$. I obtained an analogous version of this expression by simplifying with the assumptions that $0 \leq \phi[i] \leq 2*\pi,\quad 0 \leq \theta[i] \leq \pi$, however this took longer, and the expression didn't seem that much simpler. My question is that since I'm trying to apply the FindMinimum function to a summation over 48 expressions, each of which are analogous to the expression above, should I bother simplifying the sum before applying the FindMinimum function to it? Are there any other things I could do to ease the expression for FindMinimum? In addition, I know the minimum exists for the summation, and I'm trying to simultaneously find the minimum of this plus a much simpler expression. Hence, I was planning to use the built-in "goal programming" https://reference.wolfram.com/language/tutorial/ConstrainedOptimizationLocalNumerical.html with the added constraints of the intervals on the variables as mentioned above. If it helps, I could provide the expression for the first term, before simplification to the above, purely in terms of $\theta$ and $\phi$. The above mess of trigonometric functions comes from transforming those coordinates to euclidean coordinates. As a note, one must fix values for $\theta$ and $\phi$ for one index $i$, before performing FindMinimum, so that one does not get infinite configurations of the same value. Thanks! 
• It might be profitable here to use the Weierstrass substitution, so that you end up with the optimization of an algebraic function, which can be less expensive than maintaining the trigonometric formulation. – J. M.'s ennui Jul 31 '15 at 19:41 • Welcome to Mathematica.SE! I hope you will become a regular contributor. To get started, 1) take the introductory Tour now, 2) when you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge, 3) remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign, and 4) give help too, by answering questions in your areas of expertise. – bbgodfrey Jul 31 '15 at 19:43 • FullSimplify[Abs[Cot[ϕ[6]/2] (Cos[θ[6]] - I Sin[θ[6]])], θ[6] ∈ Reals] yields Abs[Cot[ϕ[6]/2]], which may be useful. – bbgodfrey Aug 1 '15 at 15:47 This expression, designated exp for convenience, can be simplified substantially as follows. num = Map[FullSimplify[#] &, Numerator[exp]]; Map[FullSimplify[#, θ[_] ∈ Reals && ϕ[_] ∈ Reals] &, Denominator[exp] /. Abs[z_]^2 :> FullSimplify[Abs[z]^2, θ[_] ∈ Reals && ϕ[_] ∈ Reals]; /. Abs[z_]^2 :> z^2] den = Map[FullSimplify[#, θ[_] ∈ Reals && ϕ[_] ∈ Reals] &, Thread[%, Times]] /. Abs[z_^2] :> z^2; num/den (* Cos[ϕ[1]/2]^2 Cos[ϕ[2]/2]^2 Cos[ϕ[3]/2]^2 Cos[ϕ[4]/2]^2 Cos[ϕ[5]/2]^2 Sin[ϕ[6]/2]^2 (E^(-I (θ[1] - θ[6])) - Cot[ϕ[6]/2] Tan[ϕ[1]/2]) (E^( I (θ[1] - θ[2])) + Tan[ϕ[1]/2] Tan[ϕ[2]/2]) (E^( I (θ[2] - θ[3])) + Tan[ϕ[2]/2] Tan[ϕ[3]/2]) (E^( I (θ[3] - θ[4])) + Tan[ϕ[3]/2] Tan[ϕ[4]/2]) (E^( I (θ[5] - θ[6])) - Cot[ϕ[6]/2] Tan[ϕ[5]/2]) (E^( I (θ[4] - θ[5])) + Tan[ϕ[4]/2] Tan[ϕ[5]/2]) *) Still lengthy, but not nearly as much as before. Depending on the form of the other 47 expressions, it may be possible to achieve further simplifications when combining them. The Weierstrass substitution, as suggested by Guesswhoitis, will convert this expression into a polynomial, which should be easier to work with. • Very nice! I didn't realize that you could apply a rule to a result that is not printed to the output because of the semi-colon but I can see that it works. – Jack LaVigne Aug 6 '15 at 21:25
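For reference, the Weierstrass (tangent half-angle) substitution mentioned in the comments and at the end of the answer replaces each angle by a rational expression in $t = \tan(\theta/2)$ (my summary, not from the thread):

```latex
t \;=\; \tan\frac{\theta}{2},
\qquad
\sin\theta \;=\; \frac{2t}{1+t^{2}},
\qquad
\cos\theta \;=\; \frac{1-t^{2}}{1+t^{2}} .
```

Applying this to each $\theta[i]$ and $\phi[i]$ turns the trigonometric objective into an algebraic (rational) function of the new variables, which can be cheaper for FindMinimum to evaluate and differentiate.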
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5214586853981018, "perplexity": 5780.364341454009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00008.warc.gz"}
https://mathoverflow.net/questions/392431/is-pure-mathematics-useful-outside-of-mathematics-itself
# Is pure mathematics useful outside of mathematics itself? [closed]

From time to time Mathoverflow allows soft questions because they are arguably best answered by active mathematicians and they can benefit other mathematicians/PhD students/math undergraduates. I think this is such a question.

I'm a mathematics student planning to enroll in a good math PhD program this Fall. I have always been extremely disciplined in math and my goal has always been to pursue a math PhD. However, I've had the opportunity to work in computer science, and this has caused some doubts about the significance of my future work in mathematics. I imagine such doubts are not unique to myself and that the best place to ask is here, from people who've been through a PhD themselves, who are wiser, and who may possibly have had these same thoughts. (I hope it is clear I am asking this out of good nature and that this is not dismissed as a cynical thing to ask.)

My main question: Is pure mathematics useful, specifically, outside of mathematics itself?

Instead of giving a definition of "useful," perhaps I can share some doubts I have about the significance of pure mathematics research.

1. It seems to me that in all honesty, pure mathematics does not immediately benefit the population at large in a direct and obvious way. At best, benefits are usually theoretical (e.g., "These methods could...").
2. I think that very, very few people actually read and care about the average published pure mathematics paper. I think it's because math papers are hard and it's not clear that they are interesting or useful to math as a whole or to the future of humanity. There are very obvious exceptions, for example, for papers like Fermat's Last Theorem, which are arguably achievements for humanity. But most papers are objectively not of this level of significance and may not always contribute to major problems.
3. It seems that the only reason we, as a population, care about mathematics is because of the "cool" open problems which are simple to understand but difficult to prove. But this accounts for only a very small portion of active and successful mathematical work (since math papers don't always try to solve such problems because they're very hard). So doesn't this imply that my work as a future research mathematician is actually not useful for the future of humanity?
4. It seems that pure mathematics was originally created to solve practical and interesting problems, and that as we turned to use abstraction as a tool to solve things (because abstraction is a very useful problem solving tool), we have arrived many years later at nested layers of subproblems of subproblems, whose depth is so deep that the problems of these areas are hard to understand and are not obviously useful for the world or for anything outside of that area of mathematics itself. It seems that mathematics is a science that studies itself, and so at a certain point, it does not have an immediate practical use outside of itself.

I can't be the only math person to have ever had these thoughts. As a hardcore pure math person it almost feels like a sin to have such doubts (not literally of course). I would very much like to be wrong, to learn from anyone's objections, and to do my PhD as I planned (although I obviously can't enroll with these doubts and will just continue working in CS).

This leads to my secondary questions: Have any mathematicians ever had these thoughts? How did they reconcile these thoughts with their career choice?
• There was a paper posted to the arXiv today with the exact title "Is math useful?": arxiv.org/abs/2105.03843 May 11 at 1:55
• Short answer: if these are your thoughts about the subject you should NOT try to enroll in a math PhD program, for several reasons, the main ones being that you are not feeling passionate about the subject and that you might be taking the position of someone who might consider the subject more important and transcendental for his/her life. Speaking about maximizing happiness in our sad world, the option of you enrolling seems a bad choice for you and potentially for others. So I would personally aim you toward some other fields closer to "reality". I am not trying to be rude, but direct and clear. May 11 at 3:33
• From Hardy's A Mathematician's Apology: "One rather curious conclusion emerges, that pure mathematics is on the whole distinctly more useful than applied. A pure mathematician seems to have the advantage on the practical as well as on the aesthetic side. For what is useful above all is technique, and mathematical technique is taught mainly through pure mathematics." May 11 at 3:56
• I think that very, very few people actually read and care about the average published pure mathematics paper. - I think the same statement applies equally to any field of science or engineering also. May 11 at 21:10
• @Hvjurthuk What a strange view, that being able to be critical of your own subject somehow makes you less suitable and passionate about it, as if math were some kind of fanboy factory. In my academic tradition (northern Europe) it is rather seen as one of the distinguishing features between the students (who recently "fell in love" with the subject) and the more mature performers of the subject (who are now able to see how it fits into a larger picture). I am not trying to be rude, but direct and clear. May 12 at 10:50

This is not really an answer to the question as asked, but I believe it's important and relevant to your problem, and too long for a comment. I will not here express any opinion about the validity or importance of your doubts, or share any of my own beliefs about them. Instead, the point I want to make at the moment is that, in my opinion, it is possible to pursue a PhD and a career in mathematics, and believe that one is benefiting the world thereby, while also believing that one's own research in pure mathematics is completely useless (regardless of the validity, or lack thereof, of the latter belief).

The point is that the majority of mathematicians in academia do not spend all of their time doing research; most of them also spend time teaching undergraduates. If they work at a liberal arts college, they may spend more time teaching than doing research. I believe it's inarguable that mathematics education is important for students, and those of us who teach them are benefiting the world.

One might say, then, why do research at all? Aside from the obvious answer that we enjoy it, I believe our research benefits our students as well (and many universities also believe this). This is particularly true when we are able to create opportunities for students to research with us (an experience from which they can learn a lot, independently of the value or lack thereof of the research they do -- like perseverance, problem-solving skills, etc.).
It also makes us better teachers, by keeping us excited about the subject, giving us new ideas for ways to improve our classes, keeping us connected to a wider community of mathematicians, and giving us ways to convey our excitement about mathematics to our students. Of course, this varies somewhat by university. At some research-focused universities, teaching undergraduates is regarded as something to get out of the way as quickly as possible to focus on research. Someone who approaches teaching with that attitude is probably not benefiting the world by their teaching very much. But there are plenty of colleges and universities where teaching is valued and supported by the administration and the community, and if you are worried about the possible uselessness of your research I would recommend that, in addition to reassuring yourself about the usefulness of pure mathematics, you put some effort into becoming a good teacher, and consider jobs at more teaching-focused schools. Adding beauty and joy to the world, contributing to humanity’s understanding: these are direct and immediate benefits from pure mathematics, even if they are not fiscal. • This is the best answer. I do not understand the downvotes. May 11 at 5:07 • It may be that this kind of question stems from the inadvertent power of mathematics in other subjects. Nobody would seriously ask about the same kind of "usefulness" in art and music. If someone could intrinsically make an accordion into a weapon of mass destruction, maybe people would start asking about the "direct usefulness" of music... May 12 at 14:42 • @Jon Bannon: listening to an accordion is very different from digesting mathematical papers. Maybe modern art and modern poems would be a better analogy, because not every layperson will intuitively enjoy them on first sight. I share your viewpoint BTW. May 12 at 21:46 • Well said. "Useful" often means "useful for something else" but there can't be an infinite regress; we must eventually bottom out at something that has intrinsic value. If I take my family on a vacation, am I engaged in useless activity because they are not paying me and I am bringing joy to only a few people? No. If bringing joy to my family is useless then what exactly is the point of all that other so-called "useful" activity? Bringing joy to my extended mathematical family is analogous. May 13 at 1:09 • @user2520938 Government funding is a separate question IMO. But we can ask a similar question of the homeless shelter: How does it benefit anyone other than the 50 or so people who use it? For many years, I have spent much of my own time, effort, and money on a variety of international and domestic nonprofits (including homeless shelters), and I have learned that I can help only a few people (and sometimes what I thought was helping was actually hurting). Per person per hour, I judge my efforts in pure math to be on par with my efforts in homeless shelters as far as benefit to others goes. May 13 at 22:00 Why do you want current work in pure math to "immediately benefit the population at large in a direct and obvious way"? Applications of pure math might take decades or centuries. As much as you may wish this process could be sped up, that's not how it typically happens, and when it does happen the underlying math might be building on concepts in pure math that were developed for no real-world purpose a long time ago. See the following pages: Real-world applications of mathematics, by arxiv subject area? 
Recent Applications of Mathematics https://math.stackexchange.com/questions/280530/can-you-provide-me-historical-examples-of-pure-mathematics-becoming-useful Even in experimental sciences, where you might think people are trying to do things to help society now, research is often done just for the purpose of general understanding of that subject area rather than for an immediate and direct application. Yet decades later the ideas can become useful. See this video about the scope of research needed that led to the covid vaccines: https://www.youtube.com/watch?v=XPeeCyJReZw. And there is a famous real-world use of relativistic calculations for GPS: http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps.html. Einstein was not trying to help people navigate their vehicles when he was contemplating relations between space and time. How does one define "pure math"? One could even argue that the answer to the title question must be No, on the grounds that once some part of mathematics finds a use "outside of mathematics itself" then by definition it is no longer pure math . . . • It's true, there is a dynamic here similar to "Natural Philosophy"'s rejection of "science", thereby being left behind with only the "useless" bits, namely [sic] "Philosophy"... :) May 13 at 19:24 Yes many people have had these thoughts, including myself. I do not think that this means that you are insufficiently passionate about math. I do think it is important to decouple your general question: "Is pure math research useful?" from your specific career decision. I am biased and not really qualified to answer the general question, but my impression is that the answer is yes: our society invests very little into pure math research (relative to other areas) and math as a whole is highly interconnected, so even the purest research areas are often only a few degrees away from more useful ones. And there is a vast ecosystem of mathematical sciences in engineering, applied math, statistics, CS, and operations research departments which interact with pure math in various ways. On an individual level, though, it is true that most papers go unread and only have a small impact. And most people who get pure math PhD's (even from elite institutions) do not work primarily as researchers--most of the productivity of an individual mathematician is through teaching and communicating mathematics. If a precondition for you is that your main impact on the world to be through research, you should probably not do a pure math PhD. For how/why pure mathematicians handle this situation: one answer is that we are a bit unusual (and sometimes slightly selfish) in that we tend to care deeply about our subject, and not so much about others' valuations of us. Another answer is that for some people pure math is their "comparative advantage"-- they have a special talent and if they were in a different subject or profession, they would not be nearly as happy or effective. A final answer is that as you learn more, subjects that appear cold and esoteric suddenly transform: they become rich and full of profound, challenging and fundamental questions. And as you acquire mathematical fluency, you get to see more of the connections between different areas. On the other hand, you may find that as you get older (this is my own experience) you have more of a desire to connect directly with the rest of society. It is not unusual for older researchers to transition towards more "applied" areas. 
To conclude, I think that it is good that you are asking yourself these questions before making a career choice, especially before committing to the long process of earning a PhD. You need to weigh your personal values, strengths, and desires in order to make a good decision.

1. It seems to me that in all honesty, pure mathematics does not immediately benefit the population at large in a direct and obvious way. At best benefits are usually theoretical (e.g., "These methods could...").

Totally valid and arguably a fact; the trickle-down effect for pure mathematics research (when it exists at all) takes decades optimistically to reach the open mouths of the populace, but this could be argued as a virtue. In a branch like CS you could realistically see your research being 'used in the real world' in your lifetime, but how it's used won't necessarily be decided by you (a la Oppenheimer, Good Will Hunting scene, etc.).

2. I think that very, very few people actually read and care about the average published pure mathematics paper. I think it's because math papers are hard and it's not clear that they are interesting or useful to math as a whole or to the future of humanity. There are very obvious exceptions, for example, for papers like Fermat's Last Theorem, which are arguably achievements for humanity. But most papers are objectively not of this level of significance and may not always contribute to major problems.

The first part here is again pretty much a fact; mathematicians are a small subset of all humans, and even within that subset the average algebraic geometer won't care about the average paper in fields that are (arguably, relatively) closely related like category theory, and vice versa. Where I vehemently disagree is why people don't care; there's just too much to even be aware of all of it, let alone care deeply for every result. This isn't anyone's fault, but rather an apparent feature of knowledge -- there is a lot of it, too much for any one person. People generally only care about a thing if it is relevant to something else they already care about, with the most primitive thing being ourselves/our loved ones. Some math/CS/science etc. is 'cared for' because people find it relevant to the things they care about, but most of it is too abstract for them to tell; even for other mathematicians there is often accidental interdisciplinary blindness to potentially relevant results between fields, simply because we can't tell or are unaware of what's happening over there.

3. It seems that the only reason we, as a population, care about mathematics, is because of the "cool" open problems which are simple to understand but difficult to prove. But this accounts for only a very small portion of active and successful mathematical work (since math papers don't always try to solve such problems because they're very hard). So doesn't this imply that my work as a future research mathematician is actually not useful for the future of humanity?

I forcefully disagree here; I intend to spend my whole life doing math, and I haven't ever encountered an 'open problem' in the classical sense that I really care about more than any other theorem or lemma (I love them all equally). I care about mathematics because it allows me to think about things precisely, and there are things I want to think precisely about (black holes, origin of the universe, multiverse, etc.) which I have no idea how to think about at all without mathematics, except as armchair philosophy.
Even with mathematics it can be a years-long slog to get the thoughts correctly written out, but clarifying thought and making it precise is the primary purpose of mathematics in my opinion and a very important one for the species in general.

4. It seems that pure mathematics was originally created to solve practical and interesting problems, and that as we turned to use abstraction as a tool to solve things (because abstraction is a very useful problem solving tool), we have arrived many years later at nested layers of subproblems of subproblems, whose depth is so deep that such problems of these areas are hard to understand and are not obviously useful for the world or for anything outside of that area of mathematics itself. It seems that mathematics is a science that studies itself, and so at a certain point, it does not have an immediate practical use outside of itself.

[citation needed] But really, where did you get this impression? This sounds like a critique made by someone in a related branch who doesn't have a great opinion of pure mathematics, but as mentioned before what I believe math strives to study is pure, precise thought with no extra fat attached to it. As a Platonist I believe that an ideal realm of concepts underlies our reality and every possible reality in the multiverse, so I would go further and say that we're attempting to systematically explore and map out the abstract realm underpinning all possible realities, but this is getting a bit far afield for MO.

Alexander Grothendieck, 1966 Fields Medalist and considered by some as the greatest mathematician of the 20th century (see his Wikipedia page, which cites this obituary), had similar doubts. He certainly extended them to the point where he considered that scientific research as a whole "does not immediately benefit the population at large in a direct and obvious way", to say the least. He finally discontinued his participation in the global research effort (although he may have continued to work outside the system, for his own pleasure).

We cannot tell where our work is going, but the huge payoff that applications of mathematics have had makes it worthwhile, even if almost all of our work is going nowhere, to keep digging. Mathematics becomes more unpredictable as it arises in more applications. Looking back at the history of mathematics, it would have been impossible to predict which directions of research would turn out to be the most useful. In the nineteenth century, one might have bet on Ceva's theorem, as a basic fact about the geometry of our world, and a fact which is not obviously true, but easy to remember, providing a unification of many basic results in geometry. Few would have bet on Boole's research into the nature of human thought. No one could see how to use it, and it sat outside of the mainstream of mathematics, with no connections to previous work. It did not unify. Today Boole's work plays such a foundational role in our world that electric tea kettles use 0 and 1 to mean "off" and "on". By studying human thought, Boole changed how all humans think about almost everything. I have often taught Ceva's theorem, but I have no idea how it could fit into applied mathematics. Apparently Ceva's theorem is related to some integrable systems in mathematical physics, so maybe ...

Is pure mathematics useful outside of mathematics? The other answers show that yes, it can be very useful, either indirectly or directly. There is also pure mathematics which is only useful in mathematics.
And then I suppose there is plenty of pure math which is not very useful even in mathematics.

Why do research in pure mathematics? Well, probably the real answer for most pure mathematicians is because it's fun and because they like it so much. That is maybe not a satisfactory answer for grant applications, or when a non-mathematician asks you this question at a party. So maybe you should also ask: why should people pay you to do pure mathematics? You could justify this by the potential applications, e.g. the very clichéd answer "they said number theory was useless too, but now we all rely on it to stay secure on the internet!". But that is not a very good answer, since it is very unlikely that your research in pure maths is the foundation for something as important as public key cryptography. Another reason might be that you can teach students pure mathematics, who can then do a PhD in pure mathematics and teach more pure mathematics to people who want to study pure mathematics. Maybe you talk about how you are "adding beauty and joy to the world" as in one of the answers. In that case I guess the reason for your funding is the same reason we fund say literary criticism or some niche art. Although to be honest for most pure mathematicians their audience will be very small, but also to be honest the same is true for most artists as well. Actually, you don't really have to justify your pure maths study/research to anyone. One secret is that if you find a way to do a PhD in pure maths and later become a postdoc and later a professor, they just let you do it.

I do not know if they are "useful", but couldn't we consider that computers, and the software they run, are very concrete realizations of "pure" mathematics? They are filled with concepts and results from number theory, functional analysis, algebra, geometry, graph theory, probability, and so on. They would certainly not exist without many fundamental works conducted in these areas (and conversely).

If you ask the question in more personal terms: If I study pure mathematics, will that be useful to me anywhere outside of mathematics itself? ...then the answer is likely yes. Even with your going to "a good math PhD program", the odds are that you will be out of academia by 2030 or 2035. And by then:

• you will have learned more math (including math you learned only partially in college), and will probably use some of that math professionally;
• you will have learned some programming skills, and will probably use the skills to devise algorithms for more practical problems;
• you will have learned how to present math in talks and present yourself in interviews, and will probably adapt that for other presentations;
• you will have learned techniques for dealing with your advisor, your peers, your source of funding, and will probably use some of those techniques with a spouse or friends or colleagues;
• you will have learned techniques for managing your time and your projects, and will probably use some of those techniques outside grad school too.

So you can evaluate the utility of grad school in those terms -- or evaluate the utility of other ways that you might spend the next years of your life.

The question you ask is more common among would-be undergraduates of math who are trying to decide whether to do the special honor (all 'pure' math bar a mandatory C coding course) or the general honor course (math plus modules in economics, physics, biology, genetics, statistics and computing) in mathematics.
In such cases, doubtful students would meet their tutor alone or in a group where this was a shared concern and - hopefully - get convincing reassurance flush with many examples. Perhaps you didn't have this experience.

Hardly a generation (30 years) passes before a new "abstract" field of mathematics finds a crucial application in the real world. Check out the history of various branches of math in the 20th century for yourself. Now, the readiness and extent of application depend a lot on the local industrial culture, however. For example, Chinese researchers working on metal-forging problems do not hesitate to deploy topologists to gain insights on the best forging sequences. In the west, such things were traditionally left to the black art (practical experience) of the actual forgers on the shop floor. Be assured that this will soon change, however, as the benefits of math involvement become vividly obvious in everyday goods!

Of course, the extent to which the application of a 'pure' math concept will "jump out" into the mind of the person trying to solve the problem depends on his/her familiarity with both the math concept and its area of possible deployment: pure math people will generally not be familiar with the latter, applied math/scientists/engineers may not be familiar with the former. So having purists available for consultation by the applied math staff on, say, a team modelling epidemic spread might be sensible.

We don't know clearly how involved you want to be with the real world or what aspect of this involvement you feel is vital for your wholehearted commitment to further studies here. Have you considered not doing your PhD straight through after your primary degree? Any college course can be exhausting on a person's morale - there are few exceptions. Maybe doing something in the applied math arena but away from the campus might be personally beneficial at the present time as you pick through your own thoughts on the matter.

• +1 for the suggestion to apply math somewhere before the PhD May 12 at 17:52

It's kind of funny that when I was a young silly undergrad my view on Math was completely opposite to yours: in my immature eyes the best Math to get involved with would be the most impractical one! I wasn't aware at the time of the Steve Jobs quote about making a dent in the Universe, but that was basically the idea. To go somewhere where no man has gone before. To study the Forms, not the Shadows. To discover something beautiful outside the mundane realm of practical applications.

Looking back, I was wrong. The most fruitful pure Math is not perfectly pure; it grows around applications like a pearl grows around a speck of dust. This applies to the most pure abstract things as well. People try to solve algebraic equations; that leads to algebraic varieties, and that leads to schemes. You don't start with schemes. Maybe that's the reason that pure Math tends to find applications eventually, sometimes decades after its development. Maybe that happens because pure Math ultimately grew from the real World, maybe that's just magic, but even the most impractical abstract subjects somehow find their applications.

There are certainly examples of mathematical topics which were studied for their own interest and considered to have no practical applications but later turned out to have important applications. I don't know that the Greeks had any application for the theory of conic sections. A couple of thousand years later they turned out to be essential to understanding planetary motion.
Maybe small primes are applicable, but primes of hundreds of digits seem far from applicable. Yet they are integral to the widely used RSA encryption scheme. So do what you love and in 2000 years someone will need it!

• Optics was one reason that the Greeks used conic sections, e.g. in Diocles's text On Burning Mirrors. May 11 at 19:25
• That is a good point. The theory of conic sections was at least 100 years old at that time. I don't think it was considered for applications. Perhaps doubling the cube (constructing $\sqrt[3]{2}$), although that isn't especially applied. May 12 at 22:05
• "Perhaps doubling the cube ... although that isn't especially applied." @Aaron, try telling that to the citizens of ancient Delos! proofwiki.org/wiki/Doubling_the_Cube/Historical_Note May 14 at 12:01
• @GerryMyerson Good point. Though Plato says that the application desired was to pique the interest of the citizenry and shift it from conflict. A worthy goal. May 17 at 5:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46520674228668213, "perplexity": 759.1571644675618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362923.11/warc/CC-MAIN-20211204003045-20211204033045-00367.warc.gz"}
https://export.arxiv.org/list/cond-mat.soft/2002
# Soft Condensed Matter

## Authors and titles for cond-mat.soft in Feb 2020 (entries 1-25 of 194)

[1] Title: Molecular relaxations in supercooled liquid and glassy states of amorphous gambogic acid: dielectric spectroscopy, calorimetry and theoretical approach
Comments: 7 pages, 5 figures, accepted for publication on AIP Advances 2020
Subjects: Soft Condensed Matter (cond-mat.soft); Materials Science (cond-mat.mtrl-sci); Biological Physics (physics.bio-ph); Chemical Physics (physics.chem-ph); Medical Physics (physics.med-ph)
[2] Title: Hierarchical Jamming in Frictional Particle Assemblies
Subjects: Soft Condensed Matter (cond-mat.soft)
[3] Title: Aging of Thermoreversible Gel of Associating Polymers
Subjects: Soft Condensed Matter (cond-mat.soft)
[4] Title: Manipulating the Glass Transition in Nanoscale
Authors: D. Y. Sun, X. G. Gong
Subjects: Soft Condensed Matter (cond-mat.soft); Mesoscale and Nanoscale Physics (cond-mat.mes-hall)
[5] Title: Core-shell microgels via precipitation polymerization: computer simulations
Subjects: Soft Condensed Matter (cond-mat.soft)
[6] Title: Concentration dependence of diffusion-limited reaction rates and its consequences
Authors: Sumantra Sarkar
Comments: Accepted in Phys. Rev. X. Look for SI to the right.
Subjects: Soft Condensed Matter (cond-mat.soft); Statistical Mechanics (cond-mat.stat-mech); Biological Physics (physics.bio-ph); Chemical Physics (physics.chem-ph); Subcellular Processes (q-bio.SC)
[7] Title: Boids in a Loop: Self-Propelled particles within a Flexible Boundary
Subjects: Soft Condensed Matter (cond-mat.soft); Adaptation and Self-Organizing Systems (nlin.AO)
[8] Title: Spontaneous deformation and fission of oil droplets on an aqueous surfactant solution
Journal-ref: Phys. Rev. E 102, 042603 (2020)
Subjects: Soft Condensed Matter (cond-mat.soft)
[9] Title: Lindemann melting criterion in two dimensions
Authors: Sergey Khrapak
Comments: 6 pages, 3 figures; to be published in Physical Review Research
Journal-ref: Physical Review Research 2, 012040(R) (2020)
Subjects: Soft Condensed Matter (cond-mat.soft); Materials Science (cond-mat.mtrl-sci); Statistical Mechanics (cond-mat.stat-mech); Chemical Physics (physics.chem-ph); Plasma Physics (physics.plasm-ph)
[10] Title: Splitting droplet through coalescence of two different three-phase contact lines
Journal-ref: Soft Matter 15.30 (2019): 6055-6061
Subjects: Soft Condensed Matter (cond-mat.soft); Materials Science (cond-mat.mtrl-sci); Fluid Dynamics (physics.flu-dyn)
[11] Title: Topology Restricts Quasidegeneracy in Sheared Square Colloidal Ice
Comments: 4 pages, 4 figures, 1 page supplemental
Journal-ref: Phys. Rev. Lett. 124, 238003 (2020)
Subjects: Soft Condensed Matter (cond-mat.soft); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Statistical Mechanics (cond-mat.stat-mech)
[12] Title: Salt parameterization can drastically affect the results from classical atomistic simulations of water desalination by MoS$_2$ nanopores
Subjects: Soft Condensed Matter (cond-mat.soft); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Computational Physics (physics.comp-ph)
[13] Title: Geometry of Bend: Singular Lines and Defects in Twist-Bend Nematics
Journal-ref: Phys. Rev. Lett. 125, 047801 (2020)
Subjects: Soft Condensed Matter (cond-mat.soft)
[14] Title: Fast-freezing kinetics inside a droplet impacting on a cold surface
Subjects: Soft Condensed Matter (cond-mat.soft); Applied Physics (physics.app-ph); Fluid Dynamics (physics.flu-dyn)
[15] Title: Topological defects of dipole patchy particles on a spherical surface
Subjects: Soft Condensed Matter (cond-mat.soft)
[16] Title: Triangular lattice models for pattern formation by core-shell particles with different shell thicknesses
Subjects: Soft Condensed Matter (cond-mat.soft)
[17] Title: Elasticity of Jammed Packings of Sticky Disks
Journal-ref: Phys. Rev. Research 2, 032047 (2020)
Subjects: Soft Condensed Matter (cond-mat.soft); Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech)
[18] Title: Information and motility exchange in collectives of active particles
Journal-ref: Soft Matter, 2020, Advance Article
Subjects: Soft Condensed Matter (cond-mat.soft); Statistical Mechanics (cond-mat.stat-mech)
[19] Title: Edge Current and Pairing Order Transition in Chiral Bacterial Vortex
Comments: 6 pages, 5 figures, and supplemental information
Subjects: Soft Condensed Matter (cond-mat.soft); Statistical Mechanics (cond-mat.stat-mech); Biological Physics (physics.bio-ph)
[20] Title: Controlled release of entrapped nanoparticles from thermoresponsive hydrogels with tunable network characteristics
Journal-ref: Soft Matter, 2020
Subjects: Soft Condensed Matter (cond-mat.soft)
[21] Title: Gold Nanoparticles Passivated with Functionalized Alkylthiols: Simulations of Solvation in the Infinite Dilution Limit
Comments: Main paper + supplementary information
Subjects: Soft Condensed Matter (cond-mat.soft)
[22] Title: Knotty knits are tangles on tori
Subjects: Soft Condensed Matter (cond-mat.soft); General Topology (math.GN); History and Overview (math.HO)
[23] Title: Speeding up dynamics by tuning the non-commensurate size of rod-like particles in a smectic phase
Comments: To be published in Physical Review Letters
Subjects: Soft Condensed Matter (cond-mat.soft)
[24] Title: Dynamics of a particle moving in a two dimensional Lorentz lattice gas
Subjects: Soft Condensed Matter (cond-mat.soft); Statistical Mechanics (cond-mat.stat-mech)
[25] Title: Origin of the extremely high elasticity of bulk emulsions, stabilized by Yucca Schidigera saponins
Journal-ref: Food Chemistry, 2020
Subjects: Soft Condensed Matter (cond-mat.soft)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28117749094963074, "perplexity": 24045.620433204327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107883636.39/warc/CC-MAIN-20201024135444-20201024165444-00302.warc.gz"}
http://www.gbhatnagar.com/2018/11/
Friday, November 16, 2018

A bibasic Heine transformation formula

While studying chapter 1 of Andrews and Berndt's Lost Notebook, Part II, I stumbled upon a bibasic Heine's transformation. A special case is Heine's 1847 transformation. Other special cases include an identity of Ramanujan (c. 1919), and a 1966 transformation formula of Andrews. Eventually, I realized that it follows from a Fundamental Lemma given by Andrews in 1966. Still, I'm happy to have rediscovered it. Using this formula one can find many identities proximal to Ramanujan's own $_2\phi_1$ transformations. And of course, the multiple series extensions (some in this paper, and others appearing in another paper) are all new.

Here is a preprint. Here is a video of a talk I presented at the Alladi 60 Conference (March 17-21, 2016).

Update (November 10, 2018). The multi-variable version has been accepted for publication in the Ramanujan Journal. This has been made open access. It is now available online, even though the volume and page number have not been decided yet. The title is: Heine's method and $A_n$ to $A_m$ transformation formulas. Here is a reprint.

--

UPDATE (Feb 11, 2016). This has been published. Reference (perhaps to be modified later): A bibasic Heine transformation formula and Ramanujan's $_2\phi_1$ transformations, in Analytic Number Theory, Modular Forms and q-Hypergeometric Series, In honor of Krishna Alladi's 60th Birthday, University of Florida, Gainesville, Mar 2016, G. E. Andrews and F. G. Garvan (eds.), 99-122 (2017). The book is available here. The front matter from the Springer site.

--

UPDATE (June 16, 2016). The paper has been accepted to appear in: Proceedings of the Alladi 60 conference held in Gainesville, FL (Mar 2016), K. Alladi, G. E. Andrews and F. G. Garvan (eds.)
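For orientation, the 1847 transformation of Heine mentioned above is, in standard $q$-series notation (stated here from memory and only for context; valid for $|q|<1$, $|z|<1$, $|b|<1$):

$${}_2\phi_1(a,b;c;q,z) \;=\; \frac{(b;q)_\infty\,(az;q)_\infty}{(c;q)_\infty\,(z;q)_\infty}\; {}_2\phi_1\!\left(\frac{c}{b},\,z;\,az;\,q,\,b\right).$$

The bibasic transformation described in the post contains this identity as a special case.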
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8182755708694458, "perplexity": 2162.011622061692}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00068.warc.gz"}
http://mca.ignougroup.com/2017/04/answers-kripke-models-evaluating.html
# [Answers] Kripke Models - evaluating the meaning of $\Box\Box p$

Problem Detail: In Kripke models the evaluation of $x \vdash \Box p$ would be that every world reachable from $x$ satisfies $p$. But how would the truth of $\Box\Box p$ be evaluated in Kripke models?

#### Answered By : David Richerby

This is an unfortunate use of the word “reachable”, in that Kripke structures are graphs but “reachable” in a Kripke structure is not the same thing as “reachable” in a graph. Let us avoid this confusion by saying that a state $y$ is a successor of state $x$ if there is a (directed) edge $(x,y)$ in the structure.

Now, the semantics of modal logic says that $\Box\varphi$ is true at state $x$ if, and only if, $\varphi$ is true at every successor of $x$. Informally, $\Box\varphi$ means, “$\varphi$ is true everywhere I can get to from here in one step.” So, to understand the meaning of $\Box\Box p$, just substitute $\Box p$ for $\varphi$:

• $\Box p$ is true at every successor
• $p$ is true at every successor of every successor.

In other words, $p$ is true everywhere I can get in two steps. Note that this is not, in general, the same thing as $\Box p$, which means that $p$ is true after one step. Indeed, one can show that $(\Box \varphi)\rightarrow (\Box\Box\varphi)$ is a tautology in a particular Kripke frame if, and only if, the successor relation is transitive.

Note that, without assuming transitivity, basic modal logic with only $\Box$ and $\Diamond$ has no way of expressing “$\varphi$ is true everywhere that can be reached from here, in any number of steps” (i.e., the usual graph-theoretic meaning of “reachable”).
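To make the two-step reading concrete, here is a small self-contained Python sketch (the four states, edge relation and valuation of $p$ are invented for illustration, not taken from the question) that evaluates $\Box p$ and $\Box\Box p$ by recursing on successors:

```python
# Minimal Kripke-structure evaluator; states, edges and the valuation of p are made up.
W = {"w0", "w1", "w2", "w3"}                     # worlds/states
R = {("w0", "w1"), ("w1", "w2"), ("w2", "w3")}   # successor (accessibility) relation
V = {"p": {"w2", "w3"}}                          # p holds exactly at w2 and w3

def successors(x):
    return {y for (a, y) in R if a == x}

def holds_p(x):
    return x in V["p"]

def box(phi, x):
    # Box(phi) is true at x iff phi holds at every successor of x.
    return all(phi(y) for y in successors(x))

print(box(holds_p, "w1"))                    # True:  the only successor of w1 is w2
print(box(lambda y: box(holds_p, y), "w0"))  # True:  every two-step successor of w0 satisfies p
print(box(holds_p, "w0"))                    # False: w1, a one-step successor of w0, lacks p
```

The last two lines show $\Box\Box p$ and $\Box p$ coming apart at the same state, exactly as the answer explains.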
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.939094603061676, "perplexity": 438.65873326179786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527135.18/warc/CC-MAIN-20190419041415-20190419063415-00382.warc.gz"}
https://crypto.stackexchange.com/questions/62632/login-security-and-plaintext-of-a-password-stored-in-argon2i-to-derive-a-key-via
# Login security and plaintext of a password stored in Argon2i to derive a key via halite safe?

I'm using Symfony 4. I have set parameters on all my Argon2 stuff below so that it takes 1s per iteration. This website is supposed to encrypt HIPAA information. Basically I have a table like this:

username varchar(50),

In the password field I'm storing passwords with Argon2i. Users login and it verifies their password in the database (this is all done with Symfony login). After that users are presented with another screen to enter a passphrase. They enter the second password (this password is shared among all employees of the company). I have another table like:

key_pass varchar(255),
salt varchar(50)

The second password is used with password_verify against the value of key_pass, which is a password stored with Argon2i. If it's successful, then I use Halite (aka scrypt provided by libsodium in this scenario) to derive a key using the value of salt. Basically the whole thing looks like this:

if (password_verify($view->getPassword(), $encryption->getKeyPass())) {
    $encryption_key = EncryptionKey::deriveFromPassword(
        $view->getPassword(),
        $encryption->getSalt()
    );
}

This key would be used to decrypt the rest of the data in the db. So three questions:

1) Is this form of double authentication even helping security? I mean basically in the end if the attacker gets the second passphrase they don't even need the first login (assuming they have direct access to the data). I suppose it just makes the web site more secure since they can't even attack the second passphrase without having the first via the web site?

2) Is using the second passphrase to derive the key safe?

3) Assuming #2 is "yes", in the event I want to change the second passphrase (i.e. - employee leaves company), is this possible without re-encrypting all of the data using the new derived key? If no, what possible implementations/alternatives could allow for that? Perhaps storing another key encrypted in the database and using the above derived key to decrypt that, and then that key is the real key that decrypts other data in the db? If that works, would Halite's Symmetric::encrypt be secure enough for the second key?

Thanks for any help! I've been researching this stuff for months but I'm no security expert - would REALLY appreciate somebody just validating all this that I came up with.

• Could you mention possible attack scenarios? For example: database only, application server only (limited or long time), client compromise (via a key logger etc.). – kelalaka Sep 29 '18 at 14:53

1. Is this form of double authentication even helping security?

Possibly, but in general, asking users to remember two passphrases is asking too much. It is less likely that either one of them will be secure enough to be used as a password.

2. Is using the second passphrase to derive the key safe?

Probably, but sharing a password among all the other employees is certainly not secure. Besides that, using both Argon2i and scrypt on the same input material only slows down your service, while an adversary only needs to perform the fastest one (from their perspective). If a key is derived after verification of the password then a Key Based KDF such as HKDF can be used.

3. ... is this possible without re-encrypting all of the data using the new derived key?

No, for that you'd need a data key as you're describing.

If that works, would Halite's Symmetric::encrypt be secure enough for the second key?
Sorry, but that's off topic here; we cannot vouch for the security of a crypto system or library. You can of course do worse than libsodium to encrypt data, but if it is secure also depends on system and implementation details. • Interesting! The second password was also to ensure that the key was not kept anywhere on the system though, is there something I could use instead so that I don't have to keep the key somewhere on the server? Something that could share a key between their passwords or something? Would HKDF do that? – Element Zero Oct 3 '18 at 3:23 • You could encrypt it with each of the users passwords. HKDF can be used together with a label to derive different values from the password hash: one for authentication, one for encryption (wrapping) of the data key. The problem with key management is that there are many possibilities and there is no algorithm to calculate which one is best: it depends on the system configuration. – Maarten Bodewes Oct 3 '18 at 20:32
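To make the data-key idea in point 3 concrete, here is a minimal Python sketch of the envelope pattern. It is not Halite's API: scrypt and Fernet stand in for the passphrase-based derivation and symmetric encryption, and every name and parameter here is illustrative only.

```python
# Envelope/data-key pattern: the shared passphrase only wraps a random data key,
# so rotating the passphrase re-wraps one key instead of re-encrypting all records.
import os, base64, hashlib
from cryptography.fernet import Fernet

def kek_from_passphrase(passphrase: str, salt: bytes) -> Fernet:
    # Key-encryption key derived from the shared passphrase (scrypt as a stand-in).
    raw = hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
    return Fernet(base64.urlsafe_b64encode(raw))

# One-time setup: a random data key, stored only in wrapped (encrypted) form.
salt = os.urandom(16)
data_key = Fernet.generate_key()
wrapped = kek_from_passphrase("old shared passphrase", salt).encrypt(data_key)

# Normal operation: after password verification succeeds, unwrap the data key and use it.
dek = Fernet(kek_from_passphrase("old shared passphrase", salt).decrypt(wrapped))
record = dek.encrypt(b"some protected record")

# An employee leaves: rotate the passphrase by re-wrapping the same data key.
salt = os.urandom(16)
wrapped = kek_from_passphrase("new shared passphrase", salt).encrypt(data_key)

# Records encrypted under the data key remain readable without re-encryption.
dek2 = Fernet(kek_from_passphrase("new shared passphrase", salt).decrypt(wrapped))
assert dek2.decrypt(record) == b"some protected record"
```

The design point is the indirection: the bulk data only ever depends on the random data key, never directly on the passphrase.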
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3472246825695038, "perplexity": 1580.908847592724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606872.19/warc/CC-MAIN-20200122071919-20200122100919-00226.warc.gz"}
https://atcoder.jp/contests/abc216/tasks/abc216_h
H - Random Robots
Time Limit: 2 sec / Memory Limit: 1024 MB
Score: 600 points

### Problem Statement

There are $K$ robots on a number line. The $i$-th robot ($1 \leq i \leq K$) is initially at the coordinate $x_i$. The following procedure is going to take place exactly $N$ times.

• Each robot chooses to move or not with probability $\frac{1}{2}$ each. The robots that move will simultaneously go the distance of $1$ in the positive direction, and the other robots will remain still. Here, all probabilistic decisions are independent.

Find the probability that no two robots meet, that is, there are never two or more robots at the same coordinate at the same time throughout the procedures, modulo $998244353$ (see Notes).

### Notes

It can be proved that the probability in question is always a rational number. Additionally, under the Constraints in this problem, when that value is represented as $\frac{P}{Q}$ using two coprime integers $P$ and $Q$, it can be proved that there uniquely exists an integer $R$ such that $R \times Q \equiv P \pmod{998244353}$ and $0 \leq R \lt 998244353$. Find this $R$.

### Constraints

• $2 \leq K \leq 10$
• $1 \leq N \leq 1000$
• $0 \leq x_1 \lt x_2 \lt \cdots \lt x_K \leq 1000$
• All values in input are integers.

### Input

Input is given from Standard Input in the following format:

$K$ $N$
$x_1$ $x_2$ $\ldots$ $x_K$

### Sample Input 1

2 2
1 2

### Sample Output 1

374341633

The probability in question is $\frac{5}{8}$. We have $374341633 \times 8 \equiv 5 \pmod{998244353}$, so you should print $374341633$.

### Sample Input 2

2 2
10 100

### Sample Output 2

1

The probability in question may be $1$.

### Sample Input 3

10 832
73 160 221 340 447 574 720 742 782 970

### Sample Output 3

553220346
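This is not a solution for the stated constraints, but a brute-force checker (feasible only for very small $N$ and $K$) makes the probability model concrete by enumerating every pattern of move/stay decisions exactly; it reproduces the first two samples.

```python
# Exhaustive check of the probability model for tiny inputs (not an efficient solution).
from fractions import Fraction
from itertools import product

def survive_probability(N, xs):
    K = len(xs)
    good = 0
    for pattern in product(product((0, 1), repeat=K), repeat=N):  # (2^K)^N outcomes
        pos = list(xs)
        ok = True
        for step in pattern:
            pos = [p + d for p, d in zip(pos, step)]  # robots with d=1 move by +1
            if len(set(pos)) < K:                     # two robots share a coordinate
                ok = False
                break
        good += ok
    return Fraction(good, (2 ** K) ** N)

print(survive_probability(2, [1, 2]))     # 5/8, matching Sample 1
print(survive_probability(2, [10, 100]))  # 1,   matching Sample 2
```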
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983696341514587, "perplexity": 3295.3057389970104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499891.42/warc/CC-MAIN-20230131222253-20230201012253-00742.warc.gz"}
https://en.wikibooks.org/wiki/Logic_for_Computer_Scientists/Modal_Logic/Modal_Logic
# Logic for Computer Scientists/Modal Logic/Modal Logic

## Modal Logic

Modal logic is concerned with the investigation of modalities in logics. Modalities are, e.g., necessity and possibility. In classical propositional logics, propositions like "Tom is married to Mary" may be true or false; in real life, however, the truth value of the above propositions can obviously change in time. In a different context the truth value may be different in different worlds: assume that Tom is dreaming about being married to Mary; hence in the world of his dreams the proposition may be true, while in real life it may be false. In still another context, the truth value can depend on whether it is considered under legal aspects: it is possible that Tom and Mary are legally married, while the Catholic church considers them to be single.

Modal logics had been studied extensively already during the first half of the 20th century by various logicians. The main breakthrough, however, was the establishment of a formal semantics of modal logic given by Kripke.

Propositional logics can be extended by modalities to describe belief, knowledge or temporal aspects. Hence it is very appropriate to use them with knowledge representation systems. Recently modal logics have been applied in verification contexts and as a means to describe the semantics of description logics.

As an example take the following puzzle: Assume 3 children (it works for $n$ children as well), who are perfect reasoners, are always truthful, and always give an answer if they know one. The children are playing outside, and may get muddy foreheads. Any child can see if the other children have muddy foreheads, but can't see his or her own forehead. At some point, an adult says to them: at least one of you has mud on your forehead. The adult now asks: do any of you know if you have mud on your forehead? No one answers. The adult repeats the question, and again no one answers. The 3rd time the adult asks the question, one or more of the children answer. How many of the children have muddy foreheads?

This puzzle can be solved by constructing a Kripke structure, which is a set of states together with links which express the accessibility of states. There are 3 children, each can be muddy or not muddy and, hence, we have $2^3$ possible states. States can be represented by triples of boolean values, where a 1 in the nth position means that the nth child is muddy, and a 0 in the nth position means that the nth child is not muddy. The Kripke structure can be defined as follows: consider, e.g., state (111). Since child 1 does not know whether or not he or she is muddy, as far as child 1 is concerned, he or she could be in state (011). For child 2 the state (011), however, is not accessible, since child 2 knows that child 1 is muddy. A structure constructed as described above can be used to solve the puzzle, by drawing the structure as it results after each speech action of the adult.
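A small Python sketch of that construction, under the usual formalisation: states are bit-triples, child $i$ considers another state possible exactly when it agrees on every other child's forehead, the adult's first announcement removes (000), and each round of silence publicly removes the states in which some child would already have known. The function names are mine, not the book's; with three muddy children the answer indeed comes at the third question.

```python
from itertools import product

def knows_own_status(state, child, model):
    # Child `child` knows their own status in `state` iff their own bit is the same
    # in every state of `model` that agrees with `state` on all *other* children.
    accessible = [t for t in model
                  if all(t[j] == state[j] for j in range(len(state)) if j != child)]
    return len({t[child] for t in accessible}) == 1

def rounds_until_answer(actual):
    n = len(actual)
    # Announcement "at least one of you is muddy" removes the all-clean state.
    model = {s for s in product((0, 1), repeat=n) if any(s)}
    for rnd in range(1, n + 1):
        if any(knows_own_status(actual, i, model) for i in range(n)):
            return rnd
        # "Nobody answered": keep only states in which nobody would have known.
        model = {s for s in model
                 if not any(knows_own_status(s, i, model) for i in range(n))}
    return None

print(rounds_until_answer((1, 1, 1)))  # -> 3: all three muddy children answer at the third question
print(rounds_until_answer((1, 1, 0)))  # -> 2
print(rounds_until_answer((1, 0, 0)))  # -> 1
```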
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6468460559844971, "perplexity": 840.317702858451}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119120.22/warc/CC-MAIN-20170423031159-00200-ip-10-145-167-34.ec2.internal.warc.gz"}
http://www.uuelco.me/p/j.html
Juking 101🍏

Stew choreography

Stew choreography is the process of folding twistor fields using fruut to keyframe (coordinate) stews*. This is our "proof-of-work" concept (i.e., meshing exercise for UUe's cryptocommodity), and should solve twistor spaces. Ludologically, this is a string's middlegame (i.e., where improvisation happens). The term 'stew' here is not a brothy meal, but we still 'cook' them up, if you will.

First, some foreground. We always start with a randomly coiled object called the loopstring (henceforth l-string). Since it is orientation-agnostic and length-arbitrary (drawing parallels to a denatured protein), our string can assume any shape/structure in two dimensions (2d). Therefore, any sort of squiggly line will suffice for intonation.

Note: It is ultra important to understand that even though the string is random, the conformation itself is its own (random) walk. This is what makes each string's folding process unique, and directly contributes to its difficulty.

EGP

EGP is UUe's keynote, and the calculus of egglepple. Our tonic is such that handshakes are integral to l-string pathways. Essentially, this means that crypto fabrication remains asynchronous as long as contracts are kept. The Keynote is what establishes string melodics and conditions for folding. This is what is calculated overall.

The three (3) shapes - rectangle, circle, and triangle - are commonly known as fruut. Color-matching, we say that the yellow square is 'banana'🍌, the azure circle is 'blueberry', and the green triangle is 'lime'. This is easier to remember (at least for me), and supersedes their EGP signatorial modes (time or key).

• Banana (yellow square): in key signatorial mode, the letter 'E'; in time signatorial mode, a beta sheet.
• Blueberry (azure circle): in key signatorial mode, the letter 'G'; in time signatorial mode, an alpha helix.
• Lime (green triangle): in key signatorial mode, the letter 'P'; in time signatorial mode, a delta valley/turn.

Egglepple® is the portfolio (collection) of leaf definitions. In fact, this is the formal name of the loopstring, itself. The word portfolio (a finite succession of convertible assets) is non-trivial here, because it is used in the sense of porting yesegalo from one side (recto) to the other (verso).

#stews

EEE EEG EEP EGE EGG EGP EPE EPG EPP GEE GEG GEP GGE GGG GGP GPE GPG GPP PEE PEG PEP PGE PGG PGP PPE PPG PPP

Create a roster by selecting leaves from the portfolio. Our objective is to compile twistors, a process done via patchwork* on egglepple. Orchestration is juker-guided, with oversight (signatures) from the impresario. In other words, proof all lyrics.

Flageolet

Flageolet pencil or 'krayon'. All games are initiated with pencil declaration. At this stage, you are prompted to enter the number of pencils/krayons you are wanting to use. This number can change, but must never be below three (3). Otherwise, string input is rendered void() because it takes a minimum of three pencils to form a required chord.

Flageolet pencil

A flageolet pencil (or just pencil or krayon) is a one-dimensional curvilinear divisor (line) that is subject to rhetorical arrangement. Thus, a pencil is stringy by proxy. In some instances, it may be thought of as an edge, link, or residue of yesegalo (cf. 1-brane).
Pencils lay the foundation of stew choreography; they are the starting points of intonation and may join or split among themselves (this is called interaction or symmetry-breaking).

Rhetoric (RONALD)

• R (Exordium): key signature (primary structure)
• O (Inventio): time signature (secondary structure)
• N (Dispositio): These lyrics comprise the major chord. They represent the sixteen (16) composite numbers in the lyrical set: {4,6,8,9,10,12,14,15,16,18,20,21,22,24,25,26}.
• A (Elocutio): These lyrics comprise the minor chord. They represent the ten (10) prime numbers in the lyrical set: {1,2,3,5,7,11,13,17,19,23}.
• L (Memoria): Normally, ties connect two (2) notes of the same pitch class and name. This gives indication that they are to be played as a single note with a duration equal to the sum of the individual notes' values. In our case, a 'tie' is overly generalized, as it connects similar as well as dissimilar pitch classes (i.e., ties are equivalent to slurs), even in succession. Hence, our ties can connect/unite both major (composite) and minor (prime) lyrics. Minimum three (3) pencils required to form a chord.
• D (Pronuntiatio): The fermata signals that a note should be prolonged beyond what its normal duration or value would indicate. Depending on its placement on the notation/score, the fermata may also indicate the end of a section (punctuation). Exact duration of the hold, as well as its placement within a piece, is at the discretion of the juker. Objectively, this would be to say that such-and-such section is to remain un-closed while other chords on the string are examined for a possible better fit. It could also signal 'un-tie' (as in reverse some pronuntiatio). The maneuver is most appealing in multiplayer mode, where different players may be tackling different sections of the same string simultaneously, but getting dissimilar results.

Obtaining the yield

The yield, called MONEY (for Mathematically-Optimized Numismatics' Encrypted Yield), is based on a formula of logarithms that is borrowed from the musical concept of cents (the cent formula, spelled out below).

Music is measured on a scale of intervals; in this case, a cent is a ratio of two (close) frequencies. For the ratio (a:b) to remain constant over the frequency spectrum, the frequency range encompassed by a cent must be proportional to those frequencies. Scaled, an equally tempered semitone spans 100 cents (a dollar) by definition. According to The Origamic Symphony, an octave (the unit of frequency level when the base of the logarithm = [pencil count × font weight]) spans twenty-six (26) semitones (intervals/measures), and therefore 2600 cents. Because raising a frequency by one (1) cent is equivalent to multiplying by this constant cent value, and 2600 cents doubles a frequency, the ratio of frequencies one cent apart is calculated as the 2600th root of 2 (~ 1.00026663*). We can integer-round to just 1 for all practical purposes.

We've only identified what a cent is musically. Before we can do so economically, we need to first do so mathematically. The math part is as easy as 1-2-3. When folding string, we are coordinating, meaning that (usually) two (2) fonts are conjoining. The manner in which they conjoin matters; a going to b may yield a different cent amount than the converse, b going to a. Keep this in mind.

Making stew: putting it all together

So, now let's provide a power demonstration of how all of this might work in the real world.
First of all, let it be known that the term 'juking' implicates stew choreography either 'by hand' (this way) or 'by machine' (see protocol). Meaning that, this is the activity of our automation; one method is just a lot faster (in theory) than the other. Here's my heuristic: Begin by drawing a random coil. Notice the orange balls placed at the ends of the string. They serve to highlight the fact that we have an open string that we are attempting to close. Pick a pencil count. Select time signature (cf., primary structure) to create a sequence. Move to key signature (cf., secondary structure) to transform/twist object. Call on fonts (font keys) to remove numerical obfuscations. Use glue (glue keys) to match fonts. Upon submission (in stew notation), the impresario (me) will automatically do the calculation to reveal the mesh's yield, but you can also do this yourself just to make sure. Once your mesh or walk gets my signature, collect your coupon. Stew choreography is NP-complete, yet, chances are that unless your calculatory prowess is greater than that of the entire network, you might as well Technically, a jukebox is a transactionary automaton hosting a rotisserie. It is a vending machine whose self-contained media (assets) is music.🎼 Upon token (coin) insertion, a jukebox will play a patron's selection from that media. UUe is the jukebox devised to resolve the Juke Lemma, which conjectures that all phenomena are rooted in juking. Its media is autochthonous fibor. The 'music' of the jukebox is a simulacrum known as The Origamic Symphony (TOS). In Nature, we deduce that string is her most important structure. Chemically, it manifests as polymers (like RNA) and peptides (such as protein). Physically (subchem), vibrations of the string correspond to fundamental forces. Via string ludology, it is feasible to use one domain (interaction = physics) to model the other (reaction = chemistry), and vice versa. This is known as (transaction = stereotyping*). To do this, we need a portfolio (a finite succession of convertible assets) whose symmetry we'll constantly break with folding. ie., encrypting EGP 🐨 Only correct topologies are gainful, however. For example, a misfolded protein can trigger unwanted effects (stuff like malforms and diseases. = Yuck!).👻 There is an astronomical number of possible fold paths per polymer. Because this number is so big (requiring a great many calculations) - as would be the case of folding a polypeptide (or, more accurately, folding an amino acid sequence) - this should take eons, yet, it happens on the order of microseconds. How? With the aid of chaperones called twistors, which essentially are responsible for warping the space in-between quanta having S-complexity. ... bringing me to the reason UUe was created in the first place. Collectively, our chief concern is not mesh functionality, per se, but with the exploitation of so-called twistor spaces. The notion of polymer knotting/entanglement is perhaps the most challenging of STEM-type problems. Let's see how You can make a difference It is widely believed that although the total number of twisted sequences (eg., fibor) is exponentially high, the actual number of templates or motifs from which those knots are drawn is only about 2,000 or so. I call these templates fonts (a term borrowed from stereotypography). According to the opera ludo, Stewart, there are precisely 2,028 fonts. Our assignment is to proof, sort, and synthesize these objects. By juking, we are steadily accomplishing this. 
# The consequences are grand and beneficial. On one hand (physics), all material (particles) can be described. On the other hand (chemistry), biomolecular structures would be exactly solvable; malforms and disease (ie., Alzheimer's, baldness, HIV/AIDS, cancer, ozone scrubbing, etc.) become curable. An immediate ancillary is that juking gives you a chance to put funds in your pocket.😁 As jukers, our job lies at the corridor of economics. To juke is to take a portion of some l-string and twist it so that it is optimal. The reward for this is MONEY. For each opus, the game starts with an opening. Vend your coupon with a juke, and improvise. We'll use an example opus here (which will later be rehashed in the fugue overview). I disclaim that these numbers are probably bogus, as they were handpicked for illustrative purposes. - A rotisserie, or roto, symmetrically relates ("swaps") self-contained assets-to-functions ||(u,u)|| as defined within some twistor space. It exists only to throughput (measure and frame) tokens. The jukebox hosts a rotisserie in order to instantiate MONEY algorithm activity (read: 'juking'). Rotisseries typically use statistical cycling of the canvas (buttons: juke + Stewdio + fugue) so as to benefit jukers. In our case, the rotisserie is synonymous with "coupon router" (ie., a hash auction), thereby establishing gameplay. - The layout for UUe's rotisserie is pretty straightforward. It has four (4) sections: three (3) identifiers plus one (1) activator. The identifiers are: opus number, tablature, and handicap The activator is a solo juke button. Starting from the top (opus number), I'll explain what each section does. - In this (possibly bogus) opus, EX (Part A) represents the two (there are always only two) leaves (or reading frames to make an analogy) that mark the ends of the string. The first letter (E) is the start leaf, and the second letter (X) is the stop leaf, in that order. The next part (Part B) tells us how many flageolet pencils (or polymers, to make another analogy) there are that make up the string (cf., font size). Since this is a chain, we can assume that the pencils are conjoined. The number is always an integer. For this opus, there are one-hundred (100) pencils, meaning that its string is 100 units long. The third part (Part C) gives us the (font) weight (as a function) of the string. This number (an integer) is the sum of all the stews ('weighted pencils') along the string. We arrive at this number by converting (more accurately, translating) the letters (in the English alphabet) of the stew to their numerical equivalent. For example, the letter 'E', the fifth letter of the alphabet, is number 5 (E = 5). The letter 'X', the twenty-fourth letter of the alphabet, is number 24 (X = 24). The letters of Part A are always included in the tabulation, so the weight is summing an additional ninety-eight (98) alphanumerics [ie, ∑ pencils]. There are a lot of different combinations one could use to get 888 from 100 pencils. For instance, we know at least the value of two pencils here, E (5) and X (24) = 5 + 24 = 29, leaving us with 888 - 29 = 859 mod 98 combos from which to choose. The remaining 98 letters can be any from the alphabet (A - Z), but any combination: (1) cannot exceed 859, and (2) must count all the strung pencils. - Tablature is the range of frets (font sizes, 1¢ +) available to a given opus. It is indicative of the multiplier at which a dividend can be obtained (ie., 'buy-in'). 
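To make the Part C bookkeeping concrete, here is a small Python sketch (my own, not part of the score) of the A = 1 through Z = 26 tabulation described above. The 100-pencil length and the 888 weight are the example opus figures quoted in the text; the filler letters below are purely hypothetical, one of many combinations that satisfy the constraints:

```python
import string

def pencil_value(letter: str) -> int:
    """Translate a letter to its alphabet position (A = 1 ... Z = 26)."""
    return string.ascii_uppercase.index(letter.upper()) + 1

def font_weight(pencils: str) -> int:
    """Font weight = sum of the values of every pencil (letter) on the string."""
    return sum(pencil_value(ch) for ch in pencils)

# Start/stop leaves of the example opus "EX": E = 5, X = 24, together 29.
print(pencil_value("E"), pencil_value("X"))   # 5 24

# One hypothetical way to fill the remaining 98 pencils so that the
# 100-pencil string weighs exactly 888 (the filler must therefore sum to 859):
filler = "I" * 95 + "A" * 2 + "B"             # 95*9 + 2*1 + 2 = 859
opus_string = "E" + filler + "X"

print(len(opus_string))                       # 100 pencils
print(font_weight(opus_string))               # 888
```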
We derive the valuation as a logarithm whose base is the product of font size and font weight, taken of an (egg,epp) quotient (or, even simpler: as handicap/2600). This is known as the cent formula: c = log_n(b/a), with n = (font size × font weight). The formula is (somewhat obtrusively) borrowed from music theory. Here, c stands for cent(s), the basic unit of MONEY. The number 2,600 comes from twenty-six semitones measured at 100 cents each. n is the product of (font size × font weight), and the (b/a) ratio satisfies integration between two (2) intervals. In our example here (which is still probably bogus), we hypothetically would get the 5¢ fret from the formula.

A fret (i.e., activation fee/price-per-token) is a conjectured ideal phenomenon in finance. Theoretically, it is the "lowest-level juke (as one-twenty-sixth of a sporadic group)" at one cent (penny), where the value is derived from the cent formula [particularly, fret = log_n(b/a), where {b,a} = (u,u) and n = font size × font weight]. Frets share an equivalence relation with twistors. The significance of the fret (and the idealism of it) is its extreme affordability; one cent is considered to be Nature's disposable income. Mirroring chemistry, the fret would be the lowest available energy level. In everyday vernacular, most, if not all, jukes hedging opus handicaps are assumed to be so-called "(penny) frets". That is, their fret is typically worth "pennies on the dollar" or "cents on the dollar". The fret itself may be an accurate description of a general juke because standard coupon deviation is represented by the tablature.

Note (+): In theory, attaining a per-cent fret is challenging because of tablature efficiency conditions: the greater the number of pencils (and hence cents), the heavier the string, resulting in a juke with a wild count.

Tip: The smaller the fret, the bigger the potential payout. (see below)

In practice, a pure 'penny' fret (an identity) is infeasible in two-dimensional (2d) vector space, as handicaps and yields normally have no congruence, and for this reason, we juke. If and when it turns out that a fret is bijective, then we have a font.

- An opus' handicap, or just cap, is its projected yield, as measured in cents (compounded to dollars). It is computed as the product of the actual fret and the maximum number of TOS semitones* at one hundred (100) cents each (cap = fret × 2,600), thereby completing the cent formula. This figure constitutes the fitness extrema of the opus it represents (akin to how much twistor space it occupies). The cap is essentially placing a quote (estimate) on the calculation derived from the above equation. Anything under the cap (minima) qualifies for a coupon's dividend (i.e., 'pay-out'), and anything over the cap (maxima) constitutes a loss for the juker/gain for the house. Setting the techno-babble aside, suffice it to say that all the juke button does is call the charge API. You're basically placing an order by entering a play (optional) and some monetary value as you build your coupon.

- A juke is an iterated perturbation* (sesquilinear transform) to egglepple. To juke is to solve some twistor space with (an) EGP encryption. Within the context of stew choreography, a juke is a play that is symmetry-breaking but not an endgame move, which means that it does not include: (1) the coupling of adjacent leaves, or (2) the leaves EEE and PPP [namely (EEE,PPP) / (PPP,EEE)] in coordination. Failing to adhere to this rule would result in dissonance (i.e., ultrametric calculus).
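Before moving on, here is the fret-and-cap arithmetic from this section pulled into one minimal Python sketch. It reads the cent formula literally as fret = log_n(b/a) with n = font size × font weight and cap = fret × 2,600; every concrete number (the b/a quotient, size, weight, yield) is a hypothetical stand-in, since the worked figures from the original score are not reproduced here, and the function names are my own:

```python
import math

CENTS_PER_OCTAVE = 2600  # 26 TOS semitones at 100 cents each

def fret(b: float, a: float, font_size: float, font_weight: float) -> float:
    """Cent formula as stated: fret = log_n(b/a), with n = size * weight."""
    n = font_size * font_weight
    return math.log(b / a, n)

def handicap(fret_value: float) -> float:
    """cap = fret x 2,600: the projected yield, in cents."""
    return fret_value * CENTS_PER_OCTAVE

def qualifies_for_dividend(yield_cents: float, cap: float) -> bool:
    """Anything under the cap pays out; anything over it is the house's gain."""
    return yield_cents <= cap

# Hypothetical example values only (not the opus figures from the text):
f = fret(b=2.0, a=1.0, font_size=100, font_weight=888)
c = handicap(f)
print(round(f, 4), round(c, 2))
print(qualifies_for_dividend(yield_cents=120.0, cap=c))
```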
Note (+): It may be the case that the leaves EEE and PPP can/should be replaced by EEG and PPG, respectively. This is because EEE and PPP actually are loop markers (marking initiation and termination) in a sequence. Adjacent leaves cannot be coupled due to the fact that a chord (three leaves or more) is required for folding. Out of the total twenty-six (26) stews, only twenty-four (24) of them allow legal jukes. This stems from the fact that a composition cannot lack harmony, which is to say that a loop (connected endpoints), nor adjacencies (any side-by-side coordinates) are permissible. Note (+): The sequence of leaves in a juke is immune to start/stop identification. Meaning that it is not illegal to have an opus be of identical lettering (eg., "Opus LL"). Proof loopstring: loopy quanta + superstrings The Origamic Symphony (TOS) is a musical simulacrum whose composition is the spectrum between the Planck and nano scales (henceforth referred to as the yoke*). Jukers comprise its orchestra. We automatically assume that the yoke is an abstraction to which string is attached by default. + TOS is responsive to EGP resonances. This is feasible because egglepple [a class of automata called loopstring (l-string)] extends the entire yoke (1.616252 × 10-35m ↔ 1 × 10-9m) and is a plastic object that can fold upon itself. Its convertibility is determined analytically. Being permutable, juking the string changes its attributes, giving different variants which introduce an economic system. Our business is strictly with the ludology of this object (we care only for how the string works, in general, so that we can twist it). Here, 'origami' is "(string) folding". Specifically, it is stew choreography as it applies to the compactification schemes of subchem. The terms 'loopy quanta' and 'superstring' are both misnomers. The 'super' part of the theorem will be clarified. We should also note that 'polymer' is an ultra-generic term here. - Consider a twistor space, k, which is tuned according to various l-string pitches. Resting atop the hypothesis that every geometry is convertible, solvable, and scorable, egglepple is sequenced by scaling k - ie., stereotyping flageolet pencils (functors of EGP) from walks. To better understand the loopstring (and my motivation behind the Juke Lemma), I'll start with an elementary synopsis of quantum strings and then segue into how that portends polypeptide/protein structure (but not necessarily functionality). Quantum strings are the strings of so-called string theory, which is an attempt to unify (quantum) gravity with the effects of quantum mechanics into a framework that can explain the smallest energy and larger particle scales known in physics. There are four (4) known forces in Nature: weak nuclear (decay), strong nuclear (confinement), electromagnetism, and gravity (curvature). The combination of three (3) of these select forces (weak/strong nuclear, plus electromagnetism) is settled into what physicists call the Standard Model. The problem is reconciling gravity with the other three. String theory asks for a ‘bare minimum’ qualifier on a smallest scale – the Planck scale. This is easy to go along with – just assume that all forces have a starting point, and cluster solutions within that matrix. 
So, because quantum mechanics is a model that deals exclusively with the probabilities of interactions between bodies, the notion of relativity (which claims that objects are immune to stochasticity) must exist within foam (a normed vector space where string is lissome) in order for any theory of unification (GUT) to be at all useful. Bandaiding the above posture, string theory finds utility as a working theory of quantum gravity, where its objects are one-dimensional (read: “length”) strings of pure energy. The ‘theory’ part of it stems from how these strings describe the rest of Nature. Because these objects are both lengthy and confined, they are subject to harmonics (i.e., partially differentiable), which more or less means that they can vibrate given some initial constraints. As we are aware, the Planck scale (more appropriately, the Planck length) is extremely small. Before any chemistry (reactivity) can be done, one must get from this metric to the nanoscale. In-between these two (2) scales (called subchemistry) lie twenty-six (26) ‘somethings’. Mathematicians like to call these ‘somethings’ measures (degrees of freedom/orders of magnitude), but I prefer lyrics (cf., lyre intervals -- by abuse of language, these also may be scalars reduced to single components, at least to some extent). What we do know for sure is that every elementary particle (such as the gluon, electron, muon, quark, etc.) is autochthonous to this domain by virtue of being, well, subatomic but larger than the string object. We derive the '26 measures' from simple math; division of powers of the same base is equivalent to subtracting the exponents (10^a / 10^b = 10^(a-b)): 10^-35 / 10^-9 = 10^-26, and |-26| = 26. Intuition tells us that because energy is a transferable property that must do what it does (transfer), these (subatomic) particles are actually vibrations of the string itself (in fact, they can’t do anything else but vibrate as they conjoin, break, and knot). And (it’s never proper to begin a sentence with ‘and’, but I did it, anyway😛), that each particle correlates to its own pitch class. So, a couple of things would be required to make this ‘music’. First, the string must be bounded: there must exist an upper and/or lower limit to its function. This marks tension and is how string acoustics are established. Second, it must be topologically transformative; if strings vibrated at an identical frequency, they would not be exotic (and plausibly low-energy). Scattering amplitudes of strings are a crucial part of the theory. A field is more rational than a dimension (even though we are clearly working in declension of meters) here because of how a subatomic particle comes into existence – via string vibrations. You would need more than just a descriptor like length to assign values like charm, spin, color, and so forth. Anyway, that’s a topic called quantum field theory (QFT); suffice it to say every field will yield its own class of particle (mainly quanta). For example, a luminous field will yield photons, a gravity field would yield gravitons, a musical field yields notes, a laugh field will yield gigglons (I’m being facetious with that one, but you get the idea), … and on and on. A great read on QFT can be found in the textbooks “An Introduction to Quantum Field Theory” by Peskin & Schroeder, or “QED: The Strange Theory of Light and Matter” by Richard Feynman. Physically, there are two particle species – bosonic and fermionic.
The boson is associated with force (because its wavefunction remains unchanged in the presence of a twin particle), and has an integer spin (Bose-Einstein) statistic, while the fermion has a ½-integer spin (Fermi-Dirac) statistic and association with matter. The spin statistic is what really determines the species; any composite particle with a ½-integer spin will qualify as a fermion. Likewise, an even number of fermions constitutes a boson. It is also possible to have a field configuration where the boson is topologically twisted and behaves as if it is material. But, for quotidian purposes, fermions are matter and bosons are radiative. Pigeon-holing this, one can argue that the Bose-Einstein statistic is more primitive than its counterpart because it commutes. String theory supports this. The statistics are based on how particles may occupy discrete energy states. In the literature, the first accepted string theory is called bosonic string theory (BST). Modern theorists tend to galvanize around the idea that this ‘toy’ model of string theory is incomplete (i.e., not worthy of grand unification status) because it factors in faster-than-light particles called tachyons, while treating fermions as exotic particles (by definition, a physical model must have mass). It also doesn’t incorporate supersymmetric hyperbole. For precisely these reasons, I find bosonic string theory most attractive. The tachyonic field technically is one having negative mass squared (m^2 < 0). In order for a string theory to be consistent, the worldsheet theory must be conformally invariant. The obstruction to conformal symmetry is known as the Weyl anomaly and is proportional to the central charge of the worldsheet theory. In order to preserve conformal symmetry, the Weyl anomaly, and thus the central charge, must vanish. For the bosonic string this can be accomplished by a worldsheet theory consisting of twenty-six (26) free bosons. Since each boson is interpreted as a flat spacetime dimension, the critical dimension of the bosonic string is 26. A worldsheet is a two-dimensional manifold describing the embedding of a string in spacetime. Encoded in a conformal field theory are the following definitions: string type, spacetime geometry, and background fields (such as gauge/string fields). Another string theoretic candidate is superstring theory. It claims to unify both bosons and fermions under a single umbrella: a parasol called supersymmetry (SUSY; a shorthand, not an acronym). The notion of supersymmetry was introduced as a spacetime symmetry to relate bosons to their fermionic associates. The idea is that each particle is partnered with a ‘superpartner’ based on its spin. So, a ½-integer particle is directly related to an integer particle via a coupling. We should keep in mind the motivation behind SUSY: the hierarchy problem (simply put: the Higgs mass is the greatest scale possible due to quantum interactions of the Higgs boson, sans some reduction at renormalization. Obviously, SUSY would automatically cancel (self-correct) bosonic and fermionic Higgs interactions in the quantized animation). The mathematics of supersymmetry is rather intuitive. A spinor takes on the number of degrees of freedom of the dimension in which it resides. So, for instance, in a dimension, d, a spinor has d degrees of freedom (e.g., if d=4, then the spinor has four degrees of freedom). In SUSY, the partners are a pair (2), and the number of supersymmetry copies is an exponent to that base (0, 1, 2, or 3).
The product of (spinor times copies) gives the total number of supersymmetry generators, with a minimum of (4 × 1 = 4) and a maximum of (4 × 8 = 32). In d dimensions, the size of spinors follows 2^((d-1)/2). We see that since the maximum number of generators is 32, SUSY maxes out at eleven (11) dimensions. Therein lies our theoretical issue with SUSY. We have made a case for a 26-dimensional bosonic theory using twistors. Now, there would need to be accounting for the reduction in dimensionality. I’m up for the task, but first let’s see what the data from colliders say about integer spin in higher dimensions… Unfortunately, while great in pure mathematical practice (superalgebraic studies, for example) and non-cosmological applications, supersymmetry, as the theory stands, is absent of empirical bearings. WMAP surveys and experiments have detected nothing of the sort. Likewise, for the last decade, the major high-energy particle accelerators (Large Hadron Collider, Tevatron, etc.) have found zero evidence of supersymmetry after running a number of tests at a distribution of energy allowances (upper-limit sensitivities from 135 GeV to 2.5 TeV). To make matters worse, existence of the Higgs boson was confirmed at ~125 GeV. Instead of punting, some theorists have suggested changing the instrumentation and methodology (of course!). So, you’re asking, “Link, does that mean that strings aren’t ‘super’?” The answer is more mundane than that. It’s more like there’s no distinction between Clark Kent and Kal-El. What’s more rational is that Kal-El’s just a journalist on Krypton. Regardless, any GUT must contain in its gamut a path that explains the formation of the first two (2), and most abundant, elements in the observable Universe: hydrogen (H, atomic number 1) and helium (He, atomic number 2). -- All roads lead to hydrogen --

Meshrooming: monomer morphology

+ Here's the kicker: we shouldn't actually look at strings as being physical objects, per se. Instead, consider them portable sequences of permutable segments. As long as we can compute superalgebras, our toolset is workable for strings; we have moderate coverage of string behavior, enough to know how strings work, at least rudimentarily. Now, we can explore some strings that are actually in use in power settings. One litmus test for string theory validation is the polypeptide. One can easily see that peptide (and for that matter, protein) structure and behavior clearly follows the stretch and folding patterns predicted in the theory. Proteins are really neat. They’re these extended macromolecules (relatively large molecules) which are responsible for the maintenance and upkeep within all living cells (they can also exist outside of the cell, but we are concerned here with intracellular biochemistry). In case one doesn't know much about these machines to begin with, we’ll spend some time now on edification. The first thing we need to know about proteins is that they are, in fact, machines. Like any other machine, they use and convert energy. Their mechanical properties allow whatever interacts with them to get work done. Proteins consist of amino acid chains. Because they are molecular (hence, at the nanoscale), proteins are perhaps the most important biochemicals (amongst other reasons beyond the scope of this explanation). Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes.
This sequence results in a protein folding into some unique three-dimensional (3D) structure that determines its functionality. Frets - In essence, we are mimicking the behavior of protein folding. Proteins are an essential component to many biological functions and participate in virtually all processes within biological cells. They often act as enzymes, performing biochemical reactions including cell signaling, molecular transportation, and cellular regulation. As structural elements, some proteins act as a type of skeleton for cells, and as antibodies, while other proteins participate in the immune system. Before a protein can take on these roles, it must fold into a functional three-dimensional structure, a process that often occurs spontaneously and is dependent on interactions within its amino acid sequence and interactions of the amino acids with their surroundings. Protein folding is driven by the search to find the most energetically favorable conformation of the protein, i.e. its native state. Thus, understanding protein folding is critical to understanding what a protein does and how it works, and is considered a "holy grail" of computational biology. Despite folding occurring within a crowded cellular environment, it typically proceeds smoothly. However, due to a protein's chemical properties or other factors, proteins may misfold — that is, fold down the wrong pathway and end up misshapen. Unless cellular mechanisms are capable of destroying or refolding such misfolded proteins, they can subsequently aggregate and cause a variety of debilitating diseases. Laboratory experiments studying these processes can be limited in scope and atomic detail, leading scientists to use physics-based computational models that, when complementing experiments, seek to provide a more complete picture of protein folding, misfolding, and aggregation. Due to the complexity of proteins' conformation space — the set of possible shapes a protein can take — and limitations in computational power, all-atom molecular dynamics simulations have been severely limited in the timescales which they can study. While most proteins typically fold in the order of milliseconds, before recently, simulations could only reach nanosecond to microsecond timescales. General-purpose supercomputers have been used to simulate protein folding, but such systems are intrinsically expensive and typically shared among many research groups. Additionally, because the computations in kinetic models are serial in nature, strong scaling of traditional molecular simulations to these architectures is exceptionally difficult. Moreover, as protein folding is a stochastic process and can statistically vary over time, it is computationally challenging to use long simulations for comprehensive views of the folding process. Protein folding does not occur in a single step. Instead, proteins spend the majority of their folding time — nearly 96% in some cases — "waiting" in various intermediate conformational states, each a local thermodynamic free energy minimum in the protein's energy landscape. Through a process known as adaptive sampling, these conformations are used as starting points for a set of simulation trajectories. As the simulations discover more conformations, the trajectories are restarted from them, and a Markov state model (MSM) is gradually created from this cyclic process. 
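Since a Markov state model is, at bottom, a matrix of transition probabilities between conformational states, here is a toy Python sketch of the kind of object the adaptive-sampling procedure builds (the next paragraph describes these models in more detail). The three states and every number in the matrix are invented purely for illustration:

```python
import numpy as np

# Toy MSM: three conformational states (unfolded, intermediate, folded) and a
# row-stochastic matrix of transition probabilities per lag time (made-up values).
T = np.array([
    [0.90, 0.09, 0.01],   # unfolded     -> ...
    [0.10, 0.80, 0.10],   # intermediate -> ...
    [0.01, 0.04, 0.95],   # folded       -> ...
])
assert np.allclose(T.sum(axis=1), 1.0)   # each row is a probability distribution

# Stationary distribution: left eigenvector with eigenvalue 1, i.e. the
# long-run population of each state in the model's energy landscape.
eigvals, eigvecs = np.linalg.eig(T.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()
print(stationary)          # most of the population sits in the folded state

# Propagating an all-unfolded ensemble forward in time:
p = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    p = p @ T
print(p)                   # converges toward the stationary distribution
```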
MSMs are discrete-time master equation models which describe a biomolecule's conformational and energy landscape as a set of distinct structures and the short transitions between them. The adaptive sampling Markov state model approach significantly increases the efficiency of simulation as it avoids computation inside the local energy minimum itself, and is amenable to distributed computing as it allows for the statistical aggregation of short, independent simulation trajectories. The amount of time it takes to construct a Markov state model is inversely proportional to the number of parallel simulations run, i.e. the number of processors available. In other words, it achieves linear parallelization, leading to an approximately four orders of magnitude reduction in overall serial calculation time. A completed MSM may contain tens of thousands of sample states from the protein's phase space (all the conformations a protein can take on) and the transitions between them. A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing less than about 20-30 residues, are rarely considered to be proteins and are commonly called peptides, or sometimes oligopeptides. The individual amino acid residues are bonded together by peptide bonds and adjacent amino acid residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids; however, in certain organisms the genetic code can include selenocysteine and—in certain archaea—pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by posttranslational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Sometimes proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can also work together to achieve a particular function, and they often associate to form stable protein complexes. Upon formation, proteins only exist for a finite period of time and are then degraded and recycled by the cell's machinery via protein turnover. The lifespan of a protein is measured in periods of its half-life. Depending on the host environment, they can exist for minutes or years (the average lifespan is 24-48 hours in mammalian cells). Misfolded proteins are degraded more rapidly due either to their instability or them being signaled for destruction as a means for cellular upkeep and efficiency. Most proteins consist of linear polymers built from series of up to 20 different L-α-amino acids. All proteinogenic amino acids possess common structural features, including an α-carbon to which an amino group, a carboxyl group, and a variable side chain are bonded. Only proline differs from this basic structure as it contains an unusual ring to the N-end amine group, which forces the CO–NH amide moiety into a fixed conformation. The side chains of the standard amino acids, detailed in the list of standard amino acids, have a great variety of chemical structures and properties; it is the combined effect of all of the amino acid side chains in a protein that ultimately determines its three-dimensional structure and its chemical reactivity. The amino acids in a polypeptide chain are linked by peptide bonds. 
Once linked in the protein chain, an individual amino acid is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms are known as the main chain or protein backbone. The peptide bond has two resonance forms that contribute some double-bond character and inhibit rotation around its axis, so that the alpha carbons are roughly coplanar. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone. The end of the protein with a free carboxyl group is known as the C-terminus or carboxy terminus, whereas the end with a free amino group is known as the N-terminus or amino terminus. The words protein, polypeptide, and peptide are a little ambiguous and can overlap in meaning. Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for a short amino acid oligomers often lacking a stable three-dimensional structure. However, the boundary between the two is not well defined and usually lies near 20–30 residues. Polypeptide can refer to any single linear chain of amino acids, usually regardless of length, but often implies an absence of a defined conformation. Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein. The genetic code is a set of three-nucleotide sets called codons and each three-nucleotide combination designates an amino acid, for example AUG (adenine-uracil-guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon. Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (also known as a primary transcript) using various forms of Post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than eukaryotes and can reach up to 20 amino acids per second. The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus. Protein folding is the process by which a protein structure assumes its functional shape or conformation. It is the physical process by which a polypeptide folds into its characteristic and functional three-dimensional structure from random coil. Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA to a linear chain of amino acids. 
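As a concrete illustration of the codon bookkeeping (4 nucleotides read three at a time, hence 4³ = 64 possible codons), here is a small Python sketch that translates a short mRNA fragment. The codon table is deliberately reduced to a handful of entries for illustration, not the full genetic code, and the fragment is made up:

```python
from itertools import product

# 4 nucleotides read three at a time -> 4**3 = 64 possible codons.
print(len(list(product("AUGC", repeat=3))))   # 64

# A deliberately tiny slice of the standard genetic code (not the full table).
CODON_TABLE = {
    "AUG": "Met",   # methionine, also the usual start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",
}

def translate(mrna: str) -> list:
    """Read an mRNA string codon by codon until a stop (or unknown) codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```

The nascent chain that comes out of this translation step is exactly the unfolded polypeptide, the random coil, that the next paragraph picks up.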
This polypeptide lacks any stable (long-lasting) three-dimensional structure (the left hand side of the first figure). Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein (the right hand side of the figure), known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence (Anfinsen's dogma). Experiments have indicated that the codon for an amino acid can also influence protein structure. Most (but not all) proteins fold into unique 3-dimensional structures. Proteins that do not adhere to or lack this behavior are called intrinsically disordered. Still, such proteins can adopt a fixed structure by binding to other macromolecules. The shape into which a protein naturally folds is known as its native conformation. Failure to fold into native structure generally produces inactive proteins, but in some instances misfolded proteins have modified or toxic functionality. Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states. Biochemists often refer to four distinct aspects of a protein's structure: - Primary structure: the amino acid sequence. A protein is a polyamide. - Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the alpha helix, beta sheet and turns. Because secondary structures are local, many regions of different secondary structure can be present in the same protein molecule. - Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even posttranslational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein. - Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex. Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, or the physical region of the protein that participates in chemical catalysis. In solution proteins also undergo variation in structure through thermal vibration and the collision with other molecules. Molecular surface of several proteins showing their comparative sizes. From left to right are: immunoglobulin G (IgG, an antibody), hemoglobin, insulin (a hormone), adenylate kinase (an enzyme), and glutamine synthetase (an enzyme). Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. 
Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane. Proteins are chains of amino acids joined together by peptide bonds. Many conformations of this chain are possible due to the rotation of the chain about each Cα atom. It is these conformational changes that are responsible for differences in the three dimensional structure of proteins. Each amino acid in the chain is polar, i.e. it has separated positive and negative charged regions with a free C=O group, which can act as hydrogen bond acceptor and an NH group, which can act as hydrogen bond donor. These groups can therefore interact in the protein structure. The 20 amino acids can be classified according to the chemistry of the side chain which also plays an important structural role. Glycine takes on a special position, as it has the smallest side chain, only one Hydrogen atom, and therefore can increase the local flexibility in the protein structure. Cysteine on the other hand can react with another cysteine residue and thereby form a cross link stabilizing the whole structure. The protein structure can be considered as a sequence of secondary structure elements, such as α helices and β sheets, which together constitute the overall three-dimensional configuration of the protein chain. In these secondary structures regular patterns of H bonds are formed between neighboring amino acids, and the amino acids have similar Φ and Ψ angles. Protein prediction, design, and engineering can be an expensive (in terms of both time and money) process. Compared to what we are trying to do (earn and decipher), it can be a rather tedious endeavor since the payoff only comes well down the road after jumping through many hoops. Let me suggest a more labor-ready heuristic... MONEY💰: a cryptocommodity Stereotyping is done in a three (3)-act continuum: (namely integer factorization). The continuum is weighted around the relationship between and hypo- currencies. This is called the formula of cryptocurrency* [formalized as the Cryptoquotient (CQ), and also called the Cryptocurrency Problem], which asks if there exists a crypto exercise (natural, artificial, or heterotic) that, hypostantially, may anchor a hypercurrency alongside which it is fit? The rationale comes from string ludology, which posits that juking is resultant of certain path integral manipulation. The ultimate proof is quotient normalization (parimutuelcybernetic). A cryptocurrency is a cryptographic exercise whose resultant is transactionable in some real economy. Double U (u-u) economics is the formal attempt at leveraging the Juke Lemma by sequencing fibors from the portfolio: [verso (EEE) = (micro) through recto (PPP) = (macro)], which yield some quotient of cryptocurrency. MONEY (Mathematically-Optimized Numismatics' Encrypted Yield) is the cryptocurrency* derived from y-proofing. MONEY () is earned by juking. After obtaining a 0b, the yield is the attribution of cents drawn from an l-string arrangement. The cryptocurrency requires that the total be no greater than the handicap to qualify as MONEY. Otherwise, it is bubblegum.* However, a fibor bundle can still theoretically make MONEY. "Cents are made from stew choreography." 
Because the supremum of ludeiy constructibles is computable yet exponential, extrapolating yesegalo from those objects and farming their convertible geometries is ideal for stew choreography. Our economy is negotiated organically from the renormalization of egglepple's intrinsic cent value. Keyframing (coupling a shapeframe with the score) yields plausible recreation. Gameplay: opening + closing + Random walks is the (programming) language of juking (I wonder if that's grammatically correct?). From the standpoint of string-adherence, gameplay (ludological operation) is the foundational dynamic of how l-string functionality is proofed. By playing, we are generating walks for use in fibor determination and closure. Basically, all games transition through three (3) phases: 1) the opening, 2) its middlegame, and 3) an endgame. Stewart's composition is bifurcated into the voices: Earl (aria) and ELLIS (recitative). Each voice deals with its own set of workoads. Earl is for batch processing, while ELLIS is for variable loop reduction. Being an opera ludo, the above objective is introduced as a fitness program for advancing game logic. 'Fitness' here is the ability of currency to move across twistor space. #ELLIS™ Animation in twistor space is entirely based on walks and their statistical variance. Jukers are best-served with an exposition on probability theory since it is conducive to a strong opening (yesegalo construction). Walks get interpolated into major scale construction. ELLIS (from 'Erasure Loop Linear Instruction Set') is the cassette feed for Earl, performing loop-erasure (aka 'loop-erased random walks' or 'erasure loops'). ELLIS guides stew choreography. This is the Pink program's cantata. In simplest terms, loop-erasure is a methodology for not repeating/replicating common walks; a most important checkpoint for obtaining a 0b. The sole purpose for loop-erasure is to supply comparative models which are to be deprecated, ensuring that stew choreography is as fast as possible. Needless to say, this frees-up compute cycles on the systole. In any financial scenario, there needs to be in place an insurance mechanism to reduce risk. We achieve this with cassettes in/from the ELLIS cartridge. My vision for ELLIS revolves around the concept of loop-erased (random) walks. A walk is a playable formation (sequence of discrete steps at fixed length). This is a simplified chart illustrating stochastic activity at specified inflection points (representing measures). In all probability, walks are a distribution of their random variables transforming twistor space. Loop-erasure is the preferred method for culling pitch spikes. + Walks and their calculus are important stuff; they form the whole methodology behind gameplay. So, let's take a quick walkthrough (I'm being punny) for juker edification. Statistically, most walks are random, so we'll start with those. Random walks are usually assumed to be Markov chains or Markov processes, but other, more complicated walks are also of interest. Some random walks are on graphs, others on the line, in the plane, in higher dimensions, or even curved surfaces, while some random walks are on groups. Random walks also vary with regard to the time parameter. Often, the walk is in discrete time, and indexed by the natural numbers. However, some walks take their steps at random times, and in that case, the position X_{t} is defined for the continuum of times. Specific cases or limits of random walks include the Lévy flight. 
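A bare-bones Python sketch of the discrete-time case just described (a nearest-neighbour walk on the integer lattice, indexed by the natural numbers) may help fix ideas; the step set, length, and seed are arbitrary illustration choices:

```python
import random

def simple_random_walk(n_steps, seed=None):
    """Discrete-time simple random walk on the 2D integer lattice Z^2."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # nearest-neighbour moves
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(n_steps):
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

walk = simple_random_walk(20, seed=1)
print(walk[:5], "...", walk[-1])
```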
A popular random walk model is that of a random walk on a regular lattice, where at each step the location jumps to another site according to some probability distribution. In a simple random walk, the location can only jump to neighboring sites of the lattice, forming a lattice path. In simple symmetric random walk on a locally finite lattice, the probabilities of the location jumping to each one of its immediate neighbours are the same. The best studied example is of random walk on the d-dimensional integer lattice (sometimes called the hypercubic lattice). A self-avoiding walk is a sequence of moves on a lattice (a lattice path) that does not visit the same point more than once. This is a special case of the graph theoretical notion of a path. A self-avoiding polygon is a closed self-avoiding walk on a lattice. Markov, et cetera + Markov, etc. Probability distribution + Probability distribution A probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are found in experiments whose sample space is non-numerical, where the distribution would be a categorical distribution; experiments whose sample space is encoded by discrete random variables, where the distribution can be specified by a probability mass function; and experiments with sample spaces encoded by continuous random variables, where the distribution can be specified by a probability density function. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution can either be univariate or multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector—a set of two or more random variables—taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution. To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. In the discrete case, one can easily assign a probability to each possible value: for example, when throwing a fair die, each of the six values 1 to 6 has the probability 1/6. In contrast, when a random variable takes values from a continuum then, typically, probabilities can be nonzero only if they refer to intervals: in quality control one might demand that the probability of a "500 g" package containing between 490 g and 510 g should be no less than 98%. The probability density function (pdf) of the normal distribution, also called Gaussian or "bell curve", the most important continuous random distribution. As notated on the figure, the probabilities of intervals of values correspond to the area under the curve. If the random variable is real-valued (or more generally, if a total order is defined for its possible values), the cumulative distribution function (CDF) gives the probability that the random variable is no larger than a given value; in the real-valued case, the CDF is the integral of the probability density function (pdf) provided that this function exists. 
(Cumulative distribution function) Because a probability distribution Pr on the real line is determined by the probability of a scalar random variable X being in a half-open interval (-∞, x], the probability distribution is completely characterized by its cumulative distribution function: F(x) = \Pr \left[ X \le x \right] \qquad \text{ for all } x \in \mathbb{R}. (Discrete probability distribution) A discrete probability distribution is a probability distribution characterized by a probability mass function. Thus, the distribution of a random variable X is discrete, and X is called a discrete random variable, if \sum_u \Pr(X=u) = 1 as u runs through the set of all possible values of X. Hence, a random variable can assume only a finite or countably infinite number of values—the random variable is a discrete variable. For the number of potential values to be countably infinite, even though their probabilities sum to 1, the probabilities have to decline to zero fast enough. for example, if \Pr(X=n) = \tfrac{1}{2^n} for n = 1, 2, ..., we have the sum of probabilities 1/2 + 1/4 + 1/8 + ... = 1. Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices. (Continuous probability distribution) A continuous probability distribution is a probability distribution that has a cumulative distribution function that is continuous. Most often they are generated by having a probability density function. Mathematicians call distributions with probability density functions absolutely continuous, since their cumulative distribution function is absolutely continuous with respect to the Lebesgue measure λ. If the distribution of X is continuous, then X is called a continuous random variable. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others. Intuitively, a continuous random variable is the one which can take a continuous range of values—as opposed to a discrete distribution, where the set of possible values for the random variable is at most countable. While for a discrete distribution an event with probability zero is impossible (e.g., rolling 3 1 / 2 on a standard die is impossible, and has probability zero), this is not so in the case of a continuous random variable. For example, if one measures the width of an oak leaf, the result of 3½ cm is possible; however, it has probability zero because uncountably many other potential values exist even between 3 cm and 4 cm. Each of these individual outcomes has probability zero, yet the probability that the outcome will fall into the interval (3 cm, 4 cm) is nonzero. This apparent paradox is resolved by the fact that the probability that X attains some value within an infinite set, such as an interval, cannot be found by naively adding the probabilities for individual values. Formally, each value has an infinitesimally small probability, which statistically is equivalent to zero. 
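A quick numerical check of the discrete example above (Pr(X = n) = 1/2^n) and of the point-versus-interval distinction for continuous variables, in a few lines of Python; NormalDist is just a convenient stand-in for any continuous distribution:

```python
from fractions import Fraction
from statistics import NormalDist

# Discrete case: Pr(X = n) = 1/2^n for n = 1, 2, ... ; the probabilities sum to 1.
probs = [Fraction(1, 2 ** n) for n in range(1, 41)]
print(float(sum(probs)))                          # ~1.0 (approaches 1)

# Its CDF, F(x) = Pr[X <= x], at a few integer points:
print([float(sum(probs[:x])) for x in (1, 2, 3)])  # [0.5, 0.75, 0.875]

# Continuous case: a single point carries probability zero,
# while an interval around it does not.
X = NormalDist(mu=0.0, sigma=1.0)
print(X.cdf(0.5) - X.cdf(0.5))   # Pr[X = 0.5] = 0.0
print(X.cdf(0.6) - X.cdf(0.4))   # Pr[0.4 <= X <= 0.6] > 0
```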
Formally, if X is a continuous random variable, then it has a probability density function ƒ(x), and therefore its probability of falling into a given interval, say [a, b] is given by the integral \Pr[a\le X\le b] = \int_a^b f(x) \, dx In particular, the probability for X to take any single value a (that is a ≤ X ≤ a) is zero, because an integral with coinciding upper and lower limits is always equal to zero. The definition states that a continuous probability distribution must possess a density, or equivalently, its cumulative distribution function be absolutely continuous. This requirement is stronger than simple continuity of the cumulative distribution function, and there is a special class of distributions, singular distributions, which are neither continuous nor discrete nor a mixture of those. An example is given by the Cantor distribution. Such singular distributions however are never encountered in practice. Note on terminology: some authors use the term "continuous distribution" to denote the distribution with continuous cumulative distribution function. Thus, their definition includes both the (absolutely) continuous and singular distributions. By one convention, a probability distribution \,\mu is called continuous if its cumulative distribution function F(x)=\mu(-\infty,x] is continuous and, therefore, the probability measure of singletons \mu\{x\}\,=\,0 for all \,x. Another convention reserves the term continuous probability distribution for absolutely continuous distributions. These distributions can be characterized by a probability density function: a non-negative Lebesgue integrable function \,f defined on the real numbers such that F(x) = \mu(-\infty,x] = \int_{-\infty}^x f(t)\,dt. Discrete distributions and some continuous distributions (like the Cantor distribution) do not admit such a density. Law of the iterated logarithm + Law of the iterated logarithm The law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. Let {Yn} be independent, identically distributed random variables with means zero and unit variances. Let Sn = Y1 + … + Yn. Then \limsup_{n \to \infty} \frac{S_n}{\sqrt{n \log\log n}} = \sqrt 2, \qquad \text{a.s.}, where “log” is the natural logarithm, “lim sup” denotes the limit superior, and “a.s.” stands for “almost surely". The law of iterated logarithms operates “in between” the law of large numbers and the central limit theorem. Interestingly, it holds for polynomial time (P) pseudorandom sequences. There are two versions of the law of large numbers — the weak and the strong — and they both state that the sums Sn, scaled by n−1, converge to zero, respectively in probability and almost surely: \frac{S_n}{n} \ \xrightarrow{p}\ 0, \qquad \frac{S_n}{n} \ \xrightarrow{a.s.} 0, \qquad \text{as}\ \ n\to\infty. On the other hand, the central limit theorem states that the sums Sn scaled by the factor n−½ converge in distribution to a standard normal distribution. By Kolmogorov's zero-one law, for any fixed M, the probability that the event \limsup_n \frac{S_n}{\sqrt{n}} > M occurs is 0 or 1. Then P\left( \limsup_n \frac{S_n}{\sqrt{n}} > M \right) \geq \limsup_n P\left( \frac{S_n}{\sqrt{n}} > M \right) = P\bigl( \mathcal{N}(0, 1) > M \bigr) > 0 so \limsup_n \frac{S_n}{\sqrt{n}}=\infty with probability 1. An identical argument shows that \liminf_n \frac{S_n}{\sqrt{n}}=-\infty with probability 1 as well. This implies that these quantities cannot converge almost surely. 
In fact, they cannot even converge in probability, which follows from the equality \frac{S_{2n}}{\sqrt{2n}}-\frac{S_n}{\sqrt{n}} = \frac1{\sqrt2}\frac{S_{2n}-S_n}{\sqrt{n}} - (1-\frac1\sqrt2)\frac{S_n}{\sqrt{n}} and the fact that the random variables \frac{S_n}{\sqrt{n}} and \frac{S_{2n}-S_n}{\sqrt{n}} are independent and both converge in distribution to \mathcal{N}(0, 1). The law of the iterated logarithm provides the scaling factor where the two limits become different: \frac{S_n}{\sqrt{n\log\log n}} \ \xrightarrow{p}\ 0, \qquad \frac{S_n}{\sqrt{n\log\log n}} \ \stackrel{a.s.}{\nrightarrow}\ 0, \qquad \text{as}\ \ n\to\infty. Thus, although the quantity S_n/\sqrt{n\log\log n} is less than any predefined ε > 0 with probability approaching one, that quantity will nevertheless be dropping out of that interval infinitely often, and in fact will be visiting the neighborhoods of any point in the interval (-√2,√2) almost surely. Loop-erasure + Loop-erasure Assume G is some graph and \gamma is some path of length n on G. In other words, \gamma(1),\dots,\gamma(n) are vertices of G such that \gamma(i) and \gamma(i+1) are connected by an edge. Then the loop erasure of \gamma is a new simple path created by erasing all the loops of \gamma in chronological order. Formally, we define indices i_j inductively using i_1 = 1\, i_{j+1}=\max\{i:\gamma(i)=\gamma(i_j)\}+1\, where "max" here means up to the length of the path \gamma. The induction stops when for some i_j we have \gamma(i_j)=\gamma(n). Assume this happens at J i.e. i_J is the last i_j. Then the loop erasure of \gamma, denoted by \mathrm{LE}(\gamma) is a simple path of length J defined by \mathrm{LE}(\gamma)(j)=\gamma(i_j).\, Now let G be some graph, let v be a vertex of G, and let R be a random walk on G starting from v. Let T be some stopping time for R. Then the loop-erased random walk until time T is LE(R([1,T])). In other words, take R from its beginning until T — that's a (random) path — erase all the loops in chronological order as above — you get a random simple path. The stopping time T may be fixed, i.e. one may perform n steps and then loop-erase. However, it is usually more natural to take T to be the hitting time in some set. For example, let G be the graph Z2 and let R be a random walk starting from the point (0,0). Let T be the time when R first hits the circle of radius 100 (we mean here of course a discretized circle). LE(R) is called the loop-erased random walk starting at (0,0) and stopped at the circle. A spanning tree chosen randomly from among all possible spanning trees with equal probability is called a uniform spanning tree. To create such a tree Wilson’s algorithm uses loop-erased random walks. The algorithm proceeds by initializing the tree maze with an random starting cell. New cells are then subsequently added to the maze, initiating a random walk. The random walk progresses uninterrupted until it eventually links with the prevailing maze. However, if the random walk traverses through itself, the resulting loop is erased before the random walk proceeds. The initial random walks are unexpected to link with the small existing maze. As the maze develops, the random walks tend to have a higher probability to collide with the maze and may cause the algorithm to accelerate dramatically. For instance, Let G again be a graph. A spanning tree of G is a subgraph of G containing all vertices and some of the edges, which is a tree, i.e. connected and with no cycles. 
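The chronological loop-erasure just defined is short to implement. Here is a minimal Python sketch following the inductive index definition (1-based in the text, 0-based here), applied to a nearest-neighbour walk stopped at a small diamond-shaped boundary; the radius-10 stopping set is only to keep the example fast, standing in for the radius-100 circle mentioned above:

```python
import random

def loop_erase(path):
    """Loop-erasure LE(gamma) of a finite path (list of vertices), following
    i_1 = 1, i_{j+1} = max{ i : gamma(i) = gamma(i_j) } + 1, stopping when
    gamma(i_j) equals the final vertex."""
    if not path:
        return []
    erased, i = [], 0
    while True:
        erased.append(path[i])
        if path[i] == path[-1]:
            return erased
        i = max(k for k, vertex in enumerate(path) if vertex == path[i]) + 1

def random_walk_until(start, is_target, rng,
                      steps=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Nearest-neighbour walk on Z^2 from start until is_target(vertex) holds."""
    x, y = start
    path = [(x, y)]
    while not is_target((x, y)):
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

rng = random.Random(7)
walk = random_walk_until((0, 0), lambda p: abs(p[0]) + abs(p[1]) >= 10, rng)
lerw = loop_erase(walk)
print(len(walk), len(lerw))   # the loop-erased path is much shorter and simple
print(lerw[0], lerw[-1])      # it still runs from the start to the stopping set
```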
The uniform spanning tree (UST for short) is a random spanning tree chosen among all the possible spanning trees of G with equal probability. Let now v and w be two vertices in G. Any spanning tree contains precisely one simple path between v and w. Taking this path in the uniform spanning tree gives a random simple path. It turns out that the distribution of this path is identical to the distribution of the loop-erased random walk starting at v and stopped at w. An immediate corollary is that loop-erased random walk is symmetric in its start and end points. More precisely, the distribution of the loop-erased random walk starting at v and stopped at w is identical to the distribution of the reversal of loop-erased random walk starting at w and stopped at v. This is not a trivial fact at all! Loop-erasing a path and the reverse path do not give the same result. It is only the distributions that are identical. A-priori sampling a UST seems difficult. Even a relatively modest graph (say a 100x100 grid) has far too many spanning trees to prepare a complete list. Therefore a different approach is needed. There are a number of algorithms for sampling a UST, but we will concentrate on Wilson's algorithm. Take any two vertices and perform loop-erased random walk from one to the other. Now take a third vertex (not on the constructed path) and perform loop-erased random walk until hitting the already constructed path. This gives a tree with either two or three leaves. Choose a fourth vertex and do loop-erased random walk until hitting this tree. Continue until the tree spans all the vertices. It turns out that no matter which method you use to choose the starting vertices you always end up with the same distribution on the spanning trees, namely the uniform one. Another representation of loop-erased random walk stems from solutions of the discrete Laplace equation. Let G again be a graph and let v and w be two vertices in G. Construct a random path from v to w inductively using the following procedure. Assume we have already defined \gamma(1),...,\gamma(n). Let f be a function from G to R satisfying f(\gamma(i))=0 for all i\leq n and f(w)=1 f is discretely harmonic everywhere else Where a function f on a graph is discretely harmonic at a point x if f(x) equals the average of f on the neighbors of x. With f defined choose \gamma(n+1) using f at the neighbors of \gamma(n) as weights. In other words, if x_1,...,x_d are these neighbors, choose x_i with probability \frac{f(x_i)}{\sum_{j=1}^d f(x_j)}. Continuing this process, recalculating f at each step, with result in a random simple path from v to w; the distribution of this path is identical to that of a loop-erased random walk from v to w. An alternative view is that the distribution of a loop-erased random walk conditioned to start in some path β is identical to the loop-erasure of a random walk conditioned not to hit β. This property is often referred to as the Markov property of loop-erased random walk (though the relation to the usual Markov property is somewhat vague). It is important to notice that while the proof of the equivalence is quite easy, models which involve dynamically changing harmonic functions or measures are typically extremely difficult to analyze. Practically nothing is known about the p-Laplacian walk or diffusion-limited aggregation. Another somewhat related model is the harmonic explorer. 
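Wilson's algorithm as described above is compact enough to sketch. In the Python version below, the "last exit" parent-pointer bookkeeping is one standard way of realizing the loop-erased walks (recording only the final direction taken out of each visited vertex implicitly erases the loops); the 4×4 grid and the seed are arbitrary choices:

```python
import random

def wilson_uniform_spanning_tree(vertices, neighbours, seed=None):
    """Sample a uniform spanning tree: repeatedly run loop-erased random walks
    from unvisited vertices until they hit the tree built so far."""
    rng = random.Random(seed)
    vertices = list(vertices)
    in_tree = {vertices[0]}          # root the tree at any fixed vertex
    tree_edges = set()
    for v in vertices[1:]:
        if v in in_tree:
            continue
        parent = {}                  # last exit direction from each vertex
        u = v
        while u not in in_tree:      # random walk until it hits the tree
            parent[u] = rng.choice(neighbours[u])
            u = parent[u]
        u = v                        # retrace the loop-erased path, attach it
        while u not in in_tree:
            tree_edges.add((u, parent[u]))
            in_tree.add(u)
            u = parent[u]
    return tree_edges

n = 4                                # small 4x4 grid graph
verts = [(i, j) for i in range(n) for j in range(n)]
nbrs = {(i, j): [(i + di, j + dj)
                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < n and 0 <= j + dj < n]
        for i, j in verts}
tree = wilson_uniform_spanning_tree(verts, nbrs, seed=3)
print(len(tree))                     # a spanning tree of 16 vertices has 15 edges
```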
Finally there is another link that should be mentioned: Kirchhoff's theorem relates the number of spanning trees of a graph G to the eigenvalues of the discrete Laplacian. See spanning tree for details. (A small numerical illustration of this theorem appears at the end of this passage.)

Let d be the dimension, which we will assume to be at least 2. Examine Z^d, i.e. all the points (a_1,...,a_d) with integer a_i. This is an infinite graph with degree 2d when you connect each point to its nearest neighbors. From now on we will consider loop-erased random walk on this graph or its subgraphs.

(High dimensions) The easiest case to analyze is dimension 5 and above. In this case it turns out that the intersections are only local. A calculation shows that if one takes a random walk of length n, its loop-erasure has length of the same order of magnitude, i.e. n. Scaling accordingly, it turns out that loop-erased random walk converges (in an appropriate sense) to Brownian motion as n goes to infinity. Dimension 4 is more complicated, but the general picture is still true. It turns out that the loop-erasure of a random walk of length n has approximately n/\log^{1/3}n vertices, but again, after scaling (which takes into account the logarithmic factor) the loop-erased walk converges to Brownian motion.

(Two dimensions) In two dimensions, arguments from conformal field theory and simulation results led to a number of exciting conjectures. Assume D is some simply connected domain in the plane and x is a point in D. Take the graph G to be G:=D\cap \varepsilon \mathbb{Z}^2, that is, a grid of side length ε restricted to D. Let v be the vertex of G closest to x. Examine now a loop-erased random walk starting from v and stopped when hitting the "boundary" of G, i.e. the vertices of G which correspond to the boundary of D. Then the conjectures are:

- As ε goes to zero, the distribution of the path converges to some distribution on simple paths from x to the boundary of D (different from Brownian motion, of course; in 2 dimensions paths of Brownian motion are not simple). This distribution (denote it by S_{D,x}) is called the scaling limit of loop-erased random walk.
- These distributions are conformally invariant. Namely, if φ is a Riemann map between D and a second domain E then \phi(S_{D,x})=S_{E,\phi(x)}.
- The Hausdorff dimension of these paths is 5/4 almost surely.

The first attack at these conjectures came from the direction of domino tilings. Taking a spanning tree of G and adding to it its planar dual, one gets a domino tiling of a special derived graph (call it H). Each vertex of H corresponds to a vertex, edge or face of G, and the edges of H show which vertex lies on which edge and which edge on which face. It turns out that taking a uniform spanning tree of G leads to a uniformly distributed random domino tiling of H. The number of domino tilings of a graph can be calculated using the determinant of special matrices, which allows one to connect it to the discrete Green function, which is approximately conformally invariant. These arguments made it possible to show that certain measurable quantities of loop-erased random walk are (in the limit) conformally invariant, and that the expected number of vertices in a loop-erased random walk stopped at a circle of radius r is of the order of r^{5/4}.

In 2002 these conjectures were resolved (positively) using Stochastic Löwner Evolution. Very roughly, it is a stochastic conformally invariant ordinary differential equation which makes it possible to capture the Markov property of loop-erased random walk (and many other probabilistic processes).
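As a concrete illustration of the Kirchhoff (matrix-tree) theorem mentioned at the start of this passage, the number of spanning trees can be read off as any cofactor of the graph Laplacian. A small sketch (the function name and the K4 example are mine):

    import numpy as np

    def count_spanning_trees(adjacency):
        """Matrix-tree theorem: the number of spanning trees equals any cofactor
        of the graph Laplacian L = D - A.  Here we delete the last row/column."""
        A = np.asarray(adjacency, dtype=float)
        L = np.diag(A.sum(axis=1)) - A
        return int(round(np.linalg.det(L[:-1, :-1])))

    # Complete graph K4: Cayley's formula gives 4**(4-2) = 16 spanning trees.
    K4 = [[0, 1, 1, 1],
          [1, 0, 1, 1],
          [1, 1, 0, 1],
          [1, 1, 1, 0]]
    print(count_spanning_trees(K4))   # 16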
Self-avoidance

A self-avoiding walk is a path from one point to another which never intersects itself. Such paths are usually considered to occur on lattices, so that steps are only allowed in a discrete number of directions and of certain lengths.

Consider a self-avoiding walk on a two-dimensional n×n square grid (i.e., a lattice path which never visits the same lattice point twice) which starts at the origin, takes its first step in the positive horizontal direction, and is restricted to nonnegative grid points only. The numbers of such paths of n=1, 2, ... steps are 1, 2, 5, 12, 30, 73, 183, 456, 1151, ... (A brute-force enumeration reproducing these counts is sketched at the end of this passage.)

Similarly, consider a self-avoiding walk which starts at the origin, takes its first step in the positive horizontal direction, is not restricted to nonnegative grid points, but is restricted to take an up step before taking the first down step. The numbers of such paths of n=1, 2, ... steps are 1, 2, 5, 13, 36, 98, 272, 740, 2034, ...

Self-avoiding rook walks (yes, like in chess) are walks on an m×n grid which start from (0,0), end at (m,n), and are composed of only horizontal and vertical steps. The numbers R(m,n) of such walks grow very quickly even for small m and n; the values for m=n=1, 2, ... are 2, 12, 184, 8512, 1262816, ...

#Earl™

Synthesizing fonts can be tricky. So, fundamental to a strong endgame is intelligent use of knot theory. This exposition will explore the theory of knots and their components.

In topology, knot theory is the study of mathematical knots. While inspired by knots which appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined together so that it cannot be undone. In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R^3 (in topology, a circle isn't bound to the classical geometric concept, but to all of its homeomorphisms). Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R^3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself.

Knots can be described in various ways. Given a method of description, however, there may be more than one description that represents the same knot. For example, a common method of describing a knot is a planar diagram called a knot diagram. Any given knot can be drawn in many different ways using a knot diagram. Therefore, a fundamental problem in knot theory is determining when two descriptions represent the same knot. A complete algorithmic solution to this problem exists, but its complexity is unknown. In practice, knots are often distinguished by using a knot invariant, a "quantity" which is the same when computed from different descriptions of a knot. Important invariants include knot polynomials, knot groups, and hyperbolic invariants.

Knots can be considered in other three-dimensional spaces, and objects other than circles can be used. Higher-dimensional knots are n-dimensional spheres in m-dimensional Euclidean space.

Knot equivalence

A knot is created by beginning with a one-dimensional line segment, wrapping it around itself arbitrarily, and then fusing its two free ends together to form a closed loop (Adams 2004; Sossinsky 2002). Simply, we can say a knot K is an injective and continuous function K:[0,1]\to \mathbb{R}^3 with K(0)=K(1).
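Before continuing with knot equivalence, here is the brute-force check promised for the first self-avoiding walk sequence listed earlier. A minimal depth-first enumeration sketch (the quarter-plane restriction and the first-step-east convention follow the description above; the names are mine):

    def count_saw(n):
        """Count n-step self-avoiding walks from (0,0), first step east,
        confined to the quarter plane x >= 0, y >= 0 (assumed to be the
        restriction described in the text)."""
        def extend(pos, visited, steps_left):
            if steps_left == 0:
                return 1
            x, y = pos
            total = 0
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt[0] >= 0 and nxt[1] >= 0 and nxt not in visited:
                    total += extend(nxt, visited | {nxt}, steps_left - 1)
            return total
        return extend((1, 0), {(0, 0), (1, 0)}, n - 1)

    # Should reproduce the first terms 1, 2, 5, 12, 30, 73, 183 listed above.
    print([count_saw(n) for n in range(1, 8)])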
Topologists consider knots and other entanglements such as links and braids to be equivalent if the knot can be pushed about smoothly, without intersecting itself, to coincide with another knot. The idea of knot equivalence is to give a precise definition of when two knots should be considered the same even when positioned quite differently in space. A formal mathematical definition is that two knots K_1,K_2 are equivalent if there is an orientation-preserving homeomorphism h\colon\mathbb{R}^3\to\mathbb{R}^3 with h(K_1)=K_2, and this is known as an ambient isotopy.

Knot diagrams

A useful way to visualise and manipulate knots is to project the knot onto a plane; think of the knot casting a shadow on the wall. A small change in the direction of projection will ensure that it is one-to-one except at the double points, called crossings, where the "shadow" of the knot crosses itself once transversely (Rolfsen 1976). At each crossing, to be able to recreate the original knot, the over-strand must be distinguished from the under-strand. This is often done by creating a break in the strand going underneath. The resulting diagram is an immersed plane curve with the additional data of which strand is over and which is under at each crossing. (These diagrams are called knot diagrams when they represent a knot and link diagrams when they represent a link.) Analogously, knotted surfaces in 4-space can be related to immersed surfaces in 3-space.

A reduced diagram is a knot diagram in which there are no reducible crossings (also called nugatory or removable crossings), or in which all of the reducible crossings have been removed.

Knot invariance

A knot invariant is a quantity (in a broad sense) defined for each knot which is the same for equivalent knots. The equivalence is often given by ambient isotopy but can be given by homeomorphism. Some invariants are indeed numbers, but invariants can range from the simple, such as a yes/no answer, to those as complex as a homology theory. Research on invariants is motivated not only by the basic problem of distinguishing one knot from another but also by the wish to understand fundamental properties of knots and their relations to other branches of mathematics.

From the modern perspective, it is natural to define a knot invariant from a knot diagram. Of course, it must be unchanged (that is to say, invariant) under the Reidemeister moves. Tricolorability is a particularly simple example; a brute-force check is sketched at the end of this passage. Other examples are knot polynomials, such as the Jones polynomial, which are currently among the most useful invariants for distinguishing knots from one another, though it is not known whether there exists a knot polynomial which distinguishes all knots from each other, or even one which distinguishes just the unknot from all other knots.

Other invariants can be defined by considering some integer-valued function of knot diagrams and taking its minimum value over all possible diagrams of a given knot. This category includes the crossing number, which is the minimum number of crossings for any diagram of the knot, and the bridge number, which is the minimum number of bridges for any diagram of the knot.

Historically, many of the early knot invariants were not defined by first selecting a diagram but were defined intrinsically, which can make computing some of these invariants a challenge. For example, knot genus is particularly tricky to compute, but can be effective (for instance, in distinguishing mutants).
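The tricolorability invariant mentioned above can be checked by brute force from a diagram's crossing data. A minimal sketch; the (over-arc, under-arc, under-arc) triple encoding of crossings is an assumption of this example, not a standard library format:

    from itertools import product

    def tricolorable(num_arcs, crossings):
        """Brute-force tricolorability test: look for an arc coloring with
        colors {0, 1, 2} that uses at least two colors and satisfies
        2*over = under1 + under2 (mod 3) at every crossing."""
        for colors in product(range(3), repeat=num_arcs):
            if len(set(colors)) < 2:
                continue  # need at least two distinct colors
            if all((2 * colors[o] - colors[a] - colors[b]) % 3 == 0
                   for o, a, b in crossings):
                return True
        return False

    # Trefoil: three arcs, each arc passing over one of the three crossings.
    print(tricolorable(3, [(0, 1, 2), (1, 2, 0), (2, 0, 1)]))  # True
    # Unknot: a single arc, no crossings, so at most one color can be used.
    print(tricolorable(1, []))                                  # False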
The complement of a knot (as a topological space) is known to be a "complete invariant" of the knot by the Gordon–Luecke theorem, in the sense that it distinguishes the given knot from all other knots up to ambient isotopy and mirror image. Some invariants associated with the knot complement include the knot group, which is just the fundamental group of the complement. The knot quandle is also a complete invariant in this sense, but it is difficult to determine whether two quandles are isomorphic. By Mostow–Prasad rigidity, the hyperbolic structure on the complement of a hyperbolic link is unique, which means the hyperbolic volume is an invariant for these knots and links. Volume, and other hyperbolic invariants, have proven very effective, and are utilized in some of the extensive efforts at knot tabulation.

In recent years, there has been much interest in homological invariants of knots which categorify well-known invariants. Heegaard Floer homology is a homology theory whose Euler characteristic is the Alexander polynomial of the knot. It has been proven effective in deducing new results about the classical invariants. Along a different line of study, there is a combinatorially defined cohomology theory of knots called Khovanov homology whose Euler characteristic is the Jones polynomial. This has recently been shown to be useful in obtaining bounds on slice genus whose earlier proofs required gauge theory. Khovanov and Rozansky have since defined several other related cohomology theories whose Euler characteristics recover other classical invariants. Stroppel gave a representation-theoretic interpretation of Khovanov homology by categorifying quantum group invariants.

There is also growing interest from both knot theorists and scientists in understanding "physical" or geometric properties of knots and relating them to topological invariants and knot type. An old result in this direction is the Fary–Milnor theorem, which states that if the total curvature of a knot K in \mathbb{R}^3 satisfies \oint_K \kappa \,ds \leq 4\pi, where \kappa(p) is the curvature at p, then K is an unknot. Therefore, for knotted curves, \oint_K \kappa\,ds > 4\pi. An example of a "physical" invariant is ropelength, which is the amount of 1-inch diameter rope needed to realize a particular knot type.

Knotting/Unknotting

A knot in three dimensions can be untied when placed in four-dimensional space. This is done by changing crossings. Suppose one strand is behind another as seen from a chosen point. Lift it into the fourth dimension, so there is no obstacle (the front strand having no component there); then slide it forward, and drop it back, now in front. Analogies for the plane would be lifting a string up off the surface, or removing a dot from inside a circle.

Since a knot can be considered topologically a 1-dimensional sphere, the next generalization is to consider a two-dimensional sphere embedded in a four-dimensional ball. Such an embedding is unknotted if there is a homeomorphism of the 4-sphere onto itself taking the 2-sphere to a standard "round" 2-sphere. Suspended knots and spun knots are two typical families of such 2-sphere knots.

The mathematical technique called "general position" implies that for a given n-sphere in the m-sphere, if m is large enough (depending on n), the sphere should be unknotted. In general, piecewise-linear n-spheres form knots only in (n + 2)-space, although this is no longer a requirement for smoothly knotted spheres. In fact, there are smoothly knotted (4k − 1)-spheres in 6k-space, e.g.
there is a smoothly knotted 3-sphere in the 6-sphere. Thus the codimension of a smooth knot can be arbitrarily large when not fixing the dimension of the knotted sphere; however, any smooth k-sphere in an n-sphere with 2n − 3k − 3 > 0 is unknotted. The notion of a knot has further generalisations in mathematics; see knot (mathematics) and isotopy classification of embeddings. Every knot in S^n is the link of a real-algebraic set with isolated singularity in R^{n+1}. An n-knot is a single S^n embedded in S^m. An n-link is k copies of S^n embedded in S^m, where k is a natural number.

Sum topology

Two knots can be added by cutting both knots and joining the pairs of ends. The operation is called the knot sum, or sometimes the connected sum or composition of two knots. This can be formally defined as follows: consider a planar projection of each knot and suppose these projections are disjoint. Find a rectangle in the plane where one pair of opposite sides are arcs along each knot while the rest of the rectangle is disjoint from the knots. Form a new knot by deleting the first pair of opposite sides and adjoining the other pair of opposite sides. The resulting knot is a sum of the original knots. Depending on how this is done, two different knots (but no more) may result. This ambiguity in the sum can be eliminated by regarding the knots as oriented, i.e. as having a preferred direction of travel along the knot, and requiring that the arcs of the knots in the sum be oriented consistently with the oriented boundary of the rectangle.

The knot sum of oriented knots is commutative and associative. A knot is prime if it is non-trivial and cannot be written as the knot sum of two non-trivial knots. A knot that can be written as such a sum is composite. There is a prime decomposition for knots, analogous to prime and composite numbers. For oriented knots, this decomposition is also unique. Higher-dimensional knots can also be added, but there are some differences. While you cannot form the unknot in three dimensions by adding two non-trivial knots, you can in higher dimensions, at least when one considers smooth knots in codimension at least 3.

Pink program

Works are scored for egglepple (brane + kindergarten), necessitating fugal spacetime complexity controls*. Cassette (patch batch) compilation depends on jukespace being coordinated dyadically. Thus, Pink programming [ie., functional twistor algorithms declared from supermathematics (eg., superalgebras, supergeometries, superprobabilities, ultrametric analysis/supersymmetries, etc. of superstring theory)] is . Control here means isolating limits on acoustic bounds. My global interest is validating the formula of cryptocommodity (ie., parimutuelcybernetic governance✔). This is an act of perpetual quotient load normalization between fiat (government-issued) currencies and their crypto counterparts, which could, in theory, make MONEY (universally) fungible. To accomplish this in reality would take either an international banking decree, or extremely large-scale mass participation (in terms of nodal saturation). Neither is a small feat. The arcade already in place needs to be cellularly automated, and scaled to handle elastic data sets. If you are a developer (mathematician/computer scientist/music theorist/biochemist/condensed matter specialist/whatever, etc.), I invite you to join me in helping build this stuff.👷🏿 Cassettes are developed by 'touch-and-go':= patches (plus subiteratives) might be contracted (pu$h/pull → deploy) on open puzzles.
Trunks + branches + twigs + roots require Link Starbureiy's signature before being migrated/published. To reiterate, development loosely follows the open source model (ie., desired suggestions should be directed at the community), so, given certain permissions (contract or signature), anyone can contribute. Please, no junk. Don't be a luser. Who cares Solutions are journaled in (the) UUelcome Matte (ISSN 2165-6738), the foremost scholastic industry-reviewed magazine for juking activities. Financial firms and exchanges care because we have a cryptocurrent system that generates new wealth mechanisms.💸 Information technology security specialists are interested in the asymmetrics. Academic research labs - in cooperation with businesses, like biotech companies - are paying close attention because macromolecular structure is essential to its function, and that helps with the process of engineering better medicines and environmental applications. "UUe is at the very foundation of juking. We speak of this in terms of utility, and so our best practice when fielding the endeavor must be whetted in simplicity. It is important that we not limit ourselves to a specific type of automata. The most profitable approach is to consider everything, and then hedge. An intelligent way to bond with creativity is through idea incubation🎨: synthesize as much information as possible, create as many potentials as needed, and then refine those potentials. Even still, an idea is not a solution until it is put into use." - IMPACT // The best work of your career happens here UUe hosts four (4) sections: stereotypography, operation, interfacing, and ludology [SOIL]. Development happens in episodes, with each episode having specific objectives that advance at least one (1) of the sections. Plenty of calculator power would have to be in place for the possibly enormous number of SOIL computables. Now, I obviously can't - and am not going to - sort through dozens of scribble per person per opus at one time, so there needs to be a better method of opening. To do this, we're building and implementing a most robust arcade (as a distributed supercomputer). + Processors used in computers today are based on von Neumann architecture; that is, they rely on a stored program system (i.e., one that keeps its program instructions in RAM). In order for different (even neighboring) sections of the processor to communicate with each other (and the compositional elements), wires are needed for interconnects. ... Before continuing, let me first offer an admission: the von Neumann architecture and approach to computer systems works fine. Now, the kicker: there's an ever better solution. In fact, it's almost necessary to adopt it. Keeping in mind the dynamics of twistor space, qubits dictate that data can/must be in multiple states (quantum pairing/superpositions) simultaneously. So, the play is to build the machinery that accomplishes this. We design here a cellular automaton processor configured as a systolic array (homogeneous batches of tightly coupled cells), where no wires are needed for intracell communication, and all cells get informed synchronously. Another admission: our 'processor' starts off horizontally; that is, reliant on distributed computing. The vertical SoC (system on a chip) is actually a byproduct since we are concerned with core acquisition. UUe by
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36972832679748535, "perplexity": 3156.9969608279257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688997.70/warc/CC-MAIN-20170922145724-20170922165724-00658.warc.gz"}
http://arundquist.wordpress.com/2013/01/31/what-is-angular-momentum/
## What is angular momentum? I’ve spent some time today preparing for my theoretical mechanics class. We’re slated to go over the angular versions of Newton’s laws, with a heavy emphasis on conservation of angular momentum. And . . . I’m stuck. This actually happens to me every time I come around to teaching angular momentum. I try really hard to come up with a way that makes it seem reasonable or “from the gut” and I fail. This post is really just getting my thoughts down so I don’t argue with myself so much the next time around. ## Quantify spinning When I talk to students about linear momentum, they don’t seem to have a hard time internalizing the concept that mass and speed should be involved in describing/quantifying motion. But when I shift to angular momentum, I struggle. I asked this on twitter today and got some great suggestions: One was “strength of spinnyness” from my friend Fran Poodry, and I like it, but I’m not sure if students would go from that to $\vec{r}\times\vec{p}$. The same goes with the phrase I used to use: “it’s a measure of how much you’re moving around me (or the origin).” In fact, I’ve stopped using that latter phrase because I think that it leads to students wanting to put r (the distance from the origin) in the denominator. For example, consider a car traveling down a nearby road. If you’re close to the road, that car is “going around you” much faster than if you’re far from the road (the portion of the circle it’s traveling is larger). But, from the perspective of angular momentum, the reverse is true. I’ve also tried Kepler’s 2nd law approach, but that’s a little unsatisfying as well. The argument goes that measuring the speed at which the particle is sweeping out area is like measuring the “strength of spinnyness.” I’ve had a hard time selling that, though it certainly does lead to $\vec{r}\times\vec{p}$ rather nicely (as long as you’re willing to live with a factor of 2 times the mass). ## Rigid body approach My twitter friends are encouraging me that going with defining the moment of inertia first can be really helpful. If students can get I into their gut, then $L=I\omega$ isn’t that hard. I agree, but I think there’s some lurking difficulty in the definition of I. Specifically, why does having mass away from the axis (the distance is squared in the definition of I) matter? You can certainly have students interact with things to show them that makes sense, and you can also show that $\vec{r}\times\vec{F}$ is a useful concept, but, really, shouldn’t angular momentum for a simple particle be important, before jumping into rigid bodies? ## Noether’s theorem If you consider a system whose mechanical properties are unaffected by the rotation of the system. Noether’s theorem shows how that leads to the conservation of $\vec{r}\times\vec{p}$. That’s cool, but not something I’m prepared to do at the beginning of this class. ## Ideas I want to get across I want to show that if you take a time derivative of angular momentum, you find torque. So, no torque leads to conservation of angular momentum. But, if I can’t get a good picture of angular momentum into my students’ guts, how is this helpful? I also want a tie between “strength of spinnyness” and $\vec{r}\times\vec{p}$. I don’t want to hand it down on a silver platter. I want them to be as comfortable with it as they are with linear momentum. I’m finding that’s hard because I’m not as comfortable. Ok, this is a pretty shoddy post, but most of my ideas are down now. Please feel free to join the conversation. 
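A quick numerical check of the car example above, with made-up numbers: for a car moving along a straight road, r×p about the observer keeps a fixed magnitude m·v·d, where d is the perpendicular distance to the road, no matter how far along the road the car is. A short sketch (all values assumed for illustration):

    import numpy as np

    m, v, d = 1500.0, 20.0, 30.0            # kg, m/s, m (assumed values)
    p = np.array([m * v, 0.0, 0.0])         # momentum along the road (x-axis)
    for x in (-200.0, -50.0, 0.0, 50.0, 200.0):
        r = np.array([x, d, 0.0])           # car's position relative to you
        L = np.cross(r, p)
        # z-component is always -m*v*d; the magnitude m*v*d does not depend on x
        print(x, L[2])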
We don’t spend time on this until next Wednesday, so there’s plenty of time to call me an idiot help me out. Associate professor of physics at Hamline. This entry was posted in physics, teaching. Bookmark the permalink. ### 6 Responses to What is angular momentum? 1. bretbenesh says: I have no ideas. But, as a person who has some experience with physics but is not an expert, I am happy to hear that I am not the only person who cannot completely put the meaning of angular momentum into words. Thanks for making me feel better about myself! 2. cgoedde says: This is a great question. After thinking about it for a while, here are my thoughts. First, I wouldn’t try to shoehorn it into a single description. I think it’s better to use two. After all, we talk about both spin and orbital angular momentum. For spin, something like Fran’s strength of spinning is fine, but I might use “momentum of spinning” instead. For orbital, the best I can come up with right now is “momentum past a point, scaled by distance”. For the latter, I would introduce it by something like the following: “You are standing on skates on ice. Someone throws a baseball to you. Compare your rate of spin after you catch the ball in the following three cases: (a) The ball is thrown straight at you. (b) The ball is thrown slightly to your right or left, so that you can easily catch it. (c) The ball is thrown well to your right or left, so that you really have to stretch out your arm to catch it.” The goal here is to tie orbital to spin angular momentum and to lead up to r x p. I think the answers should be reasonably intuitive, and lead the students to the idea that both r and p are important in considering the “spininess” of a moving object. The r = 0 case is especially important, for this, I think. • Andy "SuperFly" Rundquist says: Wow, I really like that notion of the skates. How much would this make you spin around if you caught it? that’s a great questions and has all the hallmarks you need for both angular momentum and torque. 3. Steve Maier says: Ditto on the example of a person on ice skates: that’s about the best concrete way to get at it as I’ve heard. I like that it ties linear momentum with the new context of circular motion. In colloquial language usage, “momentum” means something very close to it’s operational definition. Just ask your students to give you examples of objects with “little” or “a lot” of momentum before it’s ever discussed in class. They’re usually spot on. The problem is, “momentum” is used pretty interchangeably with other terms like energy, force, strength, velocity, mass, etc. Listening to sportscasters during a football game will bring this to light. For angular momentum, it’s interesting to note that the full scope of it’s meaning may appear to require an understanding of moment of inertia and torque–whereas for linear momentum, there really isn’t much of a hangup for students. This could be evidenced by asking students to give examples of something that has “little” or “a lot” of angular momentum before linear momentum is brought up (you’ll likely get confused looks). So, would a full understanding of linear momentum be lost if introduced prior to forces and Newton’s laws of motion? Or does it just mean that students intuitively know what mass and force are before they step into the classroom? I’d like to see a survey done that asks a few hundred expert/professional ice skaters (who haven’t had physics class) to define angular momentum in their own words. 
Based on their experiences, they might give a better colloquial definition than a seasoned physics teacher! • Andy "SuperFly" Rundquist says:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.685282826423645, "perplexity": 503.2218825681448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010292910/warc/CC-MAIN-20140305090452-00084-ip-10-183-142-35.ec2.internal.warc.gz"}
https://proceedings.neurips.cc/paper/2018/hash/03c6b06952c750899bb03d998e631860-Abstract.html
#### Authors
Simon S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Russ R. Salakhutdinov, Aarti Singh
#### Abstract
A widespread folklore for explaining the success of Convolutional Neural Networks (CNNs) is that CNNs use a more compact representation than the Fully-connected Neural Network (FNN) and thus require fewer training samples to accurately estimate their parameters. We initiate the study of rigorously characterizing the sample complexity of estimating CNNs. We show that for an $m$-dimensional convolutional filter with linear activation acting on a $d$-dimensional input, the sample complexity of achieving population prediction error of $\epsilon$ is $\widetilde{O}(m/\epsilon^2)$, whereas the sample complexity for its FNN counterpart is lower bounded by $\Omega(d/\epsilon^2)$ samples. Since, in typical settings, $m \ll d$, this result demonstrates the advantage of using a CNN. We further consider the sample complexity of estimating a one-hidden-layer CNN with linear activation where both the $m$-dimensional convolutional filter and the $r$-dimensional output weights are unknown. For this model, we show that the sample complexity is $\widetilde{O}\left((m+r)/\epsilon^2\right)$ when the ratio between the stride size and the filter size is a constant. For both models, we also present lower bounds showing our sample complexities are tight up to logarithmic factors. Our main tools for deriving these results are a localized empirical process analysis and a new lemma characterizing the convolutional structure. We believe that these tools may inspire further developments in understanding CNNs.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265298843383789, "perplexity": 645.2511465936991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150266.65/warc/CC-MAIN-20210724125655-20210724155655-00565.warc.gz"}
https://aviation.stackexchange.com/questions/79852/how-do-i-determine-holding-pattern-entry-and-direction-in-this-example?noredirect=1
# How do I determine holding pattern entry and direction in this example? I need help with this question for the IFR written test: "A pilot receives this ATC clearance: '...CLEARED TO THE XYZ VORTAC. HOLD NORTH ON THE THREE SIX ZERO RADIAL, LEFT TURNS...' What is the recommended procedure to enter the holding pattern?" Why is the answer on the left the correct visualization? It doesn't say if I should hold left or right of the radial. Both answers are left turns. Both answers are holding north. What am I missing? Thank you all for your help. My 150 arrow was wrong and I was confusing myself. Also, it's helpful to know that the inbound leg terminates at the fix (which entirely makes sense now). So, my new answer is direct entry and here is my new visualization: Neither of the two representations show holding North if the arrow depicted in the corner is truly pointing towards 150°. If this is the case, North is at the bottom of the page and South is at the top. Therefore, both representations are showing holding South of the fix. If the dot depicted is the fix, the representation on the left is not a holding pattern. Holding patterns have their inbound leg terminating at the fix. Your turn towards the outbound leg would then begin when crossing the fix on the outbound leg. Furthermore, your HSI is showing that you are on the 150° To-Radial of the fix, on a 155° Heading, and a 16.5 DME. This would put your aircraft inbound, 16.5 Nautical Miles North of the fix, roughly on the 330° From-Radial (or just Radial). You should intercept the inbound leg 360° Radial of the fix well before crossing it. Then effect a Direct Entry, Performing your 5Ts Once you hit the fix with wings level. You could, however, continue direct to the fix, inbound on the 330° Radial, and still perform a Direct Entry. I just, personally, think it’s easier to intercept the inbound (360° Radial) leg first. Research AIM 5-3-8 for further details. • Thank you! This is very helpful. I just edited my question to post my new visualization. – user2605553 Jul 27 at 23:49 • You're already on an intercept track flying the 330 radial, so a left turn to pick up the inbound track of the hold might look odd to the controller. That being said, those entry methods are just protocols for efficiency's sake; you can enter a hold any way you want as long as you stay in the protected airspace and get established on the racetrack in a reasonable time. Once you start using an FMS, the software works it out for you and you never have to figure out which one to use again. – John K Jul 28 at 1:34 The holding clearance that you have specified is invalid. CLEARED TO THE XYZ VORTAC. HOLD NORTH ON THE THREE SIX ZERO RADIAL, LEFT TURNS... You cannot hold north of the VOR on a radial that is south of it. A correct clearance would be CLEARED TO THE XYZ VORTAC. HOLD **SOUTH** ON THE THREE SIX ZERO RADIAL, LEFT TURNS... Conversely, the hold that is drawn on the left is incorrect because you are turning into the holding point. You always fly a straight segment into the holding point. The arrows are backwards. If I was given this corrected hold and flying a heading of 150°. I would end up flying a parallel entry. The corrected hold could also be: `CLEARED TO THE XYZ VORTAC. HOLD NORTH ON THE ONE EIGHT ZERO RADIAL, LEFT TURNS... In this case, I would end up flying a direct entry into the hold. • “You cannot hold north of the VOR on a radial that is south of it.” Could you clarify this statement, please. It is a little confusing. 
The 360° Radial of a VOR is a line with its origin at the VOR and its direction stretching Northward. Although, in application, the 360° Radial is a line stretching both North and South, with the VOR at its center. The To-From indication would determine whether the aircraft were sitting on the 360° Radial to the North (From indication) or the 180° Radial to the South (To indication). Heading and Bearing would determine whether you are facing the VOR. – Dean F. Jul 27 at 17:46 • If you were to look at it this way. A 360° From-Radial at 16.5 DME (5000’ MSL) is the exact same point (Lat-long) in space as a 180° To-Radial at 16.5 DME (5000’ MSL). You would be at a point on the 360° actual Radial. At that point in space, a Heading of 180° would roughly place you going towards the VOR. A Heading of 360° would roughly place you going away from the VOR. The Bearing of the VOR would be determined by whether you wanted Magnetic, Relative, etc. – Dean F. Jul 27 at 18:24 • Yeah I had a problem with that one too. The radial itself is the track extending from the station at that compass position. The clearance would be hold on the 360 radial, inbound track 180, left turns. I think his arrow on the sketch means he's heading towards the VOR on the 150 radial, not a 150 deg heading. I think the right sketch is correct for a hold on the 360 with left turns, and what was missing is the inbound track reference, which would eliminate the confusion (maybe done on purpose). In my opinion, the correct pattern is the right one, and the correct entry would be parallel. – John K Jul 27 at 18:25 • @JohnK - Maybe. Although, if the HSI represented is part of the question, the aircraft is on the 330° Radial with a Heading of 155 and a DME distance of 16.5. This would put him inbound to the VOR from the North by Northwest. – Dean F. Jul 27 at 18:32 • Oops you're right I didn't even look at the HSI. Yeah you're right both those sketches are wrong. The hold should be the right diagram flipped over so the track pattern is on the lower left instead of upper right, and the entry would be direct. – John K Jul 27 at 18:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7219500541687012, "perplexity": 1710.4773207336673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141204453.65/warc/CC-MAIN-20201130004748-20201130034748-00490.warc.gz"}
https://codegolf.stackexchange.com/questions/135598/the-infinite-power-tower/135600
# The challenge Quite simple, given an input x, calculate it's infinite power tower! x^x^x^x^x^x... For you math-lovers out there, this is x's infinite tetration. Keep in mind the following: x^x^x^x^x^x... = x^(x^(x^(x^(x...)))) != (((((x)^x)^x)^x)^x...) Surprised we haven't had a "simple" math challenge involving this!* # Assumptions • x will always converge. • Negative and complex numbers should be able to be handled • This is , so lowest bytes wins! • Your answers should be correct to at least 5 decimal places # Examples Input >> Output 1.4 >> 1.8866633062463325 1.414 >> 1.9980364085457847 [Square root of 2] >> 2 -1 >> -1 i >> 0.4382829367270323 + 0.3605924718713857i 1 >> 1 0.5 >> 0.641185744504986 0.333... >> 0.5478086216540975 1 + i >> 0.6410264788204891 + 0.5236284612571633i -i >> 0.4382829367270323 -0.3605924718713857i [4th root of 2] >> 1.239627729522762 *(Other than a more complicated challenge here) • I don’t think this tower converges at x = −2 or x = −0.5. – Anders Kaseorg Jul 25 '17 at 7:37 • @AndersKaseorg I agree, though all programs seem to have the same converging answer. Why don't they converge? – Graviton Jul 25 '17 at 7:38 • x = −2 gets attracted to a 8-cycle and x = −0.5 gets attracted to a 6-cycle. (My program still gives an answer in these cases, but it’s one of the points in the cycle and not a fixed point; this doesn’t indicate convergence.) – Anders Kaseorg Jul 25 '17 at 7:44 • @AndersKaseorg Aha very interesting. You wouldn't happen to know why '8' for -2 and '6' for -0.5? Just out of curiosity of course. – Graviton Jul 25 '17 at 7:47 • You can run the iterations just as easily as I can, but here’s a picture: commons.wikimedia.org/wiki/File:Tetration_period.png – Anders Kaseorg Jul 25 '17 at 7:50 # APL (Dyalog), 4 bytes *⍣≡⍨ Try it online! * power ⍣ until ≡ stable ⍨ selfie # Pyth,  4  3 bytes crossed out 4 is still regular 4 ;( u^Q Try it online ### How it works u first repeated value under repeated application of G ↦ ^QG input ** G Q starting at input • You don't need the last G, it will get auto-filled. – FryAmTheEggman Jul 25 '17 at 13:10 • @FryAmTheEggman Right, thanks! – Anders Kaseorg Jul 25 '17 at 20:14 For inputs that don't converge (eg. -2) this won't terminate: import Data.Complex f x=until(\a->magnitude(a-x**a)<1e-6)(x**)x Thanks a lot @ØrjanJohansen for teaching me about until and saving me 37 bytes! Try it online! • You can shorten this a lot with the until function. Try it online! – Ørjan Johansen Jul 25 '17 at 8:10 • Neat! Did not know until, thanks a lot. – ბიმო Jul 25 '17 at 8:16 # Python 3, 40 39 35 bytes • Thanks @Ørjan Johansen for a byte: d>99 instead of d==99: 1 more iteration for a lesser byte-count • Thanks @Uriel for 4 bytes: wise utilization of the fact that x**True evaluates to x in x**(d>99or g(x,d+1)). The expression in the term evaluates to True for depth greater than 99 and thus returns the passed value. Recursive lambda with a max-depth 100 i.e. for a depth 100 returns the same value. Actually is convergency-agnostic, so expect the unexpected for numbers with non-converging values for the function. g=lambda x,d=0:x**(d>99or g(x,d+1)) Try it online! • In the tio link, you can replace complex('j') with 1j – Mr. Xcoder Jul 25 '17 at 8:26 • d>99 does one more iteration and is shorter. – Ørjan Johansen Jul 25 '17 at 8:43 • save 4 bytes with g=lambda x,d=0:x**(d>99or g(x,d+1)), x**True evaluates to x – Uriel Jul 25 '17 at 12:57 • @Uriel, That is really smart..... Thanks!!! 
– officialaimm Jul 25 '17 at 13:09 # Python 3, 3730 27 bytes -7 bytes from @FelipeNardiBatista. -3 bytes from from @xnor I don't remember much of Python anymore, but I managed to port my Ruby answer and beat the other Python 3 answer :D lambda x:eval('x**'*99+'1') Try it online! • FYI, it appears that f-strings were first introduced in Python 3.6: see python.org/dev/peps/pep-0498 . (This would explain why your code didn't work for me in 3.5.2.) Just thought I'd mention this in case anyone else was confused. – mathmandan Jul 25 '17 at 16:01 • You don't need to substitute in the value of x, eval('x**'*99+'1') works – xnor Jul 25 '17 at 18:51 • @xnor doh, of course it does :) thanks – daniero Jul 25 '17 at 19:08 • @xnor Neat -- I applied the same thing in my Ruby answer and it somehow fixed it :) – daniero Jul 25 '17 at 19:17 • +1, I am slapping myself for forgetting the existence of eval.... :D – officialaimm Jul 26 '17 at 5:29 # Mathematica, 12 bytes #//.x_:>#^x& Takes a floating‐point number as input. # J, 5 bytes ^^:_~ Try it online! ## Explanation First, I'll show what command is being executed after parsing the ~ at the end, and the walk-through will be for the new verb. (^^:_~) x = ((x&^)^:_) x ((x&^)^:_) x | Input: x ^:_ | Execute starting with y = x until the result converges x&^ | Compute y = x^y • The J solution is really nice here. To break down your first line in finer grain, is it correct to say that the following happens: (^^:_) creates a new dyadic verb via the power conj, then self adverb ~ makes that verb monadic, so that when given an argument x it's expanded to x (^^:_) x. the left x subsequently "sticks", giving ((x&^)^:_) x per your note, and only the right argument changes during iteration? – Jonah Jul 25 '17 at 14:30 • @Jonah Sure, when giving two arguments to a dyad with power, x u^:n y, the left argument is bonded with the dyad to form a monad that is nested n times on y. x u^:n y -> (x&u)^:n y -> (x&u) ... n times ... (x&u) y – miles Jul 25 '17 at 14:36 # C# (.NET Core), 79 78 bytes x=>{var a=x;for(int i=0;i++<999;)a=System.Numerics.Complex.Pow(x,a);return a;} Try it online! I chose to iterate until i=999 because if I iterated until 99 some examples did not reach the required precision. Example: Input: (0, 1) Expected output: (0.4382829367270323, 0.3605924718713857) Output after 99 iterations: (0.438288569331222, 0.360588154553794) Output after 999 iter.: (0.438282936727032, 0.360592471871385) As you can see, after 99 iterations the imaginary part failed in the 5th decimal place. Input: (1, 1) Expected output: (0.6410264788204891, 0.5236284612571633) Output after 99 iterations: (0.64102647882049, 0.523628461257164) Output after 999 iter.: (0.641026478820489, 0.523628461257163) In this case after 99 iterations we get the expected precision. In fact, I could iterate until i=1e9 with the same byte count, but that would make the code considerably slower • 1 byte saved thanks to an anonymous user. • +1 For the complex class I didn't even know that existed. – TheLethalCoder Jul 25 '17 at 11:29 • @TheLethalCoder neither did I until I googled it. :-) – Charlie Jul 25 '17 at 11:30 ³*$ÐL Try it online! # Ruby, 21 20 bytes ->n{eval'n**'*99+?1} Disclaimer: It seems that Ruby returns some weird values when raising a complex number to a power. I assume it's out of scope for this challenge to fix Ruby's entire math module, but otherwise the results of this function should be correct. 
Edit: Applied the latest changes from my Python 3 answer and suddenly it somehow gives the same, expected results :) Try it online! • Take out the space after the eval. – Value Ink Jul 25 '17 at 21:41 • Your original version failed on the complex test case because it evaled the string "0+1i**0+1i**0+1i**...", which parses in the wrong way since ** has higher precedence than +. – Ørjan Johansen Jul 26 '17 at 0:39 • @ØrjanJohansen huh, you're right. I guess I was fooled by the fact that #inspect and #to_s return different values. Before submitting the initial answer I did some testing in irb and saw that e.g entering Complex(1,2) in the REPL would give (1+2i), including the parentheses. When stringifying the value however the parentheses are not included, so the precedence, as you point out, messed it up. – daniero Jul 26 '17 at 2:37 • I thought eval use was forbidden. – V. Courtois Jul 26 '17 at 9:07 • @V.Courtois Ok. But it's not. – daniero Jul 26 '17 at 10:10 # TI-BASIC, 16 bytes The input and output are stored in Ans. Ans→X While Ans≠X^Ans X^Ans End # R, 36 33 bytes -3 bytes thanks to Jarko Dubbeldam Reduce(^,rep(scan(,1i),999),,T) Reads from stdin. Reduces from the right to get the exponents applied in the correct order. Try it (function) Try it (stdin) • scan(,1i) works. Similar to how scan(,'') works. – JAD Jul 25 '17 at 19:20 • @JarkoDubbeldam of course! sometimes my brain doesn't work. – Giuseppe Jul 25 '17 at 19:24 # Javascript, 33 bytes f=(x,y=x)=>(x**y===y)?y:f(x,x**y) • JavaScript doesn't handle imaginary numbers. – kamoroso94 Jul 25 '17 at 22:24 # MATL, 20 10 bytes cut down to half thanks to @LuisMendo t^Gw^t5M- Try it online! This is my first and my first time using MATL so i'm sure it could be easily outgolfed. • Welcome to the site, and nice first answer! A few suggestions: XII is equivalent to t. You can also get rid of XH and H using the automatic clipboard M, that is, ttt^yw^t5M-]bb-x. And in the last part, instead of deleting the unwanted values you can use &, which tells the implicit display function to only show the top. So, you can use ttt^yw^t5M-]& and save a few bytes. – Luis Mendo Jul 27 '17 at 15:01 • Also, the first t is not needed, and using G instead of another t you can avoid & and thus leave ] implicit: t^Gw^t5M-. Hey, we've reduced byte count by a half! – Luis Mendo Jul 27 '17 at 15:09 • @LuisMendo Thanks for the great tips! I have a lot to learn about MATL, but I really like it. – Cinaski Jul 28 '17 at 9:11 • Glad to hear that! – Luis Mendo Jul 28 '17 at 9:11 # Perl 6, 17 bytes {[R**]$_ xx 999} Try it online! R** is the reverse-exponentiation operator; x R** y is equal to y ** x. [R**] reduces a list of 999 copies of the input argument with reverse exponentiation.
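For reference, the fixed point that all of these golfed programs approximate can be computed with a plain, ungolfed iteration. A sketch (not a competing entry), assuming the tower converges for the given input as the challenge guarantees:

    # Ungolfed reference: iterate z -> x**z until it stabilizes.
    def tower(x, tol=1e-12, max_iter=10000):
        z = x
        for _ in range(max_iter):
            w = x ** z
            if abs(w - z) < tol:
                return w
            z = w
        return z

    print(tower(1.414))      # ~1.9980364
    print(tower(2 ** 0.5))   # ~2.0
    print(tower(1j))         # ~(0.43828 + 0.36059j)
    print(tower(0.5))        # ~0.6411857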
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4300539195537567, "perplexity": 4760.11992113479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147628.27/warc/CC-MAIN-20200228170007-20200228200007-00018.warc.gz"}
http://dlmf.nist.gov/software/
# Software Index ## § Software Cross Index ‘✓’ indicates that a software package implements the functions in a section; ‘a’ indicates available functionality through optional or add-on packages; an empty space indicates no known support. ## § A Classification of Software In the list below we identify four main sources of software for computing special functions. Please see our Software Indexing Policy for rules that govern the indexing of software in the DLMF. Research Software. This is software of narrow scope developed as a byproduct of a research project and subsequently made available at no cost to the public. The software is often meant to demonstrate new numerical methods or software engineering strategies which were the subject of a research project. When developed, the software typically contains capabilities unavailable elsewhere. While the software may be quite capable, it is typically not professionally packaged and its use may require some expertise. The software is typically provided as source code or via a web-based service, and no support is provided. Open Source Collections and Systems. These are collections of software (e.g. libraries) or interactive systems of a somewhat broad scope. Contents may be adapted from research software or may be contributed by project participants who donate their services to the project. The software is made freely available to the public, typically in source code form. While formal support of the collection may not be provided by its developers, within active projects there is often a core group who donate time to consider bug reports and make updates to the collection. Software Associated with Books. An increasing number of published books have included digital media containing software described in the book. Often, the collection of software covers a fairly broad area. Such software is typically developed by the book author. While it is not professionally packaged, it often provides a useful tool for readers to experiment with the concepts discussed in the book. The software itself is typically not formally supported by its authors. Commercial Software. Such software ranges from a collection of reusable software parts (e.g., a library) to fully functional interactive computing environments with an associated computing language. Such software is usually professionally developed, tested, and maintained to high standards. It is available for purchase, often with accompanying updates and consulting support. ## § Software Repositories The following are web-based software repositories with significant holdings in the area of special functions. Many research software packages are found here, as well as some open source software collections. Collected Algorithms of the ACM
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 155, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1500208079814911, "perplexity": 2053.7852090949855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106754.4/warc/CC-MAIN-20170820131332-20170820151332-00049.warc.gz"}
https://www.lessonplanet.com/teachers/case-practice-exercise
# Case Practice Exercise

In this grammar activity, students practice choosing the appropriate pronoun in twenty sentences so that each one is grammatically correct.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8751375079154968, "perplexity": 12828.251770711946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813626.7/warc/CC-MAIN-20180221143216-20180221163216-00550.warc.gz"}
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.112.023902
# Synopsis: Herding Particles to Make a Mirror

Laser trapping offers a way to create mirrors out of microparticles.

Several technologies, such as optical tweezers for holding and manipulating microbes and other single particles, are based on creating forces with optical fields. Researchers also want to grab and arrange multiple particles to form useful structures. Now, Tomasz Grzegorczyk from BAE Systems in Burlington, Massachusetts, and colleagues have successfully created a reflecting surface from hundreds of microparticles locked into position at the focus of an intense laser beam. Small optical elements may be a near-term application, but the researchers also speculate that the result, published in Physical Review Letters, could be one step on the way to ultralight telescope mirrors in space.

The researchers made their multiparticle mirror out of 3-micrometer-diameter polystyrene microspheres suspended in water. A laser emitting continuous 532-nanometer-wavelength light provided the optical trapping field. The laser was focused to a 40-micrometer spot that caused the spheres to gather in a thin, closely packed layer on the glass surface of a sample cell. Using a set of lenses, Grzegorczyk et al. reflected an image created with a 633-nanometer laser off of the microparticle array and collected it with a camera. Their results show the mirror can transmit the image with good fidelity. The authors also carried out numerical simulations of mirrors made from microspheres in various arrangements (parabolic surface, etc.) and conclude that laser-trapped mirrors could, in principle, attain the imaging quality of existing optical systems. – David Voss
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22685348987579346, "perplexity": 3454.549802844944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00088-ip-10-185-217-139.ec2.internal.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/13954
# ABSOLUTE INTENSITIES OF $O_{3}$ LINES IN THE 9-11 $\mu m$ REGION

Files: 1997-RF-03.jpg (207.0 KB, JPEG image)

Title: ABSOLUTE INTENSITIES OF $O_{3}$ LINES IN THE 9-11 $\mu m$ REGION
Creators: Smith, M. A. H.; Rinsland, C. P.; Devi, V. Malathy; Benner, D. Chris
Issue Date: 1997
Abstract: We have extended our previous analysis of high-resolution absorption spectra of ozone$^{h}$ to determine absolute intensities of nearly 200 $^{16}O_{3}$ lines in the 9-11 $\mu m$ region. The spectra were recorded at room temperature using the Fourier transform spectrometer at the McMath-Pierce facility of the National Solar Observatory at Kitt Peak, covering the 800-1400 cm$^{-1}$ region at 0.0027 cm$^{-1}$ resolution. The ozone samples were contained in a glass cell having crossed IR-transmitting and UV-transmitting paths approximately 10 cm in each direction. A 254 nm UV-absorption monitor of the same design as Pickett et al.$^{i}$ was used to measure the ozone partial pressures, which were kept at approximately 0.3 to 0.5 Torr to prevent the appearance of saturated lines. Only spectra for which the ozone partial pressure varied by less than 1.0% during the recording time were selected for analysis. Using our multispectrum nonlinear least-squares procedure,$^{j}$ we have fit four spectra simultaneously to determine intensities for numerous lines in both the P and R branches of the $\nu_{3}$ fundamental band and several lines in the $\nu_{1}$ band. On average, our measured intensities are only 1% larger than the values on the current HITRAN compilation$^{k}$. Our measurement set includes 44 $\nu_{3}$ lines in common with other recent experimental studies.$^{b,l,m}$ Comparison of these various measurements shows excellent agreement for a few lines and adequate agreement (considering all possible sources of uncertainty and systematic errors) for the others.
URI: http://hdl.handle.net/1811/13954
Other Identifiers: 1997-RF-03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6788934469223022, "perplexity": 2204.2971411332896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010732251/warc/CC-MAIN-20140305091212-00005-ip-10-183-142-35.ec2.internal.warc.gz"}
https://docs.px4.io/master/zh/flying/terrain_following_holding.html
# Terrain Following/Holding & Range Aid PX4 supports Terrain Following and Terrain Hold in Position and Altitude modes, on multicopters and VTOL vehicles in MC mode that have a distance sensor. PX4 also supports using a distance sensor as the primary source of altitude data in any mode, either all the time, or just when flying at low altitudes at low velocities (Range Aid). ## Terrain Following Terrain following enables a vehicle to automatically maintain a relatively constant height above ground level when traveling at low altitudes. This is useful for avoiding obstacles and for maintaining constant height when flying over varied terrain (e.g. for aerial photography). This feature can be enabled in Position and Altitude modes, on multicopters and VTOL vehicles in MC mode that have a distance sensor. When terrain following is enabled, PX4 uses the output of the EKF estimator to provide the altitude estimate, and the estimated terrain altitude (calculated from distance sensor measurements using another estimator) to provide the altitude setpoint. As the distance to ground changes, the altitude setpoint adjusts to keep the height above ground constant. At higher altitudes (when the estimator reports that the distance sensor data is invalid) the vehicle switches to altitude following, and will typically fly at a near-constant height above mean sea level (AMSL) using the barometer for altitude data. More precisely, the vehicle will use the primary source of altitude data as defined in EKF2_HGT_MODE. This is, by default, the barometer. Terrain following is enabled by setting MPC_ALT_MODE to 1. ## Terrain Hold Terrain hold uses a distance sensor to help a vehicle to better maintain a constant height above ground in altitude control modes, when horizontally stationary at low altitude. This allows a vehicle to avoid altitude changes due to barometer drift or excessive barometer interference from rotor wash. This feature can be enabled in Position and Altitude modes, on multicopters and VTOL vehicles in MC mode that have a distance sensor. When moving horizontally (speed > MPC_HOLD_MAX_XY), or above the altitude where the distance sensor is providing valid data, the vehicle will switch into altitude following. Terrain holding is enabled by setting MPC_ALT_MODE to 2. Terrain hold is implemented similarly to terrain following. It uses the output of the EKF estimator to provide the altitude estimate, and the estimated terrain altitude (calculated from distance sensor measurements using a separate, single state terrain estimator) to provide the altitude setpoint. If the distance to ground changes due to external forces, the altitude setpoint adjusts to keep the height above ground constant. ## Distance Sensor as Primary Source of Height PX4 allows you to make a distance sensor the primary source of altitude data (in any flight mode/vehicle type). This may be useful when no barometer is available, or for applications when the vehicle is guaranteed to only fly over a near-flat surface (e.g. indoors). The default and preferred altitude sensor for most use cases is the barometer (when available). When using a distance sensor as the primary source of height, fliers should be aware: • Flying over obstacles can lead to the estimator rejecting rangefinder data (due to internal data consistency checks), which can result in poor altitude holding while the estimator is relying purely on accelerometer estimates. 
This scenario might occur when a vehicle ascends a slope at a near-constant height above ground, because the rangefinder altitude does not change while that estimated from the accelerometer does. The ECL performs innovation consistency checks that take into account the error between measurement and current state as well as the estimated variance of the state and the variance of the measurement itself. If the checks fail the rangefinder data will be rejected, and the altitude will be estimated from the accelerometer. After 5 seconds of inconsistent data the estimator resets the state (in this case height) to match the current distance sensor data. The measurements might also become consistent again, for example, if the vehicle descends, or if the estimated height drifts to match the measured rangefinder height. • The local NED origin will move up and down with ground level. • Rangefinder performance over uneven surfaces (e.g. trees) can be very poor, resulting in noisy and inconsistent data. This again leads to poor altitude hold. The feature is enabled by setting: EKF2_HGT_MODE=2. ## Range Aid Range Aid uses a distance sensor as the primary source of height estimation during low speed/low altitude operation, but will otherwise use the primary source of altitude data defined in EKF2_HGT_MODE (typically a barometer). It is primarily intended for takeoff and landing, in cases where the barometer setup is such that interference from rotor wash is excessive and can corrupt EKF state estimates. Range aid may also be used to improve altitude hold when the vehicle is stationary. Terrain Hold is recommended over Range Aid for terrain holding. This is because terrain hold uses the normal ECL/EKF estimator for determining height, and this is generally more reliable than a distance sensor in most conditions. Range Aid is enabled by setting EKF2_RNG_AID=1 (when the primary source of altitude data (EKF2_HGT_MODE) is not the rangefinder). Range aid is further configured using the EKF2_RNG_A_ parameters: • EKF2_RNG_A_VMAX: Maximum horizontal speed, above which range aid is disabled. • EKF2_RNG_A_HMAX: Maximum height, above which range aid is disabled. • EKF2_RNG_A_IGATE: Range aid consistency checks "gate" (a measure of the error before range aid is disabled).
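For reference, these parameters are normally changed from QGroundControl's parameter editor or with the `param set` command in the PX4 system console. The sketch below shows one way to set them over MAVLink with pymavlink; the connection string and the numeric values are placeholder assumptions for illustration, not recommended settings.

```python
# Illustrative only: set the terrain-following / range-aid parameters described
# above over MAVLink using pymavlink.
from pymavlink import mavutil

# Assumed SITL/telemetry link; adjust for your setup.
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

def set_param(name, value, param_type):
    # Send a PARAM_SET message to the connected autopilot.
    master.mav.param_set_send(
        master.target_system,
        master.target_component,
        name.encode('utf-8'),
        value,
        param_type,
    )

# Terrain following (MPC_ALT_MODE = 1); use 2 instead for terrain hold.
set_param('MPC_ALT_MODE', 1, mavutil.mavlink.MAV_PARAM_TYPE_INT32)

# Enable range aid (only honored when EKF2_HGT_MODE is not the rangefinder)
# and restrict it to low speed / low altitude; values below are assumptions.
set_param('EKF2_RNG_AID', 1, mavutil.mavlink.MAV_PARAM_TYPE_INT32)
set_param('EKF2_RNG_A_VMAX', 1.0, mavutil.mavlink.MAV_PARAM_TYPE_REAL32)  # m/s
set_param('EKF2_RNG_A_HMAX', 5.0, mavutil.mavlink.MAV_PARAM_TYPE_REAL32)  # m
```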
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5091436505317688, "perplexity": 2823.421461882461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323067.50/warc/CC-MAIN-20190825042326-20190825064326-00278.warc.gz"}
https://www.physicsforums.com/threads/field-theory.213796/
# Field Theory

1. Feb 7, 2008 ### johnson123 1. The problem statement, all variables and given/known data: Show that F[x]/( g(x) ) is an n-dimensional vector space, where g is in F[x] and g has degree n. It's clear that F[x]/( g(x) ) is a vector space and that B = ($1, x, x^{2}, \ldots, x^{n-1}$) spans F[x]/( g(x) ), but I'm having trouble showing that B is linearly independent. I realize this is pretty much a HW problem and it should be in the HW section, but I read a post from one of the PF mentors noting that for grad-level/senior-level problems you might have a chance at a response from the non-HW sections. Thanks for any suggestions.

2. Feb 7, 2008 ### Hurkyl Staff Emeritus Well, what happens if they are linearly dependent, so that a nontrivial linear combination of them is equal to zero in F[x] / (g(x))?

3. Feb 7, 2008 ### ejungkurth It's not clear that you have tied B to either F[x] or g(x). First relate B to F and g. Assume for the moment that I am not the person who doesn't have the answer.
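A minimal sketch of where the hints lead, assuming $g$ has degree $n$ as stated: if a nontrivial combination of the elements of B vanished in the quotient, then

$$a_0 + a_1 x + \cdots + a_{n-1}x^{n-1} \equiv 0 \pmod{g(x)} \quad\Longrightarrow\quad g(x) \mid a_0 + a_1 x + \cdots + a_{n-1}x^{n-1},$$

but a nonzero polynomial of degree at most $n-1$ cannot be divisible by a polynomial of degree $n$, so every $a_i = 0$. Hence B is linearly independent, and together with the spanning claim this gives dimension $n$.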
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.893610954284668, "perplexity": 1346.3973066243452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720941.32/warc/CC-MAIN-20161020183840-00381-ip-10-171-6-4.ec2.internal.warc.gz"}
http://www.delorie.com/gnu/docs/avl/libavl_70.html
GNU libavl 2.0.1

## 5.12 Balance

Sometimes binary trees can grow to become much taller than their optimum height. For example, the following binary tree was one of the tallest from a sample of 100 15-node trees built by inserting nodes in random order: The average number of comparisons required to find a random node in this tree is $(1 + 2 + (3 \times 2) + (4 \times 4) + (5 \times 4) + 6 + 7 + 8) / 15 = 4.4$ comparisons. In contrast, the corresponding optimal binary tree, shown below, requires only $(1 + (2 \times 2) + (3 \times 4) + (4 \times 8))/15 \approx 3.3$ comparisons, on average. Moreover, the optimal tree requires a maximum of 4, as opposed to 8, comparisons for any search:

Besides this inefficiency in time, trees that grow too tall can cause inefficiency in space, leading to an overflow of the stack in bst_t_next(), bst_copy(), or other functions. For both reasons, it is helpful to have a routine to rearrange a tree to its minimum possible height, that is, to balance (see balance) the tree.

The algorithm we will use for balancing proceeds in two stages. In the first stage, the binary tree is "flattened" into a pathological, linear binary tree, called a "vine." In the second stage, binary tree structure is restored by repeatedly "compressing" the vine into a minimal-height binary tree. Here's a top-level view of the balancing function:

    ⟨BST to vine function 89⟩
    ⟨Vine to balanced BST function 90⟩

    void
    bst_balance (struct bst_table *tree)
    {
      assert (tree != NULL);

      tree_to_vine (tree);
      vine_to_tree (tree);
      tree->bst_generation++;
    }

This code is included in ⟨29⟩.

    /* Special BST functions. */
    void bst_balance (struct bst_table *tree);

This code is included in ⟨24⟩.
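As a quick, separate check of the two comparison averages quoted above (a Python sketch, not libavl code; the per-depth node counts are read directly from the sums in the text):

```python
# Average comparisons to find a random node, given how many nodes sit at each
# depth (the root is at depth 1).  The counts reproduce the sums in the text.
def average_comparisons(nodes_per_depth):
    total_nodes = sum(nodes_per_depth.values())
    total_comparisons = sum(depth * count for depth, count in nodes_per_depth.items())
    return total_comparisons / total_nodes

tall_tree = {1: 1, 2: 1, 3: 2, 4: 4, 5: 4, 6: 1, 7: 1, 8: 1}  # pathological 15-node sample
optimal_tree = {1: 1, 2: 2, 3: 4, 4: 8}                        # minimal-height 15-node tree

print(average_comparisons(tall_tree))     # 4.4
print(average_comparisons(optimal_tree))  # 3.266..., i.e. about 3.3
```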
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9052618145942688, "perplexity": 1785.9523985452315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806720.32/warc/CC-MAIN-20171123031247-20171123051247-00004.warc.gz"}
http://www.jiskha.com/members/profile/posts.cgi?name=andy&page=6
Friday April 29, 2016 Posts by andy Total # Posts: 889 algebra the formula r= -0.075t + 3.85 can be used to predict the world record for the 1500 meter run, t years after 1930.find an inequality that identifies the years in which the world record will be less than 2.8 minutes March 16, 2011 science what is an explanation for the relationship between currents and the brightness of a light bulb?? February 21, 2011 english I am having some serious problems with parallelism. I do not understand how to use parallelism. I've read the lesson in my textbook and I've looked at different websites that help explain it, but I just can't get it. Can anyone break this down in "layman's... February 18, 2011 Statistics The length of time it takes college students to find a parking spot in the library parking lot follows a normal distribution with a mean of 4.0 minutes and a standard deviation of 1 minute. Find the probability that a randomly selected college student will take between 2.5 and... February 13, 2011 Algebra 1 Explain the differences between solving these two equations: Brake down the steps. log3(x - 1) = 4 and log3(x - 1) = log34 ) February 8, 2011 Algebra 1 Explain the differences between solving these two equations: •log3(x - 1) = 4 AND log3(x - 1) = log34 I think it has to deal with the formulas? February 8, 2011 Andy Name: Lesson 3 Assignment.Instructions Explain the differences between solving these two equations: •log3(x - 1) = 4 and log3(x - 1) = log34. I think it has to do with the formulas February 8, 2011 Calc-slopes & concavity 1. Let f(x)=x^3-3x+2 a.) Find the equation of the line tangent to the graph of y=f(x) at x=2 b.) For what values of x is the function increasing? c.) For what values of x is the graph concave down? January 23, 2011 English 12 okay thank you! January 23, 2011 English 12 In your understanding is this thesis statement clear enough? What other improvements would you make? The topic is "Theme of Overcoming struggle in the course text" Many themes are presented in: Hamlet, Death of a Salesmen, Life of Pi, the Road, and the Kite Runner ... January 23, 2011 IT computer memory No, incorrect. Be carefull with this, your: 18MB x 10^6 x 8 bits = 144,000,00 Should in fact be: 18mb x 2^20 x 8 bits = 150,994,994 Your second part is correct, so its 4 megabits per second = 4 x 10^6 = 4,000,000 Answer: 150994994 / 4000000 = 37.75 seconds. January 13, 2011 math A student claims that every prime greater than 3 is a term in the arithmetic sequence whose nth term is 6n + 1 or in the arithmetic sequence whose nth term is 6n - 1. Is this true? if so, why? December 23, 2010 The equation Ax^2 + 4y^2 = 16 represents an ellipse. Find all values of A such that its intersection with y = |x| has coordinates (x,y) which are integers. December 19, 2010 Math The equation Ax^2 + 4y^2 = 16 represents an ellipse. Find all values of A such that its intersection with y = |x| has coordinates (x,y) which are integers. December 16, 2010 physicis How much power does it takes to do 104 J of work in 8 seconds December 13, 2010 physicis How far will 350 J raise a 7kg mass? December 13, 2010 English Can anyone see any rhetorical device in this sentence? His father’s voice: You gonna let ’em push you around like that, boy? “No sir, Daddy,” Perry whispered. “I sure as hell ain’t.”( Sigler 227) December 8, 2010 math Sum of a 3 digit # is 24. Ones digit is 2 more than 2xs the hundred dig. Tens dig is 1 more thn 3xs the hund dig. Wats the #?? 
December 7, 2010 chemistry 2 Na202 + 2H20 -> 4NaOH + 02 When 200.g of Na202 were reacted, 32.0 g of 02 were collected. The yield of 02 collected was (how many percentages) December 7, 2010 English 12 U Please ignore my previous post, I accidentally posted my rough draft Okay for English 12 U, I'm doing a literary essay for Life of Pi, by Yann Martel. The topic of my essay is Martel (author's) depiction of setting in Canada and India. (Role of setting) I'd like ... December 1, 2010 Math Many thanks November 28, 2010 Math The path of a cliff diver as he dives into a lake is given by the equation y= -(x-10)^2+75, where y meters is the divers height above the water and x meters is the horizontal distance travelled by the diver. What is the maximum hight the diver is above water? Is the answer to ... November 28, 2010 Math The path of a cliff diver as he dives into a lake is given by the equation y= -(x-10)^2+75, where y meters is the divers height above the water and x meters is the horizontal distance travelled by the diver. What is the maximum hight the diver is above water? How can I find ... November 28, 2010 English 12 Yesterday, i posted a question about writing a thesis for Life of Pi. However i'm not sure how to develop a thesis from my topic. The topic of my essay is "Yann Martel's depiction on the role of setting in India and Canada". If anyone can kindly help me to ... November 21, 2010 English 12 alright thank you, ill post a question later on when i develop a thesis November 20, 2010 English 12 Okay, for English 12 im doing a semiar on Yann Martel's book, "Life of Pi" The topic of my essay is "martel depiction of Canada and India= the role of setting" My main body paragraphs are Setting in Canada Setting in India Setting in the Ocean what im ... November 20, 2010 physics you're not following the sig fig rule- it's -110 because you can only have 2 significant figures. November 18, 2010 Finance I need help writing scenario style questions for a school business management class November 7, 2010 Bio Lab You need to prepare medium for your culture cells. Your salt solution is 10x concentration, dilute to 1x for use. You also need to add fetal bovine serum for a final concentration of 10%. What would you add of each for the correct final concentrations in a liter of media? November 3, 2010 Organic Chemistry What would have happened if we had forgotten to neutralize the aqueous solution before recrystallization? November 3, 2010 Physics Sam is taking his girlfriend Sally out for a ride in his boat. He starts out at rest. He accelerates with an acceleration of 0.263 m/s^2 for 92.1 s. At that time Sally decides they are going fast enough and the boat moves at constant speed for a distance of 286 m. Then Sally ... October 28, 2010 science Do plants grow better with tap water or distilled water? October 23, 2010 thank you very much, DrBob222! October 20, 2010 Ka for benzoic acid, C6H5COOH, 6.5x10^-5. Calculate the pH of solution after addition of 10.0, 20.0, 30.0, and 40.0 mL of 0.10 M NaOH to 40.0 mL of 0.10 M Benzoic acid. PLEASE CHECK MY ANSWER!!!!! My answer is: Moles acid = 0.040 L x 0.10 M = 0.0040 Moles base = 0.010 L x 0.10... October 20, 2010 social studies What is the base for the earth's hills and mountains? October 17, 2010 physics does each of the two lenses used in a microscope produce a magnification of the object being viewed? October 17, 2010 Biology Are there any viruses/diseases in the tundra? October 15, 2010 math marcias house is approx. 10 miles east of the airport. 
a jet is flying directly over her house after taking off from the airport and flying approx. 12 miles. approx how many miles high is the jet when it flys over marcias house? October 11, 2010 math when a marble hits a wall at a 22 degree angle, what is the measure of the angle between the two paths of the marble? October 11, 2010 History What difficulties did the Pilgrims encounter in their attempt to gain passage to the New World? October 8, 2010 chem what are the net ionic equations for ClO3^-; Zn^2+ undergo hydrolysis? October 2, 2010 math the equation is f(x) = (1/sq. root of 1-x^2) 1. f(x) is never zero. 2. 0 is in the domain of f 3. All negative real numbers are in the domain of f 4. All positive real numbers are in the domain of f 5. 1 is in the domain of f 6. f(x) is never negative 7. f(x) is never positive... September 23, 2010 physics The pilot of an airplane traveling 180 km/h wants to drop supplies to flood victims isolated on a patch of land 125 m below. The supplies should be dropped how many seconds before the plane is directly overhead? ______s September 21, 2010 physics A ball is thrown horizontally from the roof of a building 60 m tall and lands 45 m from the base. What was the ball's initial speed? _______m/s September 21, 2010 Chemistry Than from here I need to find: Using the volume of solvent calculated in step 1, calculate how much sulfanilamide will remain dissolved in the mother liquor after the mixture is cooled to 0 degree Celsius? Can you please explain? September 21, 2010 Chemistry thank you! September 21, 2010 Chemistry Please help the question that I got is: Calculate how much 95% ethyl alcohol will be required to dissolve 075g sulfanilamide at 78 degree Celsius. Knowing that solubility of sulfanilamide is 210 mg/ml at 78 degree Celsius, 14mg/ml at 0 Degree Celsius. please explain. September 21, 2010 Science thank you September 21, 2010 algebra 8th ok gat it August 31, 2010 algebra 8th YOU THINK? BUT I HAVE TO DO ALL THE PROBLEM August 31, 2010 algebra 8th ADD OR SUBTRACT> 5.2-2.5 HOW DO I DO THIS? August 31, 2010 chem What is the total pressure in mm Hg of a gas mixture containing argon gas at 0.25 atm helium gas at 350 mm,Hg, and nitrogen gas at 360 torr. July 10, 2010 chem What is the volume in liters of 3.2 moles methane gas, CH4, at 12degrees C and 1.52 atm? July 10, 2010 chem An airplane is pressurized to 650mmHg , which is the atmospheric pressure at a ski resort at 1.30×10^4 altitude.If air is 21 % oxygen, what is the partial pressure of oxygen on the plane? July 10, 2010 biochemistry an intramalecular meds is given at 5.0mg/kg of body weight. if you give 425mg of medicatioon to a pt. what is the ppTIENT weight? July 5, 2010 biochemistry ordered 1.0g of tetracycline to be given every 6 hours. on hand 500mg. how many will we need for 1 days treatment? July 5, 2010 Literature During the Renaissance, people rediscovered the literature of the past. True or False? At that time there was an enormous renewal of interest in and study of classical antiquity. So True, right? Just wanna make sure, Thanks. July 3, 2010 personal finance Sue and Tom Wright are assistant professors at the local university. They each take home about $42,000 per year after taxes. Sue is 37 years of age, and Tom is 35. Their two children, Mike and Karen, are 11 and 9. Were either one to die, they estimate that the remaining family... June 27, 2010 personal finance Sue and Tom Wright are assistant professors at the local university. They each take home about$42,000 per year after taxes. 
Sue is 37 years of age, and Tom is 35. Their two children, Mike and Karen, are 11 and 9. Were either one to die, they estimate that the remaining family... June 27, 2010 chem Calculate the specific heat (\rm{J/g \; ^\circ C}) for a 21.5 g sample of a metal that absorbs 685 J when temperature increases from 42.1^\circ C to83.2^\circ C. June 10, 2010 chem Calculate the specific heat (\rm{J/g \; ^\circ C}) for a 18.5-\rm g sample of tin that absorbs 183 {\rm J} when temperature increases from 35.0 ^\circ \rm C to 78.6 ^\circ \rm C. June 10, 2010 bio A typical adult body contains 55 \% water. If a person has a mass of 70.0 kg, how many pounds of water does she have in her body? June 10, 2010 Biology June 5, 2010 chemistry Complete and balance the displayed half reaction in neutral aqueous solution taking into account the electrons. Mg(s) → Mg2+(aq) June 1, 2010 biology Does asexual reproduction of sponges depend on time of year, availability of food, or other factors? May 30, 2010 Biology May 30, 2010 Biology Under what conditions does asexual reproduction in sponges occur? Does it depend on time of year, availability of food, or other factors? May 30, 2010 Science What is the pH of a 2.62 x 10^-2 M HBr solution? thanks May 25, 2010 thanks so much May 23, 2010 Which is NOT a chareteristic property of acids? A: react with carbonate to yield CO2 gas B: taste sour C: neutralize bases D; turns litmus red to blue E:reacts with metal to yield H2 gas I reckon its D because isnt it the other way around ? Blue to red ...not...red to blue Any... May 23, 2010 what is the hydrogen ion concentration of an aqueous solution with a pOH of 9.262? I got 5.47X 10 ^-10 is this correct? Thanks Andy May 23, 2010 the solubility of silver sulfate in water is 0.223%(w/v) at 35 degrees celsius.calculate the solubility product of this salt at this temp. I got 1.46 X 10^-6 M is this right please thanks Andy May 23, 2010 Chem/science what is the ph of 2.62 x 10 ^-2 M HBr solution? I get 5.94 is that right? thanks andy May 23, 2010 Chem/science What is the conjugate acid of HC5H6O4-?? Is it HC5H6O42-?? thanks andy... May 23, 2010 Chemistry Why can't benzene dissolve sodium chloride? May 18, 2010 Ineed help with the synonyms and antonyms ex. perfect, frawless is this an antonym or synonym? May 17, 2010 SCIENCE?CHEMISTRY*** Dr Bob each time i do a problem on this site I have to print out answer....is there any way you know how to save it to usb stick???? I cant for the life of me find a save feature... also how do YOU when you give us a link to a site ...make it appear in blue and all we have to ... May 16, 2010 SCIENCE?CHEMISTRY*** thanks dr bob Andy May 16, 2010 SCIENCE?CHEMISTRY*** Am using the inventory method at moment but am stuck on how fill i the inventory for __Agl+__Fe2(CO3)3--> __Fel3+__Ag2CO3 I think the brackets are throwing me Hows this element b4 after ------------------------------------ Ag 1 2 l 1 3 Fe 6 1 CO 3 3 I dont know if this is ... May 16, 2010 SCIENCE?CHEMISTRY*** well done thanks... Andy May 14, 2010 SCIENCE?CHEMISTRY*** an 56 Fe 2+ particle has how many protons neutrons amd electrons??? 56 2+ Fe Thanks Andy May 14, 2010 SCIENCE?CHEMISTRY*** how many dozens of dust particles are in 2.45 g if each particle has a mass of 2.51 x 10^-4 grams...? I dont have a clue!!!!!! Thanks Andy May 14, 2010 SCIENCE?CHEMISTRY*** thanks..... andy May 14, 2010 SCIENCE?CHEMISTRY*** DR Bob, can you go thru the steps to work this out please.... 
how many grams of alcohol with a density of 0.900 g/cm ^3 (cm cubed) will have the same volume as 20.0g of mercury,with a density of 13.6 g/cm ^3 Do you have to find the volume of mercury first??? I know the answer ... May 14, 2010 chemistry thanks..andy May 13, 2010 chemistry The solubility of MnS was found to be 0.000963 g per 700 mL of water. Calculate the solubility product constant (Ksp) for MnS. MnS (s)=Mn2+ (aq) + S2- (aq) May 13, 2010 chemistry The solubility product constant (Ksp) for the dissolution of PbSO4 as represented by the chemical equation is 1.7x10-8. PbSO4 (s)= Pb2+ (aq) + SO42- (aq) Calculate the mass (g) of PbSO4 that dissolves in 1100 mL of water. May 13, 2010 SCIENCE/CHEM thanks so much....andy May 12, 2010 SCIENCE/CHEM Dr Bob Is it true that only TWO molecules of water are required to balance the equation for the reaction of HCL with Calcium Hydroxide.(calcium chloride is the other product....why??? Thanks andy May 12, 2010 SCIENCE/CHEM thanks.... andy May 12, 2010 SCIENCE/CHEM Dr BOB, Can you show me how to do the balanced equation please I am shocking at them Thanks AndyWhats the simple way to explain balancing equations so i get it please....because i am horrible at them... Thanks andy May 12, 2010 SCIENCE/CHEM the purity of zinc is to be determined by measuring the amount of hydrogen formed when a weighed sample of zinc reacts with an excess of HCL acid.the sample weighs 0.198 grams. what amount of hydrogen gas at STP will be obtained if the zinc is 100% pure? need a metric answer ... May 12, 2010 SCIENCE/CHEM thanks..... andy May 12, 2010 SCIENCE/CHEM the volume of a sample of gas is 650.ml at STP. what volumes will the sample occupy at 0.0degrees celsius and 950.torr??? is it 650.ml? May 12, 2010 SCIENCE/CHEM thanks so much ...andy May 12, 2010 SCIENCE/CHEM so 2 X4 ^2 = 32 so 32 is correct ....... thanks andy May 12, 2010 SCIENCE/CHEM thanks...andy May 12, 2010 SCIENCE/CHEM how many electrons can be contained in all orbitals with n=4?????? May 12, 2010 SCIENCE/CHEM thanks ...will practice and practice and practice.....andy May 12, 2010
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6587734818458557, "perplexity": 3448.2590591437047}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111313.83/warc/CC-MAIN-20160428161511-00007-ip-10-239-7-51.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Pythagorean_theorem_(baseball)
# Pythagorean expectation

Pythagorean expectation is a formula invented by Bill James to estimate how many games a baseball team "should" have won based on the number of runs they scored and allowed. Comparing a team's actual and Pythagorean winning percentage can be used to evaluate how lucky that team was (by examining the variation between the two winning percentages). The name comes from the formula's resemblance to the Pythagorean theorem.[1] The basic formula is: ${\displaystyle \mathrm {Win\ Ratio} ={\frac {{\text{runs scored}}^{2}}{{\text{runs scored}}^{2}+{\text{runs allowed}}^{2}}}={\frac {1}{1+({\text{runs allowed}}/{\text{runs scored}})^{2}}}}$ where Win Ratio is the winning ratio generated by the formula. The expected number of wins would be the expected winning ratio multiplied by the number of games played.

## Empirical origin

Empirically, this formula correlates fairly well with how baseball teams actually perform. However, statisticians since the invention of this formula found it to have a fairly routine error, generally about three games off. For example, in 2002, the New York Yankees scored 897 runs and allowed 697 runs. According to James' original formula, the Yankees should have won 62.35% of their games. ${\displaystyle \mathrm {Win} ={\frac {{\text{897}}^{2}}{{\text{897}}^{2}+{\text{697}}^{2}}}=0.623525865}$ Based on a 162-game season, the Yankees should have won 101.01 games. The 2002 Yankees actually went 103–58.[2] In efforts to fix this error, statisticians have performed numerous searches to find the ideal exponent. If using a single-number exponent, 1.83 is the most accurate, and the one used by baseball-reference.com.[3] The updated formula therefore reads as follows: ${\displaystyle \mathrm {Win} ={\frac {{\text{runs scored}}^{1.83}}{{\text{runs scored}}^{1.83}+{\text{runs allowed}}^{1.83}}}={\frac {1}{1+({\text{runs allowed}}/{\text{runs scored}})^{1.83}}}}$ The most widely known is the Pythagenport formula[4] developed by Clay Davenport of Baseball Prospectus: ${\displaystyle \mathrm {Exponent} =1.50\cdot \log \left({\frac {R+RA}{G}}\right)+0.45}$ He concluded that the exponent should be calculated from a given team based on the team's runs scored (R), runs allowed (RA), and games (G). By not reducing the exponent to a single number for teams in any season, Davenport was able to report a 3.9911 root-mean-square error as opposed to a 4.126 root-mean-square error for an exponent of 2.[4] Less well known but equally (if not more) effective is the Pythagenpat formula, developed by David Smyth.[5] ${\displaystyle \mathrm {Exponent} =\left({\frac {R+RA}{G}}\right)^{.287}}$ Davenport expressed his support for this formula, saying: After further review, I (Clay) have come to the conclusion that the so-called Smyth/Patriot method, aka Pythagenpat, is a better fit. In that, X = ((rs + ra)/g)^0.285, although there is some wiggle room for disagreement in the exponent. Anyway, that equation is simpler, more elegant, and gets the better answer over a wider range of runs scored than Pythagenport, including the mandatory value of 1 at 1 rpg.[6] These formulas are only necessary when dealing with extreme situations in which the average number of runs scored per game is either very high or very low. For most situations, simply squaring each variable yields accurate results.
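As a quick illustration of the arithmetic above, a short Python sketch (the helper names are chosen for this example, not taken from the article) that reproduces the 2002 Yankees figures with the original exponent 2, the 1.83 variant, and a Pythagenpat-style exponent:

```python
# Pythagorean expectation, reproducing the 2002 Yankees example from the text.
def pythagorean_win_pct(runs_scored, runs_allowed, exponent=2.0):
    return runs_scored**exponent / (runs_scored**exponent + runs_allowed**exponent)

def pythagenpat_exponent(runs_scored, runs_allowed, games):
    # Smyth/Patriot ("Pythagenpat") exponent: ((R + RA) / G) ** 0.287
    return ((runs_scored + runs_allowed) / games) ** 0.287

rs, ra, g = 897, 697, 162   # 2002 Yankees: runs scored, runs allowed, games played

for exp in (2.0, 1.83, pythagenpat_exponent(rs, ra, g)):
    pct = pythagorean_win_pct(rs, ra, exp)
    print(f"exponent {exp:.3f}: win ratio {pct:.4f} -> {pct * g:.1f} expected wins")
# With exponent 2 this gives 0.6235, about 101 expected wins; the team actually won 103.
```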
There are some systematic statistical deviations between actual winning percentage and expected winning percentage, which include bullpen quality and luck. In addition, the formula tends to regress toward the mean, as teams that win a lot of games tend to be underrepresented by the formula (meaning they "should" have won fewer games), and teams that lose a lot of games tend to be overrepresented (they "should" have won more). ## "Second-order" and "third-order" wins In their Adjusted Standings Report,[7] Baseball Prospectus refers to different "orders" of wins for a team. The basic order of wins is simply the number of games they have won. However, because a team's record may not reflect its true talent due to luck, different measures of a team's talent were developed. First-order wins, based on pure run differential, are the number of expected wins generated by the "pythagenport" formula (see above). In addition, to further filter out the distortions of luck, Sabermetricians can also calculate a team's expected runs scored and allowed via a runs created-type equation (the most accurate at the team level being Base Runs). These formulas result in the team's expected number of runs given their offensive and defensive stats (total singles, doubles, walks, etc.), which helps to eliminate the luck factor of the order in which the team's hits and walks came within an inning. Using these stats, sabermetricians can calculate how many runs a team "should" have scored or allowed. By plugging these expected runs scored and allowed into the pythagorean formula, one can generate second-order wins, the number of wins a team deserves based on the number of runs they should have scored and allowed given their component offensive and defensive statistics. Third-order wins are second-order wins that have been adjusted for strength of schedule (the quality of the opponent's pitching and hitting). Second- and third-order winning percentage has been shown[according to whom?] to predict future actual team winning percentage better than both actual winning percentage and first-order winning percentage. ## Theoretical explanation Initially the correlation between the formula and actual winning percentage was simply an experimental observation. In 2003, Hein Hundal provided an inexact derivation of the formula and showed that the Pythagorean exponent was approximately 2/(σπ) where σ was the standard deviation of runs scored by all teams divided by the average number of runs scored.[8] In 2006, Professor Steven J. Miller provided a statistical derivation of the formula[9] under some assumptions about baseball games: if runs for each team follow a Weibull distribution and the runs scored and allowed per game are statistically independent, then the formula gives the probability of winning.[9] More simply, the Pythagorean formula with exponent 2 follows immediately from two assumptions: that baseball teams win in proportion to their "quality", and that their "quality" is measured by the ratio of their runs scored to their runs allowed. For example, if Team A has scored 50 runs and allowed 40, its quality measure would be 50/40 or 1.25. The quality measure for its (collective) opponent team B, in the games played against A, would be 40/50 (since runs scored by A are runs allowed by B, and vice versa), or 0.8. If each team wins in proportion to its quality, A's probability of winning would be 1.25 / (1.25 + 0.8), which equals 50^2 / (50^2 + 40^2), the Pythagorean formula. 
The same relationship is true for any number of runs scored and allowed, as can be seen by writing the "quality" probability as [50/40] / [ 50/40 + 40/50], and clearing fractions. The assumption that one measure of the quality of a team is given by the ratio of its runs scored to allowed is both natural and plausible; this is the formula by which individual victories (games) are determined. [There are other natural and plausible candidates for team quality measures, which, assuming a "quality" model, lead to corresponding winning percentage expectation formulas that are roughly as accurate as the Pythagorean ones.] The assumption that baseball teams win in proportion to their quality is not natural, but is plausible. It is not natural because the degree to which sports contestants win in proportion to their quality is dependent on the role that chance plays in the sport. If chance plays a very large role, then even a team with much higher quality than its opponents will win only a little more often than it loses. If chance plays very little role, then a team with only slightly higher quality than its opponents will win much more often than it loses. The latter is more the case in basketball, for various reasons, including that many more points are scored than in baseball (giving the team with higher quality more opportunities to demonstrate that quality, with correspondingly fewer opportunities for chance or luck to allow the lower-quality team to win.) Baseball has just the right amount of chance in it to enable teams to win roughly in proportion to their quality, i.e. to produce a roughly Pythagorean result with exponent two. Basketball's higher exponent of around 14 (see below) is due to the smaller role that chance plays in basketball. And the fact that the most accurate (constant) Pythagorean exponent for baseball is around 1.83, slightly less than 2, can be explained by the fact that there is (apparently) slightly more chance in baseball than would allow teams to win in precise proportion to their quality. Bill James realized this long ago when noting that an improvement in accuracy on his original Pythagorean formula with exponent two could be realized by simply adding some constant number to the numerator, and twice the constant to the denominator. This moves the result slightly closer to .500, which is what a slightly larger role for chance would do, and what using the exponent of 1.83 (or any positive exponent less than two) does as well. Various candidates for that constant can be tried to see what gives a "best fit" to real life data. The fact that the most accurate exponent for baseball Pythagorean formulas is a variable that is dependent on the total runs per game is also explainable by the role of chance, since the more total runs scored, the less likely it is that the result will be due to chance, rather than to the higher quality of the winning team having been manifested during the scoring opportunities. The larger the exponent, the farther away from a .500 winning percentage is the result of the corresponding Pythagorean formula, which is the same effect that a decreased role of chance creates. The fact that accurate formulas for variable exponents yield larger exponents as the total runs per game increases is thus in agreement with an understanding of the role that chance plays in sports. 
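To spell out the fraction-clearing step mentioned earlier in this section, write R for runs scored and RA for runs allowed; multiplying the numerator and denominator of the quality probability by R·RA gives

$$\frac{R/RA}{R/RA + RA/R} \;=\; \frac{R^{2}}{R^{2}+RA^{2}},$$

which is the Pythagorean formula with exponent 2.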
In his 1981 Baseball Abstract, James explicitly developed another of his formulas, called the log5 formula (which has since proven to be empirically accurate), using the notion of 2 teams having a face-to-face winning percentage against each other in proportion to a "quality" measure. His quality measure was half the team's "wins ratio" (or "odds of winning"). The wins ratio or odds of winning is the ratio of the team's wins against the league to its losses against the league. [James did not seem aware at the time that his quality measure was expressible in terms of the wins ratio. Since in the quality model any constant factor in a quality measure eventually cancels, the quality measure is today better taken as simply the wins ratio itself, rather than half of it.] He then stated that the Pythagorean formula, which he had earlier developed empirically, for predicting winning percentage from runs, was "the same thing" as the log5 formula, though without a convincing demonstration or proof. His purported demonstration that they were the same boiled down to showing that the two different formulas simplified to the same expression in a special case, which is itself treated vaguely, and there is no recognition that the special case is not the general one. Nor did he subsequently promulgate to the public any explicit, quality-based model for the Pythagorean formula. As of 2013, there is still little public awareness in the sabermetric community that a simple "teams win in proportion to quality" model, using the runs ratio as the quality measure, leads directly to James's original Pythagorean formula. In the 1981 Abstract, James also says that he had first tried to create a "log5" formula by simply using the winning percentages of the teams in place of the runs in the Pythagorean formula, but that it did not give valid results. The reason, unknown to James at the time, is that his attempted formulation implies that the relative quality of teams is given by the ratio of their winning percentages. Yet this cannot be true if teams win in proportion to their quality, since a .900 team wins against its opponents, whose overall winning percentage is roughly .500, in a 9 to 1 ratio, rather than the 9 to 5 ratio of their .900 to .500 winning percentages. The empirical failure of his attempt led to his eventual, more circuitous (and ingenious) and successful approach to log5, which still used quality considerations, though without a full appreciation of the ultimate simplicity of the model and of its more general applicability and true structural similarity to his Pythagorean formula. American sports executive Daryl Morey was the first to adapt James' Pythagorean expectation to professional basketball while a researcher at STATS, Inc.. He found that using 13.91 for the exponents provided an acceptable model for predicting won-lost percentages: ${\displaystyle \mathrm {Win} ={\frac {{\text{points for}}^{13.91}}{{\text{points for}}^{13.91}+{\text{points against}}^{13.91}}}.}$ Daryl's "Modified Pythagorean Theorem" was first published in STATS Basketball Scoreboard, 1993-94.[10] Noted basketball analyst Dean Oliver also applied James' Pythagorean theory to professional basketball. The result was similar. Another noted basketball statistician, John Hollinger, uses a similar Pythagorean formula, except with 16.5 as the exponent. ## Use in pro football The formula has also been used in pro football by football stat website and publisher Football Outsiders, where it is known as Pythagorean projection. 
The formula is used with an exponent of 2.37 and gives a projected winning percentage. That winning percentage is then multiplied by 16 (for the number of games played in an NFL season), to give a projected number of wins. This projected number given by the equation is referred to as Pythagorean wins. ${\displaystyle {\text{Pythagorean wins}}={\frac {{\text{Points For}}^{2.37}}{{\text{Points For}}^{2.37}+{\text{Points Against}}^{2.37}}}\times 16.}$ The 2011 edition of Football Outsiders Almanac[11] states, "From 1988 through 2004, 11 of 16 Super Bowls were won by the team that led the NFL in Pythagorean wins, while only seven were won by the team with the most actual victories. Super Bowl champions that led the league in Pythagorean wins but not actual wins include the 2004 Patriots, 2000 Ravens, 1999 Rams and 1997 Broncos." Although Football Outsiders Almanac acknowledges that the formula had been less successful in picking Super Bowl participants from 2005–2008, it reasserted itself in 2009 and 2010. Furthermore, "[t]he Pythagorean projection is also still a valuable predictor of year-to-year improvement. Teams that win a minimum of one full game more than their Pythagorean projection tend to regress the following year; teams that win a minimum of one full game less than their Pythagorean projection tend to improve the following year, particularly if they were at or above .500 despite their underachieving. For example, the 2008 New Orleans Saints went 8-8 despite 9.5 Pythagorean wins, hinting at the improvement that came with the next year's championship season."

## Use in ice hockey

In 2013, statistician Kevin Dayaratna and mathematician Steven J. Miller provided theoretical justification for applying the Pythagorean expectation to ice hockey. In particular, they found that, by making the same assumptions that Miller made in his 2007 study about baseball (specifically, that goals scored and goals allowed follow statistically independent Weibull distributions), the Pythagorean expectation works just as well for ice hockey as it does for baseball. The Dayaratna and Miller study verified the statistical legitimacy of making these assumptions and estimated the Pythagorean exponent for ice hockey to be slightly above 2.[12]

## Notes

1. ^ "The Game Designer: Pythagoras Explained". Retrieved 7 May 2016. 2. ^ "2002 New York Yankees". Baseball-Reference.com. Retrieved 7 May 2016. 3. ^ "Frequently Asked Questions". Baseball-Reference.com. Retrieved 7 May 2016. 4. ^ a b "Baseball Prospectus - Revisiting the Pythagorean Theorem". Baseball Prospectus. Retrieved 7 May 2016. 5. ^ "W% Estimators". Retrieved 7 May 2016. 6. ^ "Baseball Prospectus - Glossary". Retrieved 7 May 2016. 7. ^ "Baseball Prospectus - Adjusted Standings". Retrieved 7 May 2016. 8. ^ Hundal, Hein. "Derivation of James Pythagorean Formula (Long)". 9. ^ a b Miller (2007). "A Derivation of the Pythagorean Won-Loss Formula in Baseball". Chance. 20: 40–48. arXiv:. Bibcode:2005math......9698M. doi:10.1080/09332480.2007.10722831. 10. ^ Dewan, John; Zminda, Don; STATS, Inc. Staff (October 1993). STATS Basketball Scoreboard, 1993-94. STATS, Inc. p. 17. ISBN 0-06-273035-5. 11. ^ Football Outsiders Almanac 2011 (ISBN 978-1-4662-4613-3), p.xviii 12. ^ Dayaratna, Kevin; Miller, Steven J. (2013). "The Pythagorean Won-Loss Formula and Hockey: A Statistical Justification for Using the Classic Baseball Formula as an Evaluative Tool in Hockey" (PDF). The Hockey Research Journal 2012/13. XVI: 193–209.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8058832883834839, "perplexity": 1909.8848655074144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804610.37/warc/CC-MAIN-20171118040756-20171118060756-00015.warc.gz"}
https://www.lessonplanet.com/teachers/using-electricity-4th-5th
# Using Electricity

In this electricity instructional activity, students study the circuit diagram, color the light bulb yellow, and tick the box if they think there is electricity for that example. Students cross out the boxes with no electric current.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9108841419219971, "perplexity": 3911.972403664053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424884.51/warc/CC-MAIN-20170724142232-20170724162232-00134.warc.gz"}
http://philpapers.org/s/Michelle%20Van%20Brunschot
## Search results for 'Michelle Van Brunschot' 1000+ found

2. A series of studies investigated the capacity of children between the ages of 7 and 12 to give free and informed consent to participation in psychological research. Children were reasonably accurate in describing the purpose of studies, but many did not understand the possible benefits or especially the possible risks of participating. In several studies children's consent was not affected by the knowledge that their parents had given their permission or by the parents saying that they would not be upset (...)

3. Liani van Straaten (2009). Colin Odell and Michelle Le Blanc (2007) David Lynch. Film-Philosophy 11 (3):254-259.

6. Michelle Westermann-Behaylo & Harry J. van Buren Iii (2011). Business and Human Rights. Proceedings of the International Association for Business and Society 22:99-110. One domain of corporate responsibility that is receiving considerable attention is whether and to what extent corporations have human rights obligations. The United Nations, through the work of Special Representative to the Secretary-General John Ruggie, has developed a framework seeking to clarify the responsibilities of businesses related to human rights. However, this framework adopts a limited, "do no harm" expectation for corporations that fails to capture the positive role that corporations can play in this social responsibility domain. In this paper (...)

7. Michelle Westermann-Behaylo, Harry J. van Buren Iii & Shawn L. Berman (2011). Towards an Organizational View of Genuine Compassion. Proceedings of the International Association for Business and Society 22:111-122. Recent scholarship has suggested that compassion can occur at the organizational level. The definition of "organizational compassion" is particularly problematic because organizations have multiple reasons for engaging in actions that then have effects on various stakeholders. A number of questions regarding organizational compassion thus merit theoretical attention: Are all organizations capable of demonstrating caring and compassion? What factors enable or constrain organizational compassion? In a move toward a more complete understanding of compassion at the organizational level, a continuum of organizational (...)

8. Johanna N. Y. Franklin & Frank Stephan (2010). Van Lambalgen's Theorem and High Degrees. Notre Dame Journal of Formal Logic 52 (2):173-185. We show that van Lambalgen's Theorem fails with respect to recursive randomness and Schnorr randomness for some real in every high degree and provide a full characterization of the Turing degrees for which van Lambalgen's Theorem can fail with respect to Kurtz randomness. However, we also show that there is a recursively random real that is not Martin-Löf random for which van Lambalgen's Theorem holds with respect to recursive randomness.

9. Peter Hawke (2011). Van Inwagen's Modal Skepticism. Philosophical Studies 153 (3):351-364. In this paper, the author defends Peter van Inwagen's modal skepticism. Van Inwagen accepts that we have much basic, everyday modal knowledge, but denies that we have the capacity to justify philosophically interesting modal claims that are far removed from this basic knowledge. The author also defends the argument by means of which van Inwagen supports his modal skepticism, offering a rebuttal to an objection along the lines of that proposed by Geirrson. Van Inwagen argues that Stephen Yablo's recent and (...)

10. Harry J. Van Buren & Michelle Greenwood (2013). Ethics and HRM Education. Journal of Academic Ethics 11 (1):1-15. Human resource management (HRM) education has tended to focus on specific functions and tasks within organizations, such as compensation, staffing, and evaluation. This task orientation within HRM education fails to account for the bigger questions facing human resource management and employment relationships, questions which address the roles and responsibilities of the HR function and HR practitioners. An educational focus on HRM that does not explicitly address larger ethical questions fails to equip students to address stakeholder concerns about how employees are (...)

11. William Craig (2014). Peter van Inwagen, Substitutional Quantification, and Ontological Commitment. Notre Dame Journal of Formal Logic 55 (4):553-561. Peter van Inwagen has long claimed that he doesn't understand substitutional quantification and that the notion is, in fact, meaningless. Van Inwagen identifies the source of his bewilderment as an inability to understand the proposition expressed by a simple sentence like "," where "$\Sigma$" is the existential quantifier understood substitutionally. I should think that the proposition expressed by this sentence is the same as that expressed by "." So what's the problem? The problem, I suggest, is that van Inwagen takes (...)

12. I. Introduction "We can and do see the truth about many things: ourselves, others, trees and animals, clouds and rivers—in the immediacy of experience."1 Absent from Bas van Fraassen's list of those things we see are paramecia and mitochondria. We do not see such things, van Fraassen has long maintained, because they are unobservable, that is, they are undetectable by means of the unaided senses.2 But notice that these two notions—what we can see in the "immediacy" of experience and what (...)

13. Harry J. Van Buren & Michelle Greenwood (2008). Enhancing Employee Voice: Are Voluntary Employer–Employee Partnerships Enough? Journal of Business Ethics 81 (1):209-221. One of the essential ethical issues in the employment relationship is the loss of employee voice. Many of the ways employees have previously exercised voice in the employment relationship have been rendered less effective by (1) the changing nature of work, (2) employer preferences for flexibility that often work to the disadvantage of employees, and (3) changes in public policy and institutional systems that have failed to protect workers. We will begin with a discussion of how work has changed in (...)

14. In his recent book on the problem of evil, Peter van Inwagen argues that both the global and local arguments from evil are failures. In this paper, we engage van Inwagen's book at two main points. First, we consider his understanding of what it takes for a philosophical argument to succeed. We argue that while his criterion for success is interesting and helpful, there is good reason to think it is too stringent. Second, we consider his responses to the global (...)

15. The aim of this review is to show the fruitfulness of using images of facial expressions as experimental stimuli in order to study how neural systems support biologically relevant learning as it relates to social interactions. Here we consider facial expressions as naturally conditioned stimuli which, when presented in experimental paradigms, evoke activation in amygdala–prefrontal neural circuits that serve to decipher the predictive meaning of the expressions. Facial expressions offer a relatively innocuous strategy with which to investigate these normal variations (...)

16. Michael Huemer (2000). Van Inwagen's Consequence Argument. Philosophical Review 109 (4):525-544. Peter van Inwagen's argument for incompatibilism uses a sentential operator, "N", which can be read as "No one has any choice about the fact that...." I show that, given van Inwagen's understanding of the notion of having a choice, the argument is invalid. However, a different interpretation of "N" can be given, such that the argument is clearly valid, the premises remain highly plausible, and the conclusion implies that free will is incompatible with determinism.

17. Philippe De Rouilhan (2012). In Defense of Logical Universalism: Taking Issue with Jean van Heijenoort. [REVIEW] Logica Universalis 6 (3-4):553-586. Van Heijenoort's main contribution to history and philosophy of modern logic was his distinction between two basic views of logic, first, the absolutist, or universalist, view of the founding fathers, Frege, Peano, and Russell, which dominated the first, classical period of history of modern logic, and, second, the relativist, or model-theoretic, view, inherited from Boole, Schröder, and Löwenheim, which has dominated the second, contemporary period of that history. In my paper, I present the man Jean van Heijenoort (Sect. 1); then (...)

18. Tiziana Proietti (2015). The Aesthetics of Proportion in Hans van der Laan and Leon Battista Alberti. Aisthesis. Pratiche, Linguaggi E Saperi Dell'Estetico 8 (2):183-199. This paper aims at presenting the work of Dutch architect Hans van der Laan through a comparison with the Renaissance architect Leon Battista Alberti by stating the similarity of the role assigned to proportion in architectural design by both architects. In particular, the study will show how both Van der Laan and Alberti understood proportion and the perceptive and aesthetic values of proportioned forms as the result of an intellectual appreciation.

19. http://dx.doi.org/10.5007/1808-1711.2008v12n1p49 The aim of this article is to offer a rejoinder to an argument against scientific realism put forward by van Fraassen, based on theoretical considerations regarding microphysics. At a certain stage of his general attack to scientific realism, van Fraassen argues, in contrast to what realists typically hold, that empirical regularities should sometimes be regarded as "brute facts", which do not ask for explanation in terms of deeper, unobservable mechanisms. The argument from microphysics formulated by van Fraassen is based (...)

20. Erman Kaplama (2016). The Cosmological Aesthetic Worldview in Van Gogh's Late Landscape Paintings. Cosmos and History: The Journal of Natural and Social Philosophy 12 (1):218-237.
Some artworks are called sublime because of their capacity to move human imagination in a different way than the experience of beauty. The following discussion explores how Van Gogh’s The Starry Night along with some of his other late landscape paintings accomplish this peculiar movement of imagination thus qualifying as sublime artworks. These artworks constitute examples of the higher aesthetic principles and must be judged according to the cosmological-aesthetic criteria for they manage to generate a transition between ethos and phusis (...) Export citation My bibliography 21. Paul Giladi (2015). Pragmatist Themes in Van Fraassen’s Stances and Hegel’s Forms of Consciousness. International Journal of Philosophical Studies 24 (1):95-111. The aim of this paper is to establish a substantial positive philosophical connection between Bas van Fraassen and Hegel, by focusing on their respective notions of ‘stance’ and ‘form of consciousness’. In Section I, I run through five ways of understanding van Fraassen’s idea of a stance. I argue that a ‘stance’ is best understood as an intellectual disposition. This, in turn, means that the criteria for assessing a stance are ones which ask whether or not a stance adequately makes (...) Export citation My bibliography 22. Jennifer L. Soerensen (2013). The Local Problem of God's Hiddenness: A Critique of van Inwagen's Criterion of Philosophical Success. [REVIEW] International Journal for Philosophy of Religion 74 (3):297-314. In regards to the problem of evil, van Inwagen thinks there are two arguments from evil which require different defenses. These are the global argument from evil—that there exists evil in general, and the local argument from evil—that there exists some particular atrocious evil X. However, van Inwagen fails to consider whether the problem of God’s hiddenness also has a “local” version: whether there is in fact a “local” argument from God’s hiddenness which would be undefeated by his general defense (...) Export citation My bibliography   1 citation 23. Michelle W. Voss, Carmen Vivar, Arthur F. Kramer & Henriette van Praag (2013). Bridging Animal and Human Models of Exercise-Induced Brain Plasticity. Trends in Cognitive Sciences 17 (10):525-544. Export citation My bibliography   1 citation 24. Meghan E. Griffith (2005). Does Free Will Remain a Mystery? A Response to Van Inwagen. Philosophical Studies 124 (3):261-269. In this paper, I argue against Peter van Inwagen’s claim (in “Free Will Remains a Mystery”), that agent-causal views of free will could do nothing to solve the problem of free will (specifically, the problem of chanciness). After explaining van Inwagen’s argument, I argue that he does not consider all possible manifestations of the agent-causal position. More importantly, I claim that, in any case, van Inwagen appears to have mischaracterized the problem in some crucial ways. Once we are clear on (...) Export citation My bibliography   1 citation 25. Federica Russo (2006). Salmon and Van Fraassen on the Existence of Unobservable Entities: A Matter of Interpretation of Probability. [REVIEW] Foundations of Science 11 (3):221-247. A careful analysis of Salmon’s Theoretical Realism and van Fraassen’s Constructive Empiricism shows that both share a common origin: the requirement of literal construal of theories inherited by the Standard View. However, despite this common starting point, Salmon and van Fraassen strongly disagree on the existence of unobservable entities. 
I argue that their different ontological commitment towards the existence of unobservables traces back to their different views on the interpretation of probability via different conceptions of induction. In fact, inferences to (...) Export citation My bibliography 26. Peter van Inwagen (2004). Van Inwagen on Free Will. In Joseph K. Campbell (ed.), Freedom and Determinism. Cambridge MA: Bradford Book/MIT Press Export citation My bibliography   1 citation 27. Irving H. Anellis (2012). Jean van Heijenoort's Conception of Modern Logic, in Historical Perspective. Logica Universalis 6 (3-4):339-409. I use van Heijenoort’s published writings and manuscript materials to provide a comprehensive overview of his conception of modern logic as a first-order functional calculus and of the historical developments which led to this conception of mathematical logic, its defining characteristics, and in particular to provide an integral account, from his most important publications as well as his unpublished notes and scattered shorter historico-philosophical articles, of how and why the mathematical logic, whose he traced to Frege and the culmination of (...) Export citation My bibliography   1 citation 28. Harry J. van Buren Iii & Michelle Greenwood (2013). Ethics and HRM Education. Journal of Academic Ethics 11 (1):1-15. Export citation My bibliography   1 citation Export citation My bibliography 30. Sergio A. Gallegos (2015). Measurement and Metaphysics in van Fraassen’s Scientific Representation. Axiomathes 25 (1):117-131. Van Fraassen has presented in Scientific Representation an attractive notion of measurement as an important part of the empiricist structuralism that he endorses. However, he has been criticized on the grounds that both his notion of measurement and his empiricist structuralism force him to do the very thing he objects to in other philosophical projects—to endorse a controversial metaphysics. This paper proposes a defense of van Fraassen by arguing that his project is indeed a ‘metaphysical’ project, but one which is (...) Export citation My bibliography 31. Harry J. van Buren Iii & Michelle Greenwood (2009). Stakeholder Voice. Philosophy of Management 8 (3):15-23. The 25th anniversary of R. Edward Freeman’s Strategic Management: A Stakeholder Approach provides an opportunity to consider where stakeholder theory has been, where it is going, and how it might influence the behavior of academics conducting stakeholder-oriented research. We propose that Freeman’s early work on the stakeholder concept supports the normative claim that a stakeholder’s contribution to value creation implies a right to stakeholder voice with regard to how a corporation makes decisions. Failure to account for stakeholder voice (especially for (...) Export citation My bibliography   1 citation 32. Kenshi Miyabe (2010). An Extension of van Lambalgen's Theorem to Infinitely Many Relative 1-Random Reals. Notre Dame Journal of Formal Logic 51 (3):337-349. Van Lambalgen's Theorem plays an important role in algorithmic randomness, especially when studying relative randomness. In this paper we extend van Lambalgen's Theorem by considering the join of infinitely many reals which are random relative to each other. In addition, we study computability of the reals in the range of Omega operators. It is known that $\Omega^{\phi'}$ is high. We extend this result to that $\Omega^{\phi^{(n)}}$ is $\textrm{high}_n$ . We also prove that there exists A such that, for each n (...) 
Export citation My bibliography   1 citation 33. John Martin Fischer (1986). Van Inwagen on Free Will. Philosophical Quarterly 36 (April):252-260. I discuss van inwagen's "first formal argument" for the incompatibility of causal determinism and freedom to do otherwise. I distinguish different interpretations of the important notion, "s can render p false." I argue that on none of these interpretations is the argument clearly sound. I point to gaps in the argument, Although I do not claim that it is unsound. Export citation My bibliography   3 citations 34. Harold W. Noonan (2014). Tollensing van Inwagen. Philosophia 42 (4):1055-1061. Van Inwagen has an ingenious argument for the non-existence of human artefacts . But the argument cannot be accepted, since human artefacts are everywhere. However, it cannot be ignored. The proper response to it is to treat it as a refutation of its least plausible premise, i.e., to ‘tollens’ it. I first set out van Inwagen’s argument. I then identify its least plausible premise and explain the consequence of denying it, that is, the acceptance of a plenitudinous, pluralist ontology. I (...) Export citation My bibliography 35. http://dx.doi.org/10.5007/1808-1711.2008v12n2p121 O objetivo deste trabalho é discutir e desenvolver o diagnóstico que efetua van Fraassen (1987, p. 110) da lei de Hardy-Weinberg, de acordo coo qual esta: 1) não pode ser considerada uma lei a ser utilizada como un axioma da teoria genética de populações, pois é uma lei de equilíbrio que só vale sob certas condições especiais, 2) só determina uma subclasse de modelos, 3) sua generalização resulta vácua e 4) variantes complexas da lei podem ser deduzidas para pressupostos (...) Translate Export citation My bibliography 36. Martin Kusch (2015). Microscopes and the Theory-Ladenness of Experience in Bas van Fraassen’s Recent Work. Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 46 (1):167-182. Bas van Fraassen’s recent book Scientific Representation: Paradoxes of Perspective modifies and refines the “constructive empiricism” of The Scientific Image in a number of ways. This paper investigates the changes concerning one of the most controversial aspects of the overall position, that is, van Fraassen’s agnosticism concerning the veridicality of microscopic observation. The paper tries to make plausible that the new formulation of this agnosticism is an advance over the older rendering. The central part of this investigation is an attempt (...) Export citation My bibliography 37. Sandy C. Boucher (2015). Functionalism and Structuralism as Philosophical Stances: Van Fraassen Meets the Philosophy of Biology. Biology and Philosophy 30 (3):383-403. I consider the broad perspectives in biology known as ‘functionalism’ and ‘structuralism’, as well as a modern version of functionalism, ‘adaptationism’. I do not take a position on which of these perspectives is preferable; my concern is with the prior question, how should they be understood? Adapting van Fraassen’s argument for treating materialism as a stance, rather than a factual belief with propositional content, in the first part of the paper I offer an argument for construing functionalism (...) Export citation My bibliography 38. Anita Burdman Feferman (2012). Jean van Heijenoort: Kaleidoscope. [REVIEW] Logica Universalis 6 (3-4):277-291. Leitmotifs in the life of Jean van Heijenoort. Export citation My bibliography 39. Solomon Feferman (2012). On Rereading van Heijenoort's Selected Essays. 
Logica Universalis 6 (3-4):535-552. This is a critical reexamination of several pieces in van Heijenoort’s Selected Essays that are directly or indirectly concerned with the philosophy of logic or the relation of logic to natural language. Among the topics discussed are absolutism and relativism in logic, mass terms, the idea of a rational dictionary, and sense and identity of sense in Frege. Export citation My bibliography 40. Stephen Voss (ed.) (1993). Essays on the Philosophy and Science of René Descartes. Oxford University Press. A major contribution to Descartes studies, this book provides a panorama of cutting-edge scholarship ranging widely over Descartes's own primary concerns: metaphysics, physics, and its applications. It is at once a tool for scholars and--steering clear of technical Cartesian science--an accessible resource that will delight nonspecialists. The contributors include Edwin Curley, Willis Doney, Alan Gabbey, Daniel Garber, Marjorie Grene, Gary Hatfield, Marleen Rozemond, John Schuster, Dennis Sepper, Stephen Voss, Stephen Wagner, Margaret Welson, Jean Marie Beyssade, Michelle Beyssade, Michel Henry, (...) Export citation My bibliography   2 citations 41. Janez Bregant (2004). Van Gulick's Solution of the Exclusion Problem Revisited. Acta Analytica 19 (33):83-94. The anti-reductionist who wants to preserve the causal efficacy of mental phenomena faces several problems in regard to mental causation, i.e. mental events which cause other events, arising from her desire to accept the ontological primacy of the physical and at the same time save the special character of the mental. Psychology tries to persuade us of the former, appealing thereby to the results of experiments carried out in neurology; the latter is, however, deeply rooted in our everyday actions and (...) Export citation My bibliography 42. Mitchell O. Stokes (2007). Van Inwagen and the Quine-Putnam Indispensability Argument. Erkenntnis 67 (3):439 - 453. In this paper I do two things: (1) I support the claim that there is still some confusion about just what the Quine-Putnam indispensability argument is and the way it employs Quinean meta-ontology and (2) I try to dispel some of this confusion by presenting the argument in a way which reveals its important meta-ontological features, and include these features explicitly as premises. As a means to these ends, I compare Peter van Inwagen’s argument for the existence of properties with (...) Export citation My bibliography 43. John Bacon (1990). Van Cleve Versus Closure. Philosophical Studies 58 (3):239-242. In "Supervenience, Necessary Coextension, and Reducibility" (Philosophical Studies 49, 1986, 163-176), among other results, I showed that weak or ordinary supervenience is equivalent to Jaegwon Kim's strong supervenience, given certain assumptions: S4 modality, the usual modal conception of properties as class-concepts, and diagonal closure or resplicing of the set of base properties. This last means that any mapping of possible worlds into extensions of base properties counts itself as a base property. James Van Cleve attacks the modal conception of property (...) Export citation My bibliography 44. Helen Longino (2009). Perilous Thoughts: Comment on Van Fraassen. Philosophical Studies 143 (1):25 - 32. Bas van Fraassen’s empiricist reading of Perrin’s achievement invites the question: whose doubts about atoms did Perrin put to rest? 
This comment recontextualizes the argument and applies the notion of empirical grounding to some contemporary work in behavioral biology. Export citation My bibliography 45. Irving H. Anellis (2012). Editor's Introduction to Jean van Heijenoort, Historical Development of Modern Logic. Logica Universalis 6 (3-4):301-326. Van Heijenoort’s account of the historical development of modern logic was composed in 1974 and first published in 1992 with an introduction by his former student. What follows is a new edition with a revised and expanded introduction and additional notes. Export citation My bibliography 46. Felice Masi (2012). Il verso della dissoluzione e quello della caduta. Notizie sull'orientamento architettonico tra Th. Lipps e H. van der Laan. [REVIEW] Aisthesis. Pratiche, Linguaggi E Saperi Dell’Estetico 5 (2). The paper aims at drawing the main lines of a reflection about architectonic space, starting from the comparison between two hypothesis, as much as ever different: Theodor Lipps’ spatial aesthetics and Hans van der Laan’s elemental theory. The emphasis given by both authors to the intersection between directions and way, but also to the mutual subordination between thing and space, allows to rewrite the obituary of architecture as a spatial art, according to which the Modern Style has turned the spatiality (...) No categories Translate Export citation My bibliography 47. Samuel Simon & Aline Moares (2009). O empirismo construtivo de Bas C. Van Fraassen E o problema do sucesso científico. Philósophos - Revista de Filosofia 12 (2). O presente trabalho tem por objetivo apresentar os principais aspectos do Empirismo Construtivo de Bas C. van Fraassen, particularmente no que diz respeito ao problema do sucesso científico. Nesse contexto, serão examinadas as noções de observável e inobservável e suas relações com o ‘argumento do milagre’ e da ‘coincidência cósmica’, ambos criticados por van Fraassen. As respostas de autores que defendem o Realismo Científico serão então apresentadas, contrapondo-se aos argumentos do Empirismo Construtivo. Finalmente, possíveis dificuldades do Empirismo Construtivo serão ainda (...) Translate Export citation My bibliography 48. Caddie Putnam Rankin, Harry Van Buren & Michelle Westermann-Behaylo (2012). Corporate Compassion in Disaster Relief. Proceedings of the International Association for Business and Society 23:66-77. When natural disasters strike, a network of individuals, aid agencies, and corporations join together in a humanitarian effort to provide relief and recovery to those in need. Corporations, in particular, have played an increasing role in disaster assistance by providing financial support, goods, services, and logistic coordination (Muller and Whiteman 2009). Previous research has addressed corporate responses to disaster by investigating the factors that impact the likelihood of giving. Instead of focusing on the likelihood of corporate action, or inaction, we (...) Export citation My bibliography 49. Stijn Van Impe (2012). Kants morele kritiek op het atheïsme: mogelijkheid of onmogelijkheid van het hoogste goede? Algemeen Nederlands Tijdschrift voor Wijsbegeerte 104 (1). Translate
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7343424558639526, "perplexity": 7504.062051806709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464050960463.61/warc/CC-MAIN-20160524004920-00086-ip-10-185-217-139.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/175746/how-to-force-a-label-to-be-a-given-string
# How to force a label to be a given string?

Normally I issue a command like \label{foo}, and this means that foo gets entered into a symbol table somewhere that says that it equals 14, or xiv, or aa, or whatever. These symbols are generated and incremented automatically, and to change the numbering system, I can use packages like alphalph.

Suppose I want to force foo to evaluate to some arbitrary string, such as "Socrates." E.g., I want something like \mynameis{Socrates}\label{foo}, and then when I say \ref{foo}, the result will not be "14" but "Socrates." I want to completely bypass the normal system involving counters. If Socrates is the name of a chapter, then I don't expect LaTeX to be smart enough to automatically name the next chapter Plato. So, e.g.:

    \chapter{The life of Socrates}\mynameis{Socrates}\label{foo}
    ...
    \chapter{The life of Plato}\label{bar}
    ...

This is an error on my part. It's OK with me if bar is set to garbage, or if LaTeX chokes, or if bar is set to some arbitrary string such as Socrates2. Is there some way to do this?

## 1 Answer

This is fairly easy:

    \documentclass{article}
    \makeatletter
    \newcommand{\mynameis}[1]{#1\renewcommand{\@currentlabel}{#1}}
    \makeatother
    \begin{document}
    \mynameis{Socrates}\label{foo}%
    Do you know about \ref{foo}?
    \end{document}

\@currentlabel is the macro that is stored when you use \label (in addition to \thepage). So, all we do is update this to our liking before calling \label.

If you wish for this to be compatible with hyperref, you could issue an additional \phantomsection so the hyperlink is correct (and perhaps also update \@currentlabelname):

    \documentclass{article}
    \usepackage{hyperref}
    \makeatletter
    \newcommand{\mynameis}[1]{%
      \phantomsection#1% Mark hyperlink
      \renewcommand{\@currentlabel}{#1}%
      \renewcommand{\@currentlabelname}{#1}}
    \makeatother
    \begin{document}
    \mynameis{Socrates}\label{foo}%
    Do you know about \ref{foo} or \nameref{foo}?
    \end{document}

For more on cross-referencing, see Understanding how references and labels work.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509626984596252, "perplexity": 2257.114812524598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645265792.74/warc/CC-MAIN-20150827031425-00205-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/integral-of-e-constant-x-2.9061/
# Integral of e^-(constant)x^2

1. Nov 16, 2003

### Kristen

Need the integral of e^-(constant)x^2... don't want to use the Gauss integral trick

2. Nov 16, 2003

### Ambitwistor

3. Nov 17, 2003

### mathman

The indefinite integral cannot be expressed in simple form. Usually it is given in terms of a function called "erf", which is simply a standard form (constant=1/2) integral.
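For reference, the standard facts the thread is gesturing at (stated here for completeness, not quoted from the forum), taking the constant $a > 0$: the antiderivative has no elementary closed form, but it can be written with the error function,

$$\int e^{-a x^2}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{a}}\;\operatorname{erf}\!\left(\sqrt{a}\,x\right) + C, \qquad \operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}\,dt,$$

while the definite integral over the whole real line is elementary:

$$\int_{-\infty}^{\infty} e^{-a x^2}\,dx = \sqrt{\frac{\pi}{a}}.$$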
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998171925544739, "perplexity": 4256.676461308775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743968.63/warc/CC-MAIN-20181118052443-20181118074443-00256.warc.gz"}
http://forum.dlang.org/thread/hcqb44$1nc9$1@digitalmars.com?page=2
On Tue, Nov 3, 2009 at 3:54 PM, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote: > Leandro Lucarella wrote: >> >> Andrei Alexandrescu, el  3 de noviembre a las 16:33 me escribiste: >>> >>> SafeD is, unfortunately, not finished at the moment. I want to leave >>> in place a stub that won't lock our options. Here's what we >>> currently have: >>> >>> module(system) calvin; >>> >>> This means calvin can do unsafe things. >>> >>> module(safe) susie; >>> >>> This means susie commits to extra checks and therefore only a subset of >>> D. >>> >>> module hobbes; >>> >>> This means hobbes abides to whatever the default safety setting is. >>> >>> The default safety setting is up to the compiler. In dmd by default >>> it is "system", and can be overridden with "-safe". >> >> What's the rationale for letting the compiler decide? I can't see nothing >> system, I think the default should be defined (I'm not sure what the >> default should be though). > > The parenthesis pretty much destroys your point :o). > > I don't think letting the implementation decide is a faulty model. If you > know what you want, you say it. Otherwise it means you don't care. How can you not care? Either your module uses unsafe features or it doesn't. So it seems if you don't specify, then your module must pass the strictest checks, because otherwise it's not a "don't care" situation -- it's a "system"-only situation. --bb Andrei Alexandrescu wrote: > Sketch of the safe rules: > > \begin{itemize*} > \item No @cast@ from a pointer type to an integral type and vice versa replace integral type with non-pointer type. > \item No @cast@ between unrelated pointer types > \item Bounds checks on all array accesses > \item No unions that include a reference type (array, @class@, > pointer, or @struct@ including such a type) pointers are not a reference type. Replace "reference type" with "pointers or reference types". > \item No pointer arithmetic > \item No escape of a pointer or reference to a local variable outside > its scope revise: cannot take the address of a local or a reference. > \item Cross-module function calls must only go to other @safe@ modules > \end{itemize*} . no inline assembler . no casting away of const, immutable, or shared Bill Baxter wrote: > On Tue, Nov 3, 2009 at 2:33 PM, Andrei Alexandrescu > <SeeWebsiteForEmail@erdani.org> wrote: >> SafeD is, unfortunately, not finished at the moment. I want to leave in >> place a stub that won't lock our options. Here's what we currently have: >> >> module(system) calvin; >> >> This means calvin can do unsafe things. >> >> module(safe) susie; >> >> This means susie commits to extra checks and therefore only a subset of D. >> >> module hobbes; >> >> This means hobbes abides to whatever the default safety setting is. >> >> The default safety setting is up to the compiler. In dmd by default it is >> "system", and can be overridden with "-safe". >> >> Sketch of the safe rules: >> >> \begin{itemize*} >> \item No @cast@ from a pointer type to an integral type and vice versa >> \item No @cast@ between unrelated pointer types >> \item Bounds checks on all array accesses >> \item No unions that include a reference type (array, @class@, >> pointer, or @struct@ including such a type) >> \item No pointer arithmetic >> \item No escape of a pointer or reference to a local variable outside >> its scope >> \item Cross-module function calls must only go to other @safe@ modules >> \end{itemize*} >> >> So these are my thoughts so far. 
There is one problem though related to the >> last \item - there's no way for a module to specify "trusted", meaning: >> "Yeah, I do unsafe stuff inside, but safe modules can call me no problem". >> Many modules in std fit that mold. >> >> How can we address that? Again, I'm looking for a simple, robust, extensible >> design that doesn't lock our options. > > I have to say that I would be seriously annoyed to see repeated > references to a feature that turns out to be vaporware. (I'm guessing > there will be repeated references to SafeD based on the Chapter 4 > sample, and I'm guessing it will be vaporware based on the question > you're asking above). I'd say leave SafeD for the 2nd edition, and > just comment that work is underway in a "Future of D" chapter near the > end of the book. And of course add a "Look to <the publishers website > || digitalmars.com> for the latest!" > > Even if not vaporware, it looks like whatever you write is going to be > about something completely untested in the wild, and so has a high > chance of turning out to need re-designing in the face of actual use. > > --bb Ok, I won't use the term SafeD as if it were a product. But -safe is there, some checks are there, and Walter is apparently willing to complete them. It's not difficult to go with an initially conservative approach - e.g., "no taking the address of a local" as he wrote in a recent post - although a more refined approach would still allow to take addresses of locals, as long as they don't escape. Andrei On Tue, 03 Nov 2009 17:55:15 -0600, Andrei Alexandrescu wrote: > There's a lot more, but there are a few useful subspaces. One is, if an > entire application only uses module(safe) that means there is no memory > error in that application, ever. > > Andrei Does that mean that a module that uses a "trusted" module must also be marked as "trusted?" I would see this as pointless since system modules are likely to be used in safe code a lot. I think the only real option is to have the importer decide if it is trusted. I don't see a reasonable way to have third party certification. It is between the library writer and application developer. Since the library writer's goal should be to have a system module that is safe, he would likely want to mark it as trusted. This would leave "system" unused because everyone wants to be safe. In conclusion, here is a chunk of possible import options. I vote for the top two. import(system) std.stdio; system import std.stdio; trusted import std.stdio; import(trusted) std.stdio; import("This is a system module and I know that it is potentially unsafe, but I still want to use it in my safe code") std.stdio; Walter Bright Wrote: > Andrei Alexandrescu wrote: > > Sketch of the safe rules: > > > > \begin{itemize*} > > \item No @cast@ from a pointer type to an integral type and vice versa > > replace integral type with non-pointer type. > > > \item No @cast@ between unrelated pointer types > > \item Bounds checks on all array accesses > > \item No unions that include a reference type (array, @class@, > > pointer, or @struct@ including such a type) > > pointers are not a reference type. Replace "reference type" with > "pointers or reference types". > > > \item No pointer arithmetic > > > \item No escape of a pointer or reference to a local variable outside > > its scope > > revise: cannot take the address of a local or a reference. > > > \item Cross-module function calls must only go to other @safe@ modules > > \end{itemize*} > > . no inline assembler > . 
no casting away of const, immutable, or shared How does casting away const, immutable, or shared cause memory corruption? If I understand SafeD correctly, that's its only goal. If it does more, I'd also argue casting to shared or immutable is, in general, unsafe. I'm also unsure if safeD has really fleshed out what would make use of (lockfree) shared variables safe. For example, array concatenation in one thread while reading in another thread could allow reading of garbage memory (e.g. if the length was incremented before writing the cell contents) Jesse Phillips wrote: > On Tue, 03 Nov 2009 17:55:15 -0600, Andrei Alexandrescu wrote: > >> There's a lot more, but there are a few useful subspaces. One is, if an >> entire application only uses module(safe) that means there is no memory >> error in that application, ever. >> >> Andrei > > Does that mean that a module that uses a "trusted" module must also be > marked as "trusted?" I would see this as pointless since system modules > are likely to be used in safe code a lot. Same here. > I think the only real option is to have the importer decide if it is > trusted. That can't work. I can't say that stdc.stdlib is trusted no matter how hard I try. I mean free is there! > I don't see a reasonable way to have third party certification. > It is between the library writer and application developer. Since the > library writer's goal should be to have a system module that is safe, he > would likely want to mark it as trusted. This would leave "system" unused > because everyone wants to be safe. Certain modules definitely can't aspire to be trusted. But for example std.stdio can claim to be trusted because, in spite of using untrusted stuff like FILE* and fclose, they are encapsulated in a way that makes it impossible for a safe client to engender memory errors. > In conclusion, here is a chunk of possible import options. I vote for the > top two. > > import(system) std.stdio; > system import std.stdio; > trusted import std.stdio; > import(trusted) std.stdio; > import("This is a system module and I know that it is potentially unsafe, > but I still want to use it in my safe code") std.stdio; Specifying a clause with import crossed my mind too, it's definitely something to keep in mind. Andrei Jason House wrote: > Walter Bright Wrote: > >> Andrei Alexandrescu wrote: >>> Sketch of the safe rules: >>> >>> \begin{itemize*} \item No @cast@ from a pointer type to an >>> integral type and vice versa >> replace integral type with non-pointer type. >> >>> \item No @cast@ between unrelated pointer types \item Bounds >>> checks on all array accesses \item No unions that include a >>> reference type (array, @class@, pointer, or @struct@ including >>> such a type) >> pointers are not a reference type. Replace "reference type" with >> "pointers or reference types". >> >>> \item No pointer arithmetic \item No escape of a pointer or >>> reference to a local variable outside its scope >> revise: cannot take the address of a local or a reference. >> >>> \item Cross-module function calls must only go to other @safe@ >>> modules \end{itemize*} >> add: . no inline assembler . no casting away of const, immutable, >> or shared > > How does casting away const, immutable, or shared cause memory > corruption? If you have an immutable string, the compiler may cache or enregister the length and do anything (such as hoisting checks out of loops) in confidence the length will never change. If you do change it -> memory error. 
> If I understand SafeD correctly, that's its only goal. If it does > more, I'd also argue casting to shared or immutable is, in general, > unsafe. I'm also unsure if safeD has really fleshed out what would > make use of (lockfree) shared variables safe. For example, array > allow reading of garbage memory (e.g. if the length was incremented > before writing the cell contents) Shared arrays can't be modified. Andrei Jason House wrote: > How does casting away const, immutable, or shared cause memory > corruption? If I understand SafeD correctly, that's its only goal. If > it does more, I'd also argue casting to shared or immutable is, in > general, unsafe. They can cause memory corruption because inadvertent "tearing" can occur when two parts to a memory reference are updated, half from one and half from another alias. > I'm also unsure if safeD has really fleshed out what > would make use of (lockfree) shared variables safe. For example, > could allow reading of garbage memory (e.g. if the length was > incremented before writing the cell contents) That kind of out-of-order reading is just what shared is meant to prevent. On Tue, 03 Nov 2009 17:33:39 -0500, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote: > SafeD is, unfortunately, not finished at the moment. I want to leave in > place a stub that won't lock our options. Here's what we currently have: > > module(system) calvin; > > This means calvin can do unsafe things. > > module(safe) susie; > > This means susie commits to extra checks and therefore only a subset of > D. > > module hobbes; > > This means hobbes abides to whatever the default safety setting is. > ... > \item Cross-module function calls must only go to other @safe@ modules > \end{itemize*} > > So these are my thoughts so far. There is one problem though related to > the last \item - there's no way for a module to specify "trusted", > meaning: "Yeah, I do unsafe stuff inside, but safe modules can call me > no problem". Many modules in std fit that mold. My interpretation of the module decorations was: module(system) calvin; This means calvin uses unsafe things, but is considered safe for other modules (it overrides the setting of the compiler, so can be compiled even in safe mode). module(safe) susie; This means susie commits to extra checks, and will be compiled in safe mode even if the compiler is in unsafe mode. Susie can only import module(safe) or module(system) modules, or if the compiler is in safe mode, any module. module hobbes; This means hobbes doesn't care whether he's safe or not. (note the My rationale for interpreting module(system) is: why declare a module as system unless you *wanted* it to be compilable in safe mode? I would expect that very few modules are marked as module(system). And as for the default setting, I think that unsafe is a reasonable default. You can always create a shortcut/script/symlink to the compiler that adds the -safe flag if you wanted a safe-by-default version. -Steve "Andrei Alexandrescu" <SeeWebsiteForEmail@erdani.org> wrote the following in the newsgroup: news:hcr2hb$dvm$1@digitalmars.com... > Jesse Phillips wrote: >> On Tue, 03 Nov 2009 17:55:15 -0600, Andrei Alexandrescu wrote: >> >>> There's a lot more, but there are a few useful subspaces. One is, if an >>> entire application only uses module(safe) that means there is no memory >>> error in that application, ever. >>> >>> Andrei >> >> Does that mean that a module that uses a "trusted" module must also be >> marked as "trusted?"
I would see this as pointless since system modules >> are likely to be used in safe code a lot. > > Same here. > >> I think the only real option is to have the importer decide if it is >> trusted. > > That can't work. I can't say that stdc.stdlib is trusted no matter how > hard I try. I mean free is there! > >> I don't see a reasonable way to have third party certification. It is >> between the library writer and application developer. Since the library >> writer's goal should be to have a system module that is safe, he would >> likely want to mark it as trusted. This would leave "system" unused >> because everyone wants to be safe. > > Certain modules definitely can't aspire to be trusted. But for example > std.stdio can claim to be trusted because, in spite of using untrusted > stuff like FILE* and fclose, they are encapsulated in a way that makes it > impossible for a safe client to engender memory errors. > >> In conclusion, here is a chunk of possible import options. I vote for the >> top two. >> >> import(system) std.stdio; >> system import std.stdio; >> trusted import std.stdio; >> import(trusted) std.stdio; >> import("This is a system module and I know that it is potentially unsafe, >> but I still want to use it in my safe code") std.stdio; > > Specifying a clause with import crossed my mind too, it's definitely > something to keep in mind. > > > Andrei >
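The encapsulation idea that keeps coming up in this thread (std.stdio calling unsafe primitives such as FILE* and fclose internally while exposing an interface that safe code cannot misuse) is worth a concrete illustration. The sketch below uses the per-function @safe/@trusted/@system attributes that D later settled on, rather than the module-level module(safe)/module(system) syntax being debated above, and the wrapper names are made up for the example:

    // Illustrative sketch only: hypothetical names, present-day attribute syntax.
    import core.stdc.stdlib : malloc;

    // Unsafe primitive: raw C allocation, callable only from @system/@trusted code.
    @system ubyte* rawAlloc(size_t n)
    {
        return cast(ubyte*) malloc(n);
    }

    // "Trusted" wrapper: does unsafe things inside, but the interface it exposes
    // (a bounds-checked slice or null) cannot be used by a @safe caller to
    // corrupt memory.
    @trusted ubyte[] makeBuffer(size_t n)
    {
        auto p = rawAlloc(n);
        if (p is null) return null;
        return p[0 .. n];            // pointer slicing: allowed here, rejected in @safe code
    }

    @safe void user()
    {
        auto buf = makeBuffer(64);   // fine: @trusted is callable from @safe code
        if (buf.length) buf[0] = 42; // array access is bounds-checked
        // rawAlloc(1);              // would not compile: @system call from @safe code
    }

The design point matches the std.stdio example in the thread: the unsafe calls live behind the trusted boundary, and client code that sticks to the safe subset cannot engender memory errors through it (a leak, as in this sketch, is possible but is not memory corruption).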
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.556215226650238, "perplexity": 7457.968911759948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430452189638.13/warc/CC-MAIN-20150501034949-00023-ip-10-235-10-82.ec2.internal.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2004_AMC_10B_Problems/Problem_18&direction=prev&oldid=141353
# 2004 AMC 10B Problems/Problem 18

## Problem

In the right triangle , we have , , and . Points , , and are located on , , and , respectively, so that , , and . What is the ratio of the area of to that of ?

## Solution 1

Let . Because is divided into four triangles, . Because of triangle area, . and , so . , so .

## Solution 2

First of all, note that , and therefore . Draw the height from onto as in the picture below: Now consider the area of . Clearly the triangles and are similar, as they have all angles equal. Their ratio is , hence . Now the area of can be computed as = . Similarly we can find that as well. Hence , and the answer is .

## Solution 3 (Coordinate Geometry)

We will put triangle ACE on an xy-coordinate plane with C being the origin. The area of triangle ACE is 96. To find the area of triangle DBF, let D be (4, 0), let B be (0, 9), and let F be (12, 3). You can then use the shoelace theorem to find the area of DBF, which is 42.

## Solution 4

You can also place a point on such that is , creating trapezoid . Then, you can find the area of the trapezoid, subtract the area of the two right triangles and , divide by the area of , and get the ratio of .

## Solution 5

It is well known that when two triangles share an angle, the ratio of their areas equals the ratio of the products of the two sides surrounding the shared angle. We can find all the ratios of the triangles except for and then subtract from In this case, we have sharing with . Therefore, we have Also note that shares with . Therefore, we have Lastly, note that shares with . Therefore, we have Thus, the ratio of to is

~mathboy282

## Solution 6 (Wooga Looga Theorem)

We know that , so by the Wooga Looga Theorem we have .
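The inline math in the statement and solutions above was lost in extraction. The figures that did survive in Solution 3 (C at the origin, D = (4, 0), B = (0, 9), F = (12, 3), area of ACE = 96, area of DBF = 42) pin down the intended configuration, so the restatement and worked check below are inferred from those numbers rather than quoted from the wiki. The right triangle is $\triangle ACE$ with legs $CA = 12$ and $CE = 16$ (so $EA = 20$ and $[ACE] = \tfrac{1}{2}\cdot 12\cdot 16 = 96$), and $B$, $D$, $F$ lie on $\overline{AC}$, $\overline{CE}$, $\overline{EA}$ with $AB = 3$, $CD = 4$, $EF = 5$. Placing $C = (0,0)$, $A = (0,12)$, $E = (16,0)$ reproduces the listed points, and the shoelace formula gives

$$[DBF] = \tfrac{1}{2}\left|x_D(y_B-y_F) + x_B(y_F-y_D) + x_F(y_D-y_B)\right| = \tfrac{1}{2}\left|4(9-3) + 0(3-0) + 12(0-9)\right| = \tfrac{1}{2}\,|24-108| = 42,$$

so the requested ratio is $[DBF]/[ACE] = 42/96 = 7/16$.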
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9909345507621765, "perplexity": 698.8075182999422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594808.94/warc/CC-MAIN-20210423131042-20210423161042-00384.warc.gz"}
https://www.gfdl.noaa.gov/blog_held/1-introduction/
# 1. Introduction Posted on February 17th, 2011 in Isaac Held's Blog Infrared radiation emitted to space simulated by an atmospheric model under development at GFDL. (1 frame/3 hours for one full year, starting in January). My goal in this blog is to provide a forum for discussion of climate dynamics, with an emphasis, but not an exclusive focus, on climate change.  The level of discussion is meant to be appropriate for graduate students in atmospheric and oceanic sciences, but I hope that this type of discussion is also useful to students in other fields with good applied math, physics and/or engineering backgrounds, to practicing scientists in other fields, and to some of my own colleagues.  Different threads will probably focus on different parts of this intended readership. Comments will be heavily moderated to maintain a tone and a level of discussion appropriate for the intended audience.  Moderation will likely be slow. Comments must be closely related to the topic under discussion. I  hope to post something every other week, on average. I am employed by NOAA (and also lecture and advise graduate students at Princeton University).   The opinions that I express are mine and not official positions of NOAA.  However, I consider working on this blog to be fully consistent with NOAA’s outreach and communications policies. I call myself an atmospheric or climate dynamicist/theorist/modeler.  I am sure that there are philosophers of science who distinguish between the terms “theory” and “model”, but I don’t.  I work with a range of theories of different kinds; when these reach a certain level of complexity they are typically referred to as computer models. The most relevant distinction relates to the purpose of the model.  Some models are meant to improve our understanding of the climate system, not to simulate it with any precision.  I like to talk about building a hierarchy of these models designed to improve and encapsulate our understanding.  The most comprehensive models can be thought of as our best attempts at simulation, limited by available computer resources and our understanding of the effective governing dynamics on space and time scales resolvable with those resources. Here is an example of a very simple model consisting of two coupled linear ordinary differential equations: $c \,dT/dt \, = - \beta T - \gamma (T - T_0) + \mathcal{F}(t)$ $c_0 \, dT_0/dt = \gamma (T - T_0)$ $T$ and $T_0$ represent the perturbations to the global mean surface temperature and deep ocean temperature resulting from the radiative forcing $\mathcal{F}$. This model is used in a recent paper by myself and several colleagues to help frame the discussion of what we refer to as the recalcitrant component of global warming. The animation at the top is a small part of the output from another model that a group of us have been analyzing lately, a global atmospheric/land model living on a grid with approximately 50km spacing in the horizontal.  (One can think of the atmospheric component of this model as 37,519,200 coupled ordinary differential equations — not that this is a good measure of the complexity of the model.)  Shown in the animation is a full year of the infrared energy emitted to space  (black is high emission, white is low emission.)  What one sees mostly are the simulated high clouds that provide cold weakly emitting surfaces, but if one looks carefully one can see the diurnal cycle in the emission from the surface, which provides a feeling for the rate at which time is passing.  
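An aside on the two-box model above, before returning to the animation; this is just algebra on the two equations as written (for a constant forcing $\mathcal{F}$), not a result quoted from the paper. At full equilibrium both time derivatives vanish, which forces $T = T_0$ and an equilibrium response $T_{eq} = \mathcal{F}/\beta$. On timescales short enough that the deep ocean has barely warmed, setting only $dT/dt = 0$ in the first equation gives the smaller quasi-equilibrium response

$$T \approx \frac{\mathcal{F} + \gamma T_0}{\beta + \gamma},$$

so the surface initially responds as if the damping were $\beta + \gamma$ rather than $\beta$; the remaining warming emerges only as $T_0$ slowly adjusts, which is the sort of slow component that the post's mention of "recalcitrant" warming refers to.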
Notice the sharp distinction between the mid-latitude atmosphere (dominated by non-linear waves) and the tropical atmosphere (dominated by smaller scale moist convection). The model is introduced in this paper.  It is initialized at some point in the past (about 20 years before this animation loop) and is constrained only by imposed boundary conditions over the ocean and sea ice.   In a full climate model, the state of the oceans and sea ice would evolve freely as well.  Comparing this particular simulated space-time field with observations in ways that are most informative about model deficiencies and the reliability of the model for various applications is a formidable challenge. The two-box model and this high resolution atmospheric model illustrate two very distinct elements in the hierarchy of climate models. I’ll discuss both models in the next few posts.  My own work seems to gravitate towards creating models intermediate in complexity between these two limits, in an attempt to both increase our understand of the climate and provide ideas on how to improve our high-end models.  See this essay for a discussion of the importance of model hierarchies. [The views expressed on this blog are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.] ## 8 thoughts on “1. Introduction” 1. Jessica Kleiss says: Hey Isaac, Great idea for a new blog! And I love the animations. I teach courses in Environmental Studies (although I’m a Physical Oceanographer), and I think resources like this online will be a huge benefit to the exposure of concepts in our fields to students and educators! Keep up the great work with your blog! – Jessica 2. Hello Dr. Held, There’s virtually no blogs on the web aimed at your target audience on climate science, so this is very exciting, and I look forward to your posts. 3. Alexander Harvey says: Dr Held: This video series may be of some interest to those that pass by here: Mathematical and Statistical Approaches to Climate Modelling and Prediction https://www.sms.cam.ac.uk/collection/870907 It is a series of presentations mostly by modellers to modellers including various approaches to modelling, data assimilation, statistical emulation of simulators, exploring parameter space, how to construct climate model experiments (ensemble design as opposed to ensembles of “missed” opportunity {their joke not mine}), how to build parameterisations, and much more. Alex 4. Edwin Kite says: Thank you for taking the time to write this blog. It’s a important service to explain things at a level that can be understood by scientists working in neighbouring fields. 5. Ron Cram says: Dr. Held, You write “Some models are meant to improve our understanding of the climate system, not to simulate it with any precision.” True, and these models have value. Models which attempt to represent the climate precisely and from which researchers make claim about the future do not have any predictive value. The faith researchers put into these models is unwarranted and dependent on muddle-headed thinking. I have seen modelers talk of computer runs as “experiments.” Experiments are only performed in nature and in the lab, not in computer runs. I have even read modelers write words to the effect they were working “on something real.” I’m sorry to be blunt, but this is delusional. I used to invest in the stock market on the basis of computer models. 
Stock prices move on the basis of known laws including the law of supply and demand. (A great deal of computer trading still goes on, but it is mainly arbitrage – not longer term trading.) I was able to pick a number of variables and could perfectly hindcast the broader stock market or major market segments. The problem, of course, is that conditions change. The stock market is a chaotic system. Much like climate, the number of forces affecting the stock market is still unknown and future changes of the known forces (both short and longer-term) is impossible to know. To anyone who thinks computer modeling can foretell the future, I highly recommend the book “Useless Arithmetic” by Orrin Pilkey of Duke University and his daughter. Pilkey is an environmentalist who has extensive experience with computer models of shorelines. While computer models of shorelines are interesting, they are always wrong in the long-term. 1. Isaac Held says: Ron, with respect: I am not interested in comments like “the stock market is so-and-so therefore climate models are etc.” I am interested in comments that address my arguments about the climate system directly. Several of my posts are precisely concerned with what I refer to as the “argument from complexity” that you seem to support. I am totally serious in post #9: when I am confronted with this argument my initial response is “Well, summer IS warmer than winter — maybe climate isn’t all THAT complicated”. Several of the other posts are meant to introduce my view that the forced climate response to increasing CO2 is likely to be quite simple and linear in large part, just like the seasonal cycle, despite the complex and chaotic internal variability superposed. I am also not interested in getting into arguments about semantics. Computational “experimentation” and model-generated “data” are standard terminology in a lot of fields, with no implicit implications about the realism of the underlying model. If you don’t like this terminology that’s fine but there are far more interesting things to worry about. Trying to understand a model often requires you to put everything else aside, willingly suspending your disbelief and treating the model as your universe, much as if you were trying to enjoy, and maybe even understand, a novel. My wife and I were reading Don Quixote out loud recently, and after a session we would talk of the characters as if they were real. I would not call our state of mind “delusional” (never quite reaching the level of Woody Allen’s Kugelmass.) 6. John Puma says: Dr. Held, Thanks for the great blog. Would the simulated, emitted-IR clip be enhanced by a superimposed Day/Month counter? John Puma 1. Isaac Held says: This movie was made with ncview. I am afraid that I do not have time to do more than this right now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4692707359790802, "perplexity": 1250.8907567301494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00782.warc.gz"}
https://www.clutchprep.com/chemistry/practice-problems/90229/indicate-the-number-of-significant-figures-in-each-of-the-following-measured-qua
# Problem: Indicate the number of significant figures in each of the following measured quantities: 3.774 km

###### Expert Solution

We are asked to identify the number of significant figures. Significant figures are the digits that carry meaningful contributions to a measurement's resolution. In 3.774 km every digit is nonzero, so all of them are significant: the measurement has 4 significant figures.
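To see how the counting rule generalizes, here is a minimal Python sketch of my own (not part of the original solution; the helper name is made up, and it deliberately ignores scientific notation and the ambiguous trailing zeros of bare integers):

def count_sig_figs(s: str) -> int:
    # Strip the sign and the decimal point, then drop leading zeros,
    # which are never significant. Trailing zeros are kept, which is
    # correct whenever a decimal point is present in the string.
    digits = s.lstrip("+-").replace(".", "")
    return len(digits.lstrip("0"))

print(count_sig_figs("3.774"))    # -> 4: every digit is nonzero, so all are significant
print(count_sig_figs("0.00520"))  # -> 3: the leading zeros are only placeholders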
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834533095359802, "perplexity": 2408.9589421070477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704795033.65/warc/CC-MAIN-20210126011645-20210126041645-00738.warc.gz"}
http://hyperspacewiki.org/index.php/N-od
# N-od

Let $n \geq 3$. A simple $n$-od is a finite graph that is the union of $n$ arcs emanating from a single point $v$, called the vertex, and otherwise pairwise disjoint.
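As a concrete illustration (an addition of mine, not part of the wiki entry): the combinatorial skeleton of a simple $n$-od is a star, one vertex of degree $n$ joined to $n$ endpoints of degree 1. A short Python sketch of that adjacency structure, with illustrative vertex names:

n = 5  # a simple 5-od, for example
vertex = "v"
endpoints = [f"e{i}" for i in range(n)]
# Adjacency list of the underlying star graph: each arc joins v to one endpoint.
star = {vertex: endpoints, **{e: [vertex] for e in endpoints}}
degrees = {node: len(nbrs) for node, nbrs in star.items()}
print(degrees[vertex])                          # -> 5
print(all(degrees[e] == 1 for e in endpoints))  # -> True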
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7834542989730835, "perplexity": 432.4201529381461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189214.2/warc/CC-MAIN-20170322212949-00385-ip-10-233-31-227.ec2.internal.warc.gz"}
https://blogs.ams.org/visualinsight/2013/08/15/tiling/
# Tübingen Tiling

Tübingen Tiling – Greg Egan

A systematic way to generate quasiperiodic tilings of the plane is to take a lattice in higher dimensions and slice it at a funny angle.  Greg Egan’s Tübingen applet generates quasiperiodic tilings by projecting selected triangles from an $n$-dimensional lattice called the $\mathrm{A}_n$ lattice onto a plane. This particular picture comes from the $\mathrm{A}_4$ lattice. The applet produces moving pictures that are much more beautiful than this still image, so please check it out!

The $\mathrm{A}_n$ lattice lives in $n$ dimensions, but it’s easiest to describe it in one more dimension, as the set of all $(n+1)$-tuples of integers $(x_1,\ldots,x_{n+1})$ such that $$x_1 + \cdots + x_{n+1} = 0.$$

It’s a fun exercise to show that $\mathrm{A}_2$ is a 2-dimensional hexagonal lattice, the sort of lattice you use to pack pennies as densely as possible. Similarly, $\mathrm{A}_3$ gives a standard way of packing grapefruit, which is in fact the densest lattice packing of spheres in 3 dimensions. If you were stacking layers of 4-dimensional grapefruit you could use the $\mathrm{A}_4$ lattice, though that would not be the densest possible packing.

Let me rapidly sketch how we get from the $\mathrm{A}_4$ lattice to the beautiful tiling shown here. Each point $x$ in the $\mathrm{A}_4$ lattice is surrounded by a Voronoi cell, which consists of all points that are closer to $x$ than to any other lattice point. The Voronoi cells of $\mathrm{A}_4$ are all identical convex polytopes—can you figure out what this polytope is? The cells dual to these Voronoi cells are called Delaunay cells. To get the tiling we pick a plane $P$ in 4 dimensions, and whenever $P$ intersects a 2-dimensional face of a Voronoi cell, we project the corresponding 2d face of the corresponding Delaunay cell, which is a triangle, onto $P$. Then we draw these triangles on the plane!
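Here is a small sketch (mine, not Egan’s applet) that checks the penny-packing claim for $\mathrm{A}_2$ directly from the definition above: among the integer triples summing to zero, the shortest nonzero vectors come in exactly six copies, as they should for a hexagonal lattice.

from itertools import product
from collections import Counter

# A_2 lattice points near the origin: integer triples (x1, x2, x3) with x1 + x2 + x3 = 0.
points = [p for p in product(range(-2, 3), repeat=3) if sum(p) == 0]

# Group the nonzero points by squared length; the innermost shell holds the
# nearest neighbours of the origin.
shells = Counter(sum(c * c for c in p) for p in points if any(p))
print(sorted(shells.items())[0])  # -> (2, 6): six nearest neighbours, the hexagonal signature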
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5269301533699036, "perplexity": 384.9213525226054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181179.12/warc/CC-MAIN-20201125041943-20201125071943-00647.warc.gz"}
http://latex-my.blogspot.fr/2012/03/
## Featured Post

### We've moved to tex.my!
It’s been great writing on Blogger.com, but we’ve decided to move to a new platform, thanks to some valuable help from a great, great friend...

## Saturday, March 24, 2012

### Reaction to Bad Kerning
Methinks this is one syndrome likely to afflict LaTeX users as well as designers. Everyone who’s felt this way whenever you see a badly kerned sign, say “Aye”!

## Saturday, March 17, 2012

### Working collaboratively.. which one you prefer?
I found this website, perhaps via an event organizer who was advising on the usage of MS Word over LaTeX. Perhaps somebody out there could point out if there is any business entity that does the same business but the other way around – using LaTeX.

### Converting an EndNote Database to BibTeX
During a recent LaTeX introductory workshop, many participants said that they’re very much looking forward to using LaTeX for their future writings, but mentioned that there didn’t seem to be an obvious way of porting their existing EndNote bibliography library into BibTeX format. EndNote does have an “Export BibTeX” filter, but it doesn’t seem to generate satisfactory BibTeX files. After some googling, I found Bevan Weir’s customised export filter, which does a much better job than EndNote’s default. I modified his filter file a little bit more, and was able to convert an EndNote bibliography library to BibTeX with the following steps. I tested this with EndNote X5 on the Mac, with JabRef 2.7, but the steps should also work with Windows versions. %ENDNOTE% refers to the path where EndNote is installed on your system.
1. Put BibTeX_Export_LLT.ens (download) in %ENDNOTE%/Styles/ .
2. Start EndNote, and load your library.
3. Make sure the new style is listed: Edit > Output Styles > Open Style Manager. Make sure BibTeX_Export_LLT is checked.
4. File > Export. Make sure Save File as Type is set to Text Only, and Output Style is set to BibTeX_Export_LLT.
5. Save your file and check that it has a .bib extension.
6. Open the exported .bib in JabRef. There will be a whole bunch of errors about corrupted or empty BibTeX keys; don’t worry. Just click OK.
7. Ctrl+A to select all the BibTeX entries, Tools > Autogenerate BibTeX keys.
8. Check through the BibTeX entries, especially those highlighted red, to check and correct any crucial information loss.
And hopefully the converted bibliography file is now usable enough.

## Tuesday, March 6, 2012

### ‘Funny Drawings’ with pst-fun
I learned of the pst-fun package today from an answer at TeX.SX; it provides convenience commands for some ‘fun’ drawings in PSTricks. Time for some quick fun then!

\documentclass{minimal}
\usepackage{pst-fun}
\begin{document}
\begin{pspicture}(-1, -2)(13,10)
\psParrot{.8}
\rput (2.5,7) {\psBird[Branch]}
\rput (10,-1.5) {\psscalebox{-1 1}{\psKangaroo[fillcolor=red!30!yellow]{5.75}}}
\rput {-50} (6,0) {\psBird}
\end{pspicture}
\end{document}

Compile with latex, or xelatex if you want a PDF output. And the output looks like this:

…and I really should get back to work now!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42385172843933105, "perplexity": 7217.359923285458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864837.40/warc/CC-MAIN-20180522170703-20180522190703-00107.warc.gz"}
http://mxnet.io/get_started/amazonlinux_setup.html
# Installing MXNet on Amazon Linux

NOTE: For MXNet with Python installation, please refer to the new install guide.

Installing MXNet is a two-step process:
1. Build the shared library from the MXNet C++ source code.
2. Install the supported language-specific packages for MXNet.

Note: To change the compilation options for your build, edit the make/config.mk file and submit a build request with the make command.

## Build the Shared Library

On Amazon Linux, you need the following dependencies:
• Git (to pull code from GitHub)
• libatlas-base-dev (for linear algebraic operations)
• libopencv-dev (for computer vision operations)

Install these dependencies using the following commands:

# CMake is required for installing dependencies.
sudo yum install -y cmake

# Set appropriate library path env variables
echo 'export PATH=/usr/local/bin:$PATH' >> ~/.profile
echo 'export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH' >> ~/.profile
echo 'export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH' >> ~/.profile
echo '. ~/.profile' >> ~/.bashrc
source ~/.profile

# Install gcc-4.8/make and other development tools on Amazon Linux
# Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compile-software.html
# Install Python, Numpy, Scipy and set up tools.
sudo yum groupinstall -y "Development Tools"
sudo yum install -y python27 python27-setuptools python27-tools python-pip
sudo yum install -y python27-numpy python27-scipy python27-nose python27-matplotlib graphviz

# Install OpenBLAS at /usr/local/openblas
git clone https://github.com/xianyi/OpenBLAS
cd OpenBLAS
make FC=gfortran -j$(($(nproc) + 1))
sudo make PREFIX=/usr/local install
cd ..

# Install OpenCV at /usr/local/opencv
git clone https://github.com/opencv/opencv
cd opencv
mkdir -p build
cd build
cmake -D BUILD_opencv_gpu=OFF -D WITH_EIGEN=ON -D WITH_TBB=ON -D WITH_CUDA=OFF -D WITH_1394=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
sudo make PREFIX=/usr/local install

# Install Graphviz for visualization and Jupyter notebook for running examples and tutorials
sudo pip install graphviz
sudo pip install jupyter

# Export env variables for pkg config
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH

After installing the dependencies, use the following commands to pull the MXNet source code from GitHub:

# Get MXNet source code
git clone https://github.com/dmlc/mxnet.git ~/mxnet --recursive
# Move to source code parent directory
cd ~/mxnet
cp make/config.mk .
echo "USE_BLAS=openblas" >>config.mk
echo "ADD_CFLAGS += -I/usr/include/openblas" >>config.mk
echo "ADD_LDFLAGS += -lopencv_core -lopencv_imgproc -lopencv_imgcodecs" >>config.mk

If building with GPU support, run the commands below to add GPU dependency configurations to the config.mk file:

echo "USE_CUDA=1" >>config.mk
echo "USE_CUDA_PATH=/usr/local/cuda" >>config.mk
echo "USE_CUDNN=1" >>config.mk

Then build mxnet:

make -j$(nproc)

Executing these commands creates a library called libmxnet.so.

We have installed the MXNet core library. Next, we will install the MXNet interface package for the programming language of your choice:

## Install the MXNet Package for R

Run the following commands to install the MXNet dependencies and build the MXNet R package.

Rscript -e "install.packages('devtools', repo = 'https://cran.rstudio.com')"
cd R-package
Rscript -e "library(devtools); library(methods); options(repos=c(CRAN='https://cran.rstudio.com')); install_deps(dependencies = TRUE)"
cd ..
make rpkg

Note: R-package is a folder in the MXNet source.
These commands create the MXNet R package as a tar.gz file that you can install as an R package. To install the R package, run the following command, using your MXNet version number:

R CMD INSTALL mxnet_current_r.tar.gz

## Install the MXNet Package for Julia

The MXNet package for Julia is hosted in a separate repository, MXNet.jl, which is available on GitHub. To use the Julia bindings with an existing libmxnet installation, set the MXNET_HOME environment variable by running the following command:

export MXNET_HOME=/<path to>/libmxnet

The path to the existing libmxnet installation should be the root directory of libmxnet. In other words, you should be able to find the libmxnet.so file at $MXNET_HOME/lib. For example, if the root directory of libmxnet is ~, you would run the following command:

export MXNET_HOME=/~/libmxnet

You might want to add this command to your ~/.bashrc file. If you do, you can install the Julia package in the Julia console using the following command:

Pkg.add("MXNet")

For more details about installing and using MXNet with Julia, see the MXNet Julia documentation.

## Install the MXNet Package for Scala

There are two ways to install the MXNet package for Scala:
• Use the prebuilt binary package
• Build the library from source code

### Use the Prebuilt Binary Package

For Linux users, MXNet provides prebuilt binary packages that support computers with either GPU or CPU processors. To download and build these packages using Maven, change the artifactId in the following Maven dependency to match your architecture:

<dependency>
  <groupId>ml.dmlc.mxnet</groupId>
  <artifactId>mxnet-full_<system architecture></artifactId>
  <version>0.1.1</version>
</dependency>

For example, to download and build the 64-bit CPU-only version for Linux, use:

<dependency>
  <groupId>ml.dmlc.mxnet</groupId>
  <artifactId>mxnet-full_2.10-linux-x86_64-cpu</artifactId>
  <version>0.1.1</version>
</dependency>

If your native environment differs slightly from the assembly package, for example, if you use the openblas package instead of the atlas package, it’s better to use the mxnet-core package and put the compiled Java native library in your load path:

<dependency>
  <groupId>ml.dmlc.mxnet</groupId>
  <artifactId>mxnet-core_2.10</artifactId>
  <version>0.1.1</version>
</dependency>

### Build the Library from Source Code

Before you build MXNet for Scala from source code, you must complete building the shared library. After you build the shared library, run the following command from the MXNet source root directory to build the MXNet Scala package:

make scalapkg

This command creates the JAR files for the assembly, core, and example modules. It also creates the native library in the native/{your-architecture}/target directory, which you can use together with the core module. To install the MXNet Scala package into your local Maven repository, run the following command from the MXNet source root directory:

make scalainstall

## Install the MXNet Package for Perl

Before you build MXNet for Perl from source code, you must complete building the shared library.
After you build the shared library, run the following commands from the MXNet source root directory to build the MXNet Perl package:

## install PDL, Graphviz, Mouse, App::cpanminus, swig via yum before running these commands
cpanm -q -L "${HOME}/perl5" Function::Parameters
MXNET_HOME=${PWD}
export LD_LIBRARY_PATH=${MXNET_HOME}/lib
export PERL5LIB=${HOME}/perl5/lib/perl5

cd ${MXNET_HOME}/perl-package/AI-MXNetCAPI/
perl Makefile.PL INSTALL_BASE=${HOME}/perl5
make install

cd ${MXNET_HOME}/perl-package/AI-NNVMCAPI/
perl Makefile.PL INSTALL_BASE=${HOME}/perl5
make install

cd ${MXNET_HOME}/perl-package/AI-MXNet/
perl Makefile.PL INSTALL_BASE=${HOME}/perl5
make install

Note: You are more than welcome to contribute easy installation scripts for other operating systems and programming languages; see the community page for contributor guidelines.
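Finally, once the build steps above have finished, one quick sanity check (a hedged sketch of my own, not part of the official guide) is to confirm that the shared library and its dependencies resolve; the path assumes the ~/mxnet checkout used earlier.

import ctypes
import os

lib_path = os.path.expanduser("~/mxnet/lib/libmxnet.so")  # assumed build location from the steps above
ctypes.CDLL(lib_path)  # raises OSError if the library or a dependency (OpenBLAS, OpenCV, ...) is missing
print("libmxnet.so loaded successfully")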
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24879522621631622, "perplexity": 18390.43476286845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607786.59/warc/CC-MAIN-20170524035700-20170524055700-00043.warc.gz"}
https://tex.stackexchange.com/questions/489856/proper-usage-of-otherkeywords-option-in-listings
# Proper usage of otherkeywords option in listings

I'm trying to understand how listings' otherkeywords option really works. The documentation says about it:

Defines keywords that contain other characters, or start with digits. Each given 'keyword' is printed in keyword style, but without changing the 'letter', 'digit' and 'other' status of the characters. This key is designed to define keywords like =>, ->, -->, --, ::, and so on. If one keyword is a subsequence of another (like -- and -->), you must specify the shorter first. Furthermore, keywords are defined as a 'letter' followed by a sequence of 'letter' or 'digit' characters.

So let's look at an example.

\documentclass{article}
\usepackage{listings}
\usepackage{xcolor}

\lstdefinelanguage{mylang}{
  % alsodigit={<,>,-,*},
  otherkeywords={<,>,-,<-,->,<->,<--,-->,***},
  morekeywords=[1]{-},
  morekeywords=[2]{<,>,<-,->},
  morekeywords=[3]{<->,<--,-->,***}
}

\begin{document}
\lstset{
  language={mylang},
  basicstyle=\ttfamily,
  keywordstyle=[1]{\color{red}},
  keywordstyle=[2]{\color{green}},
  keywordstyle=[3]{\color{blue}}
}

\begin{lstlisting}
- red
< > <- -> green
<-> <-- --> *** blue
\end{lstlisting}
\end{document}

The output of the first line looks good: - should be treated as a class 1 keyword and be printed in red. On the second line the confusion begins. < and > are correctly printed in green, but <- and -> aren't. The documentation (as noted above) says that keywords being subsequences of other keywords should be defined first. This is exactly what is done in the example code, yet the output is still wrong. Even if listings would erroneously break up <- into < and -, I would expect the first < in <- to be printed in green. It's also printed in red, though, which is really confusing, because < never occurs in class 1.

The third line is also wrong, except for the *** keyword, which is probably because neither * nor ** occur as separate keywords.

If we don't skip over the first line in the otherkeywords description, especially the "or start with digits" part, changing <, > and - to 'digit' characters should satisfy all the conditions required by the documentation. However, adding alsodigit={<,>,-,*} makes no difference in the output.

Could anyone give more insights about what's going on here? I'm tempted to say this is a bug, or at least some extra information is missing in the documentation. Unfortunately, debugging macro expansion is too verbose to follow the exact procedure.

• Related in some way: tex.stackexchange.com/questions/472433/… – Steven B. Segletes May 8 at 19:03
• @Duplicate voters: Neither does the linked question ask for common keyword prefixes/subsequences nor does it address how the different catcodes of keyword characters are important nor does it answer why the package gives the result shown in my example. It just gives another example usage of the otherkeywords option. I'm not asking for some example that works correctly but want to know why the given example does not – siracusa May 9 at 12:34
• I agree. I in no way intended for the question to be closed as a result of my link. I have voted for a re-open. If I hadn't already upvoted, I would again. – Steven B. Segletes May 9 at 12:36
• Looks like a bug to me. – schtandard Aug 14 at 7:03
• @schtandard so it does to me. But it's annoying, and I don't have the time searching for a solution right now, hence the bounty. – Skillmon Aug 14 at 7:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6825619339942932, "perplexity": 1721.4401079145616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514577363.98/warc/CC-MAIN-20190923150847-20190923172847-00464.warc.gz"}
https://hal.inria.fr/hal-01283619
# On a Waveguide with Frequently Alternating Boundary Conditions: Homogenized Neumann Condition

EDP - Equations aux dérivées partielles, IECL - Institut Élie Cartan de Lorraine

Abstract: We consider a waveguide modeled by the Laplacian in a straight planar strip. The Dirichlet boundary condition is taken on the upper boundary, while on the lower boundary we impose periodically alternating Dirichlet and Neumann conditions, assuming the period of alternation to be small. We study the case when the homogenization gives the Neumann condition instead of the alternating ones. We establish the uniform resolvent convergence and the estimates for the rate of convergence. It is shown that the rate of convergence can be improved by employing a special boundary corrector. Other results are the uniform resolvent convergence for the operator on the cell of periodicity obtained by the Floquet–Bloch decomposition, the two-term asymptotics for the band functions, and the complete asymptotic expansion for the bottom of the spectrum with an exponentially small error term.

Document type: Journal articles
URL: https://hal.inria.fr/hal-01283619

### Citation
Denis Borisov, Renata Bunoiu, Giuseppe Cardone. On a Waveguide with Frequently Alternating Boundary Conditions: Homogenized Neumann Condition. Annales de l'Institut Henri Poincaré, 2010, 11 (8), pp.1591-1627. ⟨10.1007/s00023-010-0065-0⟩. ⟨hal-01283619⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8742558360099792, "perplexity": 1316.6995057326822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00379.warc.gz"}
http://www.spkx.net.cn/EN/abstract/abstract40518.shtml
FOOD SCIENCE ›› 2017, Vol. 38 ›› Issue (2): 255-263.

• Processing Technology •

### Optimization of Extraction and Antioxidant Activity of Polysaccharides from Epimedium Leaves

TAN Li, CHEN Ruizhan, CHANG Qingquan, LU Juan, JIN Chenguang, YIN Wei
1. College of Chemistry, Changchun Normal University, Changchun 130032, China
• Online: 2017-01-25 Published: 2017-01-16

Abstract: In this study, an ultrasonic-assisted enzymatic extraction (UAEE) method was proposed and optimized for the extraction of polysaccharides (CEPs) from Epimedium leaves. In the first step, the optimization of enzyme mixtures for the hydrolysis of Epimedium leaves was carried out by the combined use of the one-factor-at-a-time method and orthogonal array design. Subsequently, the optimization of extraction parameters was done using Box-Behnken design with response surface methodology. The results showed that the influence of the dosage of enzymes on the extraction yield of CEPs was in the following order: cellulase > pectinase > papain > α-amylase, and the optimal combination found was papain 50 U/g, pectinase 250 U/g, cellulase 200 U/g and α-amylase 100 U/g. The optimal extraction parameters were determined as 46.8 ℃, 42.3 min, 4.3 and 311 W for temperature, time, pH, and ultrasonic power, respectively. Under these conditions, the experimental yield of CEPs was 5.98%, which was in close agreement with the value (6.2%) predicted by the proposed model. Three major fractions (EPs-1, EPs-2 and EPs-3) from the CEPs were purified by DEAE-Sepharose fast-flow and Sephadex G-100 column chromatography. The antioxidant activities of the three fractions were evaluated by 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical, hydroxyl radical and superoxide anion radical scavenging capacity assays, and the ferric-reducing antioxidant power (FRAP) assay in vitro. It was indicated that UAEE could be an effective and environment-friendly technique for extracting active ingredients from plant materials. All three polysaccharide fractions exhibited significant antioxidant activities in a dose-dependent manner. These results suggested that Epimedium polysaccharides could be explored as potential antioxidants for use in functional foods or medicines.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35344335436820984, "perplexity": 17476.421505690803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400227524.63/warc/CC-MAIN-20200925150904-20200925180904-00560.warc.gz"}
https://www.gamedev.net/topic/642755-efficiently-calculating-increase-in-money-over-a-period-of-time/
## Efficiently calculating increase in money over a period of time

### #1 BkoChan  Members
Posted 06 May 2013 - 01:49 AM
I'm currently working on the server side logic of a real-time online browser game built using NodeJS. Currently the logic works like this...
1. Receive an event from a player (queue something for production)
2. Calculate when the next event will happen in the game (look at when the next production will complete or when the next research project will finish)
3. Set a timeout so that the server effectively sleeps until the next event happens (unless interrupted by a player sending another event)
This seems to work very well so far. When the server wakes up from the timeout it updates the game with the amount of time passed and all production and research etc. is updated by the elapsed time. The problem I'm having at the moment is that the amount of money generated for a player is based on how many workers they have. Worker numbers slowly increase over time until they reach a population cap.
Given the following facts...
- The player has 1 worker
- The player has no money
- A new worker is created every minute
- A worker produces 100 credits per minute
How many credits does the user have after an hour? I have no idea how to predict the number of credits produced as the number of workers increases over time. I hope that makes sense!

### #2 Khatharr  Members
Posted 06 May 2013 - 03:45 AM
Oh hell, I can't remember the proper name for it, but it's done with sigma notation.
Edit: http://en.wikipedia.org/wiki/Summation
The formula for a sequence starting at 1 is s = n(n+1) / 2
For sequences starting at numbers higher than one it's: s = (n(n+1) / 2) - (m(m+1) / 2)
Where m is one less than the starting point for the sequence and n is the ending point. For instance:
1+2+3+4+5 = 15
1+2 = 3
+3 = 6
+4 = 10
+5 = 15
vs
5(5+1) / 2
5(6) / 2
30 / 2
15
or
3+4+5
3+4 = 7
+5 = 12
vs
5(5+1) / 2 = 15
2(2+1) / 2 = 3
15 - 3 = 12
Edited by Khatharr, 06 May 2013 - 04:12 AM.

### #3 BkoChan  Members
Posted 07 May 2013 - 04:28 AM
I'm not really seeing how to apply this to my problem in a more general manner. Your solution does provide the answer for 1 hour where the increase in workers is 1 p/m. How do I alter this to handle 2, 3, 4 hours or a worker increase of 2 p/m?

### #4 Álvaro  Members
Posted 07 May 2013 - 05:22 AM
What are all the possible parameter settings in the problem?

### #5 BkoChan  Members
Posted 07 May 2013 - 05:39 AM
The player has a known number of workers (eg. 10, 20, 123)
Each worker will produce 100 credits per minute
A new worker is generated periodically (eg. every 1 minute, every 1.5 minutes, every 30 minutes) based on other variables
A period of time passed (eg. 1 minute, 4 minutes, 2034 minutes)
How do I calculate how many credits have been produced in the time period?
example:
The player has 20 workers
The worker spawn rate is 1 worker every 2 minutes
20 minutes have passed since the last update
How many credits have been generated in this time frame

### #6 Khatharr  Members
Posted 07 May 2013 - 05:44 AM
This is untested, but at first glance it would be:
m = starting_population
n = m + cycles_elapsed
worker_cycles = summation(m, n) * workers_spawned_per_cycle
produced = worker_cycles * production_per_worker_cycle

### #7 Álvaro  Members
Posted 07 May 2013 - 07:00 AM
example:
The player has 20 workers
The worker spawn rate is 1 worker every 2 minutes
20 minutes have passed since the last update
How many credits have been generated in this time frame
You need to be much more precise than that. For instance, does a worker produce 100 credits the minute that it is spawned? Or is it one minute after? In your example, when is the next worker going to be produced? Right away, or in one minute, or in two minutes?
I suggest you write reference code that iterates over minutes (or whatever other time unit you want) and computes things in a naive way. You can then try to optimize that code by using summation formulas. But you have to know what it is you are trying to compute.

### #8 frob  Moderators
Posted 07 May 2013 - 10:20 AM
Given the following facts...
- The player has 1 worker
- The player has no money
- A new worker is created every minute
- A worker produces 100 credits per minute
How many credits does the user have after an hour?
The complex code isn't necessary. Unless you have some serious problems, you could do something along these lines:
if (timeElapsed > kMaxElapsedTime)
{
    ShowMessage( Messages::TooMuchTimeElapsed );
    timeElapsed = kMaxElapsedTime;
}
for (int i = 0; i < timeElapsed; i++)
{
    SimulateOneTimeUnit();
}
If you need more, create two simulators. One is an online simulator, the other an offline simulator.
Edited by frob, 07 May 2013 - 10:21 AM.
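Putting the two answers together, here is a small Python sketch added for illustration (it is not part of the original thread). It compares frob's minute-by-minute reference loop with the closed-form summation Khatharr describes, using the example numbers from post #5 (20 workers, one new worker every 2 minutes, 100 credits per worker per minute, 20 minutes elapsed) and the assumed convention, following Álvaro's point, that a newly spawned worker only starts producing on the next minute.

def simulate(workers, minutes, spawn_every, credits_per_worker_minute=100):
    # Reference loop in the spirit of frob's answer: one pass per minute.
    credits = 0
    for minute in range(1, minutes + 1):
        credits += workers * credits_per_worker_minute
        if minute % spawn_every == 0:   # assumed convention: the new worker
            workers += 1                # begins producing on the next minute
    return credits

def closed_form(workers, minutes, spawn_every, credits_per_worker_minute=100):
    # Khatharr-style summation: n(n+1)/2 - m(m+1)/2 adds up the successive worker counts.
    cycles = minutes // spawn_every     # assumes minutes is a multiple of spawn_every
    m, n = workers - 1, workers + cycles - 1
    worker_minutes = spawn_every * (n * (n + 1) // 2 - m * (m + 1) // 2)
    return worker_minutes * credits_per_worker_minute

print(simulate(20, 20, 2))     # -> 49000
print(closed_form(20, 20, 2))  # -> 49000

Under a different convention (say, a worker produces in the very minute it spawns), both functions need a corresponding small adjustment, which is exactly why the thread insists on pinning the convention down before optimizing.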
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24749071896076202, "perplexity": 1668.8431845868377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174157.36/warc/CC-MAIN-20170219104614-00292-ip-10-171-10-108.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/142916-please-help-ap-calc-test-tomorrow-print.html
• May 3rd 2010, 07:53 PM helpmeee is there a formula for integrating fractions??? sorry if this is hard to understand but i need to find the INTEGRAL of x/(2+e^x). • May 3rd 2010, 08:10 PM surjective integrating fractions Hello, When facing such problems you should realize that: $\int \frac{x}{2+e^{x}}dx=\int \left(\frac{1}{2+e^{x}}\cdot x \right)dx$ If you define $f(x)=\frac{1}{2+e^{x}}$ and $g(x)=x$ then you have: $\int f(x)g(x) dx$ From here you can simply apply the rule of integration by parts. • May 3rd 2010, 08:22 PM helpmeee • May 3rd 2010, 08:27 PM CalculusCrazed • May 3rd 2010, 08:34 PM helpmeee That made no sense to me. Please show me with my problem • May 3rd 2010, 08:49 PM lovek323 I don't think integration by parts should be used here. Are you sure this was the question? This integral is rather difficult to evaluate. Cf. Wolfram Alpha • May 3rd 2010, 08:51 PM Debsta Quote: Originally Posted by lovek323 I don't think integration by parts should be used here. Are you sure this was the question? This integral is rather difficult to evaluate. Cf. Wolfram Alpha Yes state the exact question you are having problems with. • May 3rd 2010, 09:01 PM CalculusCrazed Yeah, because I was trying to do this by parts. It is not easy by any means. • May 3rd 2010, 09:20 PM There's always an integral involving Li(x) on the AP exam, right you guys? The AP board loves those almost as much as Si(x). • May 3rd 2010, 09:36 PM CalculusCrazed Quote: There's always an integral involving Li(x) on the AP exam, right you guys? The AP board loves those almost as much as Si(x). I never took AP and we didn't learn that in calc 1 or 2. What is Li(x) and Si(x)? • May 4th 2010, 04:10 AM mr fantastic Quote: Originally Posted by helpmeee is there a formula for integrating fractions??? sorry if this is hard to understand but i need to find the INTEGRAL of x/(2+e^x). Are there integral terminals, that is, is it a definite integral? • May 4th 2010, 04:48 AM skeeter Quote: Originally Posted by helpmeee is there a formula for integrating fractions??? sorry if this is hard to understand but i need to find the INTEGRAL of x/(2+e^x). no such integration is required on either the AB or BC exam unless it is a definite integral on the calculator part of the exam.
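Since the antiderivative of x/(2+e^x) is not elementary (it brings in the dilogarithm), the practical route for a definite integral is numerical, which matches skeeter's point about the calculator portion of the exam. A small hedged sketch in Python, using [0, 1] purely as an example interval:

import math

def f(x):
    return x / (2 + math.exp(x))

# Composite Simpson's rule on [0, 1] with an even number of panels.
n, a, b = 100, 0.0, 1.0
h = (b - a) / n
s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
print(s * h / 3)  # roughly 0.127 for this choice of limits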
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99075847864151, "perplexity": 1586.409832300411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824618.72/warc/CC-MAIN-20171021062002-20171021082002-00374.warc.gz"}
http://math.stackexchange.com/users/763/aditya?tab=summary
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8745787739753723, "perplexity": 12147.311160341194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00121-ip-10-171-96-226.ec2.internal.warc.gz"}
http://yo-dave.com/tags/clojure/
# Clojure

## Writing a flexmark Extension in Clojure
Writing wiki pages using Markdown is a great way to do things because it is simple, well-known, and capable. One of the annoyances of using Markdown is that it has no default syntax to create wikilinks, a type of hyperlink that links one page in a wiki to another. I’ve been working on a home-grown, personal wiki, CWiki, for almost a year now off and on. It’s written in the Clojure programming language, which is a pleasure to use.

## Stuart Sierra’s Component System
Starting the implementation of user options (preferences) in CWiki (a personal wiki program) got me thinking about refactoring the project into a shape that is more compatible with Stuart Sierra’s reloaded workflow. Naturally, that led to thinking about his component architecture. Here are some more resources related to the component architecture. Stuart has a blog post about his reloaded workflow, linked above. The component code repository is on GitHub here.

## CWiki-Next
For the past few months, I’ve been beating my head against a brick wall. The problem was that I was trying to get an all-server-side wiki built using Clojure. It actually works pretty well. I’ve been using it for personal information for months now. It works well except for one aspect, arguably the most important – editing new or existing content is not that pleasant. It’s all a variation of Markdown, which is nice.

## Notes on ClojureScript Regular Expressions
There’s an old saying about regular expressions – paraphrasing: “If you try to solve a problem with regular expressions, you then have two problems.” Even though regular expressions are present in most programming languages, I have never become particularly proficient with them. Those that I am most familiar with are the Java version. Since ClojureScript compiles to JavaScript and runs without the Java underpinnings, its version of regular expressions is different.

## Using Anonymous Functions in the Clojure/Script Thrush Macro
The other day, I was putting together a sequence of operations to transform one piece of text into another form of that same text. The functions took a text argument, and the result was a slightly tweaked version. Put all those functions together to get the fully transformed result. What could be more natural than to string those pieces of code together with one of Clojure’s threading macros: ‘->’ or ‘->>’.

## Another Useful Function -- convert-seq-to-comma-separated-string
This is a little note about a function I find myself using frequently in Clojure. User interface code often needs to display a list of things as a comma-separated list, e.g. “a, b, c, d”. If all of the things are strings, you can use the built-in string/join function to build such a string. When you have a sequence of things that are not strings, I suppose you could convert each element to a string and then use string/join.

## sorted-set-by
This is just a little note about one of my favorite datatypes in Clojure. Like most Lisps, Clojure has a very useful group of datatypes built in including some set types. I use sorted-set a lot. Until recently, I hadn’t noticed sorted-set-by, a function that returns a sorted set using a comparator you specify. I found myself needing to create a sorted set of strings where the sorting was case-insensitive. The sorted-set-by function was exactly what I needed.

## A New Version of the Confidence Interval Program
Recently, I wrote about updating an old program that did the Sign Test.
Well, I have lots of old programs that could stand a bit of refreshing. Another of the simple ones calculates the confidence interval around the proportion of successes in a series of Bernoulli trials. I wrote about it way back in 2011. The original was written in Java and Swing many years ago. It is still available in a repository on Bitbucket.

## A Titled JavaFX Separator
In the process of updating some old programs, I had to change the GUI frameworks used. The old programs were written in Java using the Swing GUI framework and the JGoodies Forms and Looks libraries. Nowadays, the official GUI framework for Java is JavaFX. Making the transition from Swing to JavaFX was relatively painless because the programs were so small. However, one of the things I missed from the JGoodies Forms library was the “titled separator”, that is, a separator with a label in front of it.

## An Updated Sign Test Program
Long ago, I wrote a post about a small program to calculate the probabilities of a sign test. A lot has happened since then. The sign test is still useful to me on occasion, but the application framework used to write the original program is now unsupported. Too, the original program used Java’s Swing framework for the GUI. The new official GUI framework for Java is JavaFX. So I’ve updated the program a bit.

## Leiningen passing Invalid Flags to Java Compiler
Just a note about some weirdness in my work process and its solution. A few weeks ago, I started noticing some weirdness in trying to use some tools with Leiningen while developing a program in Clojure. When running tools like kibit, lein would fail with an error from javac about an invalid flag. Initially these flags were for attempts to set the file encoding. And the file encoding kept changing.

## Clojure/Script has Ruined Me for Other Languages
The Elm language is often cited as an up-and-comer for web front end development. I was attracted to it largely because of the compiler’s friendly and extremely helpful error messages. It’s really attractive in many ways. But when I started looking at examples, I often found myself thinking things like “Why is this so inconsistent?” or “Why is this syntax so complicated?”. And it finally occurred to me that I’ve been ruined by the way Clojure/ClojureScript/Lisp/Scheme do things.

## Using Local Java JARS in Clojure Projects
Recently, I’ve been working on a Sudoku game program. Part of the program provides a user with the ability to generate new puzzles of a particular difficulty. Generating a puzzle usually requires two puzzle solvers: one that solves puzzles (slowly) like a human would, the other that solves puzzles (very quickly) like a computer would. Rather than write my own from scratch, for this part of the development, I wanted to use an existing implementation of the machine-like solver. After a little research (more on this some other time), I found one I liked a lot – the Kudoku solver written in Java from attractive chaos. But how does one use a local jar file in a Clojure Project? Read on…

## Web vs. Native App Redux
If you have been following along for a while, you may have noted how conflicted I am about writing web apps vs. native apps. I’ve been looking into Meteor, which makes the decision even thornier for me.

## Which Version of Java will Leiningen Run from an Emacs Shell
I’ve been updating some of my projects to use the newly released Java 8. That includes many Clojure projects. These are just “flow of consciousness” debugging notes.
## Saving and Restoring Program Configuration across Sessions in Clojure
I like to use programs that can remember what I was doing the last time I was working with them. They should restore the window just as I had it, remember which file(s) I was working with, what preferences I had selected, and so on. Naturally, I want the programs I write to be just as considerate of the user. For some time, I’ve been fretting over the best way to do this in a Clojure program. Should I provide wrappers around the Java Preferences API? Some other mechanism? Turns out I should just embrace simplicity.

Just a short rant about JavaFX because I’m pissed about it at the moment. I enjoy using it for the most part but it sometimes throws up surprising obstacles in otherwise routine work. The latest for me was an unexpected lack of a spinner control. There are alternatives in some open source projects, but, really? No spinners built in? This is almost as gob-smacking weird as the lack of dialogs. (Ok, there are some dialogs, like for opening/saving files, but not much in the way of user-programmable dialogs built in.)

A couple days ago, I posted a little snippet showing how to load a font from a list of preferred fonts using Clojure and JavaFX. Well, I’ve extended the demo a bit to show how to load both fonts installed on the OS and fonts from a resource file. Here’s the new snippet.

(ns clojure_font_loading.core
  (:gen-class :extends javafx.application.Application)
  (:import [javafx.application Application]
           [javafx.event EventHandler]
           [javafx.scene Scene]
           [javafx.scene.control Button]
           [javafx.scene.layout StackPane VBox]
           [javafx.

Just wanted to pass along a little snippet I have found myself using fairly frequently. CSS has the ability to specify the appropriate font to use in displaying a document. It handles the tag in such a way that it can gracefully degrade from a “preferred” font through a series of less ideal typefaces depending on what’s available on the machine doing the display. That’s a handy facility to have, even on Windows, which can have different fonts available depending on the version of Windows and what software has been installed.

## Paths with Spaces, I Give Up
I’ve wanted to look into the Pedestal framework for creating web-based applications in Clojure. However, one of the requirements is Leiningen 2.2.0 or greater. And, as I’ve written before, version 2.2 will not install on my system because of spaces in the path of the user home directory. (”C:\Users\David Clark” on my system.) My user profile name is “david”. That’s what I use to sign on with. The fact that my home directory uses “David Clark” is an unfortunate result of how the computer was set up at the factory when I custom ordered it.

## N-Queens
The N-Queens puzzle is a classic computer science problem. In fact, it’s much older than the discipline of computer science. It is usually used as a problem to introduce students to backtracking algorithms in computer science. I was first introduced to the problem in Niklaus Wirth’s Algorithms + Data Structures = Programs back in the ‘70s. I thought it might be interesting to write an updated version in my continuing effort to become proficient in Clojure. My intent was to write a simple working version, then use the concurrency features of the language to write a parallel version and see what kind of performance gain was possible.

## Java-Clojure Interop: An Update
My most popular answer on Stack Overflow has to do with Clojure-Java interop.
Since that answer was written, some of the tools used in the answer, specifically enclojure, have been deprecated. Because many of the follow-up questions related to how to build a working version of the answer, I thought it might be a good idea to update the post with modern tools. As this is written, the tools used include:

## Keyboard Shortcuts for JavaFX Buttons
Most programs written for graphical user interfaces still provide a way to operate with the keyboard, requiring minimal mouse usage. The thought is that expert users will want to speed through their work keeping their fingers on the keyboard rather than devote an entire hand’s worth of fingers to controlling the mouse. I’ve been learning JavaFX, the eventual replacement for the Swing UI framework on Java, and wanted to explore how shortcut functionality had changed.

## JavaFX KeyCodeCombinations in Clojure
I’ve been experimenting with adding keyboard accelerators to some of the Clojure programs I’ve written with JavaFX-based user interfaces. As part of that investigation, I tried to translate the Java program here (Broken Link) to Clojure. The program just puts up a window with a menu bar containing only a “File” menu which itself contains one item, “Exit”. Most programs provide a keyboard shortcut or accelerator to close the program with a Ctrl-X (on Windows). Figuring out how to add that functionality was a bit of an issue for me.

## The Clojure Development Toolchain
One of the things about Clojure that is difficult for beginners is the process of creating and running programs. I would argue that it is more difficult than learning the language itself. There is no “one-button” provisioning system that would set up some sort of canonical development environment. This long post will talk about setting up Leiningen and Emacs to make a comfortable environment for developing in Clojure.

## Clojure, JavaFX and Tic-Tac-Toe
Recently, I have been experimenting with JavaFX in Clojure. Initially, in one of my experiments, I wanted to learn how to re-size a game-board interface as its containing window was re-sized. In the past I’ve had medical device interfaces that draw a representation of a physical device and these drawings must re-size as their window is re-sized. The initial experiment was with a simple interface for Tic-Tac-Toe. Since I had such a nice interface, I thought, why not program the complete game.

## Re-sizing an Interface in JavaFX and Clojure
Since JavaFX is the future of the user interface for Java, I’ve started trying to learn it. Since I’m also learning Clojure, I’m doing the work in that language. One of the things I’ve been looking into is how the interface responds to resizing. If you have all of your controls in a nice layout, that is usually taken care of for you. But how do you handle things if the interface is not made up of standard components, something like a graphical game interface for example?

## Favorite Programming Books
Every programmer seems to have their own list of favorite programming books. The lists are very personal and seem to be influenced by the age of the programmer, their training, and their field of endeavor. My own list follows.

## The Sign Test
Sometimes weakness is a strength. That certainly seems to be the case for the lowly sign test. It is about the simplest statistical significance test imaginable. But if it tells you something is important, it probably is.
Usually when you hear people talk about the “power” of a statistical test, they are referring to the ability of the test to detect a significant difference when one exists. For example, Student’s t test is a favorite and very powerful test for differences in means when you have data meeting the underlying assumptions of the test. ## Spare Time Projects It’s pretty common to see discussions about how to determine if a candidate for a programming job has a “passion” for programming and software. One of the usual pieces of advice is to ask about the projects someone does in their “spare time.” ## Getting Started with Lisp/Scheme/Clojure Ya know, this point just keeps slapping me in the face. It seems that people don’t stop trying to use Lisp because they don’t like the language. A lot of people stop because they don’t like the programming environment. Looking around the Q&A sites there seem to be many more questions about setting up a programming environment for the Lisp family of languages than there are for the more mainstream languages like Java and C++. ## Clojure and Java Interaction One of my most-upvoted answers on Stackoverflow is a simple example of how to call Clojure functions from Java. It doesn’t require calling through the Clojure run-time as so many responses do. But there is more to writing programs than calling static functions, as in my answer. You also might need to call methods of objects and on objects across the Clojure/Java divide. ## A Closure in Clojure Back when closures were first explained to me, a long time ago, I thought “sounds like a language with pass-by-reference semantics like Pascal.” Of course, it isn’t quite that simple. Clojure has a lot of nice features that work naturally to give you a “better Java than Java”. Here’s an example of using a closure that is not at all easy in Java. ## Getting enclojure 1.4 to work I’ve used enclojure (Update 12 Mar 2018: The link is now dead.) for a long time (in internet years). It has always seemed a bit finicky. However, with the 1.4 release and the switch to using Maven as the build tool, things stopped working. Projects that had worked fine before no longer compiled or executed. The “Getting Started” section of the enclojure web page appears to be hopelessly out of date and actually misleading. Here’s what I had to do.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20042136311531067, "perplexity": 1436.6402896713375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745522.86/warc/CC-MAIN-20181119084944-20181119110944-00215.warc.gz"}
https://reference.digilentinc.com/learn/courses/unit-4-lab4a/start
1. Objectives
1. Set up a terminal monitor on a PC or Linux workstation.
2. Use the PIC32MX370 to send an ASCII encoded text string and display the characters on the workstation monitor.
3. Receive an ASCII encoded text string from the workstation monitor and display it on the Basys MX3 LCD.

3. Equipment List
3.1. Hardware
1. Workstation computer running Windows 10 or higher, macOS, or Linux
In addition, we suggest the following instruments:
3.2. Software
The following programs must be installed on your development workstation:

4. Project Takeaways
1. Knowledge of a PC terminal emulation program.
2. How to develop a library of PIC32 software to provide bi-directional communications of single characters and strings of characters.
3. How to create a character LCD with a UART serial interface.

5. Fundamental Concepts
The Universal Asynchronous Receiver/Transmitter (UART) is an electronic device that converts parallel data to a serial data stream. Early microprocessors used independent integrated circuits to perform the conversion and frame the data stream with a start and stop bit. Currently, a vast majority of microprocessors have internal UART functionality, resulting in reduced system cost and complexity. The UART is capable of full-duplex operation, meaning simultaneously receiving and transmitting data. UARTs are generally constrained to a single peer-to-peer pairing of devices, commonly called point-to-point communication. Asynchronous communication is the transmission of data between two devices that are not synchronized with one another via a clocking mechanism or other technique. The term asynchronous implies the sender can initiate data transmission at any time, and the receiver must be ready to accept information when it arrives. The two devices must be operating at, or nearly at, the same clock frequency and are resynchronized by a START bit sent along with the data.

6. Problem Statement
Text messages will be sent to the UART serial port whenever a change in switch settings is detected, and the action will be reported to a computer terminal. Whenever a text string is entered on the computer terminal, it will be displayed on the Basys MX3 LCD.

7. Background Information
Asynchronous communications is a serial data protocol that has been in use for many years. Normally eight bits of data are transmitted between handshaking characters to allow the clocks of transmitting and receiving devices to be synchronized. There are other less commonly used modes that can send 5, 6, or 7 bits of data. Each byte of data is framed by a start bit and a stop bit. A symbol is defined as a start, data, parity, or stop bit. It is common to define communications speed as bits per second. The bit rate is defined as the inverse of the period of a unit symbol. Although the common standard bit rates are 50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800, 9600, 19200, 38400, 57600, and 115200, communication is possible at any rate provided that the sender and receiver use the same rate. For most asynchronous communications, the term "baud" is commonly used interchangeably with the term "bit rate."

8. Lab 4a
8.1. Requirements
1. Communications will use the PC terminal emulation program for a bit rate of 19200, odd parity, 8 data bits, and one stop bit.
2. After receiving the line of text from the UART, you will clear the LCD before echoing the received string from the UART, starting at the leftmost character position on line 1 of the LCD.
3. Write a C program and call it lab4a.c.
This program will contain the function main and process the serial text. Put the following tasks inside the while(1) loop:
4. Wait for a line of text using the "getstr" function.
5. Clear the LCD display and home the cursor.
6. Echo the string entered on the computer terminal to the LCD.
7. Sense the state of the eight slide switches and convert the switch settings to a value, where SW7 is the most significant bit. Whenever any of the switches changes state, a text message is generated using the following format (a short illustrative sketch of this format appears at the end of this document): "b b b b b b b b 0xhh ddd\n\r" where:
a. "b" is 1 or 0 representing the state of SW0 through SW7, with the SW7 bit leftmost.
b. "hh" is the hexadecimal value of the binary encoded switches.
c. "ddd" is the decimal equivalent of the hexadecimal number generated for part b.
d. "\n" is the ASCII NEW LINE control character.
e. "\r" is the ASCII RETURN control character.
9. Send the text string composed in requirement 5 to the UART so it will be displayed on the terminal monitor as a single line of text.

8.2. Design Phase
1. Develop a data flow diagram for the software components needed for the requirements of Lab 4a.
2. Schematic diagrams: Provide a block diagram of the equipment used for Lab 4a.
3. Flow diagrams: Provide a complete software control flow diagram for Lab 4a.

8.3. Construction Phase
1. Connect the Basys MX3 UART USB port to one of the workstation's USB ports.
2. If the workstation is running a Windows OS, open the "Control Panel" followed by the "Device Manager" window. If you do not have administrator privileges, you will see the window shown in Fig. 8.1. Click on the OK box to continue. Administrative privileges are not required to view the settings. Figure 8.1. The device manager window showing you do not have administrator privileges.
3. Expand the tab called Ports (COM & LPT) as shown in Fig. 8.2. Figure 8.2. PC device manager window.
4. Note the USB Serial Port COM assignment.
5. Open the terminal emulation program on your workstation. Configure the terminal program for 19200 BAUD and ODD parity. The screen shown in Fig. A.4 of Appendix A is for the PuTTY terminal emulation program.
6. Launch a new Microchip MPLAB X project called Lab4a. Add the config_bits.h file to the project.
7. Develop the following UART interface functions. Note that the numeral "4" in the function names indicates that the Basys MX3 uses the PIC32MX370 UART 4. All receive functions are to be non-blocking.
uart4Init(); // 19200 Baud, ODD parity
(int) ch = uart4Getc(); // ch = -1 if no data has been received
(int) len = uart4Gets(char *str); // len = -1 if no data has been received
uart4Putc(char ch);
uart4Puts(char *str);
8. Develop the PIC32 application that meets the requirements in section 8.1.

8.4. Testing
1. After completing the development, run the application project to verify that the LCD is initialized correctly. I generally display an initial message on the LCD for one second that declares the LCD is functional.
2. Toggle the slide switches to verify that the terminal screen displays text similar to Figure A.3.
3. Enter a series of text strings that verify the following operations:
1. A text string containing between 1 and 16 characters displays only on the first LCD line.
2. A text string containing between 17 and 32 characters displays on both the first and second LCD lines as shown in Fig. A.2.
3. A text string containing more than 32 characters displays on both the first and second LCD lines as well as wrapping back around to the first LCD line.
4. Connect the Analog Discovery 2 to the Basys MX3 board.
1. Configure the Logic window to display signal DIO 8, which is the UART RX pin. Measure the time from the beginning of the start character to the beginning of the stop character.
2. Configure the Logic window to display signals DIO 0 through DIO 3. Label the signals as follows:
DIO 0 - DB0
DIO 1 - EN
DIO 2 - RW
DIO 3 - RS
3. Capture a single character display as shown in Fig. 8.3.
4. Measure the time the LCD reported being busy.
5. Measure the time required to output a single character.
6. Measure the time required to display a string of 20 characters.
Figure 8.3. Handshaking pins and data bit zero for a single character write to the LCD.

9. Questions
1. What is the effective data rate of the UART? (Remember to include the period of the stop signal that cannot be measured in testing step 4a.)
2. Based on the data collected in parts 4.4 and 4.5 of step 4 of the testing procedure, what is the effective character display rate in characters per second?
3. Based on the data collected in part 4.6 of step 4, how much does moving the cursor from line 1 to line 2 slow down the LCD character display rate? Justify your answer.

10. References
1. PIC32MX330/350/370/430/450/470 Family Data Sheet
4. RS-232, RS-422, RS-423, RS-485 Asynchronous communications

Appendix A: Basys MX3 Schematic Drawings
Figure A.1. PIC32MX370 to FT232RQR IC schematic diagram.
Figure A.2. UART USB connector on the Basys MX3.
Figure A.3. PuTTY screen shot generating the LCD display.
Figure A.4. PuTTY screen shot of serial configuration for 19200 BAUD and ODD parity.

Appendix B: Allocating a Heap in MPLAB X
If, when compiling your project, you see an error like "ld.exe Error: A heap is required, but has not been specified," you need to specify a heap size: select "Run" → "Set Project Configuration" → "Customize…", go to the "xc32-ld" category (under "XC32 (Global Options)"), and set "Heap size (bytes)" to "0". The configuration window should look like Fig. B.1. Click on the "Apply" button followed by clicking on the "OK" button. See http://microchip.wikidot.com/mplabx:creating-a-heap.
Figure B.1. Allocating Heap size.
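To make the text format in requirement 7 of Section 8.1 concrete, here is a small, purely illustrative host-side sketch in Python (the lab itself calls for a C implementation on the PIC32, which would typically build the same string with sprintf):

def switch_message(switches):
    # Build the report string "b b b b b b b b 0xhh ddd\n\r" for an 8-bit switch value.
    # `switches` is an integer 0-255; bit 7 corresponds to SW7, which is printed leftmost.
    bits = " ".join(str((switches >> i) & 1) for i in range(7, -1, -1))
    return f"{bits} 0x{switches:02X} {switches:3d}\n\r"

# Example: SW7 and SW0 on, all other switches off (0b10000001 = 0x81 = 129)
print(repr(switch_message(0b10000001)))
# '1 0 0 0 0 0 0 1 0x81 129\n\r'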
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23588509857654572, "perplexity": 3306.0227582206576}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481832.13/warc/CC-MAIN-20190217091542-20190217113542-00612.warc.gz"}
http://amj.math.stonybrook.edu/html-articles/Files/15-12/
Arnold Mathematical Journal
Research Contribution
Received: 17 December 2014 / Accepted: 4 April 2015

# On an Equivariant Version of the Zeta Function of a Transformation

S. M. Gusein-Zade, Faculty of Mathematics and Mechanics, GSP-1, Moscow State University, Moscow 119991, Russia
I. Luengo, Faculty of Mathematical Sciences, Complutense University of Madrid, 28040 Madrid, Spain (Present address: ICMAT-Institute of Mathematical Sciences, Madrid, Spain)
A. Melle-Hernández, Faculty of Mathematical Sciences, Complutense University of Madrid, 28040 Madrid, Spain (Present address: ICMAT-Institute of Mathematical Sciences, Madrid, Spain)

### Abstract

Earlier the authors offered an equivariant version of the classical monodromy zeta function of a $G$-invariant function germ with a finite group $G$ as a power series with the coefficients from the Burnside ring of the group $G$ tensored by the field of rational numbers. One of the main ingredients of the definition was the definition of the equivariant Lefschetz number of a $G$-equivariant transformation given by W. Lück and J. Rosenberg. Here we use another approach to a definition of the equivariant Lefschetz number of a transformation and describe the corresponding notions of the equivariant zeta function. This zeta-function is a power series with the coefficients from the Burnside ring itself. We give an A'Campo type formula for the equivariant monodromy zeta function of a function germ in terms of a resolution. Finally we discuss orbifold versions of the Lefschetz number and of the monodromy zeta function corresponding to the two equivariant ones.

#### Keywords

Equivariant Lefschetz numbers, Zeta functions, Burnside ring

#### Mathematics Subject Classification

32S05, 32S50, 57R91, 58K10

## 1 Introduction

Many topological invariants have equivariant versions for spaces with actions of a group $G$, say, a finite one. For example, in [Verdier1973], an equivariant version of the Euler characteristic is an element of the Grothendieck ring of ${\mathbb{Z}}[G]$- or ${\mathbb{Q}}[G]$-modules. In tom Dieck ([tom Dieck1979], Section 5.4) it is defined as an element of the Burnside ring of the group $G$ (that is of the Grothendieck ring $K_{0}({\mbox{f.$G$-s.}})$ of finite $G$-sets). Applying these concepts to the Milnor fibre, one gets an equivariant version of the Milnor number of a $G$-invariant function-germ. For example, in [Wall1980] it is an element of the ring of virtual representations of the group $G$. An important invariant of a germ of a holomorphic function (on $({\mathbb{C}}^{n},0)$ or on a germ of a complex analytic variety) is its monodromy and its corresponding zeta function, see e.g. [Arnold et al.1988]. It is defined as the zeta function of the classical monodromy transformation on the Milnor fibre. A number of statements have natural formulations in terms of monodromy zeta functions. As an example one can indicate the well-known monodromy conjecture: see, e.g., [Denef and Loeser1992]. The monodromy zeta function is connected with a number of other invariants, topological and analytic ones. For example, in [Gusein-Zade et al.1999], it was shown that, for an irreducible plane curve singularity, the monodromy zeta function of the corresponding function-germ coincides with the Poincaré series of the natural filtration on the local ring defined by the curve valuation. There are generalizations of this fact to some other situations (see, e.g., a survey in [Gusein-Zade2010]). In all these cases one has no intrinsic explanation of the relation.
The relation is obtained by independent computation of the right and left hand sides of it in the same terms and comparison of the obtained results. Generalizations of relations of this sort to equivariant settings could help to understand the general framework. For example, in [Ebeling and Gusein-Zade2012b] an equivariant version of a relation obtained earlier gave a better understanding of the role of the Saito duality in it. This leads to the desire to define equivariant analogues of monodromy zeta functions and of the Poincaré series of filtrations. This problem is not trivial and equivariant analogues are not unique. Moreover, up to now there were no definitions of equivariant analogues of monodromy zeta functions and of the Poincaré series which were elements of the same rings. For example, in [Campillo et al.2007] and [Campillo et al.2013], different approaches to equivariant Poincaré series were offered. In [Campillo et al.2007] it is a power series with coefficients from the ring of one-dimensional representations of a group. In [Campillo et al.2013] it is an element of the Grothendieck ring of "locally finite" $G$-sets with an additional structure. In [Gusein-Zade et al.2008], an equivariant version of the monodromy zeta function was given as a power series with the coefficients from $K_{0}({\mbox{f.$G$-s.}})\otimes{\mathbb{Q}}$. The fact that it was defined only after tensoring by the field ${\mathbb{Q}}$ of rational numbers makes it less reasonable, in particular, to compare it with the equivariant versions of the Poincaré series which were defined over integers. In [Gusein-Zade2013] an equivariant version of the monodromy zeta function was defined as an element of a generalization of the Burnside ring different from that in [Campillo et al.2013]. Just recently a definition of an equivariant version of the Poincaré series as a power series with the coefficients from the Burnside ring of the group was given in [Campillo et al.2014]. (In that paper the Poincaré series is initially defined as a power series with the coefficients from a certain modification of the Burnside ring. A simple reduction sends this modification to the usual Burnside ring.) One of the main ingredients of the definition of the equivariant version of the monodromy zeta function in [Gusein-Zade et al.2008] was the definition of the equivariant Lefschetz number of a transformation from [Lück and Rosenberg2003]. The definition from [Lück and Rosenberg2003] is rather natural. Moreover, one can say that it is the only possible definition possessing some reasonable properties. However, the fact that it leads to a "non-integer" definition of the (monodromy) zeta function gives a hint that this definition is not entirely adequate for this purpose. There is a certain freedom in the definition of an equivariant version of the Lefschetz number of a transformation, connected with the question whether it should count the fixed points of the transformation or its fixed $G$-orbits. Here we use the second approach to the definition of the equivariant version of the Lefschetz number. This definition was introduced in [Dzedzej2001]. We describe the corresponding equivariant version of the zeta function of a transformation. This zeta-function is a power series with the coefficients from the ring $K_{0}({\mbox{f.$G$-s.}})$. We give an A'Campo type formula for the equivariant monodromy zeta function of a function germ in terms of a resolution.
The difficulty of comparing equivariant versions of monodromy zeta functions and of the Poincaré series that are elements of different nature (e.g. those described above) leads to the idea to compare their "integer valued reductions". These reductions can be made with the help of the usual Euler characteristic and also with the help of the orbifold Euler characteristic. In the light of this, we also discuss possible orbifold versions of the zeta function of a transformation.

###### Remark.

The defined equivariant version of the zeta function is not a new invariant in the sense that it cannot distinguish more transformations than existing ones. In particular, it is expressed in terms of equivariant Lefschetz numbers of iterates defined earlier. The same holds for the usual (non-equivariant) zeta function of a transformation. However, it appears to be better adapted to a number of problems. (One more example: the well-known monodromy conjecture on poles of the topological Igusa zeta function (see, e.g., [Denef and Loeser1992]) is also formulated in terms of the monodromy zeta function.) To find equivariant analogues of these problems, one would need to have an equivariant generalization of the usual zeta function of a transformation. Moreover, there are other indices of equivariant transformations which are finer invariants than the monodromy zeta function. For example, equivariant generalizations of Dold's indices of iterates defined in [Crabb2007] are of this sort: our equivariant version of the zeta function can be expressed through them (private communication by M. C. Crabb).

## 2 Burnside Ring and the Equivariant Euler Characteristic

A finite $G$-set is a finite set with an action (say a left one) of the group $G$. Isomorphism classes of irreducible $G$-sets (i.e. those which consist of exactly one orbit) are in one-to-one correspondence with the set $\mbox{Consub}(G)$ of conjugacy classes of subgroups of $G$: to the conjugacy class containing a subgroup $H\subset G$ one associates the isomorphism class $[G/H]$ of the $G$-set $G/H$. The Grothendieck ring $K_{0}({\mbox{ f.$G$-s.}})$ of finite $G$-sets (also called the Burnside ring of $G$) is the group generated by isomorphism classes of finite $G$-sets with the relation $[A\coprod B]=[A]+[B]$ and with the multiplication defined by the cartesian product. As an abelian group $K_{0}({\mbox{ f.$G$-s.}})$ is freely generated by the isomorphism classes $[G/H]$ of irreducible $G$-sets. The element 1 in the ring $K_{0}({\mbox{ f.$G$-s.}})$ is represented by the $G$-set consisting of one point (with the trivial $G$-action). Recall that given a subgroup $H$ of $G$ there are two natural maps $\mbox{Res}_{H}^{G}:K_{0}(\mbox{f.}{G}\mbox{-sets})\to K_{0}(\mbox{f.}{H}\mbox{-sets})$ and $\mbox{Ind}_{H}^{G}:K_{0}(\mbox{f.}{H}\mbox{-sets})\to K_{0}(\mbox{f.}{G}\mbox{-sets})$. The restriction map $\mbox{Res}_{H}^{G}$ sends a $G$-set $X$ to the same set considered with the $H$-action. The induction map $\mbox{Ind}_{H}^{G}$ sends an $H$-set $X$ to the product $G\times X$ factorized by the natural equivalence: $(g_{1},x_{1})\sim(g_{2},x_{2})$ if there exists $g\in H$ such that $g_{2}=g_{1}g$, $x_{2}=g^{-1}x_{1}$, with the natural (left) $G$-action. The induction map $\mbox{Ind}_{H}^{G}$ sends the class $[H/H^{\prime}]$ ($H^{\prime}$ is a subgroup of $H$) to the class $[G/H^{\prime}]$. Both maps are group homomorphisms; however, the induction map $\mbox{Ind}_{H}^{G}$ is not a ring homomorphism.
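For instance (a minimal check added here for illustration, not taken from the paper): take $G={\mathbb{Z}}_{2}$ and $H=\langle e\rangle$. Then $\mbox{Ind}_{H}^{G}([H/H]\cdot[H/H])=\mbox{Ind}_{H}^{G}([H/H])=[G/\langle e\rangle]$, while $\mbox{Ind}_{H}^{G}([H/H])\cdot\mbox{Ind}_{H}^{G}([H/H])=[G/\langle e\rangle]\cdot[G/\langle e\rangle]=[G/\langle e\rangle\times G/\langle e\rangle]=2\,[G/\langle e\rangle]$, since the diagonal action of $G$ on the four-element set $G\times G$ is free with two orbits. Hence $\mbox{Ind}_{H}^{G}$ does not respect products.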
In some places, say, in [tom Dieck1979], [Lück and Rosenberg2003] and [Gusein-Zade et al.2008], the equivariant Euler characteristic of a $G$-space is considered as an element of the Grothendieck ring $K_{0}({\mbox{ f.$G$-s.}})$. For a relatively good $G$-space $X$ (say, for a quasiprojective variety) the equivariant Euler characteristic $\chi^{G}(X)\in K_{0}({\mbox{ f.$G$-s.}})$ can be defined in the following way. For a point $x\in X$, let $G_{x}=\{g\in G:\,g\cdot x=x\}$ be the isotropy subgroup of the point $x$. For a conjugacy class ${\mathcal{H}}\in\mbox{Consub}(G)$, set $X^{{\mathcal{H}}}=\{x\in X:x\mbox{ is a fixed point of a subgroup }H\in{\mathcal{H}}\}$ and let $X^{({\mathcal{H}})}=\{x\in X:G_{x}\in{\mathcal{H}}\}$ be the set of points with the isotropy subgroups from ${\mathcal{H}}$. One can see that in the natural sense $X^{({\mathcal{H}})}=X^{{\mathcal{H}}}{\setminus}X^{>{\mathcal{H}}}$, where $X^{>{\mathcal{H}}}=\bigcup\nolimits_{{\mathcal{H}}^{\prime}>{\mathcal{H}}}X^{{\mathcal{H}}^{\prime}}$. Then the equivariant Euler characteristic of the $G$-space $X$ is defined as

$\chi^{G}(X):=\sum_{{\mathcal{H}}\in\mbox{Consub}(G)}\frac{\chi(X^{({\mathcal{H}})})\,|H|}{|G|}[G/H]=\sum_{{\mathcal{H}}\in\mbox{Consub}(G)}\chi(X^{({\mathcal{H}})}/G)[G/H],$ (1)

where $H$ is a representative of the conjugacy class ${\mathcal{H}}$.

###### Remark.

Here we use the additive Euler characteristic $\chi(\cdot)$, i.e. the alternating sum of the ranks of the cohomology groups with compact support. For a complex analytic variety this Euler characteristic is equal to the alternating sum of the ranks of the usual cohomology groups.

###### Definition.

A pre-$\lambda$ ring structure on a commutative ring $R$ is an additive to multiplicative group homomorphism $\lambda_{T}:R\to 1+T\cdot R[[T]]$, that is $\lambda_{T}(m+n)=\lambda_{T}(m)\lambda_{T}(n)$, such that $\lambda_{T}(m)=1+mT$ (mod $T^{2}$). A pre-$\lambda$ ring homomorphism is a ring homomorphism between pre-$\lambda$ rings which commutes with the pre-$\lambda$ ring structures.

The Grothendieck ring $K_{0}({\mbox{ f.$G$-s.}})$ has a natural pre-$\lambda$-ring structure defined by the series $\sigma_{X}(t)=1+[X]\,t+[S^{2}X]\,t^{2}+[S^{3}X]\,t^{3}+\cdots,$ where $S^{k}X=X^{k}/S_{k}$ is the $k$-th symmetric power of the $G$-set $X$ with the natural $G$-action. This pre-$\lambda$-ring structure induces a power structure over the Grothendieck ring $K_{0}({\mbox{ f.$G$-s.}})$: see [Gusein-Zade et al.2006]. This means that for a power series $A(t)\in 1+t\cdot K_{0}({\mbox{ f.$G$-s.}})[[t]]$ and $m\in K_{0}({\mbox{ f.$G$-s.}})$ there is defined a series $\left(A(t)\right)^{m}\in 1+t\cdot K_{0}({\mbox{ f.$G$-s.}})[[t]]$ so that all the properties of the exponential function hold. In these notations $\sigma_{X}(t)=(1-t)^{-[X]}$. The geometric description of the natural power structure over the Grothendieck ring of quasiprojective varieties given in [Gusein-Zade et al.2006] using graded spaces is also valid for the power structure over $K_{0}({\mbox{ f.$G$-s.}})$ as well. Some examples of computation of the series $(1-t)^{-[G/H]}$ for $G$ being the cyclic group ${\mathbb{Z}}_{6}$ of order 6 and the group ${\mathcal{S}}_{3}$ of permutations on three elements can be found in [Gusein-Zade et al.2008] (with some misprints).
For example

\renewcommand\Z{\mathbb Z}\begin{align} (1-t)^{-[{\mathcal S}_3/\langle e\rangle]}=&\frac{1}{1-t^6}[1]+\frac{t^3}{(1-t^3)(1-t^6)}[{\mathcal S}_3/\Z_3]\\ &+\frac{3t^2}{(1-t^2)^2(1-t^6)}[{\mathcal S}_3/\Z_2]\\ &+\frac{t(1+4t^2+t^3+4t^4-2t^5+3t^6+t^7)}{(1-t^2)^2(1-t^3)(1-t^6)(1-t)^2}[{\mathcal S}_3/\langle e\rangle].\end{align}

There is a natural homomorphism from the Grothendieck ring $K_{0}({\mbox{ f.$G$-s.}})$ to the ring $R(G)$ of virtual representations of the group $G$ which sends the class $[G/H]\in K_{0}({\mbox{ f.$G$-s.}})$ to the representation $i^{G}_{H}[1_{H}]$ induced from the trivial one-dimensional representation $1_{H}$ of the subgroup $H$. (A virtual representation of the group $G$ is an element of the Grothendieck ring of representations, i.e. a formal difference of two representations.) This homomorphism is a homomorphism of pre-$\lambda$-rings ([Knutson1973]). Let us show that for any subgroup $H$ of $G$ the series $(1-t)^{-[G/H]}$ represents a rational function with the denominator equal to a product of the binomials of the form $(1-t^{m})$, $m\in{\mathbb{Z}}_{\geq 1}$. Since irreducible $G$-sets are in one-to-one correspondence with the set $\mbox{Consub}(G)$ of conjugacy classes of subgroups of $G$ and $K_{0}({\mbox{ f.$G$-s.}})$ is freely generated by isomorphism classes $[G/H]$ of irreducible $G$-sets, then

$(1-t)^{-[G/H]}=\sum_{{\mathcal{F}}\in\mbox{Consub}(G)}{\mathcal{A}}_{H,{\mathcal{F}}}(t)[G/F]$ (2)

where $F$ is a representative of the conjugacy class ${\mathcal{F}}$ and ${\mathcal{A}}_{H,{\mathcal{F}}}(t)\in{\mathbb{Z}}[[t]]$. Let ${\mathcal{F}}$ be a conjugacy class of subgroups of $G$ and let $F$ be a representative of it. The subgroup $F$ acts on the $G$-space $G/H$. Let $F\backslash G/H$ be the quotient of $G/H$ by this action and let $p:G/H\to F\backslash G/H$ be the quotient map. For $m=1,2,\ldots,$ let $Y_{m}$ be the set of points of $F\backslash G/H$ with $m$ preimages in $G/H$ and let ${\ell}^{{\mathcal{F}}}_{m}=|Y_{m}|$. (The numbers ${\ell}^{{\mathcal{F}}}_{m}$ depend only on the conjugacy class ${\mathcal{F}}$.) For an abelian $G$, ${\ell}^{{\mathcal{F}}}_{m}$ is different from zero if and only if $m=\frac{|F|}{|F\cap H|}$ and in this case ${\ell}^{{\mathcal{F}}}_{m}=|G|/|F+H|$. For conjugacy classes ${\mathcal{F}}$ and ${\mathcal{F}}^{\prime}$ from $\mbox{Consub}(G)$, let $F$ and $F^{\prime}$ be their representatives, and let $r_{{\mathcal{F}}^{\prime},{\mathcal{F}}}$ be the number of fixed points of the group $F$ on $G/F^{\prime}$. The integer $r_{{\mathcal{F}}^{\prime},{\mathcal{F}}}$ is different from zero if and only if ${\mathcal{F}}^{\prime}\geq{\mathcal{F}}$ (i.e. there exist representatives $F^{\prime}$ and $F$ of them such that $F^{\prime}\supset F$). For an abelian $G$ and for ${\mathcal{F}}^{\prime}\geq{\mathcal{F}}$, one has $r_{{\mathcal{F}}^{\prime},{\mathcal{F}}}=|G/F^{\prime}|$. For a non-abelian group the equation is more involved and $r_{{\mathcal{F}}^{\prime},{\mathcal{F}}}$ depends on ${\mathcal{F}}$ as well.

###### Lemma 1.

For ${\mathcal{F}}\in\mbox{Consub}(G)$ one has

$\prod_{m\geq 1}(1-t^{m})^{-{\ell}^{{\mathcal{F}}}_{m}}=\sum_{{\mathcal{F}}^{\prime}\in\mbox{Consub}(G)}r_{{\mathcal{F}}^{\prime},{\mathcal{F}}}\,\,{\mathcal{A}}_{H,{\mathcal{F}}^{\prime}}(t).$ (3)

###### Proof.

Let $F$ be a representative of ${\mathcal{F}}$ and let us count fixed points of the subgroup $F$ in the left hand side and right hand side of (2).
For a finite set $X$ an element of $\coprod_{k\geq 0}S^{k}X$ can be identified with an integer valued function on $X$ with non-negative values. The corresponding element belongs to $S^{k}X$ if and only if the sum of all the values of the function is equal to $k$. An element of $\coprod_{k\geq 0}S^{k}[G/H]$ is fixed with respect to $F$ if and only if the corresponding function is invariant with respect to the $F$-action on $G/H$. Such a function can be identified with a function on $F\backslash G/H$. A function on $F\backslash G/H$ can be also considered as the direct sum of functions on the subsets $Y_{s}$ defined above. The generating series for the number of functions on $Y_{s}$ (i.e. the series $\sum_{k\geq 0}\arrowvert S^{k}Y_{s}\arrowvert\,t^{k}$) is $(1-t)^{-|Y_{s}|}=(1-t)^{-{\ell}^{{\mathcal{F}}}_{s}}$. Each function on $Y_{s}$ with the sum of values equal to $k$ lifts to an $F$-invariant function on $G/H$ with the sum of the values equal to $ks$. Therefore the generating series for $F$-invariant functions on $p^{-1}(Y_{s})$ is $(1-t^{s})^{-{\ell}^{{\mathcal{F}}}_{s}}$. The generating series for all $F$-invariant functions on $G/H$ is the product of those for $p^{-1}(Y_{s})$. This is the left hand side of (3). The right hand side of (3) is obviously the set of fixed points of $F$ on the right hand side of (2). $\square$

Since $r_{{\mathcal{F}}^{\prime},{\mathcal{F}}}$ is different from zero if and only if ${\mathcal{F}}\leq{\mathcal{F}}^{\prime}$ and $r_{{\mathcal{F}}^{\prime},{\mathcal{F}}^{\prime}}$ is different from zero, the system of Eq. (3) is a triangular one (with respect to the partial order on the set of conjugacy classes of subgroups of $G$). Together with the fact that the denominators of the left hand side of the Eq. (3) are products of the binomials of the form $(1-t^{m})$, this implies the following statement.

###### Proposition 1.

For any subgroup $H$ of $G$ the series $(1-t)^{-[G/H]}$ belongs to the localization $K_{0}({\mbox{ f.$G$-s.}})[t]_{(\{1-t^{m}\})}$ of the polynomial ring $K_{0}({\mbox{ f.$G$-s.}})[t]$ at all the elements of the form $(1-t^{m})$, $m\geq 1$.

The natural homomorphism from the Grothendieck ring $K_{0}({\mbox{ f.$G$-s.}})$ to the ring $R(G)$ of virtual representations of the group $G$ sends the equivariant Euler characteristic $\chi^{G}(X)$ to the one used in [Wall1980]. Since this homomorphism is, generally speaking, neither injective nor surjective, the equivariant Euler characteristic as an element in $K_{0}({\mbox{ f.$G$-s.}})$ is a somewhat finer invariant than the one as an element of the ring $R(G)$.

## 3 An Alternative Version of the Equivariant Lefschetz Number of a Map

Let $X$ be a relatively good topological space (say, a quasiprojective complex or real variety) with a $G$-action and let $\varphi:X\to X$ be a $G$-equivariant proper map. The usual ("non-equivariant") Lefschetz number $L(\varphi)$ counts the fixed points of $\varphi$ (or rather of its generic perturbation). The equivariant version $L^{G}(\varphi)$ of the Lefschetz number from [Lück and Rosenberg2003] counts the fixed points of $\varphi$ as a (finite) $G$-set. This leads to the following equation for the equivariant Lefschetz number

$L^{G}(\varphi)=\sum\limits_{{\mathcal{H}}\in\mbox{Consub}(G)}\frac{L(\varphi_{|(X^{{\mathcal{H}}},X^{>{\mathcal{H}}})})|H|}{|G|}[G/H],$ (4)

where $H$ is a representative of the class ${\mathcal{H}}$. If $\varphi$ is a $G$-homeomorphism (like the monodromy transformation, see Sect.
, (5) and (6), (7) with the two parts of the equation (1).]

###### Example.

For some simplicity, let the group $G$ be abelian, let $X=(G/H)\times{\mathbb{Z}}_{k}$, where ${\mathbb{Z}}_{k}=\{0,1,\ldots,k-1\}$, $k>0$, with the natural action of the group $G$ on the first factor, and let the map $\varphi:X\to X$ be defined by $\varphi(a,i)=\begin{cases}(a,i+1)&\mbox{ for }0\leq i<k-1,\\ (ga,0)&\mbox{ for }i=k-1,\end{cases}$ where $g$ is a fixed element of $G$. If $k>1$, then $L^{G}(\varphi)={\widetilde{L}}^{G}(\varphi)=0$ since $\varphi$ has neither fixed points nor fixed orbits. The smallest $i$ for which ${\widetilde{L}}^{G}(\varphi^{i})\neq 0$ is $i=k$. In this case all the $G$-orbits in $X$ are fixed by $\varphi^{k}$ and therefore ${\widetilde{L}}^{G}(\varphi^{k})=k[G/H]$. On the other hand, if $g\notin H$, the map $\varphi^{k}$ has no fixed points and thus $L^{G}(\varphi^{k})=0$. The smallest $i$ for which $L^{G}(\varphi^{i})\neq 0$ is $i=\ell k$, where $\ell$ is the order of the element $g$ in the group $G/H$. In this case all the points of $X$ are fixed by $\varphi^{\ell k}$ and therefore ${\widetilde{L}}^{G}(\varphi^{\ell k})=k[G/H]$. Just in the same way as in [Lück and Rosenberg2003] one can formulate the equivariant version of the Lefschetz fixed point theorem for ${\widetilde{L}}^{G}(\varphi)$ (an analogue of Theorem 2.1 in [Lück and Rosenberg2003]).

## 4 The Zeta Function of a Transformation

Let $\varphi:X\to X$ be as above. The usual (non-equivariant) zeta function of $\varphi$ is defined in terms of the action of $\varphi$ in the (co)homology groups of $X$ (in a way somewhat similar to the definition of the Lefschetz number). This definition is not convenient for a direct generalization to the equivariant case. It is more convenient to use the definition of the zeta function of the transformation $\varphi$ in terms of the Lefschetz numbers of the iterates of $\varphi$. One defines integers $s_{i}$, $i=1,2,\ldots,$ recursively by the equation

$L(\varphi^{m})=\sum_{i|m}s_{i}.$ (8)

The number $s_{m}$ counts (with integer multiplicities) the points $x\in X$ with the $\varphi$-order equal to $m$ (i.e. $\varphi^{m}(x)=x$, $\varphi^{i}(x)\neq x$ for $0<i<m$). Together with each such point all its images under the iterates of $\varphi$ (there are exactly $m$ different ones) are of this sort. Therefore $s_{m}$ is divisible by $m$. One defines the zeta function $\zeta_{\varphi}(t)$ to be

$\zeta_{\varphi}(t):=\prod_{m\geq 1}(1-t^{m})^{-{s_{m}}/{m}}.$ (9)

###### Remark.

There are two traditions to define the zeta function of a transformation. The other one does not contain the minus sign in the exponent and therefore is the inverse to this one. Here we follow the definition from [A'Campo1975].

In the equivariant version, let $s_{m}^{{{G}}}(\varphi)$ and ${\widetilde{s}}_{m}^{{{G}}}(\varphi)$ be defined through $L^{G}(\varphi^{i})$ and ${\widetilde{L}}^{G}(\varphi^{i})$ respectively by the analogues of the Eq. (8):

$L^{G}(\varphi^{m})=\sum_{i|m}s_{i}^{G}(\varphi),\quad{\widetilde{L}}^{G}(\varphi^{m})=\sum_{i|m}{\widetilde{s}}_{i}^{G}(\varphi).$ (10)

The elements $s_{m}^{G}(\varphi)$ and ${\widetilde{s}}_{m}^{G}(\varphi)$ count the points of $\varphi$-order equal to $m$ in $X$ and in $X/G$ respectively.

###### Example.

In the Example from Sect.
2 with Proposition 1 gives

$\begin{align}{\widetilde{\zeta}}^{{\mathcal{S}}_{3}}_{f}(t)=&(1-t^{6k})^{-1}\cdot(1-t^{6k})^{(6k-1)[{\mathcal{S}}_{3}/{\mathbb{Z}}_{2}]}\cdot(1-t^{3k})^{-[{\mathcal{S}}_{3}/\langle e\rangle]}\\ &\cdot(1-t^{2k})^{-[{\mathcal{S}}_{3}/\langle e\rangle]}\cdot(1-t^{6k})^{(1-6k^{2})[{\mathcal{S}}_{3}/\langle e\rangle]}.\end{align}$

## 5 On Orbifold Versions of the Equivariant Monodromy Zeta Function

For a $G$-variety $X$, its orbifold Euler characteristic $\chi^{orb}(X,G)\in{\mathbb{Z}}$ is defined, e.g., in [Atiyah and Segal1989] or [Hirzebruch and Höfer1990]. For a subgroup $H$ of $G$, let $X^{H}=\{x\in X:Hx=x\}$ be the fixed point set of $H$. The orbifold Euler characteristic $\chi^{orb}(X,G)$ of the $G$-space $X$ is defined as

$\chi^{orb}(X,G)=\sum_{[g]\in\mbox{Consub}(G)}\chi(X^{\langle g\rangle}/C_{G}(g)),$ (13)

where $C_{G}(g)=\{h\in G:h^{-1}gh=g\}$ is the centralizer of $g$, and $\langle g\rangle$ the subgroup generated by $g$. There is a natural homomorphism of abelian groups $\Phi:K_{0}({\mbox{ f.$G$-s.}})\to{\mathbb{Z}}$ which sends the generator $[G/H]$ of $K_{0}({\mbox{ f.$G$-s.}})$ to $\chi^{orb}(G/H,G)$ and therefore the equivariant Euler characteristic $\chi^{G}(X)\in K_{0}({\mbox{ f.$G$-s.}})$ to the orbifold Euler characteristic $\chi^{orb}(X,G)$. For an abelian $G$, $\Phi([G/H])=|H|$ and $\Phi$ is a ring homomorphism, but this is not the case in general.

The Lefschetz number is a sort of generalization of the Euler characteristic: the Euler characteristic is the Lefschetz number of the identity map. The definition of the orbifold Euler characteristic gives the hint that there can be corresponding definition(s) of the orbifold Lefschetz number of a $G$-equivariant transformation. It can be expressed through an equivariant version of the Lefschetz number with values in the Burnside ring: the image of the Lefschetz number by the homomorphism $\Phi$. The two versions $L^{G}(\varphi)$ and ${\widetilde{L}}^{G}(\varphi)$ of the equivariant Lefschetz number [see (4) and (6)] give two versions

$L^{orb}(\varphi)=\Phi(L^{G}(\varphi))\,\,\mbox{ and }\,\,{\widetilde{L}}^{orb}(\varphi)=\Phi({\widetilde{L}}^{G}(\varphi))$

of orbifold Lefschetz numbers. The usual definition of the zeta function of a transformation [e.g. Eqs. (8), (9), (10) and (11)] gives two orbifold versions of the zeta function of a $G$-equivariant transformation $\varphi:X\to X$:

${\zeta}_{\varphi}^{orb}(t)=\prod_{m\geq 1}(1-t^{m})^{-{{s}_{m}^{{orb}}}/{m}},\,\,\mbox{ and }\,\,{\widetilde{\zeta}}_{\varphi}^{orb}(t)=\prod_{m\geq 1}(1-t^{m})^{-{{\widetilde{s}}_{m}^{{orb}}}/{m}},$ (14)

where $L^{orb}(\varphi^{m})=\sum_{i|m}s^{orb}_{i}(\varphi)$ and ${\widetilde{L}}^{orb}(\varphi^{m})=\sum_{i|m}{\widetilde{s}}^{orb}_{i}(\varphi)$. The exponents $-{{\widetilde{s}}_{m}^{{orb}}}/{m}$ are integers and therefore the orbifold monodromy zeta function ${\widetilde{\zeta}}_{\varphi}^{orb}(t)$ is a rational function in $t$. The exponents $-{{s}_{m}^{{orb}}}/{m}$ are in general rational numbers. For instance, for $f$ from Example 2 in Sect. 5 one has

${\widetilde{\zeta}}_{f}^{orb}(t)=(1-t^{6k})^{-1+2(6k-1)+(1-6k^{2})}\cdot(1-t^{3k})^{-1}\cdot(1-t^{2k})^{-1}.$

This follows from the fact that, for $G={\mathcal{S}}_{3}$ and for a subgroup $H$ of ${\mathcal{S}}_{3}$, $\chi^{orb}({\mathcal{S}}_{3}/H,{\mathcal{S}}_{3})=|H|$.

#### Acknowledgements

S. M. Gusein-Zade was partially supported by the Grants RFBR-13-01-00755 and NSh-5138.2014.1. I. Luengo and A.
Melle-Hernández were partially supported by the Spanish Grant MTM2013-45710-C2-2-P.

### References

• [A'Campo1975] A'Campo, N.: La fonction zêta d'une monodromie. Comment. Math. Helv. 50, 233–248 (1975)
• [Arnold et al.1988] Arnold, V.I., Gusein-Zade, S.M., Varchenko, A.N.: Singularities of Differentiable Maps. vol. II. Monodromy and Asymptotics of Integrals, Monographs in Mathematics, vol. 83. Birkhäuser, Boston (1988)
• [Atiyah and Segal1989] Atiyah, M., Segal, G.: On equivariant Euler characteristics. J. Geom. Phys. 6(4), 671–677 (1989)
• [Gusein-Zade et al.1999] Gusein-Zade, S.M., Delgado, F., Campillo, A.: On the monodromy of a plane curve singularity and the Poincaré series of its ring of functions. Funktsional. Anal. i Prilozhen. 33(1), 66–68 (1999); translation in Funct. Anal. Appl. 33(1), 56–57 (1999)
• [Campillo et al.2007] Campillo, A., Delgado, F., Gusein-Zade, S.M.: On Poincaré series of filtrations on equivariant functions of two variables. Mosc. Math. J. 7(2), 243–255 (2007)
• [Campillo et al.2013] Campillo, A., Delgado, F., Gusein-Zade, S.M.: Equivariant Poincaré series of filtrations. Rev. Mat. Complut. 26(1), 241–251 (2013)
• [Campillo et al.2014] Campillo, A., Delgado, F., Gusein-Zade, S.M.: An equivariant Poincaré series of filtrations and monodromy zeta functions. Rev. Mat. Complut. (2014). doi:10.1007/s13163-014-0160-8
• [Clemens1969] Clemens, C.H.: Picard–Lefschetz theorem for families of nonsingular algebraic varieties acquiring ordinary singularities. Trans. Am. Math. Soc. 136, 93–108 (1969)
• [Crabb2007] Crabb, M.C.: Equivariant fixed-point indices of iterated maps. J. Fixed Point Theory Appl. 2(2), 171–193 (2007)
• [Denef and Loeser1992] Denef, J., Loeser, F.: Caractéristiques d'Euler–Poincaré, fonctions zêta locales et modifications analytiques. J. Am. Math. Soc. 5(4), 705–720 (1992)
• [Dzedzej2001] Dzedzej, Z.: Fixed orbit index for equivariant maps. In: Proceedings of the Third World Congress of Nonlinear Analysts, Part 4 (Catania, 2000). Nonlinear Analytics, vol. 47(4), pp. 2835–2840 (2001)
• [Ebeling and Gusein-Zade2012a] Ebeling, W., Gusein-Zade, S.M.: Saito duality between Burnside rings for invertible polynomials. Bull. Lond. Math. Soc. 44, 814–822 (2012)
• [Ebeling and Gusein-Zade2012b] Ebeling, W., Gusein-Zade, S.M.: Equivariant Poincaré series and monodromy zeta functions of quasihomogeneous polynomials. Publ. Res. Inst. Math. Sci. 48(3), 653–660 (2012)
• [Gusein-Zade2010] Gusein-Zade, S.M.: Integration with respect to the Euler characteristic and its applications. Uspekhi Mat. Nauk 65(393), 5–42 (2010) (no. 3); translation in Russ. Math. Surv. 65(3), 399–432 (2010)
• [Gusein-Zade2013] Gusein-Zade, S.M.: On an equivariant analogue of the monodromy zeta function. Funktsional. Anal. i Prilozhen. 47(1), 17–25 (2013); translation in Funct. Anal. Appl. 47(1), 14–20 (2013)
• [Gusein-Zade et al.2006] Gusein-Zade, S.M., Luengo, I., Melle-Hernández, A.: Power structure over the Grothendieck ring of varieties and generating series of Hilbert schemes of points. Mich. Math. J. 54(2), 353–359 (2006)
• [Gusein-Zade et al.2008] Gusein-Zade, S.M., Luengo, I., Melle-Hernández, A.: An equivariant version of the monodromy zeta function. In: Geometry, Topology, and Mathematical Physics, pp. 139–146. American Mathematical Society Translation Series 2, 224, American Mathematical Society, Providence (2008)
• [Hirzebruch and Höfer1990] Hirzebruch, F., Höfer, Th.: On the Euler number of an orbifold. Math. Ann. 286(1–3), 255–260 (1990)
• [Knutson1973] Knutson, D.: $\lambda$-Rings and the Representation Theory of the Symmetric Group. Lecture Notes in Mathematics, vol. 308. Springer, Berlin, New York (1973)
• [Lück and Rosenberg2003] Lück, W., Rosenberg, J.: The equivariant Lefschetz fixed point theorem for proper cocompact $G$-manifolds. In: Farrell, F.T., Lück, W. (eds.) High-Dimensional Manifold Topology, pp. 322–361. World Scientific Publishing, River Edge (2003)
• [tom Dieck1979] tom Dieck, T.: Transformation Groups and Representation Theory. Lecture Notes in Mathematics, vol. 766. Springer, Berlin (1979)
• [Verdier1973] Verdier, J.-L.: Caractéristique d'Euler–Poincaré. Bull. Soc. Math. France 101, 441–445 (1973)
• [Wall1980] Wall, C.T.C.: A note on symmetry of singularities. Bull. Lond. Math. Soc. 12(3), 169–175 (1980)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507799744606018, "perplexity": 239.29601257293334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689775.56/warc/CC-MAIN-20170923175524-20170923195524-00525.warc.gz"}
https://elearning.atmajaya.ac.id/course/index.php?categoryid=62
### Microsoft Word Training (Pelatihan Ms Word)

Write a concise and interesting paragraph here that explains what this course is about

### Microsoft PowerPoint Training (Pelatihan Ms Powerpoint)

Write a concise and interesting paragraph here that explains what this course is about

### Moodle Training Materials (Materi Pelatihan Moodle)

Write a concise and interesting paragraph here that explains what this course is about
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9802294373512268, "perplexity": 7511.647931777493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00206.warc.gz"}
https://icml.cc/Conferences/2020/ScheduleMultitrack?event=6810
Poster

A Nearly-Linear Time Algorithm for Exact Community Recovery in Stochastic Block Model

Peng Wang · Zirui Zhou · Anthony Man-Cho So

Thu Jul 16 07:00 AM -- 07:45 AM & Thu Jul 16 07:00 PM -- 07:45 PM (PDT) @ Virtual #None

Learning community structures in graphs that are randomly generated by stochastic block models (SBMs) has received much attention lately. In this paper, we focus on the problem of exactly recovering the communities in a binary symmetric SBM, where a graph of $n$ vertices is partitioned into two equal-sized communities and the vertices are connected with probability $p = \alpha\log(n)/n$ within communities and $q = \beta\log(n)/n$ across communities for some $\alpha>\beta>0$. We propose a two-stage iterative algorithm for solving this problem, which employs the power method with a random starting point in the first-stage and turns to a generalized power method that can identify the communities in a finite number of iterations in the second-stage. It is shown that for any fixed $\alpha$ and $\beta$ such that $\sqrt{\alpha} - \sqrt{\beta} > \sqrt{2}$, which is known to be the information-theoretical limit for exact recovery, the proposed algorithm exactly identifies the underlying communities in $\tilde{O}(n)$ running time with probability tending to one as $n\rightarrow\infty$. As far as we know, this is the first algorithm with nearly-linear running time that achieves exact recovery at the information-theoretical limit. We also present numerical results of the proposed algorithm to support and complement our theoretical development.
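As a rough, self-contained illustration of the spectral idea behind the first stage (a toy sketch for intuition only, not the authors' two-stage algorithm; the graph size and edge probabilities below are arbitrary), power iteration on a centered adjacency matrix already recovers planted communities well above the threshold:

```python
import numpy as np

def sbm_two_communities(n, p_in, p_out, rng):
    """Sample an adjacency matrix for a binary symmetric SBM with equal-sized blocks."""
    labels = np.array([1] * (n // 2) + [-1] * (n - n // 2))
    probs = np.where(np.equal.outer(labels, labels), p_in, p_out)
    upper = rng.random((n, n)) < probs          # i.i.d. coin flips
    A = np.triu(upper, 1)                       # keep the strict upper triangle
    return (A + A.T).astype(float), labels      # symmetrize

def power_method_partition(A, iters=200, rng=None):
    """Toy spectral recovery: power iteration on the centered adjacency matrix.

    Subtracting the mean suppresses the all-ones direction, so the leading
    eigenvector of the centered matrix aligns with the community split; its
    sign pattern is the estimated partition."""
    rng = rng or np.random.default_rng(0)
    B = A - A.mean()
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        x = B @ x
        x /= np.linalg.norm(x)
    return np.sign(x)

rng = np.random.default_rng(1)
A, truth = sbm_two_communities(600, p_in=0.10, p_out=0.02, rng=rng)
est = power_method_partition(A, rng=rng)
agreement = max(np.mean(est == truth), np.mean(est == -truth))
print(f"fraction of correctly labelled vertices: {agreement:.3f}")
```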
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7933269739151001, "perplexity": 342.8519124781542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323586043.75/warc/CC-MAIN-20211024142824-20211024172824-00041.warc.gz"}
http://mathhelpforum.com/advanced-algebra/2855-irreducible.html
# Math Help - Irreducible????

1. ## Irreducible????

Question: Determine if x^4 + x^2 + 1 is irreducible in Z3[x]. Factorize it if you can.

I think x^4 + x^2 + 1 can be factored as (x^2 + 2)(x^2 + 2), which would make it reducible, but I'm not sure?

2. Originally Posted by mathlg

Question: Determine if x^4 + x^2 + 1 is irreducible in Z3[x]. Factorize it if you can.

I think x^4 + x^2 + 1 can be factored as (x^2 + 2)(x^2 + 2), which would make it reducible, but I'm not sure?

$(x^2+2)(x^2+2)=x^4+4x^2+4$. Since we are in $\mathbb{Z}_3[x]$, we have that $x^4+4x^2+4 = x^4+x^2+1$.
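For completeness, the factorisation can be pushed down to linear factors (my own working, not part of the original thread): since $4\equiv 1\pmod 3$, the poster's product is indeed $(x^2+2)^2=x^4+4x^2+4\equiv x^4+x^2+1$ in $\mathbb{Z}_3[x]$, and moreover $x^2+2\equiv x^2-1=(x-1)(x+1)$, so $$x^4+x^2+1 \equiv (x+1)^2(x+2)^2 \pmod 3.$$ In particular the polynomial is reducible, not irreducible, in $\mathbb{Z}_3[x]$.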
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7223194241523743, "perplexity": 3379.945733794215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246660724.78/warc/CC-MAIN-20150417045740-00149-ip-10-235-10-82.ec2.internal.warc.gz"}
https://link.springer.com/article/10.1007%2Fs12220-017-9760-0
The Journal of Geometric Analysis, Volume 27, Issue 3, pp 2269–2277

# The Asymptotically Flat Scalar-Flat Yamabe Problem with Boundary

Stephen McCormick

## Abstract

We consider two cases of the asymptotically flat scalar-flat Yamabe problem on a non-compact manifold with inner boundary in dimension $$n\ge 3$$. First, following arguments of Cantor and Brill in the compact case, we show that given an asymptotically flat metric g, there is a conformally equivalent asymptotically flat scalar-flat metric that agrees with g on the boundary. We then replace the metric boundary condition with a condition on the mean curvature: given a function f on the boundary that is not too large, we show that there is an asymptotically flat scalar-flat metric, conformally equivalent to g, whose boundary mean curvature is given by f. The latter case involves solving an elliptic PDE with critical exponent using the method of sub- and supersolutions. Both results require the usual assumption that the Sobolev quotient is positive.

## Keywords

Yamabe problem, Asymptotically flat manifold, Scalar curvature

## Mathematics Subject Classification

58J05, 53C21, 53A30, 35B33

## Notes

### Acknowledgements

The author would like to thank the Institut Henri Poincaré for their hospitality while part of this work was completed, and gratefully acknowledges that this work was supported by a UNE Research Seed Grant.

## References

1. Bartnik, R.: The mass of an asymptotically flat manifold. Commun. Pure Appl. Math. 39, 661–693 (1986)
2. Bartnik, R.: New definition of quasilocal mass. Phys. Rev. Lett. 62(20), 845–885 (1989)
3. Bartnik, R., Isenberg, J.: The constraint equations. In: Chruściel, P., Friedrich, H. (eds.) The Einstein Equations and the Large Scale Behavior of Gravitational Fields, pp. 1–38. Birkhäuser, Basel (2004)
4. Brendle, S., Marques, F.C.: Recent progress on the Yamabe problem. arXiv:1010.4960 (2010)
5. Cantor, M.: A necessary and sufficient condition for York data to specify an asymptotically flat spacetime. J. Math. Phys. 20(8), 1741–1744 (1979)
6. Cantor, M.: Some problems of global analysis on asymptotically simple manifolds. Compos. Math. 38(1), 3–35 (1979)
7. Cantor, M., Brill, D.: The laplacian on asymptotically flat manifolds and the specification of scalar curvature. Compos. Math. 43(3), 317–330 (1981)
8. Escobar, J.F.: Conformal deformation of a Riemannian metric to a scalar flat metric with constant mean curvature on the boundary. Ann. Math. 136, 1–50 (1992)
9. Escobar, J.F.: The Yamabe problem on manifolds with boundary. J. Differ. Geom. 35(1), 21–84 (1992)
10. Escobar, J.F.: Conformal metrics with prescribed mean curvature on the boundary. Calc. Var. Partial Differ. Equ. 4(6), 559–592 (1996)
11. Kazdan, J.L., Warner, F.W.: Scalar curvature and conformal deformation of riemannian structure. J. Differ. Geom. 10(1), 113–134 (1975)
12. Lee, J., Parker, T.: The Yamabe problem. Bull. Am. Math. Soc. 17(1), 37–91 (1987)
13. Maxwell, D.: Solutions of the Einstein constraint equations with apparent horizon boundaries. Commun. Math. Phys. 253(3), 561–583 (2005)
14. McCormick, S.: The hilbert manifold of asymptotically flat metric extensions. arXiv:1512.02331 (2015)
15. Schwartz, F.: The zero scalar curvature Yamabe problem on noncompact manifolds with boundary. Indiana Univ. Math. J. 55(4), 1449–1459 (2006)
16. Szabados, L.B.: Quasi-local energy-momentum and angular momentum in general relativity: a review article. Living Rev. Relativ. 7, 4 (2004)
17. Trudinger, N.S.: Remarks concerning the conformal deformation of Riemannian structures on compact manifolds. Ann. Della Sc. Norm. Super. Pisa-Classe Sci. 22(2), 265–274 (1968)
18. Yamabe, H.: On a deformation of Riemannian structures on compact manifolds. Osaka Math. J. 12(1), 21–37 (1960)

## Authors and Affiliations

Stephen McCormick (1, 2)
1. School of Science and Technology, University of New England, Armidale, Australia
2. Institutionen för Matematik, Kungliga Tekniska Högskolan, Stockholm, Sweden
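As standard background for the abstract above (textbook material on conformal changes of metric, not taken from this paper): under a conformal change $$\tilde g = u^{4/(n-2)}g$$ with $$u>0$$, the scalar curvatures are related by
$$R_{\tilde g} = u^{-\frac{n+2}{n-2}}\left(-\frac{4(n-1)}{n-2}\,\Delta_g u + R_g\,u\right),$$
so producing a scalar-flat metric in the conformal class amounts to solving the linear equation $$-\frac{4(n-1)}{n-2}\,\Delta_g u + R_g\,u = 0$$ in the interior, while prescribing the boundary mean curvature turns the boundary condition into a nonlinear one involving the critical power $$u^{n/(n-2)}$$, which is where the sub- and supersolution method mentioned in the abstract enters.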
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258082509040833, "perplexity": 2300.6671801024054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823657.20/warc/CC-MAIN-20181211151237-20181211172737-00104.warc.gz"}
http://mathhelpboards.com/differential-equations-17/method-integrating-factor-20250.html?s=937dcc8a83392d4f10f5259d4891b7f7
# Thread: method of integrating factor 1. $\tiny{206.q3.2}\\$ $\textsf{3. use the method of integrating factor}\\$ $\textsf{to find the general solution to the first order linear differential equation}\\$ \begin{align} \displaystyle \frac{dy}{dx}+5y=10x \end{align} $\textit{clueless !!!}$ 2. Given a first order linear ODE of the form: $\displaystyle \d{y}{x}+f(x)y=g(x)$ We can use an integrating factor $\mu(x)$ to make the LHS of the ODE into the derivative of a product, using the special properties of the exponential function with regard to differentiation. Consider what happens if we multiply though by: $\displaystyle \mu(x)=\exp\left(\int f(x)\,dx\right)$ We get: $\displaystyle \exp\left(\int f(x)\,dx\right)\d{y}{x}+\exp\left(\int f(x)\,dx\right)f(x)y=g(x)\exp\left(\int f(x)\,dx\right)$ Now, let's use: $\displaystyle F(x)=\int f(x)\,dx\implies F'(x)=f(x)$ And we now have: $\displaystyle \exp\left(F(x)\right)\d{y}{x}+\exp\left(F(x)\right)F'(x)y=g(x)\exp\left(F(x)\right)$ Now, if we observe that, via the product rule, we have: $\displaystyle \frac{d}{dx}\left(\exp(F(x))y\right)=\exp\left(F(x)\right)\d{y}{x}+\exp\left(F(x)\right)F'(x)y$ Then, we may now write our ODE as: $\displaystyle \frac{d}{dx}\left(\exp(F(x))y\right)=g(x)\exp\left(F(x)\right)$ Now, we may integrate both sides w.r.t $x$. So, in the given ODE: $\displaystyle \d{y}{x}+5y=10x$ We identify: $\displaystyle f(x)=5$ And so we compute the integrating factor as: $\displaystyle \mu(x)=\exp\left(5\int\,dx\right)=$?
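For reference, here is how the computation finishes for this particular ODE (my own worked continuation in the same notation, not part of the original thread): $\displaystyle \mu(x)=\exp\left(5\int\,dx\right)=e^{5x}$, so the ODE becomes $\displaystyle \frac{d}{dx}\left(e^{5x}y\right)=10xe^{5x}$. Integrating both sides (using integration by parts on the right) gives $\displaystyle e^{5x}y=2xe^{5x}-\frac{2}{5}e^{5x}+C$, and hence the general solution is $\displaystyle y=2x-\frac{2}{5}+Ce^{-5x}$.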
https://dalts.com/tags/linux/
## Building Pianobar On (L)Ubuntu I’ve been using Linux since slackware 0.9 and I’m quite used to compiling software and kernels from source, but in recent years it’s very rare that I have to do this anymore. I run Lubuntu on my desktop as I just like things things to be clean, simple and just work - I often sacrifice the latest and greatest for stability. One of the tools I just can’t do without is pianobar which is a command line tool for listening to music on Pandora, which incidentally is now available in Australia without the use of proxies! [Read More] ## Gold Coast TechSpace Storage One of my big projects over the last year outside of my day job has been helping to setup the Gold Coast TechSpace. It’s been a really rocky ride; things like this are really difficult to setup in a place like the Gold Coast and it’s been a huge learning experience for myself and the founding committee. However, it does feel like we are over the hump now with people starting to get what we are trying to achieve and a handful of loyal members paying a membership fee of $40/month ($20 for students) to help cover our rent. [Read More] ## Noteboard OLPC Project at Griffith Uni Last night I had the pleasure of attending the presentation of 4 Griffith University Gold Coast Students Yukito Tsunoda, Harold Haigh, Ben Cowley & Maryam Nemati. They have been working on an OLPC Activity “NoteBoard” as part of their Industry project. The students developed the activity based on specs supplied by Sridhar at OLPC Australia. This approach differs from some of the other OLPC Activities in that it was based on a real customer need rather than the meandering evolution that we sometimes see with some of these projects. [Read More] ## Coding By Numbers lca.conf.au wrap-up episode I didn’t get a chance to do any blogging at linux.conf.au this year - not even a wrap-up, but perhaps this is better. We did a codingbynumbers wrap-up episode where we summarized our time at the conf. In case you were wondering where the PHP episode is that I recorded at LCA - that’s the next episode, thought we’d get this one out quickly first. ## Google Go Interview with Andrew Gerrand For those that attended the Google Go Tutorial at linux.conf.au here is the link to the podcast interview that I did the week before the conference. Hope you find it useful. For those interest in Go that live in Brisbane we are hoping to do a session on GO very soon - watch this space. http://www.codingbynumbers.com/2011/01/coding-by-numbers-episode-20-interview.html ## Mercurial and Subversion, good playmates We've had a couple of Subversion outages recently. As usual, development ground to a halt. People couldn't update their projects, couldn't get the history - we shelved some changes in Intellij for checkin later in the day... in short a pain we could live without. Such is life with centralised version control. Since the advent of bitbucket I've been using Mercurial a lot for my own projects, so thought why not try it at work as a fallback for when subversion is down. [Read More] ## Leave em to it! Leave em to it! I was recently inspired to pull out the XOs again - they had been sitting on my shelf doing nothing and I thought I should make use of them. Anyway - I've given one to @aspinall for his daughter and one to my son Jaron (5).  I've pretty much just left him to it, and he's steadily working it out. Anyway, tonight I find him in his room playing around with the speech application. [Read More]
https://motls.blogspot.com/2014/09/production-of-vacuum-cleaners-above.html?m=1
## Monday, September 01, 2014 ### Production of vacuum cleaners above 1600 watts banned in the EU In November 2013, I reminded everyone that the EU had an incredibly irresponsible plan to simply ban all vacuum cleaners above 1600 watts of the input power. (The equivalent figure that Americans would use is 110 times smaller because the voltage is 110 volts and in amps, so 1600 watts is equivalent to 14.55 amps.) Today, on the 75th anniversary of the outbreak of World War II, the ban came to force. I didn't want to believe that it would ever become valid but it really has. Czech media say that it's still OK to sell them and the retailers have huge inventories, indeed. Some Western European media suggest that it is no longer legal to even sell them – but the sale really seems to continue in Czechia, a country that is telling the EU overlords "screw you, Ken". Needless to say, no significant improvement in the efficiency of the vacuum cleaners has materialized since the late 2013. The models on the market still have the "useful suction power" in AW (air watts) equal to 1/5 or 1/6 of the input power; the models for which the ratio approaches 1/4 are significantly more expensive. Just check the top 20 bestselling vacuum cleaners in Czechia. The table looks as follows: 1. 320 AW / 1600 W 2. battery 3. 500 AW / 2200 W 4. 510 AW / 2000 W 5. 425 AW / 2000 W 6. 380 AW / 2000 W 7. 500 AW / 2200 W 8. ??? AW / 1700 W 9. ??? AW / 1000 W 10. 450 AW / 2000 W 11. battery 12. 230 AW / 1400 W 13. 500 AW / 2200 W 14. battery 15. ??? AW / 2200 W 16. 300 AW / 1400 W 17. battery 18. 400 AW / 1500 W 19. battery 20. 340 AW / 2000 W 21. 304 AW / 1800 W 22. ??? AW / 500 W I've included two models that were "advertised" in between. 17 models above use AC (power outlets). Out of these 17 models, the production of 11 models (two thirds) is already banned on the European territory. The arrogance with which these bureaucratic assholes pick a scapegoat – something that consumes a negligible fraction of the energy – and ban two thirds of the top models on the market is stunning. There is a sense in which Ukraine – or North Korea – does belong to the EU. More precisely, Brussels does belong to Ukraine or North Korea. North Korea celebrates its Earth Hour or Earth Day every hour on every day in every year, as the notorious satellite picture above shows. But another failed state, Ukraine, isn't too far from that ideal society, as the RT video shows. Not so wisely, the current Kiev regime has declared a so far "soft" war against its key supplier of energy resources. Instead of trying to fix these external problems and start proper business with other countries again, the rulers have paid for a P.R. campaign convincing the Ukrainians that they don't really need electricity (and other forms of energy). When you save energy, you save Ukraine, the poor Western Ukrainian sheep are told. What the commercials don't say is that the rulers could save Ukraine as well, simply by stopping screwing their country and by creating conditions in which the unrestricted import of the energy resources will actually resume. But you know, fascism and ecofascism are twin sisters. In the modern world, only failed states really need to tell their citizens to save energy. It's too bad that the apparatchiks in Brussels are transforming the European Union into a failed territory not dissimilar to North Korea or Ukraine. From our viewpoint, we are sort of being returned to the age of socialist "market" with its shortages of products. 
Notoriously in the mid 1980s, it was hard to buy toilet paper in Czechoslovakia. You may imagine the image our country had among the West Germans. Toilet paper may be found politically incorrect soon, too. They are already banning toilet tanks above 6 liters. Because some šit inevitably stays in the toilet most of the time when the tanks are smaller, it is very hard not to call the heads of the EU institutions "šitheads".

From 2017, the EU also wants to ban vacuum cleaners above 900 watts – including more or less all existing models today. Add hair driers and a few more bans summarized by the illustration above. ;-)

#### 1 comment:

1. In America, our vacuums are advertised by amperage, not watts, so I had to do the maths to find out our 12 amp model would be just fine under the new regs, but not the eventual ones. Of course, this will be handled the same way that the dunderheaded light bulb regulations and shower head flow control regulations are in America -- you'll start seeing much more expensive vacuums, because they solve the problem by putting 2+ 900 watt motors in the vacuum and combining them with some Rube Goldberg transmission device scheme. And of course there will be power losses in the transmission schemes, so you will need even more wattage burned in the eventual "compliant" model than if they had simply used a single bigger engine.
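For what it's worth, the conversion behind these figures is just P = V·I for a (roughly) resistive load; a couple of lines of Python, using the 110 V figure quoted in the post, reproduce them:

```python
# P = V * I for a (roughly) resistive load
print(round(1600 / 110, 2))   # 1600 W at 110 V is about 14.55 A, the figure quoted in the post
print(12 * 110)               # a US "12 amp" vacuum draws about 1320 W: under 1600 W, over 900 W
```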
https://www.physicsforums.com/threads/friction-a-sled-snow-covered-hill-sliding-box.268033/
# Homework Help: Friction, a sled, snow covered hill, sliding box 1. Oct 29, 2008 ### Blangett 1. The problem statement, all variables and given/known data Question Details: A rope attached to a 19.0 kg wood sled pulls the sled up a 18.0 kg snow-covered hill. A 9.0 kg wood box rides on top of the sled. If the tension in the rope steadily increases, at what value of the tension does the box slip? So far none of the answers I have tried has worked. I looked up the friction coefficients. μ=.50 (wood on wood) μ=.12 (wood on snow) I have a midterm I am studying for on Friday. I am trying to grasp these types of problems unsuccessfully. Any help you can provide is deeply appreciated. 2. Relevant equations Where do I go from here? I am stuck on how to proceed. The fact that I am using the net forces of two seperate objects is confusing me. I am not sure what to do with them (do I add the Fnet x together?. Also the fact that I do not have acceleration given is bothering me. 3. The attempt at a solution Known fk=μkn ms=19.0kg mb=9.0kg μk=.06 (wood on snow) μs=.50 (wood on wood) n=m*g gx=9.8sinθ m/s2 gy=9.8cosθ m/s2 (Fnetx)sled=T-fk-g = T-μkn-msg*sinθ=msax = T-μkmsg *cosθ-msg*sinθ=msax (Fnety)sled = n-g=msax = n-msg*cosθ=m*0 n =ms*g*cosθ (Fnetx)box = -fs-g = mbax =μs*mb*g*cosθ*mb*g*sinθ =mbax (Fnety)box= n - g = n - mb*g*cosθ=mbay = n - mb*g*cosθ=0 n= mb*g*cosθ 2. Oct 30, 2008 ### LowlyPion Welcome to PF. I presume that θ = 18 and that it is not an 18 kg hill. Also I'm not sure whether your coefficients of friction are given as part of the problem or you are supplying them. But you haven't properly recorded it in your equations. Without working through your equations, let me just say generally that the first thing to find is the acceleration of the system (box and sled) - once T overcomes wood/snow friction of both normal components of weight as well as the component of weight down the incline. That acceleration then applies to both masses - (sled + box). Armed with acceleration then at what point does the acceleration against the box overcome the normal component of the weight times the wood wood coefficient? 3. Oct 24, 2010 I'm working on this same problem, and rather than post a new thread, I thought it best to continue on this one. So far I've gotten just half a step father than the OP. First I'll restate the problem in general terms: A rope attached to an $$M_{s}$$ kg wood sled pulls the sled up a θ-degree snow-covered hill. An $$M_{b}$$ kg wood box rides on top of the sled. If the tension in the rope steadily increases, at what value of the tension does the box slip? I'm given $$M_{s}$$, $$M_{b}$$, θ, and both static and kinetic μ of friction for wood-on-wood, and wood-on-snow. First thing I did was make a free-body-diagram of the box, the sled, and of them both as a system. Then I double checked the quantity of forces by drawing an interaction diagram. The box has 3 forces: weight, normal, and friction. The sled has 6 forces: tension, weight, friction from ground, normal from ground, friction from box, & normal from box. As a system there are 4 forces: normal, tension, weight, and friction. I set up the hill as my (+) x-axis, and the y-axis perpendicular to that. Thus the weights are decomposed into: $$\vec{w}$$=(-)(M)(sin(θ))$$\hat{x}$$ + (-)(M)(cos(θ))$$\hat{y}$$ Using the system, when the sled is just about to start moving: T - $$F_{ss}$$ - $$M_{t}$$ = 0 so T = ($$M_{t}$$)(g)(sin(θ)) + (μ_ss)($$M_{t}$$)(g)(cos(θ)) Then I divide by M to get A, because [T = MA] ==> [A = T/M]. 
This gives:

A = (g)( sin(θ) + (μ_ss)cos(θ) )

Now, I'm unsure what is meant by "acceleration against the box." I guess my problem is that I don't understand how the tension on the sled relates to the friction on the box. In my FBD of the sled, both frictional forces are pointing in the (-) x-direction. So I figured that when the weight of the box is equal to the frictional force on the box it will be just about to slip. So I put the maximum static friction of the box into the equation of the sum-of-forces for the sled:

T - ( $$f_{sled}$$ + $$f_{box}$$ ) - ( ($$M_{s}$$)(g)(cos(θ)) ) = 0

T - [ (μ_ss)($$M_{s}$$)(g)(cos(θ)) ] - [($$M_{b}$$)(g)(sin(θ))] - [($$M_{s}$$)(g)(sin(θ))]

Solving for T does not give me the correct answer. I know, from the back of the book, that when $$M_{s}$$ = 20 kg, $$M_{b}$$ = 10 kg, and θ = 20 degrees, that T = 155 N. I tried something else too, but it's all gibberish now.
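Reading LowlyPion's hint numerically may help here. The sketch below is one way to do it (my own, with assumed coefficients $$\mu_{ww}\approx 0.50$$ for static wood-on-wood and $$\mu_{ws}\approx 0.06$$ for kinetic wood-on-snow, since this variant of the problem does not state them): the box can only follow the sled while static friction supplies $$M_b(a + g\sin\theta)$$, so the largest acceleration it can follow is $$a_{max} = \mu_{ww}g\cos\theta - g\sin\theta$$; putting that acceleration into Newton's second law for the whole system gives the tension at which the box starts to slip.

```python
import math

# Assumed friction coefficients (not stated explicitly in this variant of the problem)
mu_ww = 0.50                  # static, wood on wood (box on sled)
mu_ws = 0.06                  # kinetic, wood on snow (sled on hill, once it is moving)

g = 9.8                       # m/s^2
M_s, M_b = 20.0, 10.0         # kg, the back-of-the-book case
theta = math.radians(20.0)

# Largest acceleration the box can follow before static friction maxes out
a_max = mu_ww * g * math.cos(theta) - g * math.sin(theta)

# Newton's second law for sled + box at that acceleration
T_slip = (M_s + M_b) * (a_max + g * math.sin(theta) + mu_ws * g * math.cos(theta))

print(round(T_slip))          # 155 (N), matching the back of the book
```

The $$g\sin\theta$$ terms cancel, so this reduces to $$T = (M_s + M_b)\,g\cos\theta\,(\mu_{ww} + \mu_{ws}) \approx 155\ \mathrm{N}$$.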
https://bio.libretexts.org/TextMaps/Biochemistry/Book%3A_Biochemistry_Free_and_Easy_(Ahern_and_Rajagopal)/06%3A_Metabolism_I/6.01%3A_Definitions
# 6.1: Definitions

We start by defining a few terms. Anabolic processes refer to collections of biochemical reactions that make bigger molecules from smaller ones. Examples include the synthesis of fatty acids from acetyl-CoA, of proteins from amino acids, of complex carbohydrates from simple sugars, and of nucleic acids from nucleotides. Just as any construction project requires energy, so, too, do anabolic processes require input of energy. Anabolic processes tend to be reductive in nature, in contrast to catabolic processes, which are oxidative. Not all anabolic processes are reductive, though. Protein synthesis and nucleic acid synthesis do not involve reduction, though the synthesis of amino acids and nucleotides does.

Catabolic processes are the primary sources of energy for heterotrophic organisms and they ultimately power the anabolic processes. Examples include glycolysis (breakdown of glucose), the citric acid cycle, and fatty acid oxidation. Reductive processes require electron sources, such as NADPH, NADH, or $$\text{FADH}_2$$. Oxidative processes require electron carriers, such as $$\text{NAD}^+$$, $$\text{NADP}^+$$, or FAD. Catabolic processes are ultimately the source of ATP energy in cells, but the vast majority of ATP in heterotrophic organisms is not made directly in these reactions. Instead, the electrons released by oxidation are collected by electron carriers which donate them, in the mitochondria, to complexes that make ATP (ultimately) by oxidative phosphorylation.

Figure 6.1.1: Redox Reactions

In our tour of metabolism, we will tackle in this chapter processes that are the most oxidative/reductive in nature and in the following chapter those pathways that involve less reduction/oxidation. The aim in this coverage is not to go through the step-by-step reactions of the pathway, but rather to focus on control points, interesting enzymes, molecules common between pathways, and how the metabolic pathways meet the organism's needs.
http://mathhelpforum.com/calculus/65917-differential-geometry.html
differential geometry?

Hello there. I was reading through some work on curves of pursuit and the author stated a certain equivalence without any comment as to where it came from. Say I have a differentiable curve $C$ in the plane. Given a point $X \in C$, let the distance of the tangent line from the origin at $X$ be $p$, the angle the tangent line makes with the x-axis be $\omega$, and the length of the curve be $s$. Then we have the relation:

$p + \frac{d^2p}{d\omega^2} = \frac{ds}{d\omega}$

This sort of thing looks like it should be intuitive but I can't see why it is at all, or at least like it should follow from some other nice results. I've been able to show it by letting $C = (x_1(t), x_2(t))$:

$p = \frac{x_1\dot{x}_2-\dot{x}_1x_2}{\sqrt{\dot{x}_1^2+\dot{x}_2^2}}$

$\omega = \arctan\left(\frac{\dot{x}_2}{\dot{x}_1}\right)$

$\frac{ds}{dt} = \sqrt{\dot{x}_1^2+\dot{x}_2^2}$

and then working out all the derivatives manually, but that doesn't seem to help me understand it either. I hope that someone recognizes it and can point me in the direction of a nicer explanation of it.
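Not an explanation, but one way to build confidence in the relation is to check it symbolically on a test curve using exactly the three formulas above. The SymPy sketch below (my own check; the circle of radius $R$ centred at $(a,0)$ is just a convenient example) finds that both sides reduce to $R$, since on this curve $p = R + a\sin\omega$ and $\frac{ds}{d\omega} = R$:

```python
import sympy as sp

t, a, R = sp.symbols('t a R', positive=True)

# Test curve: a circle of radius R centred at (a, 0)
x1, x2 = a + R*sp.cos(t), R*sp.sin(t)
x1d, x2d = sp.diff(x1, t), sp.diff(x2, t)

speed = sp.simplify(sp.sqrt(x1d**2 + x2d**2))     # ds/dt                 -> R
p     = sp.simplify((x1*x2d - x1d*x2)/speed)      # tangent-line distance -> R + a*cos(t)
w     = sp.atan(x2d/x1d)                          # tangent angle (any branch works for derivatives)
wd    = sp.simplify(sp.diff(w, t))                # d(omega)/dt           -> 1

dp_dw   = sp.simplify(sp.diff(p, t)/wd)           # dp/d(omega)
d2p_dw2 = sp.simplify(sp.diff(dp_dw, t)/wd)       # d^2 p/d(omega)^2

print(sp.simplify(p + d2p_dw2 - speed/wd))        # 0, so the identity holds on this curve
```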
http://mathhelpforum.com/advanced-statistics/126740-what-meaning-print.html
# what is the meaning of • February 2nd 2010, 12:54 AM transgalactic what is the meaning of $f_Y(y)$ f means density function what are Y and y ? • February 2nd 2010, 01:40 AM mr fantastic Quote: Originally Posted by transgalactic $f_Y(y)$ f means density function what are Y and y ? Y is the random variable. And since f is a function, ..... • February 2nd 2010, 01:42 AM transgalactic what is y then? • February 2nd 2010, 01:48 AM mr fantastic Quote: Originally Posted by transgalactic what is y then? Well, if I have a function g(x), say, what is x ....? Perhaps you need to go back to revise your pre-calculus work on functions. • February 2nd 2010, 02:57 AM CaptainBlack Quote: Originally Posted by transgalactic $f_Y(y)$ f means density function what are Y and y ? It is the density of the RV Y evaluated at y. CB • February 2nd 2010, 03:04 AM transgalactic Quote: Originally Posted by CaptainBlack It is the density of the RV Y evaluated at y. CB i see your words are much closer to what i am looking for can you say what is RV why Y evaluated by y can you give a simple example so it will be clear • February 2nd 2010, 02:57 PM harish21 Quote: Originally Posted by transgalactic i see your words are much closer to what i am looking for can you say what is RV why Y evaluated by y can you give a simple example so it will be clear RV means a random vairable Y being evaluated at y means you are finding the pdf(probability density function) of the random variable! • February 2nd 2010, 03:01 PM matheagle meaning of... life? well that's 42.
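A concrete example may make this clearer (my example, not from the thread): if $Y$ is a standard normal random variable, then $f_Y$ is its probability density function, and $f_Y(y)$ is that function evaluated at the particular number $y$.

```python
from scipy.stats import norm

# Y ~ N(0, 1); f_Y is its density, and f_Y(0.5) is that density evaluated at y = 0.5
y = 0.5
print(norm.pdf(y))   # about 0.3521
```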
https://docs.wpilib.org/pt/latest/docs/software/advanced-controls/trajectories/holonomic.html
# Holonomic Drive Controller

The holonomic drive controller is a trajectory tracker for robots with holonomic drivetrains (e.g. swerve, mecanum, etc.). This can be used to accurately track trajectories with correction for minor disturbances.

## Constructing a Holonomic Drive Controller

The holonomic drive controller should be instantiated with 2 PID controllers and 1 profiled PID controller.

Note: The 2 PID controllers are controllers that should correct for error in the field-relative x and y directions respectively. For example, if the first 2 arguments are PIDController(1, 0, 0) and PIDController(1.2, 0, 0) respectively, the holonomic drive controller will add an additional meter per second in the x direction for every meter of error in the x direction and will add an additional 1.2 meters per second in the y direction for every meter of error in the y direction.

The final parameter is a ProfiledPIDController for the rotation of the robot. Because the rotation dynamics of a holonomic drivetrain are decoupled from movement in the x and y directions, users can set custom heading references while following a trajectory. These heading references are profiled according to the parameters set in the ProfiledPIDController.

```java
var controller = new HolonomicDriveController(
    new PIDController(1, 0, 0), new PIDController(1, 0, 0),
    new ProfiledPIDController(1, 0, 0,
        new TrapezoidProfile.Constraints(6.28, 3.14)));
// Here, our rotation profile constraints were a max velocity
// of 1 rotation per second and a max acceleration of 180 degrees
// per second squared.
```

The holonomic drive controller returns "adjusted velocities" such that when the robot tracks these velocities, it accurately reaches the goal point. The controller should be updated periodically with the new goal. The goal is comprised of a desired pose, linear velocity, and heading.

Note: The "goal pose" represents the position that the robot should be at a particular timestamp when tracking the trajectory. It does NOT represent the trajectory's endpoint.

The controller can be updated using the Calculate (C++) / calculate (Java) method. There are two overloads for this method. Both of these overloads accept the current robot position as the first parameter and the desired heading as the last parameter. For the middle parameters, one overload accepts the desired pose and the linear velocity reference while the other accepts a Trajectory.State object, which contains information about the goal pose. The latter method is preferred for tracking trajectories.

```java
// Sample the trajectory at 3.4 seconds from the beginning.
Trajectory.State goal = trajectory.sample(3.4);

// Get the adjusted speeds. Here, we want the robot to be facing
// 70 degrees (in the field-relative coordinate system).
ChassisSpeeds adjustedSpeeds = controller.calculate(
    currentRobotPose, goal, Rotation2d.fromDegrees(70.0));
```

The adjusted velocities are of type ChassisSpeeds, which contains a vx (linear velocity in the forward direction), a vy (linear velocity in the sideways direction), and an omega (angular velocity around the center of the robot frame).

The returned adjusted speeds can be converted into usable speeds using the kinematics classes for your drivetrain type. In the example code below, we will assume a swerve drive robot; however, the kinematics code is exactly the same for a mecanum drive robot except using MecanumDriveKinematics.

```java
SwerveModuleState[] moduleStates = kinematics.toSwerveModuleStates(adjustedSpeeds);

SwerveModuleState frontLeft = moduleStates[0];
SwerveModuleState frontRight = moduleStates[1];
SwerveModuleState backLeft = moduleStates[2];
SwerveModuleState backRight = moduleStates[3];
```

Because these swerve module states are still speeds and angles, you will need to use PID controllers to set these speeds and angles.
https://en.wikiludia.com/wiki/Expected_value
# Expected value

In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the same experiment it represents. For example, the expected value in rolling a six-sided die is 3.5, because the average of all the numbers that come up approaches 3.5 as the number of rolls approaches infinity (see § Examples for details). In other words, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.

More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each possible value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed to produce the expected value. The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.[1][2]

The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution.[3] For random variables such as these, the long tails of the distribution prevent the sum or integral from converging.

The expected value is a key aspect of how one characterizes a probability distribution; it is one type of location parameter. By contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value, ${\displaystyle \operatorname {Var} (X)=\operatorname {E} [(X-\operatorname {E} [X])^{2}]=\operatorname {E} [X^{2}]-(\operatorname {E} [X])^{2}}$.

The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a "good" estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator; that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.

In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information. For risk neutral agents, the choice involves using the expected values of uncertain quantities, while for risk averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function. One example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment.
According to the model, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber or information security breach).[4] Definition Finite case Let ${\displaystyle X}$ be a random variable with a finite number of finite outcomes ${\displaystyle x_{1},x_{2},\ldots ,x_{k}}$ occurring with probabilities ${\displaystyle p_{1},p_{2},\ldots ,p_{k},}$ respectively. The expectation of ${\displaystyle X}$ is defined as ${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{k}x_{i}\,p_{i}=x_{1}p_{1}+x_{2}p_{2}+\cdots +x_{k}p_{k}.}$ Since all probabilities ${\displaystyle p_{i}}$ add up to 1 (${\displaystyle p_{1}+p_{2}+\cdots +p_{k}=1}$), the expected value is the weighted average, with ${\displaystyle p_{i}}$’s being the weights. If all outcomes ${\displaystyle x_{i}}$ are equiprobable (that is, ${\displaystyle p_{1}=p_{2}=\cdots =p_{k}}$), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes ${\displaystyle x_{i}}$ are not equiprobable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than the others. The intuition however remains the same: the expected value of ${\displaystyle X}$ is what one expects to happen on average. An illustration of the convergence of sequence averages of rolls of a die to the expected value of 3.5 as the number of rolls (trials) grows. Examples • Let ${\displaystyle X}$ represent the outcome of a roll of a fair six-sided die. More specifically, ${\displaystyle X}$ will be the number of pips showing on the top face of the die after the toss. The possible values for ${\displaystyle X}$ are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of ${\displaystyle X}$ is ${\displaystyle \operatorname {E} [X]=1\cdot {\frac {1}{6}}+2\cdot {\frac {1}{6}}+3\cdot {\frac {1}{6}}+4\cdot {\frac {1}{6}}+5\cdot {\frac {1}{6}}+6\cdot {\frac {1}{6}}=3.5.}$ If one rolls the die ${\displaystyle n}$ times and computes the average (arithmetic mean) of the results, then as ${\displaystyle n}$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers. One example sequence of ten rolls of the die is 2, 3, 1, 2, 5, 6, 2, 2, 2, 6, which has the average of 3.1, with the distance of 0.4 from the expected value of 3.5. The convergence is relatively slow: the probability that the average falls within the range 3.5 ± 0.1 is 21.6% for ten rolls, 46.1% for a hundred rolls and 93.7% for a thousand rolls. See the figure for an illustration of the averages of longer sequences of rolls of the die and how they converge to the expected value of 3.5. More generally, the rate of convergence can be roughly quantified by e.g. Chebyshev's inequality and the Berry–Esseen theorem. • The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable ${\displaystyle X}$ represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). 
If the bet wins (which happens with probability 1/38 in American roulette), the payoff is$35; otherwise the player loses the bet. The expected profit from such a bet will be ${\displaystyle \operatorname {E} [\,{\text{gain from }}\1{\text{ bet}}\,]=-\1\cdot {\frac {37}{38}}+\35\cdot {\frac {1}{38}}=-\0.0526.}$ That is, the bet of $1 stands to lose$0.0526, so its expected value is -\$0.0526. Countably infinite case Let ${\displaystyle X}$ be a random variable with a countable set of outcomes ${\displaystyle x_{1},x_{2},\ldots ,}$ occurring with probabilities ${\displaystyle p_{1},p_{2},\ldots ,}$ respectively, such that the infinite sum ${\displaystyle \textstyle \sum _{i=1}^{\infty }|x_{i}|\,p_{i}}$ converges. The expected value of ${\displaystyle X}$ is defined as the series ${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i}.}$ Remark 1. Observe that ${\displaystyle \textstyle {\Bigl |}\operatorname {E} [X]{\Bigr |}\leq \sum _{i=1}^{\infty }|x_{i}|\,p_{i}<\infty .}$ Remark 2. Due to absolute convergence, the expected value does not depend on the order in which the outcomes are presented. By contrast, a conditionally convergent series can be made to converge or diverge arbitrarily, via the Riemann rearrangement theorem. Example • Suppose ${\displaystyle x_{i}=i}$ and ${\displaystyle p_{i}={\frac {k}{i2^{i}}},}$ for ${\displaystyle i=1,2,3,\ldots }$, where ${\displaystyle k={\frac {1}{\ln 2}}}$ (with ${\displaystyle \ln }$ being the natural logarithm) is the scale factor such that the probabilities sum to 1. Then ${\displaystyle \operatorname {E} [X]=1\left({\frac {k}{2}}\right)+2\left({\frac {k}{8}}\right)+3\left({\frac {k}{24}}\right)+\dots ={\frac {k}{2}}+{\frac {k}{4}}+{\frac {k}{8}}+\dots =k.}$ Since this series converges absolutely, the expected value of ${\displaystyle X}$ is ${\displaystyle k}$. • For an example that is not absolutely convergent, suppose random variable ${\displaystyle X}$ takes values 1, −2, 3, −4, ..., with respective probabilities ${\displaystyle {\frac {c}{1^{2}}},{\frac {c}{2^{2}}},{\frac {c}{3^{2}}},{\frac {c}{4^{2}}}}$, ..., where ${\displaystyle c={\frac {6}{\pi ^{2}}}}$ is a normalizing constant that ensures the probabilities sum up to one. Then the infinite sum ${\displaystyle \sum _{i=1}^{\infty }x_{i}\,p_{i}=c\,{\bigg (}1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+\dotsb {\bigg )}}$ converges and its sum is equal to ${\displaystyle {\frac {6\ln 2}{\pi ^{2}}}\approx 0.421383}$. However it would be incorrect to claim that the expected value of ${\displaystyle X}$ is equal to this number—in fact ${\displaystyle \operatorname {E} [X]}$ does not exist (finite or infinite), as this series does not converge absolutely (see Alternating harmonic series). • An example that diverges arises in the context of the St. Petersburg paradox. Let ${\displaystyle x_{i}=2^{i}}$ and ${\displaystyle p_{i}={\frac {1}{2^{i}}}}$ for ${\displaystyle i=1,2,3,\ldots }$. The expected value calculation gives ${\displaystyle \sum _{i=1}^{\infty }x_{i}\,p_{i}=2\cdot {\frac {1}{2}}+4\cdot {\frac {1}{4}}+8\cdot {\frac {1}{8}}+16\cdot {\frac {1}{16}}+\cdots =1+1+1+1+\cdots \,.}$ Since this does not converge but instead keeps growing, the expected value is infinite. Absolutely continuous case If ${\displaystyle X}$ is a random variable whose cumulative distribution function admits a density ${\displaystyle f(x)}$, then the expected value is defined as the following Lebesgue integral: ${\displaystyle \operatorname {E} [X]=\int _{\mathbb {R} }xf(x)\,dx.}$ Remark. 
From computational perspective, the integral in the definition of ${\displaystyle \operatorname {E} [X]}$ may often be treated as an improper Riemann integral ${\displaystyle \textstyle \int _{-\infty }^{+\infty }xf(x)\,dx.}$ Specifically, if the function ${\displaystyle xf(x)}$ is Riemann-integrable on every finite interval ${\displaystyle [a,b]}$, and ${\displaystyle \min \left((-1)\cdot {\hbox{(R)}}\int _{-\infty }^{0}xf(x)\,dx,\ {\hbox{(R)}}\int _{0}^{+\infty }xf(x)\,dx\right)<\infty ,}$ then the values (whether finite or infinite) of both integrals agree. General case In general, if ${\displaystyle X}$ is a random variable defined on a probability space ${\displaystyle (\Omega ,\Sigma ,\operatorname {P} )}$, then the expected value of ${\displaystyle X}$, denoted by ${\displaystyle \operatorname {E} [X]}$, ${\displaystyle \langle X\rangle }$, or ${\displaystyle {\bar {X}}}$, is defined as the Lebesgue integral ${\displaystyle \operatorname {E} [X]=\int _{\Omega }X(\omega )\,d\operatorname {P} (\omega ).}$ Remark 1. If ${\displaystyle X_{+}(\omega )=\max(X(\omega ),0)}$ and ${\displaystyle X_{-}(\omega )=-\min(X(\omega ),0)}$, then ${\displaystyle X=X_{+}-X_{-}.}$ The functions ${\displaystyle X_{+}}$ and ${\displaystyle X_{-}}$ can be shown to be measurable (hence, random variables), and, by definition of Lebesgue integral, {\displaystyle {\begin{aligned}\operatorname {E} [X]&=\int _{\Omega }X(\omega )\,d\operatorname {P} (\omega )\\&=\int _{\Omega }X_{+}(\omega )\,d\operatorname {P} (\omega )-\int _{\Omega }X_{-}(\omega )\,d\operatorname {P} (\omega )\\&=\operatorname {E} [X_{+}]-\operatorname {E} [X_{-}],\end{aligned}}} where ${\displaystyle \operatorname {E} [X_{+}]}$ and ${\displaystyle \operatorname {E} [X_{-}]}$ are non-negative and possibly infinite. The following scenarios are possible: • ${\displaystyle \operatorname {E} [X]}$ is finite, i.e. ${\displaystyle \max(\operatorname {E} [X_{+}],\operatorname {E} [X_{-}])<\infty ;}$ • ${\displaystyle \operatorname {E} [X]}$ is infinite, i.e. ${\displaystyle \max(\operatorname {E} [X_{+}],\operatorname {E} [X_{-}])=\infty }$ and ${\displaystyle \min(\operatorname {E} [X_{+}],\operatorname {E} [X_{-}])<\infty ;}$ • ${\displaystyle \operatorname {E} [X]}$ is neither finite nor infinite, i.e. ${\displaystyle \operatorname {E} [X_{+}]=\operatorname {E} [X_{-}]=\infty .}$ Remark 2. If ${\displaystyle F_{X}(x)=\operatorname {P} (X\leq x)}$ is the cumulative distribution function of ${\displaystyle X}$, then ${\displaystyle \operatorname {E} [X]=\int _{-\infty }^{+\infty }x\,dF_{X}(x),}$ where the integral is interpreted in the sense of Lebesgue–Stieltjes. Remark 3. An example of a distribution for which there is no expected value is Cauchy distribution. Remark 4. For multidimensional random variables, their expected value is defined per component, i.e. ${\displaystyle \operatorname {E} [(X_{1},\ldots ,X_{n})]=(\operatorname {E} [X_{1}],\ldots ,\operatorname {E} [X_{n}])}$ and, for a random matrix ${\displaystyle X}$ with elements ${\displaystyle X_{ij}}$, ${\displaystyle (\operatorname {E} [X])_{ij}=\operatorname {E} [X_{ij}].}$ Basic properties The properties below replicate or follow immediately from those of Lebesgue integral. ${\displaystyle \operatorname {E} [{\mathbf {1} }_{A}]=\operatorname {P} (A)}$ If ${\displaystyle A}$ is an event, then ${\displaystyle \operatorname {E} [{\mathbf {1} }_{A}]=\operatorname {P} (A),}$ where ${\displaystyle {\mathbf {1} }_{A}}$ is the indicator function of the set ${\displaystyle A}$. Proof. 
By definition of Lebesgue integral of the simple function ${\displaystyle {\mathbf {1} }_{A}={\mathbf {1} }_{A}(\omega )}$, ${\displaystyle \operatorname {E} [{\mathbf {1} }_{A}]=1\cdot \operatorname {P} (A)+0\cdot \operatorname {P} (\Omega \setminus A)=\operatorname {P} (A).}$ If X = Y (a.s.) then E[X] = E[Y] The statement follows from the definition of Lebesgue integral (${\displaystyle X_{+}=Y_{+}}$ (a.s.), ${\displaystyle X_{-}=Y_{-}}$ (a.s.)), and that changing a simple random variable on a set of probability zero does not alter the expected value. Expected value of a constant If ${\displaystyle X}$ is a random variable, and ${\displaystyle X=c}$ (a.s.), where ${\displaystyle c\in [-\infty ,+\infty ]}$, then ${\displaystyle \operatorname {E} [X]=c}$. In particular, for an arbitrary random variable ${\displaystyle X}$, ${\displaystyle \operatorname {E} [\operatorname {E} [X]]=\operatorname {E} [X]}$. Linearity The expected value operator (or expectation operator) ${\displaystyle \operatorname {E} [\cdot ]}$ is linear in the sense that {\displaystyle {\begin{aligned}\operatorname {E} [X+Y]&=\operatorname {E} [X]+\operatorname {E} [Y],\\[6pt]\operatorname {E} [aX]&=a\operatorname {E} [X],\end{aligned}}} where ${\displaystyle X}$ and ${\displaystyle Y}$ are arbitrary random variables, and ${\displaystyle a}$ is a constant. More rigorously, let ${\displaystyle X}$ and ${\displaystyle Y}$ be random variables whose expected values are defined (different from ${\displaystyle \infty -\infty }$). • If ${\displaystyle \operatorname {E} [X]+\operatorname {E} [Y]}$ is also defined (i.e. differs from ${\displaystyle \infty -\infty }$), then ${\displaystyle \operatorname {E} [X+Y]=\operatorname {E} [X]+\operatorname {E} [Y].}$ • Let ${\displaystyle \operatorname {E} [X]}$ be finite, and ${\displaystyle a\in \mathbb {R} }$ be a finite scalar. Then ${\displaystyle \operatorname {E} [aX]=a\operatorname {E} [X].}$ E[X] exists and is finite if and only if E[|X|] is finite The following statements regarding a random variable ${\displaystyle X}$ are equivalent: • ${\displaystyle \operatorname {E} [X]}$ exists and is finite. • Both ${\displaystyle \operatorname {E} [X_{+}]}$ and ${\displaystyle \operatorname {E} [X_{-}]}$ are finite. • ${\displaystyle \operatorname {E} [|X|]}$ is finite. Sketch of proof. Indeed, ${\displaystyle |X|=X_{+}+X_{-}}$. By linearity, ${\displaystyle \operatorname {E} [|X|]=\operatorname {E} [X_{+}]+\operatorname {E} [X_{-}]}$. The above equivalency relies on the definition of Lebesgue integral and measurability of ${\displaystyle X}$. Remark. For the reasons above, the expressions "${\displaystyle X}$ is integrable" and "the expected value of ${\displaystyle X}$ is finite" are used interchangeably when speaking of a random variable throughout this article. Monotonicity If ${\displaystyle X\leq Y}$ (a.s.), and both ${\displaystyle \operatorname {E} [X]}$ and ${\displaystyle \operatorname {E} [Y]}$ exist, then ${\displaystyle \operatorname {E} [X]\leq \operatorname {E} [Y]}$. Remark. ${\displaystyle \operatorname {E} [X]}$ and ${\displaystyle \operatorname {E} [Y]}$ exist in the sense that ${\displaystyle \min(\operatorname {E} [X_{+}],\operatorname {E} [X_{-}])<\infty }$ and ${\displaystyle \min(\operatorname {E} [Y_{+}],\operatorname {E} [Y_{-}])<\infty .}$ Proof follows from the linearity and the previous property for ${\displaystyle Z=Y-X}$, since ${\displaystyle Z\geq 0}$ (a.s.). If ${\displaystyle |X|\leq Y}$ (a.s.) 
and ${\displaystyle \operatorname {E} [Y]}$ is finite then so is ${\displaystyle \operatorname {E} [X]}$ Let ${\displaystyle X}$ and ${\displaystyle Y}$ be random variables such that ${\displaystyle |X|\leq Y}$ (a.s.) and ${\displaystyle \operatorname {E} [Y]<\infty }$. Then ${\displaystyle \operatorname {E} [X]\neq \pm \infty }$. Proof. Due to non-negativity of ${\displaystyle |X|}$, ${\displaystyle \operatorname {E} |X|}$ exists, finite or infinite. By monotonicity, ${\displaystyle \operatorname {E} |X|\leq \operatorname {E} [Y]<\infty }$, so ${\displaystyle \operatorname {E} |X|}$ is finite which, as we saw earlier, is equivalent to ${\displaystyle \operatorname {E} [X]}$ being finite. If ${\displaystyle \operatorname {E} |X^{\beta }|<\infty }$ and ${\displaystyle 0<\alpha <\beta }$ then ${\displaystyle \operatorname {E} |X^{\alpha }|<\infty }$ The proposition below will be used to prove the extremal property of ${\displaystyle \operatorname {E} [X]}$ later on. Proposition. If ${\displaystyle X}$ is a random variable, then so is ${\displaystyle X^{\alpha }}$, for every ${\displaystyle \alpha >0}$. If, in addition, ${\displaystyle \operatorname {E} |X^{\beta }|<\infty }$ and ${\displaystyle 0<\alpha <\beta }$, then ${\displaystyle \operatorname {E} |X^{\alpha }|<\infty }$. Counterexample for infinite measure The requirement that ${\displaystyle \operatorname {P} (\Omega )<\infty }$ is essential. By way of counterexample, consider the measurable space ${\displaystyle ([1,+\infty ),{\mathcal {B}}_{\mathbb {R} _{[1,+\infty )}},\lambda ),}$ where ${\displaystyle {\mathcal {B}}_{\mathbb {R} _{[1,+\infty )}}}$ is the Borel ${\displaystyle \sigma }$-algebra on the interval ${\displaystyle [1,+\infty ),}$ and ${\displaystyle \lambda }$ is the linear Lebesgue measure. The reader can prove that ${\displaystyle \textstyle \int _{[1,+\infty )}{\frac {1}{x}}\,dx=\infty ,}$ even though ${\displaystyle \textstyle \int _{[1,+\infty )}{\frac {1}{x^{2}}}\,dx=1.}$ (Sketch of proof: ${\displaystyle \textstyle \int _{S}{\frac {1}{x}}\,dx}$ and ${\displaystyle \textstyle \int _{S}{\frac {1}{x^{2}}}\,dx}$ define a measure ${\displaystyle \mu }$ on ${\displaystyle \textstyle [1,+\infty )=\cup _{n=1}^{\infty }[1,n].}$ Use "continuity from below" w.r. to ${\displaystyle \mu }$ and reduce to Riemann integral on each finite subinterval ${\displaystyle [1,n]}$). Extremal property Recall, as we proved early on, that if ${\displaystyle X}$ is a random variable, then so is ${\displaystyle X^{2}}$. Proposition (extremal property of ${\displaystyle \operatorname {E} [X])}$). Let ${\displaystyle X}$ be a random variable, and ${\displaystyle \operatorname {E} [X^{2}]<\infty }$. Then ${\displaystyle \operatorname {E} [X]}$ and ${\displaystyle \operatorname {Var} [X]}$ are finite, and ${\displaystyle \operatorname {E} [X]}$ is the best least squares approximation for ${\displaystyle X}$ among constants. Specifically, • for every ${\displaystyle c\in \mathbb {R} }$, ${\displaystyle \textstyle \operatorname {E} [X-c]^{2}\geq \operatorname {Var} [X];}$ • equality holds if and only if ${\displaystyle c=\operatorname {E} [X].}$ (${\displaystyle \operatorname {Var} [X]}$ denotes the variance of ${\displaystyle X}$). Remark (intuitive interpretation of extremal property). 
In intuitive terms, the extremal property says that if one is asked to predict the outcome of a trial of a random variable ${\displaystyle X}$, then ${\displaystyle \operatorname {E} [X]}$, in some practically useful sense, is one's best bet if no advance information about the outcome is available. If, on the other hand, one does have some advance knowledge ${\displaystyle {\mathcal {F}}}$ regarding the outcome, then — again, in some practically useful sense — one's bet may be improved upon by using conditional expectations ${\displaystyle \operatorname {E} [X\mid {\mathcal {F}}]}$ (of which ${\displaystyle \operatorname {E} [X]}$ is a special case) rather than ${\displaystyle \operatorname {E} [X]}$. Proof of proposition. By the above properties, both ${\displaystyle \operatorname {E} [X]}$ and ${\displaystyle \operatorname {Var} [X]=\operatorname {E} [X^{2}]-\operatorname {E} ^{2}[X]}$ are finite, and {\displaystyle {\begin{aligned}\operatorname {E} [X-c]^{2}&=\operatorname {E} [X^{2}-2cX+c^{2}]\\[6pt]&=\operatorname {E} [X^{2}]-2c\operatorname {E} [X]+c^{2}\\[6pt]&=(c-\operatorname {E} [X])^{2}+\operatorname {E} [X^{2}]-\operatorname {E} ^{2}[X]\\[6pt]&=(c-\operatorname {E} [X])^{2}+\operatorname {Var} [X],\end{aligned}}} whence the extremal property follows. Non-degeneracy If ${\displaystyle \operatorname {E} |X|=0}$, then ${\displaystyle X=0}$ (a.s.). ${\displaystyle |\operatorname {E} [X]|\leq \operatorname {E} |X|}$ For an arbitrary random variable ${\displaystyle X}$, ${\displaystyle |\operatorname {E} [X]|\leq \operatorname {E} |X|}$. Proof. By definition of Lebesgue integral, {\displaystyle {\begin{aligned}|\operatorname {E} [X]|&={\Bigl |}\operatorname {E} [X_{+}]-\operatorname {E} [X_{-}]{\Bigr |}\leq {\Bigl |}\operatorname {E} [X_{+}]{\Bigr |}+{\Bigl |}\operatorname {E} [X_{-}]{\Bigr |}\\[5pt]&=\operatorname {E} [X_{+}]+\operatorname {E} [X_{-}]=\operatorname {E} [X_{+}+X_{-}]\\[5pt]&=\operatorname {E} |X|.\end{aligned}}} This result can also be proved based on Jensen's inequality. Non-multiplicativity In general, the expected value operator is not multiplicative, i.e. ${\displaystyle \operatorname {E} [XY]}$ is not necessarily equal to ${\displaystyle \operatorname {E} [X]\cdot \operatorname {E} [Y]}$. Indeed, let ${\displaystyle X}$ assume the values of 1 and -1 with probability 0.5 each. Then ${\displaystyle \operatorname {E^{2}} [X]=\left({\frac {1}{2}}\cdot (-1)+{\frac {1}{2}}\cdot 1\right)^{2}=0,}$ and ${\displaystyle \operatorname {E} [X^{2}]={\frac {1}{2}}\cdot (-1)^{2}+{\frac {1}{2}}\cdot 1^{2}=1,{\text{ so }}\operatorname {E} [X^{2}]\neq \operatorname {E^{2}} [X].}$ The amount by which the multiplicativity fails is called the covariance: ${\displaystyle \operatorname {Cov} (X,Y)=\operatorname {E} [XY]-\operatorname {E} [X]\operatorname {E} [Y].}$ However, if ${\displaystyle X}$ and ${\displaystyle Y}$ are independent, then ${\displaystyle \operatorname {E} [XY]=\operatorname {E} [X]\operatorname {E} [Y]}$, and ${\displaystyle \operatorname {Cov} (X,Y)=0}$. Counterexample: ${\displaystyle \operatorname {E} [X_{i}]\not \to \operatorname {E} [X]}$ despite ${\displaystyle X_{i}\to X}$ pointwise Let ${\displaystyle \left([0,1],{\mathcal {B}}_{[0,1]},{\mathrm {P} }\right)}$ be the probability space, where ${\displaystyle {\mathcal {B}}_{[0,1]}}$ is the Borel ${\displaystyle \sigma }$-algebra on ${\displaystyle [0,1]}$ and ${\displaystyle {\mathrm {P} }}$ the linear Lebesgue measure. 
For ${\displaystyle i\geq 1,}$ define a sequence of random variables ${\displaystyle X_{i}=i\cdot {\mathbf {1} }_{\left[0,{\frac {1}{i}}\right]}}$ and a random variable ${\displaystyle X={\begin{cases}+\infty &{\text{if}}\ x=0\\0&{\text{otherwise.}}\end{cases}}}$ on ${\displaystyle [0,1]}$, with ${\displaystyle {\mathbf {1} }_{S}}$ being the indicator function of the set ${\displaystyle S\subseteq [0,1]}$. For every ${\displaystyle x\in [0,1],}$ as ${\displaystyle i\to +\infty ,}$ ${\displaystyle X_{i}(x)\to X(x),}$ and ${\displaystyle \operatorname {E} [X_{i}]=i\cdot {\mathrm {P} }\left(\left[0,{\frac {1}{i}}\right]\right)=i\cdot {\dfrac {1}{i}}=1,}$ so ${\displaystyle \lim _{i\to \infty }\operatorname {E} [X_{i}]=1.}$ On the other hand, ${\displaystyle \mathop {\mathrm {P} } (\{0\})=0,}$ and hence ${\displaystyle \operatorname {E} \left[X\right]=0.}$ In general, the expected value operator is not ${\displaystyle \sigma }$-additive, i.e. ${\displaystyle \operatorname {E} \left[\sum _{i=0}^{\infty }X_{i}\right]\neq \sum _{i=0}^{\infty }\operatorname {E} [X_{i}].}$ By way of counterexample, let ${\displaystyle \left([0,1],{\mathcal {B}}_{[0,1]},{\mathrm {P} }\right)}$ be the probability space, where ${\displaystyle {\mathcal {B}}_{[0,1]}}$ is the Borel ${\displaystyle \sigma }$-algebra on ${\displaystyle [0,1]}$ and ${\displaystyle {\mathrm {P} }}$ the linear Lebesgue measure. Define a sequence of random variables ${\displaystyle \textstyle X_{i}=(i+1)\cdot {\mathbf {1} }_{\left[0,{\frac {1}{i+1}}\right]}-i\cdot {\mathbf {1} }_{\left[0,{\frac {1}{i}}\right]}}$ on ${\displaystyle [0,1]}$, with ${\displaystyle {\mathbf {1} }_{S}}$ being the indicator function of the set ${\displaystyle S\subseteq [0,1]}$. For the pointwise sums, we have ${\displaystyle \sum _{i=0}^{n}X_{i}=(n+1)\cdot {\mathbf {1} }_{\left[0,{\frac {1}{n+1}}\right]},}$ ${\displaystyle \sum _{i=0}^{\infty }X_{i}(x)={\begin{cases}+\infty &{\text{if}}\ x=0\\0&{\text{otherwise.}}\end{cases}}}$ ${\displaystyle \sum _{i=0}^{\infty }\operatorname {E} [X_{i}]=\lim _{n\to \infty }\sum _{i=0}^{n}\operatorname {E} [X_{i}]=\lim _{n\to \infty }\operatorname {E} \left[\sum _{i=0}^{n}X_{i}\right]=1.}$ On the other hand, ${\displaystyle \mathop {\mathrm {P} } (\{0\})=0,}$ and hence ${\displaystyle \operatorname {E} \left[\sum _{i=0}^{\infty }X_{i}\right]=0\neq 1=\sum _{i=0}^{\infty }\operatorname {E} [X_{i}].}$ Countable additivity for non-negative random variables Let ${\displaystyle \{X_{i}\}_{i=0}^{\infty }}$ be non-negative random variables. It follows from monotone convergence theorem that ${\displaystyle \operatorname {E} \left[\sum _{i=0}^{\infty }X_{i}\right]=\sum _{i=0}^{\infty }\operatorname {E} [X_{i}].}$ Inequalities Cauchy–Bunyakovsky–Schwarz inequality The Cauchy–Bunyakovsky–Schwarz inequality states that ${\displaystyle (\operatorname {E} [XY])^{2}\leq \operatorname {E} [X^{2}]\cdot \operatorname {E} [Y^{2}].}$ Markov's inequality For a nonnegative random variable ${\displaystyle X}$ and ${\displaystyle a>0}$, Markov's inequality states that ${\displaystyle \operatorname {P} (X\geq a)\leq {\frac {\operatorname {E} [X]}{a}}.}$ Bienaymé-Chebyshev inequality Let ${\displaystyle X}$ be an arbitrary random variable with finite expected value ${\displaystyle \operatorname {E} [X]}$ and finite variance ${\displaystyle \operatorname {Var} [X]\neq 0}$. 
Bienaymé-Chebyshev inequality

Let ${\displaystyle X}$ be an arbitrary random variable with finite expected value ${\displaystyle \operatorname {E} [X]}$ and finite variance ${\displaystyle \operatorname {Var} [X]\neq 0}$. The Bienaymé-Chebyshev inequality states that, for any real number ${\displaystyle k>0}$, ${\displaystyle \operatorname {P} {\Bigl (}{\Bigl |}X-\operatorname {E} [X]{\Bigr |}\geq k{\sqrt {\operatorname {Var} [X]}}{\Bigr )}\leq {\frac {1}{k^{2}}}.}$

Jensen's inequality

Let ${\displaystyle f:{\mathbb {R} }\to {\mathbb {R} }}$ be a Borel convex function and ${\displaystyle X}$ a random variable such that ${\displaystyle \operatorname {E} |X|<\infty }$. Jensen's inequality states that ${\displaystyle f(\operatorname {E} (X))\leq \operatorname {E} (f(X)).}$

Remark 1. The expected value ${\displaystyle \operatorname {E} (f(X))}$ is well-defined even if ${\displaystyle X}$ is allowed to assume infinite values. Indeed, ${\displaystyle \operatorname {E} |X|<\infty }$ implies that ${\displaystyle X\neq \pm \infty }$ (a.s.), so the random variable ${\displaystyle f(X(\omega ))}$ is defined almost surely, and therefore there is enough information to compute ${\displaystyle \operatorname {E} (f(X)).}$

Remark 2. Jensen's inequality implies that ${\displaystyle |\operatorname {E} [X]|\leq \operatorname {E} |X|}$ since the absolute value function is convex.

Lyapunov's inequality

Let ${\displaystyle 0<s<t}$. Lyapunov's inequality states that ${\displaystyle {\Bigl (}\operatorname {E} |X|^{s}{\Bigr )}^{1/s}\leq \left(\operatorname {E} |X|^{t}\right)^{1/t}.}$

Proof. Applying Jensen's inequality to ${\displaystyle |X|^{s}}$ and ${\displaystyle g(x)=|x|^{t/s}}$, obtain ${\displaystyle {\Bigl (}\operatorname {E} |X|^{s}{\Bigr )}^{t/s}\leq \operatorname {E} {\Bigl [}{\bigl (}|X|^{s}{\bigr )}^{t/s}{\Bigr ]}=\operatorname {E} |X|^{t}}$. Taking the ${\displaystyle t}$th root of each side completes the proof.

Corollary. ${\displaystyle \operatorname {E} |X|\leq {\Bigl (}\operatorname {E} |X|^{2}{\Bigr )}^{1/2}\leq \cdots \leq {\Bigl (}\operatorname {E} |X|^{n}{\Bigr )}^{1/n}\leq \cdots }$

Hölder's inequality

Let ${\displaystyle p}$ and ${\displaystyle q}$ satisfy ${\displaystyle 1\leq p\leq \infty }$, ${\displaystyle 1\leq q\leq \infty }$, and ${\displaystyle 1/p+1/q=1}$. Hölder's inequality states that ${\displaystyle \operatorname {E} |XY|\leq (\operatorname {E} |X|^{p})^{1/p}(\operatorname {E} |Y|^{q})^{1/q}.}$

Minkowski inequality

Let ${\displaystyle p}$ be a real number satisfying ${\displaystyle 1\leq p\leq \infty }$. Let, in addition, ${\displaystyle \operatorname {E} |X|^{p}<\infty }$ and ${\displaystyle \operatorname {E} |Y|^{p}<\infty }$. Then, according to the Minkowski inequality, ${\displaystyle \operatorname {E} |X+Y|^{p}<\infty }$ and ${\displaystyle {\Bigl (}\operatorname {E} |X+Y|^{p}{\Bigr )}^{1/p}\leq {\Bigl (}\operatorname {E} |X|^{p}{\Bigr )}^{1/p}+{\Bigl (}\operatorname {E} |Y|^{p}{\Bigr )}^{1/p}.}$
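The inequalities above lend themselves to quick empirical checks. The following Python sketch (added for illustration, not part of the article; NumPy and the lognormal sample are arbitrary choices) checks Jensen's inequality for the convex function $e^{x}$ and the monotonicity of moment norms asserted by Lyapunov's inequality.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=0.5, size=500_000)

# Jensen: f(E[X]) <= E[f(X)] for convex f, here f(x) = exp(x).
print("f(E[X]) =", np.exp(x.mean()), "<=", "E[f(X)] =", np.mean(np.exp(x)))

# Lyapunov: (E|X|^s)^(1/s) is nondecreasing in s.
for s in (0.5, 1.0, 2.0, 3.0):
    print(f"s={s}: (E|X|^s)^(1/s) =", np.mean(np.abs(x) ** s) ** (1.0 / s))
```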
Taking limits under the ${\displaystyle \operatorname {E} }$ sign

Monotone convergence theorem

Let the sequence of random variables ${\displaystyle \{X_{n}\}}$ and the random variables ${\displaystyle X}$ and ${\displaystyle Y}$ be defined on the same probability space ${\displaystyle (\Omega ,\Sigma ,\operatorname {P} ).}$ Suppose that

• all the expected values ${\displaystyle \operatorname {E} [X_{n}],}$ ${\displaystyle \operatorname {E} [X],}$ and ${\displaystyle \operatorname {E} [Y]}$ are defined (differ from ${\displaystyle \infty -\infty }$);
• ${\displaystyle \operatorname {E} [Y]>-\infty ;}$
• for every ${\displaystyle n,}$ ${\displaystyle -\infty \leq Y\leq X_{n}\leq X_{n+1}\leq +\infty \quad {\hbox{(a.s.)}};}$
• ${\displaystyle X}$ is the pointwise limit of ${\displaystyle \{X_{n}\}}$ (a.s.), i.e. ${\displaystyle X(\omega )=\lim \nolimits _{n}X_{n}(\omega )}$ (a.s.).

The monotone convergence theorem states that ${\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [X].}$

Fatou's lemma

Let the sequence of random variables ${\displaystyle \{X_{n}\}}$ and the random variable ${\displaystyle Y}$ be defined on the same probability space ${\displaystyle (\Omega ,\Sigma ,\operatorname {P} ).}$ Suppose that

• all the expected values ${\displaystyle \operatorname {E} [X_{n}],}$ ${\displaystyle \textstyle \operatorname {E} [\liminf _{n}X_{n}],}$ and ${\displaystyle \operatorname {E} [Y]}$ are defined (differ from ${\displaystyle \infty -\infty }$);
• ${\displaystyle \operatorname {E} [Y]>-\infty ;}$
• ${\displaystyle -\infty \leq Y\leq X_{n}\leq +\infty }$ (a.s.), for every ${\displaystyle n.}$

Fatou's lemma states that ${\displaystyle \operatorname {E} [\liminf _{n}X_{n}]\leq \liminf _{n}\operatorname {E} [X_{n}].}$ (${\displaystyle \textstyle \liminf _{n}X_{n}}$ is a random variable by the properties of limit inferior.)

Corollary. Let

• ${\displaystyle X_{n}\to X}$ pointwise (a.s.);
• ${\displaystyle \operatorname {E} [X_{n}]\leq C,}$ for some constant ${\displaystyle C}$ (independent from ${\displaystyle n}$);
• ${\displaystyle \operatorname {E} [Y]>-\infty ;}$
• ${\displaystyle -\infty \leq Y\leq X_{n}\leq +\infty }$ (a.s.), for every ${\displaystyle n.}$

Then ${\displaystyle \operatorname {E} [X]\leq C.}$ The proof is by observing that ${\displaystyle \textstyle X=\liminf _{n}X_{n}}$ (a.s.) and applying Fatou's lemma.

Dominated convergence theorem

Let ${\displaystyle \{X_{n}\}_{n}}$ be a sequence of random variables. If ${\displaystyle X_{n}\to X}$ pointwise (a.s.), ${\displaystyle |X_{n}|\leq Y\leq +\infty }$ (a.s.), and ${\displaystyle \operatorname {E} [Y]<\infty }$, then, according to the dominated convergence theorem,

• the function ${\displaystyle X}$ is measurable (hence a random variable);
• ${\displaystyle \operatorname {E} |X|<\infty }$;
• all the expected values ${\displaystyle \operatorname {E} [X_{n}]}$ and ${\displaystyle \operatorname {E} [X]}$ are defined (do not have the form ${\displaystyle \infty -\infty }$);
• ${\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [X]}$;
• ${\displaystyle \lim _{n}\operatorname {E} |X_{n}-X|=0.}$

Uniform integrability

In some cases, the equality ${\displaystyle \displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [\lim _{n}X_{n}]}$ holds when the sequence ${\displaystyle \{X_{n}\}}$ is uniformly integrable.
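The role of the dominating variable $Y$ can be seen numerically by revisiting the earlier counterexample. This rough Python sketch (added for illustration, not from the article; averaging over a fine grid is a crude stand-in for the Lebesgue integral on $[0,1]$) shows an undominated sequence whose expectations stay at 1 even though the pointwise limit has expectation 0, next to a dominated sequence whose expectations converge as the theorem predicts.

```python
import numpy as np

# Crude discretization of ([0,1], Lebesgue measure): average over a fine grid.
grid = np.linspace(0.0, 1.0, 1_000_001)[1:]  # drop x = 0, a null set

def expectation(f):
    return f(grid).mean()

for n in (10, 100, 1000):
    # Undominated: X_n = n * 1_{[0, 1/n]}; E[X_n] = 1 for every n, but X_n -> 0 a.e.
    undominated = expectation(lambda x: n * (x <= 1.0 / n))
    # Dominated:   X_n = x^(1 + 1/n) <= x, so Y = x works and E[X_n] -> E[x] = 1/2.
    dominated = expectation(lambda x: x ** (1.0 + 1.0 / n))
    print(f"n={n}: E[undominated X_n] = {undominated:.3f}, E[dominated X_n] = {dominated:.3f}")
```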
Relationship with characteristic function

The probability density function ${\displaystyle f_{X}}$ of a scalar random variable ${\displaystyle X}$ is related to its characteristic function ${\displaystyle \varphi _{X}}$ by the inversion formula: ${\displaystyle f_{X}(x)={\frac {1}{2\pi }}\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt.}$

For the expected value of ${\displaystyle g(X)}$ (where ${\displaystyle g:{\mathbb {R} }\to {\mathbb {R} }}$ is a Borel function), we can use this inversion formula to obtain ${\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }g(x)\left[\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt\right]\,dx.}$ If ${\displaystyle \operatorname {E} [g(X)]}$ is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem, ${\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }G(t)\varphi _{X}(t)\,dt,}$ where ${\displaystyle G(t)=\int _{\mathbb {R} }g(x)e^{-itx}\,dx}$ is the Fourier transform of ${\displaystyle g(x).}$ The expression for ${\displaystyle \operatorname {E} [g(X)]}$ also follows directly from the Plancherel theorem.

Uses and applications

It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.

The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. ${\displaystyle \operatorname {P} ({X\in {\mathcal {A}}})=\operatorname {E} [{\mathbf {1} }_{\mathcal {A}}]}$, where ${\displaystyle {\mathbf {1} }_{\mathcal {A}}}$ is the indicator function of the set ${\displaystyle {\mathcal {A}}}$.

(Figure caption: The mass of probability distribution is balanced at the expected value, here a Beta(α,β) distribution with expected value α/(α+β).)

In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].
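The identity $\operatorname{P}(X\in{\mathcal A})=\operatorname{E}[{\mathbf 1}_{\mathcal A}]$ above is the basis of the plainest Monte Carlo estimator. A minimal Python sketch follows (added here, not part of the article; SciPy, the standard normal and the event $[1,2]$ are arbitrary choices): averaging the indicator over i.i.d. draws estimates the probability, and the estimate tightens as the sample grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Event A = [1, 2] for a standard normal X; exact P(X in A) from the CDF.
exact = norm.cdf(2.0) - norm.cdf(1.0)

for n in (100, 10_000, 1_000_000):
    x = rng.standard_normal(n)
    estimate = np.mean((x >= 1.0) & (x <= 2.0))  # E[1_A] estimated by a sample mean
    print(f"n={n}: estimate={estimate:.4f}, exact={exact:.4f}")
```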
Expected values can also be used to compute the variance, by means of the computational formula for the variance ${\displaystyle \operatorname {Var} (X)=\operatorname {E} [X^{2}]-(\operatorname {E} [X])^{2}.}$

A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator ${\displaystyle {\hat {A}}}$ operating on a quantum state vector ${\displaystyle |\psi \rangle }$ is written as ${\displaystyle \langle {\hat {A}}\rangle =\langle \psi |{\hat {A}}|\psi \rangle }$. The uncertainty in ${\displaystyle {\hat {A}}}$ can be calculated using the formula ${\displaystyle (\Delta A)^{2}=\langle {\hat {A}}^{2}\rangle -\langle {\hat {A}}\rangle ^{2}}$.

The law of the unconscious statistician

The expected value of a measurable function of ${\displaystyle X}$, ${\displaystyle g(X)}$, given that ${\displaystyle X}$ has a probability density function ${\displaystyle f(x)}$, is given by the inner product of ${\displaystyle f}$ and ${\displaystyle g}$: ${\displaystyle \operatorname {E} [g(X)]=\int _{\mathbb {R} }g(x)f(x)\,dx.}$ This formula also holds in the multidimensional case, when ${\displaystyle g}$ is a function of several random variables, and ${\displaystyle f}$ is their joint density.[5][6]

Alternative formula for expected value

Formula for non-negative random variables

Finite and countably infinite case. For a non-negative integer-valued random variable ${\displaystyle X:\Omega \to \{0,1,2,3,\ldots \}\cup \{+\infty \},}$ ${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }\operatorname {P} (X\geq i).}$

General case. If ${\displaystyle X:\Omega \to [0,+\infty ]}$ is a non-negative random variable, then ${\displaystyle \operatorname {E} [X]=\int \limits _{[0,+\infty )}\operatorname {P} (X\geq x)\,dx=\int \limits _{[0,+\infty )}\operatorname {P} (X>x)\,dx,}$ and ${\displaystyle \operatorname {E} [X]={\hbox{(R)}}\int \limits _{0}^{\infty }\operatorname {P} (X\geq x)\,dx={\hbox{(R)}}\int \limits _{0}^{\infty }\operatorname {P} (X>x)\,dx,}$ where ${\displaystyle {\hbox{(R)}}\textstyle \int _{0}^{\infty }}$ denotes the improper Riemann integral.

Formula for non-positive random variables

If ${\displaystyle X:\Omega \to [-\infty ,0]}$ is a non-positive random variable, then ${\displaystyle \operatorname {E} [X]=-\int \limits _{(-\infty ,0]}\operatorname {P} (X\leq x)\,dx=-\int \limits _{(-\infty ,0]}\operatorname {P} (X<x)\,dx,}$ and ${\displaystyle \operatorname {E} [X]=-{\hbox{(R)}}\int \limits _{-\infty }^{0}\operatorname {P} (X\leq x)\,dx=-{\hbox{(R)}}\int \limits _{-\infty }^{0}\operatorname {P} (X<x)\,dx,}$ where ${\displaystyle {\hbox{(R)}}\textstyle \int _{-\infty }^{0}}$ denotes the improper Riemann integral. This formula follows from that for the non-negative case applied to ${\displaystyle -X.}$ If, in addition, ${\displaystyle X}$ is integer-valued, i.e. ${\displaystyle X:\Omega \to \{\ldots ,-3,-2,-1,0\}\cup \{-\infty \}}$, then ${\displaystyle \operatorname {E} [X]=-\sum _{i=-1}^{-\infty }\operatorname {P} (X\leq i).}$

General case

If ${\displaystyle X}$ can be both positive and negative, then ${\displaystyle \operatorname {E} [X]=\operatorname {E} [X_{+}]-\operatorname {E} [X_{-}]}$, and the above results may be applied to ${\displaystyle X_{+}}$ and ${\displaystyle X_{-}}$ separately.
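The tail-sum formula for non-negative integer-valued variables is easy to verify numerically. A minimal Python sketch (added for illustration, not from the article; SciPy, the Poisson distribution and the truncation point of the infinite sum are arbitrary choices):

```python
from scipy.stats import poisson

lam = 3.7
direct = lam  # E[X] for X ~ Poisson(lam)

# Tail-sum formula E[X] = sum_{i >= 1} P(X >= i), truncated where the tail is negligible.
tail_sum = sum(1.0 - poisson.cdf(i - 1, lam) for i in range(1, 200))

print(direct, tail_sum)  # both ~3.7
```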
History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it is properly finished. This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by the French writer and amateur mathematician Chevalier de Méré. Méré claimed that this problem couldn't be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced they had solved the problem conclusively. However, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[7]

Three years later, in 1657, the Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory. In this book he considered the problem of points and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense this book can be seen as the first successful attempt at laying down the foundations of the theory of probability.

In the foreword to his book, Huygens wrote: "It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs." (cited by Edwards (2002)).

Thus, Huygens learned about de Méré's problem in 1655 during his visit to France; later on, in 1656, from his correspondence with Carcavi, he learned that his method was essentially the same as Pascal's; so that before his book went to press in 1657 he knew about Pascal's priority in this subject.

Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: "That my Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure me in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal Chance of gaining them, my Expectation is worth a+b/2."
More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:

… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.

The use of the letter E to denote expected value goes back to W.A. Whitworth in 1901,[8] who used a script E. The symbol has become popular since for English writers it meant "Expectation", for Germans "Erwartungswert", for Spanish "Esperanza matemática" and for French "Espérance mathématique".[9]

References

1. ^ Sheldon M Ross (2007). "§2.4 Expectation of a random variable". Introduction to probability models (9th ed.). Academic Press. p. 38 ff. ISBN 0-12-598062-0.
2. ^ Richard W Hamming (1991). "§2.5 Random variables, mean and the expected value". The art of probability for scientists and engineers. Addison–Wesley. p. 64 ff. ISBN 0-201-40686-1.
3. ^ Richard W Hamming (1991). "Example 8.7–1 The Cauchy distribution". The art of probability for scientists and engineers. Addison–Wesley. p. 290 ff. ISBN 0-201-40686-1. Sampling from the Cauchy distribution and averaging gets you nowhere — one sample has the same distribution as the average of 1000 samples!
4. ^ Gordon, Lawrence; Loeb, Martin (November 2002). "The Economics of Information Security Investment". ACM Transactions on Information and System Security. 5 (4): 438–457. doi:10.1145/581271.581274.
5. ^ Expectation Value, retrieved August 8, 2017.
6. ^ Papoulis, A. (1984). Probability, Random Variables, and Stochastic Processes. New York: McGraw–Hill. pp. 139–152.
7. ^ Ore, Øystein (1960). "Pascal and the Invention of Probability Theory". The American Mathematical Monthly. 67 (5): 409–419. doi:10.2307/2309286.
8. ^ Whitworth, W.A. (1901). Choice and Chance with One Thousand Exercises. Fifth edition. Deighton Bell, Cambridge. [Reprinted by Hafner Publishing Co., New York, 1959.]
9. ^

Literature

• Edwards, A.W.F. (2002). Pascal's arithmetical triangle: the story of a mathematical idea (2nd ed.). JHU Press. ISBN 0-8018-6946-3.
• Huygens, Christiaan (1657). De ratiociniis in ludo aleæ (English translation, published in 1714:).
https://glossary.informs.org/ver2/mpgwiki/index.php?title=Semi-definite_program
# Semi-definite program

$\min \{cx : S(x) \in P\},$ where $P$ is the class of positive semi-definite matrices, and $S(x) = S_0 + \sum_{j} x(j)S_j,$ where each $S_j$, for $j = 0,\dots,n$, is a (given) symmetric matrix. This includes the linear program as a special case.
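As a concrete reading of the definition (a minimal sketch added here, not part of the glossary entry; NumPy and the specific matrices are arbitrary choices): the constraint $S(x) \in P$ says that the affine matrix pencil $S_0 + \sum_j x(j)S_j$ has no negative eigenvalues, which is easy to check for a candidate $x$; actually minimizing $cx$ over this feasible set requires an SDP solver.

```python
import numpy as np

# Data for a toy instance: S(x) = S0 + x1*S1 + x2*S2 must be positive semi-definite.
S0 = np.array([[2.0, 1.0], [1.0, 2.0]])
S1 = np.array([[1.0, 0.0], [0.0, 0.0]])
S2 = np.array([[0.0, 0.0], [0.0, 1.0]])
c = np.array([1.0, 1.0])

def S(x):
    return S0 + x[0] * S1 + x[1] * S2

def feasible(x, tol=1e-9):
    # S(x) is symmetric, so positive semi-definiteness == all eigenvalues >= 0.
    return np.linalg.eigvalsh(S(x)).min() >= -tol

for x in (np.array([0.0, 0.0]), np.array([-1.0, -1.0]), np.array([-3.0, 0.0])):
    print(x, "objective cx =", c @ x, "feasible:", feasible(x))
```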
https://mathoverflow.net/questions/301527/biased-vs-unbiased-lax-monoidal-categories
# Biased vs unbiased lax monoidal categories There are two principal ways to define a monoidal category: • The biased definition includes a unit object $I$, a binary tensor product $A\otimes B$, and a ternary associativity isomorphism $(A\otimes B)\otimes C\cong A\otimes (B\otimes C)$ and unit isomorphisms satisfying appropriate axioms. • The unbiased definition includes an $n$-ary tensor product $(A_1\otimes\cdots \otimes A_n)$ for all $n\ge 0$ (where $n=0$ gives the unit $I = ()$), with associativity isomorphisms such as $((A\otimes B) \otimes () \otimes (C)) \cong (A\otimes B\otimes C)$ satisfying appropriate axioms. The two definitions are equivalent in an appropriate sense (though this is a nontrivial coherence theorem). However, this is no longer true for "lax" kinds of monoidal category, where the associativity and unit isomorphisms are replaced by not-necessarily-invertible transformations. In the lax case, the unbiased definition seems to be more-studied, and is usually what people mean by a "lax monoidal category". There are good reasons for this, but "biased-lax" monoidal categories, and more general biased-lax structures, do occasionally pop up. In the unbiased case, there are only two consistent choices of direction for the transformations: $((A\otimes B) \otimes () \otimes (C)) \to (A\otimes B\otimes C)$ gives a lax monoidal category, while the opposite direction gives a colax one. In the biased case, there are more choices: in addition to choosing $(A\otimes B)\otimes C\to A\otimes (B\otimes C)$ or the opposite, we can choose how to orient the two unit morphisms: either $A \otimes I \to A$ or $A \to A\otimes I$, and also either $I\otimes A \to A$ or $A\to I\otimes A$. For instance, a skew monoidal category pairs $A\to I\otimes A$ with $A\otimes I\to A$. (Thanks Maxime for pointing this out in the comments.) In this question I am interested in biased-lax monoidal categories where the unit transformations go in the same direction, say $A\otimes I\to A$ and $I\otimes A\to A$. It seems that it should be possible to identify a biased-lax monoidal category of this sort with a particular kind of unbiased-lax one, by defining the $n$-ary tensor product in terms of the binary one by right-associativity: $(A_1\otimes\cdots \otimes A_n) = (A_1 \otimes (A_2 \otimes \cdots \otimes (A_{n-1}\otimes A_n)\cdots ))$ (or perhaps left associativity, depending on which direction the biased-lax associativity map goes). I have seen this claimed in print, and have even claimed it myself, but I have not seen a proof written out. So my questions are: 1. Has anyone studied biased-lax monoidal categories of this sort (or related structures like biased-lax bicategories, biased-lax monoids in a monoidal bicategory, etc.) in detail? 2. In particular, is there a better name for them? (The only reference I know of is the paper "$T$-categories" by Albert Burroni, who called "biased-colax" bicategories of this sort "pseudo-categories" — clearly not a good name in light of modern terminological conventions.) 3. (The main question) Has anyone written out a proof that biased-lax monoidal categories of this sort can be identified with certain unbiased-lax ones? 4. What is an intrinsic characterization of the unbiased-lax monoidal categories that arise in this way? (I expect they should be the ones such that certain of the associativity maps happen to be isomorphisms.) 
• Re 2: Skew monoidal categories are an example of biased-monoidal categories which seem to 'ping' a bit more when searching google, but it depends on how you want to orient the unit morphisms. – Maxime Lucas May 30 '18 at 10:05 • @MaximeLucas Thanks! I should have mentioned skew-monoidal categories and specified the direction of unit morphisms; I'll edit the question. I think both unit morphisms have to go in the same direction for my question (3) to be true, which is not the case for skew-monoidal categories. (The actual direction is then arbitrary, up to passage to opposite categories.) – Mike Shulman May 30 '18 at 14:45 • There seems to be an answer to questions 3 and 4 for skew monoidal categories in arxiv.org/abs/1708.06087: in that case, unbiased-(co)lax monoidal categories have to be replaced by (co)lax algebras over a slightly fancier Cat-operad, and there is a "certain of the associativity maps are identities" condition called being an "LBC-algebra" that characterizes the lax algebras arising from skew-monoidal categories in this way. – Mike Shulman May 31 '18 at 20:33 • That is right. You can restrict this to obtain an equivalence between left normal skew monoidal categories (those for which $l:ia \to a$ is invertible) and normal colax monoidal categories satisfying the LBC condition (when viewed as colax $L$-algebras). (The corresponding multicategory result was described in Theorem 6.3 of arxiv.org/abs/1708.06088) – john Jun 1 '18 at 7:47 • This special case about left normal skew monoidal categories would, I suspect, also be a special case of the result you are after about biased (co)lax monoidal categories. – john Jun 1 '18 at 7:54
https://mirror.git.trinitydesktop.org/cgit/kvkbd/tree/src/MainWidget.cpp?h=v3.5.13.1&id=d283a82e4dc5c41ff9a9c2137b572c3d4bff1a52
/***************************************************************************
 *   Copyright (C) 2007 by Todor Gyumyushev
 *   yodor@developer.bg
 *
 *   This program is free software; you can redistribute it and/or modify
 *   it under the terms of the GNU General Public License as published by
 *   the Free Software Foundation; either version 2 of the License, or
 *   (at your option) any later version.
 *
 *   This program is distributed in the hope that it will be useful,
 *   but WITHOUT ANY WARRANTY; without even the implied warranty of
 *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 *   GNU General Public License for more details.
 *
 *   You should have received a copy of the GNU General Public License
 *   along with this program; if not, write to the
 *   Free Software Foundation, Inc.,
 *   51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
* ***************************************************************************/ #include "MainWidget.h" #include "VButton.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #define R1LEN 13 #define R2LEN 10 #define R3LEN 9 #define R4LEN 7 bool shutting=false; MainWidget::MainWidget ( KAboutData *about, bool tren, TQWidget *parent, const char * name, WFlags f ) : ResizableDragWidget ( parent,name,f ), stand_alone(tren) { tray=0; nresize=false; display=qt_xdisplay(); //TQString k1= "`1234567890-="; //TQString k1s = "~!@#\$%^&*()_+"; unsigned int kc1[R1LEN] = {49,10,11,12,13,14,15,16,17,18,19,20,21}; //TQString k2= "qwertyuiop"; //TQString k2s = "TQWERTYUIOP"; unsigned int kc2[R2LEN] = {24,25,26,27,28,29,30,31,32,33}; //TQString k3= "asdfghjkl"; //;'"; //TQString k3s="ASDFGHJKL"; unsigned int kc3[R3LEN] = {38,39,40,41,42,43,44,45,46}; //,{47,48}; //TQString k4="zxcvbnm"; //,./"; //TQString k4s="ZXCVBNM"; unsigned int kc4[R4LEN] = {52,53,54,55,56,57,58};//59,60,61}; int stx=15; int sty=15; extent_visible=false; // resize ( 550,235 ); // move(0,0); VButton *esc = new VButton ( this,"" ); esc->setKeyCode ( 9 ); esc->move ( stx,sty ); esc->setText ( "Esc" ); esc->res(); other_keys.append(esc); connect ( esc,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); for ( int a=0;a<4;a++ ) { VButton *f = new VButton ( this,"" ); f->setKeyCode ( 67+a ); f->setText ( "F"+TQString ( "%1" ).arg ( a+1 ) ); f->move ( stx+esc->width() + ( 35*a ) +20,sty ); f->res(); other_keys.append(f); connect ( f,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); } for ( int a=0;a<4;a++ ) { VButton *f = new VButton ( this,"" ); f->setKeyCode ( 71+a ); f->setText ( "F"+TQString ( "%1" ).arg ( a+5 ) ); f->move ( stx+esc->width() + ( 35*a ) +40+ ( 4*35 ),sty ); f->res(); other_keys.append(f); connect ( f,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); } for ( int a=0;a<4;a++ ) { VButton *f = new VButton ( this,"" ); f->setKeyCode ( 75+a ); if ( a>1 ) f->setKeyCode ( 93+a ); f->setText ( "F"+TQString ( "%1" ).arg ( a+9 ) ); f->move ( stx+esc->width() + ( 35*a ) +45+ ( 8*35 ) +10,sty ); f->res(); other_keys.append(f); connect ( f,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); } //ROW 1 for ( int a=0;asetKeyCode ( kc1[a] ); v->move ( stx+ ( 35*a ),sty+35 ); connect ( v,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( v ); v->res(); //caps_btns.append ( v ); } VButton *bksp = new VButton ( this,"" ); bksp->setKeyCode ( 22 ); bksp->move ( stx+ ( R1LEN *35 ),sty+35 ); bksp->resize ( 46,30 ); bksp->setText ( "Bksp" ); bksp->res(); other_keys.append(bksp); connect ( bksp,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); //ROW 2 VButton *tab = new VButton ( this,"" ); tab->setKeyCode ( 23 ); tab->move ( stx,sty+35+35 ); tab->resize ( 47,30 ); tab->setText ( "Tab" ); tab->res(); other_keys.append(tab); connect ( tab,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); for ( int a=0;asetKeyCode ( kc2[a] ); //v->setText ( k2.mid ( a,1 ) ); //v->setShiftText ( k2s.mid ( a,1 ) ); v->move ( stx+tab->width() +5+ ( 35*a ),sty+35+35 ); v->res(); connect ( v,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( v ); } VButton *lbr = new 
VButton ( this,"" ); lbr->setKeyCode ( 34 ); lbr->move ( stx+tab->width() +5+ ( R2LEN *35 ),sty+ ( 2*35 ) ); //lbr->setText ( "[" ); //lbr->setShiftText ( "{" ); lbr->res(); connect ( lbr,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( lbr ); VButton *rbr = new VButton ( this,"" ); rbr->setKeyCode ( 35 ); rbr->move ( stx+tab->width() +5+ ( ( R2LEN +1 ) *35 ),sty+ ( 2*35 ) ); //rbr->setText ( "]" ); //rbr->setShiftText ( "}" ); rbr->res(); connect ( rbr,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( rbr ); VButton *bksl = new VButton ( this,"" ); bksl->setKeyCode ( 51 ); bksl->move ( stx+tab->width() +5+ ( ( R2LEN +2 ) *35 ),sty+35+35 ); bksl->resize ( 30,30 ); //bksl->setText ( "\\" ); //bksl->setShiftText ( "|" ); bksl->res(); connect ( bksl,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( bksl ); //ROW 3 caps = new VButton ( this,"" ); caps->setKeyCode ( 66 ); caps->move ( stx,sty+ ( 3*35 ) ); caps->resize ( 63,30 ); caps->setText ( "Caps" ); caps->setToggleButton ( true ); caps->res(); other_keys.append(caps); connect ( caps,TQT_SIGNAL ( clicked() ),this,TQT_SLOT ( toggleCaps() ) ); connect ( caps,TQT_SIGNAL ( keyClick ( unsigned int ) ),this,TQT_SLOT ( keyPress ( unsigned int ) ) ); for ( int a=0;asetKeyCode ( kc3[a] ); //v->setText ( k3.mid ( a,1 ) ); //v->setShiftText ( k3s.mid ( a,1 ) ); v->move ( stx+caps->width() +5+ ( 35*a ),sty+ ( 3*35 ) ); btns.append ( v ); v->res(); connect ( v,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); } VButton *smcl = new VButton ( this,"" ); smcl->setKeyCode ( 47 ); smcl->move ( stx+ ( R3LEN *35 ) +caps->width() +5,sty+ ( 3*35 ) ); //smcl->setText ( ";" ); //smcl->setShiftText ( ":" ); connect ( smcl,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( smcl ); smcl->res(); VButton *sngq = new VButton ( this,"" ); sngq->setKeyCode ( 48 ); sngq->move ( stx+ ( ( R3LEN +1 ) *35 ) +caps->width() +5,sty+ ( 3*35 ) ); //sngq->setText ( "'" ); //sngq->setShiftText ( "\"" ); connect ( sngq,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( sngq ); sngq->res(); VButton *enter = new VButton ( this,"" ); enter->setKeyCode ( 36 ); enter->move ( stx+ ( ( R3LEN +2 ) *35 ) +caps->width() +5,sty+ ( 3*35 ) ); enter->resize ( 50,30 ); enter->setText ( "Enter" ); connect ( enter,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); other_keys.append(enter); enter->res(); //ROW 4 lshft = new VButton ( this,"" ); lshft->setKeyCode ( 50 ); lshft->move ( stx,sty+ ( 4*35 ) ); lshft->resize ( 80,30 ); lshft->setText ( "Shift" ); lshft->setToggleButton ( true ); connect ( lshft,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( toggleShift() ) ); mod_keys.append ( lshft ); lshft->res(); for ( int a=0;asetKeyCode ( kc4[a] ); //v->setText ( k4.mid ( a,1 ) ); //v->setShiftText ( k4s.mid ( a,1 ) ); v->move ( stx+35+16+35+ ( 35*a ),sty+ ( 4*35 ) ); btns.append ( v ); v->res(); connect ( v,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); } VButton *sm = new VButton ( this,"" ); sm->setKeyCode ( 59 ); sm->move ( stx+ ( R4LEN *35 ) +lshft->width() +5,sty+ ( 4*35 ) ); //sm->setText ( "," ); //sm->setShiftText ( "<" ); connect ( sm,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( 
keyPress ( unsigned int ) ) ); btns.append ( sm ); sm->res(); VButton *gr = new VButton ( this,"" ); gr->setKeyCode ( 60 ); gr->move ( stx+ ( ( R4LEN +1 ) *35 ) +lshft->width() +5,sty+ ( 4*35 ) ); //gr->setText ( "." ); //gr->setShiftText ( ">" ); connect ( gr,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( gr ); gr->res(); VButton *sl = new VButton ( this,"" ); sl->setKeyCode ( 61 ); sl->move ( stx+ ( ( R4LEN +2 ) *35 ) +lshft->width() +5,sty+ ( 4*35 ) ); //sl->setText ( "/" ); //sl->setShiftText ( "?" ); connect ( sl,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); btns.append ( sl ); sl->res(); rshft = new VButton ( this,"" ); rshft->setKeyCode ( 50 ); rshft->move ( stx+ ( ( R4LEN +3 ) *35 ) +lshft->width() +5,sty+ ( 4*35 ) ); rshft->resize ( 68,30 ); rshft->setText ( "Shift" ); rshft->setToggleButton ( true ); connect ( rshft,TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( toggleShift() ) ); mod_keys.append ( rshft ); rshft->res(); lctrl = new VButton ( this,"" ); lctrl->resize ( 45,30 ); lctrl->move ( stx, sty+ ( 5*35 ) ); lctrl->setText ( "Ctrl" ); lctrl->setKeyCode ( 37 ); lctrl->setToggleButton ( true ); mod_keys.append ( lctrl ); lctrl->res(); win = new VButton ( this,"" ); win->resize ( 45,30 ); win->move ( 5+lctrl->x() +lctrl->width(), sty+ ( 5*35 ) ); win->setText ( "Win" ); win->setKeyCode ( 115 ); win->setToggleButton ( true ); mod_keys.append ( win ); win->res(); lalt = new VButton ( this,"" ); lalt->resize ( 45,30 ); lalt->move ( 5+win->x() +win->width(), sty+ ( 5*35 ) ); lalt->setText ( "Alt" ); lalt->setKeyCode ( 64 ); lalt->setToggleButton ( true ); mod_keys.append ( lalt ); lalt->res(); VButton *space = new VButton ( this,"" ); space->setKeyCode ( 65 ); space->resize ( 5*35+28,30 ); space->move ( 5+lalt->x() +lalt->width(),sty+ ( 5*35 ) ); connect ( space, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); space->res(); other_keys.append(space); ralt = new VButton ( this,"" ); ralt->resize ( 45,30 ); ralt->move ( 5+space->x() +space->width(), sty+ ( 5*35 ) ); ralt->setText ( "AltGr" ); ralt->setKeyCode ( 113 ); ralt->setToggleButton ( true ); mod_keys.append ( ralt ); ralt->res(); mnu = new VButton ( this,"" ); mnu->resize ( 45,30 ); mnu->move ( 5+ralt->x() +ralt->width(), sty+ ( 5*35 ) ); mnu->setText ( "Menu" ); mnu->setKeyCode ( 117 ); mnu->setToggleButton ( false ); connect ( mnu, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); other_keys.append(mnu); mnu->res(); rctrl = new VButton ( this,"" ); rctrl->resize ( 45,30 ); rctrl->move ( 5+mnu->x() +mnu->width(), sty+ ( 5*35 ) ); rctrl->setText ( "Ctrl" ); rctrl->setKeyCode ( 37 ); rctrl->setToggleButton ( true ); mod_keys.append ( rctrl ); rctrl->res(); mappingNotify(NULL); quit = new VButton ( this,"quit" ); quit->resize ( 15,30 ); quit->move ( 525,15 ); quit->setPaletteBackgroundColor ( TQt::red ); quit->res(); other_keys.append(quit); connect ( quit,TQT_SIGNAL ( clicked() ),this, TQT_SLOT ( quitClicked() ) ); extent = new VButton(this,"extent"); extent->resize( 15,65 ); extent->move(525, 85 ); extent->setText(">>"); extent->res(); other_keys.append(extent); connect (extent, TQT_SIGNAL( clicked() ) , this, TQT_SLOT ( toggleNumericPad() ) ); TQTimer *t = new TQTimer ( this ); connect ( t, TQT_SIGNAL ( timeout() ), this, TQT_SLOT ( queryModState() ) ); t->start ( 500, FALSE ); setPaletteBackgroundColor ( TQt::black ); setFocusPolicy 
( TQ_NoFocus ); int padx= 550; TQString txt[9] = {"Ho\nme","▲","Pg\nUp","◄"," ","►","End","▼","Pg\nDn"}; TQString nump[9] = {"7","8","9","4","5","6","1","2","3"}; int val=0; int nval[9] = {16,17,18,13,14,15,10,11,12}; int cval[9] = {79,80,81,83,84,85,87,88,89}; for (int a=2;a<5;a++){ for (int b=0;b<3;b++){ NumpadVButton *v = new NumpadVButton(this,""); v->move(padx+(b*35),sty+(a*35)); v->res(); v->setKeyCode(nval[val],cval[val]); v->setText(TQString::fromUtf8(txt[val])); v->setShiftText(TQString::fromUtf8(nump[val])); numl_keys.append(v); connect ( v, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); val++; } } ins = new NumpadVButton(this,"ins"); ins->resize(65,30); ins->move(padx,sty+(5*35)); ins->res(); ins->setText("Ins"); ins->setKeyCode(19,90); ins->setShiftText("0"); numl_keys.append(ins); connect ( ins, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); del = new NumpadVButton(this,"del"); del->resize(30,30); del->move(padx+70,sty+(5*35)); del->res(); del->setText("Del"); del->setShiftText("."); del->setKeyCode(60,91); numl_keys.append(del); connect ( del, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); numl = new VButton(this,"numlock"); numl->setKeyCode(77); numl->move(padx,sty+(1*35)); numl->res(); numl->setText("Num\nLock"); numl->setToggleButton ( true ); other_keys.append(numl); connect ( numl, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); connect ( numl, TQT_SIGNAL ( clicked() ), this, TQT_SLOT ( toggleNumlock() ) ); div = new VButton(this,"div"); div->move(padx+(35),sty+(1*35)); div->res(); div->setText("/"); div->setKeyCode(112); other_keys.append(div); connect ( div, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); mul = new VButton(this,"mul"); mul->move(padx+(2*35),sty+(1*35)); mul->res(); mul->setText("*"); mul->setKeyCode(63); other_keys.append(mul); connect ( mul, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); ent = new VButton(this,"enter1"); ent->resize(30,65); ent->move(padx+70+35,sty+(4*35)); ent->res(); ent->setText("Ent"); ent->setKeyCode(36); other_keys.append(ent); connect ( ent, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); plu = new VButton(this,"plus"); plu->resize(30,65); plu->move(padx+70+35,sty+(2*35)); plu->res(); plu->setText("+"); plu->setKeyCode(86); other_keys.append(plu); connect ( plu, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); min = new VButton(this,"minus"); min->resize(30,30); min->move(padx+70+35,sty+(1*35)); min->setText("-"); min->setKeyCode(82); other_keys.append(min); min->res(); connect ( min, TQT_SIGNAL ( keyClick ( unsigned int ) ), this, TQT_SLOT ( keyPress ( unsigned int ) ) ); if (!stand_alone){ tray = new KbdTray ( this ); tray->setPixmap ( UserIcon ( "tray" ) ); KConfig *cfg = KApplication::kApplication()->config(); KPopupMenu *m = tray->contextMenu(); m->setCheckable ( true ); KHelpMenu *h = new KHelpMenu ( tray, about ); m->insertItem ( "Font ...", this, TQT_SLOT ( chooseFont() ) ); mnu_autores = m->insertItem( "Auto resize font",this, TQT_SLOT ( toggleFontAutoRes() ) ); bool fnt_autores = cfg->readBoolEntry("autoresfont",true); m->setItemChecked(mnu_autores, fnt_autores); mnu_dock = m->insertItem ( "Dock widget", this, TQT_SLOT ( showDock() ) ); bool show_dock = 
cfg->readBoolEntry("showdock",false); m->setItemChecked(mnu_dock, show_dock); //m->insertItem("Configure", this, TQT_SLOT(config())); m->insertSeparator(); m->insertItem ( "Help", h->menu() ); tray->show(); dock = new KbdDock ( this ); if (show_dock){ dock->show(); } else{ dock->hide(); } TQFont fnt = cfg->readFontEntry("KvkbdFont"); m->setItemChecked(mnu_autores, fnt_autores); setFont(fnt); popup_menu = new VButton ( this,"popupmenu" ); popup_menu->resize ( 15,30 ); popup_menu->move ( 525,15+35 ); //popup_menu->setPaletteBackgroundColor ( TQt::green ); popup_menu->res(); popup_menu->setPixmap(TQIconSet(SmallIcon("configure")).pixmap()); other_keys.append(popup_menu); connect ( popup_menu,TQT_SIGNAL ( clicked() ),this, TQT_SLOT ( showConfigMenu() ) ); } else{ setCaption("kvkbdalone"); } } void MainWidget::finishInit() { KConfig *cfg = KApplication::kApplication()->config(); bool vis = cfg->readBoolEntry("visible",true); if (vis) { show(); } else { hide(); } } bool MainWidget::close(bool alsoDelete) { shutting=true; saveState(); return TQWidget::close(alsoDelete); } void MainWidget::restorePosition() { TQDesktopWidget *desktop = TQApplication::desktop(); TQRect screen_geom = desktop->screenGeometry(); int d_width=550; int d_height=235; TQRect dflt_geom(screen_geom.width()-d_width,screen_geom.height()-d_height,d_width,d_height); KConfig *cfg = 0; cfg = KApplication::kApplication()->config(); if (cfg){ TQRect geom = cfg->readRectEntry("geometry"); if (!geom.isNull() && geom.isValid()) { dflt_geom=geom; } } setGeometry(dflt_geom); } void MainWidget::saveState() { KConfig *cfg = 0; cfg = KApplication::kApplication()->config(); if (cfg){ cfg->writeEntry("visible",isShown()); cfg->sync(); } } void MainWidget::hideEvent ( TQHideEvent * ) { KConfig *cfg = 0; cfg = KApplication::kApplication()->config(); if (cfg){ cfg->writeEntry("geometry",tqgeometry()); cfg->sync(); } } void MainWidget::resizeEvent(TQResizeEvent * e) { if (nresize) return; const TQSize sz = e->size(); //512 x 243 //spc x -> 5, spc y->7 // btn x ->30 , btny -> 28 if (extent_visible){ sdxb = width(); sdxs = width() - (width() * (150.0/700.0)); VButton::pw=700.0; VButton::ph=235.0; setMinimumSize(700/3,235/3); } else{ sdxs = width(); sdxb = width() + (width() * (150.0/550.0)); VButton::pw=550.0; VButton::ph=235.0; setMinimumSize(550/3,235/3); } for ( unsigned a=0;areposition(sz.width(),sz.height()); } for ( unsigned a=0;areposition(sz.width(),sz.height()); } for ( unsigned a=0;areposition(sz.width(),sz.height()); } for ( unsigned a=0;areposition(sz.width(),sz.height()); } updateFont(); } void MainWidget::showConfigMenu() { if (tray){ KPopupMenu *m = tray->contextMenu(); m->popup(mapToGlobal(popup_menu->pos())); } } void MainWidget::updateFont() { if (tray->contextMenu()->isItemChecked(mnu_autores)){ TQFont fnt = this->font(); fnt.setWeight(TQFont::Bold); //double rs = (100.0/700.0)*width(); double rp = (8.0/600.0)*width(); //fnt.setStretch(rs); fnt.setPointSizeFloat(rp); setFont(fnt); } } void MainWidget::toggleNumericPad() { nresize=true; if (extent_visible){ extent_visible=false; TQWidget::resize ( (int)sdxs, height() ); extent->setText(">>"); } else{ extent_visible=true; extent->setText("<<"); TQWidget::resize ( (int)sdxb, height() ); } nresize=false; } void MainWidget::chooseFont() { bool c = false; if (isShown()) { hide(); c=true; } bool ok; TQFont font = TQFontDialog::getFont( &ok, this->font(), this ); if ( ok ) { // font is set to the font the user selected setFont(font); } else { // the user canceled the dialog; font is set 
to the initial // value, in this case Helvetica [Cronyx], 10 } KConfig *cfg = KApplication::kApplication()->config(); cfg->writeEntry ("KvkbdFont", this->font()); cfg->sync(); if (c)show(); updateFont(); } void MainWidget::quitClicked() { if (stand_alone) close(true); else hide(); } void MainWidget::showDock() { bool c = dock->isShown(); if ( c ) { tray->contextMenu()->setItemChecked ( mnu_dock, !c ); dock->hide(); } else { tray->contextMenu()->setItemChecked ( mnu_dock, !c ); dock->show(); } KConfig *cfg = KApplication::kApplication()->config(); cfg->writeEntry ("showdock", !c); cfg->sync(); } void MainWidget::toggleFontAutoRes() { bool c = tray->contextMenu()->isItemChecked( mnu_autores); tray->contextMenu()->setItemChecked(mnu_autores, !c); KConfig *cfg = KApplication::kApplication()->config(); cfg->writeEntry ("autoresfont", !c); cfg->sync(); } void MainWidget::toggleNumlock() { bool p=numl->isOn(); for ( unsigned a=0;anumlockPressed(p); } } void MainWidget::toggleCaps() { bool p=caps->isOn(); for ( unsigned a=0;acapsPressed(p); } } void MainWidget::toggleShift() { bool p=false; if ( lshft->isOn() || rshft->isOn() ) p=true; for ( unsigned a=0;ashiftPressed ( p ); } } void MainWidget::keyPress ( unsigned int a ) { send_key ( a,true,true ); bool reverse = false; if (lshft->isOn() || rshft->isOn()) reverse=true; for ( unsigned a=0;asetOn ( false ); } if (caps->isOn()) { if (reverse) { for (unsigned a=0;acapsPressed(true); } } }else { for (unsigned a=0;acapsPressed(false); } } } void MainWidget::send_key ( unsigned int keycode, bool press, bool release ) { Window curr_focus; int revert_to; XGetInputFocus ( display, &curr_focus, &revert_to ); for ( unsigned a=0;aisOn() ) { XTestFakeKeyEvent ( display, mod->getKeyCode(), true, 0 ); } } XTestFakeKeyEvent ( display, keycode, true,1 ); XTestFakeKeyEvent ( display, keycode, false, 2 ); for ( unsigned a=0;aisOn() ) { XTestFakeKeyEvent ( display, mod->getKeyCode(), false, 2 ); } } XFlush ( display ); } bool MainWidget::keyState ( int iKey ) { int iKeyMask = 0; Window wDummy1, wDummy2; int iDummy3, iDummy4, iDummy5, iDummy6; unsigned int iMask; XModifierKeymap* map = XGetModifierMapping ( display ); KeyCode keyCode = XKeysymToKeycode ( display,iKey ); if ( keyCode == NoSymbol ) return false; for ( int i = 0; i < 8; ++i ) { if ( map->modifiermap[map->max_keypermod * i] == keyCode ) { iKeyMask = 1 << i; } } XQueryPointer ( display, DefaultRootWindow ( display ), &wDummy1, &wDummy2,&iDummy3, &iDummy4, &iDummy5, &iDummy6, &iMask ); XFreeModifiermap ( map ); return ( iMask & iKeyMask ) != 0; } void MainWidget::queryModState() { //printf("Scroll: %d\n",keyState(XK_Scroll_Lock,pDisplay)); //printf("Caps : %d\n",keyState(XK_Caps_Lock,pDisplay)); //printf("Num : %d\n",keyState(XK_Num_Lock,pDisplay)); bool caps_state = keyState ( XK_Caps_Lock); bool numl_state = keyState ( XK_Num_Lock); if ( caps_state!=caps->isOn() ) { caps->setOn ( caps_state ); toggleCaps(); } if ( numl_state!= numl->isOn() ) { numl->setOn(numl_state); toggleNumlock(); } } void MainWidget::setupText(VButton& v) { KeyCode keycode=v.getKeyCode(); KeySym keysym_l = XKeycodeToKeysym(display, keycode, 0); KeySym keysym_u = XKeycodeToKeysym(display, keycode, 1); long ret = keysym2ucs(keysym_l); TQString btn_text(TQChar((uint)ret)); v.setText(btn_text); TQString btn_upper(btn_text.upper()); if (btn_upper==btn_text) { ret = keysym2ucs(keysym_u); TQChar c((uint)ret); if (c=='&') v.setShiftText("&&"); else v.setShiftText(c); } else { v.setShiftText(btn_upper); } } void 
MainWidget::mappingNotify(XMappingEvent *) { //TQTimer::singleShot( 1000, this, TQT_SLOT(test()) ); //if (e)XRefreshKeyboardMapping(e); //int index=0; for (unsigned a=0;aglobalPos(); int x = abs(p.x()-gpress.x()); int y = abs(p.y()-gpress.y()); if ( x<10 && y<10 ) { if ( mainWidget->isShown() ) mainWidget->hide(); else mainWidget->show(); } } /* \$XFree86\$ * This module converts keysym values into the corresponding ISO 10646 * (UCS, Unicode) values. * * The array keysymtab[] contains pairs of X11 keysym values for graphical * characters and the corresponding Unicode value. The function * keysym2ucs() maps a keysym onto a Unicode value using a binary search, * therefore keysymtab[] must remain SORTED by keysym value. * * The keysym -> UTF-8 conversion will hopefully one day be provided * by Xlib via XmbLookupString() and should ideally not have to be * done in X applications. But we are not there yet. * * We allow to represent any UCS character in the range U-00000000 to * U-00FFFFFF by a keysym value in the range 0x01000000 to 0x01ffffff. * This admittedly does not cover the entire 31-bit space of UCS, but * it does cover all of the characters up to U-10FFFF, which can be * represented by UTF-16, and more, and it is very unlikely that higher * UCS codes will ever be assigned by ISO. So to get Unicode character * U+ABCD you can directly use keysym 0x0100abcd. * * NOTE: The comments in the table below contain the actual character * encoded in UTF-8, so for viewing and editing best use an editor in * UTF-8 mode. * * Author: Markus G. Kuhn , * University of Cambridge, April 2001 * * Special thanks to Richard Verhoeven for preparing * an initial draft of the mapping table. * * This software is in the public domain. Share and enjoy! * * AUTOMATICALLY GENERATED FILE, DO NOT EDIT !!! 
(tqunicode/convmap.pl) */ struct codepair { unsigned short keysym; unsigned short ucs; } keysymtab[] = { { 0x01a1, 0x0104 }, /* Aogonek Ą LATIN CAPITAL LETTER A WITH OGONEK */ { 0x01a2, 0x02d8 }, /* breve ˘ BREVE */ { 0x01a3, 0x0141 }, /* Lstroke Ł LATIN CAPITAL LETTER L WITH STROKE */ { 0x01a5, 0x013d }, /* Lcaron Ľ LATIN CAPITAL LETTER L WITH CARON */ { 0x01a6, 0x015a }, /* Sacute Ś LATIN CAPITAL LETTER S WITH ACUTE */ { 0x01a9, 0x0160 }, /* Scaron Š LATIN CAPITAL LETTER S WITH CARON */ { 0x01aa, 0x015e }, /* Scedilla Ş LATIN CAPITAL LETTER S WITH CEDILLA */ { 0x01ab, 0x0164 }, /* Tcaron Ť LATIN CAPITAL LETTER T WITH CARON */ { 0x01ac, 0x0179 }, /* Zacute Ź LATIN CAPITAL LETTER Z WITH ACUTE */ { 0x01ae, 0x017d }, /* Zcaron Ž LATIN CAPITAL LETTER Z WITH CARON */ { 0x01af, 0x017b }, /* Zabovedot Ż LATIN CAPITAL LETTER Z WITH DOT ABOVE */ { 0x01b1, 0x0105 }, /* aogonek ą LATIN SMALL LETTER A WITH OGONEK */ { 0x01b2, 0x02db }, /* ogonek ˛ OGONEK */ { 0x01b3, 0x0142 }, /* lstroke ł LATIN SMALL LETTER L WITH STROKE */ { 0x01b5, 0x013e }, /* lcaron ľ LATIN SMALL LETTER L WITH CARON */ { 0x01b6, 0x015b }, /* sacute ś LATIN SMALL LETTER S WITH ACUTE */ { 0x01b7, 0x02c7 }, /* caron ˇ CARON */ { 0x01b9, 0x0161 }, /* scaron š LATIN SMALL LETTER S WITH CARON */ { 0x01ba, 0x015f }, /* scedilla ş LATIN SMALL LETTER S WITH CEDILLA */ { 0x01bb, 0x0165 }, /* tcaron ť LATIN SMALL LETTER T WITH CARON */ { 0x01bc, 0x017a }, /* zacute ź LATIN SMALL LETTER Z WITH ACUTE */ { 0x01bd, 0x02dd }, /* doubleacute ˝ DOUBLE ACUTE ACCENT */ { 0x01be, 0x017e }, /* zcaron ž LATIN SMALL LETTER Z WITH CARON */ { 0x01bf, 0x017c }, /* zabovedot ż LATIN SMALL LETTER Z WITH DOT ABOVE */ { 0x01c0, 0x0154 }, /* Racute Ŕ LATIN CAPITAL LETTER R WITH ACUTE */ { 0x01c3, 0x0102 }, /* Abreve Ă LATIN CAPITAL LETTER A WITH BREVE */ { 0x01c5, 0x0139 }, /* Lacute Ĺ LATIN CAPITAL LETTER L WITH ACUTE */ { 0x01c6, 0x0106 }, /* Cacute Ć LATIN CAPITAL LETTER C WITH ACUTE */ { 0x01c8, 0x010c }, /* Ccaron Č LATIN CAPITAL LETTER C WITH CARON */ { 0x01ca, 0x0118 }, /* Eogonek Ę LATIN CAPITAL LETTER E WITH OGONEK */ { 0x01cc, 0x011a }, /* Ecaron Ě LATIN CAPITAL LETTER E WITH CARON */ { 0x01cf, 0x010e }, /* Dcaron Ď LATIN CAPITAL LETTER D WITH CARON */ { 0x01d0, 0x0110 }, /* Dstroke Đ LATIN CAPITAL LETTER D WITH STROKE */ { 0x01d1, 0x0143 }, /* Nacute Ń LATIN CAPITAL LETTER N WITH ACUTE */ { 0x01d2, 0x0147 }, /* Ncaron Ň LATIN CAPITAL LETTER N WITH CARON */ { 0x01d5, 0x0150 }, /* Odoubleacute Ő LATIN CAPITAL LETTER O WITH DOUBLE ACUTE */ { 0x01d8, 0x0158 }, /* Rcaron Ř LATIN CAPITAL LETTER R WITH CARON */ { 0x01d9, 0x016e }, /* Uring Ů LATIN CAPITAL LETTER U WITH RING ABOVE */ { 0x01db, 0x0170 }, /* Udoubleacute Ű LATIN CAPITAL LETTER U WITH DOUBLE ACUTE */ { 0x01de, 0x0162 }, /* Tcedilla Ţ LATIN CAPITAL LETTER T WITH CEDILLA */ { 0x01e0, 0x0155 }, /* racute ŕ LATIN SMALL LETTER R WITH ACUTE */ { 0x01e3, 0x0103 }, /* abreve ă LATIN SMALL LETTER A WITH BREVE */ { 0x01e5, 0x013a }, /* lacute ĺ LATIN SMALL LETTER L WITH ACUTE */ { 0x01e6, 0x0107 }, /* cacute ć LATIN SMALL LETTER C WITH ACUTE */ { 0x01e8, 0x010d }, /* ccaron č LATIN SMALL LETTER C WITH CARON */ { 0x01ea, 0x0119 }, /* eogonek ę LATIN SMALL LETTER E WITH OGONEK */ { 0x01ec, 0x011b }, /* ecaron ě LATIN SMALL LETTER E WITH CARON */ { 0x01ef, 0x010f }, /* dcaron ď LATIN SMALL LETTER D WITH CARON */ { 0x01f0, 0x0111 }, /* dstroke đ LATIN SMALL LETTER D WITH STROKE */ { 0x01f1, 0x0144 }, /* nacute ń LATIN SMALL LETTER N WITH ACUTE */ { 0x01f2, 0x0148 }, /* ncaron ň LATIN SMALL LETTER N WITH 
CARON */ { 0x01f5, 0x0151 }, /* odoubleacute ő LATIN SMALL LETTER O WITH DOUBLE ACUTE */ { 0x01f8, 0x0159 }, /* rcaron ř LATIN SMALL LETTER R WITH CARON */ { 0x01f9, 0x016f }, /* uring ů LATIN SMALL LETTER U WITH RING ABOVE */ { 0x01fb, 0x0171 }, /* udoubleacute ű LATIN SMALL LETTER U WITH DOUBLE ACUTE */ { 0x01fe, 0x0163 }, /* tcedilla ţ LATIN SMALL LETTER T WITH CEDILLA */ { 0x01ff, 0x02d9 }, /* abovedot ˙ DOT ABOVE */ { 0x02a1, 0x0126 }, /* Hstroke Ħ LATIN CAPITAL LETTER H WITH STROKE */ { 0x02a6, 0x0124 }, /* Hcircumflex Ĥ LATIN CAPITAL LETTER H WITH CIRCUMFLEX */ { 0x02a9, 0x0130 }, /* Iabovedot İ LATIN CAPITAL LETTER I WITH DOT ABOVE */ { 0x02ab, 0x011e }, /* Gbreve Ğ LATIN CAPITAL LETTER G WITH BREVE */ { 0x02ac, 0x0134 }, /* Jcircumflex Ĵ LATIN CAPITAL LETTER J WITH CIRCUMFLEX */ { 0x02b1, 0x0127 }, /* hstroke ħ LATIN SMALL LETTER H WITH STROKE */ { 0x02b6, 0x0125 }, /* hcircumflex ĥ LATIN SMALL LETTER H WITH CIRCUMFLEX */ { 0x02b9, 0x0131 }, /* idotless ı LATIN SMALL LETTER DOTLESS I */ { 0x02bb, 0x011f }, /* gbreve ğ LATIN SMALL LETTER G WITH BREVE */ { 0x02bc, 0x0135 }, /* jcircumflex ĵ LATIN SMALL LETTER J WITH CIRCUMFLEX */ { 0x02c5, 0x010a }, /* Cabovedot Ċ LATIN CAPITAL LETTER C WITH DOT ABOVE */ { 0x02c6, 0x0108 }, /* Ccircumflex Ĉ LATIN CAPITAL LETTER C WITH CIRCUMFLEX */ { 0x02d5, 0x0120 }, /* Gabovedot Ġ LATIN CAPITAL LETTER G WITH DOT ABOVE */ { 0x02d8, 0x011c }, /* Gcircumflex Ĝ LATIN CAPITAL LETTER G WITH CIRCUMFLEX */ { 0x02dd, 0x016c }, /* Ubreve Ŭ LATIN CAPITAL LETTER U WITH BREVE */ { 0x02de, 0x015c }, /* Scircumflex Ŝ LATIN CAPITAL LETTER S WITH CIRCUMFLEX */ { 0x02e5, 0x010b }, /* cabovedot ċ LATIN SMALL LETTER C WITH DOT ABOVE */ { 0x02e6, 0x0109 }, /* ccircumflex ĉ LATIN SMALL LETTER C WITH CIRCUMFLEX */ { 0x02f5, 0x0121 }, /* gabovedot ġ LATIN SMALL LETTER G WITH DOT ABOVE */ { 0x02f8, 0x011d }, /* gcircumflex ĝ LATIN SMALL LETTER G WITH CIRCUMFLEX */ { 0x02fd, 0x016d }, /* ubreve ŭ LATIN SMALL LETTER U WITH BREVE */ { 0x02fe, 0x015d }, /* scircumflex ŝ LATIN SMALL LETTER S WITH CIRCUMFLEX */ { 0x03a2, 0x0138 }, /* kra ĸ LATIN SMALL LETTER KRA */ { 0x03a3, 0x0156 }, /* Rcedilla Ŗ LATIN CAPITAL LETTER R WITH CEDILLA */ { 0x03a5, 0x0128 }, /* Itilde Ĩ LATIN CAPITAL LETTER I WITH TILDE */ { 0x03a6, 0x013b }, /* Lcedilla Ļ LATIN CAPITAL LETTER L WITH CEDILLA */ { 0x03aa, 0x0112 }, /* Emacron Ē LATIN CAPITAL LETTER E WITH MACRON */ { 0x03ab, 0x0122 }, /* Gcedilla Ģ LATIN CAPITAL LETTER G WITH CEDILLA */ { 0x03ac, 0x0166 }, /* Tslash Ŧ LATIN CAPITAL LETTER T WITH STROKE */ { 0x03b3, 0x0157 }, /* rcedilla ŗ LATIN SMALL LETTER R WITH CEDILLA */ { 0x03b5, 0x0129 }, /* itilde ĩ LATIN SMALL LETTER I WITH TILDE */ { 0x03b6, 0x013c }, /* lcedilla ļ LATIN SMALL LETTER L WITH CEDILLA */ { 0x03ba, 0x0113 }, /* emacron ē LATIN SMALL LETTER E WITH MACRON */ { 0x03bb, 0x0123 }, /* gcedilla ģ LATIN SMALL LETTER G WITH CEDILLA */ { 0x03bc, 0x0167 }, /* tslash ŧ LATIN SMALL LETTER T WITH STROKE */ { 0x03bd, 0x014a }, /* ENG Ŋ LATIN CAPITAL LETTER ENG */ { 0x03bf, 0x014b }, /* eng ŋ LATIN SMALL LETTER ENG */ { 0x03c0, 0x0100 }, /* Amacron Ā LATIN CAPITAL LETTER A WITH MACRON */ { 0x03c7, 0x012e }, /* Iogonek Į LATIN CAPITAL LETTER I WITH OGONEK */ { 0x03cc, 0x0116 }, /* Eabovedot Ė LATIN CAPITAL LETTER E WITH DOT ABOVE */ { 0x03cf, 0x012a }, /* Imacron Ī LATIN CAPITAL LETTER I WITH MACRON */ { 0x03d1, 0x0145 }, /* Ncedilla Ņ LATIN CAPITAL LETTER N WITH CEDILLA */ { 0x03d2, 0x014c }, /* Omacron Ō LATIN CAPITAL LETTER O WITH MACRON */ { 0x03d3, 0x0136 }, /* Kcedilla Ķ LATIN CAPITAL 
LETTER K WITH CEDILLA */ { 0x03d9, 0x0172 }, /* Uogonek Ų LATIN CAPITAL LETTER U WITH OGONEK */ { 0x03dd, 0x0168 }, /* Utilde Ũ LATIN CAPITAL LETTER U WITH TILDE */ { 0x03de, 0x016a }, /* Umacron Ū LATIN CAPITAL LETTER U WITH MACRON */ { 0x03e0, 0x0101 }, /* amacron ā LATIN SMALL LETTER A WITH MACRON */ { 0x03e7, 0x012f }, /* iogonek į LATIN SMALL LETTER I WITH OGONEK */ { 0x03ec, 0x0117 }, /* eabovedot ė LATIN SMALL LETTER E WITH DOT ABOVE */ { 0x03ef, 0x012b }, /* imacron ī LATIN SMALL LETTER I WITH MACRON */ { 0x03f1, 0x0146 }, /* ncedilla ņ LATIN SMALL LETTER N WITH CEDILLA */ { 0x03f2, 0x014d }, /* omacron ō LATIN SMALL LETTER O WITH MACRON */ { 0x03f3, 0x0137 }, /* kcedilla ķ LATIN SMALL LETTER K WITH CEDILLA */ { 0x03f9, 0x0173 }, /* uogonek ų LATIN SMALL LETTER U WITH OGONEK */ { 0x03fd, 0x0169 }, /* utilde ũ LATIN SMALL LETTER U WITH TILDE */ { 0x03fe, 0x016b }, /* umacron ū LATIN SMALL LETTER U WITH MACRON */ { 0x047e, 0x203e }, /* overline ‾ OVERLINE */ { 0x04a1, 0x3002 }, /* kana_fullstop 。 IDEOGRAPHIC FULL STOP */ { 0x04a2, 0x300c }, /* kana_openingbracket 「 LEFT CORNER BRACKET */ { 0x04a3, 0x300d }, /* kana_closingbracket 」 RIGHT CORNER BRACKET */ { 0x04a4, 0x3001 }, /* kana_comma 、 IDEOGRAPHIC COMMA */ { 0x04a5, 0x30fb }, /* kana_conjunctive ・ KATAKANA MIDDLE DOT */ { 0x04a6, 0x30f2 }, /* kana_WO ヲ KATAKANA LETTER WO */ { 0x04a7, 0x30a1 }, /* kana_a ァ KATAKANA LETTER SMALL A */ { 0x04a8, 0x30a3 }, /* kana_i ィ KATAKANA LETTER SMALL I */ { 0x04a9, 0x30a5 }, /* kana_u ゥ KATAKANA LETTER SMALL U */ { 0x04aa, 0x30a7 }, /* kana_e ェ KATAKANA LETTER SMALL E */ { 0x04ab, 0x30a9 }, /* kana_o ォ KATAKANA LETTER SMALL O */ { 0x04ac, 0x30e3 }, /* kana_ya ャ KATAKANA LETTER SMALL YA */ { 0x04ad, 0x30e5 }, /* kana_yu ュ KATAKANA LETTER SMALL YU */ { 0x04ae, 0x30e7 }, /* kana_yo ョ KATAKANA LETTER SMALL YO */ { 0x04af, 0x30c3 }, /* kana_tsu ッ KATAKANA LETTER SMALL TU */ { 0x04b0, 0x30fc }, /* prolongedsound ー KATAKANA-HIRAGANA PROLONGED SOUND MARK */ { 0x04b1, 0x30a2 }, /* kana_A ア KATAKANA LETTER A */ { 0x04b2, 0x30a4 }, /* kana_I イ KATAKANA LETTER I */ { 0x04b3, 0x30a6 }, /* kana_U ウ KATAKANA LETTER U */ { 0x04b4, 0x30a8 }, /* kana_E エ KATAKANA LETTER E */ { 0x04b5, 0x30aa }, /* kana_O オ KATAKANA LETTER O */ { 0x04b6, 0x30ab }, /* kana_KA カ KATAKANA LETTER KA */ { 0x04b7, 0x30ad }, /* kana_KI キ KATAKANA LETTER KI */ { 0x04b8, 0x30af }, /* kana_KU ク KATAKANA LETTER KU */ { 0x04b9, 0x30b1 }, /* kana_KE ケ KATAKANA LETTER KE */ { 0x04ba, 0x30b3 }, /* kana_KO コ KATAKANA LETTER KO */ { 0x04bb, 0x30b5 }, /* kana_SA サ KATAKANA LETTER SA */ { 0x04bc, 0x30b7 }, /* kana_SHI シ KATAKANA LETTER SI */ { 0x04bd, 0x30b9 }, /* kana_SU ス KATAKANA LETTER SU */ { 0x04be, 0x30bb }, /* kana_SE セ KATAKANA LETTER SE */ { 0x04bf, 0x30bd }, /* kana_SO ソ KATAKANA LETTER SO */ { 0x04c0, 0x30bf }, /* kana_TA タ KATAKANA LETTER TA */ { 0x04c1, 0x30c1 }, /* kana_CHI チ KATAKANA LETTER TI */ { 0x04c2, 0x30c4 }, /* kana_TSU ツ KATAKANA LETTER TU */ { 0x04c3, 0x30c6 }, /* kana_TE テ KATAKANA LETTER TE */ { 0x04c4, 0x30c8 }, /* kana_TO ト KATAKANA LETTER TO */ { 0x04c5, 0x30ca }, /* kana_NA ナ KATAKANA LETTER NA */ { 0x04c6, 0x30cb }, /* kana_NI ニ KATAKANA LETTER NI */ { 0x04c7, 0x30cc }, /* kana_NU ヌ KATAKANA LETTER NU */ { 0x04c8, 0x30cd }, /* kana_NE ネ KATAKANA LETTER NE */ { 0x04c9, 0x30ce }, /* kana_NO ノ KATAKANA LETTER NO */ { 0x04ca, 0x30cf }, /* kana_HA ハ KATAKANA LETTER HA */ { 0x04cb, 0x30d2 }, /* kana_HI ヒ KATAKANA LETTER HI */ { 0x04cc, 0x30d5 }, /* kana_FU フ KATAKANA LETTER HU */ { 0x04cd, 0x30d8 }, /* kana_HE ヘ KATAKANA 
LETTER HE */ { 0x04ce, 0x30db }, /* kana_HO ホ KATAKANA LETTER HO */ { 0x04cf, 0x30de }, /* kana_MA マ KATAKANA LETTER MA */ { 0x04d0, 0x30df }, /* kana_MI ミ KATAKANA LETTER MI */ { 0x04d1, 0x30e0 }, /* kana_MU ム KATAKANA LETTER MU */ { 0x04d2, 0x30e1 }, /* kana_ME メ KATAKANA LETTER ME */ { 0x04d3, 0x30e2 }, /* kana_MO モ KATAKANA LETTER MO */ { 0x04d4, 0x30e4 }, /* kana_YA ヤ KATAKANA LETTER YA */ { 0x04d5, 0x30e6 }, /* kana_YU ユ KATAKANA LETTER YU */ { 0x04d6, 0x30e8 }, /* kana_YO ヨ KATAKANA LETTER YO */ { 0x04d7, 0x30e9 }, /* kana_RA ラ KATAKANA LETTER RA */ { 0x04d8, 0x30ea }, /* kana_RI リ KATAKANA LETTER RI */ { 0x04d9, 0x30eb }, /* kana_RU ル KATAKANA LETTER RU */ { 0x04da, 0x30ec }, /* kana_RE レ KATAKANA LETTER RE */ { 0x04db, 0x30ed }, /* kana_RO ロ KATAKANA LETTER RO */ { 0x04dc, 0x30ef }, /* kana_WA ワ KATAKANA LETTER WA */ { 0x04dd, 0x30f3 }, /* kana_N ン KATAKANA LETTER N */ { 0x04de, 0x309b }, /* voicedsound ゛ KATAKANA-HIRAGANA VOICED SOUND MARK */ { 0x04df, 0x309c }, /* semivoicedsound ゜ KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK */ { 0x05ac, 0x060c }, /* Arabic_comma ، ARABIC COMMA */ { 0x05bb, 0x061b }, /* Arabic_semicolon ؛ ARABIC SEMICOLON */ { 0x05bf, 0x061f }, /* Arabic_question_mark ؟ ARABIC QUESTION MARK */ { 0x05c1, 0x0621 }, /* Arabic_hamza ء ARABIC LETTER HAMZA */ { 0x05c2, 0x0622 }, /* Arabic_maddaonalef آ ARABIC LETTER ALEF WITH MADDA ABOVE */ { 0x05c3, 0x0623 }, /* Arabic_hamzaonalef أ ARABIC LETTER ALEF WITH HAMZA ABOVE */ { 0x05c4, 0x0624 }, /* Arabic_hamzaonwaw ؤ ARABIC LETTER WAW WITH HAMZA ABOVE */ { 0x05c5, 0x0625 }, /* Arabic_hamzaunderalef إ ARABIC LETTER ALEF WITH HAMZA BELOW */ { 0x05c6, 0x0626 }, /* Arabic_hamzaonyeh ئ ARABIC LETTER YEH WITH HAMZA ABOVE */ { 0x05c7, 0x0627 }, /* Arabic_alef ا ARABIC LETTER ALEF */ { 0x05c8, 0x0628 }, /* Arabic_beh ب ARABIC LETTER BEH */ { 0x05c9, 0x0629 }, /* Arabic_tehmarbuta ة ARABIC LETTER TEH MARBUTA */ { 0x05ca, 0x062a }, /* Arabic_teh ت ARABIC LETTER TEH */ { 0x05cb, 0x062b }, /* Arabic_theh ث ARABIC LETTER THEH */ { 0x05cc, 0x062c }, /* Arabic_jeem ج ARABIC LETTER JEEM */ { 0x05cd, 0x062d }, /* Arabic_hah ح ARABIC LETTER HAH */ { 0x05ce, 0x062e }, /* Arabic_khah خ ARABIC LETTER KHAH */ { 0x05cf, 0x062f }, /* Arabic_dal د ARABIC LETTER DAL */ { 0x05d0, 0x0630 }, /* Arabic_thal ذ ARABIC LETTER THAL */ { 0x05d1, 0x0631 }, /* Arabic_ra ر ARABIC LETTER REH */ { 0x05d2, 0x0632 }, /* Arabic_zain ز ARABIC LETTER ZAIN */ { 0x05d3, 0x0633 }, /* Arabic_seen س ARABIC LETTER SEEN */ { 0x05d4, 0x0634 }, /* Arabic_sheen ش ARABIC LETTER SHEEN */ { 0x05d5, 0x0635 }, /* Arabic_sad ص ARABIC LETTER SAD */ { 0x05d6, 0x0636 }, /* Arabic_dad ض ARABIC LETTER DAD */ { 0x05d7, 0x0637 }, /* Arabic_tah ط ARABIC LETTER TAH */ { 0x05d8, 0x0638 }, /* Arabic_zah ظ ARABIC LETTER ZAH */ { 0x05d9, 0x0639 }, /* Arabic_ain ع ARABIC LETTER AIN */ { 0x05da, 0x063a }, /* Arabic_ghain غ ARABIC LETTER GHAIN */ { 0x05e0, 0x0640 }, /* Arabic_tatweel ـ ARABIC TATWEEL */ { 0x05e1, 0x0641 }, /* Arabic_feh ف ARABIC LETTER FEH */ { 0x05e2, 0x0642 }, /* Arabic_qaf ق ARABIC LETTER TQAF */ { 0x05e3, 0x0643 }, /* Arabic_kaf ك ARABIC LETTER KAF */ { 0x05e4, 0x0644 }, /* Arabic_lam ل ARABIC LETTER LAM */ { 0x05e5, 0x0645 }, /* Arabic_meem م ARABIC LETTER MEEM */ { 0x05e6, 0x0646 }, /* Arabic_noon ن ARABIC LETTER NOON */ { 0x05e7, 0x0647 }, /* Arabic_ha ه ARABIC LETTER HEH */ { 0x05e8, 0x0648 }, /* Arabic_waw و ARABIC LETTER WAW */ { 0x05e9, 0x0649 }, /* Arabic_alefmaksura ى ARABIC LETTER ALEF MAKSURA */ { 0x05ea, 0x064a }, /* Arabic_yeh ي ARABIC LETTER YEH */ { 0x05eb, 0x064b 
}, /* Arabic_fathatan ً ARABIC FATHATAN */ { 0x05ec, 0x064c }, /* Arabic_dammatan ٌ ARABIC DAMMATAN */ { 0x05ed, 0x064d }, /* Arabic_kasratan ٍ ARABIC KASRATAN */ { 0x05ee, 0x064e }, /* Arabic_fatha َ ARABIC FATHA */ { 0x05ef, 0x064f }, /* Arabic_damma ُ ARABIC DAMMA */ { 0x05f0, 0x0650 }, /* Arabic_kasra ِ ARABIC KASRA */ { 0x05f1, 0x0651 }, /* Arabic_shadda ّ ARABIC SHADDA */ { 0x05f2, 0x0652 }, /* Arabic_sukun ْ ARABIC SUKUN */ { 0x06a1, 0x0452 }, /* Serbian_dje ђ CYRILLIC SMALL LETTER DJE */ { 0x06a2, 0x0453 }, /* Macedonia_gje ѓ CYRILLIC SMALL LETTER GJE */ { 0x06a3, 0x0451 }, /* Cyrillic_io ё CYRILLIC SMALL LETTER IO */ { 0x06a4, 0x0454 }, /* Ukrainian_ie є CYRILLIC SMALL LETTER UKRAINIAN IE */ { 0x06a5, 0x0455 }, /* Macedonia_dse ѕ CYRILLIC SMALL LETTER DZE */ { 0x06a6, 0x0456 }, /* Ukrainian_i і CYRILLIC SMALL LETTER BYELORUSSIAN-UKRAINIAN I */ { 0x06a7, 0x0457 }, /* Ukrainian_yi ї CYRILLIC SMALL LETTER YI */ { 0x06a8, 0x0458 }, /* Cyrillic_je ј CYRILLIC SMALL LETTER JE */ { 0x06a9, 0x0459 }, /* Cyrillic_lje љ CYRILLIC SMALL LETTER LJE */ { 0x06aa, 0x045a }, /* Cyrillic_nje њ CYRILLIC SMALL LETTER NJE */ { 0x06ab, 0x045b }, /* Serbian_tshe ћ CYRILLIC SMALL LETTER TSHE */ { 0x06ac, 0x045c }, /* Macedonia_kje ќ CYRILLIC SMALL LETTER KJE */ { 0x06ae, 0x045e }, /* Byelorussian_shortu ў CYRILLIC SMALL LETTER SHORT U */ { 0x06af, 0x045f }, /* Cyrillic_dzhe џ CYRILLIC SMALL LETTER DZHE */ { 0x06b0, 0x2116 }, /* numerosign № NUMERO SIGN */ { 0x06b1, 0x0402 }, /* Serbian_DJE Ђ CYRILLIC CAPITAL LETTER DJE */ { 0x06b2, 0x0403 }, /* Macedonia_GJE Ѓ CYRILLIC CAPITAL LETTER GJE */ { 0x06b3, 0x0401 }, /* Cyrillic_IO Ё CYRILLIC CAPITAL LETTER IO */ { 0x06b4, 0x0404 }, /* Ukrainian_IE Є CYRILLIC CAPITAL LETTER UKRAINIAN IE */ { 0x06b5, 0x0405 }, /* Macedonia_DSE Ѕ CYRILLIC CAPITAL LETTER DZE */ { 0x06b6, 0x0406 }, /* Ukrainian_I І CYRILLIC CAPITAL LETTER BYELORUSSIAN-UKRAINIAN I */ { 0x06b7, 0x0407 }, /* Ukrainian_YI Ї CYRILLIC CAPITAL LETTER YI */ { 0x06b8, 0x0408 }, /* Cyrillic_JE Ј CYRILLIC CAPITAL LETTER JE */ { 0x06b9, 0x0409 }, /* Cyrillic_LJE Љ CYRILLIC CAPITAL LETTER LJE */ { 0x06ba, 0x040a }, /* Cyrillic_NJE Њ CYRILLIC CAPITAL LETTER NJE */ { 0x06bb, 0x040b }, /* Serbian_TSHE Ћ CYRILLIC CAPITAL LETTER TSHE */ { 0x06bc, 0x040c }, /* Macedonia_KJE Ќ CYRILLIC CAPITAL LETTER KJE */ { 0x06be, 0x040e }, /* Byelorussian_SHORTU Ў CYRILLIC CAPITAL LETTER SHORT U */ { 0x06bf, 0x040f }, /* Cyrillic_DZHE Џ CYRILLIC CAPITAL LETTER DZHE */ { 0x06c0, 0x044e }, /* Cyrillic_yu ю CYRILLIC SMALL LETTER YU */ { 0x06c1, 0x0430 }, /* Cyrillic_a а CYRILLIC SMALL LETTER A */ { 0x06c2, 0x0431 }, /* Cyrillic_be б CYRILLIC SMALL LETTER BE */ { 0x06c3, 0x0446 }, /* Cyrillic_tse ц CYRILLIC SMALL LETTER TSE */ { 0x06c4, 0x0434 }, /* Cyrillic_de д CYRILLIC SMALL LETTER DE */ { 0x06c5, 0x0435 }, /* Cyrillic_ie е CYRILLIC SMALL LETTER IE */ { 0x06c6, 0x0444 }, /* Cyrillic_ef ф CYRILLIC SMALL LETTER EF */ { 0x06c7, 0x0433 }, /* Cyrillic_ghe г CYRILLIC SMALL LETTER GHE */ { 0x06c8, 0x0445 }, /* Cyrillic_ha х CYRILLIC SMALL LETTER HA */ { 0x06c9, 0x0438 }, /* Cyrillic_i и CYRILLIC SMALL LETTER I */ { 0x06ca, 0x0439 }, /* Cyrillic_shorti й CYRILLIC SMALL LETTER SHORT I */ { 0x06cb, 0x043a }, /* Cyrillic_ka к CYRILLIC SMALL LETTER KA */ { 0x06cc, 0x043b }, /* Cyrillic_el л CYRILLIC SMALL LETTER EL */ { 0x06cd, 0x043c }, /* Cyrillic_em м CYRILLIC SMALL LETTER EM */ { 0x06ce, 0x043d }, /* Cyrillic_en н CYRILLIC SMALL LETTER EN */ { 0x06cf, 0x043e }, /* Cyrillic_o о CYRILLIC SMALL LETTER O */ { 0x06d0, 0x043f }, /* Cyrillic_pe п 
CYRILLIC SMALL LETTER PE */ { 0x06d1, 0x044f }, /* Cyrillic_ya я CYRILLIC SMALL LETTER YA */ { 0x06d2, 0x0440 }, /* Cyrillic_er р CYRILLIC SMALL LETTER ER */ { 0x06d3, 0x0441 }, /* Cyrillic_es с CYRILLIC SMALL LETTER ES */ { 0x06d4, 0x0442 }, /* Cyrillic_te т CYRILLIC SMALL LETTER TE */ { 0x06d5, 0x0443 }, /* Cyrillic_u у CYRILLIC SMALL LETTER U */ { 0x06d6, 0x0436 }, /* Cyrillic_zhe ж CYRILLIC SMALL LETTER ZHE */ { 0x06d7, 0x0432 }, /* Cyrillic_ve в CYRILLIC SMALL LETTER VE */ { 0x06d8, 0x044c }, /* Cyrillic_softsign ь CYRILLIC SMALL LETTER SOFT SIGN */ { 0x06d9, 0x044b }, /* Cyrillic_yeru ы CYRILLIC SMALL LETTER YERU */ { 0x06da, 0x0437 }, /* Cyrillic_ze з CYRILLIC SMALL LETTER ZE */ { 0x06db, 0x0448 }, /* Cyrillic_sha ш CYRILLIC SMALL LETTER SHA */ { 0x06dc, 0x044d }, /* Cyrillic_e э CYRILLIC SMALL LETTER E */ { 0x06dd, 0x0449 }, /* Cyrillic_shcha щ CYRILLIC SMALL LETTER SHCHA */ { 0x06de, 0x0447 }, /* Cyrillic_che ч CYRILLIC SMALL LETTER CHE */ { 0x06df, 0x044a }, /* Cyrillic_hardsign ъ CYRILLIC SMALL LETTER HARD SIGN */ { 0x06e0, 0x042e }, /* Cyrillic_YU Ю CYRILLIC CAPITAL LETTER YU */ { 0x06e1, 0x0410 }, /* Cyrillic_A А CYRILLIC CAPITAL LETTER A */ { 0x06e2, 0x0411 }, /* Cyrillic_BE Б CYRILLIC CAPITAL LETTER BE */ { 0x06e3, 0x0426 }, /* Cyrillic_TSE Ц CYRILLIC CAPITAL LETTER TSE */ { 0x06e4, 0x0414 }, /* Cyrillic_DE Д CYRILLIC CAPITAL LETTER DE */ { 0x06e5, 0x0415 }, /* Cyrillic_IE Е CYRILLIC CAPITAL LETTER IE */ { 0x06e6, 0x0424 }, /* Cyrillic_EF Ф CYRILLIC CAPITAL LETTER EF */ { 0x06e7, 0x0413 }, /* Cyrillic_GHE Г CYRILLIC CAPITAL LETTER GHE */ { 0x06e8, 0x0425 }, /* Cyrillic_HA Х CYRILLIC CAPITAL LETTER HA */ { 0x06e9, 0x0418 }, /* Cyrillic_I И CYRILLIC CAPITAL LETTER I */ { 0x06ea, 0x0419 }, /* Cyrillic_SHORTI Й CYRILLIC CAPITAL LETTER SHORT I */ { 0x06eb, 0x041a }, /* Cyrillic_KA К CYRILLIC CAPITAL LETTER KA */ { 0x06ec, 0x041b }, /* Cyrillic_EL Л CYRILLIC CAPITAL LETTER EL */ { 0x06ed, 0x041c }, /* Cyrillic_EM М CYRILLIC CAPITAL LETTER EM */ { 0x06ee, 0x041d }, /* Cyrillic_EN Н CYRILLIC CAPITAL LETTER EN */ { 0x06ef, 0x041e }, /* Cyrillic_O О CYRILLIC CAPITAL LETTER O */ { 0x06f0, 0x041f }, /* Cyrillic_PE П CYRILLIC CAPITAL LETTER PE */ { 0x06f1, 0x042f }, /* Cyrillic_YA Я CYRILLIC CAPITAL LETTER YA */ { 0x06f2, 0x0420 }, /* Cyrillic_ER Р CYRILLIC CAPITAL LETTER ER */ { 0x06f3, 0x0421 }, /* Cyrillic_ES С CYRILLIC CAPITAL LETTER ES */ { 0x06f4, 0x0422 }, /* Cyrillic_TE Т CYRILLIC CAPITAL LETTER TE */ { 0x06f5, 0x0423 }, /* Cyrillic_U У CYRILLIC CAPITAL LETTER U */ { 0x06f6, 0x0416 }, /* Cyrillic_ZHE Ж CYRILLIC CAPITAL LETTER ZHE */ { 0x06f7, 0x0412 }, /* Cyrillic_VE В CYRILLIC CAPITAL LETTER VE */ { 0x06f8, 0x042c }, /* Cyrillic_SOFTSIGN Ь CYRILLIC CAPITAL LETTER SOFT SIGN */ { 0x06f9, 0x042b }, /* Cyrillic_YERU Ы CYRILLIC CAPITAL LETTER YERU */ { 0x06fa, 0x0417 }, /* Cyrillic_ZE З CYRILLIC CAPITAL LETTER ZE */ { 0x06fb, 0x0428 }, /* Cyrillic_SHA Ш CYRILLIC CAPITAL LETTER SHA */ { 0x06fc, 0x042d }, /* Cyrillic_E Э CYRILLIC CAPITAL LETTER E */ { 0x06fd, 0x0429 }, /* Cyrillic_SHCHA Щ CYRILLIC CAPITAL LETTER SHCHA */ { 0x06fe, 0x0427 }, /* Cyrillic_CHE Ч CYRILLIC CAPITAL LETTER CHE */ { 0x06ff, 0x042a }, /* Cyrillic_HARDSIGN Ъ CYRILLIC CAPITAL LETTER HARD SIGN */ { 0x07a1, 0x0386 }, /* Greek_ALPHAaccent Ά GREEK CAPITAL LETTER ALPHA WITH TONOS */ { 0x07a2, 0x0388 }, /* Greek_EPSILONaccent Έ GREEK CAPITAL LETTER EPSILON WITH TONOS */ { 0x07a3, 0x0389 }, /* Greek_ETAaccent Ή GREEK CAPITAL LETTER ETA WITH TONOS */ { 0x07a4, 0x038a }, /* Greek_IOTAaccent Ί GREEK CAPITAL LETTER IOTA WITH 
TONOS */ { 0x07a5, 0x03aa }, /* Greek_IOTAdiaeresis Ϊ GREEK CAPITAL LETTER IOTA WITH DIALYTIKA */ { 0x07a7, 0x038c }, /* Greek_OMICRONaccent Ό GREEK CAPITAL LETTER OMICRON WITH TONOS */ { 0x07a8, 0x038e }, /* Greek_UPSILONaccent Ύ GREEK CAPITAL LETTER UPSILON WITH TONOS */ { 0x07a9, 0x03ab }, /* Greek_UPSILONdieresis Ϋ GREEK CAPITAL LETTER UPSILON WITH DIALYTIKA */ { 0x07ab, 0x038f }, /* Greek_OMEGAaccent Ώ GREEK CAPITAL LETTER OMEGA WITH TONOS */ { 0x07ae, 0x0385 }, /* Greek_accentdieresis ΅ GREEK DIALYTIKA TONOS */ { 0x07af, 0x2015 }, /* Greek_horizbar ― HORIZONTAL BAR */ { 0x07b1, 0x03ac }, /* Greek_alphaaccent ά GREEK SMALL LETTER ALPHA WITH TONOS */ { 0x07b2, 0x03ad }, /* Greek_epsilonaccent έ GREEK SMALL LETTER EPSILON WITH TONOS */ { 0x07b3, 0x03ae }, /* Greek_etaaccent ή GREEK SMALL LETTER ETA WITH TONOS */ { 0x07b4, 0x03af }, /* Greek_iotaaccent ί GREEK SMALL LETTER IOTA WITH TONOS */ { 0x07b5, 0x03ca }, /* Greek_iotadieresis ϊ GREEK SMALL LETTER IOTA WITH DIALYTIKA */ { 0x07b6, 0x0390 }, /* Greek_iotaaccentdieresis ΐ GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS */ { 0x07b7, 0x03cc }, /* Greek_omicronaccent ό GREEK SMALL LETTER OMICRON WITH TONOS */ { 0x07b8, 0x03cd }, /* Greek_upsilonaccent ύ GREEK SMALL LETTER UPSILON WITH TONOS */ { 0x07b9, 0x03cb }, /* Greek_upsilondieresis ϋ GREEK SMALL LETTER UPSILON WITH DIALYTIKA */ { 0x07ba, 0x03b0 }, /* Greek_upsilonaccentdieresis ΰ GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND TONOS */ { 0x07bb, 0x03ce }, /* Greek_omegaaccent ώ GREEK SMALL LETTER OMEGA WITH TONOS */ { 0x07c1, 0x0391 }, /* Greek_ALPHA Α GREEK CAPITAL LETTER ALPHA */ { 0x07c2, 0x0392 }, /* Greek_BETA Β GREEK CAPITAL LETTER BETA */ { 0x07c3, 0x0393 }, /* Greek_GAMMA Γ GREEK CAPITAL LETTER GAMMA */ { 0x07c4, 0x0394 }, /* Greek_DELTA Δ GREEK CAPITAL LETTER DELTA */ { 0x07c5, 0x0395 }, /* Greek_EPSILON Ε GREEK CAPITAL LETTER EPSILON */ { 0x07c6, 0x0396 }, /* Greek_ZETA Ζ GREEK CAPITAL LETTER ZETA */ { 0x07c7, 0x0397 }, /* Greek_ETA Η GREEK CAPITAL LETTER ETA */ { 0x07c8, 0x0398 }, /* Greek_THETA Θ GREEK CAPITAL LETTER THETA */ { 0x07c9, 0x0399 }, /* Greek_IOTA Ι GREEK CAPITAL LETTER IOTA */ { 0x07ca, 0x039a }, /* Greek_KAPPA Κ GREEK CAPITAL LETTER KAPPA */ { 0x07cb, 0x039b }, /* Greek_LAMBDA Λ GREEK CAPITAL LETTER LAMDA */ { 0x07cc, 0x039c }, /* Greek_MU Μ GREEK CAPITAL LETTER MU */ { 0x07cd, 0x039d }, /* Greek_NU Ν GREEK CAPITAL LETTER NU */ { 0x07ce, 0x039e }, /* Greek_XI Ξ GREEK CAPITAL LETTER XI */ { 0x07cf, 0x039f }, /* Greek_OMICRON Ο GREEK CAPITAL LETTER OMICRON */ { 0x07d0, 0x03a0 }, /* Greek_PI Π GREEK CAPITAL LETTER PI */ { 0x07d1, 0x03a1 }, /* Greek_RHO Ρ GREEK CAPITAL LETTER RHO */ { 0x07d2, 0x03a3 }, /* Greek_SIGMA Σ GREEK CAPITAL LETTER SIGMA */ { 0x07d4, 0x03a4 }, /* Greek_TAU Τ GREEK CAPITAL LETTER TAU */ { 0x07d5, 0x03a5 }, /* Greek_UPSILON Υ GREEK CAPITAL LETTER UPSILON */ { 0x07d6, 0x03a6 }, /* Greek_PHI Φ GREEK CAPITAL LETTER PHI */ { 0x07d7, 0x03a7 }, /* Greek_CHI Χ GREEK CAPITAL LETTER CHI */ { 0x07d8, 0x03a8 }, /* Greek_PSI Ψ GREEK CAPITAL LETTER PSI */ { 0x07d9, 0x03a9 }, /* Greek_OMEGA Ω GREEK CAPITAL LETTER OMEGA */ { 0x07e1, 0x03b1 }, /* Greek_alpha α GREEK SMALL LETTER ALPHA */ { 0x07e2, 0x03b2 }, /* Greek_beta β GREEK SMALL LETTER BETA */ { 0x07e3, 0x03b3 }, /* Greek_gamma γ GREEK SMALL LETTER GAMMA */ { 0x07e4, 0x03b4 }, /* Greek_delta δ GREEK SMALL LETTER DELTA */ { 0x07e5, 0x03b5 }, /* Greek_epsilon ε GREEK SMALL LETTER EPSILON */ { 0x07e6, 0x03b6 }, /* Greek_zeta ζ GREEK SMALL LETTER ZETA */ { 0x07e7, 0x03b7 }, /* Greek_eta η GREEK SMALL 
LETTER ETA */ { 0x07e8, 0x03b8 }, /* Greek_theta θ GREEK SMALL LETTER THETA */ { 0x07e9, 0x03b9 }, /* Greek_iota ι GREEK SMALL LETTER IOTA */ { 0x07ea, 0x03ba }, /* Greek_kappa κ GREEK SMALL LETTER KAPPA */ { 0x07eb, 0x03bb }, /* Greek_lambda λ GREEK SMALL LETTER LAMDA */ { 0x07ec, 0x03bc }, /* Greek_mu μ GREEK SMALL LETTER MU */ { 0x07ed, 0x03bd }, /* Greek_nu ν GREEK SMALL LETTER NU */ { 0x07ee, 0x03be }, /* Greek_xi ξ GREEK SMALL LETTER XI */ { 0x07ef, 0x03bf }, /* Greek_omicron ο GREEK SMALL LETTER OMICRON */ { 0x07f0, 0x03c0 }, /* Greek_pi π GREEK SMALL LETTER PI */ { 0x07f1, 0x03c1 }, /* Greek_rho ρ GREEK SMALL LETTER RHO */ { 0x07f2, 0x03c3 }, /* Greek_sigma σ GREEK SMALL LETTER SIGMA */ { 0x07f3, 0x03c2 }, /* Greek_finalsmallsigma ς GREEK SMALL LETTER FINAL SIGMA */ { 0x07f4, 0x03c4 }, /* Greek_tau τ GREEK SMALL LETTER TAU */ { 0x07f5, 0x03c5 }, /* Greek_upsilon υ GREEK SMALL LETTER UPSILON */ { 0x07f6, 0x03c6 }, /* Greek_phi φ GREEK SMALL LETTER PHI */ { 0x07f7, 0x03c7 }, /* Greek_chi χ GREEK SMALL LETTER CHI */ { 0x07f8, 0x03c8 }, /* Greek_psi ψ GREEK SMALL LETTER PSI */ { 0x07f9, 0x03c9 }, /* Greek_omega ω GREEK SMALL LETTER OMEGA */ { 0x08a1, 0x23b7 }, /* leftradical ⎷ ??? */ { 0x08a2, 0x250c }, /* topleftradical ┌ BOX DRAWINGS LIGHT DOWN AND RIGHT */ { 0x08a3, 0x2500 }, /* horizconnector ─ BOX DRAWINGS LIGHT HORIZONTAL */ { 0x08a4, 0x2320 }, /* topintegral ⌠ TOP HALF INTEGRAL */ { 0x08a5, 0x2321 }, /* botintegral ⌡ BOTTOM HALF INTEGRAL */ { 0x08a6, 0x2502 }, /* vertconnector │ BOX DRAWINGS LIGHT VERTICAL */ { 0x08a7, 0x23a1 }, /* topleftsqbracket ⎡ ??? */ { 0x08a8, 0x23a3 }, /* botleftsqbracket ⎣ ??? */ { 0x08a9, 0x23a4 }, /* toprightsqbracket ⎤ ??? */ { 0x08aa, 0x23a6 }, /* botrightsqbracket ⎦ ??? */ { 0x08ab, 0x239b }, /* topleftparens ⎛ ??? */ { 0x08ac, 0x239d }, /* botleftparens ⎝ ??? */ { 0x08ad, 0x239e }, /* toprightparens ⎞ ??? */ { 0x08ae, 0x23a0 }, /* botrightparens ⎠ ??? */ { 0x08af, 0x23a8 }, /* leftmiddlecurlybrace ⎨ ??? */ { 0x08b0, 0x23ac }, /* rightmiddlecurlybrace ⎬ ??? */ /* 0x08b1 topleftsummation ? ??? */ /* 0x08b2 botleftsummation ? ??? */ /* 0x08b3 topvertsummationconnector ? ??? */ /* 0x08b4 botvertsummationconnector ? ??? */ /* 0x08b5 toprightsummation ? ??? */ /* 0x08b6 botrightsummation ? ??? */ /* 0x08b7 rightmiddlesummation ? ??? 
*/ { 0x08bc, 0x2264 }, /* lessthanequal ≤ LESS-THAN OR ETQUAL TO */ { 0x08bd, 0x2260 }, /* notequal ≠ NOT ETQUAL TO */ { 0x08be, 0x2265 }, /* greaterthanequal ≥ GREATER-THAN OR ETQUAL TO */ { 0x08bf, 0x222b }, /* integral ∫ INTEGRAL */ { 0x08c0, 0x2234 }, /* therefore ∴ THEREFORE */ { 0x08c1, 0x221d }, /* variation ∝ PROPORTIONAL TO */ { 0x08c2, 0x221e }, /* infinity ∞ INFINITY */ { 0x08c5, 0x2207 }, /* nabla ∇ NABLA */ { 0x08c8, 0x223c }, /* approximate ∼ TILDE OPERATOR */ { 0x08c9, 0x2243 }, /* similarequal ≃ ASYMPTOTICALLY ETQUAL TO */ { 0x08cd, 0x21d4 }, /* ifonlyif ⇔ LEFT RIGHT DOUBLE ARROW */ { 0x08ce, 0x21d2 }, /* implies ⇒ RIGHTWARDS DOUBLE ARROW */ { 0x08cf, 0x2261 }, /* identical ≡ IDENTICAL TO */ { 0x08d6, 0x221a }, /* radical √ SQUARE ROOT */ { 0x08da, 0x2282 }, /* includedin ⊂ SUBSET OF */ { 0x08db, 0x2283 }, /* includes ⊃ SUPERSET OF */ { 0x08dc, 0x2229 }, /* intersection ∩ INTERSECTION */ { 0x08dd, 0x222a }, /* union ∪ UNION */ { 0x08de, 0x2227 }, /* logicaland ∧ LOGICAL AND */ { 0x08df, 0x2228 }, /* logicalor ∨ LOGICAL OR */ { 0x08ef, 0x2202 }, /* partialderivative ∂ PARTIAL DIFFERENTIAL */ { 0x08f6, 0x0192 }, /* function ƒ LATIN SMALL LETTER F WITH HOOK */ { 0x08fb, 0x2190 }, /* leftarrow ← LEFTWARDS ARROW */ { 0x08fc, 0x2191 }, /* uparrow ↑ UPWARDS ARROW */ { 0x08fd, 0x2192 }, /* rightarrow → RIGHTWARDS ARROW */ { 0x08fe, 0x2193 }, /* downarrow ↓ DOWNWARDS ARROW */ /* 0x09df blank ? ??? */ { 0x09e0, 0x25c6 }, /* soliddiamond ◆ BLACK DIAMOND */ { 0x09e1, 0x2592 }, /* checkerboard ▒ MEDIUM SHADE */ { 0x09e2, 0x2409 }, /* ht ␉ SYMBOL FOR HORIZONTAL TABULATION */ { 0x09e3, 0x240c }, /* ff ␌ SYMBOL FOR FORM FEED */ { 0x09e4, 0x240d }, /* cr ␍ SYMBOL FOR CARRIAGE RETURN */ { 0x09e5, 0x240a }, /* lf ␊ SYMBOL FOR LINE FEED */ { 0x09e8, 0x2424 }, /* nl ␤ SYMBOL FOR NEWLINE */ { 0x09e9, 0x240b }, /* vt ␋ SYMBOL FOR VERTICAL TABULATION */ { 0x09ea, 0x2518 }, /* lowrightcorner ┘ BOX DRAWINGS LIGHT UP AND LEFT */ { 0x09eb, 0x2510 }, /* uprightcorner ┐ BOX DRAWINGS LIGHT DOWN AND LEFT */ { 0x09ec, 0x250c }, /* upleftcorner ┌ BOX DRAWINGS LIGHT DOWN AND RIGHT */ { 0x09ed, 0x2514 }, /* lowleftcorner └ BOX DRAWINGS LIGHT UP AND RIGHT */ { 0x09ee, 0x253c }, /* crossinglines ┼ BOX DRAWINGS LIGHT VERTICAL AND HORIZONTAL */ { 0x09ef, 0x23ba }, /* horizlinescan1 ⎺ HORIZONTAL SCAN LINE-1 (Unicode 3.2 draft) */ { 0x09f0, 0x23bb }, /* horizlinescan3 ⎻ HORIZONTAL SCAN LINE-3 (Unicode 3.2 draft) */ { 0x09f1, 0x2500 }, /* horizlinescan5 ─ BOX DRAWINGS LIGHT HORIZONTAL */ { 0x09f2, 0x23bc }, /* horizlinescan7 ⎼ HORIZONTAL SCAN LINE-7 (Unicode 3.2 draft) */ { 0x09f3, 0x23bd }, /* horizlinescan9 ⎽ HORIZONTAL SCAN LINE-9 (Unicode 3.2 draft) */ { 0x09f4, 0x251c }, /* leftt ├ BOX DRAWINGS LIGHT VERTICAL AND RIGHT */ { 0x09f5, 0x2524 }, /* rightt ┤ BOX DRAWINGS LIGHT VERTICAL AND LEFT */ { 0x09f6, 0x2534 }, /* bott ┴ BOX DRAWINGS LIGHT UP AND HORIZONTAL */ { 0x09f7, 0x252c }, /* topt ┬ BOX DRAWINGS LIGHT DOWN AND HORIZONTAL */ { 0x09f8, 0x2502 }, /* vertbar │ BOX DRAWINGS LIGHT VERTICAL */ { 0x0aa1, 0x2003 }, /* emspace   EM SPACE */ { 0x0aa2, 0x2002 }, /* enspace   EN SPACE */ { 0x0aa3, 0x2004 }, /* em3space   THREE-PER-EM SPACE */ { 0x0aa4, 0x2005 }, /* em4space   FOUR-PER-EM SPACE */ { 0x0aa5, 0x2007 }, /* digitspace   FIGURE SPACE */ { 0x0aa6, 0x2008 }, /* punctspace   PUNCTUATION SPACE */ { 0x0aa7, 0x2009 }, /* thinspace   THIN SPACE */ { 0x0aa8, 0x200a }, /* hairspace   HAIR SPACE */ { 0x0aa9, 0x2014 }, /* emdash — EM DASH */ { 0x0aaa, 0x2013 }, /* endash – EN DASH */ /* 0x0aac signifblank ? ??? 
*/ { 0x0aae, 0x2026 }, /* ellipsis … HORIZONTAL ELLIPSIS */ { 0x0aaf, 0x2025 }, /* doubbaselinedot ‥ TWO DOT LEADER */ { 0x0ab0, 0x2153 }, /* onethird ⅓ VULGAR FRACTION ONE THIRD */ { 0x0ab1, 0x2154 }, /* twothirds ⅔ VULGAR FRACTION TWO THIRDS */ { 0x0ab2, 0x2155 }, /* onefifth ⅕ VULGAR FRACTION ONE FIFTH */ { 0x0ab3, 0x2156 }, /* twofifths ⅖ VULGAR FRACTION TWO FIFTHS */ { 0x0ab4, 0x2157 }, /* threefifths ⅗ VULGAR FRACTION THREE FIFTHS */ { 0x0ab5, 0x2158 }, /* fourfifths ⅘ VULGAR FRACTION FOUR FIFTHS */ { 0x0ab6, 0x2159 }, /* onesixth ⅙ VULGAR FRACTION ONE SIXTH */ { 0x0ab7, 0x215a }, /* fivesixths ⅚ VULGAR FRACTION FIVE SIXTHS */ { 0x0ab8, 0x2105 }, /* careof ℅ CARE OF */ { 0x0abb, 0x2012 }, /* figdash ‒ FIGURE DASH */ { 0x0abc, 0x2329 }, /* leftanglebracket 〈 LEFT-POINTING ANGLE BRACKET */ /* 0x0abd decimalpoint ? ??? */ { 0x0abe, 0x232a }, /* rightanglebracket 〉 RIGHT-POINTING ANGLE BRACKET */ /* 0x0abf marker ? ??? */ { 0x0ac3, 0x215b }, /* oneeighth ⅛ VULGAR FRACTION ONE EIGHTH */ { 0x0ac4, 0x215c }, /* threeeighths ⅜ VULGAR FRACTION THREE EIGHTHS */ { 0x0ac5, 0x215d }, /* fiveeighths ⅝ VULGAR FRACTION FIVE EIGHTHS */ { 0x0ac6, 0x215e }, /* seveneighths ⅞ VULGAR FRACTION SEVEN EIGHTHS */ { 0x0ac9, 0x2122 }, /* trademark ™ TRADE MARK SIGN */ { 0x0aca, 0x2613 }, /* signaturemark ☓ SALTIRE */ /* 0x0acb trademarkincircle ? ??? */ { 0x0acc, 0x25c1 }, /* leftopentriangle ◁ WHITE LEFT-POINTING TRIANGLE */ { 0x0acd, 0x25b7 }, /* rightopentriangle ▷ WHITE RIGHT-POINTING TRIANGLE */ { 0x0ace, 0x25cb }, /* emopencircle ○ WHITE CIRCLE */ { 0x0acf, 0x25af }, /* emopenrectangle ▯ WHITE VERTICAL RECTANGLE */ { 0x0ad0, 0x2018 }, /* leftsinglequotemark ‘ LEFT SINGLE TQUOTATION MARK */ { 0x0ad1, 0x2019 }, /* rightsinglequotemark ’ RIGHT SINGLE TQUOTATION MARK */ { 0x0ad2, 0x201c }, /* leftdoublequotemark “ LEFT DOUBLE TQUOTATION MARK */ { 0x0ad3, 0x201d }, /* rightdoublequotemark ” RIGHT DOUBLE TQUOTATION MARK */ { 0x0ad4, 0x211e }, /* prescription ℞ PRESCRIPTION TAKE */ { 0x0ad6, 0x2032 }, /* minutes ′ PRIME */ { 0x0ad7, 0x2033 }, /* seconds ″ DOUBLE PRIME */ { 0x0ad9, 0x271d }, /* latincross ✝ LATIN CROSS */ /* 0x0ada hexagram ? ??? 
*/ { 0x0adb, 0x25ac }, /* filledrectbullet ▬ BLACK RECTANGLE */ { 0x0adc, 0x25c0 }, /* filledlefttribullet ◀ BLACK LEFT-POINTING TRIANGLE */ { 0x0add, 0x25b6 }, /* filledrighttribullet ▶ BLACK RIGHT-POINTING TRIANGLE */ { 0x0ade, 0x25cf }, /* emfilledcircle ● BLACK CIRCLE */ { 0x0adf, 0x25ae }, /* emfilledrect ▮ BLACK VERTICAL RECTANGLE */ { 0x0ae0, 0x25e6 }, /* enopencircbullet ◦ WHITE BULLET */ { 0x0ae1, 0x25ab }, /* enopensquarebullet ▫ WHITE SMALL SQUARE */ { 0x0ae2, 0x25ad }, /* openrectbullet ▭ WHITE RECTANGLE */ { 0x0ae3, 0x25b3 }, /* opentribulletup △ WHITE UP-POINTING TRIANGLE */ { 0x0ae4, 0x25bd }, /* opentribulletdown ▽ WHITE DOWN-POINTING TRIANGLE */ { 0x0ae5, 0x2606 }, /* openstar ☆ WHITE STAR */ { 0x0ae6, 0x2022 }, /* enfilledcircbullet • BULLET */ { 0x0ae7, 0x25aa }, /* enfilledsqbullet ▪ BLACK SMALL SQUARE */ { 0x0ae8, 0x25b2 }, /* filledtribulletup ▲ BLACK UP-POINTING TRIANGLE */ { 0x0ae9, 0x25bc }, /* filledtribulletdown ▼ BLACK DOWN-POINTING TRIANGLE */ { 0x0aea, 0x261c }, /* leftpointer ☜ WHITE LEFT POINTING INDEX */ { 0x0aeb, 0x261e }, /* rightpointer ☞ WHITE RIGHT POINTING INDEX */ { 0x0aec, 0x2663 }, /* club ♣ BLACK CLUB SUIT */ { 0x0aed, 0x2666 }, /* diamond ♦ BLACK DIAMOND SUIT */ { 0x0aee, 0x2665 }, /* heart ♥ BLACK HEART SUIT */ { 0x0af0, 0x2720 }, /* maltesecross ✠ MALTESE CROSS */ { 0x0af1, 0x2020 }, /* dagger † DAGGER */ { 0x0af2, 0x2021 }, /* doubledagger ‡ DOUBLE DAGGER */ { 0x0af3, 0x2713 }, /* checkmark ✓ CHECK MARK */ { 0x0af4, 0x2717 }, /* ballotcross ✗ BALLOT X */ { 0x0af5, 0x266f }, /* musicalsharp ♯ MUSIC SHARP SIGN */ { 0x0af6, 0x266d }, /* musicalflat ♭ MUSIC FLAT SIGN */ { 0x0af7, 0x2642 }, /* malesymbol ♂ MALE SIGN */ { 0x0af8, 0x2640 }, /* femalesymbol ♀ FEMALE SIGN */ { 0x0af9, 0x260e }, /* telephone ☎ BLACK TELEPHONE */ { 0x0afa, 0x2315 }, /* telephonerecorder ⌕ TELEPHONE RECORDER */ { 0x0afb, 0x2117 }, /* phonographcopyright ℗ SOUND RECORDING COPYRIGHT */ { 0x0afc, 0x2038 }, /* caret ‸ CARET */ { 0x0afd, 0x201a }, /* singlelowquotemark ‚ SINGLE LOW-9 TQUOTATION MARK */ { 0x0afe, 0x201e }, /* doublelowquotemark „ DOUBLE LOW-9 TQUOTATION MARK */ /* 0x0aff cursor ? ??? 
*/ { 0x0ba3, 0x003c }, /* leftcaret < LESS-THAN SIGN */ { 0x0ba6, 0x003e }, /* rightcaret > GREATER-THAN SIGN */ { 0x0ba8, 0x2228 }, /* downcaret ∨ LOGICAL OR */ { 0x0ba9, 0x2227 }, /* upcaret ∧ LOGICAL AND */ { 0x0bc0, 0x00af }, /* overbar ¯ MACRON */ { 0x0bc2, 0x22a5 }, /* downtack ⊥ UP TACK */ { 0x0bc3, 0x2229 }, /* upshoe ∩ INTERSECTION */ { 0x0bc4, 0x230a }, /* downstile ⌊ LEFT FLOOR */ { 0x0bc6, 0x005f }, /* underbar _ LOW LINE */ { 0x0bca, 0x2218 }, /* jot ∘ RING OPERATOR */ { 0x0bcc, 0x2395 }, /* quad ⎕ APL FUNCTIONAL SYMBOL TQUAD */ { 0x0bce, 0x22a4 }, /* uptack ⊤ DOWN TACK */ { 0x0bcf, 0x25cb }, /* circle ○ WHITE CIRCLE */ { 0x0bd3, 0x2308 }, /* upstile ⌈ LEFT CEILING */ { 0x0bd6, 0x222a }, /* downshoe ∪ UNION */ { 0x0bd8, 0x2283 }, /* rightshoe ⊃ SUPERSET OF */ { 0x0bda, 0x2282 }, /* leftshoe ⊂ SUBSET OF */ { 0x0bdc, 0x22a2 }, /* lefttack ⊢ RIGHT TACK */ { 0x0bfc, 0x22a3 }, /* righttack ⊣ LEFT TACK */ { 0x0cdf, 0x2017 }, /* hebrew_doublelowline ‗ DOUBLE LOW LINE */ { 0x0ce0, 0x05d0 }, /* hebrew_aleph א HEBREW LETTER ALEF */ { 0x0ce1, 0x05d1 }, /* hebrew_bet ב HEBREW LETTER BET */ { 0x0ce2, 0x05d2 }, /* hebrew_gimel ג HEBREW LETTER GIMEL */ { 0x0ce3, 0x05d3 }, /* hebrew_dalet ד HEBREW LETTER DALET */ { 0x0ce4, 0x05d4 }, /* hebrew_he ה HEBREW LETTER HE */ { 0x0ce5, 0x05d5 }, /* hebrew_waw ו HEBREW LETTER VAV */ { 0x0ce6, 0x05d6 }, /* hebrew_zain ז HEBREW LETTER ZAYIN */ { 0x0ce7, 0x05d7 }, /* hebrew_chet ח HEBREW LETTER HET */ { 0x0ce8, 0x05d8 }, /* hebrew_tet ט HEBREW LETTER TET */ { 0x0ce9, 0x05d9 }, /* hebrew_yod י HEBREW LETTER YOD */ { 0x0cea, 0x05da }, /* hebrew_finalkaph ך HEBREW LETTER FINAL KAF */ { 0x0ceb, 0x05db }, /* hebrew_kaph כ HEBREW LETTER KAF */ { 0x0cec, 0x05dc }, /* hebrew_lamed ל HEBREW LETTER LAMED */ { 0x0ced, 0x05dd }, /* hebrew_finalmem ם HEBREW LETTER FINAL MEM */ { 0x0cee, 0x05de }, /* hebrew_mem מ HEBREW LETTER MEM */ { 0x0cef, 0x05df }, /* hebrew_finalnun ן HEBREW LETTER FINAL NUN */ { 0x0cf0, 0x05e0 }, /* hebrew_nun נ HEBREW LETTER NUN */ { 0x0cf1, 0x05e1 }, /* hebrew_samech ס HEBREW LETTER SAMEKH */ { 0x0cf2, 0x05e2 }, /* hebrew_ayin ע HEBREW LETTER AYIN */ { 0x0cf3, 0x05e3 }, /* hebrew_finalpe ף HEBREW LETTER FINAL PE */ { 0x0cf4, 0x05e4 }, /* hebrew_pe פ HEBREW LETTER PE */ { 0x0cf5, 0x05e5 }, /* hebrew_finalzade ץ HEBREW LETTER FINAL TSADI */ { 0x0cf6, 0x05e6 }, /* hebrew_zade צ HEBREW LETTER TSADI */ { 0x0cf7, 0x05e7 }, /* hebrew_qoph ק HEBREW LETTER TQOF */ { 0x0cf8, 0x05e8 }, /* hebrew_resh ר HEBREW LETTER RESH */ { 0x0cf9, 0x05e9 }, /* hebrew_shin ש HEBREW LETTER SHIN */ { 0x0cfa, 0x05ea }, /* hebrew_taw ת HEBREW LETTER TAV */ { 0x0da1, 0x0e01 }, /* Thai_kokai ก THAI CHARACTER KO KAI */ { 0x0da2, 0x0e02 }, /* Thai_khokhai ข THAI CHARACTER KHO KHAI */ { 0x0da3, 0x0e03 }, /* Thai_khokhuat ฃ THAI CHARACTER KHO KHUAT */ { 0x0da4, 0x0e04 }, /* Thai_khokhwai ค THAI CHARACTER KHO KHWAI */ { 0x0da5, 0x0e05 }, /* Thai_khokhon ฅ THAI CHARACTER KHO KHON */ { 0x0da6, 0x0e06 }, /* Thai_khorakhang ฆ THAI CHARACTER KHO RAKHANG */ { 0x0da7, 0x0e07 }, /* Thai_ngongu ง THAI CHARACTER NGO NGU */ { 0x0da8, 0x0e08 }, /* Thai_chochan จ THAI CHARACTER CHO CHAN */ { 0x0da9, 0x0e09 }, /* Thai_choching ฉ THAI CHARACTER CHO CHING */ { 0x0daa, 0x0e0a }, /* Thai_chochang ช THAI CHARACTER CHO CHANG */ { 0x0dab, 0x0e0b }, /* Thai_soso ซ THAI CHARACTER SO SO */ { 0x0dac, 0x0e0c }, /* Thai_chochoe ฌ THAI CHARACTER CHO CHOE */ { 0x0dad, 0x0e0d }, /* Thai_yoying ญ THAI CHARACTER YO YING */ { 0x0dae, 0x0e0e }, /* Thai_dochada ฎ THAI CHARACTER DO CHADA */ { 0x0daf, 0x0e0f }, /* 
Thai_topatak ฏ THAI CHARACTER TO PATAK */ { 0x0db0, 0x0e10 }, /* Thai_thothan ฐ THAI CHARACTER THO THAN */ { 0x0db1, 0x0e11 }, /* Thai_thonangmontho ฑ THAI CHARACTER THO NANGMONTHO */ { 0x0db2, 0x0e12 }, /* Thai_thophuthao ฒ THAI CHARACTER THO PHUTHAO */ { 0x0db3, 0x0e13 }, /* Thai_nonen ณ THAI CHARACTER NO NEN */ { 0x0db4, 0x0e14 }, /* Thai_dodek ด THAI CHARACTER DO DEK */ { 0x0db5, 0x0e15 }, /* Thai_totao ต THAI CHARACTER TO TAO */ { 0x0db6, 0x0e16 }, /* Thai_thothung ถ THAI CHARACTER THO THUNG */ { 0x0db7, 0x0e17 }, /* Thai_thothahan ท THAI CHARACTER THO THAHAN */ { 0x0db8, 0x0e18 }, /* Thai_thothong ธ THAI CHARACTER THO THONG */ { 0x0db9, 0x0e19 }, /* Thai_nonu น THAI CHARACTER NO NU */ { 0x0dba, 0x0e1a }, /* Thai_bobaimai บ THAI CHARACTER BO BAIMAI */ { 0x0dbb, 0x0e1b }, /* Thai_popla ป THAI CHARACTER PO PLA */ { 0x0dbc, 0x0e1c }, /* Thai_phophung ผ THAI CHARACTER PHO PHUNG */ { 0x0dbd, 0x0e1d }, /* Thai_fofa ฝ THAI CHARACTER FO FA */ { 0x0dbe, 0x0e1e }, /* Thai_phophan พ THAI CHARACTER PHO PHAN */ { 0x0dbf, 0x0e1f }, /* Thai_fofan ฟ THAI CHARACTER FO FAN */ { 0x0dc0, 0x0e20 }, /* Thai_phosamphao ภ THAI CHARACTER PHO SAMPHAO */ { 0x0dc1, 0x0e21 }, /* Thai_moma ม THAI CHARACTER MO MA */ { 0x0dc2, 0x0e22 }, /* Thai_yoyak ย THAI CHARACTER YO YAK */ { 0x0dc3, 0x0e23 }, /* Thai_rorua ร THAI CHARACTER RO RUA */ { 0x0dc4, 0x0e24 }, /* Thai_ru ฤ THAI CHARACTER RU */ { 0x0dc5, 0x0e25 }, /* Thai_loling ล THAI CHARACTER LO LING */ { 0x0dc6, 0x0e26 }, /* Thai_lu ฦ THAI CHARACTER LU */ { 0x0dc7, 0x0e27 }, /* Thai_wowaen ว THAI CHARACTER WO WAEN */ { 0x0dc8, 0x0e28 }, /* Thai_sosala ศ THAI CHARACTER SO SALA */ { 0x0dc9, 0x0e29 }, /* Thai_sorusi ษ THAI CHARACTER SO RUSI */ { 0x0dca, 0x0e2a }, /* Thai_sosua ส THAI CHARACTER SO SUA */ { 0x0dcb, 0x0e2b }, /* Thai_hohip ห THAI CHARACTER HO HIP */ { 0x0dcc, 0x0e2c }, /* Thai_lochula ฬ THAI CHARACTER LO CHULA */ { 0x0dcd, 0x0e2d }, /* Thai_oang อ THAI CHARACTER O ANG */ { 0x0dce, 0x0e2e }, /* Thai_honokhuk ฮ THAI CHARACTER HO NOKHUK */ { 0x0dcf, 0x0e2f }, /* Thai_paiyannoi ฯ THAI CHARACTER PAIYANNOI */ { 0x0dd0, 0x0e30 }, /* Thai_saraa ะ THAI CHARACTER SARA A */ { 0x0dd1, 0x0e31 }, /* Thai_maihanakat ั THAI CHARACTER MAI HAN-AKAT */ { 0x0dd2, 0x0e32 }, /* Thai_saraaa า THAI CHARACTER SARA AA */ { 0x0dd3, 0x0e33 }, /* Thai_saraam ำ THAI CHARACTER SARA AM */ { 0x0dd4, 0x0e34 }, /* Thai_sarai ิ THAI CHARACTER SARA I */ { 0x0dd5, 0x0e35 }, /* Thai_saraii ี THAI CHARACTER SARA II */ { 0x0dd6, 0x0e36 }, /* Thai_saraue ึ THAI CHARACTER SARA UE */ { 0x0dd7, 0x0e37 }, /* Thai_sarauee ื THAI CHARACTER SARA UEE */ { 0x0dd8, 0x0e38 }, /* Thai_sarau ุ THAI CHARACTER SARA U */ { 0x0dd9, 0x0e39 }, /* Thai_sarauu ู THAI CHARACTER SARA UU */ { 0x0dda, 0x0e3a }, /* Thai_phinthu ฺ THAI CHARACTER PHINTHU */ /* 0x0dde Thai_maihanakat_maitho ? ??? 
*/ { 0x0ddf, 0x0e3f }, /* Thai_baht ฿ THAI CURRENCY SYMBOL BAHT */ { 0x0de0, 0x0e40 }, /* Thai_sarae เ THAI CHARACTER SARA E */ { 0x0de1, 0x0e41 }, /* Thai_saraae แ THAI CHARACTER SARA AE */ { 0x0de2, 0x0e42 }, /* Thai_sarao โ THAI CHARACTER SARA O */ { 0x0de3, 0x0e43 }, /* Thai_saraaimaimuan ใ THAI CHARACTER SARA AI MAIMUAN */ { 0x0de4, 0x0e44 }, /* Thai_saraaimaimalai ไ THAI CHARACTER SARA AI MAIMALAI */ { 0x0de5, 0x0e45 }, /* Thai_lakkhangyao ๅ THAI CHARACTER LAKKHANGYAO */ { 0x0de6, 0x0e46 }, /* Thai_maiyamok ๆ THAI CHARACTER MAIYAMOK */ { 0x0de7, 0x0e47 }, /* Thai_maitaikhu ็ THAI CHARACTER MAITAIKHU */ { 0x0de8, 0x0e48 }, /* Thai_maiek ่ THAI CHARACTER MAI EK */ { 0x0de9, 0x0e49 }, /* Thai_maitho ้ THAI CHARACTER MAI THO */ { 0x0dea, 0x0e4a }, /* Thai_maitri ๊ THAI CHARACTER MAI TRI */ { 0x0deb, 0x0e4b }, /* Thai_maichattawa ๋ THAI CHARACTER MAI CHATTAWA */ { 0x0dec, 0x0e4c }, /* Thai_thanthakhat ์ THAI CHARACTER THANTHAKHAT */ { 0x0ded, 0x0e4d }, /* Thai_nikhahit ํ THAI CHARACTER NIKHAHIT */ { 0x0df0, 0x0e50 }, /* Thai_leksun ๐ THAI DIGIT ZERO */ { 0x0df1, 0x0e51 }, /* Thai_leknung ๑ THAI DIGIT ONE */ { 0x0df2, 0x0e52 }, /* Thai_leksong ๒ THAI DIGIT TWO */ { 0x0df3, 0x0e53 }, /* Thai_leksam ๓ THAI DIGIT THREE */ { 0x0df4, 0x0e54 }, /* Thai_leksi ๔ THAI DIGIT FOUR */ { 0x0df5, 0x0e55 }, /* Thai_lekha ๕ THAI DIGIT FIVE */ { 0x0df6, 0x0e56 }, /* Thai_lekhok ๖ THAI DIGIT SIX */ { 0x0df7, 0x0e57 }, /* Thai_lekchet ๗ THAI DIGIT SEVEN */ { 0x0df8, 0x0e58 }, /* Thai_lekpaet ๘ THAI DIGIT EIGHT */ { 0x0df9, 0x0e59 }, /* Thai_lekkao ๙ THAI DIGIT NINE */ { 0x0ea1, 0x3131 }, /* Hangul_Kiyeog ㄱ HANGUL LETTER KIYEOK */ { 0x0ea2, 0x3132 }, /* Hangul_SsangKiyeog ㄲ HANGUL LETTER SSANGKIYEOK */ { 0x0ea3, 0x3133 }, /* Hangul_KiyeogSios ㄳ HANGUL LETTER KIYEOK-SIOS */ { 0x0ea4, 0x3134 }, /* Hangul_Nieun ㄴ HANGUL LETTER NIEUN */ { 0x0ea5, 0x3135 }, /* Hangul_NieunJieuj ㄵ HANGUL LETTER NIEUN-CIEUC */ { 0x0ea6, 0x3136 }, /* Hangul_NieunHieuh ㄶ HANGUL LETTER NIEUN-HIEUH */ { 0x0ea7, 0x3137 }, /* Hangul_Dikeud ㄷ HANGUL LETTER TIKEUT */ { 0x0ea8, 0x3138 }, /* Hangul_SsangDikeud ㄸ HANGUL LETTER SSANGTIKEUT */ { 0x0ea9, 0x3139 }, /* Hangul_Rieul ㄹ HANGUL LETTER RIEUL */ { 0x0eaa, 0x313a }, /* Hangul_RieulKiyeog ㄺ HANGUL LETTER RIEUL-KIYEOK */ { 0x0eab, 0x313b }, /* Hangul_RieulMieum ㄻ HANGUL LETTER RIEUL-MIEUM */ { 0x0eac, 0x313c }, /* Hangul_RieulPieub ㄼ HANGUL LETTER RIEUL-PIEUP */ { 0x0ead, 0x313d }, /* Hangul_RieulSios ㄽ HANGUL LETTER RIEUL-SIOS */ { 0x0eae, 0x313e }, /* Hangul_RieulTieut ㄾ HANGUL LETTER RIEUL-THIEUTH */ { 0x0eaf, 0x313f }, /* Hangul_RieulPhieuf ㄿ HANGUL LETTER RIEUL-PHIEUPH */ { 0x0eb0, 0x3140 }, /* Hangul_RieulHieuh ㅀ HANGUL LETTER RIEUL-HIEUH */ { 0x0eb1, 0x3141 }, /* Hangul_Mieum ㅁ HANGUL LETTER MIEUM */ { 0x0eb2, 0x3142 }, /* Hangul_Pieub ㅂ HANGUL LETTER PIEUP */ { 0x0eb3, 0x3143 }, /* Hangul_SsangPieub ㅃ HANGUL LETTER SSANGPIEUP */ { 0x0eb4, 0x3144 }, /* Hangul_PieubSios ㅄ HANGUL LETTER PIEUP-SIOS */ { 0x0eb5, 0x3145 }, /* Hangul_Sios ㅅ HANGUL LETTER SIOS */ { 0x0eb6, 0x3146 }, /* Hangul_SsangSios ㅆ HANGUL LETTER SSANGSIOS */ { 0x0eb7, 0x3147 }, /* Hangul_Ieung ㅇ HANGUL LETTER IEUNG */ { 0x0eb8, 0x3148 }, /* Hangul_Jieuj ㅈ HANGUL LETTER CIEUC */ { 0x0eb9, 0x3149 }, /* Hangul_SsangJieuj ㅉ HANGUL LETTER SSANGCIEUC */ { 0x0eba, 0x314a }, /* Hangul_Cieuc ㅊ HANGUL LETTER CHIEUCH */ { 0x0ebb, 0x314b }, /* Hangul_Khieuq ㅋ HANGUL LETTER KHIEUKH */ { 0x0ebc, 0x314c }, /* Hangul_Tieut ㅌ HANGUL LETTER THIEUTH */ { 0x0ebd, 0x314d }, /* Hangul_Phieuf ㅍ HANGUL LETTER PHIEUPH */ { 0x0ebe, 0x314e }, 
/* Hangul_Hieuh ㅎ HANGUL LETTER HIEUH */ { 0x0ebf, 0x314f }, /* Hangul_A ㅏ HANGUL LETTER A */ { 0x0ec0, 0x3150 }, /* Hangul_AE ㅐ HANGUL LETTER AE */ { 0x0ec1, 0x3151 }, /* Hangul_YA ㅑ HANGUL LETTER YA */ { 0x0ec2, 0x3152 }, /* Hangul_YAE ㅒ HANGUL LETTER YAE */ { 0x0ec3, 0x3153 }, /* Hangul_EO ㅓ HANGUL LETTER EO */ { 0x0ec4, 0x3154 }, /* Hangul_E ㅔ HANGUL LETTER E */ { 0x0ec5, 0x3155 }, /* Hangul_YEO ㅕ HANGUL LETTER YEO */ { 0x0ec6, 0x3156 }, /* Hangul_YE ㅖ HANGUL LETTER YE */ { 0x0ec7, 0x3157 }, /* Hangul_O ㅗ HANGUL LETTER O */ { 0x0ec8, 0x3158 }, /* Hangul_WA ㅘ HANGUL LETTER WA */ { 0x0ec9, 0x3159 }, /* Hangul_WAE ㅙ HANGUL LETTER WAE */ { 0x0eca, 0x315a }, /* Hangul_OE ㅚ HANGUL LETTER OE */ { 0x0ecb, 0x315b }, /* Hangul_YO ㅛ HANGUL LETTER YO */ { 0x0ecc, 0x315c }, /* Hangul_U ㅜ HANGUL LETTER U */ { 0x0ecd, 0x315d }, /* Hangul_WEO ㅝ HANGUL LETTER WEO */ { 0x0ece, 0x315e }, /* Hangul_WE ㅞ HANGUL LETTER WE */ { 0x0ecf, 0x315f }, /* Hangul_WI ㅟ HANGUL LETTER WI */ { 0x0ed0, 0x3160 }, /* Hangul_YU ㅠ HANGUL LETTER YU */ { 0x0ed1, 0x3161 }, /* Hangul_EU ㅡ HANGUL LETTER EU */ { 0x0ed2, 0x3162 }, /* Hangul_YI ㅢ HANGUL LETTER YI */ { 0x0ed3, 0x3163 }, /* Hangul_I ㅣ HANGUL LETTER I */ { 0x0ed4, 0x11a8 }, /* Hangul_J_Kiyeog ᆨ HANGUL JONGSEONG KIYEOK */ { 0x0ed5, 0x11a9 }, /* Hangul_J_SsangKiyeog ᆩ HANGUL JONGSEONG SSANGKIYEOK */ { 0x0ed6, 0x11aa }, /* Hangul_J_KiyeogSios ᆪ HANGUL JONGSEONG KIYEOK-SIOS */ { 0x0ed7, 0x11ab }, /* Hangul_J_Nieun ᆫ HANGUL JONGSEONG NIEUN */ { 0x0ed8, 0x11ac }, /* Hangul_J_NieunJieuj ᆬ HANGUL JONGSEONG NIEUN-CIEUC */ { 0x0ed9, 0x11ad }, /* Hangul_J_NieunHieuh ᆭ HANGUL JONGSEONG NIEUN-HIEUH */ { 0x0eda, 0x11ae }, /* Hangul_J_Dikeud ᆮ HANGUL JONGSEONG TIKEUT */ { 0x0edb, 0x11af }, /* Hangul_J_Rieul ᆯ HANGUL JONGSEONG RIEUL */ { 0x0edc, 0x11b0 }, /* Hangul_J_RieulKiyeog ᆰ HANGUL JONGSEONG RIEUL-KIYEOK */ { 0x0edd, 0x11b1 }, /* Hangul_J_RieulMieum ᆱ HANGUL JONGSEONG RIEUL-MIEUM */ { 0x0ede, 0x11b2 }, /* Hangul_J_RieulPieub ᆲ HANGUL JONGSEONG RIEUL-PIEUP */ { 0x0edf, 0x11b3 }, /* Hangul_J_RieulSios ᆳ HANGUL JONGSEONG RIEUL-SIOS */ { 0x0ee0, 0x11b4 }, /* Hangul_J_RieulTieut ᆴ HANGUL JONGSEONG RIEUL-THIEUTH */ { 0x0ee1, 0x11b5 }, /* Hangul_J_RieulPhieuf ᆵ HANGUL JONGSEONG RIEUL-PHIEUPH */ { 0x0ee2, 0x11b6 }, /* Hangul_J_RieulHieuh ᆶ HANGUL JONGSEONG RIEUL-HIEUH */ { 0x0ee3, 0x11b7 }, /* Hangul_J_Mieum ᆷ HANGUL JONGSEONG MIEUM */ { 0x0ee4, 0x11b8 }, /* Hangul_J_Pieub ᆸ HANGUL JONGSEONG PIEUP */ { 0x0ee5, 0x11b9 }, /* Hangul_J_PieubSios ᆹ HANGUL JONGSEONG PIEUP-SIOS */ { 0x0ee6, 0x11ba }, /* Hangul_J_Sios ᆺ HANGUL JONGSEONG SIOS */ { 0x0ee7, 0x11bb }, /* Hangul_J_SsangSios ᆻ HANGUL JONGSEONG SSANGSIOS */ { 0x0ee8, 0x11bc }, /* Hangul_J_Ieung ᆼ HANGUL JONGSEONG IEUNG */ { 0x0ee9, 0x11bd }, /* Hangul_J_Jieuj ᆽ HANGUL JONGSEONG CIEUC */ { 0x0eea, 0x11be }, /* Hangul_J_Cieuc ᆾ HANGUL JONGSEONG CHIEUCH */ { 0x0eeb, 0x11bf }, /* Hangul_J_Khieuq ᆿ HANGUL JONGSEONG KHIEUKH */ { 0x0eec, 0x11c0 }, /* Hangul_J_Tieut ᇀ HANGUL JONGSEONG THIEUTH */ { 0x0eed, 0x11c1 }, /* Hangul_J_Phieuf ᇁ HANGUL JONGSEONG PHIEUPH */ { 0x0eee, 0x11c2 }, /* Hangul_J_Hieuh ᇂ HANGUL JONGSEONG HIEUH */ { 0x0eef, 0x316d }, /* Hangul_RieulYeorinHieuh ㅭ HANGUL LETTER RIEUL-YEORINHIEUH */ { 0x0ef0, 0x3171 }, /* Hangul_SunkyeongeumMieum ㅱ HANGUL LETTER KAPYEOUNMIEUM */ { 0x0ef1, 0x3178 }, /* Hangul_SunkyeongeumPieub ㅸ HANGUL LETTER KAPYEOUNPIEUP */ { 0x0ef2, 0x317f }, /* Hangul_PanSios ㅿ HANGUL LETTER PANSIOS */ { 0x0ef3, 0x3181 }, /* Hangul_KkogjiDalrinIeung ㆁ HANGUL LETTER YESIEUNG */ { 0x0ef4, 0x3184 }, /* 
Hangul_SunkyeongeumPhieuf ㆄ HANGUL LETTER KAPYEOUNPHIEUPH */ { 0x0ef5, 0x3186 }, /* Hangul_YeorinHieuh ㆆ HANGUL LETTER YEORINHIEUH */ { 0x0ef6, 0x318d }, /* Hangul_AraeA ㆍ HANGUL LETTER ARAEA */ { 0x0ef7, 0x318e }, /* Hangul_AraeAE ㆎ HANGUL LETTER ARAEAE */ { 0x0ef8, 0x11eb }, /* Hangul_J_PanSios ᇫ HANGUL JONGSEONG PANSIOS */ { 0x0ef9, 0x11f0 }, /* Hangul_J_KkogjiDalrinIeung ᇰ HANGUL JONGSEONG YESIEUNG */ { 0x0efa, 0x11f9 }, /* Hangul_J_YeorinHieuh ᇹ HANGUL JONGSEONG YEORINHIEUH */ { 0x0eff, 0x20a9 }, /* Korean_Won ₩ WON SIGN */ { 0x13a4, 0x20ac }, /* Euro € EURO SIGN */ { 0x13bc, 0x0152 }, /* OE Œ LATIN CAPITAL LIGATURE OE */ { 0x13bd, 0x0153 }, /* oe œ LATIN SMALL LIGATURE OE */ { 0x13be, 0x0178 }, /* Ydiaeresis Ÿ LATIN CAPITAL LETTER Y WITH DIAERESIS */ { 0x20ac, 0x20ac }, /* EuroSign € EURO SIGN */ }; long MainWidget::keysym2ucs(KeySym keysym) { int min = 0; int max = sizeof(keysymtab) / sizeof(struct codepair) - 1; int mid; /* first check for Latin-1 characters (1:1 mapping) */ if ((keysym >= 0x0020 && keysym <= 0x007e) || (keysym >= 0x00a0 && keysym <= 0x00ff)) return keysym; /* also check for directly encoded 24-bit UCS characters */ if ((keysym & 0xff000000) == 0x01000000) return keysym & 0x00ffffff; /* binary search in table */ while (max >= min) { mid = (min + max) / 2; if (keysymtab[mid].keysym < keysym) min = mid + 1; else if (keysymtab[mid].keysym > keysym) max = mid - 1; else { /* found it */ return keysymtab[mid].ucs; } } /* no matching Unicode value found */ return -1; } KbdTray::KbdTray(TQWidget *parent, const char *name) : KSystemTray(parent,name) { } void KbdTray::mousePressEvent(TQMouseEvent *e) { if (e->button()==Qt::LeftButton) { TQWidget *p = parentWidget(); if (p){ if (p->isShown()){ p->hide(); } else { p->show(); } } } else { KSystemTray::mousePressEvent(e); } } ``````
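For readers skimming the big table above, the lookup in keysym2ucs has three paths: Latin-1 keysyms map 1:1, keysyms of the form 0x01xxxxxx carry the Unicode code point directly in their low 24 bits, and everything else is resolved by binary search over the sorted keysymtab. The following is a minimal standalone C++ sketch of that same logic, not a drop-in replacement: the function name keysym2ucs_sketch and the three-entry table excerpt are mine, chosen only to illustrate the three paths.

```
// Illustrative sketch of the keysym -> UCS lookup used above.
// Only three table entries are included; the real keysymtab is much larger.
#include <algorithm>
#include <cstdio>

struct CodePair { unsigned short keysym; unsigned short ucs; };

// Excerpt of the keysym table (must stay sorted by keysym for binary search).
static const CodePair kTable[] = {
    { 0x01a9, 0x0160 },  // Scaron     -> LATIN CAPITAL LETTER S WITH CARON
    { 0x07e9, 0x03b9 },  // Greek_iota -> GREEK SMALL LETTER IOTA
    { 0x13a4, 0x20ac },  // Euro       -> EURO SIGN
};

long keysym2ucs_sketch(unsigned long keysym) {
    // Latin-1 characters map 1:1.
    if ((keysym >= 0x0020 && keysym <= 0x007e) ||
        (keysym >= 0x00a0 && keysym <= 0x00ff))
        return static_cast<long>(keysym);

    // Directly encoded 24-bit UCS characters (keysym 0x0100abcd -> U+ABCD).
    if ((keysym & 0xff000000UL) == 0x01000000UL)
        return static_cast<long>(keysym & 0x00ffffffUL);

    // Binary search in the sorted table.
    const CodePair* begin = kTable;
    const CodePair* end   = kTable + sizeof(kTable) / sizeof(kTable[0]);
    const CodePair* it = std::lower_bound(begin, end, keysym,
        [](const CodePair& p, unsigned long k) { return p.keysym < k; });
    if (it != end && it->keysym == keysym)
        return it->ucs;

    return -1;  // no matching Unicode value found
}

int main() {
    std::printf("U+%04lX\n", keysym2ucs_sketch(0x41));          // 'A' via Latin-1 passthrough
    std::printf("U+%04lX\n", keysym2ucs_sketch(0x01a9));        // Scaron via table lookup
    std::printf("U+%04lX\n", keysym2ucs_sketch(0x010020acUL));  // direct 24-bit encoding of U+20AC
    return 0;
}
```

The real function above behaves the same way but is a member of MainWidget, searches the full keysymtab with a hand-written binary search, and is fed keysyms obtained from X11 via XKeycodeToKeysym.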
https://search.r-project.org/CRAN/refmans/DLMtool/html/YPR.html
YPR {DLMtool}  R Documentation

Yield Per Recruit analysis to get FMSY proxy F0.1

Description

A simple yield-per-recruit approximation to FMSY (F0.1), which is the point on the ascending YPR curve at which dYPR/dF = 0.1 (dYPR/dF at F = 0), i.e. where the slope has fallen to 10% of the slope at the origin.

Usage

```YPR(x, Data, reps = 100, plot = FALSE)
YPR_CC(x, Data, reps = 100, plot = FALSE, Fmin = 0.005)
YPR_ML(x, Data, reps = 100, plot = FALSE)
```

Arguments

`x` A position in the data object
`Data` A data object
`reps` The number of stochastic samples of the MP recommendation(s)
`plot` Logical. Show the plot?
`Fmin` The minimum fishing mortality rate inferred from the catch-curve analysis

Details

The TAC is calculated as \textrm{TAC} = F_{0.1} A, where F_{0.1} is the fishing mortality (F) at which the slope of the yield-per-recruit (YPR) curve is 10% of its slope at the origin. The YPR curve is calculated using an equilibrium age-structured model with life-history and selectivity parameters sampled from the `Data` object. The variants of the YPR MP differ in the method used to estimate current abundance (see Functions below).

Value

An object of class `Rec-class` with the `TAC` slot populated with a numeric vector of length `reps`.

Functions

• `YPR`: Requires an external estimate of abundance.
• `YPR_CC`: A catch-curve analysis is used to estimate recent Z, which given M (Mort) gives F and thus abundance = Ct/(1-exp(-F)).
• `YPR_ML`: A mean-length estimate of recent Z is used to infer current abundance.

Required Data

See `Data-class` for information on the `Data` object.
`YPR`: Abun, LFS, MaxAge, vbK, vbLinf, vbt0
`YPR_CC`: CAA, Cat, LFS, MaxAge, vbK, vbLinf, vbt0
`YPR_ML`: CAL, Cat, LFS, Lbar, Lc, MaxAge, Mort, vbK, vbLinf, vbt0

Rendered Equations

See Online Documentation for correctly rendered equations.

Note

Based on the code of Meaghan Bryan.

Author(s)

Meaghan Bryan and Tom Carruthers

References

Beverton and Holt. 1954.

Examples

```YPR(1, MSEtool::SimulatedData, plot=TRUE)
YPR_CC(1, MSEtool::SimulatedData, plot=TRUE)
YPR_ML(1, MSEtool::SimulatedData, plot=TRUE)
```

[Package DLMtool version 6.0.2 Index]
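To make the F0.1 idea concrete, here is a small standalone sketch in C++ (kept in the same language as the code earlier in this dump). It is not DLMtool's implementation: the growth, mortality and selectivity values are invented for illustration, and a simple knife-edge selectivity replaces the sampled selectivity parameters. It builds an equilibrium yield-per-recruit curve and scans a grid of F values for the point where the marginal slope drops to 10% of the slope at the origin.

```
// Minimal, self-contained F0.1 sketch -- illustrative only, not DLMtool code.
// All parameter values below are made up for demonstration.
#include <cmath>
#include <cstdio>

// Equilibrium yield per recruit for a given fishing mortality F.
double ypr(double F, double M, double Linf, double k, double t0,
           int ageSel, int maxAge) {
    double N = 1.0;  // one recruit
    double Y = 0.0;  // yield per recruit (weight units)
    for (int a = 0; a <= maxAge; ++a) {
        double sel = (a >= ageSel) ? 1.0 : 0.0;                 // knife-edge selectivity
        double Z   = M + F * sel;                               // total mortality
        double L   = Linf * (1.0 - std::exp(-k * (a - t0)));    // von Bertalanffy length
        double w   = 1e-5 * L * L * L;                          // weight ~ length^3
        Y += N * (F * sel / Z) * (1.0 - std::exp(-Z)) * w;      // Baranov catch equation
        N *= std::exp(-Z);                                      // survivors to next age
    }
    return Y;
}

int main() {
    const double M = 0.2, Linf = 100.0, k = 0.2, t0 = -0.5;
    const int ageSel = 3, maxAge = 30;
    const double dF = 1e-4;

    // Slope of the YPR curve at the origin (YPR(0) = 0).
    double slope0 = ypr(dF, M, Linf, k, t0, ageSel, maxAge) / dF;

    // Scan F until the local slope falls to 10% of the slope at the origin.
    for (double F = 0.0; F < 3.0; F += 0.001) {
        double slope = (ypr(F + dF, M, Linf, k, t0, ageSel, maxAge) -
                        ypr(F,      M, Linf, k, t0, ageSel, maxAge)) / dF;
        if (slope <= 0.1 * slope0) {
            std::printf("F0.1 approx %.3f\n", F);
            break;
        }
    }
    return 0;
}
```

As the help page states, the MP then multiplies the resulting F_{0.1} by an abundance estimate A to produce the TAC; the three variants differ only in where that abundance estimate comes from.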
https://academic.tips/question/describe-stellar-parallax-and-explain-how-one-would-mathematically-measure-and-calculate-the-distance-to-a-star-using-this-method/
# Describe stellar parallax and explain how one would mathematically measure and calculate the distance to a star using this method.

Stellar parallax is the apparent shift of a nearby star against the background of much more distant stars as the Earth revolves around the Sun. Measuring the star's position at two points six months apart, i.e. from opposite sides of Earth's orbit, gives the parallax angle p, and the method works only for stars that are relatively close to the Sun. Mathematically, the distance to a star in parsecs is given by

d = 1/p,

where p is the parallax angle in seconds of arc and d is the distance in parsecs.
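As a worked example (the numbers are quoted from memory and rounded, so treat them as approximate), the nearest star to the Sun, Proxima Centauri, has a measured parallax of about 0.768 arcseconds:

```latex
\[
  d = \frac{1}{p}
    = \frac{1}{0.768\ \text{arcsec}}
    \approx 1.30\ \text{pc}
    \approx 1.30 \times 3.26\ \text{ly}
    \approx 4.2\ \text{ly}.
\]
```

Smaller parallax angles correspond to larger distances, which is why the method is limited to nearby stars: beyond a few hundred parsecs the angle becomes too small to measure accurately from the ground.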
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9799016118049622, "perplexity": 736.6721204369765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00266.warc.gz"}
https://latex.org/forum/viewtopic.php?f=44&t=30819
## LaTeX forum ⇒ Text Formatting ⇒ Pagenumbering in textblock  Topic is solved

Information and discussion about LaTeX's general text formatting features (e.g. bold, italic, enumerations, ...)

hirscsil
Posts: 2
Joined: Sun Jan 07, 2018 12:42 pm

### Pagenumbering in textblock

Dear Members

As this is my first post in this forum, I don't know if I posted it in the correct category. If not, I'm sorry.

Right now I'm writing summaries for the final exams in LaTeX, and since I have three columns and almost no margin, I can't and don't want to use headers or footers. Because of that I wanted to print the page number in a textblock, which actually works quite nicely. But the problem is that the numbering only starts with number one on the second page, while of course in the table of contents the first page is the one that also contains the table of contents. I tried many things, like putting \setcounter{page}{1} in different places, but it still didn't work.

This is the code I use right now (shortened). I include my single chapters with the \input{document} command, but the error is also visible with the blind document.

What I actually want: start the page numbering at the very first page, so that the page numbers in the table of contents are consistent with the pages containing the sections.

\documentclass[landscape,a4paper,fontsize=8pt]{scrartcl}
\usepackage[dvipsnames]{xcolor}
\usepackage[ngerman]{babel}
\usepackage{amsmath,color}
\usepackage{amssymb}
\usepackage{helvet}
\usepackage{fancyhdr}
\usepackage{blindtext}

% To generate the box with the page number
\usepackage{atbegshi}
\usepackage[absolute,overlay]{textpos}

\usepackage[a4paper, left=0.5cm, right=0.5cm, top=0.3cm, bottom=0.3cm]{geometry}
\usepackage{flowfram}

% Three columns
\Ncolumninarea{3}{\textwidth}{\textheight}{0pt}{0pt}
\insertvrule{flow}{1}{flow}{2}
\insertvrule{flow}{2}{flow}{3}

% Generate the textblock with the page number
\TPGrid{8}{11}
\AtBeginShipout{%
  \begin{textblock}{3}(7.85,10.7)
    \footnotesize
    \texttt{\colorbox{Salmon}{\textbf{\thepage}}}
  \end{textblock}%
}

\begin{document}

\fontsize{8pt}{3pt}\selectfont
\begin{center}
  \noindent
  {\scshape\Large Topic \\}
  {\scshape\large Name, Class \\}
\end{center}

\tableofcontents

%\input{doc1}
%\input{doc2}
%\input{doc3}

\Blinddocument
\end{document}

Kind regards and thank you in advance
hirscsil

Tags:

Stefan Kottwitz
Posts: 8953
Joined: Mon Mar 10, 2008 9:44 pm

Hi hirscsil,

welcome to the forum!

Quick fix:

\AtBeginShipout{%
  \begin{textblock}{3}(7.85,10.7)%
    \footnotesize\stepcounter{page}%
    \texttt{\colorbox{Salmon}{\textbf{\thepage}}}%
    \addtocounter{page}{-1}%
  \end{textblock}%
}

I guess at "begin shipout time" the page counter was not yet incremented.

Stefan

hirscsil
Posts: 2
Joined: Sun Jan 07, 2018 12:42 pm

Stefan Kottwitz wrote:
Hi hirscsil, welcome to the forum!

Quick fix:

\AtBeginShipout{%
  \begin{textblock}{3}(7.85,10.7)%
    \footnotesize\stepcounter{page}%
    \texttt{\colorbox{Salmon}{\textbf{\thepage}}}%
    \addtocounter{page}{-1}%
  \end{textblock}%
}

I guess at "begin shipout time" the page counter was not yet incremented.

Stefan

Hi Stefan

Oh wow, that was a quick response and works perfectly. Thank you very much.

hirscsil
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597541451454163, "perplexity": 4517.9470319734255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591718.31/warc/CC-MAIN-20180720154756-20180720174756-00091.warc.gz"}
https://govindan.usc.edu/projects/2_project/
Our lab has focused on vehicular sensing systems, mobile computing, and networked sensing. The following sections list some papers in each of these topics. ### Vehicular sensing systems 1. MobiSys AutoCast: Scalable Infrastructure-Less Cooperative Perception for Distributed Collaborative Driving In Proceedings of the 20th Annual International Conference on Mobile Systems, Applications and Services 2022 2. NSDI CarMap-Fast 3D Feature Map Updates for Automobiles In 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20) 2020 3. Fusion QuickSketch: Building 3D Representations in Unknown Environments Using Crowdsourcing In 2018 21st International Conference on Information Fusion (FUSION) 2018 4. IoTDI Kestrel: Video Analytics for Augmented Multi-Camera Vehicle Tracking In 2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation (IoTDI) 2018 5. TVT Towards Robust Vehicular Context Sensing IEEE Transactions on Vehicular Technology 2018 6. MobiSys AVR: Augmented Vehicular Reality In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services (Mobisys) 2018 7. Augmented Vehicular Reality: Enabling Extended Vision for Future Vehicles In Proceedings of the 18th International Workshop on Mobile Computing Systems and Applications, HotMobile 2017, Sonoma, CA, USA, February 21 - 22, 2017 2017 8. SEC Real-time Traffic Estimation at Vehicular Edge Nodes In Proceedings of the Second ACM/IEEE Symposium on Edge Computing, San Jose / Silicon Valley, SEC 2017, CA, USA, October 12-14, 2017 2017 9. SEC Pre-DriveID: Pre-trip Driver Identification from In-vehicle Data In Proceedings of the Second ACM/IEEE Symposium on Edge Computing, San Jose / Silicon Valley, SEC 2017, CA, USA, October 12-14, 2017 2017 10. TVT Towards Robust Vehicular Context Sensing IEEE Transactions on Vehicular Technology 2017 11. SenSys CARLOC: Precisely Tracking Automobile Position In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems (SenSys) no 2015 ### Mobile computing 1. SoCC Scrooge: A Cost-Effective Deep Learning Inference System In SoCC ’21: ACM Symposium on Cloud Computing, Seattle, WA, USA, November 1 - 4, 2021 2021 2. IoTDI In Proceedings of the 6th ACM/IEEE Conference on Internet of Things Design and Implementation, 2021 2021 3. Middleware Olympian: Scheduling GPU Usage in a Deep Neural Network Model Serving System In Proceedings of the 19th International Middleware Conference 2018 4. MobiSys Gnome: A Practical Approach to NLOS Mitigation for GPS Positioning in Smartphones In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services (Mobisys) 2018 5. Ubicomp ALPS: Accurate Landmark Positioning at City Scales In the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2016) se 2016 6. MobiSys Efficient Privilege De-Escalation for Ad Libraries in Mobile Apps In The 13th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys 2015) ma 2015 7. MobiSys FlexiWeb: Network-Aware Compaction for Accelerating Mobile Web Transfers In Proc. ACM MobiCom ma 2015 ### Networked sensing 1. ToSN Sensing the Sensor: Estimating Camera Properties with Minimal Information ACM Trans. Sen. Netw. Feb 2022 2. TMC Synthesis of Large-Scale Instant IoT Networks IEEE Transactions on Mobile Computing Feb 2021 3. 
SoCC Scrooge: A Cost-Effective Deep Learning Inference System In SoCC ’21: ACM Symposium on Cloud Computing, Seattle, WA, USA, November 1 - 4, 2021 Feb 2021 4. IoTDI In Proceedings of the 6th ACM/IEEE Conference on Internet of Things Design and Implementation, 2021 Feb 2021 5. IoTJ New Frontiers in IoT: Networking, Systems, Reliability, and Security Challenges IEEE Internet of Things Journal Feb 2020 6. IROS Persistent Connected Power Constrained Surveillance with Unmanned Aerial Vehicles In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Feb 2020 7. Grab: Fast and Accurate Sensor Processing for Cashier-Free Shopping In Feb 2020 8. Rapid Top-Down Synthesis of Large-Scale IoT Networks In Proceedings of the IEEE International Conference on Computer Communications and Networks (ICCCN) Feb 2020 9. Sensys Caesar: Cross-camera Complex Activity Recognition Feb 2019 10. IoTDI Kestrel: Video Analytics for Augmented Multi-Camera Vehicle Tracking In 2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation (IoTDI) Feb 2018 11. ToN Scalability and Satisfiability of Quality-of-Information in Wireless Networks IEEE/ACM Trans. Netw. Feb 2018
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15939271450042725, "perplexity": 18299.82850588728}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00326.warc.gz"}
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.em/1175789760
## Experimental Mathematics ### Computing Varieties of Representations of Hyperbolic $3$-Manifolds into $\mathrm{SL}(4,\mathbb{R})$ #### Abstract The geometric structure on a closed orientable hyperbolic 3-manifold determines a discrete faithful representation $\rho$ of its fundamental group into $\mathrm{SO^{+}(3,1)}$, unique up to conjugacy. Although Mostow rigidity prohibits us from deforming $\rho$, we can try to deform the composition of $\rho$ with inclusion of $\mathrm{SO^{+}(3,1)}$ into a larger group. In this sense, we have found by exact computation a small number of closed manifolds in the Hodgson-Weeks census for which $\rho$ deforms into $\mathrm{SL(4,\mathbb R)}$, thus showing that the hyperbolic structure can be deformed in these cases to a real projective structure. In this paper we describe the method for computing these deformations, particular attention being given to the manifold Vol3. #### Article information Source Experiment. Math. Volume 15, Issue 3 (2006), 291-306. Dates First available in Project Euclid: 5 April 2007 Cooper, Daryl; Long, Darren; Thistlethwaite, Morwen. Computing Varieties of Representations of Hyperbolic $3$-Manifolds into $\mathrm{SL}(4,\mathbb{R})$. Experiment. Math. 15 (2006), no. 3, 291--306. http://projecteuclid.org/euclid.em/1175789760.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9252007007598877, "perplexity": 522.7843334273232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246638820.85/warc/CC-MAIN-20150417045718-00222-ip-10-235-10-82.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/62636/finding-the-coffecient-of-restitution
# Finding the coefficient of restitution

A ball moving with velocity $1 \hat i \ ms^{-1}$ collides with a frictionless wall; after the collision the velocity of the ball becomes $1/2 \hat j \ ms^{-1}$. Find the coefficient of restitution between the wall and the ball.

I approached it like this: $$e=\cot^2\theta$$ but $\theta$ is not known. So we equate the speed after the collision: $$\sqrt{e^2 \sin^2\theta+\cos^2 \theta}=1/2$$ But this is hard to solve as $\cot^4 \theta$ gets involved. Is there any other method to do this, or any easier way to solve these?

- Feeling silly now. Just equating the components of velocity along the wall: $$1/2 \sin\theta=\cos\theta$$ we get $$\tan\theta =2$$ so $$e=1/4$$
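For anyone who wants to sanity-check the result numerically, here is a small sketch. The wall orientation used below is one consistent choice of geometry (wall direction at angle theta to the x-axis, with tan(theta) = 2 as derived above); it is an illustration, not something stated in the post.

```python
import numpy as np

# Take the wall's direction at angle theta to the x-axis, with tan(theta) = 2.
theta = np.arctan(2.0)
t_hat = np.array([np.cos(theta), np.sin(theta)])    # unit vector along the wall
n_hat = np.array([-np.sin(theta), np.cos(theta)])   # unit normal to the wall

v_in = np.array([1.0, 0.0])    # 1 i_hat m/s before the collision
v_out = np.array([0.0, 0.5])   # 1/2 j_hat m/s after the collision

# Frictionless wall: the velocity component along the wall is unchanged.
print(np.isclose(v_in @ t_hat, v_out @ t_hat))      # True

# Coefficient of restitution: ratio of normal speeds (sign flips on rebound).
e = -(v_out @ n_hat) / (v_in @ n_hat)
print(e)                                             # 0.25
```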
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7651856541633606, "perplexity": 204.40280773518842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558066654.17/warc/CC-MAIN-20141017150106-00006-ip-10-16-133-185.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/705/pratik-deoghare?tab=reputation
# Pratik Deoghare less info reputation 41228 bio website pratikdeoghare.github.com location India age member for 4 years, 4 months seen 2 hours ago profile views 649 # 2,951 Reputation 5 yesterday +5 06:07 upvote Is value of $\pi = 4$? 22:43 4 events Anecdotes about famous mathematicians or physicists 5 Dec 20 +5 23:15 upvote Is value of $\pi = 4$? 5 Dec 17 +5 14:53 upvote Why the name 'FACTORIAL'? 5 Dec 15 10 Dec 12 5 Dec 11 5 Nov 22 5 Nov 16 -5 Nov 8 5 Nov 7 5 Nov 6 5 Nov 4 5 Oct 31 5 Oct 28 5 Oct 27 5 Oct 24 5 Oct 17 5 Oct 14 10 Oct 8 10 Oct 5 5 Oct 1 0 Sep 30 5 Sep 29 5 Sep 26 10 Sep 15 5 Sep 12 15 Aug 19 -2 Aug 6 5 Aug 3 5 Aug 2
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35137033462524414, "perplexity": 20149.376143461188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777454.142/warc/CC-MAIN-20141217075257-00053-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.gamedev.net/blogs/entry/956290-hautecapture/
# HauteCapture

I wanted to present you with a good piece of software I found today: HauteCapture. This is a screen capture utility that runs on the PC, and captures screens of CE devices. I only noted one robustness issue: it did not handle me attaching my CE device while the program was running. It crashed. Not a big deal... just start it over. It's an incredibly usable program. Unlike other screen capture utilities for the Pocket PC, I don't have to drag some file from the device itself, everything happens right on the PC. The only thing wrong with it is that it costs $30. For $10 I might have bought it. But $30 to take screen shots of my Pocket PC? I don't think so.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1628439724445343, "perplexity": 4538.981134989028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945272.41/warc/CC-MAIN-20180421164646-20180421184646-00183.warc.gz"}
http://tex.stackexchange.com/users/7390/simplyknownasg?tab=activity
SimplyKnownAsG Reputation Top tag Next privilege 250 Rep. Sep24 awarded Autobiographer Jul17 comment glossaries package reference non-glossary item in see also Awesome, thanks! Jul17 accepted glossaries package reference non-glossary item in see also Jul15 revised glossaries package reference non-glossary item in see also added an image Jul15 asked glossaries package reference non-glossary item in see also Apr22 awarded Nice Question Mar19 comment How to conditionally define a new command in LaTeX? There seems to be conflicting statements here. The first sentence says `\ifdefined` and `\ifcsname` are TeX primitives, and the last sentence says otherwise. Jan18 comment Changing Table Numbering Scheme to Include Section Number I like this solution over using another package because it allows you to modify the appearance, for example if someone wanted to us a dash or other separator, `\thesection-\arabic{section}` Jan17 revised Changing Table Numbering Scheme to Include Section Number The format is `\@addtoreset{counter-name}{master}`; it was backwards Jan17 suggested approved edit on Changing Table Numbering Scheme to Include Section Number May2 awarded Excavator May2 revised How can I manually install a package on MiKTeX (Windows) Added command line option for installing packages May2 suggested approved edit on How can I manually install a package on MiKTeX (Windows) Aug22 comment Censor text spanning multiple lines I like your answer and you were correct in assuming I wasn't too worried about hyphens. One question though: Any idea why some of the spacing did change slightly, although the entire paragraph (height) would be maintained? Aug22 awarded Supporter Aug22 awarded Scholar Aug22 accepted Censor text spanning multiple lines Aug22 awarded Student Aug22 comment Censor text spanning multiple lines It is possible, although, if I could get something to work within a paragraph I'd be happy with that. Aug22 awarded Editor
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9443795680999756, "perplexity": 8906.446640350076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928076.40/warc/CC-MAIN-20150521113208-00165-ip-10-180-206-219.ec2.internal.warc.gz"}
http://colvertgroup.com/standard-error/in-statistics-and-estimation-error.php
# In Statistics And Estimation Error

Because of random variation in sampling, the proportion or mean calculated using the sample will usually differ from the true proportion or mean in the entire population. An important aspect of statistical inference is therefore using estimates to approximate the value of an unknown population parameter; for example, the sample mean is the usual estimator of a population mean. A statistic is a consistent estimator of a parameter if the probability that it will be close to the parameter's true value approaches 1 with increasing sample size. We don't actually know the population parameter value (we're trying to find that out), but we can use our best estimate for it: the sample statistic.

## Standard Error Formula

A standard deviation is the spread of the scores around the average in a single sample. The distribution of the mean in all possible samples is called the sampling distribution of the mean, and a natural way to describe the variation of these sample means around the true population mean is the standard deviation of the distribution of the sample means: the standard error of the mean (SEM), which refers to the change in the mean with different experiments conducted each time. The notation for standard error can be any one of SE or SEM (for standard error of measurement or mean). If σ is known, the standard error of the mean is calculated using the formula $$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$ where σ is the population standard deviation and n is the size (number of observations) of the sample. A practical result: decreasing the uncertainty in a mean value estimate by a factor of two requires acquiring four times as many observations in the sample. When the sampling fraction is large (approximately 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a "finite population correction"[9]; Gurland and Tripathi (1971)[6] provide a correction and equation for this effect. For the purpose of hypothesis testing or estimating confidence intervals, the standard error is primarily of use when the sampling distribution is normally distributed, or approximately normally distributed.

The relative standard error of a sample mean is the standard error divided by the mean, expressed as a percentage. If one survey has a standard error of $10,000 and the other has a standard error of $5,000, then the relative standard errors are 20% and 10% respectively; the survey with the lower relative standard error can be said to have a more precise measurement, since it has proportionately less sampling variation around the mean. The National Center for Health Statistics typically does not report an estimated mean if its relative standard error exceeds 30% (NCHS also typically requires at least 30 observations, if not more).

Worked example: for the runners, the population mean age is 33.87 and the population standard deviation is 9.27; for the age at first marriage (the ageAtMar data set from the R package openintro, from the textbook by Dietz et al.[4]), the population mean age is 23.44 and the population standard deviation is 4.72. The ages in one sample of size n = 16 are 23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, and 55; the sample mean x̄ = 37.25 is greater than the true population mean μ = 33.88 years. Using the known population standard deviation, the standard error is 9.27/sqrt(16) = 2.32, while the standard error estimated using the sample standard deviation is 2.56. Because the age of the runners has a larger standard deviation (9.27 years) than does the age at first marriage (4.72 years), the standard error of the mean is larger for the runners. The original page showed the distribution of the sample means for 20,000 samples, where each sample is of size n = 16.

## Point Estimate Formula

The proportion or the mean is calculated using the sample. Consider the following scenarios: in one, the 400 patients are a sample of all patients who may be treated with the drug; in another, the 2000 voters are a sample from all the actual voters. Standard errors may be used to calculate confidence intervals: the margin of error and the confidence interval are based on a quantitative measure of uncertainty, the standard error. A confidence interval is a type of interval estimate, not a type of point estimate, and the probability part of a confidence interval is called a confidence level; we might describe such an interval estimate as a 95% confidence interval. The z-score for the normal variable is used to determine the interval endpoints that correspond to the chosen degree of certainty. Since the maximum margin of error E is given by $$E = z \cdot \frac{\sigma}{\sqrt{n}},$$ solving for n gives the sample size needed for some expected level of error E. For example, if from previous experience we know that the population standard deviation is $5,000, then using alpha = 1 - 0.99 = 0.01 we find the z-values for the endpoints of the confidence interval.
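The arithmetic above can be reproduced in a few lines. This is only an illustrative sketch using the quoted sample of 16 ages; it is not part of the original material.

```python
import math
import statistics

# The sample of n = 16 ages quoted above.
ages = [23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, 55]
n = len(ages)

print(statistics.mean(ages))                      # 37.25, the sample mean

# Standard error using the known population standard deviation sigma = 9.27.
print(9.27 / math.sqrt(n))                        # ~2.32

# Standard error estimated from the sample standard deviation instead.
print(statistics.stdev(ages) / math.sqrt(n))      # ~2.56
```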
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9220634698867798, "perplexity": 718.8231335993544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867055.95/warc/CC-MAIN-20180624195735-20180624215735-00226.warc.gz"}
https://rd.springer.com/article/10.1186%2Fs12864-018-5118-7
BMC Genomics , 19:743 Effects of evolutionary history on genome wide and phenotypic convergence in Drosophila populations • Mark A Phillips • Grant A Rutledge • James N Kezos • Zachary S Greenspan • Andrew Talbott • Sara Matty • Hamid Arain • Laurence D Mueller • Michael R Rose • Parvin Shahrestani Open Access Research article Part of the following topical collections: 1. Comparative and evolutionary genomics Abstract Background Studies combining experimental evolution and next-generation sequencing have found that adaptation in sexually reproducing populations is primarily fueled by standing genetic variation. Consequently, the response to selection is rapid and highly repeatable across replicate populations. Some studies suggest that the response to selection is highly repeatable at both the phenotypic and genomic levels, and that evolutionary history has little impact. Other studies suggest that even when the response to selection is repeatable phenotypically, evolutionary history can have significant impacts at the genomic level. Here we test two hypotheses that may explain this discrepancy. Hypothesis 1: Past intense selection reduces evolutionary repeatability at the genomic and phenotypic levels when conditions change. Hypothesis 2: Previous intense selection does not reduce evolutionary repeatability, but other evolutionary mechanisms may. We test these hypotheses using D. melanogaster populations that were subjected to 260 generations of intense selection for desiccation resistance and have since been under relaxed selection for the past 230 generations. Results We find that, with the exception of longevity and to a lesser extent fecundity, 230 generations of relaxed selection has erased the extreme phenotypic differentiation previously found. We also find no signs of genetic fixation, and only limited evidence of genetic differentiation between previously desiccation resistance selected populations and their controls. Conclusion Our findings suggest that evolution in our system is highly repeatable even when populations have been previously subjected to bouts of extreme selection. We therefore conclude that evolutionary repeatability can overcome past bouts of extreme selection in Drosophila experimental evolution, provided experiments are sufficiently long and populations are not inbred. Keywords Experimental evolution Evolutionary genomics Evolutionary history Adaptation Abbreviations CMH test Cochran-Mantel-Haenszel test LME Linear mixed-effects model SNP Single nucleotide polymorphism Background The combination of experimental evolution and next generation sequencing has become established as a powerful means for studying the genetics of adaptation and testing major tenets of population genetic theory [1, 2]. Studies featuring populations of fruit flies, Drosophila melanogaster, suggest that adaptation in sexually reproducing populations is fueled by selection on standing genetic variation, and is largely characterized by a lack of genetic fixation [3, 4, 5, 6, 7, 8, 9]. The apparent lack of fixation is even seen in long-term experiments nearing a thousand generations of selection [10]. Moreover, work with outcrossing populations of Saccharomyces cerevisiae has shown that adaptation is still primarily driven by standing genetic variation even at much larger effective population sizes than what is currently seen in experiments featuring D. melanogaster [11]. 
At present, the underlying genetic architecture of adaptation in these experiments is an area of active study and debate, but these broad results regarding sexually reproducing populations are largely consistent across a variety of independent experiments [1, 2]. In accordance with these findings, evolution in outbred populations is rapid, and highly repeatable when newly derived D. melanogaster populations are subjected to the same selection regimes as long-standing populations [12, 13]. It takes only dozens of generations for newly derived populations to converge on long-standing populations at both the genomic and phenotypic levels, even when long-standing populations have previously undergone hundreds of generations of selection [12, 13]. These findings suggest that phenotypes and patterns of genetic variation are primarily shaped by the most recent selection regime, and that evolutionary history, prior to the recent selection regime, has little discernible impact. However, this runs contrary to evidence from experimental evolution work using Drosophila subobscura derived from wild populations at contrasting European latitudes [14]. Those findings indicate that evolution is predictable at the phenotypic level, but differences in where source populations originate can have significant effects on outcomes at the genetic level, suggesting that evolutionary history does play a role when it comes to repeatability at the genomic level. The idea that the degree to which populations return to ancestral phenotypic values and allele frequencies is at least in part contingent on evolutionary history is also supported by reverse experimental evolution studies [3, 15]. However, it should be noted that the authors of these reverse experimental evolution studies were unable to rule out the possibility that complete reversion would have occurred in all populations if given more time. A possible resolution to why evolutionary history appears to play a role in some experiments but not others follows from Graves et al. [13]. In addition to the aforementioned results, the authors found evidence that more intense selection regimes lead to significantly greater losses of genetic variation compared to milder selection regimes. While they did not observe strong evidence of fixation within any of the populations studied, fixation seemed at least possible provided a sufficiently intense selection regime. Therefore, the finding that phenotypes and patterns of genetic variation are almost exclusively shaped by the most recent selection regime in Burke et al. [12] and Graves et al. [13] could be due to the fact that while selection for accelerated development is intense, it is perhaps not sufficiently intense to bring about the sort of changes necessary to impact future evolutionary trajectories. Presumably, stronger selective pressures could potentially result in widespread fixation of alleles favored by such selection. In these cases, given that adaptation in sexual experimental evolution is primarily fueled by standing genetic variation, the widespread fixation could have significant impact on how experimental populations respond to new selective pressures, and their ability to revert to ancestral states when moved back to ancestral conditions (i.e. the evolutionary repeatability at the genomic level typically seen in this sort of work would be reduced). The failure to find fixation in the populations selected for accelerated development in Burke et al. [12] and Graves et al.
[13] also does not necessarily mean the genomic changes brought about by that selection regime will not negatively impact future evolutionary repeatability. For instance, if adaptation in this sort of system involves shifts in equilibrium frequencies at many sites across the genome as a result of antagonistic pleiotropy [16, 17], intense selection may drive frequencies to attractor states that constrain adaptation when conditions change. We do not have data from experiments where populations exposed to selection for accelerated development were moved to new selection regimes. Instead, we explore adaptation to new selection regimes with a set of populations we have that were previously subjected to very intense selection for desiccation resistance. Using these populations, we test the following hypotheses: Hypothesis (1) past intense selection reduces evolutionary repeatability at the genomic and phenotypic levels when conditions change. Hypothesis (2) previous intense selection does not reduce evolutionary repeatability at the genomic and phenotypic levels, in the absence of other factors such as inbreeding or chromosomal rearrangement. As mentioned above, we test these hypotheses using a group of D. melanogaster populations that were historically subjected to intense selection for desiccation resistance. In the past, we have shown that populations subject to selection for desiccation resistance are useful material for analyzing the mechanistic foundations of adaptation in lab fruit flies as they produce highest levels of phenotypic differentiation between selected populations and controls observed within our experimental system [18, 19, 20, 21, 22, 23, 24]; keeping in mind that our system includes the populations selected for accelerated development featured in Burke et al. [12] and Graves et al. [13]. The extreme phenotypic shifts brought about by this particular selection regime are attributed to the fact the regime involves using environments so inimical to survival only a small percentage (10–20%) of each generation survives [18, 19, 20, 21]. We have called this intense selection paradigm “culling selection” in the past, and it represents one of the most extreme protocols used in Drosophila experimental evolution [25]. Lastly, unlike the populations selected for accelerated development previously discussed, we have extensive data on what these populations look like after selection for desiccation resistance was no longer being imposed. Specifically, to test our hypotheses we examine patterns of phenotypic and genomic differentiation in two five-fold replicated stocks, TSO 1–5, and TDO 1–5, known as C and D respectively during active selection, which were first described in Rose et al. [18]. The D populations were intensely selected for desiccation resistance for about 260 generations, and afterward were renamed as TDO, and maintained on a 21 day (T for “Three-week”) relaxed culture selection for the past ~ 230 generations. The C populations were moderately selected for starvation resistance for about 260 generations in parallel with the D populations, serving as controls for the D populations, and were later renamed as TSO, and maintained under the same culture selection regime as the TDO populations. The TSO and TDO populations were all placed under relaxed culture selection during the same generations. The extreme functional differentiation (i.e. 
carbohydrate content, water loss rates, and water content) previously seen between these two groups was achieved using environments so inimical to survival that only a small percentage (10–20%) of each generation survives selection [18, 19, 20, 21] With Hypothesis 1, large impacts of evolutionary history are in fact due to exposure to intense selection. If this hypothesis is correct, we would expect to find lingering phenotypic and significant genomic differentiation between the TDO and TSO populations even after ~ 230 generations of relaxed selection in the former. If Hypothesis 2 is correct, we should not find such differentiation. Results Phenotypic results Mortality and mean longevity Mortality rates were measured in the TSO and TDO populations (Additional file 1). From the Gompertz model fit to our mortality data, A is the age-independent parameter which gives a measure of the baseline mortality rate. α is the age-dependent parameter which gives a measure of the rate of aging. The TDO populations have lower values for the parameters A and α compared to the TSO populations. These differences are significant for A (p = 0.0001; Fig. 1; Additional file 2: Table S2 and see Figure S1 for survivorship plots) but are not significant for α (p = 0.945; Fig. 1; Additional file 2: Table S2). In addition, the TDO populations show a greater break-day (bd) compared to the TSO populations (p < 0.0001; Fig. 1; Additional file 2: Table S2). When analyzing mean longevity, the TDO populations live ~ 7 days longer than the TSO populations (p = 0.0009; Additional file 2: Table S3). These significant differences are observed in both males and females. As seen in Fig. 2a, the observed difference in mean longevity is comparable to the peak difference in mean longevity observed when the populations were under directional selection (when they were maintained as C’s and D’s). Development time: larvae to adult Larvae to adult development time was measured in the TSO and TDO populations (Additional file 3). The TDO populations take about 1 h longer to eclose from pupa compared to the TSO populations, however this difference is not significant (p = 0.66; Fig. 3; Additional file 2: Table S4). Age-specific fecundity was monitored in the TDO and TSO populations (Additional file 4). The TDO populations show a greater number of eggs laid per surviving female (mx) compared to the TSO populations in the second interval (days 18–20 from egg) (p = 0.021; Fig. 4; Additional file 2: Table S5). This is the interval just prior to these populations’ reproductive window (days from egg 20–21). The reproductive window is the period when eggs are used from these populations for the next generation. All other intervals from the analysis are not significant (p > 0.05). Fungal-resistance Mortality after exposure to the fungus Beauveria bassiana was monitored in the TDO and TSO populations (Additional file 5). No difference was observed in the ability of these populations to survive after exposure to the fungus (p = 0.123; Fig. 5). Starvation resistance The average survival time under starvation conditions was monitored in the TDO and TSO populations (Additional file 6). The average survival time during starvation, or starvation resistance, for the five TSO populations is 73.54 h, and for the five TDO populations, it is 69.05 h (Fig. 2b). This 4.49 h difference in starvation resistance is not statistically significant (p-value = 0.152; Additional file 2: Table S6). 
In contrast, there was a notable difference in starvation resistance when the D populations were being actively selected for desiccation resistance (Fig. 2b). Some might argue that our failure to detect any differences in starvation resistance may be due to a lack of statistical power, but this seems unlikely. Historically the difference in starvation resistance for the D population minus the C populations was over 10 h. In this study the observed difference between the TSO and TDO populations was 4.49 h, which is not statistically significant. However, this test had the ability to detect a difference of 5.57 h. Thus, differences even close to the historical C and D difference should have been easily detected. If the true starvation resistance of the TDO population had been 83.4 h rather than 69, giving TDO's a 10 h advantage over TSO's, the probability of detecting that difference would have been 0.88. If the TDO populations had only a 5 h starvation resistance advantage the chance of detecting that difference drops to 0.31. Desiccation resistance The average survival time during desiccation was monitored in the TDO and TSO populations (Additional file 7). The desiccation resistance for the five TSO populations is 13.26 h, and for the five TDO populations, it is 15.04 h (Fig. 2c). This 1.78 h difference in survival time is not statistically significant (p-value = 0.164; Additional file 2: Table S7). As seen in Fig. 2c, this observed difference in the TDO and TSO populations is nearly 30 times smaller than what was seen during the height of selection for desiccation resistance. Once again, our failure to detect lingering desiccation resistance could be attributed to a lack of statistical power. We once again argue that this is unlikely. Historically the difference in desiccation resistance for the D population minus the C populations was about 50 h. In this study the observed difference between the TSO and TDO populations was − 1.78 h, which is not statistically significant. However, this test had the ability to detect a difference of 2.67 h. If the true TDO desiccation time had been only 3 h greater than the TSO time the chance of detecting the difference would be 0.60. If the true TDO desiccation time had been 5 h greater the chance of detecting the difference increases to 0.96. Cardiac arrest rates The rates of cardiac arrest after electrical pacing were monitored in the TSO and the TDO populations (Additional file 8). The five TSO populations had an average cardiac arrest rate of 27.6%, whereas the five TDO populations had an average cardiac arrest rate of 25.78%. Similar to the starvation resistance and desiccation resistance, this small difference between the two sets of populations was not statistically significant (Additional file 2: Figure S2, p-value = 0.598). Genomic results Heterozygosity and FST We do not see any large regions where heterozygosity has been completely expunged, and this result is robust to reductions in window size (Fig. 6, Additional file 2: Figures S2 and S3). However, there are some notable depressions consistent across replicates that may be indicative of soft sweeps. Mean heterozygosity in the TSO populations ranges from 0.24 to 0.26, and 0.26 to 0.27 in the TDO populations (Additional file 2: Table S8). Based on a t-test comparing the two sets of means, heterozygosity is significantly higher in the TDO populations (p-value = 0.001). Mean FST in the TSO populations is 0.04 and 0.07 in the TDO populations, which indicates there is a high degree of similarity between replicates.
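For context on how windowed heterozygosity and FST summaries like the ones above are commonly computed from pooled allele-frequency estimates, here is a rough sketch. The frequencies are invented for illustration and this is not the authors' actual pipeline.

```python
import numpy as np

# Hypothetical minor-allele frequencies at a handful of SNPs in one genomic
# window, for one TSO and one TDO replicate (illustrative values only).
p_tso = np.array([0.12, 0.45, 0.30, 0.50, 0.08])
p_tdo = np.array([0.20, 0.40, 0.35, 0.55, 0.10])

def expected_heterozygosity(p):
    """Mean expected heterozygosity, 2p(1-p), across the SNPs in a window."""
    return np.mean(2.0 * p * (1.0 - p))

def pairwise_fst(p1, p2):
    """Basic window FST, (H_T - H_S) / H_T, from two pools' allele frequencies."""
    p_bar = (p1 + p2) / 2.0
    h_t = np.mean(2.0 * p_bar * (1.0 - p_bar))      # total heterozygosity
    h_s = (expected_heterozygosity(p1) + expected_heterozygosity(p2)) / 2.0
    return (h_t - h_s) / h_t

print(expected_heterozygosity(p_tso))   # window heterozygosity, TSO replicate
print(expected_heterozygosity(p_tdo))   # window heterozygosity, TDO replicate
print(pairwise_fst(p_tso, p_tdo))       # small value -> similar pools
```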
SNP differentiation We find little evidence of SNP (single nucleotide polymorphism) differentiation between the TDO and TSO populations (Fig. 7). Based on our Cochran-Mantel-Haenszel (CMH) test results, we find a total of 17 sites with p-values that exceed our permutation derived significance threshold (Fig. 7a). These 17 sites correspond to three regions, two on chromosome 3 L and one on the X chromosome. However, we find no signs of significant SNP differentiation using the quasibinomial GLM method. This is true using both the Bonferroni correction, and the less conservative q-value approach to correct for multiple comparisons (Fig. 7b–c). Within the significantly differentiated regions detected using the CMH test, we find a total of seven genes (Table 1). Six of the seven genes are located on chromosome arm 3 L, while the remaining gene is located on chromosome X. For genes CR42860, CR45802, and CR34047, there is little to no information about their molecular and biological functions. Gene CG42355 is associated with sperm chromatin condensation, but not much else is presently known. Genes sallimus (sls) and zormin have been well documented to be associated with the development of the striated muscle sarcomeres [26, 27, 28]. Sls expression is necessary for myoblast fusion, and the inevitable development of myoblasts into muscle fibres. Sallimus, the protein derived from the sls gene, aids in aligning thin filaments side-by-side and in anti-parallel direction, which nucleates Z-disc formation in developing myofibrils [26]. Sallimus also binds to thin filaments (i.e. actin), aiding in balancing the two halves of the sarcomere. Protein zormin can also be found near the Z-disc and M-line of the muscles [28]. These filaments connect the Z-disc with the ends of the thick filaments. The size of these proteins, and the extensibility of their binding, affects the elasticity and stiffness of muscles [27]. These properties dictate muscle contraction, stiffness, and performance. The final gene, CG32649, located on the X chromosome, is associated with ubiquinone biosynthesis and mitochondrial electron transport. There are two human orthologs, COQ8A and COQ8B, linked to CG32649. Mutations at these two ADCK genes can lead to primary coenzyme Q-10 deficiency and nephrotic syndrome, respectively [29, 30].

Table 1. Genes located in regions found to be significantly differentiated based on our CMH test comparing SNP frequencies in the TDO and TSO populations

| Gene | Location | Association | Molecular Function | Biological Process |
|---|---|---|---|---|
| CG42355 | 3L: 2037371-2038224 | Unknown | Unknown | Sperm chromatin condensation |
| sls (Sallimus) | 3L: 2039681-2115611 | Protein necessary for myoblast fusion; determinant of resting elasticity of striated muscle sarcomeres (myofibril stiffness); regulates mitochondrial respiration in sarcomere | Structural constituent of muscle; Actin binding; Protein binding | Chromosome organization; skeletal muscle organ development; regulation of immune system process; mesoderm development; chromosome condensation; locomotion; somatic muscle development; myotube differentiation; visceral muscle development; striated muscle tissue development; regulation of multicellular organismal process |
| CR42860 | 3L: 2088166-2089626 | Unknown | Unknown | Unknown |
| Zormin | 3L: 2117466-2151700 | Found in the Z-disc and the M-line of muscles. Affects elasticity and stiffness of sarcomeres | Protein binding; actin binding | |
| CR45802 | 3L: 2118498-2119567 | Unknown | Unknown | Unknown |
| CR34047 | 3L: 5098376-5099795 | Unknown | Unknown | Unknown |
| CG32649 | X: 12898768-12901114 | Ubiquinone biosynthesis; CoQ8A and CoQ8B human orthologs | Protein kinase activity | Mitochondrial electron transport, ubiquinol to cytochrome c |
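For readers unfamiliar with how the CMH test is applied in this kind of study: each SNP is summarized as a set of 2x2 tables of major/minor allele counts, one table per treatment-control replicate pair, and the test asks whether the allele-frequency difference is consistent across those tables. The sketch below hand-rolls the statistic on made-up counts; it illustrates the test itself, not the authors' code or data.

```python
from scipy.stats import chi2

# One 2x2 table per replicate pair at a single SNP:
# rows = (TDO, TSO), columns = (major allele count, minor allele count).
# The counts are invented for illustration.
tables = [
    [[60, 40], [45, 55]],
    [[62, 38], [48, 52]],
    [[58, 42], [44, 56]],
    [[65, 35], [50, 50]],
    [[61, 39], [46, 54]],
]

num = 0.0  # summed deviation of the top-left cell from its null expectation
den = 0.0  # summed variance of the top-left cell under the null

for (a, b), (c, d) in tables:
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    num += a - row1 * col1 / n
    den += row1 * row2 * col1 * col2 / (n ** 2 * (n - 1))

# Cochran-Mantel-Haenszel chi-square statistic (1 df, no continuity correction).
cmh_stat = num ** 2 / den
p_value = chi2.sf(cmh_stat, df=1)
print(cmh_stat, p_value)
```

Because the statistic pools evidence across replicates, a consistent frequency shift in the same direction in every replicate pair yields a small p-value, while shifts that vary in sign across replicates largely cancel.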
Migration simulations Given how long these populations have been maintained in the lab, some might argue that the lack of SNP differentiation between the TDO and TSO populations is perhaps due to low frequency accidental migration events. To test this idea, simulations were performed using SNP frequencies in the ACO and CO populations featured in Burke et al. [12] and Graves et al. [13] as a starting point (See Methods for details). Data from these populations were used as a proxy for what differentiation between the TDO and TSO populations might have looked like during the height of selection for desiccation resistance. We simulated 230 generations of neutral evolution with varying levels of migration (two, six and ten events per generation), and looked at how our ability to detect differentiated sites using the quasibinomial GLM approach was impacted (Additional file 2: Table S9). We looked at the number of significantly differentiated sites using both a Bonferroni correction (our most stringent method) and the q-value approach to correction for multiple comparisons (our most liberal method). Applying this test to observed SNP frequencies in the ACO and CO populations, we detect 162 sites when we use the Bonferroni method for correction for multiple comparisons, and ~ 425 k when we use the less stringent q-value approach. We find that in our simulated data sets, migration does reduce the number of significantly differentiated sites detected. For instance, the number of sites that are significant after a Bonferroni correction is less than three for each of the scenarios we looked at (Additional file 2: Figure S5). However, this is still greater than the zero sites detected between the TDO and TSO populations. Using the q-value method, we find that migration rates of two and six similarly reduce the number of detected sites by ~ 270–300 k, and a migration rate of 10 reduces the number of detected sites by ~ 400 k. However, even in our most extreme scenario with 10 migration events every generation, we still detect ~ 20 k differentiated sites compared to zero in the TSO and TDO populations (Additional file 2: Figure S6). Given that it is incredibly unlikely there was anything approaching the equivalent of 10 migration events per generation between the TDO and TSO treatments, these findings suggest that the lack of differentiation between the TDO and TSO populations cannot be easily attributed to accidental migration events. Discussion Our results indicate that ~ 230 generations of relaxed selection were enough for the previously desiccation selected TDO populations to largely converge on the TSO controls phenotypically and perhaps genomically. Aside from longevity and one interval for fecundity, we find no signs of phenotypic differentiation between the TDO and TSO populations for any of the characters measured. Most notably, the TDO populations do not show any signs of significantly enhanced survival in desiccating environments compared to the TSO populations, despite extreme differences in desiccation resistance prior to the relaxation of selection [18, 19, 20, 23].
There is also no longer any evidence of increased starvation resistance in the TDO populations, which was a trait previously found to be correlated with enhanced desiccation resistance [20]. Next, we found no differences in fungal resistance and development time, which are traits we would also have expected to be impacted by selection for desiccation resistance. Lastly, significant differences in female fecundity were also limited to a single window spanning day 18 to day 20 from egg (day 9–11 from eclosion). Chippindale et al. [31] found that D populations had significantly higher fecundity compared to the C populations. However, their experiment only measured early fecundity (day 3–5 from eclosion). It is also worth noting that the observed phenotypic reversion under control conditions may not be due entirely to shifts in allele frequencies. Instead, they may be the product of gene by environment interactions as has been well characterized in the quantitative trait locus literature [32, 33, 34]. Longevity is the only trait we studied that still shows clear signs of phenotypic differentiation between the TDO and TSO populations. This arises notwithstanding the absence of genomic differentiation detected between these two sets of populations. There are two ways to resolve this paradox at the level of statistical analysis. First, it is conceivable that the longevity differentiation arose by chance alone, given that we compared these two sets of populations for multiple phenotypes. However, if we perform a Bonferroni correction on the threshold for statistical significance making it 0.003, the observed p-value of 0.001 remains significant for longevity. Second, if we grant the point that there is a significant longevity difference, then the failure to detect extensive genomic differentiation becomes an issue. However, if we consider the genomic analysis of differentiation between A and C populations of Graves et al. [13], we find that there is a general reduction in the ability of genomic analysis to detect differentiation when only ten populations total are compared as two groups of five. In Graves et al. [13], thousands of differentiated sites were detected when comparing all ten A-type populations to all ten C-type populations, compared to hundreds of sites when groups of five were compared to one another. This finding is also supported by theoretical studies examining the power of evolve and re-sequence studies to detect causal variants [35, 36]. As we are limited to five replicates per treatment in this study, it is possible that we simply do not have the statistical power to detect the genomic differentiation underlying this residual phenotypic differentiation. Previous work has shown that selection for increased longevity is associated with increased desiccation resistance [37, 38], and furthermore selection for increased desiccation resistance was associated with increased longevity [18]. Further work with sustained selection for desiccation resistance revealed a more complex relationship, with the greatest benefits for longevity accruing at intermediate levels of increased desiccation resistance [21, 22]. It is surprising and perhaps noteworthy that the longevity difference between the TDO and TSO populations is similar to the longevity difference that they exhibited at their peak of differentiation for this character, particularly for females [18, 31]. 
Furthermore, when desiccation selection proceeded to very high levels of desiccation resistance in the D (ancestral to TDO) populations, their differentiation for longevity relative to the C (ancestral to TSO) populations actually fell from this peak. In the case of the present TDO and TSO populations, the differentiation in desiccation resistance is now gone, at least at the level of statistical detectability. Yet the longevity difference has returned to its former peak level. One possible explanation for this is that the T culture regime may favor increased longevity, or that moderately increased longevity is at least not selected against by any type of antagonistic pleiotropy. However, we have no way of distinguishing between these hypotheses at the present time. Although our selection protocol for desiccation resistance was relatively extreme compared to other selection regimes we have used [39], we do not find any clear evidence of it having a lasting impact on levels of genetic variation. Specifically, our analysis did not yield any evidence that being subjected to intense selection in the past has led to widespread fixation in the TDO populations. As such, our findings suggest that even when a moderately outbred experimental population’s evolutionary history involves prolonged periods of intense selection, it does not have irreversible effects on levels and patterns of genetic variation. This lack of fixation also indicates that the rapid and highly repeatable evolution from standing genetic variation seen in Graves et al. [13] should still be possible when populations previously subjected to periods of intense selection are exposed to new conditions. Unfortunately, we cannot directly compare current levels of SNP differentiation between the TDO and TSO populations to what they were during the height of TDO selection for desiccation resistance. However, given the levels of phenotypic differentiation between the two groups during the height of selection [18, 19, 20, 21, 22], we can reasonably suggest that total SNP differentiation between the two groups during this period was likely comparable to the dozens to hundreds of differentiated sites typically detected in Drosophila experimental evolution studies [4, 5, 6, 7, 8, 9]. We also believe the findings of Burke et al. [12] and Graves et al. [13] in particular support this rationale, as they were performed with populations from the same system. Burke et al. [12] shows that selection for accelerated development generates levels of phenotypic differentiation between selected populations and controls approaching what is seen during the height of selection for desiccation resistance, and Graves et al. [13] shows that this is accompanied by widespread SNP differentiation when compared to controls and an overall reduction in levels of genetic variation. As such, the present lack of SNP differentiation between the TDO and TSO populations can be interpreted as evidence that ~ 230 generations of relaxed selection were enough to reduce obvious signs of genomic differentiation between the two groups. CMH tests comparing SNP frequencies between the two groups of populations did yield some significantly differentiated sites. However, this was limited to 17 sites compared to the dozens to potentially thousands of differentiated sites likely present during the height of selection, as seen in other experimental evolution studies [4, 5, 6, 7, 8, 9].
There were a total of seven genes associated with these sites, but none of these candidate genes had clear connections to desiccation resistance (See Results for details). Additionally, the quasibinomial GLM approach to detecting significantly differentiated SNPs advocated by Wiberg et al. [40] did not detect any significant SNP differentiation between the TDO and TSO populations. Given the issues with the CMH test documented by Wiberg et al. [40], the discrepancy between the two tests casts some degree of doubt on whether or not the few sites detected using the CMH test are truly differentiated. As such, we conclude that ~ 230 generations of relaxed selection were enough to largely eliminate the signs of meaningful SNP differentiation likely present between the TDO populations and their controls during the height of selection. This further suggests that a past history of intense selection does not necessarily reduce evolutionary repeatability at the genomic level in Drosophila experimental evolution. Our genomic findings about the role of evolutionary history in shaping patterns of genetic variation are not entirely conclusive, however. For instance, while we have phenotypic data for the TDO populations prior to the relaxation of selection, we do not have any genomic data from this period because original work with these populations pre-dated affordable genome-wide sequencing. As such, we cannot directly show that relaxing selection resulted in significant shifts in patterns of genetic variation. We also cannot directly compare current levels of SNP differentiation between the TDO and TSO populations to what they were during the height of selection for desiccation resistance, as previously mentioned. Lastly, we acknowledge that, as is often the case with studies combining experimental evolution and pooled sequencing, our results were perhaps impacted by some pronounced underlying haplotype structure. Exploring this possibility is undoubtedly a worthwhile venture but falls beyond the scope of our study at present. However, assuming past experimental evolution studies featuring genome-wide comparisons between experimentally evolved Drosophila populations are broadly applicable, these results nevertheless suggest that patterns and levels of genetic variation and differentiation in Drosophila experimental evolution are not so impacted by sustained strong selection as to eliminate the potential for evolutionary repeatability in response to subsequent selection.
Conclusion
Cumulatively, our findings suggest that past bouts of extreme selection do not negate the potential for evolutionary repeatability at the genomic and phenotypic levels in response to future selection in Drosophila experimental evolution. While we are able to detect some signs of genetic differentiation and residual differences in mean longevity when comparing the TDO and TSO populations, it is nothing on the order of what is usually found between selected and control populations in Drosophila experimental evolution [4, 5, 6, 7, 8, 9]. And there is no reason to believe these differences would dramatically impact how these populations respond to future selection. As such, we conclude that past intense selection does not necessarily eliminate the possibility of evolutionary repeatability in response to future selection in experimental evolution studies featuring sexually reproducing populations, provided the duration of these experiments is sufficiently long and populations are not inbred at any point.
Methods
Populations
This experiment used large, outbred lab populations (effective population size of ~ 1000 [41]) of Drosophila melanogaster derived from a population sampled by P.T. Ives from South Amherst, Massachusetts (Ives, 1970). The experimental stocks used in this study were derived from a set of five populations that had been selected for late reproduction (O1–5). The O1–5 populations were derived from the Ives stock in February 1980 [42]. In 1988, two sets of populations were derived from the O1–5 populations. One set (D1–5) was selected for desiccation resistance while the other set (C1–5) was maintained to control for desiccation resistance selection. The C1–5 populations were handled like the D1–5 populations, except flies were given nonnutritive agar instead of desiccant [18]. In 2005, these populations were relaxed from selection and kept on a 21-day culture regime to the present day. Under this new regime, the D populations have been renamed to TDO, and the C populations to TSO. In total, the TDO populations underwent ~ 260 generations of selection for desiccation resistance, and ~ 230 generations of relaxed selection. Populations were reared on a banana-molasses diet for stock maintenance and for experimental assays. The banana-molasses media is composed of the following ingredients per 1 L distilled H2O: 13 g Apex® Drosophila agar type II, 120 g peeled, ripe banana, 40 mL light Karo® corn syrup, 40 mL dark Karo® corn syrup, 50 mL Eden® organic barley malt syrup, 32 g Red Star® active dry yeast, 2 g Sigma-Aldrich® Methyl 4-hydroxybenzoate (anti-fungal), and 42 mL EtOH. Stocks are maintained on a 24-h light cycle and kept at room temperature (24 °C ± 1 °C).
Phenotypic assays
Mortality and mean longevity
For this assay, the TDO and TSO populations were reared in eight-dram polystyrene vials with ~ 6 mL of food, an egg density of 60–80 eggs and given 14 days to develop. Adult flies from each replicate were transferred on day 14 from egg to three, six-liter acrylic plastic cages with ~ 1000 flies per cage (~ 3000 flies per replicate). Flies were given fresh food daily, and every 2 weeks flies were transferred to clean cages using light CO2 anesthesia. Individual mortality was assessed every 24 h, the flies were sexed at death, and the exact cohort size was calculated from the complete recorded deaths. Total cohort size across all replicates from both regimes was ~ 30,000 flies. Mean longevity was analyzed using a linear mixed-effects model (LME) in the R-project for statistical computing (www.R.project.org) [43]. The model used for the data is described as follows: Let yijkm be the longevity for regime – i (i = 1 (TDO) or 2 (TSO)), sex – j (j = 1 (female), 2 (male)), population – k (k = 1, …, 10) and individual – m (m = 1, …, njk). A LME model for longevity is, $${y}_{ijkm}=\alpha +{\delta}_i{\beta}_i+{\delta}_j\gamma +{\delta}_i{\delta}_j\pi +{b}_k+{\varepsilon}_{ijkm}$$ where δs = 0, if s = 1, and 1 otherwise, and bk and εijkm are assumed to be independent random variables with a normal distribution with zero mean and variances $${\sigma}_1^2$$ and $${\sigma}_2^2$$ respectively.
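For concreteness, the sketch below shows one way a longevity model of this form could be fit with the nlme package in R; the data frame and column names (longevity_data, longevity, regime, sex, population) are hypothetical placeholders and this is not the authors' actual code.

library(nlme)
# One row per fly: longevity in days, regime (TDO/TSO), sex (F/M), population (factor, 10 levels).
fit <- lme(fixed = longevity ~ regime * sex,   # alpha plus regime, sex and regime-by-sex effects
           random = ~ 1 | population,          # b_k: random intercept for each replicate population
           data = longevity_data)
summary(fit)    # fixed-effect estimates and the two variance components
intervals(fit)  # approximate confidence intervals for the regime and sex effects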
Mortality rates from the TDO and TSO populations were analyzed using a two-stage, three-parameter Gompertz model. The Gompertz model and its variants describe the change in instantaneous mortality rates with age. The chance of dying between day t and t + 1, qt, was estimated as, $${q}_t=1-\frac{p_{t+1}}{p_t}\ \mathrm{where}$$ $${p}_t=\left\{\begin{array}{c}\exp \left\{\frac{A\left[1-\exp \left(\alpha t\right)\right]}{\alpha}\right\}\ if\ t\le bd\\ {}\exp \left\{\frac{A\left[1-\exp \left(\alpha\ bd\right)\right]}{\alpha }+A\exp \left(\alpha\ bd\right)\left( bd-t\right)\right\}\ if\ t> bd\end{array}\right.$$ where bd is the break day or the age at which mortality rates transition from a Gompertz dynamic to a plateau. With this model we let yijkt be the mortality from selection regime – i (i = 1 (TDO), 2 (TSO)), sex – j (j = 1 (female), 2 (male)) and population – k (k = 1, 2, …, 10), at age t. Random variation arises due to both population effects and individual variation. Consequently, the mortality of adults from selection regime – i, sex – j, and population – k, at time t is yijkt = f(φijk, t) + εijkt, where φijk is the vector of parameters, (Aijk, αijk, bdijk), and, $${A}_{ijk}={\pi}_1+{\delta}_i{\beta}_{1i}+{\delta}_j{\gamma}_1+{b}_{1k}$$ $${\alpha}_{ijk}={\pi}_2+{\delta}_i{\beta}_{2i}+{\delta}_j{\gamma}_2+{b}_{2k}$$ $${bd}_{ijk}={\pi}_3+{\delta}_i{\beta}_{3i}+{\delta}_j{\gamma}_3+{b}_{3k}$$ where δs = 0, if s = 1 and 1 otherwise. The within-population variation, ε, is assumed to be normally distributed with a zero mean. This variation increases with age, so we assumed that $$\mathrm{Var}\left(\varepsilon \right)={\sigma}^2{\left|t\right|}^{2\Delta }$$ where Δ is estimated from the data. Population variation, bmk, was assumed to affect all three parameters. We tested models with population variation in subsets of parameters and with a constant within-population variation. The model chosen had the lowest Akaike and Bayesian information criterion values [44]. The population variation is assumed independent of the within-population variation and also has a normal distribution with zero mean and covariance matrix, Σb. Parameters of this model were estimated by the restricted maximum likelihood techniques implemented by the nlme function in R.
Development time: larvae to adult
In this experiment, the time from larvae hatching from egg to adult eclosion from pupae was studied. Eggs from the TDO and TSO populations were collected on non-nutritive agar. From each agar plate, 50 first-instar larvae were transferred to polystyrene vials with banana molasses food. Thirteen vials per replicate were assayed. Vials were checked every 6 h after the first adult flies eclosed, and all eclosed flies were counted and sexed by microscope. Time to eclosion was analyzed using a linear mixed-effects model (LME) in the R-project for statistical computing (www.R.project.org) [43]. The model used for the data is described as follows: Let yijkm be the development time for regime – i (i = 1 (TDO) or 2 (TSO)), sex – j (j = 1 (female), 2 (male)), population – k (k = 1, …, 10) and individual – m (m = 1, …, njk). A LME model for time to eclosion is, $${y}_{ijkm}=\alpha +{\delta}_i{\beta}_i+{\delta}_j\gamma +{\delta}_i{\delta}_j\pi +{b}_k+{\varepsilon}_{ijkm}$$ where δs = 0, if s = 1, and 1 otherwise, and bk and εijkm are assumed to be independent random variables with a normal distribution with zero mean and variances $${\sigma}_1^2$$ and $${\sigma}_2^2$$ respectively.
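As a concrete illustration of the two-stage Gompertz model specified above, the following minimal R sketch computes the piecewise survival function pt and the daily mortality qt for a single set of parameters; the values of A, alpha and bd used here are arbitrary illustrations, not estimates from these data.

# Two-stage Gompertz survival (p_t) and daily probability of death (q_t).
p_t <- function(t, A, alpha, bd) {
  ifelse(t <= bd,
         exp(A * (1 - exp(alpha * t)) / alpha),
         exp(A * (1 - exp(alpha * bd)) / alpha + A * exp(alpha * bd) * (bd - t)))
}
q_t <- function(t, A, alpha, bd) {
  1 - p_t(t + 1, A, alpha, bd) / p_t(t, A, alpha, bd)
}
days <- 0:60
plot(days, q_t(days, A = 0.01, alpha = 0.1, bd = 40), type = "l",
     xlab = "Age (days)", ylab = "Daily probability of death")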
Fecundity
TDO and TSO adult age-specific fecundity was monitored for 2 weeks. Populations were reared in vials and given 14 days to develop. On day 14 from egg, one mating pair (one male and one female) was transferred to each of 60 charcoal caps per replicate. Charcoal medium is composed of the following per 1 L distilled H2O: 19 g Apex® Drosophila agar type II, 5 g Fisher® Activated Darco® G-60 Carbon, 54 g Sucrose, 32 g Red Star® active dry yeast, 3 g Sigma-Aldrich® Methyl 4-hydroxybenzoate (anti-fungal), and 30 mL EtOH. Starting on day 14, fecundity was monitored every 24 h until day 28. Pairs were given a fresh charcoal cap with 50 μL yeast solution (98 mL distilled water, 2 g active dry yeast, and 2 mL 1% acetic acid) each day, and the old charcoal caps were scanned on a flatbed scanner and counted at a later time. Age-specific fecundity was analyzed using a linear mixed-effects model (LME) in the R-project for statistical computing (www.R.project.org) [43]. The data consisted of fecundity at an age (x) within an age interval – k (k = 1, …, 5). Fecundity was modeled by a straight line within each interval. Regime – j (j = 1 (TDO) or 2 (TSO)) could affect the intercept, but not the slope of the line. Slope could vary between intervals. Populations – i (i = 1, 2, …, 10) contributed random variation to these measures. Fecundity at age (x), interval (k), regime (j), and population (i) is yijkx and can be described by, $${y}_{ijkx}=\alpha +{\beta}_k+{\delta}_j{\gamma}_j+\left(\omega +{\pi}_k{\delta}_k\right)x+{\delta}_k{\delta}_j{\mu}_{jk}+{c}_i+{\mathcal{E}}_{ijkx},$$ where δs = 0 if s = 1 and 1 otherwise, and ci and $${\mathcal{E}}_{ijkx}$$ are independent normal random variables with zero mean and variances $${\sigma}_c^2$$ and $${\sigma}_{\mathcal{E}}^2$$, respectively. The effects of regime on the intercept are assessed by considering the magnitude and variance of both γj and μjk.
Fungal resistance
Susceptibility to fungal infection was compared between the TDO and TSO populations. The pathogen used was the entomopathogenic fungus Beauveria bassiana, strain 12460, obtained from the USDA Agricultural Research Service Collection of Entomopathogenic Fungi, Ithaca NY. Fungal suspensions were prepared by suspending 0.3 g of B. bassiana spores in 25 mL of 0.03% silwet. The TDO and TSO populations were reared in vials and given 12 days to develop. On day 12, the flies were transferred to fresh food vials. On day 14, ~ 500 flies (sexes mixed) were briefly anesthetized with CO2 and then placed on Petri dishes on ice for the duration of the inoculation assay (< 2 min). Anesthetized flies were sprayed either with 5 mL of the prepared fungal suspension or with 5 mL of control suspension (0.03% silwet, but no fungus) using a spray tower (Vandenberg 1996). Sprayed flies were then moved to 3 L cages and kept at 100% humidity for 24 h. After 24 h, the humidity was reduced to 60%. Dead flies were removed from the cages daily and were sexed. Food was replaced daily. We completed three technical replicates and tested a total of ~ 1500 flies (sexes mixed) per population per treatment. Fly mortality, pij(t), was modeled at day t (t = 1, 2, .., ?) in selection regime – i (i = 1 (TDO), 2 (TSO)) and treatment – j (j = 1 (fungus), 2 (no fungus)) by the logistic regression function, $$\mathit{\log}\left[\frac{p_{ij}(t)}{1-{p}_{ij}(t)}\right]={\mu}_0+{\delta}_i{\alpha}_0+{\delta}_j{\beta}_0+{\delta}_i{\delta}_j{\gamma}_0+\left({\mu}_1+{\delta}_i{\alpha}_1+{\delta}_j{\beta}_1+{\delta}_i{\delta}_j{\gamma}_1\right)t,$$ where δk = 1 if k = 1 and 0 otherwise. Parameters of this equation were estimated with the glm function in R [43].
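A minimal sketch of how a logistic regression of this form might be fit with R's glm function is shown below; the data frame 'fungal' and its columns (dead, alive, regime, treatment, day) are hypothetical placeholders, with one row per replicate cage and day of observation.

# dead/alive are counts per cage-day; regime is TDO/TSO; treatment is fungus/control; day is age in days.
fit <- glm(cbind(dead, alive) ~ regime * treatment * day,
           family = binomial(link = "logit"), data = fungal)
summary(fit)  # regime, treatment, day and their interactions give the intercept and slope terms of the model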
Starvation resistance
On day 15 from egg, 30 female flies from each of the TDO and TSO populations were placed in their own starvation straw, one fly per straw. These starvation straws are capped at both ends and contain a small amount of agar at one end of the straw. This agar “plug” provides adequate humidity, but no nutrients. Mortality was checked every 4 h using lack of movement under provocation as a sign of death. Female mean longevity in a nutrition-free environment was analyzed using a linear mixed-effects model (LME) in the R-project for statistical computing (www.R.project.org) [43]. The model used for the data is described as follows: Let yijk be the longevity for regime – i (i = 1 (TDO) or 2 (TSO)), population – j (j = 1, …, 10) and individual – k (k = 1, …, njk). A LME model for longevity is, $${y}_{ijk}=\alpha +{\delta}_i\beta +{b}_j+{\varepsilon}_{ijk}$$ where δs = 0, if s = 1, and 1 otherwise, and bj and εijk are assumed to be independent random variables with a normal distribution with zero mean and variances $${\sigma}_1^2$$ and $${\sigma}_2^2$$ respectively.
Desiccation resistance
On day 15 from egg, 30 female flies from each of the TDO and TSO populations were placed in their own desiccant straws, one fly per straw. A piece of cheesecloth separated the fly from the pipet tip at the end of the straw that contained 0.75 g of desiccant (anhydrous calcium sulfate). The pipet tip containing desiccant was sealed with a layer of Parafilm©. Mortality was checked hourly, using lack of movement under provocation as a sign of death. Female mean longevity in a desiccated environment was analyzed using a linear mixed-effects model (LME) in the R-project for statistical computing (https://www.r-project.org/) [43]. The model is described as follows: Let yijk be the longevity for regime – i (i = 1 (TDO) or 2 (TSO)), population – j (j = 1, …, 10) and individual – k (k = 1, …, njk). A LME model for longevity is, $${y}_{ijk}=\alpha +{\delta}_i\beta +{b}_j+{\varepsilon}_{ijk}$$ where δs = 0, if s = 1, and 1 otherwise, and bj and εijk are assumed to be independent random variables with a normal distribution with zero mean and variances $${\sigma}_1^2$$ and $${\sigma}_2^2$$ respectively.
Cardiac arrest rates
On days 15, 16, and 17 from egg, 30 female flies from each of the TDO and TSO populations were chosen at random (total of 90 flies per replicate). The flies were anesthetized for 3 min using triethylamine (FlyNap©), and then placed on a microscope slide prepared with foil and two electrodes. FlyNap was chosen as the anesthetic because of its minimal effect on heart function and heart physiology when administered for more than 1 min [45]. The cold-shock method was not used as an anesthetic for the cardiac pacing assay, because the flies need to be fully anesthetized throughout the procedure. If the flies regain consciousness, the added stress and abdominal contractions while trying to escape would alter heart rate and function more than FlyNap does. Paternostro et al. [46] found that FlyNap has the least cardiac disruption compared to the two other substances commonly used for Drosophila anesthesia, carbon dioxide and ether. Two electrodes were attached to a square-wave stimulator in order to produce electric pacing of heart contraction. Anesthetized flies were attached to the slide between the foil gaps using a conductive electrode jelly touching the two ends of the fly body, specifically the head and the posterior abdomen tip.
The shocking settings for this assay were 40 V, 6 Hz, and 10 ms pulse duration. Each shock lasted for 30 s. An initial check of the status of the heart was made after completion of the shock, followed by a check after a 2-min “recovery” period. Heart status was scored as either contracting or in cardiac arrest. The protocol for this assay is outlined in Wessells and Bodmer [47]. CMH tests were used to analyze the rates of cardiac arrests between the TSO1–5 and TDO1–5 populations. The CMH test is used when there are repeated tests of independence, or multiple 2 × 2 tables of independence. Below is the equation for the CMH test statistic, with the continuity correction included, that we used for our statistical analyses: $${X}_{\mathrm{MH}}^2=\frac{{\left\{|\Sigma \left[{a}_i-\frac{\left({a}_i+{b}_i\right)\left({a}_i+{c}_i\right)}{n_i}\right]|-0.5\right\}}^2}{\Sigma \left({a}_i+{b}_i\right)\left({a}_i+{c}_i\right)\left({b}_i+{d}_i\right)\left({c}_i+{d}_i\right)/\left({n}_i^3-{n}_i^2\right)}$$ We designated “a” and “b” as the number of cardiac arrests in the TSO and TDO cohorts of population i. We designated “c” and “d” as the number of contracting hearts in these two cohorts of population i. The ni represents the sum of ai, bi, ci, and di. The subscript i (i = 1, …, 5) represents one of the five replicate populations within each stock.
DNA extraction and sequencing
Genomic DNA was extracted from samples of 200 female flies collected from each of the 10 individual populations (TSO1–5 and TDO1–5) using the Qiagen©/Gentra Puregene© kit, following the manufacturer’s protocol for bulk DNA purification. The resulting gDNA pools were prepared as standard 200–300 bp fragment libraries for Illumina sequencing, and constructed such that the five replicate populations of each treatment (e.g., TSO1–5) were given unique barcodes, normalized, and pooled together. Libraries were run across PE100 lanes of an Illumina HiSeq 2000 at the UCI Genomics High-Throughput Sequencing Facility. Resulting data were 100 bp paired-end reads. Each population was sequenced twice; data from both runs were combined for some analyses as described below. Combining reads from two independent sequencing runs likely alleviates the effects of possible bias introduced from running all replicates for each population in the same lane.
Genomic analysis
Reads were mapped to the D. melanogaster reference genome (version 6.14) using bwa mem with default settings (BWA version 0.7.8) [48]. The resulting SAM files were filtered for reads mapped in proper pairs with a minimum mapping quality of 20, and converted to the BAM format using the view and sort commands in SAMtools [49]. The rmdup command in SAMtools was then used to remove potential PCR duplicates. As each population was sequenced twice, there were two bam files corresponding to each population at this stage. BAMtools was used to combine the pairs of files corresponding to the same populations [50]. Average coverage was above 70× for all populations except TSO3, which was 67× (Additional file 2: Table S1). Next, SAMtools was used to combine the 10 bam files into a single mpileup file. Using the PoPoolation2 software package [51], these files were converted to “synchronized” files, which is a format that lists allele counts for all bases in the reference genome for all populations being analyzed.
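To make the mapping and filtering steps above concrete, the sketch below wraps representative shell commands in R's system() call for a single population; the file names are hypothetical placeholders, and the exact samtools flags shown here assume samtools version 1.3 or later, so details may differ from the commands actually used.

# Map, keep properly paired reads with mapping quality >= 20, sort, and remove PCR duplicates.
cmds <- c(
  "bwa mem dmel_r6.14.fasta TSO1_R1.fastq TSO1_R2.fastq > TSO1.sam",
  "samtools view -bS -f 0x2 -q 20 TSO1.sam > TSO1.filtered.bam",
  "samtools sort TSO1.filtered.bam -o TSO1.sorted.bam",
  "samtools rmdup TSO1.sorted.bam TSO1.rmdup.bam"
)
for (cmd in cmds) system(cmd)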
We then used RepeatMasker 4.0.3 (http://www.repeatmasker.org) to create a gff file detailing low complexity regions in the D. melanogaster reference genome. The regions were then masked in our sync file once again using PoPoolation2.
SNP variation
A SNP table was created using the sync file mentioned above. We only considered sites where coverage was between 30× and 200×, and for a site to be considered polymorphic we required a minimum minor allele frequency of 2% across all 10 populations. All sites failing to meet these criteria were discarded. To assess broad patterns of SNP variation in TSO and TDO populations, heterozygosity was calculated and plotted over 150 kb non-overlapping windows directly from the major and minor counts in our SNP table. A t-test was also performed to compare mean heterozygosity between the two groups of populations. To assess how closely replicate populations resembled one another, FST estimates were also obtained using the formula FST = (HT − HS)/HT, where HT is heterozygosity based on total population allele frequencies, and HS is the average subpopulation heterozygosity in each of the replicate populations [52]. FST estimates were made at every polymorphic site in the data set for a given set of replicate populations.
SNP differentiation
We used two different methods to assess SNP differentiation in the TSO and TDO populations. First, we used the CMH test as implemented in the PoPoolation2 software package to compare SNP frequencies between the TSO and TDO populations. As the findings of Wiberg et al. [40] indicate that coverage variation can impact statistical results, we subsampled to a uniform coverage of 50× across the genome for each population using scripts provided in the PoPoolation2 software package. During this process, all positions with coverage less than 50× or greater than 200× were discarded. The subsampling procedure involved calculating the exact allele frequencies at each site and linearly scaling them to our target coverage of 50×. In addition to these coverage requirements, we only considered sites polymorphic if they had a minor allele frequency of 2% across all ten populations. In total, the resulting subsampled sync file contained ~ 1.2 million SNPs spread across the major chromosome arms. CMH tests were then performed at every polymorphic site between the TSO and TDO populations. To correct for multiple comparisons, we used the permutation approach featured in Graves et al. [13]. Briefly, populations were randomly assigned to one of two groups and the CMH test was then performed at each polymorphic position in the shuffled data set to generate null distributions of p-values. This was done 1000 times, and each time the smallest p-value generated was recorded. The quantile function in R was then used to define thresholds that set the genome-wide false-positive rate at 5%.
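A minimal R sketch of this permutation procedure is given below; cmh_pvalues() stands in for whatever routine returns per-SNP CMH p-values for a given split of the ten populations into two groups of five, and is a hypothetical placeholder rather than an actual PoPoolation2 function.

set.seed(1)
pops <- c(paste0("TSO", 1:5), paste0("TDO", 1:5))
# For each of 1000 random splits, record the smallest CMH p-value across all SNPs.
min_null_p <- replicate(1000, {
  shuffled <- sample(pops)
  p <- cmh_pvalues(group1 = shuffled[1:5], group2 = shuffled[6:10])
  min(p)
})
# The 5th percentile of these minima gives a genome-wide 5% false-positive threshold.
threshold <- quantile(min_null_p, probs = 0.05)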
In addition to the CMH test, we also used the quasibinomial GLM approach recommended by Wiberg et al. [40]. Here the authors argue that while the CMH test may be the most commonly used test to compare allele frequencies in projects combining experimental evolution and pool-seq, key assumptions of the test are often violated in such studies. For instance, the assumption that each count within a cell of the contingency table being considered by the test is independent is automatically violated in pool-seq studies, as counts from reads are not independent draws from the experimental populations being studied (see Wiberg et al. [40] for a more detailed discussion of this issue and others). Based on their findings, the violation of these assumptions results in inflated p-values and increased false positive rates. Their findings further suggest the quasibinomial GLM approach they advocate has lower false positive and higher true positive rates than the CMH test. However, it should be noted that the permutation-derived significance threshold used in our CMH tests is more stringent than anything featured in their analysis. The test was implemented using scripts provided by Wiberg et al. [40]. A .sync file was once again the primary input file, and we used the same SNP calling criteria outlined above (minimum coverage of 50× per population, maximum of 200× per population, and a minimum minor allele frequency of 2% across all 10 populations). Coverage was once again scaled to 50× to minimize the effect of coverage variation on our results. As counts of zero can lead to problems when implementing this approach, a count of one was added to each allele whenever a zero was encountered. In terms of establishing significance thresholds, another reported benefit of quasibinomial GLMs is that they produce the expected uniform distribution of p-values under the null hypothesis, which allows for standard methods of correcting for multiple comparisons [35]. As a result, to correct for multiple comparisons we used two common approaches, the Bonferroni correction and the q-value method [53, 54]. We chose to use the Bonferroni correction and the q-value methods as Wiberg et al. [40] found them to be the most and least conservative approaches, respectively.
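The sketch below shows one simple variant of such a per-SNP quasibinomial GLM in R; the allele-count vectors are made-up numbers for a single illustrative site (one entry per population, already scaled to 50× coverage), and the published scripts of Wiberg et al. [40] differ in their details.

group <- factor(rep(c("TDO", "TSO"), each = 5))
test_snp <- function(major, minor, group) {
  # Model the per-population major-allele proportion as a function of selection treatment.
  fit <- glm(cbind(major, minor) ~ group, family = quasibinomial())
  summary(fit)$coefficients["groupTSO", "Pr(>|t|)"]  # p-value for the TDO vs TSO contrast
}
# Example with made-up counts for one site:
test_snp(major = c(30, 28, 32, 29, 31, 20, 22, 19, 21, 23),
         minor = c(20, 22, 18, 21, 19, 30, 28, 31, 29, 27),
         group = group)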
Migration simulations
Although all populations are maintained independently, and precautions are taken to prevent accidental migration (e.g., vials and cages used for stock maintenance are specifically labeled, a single person is never performing maintenance on the TDO and TSO populations simultaneously, etc.), there have almost certainly been chance migration events over the hundreds of generations these populations have been maintained. As such, it could be argued that the apparent genomic convergence between the TDO and TSO populations following the relaxation of selection for desiccation resistance is due to low frequency migration events. Forward simulations featuring migration were performed to test this idea. Ideally, simulations would have been done based on SNP frequencies in the TDO and TSO populations at the height of selection for desiccation resistance. However, we do not have such data as these populations were derived well before the rise of next generation sequencing technology, and we have no frozen samples from that time period. As a result, we opted to use data from the ACO and CO populations described in Burke et al. [12] and Graves et al. [13]. The ACO group consists of five replicate populations subjected to selection for accelerated development, while the CO group consists of five replicate populations subjected to selection for delayed reproduction and increased longevity. These different selection regimes have in turn produced significant phenotypic and SNP differentiation between the two groups [12, 13]. Our simulation strategy was to see if 230 generations of drift and low frequency migration were enough to erase the genomic differentiation present between these populations. In total, 1.1 million SNPs were identified across the five major chromosome arms in the ACO and CO populations (Additional file 9). Based on SNP frequencies at these sites, we generated 5 ACO and 5 CO populations each consisting of 1000 individuals. We chose to simulate 1000 individuals per population as an effective population size of 1000 is supported by past work in our system [41]. To generate haplotypes for individuals in each population, we put 2000 alleles into a pot corresponding to the major and minor allelic frequencies in the real data (e.g., simulated ACO1 was generated based on frequencies in the real ACO1 population). The pot was shuffled, and alleles were taken out two at a time to achieve a random distribution. To perform our simulations we used MimicrEE2 (https://sourceforge.net/p/mimicree2/wiki/Home/), a forward simulator specifically designed to mimic experimental evolution. It simulates populations of diploid individuals where genomes are provided as haplotypes, with two haplotypes constituting a diploid genome. There are no changes in demography once the initial population file is submitted. The simulated populations have non-overlapping generations and all individuals are hermaphrodites (though selfing is excluded). At each generation, matings are performed, where mating success (number of offspring) scales linearly with fitness, until the total number of offspring in the population equals the targeted population size (fecundity selection). Each parent contributes a single gamete to the offspring. Crossing-over events are introduced according to a user-specified recombination rate. The recombination rates were specified for 100 kb windows and were obtained from the D. melanogaster recombination rate calculator v2.232. As recombination does not occur in male Drosophila, the empirically estimated female recombination rate was divided by two for the simulations. From this method, ten populations were generated to match the ten experimental populations. We used the migration features in MimicrEE2 to see how different levels of migration would impact levels of SNP differentiation in the simulated ACO and CO populations after 230 generations of neutral evolution. We simulated a total of three scenarios: two, six, and ten migration events every generation. For each population, the source of migration was generated using the method outlined above based on the average SNP frequencies across replicates corresponding to a given selection treatment. The ACO populations could only receive CO migrants, while the CO populations could only receive ACO migrants. So, for the two migration event scenario, each generation one of the 5 ACO replicates would receive a CO migrant, and one of the CO replicates would receive an ACO migrant. In the six migration event scenario, each generation three of the 5 ACO replicates would receive a CO migrant, and three of the CO replicates would receive an ACO migrant. Lastly, in the 10 migration event scenario, at each generation every one of the ACO replicates would receive a CO migrant, and every one of the CO replicates would receive an ACO migrant. We then assessed our ability to detect differentiated sites in the resulting simulated data sets using the quasibinomial GLM approach in the same manner it was applied to the TDO and TSO data. We looked at the number of sites detected using the Bonferroni correction (our most stringent method) and the q-value approach (our least stringent method).
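For illustration, these two corrections might be applied to a vector of per-SNP GLM p-values as in the short sketch below; 'pvals' is a hypothetical placeholder name for that vector, and qvalue is the package cited in the text [53].

library(qvalue)
# Number of sites significant under each correction at a 5% threshold.
bonf_hits <- sum(p.adjust(pvals, method = "bonferroni") < 0.05)  # most stringent
qval_hits <- sum(qvalue(pvals)$qvalues < 0.05)                   # least stringent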
Notes
Acknowledgements
We thank Melanie Garcia, Madison Cheek, Elizabeth Rodriguez, and all other students of the Shahrestani and Rose laboratories for their help with data collection.
Funding
The project was funded by the California State University Program for Education & Research in Biotechnology, New Investigator Grant, awarded to PS.
Availability of data and materials
The DNA sequence data supporting the conclusions of this article are available in the NCBI SRA repository (SRP136130, https://www.ncbi.nlm.nih.gov/sra/SRP136130). The phenotypic data sets supporting the conclusions of this article are included within the article and its additional files.
Authors’ contributions
MAP and GAR did the laboratory work to generate the genomic data used in our analysis. MAP and ZSG were responsible for analyzing the resulting data set and performing population genetic simulations to address concerns about migration. JNK generated and analyzed all starvation resistance, desiccation resistance, and cardiac arrest rate data. GAR, AT, SM, HA, and PS generated developmental, mortality, and fungal resistance data, while GAR prepared all populations for these assays and analyzed all the data. GAR also generated and analyzed all fecundity data. LDM supervised analysis of all phenotypic data. MRR provided the populations and guidance for the project. PS supervised the project. MAP, GAR, and JNK were primarily responsible for drafting the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
MAP, GAR, ZSG, JNK, AT, SM, HA, and PS have no competing interests. MRR and LDM have financial interests in Genescient Inc. and Lyceum Pharmaceuticals Inc. MRR also has financial interests in Methuselah Flies LLC.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary material
Additional file 1: Mortality Data. File containing daily mortality data for the TDO and TSO populations. (12864_2018_5118_MOESM1_ESM.xlsx, 43 kb)
Additional file 2: All supplementary figures and tables for Effects of evolutionary history on genome wide and phenotypic convergence in Drosophila populations. (12864_2018_5118_MOESM2_ESM.docx, 4.3 mb)
Additional file 3: Time to Eclosion Data. File containing time to eclosion data for the TDO and TSO populations. (12864_2018_5118_MOESM3_ESM.xlsx, 152 kb)
Additional file 4: Fecundity Data. File containing fecundity data for the TDO and TSO populations. (12864_2018_5118_MOESM4_ESM.xlsx, 12 kb)
Additional file 5: Fungal Resistance Data. Survivorship and mortality data for TDO and TSO cohorts exposed to the fungal pathogen Beauveria bassiana, and control cohorts not exposed to the fungus. (12864_2018_5118_MOESM5_ESM.xlsx, 19 kb)
Additional file 6: Starvation Resistance Data. Time till death for TDO and TSO individuals subjected to starvation conditions. (12864_2018_5118_MOESM6_ESM.xlsx, 14 kb)
Additional file 7: Desiccation Resistance Data. Time till death for TDO and TSO individuals subjected to desiccation conditions. (12864_2018_5118_MOESM7_ESM.xlsx, 14 kb)
Additional file 8: Cardiac Arrest Rate Data. Cardiac arrest rates for TDO and TSO individuals subjected to the heart pacing procedure. (12864_2018_5118_MOESM8_ESM.xlsx, 9 kb)
Additional file 9: ACO and CO SNP Table. A table containing nucleotide counts for all polymorphic sites identified in the ACO and CO populations. This data set was used as the basis for our migration simulations. (12864_2018_5118_MOESM9_ESM.zip, 28 mb)
References
1. Schlötterer C, Kofler R, Versace E, Tobler R, Franssen SU. Combining experimental evolution with next-generation sequencing: a powerful tool to study adaptation from standing genetic variation. Heredity. 2015;114:431–40.
2. Long AD, Liti G, Luptak A, Tenaillon O. Elucidating the molecular architecture of adaptation via evolve and resequence experiments. Nat Rev Genet. 2015;16:567–82.
3. Teotónio H, Chelo IM, Bradiá M, Rose MR, Long AD. Experimental evolution reveals natural selection on standing genetic variation. Nat Genet. 2009;41:251–7.
4. Burke MK, Dunham JP, Shahrestani P, Thornton KR, Rose MR, Long AD. Genome-wide analysis of a long-term evolution experiment with Drosophila. Nature. 2010;467:587–90.
5. Turner TL, Steward AD, Fields AT, Rice WR, Tarone AM. Population-based resequencing of experimentally evolved populations reveals the genetic basis of body size variation in Drosophila melanogaster. PLoS Genet. 2011;7:e1001336.
6. Orozco-ter Wengel P, Kapun M, Nolte V, Kofler R, Flatt T, Schlötterer C. Adaptation of Drosophila to a novel laboratory environment reveals temporally heterogeneous trajectories of selected traits. Mol Ecol. 2012;21:4931–41.
7. Tobler R, Franssen SU, Kofler R, Orozco-ter Wengel P, Nolte V, Hermisson J, Schlötterer C. Massive habitat-specific genomic response in D. melanogaster populations during experimental evolution in hot and cold environments. Mol Biol Evol. 2014;31:364–75.
8. Franssen SU, Nolte V, Tobler R, Schlötterer C. Patterns of linkage disequilibrium and long range hitchhiking in evolving experimental Drosophila melanogaster populations. Mol Biol Evol. 2015;32:495–509.
9. Huang Y, Wright SI, Agrawal AF. Genome-wide patterns of genetic variation within and among alternative selective regimes. PLoS Genet. 2014;10:e1004527.
10. Phillips MA, Long AD, Greenspan ZS, Greer LF, Burke MK, Bryant V, et al. Genome-wide analysis of long-term evolutionary domestication in Drosophila melanogaster. Sci Rep. 2016;6.
11. Burke MK, Liti G, Long AD. Standing genetic variation drives repeatable experimental evolution in outcrossing populations of Saccharomyces cerevisiae. Mol Biol Evol. 2014;31:3228–39.
12. Burke MK, Barter TT, Cabral LG, Kezos JN, Phillips MA, Rutledge GA, et al. Rapid convergence and divergence of life-history in experimentally evolved Drosophila melanogaster. Evolution. 2016;70:2085–98.
13. Graves JL, Hertweck KL, Phillips MA, Han MV, Cabral LG, Barter TT, et al. Genomics of parallel experimental evolution in Drosophila. Mol Biol Evol. 2017;34:831–42.
14. Simões P, Fragata I, Seabra SG, Faria GS, Santos MA, Rose MR, et al. Predictable phenotypic, but not karyotypic, evolution of populations with contrasting initial history. Sci Rep. 2017;7:913.
15. Teotónio H, Rose MR. Variation in the reversibility of evolution. Nature. 2000;408:463–6.
16. Rose MR. Antagonistic pleiotropy, dominance, and genetic variation. Heredity. 1982;48:63–78.
17. Van Dooren TJ. Protected polymorphism and evolutionary stability in pleiotropic models with trait-specific dominance. Evolution. 2006;60:1991–2003.
18. Rose MR, Vu LN, Park SU, Graves JL. Selection for stress resistance increases longevity in Drosophila melanogaster. Exp Gerontol. 1992;27:241–50.
19. Gibbs AG, Chippindale AK, Rose MR. Physiological mechanisms of evolved desiccation resistance in Drosophila melanogaster. J Exp Biol. 1997;200:1821–32.
20. Djawdan M, Chippindale AK, Rose MR, Bradley TJ. Metabolic reserves and evolved stress resistance in Drosophila melanogaster. Physiol Zool. 1998;71:584–94.
21. Archer MA, Phelan JP, Beckman KA, Rose MR. Breakdown in correlations during laboratory evolution. II. Selection on stress resistance in Drosophila populations. Evolution. 2003;57:536–43.
22. Phelan JP, Archer MA, Beckman KA, Chippindale AK, Nusbaum TJ, Rose MR. Breakdown in correlations during laboratory evolution. I. Comparative analyses of Drosophila populations. Evolution. 2003;57:527–35.
23. Archer MA, Bradley TJ, Mueller LD, Rose MR. Using experimental evolution to study the functional mechanisms of desiccation resistance in Drosophila melanogaster. Physiol Biochem Zool. 2007;80:386–98.
24. Burke MK, Rose MR. Experimental evolution with Drosophila. Am J Physiol Regul Integr Comp Physiol. 2009;296:R1847–54.
25. Rose MR, Graves JL, Hutchinson EW. The use of selection to probe patterns of pleiotropy in fitness characters. In: Gilbert F, editor. Insect Life Cycles. New York: Springer-Verlag; 1990. p. 29–42.
26. Bullard B, Burkart C, Labeit S, Leonard K. The function of elastic proteins in the oscillatory contraction of insect flight muscle. J Muscle Res Cell Motil. 2005;26:479–85.
27. Burkart C, Qiu F, Brendel S, Benes V, Haag P, Labeit S, et al. Modular proteins from the Drosophila sallimus (sls) gene and their expression in muscles with different extensibility. J Mol Biol. 2007;367:953–69.
28. Orfanos Z, Leonard K, Elliott C, Katzemich A, Bullard B, Sparrow J. Sallimus and the dynamics of sarcomere assembly in Drosophila flight muscles. J Mol Biol. 2015;427:2151–8.
29. Ashraf S, Gee HY, Woerner S, Xie LX, Vega-Warner V, Lovric S, et al. ADCK4 mutations promote steroid-resistant nephrotic syndrome through CoQ10 biosynthesis disruption. J Clin Invest. 2013;123:5179–89.
30. Mollet J, Delahodde A, Serre V, Chretien D, Schlemmer D, Lombes A, et al. CABC1 gene mutations cause ubiquinone deficiency with cerebellar ataxia and seizures. Am J Hum Genet. 2008;82:623–30.
31. Chippindale AK, Leroi AM, Kim SB, Rose MR. Phenotypic plasticity and selection in Drosophila life-history evolution. I. Nutrition and the cost of reproduction. J Evol Biol. 1993;6:171–93.
32. Vieira C, Pasyukova EG, Zeng ZB, Hackett JB, Lyman RF, Mackay TF. Genotype-environment interaction for quantitative trait loci affecting life span in Drosophila melanogaster. Genetics. 2000;154:213–27.
33. Gutteling EW, Riksen JA, Bakker J, Kammenga JE. Mapping phenotypic plasticity and genotype-environment interactions affecting life-history traits in Caenorhabditis elegans. Heredity. 2007;98:28–37.
34. Bergland AO, Genissel A, Nuzhdin SV, Tatar M. Quantitative trait loci affecting phenotypic plasticity and allometric relationship of ovariole number and thorax length in Drosophila melanogaster. Genetics. 2008;180:567–82.
35. Baldwin-Brown JG, Long AD, Thornton KR. The power to detect quantitative trait loci using resequenced, experimentally evolved populations of diploid, sexual organisms. Mol Biol Evol. 2014;31:1040–55.
36. Kofler R, Schlötterer C. A guide for the design of evolve and resequencing studies. Mol Biol Evol. 2014;31:474–83.
37. Service PM, Hutchinson EW, MacKinley MD, Rose MR. Resistance to environmental stress in Drosophila melanogaster selected for postponed senescence. Physiol Zool. 1985;58:380–9.
38. Graves JL, Toolson E, Jeong CM, Vu LN, Rose MR. Desiccation resistance, flight duration, glycogen and postponed senescence in Drosophila melanogaster. Physiol Zool. 1992;65:268–86.
39. Rose MR, Passananti HB, Matos M. Methuselah flies: a case study in the evolution of aging. Singapore: World Scientific Publishing; 2004.
40. Wiberg RAW, Gaggiotti OE, Morrissey MB, Ritchie MG, Johnson L. Identifying consistent allele frequency differences in studies of stratified populations. Methods Ecol Evol. 2017;8:1899–909.
41. Mueller LD, Joshi A, Santos M, Rose MR. Effective population size and evolutionary dynamics in outbred laboratory populations of Drosophila. J Genet. 2013;92:349–61.
42. Rose MR. Laboratory evolution of postponed senescence in Drosophila melanogaster. Evolution. 1984;38:1004–10.
43. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing. 2015. Available from: https://www.R-project.org/. Accessed 23 Nov 2016.
44. Pinheiro JC, Bates DM. Mixed-effects models in S and S-PLUS. New York: Springer; 2000.
45. Chen W, Hillyer JF. FlyNap (triethylamine) increases the heart rate of mosquitoes and eliminates the cardioacceleratory effect of neuropeptide CCAP. PLoS One. 2013;8:e70414.
46. Paternostro G, Vignola C, Bartsch DU, Omens JH, McCulloch AD, Reed JC. Age-associated cardiac dysfunction in Drosophila melanogaster. Circ Res. 2001;88:1053–8.
47. Wessells RJ, Bodmer R. Screening assays for heart function mutants in Drosophila. BioTechniques. 2004;37:58–66.
48. Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009;25:1754–60.
49. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25:2078–9.
50. Barnett DW, Garrison EK, Quinlan AR, Stromberg MP, Marth GT. BamTools: a C++ API and toolkit for analyzing and managing BAM files. Bioinformatics. 2011;27:1691–2.
51. Kofler R, Pandey RV, Schlötterer C. PoPoolation2: identifying differentiation between populations using sequencing of pooled DNA samples (Pool-Seq). Bioinformatics. 2011;27:3435–6.
52. Hedrick PW. Genetics of populations. Massachusetts: Jones & Bartlett Learning Press; 2009.
53. Storey JD, Bass A, Dabney A, Robinson D. qvalue: Q-value estimation for false discovery rate control. R package version 2.2.2. 2015. Retrieved from http://github.com/jdstorey/qvalue. Accessed 7 Apr 2017.
54. Storey JD, Tibshirani R. Statistical significance for genomewide studies. Proc Natl Acad Sci. 2003;100:9440–5.
© The Author(s). 2018
Authors and Affiliations
Mark A Phillips (1, corresponding author), Grant A Rutledge (1), James N Kezos (2), Zachary S Greenspan (1), Andrew Talbott (3), Sara Matty (3), Hamid Arain (3), Laurence D Mueller (1), Michael R Rose (1), Parvin Shahrestani (3)
1. Department of Ecology and Evolutionary Biology, University of California Irvine, Irvine, USA
2. Department of Development, Aging, and Regeneration, Sanford Burnham Prebys Medical Discovery Institute, San Diego, USA
3. Department of Biological Science, California State University Fullerton, Fullerton, USA
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8561297655105591, "perplexity": 5832.306604939916}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202704.58/warc/CC-MAIN-20190323000443-20190323022443-00323.warc.gz"}
http://math.stackexchange.com/questions/312131/does-ghg-1-subseteq-h-imply-ghg-1-h/312141
# Does $gHg^{-1}\subseteq H$ imply $gHg^{-1}= H$? [duplicate] Let $G$ be a group, $H<G$ a subgroup and $g$ an element of $G$. Let $\lambda_g$ denote the inner automorphism which maps $x$ to $gxg^{-1}$. I wonder if $H$ can be mapped to a proper subgroup of itself, i.e. $\lambda_g(H)\subset H$. I tried to approach this problem topologically. Since every group is the fundamental group of a connected CW-complex of dimension 2, let $(X,x_0)$ be such a space for $G$. Since $X$ is (locally) path-connected and semi-locally simply-connected, there exists a (locally) path-connected covering space $(\widetilde X,\widetilde x_0)$, such that $p_*(\pi_1(\widetilde X,\widetilde x_0))=H$. The element $g$ corresponds to $[\gamma]\in\pi_1(X,x_0))$, and its lift at $\widetilde x_0$ is a path ending at $\widetilde x_1$. By hypothesis, $H\subseteq g^{-1}Hg$, which leads to the existence of a unique lift $f:\pi_1(\widetilde X,\widetilde x_0)\to\pi_1(\widetilde X,\widetilde x_1)$ such that $p=p\circ f$. This lift turns out to be a surjective covering map itself, and it is a homeomorphism iff $H=g^{-1}Hg$. I was unsuccessful in showing the injectivity. If $x_1$ and $x_2$ have the same image under $f$, then $x_1$, $x_2$, and $f(x_1)=f(x_2)$ are all in the same fiber. I took $\lambda$ to be a path from $x_1$ to $x_2$. I have been playing around with $\lambda$, $p\lambda$, and $f\lambda$, but got nowhere. Of course, there could also be a direct algebraic proof. On the other hand, if the statement is not true then someone maybe knows of a counterexample. - ## marked as duplicate by user1729, PVAL, Mark Bennet, Jonas Meyer, Hagen von EitzenOct 29 '14 at 21:54 Some counterexamples are given on MO. –  anon Feb 23 '13 at 17:25 Another example. Let $$K = \left\{ \frac{a}{2^{n}} : a \in \mathbf{Z}, n \in \mathbf{N} \right\}$$ be the additive subgroup of $\mathbf{Q}$. The map $g : x \mapsto 2 x$ is an automorphism of $K$. Consider the semidirect product $G = K \rtimes \langle g \rangle$. (So that conjugating an element $x$ of $K$ by $g$ in $G$ is the same as taking the value of $g$ on $x$.) Let $H = \left\{ \frac{a}{2} : a \in \mathbf{Z} \right\}$ be a subgroup of $G$. Then $H^{g} = \mathbf{Z} < H$. PS When I was first exposed to these examples, what struck me is what happens if you look at it from the other end: $\mathbf{Z}^{g^{-1}} = H > \mathbf{Z}$. Thanks for your answer. But I don't quite understand what a group $G$ is. Of what form are the elements? –  Stefan Hamcke Feb 23 '13 at 18:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852795004844666, "perplexity": 87.5189195133962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988860.4/warc/CC-MAIN-20150728002308-00037-ip-10-236-191-2.ec2.internal.warc.gz"}
https://worldwidescience.org/topicpages/r/relaxed+eddy+accumulation.html
Sample records for relaxed eddy accumulation 1. Application of relaxed eddy accumulation (REA on managed grassland M. Riederer 2014-05-01 Full Text Available Relaxed eddy accumulation is applied for measuring fluxes of trace gases for which there is a lack of sensors fast enough in their resolution for eddy-covariance. On managed grasslands, the length of time between management events and the application of relaxed eddy accumulation has an essential influence on the determination of the proportionality factor b and thereby on the resulting flux. In this study this effect is discussed for the first time. Also, scalar similarity between proxy scalars and scalars of interest is affected until the ecosystem has completely recovered. Against this background, CO2 fluxes were continuously measured and 13CO2 isofluxes were determined with a high measurement precision on two representative days in summer 2010. This enabled the evaluation of the 13CO2 flux portion of the entire CO2 flux, in order to estimate potential influences on tracer experiments in ecosystem sciences and to compare a common method for the partitioning of the net ecosystem exchange into assimilation and respiration based on temperature and light response with an isotopic approach directly based on the isotope discrimination of the biosphere. 2. Toward finding a universally applicable parameterization of the β factor for Relaxed Eddy Accumulation applications Vogl, Teresa; Hrdina, Amy; Thomas, Christoph 2016-04-01 The traditional eddy covariance (EC) technique requires the use of fast responding sensors (≥ 10 Hz) that do not exist for many chemical species found in the atmosphere. In this case, the Relaxed Eddy Accumulation (REA) method offers a means to calculate fluxes of trace gases and other scalar quantities (Businger and Oncley, 1990) and was originally derived from the eddy accumulation method (EA) first proposed by Desjardins (1972). While REA lessens the requirements for sensors and sampling and thus offers practical appeal, it introduces a dependence of the computed flux from a proportionality factor β. The accuracy of the REA fluxes hinges upon the correct determination of β, which was found to vary between 0.40 and 0.63 (Milne et al., 1999, Ammann and Meixner, 2002, Ruppert et al., 2006). However, formulating a universally valid parameterization for β instead of empirical evaluation has remained a conundrum and has been a main limitation for REA. In this study we take a fresh look at the dependencies and mathematical models of β by analyzing eddy covariance (EC) data and REA simulations for two field experiments in drastically contrasting environments: an exclusively physically driven environment in the Dry Valleys of Antarctica, and a biologically active system in a grassland in Germany. The main objective is to work toward a model parameterization for β that can be applied over wide range of surface conditions and forcings without the need for empirical evaluation, which is not possible for most REA applications. Our study discusses two different models to define β: (i) based upon scalar-scalar similarity, in which a different scalar is measured with fast-response sensors as a proxy for the scalar of interest, here referred to as β0; and (ii) computed solely from the vertical wind statistics, assuming a linear relationship between the scalar of interest and the vertical wind speed, referred to as βw. Results are presented for the carbon 3. 
Inter-comparison of ammonia fluxes obtained using the Relaxed Eddy Accumulation technique Hensen, A.; Nemitz, E.; Flynn, M.J. 2009-01-01 The exchange of Ammonia (NH3) between grassland and the atmosphere was determined using Relaxed Eddy Accumulation (REA) measurements. The use of REA is of special interest for NH3, since the determination of fluxes at one height permits multiple systems to be deployed to quantify vertical flux...... between 0.3 and 0.82. For the period immediately after fertilization, the REA systems showed average fluxes 20% to 70% lower than the reference. At periods with low fluxes REA and AGM can agree within a few %. Overall, the results show that the continuous REA technique can now be used to measure NH3......, significant improvements in sampling precision are essential to allow robust determination of flux divergence in future studies. Wet chemical techniques will be developed further since they use the adsorptive and reactive properties of NH3 that impedes development of cheaper optical systems.... 4. First measurements of H2O2 and organic peroxide surface fluces by the Relaxed Eddy Accumulation technique Valverde-Canossa, J.; Ganzeveld, L.N.; Rappenglück, B.; Steinbrecher, R.; Klemm, O.; Schuster, G.; Moortgat, G.K. 2006-01-01 The relaxed eddy-accumulation (REA) technique was specially adapted to a high-performance liquid chromatographer (enzymatic method) and scrubbing coils to measure concentrations and fluxes of hydrogen peroxide (H2O2) and organic peroxides with a carbon chain C4, of which only methylhydroperoxide (MH 5. Determination of the terpene flux from orange species and Norway spruce by relaxed eddy accumulation Christensen, C.S.; Hummelshøj, P.; Jensen, N.O.; 2000-01-01 Terpene fluxes from a Norway spruce (Picea abies) forest and an orange orchard (Citrus clementii and Citrus sinensis) were measured by relaxed eddy accumulation (REA) during summer 1997. alpha-pinene and beta-pinene were the most abundant terpenes emitted from Norway spruce and constituted...... approximately 70% of the flux. A much lower flux was observed for myrcene, limonene and gamma-terpinene and both alpha-terpinene and camphor were only occasionally detected. The average terpene flux was 107.6 ng m(-2) s(-1) which corresponds to 0.73 mu g g(dw)(-1) h(-1) (30 degrees C) when calculated relatively...... the weight of the dry biomass. The five terpenes which were detected in all samples at the orange orchard were limonene, sabinene, alpha-pinene, trans-ocimene and beta-pinene with an average Aux of 126.3 ng m(-2) s(-1). Cis-ocimene, linalool and myrcene were occasionally detected but no systematic upward... 6. A dual, single detector relaxed eddy accumulation system for long-term measurement of mercury flux S. Osterwalder 2015-08-01 Full Text Available The fate of anthropogenic emissions of mercury (Hg to the atmosphere is influenced by the exchange of elemental Hg with the earth surface. This exchange which holds the key to a better understanding of Hg cycling from local to global scales has been difficult to quantify. To advance and facilitate research about land–atmosphere Hg interactions, we developed a dual-intake, single analyzer Relaxed Eddy Accumulation (REA system. REA is an established technique for measuring turbulent fluxes of trace gases and aerosol particles in the atmospheric surface layer. 
Accurate determination of gaseous elemental mercury (GEM) fluxes has proven difficult due to technical challenges presented by extremely small concentration differences (typically < 0.5 ng m−3) between updrafts and downdrafts. To address this we present an advanced REA design that uses two inlets and two pairs of gold cartridges for semi-continuous monitoring of GEM fluxes. They are then analyzed sequentially on the same detector while another pair of gold cartridges takes over the sample collection. We also added a reference gas module for repeated quality-control measurements. To demonstrate the system performance, we present results from field campaigns in two contrasting environments: an urban setting with a heterogeneous fetch and a boreal mire during snow-melt. The observed emission rates were 15 and 3 ng m−2 h−1. We claim that this dual-inlet, single detector approach is a significant development of the REA system for ultra-trace gases and can help to advance our understanding of long-term land–atmosphere GEM exchange. 7. A relaxed eddy accumulation system for measuring vertical fluxes of nitrous acid X. Ren 2011-10-01 A relaxed eddy accumulation (REA) system combined with a nitrous acid (HONO) analyzer was developed to measure atmospheric HONO vertical fluxes. The system consists of three major components: (1) a fast-response sonic anemometer measuring both vertical wind velocity and air temperature, (2) a fast-response controlling unit separating air motions into updraft and downdraft samplers by the sign of vertical wind velocity, and (3) a highly sensitive HONO analyzer based on aqueous long path absorption photometry that measures HONO concentrations in the updrafts and downdrafts. A dynamic velocity threshold (±0.5σw, where σw is the standard deviation of the vertical wind velocity) was used for valve switching, determined by the running means and standard deviations of the vertical wind velocity. Using measured temperature as a tracer and the average values from two field deployments, the flux proportionality coefficient, β, was determined to be 0.42 ± 0.02, in good agreement with the theoretical estimation. The REA system was deployed in two ground-based field studies. In the California Research at the Nexus of Air Quality and Climate Change (CalNex) study in Bakersfield, California, in summer 2010, measured HONO fluxes appeared to be upward during the day and were close to zero at night. The upward HONO flux was highly correlated to the product of NO2 and solar radiation. During the Biosphere Effects on Aerosols and Photochemistry Experiment (BEARPEX 2009) at Blodgett Forest, California, in July 2009, the overall HONO fluxes were small in magnitude and were close to zero. Causes for the different HONO fluxes in the two different environments are briefly discussed. 8. Inter-comparison of ammonia fluxes obtained using the Relaxed Eddy Accumulation technique M. A. Sutton 2009-11-01 The exchange of ammonia (NH3) between grassland and the atmosphere was determined using Relaxed Eddy Accumulation (REA) measurements. The use of REA is of special interest for NH3, since the determination of fluxes at one height permits multiple systems to be deployed to quantify vertical flux divergence (either due to effects of chemical production or advection).
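The nitrous acid record above describes REA valve switching with a dynamic deadband of ±0.5σw obtained from running means and standard deviations of the vertical wind. A minimal sketch of that switching logic is given below; the sampling rate, window length and function name are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def rea_valve_states(w, fs=10.0, window_s=300.0, k=0.5):
    """Classify each fast w sample as updraft (+1), downdraft (-1) or
    deadband (0), using a dynamic threshold of +/- k * running sigma_w
    computed over a trailing window (k = 0.5 in the record above).

    Plain loop implementation for clarity, not speed; the window length
    and sampling rate are illustrative choices.
    """
    w = np.asarray(w, float)
    n_win = max(1, int(window_s * fs))
    states = np.zeros(w.size, dtype=int)
    for i in range(w.size):
        seg = w[max(0, i - n_win + 1):i + 1]
        w_fluct = w[i] - seg.mean()          # deviation from the running mean
        threshold = k * seg.std()            # dynamic deadband half-width
        if w_fluct > threshold:
            states[i] = 1                    # open the updraft sampling valve
        elif w_fluct < -threshold:
            states[i] = -1                   # open the downdraft sampling valve
    return states                            # 0 means both valves closed (deadband)

# Example: fraction of samples routed to each reservoir for a synthetic record
w = np.random.default_rng(4).normal(0.0, 0.3, 6000)
s = rea_valve_states(w)
print((s == 1).mean(), (s == -1).mean(), (s == 0).mean())
```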
During the Braunschweig integrated experiment four different continuous-sampling REA systems were operated during a period of about 10 days and were compared against a reference provided by independent application of the Aerodynamic Gradient Method (AGM. The experiment covered episodes before and after both cutting and fertilizing and provided a wide range of fluxes −60–3600 ng NH3 m−2 s−1 for testing the REA systems. The REA systems showed moderate to good correlation with the AGM estimates, with r2 values for the linear regressions between 0.3 and 0.82. For the period immediately after fertilization, the REA systems showed average fluxes 20% to 70% lower than the reference. At periods with low fluxes REA and AGM can agree within a few %. Overall, the results show that the continuous REA technique can now be used to measure NH3 surface exchange fluxes. While REA requires greater analytical precision in NH3 measurement than the AGM, a key advantage of REA is that reference sampling periods can be introduced to remove bias between sampling inlets. However, while the data here indicate differences consistent with advection effects, significant improvements in sampling precision are essential to allow robust determination of flux divergence in future studies. Wet chemical techniques will be developed further since they use the adsorptive and reactive properties of NH3 that impedes development of cheaper optical systems. 9. Inter-comparison of ammonia fluxes obtained using the relaxed eddy accumulation technique A. Hensen 2008-10-01 Full Text Available The exchange of NH3 between grassland and the atmosphere was determined using Relaxed Eddy Accumulation (REA measurements. The use of REA is of special interest for NH3, since the determination of fluxes at one height permits multiple systems to be deployed to quantify vertical flux divergence (either due to effects of chemical production or advection. During the Braunschweig integrated experiment four different continuous-sampling REA systems were operated during a period of about 10 days and were compared against a reference provided by independent application of the Aerodynamic Gradient Method (AGM. The experiment covered episodes before, after cutting and fertilising and provided a wide range of fluxes −60–3600 ng NH3 m−2 s−1 for testing the REA systems. The REA systems showed moderate to good correlation with the AGM estimates, with r2 values for the linear regressions between 0.3 and 0.82. For the period immediately after fertilization, the REA systems showed average fluxes 20% to 70% lower than the reference. At periods with low fluxes REA and AGM can agree within a few %. Overall, the results show that the continuous REA technique can now be used to measure NH3 surface exchange fluxes. While REA requires greater analytical precision in NH3 measurement than the AGM, a key advantage of REA is that auto-referencing periods can be introduced to remove bias between sampling inlets. However, while the data here indicate differences consistent with advection effects, further improvements in sampling precision are needed to allow measurement of flux divergence. Wet chemical techniques will be developed further since they use the sticky and reactive properties of NH3 that impedes development of cheaper optical systems. 10. VOC flux measurements using a novel Relaxed Eddy Accumulation GC-FID system in urban Houston, Texas Park, C.; Schade, G.; Boedeker, I. 
2008-12-01 Houston experiences higher ozone production rates than most other major cities in the US, which is related to high anthropogenic VOC emissions from both area/mobile sources (car traffic) and a large number of petrochemical facilities. The EPA forecasts that Houston is likely to still violate the new 8-h NAAQS in 2020. To monitor neighborhood scale pollutant fluxes, we established a tall flux tower installation a few kilometers north of downtown Houston. We measure energy and trace gas fluxes, including VOCs from both anthropogenic and biogenic emission sources in the urban surface layer using eddy covariance and related techniques. Here, we describe a Relaxed Eddy Accumulation (REA) system combined with a dual-channel GC-FID used for VOC flux measurements, including first results. Ambient air is sampled at approximately 15 L min-1 through a 9.5 mm OD PFA line from 60 m above ground next to a sonic anemometer. Subsamples of this air stream are extracted through an ozone scrubber and pushed into two Teflon bag reservoirs, from which they are transferred to the GC pre-concentration units consisting of carbon-based adsorption traps encapsulated in heater blocks for thermal desorption. We discuss the performance of our system and selected measurement results from the 2008 spring and summer seasons in Houston. We present diurnal variations of the fluxes of the traffic tracers benzene, toluene, ethylbenzene, and xylenes (BTEX) during different study periods. Typical BTEX fluxes ranged from -0.36 to 3.10 mg m-2 h-1 for benzene, and -0.47 to 5.04 mg m-2 h-1 for toluene, and exhibited diurnal cycles with two dominant peaks related to rush-hour traffic. A footprint analysis overlaid onto a geographic information system (GIS) will be presented to reveal the dominant emission sources and patterns in the study area. 11. Long-term measurement of terpenoid flux above a Larix kaempferi forest using a relaxed eddy accumulation method Mochizuki, Tomoki; Tani, Akira; Takahashi, Yoshiyuki; Saigusa, Nobuko; Ueyama, Masahito 2014-02-01 Terpenoids emitted from forests contribute to the formation of secondary organic aerosols and affect the carbon budgets of forest ecosystems. To investigate seasonal variation in terpenoid flux involved in the aerosol formation and carbon budget, we measured the terpenoid flux of a Larix kaempferi forest between May 2011 and May 2012 by using a relaxed eddy accumulation method. Isoprene was emitted from a fern plant species Dryopteris crassirhizoma on the forest floor and monoterpenes from the L. kaempferi. α-Pinene was the dominant compound, but seasonal variation of the monoterpene composition was observed. High isoprene and monoterpene fluxes were observed in July and August. The total monoterpene flux was dependent on temperature, but several unusual high positive fluxes were observed after rain fall events. We found a good correlation between total monoterpene flux and volumetric soil water content (r = 0.88), and used this correlation to estimate monoterpene flux after rain events and calculate annual terpenoid emissions. Annual carbon emission in the form of total monoterpenes plus isoprene was determined to be 0.93% of the net ecosystem exchange. If we do not consider the effect of rain fall, carbon emissions may be underestimated by about 50%. Our results suggest that moisture conditions in the forest soil is a key factor controlling the monoterpene emissions from the forest ecosystem. 12. 
Assessment of a relaxed eddy accumulation for measurements of fluxes of biogenic volatile organic compounds: Study over arable crops and a mature beech forest Gallagher, M.W.; Clayborough, R.; Beswick, K.M. 2000-01-01 A relaxed eddy accumulation (REA) system, based on the design by Beverland et al. (Journal of Geophysical Research 101 (D17), 22,807–22,815), for the measurement of biogenic VOC species was evaluated by intercomparison with an eddy correlation CO2 flux system over a mature deciduous beech canopy ... obtained with correlation coefficients for the REA system ranging from 0.71 to 0.82, lending further confidence in the use of this technique. Daily averaged biogenic emissions from the wheat and barley canopies were significantly larger than expected, likely a result of harvesting. Fluxes measured over ... 13. A dual-inlet, single detector relaxed eddy accumulation system for long-term measurement of mercury flux Osterwalder, S.; Fritsche, J.; Alewell, C.; Schmutz, M.; Nilsson, M. B.; Jocher, G.; Sommar, J.; Rinne, J.; Bishop, K. 2016-02-01 The fate of anthropogenic emissions of mercury (Hg) to the atmosphere is influenced by the exchange of elemental Hg with the earth surface. This exchange holds the key to a better understanding of Hg cycling from local to global scales, which has been difficult to quantify. To advance research about land-atmosphere Hg interactions, we developed a dual-inlet, single detector relaxed eddy accumulation (REA) system. REA is an established technique for measuring turbulent fluxes of trace gases and aerosol particles in the atmospheric surface layer. Accurate determination of gaseous elemental mercury (GEM) fluxes has proven difficult due to technical challenges presented by extremely small concentration differences (typically < 0.5 ng m-3) between updrafts and downdrafts. We present an advanced REA design that uses two inlets and two pairs of gold cartridges for continuous monitoring of GEM fluxes. This setup reduces the major uncertainty created by the sequential sampling in many previous designs. Additionally, the instrument is equipped with a GEM reference gas generator that monitors drift and recovery rates. These innovations facilitate continuous, autonomous measurement of GEM flux. To demonstrate the system performance, we present results from field campaigns in two contrasting environments: an urban setting with a heterogeneous fetch and a boreal peatland during snowmelt. The observed average emission rates were 15 and 3 ng m-2 h-1, respectively. We believe that this dual-inlet, single detector approach is a significant improvement of the REA system for ultra-trace gases and can help to advance our understanding of long-term land-atmosphere GEM exchange. 14. Four-year measurement of methane flux over a temperate forest with a relaxed eddy accumulation method Sakabe, A.; Kosugi, Y.; Ueyama, M.; Hamotani, K.; Takahashi, K.; Iwata, H.; Itoh, M. 2013-12-01 Forests are generally assumed to be an atmospheric methane (CH4) sink (Le Mer and Roger, 2001). However, under an Asian monsoon climate, forests are subject to a wide spatiotemporal range in soil water status, and forest soils often become water-saturated heterogeneously. In such warm and humid conditions, forests may act as a CH4 source and/or sink with considerable spatiotemporal variations. Micrometeorological methods such as the eddy covariance (EC) method continuously measure a spatially representative flux at the canopy scale without artificial disturbance.
In this study, we measured CH4 fluxes over a temperate forest during a four-year period using a CH4 analyzer based on tunable diode laser spectroscopy detection with a relaxed eddy accumulation (REA) method (Hamotani et al., 1996, 2001). We revealed the amplitude and seasonal variations of canopy-scale CH4 fluxes. The REA method is an attractive alternative to the EC method for measuring trace-gas fluxes because it allows the use of analyzers with an optimal integration time. We also conducted continuous chamber measurements on the forest floor to reveal spatial variations in soil CH4 fluxes and their controlling processes. The observations were made in an evergreen coniferous forest in central Japan. The site has a warm temperate monsoon climate with a wet summer. Some wetlands were located in riparian zones along streams within the flux footprint area. For the REA method, the sonic anemometer (SAT-550, Kaijo) was mounted on top of the 29-m-tall tower and air was sampled from just below the sonic anemometer into reservoirs according to the direction of the vertical wind velocity (w). After accumulating air for 30 minutes, the air in the reservoirs was pulled into a CO2/H2O gas analyzer (LI-840, Li-Cor) and a CH4 analyzer (FMA-200, Los Gatos Research). Before entering the analyzers, the sampled air was dried using a gas dryer (PD-50 T-48; Perma Pure Inc.). The REA flux is obtained from the difference in the mean concentrations ... 15. Chambers versus Relaxed Eddy Accumulation: an intercomparison study of two methods for short-term measurements of biogenic CO2 fluxes Jasek, Alina; Zimnoch, Miroslaw; Gorczyca, Zbigniew; Chmura, Lukasz; Necki, Jaroslaw 2014-05-01 The presented work is part of a comprehensive study aimed at a thorough characterization of the carbon cycle in the urban environment of Krakow, southern Poland. In the framework of this study two independent methods were employed to quantify the biogenic CO2 flux in the city: (i) closed chambers, and (ii) Relaxed Eddy Accumulation (REA). The results of a three-day intensive intercomparison campaign performed in July 2013 and utilizing both measurement methods are reported here. The chamber method is a widely used approach for measurements of gas exchange between the soil and the atmosphere. The system implemented in this study consisted of a single chamber operating in a closed-dynamic mode, combined with a Vaisala CarboCAP infrared CO2 sensor in a mobile setup. An alternative flux measurement method, covering a larger area, is represented by REA, which is a modification of the eddy covariance method. It consists of a 3D anemometer (Gill Windmaster Pro) and a system collecting updraft and downdraft samples into 5-litre Tedlar bags. The CO2 mixing ratios in the collected samples are measured by a Picarro G2101-i analyzer. The setup consists of two sets of bags so that the sampling can be performed continuously with 15-min temporal resolution. A 48-hectare open meadow located close to the city centre was chosen as a test site for the comparison of the two methods of CO2 flux measurement outlined above. In the middle of the meadow a 3-metre high tripod was installed with the anemometer and the REA inlet system. For a period of 46 hours the system measured the net CO2 flux from the surrounding area. Meteorological conditions and the intensity of photosynthetically active radiation (PAR) were also recorded. At the same time, the CO2 flux from several points around the REA inlet was measured with the chamber system, resulting in 93 values for both respiration and net CO2 flux.
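The methane record above ends with the REA flux being obtained from the difference in mean reservoir concentrations; the usual working form is F = β σw (C_up − C_down). The sketch below applies that relation with an ideal-gas conversion from mixing ratio to molar density; the numerical values and the default β are illustrative assumptions only.

```python
def rea_flux(c_up, c_down, sigma_w, beta=0.56, pressure=101325.0, temp_k=293.15):
    """Relaxed eddy accumulation flux  F = beta * sigma_w * (c_up - c_down).

    c_up, c_down : mean mixing ratios in the updraft / downdraft reservoirs
                   over the averaging period [nmol mol-1]
    sigma_w      : standard deviation of vertical wind for the same period [m s-1]
    beta         : proportionality factor (here a typical literature value;
                   ideally determined from a fast proxy scalar)
    Returns the flux in nmol m-2 s-1 via the ideal-gas molar density of air.
    """
    air_molar_density = pressure / (8.314 * temp_k)   # mol of air per m3
    return beta * sigma_w * (c_up - c_down) * air_molar_density

# Illustrative 30-min period: 1.2 ppb CH4 enrichment in the updraft reservoir
print(rea_flux(c_up=1986.2, c_down=1985.0, sigma_w=0.25))   # about 7 nmol m-2 s-1
```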
Chamber results show a rather homogeneous distribution of the soil CO2 flux (the mean value equal to 40.9 ± 2.2 mmol m−2 h−1), with ... 16. Application of a GC-ECD for measurements of biosphere–atmosphere exchange fluxes of peroxyacetyl nitrate using the relaxed eddy accumulation and gradient method A. Moravek 2014-02-01 Peroxyacetyl nitrate (PAN) may constitute a significant fraction of reactive nitrogen in the atmosphere. Current knowledge about the biosphere–atmosphere exchange of PAN is limited and only a few studies have investigated the deposition of PAN to terrestrial ecosystems. We developed a flux measurement system for the determination of biosphere–atmosphere exchange fluxes of PAN using both the hyperbolic relaxed eddy accumulation (HREA) method and the modified Bowen ratio (MBR) method. The system consists of a modified, commercially available gas chromatograph with electron capture detection (GC-ECD; Meteorologie Consult GmbH, Germany). Sampling was performed by trapping PAN onto two pre-concentration columns; during HREA operation one was used for updraft and one for downdraft events, and during MBR operation the two columns allowed simultaneous sampling at two measurement heights. The performance of the PAN flux measurement system was tested at a natural grassland site, using fast-response ozone (O3) measurements as a proxy for both methods. The measured PAN fluxes were comparatively small (daytime PAN deposition was on average −0.07 nmol m−2 s−1) and, thus, prone to significant uncertainties. A major challenge in the design of the system was the resolution of the small PAN mixing ratio differences. Consequently, the study focuses on the performance of the analytical unit and a detailed analysis of errors contributing to the overall uncertainty. The error of the PAN mixing ratio differences ranged from 4 to 15 ppt during the MBR and between 18 and 26 ppt during the HREA operation, while during daytime measured PAN mixing ratios were of similar magnitude. Choosing optimal settings for both the MBR and HREA methods, the study shows that the HREA method did not have a significant advantage over the MBR method under well-mixed conditions, as had been expected. 17. Application of the Relaxed Eddy Accumulation (REA) method to estimate CO2 and CH4 surface fluxes in the city of Krakow, southern Poland. Zimnoch, Miroslaw; Gorczyca, Zbigniew; Pieniazek, Katarzyna; Jasek, Alina; Chmura, Lukasz; Rozanski, Kazimierz 2013-04-01 There has been growing interest in recent years in studies aimed at quantifying carbon cycling in urban centres. The worldwide migration of the human population from rural to urban areas and the corresponding growth of extensive urban agglomerations and megacities lead to an intensification of anthropogenic carbon emissions and a strong disruption of the natural carbon cycle in these areas. Therefore, a deeper understanding of the carbon "metabolism" of such regions is required. Apart from better quantification of surface carbon fluxes, a thorough understanding of the functioning of the biosphere under strong anthropogenic influence is also needed. Nowadays, covariance methods are widely applied for studying gas exchange between the atmosphere and the Earth's surface.
Relaxed Eddy Accumulation method (REA), combined with the CO2 and CH4 CRDS analyser allows simultaneous measurements of surface fluxes of carbon dioxide and methane within the chosen footprint of the detection system, thus making possible thorough characterisation of the overall exchange of those gases between the atmosphere and the urban surface across diverse spatial and temporal scales. Here we present preliminary results of the study aimed at quantifying surface fluxes of CO2 and CH4 in Krakow, southern Poland. The REA system for CO2 and CH4 flux measurements has been installed on top of a 20m high tower mounted on the roof of the faculty building, close to the city centre of Krakow. The sensors were installed ca 42 m above the local ground. Gill Windmaster-Pro sonic anemometer was coupled with self-made system, designed by the Poznan University of Life Sciences, Poland, for collecting air samples in two pairs of 10-liter Tedlar bags, and with Picarro G2101-i CRDS analyser. The air was collected in 30-min intervals. The CO2 and CH4 mixing ratios in these cumulative downdraft and updraft air samples were determined by the CRDS analyser after each sampling interval. Based on the measured mixing ratios difference and the 18. Methane fluxes above the Hainich forest by True Eddy Accumulation and Eddy Covariance Siebicke, Lukas; Gentsch, Lydia; Knohl, Alexander 2016-04-01 Understanding the role of forests for the global methane cycle requires quantifying vegetation-atmosphere exchange of methane, however observations of turbulent methane fluxes remain scarce. Here we measured turbulent fluxes of methane (CH4) above a beech-dominated old-growth forest in the Hainich National Park, Germany, and validated three different measurement approaches: True Eddy Accumulation (TEA, closed-path laser spectroscopy), and eddy covariance (EC, open-path and closed-path laser spectroscopy, respectively). The Hainich flux tower is a long-term Fluxnet and ICOS site with turbulent fluxes and ecosystem observations spanning more than 15 years. The current study is likely the first application of True Eddy Accumulation (TEA) for the measurement of turbulent exchange of methane and one of the very few studies comparing open-path and closed-path eddy covariance (EC) setups side-by-side. We observed uptake of methane by the forest during the day (a methane sink with a maximum rate of 0.03 μmol m-2 s-1 at noon) and no or small fluxes of methane from the forest to the atmosphere at night (a methane source of typically less than 0.01 μmol m-2 s-1) based on continuous True Eddy Accumulation measurements in September 2015. First results comparing TEA to EC CO2 fluxes suggest that True Eddy Accumulation is a valid option for turbulent flux quantifications using slow response gas analysers (here CRDS laser spectroscopy, other potential techniques include mass spectroscopy). The TEA system was one order of magnitude more energy efficient compared to closed-path eddy covariance. The open-path eddy covariance setup required the least amount of user interaction but is often constrained by low signal-to-noise ratios obtained when measuring methane fluxes over forests. Closed-path eddy covariance showed good signal-to-noise ratios in the lab, however in the field it required significant amounts of user intervention in addition to a high power consumption. We conclude 19. 
True eddy accumulation and eddy covariance methods and instruments intercomparison for fluxes of CO2, CH4 and H2O above the Hainich Forest Siebicke, Lukas 2017-04-01 The eddy covariance (EC) method is state-of-the-art in directly measuring vegetation-atmosphere exchange of CO2 and H2O at ecosystem scale. However, the EC method is currently limited to a small number of atmospheric tracers by the lack of suitable fast-response analyzers or poor signal-to-noise ratios. High resource and power demands may further restrict the number of spatial sampling points. True eddy accumulation (TEA) is an alternative method for direct and continuous flux observations. Key advantages are the applicability to a wider range of air constituents such as greenhouse gases, isotopes, volatile organic compounds and aerosols using slow-response analyzers. In contrast to relaxed eddy accumulation (REA), true eddy accumulation (Desjardins, 1977) has the advantage of being a direct method which does not require proxies. True Eddy Accumulation has the potential to overcome above mentioned limitations of eddy covariance but has hardly ever been successfully demonstrated in practice in the past. This study presents flux measurements using an innovative approach to true eddy accumulation by directly, continuously and automatically measuring trace gas fluxes using a flow-through system. We merge high-frequency flux contributions from TEA with low-frequency covariances from the same sensors. We show flux measurements of CO2, CH4 and H2O by TEA and EC above an old-growth forest at the ICOS flux tower site "Hainich" (DE-Hai). We compare and evaluate the performance of the two direct turbulent flux measurement methods eddy covariance and true eddy accumulation using side-by-side trace gas flux observations. We further compare performance of seven instrument complexes, i.e. combinations of sonic anemometers and trace gas analyzers. We compare gas analyzers types of open-path, enclosed-path and closed-path design. We further differentiate data from two gas analysis technologies: infrared gas analysis (IRGA) and laser spectrometry (open path and CRDS closed 20. Eddies U.S. Geological Survey, Department of the Interior — The maximum potential area of eddy bars (MPAEB) represents the cumulative area of the eddy occupied by sand at different times within the photographic record... 1. A disjunct eddy accumulation system for the measurement of BVOC fluxes: instrument characterizations and field deployment G. D. Edwards 2012-04-01 Full Text Available Biological volatile organic compounds (BVOCs, such as isoprene and monoterpenes, are emitted in large amounts from forests. Quantification of the flux of BVOCs is critical in the evaluation of the impact of these compounds on the concentrations of atmospheric oxidants and on the production of secondary organic aerosol. A disjunct eddy accumulation (DEA sampler system was constructed for the measurement of speciated BVOC fluxes. Unlike traditional eddy covariance (EC, the relatively new technique of disjunct sampling differs by taking short, discrete samples that allows for slower sampling frequencies. Disjunct sample airflow is directed into cartridges containing sorbent materials at sampling rates proportional to the magnitude of the vertical wind. Compounds accumulated on the cartridges are then quantified by thermal desorption and gas chromatography. 
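The true eddy accumulation and disjunct eddy accumulation records in this listing both rely on sampling at a rate proportional to the vertical wind, which makes the flux a direct quantity that needs no β factor. The sketch below simulates that idea on a synthetic fast record and compares the accumulation-based flux with the eddy-covariance value from the same series; all data and names are illustrative assumptions.

```python
import numpy as np

def accumulation_flux(w, c):
    """Direct (true/disjunct) eddy-accumulation flux from w-proportional sampling.

    Air is routed to an updraft or downdraft reservoir at a rate proportional
    to |w'|.  Each reservoir is then characterised by its flow-weighted mean
    concentration and its accumulated sample volume, and the flux follows
    directly, without a proportionality factor beta.
    """
    w = np.asarray(w, float)
    w = w - w.mean()                                   # vertical wind fluctuations
    c = np.asarray(c, float)
    up, down = w > 0.0, w < 0.0
    vol_up, vol_down = w[up].sum(), -w[down].sum()     # accumulated volumes (arbitrary units)
    c_up = np.dot(w[up], c[up]) / vol_up               # flow-weighted reservoir concentrations
    c_down = np.dot(-w[down], c[down]) / vol_down
    return (c_up * vol_up - c_down * vol_down) / w.size

# Synthetic 30-min, 10 Hz record of w and a CO2-like scalar (illustrative only)
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.3, 18000)
c = 400.0 + 2.0 * w + rng.normal(0.0, 0.5, w.size)
ec_flux = np.mean((w - w.mean()) * (c - c.mean()))     # eddy-covariance reference
print(accumulation_flux(w, c), ec_flux)                # the two estimates agree closely
```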
Herein, we describe our initial tests to evaluate the disjunct sampler including the application of using vertical wind measurements to create optimized sampling thresholds. Measurements of BVOC fluxes obtained from DEA during its deployment above a mixed hardwood forest at the University of Michigan Biological Station (Pellston, MI during the 2009 CABINEX field campaign are reported. Daytime (09:00 a.m. to 05:00 p.m. isoprene fluxes, when averaged over the footprint of the tower were 1.31 mg m−2 h−1 which is comparable to previous flux measurements at this location. Speciated monoterpene fluxes are some of the first to be reported from this site. Daytime averages were 26.7 μg m−2 h−1 for α-pinene and 10.6 μg m−2 h−1 for β-pinene. These measured concentrations and fluxes were compared to the output of an atmospheric chemistry model, and were found to be consistent with our knowledge of the variables that control BVOCs fluxes at this site. 2. A disjunct eddy accumulation system for the measurement of BVOC fluxes: instrument characterizations and field deployment G. D. Edwards 2012-09-01 Full Text Available Biological volatile organic compounds (BVOCs, such as isoprene and monoterpenes, are emitted in large amounts from forests. Quantification of the flux of BVOCs is critical in the evaluation of the impact of these compounds on the concentrations of atmospheric oxidants and on the production of secondary organic aerosol. A disjunct eddy accumulation (DEA sampler system was constructed for the measurement of speciated BVOC fluxes. Unlike traditional eddy covariance (EC, the relatively new technique of disjunct sampling differs by taking short, discrete samples that allow for slower sampling frequencies. Disjunct sample airflow is directed into cartridges containing sorbent materials at sampling rates proportional to the magnitude of the vertical wind. Compounds accumulated on the cartridges are then quantified by thermal desorption and gas chromatography. Herein, we describe our initial tests to evaluate the disjunct sampler including the application of vertical wind measurements to create optimized sampling thresholds. Measurements of BVOC fluxes obtained from DEA during its deployment above a mixed hardwood forest at the University of Michigan Biological Station (Pellston, MI during the 2009 CABINEX field campaign are reported. Daytime (09:00 a.m. to 05:00 p.m. LT isoprene fluxes, when averaged over the footprint of the tower, were 1.31 mg m−2 h−1 which are comparable to previous flux measurements at this location. Speciated monoterpene fluxes are some of the first to be reported from this site. Daytime averages were 26.7 μg m−2 h−1 for α-pinene and 10.6 μg m−2 h−1 for β-pinene. These measured concentrations and fluxes were compared to the output of an atmospheric chemistry model, and were found to be consistent with our knowledge of the variables that control BVOCs fluxes at this site. 3. Fully-coupled magnetoelastic model for Galfenol alloys incorporating eddy current losses and thermal relaxation Evans, Phillip G.; Dapino, Marcelo J. 2008-03-01 A general framework is developed to model the nonlinear magnetization and strain response of cubic magnetostrictive materials to 3-D dynamic magnetic fields and 3-D stresses. Dynamic eddy current losses and inertial stresses are modeled by coupling Maxwell's equations to Newton's second law through a nonlinear constitutive model. The constitutive model is derived from continuum thermodynamics and incorporates rate-dependent thermal effects. 
The framework is implemented in 1-D to describe a Tonpilz transducer in both dynamic actuation and sensing modes. The model is shown to qualitatively describe the effect of increase in magnetic hysteresis with increasing frequency, the shearing of the magnetization loops with increasing stress, and the decrease in the magnetostriction with increasing load stiffness. 4. Turbulent fluxes by "Conditional Eddy Sampling" Siebicke, Lukas 2015-04-01 Turbulent flux measurements are key to understanding ecosystem scale energy and matter exchange, including atmospheric trace gases. While the eddy covariance approach has evolved as an invaluable tool to quantify fluxes of e.g. CO2 and H2O continuously, it is limited to very few atmospheric constituents for which sufficiently fast analyzers exist. High instrument cost, lack of field-readiness or high power consumption (e.g. many recent laser-based systems requiring strong vacuum) further impair application to other tracers. Alternative micrometeorological approaches such as conditional sampling might overcome major limitations. Although the idea of eddy accumulation has already been proposed by Desjardin in 1972 (Desjardin, 1977), at the time it could not be realized for trace gases. Major simplifications by Businger and Oncley (1990) lead to it's widespread application as 'Relaxed Eddy Accumulation' (REA). However, those simplifications (flux gradient similarity with constant flow rate sampling irrespective of vertical wind velocity and introduction of a deadband around zero vertical wind velocity) have degraded eddy accumulation to an indirect method, introducing issues of scalar similarity and often lack of suitable scalar flux proxies. Here we present a real implementation of a true eddy accumulation system according to the original concept. Key to our approach, which we call 'Conditional Eddy Sampling' (CES), is the mathematical formulation of conditional sampling in it's true form of a direct eddy flux measurement paired with a performant real implementation. Dedicated hardware controlled by near-real-time software allows full signal recovery at 10 or 20 Hz, very fast valve switching, instant vertical wind velocity proportional flow rate control, virtually no deadband and adaptive power management. Demonstrated system performance often exceeds requirements for flux measurements by orders of magnitude. The system's exceptionally low power consumption is ideal 5. Effect of Sodium-Potassium Pump Inhibitors and Membrane-Depolarizing Agents on Sodium Nitroprusside-Induced Relaxation and Cyclic Guanosine Monophosphate Accumulation in Rat Aorta Rapoport, Robert M; Schwartz, Karen; Murad, Ferid 1985-01-01 ... or tetraethylammonium, membrane-depolarizing agents, inhibited relaxation to nitroprusside. These conditions had little or no effect on the elevated cyclic guanosine monophosphate levels at a concentration of nitroprusside (0.1 μM... 6. A relaxed (rel) mutant of Streptomyces coelicolor A3(2) with a missing ribosomal protein lacks the ability to accumulate ppGpp, A-factor and prodigiosin. Ochi, K 1990-12-01 A relaxed (rel) mutant was found among 70 thiopeptin-resistant isolates of Streptomyces coelicolor A3(2) which arose spontaneously. The ability of the rel mutant to accumulate ppGpp during Casamino acid deprivation was reduced 10-fold compared to the wild-type. Analysis of the ribosomal proteins by two-dimensional PAGE revealed that the mutant lacked a ribosomal protein, tentatively designated ST-L11. It was therefore classified as a relC mutant. 
The mutant was defective in producing A-factor and the pigmented antibiotic prodigiosin, in both liquid and agar cultures, but produced agarase normally. Production of actinorhodin, another pigmented antibiotic, was also abnormal; it appeared suddenly in agar cultures after 10 d incubation. Although aerial mycelium still formed, its appearance was markedly delayed. Whereas liquid cultures of the parent strain accumulated ppGpp, agar cultures accumulated only trace amounts. Instead, a substance characterized only as an unidentified HPLC peak accumulated intracellularly in the late growth phase, just before aerial mycelium formation and antibiotic production. This substance did not accumulate in mutant cells. It was found in S. lividans 66 and S. parvulus, but not in seven other Streptomyces species tested. The significance of these observations, and the relationship of the mutant to earlier rel isolates of Streptomyces is discussed. 7. Stationary mesoscale eddies, upgradient eddy fluxes, and the anisotropy of eddy diffusivity Lu, Jianhua; Wang, Fuchang; Liu, Hailong; Lin, Pengfei 2016-01-01 The mesoscale eddies of which parameterization is needed in coarse-resolution ocean models include not only the transient eddies akin to baroclinic instability but also the stationary eddies associated with topography. By applying a modified Lorenz-type decomposition to the eddy-permitting Southern Ocean State Estimate, we show that the stationary mesoscale eddies contribute a significant part to the total eddy kinetic energy, eddy enstrophy, and the total eddy-induced isopycnal thickness and potential vorticity fluxes. We find that beneath middepth (about 1000 m) the upgradient eddy fluxes, or so-called "negative" eddy diffusivities, are mainly attributed to the stationary mesoscale eddies, whereas the remaining transient eddy diffusivity is positive, for which the Gent and McWilliams (1990) parameterization scheme applies well. A quantitative method of measuring the anisotropy of eddy diffusivity is presented. The effect of stationary mesoscale eddies is one of major sources responsible for the anisotropy of eddy diffusivity. We suggest that an independent parameterization scheme for stationary mesoscale eddies may be needed for coarse-resolution ocean models, although the transient eddies remain the predominant part of mesoscale eddies in the oceans. 8. Eddy current testing Song, Sung Jin; Lee, Hyang Beom; Kim, Young Hwan [Soongsil Univ., Seoul (Korea, Republic of); Shin, Young Kil [Kunsan Univ., Gunsan (Korea, Republic of) 2004-02-15 Eddy current testing has been widely used for non destructive testing of steam generator tubes. In order to retain reliability in ECT, the following subjects were carried out in this study: numerical modeling and analysis of defects by using BC and RPC probes in SG tube, preparation of absolute coil impedance plane diagram by FEM. Signal interpretation of the eddy current signals obtained from nuclear power plants. 9. Mesoscale eddies are oases for higher trophic marine life Godø, Olav R. 2012-01-17 Mesoscale eddies stimulate biological production in the ocean, but knowledge of energy transfers to higher trophic levels within eddies remains fragmented and not quantified. Increasing the knowledge base is constrained by the inability of traditional sampling methods to adequately sample biological processes at the spatio-temporal scales at which they occur. 
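The stationary mesoscale eddy record above applies a Lorenz-type decomposition that separates the eddy flux into a stationary part (time-mean deviations from the zonal mean) and a transient part (deviations from the time mean). A compact sketch of that decomposition for a scalar flux on a time-longitude grid follows; the array layout and synthetic fields are illustrative assumptions.

```python
import numpy as np

def eddy_flux_decomposition(v, c):
    """Split the zonal-mean eddy flux of a scalar c by velocity v into
    stationary-eddy and transient-eddy parts (Lorenz-type decomposition).

    v, c : arrays of shape (time, longitude) at one latitude/depth level.
    """
    v_bar, c_bar = v.mean(axis=0), c.mean(axis=0)      # time means at each longitude
    v_star = v_bar - v_bar.mean()                      # stationary eddy: time mean minus zonal mean
    c_star = c_bar - c_bar.mean()
    v_prime, c_prime = v - v_bar, c - c_bar            # transient eddies: deviations from time mean
    stationary = np.mean(v_star * c_star)              # zonal mean of v* c*
    transient = np.mean(v_prime * c_prime)             # zonal and time mean of v'c'
    return stationary, transient

# Illustrative fields: a topographically locked standing wave plus random transients
rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
v = np.sin(x) + 0.5 * rng.standard_normal((100, 64))
c = np.sin(x) + 0.5 * rng.standard_normal((100, 64))
print(eddy_flux_decomposition(v, c))   # stationary part near 0.5, transient part near 0
```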
By combining satellite and acoustic observations over spatial scales of 10 s of km horizontally and 100 s of m vertically, supported by hydrographical and biological sampling we show that anticyclonic eddies shape distribution and density of marine life from the surface to bathyal depths. Fish feed along density structures of eddies, demonstrating that eddies catalyze energy transfer across trophic levels. Eddies create attractive pelagic habitats, analogous to oases in the desert, for higher trophic level aquatic organisms through enhanced 3-D motion that accumulates and redistributes biomass, contributing to overall bioproduction in the ocean. Integrating multidisciplinary observation methodologies promoted a new understanding of biophysical interaction in mesoscale eddies. Our findings emphasize the impact of eddies on the patchiness of biomass in the sea and demonstrate that they provide rich feeding habitat for higher trophic marine life. 2012 God et al. 10. A compact and stable eddy covariance set-up for methane measurements using off-axis integrated cavity output spectroscopy D. M. D. Hendriks 2007-08-01 Full Text Available A DLT-100 Fast Methane Analyser (FMA from Los Gatos Research (LGR Ltd. is assessed for its applicability in a closed path eddy covariance field set-up. The FMA uses off-axis integrated cavity output spectroscopy (ICOS combined with a highly specific narrow band laser for the detection of CH4 and strongly reflective mirrors to obtain a laser path length of 2×10³ to 20×10³ m. Statistical testing, a calibration experiment and comparison with high tower data showed high precision and very good stability of the instrument. The measurement cell response time was tested to be 0.10 s. In the field set-up, the FMA is attached to a scroll pump and combined with a Gill Windmaster Pro 3 axis Ultrasonic Anemometer and a Licor 7500 open path infrared gas analyzer. The power-spectra and co-spectra of the instrument are satisfactory for 10 Hz sampling rates. The correspondence with CH4 flux chamber measurements is good and the observed CH4 emissions are comparable with (eddy covariance CH4 measurements in other peat areas. CH4 emissions are rather variable over time and show a diurnal pattern. The average CH4 emission is 50±12.5 nmol m−2 s−1, while the typical maximum CH4 emission is 120±30 nmol m−2 s−1 (during daytime and the typical minimum flux is –20±2.5 nmol m−2 s−1 (uptake, during night time. Additionally, the set-up was tested for three measurement techniques with slower measurement rates, which could be used in the future to make the scroll pump superfluous and save energy. Both disjunct eddy covariance as well as slow 1 Hz eddy covariance showed results very similar to normal 10 Hz eddy covariance. Relaxed eddy accumulation (REA only matched with normal 10 Hz eddy covariance over an averaging period of at least several weeks. 11. Discrete large eddy simulation L.TAO; K.R.RAJAGOPAL 2001-01-01 Despite the intense effort expended towards obtaining a model for describing the turbulent flows of fluid,there is no model at hand that can do an adequate job.This leads us to look for a non-traditional approach to turbulence modeling.In this work we conjoin the notion of large eddy simulation with those of fuzzy sets and neural networks to describe a class of turbulent flow.in previous works we had discussed several issues concerning large eddy simulation such as filtering and averaging,Here,we discuss the use of fuzzy sets to improve the filtering procedure. 12. 
Don Eddy; "Jewelry." Schaefer, Claire 1989-01-01 Presents a lesson that introduces students in grades K-three to sources of design inspiration in contemporary urban settings. Using Don Eddy's painting of a jewelry store window display, asks students to describe and analyze the interplay of shape, pattern, and color. Suggests studio activities, including an activity in which students build a… 13. Interview with Eddie Reisch Owen, Hazel 2013-01-01 Eddie Reisch is currently working as a policy advisor for Te Reo Maori Operational Policy within the Student Achievement group with the Ministry of Education in New Zealand, where he has implemented and led a range of e-learning initiatives and developments, particularly the Virtual Learning Network (VLN). He is regarded as one of the leading… 14. Eddies off Tasmania 2002-01-01 This true-color satellite image shows a large phytoplankton bloom, several hundred square kilometers in size, in the Indian Ocean off the west coast of Tasmania. In this scene, the rich concentration of microscopic marine plants gives the water a lighter, more turquoise appearance which helps to highlight the current patterns there. Notice the eddies, or vortices in the water, that can be seen in several places. It is possible that these eddies were formed by converging ocean currents flowing around Tasmania, or by fresh river runoff from the island, or both. Often, eddies in the sea serve as a means for stirring the water, thus providing nutrients that help support phytoplankton blooms, which in turn provide nutrition for other organisms. Effectively, these eddies help feed the sea (click to read an article on this topic). This image was acquired November 7, 2000, by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) flying aboard the Orbview-2 satellite. Tasmania is located off Australia's southeastern coast. Image courtesy SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE 15. Emergent eddy saturation from an energy constrained eddy parameterisation Mak, J.; Marshall, D. P.; Maddison, J. R.; Bachman, S. D. 2017-04-01 The large-scale features of the global ocean circulation and the sensitivity of these features with respect to forcing changes are critically dependent upon the influence of the mesoscale eddy field. One such feature, observed in numerical simulations whereby the mesoscale eddy field is at least partially resolved, is the phenomenon of eddy saturation, where the time-mean circumpolar transport of the Antarctic Circumpolar Current displays relative insensitivity to wind forcing changes. Coarse-resolution models employing the Gent-McWilliams parameterisation with a constant Gent-McWilliams eddy transfer coefficient seem unable to reproduce this phenomenon. In this article, an idealised model for a wind-forced, zonally symmetric flow in a channel is used to investigate the sensitivity of the circumpolar transport to changes in wind forcing under different eddy closures. It is shown that, when coupled to a simple parameterised eddy energy budget, the Gent-McWilliams eddy transfer coefficient of the form described in Marshall et al. (2012) [A framework for parameterizing eddy potential vorticity fluxes, J. Phys. Oceanogr., vol. 42, 539-557], which includes a linear eddy energy dependence, produces eddy saturation as an emergent property. 16. Eddy Powell 1939 - 2003 2003-01-01 We were saddened to learn that Eddy Powell had passed away on Saturday 26 July after a long illness. 
Eddy had so many friends at CERN and made such a contribution to the Organisation that it is impossible that his passing goes without comment. Eddy was born in England on 4 August 1939 and, after serving his apprenticeship with the U.K. Ministry of Defence, he joined CERN in September 1965. As an electrical design draftsman with the Synchro-cyclotron Division he played an important role in the upgrades of that machine in the early 1970s, particularly on the RF systems and later on the development of the ISOLDE facility. This brought him into close contact with many of the technical support services in CERN and, unlike many of his compatriots, he acquired a remarkably good fluency in French. Always inquisitive about the physics carried out at CERN, he spent a great deal of time learning from physicists and engineers at all levels. When he felt sufficiently confident he became a CERN Guide for general public visit... 17. EDDIE RICKENBACKER: RACETRACK ENTREPRENEUR W. David Lewis 2000-01-01 Edward V. (Eddie) Rickenbacker (1890-1973) is best remembered for his record as a combat pilot in World War I, in which he shot down 26 German aircraft and won fame as America's "Ace of Aces." From 1934 until 1963 he was general manager, president, and board chairman of Eastern Air Lines, which was for a time the most profitable air carrier in the United States. This paper shows how Rickenbacker's fiercely entrepreneurial style of management was born in his early involvement in the automobile industry, and particularly in his career as an automobile racing driver from 1909 through 1916. 18. Emergent eddy saturation from an energy constrained eddy parameterisation Mak, Julian; Marshall, David P; Bachman, Scott D 2016-01-01 The large-scale features of the global ocean circulation and the sensitivity of these features with respect to forcing changes are critically dependent upon the influence of the mesoscale eddy field. One such feature, observed in numerical simulations whereby the mesoscale eddy field is at least partially resolved, is the phenomenon of eddy saturation, where the time-mean circumpolar transport of the Antarctic Circumpolar Current displays relative insensitivity to wind forcing changes. Coarse-resolution models employing the Gent-McWilliams parameterisation with a constant Gent-McWilliams coefficient seem unable to reproduce this phenomenon. In this article, an idealised model for a wind-forced, zonally symmetric flow in a channel is used to investigate the sensitivity of the circumpolar transport to changes in wind forcing under different eddy closures. It is shown that, when coupled to a simple parameterised eddy energy budget, the Gent-McWilliams coefficient of the form described in Marshall et al. (2012... 19. Conditional Eddies in Plasma Turbulence Johnsen, Helene; Pécseli, Hans; Trulsen, J. 1986-01-01 Conditional structures, or eddies, in turbulent flows are discussed with special attention to electrostatic turbulence in plasmas. The potential variation of these eddies is obtained by sampling the fluctuations only when a certain condition is satisfied in a reference point. The resulting... 20. An explicit relaxation filtering framework based upon Perona-Malik anisotropic diffusion for shock capturing and subgrid scale modeling of Burgers turbulence Maulik, Romit 2016-01-01 In this paper, we introduce a relaxation filtering closure approach to account for subgrid scale effects in explicitly filtered large eddy simulations using the concept of anisotropic diffusion.
We utilize the Perona-Malik diffusion model and demonstrate its shock capturing ability and spectral performance for solving the Burgers turbulence problem, which is a simplified prototype for more realistic turbulent flows showing the same quadratic nonlinearity. Our numerical assessments present the behavior of various diffusivity functions in conjunction with a detailed sensitivity analysis with respect to the free modeling parameters. In comparison to direct numerical simulation (DNS) and under-resolved DNS results, we find that the proposed closure model is efficient in the prevention of energy accumulation at grid cut-off and is also adept at preventing any possible spurious numerical oscillations due to shock formation under the optimal parameter choices. In contrast to other relaxation filtering approaches, it... 1. Natural relaxation Marzola, Luca; Raidal, Martti 2016-11-01 Motivated by natural inflation, we propose a relaxation mechanism consistent with inflationary cosmology that explains the hierarchy between the electroweak scale and Planck scale. This scenario is based on a selection mechanism that identifies the low-scale dynamics as the one that is screened from UV physics. The scenario also predicts the near-criticality and metastability of the Standard Model (SM) vacuum state, explaining the Higgs boson mass observed at the Large Hadron Collider (LHC). Once Majorana right-handed neutrinos are introduced to provide a viable reheating channel, our framework yields a corresponding mass scale that allows for the seesaw mechanism as well as for standard thermal leptogenesis. We argue that considering singlet scalar dark matter extensions of the proposed scenario could solve the vacuum stability problem and discuss how the cosmological constant problem is possibly addressed. 2. Modeling mesoscale eddies Canuto, V. M.; Dubovikov, M. S. Mesoscale eddies are not resolved in coarse resolution ocean models and must be modeled. They affect both mean momentum and scalars. At present, no generally accepted model exists for the former; in the latter case, mesoscales are modeled with a bolus velocity u∗ to represent a sink of mean potential energy. However, comparison of u∗(model) vs. u∗ (eddy resolving code, [J. Phys. Ocean. 29 (1999) 2442]) has shown that u∗(model) is incomplete and that additional terms, "unrelated to thickness source or sinks", are required. Thus far, no form of the additional terms has been suggested. To describe mesoscale eddies, we employ the Navier-Stokes and scalar equations and a turbulence model to treat the non-linear interactions. We then show that the problem reduces to an eigenvalue problem for the mesoscale Bernoulli potential. The solution, which we derive in analytic form, is used to construct the momentum and thickness fluxes. In the latter case, the bolus velocity u∗ is found to contain two types of terms: the first type entails the gradient of the mean potential vorticity and represents a positive contribution to the production of mesoscale potential energy; the second type of terms, which is new, entails the velocity of the mean flow and represents a negative contribution to the production of mesoscale potential energy, or equivalently, a backscatter process whereby a fraction of the mesoscale potential energy is returned to the original reservoir of mean potential energy. This type of terms satisfies the physical description of the additional terms given by [J. Phys. Ocean. 29 (1999) 2442]. 
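The Maulik record above uses Perona-Malik anisotropic diffusion as an explicit relaxation filter for Burgers turbulence. Below is a minimal 1-D sketch of such a filtering pass with the common diffusivity g(s) = 1/(1 + (s/κ)²); the grid, parameters and test profile are illustrative assumptions and no attempt is made to reproduce the paper's solver or its sensitivity study.

```python
import numpy as np

def perona_malik_filter(u, kappa=0.5, dt=0.2, n_steps=3):
    """Explicit relaxation-filtering pass with Perona-Malik diffusion.

    du/dt = d/dx[ g(|du/dx|) du/dx ],  g(s) = 1 / (1 + (s/kappa)^2)
    Large gradients (shocks) see a small diffusivity and stay sharp, while
    grid-scale oscillations are smoothed, preventing energy accumulation
    at the grid cut-off.  Periodic boundaries, unit grid spacing.
    """
    u = np.asarray(u, float).copy()
    for _ in range(n_steps):
        grad = np.roll(u, -1) - u                   # forward difference at cell faces
        g = 1.0 / (1.0 + (np.abs(grad) / kappa) ** 2)
        flux = g * grad                             # nonlinear diffusive flux
        u += dt * (flux - np.roll(flux, 1))         # divergence of the flux
    return u

# Illustrative Burgers-like sawtooth (one shock) contaminated by grid-scale noise
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u0 = np.where(x < np.pi, x / np.pi, x / np.pi - 2.0) + 0.05 * np.cos(40.0 * x)
u_filtered = perona_malik_filter(u0)
```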
The mesoscale flux that enters the momentum equations is also contributed by two types of terms of the same physical nature as those entering the thickness flux. The potential vorticity flux is also shown to contain two types of terms: the first is of the gradient-type while the other terms entail the velocity of the mean flow. An expression is derived for the mesoscale 3. A compact and stable eddy covariance set-up for methane measurements using off-axis integrated cavity output spectroscopy D. M. D. Hendriks 2008-01-01 Full Text Available A Fast Methane Analyzer (FMA is assessed for its applicability in a closed path eddy covariance field set-up in a peat meadow. The FMA uses off-axis integrated cavity output spectroscopy combined with a highly specific narrow band laser for the detection of CH4 and strongly reflective mirrors to obtain a laser path length of 2–20×103 m. Statistical testing and a calibration experiment showed high precision (7.8×10−3 ppb and accuracy (<0.30% of the instrument, while no drift was observed. The instrument response time was determined to be 0.10 s. In the field set-up, the FMA is attached to a scroll pump and combined with a 3-axis ultrasonic anemometer and an open path infrared gas analyzer for measurements of carbon dioxide and water vapour. The power-spectra and co-spectra of the instruments were satisfactory for 10 Hz sampling rates. Due to erroneous measurements, spikes and periods of low turbulence the data series consisted for 26% of gaps. Observed CH4 fluxes consisted mainly of emission, showed a diurnal cycle, but were rather variable over. The average CH4 emission was 29.7 nmol m−2 s−1, while the typical maximum CH4 emission was approximately 80.0 nmol m−2 s−1 and the typical minimum flux was approximately 0.0 nmol m−2 s−1. The correspondence of the measurements with flux chamber measurements in the footprint was good and the observed CH4 emission rates were comparable with eddy covariance CH4 measurements in other peat areas. Additionally, three measurement techniques with lower sampling frequencies were simulated, which might give the possibility to measure CH4 fluxes without an external pump and save energy. Disjunct eddy covariance appeared to be the most reliable substitute for 10 Hz eddy covariance, while relaxed eddy accumulation gave 4. Large Eddy Simulation Joseph Mathew 2010-10-01 Full Text Available Large eddy simulation (LES is an emerging technique for obtaining an approximation to turbulent flow fields. It is an improvement over the widely prevalent practice of obtaining means of turbulent flows when the flow has large scale, low frequency, unsteadiness. An introduction to the method, its general formulation, and the more common modelling for flows without reaction, is discussed. Some attempts at extension to flows with combustion have been made. Examples from present work for flows with and without combustion are given. The final example of the LES of the combustor of a helicopter engine illustrates the state-of-the-art in application of the technique.Defence Science Journal, 2010, 60(6, pp.598-605, DOI:http://dx.doi.org/10.14429/dsj.60.602 5. Eddy Correlation Flux Measurement System Oak Ridge National Laboratory — The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat,... 6. A simple model of eddy saturation Marshall, D. P.; Ambaum, M.; Munday, D. R.; Novak, L.; Maddison, J. R. 
2016-02-01 A simple model is developed for eddy saturation of the Antarctic Circumpolar Current (ACC): the relative insensitivity of its volume transport to the magnitude of the surface wind stress in ocean models with explicit eddies. The simple model solves prognostic equations for the ACC volume transport and the eddy energy, forming a 2-dimensional nonlinear dynamical system. In equilibrium, the volume transport is independent of the surface wind stress but scales with the bottom drag, whereas the eddy energy scales with the wind stress but is independent of bottom drag. The magnitude of the eddy energy is controlled by the zonal momentum balance between the surface wind stress and eddy form stress, whereas the baroclinic volume transport is controlled by the eddy energy balance between the mean-to-eddy energy conversion and bottom dissipation. The theoretical predictions are confirmed in eddy-resolving numerical calculations for an idealised reentrant channel. The results suggest that the rate of eddy energy dissipation has a strong impact not only the volume transport of the ACC, but also on global ocean stratification and heat content through the thermal wind relation. Moreover, a vital ingredient in this model is a relation between the eddy form stress and eddy energy derived in the eddy parameterisation framework of Marshall et al. (2012, J. Phys. Oceanogr.), offering the prospect of obtaining eddy saturation in ocean models with parameterised eddies. 7. Applied large eddy simulation. Tucker, Paul G; Lardeau, Sylvain 2009-07-28 Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail has substantial cost. This has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects were examined. It was to an extent concluded that for LES to make most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. Added to this further, highly industrial sector model parametrizations will be required with clear thought on the key target design parameter(s). The combination of good numerical modelling expertise, a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS but with much greater fidelity. 8. Remote field eddy current testing Cheong, Y. M.; Jung, H. K.; Huh, H.; Lee, Y. S.; Shim, C. 
M 2001-03-01 The state-of-the-art technology of the remote field eddy current, which is actively developed as an electromagnetic non-destructive testing tool for ferromagnetic tubes, is described. The historical background and recent R and D activities of remote-field eddy current technology are explained, including the theoretical development of remote field eddy current, such as analytical and numerical approaches, and the results of finite element analysis. The influencing factors for actual applications, such as the effect of frequency, magnetic permeability, receiving sensitivity, and difficulties of detection and classification of defects, are also described. Finally, two examples of actual application, 1) the gap measurement between pressure tubes and calandria tubes in a CANDU reactor and 2) the detection of defects in ferromagnetic heat exchanger tubes, are described. The future research efforts are also included. 9. Formation and propagation of the Aleutian eddy Ishiyama, H.; Ueno, H.; Inatsu, M. 2012-12-01 Aleutian eddies are anticyclonic eddies which form south of the Aleutian Islands between 170°E and 175°E and propagate southwestward. In this study we investigated the formation and propagation of the Aleutian eddy through analysis of an 18-year time series of satellite altimeter data distributed by AVISO. A neighbor enclosed area tracking algorithm was applied to track each eddy identified using the Okubo-Weiss parameter. Zero to five Aleutian eddies were formed per year and the number of Aleutian eddies formed per year changed with a period of three to four years. Meanwhile, the propagation route of the Aleutian eddy did not show marked interannual variation. Most of the Aleutian eddies propagate toward the center of the western subarctic gyre; the rest propagate toward the Kamchatka Peninsula or into the Bering Sea. 10. Physical interpretation and separation of eddy current pulsed thermography Yin, Aijun; Gao, Bin; Yun Tian, Gui; Woo, W. L.; Li, Kongjing 2013-02-01 Eddy current pulsed thermography (ECPT) applies induction heating and a thermal camera for non-destructive testing and evaluation (NDT&E). Because of the variation in resultant surface heat distribution, the physical mechanism that corresponds to the general behavior of ECPT can be divided into an accumulation of Joule heating via eddy currents and heat diffusion. However, throughout the literature, the heating mechanisms of ECPT are not described in detail in terms of these two thermal phenomena, and the two are difficult to separate. Nevertheless, once these two physical parameters are separated, they can be directly used to detect anomalies and predict the variation in material properties such as electrical conductivity, magnetic permeability and microstructure. This paper reports a physical interpretation of these two physical phenomena, which can be found in different time responses given the ECPT image sequences. Based on these phenomena and their behaviors, the paper proposes a statistical method based on single-channel blind source separation to decompose the two physical phenomena using different stages of eddy current and thermal propagation from the ECPT images. Links between mathematical models and physical models have been discussed and verified. This fundamental understanding of transient eddy current distribution and heating propagation can be applied to the development of feature extraction and pattern recognition for the quantitative analysis of ECPT measurement images and defect characterization.
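Both eddy-current entries above (remote-field testing and ECPT) rest on the fact that induced eddy currents, and therefore the Joule heating they deposit, are confined to a thin surface layer. The standard skin-depth formula makes this concrete; the sketch below is illustrative, and the copper conductivity and 100 kHz excitation frequency in the example are assumed values, not parameters from either paper.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H m^-1]

def skin_depth(freq_hz, conductivity_s_m, mu_r=1.0):
    """Electromagnetic skin depth delta = sqrt(2 / (mu * sigma * omega)) [m]:
    the depth over which induced eddy currents (and ECPT Joule heating)
    are concentrated in a conductor."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (mu_r * MU0 * conductivity_s_m * omega))

# Example: copper (sigma ~ 5.8e7 S/m) at 100 kHz gives delta ~ 0.2 mm,
# which is why low excitation frequencies are needed to probe thick walls.
print(skin_depth(1e5, 5.8e7))
```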
11. Westward movement of eddies into the Gulf of Aden from the Arabian Sea Al Saafani, M.A.; Shenoi, S.S.C.; Shankar, D.; Aparna, M.; Kurian, J.; Durand, F.; Vinayachandran, P.N. associated with the monsoon system over the Arabian Sea [Findlater, 1969], and its associated curl field begin to relax in late July and early August, a broad upwelling band begins to break up into several large eddies in the vicinity of the Gulf of Aden (see...-September, the outflow from the Great Whirl causes the formation of the Socotra Gyre [Simmons et al., 1988]. This mechanism, in which eddies pinch off from the Somali Current system owing to instabilities, is also active during May [Fratantoni et al., 2006]. [20] To examine... 12. Dynamic Model of Mesoscale Eddies Dubovikov, Mikhail S. 2003-04-01 Oceanic mesoscale eddies, which are analogs of the well-known synoptic eddies (cyclones and anticyclones), are studied on the basis of the turbulence model originated by Dubovikov (Dubovikov, M.S., "Dynamical model of turbulent eddies", Int. J. Mod. Phys. B 7, 4631-4645 (1993)) and further developed by Canuto and Dubovikov (Canuto, V.M. and Dubovikov, M.S., "A dynamical model for turbulence: I. General formalism", Phys. Fluids 8, 571-586 (1996a) (CD96a); Canuto, V.M. and Dubovikov, M.S., "A dynamical model for turbulence: II. Shear-driven flows", Phys. Fluids 8, 587-598 (1996b) (CD96b); Canuto, V.M., Dubovikov, M.S., Cheng, Y. and Dienstfrey, A., "A dynamical model for turbulence: III. Numerical results", Phys. Fluids 8, 599-613 (1996c) (CD96c); Canuto, V.M., Dubovikov, M.S. and Dienstfrey, A., "A dynamical model for turbulence: IV. Buoyancy-driven flows", Phys. Fluids 9, 2118-2131 (1997a) (CD97a); Canuto, V.M. and Dubovikov, M.S., "A dynamical model for turbulence: V. The effect of rotation", Phys. Fluids 9, 2132-2140 (1997b) (CD97b); Canuto, V.M., Dubovikov, M.S. and Wielaard, D.J., "A dynamical model for turbulence: VI. Two dimensional turbulence", Phys. Fluids 9, 2141-2147 (1997c) (CD97c); Canuto, V.M. and Dubovikov, M.S., "Physical regimes and dimensional structure of rotating turbulence", Phys. Rev. Lett. 78, 666-669 (1997d) (CD97d); Canuto, V.M., Dubovikov, M.S. and Dienstfrey, A., "Turbulent convection in a spectral model", Phys. Rev. Lett. 78, 662-665 (1997e) (CD97e); Canuto, V.M. and Dubovikov, M.S., "A new approach to turbulence", Int. J. Mod. Phys. 12, 3121-3152 (1997f) (CD97f); Canuto, V.M. and Dubovikov, M.S., "Two scaling regimes for rotating Rayleigh-Bénard convection", Phys. Rev. Lett. 78, 281-284 (1998) (CD98); Canuto, V.M. and Dubovikov, M.S., "A dynamical model for turbulence: VII. The five invariants for shear driven flows", Phys. Fluids 11, 659-664 (1999a) (CD99a); Canuto, V.M., Dubovikov, M.S. and Yu, G., "A dynamical model for turbulence: VIII. IR and UV 13. Relaxation Techniques for Health What's the ... Bottom Line? How much do we know about relaxation techniques? A substantial amount of research has been done ... 14. Eddy current thickness measurement apparatus Rosen, Gary J.; Sinclair, Frank; Soskov, Alexander; Buff, James S. 2015-06-16 A sheet of a material is disposed in a melt of the material. The sheet is formed using a cooling plate in one instance. An exciting coil and sensing coil are positioned downstream of the cooling plate. The exciting coil and sensing coil use eddy currents to determine a thickness of the solid sheet on top of the melt. 15. Latent Period of Relaxation.
Kobayashi, M; Irisawa, H 1961-10-27 The latent period of relaxation of molluscan myocardium due to anodal current is much longer than that of contraction. Although the rate and the grade of relaxation are intimately related to both the stimulus condition and the muscle tension, the latent period of relaxation remains constant, except when the temperature of the bathing fluid is changed. 16. Quantifying mesoscale eddies in the Lofoten Basin Raj, R. P.; Johannessen, J. A.; Eldevik, T.; Nilsen, J. E. Ø.; Halo, I. 2016-07-01 The Lofoten Basin is the most eddy-rich region in the Norwegian Sea. In this paper, the characteristics of these eddies are investigated from a comprehensive database of nearly two decades of satellite altimeter data (1995-2013) together with Argo profiling floats and surface drifter data. An automated method identified 1695/1666 individual anticyclonic/cyclonic eddies in the Lofoten Basin from more than 10,000 altimeter-based eddy observations. The eddies are found to be predominantly generated and residing locally. The spatial distributions of lifetime, occurrence, generation sites, size, intensity, and drift of the eddies are studied in detail. The anticyclonic eddies in the Lofoten Basin are the most long-lived eddies (>60 days), especially in the western part of the basin. We reveal two hotspots of eddy occurrence on either side of the Lofoten Basin. Furthermore, we infer a cyclonic drift of eddies in the western Lofoten Basin. Barotropic energy conversion rates reveal energy transfer from the slope current to the eddies during winter. An automated colocation of surface drifters trapped inside the altimeter-based eddies is used to corroborate the orbital speed of the anticyclonic and cyclonic eddies. Moreover, the vertical structure of the altimeter-based eddies is examined using colocated Argo profiling float profiles. The combination of altimetry, Argo floats, and surface drifter data is therefore considered to be a promising observation-based approach for further studies of the role of eddies in the transport of heat and biomass from the slope current to the Lofoten Basin. 17. Observed eddy dissipation in the Agulhas Current Braby, Laura; Backeberg, Björn C.; Ansorge, Isabelle; Roberts, Michael J.; Krug, Marjolaine; Reason, Chris J. C. 2016-08-01 Analyzing eddy characteristics from a global data set of automatically tracked eddies for the Agulhas Current in combination with surface drifters as well as geostrophic currents from satellite altimeters, it is shown that eddies from the Mozambique Channel and south of Madagascar dissipate as they approach the Agulhas Current. By tracking the offshore position of the current core and its velocity at 30°S in relation to eddies, it is demonstrated that eddy dissipation occurs through a transfer of momentum, where anticyclones consistently induce positive velocity anomalies, and cyclones reduce the velocities and cause offshore meanders. Composite analyses of the anticyclonic (cyclonic) eddy-current interaction events demonstrate that the positive (negative) velocity anomalies propagate downstream in the Agulhas Current at 44 km/d (23 km/d). Many models are unable to represent these eddy dissipation processes, affecting our understanding of the Agulhas Current.
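The Lofoten Basin entry above diagnoses mean-to-eddy energy transfer through barotropic energy conversion rates. A minimal sketch of that diagnostic on a regular grid is given below, using the common form BT = −ρ₀ [ u'u' ∂U/∂x + u'v' (∂U/∂y + ∂V/∂x) + v'v' ∂V/∂y ]; the exact averaging choices and sign conventions of the cited study are not reproduced here, and the constant reference density is an assumption.

```python
import numpy as np

def barotropic_conversion(U, V, uu, uv, vv, dx, dy, rho0=1025.0):
    """Barotropic (mean-to-eddy) kinetic energy conversion rate [W m^-3].
    U, V       : time-mean velocity components, 2-D arrays indexed [y, x]
    uu, uv, vv : eddy velocity covariances <u'u'>, <u'v'>, <v'v'> on the same grid
    Positive values indicate eddies gaining energy from the mean horizontal shear."""
    dUdy, dUdx = np.gradient(U, dy, dx)
    dVdy, dVdx = np.gradient(V, dy, dx)
    return -rho0 * (uu * dUdx + uv * (dUdy + dVdx) + vv * dVdy)
```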
18. Quality and Reliability of Large-Eddy Simulations Meyers, Johan; Sagaut, Pierre 2008-01-01 Computational resources have developed to the level that, for the first time, it is becoming possible to apply large-eddy simulation (LES) to turbulent flow problems of realistic complexity. Many examples can be found in technology and in a variety of natural flows. This puts issues related to assessing, assuring, and predicting the quality of LES into the spotlight. Several LES studies have been published in the past, demonstrating a high level of accuracy with which turbulent flow predictions can be attained, without having to resort to the excessive requirements on computational resources imposed by direct numerical simulations. However, the setup and use of turbulent flow simulations requires a profound knowledge of fluid mechanics, numerical techniques, and the application under consideration. The susceptibility of large-eddy simulations to errors in modelling, in numerics, and in the treatment of boundary conditions, can be quite large due to nonlinear accumulation of different contributions over time, ... 19. Biogeochemical characteristics of a long-lived anticyclonic eddy in the eastern South Pacific Ocean Cornejo D'Ottone, Marcela; Bravo, Luis; Ramos, Marcel; Pizarro, Oscar; Karstensen, Johannes; Gallegos, Mauricio; Correa-Ramirez, Marco; Silva, Nelson; Farias, Laura; Karp-Boss, Lee 2016-05-01 Mesoscale eddies are important, frequent, and persistent features of the circulation in the eastern South Pacific (ESP) Ocean, transporting physical, chemical and biological properties from the productive shelves to the open ocean. Some of these eddies exhibit subsurface hypoxic or suboxic conditions and may serve as important hotspots for nitrogen loss, but little is known about oxygen consumption rates and nitrogen transformation processes associated with these eddies. In the austral fall of 2011, during the Tara Oceans expedition, an intrathermocline, anticyclonic, mesoscale eddy with a suboxic core was observed; it contained Equatorial Subsurface Water (ESSW) that at this latitude is normally restricted to an area near the coast. Measurements of nitrogen species within the eddy revealed undersaturation (below 44 %) of nitrous oxide (N2O) and nitrite accumulation (> 0.5 µM), suggesting that active denitrification occurred in this water mass. Using satellite altimetry, we were able to track the eddy back to its region of formation on the coast of central Chile (36.1° S, 74.6° W). Field studies conducted in Chilean shelf waters close to the time of eddy formation provided estimates of initial O2 and N2O concentrations of the ESSW source water in the eddy. By the time of its offshore sighting, concentrations of both O2 and N2O in the subsurface oxygen minimum zone (OMZ) of the eddy were lower than concentrations in surrounding water and "source water" on the shelf, indicating that these chemical species were consumed as the eddy moved offshore. Estimates of apparent oxygen utilization rates at the OMZ of the eddy ranged from 0.29 to 44 nmol L⁻¹ d⁻¹ and the rate of N2O consumption was 3.92 nmol L⁻¹ d⁻¹. These results show that mesoscale eddies affect open-ocean biogeochemistry in the ESP not only by transporting physical and chemical properties from the coast to the ocean interior but also during advection, local biological consumption of oxygen within an eddy further generates conditions favorable to 20. Particle aggregation in anticyclonic eddies and implications for distribution of biomass A.
Samuelsen 2012-01-01 Full Text Available Acoustic measurements show that the biomass of zooplankton and mesopelagic fish is redistributed by mesoscale variability and that the signal extends over several hundred meters depth. The mechanisms governing this distribution are not well understood, but influences from both physical processes (i.e. physical redistribution) and biological processes (i.e. nutrient transport, primary production, active swimming, etc.) are likely. This study examines how hydrodynamic conditions and basic vertical swimming behavior act to distribute biomass in an anticyclonic eddy. Using an eddy-resolving 2.3 km-resolution physical ocean model as forcing for a particle-tracking module, particles representing passively floating organisms and organisms with vertical swimming behavior are released within an eddy and monitored for 20 to 30 days. The role of hydrodynamic conditions on the distribution of biomass is discussed in relation to the acoustic measurements. Particles released close to the surface tend, in agreement with the observations, to accumulate around the edge of the eddy, whereas particles released at depth tend to distribute along the isopycnals. After a month they are displaced several hundred meters in the vertical with the deepest particles found close to the eddy center, but there is no evidence of aggregation of particles along the eddy rim. All in all, the particle redistribution appears to result from a complex mixture of strain and vertical velocity. The simplified view where the vertical velocity in eddies is regarded as uniform and symmetric around the eddy center is therefore not a reliable representation of the eddy dynamics. 1. Particle aggregation at the edges of anticyclonic eddies and implications for distribution of biomass A. Samuelsen 2012-06-01 Full Text Available Acoustic measurements show that the biomass of zooplankton and mesopelagic fish is redistributed by mesoscale variability and that the signal extends over several hundred meters depth. The mechanisms governing this distribution are not well understood, but influences from both physical processes (i.e. redistribution) and biological processes (i.e. nutrient transport, primary production, active swimming, etc.) are likely. This study examines how hydrodynamic conditions and basic vertical swimming behavior act to distribute biomass in an anticyclonic eddy. Using an eddy-resolving 2.3 km-resolution physical ocean model as forcing for a particle-tracking module, particles representing passively floating organisms and organisms with vertical swimming behavior are released within an eddy and monitored for 20 to 30 days. The role of hydrodynamic conditions on the distribution of biomass is discussed in relation to the acoustic measurements. Particles released close to the surface tend, in agreement with the observations, to accumulate around the edge of the eddy, whereas particles released at depth gradually become distributed along the isopycnals. After a month they are displaced several hundred meters in the vertical with the deepest particles found close to the eddy center and the shallowest close to the edge. There is no evidence of aggregation of particles along the eddy rim in the last simulation. The model results point towards a physical mechanism for aggregation at the surface; however, biological processes cannot be ruled out using the current modeling tool. 2.
Transient eddy current flow metering Forbriger, Jan 2015-01-01 Measuring local velocities or entire flow rates in liquid metals or semiconductor melts is a notorious problem in many industrial applications, including metal casting and silicon crystal growth. We present a new variant of an old technique which relies on the continuous tracking of a flow-advected transient eddy current that is induced by a pulsed external magnetic field. This calibration-free method is validated by applying it to the velocity of a spinning disk made of aluminum. First tests at a rig with a flow of liquid GaInSn are also presented. 3. Transient eddy current flow metering Forbriger, J.; Stefani, F. 2015-10-01 Measuring local velocities or entire flow rates in liquid metals or semiconductor melts is a notorious problem in many industrial applications, including metal casting and silicon crystal growth. We present a new variant of an old technique which relies on the continuous tracking of a flow-advected transient eddy current that is induced by a pulsed external magnetic field. This calibration-free method is validated by applying it to the velocity of a spinning disk made of aluminum. First tests at a rig with a flow of liquid GaInSn are also presented. A. O. Abramovych 2014-06-01 Full Text Available Introduction. At present there are many electrical schematics of metal detectors (the most common kind of ground-penetrating radar), which differ in purpose. Each scheme has its own advantages and disadvantages compared to other schemes. When designing a metal detector, the problem of optimal selection of functional units arises; most schemes can only work with a narrow range of special-purpose units. Functional units used in circuits can be replaced by better ones, but specialized schemes do not provide such a possibility. Description of problem. The author has created a "complex for research of functional units of metal detectors", a universal system that meets this task. With this complex, studies were conducted on which the practical implementation of the radar-eddy-current method of distinguishing non-ferrous metals (gold, copper, etc.) is based. Description of method. Mathematical tools are used to process the metal detector signal in order to distinguish metals: gold, copper and others. Conclusions. Processing of partial pulses may provide information about signal loss during propagation in heterogeneous lossy media with nonuniformly distributed parameters. Eddy currents are used to calculate the value of the input voltage in the receiving antenna depending on the conductivity of the metal. By combining two different methods for processing the received signal, it could be proved theoretically that non-ferrous metals (gold, copper, etc.) can be distinguished with high probability. 5. Mesoscale Ocean Large Eddy Simulations Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank 2015-11-01 The highest resolution global climate models (GCMs) can now resolve the largest scales of mesoscale dynamics in the ocean. This has the potential to increase the fidelity of GCMs. However, the effects of the smallest, unresolved, scales of mesoscale dynamics must still be parametrized. One such family of parametrizations is mesoscale ocean large eddy simulations (MOLES), but the effects of including MOLES in a GCM are not well understood. In this presentation, several MOLES schemes are implemented in a mesoscale-resolving GCM (CESM), and the resulting flow is compared with that produced by more traditional sub-grid parametrizations.
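Several of the LES entries in this list (applied LES, MOLES, and the renormalization-group formulation cited later) revolve around sub-grid closures of the Smagorinsky type, in which an eddy viscosity is built from the resolved strain rate. The sketch below is a generic illustration of that closure on a 2-D grid, not the specific scheme of any cited paper; the Smagorinsky constant and the filter width Δ = √(dx·dy) are assumed choices.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, dy, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs * Delta)^2 * |S| on a 2-D grid.
    u, v : resolved velocity components indexed [y, x]
    |S|  : magnitude of the resolved strain-rate tensor."""
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    delta = np.sqrt(dx * dy)                 # filter width tied to the grid
    return (cs * delta)**2 * strain
```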
Large eddy simulation (LES) is used to simulate flows where the largest scales of turbulent motion are resolved, but the smallest scales are not resolved. LES has traditionally been used to study 3D turbulence, but recently it has also been applied to idealized 2D and quasi-geostrophic (QG) turbulence. The MOLES presented here are based on 2D and QG LES schemes. 6. Biogeochemical characteristics of a long-lived anticyclonic eddy in the eastern South Pacific Ocean M. Cornejo 2015-09-01 Full Text Available Eastern boundary upwelling systems are characterized by high productivity that often leads to subsurface hypoxia on the shelf. Mesoscale eddies are important, frequent, and persistent features of circulation in these regions, transporting physical, chemical and biological properties from shelves to the open ocean. In austral fall of 2011, during the Tara Oceans expedition, a subsurface layer (200–400 m) with a very low oxygen (O2) concentration was observed in the eastern South Pacific, ~ 900 km offshore (30° S, 81° W). Satellite altimetry combined with CTD observations associated the local oxygen anomaly with an intrathermocline, anticyclonic, mesoscale eddy with a diameter of about 150 km. The eddy contained Equatorial Subsurface Water (ESSW) that at this latitude is normally restricted near the coast. Undersaturation (44 %) of nitrous oxide (N2O) and nitrite accumulation (> 0.5 μM) gave evidence for denitrification in this water mass. Based on satellite altimetry, we tracked the eddy back to its region of formation on the coast of central Chile (36.1° S, 74.6° W). We estimate that the eddy formed in April 2010. Field studies conducted on the Chilean shelf in June 2010 provided approximate information on initial O2 and N2O concentrations of "source water" in the region at the time of eddy formation. Concentrations of both O2 and N2O in the oxygen minimum zone (OMZ) of the offshore eddy were lower than in its surroundings or in the "source water" on the shelf, suggesting that these chemical species were consumed as the eddy moved offshore. Estimates of apparent oxygen utilization rates at the OMZ of the eddy ranged from 0.29 to 44 nmol L⁻¹ d⁻¹ and the rate of N2O consumption was 3.92 nmol L⁻¹ d⁻¹. Our results show that mesoscale eddies in the ESP not only transport physical properties of the ESSW from the coast to the ocean interior, but also export and transform biogeochemical properties, creating suboxic environments in the 7. Oceanic eddies in synthetic aperture radar images Andrei Yu Ivanov; Anna I Ginzburg 2002-09-01 Continuous observations since 1991 by using synthetic aperture radar (SAR) on board the Almaz-1, ERS-1/2, JERS-1, and RADARSAT satellites support the well-known fact that oceanic eddies are distributed worldwide in the ocean. The paper is devoted to an evaluation of the potential of SAR for detection of eddies and vortical motions in the ocean. The classification of typical vortical features in the ocean detected in remote sensing images (visible, infrared, and SAR) is presented as well as available information on their spatial and temporal scales. Examples of the Almaz-1 and ERS-1/2 SAR images showing different eddy types, such as rings, spiral eddies of the open ocean, eddies behind islands and in bays, spin-off eddies and mushroom-like structures (vortex dipoles) are given and discussed. It is shown that a common feature for most of the eddies detected in the SAR images is a broad spectrum of spatial scales, spiral shape and shear nature.
It is concluded that the spaceborne SARs give valuable information on ocean eddies, especially in combination with visible and infrared satellite data. 8. Wind changes above warm Agulhas Current eddies Rouault, M 2016-01-01 Full Text Available speeds above the eddies at the instantaneous scale; 20 % of cases had incomplete data due to partial global coverage by the scatterometer for one path. For cases where the wind is stronger above warm eddies, there is no relationship between the increase... 9. Exploring Eddy-Covariance Measurements Using a Spatial Approach: The Eddy Matrix Engelmann, Christian; Bernhofer, Christian 2016-10-01 Taylor's frozen turbulence hypothesis states that "standard" eddy-covariance measurements of fluxes at a fixed location can replace a spatial ensemble of instantaneous values at multiple locations. For testing this hypothesis, a unique turbulence measurement set-up was used for two measurement campaigns over desert (Namibia) and grassland (Germany) in 2012. This "Eddy Matrix" combined nine ultrasonic anemometer-thermometers and 17 thermocouples in a 10 m × 10 m regular grid with 2.5-m grid distance. The instantaneous buoyancy flux derived from the spatial eddy covariance of the Eddy Matrix was highly variable in time (from -0.3 to 1 m K s⁻¹). However, the 10-min average reflected 83 % of the reference eddy-covariance flux with a good correlation. By introducing a combined eddy-covariance method (the spatial eddy covariance plus the additional flux of the temporal eddy covariance of the spatial mean values), the mean flux increases by 9 % relative to the eddy-covariance reference. Considering the typical underestimation of fluxes by the standard eddy-covariance method, this is seen as an improvement. Within the limits of the Eddy Matrix, Taylor's hypothesis is supported by the results. 10. Lattice Boltzmann Large Eddy Simulation Model of MHD Flint, Christopher 2016-01-01 The work of Ansumali et al. is extended to two-dimensional magnetohydrodynamic (MHD) turbulence, in which energy is cascaded to small spatial scales and thus requires subgrid modeling. Applying large eddy simulation (LES) modeling to the macroscopic fluid equations results in the need to apply ad-hoc closure schemes. LES is applied to a suitable mesoscopic lattice Boltzmann representation from which one can recover the MHD equations in the long wavelength, long time scale Chapman-Enskog limit (i.e., the Knudsen limit). Thus, first performing filter-width expansions on the lattice Boltzmann equations, followed by the standard small-Knudsen expansion on the filtered lattice Boltzmann system, results in a closed set of MHD turbulence equations, provided we enforce the physical constraint that the subgrid effects first enter the dynamics at the transport time scales. In particular, a multi-time relaxation collision operator is considered for the density distribution function and a single rel... 11. Modeling the mesoscale eddy field in the Gulf of Alaska Xiu, Peng; Chai, Fei; Xue, Huijie; Shi, Lei; Chao, Yi 2012-05-01 Mesoscale anticyclonic eddies are a common feature in the Gulf of Alaska (GOA). A three-dimensional circulation model is used to examine the general characteristics of eddies in the GOA during 1993-2009. Using an eddy detection algorithm, we tracked on average 6.5 eddies formed each year from the modeled results and 6.9 eddies from altimeter data.
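Automated eddy detection, used in the Gulf of Alaska entry above and in the Aleutian eddy study earlier in this list, typically flags eddy cores where rotation dominates strain, for example via the Okubo-Weiss parameter W = s_n² + s_s² − ω². The sketch below is a minimal illustration of that criterion; the −0.2σ_W threshold is a common heuristic and an assumption here, not the setting used in the cited studies.

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2 from 2-D velocities.
    u, v are indexed [y, x]; dy, dx are grid spacings in the same length unit."""
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s_n = dudx - dvdy            # normal strain
    s_s = dvdx + dudy            # shear strain
    omega = dvdx - dudy          # relative vorticity
    return s_n**2 + s_s**2 - omega**2

def eddy_core_mask(u, v, dx, dy, k=-0.2):
    """Boolean mask of rotation-dominated (eddy-core) points: W < k * std(W)."""
    W = okubo_weiss(u, v, dx, dy)
    return W < k * np.nanstd(W)
```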
Modeled eddy characteristics agree with the remote-sensing-derived eddy statistics in terms of eddy magnitude, propagation speed, and eddy-core diameter. From the model results, strong seasonal and interannual variations were found in both the number and areal coverage of GOA eddies. At the seasonal scale, more eddies are observed to form from March to May, while the eddy-covered area usually peaks around October. At the interannual scale, our results suggest the years with large eddy-covered area do not necessarily have more eddies generated. The long-term variation of eddy-covered area in the GOA is modulated by El Niño/Southern Oscillation (ENSO) events through altering the local wind stress. Model results indicate one typical Haida eddy could transport 37×10¹⁸ J of heat and 27 km³ of freshwater from the shelf to the central gulf. The equivalent fluxes caused by Haida eddies are comparable with the annual mean of net heat flux and freshwater flux from the atmosphere into the ocean in the Haida region, implying that mesoscale eddies are important sources contributing to the heat and freshwater budgets. 12. Intense submesoscale upwelling in anticyclonic eddies Brannigan, L. 2016-04-01 Observations from around the global ocean show that enhanced biological activity can be found in anticyclonic eddies. This may mean that upwelling of nutrient-rich water occurs within the eddy, but such upwelling is not captured by models that resolve mesoscale processes. High-resolution simulations presented here show intense submesoscale upwelling from the thermocline to the mixed layer in anticyclonic eddies. The properties of the upwelling are consistent with a process known as symmetric instability. A simple limiting-nutrient experiment shows that this upwelling can drive much higher biological activity in anticyclonic eddies when there is a high nutrient concentration in the thermocline. An estimate for the magnitude of upwelling associated with symmetric instability in anticyclonic eddies in the Sargasso Sea shows that it may be of comparable magnitude to other processes, though further work is required to understand the full implications for basin-scale nutrient budgets. 13. Studies of the eddy structure in the lower ionosphere by the API technique Bakhmetieva, Nataliya V.; Grigoriev, Gennadii I.; Lapin, Victor G. 2016-07-01 We present a new application of the API technique to the study of turbulent phenomena in the lower ionosphere. The main objective of these studies is experimental diagnostics of natural ordered eddy structures at the altitudes of the mesosphere and lower thermosphere, such as those that occur when internal gravity waves propagate in stratified flows in the atmospheric boundary layer. To this end, we considered the impact of eddy motions in the mesosphere and lower thermosphere on the relaxation time and the frequency of the signal scattered by periodic irregularities. The theoretical study of eddy structures is based on experiments using the SURA heating facility (56.14° N, 44.1° E). It is known that artificial periodic irregularities (APIs) are formed in the field of a powerful standing wave as a result of interference between the incident wave and the wave reflected from the ionosphere (Belikovich et al., Ionospheric Research by Means of Artificial Periodic Irregularities - 2002. Katlenburg-Lindau, Germany. Copernicus GmbH. 160 p.). The relaxation or decay of the periodic structure is specified by the ambipolar diffusion process.
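For the API entry above, the link between the scattered-signal decay and ambipolar diffusion can be made explicit with a standard order-of-magnitude sketch: a sinusoidal irregularity of wavenumber K decays diffusively as exp(−K²Dt), and for a standing radio wave the grating period is about half the radio wavelength. The ambipolar diffusion coefficient and the 5 MHz example frequency below are assumed illustrative values, not parameters taken from the paper.

```python
import math

def api_relaxation_time(radio_wavelength_m, ambipolar_diffusion_m2_s):
    """Diffusion-limited relaxation time of artificial periodic irregularities:
    tau = 1 / (K^2 * D), with K = 4*pi/lambda for a standing-wave grating of
    period lambda/2 (refractive index ~ 1 assumed)."""
    K = 4.0 * math.pi / radio_wavelength_m
    return 1.0 / (K**2 * ambipolar_diffusion_m2_s)

# Example: a 5 MHz wave (lambda = 60 m) with an assumed D = 10 m^2 s^-1
# gives tau of roughly 2 s.
print(api_relaxation_time(60.0, 10.0))
```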
The atmospheric turbulence causes reduction of the amplitude and decay time of the API scattered signal in comparison with the diffusion time. We found a relation between the eddy period and the characteristic decay time of scattered signal, for which the synchronism of the waves scattered by a periodic structure is broken. Besides, it is shown, when the eddy structure moves by a horizontal wind exists at these heights, the frequency of the radio wave scattered by API structure will periodically increase and decrease compared with the frequency of the radiated diagnostic (probing) radio-wave. The work was supported by the Russian Science Foundation under grant No 14-12-00556. Hannula, S.P.; Stone, D.; Li, C.Y. (Cornell Univ., Ithaca, NY (USA)) Most of the models that are used to describe the nonelastic behavior of materials utilize stress-strain rate relations which can be obtained by a load relaxation test. The conventional load relaxation test, however, cannot be performed if the volume of the material to be tested is very small. For such applications the indentation type of test offers an attractive means of obtaining data necessary for materials characterization. In this work the feasibility of the indentation load relaxation test is studied. Experimental techniques are described together with results on Al, Cu and 316 SS. These results are compared to those of conventional uniaxial load relaxation tests, and the conversion of the load-indentation rate data into the stress-strain rate data is discussed. 15. Relaxation techniques for stress ... problems such as high blood pressure, stomachaches, headaches, anxiety, and depression. Using relaxation techniques can help you feel calm. These exercises can also help you manage stress and ease the effects of stress on your body. 16. Perturbations and quantum relaxation 2016-01-01 We investigate whether small perturbations can cause relaxation to quantum equilibrium over very long timescales. We consider in particular a two-dimensional harmonic oscillator, which can serve as a model of a field mode on expanding space. We assume an initial wave function with small perturbations to the ground state. We present evidence that the trajectories are highly confined so as to preclude relaxation to equilibrium even over very long timescales. Cosmological implications are briefly discussed. 17. Large Eddy Simulations in Astrophysics Schmidt, Wolfram 2014-01-01 In this review, the methodology of large eddy simulations (LES) is introduced and applications in astrophysics are discussed. As theoretical framework, the scale decomposition of the dynamical equations for neutral fluids by means of spatial filtering is explained. For cosmological applications, the filtered equations in comoving coordinates are also presented. To obtain a closed set of equations that can be evolved in LES, several subgrid scale models for the interactions between numerically resolved and unresolved scales are discussed, in particular the subgrid scale turbulence energy equation model. It is then shown how model coefficients can be calculated, either by dynamical procedures or, a priori, from high-resolution data. For astrophysical applications, adaptive mesh refinement is often indispensable. It is shown that the subgrid scale turbulence energy model allows for a particularly elegant and physically well motivated way of preserving momentum and energy conservation in AMR simulations. Moreover... 18. 
Conformable eddy current array delivery Summan, Rahul; Pierce, Gareth; Macleod, Charles; Mineo, Carmelo; Riise, Jonathan; Morozov, Maxim; Dobie, Gordon; Bolton, Gary; Raude, Angélique; Dalpé, Colombe; Braumann, Johannes 2016-02-01 The external surface of stainless steel containers used for the interim storage of nuclear material may be subject to Atmospherically Induced Stress Corrosion Cracking (AISCC). The inspection of such containers poses a significant challenge due to the large quantities involved; therefore, automating the inspection process is of considerable interest. This paper reports upon a proof-of-concept project concerning the automated NDT of a set of test containers containing artificially generated AISCCs. An eddy current array probe with a conformable padded surface from Eddyfi was used as the NDT sensor and end effector on a KUKA KR5 arc HW robot. A kinematically valid cylindrical raster scan path was designed using the KUKA|PRC path planning software. Custom software was then written to interface measurement acquisition from the Eddyfi hardware with the motion control of the robot. Preliminary results and analysis are presented from scanning two canisters. 19. EDDY CURRENT CHARACTERIZATION OF NANOMATERIALS A YOUNES 2015-06-01 Full Text Available NDT magnetic measurements, such as impedance in eddy currents and the coercive and residual fields of the hysteresis loop, are used to study the different stages of mechanical alloying in the Fe–Co system. In this paper, we changed the electromagnetic properties of Fe-Co by modifying metallurgical parameters such as grain size. For this we used a planetary ball mill and milled the FeCo alloy for different milling times until a nanostructure was obtained; a lamellar structure with some small particles embedded in it was observed during the first stage of mechanical alloying. XRD patterns show, after 10 h of milling, the formation of a disordered solid solution having a body-centered cubic (bcc) structure. After 40 h of milling, morphological studies indicated that the average crystallite size is around 15 nm. 20. A western boundary current eddy characterisation study Ribbe, Joachim; Brieva, Daniel 2016-12-01 The analysis of an eddy census for the East Australian Current (EAC) region yielded a total of 497 individual short-lived (7-28 days) cyclonic and anticyclonic eddies for the period 1993 to 2015. This was an average of about 23 eddies per year. 41% of the tracked individual cyclonic and anticyclonic eddies were detected off southeast Queensland between about 25 °S and 29 °S. This is the region where the flow of the EAC intensifies, forming a swift western boundary current that impinges near Fraser Island on the continental shelf. This zone was also identified as having a maximum in detected short-lived cyclonic eddies. A total of 94 (43%) individual cyclonic eddies, or about 4-5 per year, were tracked in this region. The census found that these eddies potentially displaced entrained water by about 115 km with an average displacement speed of about 4 km per day. Cyclonic eddies were likely to contribute to establishing an on-shelf longshore northerly flow forming the western branch of the Fraser Island Gyre and possibly represented an important cross-shelf transport process in the life cycle of temperate fish species of the EAC domain.
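The EAC census above reports displacement of entrained water by about 115 km at roughly 4 km per day, numbers that come directly from eddy-track geometry. A minimal sketch of that bookkeeping for one tracked eddy is shown below (great-circle distances on a spherical Earth; daily track positions are assumed).

```python
import numpy as np

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km between points given in decimal degrees."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(np.subtract(lon2, lon1))
    a = np.sin(dlat / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2)**2
    return 2.0 * R * np.arcsin(np.sqrt(a))

def track_displacement_and_speed(days, lons, lats):
    """Net displacement [km] and mean along-track speed [km/day] of an eddy track."""
    lons, lats, days = map(np.asarray, (lons, lats, days))
    along_track = haversine_km(lons[:-1], lats[:-1], lons[1:], lats[1:]).sum()
    net = haversine_km(lons[0], lats[0], lons[-1], lats[-1])
    return net, along_track / float(days[-1] - days[0])
```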
In-situ observations near western boundary currents previously documented the entrainment, off-shelf transport and export of nearshore water, nutrients, sediments, fish larvae and the renewal of inner shelf water due to short-lived eddies. This study found that these cyclonic eddies potentially play an important role in off-shelf transport off the central east Australian coast. 1. Anatomy of a subtropical intrathermocline eddy Barceló-Llull, Bàrbara; Sangrà, Pablo; Pallàs-Sanz, Enric; Barton, Eric D.; Estrada-Allis, Sheila N.; Martínez-Marrero, Antonio; Aguiar-González, Borja; Grisolía, Diana; Gordo, Carmen; Rodríguez-Santana, Ángel; Marrero-Díaz, Ángeles; Arístegui, Javier 2017-06-01 An interdisciplinary survey of a subtropical intrathermocline eddy was conducted within the Canary Eddy Corridor in September 2014. The anatomy of the eddy is investigated using near-submesoscale fine-resolution two-dimensional data and coarser-resolution three-dimensional data. The eddy was four months old, with a vertical extension of 500 m and a 46 km radius. It may be viewed as a propagating negative anomaly of potential vorticity (PV), 95% below ambient PV. We observed two cores of low PV, one in the upper layers centered at 85 m, and another broader anomaly located between 175 m and the maximum sampled depth in the three-dimensional dataset (325 m). The upper core was where the maximum absolute values of normalized relative vorticity (or Rossby number), |Ro| = 0.6, and azimuthal velocity, U = 0.5 m s⁻¹, were reached and was defined as the eddy dynamical core. The typical biconvex isopleth shape for intrathermocline eddies induces a decrease of static stability, which causes the low PV of the upper core. The deeper low-PV core was related to the occurrence of a pycnostad layer of subtropical mode water that was embedded within the eddy. The eddy core, of 30 km radius, was in near solid-body rotation with a period of 4 days. It was encircled by a thin outer ring that was rotating more slowly. The kinetic energy (KE) content exceeded that of available potential energy (APE), KE/APE = 1.58; this was associated with a low aspect ratio and a relatively intense rate of spin as indicated by the relatively high value of Ro. Inferred available heat and salt content anomalies were AHA = 2.9×10¹⁸ J and ASA = 14.3×10¹⁰ kg, respectively. The eddy AHA and ASA contents per unit volume largely exceed those corresponding to Pacific Ocean intrathermocline eddies. This suggests that intrathermocline eddies may play a significant role in the zonal conduit of heat and salt along the Canary Eddy Corridor. 2. Modal Wave Number Spectrum for Mesoscale Eddies KANG Ying; PENG Linhui 2003-01-01 The variations of ocean environmental parameters invariably result in variations of local modal wave numbers of a sound pressure field. The asymptotic Hankel transform with a short sliding window is applied to the complex sound pressure field in the water containing a mesoscale eddy to examine the variation of local modal wave numbers in such a range-dependent environment. The numerical simulation results show that modal wave number spectra obtained by this method can reflect the location and strength of a mesoscale eddy, therefore it can be used to monitor the strength and spatial scale of ocean mesoscale eddies. 3. Renormalization group formulation of large eddy simulation Yakhot, V.; Orszag, S. A.
1985-01-01 Renormalization group (RNG) methods are applied to eliminate small scales and construct a subgrid scale (SSM) transport eddy model for transition phenomena. The RNG and SSM procedures are shown to provide a more accurate description of viscosity near the wall than does the Smagorinsky approach and also generate farfield turbulence viscosity values which agree well with those of previous researchers. The elimination of small scales causes the simultaneous appearance of a random force and eddy viscosity. The RNG method permits taking these into account, along with other phenomena (such as rotation) for large-eddy simulations. 4. Eddy Current Testing, RQA/M1-5330.17. National Aeronautics and Space Administration, Huntsville, AL. George C. Marshall Space Flight Center. As one in the series of classroom training handbooks, prepared by the U.S. space program, instructional material is presented in this volume concerning familiarization and orientation on eddy current testing. The subject is presented under the following headings: Introduction, Eddy Current Principles, Eddy Current Equipment, Eddy Current Methods,… 5. Molecular Relaxation in Liquids Bagchi, Biman 2012-01-01 This book brings together many different relaxation phenomena in liquids under a common umbrella and provides a unified view of apparently diverse phenomena. It aligns recent experimental results obtained with modern techniques with recent theoretical developments. Such close interaction between experiment and theory in this area goes back to the works of Einstein, Smoluchowski, Kramers and de Gennes. Development of ultrafast laser spectroscopy recently allowed study of various relaxation processes directly in the time domain, with time scales going down to picosecond (ps) and femtosecond (fs) 6. Large Eddy Simulations in Astrophysics Schmidt, Wolfram 2015-12-01 In this review, the methodology of large eddy simulations (LES) is introduced and applications in astrophysics are discussed. As theoretical framework, the scale decomposition of the dynamical equations for neutral fluids by means of spatial filtering is explained. For cosmological applications, the filtered equations in comoving coordinates are also presented. To obtain a closed set of equations that can be evolved in LES, several subgrid-scale models for the interactions between numerically resolved and unresolved scales are discussed, in particular the subgrid-scale turbulence energy equation model. It is then shown how model coefficients can be calculated, either by dynamic procedures or, a priori, from high-resolution data. For astrophysical applications, adaptive mesh refinement is often indispensable. It is shown that the subgrid-scale turbulence energy model allows for a particularly elegant and physically well-motivated way of preserving momentum and energy conservation in adaptive mesh refinement (AMR) simulations. Moreover, the notion of shear-improved models for inhomogeneous and non-stationary turbulence is introduced. Finally, applications of LES to turbulent combustion in thermonuclear supernovae, star formation and feedback in galaxies, and cosmological structure formation are reviewed. 7. Fatty acid profiles of phyllosoma larvae of western rock lobster (Panulirus cygnus) in cyclonic and anticyclonic eddies of the Leeuwin Current off Western Australia Wang, M.; O'Rorke, R.; Waite, A. M.; Beckley, L. E.; Thompson, P.; Jeffs, A. G.
2014-03-01 The recent dramatic decline in settlement in the population of the spiny lobster, Panulirus cygnus, may be due to changes in the oceanographic processes that operate offshore of Western Australia. It has been suggested that this decline could be related to poor nutritional condition of the post-larvae, especially lipid which is accumulated in large quantities during the preceding extensive pelagic larval stage. The current study focused on investigations into the lipid content and fatty acid (FA) profiles of lobster phyllosoma larvae from three mid to late stages of larval development (stages VI, VII, VIII) sampled from two cyclonic and two anticyclonic eddies of the Leeuwin Current off Western Australia. The results showed significant accumulation of lipid and energy storage FAs with larval development regardless of location of capture, however, larvae from cyclonic eddies had more lipid and FAs associated with energy storage than larvae from anticyclonic eddies. FA food chain markers from the larvae indicated significant differences in the food webs operating in the two types of eddy, with a higher level of FA markers for production from flagellates and a lower level from copepod grazing in cyclonic versus anticyclonic eddies. The results indicate that the microbial food web operating in cyclonic eddies provides better feeding conditions for lobster larvae despite anticyclonic eddies being generally more productive and containing greater abundances of zooplankton as potential prey for lobster larvae. Gelatinous zooplankton, such as siphonophores, may play an important role in cyclonic eddies by accumulating dispersed microbial nutrients and making them available as larger prey for phyllosoma. The markedly superior nutritional condition of lobster larvae feeding in the microbial food web found in cyclonic eddies, could greatly influence their subsequent settlement and recruitment to the coastal fishery. 8. Work done by atmospheric winds on mesoscale ocean eddies Xu, Chi; Zhai, Xiaoming; Shang, Xiao-Dong 2016-12-01 Mesoscale eddies are ubiquitous in the ocean and dominate the ocean's kinetic energy. However, physical processes influencing ocean eddy energy remain poorly understood. Mesoscale ocean eddy-wind interaction potentially provides an energy flux into or out of the eddy field, but its effect on ocean eddies has not yet been determined. Here we examine work done by atmospheric winds on more than 1,200,000 mesoscale eddies identified from satellite altimetry data and show that atmospheric winds significantly damp mesoscale ocean eddies, particularly in the energetic western boundary current regions and the Southern Ocean. Furthermore, the large-scale wind stress curl is found to on average systematically inject kinetic energy into anticyclonic (cyclonic) eddies in the subtropical (subpolar) gyres while mechanically damps anticyclonic (cyclonic) eddies in the subpolar (subtropical) gyres. 9. Eddies in the Red Sea: A statistical and dynamical study Zhan, Peng 2014-06-01 Sea level anomaly (SLA) data spanning 1992–2012 were analyzed to study the statistical properties of eddies in the Red Sea. An algorithm that identifies winding angles was employed to detect 4998 eddies propagating along 938 unique eddy tracks. Statistics suggest that eddies are generated across the entire Red Sea but that they are prevalent in certain regions. A high number of eddies is found in the central basin between 18°N and 24°N. 
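The wind-work entry above (work done by atmospheric winds on mesoscale ocean eddies) hinges on computing τ·u_o with a stress that depends on the wind relative to the moving sea surface, which is what makes the winds a net damping term for eddy currents. A minimal bulk-formula sketch is given below; the air density and drag coefficient are assumed constants, and no stability dependence or altimeter-specific processing from the cited study is included.

```python
import numpy as np

RHO_AIR, CD = 1.2, 1.3e-3   # assumed air density [kg m^-3] and drag coefficient

def wind_work_on_currents(ua, va, uo, vo):
    """Wind work on surface currents P = tau . u_o [W m^-2], with the bulk
    stress computed from the wind relative to the moving ocean surface
    (the 'relative wind' effect that damps eddy-scale currents)."""
    dur, dvr = ua - uo, va - vo            # wind relative to the current
    speed = np.hypot(dur, dvr)
    taux = RHO_AIR * CD * speed * dur
    tauy = RHO_AIR * CD * speed * dvr
    return taux * uo + tauy * vo
```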
More than 87% of the detected eddies have a radius ranging from 50 to 135 km. Both the intensity and relative vorticity scale of these eddies decrease as the eddy radii increase. The averaged eddy lifespan is approximately 6 weeks. AEs and cyclonic eddies (CEs) have different deformation features, and those with stronger intensities are less deformed and more circular. Analysis of long-lived eddies suggests that they are likely to appear in the central basin with AEs tending to move northward. In addition, their eddy kinetic energy (EKE) increases gradually throughout their lifespans. The annual cycles of CEs and AEs differ, although both exhibit significant seasonal cycles of intensity with the winter and summer peaks appearing in February and August, respectively. The seasonal cycle of EKE is negatively correlated with stratification but positively correlated with vertical shear of horizontal velocity and eddy growth rate, suggesting that the generation of baroclinic instability is responsible for the activities of eddies in the Red Sea. 10. Temporal Large-Eddy Simulation Pruett, C. D.; Thomas, B. C. 2004-01-01 11. Transformed eddy-PV flux and positive synoptic eddy feedback onto low-frequency flow Ren, Hong-Li [University of Hawaii, School of Ocean and Earth Sciences and Technology, Honolulu, HI (United States); China Meteorological Administration, Laboratory for Climate Studies, National Climate Center, Beijing (China); Jin, Fei-Fei [University of Hawaii, School of Ocean and Earth Sciences and Technology, Honolulu, HI (United States); Kug, Jong-Seong [Korea Ocean Research and Development Institute, Ansan (Korea, Republic of); Gao, Li [University of Hawaii, School of Ocean and Earth Sciences and Technology, Honolulu, HI (United States); China Meteorological Administration, Numerical Prediction Center, National Meteorological Center, Beijing (China) 2011-06-15 Interaction between synoptic eddy and low-frequency flow (SELF) has been the subject of many studies. In this study, we further examine the interaction by introducing a transformed eddy-potential-vorticity (TEPV) flux that is obtained from eddy-potential-vorticity flux through a quasi-geostrophic potential-vorticity inversion. The main advantage of using the TEPV flux is that it combines the effects of the eddy-vorticity and heat fluxes into the net acceleration of the low-frequency flow in such a way that the TEPV flux tends to be analogous to the eddy-vorticity fluxes in the barotropic framework. We show that the anomalous TEPV fluxes are preferentially directed to the left-hand side of the low-frequency flow in all vertical levels throughout the troposphere for monthly flow anomalies and for climate modes such as the Arctic Oscillation (AO). Furthermore, this left-hand preference of the TEPV flux direction is a convenient three-dimensional indicator of the positive reinforcement of the low-frequency flow by net eddy-induced acceleration. By projecting the eddy-induced net accelerations onto the low-frequency flow anomalies, we estimate the eddy-induced growth rates for the low frequency flow anomalies. This positive eddy-induced growth rate is larger (smaller) in the lower (upper) troposphere. The stronger positive eddy feedback in the lower troposphere may play an important role in maintaining an equivalent barotropic structure of the low-frequency atmospheric flow by balancing some of the strong damping effect of surface friction. (orig.) 12. 
Transformed eddy-PV flux and positive synoptic eddy feedback onto low-frequency flow Ren, Hong-Li; Jin, Fei-Fei; Kug, Jong-Seong; Gao, Li 2011-06-01 Interaction between synoptic eddy and low-frequency flow (SELF) has been the subject of many studies. In this study, we further examine the interaction by introducing a transformed eddy-potential-vorticity (TEPV) flux that is obtained from the eddy-potential-vorticity flux through a quasi-geostrophic potential-vorticity inversion. The main advantage of using the TEPV flux is that it combines the effects of the eddy-vorticity and heat fluxes into the net acceleration of the low-frequency flow in such a way that the TEPV flux tends to be analogous to the eddy-vorticity fluxes in the barotropic framework. We show that the anomalous TEPV fluxes are preferentially directed to the left-hand side of the low-frequency flow in all vertical levels throughout the troposphere for monthly flow anomalies and for climate modes such as the Arctic Oscillation (AO). Furthermore, this left-hand preference of the TEPV flux direction is a convenient three-dimensional indicator of the positive reinforcement of the low-frequency flow by net eddy-induced acceleration. By projecting the eddy-induced net accelerations onto the low-frequency flow anomalies, we estimate the eddy-induced growth rates for the low-frequency flow anomalies. This positive eddy-induced growth rate is larger (smaller) in the lower (upper) troposphere. The stronger positive eddy feedback in the lower troposphere may play an important role in maintaining an equivalent barotropic structure of the low-frequency atmospheric flow by balancing some of the strong damping effect of surface friction. 13. Role of mesoscale eddies in transport of Fukushima-derived cesium isotopes in the ocean Budyansky, M. V.; Goryachev, V. A.; Kaplunenko, D. D.; Lobanov, V. B.; Prants, S. V.; Sergeev, A. F.; Shlyk, N. V.; Uleysky, M. Yu. 2015-02-01 We present the results of in situ measurements of ¹³⁴Cs and ¹³⁷Cs released from the Fukushima Nuclear Power Plant (FNPP) collected at the surface and at different depths in the western North Pacific in June and July 2012. It was found that 15 months after the incident concentrations of radiocesium in the Japan and Okhotsk seas were at background or slightly increased levels, while they had increased values in the subarctic front area east of Japan. The highest concentrations of ¹³⁴Cs and ¹³⁷Cs, up to 13.5±0.9 and 22.7±1.5 Bq m⁻³, have been found to exceed ten times the background levels before the accident. Maximal content of radiocesium was observed within subsurface and intermediate water layers inside the cores of anticyclonic eddies (100-500 m). Even slightly increased content of radiocesium was found in some eddies at a depth of 1000 m. It is expected that convergence and subduction of surface water inside eddies are the main mechanisms of downward transport of radionuclides. In situ observations are compared with the results of simulated advection of these radioisotopes by the AVISO altimetric velocity field. Different Lagrangian diagnostics are used to reconstruct the history and origin of synthetic tracers imitating measured seawater samples collected in each of those eddies. The results of observations are consistent with the simulated results. It is shown that the tracers, simulating water samples with increased radioactivity to be measured in the cruise, really visited the areas with presumably high level of contamination.
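The Fukushima entry above reconstructs tracer histories by advecting synthetic particles in the AVISO-derived velocity field. The core of any such Lagrangian diagnostic is a time integrator for particle positions; a minimal fourth-order Runge-Kutta sketch is shown below. The velocity callback, the units, and the option of a negative time step for backward (origin-tracing) integration are generic assumptions, not the specific machinery of the cited study.

```python
import numpy as np

def advect_rk4(x, y, velocity, t0, t1, dt):
    """Advect particle positions (x, y) with classical 4th-order Runge-Kutta.
    velocity(t, x, y) must return (u, v) in position units per time unit;
    dt may be negative to trace particles backward in time."""
    n_steps = int(round((t1 - t0) / dt))
    t = t0
    for _ in range(n_steps):
        k1u, k1v = velocity(t, x, y)
        k2u, k2v = velocity(t + dt / 2, x + dt / 2 * k1u, y + dt / 2 * k1v)
        k3u, k3v = velocity(t + dt / 2, x + dt / 2 * k2u, y + dt / 2 * k2v)
        k4u, k4v = velocity(t + dt, x + dt * k3u, y + dt * k3v)
        x = x + dt / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        y = y + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return x, y
```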
Fast water advection between anticyclonic eddies and convergence of surface water inside eddies make them responsible for the spreading, accumulation and downward transport of cesium-rich water to intermediate depths in the frontal zone. 14. Hair Dye and Hair Relaxers ... products. If you have a bad reaction to hair dyes and relaxers, you should: Stop using the ... 15. An eddy tracking algorithm based on dynamical systems theory Conti, Daniel; Orfila, Alejandro; Mason, Evan; Sayol, Juan Manuel; Simarro, Gonzalo; Balle, Salvador 2016-11-01 This work introduces a new method for ocean eddy detection that applies concepts from stationary dynamical systems theory. The method is composed of three steps: first, the centers of eddies are obtained from fixed points and their linear stability analysis; second, the size of the eddies is estimated from the vorticity between the eddy center and its neighboring fixed points; and, third, a tracking algorithm connects the different time frames. The tracking algorithm has been designed to avoid mismatching connections between eddies at different frames. Eddies are detected for the period between 1992 and 2012 using geostrophic velocities derived from AVISO altimetry, and a new database is provided for the global ocean. 16. Vertical eddy heat fluxes from model simulations Stone, Peter H.; Yao, Mao-Sung 1991-01-01 Vertical eddy fluxes of heat are calculated from simulations with a variety of climate models, ranging from three-dimensional GCMs to a one-dimensional radiative-convective model. The models' total eddy flux in the lower troposphere is found to agree well with Hantel's analysis from observations, but in the mid and upper troposphere the models' values are systematically 30 percent to 50 percent smaller than Hantel's. The models nevertheless give very good results for the global temperature profile, and the reason for the discrepancy is unclear. The model results show that the manner in which the vertical eddy flux is carried is very sensitive to the parameterization of moist convection. When a moist adiabatic adjustment scheme with a critical value for the relative humidity of 100 percent is used, the vertical transports by large-scale eddies and small-scale convection on a global basis are equal; but when a penetrative convection scheme is used, the large-scale flux on a global basis is only about one-fifth to one-fourth the small-scale flux. Comparison of the model results with observations indicates that the results with the latter scheme are more realistic. However, even in this case, in mid and high latitudes the large- and small-scale vertical eddy fluxes of heat are comparable in magnitude above the planetary boundary layer. 17. Modelling of the North Atlantic eddy characteristics Ushakov, Konstantin; Ibrayev, Rashit 2017-04-01 We investigate eddy characteristics of the Atlantic basin circulation and their impact on the ocean heat transport. A 15-year-long numerical experiment is performed with the global 3-dimensional z-coordinate INMIO ocean general circulation model of 0.1 deg., 49 levels resolution in conditions of the CORE-II protocol. The model is tuned to maximal intensity of eddy production by using only biharmonic filters instead of lateral viscous and diffusive terms in the model equations.
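In the dynamical-systems eddy tracker described above, eddy centres are identified as fixed points of the geostrophic velocity field whose linearization indicates rotation rather than straining. The sketch below illustrates that classification step only, from a 2×2 velocity-gradient Jacobian; locating the fixed points themselves and the subsequent size and tracking steps of the cited method are not reproduced.

```python
import numpy as np

def classify_stagnation_point(J):
    """Classify a stagnation point of a 2-D velocity field from its Jacobian
    J = [[du/dx, du/dy], [dv/dx, dv/dy]] evaluated at the point.
    Complex eigenvalues -> rotation-dominated (candidate eddy centre);
    real eigenvalues of opposite sign -> hyperbolic saddle between eddies."""
    eig = np.linalg.eigvals(np.asarray(J, dtype=float))
    if np.iscomplexobj(eig):
        return "elliptic (candidate eddy centre)"
    return "hyperbolic (saddle)" if eig[0] * eig[1] < 0 else "node"

# Example: pure solid-body rotation has J = [[0, -w], [w, 0]] -> elliptic.
print(classify_stagnation_point([[0.0, -1.0], [1.0, 0.0]]))
```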
17. Modelling of the North Atlantic eddy characteristics Ushakov, Konstantin; Ibrayev, Rashit 2017-04-01 We investigate eddy characteristics of the Atlantic basin circulation and their impact on the ocean heat transport. A 15-year-long numerical experiment is performed with the global three-dimensional z-coordinate INMIO ocean general circulation model at 0.1 deg. and 49-level resolution under the CORE-II protocol. The model is tuned for maximal intensity of eddy production by using only biharmonic filters instead of lateral viscous and diffusive terms in the model equations. Comparison with viscous and coarse-resolution simulations shows an increase in both the fraction and the absolute value of explicitly resolved heat transfer. Vertical turbulent mixing is parameterized by the Munk-Anderson scheme including convective adjustment. The sea ice is described by a simple thermodynamic submodel. The eddying velocity and temperature field components are defined as anomalies relative to a 3-month sliding mean. The regional distributions of hydrological parameters, eddy kinetic energy, heat convergence, meridional heat transport (MHT) and the Atlantic meridional overturning circulation (AMOC) streamfunction, and their temporal variability, are analyzed. In some parts of the basin the simulated eddy heat transport is opposite to the mean-flow transport and may change direction with depth. The MHT intensity is slightly below observationally based assessments, with a notable influence of the East Greenland Current simulation bias. The work is supported by the Russian Science Foundation (project N 14-27-00126) and performed in the Institute of Numerical Mathematics, Russian Academy of Sciences. 18. Kinetic Activation-Relaxation Technique Béland, Laurent Karim; El-Mellouhi, Fedwa; Joly, Jean-François; Mousseau, Normand 2011-01-01 We present a detailed description of the kinetic Activation-Relaxation Technique (k-ART), an off-lattice, self-learning kinetic Monte Carlo algorithm with on-the-fly event search. Combining a topological classification of local environments and event generation with ART nouveau, an efficient unbiased sampling method for finding transition states, k-ART can be applied to complex materials with atoms in off-lattice positions or with elastic deformations that cannot be handled with standard KMC approaches. In addition to presenting the various elements of the algorithm, we demonstrate the general character of k-ART by applying the algorithm to three challenging systems: self-defect annihilation in c-Si, self-interstitial diffusion in Fe, and structural relaxation in amorphous silicon. 19. Nonlinear fractional relaxation A Tofighi 2012-04-01 We define a nonlinear model for fractional relaxation phenomena and analyse it with an expansion method. By studying the fundamental solutions of this model we find that when the governing parameter tends to 0 the model exhibits a fast decay rate, and when it tends to ∞ the model exhibits a power-law decay. By analysing the frequency response we find a logarithmic enhancement of the relative ratio of susceptibility. 20. Statistics of avalanches with relaxation and Barkhausen noise: A solvable model Dobrinevski, Alexander; Le Doussal, Pierre; Wiese, Kay Jörg 2013-09-01 We study a generalization of the Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model of a particle in a Brownian force landscape, including retardation effects. We show that under monotonous driving the particle moves forward at all times, as it does in the absence of retardation (Middleton's theorem). This remarkable property allows us to develop an analytical treatment. The model with an exponentially decaying memory kernel is realized in Barkhausen experiments with eddy-current relaxation and has previously been shown numerically to account for the experimentally observed asymmetry of Barkhausen pulse shapes. We elucidate another qualitatively new feature: the breakup of each avalanche of the standard ABBM model into a cluster of subavalanches, sharply delimited for slow relaxation under quasistatic driving. These conditions are typical for earthquake dynamics.
With relaxation and aftershock clustering, the present model includes important ingredients for an effective description of earthquakes. We analyze quantitatively the limits of slow and fast relaxation for stationary driving with velocity v>0. The v-dependent power-law exponent for small velocities, and the critical driving velocity at which the particle velocity never vanishes, are modified. We also analyze nonstationary avalanches following a step in the driving magnetic field. Analytically, we obtain the mean avalanche shape at fixed size, the duration distribution of the first subavalanche, and the time dependence of the mean velocity. We propose to study these observables in experiments, allowing a direct measurement of the shape of the memory kernel and tracing eddy-current relaxation in Barkhausen noise. 1. Energetics of lateral eddy diffusion/advection: Part I. Thermodynamics and energetics of vertical eddy diffusion HUANG Rui Xin 2014-01-01 Two important nonlinear properties of seawater thermodynamics linked to changes of water density, cabbeling and elasticity (compressibility), are discussed. Eddy diffusion and advection lead to changes in density; as a result, the gravitational potential energy of the system is changed. Therefore, cabbeling and elasticity play key roles in the energetics of lateral eddy diffusion and advection. Vertical eddy diffusion is one of the key elements in the mechanical energy balance of the global oceans. Vertical eddy diffusion can be conceptually separated into two steps: stirring and subscale diffusion. Vertical eddy stirring pushes cold/dense water upward and warm/light water downward; thus, gravitational potential energy is increased. During the second step, water masses from different places mix through subscale diffusion, and water density is increased due to cabbeling. Using WOA01 climatology and assuming the vertical eddy diffusivity is equal to a constant value of 2×10³ Pa²/s, the total increase in gravitational potential energy due to vertical stirring in the world oceans is estimated at 263 GW. Cabbeling associated with vertical subscale diffusion is a sink of gravitational potential energy, and the total energy lost is estimated at 73 GW. Therefore, the net source of gravitational potential energy due to vertical eddy diffusion for the world oceans is estimated at 189 GW. 2. Load Relaxation of Olivine Single Crystals Cooper, R. F.; Stone, D. S.; Plookphol, T. 2016-12-01 Single crystals of ferromagnesian olivine (San Carlos, AZ, peridot; Fo90-92) have been deformed in both uniaxial creep and load relaxation under conditions of ambient pressure, T = 1500 °C and pO2 = 10⁻¹⁰ atm; creep stresses were in the range 40 ≤ σ1 (MPa) ≤ 220. The crystals were oriented such that the applied stress was parallel to [011]c, which promotes single slip on the slowest slip system in olivine, (010)[001]. The creep rates at steady state match well the results of earlier investigators, as does the stress sensitivity (a power-law exponent of n = 3.6). Dislocation microstructures, including the spatial distribution of low-angle (subgrain) boundaries, additionally confirm previous investigations. Inverted primary creep (an accelerating strain rate upon an increase in stress) was observed. Load relaxation, however, produced a singular response—a single hardness curve—regardless of the magnitude of creep stress or total accumulated strain preceding relaxation. The log-stress vs.
log-strain rate data from load-relaxation and creep experiments overlap to within experimental error. The load-relaxation behavior is distinctly different from that described for other crystalline solids, where the flow stress is affected strongly by work hardening such that a family of distinct hardness curves is generated, related by a scaling function. The response of olivine for the conditions studied thus indicates flow that is rate-limited by dislocation glide, reflecting specifically a high intrinsic lattice resistance (Peierls stress). 3. Eddy parameterization challenge suite I: Eady spindown Bachman, S.; Fox-Kemper, B. 2013-04-01 The first set of results in a suite of eddy-resolving Boussinesq, hydrostatic simulations is presented. Each set member consists of an initially linear stratification and shear as in the Eady problem, but this profile occupies only a limited region of a channel and is allowed to spin down via baroclinic instability. The diagnostic focus is on the spatial structure and scaling of the eddy transport tensor, which is the array of coefficients in a linear flux-gradient relationship. The advective (antisymmetric) and diffusive (symmetric) components of the tensor are diagnosed using passive tracers, and the resulting diagnosed tensor reproduces the horizontal transport of the active tracer (buoyancy) to within ±7% and the vertical transport to within ±12%. The derived scalings are shown to be close in form to the standard Gent-McWilliams (antisymmetric) and Redi diffusivity (symmetric) tensors, with a magnitude that varies in space (concentrated in the horizontal and vertical near the center of the frontal shear) and in time as the eddies energize. The Gent-McWilliams eddy coefficient is equal to the Redi isopycnal diffusivity to within ±6%, even as these coefficients vary with depth. The scaling for the magnitude of the simulation parameters is determined empirically to within ±28%. To achieve this accuracy, the eddy velocities are diagnosed directly and used in the tensor scalings, rather than assuming a correlation between the eddy velocity and the mean flow velocity, for which ±97% is the best accuracy achievable. Plans for the next set of models in the challenge suite are described. 4. Eddy Correlation Flux Measurement System (ECOR) Handbook Cook, DR 2011-01-31 The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat, and carbon dioxide (CO2) (and methane at one Southern Great Plains extended facility (SGP EF) and at the North Slope of Alaska Central Facility (NSA CF)). The fluxes are obtained with the eddy covariance technique, which involves correlating the vertical wind component with the horizontal wind component, the air temperature, the water vapor density, and the CO2 concentration. 5. About Eddy Currents in Induction Melting Processes Gafiţa Nicolae-Bogdan 2008-05-01 Full Text Available In this paper we present a method for computing the eddy currents in induction melting processes for non-ferrous alloys. We consider the situation when only the crucible is moving inside the coils. This makes differential computation methods hard to apply, because it is necessary to generate a new mesh and a new system matrix for every new position of the crucible relative to the coils. Integral methods cancel this drawback because the mesh is generated only for the domains with eddy currents. For integral methods, the mesh and the inductance matrix remain unchanged during the movement of the crucible; only the free terms of the equation system change.
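The half-hour fluxes described in entry 4 above reduce, for each averaging interval, to covariances between the vertical wind and the transported quantities. The sketch below shows that generic Reynolds-covariance calculation with assumed constants and variable names; it is not the ECOR system's actual processing chain, which also includes steps such as coordinate rotation, despiking, and density corrections.

```python
import numpy as np

RHO_AIR = 1.2    # air density, kg m-3 (assumed constant here)
CP_AIR = 1005.0  # specific heat of air at constant pressure, J kg-1 K-1
LV = 2.45e6      # latent heat of vaporization of water, J kg-1

def half_hour_fluxes(u, w, T, q, co2):
    """Eddy-covariance fluxes from synchronous high-frequency time series.

    u, w : horizontal and vertical wind (m/s); T : air temperature (K);
    q    : water-vapour density (kg m-3); co2 : CO2 density (kg m-3).
    All arrays cover one 30-minute averaging interval.
    """
    def cov(a, b):
        # covariance of fluctuations about the interval mean (Reynolds averaging)
        return np.mean((a - a.mean()) * (b - b.mean()))

    sensible_heat = RHO_AIR * CP_AIR * cov(w, T)  # W m-2
    latent_heat = LV * cov(w, q)                  # W m-2
    co2_flux = cov(w, co2)                        # kg m-2 s-1
    momentum_flux = -RHO_AIR * cov(u, w)          # N m-2
    return sensible_heat, latent_heat, co2_flux, momentum_flux
```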
6. Visualization and analysis of eddies in a global ocean simulation Williams, Sean J [Los Alamos National Laboratory]; Hecht, Matthew W [Los Alamos National Laboratory]; Petersen, Mark [Los Alamos National Laboratory]; Strelitz, Richard [Los Alamos National Laboratory]; Maltrud, Mathew E [Los Alamos National Laboratory]; Ahrens, James P [Los Alamos National Laboratory]; Hlawitschka, Mario [UC Davis]; Hamann, Bernd [UC Davis] 2010-10-15 Eddies at a scale of approximately one hundred kilometers have been shown to be surprisingly important to understanding the large-scale transport of heat and nutrients in the ocean. Due to difficulties in observing the ocean directly, the behavior of eddies below the surface is not very well understood. To fill this gap, we employ a high-resolution simulation of the ocean developed at Los Alamos National Laboratory. Using large-scale parallel visualization and analysis tools, we produce three-dimensional images of ocean eddies and also generate a census of eddy distribution and shape averaged over multiple simulation time steps, resulting in a world map of eddy characteristics. As expected from observational studies, our census reveals a higher concentration of eddies at the mid-latitudes than at the equator. Our analysis further shows that mid-latitude eddies are thicker, within a range of 1000-2000 m, while equatorial eddies are less than 100 m thick. 7. Evolution of oceanic circulation theory: From gyres to eddies HUANG Rui-xin 2013-01-01 Physical oceanography is now entering the eddy-resolving era. Eddies here refer to the so-called mesoscale or submesoscale eddies; by definition, they have horizontal scales from 1 to 500 km and vertical scales from meters to hundreds of meters. In a word, the ocean is a turbulent environment; thus, eddy motions are one of the fundamental aspects of oceanic circulation. Studies of these eddies, including observations, theory, laboratory experiments, and parameterization in numerical models, will be among the most productive research frontiers for the next 10 to 20 years. Although we have made great efforts to collect data about eddies in the ocean, thus far we know very little about the three-dimensional structure of these eddies and their contributions to the oceanic general circulation and climate. Therefore, the most important breakthroughs may come from observations and physical reasoning about the fundamental aspects of eddy structure and their contributions to ocean circulation and climate. 8. Eddy Effects in the General Circulation, Spanning Mean Currents, Mesoscale Eddies, and Topographic Generation, Including Submesoscale Nests 2013-09-30 [Only report and figure-caption fragments survive for this record: "... layer the eddy flux is significantly diabatic with a shallow eddy-induced (Lagrangian) circulation cell and down-gradient lateral diapycnal flux ..."; figure caption: 3D schematic representation of the eddy effects on the mean buoyancy field, decomposed into adiabatic eddy-induced advection and diabatic components, the diabatic component acting to smooth out surface buoyancy extrema.]
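Entry 6 above builds a census of eddies from gridded simulation output. One common (though not the only) way to flag eddy interiors in such data is the Okubo-Weiss parameter, W = s_n² + s_s² − ζ², which is negative where rotation dominates strain; the threshold and variable names below are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss parameter W = sn^2 + ss^2 - zeta^2 on a regular grid."""
    du_dy, du_dx = np.gradient(u, dy, dx)   # derivatives along (y, x)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    sn = du_dx - dv_dy                      # normal strain
    ss = dv_dx + du_dy                      # shear strain
    zeta = dv_dx - du_dy                    # relative vorticity
    return sn**2 + ss**2 - zeta**2, zeta

def eddy_mask(u, v, dx, dy, k=0.2):
    """Flag eddy interiors where W < -k * std(W), a common ad hoc threshold."""
    W, zeta = okubo_weiss(u, v, dx, dy)
    mask = W < -k * np.std(W)
    # +1 for cyclonic, -1 for anticyclonic rotation (Northern Hemisphere sign convention)
    return mask, np.where(mask, np.sign(zeta), 0)
```

Counting and averaging the connected regions of this mask over many time steps gives a census of eddy number, size, and polarity of the kind mapped in entry 6.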
9. Cycloidal meandering of a mesoscale anticyclonic eddy Kizner, Ziv; Shteinbuch-Fridman, Biana; Makarov, Viacheslav; Rabinovich, Michael 2017-08-01 By applying a theoretical approach, we propose a hypothetical scenario that might explain some features of the movement of a long-lived mesoscale anticyclone observed during 1990 in the Bay of Biscay [R. D. Pingree and B. Le Cann, "Three anticyclonic slope water oceanic eddies (SWODDIES) in the southern Bay of Biscay in 1990," Deep-Sea Res., Part A 39, 1147 (1992)]. In the remote-sensing infrared images, at the initial stage of observations, the anticyclone was accompanied by two cyclonic eddies, so the entire structure appeared as a tripole. However, at later stages, only the anticyclone was seen in the images, traveling generally west. Unusual for an individual eddy were the high speed of its motion (relative to the expected planetary beta-drift) and the presence of almost cycloidal meanders in its trajectory. Although the surface satellites seem to have quickly disappeared, we hypothesize that subsurface satellites continued to exist, and the coherence of the three vortices persisted for a long time. A significant perturbation of the central symmetry in the mutual arrangement of the three eddies constituting a tripole can make reasonably fast cycloidal drift possible. This hypothesis is tested with two-layer contour-dynamics f-plane simulations and with finite-difference beta-plane simulations. In the latter case, the interplay of the planetary beta-effect and that due to the sloping bottom is considered. 10. Intrathermocline eddies in the Southern Indian Ocean Nauw, J.J.; van Aken, H.M.; Lutjeharms, J.R.E.; de Ruijter, W.P.M. 2006-01-01 In 2001, two relatively saline intrathermocline eddies (ITEs) were observed southeast of Madagascar at 200 m depth. They are characterized by a subsurface salinity maximum of over 35.8 at potential temperatures between 18 and 22 °C. The oxygen concentrations within the high-salinity cores are slightly ... 11. Inverse modeling for Large-Eddy simulation Geurts, Bernardus J. 1998-01-01 Approximate higher-order polynomial inversion of the top-hat filter is developed, with which the turbulent stress tensor in Large-Eddy Simulation can be consistently represented using the filtered field. Generalized (mixed) similarity models are proposed which improved the agreement with the kinetic ... 12. Wind changes above warm Agulhas Current eddies Roualt, M 2016-10-01 Full Text Available ... °C to the surrounding ocean. The analysis of 960 twice-daily instantaneous charts of equivalent stability neutral wind speed estimates from the SeaWinds scatterometer onboard the QuikScat satellite, collocated with SST during the lifespan of six warm eddies, shows stronger ... 13.
A modified method to estimate eddy diffusivity in the North Pacific using altimeter eddy statistics ZHANG Zhiwei; LI Yaru; TIAN Jiwei 2013-01-01 The method proposed by Stammer (1998) is modified using eddy statistics from altimeter observations to obtain a more realistic eddy diffusivity (K) for the North Pacific. Compared with the original estimates, the modified K has remarkably reduced values in the Kuroshio Extension (KE) and North Equatorial Counter Current (NECC) regions, but slightly enhanced values in the Subtropical Counter Current (STCC) region. In strong eastward flow areas like the KE and NECC, owing to a large difference between the mean flow velocity and the propagation velocity of mesoscale eddies, tracers inside the mesoscale eddies are rapidly transported outside by advection, and the mixing length L is hence strongly suppressed. The low eddy probability (P) is also responsible for the reduced K in the NECC area. In the STCC region, however, L is only mildly suppressed and P is very high, so K there is enhanced. The zonally averaged K has two peaks of comparable magnitude, in the latitude bands of the STCC and KE. In the core of the KE, because of the reduced values of P and L, the zonally averaged K is a minimum. The zonally integrated eddy heat transport in the KE band, calculated based on the modified K, is much closer to the results of previous independent research, indicating the robustness of our modified K. The map of the modified K provides useful information for modeling studies in the North Pacific. 14. Grueneisen relaxation photoacoustic microscopy Wang, Lidai; Zhang, Chi; Wang, Lihong V. 2014-01-01 The temperature-dependent property of the Grueneisen parameter has been employed in photoacoustic imaging mainly to measure tissue temperature. Here we explore this property using a different approach and develop Grueneisen-relaxation photoacoustic microscopy (GR-PAM), a technique that images non-radiative absorption with confocal optical resolution. GR-PAM sequentially delivers two identical laser pulses with a micro-second-scale time delay. The first laser pulse generates a photoacoustic signal and thermally tags the in-focus absorbers. Owing to the temperature dependence of the Grueneisen parameter, when the second laser pulse excites the tagged absorbers within the thermal relaxation time, a photoacoustic signal stronger than the first one is produced. GR-PAM detects the amplitude difference between the two co-located photoacoustic signals, confocally imaging the non-radiative absorption. We greatly improved the axial resolution from 45 µm to 2.3 µm and at the same time slightly improved the lateral resolution from 0.63 µm to 0.41 µm. In addition, the optical sectioning capability facilitates the measurement of the absolute absorption coefficient without fluence calibration. PMID:25379919
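Entry 13 above estimates lateral eddy diffusivity from altimeter eddy statistics, with the mixing length suppressed where the mean flow and the eddy propagation speed differ strongly. A generic suppressed-mixing-length estimate of that kind is sketched below; the efficiency factor, the form of the suppression, and the variable names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def eddy_diffusivity(eke, l_eddy, u_mean, c_eddy, gamma=0.35):
    """Suppressed mixing-length estimate of lateral eddy diffusivity (m2 s-1).

    eke    : eddy kinetic energy (m2 s-2), e.g. from altimetry
    l_eddy : eddy length scale (m)
    u_mean : mean flow speed (m s-1)
    c_eddy : eddy propagation speed (m s-1)
    gamma  : O(1) mixing-efficiency constant (assumed)

    K ~ gamma * u_rms * L / (1 + (U - c)^2 / u_rms^2): when |U - c| greatly
    exceeds the eddy velocity scale, tracers are swept out of the eddies
    before they can mix, the effective mixing length shrinks, and K drops,
    as entry 13 reports for the KE and NECC regions.
    """
    u_rms = np.sqrt(2.0 * eke)
    suppression = 1.0 + ((u_mean - c_eddy) / np.maximum(u_rms, 1e-6)) ** 2
    return gamma * u_rms * l_eddy / suppression
```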
15. Obituary: John Allen Eddy (1931-2009) Gingerich, Owen 2011-12-01 Jack Eddy, who was born 25 March 1931 in Pawnee City in southeastern Nebraska, died after a long battle with cancer in Tucson, Arizona, on 10 June 2009. Best known for his work on the long-term instability of the sun, described in a landmark paper in Science titled "The Maunder Minimum," he also deserves recognition as one of the triumvirate who founded the Historical Astronomy Division of the AAS. His father ran a cooperative farm store where Jack worked as a teenager; his parents were of modest means and there were concerns whether he could afford college, but one of the state senators, also from Pawnee City, nominated him for the U.S. Naval Academy. A course in celestial navigation gave him a love of the sky. After graduation in 1953, he served four years on aircraft carriers in the Pacific during the Korean War and then as a navigator and operations officer on a destroyer in the Persian Gulf. In 1957, he left the Navy and entered graduate school at the University of Colorado in Boulder, where in 1962 he received a Ph.D. in astro-geophysics. His thesis, supervised by Gordon Newkirk, dealt with light scattering in the upper atmosphere, based on data from stratospheric balloon flights. He then worked as a teacher and researcher at the High Altitude Observatory in Boulder. Always adventuresome and willing to explore new frontiers, on his own time Eddy examined an Amerindian stone circle in the Big Horn mountains of Wyoming, a so-called medicine wheel, concluding that there were alignments with both the solstitial sun and Aldebaran. His conjectures became a cover story in Science magazine in June 1974. In 1971 Jack privately reproduced for his friends a small collection of his own hilarious cartoons titled "Job Opportunities for Out-of-work Astronomers," with an abstract beginning, "Contrary to popular belief, a PhD in Astronomy/Astrophysics need not be a drawback in locating work in this decade." For example, under merchandising, a used car salesman advertises ... 16. Extended MHD Modeling of Tearing-Driven Magnetic Relaxation Sauppe, Joshua 2016-10-01 Driven plasma pinch configurations are characterized by the gradual accumulation and episodic release of free energy in discrete relaxation events. The hallmark of this relaxation in a reversed-field pinch (RFP) plasma is flattening of the parallel current density profile, effected by a fluctuation-induced dynamo emf in Ohm's law. Nonlinear two-fluid modeling of macroscopic RFP dynamics has shown appreciable coupling of magnetic relaxation and the evolution of plasma flow. Accurate modeling of RFP dynamics requires the Hall effect in Ohm's law as well as first-order ion finite Larmor radius (FLR) effects, represented by the Braginskii ion gyroviscous stress tensor. New results find that the Hall dynamo effect from ⟨δJ×δB⟩/ne can counter the MHD dynamo effect from −⟨δV×δB⟩ in some of the relaxation events. The MHD effect dominates these events and relaxes the current profile toward the Taylor state, but the opposition of the two dynamos generates plasma flow in the direction of the equilibrium current density, consistent with experimental measurements. Detailed experimental measurements of the MHD and Hall emf terms are compared to these extended MHD predictions. Tracking the evolution of magnetic energy, helicity, and hybrid helicity during relaxation identifies the most important contributions in single-fluid and two-fluid models. Magnetic helicity is well conserved relative to the magnetic energy during relaxation. The hybrid helicity is dominated by magnetic helicity in realistic low-beta pinch conditions and is also well conserved. Differences of less than 1% between magnetic helicity and hybrid helicity are observed with two-fluid modeling and result from cross-helicity evolution through ion FLR effects, which have not been included in contemporary relaxation theories. The kinetic energy driven by relaxation in the computations is dominated by velocity components perpendicular to the magnetic field, an effect that had not been predicted. Work performed at University of Wisconsin 17.
A new gauge-invariant method for diagnosing eddy diffusivities Mak, Julian; Marshall, David P 2015-01-01 Coarse resolution numerical ocean models must typically include a parameterisation for mesoscale turbulence. A common recipe for such parameterisations is to invoke down-gradient mixing, or diffusion, of some tracer quantity, such as potential vorticity or buoyancy. However, it is well known that eddy fluxes include large rotational components which necessarily do not lead to any mixing; eddy diffusivities diagnosed from unfiltered fluxes are thus contaminated by the presence of these rotational components. Here a new methodology is applied whereby eddy diffusivities are diagnosed directly from the eddy force function. The eddy force function depends only upon flux divergences, is independent of any rotational flux components, and is inherently non-local and smooth. A one-shot inversion procedure is applied, minimising the mis-match between parameterised force functions and force functions derived from eddy resolving calculations. This enables diffusivities associated with the eddy potential vorticity and buo... 18. Biogeochemical properties of eddies in the California Current System Chenillat, Fanny; Franks, Peter J. S.; Combes, Vincent 2016-06-01 The California Current System (CCS) has intense mesoscale activity that modulates and exports biological production from the coastal upwelling system. To characterize and quantify the ability of mesoscale eddies to affect the local and regional planktonic ecosystem of the CCS, we analyzed a 10 year-long physical-biological model simulation, using eddy detection and tracking to isolate the dynamics of cyclonic and anticyclonic eddies. As they propagate westward across the shelf, cyclonic eddies efficiently transport coastal planktonic organisms and maintain locally elevated production for up to 1 year (800 km offshore). Anticyclonic eddies, on the other hand, have a limited impact on local production over their ~6 month lifetime as they propagate 400 km offshore. At any given time ~8% of the model domain was covered by eddy cores. Though the eddies cover a small area, they explain ~50 and 20% of the transport of nitrate and plankton, respectively. 19. Magnetoviscosity and relaxation in ferrofluids Felderhof 2000-09-01 The increase in viscosity of a ferrofluid due to an applied magnetic field is discussed on the basis of a phenomenological relaxation equation for the magnetization. The relaxation equation was derived earlier from irreversible thermodynamics, and differs from that postulated by Shliomis. The two relaxation equations lead to a different dependence of viscosity on magnetic field, unless the relaxation rates are related in a specific field-dependent way. Both planar Couette flow and Poiseuille pipe flow in parallel and perpendicular magnetic field are discussed. The entropy production for these situations is calculated and related to the magnetoviscosity. 20. [Death in a relaxation tank]. Rupp, Wolf; Simon, Karl-Heinz; Bohnert, Michael 2009-01-01 Complete relaxation can be achieved by floating in a darkened, sound-proof relaxation tank filled with salinated water kept at body temperature. Under these conditions, meditation exercises up to self-hypnosis may lead to deep relaxation with physical and mental revitalization. A user manipulated his tank, presumably to completely cut off all optical and acoustic stimuli and accidentally also covered the ventilation hole. The man was found dead in his relaxation tank. 
The findings suggested lack of oxygen as the cause of death. 1. Spin injection and relaxation in a mesoscopic superconductor Aprili, Marco; Quay, Charis; Chevalier, Denis; Dutreix, Clement [Laboratoire de Physique des Solides, CNRS UMR-8502, Bat. 510, Universite Paris-Sud, 91405 Orsay Cedex (France)]; Bena, Cristina [Institut de Physique Theorique, CEA/Saclay, Orme des Merisiers, 91190 Gif-sur-Yvette Cedex (France)]; Strunk, Christoph [Institute for Experimental and Applied Physics, University of Regensburg, 93040 Regensburg (Germany)] 2015-07-01 Injecting spin-polarized electrons or holes into a superconductor and removing Cooper pairs creates both spin and charge imbalances. We have investigated the relaxation of the out-of-equilibrium magnetization induced by spin injection. First, we measured the spin and charge relaxation times (t_S and t_Q) by creating a dynamic equilibrium between continuous injection and relaxation, which leads to constant-in-time spin and charge accumulations proportional to their respective relaxation times. Using a mesoscopic "absolute" spin valve, we obtained t_S and t_Q by probing the difference in chemical potential between quasiparticles and Cooper pairs. We observed that spin (charge) accumulation dominates at low (high) injection current. This artificially generates spin-charge separation, as theoretically first predicted by Kivelson and Rokhsar. Second, we directly measured the spin relaxation time in frequency space and found t_S = 1-10 ns, consistent with the results from constant-current injection. Finally, we measured the spin coherence time of the out-of-equilibrium quasiparticles by performing an electron spin resonance experiment. 2. Relaxing Behavioural Inheritance Nuno Amálio 2013-05-01 Full Text Available Object-oriented (OO) inheritance allows the definition of families of classes in a hierarchical way. In behavioural inheritance, a strong version, it should be possible to substitute an object of a subclass for an object of its superclass without any observable effect on the system. Behavioural inheritance is related to formal refinement but, as observed in the literature, the refinement constraints are too restrictive, ruling out many useful OO subclassings. This paper studies behavioural inheritance in the context of ZOO, an object-oriented style for Z. To overcome refinement's restrictions, this paper proposes relaxations to the behavioural inheritance refinement rules. The work is presented for Z, but the results are applicable to any OO language that supports design-by-contract. 3. Assessment of large-eddy simulation in capturing preferential concentration of heavy particles in isotropic turbulent flows Jin, Guodong; Zhang, Jian; He, Guo-Wei; Wang, Lian-Ping 2010-12-01 Particle-laden turbulent flow is a typical non-equilibrium process characterized by the particle relaxation time τp and the characteristic timescale of the flow τf, in which the turbulent mixing of heavy particles is related to different scales of fluid motion. The preferential concentration (PC) of heavy particles can be strongly affected by fluid motion at dissipation-range scales, which presents a major challenge to the large-eddy simulation (LES) approach. The errors in the PC simulated by LES are due to both filtering and the subgrid-scale (SGS) eddy viscosity model. The former leads to the removal of the SGS motion and the latter usually results in a more spatiotemporally correlated vorticity field.
The dependence of these two factors on the flow Reynolds number is assessed using a priori and a posteriori tests, respectively. The results suggest that filtering is the dominant factor for the under-prediction of the PC for Stokes numbers less than 1, while the SGS eddy viscosity model is the dominant factor for the over-prediction of the PC for Stokes numbers between 1 and 10. The effects of the SGS eddy viscosity model on the PC decrease as the Reynolds number and Stokes number increase. LES can well predict the PC for particle Stokes numbers larger than 10. An SGS model for particles with small and intermediate Stokes numbers is needed to account for the effects of the removed SGS turbulent motion on the PC. 4. Assessment of large-eddy simulation in capturing preferential concentration of heavy particles in isotropic turbulent flows Jin Guodong; Zhang Jian; He Guowei; Wang Lianping, E-mail: hgw@lnm.imech.ac.cn [LNM, Institute of Mechanics, Chinese Academy of Sciences, Beijing 100190 (China)] 2010-12-15
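Entries 3-4 above organize the LES errors by particle Stokes number. The sketch below computes the Kolmogorov-scale Stokes number from particle and flow properties and notes the regimes reported in the entry; the formulas are standard Stokes-drag and Kolmogorov-scale estimates, and the numerical values in the example are assumptions.

```python
import numpy as np

def stokes_number(rho_p, d_p, rho_f, nu, dissipation):
    """Kolmogorov-scale Stokes number St = tau_p / tau_eta.

    rho_p, d_p  : particle density (kg m-3) and diameter (m)
    rho_f, nu   : fluid density (kg m-3) and kinematic viscosity (m2 s-1)
    dissipation : turbulent kinetic-energy dissipation rate (m2 s-3)
    """
    mu = rho_f * nu                           # dynamic viscosity
    tau_p = rho_p * d_p**2 / (18.0 * mu)      # Stokes (particle relaxation) time
    tau_eta = np.sqrt(nu / dissipation)       # Kolmogorov time scale
    return tau_p / tau_eta

# Example with assumed values: 50-micron water droplets in air-like turbulence.
st = stokes_number(rho_p=1000.0, d_p=50e-6, rho_f=1.2, nu=1.5e-5, dissipation=0.1)
# Preferential concentration peaks near St ~ 1; entries 3-4 report that filtering
# dominates the LES error for St < 1 and the SGS model for 1 < St < 10.
print(f"St = {st:.2f}")
```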
5. Hydrogen sulfide and vascular relaxation SUN Yan; TANG Chao-shu; DU Jun-bao; JIN Hong-fang 2011-01-01 Objective: To review the vasorelaxant effects of hydrogen sulfide (H2S) in arterial rings in the cardiovascular system under both physiological and pathophysiological conditions, and the possible mechanisms involved. Data sources: The data in this review were obtained from Medline and PubMed sources from 1997 to 2011 using the search terms "hydrogen sulfide" and "vascular relaxation". Study selection: Articles describing the role of hydrogen sulfide in the regulation of vascular activity and its vasorelaxant effects were selected. Results: H2S plays an important role in the regulation of cardiovascular tone. The vasomodulatory effects of H2S depend on factors including concentration, species and tissue type. The H2S donor sodium hydrosulfide (NaHS) causes vasorelaxation of isolated rat aortic rings in a dose-dependent manner. This effect was more pronounced than that observed in pulmonary arterial rings. The expression of KATP channel proteins and mRNA in the aortic rings was increased compared with pulmonary artery rings. H2S is involved in the pathogenesis of a variety of cardiovascular diseases, and downregulation of the endogenous H2S pathway is an important factor in their pathogenesis. The vasorelaxant effects of H2S have been shown to be mediated by activation of KATP channels in vascular smooth muscle cells and via the induction of acidification due to activation of the Cl-/HCO3- exchanger. It is speculated that the mechanisms underlying the vasoconstrictive function of H2S in the aortic rings involve decreased NO production and inhibition of cAMP accumulation. Conclusion: H2S is an important endogenous gasotransmitter in the cardiovascular system and acts as a modulator of vascular tone in the homeostatic regulation of blood pressure. 6. Oceanic mass transport by mesoscale eddies. Zhang, Zhengguang; Wang, Wei; Qiu, Bo 2014-07-18 Oceanic transports of heat, salt, fresh water, dissolved CO2, and other tracers regulate global climate change and the distribution of natural marine resources. The time-mean ocean circulation transports fluid as a conveyor belt, but fluid parcels can also be trapped and transported discretely by migrating mesoscale eddies. By combining available satellite altimetry and Argo profiling float data, we showed that the eddy-induced zonal mass transport can reach a total meridionally integrated value of up to 30 to 40 sverdrups (Sv) (1 Sv = 10⁶ cubic meters per second), and that it occurs mainly in subtropical regions, where the background flows are weak. This transport is comparable in magnitude to that of the large-scale wind- and thermohaline-driven circulation. 7. Eddy diffusivities of inertial particles under gravity Afonso, Marco Martins; Muratore-Ginanneschi, Paolo 2011-01-01 The large-scale/long-time transport of inertial particles of arbitrary mass density under gravity is investigated by means of a formal multiple-scale perturbative expansion in the scale-separation parameter between the carrier flow and the particle concentration field. The resulting large-scale equation for the particle concentration is determined and is found to be diffusive, with a positive-definite eddy diffusivity. The calculation of the latter tensor is reduced to the resolution of an auxiliary differential problem, consisting of a coupled set of two differential equations in a (6+1)-dimensional coordinate system (3 space coordinates plus 3 velocity coordinates plus time).
Although expensive, numerical methods can be exploited to obtain the eddy diffusivity for any desirable non-perturbative limit (e.g. arbitrary Stokes and Froude numbers). The aforementioned large-scale equation is then specialized to deal with two different relevant perturbative limits: i) vanishing of both Stokes time and sedimenting ... 8. Magnetic relaxation in anisotropic magnets Lindgård, Per-Anker 1971-01-01 The line shape and the kinematic and thermodynamic slowing down of the critical and paramagnetic relaxation in axially anisotropic materials are discussed. Kinematic slowing down occurs only in the longitudinal relaxation function. The thermodynamic slowing down occurs in either the transverse or ... 9. Large Eddy Simulation of Turbulent Combustion 2006-03-15 [Only fragments of this report survive in the record: Principal Investigator Heinz Pitsch, Flow Physics and Computation, Department of Mechanical Engineering; reference fragments including "Application to an HCCI Engine" (Proceedings of the 4th Joint Meeting of the U.S. Sections of the Combustion Institute, 2005) and K. Fieweger; the report concerns the transition of LES from a scientifically interesting method to application in the burners and engines found in modern, industrially relevant equipment.] 10. HYCOM High-resolution Eddying Simulations 2014-07-01 [Only fragments of this report survive in the record: a reanalysis assimilating a number of vertical profiles of temperature and salinity in place of XBT temperature profiles was completed in February 2014; cited works include doi:10.1016/j.ocemod.2011.02.011 and Metzger, E. J., and Coauthors, 2014a: US Navy operational global ocean and Arctic ice prediction systems, Oceanography; the project develops and demonstrates the performance and application of eddy-resolving, real-time global and basin-scale ocean prediction.] 11. Anisotropic Mesoscale Eddy Transport in Ocean General Circulation Models Reckinger, S. J.; Fox-Kemper, B.; Bachman, S.; Bryan, F.; Dennis, J.; Danabasoglu, G. 2014-12-01 Modern climate models are limited to coarse-resolution representations of large-scale ocean circulation that rely on parameterizations for mesoscale eddies. The effects of eddies are typically introduced by relating subgrid eddy fluxes to the resolved gradients of buoyancy or other tracers, where the proportionality is, in general, governed by an eddy transport tensor. The symmetric part of the tensor, which represents the diffusive effects of mesoscale eddies, is universally treated isotropically in general circulation models. Thus, only a single parameter, namely the eddy diffusivity, is used at each spatial and temporal location to impart the influence of mesoscale eddies on the resolved flow. However, the diffusive processes that the parameterization approximates, such as shear dispersion, potential vorticity barriers, oceanic turbulence, and instabilities, typically have strongly anisotropic characteristics. Generalizing the eddy diffusivity tensor for anisotropy extends the number of parameters to three: a major diffusivity, a minor diffusivity, and the principal axis of alignment. The Community Earth System Model (CESM) with the anisotropic eddy parameterization is used to test various choices for the newly introduced parameters, which are motivated by observations and by the eddy transport tensor diagnosed from high-resolution simulations. Simply setting the ratio of major to minor diffusivities to a value of five globally, while aligning the major axis along the flow direction, improves biogeochemical tracer ventilation and reduces global temperature and salinity biases. These effects can be improved even further by parameterizing the anisotropic transport mechanisms in the ocean.
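The anisotropic closure in entry 11 replaces the scalar eddy diffusivity with a tensor defined by a major diffusivity, a minor diffusivity, and an alignment direction. A minimal construction of such a tensor, aligned here with the local mean flow and using the entry's globally uniform anisotropy ratio of five, is sketched below; the function and variable names are illustrative assumptions.

```python
import numpy as np

def anisotropic_k(kappa_minor, u, v, ratio=5.0):
    """2x2 symmetric eddy-diffusivity tensor aligned with the local flow.

    kappa_minor : minor-axis diffusivity (m2 s-1)
    u, v        : local mean velocity components defining the major axis
    ratio       : major-to-minor diffusivity ratio (entry 11 tests 5 globally)
    """
    theta = np.arctan2(v, u)                   # principal axis along the flow
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])            # rotation from principal axes to x-y
    D = np.diag([ratio * kappa_minor, kappa_minor])
    return R @ D @ R.T                         # K = R diag(k_major, k_minor) R^T

# Down-gradient flux of a tracer with horizontal gradient g = (dC/dx, dC/dy):
# F = -anisotropic_k(500.0, u, v) @ g, so mixing is strongest along the flow.
```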
12. Large Eddy Simulation of Transitional Boundary Layer 2009-11-01 A sixth-order compact finite difference code is employed to investigate compressible Large Eddy Simulation (LES) of subharmonic transition of a spatially developing zero-pressure-gradient boundary layer at Ma = 0.2. The computational domain extends from Re_x = 10^5, where laminar blowing and suction excite the most unstable fundamental and subharmonic modes, to the fully turbulent stage at Re_x = 10.1×10^5. Numerical sponges are used in the neighborhood of the external boundaries to provide non-reflective conditions. Our interest lies in the performance of the dynamic subgrid-scale (SGS) model [1] in the transition process. It is observed that in the early stages of transition the eddy viscosity is much smaller than the physical viscosity. As a result the amplitudes of selected harmonics are in very good agreement with the experimental data [2]. The model's contribution gradually increases during the last stages of the transition process, and the dynamic eddy viscosity becomes fully active and dominant in the turbulent region. Consistent with this trend, the skin friction coefficient versus Re_x diverges from its laminar profile and converges to the turbulent profile after an overshoot. 1. Moin P. et al., Phys Fluids A, 3(11), 2746-2757, 1991. 2. Kachanov Yu. S. et al., JFM, 138, 209-247, 1983. 13. LARGE EDDY SIMULATION FOR PLUNGING BREAKER WAVE Bai Yu-chuan; Wang Zhao-yin 2003-01-01 As a wave propagates into shallow water, the shoaling effect leads to an increase in wave height, and at a certain position the wave breaks. Breaking waves are powerful agents for generating turbulence, which plays an important role in most of the fluid dynamical processes in the surf zone, so a proper numerical model for describing the turbulent effect is urgently needed. A numerical model is set up to simulate the wave breaking process, consisting of a free-surface model using the surface marker method and a vertical two-dimensional model that solves the flow equations. The turbulence is described by the Large Eddy Simulation (LES) method, in which the larger turbulent features are simulated by solving the flow equations and the small-scale turbulence is represented by a sub-grid model. A dynamic eddy viscosity sub-grid-scale stress model has been used for the present simulation. The large eddy simulation model presented in this paper can be used to study the propagation of a solitary wave in constant water depth and the shoaling of a non-breaking solitary wave on a beach. To track free-surface movements, the TUMMAC method is employed. By applying the model to the wave breaking problem in the surf zone, we found that the model results compare very well with experimental data. In addition, the model is able to reproduce the complicated flow phenomena, especially the plunging breaker. 14. Why Eddy Momentum Fluxes are Concentrated in the Upper Troposphere Ait-Chaalal, Farid 2015-01-01 The extratropical eddy momentum flux (EMF) is controlled by the generation, propagation, and dissipation of large-scale eddies and is concentrated in Earth's upper troposphere. An idealized GCM is used to investigate how this EMF structure arises.
In simulations in which the poles are heated more strongly than the equator, EMF is concentrated near the surface, demonstrating that surface drag generally is not responsible for the upper-tropospheric EMF concentration. Although Earth's upper troposphere favors linear wave propagation, quasi-linear simulations in which nonlinear eddy-eddy interactions are suppressed demonstrate that this is likewise not primarily responsible for the upper-tropospheric EMF concentration. The quasi-linear simulations reveal the essential role of nonlinear eddy-eddy interactions in the surf zone in the upper troposphere, where wave activity absorption away from the baroclinic generation regions occurs through the nonlinear generation of small scales. In Earth-like atmospheres, wave activ... 15. Development and optimization of hardware for delta relaxation enhanced MRI. Harris, Chad T; Handler, William B; Araya, Yonathan; Martínez-Santiesteban, Francisco; Alford, Jamu K; Dalrymple, Brian; Van Sas, Frank; Chronik, Blaine A; Scholl, Timothy J 2014-10-01 Delta relaxation enhanced magnetic resonance (dreMR) imaging requires an auxiliary B0 electromagnet capable of shifting the main magnetic field within a clinical 1.5 Tesla (T) MR system. In this work, the main causes of interaction between an actively shielded, insertable resistive B0 electromagnet and a 1.5T superconducting system are systematically identified and mitigated. The effects of nonideal fabrication of the field-shifting magnet are taken into consideration through careful measurement during winding and improved accuracy in the design of the associated active shield. The shielding performance of the resultant electromagnet is compared against a previously built system in which the shield design was based on an ideal primary coil model. Hardware and software approaches implemented to eliminate residual image artifacts are presented in detail. The eddy currents produced by the newly constructed dreMR system are shown to have a significantly smaller "long-time-constant" component, consistent with the hypothesis that less energy is deposited into the cryostat of the MR system. With active compensation, the dreMR imaging system is capable of 0.22T field shifts within a clinical 1.5T MRI with no significant residual eddy-current fields. Copyright © 2013 Wiley Periodicals, Inc. 16. On the relationship between Southern Ocean eddies and phytoplankton Frenger, Ivy; Münnich, Matthias; Gruber, Nicolas 2017-04-01 Effects on phytoplankton in the Southern Ocean are crucial for the global ocean nutrient and carbon cycles. Such effects potentially arise from mesoscale eddies which are omnipresent in the region. Eddies are known to affect phytoplankton through either advection and mixing, or the stimulation/suppression of growth. Yet, the climatological relationship between Southern Ocean eddies and phytoplankton has not been quantified in detail. To provide an estimate of this relationship, we identified more than100,000 eddies in the Southern Ocean and determined associated phytoplankton anomalies using satellite-based chlorophyll-a (chl) measurements. The eddies have a very substantial impact on the chl levels, with eddy associated chl differing by more than 10% from the background over wide areas. The structure of these anomalies is largely zonal, with positive anomalies north of the Antarctic Circumpolar Current (ACC) and negative anomalies within the circumpolar belt of the ACC for cyclonic eddies. 
The pattern is similar but of opposite sign for anticyclonic eddies. The seasonality of this signal is weak north of the ACC but pronounced in the vicinity of the ACC. The spatial structure and seasonality of the signal can be explained largely by advection, i.e., the eddy-circulation-driven lateral transport of anomalies across large-scale gradients. We conclude this based on the shapes of the local chl anomalies of eddies and the ambient chl gradients. In contrast, the ACC winter anomalies are consistent with an effect of eddies on the light exposure of phytoplankton. The clear impact of eddies on chl implies a downstream effect on Southern Ocean biogeochemical properties. 17. Accelerating convergence of molecular dynamics-based structural relaxation Christensen, Asbjørn 2005-01-01 We describe strategies to accelerate the terminal stage of molecular dynamics (MD)-based relaxation algorithms, where a large fraction of the computational resources is used. First, we analyze the qualitative and quantitative behavior of the QuickMin family of MD relaxation algorithms and explore the influence of spectral properties and dimensionality of the molecular system on the algorithm efficiency. We test two algorithms, the MinMax and Lanczos, for spectral estimation from an MD trajectory, and use this to derive a practical scheme of time-step adaptation in MD relaxation algorithms to improve efficiency. We also discuss the implementation aspects. Secondly, we explore acceleration of the final-state refinement by a combination with the conjugate gradient technique, where the key ingredient is an implicit corrector step. Finally, we test the feasibility of passive Hessian matrix accumulation from ... 18. Can Black Hole Relax Unitarily? Solodukhin, S. N. 2005-03-01 We review the way the BTZ black hole relaxes back to thermal equilibrium after a small perturbation and how it is seen in the boundary (finite volume) CFT. Unitarity requires the relaxation to be quasi-periodic. It is preserved in the CFT but is not obvious in the case of the semiclassical black hole, the relaxation of which is driven by complex quasi-normal modes. We discuss two ways of modifying the semiclassical black hole geometry to maintain unitarity: the (fractal) brick wall and the worm-hole modification. In the latter case the entropy comes out correctly as well. 19. Can Black Hole Relax Unitarily? Solodukhin, Sergey N. 2004-01-01 20. Can Black Hole Relax Unitarily? Solodukhin, S N 2004-01-01
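Entry 17 above analyses the QuickMin family of MD-based relaxation algorithms. The core of a QuickMin-type step is to keep only the velocity component along the current force and to restart from rest whenever the velocity points uphill; a bare-bones version is sketched below. The time step, mass handling, and force callback are assumptions, and practical implementations add the adaptive time stepping and spectral estimates discussed in the entry.

```python
import numpy as np

def quickmin_step(x, v, force, dt=1.0e-2, mass=1.0):
    """One QuickMin-style relaxation step (global velocity-projection variant).

    x, v  : positions and velocities, arrays of shape (n_atoms, 3)
    force : callable returning the forces (e.g. -grad E) with the same shape
    """
    f = force(x)
    fhat = f / (np.linalg.norm(f) + 1e-30)         # unit vector along the total force
    p = np.vdot(v, fhat)                           # projection of the velocity on the force
    v = p * fhat if p > 0.0 else np.zeros_like(v)  # keep only the downhill component
    v = v + (dt / mass) * f                        # explicit Euler kick
    x = x + dt * v                                 # position update
    return x, v
```

Iterating this step until the force norm drops below a tolerance performs the structural relaxation whose terminal stage entry 17 seeks to accelerate.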
1. Tidal generation of large sub-mesoscale eddy dipoles W. Callendar 2011-08-01 Full Text Available Numerical simulations of tidal flow past Cape St. James on the south tip of Haida Gwaii (Queen Charlotte Islands) are presented that indicate mesoscale dipoles are formed from coalescing tidal eddies. Observations in this region demonstrate robust eddy generation at the Cape, with the primary process being flow separation of buoyant or wind-driven outflows forming large anticyclonic, negative-potential-vorticity Haida Eddies. However, there are other times when dipoles are observed in satellite imagery, indicating that a source of positive potential vorticity must also be present. The simulations here build on previous work that implicates oscillating tidal flow past the cape in creating the positive vorticity. Small headland eddies of alternating vorticity are created each tide. During certain tidal cycles, the headland eddies coalesce and self-organize in such a way as to create large (>20-km diameter) eddies that then self-advect into deep water. The self-advection speed is faster than the beta drift of anticyclones, and the propagation direction appears to be more southerly than that of typical Haida Eddies, though the model contains no mean wind-driven flows. These eddies are smaller than Haida Eddies but, given their tidal origin, may represent a more consistent source of coastal water injected into the interior of the subpolar gyre. 2. Tidal generation of large sub-mesoscale eddy dipoles Callendar, W.; Klymak, J. M.; Foreman, M. G. G. 2011-08-01 3. Tidal generation of large sub-mesoscale eddy dipoles W. Callendar 2011-04-01 Full Text Available
4. IVA Ultrasonic and Eddy Current NDE for ISS Project National Aeronautics and Space Administration — Phased array ultrasonic testing (PAUT) instruments and array eddy current testing instruments were tested on hypervelocity impact damaged aluminum plates simulating... 5. Features of eddy kinetic energy and variations of upper circulation in the South China Sea 贺志刚; 王东晓; 胡建宇 2002-01-01 The features of eddy kinetic energy (EKE) and the variations of the upper circulation in the South China Sea (SCS) are discussed in this paper using geostrophic currents estimated from Maps of Sea Level Anomalies of the TOPEX/Poseidon altimetry data. A high-EKE center is identified southeast of the Vietnam coast, with the highest energy level of 1400 cm²·s⁻² in both summer and autumn. This high-EKE center is caused by the instability of the current axis leaving the coast of Vietnam in summer and by the transition of seasonal circulation patterns in autumn. There exists another high-EKE region in the northeastern SCS, southwest of Taiwan Island, in winter. This high-EKE region is generated by the eddy activity caused by the Kuroshio intrusion and accumulates more than one third of the annual EKE, which confirms that the eddies are most active in winter. The transition of upper circulation patterns is also evidenced by the directions of the major axes of the velocity variance ellipses between 10° and 14.5°N, which supports the model results reported before. 6. An Exact Relaxation of Clustering Mørup, Morten; Hansen, Lars Kai 2009-01-01 Continuous relaxation of hard assignment clustering problems can lead to better solutions than greedy iterative refinement algorithms. However, the validity of existing relaxations is contingent on problem-specific fuzzy parameters that quantify the level of similarity between the original ... of clustering problems such as the K-means objective and pairwise clustering as well as graph partition problems, e.g., for community detection in complex networks. In particular we show that a relaxation to the simplex can be given for which the extreme solutions are stable hard assignment solutions and vice versa. Based on the new relaxation we derive the SR-clustering algorithm, which has the same complexity as traditional greedy iterative refinement algorithms but leads to significantly better partitions of the data. A Matlab implementation of the SR-clustering algorithm is available for download.
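Entry 6 above relaxes hard cluster assignments onto the probability simplex. The snippet below illustrates that general idea for a K-means-type objective using a temperature-controlled softmax over distances; it is only a generic simplex relaxation, not the SR-clustering algorithm of Mørup and Hansen, and the function names are assumptions.

```python
import numpy as np

def relaxed_assignments(X, centroids, beta=5.0):
    """Soft (simplex-relaxed) cluster assignments for a K-means-type objective.

    X         : (n, d) data matrix
    centroids : (k, d) current cluster centres
    beta      : inverse temperature; beta -> infinity recovers hard assignments
    """
    # squared Euclidean distances, shape (n, k)
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    R = np.exp(logits)
    return R / R.sum(axis=1, keepdims=True)       # each row lies on the simplex

def update_centroids(X, R):
    """Weighted centroid update given the soft assignment matrix R of shape (n, k)."""
    return (R.T @ X) / R.sum(axis=0)[:, None]
```

Alternating these two functions gives a soft K-means iteration; the point of entry 6 is that a well-chosen relaxation has its extreme (vertex) solutions coincide with stable hard assignments.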
The relaxation & stress reduction workbook Davis, Martha; Eshelman, Elizabeth Robbins; McKay, Matthew 2008-01-01 "The Relaxation & Stress Reduction Workbook broke new ground when it was first published in 1980, detailing easy, step-by-step techniques for calming the body and mind in an increasingly overstimulated world..." 8. Relaxation Dynamics in Heme Proteins. Scholl, Reinhard Wilhelm A protein molecule possesses many conformational substates that are likely arranged in a hierarchy consisting of a number of tiers. A hierarchical organization of conformational substates is expected to give rise to a multitude of nonequilibrium relaxation phenomena. If the temperature is lowered, transitions between substates of higher tiers are frozen out, and relaxation processes characteristic of lower tiers will dominate the observational time scale. This thesis addresses the following questions: (i) What is the energy landscape of a protein? How does the landscape depend on the environment such as pH and viscosity, and how can it be connected to specific structural parts? (ii) What relaxation phenomena can be observed in a protein? Which are protein specific, and which occur in other proteins? How does the environment influence relaxations? (iii) What functional form best describes relaxation functions? (iv) Can we connect the motions to specific structural parts of the protein molecule, and are these motions important for the function of the protein? To this purpose, relaxation processes after a pressure change are studied in carbonmonoxy (CO) heme proteins (myoglobin-CO, substrate-bound and substrate-free cytochrome P450cam-CO, chloroperoxidase-CO, horseradish peroxidase-CO) between 150 K and 250 K using FTIR spectroscopy to monitor the CO bound to the heme iron. Two types of p-relaxation experiments are performed: p-release (200 to ≈40 MPa) and p-jump (≈40 to 200 MPa) experiments. Most of the relaxations fall into one of three groups and are characterized by (i) nonexponential time dependence and non-Arrhenius temperature dependence (FIM1(ν), FIM1(Γ)); (ii) exponential time dependence and non-Arrhenius temperature dependence (FIM0(A_i → A_j)); (iii) exponential time dependence and Arrhenius temperature dependence (FIMX(ν)). The influence of pH is studied in myoglobin-CO and shown to have a strong influence on the substate population of the … 9. Large eddy simulation of stably stratified turbulence 2010-01-01 Stable stratification turbulence, a common phenomenon in atmospheric and oceanic flows, is an important element in the numerical prediction of such flows. In this paper the large eddy simulation is utilized for investigating stable stratification turbulence numerically. The paper is expected to provide correct statistical results in agreement with those measured in the atmosphere or ocean. The fully developed turbulence is obtained in the stable stratification fluid by large eddy simulation with different initial velocity fields and characteristic parameters, i.e. Reynolds number Re and Froude number Fr. The evolution of turbulent kinetic energy, characteristic length scales and parameters is analyzed for investigating the development of turbulence in stable stratification fluid. The three-dimensional energy spectra, i.e. the horizontal and vertical energy spectra, are compared between the numerical simulations and real observations in the atmosphere and ocean in order to test the reliability of the numerical simulation (a minimal example of such a spectrum estimate is sketched below).
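As an aside to entry 9: the spectrum comparison it describes comes down to estimating kinetic-energy spectra from sampled velocity records. The Python sketch below is one minimal way to do that for a periodic 1-D record; the grid size, synthetic signal and normalization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def energy_spectrum_1d(u, dx):
    """One-dimensional kinetic-energy spectrum of a periodic velocity record.

    u  : 1-D array of velocity samples (assumed periodic, uniform spacing)
    dx : grid spacing
    Returns wavenumbers k and a spectral density E(k) whose integral
    approximates 0.5*<u'^2>.
    """
    n = u.size
    u_hat = np.fft.rfft(u - u.mean())
    e_mode = 0.5 * (np.abs(u_hat) ** 2) / n**2   # energy per resolved mode
    e_mode[1:-1] *= 2.0                          # account for conjugate modes
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)   # angular wavenumber
    dk = k[1] - k[0]
    return k, e_mode / dk

# Illustrative use on a synthetic signal
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
u = np.sin(4 * x) + 0.1 * np.random.randn(x.size)
k, E = energy_spectrum_1d(u, dx=x[1] - x[0])
```

For the stratified case one would compute such spectra separately from the horizontal and vertical velocity components before comparing against atmospheric or oceanic observations.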
The results of the numerical cases show that the large eddy simulation is capable of predicting the properties of stable stratification turbulence consistent with real measurements at a lower computational cost. It has been found in this paper that the turbulence can develop under different initial velocity conditions and that the internal wave energy is dominant in the developed stable stratification turbulence. It is also found that the characteristic parameters must satisfy certain conditions in order to obtain the correct statistical properties of stable stratification turbulence in the atmosphere and ocean. The Reynolds number and Froude number need not be equal to those in the atmosphere or ocean, but the Reynolds number must be large enough, say, greater than 10², and the Froude number must be less than 0.1. The most important parameter is ReFr², which must be greater than 10. 10. Essential parameters in eddy current inspection Stepinski, T. [Uppsala Univ. (Sweden). Signals and Systems] 2000-05-01 Our aim was to qualitatively analyze a number of variables that may affect the result of eddy current (EC) inspection but for various reasons are not considered as essential in common practice. In the report we concentrate on such variables that can vary during or between inspections but whose influence is not determined during routine calibrations. We present a qualitative analysis of the influence of the above-mentioned variables on the ability to detect and size flaws using mechanized eddy current testing (ET). ET employs some type of coil or probe, sensing magnetic flux generated by eddy currents induced in the tested specimen. An amplitude-phase modulated signal (with test frequency f0) from the probe is sensed by the EC instrument. The amplitude-phase modulated signal is amplified and demodulated in phase-sensitive detectors removing carrier frequency f0 from the signal. The detectors produce an in-phase and a quadrature component of the signal defining it as a point in the impedance plane. Modern instruments are provided with a screen presenting the demodulated and filtered signal in the complex plane. We focus on such issues related to the EC equipment as probe matching, distortion introduced by phase discriminators and signal filters, and the influence of probe resolution and lift-off on sizing. The influence of different variables is investigated by means of physical reasoning employing theoretical models and demonstrated using simulated and real EC signals. In conclusion, we discuss the way in which the investigated variables may affect the result of ET. We also present a number of practical recommendations for the users of ET and indicate the areas that are to be further analyzed. 11. An Angular Momentum Eddy Detection Algorithm (AMEDA) applied to coastal eddies Le Vu, Briac; Stegner, Alexandre; Arsouze, Thomas 2016-04-01 We present a new automated eddy detection and tracking algorithm based on the computation of the LNAM (Local and Normalized Angular Momentum). This method is an improvement on the previous method of Mkhinini et al. (2014), with the aim of being applicable to multiple datasets (satellite data, numerical models, laboratory experiments) using as few objective criteria as possible (a schematic version of the angular-momentum diagnostic is sketched below).
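The LNAM quantity at the heart of AMEDA (entry 11) is, in essence, a locally normalized angular momentum evaluated over a small neighbourhood of each grid point. The sketch below is a simplified stand-in for that idea, not the exact definition of Mkhinini et al. (2014); the neighbourhood radius, regular grid layout and normalization are assumptions made for illustration.

```python
import numpy as np

def local_normalized_angular_momentum(u, v, dx, dy, radius=3):
    """Schematic local, normalized angular-momentum diagnostic.

    For each interior grid point, sum the vertical component of r x u over a
    (2*radius+1)^2 neighbourhood and normalize by sum(|r||u|).  Values close
    to +/-1 flag rotation-dominated (eddy-core-like) regions.
    """
    ny, nx = u.shape
    lnam = np.zeros_like(u)
    jj, ii = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    rx, ry = ii * dx, jj * dy                      # position relative to centre
    for j in range(radius, ny - radius):
        for i in range(radius, nx - radius):
            uu = u[j - radius:j + radius + 1, i - radius:i + radius + 1]
            vv = v[j - radius:j + radius + 1, i - radius:i + radius + 1]
            cross = rx * vv - ry * uu              # vertical component of r x u
            norm = np.sum(np.hypot(rx, ry) * np.hypot(uu, vv))
            lnam[j, i] = cross.sum() / norm if norm > 0 else 0.0
    return lnam
```

Closed contours around a local extremum of such a field are then natural candidates for eddy centres, which is the general strategy the entry describes.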
First, we show the performance of the algorithm for three different sources of data: Mediterranean 1/8° AVISO geostrophic velocity fields based on the Absolute Dynamical Topography (ADT), a ROMS idealized simulation and a high resolution velocity field derived from PIV measurements in a rotating tank experiment. All the velocity fields describe the dynamical evolution of mesoscale eddies generated by the instability of coastal currents. Then, we compare the results of the AMEDA algorithm applied to the regional 1/8° AVISO Mediterranean data set with in situ measurements (drifter, ARGO, ADCP…). These quantitative comparisons with a few specific test cases enable us to estimate the accuracy of the method in quantifying eddy features: trajectory, size and intensity. We also use the AMEDA algorithm to identify the main formation areas of long-lived eddies in the Mediterranean Sea during the last 15 years. 12. Negative magnetic relaxation in superconductors Krasnoperov E.P. 2013-01-01 Full Text Available It was observed that the trapped magnetic moment of HTS tablets or annuli increases in time (negative relaxation) if they are not completely magnetized by a pulsed magnetic field. It is shown, in the framework of the Bean critical-state model, that the radial temperature gradient appearing in tablets or annuli during a pulsed field magnetization can explain the negative magnetic relaxation in the superconductor. 13. Large-eddy simulation in hydraulics Rodi, Wolfgang 2013-01-01 Complex turbulence phenomena are of great practical importance in hydraulics, including environmental flows, and require advanced methods for their successful computation. The Large Eddy Simulation (LES), in which the larger-scale turbulent motion is directly resolved and only the small-scale motion is modelled, is particularly suited for complex situations with dominant large-scale structures and unsteadiness. Due to the increasing computer power, LES is generally used more and more in Computational Fluid Dynamics. Also in hydraulics, it offers great potential, especially for near-field probl 14. Large eddy simulation in the ocean Scotti, Alberto 2010-12-01 Large eddy simulation (LES) is a relative newcomer to oceanography. In this review, both applications of traditional LES to oceanic flows and new oceanic LES still in an early stage of development are discussed. The survey covers LES applied to boundary layer flows, traditionally an area where LES has provided considerable insight into the physics of the flow, as well as more innovative applications, where new SGS closure schemes need to be developed. The merging of LES with large-scale models is also briefly reviewed. 15. Towards technical application of large eddy simulation Breuer, M. [Erlangen-Nuernberg Univ., Erlangen (DE). Inst. of Fluid Mechanics (LSTM)] 2001-07-01 The paper is concerned with the computation of high Reynolds number circular cylinder flow (Re = 3900/140,000) based on the large eddy simulation (LES) technique. Because this flow involves a variety of complex flow features encountered in technical applications, successful simulations for this test case, especially at high Reynolds numbers, can be considered as the first step to real world applications of LES. Based on an efficient finite-volume LES code, a detailed study on different aspects influencing the quality of LES results was carried out. In the present paper, some of the results are presented and compared with experimental measurements available. (orig.) 16.
Decay of eddies at the South-West Indian Ridge Andrew C. Coward 2011-11-01 Full Text Available The South-West Indian Ridge in the Indian sector of the Southern Ocean is a region recognised for the creation of particularly intense eddy disturbances in the mean flow of the Antarctic Circumpolar Current. Eddies formed at this ridge have been extensively studied over the past decade using hydrographic, satellite, drifter and float data and it is hypothesised that they could provide a vehicle for localised meridional heat and salt exchange. The effectiveness of this process is dependent on the rate of decay of the eddies. However, in order to investigate eddy decay, logistically difficult hydrographic monitoring is required. This study presents the decay of cold eddies at the South-West Indian Ridge, using outputs from a high-resolution ocean model. The model’s representation of the dynamic nature of this region is fully characteristic of observations. On average, 3–4 intense and well-defined cold eddies are generated per year; these eddies have mean longevities of 5.0±2.2 months with average advection speeds of 5±2 km/day. Most simulated eddies reach their peak intensity within 1.5–2.5 months after genesis and have depths of 2000 m – 3000 m. Thereafter they dissipate within approximately 3 months. The decay of eddies is generally characterised by a decrease in their sea surface height signature, a weakening in their rotation rates and a modification in their temperature–salinity characteristics. Subantarctic top predators are suspected to forage preferentially along the edges of eddies. The process of eddy dissipation may thus influence their feeding behaviour. 17. Engineering and Scaling the Spontaneous Magnetization Reversal of Faraday Induced Magnetic Relaxation in Nano-Sized Amorphous Ni Coated on Crystalline Au Wen-Hsien Li 2016-05-01 Full Text Available We report on the generation of large inverse remanent magnetizations in nano-sized core/shell structure of Au/Ni by turning off the applied magnetic field. The remanent magnetization is very sensitive to the field reduction rate as well as to the thermal and field processes before the switching off of the magnetic field. Spontaneous reversal in direction and increase in magnitude of the remanent magnetization in subsequent relaxations over time were found. All of the various types of temporal relaxation curves of the remanent magnetizations are successfully scaled by a stretched exponential decay profile, characterized by two pairs of relaxation times and dynamic exponents. The relaxation time is used to describe the reduction rate, while the dynamic exponent describes the dynamical slowing down of the relaxation through time evolution. The key to these effects is to have the induced eddy current running beneath the amorphous Ni shells through Faraday induction. 18. Developing large eddy simulation for turbomachinery applications. Eastwood, Simon J; Tucker, Paul G; Xia, Hao; Klostermeier, Christian 2009-07-28 For jets, large eddy resolving simulations are compared for a range of numerical schemes with no subgrid scale (SGS) model and for a range of SGS models with the same scheme. There is little variation in results for the different SGS models, and it is shown that, for schemes which tend towards having dissipative elements, the SGS model can be abandoned, giving what can be termed numerical large eddy simulation (NLES). More complex geometries are investigated, including coaxial and chevron nozzle jets. 
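Several of the entries above and below (e.g. the turbomachinery study in entry 18 and the SISM-based channel-flow study further down) revolve around the choice of subgrid-scale (SGS) model in LES. For reference, a minimal sketch of the classical Smagorinsky eddy viscosity on a 2-D grid is given below; the constant cs, the 2-D setting and the gradient-based strain estimate are illustrative choices, not values taken from those papers.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, cs=0.17):
    """Classical Smagorinsky subgrid viscosity on a 2-D grid:
    nu_t = (cs * delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij).
    cs and the filter-width estimate are illustrative assumptions.
    """
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    delta = np.sqrt(dx * dy)            # filter width taken as the grid scale
    return (cs * delta) ** 2 * s_mag
```

The shear-improved variant mentioned later (SISM) subtracts the magnitude of the mean strain from |S| before forming nu_t, which suppresses the subgrid viscosity in mean-shear-dominated near-wall regions; the "numerical LES" route discussed in entry 18 instead relies on the dissipation of the numerical scheme and drops the explicit model altogether.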
A near-wall Reynolds-averaged Navier-Stokes (RANS) model is used to cover over streak-like structures that cannot be resolved. Compressor and turbine flows are also successfully computed using a similar NLES-RANS strategy. Upstream of the compressor leading edge, the RANS layer is helpful in preventing premature separation. Capturing the correct flow over the turbine is particularly challenging, but nonetheless the RANS layer is helpful. In relation to the SGS model, for the flows considered, evidence suggests issues such as inflow conditions, problem definition and transition are more influential. 19. Magnetoresistive flux focusing eddy current flaw detection Wincheski, Russell A. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor) 2005-01-01 A giant magnetoresistive flux focusing eddy current device effectively detects deep flaws in thick multilayer conductive materials. The probe uses an excitation coil to induce eddy currents in conducting material perpendicularly oriented to the coil's longitudinal axis. A giant magnetoresistive (GMR) sensor, surrounded by the excitation coil, is used to detect generated fields. Between the excitation coil and GMR sensor is a highly permeable flux focusing lens which magnetically separates the GMR sensor and excitation coil and produces high flux density at the outer edge of the GMR sensor. The use of feedback inside the flux focusing lens enables complete cancellation of the leakage fields at the GMR sensor location and biasing of the GMR sensor to a location of high magnetic field sensitivity. In an alternate embodiment, a permanent magnet is positioned adjacent to the GMR sensor to accomplish the biasing. Experimental results have demonstrated identification of flaws up to 1 cm deep in aluminum alloy structures. To detect deep flaws about circular fasteners or inhomogeneities in thick multilayer conductive materials, the device is mounted in a hand-held rotating probe assembly that is connected to a computer for system control, data acquisition, processing and storage. 20. Eddy generation in the Mediterranean undercurrent Serra, Nuno; Ambar, Isabel In the framework of the European Union MAST III project Canary Islands Gibraltar Azores Observations, 24 RAFOS floats were deployed in the Mediterranean Water (MW) undercurrent off south Portugal between September 1997 and September 1998. A preliminary analysis of this Lagrangian approach, complemented with XBT and current-meter data, show some of the major aspects of the flow associated with the undercurrent as well as associated eddy activity. Floats that stayed in the undercurrent featured a downstream deceleration and a steering by bottom topography. Three meddy formations at Cape St. Vincent could be isolated from the float data. The dynamical coupling of meddies and cyclones was observed for a considerable period of time. The generation of two dipolar structures in the Portimão Canyon region also was observed with the float data. A major bathymetric relief—Gorringe Bank—was not only an important constraint to the eddy trajectories and of the flow at the MW levels but also a site for meddy formation. 1. Eddy correlation measurements of submarine groundwater discharge Crusius, J.; Berg, P.; Koopmans, D.J.; Erban, L. 2008-01-01 This paper presents a new, non-invasive means of quantifying groundwater discharge into marine waters using an eddy correlation approach. 
The method takes advantage of the fact that, in virtually all aquatic environments, the dominant mode of vertical transport near the sediment-water interface is turbulent mixing. The technique thus relies on measuring simultaneously the fluctuating vertical velocity using an acoustic Doppler velocimeter and the fluctuating salinity and/or temperature using rapid-response conductivity and/or temperature sensors. The measurements are typically done at a height of 5–15 cm above the sediment surface, at a frequency of 16 to 64 Hz, and for a period of 15 to 60 min. If the groundwater salinity and/or temperature differ from that of the water column, the groundwater specific discharge (cm d⁻¹) can be quantified from either a heat or salt balance. Groundwater discharge was estimated with this new approach in Salt Pond, a small estuary on Cape Cod (MA, USA). Estimates agreed well with previous estimates of discharge measured using seepage meters and ²²²Rn as a tracer. The eddy correlation technique has several desirable characteristics: 1) discharge is quantified under in-situ hydrodynamic conditions; 2) salinity and temperature can serve as two semi-independent tracers of discharge; 3) discharge can be quantified at high temporal resolution, and 4) long-term records of discharge may be possible, due to the low power requirements of the instrumentation. © 2007 Elsevier B.V. All rights reserved. 2. A subsurface cyclonic eddy in the Bay of Bengal Babu, M.T.; PrasannaKumar, S.; Rao, D.P. CTD data collected from the Northwestern Bay of Bengal during late July 1984 reveal the existence of a cold core subsurface eddy centred at 17°40'N and 85°19'E. The thermal structure observed across the eddy indicates... 3. Anisotropy of eddy variability in the global ocean Stewart, K. D.; Spence, P.; Waterman, S.; Sommer, J. Le; Molines, J.-M.; Lilly, J. M.; England, M. H. 2015-11-01 The anisotropy of eddy variability in the global ocean is examined in geostrophic surface velocities derived from satellite observations and in the horizontal velocities of a 1/12° global ocean model. Eddy anisotropy is of oceanographic interest as it is through anisotropic velocity fluctuations that the eddy and mean-flow fields interact dynamically. This study is timely because improved observational estimates of eddy anisotropy will soon be available with Surface Water and Ocean Topography (SWOT) altimetry data. We find there to be good agreement between the characteristics and distributions of eddy anisotropy from the present satellite observations and model ocean surface. In the model, eddy anisotropy is found to have significant vertical structure and is largest close to the ocean bottom, where the anisotropy aligns with the underlying isobaths. The highly anisotropic bottom signal is almost entirely contained in the barotropic variability. Upper-ocean variability is predominantly baroclinic and the alignment is less sensitive to the underlying bathymetry. These findings offer guidance for introducing a parameterization of eddy feedbacks, based on the eddy kinetic energy and underlying bathymetry, to operate on the barotropic flow and better account for the effects of barotropic Reynolds stresses unresolved in coarse-resolution ocean models. 4. Significant sink of ocean-eddy energy near western boundaries Zhai, Xiaoming; Johnson, Helen L.; Marshall, David P. 2010-09-01 Ocean eddies generated through instability of the mean flow are a vital component of the energy budget of the global ocean.
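Returning to the eddy-correlation discharge method of entry 1 above: once the fluctuating vertical velocity and salinity records are in hand, the specific discharge follows from the turbulent salt flux and the salinity contrast between groundwater and the water column. The sketch below covers only that salt balance; despiking, detrending and coordinate rotation, which a real processing chain needs, are omitted, and the function interface is an assumption for illustration.

```python
import numpy as np

def specific_discharge_from_salt_balance(w, s, s_groundwater, s_watercolumn):
    """Schematic eddy-covariance estimate of groundwater specific discharge.

    w, s : simultaneous vertical-velocity (m/s) and salinity time series
    The end-member salinities of groundwater and water column are assumed known.
    Uses the salt balance q = <w'S'> / (S_gw - S_wc).
    """
    w_prime = w - w.mean()
    s_prime = s - s.mean()
    turb_flux = np.mean(w_prime * s_prime)            # <w'S'>
    q = turb_flux / (s_groundwater - s_watercolumn)   # m/s
    return q * 100.0 * 86400.0                        # convert to cm per day
```

An analogous heat balance uses the temperature record in place of salinity, which is why the two tracers can serve as semi-independent checks on each other.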
In equilibrium, the sources and sinks of eddy energy have to be balanced. However, where and how eddy energy is removed remains uncertain. Ocean eddies are observed to propagate westwards at speeds similar to the phase speeds of classical Rossby waves, but what happens to the eddies when they encounter the western boundary is unclear. Here we use a simple reduced-gravity model along with satellite altimetry data to show that the western boundary acts as a graveyard' for the westward-propagating ocean eddies. We estimate a convergence of eddy energy near the western boundary of approximately 0.1-0.3TW, poleward of 10° in latitude. This energy is most probably scattered into high-wavenumber vertical modes, resulting in energy dissipation and diapycnal mixing. If confirmed, this eddy-energy sink will have important implications for the ocean circulation. 5. Eddy Currents: Levitation, Metal Detectors, and Induction Heating Wouch, G.; Lord, A. E., Jr. 1978-01-01 A simple and accessible calculation is given of the effects of eddy currents for a sphere in the field of a single circular loop of alternating current. These calculations should help toward the inclusion of eddy current effects in upper undergraduate physics courses. (BB) 6. Calculation of Eddy currents in the ETE spherical torus Ludwig, Gerson Otto 2002-07-01 A circuit model based on a Green's function method was developed to evaluate the currents induced during startup in the vessel of ETE (Spherical Tokamak Experiment). The eddy currents distribution is calculated using a thin shell approximation for the vacuum vessel and local curvilinear coordinates. The results are compared with values of the eddy currents measured in ETE. (author) 7. A synthesis of similarity and eddy-viscosity models Verstappen, R.; Friedrich, R; Geurts, BJ; Metais, O 2004-01-01 In large-eddy simulation, a low-pass spatial filter is usually applied to the Navier-Stokes equations. The resulting commutator of the filter and the nonlinear term is usually modelled by an eddy-viscosity model, by a similarity model or by a mix thereof. Similarity models possess the proper mathema 8. Nonlinear Eddy Viscosity Models applied to Wind Turbine Wakes Laan, van der, Paul Maarten; Sørensen, Niels N.; Réthoré, Pierre-Elouan; 2013-01-01 The linear k−ε eddy viscosity model and modified versions of two existing nonlinear eddy viscosity models are applied to single wind turbine wake simulations using a Reynolds Averaged Navier-Stokes code. Results are compared with field wake measurements. The nonlinear models give better results... 9. ON THE EDDY VISCOSITY MODEL OF PERIODIC TURBULENT SHEAR FLOWS 王新军; 罗纪生; 周恒 2003-01-01 Physical argument shows that eddy viscosity is essentially different from molecular viscosity. By direct numerical simulation, it was shown that for periodic turbulent flows, there is phase difference between Reynolds stress and rate of strain. This finding posed great challenge to turbulence modeling, because most turbulence modeling, which use the idea of eddy viscosity, do not take this effect into account. 10. Conditional eddies, or clumps, in ion-beam-generated turbulence Johnsen, Helene; Pecseli, H. L.; Trulsen, J. 1985-01-01 with a relatively long lifetime in terms of the average bounce period is observed. Particles bouncing in the potential well associated with these eddies' will necessarily remain correlated for times determined by the eddy lifetime. The results thus provide evidence for clump formation in plasmas... 11. 
Large eddy simulation of turbulent mixing by using 3D decomposition method Issakhov, Alibek, E-mail: aliisahov@mail.ru [al-Farabi Kazakh National University, Almaty (Kazakhstan)] 2011-12-22 A parallel implementation of an algorithm for the numerical solution of the Navier-Stokes equations for large eddy simulation (LES) of turbulence is presented in this research. The dynamic Smagorinsky model is applied for sub-grid simulation of turbulence. The numerical algorithm is based on a splitting scheme with respect to physical processes. At the first stage, momentum transfer is assumed to occur only through convection and diffusion. The intermediate velocity field is determined by the fractional step method using the Thomas algorithm (tridiagonal matrix algorithm). At the second stage, the intermediate velocity field is used to determine the pressure field; the three-dimensional Poisson equation for the pressure field is solved using an over-relaxation method (a sketch of the tridiagonal solve used in the first stage is given below). 12. Investigation of Particles Statistics in large Eddy Simulated Turbulent Channel Flow using Generalized lattice Boltzmann Method Mandana Samari Kermani 2016-01-01 Full Text Available The interaction of spherical solid particles with turbulent eddies in a 3-D turbulent channel flow with friction Reynolds number … was studied. A generalized lattice Boltzmann equation (GLBE) was used for computation of the instantaneous turbulent flow field, for which large eddy simulation (LES) was employed. The sub-grid-scale (SGS) turbulence effects were simulated through a shear-improved Smagorinsky model (SISM), which can predict the turbulent near wall region without any wall function. Statistical properties of particle behavior such as root mean square (RMS) velocities were studied as a function of the dimensionless particle relaxation time (…) by using a Lagrangian approach. The combination of the SISM in the GLBE with particle tracking analysis in turbulent channel flow is the novelty of the present work. Both the GLBE and the SISM solve the flow field equations locally. This is an advantage of the method and makes it easy to implement. Comparison of the present results with previously available data indicated that the SISM in the GLBE is a reliable method for simulation of turbulent flows, which is a key point for predicting particle behavior correctly. 13. Nonlinear inertial oscillations of a multilayer eddy: An analytical solution Dotsenko, S. F.; Rubino, A. 2008-06-01 Nonlinear axisymmetric oscillations of a warm baroclinic eddy are considered within the framework of a reduced-gravity model of the dynamics of a multilayer ocean. A class of exact analytical solutions describing pure inertial oscillations of an eddy formation is found. The thicknesses of layers in the eddy vary according to a quadratic law, and the horizontal projections of the velocity in the layers depend linearly on the radial coordinate. Owing to the complicated structure of the eddy, weak limitations on the vertical distribution of density, and an explicit form of the solution, the latter can be treated as a generalization of the exact analytical solutions of this form that were previously obtained for homogeneous and baroclinic eddies in the ocean. 14. Effects of Drake Passage on a strongly eddying global ocean Viebahn, Jan P; Bars, Dewi Le; Dijkstra, Henk A 2015-01-01 The climate impact of ocean gateway openings during the Eocene-Oligocene transition is still under debate. Previous model studies employed grid resolutions at which the impact of mesoscale eddies has to be parameterized.
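As referenced in entry 11 above, the implicit convection-diffusion step of such a fractional-step scheme reduces to solving tridiagonal systems along grid lines. A minimal Python version of the Thomas algorithm is sketched below; the interface and array layout are assumptions for illustration, not code from the paper, and the pressure Poisson step mentioned there would instead use an iterative over-relaxation sweep.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Thomas algorithm for the tridiagonal system
    a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], with a[0] = c[-1] = 0.
    Forward elimination followed by back substitution, O(n) operations.
    """
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Applied line by line in each coordinate direction, this is what makes the implicit first stage of the splitting scheme cheap compared with a full 3-D solve.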
We present results of a state-of-the-art eddy-resolving global ocean model with a closed Drake Passage, and compare with results of the same model at non-eddying resolution. An analysis of the pathways of heat by decomposing the meridional heat transport into eddy, horizontal, and overturning circulation components indicates that the model behavior on the large scale is qualitatively similar at both resolutions. Closing Drake Passage induces (i) sea surface warming around Antarctica due to changes in the horizontal circulation of the Southern Ocean, (ii) the collapse of the overturning circulation related to North Atlantic Deep Water formation leading to surface cooling in the North Atlantic, (iii) significant equatorward eddy heat transport near Antarctica. However, quantitative details sign... 15. On the interactions between planetary geostrophy and mesoscale eddies Grooms, Ian; Julien, Keith; Fox-Kemper, Baylor 2011-04-01 Multiscale asymptotics are used to derive three systems of equations connecting the planetary geostrophic (PG) equations for gyre-scale flow to a quasigeostrophic (QG) equation set for mesoscale eddies. Pedlosky (1984), following similar analysis, found eddy buoyancy fluxes to have only a small effect on the large-scale flow; however, numerical simulations disagree. While the impact of eddies is relatively small in most regions, in keeping with Pedlosky's result, eddies have a significant effect on the mean flow in the vicinity of strong, narrow currents. First, the multiple-scales analysis of Pedlosky is reviewed and amplified. Novel results of this analysis include new multiple-scales models connecting large-scale PG equations to sets of QG eddy equations. However, only introducing anisotropic scaling of the large-scale coordinates allows us to derive a model with strong two-way coupling between the QG eddies and the PG mean flow. This finding reconciles the analysis with simulations, viz. that strong two-way coupling is observed in the vicinity of anisotropic features of the mean flow like boundary currents and jets. The relevant coupling terms are shown to be eddy buoyancy fluxes. Using the Gent-McWilliams parameterization to approximate these fluxes allows solution of the PG equations with closed tracer fluxes in a closed domain, which is not possible without mesoscale eddy (or other small-scale) effects. The boundary layer width is comparable to an eddy mixing length when the typical eddy velocity is taken to be the long Rossby wave phase speed, which is the same result found by Fox-Kemper and Ferrari (2009) in a reduced gravity layer. 16. Origin of the magnetic-field dependence of the nuclear spin-lattice relaxation in iron Seewald, G; Körner, H J; Borgmann, D; Dietrich, M 2008-01-01 The magnetic-field dependence of the nuclear spin-lattice relaxation at Ir impurities in Fe was measured for fields between 0 and 2 T parallel to the [100] direction. The reliability of the applied technique of nuclear magnetic resonance on oriented nuclei was demonstrated by measurements at different radio-frequency (rf) field strengths. The interpretation of the relaxation curves, which used transition rates to describe the excitation of the nuclear spins by a frequency-modulated rf field, was confirmed by model calculations. The magnetic-field dependence of the so-called enhancement factor for rf fields, which is closely related to the magnetic-field dependence of the spin-lattice relaxation, was also measured. 
For several magnetic-field-dependent relaxation mechanisms, the form and the magnitude of the field dependence were derived. Only the relaxation via eddy-current damping and Gilbert damping could explain the observed field dependence. Using reasonable values of the damping parameters, the field depe... 17. The prospect of using large eddy and detached eddy simulations in engineering design, and the research required to get there. 2014-08-13 In this paper, we try to look into the future to envision how large eddy and detached eddy simulations will be used in the engineering design process about 20-30 years from now. Some key challenges specific to the engineering design process are identified, and some of the critical outstanding problems and promising research directions are discussed. 18. Effect of reactions in small eddies on biomass gasification with eddy dissipation concept - Sub-grid scale reaction model. Chen, Juhui; Yin, Weijie; Wang, Shuai; Meng, Cheng; Li, Jiuru; Qin, Bai; Yu, Guangbin 2016-07-01 Large-eddy simulation (LES) approach is used for gas turbulence, and eddy dissipation concept (EDC)-sub-grid scale (SGS) reaction model is employed for reactions in small eddies. The simulated gas molar fractions are in better agreement with experimental data with EDC-SGS reaction model. The effect of reactions in small eddies on biomass gasification is emphatically analyzed with EDC-SGS reaction model. The distributions of the SGS reaction rates which represent the reactions in small eddies with particles concentration and temperature are analyzed. The distributions of SGS reaction rates have the similar trend with those of total reactions rates and the values account for about 15% of the total reactions rates. The heterogeneous reaction rates with EDC-SGS reaction model are also improved during the biomass gasification process in bubbling fluidized bed. 19. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi Feature extraction for mental fatigue and relaxation states is helpful to understand the mechanisms of mental fatigue and search effective relaxation technique in sustained work environments. Experiment data of human states are often affected by external and internal factors, which increase the difficulties to extract common features. The aim of this study is to explore appropriate methods to eliminate individual difference and enhance common features. Mental fatigue and relaxation experiments are executed on 12 subjects. An integrated and evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance and neurophysiological signals especially EEG signals. With consideration of individual difference, the common features of multi-estimators testify the effectiveness of relaxation in sustained mental work. Relaxation technique can be practically applied to prevent accumulation of mental fatigue and keep mental health. The proposed feature extraction methods are widely applicable to obtain common features and release the restriction for subjection selection and experiment design. 20. Dynamical theory of spin relaxation Field, Timothy R.; Bain, Alex D. 2013-02-01 The dynamics of a spin system is usually calculated using the density matrix. 
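Before entry 20 continues: as a point of reference for the spin-relaxation discussions in entries 16 and 20, the phenomenological Bloch picture of T1/T2 relaxation, which the density-matrix and stochastic treatments refine, can be sketched in a few lines. The parameter values below are arbitrary examples, not quantities from either paper.

```python
import numpy as np

def bloch_relaxation(m_eq, mz0, mxy0, t1, t2, t):
    """Phenomenological Bloch-equation relaxation (rotating frame, no RF):
    Mz(t)  = M_eq + (Mz0 - M_eq) * exp(-t/T1)   (spin-lattice relaxation)
    Mxy(t) = Mxy0 * exp(-t/T2)                  (spin-spin relaxation)
    """
    mz = m_eq + (mz0 - m_eq) * np.exp(-t / t1)
    mxy = mxy0 * np.exp(-t / t2)
    return mz, mxy

# Illustrative recovery after inversion, with made-up time constants
t = np.linspace(0.0, 5.0, 200)                  # seconds
mz, mxy = bloch_relaxation(m_eq=1.0, mz0=-1.0, mxy0=1.0, t1=1.0, t2=0.3, t=t)
```

The entries above go beyond this picture: entry 16 asks which microscopic mechanism sets the field dependence of T1, and entry 20 asks how such decays emerge from the fluctuating dynamics of individual spins.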
However, the usual formulation in terms of the density matrix predicts that the signal will decay to zero, and does not address the issue of individual spin dynamics. Using stochastic calculus, we develop a dynamical theory of spin relaxation, the origins of which lie in the component spin fluctuations. This entails consideration of random pure states for individual protons, and how these pure states are correctly combined when the density matrix is formulated. Both the lattice and the spins are treated quantum mechanically. Such treatment incorporates both the processes of spin-spin and (finite temperature) spin-lattice relaxation. Our results reveal the intimate connections between spin noise and conventional spin relaxation. 1. A mixed relaxed clock model 2016-01-01 Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall in two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on the top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions for its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue ‘Dating species divergences using rocks and clocks’. PMID:27325829 2. LAVENDER AROMATERAPHY AS A RELAXANT IGA Prima Dewi AP 2013-02-01 Full Text Available Aromatherapy is a kind of treatment that used aroma with aromatherapy essential oil. Extraction process from essential oil generally doing in three methods, there are distilling with water (boiled, distilling with water and steam, and distilling with steam. One of the most favorite aroma is lavender. The main content from lavender is linalyl acetate and linalool (C10H18O. Linalool is main active contents in lavender which can use for anti-anxiety (relaxation. Based on some research, the conclusion indicates that essential oil from lavender can give relaxation (carminative, sedative, reduce anxiety level and increasing mood. 3. Statistical mechanics of violent relaxation Spergel, David N.; Hernquist, Lars 1992-01-01 We propose a functional that is extremized through violent relaxation. 
It is based on the Ansatz that the wave-particle scattering during violent dynamical processes can be approximated as a sequence of discrete scattering events that occur near a particle's perigalacticon. This functional has an extremum whose structure closely resembles that of spheroidal stellar systems such as elliptical galaxies. The results described here, therefore, provide a simple framework for understanding the physical nature of violent relaxation and support the view that galaxies are structured in accord with fundamental statistical principles. 4. Active optomechanics through relaxation oscillations Princepe, Debora; Frateschi, Newton 2014-01-01 We propose an optomechanical laser based on III-V compounds which exhibits self-pulsation in the presence of a dissipative optomechanical coupling. In such a laser cavity, radiation pressure drives the mechanical degree of freedom and its back-action is caused by the mechanical modulation of the cavity loss rate. Our numerical analysis shows that even in a wideband gain material, such dissipative coupling couples the mechanical oscillation with the laser relaxation oscillations process. Laser self-pulsation is observed for mechanical frequencies below the laser relaxation oscillation frequency under sufficiently high optomechanical coupling factor. 5. Thermal relaxation and mechanical relaxation of rice gel 丁玉琴; 赵思明; 熊善柏 2008-01-01 Rice gel was prepared by simulating the production processes of Chinese local rice noodles,and the properties of thermal relaxation and mechanical relaxation during gelatinization were studied by differential scanning calorimetry(DSC) measurement and dynamic rheometer.The results show that during gelatinization,the molecular chains of rice starch undergo the thermal relaxation and mechanical relaxation.During the first heating and high temperature holding processes,the starch crystallites in the rice slurry melt,and the polymer chains stretch and interact,then viscoelastic gel forms.The cooling and low temperatures holding processes result in reinforced networks and decrease the viscoelasticity of the gel.During the second heating,the remaining starch crystallites further melt,the network is reinforced,and the viscoelasticity increases.The viscoelasticity,the molecular conformation and texture of the gel are adjusted by changing the temperature,and finally construct the gel with the textural characteristics of Chinese local rice noodle. 6. Large eddy simulations in 2030 and beyond. Piomelli, U 2014-08-13 Since its introduction, in the early 1970s, large eddy simulations (LES) have advanced considerably, and their application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions and the hybridization of LES with the solution of the Reynolds-averaged Navier-Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow-solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise. 7. 
Large eddy simulation of breaking waves Christensen, Erik Damgaard; Deigaard, Rolf 2001-01-01 is described by large eddy simulation where the larger turbulent features are simulated by solving the flow equations, and the small scale turbulence that is not resolved by the flow model is represented by a sub-grid model. A simple Smagorinsky sub-grid model has been used for the present simulations......A numerical model is used to simulate wave breaking, the large scale water motions and turbulence induced by the breaking process. The model consists of a free surface model using the surface markers method combined with a three-dimensional model that solves the flow equations. The turbulence....... The incoming waves are specified by a flux boundary condition. The waves are approaching in the shore-normal direction and are breaking on a plane, constant slope beach. The first few wave periods are simulated by a two-dimensional model in the vertical plane normal to the beach line. The model describes... 8. Large eddy simulation applications in gas turbines. Menzies, Kevin 2009-07-28 The gas turbine presents significant challenges to any computational fluid dynamics techniques. The combination of a wide range of flow phenomena with complex geometry is difficult to model in the context of Reynolds-averaged Navier-Stokes (RANS) solvers. We review the potential for large eddy simulation (LES) in modelling the flow in the different components of the gas turbine during a practical engineering design cycle. We show that while LES has demonstrated considerable promise for reliable prediction of many flows in the engine that are difficult for RANS it is not a panacea and considerable application challenges remain. However, for many flows, especially those dominated by shear layer mixing such as in combustion chambers and exhausts, LES has demonstrated a clear superiority over RANS for moderately complex geometries although at significantly higher cost which will remain an issue in making the calculations relevant within the design cycle. 9. Large-eddy simulation of contrails Chlond, A. [Max-Planck-Inst. fuer Meteorologie, Hamburg (Germany) 1997-12-31 A large eddy simulation (LES) model has been used to investigate the role of various external parameters and physical processes in the life-cycle of contrails. The model is applied to conditions that are typical for those under which contrails could be observed, i.e. in an atmosphere which is supersaturated with respect to ice and at a temperature of approximately 230 K or colder. The sensitivity runs indicate that the contrail evolution is controlled primarily by humidity, temperature and static stability of the ambient air and secondarily by the baroclinicity of the atmosphere. Moreover, it turns out that the initial ice particle concentration and radiative processes are of minor importance in the evolution of contrails at least during the 30 minutes simulation period. (author) 9 refs. 10. Eddy-current-damped microelectromechanical switch Christenson, Todd R. (Albuquerque, NM); Polosky, Marc A. (Tijeras, NM) 2007-10-30 A microelectromechanical (MEM) device is disclosed that includes a shuttle suspended for movement above a substrate. A plurality of permanent magnets in the shuttle of the MEM device interact with a metal plate which forms the substrate or a metal portion thereof to provide an eddy-current damping of the shuttle, thereby making the shuttle responsive to changes in acceleration or velocity of the MEM device. 
Alternately, the permanent magnets can be located in the substrate, and the metal portion can form the shuttle. An electrical switch closure in the MEM device can occur in response to a predetermined acceleration-time event. The MEM device, which can be fabricated either by micromachining or LIGA, can be used for sensing an acceleration or deceleration event (e.g. in automotive applications such as airbag deployment or seat belt retraction). 11. Eddy-current-damped microelectromechanical switch Christenson, Todd R. (Albuquerque, NM); Polosky, Marc A. (Tijeras, NM) 2009-12-15 A microelectromechanical (MEM) device is disclosed that includes a shuttle suspended for movement above a substrate. A plurality of permanent magnets in the shuttle of the MEM device interact with a metal plate which forms the substrate or a metal portion thereof to provide an eddy-current damping of the shuttle, thereby making the shuttle responsive to changes in acceleration or velocity of the MEM device. Alternately, the permanent magnets can be located in the substrate, and the metal portion can form the shuttle. An electrical switch closure in the MEM device can occur in response to a predetermined acceleration-time event. The MEM device, which can be fabricated either by micromachining or LIGA, can be used for sensing an acceleration or deceleration event (e.g. in automotive applications such as airbag deployment or seat belt retraction). 12. Large-eddy simulation of propeller noise Keller, Jacob; Mahesh, Krishnan 2016-11-01 We will discuss our ongoing work towards developing the capability to predict far field sound from the large-eddy simulation of propellers. A porous surface Ffowcs-Williams and Hawkings (FW-H) acoustic analogy, with a dynamic endcapping method (Nitzkorski and Mahesh, 2014) is developed for unstructured grids in a rotating frame of reference. The FW-H surface is generated automatically using Delaunay triangulation and is representative of the underlying volume mesh. The approach is validated for tonal trailing edge sound from a NACA 0012 airfoil. LES of flow around a propeller at design advance ratio is compared to experiment and good agreement is obtained. Results for the emitted far field sound will be discussed. This work is supported by ONR. 13. Direct and large-eddy simulation IX Kuerten, Hans; Geurts, Bernard; Armenio, Vincenzo 2015-01-01 This volume reflects the state of the art of numerical simulation of transitional and turbulent flows and provides an active forum for discussion of recent developments in simulation techniques and understanding of flow physics. Following the tradition of earlier DLES workshops, these papers address numerous theoretical and physical aspects of transitional and turbulent flows. At an applied level it contributes to the solution of problems related to energy production, transportation, magneto-hydrodynamics and the environment. A special session is devoted to quality issues of LES. The ninth Workshop on 'Direct and Large-Eddy Simulation' (DLES-9) was held in Dresden, April 3-5, 2013, organized by the Institute of Fluid Mechanics at Technische Universität Dresden. This book is of interest to scientists and engineers, both at an early level in their career and at more senior levels. 14. 
Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank 2017-07-01 Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow on the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing.Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign. 15. Turbulence Spectra and Eddy Diffusivity over Forests. Lee, Xuhui 1996-08-01 The main objectives of this observational study are to examine the stability dependence of velocity and air temperature spectra and to employ the spectral quantities to establish relations for eddy diffusivity over forests. The datasets chosen for the analysis were collected above the Browns River forest and the Camp Borden forest over a wide range of stability conditions.Under neutral and unstable conditions the nondimensional dissipation rate of turbulent kinetic energy (TKE) over the forests is lower than that from its Monin-Obukhov similarity (MOS) function for the smooth-wall surface layer. The agreement is somewhat better under stable conditions but a large scatter is evident. When the frequency is made nondimensional by the height of the stand (h) and the longitudinal velocity at this height (uh, the Kaimal spectral model for neutral air describes the observations very well. The eddy diffusivity formulation K = c 4w/ provides a promising alternative to the MOS approach, where w is the standard deviation of the vertical velocity and TKE dissipation rate. Current datasets yield a constant of 0.43 for c for sensible heat in neutral and stable air, a value very close to that for the smooth-wall surface layer. It is postulated that c is a conservative parameter for sensible heat in the unstable air, its value probably falling between 0.41 and 0.54. In the absence of data, it is possible to estimate K from measurements of the local mean wind u and air stability. As a special case, it is shown that K = 0.27(uh/uh)w under neutral stability. This relation is then used to establish a profile model for wind speed and scalar concentration in the roughness sublayer. The analysis points out that uh and h are important scaling parameters in attempts to formulate quantitative relations for turbulence over tall vegetation. 16. Eddy Correlation Flux Measurement System Handbook Cook, D. R. [Argonne National Lab. 
(ANL), Argonne, IL (United States) 2016-01-01 The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat, and carbon dioxide (CO2) (and methane at one Southern Great Plains extended facility (SGP EF) and the North Slope of Alaska Central Facility (NSA CF). The fluxes are obtained with the eddy covariance technique, which involves correlation of the vertical wind component with the horizontal wind component, the air temperature, the water vapor density, and the CO2 concentration. The instruments used are: • a fast-response, three-dimensional (3D) wind sensor (sonic anemometer) to obtain the orthogonal wind components and the speed of sound (SOS) (used to derive the air temperature) • an open-path infrared gas analyzer (IRGA) to obtain the water vapor density and the CO2 concentration, and • an open-path infrared gas analyzer (IRGA) to obtain methane density and methane flux at one SGP EF and at the NSA CF. The ECOR systems are deployed at the locations where other methods for surface flux measurements (e.g., energy balance Bowen ratio [EBBR] systems) are difficult to employ, primarily at the north edge of a field of crops. A Surface Energy Balance System (SEBS) has been installed collocated with each deployed ECOR system in SGP, NSA, Tropical Western Pacific (TWP), ARM Mobile Facility 1 (AMF1), and ARM Mobile Facility 2 (AMF2). The surface energy balance system consists of upwelling and downwelling solar and infrared radiometers within one net radiometer, a wetness sensor, and soil measurements. The SEBS measurements allow the comparison of ECOR sensible and latent heat fluxes with the energy balance determined from the SEBS and provide information on wetting of the sensors for data quality purposes. The SEBS at one SGP and one NSA site also support upwelling and downwelling PAR measurements to qualify those two locations as Ameriflux sites. 17. An eddy closure for potential vorticity Ringler, Todd D [Los Alamos National Laboratory 2009-01-01 The Gent-McWilliams (GM) parameterization is extended to include a direct influence in the momentum equation. The extension is carried out in two stages; an analysis of the inviscid system is followed by an analysis of the viscous system. In the inviscid analysis the momentum equation is modified such that potential vorticity is conserved along particle trajectories following a transport velocity that includes the Bolus velocity in a manner exactly analogous to the continuity and tracer equations. In addition (and in contrast to traditional GM closures), the new formulation of the inviscid momentum equation results in a conservative exchange between potential and kinetic forms of energy. The inviscid form of the eddy closure conserves total energy to within an error proportional to the time derivative of the Bolus velocity. The hypothesis that the viscous term in the momentum equation should give rise to potential vorticity being diffused along isopycnals in a manner analogous to other tracers is examined in detail. While the form of the momentum closure that follows from a strict adherence to this hypothesis is not immediately interpretable within the constructs of traditional momentum closures, three approximations to this hypothesis results in a form of dissipation that is consistent with traditional Laplacian diffusion. 
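For orientation on entry 17, which extends the Gent-McWilliams (GM) closure it references: in its standard form GM prescribes an eddy-induced ("bolus") velocity from the isopycnal slope and a thickness diffusivity kappa. A minimal y-z sketch is given below; the sign convention, the constant kappa and the crude safeguard against weak stratification are assumptions made for illustration, not the extended closure of the entry itself.

```python
import numpy as np

def gm_bolus_velocity(b, kappa, dy, dz):
    """Gent-McWilliams eddy-induced ('bolus') velocity in a y-z section.

    slope = -(db/dy) / (db/dz)      isopycnal slope
    psi   = kappa * slope           eddy-induced streamfunction
    v*    = -d(psi)/dz,  w* = d(psi)/dy
    b is buoyancy on a (z, y) grid; kappa is a constant thickness diffusivity.
    Slope limiting near weak stratification is reduced to a simple floor here.
    """
    dbdy = np.gradient(b, dy, axis=1)
    dbdz = np.gradient(b, dz, axis=0)
    safe_dbdz = np.where(np.abs(dbdz) > 1e-12, dbdz, 1e-12)
    psi = kappa * (-dbdy / safe_dbdz)
    v_star = -np.gradient(psi, dz, axis=0)
    w_star = np.gradient(psi, dy, axis=1)
    return v_star, w_star
```

The point of entry 17 is precisely that the same kappa appearing here should, in its proposed potential-vorticity formulation, also control the dissipative term in the momentum equation.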
The first two approximations are that relative vorticity, not potential vorticity, is diffused along isopycnals and that the flow is in approximate geostrophic balance. An additional approximation to the Jacobian term is required when the dissipation coefficient varies in space. More importantly, the critique of this hypothesis results in the conclusion that the viscosity parameter in the momentum equation should be identical to the traditional GM closure parameter κ. Overall, we deem the viscous form of the eddy closure for potential vorticity to be a viable closure for use in ocean circulation models. 18. Modelling cyclonic eddies in the Delagoa Bight region Cossa, O.; Pous, S.; Penven, P.; Capet, X.; Reason, C. J. C. 2016-05-01 The objective of this study is to document and shed light on the circulation around the Delagoa Bight region in the southern Mozambique Channel using a realistic modelling approach. A simulation including mesoscale forcings at the boundaries of our regional configuration succeeds in reproducing the general circulation in the region as well as the existence of a semi-permanent cyclonic eddy, whose existence is attested by in situ measurements in the Bight. Characterised by a persistent local minimum in SSH located around 26°S-34°E, this cyclonic eddy, termed herein the Delagoa Bight lee eddy, occurs about 25% of the time with no clear seasonal preference. Poleward moving cyclones, mostly generated further north, occur another 25% of the time in the Bight area. A tracking method applied to eddies generated in Delagoa Bight using model outputs as well as AVISO data confirms the model realism and provides additional statistics. The diameter of the eddy core varies between 61 and 147 km and the average life time exceeds 20 days. Additional model analyses reveal the systematic presence of negative vorticity in the Bight that can organise and form a Delagoa Bight lee eddy depending on the intensity of an intermittent southward flow along the shore and the spatial distribution of surrounding mesoscale features. In addition, the model solution shows other cyclonic eddies generated near Inhambane and eventually travelling through the Bight. Their generation and pathways appear to be linked with large Mozambique Channel rings. 19. Eddies spatial variability at Makassar Strait – Flores Sea Nuzula, F.; Syamsudin, M. L.; Yuliadi, L. P. S.; Purba, N. P.; Martono 2017-01-01 This study was aimed at obtaining the spatial and temporal distribution of eddies from Makassar Waters (MW) to the Flores Sea (FS), as well as its relation with upwelling, downwelling, and the chlorophyll-a concentration. The study area extends from 115°–125°E to 2.5°–8°S. The datasets consisted of monthly geostrophic currents, sea surface heights, sea surface temperatures, and chlorophyll-a from 2008 to 2012. The results showed that eddies found in the Makassar Strait (MS) had the largest diameter and speed, 255.3 km and 21.4 cm/s respectively, while those in the southern MW reached 266.4 km and 15.6 cm/s, and those in the FS 182.04 km and 11.4 cm/s. Of the 51 eddies found in total, the majority were anticyclonic. In the MS and FS, eddies formed throughout the year, whereas in the southern MW they were absent during the west season. Moreover, the chlorophyll-a concentration was consistently higher in the eddy areas. However, the correlation between the eddies and the upwelling/downwelling phenomena was not significant, as shown by the sea surface temperature values. 20.
Dynamics of Eddies in the Southeastern Tropical Indian Ocean Hanifah, F.; Ningsih, N. S.; Sofian, I. 2016-08-01 A holistic study was done on eddies in the Southeastern Tropical Indian Ocean (SETIO) using the HYbrid Coordinate Ocean Model (HYCOM) for 64 years (from 1950 to 2013). The results from the model were verified against the current and the Sea Surface Height Anomaly (SSHA) from Ocean Surface Current Analyses - Real time (OSCAR) and Archiving, Validation and Interpretation of Satellite Oceanographic Data (AVISO) respectively. The verification showed that the model simulates the condition in the area of study relatively well. We discovered that the local wind was not the only factor that contributed to the formation of eddies in the area. The difference in South Java Current (SJC) flow compared to the Indonesian Throughflow (ITF) and South Equatorial Current (SEC) flow as well as the difference in the relative velocity between the currents in the area led us to suspect that shear velocity may be responsible for the formation of eddies. The results from our model corroborated our prediction about shear velocity. Therefore, we attempted to explain the appearance of eddies in the SETIO based on the concept of shear velocity. By observing and documenting the occurrences of eddies in the area, we found that there are 8 cyclonic and 7 anticyclonic eddies in the SETIO. The distribution and frequency of the appearance of eddies varies, depending on the season. 1. Study on the mesoscale eddies around the Ryukyu Islands HAN Shuzong; XU Changsan; WU Huiming; WANG Gang; PEI Junfeng; FAN Yongbin; WANG Xingchi 2016-01-01 Results of the Ocean General Circulation Model for the Earth Simulator (OFES) from January 1977 to December 2006 are used to investigate mesoscale eddies near the Ryukyu Islands. The results show that: (1) Larger eddies are mainly east of Taiwan, above the Ryukyu Trench and south of the Shikoku Island. These three sea areas are all in the vicinity of the Ryukyu Current. (2) Eddies in the area of the Ryukyu Current are mainly anticyclonic, and conducive to that current. The transport of water east of the Ryukyu Islands is mainly toward the northeast. (3) The Ryukyu Current is significantly affected by the eddies. The lower the latitude, the greater these effects. However, the Kuroshio is relatively stable, and the effect of mesoscale eddies is not significant. (4) A warm eddy south of the Shikoku Island break away from the Kuroshio and move southwest, and is clearly affected by the Ryukyu Current and Kuroshio. Relationships between the mesoscale eddies, Kuroshio meanders, and Ryukyu Current are discussed. 2. Conjugate spectrum filters for eddy current signal processing Stepinski, T.; Maszi, N. (Univ. of Uppsala (Sweden). Dept. of Technology.) 1993-07-01 The paper addresses the problem of detection and classification of material defects during eddy current inspection. Digital signal processing algorithms for detection and characterization of flaws are considered and a new type of filter for classification of eddy current data is proposed. In the first part of the paper the signal processing blocks used in modern eddy current instruments are presented and analyzed in terms of information transmission. The processing usually consists of two steps: detection by means of amplitude-phase detectors and filtering of the detector output signals by means of analog signal filters. Distortion introduced by the signal filters is considered and illustrated using real eddy current responses. 
The nature of the distortion is explained and the way to avoid it is indicated. A novel method for processing the eddy current responses is presented in the second part of the paper. The method employs two-dimensional conjugate spectrum filters (CSFs) that are sensitive both to the phase angle and the shape of the eddy current responses. First the theoretical background of the CSF is presented and then two different ways of application, matched filters and orthogonal conjugate spectrum filters, are considered. The matched CSFs can be used for attenuation of all signals with the phase angle different from the selected prototype. The orthogonal filters are able to suppress completely a specific interference, e.g. the response of supporting plate when testing heat exchanger tubes. The performance of the CSFs is illustrated by means of real and simulated eddy current signals. 3. Eddy energy sources and flux in the Red Sea Zhan, Peng 2015-04-01 In the Red Sea, eddies are reported to be one of the key features of hydrodynamics in the basin. They play a significant role in converting the energy among the large-scale circulation, the available potential energy (APE) and the eddy kinetic energy (EKE). Not only do eddies affect the horizontal circulation, deep-water formation and overturning circulation in the basin, but they also have a strong impact on the marine ecosystem by efficiently transporting heat, nutrients and carbon across the basin and by pumping the nutrient-enriched subsurface water to sustain the primary production. Previous observations and modeling work suggest that the Red Sea is rich of eddy activities. In this study, the eddy energy sources and sinks have been studied based on a high-resolution MITgcm. We have also investigated the possible mechanisms of eddy generation in the Red Sea. Eddies with high EKE are found more likely to appear in the central and northern Red Sea, with a significant seasonal variability. They are more inclined to occur during winter when they acquire their energy mainly from the conversion of APE. In winter, the central and especially the northern Red Sea are subject to important heat loss and extensive evaporation. The resultant densified upper-layer water tends to sink and release the APE through baroclinic instability, which is about one order larger than the barotropic instability contribution and is the largest source term for the EKE in the Red Sea. As a consequence, the eddy energy is confined to the upper layer but with a slope deepening from south to north. In summer, the positive surface heat flux helps maintain the stratification and impedes the gain of APE. The EKE is, therefore, much lower than that in winter despite a higher wind power input. Unlike many other seas, the wind energy is not the main source of energy to the eddies in the Red Sea. 4. A novel eddy current damper: theory and experiment Ebrahimi, Babak; Khamesee, Mir Behrad [Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, Ontario, N2L 3G1 (Canada); Golnaraghi, Farid, E-mail: khamesee@mecheng1.uwaterloo.c [Mechatronic Systems Engineering, Simon Fraser University, Surrey, British Columbia, V3T 0A3 (Canada) 2009-04-07 A novel eddy current damper is developed and its damping characteristics are studied analytically and experimentally. The proposed eddy current damper consists of a conductor as an outer tube, and an array of axially magnetized ring-shaped permanent magnets separated by iron pole pieces as a mover. 
The relative movement of the magnets and the conductor causes the conductor to undergo motional eddy currents. Since the eddy currents produce a repulsive force that is proportional to the velocity of the conductor, the moving magnet and the conductor behave as a viscous damper. The eddy current generation causes the vibration to dissipate through the Joule heating generated in the conductor part. An accurate, analytical model of the system is obtained by applying electromagnetic theory to estimate the damping properties of the proposed eddy current damper. A prototype eddy current damper is fabricated, and experiments are carried out to verify the accuracy of the theoretical model. The experimental test bed consists of a one-degree-of-freedom vibration isolation system and is used for the frequency and transient time response analysis of the system. The eddy current damper model has a 0.1 m s⁻² (4.8%) RMS error in the estimation of the mass acceleration. A damping coefficient as high as 53 N s m⁻¹ is achievable with the fabricated prototype. This novel eddy current damper is an oil-free, inexpensive damper that is applicable in various vibration isolation systems such as precision machinery, micro-mechanical suspension systems and structure vibration isolation. 5. Dielectric relaxation of samarium aluminate Sakhya, Anup Pradhan; Dutta, Alo; Sinha, T.P. [Bose Institute, Department of Physics, Kolkata (India)] 2014-03-15 A ceramic SmAlO₃ (SAO) sample is synthesized by the solid-state reaction technique. The Rietveld refinement of the X-ray diffraction pattern has been done to find the crystal symmetry of the sample at room temperature. An impedance spectroscopy study of the sample has been performed in the frequency range from 50 Hz to 1 MHz and in the temperature range from 313 K to 573 K. Dielectric relaxation peaks are observed in the imaginary parts of the spectra. The Cole-Cole model is used to analyze the dielectric relaxation mechanism in SAO. The temperature-dependent relaxation times are found to obey the Arrhenius law having an activation energy of 0.29 eV, which indicates that polaron hopping is responsible for conduction or dielectric relaxation in this material. The complex impedance plane plot of the sample indicates the presence of both grain and grain-boundary effects and is analyzed by an electrical equivalent circuit consisting of a resistance and a constant-phase element. The frequency-dependent conductivity spectra follow a double-power law due to the presence of two plateaus. (orig.) 6. Choosing a skeletal muscle relaxant. See, Sharon; Ginzburg, Regina 2008-08-01 Skeletal muscle relaxants are widely used in treating musculoskeletal conditions. However, evidence of their effectiveness consists mainly of studies with poor methodologic design. In addition, these drugs have not been proven to be superior to acetaminophen or nonsteroidal anti-inflammatory drugs for low back pain. Systematic reviews and meta-analyses support using skeletal muscle relaxants for short-term relief of acute low back pain when nonsteroidal anti-inflammatory drugs or acetaminophen are not effective or tolerated. Comparison studies have not shown one skeletal muscle relaxant to be superior to another. Cyclobenzaprine is the most heavily studied and has been shown to be effective for various musculoskeletal conditions. The sedative properties of tizanidine and cyclobenzaprine may benefit patients with insomnia caused by severe muscle spasms.
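Returning to the eddy-current damper abstract (entry 4) above: a minimal sketch of the viscous-damper behaviour it describes, i.e. a retarding force proportional to velocity acting on a one-degree-of-freedom isolator. The damping coefficient of 53 N·s/m is the value quoted in the abstract; the moving mass and suspension stiffness are illustrative assumptions, not values from the cited work.

```python
import numpy as np

c = 53.0           # eddy-current damping coefficient, N*s/m (figure quoted in the abstract)
m, k = 2.0, 800.0  # moving mass (kg) and suspension stiffness (N/m), assumed for illustration

zeta = c / (2.0 * np.sqrt(k * m))   # damping ratio of the 1-DOF isolator
wn = np.sqrt(k / m)                 # undamped natural frequency, rad/s
wd = wn * np.sqrt(1.0 - zeta**2)    # damped natural frequency (underdamped case, zeta < 1)

# Exact free-decay response for x(0) = x0, v(0) = 0 of a viscously damped oscillator.
t = np.linspace(0.0, 2.0, 1000)
x0 = 0.01                           # 10 mm initial displacement
x = x0 * np.exp(-zeta * wn * t) * (np.cos(wd * t) + (zeta * wn / wd) * np.sin(wd * t))

print(f"damping ratio = {zeta:.2f}, approximate settling time ~ {3.0 / (zeta * wn):.2f} s")
```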
Methocarbamol and metaxalone are less sedating, although effectiveness evidence is limited. Adverse effects, particularly dizziness and drowsiness, are consistently reported with all skeletal muscle relaxants. The potential adverse effects should be communicated clearly to the patient. Because of limited comparable effectiveness data, choice of agent should be based on side-effect profile, patient preference, abuse potential, and possible drug interactions. 7. Onsager relaxation of toroidal plasmas Samain, A.; Nguyen, F. 1997-01-01 The slow relaxation of isolated toroidal plasmas towards their thermodynamical equilibrium is studied in an Onsager framework based on the entropy metric. The basic tool is a variational principle, equivalent to the kinetic equation, involving the profiles of density, temperature, electric potential, electric current. New minimization procedures are proposed to obtain entropy and entropy production rate functionals. (author). 36 refs. 8. Relaxation properties in classical diamagnetism Carati, A.; Benfenati, F.; Galgani, L. 2011-06-01 It is an old result of Bohr that, according to classical statistical mechanics, at equilibrium a system of electrons in a static magnetic field presents no magnetization. Thus a magnetization can occur only in an out of equilibrium state, such as that produced through the Foucault currents when a magnetic field is switched on. It was suggested by Bohr that, after the establishment of such a nonequilibrium state, the system of electrons would quickly relax back to equilibrium. In the present paper, we study numerically the relaxation to equilibrium in a modified Bohr model, which is mathematically equivalent to a billiard with obstacles, immersed in a magnetic field that is adiabatically switched on. We show that it is not guaranteed that equilibrium is attained within the typical time scales of microscopic dynamics. Depending on the values of the parameters, one has a relaxation either to equilibrium or to a diamagnetic (presumably metastable) state. The analogy with the relaxation properties in the Fermi Pasta Ulam problem is also pointed out. 9. Eddy-Current Inspection Of Tab Seals On Beverage Cans Bar-Cohen, Yoseph 1994-01-01 Eddy-current inspection system monitors tab seals on beverage cans. Device inspects all cans at usual production rate of 1,500 to 2,000 cans per minute. Automated inspection of all units replaces visual inspection by microscope aided by mass spectrometry. System detects defects in real time. Sealed cans on conveyor pass near one of two coils in differential eddy-current probe. Other coil in differential eddy-current probe positioned near stationary reference can on which tab seal is known to be of acceptable quality. Signal of certain magnitude at output of probe indicates defective can, automatically ejected from conveyor. 10. Oceanic eddy detection and lifetime forecast using machine learning methods Ashkezari, Mohammad D.; Hill, Christopher N.; Follett, Christopher N.; Forget, Gaël.; Follows, Michael J. 2016-12-01 We report a novel altimetry-based machine learning approach for eddy identification and characterization. The machine learning models use daily maps of geostrophic velocity anomalies and are trained according to the phase angle between the zonal and meridional components at each grid point. The trained models are then used to identify the corresponding eddy phase patterns and to predict the lifetime of a detected eddy structure. 
The performance of the proposed method is examined at two dynamically different regions to demonstrate its robust behavior and region independency. 11. Eddy Surface properties and propagation at Southern Hemisphere western boundary current systems G. S. Pilo 2015-02-01 Full Text Available Oceanic eddies occur in all world oceans, but are more energetic when associated to western boundary currents (WBC systems. In these regions, eddies play an important role on mixing and energy exchange. Therefore, it is important to quantify and qualify eddies occurring within these systems. Previous studies performed eddy censuses in Southern Hemisphere WBC systems. However, important aspects of local eddy population are still unknown. Main questions to be answered relate to eddies' spatial distribution, propagation and lifetime within each system. Here, we use a global eddy dataset to qualify eddies based on their surface characteristics at the Agulhas Current (AC, the Brazil Current (BC and the East Australian Current (EAC Systems. We show that eddy propagation within each system is highly forced by the local mean flow and bathymetry. In the AC System, eddy polarity dictates its propagation distance. BC system eddies do not propagate beyond the Argentine Basin, and are advected by the local ocean circulation. EAC System eddies from both polarities cross south of Tasmania, but only anticyclonics reach the Great Australian Bight. Eddies in all systems and from both polarities presented a geographical segregation according to size. Large eddies occur along the Agulhas Retroflection, the Agulhas Return Current, the Brazil-Malvinas Confluence and the Coral Sea. Small eddies occur in the systems southernmost domains. Understanding eddies' propagation helps to establish monitoring programs, and to better understand how these features would affect local mixing. 12. Equivalent Relaxations of Optimal Power Flow Bose, S; Low, SH; Teeraratkul, T; Hassibi, B 2015-03-01 Several convex relaxations of the optimal power flow (OPF) problem have recently been developed using both bus injection models and branch flow models. In this paper, we prove relations among three convex relaxations: a semidefinite relaxation that computes a full matrix, a chordal relaxation based on a chordal extension of the network graph, and a second-order cone relaxation that computes the smallest partial matrix. We prove a bijection between the feasible sets of the OPF in the bus injection model and the branch flow model, establishing the equivalence of these two models and their second-order cone relaxations. Our results imply that, for radial networks, all these relaxations are equivalent and one should always solve the second-order cone relaxation. For mesh networks, the semidefinite relaxation and the chordal relaxation are equally tight and both are strictly tighter than the second-order cone relaxation. Therefore, for mesh networks, one should either solve the chordal relaxation or the SOCP relaxation, trading off tightness and the required computational effort. Simulations are used to illustrate these results. 13. Ergodicity test of the eddy correlation method Chen, J.; Hu, Y.; Yu, Y.; Lü, S. 2014-07-01 The turbulent flux observation in the near-surface layer is a scientific issue which researchers in the fields of atmospheric science, ecology, geography science, etc. are commonly interested in. 
For eddy correlation measurement in the atmospheric surface layer, the ergodicity of turbulence is a basic assumption of the Monin-Obukhov (M-O) similarity theory, which is confined to steady turbulent flow and homogenous surface; this conflicts with turbulent flow under the conditions of complex terrain and unsteady, long observational period, which the study of modern turbulent flux tends to focus on. In this paper, two sets of data from the Nagqu Station of Plateau Climate and Environment (NaPlaCE) and the cooperative atmosphere-surface exchange study 1999 (CASE99) were used to analyze and verify the ergodicity of turbulence measured by the eddy covariance system. Through verification by observational data, the vortex of atmospheric turbulence, which is smaller than the scale of the atmospheric boundary layer (i.e., its spatial scale is less than 1000 m and temporal scale is shorter than 10 min) can effectively meet the conditions of the average ergodic theorem, and belong to a wide sense stationary random processes. Meanwhile, the vortex, of which the spatial scale is larger than the scale of the boundary layer, cannot meet the conditions of the average ergodic theorem, and thus it involves non-ergodic stationary random processes. Therefore, if the finite time average is used to substitute for the ensemble average to calculate the average random variable of the atmospheric turbulence, then the stationary random process of the vortex, of which spatial scale was less than 1000 m and thus below the scale of the boundary layer, was possibly captured. However, the non-ergodic random process of the vortex, of which the spatial scale was larger than that of the boundary layer, could not be completely captured. Consequently, when the finite time average was used to substitute 14. Eddy sensors for small diameter stainless steel tubes. Skinner, Jack L.; Morales, Alfredo Martin; Grant, J. Brian; Korellis, Henry James; LaFord, Marianne Elizabeth; Van Blarigan, Benjamin; Andersen, Lisa E. 2011-08-01 The goal of this project was to develop non-destructive, minimally disruptive eddy sensors to inspect small diameter stainless steel metal tubes. Modifications to Sandia's Emphasis/EIGER code allowed for the modeling of eddy current bobbin sensors near or around 1/8-inch outer diameter stainless steel tubing. Modeling results indicated that an eddy sensor based on a single axial coil could effectively detect changes in the inner diameter of a stainless steel tubing. Based on the modeling results, sensor coils capable of detecting small changes in the inner diameter of a stainless steel tube were designed, built and tested. The observed sensor response agreed with the results of the modeling and with eddy sensor theory. A separate limited distribution SAND report is being issued demonstrating the application of this sensor. 15. Does the wind systematically energize or damp ocean eddies? Wilson, Chris 2016-12-01 Globally, mesoscale ocean eddies are a key component of the climate system, involved in transport and mixing of heat, carbon, and momentum. However, they represent one of the major challenges of climate modeling, as the details of their nonlinear dynamics affect all scales. Recent progress analyzing satellite observations of the surface ocean and atmosphere has uncovered energetic interactions between the atmospheric wind stress and ocean eddies that may change our understanding of key processes affecting even large-scale climate. 
Wind stress acts systematically on ocean eddies and may explain observed asymmetry in the distribution of eddies and details of their lifecycle of growth and decay. These findings provide powerful guidance for climate model development. 16. Mesoscale eddies in the northeastern Pacific tropical-subtropical transition zone: statistical characterization from satellite altimetry Kurczyn, J. A.; Beier, Emilio; Lavín, Miguel; Chaigneau, Alexis 2012-01-01 Mesoscale eddies in the northeastern Pacific tropical-subtropical transition zone (16°N–30°N; 130°W–102°W) are analyzed using nearly 18 years of satellite altimetry and an automated eddy-identification algorithm. Eddies that lasted more than 10 weeks are described based on the analysis of 465 anticyclonic and 529 cyclonic eddy trajectories. We found three near-coastal eddy-prolific areas: (1) Punta Eugenia, (2) Cabo San Lucas, and (3) Cabo Corrientes. These thr... 17. Eddy heat flux in the Southern Ocean: response to variable wind forcing Hogg, Andrew Mcc.; Meredith, Michael P.; Blundell, Jeffrey R.; Wilson, Christopher 2008-01-01 We assess the role of time-dependent eddy variability in the Antarctic Circumpolar Current (ACC) in influencing warming of the Southern Ocean. For this, we use an eddy-resolving quasigeostrophic model of the wind-driven circulation, and quantify the response of circumpolar transport, eddy kinetic energy and eddy heat transport to changes in winds. On interannual timescales, the model exhibits the behaviour of an "eddy saturated" ocean state, where increases in wind stress do not significantly ... 18. Organic semiconductors: What makes the spin relax? Bobbert, Peter A. 2010-04-01 Spin relaxation in organic materials is expected to be slow because of weak spin-orbit coupling. The effects of deuteration and coherent spin excitation show that the spin-relaxation time is actually limited by hyperfine fields. 19. Relaxation Techniques to Manage IBS Symptoms You’ve been to the doctor ... 20. Coastal GPS Altimetry for Eddy Monitoring Cardellach, E.; Treuhaft, R. N.; Chao, Y.; Lowe, S. T.; Young, L. E.; Zuffada, C. 2003-04-01 Coastal zones (within approximately 20-30 km of the coast) are dominated by fast-changing (on the order of days) and small-scale (on the order of km or less) processes. The dynamics and thermodynamics associated with these coastal processes influence the physics, biogeochemistry and the associated carbon cycling in the coastal zones. To monitor these important processes at the highest possible resolution (both spatial and temporal) is therefore an integrated component of the Earth's observing system. Coastal processes are currently not adequately monitored from existing spaceborne observations. The infrared instruments can measure the sea surface temperature in coastal zones with a resolution of approximately 1 km daily, but are heavily contaminated by clouds usually found in the land-sea boundaries. The conventional radar altimetry, even with the wide-swath (e.g., OSTM) configuration, can only provide measurements every 10 days, too long to resolve the fast-changing coastal processes, not to mention the land contamination within the first few footprints (on the order of 20 km) away from the coast.
Coastal GPS altimetry from cliffs or structures near the coastline provides a complementary way to measure these coastal processes. The precision of such ground-based grazing angle GPS measurements has been proven to be 2-cm over the smooth surface at Crater Lake [Treuhaft et al., 2001]. Nevertheless, the accuracy of the GPS altimetry over the open sea, significantly affected by roughness, has yet to be assessed. This poster aims to present a set of experiments and analyses to prove the coastal GPS altimetry concept with a few-cm accuracy goal. It includes the analysis of data gathered over the ocean from an oil platform, Platform Harvest, as well as simulations of the GPS reflected signal to identify and correct the effects of the sea roughness. The results of this research are planned to feed the design, execution and processing of an eddy monitoring experiment. It will 1. Nearby boundaries create eddies near microscopic filter feeders Pepper, Rachel E.; Roper, Marcus; Ryu, Sangjin; Matsudaira, Paul; Stone, Howard A. 2009-01-01 We show through calculations, simulations and experiments that the eddies often observed near sessile filter feeders are frequently due to the presence of nearby boundaries. We model the common filter feeder Vorticella, which is approximately 50 µm across and which feeds by removing bacteria from ocean or pond water that it draws towards itself. We use both an analytical stokeslet model and a Brinkman flow approximation that exploits the narrow-gap geometry to predict the size of the eddy cau... 2. Eddy current pulsed phase thermography and feature extraction He, Yunze; Tian, GuiYun; Pan, Mengchun; Chen, Dixiang 2013-08-01 This letter proposed an eddy current pulsed phase thermography technique combing eddy current excitation, infrared imaging, and phase analysis. One steel sample is selected as the material under test to avoid the influence of skin depth, which provides subsurface defects with different depths. The experimental results show that this proposed method can eliminate non-uniform heating and improve defect detectability. Several features are extracted from differential phase spectra and the preliminary linear relationships are built to measure these subsurface defects' depth. 3. Plasmon-mediated energy relaxation in graphene Ferry, D. K. [School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, Arizona 85287-5706 (United States); Somphonsane, R. [Department of Physics, King Mongkut' s Institute of Technology, Ladkrabang, Bangkok 10520 (Thailand); Ramamoorthy, H.; Bird, J. P. [Department of Electrical Engineering, University at Buffalo, the State University of New York, Buffalo, New York 14260-1500 (United States) 2015-12-28 Energy relaxation of hot carriers in graphene is studied at low temperatures, where the loss rate may differ significantly from that predicted for electron-phonon interactions. We show here that plasmons, important in the relaxation of energetic carriers in bulk semiconductors, can also provide a pathway for energy relaxation in transport experiments in graphene. We obtain a total loss rate to plasmons that results in energy relaxation times whose dependence on temperature and density closely matches that found experimentally. 4. Large eddy simulation of LDL surface concentration in a subject specific human aorta. 
Lantz, Jonas; Karlsson, Matts 2012-02-02 The development of atherosclerosis is correlated to the accumulation of lipids in the arterial wall, which, in turn, may be caused by the build-up of low-density lipoproteins (LDL) on the arterial surface. The goal of this study was to model blood flow within a subject specific human aorta, and to study how the LDL surface concentration changed during a cardiac cycle. With measured velocity profiles as boundary conditions, a scale-resolving technique (large eddy simulation, LES) was used to compute the pulsatile blood flow that was in the transitional regime. The relationship between wall shear stress (WSS) and LDL surface concentration was investigated, and it was found that the accumulation of LDL correlated well with WSS. In general, regions of low WSS corresponded to regions of increased LDL concentration and vice versa. The instantaneous LDL values changed significantly during a cardiac cycle; during systole the surface concentration was low due to increased convective fluid transport, while in diastole there was an increased accumulation of LDL on the surface. Therefore, the near-wall velocity was investigated at four representative locations, and it was concluded that in regions with disturbed flow the LDL concentration had significant temporal changes, indicating that LDL accumulation is sensitive to not only the WSS but also near-wall flow. 5. Zero-dimensional spin accumulation and spin dynamics in a mesoscopic metal island Zaffalon, M; van Wees, BJ 2003-01-01 We have measured electron spin accumulation at 4.2 K and at room temperature in an aluminum island with all dimensions (400 nm × 400 nm × 30 nm) smaller than the spin relaxation length. For the first time, we obtain uniform spin accumulation in a four-terminal lateral device with a magnitude exceeding t 6. Spin injection, accumulation, and precession in a mesoscopic nonmagnetic metal island Zaffalon, M; van Wees, BJ 2005-01-01 We experimentally study spin accumulation in an aluminum island with all dimensions smaller than the spin-relaxation length, so that the spin imbalance throughout the island is uniform. Electrical injection and detection of the spin accumulation are carried out in a four-terminal geometry by means o 7. Application of altimetry data assimilation on mesoscale eddies simulation 2008-01-01 Mesoscale eddy plays an important role in the ocean circulation. In order to improve the simulation accuracy of the mesoscale eddies, a three-dimensional variational (3DVAR) data assimilation system called Ocean Variational Analysis System (OVALS) is coupled with a POM model to simulate the mesoscale eddies in the Northwest Pacific Ocean. In this system, the sea surface height anomaly (SSHA) data from satellite altimeters are assimilated and translated into pseudo temperature and salinity (T-S) profile data. Then, these profile data are taken as observation data to be assimilated again to produce the three-dimensional analysis T-S field. According to the characteristics of mesoscale eddy, the most appropriate assimilation parameters are set up and tested in this system. A ten-year mesoscale eddy simulation and comparison experiment is performed, which includes two schemes: assimilation and non-assimilation.
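A minimal sketch of the kind of variational (3DVAR) update described in the assimilation entry above, written in its equivalent gain form xa = xb + BHᵀ(HBHᵀ + R)⁻¹(y − Hxb), which minimizes the standard 3DVAR cost function J(x) = (x − xb)ᵀB⁻¹(x − xb) + (y − Hx)ᵀR⁻¹(y − Hx). The tiny state vector, observation operator, and covariances are synthetic stand-ins, not the OVALS/POM configuration of the cited study.

```python
import numpy as np

# Tiny synthetic example: background temperature profile at 4 depths, 2 pseudo-observations.
xb = np.array([28.0, 26.5, 22.0, 15.0])                                   # background (model) state
B = 0.8 * np.exp(-np.abs(np.subtract.outer(range(4), range(4))) / 2.0)    # background error covariance

H = np.array([[1.0, 0.0, 0.0, 0.0],   # observation operator: obs at levels 1 and 3
              [0.0, 0.0, 1.0, 0.0]])
y = np.array([28.6, 21.2])            # pseudo T observations (e.g. derived from SSHA)
R = 0.2 * np.eye(2)                   # observation error covariance

# Analysis minimizing J(x) = (x-xb)' B^-1 (x-xb) + (y-Hx)' R^-1 (y-Hx)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
xa = xb + K @ (y - H @ xb)
print("analysis profile:", np.round(xa, 2))
```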
The results of comparison between the two schemes and the observations show that the simulation accuracy of the assimilation scheme is much better than that of non-assimilation, which verified that the altimetry data assimilation method can improve the simulation accuracy of the mesoscale eddies dramatically and indicates that it is possible to use this system for the forecasting of mesoscale eddies in the future. 8. Sonic eddy model of the turbulent boundary layer Breidenthal, Robert; Dintilhac, Paul; Williams, Owen 2016-11-01 A model of the compressible turbulent boundary layer is proposed. It is based on the notion that turbulent transport by an eddy requires that information of nonsteady events propagates across the diameter of that eddy during one rotation period. The finite acoustic signaling speed then controls the turbulent fluxes. As a consequence, the fluxes are limited by the largest eddies that satisfy this requirement. Therefore "sonic eddies" with a rotational Mach number of about unity would determine the skin friction, which is predicted to vary inversely with Mach number. This sonic eddy model contrasts with conventional models that are based on the energy equation and variations in the density. The effect of density variations is known to be weak in free shear flows, and the sonic eddy model assumes the same for the boundary layer. In general, Mach number plays two simultaneous roles in compressible flow, one related to signaling and the other related to the energy equation. The predictions of the model are compared with experimental data and DNS results from the literature. 9. Collisionless Relaxation of Stellar Systems Kandrup, H E 1998-01-01 The objective of the work summarised here has been to exploit and extend ideas from plasma physics and accelerator dynamics to formulate a unified description of collisionless relaxation that views violent relaxation, Landau damping, and phase mixing as (manifestations of) a single phenomenon. This approach embraces the fact that the collisionless Boltzmann equation (CBE), the basic object of the theory, is an infinite-dimensional Hamiltonian system, with the distribution function f playing the role of the fundamental dynamical variable, and that, interpreted appropriately, an evolution described by the CBE is no different fundamentally from an evolution described by any other Hamiltonian system. Equilibrium solutions correspond to extremal points of the Hamiltonian subject to the constraints associated with Liouville's Theorem. Stable equilibria correspond to energy minima. The evolution of a system out of equilibrium involves (in general nonlinear) phase space oscillations which may or may not interfere destructively so as to damp away. 10. Collisionless Relaxation of Stellar Systems Kandrup, Henry E. 1999-08-01 The objective of the work summarized here has been to exploit and extend ideas from plasma physics and accelerator dynamics to formulate a unified description of collisionless relaxation of stellar systems that views violent relaxation, Landau damping, and phase mixing as (manifestations of) a single phenomenon. This approach embraces the fact that the collisionless Boltzmann equation (CBE), the basic object of the theory, is an infinite-dimensional Hamiltonian system, with the distribution function f playing the role of the fundamental dynamical variable, and that, interpreted appropriately, an evolution described by the CBE is no different fundamentally from an evolution described by any other Hamiltonian system.
Equilibrium solutions f0 correspond to extremal points of the Hamiltonian subject to the constraints associated with Liouville's Theorem. Stable equilibria correspond to energy minima. The evolution of a system out of equilibrium involves (in general nonlinear) phase space oscillations which may - or may not - interfere destructively so as to damp away. 11. Kinetic activation-relaxation technique Béland, Laurent Karim; Brommer, Peter; El-Mellouhi, Fedwa; Joly, Jean-François; Mousseau, Normand 2011-10-01 We present a detailed description of the kinetic activation-relaxation technique (k-ART), an off-lattice, self-learning kinetic Monte Carlo (KMC) algorithm with on-the-fly event search. Combining a topological classification for local environments and event generation with ART nouveau, an efficient unbiased sampling method for finding transition states, k-ART can be applied to complex materials with atoms in off-lattice positions or with elastic deformations that cannot be handled with standard KMC approaches. In addition to presenting the various elements of the algorithm, we demonstrate the general character of k-ART by applying the algorithm to three challenging systems: self-defect annihilation in c-Si (crystalline silicon), self-interstitial diffusion in Fe, and structural relaxation in a-Si (amorphous silicon). 12. Kinetic activation-relaxation technique. Béland, Laurent Karim; Brommer, Peter; El-Mellouhi, Fedwa; Joly, Jean-François; Mousseau, Normand 2011-10-01 We present a detailed description of the kinetic activation-relaxation technique (k-ART), an off-lattice, self-learning kinetic Monte Carlo (KMC) algorithm with on-the-fly event search. Combining a topological classification for local environments and event generation with ART nouveau, an efficient unbiased sampling method for finding transition states, k-ART can be applied to complex materials with atoms in off-lattice positions or with elastic deformations that cannot be handled with standard KMC approaches. In addition to presenting the various elements of the algorithm, we demonstrate the general character of k-ART by applying the algorithm to three challenging systems: self-defect annihilation in c-Si (crystalline silicon), self-interstitial diffusion in Fe, and structural relaxation in a-Si (amorphous silicon). 13. Brief relaxation training program for hospital employees. Balk, Judith L; Chung, Sheng-Chia; Beigi, Richard; Brooks, Maria 2009-01-01 Employee stress leads to attrition, burnout, and increased medical costs. We aimed to assess if relaxation training leads to decreased stress levels based on questionnaire and thermal biofeedback. Thirty-minute relaxation training sessions were conducted for hospital employees and for cancer patients. Perceived Stress levels and skin temperature were analyzed before and after relaxation training. 14. Variation of Eddy Current Density Distribution and its Effect on Crack Signal in Eddy Current Non-Destructive of Testing 2006-01-01 Full Text Available The paper deals with variation of eddy current density distribution along material depth and investigates an effect of the variation on a crack signal in eddy current non-destructive testing. Four coaxial rectangular tangential coils are used to induce eddy currents in a tested conductive object. 
The exciting coils are driven independently by phase-shifted AC currents; a ratio of amplitudes of the exciting currents is continuously changed to vary the distribution of eddy current density along material depth under a circular pick-up coil positioned in centre between the exciting coils. Dependences of a crack signal amplitude and its phase on the ratio are evaluated and special features are extracted. It is revealed that the dependences are strongly influenced by depth of a crack, and thus the extracted features can enhance evaluation of a detected crack. 15. POS Tagging Using Relaxation Labelling 1995-01-01 Relaxation labelling is an optimization technique used in many fields to solve constraint satisfaction problems. The algorithm finds a combination of values for a set of variables such that satisfies -to the maximum possible degree- a set of given constraints. This paper describes some experiments performed applying it to POS tagging, and the results obtained. It also ponders the possibility of applying it to word sense disambiguation. 16. Eddy Current Flexible Probes for Complex Geometries Gilles-Pascaud, C.; Decitre, J. M.; Vacher, F.; Fermon, C.; Pannetier, M.; Cattiaux, G. 2006-03-01 The inspection of materials used in aerospace, nuclear or transport industry is a critical issue for the safety of components exposed to stress or/and corrosion. The industry claims for faster, more sensitive, and more flexible techniques. Technologies based on Eddy Current (EC) flexible array probe and magnetic sensor with high sensitivity such as giant magneto-resistance (GMR) could be a good solution to detect surface-breaking flaws in complex shaped surfaces. The CEA has recently developed, with support from the French Institute for Radiological Protection and Nuclear Safety (IRSN), a flexible array probe based on micro-coils etched on Kapton. The probe's performances have been assessed for the inspection of reactor residual heat removal pipes, and for aeronautical applications within the framework of the European project VERDICT. The experimental results confirm the very good detection of narrow cracks on plane and curve shaped surfaces. This paper also describes the recent progresses concerning the application of GMR sensors to EC testing, and the results obtained for the detection of small surface breaking flaws. 17. Advanced Eddy current NDE steam generator tubing. Bakhtiari, S. 1999-03-29 As part of a multifaceted project on steam generator integrity funded by the U.S. Nuclear Regulatory Commission, Argonne National Laboratory is carrying out research on the reliability of nondestructive evaluation (NDE). A particular area of interest is the impact of advanced eddy current (EC) NDE technology. This paper presents an overview of work that supports this effort in the areas of numerical electromagnetic (EM) modeling, data analysis, signal processing, and visualization of EC inspection results. Finite-element modeling has been utilized to study conventional and emerging EC probe designs. This research is aimed at determining probe responses to flaw morphologies of current interest. Application of signal processing and automated data analysis algorithms has also been addressed. Efforts have focused on assessment of frequency and spatial domain filters and implementation of more effective data analysis and display methods. 
Data analysis studies have dealt with implementation of linear and nonlinear multivariate models to relate EC inspection parameters to steam generator tubing defect size and structural integrity. Various signal enhancement and visualization schemes are also being evaluated and will serve as integral parts of computer-aided data analysis algorithms. Results from this research will ultimately be substantiated through testing on laboratory-grown and in-service-degraded tubes. 18. Spin relaxation in metallic ferromagnets Berger, L. 2011-02-01 The Elliott theory of spin relaxation in metals and semiconductors is extended to metallic ferromagnets. Our treatment is based on the two-current model of Fert, Campbell, and Jaoul. The d→s electron-scattering process involved in spin relaxation is the inverse of the s→d process responsible for the anisotropic magnetoresistance (AMR). As a result, spin-relaxation rate 1/τsr and AMR Δρ are given by similar formulas, and are in a constant ratio if scattering is by solute atoms. Our treatment applies to nickel- and cobalt-based alloys which do not have spin-up 3d states at the Fermi level. This category includes many of the technologically important magnetic materials. And we show how to modify the theory to apply it to bcc iron-based alloys. We also treat the case of Permalloy Ni80Fe20 at finite temperature or in thin-film form, where several kinds of scatterers exist. Predicted values of 1/τsr and Δρ are plotted versus resistivity of the sample. These predictions are compared to values of 1/τsr and Δρ derived from ferromagnetic-resonance and AMR experiments in Permalloy. 19. Arresting relaxation in Pickering Emulsions Atherton, Tim; Burke, Chris 2015-03-01 Pickering emulsions consist of droplets of one fluid dispersed in a host fluid and stabilized by colloidal particles absorbed at the fluid-fluid interface. Everyday materials such as crude oil and food products like salad dressing are examples of these materials. Particles can stabilize non spherical droplet shapes in these emulsions through the following sequence: first, an isolated droplet is deformed, e.g. by an electric field, increasing the surface area above the equilibrium value; additional particles are then adsorbed to the interface reducing the surface tension. The droplet is then allowed to relax toward a sphere. If more particles were adsorbed than can be accommodated by the surface area of the spherical ground state, relaxation of the droplet is arrested at some non-spherical shape. Because the energetic cost of removing adsorbed colloids exceeds the interfacial driving force, these configurations can remain stable over long timescales. In this presentation, we present a computational study of the ordering present in anisotropic droplets produced through the mechanism of arrested relaxation and discuss the interplay between the geometry of the droplet, the dynamical process that produced it, and the structure of the defects observed. 20. Relaxation response in femoral angiography. Mandle, C L; Domar, A D; Harrington, D P; Leserman, J; Bozadjian, E M; Friedman, R; Benson, H 1990-03-01 Immediately before they underwent femoral angiography, 45 patients were given one of three types of audiotapes: a relaxation response tape recorded for this study, a tape of contemporary instrumental music, or a blank tape. All patients were instructed to listen to their audiotape during the entire angiographic procedure. Each audiotape was played through earphones. 
Radiologists were not told the group assignment or tape contents. The patients given the audiotape with instructions to elicit the relaxation response (n = 15) experienced significantly less anxiety (P less than .05) and pain (P less than .001) during the procedure, were observed by radiology nurses to exhibit significantly less pain (P less than .001) and anxiety (P less than .001), and requested significantly less fentanyl citrate (P less than .01) and diazepam (P less than .01) than patients given either the music (n = 14) or the blank (n = 16) control audiotapes. Elicitation of the relaxation response is a simple, inexpensive, efficacious, and practical method to reduce pain, anxiety, and medication during femoral angiography and may be useful in other invasive procedures. 1. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng 2017-08-01 Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach. 2. Time of relaxation in dusty plasma model Timofeev, A. V. 2015-11-01 Dust particles in plasma may have different values of average kinetic energy for vertical and horizontal motion. The partial equilibrium of the subsystems and the relaxation processes leading to this asymmetry are under consideration. A method for the relaxation time estimation in nonideal dusty plasma is suggested. The characteristic relaxation times of vertical and horizontal motion of dust particles in gas discharge are estimated by analytical approach and by analysis of simulation results. These relaxation times for vertical and horizontal subsystems appear to be different. A single hierarchy of relaxation times is proposed. 3. LARGE-EDDY AND DETACHED-EDDY SIMULATIONS OF THE SEPARATED FLOW AROUND A CIRCULAR CYLINDER 2007-01-01 The separated turbulent flow around a circular cylinder is investigated using Large-Eddy Simulation (LES), Detached-Eddy Simulation (DES, or hybrid RANS/LES methods), and Unsteady Reynolds-Averaged Navier-Stokes (URANS). The purpose of this study is to examine some typical simulation approaches for the prediction of complex separated turbulent flow and to clarify the capability of applying these approaches to a typical case of the separated turbulent flow around a circular cylinder. Several turbulence models, I.e. Dynamic Sub-grid Scale (SGS) model in LES, the DES-based Spalart-Allmaras (S-A) and Shear-Stress- Transport (SST) models in DES, and the S-A and SST models in URANS, are used in the calculations. 
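Referring back to the molecular multimode relaxation entry (1) above: a minimal numerical sketch of the additive picture it states, namely that a multi-relaxation acoustic spectrum is the sum of interior single-relaxation contributions. The single-relaxation shape used here is the standard absorption-per-wavelength form, and the relaxation frequencies and strengths are purely illustrative, not fitted to the CO2-N2 or CO2-O2 data of the cited work.

```python
import numpy as np

def single_relaxation(f, f_r, mu_max):
    """Standard single-relaxation absorption per wavelength, peaking at f = f_r."""
    x = f / f_r
    return 2.0 * mu_max * x / (1.0 + x**2)

f = np.logspace(1, 6, 500)   # frequency axis, Hz

# Two interior single-relaxation processes (illustrative relaxation frequencies and strengths).
modes = [(2.0e3, 1.5e-3), (8.0e4, 6.0e-4)]   # (relaxation frequency in Hz, peak absorption)
mu_multi = sum(single_relaxation(f, fr, mu) for fr, mu in modes)

# The reconstruction idea in the entry: with N modes there are 2N unknowns (f_r and mu_max per
# mode), so measurements at 2N frequencies suffice in principle to recover the whole process.
print("summed spectrum peaks near", round(f[np.argmax(mu_multi)]), "Hz")
```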
Some typical results, e.g., the mean pressure and drag coefficients, velocity profiles, Strouhal number, and Reynolds stresses, are obtained and compared with previous computational and experimental data. Based on our extensive calculations, we assess the capability and performance of these simulation approaches coupled with the relevant turbulence models to predict the separated turbulent flow. 4. Transport Induced by Mean-Eddy Interaction: I. Theory, and Relation to Lagrangian Lobe Dynamics Ide, Kayo 2011-01-01 In this paper we develop a method for the estimation of Transport Induced by the Mean-Eddy interaction (TIME) in two-dimensional unsteady flows. The method is built on the dynamical systems approach and can be viewed as a hybrid combination of Lagrangian and Eulerian methods. The (Eulerian) boundaries across which we consider (Lagrangian) transport are kinematically defined by appropriately chosen streamlines of the mean flow. By evaluating the impact of the mean-eddy interaction on transport, the TIME method can be used as a diagnostic tool for transport processes that occur during a specified time interval along a specified boundary segment. We introduce two types of TIME functions: one that quantifies the accumulation of flow properties and another that measures the displacement of the transport geometry. The spatial geometry of transport is described by the so-called pseudo-lobes, and temporal evolution of transport by their dynamics. In the case where the TIME functions are evalua... 5. Approximate method for solving relaxation problems in terms of materials damagability under creep Nikitenko, A.F.; Sukhorukov, I.V. 1995-03-01 The technology of thermoforming under creep and superplasticity conditions is finding increasing application in machine building for producing articles of a preset shape. After a part is made there are residual stresses in it, which lead to its warping. To remove residual stresses, moulded articles are usually exposed to thermal fixation, i.e., the part is held in a compressed state at a certain temperature. Thermal fixation is simply the process of residual stress relaxation, followed by accumulation of total creep in the material. Therefore the necessity to develop engineering methods for calculating the time of thermal fixation and relaxation of residual stresses to a safe level, not resulting in warping, becomes evident. The authors present an approximate method of calculation of the stress-strain rate of a body during relaxation. They use a system of equations which describes a material's creep, simultaneously taking into account accumulation of damage in it. 6. 5 Things To Know About Relaxation Techniques for Stress When you’re under stress, ... creating the relaxation response through regular use of relaxation techniques could counteract the negative effects of stress. Relaxation ... 7. Scalar excursions in large-eddy simulations Matheou, Georgios; Dimotakis, Paul E. 2016-12-01 The range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations.
In practice, as a result of numerical artifacts, this fundamental constraint is often violated with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics provides support for the main hypothesis of the current study that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization where the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursions increase as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size 8. Large eddy simulation of powered Fontan hemodynamics. Delorme, Y; Anupindi, K; Kerlo, A E; Shetty, D; Rodefeld, M; Chen, J; Frankel, S 2013-01-18 Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2-3 years of life to eventually establish the Fontan circulation. In that case the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation providing a 3-5 mmHg pressure augmentation will reestablish bi-ventricular physiology serving as a bridge-to-recovery, bridge-to-transplant or destination therapy as a "biventricular Fontan" circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins, pulling blood from the vena cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo(TM)) for unpowered and VIP-powered idealized TCPC hemodynamics with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data.
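Stepping back to the scalar-excursion entry (7) above: a minimal sketch of the two diagnostics it reports for a passive scalar that should remain within its boundary values, taken here as [0, 1]: the maximum unphysical excursion and the volume fraction of excursing cells. The field below is a synthetic stand-in, not output from the cited simulations.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "LES-like" scalar field on a 64^3 grid; dispersive errors can push values outside [0, 1].
c = (np.clip(rng.normal(0.5, 0.2, (64, 64, 64)), -0.1, 1.1)
     + 0.02 * rng.standard_normal((64, 64, 64)))

lo_exc = np.maximum(0.0 - c, 0.0)   # excursion below the lower bound
hi_exc = np.maximum(c - 1.0, 0.0)   # excursion above the upper bound

max_excursion = max(lo_exc.max(), hi_exc.max())
volume_fraction = np.mean((c < 0.0) | (c > 1.0))
print(f"max excursion = {max_excursion:.3f}, excursion volume fraction = {volume_fraction:.3%}")
```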
9. Recurrence Analysis of Eddy Covariance Fluxes Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael 2015-04-01 The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide network of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models as well as for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise. 10. Three-fluid, three-dimensional magnetohydrodynamic solar wind model with eddy viscosity and turbulent resistivity Usmanov, Arcadi V.; Matthaeus, William H. [Department of Physics and Astronomy, University of Delaware, Newark, DE 19716 (United States)]; Goldstein, Melvyn L., E-mail: arcadi.usmanov@nasa.gov [Code 672, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)] 2014-06-10 We have developed a three-fluid, three-dimensional magnetohydrodynamic solar wind model that incorporates turbulence transport, eddy viscosity, turbulent resistivity, and turbulent heating. The solar wind plasma is described as a system of co-moving solar wind protons, electrons, and interstellar pickup protons, with separate energy equations for each species. Numerical steady-state solutions of Reynolds-averaged solar wind equations coupled with turbulence transport equations for turbulence energy, cross helicity, and correlation length are obtained by the time relaxation method in the frame of reference corotating with the Sun in the region from 0.3 to 100 AU (but still inside the termination shock). The model equations include the effects of electron heat conduction, Coulomb collisions, photoionization of interstellar hydrogen atoms and their charge exchange with the solar wind protons, turbulence energy generation by pickup protons, and turbulent heating of solar wind protons and electrons.
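As an aside on the Recurrence Analysis mentioned in entry 9 above: a minimal sketch of a thresholded recurrence plot and its recurrence rate for a short flux-like series. The series is synthetic, no phase-space embedding is used, and the threshold is arbitrary, so this only illustrates the basic construction, not the cited workflow.

```python
import numpy as np

rng = np.random.default_rng(3)
# Half-hourly flux-like series: a diurnal cycle plus noise (synthetic stand-in for EC fluxes).
t = np.arange(48 * 10)   # 10 days of half-hourly samples
x = 100.0 * np.sin(2 * np.pi * t / 48.0) + 15.0 * rng.standard_normal(t.size)

# Recurrence matrix R_ij = 1 if |x_i - x_j| < eps (no embedding, for simplicity).
eps = 0.2 * x.std()
R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

recurrence_rate = R.mean()   # fraction of recurrent pairs, the simplest RA indicator
print(f"recurrence rate = {recurrence_rate:.2%}")
```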
The turbulence transport model is based on the Reynolds decomposition and turbulence phenomenologies that describe the conversion of fluctuation energy into heat due to a turbulent cascade. In addition to using separate energy equations for the solar wind protons and electrons, a significant improvement over our previous work is that the turbulence model now uses an eddy viscosity approximation for the Reynolds stress tensor and the mean turbulent electric field. The approximation allows the turbulence model to account for driving of turbulence by large-scale velocity gradients. Using either a dipole approximation for the solar magnetic field or synoptic solar magnetograms from the Wilcox Solar Observatory for assigning boundary conditions at the coronal base, we apply the model to study the global structure of the solar wind and its three-dimensional properties, including embedded turbulence, heating, and acceleration throughout the heliosphere. The model results are 11. Compaction and relaxation of biofilms 2015-06-18 Operation of membrane systems for water treatment can be seriously hampered by biofouling. A better characterization of biofilms in membrane systems and their impact on membrane performance may help to develop effective biofouling control strategies. The objective of this study was to determine the occurrence, extent and timescale of biofilm compaction and relaxation (decompaction), caused by permeate flux variations. The impact of permeate flux changes on biofilm thickness, structure and stiffness was investigated in situ and non-destructively with optical coherence tomography using membrane fouling monitors operated at a constant crossflow velocity of 0.1 m s−1 with permeate production. The permeate flux was varied sequentially from 20 to 60 and back to 20 L m−2 h−1. The study showed that the average biofilm thickness on the membrane decreased after elevating the permeate flux from 20 to 60 L m−2 h−1 while the biofilm thickness increased again after restoring the original flux of 20 L m−2 h−1, indicating the occurrence of biofilm compaction and relaxation. Within a few seconds after the flux change, the biofilm thickness was changed and stabilized, biofilm compaction occurred faster than the relaxation after restoring the original permeate flux. The initial biofilm parameters were not fully reinstated: the biofilm thickness was reduced by 21%, biofilm stiffness had increased and the hydraulic biofilm resistance was elevated by 16%. Biofilm thickness was related to the hydraulic biofilm resistance. Membrane performance losses are related to the biofilm thickness, density and morphology, which are influenced by (variations in) hydraulic conditions. A (temporarily) permeate flux increase caused biofilm compaction, together with membrane performance losses. The impact of biofilms on membrane performance can be influenced (increased and reduced) by operational parameters. The article shows that a (temporary) pressure increase leads to more 12. Plasma Relaxation in Hall Magnetohydrodynamics Shivamoggi, B K 2011-01-01 Parker's formulation of isotopological plasma relaxation process in magnetohydrodynamics (MHD) is extended to Hall MHD. The torsion coefficient alpha in the Hall MHD Beltrami condition turns out now to be proportional to the "potential vorticity." 
The Hall MHD Beltrami condition becomes equivalent to the "potential vorticity" conservation equation in two-dimensional hydrodynamics if the Hall MHD Lagrange multiplier beta is taken to be proportional to the "potential vorticity" as well. The winding pattern of the magnetic field lines in Hall MHD then appears to evolve in the same way as "potential vorticity" lines in 2D hydrodynamics.

13. Spectral Estimation of NMR Relaxation Naugler, David G.; Cushley, Robert J. 2000-08-01 In this paper, spectral estimation of NMR relaxation is constructed as an extension of Fourier Transform (FT) theory as it is practiced in NMR or MRI, where multidimensional FT theory is used. nD NMR strives to separate overlapping resonances, so the treatment given here deals primarily with monoexponential decay. In the domain of real error, it is shown how optimal estimation based on prior knowledge can be derived. Assuming small Gaussian error, the estimation variance and bias are derived. Minimum bias and minimum variance are shown to be contradictory experimental design objectives. The analytical continuation of spectral estimation is constructed in an optimal manner. An important property of spectral estimation is that it is phase invariant. Hence, hypercomplex data storage is unnecessary. It is shown that, under reasonable assumptions, spectral estimation is unbiased in the context of complex error and its variance is reduced because the modulus of the whole signal is used. Because of phase invariance, the labor of phasing and any error due to imperfect phase can be avoided. A comparison of spectral estimation with nonlinear least squares (NLS) estimation is made analytically and with numerical examples. Compared to conventional sampling for NLS estimation, spectral estimation would typically provide estimation values of comparable precision in one-quarter to one-tenth of the spectrometer time when S/N is high. When S/N is low, the time saved can be used for signal averaging at the sampled points to give better precision. NLS typically provides one estimate at a time, whereas spectral estimation is inherently parallel. The frequency dimensions of conventional nD FT NMR may be denoted D1, D2, etc. As an extension of nD FT NMR, one can view spectral estimation of NMR relaxation as an extension into the zeroth dimension. In nD NMR, the information content of a spectrum can be extracted as a set of n-tuples (ω1, … ωn), corresponding to the peak maxima …
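The spectral-estimation entry above benchmarks against nonlinear least squares; below is a minimal sketch of that NLS baseline, fitting a monoexponential relaxation decay to synthetic noisy samples with SciPy. Values are illustrative only; this is not the authors' data or estimator.

import numpy as np
from scipy.optimize import curve_fit

def monoexp(t, amplitude, rate):
    # Monoexponential relaxation decay: A * exp(-R * t).
    return amplitude * np.exp(-rate * t)

# Synthetic relaxation data: A = 1.0, R = 2.0 s^-1, small Gaussian noise.
t = np.linspace(0.0, 2.0, 20)
rng = np.random.default_rng(1)
signal = monoexp(t, 1.0, 2.0) + rng.normal(0.0, 0.01, t.size)

# Conventional NLS estimate of amplitude and relaxation rate.
popt, pcov = curve_fit(monoexp, t, signal, p0=(1.0, 1.0))
print("A =", popt[0], "R =", popt[1])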
15. Relaxation Algorithm of Piecing-Error for Sub-Images LI Yueping (李跃平); TANG Pushan (唐璞山) 2001-01-01 During the process of automatic image recognition or automatic reverse design of ICs, people often encounter the problem that some sub-images must be pieced together into a whole image. In the traditional piecing algorithm for sub-images, a large accumulated error will be made. In this paper, a relaxation algorithm of piecing-error for sub-images is presented. It can eliminate the accumulated error of the traditional algorithm and greatly improve the quality of the pieced image. Based on an initial pieced image, one can continuously adjust the center of every sub-image and its angle to lessen the error between adjacent sub-images, so the quality of the pieced image can be improved. The presented results indicate that the proposed algorithm can dramatically decrease the error while the quality of the ultimate pieced image is still acceptable. The time complexity of this algorithm is O(n ln n).

18. Investigation of long-lived eddies on Jupiter Lewis, S. R.; Calcutt, S. B.; Taylor, F. W.; Read, P. L. 1986-01-01 Quasi-geostrophic, two-layer models of the Jovian atmosphere are under development; these may be used to simulate eddy phenomena in the atmosphere and include tracer dynamics explicitly. The models permit the investigation of the dynamics of quasi-geostrophic eddies under more controlled conditions than are possible in the laboratory. They can also be used to predict the distribution and behavior of tracer species, and hence to discriminate between different models of the mechanisms forcing the eddies, provided suitable observations can be obtained. At the same time, observational strategies are being developed for the Near Infrared Mapping Spectrometer on the Galileo Orbiter, with the objective of obtaining composition measurements for comparison with the models. Maps of features at thermal infrared wavelengths near 5 micron and reflected sunlight maps as a function of wavelength and phase angle will be obtained. These should provide further useful information on the morphology, composition and microstructure of clouds within eddy features. Equilibrium chemistry models which incorporate advection may then be used to relate these results to the dynamical models and provide additional means of classifying different types of eddies.

19. Mesoscale eddies in the Subantarctic Front-Southwest Atlantic Pablo D.
Glorioso 2005-12-01 Full Text Available Satellite and ship observations in the southern southwest Atlantic (SSWA reveal an intense eddy field and highlight the potential for using continuous real-time satellite altimetry to detect and monitor mesoscale phenomena with a view to understanding the regional circulation. The examples presented suggest that mesoscale eddies are a dominant feature of the circulation and play a fundamental role in the transport of properties along and across the Antarctic Circumpolar Current (ACC. The main ocean current in the SSWA, the Falkland-Malvinas Current (FMC, exhibits numerous embedded eddies south of 50°S which may contribute to the patchiness, transport and mixing of passive scalars by this strong, turbulent current. Large eddies associated with meanders are observed in the ACC fronts, some of them remaining stationary for long periods. Two particular cases are examined using a satellite altimeter in combination with in situ observations, suggesting that cross-frontal eddy transport and strong meandering occur where the ACC flow intensifies along the sub-Antarctic Front (SAF and the Southern ACC Front (SACCF. 20. Eddy Current Sensing of Torque in Rotating Shafts Varonis, Orestes J.; Ida, Nathan 2013-12-01 The noncontact torque sensing in machine shafts is addressed based on the stress induced in a press-fitted magnetoelastic sleeve on the shaft and eddy current sensing of the changes of electrical conductivity and magnetic permeability due to the presence of stress. The eddy current probe uses dual drive, dual sensing coils whose purpose is increased sensitivity to torque and decreased sensitivity to variations in distance between probe and shaft (liftoff). A mechanism of keeping the distance constant is also employed. Both the probe and the magnetoelastic sleeve are evaluated for performance using a standard eddy current instrument. An eddy current instrument is also used to drive the coils and analyze the torque data. The method and sensor described are general and adaptable to a variety of applications. The sensor is suitable for static and rotating shafts, is independent of shaft diameter and operational over a large range of torques. The torque sensor uses a differential eddy current measurement resulting in cancellation of common mode effects including temperature and vibrations. 1. Coastal Kelvin waves and dynamics of Gulf of Aden eddies Valsala, Vinu K.; Rao, Rokkam R. 2016-10-01 The Gulf of Aden (GA) is a small semi-enclosed oceanic region between the Red Sea and the western Arabian Sea. The GA is characterised with westward propagating cyclonic and anti-cyclonic eddies throughout the year. The genesis and propagation of these eddies into the GA have been the focus of several studies which concluded that oceanic instabilities (both barotropic and baroclinic) as well as the Rossby waves from the Arabian Sea are the responsible mechanisms for the presence and maintenance of these eddies. Using a high-resolution (~11 km) reduced gravity hydrodynamic layered model with controlled lateral boundary conditions at the three sides of the GA here we show yet another factor, the coastally propagating Kelvin waves along the coastal Arabia (coasts of Oman and Yemen), is also critically important in setting up a favourable condition for the oceanic instabilities and sustenance of meso-scale eddies in the GA. 
These Kelvin waves, at both seasonal and intra-seasonal time scales, are found to play an important role in the timing and amplitudes of eddies observed in the GA.

2. The eddy kinetic energy budget in the Red Sea Zhan, Peng; Subramanian, Aneesh C.; Yao, Fengchao; Kartadikaria, Aditya R.; Guo, Daquan; Hoteit, Ibrahim 2016-07-01 The budget of eddy kinetic energy (EKE) in the Red Sea, including the sources, redistributions, and sink, is examined using a high-resolution eddy-resolving ocean circulation model. A pronounced seasonally varying EKE is identified, with its maximum intensity occurring in winter, and the strongest EKE is captured mainly in the central and northern basins within the upper 200 m. Eddies acquire kinetic energy from conversion of eddy available potential energy (EPE), from transfer of mean kinetic energy (MKE), and from direct generation due to time-varying (turbulent) wind stress, the first of which contributes the majority of the EKE. The EPE-to-EKE conversion occurs in almost the entire basin, while the MKE-to-EKE transfer appears mainly along the shelf boundary of the basin (200 m isobath), where high horizontal shear interacts with topography. The EKE generated by the turbulent wind stress is relatively small and limited to the southern basin. All these processes are intensified during winter, when the rate of energy conversion is about 4-5 times larger than that in summer. The EKE is redistributed by the vertical and horizontal divergence of the energy flux and the advection of the mean flow. As the main sink of EKE, dissipation processes are found ubiquitously in the basin. The seasonal variability of these energy conversion terms can explain the significant seasonality of eddy activity in the Red Sea.

3. Methane Emissions from Permafrost Regions using Low-Power Eddy Covariance Stations Burba, G.; Sturtevant, C.; Schreiber, P.; Peltola, O.; Zulueta, R.; Mammarella, I.; Haapanala, S.; Rinne, J.; Vesala, T.; McDermitt, D.; Oechel, W. 2012-04-01 Methane is an important greenhouse gas with a warming potential 23 times that of carbon dioxide over a 100-year cycle. The permafrost regions of the world store significant amounts of organic materials under anaerobic conditions, leading to large methane production and accumulation in the upper layers of bedrock, soil and ice. These regions are currently undergoing dramatic change in response to warming trends, and may become a significant potential source of global methane release under a warming climate over the coming decades and centuries. Presently, most measurements of methane fluxes in permafrost regions have been made with static chamber techniques, and very few were done with the eddy covariance approach using closed-path analyzers. Although chambers and closed-path analyzers have advantages, both techniques have significant limitations, especially for permafrost research. Static chamber measurements are discrete in time and space, and particularly difficult to use over polygonal tundra with highly non-uniform micro-topography and an active water layer. They also may not capture the dynamics of methane fluxes on varying time scales (hours to annual estimates). In addition, placement of the chamber may disturb the surface integrity, causing a significant over-estimation of the measured flux.
Closed-path gas analyzers for measuring methane eddy fluxes employ advanced technologies such as TDLS (Tunable Diode Laser Spectroscopy), ICOS (Integrated Cavity Output Spectroscopy), and WS-CRDS (Wavelength-Scanned Cavity Ring-Down Spectroscopy), but require high flow rates at significantly reduced optical cell pressures to provide adequate response time and sharpen absorption features. Such methods, when used with the eddy covariance technique, require a vacuum pump and a total of 400-1500 Watts of grid power for the pump and analyzer system. The weight of such systems often exceeds 100-200 lbs, restricting practical applicability for remote or portable field studies. As a …

4. Event Detection and Visualization of Ocean Eddies based on SSH and Velocity Field Matsuoka, Daisuke; Araki, Fumiaki; Inoue, Yumi; Sasaki, Hideharu 2016-04-01 Numerical studies of ocean eddies have progressed using high-resolution ocean general circulation models. In order to understand ocean eddies from simulation results containing a large amount of information, it is necessary to visualize not only the distribution of eddies at each time step, but also eddy events or phenomena. However, previous methods cannot precisely detect eddies, especially during events such as eddy amalgamation and bifurcation. In the present study, we propose a new approach to eddy detection, tracking and event visualization based on sea surface height (SSH) and the velocity field. The proposed method detects eddy regions as well as stream and current regions, and classifies detected eddies into several types. By tracking the time-varying change of classified eddies, it is possible to detect not only eddy events such as amalgamation and bifurcation but also the interaction between eddies and ocean currents. As a result of visualizing detected eddies and events, we succeeded in creating a movie which enables us to intuitively understand the region of interest.

5. Statistical Characteristics of Mesoscale Eddies in the North Pacific Derived from Satellite Altimetry Yu-Hsin Cheng 2014-06-01 The sea level anomaly data derived from satellite altimetry are analyzed to investigate statistical characteristics of mesoscale eddies in the North Pacific. Eddies are detected by a free-threshold eddy identification algorithm. The results show that the distributions of eddy size, amplitude, propagation speed, and eddy kinetic energy follow the Rayleigh distribution. The most active regions of eddies are the Kuroshio Extension region, the Subtropical Counter Current zone, and the Northeastern Tropical Pacific region. By contrast, eddies are seldom observed around the center of the eastern part of the North Pacific Subarctic Gyre. The propagation speed and kinetic energy of cyclonic and anticyclonic eddies are almost the same, but anticyclonic eddies possess greater lifespans, sizes, and amplitudes than cyclonic eddies. Most eddies in the North Pacific propagate westward except in the Oyashio region. Around the northeastern tropical Pacific and the California Current, cyclonic and anticyclonic eddies propagate westward with a slightly equatorward (197° average azimuth relative to east) and poleward (165°) deflection, respectively. This implies that the background current may play an important role in the formation of the eddy pathway patterns.
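The eddy-census entry above reports that eddy size, amplitude, propagation speed, and kinetic energy follow a Rayleigh distribution. A minimal sketch of how such a claim can be checked against a sample is shown below, using synthetic amplitudes and SciPy's generic Rayleigh fit rather than the authors' altimetry data or identification algorithm.

import numpy as np
from scipy import stats

# Synthetic eddy amplitudes (cm); real values would come from an
# altimetry-based eddy identification algorithm.
rng = np.random.default_rng(2)
amplitudes = rng.rayleigh(scale=6.0, size=5000)

# Fit a Rayleigh distribution with the location fixed at zero and
# assess the fit with a Kolmogorov-Smirnov statistic.
loc, scale = stats.rayleigh.fit(amplitudes, floc=0.0)
ks_stat, p_value = stats.kstest(amplitudes, "rayleigh", args=(loc, scale))
print(scale, ks_stat, p_value)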
6. Investigation on a new inducer of pulsed eddy current thermography He, Min; Zhang, Laibin; Zheng, Wenpei; Feng, Yijing 2016-09-01 In this paper, a new inducer for pulsed eddy current thermography (PECT) is presented. The use of the inducer can help avoid the problem of blocking the infrared (IR) camera's view in the eddy current thermography technique. The inducer can also provide even heating of the test specimen. This paper is concerned with the temperature distribution law around a crack on a specimen when utilizing the new inducer. Firstly, the relevant mathematical models are provided. In the following section, the eddy current distribution and temperature distribution around the crack are studied using numerical simulation. The best separation distance between the inducer and the specimen is also determined. Then, results of the temperature distribution around the crack stimulated by the inducer are obtained by experiments. The effect of the current value on the temperature rise is studied in the experiments as well. Based on the temperature data, temperature features of the crack are discussed.

7. Non-Destructive Techniques Based on Eddy Current Testing García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto 2011-01-01 Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds that does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future.

8. Large eddy simulation of water flow over series of dunes Jun LU 2011-12-01 Large eddy simulation was used to investigate the spatial development of open channel flow over a series of dunes. The three-dimensional filtered Navier-Stokes (N-S) equations were numerically solved with the fractional-step method in sigma coordinates. The subgrid-scale turbulent stress was modeled with a dynamic coherent eddy viscosity model proposed by the authors. The computed velocity profiles are in good agreement with the available experimental results. The mean velocity and the turbulent Reynolds stress affected by a series of dune-shaped structures were compared and analyzed. The variation of turbulence statistics along the flow direction affected by the wavy bottom roughness has been studied. The turbulent boundary layer in a complex geographic environment can be simulated well with the proposed large eddy simulation (LES) model.
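The dune-flow study above closes the filtered N-S equations with a dynamic coherent eddy-viscosity SGS model. As a simpler illustration of the same closure idea, here is a minimal sketch of the classical constant-coefficient Smagorinsky eddy viscosity on a uniform grid; this is a hypothetical NumPy finite-difference example, not the authors' model.

import numpy as np

def smagorinsky_viscosity(u, v, w, dx, cs=0.17):
    # Constant-coefficient Smagorinsky SGS eddy viscosity:
    # nu_t = (cs * dx)**2 * |S|, with |S| = sqrt(2 * S_ij S_ij).
    dudx, dudy, dudz = np.gradient(u, dx)   # central differences, uniform spacing
    dvdx, dvdy, dvdz = np.gradient(v, dx)
    dwdx, dwdy, dwdz = np.gradient(w, dx)
    s11, s22, s33 = dudx, dvdy, dwdz        # strain-rate tensor components
    s12 = 0.5 * (dudy + dvdx)
    s13 = 0.5 * (dudz + dwdx)
    s23 = 0.5 * (dvdz + dwdy)
    strain_sq = (s11**2 + s22**2 + s33**2
                 + 2.0 * (s12**2 + s13**2 + s23**2))
    return (cs * dx)**2 * np.sqrt(2.0 * strain_sq)

# Example on a small synthetic velocity field.
rng = np.random.default_rng(3)
u, v, w = (rng.normal(size=(32, 32, 32)) for _ in range(3))
nu_t = smagorinsky_viscosity(u, v, w, dx=0.05)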
10. A Synoptic Snapshot of the East Cape Eddy (ECE) LIU Wei; LIU Qinyu 2005-01-01 A synoptic snapshot in this study is made of the East Cape Eddy (ECE) based on the World Ocean Circulation Experiment (WOCE) P14C Hydrographic Section and shipboard ADCP velocity vector data collected in September 1992. The ECE is an anticyclonic eddy, barotropically structured and centered at 33.64°S and 176.21°E, with warm- and saline-cored subsurface water. The radius of the eddy is of the order O(110 km) and the maximum circumferential velocity is O(40 cm s−1); as a result, the relative vorticity is estimated to be O(7 × 10−6 s−1). Due to the existence of the ECE, the mixed layer north of New Zealand becomes deeper, reaching a depth of 300 m in the austral winter. The ECE plays an important role in the formation and distribution of the Subtropical Mode Water (STMW) over a considerable area in the South Pacific.
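As a quick arithmetic check of the numbers quoted in the East Cape Eddy entry, assuming approximately solid-body rotation so that the relative vorticity scales as 2V/R (an assumption; the abstract does not state how the estimate was made):

# Order-of-magnitude check of the East Cape Eddy relative vorticity.
V = 0.40        # maximum circumferential velocity, m/s (40 cm/s)
R = 110.0e3     # eddy radius, m (order 110 km)
zeta = 2.0 * V / R      # solid-body rotation estimate of relative vorticity
print(f"relative vorticity ~ {zeta:.1e} s^-1")   # ~7.3e-06 s^-1, consistent with O(7e-6 s^-1)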
11. Strong eddy compensation for the Gulf Stream heat transport Saenko, Oleg A. 2015-12-01 Using a high-resolution ocean model forced with high-resolution atmospheric fields, a 5 year mean heat budget of the upper ocean in the Gulf Stream (GS) region is analyzed. The heat brought to the region with the mean flows along the GS path is 2-3 times larger than the heat loss to the atmosphere, with the difference being balanced by a strong cooling effect due to lateral eddy heat fluxes. However, over a broad area off the Grand Banks, the eddies warm the uppermost ocean layers, partly compensating for the loss of heat to the atmosphere. The upward eddy heat flux, which brings heat from the deeper ocean to the upper layers, is 30-80% of the surface heat loss.

13. Exact temporal eddy current compensation in magnetic resonance imaging systems. Morich, M A; Lampman, D A; Dannels, W R; Goldie, F D 1988-01-01 A step-response method has been developed to extract the properties (amplitudes and decay time constants) of intrinsic-eddy-current-sourced magnetic fields generated in whole-body magnetic resonance imaging systems when pulsed field gradients are applied. Exact compensation for the eddy-current effect is achieved through a polynomial rooting procedure and matrix inversion once the 2N properties of the N-term decay process are known. The output of the inversion procedure yields the required characteristics of the filter for spectrum magnitude and phase equalization. The method is described for the general case along with experimental results for one-, two-, and three-term inversions. The method's usefulness is demonstrated for the usually difficult case of long-term (200-1000 ms) eddy-current compensation. Field-gradient spectral flatness measurements over 30 mHz-100 Hz are given to validate the method.

14. Eddy current pulsed thermography for fatigue evaluation of gear Tian, Gui Yun; Yin, Aijun; Gao, Bin; Zhang, Jishan; Shaw, Brian 2014-02-01 The pulsed eddy current (PEC) technique generates responses over a wide range of frequencies, containing more spectral coverage than traditional eddy current inspection. Eddy current pulsed thermography (ECPT), a newly developed non-destructive testing (NDT) technique, has advantages such as rapid inspection of a large area within a short time, high spatial resolution, high sensitivity and stand-off measurement distance. This paper investigates ECPT for the evaluation of gear fatigue tests. The paper proposes a statistical method based on single-channel blind source separation to extract details of gear fatigue. The transient thermal distribution and the patterns of fatigued contact surfaces as well as the non-contact surfaces are discussed. In addition, measurements for gears with different numbers of fatigue-test cycles by ECPT, and a comparison between ECPT and magnetic Barkhausen noise (MBN), have been evaluated. The comparison shows the competitive capability of ECPT in fatigue evaluation.

16. Stray Capacitances of an Air-Cored Eddy Current Sensor Yi Jia 2009-12-01 Stray capacitance of an air-cored eddy current sensor is one of the most crucial issues for successful development of an eddy current based residual stress assessment technology at frequencies above 50 MHz.
A two dimensional finite element model and an equivalent lumped capacitance network have been developed to accurately quantify overall stray capacitances of an air-cored eddy current sensor with specimen being tested. A baseline model was used to evaluate sensor design parameters, including the effects of pitch distance, trace width, trace thickness, number of turns, inner diameter, substrate thickness, lift-off distance, and dielectric constant of shim on the stray capacitances of the sensor. The results clearly indicate that an appropriate sensor design parameters could reduce the stray capacitance and improve the sensor performance. This research opens up a new design space to minimize stray capacitance effect and improve the sensor sensitivity and its lift-off uncertainty at elevated high frequencies. 17. Eddies reduce denitrification and compress habitats in the Arabian Sea Lachkar, Zouhair; Smith, Shafer; Lévy, Marina; Pauluis, Olivier 2016-09-01 The combination of high biological production and weak oceanic ventilation in regions, such as the northern Indian Ocean and the eastern Pacific and Atlantic, cause large-scale oxygen minimum zones (OMZs) that profoundly affect marine habitats and alter key biogeochemical cycles. Here we investigate the effects of eddies on the Arabian Sea OMZ—the world's thickest—using a suite of regional model simulations with increasing horizontal resolution. We find that isopycnal eddy transport of oxygen to the OMZ region limits the extent of suboxia so reducing denitrification, increasing the supply of nitrate to the surface, and thereby enhancing biological production. That same enhanced production generates more organic matter in the water column, amplifying oxygen consumption below the euphotic zone, thus increasing the extent of hypoxia. Eddy-driven ventilation likely plays a similar role in other low-oxygen regions and thus may be crucial in shaping marine habitats and modulating the large-scale marine nitrogen cycle. 18. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G. 2012-01-01 In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver. 19. Relaxation of liquid bridge after droplets coalescence Jiangen Zheng 2016-11-01 Full Text Available We investigate the relaxation of liquid bridge after the coalescence of two sessile droplets resting on an organic glass substrate both experimentally and theoretically. The liquid bridge is found to relax to its equilibrium shape via two distinct approaches: damped oscillation relaxation and underdamped relaxation. When the viscosity is low, damped oscillation shows up, in this approach, the liquid bridge undergoes a damped oscillation process until it reaches its stable shape. 
However, if the viscous effects become significant, underdamped relaxation occurs. In this case, the liquid bridge relaxes to its equilibrium state in a non-periodic decay mode. In depth analysis indicates that the damping rate and oscillation period of damped oscillation are related to an inertial-capillary time scale τc. These experimental results are also testified by our numerical simulations with COMSOL Multiphysics. 20. Cross relaxation in nitroxide spin labels Marsh, Derek 2016-11-01 Cross relaxation, and mI -dependence of the intrinsic electron spin-lattice relaxation rate We , are incorporated explicitly into the rate equations for the electron-spin population differences that govern the saturation behaviour of 14N- and 15N-nitroxide spin labels. Both prove important in spin-label EPR and ELDOR, particularly for saturation recovery studies. Neither for saturation recovery, nor for CW-saturation EPR and CW-ELDOR, can cross relaxation be described simply by increasing the value of We , the intrinsic spin-lattice relaxation rate. Independence of the saturation recovery rates from the hyperfine line pumped or observed follows directly from solution of the rate equations including cross relaxation, even when the intrinsic spin-lattice relaxation rate We is mI -dependent. 1. Eddy turbulence parameters inferred from radar observations at Jicamarca M. N. Vlasov 2007-03-01 Full Text Available Significant electron density striations, neutral temperatures 27 K above nominal, and intense wind shear were observed in the E-region ionosphere over the Jicamarca Radio Observatory during an unusual event on 26 July 2005 (Hysell et al., 2007. In this paper, these results are used to estimate eddy turbulence parameters and their effects. Models for the thermal balance in the mesosphere/lower thermosphere and the charged particle density in the E region are developed here. The thermal balance model includes eddy conduction and viscous dissipation of turbulent energy as well as cooling by infrared radiation. The production and recombination of ions and electrons in the E region, together with the production and transport of nitric oxide, are included in the plasma density model. Good agreement between the model results and the experimental data is obtained for an eddy diffusion coefficient of about 1×103 m2/s at its peak, which occurs at an altitude of 107 km. This eddy turbulence results in a local maximum of the temperature in the upper mesosphere/lower thermosphere and could correspond either to an unusually high mesopause or to a double mesosphere. Although complicated by plasma dynamic effects and ongoing controversy, our interpretation of Farley-Buneman wave phase velocity (Hysell et al., 2007 is consistent with a low Brunt-Väisälä frequency in the region of interest. Nitric oxide transport due to eddy diffusion from the lower thermosphere to the mesosphere causes electron density changes in the E region whereas NO density modulation due to irregularities in the eddy diffusion coefficient creates variability in the electron density. 2. Observation of baroclinic eddies southeast of Okinawa Island PARK; Jae-Hun 2008-01-01 In the region southeast of Okinawa, during May to July 2001, a cyclonic and an anticyclonic eddy were observed from combined measurements of hydrocasts, an upward-looking moored acoustic Doppler current profiler (MADCP), pressure-recording inverted echo sounders (PIESs), satellite altimetry, and a coastal tide gauge. 
The hydrographic data showed that the lowest/highest temperature (T) and salinity (S) anomalies from a 13-year mean for the same season were respectively -3.0/+2.5℃ and -0.20/+0.15 psu at 380/500 dbar for the cyclonic/anticyclonic eddies. From the PIES data, using a gravest empirical mode method, we estimated time-varying surface dynamic height (D) anomaly referred to 2000 dbar changing from -20 to 30 cm, and time-varying T and S anomalies at 500 dbar ranging through about ±2 ℃ and ±0.2 psu, respectively. The passage of the eddies caused variations of both satellite-measured sea surface height anomaly (SSHA) and tide-gauge-measured sea level anomaly to change from about –20 to 30 cm, consistent with the D anomaly from the PIESs. Bottom pressure sensors measured no variation related to these eddy activities, which indicated that the two eddies were dominated by baro-clinicity. Time series of SSHA map confirmed that the two eddies, originating from the North Pacific Subtropical Countercurrent region near 20°―30°N and 150°―160°E, traveled about 3000 km for about 18 months with mean westward propagation speed of about 6 cm/s, before arriving at the region southeast of Okinawa Island. 3. Ion beam induced stress formation and relaxation in germanium Steinbach, T., E-mail: Tobias.Steinbach@uni-jena.de [Institut für Festkörperphysik, Friedrich-Schiller-Universität Jena, Max-Wien-Platz 1, D-07743 Jena (Germany); Reupert, A.; Schmidt, E.; Wesch, W. [Institut für Festkörperphysik, Friedrich-Schiller-Universität Jena, Max-Wien-Platz 1, D-07743 Jena (Germany) 2013-07-15 Ion irradiation of crystalline solids leads not only to defect formation and amorphization but also to mechanical stress. In the past, many investigations in various materials were performed focusing on the ion beam induced damage formation but only several experiments were done to investigate the ion beam induced stress evolution. Especially in microelectronic devices, mechanical stress leads to several unwanted effects like cracking and peeling of surface layers as well as changing physical properties and anomalous diffusion of dopants. To study the stress formation and relaxation process in semiconductors, crystalline and amorphous germanium samples were irradiated with 3 MeV iodine ions at different ion fluence rates. The irradiation induced stress evolution was measured in situ with a laser reflection technique as a function of ion fluence, whereas the damage formation was investigated by means of Rutherford backscattering spectrometry. The investigations show that mechanical stress builds up at low ion fluences as a direct consequence of ion beam induced point defect formation. However, further ion irradiation causes a stress relaxation which is attributed to the accumulation of point defects and therefore the creation of amorphous regions. A constant stress state is reached at high ion fluences if a homogeneous amorphous surface layer was formed and no further ion beam induced phase transition took place. Based on the results, we can conclude that the ion beam induced stress evolution seems to be mainly dominated by the creation and accumulation of irradiation induced structural modification. 4. Utilizing RELAX NG Schemas in XML Editors Schmied, Martin 2008-01-01 This thesis explores the possibilities of utilizing RELAX NG schemata in the process of editing XML documents. The ultimate goal of this thesis is to prototype a system supporting user while editing XML document with bound RELAX NG schema inside the Eclipse IDE. 
Such a system comprises two major components -- an integration of RELAX NG validator and an autocompletion engine. Design of the autocompletion engine represents the main contribution of this thesis, because similar systems are almost... 5. Surface-atmosphere decoupling limits accumulation at Summit, Greenland. Berkelhammer, Max; Noone, David C; Steen-Larsen, Hans Christian; Bailey, Adriana; Cox, Christopher J; O'Neill, Michael S; Schneider, David; Steffen, Konrad; White, James W C 2016-04-01 Despite rapid melting in the coastal regions of the Greenland Ice Sheet, a significant area (~40%) of the ice sheet rarely experiences surface melting. In these regions, the controls on annual accumulation are poorly constrained owing to surface conditions (for example, surface clouds, blowing snow, and surface inversions), which render moisture flux estimates from myriad approaches (that is, eddy covariance, remote sensing, and direct observations) highly uncertain. Accumulation is partially determined by the temperature dependence of saturation vapor pressure, which influences the maximum humidity of air parcels reaching the ice sheet interior. However, independent proxies for surface temperature and accumulation from ice cores show that the response of accumulation to temperature is variable and not generally consistent with a purely thermodynamic control. Using three years of stable water vapor isotope profiles from a high altitude site on the Greenland Ice Sheet, we show that as the boundary layer becomes increasingly stable, a decoupling between the ice sheet and atmosphere occurs. The limited interaction between the ice sheet surface and free tropospheric air reduces the capacity for surface condensation to achieve the rate set by the humidity of the air parcels reaching interior Greenland. The isolation of the surface also acts to recycle sublimated moisture by recondensing it onto fog particles, which returns the moisture back to the surface through gravitational settling. The observations highlight a unique mechanism by which ice sheet mass is conserved, which has implications for understanding both past and future changes in accumulation rate and the isotopic signal in ice cores from Greenland. 6. An eddy viscosity calculation method for a turbulent duct flow Antonia, R. A.; Bisset, D. K.; Kim, J. 1991-01-01 The mean velocity profile across a fully developed turbulent duct flow is obtained from an eddy viscosity relation combined with an empirical outer region wake function. Results are in good agreement with experiments and with direct numerical simulations in the same flow at two Reynolds numbers. In particular, the near-wall trend of the Reynolds shear stress and its variation with Reynolds number are similar to those of the simulations. The eddy viscosity method is more accurate than previous mixing length or implicit function methods. 7. Eddy current NDE performance demonstrations using simulation tools Maurice, L. [EDF - CEIDRE, 2 rue Ampere, 93206 Saint-Denis Cedex 1 (France); Costan, V.; Guillot, E.; Thomas, P. [EDF - R and D, THEMIS, 1, avenue du General de Gaulle, 92141 Clamart (France) 2013-01-25 To carry out performance demonstrations of the Eddy-Current NDE processes applied on French nuclear power plants, EDF studies the possibility of using simulation tools as an alternative to measurements on steam generator tube mocks-up. 
This paper focuses on the strategy led by EDF to assess and use Code_Carmel3D and Civa on the case of eddy-current NDE of the wear problem which may appear in the U-shape region of steam generator tubes due to the rubbing of anti-vibration bars.

8. Large-eddy Simulation of Bubble-Liquid Confined Jets YANG Min; ZHOU Lixing 2002-01-01 Large-eddy simulation (LES) with two-way coupling is used to study bubble-liquid two-phase confined multiple jets discharged into a 2D channel. The LES results reveal the large-eddy vortex structures of both the liquid flow and the bubble motion, and the shear-generated and bubble-induced liquid turbulence; they indicate much stronger bubble fluctuation than that of the liquid and an enhancement of liquid turbulence by the bubbles. Both shear and bubble-liquid interaction are important for the liquid turbulence generation in the case studied.

9. PHREATOPHYTE WATER USE ESTIMATED BY EDDY-CORRELATION METHODS. Weaver, H.L.; Weeks, E.P.; Campbell, G.S.; Stannard, D.I.; Tanner, B.D. 1986-01-01 Water use was estimated for three phreatophyte communities: a saltcedar community and an alkali-Sacaton grass community in New Mexico, and a greasewood rabbit-brush-saltgrass community in Colorado. These water-use estimates were calculated from eddy-correlation measurements using three different analyses, since the direct eddy-correlation measurements did not satisfy a surface energy balance. The analysis that seems to be most accurate indicated the saltcedar community used from 58 to 87 cm (23 to 34 in.) of water each year. The other two communities used about two-thirds of this quantity.
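The phreatophyte study above rests on the eddy-correlation (eddy covariance) method, in which a turbulent flux is estimated as the covariance of vertical-velocity and scalar fluctuations over an averaging period. A minimal sketch with synthetic high-frequency data follows; it is illustrative only, and real processing adds coordinate rotation, detrending, despiking, and spectral corrections.

import numpy as np

def eddy_covariance_flux(w, c):
    # Turbulent flux as the covariance of vertical velocity w and a scalar c:
    # mean(w'c'), with primes denoting fluctuations about the period mean.
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    return float(np.mean(w_prime * c_prime))

# Synthetic 30-minute record sampled at 20 Hz.
n = 20 * 60 * 30
rng = np.random.default_rng(4)
w = rng.normal(0.0, 0.3, n)                      # vertical wind, m/s
c = 400.0 + 0.05 * w + rng.normal(0.0, 0.5, n)   # scalar partially correlated with w
print(eddy_covariance_flux(w, c))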
10. Parameter identification of internal wave and mesoscale eddy 2003-01-01 A simplified parameter identification algorithm for the inverse refractive indexes of the mesoscale eddy and the internal wave in the ocean is proposed by investigating the incident field and the scattered field that comprise the total field of a wave in the ocean, considering that the total field and the incident field satisfy the Helmholtz equations and that the scattered field conforms to the Sommerfeld radiation condition. Two examples for the calculation of the refractive index and the inverse refractive index of the mesoscale eddy and the internal wave, respectively, demonstrate the applicability of the algorithm.

11. Large Eddy Simulation of the ventilated wave boundary layer Lohmann, Iris P.; Fredsøe, Jørgen; Sumer, B. Mutlu 2006-01-01 A Large Eddy Simulation (LES) of (1) a fully developed turbulent wave boundary layer and (2) case 1 subject to ventilation (i.e., suction and injection varying alternately in phase) has been performed, using the Smagorinsky subgrid-scale model to express the subgrid viscosity. The model was found … The results indicate that the large eddies develop in the resolved scale, corresponding to fluid with an effective viscosity decided by the sum of the kinematic and subgrid viscosity. Regarding case 2, the results are qualitatively in accordance with experimental findings. Injection generally … significantly. Ventilation therefore results in a net current, even in symmetric waves.

12. Defect detection in conducting materials using eddy current testing techniques Brauer, Hartmut 2014-01-01 Lorentz force eddy current testing (LET) is a novel nondestructive testing technique which can be applied preferably to the identification of internal defects in nonmagnetic moving conductors. LET is compared, under similar testing conditions, with classical eddy current testing (ECT). Numerical FEM simulations have been performed to analyze the measurements as well as the identification of internal defects in nonmagnetic conductors. The results are compared with measurements to test the feasibility of defect identification. Finally, the use of LET measurements to estimate the electrical conductivity of the conductors under test is described as well.

13. Estimating surface fluxes using eddy covariance and numerical ogive optimization Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling 2015-01-01 Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency contributions …

14. RECENT PROGRESS IN NONLINEAR EDDY-VISCOSITY TURBULENCE MODELING 符松; 郭阳; 钱炜祺; 王辰 2003-01-01 This article presents recent progress in turbulence modeling in the Unit for Turbulence Simulation in the Department of Engineering Mechanics at Tsinghua University. The main contents include: a compact Non-Linear Eddy-Viscosity Model (NLEVM) based on the second-moment closure, a near-wall low-Re non-linear eddy-viscosity model and a curvature-sensitive turbulence model. The models have been validated in a wide range of complex flow test cases, and the calculated results show that the present models exhibit overall good performance.

15. Cinlar Subgrid Scale Model for Large Eddy Simulation Kara, Rukiye 2016-01-01 We construct a new subgrid scale (SGS) stress model for representing the small scale effects in large eddy simulation (LES) of incompressible flows. We use the covariance tensor for representing the Reynolds stress and include Clark's model for the cross stress. The Reynolds stress is obtained analytically from a Cinlar random velocity field, which is based on vortex structures observed in the ocean at the subgrid scale. The validity of the model is tested with turbulent channel flow computed in OpenFOAM. It is compared with the most frequently used Smagorinsky and one-equation eddy SGS models through DNS data.

16. Temperature relaxation in dense plasma mixtures Faussurier, Gérald; Blancard, Christophe 2016-09-01 We present a model to calculate temperature-relaxation rates in dense plasma mixtures. The electron-ion relaxation rates are calculated using an average-atom model and the ion-ion relaxation rates by the Landau-Spitzer approach. This method allows the study of temperature relaxation in many-temperature electron-ion and ion-ion systems such as those encountered in inertial confinement fusion simulations. It is of interest for general nonequilibrium thermodynamics dealing with energy flows between various systems and should find broad use in present high energy density experiments.

17. Baryogenesis via Elementary Goldstone Higgs Relaxation Gertov, Helene; Pearce, Lauren; Yang, Louis 2016-01-01 We extend the relaxation mechanism to the Elementary Goldstone Higgs framework. Besides studying the allowed parameter space of the theory we add the minimal ingredients needed for the framework to be phenomenologically viable.
The very nature of the extended Higgs sector allows to consider very flat scalar potential directions along which the relaxation mechanism can be implemented. This fact translates into wider regions of applicability of the relaxation mechanism when compared to the Standard Model Higgs case. Our results show that, if the electroweak scale is not fundamental but radiatively generated, it is possible to generate the observed matter-antimatter asymmetry via the relaxation mechanism. 18. Dielectric relaxation studies in polyvinyl butyral Mehendru, P. C.; Kumar, Naresh; Arora, V. P.; Gupta, N. P. 1982-10-01 Dielectric measurements have been made in thick films (˜100 μm) of polyvinyl butyral (PVB) having degree of polymerization n=1600, in the frequency range 100 Hz-100 KHz and temperature range 300-373 K. The results indicated that PVB was in the amorphous phase and observed dielectric dispersion has been assigned as the β-relaxation process. The β relaxation is of Debye type with symmetrical distribution of relaxation times. The dielectric relaxation strength Δɛ and the distribution parameters β¯ increase with temperature. The results can be qualitatively explained by assuming the hindered rotation of the side groups involving hydroxyl/acetate groups. 19. Relaxation and Visualization Strategies for Story Telling 冯灵林 2012-01-01 The importance of training students to tell or retell story is self - evident for mastering English language. The following activity introduces relaxation and visualization strategies for story telling. 20. Numerical experiments with assimilation of the mean and unresolved meteorological conditions into large-eddy simulation model Esau, Igor 2010-01-01 Micrometeorology, city comfort, land use management and air quality monitoring increasingly become important environmental issues. To serve the needs, meteorology needs to achieve a serious advance in representation and forecast on micro-scales (meters to 100 km) called meteorological terra incognita. There is a suitable numerical tool, namely, the large-eddy simulation modelling (LES) to support the development. However, at present, the LES is of limited utility for applications. The study addresses two problems. First, the data assimilation problem on micro-scales is investigated as a possibility to recover the turbulent fields consistent with the mean meteorological profiles. Second, the methods to incorporate of the unresolved surface structures are investigated in a priopi numerical experiments. The numerical experiments demonstrated that the simplest nudging or Newtonian relaxation technique for the data assimilation is applicable on the turbulence scales. It is also shown that the filtering property of... 1. Pre-crack fatigue life assessment of relevant aircraft materials using fractal analysis of eddy current test data Schreiber, Jürgen; Cikalova, Ulana; Hillmann, Susanne; Meyendorf, Norbert; Hoffmann, Jochen 2013-01-01 Successful determination of residual fatigue life requires a comprehensive understanding of the fatigue related material deformation mechanism. Neither macroscopic continuum mechanics nor micromechanic observations provide sufficient data to explain subsequent deformation structures occurring during the fatigue life of a metallic structure. Instead mesomechanic deformation on different scaling levels can be studied by applying fractal analysis of various means of nondestructive inspection measurements. 
The resulting fractal dimension data can be correlated to the actual material damage states, providing an estimation of the remaining residual fatigue life before macroscopic fracture develops. Recent efforts were aimed at applying the fractal concept to the aerospace-relevant materials AA7075-T6 and Ti-6Al-4V. Proven and newly developed fractal analysis methods were applied to eddy current (EC) measurements of fatigued specimens, with the potential to transition this approach to an aircraft for in-situ nondestructive inspection. The occurrence of mesomechanic deformation at the material surface of both AA7075-T6 and Ti-6Al-4V specimens could be established via topography images using confocal microscopy (CM). Furthermore, a pulsed eddy current (PEC) approach was developed, combined with a sophisticated new fractal analysis algorithm based on short-pulse excitation and evaluation of the EC relaxation behavior. This paper presents the concept, experimental realization, fractal analysis procedures, and results of this effort.

2. Nuclear relaxation via paramagnetic impurities Dzheparov, F S; Jacquinot, J F 2002-01-01 The first part of the work contains a calculation of the kinetics of nuclear relaxation via paramagnetic impurities for systems with arbitrary (including fractal) space dimension d, based on ideas that are now current for 3d objects. A new mean-field-type theory is constructed in the second part of the work. It reproduces all results of the first part for integer d and makes it possible to describe the process for longer times, when a crossover to Balagurov-Waks asymptotics starts to develop. Solutions of the equations of the new theory are constructed for integer d. To obtain the solutions, a method is developed for calculating the low-energy and long-wave asymptotics of the T matrix of potential scattering off the mass shell for singular repulsive potentials.

3. Relaxing Chosen-Ciphertext Security Canetti, Ran; Krawczyk, Hugo; Nielsen, Jesper Buus 2003-01-01 Security against adaptive chosen ciphertext attacks (or, CCA security) has been accepted as the standard requirement from encryption schemes that need to withstand active attacks. In particular, it is regarded as the appropriate security notion for encryption schemes used as components within general protocols and applications. Indeed, CCA security was shown to suffice in a large variety of contexts. However, CCA security often appears to be somewhat too strong: there exist encryption schemes (some of which come up naturally in practice) that are not CCA secure, but seem sufficiently secure "for most practical purposes." We propose a relaxed variant of CCA security, called Replayable CCA (RCCA) security. RCCA security accepts as secure the non-CCA (yet arguably secure) schemes mentioned above; furthermore, it suffices for most existing applications of CCA security. We provide three …

4. Large-Eddy Simulations of Tropical Convective Systems, the Boundary Layer, and Upper Ocean Coupling 2014-09-30 Large eddy simulation (LES) of organized convective systems, which resolve boundary-layer eddy scales to the mesoscale …

5. Observed and modeled surface eddy heat fluxes in the eastern Nordic Seas
Koszalka, Inga Monika; LaCasce, J. H. 2012-01-01 Large-scale budget calculations and numerical model process studies suggest that lateral eddy heat fluxes have an important cooling effect on the Norwegian Atlantic Current (NwAC) as it flows through the Nordic Seas. But observational estimates of such fluxes have been lacking. Here, wintertime surface eddy heat fluxes in the eastern Nordic Seas are estimated from surface drifter data, satellite data and an eddy-permitting numerical model. Maps of the eddy heat flux divergence suggest advecti... 6. Accumulation by Conservation Büscher, Bram; Fletcher, Robert 2014-01-01 Following the financial crisis and its aftermath, it is clear that the inherent contradictions of capitalist accumulation have become even more intense and plunged the global economy into unprecedented turmoil and urgency. Governments, business leaders and other elite agents are frantically searching 7. Nondestructive Testing Eddy Current Basic Principles RQA/M1-5330.12 (V-I). National Aeronautics and Space Administration, Huntsville, AL. George C. Marshall Space Flight Center. As one in the series of programmed instruction handbooks, prepared by the U.S. space program, home study material is presented in this volume concerning familiarization and orientation on basic eddy current principles. The subject is presented under the following headings: Basic Eddy Current Concepts, Eddy Current Generation and Distribution,… 8. Magnetic Flux Fluctuations Due to Eddy Currents and Thermal Noise in Metallic Disks Uzunbajakau, S.; Rijpma, A.P.; Dolfsma, J.; Krooshoop, H.J.G.; Brake, ter H.J.M.; Peters, M.J.; Rogalla, H. 2003-01-01 We derive expressions for the magnetic flux in a circular loop due to eddy currents and thermal noise in coaxial metallic disks. The eddy currents are induced by an applied field that changes sinusoidally in time. We give expressions for the eddy current noise when the frequency of the applied field 10. Detection of Bay of Bengal eddies from TOPEX and in situ observations Ali, M.M.; Sharma, R.; Gopalakrishna, V.V. anticyclonic eddy is located around 13 degrees N and 83 degrees E from the TOPEX observations averaged over 10-19 August 1993. The thermal sections pass through the southern periphery of the eddy with a prominent trough over the eddy region. The crest noticed... 11. Acoustical characteristics and simulated tomographic inversion of a cold core eddy in the Bay of Bengal PrasannaKumar, S.; Navelkar, G.S.; Murty, T.V.R.; Murty, C.S. by about 100-200 ms under the influence of the eddy. The intensity computations show that when the ray passes through the eddy, it suffers an additional loss of 20-25 dB. From the simulated travel time delays, the eddy profile is reconstructed through... 12.
SIMULATION OF EDDIES AFFECTED BY TOPOGRAPHY IN A BAROTROPICAL QUASI-GEOSTROPHIC FLUID 2001-01-01 Based upon the quasi-geostrophic barotropic equation, taking into account the effect of seabed topography, analytical solutions and simulated eddies associated with different topographies are obtained. By examining the shapes of the various eddies, we found some interesting phenomena and gained a better understanding of the importance of seabed topography for eddy shape. 13. Eddy correlation measurements of oxygen uptake in deep ocean sediments Berg, P.; Glud, Ronnie Nøhr; Hume, A. 2010-01-01 We present and compare small sediment-water fluxes of O2 determined with the eddy correlation technique, with in situ chambers, and from vertical sediment microprofiles at a 1450 m deep-ocean site in Sagami Bay, Japan. The average O2 uptake for the three approaches, respectively, was ... 14. 76 FR 59394 - Big Eddy-Knight Transmission Project 2011-09-26 ... and ancillary facilities between BPA's existing Big Eddy Substation in The Dalles, Oregon, to a proposed new Knight Substation that would be connected to an existing BPA line about 4 miles northwest of... Oregon side of the Columbia River, as described in the final EIS. For the proposed new Knight Substation... 15. When Does Eddy Viscosity Damp Subfilter Scales Sufficiently? Verstappen, Roel 2011-01-01 Large eddy simulation (LES) seeks to predict the dynamics of spatially filtered turbulent flows. The very essence is that the LES-solution contains only scales of size ≥Δ, where Δ denotes some user-chosen length scale. This property enables us to perform a LES when it is not feasible to compute the 16. Large Eddy Simulations of an Airfoil in Turbulent Inflow Gilling, Lasse; Sørensen, Niels 2008-01-01 Wind turbines operate in the turbulent boundary layer of the atmosphere and due to the rotational sampling effect the blades experience a high level of turbulence [1]. In this project the effect of turbulence is investigated by large eddy simulations of the turbulent flow past a NACA 0015 airfoil... 18. NASA's Large-Eddy Simulation Research for Jet Noise Applications DeBonis, James R. 2009-01-01 Research into large-eddy simulation (LES) for application to jet noise is described. The LES efforts include in-house code development and application at NASA Glenn along with NASA Research Announcement sponsored work at Stanford University and Florida State University. Details of the computational methods used and sample results for jet flows are provided. 19. Aero-acoustic modeling using large eddy simulation Shen, Wen Zhong; Sørensen, Jens Nørkær 2007-01-01 The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar... 20. Mind the gap: a guideline for large eddy simulation.
George, William K; Tutkun, Murat 2009-07-28 This paper briefly reviews some of the fundamental ideas of turbulence as they relate to large eddy simulation (LES). Of special interest is how our thinking about the so-called 'spectral gap' has evolved over the past decade, and what this evolution implies for LES applications. 1. Analysis of inadvertent microprocessor lag time on eddy covariance results Karl Zeller; Gary Zimmerman; Ted Hehn; Evgeny Donev; Diane Denny; Jeff Welker 2001-01-01 Researchers using the eddy covariance approach to measuring trace gas fluxes are often hoping to measure carbon dioxide and energy fluxes for ecosystem intercomparisons. This paper demonstrates a systematic microprocessor-caused lag of -0.1 to -0.2 s in a commercial sonic anemometer-analog-to-digital datapacker system operated at 10 Hz. The result of the inadvertent... 2. Detached Eddy Simulations of an Airfoil in Turbulent Inflow Gilling, Lasse; Sørensen, Niels; Davidson, Lars 2009-01-01 The effect of resolving inflow turbulence in detached eddy simulations of airfoil flows is studied. Synthetic turbulence is used for the inflow boundary condition. The generated turbulence fields are shown to decay according to experimental data as they are convected through the domain with the free ... 3. Eddie Murphy grimmile kulus üheksa kuud / Triin Tael Tael, Triin 2007-01-01 In the new US comedy film "Norbit" (directed by Brian Robbins, with a screenplay by the actor's brother Charles Murphy), Eddie Murphy plays three completely different roles. He was assisted in this by make-up artist Rick Baker. 5. Distant Influence of Kuroshio Eddies on North Pacific Weather Patterns? Ma, Xiaohui; Chang, Ping; Saravanan, R.; Montuoro, Raffaele; Hsieh, Jen-Shan; Wu, Dexing; Lin, Xiaopei; Wu, Lixin; Jing, Zhao 2015-12-01 High-resolution satellite measurements of surface winds and sea-surface temperature (SST) reveal strong coupling between meso-scale ocean eddies and near-surface atmospheric flow over eddy-rich oceanic regions, such as the Kuroshio and Gulf Stream, highlighting the importance of meso-scale oceanic features in forcing the atmospheric planetary boundary layer (PBL). Here, we present high-resolution regional climate modeling results, supported by observational analyses, demonstrating that meso-scale SST variability, largely confined in the Kuroshio-Oyashio confluence region (KOCR), can further exert a significant distant influence on winter rainfall variability along the U.S. Northern Pacific coast. The presence of meso-scale SST anomalies enhances the diabatic conversion of latent heat energy to transient eddy energy, intensifying winter cyclogenesis via moist baroclinic instability, which in turn leads to an equivalent barotropic downstream anticyclone anomaly with reduced rainfall. The finding points to the potential of improving forecasts of extratropical winter cyclones and storm systems and projections of their response to future climate change, which are known to have major social and economic impacts, by improving the representation of ocean eddy-atmosphere interaction in forecast and climate models. 6.
Large Eddy Simulation of Sydney Swirl Non-Reaction Jets Yang, Yang; Kær, Søren Knudsen; Yin, Chungen The Sydney swirl burner non-reaction case was studied using large eddy simulation. The two-point correlation method was introduced and used to estimate grid resolution. Energy spectra and instantaneous pressure and velocity plots were used to identify features in the flow field. By using these methods... 7. Ekman Spiral in Horizontally Inhomogeneous Ocean with Varying Eddy Viscosity 2015-01-01 Oceanography, Naval Postgraduate School, Monterey, California, USA. 8. Subminiature eddy current transducers for studying boride coatings Dmitriev, S. F.; Ishkov, A. V.; Malikov, V. N.; Sagalakov, A. M. 2016-07-01 Strengthening machine parts and assemblies for increased reliability and longer service life is an important task of modern mechanical engineering. The main objects of study in this work were the steels 65G and 50HGA and wear-resistant boride coatings of the ternary Fe-B-FeₙB system, which were investigated by scanning electron microscopy and eddy-current nondestructive methods. 9. The influence of mesoscale eddies on shallow water acoustic propagation Deferrari, Harry; Olson, Donald 2003-10-01 Acoustic propagation measurements in 150 m depth on the Florida escarpment observe the effects of the passage of a cyclonic eddy. As the stream core of the Florida Current meanders, the eddy is formed and propagates along the shelf edge. The sequence over roughly a fortnight is as follows: ahead of the eddy, warm surface water and cold bottom water are swept onto the terrace, forming a steep thermocline and a correspondingly strong downward-refracting C(z). The gradient produces intense, focused RBR arrivals and the thermocline becomes a duct for internal waves to propagate shoreward. At first, the internal wave energy is minimal and propagation is stable and coherent. As the internal tides attempt to propagate onto the shelf, the sound speed field and the acoustic signals become increasingly variable. The variability reaches a crescendo as the 200 m long internal tide is blocked from propagating onto the narrower shelf and begins to break and overturn, producing small-scale variability. As the eddy passes, nearly isothermal conditions are restored along with quiescent internal wave fields and reduced signal variability. Here, the effects are quantified with data from fixed-system acoustic and oceanographic measurements, demonstrating that the mesoscale determines acoustic propagation conditions days in advance. 10. A Laboratory Activity on the Eddy Current Brake Molina-Bolivar, J. A.; Abella-Palacios, A. J. 2012-01-01 The aim of this paper is to introduce a simple and low-cost experimental setup that can be used to study the eddy current brake, which considers the motion of a sliding magnet on an inclined conducting plane in terms of basic physical principles. We present a set of quantitative experiments performed to study the influence of the geometrical and… 11. Grade and Recovery Prediction for Eddy Current Separation Processes Rem, P.C.; Beunder, E.M.; Kuilman, W. 1998-01-01 Grade and recovery of eddy current separation can be estimated on the basis of trajectory simulations for particles of simple shapes.
In order to do so, the feed is characterized in terms of a small set of test-particles, each test-particle representing a fraction of the feed of a given size, shape 12. A Novel Interface for Eddy Current Displacement Sensors Nabavi, M.R.; Nihtianov, S. 2009-01-01 In this paper, we propose a novel interface concept for eddy current displacement sensors. A measurement method and a new front-end circuit are also proposed. The front-end circuit demonstrates excellent thermal stability, high resolution, and low-power consumption. The proposed idea is analytically 13. Eddy covariance based methane flux in Sundarbans mangroves, India Chandra Shekhar Jha; Suraj Reddy Rodda; Kiran Chand Thumaty; A K Raha; V K Dadhwal 2014-07-01 We report the initial results of the methane flux measured using the eddy covariance method during summer months from the world's largest mangrove ecosystem, the Sundarbans of India. Mangrove ecosystems are known sources for methane (CH4), which has a very high global warming potential. In order to quantify the methane flux in mangroves, an eddy covariance flux tower was recently erected in the largest unpolluted and undisturbed mangrove ecosystem in the Sundarbans (India). The tower is equipped with eddy covariance flux tower instruments to continuously measure methane fluxes besides the mass and energy fluxes. This paper presents the preliminary results of methane flux variations during the summer months (i.e., April and May 2012) in the Sundarbans mangrove ecosystem. The mean concentration of CH4 over the study period was 1682 ± 956 ppb. The measured CH4 fluxes computed from the eddy covariance technique showed that the study area acts as a net source for CH4 with a daily mean flux of 150.22 ± 248.87 mg m−2 day−1. The methane emission as well as its flux showed very high variability diurnally. Though the environmental conditions controlling methane emission are not yet fully understood, an attempt has been made in the present study to analyse the relationship of methane efflux with tidal activity. The present study is part of the Indian Space Research Organisation–Geosphere Biosphere Program (ISRO–GBP) initiative under the 'National Carbon Project'. 14. A study of eddy current measurement (1986-1987) Ramachandran, R.S.; Armstrong, K.P. 1989-06-22 A study was conducted in 1986 to evaluate a modified eddy current system for measuring copper thickness on Kapton. Results showed a measurement error of 0.42 µin. for a thickness range of 165 to 170 µin. and a measurement variability of 3.2 µin. 15. Probability of detection models for eddy current NDE methods Rajesh, S.N. 1993-04-30 The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution.
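To illustrate the kind of output such a model yields, a probability of detection (POD) curve can be generated from a predicted Gaussian signal distribution and a decision threshold; the sketch below is generic, and the calibration slope, noise level and threshold are assumed values rather than numbers from the thesis:

import numpy as np
from scipy.stats import norm

def pod_curve(flaw_sizes, mean_signal, signal_std, threshold):
    # POD(a) = P(signal > threshold | flaw size a) for a Gaussian signal model.
    mu = mean_signal(flaw_sizes)
    sigma = signal_std(flaw_sizes)
    return 1.0 - norm.cdf(threshold, loc=mu, scale=sigma)

# Hypothetical eddy current response: signal grows linearly with flaw depth (mm),
# with a constant noise spread (V); both numbers are purely illustrative.
flaw_depth = np.linspace(0.1, 2.0, 50)
pod = pod_curve(flaw_depth,
                mean_signal=lambda a: 0.8 * a,
                signal_std=lambda a: 0.15 + 0.0 * a,
                threshold=0.5)
print(np.round(pod[:5], 3))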
The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low and complex defect shapes can be handled very easily. The results are also operator independent. 16. Superparamagnetic relaxation of weakly interacting particles Mørup, Steen; Tronc, Elisabeth 1994-01-01 The influence of particle interactions on the superparamagnetic relaxation time has been studied by Mossbauer spectroscopy in samples of maghemite (gamma-Fe2O3) particles with different particle sizes and particle separations. It is found that the relaxation time decreases with decreasing particl... 17. Postextrasystolic relaxation in the dog heart Kuijer, P.J.P.; Heethaar, R.M.; Herbschleb, J.N.; Zimmerman, A.N.E.; Meijler, F.L. 1978-01-01 Left ventricular relaxation was studied in 8 dogs using parameters derived from the left ventricular pressure: the fastest pressure fall and the time constant of pressure decline. Effects of extrasystolic rhythm interventions were examined on the relaxation parameters of the post- relative to the pre 18. Superparamagnetic relaxation in alpha-Fe particles Bødker, Franz; Mørup, Steen; Pedersen, Michael Stanley 1998-01-01 The superparamagnetic relaxation time of carbon-supported alpha-Fe particles with an average size of 3.0 nm has been studied over a large temperature range by the use of Mossbauer spectroscopy combined with AC and DC magnetization measurements. It is found that the relaxation time varies with tem... 19. Cross relaxation in nitroxide spin labels Marsh, Derek 2016-01-01 -label EPR and ELDOR, particularly for saturation recovery studies. Neither for saturation recovery, nor for CW-saturation EPR and CW-ELDOR, can cross relaxation be described simply by increasing the value of We, the intrinsic spin-lattice relaxation rate. Independence of the saturation recovery rates from... 20. Magnetization Transfer Induced Biexponential Longitudinal Relaxation Prantner, Andrew M.; Bretthorst, G. Larry; Neil, Jeffrey J.; Garbow, Joel R.; Ackerman, Joseph J.H. 2009-01-01 Longitudinal relaxation of brain water 1H magnetization in mammalian brain in vivo is typically analyzed on a per voxel basis using a monoexponential model, thereby assigning a single relaxation time constant to all 1H magnetization within a given voxel. This approach was tested by obtaining inversion recovery data from grey matter of rats at 64 exponentially-spaced recovery times. Using Bayesian probability for model selection, brain water data were best represented by a biexponential function characterized by fast and slow relaxation components. At 4.7 T, the amplitude fraction of the rapidly relaxing component is 3.4 ± 0.7 % with a rate constant of 44 ± 12 s⁻¹ (mean ± SD; 174 voxels from 4 rats). The rate constant of the slow relaxing component is 0.66 ± 0.04 s⁻¹. At 11.7 T, the corresponding values are 6.9 ± 0.9 %, 19 ± 5 s⁻¹, and 0.48 ± 0.02 s⁻¹ (151 voxels from 4 rats). Several putative mechanisms for biexponential relaxation behavior were evaluated, and magnetization transfer between bulk water protons and non-aqueous protons was determined to be the source of biexponential longitudinal relaxation.
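A minimal numerical illustration of the biexponential description used in this entry: fitting a two-component longitudinal recovery model to synthetic data (the amplitudes and rate constants below are made-up values loosely inspired by the numbers quoted above, not the authors' Bayesian analysis):

import numpy as np
from scipy.optimize import curve_fit

def biexp_recovery(t, a_fast, r_fast, a_slow, r_slow):
    # Longitudinal recovery toward equilibrium as a sum of fast and slow components.
    return 1.0 - a_fast * np.exp(-r_fast * t) - a_slow * np.exp(-r_slow * t)

# Synthetic recovery curve plus noise (illustrative only).
t = np.linspace(0.01, 10.0, 64)
truth = biexp_recovery(t, 0.07, 19.0, 1.93, 0.48)
rng = np.random.default_rng(1)
data = truth + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(biexp_recovery, t, data, p0=[0.1, 10.0, 2.0, 0.5])
print("fitted (a_fast, R1_fast, a_slow, R1_slow):", np.round(popt, 3))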
MR methods requiring accurate quantification of longitudinal relaxation may need to take this effect explicitly into account. PMID:18759367 1. Windowing Waveform Relaxation of Initial Value Problems Yao-lin Jiang 2006-01-01 We present a windowing technique of waveform relaxation for dynamic systems. An effective estimation on window length is derived by an iterative error expression provided here. Relaxation processes can be speeded up if one takes the windowing technique in advance. Numerical experiments are given to further illustrate the theoretical analysis. 2. Video Analysis of Eddy Structures from Explosive Volcanic Eruptions Fisher, M. A.; Kobs-Nawotniak, S. E. 2013-12-01 We present a method of analyzing turbulent eddy structures in explosive volcanic eruptions using high definition video. Film from the eruption of Sakurajima on 25 September 2011 was analyzed using a modified version of FlowJ, a Java-based toolbox released by National Institute of Health. Using the Lucas and Kanade algorithm with a Gaussian derivative gradient, it tracks the change in pixel position over a 23 image buffer to determine the optical flow. This technique assumes that the optical flow, which is the apparent motion of the pixels, is equivalent to the actual flow field. We calculated three flow fields per second for the duration of the video. FlowJ outputs flow fields in pixels per frame that were then converted to meters per second in Matlab using a known distance and video rate. We constructed a low pass filter using proper orthogonal decomposition (POD) and critical point analysis to identify the underlying eddy structure with boundaries determined by tracing the flow lines. We calculated the area of each eddy and noted its position over a series of velocity fields. The changes in shape and position were tracked to determine the eddy growth rate and overall eddy rising velocity. The eddies grow in size 1.5 times quicker than they rise vertically. Presently, this method is most successful in high contrast videos when there is little to no effect of wind on the plumes. Additionally, the pixel movement from the video images represents a 2D flow with no depth, while the actual flow is three dimensional; we are continuing to develop an algorithm that will allow 3D reprojection of the 2D data. Flow in the y-direction lessens the overall velocity magnitude as the true flow motion has larger y-direction component. POD, which only uses the pattern of the flow, and analysis of the critical points (points where flow is zero) is used to determine the shape of the eddies. The method allows for video recorded at remote distances to be used to study eruption dynamics 3. Coupled Large Eddy Simulation and Discrete Element Model for Particle Saltation Liu, X.; Liu, D.; Fu, X. 2016-12-01 Particle saltation is the major mode of motion for sediment transport. The quantification of the characteristics of saltation, either as an individual particle or as a group, is of great importance to our understanding of the transport process. In the past, experiments and numerical models have been performed to study the saltation length, height, and velocity under different turbulent flow and rough bed conditions. Most previous numerical models have very restrictive assumptions. For example, many models assumed Log-law flow velocity profiles to drive the motion of particles. Others assumed some "splash-function" which assigns the reflection angle for the rebounding of the saltating particle after each collision with bed. 
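For reference, the simplified log-law inflow assumption mentioned in the saltation entry above is usually written as u(z) = (u*/kappa) ln(z/z0); a minimal sketch with hypothetical friction velocity and roughness height follows:

import numpy as np

def log_law_velocity(z, u_star=0.3, z0=1e-4, kappa=0.41):
    # Classic log-law mean velocity profile; u_star (m/s) and z0 (m) are assumed values.
    return (u_star / kappa) * np.log(np.asarray(z, dtype=float) / z0)

# Velocities at a few heights above the bed (m); numbers are purely illustrative.
print(np.round(log_law_velocity([0.001, 0.01, 0.1]), 2))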
This research aims to relax these restrictions by coupling an eddy-resolving flow solver with a discrete element model. The model simulates the full four-way coupling among fluid, particles, and wall. The model is extensively validated on both the turbulent flow field and saltation statistics. The results show that the two controlling factors for particle saltation are turbulent fluctuations and bed collision. Detailed quantification of these two factors will be presented. Through the statistics of incidence and reflection angles, a more physical "splash-function" is obtained in which the reflection angle follows an asymmetric bimodal distribution for a given incidence angle. The higher mode is always located on the upstream side of the bed particle, while the lower one is always on the downstream surface. 4. Quantifying turbulent wall shear stress in a stenosed pipe using large eddy simulation. Gårdhagen, Roland; Lantz, Jonas; Carlsson, Fredrik; Karlsson, Matts 2010-06-01 Large eddy simulation was applied for flow of Re=2000 in a stenosed pipe in order to undertake a thorough investigation of the wall shear stress (WSS) in turbulent flow. A decomposition of the WSS into time averaged and fluctuating components is proposed. It was concluded that a scale resolving technique is required to completely describe the WSS pattern in a subject specific vessel model, since the poststenotic region was dominated by large axial and circumferential fluctuations. Three poststenotic regions of different WSS characteristics were identified. The recirculation zone was subject to a time averaged WSS in the retrograde direction and large fluctuations. After reattachment there was an antegrade shear and smaller fluctuations than in the recirculation zone. At the reattachment the fluctuations were the largest, but no direction dominated over time. Due to symmetry the circumferential time average was always zero. Thus, in a blood vessel, the axial fluctuations would affect endothelial cells in a stretched state, whereas the circumferential fluctuations would act in a relaxed direction. 5. Stress Relaxation in Entangled Polymer Melts Hou, Ji-Xuan; Svaneborg, Carsten; Everaers, Ralf 2010-01-01 We present an extensive set of simulation results for the stress relaxation in equilibrium and step-strained bead-spring polymer melts. The data allow us to explore the chain dynamics and the shear relaxation modulus, G(t), into the plateau regime for chains with Z=40 entanglements and into the terminal relaxation regime for Z=10. Using the known (Rouse) mobility of unentangled chains and the melt entanglement length determined via the primitive path analysis of the microscopic topological state of our systems, we have performed parameter-free tests of several different tube models. We find... 6. Stress and Relaxation in Relation to Personality Harish Kumar Sharma 2011-09-01 Relaxation plays a significant role in facing stress. The aim of the present study is to see whether personality patterns determine an individual's ability to relax. As a reaction to stress, coping is the best way to handle stress, which requires rational and conscious thinking.
Does this ability to relax in any way facilitate coping reactions? A study was conducted on 100 college students. Results revealed that extraverts relax more easily than introverts. In addition, if intelligence level is average or above average, relaxation does play a role in facilitating coping reactions. It suggests that in designing techniques of stress management, the personality and intelligence level must be taken into consideration to make the techniques effective. 7. GEM: a dynamic tracking model for mesoscale eddies in the ocean Li, Qiu-Yang; Sun, Liang; Lin, Sheng-Fu 2016-12-01 The Genealogical Evolution Model (GEM) presented here is an efficient logical model used to track the dynamic evolution of mesoscale eddies in the ocean. It can distinguish between different dynamic processes (e.g., merging and splitting) within a dynamic evolution pattern, which is difficult to accomplish using other tracking methods. To this end, the GEM first uses a two-dimensional (2-D) similarity vector (i.e., a pair of ratios of the overlap area between two eddies to the area of each eddy) rather than a scalar to measure the similarity between eddies, which effectively solves the "missing eddy" problem (an eddy temporarily lost during tracking). Second, for tracking when an eddy splits, the GEM uses both "parent" (the original eddy) and "child" (the eddy split from the parent), and the dynamic processes are described as the birth and death of different generations. Additionally, a new look-ahead approach with selection rules effectively simplifies computation and recording. All of the computational steps are linear and do not include iteration. Given the pixel number of the target region L, the maximum number of eddies M, the number N of look-ahead time steps, and the total number of time steps T, the total computer time is O(LM(N + 1)T). The tracking of each eddy is very smooth because we require that the snapshots of each eddy on adjacent days overlap one another. Although eddy splitting and merging are ubiquitous in the ocean, they have different geographic distributions in the North Pacific Ocean. Both the merging and splitting rates of the eddies are high, especially at the western boundary, in currents and in "eddy deserts". The GEM is useful not only for satellite-based observational data, but also for numerical simulation outputs. It is potentially useful for studying dynamic processes in other related fields, e.g., the dynamics of cyclones in meteorology. (A minimal sketch of the overlap-ratio similarity is given below.) 8. Instantaneous Wavelet Energetic Transfers between Atmospheric Blocking and Local Eddies. Fournier, Aimé 2005-07-01 A new wavelet energetics technique, based on best-shift orthonormal wavelet analysis (OWA) of an instantaneous synoptic map, is constructed for diagnosing nonlinear kinetic energy (KE) transfers in five observed blocking cases. At least 90% of the longitudinal variance of time and latitude band mean 50-kPa geopotential is reconstructed by only two wavelets using best shift. This superior efficiency to the standard OWAs persists for time-evolving structures. The cases comprise two categories, respectively dominated by zonal-wavenumber sets {1} and {1, 2}. Further OWA of instantaneous residual nonblocking structures, combined with new "nearness" criteria, yields three more orthogonal components, representing smaller-scale eddies near the block (upstream and downstream) and distant structures. This decomposition fulfills a vision expressed to the author by Saltzman.
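A minimal sketch of the two-dimensional overlap-ratio similarity described in the GEM entry above, for two eddy footprints given as boolean pixel masks on the same grid (an illustrative reading of the description, not the authors' code):

import numpy as np

def similarity_vector(mask_a, mask_b):
    # Return (overlap/area_a, overlap/area_b) for two boolean eddy masks.
    overlap = np.logical_and(mask_a, mask_b).sum()
    return overlap / mask_a.sum(), overlap / mask_b.sum()

# Two overlapping circular footprints on a toy 200 x 200 grid (illustrative only).
y, x = np.mgrid[0:200, 0:200]
eddy_day1 = (x - 90) ** 2 + (y - 100) ** 2 < 40 ** 2
eddy_day2 = (x - 110) ** 2 + (y - 100) ** 2 < 35 ** 2
print(similarity_vector(eddy_day1, eddy_day2))

A pair such as (0.6, 0.8) would say that the overlap covers 60% of the first footprint and 80% of the second, which is the kind of two-sided information a single scalar ratio cannot carry.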
Such a decomposition is not obtainable by simple Fourier analysis.Eddy patterns apparent in the components' contours suggest inferring geostrophic energetic interactions, but the component Rossby numbers may be too large to support the inference. However, a new result enabled by this method is the instantaneous attribution of blocking strain-field effects to particular energetically interactive eddies, consistent with Shutts' hypothesis. Such attribution was only possible before in simplified models or in a time-average sense. In four of five blocks, the upstream eddies feed KE to the block, which in turn, in three of four cases, transmits KE to the downstream eddies. The small case size precludes statistically significant conclusions. The appendixes link low-order blocking structure and dynamics to some wavelet design principles and propose a new interaction diagnosis, similar to E-vector analysis, but instantaneous. 9. Magnetic Resonance Fingerprinting with short relaxation intervals. Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter 2017-09-01 The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially resolved 10. Sea Surface Height Variability and Eddy Statistical Properties in the Red Sea Zhan, Peng 2013-05-01 Satellite sea surface height (SSH) data over 1992-2012 are analyzed to study the spatial and temporal variability of sea level in the Red Sea. Empirical orthogonal functions (EOF) analysis suggests the remarkable seasonality of SSH in the Red Sea, and a significant correlation is found between SSH variation and seasonal wind cycle. 
A winding-angle based eddy identification algorithm is employed to derive the mesoscale eddy information from the SSH data. In total, more than 5500 eddies are detected, belonging to 2583 eddy tracks. Statistics suggest that eddies are generated over the entire Red Sea, with two regions of high eddy frequency in the central basin. 76% of the detected eddies have a radius ranging from 40 km to 100 km, and both intensity and absolute vorticity decrease with eddy radius. The average eddy lifespan is about 5 weeks, and eddies with longer lifespans tend to have larger radii but lower intensity. Deformation rates differ between anticyclonic eddies (AEs) and cyclonic eddies (CEs); eddies with higher intensity appear to be less deformed and more circular. Inspection of the 84 long-lived eddies suggests that AEs tend to move a little more northward than CEs. AE generation during summer is markedly lower than that during other seasons, while CE generation is higher during spring and summer. Other features of AEs and CEs are similar, with both vorticity and intensity reaching summer peaks in August and winter peaks in January. Inter-annual variability reveals that the eddies in the Red Sea are isolated from global events. The eddy property tendencies differ between the southern and northern basins, both of which exhibit a two-year cycle. With a correlation coefficient of -0.91, the Brunt–Väisälä frequency is negatively correlated with eddy kinetic energy (EKE), which results from AE activities in the high eddy frequency region. Climatological vertical velocity shear variation is identical to that of EKE except in the autumn, suggesting the 11. Quantification of surface energy fluxes from a small water body using scintillometry and eddy covariance McGloin, Ryan; McGowan, Hamish; McJannet, David 2014-01-01 Accurate quantification of evaporation from small water storages is essential for water management and planning, particularly in water-scarce regions. In order to ascertain suitable methods for direct measurement of evaporation from small water bodies, this study presents a comparison of eddy ... % greater than eddy covariance measurements. We suggest possible reasons for this difference and provide recommendations for further research for improving measurements of surface energy fluxes over small water bodies using eddy covariance and scintillometry. Key Points: source areas for eddy covariance and scintillometry were on the water surface; reasonable agreement was shown between the sensible heat flux measurements; scintillometer estimates of latent heat flux were greater than eddy covariance... 12. Permanent Magnet Eddy Current Loss Analysis of a Novel Motor Integrated Permanent Magnet Gear Zhang, Yuqiu; Lu, Kaiyuan; Ye, Yunyue 2012-01-01 In this paper, a new motor integrated permanent magnet gear (MIPMG) is discussed. The focus is on eddy current loss analysis associated with permanent magnets (PMs). A convenient model of the MIPMG is provided, based on a 2-D field-motion coupled time-stepping finite element method for transient eddy current analysis. The model takes the eddy current effect of the PMs into account in the determination of the magnetic field in the air-gap and in the magnet regions. The eddy current losses generated in the magnets are properly interpreted. Design improvements for reducing the eddy current losses are suggested... 13. Eddy current heating of irregularly shaped plates by slow ramped fields Dresner, L.
1979-09-01 Eddy current heating of thin conducting plates of various shapes by a perpendicular field is studied, assuming that the magnetic field created by the eddy currents is negligible in comparison with the external field. The method is to introduce the stream function of the eddy currents, which is shown to satisfy Poisson's equation, and then employ a pair of complementary variational principles (i.e., a minimum principle and a maximum principle), the extrema of which equal the eddy current heating. Two such complementary principles give not only an estimate of the eddy current heating, but a bound on the error of the estimate as well. 14. Eddy current separation apparatus, separation module, separation method and method for adjusting an eddy current separation apparatus Rem, P.C.; Bakker, M.C.M.; Berkhout, S.P.M.; Rahman, M.A. 2012-01-01 Eddy current separation apparatus (1) for separating particles (20) from a particle stream (w), wherein the apparatus (1) comprises a separator drum (4) adapted to create a first particle fraction (21) and a second particle fraction (23), a feeding device (2) upstream of the separator drum (4) for s 15. Antiproton Accumulator (AA) Photographic Service 1980-01-01 The AA in its final stage of construction, before it disappeared from view under concrete shielding. Antiprotons were first injected, stochastically cooled and accumulated in July 1980. From 1981 on, the AA provided antiprotons for collisions with protons, first in the ISR, then in the SPS Collider. From 1983 on, it also sent antiprotons, via the PS, to the Low-Energy Antiproton Ring (LEAR). The AA was dismantled in 1997 and shipped to Japan. 16. Comparison Study of Flow in A Compound Channel: Experimental and Numerical Method Using Large Eddy Simulation SDS-2DH Model Eka Oktariyanto Nugroho 2007-11-01 Flow modeling in a compound channel is a complex matter. Indeed, due to the smaller velocities in the floodplains than in the main channel, shear layers develop at the interfaces between the two channel stages, and the channel conveyance is affected by a momentum transfer corresponding to this shear layer. Since a compound channel is characterized by a deep main channel flanked by relatively shallow flood plains, the interaction between the faster fluid velocities in the main channel and the slower moving flow on the floodplains causes shear stresses at their interface which significantly distort the flow and boundary shear stress patterns. The distortion implies that the flow field in rivers is highly non-homogeneous and turbulent, in which the lateral transport of fluid momentum and suspended sediment is influenced by the characteristics of the river flow. The nature of the lateral transport mechanism needs to be understood for the design of river engineering schemes that rely upon realistic flow. Furthermore, flows in rivers are almost always turbulent. This means that the fluid motion is highly random, unsteady, and three-dimensional. Due to these complexities, the flow cannot be properly predicted by using approximate analytical solutions to the governing equations of motion. Given the complexity of the problem, the treatment of turbulence is simplified with mathematical models. The momentum transfer due to turbulent exchanges is then studied experimentally and numerically. Experimental data are obtained using Electro Magnetic Velocimetry and a Wave Height Gauge. The Large Eddy Simulation Sub-Depth Scale (LES SDS) 2-Dimensional Horizontal (2DH) model is used to solve the turbulence problem.
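The entry continues below with the solver used for the discretized equations, Successive Over-Relaxation (SOR); as a generic point of reference, one SOR solve for a 2-D finite-difference Poisson problem can be sketched as follows (a textbook version of the method, not the authors' code):

import numpy as np

def sor_poisson(f, h, omega=1.7, tol=1e-6, max_iter=20000):
    # Solve laplacian(u) = f on a square grid with u = 0 on the boundary,
    # using Gauss-Seidel sweeps accelerated by an over-relaxation factor omega (1 < omega < 2).
    u = np.zeros_like(f)
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, f.shape[0] - 1):
            for j in range(1, f.shape[1] - 1):
                gs = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1]
                             - h * h * f[i, j])
                change = omega * (gs - u[i, j])
                u[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return u

# Toy right-hand side on a 33 x 33 grid (illustrative only).
n = 33
h = 1.0 / (n - 1)
u = sor_poisson(np.ones((n, n)), h)
print(round(float(u[n // 2, n // 2]), 5))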
The Successive Over-Relaxation (SOR) method is employed to solve the numerical computation based on a finite difference discretization. The model has been applied to the compound channel with smooth roughness. Some organized large eddies were found at the boundary between the main channel 17. Domain Relaxation in Langmuir Films Bernoff, Andrew J.; Alexander, James C.; Mann, Elizabeth K.; Mann, J. Adin; Zou, Lu; Wintersmith, Jacob R. 2007-11-01 We report on an experimental, theoretical and computational study of a molecularly thin polymer Langmuir layer domain on the surface of a subfluid. When stretched (by a transient stagnation flow), the Langmuir layer takes the form of a bola consisting of two roughly circular reservoirs connected by a thin tether. This shape relaxes to the circular minimum energy configuration. The tether is never observed to rupture, even when it is more than a hundred times as long as it is thin. We model these experiments as a free boundary problem where motion is driven by the line tension of the domain and damped by the viscosity of the subfluid. We process the digital images of the experiment to extract the domain shape, use one of these shapes as an initial condition for the numerical solution of a boundary-integral model of the underlying hydrodynamics, and compare the subsequent images of the experiment to the numerical simulation. The numerical evolutions verify that our hydrodynamical model can reproduce the observed dynamics. They also allow us to deduce the magnitude of the line tension in the system, often to within 1%. 18. Supervised Discrete Hashing With Relaxation. Gui, Jie; Liu, Tongliang; Sun, Zhenan; Tao, Dacheng; Tan, Tieniu 2016-12-29 Data-dependent hashing has recently attracted attention due to being able to support efficient retrieval and storage of high-dimensional data, such as documents, images, and videos. In this paper, we propose a novel learning-based hashing method called "supervised discrete hashing with relaxation" (SDHR) based on "supervised discrete hashing" (SDH). SDH uses ordinary least squares regression and traditional zero-one matrix encoding of class label information as the regression target (code words), thus fixing the regression target. In SDHR, the regression target is instead optimized. The optimized regression target matrix satisfies a large margin constraint for correct classification of each example. Compared with SDH, which uses the traditional zero-one matrix, SDHR utilizes the learned regression target matrix and, therefore, more accurately measures the classification error of the regression model and is more flexible. As expected, SDHR generally outperforms SDH. Experimental results on two large-scale image data sets (CIFAR-10 and MNIST) and a large-scale and challenging face data set (FRGC) demonstrate the effectiveness and efficiency of SDHR. 19. Spin relaxation in organic semiconductors Bobbert, Peter 2011-03-01 Intriguing magnetic field effects in organic semiconductor devices have been reported: anomalous magnetoresistance in organic spin valves and large effects of small magnetic fields on the current and luminescence of organic light-emitting diodes. Influences of isotopic substitution on these effects point to the role of hyperfine coupling. We performed studies of spin relaxation in organic semiconductors based on (i) coherent spin precession of the electron spin in an effective magnetic field consisting of a random hyperfine field and an applied magnetic field and (ii) incoherent hopping of charges.
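To make ingredient (i) concrete: for a single carrier spin, the precession frequency is set by the vector sum of the random hyperfine field and the applied field, omega = gamma |B_hf + B_applied|; the toy sketch below uses assumed field magnitudes and is not the authors' stochastic Liouville treatment:

import numpy as np

GAMMA_E = 1.76e11  # electron gyromagnetic ratio, rad s^-1 T^-1

def precession_frequencies(n, b_hf_rms=1e-3, b_applied=(0.0, 0.0, 5e-3), seed=0):
    # Draw n random hyperfine fields (isotropic Gaussian, rms per axis b_hf_rms, in tesla),
    # add the applied field, and return omega = GAMMA_E * |B_total| for each.
    rng = np.random.default_rng(seed)
    b_hf = b_hf_rms * rng.standard_normal((n, 3))
    b_total = b_hf + np.asarray(b_applied)
    return GAMMA_E * np.linalg.norm(b_total, axis=1)

# Spread of precession frequencies for millitesla-scale fields (assumed values).
print(np.round(precession_frequencies(5) / 1e6, 1), "Mrad/s")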
These ingredients are incorporated in a stochastic Liouville equation for the dynamics of the spin density matrix of single charges as well as pairs of charges. For single charges we find a spin diffusion length that depends on the magnetic field, explaining anomalous magnetoresistance in organic spin valves. For pairs of charges we show that the magnetic field influences the formation of singlet bipolarons, in the case of like charges, and singlet and triplet excitons, in the case of opposite charges. We can reproduce different line shapes of reported magnetic field effects, including recently found effects at ultra-small fields. 20. Response of the Kuroshio Current to Eddies in the Luzon Strait ZHAO Jie; LUO De-Hai 2010-01-01 The impact of eddies on the Kuroshio Current in the Luzon Strait (LS) area is investigated by using sea surface height anomaly (SSHA) satellite observation data and sea surface height (SSH) assimilation data. The influence of the eddies on the mean current depends upon the type of eddies and their position relative to it. The mean current is enhanced (weakened) as the cyclonic (anticyclonic) eddy sits slightly away from it, whereas it is weakened (enhanced) as the cyclonic (anticyclonic) eddy moves near or within the position of the mean current; this is explained by the relationship between the eddy-induced meridional velocity and the geostrophic flow. The anticyclonic (cyclonic) eddy can increase (decrease) the mean meridional flow due to superimposition of the eddy-induced meridional flow when the eddy is within the region of the mean current. However, when the eddy is slightly far from the mean current region, the anticyclonic (cyclonic) eddy tends to decrease (increase) the zonal gradient of the SSH, which thus results in a weakening (strengthening) of the mean current in the LS region. 1. Coupling between SST and wind speed over mesoscale eddies in the South China Sea Sun, Shuangwen; Fang, Yue; Liu, Baochao; Tana 2016-11-01 The coupling between sea surface temperature (SST) and sea surface wind speed over mesoscale eddies in the South China Sea (SCS) was studied using satellite measurements. Positive correlations between SST anomalies (SSTA) and wind speed anomalies were found over both cyclonic and anticyclonic eddies. In contrast to the open oceans, the spatial patterns of the coupling over mesoscale eddies in the SCS depend largely on the seasonal variations of the background SST gradient, wind speed, and wind directional steadiness. In summer, the maximum SSTA location coincides with the center of the eddy-induced sea surface height anomalies. In winter, the eddy-induced SSTA show a clear dipole pattern. The spatial patterns of wind speed anomalies over eddies are similar to those of the SSTA in both seasons. Wind speed anomalies are linearly correlated with SSTA over anticyclonic and cyclonic eddies. The coupling coefficients between SSTA and wind speed anomalies in the SCS are comparable to those in the open oceans. 2. Finite element analysis of gradient z-coil induced eddy currents in a permanent MRI magnet. Li, Xia; Xia, Ling; Chen, Wufan; Liu, Feng; Crozier, Stuart; Xie, Dexin 2011-01-01 In permanent magnetic resonance imaging (MRI) systems, pulsed gradient fields induce strong eddy currents in the conducting structures of the magnet body. The gradient field for image encoding is perturbed by these eddy currents, leading to MR image distortions. This paper presents a comprehensive finite element (FE) analysis of the eddy current generation in the magnet conductors.
In the proposed FE model, the hysteretic characteristics of ferromagnetic materials are considered and a scalar Preisach hysteresis model is employed. The developed FE model was applied to study gradient z-coil induced eddy currents in a 0.5 T permanent MRI device. The simulation results demonstrate that the approach could be effectively used to investigate eddy current problems involving ferromagnetic materials. With the knowledge gained from this eddy current model, our next step is to design a passive magnet structure and active gradient coils to reduce the eddy current effects. 3. Spatial information recognizing of ocean eddies based on virtual force field and its application LI Ce; DU Yunyan; SU Fenzhen; YANG Xiaomei; XU Jun 2007-01-01 A new approach is proposed for automatically detecting ocean eddies from remote sensing imagery, based on the ocean eddy's eigen-pattern in the imagery and a "force field-based shape extracting method". First, eddies' edges are extracted from the remote sensing imagery using conventional edge detection operators, returning digitized vector edge data. Second, attraction forces and fusion forces between edge curves were analyzed and calculated based on the vector eddy edges. Third, the significant virtual spatial patterns of eddies were detected automatically using iterative repetition followed by an optimization rule. Finally, the spatial form auto-detection of different types of ocean eddies was done using satellite images. The study verified that this is an effective way to identify and detect ocean eddies with complex forms. 4. Relaxation of a 1-D gravitational system Valageas, P 2006-01-01 We study the relaxation towards thermodynamical equilibrium of a 1-D gravitational system. This OSC model shows a series of critical energies $E_{cn}$ where new equilibria appear and we focus on the homogeneous ($n=0$), one-peak ($n=\pm 1$) and two-peak ($n=2$) states. Using numerical simulations we investigate the relaxation to the stable equilibrium $n=\pm 1$ of this $N$-body system starting from initial conditions defined by the equilibria $n=0$ and $n=2$. We find that, in a fashion similar to other long-range systems, the relaxation involves a fast violent relaxation phase followed by a slow collisional phase as the system goes through a series of quasi-stationary states. Moreover, in cases where this slow second stage leads to a dynamically unstable configuration (two peaks with a high mass ratio) it is followed by a new sequence of "violent relaxation/slow collisional relaxation". We obtain an analytical estimate of the relaxation time $t_{2\to \pm 1}$ through the mean escape time of a particle from its potent... 5. Plasma Relaxation Dynamics Moderated by Current Sheets Dewar, Robert; Bhattacharjee, Amitava; Yoshida, Zensho 2014-10-01 Ideal magnetohydrodynamics (IMHD) is strongly constrained by an infinite number of microscopic constraints expressing mass, entropy and magnetic flux conservation in each infinitesimal fluid element, the latter preventing magnetic reconnection. By contrast, in the Taylor-relaxed equilibrium model all these constraints are relaxed save for global magnetic flux and helicity. A Lagrangian is presented that leads to a new variational formulation of magnetized fluid dynamics, relaxed MHD (RxMHD), all static solutions of which are Taylor equilibrium states.
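For reference, the Taylor-relaxed states referred to here are the force-free (Beltrami) fields obtained by minimizing magnetic energy while holding the global magnetic helicity fixed; in standard notation (a textbook statement, not a result of the paper above), $\nabla \times \mathbf{B} = \mu \mathbf{B}$ with $K = \int_{V} \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}V$ conserved, where the constant $\mu$ is fixed by the conserved helicity and flux.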
By postulating that some long-lived macroscopic current sheets can act as barriers to relaxation, separating the plasma into multiple relaxation regions, a further generalization, multi-relaxed MHD (MRxMHD), is developed. These concepts are illustrated using a simple two-region slab model similar to that proposed by Hahm and Kulsrud--the formation of an initial shielding current sheet after perturbation by boundary rippling is calculated using MRxMHD and the final island state, after the current sheet has relaxed through a reconnection sequence, is calculated using RxMHD. Australian Research Council Grant DP110102881. 6. Occurrence and characteristics of mesoscale eddies in the tropical northeastern Atlantic Ocean Schütte, Florian; Brandt, Peter; Karstensen, Johannes 2016-05-01 Coherent mesoscale features (referred to here as eddies) in the tropical northeastern Atlantic Ocean (between 12-22° N and 15-26° W) are examined and characterized. The eddies' surface signatures are investigated using 19 years of satellite-derived sea level anomaly (SLA) data. Two automated detection methods are applied, the geometrical method based on closed streamlines around eddy cores, and the Okubo-Weiß method based on the relation between vorticity and strain. Both methods give similar results. Mean eddy surface signatures of SLA, sea surface temperature (SST) and sea surface salinity (SSS) anomalies are obtained from composites of all snapshots around identified eddy cores. Anticyclones/cyclones are identified by an elevation/depression of SLA and enhanced/reduced SST and SSS in their cores. However, about 20 % of all anticyclonically rotating eddies show reduced SST and reduced SSS instead. These kind of eddies are classified as anticyclonic mode-water eddies (ACMEs). About 146 ± 4 eddies per year with a minimum lifetime of 7 days are identified (52 % cyclones, 39 % anticyclones, 9 % ACMEs) with rather similar mean radii of about 56 ± 12 km. Based on concurrent in situ temperature and salinity profiles (from Argo float, shipboard, and mooring data) taken inside of eddies, distinct mean vertical structures of the three eddy types are determined. Most eddies are generated preferentially in boreal summer and along the West African coast at three distinct coastal headland regions and carry South Atlantic Central Water supplied by the northward flow within the Mauretanian coastal current system. Westward eddy propagation (on average about 3.00 ± 2.15 km d-1) is confined to distinct zonal corridors with a small meridional deflection dependent on the eddy type (anticyclones - equatorward, cyclones - poleward, ACMEs - no deflection). Heat and salt fluxes out of the coastal region and across the Cape Verde Frontal Zone, which separates the shadow zone from 7. Mesoscale eddies over the Laptev Sea continental slope in the Arctic Ocean Pnyushkov, A.; Polyakov, I.; Nguyen, A. T. 2015-12-01 Mesoscale eddies are an important component in Arctic Ocean dynamics and can play a role in vertical redistribution of ocean heat from the intermediate layer of warm Atlantic Water (AW). We analyze mooring data collected along the continental slope of the Laptev Sea in 2007-11 to improve the characterization of Arctic mesoscale eddies in this region of the Eurasian Basin (EB).Wavelet analyses suggest that ~20% of the mooring record is occupied by mesoscale eddies, whose vertical scales can be large, often >600 m. 
Based on similarity between temperature/salinity profiles measured inside eddies and modern climatology for the 2000s, we found two distinct sources of eddy formation in the EB; one in the vicinity of Fram Strait and the other at the continental slope of the Severnaya Zemlya Archipelago. Both sources of eddies are on the route of AW propagation along the EB margins, so that the Arctic Circumpolar Boundary Current (ACBC) can carry these eddies along the continental slope.The lateral advection of waters isolated inside the eddy cores by ACBC affect the heat and salt balance of the eastern EB. The average temperature anomaly inside Fram Strait eddies in the layer above the AW temperature core (i.e., above 350 m depth level) was ~0.1º C with the strongest temperature anomaly in this layer exceeding 0.5ºC. In contrast to Fram Strait eddies, Severnaya Zemlya eddies carry anomalously cold and fresh water, and likely contribute to ventilation of the AW core. In addition, we found increased vertical shears of the horizontal velocities inside eddies that result in enhanced mixing. Our estimates made using the Pacanowski and Philander (1981) relationship suggest that, on average, vertical diffusivity coefficients inside eddies are four times larger than those in the surrounding waters. We will use the high resolution ECCO model to investigate the relative contributions of along and across slope transports induced by eddies along the ACBC path. 8. N2 fixation in eddies of the eastern tropical South Pacific Ocean Loscher, Carolin R.; Bourbonnais, Annie; Dekaezemacker, Julien; Charoenpong, Chawalit N.; Altabet, Mark A.; Bange, Hermann W.; Czeschel, Rena; Hoffmann, Chris; Schmitz, Ruth 2016-05-01 Mesoscale eddies play a major role in controlling ocean biogeochemistry. By impacting nutrient availability and water column ventilation, they are of critical importance for oceanic primary production. In the eastern tropical South Pacific Ocean off Peru, where a large and persistent oxygen-deficient zone is present, mesoscale processes have been reported to occur frequently. However, investigations into their biological activity are mostly based on model simulations, and direct measurements of carbon and dinitrogen (N2) fixation are scarce.We examined an open-ocean cyclonic eddy and two anticyclonic mode water eddies: a coastal one and an open-ocean one in the waters off Peru along a section at 16° S in austral summer 2012. Molecular data and bioassay incubations point towards a difference between the active diazotrophic communities present in the cyclonic eddy and the anticyclonic mode water eddies.In the cyclonic eddy, highest rates of N2 fixation were measured in surface waters but no N2 fixation signal was detected at intermediate water depths. In contrast, both anticyclonic mode water eddies showed pronounced maxima in N2 fixation below the euphotic zone as evidenced by rate measurements and geochemical data. N2 fixation and carbon (C) fixation were higher in the young coastal mode water eddy compared to the older offshore mode water eddy. A co-occurrence between N2 fixation and biogenic N2, an indicator for N loss, indicated a link between N loss and N2 fixation in the mode water eddies, which was not observed for the cyclonic eddy. The comparison of two consecutive surveys of the coastal mode water eddy in November 2012 and December 2012 also revealed a reduction in N2 and C fixation at intermediate depths along with a reduction in chlorophyll by half, mirroring an aging effect in this eddy. 
Our data indicate an important role for anticyclonic mode water eddies in stimulating N2 fixation and thus supplying N offshore. 9. Large eddy simulation of soot evolution in an aircraft combustor Mueller, Michael E.; Pitsch, Heinz 2013-11-01 An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel 10. Determining Confounding Sensitivities In Eddy Current Thin Film Measurements Gros, Ethan; Udpa, Lalita; Smith, James A.; Wachs, Katelyn 2016-07-01 Determining Confounding Sensitivities In Eddy Current Thin Film Measurements Ethan Gros, Lalita Udpa, Electrical Engineering, Michigan State University, East Lansing MI 48824 James A. Smith, Experiment Analysis, Idaho National Laboratory, Idaho Falls ID 83415 Eddy current (EC) techniques are widely used in industry to measure the thickness of non-conductive films on a metal substrate. This is done using a system whereby a coil carrying a high-frequency alternating current is used to create an alternating magnetic field at the surface of the instrument's probe. When the probe is brought near a conductive surface, the alternating magnetic field will induce ECs in the conductor. The substrate characteristics and the distance of the probe from the substrate (the coating thickness) affect the magnitude of the ECs. The induced currents load the probe coil affecting the terminal impedance of the coil. The measured probe impedance is related to the lift off between coil and conductor as well as conductivity of the test sample. For a known conductivity sample, the probe impedance can be converted into an equivalent film thickness value. 
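The film-thickness measurement described above relies on eddy currents induced near the conductor surface, whose penetration falls off with frequency according to the standard skin-depth formula delta = 1/sqrt(pi*f*mu*sigma). A minimal sketch, with assumed material values that are not properties of the probe or samples in the abstract:

```python
# Minimal sketch: standard depth of penetration (skin depth) of induced eddy
# currents, delta = 1/sqrt(pi*f*mu*sigma). Material values are assumptions.
import math

def skin_depth(freq_hz, sigma_s_per_m, mu_r=1.0):
    mu0 = 4e-7 * math.pi                    # vacuum permeability [H/m]
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * mu0 * sigma_s_per_m)

# Example: aluminium (sigma ~ 3.5e7 S/m, mu_r ~ 1) at the 8 MHz probe frequency
print(skin_depth(8e6, 3.5e7))   # roughly 3e-5 m, i.e. ~30 micrometres
```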
The EC measurement can be confounded by a number of measurement parameters. It is the goal of this research to determine which physical properties of the measurement set-up and sample can adversely affect the thickness measurement. The eddy current testing is performed using a commercially available, hand-held eddy current probe (ETA3.3H spring-loaded eddy probe running at 8 MHz) that comes with a stand to hold the probe. The stand holds the probe and adjusts the probe on the z-axis to help position the probe in the correct area as well as make precise measurements. The signal from the probe is sent to a hand-held readout, where the results are recorded directly in terms of liftoff or film thickness. Understanding the effect of certain factors on the measurements of film thickness will help to evaluate how accurate the ETA3.3H spring 11. Large Eddy Simulation for Wave Breaking in the Surf Zone 白玉川; 蒋昌波; 沈焕庭 2001-01-01 In this paper, the large eddy simulation method is used in combination with the marker and cell method to study the wave propagation, shoaling and breaking process. As a wave propagates into shallow water, the shoaling leads to an increase of wave height, and then at a certain position the wave will break. The breaking wave is a powerful agent for generating turbulence, which plays an important role in most of the fluid dynamic processes throughout the surf zone, such as transformation of wave energy, generation of near-shore currents and diffusion of materials. So a proper numerical model for describing the turbulence effect is needed. In this paper, a revised Smagorinsky subgrid-scale model is used to describe the turbulence effect. The present study reveals that the coefficient of the Smagorinsky model for wave propagation or breaking simulation may be taken as a varying function of the water depth and distance away from the wave breaking point. The large eddy simulation model presented in this paper has been used to study the propagation of the solitary wave in constant water depth and the shoaling of the non-breaking solitary wave on a beach. The model is based on large eddy simulation, and to track free-surface movements, the Tokyo University Modified Marker and Cell (TUMMAC) method is employed. In order to ensure the accuracy of each component of this wave mathematical model, several steps have been taken to verify calculated solutions with either analytical solutions or experimental data. For non-breaking waves, very accurate results are obtained for a solitary wave propagating over a constant depth and on a beach. Application of the model to cnoidal wave breaking in the surf zone shows that the model results are in good agreement with analytical solution and experimental data. From the present model results, it can be seen that the turbulent eddy viscosity increases from the bottom to the water surface in the surf zone. In the eddy viscosity curve, there is a 12. Eddy Flow during Magma Emplacement: The Basemelt Sill, Antarctica 2014-12-01 The McMurdo Dry Valleys magmatic system, Antarctica, forms part of the Ferrar dolerite Large Igneous Province. Comprising a vertical stack of interconnected sills, the complex provides a world-class example of pervasive lateral magma flow on a continental scale. The lowermost intrusion (Basement Sill) offers detailed sections through the now frozen particle macrostructure of a congested magma slurry1.
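The wave-breaking study above builds on the classical Smagorinsky subgrid-scale model, nu_t = (Cs*Delta)^2 |S|, before making the coefficient depth-dependent. A minimal sketch of the standard (constant-coefficient) form on a 2-D velocity field follows; Cs = 0.17 and the grid spacing are assumptions, and this is not the authors' revised model.

```python
# Minimal sketch (illustrative): classical Smagorinsky eddy viscosity
# nu_t = (Cs*Delta)^2 * |S| with |S| = sqrt(2*S_ij*S_ij) on a 2-D field.
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, Cs=0.17):
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))   # |S|
    delta = np.sqrt(dx * dy)                                   # filter width
    return (Cs * delta)**2 * S_mag

u = np.random.randn(64, 64); v = np.random.randn(64, 64)
nu_t = smagorinsky_nu_t(u, v, dx=0.05, dy=0.05)
```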
Image-based numerical modelling where the intrusion geometry defines its own unique finite element mesh allows simulations of the flow regime to be made that incorporate realistic magma particle size and flow geometries obtained directly from field measurements. One testable outcome relates to the origin of rhythmic layering where analytical results imply the sheared suspension intersects the phase space for particle Reynolds and Peclet number flow characteristic of macroscopic structures formation2. Another relates to potentially novel crystal-liquid segregation due to the formation of eddies locally at undulating contacts at the floor and roof of the intrusion. The eddies are transient and mechanical in origin, unrelated to well-known fluid dynamical effects around obstacles where flow is turbulent. Numerical particle tracing reveals that these low Re number eddies can both trap (remove) and eject particles back into the magma at a later time according to their mass density. This trapping mechanism has potential to develop local variations in structure (layering) and magma chemistry that may otherwise not occur where the contact between magma and country rock is linear. Simulations indicate that eddy formation is best developed where magma viscosity is in the range 1-102 Pa s. Higher viscosities (> 103 Pa s) tend to dampen the effect implying eddy development is most likely a transient feature. However, it is nice to think that something as simple as a bumpy contact could impart physical and by implication chemical diversity in igneous rocks. 1Marsh, D.B. (2004), A 13. Le Chatelier's principle with multiple relaxation channels Gilmore, R.; Levine, R. D. 1986-05-01 Le Chatelier's principle is discussed within the constrained variational approach to thermodynamics. The formulation is general enough to encompass systems not in thermal (or chemical) equilibrium. Particular attention is given to systems with multiple constraints which can be relaxed. The moderation of the initial perturbation increases as additional constraints are removed. This result is studied in particular when the (coupled) relaxation channels have widely different time scales. A series of inequalities is derived which describes the successive moderation as each successive relaxation channel opens up. These inequalities are interpreted within the metric-geometry representation of thermodynamics. 14. Neural control of muscle relaxation in echinoderms. Elphick, M R; Melarange, R 2001-03-01 Smooth muscle relaxation in vertebrates is regulated by a variety of neuronal signalling molecules, including neuropeptides and nitric oxide (NO). The physiology of muscle relaxation in echinoderms is of particular interest because these animals are evolutionarily more closely related to the vertebrates than to the majority of invertebrate phyla. However, whilst in vertebrates there is a clear structural and functional distinction between visceral smooth muscle and skeletal striated muscle, this does not apply to echinoderms, in which the majority of muscles, whether associated with the body wall skeleton and its appendages or with visceral organs, are made up of non-striated fibres. The mechanisms by which the nervous system controls muscle relaxation in echinoderms were, until recently, unknown. Using the cardiac stomach of the starfish Asterias rubens as a model, it has been established that the NO-cGMP signalling pathway mediates relaxation. 
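The Basement Sill abstract above frames its flow regime in terms of particle Reynolds and Peclet numbers. A minimal sketch of those two dimensionless groups follows; all input values (melt density and viscosity, crystal size, flow speed, chemical diffusivity) are order-of-magnitude assumptions, not data from the study.

```python
# Minimal sketch (illustrative): particle Reynolds and Peclet numbers for
# crystals carried in a silicate melt. All numerical values are assumed.
def particle_reynolds(rho_melt, speed, diameter, viscosity):
    return rho_melt * speed * diameter / viscosity

def peclet(speed, diameter, diffusivity):
    return speed * diameter / diffusivity

rho, mu = 2700.0, 50.0        # melt density [kg/m^3], viscosity [Pa s]
u, d = 0.1, 5e-3              # flow speed [m/s], crystal size [m]
D = 1e-11                     # chemical diffusivity in the melt [m^2/s]
print(particle_reynolds(rho, u, d, mu))   # << 1: viscous (low-Re) regime
print(peclet(u, d, D))                    # >> 1: advection-dominated transport
```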
NO also causes relaxation of sea urchin tube feet, and NO may therefore function as a 'universal' muscle relaxant in echinoderms. The first neuropeptides to be identified in echinoderms were two related peptides isolated from Asterias rubens known as SALMFamide-1 (S1) and SALMFamide-2 (S2). Both S1 and S2 cause relaxation of the starfish cardiac stomach, but with S2 being approximately ten times more potent than S1. SALMFamide neuropeptides have also been isolated from sea cucumbers, in which they cause relaxation of both gut and body wall muscle. Therefore, like NO, SALMFamides may also function as 'universal' muscle relaxants in echinoderms. The mechanisms by which SALMFamides cause relaxation of echinoderm muscle are not known, but several candidate signal transduction pathways are discussed here. The SALMFamides do not, however, appear to act by promoting release of NO, and muscle relaxation in echinoderms is therefore probably regulated by at least two neuronal signalling systems acting in parallel. Recently, other 15. Stress Relaxation in Entangled Polymer Melts Hou, Ji-Xuan; Svaneborg, Carsten; Everaers, Ralf 2010-01-01 We present an extensive set of simulation results for the stress relaxation in equilibrium and step-strained bead-spring polymer melts. The data allow us to explore the chain dynamics and the shear relaxation modulus, G(t), into the plateau regime for chains with Z=40 entanglements and into the terminal relaxation regime for Z=10. Using the known (Rouse) mobility of unentangled chains and the melt entanglement length determined via the primitive path analysis of the microscopic topological state of our systems, we have performed parameter-free tests of several different tube models. We find... 16. Spin relaxation in nanowires by hyperfine coupling Echeverria-Arrondo, C. [Department of Physical Chemistry, Universidad del Pais Vasco UPV/EHU, 48080 Bilbao (Spain); Sherman, E.Ya. [Department of Physical Chemistry, Universidad del Pais Vasco UPV/EHU, 48080 Bilbao (Spain); IKERBASQUE Basque Foundation for Science, 48011 Bilbao, Bizkaia (Spain) 2012-08-15 Hyperfine interactions establish limits on spin dynamics and relaxation rates in ensembles of semiconductor quantum dots. It is the confinement of electrons which determines nonzero hyperfine coupling and leads to the spin relaxation. As a result, in nanowires one would expect the vanishing of this effect due to extended electron states. However, even for relatively clean wires, disorder plays a crucial role and makes electron localization sufficient to cause spin relaxation on the time scale of the order of 10 ns. (copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.) 17. Compact vs. Exponential-Size LP Relaxations Carr, R.D.; Lancia, G. 2000-09-01 In this paper we introduce by means of examples a new technique for formulating compact (i.e. polynomial-size) LP relaxations in place of exponential-size models requiring separation algorithms. In the same vein as a celebrated theorem by Groetschel, Lovasz and Schrijver, we state the equivalence of compact separation and compact optimization. Among the examples used to illustrate our technique, we introduce a new formulation for the Traveling Salesman Problem, whose relaxation we show equivalent to the subtour elimination relaxation. 18. Relaxation time in disordered molecular systems Rocha, Rodrigo P.
[Departamento de Física, Universidade Federal de Santa Catarina, 88040-900 Florianópolis-SC (Brazil); Freire, José A., E-mail: jfreire@fisica.ufpr.br [Departamento de Física, Universidade Federal do Paraná, 81531-990 Curitiba-PR (Brazil) 2015-05-28 Relaxation time is the typical time it takes for a closed physical system to attain thermal equilibrium. The equilibrium is brought about by the action of a thermal reservoir inducing changes in the system micro-states. The relaxation time is intuitively expected to increase with system disorder. We derive a simple analytical expression for this dependence in the context of electronic equilibration in an amorphous molecular system model. We find that the disorder dramatically enhances the relaxation time but does not affect its independence of the nature of the initial state. 19. Nuclear magnetic resonance relaxation in multiple sclerosis Larsson, H B; Barker, G J; MacKay, A 1998-01-01 OBJECTIVES: The theory of relaxation processes and their measurements are described. An overview is presented of the literature on relaxation time measurements in the normal and the developing brain, in experimental diseases in animals, and in patients with multiple sclerosis. RESULTS...... AND CONCLUSION: Relaxation time measurements provide insight into development of multiple sclerosis plaques, especially the occurrence of oedema, demyelination, and gliosis. There is also evidence that normal appearing white matter in patients with multiple sclerosis is affected. What is now needed are fast... 20. 1H relaxation dispersion in solutions of nitroxide radicals: Influence of electron spin relaxation Kruk, D.; Korpała, A.; Kubica, A.; Kowalewski, J.; Rössler, E. A.; Moscicki, J. 2013-03-01 The work presents a theory of nuclear (1H) spin-lattice relaxation dispersion for solutions of 15N and 14N radicals, including electron spin relaxation effects. The theory is a generalization of the approach presented by Kruk et al. [J. Chem. Phys. 137, 044512 (2012)], 10.1063/1.4736854. The electron spin relaxation is attributed to the anisotropic part of the electron spin-nitrogen spin hyperfine interaction modulated by rotational dynamics of the paramagnetic molecule, and described by means of Redfield relaxation theory. The 1H relaxation is caused by electron spin-proton spin dipole-dipole interactions which are modulated by relative translational motion of the solvent and solute molecules. The spectral density characterizing the translational dynamics is described by the force-free-hard-sphere model. The electronic relaxation influences the 1H relaxation by contributing to the fluctuations of the inter-molecular dipolar interactions. The developed theory is tested against 1H spin-lattice relaxation dispersion data for glycerol solutions of 4-oxo-TEMPO-d16-15N and 4-oxo-TEMPO-d16-14N covering the frequency range of 10 kHz-20 MHz. The studies are carried out as a function of temperature starting at 328 K and going down to 290 K. The theory gives a consistent overall interpretation of the experimental data for both 14N and 15N systems and explains the features of 1H relaxation dispersion resulting from the electron spin relaxation. 1. “I think relax, relax and it flows a lot easier”: Exploring client-generated relax strategies Dianne Cirone 2014-10-01 Full Text Available Background. 
Some adult stroke survivors participating in Cognitive Orientation to daily Occupational Performance (CO-OP treatment programs self-generated relax strategies that have not been explored in previous CO-OP publications. The objective of this study was to describe the process by which adults with stroke used relax strategies and to explore the outcomes associated with their use. Methods. Secondary analysis of transcripts of intervention sessions from five participants was conducted. Results. All five participants applied relax strategies after initially observing a breakdown in performance that was attributed to increased fatigue or tension. The relax strategies used by the participants during their occupations included general relaxation, physical modifications to reduce tension, mental preparation, and pacing. The application of these strategies seemed to result in improved skill performance, reduced fatigue, and transfer to other activities. Conclusion. The relax strategy warrants further investigation as a potentially important therapeutic tool to improve occupational performance in individuals who have had a stroke. 2. An atmospheric electrical method to determine the eddy diffusion coefficient M N Kulkarni; A K Kamra 2010-02-01 The ion–aerosol balance equations are solved to get the profiles of atmospheric electric parameters over the ground surface in an aerosol-rich environment under the conditions of surface radioactivity. Combining the earlier results for low aerosol concentrations and the present results for high aerosol concentrations, a relation is obtained between the average value of atmospheric electric space charge in the lowest ∼2m, the surface electric field and eddy diffusivity/aerosol concentration. The values of eddy diffusivity estimated from this method using some earlier measurements of space charge and surface electric field are in reasonably good agreement with those calculated from other standard methods using meteorological or electrical variables. 3. Comparative performance of image fusion methodologies in eddy current testing S. Thirunavukkarasu 2012-12-01 Full Text Available Image fusion methodologies have been studied for improving the detectability of eddy current Nondestructive Testing (NDT. Pixel level image fusion has been performed on C-scan eddy current images of a sub-surface defect at two different frequencies. Multi-resolution analysis based Laplacian pyramid and wavelet fusion methodologies, statistical inference based Bayesian fusion and Principal Component Analysis (PCA based fusion methodologies have been studied towards improving the detectability of defects. The performance of the fusion methodologies has been compared using image metrics such as SNR and entropy. Bayesian based fusion methodology has shown better performance as compared to other methodologies with 33.75 dB improvement in the SNR and an improvement of 3.22 in the entropy. 4. Synthetic-Eddy Method for Urban Atmospheric Flow Modelling Pavlidis, D.; Gorman, G. J.; Gomes, J. L. M. A.; Pain, C. C.; Apsimon, H. 2010-08-01 The computational fluid dynamics code Fluidity, with anisotropic mesh adaptivity, is used as a multi-scale obstacle-accommodating meteorological model. A novel method for generating realistic inlet boundary conditions based on the view of turbulence as a superposition of synthetic eddies is adopted. It is able to reproduce prescribed first-order and second-order one-point statistics and turbulence length scales. The aim is to simulate an urban boundary layer. 
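The image-fusion comparison above ranks methods by SNR and entropy of the fused C-scan images. A minimal sketch of two such metrics follows, Shannon entropy of an image histogram and a reference-based SNR in dB; SNR definitions vary between papers, so the one below is simply a common choice and is not necessarily the one used in that study.

```python
# Minimal sketch (illustrative): image entropy and a reference-based SNR.
import numpy as np

def image_entropy(img, bins=256):
    hist, _ = np.histogram(img.ravel(), bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()                          # normalise to probabilities
    return -np.sum(p * np.log2(p))           # bits

def snr_db(fused, reference):
    noise = fused - reference
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(noise**2))

ref = np.random.rand(128, 128)
fused = ref + 0.05 * np.random.randn(128, 128)
print(image_entropy(fused), snr_db(fused, ref))
```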
The model is validated against two standard benchmark tests: a plane channel flow numerical simulation and a flow past a cube physical simulation. The performed large-eddy simulations are in good agreement with both reference models, giving confidence that the model can be used to successfully simulate urban atmospheric flows. 5. Eddy current analysis of thin film recording heads Shenton, D.; Cendes, Z. J. 1984-03-01 Due to inherently thin pole tips which enhance the sharpness of read/write pulses, thin-film magnetic recording heads provide a unique potential for increasing disk file capacity. However, the very feature of these heads which makes them attractive in the recording process, namely, their small size, also makes thin-film heads difficult to study experimentally. For this reason, a finite element simulation of the thin-film head has been developed to provide the magnetic field distribution and the resistance/inductance characteristics of these heads under a variety of conditions. A study based on a one-step multipath eddy current procedure is reported. This procedure may be used in thin film heads to compute the variation of magnetic field with respect to frequency. Computations with the IBM 3370 head show that a large phase shift occurs due to eddy currents in the frequency range 1-10 MHz. 6. Numeral eddy current sensor modelling based on genetic neural network Yu A-Long 2008-01-01 This paper presents a method for modelling the numeral eddy current sensor based on a genetic neural network (GNN) to address its nonlinear behaviour. The principle and algorithms of the genetic neural network are introduced. In this method, the nonlinear model parameters of the numeral eddy current sensor are optimized by the GNN according to measurement data, so the method retains both the global searching ability of the genetic algorithm and the good local searching ability of the neural network. The nonlinear model has the advantages of strong robustness, on-line modelling and high precision. The maximum nonlinearity error can be reduced to 0.037% by using the GNN, whereas the maximum nonlinearity error is 0.075% using the least-squares method. 7. Low oxygen eddies in the eastern tropical North Atlantic Grundle, D. S.; Löscher, C. R.; Krahmann, G. 2017-01-01 Nitrous oxide (N2O) is a climate-relevant trace gas, and its production in the ocean generally increases under suboxic conditions. The Atlantic Ocean is well ventilated, and unlike the major oxygen minimum zones (OMZ) of the Pacific and Indian Oceans, dissolved oxygen and N2O concentrations in the Atlantic OMZ are relatively high and low, respectively. This study, however, demonstrates that recently discovered low oxygen eddies in the eastern tropical North Atlantic (ETNA) can produce N2O concentrations much higher (up to 115 nmol L-1) than those previously reported for the Atlantic Ocean, and which are within the range of the highest concentrations found in the open-ocean OMZs of the Pacific and Indian Oceans. N2O isotope and isotopomer signatures, as well as molecular genetic results, also point towards a major shift in the N2O cycling pathway in the core of the low oxygen eddy discussed here, and we... 8. Eddy current characterization of magnetic treatment of materials Chern, E. James 1992-01-01 Eddy current impedance measuring methods have been applied to study the effect that magnetically treated materials have on service life extension.
Eddy current impedance measurements have been performed on Nickel 200 specimens that have been subjected to many mechanical and magnetic engineering processes: annealing, applied strain, magnetic field, shot peening, and magnetic field after peening. Experimental results have demonstrated a functional relationship between coil impedance, resistance and reactance, and specimens subjected to various engineering processes. It has been shown that magnetic treatment does induce changes in a material's electromagnetic properties and does exhibit evidence of stress relief. However, further fundamental studies are necessary for a thorough understanding of the exact mechanism of the magnetic-field processing effect on machine tool service life. 9. Large Eddy Simulation of Coherent Structure of Impinging Jet Mingzhou YU; Lihua CHEN; Hanhui JIN; Jianren FAN 2005-01-01 The flow field of a rectangular-exit, semi-confined and submerged turbulent jet impinging orthogonally on a flat plate with Reynolds number 8500 was studied by large eddy simulation (LES). A dynamic sub-grid stress model has been used for the small scales of turbulence. The evolution processes, such as the forming, developing, moving, pairing and merging of the coherent vortex structures in the whole region, were obtained. The results revealed that the primary vortex structures were generated periodically, which was the key factor causing the secondary vortices to be generated in the wall jet region. In addition, the eddy intensity of the primary vortices and of the secondary vortices induced by the primary vortices was also analyzed as a function of time. 10. Practical Application of Eddy Currents Generated by Wind Dirba, I; Kleperis, J, E-mail: imants.dirba@gmail.com [Institute of Solid State Physics of University of Latvia, 8 Kengaraga Street, Riga, LV-1063 (Latvia) 2011-06-23 When a conductive material is subjected to time-varying magnetic fluxes, eddy (Foucault) currents are generated in it and a magnetic field of opposite polarity to the applied one arises. Due to the internal resistance of the conductive material, the eddy currents will be dissipated into heat (Joule heating). Conventional domestic water heaters utilize gas burners or electric resistance heating elements to heat the water in the tank, and a substantial part of the energy used for this is wasted. In this paper the origin of the electromagnetic induction heat generated by a wind turbine in a special heat-exchange chamber connected to a water boiler is discussed, and a material evaluation is performed using mathematical modelling (comparing the 2D finite element model with analytical and numerical calculation results). 11. Modeling and strain gauging of eddy current repulsion deicing systems Smith, Samuel O. 1993-01-01 Work described in this paper confirms and extends work done by Zumwalt, et al., on a variety of in-flight deicing systems that use eddy current repulsion for repelling ice. Two such systems are known as electro-impulse deicing (EIDI) and the eddy current repulsion deicing strip (EDS). Mathematical models for these systems are discussed for their capabilities and limitations. The author duplicates a particular model of the EDS. Theoretical voltage, current, and force results are compared directly to experimental results. Dynamic strain measurement results are presented for the EDS system. Dynamic strain measurements near EDS or EIDI coils are complicated by the high magnetic fields in the vicinity of the coils. High magnetic fields induce false voltage signals out of the gages. 12.
Estimating surface fluxes using eddy covariance and numerical ogive optimization Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling 2015-01-01 Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency contributions interfere with our ability to isolate local biogeochemical processes of interest, as represented by turbulent fluxes. No method currently exists to disentangle low-frequency contributions on flux estimates. Here, we present a novel comprehensive numerical scheme to identify and separate out low... 13. Limitations of eddy current testing in a fast reactor environment Wu, Tao; Bowler, John R. 2016-02-01 The feasibility of using eddy current probes for detecting flaws in fast nuclear reactor structures has been investigated with the aim of detecting defects immersed in electrically conductive coolant, including under liquid sodium during standby. For the inspections to be viable, there is a need to use an encapsulated sensor system that can be moved into position with the aid of visualization tools. The initial objective is to locate the surface to be investigated using, for example, a combination of electromagnetic sensors and sonar. Here we focus on one feature of the task in which eddy current probe impedance variations due to interaction with the external surface of a tube are evaluated in order to monitor the probe location and orientation during inspection. 14. Practical Application of Eddy Currents Generated by Wind Dirba, I.; Kleperis, J. 2011-06-01 When a conductive material is subjected to time-varying magnetic fluxes, eddy (Foucault) currents are generated in it and a magnetic field of opposite polarity to the applied one arises. Due to the internal resistance of the conductive material, the eddy currents will be dissipated into heat (Joule heating). Conventional domestic water heaters utilize gas burners or electric resistance heating elements to heat the water in the tank, and a substantial part of the energy used for this is wasted. In this paper the origin of the electromagnetic induction heat generated by a wind turbine in a special heat-exchange chamber connected to a water boiler is discussed, and a material evaluation is performed using mathematical modelling (comparing the 2D finite element model with analytical and numerical calculation results). 15. Analytical Modeling for the Grating Eddy Current Displacement Sensors Lv Chunfeng 2015-02-01 As a new type of displacement sensor, the grating eddy current displacement sensor (GECDS) combines traditional eddy current sensors and a grating structure in one device. The GECDS performs wide-range displacement measurement without loss of precision. This paper proposes an analytical modeling approach for the GECDS. The solution model is established in the Cartesian coordinate system, and the solving domain is limited to finite extents by using the truncated region eigenfunction expansion method.
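The ogive-optimization paper above starts from the basic eddy-covariance flux estimate F = mean(w'c'), whose value depends on how much low-frequency variability the averaging admits. The sketch below computes that covariance for two block-averaging periods on synthetic data; the series, the 10 Hz sampling rate, and the block lengths are assumptions for the example only and do not reproduce the authors' scheme.

```python
# Minimal sketch (illustrative): block-averaged eddy-covariance flux F = mean(w'c').
import numpy as np

def ec_flux(w, c, fs_hz, block_minutes):
    """Mean covariance of w' and c' over consecutive blocks."""
    n_block = int(block_minutes * 60 * fs_hz)
    fluxes = []
    for i in range(0, len(w) - n_block + 1, n_block):
        wb, cb = w[i:i + n_block], c[i:i + n_block]
        fluxes.append(np.mean((wb - wb.mean()) * (cb - cb.mean())))
    return np.mean(fluxes)

fs = 10.0                                   # sampling frequency [Hz]
t = np.arange(0, 2 * 3600, 1 / fs)          # two hours of synthetic data
w = np.random.randn(t.size)                 # vertical wind fluctuations
c = 0.3 * w + 0.1 * np.sin(2 * np.pi * t / 3600) + np.random.randn(t.size)
print(ec_flux(w, c, fs, block_minutes=5), ec_flux(w, c, fs, block_minutes=30))
```

Longer blocks admit more low-frequency (mesoscale) variance into the covariance, which is exactly the inclusion/exclusion question the abstract raises.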
Based on the second order vector potential, expressions for the electromagnetic field as well as coil impedance related to the displacement can be expressed in closed-form. Theoretical results are then confirmed by experiments, which prove the suitability and effectiveness of the analytical modeling approach. 16. Ice slurry accumulation Christensen, K.G.; Kauffeld, M. 1998-06-01 More and more refrigeration systems are designed with secondary loops, thus reducing the refrigerant charge of the primary refrigeration plant. In order not to increase energy consumption by introducing a secondary refrigerant, alternatives to the well established single phase coolants (brines) and different concepts of the cooling plant have to be evaluated. Combining the use of ice-slurry - mixture of water, a freezing point depressing agent (antifreeze) and ice particles - as melting secondary refrigerant and the use of a cool storage makes it possible to build plants with secondary loops without increasing the energy consumption and investment. At the same time the operating costs can be kept at a lower level. The accumulation of ice-slurry is compared with other and more traditional storage systems. The method is evaluated and the potential in different applications is estimated. Aspects of practically use of ice-slurry has been examined in the laboratory at the Danish Technological Institute (DTI). This paper will include the final conclusions from this work concerning tank construction, agitator system, inlet, outlet and control. The work at DTI indicates that in some applications systems with ice-slurry and accumulation tanks have a great future. These applications are described by a varying load profile and a process temperature suiting the temperature of ice-slurry (-3 - -8/deg. C). (au) 17. Slow spin relaxation in dipolar spin ice. Orendac, Martin; Sedlakova, Lucia; Orendacova, Alzbeta; Vrabel, Peter; Feher, Alexander; Pajerowski, Daniel M.; Cohen, Justin D.; Meisel, Mark W.; Shirai, Masae; Bramwell, Steven T. 2009-03-01 Spin relaxation in dipolar spin ice Dy2Ti2O7 and Ho2Ti2O7 was investigated using the magnetocaloric effect and susceptibility. The magnetocaloric behavior of Dy2Ti2O7 at temperatures where the orientation of spins is governed by ice rules (T Tice) revealed thermally activated relaxation; however, the resulting temperature dependence of the relaxation time is more complicated than anticipated by a mere extrapolation of the corresponding high temperature data [1]. A susceptibility study of Ho2Ti2O7 was performed at T > Tice and in high magnetic fields, and the results suggest a slow relaxation of spins analogous to the behavior reported in a highly polarized cooperative paramagnet [2]. [1] J. Snyder et al., Phys. Rev. Lett. 91 (2003) 107201. [2] B. G. Ueland et al., Phys. Rev. Lett. 96 (2006) 027216. 18. Energy landscape of relaxed amorphous silicon Valiquette, Francis; Mousseau, Normand 2003-09-01 We analyze the structure of the energy landscape of a well-relaxed 1000-atom model of amorphous silicon using the activation-relaxation technique (ART nouveau). Generating more than 40 000 events starting from a single minimum, we find that activated mechanisms are local in nature, that they are distributed uniformly throughout the model, and that the activation energy is limited by the cost of breaking one bond, independently of the complexity of the mechanism. 
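The spin-ice abstract above describes thermally activated relaxation, conventionally written in Arrhenius form, tau(T) = tau0 * exp(E_a/T) with the barrier E_a expressed in kelvin. A minimal sketch follows; tau0 and E_a are placeholder values, not parameters from the cited measurements.

```python
# Minimal sketch (illustrative): Arrhenius form of a thermally activated
# relaxation time. tau0 and the barrier Ea_K are placeholder values.
import numpy as np

def arrhenius_tau(T_kelvin, tau0_s=1e-6, Ea_K=10.0):
    return tau0_s * np.exp(Ea_K / np.asarray(T_kelvin, dtype=float))

for T in (2.0, 4.0, 10.0):
    print(T, arrhenius_tau(T))   # relaxation slows rapidly on cooling
```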
The overall shape of the activation-energy-barrier distribution is also insensitive to the exact details of the configuration, indicating that well-relaxed configurations see essentially the same environment. These results underscore the localized nature of relaxation in this material. 19. Precession Relaxation of Viscoelastic Oblate Rotators Frouard, Julien 2016-01-01 Various perturbations (collisions, close encounters, YORP) destabilise the rotation of a small body, leaving it in a non-principal spin state. Then the body experiences alternating stresses generated by the inertial forces. The ensuing inelastic dissipation reduces the kinetic energy, without influencing the angular momentum. This yields nutation relaxation, i.e., evolution of the spin towards rotation about the maximal-inertia axis. Knowledge of the timescales needed to damp the nutation is crucial in studies of small bodies' dynamics. In the past, nutation relaxation has been described by an empirical quality factor introduced to parameterise the dissipation rate and to evade the discussion of the actual rheological parameters and their role in dissipation. This approach is unable to describe the dependence of the relaxation rate upon the nutation angle, because we do not know the quality factor's dependence on the frequency (which is a function of the nutation angle). This leaves open the question of relax... 20. Two-Body Relaxation in Cosmological Simulations Binney, James; Knebe, Alexander 2002-01-01 The importance of two-body relaxation in cosmological simulations is explored with simulations in which there are two species of particles. The cases of mass ratio sqrt(2):1 and 4:1 are investigated. Simulations are run with both a fixed softening length and adaptive softening using the publicly available codes GADGET and MLAPM, respectively. The effects of two-body relaxation are detected in both the density profiles of halos and the mass function of halos. The effects are more pronounced with a fixed softening length, but even in this case they are not so large as to suggest that results obtained with one mass species are significantly affected by two-body relaxation. The simulations that use adaptive softening are slightly less affected by two-body relaxation and produce slightly higher central densities in the largest halos. They run about three times faster than the simulations that use a fixed softening length. 1. Structural relaxation in annealed hyperquenched basaltic glasses Guo, Xiaoju; Mauro, John C.; Potuzak, M. 2012-01-01 The enthalpy relaxation behavior of hyperquenched (HQ) and annealed hyperquenched (AHQ) basaltic glass is investigated through calorimetric measurements. The results reveal a common onset temperature of the glass transition for all the HQ and AHQ glasses under study, indicating that the primary relaxation is activated at the same temperature regardless of the initial departure from equilibrium. The analysis of secondary relaxation at different annealing temperatures provides insights into the enthalpy recovery of HQ glasses. 2. Vibrational energy relaxation in liquid oxygen Everitt, K. F.; Egorov, S. A.; Skinner, J. L.
1998-09-01 We consider theoretically the relaxation from the first excited vibrational state to the ground state of oxygen molecules in neat liquid oxygen. The relaxation rate constant is related in the usual way to the Fourier transform of a certain quantum mechanical force-force time-correlation function. A result from Egelstaff allows one instead to relate the rate constant (approximately) to the Fourier transform of a classical force-force time-correlation function. This Fourier transform is then evaluated approximately by calculating three equilibrium averages from a classical molecular dynamics simulation. Our results for the relaxation times (at two different temperatures) are within a factor of 5 of the experimental relaxation times, which are in the ms range. 3. Automatic tracking of dynamical evolutions of oceanic mesoscale eddies with satellite observation data Sun, Liang; Li, Qiu-Yang 2017-04-01 The oceanic mesoscale eddies play a major role in ocean climate system. To analyse spatiotemporal dynamics of oceanic mesoscale eddies, the Genealogical Evolution Model (GEM) based on satellite data is developed, which is an efficient logical model used to track dynamic evolution of mesoscale eddies in the ocean. It can distinguish different dynamic processes (e.g., merging and splitting) within a dynamic evolution pattern, which is difficult to accomplish using other tracking methods. To this end, a mononuclear eddy detection method was firstly developed with simple segmentation strategies, e.g. watershed algorithm. The algorithm is very fast by searching the steepest descent path. Second, the GEM uses a two-dimensional similarity vector (i.e. a pair of ratios of overlap area between two eddies to the area of each eddy) rather than a scalar to measure the similarity between eddies, which effectively solves the ''missing eddy" problem (temporarily lost eddy in tracking). Third, for tracking when an eddy splits, GEM uses both "parent" (the original eddy) and "child" (eddy split from parent) and the dynamic processes are described as birth and death of different generations. Additionally, a new look-ahead approach with selection rules effectively simplifies computation and recording. All of the computational steps are linear and do not include iteration. Given the pixel number of the target region L, the maximum number of eddies M, the number N of look-ahead time steps, and the total number of time steps T, the total computer time is O (LM(N+1)T). The tracking of each eddy is very smooth because we require that the snapshots of each eddy on adjacent days overlap one another. Although eddy splitting or merging is ubiquitous in the ocean, they have different geographic distribution in the Northern Pacific Ocean. Both the merging and splitting rates of the eddies are high, especially at the western boundary, in currents and in "eddy deserts". GEM is useful not only for 4. An Investigation of the Eddy-Covariance Flux Imbalance in a Year-Long Large-Eddy Simulation of the Weather at Cabauw Schalkwijk, J.; Jonker, H.J.J.; Siebesma, A.P. 2016-01-01 The low-frequency contribution to the systematic and random sampling errors in single-tower eddy-covariance flux measurements is investigated using large-eddy simulation (LES). We use a continuous LES integration that covers a full year of realistic weather conditions over Cabauw, the Netherlands, a 5. 
Energy transfers and spectral eddy viscosity in large-eddy simulations of homogeneous isotropic turbulence: Comparison of dynamic Smagorinsky and multiscale models over a range of discretizations Hughes, T.J.R.; Wells, G.N.; Wray, A.A. 2004-01-01 Energy transfers within large-eddy simulation (LES) and direct numerical simulation (DNS) grids are studied. The spectral eddy viscosity for conventional dynamic Smagorinsky and variational multiscale LES methods are compared with DNS results. Both models underestimate the DNS results for a very coa 6. 4. Large-Eddy Simulation of Turbulent Channel Flow Yasuaki, DOI; Tsukasa, KIMURA; Hiroshima University; Mitsubishi Precision 1989-01-01 Turbulent channel flow is studied numerically by using Large-Eddy Simulation (LES). Finite difference method is employed in the LES. The simulation is stably executed by using the 3rd order upwind difference scheme which dissipate numerical errors. Several pilot tests are performed in order to investigate the effect of numerical dissipation and the wall damping function on the calculated results. Time dependent feature and turbulent flow structures in a turbulent channel flow are numerically ... 7. Large Eddy Simulation for Dispersed Bubbly Flows: A Review M. T. Dhotre 2013-01-01 Full Text Available Large eddy simulations (LES of dispersed gas-liquid flows for the prediction of flow patterns and its applications have been reviewed. The published literature in the last ten years has been analysed on a coherent basis, and the present status has been brought out for the LES Euler-Euler and Euler-Lagrange approaches. Finally, recommendations for the use of LES in dispersed gas liquid flows have been made. 8. Large-Eddy Simulations of Dust Devils and Convective Vortices Spiga, Aymeric; Barth, Erika; Gu, Zhaolin; Hoffmann, Fabian; Ito, Junshi; Jemmett-Smith, Bradley; Klose, Martina; Nishizawa, Seiya; Raasch, Siegfried; Rafkin, Scot; Takemi, Tetsuya; Tyler, Daniel; Wei, Wei 2016-11-01 In this review, we address the use of numerical computations called Large-Eddy Simulations (LES) to study dust devils, and the more general class of atmospheric phenomena they belong to (convective vortices). We describe the main elements of the LES methodology. We review the properties, statistics, and variability of dust devils and convective vortices resolved by LES in both terrestrial and Martian environments. The current challenges faced by modelers using LES for dust devils are also discussed in detail. 9. Eddy current testing probe optimization using a parallel genetic algorithm Dolapchiev Ivaylo 2008-01-01 Full Text Available This paper uses the developed parallel version of Michalewicz's Genocop III Genetic Algorithm (GA searching technique to optimize the coil geometry of an eddy current non-destructive testing probe (ECTP. The electromagnetic field is computed using FEMM 2D finite element code. The aim of this optimization was to determine coil dimensions and positions that improve ECTP sensitivity to physical properties of the tested devices. 10. Cold HI in Turbulent Eddies and Galactic Spiral Shocks Steven J Gibson; Taylor, A. Russell; Stil, Jeroen M.; Brunt, Christopher M.; Kavars, Dain W.; Dickey, John M. 2007-01-01 HI 21cm-line self-absorption (HISA) reveals the shape and distribution of cold atomic clouds in the Galactic disk. Many of these clouds lack corresponding CO emission, despite being colder than purely atomic gas in equilibrium models. 
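The probe-optimization abstract above couples a genetic algorithm to a finite-element field solver. The sketch below is only a generic, serial, real-coded genetic algorithm with a toy analytic objective standing in for the FEMM-computed probe sensitivity; it is not Genocop III, and the bounds, population size, and mutation scale are all assumptions.

```python
# Minimal sketch (illustrative): a generic real-coded genetic algorithm for
# tuning three coil-geometry parameters. The objective is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def toy_sensitivity(x):
    # Placeholder objective: pretend the best coil has inner radius 2 mm,
    # outer radius 4 mm and height 1 mm (to be maximized).
    target = np.array([2.0, 4.0, 1.0])
    return -np.sum((x - target)**2)

def genetic_optimize(fitness, bounds, pop=40, gens=100, mut=0.1):
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([fitness(x) for x in X])
        parents = X[np.argsort(f)[::-1][: pop // 2]]            # truncation selection
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(len(lo)) < 0.5, a, b)   # uniform crossover
            child += mut * (hi - lo) * rng.standard_normal(len(lo))
            kids.append(np.clip(child, lo, hi))
        X = np.vstack([parents, kids])
    f = np.array([fitness(x) for x in X])
    return X[np.argmax(f)]

bounds = np.array([[0.5, 5.0], [1.0, 10.0], [0.2, 3.0]])   # coil parameters [mm]
print(genetic_optimize(toy_sensitivity, bounds))
```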
HISA requires background line emission at the same velocity, hence mechanisms that can produce such backgrounds. Weak, small-scale, and widespread absorption is likely to arise from turbulent eddies, while strong, large-scale absorption appears organized in clou... 11. Eddy-Mean Flow Interactions in Western Boundary Current Jets 2009-02-01 relevance to the atmosphere, the enstrophy variance budget (assuming eddy en- strophy advection, a triple correlation term, is small) reduces to a two-term...producing an increase in the barotropic component of the zonal jet. The other term however, v′2 − u′2, the term producing the quadrupole pattern that...shooting technique ” that varies the complex phase speed until the numerical solutions in the interior match the exterior analytic solutions at the edge of 12. Recent Improvements in High-Frequency Eddy Current Conductivity Spectroscopy Abu-Nabah, Bassam A.; Nagy, Peter B. 2008-02-01 Due to its frequency-dependent penetration depth, eddy current measurements are capable of mapping near-surface residual stress profiles based on the so-called piezoresistivity effect, i.e., the stress-dependence of electric conductivity. To capture the peak compressive residual stress in moderately shot-peened (Almen 4-8A) nickel-base superalloys, the eddy current inspection frequency has to go as high as 50-80 MHz. Recently, we have reported the development of a new high-frequency eddy current conductivity measuring system that offers an extended inspection frequency range up to 80 MHz. Unfortunately, spurious self- and stray-capacitance effects render the complex coil impedance variation with lift-off more nonlinear as the frequency increases, which makes it difficult to achieve accurate apparent eddy current conductivity (AECC) measurements with the standard four-point linear interpolation method beyond 25 MHz. In this paper, we will demonstrate that reducing the coil size reduces its sensitivity to capacitive lift-off variations, which is just the opposite of the better known inductive lift-off effect. Although reducing the coil size also reduces its absolute electric impedance and relative sensitivity to conductivity variations, a smaller coil still yields better overall performance for residual stress assessment. In addition, we will demonstrate the benefits of a semi-quadratic interpolation scheme that, together with the reduced lift-off sensitivity of the smaller probe coil, minimizes and in some cases completely eliminates the sensitivity of AECC measurements to lift-off uncertainties. These modifications allow us to do much more robust measurements up to as high as 80-100 MHz with the required high relative accuracy of +/-0.1%. 13. Mesolayer of attached eddies in turbulent channel flow Hwang, Yongyun 2016-10-01 Recent experimental measurements have reported that the outer peak of the streamwise wave-number spectra of the streamwise velocity depends on the Reynolds number. Starting from this puzzling observation, here it is proposed that the wall-parallel velocity components of each of the energy-containing motions in the form of Towsnend's attached eddies exhibit an inner-scaling nature in the region close to the wall. Some compelling evidence on this proposition has been presented with a careful inspection of scaling of velocity spectra from direct numerical simulations, a linear analysis with an eddy viscosity, and the recently computed statistical structure of the self-similar energy-containing motions in the logarithmic region. 
This observation suggests that the viscous wall effect would not be negligible at least below the peak wall-normal location of each of the energy-containing motions in the logarithmic and outer regions, reminiscent of the concept of the mesolayer previously observed in the mean momentum balance. It is shown that this behavior emerges due to a minimal form of scale interaction, modeled by the eddy viscosity in the linear theory, and enables one to explain the Reynolds-number-dependent behavior of the outer peak as well as the near-wall penetration of the large-scale outer structures in a consistent manner. Incorporation of this viscous wall effect to Townsend's attached eddies, which were originally built with an inviscid approximation at the wall, also reveals that the self-similarity of the wall-parallel velocity components of the energy-containing motions would be theoretically broken in the region close to the wall. 14. ARRAY PULSED EDDY CURRENT IMAGING SYSTEM USED TO DETECT CORROSION Yang Binfeng; Luo Feilu; Cao Xiongheng; Xu Xiaojie 2005-01-01 A theory model is established to describe the voltage-current response function. The peak amplitude and the zero-crossing time of the transient signal is extracted as the imaging features, array pulsed eddy current (PEC) imaging is proposed to detect corrosion. The test results show that this system has the advantage of fast scanning speed, different imaging mode and quantitative detection, it has a broad application in the aviation nondestructive testing. 15. Nondestructive examination of PHWR pressure tube using eddy current technique Lee, Hee Jong; Choi, Sung Nam; Cho, Chan Hee; Yoo, Hyun Joo; Moon, Gyoon Young [KHNP Central Research Institute, Daejeon (Korea, Republic of) 2014-06-15 A pressurized heavy water reactor (PHWR) core has 380 fuel channels contained and supported by a horizontal cylindrical vessel known as the calandria, whereas a pressurized water reactor (PWR) has only a single reactor vessel. The pressure tube, which is a pressure-retaining component, has a 103.4 mm inside diameter x 4.19 mm wall thickness, and is 6.36 m long, made of a zirconium alloy (Zr-2.5 wt% Nb). This provides support for the fuel while transporting the D2O heat-transfer fluid. The simple tubular geometry invites highly automated inspection, and good approach for all inspection. Similar to all nuclear heat-transfer pressure boundaries, the PHWR pressure tube requires a rigorous, periodic inspection to assess the reactor integrity in accordance with the Korea Nuclear Safety Committee law. Volumetric-based nondestructive evaluation (NDE) techniques utilizing ultrasonic and eddy current testing have been adopted for use in the periodic inspection of the fuel channel. The eddy current testing, as a supplemental NDE method to ultrasonic testing, is used to confirm the flaws primarily detected through ultrasonic testing, however, eddy current testing offers a significant advantage in that its ability to detect surface flaws is superior to that of ultrasonic testing. In this paper, effectiveness of flaw detection and the depth sizing capability by eddy current testing for the inside surface of a pressure tube, will be introduced. As a result of this examination, the ET technique is found to be useful only as a detection technique for defects because it can detect fine defects on the surface with high resolution. However, the ET technique is not recommended for use as a depth sizing method because it has a large degree of error for depth sizing. 16. 
Potential and limitations of eddy current lockin-thermography Riegert, G.; Gleiter, A.; Busse, G. 2006-04-01 Eddy current thermography uses an induction coil to induce eddy currents in conductive materials. The involved resistive losses heat the sample. By modulation of the eddy current amplitude, thermal waves are generated which interact with boundaries thereby revealing defects. Conventional eddy current testing has only a limited depth range due to the skin effect of metal samples. In Induction-Lockin-Thermography (ILT) the depth range is extended by the thermal penetration depth. An infrared camera monitors the modulation of the temperature field on the surface as a response to the coded excitation thereby allowing for fast imaging of defects in larger areas without the need of slow point-by-point mapping. This response is decoded by a Fourier analysis at the modulation frequency. So the extracted information is displayed by just two images where one displays local amplitude and the other local phase. ILT has significant advantages as compared to inductive heating with visual inspection of the thermographic sequence: Phase angle images are independent of most artifacts like reflections, variation in emission coefficient, or inhomogeneous heating. Due to the performed Fourier analysis of the temperature image sequence, the signal-to-noise ratio in the amplitude and phase images is significantly better than in single temperature images of the sequence. Induction heating is confined to conductive materials. However, it is applicable not only to metals but also to carbon fiber reinforced laminates (CFRP) or carbon fiber reinforced ceramics (C/C-SiC). The presented examples for applications of ILT illustrate the potential and limitations of this new non-destructive inspection method. 17. Analytical representations for relaxation functions of glasses Hilfer, R. 2002-01-01 Analytical representations in the time and frequency domains are derived for the most frequently used phenomenological fit functions for non-Debye relaxation processes. In the time domain the relaxation functions corresponding to the complex frequency dependent Cole-Cole, Cole-Davidson and Havriliak-Negami susceptibilities are also represented in terms of $H$-functions. In the frequency domain the complex frequency dependent susceptibility function corresponding to the time dependent stretche... 18. Vibrational relaxation in very high temperature nitrogen Hansen, C. Frederick 1991-01-01 Vibrational relaxation of N2 molecules is considered at temperatures up to 40,000 K in gas mixtures that contain electrons as well as heavy collision partners. The theory of vibrational relaxation due to N2-N2 collisions is fit to experimental data to 10,000 K by choice of the shape of the intermolecular potential and size of the collision cross section. These values are then used to extrapolate the theory to 40,000 K. 19. Anomalous enthalpy relaxation in vitreous silica Yue, Yuanzheng 2015-01-01 scans. It is known that the liquid fragility (i.e., the speed of the viscous slow-down of a supercooled liquid at its Tg during cooling) has impact on enthalpy relaxation in glass. Here, we find that vitreous silica (as a strong system) exhibits striking anomalies in both glass transition and enthalpy...... the fragile ones do in a structurally independent fashion. We discuss the origin of the anomalous enthalpy relaxation in the HQ vitreous silica.... 20. 
Message passing with relaxed moment matching Qi, Yuan; Guo, Yandong 2012-01-01 Bayesian learning is often hampered by large computational expense. As a powerful generalization of popular belief propagation, expectation propagation (EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP can be sensitive to outliers and suffer from divergence for difficult cases. To address this issue, we propose a new approximate inference approach, relaxed expectation propagation (REP). It relaxes the moment matching requirement of expectation propagation by addin... 1. Protein dynamics from nuclear magnetic relaxation. Charlier, Cyril; Cousin, Samuel F; Ferrage, Fabien 2016-05-01 Nuclear magnetic resonance is a ubiquitous spectroscopic tool to explore molecules with atomic resolution. Nuclear magnetic relaxation is intimately connected to molecular motions. Many methods and models have been developed to measure and interpret the characteristic rates of nuclear magnetic relaxation in proteins. These approaches shed light on a rich and diverse range of motions covering timescales from picoseconds to seconds. Here, we introduce some of the basic concepts upon which these approaches are built and provide a series of illustrations. 2. Lagrange relaxation and Dantzig-Wolfe decomposition Vidal, Rene Victor Valqui 1989-01-01 The paper concerns a large-scale linear programming problem having a block-diagonal structure with coupling constraints. It is shown that there are deep connections between the Lagrange relaxation techniques and the Dantzig-Wolfe decomposition methods... 4. An examination of double-diffusive processes in a mesoscale eddy in the Arctic Ocean Bebieva, Yana; Timmermans, Mary-Louise 2016-01-01 Temperature and salinity measurements of an Atlantic Water mesoscale eddy in the Arctic Ocean's Canada Basin are analyzed to understand the effects of velocity shear on a range of double-diffusive processes. Double-diffusive structures in and around the eddy are examined through the transition from low shear (outside the eddy and within its solid body core) to high geostrophic shear zones at the eddy flanks. The geostrophic Richardson number takes large values where a double-diffusive staircase is observed and lowest values at the eddy flanks where geostrophic velocity is largest and a well-formed staircase is not present. A Thorpe scale analysis is used to estimate turbulent diffusivities in the flank regions. Double-diffusive and turbulent heat, salt, and buoyancy fluxes from the eddy are computed, and used to infer that the eddy decays on time scales of around 4-9 years.
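As a brief sketch of the connection discussed in the Lagrange relaxation entry above (not the paper's own derivation): for a block-structured problem $z = \min \sum_k c_k^\top x_k$ subject to coupling constraints $\sum_k A_k x_k = b$ and $x_k \in X_k$, dualizing the coupling constraints with multipliers $\lambda$ gives
$$L(\lambda) = \lambda^\top b + \sum_k \min_{x_k \in X_k} \left(c_k - A_k^\top \lambda\right)^\top x_k \le z,$$
so every choice of $\lambda$ yields a lower bound, and maximizing $L(\lambda)$ gives the same bound as the Dantzig-Wolfe master problem obtained by representing each $X_k$ through its extreme points (assuming the $X_k$ are bounded polyhedra).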
Fluxes highlight that Atlantic Water heat within the eddy can be fluxed downward into deeper water layers by means of both double-diffusive and turbulent mixing. Estimated lateral variations in vertical fluxes across the eddy allow for speculation that double diffusion speeds up the eddy decay, having important implications for the transfer of Atlantic Water heat in the Arctic Ocean. 5. Large eddy simulation of the atmosphere on various scales. Cullen, M J P; Brown, A R 2009-07-28 Numerical simulations of the atmosphere are routinely carried out on various scales for purposes ranging from weather forecasts for local areas a few hours ahead to forecasts of climate change over periods of hundreds of years. Almost without exception, these forecasts are made with space/time-averaged versions of the governing Navier-Stokes equations and laws of thermodynamics, together with additional terms representing internal and boundary forcing. The calculations are a form of large eddy modelling, because the subgrid-scale processes have to be modelled. In the global atmospheric models used for long-term predictions, the primary method is implicit large eddy modelling, using discretization to perform the averaging, supplemented by specialized subgrid models, where there is organized small-scale activity, such as in the lower boundary layer and near active convection. Smaller scale models used for local or short-range forecasts can use a much smaller averaging scale. This allows some of the specialized subgrid models to be dropped in favour of direct simulations. In research mode, the same models can be run as a conventional large eddy simulation only a few orders of magnitude away from a direct simulation. These simulations can then be used in the development of the subgrid models for coarser resolution models. 6. Turbulent eddy-time-correlation in the solar convective zone Belkacem, K; Goupil, M J; Baudin, F; Salabert, D; Appourchaux, T 2010-01-01 Theoretical modeling of the driving processes of solar-like oscillations is a powerful way of understanding the properties of the convective zones of solar-type stars. In this framework, the description of the temporal correlation between turbulent eddies is an essential ingredient to model mode amplitudes. However, there is a debate between a Gaussian or Lorentzian description of the eddy-time correlation function (Samadi et al. 2003, Chaplin et al. 2005). Indeed, a Gaussian description reproduces the low-frequency shape of the mode amplitude for the Sun, but is unsatisfactory from a theoretical point of view (Houdek, 2009) and leads to other disagreements with observations (Samadi et al., 2007). These are solved by using a Lorentzian description, but there the low-frequency shape of the solar observations is not correctly reproduced. We reconcile the two descriptions by adopting the sweeping approximation, which consists in assuming that the eddy-time-correlation function is dominated by the advection of ed... 7. Comparison of analytical eddy current models using principal components analysis Contant, S.; Luloff, M.; Morelli, J.; Krause, T. W. 2017-02-01 Monitoring the gap between the pressure tube (PT) and the calandria tube (CT) in CANDU® fuel channels is essential, as contact between the two tubes can lead to delayed hydride cracking of the pressure tube. 
Multifrequency transmit-receive eddy current non-destructive evaluation is used to determine this gap, as this method has different depths of penetration and variable sensitivity to noise, unlike single frequency eddy current non-destructive evaluation. An analytical model based on the Dodd and Deeds solutions, and a second model that accounts for normal and lossy self-inductances, and a non-coaxial pickup coil, are examined for representing the response of an eddy current transmit-receive probe when considering factors that affect the gap response, such as pressure tube wall thickness and pressure tube resistivity. The multifrequency model data were analyzed using principal components analysis (PCA), a statistical method used to reduce the data set into a data set of fewer variables. The results of the PCA of the analytical models were then compared to PCA performed on a previously obtained experimental data set. The models gave similar results under variable PT wall thickness conditions, but the non-coaxial coil model, which accounts for self-inductive losses, performed significantly better than the Dodd and Deeds model under variable resistivity conditions. 8. Nearby boundaries create eddies near microscopic filter feeders. Pepper, Rachel E; Roper, Marcus; Ryu, Sangjin; Matsudaira, Paul; Stone, Howard A 2010-05-06 We show through calculations, simulations and experiments that the eddies often observed near sessile filter feeders are frequently due to the presence of nearby boundaries. We model the common filter feeder Vorticella, which is approximately 50 µm across and which feeds by removing bacteria from ocean or pond water that it draws towards itself. We use both an analytical stokeslet model and a Brinkman flow approximation that exploits the narrow-gap geometry to predict the size of the eddy caused by two parallel no-slip boundaries that represent the slides between which experimental observations are often made. We also use three-dimensional finite-element simulations to fully solve for the flow around a model Vorticella and analyse the influence of multiple nearby boundaries. Additionally, we track particles around live feeding Vorticella in order to determine the experimental flow field. Our models are in good agreement both with each other and with experiments. We also provide approximate equations to predict the experimental eddy sizes owing to boundaries both for the case of a filter feeder between two slides and for the case of a filter feeder attached to a perpendicular surface between two slides. 9. A Study of Eddy Viscosity Coefficient in Numerical Tidal Simulation 陈永平; 雷智益 2001-01-01 Based on the fluid motion equations, the physical meaning of the eddy viscosity coefficient and the rationality of the Boussinesq hypothesis are discussed in this paper. The effect of the coefficient on numerical stability is analyzed briefly. A semi-enclosed rectangular sea area, with an orthogonal spur dike, is applied in a 2-D numerical model to study the effect of the horizontal eddy viscosity coefficient (AH). The computed result shows that AH has little influence on the tidal level and averaged flow velocity, but has obvious influence on the intensity and the range of return flow around the spur dike. Correspondingly, a wind-driven current pool and an annular current are applied in a 3-D numerical model, respectively, to study the effect of the vertical eddy viscosity coefficient (AV).
The computed result shows that the absolute value of AV is inversely proportional to that of the horizontal velocity, and the vertical gradient of AV determines the vertical distribution of horizontal velocity. The distribution form of AV is theoretically recommended as a parabolic type, of which the maximum value appears at 0.5 H. 10. Modeling of the eddy viscosity by breaking waves 2007-01-01 Breaking wave induced near-surface turbulence has important consequences for many physical and biochemical processes, including water column and nutrient mixing and heat and gas exchange across the air-sea interface. The energy loss from wave breaking and the bubble plume penetration depth are estimated. As a consequence, the vertical distribution of the turbulent kinetic energy (TKE), the TKE dissipation rate and the eddy viscosity induced by wave breaking are also provided. Model results are found to be consistent with the observational evidence that most TKE generated by wave breaking is lost within a depth of a few meters near the sea surface. The turbulence level is high: the eddy viscosity induced by breaking is nearly four orders of magnitude larger than υ_wl (= κ u_*w z), the value predicted by wall-layer scaling close to the surface, where u_*w is the friction velocity in water, κ ≈ 0.4 is the von Kármán constant, and z is the water depth. The strength of the eddy viscosity depends on both wind speed and sea state, and decays rapidly with depth. This leads to the conclusion that the breaking wave induced vertical mixing is mainly limited to the near surface layer, well above the classical values expected from similarity theory. Deeper down, however, the effects of wave breaking on the vertical mixing become less important. 11. Simulation of Cracks Detection in Tubes by Eddy Current Testing S Bennoud 2016-12-01 Full Text Available Eddy current testing can be used as an effective tool to characterize defects in conducting materials. In recent years, important progress has been made in the development of software for eddy current testing simulations. Evaluation of the NDT modeling tools is the principal goal of this study. Main concerns of the aeronautic industry and the potential contribution of modeling are discussed and illustrated. Simulation by the finite element method is carried out with the aim of calculating the electromagnetic energy of interaction between the coil and the tested part, which enables the impedance response to be deduced. The objective of this work is the development of a code for efficient resolution of an electromagnetic modeling problem, especially for the analysis of the probe response due to the eddy current process. The developed code was validated: the obtained results converge quickly towards the solution given by the FEMM code, with an average error of 0.018 for the real parts of the impedance and 0.004 for the imaginary parts. The results presented in this work illustrate that the proposed method is practical, and they are also of intrinsic interest, especially for the inspection of aluminum tubes used in aeronautics. 12. Influence of mesoscale eddies on spatial structuring of top predators’ communities in the Mozambique Channel Tew Kai, Emilie; Marsac, Francis 2010-07-01 Mesoscale physical features such as fronts and eddies appear to play a key role in the dynamics of marine communities.
In the Indian Ocean, the Mozambique Channel (MC) is a natural laboratory to investigate mesoscale eddies (100-300 km in diameter); indeed, four to seven eddies per year are known to transit through the Channel, from north to south. We studied the structuring role of the mesoscale eddies on spatial dynamics and foraging strategy of top predators using seabirds and tuna as examples. Emphasis was on the central part of the MC (16-24°S) where eddy activity is most developed. We integrated three main categories of information: (i) satellite altimetry for sea-level anomaly (SLA) and geostrophic current, remotely-sensed surface temperature (SST) and SeaWiFS data for chlorophyll concentration (CC); (ii) individual tracking of Great Frigatebirds (Fregata minor) to characterize foraging areas; and (iii) detailed catch statistics from purse-seine fisheries to describe distribution of tuna schools. Generalized Additive Models were applied to quantify the relative influence of mesoscale descriptors, SST and CC on foraging behaviour of Great Frigatebirds and location of purse-seine sets. Our results show that seabirds are more closely tied to mesoscale eddies compared to tuna. We underline the role of eddy boundaries on the response of frigatebirds and tuna. Good foraging conditions are promoted along the edge of eddies as a result of the interplay of the maturation process from cyclonic eddies and the concentration process by eddy interactions. A decrease in the number or intensity of eddies in the MC, as observed during strong El Niño events, could potentially affect the eddy-related ecosystem with putative negative repercussions on central-place foragers such as Great Frigatebirds. We discuss the importance of a better understanding of the “eddy system” in marine conservation and tuna fisheries management in the Mozambique Channel. 13. Orientational relaxation in semiflexible dendrimers. Kumar, Amit; Biswas, Parbati 2013-12-14 The orientational relaxation dynamics of semiflexible dendrimers are theoretically calculated within the framework of the optimized Rouse-Zimm formalism. Semiflexibility is modeled through appropriate restrictions in the direction and orientation of the respective bond vectors, while the hydrodynamic interactions are included via the preaveraged Oseen tensor. The time autocorrelation function M_i^(1)(t) and the second order orientational autocorrelation function P_i^(2)(t) are analyzed as a function of the branch-point functionality and the degree of semiflexibility. Our approach to calculating M_i^(1)(t) is completely different from that of the earlier studies (A. Perico and M. Guenza, J. Chem. Phys., 1985, 83, 3103; J. Chem. Phys., 1986, 84, 510), where the expression of M_i^(1)(t) obtained from earlier studies does not demarcate the flexible dendrimers from the semiflexible ones. The component of global motion of the time autocorrelation function exhibits a strong dependence on both the degree of semiflexibility and the branch-point functionality, while the component of pulsation motion depends only on the degree of semiflexibility. But it is difficult to distinguish the difference in the extent of pulsation motion among the compressed (0 qualitative behavior of P_i^(2)(t) obtained from our calculations closely matches the expression for P_exact^(2)(t) in the earlier studies.
The theoretically calculated spectral density, J(ω), is found to depend on the degree of semiflexibility and the branch-point functionality for the compressed and expanded conformations of semiflexible dendrimers as a function of frequency, especially in the high frequency regime, where J(ω) decays with frequency for both compressed and expanded conformations of semiflexible dendrimers. This decay of the spectral density occurs after displaying a cross-over behavior with the variation in the degree of semiflexibility in the intermediate frequency regime. The characteristic area increases with the 14. Dielectric relaxation spectroscopy of phlogopite mica Kaur, Navjeet; Singh, Mohan; Singh, Anupinder [Department of Physics, Guru Nanak Dev University, Amritsar, Punjab 143005 (India); Awasthi, A.M. [Thermodynamics Laboratory, UGC-DAE Consortium for Scientific Research, Indore 452001 (India); Singh, Lakhwant, E-mail: lakhwant@yahoo.com [Department of Physics, Guru Nanak Dev University, Amritsar, Punjab 143005 (India)] 2012-11-15 An in-depth investigation of the dielectric characteristics of annealed phlogopite mica has been conducted in the frequency range 0.1 Hz-10 MHz and over the temperature range 653-873 K through the framework of dielectric permittivity, electric modulus and conductivity formalisms. These formalisms show qualitative similarities in relaxation processes. The frequency dependence of M′′ and the dc conductivity is found to obey an Arrhenius law, and the activation energy of the phlogopite mica calculated both from dc conductivity and the modulus spectrum is similar, indicating that the same type of charge carriers are involved in the relaxation phenomena. The electric modulus and conductivity data have been fitted with the Havriliak-Negami function. Scaling of M′, M′′ and the ac conductivity has also been performed in order to obtain insight into the relaxation mechanisms. The scaling behaviour indicates that the relaxation describes the same mechanism at different temperatures. The relaxation mechanism was also examined using the Cole-Cole approach. The study elaborates that the investigation of the temperature and frequency dependence of dielectric relaxation in phlogopite mica will be helpful for various cutting edge applications of this material in electrical engineering. 15. Dielectric relaxation of gamma irradiated muscovite mica Kaur, Navjeet [Department of Physics, Guru Nanak Dev University, Amritsar, Punjab 143005 (India); Singh, Mohan, E-mail: mohansinghphysics@gmail.com [Department of Physics, Guru Nanak Dev University, Amritsar, Punjab 143005 (India); Singh, Lakhwant [Department of Physics, Guru Nanak Dev University, Amritsar, Punjab 143005 (India); Awasthi, A.M. [Thermodynamics Laboratory, UGC-DAE Consortium for Scientific Research, Indore 452001 (India); Lochab, S.P. [Inter-University Accelerator Centre, Aruna Asaf Ali Marg, New Delhi 110067 (India)] 2015-03-15 Highlights: • The present article reports the effect of gamma irradiation on the dielectric relaxation characteristics of muscovite mica. • Dielectric and electrical relaxations have been analyzed in the framework of dielectric permittivity, electric modulus and Cole–Cole formalisms. • The frequency dependent electrical conductivity has been rationalized using Jonscher’s universal power law. • The experimentally measured electric modulus and conductivity data have been fitted using the Havriliak–Negami dielectric relaxation function.
- Abstract: In the present research, the dielectric relaxation of gamma irradiated muscovite mica was studied in the frequency range of 0.1 Hz–10 MHz and the temperature range of 653–853 K, using the dielectric permittivity, electric modulus and conductivity formalisms. The dielectric constants (ϵ′ and ϵ′′) are found to be high for gamma irradiated muscovite mica as compared to the pristine sample. The frequency dependence of the imaginary part of the complex electric modulus (M′′) and the dc conductivity data conforms to an Arrhenius law, with a single value of activation energy for the pristine sample and two values of activation energy for the gamma irradiated mica sample. The experimentally assessed electric modulus and conductivity data have been interpreted with the Havriliak–Negami dielectric relaxation function. Using the Cole–Cole framework, an analysis of the real and imaginary parts of the electric modulus for the pristine and gamma irradiated samples was carried out, which reflects the non-Debye relaxation mechanism. 16. Rounded stretched exponential for time relaxation functions. Powles, J G; Heyes, D M; Rickayzen, G; Evans, W A B 2009-12-01 A rounded stretched exponential function is introduced, $C(t)=\exp\{(\tau_0/\tau_E)^{\beta}[1-(1+(t/\tau_0)^2)^{\beta/2}]\}$, where $t$ is time, and $\tau_0$ and $\tau_E$ are two relaxation times. This expression can be used to represent the relaxation function of many real dynamical processes, as at long times, $t>\tau_0$, the function converges to a stretched exponential with normalizing relaxation time $\tau_E$, yet its expansion is even or symmetric in time, which is a statistical mechanical requirement. This expression fits well the shear stress relaxation function for model soft-sphere fluids near coexistence, with $\tau_E$ Cole-Cole plots for dielectric and shear stress relaxation (both the modulus and viscosity forms). It is shown that both the dielectric spectra and dynamic shear modulus imaginary parts approach the real axis with a slope equal to 0 at high frequency, whereas the dynamic viscosity has an infinite slope in the same limit. This indicates that inertial effects at high frequency are best discerned in the modulus rather than the viscosity Cole-Cole plot. As a consequence of the even expansion in time of the shear stress relaxation function, the value of the storage modulus derived from it at very high frequency exceeds that in the infinite frequency limit (i.e., $G(\infty)$). 17. Stress relaxation in viscous soft spheres. Boschan, Julia; Vasudevan, Siddarth A; Boukany, Pouyan E; Somfai, Ellák; Tighe, Brian P 2017-09-27 We report the results of molecular dynamics simulations of stress relaxation tests in athermal viscous soft sphere packings close to their unjamming transition. By systematically and simultaneously varying both the amplitude of the applied strain step and the pressure of the initial condition, we access both linear and nonlinear response regimes and control the distance to jamming. Stress relaxation in viscoelastic solids is characterized by a relaxation time τ* that separates short time scales, where viscous loss is substantial, from long time scales, where elastic storage dominates and the response is essentially quasistatic. We identify two distinct plateaus in the strain dependence of the relaxation time, one each in the linear and nonlinear regimes. The height of both plateaus scales as an inverse power law with the distance to jamming.
By probing the time evolution of particle velocities during relaxation, we further identify a correlation between mechanical relaxation in the bulk and the degree of non-affinity in the particle velocities on the micro scale. 18. On convex relaxation of graph isomorphism. Aflalo, Yonathan; Bronstein, Alexander; Kimmel, Ron 2015-03-10 We consider the problem of exact and inexact matching of weighted undirected graphs, in which a bijective correspondence is sought to minimize a quadratic weight disagreement. This computationally challenging problem is often relaxed as a convex quadratic program, in which the space of permutations is replaced by the space of doubly stochastic matrices. However, the applicability of such a relaxation is poorly understood. We define a broad class of friendly graphs characterized by an easily verifiable spectral property. We prove that for friendly graphs, the convex relaxation is guaranteed to find the exact isomorphism or certify its inexistence. This result is further extended to approximately isomorphic graphs, for which we develop an explicit bound on the amount of weight disagreement under which the relaxation is guaranteed to find the globally optimal approximate isomorphism. We also show that in many cases, the graph matching problem can be further harmlessly relaxed to a convex quadratic program with only n separable linear equality constraints, which is substantially more efficient than the standard relaxation involving n² equality and n² inequality constraints. Finally, we show that our results are still valid for unfriendly graphs if additional information in the form of seeds or attributes is allowed, with the latter satisfying an easy to verify spectral characteristic. 19. Association of MRI T1 relaxation time with neuropsychological test performance in manganese-exposed welders. Bowler, R M; Yeh, C-L; Adams, S W; Ward, E J; Ma, R E; Dharmadhikari, S; Snyder, S A; Zauber, S E; Wright, C W; Dydak, U 2017-06-03 This study examines the results of neuropsychological testing of 26 active welders and 17 similar controls and their relationship to welders' shortened MRI T1 relaxation time, indicative of increased brain manganese (Mn) accumulation. Welders were exposed to Mn for an average duration of 12.25 years to average levels of Mn in air of 0.11 ± 0.05 mg/m³. Welders scored significantly worse than controls on Fruit Naming and the Parallel Lines test of graphomotor tremor. Welders had shorter MRI T1 relaxation times than controls in the globus pallidus, substantia nigra, caudate nucleus, and the anterior prefrontal lobe. 63% of the variation in MRI T1 relaxation times was accounted for by exposure group. In welders, lower relaxation times in the caudate nucleus and substantia nigra were associated with lower neuropsychological test performance on tests of verbal fluency (Fruit Naming), verbal learning, memory, and perseveration (WHO-UCLA AVLT). Results indicate that verbal function may be one of the first cognitive domains affected by brain Mn deposition in welders as reflected by MRI T1 relaxation times. Copyright © 2017 Elsevier B.V. All rights reserved. 20. Zooplankton distribution and dynamics in a North Pacific Eddy of coastal origin: II. Mechanisms of eddy colonization by and retention of offshore species Mackas, D. L.; Tsurumi, M.; Galbraith, M. D.; Yelland, D. R. 2005-04-01 Mesoscale anticyclonic eddies form annually in late winter along the eastern margin of the subarctic North Pacific.
Eddies that originate off the southern tip of the Queen Charlotte Islands (near 52°N 132°W) are called 'Haida Eddies'. During the subsequent 1-3 years, they propagate westward into the Alaska Gyre. Enroute, the eddies are colonized by zooplankton originating from the central British Columbia continental shelf, the continental slope and along-slope boundary current, and the oceanic Alaska Gyre. Eddies also gradually lose kinetic energy, water properties, and biota to the surrounding ocean. In this paper, we analyze zooplankton samples from Haida eddies obtained in late winter, early summer and autumn of 2000, and in early summer and autumn of 2001, and compare the within-eddy zooplankton distributions, abundances, and community composition of the oceanic-origin species to observations from the continental margin and Alaska Gyre source regions. Most between-region comparisons were consistent with a hypothesis that the eddy zooplankton are a mixture intermediate in abundance and community composition between the BC continental margin and offshore Alaska Gyre source regions (although usually closer to the Alaska Gyre). However, about 30% of the comparisons showed within-eddy abundances higher than in either source region. This outcome cannot arise from mixing alone. Aggregation and retention appear to be linked to vertical distribution behavior: most of the successful taxa spend much of their time below the surface mixed layer. This minimizes their exposure to wash-out by three physical processes: Slow upwelling and surface divergence that accompanies weakening of the anticyclonic geostrophic currents. Rapid but intermittent flushing of the surface layer by Ekman transport during strong wind events. Exchange across the eddy margin/geostrophic streamlines caused by temporary displacement by wind-driven inertial currents. 1. The Antiproton Accumulator (AA) 1980-01-01 Section 06 - 08*) of the AA where the dispersion (and hence the horizontal beam size) is large. One can distinguish (left to right): A vacuum-tank, two bending magnets (BST06 and BST07 in blue) with a quadrupole (QDN07, in red) in between, another vacuum-tank, a wide quadrupole (QFW08) and a further tank . The tanks are covered with heating tape for bake-out. The tank left of BST06 contained the stack core pickup for stochastic cooling (see 7906193, 7906190, 8005051), the two other tanks served mainly as vacuum chambers in the region where the beam was large. Peter Zettwoch works on BST06. *) see: H. Koziol, Antiproton Accumulator Parameter List, PS/AA/Note 84-2 (1984) 2. Solids Accumulation Scouting Studies Duignan, M. R.; Steeper, T. J.; Steimke, J. L. 2012-09-26 The objective of Solids Accumulation activities was to perform scaled testing to understand the behavior of remaining solids in a Double Shell Tank (DST), specifically AW-105, at Hanford during multiple fill, mix, and transfer operations. It is important to know if fissionable materials can concentrate when waste is transferred from staging tanks prior to feeding waste treatment plants. Specifically, there is a concern that large, dense particles containing plutonium could accumulate in poorly mixed regions of a blend tank heel for tanks that employ mixing jet pumps. 
At the request of the DOE Hanford Tank Operations Contractor, Washington River Protection Solutions, the Engineering Development Laboratory of the Savannah River National Laboratory performed a scouting study in a 1/22-scale model of a waste staging tank to investigate this concern and to develop measurement techniques that could be applied in a more extensive study at a larger scale. Simulated waste tank solids: Gibbsite, Zirconia, Sand, and Stainless Steel, with stainless steel particles representing the heavier particles, e.g., plutonium, and supernatant were charged to the test tank and rotating liquid jets were used to mix most of the solids while the simulant was pumped out. Subsequently, the volume and shape of the mounds of residual solids and the spatial concentration profiles for the surrogate for heavier particles were measured. Several techniques were developed and equipment designed to accomplish the measurements needed and they included: 1. Magnetic particle separator to remove simulant stainless steel solids. A device was designed and built to capture these solids, which represent the heavier solids during a waste transfer from a staging tank. 2. Photographic equipment to determine the volume of the solids mounds. The mounds were photographed as they were exposed at different tank waste levels to develop a composite of topographical areas. 3. Laser rangefinders to determine the volume of 3. Determining confounding sensitivities in eddy current thin film measurements Gros, Ethan; Udpa, Lalita; Smith, James A.; Wachs, Katelyn 2017-02-01 Eddy current (EC) techniques are widely used in industry to measure the thickness of non-conductive films on a metal substrate. This is done by using a system whereby a coil carrying a high-frequency alternating current is used to create an alternating magnetic field at the surface of the instrument's probe. When the probe is brought near a conductive surface, the alternating magnetic field will induce ECs in the conductor. The substrate characteristics and the distance of the probe from the substrate (the coating thickness) affect the magnitude of the ECs. The induced currents load the probe coil affecting the terminal impedance of the coil. The measured probe impedance is related to the lift off between coil and conductor as well as conductivity of the test sample. For a known conductivity sample, the probe impedance can be converted into an equivalent film thickness value. The EC measurement can be confounded by a number of measurement parameters. It was the goal of this research to determine which physical properties of the measurement set-up and sample can adversely affect the thickness measurement. The eddy-current testing was performed using a commercially available, hand-held eddy-current probe (ETA3.3H spring-loaded eddy probe running at 8 MHz) that comes with a stand to hold the probe. The stand holds the probe and adjusts the probe on the z-axis to help position the probe in the correct area as well as make precise measurements. The signal from the probe was sent to a hand-held readout, where the results are recorded directly in terms of liftoff or film thickness. Understanding the effect of certain factors on the measurements of film thickness, will help to evaluate how accurate the ETA3.3H spring-loaded eddy probe was at measuring film thickness under varying experimental conditions. 
This research studied the effects of a number of factors such as i) conductivity, ii) edge effect, iii) surface finish of base material and iv) cable condition. 4. Eddy-Induced Ekman Pumping from Sea-Surface Temperature and Surface Current Effects Gaube, P.; Chelton, D. B.; O'Neill, L. W. 2011-12-01 Numerous past studies have discussed the biological importance of upwelling of nutrients into the interiors of nonlinear eddies. Such upwelling can occur during the transient stages of formation of cyclones from shoaling of the thermocline. In their mature stages, upwelling can occur from Ekman pumping driven by eddy-induced wind stress curl. Previous investigations of ocean-atmosphere interaction in regions of persistent sea-surface temperature (SST) frontal features have shown that the wind field is locally stronger over warm water and weaker over cold water. Spatial variability of the SST field thus results in a wind stress curl and an associated Ekman pumping in regions of crosswind temperature gradients. It can therefore be anticipated that any SST anomalies associated with eddies can generate Ekman pumping in the eddy interiors. Another mechanism for eddy-induced Ekman pumping is the curl of the stress on the sea surface that arises from the difference between the surface wind velocity and the surface ocean velocity. While SST-induced Ekman upwelling can occur over eddies of either polarity, surface current effects on Ekman upwelling occur only over anticyclonic eddies. The objective of this study is to determine the spatial structures and relative magnitudes of the two mechanisms for eddy-induced Ekman pumping within the interiors of mesoscale eddies. This is achieved by collocating satellite-based measurements of SST, surface winds and wind stress curl to the interiors of eddies identified and tracked with an automated procedure applied to the sea-surface height (SSH) fields in the Reference Series constructed by AVISO from the combined measurements by two simultaneously operating altimeters. It is shown that, on average, the wind stress curl from eddy-induced surface currents is largest at the eddy center, resulting in Ekman pumping velocities of order 10 cm day−1. While this surface current-induced Ekman pumping depends only weakly on the wind direction 5. Observing mesoscale eddy effects on mode-water subduction and transport in the North Pacific. Xu, Lixiao; Li, Peiliang; Xie, Shang-Ping; Liu, Qinyu; Liu, Cong; Gao, Wendian 2016-02-01 While modelling studies suggest that mesoscale eddies strengthen the subduction of mode waters, this eddy effect has never been observed in the field. Here we report results from a field campaign from March 2014 that captured the eddy effects on mode-water subduction south of the Kuroshio Extension east of Japan. The experiment deployed 17 Argo floats in an anticyclonic eddy (AC) with enhanced daily sampling. Analysis of over 3,000 hydrographic profiles following the AC reveals that potential vorticity and apparent oxygen utilization distributions are asymmetric outside the AC core, with enhanced subduction near the southeastern rim of the AC. There, the southward eddy flow advects newly ventilated mode water from the north into the main thermocline. Our results show that subduction by eddy lateral advection is comparable in magnitude to that by the mean flow--an effect that needs to be better represented in climate models.
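For the eddy-induced Ekman pumping discussed in entry 4 above, a rough estimate can be obtained from gridded wind stress with the common approximation w ≈ curl(τ)/(ρ0 f). The sketch below is only an illustration of that approximation on a regular latitude-longitude grid; the function name, constants and grid handling are assumptions, not part of the cited study (which works with collocated satellite fields and automated eddy tracking).

```python
import numpy as np

RHO0 = 1025.0       # sea-water density, kg m^-3
OMEGA = 7.2921e-5   # Earth rotation rate, s^-1
R_EARTH = 6.371e6   # Earth radius, m

def ekman_pumping(taux, tauy, lon, lat):
    """Approximate Ekman pumping w ~ curl_z(tau) / (rho0 * f).

    taux, tauy : wind stress components (N m^-2), shape (nlat, nlon)
    lon, lat   : 1-D coordinate vectors in degrees (away from the equator)
    Returns w in m s^-1, positive upward.
    """
    lat_r = np.deg2rad(lat)
    f = 2 * OMEGA * np.sin(lat_r)                      # Coriolis parameter
    # Grid spacing in metres (centred differences per grid index)
    dx = R_EARTH * np.cos(lat_r)[:, None] * np.deg2rad(np.gradient(lon))[None, :]
    dy = R_EARTH * np.deg2rad(np.gradient(lat))[:, None]
    # curl_z(tau) = d(tauy)/dx - d(taux)/dy
    dtauy_dx = np.gradient(tauy, axis=1) / dx
    dtaux_dy = np.gradient(taux, axis=0) / dy
    curl = dtauy_dx - dtaux_dy
    return curl / (RHO0 * f[:, None])

# 1 m s^-1 = 8.64e6 cm day^-1, so the O(10 cm/day) pumping quoted above
# corresponds to w of order 1e-6 m s^-1.
```

The surface-current contribution described in the same entry would be estimated the same way after recomputing the stress from the wind relative to the ocean surface velocity instead of the wind alone.

6.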
Non-destructive testing of composite materials used in military applications by eddy current thermography method Swiderski, Waldemar 2016-10-01 Eddy current thermography is a new NDT technique for the detection of cracks in electrically conductive materials. It combines the well-established inspection techniques of eddy current testing and thermography. The technique uses induced eddy currents to heat the sample being tested, and defect detection is based on changes in the induced eddy current flow revealed by thermal visualization captured by an infrared camera. The advantage of this method is that it exploits the high performance of eddy current testing and eliminates the known problem of the edge effect. Especially for components of complex geometry, this is an important factor that may outweigh the increased expense of the inspection set-up. The paper presents the possibility of applying the eddy current thermography method for detecting defects in ballistic covers made of carbon fiber reinforced composites used in the construction of military vehicles. 7. Eddy, drift wave and zonal flow dynamics in a linear magnetized plasma Arakawa, H.; Inagaki, S.; Sasaki, M.; Kosuga, Y.; Kobayashi, T.; Kasuya, N.; Nagashima, Y.; Yamada, T.; Lesur, M.; Fujisawa, A.; Itoh, K.; Itoh, S.-I. 2016-09-01 Turbulence and its structure formation are universal in neutral fluids and in plasmas. Turbulence annihilates global structures but can organize flows and eddies. The mutual interactions between flows and eddies give basic insight into non-equilibrium and nonlinear interaction in turbulence. In fusion plasma, clarifying structure formation by drift-wave turbulence, driven by density gradients in magnetized plasma, is an important issue. Here, a new mutual interaction among eddies, drift waves and flows in magnetized plasma is discovered. A two-dimensional solitary eddy, which is a perturbation with circumnavigating motion localized radially and azimuthally, is transiently organized in a drift wave-zonal flow (azimuthally symmetric band-like shear flows) system. The excitation of the eddy is synchronized with the zonal perturbation. The organization of the eddy has substantial impact on the acceleration of the zonal flow. 8. Evolution of the eddy field in the Arctic Ocean's Canada Basin, 2005-2015 Zhao, Mengnan; Timmermans, Mary-Louise; Cole, Sylvia; Krishfield, Richard; Toole, John 2016-08-01 The eddy field across the Arctic Ocean's Canada Basin is analyzed using Ice-Tethered Profiler (ITP) and moored measurements of temperature, salinity, and velocity spanning 2005 to 2015. ITPs encountered 243 eddies, 98% of which were anticyclones, with approximately 70% of these having anomalously cold cores. The spatially and temporally varying eddy field is analyzed accounting for sampling biases in the unevenly distributed ITP data and caveats in detection methods. The highest concentration of eddies was found in the western and southern portions of the basin, close to topographic margins and boundaries of the Beaufort Gyre. The number of lower halocline eddies approximately doubled from 2005-2012 to 2013-2014. The increased eddy density suggests more active baroclinic instability of the Beaufort Gyre that releases available potential energy to balance the wind energy input; this may stabilize the Gyre spin-up and associated freshwater increase.
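The depth range of the induced heating in the eddy current NDT entries above (nos. 6 and 16) is limited by the electromagnetic skin effect. As a hedged illustration, the sketch below evaluates the textbook skin depth δ = 1/√(π f μ σ); the conductivities and the 100 kHz excitation frequency are rough, assumed values and are not taken from the abstracts.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Standard skin depth delta = 1 / sqrt(pi * f * mu * sigma), in metres."""
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma)

# Illustrative values only:
print(skin_depth(1e5, 3.5e7))   # aluminium, sigma ~ 3.5e7 S/m -> ~0.27 mm
print(skin_depth(1e5, 1e4))     # CFRP along fibres, sigma ~ 1e4 S/m (rough) -> ~16 mm
```

The much lower conductivity of carbon fibre composites is what lets the induced currents reach far deeper than in metals, consistent with the applicability to CFRP and C/C-SiC noted above.

9.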
Subsurface circulation and mesoscale variability in the Algerian subbasin from altimeter-derived eddy trajectories Escudier, Romain; Mourre, Baptiste; Juza, Mélanie; Tintoré, Joaquín 2016-08-01 Algerian eddies are the strongest and largest propagating mesoscale structures in the Western Mediterranean Sea. They have a large influence on the mean circulation, water masses and biological processes. Over 20 years of satellite altimeter data have been analyzed to characterize the propagation of these eddies using automatic detection methods and cross-correlation analysis. We found that, on average, Algerian eddy trajectories form two subbasin scale anticlockwise gyres that coincide with the two Algerian gyres which were described in the literature as the barotropic circulation in the area. This result suggests that altimetry sea surface observations can provide information on subsurface currents and their variability through the study of the propagation of deep mesoscale eddies in semienclosed seas. The analysis of eddy sea level anomalies along the mean pathways reveals three preferred areas of formation. Eddies are usually formed at a specific time of the year in these areas, with a strong interannual variability over the last 20 years. 10. PROPAGATION OF LONG-LIVED ANTICYCLONIC ROSSBY EDDIES OVER AN ISOLATED TOPOGRAPHY AND THEIR MERGING 2001-01-01 In this paper, in a barotropic model the propagation of long-lived anticyclonic Gaussian eddies larger than the radius of deformation over a Gaussian-shaped topography and the merging of two anticyclonic eddies are investigated by solving the generalized Flierl-Yamagata equation. It is shown that, whether or not the basic flow is present, the isolated topography seems to encourage the amplification of an anticyclonic eddy and its southwest movement around the hill. In the absence of both the westward flow and the topography, two anticyclonic eddies of identical sizes and amplitudes can merge. However, including either the topography or the westward basic flow can prevent them from merging. In the presence of both, the eddies can merge, but this merging depends on whether the parameter condition is appropriate or not. Therefore, it can be concluded that the topographic forcing might be a possible mechanism for the merging of two anticyclonic eddies. 11. Design and array signal suggestion of array type pulsed eddy current probe for health monitoring of metal tubes Shin, Young Kil [Dept. of Electrical Engineering, Kunsan National University, Kunsan (Korea, Republic of)] 2015-10-15 An array type probe for monitoring metal tubes is proposed in this paper which utilizes the peak value and peak time of a pulsed eddy current (PEC) signal. The probe consists of an array of encircling coils along a tube, and the outside of the coils is shielded by ferrite to prevent source magnetic fields from directly affecting sensor signals, since it is the magnetic fields produced by eddy currents that reflect the condition of metal tubes. The positions of both exciter and sensor coils are consecutively moved automatically so that manual scanning is not necessary. At one position of the send-receive coils, the peak value and peak time are extracted from a sensor PEC signal, and these data are accumulated for all positions to form an array type peak value signal and an array type peak time signal. Numerical simulation was performed using the backward difference method in time and the finite element method for spatial analysis.
Simulation results showed that the peak value increases and the peak appears earlier as the defect depth or length increases. The proposed array signals are shown to be excellent in reflecting the defect location as well as variations of defect depth and length within the array probe. 12. The use of (double) relaxation oscillation SQUIDs as a sensor Duuren, van M.J.; Brons, G.C.S.; Kattouw, H.; Flokstra, J.; Rogalla, H. 1997-01-01 Relaxation Oscillation SQUIDs (ROSs) and Double Relaxation Oscillation SQUIDs (DROSs) are based on relaxation oscillations that are induced in hysteretic dc SQUIDs by an external L-R shunt. The relaxation frequency of a ROS varies with the applied flux Φ, whereas the output of a DROS is a dc voltage 13. The use of (double) relaxation oscillation SQUIDs as a sensor van Duuren, M.J.; Brons, G.C.S.; Kattouw, H.; Flokstra, Jakob; Rogalla, Horst 1997-01-01 Relaxation Oscillation SQUIDs (ROSs) and Double Relaxation Oscillation SQUIDs (DROSs) are based on relaxation oscillations that are induced in hysteretic dc SQUIDs by an external L-R shunt. The relaxation frequency of a ROS varies with the applied flux Φ, whereas the output of a DROS is a dc 14. A Stabilized Incompressible SPH Method by Relaxing the Density Invariance Condition Mitsuteru Asai 2012-01-01 Full Text Available A stabilized Incompressible Smoothed Particle Hydrodynamics (ISPH) method is proposed to simulate free surface flow problems. In the ISPH, pressure is evaluated by solving a pressure Poisson equation using a semi-implicit algorithm based on the projection method. Even if the pressure is evaluated implicitly, the unrealistic pressure fluctuations cannot be eliminated. Several improvements have been proposed to overcome this problem: one is the small-compressibility approach, and the other is the introduction of two kinds of pressure Poisson equations related to the velocity divergence-free and density invariance conditions, respectively. In this paper, a stabilized formulation, which was originally proposed in the framework of the Moving Particle Semi-implicit (MPS) method, is applied to ISPH in order to relax the density invariance condition. This formulation leads to a new pressure Poisson equation with a relaxation coefficient, which can be estimated by a preanalysis calculation. The efficiency of the proposed formulation is tested by a couple of numerical examples of the dam-breaking problem, and its effects are discussed by using several resolution models with different particle initial distances. Also, the effect of eddy viscosity is briefly discussed in this paper. 15. A Nonlinear Multi-Scale Interaction Model for Atmospheric Blocking: The Eddy-Blocking Matching Mechanism Luo, Dehai; Cha, Jing; Zhong, Linhao; Dai, Aiguo 2014-05-01 In this paper, a nonlinear multi-scale interaction (NMI) model is used to propose an eddy-blocking matching (EBM) mechanism to account for how synoptic eddies reinforce or suppress a blocking flow. It is shown that the spatial structure of the eddy vorticity forcing (EVF) arising from upstream synoptic eddies determines whether an incipient block can grow into a meandering blocking flow through its interaction with the transient synoptic eddies from the west. Under certain conditions, the EVF exhibits a low-frequency oscillation on timescales of 2-3 weeks. During the EVF phase with a negative-over-positive dipole structure, a blocking event can be resonantly excited through the transport of eddy energy into the incipient block by the EVF.
As the EVF changes into an opposite phase, the blocking decays. The NMI model produces life cycles of blocking events that resemble observations. Moreover, it is shown that the eddy north-south straining is a response of the eddies to a dipole- or Ω-type block. In our model, as in observations, two synoptic anticyclones (cyclones) can attract and merge with one another as the blocking intensifies, but only when the feedback of the blocking on the eddies is included. Thus, we attribute the eddy straining and associated vortex interaction to the feedback of the intensified blocking on synoptic eddies. The results illustrate the concomitant nature of the eddy deformation, whose role as a PV source for the blocking flow becomes important only during the mature stage of a block. Our EBM mechanism suggests that an incipient blocking flow is amplified (or suppressed) under certain conditions by the EVF coming from upstream of the blocking region. 16. Eddy heat and salt transports in the South China Sea and their seasonal modulations Chen, Gengxin; Gan, Jianping; Xie, Qiang; Chu, Xiaoqing; Wang, Dongxiao; Hou, Yijun 2012-05-01 This study describes characteristics of eddy (turbulent) heat and salt transports in the basin-scale circulation as well as in the embedded mesoscale eddies found in the South China Sea (SCS). We first showed the features of turbulent heat and salt transports in mesoscale eddies using sea level anomaly (SLA) data, in situ hydrographic data, and 375 Argo profiles. We found that the transports were horizontally variable due to asymmetric distributions of temperature and salinity anomalies and that they were vertically correlated with the thermocline and halocline depths in the eddies. An existing barrier layer caused the halocline and eddy salt transport to be relatively shallow. We then analyzed the transports in the basin-scale circulation using an eddy diffusivity method and the sea surface height data, the Argo profiles, and the climatological hydrographic data. We found that relatively large poleward eddy heat transports occurred to the east of Vietnam (EOV) in summer and to the west of the Luzon Islands (WOL) in winter, while a large equatorward heat transport was located to the west of the Luzon Strait (WLS) in winter. The eddy salt transports were mostly similar to the heat transports but in the equatorward direction due to the fact that the mean salinity in the upper layer in the SCS tended to decrease toward the equator. Using a 2½-layer reduced-gravity model, we conducted a baroclinic instability study and showed that the baroclinic instability was critical to the seasonal variation of eddy kinetic energy (EKE) and thus the eddy transports. EOV, WLS, and WOL were regions with strong baroclinic instability, and, thus, with intensified eddy transports in the SCS. The combined effects of vertical velocity shear, latitude, and stratification determined the intensity of the baroclinic instability, which intensified the eddy transports at EOV during summer and at WLS and WOL during winter. 17. Occurrence and characteristics of mesoscale eddies in the tropical northeast Atlantic Ocean F. Schütte 2015-12-01 Full Text Available Coherent mesoscale features (referred to here as eddies) in the tropical northeast Atlantic (between 12–22° N and 15–26° W) are examined and characterised. The eddies' surface signatures are investigated using 19 years of satellite-derived sea level anomaly (SLA) data.
Two automated detection methods are applied: the geometrical method based on closed streamlines around eddy cores, and the Okubo–Weiss method based on the relation between vorticity and strain. Both methods give similar results. Mean eddy surface signatures of SLA, sea surface temperature (SST) and salinity (SSS) are obtained from composites of all snapshots around identified eddy cores. Anticyclones/cyclones are associated with elevation/depression of SLA and enhanced/reduced SST and SSS patterns. However, about 20 % of all detected anticyclones show reduced SST and reduced SSS instead. These kinds of eddies are classified as anticyclonic mode-water eddies (ACMEs). About 146 ± 4 eddies per year are identified (52 % cyclones, 39 % anticyclones, 9 % ACMEs) with rather similar mean radii of about 56 ± 12 km. Based on concurrent in-situ temperature and salinity profile data (from Argo floats, shipboard and mooring data) inside the three eddy types, their distinct differences in vertical structure are determined. Most eddies are generated preferentially in boreal summer and along the West African coast at three distinct coastal headland regions, and carry South Atlantic Central Water that originates from the northward transport within the Mauretania coastal current system. Westward eddy propagation (on average about 3.00 ± 2.15 km d−1) is confined to distinct corridors with a small meridional deflection dependent on the eddy type (anticyclones – equatorward, cyclones – poleward, ACMEs – no deflection). Heat and salt fluxes out of the coastal region and across the Cap Verde Frontal Zone, which separates the shadow zone from the ventilated gyre, are calculated. 18. Mesoscale eddies in the South China Sea and their impact on temperature profiles WANG Guihua; SU Jilan; LI Rongfeng 2005-01-01 Some life history statistics of the mesoscale eddies of the South China Sea (SCS) derived from altimetry data will be further discussed according to their different formation periods. Data from a total of three ATLAS (autonomous temperature line acquisition system) mooring buoys will be analyzed to discuss the eddies' impact on temperature profiles. They identify that the intraseasonal variation of the SCS thermocline is partly controlled by mesoscale eddies. 19. Time scales of relaxation dynamics during transient conditions in two-phase flow: RELAXATION DYNAMICS Schlüter, Steffen [School of Chemical, Biological and Environmental Engineering, Oregon State University, Corvallis Oregon USA; Department Soil Physics, Helmholtz-Centre for Environmental Research-UFZ, Halle Germany]; Berg, Steffen [Shell Global Solutions International B.V., Rijswijk Netherlands]; Li, Tianyi [School of Chemical, Biological and Environmental Engineering, Oregon State University, Corvallis Oregon USA]; Vogel, Hans-Jörg [Department Soil Physics, Helmholtz-Centre for Environmental Research-UFZ, Halle Germany; Institut für Agrar- und Ernährungswissenschaften, Martin-Luther-Universität Halle-Wittenberg, Halle Germany]; Wildenschild, Dorthe [School of Chemical, Biological and Environmental Engineering, Oregon State University, Corvallis Oregon USA] 2017-06-01 The relaxation dynamics toward a hydrostatic equilibrium after a change in phase saturation in porous media is governed by fluid reconfiguration at the pore scale. Little is known whether a hydrostatic equilibrium in which all interfaces come to rest is ever reached and which microscopic processes govern the time scales of relaxation.
Here we apply fast synchrotron-based X-ray tomography (X-ray CT) to measure the slow relaxation dynamics of fluid interfaces in a glass bead pack after fast drainage of the sample. The relaxation of interfaces triggers internal redistribution of fluids, reduces the surface energy stored in the fluid interfaces, and relaxes the contact angle toward the equilibrium value while the fluid topology remains unchanged. The equilibration of capillary pressures occurs in two stages: (i) a quick relaxation within seconds in which most of the pressure drop that built up during drainage is dissipated, a process that is too fast to be captured with fast X-ray CT, and (ii) a slow relaxation with characteristic time scales of 1–4 h which manifests itself as a spontaneous imbibition process that is well described by the Washburn equation for capillary rise in porous media. The slow relaxation implies that a hydrostatic equilibrium is hardly ever attained in practice when conducting two-phase experiments in which a flux boundary condition is changed from flow to no-flow. Implications for experiments with pressure boundary conditions are discussed. 20. Anomalous Enthalpy Relaxation in Vitreous Silica Yuanzheng Yue 2015-08-01 Full Text Available It is a challenge to calorimetrically determine the glass transition temperature (Tg) of vitreous silica. Here we demonstrate that this challenge mainly arises from the extreme sensitivity of the Tg to the hydroxyl content in vitreous silica, but also from the irreversibility of its glass transition when repeating the calorimetric scans. It is known that the liquid fragility (i.e., the speed of the viscous slow-down of a supercooled liquid at its Tg during cooling) has impact on enthalpy relaxation in glass. Here we find that vitreous silica (as a strong system) exhibits striking anomalies in both glass transition and enthalpy relaxation compared to fragile oxide systems. The anomalous enthalpy relaxation of vitreous silica is discovered by performing hyperquenching-annealing-calorimetry experiments. We argue that strong systems like vitreous silica and vitreous germania relax in a structurally cooperative manner, whereas the fragile ones do so in a structurally independent fashion. We discuss the origin of the anomalous enthalpy relaxation in the HQ vitreous silica. 1. Motional Spin Relaxation in Large Electric Fields Schmid, Riccardo; Filippone, B W 2008-01-01 We discuss the precession of spin-polarized Ultra Cold Neutrons (UCN) and $^{3}\mathrm{He}$ atoms in uniform and static magnetic and electric fields and calculate the spin relaxation effects from motional $v\times E$ magnetic fields. Particle motion in an electric field creates a motional $v\times E$ magnetic field, which, when combined with collisions, produces variations of the total magnetic field and results in spin relaxation of neutron and $^{3}\mathrm{He}$ samples. The spin relaxation times $T_{1}$ (longitudinal) and $T_{2}$ (transverse) of spin-polarized UCN and $^{3}\mathrm{He}$ atoms are important considerations in a new search for the neutron Electric Dipole Moment at the SNS \emph{nEDM} experiment. We use a Monte Carlo approach to simulate the relaxation of spins due to the motional $v\times E$ field for UCN and for $^{3}\mathrm{He}$ atoms at temperatures below $600 \mathrm{mK}$. We find the relaxation times for the neutron due to the $v\times E$ effect to be long compared to the neutron lifetime, ...
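Entry 19 above reports that the slow stage of the relaxation behaves as spontaneous imbibition following the Washburn equation. A minimal sketch of that classical relation, L(t) = sqrt(γ r cosθ t / (2 μ)), is given below; the fluid properties, effective pore radius and contact angle are illustrative assumptions and not the experimental values.

```python
import numpy as np

def washburn_length(t, gamma, r_eff, theta_deg, mu):
    """Classical Washburn imbibition length L(t) = sqrt(gamma*r*cos(theta)*t / (2*mu)).

    t         : time, s (scalar or array)
    gamma     : interfacial tension, N/m
    r_eff     : effective capillary (pore throat) radius, m
    theta_deg : contact angle, degrees
    mu        : dynamic viscosity of the invading fluid, Pa s
    """
    theta = np.deg2rad(theta_deg)
    return np.sqrt(gamma * r_eff * np.cos(theta) * t / (2.0 * mu))

# Illustrative numbers only: water against air in ~50 micron pores,
# followed over the 1-4 h window reported above.
t = np.linspace(0.0, 4 * 3600.0, 200)
L = washburn_length(t, gamma=0.072, r_eff=25e-6, theta_deg=30.0, mu=1e-3)
```

The square-root dependence on time is the signature one would compare against the measured interface displacement during the slow stage.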
2. Doppler effect induced spin relaxation boom Zhao, Xinyu; Huang, Peihao; Hu, Xuedong 2016-03-01 We study an electron spin qubit confined in a moving quantum dot (QD), with our attention on both spin relaxation and its product, the emitted phonons. We find that the Doppler effect leads to several interesting phenomena. In particular, the spin relaxation rate peaks when the QD motion is in the transonic regime, which we term a spin relaxation boom in analogy to the classical sonic boom. This peak indicates that a moving spin qubit may have an even lower relaxation rate than a static qubit, pointing at the possibility of coherence-preserving transport for a spin qubit. We also find that the emitted phonons become strongly directional and narrow in their frequency range as the qubit reaches the supersonic regime, similar to Cherenkov radiation. In other words, fast moving excited spin qubits can act as a source of non-classical phonons. Compared to classical Cherenkov radiation, we show that quantum dot confinement produces a small but important correction to the Cherenkov angle. Taken together, these results have important implications for both spin-based quantum information processing and coherent phonon dynamics in semiconductor nanostructures. 3. On the energetics of the mean and eddy circulations in the lower stratosphere Oort, Abraham H. 2011-01-01 A hemispheric network of radiosonde stations is used in order to study the energetics of the lower stratosphere during the IGY period July 1957 through June 1958. For a hemispheric polar cap with 30 and 100 mb as top and bottom boundaries the balance equations of zonal and eddy kinetic energy, and zonal and eddy available potential energy are considered in detail. The eddies appear to build up the kinetic energy of the zonal flow at the expense of the eddy kinetic energy during all seasons. T... 4. Multifractal filtering method for extraction of ocean eddies from remotely sensed imagery GE Yong; DU Yunyan; CHENG Qiuming; LI Ce 2006-01-01 Traditional methods of extracting ocean eddy information from remotely sensed imagery mainly use edge detection techniques such as the Canny and Hough operators. However, due to the complexities of ocean eddies and of the imagery itself, it is sometimes difficult to successfully detect ocean eddies using these methods. A multifractal filtering technique is proposed for the extraction of ocean eddies and demonstrated using NASA MODIS, SeaWiFS and NOAA satellite data sets in typical areas, such as the ocean western boundary current. Results showed that the new method has a superior performance over the traditional methods. 5. Study of eddy current power loss in an RCS vacuum chamber XU Shou-Yan; WANG Sheng 2012-01-01 In a Rapid Cycling Synchrotron (RCS), power loss due to eddy currents in the metal vacuum chamber would cause heating of the vacuum chamber. It is important to study the effect for estimating eddy current induced power loss and temperature growth. Analytical formulas for eddy current power loss for various types of vacuum chambers are derived for dipole and quadrupole fields, respectively. By using the prototype dipole of CSNS/RCS, an experiment was done to test the analytical formula. The derived formulas were applied to calculating the eddy current power loss on some special structures of an RCS vacuum chamber.
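Entry 5 above derives analytical expressions for the eddy current power loss in RCS vacuum chambers. Those specific formulas are not reproduced here; as a hedged stand-in, the sketch below evaluates the standard thin-wall estimate for a circular chamber in a ramping transverse dipole field, P' = π σ d a³ (dB/dt)², with purely illustrative parameter values that are not the CSNS/RCS design numbers.

```python
import math

def dipole_eddy_loss_per_length(sigma, wall, radius, dBdt):
    """Thin-wall estimate of eddy current loss per unit chamber length, W/m.

    P' = pi * sigma * d * a**3 * (dB/dt)**2
    sigma  : wall conductivity, S/m
    wall   : wall thickness d, m (assumed much less than radius and skin depth)
    radius : chamber radius a, m
    dBdt   : ramp rate of the transverse dipole field, T/s
    """
    return math.pi * sigma * wall * radius**3 * dBdt**2

# Illustrative numbers: stainless steel (sigma ~ 1.4e6 S/m), 0.5 mm wall,
# 30 mm radius, 25 Hz sinusoidal ramp of 0.5 T amplitude
# -> peak dB/dt = 2*pi*25*0.5 ~ 78.5 T/s
print(dipole_eddy_loss_per_length(1.4e6, 0.5e-3, 0.03, 78.5))  # ~366 W/m at peak
# The cycle-averaged loss for a sinusoidal ramp is half the peak value.
```

6.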
A sub-surface eddy at inertial current layer in the Canada Basin, Arctic Ocean 2007-01-01 An Arctic Ocean eddy in the sub-surface layer is analyzed in this paper by use of temperature, salinity and current profile data obtained at an ice camp in the Canada Basin during the second Chinese Arctic Expedition in the summer of 2003. In the vertical temperature section, the eddy shows itself as an isolated cold water block at a depth of 60 m with a minimum temperature of -1.5℃, about 0.5℃ colder than the ambient water. Isopycnals in the eddy form a convex pattern, which indicates the eddy is anticyclonic. Although a maximum velocity near 0.4 m s⁻¹ occurs in the current records observed synchronously, the current pattern is far from that of a typical eddy. By further analysis, inertial frequency oscillations with amplitudes comparable with the eddy velocity are found in the sub-surface layer currents. After filtering out the inertial current and the mean current, an axisymmetric current pattern of an eddy with a maximum-velocity radius of 5 km is obtained. The analysis of the T-S characteristics of the eddy core water and its ambient waters supports the conclusion that the eddy was formed on the Chukchi Shelf and migrated northeastward into the northern Canada Basin. 7. An initial note on quasistationary, cold-core Lanyu eddies southeast off Taiwan Island JING Chunsheng; LI Li 2003-01-01 Drifting buoys, satellite altimetry and satellite-derived sea surface thermal images are used to identify the existence of a large cold-core, cyclonic Kuroshio frontal eddy between Hengchun Peninsula and Lanyu, southeast off Taiwan Island, around March 1996. The cold eddy accompanies an offshore meander of the Kuroshio near Lanyu, about 70 km and 100 km in horizontal zonal and meridional scales, respectively. The cold eddy is different from normal Kuroshio frontal eddies in that it persisted for about 2 months near Lanyu. Supporting evidence suggests that the Kuroshio intruded into the South China Sea (SCS, hereafter) forming a loop-like structure during the persisting period of the cold eddy and that similar eddies occur occasionally in the same location. Compared with the corresponding studies in the Gulf of Mexico, it is suggested that Lanyu cold eddies are SCS analogues of the Tortugas eddies found in the southern Straits of Florida. Overshooting of the meandering Kuroshio when it leaves the SCS and effects from conservation of potential vorticity are the possible mechanisms of eddy genesis. 8. Study of the influence of particles on turbulence with the help of direct and large eddy simulations of gas-solid two-phase flows Boivin, M. 1996-12-31 An investigation of dilute dispersed turbulent two-way coupled two-phase flows has been undertaken with the help of Direct Numerical Simulations (DNS) on stationary-forced homogeneous isotropic turbulence. The particle relaxation times range from the Kolmogorov to the Eulerian time scales and the load goes up to 1. The analysis is made within the Eulerian-model framework, enhanced by the National Hydraulics Laboratory Lagrangian approach, which is extended here to include inverse coupling and Reynolds effects. Particles are found, on average, to dissipate turbulence energy. The spectra of the fluid-particle exchange energy rate show that small particles drag the fluid at high wavenumbers, which explains the observed relative increase of small scale energy. A spectral analysis points to the transfer of fluid-particle covariance by fluid turbulence as the responsible mechanism.
Regarding the modeling, the Reynolds dependency and the load contribution are found crucial for good predictions of the dispersed phase moments. A study for practical applications with Large Eddy Simulations (LES) has yielded: LES can be used for two-way coupled two-phase flows provided that a dynamic mixed sub-grid scale model is adopted and the particle relaxation time is larger than the cutoff filter one; the inverse coupling should depend more on the position of this relaxation time with respect to the Eulerian one than to the Kolmogorov one. (author) 67 refs. 9. The Antiproton Accumulator (AA) 1980-01-01 A section of the AA where the dispersion (and hence the horizontal beam size) is large. One can distinguish (left to right): a large vacuum tank, a quadrupole (QDN09*), a bending magnet (BST08), another vacuum tank, a wide quadrupole (QFW08) and (in the background) a further bending magnet (BST08). The tanks are covered with heating tape for bake-out. The tank left of QDN09 contained the kickers for stochastic pre-cooling (see 790621, 8002234, 8002637X); the other one served mainly as a vacuum chamber in the region where the beam was large. Peter Zettwoch works on QFW08. * see: H. Koziol, Antiproton Accumulator Parameter List, PS/AA/Note 84-2 (1984) See under 7911303, 7911597X, 8004261 and 8202324. For photos of the AA in different phases of completion (between 1979 and 1982) see: 7911303, 7911597X, 8004261, 8004608X, 8005563X, 8005565X, 8006716X, 8006722X, 8010939X, 8010941X, 8202324, 8202658X, 8203628X. 10. ITER helium ash accumulation Hogan, J.T.; Hillis, D.L.; Galambos, J.; Uckan, N.A. (Oak Ridge National Lab., TN (USA)); Dippel, K.H.; Finken, K.H. (Forschungszentrum Juelich GmbH (Germany, F.R.). Inst. fuer Plasmaphysik); Hulse, R.A.; Budny, R.V. (Princeton Univ., NJ (USA). Plasma Physics Lab.) 1990-01-01 Many studies have shown the importance of the ratio υ_He/υ_E in determining the level of He ash accumulation in future reactor systems. Results of the first tokamak He removal experiments have been analysed, and a first estimate of the ratio υ_He/υ_E to be expected for future reactor systems has been made. The experiments were carried out for neutral beam heated plasmas in the TEXTOR tokamak, at KFA/Julich. Helium was injected both as a short puff and continuously, and subsequently extracted with the Advanced Limiter Test-II pump limiter. The rate at which the He density decays has been determined with absolutely calibrated charge exchange spectroscopy, and compared with theoretical models, using the Multiple Impurity Species Transport (MIST) code. An analysis of energy confinement has been made with the PPPL TRANSP code, to distinguish beam from thermal confinement, especially for low density cases. The ALT-II pump limiter system is found to exhaust the He with a maximum exhaust efficiency (8 pumps) of approximately 8%. We find 1 < υ_He/υ_E < 3.3 for the database of cases analysed to date. Analysis with the ITER TETRA systems code shows that these values would be adequate to achieve the required He concentration with the present ITER divertor He extraction system. 11. Reduced-Complexity Semidefinite Relaxations of Optimal Power Flow Problems Andersen, Martin Skovgaard; Hansson, Anders; Vandenberghe, Lieven 2014-01-01 We propose a new method for generating semidefinite relaxations of optimal power flow problems.
The method is based on chordal conversion techniques: by dropping some equality constraints in the conversion, we obtain semidefinite relaxations that are computationally cheaper, but potentially weaker......, than the standard semidefinite relaxation. Our numerical results show that the new relaxations often produce the same results as the standard semidefinite relaxation, but at a lower computational cost.... 12. Large Eddy Simulation of Inertial Particle Preferential Dispersion in a Turbulent Flow over a Backward-Facing Step Bing Wang 2013-01-01 Full Text Available Large eddy simulation of inertial particle dispersion in a turbulent flow over a backward-facing step was performed. The numerical results of both instantaneous particle dispersion and two-phase velocity statistics were in good agreement with the experimental measurements. The analysis of preferential dispersion of inertial particles was then presented by a wavelets analysis method for decomposing the two-phase turbulence signal obtained by numerical simulations, showing that the inertial particle concentration is separation from the Gaussian random distribution with very strong intermittencies. The statistical PDF of vorticity seen by particles shows that the inertial particles tend to accumulate in low vorticity regions where ∇u: ∇u is larger than zero. The concentration distribution of particle preferential dispersion preserves the historical effects. The research conclusions are useful for further understanding the two-phase turbulence physics and establishing accurate engineering prediction models of particle dispersion. 13. Mozart versus new age music: relaxation states, stress, and ABC relaxation theory. Smith, Jonathan C; Joyce, Carol A 2004-01-01 Smith's (2001) Attentional Behavioral Cognitive (ABC) relaxation theory proposes that all approaches to relaxation (including music) have the potential for evoking one or more of 15 factor-analytically derived relaxation states, or "R-States" (Sleepiness, Disengagement, Rested / Refreshed, Energized, Physical Relaxation, At Ease/Peace, Joy, Mental Quiet, Childlike Innocence, Thankfulness and Love, Mystery, Awe and Wonder, Prayerfulness, Timeless/Boundless/Infinite, and Aware). The present study investigated R-States and stress symptom-patterns associated with listening to Mozart versus New Age music. Students (N = 63) were divided into three relaxation groups based on previously determined preferences. Fourteen listened to a 28-minute tape recording of Mozart's Eine Kleine Nachtmusik and 14 listened to a 28-minute tape of Steven Halpern's New Age Serenity Suite. Others (n = 35) did not want music and instead chose a set of popular recreational magazines. Participants engaged in their relaxation activity at home for three consecutive days for 28 minutes a session. Before and after each session, each person completed the Smith Relaxation States Inventory (Smith, 2001), a comprehensive questionnaire tapping 15 R-States as well as the stress states of somatic stress, worry, and negative emotion. Results revealed no differences at Session 1. At Session 2, those who listened to Mozart reported higher levels of At Ease/Peace and lower levels of Negative Emotion. Pronounced differences emerged at Session 3. Mozart listeners uniquely reported substantially higher levels of Mental Quiet, Awe and Wonder, and Mystery. Mozart listeners reported higher levels, and New Age listeners slightly elevated levels, of At Ease/Peace and Rested/Refreshed. 
Both Mozart and New Age listeners reported higher levels of Thankfulness and Love. In summary, those who listened to Mozart's Eine Kleine Nachtmusik reported more psychological relaxation and less stress than either those who listened to 14. Analysis of eddy currents in the two-half isolated vacuum vessel of an iron core tokamak Liu, L.J., E-mail: liulongjian001@yeah.net [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Rao, B.; Zhang, M.; Yu, K.X.; Zhuang, G. [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China) 2015-12-15 Eddy currents in the vacuum vessel can cause many problems in plasma diagnostics and control, the fast analysis of eddy current is very important. In this paper, the characteristic of eddy currents in the thin shell of a two-half isolated vacuum vessel and the iron core's effect on eddy currents are analyzed, then an analytical method is used to calculate toroidal eddy currents in the vacuum vessel. Using this method, the eddy currents can be calculated rapidly which will benefit more accurate plasma reconstruction and real-time control. The calculated results by this method agree well with finite element method simulations based on J-TEXT configuration. 15. Asymptotic representation of relaxation oscillations in lasers Grigorieva, Elena V 2017-01-01 In this book we analyze relaxation oscillations in models of lasers with nonlinear elements controlling light dynamics. The models are based on rate equations taking into account periodic modulation of parameters, optoelectronic delayed feedback, mutual coupling between lasers, intermodal interaction and other factors. With the aim to study relaxation oscillations we present the special asymptotic method of integration for ordinary differential equations and differential-difference equations. As a result, they are reduced to discrete maps. Analyzing the maps we describe analytically such nonlinear phenomena in lasers as multistability of large-amplitude relaxation cycles, bifurcations of cycles, controlled switching of regimes, phase synchronization in an ensemble of coupled systems and others. The book can be fruitful for students and technicians in nonlinear laser dynamics and in differential equations. 16. On topological relaxations of chromatic conjectures Simonyi, Gábor 2010-01-01 There are several famous unsolved conjectures about the chromatic number that were relaxed and already proven to hold for the fractional chromatic number. We discuss similar relaxations for the topological lower bound(s) of the chromatic number. In particular, we prove that such a relaxed version is true for the Behzad-Vizing conjecture and also discuss the conjectures of Hedetniemi and of Hadwiger from this point of view. For the latter, a similar statement was already proven in an earlier paper of the first author with G. Tardos, our main concern here is that the so-called odd Hadwiger conjecture looks much more difficult in this respect. We prove that the statement of the odd Hadwiger conjecture holds for large enough Kneser graphs and Schrijver graphs of any fixed chromatic number. 17. 
Vibrational and Rotational Energy Relaxation in Liquids Petersen, Jakob the intramolecular dynamics during photodissociation is investigated. The apparent agreement with quantum mechanical calculations is shown to be in contrast to the applicability of the individual approximations used in deriving the model from a quantum mechanical treatment. In the spirit of the Bersohn-Zewail model......, the vibrational energy relaxation of I2 subsequent to photodissociation and recombination in CCl4 is studied using classical Molecular Dynamics simulations. The vibrational relaxation times and the time-dependent I-I pair distribution function are compared to new experimental results, and a qualitative agreement...... is found in both cases. Furthermore, the rotational energy relaxation of H2O in liquid water is studied via simulations and a power-and-work analysis. The mechanism of the energy transfer from the rotationally excited H2O molecule to its water neighbors is elucidated, i.e. the energy-accepting degrees... 18. Relaxation and Diffusion in Complex Systems Ngai, K L 2011-01-01 Relaxation and Diffusion in Complex Systems comprehensively presents a variety of experimental evidences of universal relaxation and diffusion properties in complex materials and systems. The materials discussed include liquids, glasses, colloids, polymers, rubbers, plastic crystals and aqueous mixtures, as well as carbohydrates, biomolecules, bioprotectants and pharmaceuticals. Due to the abundance of experimental data, emphasis is placed on glass-formers and the glass transition problem, a still unsolved problem in condensed matter physics and chemistry. The evidence for universal properties of relaxation and diffusion dynamics suggests that a fundamental physical law is at work. The origin of the universal properties is traced to the many-body effects of the interaction, rigorous theory of which does not exist at the present time. However, using solutions of simplified models as guides, key quantities have been identified and predictions of the universal properties generated. These predictions from Ngai’... 19. Substrate stress relaxation regulates cell spreading Chaudhuri, Ovijit; Gu, Luo; Darnell, Max; Klumpers, Darinka; Bencherif, Sidi A.; Weaver, James C.; Huebsch, Nathaniel; Mooney, David J. 2015-02-01 Studies of cellular mechanotransduction have converged upon the idea that cells sense extracellular matrix (ECM) elasticity by gauging resistance to the traction forces they exert on the ECM. However, these studies typically utilize purely elastic materials as substrates, whereas physiological ECMs are viscoelastic, and exhibit stress relaxation, so that cellular traction forces exerted by cells remodel the ECM. Here we investigate the influence of ECM stress relaxation on cell behaviour through computational modelling and cellular experiments. Surprisingly, both our computational model and experiments find that spreading for cells cultured on soft substrates that exhibit stress relaxation is greater than cells spreading on elastic substrates of the same modulus, but similar to that of cells spreading on stiffer elastic substrates. These findings challenge the current view of how cells sense and respond to the ECM. 20. Nonlinear Model of non-Debye Relaxation Zon, Boris A 2010-01-01 We present a simple nonlinear relaxation equation which contains the Debye equation as a particular case. The suggested relaxation equation results in power-law decay of fluctuations. 
This equation contains a parameter defining the frequency dependence of the dielectric permittivity similarly to the well-known one-parameter phenomenological equations of Cole-Cole, Davidson-Cole and Kohlrausch-Williams-Watts. Unlike these models, the obtained dielectric permittivity (i) obeys the Kramers-Kronig relation; (ii) has proper behaviour at large frequency; (iii) its imaginary part, conductivity, shows a power-law frequency dependence σ ~ ω^n with n < 1; the case n > 1 is also observed in several experiments. The nonlinear equation proposed may be useful in various fields of relaxation theory. 1. Excited-state relaxation of some aminoquinolines 2006-01-01 Full Text Available The absorption and fluorescence spectra, fluorescence quantum yields and lifetimes, and fluorescence rate constants (k_f) of 2-amino-3-(2′-benzoxazolyl)quinoline (I), 2-amino-3-(2′-benzothiazolyl)quinoline (II), 2-amino-3-(2′-methoxybenzothiazolyl)quinoline (III), and 2-amino-3-(2′-benzothiazolyl)benzoquinoline (IV) at different temperatures have been measured. The short-wavelength shift of the fluorescence spectra of the compounds studied (23–49 nm) in ethanol as the temperature decreases (the solvent viscosity increases) points out that an excited-state relaxation process takes place. The rate of this process depends essentially on the solvent viscosity, but not the solvent polarity. The essential increase of the fluorescence rate constant k_f (up to about 7 times) as the solvent viscosity increases proves the existence of excited-state structural relaxation consisting in the mutual internal rotation of molecular fragments of the aminoquinolines studied, followed by solvent orientational relaxation. 2. Improved memristor-based relaxation oscillator 2013-09-01 This paper presents an improved memristor-based relaxation oscillator which offers a higher frequency and wider tuning range than the existing reactance-less oscillators. It also has the capability of operating on two positive supplies or, alternatively, a positive and a negative supply. Furthermore, it has the advantage that it can be fully integrated on-chip, providing an area-efficient solution. The oscillation concept is discussed, and then a complete mathematical analysis of the proposed oscillator is introduced. Furthermore, the power consumption of the new relaxation circuit is discussed and validated by PSPICE circuit simulations, showing an excellent agreement. MATLAB results are also introduced to demonstrate the resistance range and the corresponding frequency range which can be obtained from the proposed relaxation oscillator. © 2013 Elsevier Ltd. 3. Interactive Image Enhancement by Fuzzy Relaxation Shang-Ming Zhou; John Q. Can; Li-Da Xu; Robert John 2007-01-01 In this paper, an interactive image enhancement (IIE) technique based on fuzzy relaxation is presented, which allows the user to select different intensity levels for enhancement and to intermit the enhancement process according to his/her preference in applications. First, based on an analysis of the convergence of a fuzzy relaxation algorithm for image contrast enhancement, an improved version of this algorithm, which is called FuzzIIE Method 1, is suggested by deriving a relationship between the convergence regions and the parameters in the transformations defined in the algorithm. Then a method called FuzzIIE Method 2 is introduced by using a different fuzzy relaxation function, in which there is no need to re-select the parameter values for interactive image enhancement.
Experimental results are presented demonstrating the enhancement capabilities of the proposed methods under different conditions. 4. Short-term impacts of enhanced Greenland freshwater fluxes in an eddy-permitting ocean model R. Marsh 2009-11-01 Full Text Available In a sensitivity experiment, an eddy-permitting ocean general circulation model is forced with freshwater fluxes from the Greenland Ice Sheet, averaged for the period 1991–2000. The fluxes are obtained with a mass balance model for the ice sheet, forced with the ERA-40 reanalysis dataset. The freshwater flux is distributed around Greenland as an additional term in prescribed runoff, representing seasonal melting of the ice sheet and a fixed year-round iceberg calving flux, for 8.5 model years. The impacts on regional hydrography and circulation are investigated by comparing the sensitivity experiment to a control experiment, without Greenland fluxes. By the end of the sensitivity experiment, the majority of additional fresh water has accumulated in Baffin Bay, and only a small fraction has reached the interior of the Labrador Sea, where winter mixed layer depth is sensitive to small changes in salinity. As a consequence, the impact on large-scale circulation is very slight. An indirect impact of strong freshening off the west coast of Greenland is a small anti-cyclonic circulation around Greenland which opposes the wind-driven cyclonic circulation and reduces net southward flow through the Canadian Archipelago by ~10%. Implications for the post-2000 acceleration of Greenland mass loss are discussed. 5. Effect of stable stratification on dispersion within urban street canyons: A large-eddy simulation Li, Xian-Xiang; Britter, Rex; Norford, Leslie K. 2016-11-01 This study employs a validated large-eddy simulation (LES) code with high tempo-spatial resolution to investigate the effect of a stably stratified roughness sublayer (RSL) on scalar transport within an urban street canyon. The major effect of stable stratification on the flow and turbulence inside the street canyon is that the flow slows down in both streamwise and vertical directions, a stagnant area near the street level emerges, and the vertical transport of momentum is weakened. Consequently, the transfer of heat between the street canyon and overlying atmosphere also gets weaker. The pollutant emitted from the street level 'pools' within the lower street canyon, and more pollutant accumulates within the street canyon with increasing stability. Under stable stratification, the dominant mechanism for pollutant transport within the street canyon has changed from ejections (flow carries high-concentration pollutant upward) to unorganized motions (flow carries high-concentration pollutant downward), which is responsible for the much lower dispersion efficiency under stable stratifications. 6. Large-eddy simulation of heavy particle dispersion in wall-bounded turbulent flows Salvetti, M. V. 2015-03-01 Capabilities and accuracy issues in Lagrangian tracking of heavy particles in velocity fields obtained from large-eddy simulations (LES) of wall-bounded turbulent flows are reviewed. In particular, it is shown that, if no subgrid scale (SGS) model is added to the particle motion equations, particle preferential concentration and near-wall accumulation are significantly underestimated. Results obtained with SGS modeling for the particle motion equations based on approximate deconvolution are briefly recalled. 
Then, the error purely due to filtering in particle tracking in LES flow fields is singled out and analyzed. The statistical properties of filtering errors are characterized in turbulent channel flow both from an Eulerian and a Lagrangian viewpoint. Implications for stochastic SGS modeling in particle motion equations are briefly outlined. The author is retracting this article due to a significant overlap in content with three previously published papers [Phys. Fluids 20, 040603 (2008); Phys. Fluids 24, 045103 (2012); Acta Mech. 201(1-4), 277 (2008)], which constitutes dual publication. The author would like to apologize for any inconvenience this has caused. The article is retracted from the scientific record with effect from 12 January 2017. 7. Large Eddy Simulation for Plunge Breaker and Sediment Suspension BAI Yuchuan(白玉川); C.O. Ng 2002-01-01 Breaking waves are a powerful agent for generating turbulence that plays an important role in many fluid dynamical processes, particularly in the mixing of materials. Breaking waves can dislodge sediment and throw it into suspension, which will then be carried by wave-induced steady current and tidal flow. In order to investigate sediment suspension by breaking waves, a numerical model based on large-eddy simulation (LES) is developed. This numerical model can be used to simulate wave breaking and sediment suspension. The model consists of a free-surface model using the surface marker method combined with a two-dimensional model that solves the flow equations. The turbulence and the turbulent diffusion are described by a large-eddy-simulation (LES) method where the large turbulence features are simulated by solving the flow equations, and a subgrid model represents the small-scale turbulence that is not resolved by the flow model. A dynamic eddy viscosity subgrid scale stress model has been used for the present simulation. By applying this model to Stokes' wave breaking problem in the surf zone, we find that the model results agree very well with experimental data. By using this model to simulate the breaking process of a periodic wave, it can be found that the model can reproduce the complicated flow phenomena, especially the plunging breaker. It reflects the dynamic structures of the roller or vortex in the plunging breaker, and when the wave breaks, many strong vortex structures will be produced in the inner surf zone, where the concentration of suspended sediment can thereby become relatively high. 8. Integrated optical fiber lattice accumulators 1997-01-01 Approved for public release; distribution is unlimited. Sigma-delta modulators track a signal by accumulating the error between an input signal and a feedback signal. The accumulated energy is amplitude analyzed by a comparator. The comparator output signal is fed back and subtracted from the input signal. This thesis is primarily concerned with designing accumulators for inclusion in an optical sigma-delta modulator. Fiber lattice structures with optical amplifiers are used to perform the... 9. On integrating large eddy simulation and laboratory turbulent flow experiments. Grinstein, Fernando F 2009-07-28 Critical issues involved in large eddy simulation (LES) experiments relate to the treatment of unresolved subgrid scale flow features and required initial and boundary condition supergrid scale modelling. The inherently intrusive nature of both LES and laboratory experiments is noted in this context.
Flow characterization issues become very challenging ones in validation and computational laboratory studies, where potential sources of discrepancies between predictions and measurements need to be clearly evaluated and controlled. A special focus of the discussion is devoted to turbulent initial condition issues. 10. Efficient Large Eddy Simulation for the Discontinuous Galerkin Method Creech, Angus; Maddison, James; Percival, James; Bruce, Tom 2016-01-01 In this paper we present a new technique for efficiently implementing Large Eddy Simulation with the Discontinuous Galerkin method on unstructured meshes. In particular, we will focus upon the approach to overcome the computational complexity that the additional degrees of freedom in Discontinuous Galerkin methods entail. The turbulence algorithms have been implemented within Fluidity, an open-source computational fluid dynamics solver. The model is tested with the well-known backward-facing step problem, and is shown to concur with published results. 11. Eddy diffusivities of inertial particles in random Gaussian flows Boi, Simone; Muratore-Ginanneschi, Paolo 2016-01-01 We investigate the large-scale transport of inertial particles. We derive explicit analytic expressions for the eddy diffusivities for generic Stokes times. These latter expressions are exact for any shear flow while they correspond to the leading contribution either in the deviation from the shear flow geometry or in the Péclet number of general random Gaussian velocity fields. Our explicit expressions allow us to investigate the role of inertia for such a class of flows and to make exact links with the analogous transport problem for tracer particles. 12. The Role of Eddy-Transport in the Thermohaline Circulation Dr. Paola Cessi 2011-11-17 Several research themes were developed during the course of this project. (1) Low-frequency oceanic variability; (2) The role of eddies in the Antarctic Circumpolar Current (ACC) region; (3) Deep stratification and the overturning circulation. The key findings were as follows: (1) The stratification below the main thermocline (at about 500 m) is determined in the circumpolar region and then communicated to the enclosed portions of the oceans through the overturning circulation. (2) An Atlantic pole-to-pole overturning circulation can be maintained with very small interior mixing as long as surface buoyancy values are shared between the northern North Atlantic and the ACC region. 13. Large-eddy simulation of trans- and supercritical injection Müller, H.; Niedermeier, C. A.; Jarczyk, M.; Pfitzner, M.; Hickel, S.; Adams, N. A. 2016-07-01 In a joint effort to develop a robust numerical tool for the simulation of injection, mixing, and combustion in liquid rocket engines at high pressure, a real-gas thermodynamics model has been implemented into two computational fluid dynamics (CFD) codes, the density-based INCA and a pressure-based version of OpenFOAM. As a part of the validation process, both codes have been used to perform large-eddy simulations (LES) of trans- and supercritical nitrogen injection. Despite the different code architecture and the different subgrid scale turbulence modeling strategy, both codes yield similar results. The agreement with the available experimental data is good. 14.
Eddy current pulsed phase thermography for subsurface defect quantitatively evaluation He, Yunze; Pan, Mengchun; Tian, GuiYun; Chen, Dixiang; Tang, Ying; Zhang, Hong 2013-09-01 This Letter verified eddy current pulse phase thermography through numerical and experimental studies. During the numerical studies, two characteristic features, blind frequency and min phase, were extracted from differential phase spectra, and their monotonic relationships with defects' depth under different heating time were compared. According to the numerical studies, 100 ms was employed as heating time during the improved experimental studies. The experimental results agreed with the numerical results. Based on their linear relationship with defects' depths, both features can be used to measure the defect's depth. 15. Thickness Evaluation of Aluminium Plate Using Pulsed Eddy Current Technique 2013-10-01 This paper describes a pulsed eddy current (PEC) based non-destructive testing system used for detection of thickness variation in aluminium plate. A giant magneto-resistive sensor has been used instead of pick up coil for detecting resultant magnetic field. The PEC response signals obtained from 1 to 5 mm thickness change in aluminium plate were investigated. Two time domain features, namely peak value and time to peak, of PEC response were used for extracting information about thickness variation in aluminium plate. The variation of peak value and time to peak with thickness was compared. A program was developed to display the thickness variation of the tested sample. 16. Eddy current system for inspection of train hollow axles Chady, Tomasz; Psuj, Grzegorz; Sikora, Ryszard; Kowalczyk, Jacek; Spychalski, Ireneusz [Department of Electrical and Computer Engineering, Faculty of Electrical Engineering, West Pomeranian University of Technology, Szczecin (Poland) 2014-02-18 The structural integrity of wheelsets used in rolling stock is of great importance to the safety. In this paper, electromagnetic system with an eddy current transducer suitable for the inspection of hollow axles have been presented. The transducer was developed to detect surface braking defects having depth not smaller than 0.5 mm. Ultrasound technique can be utilized to inspect the whole axle, but it is not sufficiently sensitive to shallow defects located close to the surface. Therefore, the electromagnetic technique is proposed to detect surface breaking cracks that cannot be detected by ultrasonic technique. 17. Relaxation Dynamics of Semiflexible Fractal Macromolecules Jonas Mielke 2016-07-01 Full Text Available We study the dynamics of semiflexible hyperbranched macromolecules having only dendritic units and no linear spacers, while the structure of these macromolecules is modeled through T-fractals. We construct a full set of eigenmodes of the dynamical matrix, which couples the set of Langevin equations. Based on the ensuing relaxation spectra, we analyze the mechanical relaxation moduli. The fractal character of the macromolecules reveals itself in the storage and loss moduli in the intermediate region of frequencies through scaling, whereas at higher frequencies, we observe the locally-dendritic structure that is more pronounced for higher stiffness. 18. 
Dynamics of cosmological relaxation after reheating Choi, Kiwoon; Sekiguchi, Toyokazu 2016-01-01 We examine if the cosmological relaxation mechanism, which was proposed recently as a new solution to the hierarchy problem, can be compatible with high reheating temperature well above the weak scale. As the barrier potential disappears at high temperature, the relaxion rolls down further after the reheating, which may ruin the successful implementation of the relaxation mechanism. It is noted that if the relaxion is coupled to a dark gauge boson, the new frictional force arising from dark gauge boson production can efficiently slow down the relaxion motion, which allows the relaxion to be stabilized after the electroweak phase transition for a wide range of model parameters, while satisfying the known observational constraints. 19. Synthetic aperture radar autofocus via semidefinite relaxation. Liu, Kuang-Hung; Wiesel, Ami; Munson, David C 2013-06-01 The autofocus problem in synthetic aperture radar imaging amounts to estimating unknown phase errors caused by unknown platform or target motion. At the heart of three state-of-the-art autofocus algorithms, namely, phase gradient autofocus, multichannel autofocus (MCA), and Fourier-domain multichannel autofocus (FMCA), is the solution of a constant modulus quadratic program (CMQP). Currently, these algorithms solve a CMQP by using an eigenvalue relaxation approach. We propose an alternative relaxation approach based on semidefinite programming, which has recently attracted considerable attention in other signal processing problems. Experimental results show that our proposed methods provide promising performance improvements for MCA and FMCA through an increase in computational complexity. 20. Depicting Vortex Stretching and Vortex Relaxing Mechanisms 符松; 李启兵; 王明皓 2003-01-01 Different from many existing studies on the paranetrization of vortices, we investigate the effectiveness of two new parameters for identifying the vortex stretching and vortex relaxing mechanisms. These parameters are invariants and identify three-dimensional flow structures only, i.e. they diminish in two-dimensional flows. This is also unlike the existing vortex identification approaches which deliver information in two-dimensional flows. The present proposals have been successfully applied to identify the stretching and relaxing vortices in compressible mixing layers and natural convection flows.
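Several of the abstracts above come down to extracting a characteristic relaxation time or a power-law exponent from decay data: the capillary-pressure relaxation described by the Washburn equation in the first item, the spin relaxation times $T_{1}$ and $T_{2}$, the enthalpy relaxation of hyperquenched glasses. As a generic, hedged illustration only (it is not taken from any of the cited papers; the function names `exp_relax` and `washburn`, the parameter values, and the synthetic data below are all invented for the sketch), here is a minimal Python example of that kind of fit:

```python
# Illustrative sketch: extract a characteristic relaxation time from decay data,
# and check a Washburn-type sqrt(t) law for spontaneous imbibition.
# All data here are synthetic; no values come from the papers cited above.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# --- 1. Exponential relaxation: p(t) = p_eq + dp * exp(-t / tau) -------------
def exp_relax(t, p_eq, dp, tau):
    return p_eq + dp * np.exp(-t / tau)

t = np.linspace(0.0, 10.0, 200)                      # time axis, arbitrary units (hours)
p_true = exp_relax(t, 1.0, 0.5, 2.5)                 # a made-up "pressure" relaxation
p_obs = p_true + 0.01 * rng.standard_normal(t.size)  # add measurement noise

popt, pcov = curve_fit(exp_relax, t, p_obs, p0=(1.0, 0.5, 1.0))
print(f"fitted relaxation time tau = {popt[2]:.2f} h (+/- {np.sqrt(pcov[2, 2]):.2f})")

# --- 2. Washburn-type imbibition: L(t) = k * sqrt(t) -------------------------
def washburn(t, k):
    return k * np.sqrt(t)

L_obs = washburn(t, 0.8) + 0.02 * rng.standard_normal(t.size)
k_fit, _ = curve_fit(washburn, t, L_obs, p0=(1.0,))
print(f"fitted Washburn prefactor k = {k_fit[0]:.3f}")
```

Only the model function changes between cases; the same `curve_fit` pattern applies whether the decaying quantity is a capillary pressure, an enthalpy overshoot, or a spin polarization.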
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7775290608406067, "perplexity": 3117.4164647126477}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.54/warc/CC-MAIN-20180320205242-20180320225242-00390.warc.gz"}
https://johncarlosbaez.wordpress.com/2017/08/22/complex-adaptive-system-design-part-4/?replytocom=98324
## Complex Adaptive System Design (Part 4) Last time I introduced typed operads. A typed operad has a bunch of operations for putting together things of various types and getting new things of various types. This is a very general idea! But in the CASCADE project we’re interested in something more specific: networks. So we want operads whose operations are ways to put together networks and get new networks. That’s what our team came up with: John Foley of Metron, my graduate students Blake Pollard and Joseph Moeller, and myself. We’re writing a couple of papers on this, and I’ll let you know when they’re ready. These blog articles are kind of sneak preview—and a gentle introduction, where you can ask questions. For example: I’m talking a lot about networks. But what is a ‘network’, exactly? There are many kinds. At the crudest level, we can model a network as a simple graph, which is something like this: There are some restrictions on what counts as a simple graph. If the vertices are agents of some sort and the edges are communication channels, these restrictions imply: • We allow at most one channel between any pair of agents, since there’s at most one edge between any two vertices of our graph. • The channels do not have a favored direction, since there are no arrows on the edges of our graph. • We don’t allow a channel from an agent to itself, since an edge can’t start and end at the same vertex. For other purposes we may want to drop some or all of these restrictions. There is an appalling diversity of options! We might want to allow multiple channels between a pair of agents. For this we could use multigraphs. We might want to allow directed channels, where the sender and receiver have different capabilities: for example, signals may only be able to flow in one direction. For this we could use directed graphs. And so on. We will also want to consider graphs with colored vertices, to specify different types of agents—or colored edges, to specify different types of channels. Even more complicated variants are likely to become important as we proceed. To avoid sinking into a mire of special cases, we need the full power of modern mathematics. Instead of separately studying all these various kinds of networks, we need a unified notion that subsumes all of them. To do this, the Metron team came up with something called a ‘network model’. There is a network model for simple graphs, a network model for multigraphs, a network model for directed graphs, a network model for directed graphs with 3 colors of vertex and 15 colors of edge, and more. You should think of a network model as a kind of network. Not a specific network, just a kind of network. Our team proved that for each network model $G$ there is an operad $O_G$ whose operations describe how to put together networks of that kind. We call such operads ‘network operads’. I want to make all this precise, but today let me just show you one example. Let’s take $G$ to be the network model for simple graphs, and look at the network operad $O_G.$ I won’t tell you what kind of thing $G$ is yet! But I’ll tell you about the operad $O_G$. Types. Remember from last time that an operad has a set of ‘types’. For $O_G$ this is the set of natural numbers, $\mathbb{N}.$ The reason is that a simple graph can have any number of vertices. Operations. Remember that an operad has sets of ‘operations’. 
In our case we have a set of operations $O_G(t_1,\dots,t_n ; t)$ for each choice of $t_1,\dots,t_n, t \in \mathbb{N}.$ An operation $f \in O_G(t_1,\dots,t_n; t)$ is a way of taking a simple graph with $t_1$ vertices, a simple graph with $t_2$ vertices,… and so on, and sticking them together, perhaps adding new edges, to get a simple graph with $t = t_1 + \cdots + t_n$ vertices. Let me show you an operation $f \in O_G(3,4,2;9)$ This will be a way of taking three simple graphs—one with 3 vertices, one with 4, and one with 2—and sticking them together, perhaps adding edges, to get one with 9 vertices. Here’s what $f$ looks like: It’s a simple graph with vertices numbered from 1 to 9, with the vertices in bunches: {1,2,3}, {4,5,6,7} and {8,9}. It could be any such graph. This one happens to have an edge from 3 to 6 and an edge from 1 to 2. Here’s how we can actually use our operation. Say we have three simple graphs like this: Then we can use our operation to stick them together and get this: Notice that we added a new edge from 3 to 6, connecting two of our three simple graphs. We also added an edge from 1 to 2… but this had no effect, since there was already an edge there! The reason is that simple graphs have at most one edge between vertices. But what if we didn’t already have an edge from 1 to 2? What if we applied our operation $f$ to the following simple graphs? Well, now we’d get this: This time adding the edge from 1 to 2 had an effect, since there wasn’t already an edge there! In short, we can use this operad to stick together simple graphs, but also to add new edges within the simple graphs we’re sticking together! When I’m telling you how we ‘actually use’ our operad to stick together graphs, I’m secretly describing an algebra of our operad. Remember, an operad describes ways of sticking together things together, but an ‘algebra’ of the operad gives a particular specification of these things and describes how we stick them together. Our operad $O_G$ has lots of interesting algebras, but I’ve just shown you the simplest one. More precisely: Things. Remember from last time that for each type, an algebra specifies a set of things of that type. In this example our types are natural numbers, and for each natural number $t \in \mathbb{N}$ I’m letting the set of things $A(t)$ consist of all simple graphs with vertices $\{1, \dots, t\}.$ Action. Remember that our operad $O_G$ should have an action on $A$, meaning a bunch of maps $\alpha : O_G(t_1,...,t_n ; t) \times A(t_1) \times \cdots \times A(t_n) \to A(t)$ I just described how this works in some examples. Some rules should hold… and they do. To make sure you understand, try these puzzles: Puzzle 1. In the example I just explained, what is the set $O_G(t_1,\dots,t_n ; t)$ if $t \ne t_1 + \cdots + t_n?$ Puzzle 2. In this example, how many elements does $O_G(1,1;2)$ have? Puzzle 3. In this example, how many elements does $O_G(1,2;3)$ have? Puzzle 4. In this example, how many elements does $O_G(1,1,1;3)$ have? Puzzle 5. In the particular algebra $A$ that I explained, how many elements does $A(3)$ have? Next time I’ll describe some more interesting algebras of this operad $O_G.$ These let us describe networks of mobile agents with range-limited communication channels! Some posts in this series: Part 2. Metron’s software for system design. Part 3. Operads: the basic idea. Part 4. Network operads: an easy example. Part 5. Algebras of network operads: some easy examples. Part 6. Network models. Part 7. 
Step-by-step compositional design and tasking using commitment networks. Part 8. Compositional tasking using category-valued network models. Part 9 – Network models from Petri nets with catalysts. ### 7 Responses to Complex Adaptive System Design (Part 4) 1. arch1 says: At worst I’ll be a straight man: 1) empty set (since sticking graphs together doesn’t create or destroy vertices) 2-4) 2^P(t,2) (since each pair of distinguishable vertices can independently be joined by an arc, or not) 5) 2^P(3,2)=8 (same reason) • arch1 says: Replace “P” with “C” in my answers (the vertices are distinguishable but their order within the pair doesn’t matter) • John Baez says: arch1 wrote: At worst I’ll be a straight man. Like a comedian, every mathematician seems more funny with a straight man. 1) empty set (since sticking graphs together doesn’t create or destroy vertices). Right! 2)-4) 2^C(t,2) (since each pair of distinguishable vertices can independently be joined by an arc, or not) If C(t,2) means the binomial coefficient $\binom{t}{2}$, then you’re right! If $t_1 + \cdots + t_n = t$ then $O_G(t_1, \dots, t_n ; t)$ is the set of simple graphs with $t$ vertices, so its cardinality is $\displaystyle{ 2^{\binom{t}{2}} }$ 5) 2^C(3,2)=8 (same reason) Right again! In this example $A(t)$ is also the set of simple graphs wiht $t$ vertices, so its cardinality is also $\displaystyle{ 2^{\binom{t}{2}} }$ Moral: In this particular example, the algebra is very similar to the operad it’s an algebra of. That’s not always true, but every typed operad $O$ has an algebra of this kind, with $A(t) = O(t;t)$ 2. @whut says: (1) ? (2) 3 (3) 3 (4) 4 (5) 3 3. I think you underrate the importance of directionality in the communications channels. Quite generally, in networks representing the execution of business processes, if A talks to B but B does not talk to A, that fact is highly significant; and if B does talk to A, it is for a different reason and it transports a payload of a different type. I do not have any insight into the manner in which this distinction would be expressed in the formalism that you are explaining here and I apologize if this entire comment is a forward reference to a topic that you will introduce later on. • John Baez says: In this post I’m using simple graphs as an example of how operads can be used to assemble networks. Simple graphs have undirected edges. But they’re just one example of our approach. For networks where communication channels are directed, we use graphs that take this into account: for example, ‘directed graphs’. There are some restrictions on what counts as a simple graph. If the vertices are agents of some sort and the edges are communication channels, these restrictions imply: • We allow at most one channel between any pair of agents, since there’s at most one edge between any two vertices of our graph. • The channels do not have a favored direction, since there are no arrows on the edges of our graph. • We don’t allow a channel from an agent to itself, since an edge can’t start and end at the same vertex. For other purposes we may want to drop some or all of these restrictions. There is an appalling diversity of options! We might want to allow multiple channels between a pair of agents. For this we could use multigraphs. We might want to allow directed channels, where the sender and receiver have different capabilities: for example, signals may only be able to flow in one direction. For this we could use directed graphs. And so on. 
To avoid sinking into a mire of special cases, we need the full power of modern mathematics. Instead of separately studying all these various kinds of networks, we need a unified notion that subsumes all of them. To do this, the Metron team came up with something called a ‘network model’. There is a network model for simple graphs, a network model for multigraphs, a network model for directed graphs, a network model for directed graphs with 3 colors of vertex and 15 colors of edge, and more. I’ll explain the general concept of ‘network model’ later. First I want to illustrate some things you can do with network models, and I’ll do that next time using simple graphs. But fear not—our setup can handle a wide variety of networks. 4. When we have a ‘less detailed’ algebra $A$ and a ‘more detailed’ algebra $A',$ they will typically be related by a map $f : A' \to A$ which ‘forgets the extra details’. This map should be a ‘homomorphism’ of algebras, but I’ll postpone the definition of that concept. Let me give some examples. I’ll take the operad that I described last time, and describe some of its algebras, and homomorphisms between these.
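Since the puzzles and their answers above pin down both the size of $O_G(t_1,\dots,t_n;t)$ and how the algebra $A$ acts, it may help to see the whole thing run. The sketch below is a toy encoding of my own in Python (it is not the Metron team's software, and the function names `num_operations` and `act` are mine): a simple graph on vertices $\{1,\dots,t\}$ is a set of 2-element frozensets, and an operation is such a graph together with the block sizes $t_1,\dots,t_n$.

```python
# Toy encoding of the network operad O_G for simple graphs and its "obvious" algebra.
from itertools import accumulate
from math import comb

def num_operations(block_sizes, t):
    """|O_G(t_1, ..., t_n; t)| = 2^C(t, 2) when t = t_1 + ... + t_n, and 0 otherwise."""
    return 2 ** comb(t, 2) if sum(block_sizes) == t else 0

def act(op_edges, block_sizes, graphs):
    """Relabel each input graph into its block, take the union, and add the operation's edges."""
    offsets = [0] + list(accumulate(block_sizes))[:-1]
    result = set()
    for off, g in zip(offsets, graphs):
        result |= {frozenset(off + v for v in e) for e in g}
    result |= set(op_edges)   # at most one edge per pair: duplicate edges simply collapse
    return result

# The operation f in O_G(3, 4, 2; 9) from the post: an edge {3, 6} and an edge {1, 2}.
f = {frozenset({3, 6}), frozenset({1, 2})}

g1 = {frozenset({1, 2})}   # a simple graph on {1, 2, 3}
g2 = {frozenset({1, 4})}   # a simple graph on {1, 2, 3, 4}
g3 = set()                 # a simple graph on {1, 2}

print(act(f, [3, 4, 2], [g1, g2, g3]))   # edges {1,2}, {3,6}, {4,7} on vertices 1..9

# Puzzle checks: |O_G(1,1;2)| = 2, |O_G(1,2;3)| = 8, and |A(3)| = 2^C(3,2) = 8.
print(num_operations([1, 1], 2), num_operations([1, 2], 3), 2 ** comb(3, 2))
```

Because edges are stored in a set, "adding an edge that is already there" has no effect, which is exactly the behaviour of the example with the edge from 1 to 2.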
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 44, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6738871335983276, "perplexity": 609.2808622882717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314852.37/warc/CC-MAIN-20190819160107-20190819182107-00301.warc.gz"}
http://www.realclimate.org/index.php/archives/2004/11/pca-details/
### PCA details

Filed under: — mike @ 22 November 2004

PCA of the 70 North American ITRDB Tree-ring Proxy Series used by Mann et al (1998)

a. Eigenvalue spectrum for Mann et al (1998) PCA analysis (1902-1980 zero reference period, data normalized by detrended 1902-1980 standard deviation):

Rank   Explained Variance   Cumulative Variance
1      0.3818               0.3818
2      0.0976               0.4795
-----------------------------------------------
3      0.0491               0.5286
4      0.0354               0.5640

First 2 PCs were retained based on application of the standard selection rules (see Figure 1) used by Mann et al (1998).

b. Eigenvalue spectrum for PCA analysis based on convention of MM (1400-1971 zero reference period, data un-normalized):

Rank   Explained Variance   Cumulative Variance
1      0.1946               0.1946
2      0.0905               0.2851
3      0.0783               0.3634
4      0.0663               0.4297
5      0.0549               0.4846
-----------------------------------------------
6      0.0373               0.5219

First 5 PCs should be retained in this case employing the standard selection rules (see Figure 1) used by Mann et al (1998).

FIGURE 1. Comparison of eigenvalue spectrum resulting from a Principal Components Analysis (PCA) of the 70 North American ITRDB data used by Mann et al (1998) back to AD 1400 based on Mann et al (1998) centering/normalization convention (blue circles) and MM centering/normalization convention (red crosses). Shown also is the null distribution based on Monte Carlo simulations with 70 independent red noise series of the same length and same lag-one autocorrelation structure as the actual ITRDB data using the respective centering and normalization conventions (blue curve for MBH98 convention, red curve for MM convention). In the former case, 2 (or perhaps 3) eigenvalues are distinct from the noise eigenvalue continuum. In the latter case, 5 (or perhaps 6) eigenvalues are distinct from the noise eigenvalue continuum.
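For readers who want to see how the centering/normalization convention alone reshapes an eigenvalue spectrum, here is a minimal, illustrative Python sketch. It uses synthetic AR(1) "red noise" series rather than the actual ITRDB proxies, a plain standard deviation rather than the detrended 1902-1980 standard deviation, and stand-in values for the series length, lag-one autocorrelation, and reference window, so it only mimics the structure of the calculation, not the published numbers:

```python
# Illustrative only: compare PCA eigenvalue spectra under two centering conventions,
# using synthetic red noise in place of the ITRDB proxy data.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_series, phi = 581, 70, 0.3        # stand-ins: 1400-1980, 70 series, lag-1 autocorr.

x = np.zeros((n_years, n_series))
eps = rng.standard_normal((n_years, n_series))
for t in range(1, n_years):                  # AR(1) red noise, series by series
    x[t] = phi * x[t - 1] + eps[t]

def explained_variance(data):
    """Fraction of variance per PC from an SVD of the prepared data matrix."""
    s = np.linalg.svd(data, compute_uv=False)
    return s**2 / np.sum(s**2)

# Convention A: anomalies relative to a short reference period, scaled by its std.
ref = slice(n_years - 79, n_years)           # stand-in for a 1902-1980-type window
a = (x - x[ref].mean(axis=0)) / x[ref].std(axis=0)

# Convention B: anomalies relative to the full-period mean, un-normalized.
b = x - x.mean(axis=0)

print("convention A, leading PCs:", np.round(explained_variance(a)[:5], 3))
print("convention B, leading PCs:", np.round(explained_variance(b)[:5], 3))
```

Printing the two spectra shows how the convention alone changes the apparent dominance of the leading PCs; the quantitative comparison in Figure 1 of course requires the real proxy data and the Monte Carlo null distribution described above.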
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8796893358230591, "perplexity": 3296.290505808882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637904794.47/warc/CC-MAIN-20141030025824-00033-ip-10-16-133-185.ec2.internal.warc.gz"}
https://en.m.wikibooks.org/wiki/CLEP_College_Algebra/Absolute_Value_Equations
# CLEP College Algebra/Absolute Value Equations ## Absolute Values Absolute Values represented using two vertical bars, ${\displaystyle \vert }$ , are common in Algebra. They are meant to signify the number's distance from 0 on a number line. If the number is negative, it becomes positive. And if the number was positive, it remains positive: ${\displaystyle \left\vert 4\right\vert =4\,}$ ${\displaystyle \left\vert -4\right\vert =4\,}$ For a formal definition: ${\displaystyle |x|={\begin{cases}x,&{\text{if }}x\geq 0\\-x,&{\text{if }}x<0\end{cases}}}$ This can be read aloud as the following: If ${\displaystyle x\geq 0}$ , then ${\displaystyle |x|=x}$ If ${\displaystyle x<0}$ , then ${\displaystyle |x|=-x}$ The formal definition is simply a declaration of what the function represents at certain restrictions of the ${\displaystyle x}$ -value. For any ${\displaystyle x<0}$ , the output of the graph of the function on the ${\displaystyle xy}$  plane is that of the linear function ${\displaystyle y=-x}$ . If ${\displaystyle x\geq 0}$ , then the output is that of the linear function ${\displaystyle y=x}$ . For our purposes, it does not technically matter whether ${\displaystyle x\geq 0{\text{ and }}x<0}$  or ${\displaystyle x>0{\text{ and }}x\leq 0}$ . As long as you pick one and are consistent with it, it does not matter how this is defined. By convention, it is usually defined as in the beginning formal definition. Please note that the opposite (the negative, -) of a negative number is a positive. For example, the opposite of ${\displaystyle -1}$  is ${\displaystyle 1}$ . Usually, some books and teachers would refer to opposite number as the negative of the given magnitude. For convenience, this may be used, so always keep in mind this shortcut in language. ### Properties of the Absolute Value Function We will define the properties of the absolute value function. This will be important to know when taking the CLEP exam since it can drastically speed up the process of solving absolute value equations. Finally, the practice problems in this section will test you on your knowledge on absolute value equations. We recommend you learn these concepts to the best of your abilities. However, this will not be explicitly necessary by the time one takes the exam. #### Domain and Range Let ${\displaystyle f(x)=|x|}$  whose mapping is ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ . By definition, ${\displaystyle |x|={\begin{cases}-x&{\text{if}}&x<0\\x&{\text{if}}&x\geq 0\end{cases}}}$ . Because it can only be the case that ${\displaystyle y=-x{\text{ if }}x<0}$  and ${\displaystyle y=x{\text{ if }}x\geq 0}$ , it is not possible for ${\displaystyle |x|<0}$ . However, since ${\displaystyle x}$  has no restriction, the domain, ${\displaystyle A}$ , has no restriction. Thus, if ${\displaystyle B}$  represents the range of the function, then ${\displaystyle A=\{x\in \mathbb {R} \}}$  and ${\displaystyle B=\{y\geq 0|y\in \mathbb {R} \}}$ . Definition: Domain and Range Let ${\displaystyle f(x)=|x|}$  whose mapping is ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$  represent the absolute value function. If ${\displaystyle A}$  is the domain and ${\displaystyle B}$  is the range, then ${\displaystyle A=\{x\in \mathbb {R} \}}$  and ${\displaystyle B=\{y\geq 0|y\in \mathbb {R} \}}$ . By the above definition, there exists an absolute minimum to the parent function, and it exists at the origin, ${\displaystyle O(0,0)}$ #### Even or odd? Recall the definition of an even and an odd function. 
Let there be a function ${\displaystyle f:A\to B}$. If ${\displaystyle f(-x)=f(x)}$ for every ${\displaystyle x\in A}$, then ${\displaystyle f}$ is even. If ${\displaystyle f(-x)=-f(x)}$ for every ${\displaystyle x\in A}$, then ${\displaystyle f}$ is odd.

Proof: ${\displaystyle f(x)=|x|}$ is even. Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto |x|}$. By definition, ${\displaystyle f(x)=|x|={\begin{cases}-x&{\text{if}}&x<0\\x&{\text{if}}&x\geq 0\end{cases}}}$. Suppose ${\displaystyle x\in \mathbb {R} }$. If ${\displaystyle x=0}$, then ${\displaystyle f(-x)=f(x)}$ trivially. Let ${\displaystyle x>0\Rightarrow -x<0}$. Then ${\displaystyle f(x)=x}$ and ${\displaystyle f(-x)=-(-x)=x}$, so ${\displaystyle f(-x)=f(x)}$. The case ${\displaystyle x<0}$ follows by the same argument with ${\displaystyle x}$ and ${\displaystyle -x}$ exchanged. ${\displaystyle \blacksquare }$

Because ${\displaystyle f(x)}$ is even, it is also the case that it is symmetrical. A review of this can be found here (Graphs and Their Properties).

#### One-to-one and onto?

Recall the definitions of injective and surjective. If ${\displaystyle u,v\in A}$, and ${\displaystyle f(u)=f(v)\Rightarrow u=v}$, then ${\displaystyle f(x)}$ is injective. If for all ${\displaystyle b\in B}$ there is an ${\displaystyle a\in A}$ such that ${\displaystyle f(a)=b}$, then ${\displaystyle f(x)}$ is surjective.

Proof: ${\displaystyle f(x)=|x|}$ is non-injective. Take any ${\displaystyle u\neq 0}$ and let ${\displaystyle v=-u}$. By the previous proof, ${\displaystyle f(x)}$ is even, so ${\displaystyle f(u)=f(-u)=f(v)}$ even though ${\displaystyle u\neq v}$. This contradicts the definition of injectivity, so ${\displaystyle f(x)}$ is non-injective. ${\displaystyle \blacksquare }$

Because we have not established how to prove these statements through algebraic manipulation, we will be deriving properties as we go to gain a further understanding of these new functions. Establishing whether a function is surjective is simply a matter of checking the definition (and negating it to establish that a function is non-surjective).

Proof: ${\displaystyle f(x)=|x|}$ is non-surjective. Consider the element ${\displaystyle b=-1\in \mathbb {R} }$. Since ${\displaystyle f(x)=|x|\geq 0}$ for all ${\displaystyle x\in \mathbb {R} }$, there is no ${\displaystyle x}$ with ${\displaystyle f(x)=-1}$. ${\displaystyle \blacksquare }$

A review of the definitions can be found here (Definition and Interpretations of Functions).

#### Intercepts and Inflections of the Parent Function

Figure 1: ${\displaystyle f(x)=|x|}$ graphed on the first and second quadrant (above the ${\displaystyle x}$ axis), showing only the nonnegative ${\displaystyle y}$ values.

With all the information provided from the previous sections, we can derive the graph of the parent function ${\displaystyle f(x)=|x|}$. It is even, and therefore symmetrical about the ${\displaystyle y}$-axis, which passes through the ${\displaystyle x}$-intercept at ${\displaystyle x=0}$. Finally, because we know the domain, the range, the location of the minimum at ${\displaystyle O(0,0)}$, and the definition of the function, we can easily show that the graph of ${\displaystyle f(x)=|x|}$ is the image to the right (Figure 1). A summary of what you should see from the graph is this:

• Domain: ${\displaystyle \{x\in \mathbb {R} \}}$.
• Range: ${\displaystyle \{y\geq 0|y\in \mathbb {R} \}}$.
• There is an absolute minimum at ${\displaystyle O(0,0)}$.
• There is one ${\displaystyle x}$-intercept at ${\displaystyle x=0}$.
• There is one ${\displaystyle y}$-intercept at ${\displaystyle y=0}$.
• The graph is even and symmetrical about the ${\displaystyle y}$-axis.
• The graph is non-injective and non-surjective.
• The graph has no inflection point.

#### Transformations of the Parent Function

Many times, one will not be working with the parent function. Many real-life applications of this function involve at least some manipulation of either the input or the output: vertical stretching/contraction, horizontal stretching/contraction, reflection about the ${\displaystyle x}$-axis, reflection about the ${\displaystyle y}$-axis, and vertical/horizontal shifting. Luckily, not much changes when it comes to the manipulation of these functions. The exceptions will be talked about in more detail:

Vertical Expansion/Contraction/Flipping. Let ${\displaystyle f(x)=|x|}$ and ${\displaystyle g(x)=A\cdot f(x)}$. There must be an ${\displaystyle \left(x_{0},y_{0}\right)\in f(x)\Leftrightarrow \left(x_{0},Ay_{0}\right)\in g(x)}$. Thus,
• If ${\displaystyle A>1}$, then ${\displaystyle g(x)}$ is an expansion of ${\displaystyle f(x)}$ by a factor of ${\displaystyle A}$.
• If ${\displaystyle 0<A<1}$, then ${\displaystyle g(x)}$ is a contraction of ${\displaystyle f(x)}$ by a factor of ${\displaystyle A}$.
• If ${\displaystyle A<0}$, then ${\displaystyle g(x)}$ is a reflection of ${\displaystyle f(x)}$ about the ${\displaystyle x}$-axis.

Vertical Shift. Let ${\displaystyle f(x)=|x|}$ and ${\displaystyle g(x)=f(x)+b}$. There must be an ${\displaystyle \left(x_{0},y_{0}\right)\in f(x)\Leftrightarrow \left(x_{0},y_{0}+b\right)\in g(x)}$. Thus,
• If ${\displaystyle b>0}$, then ${\displaystyle g(x)}$ is an upward shift of ${\displaystyle f(x)}$ by ${\displaystyle b}$.
• If ${\displaystyle b<0}$, then ${\displaystyle g(x)}$ is a downward shift of ${\displaystyle f(x)}$ by ${\displaystyle b}$.

Horizontal Shift. Let ${\displaystyle f(x)=|x|}$ and ${\displaystyle g(x)=f(x+a)}$. There must be an ${\displaystyle \left(x_{0},y_{0}\right)\in f(x)\Leftrightarrow \left(x_{0}-a,y_{0}\right)\in g(x)}$. Thus,
• If ${\displaystyle a>0}$, then ${\displaystyle g(x)}$ is a leftward shift of ${\displaystyle f(x)}$ by ${\displaystyle a}$.
• If ${\displaystyle a<0}$, then ${\displaystyle g(x)}$ is a rightward shift of ${\displaystyle f(x)}$ by ${\displaystyle a}$.

The properties not listed above are exceptions to the general rule about functions found in the chapter Algebra of Functions. The exceptions are not anything substantial. The only difference between what we found generally and what we have provided above is simply a result of what we found in the previous section.
• There is no reflection about the ${\displaystyle y}$-axis because the function is even and symmetrical.
• There is no horizontal expansion and contraction because it gives the same result as vertical expansion and contraction (this will be proven later).

We now have all the information we will need to know about absolute value functions.

### Graphing Absolute Value Functions

This subsection is absolutely not optional. You will be asked these questions very explicitly, so it is a good idea to understand this section. If you didn't read the previous subsection, you are not going to understand how any of this makes sense. Fortunately, the idea behind graphing any arbitrary function depends mostly on what you know about the function. Therefore, we can graph these functions with relative ease. These examples should hopefully be further confirmation of what you learned in Algebra of Functions.
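Before working through the example below, here is a minimal Python sketch (an added illustration, not part of the wikibook) that spot-checks the transformation rules numerically on the function used in Example 1.2(a), ${\displaystyle g(x)={\frac {1}{2}}|2x+6|-5}$. Sampling a grid and taking the minimum should land on the vertex ${\displaystyle (-3,-5)}$ predicted by a leftward shift of 3 and a downward shift of 5.

```python
# Numerical spot-check of the transformation rules (illustration only).

def g(x):
    # g(x) = (1/2)|2x + 6| - 5, the function graphed in Example 1.2(a)
    return 0.5 * abs(2 * x + 6) - 5

# Sample the function on a grid from -10 to 10 and find where it is smallest.
xs = [i / 100 for i in range(-1000, 1001)]
x_min = min(xs, key=g)

print(x_min, g(x_min))   # -3.0 -5.0  -> vertex at (-3, -5), as the rules predict
```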
Example 1.2(a): Graph the following absolute value function: ${\displaystyle f(x)={\frac {1}{2}}|2x+6|-5}$

Method 1: Follow the procedure from Algebra of Functions. This method will work for any arbitrary function. However, it will not always be the quickest method for absolute value functions. We follow these steps. Let ${\displaystyle f(x)}$ be the parent function and ${\displaystyle g(x)=Af(ax+b)+c}$. Factor ${\displaystyle ax+b}$ so that ${\displaystyle ax+b=a\left(x+{\frac {b}{a}}\right)}$. Horizontally contract/expand ${\displaystyle f(x)}$ by ${\displaystyle a}$ to obtain ${\displaystyle f(ax)}$. Horizontally shift ${\displaystyle f(ax)}$ to the left (if ${\displaystyle {\frac {b}{a}}>0}$) or right (if ${\displaystyle {\frac {b}{a}}<0}$) by ${\displaystyle \left|{\frac {b}{a}}\right|}$ to obtain ${\displaystyle f\left(a\left(x+{\frac {b}{a}}\right)\right)}$. Vertically expand/contract/flip ${\displaystyle f\left(a\left(x+{\frac {b}{a}}\right)\right)}$ by ${\displaystyle A}$. Vertically shift ${\displaystyle Af\left(a\left(x+{\frac {b}{a}}\right)\right)}$ upward/downward by ${\displaystyle c}$. Since ${\displaystyle f(x)={\frac {1}{2}}|2x+6|-5}$ has ${\displaystyle A={\frac {1}{2}}}$, ${\displaystyle a=2}$, ${\displaystyle b=6}$, and ${\displaystyle c=-5}$, we may apply these steps as given to get to our desired result. As this should be review, we will not be meticulously graphing each step. As such, only the final function (and the parent function in red) will be shown.

Method 3: Find the absolute minimum or maximum, graph one half, reflect. While method 1 will always work for any arbitrary, continuous function, method 3 is fastest for an absolute value function composed with a linear function. First, we should try to find the vertex. We know from Algebra of Functions that the only things that affect the location of the vertex in even functions are the horizontal shift of the inner composed linear function and the vertical shift of the entire function, ${\displaystyle c}$. Rewriting the absolute value equation as shown below will allow us to find the vertex of the function. ${\displaystyle f(x)={\frac {1}{2}}|2x+6|-5={\frac {1}{2}}|2(x+3)|-5}$ This then tells us the vertex is at ${\displaystyle (-3,-5)}$. This method then tells us to graph the slopes. However, how should that work? Recall the formal definition of an arbitrary absolute value function (here for an increasing linear ${\displaystyle g}$): ${\displaystyle |g(x)|={\begin{cases}-g(x)&{\text{if}}&x<x_{0}\\g(x)&{\text{if}}&x\geq x_{0}\end{cases}}}$ In the above definition of a general absolute value function, ${\displaystyle x_{0}}$ is the value at which ${\displaystyle g(x_{0})=-g(x_{0})=0}$. This means that the ${\displaystyle x}$-value of the vertex is where we split the absolute value function into its two cases. In our instance, ${\displaystyle |g(x)|=|2x+6|}$, for which ${\displaystyle 2(-3)+6=0}$, so ${\displaystyle x_{0}=-3}$. We can say, thusly, that ${\displaystyle |2x+6|={\begin{cases}-2x-6&{\text{if}}&x<-3\\2x+6&{\text{if}}&x\geq -3\end{cases}}}$ To be continued.

### Practice Problems

For all of the problems given below, ${\displaystyle a=-2}$ and ${\displaystyle b=3}$. It is recommended one does all the problems below. Evaluate the following expressions.

1 ${\displaystyle |a|=}$
2 ${\displaystyle |b|=}$
3 ${\displaystyle -|a|=}$
4 ${\displaystyle -|b|=}$
5 ${\displaystyle {\frac {1}{a}}\cdot |b|=}$
6 ${\displaystyle a\cdot |b|=}$
7 ${\displaystyle |a-b|=}$
8 ${\displaystyle |a+b|=}$
9 ${\displaystyle b-|a|=}$
10 ${\displaystyle a-|b|=}$
11 ${\displaystyle \left\vert {\frac {b}{a}}\right\vert =}$
12 ${\displaystyle b\cdot |a|=}$

Properties of Absolute Value

13 Let ${\displaystyle y=f(x)=|x|}$. The following properties are listed below.
Select the definition that BEST matches the description of the property or how one can prove the listed property. Even function. Non-surjective. Vertical shift. Horizontal Shift.
${\displaystyle \exists b\in \mathbb {R} }$ such that ${\displaystyle f(x)\neq b}$.
The range of ${\displaystyle f(x)}$ is ${\displaystyle \{y|y\in \mathbb {R} \wedge y\geq 0\}}$.
${\displaystyle f(x)=f(-x)}$.
If ${\displaystyle b<0\Leftrightarrow f(x)\neq b}$, then ${\displaystyle y_{p}=f(x)+b\Rightarrow \{y_{p}\vert y_{p}\in \mathbb {R} \wedge y_{p}\geq b\}}$.
${\displaystyle \left(x_{0},y_{0}\right)\in f(x)\Leftrightarrow \left(x_{0}-a,y_{0}\right)\in f(x+a)}$.
The function ${\displaystyle f(x)}$ is many-to-one.

## Absolute Value Equations

Now, let's say that we're given the equation ${\displaystyle \left\vert k\right\vert =8}$ and we are asked to solve for ${\displaystyle k}$. What number would satisfy the equation ${\displaystyle \left\vert k\right\vert =8}$? 8 would work, but -8 would also work. That's why there can be two solutions to one equation. How come this is true? That is what the next example is for.

Example 2.0(a): Formally define the function below: ${\displaystyle f(k)=|2k+6|}$

Recall what the absolute value represents: it is the distance of that number to the left or right of the starting point, the point where the inner function is zero. Recall the formal definition of the absolute value function: ${\displaystyle f(x)=|x|={\begin{cases}-x{\text{ if }}x<0\\x{\text{ if }}x\geq 0\end{cases}}}$ We want to formally define the function ${\displaystyle f(k)=|2k+6|}$. Let ${\displaystyle x=k}$. First, we need to find where ${\displaystyle 2k+6=0}$. ${\displaystyle 2k+6=0}$ ${\displaystyle \Leftrightarrow 2k=-6}$ ${\displaystyle \Leftrightarrow k=-3}$ From that, it is safe to say that the following is true: ${\displaystyle f(k)=|2k+6|={\begin{cases}-(2k+6){\text{ if }}k<-3\\2k+6{\text{ if }}k\geq -3\end{cases}}}$ It is important to know how to do this so that we may formally apply an algorithm throughout this entire chapter. For now, we will be exploring ways to solve these equations based on the examples given, including the formalizing of an algorithm, which we will give later.

Example 2.0(b): Solve for ${\displaystyle k}$: ${\displaystyle |2k+6|=8}$

Since we formally defined the function in Example 2.0(a), we will write the definition down. ${\displaystyle f(k)=|2k+6|={\begin{cases}-(2k+6){\text{ if }}k<-3\\2k+6{\text{ if }}k\geq -3\end{cases}}}$ It is important to realize what the equation is saying: "there is a function ${\displaystyle y=f(k)}$ equal to ${\displaystyle y=8}$ such that ${\displaystyle \exists k\in \mathbb {R} }$." As defined in the opening section, this function is non-injective and non-surjective. Therefore, there can be two values ${\displaystyle k_{1}{\text{ and }}k_{2}}$ that satisfy ${\displaystyle f(k)=8}$. Therefore, one of the following must be true: ${\displaystyle 2k+6=8\quad {\text{OR}}\quad -(2k+6)=8}$.
All that is left to do is to solve the two equations for ${\displaystyle k}$, one for each given case, which we will differentiate as the positive and the negative case:

Negative case ${\displaystyle -(2k+6)=8}$ ${\displaystyle \Leftrightarrow 2k+6=-8}$ ${\displaystyle \Leftrightarrow 2k=-14}$ ${\displaystyle \Leftrightarrow k=-7}$

Positive case ${\displaystyle 2k+6=8}$ ${\displaystyle \Leftrightarrow 2k=2}$ ${\displaystyle \Leftrightarrow k=1}$

We found our two solutions for ${\displaystyle k}$: ${\displaystyle k=-7,1\blacksquare }$

The above example demonstrates an algorithm that is commonly taught in high schools and many universities, since it is applicable to every absolute value equation. The steps for the algorithm will now be stated. Given ${\displaystyle |g(x)|+c=f(x)}$:

1. Isolate the absolute value function so that it is equal to another function, or ${\displaystyle |g(x)|=f(x)-c}$.
2. Split the equation into two cases and solve for the composed function in each. Given ${\displaystyle |g(x)|=f(x)-c}$,
• Solve ${\displaystyle g(x)=f(x)-c}$ and
• Solve ${\displaystyle g(x)=-(f(x)-c)}$.

A basic principle of solving these absolute value equations is the need to keep the absolute value by itself. This should be enough for most people to understand, yet this phrasing can be a little ambiguous to some students. As such, a lot of practice problems may be in order here. We will be applying all the steps of the algorithm outlined above instead of going through the process of formally solving these equations, because the previous example was meant to show why the algorithm works.

Example 2.0(c): Solve for ${\displaystyle k}$: ${\displaystyle 3|2k+6|=12}$

We will show you two ways to solve this equation. The first is the standard way; the second will show you something not usually taught.

Standard way: Multiply the constant multiple by its inverse. We'd have to divide both sides by ${\displaystyle 3}$ to get the absolute value by itself. We would set up the two different equations using similar reasoning as in the first example: ${\displaystyle 2k+6=4\quad {\text{OR}}\quad 2k+6=-4}$. Then, we'd solve by subtracting the 6 from both sides and dividing both sides by 2 to get the ${\displaystyle k}$ by itself, resulting in ${\displaystyle k=-5,-1}$. We will leave the solving part as an exercise to the reader.

Other way: "Distribute" the three into the absolute value. Pay close attention to the steps and reasoning laid out herein, for the reasoning behind why this works is just as important as the trick itself, if not more so. Let us first generalize the problem. Let there be a positive, non-zero constant multiple ${\displaystyle c}$ multiplied by the absolute value expression ${\displaystyle |2k+6|}$: ${\displaystyle c\cdot |2k+6|=|c|\cdot |2k+6|\quad {\text{OR}}\quad c\cdot |2k+6|=|-c|\cdot |2k+6|}$. Let us assume both are true. If both statements are true, then you are allowed to distribute the positive constant ${\displaystyle c}$ inside the absolute value. Otherwise, this method is invalid!
{\displaystyle {\begin{aligned}|c|\cdot |2k+6|&=|c(2k+6)|&\qquad |-c|\cdot |2k+6|&=|-c(2k+6)|\\&=|2ck+6c|&\qquad &=|-2ck-6c|=|-(2ck+6c)|\\&=|1|\cdot |2ck+6c|={\color {red}1\cdot |2ck+6c|}&\qquad &=|-1|\cdot |2ck+6c|={\color {red}1\cdot |2ck+6c|}\end{aligned}}}

Notice the two equations have the same highlighted answer in red, meaning that so long as the value of the constant multiple ${\displaystyle c}$ is positive, you are allowed to distribute the ${\displaystyle c}$ inside the absolute value bars. However, this "distributive property" needed the property that multiplying two absolute values is the same as the absolute value of the product. We need to prove this is true before one can use it in a proof. If you spotted this gap yourself, you have a good logical mind, or at least a good eye for detail.

Proof: ${\displaystyle |b|\cdot |c|=|bc|}$

Let us start with what we know: ${\displaystyle |x|={\begin{cases}x,&{\text{if}}&x\geq 0\\-x,&{\text{if}}&x<0\end{cases}}}$ If ${\displaystyle a<0}$, then ${\displaystyle |a|=-a>0}$. Else, if ${\displaystyle a\geq 0}$, then ${\displaystyle |a|\geq 0}$. Let ${\displaystyle b,c\in \mathbb {R} }$, ${\displaystyle |b|=B}$, ${\displaystyle |c|=C}$, and ${\displaystyle b\cdot c=m}$. The following three cases apply:

${\displaystyle bc=m<0\Rightarrow |m|=-m>0}$. This simply means that for some product ${\displaystyle bc}$ that equals a negative number ${\displaystyle m}$, the absolute value of that is ${\displaystyle -m}$, or the distance from zero. Because ${\displaystyle m<0}$, multiplying the two sides by ${\displaystyle -1}$ will change the less than to a greater than, or ${\displaystyle m<0\Leftrightarrow -m>0}$.

${\displaystyle bc=m=0\Rightarrow |m|=m=0}$. For some product ${\displaystyle bc}$ that equals a number ${\displaystyle m=0}$, the absolute value of that is ${\displaystyle 0}$.

${\displaystyle bc=m>0\Rightarrow |m|=m>0}$. For some product ${\displaystyle bc}$ that equals a positive number ${\displaystyle m}$, the absolute value of the product is ${\displaystyle m}$.

Given that ${\displaystyle |bc|=|m|}$ always results in a nonnegative number, we can conclude that the function is equivalent to the following: ${\displaystyle |b\cdot c|=|m|={\begin{cases}m,&{\text{if }}m\geq 0\\-m,&{\text{if }}m<0\end{cases}}}$

Let ${\displaystyle |b|\cdot |c|=B\cdot C=n}$. Since ${\displaystyle |b|=B\geq 0}$ and ${\displaystyle |c|=C\geq 0}$, ${\displaystyle B\cdot C=n\geq 0}$. This means that ${\displaystyle n=|n|}$.

It remains to check that ${\displaystyle n=|m|}$. If ${\displaystyle b}$ and ${\displaystyle c}$ are both nonnegative or both nonpositive, then ${\displaystyle m=bc\geq 0}$ and ${\displaystyle |b|\cdot |c|=bc=m=|m|}$ (when both are nonpositive, ${\displaystyle |b|\cdot |c|=(-b)(-c)=bc}$). If ${\displaystyle b}$ and ${\displaystyle c}$ have opposite signs, then ${\displaystyle m=bc<0}$ and ${\displaystyle |b|\cdot |c|=-(bc)=-m=|m|}$ (for instance, if ${\displaystyle b<0<c}$, then ${\displaystyle |b|\cdot |c|=(-b)c=-(bc)}$). In every case, ${\displaystyle |b|\cdot |c|=n=|m|=|bc|}$. Therefore, ${\displaystyle \forall b,c\in \mathbb {R} }$, ${\displaystyle |b|\cdot |c|=|bc|\blacksquare }$.

One nice thing about this proof is how we can use it to conclude that any function multiplied by another function will result in multiplying the inner functions within the absolute values.
All we have to do is assume that it equals some other function instead of another number, as implicitly written within this proof. The only necessary change one needs to make is simply to define all the variables within as functions. By confirming the general case, we may employ this trick when we see it again. Let us apply this property to the original problem (this gives us the green result below): ${\displaystyle 3|2k+6|={\color {green}|6k+18|=12}}$ This all implies that ${\displaystyle 6k+18=12\quad {\text{OR}}\quad 6k+18=-12}$. From there, a simple use of algebra will show that the answer to the original problem is again ${\displaystyle k=-5,-1}$.

Let us change the previous problem a little so that the constant multiple is now negative. Without changing much else, what will be true as a result? Let us find out.

Example 2.0(d): Solve for ${\displaystyle k}$: ${\displaystyle -4|2k+6|=8}$

We will attempt the problem in two different ways: the standard way and the other way, which we will explain later.

Standard way: Multiply the constant multiple by its inverse. Divide like the previous problem, so the equation would look like this: ${\displaystyle |2k+6|=-2}$. Recall what the absolute value represents: it is the distance of that number to the left or right of the starting point, zero. With this, do you notice anything strange? When you evaluate an absolute value, you will never get a negative number, because a distance cannot be negative. Because this describes a logically impossible situation, there are no real solutions. Notice how we specifically mentioned "real" solutions. This is because we are certain that solutions in the real set, ${\displaystyle \mathbb {R} }$, do not exist. However, there might be some set out there which would have solutions for this type of equation. Because of this possibility, we need to be mathematically rigorous and specifically state "no real solutions."

Other way: "Distribute" the constant multiple into the absolute value. Here, we notice that the constant multiple ${\displaystyle c<0}$. The problem with that is that there is no ${\displaystyle g}$ such that ${\displaystyle |g|<0}$. What is true instead is that ${\displaystyle -|g|\leq 0}$, because ${\displaystyle |g|\geq 0}$ (dividing both sides by ${\displaystyle -1}$ flips the inequality). With this property, we may therefore only distribute the constant multiple as ${\displaystyle |c|}$, leaving a factor of ${\displaystyle -1}$ outside the absolute value. As such, ${\displaystyle -4|2k+6|=-|8k+24|=8\qquad {\text{Divide both sides by }}-1}$ ${\displaystyle |8k+24|=-8}$ In the end, the other way still has us multiplying both sides by a constant. Either way, this "other method" still gave us the same answer: there is no real solution.

The problem this time will be a little different. Keep in mind the principle we have used throughout all the examples so far, and be careful, because a trap is set in this problem.

Example 2.0(e): Solve for ${\displaystyle x}$: ${\displaystyle |3x-3|-3=2x-10}$

There are many ways we can attempt to find solutions to this problem. We will do this the standard way and allow any student to do it however they so desire. ${\displaystyle |3x-3|-3=2x-10\qquad {\text{Add the }}3{\text{ to both sides.}}}$ ${\displaystyle |3x-3|=2x-7}$ Because the absolute value is isolated, we can begin with our generalized procedure.
Assuming ${\displaystyle 2x-7>0}$, we may begin by denoting these two equations:

(1) ${\displaystyle 3x-3=2x-7}$
(2) ${\displaystyle 3x-3=-(2x-7)}$

These are only true if ${\displaystyle 2x-7>0}$. For now, assume this condition is true. Let us solve for ${\displaystyle x}$ with each respective equation:

Equation (1) ${\displaystyle 3x-3=2x-7\qquad {\text{Add }}3{\text{ and subtract }}2x{\text{ on both sides.}}}$ ${\displaystyle x=-4}$

Equation (2) ${\displaystyle 3x-3=-(2x-7)\qquad {\text{Distribute }}-1{\text{.}}}$ ${\displaystyle 3x-3=-2x+7\qquad \quad {\text{Add }}3{\text{ and add }}2x{\text{ on both sides.}}}$ ${\displaystyle 5x=10\qquad \qquad \qquad \quad {\text{Divide }}5{\text{ on both sides.}}}$ ${\displaystyle x=2}$

We have two potential solutions to the equation. Try to answer why we said potential here based on what you know so far about this problem. Why did we state we had two potential solutions? Because we had to assume that ${\displaystyle 2x-7>0}$ and that ${\displaystyle |3x-3|=2x-7}$ is true for the provided ${\displaystyle x}$. Because of this, we have to verify that solutions to this equation exist. Therefore, let us substitute those values into the equation:

${\displaystyle |3(-4)-3|=2(-4)-7}$. Notice that the right-hand side is negative. Also, the left-hand side and the right-hand side are not equivalent. Therefore, this is not a solution.

${\displaystyle |3(2)-3|=2(2)-7}$. Notice the right-hand side is negative, again. Also, the left-hand side and the right-hand side are not equivalent. Therefore, this cannot be a solution.

This equation has no real solutions. More specifically, it has two extraneous solutions (i.e. the solutions we found do not satisfy the equality when we substitute them back in). Despite doing the procedure outlined since the first problem, you obtain two extraneous solutions. This is not the fault of the procedure but a simple result of the equation itself. Because the left-hand side can never be negative, the right-hand side cannot be negative either. Under that restriction, the two sides are never equal for the same value of ${\displaystyle x}$. This is all a matter of properties of functions.

Example 2.0(f): Solve for ${\displaystyle a}$: ${\displaystyle 6\left\vert 5{\frac {a}{6}}+{\frac {1}{12}}\right\vert ={\frac {3}{5}}|15a+15|}$

All the properties learned will be needed here, so let us hope you did not skip anything. It will certainly make our lives easier if we know the properties we are about to employ in this problem. ${\displaystyle 6\left\vert 5{\frac {a}{6}}+{\frac {1}{12}}\right\vert ={\frac {3}{5}}|15a+15|\qquad {\text{Distribute, so to speak, the constant terms.}}}$ ${\displaystyle \left\vert 5a+{\frac {1}{2}}\right\vert =|9a+9|}$ At first glance, the resulting equation might look absurd. However, an application of the fundamental properties of absolute values is enough to do this problem.

(3) ${\displaystyle 5a+{\frac {1}{2}}=|9a+9|}$
(4) ${\displaystyle 5a+{\frac {1}{2}}=-|9a+9|}$

Peel the problem one layer at a time. For this one, we will categorize equations based on where they come from; this should hopefully explain the dashes: 3-1 is the first equation formulated from (3), for example.
(3-1) ${\displaystyle 9a+9=5a+{\frac {1}{2}}}$
(3-2) ${\displaystyle 9a+9=-\left(5a+{\frac {1}{2}}\right)}$
(4-1) ${\displaystyle -(9a+9)=5a+{\frac {1}{2}}}$
(4-2) ${\displaystyle -(9a+9)=-\left(5a+{\frac {1}{2}}\right)}$

We can demonstrate that some of these equations are equivalent to each other. For example, (3-1) and (4-2) are equivalent, since dividing both sides of (4-2) by ${\displaystyle -1}$ gives (3-1). Further, (3-2) and (4-1) are equivalent (multiply both sides of equation (4-1) by ${\displaystyle -1}$). After determining all the equations that are equivalent, distribute ${\displaystyle -1}$ to the corresponding parentheses.

(5) ${\displaystyle 9a+9=5a+{\frac {1}{2}}}$
(6) ${\displaystyle 9a+9=-5a-{\frac {1}{2}}}$

Now all that is left to do is solve the equations. We will leave this step as an exercise for the reader. There are two potential solutions: ${\displaystyle a=-{\frac {19}{28}},-{\frac {17}{8}}}$. All that is left to do is verify that the equation in the question is true when looking at these specific values of ${\displaystyle a}$:

${\displaystyle a=-{\frac {19}{28}}}$ ${\displaystyle \left\vert 5\left(-{\frac {19}{28}}\right)+{\frac {1}{2}}\right\vert =\left\vert 9\left(-{\frac {19}{28}}\right)+9\right\vert }$ is true. The two sides give the same value: ${\displaystyle {\frac {81}{28}}\approx 2.893}$.

${\displaystyle a=-{\frac {17}{8}}}$ ${\displaystyle \left\vert 5\left(-{\frac {17}{8}}\right)+{\frac {1}{2}}\right\vert =\left\vert 9\left(-{\frac {17}{8}}\right)+9\right\vert }$ is true. The two sides give the same value: ${\displaystyle {\frac {81}{8}}=10.125}$.

Because both solutions check out, the two solutions are ${\displaystyle a=-{\frac {19}{28}},-{\frac {17}{8}}\blacksquare }$.

Absolute value equations can be very useful in the real world, usually when it comes to modeling. We will introduce one example of a standard modeling problem, then one unusual application in geometry (EXAMPLE WIP).

Example 2.0(g): Window Fitting

Question: Alfred wants to place a window so that the length of the window varies by 70% the length of the room. The room is 45 feet high and 70 feet in length. If the centered window takes up the entire vertical height of the wall, (a) what is the maximum surface area of the wall excluding the window? (b) assuming the room has a rectangular ${\displaystyle \displaystyle 70\times 30}$ base and roof, and this window design repeats for all sides of the room (except the two door sides), what is the internal surface area of the room that excludes the window panes?

(a) ${\displaystyle \displaystyle 945{\text{ ft}}^{2}}$
(b) ${\displaystyle \displaystyle 13,440{\text{ ft}}^{2}}$

Explanation: The hardest part about this problem is attempting to understand the situation. Once a student understands the problem presented, the rest of the steps are mostly simple. The procedure we used to solve many linear-equation word problems shall be used here, since it helps us condense a ton of information into something more "bite-sized."

1. List useful information (optional second step, or necessary first step).
2. Draw a picture (optional second step, or necessary first step).
3. Find tools to solve the problem based on the list.
4. Make and solve equations.

Figure 3: For a ${\displaystyle \displaystyle 70\times 45{\text{ ft}}^{2}}$ wall, if the length of the window varies by 70% the length of the room, what is the maximum area of the wall excluding the window?

We will be using these steps for items (a) and (b).
First, we will list the information as below:

• Length of the window varies by 70% of the room length.
• Room is 45 ft. high.
• Room is 70 ft. in length.
• Window takes up entire vertical height of wall.
• Window is centered according to length of wall (by previous item).
• The room has a rectangular base of ${\displaystyle \displaystyle 75\times 35{\text{ ft}}^{2}}$

Next, sketch the situation based on our list. A good sketch (Figure 3) can tell you a lot more than the list. As such, this step may be used more so than the list. This is why this step may be optional if you listed out the information presented in the problem. From our sketch (the tool for solving the problem), we can come up with an equation to help solve for ${\displaystyle x}$, the width of wall on either side of the window. Because the absolute value describes the distance (or length), and we want the window length to be 70% of the room length, we may come to this conclusion: ${\displaystyle \displaystyle |70-2x|={\frac {7}{10}}\cdot 70=49}$ From there, we can solve the equation. {\displaystyle \displaystyle {\begin{aligned}70-2x&=49&70-2x&=-49&{\text{Original equation}}\\-2x&=-21&-2x&=-119&{\text{Subtraction property of equality}}\\x&={\frac {21}{2}}=10.5{\text{ ft}}&x&={\frac {119}{2}}=59.5{\text{ ft}}&{\text{Division property of equality}}\end{aligned}}} In our situation, it makes no sense to consider ${\displaystyle \displaystyle x=59.5}$ because it results in a negative length for the window, so we reject ${\displaystyle \displaystyle x=59.5}$. It is always important to keep in mind context when working with word problems. This information will be very useful for item (b). Part (a) asks us to find the area of the wall side excluding the window. This tells us the area of the wall, according to our sketch, is ${\displaystyle \displaystyle xh+xh=2xh=2\cdot \left({\frac {21}{2}}\right)\cdot 45=945{\text{ ft}}^{2}}$ ${\displaystyle \displaystyle \blacksquare }$

Item (b) gave us the following information, along with what we found in working (a):

• Rectangular ${\displaystyle \displaystyle 75\times 35}$ base and roof.
• Wall area excluding the window is ${\displaystyle \displaystyle A=945{\text{ ft}}^{2}}$.
• Two sides have no windows, meaning the surface area of each such wall is ${\displaystyle \displaystyle A=45\times 70=3,150{\text{ ft}}^{2}}$

No sketch will be provided for item (b). With all the information out of the way, we can easily find the surface area that excludes all windows. ${\displaystyle \displaystyle S=2\cdot \left(75\times 35{\text{ ft}}^{2}\right)+2\cdot \left(945{\text{ ft}}^{2}\right)+2\cdot 3,150{\text{ ft}}^{2}=13,440{\text{ ft}}^{2}}$ ${\displaystyle \displaystyle \blacksquare }$

The next problem typically requires some trigonometry to solve easily. However, with one extra piece of information, one can use the properties of the absolute value function to solve the following problem.

Example 2.0(h): Tiling a Roof (adapted from Trigonometry Book 1)

Figure 2: The plan for a roof is given in the image above. We want to find the area of the figure using only what is given, with absolutely no trigonometry.

An engineer is planning to make a roof with a ${\displaystyle \displaystyle 30}$ m. frame base and ${\displaystyle \displaystyle 100}$ m. perimeter. The angle of the slope of the roof to the base is ${\displaystyle \displaystyle \theta }$. The sloped sides are congruent. A reference image (Figure 2) of the sloped roof (with no cartesian plane) is provided.
Given the area of a triangle is ${\displaystyle \displaystyle {\frac {1}{2}}bh}$, and the distance formula is ${\displaystyle \displaystyle d={\sqrt {(\Delta x)^{2}+(\Delta y)^{2}}}}$, find the area of the triangular cross section of the roof.

Answer ${\displaystyle A=474.342{\text{ m}}^{2}}$

Figure 3

Explanation: This problem requires you to think about which quantities do not change, so that you can determine the one situation that makes all of the given conditions possible. We will first apply the problem-solving steps we derived earlier, and then discuss one difference in this problem that somewhat breaks our algorithm. We will draw it first.

Drawing: We can gain a lot of information from Figure 3.

${\displaystyle a>0}$
${\displaystyle b>0}$
${\displaystyle c<0}$
${\displaystyle f(x)=-a|x|}$; specifically, ${\displaystyle f(b)=-a|b|=c}$ and ${\displaystyle f(-b)=-a|-b|=c}$.
${\displaystyle d={\sqrt {15^{2}+\left(f(b)\right)^{2}}}}$
${\displaystyle b=15}$ because ${\displaystyle \Delta x=15}$ by the above distance equation for ${\displaystyle d}$.
The values of ${\displaystyle a,b,c}$ are constant. The height ${\displaystyle h=|c|}$ is constant, and the base has constant length, so ${\displaystyle a,b}$ are constant.

Tool Finding: Our drawing helped us glean a lot of information. Knowing the perimeter is ${\displaystyle 100{\text{ m}}}$ tells us that the distance is {\displaystyle {\begin{aligned}2d+30&=100\\2d&=70\\d&=35{\text{ m}}\end{aligned}}} However, Figure 3 tells us that ${\displaystyle d={\sqrt {15^{2}+\left(f(b)\right)^{2}}}}$. Therefore, by the transitive property, {\displaystyle {\begin{aligned}{\sqrt {15^{2}+\left(a|b|\right)^{2}}}&=35\\15^{2}+a^{2}b^{2}&=35^{2}&|b|=b{\text{ and }}(ab)^{2}=a^{2}b^{2}\\15^{2}\left(1+a^{2}\right)&=35^{2}&b=15{\text{ and distributive property.}}\\1+a^{2}&={\frac {49}{9}}&{\text{Division property of equality.}}\\a&={\sqrt {\frac {40}{9}}}&{\text{Subtraction and exponent property of equality.}}\end{aligned}}} After knowing the vertical scaling, we can determine the height of the triangle. From there, the area. ${\displaystyle h=|c|=15{\sqrt {\frac {40}{9}}}\approx 31.623{\text{ m}}}$ The area of the triangle is therefore ${\displaystyle A={\frac {1}{2}}\cdot 30\cdot 15{\sqrt {\frac {40}{9}}}\approx 474.342{\text{ m}}^{2}\blacksquare }$

Notice how it was not necessary for us to solve for a specific value of ${\displaystyle x}$ based on the absolute value equation. The only aspects of absolute value equations necessary for this problem are the graph properties and some logic. In a way, this is the easiest absolute value problem. However, the creativity it requires makes up for the "easiness" of the problem.

### Practice Problems

1 ${\displaystyle |k+6|=2k}$ ${\displaystyle k=}$

2 ${\displaystyle |7+3a|=11-a}$ ${\displaystyle a\in \{}$ , ${\displaystyle \}}$

3 ${\displaystyle |2k+6|+6=0}$ How many solutions?

## Inequalities with Absolute Values

It is important to keep in mind that one function's output can be less than another's. For example, ${\displaystyle 2x-5<54-13x}$ holds precisely for ${\displaystyle x<{\frac {59}{15}}=3+{\frac {14}{15}}}$. So long as the value for ${\displaystyle x}$ is within that range, the output of ${\displaystyle 2x-5}$ is less than the output of ${\displaystyle 54-13x}$. The algebra for inequalities of ${\displaystyle f(x)=|x|}$ requires a bit more demonstration to understand.
While the methods we use will not be proven, per se, our examples and explanations should give a good intuition behind the idea of solving inequalities with absolute values.

Example 3.0(a): ${\displaystyle |10-20x|<50}$

First, let us simplify the following expression through the method we demonstrated in the previous section (factoring the inside of the absolute value and bringing the constant out). Keep in mind that since we are switching the sides from which we view the relation (the 50 is now on the left instead of the right), we must also "flip" the inequality to stay consistent with the original. {\displaystyle {\begin{aligned}50&>|10-20x|\\&=|10\cdot (1-2x)|\\&=10\cdot |1-2x|\end{aligned}}} From there, it should be easy to see that ${\displaystyle |1-2x|<5}$

Let us further analyze this situation. What the above relation is saying is that ${\displaystyle y=|1-2x|}$ is less than the function ${\displaystyle y=5}$. We want to make sure the inside value keeps the absolute value less than five. Because the absolute value describes the distance, there are two realities to the function. Let ${\displaystyle A(x)=|1-2x|}$ ${\displaystyle A(x)=|1-2x|={\begin{cases}1-2x,&{\text{if}}&x\leq {\frac {1}{2}}\\-(1-2x),&{\text{if}}&x>{\frac {1}{2}}\end{cases}}}$ Because there are two "pieces" to the function ${\displaystyle A(x)}$, and we want each piece to be less than 5, ${\displaystyle 1-2x<5}$ and ${\displaystyle -(1-2x)<5}$ We will demonstrate the more common procedure in the next example. For now, this intuition should begin to form an idea of algebraic analysis. We will solve the left-hand then the right-hand case.

Solving for ${\displaystyle x}$ in ${\displaystyle |1-2x|<5}$.

Left-hand case: ${\displaystyle 1-2x<5}$ Recall how multiplying both sides by a negative factor requires us to "flip" the inequality. Therefore, solving for ${\displaystyle x}$: ${\displaystyle \Leftrightarrow x>-2}$

Right-hand case: ${\displaystyle -(1-2x)<5}$ ${\displaystyle \Leftrightarrow 1-2x>-5}$ ${\displaystyle \Leftrightarrow x<3}$

We have found the set of values that makes the inequality ${\displaystyle |1-2x|<5}$ true, and it is the set of values of ${\displaystyle x}$ between ${\displaystyle -2}$ and ${\displaystyle 3}$, non-inclusive.

The above example is an intuition behind how solving such inequalities works. Technically speaking, we could make a proof for why we have to "operate" on absolute value inequalities this way (take the steps seen above). However, this would be a little too technical and involve a lot of generalization that could potentially confuse students rather than enlighten. If the student feels the challenge is worth it, then one may try to prove the steps we list below. This is considered standard procedure (according to many High School textbooks).

1. Simplify until only the "absolute value bar term" is left.
2. Solve two cases: relate the expression inside the absolute value to the other side with the given inequality (the "left-hand" case); then negate the other side, flip the inequality, and solve again (the "right-hand" case).
3. Rewrite ${\displaystyle x}$ into the necessary notation.

Although the procedure may seem to be confusing, we are really only trying to make the algorithm as specific as possible. In reality, we will show just how easy it is to apply this algorithm to the problem above.

Example 3.0(a) (REPEAT): ${\displaystyle |10-20x|<50}$

Let us skip to the most simplified form. ${\displaystyle |1-2x|<5}$ Now let us apply the above algorithm.
${\displaystyle 1-2x<5}$ and ${\displaystyle 1-2x>-5}$ (notice the negation and flipping for the right-hand relation). From there, we will solve.

Solving for ${\displaystyle x}$ in ${\displaystyle |1-2x|<5}$.

Left-hand case: ${\displaystyle 1-2x<5}$ Recall how multiplying both sides by a negative factor requires us to "flip" the inequality. Therefore, solving for ${\displaystyle x}$: ${\displaystyle \Leftrightarrow x>-2}$

Right-hand case: ${\displaystyle 1-2x>-5}$ ${\displaystyle \Leftrightarrow x<3}$

There are two possible reasons why this procedure exists. For one, it allows us to quickly solve for ${\displaystyle x}$ in the "right-hand" case without doubling the number of multiplications needed to solve for ${\displaystyle x}$ (it lessens the number of times we have to flip the inequality). Next, it allows us to focus more on the idea behind absolute value relations (the absolute value is never negative, and hence we want to find every value of ${\displaystyle x}$ that yields a possible solution). Nevertheless, keep in mind how we found this procedure: by applying the piecewise definition of the absolute value. In reality, we did the exact same thing for absolute value equations. The only difference is that the algorithm is now applied to an inequality, which further "complicates" matters for the non-injective absolute value function. Through finding two solutions, we gave two possible ranges for values of ${\displaystyle x}$. Hopefully, this example shines some light on what many high schoolers think of as "black magic" when finding solutions to absolute value inequalities and equations. The next examples should hopefully reinforce the concepts learned. Keep in mind, if one does not like the algorithm presented in the repeat example above, one is perfectly fine using the other algorithm. The benefit of multiple choice is the ability to use any method, and only the correctness of your answer will be considered.

Example 3.0(b): ${\displaystyle 15x-|12x+10|>13x}$ Explanations given later

Example 3.0(c): ${\displaystyle \left\vert {\frac {5}{12}}x-87\right\vert \leq 100}$ Explanations given later

Example 3.0(d): ${\displaystyle 15x-|12x+10|>13x}$ Explanations given later

Introduction to example included later.

Example 3.0(e): Variable temperature problem

Problem: The temperature in a room averages around ${\displaystyle 20^{\circ }{\text{C}}}$ in the summer without air conditioning. The change in temperature is dependent on the ambient weather conditions. Without air conditioning, the maximum change in temperature from the average is ${\displaystyle 4^{\circ }{\text{C}}}$. When the air conditioning is on, the temperature of the room is a function of time ${\displaystyle t}$ (in hours), given by ${\displaystyle C(t)=-{\frac {3}{2}}t+20}$. The maximum deviation in temperature should be no more than ${\displaystyle 5^{\circ }{\text{C}}}$.

(a) Write an equation that represents the temperature of the room without and with air conditioning, respectively.
(b) Determine the minimum temperature value of the room without air conditioning in the summer.
(c) At what time must the air conditioning stop for the temperature to drop by at most ${\displaystyle 5^{\circ }{\text{C}}}$?

Answers:
(a) ${\displaystyle |T-20|\leq 4}$ and ${\displaystyle \left\vert {\frac {3}{2}}t\right\vert \leq 5}$.
(b) ${\displaystyle T_{min}=16^{\circ }{\text{C}}}$.
(c) ${\displaystyle 3{\tfrac {1}{3}}}$ hours.

Explanation: When working with word problems, it is best to rewrite the problem into something algebraic or "picturesque" (i.e. draw the problem out). One can also use both, as we will soon do.

Temperature Variation Situation

The benefit of drawing a picture (or, more accurately, a sketch) of the situation is being able to more easily interpret the situation. We are highly visual people, after all, so a picture is a lot easier to understand than words. The highly intuitive nature of geometry also lends itself well to algebraic interpretations. Let us reread the situation without A/C.

"Without air conditioning, the maximum change in temperature from the average is ${\displaystyle 4^{\circ }{\text{C}}}$."

This gives us a lot of information. We know that ${\displaystyle T_{\text{max}}=T_{\text{avg}}+4}$ and ${\displaystyle T_{\text{min}}=T_{\text{avg}}-4}$, so to keep it as one singular equation, it is best to write it as an absolute value equation. For this situation,

(8) ${\displaystyle |T-20|\leq 4}$

It is important to know why this is true. Recall that the absolute value represents the distance from ${\displaystyle 0}$ for the inside value. If ${\displaystyle 20}$ is the reference point, then to get ${\displaystyle 0}$ from ${\displaystyle 20}$, you need to subtract 20 from the current value ${\displaystyle T}$. As such, this equation is true. Now let us look at the situation for the air conditioning.

"When the air conditioning is on, the temperature of the room is a function of time, given by ${\displaystyle C(t)=-{\frac {3}{2}}t+20}$. The maximum deviation in temperature should be no more than ${\displaystyle 5^{\circ }{\text{C}}}$."

Based on the wording of the sentence, the temperature ${\displaystyle T=C(t)}$ is based on the time, and the temperature can only be at most ${\displaystyle 5^{\circ }{\text{C}}}$ from the average. By the same logic given for Equation (8),

(9) ${\displaystyle |C(t)-20|\leq 5}$

Equation (9) is left in the same form to show how similar the two equations are, and to also relate more to the wording of the set-up text. Substituting ${\displaystyle C(t)=-{\frac {3}{2}}t+20}$, one can simplify the equation further to obtain

(10) ${\displaystyle \left\vert -{\frac {3}{2}}t\right\vert \leq 5}$

Recall how ${\displaystyle |(-1)x|=1\cdot |x|}$ (the case ${\displaystyle c=-1}$ of the product property). Because of this property, one can simplify the equation further to obtain the final equation for part (a):

(11) ${\displaystyle \left\vert {\frac {3}{2}}t\right\vert \leq 5}$

This sufficiently answers item (a), perhaps the hardest part of the question. However, with the two equations obtained, (8) and (11), we can answer both items (b) and (c). Let us reread parts (b) and (c) using our understanding of the question:

"Determine the minimum temperature value of the room without air conditioning in the summer."

This is, in essence, asking the examinee to find the value of ${\displaystyle T_{\text{min}}}$ using (8). The previous examples should have hopefully prepared you for solving absolute value inequalities.

Solving for ${\displaystyle T}$ in ${\displaystyle |T-20|\leq 4}$.

Positive case: ${\displaystyle T-20\leq 4}$ ${\displaystyle \Leftrightarrow T\leq 24}$

Negative case: ${\displaystyle T-20\geq -4}$ ${\displaystyle \Leftrightarrow T\geq 16}$

Since the problem is asking for the minimum temperature value, ${\displaystyle T_{min}}$, of the room at ambient temperature, the correct answer here is ${\displaystyle T_{min}=16^{\circ }{\text{C}}}$.
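As a quick sanity check of the boxed results (an added illustration, not part of the wikibook), the short Python sketch below scans a grid of temperatures and times against the two inequalities ${\displaystyle |T-20|\leq 4}$ and ${\displaystyle \left\vert {\frac {3}{2}}t\right\vert \leq 5}$.

```python
# Grid check of |T - 20| <= 4 and |1.5*t| <= 5 (illustration only).

temps = [t / 10 for t in range(100, 301)]     # candidate temperatures 10.0 .. 30.0 degC
ok_T = [T for T in temps if abs(T - 20) <= 4]
print(min(ok_T), max(ok_T))                   # 16.0 24.0  -> T_min = 16 degC

times = [t / 100 for t in range(0, 1001)]     # candidate times 0.00 .. 10.00 hours
ok_t = [t for t in times if abs(1.5 * t) <= 5]
print(max(ok_t))                              # 3.33  -> about 3 1/3 hours
```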
Keep in mind, we are allowed to put that equal sign there thanks to the problem's wording ("at most" implies less than or equal to). Also, always remember to put units in word problems. "At what time must the air conditioning stop for the temperature to drop by at most ${\displaystyle 5^{\circ }{\text{C}}}$ ?"This is, in essence, asking the examinee to find the value of time ${\displaystyle t}$  (in hours) using the most simplified equation, Equation (10). Solving for ${\displaystyle t}$  in ${\displaystyle \left\vert {\frac {3}{2}}t\right\vert \leq 5}$ . Positive case: ${\displaystyle {\frac {3}{2}}t\leq 5}$  ${\displaystyle \Leftrightarrow t\leq {\frac {10}{3}}}$  Negative case: ${\displaystyle {\frac {3}{2}}t\geq -5}$  ${\displaystyle \Leftrightarrow t\geq -{\frac {10}{3}}}$ Because, in essence, only the positive case is considered (we are only looking at time ${\displaystyle t\geq 0}$ ), the maximum amount of time that the air conditioning will allow is ${\displaystyle t={\frac {10}{3}}=3{\tfrac {1}{3}}}$  hours. ## Lesson Review An absolute value (represented with |'s) stands for the number's distance from 0 on the number line. This essentially makes a negative number positive although a positive number remains the same. To solve an equation involving absolute values, you must get the absolute value by itself on one side and set it equal to the positive and negative version of the other side, because those are the two solutions the absolute value can output. However, check the solutions you get in the end; some might produce negative numbers on the right side, which are impossible because all outputs of an absolute value symbol are positive! ## Lesson Quiz Evaluate each expression. 1 ${\displaystyle |-4|=}$ 2 ${\displaystyle |6-8|=}$ Solve for ${\displaystyle a}$ . Type NS (with capitalization) into either both fields or the right field for equations with no solutions. Any solutions that are extraneous (don't work when substituted into the equation) should be typed with XS on either the right field or both. Order the solutions from least to greatest. 3 ${\displaystyle |3a-4|=5}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ 4 ${\displaystyle 5|2a+3|=15}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ 5 ${\displaystyle 3|4a-2|-12=-3}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ 6 ${\displaystyle |a+1|-18=a-15}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ 7 ${\displaystyle 2\left\vert {\frac {a}{2}}-1\right\vert -2a=-4a}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ Read the situations provided below. Then, answer the prompt or question given. Type NS (with capitalization) into either both fields or the right field for equations that have no solutions. Any solutions that are extraneous should be typed with XS on either the right field or both. Order the solutions from least to greatest. 8 The speed of the current of a nearby river deviates ${\displaystyle 1.5{\tfrac {\text{m}}{\text{s}}}}$  from the average speed ${\displaystyle 20{\tfrac {\text{m}}{\text{s}}}}$ . Let ${\displaystyle s}$  represent the speed of the river. Select all possible equations that could describe the situation. ${\displaystyle |s-1.5|=20}$ ${\displaystyle |s+1.5|=20}$ ${\displaystyle |20-s|=1.5}$ ${\displaystyle |s-20|=1.5}$ ${\displaystyle |s+20|=1.5}$ ${\displaystyle |1.5-s|=20}$ 9 A horizontal artificial river has an average velocity of ${\displaystyle -4{\tfrac {\text{m}}{\text{s}}}}$ . 
The velocity increases proportionally to the mass of the rocks, ${\displaystyle r}$ , in kilograms, blocking the path of the current. Assume the river's velocity for the day deviates a maximum of ${\displaystyle 6{\tfrac {\text{m}}{\text{s}}}}$ . If the proportionality constant is ${\displaystyle k={\frac {2}{5}}}$  meters per kilograms-seconds, what is the maximum mass of the rocks in the river for that day? ${\displaystyle 5}$  kilograms. ${\displaystyle 15}$  kilograms. ${\displaystyle 25}$  kilograms. ${\displaystyle 35}$  kilograms.
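For readers who want to double-check their case-split work on problems like those above, here is a small Python sketch (an added illustration, not part of the wikibook) that substitutes candidate solutions back into an equation of the form |inside| = right-hand side and flags extraneous ones, mirroring the verification step used in Examples 2.0(e) and 2.0(f).

```python
def check_candidates(lhs_inside, rhs, candidates, tol=1e-9):
    """Substitute each candidate x into |lhs_inside(x)| = rhs(x) and report
    whether it is a genuine solution (True) or an extraneous one (False)."""
    return {x: abs(abs(lhs_inside(x)) - rhs(x)) < tol for x in candidates}

# Re-check Example 2.0(e): |3x - 3| = 2x - 7 with case-split candidates -4 and 2.
print(check_candidates(lambda x: 3*x - 3, lambda x: 2*x - 7, [-4, 2]))
# {-4: False, 2: False}  -> both candidates are extraneous; no real solutions.

# Re-check Example 2.0(b): |2k + 6| = 8 with candidates -7 and 1.
print(check_candidates(lambda k: 2*k + 6, lambda k: 8, [-7, 1]))
# {-7: True, 1: True}  -> both are genuine solutions.
```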
https://collegephysicsanswers.com/openstax-solutions/standing-base-one-cliffs-mt-arapiles-victoria-australia-hiker-hears-rock-break
Question Standing at the base of one of the cliffs of Mt. Arapiles in Victoria, Australia, a hiker hears a rock break loose from a height of 105 m. He can't see the rock right away but then does, 1.50 s later. (a) How far above the hiker is the rock when he can see it? (b) How much time does he have to move before the rock hits his head? a) $94.0 \textrm{ m}$ b) $3.13 \textrm{ s}$
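As a quick check of the stated answers, here is a short Python sketch (not part of the original page) that applies constant-acceleration kinematics, assuming g = 9.80 m/s^2 and that the rock falls from rest.

```python
import math

g = 9.80        # m/s^2, assumed gravitational acceleration
h0 = 105.0      # m, initial height of the rock above the hiker
t_seen = 1.50   # s, time after break-off when the hiker first sees the rock

# (a) Height above the hiker when first seen: h = h0 - (1/2) g t^2
h_seen = h0 - 0.5 * g * t_seen**2
print(f"(a) height when seen: {h_seen:.1f} m")            # ~94.0 m

# (b) Remaining time = total fall time minus the 1.50 s already elapsed
t_total = math.sqrt(2 * h0 / g)
print(f"(b) time left to move: {t_total - t_seen:.2f} s")  # ~3.13 s
```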
https://open.kattis.com/problems/gowithflow
# Go with the Flow

In typesetting, a “river” is a string of spaces formed by gaps between words that extends down several lines of text. For instance, Figure 1 shows several examples of rivers highlighted in red (text is intentionally blurred to make the rivers more visible). Celebrated river authority Flo Ng wants her new book on rivers of the world to include the longest typographic rivers possible. She plans to set the text in a mono-spaced font (all letters and spaces have equal width) in a left-aligned column of some fixed width, with exactly one space separating words on each line (the text is not aligned on the right). For Flo, a “river” is defined as a sequence of spaces lying in consecutive lines in which the position of each space in the sequence (except the first) differs by at most $1$ from the position of the space in the line above it. Trailing white space cannot appear in a river. Words must be packed as tightly as possible on lines; no words may be split across lines. The line width used must be at least as long as the longest word in the text. For instance, Figure 2 shows the same text set with two different line widths.

Line width 14: River of length 4    Line width 15: River of length 5

The Yangtze is|    The Yangtze is |
the third     |    the third      |
longest river |    longest*river  |
in*Asia and   |    in Asia*and the|
the*longest in|    longest*in the |
the*world to  |    world to*flow  |
flow*entirely |    entirely*in one|
in one country|    country        |

Figure 2: Longest rivers (*) for two different line widths.

Given a text, you have been tasked with determining the line width that produces the longest river of spaces for that text.

## Input

The first line of input contains an integer $n$ ($2 \leq n \leq 2\, 500$) specifying the number of words in the text. The following lines of input contain the words of text. Each word consists only of lowercase and uppercase letters, and words on the same line are separated by a single space. No word exceeds $80$ characters.

## Output

Display the line width for which the input text contains the longest possible river, followed by the length of the longest river. If more than one line width yields this maximum, display the shortest such line width.

Sample Input 1:
21
The Yangtze is the third longest river in Asia and the longest in the world to flow entirely in one country

Sample Output 1:
15 5

Sample Input 2:
25
When two or more rivers meet at a confluence other than the sea the resulting merged river takes the name of one of those rivers

Sample Output 2:
21 6

CPU time limit: 8 seconds. Memory limit: 2048 MB.
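One straightforward way to attack this problem is to simulate the greedy word wrap for every candidate line width, tracking for each space column the length of the river ending there on the current line. The Python sketch below is my own illustration of that idea, not an official solution; within the stated limits this brute-force simulation may need a faster language or further optimization to fit the time limit, but the logic carries over directly.

```python
import sys

def longest_river(words, width):
    """Greedy-wrap words at the given width and return the longest river length."""
    prev = {}            # space column -> river length ending there on the previous line
    best = 0
    line, length = [words[0]], len(words[0])
    for w in words[1:] + [None]:             # None flushes the final line
        if w is not None and length + 1 + len(w) <= width:
            line.append(w)
            length += 1 + len(w)
            continue
        cur, col = {}, 0                     # line is full: record its space columns
        for word in line[:-1]:
            col += len(word)                 # column of the space right after this word
            run = 1 + max(prev.get(col - 1, 0), prev.get(col, 0), prev.get(col + 1, 0))
            cur[col] = run
            best = max(best, run)
            col += 1                         # step over the space
        prev = cur
        if w is not None:
            line, length = [w], len(w)
    return best

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    words = data[1:n + 1]
    lo = max(len(w) for w in words)          # narrowest legal width
    hi = sum(len(w) for w in words) + n - 1  # everything fits on one line
    best_width, best_len = lo, 0
    for width in range(lo, hi + 1):
        r = longest_river(words, width)
        if r > best_len:                     # strict '>' keeps the smallest width
            best_width, best_len = width, r
    print(best_width, best_len)

if __name__ == "__main__":
    main()
```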
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39442113041877747, "perplexity": 1949.4045363421428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00591.warc.gz"}
https://cs.nyu.edu/dynamic/news/seminar_event/638/
# Numerical Analysis and Scientific Computing Seminar

## Optimization on Riemannian Manifolds for Solving Rank-structured Matrix and Tensor Problems

Speaker: Bart Vandereycken, Princeton University
Location: Warren Weaver Hall 1302
Date: Oct. 12, 2012, 10 a.m.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9325942397117615, "perplexity": 14776.74566421791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249434065.81/warc/CC-MAIN-20190223021219-20190223043219-00088.warc.gz"}
https://brain-helper.com/2013/12/15/1089/
Over the past few decades, Morse theory has undergone many generalizations, into many different fields.  At the moment, I only know of a few, and I understand even fewer. Well, let's begin at the beginning:

• Classical Morse theory (CMT)
• Stratified Morse theory (SMT)
• Micro-local Morse theory (MMT)

The core of these theories is, of course, the study of Morse functions on suitable spaces and generalizations/interpretations of theorems in CMT to these spaces.  For CMT, the spaces are smooth manifolds (or, compact manifolds, if your definition of Morse function doesn't require properness).  SMT looks at Morse functions on (Whitney) stratified spaces, usually real/complex varieties (either algebraic or analytic), and more generally, subanalytic subsets of smooth manifolds.  MMT deals with both cases, but from a more "meta" perspective that I'm not going to tell you about right now.

The overarching theme is pretty simple: one can investigate the (co)homology of $M$ by examining the behavior of level sets of Morse functions as they "pass through" critical values.  First, we'll need some notation.  Let $M$ be a smooth manifold, $a < b \in \mathbb{R}$, and let $f: M \to \mathbb{R}$ be a smooth function.  Then, set

• $M_{\leq a} := f^{-1}(-\infty,a]$
• $M_{< a} := f^{-1}(-\infty,a)$
• $M_{[a,b]} := f^{-1}[a,b]$

In CMT, this overarching idea is described by two "fundamental" theorems:

Fundamental Theorem of Classical Morse theory, A (CMT;A): Suppose $f$ has no critical values on the interval $[a,b] \subseteq \mathbb{R}$.  Then, $M_{\leq a}$ is diffeomorphic to $M_{\leq b}$, and the inclusion $M_{\leq a} \hookrightarrow M_{\leq b}$ is a homotopy equivalence (that is, $M_{\leq a}$ is a deformation-retract of $M_{\leq b}$). Homologically speaking, this last point can be rephrased as $H_*(M_{\leq b},M_{\leq a}) = 0$ (for singular homology with $\mathbb{Z}$ coefficients).

Fundamental Theorem of Classical Morse theory, B (CMT;B): Suppose that $f$ has a unique critical value $v$ in the interior of the interval $[a,b] \subseteq \mathbb{R}$, corresponding to the isolated critical point $p \in M$ of index $\lambda$.  Then, $H_k(M_{\leq b},M_{\leq a})$ is non-zero only in degree $k = \lambda$, in which case $H_\lambda(M_{\leq b},M_{\leq a}) \cong \mathbb{Z}$.

So, if $c \in \mathbb{R}$ varies across a critical value $a < v < b$ of $f$, the topological type of $M_{\leq c}$ "jumps" somehow.  If we want to compare how the topological type of $M_{\leq b}$ differs from that of $M_{\leq a}$, the obvious thing to do is consider them together as a pair of spaces $(M_{\leq b}, M_{\leq a})$ and look at the relative (co)homology of this pair.  CMT;A and CMT;B together tell us that we're only going to get non-zero relative homology of this pair when there is a critical value between $a$ and $b$, and in that case, the homology is non-zero only in degree $\lambda$.

But HOW does the topological type change, specifically, as we cross the critical value?

## Author: brianhepler

I'm a third-year math postdoc at the University of Wisconsin-Madison, where I work as a member of the geometry and topology research group. Generally speaking, I think math is pretty neat; and, if you give me the chance, I'll talk your ear off. Especially the more abstract stuff. It's really hard to communicate that love with the general population, but I'm going to do my best to show you a world of pure imagination.
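To see CMT;A and CMT;B in action in the simplest case, here is the standard first example (added for concreteness): take $M = S^2 \subseteq \mathbb{R}^3$ and let $f(x,y,z) = z$ be the height function, a Morse function with exactly two critical points: the south pole, of index $0$ and critical value $-1$, and the north pole, of index $2$ and critical value $1$.

• If $-1 < a < b < 1$, there are no critical values in $[a,b]$; both $M_{\leq a}$ and $M_{\leq b}$ are closed disks, and $M_{\leq a}$ is a deformation-retract of $M_{\leq b}$, as CMT;A predicts.
• If $a < -1 < b < 1$, then $M_{\leq a} = \emptyset$ and $M_{\leq b}$ is a disk, so $H_k(M_{\leq b},M_{\leq a}) \cong \mathbb{Z}$ exactly in degree $k = 0$, the index of the south pole.
• If $-1 < a < 1 < b$, then $M_{\leq a}$ is a disk and $M_{\leq b} = S^2$, so $H_k(M_{\leq b},M_{\leq a}) \cong \mathbb{Z}$ exactly in degree $k = 2$, the index of the north pole.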
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8297370076179504, "perplexity": 432.13694173554325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00458.warc.gz"}
http://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/appendix-e-sigma-notation-e-exercises-page-a38/42
## Calculus 8th Edition

$|\sum\limits_{i =1}^{n}a_{i}| \leq \sum\limits_{i =1}^{n}|a_{i}|$

Use the triangle inequality $|a+b|\leq |a|+|b|$ and induction on $n$. For $n=1$ the two sides are equal. Assume the claim holds for $n-1$ terms. Writing the sum as $(a_{1}+a_{2}+....+a_{n-1})+a_{n}$ and applying the triangle inequality gives $|a_{1}+a_{2}+....+a_{n}|\leq|a_{1}+a_{2}+....+a_{n-1}|+|a_{n}|\leq|a_{1}|+|a_{2}|+....+|a_{n-1}|+|a_{n}|$, where the second step uses the induction hypothesis. Hence, $|\sum\limits_{i =1}^{n}a_{i}| \leq \sum\limits_{i =1}^{n}|a_{i}|$
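A quick numerical spot-check of the inequality (an illustrative Python snippet, not part of the textbook solution):

```python
import random

# Check |sum a_i| <= sum |a_i| on randomly generated lists.
for _ in range(1000):
    a = [random.uniform(-10, 10) for _ in range(random.randint(1, 20))]
    assert abs(sum(a)) <= sum(abs(x) for x in a) + 1e-9  # tolerance for float rounding
print("inequality held in every trial")
```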
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9876253604888916, "perplexity": 998.7885885739302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937074.8/warc/CC-MAIN-20180419223925-20180420003925-00485.warc.gz"}
https://jeeneetqna.in/624/plane-electromagnetic-wave-frequency-energy-density-vacuum
# A plane electromagnetic wave has a frequency of 2.0 × 10^10 Hz and its energy density is 1.02 × 10^–8 J/m^3 in vacuum.

A plane electromagnetic wave has a frequency of 2.0 × 10^10 Hz and its energy density is 1.02 × 10^–8 J/m^3 in vacuum. The amplitude of the magnetic field of the wave is close to (${1\over4\pi\varepsilon_0}=9\times10^9{Nm^2\over C^2}$ and speed of light = 3 × 10^8 m s^–1):

(1) 160 nT (2) 180 nT (3) 190 nT (4) 150 nT

Electromagnetic waves

Ans: (1) 160 nT

How did you know to use this formula?

There is only one formula: the magnetic energy density.
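The formula referred to here is presumably the time-averaged energy density of an electromagnetic wave, $u = {B_0^2 \over 2\mu_0}$, which gives $B_0 = \sqrt{2\mu_0 u}$; the frequency and the value of ${1\over4\pi\varepsilon_0}$ in the statement are not needed for this route. An illustrative numerical check (not part of the original answer):

```python
import math

# Time-averaged energy density of an EM wave: u = B0^2 / (2 * mu0)  =>  B0 = sqrt(2 * mu0 * u)
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
u = 1.02e-8                # given energy density, J/m^3
B0 = math.sqrt(2 * mu0 * u)
print(B0)                  # ~1.60e-7 T, i.e. about 160 nT, matching option (1)
```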
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9508094787597656, "perplexity": 1863.6769934977199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945279.63/warc/CC-MAIN-20230324082226-20230324112226-00255.warc.gz"}
https://mathoverflow.net/questions/320955/mapping-class-group-and-triangulations
# Mapping Class Group and Triangulations I am a physicist who's getting started with Mapping Class Group for Riemann surfaces, pants decompositions and triangulations so I apologise in advance if the following is a stupid question/wrong. I understand that to any pants decomposition of a Riemann surface we can associate a set of generators (the Dehn twists). Different pants decompositions gives different sets of generators, and relations among various sets of generators are understood as being generated by a minimal sets of relations (Lantern, Chain, Braiding...) My question is: is there a similar picture for Triangulations? Given a Triangulation, can I assign a canonical set of generators of the Mapping Class Group? Can I understand relations between generators using flips of triangulations? Yes. When a group $$G$$ acts geometrically on a metric space $$X$$, by choosing a basepoint $$x_0 \in X$$ you can construct its Dirichlet domain $$D_{x_0} = \{x \; | \; d(x, x_0) \leq d(x, g \cdot x_0) \; \forall g \in G\}$$ When the action of $$G$$ is sufficiently nice, this domain has finitely many sides and geodesics which are perpendicularly bisected by each face form a finite generating set for $$G$$. Since the mapping class group acts geometrically on the (labelled) flip graph (with the graph metric) we can do a similar process starting at a triangulation $$\mathcal{T}_0$$. 1. Let $$X_1$$ be the set of mapping classes which move $$\mathcal{T}_0$$ by the smallest non-zero amount. 2. Let $$X_2$$ be the set of mapping classes which move $$\langle X_1 \rangle \cdot \mathcal{T}_0$$ by the smallest non-zero amount. 3. Let $$X_3$$ be the set of mapping classes which move $$\langle X_1, X_2 \rangle \cdot \mathcal{T}_0$$ by the smallest non-zero amount. 4. Let $$X_4$$ be the set of mapping classes which move $$\langle X_1, X_2, X_3 \rangle \cdot \mathcal{T}_0$$ by the smallest non-zero amount. $$\vdots$$ Then each $$X_i$$ is finite and for some $$N$$ the elements of $$X_1 \cup X_2 \cup \cdots \cup X_N$$ generate $$G$$. Since each of generator $$g$$ can be represented by a path in the flip graph from $$\mathcal{T}_0$$ to $$g(\mathcal{T}_0)$$ the relations between these generators can then be understood from the 2--cells of the flip graphs which give: • the square relation - that disjoint flips commute, and • the pentagon relation - that two flips which share a common triangle form a 5--cycle Since there are explicit descriptions of the action of the mapping class group on the flip graph this entire process can be done on a computer (although as far as I am aware no one has actually done this).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625456094741821, "perplexity": 216.82029695738711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039536858.83/warc/CC-MAIN-20210421100029-20210421130029-00635.warc.gz"}
https://www.researchgate.net/profile/Keisuke-Fujii-9
# Keisuke Fujii

Kyoto University | Kyodai · The Hakubi Center for Advanced Research / Graduate School of Informatics

PhD · 91 Publications · 6,849 Reads · 3,731 Citations (since 2017: 60 Research Items, 3,358 Citations)

April 2013 - December 2014 · Position: Professor (Assistant)
April 2011 - March 2013 · Position: PostDoc Position

## Publications (91)

Article Full-text available t-stochastic neighbor embedding (t-SNE) is a nonparametric data visualization method in classical machine learning. It maps the data from the high-dimensional space into a low-dimensional space, especially a two-dimensional plane, while maintaining the relationship or similarities between the surrounding points. In t-SNE, the initial position of th... Article Current quantum computers are limited in the number of qubits and coherence time, constraining the algorithms executable with sufficient fidelity. The variational quantum eigensolver (VQE) is an algorithm to find an approximate ground state of a quantum system and is expected to work on even such a device. The deep VQE [K. Fujii, et al., arXiv:2007... Article Full-text available Variational quantum algorithms (VQAs) have been proposed as one of the most promising approaches to demonstrate quantum advantage on noisy intermediate-scale quantum (NISQ) devices. However, it has been unclear whether VQAs can maintain quantum advantage under the intrinsic noise of the NISQ devices, which deteriorates the quantumness. Here we prop... Article Full-text available The implementation of time-evolution operators on quantum circuits is important for quantum simulation. However, the standard method, Trotterization, requires a huge number of gates to achieve desirable accuracy. Here, we propose a local variational quantum compilation (LVQC) algorithm, which allows us to accurately and efficiently compile time-evo... Preprint Quantum-inspired singular value decomposition (SVD) is a technique to perform SVD in logarithmic time with respect to the dimension of a matrix, given access to the matrix embedded in a segment-tree data structure. The speedup is possible through the efficient sampling of matrix elements according to their norms. Here, we apply it to extreme learni... Preprint The implementation of time-evolution operators, called Hamiltonian simulation, is one of the most promising usage of quantum computers that can fully exploit their computational powers. For time-independent Hamiltonians, the qubitization has recently established efficient realization of time-evolution, with achieving the optimal computational resou... Article Full-text available Variational quantum algorithms are considered to be appealing applications of near-term quantum computers. However, it has been unclear whether they can outperform classical algorithms or not. To reveal their limitations, we must seek a technique to benchmark them on large-scale problems. Here we propose a perturbative approach for efficient benchm... Preprint Pricing a multi-asset derivative is an important problem in financial engineering, both theoretically and practically. Although it is suitable to numerically solve partial differential equations to calculate the prices of certain types of derivatives, the computational complexity increases exponentially as the number of underlying assets increases...
Article The variational quantum eigensolver (VQE), which has attracted attention as a promising application of noisy intermediate-scale quantum devices, finds a ground state of a given Hamiltonian by variationally optimizing the parameters of quantum circuits called Ansätze. Since the difficulty of the optimization depends on the complexity of the problem... Preprint Full-text available The demonstration of quantum error correction (QEC) is one of the most important milestones in the realization of fully-fledged quantum computers. Toward this, QEC experiments using the surface codes have recently been actively conducted. However, it has not yet been realized to protect logical quantum information beyond the physical coherence time... Preprint Implementing time evolution operators on quantum circuits is important for quantum simulation. However, the standard way, Trotterization, requires a huge numbers of gates to achieve desirable accuracy. Here, we propose a local variational quantum compilation (LVQC) algorithm, which allows to accurately and efficiently compile a time evolution opera... Article Full-text available We propose a divide-and-conquer method for the quantum-classical hybrid algorithm to solve larger problems with small-scale quantum computers. Specifically, we concatenate a variational quantum eigensolver (VQE) with a reduction in the system dimension, where the interactions between divided subsystems are taken as an effective Hamiltonian expanded... Article Full-text available In the early years of fault-tolerant quantum computing (FTQC), it is expected that the available code distance and the number of magic states will be restricted due to the limited scalability of quantum devices and the insufficient computational power of classical decoding units. Here, we integrate quantum error correction and quantum error mitigat... Preprint Current quantum computers are limited in the number of qubits and coherence time, constraining the algorithms executable with sufficient fidelity. Variational quantum eigensolver (VQE) is an algorithm to find an approximate ground state of a quantum system and expected to work on even such a device. The deep VQE [K. Fujii, et al., arXiv:2007.10917]... Preprint t-Stochastic Neighbor Embedding (t-SNE) is a non-parametric data visualization method in classical machine learning. It maps the data from the high-dimensional space into a low-dimensional space, especially a two-dimensional plane, while maintaining the relationship, or similarities, between the surrounding points. In t-SNE, the initial position of... Preprint Variational quantum eigensolver (VQE) is regarded as a promising candidate of hybrid quantum-classical algorithm for the near-term quantum computers. Meanwhile, VQE is confronted with a challenge that statistical error associated with the measurement as well as systematic error could significantly hamper the optimization. To circumvent this issue,... Article Full-text available Quantum circuits that are classically simulatable tell us when quantum computation becomes less powerful than or equivalent to classical computation. Such classically simulatable circuits are of importance because they illustrate what makes universal quantum computation different from classical computers. In this work, we propose a novel family of... Article Full-text available The kernel trick allows us to employ high-dimensional feature space for a machine learning task without explicitly storing features. 
Recently, the idea of utilizing quantum systems for computing kernel functions using interference has been demonstrated experimentally. However, the dimension of feature spaces in those experiments have been smaller t... Preprint Variational quantum algorithms (VQA) have been proposed as one of the most promising approaches to demonstrate quantum advantage on noisy intermediate-scale quantum (NISQ) devices. However, it has been unclear whether VQA algorithms can maintain quantum advantage under the intrinsic noise of the NISQ devices, which deteriorates the quantumness. Her... Article Full-text available We propose a sampling-based simulation for fault-tolerant quantum error correction under coherent noise. A mixture of incoherent and coherent noise, possibly due to over-rotation, is decomposed into Clifford channels with a quasiprobability distribution. Then, an unbiased estimator of the logical error probability is constructed by sampling Cliffor... Preprint Quantum kernel method is one of the key approaches to quantum machine learning, which has the advantages that it does not require optimization and has theoretical simplicity. By virtue of these properties, several experimental demonstrations and discussions of the potential advantages have been developed so far. However, as is the case in classical... Article Full-text available To explore the possibilities of a near-term intermediate-scale quantum algorithm and long-term fault-tolerant quantum computing, a fast and versatile quantum circuit simulator is needed. Here, we introduce Qulacs, a fast simulator for quantum circuits intended for research purpose. We show the main concepts of Qulacs, explain how to use its feature... Preprint Variational quantum eigensolver (VQE), which attracts attention as a promising application of noisy intermediate-scale quantum devices, finds a ground state of a given Hamiltonian by variationally optimizing the parameters of quantum circuits called ansatz. Since the difficulty of the optimization depends on the complexity of the problem Hamiltonia... Chapter Quantum systems have an exponentially large degree of freedom in the number of particles and hence provide a rich dynamics that could not be simulated on conventional computers. Quantum reservoir computing is an approach to use such a complex and rich dynamics on the quantum systems as it is for temporal machine learning. In this chapter, we explai... Chapter Reservoir computing is a framework used to exploit natural nonlinear dynamics with many degrees of freedom, which is called a reservoir, for a machine learning task. Here we introduce the NMR implementation of quantum reservoir computing and quantum extreme learning machine using the nuclear quantum reservoir. The implementation utilizes globally c... Chapter Recent developments in reservoir computing based on spintronics technology are described here. The rapid growth of brain-inspired computing has motivated researchers working in a broad range of scientific field to apply their own technologies, such as photonics, soft robotics, and quantum computing, to brain-inspired computing. A relatively new tec... Article Applications such as simulating complicated quantum systems or solving large-scale linear algebra problems are very challenging for classical computers, owing to the extremely high computational cost. Quantum computers promise a solution, although fault-tolerant quantum computers will probably not be available in the near future. Current quantum de... 
Preprint Due to the linearity of quantum operations, it is not straightforward to implement nonlinear transformations on a quantum computer, making some practical tasks like a neural network hard to be achieved. In this work, we define a task called nonlinear transformation of complex amplitudes and provide an algorithm to achieve this task. Specifically, w... Article Noise in quantum operations often negates the advantage of quantum computation. However, most classical simulations of quantum computers calculate the ideal probability amplitudes by either storing full state vectors or using sophisticated tensor-network contractions. Here we investigate sampling-based classical simulation methods for noisy quantum... Preprint Variational quantum algorithms (VQAs) are expected to become a practical application of near-term noisy quantum computers. Although the effect of the noise crucially determines whether a VQA works or not, the heuristic nature of VQAs makes it difficult to establish analytic theories. Analytic estimations of the impact of the noise are urgent for se... Article We propose a method for learning temporal data using a parametrized quantum circuit. We use the circuit that has a similar structure as the recurrent neural network, which is one of the standard approaches employed for this type of machine learning task. Some of the qubits in the circuit are utilized for memorizing past data, while others are measu... Preprint We propose a sampling-based simulation for fault-tolerant quantum error correction under coherent noise. A mixture of incoherent and coherent noise, possibly due to over-rotation, is decomposed into Clifford channels with a quasi-probability distribution. Then, an unbiased estimator of the logical error probability is constructed by sampling Cliffo... Preprint Quantum circuits that are classically simulatable tell us when quantum computation becomes less powerful than or equivalent to classical computation. Such classically simulatable circuits are of importance because they illustrate what makes universal quantum computation different from classical computers. In this work, we propose a novel family of... Article Full-text available As the hardware technology for quantum computing advances, its possible applications are actively searched and developed. However, such applications still suffer from the noise on quantum devices, in particular when using two-qubit gates whose fidelity is relatively low. One way to overcome this difficulty is to substitute such non-local operations... Preprint We propose a method for learning temporal data using a parametrized quantum circuit. We use the circuit that has a similar structure as the recurrent neural network which is one of the standard approaches employed for this type of machine learning task. Some of the qubits in the circuit are utilized for memorizing past data, while others are measur... Article Full-text available We propose a quantum-classical hybrid algorithm to simulate the nonequilibrium steady state of an open quantum many-body system, named the dissipative-system variational quantum eigensolver (dVQE). To employ the variational optimization technique for a unitary quantum circuit, we map a mixed state into a pure state with a doubled number of qubits a... Preprint We introduce Qulacs, a fast simulator for quantum circuits intended for research purpose. 
To explore the possibilities of a near-term intermediate-scale quantum algorithm and long-term fault-tolerant quantum computing, a fast and versatile quantum circuit simulator is needed. Herein we show the main concepts of Qulacs, explain how to use its featur... Preprint Variational quantum algorithms are appealing applications of near-term quantum computers. However, there are two major issues to be solved, that is, we need an efficient initialization strategy for parametrized quantum circuit and to know the limitation of the algorithms by benchmarking it on large scale problems. Here, we propose a perturbative ap... Article Full-text available We propose a sequential minimal optimization method for quantum-classical hybrid algorithms, which converges faster, robust against statistical error, and hyperparameter-free. Specifically, the optimization problem of the parameterized quantum circuits is divided into solvable subproblems by considering only a subset of the parameters. In fact, if... Preprint We propose a divide-and-conquer method for the quantum-classical hybrid algorithm to solve larger problems with small-scale quantum computers. Specifically, we concatenate variational quantum eigensolver (VQE) with reducing the dimensions of the system, where the interactions between divided subsystems are taken as an effective Hamiltonian expanded... Preprint As the hardware technology for quantum computing advances, its possible applications are actively searched and developed. However, such applications still suffer from the noise on quantum devices, in particular when using two-qubit gates whose fidelity is relatively low. One way to overcome this difficulty is to substitute such non-local operations... Preprint We employ so-called quantum kernel estimation to exploit complex quantum dynamics of solid-state nuclear magnetic resonance for machine learning. We propose to map an input to a feature space by input-dependent Hamiltonian evolution, and the kernel is estimated by the interference of the evolution. Simple machine learning tasks, namely one-dimensio... Article Full-text available The variational quantum eigensolver (VQE), a variational algorithm to obtain an approximated ground state of a given Hamiltonian, is an appealing application of near-term quantum computers. To extend the framework to excited states, we here propose an algorithm, the subspace-search variational quantum eigensolver (SSVQE). This algorithm searches a... Preprint We show a certain kind of non-local operations can be decomposed into a sequence of local operations. Utilizing the result, we describe a strategy to decompose a general two-qubit gate to a sequence of single-qubit operations. Required operations are projective measurement of a qubit in Pauli basis, and $\pi/2$ rotation around x, y, and z axes. The... Preprint We propose a quantum-classical hybrid algorithm to simulate the non-equilibrium steady state of an open quantum many-body system, named the dissipative-system Variational Quantum Eigensolver (dVQE). To employ the variational optimization technique for a unitary quantum circuit, we map a mixed state into a pure state with a doubled number of qubits... Article Full-text available In quantum computing, the indirect measurement of unitary operators such as the Hadamard test plays a significant role in many algorithms. However, in certain cases, the indirect measurement can be reduced to the direct measurement, where a quantum state is destructively measured. 
Here, we investigate under what conditions such a replacement is pos... Article The variational quantum eigensolver (VQE) is an attractive possible application of near-term quantum computers. Originally, the aim of the VQE is to find a ground state for a given specific Hamiltonian. It is achieved by minimizing the expectation value of the Hamiltonian with respect to an ansatz state by tuning parameters θ on a quantum circuit,... Preprint Quantum simulation is one of the key applications of quantum computing, which can accelerate research and development in chemistry, material science, etc. Here, we propose an efficient method to simulate the time evolution driven by a static Hamiltonian, named subspace variational quantum simulator (SVQS). SVQS employs the subspace-search variation... Preprint We propose a sequential minimal optimization method for quantum-classical hybrid algorithms, which converges faster, is robust against statistical error, and is hyperparameter-free. Specifically, the optimization problem of the parameterized quantum circuits is divided into solvable subproblems by considering only a subset of the parameters. In fac... Article Many quantum algorithms, such as the Harrow-Hassidim-Lloyd (HHL) algorithm, depend on oracles that efficiently encode classical data into a quantum state. The encoding of the data can be categorized into two types: analog encoding, where the data are stored as amplitudes of a state, and digital encoding, where they are stored as qubit strings. The... Preprint In quantum computing, the indirect measurement of unitary operators such as the Hadamard test plays a significant role in many algorithms. However, in certain cases, the indirect measurement can be reduced to the direct measurement, where a quantum state is destructively measured. Here we investigate in what cases such a replacement is possible and... Preprint The variational quantum eigensolver (VQE), a variational algorithm to obtain an approximated ground state of a given Hamiltonian, is an appealing application of near-term quantum computers. The original work [Peruzzo et al.; \textit{Nat. Commun.}; \textbf{5}, 4213 (2014)] focused only on finding a ground state, whereas the excited states can also i... Preprint Full-text available The variational quantum eigensolver (VQE) is an attracting possible application of near-term quantum computers. Originally, the aim of the VQE is to find a ground state for a given specific Hamiltonian. It is achieved by minimizing the expectation value of the Hamiltonian with respect to an ansatz state by tuning parameters $$\bm{\theta}$$ on a qua... Preprint We experimentally demonstrate quantum machine learning using NMR based on a framework of quantum reservoir computing. Reservoir computing is for exploiting natural nonlinear dynamics with large degrees of freedom, which is called a reservoir, for a machine learning purpose. Here we propose a concrete physical implementation of a quantum reservoir u... Preprint Many quantum algorithms, such as Harrow-Hassidim-Lloyd (HHL) algorithm, depend on oracles that efficiently encode classical data into a quantum state. The encoding of the data can be categorized into two types; analog-encoding where the data are stored as amplitudes of a state, and digital-encoding where they are stored as qubit-strings. The former... 
Article The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but a single input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently... Article Quantum reservoir computing provides a framework for exploiting the natural dynamics of quantum systems as a computational resource. It can implement real-time signal processing and solve temporal machine learning problems in general, which requires memory and nonlinear mapping of the recent input stream using the quantum dynamics in computational... Article We propose a classical-quantum hybrid algorithm for machine learning on near-term quantum processors, which we call quantum circuit learning. A quantum circuit driven by our framework learns a given task by tuning parameters implemented on it. The iterative optimization of the parameters allows us to circumvent the high-depth circuit. Theoretical i... Article Full-text available Instantaneous quantum polynomial-time (IQP) computation is a class of quantum computation consisting only of commuting two-qubit gates and is not universal in the sense of standard quantum computation. Nevertheless, it has been shown that if there is a classical algorithm that can simulate IQP efficiently, the polynomial hierarchy (PH) collapses at... Article What happens if in QMA the quantum channel between Merlin and Arthur is noisy? It is not difficult to show that such a modification does not change the computational power as long as the noise is not too strong so that errors are correctable with high probability, since if Merlin encodes the witness state in a quantum error-correction code and send... Article Blind quantum computation (BQC) allows a client, who only possesses relatively poor quantum devices, to delegate universal quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot know the client's input, quantum algorithm, and output. In the existing verification schemes of BQC, any suspicious... Article This paper investigates the power of polynomial-time quantum computation in which only a very limited number of qubits are initially clean in the |0> state, and all the remaining qubits are initially in the totally mixed state. No initializations of qubits are allowed during the computation, nor intermediate measurements. The main results of this p... Article We show that the class QMA does not change even if we restrict Arthur's computing ability to only Clifford gate operations (plus classical XOR gate). The idea is to use the fact that the preparation of certain single-qubit states, so called magic states, plus any Clifford gate operations are universal for quantum computing. If Merlin is honest, he... Article Blind quantum computation (BQC) allows an unconditionally secure delegated quantum computation for a client (Alice) who only possesses cheap quantum devices. So far, extensive efforts have been paid to make Alice's devices as classical as possible. Along this direction, quantum channels between Alice and the quantum server (Bob) should be considere... Article Full-text available Deterministic quantum computation with one quantum bit (DQC1) [E. Knill and R. Laflamme, Phys. Rev. Lett. 
{\bf81}, 5672 (1998)] is a restricted model of quantum computing where the input state is the completely-mixed state except for a single pure qubit, and a single output qubit is measured at the end of the computing. We can generalize it to the... Article Full-text available It is often said that the transition from quantum to classical worlds is caused by decoherence originated from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of classical simulatability of a quantum system under decoherence. Specifically, w... Article Full-text available We investigate quantum computational complexity of calculating partition functions of Ising models. We construct a quantum algorithm for an additive approximation of Ising partition functions on square lattices. To this end, we utilize the overlap mapping developed by Van den Nest, D\"ur, and Briegel [Phys. Rev. Lett. 98, 117207 (2007)] and its int... Article Full-text available Deterministic quantum computation with one quantum bit (DQC1) is a model of quantum computing where the input restricted to containing a single qubit in a pure state and with all other qubits in a completely-mixed state, with only a single qubit measurement at the end of the computation [E. Knill and R. Laflamme, Phys. Rev. Lett. {\bf81}, 5672 (199... Article Full-text available Protecting quantum information from decoherence due to environmental noise is vital for fault-tolerant quantum computation. To this end, standard quantum error correction employs parallel projective measurements of individual particles, which makes the system extremely complicated. Here we propose measurement-free topological protection in two dime... Article Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, and program. If the clie... Article This is a short review on an interdisciplinary field of quantum information science and statistical mechanics. We first give a pedagogical introduction to the stabilizer formalism, which is an efficient way to describe an important class of quantum states, the so-called stabilizer states, and quantum operations on them. Furthermore, graph states, w... Article Full-text available The conventional duality analysis is employed to identify a location of a critical point on a uniform lattice without any disorder in its structure. In the present study, we deal with the random planar lattice, which consists of the randomized structure based on the square lattice. We introduce the uniformly random modification by the bond dilution... Article Full-text available We consider measurement-based quantum computation (MBQC) on thermal states of the interacting cluster Hamiltonian containing interactions between the cluster stabilizers that undergoes thermal phase transitions. We show that the long-range order of the symmetry breaking thermal states below a critical temperature drastically enhance the robustness... 
Article Full-text available Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-pr... Article Full-text available In the framework of quantum computational tensor network, which is a general framework of measurement-based quantum computation, the resource many-body state is represented in a tensor-network form (or a matrix-product form), and universal quantum computation is performed in a virtual linear space, which is called a correlation space, where tensors... Data Full-text available Supplementary material Article Full-text available Tremendous efforts have been paid for realization of fault-tolerant quantum computation so far. However, preexisting fault-tolerant schemes assume that a lot of qubits live together in a single quantum system, which is incompatible with actual situations of experiment. Here we propose a novel architecture for practically scalable quantum computatio... Article Full-text available We propose a family of surface codes with general lattice structures, where the error-tolerances against bit and phase errors can be controlled asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilber... Article Full-text available Blind quantum computation is a new secure quantum computing protocol which enables Alice who does not have sufficient quantum technology to delegate her quantum computation to Bob who has a fully-fledged quantum computer in such a way that Bob cannot learn anything about Alice's input, output, and algorithm. In previous protocols, Alice needs to ha... Article Full-text available Recently, Li {\it et al.} [Phys. Rev. Lett. {\bf 107}, 060501 (2011)] have demonstrated that topologically protected measurement-based quantum computation can be implemented on the thermal state of a nearest-neighbor two-body Hamiltonian with spin-2 and spin-3/2 particles provided that the temperature is smaller than a critical value, namely, thres... Article Full-text available In the framework of quantum computational tensor network [D. Gross and J. Eisert, Phys. Rev. Lett. {\bf98}, 220503 (2007)], which is a general framework of measurement-based quantum computation, the resource many-body state is represented in a tensor-network form, and universal quantum computation is performed in a virtual linear space, which is ca... Article Full-text available We investigate relations between computational power and correlation in resource states for quantum computational tensor network, which is a general framework for measurement-based quantum computation. We find that if the size of resource states is finite, not all resource states allow correct projective measurements in the correlation space, which... Article Full-text available We propose a robust and scalable scheme to generate an $N$-qubit $W$ state among separated quantum nodes (cavity-QED systems) by using linear optics and postselections. The present scheme inherits the robustness of the Barrett-Kok scheme [Phys. Rev. A {\bf 71}, 060310(R) (2005)]. The scalability is also ensured in the sense that an arbitrarily larg...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8078289031982422, "perplexity": 802.2241239583686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00124.warc.gz"}
https://brilliant.org/discussions/thread/last-10000-digits-of-grahams-number/
# Last 10000 digits of Graham's number So, if you didn't know, it's possible to compute the last $$n$$ digits of the Graham's number quite easily, by observing the fact that $3\uparrow3=...7$, $3\uparrow3\uparrow3=...87$, $3\uparrow3\uparrow3\uparrow3=...387$, $3\uparrow3\uparrow3\uparrow3\uparrow3=...5387$, and so on. In fact, if you search on the internet you'll find several people that computed up to $200$ digits, $400$ digits, or $500$ digits. Now, I admit I was pretty disappointed with this results. I couldn't find anyone that pushed up the calculations! (I may be wrong, in that case show me if anyone else actually calculated more than that) So, I developed an algorithm that is sufficiently efficient to compute the first $500$ digits in only approximately $2$ seconds. With this algorithm, I was able to compute the last $10000$ digits of Graham's number on my computer, in $16316.5$ seconds (approximately $2.2$ hours). I'm not entirely sure how, but I'm pretty sure it's possible to make the algorithm much more efficient than that, so that even calculating these digits should happen in a matter of few seconds. If you have any good ideas, tell me below (try it first, if possible). Either way, here they are (new line every 70 digits): $g_{64} = ...3078726077030301309631860565499591166728394144521822940211684925744690\\ 8303160862621307856182261004203129683467872420025335707165042882886038\\ 1527890583177734743488999363221709377188705302161340082623104165260987\\ 2469522381180520893701095105746695201447806881045566940953003050207963\\ 0251089314581777820684968377573229456578469591751099452615715258153133\\ 8317144176357246277980971517349406785579279353063617993752257361282030\\ 1473864489406090828511196812234883812826536412935235758505566522273299\\ 8513867089855758447648371115779454007188631486534611854130768464954083\\ 8335805841695412280766038020711553526827091695879475106425076890327782\\ 6708848390877435531688133831988779505683625673270427786212688069705881\\ 0276174028639378952134276596828174174610570754797760763975177038244691\\ 2063024310915173151554672020720329805779214569991795690518659602446902\\ 7274217279141430415867319657287140268008652315291316261820652195021921\\ 0914610704519073926283967434339662068326819744974641935341502976180597\\ 5219746989916790553951240749624022306775365113200227817050284227367149\\ 7491794032802699070161003317178855043208655046184676579497958334888729\\ 3809617659827235067373513656241299335615924204033665860263764635136445\\ 0901965169912468031701035813068048871232519853582991620638263170147783\\ 2698324585503287762867838791720029901423547073086007824609234282758224\\ 9084362130009203937656462657958086964494023780323916935845145868457435\\ 0019308749993296237589317856219609033942624384808517627658437282947072\\ 8254599948415970659821958864982353541909598033354078979538256357245359\\ 7473687377205449098623941105789048433603977408157821289037966848053431\\ 0161245447459165410165069396137727188277544585197396781572979626659473\\ 8946097169571922151724219229759670392571616731391424780277194256133384\\ 1909070301383939429023145632838588124976137279502774086096179244679618\\ 6025376358215497336320381116685826498270685734807990885455869123699181\\ 9878323695993552951130721247635787520841764173839071101104122258672751\\ 9228928871964307453282154261444871750080952421236363768139610777495235\\ 3201683350142428109136897852904313178813533335511088902235947721071584\\ 9496037260619984908856098462387682166967457951388588832356929998573679\\ 
5420111111854634404435045676794013567654575959588995123046187234360352\\ 1793889196205026750886423489731195416265541685656346013519486952120940\\ 3751415314099407271948277374441254695053697070411390864858720859625554\\ 1959030238060629137169138722855477794398025076773239670591720768747467\\ 2844654063418138034129430436097660215542336711205739081742346117574411\\ 8240134002338774237104408010940301648706232749183781478765763754598101\\ 5382140877149223264515258326901107991667754434423538374815474031008806\\ 3794659755273784075854629840335173116218238312437640446823996561592082\\ 8447876334410730503146643034564958892386400206720157774336643971046097\\ 1913397786711927076109359761269701118640088324214111573518542523175301\\ 8437333602444678510859228109479432808176387274946992467944754462151350\\ 5367844082852088230997958080227897832298874619826542016735413058821008\\ 1847428327743235541621704680913967417772759547636593847461261352062528\\ 6493615579933087865883727503023169846062624381686315823768546266806694\\ 4052359344981816230189301309354006700248433587219900315863936527371581\\ 9660118267438193855827192236488406202498847093959194410120314045227731\\ 5157935152536936493425053093270987737803549232774078167979402086740138\\ 2441063153368792024085513096521839198783970159526194524749783176345096\\ 6138354215340807779927410436230549205899966191897479395002808032068505\\ 2798877144984365085442674905448699279843309239159997374495902995758836\\ 3427364161563026748153852589303542350028921428192299937990553925840346\\ 3179740764550809001835012576720025421347754368577841258214793482088528\\ 3528473446395144549774006915475700147362629356396411599745063763259120\\ 7602085065331518191297627524916528283327470445487220431209794181800837\\ 5242881807867409536257070239161286255743400994204946286838833646124751\\ 9450140740102350858410502955539960188235779958081190441955895515413996\\ 2907362641416032599108674402147606737302706859009869989241966114092477\\ 2407789976751585993614061850356986544196252382034927589676238991215557\\ 8594439247420428277018540432967913180650401616546821719644550936450059\\ 5583090752015617577969861470976788938740200600298288287888330891308863\\ 3722819728625451800838400216724842196639874673295077669833180844718217\\ 9341171005320299673923404289980856827543457807278033622507455463530345\\ 2502047026773318085940276837031308524437227254648631365299090852540852\\ 7368117005989256614980159639267250074517606853740135934062100746075654\\ 0813896593247352620824016619700186551382187297721949846316900749082737\\ 8094264056082839271636552848839603826288292359895120699259424392975074\\ 9821816437833463246454551763732757655276760590582693972496924532998092\\ 3380190551831802463416887618541048817756223855531234022957969144122483\\ 6406466849925916755964304671002296817244339224792182135995868969341492\\ 3690776243056770164822502041983470306716296896922618604046908954485889\\ 1348275274667315520855871416643581761678324893080737347771500148013131\\ 5872093173217027881014846523723198056977435033203823397583204572679056\\ 6651633842068281545694928486633422445461995837450720456764508778883440\\ 3854173232861131892325939242549724454004862230630447361306160309037871\\ 5800584480793579521136958705250576661572845777934961160672813467687918\\ 2468219518101142821666186785543970961610009398295493882433280467468806\\ 9413764541931265604057917863711717794304584231021596233589764804054811\\ 8226228812074485140806951381912453398593093071760684032101862850540079\\ 
7648371363380891593720550471092872187099316006161890897978303030444170\\ 7453650131619894698610993458668339554364084020023399067051642966934860\\ 0027570319308957395061221345848777985085039705997484485916440720079353\\ 6749317556015986374673052074784015605376345910038695996792166640248506\\ 5151972178292734011493177498161942761055395341214573854386960142347615\\ 1064120816786711566627026981412394927353467365863136279136411085250120\\ 5029022957845553710087982096902106709311969126401168909948606316616230\\ 2923902195299675526555986392068330814711696071435279166352601845751091\\ 1413391101513558310051637274156744456218580403883246348766716791121059\\ 6640361544098108142733814394129952702989282677057681983318348451157632\\ 3445763461588558180723948837731724171820914217827456505430234904693165\\ 7016237345668584450386474453301178678720075637289970867637583657220506\\ 2491301610433455913955841065225929205594669325571087621309871185323210\\ 5196699704725638064513028255040447297723752850180331481056404968157353\\ 9061785278598760313140865068530080919200003079350549373708348792972286\\ 8670573755820995979960622710931298072888031118784826564331478394544516\\ 8149803763580948273619215580715203535826455618889996767810934669730243\\ 5416969251647217062623523969036088524240889973797522976568651221236460\\ 6700882735550252805162688717743002242430178153096139879227557097022144\\ 3057790329335567051650934703248833671211260215491243781456055632180252\\ 2593943710111389180996687955300606304861015768885793044484950849027511\\ 0106029803619475963780866084863862122324538632723342196159851036070658\\ 3695251304412583544807924551217328349121977645145493850412041271265386\\ 4611628171304512992288612473168928694708607916093810579534245595047488\\ 6617927058056116386911212141737555332931772421804354502689741745136461\\ 9149258184608873236073683901075275381240402732984629660210771321778593\\ 5317908316720145478172167767818370069254396358778477381821832282338662\\ 9557597665763526037469623882734970261718460508041643967446394822226604\\ 7262800922130068469037144793383117028262382841196033782589178075561744\\ 9686306276313945559616369461957875659406855496062664521105020034463607\\ 8639152176612841273246755062872676148243079798928244275312027774818688\\ 7250109520756590181937968438911972868200142926836525620315535316031054\\ 6891685301933822474738169794305170907203165619769535061113213915856256\\ 1824661400448048835400453257011706951826263434793588413474586938545913\\ 1913595602946034912375719564262285371586081255164508962613460090798485\\ 7847797205307145186514675412317888384738009343344165376733605639874152\\ 6838837135702194865074959666167436192933645884998056100697104793100679\\ 4152084453613830911021630017437654919684883920437258419601503784784516\\ 0671512017198801157547084883939593053650556078872159994750221442214834\\ 8268144787270731001365537383577746098505866012640076129423352326255313\\ 3073942052007839547749762554111899859772880815945865752809988634672233\\ 4769804780146302789353612329312586963866559329949214911489134763214665\\ 4314303272656947761889503867538372033508034358690038674211367316517236\\ 2113256247997506770294235705056911305065974352655256553654276889526636\\ 0391135992668989734244822601493574507744556050638326609473542254360350\\ 8674855342484610627305685534794791282019520577643564769466316663822950\\ 0048280051827615363513800094323248679021061702425944029209484941954536\\ 7418064519308105163357496871638118822504114501587037019405680648005022\\ 
5768533805530305183368091271811490817539484300268084104379556148104831\\
5835447210850384076723823375354333111031697890169996590703687564769571\\
4199517294684058268271081207938885760678089057660597351282040660918730\\
7108483992113117957918089160673029776868734932638038255189701221105348\\
1886141584874851920098526106525203948232207371149341083916873785440379\\
8603368448472052729248390757866617805529414157119366603081892881936678\\
7741482317801728126934985735783270950758576591974947039193152967596669\\
2340488030236244704910353178090822611674695077464191287728244330583239\\
5092525499355092526168572459565741317934416750148502425950695064738395\\
6574791365193517983345353625214300354012602677162267216041981065226316\\
9355188780388144831406525261687850955526460510711720009970929124954437\\
8887496062882911725063001303622934916080254594614945788714278323508292\\
4210209182589675356043086993801689249889268099510169055919951195027887\\
1783083701834023647454888222216157322801013297450927344594504343300901\\
0969280253527518332898844615089404248265018193851562535796399618993967\\
905496638003222348723967018485186439059104575627262464195387$

Note by Aldo Roberto Pessolano - 1 year, 4 months ago

Update: I just found a site that lists exactly the same digits, so I'm definitely not the first to have calculated these digits. Still, the efficiency question remains. - 1 year, 4 months ago

what is the inverse of grahams number - 3 months, 1 week ago

1/243 = 0.004115226337448559670781893004115226337448559670781893004..., period = 27, etc. - 3 months, 1 week ago

Testing Math Editor: $\pi$ - 1 year, 3 months ago

Booya! $e^{i \pi} + 1 = 0$ - 1 year, 3 months ago

what's the algorithm?
- 1 year, 4 months ago

The most efficient algorithm I have come up with so far is just one line of Mathematica:

    calc = 3; Do[calc = PowerMod[3, calc, 10^i], {i, 1, 500}]; calc

This computes the last 500 digits correctly in roughly 0.7 seconds on my computer. For 1000 digits, it takes 6.2 seconds. The main problem with this algorithm is that it gets progressively slower, since every step works with all of the digits computed so far. There must be a way to avoid recomputing all the digits every time, for example to get the 1001st digit pretty much instantly just by knowing the last 1000 digits, but I can't quite figure out how. (A sketch of such an incremental step is given after this comment thread.) - 1 year, 4 months ago

3.345625467385246375823675867348259426395436858685673245678845673333302367489236758493678624396724839674839678492456789106574385627384562385962785967289674280654803333221... - 3 months, 1 week ago

what a constant, look at the digits ......31579315973175917973999717329...... - 3 months, 1 week ago

started decimal place 46374 - 3 months, 1 week ago

it had a lot of odds and 1 even - 3 months, 1 week ago

another: 4.12344678259467589436578249365782493333331029564782936758293672892345636758935674392567238962789434247148245623392564839578123456123456925379267896661154673845268012345679213.... - 3 months, 1 week ago

look at the digits .....56278524678333333333333...(100 3s total)...333333335326845367823524832352784325682283683678335683..... - 3 months, 1 week ago

started decimal place 142857 - 3 months, 1 week ago

1/3 = 0.33333333333333333333333333... (period 1), 1/9 = 0.111111111111111111111111111111... (period also 1), 1/27 = 0.037037037037037037037037037037037037... (period = 3), 1/81 = 0.01234567901234567901234567901234567901234567901... (period = 9), ..., 1/grahams number (period = grahams number/9) - 3 months, 1 week ago

period triples - 3 months, 1 week ago

546738111123/999999999999 - 3 months, 1 week ago

23456/99999 (period = 5) - 3 months, 1 week ago

5.15673842536784536758786946875047894037580204308200206392579239563853633333026790678042606596503768034768013999761111124910123... - 3 months, 1 week ago

find the digits ......3159735193759135793133331759173979973197351973113373313579197315793139735193791735931597391375975973797735179...... (all the digits here were odd) - 3 months, 1 week ago

started decimal place 999001 - 3 months, 1 week ago

umm the number 26378146328956830967280674283967389657849306784230673289567807685240677773333506342657802367819567483578695487530247483120657483075101 is this prime? try in Number world - 3 months, 1 week ago

It's divisible by 7 - 3 months, 1 week ago

6.546728567584673256743295627438956784396578234967329647812967489367850163478046780654738563478056237480628111333999777546738146713294672946170856173805617438056173805637850613785061795714380561379461378463785016437586071483674820365780476892071520773457215893657896574839678964723911011... - 3 months, 1 week ago

you could see the digits: ......658963725933333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333674382956743286972678594310001647193...... - 3 months, 1 week ago

periodic sequence: 0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,... (period = 2) - 3 months ago

3,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,...
(period=2) but i have a 3 at the start - 3 months ago what a periodic sequence - 3 months ago 5.6574891673895617834523784016478301357835631756347827468203567813647813257803657830573205417329536780513728140613751376037805617834178203567318065354721054273805723104617357328033336217304617358016785203675830682306473280567281036478036121111615783236718036701561111116163781526783106732806512051627780254167394394567956293567239456379452367945231755547956371453925781936478329615782306712065720567203... - 3 months ago 1.128283173717379590959517773101717618496378493678149367192657839641789463786437859367489657329467381596378406378057381940785901674023748023671802367148023647820367328407895036274810738406738407384063578032748036258407389506328051020408160904010025062523549823647839567194678329678162835063810463278047328140333316703267023647382067124036721507132940738947389432691034758903751932075819403673027810635695679579693103446748617840672046738056713046783210467328036748105780637840632758023456167498146273849326758946738239215623749123401... is irrational approximations 9/8,35/31 - 2 months, 2 weeks ago hi - 1 year, 4 months ago
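For what it's worth, here is a small Mathematica sketch of the incremental idea asked about in the algorithm comment above, assuming (as the one-liner suggests) that the digits being computed are the trailing digits of the power tower 3^3^3^...; the helper names lastDigits and extendDigits are illustrative labels and not from the original note. The point is that once the last k digits are known, the last k+1 digits follow from a single extra PowerMod call, so there is no need to rerun the loop from i = 1.

    (* Build the last k digits of the tower once, exactly as in the comment above. *)
    lastDigits[k_Integer] := Module[{calc = 3},
      Do[calc = PowerMod[3, calc, 10^i], {i, 1, k}];
      calc]

    (* Hypothetical helper: given the last k digits, one PowerMod modulo 10^(k+1)
       yields the last k+1 digits. *)
    extendDigits[calc_Integer, k_Integer] := PowerMod[3, calc, 10^(k + 1)]

    (* Usage *)
    d500 = lastDigits[500];
    d501 = extendDigits[d500, 500];    (* one step instead of re-running i = 1 .. 501 *)
    Mod[d501, 10^500] == d500          (* True: the earlier digits are unchanged *)

The single-step extension works for the same reason the original loop does: the multiplicative order of 3 modulo 10^(k+1) is 4*5^k, which divides 10^k for k >= 2, so the huge exponent only matters through its last k digits.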
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 23, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9673748016357422, "perplexity": 3654.1842646764876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740929.65/warc/CC-MAIN-20200815154632-20200815184632-00467.warc.gz"}
http://knowyourdata.be/2017/06/13/blogdown-a-package-to-create-websites/
# Blogdown, a package to create websites

Blogdown is an R package to generate static websites based on R Markdown and Hugo, a static web engine. To be honest: it's fantastic! I've been using Wordpress for a while, with all the hassle of copy-pasting html into the wp text editor and adjusting images… not to mention all the fuss that comes with using htmlwidgets. That blog-agony is all over with blogdown.

## My take-aways

Must read: the Blogdown online book; definitely read the first 3 chapters and the appendix!

I'm using the same theme as the Rstudio Rviews blog, which is the icarus theme, with a few tweaks:

- removed the banner placeholder
- removed the thumbnail placeholder of the recent post in the right sidebar
- replaced monokai.css with the default github.css for code highlighting

Adding disqus (the comment section below each post) can be annoying, but this process should overcome minor issues:

- add your disqusShortname in the config.toml file
- run blogdown::build_site()
- copy everything under the public dir to your web dir on your webserver
- test, test, … afaik.

Add an image in the yaml metadata of a post. This image will be used in the social-media card when sharing posts on twitter, facebook, linkedin, etc. When the html file of the post is generated, the image is looked up starting from the /static folder, so add a path from that point to your image.

## Most important take-away

There is a big difference between blogdown::serve_site(), blogdown::hugo_build() and blogdown::build_site():

- blogdown::serve_site() is for local use only. It will generate html files for your new/adjusted (r)markdown posts.
- blogdown::hugo_build() builds the public skeleton, but, unlike blogdown::serve_site(), it will not generate html for new posts.
- blogdown::build_site() builds the whole website for publishing, and thus renders all *.Rmd files again.
- Make sure you set the baseurl in the config.toml file correctly before using blogdown::build_site() or blogdown::hugo_build(). The baseurl is needed to activate the disqus and social widgets.

My publish flow consists of steps 1 and 2; step 3 is optional, only for when I need to render everything again (a short sketch of this flow follows at the end of this post).

Most likely, I forgot a thing or two. So if you would like to use this theme, go fetch it at github.
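To make the publish flow above concrete, here is a minimal R sketch. It only uses the blogdown functions named in this post; it assumes blogdown is installed and that config.toml already contains your baseurl and disqusShortname, and the rsync destination at the end is a placeholder, not an actual server.

    # Minimal sketch of the publish flow described above.
    library(blogdown)

    # Step 1: preview locally; renders new or changed (R)markdown posts to html.
    blogdown::serve_site()

    # Step 2: build the public/ skeleton without re-rendering every .Rmd.
    blogdown::hugo_build()

    # Step 3 (optional): re-render the whole site, e.g. after a theme or baseurl change.
    # blogdown::build_site()

    # Finally, copy everything under public/ to the webserver (placeholder destination).
    # system("rsync -avz public/ user@example.com:/var/www/html/")

Keeping step 3 commented out reflects the point above: the full rebuild is only needed when every .Rmd has to be rendered again.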
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18386226892471313, "perplexity": 6730.098408989959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057598.98/warc/CC-MAIN-20210925052020-20210925082020-00324.warc.gz"}
https://api.philpapers.org/s/David%20Evans
## Works by David Evans 87 found Order: Disambiguations David Evans [57] David M. Evans [15] David Andreoff Evans [10] David W. Evans [8] David Howell Evans [2] David T. Evans [2] David H. Evans [2] David Andrew Evans [1] Not all matches are shown. Search with initial or firstname to single out others. David John Evans University of Strathclyde David Evans Utah State University David Evans Open University (UK) 1. Export citation Bookmark   15 citations 2. Conversation as Planned Behavior.Jerry R. Hobbs & David Andreoff Evans - 1980 - Cognitive Science 4 (4):349-377. No categories Export citation Bookmark   28 citations 3. ℵ0-categorical structures with a predimension.David M. Evans - 2002 - Annals of Pure and Applied Logic 116 (1-3):157-186. We give an axiomatic framework for the non-modular simple 0-categorical structures constructed by Hrushovski. This allows us to verify some of their properties in a uniform way, and to show that these properties are preserved by iterations of the construction. Export citation Bookmark   14 citations 4. An activist's argument that participant values should guide risk–benefit ratio calculations in HIV cure research.David Evans - 2017 - Journal of Medical Ethics 43 (2):100-103. Export citation Bookmark   6 citations 5. Supersimple ω-categorical groups and theories.David M. Evans & Frank O. Wagner - 2000 - Journal of Symbolic Logic 65 (2):767-776. An ω-categorical supersimple group is finite-by-abelian-by-finite, and has finite SU-rank. Every definable subgroup is commensurable with an acl( $\emptyset$ )-definable subgroup. Every finitely based regular type in a CM-trivial ω-categorical simple theory is non-orthogonal to a type of SU-rank 1. In particular, a supersimple ω-categorical CM-trivial theory has finite SU-rank. Export citation Bookmark   8 citations 6. The conflict of the faculties and the knowledge industry: Kant's diagnosis, in his time and ours.David Evans - 2008 - Philosophy 83 (4):483-495. Kant's short essay is a reflection on the contemporary structure of academic studies; he examines this structure in terms of the functions of the State and of the Universities which form part of it. His analysis links the empirical facts with conceptual distinctions, in ways that are familiar from his more general and abstract philosophy. His main aim is to ground a distinction between legitimate and illegitimate ways in which different Faculties of the University may approach intellectual issues that are (...) Export citation Bookmark   5 citations 7. On the automorphism groups of finite covers.David M. Evans & Ehud Hrushovski - 1993 - Annals of Pure and Applied Logic 62 (2):83-112. We are concerned with identifying by how much a finite cover of an 0-categorical structure differs from a sequence of free covers. The main results show that this is measured by automorphism groups which are nilpotent-by-abelian. In the language of covers, these results say that every finite cover can be decomposed naturally into linked, superlinked and free covers. The superlinked covers arise from covers over a different base, and to describe this properly we introduce the notion of a quasi-cover.These results (...) Export citation Bookmark   8 citations 8. The Conflict of the Faculties and the Knowledge Industry: Kant's Diagnosis, in his Time and Ours.David Evans - 2008 - Philosophy 83 (4):483. 
Kant's short essay is a reflection on the contemporary structure of academic studies; he examines this structure in terms of the functions of the State and of the Universities which form part of it. His analysis links the empirical facts with conceptual distinctions, in ways that are familiar from his more general and abstract philosophy. His main aim is to ground a distinction between legitimate and illegitimate ways in which different Faculties of the University may approach intellectual issues that are (...) No categories Export citation Bookmark   5 citations 9. Time, space and form: Necessary for causation in health, disease and intervention?David W. Evans, Nicholas Lucas & Roger Kerry - 2016 - Medicine, Health Care and Philosophy 19 (2):207-213. Sir Austin Bradford Hill’s ‘aspects of causation’ represent some of the most influential thoughts on the subject of proximate causation in health and disease. Hill compiled a list of features that, when present and known, indicate an increasing likelihood that exposure to a factor causes—or contributes to the causation of—a disease. The items of Hill’s list were not labelled ‘criteria’, as this would have inferred every item being necessary for causation. Hence, criteria that are necessary for causation in health, disease (...) Export citation Bookmark   2 citations 10. The geometry of Hrushovski constructions, I: The uncollapsed case.David M. Evans & Marco S. Ferreira - 2011 - Annals of Pure and Applied Logic 162 (6):474-488. An intermediate stage in Hrushovski’s construction of flat strongly minimal structures in a relational language L produces ω-stable structures of rank ω. We analyze the pregeometries given by forking on the regular type of rank ω in these structures. We show that varying L can affect the isomorphism type of the pregeometry, but not its finite subpregeometries. A sequel will compare these to the pregeometries of the strongly minimal structures. Export citation Bookmark   3 citations 11. Counterexamples to a conjecture on relative categoricity.David M. Evans & P. R. Hewitt - 1990 - Annals of Pure and Applied Logic 46 (2):201-209. Export citation Bookmark   5 citations 12. The Form of Causation in Health, Disease and Intervention: Biopsychosocial Dispositionalism, Conserved Quantity Transfers and Dualist Mechanistic Chains.David W. Evans, Nicholas Lucas & Roger Kerry - 2017 - Medicine, Health Care and Philosophy: A European Journal 20 (3):353-363. Causation is important when considering how an organism maintains health, why disease arises in a healthy person, and how one may intervene to change the course of a disease. This paper explores the form of causative relationships in health, disease and intervention, with particular regard to the pathological and biopsychosocial models. Consistent with the philosophical view of dispositionalism, we believe that objects are the fundamental relata of causation. By accepting the broad scope of the biopsychosocial model, we argue that psychological (...) Export citation Bookmark   1 citation 13. The form of causation in health, disease and intervention: biopsychosocial dispositionalism, conserved quantity transfers and dualist mechanistic chains.David W. Evans, Nicholas Lucas & Roger Kerry - 2017 - Medicine, Health Care and Philosophy 20 (3):353-363. Causation is important when considering: how an organism maintains health; why disease arises in a healthy person; and, how one may intervene to change the course of a disease. 
This paper explores the form of causative relationships in health, disease and intervention, with particular regard to the pathological and biopsychosocial models. Consistent with the philosophical view of dispositionalism, we believe that objects are the fundamental relata of causation. By accepting the broad scope of the biopsychosocial model, we argue that psychological (...) Export citation Bookmark   1 citation 14. Ample dividing.David M. Evans - 2003 - Journal of Symbolic Logic 68 (4):1385-1402. We construct a stable one-based, trivial theory with a reduct which is not trivial. This answers a question of John B. Goode. Using this, we construct a stable theory which is n-ample for all natural numbers n, and does not interpret an infinite group. Export citation Bookmark   3 citations 15. Theatre of Deferral: The Image of the Law and the Architecture of the Inns of Court.David Evans - 1999 - Law and Critique 10 (1):1-25. This article addresses the architecture of the Inns of Court, the home of the Common Law. The approach taken, however, rejects an approach that would reduce the Inns to a roster of historical details and laudatory description. Instead, the Inns are seen, if not actually felt, as the embodiment of the “original” ground of law. This experience is revealed through a three-stage discovery process that situates the Inns within the medieval context of symbol and ritual as informed by Turner’s concept (...) No categories Export citation Bookmark   3 citations 16. Using data on the ‘career’ paths of one thousand ‘leading scientists’ from 1450 to 1900, what is conventionally called the ‘rise of modern science’ is mapped as a changing geography of scientific practice in urban networks. Four distinctive networks of scientific practice are identified. A primate network centred on Padua and central and northern Italy in the sixteenth century expands across the Alps to become a polycentric network in the seventeenth century, which in turn dissipates into a weak polycentric network (...) Export citation Bookmark   2 citations 17. Infant heart transplantation after cardiac death: ethical and legal problems.Michael Potts, Paul A. Byrne & David W. Evans - 2010 - Journal of Clinical Ethics 21 (3):224. Export citation Bookmark   2 citations 18. Seeking an ethical and legal way of procuring transplantable organs from the dying without further attempts to redefine human death.David Wainwright Evans - 2007 - Philosophy, Ethics, and Humanities in Medicine 2:11. Because complex organs taken from unequivocally dead people are not suitable for transplantation, human death has been redefined so that it can be certified at some earlier stage in the dying process and thereby make viable organs available without legal problems. Redefinitions based on concepts of. Export citation Bookmark   2 citations 19. Moral Responsibility in the Holocaust: A Study in the Ethics of Character.David Evans - 2001 - Mind 110 (438):485-488. Export citation Bookmark 20. The Ethics of Limiting Informed Debate: Censorship of Select Medical Publications in the Interest of Organ Transplantation.Michael Potts, Joseph L. Verheijde, Mohamed Y. Rady & David W. Evans - 2013 - Journal of Medicine and Philosophy 38 (6):625-638. Recently, several articles in the scholarly literature on medical ethics proclaim the need for “responsible scholarship” in the debate over the proper criteria for death, in which “responsible scholarship” is defined in terms of support for current neurological criteria for death. 
In a recent article, James M. DuBois is concerned that academic critiques of current death criteria create unnecessary doubt about the moral acceptability of organ donation, which may affect the public’s willingness to donate. Thus he calls for a closing (...) Export citation Bookmark   1 citation 21. Argumenty platońskie.David Evans - 1998 - Ruch Filozoficzny 55 (1):15-29. D. Evans, Argumenty platońskie, transl. Zbigniew Nerczuk. Export citation Bookmark 22. Aristotle on the Relation between Art and Science.David Evans - 2007 - The Proceedings of the Twenty-First World Congress of Philosophy 10:21-30. Aristotle assigns positive value to artistry and its skills, placing them below science but nearby. Fuller content for this view of art can be garnered from his technical treatises, especially the accounts of rhetoric and dialectic, where the subjectivity imported by the role of audiences is explored with subtlety. These ideas have influence on later philosophy of aesthetics and of technology, and they need to be pondered by those engaged in current debate in these areas. Export citation Bookmark 23. Socrates and Zeno: Plato, Parmenides 129.David Evans - 1994 - International Journal of Philosophical Studies 2 (2):243-255. 24. Finite covers with finite kernels.David M. Evans - 1997 - Annals of Pure and Applied Logic 88 (2-3):109-147. We are concerned with the following problem. Suppose Γ and Σ are closed permutation groups on infinite sets C and W and ρ: Γ → Σ is a non-split, continuous epimorphism with finite kernel. Describe the possibilities for ρ. Here, we consider the case where ρ arises from a finite cover π: C → W. We give reasonably general conditions on the permutation structure W;Σ which allow us to prove that these covers arise in two possible ways. The first way, (...) Export citation Bookmark   2 citations 25. Dialogue and Dialectic.David Evans - 2007 - The Proceedings of the Twenty-First World Congress of Philosophy 10:61-65. Plato wrote dialogues, and he praised dialectic, or conversation, as a suitable style for fruitful philosophical investigation. His works are great literature; and nodoubt this quality derives much from their form as dialogues. They also have definite philosophical content; and an important part of this content is their dialecticalepistemology. Dialectic is part of the content of Plato's philosophy. Can we reconcile this content with his literary style? I shall examine and sharpen the sense of this problem by referring to four (...) Export citation Bookmark 26. How to gain evidence for causation in disease and therapeutic intervention: from Koch’s postulates to counter-counterfactuals.David W. Evans - 2022 - Medicine, Health Care and Philosophy 25 (3):509-521. Researchers, clinicians, and patients have good reasons for wanting answers to causal questions of disease and therapeutic intervention. This paper uses microbiologist Robert Koch’s pioneering work and famous postulates to extrapolate a logical sequence of evidence for confirming the causes of disease: association between individuals with and without a disease; isolation of causal agents; and the creation of a counterfactual. This paper formally introduces counter-counterfactuals, which appear to have been used, perhaps intuitively, since the time of Koch and possibly earlier. (...) Export citation Bookmark 27. Simplicity of the automorphism groups of some Hrushovski constructions.David M. Evans, Zaniar Ghadernezhad & Katrin Tent - 2016 - Annals of Pure and Applied Logic 167 (1):22-48. 
Export citation Bookmark   1 citation 28. Magnesium Flares in the Night Sky.David J. Evans - 2001 - Theory, Culture and Society 18 (1):163-179. Export citation Bookmark   2 citations 29. The Highest Good in the Dialectic of Kant’s Critique of Practical Reason.David Evans - 2008 - Proceedings of the Xxii World Congress of Philosophy 16:59-65. Kant’s moral philosophy is celebrated for its doctrines of the primacy of the good will, the categorical imperative, and the significance of autonomy. These themes are pursued in the section of the Critique of Practical Reason which Kant called the Analytic, as well as in less formal works such as The Foundations of the Metaphysics of Morals. In his main work Kant added a Dialectic, which is less well studied but is still essential to understanding his whole project. The concept (...) Export citation Bookmark 30. Export citation Bookmark 31. Export citation Bookmark 32. Christopher Hookway "Scepticism".David Evans - 1993 - Humana Mente:366. Export citation Bookmark 33. Gotama the Physician.David Evans - 1998 - Buddhist Studies Review 15 (2):182-192. No categories Export citation Bookmark 34. Introduction: unstiffening all our theories: William James and the culture of modernism.David H. Evans - 2017 - In David Howell Evans (ed.), Understanding James, Understanding Modernism. Bloomsbury Academic. Export citation Bookmark 35. Letter to the Editor.David Evans - 1998 - Buddhist Studies Review 15 (1):79-80. No categories Export citation Bookmark 36. Letter to the Editor.David Evans - 1994 - Buddhist Studies Review 11 (1):67-68. No categories Export citation Bookmark 37. Letter to the Editor.David Evans - 1991 - Buddhist Studies Review 8 (1-2):167-164. No categories Export citation Bookmark 38. Letter to the Editor.David Evans - 1988 - Buddhist Studies Review 5 (1):60-62. No categories Export citation Bookmark 39. More on Sunnata.David Evans - 1980 - Buddhist Studies Review 2 (2):109-113. No categories Export citation Bookmark 40. Never reject anything. Nothing has been proved": William James and Gertrude Stein on time and language.David H. Evans - 2017 - In David Howell Evans (ed.), Understanding James, Understanding Modernism. Bloomsbury Academic. Export citation Bookmark 41. Platonic Arguments.David Evans & William Charlton - 1996 - Aristotelian Society Supplementary Volume 70:177-208. No categories Export citation Bookmark 42. The Beginnings of Buddhism. Kogen Mizuno; tr. by Richard L. Gage.David Evans - 1986 - Buddhist Studies Review 3 (1):54-55. The Beginnings of Buddhism. Kogen Mizuno; tr. by Richard L. Gage. Kosei, Tokyo 1980. xiv + 220 pp. £4.75. No categories Export citation Bookmark 43. Teaching Philosophy Historically.David Evans - 2007 - Discourse: Learning and Teaching in Philosophical and Religious Studies 7 (1):81-94. No categories Export citation Bookmark 44. Socrate pour tous. Enseigner la philosophie aux non-philosophes, Actes du Colloque de Copenhague de la Fédération internationale des sociétés de philosophie, coll. « Pour demain ».Jean Ferrari, Peter Kemp, David Evans & Nelly Robinet-bruyère - 2004 - Revue Philosophique de la France Et de l'Etranger 194 (4):472-473. Export citation Bookmark 45. The geometry of Hrushovski constructions, II. The strongly minimal case.David M. Evans & Marco S. Ferreira - 2012 - Journal of Symbolic Logic 77 (1):337-349. We investigate the isomorphism types of combinatorial geometries arising from Hrushovski's flat strongly minimal structures and answer some questions from Hrushovski's original paper. 
Export citation Bookmark   1 citation 46. Teaching Philosophy World-Wide: The FISP Committee. David Evans - 1992 - Teaching Philosophy 15 (3):301-304. 49. Ancient and Modern Dialectic. David Evans - 1991 - Philosophical Studies (Dublin) 33:39-53. No categories
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3630932867527008, "perplexity": 6807.591914621088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00599.warc.gz"}