http://math.stackexchange.com/questions/312131/does-ghg-1-subseteq-h-imply-ghg-1-h/312141
# Does $gHg^{-1}\subseteq H$ imply $gHg^{-1}= H$? [duplicate]

Let $G$ be a group, $H<G$ a subgroup and $g$ an element of $G$. Let $\lambda_g$ denote the inner automorphism which maps $x$ to $gxg^{-1}$. I wonder whether $H$ can be mapped to a proper subgroup of itself, i.e. $\lambda_g(H)\subset H$.

I tried to approach this problem topologically. Since every group is the fundamental group of a connected CW-complex of dimension 2, let $(X,x_0)$ be such a space for $G$. Since $X$ is (locally) path-connected and semi-locally simply-connected, there exists a (locally) path-connected covering space $p:(\widetilde X,\widetilde x_0)\to(X,x_0)$ such that $p_*(\pi_1(\widetilde X,\widetilde x_0))=H$. The element $g$ corresponds to $[\gamma]\in\pi_1(X,x_0)$, and its lift at $\widetilde x_0$ is a path ending at $\widetilde x_1$. By hypothesis, $H\subseteq g^{-1}Hg$, which leads to the existence of a unique lift $f:(\widetilde X,\widetilde x_0)\to(\widetilde X,\widetilde x_1)$ such that $p=p\circ f$. This lift turns out to be a surjective covering map itself, and it is a homeomorphism iff $H=g^{-1}Hg$.

I was unsuccessful in showing injectivity. If $x_1$ and $x_2$ have the same image under $f$, then $x_1$, $x_2$, and $f(x_1)=f(x_2)$ all lie in the same fiber. I took $\lambda$ to be a path from $x_1$ to $x_2$ and have been playing around with $\lambda$, $p\lambda$, and $f\lambda$, but got nowhere. Of course, there could also be a direct algebraic proof. On the other hand, if the statement is not true, then perhaps someone knows of a counterexample. -

## marked as duplicate by user1729, PVAL, Mark Bennet, Jonas Meyer, Hagen von Eitzen Oct 29 '14 at 21:54

Some counterexamples are given on MO. –  anon Feb 23 '13 at 17:25

Another example. Let $$K = \left\{ \frac{a}{2^{n}} : a \in \mathbf{Z}, n \in \mathbf{N} \right\}$$ be the additive subgroup of $\mathbf{Q}$. The map $g : x \mapsto 2 x$ is an automorphism of $K$. Consider the semidirect product $G = K \rtimes \langle g \rangle$.
(So that conjugating an element $x$ of $K$ by $g$ in $G$ is the same as taking the value of $g$ on $x$.) Let $H = \left\{ \frac{a}{2} : a \in \mathbf{Z} \right\}$ be a subgroup of $G$. Then $H^{g} = \mathbf{Z} < H$.

PS When I was first exposed to these examples, what struck me is what happens if you look at it from the other end: $\mathbf{Z}^{g^{-1}} = H > \mathbf{Z}$.

Thanks for your answer. But I don't quite understand what the group $G$ is. Of what form are the elements? –  Stefan Hamcke Feb 23 '13 at 18:07
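A small computational sketch of this counterexample (the helper names `conjugate_by_g`, `in_H`, and `in_Z` are ours, not from the answer): modelling conjugation by $g$ as doubling on $K$, every element of $H$ lands inside $\mathbf{Z}$, a proper subgroup of $H$.

```python
from fractions import Fraction

# In G = K ⋊ ⟨g⟩, conjugating x ∈ K by g acts on K as x ↦ 2x.
def conjugate_by_g(x):
    return 2 * x

def in_H(x):
    # H = { a/2 : a ∈ Z }, the half-integers
    return (2 * x).denominator == 1

def in_Z(x):
    # Z viewed as a subgroup of K
    return x.denominator == 1

# Conjugation maps H into Z, which is a proper subgroup of H:
sample_H = [Fraction(a, 2) for a in range(-6, 7)]
images = [conjugate_by_g(x) for x in sample_H]
assert all(in_Z(y) for y in images)   # H^g ⊆ Z
assert all(in_H(y) for y in images)   # hence H^g ⊆ H
# ... and the containment is proper: 1/2 ∈ H but 1/2 ∉ Z.
assert in_H(Fraction(1, 2)) and not in_Z(Fraction(1, 2))
```

This only samples finitely many elements, of course; it is a sanity check on the algebra, not a proof.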
http://www.realclimate.org/index.php/archives/2004/11/pca-details/
### PCA details

Filed under: — mike @ 22 November 2004

PCA of the 70 North American ITRDB Tree-ring Proxy Series used by Mann et al (1998)

a. Eigenvalue spectrum for Mann et al (1998) PCA analysis (1902-1980 zero reference period, data normalized by detrended 1902-1980 standard deviation):

| Rank | Explained Variance | Cumulative Variance |
|------|--------------------|---------------------|
| 1    | 0.3818             | 0.3818              |
| 2    | 0.0976             | 0.4795              |
| 3    | 0.0491             | 0.5286              |
| 4    | 0.0354             | 0.5640              |

First 2 PCs were retained based on application of the standard selection rules (see Figure 1) used by Mann et al (1998).

b. Eigenvalue spectrum for PCA analysis based on the convention of MM (1400-1971 zero reference period, data un-normalized):

| Rank | Explained Variance | Cumulative Variance |
|------|--------------------|---------------------|
| 1    | 0.1946             | 0.1946              |
| 2    | 0.0905             | 0.2851              |
| 3    | 0.0783             | 0.3634              |
| 4    | 0.0663             | 0.4297              |
| 5    | 0.0549             | 0.4846              |
| 6    | 0.0373             | 0.5219              |

First 5 PCs should be retained in this case employing the standard selection rules (see Figure 1) used by Mann et al (1998).

FIGURE 1. Comparison of eigenvalue spectrum resulting from a Principal Components Analysis (PCA) of the 70 North American ITRDB data used by Mann et al (1998) back to AD 1400 based on Mann et al (1998) centering/normalization convention (blue circles) and MM centering/normalization convention (red crosses). Shown also is the null distribution based on Monte Carlo simulations with 70 independent red noise series of the same length and same lag-one autocorrelation structure as the actual ITRDB data using the respective centering and normalization conventions (blue curve for MBH98 convention, red curve for MM convention). In the former case, 2 (or perhaps 3) eigenvalues are distinct from the noise eigenvalue continuum. In the latter case, 5 (or perhaps 6) eigenvalues are distinct from the noise eigenvalue continuum.
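The mechanics of the comparison above — computing an explained-variance spectrum under two different centering conventions — can be sketched as follows. This is an illustrative sketch only: synthetic random data stands in for the 70 ITRDB series, the window lengths merely mimic 1400-1980 vs. 1902-1980, and none of the numbers it produces reproduce the tables above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(581, 70))   # stand-in: years x series

def explained_variance(data, ref):
    """Fraction of variance per PC, centering on the rows selected by ref."""
    centered = data - data[ref].mean(axis=0)        # centre on reference period
    cov = centered.T @ centered / len(centered)
    vals = np.linalg.eigvalsh(cov)[::-1]            # eigenvalues, descending
    return vals / vals.sum()

# MM-style: centre on the full period; MBH98-style: centre on a late window.
full = explained_variance(X, slice(None))
late = explained_variance(X, slice(-79, None))      # last 79 "years", like 1902-1980

assert abs(full.sum() - 1) < 1e-9 and abs(late.sum() - 1) < 1e-9
```

The normalization step (dividing by a detrended reference-period standard deviation) is omitted here; only the centering choice is illustrated.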
https://en.m.wikibooks.org/wiki/CLEP_College_Algebra/Absolute_Value_Equations
# CLEP College Algebra/Absolute Value Equations

## Absolute Values

Absolute values, represented using two vertical bars, ${\displaystyle \vert }$ , are common in Algebra. They signify a number's distance from 0 on a number line. If the number is negative, it becomes positive; if the number is positive, it remains positive:

${\displaystyle \left\vert 4\right\vert =4\,}$ ${\displaystyle \left\vert -4\right\vert =4\,}$

For a formal definition:

${\displaystyle |x|={\begin{cases}x,&{\text{if }}x\geq 0\\-x,&{\text{if }}x<0\end{cases}}}$

This can be read aloud as the following:

If ${\displaystyle x\geq 0}$ , then ${\displaystyle |x|=x}$

If ${\displaystyle x<0}$ , then ${\displaystyle |x|=-x}$

The formal definition is simply a declaration of what the function represents under certain restrictions of the ${\displaystyle x}$ -value. For any ${\displaystyle x<0}$ , the output of the graph of the function on the ${\displaystyle xy}$ plane is that of the linear function ${\displaystyle y=-x}$ . If ${\displaystyle x\geq 0}$ , then the output is that of the linear function ${\displaystyle y=x}$ . For our purposes, it does not technically matter whether we split the cases as ${\displaystyle x\geq 0{\text{ and }}x<0}$ or as ${\displaystyle x>0{\text{ and }}x\leq 0}$ . As long as you pick one convention and are consistent with it, it does not matter how this is defined. By convention, it is usually defined as in the formal definition above.

Please note that the opposite (the negative, -) of a negative number is a positive number. For example, the opposite of ${\displaystyle -1}$ is ${\displaystyle 1}$ . Some books and teachers refer to the opposite of a number as the negative of the given magnitude. For convenience, this shortcut in language may be used, so always keep it in mind.

### Properties of the Absolute Value Function

We will define the properties of the absolute value function.
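As a quick sanity check, the piecewise definition above can be transcribed directly into code (a sketch; Python's built-in `abs` plays the same role):

```python
# A direct transcription of the piecewise definition of |x|.
def absolute_value(x):
    if x >= 0:
        return x    # |x| = x  when x >= 0
    return -x       # |x| = -x when x < 0

assert absolute_value(4) == 4
assert absolute_value(-4) == 4
assert absolute_value(0) == 0
# Agrees with the built-in on a range of integers:
assert all(absolute_value(x) == abs(x) for x in range(-10, 11))
```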
This will be important to know when taking the CLEP exam, since it can drastically speed up the process of solving absolute value equations. Finally, the practice problems in this section will test your knowledge of absolute value equations. We recommend you learn these concepts to the best of your ability; however, this will not be strictly necessary by the time one takes the exam.

#### Domain and Range

Let ${\displaystyle f(x)=|x|}$ whose mapping is ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ . By definition, ${\displaystyle |x|={\begin{cases}-x&{\text{if}}&x<0\\x&{\text{if}}&x\geq 0\end{cases}}}$ . Because it can only be the case that ${\displaystyle y=-x{\text{ if }}x<0}$ and ${\displaystyle y=x{\text{ if }}x\geq 0}$ , it is not possible for ${\displaystyle |x|<0}$ . However, since ${\displaystyle x}$ has no restriction, the domain, ${\displaystyle A}$ , has no restriction. Thus, if ${\displaystyle B}$ represents the range of the function, then ${\displaystyle A=\{x\in \mathbb {R} \}}$ and ${\displaystyle B=\{y\geq 0|y\in \mathbb {R} \}}$ .

Definition: Domain and Range. Let ${\displaystyle f(x)=|x|}$ whose mapping is ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ represent the absolute value function. If ${\displaystyle A}$ is the domain and ${\displaystyle B}$ is the range, then ${\displaystyle A=\{x\in \mathbb {R} \}}$ and ${\displaystyle B=\{y\geq 0|y\in \mathbb {R} \}}$ .

By the above definition, there exists an absolute minimum of the parent function, and it occurs at the origin, ${\displaystyle O(0,0)}$ .

#### Even or odd?

Recall the definition of an even and an odd function. Let there be a function ${\displaystyle f:A\to B}$ .

If ${\displaystyle x\in A}$ and ${\displaystyle f(-x)=f(x)}$ , then ${\displaystyle f}$ is even.

If ${\displaystyle x\in A}$ and ${\displaystyle f(-x)=-f(x)}$ , then ${\displaystyle f}$ is odd.

Proof: ${\displaystyle f(x)=|x|}$ is even. Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto |x|}$ .
By definition, ${\displaystyle f(x)=|x|={\begin{cases}-x&{\text{if}}&x<0\\x&{\text{if}}&x\geq 0\end{cases}}}$ . Suppose ${\displaystyle x\in A}$ . If ${\displaystyle x=0}$ , then trivially ${\displaystyle f(-0)=f(0)}$ . Let ${\displaystyle x>0\Rightarrow -x<0}$ . Then ${\displaystyle f(x)=x}$ and ${\displaystyle f(-x)=-(-x)=x}$ , so ${\displaystyle f(-x)=f(x)}$ . The case ${\displaystyle x<0}$ is symmetric. ${\displaystyle \blacksquare }$

Because ${\displaystyle f(x)}$ is even, it is also the case that it is symmetrical. A review of this can be found here (Graphs and Their Properties).

#### One-to-one and onto?

Recall the definitions of injective and surjective.

If ${\displaystyle u,v\in A}$ and ${\displaystyle f(u)=f(v)\Rightarrow u=v}$ , then ${\displaystyle f(x)}$ is injective.

If for all ${\displaystyle b\in B}$ there is an ${\displaystyle a\in A}$ such that ${\displaystyle f(a)=b}$ , then ${\displaystyle f(x)}$ is surjective.

Proof: ${\displaystyle f(x)=|x|}$ is non-injective. Suppose ${\displaystyle u\in \mathbb {R} }$ with ${\displaystyle u\neq 0}$ , and let ${\displaystyle v=-u}$ . By the previous proof, ${\displaystyle f(x)}$ is even, so ${\displaystyle f(u)=f(v)}$ even though ${\displaystyle u\neq v}$ . Two distinct inputs share the same output, so ${\displaystyle f(x)}$ is non-injective. ${\displaystyle \blacksquare }$

Because we have not established how to prove these statements through algebraic manipulation, we will be deriving properties as we go to gain a further understanding of these new functions. Establishing whether a function is surjective is simply a matter of checking the definition (negating it otherwise to establish the function as non-surjective).

Proof: ${\displaystyle f(x)=|x|}$ is non-surjective. There exists an element ${\displaystyle b=-1\in \mathbb {R} }$ for which ${\displaystyle f(x)=|x|\neq -1}$ for all ${\displaystyle x\in \mathbb {R} }$ . ${\displaystyle \blacksquare }$

A review of the definitions can be found here (Definition and Interpretations of Functions).
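The three properties just proved — even, non-injective, non-surjective onto the reals — lend themselves to a quick numeric check (a sketch, sampling integer inputs rather than all of ${\displaystyle \mathbb {R} }$):

```python
f = abs  # the parent function f(x) = |x|

# Even: f(-x) == f(x) for every sampled x.
assert all(f(-x) == f(x) for x in range(-50, 51))

# Non-injective: two distinct inputs share an output.
u, v = 3, -3
assert u != v and f(u) == f(v)

# Non-surjective onto R: no sampled x has f(x) == -1.
assert all(f(x) != -1 for x in range(-1000, 1001))
```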
#### Intercepts and Inflections of the Parent Function

Figure 1: ${\displaystyle f(x)=|x|}$ graphed on the first and second quadrant (above the ${\displaystyle x}$ axis), showing only the positive ${\displaystyle y}$ values.

With all the information provided in the previous sections, we can derive the graph of the parent function ${\displaystyle f(x)=|x|}$ . It is even, and therefore symmetrical about the ${\displaystyle y}$ -axis, and there is an ${\displaystyle x}$ -intercept at ${\displaystyle x=0}$ . Finally, because we know the domain and range, we know the minimum of the function is at ${\displaystyle O(0,0)}$ , and we know the definition of the function, we can easily show that the graph of ${\displaystyle f(x)=|x|}$ is the image to the right (Figure 1). A summary of what you should see from the graph is this:

• Domain: ${\displaystyle \{x\in \mathbb {R} \}}$ .
• Range: ${\displaystyle \{y\geq 0|y\in \mathbb {R} \}}$ .
• There is an absolute minimum at ${\displaystyle O(0,0)}$ .
• There is one ${\displaystyle x}$ -intercept at ${\displaystyle x=0}$ .
• There is one ${\displaystyle y}$ -intercept at ${\displaystyle y=0}$ .
• The graph is even and symmetrical about the ${\displaystyle y}$ -axis.
• The graph is non-injective and non-surjective.
• The graph has no inflection point.

#### Transformations of the Parent Function

Many times, one will not be working with the parent function. Many real-life applications of this function involve at least some manipulation to either the input or the output: vertical stretching/contraction, horizontal stretching/contraction, reflection about the ${\displaystyle x}$ -axis, reflection about the ${\displaystyle y}$ -axis, and vertical/horizontal shifting. Luckily, not much changes when it comes to the manipulation of these functions. The exceptions will be talked about in more detail:

Vertical Expansion/Contraction/Flipping. Let ${\displaystyle f(x)=|x|}$ and ${\displaystyle g(x)=A\cdot f(x)}$ .
There must be an ${\displaystyle \left(x_{0},y_{0}\right)\in f(x)\Leftrightarrow \left(x_{0},Ay_{0}\right)\in g(x)}$ . Thus,

• If ${\displaystyle A>1}$ , then ${\displaystyle g(x)}$ is an expansion of ${\displaystyle f(x)}$ by a factor of ${\displaystyle A}$ .
• If ${\displaystyle 0<A<1}$ , then ${\displaystyle g(x)}$ is a contraction of ${\displaystyle f(x)}$ by a factor of ${\displaystyle A}$ .
• If ${\displaystyle A<0}$ , then ${\displaystyle g(x)}$ is a reflection of ${\displaystyle f(x)}$ about the ${\displaystyle x}$ -axis.

Vertical Shift. Let ${\displaystyle f(x)=|x|}$ and ${\displaystyle g(x)=f(x)+b}$ . There must be an ${\displaystyle \left(x_{0},y_{0}\right)\in f(x)\Leftrightarrow \left(x_{0},y_{0}+b\right)\in g(x)}$ . Thus,

• If ${\displaystyle b>0}$ , then ${\displaystyle g(x)}$ is an upward shift of ${\displaystyle f(x)}$ by ${\displaystyle b}$ .
• If ${\displaystyle b<0}$ , then ${\displaystyle g(x)}$ is a downward shift of ${\displaystyle f(x)}$ by ${\displaystyle b}$ .

Horizontal Shift. Let ${\displaystyle f(x)=|x|}$ and ${\displaystyle g(x)=f(x+a)}$ . There must be an ${\displaystyle \left(x_{0},y_{0}\right)\in f(x)\Leftrightarrow \left(x_{0}-a,y_{0}\right)\in g(x)}$ . Thus,

• If ${\displaystyle a>0}$ , then ${\displaystyle g(x)}$ is a leftward shift of ${\displaystyle f(x)}$ by ${\displaystyle a}$ .
• If ${\displaystyle a<0}$ , then ${\displaystyle g(x)}$ is a rightward shift of ${\displaystyle f(x)}$ by ${\displaystyle a}$ .

The properties not listed above are exceptions to the general rule about functions found in the chapter Algebra of Functions. The exceptions are not anything substantial. The only differences between what we found generally and what we have provided above are simply a result of what we found in the previous section.

• There is no reflection about the ${\displaystyle y}$ -axis because the function is even and symmetrical.
• There is no horizontal expansion and contraction because it gives the same result as vertical expansion and contraction (this will be proven later).

We now have all the information we will need about absolute value functions.

### Graphing Absolute Value Functions

This subsection is absolutely not optional. You will be asked these questions very explicitly, so it is a good idea to understand this section. If you did not read the previous subsection, you are not going to understand how any of this makes sense. Fortunately, the idea behind graphing any arbitrary function mostly depends on what you know about the function. Therefore, we can graph these functions without much trouble. These examples should hopefully be further confirmation of what you learned in Algebra of Functions.

Example 1.2(a): Graph the following absolute value function: ${\displaystyle f(x)={\frac {1}{2}}|2x+6|-5}$

Method 1: Follow the procedure from Algebra of Functions. This method will work for any arbitrary function. However, it will not always be the quickest method for absolute value functions. We follow these steps. Let ${\displaystyle f(x)}$ be the parent function and ${\displaystyle g(x)=Af(ax+b)+c}$ .

1. Factor ${\displaystyle ax+b}$ so that ${\displaystyle ax+b=a\left(x+{\frac {b}{a}}\right)}$ .
2. Horizontally shift ${\displaystyle f(x)}$ to the left/right by ${\displaystyle {\frac {b}{a}}}$ .
3. Horizontally contract/expand ${\displaystyle f\left(x-{\frac {b}{a}}\right)}$ by ${\displaystyle a}$ .
4. Vertically expand/contract/flip ${\displaystyle f\left(a\left(x-{\frac {b}{a}}\right)\right)}$ by ${\displaystyle A}$ .
5. Vertically shift ${\displaystyle Af\left(a\left(x-{\frac {b}{a}}\right)\right)}$ upward/downward by ${\displaystyle c}$ .

Since ${\displaystyle f(x)={\frac {1}{2}}|2x+6|-5}$ has ${\displaystyle A={\frac {1}{2}}}$ , ${\displaystyle a=2}$ , ${\displaystyle b=6}$ , and ${\displaystyle c=-5}$ , we may apply these steps as given to get to our desired result.
As this should be review, we will not meticulously graph each step. As such, only the final function (and the parent function in red) will be shown.

Method 3: Find the absolute minimum or maximum, graph one half, reflect. While Method 1 will always work for any arbitrary, continuous function, Method 3 is fastest for an absolute value function that composes a linear function.

First, we should try to find the vertex. We know from Algebra of Functions that the only things that affect the location of the vertex in even functions are the ${\displaystyle x-a}$ term in the inner composed linear function and the vertical shift of the entire function, ${\displaystyle c}$ . Rewriting the absolute value equation as shown below will allow us to find the vertex of the function.

${\displaystyle f(x)={\frac {1}{2}}|2x+6|-5={\frac {1}{2}}|2(x+3)|-5}$

This tells us the vertex is at ${\displaystyle (-3,-5)}$ .

This method then tells us to graph the slopes. However, how should that work? Recall the formal definition of an arbitrary absolute value function:

${\displaystyle |g(x)|={\begin{cases}-g(x)&{\text{if}}&x<x_{0}\\g(x)&{\text{if}}&x\geq x_{0}\end{cases}}}$

In the above definition of a general absolute value function, ${\displaystyle g(x_{0})=-g(x_{0})=0}$ . This means that the ${\displaystyle x}$ -value implying a vertex on the function is where we split the absolute value function. In our instance, ${\displaystyle |g(x)|=|2x+6|}$ , for which ${\displaystyle 2(-3)+6=0}$ , so ${\displaystyle x_{0}=-3}$ . We can thus say that

${\displaystyle |2x+6|={\begin{cases}-2x-6&{\text{if}}&x<-3\\2x+6&{\text{if}}&x\geq -3\end{cases}}}$

To be continued.

### Practice Problems

For all of the problems given below, ${\displaystyle a=-2}$ and ${\displaystyle b=3}$ . It is recommended one does all the problems below. Evaluate the following expressions.
1. ${\displaystyle |a|=}$
2. ${\displaystyle |b|=}$
3. ${\displaystyle -|a|=}$
4. ${\displaystyle -|b|=}$
5. ${\displaystyle {\frac {1}{a}}\cdot |b|=}$
6. ${\displaystyle a\cdot |b|=}$
7. ${\displaystyle |a-b|=}$
8. ${\displaystyle |a+b|=}$
9. ${\displaystyle b-|a|=}$
10. ${\displaystyle a-|b|=}$
11. ${\displaystyle \left\vert {\frac {b}{a}}\right\vert =}$
12. ${\displaystyle b\cdot |a|=}$

Properties of Absolute Value

13. Let ${\displaystyle y=f(x)=|x|}$ . The following properties are listed below. Select the definition that BEST matches the description of the property or how one can prove the listed property: even function, non-surjective, vertical shift, horizontal shift.

• ${\displaystyle \exists b\in \mathbb {R} }$ such that ${\displaystyle f(x)\neq b}$ .
• The range of ${\displaystyle f(x)}$ is ${\displaystyle \{y|y\in \mathbb {R} \wedge y\geq 0\}}$ .
• ${\displaystyle f(x)=f(-x)}$ .
• If ${\displaystyle b<0\Leftrightarrow f(x)\neq b}$ , then ${\displaystyle y_{p}=f(x)+b\Rightarrow \{y_{p}\vert y_{p}\in \mathbb {R} \wedge y_{p}\geq b\}}$ .
• ${\displaystyle \left(x_{0},y_{0}\right)\in f(x)\Leftrightarrow \left(x_{0}-a,y_{0}\right)\in f(x+a)}$ .
• The function ${\displaystyle f(x)}$ is many-to-one.

## Absolute Value Equations

Now, let's say that we're given the equation ${\displaystyle \left\vert k\right\vert =8}$ and we are asked to solve for ${\displaystyle k}$ . What number would satisfy the equation ${\displaystyle \left\vert k\right\vert =8}$ ? 8 would work, but -8 would also work. That is why there can be two solutions to one equation. Why is this true? That is what the next example is for.

Example 2.0(a): Formally define the function below: ${\displaystyle f(k)=|2k+6|}$

Recall what the absolute value represents: it is the distance of a number to the left or right of the starting point, the point where the inner function is zero.
Recall the formal definition of the absolute value function:

${\displaystyle f(x)=|x|={\begin{cases}-x{\text{ if }}x<0\\x{\text{ if }}x\geq 0\end{cases}}}$

We want to formally define the function ${\displaystyle f(k)=|2k+6|}$ . Let ${\displaystyle x=k}$ . First, we need to find where ${\displaystyle 2k+6=0}$ .

${\displaystyle 2k+6=0}$ ${\displaystyle \Leftrightarrow 2k=-6}$ ${\displaystyle \Leftrightarrow k=-3}$

From that, it is safe to say that the following is true:

${\displaystyle f(k)=|2k+6|={\begin{cases}-(2k+6){\text{ if }}k<-3\\2k+6{\text{ if }}k\geq -3\end{cases}}}$

It is important to know how to do this so that we may formally apply an algorithm throughout this entire chapter. For now, we will be exploring ways to solve these equations based on the examples given, including the formalizing of an algorithm, which we will give later.

Example 2.0(b): Solve for ${\displaystyle k}$ : ${\displaystyle |2k+6|=8}$

Since we formally defined the function in Example 2.0(a), we will write the definition down.

${\displaystyle f(k)=|2k+6|={\begin{cases}-(2k+6){\text{ if }}k<-3\\2k+6{\text{ if }}k\geq -3\end{cases}}}$

It is important to realize what the equation is saying: "there is a function ${\displaystyle y=f(k)}$ equal to ${\displaystyle y=8}$ such that ${\displaystyle \exists k\in \mathbb {R} }$ ." As defined in the opening section, this function is non-injective and non-surjective. Therefore, there must be a ${\displaystyle k_{1}{\text{ and }}k_{2}}$ such that each satisfies ${\displaystyle f(k)=8}$ . Therefore, the following must be true: ${\displaystyle 2k+6=8\quad {\text{OR}}\quad -(2k+6)=8}$ .
All that is left to do is to solve the two equations for ${\displaystyle k}$ , one for each case, differentiated as the positive and negative case:

Negative case: ${\displaystyle -(2k+6)=8}$ ${\displaystyle \Leftrightarrow 2k+6=-8}$ ${\displaystyle \Leftrightarrow 2k=-14}$ ${\displaystyle \Leftrightarrow k=-7}$

Positive case: ${\displaystyle 2k+6=8}$ ${\displaystyle \Leftrightarrow 2k=2}$ ${\displaystyle \Leftrightarrow k=1}$

We found our two solutions for ${\displaystyle k}$ : ${\displaystyle k=-7,1\blacksquare }$

The above example demonstrates an algorithm that is commonly taught in high schools and many universities, since it is applicable to every absolute value equation. The steps of the algorithm are stated below. Given ${\displaystyle |g(x)|+c=f(x)}$ :

1. Isolate the absolute value function so that it is equal to another function, i.e. ${\displaystyle |g(x)|=f(x)-c}$ .
2. Split the equation into two cases for the composed function. Given ${\displaystyle |g(x)|=f(x)-c}$ ,
• Solve ${\displaystyle g(x)=f(x)-c}$ and
• Solve ${\displaystyle g(x)=-(f(x)-c)}$ .

A basic principle of solving these absolute value equations is the need to keep the absolute value by itself. This should be enough for most people to understand, yet this phrasing can be a little ambiguous to some students. As such, a lot of practice problems may be in order here. We will apply all the steps of the algorithm outlined above instead of going through the process of formally solving these equations, because the first example was meant to show that the algorithm is true.

Example 2.0(c): Solve for ${\displaystyle k}$ : ${\displaystyle 3|2k+6|=12}$

We will show you two ways to solve this equation. The first is the standard way; the second will show you something not usually taught. Standard way: Multiply the constant multiple by its inverse. We would have to divide both sides by ${\displaystyle 3}$ to get the absolute value by itself.
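The two-case algorithm above can be sketched as a small solver for the linear case ${\displaystyle |mk+n|=r}$ (the function name `solve_abs_linear` and the parameters ${\displaystyle m,n,r}$ are our own labeling, not the book's):

```python
from fractions import Fraction as Fr

def solve_abs_linear(m, n, r):
    """Solve |m*k + n| = r over the rationals, returning sorted solutions."""
    if r < 0:
        return []  # an absolute value can never be negative
    # positive case: m*k + n = r ; negative case: m*k + n = -r
    sols = {Fr(r - n, m), Fr(-r - n, m)}
    return sorted(sols)

assert solve_abs_linear(2, 6, 8) == [-7, 1]     # |2k + 6| = 8, as above
assert solve_abs_linear(2, 6, 4) == [-5, -1]    # |2k + 6| = 4
assert solve_abs_linear(2, 6, -2) == []         # |2k + 6| = -2: no real solution
```

Note the set: when ${\displaystyle r=0}$ the two cases coincide, and the set keeps the solution list free of duplicates.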
We would set up the two different equations using similar reasoning as in the first example: ${\displaystyle 2k+6=4\quad {\text{OR}}\quad 2k+6=-4}$ . Then we would solve, by subtracting the 6 from both sides and dividing both sides by 2 to get the ${\displaystyle k}$ by itself, resulting in ${\displaystyle k=-5,-1}$ . We will leave the solving part as an exercise to the reader.

Other way: "Distribute" the three into the absolute value. Pay close attention to the steps and reasoning laid out herein, for the reasoning behind why this works is just as important as the trick itself, if not more so. Let us first generalize the problem. Let there be a positive, non-zero constant multiple ${\displaystyle c}$ multiplied to the absolute value equation ${\displaystyle |2k+6|}$ :

${\displaystyle c\cdot |2k+6|=|c|\cdot |2k+6|\quad {\text{OR}}\quad c\cdot |2k+6|=|-c|\cdot |2k+6|}$ .

Let us assume both are true. If both statements are true, then you are allowed to distribute the positive constant ${\displaystyle c}$ inside the absolute value. Otherwise, this method is invalid!

{\displaystyle {\begin{aligned}|c|\cdot |2k+6|&=|c(2k+6)|&\qquad |-c|\cdot |2k+6|&=|-c(2k+6)|\\&=|2ck+6c|&\qquad &=|-2ck-6c|=|-(2ck+6c)|\\&=|1|\cdot |2ck+6c|={\color {red}1\cdot |2ck+6c|}&\qquad &=|-1|\cdot |2ck+6c|={\color {red}1\cdot |2ck+6c|}\end{aligned}}}

Notice the two equations have the same highlighted answer in red, meaning that so long as the value of the constant multiple ${\displaystyle c}$ is positive, you are allowed to distribute the ${\displaystyle c}$ inside the absolute value bars. However, this "distributive property" relied on the property that multiplying two absolute values is the same as the absolute value of the product. We need to prove that this is true before one can use it in a proof. For the student that spotted this gap: you may have a good logical mind, or a good eye for detail.
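Before the formal proof, a quick numeric spot-check (not a proof, just a sketch over sampled values) of the claim ${\displaystyle |b|\cdot |c|=|bc|}$ , and of the "distribution" trick it justifies:

```python
# Spot-check |b|*|c| == |b*c| over positive, negative, zero, and
# fractional values.
values = [-7, -2.5, -1, 0, 0.5, 3, 10]
for b in values:
    for c in values:
        assert abs(b) * abs(c) == abs(b * c)

# The same identity justifies distributing a positive constant:
# 3*|2k + 6| == |6k + 18| for any k.
assert all(3 * abs(2 * k + 6) == abs(6 * k + 18) for k in range(-20, 21))
```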
Proof: ${\displaystyle |b|\cdot |c|=|bc|}$ . Let us start with what we know:

${\displaystyle |x|={\begin{cases}x,&{\text{if}}&x\geq 0\\-x,&{\text{if}}&x<0\end{cases}}}$

If ${\displaystyle a<0}$ , then ${\displaystyle |a|=-a>0}$ . Else, if ${\displaystyle a\geq 0}$ , then ${\displaystyle |a|=a\geq 0}$ . Let ${\displaystyle b,c\in \mathbb {R} }$ , ${\displaystyle |b|=B}$ , ${\displaystyle |c|=C}$ , and ${\displaystyle b\cdot c=m}$ . The following three cases apply:

1. ${\displaystyle bc=m<0\Rightarrow |m|=-m>0}$ . This simply means that for some product ${\displaystyle bc}$ that equals a negative number ${\displaystyle m}$ , the absolute value of that product is ${\displaystyle -m}$ , the distance from zero. Because ${\displaystyle m<0}$ , multiplying the two sides by ${\displaystyle -1}$ changes the less-than to a greater-than, i.e. ${\displaystyle m<0\Leftrightarrow -m>0}$ .
2. ${\displaystyle bc=m=0\Rightarrow |m|=m=0}$ . For some product ${\displaystyle bc}$ that equals ${\displaystyle m=0}$ , the absolute value of that product is ${\displaystyle 0}$ .
3. ${\displaystyle bc=m>0\Rightarrow |m|=m>0}$ . For some product ${\displaystyle bc}$ that equals a positive number ${\displaystyle m}$ , the absolute value of the product is ${\displaystyle m}$ .

Since ${\displaystyle |bc|=|m|}$ always results in a nonnegative number, we can conclude that the function is equivalent to the following:

${\displaystyle |b\cdot c|=|m|={\begin{cases}m,&{\text{if }}m\geq 0\\-m,&{\text{if }}m<0\end{cases}}}$

Let ${\displaystyle |b|\cdot |c|=B\cdot C=n}$ . Since ${\displaystyle |b|=B\geq 0}$ and ${\displaystyle |c|=C\geq 0}$ , ${\displaystyle B\cdot C=n\geq 0}$ . This means that ${\displaystyle n=|n|}$ . Therefore, ${\displaystyle |b|\cdot |c|=|n|}$ . This allows us to conclude that

${\displaystyle |b|\cdot |c|=n=|n|={\begin{cases}n,&{\text{if }}n\geq 0\\-n,&{\text{if }}n<0\end{cases}}}$

Now, ${\displaystyle |bc|=|m|}$ covers both possibilities ${\displaystyle bc\geq 0\vee bc<0}$ .
However, ${\displaystyle |b|\cdot |c|=n}$ where ${\displaystyle n=|n|}$ . We have shown that ${\displaystyle \forall b,c\in \mathbb {R} }$ , we will always have ${\displaystyle |bc|\geq 0}$ and ${\displaystyle |b|\cdot |c|\geq 0}$ . Further, we already know that ${\displaystyle |x|\geq 0}$ , meaning even if ${\displaystyle x<0}$ , ${\displaystyle |x|>0}$ . Thus, ${\displaystyle |m|=n=|n|}$ . Therefore, ${\displaystyle \forall b,c\in \mathbb {R} }$ , ${\displaystyle |b|\cdot |c|=|bc|\blacksquare }$ .

One nice thing about this proof is that we can use it to conclude that the absolute value of a product of functions equals the product of the absolute values of the inner functions. All we have to do is assume each factor equals some other function instead of a number, as implicitly written within this proof. The only necessary change one needs to make is to define all the variables within as functions. Having confirmed the general case, we may employ this trick when we see it again.

Let us apply this property to the original problem (this gives us the green result below):

${\displaystyle 3|2k+6|={\color {green}|6k+18|=12}}$

This all implies that ${\displaystyle 6k+18=12\quad {\text{OR}}\quad 6k+18=-12}$ . From there, a simple use of algebra will show that the answer to the original problem is again ${\displaystyle k=-5,-1}$ .

Let us change the previous problem a little so that the constant multiple is now negative. Without changing much else, what will be true as a result? Let us find out.

Example 2.0(d): Solve for ${\displaystyle k}$ : ${\displaystyle -4|2k+6|=8}$

We will attempt the problem in two different ways: the standard way and the other way, which we will explain later.

Standard way: Multiply the constant multiple by its inverse. Divide as in the previous problem, so the equation becomes ${\displaystyle |2k+6|=-2}$ .
Recall what the absolute value represents: it is the distance of a number to the left or right of the starting point, zero. With this, do you notice anything strange? When you evaluate an absolute value, you will always get a nonnegative number, because a distance can never be negative. Because this is a logically impossible situation, there are no real solutions.

Notice how we specifically mentioned "real" solutions. This is because we are certain that solutions in the real set, ${\displaystyle \mathbb {R} }$ , do not exist. However, there might be some set out there which has solutions for this type of equation. Because of this possibility, we need to be mathematically rigorous and specifically state "no real solutions."

Other way: "Distribute" the constant multiple into the absolute value. Here, we notice that the constant multiple ${\displaystyle c<0}$ . The problem with that is there is no ${\displaystyle g}$ such that ${\displaystyle |g|<0}$ . The only way this would be true is for ${\displaystyle -|g|<0}$ because ${\displaystyle -|g|<0\qquad {\text{Divide both sides by }}-1}$ ${\displaystyle |g|>0}$

With this property, we may therefore only distribute the constant multiple as ${\displaystyle |c|}$ with a ${\displaystyle -1}$ left as a factor outside the absolute value. As such, ${\displaystyle -4|2k+6|=-|8k+24|=8\qquad {\text{Divide both sides by }}-1}$ ${\displaystyle |8k+24|=-8}$

In the end, the other way also has us multiply a constant by its inverse on both sides. Either way, this "other method" still gives us the same answer: there is no real solution.

The next problem will be a little different. Keep in mind the principle we have had in mind throughout all the examples so far, and be careful, because a trap is set in this problem.

Example 2.0(e): Solve for ${\displaystyle x}$ : ${\displaystyle |3x-3|-3=2x-10}$

There are many ways we can attempt to find solutions to this problem.
We will do this the standard way and allow any student to do it however they so desire. ${\displaystyle |3x-3|-3=2x-10\qquad {\text{Add the }}3{\text{ to both sides.}}}$  ${\displaystyle |3x-3|=2x-7}$ Because the absolute value is isolated, we can begin with our generalized procedure. Assuming ${\displaystyle 2x-7>0}$ , we may begin by denoting these two equations: (1) ${\displaystyle 3x-3=2x-7}$  (2) ${\displaystyle 3x-3=-(2x-7)}$  These are only true if ${\displaystyle 2x-7>0}$ . For now, assume this condition holds. Let us solve for ${\displaystyle x}$  in each respective equation: Equation (1) ${\displaystyle 3x-3=2x-7\qquad {\text{Add }}3{\text{ and subtract }}2x{\text{ on both sides.}}}$  ${\displaystyle x=-4}$ Equation (2) ${\displaystyle 3x-3=-(2x-7)\qquad {\text{Distribute }}-1{\text{.}}}$  ${\displaystyle 3x-3=-2x+7\qquad \quad {\text{Add }}3{\text{ and add }}2x{\text{ on both sides.}}}$  ${\displaystyle 5x=10\qquad \qquad \qquad \quad {\text{Divide }}5{\text{ on both sides.}}}$  ${\displaystyle x=2}$ We have two potential solutions to the equation. Based on what you know so far about this problem, try to answer why we said potential here. Why did we state we had two potential solutions? Because we had to assume that ${\displaystyle 2x-7>0}$  and that ${\displaystyle |3x-3|=2x-7}$  is true for the provided ${\displaystyle x}$ . Because of this, we have to verify that solutions to this equation exist. Therefore, let us substitute those values into the equation: ${\displaystyle |3(-4)-3|=2(-4)-7}$ . Notice that the right-hand side is negative. Also, the left-hand side and the right-hand side are not equivalent. Therefore, this is not a solution. ${\displaystyle |3(2)-3|=2(2)-7}$ . Notice the right-hand side is negative, again. Also, the left-hand side and the right-hand side are not equivalent.
Therefore, this cannot be a solution. This equation has no real solutions. More specifically, it has two extraneous solutions (i.e. the solutions we found do not satisfy the equality when we substitute them back in). Despite following the procedure outlined since the first problem, we obtained two extraneous solutions. This is not the fault of the procedure but a simple consequence of the equation itself. Because the left-hand side must always be nonnegative, the right-hand side must be nonnegative as well; and, under that restriction, the two sides never equal each other. This is all a matter of properties of functions. Example 2.0(f): Solve for ${\displaystyle a}$ :${\displaystyle 6\left\vert 5{\frac {a}{6}}+{\frac {1}{12}}\right\vert ={\frac {3}{5}}|15a+15|}$  All the properties learned so far will be needed here, so let us hope you did not skip anything. It will certainly make our lives easier to know the properties we are about to employ in this problem. ${\displaystyle 6\left\vert 5{\frac {a}{6}}+{\frac {1}{12}}\right\vert ={\frac {3}{5}}|15a+15|\qquad {\text{Distribute, so to speak, the constant terms.}}}$  ${\displaystyle \left\vert 5a+{\frac {1}{2}}\right\vert =|9a+9|}$ At first glance, the second equation may look absurd. However, an application of the fundamental properties of absolute values is enough to do this problem. (3) ${\displaystyle 5a+{\frac {1}{2}}=|9a+9|}$  (4) ${\displaystyle 5a+{\frac {1}{2}}=-|9a+9|}$  Peel the problem one layer at a time. For this one, we will categorize equations based on where they come from; this should explain the dashes: (3-1) is the first equation formulated from (3), for example.
(3-1) ${\displaystyle 9a+9=5a+{\frac {1}{2}}}$  (3-2) ${\displaystyle 9a+9=-\left(5a+{\frac {1}{2}}\right)}$  (4-1) ${\displaystyle -(9a+9)=5a+{\frac {1}{2}}}$  (4-2) ${\displaystyle -(9a+9)=-\left(5a+{\frac {1}{2}}\right)}$  We can demonstrate that some equations are equivalents of the others. For example, (3-1) and (4-2) are equivalent, since dividing both sides of (4-2) by ${\displaystyle -1}$  gives (3-1). Further, (3-2) and (4-1) are equivalent (multiply both sides of equation (4-1) by ${\displaystyle -1}$ ). After determining which equations are equivalent, distribute ${\displaystyle -1}$  to the corresponding parentheses. (5) ${\displaystyle 9a+9=5a+{\frac {1}{2}}}$  (6) ${\displaystyle 9a+9=-5a-{\frac {1}{2}}}$  Now all that is left to do is solve the equations. We will leave this step as an exercise for the reader. There are two potential solutions: ${\displaystyle a=-{\frac {19}{28}},-{\frac {17}{8}}}$ . All that is left to do is verify that the equation in the question is true for these specific values of ${\displaystyle a}$ : ${\displaystyle a=-{\frac {19}{28}}}$  ${\displaystyle \left\vert 5\left(-{\frac {19}{28}}\right)+{\frac {1}{2}}\right\vert =\left\vert 9\left(-{\frac {19}{28}}\right)+9\right\vert }$  is true. The two sides give the same value: ${\displaystyle {\frac {81}{28}}\approx 2.893}$ . ${\displaystyle a=-{\frac {17}{8}}}$  ${\displaystyle \left\vert 5\left(-{\frac {17}{8}}\right)+{\frac {1}{2}}\right\vert =\left\vert 9\left(-{\frac {17}{8}}\right)+9\right\vert }$  is true. The two sides give the same value: ${\displaystyle {\frac {81}{8}}=10.125}$ . Because both solutions check out, the two solutions are ${\displaystyle a=-{\frac {19}{28}},-{\frac {17}{8}}\blacksquare }$ . Absolute value equations can be very useful in the real world, usually when it comes to modeling. We will introduce one example of a standard modeling problem, then one unusual application in geometry (EXAMPLE WIP).
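Before moving on to the modeling examples, the two solutions of Example 2.0(f) can be double-checked exactly with a short computation (a Python sketch using the standard-library `fractions` module; not part of the original lesson):

```python
from fractions import Fraction

# Check both candidate solutions of |5a + 1/2| = |9a + 9| exactly,
# along with the common value the two sides take.
checks = (
    (Fraction(-19, 28), Fraction(81, 28)),  # approximately 2.893
    (Fraction(-17, 8), Fraction(81, 8)),    # exactly 10.125
)
for a, expected in checks:
    lhs = abs(5 * a + Fraction(1, 2))
    rhs = abs(9 * a + 9)
    assert lhs == rhs == expected
```

Exact rational arithmetic avoids any doubt about floating-point rounding when verifying the two sides agree.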
Example 2.0(g): Window Fitting Question: Alfred wants to place a window so that the length of the window is 70% of the length of the room. The room is 45 feet high and 70 feet in length. The centered window takes up the entire vertical height of the wall. (a) What is the maximum surface area of the wall excluding the window? (b) Assuming the room has a rectangular ${\displaystyle \displaystyle 75\times 35}$  base and roof, and this window design repeats for all sides of the room (except the two door sides), what is the internal surface area of the room that excludes the window panes? (a) ${\displaystyle \displaystyle 945{\text{ ft}}^{2}}$ (b) ${\displaystyle \displaystyle 13,440{\text{ ft}}^{2}}$ Explanation: The hardest part about this problem is understanding the situation. Once a student understands the problem presented, the remaining steps are mostly simple. The procedure we used to solve many linear-equation word problems shall be used here, since it helps us condense a ton of information into something more "bite-sized." 1. List useful information (optional second step, or necessary first step). 2. Draw a picture (optional second step, or necessary first step). 3. Find tools to solve the problem based on the list. 4. Make and solve equations. Figure 3: For a ${\displaystyle \displaystyle 70\times 45{\text{ ft}}^{2}}$  wall, if the length of the window is 70% of the length of the room, what is the maximum area of the wall excluding the window? We will be using these steps for items (a) and (b). First, we will list the information as below: • Length of the window is 70% of the room length. • Room is 45 ft. high. • Room is 70 ft. in length. • Window takes up entire vertical height of wall. • Window is centered according to length of wall (by previous item). • The room has a rectangular base of ${\displaystyle \displaystyle 75\times 35{\text{ ft}}^{2}}$ Next, sketch the situation based on our list.
A good sketch (Figure 3) can tell you a lot more than the list. As such, this step may be used more than the list itself, which is why the list may be optional once you have sketched the information presented in the problem. From our sketch (the tool for solving the problem), we can come up with an equation to help solve for ${\displaystyle x}$ , the width of the wall on either side of the window. Because the absolute value describes a distance (or length), and we want the window's length to be 70% of the room length, we may come to this conclusion: ${\displaystyle \displaystyle |70-2x|={\frac {7}{10}}\cdot 70=49}$ From there, we can solve the equation. {\displaystyle \displaystyle {\begin{aligned}70-2x&=49&70-2x&=-49&{\text{Original equation}}\\-2x&=-21&-2x&=-119&{\text{Subtraction property of equality}}\\x&={\frac {21}{2}}=10.5{\text{ ft}}&x&={\frac {119}{2}}=59.5{\text{ ft}}&{\text{Division property of equality}}\end{aligned}}} In our situation, it makes no sense to consider ${\displaystyle \displaystyle x=59.5}$  because it results in a negative length for the window, so we reject ${\displaystyle \displaystyle x=59.5}$ . It is always important to keep context in mind when working with word problems. This information will be very useful for item (b). Part (a) asks us to find the area of the wall side excluding the window. According to our sketch, this area is ${\displaystyle \displaystyle xh+xh=2xh=2\cdot \left({\frac {21}{2}}\right)\cdot 45=945{\text{ ft}}^{2}}$ ${\displaystyle \displaystyle \blacksquare }$ Item (b) gave us the following information, along with what we found while working (a): • Rectangular ${\displaystyle \displaystyle 75\times 35}$  base and roof. • Wall without window is ${\displaystyle \displaystyle A=945{\text{ ft}}^{2}}$ . • Two sides have no windows, meaning the surface area of each such wall is ${\displaystyle \displaystyle A=45\times 70=3,150{\text{ ft}}^{2}}$ No sketch will be provided for item (b).
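As a small cross-check of item (a) (a Python sketch, not part of the original solution), we can recover both candidate widths from ${\displaystyle |70-2x|=49}$ , discard the one that would make the window's length negative, and recompute the wall area:

```python
# The two candidate widths from |70 - 2x| = 49.
candidates = [(70 - 49) / 2, (70 + 49) / 2]          # 10.5 ft and 59.5 ft

# Keep only the width that leaves the window a positive length.
valid = [x for x in candidates if 70 - 2 * x > 0]
assert valid == [10.5]

# Two vertical wall strips, each x wide and 45 ft high.
x = valid[0]
wall_area_excluding_window = 2 * x * 45
assert wall_area_excluding_window == 945.0
```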
With all the information out of the way, we can easily find the surface area that excludes all windows. ${\displaystyle \displaystyle S=2\cdot \left(75\times 35{\text{ ft}}^{2}\right)+2\cdot \left(945{\text{ ft}}^{2}\right)+2\cdot 3,150{\text{ ft}}^{2}=13,440{\text{ ft}}^{2}}$ ${\displaystyle \displaystyle \blacksquare }$ The next problem typically requires some trigonometry to solve easily. However, with one extra piece of information, one can use the properties of the absolute value function to solve the following problem. Example 2.0(h): Tiling a Roof (adapted from Trigonometry Book 1)   Figure 2: The plan for a roof is given in the image above. We want to find the area of the figure using only what is given, with absolutely no trigonometry. An engineer is planning to make a roof with a ${\displaystyle \displaystyle 30}$  m. frame base and a ${\displaystyle \displaystyle 100}$  m. perimeter. The angle of the slope of the roof to the base is ${\displaystyle \displaystyle \theta }$ . The sloped sides are congruent. A reference image (Figure 2) of the sloped roof (with no cartesian plane) is provided. Given that the area of a triangle is ${\displaystyle \displaystyle {\frac {1}{2}}bh}$ , and the distance formula is ${\displaystyle \displaystyle d={\sqrt {(\Delta x)^{2}+(\Delta y)^{2}}}}$ , find the area of the triangular cross section of the roof. Answer ${\displaystyle A=474.342{\text{ m}}^{2}}$    Figure 3 Explanation This problem requires you to think about which quantities do not change, so that you can determine the one situation that makes all of the given facts possible. We will apply our problem-solving steps to this problem first, before we discuss one difference in this problem that somewhat breaks our algorithm. We will draw it first. Drawing: We can gain a lot of information from Figure 3.
${\displaystyle a>0}$  ${\displaystyle b>0}$  ${\displaystyle c<0}$  ${\displaystyle f(x)=-a|x|}$ ; specifically, ${\displaystyle f(b)=-a|b|=c}$  and ${\displaystyle f(-b)=-a|-b|=c}$ . ${\displaystyle d={\sqrt {15^{2}+\left(f(b)\right)^{2}}}}$  ${\displaystyle b=15}$  because ${\displaystyle \Delta x=15}$  by the above distance equation for ${\displaystyle d}$ . The values of ${\displaystyle a,b,c}$  are constant. The height ${\displaystyle h=|c|}$  is constant, and the base has constant length, so ${\displaystyle a,b}$  are constant. Tool Finding: Our drawing helped us glean a lot of information. Knowing the perimeter is ${\displaystyle 100{\text{ m}}}$  tells us that the distance is {\displaystyle {\begin{aligned}2d+30&=100\\2d&=70\\d&=35{\text{ m}}\end{aligned}}} However, Figure 3 tells us that ${\displaystyle d={\sqrt {15^{2}+\left(f(b)\right)^{2}}}}$ . Therefore, by the transitive property, {\displaystyle {\begin{aligned}{\sqrt {15^{2}+\left(a|b|\right)^{2}}}&=35\\15^{2}+a^{2}b^{2}&=35^{2}&|b|=b{\text{ and }}(ab)^{2}=a^{2}b^{2}\\15^{2}\left(1+a^{2}\right)&=35^{2}&b=15{\text{ and distributive property.}}\\1+a^{2}&={\frac {49}{9}}&{\text{Division property of equality.}}\\a&={\sqrt {\frac {40}{9}}}&{\text{Subtraction and exponent property of equality.}}\end{aligned}}} Knowing the vertical stretch factor ${\displaystyle a}$ , we can determine the height of the triangle, and from there, the area. ${\displaystyle h=|c|=15{\sqrt {\frac {40}{9}}}\approx 31.623{\text{ m}}}$ The area of the triangle is therefore ${\displaystyle A={\frac {1}{2}}\cdot 30\cdot 15{\sqrt {\frac {40}{9}}}\approx 474.342{\text{ m}}^{2}\blacksquare }$ Notice how it was not necessary for us to solve for a specific value of ${\displaystyle x}$  from the absolute value equation. The only aspects of absolute value equations necessary for this problem are the graph properties and some logic. In a way, this is the easiest absolute value problem.
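The arithmetic of Example 2.0(h) can likewise be verified in a few lines (an illustrative Python sketch; the variable names are ours, not the book's):

```python
import math

# Perimeter 100 m minus the 30 m base leaves the two congruent slants.
d = (100 - 30) / 2
assert d == 35.0

# Height from the distance formula: 15^2 + h^2 = 35^2.
h = math.sqrt(35**2 - 15**2)
assert abs(h - 15 * math.sqrt(40 / 9)) < 1e-9    # same as 15 * sqrt(40/9)

# Area of the triangular cross section: (1/2) * base * height.
area = 0.5 * 30 * h
assert abs(area - 474.342) < 1e-3
```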
However, the needed creativity makes up for the "easiness" of the problem.

### Practice Problems

1 ${\displaystyle |k+6|=2k}$ ${\displaystyle k=}$ 2 ${\displaystyle |7+3a|=11-a}$ ${\displaystyle a\in \{}$  , ${\displaystyle \}}$ 3 ${\displaystyle |2k+6|+6=0}$ How many solutions?

## Inequalities with Absolute Values

It is important to keep in mind that any function can be less than any other function. For example, ${\displaystyle 2x-5<54-13x}$  has solutions for ${\displaystyle x<{\frac {59}{15}}=3+{\frac {14}{15}}}$ . So long as the value of ${\displaystyle x}$  is within that range, the output of ${\displaystyle 2x-5}$  is less than the output of ${\displaystyle 54-13x}$ . The algebra for inequalities of ${\displaystyle f(x)=|x|}$  requires a bit more demonstration to understand. While the methods we use will not be proven, per se, our examples and explanations should give a good intuition behind the idea of finding the solutions of absolute value inequalities. Example 3.0(a): ${\displaystyle |10-20x|<50}$  First, let us simplify the expression through the method we demonstrated in the previous section (factoring the inside of the absolute value and bringing the constant out). Keep in mind, since we are switching the side from which we view the inequality (50 is on the left instead of the right), we must also "flip" the inequality to be consistent with the original. {\displaystyle {\begin{aligned}50&>|10-20x|\\&=|10\cdot (1-2x)|\\&=10\cdot |1-2x|\end{aligned}}} From there, it should be easy to see that ${\displaystyle |1-2x|<5}$ Let us further analyze this situation. What the above inequality says is that ${\displaystyle y=|1-2x|}$  is less than the constant function ${\displaystyle y=5}$ ; we want the inside value to be less than five in absolute value. Because the absolute value describes a distance, there are two realities to the function.
Let ${\displaystyle A(x)=|1-2x|}$  ${\displaystyle A(x)=|1-2x|={\begin{cases}1-2x,&{\text{if}}&x\leq {\frac {1}{2}}\\-(1-2x),&{\text{if}}&x>{\frac {1}{2}}\end{cases}}}$ Because there are two "pieces" to the function ${\displaystyle A(x)}$ , and we want each piece to be less than 5, ${\displaystyle 1-2x<5}$  and ${\displaystyle -(1-2x)<5}$ We will demonstrate the more common procedure in the next example. For now, this intuition should begin to form an idea of the algebraic analysis. We will solve the left-hand case and then the right-hand case. Solving for ${\displaystyle x}$  in ${\displaystyle |1-2x|<5}$ . Left-hand case: ${\displaystyle 1-2x<5}$  Recall how multiplying both sides by a negative factor requires us to "flip" the inequality. Therefore, solving for ${\displaystyle x}$ : ${\displaystyle \Leftrightarrow x>-2}$  Right-hand case: ${\displaystyle -(1-2x)<5}$  ${\displaystyle \Leftrightarrow 1-2x>-5}$  ${\displaystyle \Leftrightarrow x<3}$ We have found the set of values that makes the inequality ${\displaystyle |1-2x|<5}$  true: the values of ${\displaystyle x}$  between ${\displaystyle -2}$  and ${\displaystyle 3}$ , non-inclusive. The above example is an intuition behind how solving inequalities works. Technically speaking, we could write a proof of why we have to "operate" on absolute value inequalities this way (take the steps seen above). However, this would be a little too technical and involve a lot of generalization that could confuse students rather than enlighten them. If the student feels the challenge is worth it, then one may try to prove the steps we list below. This is considered standard procedure (according to many High School textbooks). 1. Simplify until only the "absolute value bar term" is left. 2.
Solve by taking the inside and relating it by the inequality for the "left-hand" values; for the "right-hand" equation, take the same expression found inside the absolute value, negate the related term, and flip the inequality, then solve. 3. Rewrite ${\displaystyle x}$  in the necessary notation. Although the procedure may seem confusing, we are really only trying to make the algorithm as specific as possible. In reality, we will show just how easy it is to apply this algorithm to the problem above. Example 3.0(a) (REPEAT): ${\displaystyle |10-20x|<50}$  Let us skip to the most simplified form. ${\displaystyle |1-2x|<5}$ Now let us apply the above algorithm. ${\displaystyle 1-2x<5}$  and ${\displaystyle 1-2x>-5}$  (notice the negation and flipping for the right-hand equation). From there, we will solve. Solving for ${\displaystyle x}$  in ${\displaystyle |1-2x|<5}$ . Left-hand case: ${\displaystyle 1-2x<5}$  Recall how multiplying both sides by a negative factor requires us to "flip" the inequality. Therefore, solving for ${\displaystyle x}$ : ${\displaystyle \Leftrightarrow x>-2}$  Right-hand case: ${\displaystyle 1-2x>-5}$  ${\displaystyle \Leftrightarrow x<3}$ There are two possible reasons why this procedure exists. For one, it allows us to quickly solve for ${\displaystyle x}$  in the "right-hand" equation without doubling the number of multiplications necessary to solve for ${\displaystyle x}$  (it lessens the number of times we have to flip the inequality). Second, it allows us to focus more on the idea behind absolute value inequalities (the value inside may be positive or negative, and we want to find all values of the variable that give solutions). Nevertheless, keep in mind how we found this procedure: it was through applying the function definition of the absolute value. In reality, we did the exact same thing for absolute value equations.
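To convince yourself that the interval just derived is right, here is a brute-force check by sampling (a Python sketch, not part of the standard procedure): the points satisfying ${\displaystyle |1-2x|<5}$  should be exactly those with ${\displaystyle -2<x<3}$ .

```python
# Sample x values on a fine grid and compare the two conditions pointwise.
xs = [x / 100 for x in range(-500, 501)]   # from -5.00 to 5.00
for x in xs:
    inside_interval = -2 < x < 3
    satisfies_inequality = abs(1 - 2 * x) < 5
    assert inside_interval == satisfies_inequality

# The endpoints themselves give |1 - 2x| = 5 exactly, so they are excluded.
assert abs(1 - 2 * (-2)) == 5
assert abs(1 - 2 * 3) == 5
```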
The only difference in the application of the algorithm is the inequality, which further "complicates" matters by introducing a new concept to the non-injective absolute value function. Through finding two solutions, we gave two possible ranges for the values of ${\displaystyle x}$ . Hopefully, this example shines a light on what many high schoolers consider to be "black magic": finding solutions to absolute value inequalities and equalities. The next examples should further the concepts learned. Keep in mind, if one does not like the algorithm presented in the repeat example above, one is perfectly fine using the other algorithm. Any valid method is allowed; only the correctness of your answer will be considered. Example 3.0(b): ${\displaystyle 15x-|12x+10|>13x}$  Explanations given later Example 3.0(c): ${\displaystyle \left\vert {\frac {5}{12}}x-87\right\vert \leq 100}$  Explanations given later Example 3.0(d): ${\displaystyle 15x-|12x+10|>13x}$  Explanations given later Introduction to example included later. Example 3.0(e): Variable temperature problem Problem: The temperature in a room averages around ${\displaystyle 20^{\circ }{\text{C}}}$  in the summer without air conditioning. The change in temperature is dependent on the ambient weather conditions. Without air conditioning, the maximum change in temperature from the average is ${\displaystyle 4^{\circ }{\text{C}}}$ . When the air conditioning is on, the temperature of the room is a function of time ${\displaystyle t}$  (in hours), given by ${\displaystyle C(t)=-{\frac {3}{2}}t+20}$ . The maximum deviation in temperature should be no more than ${\displaystyle 5^{\circ }{\text{C}}}$ . (a) Write an equation that represents the temperature of the room without and with air conditioning, respectively. (b) Determine the minimum temperature value of the room without air conditioning in the summer.
(c) At what time must the air conditioning stop for the temperature to drop by at most ${\displaystyle 5^{\circ }{\text{C}}}$ ? Answers: (a) ${\displaystyle |T-20|\leq 4}$  and ${\displaystyle \left\vert {\frac {3}{2}}t\right\vert \leq 5}$ . (b) ${\displaystyle T_{min}=16^{\circ }{\text{C}}}$ . (c) ${\displaystyle 3{\tfrac {1}{3}}}$  hours. Explanation: When working with word problems, it is best to rewrite the problem into something algebraic or "picturesque" (i.e. draw the problem out). One can also use both, as we will soon do.   Temperature Variation Situation The benefit of drawing a picture (or, more accurately, a sketch) of the situation is being able to interpret the situation more easily. We are highly visual people, after all, so a picture is a lot easier to understand than words. The highly intuitive nature of geometry also lends itself well to algebraic interpretations. Let us reread the situation without A/C. "Without air conditioning, the maximum change in temperature from the average is ${\displaystyle 4^{\circ }{\text{C}}}$ ." This gives us a lot of information. We know that ${\displaystyle T_{\text{max}}=T_{\text{avg}}+4}$  and ${\displaystyle T_{\text{min}}=T_{\text{avg}}-4}$ , so to keep it as one singular statement, it is best to write it as an absolute value inequality. For this situation, (8) ${\displaystyle |T-20|\leq 4}$  It is important to know why this is true. Recall that the absolute value represents the distance of the inside value from ${\displaystyle 0}$ . If ${\displaystyle 20}$  is the reference point, then to get ${\displaystyle 0}$  from ${\displaystyle 20}$ , you need to subtract 20 from the current value ${\displaystyle T}$ . As such, this statement is true. Now let us look at the situation for the air conditioning. "When the air conditioning is on, the temperature of the room is a function of time, given by ${\displaystyle C(t)=-{\frac {3}{2}}t+20}$ .
The maximum deviation in temperature should be no more than ${\displaystyle 5^{\circ }{\text{C}}}$ ." Based on the wording of the sentence, the temperature ${\displaystyle T=C(t)}$  is based on the time, and the temperature can only be at most ${\displaystyle 5^{\circ }{\text{C}}}$  from the average. By the same logic given for Equation (8), (9) ${\displaystyle |C(t)-20|\leq 5}$  Equation (9) is left in the same form to show how similar the two equations are, and to also relate more to the wording of the set-up text. Using the transitive property, one can simplify the equation further to obtain (10) ${\displaystyle \left\vert -{\frac {3}{2}}t\right\vert \leq 5}$  Recall that ${\displaystyle |(-1)x|=|-1|\cdot |x|=|x|}$ . Because of this property, one can simplify the equation further to obtain the final equation for part (a): (11) ${\displaystyle \left\vert {\frac {3}{2}}t\right\vert \leq 5}$  This sufficiently answers item (a), perhaps the hardest part of the question. However, with the two equations obtained, (8) and (11), we can answer both items (b) and (c). Let us reread parts (b) and (c) using our understanding of the question: "Determine the minimum temperature value of the room without air conditioning in the summer." This is, in essence, asking the examinee to find the value of ${\displaystyle T_{\text{min}}}$  using (8). The previous examples should have prepared you for solving absolute value inequalities. Solving for ${\displaystyle T}$  in ${\displaystyle |T-20|\leq 4}$ . Positive case: ${\displaystyle T-20\leq 4}$  ${\displaystyle \Leftrightarrow T\leq 24}$  Negative case: ${\displaystyle T-20\geq -4}$  ${\displaystyle \Leftrightarrow T\geq 16}$ Since the problem is asking for the minimum temperature value, ${\displaystyle T_{min}}$ , of the room, the correct answer here is ${\displaystyle T_{min}=16^{\circ }{\text{C}}}$ .
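A quick check of item (b) by sampling whole-degree temperatures (a Python sketch, not part of the original solution):

```python
# Temperatures satisfying |T - 20| <= 4 should be exactly 16 through 24.
satisfying = [T for T in range(10, 31) if abs(T - 20) <= 4]
assert satisfying == list(range(16, 25))

# The minimum of that set is the answer to item (b).
assert min(satisfying) == 16
```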
Keep in mind, we are allowed to include the equal sign thanks to the problem's wording ("at most" implies less than or equal to). Also, always remember to include units in word problems. "At what time must the air conditioning stop for the temperature to drop by at most ${\displaystyle 5^{\circ }{\text{C}}}$ ?" This is, in essence, asking the examinee to find the value of time ${\displaystyle t}$  (in hours) using the most simplified equation, Equation (11). Solving for ${\displaystyle t}$  in ${\displaystyle \left\vert {\frac {3}{2}}t\right\vert \leq 5}$ . Positive case: ${\displaystyle {\frac {3}{2}}t\leq 5}$  ${\displaystyle \Leftrightarrow t\leq {\frac {10}{3}}}$  Negative case: ${\displaystyle {\frac {3}{2}}t\geq -5}$  ${\displaystyle \Leftrightarrow t\geq -{\frac {10}{3}}}$ Because only the positive case is relevant (we are only looking at times ${\displaystyle t\geq 0}$ ), the maximum amount of time that the air conditioning may run is ${\displaystyle t={\frac {10}{3}}=3{\tfrac {1}{3}}}$  hours.

## Lesson Review

An absolute value (represented with |'s) stands for the number's distance from 0 on the number line. This essentially makes a negative number positive, while a positive number remains the same. To solve an equation involving absolute values, you must get the absolute value by itself on one side and set it equal to both the positive and the negative version of the other side, because the expression inside the bars could equal either one. However, check the solutions you get in the end; some might produce negative numbers on the right side, which are impossible because all outputs of an absolute value are nonnegative!

## Lesson Quiz

Evaluate each expression. 1 ${\displaystyle |-4|=}$ 2 ${\displaystyle |6-8|=}$ Solve for ${\displaystyle a}$ . Type NS (with capitalization) into either both fields or the right field for equations with no solutions.
Any solutions that are extraneous (don't work when substituted into the equation) should be typed with XS on either the right field or both. Order the solutions from least to greatest. 3 ${\displaystyle |3a-4|=5}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ 4 ${\displaystyle 5|2a+3|=15}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ 5 ${\displaystyle 3|4a-2|-12=-3}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ 6 ${\displaystyle |a+1|-18=a-15}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ 7 ${\displaystyle 2\left\vert {\frac {a}{2}}-1\right\vert -2a=-4a}$ ${\displaystyle a\in \{}$  ${\displaystyle ,}$  ${\displaystyle \}}$ Read the situations provided below. Then, answer the prompt or question given. Type NS (with capitalization) into either both fields or the right field for equations that have no solutions. Any solutions that are extraneous should be typed with XS on either the right field or both. Order the solutions from least to greatest. 8 The speed of the current of a nearby river deviates ${\displaystyle 1.5{\tfrac {\text{m}}{\text{s}}}}$  from the average speed ${\displaystyle 20{\tfrac {\text{m}}{\text{s}}}}$ . Let ${\displaystyle s}$  represent the speed of the river. Select all possible equations that could describe the situation. ${\displaystyle |s-1.5|=20}$ ${\displaystyle |s+1.5|=20}$ ${\displaystyle |20-s|=1.5}$ ${\displaystyle |s-20|=1.5}$ ${\displaystyle |s+20|=1.5}$ ${\displaystyle |1.5-s|=20}$ 9 A horizontal artificial river has an average velocity of ${\displaystyle -4{\tfrac {\text{m}}{\text{s}}}}$ . The velocity increases proportionally to the mass of the rocks, ${\displaystyle r}$ , in kilograms, blocking the path of the current. Assume the river's velocity for the day deviates a maximum of ${\displaystyle 6{\tfrac {\text{m}}{\text{s}}}}$ . 
If the proportionality constant is ${\displaystyle k={\frac {2}{5}}}$  meters per kilograms-seconds, what is the maximum mass of the rocks in the river for that day? ${\displaystyle 5}$  kilograms. ${\displaystyle 15}$  kilograms. ${\displaystyle 25}$  kilograms. ${\displaystyle 35}$  kilograms.
Over the past few decades, Morse theory has undergone many generalizations, into many different fields.  At the moment, I only know of a few, and I understand even fewer. Well, let’s begin at the beginning: • Classical Morse theory (CMT) • Stratified Morse theory (SMT) • Micro-local Morse theory (MMT) The core of these theories is, of course, the study of Morse functions on suitable spaces and generalizations/interpretations of theorems in CMT to these spaces.  For CMT, the spaces are smooth manifolds (or, compact manifolds, if your definition of Morse function doesn’t require properness).  SMT looks at Morse functions on (Whitney) stratified spaces, usually real/complex varieties (either algebraic or analytic), and more generally, subanalytic subsets of smooth manifolds.  MMT deals with both cases, but from a more “meta” perspective that I’m not going to tell you about right now. The overarching theme is pretty simple:  one can investigate the (co)homology of $X$ by examining the behavior of level sets of Morse functions as they “pass through” critical values.  First, we’ll need some notation.  Let $M$ be a smooth manifold, $a < b \in \mathbb{R}$, and let $f: M \to \mathbb{R}$ be a smooth function.  Then, set • $M_{\leq a} := f^{-1}(-\infty,a]$ • $M_{< a} := f^{-1}(-\infty,a)$ • $M_{[a,b]} := f^{-1}[a,b]$ In CMT, this overarching idea is described by two “fundamental” theorems: Fundamental Theorem of Classical Morse theory, A (CMT;A): Suppose $f$ has no critical values on the interval $[a,b] \subseteq \mathbb{R}$.  Then, $M_{\leq a}$ is diffeomorphic to $M_{\leq b}$, and the inclusion $M_{\leq a} \hookrightarrow M_{\leq b}$ is a homotopy equivalence (that is, $M_{\leq a}$ is a deformation-retract of $M_{\leq b}$). Homologically speaking, this last point can be rephrased as $H_*(M_{\leq b},M_{\leq a}) = 0$ (for singular homology with $\mathbb{Z}$ coefficients). 
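As a concrete sanity check of how critical points control topology, consider a standard example (not taken from this post, so details hedged): for a Morse function on a compact manifold $M$, the Euler characteristic is the alternating sum $\chi(M) = \sum_p (-1)^{\lambda_p}$ over the critical points $p$. The height function on an upright torus has four critical points of indices $0,1,1,2$, while on the round sphere it has just a minimum and a maximum. A few lines of Python confirm the alternating sums match $\chi(T^2)=0$ and $\chi(S^2)=2$:

```python
# Alternating sum of Morse indices: chi(M) = sum over critical points p
# of (-1)^(lambda_p), for a Morse function on a compact manifold M.
def euler_characteristic(indices):
    return sum((-1) ** lam for lam in indices)

# Height function on the upright torus: indices 0, 1, 1, 2.
assert euler_characteristic([0, 1, 1, 2]) == 0   # chi(T^2) = 0

# Height function on the round sphere: minimum (index 0), maximum (index 2).
assert euler_characteristic([0, 2]) == 2         # chi(S^2) = 2
```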
Fundamental Theorem of Classical Morse theory, B (CMT;B): Suppose that $f$ has a unique critical value $v$ in the interior of the interval $[a,b] \subseteq \mathbb{R}$, corresponding to the isolated critical point $p \in M$ of index $\lambda$.  Then, $H_k(M_{\leq b},M_{\leq a})$ is non-zero only in degree $k = \lambda$, in which case $H_\lambda(M_{\leq b},M_{\leq a}) \cong \mathbb{Z}$. So, if $c \in \mathbb{R}$ varies across a critical value $a < v < b$ of $f$, the topological type of $M_{\leq c}$ “jumps” somehow.  If we want to compare how the topological type of $M_{\leq b}$ differs from that of $M_{\leq a}$, the obvious thing to do is consider them together as a pair of spaces $(M_{\leq b}, M_{\leq a})$ and look at the relative (co)homology of this pair.  CMT;A and CMT;B together tell us that we’re only going to get non-zero relative homology of this pair when there is a critical value between $a$ and $b$, and in that case, the homology is non-zero only in degree $\lambda$. But HOW does the topological type change, specifically, as we cross the critical value? ## Author: brianhepler I'm a third-year math postdoc at the University of Wisconsin-Madison, where I work as a member of the geometry and topology research group. Generally speaking, I think math is pretty neat; and, if you give me the chance, I'll talk your ear off. Especially the more abstract stuff. It's really hard to communicate that love with the general population, but I'm going to do my best to show you a world of pure imagination.
http://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/appendix-e-sigma-notation-e-exercises-page-a38/42
## Calculus 8th Edition

$|\sum\limits_{i =1}^{n}a_{i}| \leq \sum\limits_{i =1}^{n}|a_{i}|$

Expand the summation: $\sum\limits_{i=1}^{n}a_{i}=a_{1}+a_{2}+....+a_{n}$. Now apply the triangle inequality $|a+b|\leq |a|+|b|$ repeatedly (formally, by induction on $n$): $|a_{1}+a_{2}+....+a_{n}|\leq|a_{1}|+|a_{2}+....+a_{n}|\leq ....\leq|a_{1}|+|a_{2}|+....+|a_{n}|$ Hence, $|\sum\limits_{i =1}^{n}a_{i}| \leq \sum\limits_{i =1}^{n}|a_{i}|$
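As an informal numerical sanity check of the inequality (my addition, not part of the textbook solution), one can test it on random samples:

```python
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 20)
    a = [random.uniform(-10, 10) for _ in range(n)]
    # |sum a_i| <= sum |a_i|, with a tiny tolerance for floating-point rounding
    assert abs(sum(a)) <= sum(abs(x) for x in a) + 1e-12
print("inequality holds on 1000 random samples")
```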
https://jeeneetqna.in/624/plane-electromagnetic-wave-frequency-energy-density-vacuum
# A plane electromagnetic wave has a frequency of 2.0 × 10^10 Hz and its energy density is 1.02 × 10^–8 J/m^3 in vacuum.

A plane electromagnetic wave has a frequency of 2.0 × 10^10 Hz and its energy density is 1.02 × 10^–8 J/m^3 in vacuum. The amplitude of the magnetic field of the wave is close to (${1\over4\pi\varepsilon_0}=9\times10^9{Nm^2\over C^2}$ and speed of light = 3 × 10^8 m s^–1):

(1) 160 nT (2) 180 nT (3) 190 nT (4) 150 nT

Topic: Electromagnetic waves

Ans: (1) 160 nT

Explanation: on average, the electric and magnetic fields each carry half of the wave's energy density, so the total average energy density can be written as $u = B_0^2/(2\mu_0)$. Hence $B_0 = \sqrt{2\mu_0 u} = \sqrt{2\times4\pi\times10^{-7}\times1.02\times10^{-8}} \approx 1.6\times10^{-7}\,\mathrm{T} \approx 160$ nT.

Q: How did you know to use this formula?
A: There is only one formula that applies here: the magnetic energy density.
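For completeness, a quick sketch of the arithmetic (my addition; it uses the standard relation $u = B_0^2/(2\mu_0)$ for the total average energy density in terms of the peak magnetic field):

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
u = 1.02e-8                # given average energy density, J/m^3

# total average energy density of an EM wave in terms of the magnetic
# amplitude: u = B0^2 / (2*mu0)  =>  B0 = sqrt(2*mu0*u)
B0 = math.sqrt(2 * mu0 * u)
print(round(B0 * 1e9))  # amplitude in nanotesla -> prints 160
```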
https://mathoverflow.net/questions/320955/mapping-class-group-and-triangulations
# Mapping Class Group and Triangulations

I am a physicist who's getting started with the Mapping Class Group of Riemann surfaces, pants decompositions and triangulations, so I apologise in advance if the following is a stupid question/wrong.

I understand that to any pants decomposition of a Riemann surface we can associate a set of generators (the Dehn twists). Different pants decompositions give different sets of generators, and relations among the various sets of generators are understood as being generated by a minimal set of relations (Lantern, Chain, Braiding...)

My question is: is there a similar picture for triangulations? Given a triangulation, can I assign a canonical set of generators of the Mapping Class Group? Can I understand relations between generators using flips of triangulations?

Yes. When a group $$G$$ acts geometrically on a metric space $$X$$, by choosing a basepoint $$x_0 \in X$$ you can construct its Dirichlet domain $$D_{x_0} = \{x \; | \; d(x, x_0) \leq d(x, g \cdot x_0) \; \forall g \in G\}$$ When the action of $$G$$ is sufficiently nice, this domain has finitely many sides, and the geodesics which are perpendicularly bisected by each face form a finite generating set for $$G$$. Since the mapping class group acts geometrically on the (labelled) flip graph (with the graph metric), we can carry out a similar process starting at a triangulation $$\mathcal{T}_0$$. 1. Let $$X_1$$ be the set of mapping classes which move $$\mathcal{T}_0$$ by the smallest non-zero amount. 2. Let $$X_2$$ be the set of mapping classes which move $$\langle X_1 \rangle \cdot \mathcal{T}_0$$ by the smallest non-zero amount. 3. Let $$X_3$$ be the set of mapping classes which move $$\langle X_1, X_2 \rangle \cdot \mathcal{T}_0$$ by the smallest non-zero amount. 4. Let $$X_4$$ be the set of mapping classes which move $$\langle X_1, X_2, X_3 \rangle \cdot \mathcal{T}_0$$ by the smallest non-zero amount. 
$$\vdots$$ Then each $$X_i$$ is finite, and for some $$N$$ the elements of $$X_1 \cup X_2 \cup \cdots \cup X_N$$ generate $$G$$. Since each generator $$g$$ can be represented by a path in the flip graph from $$\mathcal{T}_0$$ to $$g(\mathcal{T}_0)$$, the relations between these generators can then be understood from the 2--cells of the flip graph, which give: • the square relation - that disjoint flips commute, and • the pentagon relation - that two flips which share a common triangle form a 5--cycle. Since there are explicit descriptions of the action of the mapping class group on the flip graph, this entire process can be done on a computer (although as far as I am aware no one has actually done this).
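The pentagon relation can be checked combinatorially in the smallest interesting case: the flip graph of triangulations of a convex pentagon is itself a 5-cycle. A small self-contained sketch (my own illustration, encoding a triangulation of a convex $$n$$-gon as its set of diagonals):

```python
from itertools import combinations

def crosses(d1, d2):
    # chords of a convex polygon cross iff their endpoints interleave
    (a, b), (c, d) = sorted(d1), sorted(d2)
    return (a < c < b < d) or (c < a < d < b)

n = 5
diagonals = [(i, j) for i, j in combinations(range(n), 2)
             if (j - i) % n not in (1, n - 1)]        # exclude polygon edges
# a triangulation of a convex n-gon = a non-crossing set of n-3 diagonals
triangulations = [frozenset(s) for s in combinations(diagonals, n - 3)
                  if not any(crosses(d1, d2) for d1, d2 in combinations(s, 2))]
# two triangulations are related by a flip iff they differ in one diagonal
edges = [(S, T) for S, T in combinations(triangulations, 2)
         if len(S & T) == n - 4]
degrees = {T: sum(T in e for e in edges) for T in triangulations}
print(len(triangulations), len(edges), sorted(set(degrees.values())))
# 5 triangulations, 5 flip edges, every vertex has degree 2:
# the flip graph of the pentagon is a 5-cycle, i.e. the pentagon relation
```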
https://www.researchgate.net/profile/Keisuke-Fujii-9
# Keisuke Fujii — Kyoto University (Kyodai) · The Hakubi Center for Advanced Research / Graduate School of Informatics

PhD · 91 publications · 6,849 reads · 3,731 citations (since 2017: 60 research items, 3,358 citations)

Positions:
• April 2013 – December 2014: Professor (Assistant)
• April 2011 – March 2013: PostDoc

## Publications (91)

Article · Full-text available · t-stochastic neighbor embedding (t-SNE) is a nonparametric data visualization method in classical machine learning. It maps the data from the high-dimensional space into a low-dimensional space, especially a two-dimensional plane, while maintaining the relationship or similarities between the surrounding points. In t-SNE, the initial position of th...

Article · Current quantum computers are limited in the number of qubits and coherence time, constraining the algorithms executable with sufficient fidelity. The variational quantum eigensolver (VQE) is an algorithm to find an approximate ground state of a quantum system and is expected to work on even such a device. The deep VQE [K. Fujii, et al., arXiv:2007...

Article · Full-text available · Variational quantum algorithms (VQAs) have been proposed as one of the most promising approaches to demonstrate quantum advantage on noisy intermediate-scale quantum (NISQ) devices. However, it has been unclear whether VQAs can maintain quantum advantage under the intrinsic noise of the NISQ devices, which deteriorates the quantumness. Here we prop...

Article · Full-text available · The implementation of time-evolution operators on quantum circuits is important for quantum simulation. However, the standard method, Trotterization, requires a huge number of gates to achieve desirable accuracy. 
Here, we propose a local variational quantum compilation (LVQC) algorithm, which allows us to accurately and efficiently compile time-evo... Preprint Quantum-inspired singular value decomposition (SVD) is a technique to perform SVD in logarithmic time with respect to the dimension of a matrix, given access to the matrix embedded in a segment-tree data structure. The speedup is possible through the efficient sampling of matrix elements according to their norms. Here, we apply it to extreme learni... Preprint The implementation of time-evolution operators, called Hamiltonian simulation, is one of the most promising usage of quantum computers that can fully exploit their computational powers. For time-independent Hamiltonians, the qubitization has recently established efficient realization of time-evolution, with achieving the optimal computational resou... Article Full-text available Variational quantum algorithms are considered to be appealing applications of near-term quantum computers. However, it has been unclear whether they can outperform classical algorithms or not. To reveal their limitations, we must seek a technique to benchmark them on large-scale problems. Here we propose a perturbative approach for efficient benchm... Preprint Pricing a multi-asset derivative is an important problem in financial engineering, both theoretically and practically. Although it is suitable to numerically solve partial differential equations to calculate the prices of certain types of derivatives, the computational complexity increases exponentially as the number of underlying assets increases... Article The variational quantum eigensolver (VQE), which has attracted attention as a promising application of noisy intermediate-scale quantum devices, finds a ground state of a given Hamiltonian by variationally optimizing the parameters of quantum circuits called Ansätze. Since the difficulty of the optimization depends on the complexity of the problem... 
Preprint Full-text available The demonstration of quantum error correction (QEC) is one of the most important milestones in the realization of fully-fledged quantum computers. Toward this, QEC experiments using the surface codes have recently been actively conducted. However, it has not yet been realized to protect logical quantum information beyond the physical coherence time... Preprint Implementing time evolution operators on quantum circuits is important for quantum simulation. However, the standard way, Trotterization, requires a huge numbers of gates to achieve desirable accuracy. Here, we propose a local variational quantum compilation (LVQC) algorithm, which allows to accurately and efficiently compile a time evolution opera... Article Full-text available We propose a divide-and-conquer method for the quantum-classical hybrid algorithm to solve larger problems with small-scale quantum computers. Specifically, we concatenate a variational quantum eigensolver (VQE) with a reduction in the system dimension, where the interactions between divided subsystems are taken as an effective Hamiltonian expanded... Article Full-text available In the early years of fault-tolerant quantum computing (FTQC), it is expected that the available code distance and the number of magic states will be restricted due to the limited scalability of quantum devices and the insufficient computational power of classical decoding units. Here, we integrate quantum error correction and quantum error mitigat... Preprint Current quantum computers are limited in the number of qubits and coherence time, constraining the algorithms executable with sufficient fidelity. Variational quantum eigensolver (VQE) is an algorithm to find an approximate ground state of a quantum system and expected to work on even such a device. The deep VQE [K. Fujii, et al., arXiv:2007.10917]... Preprint t-Stochastic Neighbor Embedding (t-SNE) is a non-parametric data visualization method in classical machine learning. 
It maps the data from the high-dimensional space into a low-dimensional space, especially a two-dimensional plane, while maintaining the relationship, or similarities, between the surrounding points. In t-SNE, the initial position of... Preprint Variational quantum eigensolver (VQE) is regarded as a promising candidate of hybrid quantum-classical algorithm for the near-term quantum computers. Meanwhile, VQE is confronted with a challenge that statistical error associated with the measurement as well as systematic error could significantly hamper the optimization. To circumvent this issue,... Article Full-text available Quantum circuits that are classically simulatable tell us when quantum computation becomes less powerful than or equivalent to classical computation. Such classically simulatable circuits are of importance because they illustrate what makes universal quantum computation different from classical computers. In this work, we propose a novel family of... Article Full-text available The kernel trick allows us to employ high-dimensional feature space for a machine learning task without explicitly storing features. Recently, the idea of utilizing quantum systems for computing kernel functions using interference has been demonstrated experimentally. However, the dimension of feature spaces in those experiments have been smaller t... Preprint Variational quantum algorithms (VQA) have been proposed as one of the most promising approaches to demonstrate quantum advantage on noisy intermediate-scale quantum (NISQ) devices. However, it has been unclear whether VQA algorithms can maintain quantum advantage under the intrinsic noise of the NISQ devices, which deteriorates the quantumness. Her... Article Full-text available We propose a sampling-based simulation for fault-tolerant quantum error correction under coherent noise. 
A mixture of incoherent and coherent noise, possibly due to over-rotation, is decomposed into Clifford channels with a quasiprobability distribution. Then, an unbiased estimator of the logical error probability is constructed by sampling Cliffor... Preprint Quantum kernel method is one of the key approaches to quantum machine learning, which has the advantages that it does not require optimization and has theoretical simplicity. By virtue of these properties, several experimental demonstrations and discussions of the potential advantages have been developed so far. However, as is the case in classical... Article Full-text available To explore the possibilities of a near-term intermediate-scale quantum algorithm and long-term fault-tolerant quantum computing, a fast and versatile quantum circuit simulator is needed. Here, we introduce Qulacs, a fast simulator for quantum circuits intended for research purpose. We show the main concepts of Qulacs, explain how to use its feature... Preprint Variational quantum eigensolver (VQE), which attracts attention as a promising application of noisy intermediate-scale quantum devices, finds a ground state of a given Hamiltonian by variationally optimizing the parameters of quantum circuits called ansatz. Since the difficulty of the optimization depends on the complexity of the problem Hamiltonia... Chapter Quantum systems have an exponentially large degree of freedom in the number of particles and hence provide a rich dynamics that could not be simulated on conventional computers. Quantum reservoir computing is an approach to use such a complex and rich dynamics on the quantum systems as it is for temporal machine learning. In this chapter, we explai... Chapter Reservoir computing is a framework used to exploit natural nonlinear dynamics with many degrees of freedom, which is called a reservoir, for a machine learning task. 
Here we introduce the NMR implementation of quantum reservoir computing and quantum extreme learning machine using the nuclear quantum reservoir. The implementation utilizes globally c... Chapter Recent developments in reservoir computing based on spintronics technology are described here. The rapid growth of brain-inspired computing has motivated researchers working in a broad range of scientific field to apply their own technologies, such as photonics, soft robotics, and quantum computing, to brain-inspired computing. A relatively new tec... Article Applications such as simulating complicated quantum systems or solving large-scale linear algebra problems are very challenging for classical computers, owing to the extremely high computational cost. Quantum computers promise a solution, although fault-tolerant quantum computers will probably not be available in the near future. Current quantum de... Preprint Due to the linearity of quantum operations, it is not straightforward to implement nonlinear transformations on a quantum computer, making some practical tasks like a neural network hard to be achieved. In this work, we define a task called nonlinear transformation of complex amplitudes and provide an algorithm to achieve this task. Specifically, w... Article Noise in quantum operations often negates the advantage of quantum computation. However, most classical simulations of quantum computers calculate the ideal probability amplitudes by either storing full state vectors or using sophisticated tensor-network contractions. Here we investigate sampling-based classical simulation methods for noisy quantum... Preprint Variational quantum algorithms (VQAs) are expected to become a practical application of near-term noisy quantum computers. Although the effect of the noise crucially determines whether a VQA works or not, the heuristic nature of VQAs makes it difficult to establish analytic theories. Analytic estimations of the impact of the noise are urgent for se... 
Article We propose a method for learning temporal data using a parametrized quantum circuit. We use the circuit that has a similar structure as the recurrent neural network, which is one of the standard approaches employed for this type of machine learning task. Some of the qubits in the circuit are utilized for memorizing past data, while others are measu... Preprint We propose a sampling-based simulation for fault-tolerant quantum error correction under coherent noise. A mixture of incoherent and coherent noise, possibly due to over-rotation, is decomposed into Clifford channels with a quasi-probability distribution. Then, an unbiased estimator of the logical error probability is constructed by sampling Cliffo... Preprint Quantum circuits that are classically simulatable tell us when quantum computation becomes less powerful than or equivalent to classical computation. Such classically simulatable circuits are of importance because they illustrate what makes universal quantum computation different from classical computers. In this work, we propose a novel family of... Article Full-text available As the hardware technology for quantum computing advances, its possible applications are actively searched and developed. However, such applications still suffer from the noise on quantum devices, in particular when using two-qubit gates whose fidelity is relatively low. One way to overcome this difficulty is to substitute such non-local operations... Preprint We propose a method for learning temporal data using a parametrized quantum circuit. We use the circuit that has a similar structure as the recurrent neural network which is one of the standard approaches employed for this type of machine learning task. Some of the qubits in the circuit are utilized for memorizing past data, while others are measur... 
Article Full-text available We propose a quantum-classical hybrid algorithm to simulate the nonequilibrium steady state of an open quantum many-body system, named the dissipative-system variational quantum eigensolver (dVQE). To employ the variational optimization technique for a unitary quantum circuit, we map a mixed state into a pure state with a doubled number of qubits a... Preprint We introduce Qulacs, a fast simulator for quantum circuits intended for research purpose. To explore the possibilities of a near-term intermediate-scale quantum algorithm and long-term fault-tolerant quantum computing, a fast and versatile quantum circuit simulator is needed. Herein we show the main concepts of Qulacs, explain how to use its featur... Preprint Variational quantum algorithms are appealing applications of near-term quantum computers. However, there are two major issues to be solved, that is, we need an efficient initialization strategy for parametrized quantum circuit and to know the limitation of the algorithms by benchmarking it on large scale problems. Here, we propose a perturbative ap... Article Full-text available We propose a sequential minimal optimization method for quantum-classical hybrid algorithms, which converges faster, robust against statistical error, and hyperparameter-free. Specifically, the optimization problem of the parameterized quantum circuits is divided into solvable subproblems by considering only a subset of the parameters. In fact, if... Preprint We propose a divide-and-conquer method for the quantum-classical hybrid algorithm to solve larger problems with small-scale quantum computers. Specifically, we concatenate variational quantum eigensolver (VQE) with reducing the dimensions of the system, where the interactions between divided subsystems are taken as an effective Hamiltonian expanded... Preprint As the hardware technology for quantum computing advances, its possible applications are actively searched and developed. 
However, such applications still suffer from the noise on quantum devices, in particular when using two-qubit gates whose fidelity is relatively low. One way to overcome this difficulty is to substitute such non-local operations... Preprint We employ so-called quantum kernel estimation to exploit complex quantum dynamics of solid-state nuclear magnetic resonance for machine learning. We propose to map an input to a feature space by input-dependent Hamiltonian evolution, and the kernel is estimated by the interference of the evolution. Simple machine learning tasks, namely one-dimensio... Article Full-text available The variational quantum eigensolver (VQE), a variational algorithm to obtain an approximated ground state of a given Hamiltonian, is an appealing application of near-term quantum computers. To extend the framework to excited states, we here propose an algorithm, the subspace-search variational quantum eigensolver (SSVQE). This algorithm searches a... Preprint We show a certain kind of non-local operations can be decomposed into a sequence of local operations. Utilizing the result, we describe a strategy to decompose a general two-qubit gate to a sequence of single-qubit operations. Required operations are projective measurement of a qubit in Pauli basis, and $\pi/2$ rotation around x, y, and z axes. The... Preprint We propose a quantum-classical hybrid algorithm to simulate the non-equilibrium steady state of an open quantum many-body system, named the dissipative-system Variational Quantum Eigensolver (dVQE). To employ the variational optimization technique for a unitary quantum circuit, we map a mixed state into a pure state with a doubled number of qubits... Article Full-text available In quantum computing, the indirect measurement of unitary operators such as the Hadamard test plays a significant role in many algorithms. 
However, in certain cases, the indirect measurement can be reduced to the direct measurement, where a quantum state is destructively measured. Here, we investigate under what conditions such a replacement is pos... Article The variational quantum eigensolver (VQE) is an attractive possible application of near-term quantum computers. Originally, the aim of the VQE is to find a ground state for a given specific Hamiltonian. It is achieved by minimizing the expectation value of the Hamiltonian with respect to an ansatz state by tuning parameters θ on a quantum circuit,... Preprint Quantum simulation is one of the key applications of quantum computing, which can accelerate research and development in chemistry, material science, etc. Here, we propose an efficient method to simulate the time evolution driven by a static Hamiltonian, named subspace variational quantum simulator (SVQS). SVQS employs the subspace-search variation... Preprint We propose a sequential minimal optimization method for quantum-classical hybrid algorithms, which converges faster, is robust against statistical error, and is hyperparameter-free. Specifically, the optimization problem of the parameterized quantum circuits is divided into solvable subproblems by considering only a subset of the parameters. In fac... Article Many quantum algorithms, such as the Harrow-Hassidim-Lloyd (HHL) algorithm, depend on oracles that efficiently encode classical data into a quantum state. The encoding of the data can be categorized into two types: analog encoding, where the data are stored as amplitudes of a state, and digital encoding, where they are stored as qubit strings. The... Preprint In quantum computing, the indirect measurement of unitary operators such as the Hadamard test plays a significant role in many algorithms. However, in certain cases, the indirect measurement can be reduced to the direct measurement, where a quantum state is destructively measured. 
Here we investigate in what cases such a replacement is possible and... Preprint The variational quantum eigensolver (VQE), a variational algorithm to obtain an approximated ground state of a given Hamiltonian, is an appealing application of near-term quantum computers. The original work [Peruzzo et al.; \textit{Nat. Commun.}; \textbf{5}, 4213 (2014)] focused only on finding a ground state, whereas the excited states can also i... Preprint Full-text available The variational quantum eigensolver (VQE) is an attracting possible application of near-term quantum computers. Originally, the aim of the VQE is to find a ground state for a given specific Hamiltonian. It is achieved by minimizing the expectation value of the Hamiltonian with respect to an ansatz state by tuning parameters $$\bm{\theta}$$ on a qua... Preprint We experimentally demonstrate quantum machine learning using NMR based on a framework of quantum reservoir computing. Reservoir computing is for exploiting natural nonlinear dynamics with large degrees of freedom, which is called a reservoir, for a machine learning purpose. Here we propose a concrete physical implementation of a quantum reservoir u... Preprint Many quantum algorithms, such as Harrow-Hassidim-Lloyd (HHL) algorithm, depend on oracles that efficiently encode classical data into a quantum state. The encoding of the data can be categorized into two types; analog-encoding where the data are stored as amplitudes of a state, and digital-encoding where they are stored as qubit-strings. The former... Article The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but a single input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently... 
- Article: Quantum reservoir computing provides a framework for exploiting the natural dynamics of quantum systems as a computational resource. It can implement real-time signal processing and solve temporal machine learning problems in general, which requires memory and nonlinear mapping of the recent input stream using the quantum dynamics in computational...
- Article: We propose a classical-quantum hybrid algorithm for machine learning on near-term quantum processors, which we call quantum circuit learning. A quantum circuit driven by our framework learns a given task by tuning parameters implemented on it. The iterative optimization of the parameters allows us to circumvent the high-depth circuit. Theoretical i...
- Article (full text available): Instantaneous quantum polynomial-time (IQP) computation is a class of quantum computation consisting only of commuting two-qubit gates and is not universal in the sense of standard quantum computation. Nevertheless, it has been shown that if there is a classical algorithm that can simulate IQP efficiently, the polynomial hierarchy (PH) collapses at...
- Article: What happens if in QMA the quantum channel between Merlin and Arthur is noisy? It is not difficult to show that such a modification does not change the computational power as long as the noise is not too strong so that errors are correctable with high probability, since if Merlin encodes the witness state in a quantum error-correction code and send...
- Article: Blind quantum computation (BQC) allows a client, who only possesses relatively poor quantum devices, to delegate universal quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot know the client's input, quantum algorithm, and output. In the existing verification schemes of BQC, any suspicious...
- Article: This paper investigates the power of polynomial-time quantum computation in which only a very limited number of qubits are initially clean in the |0> state, and all the remaining qubits are initially in the totally mixed state. No initializations of qubits are allowed during the computation, nor intermediate measurements. The main results of this p...
- Article: We show that the class QMA does not change even if we restrict Arthur's computing ability to only Clifford gate operations (plus classical XOR gate). The idea is to use the fact that the preparation of certain single-qubit states, so called magic states, plus any Clifford gate operations are universal for quantum computing. If Merlin is honest, he...
- Article: Blind quantum computation (BQC) allows an unconditionally secure delegated quantum computation for a client (Alice) who only possesses cheap quantum devices. So far, extensive efforts have been paid to make Alice's devices as classical as possible. Along this direction, quantum channels between Alice and the quantum server (Bob) should be considere...
- Article (full text available): Deterministic quantum computation with one quantum bit (DQC1) [E. Knill and R. Laflamme, Phys. Rev. Lett. 81, 5672 (1998)] is a restricted model of quantum computing where the input state is the completely-mixed state except for a single pure qubit, and a single output qubit is measured at the end of the computing. We can generalize it to the...
- Article (full text available): It is often said that the transition from quantum to classical worlds is caused by decoherence originated from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of classical simulatability of a quantum system under decoherence. Specifically, w...
- Article (full text available): We investigate quantum computational complexity of calculating partition functions of Ising models. We construct a quantum algorithm for an additive approximation of Ising partition functions on square lattices. To this end, we utilize the overlap mapping developed by Van den Nest, D\"ur, and Briegel [Phys. Rev. Lett. 98, 117207 (2007)] and its int...
- Article (full text available): Deterministic quantum computation with one quantum bit (DQC1) is a model of quantum computing where the input is restricted to containing a single qubit in a pure state, with all other qubits in a completely-mixed state, and with only a single qubit measurement at the end of the computation [E. Knill and R. Laflamme, Phys. Rev. Lett. 81, 5672 (199...
- Article (full text available): Protecting quantum information from decoherence due to environmental noise is vital for fault-tolerant quantum computation. To this end, standard quantum error correction employs parallel projective measurements of individual particles, which makes the system extremely complicated. Here we propose measurement-free topological protection in two dime...
- Article: Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, and program. If the clie...
- Article: This is a short review on an interdisciplinary field of quantum information science and statistical mechanics. We first give a pedagogical introduction to the stabilizer formalism, which is an efficient way to describe an important class of quantum states, the so-called stabilizer states, and quantum operations on them. Furthermore, graph states, w...
- Article (full text available): The conventional duality analysis is employed to identify a location of a critical point on a uniform lattice without any disorder in its structure. In the present study, we deal with the random planar lattice, which consists of the randomized structure based on the square lattice. We introduce the uniformly random modification by the bond dilution...
- Article (full text available): We consider measurement-based quantum computation (MBQC) on thermal states of the interacting cluster Hamiltonian containing interactions between the cluster stabilizers that undergoes thermal phase transitions. We show that the long-range order of the symmetry breaking thermal states below a critical temperature drastically enhance the robustness...
- Article (full text available): Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-pr...
- Article (full text available): In the framework of quantum computational tensor network, which is a general framework of measurement-based quantum computation, the resource many-body state is represented in a tensor-network form (or a matrix-product form), and universal quantum computation is performed in a virtual linear space, which is called a correlation space, where tensors...
- Data (full text available): Supplementary material
- Article (full text available): Tremendous efforts have been paid for realization of fault-tolerant quantum computation so far. However, preexisting fault-tolerant schemes assume that a lot of qubits live together in a single quantum system, which is incompatible with actual situations of experiment. Here we propose a novel architecture for practically scalable quantum computatio...
- Article (full text available): We propose a family of surface codes with general lattice structures, where the error-tolerances against bit and phase errors can be controlled asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilber...
- Article (full text available): Blind quantum computation is a new secure quantum computing protocol which enables Alice, who does not have sufficient quantum technology, to delegate her quantum computation to Bob, who has a fully-fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output, and algorithm. In previous protocols, Alice needs to ha...
- Article (full text available): Recently, Li et al. [Phys. Rev. Lett. 107, 060501 (2011)] have demonstrated that topologically protected measurement-based quantum computation can be implemented on the thermal state of a nearest-neighbor two-body Hamiltonian with spin-2 and spin-3/2 particles provided that the temperature is smaller than a critical value, namely, thres...
- Article (full text available): In the framework of quantum computational tensor network [D. Gross and J. Eisert, Phys. Rev. Lett. 98, 220503 (2007)], which is a general framework of measurement-based quantum computation, the resource many-body state is represented in a tensor-network form, and universal quantum computation is performed in a virtual linear space, which is ca...
- Article (full text available): We investigate relations between computational power and correlation in resource states for quantum computational tensor network, which is a general framework for measurement-based quantum computation. We find that if the size of resource states is finite, not all resource states allow correct projective measurements in the correlation space, which...
- Article (full text available): We propose a robust and scalable scheme to generate an $N$-qubit $W$ state among separated quantum nodes (cavity-QED systems) by using linear optics and postselections. The present scheme inherits the robustness of the Barrett-Kok scheme [Phys. Rev. A 71, 060310(R) (2005)]. The scalability is also ensured in the sense that an arbitrarily larg...
https://brilliant.org/discussions/thread/last-10000-digits-of-grahams-number/
# Last 10000 digits of Graham's number So, if you didn't know, it's possible to compute the last $n$ digits of Graham's number quite easily, by observing the fact that $3\uparrow3=...7$, $3\uparrow3\uparrow3=...87$, $3\uparrow3\uparrow3\uparrow3=...387$, $3\uparrow3\uparrow3\uparrow3\uparrow3=...5387$, and so on. In fact, if you search on the internet you'll find several people who computed up to $200$ digits, $400$ digits, or $500$ digits. Now, I admit I was pretty disappointed with these results. I couldn't find anyone who pushed the calculations any further! (I may be wrong; in that case, show me if anyone else actually calculated more than that.) So, I developed an algorithm that is sufficiently efficient to compute the last $500$ digits in only approximately $2$ seconds. With this algorithm, I was able to compute the last $10000$ digits of Graham's number on my computer, in $16316.5$ seconds (approximately $4.5$ hours). I'm not entirely sure how, but I'm pretty sure it's possible to make the algorithm much more efficient than that, so that even calculating these digits should take only a matter of a few seconds. If you have any good ideas, tell me below (try them first, if possible).
Either way, here they are (new line every 70 digits): $g_{64} = ...3078726077030301309631860565499591166728394144521822940211684925744690\\ 8303160862621307856182261004203129683467872420025335707165042882886038\\ 1527890583177734743488999363221709377188705302161340082623104165260987\\ 2469522381180520893701095105746695201447806881045566940953003050207963\\ 0251089314581777820684968377573229456578469591751099452615715258153133\\ 8317144176357246277980971517349406785579279353063617993752257361282030\\ 1473864489406090828511196812234883812826536412935235758505566522273299\\ 8513867089855758447648371115779454007188631486534611854130768464954083\\ 8335805841695412280766038020711553526827091695879475106425076890327782\\ 6708848390877435531688133831988779505683625673270427786212688069705881\\ 0276174028639378952134276596828174174610570754797760763975177038244691\\ 2063024310915173151554672020720329805779214569991795690518659602446902\\ 7274217279141430415867319657287140268008652315291316261820652195021921\\ 0914610704519073926283967434339662068326819744974641935341502976180597\\ 5219746989916790553951240749624022306775365113200227817050284227367149\\ 7491794032802699070161003317178855043208655046184676579497958334888729\\ 3809617659827235067373513656241299335615924204033665860263764635136445\\ 0901965169912468031701035813068048871232519853582991620638263170147783\\ 2698324585503287762867838791720029901423547073086007824609234282758224\\ 9084362130009203937656462657958086964494023780323916935845145868457435\\ 0019308749993296237589317856219609033942624384808517627658437282947072\\ 8254599948415970659821958864982353541909598033354078979538256357245359\\ 7473687377205449098623941105789048433603977408157821289037966848053431\\ 0161245447459165410165069396137727188277544585197396781572979626659473\\ 8946097169571922151724219229759670392571616731391424780277194256133384\\ 1909070301383939429023145632838588124976137279502774086096179244679618\\ 
6025376358215497336320381116685826498270685734807990885455869123699181\\ 9878323695993552951130721247635787520841764173839071101104122258672751\\ 9228928871964307453282154261444871750080952421236363768139610777495235\\ 3201683350142428109136897852904313178813533335511088902235947721071584\\ 9496037260619984908856098462387682166967457951388588832356929998573679\\ 5420111111854634404435045676794013567654575959588995123046187234360352\\ 1793889196205026750886423489731195416265541685656346013519486952120940\\ 3751415314099407271948277374441254695053697070411390864858720859625554\\ 1959030238060629137169138722855477794398025076773239670591720768747467\\ 2844654063418138034129430436097660215542336711205739081742346117574411\\ 8240134002338774237104408010940301648706232749183781478765763754598101\\ 5382140877149223264515258326901107991667754434423538374815474031008806\\ 3794659755273784075854629840335173116218238312437640446823996561592082\\ 8447876334410730503146643034564958892386400206720157774336643971046097\\ 1913397786711927076109359761269701118640088324214111573518542523175301\\ 8437333602444678510859228109479432808176387274946992467944754462151350\\ 5367844082852088230997958080227897832298874619826542016735413058821008\\ 1847428327743235541621704680913967417772759547636593847461261352062528\\ 6493615579933087865883727503023169846062624381686315823768546266806694\\ 4052359344981816230189301309354006700248433587219900315863936527371581\\ 9660118267438193855827192236488406202498847093959194410120314045227731\\ 5157935152536936493425053093270987737803549232774078167979402086740138\\ 2441063153368792024085513096521839198783970159526194524749783176345096\\ 6138354215340807779927410436230549205899966191897479395002808032068505\\ 2798877144984365085442674905448699279843309239159997374495902995758836\\ 3427364161563026748153852589303542350028921428192299937990553925840346\\ 3179740764550809001835012576720025421347754368577841258214793482088528\\ 
3528473446395144549774006915475700147362629356396411599745063763259120\\ 7602085065331518191297627524916528283327470445487220431209794181800837\\ 5242881807867409536257070239161286255743400994204946286838833646124751\\ 9450140740102350858410502955539960188235779958081190441955895515413996\\ 2907362641416032599108674402147606737302706859009869989241966114092477\\ 2407789976751585993614061850356986544196252382034927589676238991215557\\ 8594439247420428277018540432967913180650401616546821719644550936450059\\ 5583090752015617577969861470976788938740200600298288287888330891308863\\ 3722819728625451800838400216724842196639874673295077669833180844718217\\ 9341171005320299673923404289980856827543457807278033622507455463530345\\ 2502047026773318085940276837031308524437227254648631365299090852540852\\ 7368117005989256614980159639267250074517606853740135934062100746075654\\ 0813896593247352620824016619700186551382187297721949846316900749082737\\ 8094264056082839271636552848839603826288292359895120699259424392975074\\ 9821816437833463246454551763732757655276760590582693972496924532998092\\ 3380190551831802463416887618541048817756223855531234022957969144122483\\ 6406466849925916755964304671002296817244339224792182135995868969341492\\ 3690776243056770164822502041983470306716296896922618604046908954485889\\ 1348275274667315520855871416643581761678324893080737347771500148013131\\ 5872093173217027881014846523723198056977435033203823397583204572679056\\ 6651633842068281545694928486633422445461995837450720456764508778883440\\ 3854173232861131892325939242549724454004862230630447361306160309037871\\ 5800584480793579521136958705250576661572845777934961160672813467687918\\ 2468219518101142821666186785543970961610009398295493882433280467468806\\ 9413764541931265604057917863711717794304584231021596233589764804054811\\ 8226228812074485140806951381912453398593093071760684032101862850540079\\ 7648371363380891593720550471092872187099316006161890897978303030444170\\ 
7453650131619894698610993458668339554364084020023399067051642966934860\\ 0027570319308957395061221345848777985085039705997484485916440720079353\\ 6749317556015986374673052074784015605376345910038695996792166640248506\\ 5151972178292734011493177498161942761055395341214573854386960142347615\\ 1064120816786711566627026981412394927353467365863136279136411085250120\\ 5029022957845553710087982096902106709311969126401168909948606316616230\\ 2923902195299675526555986392068330814711696071435279166352601845751091\\ 1413391101513558310051637274156744456218580403883246348766716791121059\\ 6640361544098108142733814394129952702989282677057681983318348451157632\\ 3445763461588558180723948837731724171820914217827456505430234904693165\\ 7016237345668584450386474453301178678720075637289970867637583657220506\\ 2491301610433455913955841065225929205594669325571087621309871185323210\\ 5196699704725638064513028255040447297723752850180331481056404968157353\\ 9061785278598760313140865068530080919200003079350549373708348792972286\\ 8670573755820995979960622710931298072888031118784826564331478394544516\\ 8149803763580948273619215580715203535826455618889996767810934669730243\\ 5416969251647217062623523969036088524240889973797522976568651221236460\\ 6700882735550252805162688717743002242430178153096139879227557097022144\\ 3057790329335567051650934703248833671211260215491243781456055632180252\\ 2593943710111389180996687955300606304861015768885793044484950849027511\\ 0106029803619475963780866084863862122324538632723342196159851036070658\\ 3695251304412583544807924551217328349121977645145493850412041271265386\\ 4611628171304512992288612473168928694708607916093810579534245595047488\\ 6617927058056116386911212141737555332931772421804354502689741745136461\\ 9149258184608873236073683901075275381240402732984629660210771321778593\\ 5317908316720145478172167767818370069254396358778477381821832282338662\\ 9557597665763526037469623882734970261718460508041643967446394822226604\\ 
7262800922130068469037144793383117028262382841196033782589178075561744\\ 9686306276313945559616369461957875659406855496062664521105020034463607\\ 8639152176612841273246755062872676148243079798928244275312027774818688\\ 7250109520756590181937968438911972868200142926836525620315535316031054\\ 6891685301933822474738169794305170907203165619769535061113213915856256\\ 1824661400448048835400453257011706951826263434793588413474586938545913\\ 1913595602946034912375719564262285371586081255164508962613460090798485\\ 7847797205307145186514675412317888384738009343344165376733605639874152\\ 6838837135702194865074959666167436192933645884998056100697104793100679\\ 4152084453613830911021630017437654919684883920437258419601503784784516\\ 0671512017198801157547084883939593053650556078872159994750221442214834\\ 8268144787270731001365537383577746098505866012640076129423352326255313\\ 3073942052007839547749762554111899859772880815945865752809988634672233\\ 4769804780146302789353612329312586963866559329949214911489134763214665\\ 4314303272656947761889503867538372033508034358690038674211367316517236\\ 2113256247997506770294235705056911305065974352655256553654276889526636\\ 0391135992668989734244822601493574507744556050638326609473542254360350\\ 8674855342484610627305685534794791282019520577643564769466316663822950\\ 0048280051827615363513800094323248679021061702425944029209484941954536\\ 7418064519308105163357496871638118822504114501587037019405680648005022\\ 5768533805530305183368091271811490817539484300268084104379556148104831\\ 5835447210850384076723823375354333111031697890169996590703687564769571\\ 4199517294684058268271081207938885760678089057660597351282040660918730\\ 7108483992113117957918089160673029776868734932638038255189701221105348\\ 1886141584874851920098526106525203948232207371149341083916873785440379\\ 8603368448472052729248390757866617805529414157119366603081892881936678\\ 7741482317801728126934985735783270950758576591974947039193152967596669\\ 
2340488030236244704910353178090822611674695077464191287728244330583239\\
5092525499355092526168572459565741317934416750148502425950695064738395\\
6574791365193517983345353625214300354012602677162267216041981065226316\\
9355188780388144831406525261687850955526460510711720009970929124954437\\
8887496062882911725063001303622934916080254594614945788714278323508292\\
4210209182589675356043086993801689249889268099510169055919951195027887\\
1783083701834023647454888222216157322801013297450927344594504343300901\\
0969280253527518332898844615089404248265018193851562535796399618993967\\
905496638003222348723967018485186439059104575627262464195387$

Note by Aldo Roberto Pessolano 1 year, 4 months ago
Sort by:

Update: I just found a site that lists exactly the same digits, so I'm definitely not the first to have calculated these digits. Still, the efficiency question remains. - 1 year, 4 months ago

what is the inverse of grahams number - 3 months, 1 week ago

1/243=0.004115226337448559670781893004115226337448559670781893004.... , period=27, and etc - 3 months, 1 week ago

Testing Math Editor: $\pi$ - 1 year, 3 months ago

Booya! $e^{i \pi} +1 = 0$ - 1 year, 3 months ago

what's the algorithm? - 1 year, 4 months ago

The most efficient algorithm I have thought of so far is just one line of Mathematica:

calc = 3; Do[calc = PowerMod[3, calc, 10^i], {i, 1, 500}]; calc

This computes the last 500 digits correctly in roughly 0.7 seconds on my computer. For 1000 digits, it takes 6.2 seconds. The main problem with this algorithm is that it gets progressively slower, since it recomputes all the already-computed digits with every step. There must be a way to avoid recomputing all the digits every time (for example, to get the 1001st digit pretty much instantly just by knowing the last 1000 digits), but I can't quite figure out how. - 1 year, 4 months ago

3.345625467385246375823675867348259426395436858685673245678845673333302367489236758493678624396724839674839678492456789106574385627384562385962785967289674280654803333221... - 3 months, 1 week ago

what a constant look at the digits ......31579315973175917973999717329......
- 3 months, 1 week ago started decimal place 46374 - 3 months, 1 week ago it had a lot of odds and 1 even - 3 months, 1 week ago another: 4.12344678259467589436578249365782493333331029564782936758293672892345636758935674392567238962789434247148245623392564839578123456123456925379267896661154673845268012345679213.... - 3 months, 1 week ago look at the digits .....56278524678333333333333...(100 3s total)...333333335326845367823524832352784325682283683678335683..... - 3 months, 1 week ago started decimal place 142857 - 3 months, 1 week ago 1/3=0.33333333333333333333333333... (period 1),1/9=0.111111111111111111111111111111... (period also is 1),1/27=0.037037037037037037037037037037037037... (period=3),1/81=0.01234567901234567901234567901234567901234567901... (period=9),...,1/grahams number (period=grahams number/9) - 3 months, 1 week ago period triples - 3 months, 1 week ago 546738111123/999999999999 - 3 months, 1 week ago 23456/99999 (period=5) - 3 months, 1 week ago 5.15673842536784536758786946875047894037580204308200206392579239563853633333026790678042606596503768034768013999761111124910123... - 3 months, 1 week ago find the digits ......3159735193759135793133331759173979973197351973113373313579197315793139735193791735931597391375975973797735179...... (all the digits here were odd) - 3 months, 1 week ago started decimal place 999001 - 3 months, 1 week ago umm the number 26378146328956830967280674283967389657849306784230673289567807685240677773333506342657802367819567483578695487530247483120657483075101 is this prime? try in Number world - 3 months, 1 week ago It's divisible by 7 - 3 months, 1 week ago 6.546728567584673256743295627438956784396578234967329647812967489367850163478046780654738563478056237480628111333999777546738146713294672946170856173805617438056173805637850613785061795714380561379461378463785016437586071483674820365780476892071520773457215893657896574839678964723911011... 
- 3 months, 1 week ago you could see the digits: ......658963725933333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333674382956743286972678594310001647193...... - 3 months, 1 week ago periodic sequence: 0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,... (period=2) - 3 months ago 3,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,... (period=2) but i have a 3 at the start - 3 months ago what a periodic sequence - 3 months ago 5.6574891673895617834523784016478301357835631756347827468203567813647813257803657830573205417329536780513728140613751376037805617834178203567318065354721054273805723104617357328033336217304617358016785203675830682306473280567281036478036121111615783236718036701561111116163781526783106732806512051627780254167394394567956293567239456379452367945231755547956371453925781936478329615782306712065720567203... - 3 months ago 1.128283173717379590959517773101717618496378493678149367192657839641789463786437859367489657329467381596378406378057381940785901674023748023671802367148023647820367328407895036274810738406738407384063578032748036258407389506328051020408160904010025062523549823647839567194678329678162835063810463278047328140333316703267023647382067124036721507132940738947389432691034758903751932075819403673027810635695679579693103446748617840672046738056713046783210467328036748105780637840632758023456167498146273849326758946738239215623749123401... is irrational approximations 9/8,35/31 - 2 months, 2 weeks ago hi - 1 year, 4 months ago
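The PowerMod iteration discussed in the comments above is straightforward to reproduce outside Mathematica. Here is a minimal sketch in Python (my own transcription of the idea, not any poster's actual code), using the built-in three-argument pow for modular exponentiation:

```python
# Last n digits of Graham's number: the last i digits of a power tower
# of 3s stabilize once the tower is tall enough, so iterating
# calc -> 3^calc mod 10^i pins down one more digit per step.
def graham_last_digits(n):
    calc = 3
    for i in range(1, n + 1):
        calc = pow(3, calc, 10 ** i)  # modular exponentiation
    return calc

print(graham_last_digits(4))  # -> 5387
```

Like the Mathematica one-liner, this still recomputes all earlier digits at every step, so it shares the efficiency problem raised above; it does reproduce the tail of the digit block (for example, the last four digits 5387).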
https://www.physicsforums.com/threads/what-istime.5270/
# What is time?

1. Aug 31, 2003 ### benzun_1999 What is time? My question is simple: what is time, scientifically?

2. Aug 31, 2003 ### kishtik No one knows.

3. Aug 31, 2003 Staff Emeritus In quantum mechanics, a parameter, not an observable. In relativity (both kinds), a dimension. Beyond that, what he said.

4. Aug 31, 2003 ### quartodeciman NIST: time and frequency. To make more of 'time' requires philosophy. SEP: time. SEP: the experience and perception of time. SEP: temporal logic. SEP: being and becoming in modern physics. Questions: Is time essentially fundamental? Does every attempt to derive time lead back to concepts that already presuppose the existence of time/temporal references? Are time qualifiers in language irreducible elements? (Example: "What happened before time began?" --> "happen" with past-tense "-ed", "before", "begin" with past tense "began".) Last edited: Sep 1, 2003

5. Aug 31, 2003 ### pmb Re: What is time? Time is basically that which distinguishes different states of the universe. For example: consider an actual, particular arrangement of the particles in the universe. Since there is more than one such arrangement, what is different between them? Furthermore, there is a particular way in which these arrangements are related to each other. This relation is the "order" in the universe, or what is called "entropy." By 'order' I mean the orderliness of things. For example: if your room is quite neat, then its order is 'high'; everything is in 'order'. If your room is a mess, then it is of 'low' order, or 'disordered'. So this phenomenon of different arrangements and their relationship to order is called "time." As 'time' increases things change, i.e. the particles in the universe move, and things tend toward overall disorder. So label these different arrangements with numbers and call these numbers the time. But note that the number itself is not time; time is that which the number refers to. Pete 6.
Sep 3, 2003 ### shankar Time is an independent factor, an abstract one. Finding a relation between two unknown functions is very difficult, and there may not be one; but with a known one it is easy to define a function. I hope you get my point. Please feel free to correct me if I am wrong. 7. Sep 3, 2003 ### Arc_Central Time is a measurement of motion, applied by the measure of motion in your brain. The relationship is your understanding of it. You probably ought to ask: what is motion?
https://slideplayer.com/slide/232763/
# Core 3 Differentiation Learning Objectives:

Learning objectives: review understanding of differentiation from Core 1 and 2; understand how to differentiate e^x; understand how to differentiate ln ax.

Differentiation Review. Differentiation means finding the gradient function. The gradient function is used to calculate the gradient of a curve for any given value of x, so at any point.

The Key Bit. The general rule (very important) is: if y = x^n, then dy/dx = n x^(n-1).
E.g. if y = x^2, then dy/dx = 2x.
E.g. if y = x^3, then dy/dx = 3x^2.
E.g. if y = 5x^4, then dy/dx = 5 × 4x^3 = 20x^3.

A differentiating problem. The gradient of y = ax^3 + 4x^2 − 12x is 2 when x = 1. What is a? We have dy/dx = 3ax^2 + 8x − 12. When x = 1, dy/dx = 3a + 8 − 12 = 2, so 3a − 4 = 2, hence 3a = 6 and a = 2.

Finding Stationary Points. At a maximum: dy/dx = 0, with dy/dx > 0 just before, dy/dx < 0 just after, and d²y/dx² < 0. At a minimum: dy/dx = 0, with dy/dx < 0 just before, dy/dx > 0 just after, and d²y/dx² > 0.

Differentiation of a^x. Compare the graph of y = a^x with the graph of its gradient function, and adjust the value of a until the graphs coincide. Summary: the curve y = a^x and its gradient function coincide when a ≈ 2.718. The number 2.718... is called e, and is a very important number in calculus. (See pages 88 and 89, A1 and A2.)

Differentiation of e^x. If f(x) = e^x, then f'(x) = e^x. Also, if f(x) = a e^x, then f'(x) = a e^x. The gradient function f'(x) and the original function f(x) are identical, therefore the gradient function of e^x is e^x, i.e. the derivative of e^x is e^x. (Turn to page 90 and work through Exercise A.)

Derivative of ln x. ln x is the inverse of e^x: the graph of y = ln x is a reflection of y = e^x in the line y = x. This helps us to differentiate ln x. If y = ln x, then x = e^y, so dx/dy = e^y = x, and therefore dy/dx = 1/x.

Summary - ln ax (1). For f(x) = ln x: f'(1) = 1 and f'(4) = 0.25. For f(x) = ln 3x: f'(1) = 1 (the gradient at x = 1 is 1) and f'(4) = 0.25 (the gradient at x = 4 is 0.25). For f(x) = ln 17x: again f'(1) = 1 and f'(4) = 0.25.

Summary - ln ax (2). Whatever value a takes, the gradient function is the same: for f(x) = ln ax, f'(x) = 1/x. For example, f'(100) = 0.01 (the gradient at x = 100 is 0.01) and f'(0.2) = 5 (the gradient at x = 0.2 is 5). The gradient is always the reciprocal of x.

Examples. If f(x) = ln 7x, then f'(x) = 1/x. If f(x) = ln 11x^3: we don't yet know how to differentiate ln ax^3 directly, but f(x) = ln 11 + ln x^3 = ln 11 + 3 ln x, and constants go (vanish) in differentiation, so f'(x) = 3 × (1/x) = 3/x.

Summary. If y = x^n, then dy/dx = n x^(n-1). If f(x) = a e^x, then f'(x) = a e^x. If g(x) = ln ax, then g'(x) = 1/x. If h(x) = ln ax^n, then h(x) = ln a + n ln x, so h'(x) = n/x.

Differentiation of e^x and ln x — Classwork / Homework: turn to page 92, Exercise B, Q1, 3, 5.
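The derivative rules on these slides are easy to spot-check numerically. The following is a small illustrative sketch of my own (not part of the original presentation), using a central finite difference to approximate each gradient:

```python
import math

def deriv(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx ln(ax) = 1/x, whatever the constant a is:
for a in (1, 3, 17):
    assert abs(deriv(lambda x: math.log(a * x), 4.0) - 0.25) < 1e-6

# d/dx e^x = e^x:
assert abs(deriv(math.exp, 1.0) - math.e) < 1e-5

# Power rule, e.g. y = 5x^4 gives dy/dx = 20x^3:
assert abs(deriv(lambda x: 5 * x**4, 2.0) - 20 * 2.0**3) < 1e-3
```

Note how the gradient of ln(ax) at x = 4 comes out as 0.25 for every choice of a, exactly as the summary slides claim.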
http://tex.stackexchange.com/questions/12483/how-to-center-the-toc
How to center the TOC? I would like to design the table of contents aligned to a vertical line near the center of the page. The chapter and section titles need to be flushright, the numbers flushleft, and between titles and numbers I'd like to place a separating element. It can be done easily with a tabular:

\begin{tabular}{>{\raggedleft}m{0.4\textwidth}>{\centering}m{0.3cm}>{\raggedright}m{0.4\textwidth}}
First section title & \textbullet & 27 \tabularnewline
\end{tabular}

You can see the result of the code above in this image: But how can I build such a tabular with the LaTeX \tableofcontents command? Which commands do I need to redefine? After some googling I haven't found a ready solution. -

For the chapter, insert any symbol you like instead of the \Large\textbullet:

\documentclass{book}
\usepackage{xcolor,ragged2e}
\usepackage{array}
\makeatletter
\def\MBox#1#2#3#4{%
  \parbox[t]{0.4\linewidth}{\RaggedLeft#1}%
  \makebox[0.1\linewidth]{\color{red}#2}%
  \makebox[0.4\linewidth][l]{#3}\\[#4]}
\renewcommand*\l@chapter   [2]{\par\MBox{\bfseries\Large#1}{\Large\textbullet}{#2}{5pt}}
\renewcommand*\l@section   [2]{\MBox{\bfseries\large#1}{\textbullet}{#2}{2pt}}
\renewcommand*\l@subsection[2]{\MBox{\bfseries #1}{\textbullet}{#2}{1pt}}
\renewcommand\numberline[1]{}
\makeatother
\begin{document}
\tableofcontents
\chapter{First Chapter}
\section{foo}
\newpage
\section{bar}
\newpage
\subsection{foobar}
foobar
\section{An extraordinary long section title which should have a linebreak bar}
\end{document}

- Wonderful, thanks! I never thought it would be so simple. – deeenes Mar 2 '11 at 22:04

Have a look into the source of your document class, where all the mentioned macros are defined. The source is on your local drive; use your search feature or kpsewhich to locate it, such as kpsewhich book.cls at the command prompt. On Linux, I often open it directly with gedit $(kpsewhich book.cls). Actually, I made a shell function for that, to ease the frequent access to source code. - thank you!
Maybe I'll also try this approach, because of the vertical centering of the numbers when section titles wrap to more than one line. – deeenes Mar 2 '11 at 22:06
https://hal-insu.archives-ouvertes.fr/insu-00346713
# Measuring the heterogeneity of the coseismic stress change following the 1999 Mw7.6 Chi-Chi earthquake

Abstract : Seismicity quiescences are expected to occur in places where the stress has been decreased, in particular following large main shocks. However, such quiescences can be delayed by hours to years and be preceded by an initial phase of earthquake triggering. This can explain previous analyses arguing that seismicity shadows are rarely observed, since they can only be seen after this triggering phase is over. Such is the case of the main rupture zone, which experiences the strongest aftershock activity despite having been coseismically unloaded by up to tens of bars. The 1999 Mw 7.6 Chi-Chi, Taiwan earthquake is characterized by the existence of several such delayed quiescences, especially off the Chelungpu fault on which the earthquake took place. We here investigate whether these delays can be explained by a model of heterogeneous static-stress transfer coupled with a rate-and-state friction law. We model the distribution of the coseismic small-scale stress change τ by a Gaussian law with mean τ̄ and standard deviation σ_τ. The latter measures the level of local heterogeneity of the coseismic change in stress. The model is shown to mimic the earthquake time series very well. Robust inversion of the τ̄ and σ_τ parameters can be achieved at various locations, although on-fault seismicity has not been observed for a sufficiently long time to provide more than lower bounds on those estimates for the Chelungpu fault. Several quiescences have delays that can be well explained by local stress heterogeneity, even at relatively large distances from the Chi-Chi earthquake.
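The modelling idea in the abstract — a rate-and-state seismicity response averaged over a Gaussian distribution of local stress changes — can be sketched numerically. The sketch below uses Dieterich's (1994) rate response to a static stress step; the function names and all parameter values (`A_sigma`, `t_a`, `tau_bar`, `sigma_tau`) are illustrative assumptions, not the values inverted in the paper.

```python
import numpy as np

def dieterich_rate(t, dtau, A_sigma=0.1, t_a=1.0):
    """Relative seismicity rate R/r at time t after a stress step dtau,
    following Dieterich's rate-and-state response. A_sigma and t_a
    (the characteristic relaxation time) are illustrative values."""
    return 1.0 / (1.0 + (np.exp(-dtau / A_sigma) - 1.0) * np.exp(-t / t_a))

def mean_rate(t, tau_bar, sigma_tau, n=20001):
    """Average the rate over a Gaussian distribution of local stress
    changes with mean tau_bar and standard deviation sigma_tau."""
    dtau = np.linspace(tau_bar - 5 * sigma_tau, tau_bar + 5 * sigma_tau, n)
    w = np.exp(-0.5 * ((dtau - tau_bar) / sigma_tau) ** 2)
    w /= w.sum()
    return float((w * dieterich_rate(t, dtau)).sum())

# A mean stress DECREASE (tau_bar < 0) with strong heterogeneity produces
# triggering at early times and a quiescence (rate below background)
# only later -- i.e. a delayed seismicity shadow, as in the abstract.
early = mean_rate(t=0.001, tau_bar=-0.5, sigma_tau=0.5)
late = mean_rate(t=0.5, tau_bar=-0.5, sigma_tau=0.5)
```

The positive tail of the Gaussian dominates at early times (triggering), while the unloaded majority dominates once that transient decays, which is the qualitative mechanism invoked for the delayed quiescences.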
Document type : Journal articles
https://hal-insu.archives-ouvertes.fr/insu-00346713
Contributor : Pascale Talour
Submitted on : Wednesday, March 10, 2021 - 3:29:43 PM
Last modification on : Tuesday, July 27, 2021 - 9:36:02 AM
Long-term archiving on : Friday, June 11, 2021 - 7:01:37 PM

### File
2006JB004651.pdf — Publisher files allowed on an open archive

### Citation
David Marsan, Guillaume Daniel. Measuring the heterogeneity of the coseismic stress change following the 1999 Mw7.6 Chi-Chi earthquake. Journal of Geophysical Research: Solid Earth, American Geophysical Union, 2007, 112, article n° B07305. ⟨10.1029/2006JB004651⟩. ⟨insu-00346713⟩
https://arxiv.org/abs/1703.09614
physics.chem-ph

# Title: Accessing dark states optically through excitation-ferrying states

Abstract: The efficiency of solar energy harvesting systems is largely determined by their ability to transfer excitations from the antenna to the energy trapping center before recombination. Dark state protection, achieved by coherent coupling between subunits in the antenna structure, can significantly reduce radiative recombination and enhance the efficiency of energy trapping. Because the dark states cannot be populated by optical transitions from the ground state, they are usually accessed through phononic relaxation from the bright states. In this study, we explore a novel way of connecting the dark states and the bright states via optical transitions. In a ring-like chromophore system inspired by natural photosynthetic antennae, the single-excitation bright state can be optically connected to the lowest energy single-excitation dark state through certain double-excitation states. We call such double-excitation states the ferry states and show that they are the result of accidental degeneracy between two categories of double-excitation states. We then mathematically prove that the ferry states are only available when N, the number of subunits on the ring, satisfies N=4l+2 (l being an integer). Numerical calculations confirm that the ferry states enhance the energy transfer power of our model, showing a significant energy transfer power spike at N=6 compared with smaller N values, even without phononic relaxation. The proposed mathematical theory for the ferry states is not restricted to this one particular system or numerical model. In fact, it is potentially applicable to any coherent optical system that adopts a ring-shaped chromophore arrangement. Beyond the ideal case, the ferry state mechanism also demonstrates robustness under weak phononic dissipation, weak site energy disorder, and large coupling strength disorder.
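The ring-size condition N = 4l + 2 quoted in the abstract is easy to enumerate. The helper below is a trivial illustration added here (restricting to rings of at least 6 subunits is our assumption, since the abstract's smallest reported case is N = 6); it is not code from the paper.

```python
def has_ferry_states(n_subunits):
    """Check the abstract's condition N = 4l + 2 for the existence of
    ferry states on an N-subunit ring. Illustrative only; restricting
    to N >= 6 is an assumption made here."""
    return n_subunits % 4 == 2 and n_subunits >= 6

# Ring sizes up to 19 that admit ferry states; N = 6 is the smallest,
# consistent with the reported energy transfer power spike at N = 6.
ferry_sizes = [n for n in range(3, 20) if has_ferry_states(n)]
```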
Subjects: Chemical Physics (physics.chem-ph); Biological Physics (physics.bio-ph); Quantum Physics (quant-ph)
Cite as: arXiv:1703.09614 [physics.chem-ph] (or arXiv:1703.09614v3 [physics.chem-ph] for this version)

## Submission history

From: Zixuan Hu
[v1] Fri, 24 Mar 2017 20:17:14 UTC (1,084 KB)
[v2] Thu, 30 Mar 2017 18:46:32 UTC (1,085 KB)
[v3] Tue, 5 Sep 2017 14:33:34 UTC (1,283 KB)
http://sb2cl.ai2.upv.es/content/differential-algebra-control-systems-design-computation-canonical-forms
# Differential Algebra for Control Systems Design. Computation of Canonical Forms.

Title: Differential Algebra for Control Systems Design. Computation of Canonical Forms.
Publication Type: Journal Article
Year of Publication: 2013
Authors: Picó-Marco E
Journal: Control Systems Magazine
Volume: 33
Issue: 2
Start Page: 52
Pagination: 52–62

Abstract: Many systems can be represented using polynomial differential equations, particularly in process control, biotechnology, and systems biology [1], [2]. For example, models of chemical and biochemical reaction networks derived using the law of mass action have the form

ẋ = Sv(k,x),  (1)

where x is a vector of concentrations, S is the stoichiometric matrix, and v is a vector of rate expressions formed by multivariate polynomials with real coefficients k. Furthermore, a model containing nonpolynomial nonlinearities can be approximated by such polynomial models, as explained in "Model Approximation". The primary aims of differential algebra (DALG) are to study, compute, and structurally describe the solution of a system of polynomial differential equations,

f(x, ẋ, ..., x⁽ᵏ⁾) = 0,  (2)

where f is a polynomial [3]–[6]. Although, in many instances, it may be impossible to symbolically compute the solutions, or these solutions may be difficult to handle due to their size, it is still useful to be able to study and structurally describe the solutions. Often, understanding properties of the solution space and consequently of the equations is all that is required for analysis and control design.
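The mass-action form ẋ = S v(k, x) of Eq. (1) can be made concrete with a toy two-species network. The reactions, rate constants, and names below are invented for illustration and are not from the article.

```python
import numpy as np

# Toy reversible network (illustrative): A -> B with rate k1*[A],
# and B -> A with rate k2*[B]. Rows of S are species, columns reactions.
S = np.array([[-1.0, 1.0],    # d[A]/dt picks up -r1 + r2
              [1.0, -1.0]])   # d[B]/dt picks up +r1 - r2

def v(k, x):
    """Mass-action rate vector: each entry is a monomial in the
    concentrations with a real coefficient, as in Eq. (1)."""
    k1, k2 = k
    a, b = x
    return np.array([k1 * a, k2 * b])

def xdot(k, x):
    """Right-hand side of the polynomial ODE: xdot = S @ v(k, x)."""
    return S @ v(k, x)

rates = xdot(k=(2.0, 1.0), x=(1.0, 1.0))
# the rows of S sum to zero columnwise here, so total mass is conserved:
# d[A]/dt + d[B]/dt = 0
```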
https://mathstrek.blog/2015/01/29/tensor-product-and-linear-algebra/
## Tensor Product and Linear Algebra

Tensor products can be rather intimidating for first-timers, so we’ll start with the simplest case: that of vector spaces over a field K. Suppose V and W are finite-dimensional vector spaces over K, with bases $\{v_1, \ldots, v_n\}$ and $\{w_1, \ldots, w_m\}$ respectively. Then the tensor product $V\otimes_K W$ is the vector space with abstract basis $\{ v_i w_j\}_{1\le i \le n, 1\le j\le m}.$ In particular, it is of dimension mn over K. Now we can “multiply” elements of V and W to obtain an element of this new space, e.g. $(2v_1 + 3v_2)(w_1 - 2w_3) = 2v_1 w_1 + 3 v_2 w_1 - 4 v_1 w_3 - 6v_2 w_3.$ For example, if V is the space of polynomials in x of degree ≤ 2 and W is the space of polynomials in y of degree ≤ 3, then $V\otimes_K W$ is the space of polynomials spanned by $x^i y^j$ where 0≤i≤2, 0≤j≤3. However, defining the tensor product with respect to a chosen basis is rather unwieldy: we’d like a definition which only depends on V and W, and not the bases we picked.

Definition. A bilinear map of vector spaces is a map $B:V \times W \to X,$ where V, W, X are vector spaces, such that

• when we fix w, B(-, w): V→X is linear;
• when we fix v, B(v, -): W→X is linear.

The tensor product of V and W, denoted $V\otimes_K W$, is defined to be a vector space together with a bilinear map $\psi : V\times W \to V\otimes_K W$ such that the following universal property holds:

• for any bilinear map $B: V\times W \to X$, there is a unique linear map $f:V\otimes_K W \to X$ such that $f\circ \psi = B.$

For v∈V and w∈W, the element $v\otimes w := \psi(v,w)$ is called a pure tensor element. The universal property guarantees that if the tensor product exists, then it is unique up to isomorphism. What remains is the

Proof of Existence. Recall that if S is a basis of a vector space V, then any linear function V → W uniquely corresponds to a function S → W.
Thus if we let T be the (infinite-dimensional) vector space with basis: $\{e_{v, w} : v \in V, w\in W\}$ then linear maps $g : T \to X$ correspond uniquely to functions $B : V\times W \to X$. Saying that B is bilinear is precisely the same as g factoring through the quotient by the subspace U to obtain $\overline g : T/U \to X,$ where U is the subspace generated by elements of the form:

\begin{aligned} e_{v+v', w} - e_{v,w} - e_{v', w}, \qquad & e_{cv, w} - c\cdot e_{v,w}\\ e_{v, w+w'} - e_{v,w} - e_{v, w'},\qquad & e_{v,cw} - c\cdot e_{v,w}\end{aligned}

for all v, v’ ∈ V, w, w’ ∈ W and constants c ∈ K. Hence T/U is precisely our desired vector space, with $\psi : V\times W \to T/U$ given by $(v, w) \mapsto e_{v,w} \pmod U.$ And v⊗w is the image of $e_{v,w}$ in T/U. ♦

Note. From the proof, it is clear that V ⊗ W is spanned by the pure tensors; in general though, not every element of V ⊗ W is a pure tensor. E.g. v⊗w + v’⊗w’ is generally not a pure tensor. However, v⊗w + v⊗w’ + v’⊗w + v’⊗w’ = (v+v’)⊗(w+w’) is a pure tensor since ψ is bilinear.

## Properties of Tensor Product

We have:

Proposition. The following hold for K-vector spaces:

• $K \otimes_K V \cong V$, where $c\otimes v\mapsto cv$;
• $V \otimes_K W \cong W \otimes_K V$, where $v\otimes w\mapsto w\otimes v$;
• $V \otimes_K (W \otimes_K W') \cong (V\otimes_K W)\otimes_K W'$, where $v\otimes (w\otimes w') \mapsto (v\otimes w)\otimes w'$;
• $V \otimes_K (\oplus_i W_i) \cong \oplus_i (V\otimes W_i)$, where $v\otimes (w_i)_i \mapsto (v\otimes w_i)_i$.

Proof. For the first property, the map K × V → V taking (c, v) to cv is bilinear over K, so by the universal property of tensor products, this induces f: K ⊗ V → V taking c⊗v to cv. On the other hand, let’s take the linear map g: V → K ⊗ V mapping v to 1⊗v. It remains to prove gf and fg are identity maps. Indeed: fg takes v → 1⊗v → v, and gf takes c⊗v → cv → 1⊗cv = c⊗v, where the equality follows from bilinearity of ⊗. The second and fourth properties are left to the reader. For the third property, fix v ∈ V.
The map $W\times W' \to (V\otimes W)\otimes W'$ taking (w, w’) to (v⊗w)⊗w’ is bilinear in W and W’, so it induces $f_v : W\otimes W' \to (V\otimes W)\otimes W'$ taking $w\otimes w' \mapsto (v\otimes w)\otimes w'.$ Next we check that the map $V\times (W\otimes W') \to (V\otimes W)\otimes W', \qquad (v, x) \mapsto f_v(x)$ is bilinear, so it induces a linear map $f : V\otimes (W\otimes W') \to (V\otimes W)\otimes W'$ taking $v\otimes (w\otimes w') \mapsto (v\otimes w)\otimes w'.$ Similarly one defines a reverse map $g: (V\otimes W)\otimes W' \to V\otimes (W\otimes W')$ taking $(v\otimes w)\otimes w' \mapsto v\otimes (w\otimes w').$ Since the pure tensors generate the whole space, it follows that f and g are mutually inverse. ♦

As a result of the second and fourth properties, we also have:

Corollary. For any collections $\{V_i\}$ and $\{W_j\}$ of vector spaces, we have: $\oplus_{i, j} (V_i \otimes_K W_j) \cong (\oplus_i V_i)\otimes_K (\oplus_j W_j),$ where the element $(v_i) \otimes (w_j)$ on the right corresponds to $(v_i \otimes w_j)_{i,j}$ on the left.

In particular, if $\{v_i\}$ and $\{w_j\}$ are bases of V and W respectively, then $V = \oplus_i Kv_i, \ W = \oplus_j Kw_j \implies V\otimes W = \oplus_{i, j} K(v_i \otimes w_j)$ so $\{v_i \otimes w_j\}$ forms a basis of V⊗W. This recovers our original intuitive definition of the tensor product!

## Tensor Product and Duals

Recall that the dual of a vector space V is the space V* of all linear maps V → K. It is easy to see that V* ⊕ W* is naturally isomorphic to (V ⊕ W)*, and when V is finite-dimensional, V** is naturally isomorphic to V. [ One way to visualize V** ≅ V is to imagine the bilinear map V* × V → K taking (f, v) to f(v). Fixing f we obtain a linear map V → K as expected, while fixing v we obtain a linear map V* → K, and this corresponds to an element of V**.
] If V is finite-dimensional, then a basis $\{v_1, \ldots, v_n\}$ of V gives rise to a dual basis $\{f_1, \ldots, f_n\}$ of V* where $f_i(v_j) = \begin{cases} 1, \quad &\text{if } i = j,\\ 0,\quad &\text{otherwise,}\end{cases}$ or simply $f_i(v_j) = \delta_{ij}$ with the Kronecker delta symbol. The next result we would like to show is:

Proposition. Let V and W be finite-dimensional over K.

• We have $V^*\otimes W^* \cong (V\otimes W)^*$ taking (f, g) to the map $V\otimes W\to K, (v\otimes w) \mapsto f(v)g(w).$
• Also $V^* \otimes W \cong \text{Hom}_K(V, W)$ taking (f, w) to the map $V\to W, v\mapsto f(v)w.$

Proof. For the first case, fix f ∈ V*, g ∈ W*. The map $V\times W \to K$ taking $(v,w)\mapsto f(v)g(w)$ is bilinear, so it induces a map $h:V\otimes W\to K$ taking $(v\otimes w)\mapsto f(v)g(w).$ But the assignment (f, g) → h gives rise to a map $V^* \times W^* \to (V\otimes W)^*$ which is bilinear, so it induces $\varphi:V^* \otimes W^* \to (V\otimes W)^*.$ Note that $f\otimes g$ corresponds to the map $h:V\otimes W\to K$ taking $v\otimes w \mapsto f(v)g(w).$ To show that this is an isomorphism, let $\{v_i\}$ and $\{w_j\}$ be bases of V and W respectively, with dual bases $\{f_i\}$ and $\{g_j\}$ of V* and W*. The map then takes $f_i \otimes g_j$ to the linear map $V\otimes W\to K$ which takes $v_k \otimes w_l$ to $f_i(v_k) g_j(w_l) = \delta_{ik}\delta_{jl}.$ But this corresponds to the dual basis of $\{v_i \otimes w_j\},$ so we see that the above map φ takes the basis $\{f_i \otimes g_j\}$ to a basis: the dual of $\{v_i\otimes w_j\}.$ The second case is left as an exercise. ♦

Note. Here’s one convenient way to visualize the above. Suppose elements of V consist of column vectors. Then V* is the space of row vectors, and evaluating V* × V → K corresponds to multiplying a row vector by a column vector, thus giving a scalar.
So V* ⊕ W* ≅ (V ⊕ W)* follows quite easily: indeed, the LHS concatenates two spaces of row vectors, while the RHS concatenates two spaces of column vectors and then turns the result into a space of row vectors. The tensor product is a little trickier: for V and W we take column vectors with entries $\alpha_1, \ldots, \alpha_n$ and $\beta_1, \ldots, \beta_m$ respectively. Then we form the column vector with mn entries $\alpha_i \beta_j.$ This lets us see why V*⊗W* ≅ (V⊗W)*: in both cases we get a row vector with mn entries. Finally, to obtain V* ⊗ W we take row vectors $\alpha_1, \ldots, \alpha_n$ for elements of V* and column vectors $\beta_1, \ldots, \beta_m$ for those of W, and these multiply to give us an m × n matrix, which represents a linear map V → W.

Question. Consider the map V* × V → K which takes (f, v) to f(v). This is bilinear, so it induces a linear map f: V*⊗V → K. On the other hand, V*⊗V is naturally isomorphic to End(V), the space of K-linear maps V → V. If we represent elements of End(V) as square matrices, what does f correspond to? [ Answer: the trace of the matrix. ]

## Tensor Algebra

Given a vector space V, let us consider n consecutive tensors: $V^{\otimes n} := \overbrace{V\otimes V\otimes \ldots \otimes V}^{n \text{ copies}}.$ and let T(V) be the direct sum $\oplus_{n=0}^\infty V^{\otimes n} = K \oplus V \oplus (V\otimes V) \oplus (V\otimes V\otimes V)\ldots.$ This gives an associative algebra over K by extending the bilinear map $V^{\otimes m} \times V^{\otimes n} \to V^{\otimes (m+n)}, \quad (v_1, v_2) \mapsto v_1 \otimes v_2.$ to the entire space T(V) × T(V) → T(V). Note that it is not commutative in general. For example, suppose V has a basis {x, y, z}. Then

• $V^{\otimes 2}$ has basis $\{x^2, xy, xz, yx, y^2, yz, zx, zy, z^2\}$, where we have shortened the notation $x^2 := x\otimes x,$ $xy := x\otimes y,$ etc.
• $V^{\otimes 3}$ has basis $\{x^3, x^2 y, \ldots\}$, with 27 elements.
• Multiplying $V\times V^{\otimes 2} \to V^{\otimes 3}$ gives $(x+z)(xy + zx) = x^2 y + xzx + zxy + z^2 x.$

The algebra T(V), called the tensor algebra of V, satisfies the following universal property.

Theorem. The natural map ψ : V → T(V) is a linear map such that:

• for any associative K-algebra A, and K-linear map φ: V → A, there is a unique K-algebra homomorphism f: T(V) → A such that φ = fψ.

Thus, $\text{Hom}_{K-\text{lin}}(V, A) \cong \text{Hom}_{K-\text{alg}}(T(V), A).$

However, often we would like multiplication to be commutative (e.g. when dealing with polynomials), and we’ll use the symmetric tensor algebra instead. Or we would like multiplication to be anti-commutative, i.e. xy = –yx (e.g. when dealing with differential forms), and we’ll use the exterior tensor algebra instead. We will say more about these when the need arises.

This entry was posted in Notes. Bookmark the permalink.
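Identifying v ⊗ w with the Kronecker product of coordinate vectors (one conventional choice of basis ordering, an assumption of this sketch), the pure-tensor expansion from the opening example, bilinearity, and the trace answer to the closing Question can all be checked numerically:

```python
import numpy as np

# Coordinates of 2*v1 + 3*v2 in V and of w1 - 2*w3 in W, in the chosen bases.
v = np.array([2.0, 3.0])
w = np.array([1.0, 0.0, -2.0])

# Under the Kronecker-product identification (ordering v1w1, v1w2, ..., v2w3),
# the expansion (2v1+3v2)(w1-2w3) = 2v1w1 + 3v2w1 - 4v1w3 - 6v2w3 appears:
vw = np.kron(v, w)

# Bilinearity of the tensor product, checked numerically:
v2 = np.array([1.0, -1.0])
lhs = np.kron(v + v2, w)
rhs = np.kron(v, w) + np.kron(v2, w)

# The pairing V* ⊗ V -> K of the Question: identifying V* ⊗ V with
# square matrices, the evaluation f ⊗ u -> f(u) becomes the trace.
f = np.array([1.0, 2.0])   # a functional in V* (row vector)
u = np.array([3.0, 5.0])   # a vector in V (column vector)
M = np.outer(u, f)         # the matrix representing f ⊗ u in End(V)
pairing = f @ u            # f(u), which equals trace(M)
```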
https://content.iospress.com/articles/international-journal-of-applied-electromagnetics-and-mechanics/jae2304
# Action of torsion and axial moment in a new nonlinear cantilever-type vibration energy harvester

#### Abstract

The effects of composite motion, involving the action of torsion and axial moment on the vibrating element, on the characteristics of a new cantilever-type nonlinear electromagnetic vibration energy harvester are analyzed. Systems with softening and hardening action of the magnetic force are considered. The impact of the phenomenon on the electromagnetic quantities of the system is investigated using 2d analytical and 3d numerical models. The simulated and measured frequency-response characteristics show noticeable differences when the phenomenon is taken into account.

## 1.Introduction

Nonlinear vibration energy harvesters attract attention due to their potential to widen the frequency bandwidth under varying conditions [1, 2, 3, 4]. In electromagnetic harvesters the nonlinearity is introduced by the action of magnets on a core [3] or on other magnets [5, 6]. To overcome some limitations of these configurations, a microgenerator with a coreless magnetic circuit, depicted in Fig. 1a, was proposed by the authors [6]. Generally, in this type of harvester it is sufficient to represent the kinematics by one degree of freedom [1, 2, 3, 5]. This is not the case for the system considered here, in which the angle Θ between the normal to the beam mid-surface and the y axis, which varies with the relative displacement ζ (see Fig. 1b and c), affects the electromagnetic quantities of the microgenerator. The complex motion of the “grey magnets”, illustrated in Fig. 1d, includes torsion, which exposes them to the action of the axial magnetic moment mz. The beam theory [7] implies Θ ∝ ζ, though the moment mz brings additional nonlinear effects. The considered system is capable of switching the magnetic stiffness between hardening (Fig.
1c) action by reversing the magnetization sense of the stationary magnets. Two separate structures are distinguished in this way, for which, depending on the magnetization sense, the axial magnetic moment attempts to straighten or to buckle the beam. Clearly, if ζ = 0, then mz = 0; however, while the magnetic system in Fig. 1b is locally stable, that in Fig. 1c is locally unstable around the origin. The above suggests that the effects of complex motion on the characteristics of these two systems will be different.

##### Figure 1.

Harvesters considered: a) CAD drawing of the entire configuration, b)–c) illustration of complex motion for the systems with hardening and softening action of the magnetic force, respectively; circled markers denote the magnetization vector senses going out of (•) and into (×) the xy plane, d) photograph of the laboratory system operating at resonance – dashed lines bound the blurred region of the image illustrating the complex motion of the vibrating element.

## 2.Mathematical models and results

### 2.1Electromagnetic quantities

The harvesters in Fig. 1b and c are considered as two structures, which will be referred to as HS (hardening, stable around ζ = 0) and SU (softening, unstable around ζ = 0), respectively. Starting from basic considerations, if the complex motion is ignored, the electromagnetic quantities can be calculated analytically by solving the Ampère equation

##### (1)

div grad A = μ0 Jμ

on the yz plane in Fig. 2, with A being the magnetic vector potential, μ0 the vacuum permeability, and Jμ the magnetization current. The latter was modeled using the current shell Jμ = ±0.5 Hc/φ prescribed at the edges of the permanent magnets parallel to the z axis, where Hc is the coercivity field and φ a small positive variation of the ordinate.

##### Figure 2.

Model for calculations of the magnetic field distribution; arrows indicate magnetization senses (SU harvester).

The expressions derived from the solution of Eq.
(1) that describe the magnetic force and the flux linkage are, respectively

##### (2)

f_ζ(ζ) = Σ_{n=1}^∞ L_x (U_n V_n − X_n Y_n) sin(β_n ζ) / (2 μ_0 β_n)

##### (3)

λ(ζ) = (n_t π l / S_c) Σ_{n=1}^∞ Σ_{m=1}^∞ C_v sin(γ_m k) sin(β_n ζ) (cos(β_n l) − cos(β_n g))

where

##### (4)

[U_n, X_n]ᵀ = Σ_{m=1}^∞ C_s [β_n cos(γ_m x_g), γ_m sin(γ_m x_g)]ᵀ

##### (5)

[V_n, Y_n]ᵀ = Σ_{m=1}^∞ C_v [γ_m cos(γ_m x_g), β_n sin(γ_m x_g)]ᵀ

##### (6)

γ_m = 0.5(2m−1)π p⁻¹,  β_n = 0.5(2n−1)π q⁻¹

##### (7)

C_s = 4 μ_0 H_c sin(γ_m b) cos(β_n ϕ) (sin(β_n a) − sin(β_n h)) / (ϕ γ_m β_n)

##### (8)

C_v = 4 μ_0 H_c cos(β_n ϕ) (sin(γ_m c) − sin(γ_m f)) (sin(γ_m e) − sin(γ_m d)) / (ϕ γ_m β_n)

x_g is the abscissa of the center of an air-gap, L_x the length of the system along the x axis, n_t the number of turns, and S_c the cross-section area of the coil shown in Fig. 2, respectively. The impact of complex motion on the electromagnetic quantities was taken into account using a 3d finite element model for magnetic field calculations [8]. The specifications of the modeled systems are given in Table 1.

##### Table 1

Specifications of parameters common for HS and SU harvesters used in calculations of electromagnetic quantities

Geometric dimensions: 16.5, 3, 8.5, 10.4, 13.4, 12, 0.5, 25.5, 4, 14 mm; 70 mm, 70 mm; 18 mm
Coercivity Hc: −900 kA/m
Number of turns nt: 1000

##### Figure 3.

Finite element model showing a 3d mesh for one half of the system considering composite motion.

In the computations the motion was approximated by assuming that the barycenter of the “grey magnets” moves along a circle with radius equal to the beam length, while they rotate around the barycenter by the angle Θ. The variations presented in Fig. 4 are for the SU system; for the HS system the variations in Fig. 4b and c are multiplied by −1. Figure 5 compares the variations of the magnetic force and flux linkage with and without the complex motion accounted for. As one can observe, the variations of the quantities due to the angle Θ are significant. It is also worth noticing that the analytical formulas provide close predictions, even though the system has a relatively short length along the x axis.
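Equations (2)–(8) are truncated Fourier-type series with the generic shape f(ζ) = Σ_n c_n sin(β_n ζ). The sketch below only illustrates how such a truncated series is evaluated; the coefficients and the dimension q are made up and are not the paper's parameters.

```python
import numpy as np

def series_force(zeta, coeffs, q=0.02):
    """Evaluate a truncated sine series sum_n c_n * sin(beta_n * zeta)
    with beta_n = 0.5*(2n-1)*pi/q, mirroring the structure of Eq. (2).
    `coeffs` stand in for Lx*(Un*Vn - Xn*Yn)/(2*mu0*beta_n); the values
    used here are illustrative only."""
    n = np.arange(1, len(coeffs) + 1)
    beta = 0.5 * (2 * n - 1) * np.pi / q
    return float(np.sum(np.asarray(coeffs) * np.sin(beta * zeta)))

coeffs = [1.0, -0.3, 0.05]        # made-up, decaying harmonics
f0 = series_force(0.0, coeffs)    # every sine term vanishes at zeta = 0
f1 = series_force(0.002, coeffs)  # value at a small deflection
```

Since every term contains sin(β_n ζ), the series vanishes identically at ζ = 0, consistent with the paper's remark that the electromagnetic loading disappears at zero deflection.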
### 2.2Frequency-response characteristics

Assessment of the impact of complex motion on the frequency-response characteristics was carried out via solution of equations derived from the Timoshenko beam theory [7], considering the electromagnetic coupling

##### (10)

ρI d²Θ/dt² − EI ∂²Θ/∂x² + (5GA/6)(∂ζ/∂x + Θ) = (m_z(ζ, Θ) + (∂λ/∂Θ) i) δ

and the electric circuit

##### (11)

L_c di/dt + (R_0 + R_c) i = −(∂λ/∂ζ) dζ/dt − (∂λ/∂Θ) dΘ/dt

where ρ, A, E, G, D, I are the beam density, the area of cross-section, the Young and shear moduli, the damping coefficient, and the moment of inertia; f_ext is the external force and −m_a g the gravity force on the moving-magnet mass m_a; δ = 1 at the beam free end, and δ = 0 elsewhere; i, L_c, R_c, R_0 are the current, the coil inductance, and the coil and load resistances, respectively. The parameters used in the simulations are given in Table 2.

##### Table 2

Specifications of parameters common for HS and SU harvesters for calculations of frequency characteristics

Cantilever beam material: glass fiber/epoxy composite
ρ: 2730 kg/m³
E, G: 7250 GPa, 2971 GPa
D: 0.0036 Ns/m
R_c, L_c: 23 Ω, 0.0042 H
m_a: 0.024 kg

##### Figure 4.

Results of 3d finite element modeling (SU harvester): a) magnetic force f_ζ, b) axial moment m_z, c) magnetic flux linkage.

##### Figure 5.

Comparison of electromagnetic quantities calculated using different models (SU harvester).

The characteristics were determined via time-domain solution of Eqs (9)–(11) using a chirp signal with a constant magnitude and a linear frequency sweep for f_ext. Using this approach, the beam springs were designed taking the strength and global stability into account, but using the quantities described by Eqs (2) and (3), which clearly ignore the complex motion. The stability was assessed by analysis of the spectra of the responses and estimation of the Lyapunov exponents [5] for their crucial parts. As a result, beam springs with dimensions 65 × 1.1 × 16 mm and 60 × 2.2 × 16.5 mm were designed for the HS and SU systems, respectively.
Their corresponding natural frequencies are $f^{HS}_{nat}$ = 12.3 Hz and $f^{SU}_{nat}$ = 32.3 Hz.

##### Figure 6. Diagram of the laboratory test-stand for measurement of frequency characteristics.

##### Figure 7. Frequency characteristics of rms voltage across the loading resistor for different loading conditions: a) HS harvester for an rms value of force $f_{ext}$ equal to 0.12 N, b) SU harvester for an rms value of force $f_{ext}$ equal to 0.34 N.

In the next step the characteristics were determined using the variations in Fig. 4 in Eqs (9)–(11). To validate the results, the models were verified against measurements on the laboratory test-stand (see Fig. 6). The simulated and measured characteristics for various loading conditions are presented in Fig. 7.

## 3. Discussion of results

The most important observations for the two systems considered are as follows. For the HS harvester in Fig. 7a, neglecting the complex motion causes overestimation of the generated voltage for all loading conditions. With the complex motion accounted for, the results of simulation closely match the measurements, except for the no-load envelope, which exhibits more complex behavior in the measurements between 15 Hz and 20 Hz. This, however, can be explained by the action of large harmonics in the experimental force waveform due to the impact of the inertia force on the electromagnetic shaker. The results of additional simulations carried out for physically infeasible magnitudes of external force approaching 0.5 N, which are not illustrated here, show that the system with complex motion operating at no-load falls into chaotic operation around 18 Hz, whilst its simple-motion counterpart remains stable. For the SU harvester in Fig. 7b, neglecting the complex motion also causes overestimation of the simulated voltage.
Simulations for larger and even physically infeasible magnitudes of external force, which are not illustrated here, exposed a jump at a frequency some 2.2 Hz lower for the system with complex motion operating at no-load, although both systems remained stable.

## 4. Conclusion

The investigation shows the need to account for the complex motion in more accurate design of the considered systems. This regards especially the HS-type system, which exhibits restricted stability and higher sensitivity to variations in operating conditions than the SU-type one. Understanding these features is crucial for the development of a new type of wideband harvester integrating the HS and SU harvesters in a single system, which will be presented in our future work.

## Acknowledgments

The work was carried out under project 2016/23/N/ST7/03808 of The National Science Centre, Poland.

## References

[1] S.P. Beeby, R.N. Torah, M.J. Tudor, P. Glynne-Jones, T. O'Donnell, C.R. Saha and S. Roy, A micro electromagnetic generator for vibration energy harvesting, J. Micromech. Microeng. 17(7) (2007), 1257–1265.

[2] P. Podder, A. Amann and S. Roy, Combined Effect of Bistability and Mechanical Impact on the Performance of a Nonlinear Electromagnetic Vibration Energy Harvester, IEEE/ASME Trans. Mechatronics 21(2) (2016), 727–739.

[3] T. Sato and H. Igarashi, A chaotic vibration energy harvester using magnetic material, Smart Mater. Struct. 24(2) (2015), 25033.

[4] S.M. Chen, J.J. Zhou and J.H. Hu, Experimental study and finite element analysis for piezoelectric impact energy harvesting using a bent metal beam, Int. J. Appl. Electromagn. Mech. 46(4) (2014), 895–904.

[5] E. Sardini and M. Serpelloni, An efficient electromagnetic power harvesting device for low-frequency applications, Sens. Actuators A: Phys. 172(2) (2011), 475–482.

[6] M. Jagiela and M.
Kulik, Wideband electromagnetic converter of vibration energy into electric energy, Patent Application No. P420998, Patent Office of The Republic of Poland, March 2017.

[7] A.J.M. Ferreira, MATLAB Codes for Finite Element Analysis (Solid Mechanics and Its Applications, Vol. 157), Springer, Netherlands, 2009.

[8] ONELAB, onelab.info, accessed January 2018.
https://www.semanticscholar.org/paper/Modular-type-transformations-and-integrals-the-Dixit/a2ddcd70351a6287f00ad6223fd7fba9dcf013e8
Corpus ID: 182557443

# Modular-type transformations and integrals involving the Riemann Ξ-function

@inproceedings{Dixit2018ModulartypeTA, title={Modular-type transformations and integrals involving the Riemann Ξ-function}, author={A. Dixit}, year={2018} }

A survey of various developments in the area of modular-type transformations (along with their generalizations of different types) and integrals involving the Riemann Ξ-function associated to them is given. We discuss their applications in Analytic Number Theory, Special Functions and Asymptotic Analysis.

3 Citations

On Hurwitz zeta function and Lommel functions • Mathematics • 2019 We obtain a new proof of Hurwitz's formula for the Hurwitz zeta function $\zeta(s, a)$ beginning with Hermite's formula. The aim is to reveal a nice connection between $\zeta(s, a)$ and a special …

Ramanujan's Beautiful Integrals • Mathematics • 2021 Throughout his entire mathematical life, Ramanujan loved to evaluate definite integrals. One can find them in his problems submitted to the Journal of the Indian Mathematical Society, notebooks, …

Superimposing theta structure on a generalized modular relation • Mathematics • 2020 A generalized modular relation of the form $F(z, w, \alpha)=F(z, iw,\beta)$, where $\alpha\beta=1$ and $i=\sqrt{-1}$, is obtained in the course of evaluating an integral involving the Riemann …

#### References SHOWING 1-10 OF 29 REFERENCES

Transformation formulas associated with integrals involving the Riemann Ξ-function Using residue calculus and the theory of Mellin transforms, we evaluate integrals of a certain type involving the Riemann Ξ-function, which give transformation formulas of the form F(z, α) = F(z, β), …

Series transformations and integrals involving the Riemann Ξ-function The transformation formulas of Ramanujan, Hardy, Koshliakov and Ferrar are unified, in the sense that all these formulas come from the same source, namely, a general formula involving an integral of …

Zeros of
combinations of the Riemann ξ-function on bounded vertical shifts • Mathematics • 2015 In this paper we consider a series of bounded vertical shifts of the Riemann ξ-function. Interestingly, although such functions have essential singularities, infinitely many of their zeros lie on the …

Self-reciprocal functions, powers of the Riemann zeta function and modular-type transformations • Mathematics, Physics • 2013 Abstract Integrals containing the first power of the Riemann Ξ-function as part of the integrand that lead to modular-type transformations have been previously studied by Ramanujan, Hardy, …

A First Course in Modular Forms • Mathematics • 2008 Modular Forms, Elliptic Curves, and Modular Curves.- Modular Curves as Riemann Surfaces.- Dimension Formulas.- Eisenstein Series.- Hecke Operators.- Jacobians and Abelian Varieties.- Modular Curves …

A transformation formula involving the gamma and riemann zeta functions in Ramanujan's lost notebook • Mathematics • 2010 Two proofs are given for a series transformation formula involving the logarithmic derivative of the Gamma function found in Ramanujan's lost notebook. The transformation formula is connected with a …

Riesz-type criteria and theta transformation analogues • Mathematics • 2016 Abstract We give character analogues of a generalization of a result due to Ramanujan, Hardy and Littlewood, and provide Riesz-type criteria for Riemann Hypotheses for the Riemann zeta function and …

Koshliakov kernel and identities involving the Riemann zeta function • Mathematics • 2015 Some integral identities involving the Riemann zeta function and functions reciprocal in a kernel involving the Bessel functions $J_{z}(x), Y_{z}(x)$ and $K_{z}(x)$ are studied. Interesting special …

Analogues of the general theta transformation formula • A.
Dixit • Mathematics • Proceedings of the Royal Society of Edinburgh: Section A Mathematics • 2013 A new class of integrals involving the confluent hypergeometric function 1F1(a;c;z) and the Riemann Ξ-function is considered. It generalizes a class containing some integrals of Ramanujan, Hardy and …

Analogues of a transformation formula of Ramanujan We derive two new analogues of a transformation formula of Ramanujan involving the Gamma and Riemann zeta functions present in the Lost Notebook. Both involve infinite series consisting of Hurwitz …
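A concrete classical instance of a modular-type relation F(α) = F(β) with αβ = 1, added here for context (a standard fact, not taken from the abstracts above), is the Jacobi theta transformation:

```latex
% Jacobi theta transformation as an example of F(\alpha)=F(\beta), \alpha\beta=1:
\[
  \sqrt{\alpha}\sum_{n=-\infty}^{\infty} e^{-\pi \alpha^{2} n^{2}}
  \;=\;
  \sqrt{\beta}\sum_{n=-\infty}^{\infty} e^{-\pi \beta^{2} n^{2}},
  \qquad \alpha\beta = 1,\ \alpha > 0 .
\]
```

This follows from the modular relation θ(1/t) = √t θ(t) for θ(t) = Σ e^{-πn²t}, with t = α².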
http://cms.math.ca/cjm/kw/majorant
Search results

Search: All articles in the CJM digital archive with keyword majorant

Results 1 - 4 of 4

1. CJM 2009 (vol 61 pp. 503) Baranov, Anton; Woracek, Harald

Subspaces of de Branges Spaces Generated by Majorants

For a given de Branges space $\mathcal H(E)$ we investigate de Branges subspaces defined in terms of majorants on the real axis. If $\omega$ is a nonnegative function on $\mathbb R$, we consider the subspace $\mathcal R_\omega(E)=\operatorname{clos}_{\mathcal H(E)} \big\{F\in\mathcal H(E): \text{ there exists } C>0: |E^{-1} F|\leq C\omega \text{ on }{\mathbb R}\big\}.$ We show that $\mathcal R_\omega(E)$ is a de Branges subspace and describe all subspaces of this form. Moreover, we give a criterion for the existence of positive minimal majorants.

Keywords: de Branges subspace, majorant, Beurling-Malliavin Theorem. Categories: 46E20, 30D15, 46E22

2. CJM 2003 (vol 55 pp. 1231)

Admissible Majorants for Model Subspaces of $H^2$, Part I: Slow Winding of the Generating Inner Function

A model subspace $K_\Theta$ of the Hardy space $H^2 = H^2 (\mathbb{C}_+)$ for the upper half plane $\mathbb{C}_+$ is $H^2(\mathbb{C}_+) \ominus \Theta H^2(\mathbb{C}_+)$, where $\Theta$ is an inner function in $\mathbb{C}_+$. A function $\omega \colon \mathbb{R}\mapsto[0,\infty)$ is called *an admissible majorant* for $K_\Theta$ if there exists an $f \in K_\Theta$, $f \not\equiv 0$, with $|f(x)|\leq \omega(x)$ almost everywhere on $\mathbb{R}$. For some (mainly meromorphic) $\Theta$'s some parts of $\operatorname{Adm}\Theta$ (the set of all admissible majorants for $K_\Theta$) are explicitly described. These descriptions depend on the rate of growth of $\arg \Theta$ along $\mathbb{R}$. This paper is about slowly growing arguments (slower than $x$). Our results exhibit the dependence of $\operatorname{Adm} B$ on the geometry of the zeros of the Blaschke product $B$. A complete description of $\operatorname{Adm} B$ is obtained for $B$'s with purely imaginary ("vertical") zeros.
We show that in this case a unique minimal admissible majorant exists.

Keywords: Hardy space, inner function, shift operator, model subspace, Hilbert transform, admissible majorant. Categories: 30D55, 47A15

Admissible Majorants for Model Subspaces of $H^2$, Part II: Fast Winding of the Generating Inner Function

This paper is a continuation of [HM02I]. We consider the model subspaces $K_\Theta=H^2\ominus\Theta H^2$ of the Hardy space $H^2$ generated by an inner function $\Theta$ in the upper half plane. Our main object is the class of admissible majorants for $K_\Theta$, denoted by $\operatorname{Adm} \Theta$ and consisting of all functions $\omega$ defined on $\mathbb{R}$ such that there exists an $f \ne 0$, $f \in K_\Theta$ satisfying $|f(x)|\leq\omega(x)$ almost everywhere on $\mathbb{R}$. Firstly, using some simple Hilbert transform techniques, we obtain a general multiplier theorem applicable to any $K_\Theta$ generated by a meromorphic inner function. In contrast with [HM02I], we consider generating functions $\Theta$ such that the unit vector $\Theta(x)$ winds up fast as $x$ grows from $-\infty$ to $\infty$. In particular, we consider $\Theta=B$ where $B$ is a Blaschke product with "horizontal" zeros, i.e., almost uniformly distributed in a strip parallel to and separated from $\mathbb{R}$. It is shown, among other things, that for any such $B$, any even $\omega$ decreasing on $(0,\infty)$ with a finite logarithmic integral is in $\operatorname{Adm} B$ (unlike the "vertical" case treated in [HM02I]), thus generalizing (with a new proof) a classical result related to $\operatorname{Adm}\exp(i\sigma z)$, $\sigma>0$. Some oscillating $\omega$'s in $\operatorname{Adm} B$ are also described. Our theme is related to the Beurling-Malliavin multiplier theorem devoted to $\operatorname{Adm}\exp(i\sigma z)$, $\sigma>0$, and to de Branges' space $\mathcal{H}(E)$.
Keywords: Hardy space, inner function, shift operator, model subspace, Hilbert transform, admissible majorant. Categories: 30D55, 47A15

Inequalities for rational functions with prescribed poles

This paper considers the rational system ${\cal P}_n (a_1,a_2,\ldots,a_n):= \bigl\{ {P(x) \over \prod_{k=1}^n (x-a_k)}, P\in {\cal P}_n\bigr\}$ with nonreal elements in $\{a_k\}_{k=1}^{n}\subset\mathbb{C}\setminus [-1,1]$ paired by complex conjugation. It gives a sharp (to a constant) Markov-type inequality for real rational functions in ${\cal P}_n (a_1,a_2,\ldots,a_n)$. The corresponding Markov-type inequality for higher derivatives is established, as well as Nikolskii-type inequalities. Some sharp Markov- and Bernstein-type inequalities with curved majorants for rational functions in ${\cal P}_n(a_1,a_2,\ldots,a_n)$ are obtained, which generalize some results for the classical polynomials. A sharp Schur-type inequality is also proved and plays a key role in the proofs of our main results.

Keywords: Markov-type inequality, Bernstein-type inequality, Nikolskii-type inequality, Schur-type inequality, rational functions with prescribed poles, curved majorants, Chebyshev polynomials. Categories: 41A17, 26D07, 26C15
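For context (a standard fact, not part of the abstracts above), the classical Markov inequality that these Markov-type results for rational functions generalize reads:

```latex
% Classical Markov inequality for algebraic polynomials of degree n:
\[
  \max_{x \in [-1,1]} |P'(x)| \;\le\; n^{2} \max_{x \in [-1,1]} |P(x)|,
  \qquad P \in \mathcal{P}_n ,
\]
% with equality attained by the Chebyshev polynomial T_n.
```

The rational-function versions replace the factor n² by a quantity depending on the prescribed poles a_1, …, a_n.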
http://retrievo.pt/advanced_search?link=1&field1=creators&searchTerms1='Ensslin%2C+K.'&constraint1=MATCH_EXACT_PHRASE
# Search results

136 records were found.

## Self-consistent simulation of quantum wires defined by local oxidation of Ga[Al]As heterostructures

Comment: 5 pages, 6 figures; revised figures, clarified text

## Density dependence of microwave induced magneto-resistance oscillations in a two-dimensional electron gas

Comment: 5 pages, 4 figures

## Spin state mixing in InAs double quantum dots

Comment: 5 pages, 4 figures

## Pauli spin-blockade in an InAs nanowire double quantum dot

Comment: EP2DS-17 Proceedings, 3 Pages, 3 Figures

## Raman imaging and electronic properties of graphene

Graphite is a well-studied material with known electronic and optical properties. Graphene, on the other hand, which is just one layer of carbon atoms arranged in a hexagonal lattice, has been studied theoretically for quite some time but has only recently become accessible for experiments. Here we demonstrate how single- and multi-layer graphene can be unambiguously identified using Raman scattering. Furthermore, we use a scanning Raman set-up to image few-layer graphene flakes of various heights. In transport experiments we measure weak localization and conductance fluctuations in a graphene flake of about 7 monolayer thickness. We obtain a phase-coherence length of about 2 $\mu$m at a temperature of 2 K. Furthermore we investigate the conductivity through single-layer graphene flakes and the tuning of electron and hole densities v...

## Measuring current by counting electrons in a nanowire quantum dot

We measure current by counting single electrons tunneling through an InAs nanowire quantum dot. The charge detector is realized by fabricating a quantum point contact in close vicinity to the nanowire. The results based on electron counting compare well to a direct measurement of the quantum dot current, when taking the finite bandwidth of the detector into account.
The ability to detect single electrons also opens up possibilities for manipulating and detecting individual spins in nanowire quantum dots.

## Spin-orbit interaction and spin relaxation in a two-dimensional electron gas

Using time-resolved Faraday rotation, the drift-induced spin-orbit field of a two-dimensional electron gas in an InGaAs quantum well is measured. Including measurements of the electron mobility, the Dresselhaus and Rashba coefficients are determined as a function of temperature between 10 and 80 K. By comparing the relative size of these terms with a measured in-plane anisotropy of the spin dephasing rate, the D'yakonov-Perel' contribution to spin dephasing is estimated. The measured dephasing rate is significantly larger than this, which can only partially be explained by an inhomogeneous g-factor.

## Analytic Model for the Energy Spectrum of a Graphene Quantum Dot in a Perpendicular Magnetic Field

Comment: 4 pages, 3 figures

## Graphene quantum dots in perpendicular magnetic fields

Comment: 5 pages, 4 figures, submitted to pss-b

## Spin States in Graphene Quantum Dots

We investigate ground and excited state transport through small (d = 70 nm) graphene quantum dots. The successive spin filling of orbital states is detected by measuring the ground state energy as a function of a magnetic field. For a magnetic field in the plane of the quantum dot the Zeeman splitting of spin states is measured. The results are compatible with a g-factor of 2, and we detect a spin-filling sequence for a series of states which is reasonable given the strength of the exchange interaction effects expected for graphene.
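As a quick numerical aside (not from the abstracts themselves): for the reported g-factor of 2, the Zeeman splitting E_Z = g μ_B B can be estimated directly; the field value of 1 T below is an assumption for illustration.

```python
import math

# Illustrative Zeeman-splitting estimate for g = 2, as reported for the
# graphene quantum dot above; the field value B is an assumption.
mu_B = 9.2740100783e-24   # Bohr magneton [J/T]
e = 1.602176634e-19       # elementary charge [C], for J -> eV conversion
g = 2.0
B = 1.0                   # in-plane magnetic field [T] (assumed)

E_Z = g * mu_B * B            # Zeeman splitting in joules
E_Z_ueV = E_Z / e * 1e6       # the same splitting in micro-electronvolts
print(f"Zeeman splitting at {B} T: {E_Z_ueV:.1f} ueV")
```

At 1 T this gives roughly 0.1 meV, far below typical orbital level spacings in such dots, which is why spin filling is read off from small magnetic-field shifts of the ground-state energy.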
http://www.vahala.caltech.edu/Research/eOFD
# electro-Optical Frequency Division (eOFD)

Electro-Optical Frequency Division (eOFD) is a new method for implementing Optical Frequency Division (OFD) that has similarities to the way electrical frequency division is applied to stabilize high-frequency oscillators. However, the method reverses the conventional architecture for stabilization of an electrical VCO by electro-optically linking it to a higher-frequency, optically derived reference, as opposed to a lower-frequency reference such as quartz. This allows the method to leverage the high stability of optically derived references to stabilize a common electrical VCO.

Figure 1: Diagrams of the eOFD process [2].

As background to this section, it is helpful to read the section describing Optical Frequency Division (OFD). The idea of two-point locking [1] described there has inspired a third way to implement OFD that our group has recently demonstrated [2]. The idea is described in Figure 1 and begins with two laser lines having a very stable frequency separation. In conventional two-point locking, these lasers would be used to stabilize two comb teeth in an existing frequency comb. In what we call electro-optical frequency division (eOFD), the frequency comb is generated from these lasers by phase modulation at a frequency determined by a voltage-controlled, electrical oscillator (VCO). Upon phase modulation, each laser line generates a set of sidebands with a separation in frequency equal to the VCO frequency. The basic layout is shown in panel A of Figure 1, where the dual-frequency optical reference is shown at the top, followed below by the phase modulation (optical divider) and finally at the bottom by the electrical VCO. A spectral representation of the process is given in panel C of Figure 1, with the two laser lines at ν1 and ν2.
For a large enough number of sidebands there will be two sidebands near the midpoint of the frequency span between the two laser frequencies, with a separation in frequency that can be easily measured using a photodetector. This detected electrical signal carries the phase information of the VCO (multiplied by the number of sidebands, N = N1 + N2, between the two lasers) and can be used to provide feedback control to the VCO. When the feedback loop is closed, the net result is that the VCO acquires the relative frequency stability of the lasers divided by N. Since the relative stability of the two lasers can be very good and N can be large, the VCO stability can be greatly improved. The actual implementation of this method is interesting in an architectural sense, as it resembles a conventional electrical frequency synthesizer (see the section on Microwave Photonics) except with the locations in frequency space of the VCO and the reference oscillator reversed. The diagram in panel B of Figure 1 shows the idea. In a conventional electrical synthesizer, the VCO is divided down in frequency for comparison to a low-frequency quartz oscillator. In eOFD, on the other hand, an all-optical reference (the two lasers) is divided down to stabilize the VCO. The advantage is that, as noted in the section on OFD, optical sources can be orders of magnitude more stable than quartz, so transferring this stability by eOFD in this new architecture has performance advantages over conventional electrical frequency division. Also, in comparison to conventional OFD, eOFD relies on the relative stability of the optical reference as opposed to absolute stability. For reasons discussed in the section on Microwave Photonics, relative stability is often more robust with respect to environmental disturbances. The data in Figure 2 show how the VCO oscillator performance is improved through the eOFD process.
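The stability transfer described above (VCO phase fluctuations suppressed by the division factor N) can be quantified with a short calculation; this sketch uses the standard rule that frequency division by N reduces phase noise by 20·log10(N) dB, i.e., quadratically in N on a linear power scale. The division factors are those quoted for the two demonstrations.

```python
import math

# Phase-noise improvement from frequency division: dividing by N lowers
# phase noise by 20*log10(N) dB (quadratic in N on a linear power scale).
# The separations and division factors follow the text of this page.

def division_gain_db(n):
    """Phase-noise reduction in dB for a division factor n."""
    return 20.0 * math.log10(n)

for sep_ghz, n in [(327.0, 30), (1610.0, 148)]:
    print(f"{sep_ghz:7.0f} GHz separation, N = {n:3d}: "
          f"{division_gain_db(n):.1f} dB reduction")
```

Division by 30 gives close to 30 dB of phase-noise reduction, and division by 148 gives over 40 dB, which is why the divided VCO can greatly outperform its free-running phase noise.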
The dashed black curve is the phase noise of an Agilent high-performance microwave VCO. The red curve, on the other hand, is the phase noise of the optical reference (i.e., the difference frequency of the two laser sources). These sources are tuned to two different frequency separations and then divided down in frequency to control the VCO. The blue curve shows the case of optical division by 30x from an initial 327 GHz frequency separation, while the green curve shows the VCO performance with division by an even larger factor of 148x from an initial 1.61 THz frequency separation. The improvement is quadratic in the division factor, so there is a very large reduction in the phase noise of the already high-performance VCO. We are currently working on even higher-performance implementations of this idea.

Figure 2: Demonstration of eOFD [2].

[1] W. C. Swann, E. Baumann, F. R. Giorgetta, N. R. Newbury, "Microwave generation with low residual phase noise from a femtosecond fiber laser with an intracavity electro-optic modulator," Opt. Express 19, 24387–24395 (2011)

[2] Jiang Li, Xu Yi, Hansuek Lee, Scott Diddams, Kerry Vahala, "Electro-Optical Frequency Division and Stable Microwave Synthesis," Science 345, 309-313 (2014)
https://www.bartleby.com/solution-answer/chapter-5-problem-67e-chemistry-an-atoms-first-approach-2nd-edition/9781305079243/what-number-of-atoms-of-nitrogen-are-present-in-500-g-of-each-of-the-following-a-glycine-c2h5o2n/50ecaf25-a826-11e8-9bb5-0ece094302b6
What number of atoms of nitrogen are present in 5.00 g of each of the following? a. glycine, C2H5O2N b. magnesium nitride c. calcium nitrate d. dinitrogen tetroxide

Chemistry: An Atoms First Approach, 2nd Edition. Steven S. Zumdahl + 1 other. Publisher: Cengage Learning. ISBN: 9781305079243

Chapter 5, Problem 67E

(a) Interpretation Introduction

Interpretation: The mass of each compound is given. By using the mass, the number of nitrogen (N) atoms is to be calculated.

Concept introduction: The atomic mass is the sum of the number of protons and number of neutrons. The molar mass of a substance is the mass in grams of one mole of that compound; it can be calculated by adding the atomic masses of the individual atoms present in it. The amount of substance containing 12 g of pure carbon-12 is called a mole. One mole always contains 6.022×10^23 particles; this number is called Avogadro's number.

To determine: The number of nitrogen (N) atoms in 5.00 g of glycine (C2H5O2N).

Explanation of Solution

Given: The mass of glycine (C2H5O2N) is 5.00 g.

The molar mass of glycine (C2H5O2N) is (2×12.01 + 5×1.008 + 2×15.999 + 14.0) g/mol = 75.058 g/mol.

Formula: Moles of C2H5O2N = (Mass of C2H5O2N)/(Molar mass of C2H5O2N)

Substituting the mass and molar mass of C2H5O2N: Moles of C2H5O2N = 5.00 g / 75.058 g/mol = 0.0666 mol

(b) Interpretation Introduction

Interpretation: The mass of each compound is given. By using the mass, the number of nitrogen (N) atoms is to be calculated.
Concept introduction: The atomic mass is the sum of the number of protons and number of neutrons. The molar mass of a substance is the mass in grams of one mole of that compound; it can be calculated by adding the atomic masses of the individual atoms present in it. The amount of substance containing 12 g of pure carbon-12 is called a mole. One mole always contains 6.022×10^23 particles (Avogadro's number).

To determine: The number of nitrogen (N) atoms in 5.00 g of magnesium nitride.

(c) Interpretation Introduction

Interpretation: The mass of each compound is given. By using the mass, the number of nitrogen (N) atoms is to be calculated.

Concept introduction: The atomic mass is the sum of the number of protons and number of neutrons. The molar mass of a substance is the mass in grams of one mole of that compound; it can be calculated by adding the atomic masses of the individual atoms present in it. The amount of substance containing 12 g of pure carbon-12 is called a mole. One mole always contains 6.022×10^23 particles (Avogadro's number).

To determine: The number of nitrogen (N) atoms in 5.00 g of calcium nitrate.

(d) Interpretation Introduction

Interpretation: The mass of each compound is given. By using the mass, the number of nitrogen (N) atoms is to be calculated.

Concept introduction: The atomic mass is the sum of the number of protons and number of neutrons. The molar mass of a substance is the mass in grams of one mole of that compound; it can be calculated by adding the atomic masses of the individual atoms present in it. The amount of substance containing 12 g of pure carbon-12 is called a mole. One mole always contains 6.022×10^23 particles.
The number of particles in one mole is also called Avogadro's number.

To determine: The number of nitrogen (N) atoms in 5.00 g of dinitrogen tetroxide.
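The same mole-based reasoning extends to all four parts. Here is a compact sketch using standard atomic masses; the printed values are rounded estimates, not the textbook's worked answers.

```python
# Mole-based nitrogen-atom count for parts (a)-(d), using standard
# atomic masses; results are rounded estimates.
N_A = 6.022e23   # Avogadro's number [1/mol]

masses = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007,
          "Mg": 24.305, "Ca": 40.078}

def molar_mass(formula):
    """formula: dict mapping element symbol -> atom count."""
    return sum(masses[el] * n for el, n in formula.items())

# (formula, nitrogen atoms per formula unit)
compounds = {
    "glycine C2H5O2N":      ({"C": 2, "H": 5, "O": 2, "N": 1}, 1),
    "magnesium nitride":    ({"Mg": 3, "N": 2}, 2),   # Mg3N2
    "calcium nitrate":      ({"Ca": 1, "N": 2, "O": 6}, 2),  # Ca(NO3)2
    "dinitrogen tetroxide": ({"N": 2, "O": 4}, 2),    # N2O4
}

sample_g = 5.00
results = {}
for name, (formula, n_per_unit) in compounds.items():
    moles = sample_g / molar_mass(formula)
    results[name] = moles * n_per_unit * N_A   # nitrogen atoms in the sample
    print(f"{name:22s}: {results[name]:.2e} N atoms")
```

For glycine this reproduces the textbook route: 5.00 g / 75.07 g/mol ≈ 0.0666 mol, times one N per molecule and Avogadro's number, about 4.01×10^22 nitrogen atoms.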
https://lavelle.chem.ucla.edu/forum/search.php?author_id=6654&sr=posts
## Search found 11 matches

Sun Feb 21, 2016 8:58 pm Forum: *Electrophiles Topic: Standard Entropy of Activation Replies: 2 Views: 599

### Re: Standard Entropy of Activation

If you're referring to page 89 in the course reader where it explains our pseudo-equilibrium constant and the signs for ∆H and ∆S in our ∆G equation, then it is referring to the energy going from the products to the peak of the activation energy needed for the process to occur. In other words, i...

Sun Feb 14, 2016 1:56 pm Forum: Kinetics vs. Thermodynamics Controlling a Reaction Topic: zero order reaction Replies: 2 Views: 574

### Re: zero order reaction

Normally, with other order reactions, when we graph the change in concentration over time, the graph shows a curved line with a negative slope that shows the concentration first decreasing quickly and then slowing down. However, with zero order reactions, because the rate of the reaction is independ...

Sun Feb 14, 2016 1:25 pm Forum: General Rate Laws Topic: half life Replies: 1 Views: 365

### Re: half life

Because we know that 1/64 = (1/2)^6, we know that 6 half-lives occur to get to 1/64th of the initial concentration. Therefore, we multiply our half-life time by 6, 6 × 0.43 = 2.58, to get an answer of 2.58 seconds.

Mon Feb 01, 2016 7:49 pm Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams Topic: Chapter 14 IN-TEXT Example 14.4: Self-Test 14.5A Replies: 2 Views: 469

### Re: Chapter 14 IN-TEXT Example 14.4: Self-Test 14.5A

Although it's not technically incorrect to have fractions when balancing an equation, unless it specifically states that you're balancing for a specific number of moles of a molecule, just put integers in your final balanced equations.

Fri Jan 29, 2016 7:26 pm Forum: Balancing Redox Reactions Topic: Redox Reactions Replies: 3 Views: 575

### Redox Reactions

Does anybody know of any mnemonic device or anything else to help me remember the cathode's role vs. the anode's?
I always seem to mix them up. Fri Jan 29, 2016 7:24 pm Forum: Balancing Redox Reactions Topic: Oxidation States Replies: 8 Views: 1030 ### Re: Oxidation States He hasn't mentioned any more yet since the first day of electrochemistry in the course reader, so hopefully not. Sun Jan 24, 2016 3:15 pm Forum: Gibbs Free Energy Concepts and Calculations Topic: Quiz 1 Preparation Replies: 1 Views: 349 ### Quiz 1 Preparation Question 9a requires that we calculate the standard reaction entropy for a reaction using standard molar entropies. The Standard molar entropies for O2,CO2 and H2O are given in the front of the packet; however, the standard molar entropy for C6H6 isn't provided. Should it be or are we supposed to ca... Sat Jan 23, 2016 1:49 pm Forum: Gibbs Free Energy Concepts and Calculations Topic: Spontaneous Reactions at certain temperatures Replies: 2 Views: 513 ### Re: Spontaneous Reactions at certain temperatures So for these types of situations, you'll want to look at the equation for delta G, but in a conceptual, not quantitative manner. ∆G=∆H-T∆S First, we know that a system is spontaneous if ∆G is negative, so we now want to look at qualities of enthalpy and entropy that will make that true. If ∆S is pos... Thu Jan 14, 2016 8:14 pm Forum: Reaction Enthalpies (e.g., Using Hess’s Law, Bond Enthalpies, Standard Enthalpies of Formation) Topic: Homework Problem 8.73 Replies: 4 Views: 878 ### Re: Homework Problem 8.73 It's extremely helpful to draw the Lewis Structures of these molecules before finding enthalpies because that will illustrate the specific bonds you have to break so that the product may be formed with its bonds. For the reactants, you'll see that the Lewis structure is H-C:::C-H meaning that you ha... Thu Jan 14, 2016 8:01 pm Forum: Thermodynamic Systems (Open, Closed, Isolated) Topic: System and Surroundings Replies: 2 Views: 743 ### Re: System and Surroundings That’s correct. 
Assuming that we’re dealing with a perfect system guarantees that the heat gained/lost by the system will be equal to the negative heat lost/gained by the surroundings, which Lavelle outlines below that “perfect system” mention by putting heat given off by the reaction equal to heat ... Wed Jan 06, 2016 8:52 pm Forum: Reaction Enthalpies (e.g., Using Hess’s Law, Bond Enthalpies, Standard Enthalpies of Formation) Topic: Understanding Bond Enthalpies Replies: 3 Views: 2141 ### Re: Understanding Bond Enthalpies Hi Leah! This idea seemed a bit weird to me at first as well; however, if we think about the example in the course reader where CH2 combines with HBr to form CH3 and CH2Br, we’re looking specifically at the energy required or released to break or create every bond changed. Each of these energies add...
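The half-life reasoning in the thread above (1/64 = (1/2)^6, so six half-lives of 0.43 s each) can be checked numerically. A minimal sketch, assuming first-order decay as in the original post:

```python
import math

def elapsed_time(fraction_remaining, half_life):
    """Time for a first-order species to decay to the given fraction
    of its initial concentration."""
    n_half_lives = math.log(fraction_remaining, 0.5)  # log base 1/2
    return n_half_lives * half_life

# six half-lives of 0.43 s each: 6 * 0.43 s = 2.58 s
print(elapsed_time(1/64, 0.43))
```

The base-1/2 logarithm generalizes the "count the halvings" argument to fractions that are not exact powers of 1/2.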
https://brilliant.org/problems/olympiad-for-grade-10/
Algebra Level 4

Let $a,b,c>0$ with $abc=1$, and let

$P= \frac{a^2}{\sqrt{2+2ab}}+\frac{b^2}{\sqrt{2+2bc}}+\frac{c^2}{\sqrt{2+2ca}}.$

Let $M$ be the minimum value of $P$. If $M=\frac{m}{n}$ with $\gcd(m,n)=1$, find the value of $m+n$.
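As a quick sanity check (not a proof of the minimum), the symmetric point $a=b=c=1$ is a natural candidate: each term becomes $1/\sqrt{4}=1/2$, so $P=3/2$ there. A short script confirms the evaluation:

```python
import math

def P(a, b, c):
    """Evaluate the cyclic sum from the problem statement."""
    return (a**2 / math.sqrt(2 + 2*a*b)
            + b**2 / math.sqrt(2 + 2*b*c)
            + c**2 / math.sqrt(2 + 2*c*a))

print(P(1, 1, 1))  # 1.5
```

Sampling other triples with product 1 (e.g. `P(2, 0.5, 1)`) gives larger values, consistent with the symmetric point being a candidate for the minimum.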
https://cms.math.ca/10.4153/CMB-2008-012-8
# Dynamical Zeta Function for Several Strictly Convex Obstacles

Published: 2008-03-01. Printed: Mar 2008

• Vesselin Petkov

## Abstract

The behavior of the dynamical zeta function $Z_D(s)$ related to several strictly convex disjoint obstacles is similar to that of the inverse $Q(s) = \frac{1}{\zeta(s)}$ of the Riemann zeta function $\zeta(s)$. Let $\Pi(s)$ be the series obtained from $Z_D(s)$ summing only over primitive periodic rays. In this paper we examine the analytic singularities of $Z_D(s)$ and $\Pi(s)$ close to the line $\Re s = s_2$, where $s_2$ is the abscissa of absolute convergence of the series obtained by the second iterations of the primitive periodic rays. We show that at least one of the functions $Z_D(s)$, $\Pi(s)$ has a singularity at $s = s_2$.

Keywords: dynamical zeta function, periodic rays

MSC Classifications: 11M36 – Selberg zeta functions and regularized determinants; applications to spectral theory, Dirichlet series, Eisenstein series, etc.; explicit formulas. 58J50 – Spectral problems; spectral geometry; scattering theory [See also 35Pxx]
https://www.meritnation.com/cbse-class-11-science/math/math-ncert-solutions/linear-inequalities/ncert-solutions/41_1_1342_166_122_5549
NCERT Solutions for Class 11 Science Math Chapter 6 Linear Inequalities are provided here with simple step-by-step explanations. These solutions are extremely popular among Class 11 Science students, and they come in handy for quickly completing homework and preparing for exams. All questions and answers from the NCERT Book of Class 11 Science Math Chapter 6 are provided here for free.

#### Question 1:

Solve 24x < 100, when (i) x is a natural number (ii) x is an integer

The given inequality is 24x < 100, that is, x < 100/24 = 25/6.

(i) It is evident that 1, 2, 3, and 4 are the only natural numbers less than 25/6. Thus, when x is a natural number, the solutions of the given inequality are 1, 2, 3, and 4. Hence, in this case, the solution set is {1, 2, 3, 4}.

(ii) The integers less than 25/6 are …, –3, –2, –1, 0, 1, 2, 3, 4. Thus, when x is an integer, the solutions of the given inequality are …, –3, –2, –1, 0, 1, 2, 3, 4. Hence, in this case, the solution set is {…, –3, –2, –1, 0, 1, 2, 3, 4}.

#### Question 2:

Solve –12x > 30, when (i) x is a natural number (ii) x is an integer

The given inequality is –12x > 30. Dividing both sides by –12 (which reverses the inequality) gives x < –5/2.

(i) There is no natural number less than –5/2. Thus, when x is a natural number, there is no solution of the given inequality.

(ii) The integers less than –5/2 are …, –5, –4, –3. Thus, when x is an integer, the solutions of the given inequality are …, –5, –4, –3. Hence, in this case, the solution set is {…, –5, –4, –3}.

#### Question 3:

Solve 5x – 3 < 7, when (i) x is an integer (ii) x is a real number

The given inequality is 5x – 3 < 7, that is, 5x < 10, or x < 2.

(i) The integers less than 2 are …, –4, –3, –2, –1, 0, 1. Thus, when x is an integer, the solutions of the given inequality are …, –4, –3, –2, –1, 0, 1. Hence, in this case, the solution set is {…, –4, –3, –2, –1, 0, 1}.
(ii) When x is a real number, the solutions of the given inequality are given by x < 2, that is, all real numbers x which are less than 2. Thus, the solution set of the given inequality is x ∈ (–∞, 2).

#### Question 4:

Solve 3x + 8 > 2, when (i) x is an integer (ii) x is a real number

The given inequality is 3x + 8 > 2, that is, 3x > –6, or x > –2.

(i) The integers greater than –2 are –1, 0, 1, 2, … Thus, when x is an integer, the solutions of the given inequality are –1, 0, 1, 2, … Hence, in this case, the solution set is {–1, 0, 1, 2, …}.

(ii) When x is a real number, the solutions of the given inequality are all the real numbers which are greater than –2. Thus, in this case, the solution set is (–2, ∞).

#### Question 5:

Solve the given inequality for real x: 4x + 3 < 5x + 7

4x + 3 < 5x + 7
⇒ 4x + 3 – 7 < 5x + 7 – 7
⇒ 4x – 4 < 5x
⇒ 4x – 4 – 4x < 5x – 4x
⇒ –4 < x

Thus, all real numbers x which are greater than –4 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–4, ∞).

#### Question 6:

Solve the given inequality for real x: 3x – 7 > 5x – 1

3x – 7 > 5x – 1
⇒ 3x – 7 + 7 > 5x – 1 + 7
⇒ 3x > 5x + 6
⇒ 3x – 5x > 5x + 6 – 5x
⇒ –2x > 6
⇒ x < –3 (dividing by –2 reverses the inequality)

Thus, all real numbers x which are less than –3 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, –3).

#### Question 7:

Solve the given inequality for real x: 3(x – 1) ≤ 2(x – 3)

3(x – 1) ≤ 2(x – 3)
⇒ 3x – 3 ≤ 2x – 6
⇒ 3x – 3 + 3 ≤ 2x – 6 + 3
⇒ 3x ≤ 2x – 3
⇒ 3x – 2x ≤ 2x – 3 – 2x
⇒ x ≤ –3

Thus, all real numbers x which are less than or equal to –3 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, –3].

#### Question 8:

Solve the given inequality for real x: 3(2 – x) ≥ 2(1 – x)

3(2 – x) ≥ 2(1 – x)
⇒ 6 – 3x ≥ 2 – 2x
⇒ 6 – 3x + 2x ≥ 2 – 2x + 2x
⇒ 6 – x ≥ 2
⇒ 6 – x – 6 ≥ 2 – 6
⇒ –x ≥ –4
⇒ x ≤ 4

Thus, all real numbers x which are less than or equal to 4 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, 4].
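The integer cases in Questions 1–4 can be confirmed by brute force over a small window of integers. A quick check (the window bounds are my own choice):

```python
# Brute-force check of the integer solution sets in Questions 1-4,
# scanning integers in a window around zero.
window = range(-10, 11)

q1 = [x for x in window if 24*x < 100]   # 24x < 100
q2 = [x for x in window if -12*x > 30]   # -12x > 30
q3 = [x for x in window if 5*x - 3 < 7]  # 5x - 3 < 7
q4 = [x for x in window if 3*x + 8 > 2]  # 3x + 8 > 2

# Largest solutions of Q1-Q3 and smallest solution of Q4:
print(max(q1), max(q2), max(q3), min(q4))  # 4 -3 1 -1
```

The natural-number case of Question 1 falls out by filtering `q1` to positive values, giving {1, 2, 3, 4} as in the text.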
#### Question 9:

Solve the given inequality for real x: x + x/2 + x/3 < 11

⇒ 11x/6 < 11 ⇒ x < 6

Thus, all real numbers x which are less than 6 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, 6).

#### Question 10:

Solve the given inequality for real x: x/3 > x/2 + 1

⇒ x/3 – x/2 > 1 ⇒ –x/6 > 1 ⇒ x < –6

Thus, all real numbers x which are less than –6 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, –6).

#### Question 11:

Solve the given inequality for real x: 3(x – 2)/5 ≤ 5(2 – x)/3

⇒ 9(x – 2) ≤ 25(2 – x) ⇒ 9x – 18 ≤ 50 – 25x ⇒ 34x ≤ 68 ⇒ x ≤ 2

Thus, all real numbers x which are less than or equal to 2 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, 2].

#### Question 12:

Solve the given inequality for real x: (1/2)(3x/5 + 4) ≥ (1/3)(x – 6)

⇒ 3(3x/5 + 4) ≥ 2(x – 6) ⇒ 9x/5 + 12 ≥ 2x – 12 ⇒ 9x + 60 ≥ 10x – 60 ⇒ 120 ≥ x

Thus, all real numbers x which are less than or equal to 120 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, 120].

#### Question 13:

Solve the given inequality for real x: 2(2x + 3) – 10 < 6(x – 2)

⇒ 4x + 6 – 10 < 6x – 12 ⇒ 4x – 4 < 6x – 12 ⇒ 8 < 2x ⇒ x > 4

Thus, all real numbers x which are greater than 4 are the solutions of the given inequality. Hence, the solution set of the given inequality is (4, ∞).

#### Question 14:

Solve the given inequality for real x: 37 – (3x + 5) ≥ 9x – 8(x – 3)

⇒ 32 – 3x ≥ x + 24 ⇒ 8 ≥ 4x ⇒ x ≤ 2

Thus, all real numbers x which are less than or equal to 2 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, 2].

#### Question 15:

Solve the given inequality for real x: x/4 < (5x – 2)/3 – (7x – 3)/5

⇒ x/4 < (4x – 1)/15 ⇒ 15x < 16x – 4 ⇒ 4 < x

Thus, all real numbers x which are greater than 4 are the solutions of the given inequality. Hence, the solution set of the given inequality is (4, ∞).

#### Question 16:

Solve the given inequality for real x: (2x – 1)/3 ≥ (3x – 2)/4 – (2 – x)/5

⇒ (2x – 1)/3 ≥ (19x – 18)/20 ⇒ 20(2x – 1) ≥ 3(19x – 18) ⇒ 40x – 20 ≥ 57x – 54 ⇒ 34 ≥ 17x ⇒ x ≤ 2

Thus, all real numbers x which are less than or equal to 2 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, 2].
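Every inequality in Questions 5–16 reduces, after clearing fractions, to the form a·x (op) b for a comparison op. A small exact-arithmetic helper makes the sign-flip rule explicit; this is a generic sketch, not NCERT's own method:

```python
from fractions import Fraction

def solve_linear(a, b, op):
    """Solve a*x (op) b for x, where op is '<', '<=', '>', or '>='.
    Returns (op_for_x, bound): dividing by a negative coefficient
    reverses the inequality direction."""
    a, b = Fraction(a), Fraction(b)
    if a == 0:
        raise ValueError("not a linear inequality in x")
    flip = {"<": ">", "<=": ">=", ">": "<", ">=": "<="}
    if a < 0:
        op = flip[op]
    return op, b / a

# Question 6 reduces to -2x > 6, so x < -3:
print(solve_linear(-2, 6, ">"))  # ('<', Fraction(-3, 1))
```

For example, Question 11 reduces to 34x ≤ 68, and `solve_linear(34, 68, "<=")` returns the bound x ≤ 2 found above.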
#### Question 17:

Solve the given inequality and show the graph of the solution on number line: 3x – 2 < 2x + 1

3x – 2 < 2x + 1
⇒ 3x – 2x < 1 + 2
⇒ x < 3

The graphical representation of the solutions of the given inequality is as follows.

#### Question 18:

Solve the given inequality and show the graph of the solution on number line: 5x – 3 ≥ 3x – 5

5x – 3 ≥ 3x – 5
⇒ 5x – 3x ≥ –5 + 3
⇒ 2x ≥ –2
⇒ x ≥ –1

The graphical representation of the solutions of the given inequality is as follows.

#### Question 19:

Solve the given inequality and show the graph of the solution on number line: 3(1 – x) < 2(x + 4)

3(1 – x) < 2(x + 4)
⇒ 3 – 3x < 2x + 8
⇒ 3 – 8 < 2x + 3x
⇒ –5 < 5x
⇒ x > –1

The graphical representation of the solutions of the given inequality is as follows.

#### Question 20:

Solve the given inequality and show the graph of the solution on number line: x/2 ≥ (5x – 2)/3 – (7x – 3)/5

x/2 ≥ (5x – 2)/3 – (7x – 3)/5
⇒ x/2 ≥ (4x – 1)/15
⇒ 15x ≥ 8x – 2
⇒ 7x ≥ –2
⇒ x ≥ –2/7

The graphical representation of the solutions of the given inequality is as follows.

#### Question 21:

Ravi obtained 70 and 75 marks in the first two unit tests. Find the minimum marks he should get in the third test to have an average of at least 60 marks.

Let x be the marks obtained by Ravi in the third unit test. Since the student should have an average of at least 60 marks,

(70 + 75 + x)/3 ≥ 60
⇒ 145 + x ≥ 180
⇒ x ≥ 35

Thus, the student must obtain a minimum of 35 marks to have an average of at least 60 marks.

#### Question 22:

To receive Grade ‘A’ in a course, one must obtain an average of 90 marks or more in five examinations (each of 100 marks). If Sunita’s marks in the first four examinations are 87, 92, 94 and 95, find the minimum marks that Sunita must obtain in the fifth examination to get Grade ‘A’ in the course.

Let x be the marks obtained by Sunita in the fifth examination. In order to receive Grade ‘A’ in the course, she must obtain an average of 90 marks or more in five examinations. Therefore,

(87 + 92 + 94 + 95 + x)/5 ≥ 90
⇒ 368 + x ≥ 450
⇒ x ≥ 82

Thus, Sunita must obtain at least 82 marks in the fifth examination.

#### Question 23:

Find all pairs of consecutive odd positive integers, both of which are smaller than 10, such that their sum is more than 11.

Let x be the smaller of the two consecutive odd positive integers. Then, the other integer is x + 2.

Since both the integers are smaller than 10,
x + 2 < 10
⇒ x < 10 – 2
⇒ x < 8 … (i)

Also, the sum of the two integers is more than 11.
x + (x + 2) > 11
⇒ 2x + 2 > 11
⇒ 2x > 9
⇒ x > 4.5 … (ii)

From (i) and (ii), we obtain 4.5 < x < 8. Since x is an odd number, x can take the values 5 and 7. Thus, the required possible pairs are (5, 7) and (7, 9).

#### Question 24:

Find all pairs of consecutive even positive integers, both of which are larger than 5, such that their sum is less than 23.

Let x be the smaller of the two consecutive even positive integers. Then, the other integer is x + 2.

Since both the integers are larger than 5,
x > 5 … (1)

Also, the sum of the two integers is less than 23.
x + (x + 2) < 23
⇒ 2x + 2 < 23
⇒ 2x < 21
⇒ x < 10.5 … (2)

From (1) and (2), we obtain 5 < x < 10.5. Since x is an even number, x can take the values 6, 8, and 10. Thus, the required possible pairs are (6, 8), (8, 10), and (10, 12).

#### Question 25:

The longest side of a triangle is 3 times the shortest side and the third side is 2 cm shorter than the longest side. If the perimeter of the triangle is at least 61 cm, find the minimum length of the shortest side.

Let the length of the shortest side of the triangle be x cm. Then, length of the longest side = 3x cm, and length of the third side = (3x – 2) cm.

Since the perimeter of the triangle is at least 61 cm,
x + 3x + (3x – 2) ≥ 61
⇒ 7x – 2 ≥ 61
⇒ 7x ≥ 63
⇒ x ≥ 9

Thus, the minimum length of the shortest side is 9 cm.

#### Question 26:

A man wants to cut three lengths from a single piece of board of length 91 cm. The second length is to be 3 cm longer than the shortest and the third length is to be twice as long as the shortest. What are the possible lengths of the shortest board if the third piece is to be at least 5 cm longer than the second?

[Hint: If x is the length of the shortest board, then x, (x + 3) and 2x are the lengths of the three pieces, respectively. Thus, x + (x + 3) + 2x ≤ 91 and 2x ≥ (x + 3) + 5.]

Let the length of the shortest piece be x cm.
Then, the lengths of the second piece and the third piece are (x + 3) cm and 2x cm respectively.

Since the three lengths are to be cut from a single piece of board of length 91 cm,
x + (x + 3) + 2x ≤ 91
⇒ 4x + 3 ≤ 91
⇒ 4x ≤ 88
⇒ x ≤ 22 … (1)

Also, the third piece is at least 5 cm longer than the second piece.
2x ≥ (x + 3) + 5
⇒ 2x ≥ x + 8
⇒ x ≥ 8 … (2)

From (1) and (2), we obtain 8 ≤ x ≤ 22. Thus, the possible length of the shortest board is greater than or equal to 8 cm but less than or equal to 22 cm.

#### Question 1:

Solve the given inequality graphically in two-dimensional plane: x + y < 5

The graphical representation of x + y = 5 is given as a dotted line in the figure below. This line divides the xy-plane in two half planes, I and II. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 0 + 0 < 5, or 0 < 5, which is true. Therefore, half plane II is not the solution region of the given inequality. Also, it is evident that any point on the line does not satisfy the given strict inequality. Thus, the solution region of the given inequality is the shaded half plane I excluding the points on the line. This can be represented as follows.

#### Question 2:

Solve the given inequality graphically in two-dimensional plane: 2x + y ≥ 6

The graphical representation of 2x + y = 6 is given in the figure below. This line divides the xy-plane in two half planes, I and II. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 2(0) + 0 ≥ 6, or 0 ≥ 6, which is false. Therefore, half plane I is not the solution region of the given inequality. Also, it is evident that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the shaded half plane II including the points on the line.
This can be represented as follows.

#### Question 3:

Solve the given inequality graphically in two-dimensional plane: 3x + 4y ≤ 12

The graphical representation of 3x + 4y = 12 is given in the figure below. This line divides the xy-plane in two half planes, I and II. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 3(0) + 4(0) ≤ 12, or 0 ≤ 12, which is true. Therefore, half plane II is not the solution region of the given inequality. Also, it is evident that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the shaded half plane I including the points on the line. This can be represented as follows.

#### Question 4:

Solve the given inequality graphically in two-dimensional plane: y + 8 ≥ 2x

The graphical representation of y + 8 = 2x is given in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 0 + 8 ≥ 2(0), or 8 ≥ 0, which is true. Therefore, the lower half plane is not the solution region of the given inequality. Also, it is evident that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the half plane containing the point (0, 0) including the line. The solution region is represented by the shaded region as follows.

#### Question 5:

Solve the given inequality graphically in two-dimensional plane: x – y ≤ 2

The graphical representation of x – y = 2 is given in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 0 – 0 ≤ 2, or 0 ≤ 2, which is true. Therefore, the lower half plane is not the solution region of the given inequality. Also, it is clear that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the half plane containing the point (0, 0) including the line. The solution region is represented by the shaded region as follows.

#### Question 6:

Solve the given inequality graphically in two-dimensional plane: 2x – 3y > 6

The graphical representation of 2x – 3y = 6 is given as a dotted line in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 2(0) – 3(0) > 6, or 0 > 6, which is false. Therefore, the upper half plane is not the solution region of the given inequality. Also, it is clear that any point on the line does not satisfy the given inequality. Thus, the solution region of the given inequality is the half plane that does not contain the point (0, 0), excluding the line. The solution region is represented by the shaded region as follows.

#### Question 7:

Solve the given inequality graphically in two-dimensional plane: –3x + 2y ≥ –6

The graphical representation of –3x + 2y = –6 is given in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that –3(0) + 2(0) ≥ –6, or 0 ≥ –6, which is true. Therefore, the lower half plane is not the solution region of the given inequality. Also, it is evident that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the half plane containing the point (0, 0) including the line. The solution region is represented by the shaded region as follows.

#### Question 8:

Solve the given inequality graphically in two-dimensional plane: 3y – 5x < 30

The graphical representation of 3y – 5x = 30 is given as a dotted line in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 3(0) – 5(0) < 30, or 0 < 30, which is true. Therefore, the upper half plane is not the solution region of the given inequality. Also, it is evident that any point on the line does not satisfy the given inequality. Thus, the solution region of the given inequality is the half plane containing the point (0, 0), excluding the line. The solution region is represented by the shaded region as follows.

#### Question 9:

Solve the given inequality graphically in two-dimensional plane: y < –2

The graphical representation of y = –2 is given as a dotted line in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 0 < –2, which is false. Also, it is evident that any point on the line does not satisfy the given inequality. Hence, every point below the line y = –2 (excluding all the points on the line) determines the solution of the given inequality. The solution region is represented by the shaded region as follows.

#### Question 10:

Solve the given inequality graphically in two-dimensional plane: x > –3

The graphical representation of x = –3 is given as a dotted line in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that 0 > –3, which is true. Also, it is evident that any point on the line does not satisfy the given inequality. Hence, every point on the right side of the line x = –3 (excluding all the points on the line) determines the solution of the given inequality. The solution region is represented by the shaded region as follows.

#### Question 1:

Solve the following system of inequalities graphically: x ≥ 3, y ≥ 2

x ≥ 3 … (1)
y ≥ 2 … (2)

The graphs of the lines x = 3 and y = 2 are drawn in the figure below. Inequality (1) represents the region on the right hand side of the line x = 3 (including the line x = 3), and inequality (2) represents the region above the line y = 2 (including the line y = 2). Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 2:

Solve the following system of inequalities graphically: 3x + 2y ≤ 12, x ≥ 1, y ≥ 2

3x + 2y ≤ 12 … (1)
x ≥ 1 … (2)
y ≥ 2 … (3)

The graphs of the lines 3x + 2y = 12, x = 1, and y = 2 are drawn in the figure below. Inequality (1) represents the region below the line 3x + 2y = 12 (including the line 3x + 2y = 12). Inequality (2) represents the region on the right side of the line x = 1 (including the line x = 1). Inequality (3) represents the region above the line y = 2 (including the line y = 2). Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 3:

Solve the following system of inequalities graphically: 2x + y ≥ 6, 3x + 4y ≤ 12

2x + y ≥ 6 … (1)
3x + 4y ≤ 12 … (2)

The graphs of the lines 2x + y = 6 and 3x + 4y = 12 are drawn in the figure below. Inequality (1) represents the region above the line 2x + y = 6 (including the line 2x + y = 6), and inequality (2) represents the region below the line 3x + 4y = 12 (including the line 3x + 4y = 12). Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 4:

Solve the following system of inequalities graphically: x + y ≥ 4, 2x – y > 0

x + y ≥ 4 … (1)
2x – y > 0 … (2)

The graphs of the lines x + y = 4 and 2x – y = 0 are drawn in the figure below. Inequality (1) represents the region above the line x + y = 4 (including the line x + y = 4). It is observed that (1, 0) satisfies the inequality 2x – y > 0 [2(1) – 0 = 2 > 0]. Therefore, inequality (2) represents the half plane corresponding to the line 2x – y = 0 containing the point (1, 0) [excluding the line 2x – y = 0]. Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on line x + y = 4 and excluding the points on line 2x – y = 0 as follows.

#### Question 5:

Solve the following system of inequalities graphically: 2x – y > 1, x – 2y < –1

2x – y > 1 … (1)
x – 2y < –1 … (2)

The graphs of the lines 2x – y = 1 and x – 2y = –1 are drawn in the figure below. Inequality (1) represents the region below the line 2x – y = 1 (excluding the line 2x – y = 1), and inequality (2) represents the region above the line x – 2y = –1 (excluding the line x – 2y = –1). Hence, the solution of the given system of linear inequalities is represented by the common shaded region excluding the points on the respective lines as follows.

#### Question 6:

Solve the following system of inequalities graphically: x + y ≤ 6, x + y ≥ 4

x + y ≤ 6 … (1)
x + y ≥ 4 … (2)

The graphs of the lines x + y = 6 and x + y = 4 are drawn in the figure below. Inequality (1) represents the region below the line x + y = 6 (including the line x + y = 6), and inequality (2) represents the region above the line x + y = 4 (including the line x + y = 4). Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.
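Graphical solutions like those above can be spot-checked programmatically: a candidate point lies in the solution region exactly when it satisfies every inequality of the system. A minimal sketch using comparison functions from the `operator` module (the test points are my own choices):

```python
import operator

# Each constraint is (a, b, cmp, c), representing a*x + b*y (cmp) c.
def satisfies(point, constraints):
    """Return True iff the point satisfies every constraint of the system."""
    x, y = point
    return all(cmp(a*x + b*y, c) for a, b, cmp, c in constraints)

# System from Question 5 above: 2x - y > 1 and x - 2y < -1
system = [(2, -1, operator.gt, 1),
          (1, -2, operator.lt, -1)]

print(satisfies((2, 2), system))  # 2*2-2 = 2 > 1 and 2-4 = -2 < -1 -> True
print(satisfies((0, 0), system))  # 0 > 1 fails -> False
```

Using `operator.ge`/`operator.le` for the non-strict systems (Questions 1–3, 6) handles boundary points on the lines correctly.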
#### Question 7:

Solve the following system of inequalities graphically: 2x + y ≥ 8, x + 2y ≥ 10

2x + y ≥ 8 … (1)
x + 2y ≥ 10 … (2)

The graphs of the lines 2x + y = 8 and x + 2y = 10 are drawn in the figure below. Inequality (1) represents the region above the line 2x + y = 8, and inequality (2) represents the region above the line x + 2y = 10. Hence, the solution of the given system of linear inequalities is represented by the common shaded region, including the points on the respective lines, as follows.

#### Question 8:

Solve the following system of inequalities graphically: x + y ≤ 9, y > x, x ≥ 0

x + y ≤ 9 … (1)
y > x … (2)
x ≥ 0 … (3)

The graphs of the lines x + y = 9 and y = x are drawn in the figure below. Inequality (1) represents the region below the line x + y = 9 (including the line). It is observed that (0, 1) satisfies the inequality y > x [1 > 0]. Therefore, inequality (2) represents the half plane corresponding to the line y = x that contains the point (0, 1), excluding the line itself. Inequality (3) represents the region on the right hand side of the line x = 0, i.e. the y-axis (including the y-axis). Hence, the solution of the given system of linear inequalities is represented by the common shaded region, including the points on the lines x + y = 9 and x = 0 and excluding the points on the line y = x, as follows.

#### Question 9:

Solve the following system of inequalities graphically: 5x + 4y ≤ 20, x ≥ 1, y ≥ 2

5x + 4y ≤ 20 … (1)
x ≥ 1 … (2)
y ≥ 2 … (3)

The graphs of the lines 5x + 4y = 20, x = 1, and y = 2 are drawn in the figure below. Inequality (1) represents the region below the line 5x + 4y = 20 (including the line). Inequality (2) represents the region on the right hand side of the line x = 1 (including the line). Inequality (3) represents the region above the line y = 2 (including the line).
Hence, the solution of the given system of linear inequalities is represented by the common shaded region, including the points on the respective lines, as follows.

#### Question 10:

Solve the following system of inequalities graphically: 3x + 4y ≤ 60, x + 3y ≤ 30, x ≥ 0, y ≥ 0

3x + 4y ≤ 60 … (1)
x + 3y ≤ 30 … (2)

The graphs of the lines 3x + 4y = 60 and x + 3y = 30 are drawn in the figure below. Inequality (1) represents the region below the line 3x + 4y = 60 (including the line), and inequality (2) represents the region below the line x + 3y = 30 (including the line). Since x ≥ 0 and y ≥ 0, every point in the common shaded region in the first quadrant, including the points on the respective lines and the axes, represents the solution of the given system of linear inequalities.

#### Question 11:

Solve the following system of inequalities graphically: 2x + y ≥ 4, x + y ≤ 3, 2x – 3y ≤ 6

2x + y ≥ 4 … (1)
x + y ≤ 3 … (2)
2x – 3y ≤ 6 … (3)

The graphs of the lines 2x + y = 4, x + y = 3, and 2x – 3y = 6 are drawn in the figure below. Inequality (1) represents the region above the line 2x + y = 4 (including the line). Inequality (2) represents the region below the line x + y = 3 (including the line). Inequality (3) represents the region above the line 2x – 3y = 6 (including the line). Hence, the solution of the given system of linear inequalities is represented by the common shaded region, including the points on the respective lines, as follows.

#### Question 12:

Solve the following system of inequalities graphically: x – 2y ≤ 3, 3x + 4y ≥ 12, x ≥ 0, y ≥ 1

x – 2y ≤ 3 … (1)
3x + 4y ≥ 12 … (2)
y ≥ 1 … (3)

The graphs of the lines x – 2y = 3, 3x + 4y = 12, and y = 1 are drawn in the figure below. Inequality (1) represents the region above the line x – 2y = 3 (including the line). Inequality (2) represents the region above the line 3x + 4y = 12 (including the line).
Inequality (3) represents the region above the line y = 1 (including the line). The inequality x ≥ 0 represents the region on the right hand side of the y-axis (including the y-axis). Hence, the solution of the given system of linear inequalities is represented by the common shaded region, including the points on the respective lines and the y-axis, as follows.

#### Question 13:

Solve the following system of inequalities graphically: 4x + 3y ≤ 60, y ≥ 2x, x ≥ 3, x, y ≥ 0

4x + 3y ≤ 60 … (1)
y ≥ 2x … (2)
x ≥ 3 … (3)

The graphs of the lines 4x + 3y = 60, y = 2x, and x = 3 are drawn in the figure below. Inequality (1) represents the region below the line 4x + 3y = 60 (including the line). Inequality (2) represents the region above the line y = 2x (including the line). Inequality (3) represents the region on the right hand side of the line x = 3 (including the line). Hence, the solution of the given system of linear inequalities is represented by the common shaded region, including the points on the respective lines, as follows.

#### Question 14:

Solve the following system of inequalities graphically: 3x + 2y ≤ 150, x + 4y ≤ 80, x ≤ 15, y ≥ 0, x ≥ 0

3x + 2y ≤ 150 … (1)
x + 4y ≤ 80 … (2)
x ≤ 15 … (3)

The graphs of the lines 3x + 2y = 150, x + 4y = 80, and x = 15 are drawn in the figure below. Inequality (1) represents the region below the line 3x + 2y = 150 (including the line). Inequality (2) represents the region below the line x + 4y = 80 (including the line). Inequality (3) represents the region on the left hand side of the line x = 15 (including the line). Since x ≥ 0 and y ≥ 0, every point in the common shaded region in the first quadrant, including the points on the respective lines and the axes, represents the solution of the given system of linear inequalities.
#### Question 15:

Solve the following system of inequalities graphically: x + 2y ≤ 10, x + y ≥ 1, x – y ≤ 0, x ≥ 0, y ≥ 0

x + 2y ≤ 10 … (1)
x + y ≥ 1 … (2)
x – y ≤ 0 … (3)

The graphs of the lines x + 2y = 10, x + y = 1, and x – y = 0 are drawn in the figure below. Inequality (1) represents the region below the line x + 2y = 10 (including the line). Inequality (2) represents the region above the line x + y = 1 (including the line). Inequality (3) represents the region above the line x – y = 0 (including the line). Since x ≥ 0 and y ≥ 0, every point in the common shaded region in the first quadrant, including the points on the respective lines and the axes, represents the solution of the given system of linear inequalities.

#### Question 1:

Solve the inequality 2 ≤ 3x – 4 ≤ 5

2 ≤ 3x – 4 ≤ 5
⇒ 2 + 4 ≤ 3x – 4 + 4 ≤ 5 + 4
⇒ 6 ≤ 3x ≤ 9
⇒ 2 ≤ x ≤ 3

Thus, all real numbers x that are greater than or equal to 2 but less than or equal to 3 are solutions of the given inequality. The solution set for the given inequality is [2, 3].

#### Question 2:

Solve the inequality 6 ≤ –3(2x – 4) < 12

6 ≤ –3(2x – 4) < 12
⇒ 2 ≤ –(2x – 4) < 4
⇒ –2 ≥ 2x – 4 > –4
⇒ 4 – 2 ≥ 2x > 4 – 4
⇒ 2 ≥ 2x > 0
⇒ 1 ≥ x > 0

Thus, the solution set for the given inequality is (0, 1].

#### Question 3:

Solve the inequality

Thus, the solution set for the given inequality is [–4, 2].

#### Question 4:

Solve the inequality

⇒ –75 < 3(x – 2) ≤ 0
⇒ –25 < x – 2 ≤ 0
⇒ –25 + 2 < x ≤ 2
⇒ –23 < x ≤ 2

Thus, the solution set for the given inequality is (–23, 2].

#### Question 5:

Solve the inequality

Thus, the solution set for the given inequality is .

#### Question 6:

Solve the inequality

Thus, the solution set for the given inequality is .

#### Question 7:

Solve the inequalities and represent the solution graphically on the number line: 5x + 1 > –24, 5x – 1 < 24

5x + 1 > –24 ⇒ 5x > –25 ⇒ x > –5 … (1)
5x – 1 < 24 ⇒ 5x < 25 ⇒ x < 5 … (2)

From (1) and (2), it can be concluded that the solution set for the given system of inequalities is (–5, 5).
The solution of the given system of inequalities can be represented on the number line as

#### Question 8:

Solve the inequalities and represent the solution graphically on the number line: 2(x – 1) < x + 5, 3(x + 2) > 2 – x

2(x – 1) < x + 5 ⇒ 2x – 2 < x + 5 ⇒ 2x – x < 5 + 2 ⇒ x < 7 … (1)
3(x + 2) > 2 – x ⇒ 3x + 6 > 2 – x ⇒ 3x + x > 2 – 6 ⇒ 4x > –4 ⇒ x > –1 … (2)

From (1) and (2), it can be concluded that the solution set for the given system of inequalities is (–1, 7). The solution of the given system of inequalities can be represented on the number line as

#### Question 9:

Solve the following inequalities and represent the solution graphically on the number line: 3x – 7 > 2(x – 6), 6 – x > 11 – 2x

3x – 7 > 2(x – 6) ⇒ 3x – 7 > 2x – 12 ⇒ 3x – 2x > –12 + 7 ⇒ x > –5 … (1)
6 – x > 11 – 2x ⇒ –x + 2x > 11 – 6 ⇒ x > 5 … (2)

From (1) and (2), it can be concluded that the solution set for the given system of inequalities is (5, ∞). The solution of the given system of inequalities can be represented on the number line as

#### Question 10:

Solve the inequalities and represent the solution graphically on the number line: 5(2x – 7) – 3(2x + 3) ≤ 0, 2x + 19 ≤ 6x + 47

5(2x – 7) – 3(2x + 3) ≤ 0 ⇒ 10x – 35 – 6x – 9 ≤ 0 ⇒ 4x – 44 ≤ 0 ⇒ 4x ≤ 44 ⇒ x ≤ 11 … (1)
2x + 19 ≤ 6x + 47 ⇒ 19 – 47 ≤ 6x – 2x ⇒ –28 ≤ 4x ⇒ –7 ≤ x … (2)

From (1) and (2), it can be concluded that the solution set for the given system of inequalities is [–7, 11]. The solution of the given system of inequalities can be represented on the number line as

#### Question 11:

A solution is to be kept between 68°F and 77°F. What is the range in temperature in degrees Celsius (C) if the Celsius/Fahrenheit (F) conversion formula is given by F = (9/5)C + 32?

Since the solution is to be kept between 68°F and 77°F,
68 < F < 77
Putting F = (9/5)C + 32, we obtain
68 < (9/5)C + 32 < 77
⇒ 36 < (9/5)C < 45
⇒ 20 < C < 25

Thus, the required range of temperature in degrees Celsius is between 20°C and 25°C.

#### Question 12:

A solution of 8% boric acid is to be diluted by adding a 2% boric acid solution to it. The resulting mixture is to be more than 4% but less than 6% boric acid.
If we have 640 litres of the 8% solution, how many litres of the 2% solution will have to be added?

Let x litres of the 2% boric acid solution be added. Then, total mixture = (x + 640) litres. The resulting mixture is to be more than 4% but less than 6% boric acid, so

2% of x + 8% of 640 > 4% of (x + 640), and
2% of x + 8% of 640 < 6% of (x + 640).

2% of x + 8% of 640 > 4% of (x + 640)
⇒ 2x + 5120 > 4x + 2560
⇒ 5120 – 2560 > 4x – 2x
⇒ 2560 > 2x
⇒ 1280 > x

2% of x + 8% of 640 < 6% of (x + 640)
⇒ 2x + 5120 < 6x + 3840
⇒ 5120 – 3840 < 6x – 2x
⇒ 1280 < 4x
⇒ 320 < x

∴ 320 < x < 1280

Thus, the number of litres of the 2% boric acid solution to be added must be more than 320 litres but less than 1280 litres.

#### Question 13:

How many litres of water will have to be added to 1125 litres of the 45% solution of acid so that the resulting mixture will contain more than 25% but less than 30% acid content?

Let x litres of water be added. Then, total mixture = (x + 1125) litres. It is evident that the amount of acid contained in the resulting mixture is 45% of 1125 litres. This resulting mixture will contain more than 25% but less than 30% acid content, so

30% of (1125 + x) > 45% of 1125, and
25% of (1125 + x) < 45% of 1125.

30% of (1125 + x) > 45% of 1125
⇒ 337.5 + 0.3x > 506.25
⇒ 0.3x > 168.75
⇒ x > 562.5

25% of (1125 + x) < 45% of 1125
⇒ 281.25 + 0.25x < 506.25
⇒ 0.25x < 225
⇒ x < 900

∴ 562.5 < x < 900

Thus, the required number of litres of water to be added must be more than 562.5 but less than 900.

#### Question 14:

IQ of a person is given by the formula IQ = (MA/CA) × 100, where MA is mental age and CA is chronological age. If 80 ≤ IQ ≤ 140 for a group of 12-year-old children, find the range of their mental age.
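Question 14 can be finished numerically. Assuming the standard formula IQ = (MA/CA) × 100 (the formula in the original was an image), the bounds 80 ≤ IQ ≤ 140 with CA = 12 give 9.6 ≤ MA ≤ 16.8. A quick check:

```python
def mental_age_range(iq_lo, iq_hi, ca):
    # IQ = MA / CA * 100  =>  MA = IQ * CA / 100 (standard formula, assumed here)
    return iq_lo * ca / 100, iq_hi * ca / 100

lo, hi = mental_age_range(80, 140, 12)
print(lo, hi)  # 9.6 16.8
```

So the mental ages in the group range from 9.6 to 16.8 years.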
http://openstudy.com/updates/50ebb90be4b0d4a537cc9f2c
## anonymous (3 years ago)

Can anyone show me an example physics numerical which can't be solved just by memorizing formulas and needs a deep understanding of the concept? Please explain all the steps used. Please explain those deep concepts also.

1. anonymous (3 years ago):

You know Kirchhoff's laws? If you do, then you know that the closed-loop integral of a circuit is always equal to 0, but that also requires you to take the potential difference of the battery. What if you don't have a battery, but instead you have a magnetic field exactly as required for the same amount of current to flow through the circuit? Would Kirchhoff's law fail? If not, why? If yes, then what is the integral equal to?

2. anonymous (3 years ago):

To paraphrase Poincaré: science (physics) is no more a collection of facts (formulas) than a house is a collection of stones. Formulas are derived from a deep understanding of concepts, for instances of special interest, to demonstrate the quantitative relationship among the physical quantities involved, and for convenience and ease in obtaining desired numerical results for these instances. A deep understanding of concepts is necessary to determine the proper relationship among the physical quantities, from which you can choose appropriate formulas to obtain the desired information. Most real problems involve more complex situations than can be solved by the simple application of a single equation. Even applying a single equation may be problematic: in recognizing that the equation is appropriate, in choosing the correct values for the parameters from the known information, or in knowing how and when to augment the given information with necessary additional information.
4. anonymous (3 years ago):

Consider a tank of water of depth d which you wish to empty with a 5/8-inch-diameter garden hose siphon, after it passes over an obstruction of height z above the water surface and leads to a level h below the water surface. What is the maximum height of the siphon loop above the water level before the water column in the siphon breaks?

Use Bernoulli's law, because we have a fluid flowing through a tube at different heights. Bernoulli's law states that at two points on the same streamline of an incompressible, nonviscous fluid in steady flow, the sum of the pressure, the kinetic energy per unit volume, and the potential energy per unit volume is the same. I'll call this sum the Bernoulli variable. At the top of the siphon, at height z above the water surface, the Bernoulli variable is $P+\frac{ 1 }{ 2 }\rho v ^{2}+\rho gz$. At the level of the water surface the Bernoulli variable is just the atmospheric pressure, so we can equate the two. The Bernoulli equation for a point in the loop is therefore $P+\frac{ 1 }{ 2 } \rho v ^{2}+\rho gz=P _{ATM}$. We know that a fluid cannot support a tensile force (i.e., water cannot pull anything), so when the absolute pressure in the loop drops to 0 the water column will break. So for the maximum z we have $\rho gz _{\max} =P _{ATM}-\frac{ 1 }{ 2 }\rho v ^{2}$. But what is the velocity v? Since this is an incompressible fluid, the velocity is the same at all points in the siphon. In fact, the velocity is determined only by the height of the surface of the water above the outlet of the siphon.
This is Torricelli's law (derived from Bernoulli's law), given by the equation $v _{out}= \sqrt{2gh}$. So the maximum-z equation becomes $z _{\max} = \frac{ P _{ATM} }{ \rho g }- h$. The diameter of the siphon is irrelevant, except that its relatively large diameter minimizes viscous forces and surface tension, making Bernoulli's law a more valid application to this problem. Reviewing the approach: you had to determine the law(s) that are appropriate and the condition for the breaking of the water column. You had to deduce the Bernoulli variables in the siphon at the water level and at the top of the siphon. You had to deduce the velocity at the siphon outlet. And lastly, you had to remember a consistent set of units for density, gravitational acceleration, pressure, distance, and area. In this case density is in slugs per cubic foot, the acceleration of gravity in ft/sec², h in feet, and atmospheric pressure in pounds per square foot. (Remember the embarrassment of the company that built a Martian probe when inches and centimeters were interchanged.) I think this problem is representative of physics problems in general. Some problems are easier and less involved; many are more complex, especially in the application of physical intuition and/or the development of the mathematical relationships. Generally, real problems are not plug and play.

5. anonymous (3 years ago):

Well, obviously any numerical problem for which there does not yet exist a formula, but for which a formula can be derived from fundamental principles. Such problems can readily be devised at the graduate-student level, but they take a lot of work, so I'm sure not going to do one for your pleasure. Send me \$500 and I might do it. Also, of course, any research problem falls into this category.
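Plugging numbers into the final formula makes the result tangible. The sketch below is my own, in SI units rather than the imperial units mentioned in the answer; the formula holds in any consistent unit system:

```python
# Sanity check of z_max = P_atm / (rho * g) - h for the siphon problem.
P_ATM = 101_325.0  # Pa, standard atmosphere
RHO = 1000.0       # kg/m^3, water
G = 9.81           # m/s^2

def z_max(h):
    """Maximum siphon-loop height (m) above the surface, for outlet depth h (m)."""
    return P_ATM / (RHO * G) - h

print(round(z_max(1.0), 2))  # about 9.33 m
```

The P_atm/(rho*g) term is the familiar ~10.3 m barometric water column; the loop can rise at most that high, minus the outlet depth h that sets the flow speed.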
8. AravindG (3 years ago):

You can also check out the IIT group's closed-questions section; there are very nice questions there :)
https://math.stackexchange.com/questions/1640495/why-is-the-accuracy-of-prhypothesis-in-bayess-theorem-less-important-than
# Why is the accuracy of $\Pr(\text{hypothesis})$ in Bayes's Theorem less important than it appears?

Source: p. 224, Think: A Compelling Introduction to Philosophy (1st ed., 1999) by Simon Blackburn. I capitalised the minuscules which the author uses for variables. I pursue only intuition; please do not answer with formal proofs.

Discussing Bayes's Theorem ( $\Pr(H|E)=\frac{\Pr(E|H)\Pr(H)}{\Pr(E)}$ ), the author abbreviates 'evidence' to E and 'hypothesis' to H.

Of course, very often it is difficult or impossible to quantify the "prior" probabilities [ $\Pr(H)$ ] with any accuracy. It is important to realize that this need not matter as much as it might seem. Two factors alleviate the problem. First, even if we assign a range to each figure, [1.] it may be that all ways of calculating the upshot give a sufficiently similar result. And second, it may be that in the face of enough evidence, [2.] difference of prior opinion gets swamped. Investigators starting with very different antecedent attitudes to $\Pr(H)$ might end up assigning similarly high values to $\Pr(H \mid E)$, when $E$ becomes impressive enough.

My interpretation of the above as overconfident and presumptuous implies my failure to comprehend it; somehow, I am unpersuaded by 1 and 2. Would someone please explain them?

2) basically says: "If you receive enough evidence, however unconvinced you were at the start, you must eventually become convinced." Since in real life we actually have a lot of data (medical trials, for instance, get run even when we could in principle know the answers already from extant data), we can afford to be quite imprecise with our priors. (Of course, the assumption here is that we have lots of data. If we are trying to extract the truth from very little data, then it becomes very important to have a reasonable prior, because the updating process has so little effect when the data set is so small.) 1 seems to be a rephrasing of 2, to me; 2 seems to be the reason why 1 is true.
The second statement is related to the way Bayesian probability reacts in the light of new evidence, always converging towards the truth implied by the data. No matter whether you are skeptical about the relation between cancer and tobacco, or a believer, after seeing a certain amount of reasonable evidence you should change your opinion to fit the facts better. It is, however, true that certain especially toxic initial priors can hamper your ability to reason correctly. Perhaps the simplest and most radical example of this is taking $P(A)=0$ or $P(A)=1$ as the prior. The problem of selecting a reasonable initial prior is difficult, but on some special occasions it can be as simple as starting from total ignorance ($P(A)=1/2$) and letting the data correct our views. Regarding 1, I do not fully get what he is getting at. Every way of calculating something should give the same result, since it is uniquely determined from the priors. If we have some small discrepancies between our prior and our partner's, which are further reduced by evidence updates, then in many cases we can be content with either result, but we have to be careful.

Edit: You can take A as any statement which is susceptible to being changed by evidence (which is a pretty broad category, though it may exclude logical propositions if we are presupposing logical omniscience). For example, A could be the aforementioned 'smoking causes cancer'.

• Thanks. Can you please clarify what $A$ means in your 3rd paragraph? – Greek - Area 51 Proposal Feb 7 '16 at 6:29
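Point 2 (prior swamping) can be made concrete with a small simulation. This is my own illustration using a conjugate Beta–Bernoulli model, not anything from the book or the answers: two investigators start with opposite priors about a coin's bias, and after enough flips their posterior means nearly coincide.

```python
import random

random.seed(0)

# Two investigators with very different Beta priors over a coin's
# heads-probability: one expects tails-heavy, the other heads-heavy.
priors = [(1, 9), (9, 1)]

# Simulate 1000 flips of a coin whose true heads-probability is 0.7.
flips = [random.random() < 0.7 for _ in range(1000)]
heads = sum(flips)
tails = len(flips) - heads

# Conjugate update: Beta(a, b) prior + data -> Beta(a + heads, b + tails).
posterior_means = [(a + heads) / (a + b + heads + tails) for a, b in priors]
for (a, b), m in zip(priors, posterior_means):
    print(f"prior Beta({a},{b}) -> posterior mean {m:.3f}")
# Both posterior means land near 0.7: the differing priors are swamped.
```

With little data the two posteriors would still disagree sharply, which is exactly the caveat in the answer above.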
http://www.physicsforums.com/showpost.php?p=164678&postcount=1
I'm reading the first edition of Mechanics by Landau et al., published in 1960. Just before equation 3.1 on page 5 it says exactly this: "Since space is isotropic, the Lagrangian must also be independent of the direction of **v**, and is therefore a function only of its magnitude, i.e. of **v**² = v²: L = L(v²) (3.1)" (bold **v** is the velocity vector, italic v its magnitude). This seems very cryptic to me, since the magnitude is sqrt(**v**²) = v. Could someone fill in the missing details for me please? Funky
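The notational step being asked about is that the bold **v**² denotes the dot product v·v, whose value equals the square of the magnitude |v|²; so a Lagrangian depending only on v·v automatically depends only on the magnitude. A quick numerical sanity check (my own, not from the post):

```python
import math

v = [3.0, 4.0, 12.0]
dot = sum(c * c for c in v)                       # bold v squared: the dot product v . v
magnitude = math.sqrt(sum(c * c for c in v))      # italic v: |v|
print(dot, magnitude ** 2)                        # both 169.0, i.e. v.v == |v|^2
```

Writing L(v²) rather than L(v) emphasizes that L is a smooth function of the scalar v·v, which is the form Landau uses in the expansion that follows.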
https://forum.polymake.org/viewtopic.php?f=8&p=1997&sid=9f760824d04ae7d1732f442840a71830
## Kazarnovskii Pseudovolume

Questions and problems about using polymake go here.

gino:

### Kazarnovskii Pseudovolume

Good afternoon, I would like to know if it is possible to use polymake to compute the Kazarnovskii pseudovolume of 4-dimensional polytopes. If $\Gamma$ is a polytope in $\mathbb C^2$, the Kazarnovskii pseudovolume $P_2(\Gamma)$ is, by definition, the sum $\frac{1}{\pi}\sum_\Delta \rho(\Delta)vol_2(\Delta)\psi(\Delta)$, as $\Delta$ runs over the set of 2-dimensional faces of $\Gamma$, where:
- $\rho(\Delta)=1-\langle v_1,v_2\rangle^2$, with $\{v_1,v_2\}$ an orthonormal basis (with respect to the scalar product $Re\langle\,,\rangle$ given by the real part of the standard Hermitian one) of the plane parallel to $\Delta$ and passing through the origin;
- $vol_2(\Delta)$ is the surface area of $\Delta$;
- $\psi(\Delta)$ is the outer angle of $\Gamma$ at $\Delta$.

So $P_2(\Gamma)$ is just a weighted version of the 2nd intrinsic volume of $\Gamma$, taking into account the position of $\Gamma$ with respect to the complex structure of the ambient space. My question is the following: is polymake able to perform the necessary linear algebra computations on the set of ridges of $\Gamma$?

joswig (Main Author):

### Re: Kazarnovskii Pseudovolume

This is a special computation which is not supported by polymake right away. One thing that makes this a bit delicate is that it needs to be implemented with floats. By design, polymake is primarily about exact computations; therefore, typical float linear algebra is dramatically underdeveloped in polymake, and essentially the only non-trivial algorithm available is singular value decomposition (and thus solving systems of linear equations with reasonable accuracy). It seems doable, though, by writing suitable C++ client code.
Just a general warning: starting out with float coordinates for your points or inequalities, polymake (by default) will convert them into exact rational numbers. The output can later be converted back to floats; see, e.g., https://forum.polymake.org/viewtopic.php?f=9&t=7&p=24&hilit=float#p24.
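For prototyping outside polymake, the per-face weight $\rho(\Delta)$ itself is a small linear-algebra computation. The sketch below is a hedged illustration in plain numpy, not polymake client code; it assumes $\langle\,,\rangle$ is the standard Hermitian product on $\mathbb C^2$ and reads $\langle v_1,v_2\rangle^2$ as $|\langle v_1,v_2\rangle|^2$, which is an assumption about the intended convention:

```python
import numpy as np

def rho(u1, u2):
    """Weight rho(Delta) = 1 - |<v1, v2>|^2 for the plane spanned by u1, u2 in C^2.

    v1, v2 is an orthonormal basis w.r.t. Re<,> (Gram-Schmidt below) and <,> is
    the standard Hermitian product; the |.|^2 reading is an assumption."""
    u1 = np.asarray(u1, dtype=complex)
    u2 = np.asarray(u2, dtype=complex)
    re = lambda a, b: np.real(np.vdot(a, b))  # real scalar product Re<a, b>
    v1 = u1 / np.sqrt(re(u1, u1))
    w = u2 - re(u2, v1) * v1                  # project off v1 w.r.t. Re<,>
    v2 = w / np.sqrt(re(w, w))
    return 1 - abs(np.vdot(v1, v2)) ** 2

# A complex line (second spanning vector is i times the first): pairing has
# modulus 1, so rho = 0.
print(round(rho([1, 0], [1j, 0]), 6))
# A totally real plane spanned by (1,0) and (0,1): pairing is 0, so rho = 1.
print(round(rho([1, 0], [0, 1]), 6))
```

Summing such weights against `vol_2` and the outer angles over the 2-faces would then give the pseudovolume; the face data could be exported from polymake and the float work done externally, which sidesteps the float-linear-algebra limitation mentioned above.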
http://zbmath.org/?q=an:1119.49025
# zbMATH — the first resource for mathematics

Well posedness in vector optimization problems and vector variational inequalities. (English) Zbl 1119.49025

Summary: We give notions of well posedness for a vector optimization problem and a vector variational inequality of the differential type. First, the basic properties of well-posed vector optimization problems are studied and the case of $C$-quasiconvex problems is explored.
Further, we investigate the links between the well posedness of a vector optimization problem and of a vector variational inequality. We show that, under the convexity of the objective function $f$, the two notions coincide. These results extend properties which are well known in scalar optimization.

##### MSC:

49K40 Sensitivity, stability, well-posedness of optimal solutions
90C29 Multi-objective programming; goal programming
47J20 Inequalities involving nonlinear operators
· Zbl 0441.49011 · doi:10.1090/S0273-0979-1979-14595-6 [8] Lucchetti, R., and Patrone, F., A Characterization of Tykhonov Well Posedness for Minimum Problems, with Applications to Variational Inequalities. Numerical Functional Analysis and Optimization, Vol. 3, pp. 461–476, 1981. · Zbl 0479.49025 · doi:10.1080/01630568108816100 [9] Loridan, P., Well Posedness in Vector Optimization, Recent Developments in Well-Posed Variational Problems, Edited by R. Lucchetti and J. Revalski, Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, Netherlands, Vol. 331, pp. 171–192, 1995. [10] Miglierina, E., Molho, E., and Rocca, M., Well Posedness and Scalarization in Vector Optimization, Journal of Optimization Theory and Applications, Vol. 126, pp. 391–409, 2005. · Zbl 1129.90346 · doi:10.1007/s10957-005-4723-1 [11] Giannessi, F., Theorems of the Alternative, Quadratic Programs, and Complementarity Problems, Variational Inequalities and Complementarity Problems: Theory and Applications, Edited by R.W. Cottle, F. Giannessi, and J. L. Lions, Wiley, New York, N.Y., pp. 151–186, 1980. [12] Giannessi, F., On Minty Variational Principle, New Trends in Mathematical Programming, Edited by F. Giannessi, S. Komlósi, and T. Rapesák, Kluwer Academic Publishers, Boston, Massachussetts, pp. 93–99, 1998. [13] Bednarczuk, E., and Penot, J. P., On the Positions of the Notion of Well-Posed Minimization Problems, Bollettino dell’Unione Matematica Italiana, Vol. 7, pp. 665–683, 1992. [14] Bednarczuk, E., and Penot, J. P., Metrically Well-Set Minimization Problems, Applied Mathematics and Optimization, Vol. 26, pp. 273–285, 1992. · Zbl 0762.90073 · doi:10.1007/BF01371085 [15] Luc, D. T., Theory of Vector Optimization, Lecture Notes in Economics and Mathematical Systems, Springer Verlag, Berlin, Germany, Vol. 319, 1989. [16] Tammer, C., A Generalization of Ekeland’s Variational Principle, Optimization, Vol.$\sim$25, pp. 129–141, 1992. [17] Rockafellar, R. T., and Wets, R. J. 
B., Variational analysis, Grundlehren der Mathematischen Wissenschaften, Springer-Verlag, Berlin, Germany, Vol. 317, 1998. [18] Hiriart-Hurruty, J. B., Tangent Cones, Generalized Gradients and Mathematical Programming in Banach Spaces, Mathematical Methods of Operations Research, Vol.$\sim$4, pp. 79–97, 1979. [19] Gorohovik, V. V., Convex and Nonsmooth Optimization Problems of Vector Optimization, Navuka i Tékhnika, Minsk, Ukraina, Vol. 240, 1990 (in Russian). [20] Ciligot-Travain, M., On Lagrange-Kuhn-Tucker Multipliers for Pareto Optimization Problems, Numerical Functional Analysis and Optimization, Vol. 15, pp. 689–693, 1994. [21] Amahroq, T., and Taa, A., On Lagrange-Kuhn-Tucker Multipliers for Multiobjective Optimization Problems, Optimization, Vol. 41, pp. 159–172, 1997. [22] Zaffaroni, A., Degrees of Efficiency and Degrees of Minimality, SIAM Journal Control and Optimization, Vol. 42, 1071–1086, 2003. [23] Nikodem, K., Continuity of K–Convex Set-Valued Functions, Bulletin of the Polish Academy of Sciences, Vol. 24, pp. 393–400, 1986. [24] Henkel, E. C., and Tammer, C., -Variational Inequalities in Partially Ordered Spaces, Optimization, Vol. 36, 105–118, 1992.
http://mtosmt.org/issues/mto.17.23.3/blattler_examples.php?id=1&nonav=true
Example 2. Comparison of the impact of chord inversion upon an additive chord and upon a triad:
(a) inversion of the penultimate chord of Example 1
(b) replacement of the penultimate chord of Example 1 with a dominant triad
(c) inversion of the penultimate chord of Example 2b
https://avidemia.com/pure-mathematics/miscellaneous-examples-on-chapter-ix/
1. Given that $$\log_{10} e = .4343$$ and that $$2^{10}$$ and $$3^{21}$$ are nearly equal to powers of $$10$$, calculate $$\log_{10}2$$ and $$\log_{10}3$$ to four places of decimals. 2. Determine which of $$(\frac{1}{2}e)^{\sqrt{3}}$$ and $$(\sqrt{2})^{\frac{1}{2}\pi}$$ is the greater. [Take logarithms and observe that $$\sqrt{3}/(\sqrt{3} + \frac{1}{4}\pi) < \frac{2}{5} \sqrt{3} < .6929 < \log 2$$.] 3. Show that $$\log_{10}n$$ cannot be a rational number if $$n$$ is any positive integer not a power of $$10$$. [If $$n$$ is not divisible by $$10$$, and $$\log_{10}n = p/q$$, we have $$10^{p} = n^{q}$$, which is impossible, since $$10^{p}$$ ends with $$0$$ and $$n^{q}$$ does not. If $$n = 10^{a}N$$, where $$N$$ is not divisible by $$10$$, then $$\log_{10}N$$ and therefore $\log_{10}n = a + \log_{10}N$ cannot be rational.] 4. For what values of $$x$$ are the functions $$\log x$$, $$\log\log x$$, $$\log\log\log x$$, … (a) equal to $$0$$ (b) equal to $$1$$ (c) not defined? Consider also the same question for the functions $$lx$$, $$llx$$, $$lllx$$, …, where $$lx = \log |x|$$. 5. Show that $\log x – \binom{n}{1} \log(x + 1) + \binom{n}{2} \log(x + 2) – \dots + (-1)^{n} \log(x + n)$ is negative and increases steadily towards $$0$$ as $$x$$ increases from $$0$$ towards $$\infty$$. [The derivative of the function is $\sum_{0}^{n} (-1)^{r} \binom{n}{r} \frac{1}{x + r} = \frac{n!}{x(x + 1) \dots (x + n)},$ as is easily seen by splitting up the right-hand side into partial fractions. This expression is positive, and the function itself tends to zero as $$x \to \infty$$, since $\log(x + r) = \log x + \epsilon_{x},$ where $$\epsilon_{x} \to 0$$, and $$1 – \dbinom{n}{1} + \dbinom{n}{2} – \dots = 0$$.] 6. Prove that $\left(\frac{d}{dx}\right)^{n} \frac{\log x}{x} = \frac{(-1)^{n} n!}{x^{n+1}} \left(\log x – 1 – \frac{1}{2} – \dots – \frac{1}{n}\right).$ 7. If $$x > -1$$ then $$x^{2} > (1 + x) \{\log(1 + x)\}^{2}$$. 
[Put $$1 + x = e^{\xi}$$, and use the fact that $$\sinh \xi > \xi$$ when $$\xi > 0$$.] 8. Show that $$\{\log(1 + x)\}/x$$ and $$x/\{(1 + x)\log(1 + x)\}$$ both decrease steadily as $$x$$ increases from $$0$$ towards $$\infty$$. 9. Show that, as $$x$$ increases from $$-1$$ towards $$\infty$$, the function $$(1 + x)^{-1/x}$$ assumes once and only once every value between $$0$$ and $$1$$. 10. Show that $$\dfrac{1}{\log(1 + x)} – \dfrac{1}{x} \to \dfrac{1}{2}$$ as $$x \to 0$$. 11. Show that $$\dfrac{1}{\log(1 + x)} – \dfrac{1}{x}$$ decreases steadily from $$1$$ to $$0$$ as $$x$$ increases from $$-1$$ towards $$\infty$$. [The function is undefined when $$x = 0$$, but if we attribute to it the value $$\frac{1}{2}$$ when $$x = 0$$ it becomes continuous for $$x = 0$$. Use Ex. 7 to show that the derivative is negative.] 12. Show that the function $$(\log \xi – \log x)/(\xi – x)$$, where $$\xi$$ is positive, decreases steadily as $$x$$ increases from $$0$$ to $$\xi$$, and find its limit as $$x \to \xi$$. 13. Show that $$e^{x} > Mx^{N}$$, where $$M$$ and $$N$$ are large positive numbers, if $$x$$ is greater than the greater of $$2\log M$$ and $$16N^{2}$$. [It is easy to prove that $$\log x < 2\sqrt{x}$$; and so the inequality given is certainly satisfied if $x > \log M + 2N\sqrt{x},$ and therefore certainly satisfied if $$\frac{1}{2}x > \log M$$, $$\frac{1}{2}x > 2N\sqrt{x}$$.] 14. If $$f(x)$$ and $$\phi(x)$$ tend to infinity as $$x \to \infty$$, and $$f'(x)/\phi'(x) \to \infty$$, then $$f(x)/\phi(x) \to \infty$$. [Use the result of Ch. VI, Misc. Ex. 33.] By taking $$f(x) = x^{\alpha}$$, $$\phi(x) = \log x$$, prove that $$(\log x)/x^{\alpha} \to 0$$ for all positive values of $$\alpha$$. 15. If $$p$$ and $$q$$ are positive integers then $\frac{1}{pn + 1} + \frac{1}{pn + 2} + \dots + \frac{1}{qn} \to \log\left(\frac{q}{p}\right)$ as $$n \to \infty$$. [Cf. Ex. LXXVIII. 6.] 16. 
Prove that if $$x$$ is positive then $$n\log\{\frac{1}{2}(1 + x^{1/n})\} \to -\frac{1}{2}\log x$$ as $$n \to \infty$$. [We have $n\log\{\tfrac{1}{2}(1 + x^{1/n})\} = n\log\{1 – \tfrac{1}{2}(1 – x^{1/n})\} = \tfrac{1}{2}n(1 – x^{1/n}) \frac{\log(1 – u)}{u}$ where $$u = \frac{1}{2}(1 – x^{1/n})$$. Now use § 209 and Ex. LXXXII. 4.] 17. Prove that if $$a$$ and $$b$$ are positive then $\{\tfrac{1}{2}(a^{1/n} + b^{1/n})\}^{n} \to \sqrt{ab}.$ [Take logarithms and use Ex. 16.] 18. Show that $1 + \frac{1}{3} + \frac{1}{5} + \dots + \frac{1}{2n – 1} = \tfrac{1}{2}\log n + \log 2 + \tfrac{1}{2} \gamma + \epsilon_{n},$ where $$\gamma$$ is Euler’s constant (Ex. LXXXIX. 1) and $$\epsilon_{n} \to 0$$ as $$n \to \infty$$. 19. Show that $1 + \tfrac{1}{3} – \tfrac{1}{2} + \tfrac{1}{5} + \tfrac{1}{7} – \tfrac{1}{4} + \tfrac{1}{9} + \dots = \tfrac{3}{2} \log 2,$ the series being formed from the series $$1 – \frac{1}{2} + \frac{1}{3} – \dots$$ by taking alternately two positive terms and then one negative. [The sum of the first $$3n$$ terms is $\begin{gathered} 1 + \frac{1}{3} + \frac{1}{5} + \dots + \frac{1}{4n – 1} – \frac{1}{2} \left(1 + \frac{1}{2} + \dots + \frac{1}{n}\right)\\ = \tfrac{1}{2}\log 2n + \log 2 + \tfrac{1}{2}\gamma + \epsilon_{n} – \tfrac{1}{2}(\log n + \gamma + \epsilon_{n}’), \end{gathered}$ where $$\epsilon_{n}$$ and $$\epsilon’_{n}$$ tend to $$0$$ as $$n \to \infty$$. (Cf. Ex. LXXVIII. 6).] 20. Show that $$1 – \frac{1}{2} – \frac{1}{4} + \frac{1}{3} – \frac{1}{6} – \frac{1}{8} + \frac{1}{5} – \frac{1}{10} – \dots = \frac{1}{2}\log 2$$. 21. Prove that $\sum_{1}^{n} \frac{1}{\nu(36\nu^{2} – 1)} = -3 + 3\Sigma_{3n+1} – \Sigma_{n} – S_{n}$ where $$S_{n} = 1 + \dfrac{1}{2} + \dots + \dfrac{1}{n}$$, $$\Sigma_{n} = 1 + \dfrac{1}{3} + \dots + \dfrac{1}{2n – 1}$$. Hence prove that the sum of the series when continued to infinity is $-3 + \tfrac{3}{2}\log 3 + 2\log 2.$ 22. 
Show that $\sum_{1}^{\infty} \frac{1}{n(4n^{2} – 1)} = 2\log 2 – 1, \quad \sum_{1}^{\infty} \frac{1}{n(9n^{2} – 1)} = \tfrac{3}{2}(\log 3 – 1).$ 23. Prove that the sums of the four series $\sum_{1}^{\infty} \frac{1}{4n^{2} – 1},\quad \sum_{1}^{\infty} \frac{(-1)^{n-1}}{4n^{2} – 1},\quad \sum_{1}^{\infty} \frac{1}{(2n + 1)^{2} – 1},\quad \sum_{1}^{\infty} \frac{(-1)^{n-1}}{(2n + 1)^{2} – 1}$ are $$\frac{1}{2}$$, $$\frac{1}{4}\pi – \frac{1}{2}$$, $$\frac{1}{4}$$, $$\frac{1}{2}\log 2 – \frac{1}{4}$$ respectively. 24. Prove that $$n!\, (a/n)^{n}$$ tends to $$0$$ or to $$\infty$$ according as $$a < e$$ or $$a > e$$. [If $$u_{n} = n!\, (a/n)^{n}$$ then $$u_{n+1}/u_{n} = a\{1 + (1/n)\}^{-n} \to a/e$$. It can be shown that the function tends to $$\infty$$ when $$a = e$$: for a proof, which is rather beyond the scope of the theorems of this chapter, see Bromwich’s Infinite Series, pp. 461 et seq.] 25. Find the limit as $$x \to \infty$$ of $\left(\frac{a_{0} + a_{1} x + \dots + a_{r} x^{r}} {b_{0} + b_{1} x + \dots + b_{r} x^{r}}\right)^{\lambda_{0}+\lambda_{1}x},$ distinguishing the different cases which may arise. 26. Prove that $\sum \log \left(1 + \frac{x}{n}\right)\quad (x > 0)$ diverges to $$\infty$$. [Compare with $$\sum (x/n)$$.] Deduce that if $$x$$ is positive then $(1 + x)(2 + x) \dots (n + x)/n! \to \infty$ as $$n \to \infty$$. [The logarithm of the function is $$\sum\limits_{1}^{n} \log \left(1 + \dfrac{x}{\nu}\right)$$.] 27. Prove that if $$x > -1$$ then $\begin{gathered} \frac{1}{(x + 1)^{2}} = \frac{1}{(x + 1) (x + 2)} + \frac{1!}{(x + 1) (x + 2) (x + 3)}\\ + \frac{2!}{(x + 1) (x + 2) (x + 3) (x + 4)} + \dots.\end{gathered}$ [The difference between $$1/(x + 1)^{2}$$ and the sum of the first $$n$$ terms of the series is $\frac{1}{(x + 1)^{2}}\, \frac{n!}{(x + 2) (x + 3) \dots (x + n + 1)}.]$ 28. 
No equation of the type $Ae^{\alpha x} + Be^{\beta x} + \dots = 0,$ where $$A$$, $$B$$, … are polynomials and $$\alpha$$, $$\beta$$, … different real numbers, can hold for all values of $$x$$. [If $$\alpha$$ is the algebraically greatest of $$\alpha$$, $$\beta$$, …, then the term $$Ae^{\alpha x}$$ outweighs all the rest as $$x \to \infty$$.] 29. Show that the sequence $a_{1} = e,\quad a_{2} = e^{e^{2}},\quad a_{3} = e^{e^{e^{3}}},\ \dots$ tends to infinity more rapidly than any member of the exponential scale. [Let $$e_{1}(x) = e^{x}$$, $$e_{2}(x) = e^{e_{1}(x)}$$, and so on. Then, if $$e_{k}(x)$$ is any member of the exponential scale, $$a_{n} > e_{k}(n)$$ when $$n > k$$.] 30. Prove that $\frac{d}{dx} \{\phi(x)\}^{\psi(x)} = \frac{d}{dx} \{\phi(x)\}^{\alpha} + \frac{d}{dx} \{\beta^{\psi(x)}\}$ where $$\alpha$$ is to be put equal to $$\psi(x)$$ and $$\beta$$ to $$\phi(x)$$ after differentiation. Establish a similar rule for the differentiation of $$\phi(x)^{[\{\psi(x)\}^{\chi(x)}]}$$. 31. Prove that if $$D_{x}^{n} e^{-x^{2}} = e^{-x^{2}} \phi_{n}(x)$$ then (i) $$\phi_{n}(x)$$ is a polynomial of degree $$n$$, (ii) $$\phi_{n+1} = -2x\phi_{n} + \phi_{n}’$$, and (iii) all the roots of $$\phi_{n} = 0$$ are real and distinct, and separated by those of $$\phi_{n-1} = 0$$. [To prove (iii) assume the truth of the result for $${\kappa} = 1$$, $$2$$, …, $${n}$$, and consider the signs of $${\phi_{n+1}}$$ for the $$n$$ values of $$x$$ for which $${\phi_{n}} = 0$$ and for large (positive or negative) values of $$x$$.] 32. The general solution of $$f(xy) = f(x)f(y)$$, where $$f$$ is a differentiable function, is $$x^{a}$$, where $$a$$ is a constant: and that of $f(x + y) + f(x – y) = 2f(x)f(y)$ is $$\cosh ax$$ or $$\cos ax$$, according as $$f”(0)$$ is positive or negative. [In proving the second result assume that $$f$$ has derivatives of the first three orders. 
Then $2f(x) + y^{2}\{f”(x) + \epsilon_{y}\} = 2f(x)[f(0) + yf'(0) + \tfrac{1}{2} y^{2}\{f”(0) + \epsilon_{y}’\}],$ where $$\epsilon_{y}$$ and $$\epsilon_{y}’$$ tend to zero with $$y$$. It follows that $$f(0) = 1$$, $$f'(0) = 0$$, $$f”(x) = f”(0)f(x)$$, so that $$a = \sqrt{f”(0)}$$ or $$a = \sqrt{-f”(0)}$$.] 33. How do the functions $$x^{\sin(1/x)}$$, $$x^{\sin^{2}(1/x)}$$, $$x^{\csc(1/x)}$$ behave as $$x \to +0$$? 34. Trace the curves $$y = \tan x e^{\tan x}$$, $$y = \sin x \log \tan \frac{1}{2}x$$. 35. The equation $$e^{x} = ax + b$$ has one real root if $$a < 0$$ or $$a = 0$$, $$b > 0$$. If $$a > 0$$ then it has two real roots or none, according as $$a\log a > b – a$$ or $$a\log a < b – a$$. 36. Show by graphical considerations that the equation $$e^{x} = ax^{2} + 2bx + c$$ has one, two, or three real roots if $$a > 0$$, none, one, or two if $$a < 0$$; and show how to distinguish between the different cases. 37. Trace the curve $$y = \dfrac{1}{x} \log\left(\dfrac{e^{x} – 1}{x}\right)$$, showing that the point $$(0, \frac{1}{2})$$ is a centre of symmetry, and that as $$x$$ increases through all real values, $$y$$ steadily increases from $$0$$ to $$1$$. Deduce that the equation $\frac{1}{x} \log\left(\frac{e^{x} – 1}{x}\right) = \alpha$ has no real root unless $$0 < \alpha < 1$$, and then one, whose sign is the same as that of $$\alpha – \frac{1}{2}$$. [In the first place $y – \tfrac{1}{2} = \frac{1}{x} \left\{\log\left(\frac{e^{x} – 1}{x}\right) – \log e^{\frac{1}{2} x}\right\} = \frac{1}{x} \log\left(\frac{\sinh \frac{1}{2}x}{\frac{1}{2}x}\right)$ is clearly an odd function of $$x$$. Also $\frac{dy}{dx} = \frac{1}{x^{2}} \left\{\tfrac{1}{2} x\coth \tfrac{1}{2}x – 1 – \log\left(\frac{\sinh \frac{1}{2}x}{\frac{1}{2}x}\right)\right\}.$ The function inside the large bracket tends to zero as $$x \to 0$$; and its derivative is $\frac{1}{x} \left\{1 – \left(\frac{\frac{1}{2}x}{\sinh \frac{1}{2}x}\right)^2\right\},$ which has the sign of $$x$$. 
Hence $$dy/dx > 0$$ for all values of $$x$$.] 38. Trace the curve $$y = e^{1/x} \sqrt{x^{2} + 2x}$$, and show that the equation $e^{1/x} \sqrt{x^{2} + 2x} = \alpha$ has no real roots if $$\alpha$$ is negative, one negative root if $0 < \alpha < a = e^{1/\sqrt{2}} \sqrt{2 + 2\sqrt{2}},$ and two positive roots and one negative if $$\alpha > a$$. 39. Show that the equation $$f_{n}(x) = 1 + x + \dfrac{x^{2}}{2!} + \dots + \dfrac{x^{n}}{n!} = 0$$ has one real root if $$n$$ is odd and none if $$n$$ is even. [Assume this proved for $$n = 1$$, $$2$$, … $$2k$$. Then $$f_{2k+1}(x) = 0$$ has at least one real root, since its degree is odd, and it cannot have more since, if it had, $$f’_{2k+1}(x)$$ or $$f_{2k}(x)$$ would have to vanish once at least. Hence $$f_{2k+1}(x) = 0$$ has just one root, and so $$f_{2k+2}(x) = 0$$ cannot have more than two. If it has two, say $$\alpha$$ and $$\beta$$, then $$f’_{2k+2}(x)$$ or $$f_{2k+1}(x)$$ must vanish once at least between $$\alpha$$ and $$\beta$$, say at $$\gamma$$. And $f_{2k+2}(\gamma) = f_{2k+1}(\gamma) + \frac{\gamma^{2k+2}}{(2k + 2)!} > 0.$ But $$f_{2k+2}(x)$$ is also positive when $$x$$ is large (positively or negatively), and a glance at a figure will show that these results are contradictory. Hence $$f_{2k+2}(x) = 0$$ has no real roots.] 40. Prove that if $$a$$ and $$b$$ are positive and nearly equal then $\log \frac{a}{b} = \frac{1}{2}(a – b) \left(\frac{1}{a} + \frac{1}{b}\right),$ approximately, the error being about $$\frac{1}{6}\{(a – b)/a\}^{3}$$. [Use the logarithmic series. This formula is interesting historically as having been employed by Napier for the numerical calculation of logarithms.] 41. 
Prove by multiplication of series that if $$-1 < x < 1$$ then \begin{aligned} \tfrac{1}{2}\{\log(1 + x)\}^{2} &= \tfrac{1}{2} x^{2} – \tfrac{1}{3}(1 + \tfrac{1}{2})x^{3} + \tfrac{1}{4}(1 + \tfrac{1}{2} + \tfrac{1}{3})x^{4} – \dots,\\ \tfrac{1}{2}(\arctan x)^{2} &= \tfrac{1}{2} x^{2} – \tfrac{1}{4}(1 + \tfrac{1}{3})x^{4} + \tfrac{1}{6}(1 + \tfrac{1}{3} + \tfrac{1}{5})x^{6} – \dots.\end{aligned} 42. Prove that $(1 + \alpha x)^{1/x} = e^{\alpha}\{1 – \tfrac{1}{2} a^{2}x + \tfrac{1}{24}(8 + 3a)a^{3}x^{2}(1 + \epsilon_{x})\},$ where $$\epsilon_{x} \to 0$$ with $$x$$. 43. The first $$n + 2$$ terms in the expansion of $$\log\left(1 + x + \dfrac{x^{2}}{2!} + \dots + \dfrac{x^{n}}{n!}\right)$$ in powers of $$x$$ are $x – \frac{x^{n+1}}{n!} \left\{\frac{1}{n + 1} – \frac{x}{1!\, (n + 2)} + \frac{x^{2}}{2!\, (n + 3)} – \dots + (-1)^{n} \frac{x^{n}}{n!\, (2n + 1)} \right\}.$ 44. Show that the expansion of $\exp \left(-x – \frac{x^{2}}{2} – \dots – \frac{x^{n}}{n}\right)$ in powers of $$x$$ begins with the terms $1 – x + \frac{x^{n+1}}{n + 1} – \sum_{s=1}^{n} \frac{x^{n+s+1}}{(n + s)(n + s + 1)}.$ 45. Show that if $$-1 < x < 1$$ then \begin{aligned} \frac{1}{3}x + \frac{1\cdot4}{3\cdot6}2^{2}x^{2} + \frac{1\cdot4\cdot7}{3\cdot6\cdot9}3^{2}x^{3} + \dots &= \frac{x(x + 3)}{9(1 – x)^{7/3}},\\ \frac{1}{3}x + \frac{1\cdot4}{3\cdot6}2^{3}x^{2} + \frac{1\cdot4\cdot7}{3\cdot6\cdot9}3^{3}x^{3} + \dots &= \frac{x(x^{2} + 18x + 9)}{27(1 – x)^{10/3}}.\end{aligned} [Use the method of Ex. XCII. 6. The results are more easily obtained by differentiation; but the problem of the differentiation of an infinite series is beyond our range.] 46. 
Prove that \begin{aligned} \int_{0}^{\infty} \frac{dx}{(x + a)(x + b)} &= \frac{1}{a – b} \log\left(\frac{a}{b}\right), \\ \int_{0}^{\infty} \frac{dx}{(x + a)(x + b)^{2}} &= \frac{1}{(a – b)^{2}b}\left\{a – b – b\log\left(\frac{a}{b}\right)\right\},\\ \int_{0}^{\infty} \frac{x\, dx}{(x + a)(x + b)^{2}} &= \frac{1}{(a – b)^{2}} \left\{a\log\left(\frac{a}{b}\right) – a + b\right\},\\ \int_{0}^{\infty} \frac{dx}{(x + a)(x^{2} + b^{2})} &= \frac{1}{(a^{2} + b^{2})b} \left\{\tfrac{1}{2}\pi a – b\log\left(\frac{a}{b}\right)\right\},\\ \int_{0}^{\infty} \frac{x\, dx}{(x + a)(x^{2} + b^{2})} &= \frac{1}{a^{2} + b^{2}} \left\{\tfrac{1}{2}\pi b + a\log\left(\frac{a}{b}\right)\right\},\end{aligned} provided that $$a$$ and $$b$$ are positive. Deduce, and verify independently, that each of the functions $a – 1 – \log a,\quad a\log a – a + 1,\quad \tfrac{1}{2}\pi a – \log a,\quad \tfrac{1}{2}\pi + a\log a$ is positive for all positive values of $$a$$. 47. Prove that if $$\alpha$$, $$\beta$$, $$\gamma$$ are all positive, and $$\beta^{2} > \alpha\gamma$$, then $\int_{0}^{\infty} \frac{dx}{\alpha x^{2} + 2\beta x + \gamma} = \frac{1}{\sqrt{\beta^{2} – \alpha\gamma}} \log \left\{\frac{\beta + \sqrt{\beta^{2} – \alpha\gamma}} {\sqrt{\alpha\gamma}} \right\};$ while if $$\alpha$$ is positive and $$\alpha\gamma > \beta^{2}$$ the value of the integral is $\frac{1}{\sqrt{\alpha\gamma – \beta^{2}}} \arctan \left\{\frac{\sqrt{\alpha\gamma – \beta^{2}}}{\beta}\right\},$ that value of the inverse tangent being chosen which lies between $$0$$ and $$\pi$$. Are there any other really different cases in which the integral is convergent? 48. 
Prove that if $$a > -1$$ then $\int_{1}^{\infty} \frac{dx}{(x + a)\sqrt{x^{2} – 1}} = \int_{0}^{\infty} \frac{dt}{\cosh t + a} = 2\int_{1}^{\infty}\frac{du}{u^{2} + 2au + 1};$ and deduce that the value of the integral is $\frac{2}{\sqrt{1 – a^{2}}} \arctan \sqrt{\frac{1 – a}{1 + a}}$ if $$-1 < a < 1$$, and $\frac{1}{\sqrt{a^{2} – 1}} \log\frac{\sqrt{a + 1} + \sqrt{a – 1}} {\sqrt{a + 1} – \sqrt{a – 1}} = \frac{2}{\sqrt{a^{2} – 1}} \operatorname{arg tanh} \sqrt{\frac{a – 1}{a + 1}}$ if $$a > 1$$. Discuss the case in which $$a = 1$$. 49. Transform the integral $$\int_{0}^{\infty} \frac{dx}{(x + a) \sqrt{x^{2} + 1}}$$, where $$a > 0$$, in the same ways, showing that its value is $\frac{1}{\sqrt{a^{2} + 1}} \log\frac{a + 1 + \sqrt{a^{2} + 1}}{a + 1 – \sqrt{a^{2} + 1}} = \frac{2}{\sqrt{a^{2} + 1}} \operatorname{arg tanh} \frac{\sqrt{a^{2} + 1}}{a + 1}.$ 50. Prove that $\int_{0}^{1} \arctan x\, dx = \tfrac{1}{4}\pi – \tfrac{1}{2}\log 2.$ 51. If $$0 < \alpha < 1$$, $$0 < \beta < 1$$, then $\int_{-1}^{1} \frac{dx}{\sqrt{(1 – 2\alpha x + \alpha^{2})(1 – 2\beta x + \beta^{2})}} = \frac{1}{\sqrt{\alpha\beta}} \log \frac{1 + \sqrt{\alpha\beta}}{1 – \sqrt{\alpha\beta}}.$ 52. Prove that if $$a > b > 0$$ then $\int_{-\infty}^{\infty} \frac{d\theta}{a\cosh \theta + b\sinh \theta} = \frac{\pi}{\sqrt{a^{2} – b^{2}}}{.}$ 53. Prove that $\int_{0}^{1} \frac{\log x}{1 + x^{2}}\, dx = -\int_{1}^{\infty} \frac{\log x}{1 + x^{2}}\, dx,\quad \int_{0}^{\infty} \frac{\log x}{1 + x^{2}}\, dx = 0,$ and deduce that if $$a > 0$$ then $\int_{0}^{\infty} \frac{\log x}{a^{2} + x^{2}}\, dx = \frac{\pi}{2a}\log a.$ [Use the substitutions $$x = 1/t$$ and $$x = au$$.] 54. Prove that $\int_{0}^{\infty} \log \left(1 + \frac{a^{2}}{x^{2}}\right) dx = \pi a$ if $$a > 0$$. [Integrate by parts.]
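Several of the closed forms above are easy to sanity-check numerically. Here is a sketch (in Python, with helper names of my own choosing) that checks Ex. 19 (the rearranged alternating harmonic series summing to $\frac{3}{2}\log 2$) and the first sum of Ex. 22 (equal to $2\log 2 - 1$) by partial summation:

```python
import math

def ex19_partial(n):
    # Sum of the first 3n terms of Ex. 19: the 2n odd-denominator positive
    # terms minus half of the n-th harmonic number.
    s = sum(1.0 / (2 * k - 1) for k in range(1, 2 * n + 1))
    return s - 0.5 * sum(1.0 / k for k in range(1, n + 1))

def ex22_partial(n):
    # Partial sum of 1/(m(4m^2 - 1)) from Ex. 22; terms decay like 1/(4 m^3).
    return sum(1.0 / (m * (4 * m * m - 1)) for m in range(1, n + 1))

print(abs(ex19_partial(100_000) - 1.5 * math.log(2)) < 1e-4)     # True
print(abs(ex22_partial(10_000) - (2 * math.log(2) - 1)) < 1e-8)  # True
```

The error of the first partial sum is of order $1/(4n)$, in line with the $\epsilon_{n}$ terms in the hint to Ex. 19; the second series converges like $\sum 1/(4m^{3})$, so ten thousand terms already give nine correct digits.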
https://socratic.org/questions/how-do-you-solve-2-ln-x-3-0-and-find-any-extraneous-solutions
Precalculus Topics

# How do you solve 2 ln (x + 3) = 0 and find any extraneous solutions?

Dividing both sides by 2 gives $\ln(x + 3) = 0$, so $x + 3 = e^{0} = 1$ and $x = -2$. Since $x + 3 = 1 > 0$, the logarithm is defined there, so $x = -2$ is a genuine solution. If instead the equation is first rewritten as $\ln{(x + 3)}^{2} = 0$, the squaring introduces the extra candidate $x + 3 = -1$, i.e. $x = -4$; that value makes $\ln(x + 3)$ undefined, so $x = -4$ is an extraneous solution.
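A quick numerical check (a sketch in Python; `math.log` is the natural logarithm and raises `ValueError` outside its domain):

```python
import math

def f(x):
    # Left-hand side of 2 ln(x + 3) = 0; math.log raises ValueError when x + 3 <= 0.
    return 2 * math.log(x + 3)

print(f(-2))  # the genuine solution: 2 * ln(1) = 0.0
try:
    f(-4)     # the extraneous candidate lies outside the domain of ln(x + 3)
except ValueError:
    print("x = -4 is extraneous: ln(x + 3) is undefined there")
```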
https://www.physicsforums.com/threads/standard-ml-filter-and-mod.532378/
# Standard ML - Filter and mod

1. Sep 21, 2011

### azrarillian

Hi, I'm using SML and I'm trying to write a function that finds all the prime numbers in an int list. What I'm trying to do is write a function that removes any (and all) elements x of the list where x mod p = 0, where p is the first prime number (2); then I want to recurse so that the same is done for the next element after p, which should itself be a prime number. The only problem I have is that I don't know how to filter out or delete those elements of the list. I've tried to use the function 'filter', but I can't figure out how to take the elements in the tail of the list modulo the prime number. I know that there are other ways to find prime numbers, but this is the way I want to use for now; any other help is also welcome.

Last edited: Sep 21, 2011
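The missing piece is a predicate that tests divisibility by p; in SML that is `List.filter (fn x => x mod p <> 0) rest`. The same recursion, sketched here in Python for illustration (the function name is mine, not from the thread):

```python
def sieve(nums):
    # Treat the head of the (ascending) list as prime, drop its multiples
    # from the tail with a filter, and recurse on what is left.
    if not nums:
        return []
    p, rest = nums[0], nums[1:]
    return [p] + sieve([x for x in rest if x % p != 0])

print(sieve(list(range(2, 30))))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

In SML the list comprehension becomes `List.filter (fn x => x mod p <> 0) rest`, and cons (`p :: sieve (...)`) replaces the list concatenation.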
https://eprint.iacr.org/2009/151
## Cryptology ePrint Archive: Report 2009/151

Euclid's Algorithm, Gauss' Elimination and Buchberger's Algorithm

Shaohua Zhang

Abstract: It is known that Euclid's algorithm, Gauss' elimination and Buchberger's algorithm play important roles in algorithmic number theory, symbolic computation and cryptography, and even in science and engineering. The aim of this paper is to reveal again the relations among these three algorithms and to simplify Buchberger's algorithm without using the multivariate division algorithm. We obtain an algorithm for computing the greatest common divisor of several positive integers, which can be regarded as a generalization of Euclid's algorithm. This enables us to re-derive Gauss' elimination and further simplify Buchberger's algorithm for computing Gröbner bases of polynomial ideals in modern computational algebraic geometry.

Category / Keywords: Euclid's algorithm, Gauss' elimination, multivariate polynomial, Gröbner bases, Buchberger's algorithm

Date: received 9 Mar 2009, last revised 14 Jan 2010

Contact author: shaohuazhang at mail sdu edu cn

Available format(s): PDF | BibTeX Citation

Note: The paper has been improved.

Short URL: ia.cr/2009/151

[ Cryptology ePrint archive ]
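The gcd-of-several-integers construction the abstract mentions can be sketched as a left fold of the two-argument Euclidean algorithm over the list. This is the standard construction, not necessarily the paper's exact formulation:

```python
from functools import reduce

def gcd2(a, b):
    # Euclid's algorithm for two non-negative integers.
    while b:
        a, b = b, a % b
    return a

def gcd_many(nums):
    # gcd(a1, ..., an) = gcd(... gcd(gcd(a1, a2), a3) ..., an)
    return reduce(gcd2, nums)

print(gcd_many([48, 36, 60]))  # 12
```

The fold works because gcd is associative, so the several-integer gcd reduces to repeated applications of the two-integer case.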
http://www.pdffilestore.com/calculus-early-transcendentals-8th-edition/
Calculus: Early Transcendentals, 8th edition, is a math textbook by James Stewart. The book is a global best-seller because of its format, which uses clear, concise, and relevant real-world examples. The author uses the book to convey the usefulness of calculus, to improve technical proficiency, and to bring out the inherent beauty of the subject.

### Calculus Early Transcendentals 8th Edition Solution

The study materials are integrated with carefully worked examples to help build mathematical confidence and push the learner to succeed in the course. The book contains the following seventeen chapters:

Chapter 1 – Functions And Models
Chapter 2 – Limits And Derivatives
Chapter 3 – Differentiation Rules
Chapter 4 – Applications Of Differentiation
Chapter 5 – Integrals
Chapter 6 – Applications Of Integration
Chapter 7 – Techniques Of Integration
Chapter 8 – Further Applications Of Integration
Chapter 9 – Differential Equations
Chapter 10 – Parametric Equations And Polar Coordinates
Chapter 11 – Infinite Sequences And Series
Chapter 12 – Vectors And The Geometry Of Space
Chapter 13 – Vector Functions
Chapter 14 – Partial Derivatives
Chapter 15 – Multiple Integrals
Chapter 16 – Vector Calculus
Chapter 17 – Second-Order Differential Equations

Chapter format

We will look at the format of the first chapter to understand fully why this book is beloved by many students.
Chapter 1, Functions and Models, is laid out as follows:

1.1 Four Ways to Represent a Function, with exercises on page 19
1.2 Mathematical Models: A Catalog of Essential Functions, with exercises on page 33
1.3 New Functions from Old Functions, with exercises on page 42
1.4 Exponential Functions, with exercises on page 53
1.5 Inverse Functions and Logarithms, with exercises on page 66
Review: Concept Check on page 68
Review: True-False Quiz on page 69
Review exercises on page 69
Problems Plus on page 76

As you can clearly see, every chapter has practical exercises at the end of each lesson. At the end of each chapter there are also review questions, a true-false quiz, a Problems Plus section, and chapter review exercises with answers. The practical applications help a learner remember the subject matter much better. A sample question from the textbook:

Question: What is a function?

Answer: A function f is a rule that assigns to each element x of a set D exactly one value f(x); it can be viewed as the set of ordered pairs (x, f(x)). The set D of admissible values of x is the domain of f, and the set of all values f(x) is called the range R.

## James Stewart Calculus Solution

The student solutions manual for James Stewart's Calculus, 8th edition, is a book containing completed solutions to all of the exercises in the text. It gives calculus students a way to look at the solutions to the book's problems and make sure that they took the correct steps to come to an answer. The book is full of practice questions and their answers. Its chapters are:

1. Functions and Limits
2. Derivatives
3. Applications of Differentiation
4. Integrals
5. Applications of Integration
6. Inverse Functions: Exponential, Logarithmic, and Inverse Trigonometric Functions
7. Techniques of Integration
8. Further Applications of Integration
9. Differential Equations
10. Parametric Equations and Polar Coordinates
11. Infinite Sequences and Series
12. Vectors and the Geometry of Space
13. Vector Functions
14. Partial Derivatives
15. Multiple Integrals
16. Vector Calculus
17. Second-Order Differential Equations

The problem below is found in the first section, Chapter T, problem 1ADT. It asks you to evaluate each of the following expressions without a calculator:

(a) (−3)^4   (b) −3^4   (c) 3^−4   (d) 5^23 / 5^21

We will examine each answer individually.

a) The value of (−3)^4 is 81. Because the exponent is four, you multiply negative three by itself four times:

(−3)^4 = (−3) × (−3) × (−3) × (−3) = 9 × 9 = 81

So the value of (−3)^4 is 81.

b) The value of −3^4 is −81. Here the exponent applies only to the 3, so you multiply three by itself four times and then negate:

−3^4 = −(3 × 3 × 3 × 3) = −(9 × 9) = −81

So the value of −3^4 is −81.

c) The value of 3^−4 is 1/81. Since the exponent is −4, you multiply 1/3 by itself four times:

3^−4 = (1/3) × (1/3) × (1/3) × (1/3) = (1/9) × (1/9) = 1/81

So the value of 3^−4 is 1/81.
d) The value of 5^23 / 5^21 is 25. For a real number a and natural numbers m and n, the quotient rule for exponents gives:

a^m / a^n = a^(m−n)

Applying this formula to the given expression:

5^23 / 5^21 = 5^(23−21) = 5^2 = 25

So the final answer, the value of 5^23 / 5^21, is 25.

### Final words

The author, James Stewart, was a renowned mathematician. His books on mathematics are sought after because they offer practical lessons on how to learn the subject matter and sprinkle in many exercises that solidify each lesson. The book is filled with exercises that are easy to follow, along with their corresponding answers for review. No lesson is easier to learn than one with practical examples included. James Stewart's calculus books are very popular with calculus students because he offers real practical examples and also puts in extra effort providing practice problems and their solutions.
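The four evaluations above can be checked quickly in Python (a small sketch of my own, not from the textbook):

```python
# Parts (a)-(d) of the diagnostic problem, evaluated directly.
print((-3) ** 4)           # (−3)^4: the base −3 is raised to the 4th power
print(-(3 ** 4))           # −3^4: exponentiation binds tighter than negation
print(3 ** -4)             # 3^−4, i.e. 1/81
print(5 ** 23 // 5 ** 21)  # quotient rule: 5^(23−21) = 5^2
```

Note that Python's own precedence rules agree with the convention in part (b): `-3 ** 4` is parsed as `-(3 ** 4)`, giving −81.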
https://www.ias.ac.in/listing/bibliography/pram/Diptimoy_Ghosh
• Diptimoy Ghosh

Articles written in Pramana – Journal of Physics

• 𝐵 Physics: WHEPP-XI working group report

We present the report of the 𝐵 physics working group of the Workshop on High Energy Physics Phenomenology (WHEPP-XI), held at the Physical Research Laboratory, Ahmedabad, in January 2010.

• Physics beyond the Standard Model through $b \rightarrow s\mu^{+} \mu^{-}$ transition

A comprehensive study of the impact of new-physics operators with different Lorentz structures on decays involving the $b \rightarrow s\mu^{+} \mu^{-}$ transition is performed. The effects of new vector–axial-vector (VA), scalar–pseudoscalar (SP) and tensor (T) interactions on the differential branching ratios, forward–backward asymmetries ($A_{\text{FB}}$’s), and direct CP asymmetries of $\bar{B}_{s}^{0} \rightarrow \mu^{+} \mu^{-}$, $\bar{B}_{d}^{0} \rightarrow X_{s} \mu^{+} \mu^{-}$, $\bar{B}_{s}^{0} \rightarrow \mu^{+} \mu^{-} \gamma$, $\bar{B}_{d}^{0} \rightarrow \bar{K} \mu^{+} \mu^{-}$, and $\bar{B}_{d}^{0} \rightarrow \bar{K}^{*} \mu^{+} \mu^{-}$ are examined. In $\bar{B}_{d}^{0} \rightarrow \bar{K}^{*} \mu^{+} \mu^{-}$, we also explore the longitudinal polarization fraction $f_{\text{L}}$ and the angular asymmetries $A_{\text{T}}^{2}$ and $A_{\text{LT}}$, the direct CP asymmetries in them, as well as the triple-product CP asymmetries $A_{\text{T}}^{\text{(im)}}$ and $A_{\text{LT}}^{\text{(im)}}$. While the new VA operators can significantly enhance most of the observables beyond the Standard Model predictions, the SP and T operators can do this only for $A_{\text{FB}}$ in $\bar{B}_{d}^{0} \rightarrow \bar{K} \mu^{+} \mu^{-}$.

• $B_{s}$ data at Tevatron and possible new physics

The new physics (NP) is parametrized with four model-independent quantities: the magnitudes and phases of the dispersive part $M_{12}$ and the absorptive part $\Gamma_{12}$ of the NP contribution to the effective Hamiltonian.
We constrain these parameters using the four observables $\Delta M_{\text{s}}$, $\Delta \Gamma_{\text{s}}$, the mixing phase $\beta_{\text{s}}^{J/\psi \phi}$ and $A_{\text{sl}}^{b}$. This formalism is extended to include charge-parity-time reversal (CPT) violation, and it is shown that CPT violation by itself, or even in the presence of CPT-conserving NP without an absorptive part, helps only marginally in the simultaneous resolution of these anomalies.
https://arxiv.org/abs/cs/0410047
Title: Simple Distributed Weighted Matchings

Abstract: Wattenhofer [WW04] derive a complicated distributed algorithm to compute a weighted matching of an arbitrary weighted graph, that is at most a factor 5 away from the maximum weighted matching of that graph. We show that a variant of the obvious sequential greedy algorithm [Pre99], that computes a weighted matching at most a factor 2 away from the maximum, is easily distributed. This yields the best known distributed approximation algorithm for this problem so far.

Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Discrete Mathematics (cs.DM)
Cite as: arXiv:cs/0410047 [cs.DC] (or arXiv:cs/0410047v1 [cs.DC] for this version)

Submission history
From: Jaap-Henk Hoepman [view email]
[v1] Tue, 19 Oct 2004 09:00:06 GMT (9kb)
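The sequential greedy algorithm referenced here repeatedly takes the heaviest remaining edge and discards all edges sharing an endpoint with it, which yields a 2-approximation of the maximum weighted matching. A minimal sketch (function and variable names are my own, not from the paper):

```python
def greedy_weighted_matching(edges):
    """Greedy 2-approximation for maximum weighted matching.

    edges: list of (weight, u, v) tuples.
    Repeatedly takes the heaviest remaining edge whose endpoints
    are both still unmatched.
    """
    matching = []
    matched = set()
    for w, u, v in sorted(edges, reverse=True):  # heaviest edge first
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Example: a triangle; greedy keeps only the single heaviest edge,
# since the other two edges each share an endpoint with it.
edges = [(5, 'a', 'b'), (4, 'b', 'c'), (3, 'a', 'c')]
print(greedy_weighted_matching(edges))  # [('a', 'b')]
```

The distributed variant in the paper parallelizes this idea; the sketch above only shows the sequential baseline it approximates.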
https://www.redcrab-software.com/en/Calculator/Electrics/Voltage-Drop
# Voltage drop

Online calculators and formulas for calculating the voltage loss in a wire

## Calculate the loss voltage of a wire

This page calculates the voltage drop that is lost in a wire due to its resistance. To do this, the input voltage, the current, the one-way cable length, and the cable cross-section must be specified. A phase shift in the case of inductive loading can be specified as an option. A value of 1 is preset for cos φ for ohmic loads and direct current. The specific resistance or the conductance can be specified for the material of the conductor. The following table contains the most common conductance values.

#### Conductance σ in m/(Ω·mm²)

Copper 56.0
Silver 62.5
Aluminium 35.0

The calculator takes the voltage (V), load current (A), one-way wire length (m), cross-sectional area (mm²), cos φ, and either the resistivity (Ω·mm²/m) or the conductance of the material, and returns the loss voltage, useful voltage, voltage drop, and wire resistance.

### Legend

$$\displaystyle A$$ cross-section
$$\displaystyle l$$ length
$$\displaystyle R$$ resistance of the wire
$$\displaystyle ρ$$ specific resistance
$$\displaystyle σ$$ specific conductance
$$\displaystyle U_n$$ nominal voltage (input)
$$\displaystyle ΔU$$ loss voltage

*) Double the line length is used in the calculation (outward and return line).

## Formulas for voltage drop calculation

Single wire resistance

$$\displaystyle R=\frac{ρ · l}{A} =\frac{l}{σ · A}$$

Total wire resistance

$$\displaystyle R=2 ·\frac{ρ · l}{A} =2 ·\frac{l}{σ · A}$$

Loss voltage

$$\displaystyle ΔU=2 ·\frac{l}{σ · A}· I · \cos(φ)$$

Voltage drop in %

$$\displaystyle Δu=\frac{ΔU}{U_n} ·100 \%$$
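The formulas above translate directly into code. A rough sketch (my own function, not the site's implementation; the default conductance 56 m/(Ω·mm²) is the table value for copper):

```python
def voltage_drop(voltage, current, length_m, area_mm2,
                 cos_phi=1.0, conductance=56.0):
    """Voltage drop over a two-way wire run.

    conductance: sigma in m/(ohm*mm^2); 56.0 corresponds to copper.
    Returns (wire_resistance, loss_voltage, useful_voltage, drop_percent).
    """
    # Total wire resistance: factor 2 for outward and return line.
    resistance = 2 * length_m / (conductance * area_mm2)
    loss = resistance * current * cos_phi      # delta-U
    useful = voltage - loss                    # voltage left at the load
    percent = loss / voltage * 100             # drop relative to U_n
    return resistance, loss, useful, percent

# 230 V, 16 A, 25 m one-way run of 2.5 mm^2 copper wire:
r, loss, useful, pct = voltage_drop(230, 16, 25, 2.5)
print(f"R = {r:.3f} ohm, drop = {loss:.2f} V ({pct:.1f} %)")
```

For this example the total wire resistance is 2·25/(56·2.5) ≈ 0.357 Ω, giving a drop of about 5.7 V, roughly 2.5 % of the nominal voltage.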
https://physics.stackexchange.com/questions/218305/is-there-a-model-of-the-universe-with-the-transfinite-spacetime
# Is there a model of the universe with the transfinite (space)time?

In mathematics there is a concept of ordinal numbers where one can count to infinity and beyond. For example, the least number that is greater than all the finite numbers is denoted by $\omega$. Such a number $\omega$ is said to be a limit of the finite numbers or a limit ordinal. If one is counting as with natural numbers, the next numbers after $\omega$ are $\omega+1, \omega+2, \omega+3$, ... The limit of this sequence is a limit ordinal $\omega+\omega=\omega \cdot 2$. Then one could count from $\omega \cdot 2$ and so on. Eventually one would get to the number denoted by $\omega_1$ which represents the cardinality (or size) of the real numbers; note that the cardinality of the natural numbers is $\omega$. But then one could still go beyond.

When one needs to prove a statement which is true for all ordinal numbers, one can do so by transfinite induction. In the base case one proves the statement for the ordinal $0$. The inductive case has a successor case and a limit case. In the successor case, one assumes that the statement is true for an ordinal $\alpha$ and then proves it for the ordinal $\alpha+1$. In the limit case, if $\delta$ is a limit ordinal, then one assumes that the statement holds for all $\alpha < \delta$ and proves it for the ordinal $\delta$.

I am interested in a model of the universe that allows the possibility that spacetime, and especially the time dimension, is transfinite. The standard model of physics explains with its equations what happens in successor cases: given complete information about the system, one can derive the possible (I know this may be too simplified, but I do not know much about quantum mechanics) state of that system one second later, e.g. that a football would be 5 meters closer to the goalkeeper. However, I am looking for a theory that would have rules specifying what happens at the limit stage.
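For reference, the transfinite induction principle sketched above can be written as a single schema (standard formulation, not part of the original question; $\mathrm{Lim}(\delta)$ abbreviates "$\delta$ is a limit ordinal"):

```latex
\varphi(0)
\;\wedge\;
\forall\alpha\,\bigl(\varphi(\alpha)\rightarrow\varphi(\alpha+1)\bigr)
\;\wedge\;
\forall\delta\,\Bigl(\mathrm{Lim}(\delta)\wedge\forall\alpha<\delta\,\varphi(\alpha)\;\rightarrow\;\varphi(\delta)\Bigr)
\;\Longrightarrow\;
\forall\alpha\,\varphi(\alpha)
```

A transfinite physical theory in the sense of the question would need to supply the physics analogue of the third clause: a law determining the state at a limit time from the states at all earlier times.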
For example, if the theory claimed that the universe continues expanding and getting colder as its time approaches the time $\omega$ (infinity), then what would happen with the universe at the times $\omega$ and $\omega+1$?

I remember a talk from 6 years ago by some distinguished physicist (from Oxford, I think) who introduced a model of the universe where the universe would expand up to a very distant time in the future and then at some point it would start collapsing to a point, from which a Big Bang would occur again and a new universe would start. I think it would make sense for these crucial events, such as the change from expansion to contraction and from contraction to expansion, to happen at the limit stages of time. Similarly, he said that some universes could be richer than their predecessors according to certain patterns. But of course, at that time, I understood the talk only at a very intuitive level.

Note that some limit ordinals are stronger than others in the sense of the operations under which they are closed. For example, if $\alpha$ and $\beta$ are any ordinals less than $\omega_1$, then their sum, product, and exponentiation are less than $\omega_1$. On the other hand, $\omega+1$ is less than $\omega \cdot 2$, but $(\omega+1)+(\omega+1)=\omega+\omega+1=\omega \cdot 2 +1$, which is greater than $\omega \cdot 2$, so $\omega \cdot 2$ is not even closed under addition. One defines the constructible universe of sets as the union of successive classes $L_\alpha$ over the ordinals $\alpha$; see Constructible universe. It turns out that the richness of a class $L_\alpha$ depends much on how closed $\alpha$ is. Therefore I would expect that the physical universe at a limit ordinal would also have a much richer structure locally (wrt time), i.e. more laws and phenomena of a general theory could be observed and measured in the universe at that time.
So are there any models of the universe that consider the existence of a transfinite time dimension? I am also happy to be pointed to some references, but in such cases brief explanations included here will be appreciated. My background is mathematics, not physics, so please accept my apologies for an uneducated question.

• I'm not exactly sure what you mean by a transfinite model of spacetime. Are you thinking of using something like the long line? If yes, note that physics crucially relies on the differentiable structure of spacetime, which is highly non-unique for objects like the long line, so you get a host of problems associated with choosing the right one as soon as you allow such objects as spacetimes. – ACuriousMind Nov 13 '15 at 13:14

• @ACuriousMind The long line is a total order on $\omega_1 \times [0,1)$. Yes, I meant to use something similar to $\alpha\times[0,1)$ for an ordinal $\alpha$. But by the theorem of Simon Donaldson, $\mathbb{R}^4$ (spacetime) has uncountably many (or $\omega_1$-many) non-diffeomorphic structures. Cf. en.wikipedia.org/wiki/Exotic_R4. So do you not face the same problem in the standard model with spacetime already? – Dávid Natingga Nov 13 '15 at 13:34

• Well, it's "obvious" which one to choose for $\mathbb{R}^4$ - the non-exotic one. It's not obvious to me which one to choose on these weird objects. Also, for larger ordinals $\alpha$, $\alpha\times[0,1)$ is no longer a topological manifold, if I understand correctly, so this doesn't fit at all into usual models of spacetime. – ACuriousMind Nov 13 '15 at 13:36

• @ACuriousMind Yes, it is true that many concepts including the notion of spacetime would need to be generalized for such a model. So it seems that you have not come across such a model. – Dávid Natingga Nov 13 '15 at 13:43

• @ACuriousMind Could you please justify the evidence for the differential structure of spacetime being the standard $\mathbb{R}^4$ and not an exotic one?
– krzysiekb Feb 3 '17 at 10:34

I think that with this question you are overstretching the boundaries of applicability of maths to physics. I think yours is ultimately a philosophical question, so an answer will also have to be somewhat philosophical.

It has often been stated how remarkable it is that maths is so unreasonably effective at describing physics. Indeed this seems miraculous, but it undoubtedly plays a role that our most basic maths concepts stem quite directly from the world around us (natural numbers from counting, rational numbers from ratios and then lengths, real numbers from limits of lengths, etc). Axiomatizing this already leads to some problems, but in general these are ignored without grave consequences. When you define more and more abstract concepts, you may run into concepts that don't have any obvious link to the world around us anymore. An example is one that you gave yourself: what is the cardinality of $\omega_1$? You said that it is the cardinality of the continuum, but I'm sure you are aware yourself of the fact that the truth of that statement is independent of ZFC, which by many is considered to be the basis of mathematical axiomatization: both its assertion and its negation can be postulated without introducing contradictions. For physical applications of ordinal numbers to make a decision on this seems to be a minimal requirement, but since it doesn't seem to be possible to base the decision on observation, I don't think such a model could be useful.

It would appear to be important that ordinal numbers derive from, and thus must have been preceded by, cardinal numbers, and also that ordinal numbers themselves imply/require spacetime, given their special temporal/sequential interrelationship, whereas cardinal numbers do not.
To my mind, this implies a model of the universe requiring an a priori "period" preceding spacetime where there was indeed a singularity -- that is, a single dimension mapped only by cardinal numbers, before the second dimension, spacetime, emerged. The mass-energy equation suggests that this first static dimension was one of mass, followed by the introduction of a ordinal-oriented spacetime dimension through the introduction of electromagnetism. Indeed, such a model predicts the existence of dark matter as our view of the primal mass/matter not yet implicated in spacetime/electromagnetism (hence its invisibility to us), which charged/changed those particles into the standard elementary particles. The Big Bang would then be the explosion of mass, countable with natural numbers, into a new transfinite realm structured by spacetime. The very nature of ordinal numbers suggests that this structuring should not have been instantaneous at the Big Bang but should have progressed sequentially -- the expanding universe backs this up, perhaps, as more dark matter is structured and incorporated into spacetime. Again, this is a mathematical model which I offer to further exploration of this concept. My apologies for any blatant misconceptions and for reintroducing the idea of "let there be light" as a founding principle of creation. • You should make use of existing theories. If you want to go a bit outside the norm, that is fine but you shouldn't write personal theories. – Yashas Feb 15 '17 at 6:03 • @YashasSamaga I think his post quality is far above the typical own-theory propagators, but unfortunately it is still offtopic. – peterh - Reinstate Monica Feb 15 '17 at 6:55 • @Yashas, peterh: Thanks for the feedback, I appreciate your politeness. I should have limited myself to asking this: if spacetime is transfinite, could this relate to a model including mass as a finite variable due to some implication of the cardinal/ordinal relationship? This relates better to the query. 
I am neither a mathematician nor a physicist (archivist, actually). I was hoping someone more knowledgeable might explore this inkling I had - I am incompetent to do so myself but couldn't find literature on the concept apart from this query. – steigewaerter Feb 17 '17 at 3:49
https://vismor.com/documents/network_analysis/matrix_algorithms/S3.SS3.php
# 3.3 Solving Overdetermined Systems

An m × n system of linear equations with m > n is overdetermined: there are more equations than there are unknowns. "Solving" such a system is the process of reducing it to an n × n problem, then solving the reduced set of equations. A common technique for constructing a reduced set of equations is known as the least squares solution to the equations. The least squares equations are derived by premultiplying Equation 19 by $\mathbf{A}^{T}$, i.e.

$\mathbf{\left({A}^{T}A\right)x={A}^{T}b}$ (26)

Often Equation 26 is referred to as the normal equations of the linear least squares problem. The least squares terminology refers to the fact that the solution to Equation 26 minimizes the sum of the squares of the differences between the left and right sides of Equation 19.
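To make Equation 26 concrete, here is a small sketch (using NumPy; the data are made up for illustration and not from the text):

```python
import numpy as np

# Overdetermined system: 4 equations, 2 unknowns (m > n),
# fitting a line c + m*t through the points (1,6), (2,5), (3,7), (4,10).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Normal equations (A^T A) x = A^T b: an n-by-n reduced system.
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)  # approximately [3.5 1.4]

# The same least squares solution via the library routine:
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Note that forming $A^T A$ explicitly squares the condition number of the problem; routines like `np.linalg.lstsq` avoid this by working with a factorization of A directly.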
https://eventuallyalmosteverywhere.wordpress.com/category/analysis/variational-principles/
# Large Deviations 6 – Random Graphs

As a final instalment in this sequence of posts on Large Deviations, I’m going to try and explain how one might be able to apply some of the theory to a problem about random graphs. I should explain in advance that much of what follows will be a heuristic argument only. In a way, I’m more interested in explaining what the technical challenges are than trying to solve them. Not least because at the moment I don’t know exactly how to solve most of them. At the very end I will present a rate function, and reference properly the authors who have proved this. Their methods are related but not identical to what I will present.

Problem

Recall the two standard definitions of random graphs. As in many previous posts, we are interested in the sparse case where the average degree of a vertex is O(1). Anyway, we start with n vertices, and in one description we add an edge between any pair of vertices independently and with fixed probability $\frac{\lambda}{n}$. In the second model, we choose uniformly at random from the set of graphs with n vertices and $\frac{\lambda n}{2}$ edges. Note that if we take the first model and condition on the number of edges, we get the second model, since the probability of a given configuration appearing in G(n,p) is a function only of the number of edges present. Furthermore, the number of edges in G(n,p) is binomial with parameters $\binom{n}{2}$ and p. For all purposes here it will make no difference to approximate the former by $\frac{n^2}{2}$.

Of particular interest in the study of sparse random graphs is the phase transition in the size of the largest component observed as $\lambda$ passes 1. Below 1, the largest component has size on a scale of log n, and with high probability all components are trees. Above 1, there is a unique giant component containing $\alpha_\lambda n$ vertices, and all other components are small.
For $\lambda\approx 1$, where I don’t want to discuss what ‘approximately’ means right now, we have a critical window, for which there are infinitely many components with sizes on a scale of $n^{2/3}$. A key observation is that this holds irrespective of which model we are using. In particular, this is consistent. By the central limit theorem, we have that: $|E(G(n,\frac{\lambda}{n}))|\sim \text{Bin}\left(\binom{n}{2},\frac{\lambda}{n}\right)\approx \frac{n\lambda}{2}\pm\alpha,$ where $\alpha$ is the error due to CLT-scale fluctuations. In particular, these fluctuations are on a scale smaller than n, so in the limit have no effect on which value of $\lambda$ in the edge-specified model is appropriate. However, it is still a random model, so we can condition on any event which happens with positive probability, so we might ask: what does a supercritical random graph look like if we condition it to have no giant component? Assume for now that we are considering $G(n,\frac{\lambda}{n}),\lambda>1$. This deviation from standard behaviour might be achieved in at least two ways. Firstly, we might just have insufficient edges. If we have a large deviation towards too few edges, then this would correspond to a subcritical $G(n,\frac{\mu n}{2})$, so would have no giant components. However, it is also possible that the lack of a giant component is due to ‘clustering’. We might in fact have the correct number of edges, but they might have arranged themselves into a configuration that keeps the number of components small. For example, we might have a complete graph on $Kn^{1/2}$ vertices plus a whole load of isolated vertices. This has the correct number of edges, but certainly no giant component (that is an O(n) component). We might suspect that having too few edges would be the primary cause of having no giant component, but it would be interesting if clustering played a role. 
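The phase transition described above is easy to see empirically. A quick simulation of G(n, λ/n) with a union-find over the sampled edges (my own sketch, standard library only, not from the post):

```python
import random

def largest_component_fraction(n, lam, seed=0):
    """Fraction of vertices in the largest component of G(n, lam/n)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    p = lam / n
    for u in range(n):                 # sample each possible edge once
        for v in range(u + 1, n):
            if rng.random() < p:
                parent[find(u)] = find(v)

    sizes = {}
    for v in range(n):
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

f_sub = largest_component_fraction(1000, 0.5)
f_sup = largest_component_fraction(1000, 2.0)
print(f_sub)  # subcritical: a vanishing fraction of the vertices
print(f_sup)  # supercritical: close to alpha_2, roughly 0.8
```

With λ = 0.5 the largest component holds only O(log n / n) of the vertices, while with λ = 2 the fraction concentrates near $\alpha_2 \approx 0.8$, the survival probability of the associated branching process.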
In a previous post, I talked about more realistic models of complex networks, for which clustering beyond the levels of Erdos-Renyi is one of the properties we seek. There I described a few models which might produce some of these properties. Obviously another model is to take Erdos-Renyi and condition it to have lots of clustering but that isn’t hugely helpful as it is not obvious what the resulting graphs will in general look like. It would certainly be interesting if conditioning on having no giant component were enough to get lots of clustering.

To do this, we need to find a rate function for the size of the giant component in a supercritical random graph. Then we will assume that evaluating this near 0 gives the LD probability of having ‘no giant component’. We will then compare this to the straightforward rate function for the number of edges; in particular, evaluated at criticality, so the probability that we have a subcritical number of edges in our supercritical random graph. If they are the same, then this says that the surfeit of edges dominates clustering effects. If the former is smaller, then clustering may play a non-trivial role. If the former is larger, then we will probably have made a mistake, as we expect on a LD scale that having too few edges will almost surely lead to a subcritical component.

Methods

The starting point is the exploration process for components of the random graph. Recall we start at some vertex v and explore the component containing v depth-first, tracking the number of vertices which have been seen but not yet explored. We can extend this to all components by defining:

$S(0)=0, \quad S(t)=S(t-1)+(X(t)-1),$

where X(t) is the number of children of the t’th vertex. For a single component, S(t) is precisely the number of seen but unexplored vertices. It is more complicated in general. Note that when we exhaust the first component S(t)=-1, and then when we exhaust the second component S(t)=-2 and so on.
So in fact $S_t-\min_{0\leq s\leq t}S_s$ is the number of seen but unexplored vertices, with $\min_{0\leq s\leq t}S_s$ equal to (-1) times the number of components already explored up to time t. Once we know the structure of the first t vertices, we expect the distribution of X(t) – 1 to be

$\text{Bin}\Big(n-t-[S_t-\min_{0\leq s\leq t}S_s],\tfrac{\lambda}{n}\Big)-1.$

We aren’t interested in all the edges of the random graph, only in some tree skeleton of each component. So we don’t need to consider the possibility of edges connecting our current location to anywhere we’ve previously explored (such an edge would have been considered then – it’s a depth-first exploration), hence the $-t$. But we also don’t want to consider edges connecting our current location to anywhere we’ve seen but not yet explored, since that would be a surplus edge creating a cycle, hence the $-[S_t-\min_{0\leq s\leq t}S_s]$. The distribution is binomial because, by independence, even after all this conditioning the probability that there’s an edge from our current location to any given vertex apart from those discounted is $\frac{\lambda}{n}$, independently across vertices.

For Mogulskii’s theorem in the previous post, we had an LDP for the rescaled paths of a random walk with independent stationary increments. In this situation we have a random walk where the increments do not have this property. They are not stationary, because the pre-limit distribution depends on time. They are also not independent, because the distribution depends on behaviour up to time t, although only through the value of the walk at the present time. Nonetheless, at least by following through the heuristic of having an instantaneous exponential cost for an LD event, with products over time-steps becoming sums, and hence integrals, within the exponent, we would expect a similar result to hold in this case.
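In the limit, the increments above are distributed as $\text{Po}(\lambda')-1$ for a (time-dependent) parameter $\lambda'$, and the Cramér rate function of such a variable can be written down explicitly. For $X\sim\text{Po}(\lambda)$, the log-moment generating function of $X-1$ is

$\Lambda_\lambda(\theta)=\log\mathbb{E}e^{\theta(X-1)}=\lambda(e^\theta-1)-\theta,$

and the Legendre transform, with the supremum attained at $e^\theta=\frac{x+1}{\lambda}$, is

$\Lambda_\lambda^*(x)=\sup_\theta\left[\theta x-\Lambda_\lambda(\theta)\right]=(x+1)\log\frac{x+1}{\lambda}-(x+1)+\lambda,\qquad x>-1.$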
We can find the rate function $\Lambda_\lambda^*(x)$ of $\text{Po}(\lambda)-1$, and thus get a rate function for paths of the exploration process

$I_\lambda(f)=\int_0^1 \Lambda_{(1-t-\bar{f}(t))\lambda}^*(f')dt,$

where $\bar{f}(t)$ is the height of f above its previous minimum.

Technicalities and Challenges

1) First we need to prove that it is actually possible to extend Mogulskii to this more general setting. Even though we are varying the distribution continuously, so we have some sort of ‘local almost convexity’, the proof is going to be fairly fiddly.

2) Having to consider excursions above the local minima is a massive hassle. We would ideally like to replace $\bar{f}$ with f. This doesn’t seem unreasonable. After all, if we pick a giant component within o(n) steps, then everything considered before the giant component won’t show up in the O(n) rescaling, so we will have a series of macroscopic excursions above 0 with widths giving the actual sizes of the giant components. The problem is that even though with high probability we will pick a giant component after O(1) components, the probability that we do not do this decays only exponentially fast, so it will show up as a term in the LD analysis. We would hope that this is not important: after all, later we are going to take an infimum, and since the order in which we choose the vertices to explore is random and, in particular, independent of the actual structure, it ought not to make a huge difference to any result.

3) A key lemma in the proof of Mogulskii in Dembo and Zeitouni was the result that it doesn’t matter from an LDP point of view whether we consider the linear (continuous) interpolation or the step-wise interpolation to get a process that actually lives in $L_\infty([0,1])$. In this generalised case, we will also need to check that approximating the Binomial distribution by its Poisson limit is valid on an exponential scale.
Note that because errors in the approximation for small values of t affect the parameter of the distribution at larger times, this will be more complicated to check than in the IID case.

4) Once we have a rate function, if we actually want to know about the structure of the ‘typical’ graph displaying some LD property, we will need to find the infimum of the integrated rate function subject to some constraints. This is likely to be quite nasty unless we can directly use Euler-Lagrange or some other variational tool.

For the two quantities we want to compare, the relevant evaluations are:

$I_{(1+\epsilon)}(0)\approx \frac{\epsilon^3}{6},$

$-\lim\tfrac{1}{n}\log\mathbb{P}\Big(\text{Bin}(\tfrac{n^2}{2},\tfrac{1+\epsilon}{n})\leq\tfrac{n}{2}\Big)\approx \frac{\epsilon^2}{4}.$
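The second of these evaluations can be checked directly. Approximating $\text{Bin}(\frac{n^2}{2},\frac{1+\epsilon}{n})$ by a sum of $n$ independent $\text{Po}(\frac{1+\epsilon}{2})$ variables, Cramér’s theorem with the Poisson rate function $x\log\frac{x}{\mu}-x+\mu$, evaluated at $x=\frac12$ with $\mu=\frac{1+\epsilon}{2}$, gives

$-\lim\tfrac{1}{n}\log\mathbb{P}\Big(\text{Bin}(\tfrac{n^2}{2},\tfrac{1+\epsilon}{n})\leq\tfrac{n}{2}\Big)=\tfrac12\log\tfrac{1}{1+\epsilon}-\tfrac12+\tfrac{1+\epsilon}{2}=\frac{\epsilon-\log(1+\epsilon)}{2}=\frac{\epsilon^2}{4}-\frac{\epsilon^3}{6}+O(\epsilon^4),$

consistent with the $\frac{\epsilon^2}{4}$ above.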
https://math.stackexchange.com/questions/3182067/surface-area-of-a-solid-of-revolution-request-for-a-source-for-rigourse-proof?noredirect=1
# Surface area of a solid of revolution (request for a source for a rigorous proof)

I know that this question has been asked and answered before, but none of the answers give a rigorous treatment of the problem. All the explanations boil down to some argument using infinitesimals and how we should keep the linear orders and neglect the higher orders in the integral. But why should we neglect the non-linear terms? Who decides that it's okay to neglect non-linear terms, but not okay to do the same for the linear terms? What I'm looking for is a rigorous proof of the formula for the surface area of a solid of revolution. Preferably not very advanced; a reference to a book would also work.

## 1 Answer

I think I have a decent outline of a proof. First prove the relationship between area under a curve and definite integration. Then prove rigorously how to convert surface area and volume problems to related definite integral problems.

It might be best to start with the simple area under the curve of a function. The area under a curve can be represented by a Riemann sum. The sum is easily calculated from the usual geometric formulae, and we know it will be less than the actual area under the curve. We can proceed from that in a couple of different ways. The simpler geometric figures added up to calculate the area can shrink, requiring more of them. We keep track of the error term and note that it gets arbitrarily small. That should give you a rigorous justification for the dropping of the squared error terms.

An alternative approach: one can construct a Riemann sum that deliberately overestimates the area. It can be shown that the overestimate and the underestimate converge to a common value using various convergence proofs.

Applying the same principles in 3 dimensions then gives you the usual formulas. Every definite integral finds the area under a curve, so whatever 1D integral gives you a volume or a surface area is a problem that has been re-represented as the area under a curve.
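As a numerical illustration of the under-estimating sum (my own sketch, not a proof): for the unit sphere, generated by rotating $y=\sqrt{1-x^2}$ about the x-axis, summing the lateral areas $\pi(r_0+r_1)\cdot\text{slant}$ of the frustums built on the chords of a partition converges to the exact surface area $4\pi$:

```python
import math

def frustum_surface(f, a, b, n):
    """Approximate the area of the surface generated by rotating y = f(x),
    a <= x <= b, about the x-axis: sum the lateral areas of the n frustums
    (cone slices) produced by the chords of an equal-width partition."""
    total = 0.0
    for i in range(n):
        x0 = a + (b - a) * i / n
        x1 = a + (b - a) * (i + 1) / n
        r0, r1 = f(x0), f(x1)
        slant = math.hypot(x1 - x0, r1 - r0)  # chord length
        total += math.pi * (r0 + r1) * slant  # frustum lateral area
    return total

f = lambda x: math.sqrt(max(0.0, 1.0 - x * x))  # profile of the unit sphere
for n in (10, 100, 1000):
    print(n, frustum_surface(f, -1.0, 1.0, n))  # increases towards 4*pi
```

Each frustum area is strictly below the area of the corresponding spherical zone, so the sum approaches $4\pi\approx 12.566$ from below; the matching overestimate can be built from circumscribed tangent segments.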
The function you end up integrating contains the geometric information of the surface of interest. So once you prove that the new function does in fact contain the desired information, the proofs for integrating a single-valued function apply.

A slightly different, more geometric approach: given a vector function $$\vec{E}(x,y,z)$$, the volume integral of its divergence equals the surface integral of its component normal to the surface. Letting $$\hat{n}$$ be the normal to our surface:

$$\int\int\int \nabla\cdot\vec{E} \ dV = \int\int \vec{E} \cdot \hat{n} \ dA$$

So if the divergence of $$\vec{E}$$ is 1, then the integral gives us the entire volume of the region of integration. Here we note that if $$\vec{r}$$ is a position vector from the origin to the point of integration, then $$\nabla \cdot \vec{r}/3=1$$, which means our volume can be expressed as a surface integral $$\int\int \vec{r}\cdot \hat{n}/3 \ dA$$

Our surface integral is $$\int\int 1 \ dA=\int \int \hat{n} \cdot \hat {n} \ dA$$. Working in the reverse direction from before, we can convert the area integral into a volume integral using the reverse of Gauss' law: Area = $$\int\int \int \nabla \cdot \hat{n} \ dV$$

Note the relationship between the area integral expression for volume and the area integral expression for surface area. One is proportional to $$\hat{n} \cdot \hat{n}$$, the other is proportional to $$\vec{r} \cdot \hat{n}=r\cos{\theta}$$, where $$\theta$$ is the angle between $$\vec{r}$$ and $$\hat{n}$$. That cosine term represents a needed transformation to change the area integral to a volume integral: information about the shape of the object being integrated. Also keep in mind how triple integrals work in cylindrical coordinates. Combining all this, I think you get a proof without the "ghost of departed quantities".

• "An alternative approach, one can construct a Riemann Sum that deliberately over estimates the area. It can be shown that the overestimate and the underestimate converge using various convergence proofs".
I think this is what I'm looking for. Can you please direct me to a source which handles this problem, or what kind of textbooks treat this subject? – yousef magableh Apr 12 at 5:31
https://www.studypug.com/us/en/math/trigonometry/find-the-exact-value-of-trigonometric-ratios
##### 2.4 Find the exact value of trigonometric ratios

Triangles, angles, sides and the hypotenuse – these are the basic parts of trigonometry. In this chapter we will brush up on these parts once more. By now you are already familiar with the radian, the unit used to measure a standard angle: on a unit circle, the radian measure of an angle is equal to the length of its corresponding arc.

In this chapter we will learn about the different types of angles: standard angles, reference angles and co-terminal angles. Standard angles, if you recall from our previous chapter, are angles formed by a ray and the x-axis; the x-axis is referred to as the initial side and the ray as the terminal side. Take note that standard angles have their vertices at the center of a unit circle. We also discussed in that same chapter the reference angle associated with each standard angle: the acute angle formed by the terminal side of the standard angle and the x-axis. The last kind of angle is the co-terminal angle; as the word suggests, co-terminal angles are angles that share the same terminal side. We will learn more about these angles in chapters 8.1 to 8.3.

In the next two parts of the chapter, we will learn about the general form of the different trigonometric functions. This will help us find the exact values of the trigonometric functions as well as use the ASTC rule in trigonometry. In 8.6 to 8.8 we will review what we have learned before about the unit circle, such as the definition of the radian, the length of an arc, converting between angle measures, and the trigonometric ratios of angles in radians.
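To make these definitions concrete, here is a small illustrative snippet (the function names are my own) that reduces a standard angle to its reference angle and tests whether two angles are co-terminal:

```python
import math

def reference_angle(theta):
    """Acute angle between the terminal side of a standard angle theta
    (in radians) and the x-axis."""
    t = theta % (2 * math.pi)   # reduce to one revolution
    if t > math.pi:
        t = 2 * math.pi - t     # reflect the lower half-plane
    return min(t, math.pi - t)  # fold quadrant II onto quadrant I

def coterminal(a, b):
    """True if standard angles a and b share the same terminal side,
    i.e. differ by a whole number of revolutions."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d) < 1e-9

print(reference_angle(5 * math.pi / 6))          # pi/6
print(coterminal(math.pi / 3, 7 * math.pi / 3))  # True
```

For example, 150° (that is, 5π/6) lies in quadrant II, so its reference angle is 180° − 150° = 30°, and π/3 and 7π/3 are co-terminal because they differ by one full revolution.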
https://www.arxiv-vanity.com/papers/1102.1060/
# Polycyclic aromatic hydrocarbons in the dwarf galaxy IC 10

Wiebe D.S., Egorov O.V., Lozinskaya T.A.

###### Abstract

Infrared observations from the Spitzer Space Telescope archive are used to study the dust component of the interstellar medium in the IC 10 irregular galaxy. Dust distribution in the galaxy is compared to the distributions of H and [SII] emission, neutral hydrogen and CO clouds, and ionizing radiation sources. The distribution of polycyclic aromatic hydrocarbons (PAH) in the galaxy is shown to be highly non-uniform, with the mass fraction of these particles in the total dust mass reaching 4%. PAHs tend to avoid bright HII regions and correlate well with atomic and molecular gas. This pattern suggests that PAHs form in the dense interstellar gas. We propose that the significant decrease of the PAH abundance at low metallicity is observed not only globally (at the level of entire galaxies), but also locally (at least at the level of individual HII regions). We compare the distribution of the PAH mass fraction to the distribution of high-velocity features that we detected earlier in wings of the H and SII lines over the entire available galaxy area. No conclusive evidence for shock destruction of PAHs in the IC 10 galaxy could be found.

Institute of Astronomy, Russian Academy of Sciences, ul. Pyatnitskaya 48, Moscow, 119017 Russia

Sternberg Astronomical Institute, Universitetskii pr. 13, Moscow, 119992 Russia

## 1 Introduction

Numerous infrared space observatories, first of all the Spitzer space telescope, as well as the MSX and ISO satellites, opened up a new era in studies of star formation in both nearby and distant galaxies. The so-called polycyclic aromatic hydrocarbons (PAH) [1] — macromolecules consisting of several tens or several hundred atoms, mostly carbon and hydrogen — are of special interest in relation to these observations.
The absorption of an ultraviolet (UV) photon by such a molecule excites bending and vibrational modes, and, as a result, near IR photons are emitted. PAH emission bands may account for a substantial fraction (up to several dozen percent) of the entire infrared luminosity of the galaxy [2]. Polycyclic aromatic hydrocarbons attract considerable interest at least for two reasons. First, their emission is related to the overall UV radiation field of a galaxy, making them a natural indicator of the star formation rate. Second, PAH molecules not only trace the state of the interstellar medium, but also play an important role in its physical and chemical evolution. The former aspect is interesting both for interpretation of available IR observations and for planning new near-IR space missions (JWST, SOFIA, SPICA, etc.). The latter aspect is of great importance for development of models for various objects ranging from protoplanetary disks to the interstellar medium of an entire galaxy. Unfortunately, PAH formation and destruction mechanisms in the interstellar medium still are not understood. Such possible scenarios as the synthesis of PAHs in carbon-rich atmospheres of AGB and post-AGB stars or in dense molecular clouds as well as the destruction of PAHs by shocks and UV radiation are discussed extensively in the literature (see Sandstrom et al. [3] and references therein). The observed deficit of the emission of these macromolecules in metal-poor galaxies may be an important indication of the nature of the PAH evolutionary cycle. Note that, as shown by Draine et al. [4], this deficit is related to the real lack of PAHs and not to the low efficiency of the excitation of their IR transitions. In galaxies with oxygen abundance the typical mass fraction of PAHs (the fraction of the total dust mass in particles consisting of at most one thousand atoms) is equal to about 4%, i.e., about the same as in the Milky Way. 
At metallicities the average decreases quite sharply down to 1% and even lower. In order to clarify the cause of this transition and to identify the PAH formation and destruction mechanisms as well as their relation to the physical parameters and the metallicity of a galaxy, Sandstrom et al. [3] analyzed in detail Spitzer observations of the dust component in the nearest irregular dwarf galaxy — the Small Magellanic Cloud (SMC). These authors found weak correlation or no correlation between and such SMC parameters as the location of carbon-rich asymptotic giant branch stars, supergiant HI shells and young supernova remnants, and the turbulent Mach number. They showed that correlates with CO intensity and increases in regions of high dust and molecular gas surface density. Sandstrom et al. [3] concluded that the PAH mass fraction is high in regions of active star formation, but suppressed in bright HII regions.

The irregular dwarf galaxy IC 10 is analogous to the SMC in terms of a number of parameters. The average gas metallicity in IC 10 is , varying from 7.6 to 8.5 in different HII regions ([5, 6, 7] and references therein), i.e., it covers the very range where the transition from high to low PAH abundance occurs. The interstellar medium of this galaxy is characterized by a filamentary, multi-shell structure. In H and [SII] images IC 10 appears as a giant complex of multiple shells and supershells, arc- and ring-shaped features with sizes ranging from 50 to 800–1000 pc (see [8, 9, 10, 11, 12] and references therein). The HI distribution also shows numerous “holes”, supershells, and extended irregular features with rudiments of a spiral pattern [8]. The IC 10 galaxy is especially attractive for the analysis of the dust component because, unlike the SMC, it is a starburst galaxy. It is often classified as a BCD-type object because of its high H and IR luminosity [13]. The stellar population of IC 10 shows evidence of two star formation bursts.
The first burst is at least 350 Myr old, while the second one has occurred 4–10 million years ago (see [14, 15, 16] and references therein). For the purpose of identifying the influence of shocks and/or UV radiation on dust the anomalously large population of Wolf–Rayet (WR) stars in IC 10 is of special interest. Here the highest density of WR stars is observed among the known dwarf galaxies, comparable to the density of these stars in massive spiral galaxies [13, 15, 16, 17, 18, 19]. High H and IR luminosity of IC 10, combined with the large number of WR stars, indicates that the last burst of star formation in this galaxy must have been short, but engulfed most of the galaxy. The anomalously high number of WR stars means that we are actually witnessing a short period immediately after the last episode of star formation. The central, brightest region, associated with this last star formation episode, is located in the south-eastern part of the galaxy and includes the largest and densest HI cloud, a molecular cloud seen in CO lines, a conspicuous dust lane, and a complex of large emission nebulae, reaching 300–400 pc in size, with two shell nebulae HL111 and HL106 (according to the catalog of Hodge & Lee [20]), as well as young star clusters and about a dozen WR stars (see [8, 9, 10, 5, 7] and references therein). An H image of this central region of the galaxy is shown in Figure 1. According to Vacca et al. [16], the center of the last star formation episode is located near the object that was earlier classified as the WR star M24. (Hereafter letters R and M, followed by a number, refer to WR stars from the lists of Royer et al. [22] and Massey & Holmes [18], respectively). Vacca et al. [16] showed that M24 is actually a close group consisting of at least six blue stars, four of these stars being possible WR candidates. Lopez-Sanchez et al. [23] have conclusively identified two WR stars in this region. 
The HL111c nebula, surrounding M24, is one of the brightest HII regions in IC 10 and the brightest part of the HL111 shell. The neighborhood of M24 and the inner cavern of this shell host the youngest (2–4 Myr old) star clusters in the galaxy [7, 14]. The shell nebula HL106 is located in the densest southern part of the complex of HI, CO, and dust clouds mentioned above. The ionizing radiation in this region must be generated by WR stars R2 and R10 and clusters 4-3 and 4-4 from the list of Hunter [14]. According to the above author, these clusters are a few times older than young clusters 4-1 and 4-2 in the HL111 region. Adjacent to the HI and CO clouds and the dust lane from the south is a unique object, the so-called synchrotron supershell [24]. Until recently, it was believed to have been formed as a result of multiple explosions of about a dozen supernovae [24, 25, 26, 27]. Lozinskaya & Moiseev [28] were the first to explain the formation of this synchrotron supershell by a hypernova explosion.

The above features of IC 10 offer great possibilities for the study of the structure and physical characteristics of the dust component of a dwarf galaxy and the role that shocks play in its evolution. In this paper we analyze the connection between our data on IC 10, obtained earlier, and observations of this galaxy with the Spitzer telescope. In the following sections we describe the technique used to analyze these observations, present the obtained results and discuss them. In Conclusions we summarize the main findings of this work.

## 2 Observations

### 2.1 IR observations

In this paper we use Spitzer archive observations of IC 10 obtained as a part of the program “A mid-IR Hubble atlas of galaxies” [29]. These data were downloaded from the Spitzer Heritage Archive. The MOPEX software was utilized to compose image mosaics and custom IDL procedures were used to analyze them.
One of the most complicated issues in the analysis of such images is the choice of the background level. Sandstrom et al. [3] used a rather sophisticated procedure for this purpose, because the Small Magellanic Cloud occupies a large area on the sky. The angular size of IC 10 is small and one may therefore expect that background (mostly intragalactic and zodiacal) variations are small toward this galaxy. In this paper we set the background level in all the IR bands considered to the average brightness in areas located far from the star forming regions in IC 10. The adopted background values are listed in Table 1. Such a simple procedure for background estimation is acceptable for our purposes. It is interesting that background values estimated using SPOT (Spitzer Planning Observations Tool), which are also shown in Table 1 (these are values written in FITS file headers), in some cases differ appreciably from the adopted values. This further supports the use of the background estimate extracted from real data.

### 2.2 Optical and 21 cm line observations

To analyze possible effects of shocks on the dust component, we use results of H and [SII]Å observations, made with the SCORPIO focal reducer and the scanning Fabry-Perot interferometer at the 6-m telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences and described in detail by Lozinskaya et al. [12] and Egorov et al. [30]. To compare the dust component distribution with the large-scale structure and kinematics of HI in IC 10, we used 21-cm VLA data obtained by Wilcots and Miller [8]. Egorov et al. [30] reanalyzed the data cube of these observations, provided to us by the authors, in order to study the “local” structure and kinematics of HI in the neighborhood of the star forming complex and the brightest nebulae HL111 and HL106. We used the data with an angular resolution of (corresponding to a linear resolution of about 20 pc for the adopted distance of 800 kpc to the galaxy).
## 3 Results

The IR and H maps of the central region of IC 10 are shown in Figure 2. Interest in near-IR observations of galaxies is related to the fact that UV-excited PAH bands can be used as an indicator of the number of hot stars and hence as an indirect indicator of the star formation rate. Draine & Li [31] proposed to parameterize the UV radiation field of a galaxy as the sum of the “minimum” diffuse UV field (the lower cutoff of the starlight intensity distribution), filling up most of the galaxy’s volume, and a more intense UV field with a power-law distribution, which illuminates only the mass fraction of all the dust in the galaxy. The quantity, expressed in units of the average UV radiation field in our Galaxy, characterizes the overall rate of star formation in the system studied, whereas allows one to estimate the mass fraction of the galaxy involved in the ongoing star formation. Other parameters introduced are the 24-to-70 μm flux ratio, which characterizes the fraction of “hot” dust, and , the dust luminosity fraction contributed by regions with the UV radiation intensity , i.e., the dust luminosity coming from photodissociation regions. Draine & Li [31] proposed a general algorithm for estimating the parameters of a galaxy from IR observations at 8 μm, 24 μm, 70 μm, and 160 μm (3.6 μm data are used to remove the starlight contribution). Unfortunately, this algorithm can be applied to IC 10 only partially, because 160 μm observations are not available for a substantial part of the galaxy. Results of long-wavelength observations are shown in Figure 3. As is evident from the figure, 160 μm data are mostly available for outskirts of the galaxy and cover the star forming regions only partially (taking into account the angular resolution, which is equal to 40″ at 160 μm).
Nevertheless, we used the available data and the technique described by Draine & Li [31] to determine , , and , the mass fraction of dust contained in PAH (or, more precisely, in particles with fewer than 1000 carbon atoms). We then averaged the data of IR observations over the region of IC 10 covered by 160 μm data, and inferred the following parameters for this region: , , and . A comparison of these values with results of Draine et al. [4] for 65 galaxies of different types shows that the parameters of IC 10 differ appreciably from the typical values of the corresponding quantities. Comparable values are only found for two other Irr galaxies — NGC 2915 () and NGC 5408 (). Note that NGC 5408 was also shown to contain starburst regions [32]. The parameter of IC 10 is also close to that of NGC 5408, but is much lower than the corresponding value in Mrk 33, another irregular galaxy from the list of Draine & Li [31], where it exceeds 10%. (Draine & Li [31] also report a high value for the Seyfert galaxy NGC 5195; however, the result for this object is more dependent on the adopted radiation field parameters than in the case of Mrk 33.) In the sample of Draine & Li [31] Mrk 33 is also the galaxy with the highest 24-to-70 μm flux ratio () and the largest value of . The corresponding values for IC 10 are equal to about 0.7 and 23%, respectively. On the whole, as far as radiation-field parameters are concerned, IC 10 is quite a typical Irr starburst galaxy.

The IC 10 galaxy has an unusually high PAH mass fraction . Its value, inferred using the technique of Draine & Li [31], significantly exceeds the corresponding parameters for all the galaxies mentioned above (1.3% in Mrk 33, 2.4% in NGC 5195, 1.4% in NGC 2915, and 0.4% in NGC 5408). In the algorithm of Draine & Li [31] this parameter is inferred from a single quantity: the ratio of the average 8 μm flux to the sum of the 70 μm and 160 μm fluxes.
Our estimate for this parameter is 0.19, which, according to Draine & Li [31], corresponds to . To check whether such an unusually high is obtained due to the lack of 160 μm data, we performed a more detailed fit of the observed 5.8 μm, 8 μm, 24 μm, and 70 μm fluxes based on the models of Draine & Li [31], using local rather than average fluxes. This technique allowed us to obtain individual estimates for different regions of the galaxy. Our modeling showed that the final average and its distribution across the galaxy do depend appreciably on the choice of the passbands used in the fit. However, all the considered cases still yield a high 8 μm flux-averaged PAH fraction, ranging from 2.9% to 4.5%. Note that the distribution of in the galaxy is quite irregular. Along with regions of high there are vast areas where is less than 1%. As we mentioned above, the particular features of the distribution of depend on the choice of passbands used in the fit of photometric data.

Hereafter we use the 8 μm to 24 μm () flux ratio as the local indicator of the PAH fraction. A number of authors, in particular Sandstrom et al. [3], pointed out the possibility of using this flux ratio for this purpose (note, however, that the correlation between and found by Sandstrom et al. [3] is rather weak). The halftones in Figure 4 (left panel) show the distribution of this flux ratio in the central star forming region of IC 10, and contours correspond to the distribution of H intensity. Lighter tones indicate low ratios and, correspondingly, low , whereas darker tones indicate higher . A wide semi-ring near the HL111 and HL106 regions is immediately apparent, which can be traced by low ratio, weak H intensity, and the locations of WR stars (we connected them by lines to emphasize the location of the semi-ring).
It might be supposed that the low PAH abundance in this region is caused by the destruction of these particles by the ultraviolet radiation of WR stars; however, further studies are needed for a more definite conclusion. The relation between the PAH abundance and star formation tracers is apparent not only in this semi-ring, but in the entire considered region. In the right panel of Figure 4 we show the correlation between the flux ratio and H intensity. It is evident from the figure that (and hence ) decreases with increasing H intensity. This may indicate that factors operating in the vicinity of the region of ongoing star formation, e.g., the UV radiation, have a destructive effect on PAH particles. The flux ratio approaches unity in regions with less intense H flux, and this corresponds to values of about 2–3% [3].

PAH particles may also be destroyed by shocks. Therefore, generally speaking, the above-mentioned low-ratio “semi-ring” located near HL111 and HL106 might have formed due to the destructive effect not only of the UV radiation of the WR stars located within it, but also of shocks produced by winds of these stars. The primary shock indicators are high-velocity gas motions. A detailed study of the ionized gas kinematics in IC 10 by Egorov et al. [30] indeed revealed weak high-velocity features in wings of the H and [SII]Å lines in the inner cavern of the HL111 nebula and in other regions of the complex of violent star formation. In particular, such features were found in the vicinity of two WR stars located in the “semi-ring” mentioned above. We reanalyzed the results of observations of the galaxy in both lines made with the Fabry–Perot interferometer at the 6-m telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences in order to reveal a possible anticorrelation between high-velocity gas motions and . We computed H and [SII]Å line profiles for several regions of high and low ratio.
Weak high-velocity features at a level of about 2–6% of the peak intensity are found in the wings of both lines, and this coincidence confirms the reality of the corresponding motions. However, these high-velocity features show up in regions with both high and low F8/F24 ratios. To obtain more definitive results, we mapped the distributions of velocities and intensities of high-velocity features in the blue and red wings of the Hα line in the entire available field of the galaxy and in the central star-forming region. The resulting maps indicate that high-velocity features in the blue and red wings of the line show up in ranges from 50–60 to 100–110 km/s and from 50 to 100 km/s, respectively, relative to the velocity of the line peak. In Figure 5 we compare the F8/F24 flux ratio to the intensities of high-velocity features in the blue (left panel) and red (right panel) wings of the Hα line. If the PAH abundance depends on the presence of shocks, one would expect the intensity of high-velocity features to anticorrelate with F8/F24. No such anticorrelation can be seen in the figure, albeit a certain pattern does emerge: higher intensities of high-velocity features in both the blue and red wings tend to "avoid" regions with the highest F8/F24 ratios, although they are observed in nearby, slightly offset, locations. Nonetheless, the results reported here do not allow us to conclusively associate the destruction of PAH particles with shocks produced by stellar winds and/or supernova explosions. Arkhipova et al. [7] determined metallicities for a number of HII regions in IC 10. It is interesting to relate these metallicities to the PAH content in order to see whether q_PAH decreases with decreasing metallicity within the galaxy in the same way as it does when we compare different Irr galaxies. Figure 6 shows the F8/F24 ratios as a function of oxygen abundance for HII regions from the list of Arkhipova et al. [7]. In some cases two data points in this plot correspond to the same HII region.
Metallicities of these HII regions inferred from long-slit and MPFS observations differ slightly, possibly due to different integration areas. We show only the data points with metallicity errors smaller than or equal to 0.05 dex. It is evident from Figure 6 that the F8/F24 ratio indeed decreases with decreasing oxygen abundance, although the turn-off value of 12 + log(O/H) appears to differ from the 8.0–8.1 found in earlier works. This result indicates that the metallicity dependence of the PAH abundance shows up not only globally (at the level of entire galaxies), but also locally (at least at the level of individual HII regions). As we pointed out in the Introduction, the only important difference between the two nearest Irr galaxies IC 10 and the SMC is that in IC 10 we observe the interstellar medium immediately after a violent burst of star formation that has encompassed most of the galaxy. It is therefore of interest to compare the data obtained in this work with the results of a detailed study of the dust component in the SMC performed by Sandstrom et al. [3]. (Recall that we use the F8/F24 flux ratio to measure the PAH mass fraction and draw our conclusions based on this ratio.) IC 10, like the SMC, shows strong q_PAH variations from one region to another, with PAHs avoiding bright HII regions. The lower spatial resolution prevents us from concluding that PAHs are located in the shells of bright nebulae; however, the large-scale map (Figure 4) shows clearly that the Hα brightness anticorrelates with the F8/F24 ratio. Sandstrom et al. [3] found weak or no correlation between q_PAH and the location of HI supershells in the SMC. The IC 10 galaxy, on the contrary, shows well-correlated (nearly coincident) extended shell-like structures in maps of the F8/F24 flux ratio and 21-cm HI emission (Figure 7). A correlation between the 8 µm IR emission and the extended HI shell is also apparent in the middle panel of Figure 2.
The large-scale correlation between 8 µm brightness and extended arcs and HII and HI shells in IC 10 and in a number of other starburst Irr galaxies has long been known (Hunter et al. [33]). The above authors attributed all the 8 µm flux solely to PAH emission and concluded that it correlates with the brightness of giant shells and supershells. The correlation between the F8/F24 flux ratio and the 21-cm line emission leads us to the same conclusion in a somewhat more straightforward way. We believe this is a real correlation, as the stellar wind from numerous young star clusters and WR stars located inside extended shells shapes the observed shell-like structure of both the gas and dust in IC 10. Another implication is that PAH molecules do not undergo significant destruction during the sweep-up of giant shells (see also [33]). The brightest extended CO cloud in the galaxy and the dust lane that coincides with it are located just to the south of the complex of ongoing star formation and are immediately adjacent to the HL106 nebula. Figure 8 shows the structure of this cloud according to the data of Leroy et al. [10] superimposed on the distribution of the F8/F24 flux ratio. (We obtained a composite map of the entire CO cloud by combining maps of its individual components: clouds B11a, B11b, B11c, and B11d in Figure 7 from [10].) The total gas column density N(HI) toward the dense cloud discussed here was measured by Wilcots & Miller [8]; the column density of neutral and molecular hydrogen in this direction has also been estimated from CO emission observations [10, 34]. It follows from Figure 8 that the three regions with the highest F8/F24 ratios have exactly the same locations and sizes as the CO clouds B11a, B11c, and B11d from [10]. The fourth CO cloud, B11b, coincides with the bright shell nebula HL106. Egorov et al. [30] showed that the optical nebula HL106 is not located behind a dense cloud layer, but is partly embedded in it. Egorov et al. [30] also concluded that B11b is physically associated with the optical nebula.
First, the radial velocity of the B11b cloud [10] coincides with the velocity of ionized gas in HL106 determined by Egorov et al. [30]. But most importantly, the brightest southern arc of HL106 exactly outlines the boundaries of the B11b cloud. Such an ideal coincidence cannot be accidental and hints at a physical relation between the thin ionized shell and the molecular cloud B11b. The HL106 shell, which exactly bounds the B11b cloud, formed due to photodissociation of molecular gas at the boundary of this cloud as well as due to ionization by the UV radiation of the WR stars R2 and R10 and the clusters 4-3 and 4-4. Hunter [14] estimates these clusters to be about 20–30 Myr old. The presence of a bright ionizing nebula that surrounds B11b explains the low F8/F24 flux ratio toward this cloud. We can thus conclude with certainty that the highest F8/F24 ratios and, consequently, the highest PAH fractions are indeed found toward dense CO clouds. The only exception is the region of the brightest nebula HL45. The evident drop of the F8/F24 flux ratio observed in this area may be due to destruction of PAHs by strong UV radiation.

## 4 Discussion and conclusions

In the Introduction we have already emphasized the importance of understanding the evolution of PAHs and their relation to other galaxy components. In this work we compare the results of infrared observations of IC 10 with other available observations in order to identify possible indications of the origin of PAHs. One of the key PAH properties to be explained by their evolutionary model is the low content of these particles in metal-poor galaxies. Two hypotheses are mainly discussed in the literature: less efficient formation and more efficient destruction of PAHs in metal-poor systems. Galliano et al. [35] argue that the dependence of q_PAH on the metallicity of a host galaxy can be naturally explained if we assume that PAH particles are synthesized in the atmospheres of long-lived AGB stars.
In this case low metal and PAH abundances are due to the slower stellar evolution. However, if this assumption were true, the PAH fraction in the IC 10 galaxy would be, first, low, and second, uniformly distributed throughout the galaxy. We show the pattern to be exactly the opposite: q_PAH (as traced by the F8/F24 flux ratio) varies appreciably across the galaxy and reaches almost 4% in some areas. From the viewpoint of spatial localization, PAHs correlate with both dense-gas indicators studied (HI and CO clouds). The PAH mass fraction decreases only in the neighborhood of HII regions and WR stars, which is consistent with the hypothesis that these particles are destroyed by UV radiation and shocks (although we failed to find convincing evidence for PAH destruction by shocks). On the whole, the pattern observed in IC 10 is qualitatively consistent with the assumption that PAH particles form in situ in molecular clouds. In this case the current high q_PAH value in IC 10 may be related to a recent burst of star formation, during which PAH particles formed in dense gas and did not have enough time to be destroyed anywhere except in the immediate neighborhood of the UV radiation sources. If this interpretation is correct, the metallicity dependence of q_PAH should show up until PAH particles begin to be destroyed by ultraviolet radiation, and it reflects the peculiarities of their formation rather than their subsequent evolution. We further plan to verify our conclusions by analyzing observational results on other dwarf galaxies.

## 5 Acknowledgments

This work was supported by the Russian Foundation for Basic Research (grants nos. 10-02-00091 and 10-02-00231) and the Russian Federal Agency on Science and Innovation (contract no. 02.740.11.0247). O.V. Egorov thanks the Dynasty Foundation of Noncommercial Programs for financial support. The authors are grateful to Suzanne Madden and Tara Parkin for useful discussions.

## References

• [1] Tielens A.G.G.M. Ann. Rev. Astron. Astroph.
46, 289 (2008) • [2] Smith J.D.T., Draine B.T., Dale D.A., Moustakas J. et al. Astrophys. J. 656, 770 (2007) • [3] Sandstrom K.M., Bolatto A.D., Draine B., Bot C., Stanimirovic S. Astrophys. J. 715, 701 (2010) • [4] Draine B.T., Dale D.A., Bendo G. et al. Astrophys. J. 663, 866 (2007) • [5] Lozinskaya T.A., Egorov O.V., Moiseev A.V., Bizyaev D.V. Astron. Lett. 35, 730 (2009) • [6] Magrini L., Gonçalves D.R. Mon. Not. Roy. Astron. Soc. 398, 280 (2009) • [7] Arkhipova V.P., Egorov O.V., Lozinskaya T.A., Moiseev A.V. Pis'ma Astron. Zh. 37, 83 (2011) • [8] Wilcots E.M., Miller B.W. Astron. J. 116, 2363 (1998) • [9] Gil de Paz A., Madore B.F., Pevunova O. Astrophys. J. Suppl. Ser. 147, 29 (2003) • [10] Leroy A., Bolatto A., Walter F., Blitz L. Astrophys. J. 643, 825 (2006) • [11] Chyzy K.T., Knapik J., Bomans D.J., Klein U., Beck R., Soida M., Urbanik M. Astron. Astrophys. 405, 513 (2003) • [12] Lozinskaya T.A., Moiseev A.V., Podorvanyuk N.Yu., Burenkov A.N. Astron. Lett. 34, 217 (2008) • [13] Richer M.G., Bullejos A., Borissova J. et al. Astron. Astrophys. 370, 34 (2001) • [14] Hunter D. Astrophys. J. 559, 225 (2001) • [15] Massey P., Olsen K., Hodge P., Jacoby G., McNeill R., Smith R., Strong Sh. Astron. J. 133, 2393 (2007) • [16] Vacca W.D., Sheehy C.D., Graham J.R. Astrophys. J. 662, 272 (2007) • [17] Massey P., Armandroff T.E., Conti P.S. Astron. J. 103, 1159 (1992) • [18] Massey P., Holmes S. Astrophys. J. 580, L35 (2002) • [19] Crowther P.A., Drissen L., Abbott J.B., Royer P., Smartt S.J. Astron. Astrophys. 404, 483 (2003) • [20] Hodge P., Lee M.G. Publ. Astron. Soc. Pacif. 102, 26 (1990) • [21] Tikhonov N.A., Galazutdinova O.A. Astron. Lett. 35, 748 (2009) • [22] Royer P., Smartt S.J., Manfroid J., Vreux J. Astron. Astrophys. 366, L1 (2001) • [23] Lopez-Sanchez A.R., Mesa-Delgado A., Lopez-Martin L., Esteban C. Mon. Not. Roy. Astron. Soc. In press (arXiv:1010.1806) • [24] Yang H., Skillman E.D. Astron. J. 106, 1448 (1993) • [25] Bullejos A., Rosado M. Rev. Mex.
(Ser. de Conf.) 12, 254 (2002) • [26] Rosado M., Valdez-Gutiérrez M., Bullejos A., Arias L., Georgiev L., Ambrocio-Cruz P., Borissova J., Kurtev R. ASP Conf. Ser. 282, 50 (2002) • [27] Thurow J.C., Wilcots E.M. Astron. J. 129, 745 (2005) • [28] Lozinskaya T.A., Moiseev A.V. Mon. Not. Roy. Astron. Soc. 381, L26 (2007) • [29] Fazio G., Pahre M. Spitzer Proposal ID 69 (2004) • [30] Egorov O.V., Lozinskaya T.A., Moiseev A.V. Astron. Rep. 87, 277 (2010) • [31] Draine B.T., Li A. Astrophys. J. 657, 810 (2007) • [32] Karachentsev I.D., Sharina M.E., Dolphin A.E., Grebel E.K., Geisler D., Guhathakurta P., Hodge P.W., Karachentseva V.E., Sarajedini A., Seitzer P. Astron. Astrophys. 385, 21 (2002) • [33] Hunter D.H., Elmegreen B.G., Martin E. Astron. J. 132, 801 (2006) • [34] Bolatto A.D., Jackson J.M., Wilson C.D., Moriarty-Schieven G. Astrophys. J. 532, 909 (2000) • [35] Galliano F., Dwek E., Chanial P. Astrophys. J. 672, 214 (2008)
http://www.emathematics.net/g5_ratios.php?def=find
Ratios and proportions

Equivalent ratios: find the missing number

Fill in the missing number to complete the proportion.

1 to ___ = 4 to 20

Write 4 to 20 as a fraction. Then write an equivalent fraction with 1 as the numerator.

$\frac{4}{20}=\frac{4\;\div\;4}{20\;\div\;4}=\frac{1}{5}$

1 to 5 = 4 to 20

Fill in the missing number to complete the proportion.

5 to 7 = ___ to 70
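The cross-multiplication rule behind these exercises (a : b = c : d exactly when a·d = b·c) can be sketched as a small solver; the function name is an illustrative choice:

```python
from fractions import Fraction

def solve_proportion(a, b, c, d):
    """Return the value of the single missing term (None) in a : b = c : d,
    using cross-multiplication: a*d == b*c."""
    if a is None:
        return Fraction(b * c, d)
    if b is None:
        return Fraction(a * d, c)
    if c is None:
        return Fraction(a * d, b)
    if d is None:
        return Fraction(b * c, a)
    raise ValueError("exactly one term must be None")

# 1 to ? = 4 to 20  ->  ? = 20/4 = 5
print(solve_proportion(1, None, 4, 20))   # 5
# 5 to 7 = ? to 70  ->  ? = 5*70/7 = 50
print(solve_proportion(5, 7, None, 70))   # 50
```

Using `Fraction` keeps the arithmetic exact, and raises an error when the proportion has no whole-number solution only if you add that check; here a non-integer answer is simply returned as a fraction.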
https://www.physicsforums.com/threads/logarithms-in-one-log.893610/
# Homework Help: Logarithms in one log

1. Nov 16, 2016

### rashida564

1. The problem statement, all variables and given/known data
Can we put 3log2(x) − 4log(y) + log2(5) into one logarithm? I tried all the ways but I can't find the solution.

2. Relevant equations
loga(b) = logx(b)/logx(a)
log(b·a) = log(b) + log(a)

3. The attempt at a solution
log2(5x^3) − log(y^4)
log2(5x^3) − log2(y^4)/log2(10)

2. Nov 16, 2016

### Staff: Mentor

There are also formulas for $\log_2 a - \log_2 b$ and $\log_2 a^c$ which you need here.

3. Nov 16, 2016

### rashida564

I don't know what I should do.

4. Nov 16, 2016

### Staff: Mentor

Well, you have an expression which I read as $\log_2 5x^3 - \log_{10} y^4 = \log_2 5x^3 - \frac{1}{\log_2 10}\log_2 y^4$. Now you can use $c \cdot \log_2 a = \log_2 a^c$ and $\log_2 a - \log_2 b = \log_2 \frac{a}{b}$ to write all of it as a single $\log_2$ expression. (Of course with a constant $c=\log_2 10$.)

5. Nov 16, 2016

### rashida564

log2(5x^3/((log2y^4)^(1/log2(10)))) — then how can I write it as a single log? I see three logs.

6. Nov 16, 2016

### Staff: Mentor

You cannot get rid of the constant $\log_2 10$ if you are dealing with two different bases. Are you sure they are meant to be different? And you have one $\log_2$ too many in your application of the formulas.

7. Nov 16, 2016

### rashida564

sory for that

8. Nov 16, 2016

sorry*
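If, as the mentor suspects, all three logarithms were meant to be base 2, the expression does collapse into a single log, and this can be checked numerically (the sample points are arbitrary positive values):

```python
import math

def lhs(x, y):
    # 3*log2(x) - 4*log2(y) + log2(5), assuming every log is base 2
    return 3*math.log2(x) - 4*math.log2(y) + math.log2(5)

def rhs(x, y):
    # the combined single-logarithm form: log2(5*x**3 / y**4)
    return math.log2(5 * x**3 / y**4)

for x, y in [(2.0, 3.0), (0.5, 7.0), (10.0, 0.25)]:
    assert math.isclose(lhs(x, y), rhs(x, y), rel_tol=1e-12)
```

With a genuinely base-10 middle term, the best one can do is $\log_2\frac{5x^3}{y^{4/\log_2 10}}$, which still carries the constant $\log_2 10$ inside the exponent, exactly as the mentor says.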
http://link.springer.com/article/10.1007/s00703-007-0276-1
, Volume 99, Issue 1-2, pp 105-128

Date: 12 Nov 2007

# The impact of the PBL scheme and the vertical distribution of model layers on simulations of Alpine foehn

## Summary

This paper investigates the influence of the planetary boundary-layer (PBL) parameterization and the vertical distribution of model layers on simulations of an Alpine foehn case that was observed during the Mesoscale Alpine Programme (MAP) in autumn 1999. The study is based on the PSU/NCAR MM5 modelling system and combines five different PBL schemes with three model-layer settings, which mainly differ in the height above ground of the lowest model level (z1). Specifically, z1 takes values of about 7 m, 22 m and 36 m, and the experiments with z1 = 7 m are set up such that the second model level is located at z = 36 m. To assess whether the different model setups have a systematic impact on the model performance, the simulation results are compared against wind lidar, radiosonde and surface measurements gathered along the Austrian Wipp Valley. Moreover, the dependence of the simulated wind and temperature fields at a given height (36 m above ground) on z1 is examined for several different regions. Our validation results show that, at least over the Wipp Valley, the dependence of the model skill on z1 tends to be larger and more systematic than the impact of the PBL scheme. The agreement of the simulated wind field with observations tends to benefit from moving the lowest model layer closer to the ground, which appears to be related to the dependence of lee-side flow separation on z1. However, the simulated 2-m temperatures are closest to observations for the intermediate z1 of 22 m. This is mainly related to the fact that the simulated low-level temperatures decrease systematically with decreasing z1 for all PBL schemes, turning a positive bias at z1 = 36 m into a negative bias at z1 = 7 m. The systematic z1-dependence is also observed for the temperatures at a fixed height of 36 m, indicating a deficiency in the self-consistency of the model results that is not related to a specific PBL formulation. Possible reasons for this deficiency are discussed in the paper. On the other hand, a systematic z1-dependence of the 36-m wind speed is encountered only for one out of the five PBL schemes. This turns out to be related to an unrealistic profile of the vertical mixing coefficient.

Correspondence: Günther Zängl, Meteorologisches Institut der Universität München, 80333 München, Germany
http://link.springer.com/article/10.2478%2Fs13540-012-0024-1
, Volume 15, Issue 2, pp 332-343

Date: 18 Mar 2012

Covariant fractional extension of the modified Laplace-operator used in 3D-shape recovery

Abstract

By extending the Liouville–Caputo definition of a fractional derivative to a nonlocal covariant generalization of arbitrary bounded operators acting on multidimensional Riemannian spaces, an appropriate approach for the 3D shape recovery of aperture-afflicted 2D slide sequences is proposed. We demonstrate that the step from a local to a nonlocal algorithm yields an order of magnitude in accuracy, and that the specific fractional approach yields an additional factor of 2 in the accuracy of the derived results.
https://math.stackexchange.com/questions/3177721/determining-whether-a-relation-is-an-equivalence-relation
# Determining whether a relation is an equivalence relation

Define a relation R on the set of functions from ℝ to ℝ as follows: (f, g) ∈ R if and only if f(x) − g(x) ≥ 0 for all x ∈ ℝ. Is this relation reflexive? Symmetric? Transitive? Is it an equivalence relation? Explain.

So far I have that the relation is reflexive, because f(x) − f(x) ≥ 0 is true, but I'm not quite sure whether the relation is symmetric or transitive, as I am not very familiar with these notions.

• Welcome to MSE. $R$ is not symmetric, because $(f,g)\in R \not\implies (g,f)\in R$, so $R$ is not an equivalence relation – J. W. Tanner Apr 7 at 3:46
• Does the condition $f(x)-g(x)\ge0$ look "symmetric" in the functions $f$ and $g$? – Lord Shark the Unknown Apr 7 at 3:48
• I don't quite understand why it is not symmetric. However, is it true that the relation is reflexive and transitive? – ph-quiett Apr 7 at 3:51
• For example, "$<$" is not symmetric on real numbers because $a<b$ does not imply $b<a$ – J. W. Tanner Apr 7 at 4:00
• @ph-quiett Consider $x^2 + 1$ and $x^2$ to disprove the symmetry – Minz Apr 7 at 4:02

Reflexive: $$f(x)-f(x)=0\geq 0 \quad \forall x\in \mathbb{R},$$ so yes, it is reflexive.

Transitive: suppose $$f(x)-g(x)\geq 0 \quad \forall x\in \mathbb{R}$$ and $$g(x)-h(x)\geq 0 \quad \forall x\in \mathbb{R}.$$ Adding these inequalities gives $$f(x)-h(x)\geq 0 \quad \forall x\in \mathbb{R},$$ so yes, it is transitive.

Symmetric: if $$f(x)-g(x)\geq 0 \quad \forall x\in \mathbb{R},$$ then $$g(x)-f(x)\leq 0 \quad \forall x\in \mathbb{R}.$$ Hence $$(f,g)\in R$$ and $$(g,f)\in R$$ can hold together only if $$f=g$$; for instance, $$f(x)=x^2+1$$ and $$g(x)=x^2$$ give $$(f,g)\in R$$ but $$(g,f)\notin R$$. Hence this relation is not symmetric, and so it is not an equivalence relation.
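The three properties can also be spot-checked numerically on a finite grid of sample points (the grid and the sample functions here are arbitrary choices; a finite check can refute a "for all x" claim but never prove one):

```python
# Spot-check R(f, g) := "f(x) - g(x) >= 0 for all x" on a finite grid.
xs = [x / 10 for x in range(-50, 51)]

def related(f, g):
    return all(f(x) - g(x) >= 0 for x in xs)

f = lambda x: x**2 + 1
g = lambda x: x**2
h = lambda x: x**2 - 1

assert related(f, f)                         # reflexive on this example
assert related(f, g) and related(g, h)
assert related(f, h)                         # transitivity holds here
assert related(f, g) and not related(g, f)   # symmetry fails, as argued above
```

The last line is exactly the counterexample from the comments: f(x) = x² + 1 dominates g(x) = x² everywhere, but not vice versa.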
https://enacademic.com/dic.nsf/enwiki/177010
Smooth function

A bump function is a smooth function with compact support.

In mathematical analysis, a differentiability class is a classification of functions according to the properties of their derivatives. Higher order differentiability classes correspond to the existence of more derivatives. Functions that have derivatives of all orders are called smooth.

Differentiability classes

Consider an open set on the real line and a function f defined on that set with real values. Let k be a non-negative integer. The function f is said to be of class C^k if the derivatives f′, f′′, ..., f^(k) exist and are continuous (the continuity is automatic for all the derivatives except for f^(k)). The function f is said to be of class C^∞, or smooth, if it has derivatives of all orders.[1] f is said to be of class C^ω, or analytic, if f is smooth and if it equals its Taylor series expansion around any point in its domain.

To put it differently, the class C^0 consists of all continuous functions. The class C^1 consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a C^1 function is exactly a function whose derivative exists and is of class C^0. In general, the classes C^k can be defined recursively by declaring C^0 to be the set of all continuous functions and declaring C^k for any positive integer k to be the set of all differentiable functions whose derivative is in C^(k−1). In particular, C^k is contained in C^(k−1) for every k, and there are examples to show that this containment is strict. C^∞ is the intersection of the sets C^k as k varies over the non-negative integers. C^ω is strictly contained in C^∞; for an example of this, see the bump function or the example below.

Examples

(Figure captions: the C^0 function f(x) = x for x ≥ 0 and 0 otherwise; the function f(x) = x² sin(1/x) for x > 0; a smooth function that is not analytic.)
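As a numeric illustration of these classes, the function f(x) = x² sin(1/x) (with f(0) = 0), treated formally below, is differentiable everywhere, yet its derivative oscillates without settling as x → 0, so f is differentiable but not C^1:

```python
import math

def f(x):
    # x^2 * sin(1/x), extended by f(0) = 0
    return x**2 * math.sin(1.0/x) if x != 0 else 0.0

def fprime(x):
    # analytic derivative for x != 0; the difference quotient at 0 tends to 0
    if x == 0:
        return 0.0
    return -math.cos(1.0/x) + 2*x*math.sin(1.0/x)

# The derivative at 0 exists: the difference quotient h*sin(1/h) -> 0.
h = 1e-8
assert abs((f(h) - f(0)) / h) < 1e-6

# But f' oscillates near 0: at x = 1/(pi*n) it is essentially -cos(pi*n) = ±1.
samples = [fprime(1/(math.pi*n)) for n in range(1000, 1010)]
assert max(samples) > 0.9 and min(samples) < -0.9
```

So the pointwise derivative exists everywhere, but it is not continuous at 0, which is exactly the gap between "differentiable" and "of class C^1".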
The function $f(x) = \begin{cases}x & \mbox{if }x \ge 0, \\ 0 &\mbox{if }x < 0\end{cases}$ is continuous, but not differentiable at $x=0$, so it is of class C^0 but not of class C^1.

The function $f(x) = \begin{cases}x^2\sin{(1/x)} & \mbox{if }x \neq 0, \\ 0 &\mbox{if }x = 0\end{cases}$ is differentiable, with derivative $f'(x) = \begin{cases}-\mathord{\cos(1/x)} + 2x\sin{(1/x)} & \mbox{if }x \neq 0, \\ 0 &\mbox{if }x = 0.\end{cases}$ Because cos(1/x) oscillates as x approaches zero, f′(x) is not continuous at zero. Therefore, this function is differentiable but not of class C^1. Moreover, if one takes f(x) = x^(3/2) sin(1/x) (x ≠ 0) in this example, it can be used to show that the derivative function of a differentiable function can be unbounded on a compact set and, therefore, that a differentiable function on a compact set may not be locally Lipschitz continuous.

The functions f(x) = |x|^(k+1), where k is even, are continuous and k times differentiable at all x. But at $x=0$ they are not (k+1) times differentiable, so they are of class C^k but not of class C^j for j > k.

The exponential function is analytic, so of class C^ω. The trigonometric functions are also analytic wherever they are defined.

The function $f(x) = \begin{cases}e^{-1/(1-x^2)} & \mbox{ if } |x| < 1, \\ 0 &\mbox{ otherwise }\end{cases}$ is smooth, so of class C^∞, but it is not analytic at $x=\pm 1$, so it is not of class C^ω. The function f is an example of a smooth function with compact support.

Multivariate differentiability classes

Let n and m be some positive integers. If f is a function from an open subset of R^n with values in R^m, then f has component functions f1, ..., fm. Each of these may or may not have partial derivatives.
We say that f is of class C^k if all of the partial derivatives $\frac{\partial^l f_i}{\partial x_{i_1}^{l_1}\partial x_{i_2}^{l_2}\cdots\partial x_{i_n}^{l_n}}$ exist and are continuous, where each of $i, i_1, i_2, \ldots, i_k$ is an integer between 1 and n, each of $l, l_1, l_2, \ldots, l_n$ is an integer between 0 and k, and $l_1+l_2+\cdots +l_n=l$.[1] The classes C^∞ and C^ω are defined as before.[1]

These criteria of differentiability can be applied to the transition functions of a differential structure. The resulting space is called a C^k manifold.

If one wishes to start with a coordinate-independent definition of the class C^k, one may start by considering maps between Banach spaces. A map from one Banach space to another is differentiable at a point if there is an affine map which approximates it at that point. The derivative of the map assigns to the point x the linear part of the affine approximation to the map at x. Since the space of linear maps from one Banach space to another is again a Banach space, we may continue this procedure to define higher order derivatives. A map f is of class C^k if it has continuous derivatives up to order k, as before.

Note that R^n is a Banach space for any value of n, so the coordinate-free approach is applicable in this instance. It can be shown that the definition in terms of partial derivatives and the coordinate-free approach are equivalent; that is, a function f is of class C^k by one definition iff it is so by the other definition.

The space of C^k functions

Let D be an open subset of the real line. The set of all C^k functions defined on $D$ and taking real values is a Fréchet space with the countable family of seminorms $p_{K, m}=\sup_{x\in K}\left|f^{(m)}(x)\right|$ where K varies over an increasing sequence of compact sets whose union is D, and m = 0, 1, …, k. The set of C^∞ functions over $D$ also forms a Fréchet space.
One uses the same seminorms as above, except that $\scriptstyle m$ is allowed to range over all non-negative integer values. The above spaces occur naturally in applications where functions having derivatives of certain orders are necessary; however, particularly in the study of partial differential equations, it can sometimes be more fruitful to work instead with the Sobolev spaces.

Parametric continuity

Parametric continuity is a concept applied to parametric curves, describing the smoothness of the curve as a function of its parameter.

Definition

A curve is said to have Cn continuity if the nth derivative $\frac{d^n s}{dt^n}$ of its parametrization is continuous throughout the curve. As an example of a practical application of this concept, a curve describing the motion of an object, with time as the parameter, must have C1 continuity for the object to have finite acceleration. For smoother motion, such as that of a camera's path while making a film, higher orders of parametric continuity are required.

Order of continuity

Two Bézier curve segments attached in a way that is only C0 continuous. Two Bézier curve segments attached in such a way that they are C1 continuous. The various orders of parametric continuity can be described as follows:[2] • C−1: curves include discontinuities • C0: curves are joined • C1: first derivatives are equal • C2: first and second derivatives are equal • Cn: first through nth derivatives are equal The term parametric continuity was introduced to distinguish it from geometric continuity (Gn), which removes restrictions on the speed with which the parameter traces out the curve.[3]

Geometric continuity

The concept of geometrical or geometric continuity was primarily applied to the conic sections and related shapes by mathematicians such as Leibniz, Kepler, and Poncelet. The concept was an early attempt at describing, through geometry rather than algebra, the concept of continuity as expressed through a parametric function.
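The C0 and C1 conditions for joined Bézier segments can be checked directly from control points; the sketch below (the control points are made up for the example) uses de Casteljau evaluation and the fact that a cubic join is C1 exactly when the last leg of the first control polygon equals the first leg of the second:

```python
def bezier(ctrl, t):
    # de Casteljau evaluation of a cubic Bézier curve in the plane
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def bezier_deriv(ctrl, t):
    # Derivative of a degree-3 Bézier: a degree-2 Bézier on the
    # scaled forward differences 3*(P_{i+1} - P_i).
    pts = [tuple(3 * (b - a) for a, b in zip(p, q))
           for p, q in zip(ctrl, ctrl[1:])]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

P = [(0, 0), (1, 2), (2, 2), (3, 0)]
# Choose Q0 = P3 (C0 join) and Q1 = P3 + (P3 - P2), so first derivatives match.
Q = [(3, 0), (4, -2), (5, 0), (6, 0)]

assert bezier(P, 1.0) == bezier(Q, 0.0)               # C0: endpoints agree
assert bezier_deriv(P, 1.0) == bezier_deriv(Q, 0.0)   # C1: tangent vectors agree
```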
The basic idea behind geometric continuity was that the five conic sections were really five different versions of the same shape. An ellipse tends to a circle as the eccentricity approaches zero, or to a parabola as it approaches one; and a hyperbola tends to a parabola as the eccentricity drops toward one; it can also tend to intersecting lines. Thus, there was continuity between the conic sections. These ideas led to other concepts of continuity. For instance, if a circle and a straight line were two expressions of the same shape, perhaps a line could be thought of as a circle of infinite radius. For such to be the case, one would have to make the line closed by allowing the point x = ∞ to be a point on the circle, and for x = +∞ and x = −∞ to be identical. Such ideas were useful in crafting the modern, algebraically defined, idea of the continuity of a function and of $\infty$.

Smoothness of curves and surfaces

A curve or surface can be described as having Gn continuity, n being the increasing measure of smoothness. Consider the segments either side of a point on a curve: • G0: The curves touch at the join point. • G1: The curves also share a common tangent direction at the join point. • G2: The curves also share a common center of curvature at the join point. In general, Gn continuity exists if the curves can be reparameterized to have Cn (parametric) continuity.[4] A reparametrization of the curve is geometrically identical to the original; only the parameter is affected. Equivalently, two vector functions $\scriptstyle f(s)$ and $\scriptstyle g(t)$ have Gn continuity if $\scriptstyle f^{(n)}(t) \,\neq\, 0$ and $\scriptstyle f^{(n)}(t) \,=\, kg^{(n)}(t)$, for a scalar $\scriptstyle k \,>\, 0$ (i.e., if the direction, but not necessarily the magnitude, of the two vectors is equal).
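The contrast between C1 and G1 can be illustrated by joining two parametrizations of the same circle, the second traced at double speed (an illustrative sketch, not from the original text): the derivative vectors at the join disagree, but they are positive scalar multiples of one another, so the tangent direction is continuous.

```python
import math

# First segment: a quarter of the unit circle at unit angular speed.
def fprime(t):
    return (-math.sin(t), math.cos(t))

# Second segment: the same circle continued, but traced at double speed.
def gprime(t):
    return (-2 * math.sin(math.pi / 2 + 2 * t),
             2 * math.cos(math.pi / 2 + 2 * t))

a = fprime(math.pi / 2)   # tangent at the end of the first segment
b = gprime(0.0)           # tangent at the start of the second

# Not C1: the derivative vectors differ ...
assert a != b

# ... but G1: b = k * a with k = 2 > 0, so the tangent *direction*
# is continuous across the join even though its magnitude jumps.
k = b[0] / a[0]
assert k > 0
assert abs(b[1] - k * a[1]) < 1e-12
```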
While it may be obvious that a curve would require G1 continuity to appear smooth, for good aesthetics, such as those aspired to in architecture and sports car design, higher levels of geometric continuity are required. For example, reflections in a car body will not appear smooth unless the body has G2 continuity. A rounded rectangle (with ninety degree circular arcs at the four corners) has G1 continuity, but does not have G2 continuity. The same is true for a rounded cube, with octants of a sphere at its corners and quarter-cylinders along its edges. If an editable curve with G2 continuity is required, then cubic splines are typically chosen; these curves are frequently used in industrial design.

Relation to analyticity

While all analytic functions are smooth on the set on which they are analytic, the above example shows that the converse is not true for functions on the reals: there exist smooth real functions which are not analytic. For example, the Fabius function is smooth but not analytic at any point. Although it might seem that such functions are the exception rather than the rule, it turns out that the analytic functions are scattered very thinly among the smooth ones; more rigorously, the analytic functions form a meagre subset of the smooth functions. Furthermore, for every open subset A of the real line, there exist smooth functions which are analytic on A and nowhere else. It is useful to compare the situation to that of the ubiquity of transcendental numbers on the real line. Both on the real line and the set of smooth functions, the examples we come up with at first thought (algebraic/rational numbers and analytic functions) are far better behaved than the majority of cases: the transcendental numbers and nowhere analytic functions have full measure (their complements are meagre). The situation thus described is in marked contrast to complex differentiable functions.
If a complex function is differentiable just once on an open set it is both infinitely differentiable and analytic on that set. Smooth partitions of unity Smooth functions with given closed support are used in the construction of smooth partitions of unity (see partition of unity and topology glossary); these are essential in the study of smooth manifolds, for example to show that Riemannian metrics can be defined globally starting from their local existence. A simple case is that of a bump function on the real line, that is, a smooth function f that takes the value 0 outside an interval [a,b] and such that $f(x) > 0 \quad \text{ for } \quad a < x < b.\,$ Given a number of overlapping intervals on the line, bump functions can be constructed on each of them, and on semi-infinite intervals (-∞, c] and [d,+∞) to cover the whole line, such that the sum of the functions is always 1. From what has just been said, partitions of unity don't apply to holomorphic functions; their different behavior relative to existence and analytic continuation is one of the roots of sheaf theory. In contrast, sheaves of smooth functions tend not to carry much topological information. Smooth functions between manifolds Smooth maps between smooth manifolds may be defined by means of charts, since the idea of smoothness of function is independent of the particular chart used. If F is a map from an m-manifold M to an n-manifold N, then F is smooth if, for every $\scriptstyle p \in M$, there is a chart $\scriptstyle (U, \varphi)$ in M containing p and a chart $\scriptstyle (V, \psi)$ in N containing F(p) with $\scriptstyle F(U) \subset V$, such that $\scriptstyle\psi\circ F \circ \varphi^{-1}$ is smooth from $\scriptstyle\varphi(U)$ to $\scriptstyle\psi(V)$ as a function from $\scriptstyle\mathbb{R}^m$ to $\scriptstyle\mathbb{R}^n$. Such a map has a first derivative defined on tangent vectors; it gives a fibre-wise linear mapping on the level of tangent bundles. 
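A minimal sketch of the bump-function construction (the helper names `psi`, `step`, `bump`, `rho1`, `rho2` are ad hoc choices for this illustration): smooth bumps and steps built from $e^{-1/x}$, plus a two-function partition of unity in which `rho1` is supported in $(-\infty, 4)$ and `rho2 = 1 - rho1` vanishes for $x \le 3$, so the pair is subordinate to the overlapping cover $\{(-\infty,4),\,(3,\infty)\}$:

```python
import math

def psi(x):
    # Smooth on all of R, identically 0 for x <= 0: the exp(-1/x) building block.
    return math.exp(-1.0 / x) if x > 0 else 0.0

def step(x):
    # Smooth transition: 0 for x <= 0, 1 for x >= 1, strictly between on (0, 1).
    return psi(x) / (psi(x) + psi(1.0 - x))

def bump(x, a=0.0, b=3.0, w=1.0):
    # Smooth bump supported on [a, b]: rises on [a, a+w], falls on [b-w, b].
    return step((x - a) / w) * step((b - x) / w)

def rho1(x):
    return step((4.0 - x) / 1.0)   # 1 on (-inf, 3], smoothly falls to 0 on [3, 4]

def rho2(x):
    return 1.0 - rho1(x)           # automatically smooth; vanishes for x <= 3

for x in [1.0, 1.5, 2.0, 2.7, 3.3, 4.0]:
    assert abs(rho1(x) + rho2(x) - 1.0) < 1e-12   # the sum is always 1
assert bump(-0.1) == 0.0 and bump(3.1) == 0.0 and bump(1.5) > 0.0
```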
Smooth functions between subsets of manifolds

There is a corresponding notion of smooth map for arbitrary subsets of manifolds. If $\scriptstyle f : X \to Y$ is a function whose domain and range are subsets of manifolds $\scriptstyle X \subset M$ and $\scriptstyle Y \subset N$ respectively, then $\scriptstyle f$ is said to be smooth if for all $\scriptstyle x \in X$ there is an open set $\scriptstyle U \subset M$ with $\scriptstyle x \in U$ and a smooth function $\scriptstyle F : U \to N$ such that $\scriptstyle F(p)=f(p)$ for all $\scriptstyle p \in U \cap X$.

References

1. ^ a b c Warner (1983), p. 5, Definition 1.2.
2. ^ Parametric Curves
3. ^ (Bartels, Beatty & Barsky 1987, Ch. 13)
4. ^ Brian A. Barsky and Tony D. DeRose, "Geometric Continuity of Parametric Curves: Three Equivalent Characterizations," IEEE Computer Graphics and Applications, 9(6), Nov. 1989, pp. 60–68.
• This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). Encyclopædia Britannica (11th ed.). Cambridge University Press.
• Guillemin, Pollack. Differential Topology. Prentice-Hall (1974).
• Warner, Frank Wilson (1983). Foundations of differentiable manifolds and Lie groups. Springer. ISBN 9780387908946.
https://math.codidact.com/posts/287642
# What is the Name of Function for Probability of a Certain Sum on Random Die Rolls?

Hi. I'm writing a book about using statistics for roleplaying game design and am using this equation for calculating the probability of rolling a particular sum "n" on "z" throws of an "a"-sided die $${{1} \over {a^{z}}} \displaystyle \sum_{k=0}^{\lfloor (n - z) / a \rfloor } (-1)^{k} (_zC_k) (_{(n-a)(k-1)}C_{(z-1)})$$ What is the name for this function? I'm trying to search for calculators to recommend that support it, and I can't think of the words to search for.

I don't know a special name for exactly that sum, but it is closely related to what are known as polynomial coefficients or extended binomial coefficients. These are often written as ${n \choose j}_{k+1}$ and are defined indirectly via: $$\left(\sum_{i=0}^k x^i\right)^n = \sum_{j=0}^{nk} {n \choose j}_{k+1} x^j$$ with ${n \choose 0}_0 = 1$ and ${n \choose j}_{k+1} = 0$ for $j\notin \{0,\dots,nk\}$. Polynomial coefficients and distribution of the sum of discrete uniform variables by Caiado, C.C.S. and Rathie, P.N. provides this definition and a recurrence relation to compute this number. As the title suggests, it also provides a direct solution to your probability problem. If $Y = \sum_{i=1}^n X_i$ where each $X_i$ is a discrete random variable uniformly sampled from $\{0,\dots,k\}$, then $$P(Y=y) = \frac{1}{(k+1)^n}{n \choose y}_{k+1}$$ This is the probability that rolling $n$ dice with values from $0$ to $k$ produces the sum $y$. The more usual case of dice with values $1$ to $k$ can be reproduced by noting this is equivalent to rolling $n$ dice with values from $0$ to $k-1$ to produce the sum $y-n$.
The aforementioned paper gives a summation expression in terms of gamma functions which probably does reduce to your sum, but a clearer expression is in Restricted Weighted Integer Compositions and Extended Binomial Coefficients by Steffen Eger. Table 1 in the conclusion (page 22) lists various binomial identities and the corresponding extended binomial identities derived in that paper. (See also equation 15 in Example 33.) One of them is: $${k \choose n}_{l+1} = \sum_{j\geq 0}(-1)^j{k \choose j}{n + k - (l + 1)j - 1 \choose k-1}$$ Matching your notation and performing the adjustment as above gives: $${z \choose n-z}_a = \sum_{k\geq 0}(-1)^k{z \choose k}{n - ak - 1 \choose z-1}$$ This is almost exactly the sum you wrote except you have $(n-a)(k-1)$ where I have $n - ak - 1$. Checking the $n=3$ and $a=z=2$ case, the above formula gives the correct result of $2$ whereas your variant is $0$, so I'm going to assume that was just a typo. Just for completeness, for the second binomial coefficient factor to be non-zero we need $n - ak - 1 \geq z - 1$, i.e. $k \leq (n-z)/a$, giving $\lfloor (n-z)/a \rfloor$ as an upper bound for the summation assuming the other numbers make sense, i.e. that $z \leq n \leq az$.
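As a quick sanity check (a sketch; the helper names `p_exact` and `p_brute` are ad hoc), the corrected sum — with $n - ak - 1$ in the second binomial coefficient — can be compared against brute-force enumeration of all outcomes:

```python
from math import comb
from itertools import product

def p_exact(n, z, a):
    # P(sum = n on z fair a-sided dice), using the corrected formula
    # with C(n - a*k - 1, z - 1) rather than C((n-a)(k-1), z-1).
    upper = (n - z) // a
    total = sum((-1) ** k * comb(z, k) * comb(n - a * k - 1, z - 1)
                for k in range(upper + 1))
    return total / a ** z

def p_brute(n, z, a):
    # Enumerate all a**z equally likely outcomes directly.
    hits = sum(1 for roll in product(range(1, a + 1), repeat=z)
               if sum(roll) == n)
    return hits / a ** z

# The n=3, z=2, a=2 case from the answer: 2 favourable outcomes out of 4.
assert p_exact(3, 2, 2) == p_brute(3, 2, 2) == 0.5

# Spot-check the formula for every achievable sum on 3d6.
for n in range(3, 19):
    assert abs(p_exact(n, 3, 6) - p_brute(n, 3, 6)) < 1e-12
```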
http://math.stackexchange.com/questions/178490/reference-request-symmetric-product-schemes
# Reference-Request: Symmetric Product Schemes Is there a good reference for the theory of symmetric product schemes? (I only need a few basic things, the construction, etc.) Googling it turned up a lot of papers which use it as if it's common knowledge, so I suspect there should be a reference somewhere, but I can't find any. - Do you know how to construct products of schemes? Do you know how to quotient schemes by the action of a finite group? –  Qiaochu Yuan Aug 3 '12 at 17:09 I didn't know that you could quotient schemes by the action of a finite group. How would you do that? –  only Aug 3 '12 at 17:21 If $R$ is a commutative ring, then an action of $G$ on $\text{Spec } R$ by scheme automorphisms is equivalent to an action of $G$ on $R$ by ring automorphisms, and $\text{Spec } R^G$ is a sensible model of the quotient scheme (where $R^G$ denotes the invariant subring of $R$). The inclusion $R^G \to R$ dualizes to the quotient map $\text{Spec } R \to \text{Spec } R^G$. To extend this to schemes it suffices to find a covering by $G$-invariant affine opens; unfortunately I am not sure if this is always possible... –  Qiaochu Yuan Aug 3 '12 at 17:25 Hmm, the answers to mathoverflow.net/questions/1558/… seem to suggest that it's not always possible to quotient by a finite group... –  only Aug 3 '12 at 18:17 Yes, okay, the issue is that orbits may not be contained in affine opens. For any scheme $S$ such that a finite subset of $S$ is contained in an affine open this is not a problem for the action of the symmetric group on $S \times S \times ... \times S$. –  Qiaochu Yuan Aug 3 '12 at 18:25 I don't think there is a reference addressing exactly your question, so I'll just try my best to answer your question using more general knowledge and providing a number of references. 
• If we have a finite group $G$ acting on a finitely generated algebra $A$ over a field $k$, we know by the Artin-Tate lemma that $A^G$ is also noetherian (it is a finitely generated algebra over $k$ in case $A$ is). You can find this e.g. in Atiyah & Macdonald, Introduction to Commutative Algebra. The idea is that $A$ is an $A^G$-module of finite type. It is easy to see that $K^G$ is the quotient field of $A^G$, in the case where $A$ is a domain. • For actions of finite groups on schemes: when we have a scheme $X$ and a finite group $G$ acting on $X$, one may reduce with some generality to the affine case, solved above. Indeed, this is the case for $X$ a quasiprojective scheme over a ring $A$ (we will take $A$ to be Noetherian). I know that this result is due to Michael Artin, but I saw it in James Milne's Etale Cohomology book. I would say that a good reference in general is Mumford's Geometric Invariant Theory. IMPORTANT: The condition that is required to ensure the reduction to the affine case is the following. If any $G$-orbit admits an affine open set containing it, then we may construct $X/G$ by reduction to the affine case. If $X$ is quasiprojective over an affine ring, this is the case for any finite set of points. • It is easy to see that, if $X$ is normal, so is $X/G$, provided $X$ is quasiprojective over a field or so (i.e. provided we fall under the above hypotheses). • In the case of a smooth curve $C$, you may consider the symmetric product $S^d(C).$ This is smooth, and the proof is done via local coordinates, as follows. 1) In the case where $C={\mathbb A}^1$, we are still in the affine case.
We know by the theorem on elementary symmetric polynomials (Waring) that (here $X=(X_i)_{i\leq n}$) $$k[X]^{S_n}=k[s_1, \cdots , s_n],$$ where $s_i$ are defined by $$T^n-s_1T^{n-1} + \cdots +(-1)^ns_n= \prod (T-X_i).$$ 2) In the case of a smooth curve $C$, consider a point $\sum n_i P_i$, where $P_i \in C$ and $\sum n_i=d.$ Without much effort (but by proceeding carefully) one may reduce to the above by choosing local coordinates and considering the decomposition group of a point of $C^d$ above our chosen point in $S^dC$. • If $X$ is a smooth projective surface, then the desingularisation of $S^dX$ (which is always non-smooth for $\dim X\geq 2$ as the branch locus is a union of diagonals which has codimension $\leq 2$) has a correspondence with the Hilbert scheme of zero-dimensional subschemes of $X$ of length $d$. The standard reference is H. Nakajima's book:
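The identity $k[X]^{S_n}=k[s_1,\cdots,s_n]$ above can be spot-checked numerically (an illustrative sketch; integer inputs keep the arithmetic exact) by expanding $\prod (T-X_i)$ and comparing its coefficients against the elementary symmetric polynomials $s_i$:

```python
import random
from itertools import combinations
from math import prod

def elementary_symmetric(xs, i):
    # s_i: the sum over all i-element products of the variables
    return sum(prod(c) for c in combinations(xs, i))

def poly_from_roots(xs):
    # Coefficients of prod_j (T - x_j), highest degree first, built by
    # repeatedly multiplying the current polynomial by (T - x).
    coeffs = [1]
    for x in xs:
        shifted = coeffs + [0]                    # T * p(T)
        scaled = [0] + [-x * c for c in coeffs]   # -x * p(T)
        coeffs = [a + b for a, b in zip(shifted, scaled)]
    return coeffs

random.seed(1)
xs = [random.randint(-9, 9) for _ in range(5)]
coeffs = poly_from_roots(xs)
# T^n - s1 T^{n-1} + s2 T^{n-2} - ... + (-1)^n s_n:
for i in range(len(xs) + 1):
    assert coeffs[i] == (-1) ** i * elementary_symmetric(xs, i)
```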
https://www.physicsforums.com/threads/potential-energy-of-two-atoms.717129/
# Potential Energy of Two Atoms?

1. Oct 17, 2013

### baubletop

1. The problem statement, all variables and given/known data
The potential energy of two atoms separated by a distance r may be written as: U(r) = 4Uatomic * [(r0/r)^12 - (r0/r)^6]
>Given r0 = 4.0 Ao and Uatomic = -0.012795 eV, what is the distance at which there is no net force between the atoms? Express your answer in terms of Ao.
>What is the potential energy at the position where the net force is 0 N? Express your answer in terms of eV.

2. Relevant equations
The above equation.

3. The attempt at a solution
I tried using Coulomb's Law, but I don't know the charges so it's not very helpful. Finding the total energy is also not relevant since I don't know mass or velocity. I'm pretty stuck here. I know once I find the first part I can find the second part easily, but I feel like I'm overlooking something simple for the first part of the question.

2. Oct 17, 2013

### Staff: Mentor

Hint: What's the relationship between force and the potential function? (Use calculus.)

3. Oct 17, 2013

### baubletop

F = -dU/dr :) I tried it and it worked! Thanks for the hint!
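For readers checking their work: with $F = -dU/dr$, setting the derivative to zero gives $r = 2^{1/6}\,r_0 \approx 4.49$ Ao regardless of the sign of Uatomic, and at that separation $(r_0/r)^6 = 1/2$, so $U = 4U_{\text{atomic}}(1/4 - 1/2) = -U_{\text{atomic}}$. A short numerical sketch with the given values (variable names are ad hoc):

```python
# Zero net force where dU/dr = 0.  Solving analytically gives r = 2**(1/6)*r0;
# here we confirm it with a central-difference derivative and evaluate U there.
r0 = 4.0               # angstroms, as given
U_atomic = -0.012795   # eV, as given

def U(r):
    return 4 * U_atomic * ((r0 / r) ** 12 - (r0 / r) ** 6)

def dU_dr(r, h=1e-6):
    return (U(r + h) - U(r - h)) / (2 * h)

r_star = 2 ** (1 / 6) * r0          # ~4.49 angstroms
assert abs(dU_dr(r_star)) < 1e-8    # the force F = -dU/dr vanishes here

# At r_star, (r0/r)^6 = 1/2, so U = 4*U_atomic*(1/4 - 1/2) = -U_atomic.
assert abs(U(r_star) - (-U_atomic)) < 1e-12
```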
http://math.stackexchange.com/questions/197894/direction-of-a-bearing?answertab=oldest
# Direction of a bearing

What is the direction in degrees that corresponds to the bearing S 48 W? Do not give any units in your answer. As usual, your answer should consist only of a number. Your answer must be between -180 and 179. so since its -(90+48)= -138 correct? This is the theory i have in my textbook - I don't understand the problem, but I really don't understand why there's a 480 in the problem and a 48 in your solution. Or is that 480 supposed to be 48 degrees? –  Gerry Myerson Sep 17 '12 at 5:42 I would concur with Brian's answer. More typically it would be expressed as 228°. –  copper.hat Sep 17 '12 at 7:14 What you have in your textbook confirms my guess at the meaning of S 48° W but doesn’t have any bearing (sorry!) on the other assumptions. –  Brian M. Scott Sep 17 '12 at 7:28 Agree with copper.hat. I have never seen a compass dial with negative numbers. Quite irrespective of what OP's textbook says. OTOH adding/subtracting a multiple of 360 to accomodate for the whims of a textbook author is also a basic skill. Anyway I'm fairly sure that copper.hat was simply making the observation that the prescribed interval of length 360 is an odd choice in the context of compass bearings. –  Jyrki Lahtonen Sep 17 '12 at 7:33 @JyrkiLahtonen ya i also knew that there is no negative compass.. u know wat i am going to clarify this with my teacher and going to show him this post haha.. –  JackyBoi Sep 17 '12 at 7:40
http://math.stackexchange.com/questions/111326/poker-game-probability-question?answertab=active
# Poker game probability question

In a standard poker game (no wild cards), suppose you are dealt five cards and your hand contains exactly one pair. You trade in the three worthless cards for new ones. What is the probability that your hand improves, meaning that there is a substantive transformation from one "kind" of hand into another? - As noted above, the easiest way is to calculate $P(\text{Not improving})$, and then $P(\text{Improving})=1-P(\text{Not improving})$. If you already have a pair, you can't get a straight or a flush. You can only improve by making trips or two pairs. Once you have improved to trips or two pairs you can improve further, but we don't need to concern ourselves with that. The rough way to calculate this is: $$P(\text{First card won't give you trips}) = \frac{45}{47}$$ as there are two cards that give you trips. $$P(\text{Second card won't give trips or two pairs}) = \frac{41}{46}$$ as there are two cards for trips + three cards for two pairs. $$P(\text{last card won't give trips or two pairs}) = \frac{37}{45}$$ i.e. $2 + 3 + 3$ cards that improve. $$P(\text{Not improved}) = \frac{45}{47} \cdot \frac{41}{46} \cdot \frac{37}{45} \approx 0.702$$ and hence $$P(\text{Improved}) \approx 0.298.$$ This isn't exact, as you may have picked a card of the same rank that was earlier removed. In that case, there are only two and not three matches, which improves your odds slightly. Likewise, for your last card, you may have picked one or two cards of ranks that were earlier discarded. In all, this adds $\approx 1\%$ to your chances of not improving.
- You have a pair, so these are the (mutually exclusive) ways of improving your hand: • Being dealt two worthless cards and a third of the number in the pair, giving you three of a kind • Being dealt one extra card and the remaining two of the number in the pair, giving you four of a kind • Being dealt one extra card and another pair, giving you two pair • Being dealt three of a kind, giving you a full house Note that you cannot get a flush or a straight. Also observe that the three cards you discarded all had distinct numbers, otherwise you would have had two pair. Let us assume that there are no other players, so that there are $47$ cards remaining. (If there are other players, you need to do essentially the following for each possible hand they're dealt, then sum over all of their possible hands times the probability of those hands. Probably intractable by hand.) Fix the hand you were dealt. We count the number of ways of being dealt each of the preceding possibilities. • Think about picking two worthless cards. You can't pick the number you've been dealt, so you have $45$ options for the first. For the second, you can no longer pick the same number as the first, or you'd be dealt two pair; that rules out $3$ more cards if the first came from one of the $9$ untouched numbers, and $2$ more if it came from one of the $3$ numbers you discarded from. Counting unordered pairs directly, there are $$\binom{45}{2} - 9\binom{4}{2} - 3\binom{3}{2} = 927$$ pairs of worthless cards. There are $2$ ways of picking a third card of the number of the pair you've been dealt, so there are $$2\cdot 927 = 1854$$ ways of getting three of a kind. • There is exactly one way of being dealt the remaining two cards of the number in your pair. There are $45$ options for the worthless extra card. So there are $$45$$ ways of being dealt four of a kind. • There are $12$ numbers remaining for another pair. However, $9$ of them have four suits for each number, while $3$ of them are the same number as the three you discarded, so there are $\binom{9}{1}\binom{4}{2} + \binom{3}{1}\binom{3}{2}$ ways of choosing another pair.
Then the extra card can't be from the same number as that pair, nor the same number as your original pair. Two cards of that pair's number remain in the deck if you picked it from the $9$ full sets, and one remains if you picked it from the $3$ sets you discarded from, so there are $45-2-2=41$ cards to select the extra card from in the first case and $45-2-1=42$ in the second. Therefore, there are $$41\cdot 9\binom{4}{2} + 42\cdot 3\binom{3}{2} = 2592$$ ways of being dealt two pair. • To be dealt a full house, you can either be dealt a pair with extra card of the same number as your original pair, or you can be dealt three of a kind. For the former, replace $41$ and $42$ by $2$ in the previous bullet point. For the latter, there are $9$ numbers to choose from with $4$ suits and $3$ numbers to choose from with $3$ suits, so $\binom{9}{1}\binom{4}{3} + \binom{3}{1}\binom{3}{3}$ ways of being dealt a three-of-a-kind. Adding, $$2\cdot \bigg( 9\binom{4}{2} + 3\binom{3}{2} \bigg) + \bigg(\binom{9}{1}\binom{4}{3} + \binom{3}{1}\binom{3}{3}\bigg) = 165$$ ways of being dealt a full house. Since there are $\binom{47}{3} = 16215$ ways of being dealt three replacement cards, the probability of improvement is: $$\frac{1854 + 45 + 2592 + 165}{\binom{47}{3}} = \frac{4656}{16215} \approx 28.7\%.$$ I think I covered everything here without error, but these counting arguments can be tricky. - Can I also say that I want to find the possibility of the hand not improving, and use 1-P(not improved)=P(improved)? Are there any flaws in that? If not, what is the probability that we still get 3 worthless cards? – user25329 Feb 20 '12 at 19:14 Yes, you can certainly do that as well. You'll have to rule out various cases, which I thought would lead to about as much work as the direct computation.
– Neal Feb 20 '12 at 20:19

Hints: I know nothing of poker, but I'm assuming the three cards you throw away are not added back to the deck before you choose three new cards from it. (If this is not the case, the following isn't relevant, but you can suitably modify it, and in fact it becomes an easier problem.) You may assume that the "new deck" from which you choose the three new cards:

1) has a pair missing;
2) has three other cards missing that have distinct face values, each distinct from the face value of the original pair.

So, for example, the new deck is missing the two of hearts, the two of spades, the four of diamonds, the five of diamonds, and the ace of hearts. Now you want to find the probability that selecting three cards from this new deck of 47 gives you a better hand. This can occur in, and only in, the following mutually exclusive ways:

1) you obtain a three of a kind whose face value is the same as your original pair (and nothing better);
2) you obtain a four of a kind;
3) you obtain an additional pair, distinct from the original pair, and nothing better;
4) you obtain a full house (note this can occur in two fundamentally different ways).

The above list is complete: you better your hand if and only if one of the above occurs. So, find the probability of each of the above cases occurring and take their sum. -
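As a sanity check on the counting above (and on the complement approach raised in the comments), the draw space is small enough, $\binom{47}{3}=16215$, to enumerate exhaustively. A sketch in Python, using the fixed example hand from the hints above (a pair of twos, discarding the four of diamonds, five of diamonds and ace of hearts):

```python
from itertools import combinations
from collections import Counter

# Example hand from the hints above: a pair of 2s (hearts, spades),
# discarding the 4 of diamonds, 5 of diamonds and ace of hearts (rank 14).
# 47 unseen cards remain in the deck.
removed = {(2, 'H'), (2, 'S'), (4, 'D'), (5, 'D'), (14, 'H')}
deck = [(r, s) for r in range(2, 15) for s in 'CDHS' if (r, s) not in removed]

def improves(draw):
    """The hand improves iff a drawn card matches the pair's rank
    (trips/quads/full house) or two drawn cards share a rank
    (two pair / full house)."""
    ranks = Counter(r for r, _ in draw)
    return ranks[2] >= 1 or max(ranks.values()) >= 2

draws = list(combinations(deck, 3))   # C(47,3) = 16215 possible draws
good = sum(improves(d) for d in draws)
print(good, len(draws))               # 4656 16215
```

The exact answer for this hand, $4656/16215 \approx 28.7\%$, is slightly below the approximate count above because the "$45-3=42$ options" step overcounts a little.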
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Statistical_Mechanics/Advanced_Statistical_Mechanics/Postulates_of_Quantum_Mechanics/The_Heisenberg_Picture
# The Heisenberg Picture

In all of the above, notice that we have formulated the postulates of quantum mechanics such that the state vector $$\vert\Psi(t)\rangle$$ evolves in time, but the operators corresponding to observables are taken to be stationary. This formulation of quantum mechanics is known as the Schrödinger picture. However, there is another, completely equivalent, picture in which the state vector remains stationary and the operators evolve in time. This picture, known as the Heisenberg picture, will prove particularly useful to us when we consider quantum time correlation functions. The Heisenberg picture specifies an evolution equation for any operator $$A$$, known as the Heisenberg equation. It states that the time evolution of $$A$$ is given by ${dA \over dt} = {1 \over i\hbar}[A,H]$ While this evolution equation must be regarded as a postulate, it has a very immediate connection to classical mechanics. Recall that any function of the phase space variables $$A(x, p)$$ evolves according to ${dA \over dt} = \{A,H\}$ where $$\{...,...\}$$ is the Poisson bracket. The suggestion is that in the classical limit ($$\hbar$$ small), $${1 \over i\hbar}[A,H]$$ goes over to the Poisson bracket $$\{A,H\}$$. The Heisenberg equation can be solved in principle, giving $A (t) = e^{iHt/\hbar}A e^{-iHt/\hbar} = U^{\dagger}(t)A U(t)$ where $$A$$ is the corresponding operator in the Schrödinger picture and $$U(t) = e^{-iHt/\hbar}$$ is the time-evolution operator. Thus, the expectation value of $$A$$ at any time $$t$$ is computed from $\langle A(t) \rangle = \langle \Psi\vert A(t)\vert\Psi\rangle$ where $$\vert \Psi \rangle$$ is the stationary state vector. Let's look at the Heisenberg equations for the operators $$X$$ and $$P$$.
If $$H$$ is given by $H = {P^2 \over 2m} + U(X)$ then Heisenberg's equations for $$X$$ and $$P$$ are ${dX \over dt} = {1 \over i\hbar}[X,H] = {P \over m}$ ${dP \over dt} = {1 \over i\hbar}[P,H] = -{\partial U \over \partial X}$ Thus, Heisenberg's equations for the operators $$X$$ and $$P$$ are just Hamilton's equations cast in operator form. Despite their innocent appearance, the solution of such equations, even for a one-particle system, is highly nontrivial and has been the subject of a considerable amount of research in physics and mathematics. Note that any operator that satisfies $$\left [ A(t), H \right ] = 0$$ will not evolve in time. Such operators are known as constants of the motion. The Heisenberg picture shows explicitly that such operators do not evolve in time. However, there is an analog in the Schrödinger picture: for operators that commute with the Hamiltonian, the probabilities for obtaining the different eigenvalues do not evolve in time. For example, consider the Hamiltonian itself, which is trivially a constant of the motion. According to the evolution equation of the state vector in the Schrödinger picture, $\vert\Psi(t)\rangle = \sum_i e^{-iE_it/\hbar}\vert E_i\rangle \langle E_i\vert\Psi(0)\rangle$ the amplitude for obtaining an energy eigenvalue $$E_j$$ at time $$t$$ upon measuring $$H$$ will be $\langle E_j\vert\Psi(t)\rangle = \sum_i e^{-iE_it/\hbar}\langle E_j\vert E_i\rangle \langle E_i\vert\Psi(0)\rangle$ $= \sum_i e^{-iE_it/\hbar}\delta_{ij}\langle E_i\vert\Psi(0)\rangle$ $= e^{-iE_jt/\hbar}\langle E_j\vert\Psi(0)\rangle$ Taking the squared modulus of both sides yields the probability for obtaining $$E_j$$, which is $\vert\langle E_j\vert\Psi(t)\rangle \vert^2 = \vert\langle E_j\vert\Psi(0)\rangle \vert^2$ Thus, the probabilities do not evolve in time.
Since any operator that commutes with $$H$$ can be diagonalized simultaneously with $$H$$ and will have the same set of eigenvectors, the above arguments will hold for any such operator.
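The equivalence of the two pictures is easy to check numerically. A sketch (not part of the original text) in Python with numpy/scipy, using an arbitrary two-level Hamiltonian: it verifies that $\langle\Psi\vert A(t)\vert\Psi\rangle$ computed with $A(t)=U^{\dagger}(t)AU(t)$ matches the Schrödinger-picture expectation value, and that $H$ itself is a constant of the motion.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
# Arbitrary 2-level Hamiltonian and observable (illustrative choices).
H = np.array([[1.0, 0.3], [0.3, -1.0]])
A = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # stationary state vector

t = 0.7
U = expm(-1j * H * t / hbar)             # time-evolution operator U(t)

# Heisenberg picture: the operator evolves, the state is fixed.
A_t = U.conj().T @ A @ U
heis = psi.conj() @ A_t @ psi

# Schrodinger picture: the state evolves, the operator is fixed.
psi_t = U @ psi
schro = psi_t.conj() @ A @ psi_t

print(np.isclose(heis, schro))             # True: the two pictures agree
print(np.allclose(U.conj().T @ H @ U, H))  # True: H does not evolve
```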
http://math.stackexchange.com/questions/50342/concrete-example-illustrating-the-interior-product
# Concrete Example Illustrating the Interior Product

Let $V$ be a finite-dimensional vector space, let $v \in V$ and let $\omega$ be an alternating $k$-tensor on $V$, i.e., $\omega \in \Lambda^{k}(V)$. Then the interior product of $v$ with $\omega$, denoted $i_{v}\omega$, is given by a mapping $$i_{v}:\Lambda^{k}(V)\rightarrow \Lambda^{k-1}(V)$$ determined by $$(i_v \omega)(v_1, \dots, v_{k-1}) = \omega(v, v_1, \dots, v_{k-1}).$$ My understanding of this, which is probably far from complete, is that the interior product basically provides a mechanism to produce a $(k-1)$-tensor from a $k$-tensor relative to some fixed vector $v$. I'm trying to understand, however, what the interior product actually means and how it is used in practice. Therefore, my question is: Can anyone provide example(s) illustrating computations and/or physical examples that will shed light on its purpose? Also, the interior product seems to be somewhat (inversely?) related to the exterior product, in that the exterior product takes a $p$-tensor and a $q$-tensor and makes a $(p+q)$-tensor and is therefore an "expansion". The interior product, on the other hand, is a contraction and always produces a tensor of degree one less than you started out with. So, secondly: What is the precise relation between the interior and exterior products? Unfortunately, the Wikipedia page is of little help here and I can't find a reference that clearly explains these things. - Just nitpicking: you can identify elements of $\bigwedge^k$ with alternating $k$-tensors only in characteristic 0. –  Alexei Averchenko Jul 8 '11 at 15:32 @Alexei the question is tagged differential geometry :-) –  Willie Wong Jul 9 '11 at 15:37 For a physical example of the use of the interior product: that is how one can geometrically define the "electric" and "magnetic" parts of an electromagnetic field given by the Faraday tensor. It also shows conveniently how electric and magnetic fields change under Lorentz transformations.
–  Willie Wong Jul 9 '11 at 16:04 Let me give another illustration of how the interior and exterior products are related. This particular case, however, requires not just differential geometry but Riemannian geometry. Given a metric $g$, denote by $\langle,\rangle$ the extension of its inner (not interior) product to forms. The metric $g$ induces an identification between the vector space $V$ and its dual $V^*$, via the operator $v\mapsto v^\flat$, where $$v^\flat(w) = \langle v,w\rangle$$ ($v^\flat \in V^*$ is a linear functional on $V$, and here we define it by its action on $w\in V$). Then we have the nice property, for $\eta\in\Lambda^{k-1}(V)$, $\tau \in \Lambda^k(V)$, and $v\in V$, that $$\langle v^\flat \wedge \eta, \tau\rangle = \langle \eta, i_v\tau \rangle$$ showing how the interior and exterior products are actually adjoint with respect to the metric inner product. A similar statement can be made by appealing to the Hodge-star operator associated to a Riemannian metric. Up to a constant multiplier $C$ (whose form depends a bit on your conventions, and which depends on the dimension and the degree of the forms), you have that $$i_v\tau = C *(v^\flat\wedge *\tau)$$ where $*$ is the Hodge star operator. - Very nice example –  ItsNotObvious Jul 9 '11 at 21:34 I don't think this will completely satisfy your questions, but I think the interior product is a neat way to induce orientations. To give an orientation on an $n$-manifold with boundary $M$ is the same as giving a nowhere-vanishing $n$-form $\Omega$. If $H \subset M$ is a hypersurface and $N$ is a transverse vector field along $H$ (so $N\colon H \to TM$, such that $N_x \in T_xM$ and $T_xM = \mathbb{R}N_x \oplus T_xH$ for $x \in H$), then $i_N\Omega$ restricts to an orientation form on $H$. If $H = \partial M$, then taking $N$ to be an outward-pointing vector field along $\partial M$ gives the usual orientation used in Stokes's theorem.
I don't have my copy with me, but a lot of this should be in Lee's Introduction to Smooth Manifolds. - Thanks for the reference; I've looked it up and it appears to be Proposition 13.12 on p 337. I have some work to do to completely understand the example but it's a start. –  ItsNotObvious Jul 8 '11 at 17:38 No problem. To me it's kind of an uninspiring example, since the usefulness of orientation forms at that point is that they spit out a sign. I wonder if there's something involving volume forms. –  Dylan Moreland Jul 8 '11 at 18:10 Lie derivative, of course! It's even mentioned in the interior product wiki page. As for the relation to the exterior product, IMO you should look more at the exterior derivative for comparison. But if you insist: the interior product is a graded antiderivation, so for any $1$-form $\alpha$ and $k$-form $\omega$ and any vector $x$ we have $i_x (\alpha \wedge \omega) = \alpha(x)\,\omega - \alpha \wedge i_x\omega$. UPD: from a purely algebraic standpoint, it is useful to consider the Lie coalgebra structure on $V^*$: a linear mapping $\mathrm{d}: V^* \to \bigwedge^2 V^*$ extended to a graded anti-derivation. It is easy to see that by defining $\omega([x, y]) = \mathrm{d}\omega(x, y)$ we get a Lie algebra structure on $V$. So, interesting things must arise when we extend some $f: \bigwedge^2 V^* \to V^*$ to a graded antiderivation, right? Well, I can't answer that question, but I'm pretty sure that was the motivation :) - Of course? I'm glad that others find it to be so obvious. Notice though that the Wikipedia page discusses the interior product within the context of manifolds and differential forms. Although I tagged the question with "differential geometry", the definition I provided, which comes from Jeff Lee's Manifolds and Differential Geometry, makes no reference to smooth infrastructure. Therefore, I'm hoping for an explanation in the more elementary context of multilinear algebra. –  ItsNotObvious Jul 8 '11 at 15:46 I didn't mean it's obvious, I meant it's useful for computing the Lie derivative.
Is the interior product even used that much outside differential geometry? –  Alexei Averchenko Jul 8 '11 at 15:48 As I'm just learning the concept I can't really say how it's used, but since it can be defined in a completely algebraic manner I would think that it would have some sort of meaning in that context. If its only purpose is to relate the exterior/Lie derivatives then I can accept that, but would be surprised if this was the case –  ItsNotObvious Jul 8 '11 at 15:55 I wouldn't call it its whole purpose, no. –  Alexei Averchenko Jul 8 '11 at 16:01 I'm a bit puzzled by your answer. Did you read the question at all? You don't seem to address it (except for something in the second paragraph). –  t.b. Jul 8 '11 at 16:12 Here is a partial answer to my own question, specifically the part asking for a concrete example illustrating computational aspects of the interior product. Let the vector space in question be $\mathbb{R}^3$ endowed with the ordered basis $(e_1, e_2, e_3)$ and let $e^1, e^2, e^3$ be the corresponding cobasis. The cobasis elements are members of the dual space $(\mathbb{R}^3)^*$, i.e., linear functionals on $\mathbb{R}^3$, satisfying $e^i(e_j) = \delta^i_j$. Now, suppose $\omega \in \Lambda^{2}(\mathbb{R}^3)$ is given by $\omega = e^1 \wedge e^2$ and let $v = e_1$. Then, for any vector $x \in \mathbb{R}^3$ we can compute the interior product as follows: $$(i_v \omega)(x) = (e^1 \wedge e^2)(e_1, x) = e^1(e_1)e^2(x) - e^1(x)e^2(e_1) = e^2(x)$$ Therefore $i_v \omega = e^2$. Next, keep the same $\omega$ but let $v = e_2$. Then, $$(i_v \omega)(x) = (e^1 \wedge e^2)(e_2, x) = e^1(e_2)e^2(x) - e^1(x)e^2(e_2) = -e^1(x)$$ So $i_v \omega = -e^1$. Finally, it is also easy to see by inspection that if $v = e_3$ then $i_v \omega = 0$. Computations for other values of $\omega$ proceed similarly. - This isn't an answer, but it certainly should be relevant to this discussion.
Andrew McInerney's excellent new book "First Steps in Differential Geometry" discusses the interior product of a $(0, k)$ tensor field (always alternating, according to the book's conventions) on pp. 169-171. He gives two examples: 4.6.17 and 4.6.18. If anyone understands how the result follows in 4.6.18, which "the reader may verify", please inform me. -
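The coordinate computation in the self-answer above can also be mechanized. A minimal sketch in Python/numpy (my own setup, not from the thread), representing a 2-form on $\mathbb{R}^3$ as an antisymmetric matrix, with the convention $(e^1\wedge e^2)(v,w)=e^1(v)e^2(w)-e^2(v)e^1(w)$:

```python
import numpy as np

# A 2-form on R^3 is stored as an antisymmetric matrix T, so that
# omega(v, w) = v @ T @ w.
def wedge2(a, b):
    """Wedge of two covectors (1-forms) given by their coefficient vectors."""
    return np.outer(a, b) - np.outer(b, a)

def interior(v, T):
    """i_v T: contract the vector v into the first slot of the form T."""
    return np.tensordot(v, T, axes=([0], [0]))

e1, e2, e3 = np.eye(3)   # basis vectors; in this basis the cobasis
                         # covectors have the same components
omega = wedge2(e1, e2)   # omega = e^1 ^ e^2

print(interior(e1, omega))   # [0. 1. 0.]   i.e.  e^2
print(interior(e2, omega))   # [-1. 0. 0.]  i.e. -e^1
print(interior(e3, omega))   # [0. 0. 0.]
```

The three printed covectors reproduce the hand computations above: $i_{e_1}\omega = e^2$, $i_{e_2}\omega = -e^1$, $i_{e_3}\omega = 0$.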
https://phys.libretexts.org/Bookshelves/Quantum_Mechanics/Advanced_Quantum_Mechanics_(Kok)/14%3A_Atomic_Orbitals/14.3%3A_Visualizing_Orbitals/14.3.1%3A_s_Orbitals
# 14.3.1: s Orbitals

Orbitals with $$l = 0$$ are called $$s$$ orbitals. Although this is not where the letter comes from, it's useful to think of these as "spherical" orbitals, because they are spherically symmetric. However, they aren't just spheres! Again, remember that the probability cloud for the electron is a fuzzy ball around the nucleus, representing where the electron is likely to be found. The plot below shows two visualizations of the 1s orbital. On the left is a plot of the radial probability density, $$4\pi r^{2}\,\Psi^{*}(r) \Psi(r)$$. This gives the probability density for the electron to be found at radius $$r$$. That is, you must pick a small range $$dr$$ around the $$r$$ you're interested in, and multiply this probability density by that $$dr$$. You then get the probability for finding the electron within that $$dr$$ of your chosen $$r$$. On the right is a cut through the $$x − z$$ plane showing the probability density as a function of position.
Lighter colors mean more probability of finding the electron at that position. Notice that there is a darker spot at the center. This corresponds to the probability dropping to zero at $$r = 0$$, as seen in the left plot. In both cases, distances are plotted in terms of Angstroms; one Angstrom is $$10^{-10}$$ m, which, as you can see from the plot, is about the size of an atom. If you go to the 2s orbitals, an additional bump is added to the radial wave function. Also, the average distance the electron is from the center of the atom gets larger. While the probability clouds for a 1s and 2s orbital overlap, most of the probability for a 2s electron is outside most of the probability for a 1s electron. This means that to some extent, when working out the properties of an atom with two electrons in the 1s shell and one 2s electron (that would be lithium), we can treat the nucleus plus the 1s shell as a single spherical ball of net charge +1. While this isn't perfect, it does lend some support to the approximation we'll make for multielectron atoms, that each electron is moving in a nuclear potential and not interfering too much with the other electrons. Below are the same two plots for the 2s orbital. The scale of the axes is the same as the scale used previously in the 1s orbital, so that you may compare the plots directly. As we move to the 3s orbital, we have to expand the limits of our plots, as the electron is starting to have more and more probability to be at greater radius. In the plots below, you can see that the electron cloud still has reasonable probability density at a radius of 15 Angstroms. You can also see that the 3s orbital is three concentric fuzzy spherical shells; equivalently, the radial function has three bumps. Again, sizes are in Angstroms.
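The qualitative claims above (extra radial bumps, growing average radius) can be checked against the standard closed-form hydrogen radial wavefunctions. A sketch in Python/numpy, not part of the original page, counting the sign changes (radial nodes) of $R_{n0}$ and computing $\langle r\rangle$ in units of the Bohr radius:

```python
import numpy as np

# Normalized l = 0 hydrogen radial wavefunctions R_{n0}(r), with r in units
# of the Bohr radius (standard textbook closed forms).
r = np.linspace(1e-6, 60.0, 200_001)
dr = r[1] - r[0]

R = {
    "1s": 2.0 * np.exp(-r),
    "2s": (1.0 / np.sqrt(2.0)) * (1.0 - r / 2.0) * np.exp(-r / 2.0),
    "3s": (2.0 / np.sqrt(27.0))
          * (1.0 - 2.0 * r / 3.0 + 2.0 * r**2 / 27.0) * np.exp(-r / 3.0),
}

for name, Rn in R.items():
    p = Rn**2 * r**2         # radial probability density; the r^2 volume
    p /= p.sum() * dr        # factor is what forces it to 0 at r = 0
    nodes = int(np.count_nonzero(np.diff(np.sign(Rn)) != 0))
    mean_r = (r * p).sum() * dr
    print(name, nodes, round(mean_r, 2))
# 1s 0 1.5 / 2s 1 6.0 / 3s 2 13.5
```

The node counts 0, 1, 2 are the "bumps minus one" described above, and the mean radii match the exact result $\langle r\rangle = \tfrac{3}{2}n^{2}a_{0}$ for $l=0$, confirming that each shell sits mostly outside the previous one.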
This page titled 14.3.1: s Orbitals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Pieter Kok via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
https://imathworks.com/tex/tex-latex-tikz-set-node-label-position-more-precisely/
# [Tex/LaTex] TikZ set node label position more precisely

Tags: labels, nodes, tikz-pgf

This seems like it should be really easy but I can't seem to find it anywhere... I'd like to be able to fine-tune the positioning of a node label. I'm aware of the \node[label=above/below/etc:{label}] (x) {}; syntax, but that doesn't seem to give you many options on where the label goes. I'd like to be able to place the label slightly closer or farther away, or maybe in a direction other than the 8 presets available. \tikz[label distance=x] isn't a good solution because I need it to be node-specific.

You can define the direction of the label by using label=<angle>:<label text>. To specify the distance on a per-node basis, you have to supply it to the label options: label={[label distance=<distance>]<angle>:<label text>}

\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[
every node/.style=draw,
every label/.style=draw
]
\node [label={[label distance=1cm]30:label}] {Node};
\end{tikzpicture}
\end{document}
https://byjus.com/chemistry/cbse-class-11-chemistry-practical-syllabus/
# CBSE Class 11 Chemistry Practical Syllabus

There are many different experiments for CBSE Class 11 chemistry students who will be writing their practical exams soon. You can find the entire CBSE Class 11 chemistry practical syllabus below:

#### A. Basic Laboratory Techniques

1. Cutting glass tube and glass rod
2. Bending a glass tube
3. Drawing out a glass jet
4. Boring a cork

#### B. Characterization and Purification of Chemical Substances

1. Determination of melting point of an organic compound.
2. Determination of boiling point of an organic compound.
3. Crystallization involving an impure sample of any one of the following: alum, copper sulphate, benzoic acid.

#### C. Experiments Related to pH Change

(a) Any one of the following experiments:

• Determination of pH of some solutions obtained from fruit juices, solutions of known and varied concentrations of acids, bases and salts, using pH paper or universal indicator.
• Comparing the pH of solutions of strong and weak acids of the same concentration.
• Study the pH change in the titration of a strong acid with a strong base using universal indicator.

(b) Study of pH change by common-ion effect in case of weak acids and weak bases.

#### D. Chemical Equilibrium

One of the following experiments:

(a) Study the shift in equilibrium between ferric ions and thiocyanate ions by increasing/decreasing the concentration of either of the ions.
(b) Study the shift in equilibrium between $[Co(H_2O)_6]^{2+}$ and chloride ions by changing the concentration of either of the ions.

#### E. Quantitative Estimation

1. Using a chemical balance.
2. Preparation of a standard solution of oxalic acid.
3. Determination of the strength of a given solution of sodium hydroxide by titrating it against a standard solution of oxalic acid.
4. Preparation of a standard solution of sodium carbonate.
5. Determination of the strength of a given solution of hydrochloric acid by titrating it against a standard sodium carbonate solution.

#### F.
Qualitative Analysis

(a) Determination of one anion and one cation in a given salt.

Cations: $Pb^{2+}, Cu^{2+}, As^{3+}, Al^{3+}, Fe^{3+}, Mn^{2+}, Ni^{2+}, Zn^{2+}, Co^{2+}, Ca^{2+}, Sr^{2+}, Ba^{2+}, Mg^{2+}, NH^{+}_{4}$

Anions: $CO^{2-}_{3}, S^{2-}, SO^{2-}_{3}, NO^{-}_{2}, NO^{-}_{3}, Cl^{-}, I^{-}, PO^{3-}_{4}, C_{2}O^{2-}_{4}, CH_{3}COO^{-}$

(Note: insoluble salts excluded)

(b) Detection of nitrogen, sulphur and chlorine in organic compounds.

#### Project

Scientific investigations involving laboratory testing and collecting information from other sources. A few suggested projects:

• Checking the bacterial contamination in drinking water by testing sulphide ions.
• Study of the methods of purification of water.
• Testing the hardness, presence of iron, fluoride, chloride, etc., depending upon the regional variation in drinking water, and the study of the causes of the presence of these ions above the permissible limit (if any).
• Investigation of the foaming capacity of different washing soaps and the effect of addition of sodium carbonate on them.
• Study of the acidity of different samples of tea leaves.
• Determination of the rate of evaporation of different liquids.
• Study of the effect of acids and bases on the tensile strength of fibers.
• Analysis of fruit and vegetable juices for their acidity.

Note: Any other investigatory project which involves about 10 periods of work can be chosen with the approval of the teacher.

For an in-depth look into the various practical experiments for CBSE Class 11 chemistry students, you can check out the Chemistry Practical Class 11 page for more information on the practical examination.
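As an illustration of item E.3 above, here is a short worked calculation with made-up titration numbers (the concentration and volumes below are assumed for the example, not taken from the syllabus):

```python
# Item E.3: strength of NaOH by titration against standard oxalic acid.
# Reaction: H2C2O4 + 2 NaOH -> Na2C2O4 + 2 H2O  (mole ratio 1 : 2)

M_oxalic = 0.05   # mol/L, standard oxalic acid solution (assumed)
V_oxalic = 20.0   # mL of oxalic acid pipetted per titration (assumed)
V_naoh   = 18.4   # mL of NaOH delivered from the burette (assumed)

# From the mole ratio: moles NaOH = 2 x moles oxalic acid used,
# so M_naoh * V_naoh = 2 * M_oxalic * V_oxalic.
M_naoh = 2 * M_oxalic * V_oxalic / V_naoh
strength = M_naoh * 40.0   # g/L, using molar mass of NaOH = 40 g/mol

print(round(M_naoh, 4), round(strength, 3))   # 0.1087 4.348
```

The same template works for item E.5 (HCl against standard sodium carbonate) with the ratio adjusted to 2 HCl : 1 Na2CO3.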
https://www.reddit.com/r/math/comments/b8fqb/what_is_the_product_of_all_real_positive_numbers/c0lh76n
[–] 45 points (50 children) How do you propose to compute the product of an uncountable set? Why should such a thing even exist? [–] 23 points (6 children) This is the right idea. What is your definition of the "value" of an infinite sum/product? The only reasonable definition I know of is that you must take the limit of the partial sums/products. Now you must ask what is the sequence of partial sums/products? In order for that sequence to encompass the entire set of reals, you must iteratively add/multiply by a real number in such a way that given any specific real number, x, x will have been added/multiplied at some finite step. For example, suppose you propose a method of adding/multiplying reals in an order, and I ask you: "Alright, so how many numbers get added/multiplied before you add/multiply by pi"? You should, in theory, be able to answer that question with a specific number, say one million. It turns out that there is no way to do this with the reals, no matter how hard you try. This result is due to Georg Cantor if you want to look into it. The upshot of this is that your question doesn't even make sense. It's not that the answer is undefined (like 0/0), but its more like asking "What is the sum of white?" On the other hand, if you change the word "reals" to "rationals", then another poster had it right with the notion that the product is conditionally convergent and thus is order-dependent as you surmised. [–] 12 points (5 children) There is a standard way of defining sums (and products) over uncountable sets: take the sup of all finite sums. It doesn't help in this case because the finite products are unbounded, but the uncountability is not necessarily a big problem.
Edit: blarg, markdown [–] 1 point2 points  (0 children) Uncountability is still a big problem, in fact--note that a sum of uncountably many positive numbers can never be finite. (this is mentioned in your link) [–] 0 points1 point * (2 children) I've seen that definition used for summations over uncountable sets, but never for products. For summations over uncountable sets one can prove that the summation converges only if at most countably many of the summands are non-zero. Do you know if a similar result holds for products? Edit: Fixed glaring error, lol [–] 2 points3 points * (1 child) A product is the same as the sum of logs, and applying that notion to every finite product that you are taking the sup over is definitely valid. So yes, assuming what you said is true (and some quick thought seems to indicate that the union of the sets {x > 1/n} for integer n should necessarily be countable if the summation converges, so I believe you) then the same result applies but replacing "summation" with "product" and "non-zero" with "non-unit". EDIT: Your "if and only if" should have been just "only if"... fixed my post to reflect the glaring inaccuracy that resulted from me not noticing that either [–] 1 point2 points  (0 children) Oops, yea, the sum from n=1 to infinity of n definitely doesn't converge, lol [–] 0 points1 point  (0 children) That's interesting... I think I might have seen that once in an Analysis class but forgotten about it. It only works if all the summands are positive though.
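The order-dependence of conditionally convergent series mentioned above is easy to see numerically. A quick Python sketch with the alternating harmonic series: taking one positive term followed by two negative terms drives the partial sums to half the usual limit.

```python
import math

# sum_{n>=1} (-1)^(n+1)/n converges conditionally to ln 2,
# so rearranging the terms changes the value of the series.
N = 200_000

standard = sum((-1)**(n + 1) / n for n in range(1, 2 * N + 1))

# Rearranged: one positive term, then the next two unused negative terms.
rearranged = 0.0
for k in range(1, N + 1):
    rearranged += 1 / (2 * k - 1)                 # next positive term
    rearranged -= 1 / (4 * k - 2) + 1 / (4 * k)   # next two negative terms

print(round(standard, 4), round(rearranged, 4))
# approximately ln 2 = 0.6931 and (1/2) ln 2 = 0.3466
```

(For the general 1-positive-per-q-negatives pattern the rearranged limit is $\ln 2 + \tfrac{1}{2}\ln(1/q)$, which for $q = 2$ gives $\tfrac{1}{2}\ln 2$.)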
https://sbseminar.wordpress.com/2011/03/23/how-to-think-about-hodge-decomposition/
# How to think about Hodge decomposition

UPDATE: Greg Kuperberg writes the same things with fewer typos here. This blog post is meant for me to work through some things I want to present in class tomorrow, so it won’t have as much background as I usually try to include. Let $X$ be a compact complex manifold with a metric. Then we have operators $\partial$, $\overline{\partial}$, $\partial^*$ and $\overline{\partial}^*$. These go from $(p,q)$-forms to $(p+1,q)$, $(p,q+1)$, $(p-1,q)$ and $(p,q-1)$-forms respectively. They obey $\displaystyle{\partial^2=0}$, $\displaystyle{\overline{\partial}^2=0}$, $\displaystyle{(\partial^*)^2=0}$, $\displaystyle{(\overline{\partial}^*)^2=0}$, $\displaystyle{\partial \overline{\partial} = - \overline{\partial} \partial}$ and $\displaystyle{\partial^* \overline{\partial}^* = - \overline{\partial}^* \partial^*}$  $(1)$. The exterior derivative is $d = \partial + \overline{\partial}$, and its adjoint is $d^* = \partial^* + \overline{\partial}^*$. We define three Laplacians: $\Delta_d = d d^* + d^* d$, $\Delta_{\partial} = \partial \partial^* + \partial^* \partial$ and $\Delta_{\overline{\partial}} = \overline{\partial} \overline{\partial}^* + \overline{\partial}^* \overline{\partial}$. So, in general, $\displaystyle{ \Delta_d = (\partial+\overline{\partial}) (\partial^* + \overline{\partial}^*) + (\partial^* + \overline{\partial}^*) (\partial+\overline{\partial}) = }$ $\displaystyle{\Delta_{\partial} + \Delta_{\overline{\partial}} + (\partial \overline{\partial}^* + \overline{\partial}^* \partial) + ( \overline{\partial} \partial^* + \partial^* \overline{\partial})}$. When $X$ is Kähler, we have $\Delta_{\partial} = \Delta_{\overline{\partial}} = (1/2) \Delta_{d}$.
This identity is actually made up out of the following identities, which strike me as more fundamental: $\displaystyle{ \partial \overline{\partial}^* + \overline{\partial}^* \partial =0}$ and $\displaystyle{ \overline{\partial} \partial^* + \partial^* \overline{\partial}=0}$   $(2)$. $\displaystyle{ \partial \partial^* + \partial^* \partial = \overline{\partial} \overline{\partial}^* + \overline{\partial}^* \overline{\partial} }$   $(3)$. The two quantities in the third equation are (by definition) $\Delta_{\partial}$ and $\Delta_{\overline{\partial}}$; we’ll denote them both by $\Delta$. So $\Delta$ takes $(p,q)$-forms to $(p,q)$-forms. Also, $\Delta$ commutes with all of $\partial$, $\overline{\partial}$, $\partial^*$ and $\overline{\partial}^*$; this is an easy consequence of $(1)$ and $(3)$ (exercise!). This is all standard. The rest is something that I haven’t seen written down, but strikes me as making the theory much simpler. Results on elliptic operators tell us that there is a discrete sequence of eigenvalues $0=\lambda_0 < \lambda_1 < \lambda_2 < \cdots$ so that $\Omega^{p,q}(X)$ breaks up as a direct sum $\bigoplus \Omega_{\lambda}^{p,q}(X)$ of eigenspaces for $\Delta$, with each $\Omega^{p,q}_{\lambda}$ finite-dimensional. The precise meaning of the direct sum is a little tricky: the $L^2$ completion of $\Omega^{p,q}(X)$ is the $L^2$-direct sum of these spaces and, more generally, the same is true for all of the $(2,s)$ Sobolev norms. But you don't need to understand that to understand the rest. Since $\partial$ etcetera commute with $\Delta$, they take each $\lambda$-eigenspace to itself.
So, for each $\lambda$, we get $(n+1)^2$ finite dimensional vector spaces $\Omega^{p,q}_{\lambda}$ and maps $\partial$, $\overline{\partial}$, $\partial^*$ and $\overline{\partial}^*$ between them satisfying $(1)$, $(2)$ and $\displaystyle{ \partial \partial^* + \partial^* \partial = \overline{\partial} \overline{\partial}^* + \overline{\partial}^* \overline{\partial} = \lambda }$   $(3')$. When $\lambda=0$, all four maps are $0$; you just have a diamond of vector spaces with no maps. Now, what about $\lambda>0$? There is an obvious way to satisfy the above equations: Put a one dimensional vector space in positions $(p,q)$, $(p+1,q)$, $(p,q+1)$ and $(p+1,q+1)$. Let the two nontrivial $\partial$ maps be $1$, the two nontrivial $\overline{\partial}$ maps also be $1$, and the nontrivial $\partial^*$ and $\overline{\partial}^*$ maps be $\lambda$. Call this configuration a square and denote it $S^{p,q}_{\lambda}$. Then here is a nice algebraic lemma: Conditions $(1)$, $(2)$ and $(3')$ imply that $\Omega^{\bullet, \bullet}_{\lambda}$ is a direct sum of various $S^{p,q}_{\lambda}$'s. (I put this as Problem 6 on a recent problem set and got good solutions from most of my students.) So, what the Hodge decomposition tells us is that $\Omega^{\bullet, \bullet}(X)$ looks like a direct sum of (a) terms where all four derivatives act by $0$, call them $0$-terms, and (b) various squares. In particular, even once we forget the Kähler structure, and thus can only talk about $\partial$ and $\overline{\partial}$, we still get a nontrivial consequence: The double complex $\Omega^{\bullet, \bullet}$ is a direct sum of (a) $0$-terms, where $\partial$ and $\overline{\partial}$ act by $0$, and (b) $2 \times 2$ squares of one dimensional vector spaces where the nontrivial $\partial$ and $\overline{\partial}$ maps are isomorphisms. Notice how easy this makes the standard results.
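The square $S^{p,q}_{\lambda}$ can be checked in an explicit $4$-dimensional matrix model. One caveat: if all four nontrivial $\partial$ and $\overline{\partial}$ maps were literally $+1$, the anticommutation in $(1)$ would fail, so one map has to carry a sign. In the sketch below (an assumption of this sketch, with basis ordered $e_{p,q}, e_{p+1,q}, e_{p,q+1}, e_{p+1,q+1}$), the $\partial$ map out of degree $(p,q+1)$ is taken to be $-1$:

```python
import numpy as np

lam = 2.5  # any eigenvalue lambda > 0
I4 = np.eye(4)

# Basis order: e_{p,q}, e_{p+1,q}, e_{p,q+1}, e_{p+1,q+1}
d   = np.zeros((4, 4)); d[1, 0] = 1;     d[3, 2] = -1     # partial (sign on the top map)
db  = np.zeros((4, 4)); db[2, 0] = 1;    db[3, 1] = 1     # bar-partial
ds  = np.zeros((4, 4)); ds[0, 1] = lam;  ds[2, 3] = -lam  # partial^*
dbs = np.zeros((4, 4)); dbs[0, 2] = lam; dbs[1, 3] = lam  # bar-partial^*

assert np.allclose(d @ d, 0) and np.allclose(db @ db, 0)  # (1): squares vanish
assert np.allclose(d @ db, -db @ d)                       # (1): anticommutation
assert np.allclose(ds @ dbs, -dbs @ ds)
assert np.allclose(d @ dbs + dbs @ d, 0)                  # (2)
assert np.allclose(db @ ds + ds @ db, 0)
assert np.allclose(d @ ds + ds @ d, lam * I4)             # (3')
assert np.allclose(db @ dbs + dbs @ db, lam * I4)
print("square relations (1), (2), (3') verified")
```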
The deRham cohomology, $H^k$, is the cohomology of $\displaystyle{ \cdots \to \bigoplus_{p+q=k-1} \Omega^{p,q} \stackrel{\partial+\overline{\partial}}{\longrightarrow} \bigoplus_{p+q=k} \Omega^{p,q} \stackrel{\partial+\overline{\partial}}{\longrightarrow} \bigoplus_{p+q=k+1} \Omega^{p,q} \to \cdots }.$ The Dolbeault cohomology, $H^{p,q}$, is the cohomology of $\displaystyle{ \cdots \to \Omega^{p,q-1} \stackrel{\overline{\partial}}{\longrightarrow} \Omega^{p,q} \stackrel{\overline{\partial}}{\longrightarrow} \Omega^{p,q+1} \to \cdots}$. Hodge decomposition says that $H^k \cong \bigoplus H^{p,q}$. Well, that’s clear enough. Cohomology distributes over direct sum. The terms on which $\partial$ and $\overline{\partial}$ both act by $0$ contribute one dimension to each. The square terms are exact in both settings. Similarly, the $\partial \overline{\partial}$ lemma states that, if $\alpha$ is $d$-closed and in the image of $\partial$ then it is in the image of $\partial \overline{\partial}$. Proof: Decompose into these summands. If $\alpha$ is $d$-closed then the component of $\alpha$ on each summand is closed. The $d$-closed elements are (a) the $0$-terms, (b’) the bottom elements of squares (the element in degree $(p+1,q+1)$ in $S^{p,q}$) and (b”) the difference between the two middle terms of the square (the difference between the guy in degree $(p,q+1)$ and the guy in degree $(p+1,q)$). The image of $\partial$ can’t contain any terms of type (a) or (b”), so we see that $\alpha$ is a sum of terms of type (b’), which are, indeed, in the image of $\partial \overline{\partial}$. Of course, this is essentially the statement that the spectral sequence degenerates at $E^2$. But I really think authors should write it out directly. It makes everything else much more obvious. This statement is related to the statement that the spectral sequence degenerates at $E^1$, but stronger, see Greg Kuperberg’s MathOverflow post.

## 5 thoughts on “How to think about Hodge decomposition”

1.
(Actually I had to first make a search on Numdam to make sure it was Dolbeault and not Dolbeaut…) 2. I wrote about this topic in a MathOverflow post last year. In brief, any Dolbeault complex decomposes into dots, squares, and zigzags. The Hodge theorem implies that the Hodge spectral sequence degenerates at $E_1$ (I thought — you say $E_2$?). But the theorem is stronger than that, because the spectral sequence only detects even-length zigzags. The Hodge theorem actually says that there are no zigzags at all. 3. David Speyer says: Sorry, should be $E_1$. The reason that I was thinking $E_2$ was that I was thinking of the Dolbeault complex as the $E_1$ page for the hypercohomology of the holomorphic deRham complex. I’ll edit.
http://mathhelpforum.com/pre-calculus/82386-finding-vertices-center-foci-ellipse.html
# Math Help - Finding the vertices, center and foci for an ellipse 1. ## Finding the vertices, center and foci for an ellipse Center: ( , ) Right vertex: ( , ) Left vertex: ( , ) Top vertex: ( , ) Bottom vertex: ( , ) Right focus: ( , ) Left focus: ( , ) Rearranging this is giving me trouble for some reason. Any help is appreciated. Thanks in advance! 2. Since they've given you no information (no equation, no points, no graph, no description, etc), there's no way to find the answers they're wanting you to fill in. Sorry! To learn how to work with ellipses and their equations, try here. But to answer this exercise, you'll first need to contact your instructor for the missing information. 3. Originally Posted by Beeorz There must have been a transient error with the LSU server. The above image wasn't displaying earlier. In case it disappears again, the equation is: . . . . . $9x^2\, +\, 16y^2\, -\, 90x\, -\, 256y\, +\, 1105\, =\, 0$ The first step, as always, is to complete the square to start converting the above to "conics" form. . . . . . $9x^2\, -\, 90x\, +\, 16y^2\, -\, 256y\, =\, -1105$ . . . . . $9(x^2\, -\, 10x)\, +\, 16(y^2\, -\, 16y)\, =\, -1105$ . . . . . $9(x^2\, -\, 10x\, +\, 25)\, +\, 16(y^2\, -\, 16y\, +\, 64)\, =\, 9(25)\, +\, 16(64)\, -\, 1105$ . . . . . $9(x\, -\, 5)^2\, +\, 16(y\, -\, 8)^2\, =\, 144$ Divide through and simplify to get this into "conics" form. Then use the fact that the larger denominator is under the numerator with an x in it to confirm that this ellipse is wider than it is tall. Read the values off the "conics" form of the equation for h, k, a, and b. Use the equation you've memorized to find the value of c. Then fill in the various values for which they've asked you. If you get stuck, please reply with a clear listing of your steps and reasoning so far. Thank you! 4. got it, was simply subtracting 9(25) and 16(64) instead of adding them to -1105 thanks for the help
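The remaining steps described in the reply (divide by $144$, read off $h$, $k$, $a$, $b$, compute $c$) can be sketched in a few lines; the "conics" form is $(x-5)^2/16 + (y-8)^2/9 = 1$:

```python
import math

# Reading off h, k, a, b for 9(x-5)^2 + 16(y-8)^2 = 144, i.e.
# (x-5)^2/16 + (y-8)^2/9 = 1 after dividing both sides by 144
h, k = 5, 8
a, b = 4, 3                 # a^2 = 16 under the x-term, b^2 = 9 under the y-term
c = math.sqrt(a**2 - b**2)  # c^2 = a^2 - b^2 for an ellipse

print("center:", (h, k))                                 # (5, 8)
print("left/right vertices:", (h - a, k), (h + a, k))    # (1, 8), (9, 8)
print("top/bottom vertices:", (h, k + b), (h, k - b))    # (5, 11), (5, 5)
print("foci:", (h - c, k), (h + c, k))                   # (5 - sqrt(7), 8), (5 + sqrt(7), 8)
```

As a sanity check, each vertex satisfies the original equation $9x^2 + 16y^2 - 90x - 256y + 1105 = 0$.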
http://math.stackexchange.com/questions/625682/proofs-involving-disjunctions-velleman-chapter-3-5
# Proofs involving Disjunctions [Velleman, Chapter 3.5] $\Large{{1.}}$ Are proofs using strategies $P136, P143$ always easier than those using $P140$? In the former two, only one statement (either $P$ or $Q$) must be proven. In the latter, both $P$ and $Q$ must be. $\Large{{2.}}$ If so, should I be concerned with $P140$? Why moot it at all as a proof strategy? $\Large{{3.}}$ I remember: $P \vee Q \text{ true} \iff$ either $P$ or $Q$ is true. Nonetheless, I'm disconcerted and agitated by the act of assuming as true either $P$ or $Q$ without any proof, before proving the other statement. I can't pinpoint why. Could someone please help? - Related and a possible duplicate. I suggest you look at my answer since it's pretty much Velleman reworded. – Git Gud Jan 3 '14 at 9:20 In question 1. you're comparing $P\lor Q$ as a goal (P140) and $P\lor Q$ as a hypothesis (P136) with $P\lor Q$ as a goal (P143), so there are in fact two comparisons here: P136 to P143 and P140 to P143, but the P136 to P143 comparison doesn't seem to make much sense because in one of them $P\lor Q$ is a goal and in the other it is a given. Regarding the first part of question 2., P140 is important because it gives a different way of dealing with a $P\lor Q$ goal than that shown on P143. Regarding question 3, are you thinking of $P\lor Q$ as a goal or as a hypothesis here? – Git Gud Jan 3 '14 at 9:24
http://mathoverflow.net/users/17846/alex-suciu?tab=activity
# Alex Suciu

reputation 711 · website northeastern.edu/suciu · location Boston · member for 3 years, 1 month · profile views 1,426

# 61 Actions

Sep16 awarded Yearling Aug7 comment Is there a finitely presented group with infinite homology over $\mathbb{Q}$? Yes, $H_2$ of an fp group is finitely generated. But, say, $H_3$ needs not be finitely generated. The first such example was given by John Stallings, in a seminal paper, titled, sure enough, A finitely presented group whose 3-dimensional integral homology is not finitely generated, see here. Aug7 comment Is there a finitely presented group with infinite homology over $\mathbb{Q}$? Also, "infinite homology" means "infinite-dimensional homology" (as $\mathbb{Q}$-vector space), right? Aug7 comment Is there a finitely presented group with infinite homology over $\mathbb{Q}$? Just to make sure: the assertion that $H_i(G,\mathbb{Q})$ is finite-dimensional (for all $i>2$) is an assumption, yes? Jul19 awarded Nice Answer Jul18 revised Is it known which links have Seifert fibered complements? added 192 characters in body Jul18 answered Is it known which links have Seifert fibered complements? Apr3 comment Kahler structure on holomorphic principal bundles What's a "principle" bundle? Nov21 comment Action of $\pi_1(S)$ on its commutator subgroup I thought the question referred to the action of $G$ on the second derived quotient of $G$, that is, $H_1([G,G])$, rather than on the second nilpotent quotient of $G$, no? Oct18 revised Must the union of these two aspherical spaces be aspherical? added reference Oct13 answered good reference on brieskorn manifold Oct1 answered Must the union of these two aspherical spaces be aspherical? Sep16 awarded Yearling Jul29 comment On the wikipedia entry for Borel-Moore homology Maybe there is some language confusion here, but doesn't "finite CW-complex" mean a CW-complex with finitely many cells (that is, both finite-dimensional and finite type)?
So how would $X=\mathbb{Z}$ (with the discrete topology) fit in that context? Jul15 awarded Informed Jul15 revised Cohomology of submanifold complements added 250 characters in body Jul15 revised Cohomology of submanifold complements edited body Jul15 comment Cohomology of submanifold complements Certainly property 3 will fail in that case, provided the degree of the hypersurface is at least 2. Jul15 answered Cohomology of submanifold complements Jun25 awarded Revival
http://mathoverflow.net/questions/114549/invariants-of-reductive-group-actions-and-completion
# Invariants of reductive group actions and completion I'm trying to understand the extent to which taking invariants of a reductive group action "commutes" with completion. More precisely: Let $X = \operatorname{Spec} A$ be a reduced finite type affine scheme over $\operatorname{Spec} R$, where $R$ is a discrete valuation ring. Let $G$ be a reductive algebraic group with connected fibers over $\operatorname{Spec} R$ that acts on $X$, and let $G'$ be the formal completion of $G$ at the identity in the special fiber. Now if $x$ is a point in the special fiber of $X$, defined over the residue field of $R$, and corresponding to a maximal ideal ${\mathfrak m}$ of $A$, the action of $G$ on $X$ induces an action of $G'$ on the formal completion $X_x$ of $X$ at $x$. Let ${\mathcal O}_{X,x}^{G'}$ denote the ring of functions on $X_x$ that are invariant for this action. If we let ${\mathcal O}_X^G$ denote the ring of $G$-invariant functions on $X$, and let ${\mathfrak m}'$ be the intersection of ${\mathfrak m}$ with ${\mathcal O}_X^G$, then we have a natural map: $$\left({\mathcal O}_X^G\right)_{\mathfrak m'} \rightarrow {\mathcal O}_{X,x}^{G'}$$ [Here the subscript ${\mathfrak m'}$ denotes completion at ${\mathfrak m'}$.] Under what conditions is this map an isomorphism? It's clear that the map fails to be injective if there is an irreducible component of $X$ that meets the closure of the orbit of $x$ but does not contain $x$: the left hand side "sees" this component whereas the right hand side does not. Will the map always be injective if $x$ lies on every irreducible component that meets the orbit closure of $x$? Will the map be surjective in general? I am happy to make additional assumptions to guarantee that the map is an isomorphism. (In particular, in the applications I have in mind, $G = {\operatorname{GL}}_n$, $R$ is a $p$-adic integer ring, and $X$ is a complete intersection that is flat over $\operatorname{Spec} R$.) - Thanks!
I'm happy to assume that $G$ has connected fibers, and that $x$ is a rational point. I've edited the question to make things clearer. –  David Helm Nov 26 '12 at 21:05 @David: Please let us know what is known a-priori concerning finiteness properties of $O_X^G$ (via Seshadri's work?) and $(O_{X,x})^{G'}$. –  user28172 Nov 27 '12 at 3:02
https://cvgmt.sns.it/paper/2370/
# Conformal metrics on $R^{2m}$ with constant Q-curvature, prescribed volume and asymptotic behavior created by martinazz on 23 Feb 2014 modified on 17 Jul 2018 [BibTeX] Submitted Paper Inserted: 23 feb 2014 Last Updated: 17 jul 2018 Year: 2014 ArXiv: 1401.0944 PDF Abstract: We study the solutions $u\in C^\infty(R^{2m})$ of the problem $(-\Delta)^m u= Qe^{2mu}$, where $Q=\pm (2m-1)!$, and $V :=\int_{R^{2m}}e^{2mu}dx <\infty$, particularly when $m>1$. This corresponds to finding conformal metrics $g_u:=e^{2u}|dx|^2$ on $R^{2m}$ with constant Q-curvature $Q$ and finite volume $V$. Extending previous works of Chang-Chen, and Wei-Ye, we show that both the value $V$ and the asymptotic behavior of $u(x)$ as $|x| \to \infty$ can be simultaneously prescribed, under certain restrictions. When $Q=(2m-1)!$ we need to assume $V<\mathrm{vol}(S^{2m})$, but surprisingly for $Q=-(2m-1)!$ the volume $V$ can be chosen arbitrarily.
https://space.stackexchange.com/questions/3227/are-any-geosynchronous-satellites-visible-with-the-naked-eye
Are any geosynchronous satellites visible with the naked eye? It is very easy to spot LEO satellites during dusk or dawn. I am wondering if satellites further out in a geosynchronous orbit are also visible. Of course, if this is even possible, these would appear more stationary than any LEO satellites.
https://quantumcomputing.stackexchange.com/questions/11724/will-two-entangled-qubits-be-affected-by-gravity/11725
# Will two entangled qubits be affected by gravity? Will two entangled qubits be affected by gravity? I mean, if one is near a black hole's horizon and the other is on Earth, will the effect of relativity be experienced on the measurement? Will there be any delay or error in the measurement? • wouldn't this question be more on topic on Physics SE? Apr 30 '20 at 7:58 • a recent paper proposing a test to observe the effects of gravity on entangled states is Bose et al. 2017 – glS May 4 '20 at 18:52 Let's start with the Bell state $$\frac1{\sqrt2}(|01\rangle+|10\rangle)$$. Let's further assume that the entire universe (or at least the parts of relevance) is pervaded by a static magnetic field along the $$z$$-axis, which I relate to the unitary evolution $$\exp(-itH_z)$$. Now let's place one part of the entangled state next to a massive object, such that its own time evolves slower than ours far away. As you can see in this Quirk circuit here, the Bell state changes back and forth between $$\frac1{\sqrt2}(|01\rangle+|10\rangle)\leftrightarrow\frac{e^{i\phi}}{\sqrt2}(|01\rangle-|10\rangle),$$ as time goes by with two different speeds. Now you have to come up with a measurement protocol to distinguish these two states by local operations and communication (don't enter the event horizon!), whereas the prefactor $e^{i\phi}$ is only a global phase, which is not measurable... UPDATE: There is a "Space QUEST (QUantum Entanglement Space Test) mission proposal" on the arXiv, to "...experimentally test decoherence due to gravity". From the abstract: Some speculative theories suggest that quantum properties, such as entanglement, may exhibit entirely different behavior to purely classical systems.
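The back-and-forth between the two Bell states described above amounts to a relative $Z$-phase on one qubit, which is easy to check with a few lines of linear algebra. A sketch (the angle $\pi$ below is an arbitrary choice, not tied to any particular field strength); the overall factor of $-i$ that appears is exactly the unmeasurable global phase:

```python
import numpy as np

# Bell state (|01> + |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0, 1, 1, 0]) / np.sqrt(2)

def phase_on_first_qubit(theta):
    # exp(-i*theta*Z/2) acting on the first qubit, identity on the second
    rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
    return np.kron(rz, np.eye(2))

# A relative phase of pi turns the state into (|01> - |10>)/sqrt(2),
# up to a global phase (here -i):
psi_evolved = phase_on_first_qubit(np.pi) @ psi
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
overlap = abs(np.vdot(psi_minus, psi_evolved))
print(round(overlap, 6))  # 1.0 -> same physical state up to a global phase
```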
http://math.stackexchange.com/questions/141745/measure-integration-problem
Measure integration problem Assume $A_j$, $j\geq 1$, $j\in\Bbb N$, are measurable sets. Let $m \in \Bbb N$, and let $E_m$ be the set defined as follows: $x \in E_m \Longleftrightarrow x$ is a member of at least $m$ of the sets $A_k$. I want to know how to prove that 1. $E_m$ is measurable. 2. $m\lambda(E_m)\le\sum^{\infty}_{k=1}\lambda(A_k)$. It's hard for me. Help me T.T - For the first question, try to write $E_m$ as a countable union involving the $A_j$, using the fact that the set of subsets of $\Bbb N$ which have $m$ elements is countable. –  Davide Giraudo May 6 '12 at 12:54 Call $S=\sum\limits_{n=1}^{+\infty}\mathbf 1_{A_n}$. Here are some hints: • Show that $S$ is a measurable function (possibly as the pointwise limit of a sequence of measurable functions). Note that $E_m=[S\geqslant m]$, and deduce item 1. • Compute the integral $I$ of $S$ in terms of the sequence $(\lambda(A_n))_n$. Show that Markov's inequality reads $m\lambda(E_m)\leqslant I$ in the present situation, and deduce item 2.
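The two hints combine as follows; a sketch in the answer's notation ($S = \sum_n \mathbf 1_{A_n}$, $E_m = [S \geqslant m]$), using monotone convergence for the first line:

```latex
\begin{align*}
I &:= \int_X S \, d\lambda
    = \int_X \sum_{n=1}^{\infty} \mathbf{1}_{A_n} \, d\lambda
    = \sum_{n=1}^{\infty} \lambda(A_n) , \\
m\,\lambda(E_m) &= \int_{\{S \geq m\}} m \, d\lambda
    \leq \int_{\{S \geq m\}} S \, d\lambda
    \leq \int_X S \, d\lambda
    = \sum_{n=1}^{\infty} \lambda(A_n) .
\end{align*}
```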
https://www.physicsforums.com/threads/quantitative-analysis-manufacturing-furfural-from-bagasse.921063/
# Quantitative analysis -- manufacturing Furfural from bagasse

1. Jul 26, 2017

### Gandhar NImkar

Hello everyone. Here is the thing: I am working on a project where I am manufacturing furfural from bagasse. The reaction is thermodynamically possible. I took some amount of bagasse and added it to a reactor at atmospheric pressure. I then added dilute sulphuric acid (which acts as catalyst) and NaCl (to increase the selectivity of furfural), and indirectly heated the mixture to produce furfural vapours, which were later cooled. Now my problem is I want to find out the percentage purity of the furfural distillate and residue, as both contain furfural, H2SO4, and water (in the distillate) and furfural, H2SO4, water and salt (in the residue). I approached many institutes which had gas chromatography, but it was not compatible. So I am planning to use quantitative analysis to find the approximate amount of each component. I searched many times on the internet but was disappointed, so I thought you guys might help me out.

2. Jul 26, 2017

### dipstik

What about Raman or FTIR? What were the hang-ups for gas chromatography (volatilizing, volatility, thermal stability, derivatization)? Maybe NMR?

3. Jul 27, 2017

### Gandhar NImkar

They said our product was corrosive as it was high in concentration (the mixture was distilled to get our product), and they also needed transparent pure furfural, but the furfural we ordered for comparing our product with was 98% pure and dark brown in color. So I was searching for a quantitative analysis to find the approximate percentage purity.
http://physics.stackexchange.com/questions/60228/wave-equations-for-two-intervals-at-potential-step
# Wave equations for two intervals at Potential step

Let's say we have a potential step as in the picture: in region I there is a free particle with a wavefunction $\psi_I$, while in region II the wavefunction is $\psi_{II}$. Let me now take the Schrödinger equation and try to derive $\psi_I$:

\begin{align} &~~W \psi = -\frac{\hbar^2}{2m}\, \frac{d^2 \psi}{d x^2} + W_p \psi ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber \\ &~~W \psi = -\frac{\hbar^2}{2m}\, \frac{d^2 \psi}{d x^2}\nonumber \\ &\frac{d^2 \psi}{d x^2} = -\frac{2m W}{\hbar^2}\,\psi \nonumber\\ {\scriptsize \text{DE: }} &\boxed{\frac{d^2 \psi}{d x^2} = -\mathcal L^2\,\psi}~\boxed{\mathcal{L} \equiv \sqrt{\tfrac{2mW}{\hbar^2}}} \nonumber\\ &~\phantom{\line(1,0){18.3}}\Downarrow \nonumber\\ {\scriptsize \text{general solution of DE: }} &\boxed{\psi_{I} = C \sin\left(\mathcal{L}\, x \right) + D \cos \left(\mathcal{L}\, x \right)}\nonumber \end{align}

I got the general solution for interval I, but this is nothing like the solution they use in all the books: $\psi_{I} = C e^{i\mathcal L x} + D e^{-i \mathcal L x}$, where $\mathcal L \equiv \sqrt{{\scriptsize 2mW/\hbar^2}}$. I have a personal issue with this, because at $x = -\infty$ the part $De^{-i \mathcal L x}$ would become infinite, and this is impossible for a wavefunction! I know that I would get the exponential form if I defined the constant $\mathcal L$ a bit differently than I did above:

\begin{align} {\scriptsize \text{DE: }} &\boxed{\frac{d^2 \psi}{d x^2} = \mathcal L^2\,\psi}~\boxed{\mathcal{L} \equiv \sqrt{-\tfrac{2mW}{\hbar^2}}} \nonumber\\ &~\phantom{\line(1,0){18.3}}\Downarrow \nonumber\\ {\scriptsize \text{general solution of DE: }} &\boxed{ \psi_{I} = C e^{\mathcal L x } + D e^{-\mathcal L x} }\nonumber \end{align}

This general solution looks more like the one they use in the books, but it lacks an imaginary $i$, and $\mathcal L$ is defined with a minus sign under the root while in all the books it is positive.
Could anyone tell me what I am missing here so I could understand this?

- Why do you think $\mathrm{e}^{-i\mathcal{L}x}$ would go to infinity? $\mathrm{e}^{i\theta}$ has constant magnitude $=1$ for all real $\theta$. The complex exponentials are just a way of rewriting the solution in terms of $\cos$ and $\sin$. In your second form the minus sign should go under the square root -- $\mathcal{L}$ is imaginary in that case. – Michael Brown Apr 6 '13 at 11:03
- I would appreciate it if you could show me how to transform $C \sin(\mathcal L x) + D \cos (\mathcal L x)$ using Euler's formula. After all, I don't have $i\sin()$. – 71GA Apr 6 '13 at 12:44
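Michael Brown's point, that the complex exponentials are just a rewriting of the sin/cos solution and never blow up for real $x$, can be checked numerically. A minimal sketch (the coefficient names `A` and `B` and the conversion $A = (D - iC)/2$, $B = (D + iC)/2$ are my own, not from the thread):

```python
import cmath
import math

def trig_form(C, D, L, x):
    # C sin(Lx) + D cos(Lx)
    return C * math.sin(L * x) + D * math.cos(L * x)

def exp_form(C, D, L, x):
    # Same solution written as A e^{iLx} + B e^{-iLx}
    # with A = (D - iC)/2 and B = (D + iC)/2.
    A = (D - 1j * C) / 2
    B = (D + 1j * C) / 2
    return A * cmath.exp(1j * L * x) + B * cmath.exp(-1j * L * x)

for x in (-50.0, -1.0, 0.0, 2.5):
    t = trig_form(0.7, -1.3, 2.0, x)
    e = exp_form(0.7, -1.3, 2.0, x)
    assert abs(t - e) < 1e-9  # identical functions of x
    # |e^{-iLx}| = 1 for every real x, so nothing diverges as x -> -infinity
    assert abs(abs(cmath.exp(-1j * 2.0 * x)) - 1.0) < 1e-12
```

Since $|e^{-i\mathcal Lx}| = 1$ everywhere on the real line, the book's form is just as bounded as the sin/cos form.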
http://mathhelpforum.com/number-theory/208969-pairwise-coprime-fundamental-theorem-arithmetic.html
Thread: pairwise coprime and fundamental theorem of arithmetic 1. pairwise coprime and fundamental theorem of arithmetic I have added an attachment of the question that I am stuck on. How would you solve it? Thank you Attached Thumbnails
http://mathhelpforum.com/advanced-algebra/37112-diagonal-matrix-print.html
# Diagonal matrix Remember that when you diagonalize a matrix B, the diagonal values of the resulting diagonalized matrix will be the eigenvalues of B, and a matrix P such that $P^{-1}BP$ is diagonal will have as its columns the eigenvectors corresponding to those eigenvalues. Thus, we require that B have real eigenvectors. As our matrix is real, real eigenvalues are a necessary condition for real eigenvectors. Note that $\det(I\lambda-B)=0$ gives $(\lambda-a)^2+b^2=0$.
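The conclusion can be made concrete. For a real matrix of the form B = [[a, -b], [b, a]] (one matrix whose characteristic polynomial is $(\lambda-a)^2+b^2$; the thread's actual B is not shown, so this is an assumed example), the eigenvalues are $a \pm bi$, which are genuinely complex whenever $b \neq 0$:

```python
def eigenvalues(a, b):
    # Roots of the characteristic polynomial (λ - a)^2 + b^2 = 0
    return (a + 1j * b, a - 1j * b)

def char_poly(lam, a, b):
    return (lam - a) ** 2 + b ** 2

a, b = 3.0, 2.0
for lam in eigenvalues(a, b):
    assert abs(char_poly(lam, a, b)) < 1e-12  # each root satisfies the polynomial
    assert lam.imag != 0  # complex whenever b != 0, so no real diagonalization
```

So for $b \neq 0$ the matrix has no real eigenvalues, and hence cannot be diagonalized over the reals.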
https://proceedings.mlr.press/v180/zhan22a.html
# Asymptotic optimality for active learning processes Xueying Zhan, Yaowei Wang, Antoni B. Chan Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR 180:2342-2352, 2022. #### Abstract Active Learning (AL) aims to iteratively optimize basic learned model(s) by selecting and annotating the unlabeled data samples deemed to best maximise model performance with minimal required data. However, the learned model easily overfits due to the biased distribution (sampling bias and dataset shift) formed by the non-uniform sampling used in AL. Considering AL as an iterative sequential optimization process, we first provide a perspective on AL in terms of statistical properties, i.e., asymptotic unbiasedness, consistency and asymptotic efficiency, with respect to basic estimators as the sample size (the size of the labeled set) becomes large, and in the limit as the sample size tends to infinity. We then discuss how biases affect AL. Finally, we propose a flexible AL framework that aims to mitigate the impact of bias in AL by minimizing the generalization error and an importance-weighted training loss simultaneously.
https://infoscience.epfl.ch/record/211484
Infoscience Journal article # A Projected Gradient Algorithm for Image Restoration Under Hessian Matrix-Norm Regularization We have recently introduced a class of non-quadratic Hessian-based regularizers as a higher-order extension of the total variation (TV) functional. These regularizers retain some of the most favorable properties of TV while they can effectively deal with the staircase effect that is commonly met in TV-based reconstructions. In this work we propose a novel gradient-based algorithm for the efficient minimization of these functionals under convex constraints. Furthermore, we validate the overall proposed regularization framework for the problem of image deblurring under additive Gaussian noise.
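For readers unfamiliar with the general technique the title refers to, here is a generic projected-gradient sketch (this is not the paper's algorithm or its Hessian-norm regularizer; it just illustrates the gradient-step-then-project pattern on a toy quadratic with box constraints):

```python
# Minimize f(x) = sum((x_i - c_i)^2) subject to 0 <= x_i <= 1
# by alternating a gradient step with projection onto the constraint box.

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n is coordinate-wise clipping.
    return [min(max(xi, lo), hi) for xi in x]

def projected_gradient(c, steps=200, lr=0.1):
    x = [0.0] * len(c)
    for _ in range(steps):
        grad = [2.0 * (xi - ci) for xi, ci in zip(x, c)]
        x = project_box([xi - lr * gi for xi, gi in zip(x, grad)])
    return x

# The unconstrained minimizer c is clipped into the feasible box.
x = projected_gradient([0.5, 2.0, -1.0])
assert all(abs(a - b) < 1e-6 for a, b in zip(x, [0.5, 1.0, 0.0]))
```

In imaging problems like the one in the abstract, the same loop applies with the deblurring data-fidelity plus regularization objective in place of the toy quadratic, and the convex constraint set in place of the box.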
https://stats.stackexchange.com/questions/252917/forecast-in-ar1-process-witn-non-zero-mean
# Forecast in $AR(1)$ process with non-zero mean

For an $AR(1)$ model with $Y_t=12.2$, $\phi=-0.5$ and $\mu=10.8$:

a) Find $\hat{Y}_t(1)$, $\hat{Y}_t(2)$, $\hat{Y}_t(10)$

I'm a little lost in forecasting ARIMA models. What I think it is: $$Y_t-\mu=\phi(Y_{t-1}-\mu)+\epsilon_t$$ where $\epsilon_t$ is white noise. $$\hat{Y}_t(1)=E[Y_{t+1}|Y_1,\dots,Y_t]=E[\mu+\phi(Y_t-\mu)+\epsilon_{t+1}|Y_1,\dots,Y_t]$$ $$=\mu+\phi(Y_t-\mu)$$ and the general formula is $$\hat{Y}_t(l)=\mu+\phi^l(Y_t-\mu)$$ So $$\hat{Y}_t(1)=10.8+(-0.5)(12.2-10.8)=10.1$$ $$\hat{Y}_t(2)=10.8+(-0.5)^2(12.2-10.8)=11.15$$ $$\hat{Y}_t(10)=10.8+(-0.5)^{10}(12.2-10.8)\approx 10.801$$ So $$\hat{Y}_t(l)\rightarrow \mu$$ Are these results right? For any ARIMA model that I want to forecast, do I just take the expectation in this way?

• Yes. And AR(1) models are forecast to decay towards their unconditional mean for $\mid \phi \mid < 1$. – Matthew Gunn Dec 22 '16 at 19:17
• @MatthewGunn In any ARIMA/SARIMA model, to find the forecasts at hand, do I just need to take the expectation in the same way? – user72621 Dec 22 '16 at 19:27
• MA terms are trickier. AR(n) reduces to a vector AR(1) (i.e. VAR). – Matthew Gunn Dec 22 '16 at 20:36
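The general formula above is easy to check numerically. A minimal sketch (the function name `ar1_forecast` is my own):

```python
# Forecast recursion for a mean-adjusted AR(1): Yhat_t(l) = mu + phi^l (Y_t - mu)
def ar1_forecast(y_t, mu, phi, lead):
    return mu + phi ** lead * (y_t - mu)

y_t, mu, phi = 12.2, 10.8, -0.5
assert abs(ar1_forecast(y_t, mu, phi, 1) - 10.1) < 1e-9
assert abs(ar1_forecast(y_t, mu, phi, 2) - 11.15) < 1e-9
assert abs(ar1_forecast(y_t, mu, phi, 10) - 10.8013671875) < 1e-9

# For |phi| < 1 the forecast decays towards the unconditional mean mu
assert abs(ar1_forecast(y_t, mu, phi, 100) - mu) < 1e-12
```

The last assertion illustrates Matthew Gunn's comment: with $|\phi| < 1$, $\phi^l \to 0$ and the forecast converges to $\mu$.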
https://zbmath.org/?q=an:1047.34101
## On meromorphic solutions of certain nonlinear differential equations. (English) Zbl 1047.34101

The authors deal with differential equations of the form $L(f)+ p(z, f)= h(z),\tag{1}$ where $$L(f)$$ is a linear differential polynomial in $$f$$ with meromorphic coefficients, $$p(z, f)$$ is a polynomial in $$f$$ with meromorphic coefficients, and $$h(z)$$ is meromorphic. Define $$L_f:= \{h$$ meromorphic: $$T(r,h)= S(r,f)\}$$ and denote by $$F$$ the family of meromorphic solutions to (1) such that, whenever $$f\in F$$, all coefficients in (1) are in $$L_f$$, and $$N(r,f)= S(r,f)$$. It follows that, if $$f,g\in F$$, then $T(r,g)= O(T(r,f))+ S(r,f).$ Moreover, if $$\alpha> 1$$, then, for some $$r_\alpha> 0$$, $T(r,g)= O(T(\alpha r,f))$ for all $$r\geq r_\alpha$$. The authors show that, if $$f$$ is a meromorphic solution to (1) such that all coefficients in (1) are in $$L_f$$, then $$\rho(f)\geq \rho(h)$$. If $$n=: \deg_f p(z, f)\geq k+2$$ and $$N(r,f)= S(r,f)$$, then $$\rho(f)= \rho(h)$$ and $$\mu(f)= \mu(h)$$. For the equation $L(f)- p(z) f^n= h(z),$ where $$h(z)$$ is a meromorphic function, the authors show that the method used by Yang can be modified to obtain similar uniqueness results for meromorphic solutions to this generalized equation when $$n\geq 4$$.

### MSC:

34M05 Entire and meromorphic solutions to ordinary differential equations in the complex domain
34M10 Oscillation, growth of solutions to ordinary differential equations in the complex domain

### References:

[1] DOI: 10.1007/BF02785417 · Zbl 1016.34091
[2] Mohon'ko, Teor. Funktsii Funktsional. Anal. i Prilozhen 14 pp 83– (1971)
[3] Laine, Nevanlinna theory and complex differential equations (1993) · Zbl 0784.30002
[4] Yang, Bull. Austral. Math. Soc. 64 pp 377– (2001)
[5] Hayman, Meromorphic functions (1964)
[6] DOI: 10.1016/0022-247X(85)90216-1 · Zbl 0593.34014
[7] DOI: 10.1007/BF02807430 · Zbl 0129.29301
http://www.math-only-math.com/reciprocal-of-a-complex-number.html
# Reciprocal of a Complex Number How to find the reciprocal of a complex number? Let z = x + iy be a non-zero complex number. Then $$\frac{1}{z}$$ = $$\frac{1}{x + iy}$$ = $$\frac{1}{x + iy}$$ × $$\frac{x - iy}{x - iy}$$, [multiplying both numerator and denominator by the conjugate of the denominator, x - iy] = $$\frac{x - iy}{x^{2} - i^{2}y^{2}}$$ = $$\frac{x - iy}{x^{2} + y^{2}}$$ =  $$\frac{x}{x^{2} + y^{2}}$$ +  $$\frac{i(-y)}{x^{2} + y^{2}}$$ Clearly, $$\frac{1}{z}$$ is equal to the multiplicative inverse of z. Also, $$\frac{1}{z}$$ = $$\frac{x - iy}{x^{2} + y^{2}}$$ = $$\frac{\overline{z}}{|z|^{2}}$$ Therefore, the multiplicative inverse of a non-zero complex number z is equal to its reciprocal and is represented as $$\frac{Re(z)}{|z|^{2}}$$ + i$$\frac{(-Im(z))}{|z|^{2}}$$ = $$\frac{\overline{z}}{|z|^{2}}$$ Solved examples on reciprocal of a complex number: 1. If a complex number z = 2 + 3i, then find the reciprocal of z. Give your answer in a + ib form. Solution: Given z = 2 + 3i Then, $$\overline{z}$$ = 2 - 3i And |z| = $$\sqrt{x^{2} + y^{2}}$$ = $$\sqrt{2^{2} + 3^{2}}$$ = $$\sqrt{4 + 9}$$ = $$\sqrt{13}$$ Now, |z|$$^{2}$$ = 13 Therefore, $$\frac{1}{z}$$ = $$\frac{\overline{z}}{|z|^{2}}$$ = $$\frac{2 - 3i}{13}$$ = $$\frac{2}{13}$$ + (-$$\frac{3}{13}$$)i, which is the required a + ib form. 2. Find the reciprocal of the complex number z = -1 + 2i. Give your answer in a + ib form. Solution: Given z = -1 + 2i Then, $$\overline{z}$$ = -1 - 2i And |z| = $$\sqrt{x^{2} + y^{2}}$$ = $$\sqrt{(-1)^{2} + 2^{2}}$$ = $$\sqrt{1 + 4}$$ = $$\sqrt{5}$$ Now, |z|$$^{2}$$ = 5 Therefore, $$\frac{1}{z}$$ = $$\frac{\overline{z}}{|z|^{2}}$$ = $$\frac{-1 - 2i}{5}$$ = (-$$\frac{1}{5}$$) + (-$$\frac{2}{5}$$)i, which is the required a + ib form. 3. Find the reciprocal of the complex number z = i. Give your answer in a + ib form. 
Solution: Given z = i Then, $$\overline{z}$$ = -i And |z| = $$\sqrt{x^{2} + y^{2}}$$ = $$\sqrt{0^{2} + 1^{2}}$$ = $$\sqrt{0 + 1}$$ = $$\sqrt{1}$$ = 1 Now, |z|$$^{2}$$ = 1 Therefore, $$\frac{1}{z}$$ = $$\frac{\overline{z}}{|z|^{2}}$$ = $$\frac{-i}{1}$$ = -i = 0 + (-1)i, which is the required a + ib form. Note: The reciprocal of i is its conjugate, -i.
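The formula $$\frac{1}{z}$$ = $$\frac{\overline{z}}{|z|^{2}}$$ can be verified against Python's built-in complex division on the three worked examples (a small sketch; the function name `reciprocal` is my own):

```python
# 1/z = conjugate(z) / |z|^2 for a non-zero complex z
def reciprocal(z):
    return z.conjugate() / (abs(z) ** 2)

# Check the formula against Python's own complex division
for z in (2 + 3j, -1 + 2j, 1j):
    assert abs(reciprocal(z) - 1 / z) < 1e-12

# Example 1: 1/(2 + 3i) = 2/13 - (3/13)i
assert abs(reciprocal(2 + 3j) - (2 / 13 - 3j / 13)) < 1e-12
# Example 3: the reciprocal of i is its conjugate, -i
assert abs(reciprocal(1j) - (-1j)) < 1e-12
```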
https://www.physicsforums.com/threads/how-to-solve-this-problem-using-laplace-transform.843763/
# How to solve this problem using laplace transform?

1. Nov 17, 2015

### haha1234

1. The problem statement, all variables and given/known data

The differential equation given: y'' - y' - 2y = 4t^2

2. Relevant equations

3. The attempt at a solution

I used the Laplace transform table to construct this equation, and then I did partial fractions to find the inverse Laplace transform. But I'm now stuck at finding the inverse Laplace transforms of 1/s^3 and 1/s^2... The attached photo shows the attempted solution.

2. Nov 17, 2015

### Staff: Mentor

I didn't verify your work, but here is a table of Laplace transforms - http://web.stanford.edu/~boyd/ee102/laplace-table.pdf

3. Nov 18, 2015

### haha1234

4. Nov 18, 2015

### Staff: Mentor

Look just below the one for 1/s^2. It's a more general formula.

5. Nov 18, 2015

### Ray Vickson

If you know the inverse transform of 1/s, then you can get the inverse transform of 1/s^2 by integration, and of 1/s^3 by integration again. Remember: there are some standard general transform results that are helpful. Below, let $f(t) \leftrightarrow g(s) = {\cal L}(f)(s)$. Then: $$\begin{array}{l} \displaystyle \frac{df(t)}{dt} \leftrightarrow s g(s) - f(0+)\\ \int_0^t f(\tau) \, d \tau \leftrightarrow \displaystyle \frac{1}{s} g(s) \end{array}$$ These were given specifically in the table suggested by Mark44; did you miss them?
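As a sanity check on where the transform method should land (not shown in the thread): undetermined coefficients give y_p(t) = -2t^2 + 2t - 3 as a particular solution of y'' - y' - 2y = 4t^2, with the general solution adding C1 e^{2t} + C2 e^{-t} from the roots of r^2 - r - 2 = 0. A few lines of polynomial arithmetic verify the particular solution:

```python
# Represent a polynomial by its coefficient list: coeffs[k] is the coefficient of t^k.

def deriv(coeffs):
    # d/dt of sum(c_k t^k) = sum(k c_k t^{k-1})
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def combine(*terms):
    # Weighted sum of polynomials: terms are (coeff_list, weight) pairs.
    n = max(len(t) for t, _ in terms)
    out = [0] * n
    for t, w in terms:
        for k, c in enumerate(t):
            out[k] += w * c
    return out

yp  = [-3, 2, -2]     # y_p = -3 + 2t - 2t^2
yp1 = deriv(yp)       # y_p' = 2 - 4t
yp2 = deriv(yp1)      # y_p'' = -4

lhs = combine((yp2, 1), (yp1, -1), (yp, -2))  # y'' - y' - 2y
assert lhs == [0, 0, 4]                        # equals 4t^2
```

So any full solution obtained via partial fractions and the inverse transforms of 1/s, 1/s^2 and 1/s^3 should contain these polynomial terms.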
http://mathhelpforum.com/number-theory/49939-congruences-2-a.html
1. ## congruences 2

Solve the system of congruences

3x ≡ 2 (mod 4)
4x ≡ 1 (mod 5)
6x ≡ 3 (mod 9)

Is there no solution, since 6 has no inverse mod 9 (the gcd of 6 and 9 is not one), and thus you can't write the last congruence in the form x ≡ a (mod m)?

2. I have been trying to find the inverses and solve it that way, but I don't know if that is right.

3. If $ac \equiv bc \ (\text{mod } m)$ and $(c,m) = d$, then $a \equiv b \left(\text{mod } \frac{m}{d}\right)$.

Keeping this in mind, I'm going to simplify the congruences first:

$\begin{array}{rcll} 3x & \equiv & 2 & (\text{mod } 4) \\ 3x & \equiv & 6 & (\text{mod } 4) \\ x & \equiv & 2 & (\text{mod } 4) \end{array}$

$\begin{array}{rcll} 4x & \equiv & 1 & (\text{mod } 5) \\ 4x & \equiv & -4 & (\text{mod } 5) \\ x & \equiv & -1 & (\text{mod } 5) \\ x & \equiv & 4 & (\text{mod } 5) \end{array}$

$\begin{array}{rcll} 6x & \equiv & 3 & (\text{mod } 9) \\ 6x & \equiv & 12 & (\text{mod }9) \\ x & \equiv & 2 & (\text{mod } 3) \end{array}$

So we're left with the system:

$\begin{array}{rcll} x & \equiv & 2 & (\text{mod } 4) \\ x & \equiv & 4 & (\text{mod } 5) \\ x & \equiv & 2 & (\text{mod } 3) \end{array}$

And by the Chinese Remainder Theorem, since $(4, 5, 3) = 1$, there is a unique solution modulo $4\cdot 5 \cdot 3 = 60$. Should be pretty simple to solve this one.

4. Originally Posted by o_O

I get the right answer after applying the CRT to the reduced congruences, but I still don't know how you reduced or simplified the congruences. Could you maybe explain it a little bit? Could you find the inverse modulo of each congruence and reduce it to the form x ≡ a (mod m)?

5. Hello,

Divide the formula of o_O into 2 formulae:

1) If $ac\equiv bc\pmod m$ and $(c, m)=1$ then $a\equiv b\pmod m$.
2) If $ac\equiv bc\pmod {mc}$ then $a\equiv b\pmod m$.

1) corresponds to the case where you can find an inverse. 2) is a technique to decrease the modulus. You add the modulus as many times as you want so that the coefficient of x and the right-hand side have a common divisor. Bye.

6. What I did was play around with the congruences a little. Remember, they sort of act like equality signs as well. Let's take the middle one I did:

$\begin{array}{rcll} 4x & \equiv & 1 & (\text{mod } 5) \\ 4x & \equiv & -4 & (\text{mod } 5) \qquad \text{1. Subtracted 5} \\ x & \equiv & -1 & (\text{mod } 5) \qquad \text{2. Divided both sides by 4} \\ x & \equiv & 4 & (\text{mod } 5) \qquad \text{3. Added 5 to get a least residue} \end{array}$

Basically, I wanted to get rid of the coefficient in front of the x, and what I wanted was to ultimately divide it out:

1. If something is congruent to 4x mod 5, then adding or subtracting multiples of 5 will still be congruent to 4x. Remember: $a \equiv b \ (\text{mod m}) \ \iff \ a = b + km$
2. This refers to the very first statement I had in my first post.
3. Again, adding and subtracting multiples of the modulus will not change the congruence, as we did in step 1. The main purpose in doing so was to get a least residue (i.e. the "remainder").

Edit: Oh beaten!
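The reduced system can be finished off by brute force over the CRT modulus 60 (a small sketch, not from the thread; the unique residue turns out to be 14, which also satisfies the original three congruences):

```python
# Find every x in [0, modulus) satisfying all congruences x ≡ a (mod m)
def solve_crt(congruences, modulus):
    return [x for x in range(modulus)
            if all(x % m == a for a, m in congruences)]

# Reduced system: x ≡ 2 (mod 4), x ≡ 4 (mod 5), x ≡ 2 (mod 3)
sols = solve_crt([(2, 4), (4, 5), (2, 3)], 60)
assert sols == [14]  # unique solution mod 60, as the CRT guarantees

# The residue also satisfies the original (unreduced) congruences
x = sols[0]
assert (3 * x) % 4 == 2 and (4 * x) % 5 == 1 and (6 * x) % 9 == 3
```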
http://www.dankalia.com/science/phy015.htm
Newton's Three Laws & Universal Gravitation By Danny Keck Few will argue that Sir Isaac Newton ranked among the top scientists of all time, and there are many who would go so far as to say that no one has had a more important effect on the development of science. Newton in fact did make some very important advances, such as creating infinitesimal calculus. In this paper I will briefly explain two other achievements of Newton: the laws of motion and universal gravitation. Both ideas were published in Newton's most famous publication, Mathematical Principles of Natural Philosophy, printed in 1687. Newton first formulated his three laws of motion, which described the relationship between force and its result, acceleration. The first law states that a body will move at a constant speed unless a force acts upon it. Also, an object at rest will stay at rest unless a force acts on it. If no force acts on an object, it will continue moving at a constant (or zero) speed. This was originally thought up by Galileo, and it ran against conventional wisdom at that time. It was believed in the past that for an object to move at all a force must be acting on it; only an object at rest was not being affected by a force. Galileo wondered, however, that if a perfectly smooth object were moving along a perfectly smooth horizontal surface, it might travel forever in a straight line without any force to cause it to. In his book Newton clearly stated that this would be true. Velocity is not the result of force; rather, acceleration is the result. This solved a problem that had been debated for centuries: what force could possibly be moving the planets? The answer is none; the planets are moving naturally in a uniform circular motion. Although gravity causes their direction to change constantly, their actual rate of movement is the same because there is no force to accelerate them. The second law is a mathematical treatment of the first law. 
It lays down in simple yet accurate terms the numbers that describe how a force changes the motion of a body. The acceleration of a mass is directly proportional to the force applied to it and inversely proportional to the mass of the object. All these reactions can be described with this simple formula: F = ma. The second law can be considered the most important, as all of dynamics' basic equations can be derived from it using calculus. If more than one force is acting on an object, then they must all be added together before using the equation. The magnitude and direction of every force must be combined to form the resultant force, which is used in the equation. Acceleration is also a vector quantity, and it is in the same direction as the force. Newton's third law is popularly known as: for every action there is an equal and opposite reaction. It may be more accurate to say that the actions of two bodies on each other are always equal in magnitude and opposite in direction. If you press your hand against a wall with a certain amount of force, the wall will push back against your hand with the same amount of force. An airplane flying through the air is affected by the force of the earth pulling down on it, and in turn the airplane exerts an equal force on the earth (although because of the earth's immense mass this pull has a negligible effect.) Legend has it that Newton was considering his Third Law one day when he saw an apple fall from a tree. He noted that the earth was pulling on the apple and the apple was pulling on the earth. He then wondered: what if the same force that was acting here was also responsible for keeping the whole universe together? What if the earth and the sun attracted each other by the same concepts and formulas by which gravity affects objects on earth? It turned out that he was right. Earth's force of gravity is actually in effect throughout the entire universe, giving this force the name of universal gravitation. 
Its effects are very far reaching. Every pair of particles of matter in the universe attracts each other with some gravitational force. The force acts in a straight line between them and is directly proportional to the product of the masses of the objects and inversely proportional to the square of the distance separating them. This can be calculated with the formula F = Gm1m2 / d^2, where G is the gravitational constant. G is the same throughout the universe and does not depend on anything. Its value is approximately 6.67 × 10^-11 N·m²/kg². This value is obviously very small, but it still matters in calculations. Some scientists believe that the value of G is actually diminishing very slowly over time. The gravitational force is one of the four fundamental forces of the universe, along with the electromagnetic, strong nuclear and weak nuclear forces. Gravity is fairly weak: between a proton and an electron the force of gravity is only about 5 × 10^-40 times as strong as the electromagnetic force. However, gravity does have an infinite range. Although Newton's theories were landmark achievements and are usually very accurate, in some situations they break down. Einstein developed the theory of relativity to fix discrepancies in Newton's work, introducing ideas such as spacetime and gravitational waves. The increased accuracy comes at a price; Einstein's formulas are far more complicated than Newton's. The average person working with everyday objects, however, will find that Newton's theories work as accurately as can be measured. Newton's ideas on motion, both on Earth and in the rest of the universe, brought light to the confusing world of science, answered many of the great questions of the world, and were immense steps in the progression of mankind's ongoing scientific endeavors.
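As a sanity check on the formula (an editor's illustration, not part of the essay), plugging the Earth's mass and radius into F = Gm1m2/d² gives the familiar weight of a 1 kg object at the surface:

```python
# Quick check of F = G*m1*m2 / d^2: the gravitational pull on a 1 kg
# mass at the Earth's surface should come out close to 9.8 N.
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def gravity(m1, m2, d):
    """Newton's law of universal gravitation."""
    return G * m1 * m2 / d**2

f = gravity(M_EARTH, 1.0, R_EARTH)
print(round(f, 1))  # ~9.8 N
```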
https://www.bartleby.com/questions-and-answers/suppose-a-car-travels-108-km-at-a-speed-of35.0ms-and-uses1.80gallons-of-gasoline.-only-30percent-of-/093c2839-1357-4edd-9944-80b5626cf8a8
# Suppose a car travels 108 km at a speed of 35.0 m/s, and uses 1.80 gallons of gasoline. Only 30% of the gasoline goes into useful work by the force that keeps the car moving at constant speed despite friction. (The energy content of gasoline is 1.3 × 10^8 J per gallon.) (a) What is the force exerted to keep the car moving at constant speed? N (b) If the required force is directly proportional to speed, how many gallons will be used to drive 108 km at a speed of 28.0 m/s? gallons

Step 1 (a). The amount of work done to keep the car moving at constant speed can be calculated as follows,

Step 2 Write the expression for work done, substitute the corresponding values, an...
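Since the worked solution above is cut off, here is a sketch of how the two parts can be computed from the problem statement alone (reconstructed by the editor, not the original tutor's steps):

```python
# Sketch of the full calculation: only 30% of the fuel energy becomes
# useful work, and at constant speed that work equals force * distance.
distance = 108e3           # m
gallons = 1.80
energy_per_gallon = 1.3e8  # J
efficiency = 0.30

# (a) Useful work W = F * d, so F = W / d.
useful_work = gallons * energy_per_gallon * efficiency  # J
force = useful_work / distance
print(round(force))  # 650 N

# (b) If the force is proportional to speed, scale it to 28.0 m/s,
# recompute the work over the same distance, and convert back to fuel.
force_28 = force * 28.0 / 35.0
work_28 = force_28 * distance
gallons_28 = work_28 / (efficiency * energy_per_gallon)
print(round(gallons_28, 2))  # 1.44 gallons
```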
http://math.stackexchange.com/questions/227424/coercion-in-magma
# Coercion in MAGMA In MAGMA, if you are dealing with an element $x\in H$ for some group $H$, and you know that $H<G$ for some group $G$, is there an easy way to coerce $x$ into $G$ (e.g. if $H=\text{Alt}(n)$ and $G=\text{Alt}(n+k)$ for some $k\geq 1$)? The natural coercion method $G!x$ does not seem to work. - I think holding someone over magma is probably a good method of coercion. (Apologies for the nonconstructive comment, but I couldn't resist.) –  Cameron Buie Nov 2 '12 at 9:14 Why didn't I think of that! –  David Ward Nov 2 '12 at 9:19 Sorry, that is my mistake for using the wrong MAGMA tag! Many apologies. –  David Ward Nov 2 '12 at 10:33 No need to apologize; after all, the correct tag didn't even exist when you asked this question. –  Ilmari Karonen Nov 2 '12 at 10:38 One way would be to define the inclusion homomorphism $H \hookrightarrow G$ and apply it to your element $x$. See http://magma.maths.usyd.edu.au/magma/handbook/text/547#5783 for how to define homomorphisms.
https://getrevising.co.uk/revision-tests/thermal_physics_3
Thermal Physics What is an adiabatic process? A sudden change in pressure during which no heat is transferred into or out of the system. Q = constant. 1 of 20 What is the 1st Law of Thermodynamics? Delta U = Delta W + Delta Q (U = internal energy; W = work done; Q = heat transferred) 2 of 20 What is the 2nd Law of Thermodynamics? The entropy (S) of the universe is always increasing. 3 of 20 Define HEAT The amount of energy transferred across a temperature difference. 4 of 20 Define temperature A measure of the average kinetic energy of particles in a system 5 of 20 What is latent heat? State the equation. The heat required to change the phase of a substance. (change in Q = mL) 6 of 20 What is meant by an Ideal Gas? One that obeys the ideal gas equation: PV = nRT. Attractions between particles are negligible. The volume of the particles compared to the total volume is negligible. The collisions are elastic. The time of a collision is negligible. 7 of 20 What is the purpose of the Maxwell–Boltzmann distribution graph? It shows the distribution of particle kinetic energies in the system. (No. of particles = y axis; (kinetic) energy = x axis) 8 of 20 What is state? Specific values of P, V and T that describe a system. 9 of 20 Name another formula for KE other than 1/2 mv^2 KE = 3/2 kT 10 of 20 What is k in the equation KE = 3/2 kT? k = Boltzmann's constant = 1.38 × 10^-23 J/K 11 of 20 Charles' Law? ISOBARIC (volume on the y axis, temperature on the x axis) = CONSTANT PRESSURE 12 of 20 Gay-Lussac's Law? ISOCHORIC (pressure on the y axis, temperature on the x axis) = CONSTANT VOLUME 13 of 20 Boyle's Law? ISOTHERMAL (pressure on the y axis, volume on the x axis) = CONSTANT TEMPERATURE 14 of 20 What is c in the formula mcΔT? State its units. Specific heat capacity (J kg^-1 K^-1) 15 of 20 A device that converts heat into mechanical work is called what? A heat engine. 16 of 20 Describe typical steps of a heat engine's process 1) Heat is taken into the engine (from a hot reservoir of assumed fixed temperature).
2) Q heat enters the engine, a part is converted to mechanical work, and what is left is heat the engine rejects. 3) The rejected heat goes to a reservoir of lower temperature. 17 of 20 What equation can be used to calculate an engine's efficiency? Efficiency = W/Q = (energy taken in from the hot reservoir - energy rejected) / total energy taken in from the hot reservoir 18 of 20 Energy degradation refers to what happening to the energy? It becomes less useful, as a consequence of the second law of thermodynamics 19 of 20 A gas is compressed isothermally so that an amount of work = 6500 J is done on it. How much heat is taken out of or given to the gas? Isothermal, so there is no temperature change and no change in internal energy (ΔU = 0); the first law then gives Q = -W, so 6500 J of heat must leave the gas. 20 of 20
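Two of the formulas on these cards, KE = 3/2 kT and efficiency = W/Q, can be checked with a short script (illustrative numbers of the editor's choosing, not from the cards):

```python
# Average kinetic energy per particle, KE = (3/2) k T, and a
# heat-engine efficiency computed from heat in and heat rejected.
K_BOLTZMANN = 1.38e-23  # J/K, Boltzmann's constant

def mean_kinetic_energy(temperature_k):
    return 1.5 * K_BOLTZMANN * temperature_k

def engine_efficiency(heat_in, heat_rejected):
    # Efficiency = (Q_in - Q_rejected) / Q_in, i.e. W / Q_in.
    return (heat_in - heat_rejected) / heat_in

print(mean_kinetic_energy(300.0))        # ~6.21e-21 J at room temperature
print(engine_efficiency(1000.0, 600.0))  # 0.4
```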
http://mathoverflow.net/questions/30696/how-can-we-show-the-spaces-m-gn-and-m-g-n-are-homotopy-equivalent
# How can we show the spaces $M_{g}(n)$ and $M_{g, n}$ are homotopy equivalent? How can we prove that the moduli space,$M_{g}(n)$, of genus $g$ Riemann surface with $n$ boundary components is homotopy equivalent to $M_{g,n}$, that is ,the moduli space of genus $g$ Riemann surface with $n$ punctures? Thanks! (It is very intuitive, but it seems that I can't make it) - It would help to change your title to something more descriptive. –  Kevin H. Lin Jul 6 '10 at 19:17 Hi,I add it,thanks! –  HYYY Jul 7 '10 at 5:30 Thanks for making the change. –  Kevin H. Lin Jul 7 '10 at 6:27 A compact Riemann surface of genus $g$ with $n$ boundary components has a unique realization as a hyperbolic surface with geodesic boundary. One may see this by reflecting through the boundary and uniformizing. The uniqueness of the uniformization implies it is invariant under reflection, and therefore the fixed point set is geodesic. Thus, the moduli space of genus $g$ Riemann surfaces with $n$ boundary components is equivalent to the space of hyperbolic surfaces with totally geodesic boundary. One may now insert a punctured disk into each boundary component, to obtain a Riemann surface with punctures. I don't know of a canonical way to do this, but for example for a boundary component of length $l$, one may attach isometrically the boundary of a punctured Euclidean disk of circumference $l$. The important thing is that this gluing only depends on $l$, and that it induces a conformal structure on the punctured surface. This gives a map between the spaces. Since the mapping class groups are the same, it induces a homotopy equivalence (in the category of orbifolds). Of course, there are some technical details one must carry out to make this argument rigorous. There are several other ways to fill in a punctured disk. If you insert disks, then you get the closed surface of genus g. This factors through the canonical map from $M_{g,n}$ which fills in punctures. 
The fiber here is more complicated though, this is related to the Birman exact sequence. –  Ian Agol Jul 6 '10 at 3:44 Very small question:Is the moduli space of Riemann surface with genus $g$ and $n$ punctures the same as moduli space of Riemann surface with genus $g$ and $n$ marked points?Thanks! –  HYYY Jul 6 '10 at 17:15
http://mathhelpforum.com/advanced-statistics/168269-joint-density-function-x-discrete-y-continuous.html
# Math Help - Joint density function of X (discrete) and Y (continuous)? 1. ## Joint density function of X (discrete) and Y (continuous)? "Suppose X is a discrete random variable and Y is a continuous random variable. Let f_{X,Y}(x,y) denote the joint density function of X and Y..." I'm not sure if I understand the meaning of joint density here. In the case above, what does the joint density mean? What are its properties? Does it still double integrate to 1 (as in the jointly continuous case)? From my probability class, I remember that a joint density function is only defined when X and Y are BOTH continuous random variables. But when one random variable is continuous and the other discrete, how do we even define the joint density function? Hopefully someone can explain this. Thank you! 2. You sum on the discrete one and integrate on the continuous. I prefer to use the term density for only continuous RVs, but not everyone does that. 3. Originally Posted by matheagle You sum on the discrete one and integrate on the continuous. I prefer to use the term density for only continuous RVs, but not everyone does that. Do you mean that ∫_y ∑_x f_{X,Y}(x,y) dy = 1? Is there an appropriate name for this kind of "density"? 4. Originally Posted by kingwinner Do you mean that ∫_y ∑_x f_{X,Y}(x,y) dy = 1? Is there an appropriate name for this kind of "density"? Mixed Distributions 5. But what defines a VALID density in this case? 6. $f(x,y)\ge 0$ and that it sums/integrates to ONE 7. Originally Posted by matheagle $f(x,y)\ge 0$ and that it sums/integrates to ONE When you say it sums and integrates to one, do you mean ∫_y ∑_x f_{X,Y}(x,y) dy = 1, where the sum is over the support of X and the integral is over the support of Y? Does it matter whether we do the sum first or the integral first? (i.e. can we interchange the ∫ and ∑?) 8.
Originally Posted by kingwinner When you say it sums and integrates to one, do you mean ∫_y ∑_x f_{X,Y}(x,y) dy = 1, where the sum is over the support of X and the integral is over the support of Y? Mr F says: Yes. Does it matter whether we do the sum first or the integral first? (i.e. can we interchange the ∫ and ∑?) Mr F says: Most likely not. Review the appropriate theorems on when reversing the order of integral and summation is valid.
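The sum-then-integrate condition can be verified numerically for a concrete mixed distribution (an editor's example, not from the thread): take X ~ Bernoulli(0.3) and, given X = x, let Y be Normal(x, 1), so f(x, y) = P(X = x) * phi(y - x).

```python
# Numerical check that a mixed joint "density" sums over the discrete
# variable and integrates over the continuous one to exactly 1.
import math

def normal_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def f(x, y):
    p = {0: 0.7, 1: 0.3}[x]  # P(X = x) for Bernoulli(0.3)
    return p * normal_pdf(y - x)

# Midpoint-rule integral over y in [-10, 11], summed over x in {0, 1}.
dy = 0.001
total = sum(
    f(x, -10 + (i + 0.5) * dy) * dy
    for x in (0, 1)
    for i in range(int(21 / dy))
)
print(round(total, 6))  # ~1.0
```

Here the order of sum and integral can be swapped freely because everything is nonnegative (Tonelli's theorem), which matches Mr F's caution that interchange needs justification in general.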
https://www.tmssoftware.com/site/blog.asp?post=530
# New features in TMS Analytics & Physics 2.8

#### Tuesday, April 16, 2019

The previous version 2.7 of the TMS Analytics & Physics library introduced the capability of converting formulae and units of measurement to the TeX format. The TeX representation then allows drawing the formula in natural math form using external tools such as MathJax. The new version 2.8 provides some improvements to formula conversion and includes new capabilities. The main new feature is converting standard functions to math operators. For example, the exponent function 'exp(x)' can be converted to the form of a power operation as 'e^x'. The following standard functions are supported: sqrt, root, pow, abs and exp. The option of converting functions to the operation format can be switched on or off, allowing different representations of the formula. Let us consider the following formula, containing several standard functions: 'sqrt(x^2-1)+root{3*n}(x/2)*abs(x-A)-exp(x^2)*pow{n/2}(x+1)'. This formula can be converted to the TeX format in two different ways: with or without converting functions to the operation format. The original post shows the rendered formula for these two cases (without function conversion, and with function conversion). Another improvement is new simplifications of math expressions. Simplifying math expressions allows better representation of math data. A simplified formula is more suitable for reading and comprehension. For example, expression simplification is implicitly used after symbolic derivative calculation, because it can significantly reduce the result. Let us consider the formula shown (as an image) in the original post. Evaluating symbolic derivatives for the expression, we can get two results.
The first result is evaluated using the formal recursive chain algorithm without simplifications; the next is the same expression after simplification (both shown as images in the original post). The latest version 2.8 of the library supports the following simplification algorithms:

• Expanding nested sum, product and power expressions
• Collecting constant addends in sum expressions
• Reducing addends in sum expressions
• Combining constant multipliers in product expressions
• Reducing multipliers in fraction expressions
• Combining power expressions with multipliers
• Simplifying integer square roots and moduli of negative data

The expression simplification can be used explicitly in code by calling the 'Simplify' method of the 'TTranslator' class. The original post gives an example formula before and after simplification demonstrating these algorithms. TMS Analytics & Physics 2.8 is already available. Masiha Zemarai
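As a rough illustration of what one of these algorithms does (a toy Python sketch by the editor, not TMS library code), constant addends in a sum expression can be collected like this:

```python
# Toy version of "collecting constant addends in sum expressions".
# Expressions are nested tuples: ('+', term, term, ...) or atoms
# (numbers and symbol names as strings).

def collect_constants(expr):
    if not (isinstance(expr, tuple) and expr[0] == '+'):
        return expr
    constant = 0
    symbolic = []
    for term in expr[1:]:
        term = collect_constants(term)  # recurse into nested sums
        if isinstance(term, (int, float)):
            constant += term
        else:
            symbolic.append(term)
    if constant != 0:
        symbolic.append(constant)
    if len(symbolic) == 1:
        return symbolic[0]
    return ('+',) + tuple(symbolic)

# 2 + x + 3 + 5  ->  x + 10
print(collect_constants(('+', 2, 'x', 3, 5)))  # ('+', 'x', 10)
```

A real simplifier, like the one described above, chains many such rewrites (expansion, reduction, combining multipliers) until the expression stops changing.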
http://benstanhope.blogspot.com/2010/12/is-god-bound-by-laws-of-logic.html
## Wednesday, December 1, 2010 ### Is God bound by the laws of logic? I wrote these proofs to settle a disagreement with my philosophy professor. He came to agree they were true. Since there is very little good information on this subject on the web, and such a question is so vital to constructing a proper theology, I have posted them here: The Law of non-contradiction: 1) When one says "God is not bound to the laws of logic" they are stating that it is false to say that "God is bound to the laws of logic". 2) The law of non-contradiction is a law of logic. 3) If God is "not bound to" the laws of logic then God is not "bound" to the law of non-contradiction (from 2). 4) The law of non-contradiction affirms that "something cannot be both true and false at the same time and in the same sense". 5) If God is not bound by the law of non-contradiction (from 3) then it is possible for God to be bound by the laws of logic and not be bound by the laws of logic at the same time and in the same sense (from 4). 6) The one who says "God is not bound by the laws of logic" is stating that it is false to say that "God is bound by the laws of logic." (from 1) 7) Therefore, the one who says "God is not bound by the laws of logic" assumes that God is bound by the law of non-contradiction in order for his statement to be true and the negating statement to be false (from 4 and 5). 8) Therefore, the one who says "God is not bound by the laws of logic" is assuming that God is bound by a law of logic (i.e. the law of non-contradiction). The Law of Identity (a richly satisfying spectacle): 1) When one says "God is not bound by the laws of logic" they are assuming that God is God in order to say that He is not bound by the laws of logic (i.e. God is not my dog, which is quite bound by the laws of logic). 2) The law of identity is a law of logic. 3) If God is not bound by the laws of logic then God is not bound by the law of identity (from 2).
4) The law of identity states that "Everything is what it is and not another thing". 5) If God is not bound by the law of identity (from 3) then God needn't be God and can be "other things" (from 4). 6) When one says "God is not bound by the laws of logic" they are assuming that God is God and not other things in order to say that He is not bound by the laws of logic (from 1). 7) Therefore, the one who says "God is not bound by the laws of logic" assumes that God is bound by a law of logic (i.e. the law of identity) in order to deny that he is bound by the laws of logic. The Law of the excluded middle: 1) When one says "God is not bound by the laws of logic" they are assuming their statement is true. 2) The law of the excluded middle is a law of logic. 3) The law of the excluded middle states "a proposition is either true or false". 4) If God is not bound by the laws of logic then God is not bound by the law of the excluded middle (from 2). 5) If God is not bound by the law of the excluded middle (from 4) then propositions about God can be neither true nor false (from 3). 6) The statement "God is not bound by the laws of logic" is a proposition which claims to be true (from 1). 7) Therefore, the person who says "God is not bound by the laws of logic" is assuming that God is bound by the law of the excluded middle (from 5 and 6) in order to claim their statement is true and/or that its negation is false. One might object by saying that God created the laws of logic and currently subjugates Himself under them, but this appears to be self-defeating as well: - Who created the laws of logic? The person here assumes the law of identity before it was created. (For that matter, why would God create the laws of logic if there were no laws of logic withholding the existence of the laws of logic?) - When God was not subjugating Himself to logic, was it true that He was not subjugating Himself to logic? (The excluded middle is presupposed.)
- To say the laws of logic didn't exist is to assume that the law of non-contradiction did; otherwise, the laws of logic would have existed and would have not existed at the same time. You cannot refute logic without presupposing it. 1. On behalf of Grand Master Ashra Kwesi we would like to challenge you to an online debate on our internet radio show. Please email us if you accept; you will be given a call-in number, time and date. Thank you. Much respect 2. If a concept is not bound by the laws of logic then the concept is illogical. The concept of god is illogical. A concept can't be logical and illogical at the same time. You can't have it both ways 3. What about dialetheism? The liar paradox is one of many examples that seems to fly in the face of this assumption. Whilst I'm not saying that this implies there is a god of any kind, I am suggesting that there are limits to what we can infer from logic alone. 4. "You can't refute logic without presupposing it." Well, you also can't "prove" logic without presupposing it. That's the tricky thing about logic. As soon as you start asking questions about whether or not logic is true, you hit a brick wall.
https://indico.bnl.gov/event/17432/timetable/?view=standard_inline_minutes
# Ciro Riccio: T2K recent results and future prospects US/Eastern Small seminar room Description T2K (Tokai to Kamioka) is a Japan-based long-baseline neutrino oscillation experiment designed to measure (anti-)neutrino flavor oscillations. A neutrino beam peaked around 0.6 GeV is produced in Tokai and directed toward the water Cherenkov detector Super-Kamiokande, located 295 km away. A complex of near detectors is located at 280 m and is used to constrain the flux and cross-section uncertainties. In 2014, T2K started a campaign to measure the phase $\delta_{CP}$, an unknown element of the Pontecorvo-Maki-Nakagawa-Sakata matrix, which can provide a test of the violation or conservation of the CP symmetry in the lepton sector. To achieve this goal, T2K is taking data with neutrino- and antineutrino-enhanced beams, investigating asymmetries in the electron neutrino and antineutrino appearance probabilities. One of the largest systematic uncertainties affecting neutrino oscillation measurements comes from limited knowledge of (anti-)neutrino-nucleus interactions. The T2K experiment has a wide range of neutrino cross-section measurements using detectors in its near detector complex. In this seminar an overview of the latest T2K neutrino oscillation and cross-section measurements is presented, as well as its future prospects, which are characterized by an intense program of upgrades.
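A back-of-envelope two-flavor calculation (an editor's illustration with typical, assumed oscillation parameters, not figures from the abstract) shows why the beam is peaked near 0.6 GeV for the 295 km baseline: that energy sits close to the first oscillation maximum.

```python
# Two-flavor muon-neutrino survival probability
# P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
# with L in km, E in GeV, dm2 in eV^2 (standard convention).
import math

L_KM = 295.0       # Tokai -> Super-Kamiokande baseline
DM2 = 2.5e-3       # |Delta m^2_32| in eV^2 (assumed typical value)
SIN2_2THETA = 1.0  # near-maximal mixing (assumed)

def survival(energy_gev):
    phase = 1.27 * DM2 * L_KM / energy_gev
    return 1.0 - SIN2_2THETA * math.sin(phase) ** 2

print(round(survival(0.6), 3))  # close to 0: maximal disappearance
print(round(survival(3.0), 3))  # well above 0 at higher energy
```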
http://science.sciencemag.org/content/324/5929/888
Perspective Ocean Science # Ice Sheet Stability and Sea Level Science  15 May 2009: Vol. 324, Issue 5929, pp. 888-889 DOI: 10.1126/science.1173958 ## Summary Volume changes in the Antarctic Ice Sheet are poorly understood, despite the importance of the ice sheet to sea-level and climate variability. Over both millennial and shorter time scales, net water influx to the ice sheet (mainly snow accumulation) nearly balances water loss through ice calving and basal ice shelf melting at the ice sheet margins (1). However, there may be times when parts of the West Antarctic Ice Sheet (WAIS) are lost to the oceans, thus raising sea levels. On page 901 of this issue, Bamber et al. (2) calculate the total ice volume lost to the oceans from an unstable retreat of WAIS, which may occur if the part of the ice sheet that overlies submarine basins is ungrounded and moves to a new position down the negative slope (see the figure).
https://www.physicsforums.com/threads/using-cross-product-to-find-angle-between-two-vectors.510814/
# Using cross product to find angle between two vectors

### yayscience

**1. The problem statement, all variables and given/known data**

Find the angle between

$$\vec{A} = 10\hat{y} + 2\hat{z} \quad\text{and}\quad \vec{B} = -4\hat{y} + 0.5\hat{z}$$

using the cross product. The answer is given to be 161.5 degrees.

**2. Relevant equations**

$$\left| \vec{A} \times \vec{B} \right| = \left| \vec{A} \right| \left| \vec{B} \right| \sin(\theta)$$

**3. The attempt at a solution**

$$\vec{A} \times \vec{B} = \left| \begin{array}{ccc} \hat{x} & \hat{y} & \hat{z} \\ 0 & 10 & 2 \\ 0 & -4 & 0.5 \end{array} \right| = 13\hat{x}, \qquad \left| \vec{A} \times \vec{B} \right| = 13$$

The magnitude of A cross B is 13. Next we find the magnitudes of vectors A and B:

$$\left| \vec{A} \right| = \sqrt{10^2 + 2^2} = \sqrt{104} \approx 10.198039$$

and

$$\left| \vec{B} \right| = \sqrt{(-4)^2 + (\tfrac{1}{2})^2} = \sqrt{16.25} \approx 4.0311289$$

Multiplying the previous two answers, we get 41.109609. So now we should have

$$\sin(\theta) = \frac{13}{41.109609}$$

Solving for theta, we get 18.434951 degrees. This is frustrating: 180 - 18.434951 = the correct answer. I'm not quite sure where I'm going wrong here. I must be making the same mistake repeatedly. Another problem was the same thing, but with the numbers changed, and I also got 180 - {the answer I was getting} = {the correct answer}; but when I tried the worked example using the SAME methodology, I got the correct answer. Can someone please share some relevant wisdom in my direction?

### ehild

sin(alpha) = sin(180 - alpha). Plot the two vectors and you will see what angle they enclose.

### I like Serena

You might use the sign of the inner dot product to see which angle you have.

### yayscience

I can plot them, and I can see the angle, but I'm interested in calculating the angle. When I use the dot product I get the correct result, but I cannot see where my mistake is while using the cross product.

### ehild

There is no mistake: you get the sine of the angle, but there are two angles between 0 and pi with the same sine.

### yayscience

Oh wow; I didn't even consider that the answer wasn't unique. Thanks!
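To complement the advice in this thread, here is a quick numerical check (my own illustration, not from the thread): combining |A×B| with the sign of A·B via `atan2` removes the sine ambiguity and lands on the obtuse angle directly.

```python
import numpy as np

A = np.array([0.0, 10.0, 2.0])
B = np.array([0.0, -4.0, 0.5])

cross_mag = np.linalg.norm(np.cross(A, B))  # |A x B| = 13
dot = float(A @ B)                          # A . B = -39  (< 0, so the angle is obtuse)

# atan2(|A x B|, A . B) picks the unique angle in [0, 180] degrees,
# resolving the sin(theta) = sin(180 - theta) ambiguity.
theta = np.degrees(np.arctan2(cross_mag, dot))
print(theta)  # ~161.57
```

Using only the cross product gives sin(θ) = 13/41.1096 and hence either 18.43° or 161.57°; the negative dot product selects the latter.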
https://thosgood.com/blog/2021/09/24/some-questions-about-analytic-geometry/
# Some questions about complex-analytic geometry

###### 24th September, 2021

Despite being an analytic/algebraic geometer by name (and title, and qualification, and academic upbringing, and …), there are so many gaps in my knowledge, even when it comes to simple foundational things. One thing which I have always tried to do during my academic “career”, however, is to be the person who asks the first stupid question, so that others can feel less nervous about asking their (certainly less stupid) questions. Thus: this blog post. I am going to explain what I do know, talk about what I don’t, and then ask some semi-concrete questions that I’m hoping people will be able to help me out with!

# The algebraic dictionary

In the case of algebraic geometry, we have a nice dictionary which lets us translate between the language of sheaves, of bundles, and of modules. I’m sure that many of you will have seen at least the first and last column of this table before; the middle one talks about what sort of resolutions we can find.

| Sheaves | Bundles¹ | Modules² |
| --- | --- | --- |
| locally free | vector bundles | finitely generated and projective |
| coherent | complexes of vector bundles | finitely generated |
| quasi-coherent | complexes of vector bundles | arbitrary |

¹ For quasi-projective Noetherian schemes.
² For Noetherian affine schemes.

Of course, there are some very important caveats here, noted above (i.e. we need to be working over sufficiently nice spaces in order for these correspondences to hold). Note that here we can write a quasi-coherent sheaf as a (filtered) colimit of its coherent subsheaves, hence the fact that it can also be resolved by a complex of vector bundles (just maybe a really really big and unwieldy one).

# The analytic dictionary

Now, I don’t really work very much in the algebraic setting: I’m more interested in the case of holomorphic things on Stein spaces, and so on. So here’s my attempt at filling in the analogous table.

| Sheaves | Bundles | Modules |
| --- | --- | --- |
| locally free | vector bundles | ??? |
| coherent | complexes of vector bundles on the Čech nerve | ??? |
| “quasi-coherent” | ??? | ??? |

As you can see, it is far from complete, but let’s start with the things that I do know (since this won’t take very long to talk about). It is still the case that locally free sheaves on a complex-analytic manifold correspond to vector bundles, and, by the results of Green’s 1980 thesis (or mine), we can show that coherent (analytic) sheaves (on a complex-analytic manifold) can be resolved, not quite by complexes of vector bundles, but by a sort of up-to-homotopy version of this, namely Green complexes, or complexes of vector bundles on the Čech nerve.

For the last row of the “Bundles” column, things get a bit complicated. Indeed, it’s not really universally accepted (as far as I know) what “the” “good” definition of quasi-coherent even is in the complex-analytic case. The reason why we don’t really want to use the same definition as in the algebraic case (i.e. having a local presentation) is that such objects are not very well behaved at all. For example, as mentioned above, we know that any quasi-coherent algebraic sheaf is the filtered colimit of its coherent subsheaves, but the analogous statement in the analytic setting is very much not true. In particular, there exist quasi-coherent (in the sense of the algebraic definition) analytic sheaves on Stein spaces that have non-zero first sheaf cohomology. This is bad, because any coherent analytic sheaf on a Stein space has zero higher sheaf cohomology, and so there is absolutely no hope of recovering such quasi-coherent sheaves as colimits of their coherent subsheaves. (Indeed, it’s really bad, because it’s not just that we don’t quite recover the quasi-coherent sheaf from the colimit, but we’re really far away from doing so, because we’re trying to get something non-zero from a bunch of things that are zero!)
There is the notion of Fréchet quasi-coherence, which I have read is a good substitute, but I don’t fully understand why, nor if it is somehow “the” good substitute (indeed, I haven’t even fully internalised the definition, since it really is something quite analytic). Theorem 4.3.6 in Spectral Decompositions and Analytic Sheaves tells us the following:

> Let $X$ be a finite-dimensional Stein space, and let $\mathscr{F}$ be a Fréchet $\mathscr{O}_X$-module. The following conditions are equivalent:
>
> 1. $\mathscr{F}$ is quasi-coherent;
> 2. $\mathscr{F}$ admits globally topologically free resolutions on the left;
> 3. $\mathscr{F}$ is acyclic on $X$ and admits, locally on $X$, topologically free resolutions to the left.

Here, when they write “quasi-coherent”, they mean “Fréchet quasi-coherent”. So maybe this is useful somehow, because it characterises this notion of quasi-coherence in terms of a resolution property, but I’m not sure if that answers my question or not. Indeed, I’m (yet again) still not entirely sure what these “globally topologically free resolutions” look like, and how they might relate to things that I already know.

Then, of course, there’s the elephant in the room (or dictionary, I suppose): what goes in the right-hand column? That is, what even is $\operatorname{Spec}$ in the analytic setting? I know that this can be sort of understood through the framework of analytic spaces, in the sense of Grauert et al., but I can’t quite complete the whole story in my head from what I’ve read. I’ve been meaning to read Brian Conrad’s “Relative ampleness in rigid geometry”, since I think this might hold some answers, but I’ve been too scared by the appearance of the word “rigid” in the title to actually properly dive in…

# Questions

Anyway, there you go: there are lots of things that I, having a PhD in complex-analytic geometry stuff, really should know the answer to, but don’t. Here are my “precise” questions:

1. How can I fill in this analytic dictionary?
2. In fact, what even is $\operatorname{Spec}$ in the analytic setting?
3. Is Fréchet quasi-coherent the good thing to put in the analytic dictionary? That is, what analogous properties does Fréchet quasi-coherence satisfy to make it the “right” notion? e.g. does it arise as the right Kan extension of the pseudofunctor from commutative rings to categories which sends a ring to the category of modules over that ring?
4. More specifically, is it true that Fréchet quasi-coherent sheaves can be recovered as filtered colimits of their coherent subsheaves?

Sadly I still haven’t set up comments on this blog (it’s on my (ever-growing) to-do list), so having a conversation might be a bit tricky, but Twitter might be a good place to do this (on this thread, for example), or you can just drop me an email — I’d love to hear from anybody!
http://www.gnu.org/software/gsl/manual/html_node/Simulated-Annealing-algorithm.html
### 25.1 Simulated Annealing algorithm

The simulated annealing algorithm takes random walks through the problem space, looking for points with low energies; in these random walks, the probability of taking a step is determined by the Boltzmann distribution,

p = e^{-(E_{i+1} - E_i)/(kT)}

if E_{i+1} > E_i, and p = 1 when E_{i+1} <= E_i. In other words, a step will occur if the new energy is lower. If the new energy is higher, the transition can still occur, and its likelihood is proportional to the temperature T and inversely proportional to the energy difference E_{i+1} - E_i.

The temperature T is initially set to a high value, and a random walk is carried out at that temperature. Then the temperature is lowered very slightly according to a cooling schedule, for example,

T -> T/mu_T

where mu_T is slightly greater than 1. The slight probability of taking a step that gives higher energy is what allows simulated annealing to frequently get out of local minima.
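GSL provides this algorithm as `gsl_siman_solve`; as a language-neutral illustration of just the acceptance rule and cooling schedule described above (my own sketch, not the GSL API — the defaults for `t0`, `mu_t`, step size, and the double-well test energy are arbitrary choices), consider:

```python
import math
import random

def anneal(energy, x0, t0=10.0, t_min=1e-3, mu_t=1.003, step=0.5, k=1.0, seed=0):
    """Boltzmann-acceptance random walk with the cooling schedule T -> T/mu_T."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        x_new = x + rng.uniform(-step, step)
        e_new = energy(x_new)
        # p = 1 if the energy decreases; p = exp(-(E_new - E_old)/(kT)) otherwise
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / (k * t)):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t /= mu_t  # cool very slightly: T -> T/mu_T with mu_T slightly above 1
    return best_x, best_e

# double-well energy with global minima E = 0 at x = +/-1; start at x = 3
x, e = anneal(lambda u: (u * u - 1.0) ** 2, x0=3.0)
print(x, e)
```

The occasional uphill acceptances at high temperature are what let the walk cross the barrier between the two wells rather than getting stuck in whichever local minimum it reaches first.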
https://www.tampabay.com/archive/1994/10/27/hubble-finding-answers/
Astronomers are closing in on two of the most elusive statistics ever: the age and size of the universe. The Hubble Space Telescope has yielded observations that show it can establish a cosmic yardstick and determine how fast the universe is expanding, scientists say.

The results also renew a long-standing paradox in which the universe appears to be younger than some of its stars. That impossibility suggests scientists will have to revise theories of the cosmos.

One goal of the Hubble telescope is to make observations that would let scientists accurately measure the distances to faraway objects. The cosmic map now is like a roadmap without a distance scale; scientists know how various distances compare but don't know just what those distances are. With an accurate distance scale, scientists could determine how fast the universe is expanding. And that rate could be combined with some scientific assumptions to estimate the age of the universe.

A team of scientists trained the Hubble telescope on a distant galaxy called M100. In today's issue of the journal Nature, researchers report that the observations let them estimate with good precision that the M100 galaxy is some 56-million light-years away. A light-year is the distance light travels in one year, about 5.9-trillion miles.

The finding let the scientists make a rough estimate of the rate of expansion of the universe, a long-debated number called the Hubble constant. More observations are needed to reach a more precise estimate, they said.

The new estimate of the expansion rate implies that the universe is a relatively young 8-billion years old. The age becomes 12-billion years old if one assumes the universe contains far less matter than many theorists believe.
Some prior age estimates have ranged up to 16-billion years, said Barry Madore, an astronomy professor at the California Institute of Technology and a member of the research team.
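The two ages quoted above follow from the standard relation between expansion rate and age: a nearly empty universe has age 1/H0, while a matter-dominated flat universe has age (2/3)/H0. The back-of-the-envelope sketch below is my own illustration; the article does not quote a numerical Hubble constant, so the value H0 = 80 km/s/Mpc used here is an assumption.

```python
# Age of the universe from the Hubble constant (illustrative only).
# ASSUMPTION: H0 = 80 km/s/Mpc -- not a figure stated in this article.
KM_PER_MPC = 3.0857e19   # kilometres in a megaparsec
SEC_PER_YEAR = 3.156e7   # seconds in a year

H0 = 80.0                                   # km/s per Mpc
hubble_time_s = KM_PER_MPC / H0             # 1/H0 expressed in seconds
age_empty_gyr = hubble_time_s / SEC_PER_YEAR / 1e9  # low-density universe: t = 1/H0
age_flat_gyr = (2.0 / 3.0) * age_empty_gyr          # matter-dominated flat: t = (2/3)/H0

print(round(age_empty_gyr, 1), round(age_flat_gyr, 1))  # ~12.2 and ~8.1
```

These two numbers reproduce the "12-billion" and "8-billion" year figures in the article, and show why a faster expansion (larger H0) forces a younger universe, setting up the paradox with older star ages.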
http://mathhelpforum.com/calculus/35351-divergent-sequences.html
Math Help - divergent sequences

1. **divergent sequences**

Is it possible for both $\sum x_n$ and $\sum y_n$ to be divergent and $\sum x_n y_n$ to be convergent?

2. Hello,

One of the best examples is to take $x_n = \frac{1}{n}$ and $y_n = \frac{-1}{n}$. The two series $\sum x_n$ and $\sum y_n$ are known to be divergent, yet the series of termwise products, $\sum x_n y_n = -\sum \frac{1}{n^2}$, is convergent.
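A quick numerical illustration of why this example works (my own addition, not from the thread): the harmonic partial sums keep growing like ln N, while the partial sums of the products settle down near -π²/6.

```python
import math

N = 100_000
# partial sum of the divergent series sum x_n with x_n = 1/n
harmonic = sum(1.0 / n for n in range(1, N + 1))
# partial sum of sum x_n * y_n = sum -1/n^2, which converges to -pi^2/6
product = sum((1.0 / n) * (-1.0 / n) for n in range(1, N + 1))

print(harmonic)  # ~12.09, still growing without bound (like ln N)
print(product)   # ~-1.6449, close to -pi^2/6
```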
https://www.deepdyve.com/lp/ou_press/one-bit-compressive-sensing-of-dictionary-sparse-signals-k1l3bdmQC9
# One-bit compressive sensing of dictionary-sparse signals

## Abstract

One-bit compressive sensing has extended the scope of sparse recovery by showing that sparse signals can be accurately reconstructed even when their linear measurements are subject to the extreme quantization scenario of binary samples—only the sign of each linear measurement is maintained. Existing results in one-bit compressive sensing rely on the assumption that the signals of interest are sparse in some fixed orthonormal basis. However, in most practical applications, signals are sparse with respect to an overcomplete dictionary, rather than a basis. There has already been a surge of activity to obtain recovery guarantees under such a generalized sparsity model in the classical compressive sensing setting. Here, we extend the one-bit framework to this important model, providing a unified theory of one-bit compressive sensing under dictionary sparsity. Specifically, we analyze several different algorithms—based on convex programming and on hard thresholding—and show that, under natural assumptions on the sensing matrix (satisfied by Gaussian matrices), these algorithms can efficiently recover analysis–dictionary-sparse signals in the one-bit model.

## 1. Introduction

The basic insight of compressive sensing is that a small number of linear measurements can be used to reconstruct sparse signals. In traditional compressive sensing, we wish to reconstruct an $$s$$-sparse signal $${\bf x} \in \mathbb{R}^N$$ from linear measurements of the form

$$\label{meas} {\bf y} = {\bf A}{\bf x} \in \mathbb{R}^m \qquad\text{(or its corrupted version {\bf y} = {\bf A}{\bf x} + {\bf e})},$$ (1.1)

where $${\bf A}$$ is an $$m\times N$$ measurement matrix.
A significant body of work over the past decade has demonstrated that the $$s$$-sparse (or nearly $$s$$-sparse) signal $${\bf x}$$ can be accurately and efficiently recovered from its measurement vector $${\bf y} = {\bf A}{\bf x}$$ when $${\bf A}$$ has independent Gaussian entries, say, and when $$m \asymp s\log(N/s)$$ [1,12,15]. This basic model has been extended in several directions. Two important ones, which we focus on in this work, are (a) extending the set of signals to include the larger and important class of dictionary-sparse signals, and (b) considering highly quantized measurements as in one-bit compressive sensing. Both of these settings have important practical applications and have received much attention in the past few years. However, to the best of our knowledge, they have not been considered together before. In this work, we extend the theory of one-bit compressive sensing to dictionary-sparse signals. Below, we briefly review the background on these notions, set up notation and outline our contributions.

## 1.1 One-bit measurements

In practice, each entry $$y_i = \langle {\bf a}_i, {\bf x}\rangle$$ (where $${\bf a}_i$$ denotes the $$i$$th row of $${\bf A}$$) of the measurement vector in (1.1) needs to be quantized. That is, rather than observing $${\bf y}={\bf A}{\bf x}$$, one observes $${\bf y} = Q({\bf A}{\bf x})$$ instead, where $$Q: \mathbb{R}^m \rightarrow \mathscr{A}$$ denotes the quantizer that maps each entry of its input to a corresponding quantized value in an alphabet $$\mathscr{A}$$. The so-called one-bit compressive sensing [5] problem refers to the case when $$|\mathscr{A}| = 2$$, and one wishes to recover $${\bf x}$$ from its heavily quantized (one bit) measurements $${\bf y} = Q({\bf A}{\bf x})$$.
The simplest quantizer in the one-bit case uses the alphabet $$\mathscr{A} = \{-1, 1\}$$ and acts by taking the sign of each component as   $$\label{eq:quantized} y_i = Q(\langle {\bf a}_i, {\bf x}\rangle) = \mathrm{sgn}(\langle {\bf a}_i, {\bf x}\rangle),$$ (1.2) which we denote in shorthand by $${\bf y} = \mathrm{sgn}({\bf A}{\bf x})$$. Since the publication of [5] in 2008, several efficient methods, both iterative and optimization based, have been developed to recover the signal $${\bf x}$$ (up to normalization) from its one-bit measurements (see e.g. [17–19,24,25,30]). In particular, it is shown [19] that the direction of any $$s$$-sparse signal $${\bf x}$$ can be estimated by some $$\hat{{\bf x}}$$ produced from $${\bf y}$$ with accuracy   $$\left\| \frac{{\bf x}}{\|{\bf x}\|_2} - \frac{\hat{{\bf x}}}{\|\hat{{\bf x}}\|_2}\right\|_2 \leq \varepsilon$$ when the number of measurements is at least   $$m = {\it {\Omega}}\left(\frac{s \ln(N/s)}{\varepsilon} \right)\!.$$ Notice that with measurements of this form, we can only hope to recover the direction of the signal, not the magnitude. However, we can recover the entire signal if we allow for thresholded measurements of the form   $$\label{eq:quantizeddither} y_i = \mathrm{sgn}(\langle {{{\bf a}_i}}, {{{\bf x}}} \rangle - \tau_i).$$ (1.3) In practice, it is often feasible to obtain quantized measurements of this form, and they have been studied before. Existing works using measurements of the form (1.3) have also allowed for adaptive thresholds; that is, the $$\tau_i$$ can be chosen adaptively based on $$y_j$$ for $$j < i$$. The goal of those works was to improve the convergence rate, i.e. the dependence on $$\varepsilon$$ in the number of measurements $$m$$. 
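The two measurement models (1.2) and (1.3) can be simulated in a few lines of NumPy. This is an illustrative sketch, not code from the paper; the matrix sizes and the convention sgn(0) = +1 are choices made here.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50

A = rng.standard_normal((m, n))  # Gaussian sensing matrix with rows a_i
f = rng.standard_normal(n)       # signal to be measured

# (1.2): plain one-bit measurements y_i = sgn(<a_i, f>), with sgn(0) := +1
y = np.where(A @ f >= 0, 1, -1)

# (1.3): thresholded measurements y_i = sgn(<a_i, f> - tau_i)
tau = rng.standard_normal(m)
y_thresh = np.where(A @ f - tau >= 0, 1, -1)

# Rescaling f leaves the plain sign measurements unchanged, which is why
# model (1.2) can only ever reveal the direction of f, not its magnitude.
same = np.array_equal(y, np.where(A @ (3.0 * f) >= 0, 1, -1))
print(same)  # True
```

The thresholded measurements `y_thresh` do change under rescaling of `f`, which is the mechanism that makes magnitude recovery possible in model (1.3).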
It is known that a dependence of $${\it {\Omega}}(1/\varepsilon)$$ is necessary with non-adaptive measurements, but recent work on Sigma-Delta quantization [28] and other schemes [2,20] have shown how to break this barrier using measurements of the form (1.3) with adaptive thresholds. In this article, we neither focus on the decay rate (the dependence on $$\varepsilon$$) nor do we consider adaptive measurements. However, we do consider non-adaptive measurements both of the form (1.2) and (1.3). This allows us to provide results on reconstruction of the magnitude of signals, and the direction.

## 1.2 Dictionary sparsity

Although the classical setting assumes that the signal $${\bf x}$$ itself is sparse, most signals of interest are not immediately sparse. In the straightforward case, a signal may be instead sparse after some transform; for example, images are known to be sparse in the wavelet domain, sinusoidal signals in the Fourier domain, and so on [9]. Fortunately, the classical framework extends directly to this model, since the product of a Gaussian matrix and an orthonormal basis is still Gaussian. However, in many practical applications, the situation is not so straightforward, and the signals of interest are sparse, not in an orthonormal basis, but rather in a redundant (highly overcomplete) dictionary; this is known as dictionary sparsity. Signals in radar and sonar systems, for example, are sparsely represented in Gabor frames, which are highly overcomplete and far from orthonormal [13]. Images may be sparsely represented in curvelet frames [6,7], undecimated wavelet frames [29] and other frames, which by design are highly redundant. Such redundancy allows for sparser representations and a wider class of signal representations. Even in the Fourier domain, utilizing an oversampled DFT allows for much more realistic and practical signals to be represented.
For these reasons, recent research has extended the compressive sensing framework to the setting where the signals of interest are sparsified by overcomplete tight frames (see e.g. [8,14,16,27]). Throughout this article, we consider a dictionary $${\bf D} \in \mathbb{R}^{n \times N}$$, which is assumed to be a tight frame, in the sense that

$${\bf D} {\bf D}^* = {\bf I}_n.$$

To distinguish between the signal and its sparse representation, we write $${\bf f}\in\mathbb{R}^n$$ for the signal of interest and $${\bf f}={\bf D}{\bf x}$$, where $${\bf x}\in\mathbb{R}^N$$ is a sparse coefficient vector. We then acquire the samples of the form $${\bf y} = {\bf A}{\bf f} = {\bf A}{\bf D}{\bf x}$$ and attempt to recover the signal $${\bf f}$$. Note that, due to the redundancy of $${\bf D}$$, we do not hope to be able to recover a unique coefficient vector $${\bf x}$$. In other words, even when the measurement matrix $${\bf A}$$ is well suited for sparse recovery, the product $${\bf A}{\bf D}$$ may have highly correlated columns, making recovery of $${\bf x}$$ impossible.

With the introduction of a non-invertible sparsifying transform $${\bf D}$$, it becomes important to distinguish between two related but distinct notions of sparsity. Precisely, we say that

- $${\bf f}$$ is $$s$$-synthesis sparse if $${\bf f} = {\bf D} {\bf x}$$ for some $$s$$-sparse $${\bf x} \in \mathbb{R}^N$$;
- $${\bf f}$$ is $$s$$-analysis sparse if $${\bf D}^* {\bf f} \in \mathbb{R}^N$$ is $$s$$-sparse.

We note that analysis sparsity is a stronger assumption, because, assuming analysis sparsity, one can always take $${\bf x} = {\bf D}^* {\bf f}$$ in the synthesis sparsity model. See [11] for an introduction to the analysis-sparse model in compressive sensing (also called the analysis cosparse model). Instead of exact sparsity, it is often more realistic to study effective sparsity.
We call a coefficient vector $${\bf x} \in \mathbb{R}^N$$ effectively $$s$$-sparse if

$$\|{\bf x}\|_1 \le \sqrt{s} \|{\bf x}\|_2,$$

and we say that

- $${\bf f}$$ is effectively $$s$$-synthesis sparse if $${\bf f} = {\bf D} {\bf x}$$ for some effectively $$s$$-sparse $${\bf x} \in \mathbb{R}^N$$;
- $${\bf f}$$ is effectively $$s$$-analysis sparse if $${\bf D}^* {\bf f} \in \mathbb{R}^N$$ is effectively $$s$$-sparse.

We use the notation

\begin{align*} {\it {\Sigma}}^N_s & \mbox{ for the set of $s$-sparse coefficient vectors in $\mathbb{R}^N$, and} \\ {\it {\Sigma}}_s^{N,{\rm eff}} & \mbox{ for the set of effectively $s$-sparse coefficient vectors in $\mathbb{R}^N$.} \end{align*}

We also use the notation $$B_2^n$$ for the set of signals with $$\ell_2$$-norm at most $$1$$ (i.e. the unit ball in $$\ell_2^n$$) and $$S^{n-1}$$ for the set of signals with $$\ell_2$$-norm equal to $$1$$ (i.e. the unit sphere in $$\ell_2^n$$). It is now well known that, if $${\bf D}$$ is a tight frame and $${\bf A}$$ satisfies analogous conditions to those in the classical setting (e.g. has independent Gaussian entries), then a signal $${\bf f}$$ which is (effectively) analysis- or synthesis sparse can be accurately recovered from traditional compressive sensing measurements $${\bf y} = {\bf A} {\bf f} = {\bf A}{\bf D}{\bf x}$$ (see e.g. [4,8,10,14,16,22,23,27]).

## 1.3 One-bit measurements with dictionaries: our setup

In this article, we study one-bit compressive sensing for dictionary-sparse signals. Precisely, our aim is to recover signals $${\bf f} \in \mathbb{R}^n$$ from the binary measurements

$$y_i = \mathrm{sgn} \langle {\bf a}_i, {\bf f} \rangle, \qquad i=1,\ldots,m,$$

or

$$y_i = \mathrm{sgn} \left(\langle {\bf a}_i, {\bf f} \rangle - \tau_i \right), \qquad i = 1,\ldots,m,$$

when these signals are sparse with respect to a dictionary $${\bf D}$$. As in Section 1.2, there are several ways to model signals that are sparse with respect to $${\bf D}$$.
In this work, two different signal classes are considered. For the first one, which is more general, our results are based on convex programming. For the second one, which is more restrictive, we can obtain results using a computationally simpler algorithm based on hard thresholding.

The first class consists of signals $${\bf f} \in ({\bf D}^*)^{-1} {\it {\Sigma}}_s^{N,\rm{eff}}$$ that are effectively $$s$$-analysis sparse, i.e. they satisfy

$$\label{Assumption} \|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} \|{\bf D}^* {\bf f}\|_2.$$ (1.4)

This occurs, of course, when $${\bf D}^* {\bf f}$$ is genuinely sparse (analysis sparsity) and this is realistic if we are working, e.g. with piecewise-constant images, since they are sparse after application of the total variation operator. We consider effectively sparse signals since genuine analysis sparsity is unrealistic when $${\bf D}$$ has columns in general position, as it would imply that $${\bf f}$$ is orthogonal to too many columns of $${\bf D}$$.

The second class consists of signals $${\bf f} \in {\bf D}({\it {\Sigma}}_s^N) \cap ({\bf D}^*)^{-1} {\it {\Sigma}}_{\kappa s}^{N, \rm{eff}}$$ that are both $$s$$-synthesis sparse and $$\kappa s$$-analysis sparse for some $$\kappa \ge 1$$. This will occur as soon as the signals are $$s$$-synthesis sparse, provided we utilize suitable dictionaries $${\bf D} \in \mathbb{R}^{n \times N}$$. One could take, for instance, the matrix of an equiangular tight frame when $$N = n + k$$, $$k = {\rm constant}$$. Other examples of suitable dictionaries found in [21] include harmonic frames again with $$N = n + k$$, $$k = {\rm constant}$$, as well as Fourier and Haar frames with constant redundancy factor $$N/n$$. Figure 1 summarizes the relationship between the various domains we deal with.

**Fig. 1.** The coefficient, signal and measurement domains.
1.4 Contributions

Our main results demonstrate that one-bit compressive sensing is viable even when the sparsifying transform is an overcomplete dictionary. As outlined in Section 1.1, we consider both the challenge of recovering the direction $${\bf f}/\|{\bf f}\|_2$$ of a signal $${\bf f}$$, and the challenge of recovering the entire signal (direction and magnitude). Using measurements of the form $$y_i = \mathrm{sgn}\langle {\bf a}_i, {\bf f} \rangle$$, we can recover the direction but not the magnitude; using measurements of the form $$y_i = \mathrm{sgn}\left(\langle {\bf a}_i, {\bf f} \rangle - \tau_i \right)$$, we may recover both. In (one-bit) compressive sensing, two standard families of algorithms are (a) algorithms based on convex programming, and (b) algorithms based on thresholding. In this article, we analyze algorithms from both classes. One reason to study multiple algorithms is to give a more complete landscape of this problem. Another reason is that the different algorithms come with different trade-offs (between computational complexity and the strength of assumptions required), and it is valuable to explore this space of trade-offs.

1.4.1 Recovering the direction

First, we show that the direction of a dictionary-sparse signal can be estimated from one-bit measurements of the type $$\mathrm{sgn}({\bf A} {\bf f})$$. We consider two algorithms: our first approach is based on linear programming, and our second is based on hard thresholding. The linear programming approach is more computationally demanding, but applies to a broader class of signals. In Section 3, we prove that both of these approaches are effective, provided the sensing matrix $${\bf A}$$ satisfies certain properties. In Section 2, we state that these properties are in fact satisfied by a matrix $${\bf A}$$ populated with independent Gaussian entries. We combine all of these results to prove the statement below.
As noted above, the different algorithms require different definitions of ‘dictionary sparsity’. In what follows, $$\gamma, C, c$$ refer to absolute numerical constants. Theorem 1 (Informal statement of direction recovery) Let $$\varepsilon \,{>}\, 0$$, let $$m \,{\ge}\, C \varepsilon^{-7} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Then, with failure probability at most $$\gamma \exp(-c \varepsilon^2 m)$$, any dictionary-sparse$$^2$$ signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f})$$ can be approximated by the output $$\widehat{{\bf f}}$$ of an efficient algorithm with error   $$\left\| \frac{{\bf f}}{\|{\bf f}\|_2} - \frac{\widehat{{\bf f}}}{\|\widehat{{\bf f}}\|_2} \right\|_2 \le \varepsilon.$$

1.4.2 Recovering the whole signal

By using one-bit measurements of the form $$\mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$, where $$\tau_1,\ldots,\tau_m$$ are properly normalized Gaussian random thresholds, we are able to recover not just the direction, but also the magnitude of a dictionary-sparse signal $${\bf f}$$. We consider three algorithms: our first approach is based on linear programming, our second approach on second-order cone programming and our third approach on hard thresholding. Again, there are different trade-offs to the different algorithms. As above, the approach based on hard thresholding is more efficient, whereas the approaches based on convex programming apply to a broader signal class. There is also a trade-off between linear programming and second-order cone programming: the second-order cone program requires knowledge of $$\|{\bf f}\|_2,$$ whereas the linear program does not (although it does require a loose bound), but the second-order cone programming approach applies to a slightly larger class of signals.
We show in Section 4 that all three of these algorithms are effective when the sensing matrix $${\bf A}$$ is populated with independent Gaussian entries, and when the thresholds $$\tau_i$$ are also independent Gaussian random variables. We combine the results of Section 4 in the following theorem. Theorem 2 (Informal statement of signal estimation) Let $$\varepsilon, r, \sigma > 0$$, let $$m \ge C \varepsilon^{-9} s \ln(eN/s)$$, and let $${\bf A} \in \mathbb{R}^{m \times n}$$ and $$\boldsymbol{\tau} \in \mathbb{R}^m$$ be populated by independent mean-zero normal random variables with variance $$1$$ and $$\sigma^2$$, respectively. Then, with failure probability at most $$\gamma \exp(-c \varepsilon^2 m)$$, any dictionary-sparse$$^2$$ signal $${\bf f} \in \mathbb{R}^n$$ with $$\|{\bf f}\|_2 \le r$$ observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ is approximated by the output $$\widehat{{\bf f}}$$ of an efficient algorithm with error   $$\left\| {\bf f} - \widehat{{\bf f}} \right\|_2 \le \varepsilon r.$$ We have not spelled out the dependence of the number of measurements and the failure probability on the parameters $$r$$ and $$\sigma$$: as long as they are roughly the same order of magnitude, the dependence is absorbed in the constants $$C$$ and $$c$$ (see Section 4 for precise statements). As outlined earlier, an estimate of $$r$$ is required to implement the second-order cone program, but the other two algorithms do not require such an estimate.

1.5 Discussion and future directions

The purpose of this work is to demonstrate that techniques from one-bit compressive sensing can be effective for the recovery of dictionary-sparse signals, and we propose several algorithms to accomplish this for various notions of dictionary sparsity. Still, some interesting future directions remain. First, we do not believe that the dependence on $$\varepsilon$$ above is optimal.
We do believe instead that a logarithmic dependence on $$\varepsilon$$ for the number of measurements (or equivalently an exponential decay in the oversampling factor $$\lambda = m / (s \ln(eN/s))$$ for the recovery error) is possible by choosing the thresholds $$\tau_1,\ldots,\tau_m$$ adaptively. This would be achieved by adjusting the method of [2], but with the strong proviso of exact sparsity. Secondly, it is worth asking to what extent the trade-offs between the different algorithms reflect reality. In particular, is it only an artifact of the proof that the simpler algorithm based on hard thresholding applies to a narrower class of signals?

1.6 Organization

The remainder of the article is organized as follows. In Section 2, we outline some technical tools upon which our results rely, namely some properties of Gaussian random matrices. In Section 3, we consider recovery of the direction $${\bf f}/\|{\bf f}\|_2$$ only and we propose two algorithms to achieve it. In Section 4, we present three algorithms for the recovery of the entire signal $${\bf f}$$. Finally, in Section 5, we provide proofs for the results outlined in Section 2.

2. Technical ingredients

In this section, we highlight the theoretical properties upon which our results rely. Their proofs are deferred to Section 5 so that the reader does not lose track of our objectives. The first property we put forward is an adaptation to the dictionary case of the so-called sign product embedding property (the term was coined in [18], but the result originally appeared in [25]). Theorem 3 ($${\bf D}$$-SPEP) Let $$\delta > 0$$, let $$m \ge C \delta^{-7} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables.
Then, with failure probability at most $$\gamma \exp(-c \delta^2 m)$$, the renormalized matrix $${\bf A}':= (\sqrt{2\pi}/(2m)) {\bf A}$$ satisfies the $$s$$th-order sign product embedding property adapted to $${\bf D} \in \mathbb{R}^{n \times N}$$ with constant $$\delta$$ — $${\bf D}$$-SPEP$$(s,\delta)$$ for short—i.e.   $$\label{SPEP} \left| \langle {\bf A}' {\bf f}, \mathrm{sgn}({\bf A}' {\bf g}) \rangle - \langle {\bf f}, {\bf g} \rangle \right| \le \delta$$ (2.1) holds for all $${\bf f}, {\bf g} \in {\bf D}({\it {\Sigma}}^N_s) \cap S^{n-1}$$. Remark 1 The power $$\delta^{-7}$$ is unlikely to be optimal. At least in the non-dictionary case, i.e. when $${\bf D} = {\bf I}_n$$, it can be reduced to $$\delta^{-2}$$; see [3]. As an immediate consequence of $${\bf D}$$-SPEP, setting $${\bf g} = {\bf f}$$ in (2.1) allows one to deduce a variation of the classical restricted isometry property adapted to $${\bf D}$$, where the inner norm becomes the $$\ell_1$$-norm (we mention in passing that this variation could also be deduced by other means). Corollary 1 ($${\bf D}$$-RIP$$_1$$) Let $$\delta \,{>}\, 0$$, let $$m \,{\ge}\, C \delta^{-7} s \,{\ln}\,(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Then, with failure probability at most $$\gamma \exp(-c \delta^2 m)$$, the renormalized matrix $${\bf A}':= (\sqrt{2\pi}/(2m)) {\bf A}$$ satisfies the $$s$$th-order $$\ell_1$$-restricted isometry property adapted to $${\bf D} \in \mathbb{R}^{n \times N}$$ with constant $$\delta$$ — $${\bf D}$$-RIP$$_{1}(s,\delta)$$ for short—i.e.   $$(1-\delta) \| {\bf f}\|_2 \le \| {\bf A}' {\bf f} \|_1 \le (1+\delta) \|{\bf f}\|_2$$ (2.2) holds for all $${\bf f} \in {\bf D}({\it {\Sigma}}_s^N)$$. The next property we put forward is an adaptation of the tessellation of the ‘effectively sparse sphere’ (see [26]) to the dictionary case.
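Before turning to tessellation, the sign product embedding can be probed empirically. The sketch below (our own; sizes and the random tight frame are arbitrary) uses the normalization $\sqrt{2\pi}/(2m) = \sqrt{\pi/2}/m$, which makes the expectation exact: for Gaussian rows and unit-norm $f, g$, one has $\mathbb{E}[\langle a, f\rangle\,\mathrm{sgn}\langle a, g\rangle] = \sqrt{2/\pi}\,\langle f, g\rangle$.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, N, s = 20000, 50, 55, 5

# Illustrative tight frame: first n rows of a random orthogonal matrix.
D = np.linalg.qr(rng.standard_normal((N, N)))[0][:n, :]

def unit_synthesis_sparse(rng, D, N, s):
    # Draw a random s-synthesis-sparse signal on the unit sphere.
    x = np.zeros(N)
    x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
    f = D @ x
    return f / np.linalg.norm(f)

f = unit_synthesis_sparse(rng, D, N, s)
g = unit_synthesis_sparse(rng, D, N, s)

A = rng.standard_normal((m, n))
A_prime = (np.sqrt(2 * np.pi) / (2 * m)) * A  # unbiased renormalization

# Empirical check of (2.1): <A' f, sgn(A' g)> should be close to <f, g>.
gap = abs((A_prime @ f) @ np.sign(A_prime @ g) - f @ g)
print(gap)  # small for large m
```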
In what follows, given a (non-invertible) matrix $${\bf M}$$ and a set $$K$$, we denote by $${\bf M}^{-1} (K)$$ the preimage of $$K$$ with respect to $${\bf M}$$. Theorem 4 (Tessellation) Let $$\varepsilon > 0$$, let $$m \ge C \varepsilon^{-6} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Then, with failure probability at most $$\gamma \exp(-c \varepsilon^2 m)$$, the rows $${\bf a}_1,\ldots,{\bf a}_m \in \mathbb{R}^n$$ of $${\bf A}$$ $$\varepsilon$$-tessellate the effectively $$s$$-analysis-sparse sphere—we write that $${\bf A}$$ satisfies $${\bf D}$$-TES$$(s,\varepsilon)$$ for short—i.e.   $$\label{Tes} [{\bf f},{\bf g} \in ({\bf D}^*)^{-1}({\it {\Sigma}}_{s}^{N,{\rm eff}}) \cap S^{n-1} : \; \mathrm{sgn} \langle {\bf a}_i, {\bf f} \rangle = \mathrm{sgn} \langle {\bf a}_i, {\bf g} \rangle \mbox{ for all } i =1,\ldots,m] \Longrightarrow [\|{\bf f} - {\bf g}\|_2 \le \varepsilon].$$ (2.3)

3. Signal estimation: direction only

Throughout this section, given a measurement matrix $${\bf A} \in \mathbb{R}^{m \times n}$$ with rows $${\bf a}_1,\ldots,{\bf a}_m \in \mathbb{R}^n$$, the signals $${\bf f} \in \mathbb{R}^n$$ are acquired via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f}) \in \{-1,+1\}^m$$, i.e.   $$y_i = \mathrm{sgn} \langle {\bf a}_i, {\bf f} \rangle, \qquad i = 1,\ldots,m.$$ Under this model, all $$c {\bf f}$$ with $$c>0$$ produce the same one-bit measurements, so one can only hope to recover the direction of $${\bf f}$$. We present two methods to do so, one based on linear programming and the other one based on hard thresholding.
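The loss of magnitude information is mechanical to verify numerically: multiplying the signal by any positive constant leaves the one-bit measurements unchanged (a minimal sketch with arbitrary sizes of our own choosing).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50, 20
A = rng.standard_normal((m, n))
f = rng.standard_normal(n)

y = np.sign(A @ f)
for c in (0.5, 3.0, 100.0):
    # sgn(A (c f)) = sgn(A f) for every c > 0: only the direction is observable.
    assert np.array_equal(np.sign(A @ (c * f)), y)
```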
3.1 Linear programming

Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f})$$, the optimization scheme we consider here consists in outputting the signal $${\bf f}_{\rm lp}$$ that is a solution of   $$\label{LPforDir} \underset{{{\bf h} \in \mathbb{R}^n}}{\rm minimize}\, \| {\bf D}^* {\bf h}\|_1 \qquad \mbox{subject to} \quad \mathrm{sgn}({\bf A} {\bf h}) = {\bf y}, \quad \|{\bf A} {\bf h}\|_1 = 1.$$ (3.1) This is in fact a linear program (and thus may be solved efficiently), since the condition $$\mathrm{sgn}({\bf A} {\bf h}) = {\bf y}$$ reads   $$y_i ({\bf A} {\bf h})_i \ge 0 \qquad \mbox{for all } i = 1,\ldots, m,$$ and, under this constraint, the condition $$\|{\bf A} {\bf h}\|_1 = 1$$ reads   $$\sum_{i=1}^m y_i ({\bf A} {\bf h})_i = 1.$$ Theorem 5 If $${\bf A} \,{\in}\, \mathbb{R}^{m \times n}$$ satisfies both $${\bf D}$$-TES$$(36s,\varepsilon)$$ and $${\bf D}$$-RIP$$_1(25s,1/5)$$, then any effectively $$s$$-analysis-sparse signal $${\bf f} \in ({\bf D}^*)^{-1}{\it {\Sigma}}_s^{N,{\rm eff}}$$ observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f})$$ is directionally approximated by the output $${\bf f}_{\rm lp}$$ of the linear program (3.1) with error   $$\left\| \frac{{\bf f}}{\|{\bf f}\|_2} - \frac{{\bf f}_{\rm lp}}{\|{\bf f}_{\rm lp}\|_2} \right\|_2 \le \varepsilon.$$ Proof. The main step is to show that $${\bf f}_{\rm lp}$$ is effectively $$36s$$-analysis sparse when $${\bf D}$$-RIP$$_1(t,\delta)$$ holds with $$t= 25s$$ and $$\delta=1/5$$. Then, since both $${\bf f}/\|{\bf f}\|_2$$ and $${\bf f}_{\rm lp} / \|{\bf f}_{\rm lp}\|_2$$ belong to $$({\bf D}^*)^{-1}{\it {\Sigma}}_{36 s}^{N,{\rm eff}} \cap S^{n-1}$$ and have the same sign observations, $${\bf D}$$-TES$$(36s,\varepsilon)$$ implies the desired conclusion. To prove the effective analysis sparsity of $${\bf f}_{\rm lp}$$, we first estimate $$\|{\bf A} {\bf f}\|_1$$ from below.
For this purpose, let $$T_0$$ denote an index set of $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, $$T_1$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, $$T_2$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, etc. We have   \begin{align*} \|{\bf A} {\bf f} \|_1 & = \|{\bf A} {\bf D} {\bf D}^* {\bf f}\|_1 = \left\| {\bf A} {\bf D} \left(\sum_{k \ge 0} ({\bf D}^*{\bf f})_{T_k} \right) \right\|_1 \ge \|{\bf A} {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right)\!\|_1 - \sum_{k \ge 1} \|{\bf A} {\bf D} \left(({\bf D}^* {\bf f})_{T_k} \right)\!\|_1\\ & \ge (1-\delta) \|{\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right)\!\|_2 - \sum_{k \ge 1} (1+\delta) \|{\bf D} \left(({\bf D}^* {\bf f})_{T_k} \right)\!\|_2, \end{align*} where the last step used $${\bf D}$$-RIP$$_1(t,\delta)$$. We notice that, for $$k \ge 1$$,   $$\|{\bf D} \left(({\bf D}^* {\bf f})_{T_k} \right)\!\|_2 \le \| ({\bf D}^* {\bf f})_{T_k}\! \|_2 \le \frac{1}{\sqrt{t}} \| ({\bf D}^* {\bf f})_{T_{k-1}}\!\|_1,$$ from where it follows that   $$\label{LowerAf} \|{\bf A} {\bf f}\|_1 \ge (1-\delta) \|{\bf D} \left(({\bf D}^* {\bf f})_{T_0}\right)\!\|_2 - \frac{1+\delta}{\sqrt{t}} \|{\bf D}^* {\bf f} \|_1.$$ (3.2) In addition, we observe that   \begin{align*} \|{\bf D}^* {\bf f} \|_2 & = \|{\bf f}\|_2 = \|{\bf D} {\bf D}^* {\bf f}\|_2 = \left\| {\bf D} \left(\sum_{k \ge 0} ({\bf D}^* {\bf f})_{T_k} \right) \right\|_2 \le \left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right) \right\|_2 + \sum_{k \ge 1} \left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_k} \right) \right\|_2\\ & \le \left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right) \right\|_2 + \frac{1}{\sqrt{t}} \|{\bf D}^* {\bf f} \|_1. 
\end{align*} In view of the effective sparsity of $${\bf D}^* {\bf f}$$, we obtain   $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} \|{\bf D}^* {\bf f}\|_2 \le \sqrt{s}\left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right) \right\|_2 + \sqrt{s/t} \|{\bf D}^* {\bf f} \|_1,$$ hence   $$\label{LowerDD*T0} \left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right) \right\|_2 \ge \frac{1- \sqrt{s/t}}{\sqrt{s}} \|{\bf D}^* {\bf f} \|_1.$$ (3.3) Substituting (3.3) in (3.2) yields   $$\label{LowerAf2} \|{\bf A} {\bf f}\|_1 \ge \left((1-\delta)(1-\sqrt{s/t}) - (1+\delta)(\sqrt{s/t}) \right) \frac{1}{\sqrt{s}} \|{\bf D}^* {\bf f}\|_1 = \frac{2/5}{\sqrt{s}} \|{\bf D}^* {\bf f}\|_1,$$ (3.4) where we have used the values $$t = 25s$$ and $$\delta=1/5$$. This lower estimate for $$\|{\bf A} {\bf f} \|_1$$, combined with the minimality property of $${\bf f}_{\rm lp}$$, allows us to derive that   $$\label{UpperD*fhat} \|{\bf D}^* {\bf f}_{\rm lp} \|_1 \le \|{\bf D}^*({\bf f}/ \|{\bf A} {\bf f}\|_1)\|_1 = \frac{\|{\bf D}^* {\bf f}\|_1}{\|{\bf A} {\bf f} \|_1} \le (5/2) \sqrt{s}.$$ (3.5) Next, with $$\widehat{T}_0$$ denoting an index set of $$t$$ largest absolute entries of $${\bf D}^* {\bf f}_{\rm lp}$$, $$\widehat{T}_1$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}_{\rm lp}$$, $$\widehat{T}_2$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}_{\rm lp}$$, etc., we can write   \begin{align*} 1 & = \|{\bf A} {\bf f}_{\rm lp} \|_1 = \|{\bf A} {\bf D} {\bf D}^* {\bf f}_{\rm lp} \|_1 = \left\| {\bf A} {\bf D} \left(\sum_{k \ge 0} ({\bf D}^* {\bf f}_{\rm lp})_{\widehat{T}_k} \right) \right\|_1 \le \sum_{k \ge 0} \left\| {\bf A} {\bf D} \left(({\bf D}^* {\bf f}_{\rm lp})_{\widehat{T}_k} \right) \right\|_1\\ & \le \sum_{k \ge 0} (1+\delta) \left\| {\bf D} \left(({\bf D}^* {\bf f}_{\rm lp})_{\widehat{T}_k} \right) \right\|_2 = (1+\delta) \left[\!\left\| ({\bf D}^* {\bf f}_{\rm lp})_{\widehat{T}_0} \right\|_2 + \sum_{k \ge 1} \!\left\| ({\bf D}^* 
{\bf f}_{\rm lp})_{\widehat{T}_k} \right\|_2 \right]\\ & \le (1+\delta) \left[\|{\bf D}^* {\bf f}_{\rm lp} \|_2 + \frac{1}{\sqrt{t}} \|{\bf D}^* {\bf f}_{\rm lp}\|_1 \right] \le (1+\delta) \left[\|{\bf D}^* {\bf f}_{\rm lp} \|_2 + (5/2)\sqrt{s/t} \right]. \end{align*} This chain of inequalities shows that   $$\label{LowerD*fhat} \|{\bf D}^* {\bf f}_{\rm lp} \|_2 \ge \frac{1-(5/2)\sqrt{s/t}}{1+\delta} = \frac{5}{12}.$$ (3.6) Combining (3.5) and (3.6), we obtain   $$\|{\bf D}^* {\bf f}_{\rm lp} \|_1 \le 6 \sqrt{s} \|{\bf D}^* {\bf f}_{\rm lp} \|_2.$$ In other words, $${\bf D}^* {\bf f}_{\rm lp}$$ is effectively $$36s$$-sparse, which is what was needed to conclude the proof. □ Remark 2 We point out that if $${\bf f}$$ were genuinely, instead of effectively, $$s$$-analysis sparse, then a lower bound of the type (3.4) would be immediate from the $${\bf D}$$-RIP$$_1$$. We also point out that our method of proving that the linear program outputs an effectively analysis-sparse signal is new even in the case $${\bf D} = {\bf I}_n$$. In fact, it makes it possible to remove a logarithmic factor from the number of measurements in this ‘non-dictionary’ case, too (compare with [24]). Furthermore, it allows for an analysis of the linear program (3.1) based only on deterministic conditions that the matrix $${\bf A}$$ may satisfy.

3.2 Hard thresholding

Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f})$$, the hard thresholding scheme we consider here consists in constructing a signal $${\bf f}_{\rm ht} \in \mathbb{R}^n$$ as   $$\label{HTforDir} {\bf f}_{\rm ht} = {\bf D} {\bf z}, \qquad \mbox{where } {\bf z} := H_t({\bf D}^* {\bf A}^* {\bf y}).$$ (3.7) Our recovery result holds for $$s$$-synthesis-sparse signals that are also effectively $$\kappa s$$-analysis sparse for some $$\kappa \ge 1$$ (we discussed in Section 1 some choices of dictionaries $${\bf D}$$ making this happen).
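The scheme (3.7) is just one matrix-vector product followed by hard thresholding, which makes it easy to sketch in numpy. In the sketch below, the dimensions, the QR-based tight frame and the parameter choices are our own illustrative assumptions; with enough Gaussian measurements, the normalized output lands close in direction to the true signal.

```python
import numpy as np

rng = np.random.default_rng(4)

def hard_threshold(v, t):
    # H_t: keep the t largest-magnitude entries of v, zero out the rest.
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-t:]
    out[keep] = v[keep]
    return out

def direction_by_hard_thresholding(y, A, D, t):
    # Scheme (3.7): f_ht = D z with z = H_t(D^* A^* y), returned with unit norm.
    z = hard_threshold(D.T @ (A.T @ y), t)
    f_ht = D @ z
    return f_ht / np.linalg.norm(f_ht)

m, n, N, s, t = 5000, 50, 55, 5, 25
D = np.linalg.qr(rng.standard_normal((N, N)))[0][:n, :]  # illustrative tight frame

x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
f = D @ x
f /= np.linalg.norm(f)          # unit-norm s-synthesis-sparse signal

A = rng.standard_normal((m, n))
f_hat = direction_by_hard_thresholding(np.sign(A @ f), A, D, t)
err = np.linalg.norm(f - f_hat)
print(err)                      # shrinks as m grows
```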
Theorem 6 If $${\bf A} \in \mathbb{R}^{m \times n}$$ satisfies $${\bf D}$$-SPEP$$(s+t,\varepsilon/8)$$, $$t = \lceil 16 \varepsilon^{-2} \kappa s \rceil$$, then any $$s$$-synthesis-sparse signal $${\bf f} \in {\bf D}({\it {\Sigma}}_s^N)$$ with $${\bf D}^* {\bf f} \in {\it {\Sigma}}_{\kappa s}^{N,{\rm eff}}$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f})$$ is directionally approximated by the output $${\bf f}_{\rm ht}$$ of the hard thresholding (3.7) with error   $$\left\| \frac{{\bf f}}{\|{\bf f}\|_2} - \frac{{\bf f}_{\rm ht}}{\|{\bf f}_{\rm ht}\|_2} \right\|_2 \le \varepsilon.$$ Proof. We assume without loss of generality that $$\|{\bf f}\|_2 = 1$$. Let $$T=T_0$$ denote an index set of $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, $$T_1$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, $$T_2$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, etc. We start by noticing that $${\bf z}$$ is a better $$t$$-sparse approximation to $${\bf D}^* {\bf A}^* {\bf y} = {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f})$$ than $$[{\bf D}^* {\bf f}]_T$$, so we can write   $$\| {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) - {\bf z} \|_2^2 \le \|{\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) - [{\bf D}^* {\bf f}]_T \|_2^2,$$ i.e.   
$$\| ({\bf D}^* {\bf f} - {\bf z}) - ({\bf D}^* {\bf f} - {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f})) \|_2^2 \le \| ({\bf D}^* {\bf f} - {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f})) - [{\bf D}^* {\bf f}]_{\overline{T}} \|_2^2.$$ Expanding the squares and rearranging gives   \begin{align} \label{Term1} \|{\bf D}^* {\bf f} - {\bf z} \|_2^2 & \le 2 \langle {\bf D}^* {\bf f} - {\bf z}, {\bf D}^* {\bf f} - {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle \\ \end{align} (3.8)  \begin{align} \label{Term2} & - 2 \langle [{\bf D}^* {\bf f}]_{\overline{T}} , {\bf D}^* {\bf f} - {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle \\ \end{align} (3.9)  \begin{align} \label{Term3} & + \| [{\bf D}^* {\bf f}]_{\overline{T}} \|_2^2. \end{align} (3.10) To bound (3.10), we invoke [15, Theorem 2.5] and the effective analysis sparsity of $${\bf f}$$ to derive   $$\| [{\bf D}^* {\bf f}]_{\overline{T}} \|_2^2 \le \frac{1}{4t} \| {\bf D}^* {\bf f} \|_1^2 \le \frac{\kappa s}{4t} \| {\bf D}^* {\bf f} \|_2^2 = \frac{\kappa s}{4t} \|{\bf f} \|_2^2 = \frac{\kappa s}{4t}.$$ To bound (3.8) in absolute value, we notice that it can be written as   \begin{align*} 2 | \langle {\bf D} {\bf D}^* {\bf f} - {\bf D} {\bf z}, &{\bf f} - {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle | = 2 | \langle {\bf f} - {\bf f}_{\rm ht}, {\bf f} - {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle | \\ & = 2 | \langle {\bf f} - {\bf f}_{\rm ht}, {\bf f} \rangle - \langle {\bf A} ({\bf f} - {\bf f}_{\rm ht}), \mathrm{sgn}({\bf A} {\bf f}) \rangle | \le 2 \varepsilon' \|{\bf f} - {\bf f}_{\rm ht} \|_2, \end{align*} where the last step followed from $${\bf D}$$-SPEP$$(s+t,\varepsilon')$$, $$\varepsilon' := \varepsilon /8$$. 
Finally, (3.9) can be bounded in absolute value by   \begin{align*} 2 & \sum_{k \ge 1} | \langle [{\bf D}^* {\bf f}]_{T_k}, {\bf D}^*({\bf f} - {\bf A}^* \mathrm{sgn}({\bf A} {\bf f})) \rangle | = 2 \sum_{k \ge 1} | \langle {\bf D}([{\bf D}^* {\bf f}]_{T_k}), {\bf f} - {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle | \\ & = 2 \sum_{k \ge 1} | \langle {\bf D}([{\bf D}^* {\bf f}]_{T_k}), {\bf f} \rangle - \langle {\bf A} ({\bf D}([{\bf D}^* {\bf f}]_{T_k})), \mathrm{sgn}({\bf A} {\bf f}) \rangle | \le 2 \sum_{k \ge 1} \varepsilon' \| {\bf D}([{\bf D}^* {\bf f}]_{T_k}) \|_2\\ & \le 2 \varepsilon' \sum_{k \ge 1} \| [{\bf D}^* {\bf f}]_{T_k} \|_2 \le 2 \varepsilon' \sum_{k \ge 1} \frac{\| [{\bf D}^* {\bf f}]_{T_{k-1}} \|_1}{\sqrt{t}} \le 2 \varepsilon' \frac{\|{\bf D}^* {\bf f}\|_1}{\sqrt{t}} \le 2 \varepsilon' \frac{\sqrt{\kappa s} \|{\bf D}^* {\bf f}\|_2}{\sqrt{t}} = 2 \varepsilon' \sqrt{\frac{\kappa s}{t}}. \end{align*} Putting everything together, we obtain   $$\|{\bf D}^* {\bf f} - {\bf z} \|_2^2 \le 2 \varepsilon' \|{\bf f} - {\bf f}_{\rm ht}\|_2 + 2 \varepsilon' \sqrt{\frac{\kappa s}{t}} + \frac{\kappa s}{4t}.$$ In view of $$\|{\bf f} - {\bf f}_{\rm ht}\|_2 = \|{\bf D} ({\bf D}^* {\bf f} - {\bf z}) \|_2 \le \|{\bf D}^* {\bf f} - {\bf z}\|_2$$, it follows that   $$\|{\bf f} - {\bf f}_{\rm ht}\|_2^2 \le 2 \varepsilon' \|{\bf f} - {\bf f}_{\rm ht}\|_2 + 2 \varepsilon' \sqrt{\frac{\kappa s}{t}} + \frac{\kappa s}{4t}, \quad \mbox{i.e.} \; (\|{\bf f} - {\bf f}_{\rm ht}\|_2 - \varepsilon')^2 \le {\varepsilon'}^2 + 2 \varepsilon' \sqrt{\frac{\kappa s}{t}} + \frac{\kappa s}{4t} \le \left(\varepsilon' \hspace{-0.5mm}+\hspace{-0.5mm} \sqrt{\frac{\kappa s}{t}} \right)^2 \hspace{-1mm}.$$ This implies that   $$\|{\bf f} - {\bf f}_{\rm ht}\|_2 \le 2 \varepsilon' + \sqrt{\frac{\kappa s}{t}}.$$ Finally, since $${\bf f}_{\rm ht}/\|{\bf f}_{\rm ht}\|_2$$ is the best $$\ell_2$$-normalized approximation to $${\bf f}_{\rm ht}$$, we conclude that   $$\left\| {\bf f} - \frac{{\bf 
f}_{\rm ht}}{\|{\bf f}_{\rm ht}\|_2} \right\|_2 \le \|{\bf f} - {\bf f}_{\rm ht}\|_2 + \left\| {\bf f}_{\rm ht} - \frac{{\bf f}_{\rm ht}}{\|{\bf f}_{\rm ht}\|_2} \right\|_2 \le 2 \|{\bf f} - {\bf f}_{\rm ht}\|_2 \le 4 \varepsilon' + 2 \sqrt{\frac{\kappa s}{t}}.$$ The announced result follows from our choices of $$t$$ and $$\varepsilon'$$. □

4. Signal estimation: direction and magnitude

Since information of the type $$y_i = \mathrm{sgn} \langle {\bf a}_i,{\bf f} \rangle$$ can at best allow one to estimate the direction of a signal $${\bf f} \in \mathbb{R}^n$$, we consider in this section information of the type   $$y_i = \mathrm{sgn}(\langle {\bf a}_i, {\bf f} \rangle - \tau_i), \qquad i = 1,\ldots,m,$$ for some thresholds $$\tau_1,\ldots,\tau_m$$ introduced before quantization. In the rest of this section, we give three methods for recovering $${\bf f}$$ in its entirety. The first one is based on linear programming, the second one on second-order cone programming and the last one on hard thresholding. We are going to show that, using these algorithms, one can estimate both the direction and the magnitude of a dictionary-sparse signal $${\bf f} \in \mathbb{R}^n$$ given a prior magnitude bound such as $$\| {\bf f} \|_2 \le r$$. We simply rely on the previous results by ‘lifting’ the situation from $$\mathbb{R}^n$$ to $$\mathbb{R}^{n+1}$$, in view of the observation that $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f} - \boldsymbol{\tau})$$ can be interpreted as $${\bf y} = \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf f}})$$ for a ‘lifted’ measurement matrix $$\widetilde{{\bf A}} \in \mathbb{R}^{m \times (n+1)}$$ and a ‘lifted’ signal $$\widetilde{{\bf f}} \in \mathbb{R}^{n+1}$$ defined in (4.3) below. The following lemma will be equally useful when dealing with linear programming, second-order cone programming or hard thresholding schemes.
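Before stating the lemma, the lifting identity itself is mechanical to verify: appending $\sigma$ to the signal and $-\boldsymbol{\tau}/\sigma$ as an extra column of $A$ turns thresholded one-bit measurements into plain ones in dimension $n+1$ (a minimal sketch; names and sizes are our own).

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, sigma = 40, 15, 2.0
A = rng.standard_normal((m, n))
tau = sigma * rng.standard_normal(m)   # Gaussian thresholds with variance sigma^2
f = rng.standard_normal(n)

f_lift = np.concatenate([f, [sigma]])              # f~ = [f; sigma]
A_lift = np.hstack([A, (-tau / sigma)[:, None]])   # A~ = [A | -tau/sigma]

# sgn(A f - tau) coincides with sgn(A~ f~): A~ f~ = A f + (-tau/sigma) * sigma.
assert np.array_equal(np.sign(A @ f - tau), np.sign(A_lift @ f_lift))
```

Note that the appended column $-\boldsymbol{\tau}/\sigma$ again has independent standard normal entries, which is what lets the results of Section 3 apply to the lifted problem.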
Lemma 1 For $$\widetilde{{\bf f}}, \widetilde{{\bf g}} \in \mathbb{R}^{n+1}$$ written as   $$\widetilde{{\bf f}} := \begin{bmatrix} {\bf f}_{[n]} \\ \hline f_{n+1} \end{bmatrix} \qquad \mbox{and} \qquad \widetilde{{\bf g}} =: \begin{bmatrix} {\bf g}_{[n]} \\ \hline g_{n+1} \end{bmatrix}$$ with $$\widetilde{{\bf f}}_{[n]}, \widetilde{{\bf g}}_{[n]} \in \mathbb{R}^n$$ and with $$f_{n+1} \not= 0$$, $$g_{n+1} \not= 0$$, one has   $$\left\| \frac{{\bf f}_{[n]}}{f_{n+1}} - \frac{{\bf g}_{[n]}}{g_{n+1}} \right\|_2 \le \frac{\|\widetilde{{\bf f}}\|_2 \|\widetilde{{\bf g}}\|_2}{|f_{n+1}||g_{n+1}|} \left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2.$$ Proof. By using the triangle inequality in $$\mathbb{R}^n$$ and Cauchy–Schwarz inequality in $$\mathbb{R}^2$$, we can write   \begin{align*} \left\| \frac{{\bf f}_{[n]}}{f_{n+1}} - \frac{{\bf g}_{[n]}}{g_{n+1}} \right\|_2 & = \|\widetilde{{\bf f}}\|_2 \left\| \frac{1/f_{n+1}}{\|\widetilde{{\bf f}}\|_2} {\bf f}_{[n]} - \frac{1/g_{n+1}}{\|\widetilde{{\bf f}}\|_2} {\bf g}_{[n]} \right\|_2\\ & \le \|\widetilde{{\bf f}}\|_2 \left(\frac{1}{f_{n+1}} \left\| \frac{{\bf f}_{[n]}}{\| \widetilde{{\bf f}} \|_2} - \frac{{\bf g}_{[n]}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 + \left| \frac{1/g_{n+1}}{\|\widetilde{{\bf f}}\|_2} - \frac{1/f_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right| \|{\bf g}_{[n]}\|_2 \right)\\ & = \|\widetilde{{\bf f}}\|_2 \left(\frac{1}{f_{n+1}} \left\| \frac{{\bf f}_{[n]}}{\| \widetilde{{\bf f}} \|_2} - \frac{{\bf g}_{[n]}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 + \frac{\|{\bf g}_{[n]}\|_2}{|f_{n+1}| |g_{n+1}|} \left| \frac{f_{n+1}}{\|\widetilde{{\bf f}}\|_2} - \frac{g_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right| \right)\\ & \le \|\widetilde{{\bf f}}\|_2 \left[\frac{1}{|f_{n+1}|^2} + \frac{\|{\bf g}_{[n]}\|_2^2}{|f_{n+1}|^2 |g_{n+1}|^2} \right]^{1/2} \left[\left\| \frac{{\bf f}_{[n]}}{\| \widetilde{{\bf f}} \|_2} - \frac{{\bf g}_{[n]}}{\|\widetilde{{\bf 
g}}\|_2} \right\|_2^2 + \left| \frac{f_{n+1}}{\|\widetilde{{\bf f}}\|_2} - \frac{g_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right|^2 \right]^{1/2}\\ & = \|\widetilde{{\bf f}}\|_2 \left[\frac{\|\widetilde{{\bf g}}\|_2^2}{|f_{n+1}|^2 |g_{n+1}|^2} \right]^{1/2} \left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2, \end{align*} which is the announced result. □

4.1 Linear programming

Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f} - \boldsymbol{\tau})$$ with $$\tau_1,\ldots,\tau_m \sim \mathscr{N}(0,\sigma^2)$$, the optimization scheme we consider here consists in outputting the signal   $$\label{Defflp} {\bf f}_{\rm LP} = \frac{\sigma}{\widehat{u}} \widehat{{\bf h}} \in \mathbb{R}^n,$$ (4.1) where $$\widehat{{\bf h}} \in \mathbb{R}^{n}$$ and $$\widehat{u} \in \mathbb{R}$$ are solutions of   $$\label{OptProg} \underset{{\bf h} \in \mathbb{R}^n, u \in \mathbb{R}}{\rm minimize \;} \; \|{\bf D}^* {\bf h} \|_1 + |u| \qquad \mbox{subject to} \quad \mathrm{sgn}({\bf A} {\bf h} - u \boldsymbol{\tau} / \sigma) = {\bf y}, \quad \|{\bf A} {\bf h} - u \boldsymbol{\tau} / \sigma \|_1 = 1.$$ (4.2) Theorem 7 Let $$\varepsilon, r, \sigma > 0$$, let $$m \ge C (r/\sigma+\sigma/r)^6 \varepsilon^{-6} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Furthermore, let $$\tau_1,\ldots,\tau_m$$ be independent normal random variables with mean zero and variance $$\sigma^2$$ that are also independent from the entries of $${\bf A}$$.
Then, with failure probability at most $$\gamma \exp(-c m \varepsilon^2 r^2 \sigma^2/(r^2+\sigma^2)^2)$$, any effectively $$s$$-analysis sparse $${\bf f} \in \mathbb{R}^n$$ satisfying $$\|{\bf f}\|_2 \le r$$ and observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ is approximated by $${\bf f}_{\rm LP}$$ given in (4.1) with error   $$\left\| {\bf f}- {\bf f}_{\rm LP} \right\|_2 \le \varepsilon r.$$ Proof. Let us introduce the ‘lifted’ signal $$\widetilde{{\bf f}} \in \mathbb{R}^{n+1}$$, the ‘lifted’ tight frame $$\widetilde{{\bf D}} \in \mathbb{R}^{(n+1)\times (N+1)}$$, and the ‘lifted’ measurement matrix $$\widetilde{{\bf A}} \in \mathbb{R}^{m \times (n+1)}$$ defined as   $$\widetilde{{\bf f}} := \begin{bmatrix} {\bf f} \\ \hline \sigma \end{bmatrix}, \qquad \widetilde{{\bf D}} := \begin{bmatrix} {\bf D} & {\bf 0} \\ \hline {\bf 0} & 1 \end{bmatrix}, \qquad \widetilde{{\bf A}} := \begin{bmatrix} {\bf A} & -\boldsymbol{\tau}/\sigma \end{bmatrix}.$$ (4.3) First, we observe that $$\widetilde{{\bf f}}$$ is effectively $$(s+1)$$-analysis sparse (relative to $$\widetilde{{\bf D}}$$): indeed, $$\widetilde{{\bf D}}^* \widetilde{{\bf f}} = \begin{bmatrix} {\bf D}^* {\bf f} \\ \hline \sigma \end{bmatrix}$$, hence   $$\frac{\|\widetilde{{\bf D}}^* \widetilde{{\bf f}}\|_1}{\|\widetilde{{\bf D}}^* \widetilde{{\bf f}}\|_2} = \frac{\|{\bf D}^* {\bf f} \|_1 + \sigma}{\sqrt{\|{\bf D}^* {\bf f}\|_2^2+\sigma^2}} \le \frac{\sqrt{s} \|{\bf D}^* {\bf f}\|_2 + \sigma}{\sqrt{\|{\bf D}^* {\bf f}\|_2^2+\sigma^2}} \le \sqrt{s+1}.$$ Next, we observe that the matrix $$\widetilde{{\bf A}} \in \mathbb{R}^{m \times (n+1)}$$, populated by independent standard normal random variables, satisfies $$\widetilde{{\bf D}}$$-TES$$(36(s+1),\varepsilon')$$, $$\varepsilon' := \dfrac{r \sigma}{2(r^2 + \sigma^2)} \varepsilon$$ and $$\widetilde{{\bf D}}$$-RIP$$_1(25(s+1),1/5)$$ with failure probability at most $$\gamma \exp(-c m {\varepsilon'}^2) + \gamma' \exp(-c' m) \le \gamma'' \exp(-c'' m \varepsilon^2 r^2 \sigma^2 / (r^2 +
\sigma^2)^2)$$, since $$m \ge C {\varepsilon'}^{-6} (s+1) \ln(eN/(s+1))$$ and $$m \ge C (1/5)^{-7} (s+1) \ln(e N / (s+1))$$ are ensured by our assumption on $$m$$. Finally, we observe that $${\bf y} = \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf f}})$$ and that the optimization program (4.2) reads   $$\underset{\widetilde{{\bf h}} \in \mathbb{R}^{n+1}}{\rm minimize \;} \|\widetilde{{\bf D}}^* \widetilde{{\bf h}} \|_1 \qquad \mbox{subject to} \quad \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf h}}) = {\bf y}, \quad \|\widetilde{{\bf A}} \widetilde{{\bf h}} \|_1 = 1.$$ Denoting its solution as $$\widetilde{{\bf g}} := \begin{bmatrix} \widehat{{\bf h}} \\ \hline \widehat{u} \end{bmatrix} \in \mathbb{R}^{n+1}$$, Theorem 5 implies that   $$\left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 \le \varepsilon'.$$ In particular, looking at the last coordinate, this inequality yields   $$\left| \frac{\sigma}{\|\widetilde{{\bf f}}\|_2} - \frac{g_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right| \le \varepsilon', \qquad \mbox{hence} \qquad \frac{|g_{n+1}|}{\|\widetilde{{\bf g}}\|_2} \ge \frac{\sigma}{\|\widetilde{{\bf f}}\|_2} - \varepsilon' \ge \frac{\sigma}{\sqrt{r^2+\sigma^2}} - \frac{\sigma}{2 \sqrt{r^2 + \sigma^2}} = \frac{\sigma}{2 \sqrt{r^2 + \sigma^2}}.$$ In turn, applying Lemma 1 while taking $${\bf f} = {\bf f}_{[n]}$$ and $${\bf f}_{\rm LP} = (\sigma/g_{n+1}) {\bf g}_{[n]}$$ into consideration gives   $$\left\| \frac{{\bf f}}{\sigma} - \frac{{\bf f}_{\rm LP}}{\sigma} \right\|_2 \le \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{\| \widetilde{{\bf g}} \|_2}{|g_{n+1}|} \varepsilon' \le \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{2\sqrt{r^2+\sigma^2}}{\sigma} \frac{r \sigma}{2(r^2 + \sigma^2)} \varepsilon = \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{r}{\sqrt{r^2+\sigma^2}} \varepsilon,$$ so that   $$\| {\bf f} - {\bf f}_{\rm LP} \|_2 \le \|\widetilde{{\bf f}}\|_2 \frac{r}{\sqrt{r^2+\sigma^2}} \varepsilon \le r \varepsilon.$$ This establishes the announced result.
□ Remark 3 The recovery scheme (4.2) does not require an estimation of $$r$$ to be run. The recovery scheme presented next does require such an estimation. Moreover, it is a second-order cone program instead of a simpler linear program. However, it has one noticeable advantage, namely that it applies not only to signals satisfying $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s}\|{\bf D}^*{\bf f}\|_2$$ and $$\|{\bf D}^*{\bf f}\|_2 \le r$$, but also more generally to signals satisfying $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} r$$ and $$\|{\bf D}^*{\bf f}\|_2 \le r$$. For both schemes, one needs $$\sigma$$ to be of the same order as $$r$$ for the results to become meaningful in terms of number of measurements and success probability. However, if $$r$$ is only upper-estimated, then one could choose $$\sigma \ge r$$ and obtain a weaker recovery error $$\|{\bf f} - \widehat{{\bf f}}\|_2 \le \varepsilon \sigma$$ with relevant number of measurements and success probability. 4.2 Second-order cone programming Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f} - \boldsymbol{\tau})$$ with $$\tau_1,\ldots,\tau_m \sim \mathscr{N}(0,\sigma^2)$$, the optimization scheme we consider here consists in outputting the signal   $$\label{Deffcp} {\bf f}_{\rm CP} = \underset{{\bf h} \in \mathbb{R}^n}{{\rm argmin}\;} \; \|{\bf D}^* {\bf h}\|_1 \qquad \mbox{subject to} \quad \mathrm{sgn}({\bf A} {\bf h} - \boldsymbol{\tau}) = {\bf y}, \quad \|{\bf h}\|_2 \le r.$$ (4.4) Theorem 8 Let $$\varepsilon, r, \sigma > 0$$, let $$m \ge C (r/\sigma + \sigma/r)^6(r^2/\sigma^2+1) \varepsilon^{-6} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Furthermore, let $$\tau_1,\ldots,\tau_m$$ be independent normal random variables with mean zero and variance $$\sigma^2$$ that are also independent from $${\bf A}$$. 
Then, with failure probability at most $$\gamma \exp(- c' m \varepsilon^2 r^2 \sigma^2 / (r^2+\sigma^2)^2)$$, any signal $${\bf f} \in \mathbb{R}^n$$ with $$\|{\bf f}\|_2 \le r$$, $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} r$$ and observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ is approximated by $${\bf f}_{\rm CP}$$ given in (4.4) with error   $$\left\| {\bf f}- {\bf f}_{\rm CP} \right\|_2 \le \varepsilon r.$$ Proof. We again use the notation (4.3) introducing the ‘lifted’ objects $$\widetilde{{\bf f}}$$, $$\widetilde{{\bf D}}$$ and $$\widetilde{{\bf A}}$$. Moreover, we set $$\widetilde{{\bf g}} := \begin{bmatrix} {\bf f}_{\rm CP} \\ \hline \sigma \end{bmatrix}$$. We claim that $$\widetilde{{\bf f}}$$ and $$\widetilde{{\bf g}}$$ are effectively $$s'$$-analysis sparse, $$s' := (r^2 / \sigma^2 + 1)(s+1)$$. For $$\widetilde{{\bf g}}$$, this indeed follows from $$\| \widetilde{{\bf D}}^* \widetilde{{\bf g}} \|_2 = \|\widetilde{{\bf g}}\|_2 = \sqrt{\|{\bf f}_{\rm CP}\|_2^2 + \sigma^2} \ge \sigma$$ and   $$\|\widetilde{{\bf D}}^* \widetilde{{\bf g}}\|_1 = \left\| \begin{bmatrix} {\bf D}^* {\bf f}_{\rm CP} \\ \hline \sigma \end{bmatrix} \right\|_1 = \| {\bf D}^* {\bf f}_{\rm CP}\|_1 + \sigma \le \| {\bf D}^* {\bf f} \|_1 + \sigma \le \sqrt{s} r + \sigma \le \sqrt{r^2 + \sigma^2} \sqrt{s+1}.$$ We also notice that $$\widetilde{{\bf A}}$$ satisfies $$\widetilde{{\bf D}}$$-TES$$(s', \varepsilon')$$, $$\varepsilon' := \dfrac{r \sigma}{r^2 + \sigma^2} \varepsilon$$, with failure probability at most $$\gamma \exp(-c m {\varepsilon'}^2) \le \gamma \exp(- c' m \varepsilon^2 r^2 \sigma^2 / (r^2+\sigma^2)^2)$$, since $$m \ge C {\varepsilon'}^{-6} s' \ln(eN/s')$$ is ensured by our assumption on $$m$$. 
Finally, we observe that both $$\widetilde{{\bf f}}/ \|\widetilde{{\bf f}}\|_2$$ and $$\widetilde{{\bf g}}/ \|\widetilde{{\bf g}}\|_2$$ are $$\ell_2$$-normalized effectively $$s'$$-analysis sparse and have the same sign observations $$\mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf f}}) = \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf g}}) = {\bf y}$$. Thus,   $$\left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 \le \varepsilon'.$$ In view of Lemma 1, we derive   $$\left\| \frac{{\bf f}}{\sigma} - \frac{{\bf f}_{\rm CP}}{\sigma} \right\|_2 \le \frac{r^2 + \sigma^2}{\sigma^2} \varepsilon', \qquad \mbox{hence} \qquad \|{\bf f} - {\bf f}_{\rm CP}\|_2 \le \frac{r^2 + \sigma^2}{\sigma} \varepsilon' = r \varepsilon.$$ This establishes the announced result. □ 4.3 Hard thresholding Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ with $$\tau_1,\ldots,\tau_m \sim \mathscr{N}(0,\sigma^2)$$, the hard thresholding scheme we consider here consists in outputting the signal   $$\label{fht} {\bf f}_{\rm HT} = \frac{-\sigma^2}{\langle \boldsymbol{\tau}, {\bf y} \rangle} {\bf D} {\bf z}, \qquad {\bf z} = H_{t-1}({\bf D}^* {\bf A}^* {\bf y}).$$ (4.5) Theorem 9 Let $$\varepsilon, r, \sigma > 0$$, let $$m \ge C \kappa (r/\sigma+\sigma/r)^9 \varepsilon^{-9} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Furthermore, let $$\tau_1,\ldots,\tau_m$$ be independent normal random variables with mean zero and variance $$\sigma^2$$ that are also independent from the entries of $${\bf A}$$. 
Then, with failure probability at most $$\gamma \exp(-c m \varepsilon^2 r^2 \sigma^2/(r^2+\sigma^2)^2)$$, any $$s$$-synthesis-sparse and effectively $$\kappa s$$-analysis-sparse signal $${\bf f} \in \mathbb{R}^n$$ satisfying $$\|{\bf f}\|_2 \le r$$ and observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ is approximated by $${\bf f}_{\rm HT}$$ given in (4.5) for $$t:=\lceil 16 (\varepsilon'/8)^{-2} \kappa (s+1) \rceil$$ with error   $$\left\| {\bf f}- {\bf f}_{\rm HT} \right\|_2 \le \varepsilon r.$$ Proof. We again use the notation (4.3) for the ‘lifted’ objects $$\widetilde{{\bf f}}$$, $$\widetilde{{\bf D}}$$ and $$\widetilde{{\bf A}}$$. First, we notice that $$\widetilde{{\bf f}}$$ is $$(s+1)$$-synthesis sparse (relative to $$\widetilde{{\bf D}}$$), as well as effectively $$\kappa (s+1)$$-analysis sparse, since $$\widetilde{{\bf D}}^* \widetilde{{\bf f}} = \begin{bmatrix} {\bf D}^* {\bf f} \\ \hline \sigma \end{bmatrix}$$ satisfies   $$\frac{\|\widetilde{{\bf D}}^* \widetilde{{\bf f}}\|_1}{\|\widetilde{{\bf D}}^* \widetilde{{\bf f}}\|_2} = \frac{\|{\bf D}^* {\bf f}\|_1 + \sigma}{\sqrt{\|{\bf D}^*{\bf f}\|_2^2 + \sigma^2}} \le \frac{\sqrt{\kappa s}\|{\bf D}^* {\bf f}\|_2 + \sigma}{\sqrt{\|{\bf D}^*{\bf f}\|_2^2 +\sigma^2}} \le \sqrt{\kappa s + 1} \le \sqrt{\kappa (s+1)}.$$ Next, we observe that the matrix $$\widetilde{{\bf A}}$$, populated by independent standard normal random variables, satisfies $$\widetilde{{\bf D}}$$-SPEP$$(s+1+t,\varepsilon'/8)$$, $$\varepsilon' := \dfrac{r \sigma}{2(r^2 + \sigma^2)} \varepsilon$$, with failure probability at most $$\gamma \exp(-c m {\varepsilon'}^2)$$, since $$m \ge C (\varepsilon'/8)^{-7} (s+1+t) \ln(e(N+1)/(s+1+t))$$ is ensured by our assumption on $$m$$. 
Finally, since $${\bf y} = \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf f}})$$, Theorem 5 implies that   $$\left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 \le \varepsilon',$$ where $$\widetilde{{\bf g}} \in \mathbb{R}^{n+1}$$ is the output of the ‘lifted’ hard thresholding scheme, i.e.   $$\widetilde{{\bf g}} = \widetilde{{\bf D}} \widetilde{{\bf z}}, \qquad \widetilde{{\bf z}} = H_{t} (\widetilde{{\bf D}}^*\widetilde{{\bf A}}^* {\bf y}).$$ In particular, looking at the last coordinate, this inequality yields   $$\label{LBg} \left| \frac{\sigma}{\|\widetilde{{\bf f}}\|_2} - \frac{g_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right| \le \varepsilon', \quad \mbox{hence} \quad \frac{|g_{n+1}|}{\|\widetilde{{\bf g}}\|_2} \ge \frac{\sigma}{\|\widetilde{{\bf f}}\|_2} - \varepsilon' \ge \frac{\sigma}{\sqrt{r^2+\sigma^2}} - \frac{\sigma}{2 \sqrt{r^2 + \sigma^2}} = \frac{\sigma}{2 \sqrt{r^2 + \sigma^2}}.$$ (4.6) Now let us also observe that   $$\widetilde{{\bf z}} = H_{t} \left(\begin{bmatrix} {\bf D}^* {\bf A}^* {\bf y} \\ \hline - \langle \boldsymbol{\tau},{\bf y} \rangle / \sigma \end{bmatrix} \right) = \left\{ \begin{matrix} \begin{bmatrix} H_{t}({\bf D}^* {\bf A}^* {\bf y}) \\ \hline 0 \end{bmatrix}, \\ \mbox{or}\hspace{30mm}\\ \begin{bmatrix} H_{t-1}({\bf D}^* {\bf A}^* {\bf y}) \\ \hline - \langle \boldsymbol{\tau},{\bf y} \rangle / \sigma \end{bmatrix} , \end{matrix} \right. \quad \mbox{hence} \quad \widetilde{{\bf g}} = \widetilde{{\bf D}} \widetilde{{\bf z}} = \left\{ \begin{matrix} \begin{bmatrix} {\bf D}(H_{t}({\bf D}^* {\bf A}^* {\bf y})) \\ \hline 0 \end{bmatrix}, \\ \mbox{or}\hspace{35mm}\\ \begin{bmatrix} {\bf D}(H_{t-1}({\bf D}^* {\bf A}^* {\bf y})) \\ \hline - \langle \boldsymbol{\tau},{\bf y} \rangle / \sigma \end{bmatrix}. \end{matrix} \right.$$ In view of (4.6), the latter option prevails. It is then apparent that $${\bf f}_{\rm HT} = \sigma {\bf g}_{[n]} / g_{n+1}$$. 
Lemma 1 gives   $$\left\| \frac{{\bf f}}{\sigma} - \frac{{\bf f}_{\rm HT}}{\sigma} \right\|_2 \le \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{\| \widetilde{{\bf g}} \|_2}{|g_{n+1}|} \varepsilon' \le \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{2\sqrt{r^2+\sigma^2}}{\sigma} \frac{r \sigma}{2(r^2 + \sigma^2)} \varepsilon = \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{r}{\sqrt{r^2+\sigma^2}} \varepsilon,$$ so that   $$\| {\bf f} - {\bf f}_{\rm HT} \|_2 \le \|\widetilde{{\bf f}}\|_2 \frac{r}{\sqrt{r^2+\sigma^2}} \varepsilon \le r \varepsilon.$$ This establishes the announced result. □ 5. Postponed proofs and further remarks This final section contains the theoretical justification of the technical properties underlying our results, followed by a few points of discussion around them. 5.1 Proof of $${\mathbf D}$$-$${\rm SPEP}$$ The Gaussian width turns out to be a useful tool in our proofs. For a set $$K \subseteq \mathbb{R}^n$$, it is defined by   $$w(K) = \mathbb{E} \left[\sup_{{\bf f} \in K} \langle {\bf f}, {\bf g} \rangle \right],$$ where $${\bf g} \in \mathbb{R}^n$$ is a standard normal random vector. We isolate the following two properties. Lemma 2 Let $$K \subseteq \mathbb{R}^n$$ be a linear space and $$K_1,\ldots,K_L \subseteq \mathbb{R}^n$$ be subsets of the unit sphere $$S^{n-1}$$. (i) $$k / \sqrt{k+1} \le w(K \cap S^{n-1}) \le \sqrt{k}$$, where $$k:= \dim (K)$$; (ii) $$\displaystyle{w \left(K_1 \cup \ldots \cup K_L \right) \le \max \left\{w(K_1),\ldots,w(K_L) \right\} + 3 \sqrt{\ln(L)}}$$. Proof. (i) By the invariance under orthogonal transformation (see [25, Proposition 2.1]3), we can assume that $$K = \mathbb{R}^k \times \left\{(0,\ldots,0) \right\}$$. We then notice that $$\sup_{{\bf f} \in K \cap S^{n-1}} \langle {\bf f}, {\bf g} \rangle = \|(g_1,\ldots,g_k)\|_2$$ is the $$\ell_2$$-norm of a standard normal random vector of dimension $$k$$. We invoke, e.g. [15, Proposition 8.1] to derive the announced result. 
(ii) Let us introduce the non-negative random variables   $$\xi_\ell := \sup_{{\bf f} \in K_\ell} \langle {\bf f} , {\bf g} \rangle \quad \ell = 1,\ldots, L ,$$ so that the Gaussian widths of each $$K_\ell$$ and of their union take the form   $$w(K_\ell) = \mathbb{E}(\xi_\ell) \quad \ell = 1,\ldots, L \qquad \mbox{and} \qquad w \left(K_1 \cup \cdots \cup K_L \right) = \mathbb{E} \left(\max_{\ell = 1, \ldots, L} \xi_\ell \right).$$ By the concentration of measure inequality (see e.g. [15, Theorem 8.40]) applied to the function $$F: {\bf x} \in \mathbb{R}^n \mapsto \sup_{{\bf f} \in K_\ell} \langle {\bf f}, {\bf x} \rangle$$, which is a Lipschitz function with constant $$1$$, each $$\xi_\ell$$ satisfies   $$\mathbb{P}(\xi_\ell \ge \mathbb{E}(\xi_\ell) + t) \le \exp \left(-t^2/2 \right)\!.$$ Because each $$\mathbb{E}(\xi_\ell)$$ is no larger than $$\max_\ell \mathbb{E}(\xi_\ell) = \max_\ell w(K_\ell) =: \omega$$, we also have   $$\mathbb{P} (\xi_\ell \ge \omega + t) \le \exp \left(-t^2/2 \right)\!.$$ Setting $$v:= \sqrt{2 \ln(L)}$$, we now calculate   \begin{align*} \mathbb{E} \left(\max_{\ell =1,\ldots, L} \xi_\ell \right) & = \int_0^\infty \mathbb{P} \left(\max_{\ell =1,\ldots, L} \xi_\ell \ge u \right) \,{\rm{d}}u = \left(\int_0^{\omega+v} + \int_{\omega+v}^\infty \right) \mathbb{P} \left(\max_{\ell = 1,\ldots, L} \xi_\ell \ge u \right) \,{\rm{d}}u\\ & \le \int_0^{\omega + v} 1\, {\rm{d}}u + \int_{\omega + v}^\infty \sum_{\ell=1}^L \mathbb{P} \left(\xi_\ell \ge u \right) \,{\rm{d}}u = \omega + v + \sum_{\ell=1}^L \int_v^\infty \mathbb{P} \left(\xi_\ell \ge \omega + t \right) \,{\rm{d}}t\\ & \le \omega + v + L \int_v^\infty \exp \left(-t^2/2 \right)\, {\rm{d}}t \le \omega + v + L \frac{\exp(-v^2/2)}{v} \\ & = \omega + \sqrt{2 \ln(L)} + L \frac{1/L}{\sqrt{2 \ln(L)}} \le \omega + c \sqrt{\ln(L)}, \end{align*} where $$c=\sqrt{2} + (\sqrt{2} \ln(2))^{-1} \le 3$$. 
We have shown that $$w \left(K_1 \cup \cdots \cup K_L \right) \le \max_{\ell} w(K_\ell) + 3 \sqrt{\ln(L)}$$, as desired. □ We now turn our attention to proving the awaited theorem. Proof of Theorem 3. According to [25, Proposition 4.3], with $${\bf A}' := (\sqrt{2/\pi}/m) {\bf A}$$, we have   $$\left| \langle {\bf A}' {\bf f}, \mathrm{sgn}({\bf A}' {\bf g}) \rangle - \langle {\bf f}, {\bf g} \rangle \right| \le \delta,$$ for all $${\bf f},{\bf g} \in {\bf D}({\it {\Sigma}}_s^N) \cap S^{n-1}$$ provided $$m \ge C \delta^{-7} w({\bf D}({\it {\Sigma}}_s^N) \cap S^{n-1})^2$$, so it is enough to upper bound $$w({\bf D}({\it {\Sigma}}_s^N) \cap S^{n-1})$$ appropriately. To do so, with $${\it {\Sigma}}_S^N$$ denoting the space $$\{{\bf x} \in \mathbb{R}^N: {\rm supp}({\bf x}) \subseteq S \}$$ for any $$S \subseteq \{1,\ldots, N \}$$, we use Lemma 2 to write   \begin{align*} w({\bf D}({\it {\Sigma}}_s^N) \cap S^{n-1}) & = w \bigg(\bigcup_{|S|=s} \left\{{\bf D}({\it {\Sigma}}_S^N) \cap S^{n-1} \right\} \bigg) \underset{(ii)}{\le} \max_{|S|=s} w({\bf D}({\it {\Sigma}}_S^N) \cap S^{n-1}) + 3 \sqrt{\ln \left(\binom{N}{s} \right)}\\ & \underset{(i)}{\le} \sqrt{s} + 3 \sqrt{s \ln \left(eN/s \right)} \le 4 \sqrt{s \ln \left(eN/s \right)}. \end{align*} The result is now immediate. □ 5.2 Proof of TES We propose two approaches for proving Theorem 4. One uses again the notion of Gaussian width, and the other one relies on covering numbers. The necessary results are isolated in the following lemma. Lemma 3 The set of $$\ell_2$$-normalized effectively $$s$$-analysis-sparse signals satisfies (i) $$w \left(({\bf D}^*)^{-1}({\it {\Sigma}}_s^{N,{\rm eff}}) \cap S^{n-1} \right) \le C \sqrt{s \ln(eN/s)}$$; (ii) $$\mathscr{N} \left(({\bf D}^*)^{-1}({\it {\Sigma}}_s^{N,{\rm eff}}) \cap S^{n-1} , \rho \right) \le \binom{N}{t}\left(1 + \frac{8}{\rho} \right)^t$$, where $$t := \lceil 4 \rho^{-2} s \rceil$$. Proof. 
(i) By the definition of the Gaussian width for $$\mathscr{K}_s := ({\bf D}^*)^{-1}({\it {\Sigma}}_s^{N,{\rm eff}}) \cap S^{n-1}$$, with $${\bf g} \in \mathbb{R}^n$$ denoting a standard normal random vector,   $$\label{slep} w(\mathscr{K}_s) = \mathbb{E} \left[\sup_{\substack{{\bf D}^* {\bf f} \in {\it {\Sigma}}_s^{N,{\rm eff}} \\ \|{\bf f}\|_2 = 1}} \langle {\bf f} , {\bf g} \rangle \right] = \mathbb{E} \left[\sup_{\substack{{\bf D}^* {\bf f} \in {\it {\Sigma}}_s^{N,{\rm eff}} \\ \|{\bf D}^* {\bf f}\|_2 = 1}} \langle {\bf D} {\bf D}^* {\bf f} , {\bf g} \rangle \right] \le \mathbb{E} \left[\sup_{\substack{{\bf x} \in {\it {\Sigma}}_s^{N,{\rm eff}} \\ \|{\bf x}\|_2 = 1}} \langle {\bf D} {\bf x} , {\bf g} \rangle \right].$$ (5.1) In view of $$\|{\bf D}\|_{2 \to 2} = 1$$, we have, for any $${\bf x},{\bf x}' \in {\it {\Sigma}}_s^{N,{\rm eff}}$$ with $$\|{\bf x}\|_2 = \|{\bf x}'\|_2 =1$$,   \begin{align*} \mathbb{E} \left(\langle {\bf D} {\bf x}, {\bf g} \rangle - \langle {\bf D} {\bf x}', {\bf g}' \rangle \right)^2 &= \mathbb{E} \left[\langle {\bf D} {\bf x}, {\bf g} \rangle ^2 \right] + \mathbb{E} \left[\langle {\bf D} {\bf x}', {\bf g}' \rangle ^2 \right] = \|{\bf D} {\bf x}\|_2^2 + \|{\bf D} {\bf x}'\|_2^2 \le \|{\bf x}\|_2^2 + \|{\bf x}'\|_2^2\\ &= \mathbb{E} \left(\langle {\bf x}, {\bf g} \rangle - \langle {\bf x}', {\bf g}' \rangle \right)^2. \end{align*} Applying Slepian’s lemma (see e.g. [15, Lemma 8.25]), we obtain   $$w(\mathscr{K}_s) \le \mathbb{E} \left[\sup_{\substack{{\bf x} \in {\it {\Sigma}}_s^{N,{\rm eff}} \\ \|{\bf x}\|_2 = 1}} \langle {\bf x} , {\bf g} \rangle \right] =w({\it {\Sigma}}_s^{N,{\rm eff}} \cap S^{n-1}).$$ The latter is known to be bounded by $$C \sqrt{s \ln (eN/s)}$$, see [25, Lemma 2.3]. (ii) The covering number $$\mathscr{N}(\mathscr{K}_s,\rho)$$ is bounded above by the maximal number $$\mathscr{P}(\mathscr{K}_s,\rho)$$ of elements in $$\mathscr{K}_s$$ that are separated by a distance $$\rho$$. 
We claim that $$\mathscr{P} (\mathscr{K}_s, \rho) \le \mathscr{P}({\it {\Sigma}}_t^N \cap B_2^N, \rho/2)$$. To justify this claim, let us consider a maximal $$\rho$$-separated set $$\{{\bf f}^1,\ldots,{\bf f}^L\}$$ of signals in $$\mathscr{K}_s$$. For each $$i$$, let $$T_i \subseteq \{1, \ldots, N \}$$ denote an index set of $$t$$ largest absolute entries of $${\bf D}^* {\bf f}^i$$. We write   $$\rho < \|{\bf f}^i - {\bf f}^j \|_2 = \|{\bf D}^* {\bf f}^i - {\bf D}^* {\bf f}^j \|_2 \le \|({\bf D}^* {\bf f}^i)_{T_i} - ({\bf D}^* {\bf f}^j)_{T_j} \|_2 + \| ({\bf D}^* {\bf f}^i)_{\overline{T_i}} \|_2 + \| ({\bf D}^* {\bf f}^j)_{\overline{T_j}} \|_2.$$ Invoking [15, Theorem 2.5], we observe that   $$\| ({\bf D}^* {\bf f}^i)_{\overline{T_i}} \|_2 \le \frac{1}{2\sqrt{t}} \|{\bf D}^* {\bf f}^i \|_1 \le \frac{\sqrt{s}}{2 \sqrt{t}} \|{\bf D}^* {\bf f}^i \|_2 = \frac{\sqrt{s}}{2 \sqrt{t}},$$ and similarly for $$j$$ instead of $$i$$. Thus, we obtain   $$\rho < \|({\bf D}^* {\bf f}^i)_{T_i} - ({\bf D}^* {\bf f}^j)_{T_j} \|_2 + \sqrt{\frac{s}{t}} \le \|({\bf D}^* {\bf f}^i)_{T_i} - ({\bf D}^* {\bf f}^j)_{T_j} \|_2 + \frac{\rho}{2}, \quad \mbox{i.e.} \; \|({\bf D}^* {\bf f}^i)_{T_i} - ({\bf D}^* {\bf f}^j)_{T_j} \|_2 > \frac{\rho}{2}.$$ Since we have uncovered a set of $$L = \mathscr{P}(\mathscr{K}_s,\rho)$$ points in $${\it {\Sigma}}_t^N \cap B_2^N$$ that are $$(\rho/2)$$ separated, the claimed inequality is proved. We conclude by recalling that $$\mathscr{P}({\it {\Sigma}}_t^N \cap B_2^N, \rho/2)$$ is bounded above by $$\mathscr{N}({\it {\Sigma}}_t^N \cap B_2^N, \rho/4)$$, which is itself bounded above by $$\dbinom{N}{t} \left(1 + \dfrac{2}{\rho/4} \right)^t$$. □ We can now turn our attention to proving the awaited theorem. Proof of Theorem 4. 
With $$\mathscr{K}_s = ({\bf D}^*)^{-1}({\it {\Sigma}}_s^{N,{\rm eff}}) \cap S^{n-1}$$, the conclusion holds when $$m \ge C \varepsilon^{-6} w(\mathscr{K}_s)^2$$ or when $$m \ge C \varepsilon^{-1} \ln (\mathscr{N}(\mathscr{K}_s,c \varepsilon))$$, according to [26, Theorem 1.5] or to [3, Theorem 1.5], respectively. It now suffices to call upon Lemma 3. Note that the latter option yields better powers of $$\varepsilon^{-1}$$, but less pleasant failure probability. □ 5.3 Further remarks We conclude this theoretical section by making two noteworthy comments on the sign product embedding property and the tessellation property in the dictionary case. Remark 4 $${\bf D}$$-SPEP cannot hold for an arbitrary dictionary $${\bf D}$$ if synthesis sparsity were replaced by effective synthesis sparsity. This is because the set of effectively $$s$$-synthesis-sparse signals can be the whole space $$\mathbb{R}^n$$. Indeed, any $${\bf f} \in \mathbb{R}^n$$ can be written as $${\bf f} = {\bf D} {\bf u}$$ for some $${\bf u} \in \mathbb{R}^N$$. Let us also pick an $$(s-1)$$-sparse vector $${\bf v} \in \ker {\bf D}$$—there are tight frames for which this is possible, e.g. the concatenation of two orthogonal matrices. For $$\varepsilon > 0$$ small enough, we have   $$\frac{\|{\bf v} + \varepsilon {\bf u} \|_1}{\|{\bf v} + \varepsilon {\bf u}\|_2} \le \frac{\|{\bf v}\|_1 + \varepsilon \|{\bf u}\|_1}{\|{\bf v}\|_2 - \varepsilon \|{\bf u}\|_2} \le \frac{\sqrt{s-1} \|{\bf v}\|_2 + \varepsilon \|{\bf u}\|_1}{\|{\bf v}\|_2 - \varepsilon \|{\bf u}\|_2} \le \sqrt{s},$$ so that the coefficient vector $${\bf v} + \varepsilon {\bf u}$$ is effectively $$s$$-sparse, hence so is $$(1/\varepsilon){\bf v} + {\bf u}$$. It follows that $${\bf f} = {\bf D}((1/\varepsilon){\bf v} + {\bf u})$$ is effectively $$s$$-synthesis sparse. Remark 5 Theorem 3 easily implies a tessellation result for $${\bf D}({\it {\Sigma}}_s^N) \,\cap\, S^{n-1}$$, the ‘synthesis-sparse sphere’. 
Precisely, under the assumptions of the theorem (with a change of the constant $$C$$), $${\bf D}$$-SPEP$$(2s,\delta/2)$$ holds. Then, one can derive   $$[{\bf g},{\bf h} \in {\bf D}({\it {\Sigma}}_s) \cap S^{n-1} : \; \mathrm{sgn}({\bf A} {\bf g}) = \mathrm{sgn}({\bf A} {\bf h})] \Longrightarrow [\|{\bf g} - {\bf h}\|_2 \le \delta].$$ To see this, with $$\boldsymbol{\varepsilon} := \mathrm{sgn}({\bf A} {\bf g}) = \mathrm{sgn}({\bf A} {\bf h})$$ and with $${\bf f} := ({\bf g}-{\bf h})/\|{\bf g}-{\bf h}\|_2 \in {\bf D}({\it {\Sigma}}_{2s}) \cap S^{n-1}$$, we have   $$\left| \frac{\sqrt{2/\pi}}{m} \langle {\bf A} {\bf f} , \boldsymbol{\varepsilon}\rangle - \langle {\bf f}, {\bf g} \rangle \right| \le \frac{\delta}{2}, \qquad \left| \frac{\sqrt{2/\pi}}{m} \langle {\bf A} {\bf f} , \boldsymbol{\varepsilon} \rangle - \langle {\bf f}, {\bf h} \rangle \right| \le \frac{\delta}{2},$$ so by the triangle inequality $$|\langle {\bf f}, {\bf g} - {\bf h} \rangle| \le \delta$$, i.e. $$\|{\bf g} -{\bf h}\|_2 \le \delta$$, as announced. Acknowledgements The authors would like to thank the AIM SQuaRE program that funded and hosted our initial collaboration. Funding NSF grant number [CCF-1527501], ARO grant number [W911NF-15-1-0316] and AFOSR grant number [FA9550-14-1-0088] to R.B.; Alfred P. Sloan Fellowship and NSF Career grant number [1348721 to D.N.]; NSERC grant number [22R23068 to Y.P.]; and NSF Postdoctoral Research Fellowship grant number [1400558 to M.W.]. Footnotes 1 A signal $${\bf x} \in \mathbb{R}^N$$ is called $$s$$-sparse if $$\|{\bf x}\|_0 := |\mathrm{supp}({\bf x})| \leq s \ll N$$. 2 Here, ‘dictionary sparsity’ means effective $$s$$-analysis sparsity if $$\widehat{{\bf f}}$$ is produced by convex programming and genuine $$s$$-synthesis sparsity together with effective $$\kappa s$$-analysis sparsity if $$\widehat{{\bf f}}$$ is produced by hard thresholding. 
3 In particular, [25, Proposition 2.1] applies to the slightly different notion of mean width defined as $$\mathbb{E} \left[\sup_{{\bf f} \in K - K} \langle {\bf f}, {\bf g} \rangle \right]$$.
References
1. (2016) Compressive Sensing webpage. http://dsp.rice.edu/cs (accessed 24 June 2016).
2. Baraniuk R., Foucart S., Needell D., Plan Y. & Wootters M. (2017) Exponential decay of reconstruction error from binary measurements of sparse signals. IEEE Trans. Inform. Theory, 63, 3368–3385.
3. Bilyk D. & Lacey M. T. (2015) Random tessellations, restricted isometric embeddings, and one bit sensing. arXiv preprint arXiv:1512.06697.
4. Blumensath T. (2011) Sampling and reconstructing signals from a union of linear subspaces. IEEE Trans. Inform. Theory, 57, 4660–4671.
5. Boufounos P. T. & Baraniuk R. G. (2008) 1-Bit compressive sensing. Proceedings of the 42nd Annual Conference on Information Sciences and Systems (CISS), IEEE, pp. 16–21.
6. Candès E. J., Demanet L., Donoho D. L. & Ying L. (2006) Fast discrete curvelet transforms. Multiscale Model. Simul., 5, 861–899.
7. Candès E. J. & Donoho D. L. (2004) New tight frames of curvelets and optimal representations of objects with piecewise $$C^2$$ singularities. Comm. Pure Appl. Math., 57, 219–266.
8. Candès E. J., Eldar Y. C., Needell D. & Randall P. (2010) Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal., 31, 59–73.
9. Daubechies I. (1992) Ten Lectures on Wavelets. Philadelphia, PA: SIAM.
10. Davenport M., Needell D. & Wakin M. B. (2012) Signal space CoSaMP for sparse recovery with redundant dictionaries. IEEE Trans. Inform. Theory, 59, 6820–6829.
11. Elad M., Milanfar P. & Rubinstein R. (2007) Analysis versus synthesis in signal priors. Inverse Probl., 23, 947.
12. Eldar Y. C. & Kutyniok G. (2012) Compressed Sensing: Theory and Applications. Cambridge, UK: Cambridge University Press.
13. Feichtinger H. & Strohmer T. (eds.) (1998) Gabor Analysis and Algorithms. Boston, MA: Birkhäuser.
14. Foucart S. (2016) Dictionary-sparse recovery via thresholding-based algorithms. J. Fourier Anal. Appl., 22, 6–19.
15. Foucart S. & Rauhut H. (2013) A Mathematical Introduction to Compressive Sensing. Basel, Switzerland: Birkhäuser.
16. Giryes R., Nam S., Elad M., Gribonval R. & Davies M. E. (2014) Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl., 441, 22–60.
17. Gopi S., Netrapalli P., Jain P. & Nori A. (2013) One-bit compressed sensing: provable support and vector recovery. Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta, GA, pp. 154–162.
18. Jacques L., Degraux K. & De Vleeschouwer C. (2013) Quantized iterative hard thresholding: bridging 1-bit and high-resolution quantized compressed sensing. Proceedings of the 10th International Conference on Sampling Theory and Applications (SampTA), Bremen, Germany, pp. 105–108.
19. Jacques L., Laska J. N., Boufounos P. T. & Baraniuk R. G. (2013) Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Trans. Inform. Theory, 59, 2082–2102.
20. Knudson K., Saab R. & Ward R. (2016) One-bit compressive sensing with norm estimation. IEEE Trans. Inform. Theory, 62, 2748–2758.
21. Krahmer F., Needell D. & Ward R. (2015) Compressive sensing with redundant dictionaries and structured measurements. SIAM J. Math. Anal., 47, 4606–4629.
22. Nam S., Davies M. E., Elad M. & Gribonval R. (2013) The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal., 34, 30–56.
23. Peleg T. & Elad M. (2013) Performance guarantees of the thresholding algorithm for the cosparse analysis model. IEEE Trans. Inform. Theory, 59, 1832–1845.
24. Plan Y. & Vershynin R. (2013a) One-bit compressed sensing by linear programming. Comm. Pure Appl. Math., 66, 1275–1297.
25. Plan Y. & Vershynin R. (2013b) Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Trans. Inform. Theory, 59, 482–494.
26. Plan Y. & Vershynin R. (2014) Dimension reduction by random hyperplane tessellations. Discrete Comput. Geom., 51, 438–461.
27. Rauhut H., Schnass K. & Vandergheynst P. (2008) Compressed sensing and redundant dictionaries. IEEE Trans. Inform. Theory, 54, 2210–2219.
28. Saab R., Wang R. & Yilmaz Ö. (2016) Quantization of compressive samples with stable and robust recovery. Appl. Comput. Harmon. Anal., to appear.
29. Starck J.-L., Elad M. & Donoho D. (2004) Redundant multiscale transforms and their application for morphological component separation. Advances in Imaging and Electron Physics, 132, 287–348.
30. Yan M., Yang Y. & Osher S. (2012) Robust 1-bit compressive sensing using adaptive outlier pursuit. IEEE Trans. Signal Process., 60, 3868–3875.
© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. 
This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices). For permissions, please e-mail: journals.permissions@oup.com.
One-bit compressive sensing of dictionary-sparse signals. Information and Inference: A Journal of the IMA, Volume 7 (1), 1 March 2018. ISSN 2049-8764, eISSN 2049-8772. DOI: 10.1093/imaiai/iax009.
Abstract One-bit compressive sensing has extended the scope of sparse recovery by showing that sparse signals can be accurately reconstructed even when their linear measurements are subject to the extreme quantization scenario of binary samples—only the sign of each linear measurement is maintained. Existing results in one-bit compressive sensing rely on the assumption that the signals of interest are sparse in some fixed orthonormal basis. However, in most practical applications, signals are sparse with respect to an overcomplete dictionary, rather than a basis. There has already been a surge of activity to obtain recovery guarantees under such a generalized sparsity model in the classical compressive sensing setting. Here, we extend the one-bit framework to this important model, providing a unified theory of one-bit compressive sensing under dictionary sparsity. 
Specifically, we analyze several different algorithms—based on convex programming and on hard thresholding—and show that, under natural assumptions on the sensing matrix (satisfied by Gaussian matrices), these algorithms can efficiently recover analysis–dictionary-sparse signals in the one-bit model. 1. Introduction The basic insight of compressive sensing is that a small number of linear measurements can be used to reconstruct sparse signals. In traditional compressive sensing, we wish to reconstruct an $$s$$-sparse1 signal $${\bf x} \in \mathbb{R}^N$$ from linear measurements of the form   $$\label{meas} {\bf y} = {\bf A}{\bf x} \in \mathbb{R}^m \qquad\text{(or its corrupted version {\bf y} = {\bf A}{\bf x} + {\bf e})},$$ (1.1) where $${\bf A}$$ is an $$m\times N$$ measurement matrix. A significant body of work over the past decade has demonstrated that the $$s$$-sparse (or nearly $$s$$-sparse) signal $${\bf x}$$ can be accurately and efficiently recovered from its measurement vector $${\bf y} = {\bf A}{\bf x}$$ when $${\bf A}$$ has independent Gaussian entries, say, and when $$m \asymp s\log(N/s)$$ [1,12,15]. This basic model has been extended in several directions. Two important ones—which we focus on in this work—are (a) extending the set of signals to include the larger and important class of dictionary-sparse signals, and (b) considering highly quantized measurements as in one-bit compressive sensing. Both of these settings have important practical applications and have received much attention in the past few years. However, to the best of our knowledge, they have not been considered together before. In this work, we extend the theory of one-bit compressive sensing to dictionary-sparse signals. Below, we briefly review the background on these notions, set up notation and outline our contributions. 
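As a toy illustration of the classical model (1.1), the following NumPy sketch draws a Gaussian measurement matrix with $$m$$ a small multiple of $$s\log(N/s)$$ and recovers a sparse vector greedily. Orthogonal matching pursuit is used here purely for illustration (it is not one of the algorithms analyzed in this article), and the dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 200, 100, 3  # ambient dimension, measurements, sparsity

# s-sparse ground truth with coefficients bounded away from zero
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.uniform(1, 2, size=s) * rng.choice([-1, 1], size=s)

# Gaussian measurements y = A x as in (1.1)
A = rng.standard_normal((m, N))
y = A @ x

# Orthogonal matching pursuit: greedily grow the support by the column
# most correlated with the residual, then least-squares fit on it.
S, r = [], y.copy()
for _ in range(s):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(N)
x_hat[S] = coef
print(np.linalg.norm(x_hat - x))  # near zero when the support is identified
```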
1.1 One-bit measurements In practice, each entry $$y_i = \langle {\bf a}_i, {\bf x}\rangle$$ (where $${\bf a}_i$$ denotes the $$i$$th row of $${\bf A}$$) of the measurement vector in (1.1) needs to be quantized. That is, rather than observing $${\bf y}={\bf A}{\bf x}$$, one observes $${\bf y} = Q({\bf A}{\bf x})$$ instead, where $$Q: \mathbb{R}^m \rightarrow \mathscr{A}$$ denotes the quantizer that maps each entry of its input to a corresponding quantized value in an alphabet $$\mathscr{A}$$. The so-called one-bit compressive sensing [5] problem refers to the case when $$|\mathscr{A}| = 2$$, and one wishes to recover $${\bf x}$$ from its heavily quantized (one bit) measurements $${\bf y} = Q({\bf A}{\bf x})$$. The simplest quantizer in the one-bit case uses the alphabet $$\mathscr{A} = \{-1, 1\}$$ and acts by taking the sign of each component as   $$\label{eq:quantized} y_i = Q(\langle {\bf a}_i, {\bf x}\rangle) = \mathrm{sgn}(\langle {\bf a}_i, {\bf x}\rangle),$$ (1.2) which we denote in shorthand by $${\bf y} = \mathrm{sgn}({\bf A}{\bf x})$$. Since the publication of [5] in 2008, several efficient methods, both iterative and optimization based, have been developed to recover the signal $${\bf x}$$ (up to normalization) from its one-bit measurements (see e.g. [17–19,24,25,30]). In particular, it is shown [19] that the direction of any $$s$$-sparse signal $${\bf x}$$ can be estimated by some $$\hat{{\bf x}}$$ produced from $${\bf y}$$ with accuracy   $$\left\| \frac{{\bf x}}{\|{\bf x}\|_2} - \frac{\hat{{\bf x}}}{\|\hat{{\bf x}}\|_2}\right\|_2 \leq \varepsilon$$ when the number of measurements is at least   $$m = {\it {\Omega}}\left(\frac{s \ln(N/s)}{\varepsilon} \right)\!.$$ Notice that with measurements of this form, we can only hope to recover the direction of the signal, not the magnitude. 
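The loss of magnitude information under (1.2) is easy to see numerically: rescaling the signal leaves every one-bit measurement unchanged. A minimal sketch, with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 100, 20
A = rng.standard_normal((m, N))
x = rng.standard_normal(N)

y1 = np.sign(A @ x)        # one-bit measurements of x, as in (1.2)
y2 = np.sign(A @ (5 * x))  # one-bit measurements of a rescaled signal

print(np.array_equal(y1, y2))  # True: only the direction of x is observable
```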
However, we can recover the entire signal if we allow for thresholded measurements of the form   $$\label{eq:quantizeddither} y_i = \mathrm{sgn}(\langle {\bf a}_i, {\bf x} \rangle - \tau_i).$$ (1.3) In practice, it is often feasible to obtain quantized measurements of this form, and they have been studied before. Existing works using measurements of the form (1.3) have also allowed for adaptive thresholds; that is, the $$\tau_i$$ can be chosen adaptively based on $$y_j$$ for $$j < i$$. The goal of those works was to improve the convergence rate, i.e. the dependence on $$\varepsilon$$ in the number of measurements $$m$$. It is known that a dependence of $${\it {\Omega}}(1/\varepsilon)$$ is necessary with non-adaptive measurements, but recent works on Sigma-Delta quantization [28] and other schemes [2,20] have shown how to break this barrier using measurements of the form (1.3) with adaptive thresholds. In this article, we neither focus on the decay rate (the dependence on $$\varepsilon$$) nor do we consider adaptive measurements. However, we do consider non-adaptive measurements of both forms (1.2) and (1.3). This allows us to provide results on the reconstruction of both the direction and the magnitude of signals. 1.2 Dictionary sparsity Although the classical setting assumes that the signal $${\bf x}$$ itself is sparse, most signals of interest are not immediately sparse. In the straightforward case, a signal may instead be sparse after some transform; for example, images are known to be sparse in the wavelet domain, sinusoidal signals in the Fourier domain, and so on [9]. Fortunately, the classical framework extends directly to this model, since the product of a Gaussian matrix and an orthonormal basis is still Gaussian.
However, in many practical applications, the situation is not so straightforward, and the signals of interest are sparse, not in an orthonormal basis, but rather in a redundant (highly overcomplete) dictionary; this is known as dictionary sparsity. Signals in radar and sonar systems, for example, are sparsely represented in Gabor frames, which are highly overcomplete and far from orthonormal [13]. Images may be sparsely represented in curvelet frames [6,7], undecimated wavelet frames [29] and other frames, which by design are highly redundant. Such redundancy allows for sparser representations and a wider class of signal representations. Even in the Fourier domain, utilizing an oversampled DFT allows for much more realistic and practical signals to be represented. For these reasons, recent research has extended the compressive sensing framework to the setting, where the signals of interest are sparsified by overcomplete tight frames (see e.g. [8,14,16,27]). Throughout this article, we consider a dictionary $${\bf D} \in \mathbb{R}^{n \times N}$$, which is assumed to be a tight frame, in the sense that   ${\bf D} {\bf D}^* = {\bf I}_n.$ To distinguish between the signal and its sparse representation, we write $${\bf f}\in\mathbb{R}^n$$ for the signal of interest and $${\bf f}={\bf D}{\bf x}$$, where $${\bf x}\in\mathbb{R}^N$$ is a sparse coefficient vector. We then acquire the samples of the form $${\bf y} = {\bf A}{\bf f} = {\bf A}{\bf D}{\bf x}$$ and attempt to recover the signal $${\bf f}$$. Note that, due to the redundancy of $${\bf D}$$, we do not hope to be able to recover a unique coefficient vector $${\bf x}$$. In other words, even when the measurement matrix $${\bf A}$$ is well suited for sparse recovery, the product $${\bf A}{\bf D}$$ may have highly correlated columns, making recovery of $${\bf x}$$ impossible. 
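For concreteness, a tight frame satisfying $${\bf D}{\bf D}^* = {\bf I}_n$$ can be built by keeping $$n$$ rows of an $$N \times N$$ orthogonal matrix; this generic random construction is only for illustration and is not one of the structured frames (Gabor, curvelet, oversampled DFT) mentioned above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 16, 48                 # redundancy factor N/n = 3

# a square orthogonal matrix has orthonormal rows, so any n of its
# N rows form a tight frame D in R^{n x N} with D D* = I_n
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
D = Q[:n, :]

assert np.allclose(D @ D.T, np.eye(n))    # tight-frame condition D D* = I_n
```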
With the introduction of a non-invertible sparsifying transform $${\bf D}$$, it becomes important to distinguish between two related but distinct notions of sparsity. Precisely, we say that $${\bf f}$$ is $$s$$-synthesis sparse if $${\bf f} = {\bf D} {\bf x}$$ for some $$s$$-sparse $${\bf x} \in \mathbb{R}^N$$; $${\bf f}$$ is $$s$$-analysis sparse if $${\bf D}^* {\bf f} \in \mathbb{R}^N$$ is $$s$$-sparse. We note that analysis sparsity is a stronger assumption, because, assuming analysis sparsity, one can always take $${\bf x} = {\bf D}^* {\bf f}$$ in the synthesis sparsity model. See [11] for an introduction to the analysis-sparse model in compressive sensing (also called the analysis cosparse model). Instead of exact sparsity, it is often more realistic to study effective sparsity. We call a coefficient vector $${\bf x} \in \mathbb{R}^N$$ effectively $$s$$-sparse if   $$\|{\bf x}\|_1 \le \sqrt{s} \|{\bf x}\|_2,$$ and we say that $${\bf f}$$ is effectively $$s$$-synthesis sparse if $${\bf f} = {\bf D} {\bf x}$$ for some effectively $$s$$-sparse $${\bf x} \in \mathbb{R}^N$$; $${\bf f}$$ is effectively $$s$$-analysis sparse if $${\bf D}^* {\bf f} \in \mathbb{R}^N$$ is effectively $$s$$-sparse. We use the notation   \begin{align*} {\it {\Sigma}}^N_s & \mbox{for the set of $s$-sparse coefficient vectors in $\mathbb{R}^N$, and} \\ {\it {\Sigma}}_s^{N,{\rm eff}} & \mbox{for the set of effectively $s$-sparse coefficient vectors in $\mathbb{R}^N$.} \end{align*} We also use the notation $$B_2^n$$ for the set of signals with $$\ell_2$$-norm at most $$1$$ (i.e. the unit ball in $$\ell_2^n$$) and $$S^{n-1}$$ for the set of signals with $$\ell_2$$-norm equal to $$1$$ (i.e. the unit sphere in $$\ell_2^n$$). It is now well known that, if $${\bf D}$$ is a tight frame and $${\bf A}$$ satisfies analogous conditions to those in the classical setting (e.g. 
has independent Gaussian entries), then a signal $${\bf f}$$ which is (effectively) analysis- or synthesis sparse can be accurately recovered from traditional compressive sensing measurements $${\bf y} = {\bf A} {\bf f} = {\bf A}{\bf D}{\bf x}$$ (see e.g. [4,8,10,14,16,22,23,27]). 1.3 One-bit measurements with dictionaries: our setup In this article, we study one-bit compressive sensing for dictionary-sparse signals. Precisely, our aim is to recover signals $${\bf f} \in \mathbb{R}^n$$ from the binary measurements   $$y_i = \mathrm{sgn} \langle {\bf a}_i, {\bf f} \rangle \qquad i=1,\ldots,m,$$ or   $$y_i = \mathrm{sgn} \left(\langle {\bf a}_i, {\bf f} \rangle - \tau_i \right) \qquad i = 1,\ldots,m,$$ when these signals are sparse with respect to a dictionary $${\bf D}$$. As in Section 1.2, there are several ways to model signals that are sparse with respect to $${\bf D}$$. In this work, two different signal classes are considered. For the first one, which is more general, our results are based on convex programming. For the second one, which is more restrictive, we can obtain results using a computationally simpler algorithm based on hard thresholding. The first class consists of signals $${\bf f} \in ({\bf D}^*)^{-1} {\it {\Sigma}}_s^{N,\rm{eff}}$$ that are effectively $$s$$-analysis sparse, i.e. they satisfy   $$\label{Assumption} \|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} \|{\bf D}^* {\bf f}\|_2.$$ (1.4) This occurs, of course, when $${\bf D}^* {\bf f}$$ is genuinely sparse (analysis sparsity) and this is realistic if we are working, e.g. with piecewise-constant images, since they are sparse after application of the total variation operator. We consider effectively sparse signals since genuine analysis sparsity is unrealistic when $${\bf D}$$ has columns in general position, as it would imply that $${\bf f}$$ is orthogonal to too many columns of $${\bf D}$$. 
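Effective sparsity is a one-line numerical check; a small helper following the definitions of Section 1.2 (the test vectors are arbitrary), which applies equally to a coefficient vector $${\bf x}$$ or to $${\bf D}^* {\bf f}$$ as in (1.4):

```python
import numpy as np

def is_effectively_sparse(x, s):
    """Effective s-sparsity: ||x||_1 <= sqrt(s) * ||x||_2."""
    return np.linalg.norm(x, 1) <= np.sqrt(s) * np.linalg.norm(x)

# a genuinely s-sparse vector is always effectively s-sparse (Cauchy-Schwarz)
x = np.zeros(100)
x[:7] = [3.0, -1.0, 4.0, -1.0, 5.0, -9.0, 2.0]
assert is_effectively_sparse(x, 7)

# a flat vector of length N is effectively s-sparse only for s >= N
flat = np.ones(100)
assert not is_effectively_sparse(flat, 99)
assert is_effectively_sparse(flat, 100)
```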
The second class consists of signals $${\bf f} \in {\bf D}({\it {\Sigma}}_s^N) \cap ({\bf D}^*)^{-1} {\it {\Sigma}}_{\kappa s}^{N, \rm{eff}}$$ that are both $$s$$-synthesis sparse and effectively $$\kappa s$$-analysis sparse for some $$\kappa \ge 1$$. This will occur as soon as the signals are $$s$$-synthesis sparse, provided we utilize suitable dictionaries $${\bf D} \in \mathbb{R}^{n \times N}$$. One could take, for instance, the matrix of an equiangular tight frame when $$N = n + k$$, $$k = {\rm constant}$$. Other examples of suitable dictionaries found in [21] include harmonic frames again with $$N = n + k$$, $$k = {\rm constant}$$, as well as Fourier and Haar frames with constant redundancy factor $$N/n$$. Figure 1 summarizes the relationship between the various domains we deal with. Fig. 1. The coefficient, signal and measurement domains. 1.4 Contributions Our main results demonstrate that one-bit compressive sensing is viable even when the sparsifying transform is an overcomplete dictionary. As outlined in Section 1.1, we consider both the challenge of recovering the direction $${\bf f}/\|{\bf f}\|_2$$ of a signal $${\bf f}$$, and the challenge of recovering the entire signal (direction and magnitude). Using measurements of the form $$y_i = \mathrm{sgn}\langle {\bf a}_i, {\bf f} \rangle$$, we can recover the direction but not the magnitude; using measurements of the form $$y_i = \mathrm{sgn}\left(\langle {\bf a}_i, {\bf f} \rangle - \tau_i \right)$$, we may recover both. In (one-bit) compressive sensing, two standard families of algorithms are (a) algorithms based on convex programming, and (b) algorithms based on thresholding. In this article, we analyze algorithms from both classes. One reason to study multiple algorithms is to give a more complete landscape of this problem.
Another reason is that the different algorithms come with different trade-offs (between computational complexity and the strength of assumptions required), and it is valuable to explore this space of trade-offs. 1.4.1 Recovering the direction First, we show that the direction of a dictionary-sparse signal can be estimated from one-bit measurements of the type $$\mathrm{sgn}({\bf A} {\bf f})$$. We consider two algorithms: our first approach is based on linear programming, and our second is based on hard thresholding. The linear programming approach is more computationally demanding, but applies to a broader class of signals. In Section 3, we prove that both of these approaches are effective, provided the sensing matrix $${\bf A}$$ satisfies certain properties. In Section 2, we state that these properties are in fact satisfied by a matrix $${\bf A}$$ populated with independent Gaussian entries. We combine all of these results to prove the statement below. As noted above, the different algorithms require different definitions of ‘dictionary sparsity’. In what follows, $$\gamma, C, c$$ refer to absolute numerical constants. Theorem 1 (Informal statement of direction recovery) Let $$\varepsilon \,{>}\, 0$$, let $$m \,{\ge}\, C \varepsilon^{-7} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. 
Then, with failure probability at most $$\gamma \exp(-c \varepsilon^2 m)$$, any dictionary-sparse2 signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f})$$ can be approximated by the output $$\widehat{{\bf f}}$$ of an efficient algorithm with error   $$\left\| \frac{{\bf f}}{\|{\bf f}\|_2} - \frac{\widehat{{\bf f}}}{\|\widehat{{\bf f}}\|_2} \right\|_2 \le \varepsilon.$$ 1.4.2 Recovering the whole signal By using one-bit measurements of the form $$\mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$, where $$\tau_1,\ldots,\tau_m$$ are properly normalized Gaussian random thresholds, we are able to recover not just the direction, but also the magnitude of a dictionary-sparse signal $${\bf f}$$. We consider three algorithms: our first approach is based on linear programming, our second approach on second-order cone programming and our third approach on hard thresholding. Again, there are different trade-offs to the different algorithms. As above, the approach based on hard thresholding is more efficient, whereas the approaches based on convex programming apply to a broader signal class. There is also a trade-off between linear programming and second-order cone programming: the second-order cone program requires knowledge of $$\|{\bf f}\|_2,$$ whereas the linear program does not (although it does require a loose bound), but the second-order cone programming approach applies to a slightly larger class of signals. We show in Section 4 that all three of these algorithms are effective when the sensing matrix $${\bf A}$$ is populated with independent Gaussian entries, and when the thresholds $$\tau_i$$ are also independent Gaussian random variables. We combine the results of Section 4 in the following theorem. 
Theorem 2 (Informal statement of signal estimation) Let $$\varepsilon, r, \sigma > 0$$, let $$m \ge C \varepsilon^{-9} s \ln(eN/s)$$, and let $${\bf A} \in \mathbb{R}^{m \times n}$$ and $$\boldsymbol{\tau} \in \mathbb{R}^m$$ be populated by independent mean-zero normal random variables with variance $$1$$ and $$\sigma^2$$, respectively. Then, with failure probability at most $$\gamma \exp(-c \varepsilon^2 m)$$, any dictionary-sparse$$^2$$ signal $${\bf f} \in \mathbb{R}^n$$ with $$\|{\bf f}\|_2 \le r$$ observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ is approximated by the output $$\widehat{{\bf f}}$$ of an efficient algorithm with error   $$\left\| {\bf f} - \widehat{{\bf f}} \right\|_2 \le \varepsilon r.$$ We have not spelled out the dependence of the number of measurements and the failure probability on the parameters $$r$$ and $$\sigma$$: as long as they are roughly the same order of magnitude, the dependence is absorbed in the constants $$C$$ and $$c$$ (see Section 4 for precise statements). As outlined earlier, an estimate of $$r$$ is required to implement the second-order cone program, but the other two algorithms do not require such an estimate. 1.5 Discussion and future directions The purpose of this work is to demonstrate that techniques from one-bit compressive sensing can be effective for the recovery of dictionary-sparse signals, and we propose several algorithms to accomplish this for various notions of dictionary sparsity. Still, some interesting future directions remain. First, we do not believe that the dependence on $$\varepsilon$$ above is optimal. We do believe instead that a logarithmic dependence on $$\varepsilon$$ for the number of measurements (or equivalently an exponential decay in the oversampling factor $$\lambda = m / (s \ln(eN/s))$$ for the recovery error) is possible by choosing the thresholds $$\tau_1,\ldots,\tau_m$$ adaptively. 
This would be achieved by adjusting the method of [2], but with the strong proviso of exact sparsity. Secondly, it is worth asking to what extent the trade-offs between the different algorithms reflect reality. In particular, is it only an artifact of the proof that the simpler algorithm based on hard thresholding applies to a narrower class of signals? 1.6 Organization The remainder of the article is organized as follows. In Section 2, we outline some technical tools upon which our results rely, namely some properties of Gaussian random matrices. In Section 3, we consider recovery of the direction $${\bf f}/\|{\bf f}\|_2$$ only and we propose two algorithms to achieve it. In Section 4, we present three algorithms for the recovery of the entire signal $${\bf f}$$. Finally, in Section 5, we provide proofs for the results outlined in Section 2. 2. Technical ingredients In this section, we highlight the theoretical properties upon which our results rely. Their proofs are deferred to Section 5 so that the reader does not lose track of our objectives. The first property we put forward is an adaptation to the dictionary case of the so-called sign product embedding property (the term was coined in [18], but the result originally appeared in [25]). Theorem 3 ($${\bf D}$$-SPEP) Let $$\delta > 0$$, let $$m \ge C \delta^{-7} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Then, with failure probability at most $$\gamma \exp(-c \delta^2 m)$$, the renormalized matrix $${\bf A}':= (\sqrt{\pi/2}/m) {\bf A}$$ satisfies the $$s$$th-order sign product embedding property adapted to $${\bf D} \in \mathbb{R}^{n \times N}$$ with constant $$\delta$$ — $${\bf D}$$-SPEP$$(s,\delta)$$ for short—i.e.   
$$\label{SPEP} \left| \langle {\bf A}' {\bf f}, \mathrm{sgn}({\bf A}' {\bf g}) \rangle - \langle {\bf f}, {\bf g} \rangle \right| \le \delta$$ (2.1) holds for all $${\bf f}, {\bf g} \in {\bf D}({\it {\Sigma}}^N_s) \cap S^{n-1}$$. Remark 1 The power $$\delta^{-7}$$ is unlikely to be optimal. At least in the non-dictionary case, i.e. when $${\bf D} = {\bf I}_n$$, it can be reduced to $$\delta^{-2}$$, see [3]. As an immediate consequence of $${\bf D}$$-SPEP, setting $${\bf g} = {\bf f}$$ in (2.1) allows one to deduce a variation of the classical restricted isometry property adapted to $${\bf D}$$, where the inner norm becomes the $$\ell_1$$-norm (we mention in passing that this variation could also be deduced by other means). Corollary 1 ($${\bf D}$$-RIP$$_1$$) Let $$\delta > 0$$, let $$m \ge C \delta^{-7} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Then, with failure probability at most $$\gamma \exp(-c \delta^2 m)$$, the renormalized matrix $${\bf A}':= (\sqrt{\pi/2}/m) {\bf A}$$ satisfies the $$s$$th-order $$\ell_1$$-restricted isometry property adapted to $${\bf D} \in \mathbb{R}^{n \times N}$$ with constant $$\delta$$ — $${\bf D}$$-RIP$$_{1}(s,\delta)$$ for short—i.e.   $$(1-\delta) \| {\bf f}\|_2 \le \| {\bf A}' {\bf f} \|_1 \le (1+\delta) \|{\bf f}\|_2$$ (2.2) holds for all $${\bf f} \in {\bf D}({\it {\Sigma}}_s^N)$$. The next property we put forward is an adaptation of the tessellation of the ‘effectively sparse sphere’ (see [26]) to the dictionary case. In what follows, given a (non-invertible) matrix $${\bf M}$$ and a set $$K$$, we denote by $${\bf M}^{-1} (K)$$ the preimage of $$K$$ with respect to $${\bf M}$$. Theorem 4 (Tessellation) Let $$\varepsilon > 0$$, let $$m \ge C \varepsilon^{-6} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables.
Then, with failure probability at most $$\gamma \exp(-c \varepsilon^2 m)$$, the rows $${\bf a}_1,\ldots,{\bf a}_m \in \mathbb{R}^n$$ of $${\bf A}$$ $$\varepsilon$$-tessellate the effectively $$s$$-analysis-sparse sphere—we write that $${\bf A}$$ satisfies $${\bf D}$$-TES$$(s,\varepsilon)$$ for short—i.e.   $$\label{Tes} [{\bf f},{\bf g} \in ({\bf D}^*)^{-1}({\it {\Sigma}}_{s}^{N,{\rm eff}}) \cap S^{n-1} : \; \mathrm{sgn} \langle {\bf a}_i, {\bf f} \rangle = \mathrm{sgn} \langle {\bf a}_i, {\bf g} \rangle \mbox{ for all } i = 1,\ldots,m] \Longrightarrow [\|{\bf f} - {\bf g}\|_2 \le \varepsilon].$$ (2.3) 3. Signal estimation: direction only Throughout this section, given a measurement matrix $${\bf A} \in \mathbb{R}^{m \times n}$$ with rows $${\bf a}_1,\ldots,{\bf a}_m \in \mathbb{R}^n$$, the signals $${\bf f} \in \mathbb{R}^n$$ are acquired via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f}) \in \{-1,+1\}^m$$, i.e.   $$y_i = \mathrm{sgn} \langle {\bf a}_i, {\bf f} \rangle \qquad i = 1,\ldots,m.$$ Under this model, all $$c {\bf f}$$ with $$c>0$$ produce the same one-bit measurements, so one can only hope to recover the direction of $${\bf f}$$. We present two methods to do so, one based on linear programming and the other on hard thresholding.
3.1 Linear programming Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f})$$, the optimization scheme we consider here consists in outputting the signal $${\bf f}_{\rm lp}$$ that solves   $$\label{LPforDir} \underset{{{\bf h} \in \mathbb{R}^n}}{\rm minimize}\, \| {\bf D}^* {\bf h}\|_1 \qquad \mbox{subject to} \quad \mathrm{sgn}({\bf A} {\bf h}) = {\bf y} \quad \mbox{and} \quad \|{\bf A} {\bf h}\|_1 = 1.$$ (3.1) This is in fact a linear program (and thus may be solved efficiently), since the condition $$\mathrm{sgn}({\bf A} {\bf h}) = {\bf y}$$ reads   $$y_i ({\bf A} {\bf h})_i \ge 0 \qquad \mbox{for all } i = 1,\ldots, m,$$ and, under this constraint, the condition $$\|{\bf A} {\bf h}\|_1 = 1$$ reads   $$\sum_{i=1}^m y_i ({\bf A} {\bf h})_i = 1.$$ Theorem 5 If $${\bf A} \in \mathbb{R}^{m \times n}$$ satisfies both $${\bf D}$$-TES$$(36s,\varepsilon)$$ and $${\bf D}$$-RIP$$_1(25s,1/5)$$, then any effectively $$s$$-analysis-sparse signal $${\bf f} \in ({\bf D}^*)^{-1}{\it {\Sigma}}_s^{N,{\rm eff}}$$ observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f})$$ is directionally approximated by the output $${\bf f}_{\rm lp}$$ of the linear program (3.1) with error   $$\left\| \frac{{\bf f}}{\|{\bf f}\|_2} - \frac{{\bf f}_{\rm lp}}{\|{\bf f}_{\rm lp}\|_2} \right\|_2 \le \varepsilon.$$ Proof. The main step is to show that $${\bf f}_{\rm lp}$$ is effectively $$36s$$-analysis sparse when $${\bf D}$$-RIP$$_1(t,\delta)$$ holds with $$t= 25s$$ and $$\delta=1/5$$. Then, since both $${\bf f}/\|{\bf f}\|_2$$ and $${\bf f}_{\rm lp} / \|{\bf f}_{\rm lp}\|_2$$ belong to $$({\bf D}^*)^{-1}{\it {\Sigma}}_{36 s}^{N,{\rm eff}} \cap S^{n-1}$$ and have the same sign observations, $${\bf D}$$-TES$$(36s,\varepsilon)$$ implies the desired conclusion. To prove the effective analysis sparsity of $${\bf f}_{\rm lp}$$, we first estimate $$\|{\bf A} {\bf f}\|_1$$ from below.
For this purpose, let $$T_0$$ denote an index set of $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, $$T_1$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, $$T_2$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, etc. We have   \begin{align*} \|{\bf A} {\bf f} \|_1 & = \|{\bf A} {\bf D} {\bf D}^* {\bf f}\|_1 = \left\| {\bf A} {\bf D} \left(\sum_{k \ge 0} ({\bf D}^*{\bf f})_{T_k} \right) \right\|_1 \ge \|{\bf A} {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right)\!\|_1 - \sum_{k \ge 1} \|{\bf A} {\bf D} \left(({\bf D}^* {\bf f})_{T_k} \right)\!\|_1\\ & \ge (1-\delta) \|{\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right)\!\|_2 - \sum_{k \ge 1} (1+\delta) \|{\bf D} \left(({\bf D}^* {\bf f})_{T_k} \right)\!\|_2, \end{align*} where the last step used $${\bf D}$$-RIP$$_1(t,\delta)$$. We notice that, for $$k \ge 1$$,   $$\|{\bf D} \left(({\bf D}^* {\bf f})_{T_k} \right)\!\|_2 \le \| ({\bf D}^* {\bf f})_{T_k}\! \|_2 \le \frac{1}{\sqrt{t}} \| ({\bf D}^* {\bf f})_{T_{k-1}}\!\|_1,$$ from where it follows that   $$\label{LowerAf} \|{\bf A} {\bf f}\|_1 \ge (1-\delta) \|{\bf D} \left(({\bf D}^* {\bf f})_{T_0}\right)\!\|_2 - \frac{1+\delta}{\sqrt{t}} \|{\bf D}^* {\bf f} \|_1.$$ (3.2) In addition, we observe that   \begin{align*} \|{\bf D}^* {\bf f} \|_2 & = \|{\bf f}\|_2 = \|{\bf D} {\bf D}^* {\bf f}\|_2 = \left\| {\bf D} \left(\sum_{k \ge 0} ({\bf D}^* {\bf f})_{T_k} \right) \right\|_2 \le \left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right) \right\|_2 + \sum_{k \ge 1} \left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_k} \right) \right\|_2\\ & \le \left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right) \right\|_2 + \frac{1}{\sqrt{t}} \|{\bf D}^* {\bf f} \|_1. 
\end{align*} In view of the effective sparsity of $${\bf D}^* {\bf f}$$, we obtain   $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} \|{\bf D}^* {\bf f}\|_2 \le \sqrt{s}\left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right) \right\|_2 + \sqrt{s/t} \|{\bf D}^* {\bf f} \|_1,$$ hence   $$\label{LowerDD*T0} \left\| {\bf D} \left(({\bf D}^* {\bf f})_{T_0} \right) \right\|_2 \ge \frac{1- \sqrt{s/t}}{\sqrt{s}} \|{\bf D}^* {\bf f} \|_1.$$ (3.3) Substituting (3.3) in (3.2) yields   $$\label{LowerAf2} \|{\bf A} {\bf f}\|_1 \ge \left((1-\delta)(1-\sqrt{s/t}) - (1+\delta)(\sqrt{s/t}) \right) \frac{1}{\sqrt{s}} \|{\bf D}^* {\bf f}\|_1 = \frac{2/5}{\sqrt{s}} \|{\bf D}^* {\bf f}\|_1,$$ (3.4) where we have used the values $$t = 25s$$ and $$\delta=1/5$$. This lower estimate for $$\|{\bf A} {\bf f} \|_1$$, combined with the minimality property of $${\bf f}_{\rm lp}$$, allows us to derive that   $$\label{UpperD*fhat} \|{\bf D}^* {\bf f}_{\rm lp} \|_1 \le \|{\bf D}^*({\bf f}/ \|{\bf A} {\bf f}\|_1)\|_1 = \frac{\|{\bf D}^* {\bf f}\|_1}{\|{\bf A} {\bf f} \|_1} \le (5/2) \sqrt{s}.$$ (3.5) Next, with $$\widehat{T}_0$$ denoting an index set of $$t$$ largest absolute entries of $${\bf D}^* {\bf f}_{\rm lp}$$, $$\widehat{T}_1$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}_{\rm lp}$$, $$\widehat{T}_2$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}_{\rm lp}$$, etc., we can write   \begin{align*} 1 & = \|{\bf A} {\bf f}_{\rm lp} \|_1 = \|{\bf A} {\bf D} {\bf D}^* {\bf f}_{\rm lp} \|_1 = \left\| {\bf A} {\bf D} \left(\sum_{k \ge 0} ({\bf D}^* {\bf f}_{\rm lp})_{\widehat{T}_k} \right) \right\|_1 \le \sum_{k \ge 0} \left\| {\bf A} {\bf D} \left(({\bf D}^* {\bf f}_{\rm lp})_{\widehat{T}_k} \right) \right\|_1\\ & \le \sum_{k \ge 0} (1+\delta) \left\| {\bf D} \left(({\bf D}^* {\bf f}_{\rm lp})_{\widehat{T}_k} \right) \right\|_2 = (1+\delta) \left[\!\left\| ({\bf D}^* {\bf f}_{\rm lp})_{\widehat{T}_0} \right\|_2 + \sum_{k \ge 1} \!\left\| ({\bf D}^* 
{\bf f}_{\rm lp})_{\widehat{T}_k} \right\|_2 \right]\\ & \le (1+\delta) \left[\|{\bf D}^* {\bf f}_{\rm lp} \|_2 + \frac{1}{\sqrt{t}} \|{\bf D}^* {\bf f}_{\rm lp}\|_1 \right] \le (1+\delta) \left[\|{\bf D}^* {\bf f}_{\rm lp} \|_2 + (5/2)\sqrt{s/t} \right]. \end{align*} This chain of inequalities shows that   $$\label{LowerD*fhat} \|{\bf D}^* {\bf f}_{\rm lp} \|_2 \ge \frac{1-(5/2)\sqrt{s/t}}{1+\delta} = \frac{5}{12}.$$ (3.6) Combining (3.5) and (3.6), we obtain   $$\|{\bf D}^* {\bf f}_{\rm lp} \|_1 \le 6 \sqrt{s} \|{\bf D}^* {\bf f}_{\rm lp} \|_2.$$ In other words, $${\bf D}^* {\bf f}_{\rm lp}$$ is effectively $$36s$$-sparse, which is what was needed to conclude the proof. □ Remark 2 We point out that if $${\bf f}$$ were genuinely, instead of effectively, $$s$$-analysis sparse, then a lower bound of the type (3.4) would be immediate from the $${\bf D}$$-RIP$$_1$$. We also point out that our method of proving that the linear program outputs an effectively analysis-sparse signal is new even in the case $${\bf D} = {\bf I}_n$$. In fact, it makes it possible to remove a logarithmic factor from the number of measurements in this ‘non-dictionary’ case, too (compare with [24]). Furthermore, it allows for an analysis of the linear program (3.1) based only on deterministic conditions that the matrix $${\bf A}$$ may satisfy. 3.2 Hard thresholding Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f})$$, the hard thresholding scheme we consider here consists in constructing a signal $${\bf f}_{\rm ht} \in \mathbb{R}^n$$ as   $$\label{HTforDir} {\bf f}_{\rm ht} = {\bf D} {\bf z}, \qquad \mbox{where } {\bf z} := H_t({\bf D}^* {\bf A}^* {\bf y}),$$ (3.7) and where $$H_t$$ denotes the hard thresholding operator that keeps the $$t$$ largest absolute entries of a vector and sets the remaining ones to zero. Our recovery result holds for $$s$$-synthesis-sparse signals that are also effectively $$\kappa s$$-analysis sparse for some $$\kappa \ge 1$$ (we discussed in Section 1 some choices of dictionaries $${\bf D}$$ making this happen).
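The estimator (3.7) amounts to one matrix-vector pipeline plus a hard threshold. The sketch below uses a randomly generated tight frame purely for illustration; such a generic frame need not satisfy the effective analysis-sparsity hypothesis of the recovery result that follows, so this demonstrates the computation rather than the error guarantee:

```python
import numpy as np

def hard_threshold(v, t):
    """H_t: keep the t largest-magnitude entries of v, zero out the rest."""
    z = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-t:]
    z[keep] = v[keep]
    return z

def direction_estimate_ht(A, D, y, t):
    """Hard-thresholding estimate (3.7): f_ht = D H_t(D* A* y)."""
    return D @ hard_threshold(D.T @ (A.T @ y), t)

rng = np.random.default_rng(3)
n, N, s, m = 32, 64, 3, 4000

Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
D = Q[:n, :]                                   # a random tight frame, D D* = I_n

x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
f = D @ x                                      # an s-synthesis-sparse signal
f /= np.linalg.norm(f)

A = rng.standard_normal((m, n))
y = np.sign(A @ f)                             # one-bit measurements

f_ht = direction_estimate_ht(A, D, y, t=4 * s)
# with heavy oversampling the estimate is positively correlated with f
assert np.dot(f, f_ht) > 0
```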
Theorem 6 If $${\bf A} \in \mathbb{R}^{m \times n}$$ satisfies $${\bf D}$$-SPEP$$(s+t,\varepsilon/8)$$, $$t = \lceil 16 \varepsilon^{-2} \kappa s \rceil$$, then any $$s$$-synthesis-sparse signal $${\bf f} \in {\bf D}({\it {\Sigma}}_s^N)$$ with $${\bf D}^* {\bf f} \in {\it {\Sigma}}_{\kappa s}^{N,{\rm eff}}$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f})$$ is directionally approximated by the output $${\bf f}_{\rm ht}$$ of the hard thresholding (3.7) with error   $$\left\| \frac{{\bf f}}{\|{\bf f}\|_2} - \frac{{\bf f}_{\rm ht}}{\|{\bf f}_{\rm ht}\|_2} \right\|_2 \le \varepsilon.$$ Proof. We assume without loss of generality that $$\|{\bf f}\|_2 = 1$$. Let $$T=T_0$$ denote an index set of $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, $$T_1$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, $$T_2$$ an index set of next $$t$$ largest absolute entries of $${\bf D}^* {\bf f}$$, etc. We start by noticing that $${\bf z}$$ is a better $$t$$-sparse approximation to $${\bf D}^* {\bf A}^* {\bf y} = {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f})$$ than $$[{\bf D}^* {\bf f}]_T$$, so we can write   $$\| {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) - {\bf z} \|_2^2 \le \|{\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) - [{\bf D}^* {\bf f}]_T \|_2^2,$$ i.e.   
$$\| ({\bf D}^* {\bf f} - {\bf z}) - ({\bf D}^* {\bf f} - {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f})) \|_2^2 \le \| ({\bf D}^* {\bf f} - {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f})) - [{\bf D}^* {\bf f}]_{\overline{T}} \|_2^2.$$ Expanding the squares and rearranging gives   \begin{align} \label{Term1} \|{\bf D}^* {\bf f} - {\bf z} \|_2^2 & \le 2 \langle {\bf D}^* {\bf f} - {\bf z}, {\bf D}^* {\bf f} - {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle \\ \end{align} (3.8)  \begin{align} \label{Term2} & - 2 \langle [{\bf D}^* {\bf f}]_{\overline{T}} , {\bf D}^* {\bf f} - {\bf D}^* {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle \\ \end{align} (3.9)  \begin{align} \label{Term3} & + \| [{\bf D}^* {\bf f}]_{\overline{T}} \|_2^2. \end{align} (3.10) To bound (3.10), we invoke [15, Theorem 2.5] and the effective analysis sparsity of $${\bf f}$$ to derive   $$\| [{\bf D}^* {\bf f}]_{\overline{T}} \|_2^2 \le \frac{1}{4t} \| {\bf D}^* {\bf f} \|_1^2 \le \frac{\kappa s}{4t} \| {\bf D}^* {\bf f} \|_2^2 = \frac{\kappa s}{4t} \|{\bf f} \|_2^2 = \frac{\kappa s}{4t}.$$ To bound (3.8) in absolute value, we notice that it can be written as   \begin{align*} 2 | \langle {\bf D} {\bf D}^* {\bf f} - {\bf D} {\bf z}, &{\bf f} - {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle | = 2 | \langle {\bf f} - {\bf f}_{\rm ht}, {\bf f} - {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle | \\ & = 2 | \langle {\bf f} - {\bf f}_{\rm ht}, {\bf f} \rangle - \langle {\bf A} ({\bf f} - {\bf f}_{\rm ht}), \mathrm{sgn}({\bf A} {\bf f}) \rangle | \le 2 \varepsilon' \|{\bf f} - {\bf f}_{\rm ht} \|_2, \end{align*} where the last step followed from $${\bf D}$$-SPEP$$(s+t,\varepsilon')$$, $$\varepsilon' := \varepsilon /8$$. 
Finally, (3.9) can be bounded in absolute value by   \begin{align*} 2 & \sum_{k \ge 1} | \langle [{\bf D}^* {\bf f}]_{T_k}, {\bf D}^*({\bf f} - {\bf A}^* \mathrm{sgn}({\bf A} {\bf f})) \rangle | = 2 \sum_{k \ge 1} | \langle {\bf D}([{\bf D}^* {\bf f}]_{T_k}), {\bf f} - {\bf A}^* \mathrm{sgn}({\bf A} {\bf f}) \rangle | \\ & = 2 \sum_{k \ge 1} | \langle {\bf D}([{\bf D}^* {\bf f}]_{T_k}), {\bf f} \rangle - \langle {\bf A} ({\bf D}([{\bf D}^* {\bf f}]_{T_k})), \mathrm{sgn}({\bf A} {\bf f}) \rangle | \le 2 \sum_{k \ge 1} \varepsilon' \| {\bf D}([{\bf D}^* {\bf f}]_{T_k}) \|_2\\ & \le 2 \varepsilon' \sum_{k \ge 1} \| [{\bf D}^* {\bf f}]_{T_k} \|_2 \le 2 \varepsilon' \sum_{k \ge 1} \frac{\| [{\bf D}^* {\bf f}]_{T_{k-1}} \|_1}{\sqrt{t}} \le 2 \varepsilon' \frac{\|{\bf D}^* {\bf f}\|_1}{\sqrt{t}} \le 2 \varepsilon' \frac{\sqrt{\kappa s} \|{\bf D}^* {\bf f}\|_2}{\sqrt{t}} = 2 \varepsilon' \sqrt{\frac{\kappa s}{t}}. \end{align*} Putting everything together, we obtain   $$\|{\bf D}^* {\bf f} - {\bf z} \|_2^2 \le 2 \varepsilon' \|{\bf f} - {\bf f}_{\rm ht}\|_2 + 2 \varepsilon' \sqrt{\frac{\kappa s}{t}} + \frac{\kappa s}{4t}.$$ In view of $$\|{\bf f} - {\bf f}_{\rm ht}\|_2 = \|{\bf D} ({\bf D}^* {\bf f} - {\bf z}) \|_2 \le \|{\bf D}^* {\bf f} - {\bf z}\|_2$$, it follows that   $$\|{\bf f} - {\bf f}_{\rm ht}\|_2^2 \le 2 \varepsilon' \|{\bf f} - {\bf f}_{\rm ht}\|_2 + 2 \varepsilon' \sqrt{\frac{\kappa s}{t}} + \frac{\kappa s}{4t}, \quad \mbox{i.e.} \; (\|{\bf f} - {\bf f}_{\rm ht}\|_2 - \varepsilon')^2 \le {\varepsilon'}^2 + 2 \varepsilon' \sqrt{\frac{\kappa s}{t}} + \frac{\kappa s}{4t} \le \left(\varepsilon' \hspace{-0.5mm}+\hspace{-0.5mm} \sqrt{\frac{\kappa s}{t}} \right)^2 \hspace{-1mm}.$$ This implies that   $$\|{\bf f} - {\bf f}_{\rm ht}\|_2 \le 2 \varepsilon' + \sqrt{\frac{\kappa s}{t}}.$$ Finally, since $${\bf f}_{\rm ht}/\|{\bf f}_{\rm ht}\|_2$$ is the best $$\ell_2$$-normalized approximation to $${\bf f}_{\rm ht}$$, we conclude that   $$\left\| {\bf f} - \frac{{\bf 
f}_{\rm ht}}{\|{\bf f}_{\rm ht}\|_2} \right\|_2 \le \|{\bf f} - {\bf f}_{\rm ht}\|_2 + \left\| {\bf f}_{\rm ht} - \frac{{\bf f}_{\rm ht}}{\|{\bf f}_{\rm ht}\|_2} \right\|_2 \le 2 \|{\bf f} - {\bf f}_{\rm ht}\|_2 \le 4 \varepsilon' + 2 \sqrt{\frac{\kappa s}{t}}.$$ The announced result follows from our choices of $$t$$ and $$\varepsilon'$$. □ 4. Signal estimation: direction and magnitude Since information of the type $$y_i = \mathrm{sgn} \langle {\bf a}_i,{\bf f} \rangle$$ can at best allow one to estimate the direction of a signal $${\bf f} \in \mathbb{R}^n$$, we consider in this section information of the type   $$y_i = \mathrm{sgn}(\langle {\bf a}_i, {\bf f} \rangle - \tau_i) \qquad i = 1,\ldots,m ,$$ for some thresholds $$\tau_1,\ldots,\tau_m$$ introduced before quantization. In the rest of this section, we give three methods for recovering $${\bf f}$$ in its entirety. The first one is based on linear programming, the second one on second-order cone programming and the last one on hard thresholding. We are going to show that using these algorithms, one can estimate both the direction and the magnitude of a dictionary-sparse signal $${\bf f} \in \mathbb{R}^n$$ given a prior magnitude bound such as $$\| {\bf f} \|_2 \le r$$. We simply rely on the previous results by ‘lifting’ the situation from $$\mathbb{R}^n$$ to $$\mathbb{R}^{n+1}$$, in view of the observation that $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f} - \boldsymbol{\tau})$$ can be interpreted as   $${\bf y} = \mathrm{sgn}\big(\widetilde{{\bf A}} \widetilde{{\bf f}}\big), \qquad \mbox{where } \widetilde{{\bf A}} := \left[{\bf A} \ \Big| \ {-\boldsymbol{\tau}/\sigma} \right] \in \mathbb{R}^{m \times (n+1)} \quad \mbox{and} \quad \widetilde{{\bf f}} := \begin{bmatrix} {\bf f} \\ \hline \sigma \end{bmatrix} \in \mathbb{R}^{n+1},$$ since $$\widetilde{{\bf A}} \widetilde{{\bf f}} = {\bf A}{\bf f} - \boldsymbol{\tau}$$; note that, when the $$\tau_i$$ are independent $$\mathscr{N}(0,\sigma^2)$$ random variables, the lifted matrix $$\widetilde{{\bf A}}$$ is again populated by independent standard normal entries. The following lemma will be equally useful when dealing with linear programming, second-order cone programming or hard thresholding schemes.
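This lifting can be checked numerically: taking $$\widetilde{{\bf A}} := [{\bf A} \mid -\boldsymbol{\tau}/\sigma]$$ and $$\widetilde{{\bf f}} := [{\bf f}; \sigma]$$ (an assumed concrete form of the lifting, with $$\tau_i \sim \mathscr{N}(0,\sigma^2)$$), one has $$\widetilde{{\bf A}} \widetilde{{\bf f}} = {\bf A}{\bf f} - \boldsymbol{\tau}$$, so the thresholded one-bit measurements of $${\bf f}$$ are exactly plain sign measurements of the lifted signal:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, sigma = 120, 10, 2.0
A = rng.standard_normal((m, n))
tau = sigma * rng.standard_normal(m)      # tau_i ~ N(0, sigma^2)
f = rng.standard_normal(n)

# lifting to R^{n+1}: -tau/sigma is standard normal and independent of A,
# so the lifted matrix again has iid standard normal entries
A_tilde = np.hstack([A, (-tau / sigma)[:, None]])
f_tilde = np.append(f, sigma)

assert np.allclose(A_tilde @ f_tilde, A @ f - tau)
assert np.array_equal(np.sign(A_tilde @ f_tilde), np.sign(A @ f - tau))
```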
Lemma 1 For $$\widetilde{{\bf f}}, \widetilde{{\bf g}} \in \mathbb{R}^{n+1}$$ written as   $$\widetilde{{\bf f}} =: \begin{bmatrix} {\bf f}_{[n]} \\ \hline f_{n+1} \end{bmatrix} \qquad \mbox{and} \qquad \widetilde{{\bf g}} =: \begin{bmatrix} {\bf g}_{[n]} \\ \hline g_{n+1} \end{bmatrix}$$ with $${\bf f}_{[n]}, {\bf g}_{[n]} \in \mathbb{R}^n$$ and with $$f_{n+1} \not= 0$$, $$g_{n+1} \not= 0$$, one has   $$\left\| \frac{{\bf f}_{[n]}}{f_{n+1}} - \frac{{\bf g}_{[n]}}{g_{n+1}} \right\|_2 \le \frac{\|\widetilde{{\bf f}}\|_2 \|\widetilde{{\bf g}}\|_2}{|f_{n+1}||g_{n+1}|} \left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2.$$ Proof. By using the triangle inequality in $$\mathbb{R}^n$$ and the Cauchy–Schwarz inequality in $$\mathbb{R}^2$$, we can write   \begin{align*} \left\| \frac{{\bf f}_{[n]}}{f_{n+1}} - \frac{{\bf g}_{[n]}}{g_{n+1}} \right\|_2 & = \|\widetilde{{\bf f}}\|_2 \left\| \frac{1/f_{n+1}}{\|\widetilde{{\bf f}}\|_2} {\bf f}_{[n]} - \frac{1/g_{n+1}}{\|\widetilde{{\bf f}}\|_2} {\bf g}_{[n]} \right\|_2\\ & \le \|\widetilde{{\bf f}}\|_2 \left(\frac{1}{|f_{n+1}|} \left\| \frac{{\bf f}_{[n]}}{\| \widetilde{{\bf f}} \|_2} - \frac{{\bf g}_{[n]}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 + \left| \frac{1/g_{n+1}}{\|\widetilde{{\bf f}}\|_2} - \frac{1/f_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right| \|{\bf g}_{[n]}\|_2 \right)\\ & = \|\widetilde{{\bf f}}\|_2 \left(\frac{1}{|f_{n+1}|} \left\| \frac{{\bf f}_{[n]}}{\| \widetilde{{\bf f}} \|_2} - \frac{{\bf g}_{[n]}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 + \frac{\|{\bf g}_{[n]}\|_2}{|f_{n+1}| |g_{n+1}|} \left| \frac{f_{n+1}}{\|\widetilde{{\bf f}}\|_2} - \frac{g_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right| \right)\\ & \le \|\widetilde{{\bf f}}\|_2 \left[\frac{1}{|f_{n+1}|^2} + \frac{\|{\bf g}_{[n]}\|_2^2}{|f_{n+1}|^2 |g_{n+1}|^2} \right]^{1/2} \left[\left\| \frac{{\bf f}_{[n]}}{\| \widetilde{{\bf f}} \|_2} - \frac{{\bf g}_{[n]}}{\|\widetilde{{\bf
g}}\|_2} \right\|_2^2 + \left| \frac{f_{n+1}}{\|\widetilde{{\bf f}}\|_2} - \frac{g_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right|^2 \right]^{1/2}\\ & = \|\widetilde{{\bf f}}\|_2 \left[\frac{\|\widetilde{{\bf g}}\|_2^2}{|f_{n+1}|^2 |g_{n+1}|^2} \right]^{1/2} \left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2, \end{align*} which is the announced result. □ 4.1 Linear programming Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f} - \boldsymbol{\tau})$$ with $$\tau_1,\ldots,\tau_m \sim \mathscr{N}(0,\sigma^2)$$, the optimization scheme we consider here consists in outputting the signal   $$\label{Defflp} {\bf f}_{\rm LP} = \frac{\sigma}{\widehat{u}} \widehat{{\bf h}} \in \mathbb{R}^n,$$ (4.1) where $$\widehat{{\bf h}} \in \mathbb{R}^{n}$$ and $$\widehat{u} \in \mathbb{R}$$ are solutions of   $$\label{OptProg} \underset{{\bf h} \in \mathbb{R}^n, u \in \mathbb{R}}{\rm minimize \;} \; \|{\bf D}^* {\bf h} \|_1 + |u| \qquad \mbox{subject to} \quad \mathrm{sgn}({\bf A} {\bf h} - u \boldsymbol{\tau} / \sigma) = {\bf y}, \quad \|{\bf A} {\bf h} - u \boldsymbol{\tau} / \sigma \|_1 = 1.$$ (4.2) Theorem 7 Let $$\varepsilon, r, \sigma > 0$$, let $$m \ge C (r/\sigma+\sigma/r)^6 \varepsilon^{-6} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Furthermore, let $$\tau_1,\ldots,\tau_m$$ be independent normal random variables with mean zero and variance $$\sigma^2$$ that are also independent from the entries of $${\bf A}$$. 
Then, with failure probability at most $$\gamma \exp(-c m \varepsilon^2 r^2 \sigma^2/(r^2+\sigma^2)^2)$$, any effectively $$s$$-analysis sparse $${\bf f} \in \mathbb{R}^n$$ satisfying $$\|{\bf f}\|_2 \le r$$ and observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ is approximated by $${\bf f}_{\rm LP}$$ given in (4.1) with error   $$\left\| {\bf f}- {\bf f}_{\rm LP} \right\|_2 \le \varepsilon r.$$ Proof. Let us introduce the ‘lifted’ signal $$\widetilde{{\bf f}} \in \mathbb{R}^{n+1}$$, the ‘lifted’ tight frame $$\widetilde{{\bf D}} \in \mathbb{R}^{(n+1)\times (N+1)}$$, and the ‘lifted’ measurement matrix $$\widetilde{{\bf A}} \in \mathbb{R}^{m \times (n+1)}$$ defined as   $$\widetilde{{\bf f}} := \begin{bmatrix} {\bf f} \\ \hline \sigma \end{bmatrix}, \qquad \widetilde{{\bf D}} := \left[\begin{array}{c|c} {\bf D} & {\bf 0}\\ \hline {\bf 0} & 1 \end{array}\right], \qquad \widetilde{{\bf A}} := \left[\begin{array}{c|c} & -\tau_1/\sigma\\ {\bf A} & \vdots \\ & -\tau_m/\sigma \end{array}\right].$$ (4.3) First, we observe that $$\widetilde{{\bf f}}$$ is effectively $$(s+1)$$-analysis sparse (relative to $$\widetilde{{\bf D}}$$), since $$\widetilde{{\bf D}}^* \widetilde{{\bf f}} = \begin{bmatrix} {\bf D}^* {\bf f} \\ \hline \sigma \end{bmatrix}$$, hence   $$\frac{\|\widetilde{{\bf D}}^* \widetilde{{\bf f}}\|_1}{\|\widetilde{{\bf D}}^* \widetilde{{\bf f}}\|_2} = \frac{\|{\bf D}^* {\bf f} \|_1 + \sigma}{\sqrt{\|{\bf D}^* {\bf f}\|_2^2+\sigma^2}} \le \frac{\sqrt{s} \|{\bf D}^* {\bf f}\|_2 + \sigma}{\sqrt{\|{\bf D}^* {\bf f}\|_2^2+\sigma^2}} \le \sqrt{s+1}.$$ Next, we observe that the matrix $$\widetilde{{\bf A}} \in \mathbb{R}^{m \times (n+1)}$$, populated by independent standard normal random variables, satisfies $$\widetilde{{\bf D}}$$-TES$$(36(s+1),\varepsilon')$$, $$\varepsilon' := \dfrac{r \sigma}{2(r^2 + \sigma^2)} \varepsilon$$ and $$\widetilde{{\bf D}}$$-RIP$$_1(25(s+1),1/5)$$ with failure probability at most $$\gamma \exp(-c m {\varepsilon'}^2) + \gamma' \exp(-c' m) \le \gamma'' \exp(-c'' m \varepsilon^2 r^2 \sigma^2 / (r^2 +
\sigma^2)^2)$$, since $$m \ge C {\varepsilon'}^{-6} (s+1) \ln(eN/(s+1))$$ and $$m \ge C (1/5)^{-7} (s+1) \ln(e N / (s+1))$$ are ensured by our assumption on $$m$$. Finally, we observe that $${\bf y} = \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf f}})$$ and that the optimization program (4.2) reads   $$\underset{\widetilde{{\bf h}} \in \mathbb{R}^{n+1}}{\rm minimize \;} \|\widetilde{{\bf D}}^* \widetilde{{\bf h}} \|_1 \qquad \mbox{subject to} \quad \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf h}}) = {\bf y}, \quad \|\widetilde{{\bf A}} \widetilde{{\bf h}} \|_1 = 1.$$ Denoting its solution as $$\widetilde{{\bf g}} =: \begin{bmatrix} {\bf g}_{[n]} \\ \hline g_{n+1} \end{bmatrix}$$, Theorem 5 implies that   $$\left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 \le \varepsilon'.$$ In particular, looking at the last coordinate, this inequality yields   $$\left| \frac{\sigma}{\|\widetilde{{\bf f}}\|_2} - \frac{g_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right| \le \varepsilon', \qquad \mbox{hence} \qquad \frac{|g_{n+1}|}{\|\widetilde{{\bf g}}\|_2} \ge \frac{\sigma}{\|\widetilde{{\bf f}}\|_2} - \varepsilon' \ge \frac{\sigma}{\sqrt{r^2+\sigma^2}} - \frac{\sigma}{2 \sqrt{r^2 + \sigma^2}} = \frac{\sigma}{2 \sqrt{r^2 + \sigma^2}}.$$ In turn, applying Lemma 1 to $$\widetilde{{\bf f}}$$ and $$\widetilde{{\bf g}}$$, while taking $${\bf f}_{[n]} = {\bf f}$$, $$f_{n+1} = \sigma$$ and $${\bf f}_{\rm LP} = (\sigma/g_{n+1}) {\bf g}_{[n]}$$ into consideration, gives   $$\left\| \frac{{\bf f}}{\sigma} - \frac{{\bf f}_{\rm LP}}{\sigma} \right\|_2 \le \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{\| \widetilde{{\bf g}} \|_2}{|g_{n+1}|} \varepsilon' \le \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{2\sqrt{r^2+\sigma^2}}{\sigma} \frac{r \sigma}{2(r^2 + \sigma^2)} \varepsilon = \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{r}{\sqrt{r^2+\sigma^2}} \varepsilon,$$ so that   $$\| {\bf f} - {\bf f}_{\rm LP} \|_2 \le \|\widetilde{{\bf f}}\|_2 \frac{r}{\sqrt{r^2+\sigma^2}} \varepsilon \le r \varepsilon.$$ This establishes the announced result.
□ Remark 3 The recovery scheme (4.2) does not require an estimation of $$r$$ to be run. The recovery scheme presented next does require such an estimation. Moreover, it is a second-order cone program instead of a simpler linear program. However, it has one noticeable advantage, namely that it applies not only to signals satisfying $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s}\|{\bf D}^*{\bf f}\|_2$$ and $$\|{\bf D}^*{\bf f}\|_2 \le r$$, but also more generally to signals satisfying $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} r$$ and $$\|{\bf D}^*{\bf f}\|_2 \le r$$. For both schemes, one needs $$\sigma$$ to be of the same order as $$r$$ for the results to become meaningful in terms of number of measurements and success probability. However, if $$r$$ is only upper-estimated, then one could choose $$\sigma \ge r$$ and obtain a weaker recovery error $$\|{\bf f} - \widehat{{\bf f}}\|_2 \le \varepsilon \sigma$$ with a relevant number of measurements and success probability. 4.2 Second-order cone programming Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn} ({\bf A} {\bf f} - \boldsymbol{\tau})$$ with $$\tau_1,\ldots,\tau_m \sim \mathscr{N}(0,\sigma^2)$$, the optimization scheme we consider here consists in outputting the signal   $$\label{Deffcp} {\bf f}_{\rm CP} = \underset{{\bf h} \in \mathbb{R}^n}{\rm {\rm argmin}\, \;} \; \|{\bf D}^* {\bf h}\|_1 \qquad \mbox{subject to} \quad \mathrm{sgn}({\bf A} {\bf h} - \boldsymbol{\tau}) = {\bf y}, \quad \|{\bf h}\|_2 \le r.$$ (4.4) Theorem 8 Let $$\varepsilon, r, \sigma > 0$$, let $$m \ge C (r/\sigma + \sigma/r)^6(r^2/\sigma^2+1) \varepsilon^{-6} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Furthermore, let $$\tau_1,\ldots,\tau_m$$ be independent normal random variables with mean zero and variance $$\sigma^2$$ that are also independent from $${\bf A}$$.
Then, with failure probability at most $$\gamma \exp(- c' m \varepsilon^2 r^2 \sigma^2 / (r^2+\sigma^2)^2)$$, any signal $${\bf f} \in \mathbb{R}^n$$ with $$\|{\bf f}\|_2 \le r$$, $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} r$$ and observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ is approximated by $${\bf f}_{\rm CP}$$ given in (4.4) with error   $$\left\| {\bf f}- {\bf f}_{\rm CP} \right\|_2 \le \varepsilon r.$$ Proof. We again use the notation (4.3) introducing the ‘lifted’ objects $$\widetilde{{\bf f}}$$, $$\widetilde{{\bf D}}$$ and $$\widetilde{{\bf A}}$$. Moreover, we set $$\widetilde{{\bf g}} := \begin{bmatrix} {\bf f}_{\rm CP} \\ \hline \sigma \end{bmatrix}$$. We claim that $$\widetilde{{\bf f}}$$ and $$\widetilde{{\bf g}}$$ are effectively $$s'$$-analysis sparse, $$s' := (r^2 / \sigma^2 + 1)(s+1)$$. For $$\widetilde{{\bf g}}$$, this indeed follows from $$\| \widetilde{{\bf D}}^* \widetilde{{\bf g}} \|_2 = \|\widetilde{{\bf g}}\|_2 = \sqrt{\|{\bf f}_{\rm CP}\|_2^2 + \sigma^2} \ge \sigma$$ and   $$\|\widetilde{{\bf D}}^* \widetilde{{\bf g}}\|_1 = \left\| \begin{bmatrix} {\bf D}^* {\bf f}_{\rm CP} \\ \hline \sigma \end{bmatrix} \right\|_1 = \| {\bf D}^* {\bf f}_{\rm CP}\|_1 + \sigma \le \| {\bf D}^* {\bf f} \|_1 + \sigma \le \sqrt{s} r + \sigma \le \sqrt{r^2 + \sigma^2} \sqrt{s+1};$$ the same argument applies to $$\widetilde{{\bf f}}$$, since $$\|{\bf D}^* {\bf f}\|_1 \le \sqrt{s} r$$ as well. We also notice that $$\widetilde{{\bf A}}$$ satisfies $$\widetilde{{\bf D}}$$-TES$$(s', \varepsilon')$$, $$\varepsilon' \,{:=}\, \dfrac{r \sigma}{r^2 + \sigma^2} \varepsilon$$, with failure probability at most $$\gamma \exp(-c m {\varepsilon'}^2) \le \gamma \exp(- c' m \varepsilon^2 r^2 \sigma^2 / (r^2+\sigma^2)^2)$$, since $$m \ge C {\varepsilon'}^{-6} s' \ln(eN/s')$$ is ensured by our assumption on $$m$$.
Finally, we observe that both $$\widetilde{{\bf f}}/ \|\widetilde{{\bf f}}\|_2$$ and $$\widetilde{{\bf g}}/ \|\widetilde{{\bf g}}\|_2$$ are $$\ell_2$$-normalized effectively $$s'$$-analysis sparse and have the same sign observations $$\mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf f}}) = \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf g}}) = {\bf y}$$. Thus,   $$\left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 \le \varepsilon'.$$ In view of Lemma 1, we derive   $$\left\| \frac{{\bf f}}{\sigma} - \frac{{\bf f}_{\rm CP}}{\sigma} \right\|_2 \le \frac{r^2 + \sigma^2}{\sigma^2} \varepsilon', \qquad \mbox{hence} \qquad \|{\bf f} - {\bf f}_{\rm CP}\|_2 \le \frac{r^2 + \sigma^2}{\sigma} \varepsilon' = r \varepsilon.$$ This establishes the announced result. □ 4.3 Hard thresholding Given a signal $${\bf f} \in \mathbb{R}^n$$ observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ with $$\tau_1,\ldots,\tau_m \sim \mathscr{N}(0,\sigma^2)$$, the hard thresholding scheme we consider here consists in outputting the signal   $$\label{fht} {\bf f}_{\rm HT} = \frac{-\sigma^2}{\langle \boldsymbol{\tau}, {\bf y} \rangle} {\bf D} {\bf z}, \qquad {\bf z} = H_{t-1}({\bf D}^* {\bf A}^* {\bf y}).$$ (4.5) Theorem 9 Let $$\varepsilon, r, \sigma > 0$$, let $$m \ge C \kappa (r/\sigma+\sigma/r)^9 \varepsilon^{-9} s \ln(eN/s)$$ and let $${\bf A} \in \mathbb{R}^{m \times n}$$ be populated by independent standard normal random variables. Furthermore, let $$\tau_1,\ldots,\tau_m$$ be independent normal random variables with mean zero and variance $$\sigma^2$$ that are also independent from the entries of $${\bf A}$$.
Then, with failure probability at most $$\gamma \exp(-c m \varepsilon^2 r^2 \sigma^2/(r^2+\sigma^2)^2)$$, any $$s$$-synthesis-sparse and effectively $$\kappa s$$-analysis-sparse signal $${\bf f} \in \mathbb{R}^n$$ satisfying $$\|{\bf f}\|_2 \le r$$ and observed via $${\bf y} = \mathrm{sgn}({\bf A} {\bf f} - \boldsymbol{\tau})$$ is approximated by $${\bf f}_{\rm HT}$$ given in (4.5) for $$t:=\lceil 16 (\varepsilon'/8)^{-2} \kappa (s+1) \rceil$$ with error   $$\left\| {\bf f}- {\bf f}_{\rm HT} \right\|_2 \le \varepsilon r.$$ Proof. We again use the notation (4.3) for the ‘lifted’ objects $$\widetilde{{\bf f}}$$, $$\widetilde{{\bf D}}$$ and $$\widetilde{{\bf A}}$$. First, we notice that $$\widetilde{{\bf f}}$$ is $$(s+1)$$-synthesis sparse (relative to $$\widetilde{{\bf D}}$$), as well as effectively $$\kappa (s+1)$$-analysis sparse, since $$\widetilde{{\bf D}}^* \widetilde{{\bf f}} = \begin{bmatrix} {\bf D}^* {\bf f} \\ \hline \sigma \end{bmatrix}$$ satisfies   $$\frac{\|\widetilde{{\bf D}}^* \widetilde{{\bf f}}\|_1}{\|\widetilde{{\bf D}}^* \widetilde{{\bf f}}\|_2} = \frac{\|{\bf D}^* {\bf f}\|_1 + \sigma}{\sqrt{\|{\bf D}^*{\bf f}\|_2^2 + \sigma^2}} \le \frac{\sqrt{\kappa s}\|{\bf D}^* {\bf f}\|_2 + \sigma}{\sqrt{\|{\bf D}^*{\bf f}\|_2^2 +\sigma^2}} \le \sqrt{\kappa s + 1} \le \sqrt{\kappa (s+1)}.$$ Next, we observe that the matrix $$\widetilde{{\bf A}}$$, populated by independent standard normal random variables, satisfies $$\widetilde{{\bf D}}$$-SPEP$$(s+1+t,\varepsilon'/8)$$, $$\varepsilon ' := \dfrac{r \sigma}{2(r^2 + \sigma^2)} \varepsilon$$, with failure probability at most $$\gamma \exp(-c m {\varepsilon'}^2)$$, since $$m \ge C (\varepsilon'/8)^{-7} (s+1+t) \ln(e(N+1)/(s+1+t))$$ is ensured by our assumption on $$m$$.
Finally, since $${\bf y} = \mathrm{sgn}(\widetilde{{\bf A}} \widetilde{{\bf f}})$$, Theorem 5 implies that   $$\left\| \frac{\widetilde{{\bf f}}}{\|\widetilde{{\bf f}}\|_2} - \frac{\widetilde{{\bf g}}}{\|\widetilde{{\bf g}}\|_2} \right\|_2 \le \varepsilon',$$ where $$\widetilde{{\bf g}} \in \mathbb{R}^{n+1}$$ is the output of the ‘lifted’ hard thresholding scheme, i.e.   $$\widetilde{{\bf g}} = \widetilde{{\bf D}} \widetilde{{\bf z}}, \qquad \widetilde{{\bf z}} = H_{t} (\widetilde{{\bf D}}^*\widetilde{{\bf A}}^* {\bf y}).$$ In particular, looking at the last coordinate, this inequality yields   $$\label{LBg} \left| \frac{\sigma}{\|\widetilde{{\bf f}}\|_2} - \frac{g_{n+1}}{\|\widetilde{{\bf g}}\|_2} \right| \le \varepsilon', \quad \mbox{hence} \quad \frac{|g_{n+1}|}{\|\widetilde{{\bf g}}\|_2} \ge \frac{\sigma}{\|\widetilde{{\bf f}}\|_2} - \varepsilon' \ge \frac{\sigma}{\sqrt{r^2+\sigma^2}} - \frac{\sigma}{2 \sqrt{r^2 + \sigma^2}} = \frac{\sigma}{2 \sqrt{r^2 + \sigma^2}}.$$ (4.6) Now let us also observe that   $$\widetilde{{\bf z}} = H_{t} \left(\begin{bmatrix} {\bf D}^* {\bf A}^* {\bf y} \\ \hline - \langle \boldsymbol{\tau},{\bf y} \rangle / \sigma \end{bmatrix} \right) = \begin{bmatrix} H_{t}({\bf D}^* {\bf A}^* {\bf y}) \\ \hline 0 \end{bmatrix} \;\; \mbox{or} \;\; \begin{bmatrix} H_{t-1}({\bf D}^* {\bf A}^* {\bf y}) \\ \hline - \langle \boldsymbol{\tau},{\bf y} \rangle / \sigma \end{bmatrix}, \quad \mbox{hence} \quad \widetilde{{\bf g}} = \widetilde{{\bf D}} \widetilde{{\bf z}} = \begin{bmatrix} {\bf D}(H_{t}({\bf D}^* {\bf A}^* {\bf y})) \\ \hline 0 \end{bmatrix} \;\; \mbox{or} \;\; \begin{bmatrix} {\bf D}(H_{t-1}({\bf D}^* {\bf A}^* {\bf y})) \\ \hline - \langle \boldsymbol{\tau},{\bf y} \rangle / \sigma \end{bmatrix}.$$ In view of (4.6), the latter option prevails. It is then apparent that $${\bf f}_{\rm HT} = \sigma {\bf g}_{[n]} / g_{n+1}$$.
Lemma 1 gives   $$\left\| \frac{{\bf f}}{\sigma} - \frac{{\bf f}_{\rm HT}}{\sigma} \right\|_2 \le \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{\| \widetilde{{\bf g}} \|_2}{|g_{n+1}|} \varepsilon' \le \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{2\sqrt{r^2+\sigma^2}}{\sigma} \frac{r \sigma}{2(r^2 + \sigma^2)} \varepsilon = \frac{\| \widetilde{{\bf f}} \|_2}{\sigma} \frac{r}{\sqrt{r^2+\sigma^2}} \varepsilon,$$ so that   $$\| {\bf f} - {\bf f}_{\rm HT} \|_2 \le \|\widetilde{{\bf f}}\|_2 \frac{r}{\sqrt{r^2+\sigma^2}} \varepsilon \le r \varepsilon.$$ This establishes the announced result. □ 5. Postponed proofs and further remarks This final section contains the theoretical justification of the technical properties underlying our results, followed by a few points of discussion around them. 5.1 Proof of $${\mathbf D}$$-$${\rm SPEP}$$ The Gaussian width turns out to be a useful tool in our proofs. For a set $$K \subseteq \mathbb{R}^n$$, it is defined by   $$w(K) = \mathbb{E} \left[\sup_{{\bf f} \in K} \langle {\bf f}, {\bf g} \rangle \right], \qquad \mbox{where } {\bf g} \in \mathbb{R}^n \mbox{ is a standard normal random vector}.$$ We isolate the following two properties. Lemma 2 Let $$K \subseteq \mathbb{R}^n$$ be a linear space and $$K_1,\ldots,K_L \subseteq \mathbb{R}^n$$ be subsets of the unit sphere $$S^{n-1}$$. (i) $$k / \sqrt{k+1} \le w(K \cap S^{n-1}) \le \sqrt{k}$$, where $$k:= \dim (K)$$; (ii) $$\displaystyle{w \left(K_1 \cup \ldots \cup K_L \right) \le \max \left\{w(K_1),\ldots,w(K_L) \right\} + 3 \sqrt{\ln(L)}}$$. Proof. (i) By the invariance under orthogonal transformation (see [25, Proposition 2.1]3), we can assume that $$K = \mathbb{R}^k \times \left\{(0,\ldots,0) \right\}$$. We then notice that $$\sup_{{\bf f} \in K \cap S^{n-1}} \langle {\bf f}, {\bf g} \rangle = \|(g_1,\ldots,g_k)\|_2$$ is the $$\ell_2$$-norm of a standard normal random vector of dimension $$k$$. We invoke, e.g. [15, Proposition 8.1] to derive the announced result.
(ii) Let us introduce the non-negative random variables   $$\xi_\ell := \sup_{{\bf f} \in K_\ell} \langle {\bf f} , {\bf g} \rangle \quad \ell = 1,\ldots, L ,$$ so that the Gaussian widths of each $$K_\ell$$ and of their union take the form   $$w(K_\ell) = \mathbb{E}(\xi_\ell) \quad \ell = 1,\ldots, L \qquad \mbox{and} \qquad w \left(K_1 \cup \cdots \cup K_L \right) = \mathbb{E} \left(\max_{\ell = 1, \ldots, L} \xi_\ell \right).$$ By the concentration of measure inequality (see e.g. [15, Theorem 8.40]) applied to the function $$F: {\bf x} \in \mathbb{R}^n \mapsto \sup_{{\bf f} \in K_\ell} \langle {\bf f}, {\bf x} \rangle$$, which is a Lipschitz function with constant $$1$$, each $$\xi_\ell$$ satisfies   $$\mathbb{P}(\xi_\ell \ge \mathbb{E}(\xi_\ell) + t) \le \exp \left(-t^2/2 \right)\!.$$ Because each $$\mathbb{E}(\xi_\ell)$$ is no larger than $$\max_\ell \mathbb{E}(\xi_\ell) = \max_\ell w(K_\ell) =: \omega$$, we also have   $$\mathbb{P} (\xi_\ell \ge \omega + t) \le \exp \left(-t^2/2 \right)\!.$$ Setting $$v:= \sqrt{2 \ln(L)}$$, we now calculate   \begin{align*} \mathbb{E} \left(\max_{\ell =1,\ldots, L} \xi_\ell \right) & = \int_0^\infty \mathbb{P} \left(\max_{\ell =1,\ldots, L} \xi_\ell \ge u \right) \,{\rm{d}}u = \left(\int_0^{\omega+v} + \int_{\omega+v}^\infty \right) \mathbb{P} \left(\max_{\ell = 1,\ldots, L} \xi_\ell \ge u \right) \,{\rm{d}}u\\ & \le \int_0^{\omega + v} 1\, {\rm{d}}u + \int_{\omega + v}^\infty \sum_{\ell=1}^L \mathbb{P} \left(\xi_\ell \ge u \right) \,{\rm{d}}u = \omega + v + \sum_{\ell=1}^L \int_v^\infty \mathbb{P} \left(\xi_\ell \ge \omega + t \right) \,{\rm{d}}t\\ & \le \omega + v + L \int_v^\infty \exp \left(-t^2/2 \right)\, {\rm{d}}t \le \omega + v + L \frac{\exp(-v^2/2)}{v} \\ & = \omega + \sqrt{2 \ln(L)} + L \frac{1/L}{\sqrt{2 \ln(L)}} \le \omega + c \sqrt{\ln(L)}, \end{align*} where $$c=\sqrt{2} + (\sqrt{2} \ln(2))^{-1} \le 3$$. 
We have shown that $$w \left(K_1 \cup \cdots \cup K_L \right) \le \max_{\ell} w(K_\ell) + 3 \sqrt{\ln(L)}$$, as desired. □ We now turn our attention to proving the awaited theorem. Proof of Theorem 3. According to [25, Proposition 4.3], with $${\bf A}' := (\sqrt{2/\pi}/m) {\bf A}$$, we have   $$\left| \langle {\bf A}' {\bf f}, \mathrm{sgn}({\bf A}' {\bf g}) \rangle - \langle {\bf f}, {\bf g} \rangle \right| \le \delta,$$ for all $${\bf f},{\bf g} \in {\bf D}({\it {\Sigma}}_s^N) \cap S^{n-1}$$ provided $$m \ge C \delta^{-7} w({\bf D}({\it {\Sigma}}_s^N) \cap S^{n-1})^2$$, so it is enough to upper bound $$w({\bf D}({\it {\Sigma}}_s^N) \cap S^{n-1})$$ appropriately. To do so, with $${\it {\Sigma}}_S^N$$ denoting the space $$\{{\bf x} \in \mathbb{R}^N: {\rm supp}({\bf x}) \subseteq S \}$$ for any $$S \subseteq \{1,\ldots, N \}$$, we use Lemma 2 to write   \begin{align*} w({\bf D}({\it {\Sigma}}_s^N) \cap S^{n-1}) & = w \bigg(\bigcup_{|S|=s} \left\{{\bf D}({\it {\Sigma}}_S^N) \cap S^{n-1} \right\} \bigg) \underset{(ii)}{\le} \max_{|S|=s} w({\bf D}({\it {\Sigma}}_S^N) \cap S^{n-1}) + 3 \sqrt{\ln \left(\binom{N}{s} \right)}\\ & \underset{(i)}{\le} \sqrt{s} + 3 \sqrt{s \ln \left(eN/s \right)} \le 4 \sqrt{s \ln \left(eN/s \right)}. \end{align*} The result is now immediate. □ 5.2 Proof of TES We propose two approaches for proving Theorem 4. One uses again the notion of Gaussian width, and the other one relies on covering numbers. The necessary results are isolated in the following lemma. Lemma 3 The set of $$\ell_2$$-normalized effectively $$s$$-analysis-sparse signals satisfies (i) $$\displaystyle{w \left(({\bf D}^*)^{-1}({\it {\Sigma}}_s^{N,{\rm eff}}) \cap S^{n-1} \right)} \le C \sqrt{s \ln(eN/s)},$$ (ii) $$\displaystyle{\mathscr{N} \left(({\bf D}^*)^{-1}({\it {\Sigma}}_s^{N,{\rm eff}}) \cap S^{n-1} , \rho \right) \le \binom{N}{t}\left(1 + \frac{8}{\rho} \right)^t, \qquad t := \lceil 4 \rho^{-2}} s \rceil.$$ Proof. 
(i) By the definition of the Gaussian width for $$\mathscr{K}_s := ({\bf D}^*)^{-1}({\it {\Sigma}}_s^{N,{\rm eff}}) \cap S^{n-1}$$, with $${\bf g} \in \mathbb{R}^n$$ denoting a standard normal random vector,   $$\label{slep} w(\mathscr{K}_s) = \mathbb{E} \left[\sup_{\substack{{\bf D}^* {\bf f} \in {\it {\Sigma}}_s^{N,{\rm eff}} \\ \|{\bf f}\|_2 = 1}} \langle {\bf f} , {\bf g} \rangle \right] = \mathbb{E} \left[\sup_{\substack{{\bf D}^* {\bf f} \in {\it {\Sigma}}_s^{N,{\rm eff}} \\ \|{\bf D}^* {\bf f}\|_2 = 1}} \langle {\bf D} {\bf D}^* {\bf f} , {\bf g} \rangle \right] \le \mathbb{E} \left[\sup_{\substack{{\bf x} \in {\it {\Sigma}}_s^{N,{\rm eff}} \\ \|{\bf x}\|_2 = 1}} \langle {\bf D} {\bf x} , {\bf g} \rangle \right].$$ (5.1) In view of $$\|{\bf D}\|_{2 \to 2} = 1$$, we have, for any $${\bf x},{\bf x}' \in {\it {\Sigma}}_s^{N,{\rm eff}}$$ with $$\|{\bf x}\|_2 = \|{\bf x}'\|_2 =1$$,   \begin{align*} \mathbb{E} \left(\langle {\bf D} {\bf x}, {\bf g} \rangle - \langle {\bf D} {\bf x}', {\bf g}' \rangle \right)^2 &= \mathbb{E} \left[\langle {\bf D} {\bf x}, {\bf g} \rangle ^2 \right] + \mathbb{E} \left[\langle {\bf D} {\bf x}', {\bf g}' \rangle ^2 \right] = \|{\bf D} {\bf x}\|_2^2 + \|{\bf D} {\bf x}'\|_2^2 \le \|{\bf x}\|_2^2 + \|{\bf x}'\|_2^2\\ &= \mathbb{E} \left(\langle {\bf x}, {\bf g} \rangle - \langle {\bf x}', {\bf g}' \rangle \right)^2. \end{align*} Applying Slepian’s lemma (see e.g. [15, Lemma 8.25]), we obtain   $$w(\mathscr{K}_s) \le \mathbb{E} \left[\sup_{\substack{{\bf x} \in {\it {\Sigma}}_s^{N,{\rm eff}} \\ \|{\bf x}\|_2 = 1}} \langle {\bf x} , {\bf g} \rangle \right] =w({\it {\Sigma}}_s^{N,{\rm eff}} \cap S^{N-1}).$$ The latter is known to be bounded by $$C \sqrt{s \ln (eN/s)}$$, see [25, Lemma 2.3]. (ii) The covering number $$\mathscr{N}(\mathscr{K}_s,\rho)$$ is bounded above by the maximal number $$\mathscr{P}(\mathscr{K}_s,\rho)$$ of elements in $$\mathscr{K}_s$$ that are separated by a distance $$\rho$$.
We claim that $$\mathscr{P} (\mathscr{K}_s, \rho) \le \mathscr{P}({\it {\Sigma}}_t^N \cap B_2^N, \rho/2)$$. To justify this claim, let us consider a maximal $$\rho$$-separated set $$\{{\bf f}^1,\ldots,{\bf f}^L\}$$ of signals in $$\mathscr{K}_s$$. For each $$i$$, let $$T_i \subseteq \{1, \ldots, N \}$$ denote an index set of $$t$$ largest absolute entries of $${\bf D}^* {\bf f}^i$$. We write   $$\rho < \|{\bf f}^i - {\bf f}^j \|_2 = \|{\bf D}^* {\bf f}^i - {\bf D}^* {\bf f}^j \|_2 \le \|({\bf D}^* {\bf f}^i)_{T_i} - ({\bf D}^* {\bf f}^j)_{T_j} \|_2 + \| ({\bf D}^* {\bf f}^i)_{\overline{T_i}} \|_2 + \| ({\bf D}^* {\bf f}^j)_{\overline{T_j}} \|_2.$$ Invoking [15, Theorem 2.5], we observe that   $$\| ({\bf D}^* {\bf f}^i)_{\overline{T_i}} \|_2 \le \frac{1}{2\sqrt{t}} \|{\bf D}^* {\bf f}^i \|_1 \le \frac{\sqrt{s}}{2 \sqrt{t}} \|{\bf D}^* {\bf f}^i \|_2 = \frac{\sqrt{s}}{2 \sqrt{t}},$$ and similarly for $$j$$ instead of $$i$$. Thus, we obtain   $$\rho < \|({\bf D}^* {\bf f}^i)_{T_i} - ({\bf D}^* {\bf f}^j)_{T_j} \|_2 + \sqrt{\frac{s}{t}} \le \|({\bf D}^* {\bf f}^i)_{T_i} - ({\bf D}^* {\bf f}^j)_{T_j} \|_2 + \frac{\rho}{2}, \quad \mbox{i.e.} \; \|({\bf D}^* {\bf f}^i)_{T_i} - ({\bf D}^* {\bf f}^j)_{T_j} \|_2 > \frac{\rho}{2}.$$ Since we have uncovered a set of $$L = \mathscr{P}(\mathscr{K}_s,\rho)$$ points in $${\it {\Sigma}}_t^N \cap B_2^N$$ that are $$(\rho/2)$$ separated, the claimed inequality is proved. We conclude by recalling that $$\mathscr{P}({\it {\Sigma}}_t^N \cap B_2^N, \rho/2)$$ is bounded above by $$\mathscr{N}({\it {\Sigma}}_t^N \cap B_2^N, \rho/4)$$, which is itself bounded above by $$\dbinom{N}{t} \left(1 + \dfrac{2}{\rho/4} \right)^t$$. □ We can now turn our attention to proving the awaited theorem. Proof of Theorem 4. 
With $$\mathscr{K}_s = ({\bf D}^*)^{-1}({\it {\Sigma}}_s^{N,{\rm eff}}) \cap S^{n-1}$$, the conclusion holds when $$m \ge C \varepsilon^{-6} w(\mathscr{K}_s)^2$$ or when $$m \ge C \varepsilon^{-1} \ln (\mathscr{N}(\mathscr{K}_s,c \varepsilon))$$, according to [26, Theorem 1.5] or to [3, Theorem 1.5], respectively. It now suffices to call upon Lemma 3. Note that the latter option yields better powers of $$\varepsilon^{-1}$$, but a less pleasant failure probability. □ 5.3 Further remarks We conclude this theoretical section by making two noteworthy comments on the sign product embedding property and the tessellation property in the dictionary case. Remark 4 $${\bf D}$$-SPEP cannot hold for an arbitrary dictionary $${\bf D}$$ if synthesis sparsity were replaced by effective synthesis sparsity. This is because the set of effectively $$s$$-synthesis-sparse signals can be the whole space $$\mathbb{R}^n$$. Indeed, any $${\bf f} \in \mathbb{R}^n$$ can be written as $${\bf f} = {\bf D} {\bf u}$$ for some $${\bf u} \in \mathbb{R}^N$$. Let us also pick an $$(s-1)$$-sparse vector $${\bf v} \in \ker {\bf D}$$ (there are tight frames for which this is possible, e.g. the concatenation of two orthogonal matrices). For $$\varepsilon > 0$$ small enough, we have   $$\frac{\|{\bf v} + \varepsilon {\bf u} \|_1}{\|{\bf v} + \varepsilon {\bf u}\|_2} \le \frac{\|{\bf v}\|_1 + \varepsilon \|{\bf u}\|_1}{\|{\bf v}\|_2 - \varepsilon \|{\bf u}\|_2} \le \frac{\sqrt{s-1} \|{\bf v}\|_2 + \varepsilon \|{\bf u}\|_1}{\|{\bf v}\|_2 - \varepsilon \|{\bf u}\|_2} \le \sqrt{s},$$ so that the coefficient vector $${\bf v} + \varepsilon {\bf u}$$ is effectively $$s$$-sparse, hence so is $$(1/\varepsilon){\bf v} + {\bf u}$$. It follows that $${\bf f} = {\bf D}((1/\varepsilon){\bf v} + {\bf u})$$ is effectively $$s$$-synthesis sparse. Remark 5 Theorem 3 easily implies a tessellation result for $${\bf D}({\it {\Sigma}}_s^N) \,\cap\, S^{n-1}$$, the ‘synthesis-sparse sphere’.
Precisely, under the assumptions of the theorem (with a change of the constant $$C$$), $${\bf D}$$-SPEP$$(2s,\delta/2)$$ holds. Then, one can derive   $$[{\bf g},{\bf h} \in {\bf D}({\it {\Sigma}}_s) \cap S^{n-1} : \; \mathrm{sgn}({\bf A} {\bf g}) = \mathrm{sgn}({\bf A} {\bf h})] \Longrightarrow [\|{\bf g} - {\bf h}\|_2 \le \delta].$$ To see this, with $$\boldsymbol{\varepsilon} := \mathrm{sgn}({\bf A} {\bf g}) = \mathrm{sgn}({\bf A} {\bf h})$$ and with $${\bf f} := ({\bf g}-{\bf h})/\|{\bf g}-{\bf h}\|_2 \in {\bf D}({\it {\Sigma}}_{2s}) \cap S^{n-1}$$, we have   $$\left| \frac{\sqrt{2/\pi}}{m} \langle {\bf A} {\bf f} , \boldsymbol{\varepsilon}\rangle - \langle {\bf f}, {\bf g} \rangle \right| \le \frac{\delta}{2}, \qquad \left| \frac{\sqrt{2/\pi}}{m} \langle {\bf A} {\bf f} , \boldsymbol{\varepsilon} \rangle - \langle {\bf f}, {\bf h} \rangle \right| \le \frac{\delta}{2},$$ so by the triangle inequality $$|\langle {\bf f}, {\bf g} - {\bf h} \rangle| \le \delta$$, i.e. $$\|{\bf g} -{\bf h}\|_2 \le \delta$$, as announced. Acknowledgements The authors would like to thank the AIM SQuaRE program that funded and hosted our initial collaboration. Funding NSF grant number [CCF-1527501], ARO grant number [W911NF-15-1-0316] and AFOSR grant number [FA9550-14-1-0088] to R.B.; Alfred P. Sloan Fellowship and NSF Career grant number [1348721 to D.N.]; NSERC grant number [22R23068 to Y.P.]; and NSF Postdoctoral Research Fellowship grant number [1400558 to M.W.]. Footnotes 1 A signal $${\bf x} \in \mathbb{R}^N$$ is called $$s$$-sparse if $$\|{\bf x}\|_0 := |\mathrm{supp}({\bf x})| \leq s \ll N$$. 2 Here, ‘dictionary sparsity’ means effective $$s$$-analysis sparsity if $$\widehat{{\bf f}}$$ is produced by convex programming and genuine $$s$$-synthesis sparsity together with effective $$\kappa s$$-analysis sparsity if $$\widehat{{\bf f}}$$ is produced by hard thresholding. 
3 In particular, [25, Proposition 2.1] applies to the slightly different notion of mean width defined as $$\mathbb{E} \left[\sup_{{\bf f} \in K - K} \langle {\bf f}, {\bf g} \rangle \right]$$. References 1. (2016) Compressive Sensing webpage. http://dsp.rice.edu/cs (accessed 24 June 2016). 2. Baraniuk R., Foucart S., Needell D., Plan Y. & Wootters M. (2017) Exponential decay of reconstruction error from binary measurements of sparse signals. IEEE Trans. Inform. Theory, 63, 3368–3385. 3. Bilyk D. & Lacey M. T. (2015) Random tessellations, restricted isometric embeddings, and one bit sensing. arXiv preprint arXiv:1512.06697. 4. Blumensath T. (2011) Sampling and reconstructing signals from a union of linear subspaces. IEEE Trans. Inform. Theory, 57, 4660–4671. 5. Boufounos P. T. & Baraniuk R. G. (2008) 1-Bit compressive sensing. Proceedings of the 42nd Annual Conference on Information Sciences and Systems (CISS), IEEE, pp. 16–21. 6. Candès E. J., Demanet L., Donoho D. L. & Ying L. (2006) Fast discrete curvelet transforms. Multiscale Model. Simul., 5, 861–899. 7. Candès E. J. & Donoho D. L. (2004) New tight frames of curvelets and optimal representations of objects with piecewise $$C^2$$ singularities. Comm. Pure Appl. Math., 57, 219–266. 8. Candès E. J., Eldar Y. C., Needell D. & Randall P. (2010) Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal., 31, 59–73. 9. Daubechies I. (1992) Ten Lectures on Wavelets. Philadelphia, PA: SIAM. 10. Davenport M., Needell D. & Wakin M. B. (2012) Signal space CoSaMP for sparse recovery with redundant dictionaries. IEEE Trans. Inform. Theory, 59, 6820–6829. 11. Elad M., Milanfar P. & Rubinstein R.
(2007) Analysis versus synthesis in signal priors. Inverse Probl., 23, 947. 12. Eldar Y. C. & Kutyniok G. (2012) Compressed Sensing: Theory and Applications. Cambridge, UK: Cambridge University Press. 13. Feichtinger H. & Strohmer T. (eds.) (1998) Gabor Analysis and Algorithms. Boston, MA: Birkhäuser. 14. Foucart S. (2016) Dictionary-sparse recovery via thresholding-based algorithms. J. Fourier Anal. Appl., 22, 6–19. 15. Foucart S. & Rauhut H. (2013) A Mathematical Introduction to Compressive Sensing. Basel, Switzerland: Birkhäuser. 16. Giryes R., Nam S., Elad M., Gribonval R. & Davies M. E. (2014) Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl., 441, 22–60. 17. Gopi S., Netrapalli P., Jain P. & Nori A. (2013) One-bit compressed sensing: Provable support and vector recovery. Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta, GA, pp. 154–162. 18. Jacques L., Degraux K. & De Vleeschouwer C. (2013) Quantized iterative hard thresholding: bridging 1-bit and high-resolution quantized compressed sensing. Proceedings of the 10th International Conference on Sampling Theory and Applications (SampTA), Bremen, Germany, pp. 105–108. 19. Jacques L., Laska J. N., Boufounos P. T. & Baraniuk R. G. (2013) Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Trans. Inform. Theory, 59, 2082–2102. 20. Knudson K., Saab R. & Ward R. (2016) One-bit compressive sensing with norm estimation. IEEE Trans. Inform. Theory, 62, 2748–2758. 21. Krahmer F., Needell D. & Ward R. (2015) Compressive sensing with redundant dictionaries and structured measurements. SIAM J. Math.
Anal.,  47, 4606– 4629. Google Scholar CrossRef Search ADS   22. Nam S., Davies M. E., Elad M. & Gribonval R. ( 2013) The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal.,  34, 30– 56. Google Scholar CrossRef Search ADS   23. Peleg T. & Elad M. ( 2013) Performance guarantees of the thresholding algorithm for the cosparse analysis model. IEEE Trans. Inform. Theory,  59, 1832– 1845. Google Scholar CrossRef Search ADS   24. Plan Y. & Vershynin R. ( 2013a) One-bit compressed sensing by linear programming. Comm. Pure Appl. Math.,  66, 1275– 1297. Google Scholar CrossRef Search ADS   25. Plan Y. & Vershynin R. ( 2013b) Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Trans. Inform. Theory,  59, 482– 494. Google Scholar CrossRef Search ADS   26. Plan Y. & Vershynin R. ( 2014) Dimension reduction by random hyperplane tessellations. Discrete Comput. Geom.,  51, 438– 461. Google Scholar CrossRef Search ADS   27. Rauhut H., Schnass K. & Vandergheynst P. ( 2008) Compressed sensing and redundant dictionaries. IEEE Trans. Inform. Theory,  54, 2210– 2219. Google Scholar CrossRef Search ADS   28. Saab R., Wang R. & Yilmaz Ö. ( 2016) Quantization of compressive samples with stable and robust recovery. Applied and Computational Harmonic Analysis, to appear. 29. Starck J.-L., Elad M. & Donoho D. ( 2004) Redundant multiscale transforms and their application for morphological component separation. Advances in Imaging and Electron Physics,  132, 287– 348. Google Scholar CrossRef Search ADS   30. Yan M., Yang Y. & Osher S. ( 2012) Robust 1-bit compressive sensing using adaptive outlier pursuit., IEEE Trans. Signal Process.,  60, 3868– 3875. Google Scholar CrossRef Search ADS   © The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. 
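Footnote 3 above defines the mean width of a set $$K$$ as $$\mathbb{E}[\sup_{{\bf f} \in K - K} \langle {\bf f}, {\bf g} \rangle]$$ for a standard Gaussian vector $${\bf g}$$. As an illustrative aside (not code from the paper), this quantity can be estimated by Monte Carlo over a finite point cloud spanning $$K$$, since for each draw the supremum over $$K - K$$ reduces to a max-minus-min of inner products:

```python
import math
import random

def mean_width(points, n_draws=5000, seed=0):
    """Monte Carlo estimate of E[sup_{f in K-K} <f, g>], g ~ N(0, I).

    `points` is a list of d-dimensional tuples spanning K; for each
    Gaussian draw g, sup_{f in K-K} <f, g> equals
    max_i <p_i, g> - min_i <p_i, g>.
    """
    rng = random.Random(seed)
    d = len(points[0])
    total = 0.0
    for _ in range(n_draws):
        g = [rng.gauss(0.0, 1.0) for _ in range(d)]
        dots = [sum(p_j * g_j for p_j, g_j in zip(p, g)) for p in points]
        total += max(dots) - min(dots)
    return total / n_draws

# Sanity check: for the unit circle in R^2 the width of each draw is
# 2*||g||, so the mean width is 2*E||g|| = sqrt(2*pi) ~ 2.507.
circle = [(math.cos(2 * math.pi * k / 400),
           math.sin(2 * math.pi * k / 400)) for k in range(400)]
```

The point-cloud reduction works because the supremum of a linear functional over a convex hull is attained at a vertex; for smooth sets like the circle, a dense sample of boundary points suffices.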
This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices). For permissions, please e-mail: journals.permissions@oup.com.

Journal: Information and Inference: A Journal of the IMA, Oxford University Press. Published: Mar 1, 2018.
2021. DOI: 10.1590/0102-311x00294720

Abstract: This study illustrates the use of a recently developed sensitivity index, the E-value, which helps strengthen causal inferences in observational epidemiological studies. The E-value gives the minimum strength of association that an unmeasured confounder would need to have with both the exposure and the outcome in order to explain away the observed association as non-causal. The parameter is defined as $$E\text{-value} = RR + \sqrt{RR \times (RR - 1)}$$, where RR is the risk ratio between the exposure and the outcome. Our work illustr…
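The E-value formula quoted in the abstract, $$RR + \sqrt{RR(RR-1)}$$, is straightforward to compute. A minimal sketch (the function name and the reciprocal convention for protective exposures are mine, following the standard definition of the E-value, not text from this abstract):

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio RR.

    For protective exposures (RR < 1), the convention is to take the
    reciprocal before applying the formula.
    """
    if rr < 1:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed RR of 2 would require an unmeasured confounder associated
# with both exposure and outcome at RR >= 2 + sqrt(2) ~ 3.41 to fully
# explain away the association.
print(round(e_value(2.0), 2))  # 3.41
```

Note that $$e\_value(1.0) = 1$$: a null observed association needs no confounding to explain it.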