>Starting from a vertex of an unknown, finite, strongly connected directed graph, we want to 'get out' (reach the vertex of the labyrinth called 'end'). Each vertex has two exits (an edge that goes from the vertex in question to another one); one exit is labeled 'a', the other exit is labeled 'b'. >We have limitless 'memory' but we don't recognize when we arrive at the same vertex again, so at each step we can only pick whether we go through exit a or exit b; we do, however, recognize when we have entered the exit vertex. Show that there is an algorithm to get out of any maze! >Write the algorithm. If n is its input, then its output is a sequence of 'a', 'b' that exits any maze with at most n vertices. I got this assignment (math student) in a course to do with algorithms. I don't believe the actual code outputting the 'a', 'b' sequence is particularly difficult once the structure of the function is found mathematically. I've had multiple ideas. It is clear that were we to find a sequence that would guarantee a visit to all vertices, we would be done, as one of them must be the "end" vertex. This would be easy if the edges weren't directed, because we could try all possible 1-long sequences by doing a 1-long sequence, tracing our steps back, and doing the next one if we didn't reach the end. Then do the same with 2-long sequences, and so on. I think the longest sequences we would have to try would be $ 2^n $ in length and we would get an incredibly long sequence, but it could be shown to work. But this solution relies heavily on the fact that we could always trace our way back to the vertex at which we start and thereby do something like a BFS. That doesn't work here, as we can't go back on edges we came from (at least not necessarily). I also stumbled upon *De Bruijn sequences*. A *De Bruijn sequence* for a given alphabet (in this case, 'a' and 'b') and a given length 'n' is a cyclic sequence in which every possible subsequence of length 'n' appears exactly once. I don't see how this would work, but maybe if we concatenate this sequence with itself 'n+1' times, we would visit all vertices, because we would be guaranteed to have tried all 'walks' from all vertices, but I don't see this rigorously at all. That is why I come to you, I need some serious guidance, as I have no clue about this at all (my related question on CS Stack Exchange: https://cs.stackexchange.com/questions/167241/graph-labyrinth-solving-sequence/167244?noredirect=1#comment346313_167244)
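Since the post notes that coding the output is the easy half, here is a minimal, hedged Python sketch of the piece one would need in any case: a simulator that checks whether a given 'a'/'b' sequence gets out of a given maze. The dictionary representation and the vertex names are illustrative assumptions, not part of the assignment.

```python
# Hypothetical maze format: a dict mapping each vertex to its two labelled exits.
def exits_maze(maze, start, sequence):
    """Return True if following `sequence` from `start` ever reaches the vertex 'end'."""
    v = start
    for step in sequence:          # step is 'a' or 'b'
        if v == 'end':
            return True
        v = maze[v][step]
    return v == 'end'

maze = {'s': {'a': 'v1', 'b': 's'}, 'v1': {'a': 'end', 'b': 's'}, 'end': {'a': 'end', 'b': 'end'}}
print(exits_maze(maze, 's', "ab"), exits_maze(maze, 's', "aa"))   # False, True
```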
Consider the triangle formed by joining the circumcenter, the incenter and the centroid of a triangle (is there already a name for this triangle in the literature?). Simulations show that the line joining the incenter and the circumcenter always subtends an obtuse angle at the centroid (except when the triangle is degenerate). [![enter image description here][1]][1] > **Conjecture**: In any triangle, the line joining the incenter and the circumcenter always subtends an obtuse angle at the centroid. Can this be proved or disproved? [1]: https://i.stack.imgur.com/dFevf.png
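For anyone who wants to reproduce the simulations mentioned above, here is a hedged Python sketch (standard coordinate formulas for the three centers, random triangles from uniform points in the unit square); the angle at the centroid $G$ subtended by the segment $IO$ is obtuse exactly when $(I-G)\cdot(O-G)<0$.

```python
import numpy as np

def circumcenter(A, B, C):
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax**2 + ay**2)*(by - cy) + (bx**2 + by**2)*(cy - ay) + (cx**2 + cy**2)*(ay - by)) / d
    uy = ((ax**2 + ay**2)*(cx - bx) + (bx**2 + by**2)*(ax - cx) + (cx**2 + cy**2)*(bx - ax)) / d
    return np.array([ux, uy])

rng = np.random.default_rng(0)
obtuse, trials = 0, 100000
for _ in range(trials):
    A, B, C = rng.random((3, 2))
    a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
    I = (a*A + b*B + c*C) / (a + b + c)          # incenter
    G = (A + B + C) / 3                          # centroid
    O = circumcenter(A, B, C)
    obtuse += np.dot(I - G, O - G) < 0           # angle IGO obtuse iff this dot product is negative
print(obtuse, "obtuse out of", trials)
```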
Suppose I want to find all the interior local minimizers of the following problem: $\underset{x,y}{\min}f(x,y)$ subject to $x,y>0, g(x,y)\geq 0$. By setting the first order conditions $\frac{\partial f(x,y)}{\partial x}=0$ and $\frac{\partial f(x,y)}{\partial y}=0$, I found $n$ candidates $(x^*_1,y^*_1)$,$(x^*_2,y^*_2)$ ... $(x^*_n,y^*_n)$. Is it true that all I need to do now is to see which ones out of the $n$ candidates make the Hessian $H\equiv \begin{bmatrix}f_{xx} & f_{xy}\\ f_{xy} & f_{yy}\end{bmatrix}$ positive semidefinite? Also, I think I do not need to worry about the KKT first order conditions here, because the KKT conditions aim to find candidates for both interior and corner solutions. Is that correct?
How to find all interior local minima for minimization under an inequality constraint? Does checking the PSD of the Hessian of the objective function suffice?
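As a concrete illustration of the procedure asked about, here is a hedged SymPy sketch with a made-up objective (the function, and the restriction to positive symbols, are assumptions for the example): solve the first-order conditions, then inspect the Hessian at each interior candidate. Recall that at an interior critical point, positive definiteness of the Hessian is the standard sufficient condition, while positive semidefiniteness is only necessary.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = x**3 - 3*x + y**2 - 2*y                     # hypothetical objective, not from the question
candidates = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
H = sp.hessian(f, (x, y))
for cand in candidates:
    eigs = list(H.subs(cand).eigenvals())
    print(cand, eigs)   # all eigenvalues > 0 (positive definite) => strict local minimizer
```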
There is [numerical evidence][1] that $$\int_0^1\frac{1}{\sqrt{1-x^2}}\arccos\left(\frac{3x^3-3x+4x^2\sqrt{2-x^2}}{5x^2-1}\right)\mathrm dx=\frac{3\pi^2}{8}-2\pi\arctan\frac12.$$ >How can this be proved? Wolfram [does not find][2] an antiderivative. Here is the graph of $y=\frac{1}{\sqrt{1-x^2}}\arccos\left(\frac{3x^3-3x+4x^2\sqrt{2-x^2}}{5x^2-1}\right)$. [![enter image description here][3]][3] Based on recent experience with integrals involving inverse trigonometric functions ([example][4]), I guess a proof may involve a lot of substitutions. But I don't have any insight on how to approach this. A [search][5] on approachzero did not turn up anything similar. **Context** If this can be proved, then we can answer the question [Probability that the centroid of a triangle is inside its incircle][6], via @user170231's [answer][7]. [1]: https://www.wolframalpha.com/input?i2d=true&i=%5C%2840%29Divide%5B1%2CDivide%5B3Power%5B%CF%80%2C2%5D%2C8%5D-2%CF%80arctan%5C%2840%29Divide%5B1%2C2%5D%5C%2841%29%5D%5C%2841%29Integrate%5BDivide%5B1%2CSqrt%5B1-Power%5Bx%2C2%5D%5D%5Darccos%5C%2840%29Divide%5B3Power%5Bx%2C3%5D-3x%2B4Power%5Bx%2C2%5DSqrt%5B2-Power%5Bx%2C2%5D%5D%2C5Power%5Bx%2C2%5D-1%5D%5C%2841%29%2C%7Bx%2C0%2C1%7D%5D [2]: https://www.wolframalpha.com/input?i2d=true&i=Integrate%5BDivide%5B1%2CSqrt%5B1-Power%5Bx%2C2%5D%5D%5Darccos%5C%2840%29Divide%5B3Power%5Bx%2C3%5D-3x%2B4Power%5Bx%2C2%5DSqrt%5B2-Power%5Bx%2C2%5D%5D%2C5Power%5Bx%2C2%5D-1%5D%5C%2841%29%2Cx%5D [3]: https://i.stack.imgur.com/2bt1w.png [4]: https://math.stackexchange.com/a/4838976/398708 [5]: https://approach0.xyz/search/?q=OR%20content%3A%24%5Cint_0%5E%7B%5Cfrac%7B%5Cpi%7D%7B2%7D%7D%5Cfrac%7B1%7D%7B%5Csqrt%7B1-x%5E2%7D%7D%5Carccos%5Cleft(%5Cfrac%7B3x%5E3-3x%2B4x%5E2%5Csqrt%7B2-x%5E2%7D%7D%7B5x%5E2-1%7D%5Cright)%5C%20dx%24&p=1 [6]: https://math.stackexchange.com/q/4887813/398708 [7]: https://math.stackexchange.com/a/4889751/398708
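A quick numerical reproduction of the claimed value (essentially what the linked WolframAlpha computation does), using mpmath so the endpoint singularity at $x=1$ and the point $x=1/\sqrt5$ where the denominator $5x^2-1$ vanishes are handled; the clamp of the arccos argument to $[-1,1]$ only guards against rounding noise.

```python
import mpmath as mp

def integrand(x):
    arg = (3*x**3 - 3*x + 4*x**2*mp.sqrt(2 - x**2)) / (5*x**2 - 1)
    arg = max(mp.mpf(-1), min(mp.mpf(1), arg))        # clamp against rounding noise
    return mp.acos(arg) / mp.sqrt(1 - x**2)

lhs = mp.quad(integrand, [0, 1/mp.sqrt(5), 1])        # split at the zero of 5x^2 - 1
rhs = 3*mp.pi**2/8 - 2*mp.pi*mp.atan(mp.mpf(1)/2)
print(lhs, rhs)
```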
> Let $Q_c(x)=x^2+c$. Prove that if $c<\frac14$, there is a unique > $\mu>1$ such that $Q_c$ is topologically conjugate to > $F_\mu(x)=\mu(1-x)$ via a map of the form $h(x)=\alpha x+\beta$. Interpretation: $h(x)$ is a linear map. $Q_c(x)$ and $F_\mu(x)$ are a quadratic and affine map respectively. The claim is that if $c<\frac14$, then $\exists$ $\mu>1$ and the quadratic and affine maps are conjugate to one another. **Definition.** (Topological conjugacy). Let $Q_c:X\to X$ and $F_\mu: Y\to Y$, and let $x_1\ne x_2$. Then $Q_c$ and $F_\mu$ are topologically conjugate if $\exists$ a homeomorphism $h:X\to Y$, $\ni$ $$h\circ Q_c = F_\mu\circ h$$ or $$h(Q_c(x_1)) = F_{\mu}(h(x_2)).$$ Now, we form, with $x_1\ne x_2$ $$h(Q_c(x_1)) = \alpha (x_1^2+c)+\beta$$ and $$F_{\mu}(h(x_2))= \mu(1-\alpha x_2-\beta) $$ hence, $$\alpha (x_1^2+c)+\beta=\mu(1-\alpha x_2-\beta)$$ which gives \begin{equation} c = -\frac{1}{\alpha}(\beta \mu + \beta - \mu + \alpha x_1^2 + \alpha \mu x_2) \ \ \ \text{where}\ \alpha\ne 0\ \text{and}\ \beta + \alpha x_2\ne1 \end{equation} or \begin{equation} \mu=\frac{\alpha c+\beta +\alpha x_1^2}{-x_2 \alpha + \beta + 1} \end{equation} Insert for $c<\frac14$, i.e. $c=\frac15$: \begin{equation} \mu=\frac{\alpha \frac15+\beta +\alpha x_1^2}{-x_2 \alpha + \beta + 1} \end{equation} then we insert $\alpha=1$ and $\beta=0$ for a simple case map $h(x)=x$: \begin{equation} \mu=\frac{\frac15+x_1^2}{1-x_2} \end{equation} Here we see that $\mu$ will be non-negative only in the unit interval, hence when $0\leq x_2<1$. Furthermore, we see $\mu>1$ when $x_2<x_1$ within the unit interval, which is the case for the quadratic family on the unit interval. So the claim holds for any mapping $h$, only in the unit interval. But is it valid as a proof when it only holds for the unit interval and when $x_1>x_2$ strictly within the unit interval? Thanks. UPDATE: By Lutz Lehmann's point, we have the new solution: Insert for $c<\frac14$, i.e. $c=\frac15$: \begin{equation} \mu=\frac{\alpha \frac15+\beta +\alpha p^2}{- \alpha p + \beta + 1} \end{equation} By the claim, we set: \begin{equation} \frac{\beta + \alpha (p^2 + \frac15)}{\beta -\alpha p + 1}>1 \end{equation} which is satisfied only when \begin{equation} \begin{split} &p\in\mathbb{R}\\ &\frac{1}{10}\bigg( \sqrt{5} \sqrt{\frac{\alpha + 20}{\alpha}} - 5\bigg)<p<\frac{\beta+1}{\alpha}\\ &\beta<-1\\ &\alpha\leq-20, \end{split} \end{equation} which gives, with the given conditions for $h(x)=\alpha x+\beta$, that the periodic point is always located in the unit interval $[0,1]$.
For any two finite subsets $A,B$, of an abelian group, is $$|A+B|^2 |A-B|^2 \geq |A+A||A-A||B+B||B-B| \ ?$$ I’m interested in finding out if there are sumset inequalities that are sharper than the triangle inequalities and that show that some quantity related to sums and differences is smaller when the sums and differences are over the same sets. Smaller examples don’t seem to work: $|A-B|^2 \geq |A-A||B-B|$ fails when $B = -A$ and $|A+A| < |A-A|$. $|A+B|^2 \geq |A+A||B+B|$ doesn’t work when $B = -A$ and $|A-A| < |A+A|$. The candidate given in the title combines the two and avoids these difficulties because of its symmetry. It also holds when A is an arithmetic progression or a subspace and B is a random set.
In an abelian group, do there exist inequalities involving products of the numbers of elements of sums and differences of finite subsets?
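Small random tests of the proposed inequality in $\mathbb{Z}_m$ are easy to run; this is only a sanity check (the modulus and set sizes below are arbitrary choices), not evidence toward a proof.

```python
import random

def size(A, B, m, sign=1):
    """|A + sign*B| inside Z_m."""
    return len({(a + sign*b) % m for a in A for b in B})

m, bad = 23, 0
for _ in range(5000):
    A = random.sample(range(m), random.randint(2, 6))
    B = random.sample(range(m), random.randint(2, 6))
    lhs = (size(A, B, m) * size(A, B, m, -1)) ** 2
    rhs = size(A, A, m) * size(A, A, m, -1) * size(B, B, m) * size(B, B, m, -1)
    bad += lhs < rhs
print("violations found:", bad)
```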
Can you calculate the sum of this series? $\sum_{k=0}^\infty \left(\frac{kx^k}{(k+x)!}\right) $
Where did the sin(kx)/(kx) term in the two-point correlation function come from?
Can you calculate the sum of this series? $\sum_{k=1}^\infty \left(\frac{kx^k}{(k+x)!}\right) $
How could I introduce a parameter to determine the value of $\int_0^1\frac{\log(1+x)}{1+x^2}dx$?
> Let $Q_c(x)=x^2+c$. Prove that if $c<\frac14$, there is a unique > $\mu>1$ such that $Q_c$ is topologically conjugate to > $F_\mu(x)=\mu(1-x)$ via a map of the form $h(x)=\alpha x+\beta$. Interpretation: $h(x)$ is a linear map. $Q_c(x)$ and $F_\mu(x)$ are a quadratic and affine map respectively. The claim is that if $c<\frac14$, then $\exists$ $\mu>1$ and the quadratic and affine maps are conjugate to one another. **Definition.** (Topological conjugacy). Let $Q_c:X\to X$ and $F_\mu: Y\to Y$, and let $x_1\ne x_2$. Then $Q_c$ and $F_\mu$ are topologically conjugate if $\exists$ a homeomorphism $h:X\to Y$, $\ni$ $$h\circ Q_c = F_\mu\circ h$$ or $$h(Q_c(x_1)) = F_{\mu}(h(x_2)).$$ Now, we form, with $x_1\ne x_2$ $$h(Q_c(x_1)) = \alpha (x_1^2+c)+\beta$$ and $$F_{\mu}(h(x_2))= \mu(1-\alpha x_2-\beta) $$ hence, $$\alpha (x_1^2+c)+\beta=\mu(1-\alpha x_2-\beta)$$ which gives \begin{equation} c = -\frac{1}{\alpha}(\beta \mu + \beta - \mu + \alpha x_1^2 + \alpha \mu x_2) \ \ \ \text{where}\ \alpha\ne 0\ \text{and}\ \beta + \alpha x_2\ne1 \end{equation} or \begin{equation} \mu=\frac{\alpha c+\beta +\alpha x_1^2}{-x_2 \alpha + \beta + 1} \end{equation} Insert for $c<\frac14$, i.e. $c=\frac15$: \begin{equation} \mu=\frac{\alpha \frac15+\beta +\alpha x_1^2}{-x_2 \alpha + \beta + 1} \end{equation} then we insert $\alpha=1$ and $\beta=0$ for a simple case map $h(x)=x$: \begin{equation} \mu=\frac{\frac15+x_1^2}{1-x_2} \end{equation} Here we see that $\mu$ will be non-negative only in the unit interval, hence when $0\leq x_2<1$. Furthermore, we see $\mu>1$ when $x_2<x_1$ within the unit interval, which is the case for the quadratic family on the unit interval. So the claim holds for any mapping $h$, only in the unit interval. But is it valid as a proof when it only holds for the unit interval and when $x_1>x_2$ strictly within the unit interval? Thanks. UPDATE: By Lutz Lehmann's point, we have the new solution: Insert for $c<\frac14$, i.e. $c=\frac15$: \begin{equation} \mu=\frac{\alpha \frac15+\beta +\alpha p^2}{- \alpha p + \beta + 1} \end{equation} By the claim, we set: \begin{equation} \frac{\beta + \alpha (p^2 + \frac15)}{\beta -\alpha p + 1}>1 \end{equation} which is satisfied only when \begin{equation} \begin{split} &p\in\mathbb{R}\\ &\frac{1}{10}\bigg( \sqrt{5} \sqrt{\frac{\alpha + 20}{\alpha}} - 5\bigg)<p<\frac{\beta+1}{\alpha}\\ &\beta<-1\\ &\alpha\leq-20, \end{split} \end{equation} which gives, with the given conditions for $h(x)=\alpha x+\beta$, that the periodic point is always located in the unit interval $[0,1]$, and the quadratic and affine maps are topologically conjugate. If $c>\frac14$, i.e. $c=\frac13$, then we obtain that the periodic point is outside the unit interval, i.e. $p = -\frac16, \beta<\frac16 (-\alpha - 6)$, hence there exists no topological conjugacy between the quadratic and affine maps.
Suppose you have a 26-sided die, each face labelled from A-Z. What is the expected number of steps to observe the sequence "ABRACADABRA" for the first time? ANS = $26^{11} + 26^4 + 26$ A common technique to handle these kinds of problems is to draw a Markov chain in which the state corresponding to observing the pattern "ABRACADABRA" is made absorbing, and then calculate the expected number of steps until absorption, but I think this method is tedious (especially for the given sequence) and I believe there is some technique related to the _Optional Stopping Theorem_ that could be applied here; any help is appreciated.
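For reference, the standard "fair casino" argument via the Optional Stopping Theorem gives the answer as a sum of $26^{|p|}$ over every prefix $p$ of the pattern that is also a suffix; the helper below is just that formula, with the alphabet size as a parameter.

```python
def expected_time(pattern, alphabet_size=26):
    """Expected number of i.i.d. uniform letters until `pattern` first appears."""
    total = 0
    for k in range(1, len(pattern) + 1):
        if pattern[:k] == pattern[-k:]:       # prefix of length k equals suffix of length k
            total += alphabet_size ** k
    return total

print(expected_time("ABRACADABRA"))           # 26**11 + 26**4 + 26
```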
Let $k$ be an algebraically closed characteristic zero field, consider the $\mathfrak{gl}_{1}$ representation $$ \rho: \mathfrak{gl}_{1} \rightarrow k^{2} $$ given by $$ \rho(x) = \begin{pmatrix} \alpha & x \\ 0 & \beta \end{pmatrix} $$ Then this has a subrepresentation $$ \left\{ \begin{pmatrix} a \\ 0 \end{pmatrix} : a \in k \right\} $$ where $\rho(x)$ just acts as a scalar $\alpha$, so this subrep is isomorphic to $k_{\alpha}$. The quotient representation also just acts as a scalar $\beta$. So $\alpha, \beta$ are the weights of $k^{2}$ (since $k_{\alpha}, k_{\beta}$ are the composition factors of $k^{2}$.) I know that a representation of a nilpotent Lie algebra decomposes as the direct sum of its weight spaces. (Also, the weight spaces are equal to the generalised eigenspaces of $\rho(x)$ for a generic element $x \in \mathfrak{gl}_{1}$.) $\mathfrak{gl}_{1}$ is nilpotent, so for $\alpha \ne \beta$, I am trying to decompose $k^{2}$ as $$ k^{2} = k_{\alpha} \oplus k_{\beta} $$ But I cannot see what $k_{\beta}$ should be as a subrepresentation of $k^{2}$? Any help would be appreciated!
Is this representation of a Lie algebra decomposable?
I'm trying to solve exercise 5.11 from "Probability Essentials", which asks to show that if $X$ is Poisson($\lambda$) then $E|X-\lambda| = \frac{2\lambda^\lambda e^{-\lambda}}{(\lambda -1)!}$. I've shown that $E|X-\lambda| = 2\lambda^\lambda e^{-\lambda} \sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$, but I'm having trouble calculating the sum of this series: $\sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$. Could somebody help?
Consider the well-known [coupon collector problem][1] with $n$ potential coupons. If I collect a coupon per day, I know it will take me $n \cdot H_n$ days to finally collect all of them, where $H_n$ is the $n$-th harmonic number. For example, if the coupons are the positive integers up to 12, one potential sequence of coupons could look like r1=[5 2 12 9 3 2 1 8 5 6 7 11 11 11 3 4 2 9 3 5 8 12 8 3 10] Note that for this example, it took me 25 days to collect them all, which is “kind of” close to $12 \cdot H_{12}=37$. Now, once I obtain all of the coupons, then I will observe which ones I have obtained exactly once. I will eliminate this set $k_1$ of coupons, and take a look at how long it took me to collect the remaining $n-|k_1|$ coupons this time. Note that in this process I am always eliminating the last coupon to appear, but possibly more. In the previous example, the coupons that appear once are k1=[1 4 6 7 10] which are the ones which I will remove. And the ones that can stay are b1=[2 3 5 8 9 11 12] Searching how long it took me in the initial sequence to get all the values I haven’t removed, I obtained: r2=[5 2 12 9 3 2 8 5 11] I would expect that this should have taken me on average $(n-|k_1|) \cdot H_{n-|k_1|}$ days? Here $|k_1|=5$ so this would be $7 \cdot H_7=18$. This is not at all close to the actual time that it took me, which is 9 in our example. I will now check, in the second iteration, which are the coupons that I have collected exactly once, and remove them again from the sample. The coupons that appear only once are k2=[3 8 9 11 12] I remove them, and obtain the shortest list that contains all elements in $[1:12]\setminus \{k_1 \cup k_2\}$. This gives me r3=[5 2] Again, I would expect that this should have taken me on average $(n-|k_1|-|k_2|) \cdot H_{n-|k_1|-|k_2|}$ days? Here $|k_1|=5$ and $|k_2|=5$, which gives $2 \cdot H_2=3$, which is also more than what it actually took me, which was only 2 days. In the end, I am interested in the average time it takes me to find each number in the shortest sequence that contains them. I found all 5 numbers in $k_1$ in 25 days, all 5 numbers in $k_2$ in 9 days, and the remaining 2 coupons in $k_3$ in 2 days. The coupon-average thus gives me $$(5\cdot 25+5 \cdot 9+2\cdot 2)/12=14.5$$ If we refer to the coupon-average waiting time as $T_n$ when we have $n$ coupons, I am interested in the approximate expected value of $T_n$, particularly in its asymptotic behavior and lower bounds. My guess was that one could approximate $T_n$ by the recursive use of the initial approximation $H_n$, in which we fix that in each round we remove exactly one coupon. Therefore, we obtain $$\frac{H_n + H_{n-1}+ \ldots}{n}= \frac{\sum_{i=1}^n H_i}{n}$$ which asymptotically converges to $H_n$. But, as I have shown before in the example, this approximation is not that good because the recursive approximation using $H_n$ starts getting worse and worse. Edit: I have found that the expected size of $k_1$ is $H_n$, see Section 4 in [Myers, Amy N., and Herbert S. Wilf. "Some new aspects of the coupon collector's problem." SIAM review 48.3 (2006): 549-565.][2]. [1]: https://en.wikipedia.org/wiki/Coupon_collector%27s_problem [2]: https://arxiv.org/abs/math/0304229
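For experimenting with the asymptotics, here is a hedged Monte Carlo sketch of exactly the iterated procedure described above (uniform coupons, remove those seen exactly once in the current shortest covering prefix, repeat, then take the coupon-weighted average of the prefix lengths); the parameter choices are illustrative.

```python
import random
from collections import Counter

def coupon_average(n, rng=random):
    # draw coupons until all n have appeared at least once
    seq, seen = [], set()
    while len(seen) < n:
        c = rng.randrange(n)
        seq.append(c)
        seen.add(c)
    remaining = set(range(n))
    weighted = 0
    while remaining:
        # shortest prefix of the sequence containing every coupon still in play
        need, t = set(remaining), 0
        for t, c in enumerate(seq, start=1):
            need.discard(c)
            if not need:
                break
        prefix = seq[:t]
        counts = Counter(c for c in prefix if c in remaining)
        once = {c for c in remaining if counts[c] == 1}   # the k_i that get removed
        weighted += len(once) * t
        remaining -= once
        seq = prefix
    return weighted / n                                   # the coupon-average T_n

trials = 20000
print(sum(coupon_average(12) for _ in range(trials)) / trials)
```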
How could I introduce a parameter to determine the value of $\int_0^1\frac{\log(1+x)}{1+x^2}dx$?
The stabilization problem is a fundamental problem in control theory. There is a large literature on achieving stabilization, but fewer results concern its computational complexity. Consequently, we consider the computational complexity of the stabilization problem. Given a linear Boolean control network (BCN) $$x(t+1)=Ax(t)+Bu(t),$$ where $x(t)\in\mathbb{F}_2^{n\times 1},u(t)\in\mathbb{F}_2^{m\times 1}$ ($m\leq n$) represent the network state and input (or control) state at time $t$, respectively; $A\in\mathbb{F}_2^{n\times n},B\in\mathbb{F}_2^{n\times m}.$ Denote by $\mathbb{F}_2$ the binary field. The stabilization problem in the linear BCN is to determine whether there is state feedback $u(t)=Kx(t)$ ($K\in\mathbb{F}_2^{m\times n}$) so that all network states of the linear BCN converge to the origin $[0,\cdots,0]$ [1]. Briefly, the Stabilization Problem ($\mathbf{SP}$) can be represented as - Input: two Boolean matrices $A\in\mathbb{F}_2^{n\times n}$ and $B\in\mathbb{F}_2^{n\times m}$ $(m\leq n).$ - Problem: determine whether there is state feedback $u(t)=Kx(t)$ ($K\in\mathbb{F}_2^{m\times n}$) so that all network states of the system $x(t+1)=(A+BK)x(t)$ converge to $[0,\cdots,0].$ So far, I have proved a simple case where $K\in\mathbb{F}_2^{n\times 1}$ and all nonzero entries of $K$ do not exceed a given constant $w$, using the finite-field subset sum problem [2]. $\mathbf{SP}$ is my question, and the following are my efforts to solve this problem. According to [3], for a given Boolean matrix $Q\in\mathbb{F}_2^{n\times n}$, if there exist an invertible Boolean matrix $P_0\in\mathbb{F}_2^{n\times n}$ and a lower triangular matrix $A_0$ whose diagonal elements are all $0$ such that $$P_0QP_0^{-1}=A_0,$$ then all states of the system $x(t+1)=Qx(t)$ will converge to $[0,\cdots,0].$ Inspired by this conclusion, $\mathbf{SP}$ can be converted into the question of whether there exist an invertible Boolean matrix $P_0\in\mathbb{F}_2^{n\times n}$ and a lower triangular matrix $A_0\in\mathbb{F}_2^{n\times n}$ whose diagonal elements are all $0$ such that $P_0(A+BK)P_0^{-1}=A_0.$ By a simple calculation, we have \begin{equation} \label{eq:lineareq} BK=A+P_0^{-1}A_0P_0. \end{equation} By linear algebra, this equation has a solution $K$ iff $$rank(B)=rank(B,A+P_0^{-1}A_0P_0).$$ Hence, $\mathbf{SP}$ becomes the question of whether there exist an invertible matrix $P_0$ and a lower triangular matrix $A_0$ such that $rank(B)=rank(B,A+P_0^{-1}A_0P_0).$ Finally, the Rank Equality Problem ($\mathbf{REP}$) can be stated as - Input: two Boolean matrices $A\in\mathbb{F}_2^{n\times n},B\in\mathbb{F}_2^{n\times m}$ ($m\leq n$) - Problem: find an invertible matrix $P_0\in\mathbb{F}_2^{n\times n}$ and a lower triangular matrix $A_0$ whose diagonal elements are all $0$ such that $$rank(B)=rank(B,A+P_0^{-1}A_0P_0).$$ Is there a polynomial time algorithm to solve the Rank Equality Problem? Next, I will give some supporting evidence that $\mathbf{SP}$ is NP-complete. By [3], $\mathbf{SP}$ has a solution $K$ iff $\det(\lambda I_n-A-BK)~ mod ~2 =\lambda^n.$ Assume that $K=[k_{ij}]_{m\times n}$ and $$\det(\lambda I_n-A-BK)=\lambda^n+f_1(k_{11},k_{12},\cdots,k_{mn})\lambda^{n-1}+\cdots+f_{n-1}(k_{11},k_{12},\cdots,k_{mn})\lambda+f_n(k_{11},k_{12},\cdots,k_{mn}),$$ hence we have $$f_i(k_{11},k_{12},\cdots,k_{mn}) ~mod~ 2=0,~i=1,\cdots,n.$$ In the following, I will propose a method to convert $f_i(k_{11},k_{12},\cdots,k_{mn}) ~mod~ 2=0,~i=1,\cdots,n$ into Boolean quadratic equations ($\mathbf{QUADEQ}$).
For example, let $k_{1,1}k_{1,2}k_{1,3}\cdots k_{1,10}$ be a term of $f_1(k_{11},k_{12},\cdots,k_{mn})$; then I convert the term into a Boolean $\mathbf{QUADEQ}$ instance ($\mathbf{QUADEQ}$ is NP-complete [4]). Let $k_{11}k_{12}=y_1,\cdots,k_{1,9}k_{1,10}=y_5,y_1y_2=z_1,y_3y_4=z_2,z_1z_2=q_1$; therefore $k_{1,1}k_{1,2}k_{1,3}\cdots k_{1,10}$ is equal to $q_1y_5$ with $k_{11}k_{12}=y_1,\cdots,k_{1,9}k_{1,10}=y_5,y_1y_2=z_1,y_3y_4=z_2,z_1z_2=q_1.$ In this way, $f_i(k_{11},k_{12},\cdots,k_{mn}) ~mod~ 2=0,~i=1,\cdots,n$ can be converted into Boolean $\mathbf{QUADEQ}$ in polynomial time. Therefore, I conjecture that $\mathbf{SP}$ is NP-complete. References [1] Cheng, D., Qi, H., & Li, Z. (2010). Analysis and control of Boolean networks: a semi-tensor product approach. Springer Science & Business Media. [2] Vardy, A. (1997). The intractability of computing the minimum distance of a code. IEEE Transactions on Information Theory, 43(6), 1757-1766. [3] Hernández Toledo, R. A. (2005). Linear finite dynamical systems. Communications in Algebra, 33(9), 2977-2989. [4] Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge University Press.
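As a hedged sanity check on tiny instances, here is a brute-force sketch of $\mathbf{SP}$ itself (exponential in $mn$, so only for very small $n$, $m$). It uses the characterization quoted from [3] that all states converge to the origin iff $\det(\lambda I_n - A - BK) \bmod 2 = \lambda^n$, i.e. iff $A+BK$ is nilpotent over $\mathbb{F}_2$, equivalently $(A+BK)^n = 0$.

```python
import itertools
import numpy as np

def is_nilpotent_gf2(M):
    n = M.shape[0]
    P = np.eye(n, dtype=int)
    for _ in range(n):
        P = (P @ M) % 2
    return not P.any()                      # (A + BK)^n = 0 over F_2

def stabilizable(A, B):
    """Brute-force SP for tiny n, m: search every K in F_2^{m x n}."""
    n, m = B.shape
    for bits in itertools.product([0, 1], repeat=m * n):
        K = np.array(bits, dtype=int).reshape(m, n)
        if is_nilpotent_gf2((A + B @ K) % 2):
            return K                        # a stabilizing feedback gain
    return None

A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])   # cyclic, hence not nilpotent on its own
B = np.array([[0], [0], [1]])
print(stabilizable(A, B))                         # e.g. [[1 0 0]]: row 3 of A + BK becomes 0
```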
Faster way to find the eigenvalues of a 4x4 real matrix?
I'm looking for a pairing function $f(x,y)$ which gives unique values for every combination of integers $x,y>0$ and, as a special property, $0$ for $x=0$, $\forall y$.<br> Or to be more general we can also replace $0$ with constants. <br> - We do know the max values $x$ and $y$ can achieve - We are allowed to shift the $x, y$ values, so we also know their min values, which don't need to be $0$ <br> So the more general definition with constants $c_x, c_f$ (instead of $0$): $$ x_{min} \le x \le x_{max}$$ $$ y_{min} \le y \le y_{max}$$ $$\forall y: f(c_x,y) = c_f $$ $$\forall y, \forall x\not= c_x: f(x,y)\not = f(y,x) \not = c_f$$ - It will be used in a computer program. The goal is finding a function with a max result as small as possible (bits needed for representation) while still being easy and fast to compute by a machine (not using too much storage). Therefore integer calculations are appreciated. Floating point can lead to inaccuracies.<br> - The inverse can be anything. We only need to check if two variables $x,y$ lead to the target number.<br> - No case selections are allowed (e.g. for $x=0$). Something like the Kronecker delta function, which can only be $0$ for input $0$ and $1$ for anything else, is also not allowed. Target value sizes: the number of different values $|\{x\}| = 2^{16} = 65536$ and $|\{y\}|$ at least $2^{26} = 67108864$. <br> The size $|\{y\}|$ can be bigger, but $\max f(x,y) < 2^{92}$ Can we find any such function? ---- Here are some **Examples:** Pairing functions which **do not work** (not $0$ or a constant for any $x = c_x$):<br> Cantor's pairing function: $$f(x,y)=\frac{(x+y)\cdot (x+y+1)}{2} + a$$ Szudzik's pairing function: $$ \begin{eqnarray*} f(x,y) = \begin{cases} y^2 + x, &\text{if }x < y, \\ y^2 + x + y, &\text{if }x \ge y \end{cases} \end{eqnarray*} $$ This has some case selection we do not want. We can shift $y$ to be always larger than $x$ and with this reduce it to: $$f(x,y) = (y+x_{max} +1 )^2 + x$$ Can we modify them to be a constant for a certain $x = c_x$? ---------- Pairing function which **does work** but needs extra memory/time: <br> If we use $0 \le x \le 2^{16}-1$, it cannot surpass the value $2^{16} = 65536$. The next prime above it is 65537. This is the 6543rd prime number. <br> Let $p(i)$ return the $i$'th prime number.<br> Let $0 \le y \le 2^{26}-1$ <br> With this we can use the pairing function: $$f(x,y) = x \cdot p(y + 6543)$$ This returns a unique value for every combination $x,y$, except for $x = c_x = 0$ it will always return the constant $c_f = 0$.<br> However, this works well in theory, but calculating $p$ takes quite a long time. Storing all values would take a lot of memory ($\approx 268$ MB) - not a nice option. <br> The max value of $f(x,y)$ would be $2^{46.31}$, needing $47$ bits, which is not too far from the optimum of $42$ bits. (Cantor and Szudzik are much bigger) To reduce memory we can split $y$ into two half-bit-size parts. For $2^{26}$ different values this would be $2^{13}$. The first part gets represented by the first $2^{13}$ primes after $2^{16}$, the second part by the $2^{13}$ primes after that. $$f(x,y) = x \cdot p(y_{\text{bits } 1..13} + 6543) \cdot p(y_{\text{bits } 14..26} + 6543 + 2^{13}) $$ This would reduce memory but scales the max value of $f(x,y)$ to $2^{51.3}$, so we need $52$ bits. ------ **Question:** Can we find any such pairing function which does not need to replace the parameters $x,y$ with their primes (or similar) while the resulting value $f(x,y)$ won't get much bigger than $2^{52}$ (at most $2^{92}-1$)?
---- Pairing function which **might work** but is **too big**:<br> While testing around I came up with <br> $$f(x,y) = ((x+1)^2+y^2) \cdot (x^3+(y+1)^2) \cdot x$$ I have **no proof** that this is a valid pairing function for all combinations $x,y$, but in tests it did work for all combinations $x<2^{8}, y<2^{20}$.<br> However, the results can get too big: it needs up to $121$ bits. <br> Can we find a pairing function with a smaller max value?
Where is my mistake in this integral equation?
I'm currently learning about neural networks and stumbled upon a confusion related to the use of Stochastic Gradient Descent (SGD) in training. Specifically, I'm puzzled about the computation of the partial derivative of the cross-entropy loss with respect to the predicted probabilities. Here's where my confusion lies: Why is it that $ \frac{\partial}{\partial f(\mathbf{x})_c}(-\log f(\mathbf{x})_y) = \frac{-1_{(y=c)}}{f(\mathbf{x})_y} \quad$? ($1_{(y=c)}=1$ if $y=c$, otherwise 0) Given that $ f(\mathbf{x})_c = p(y=c|\mathbf{x}) $ and knowing that the sum of probabilities across all classes equals one, $ \sum_c f(\mathbf{x})_c = p(y=c|\mathbf{x}) = 1 $, it seems there should be a relationship between the derivatives across different classes. Thus, shouldn't the derivative $ \frac{\partial}{\partial f(\mathbf{x})_c}(-\log f(\mathbf{x})_y) $ be equivalent to $ \frac{\partial}{\partial f(\mathbf{x})_c}( (1-\sum_{c' \neq y}f(\mathbf{x})_{c'}))=-\frac{1}{\log f(\mathbf{x})_y}\quad\frac{\partial}{\partial f(\mathbf{x})_c}(-\log (1-\sum_{c' \neq y}f(\mathbf{x})_{c'})) $? And wouldn't this not equal zero, thereby presenting a contradiction? I'm trying to wrap my head around this concept and would greatly appreciate any insights or explanations you might offer. Thank you!
I'm trying to solve exercise 5.11 from "Probability Essentials", which asks to show that if $X$ is Poisson($\lambda$) then $E|X-\lambda| = \frac{2\lambda^\lambda e^{-\lambda}}{(\lambda -1)!}$. I've shown that $E|X-\lambda| = 2\lambda^\lambda e^{-\lambda} \sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$, but I'm having trouble calculating the sum of this series: $\sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$. Could somebody help?
I want to prove that $5^{2n}-2^{3n}$ is divisible by 17 for all positive integers $n$. I know this can be done by induction (sketch proof shown below) but want to know if: 1. Are there any alternative proof methods that do not use induction? 2. How many different induction proof approaches are possible for a question such as this? (By different I mean different groupings of terms or perhaps adding and subtracting a new term). $P(1): 5^{2}-2^{3}=17$ which is divisible by 17, so $P(1)$ is true. Now, we need to show that the truth of $P(k)$ implies the truth of $P(k+1)$, which means showing that if $5^{2k}-2^{3k}$ is divisible by 17 for some positive integer $k$ then $5^{2(k+1)}-2^{3(k+1)}$ is also divisible by 17. $5^{2(k+1)}-2^{3(k+1)}=25\times 5^{2k}-8\times 2^{3k}$ $=17\times 5^{2k}+8\times 5^{2k}-8\times 2^{3k}$ $=17\times 5^{2k}+8(5^{2k}-2^{3k})$ Then by $P(k)$, $5^{2k}-2^{3k}$ is divisible by 17 so we can write $5^{2k}-2^{3k}=17s$. $P(k+1)$ then becomes $17\times 5^{2k}+8\times 17s$, from which the result follows. What other approaches (if any) are possible and how are they different?
Divisibility proof - alternative?
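Not a proof, but a quick computational sanity check of the statement, based on the observation that $25 \equiv 8 \pmod{17}$ (which is also the seed of a congruence-based, induction-free argument):

```python
# 25 ≡ 8 (mod 17), so 5^(2n) = 25^n and 2^(3n) = 8^n agree mod 17 for every n.
for n in range(1, 1000):
    assert (pow(5, 2 * n, 17) - pow(2, 3 * n, 17)) % 17 == 0
print("5^(2n) - 2^(3n) is divisible by 17 for n = 1, ..., 999")
```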
I'm currently learning about neural networks and stumbled upon a confusion related to the use of Stochastic Gradient Descent (SGD) in training. Specifically, I'm puzzled about the computation of the partial derivative of the cross-entropy loss with respect to the predicted probabilities. Here's where my confusion lies: Why is it that $ \frac{\partial}{\partial f(\mathbf{x})_c}(-\log f(\mathbf{x})_y) = \frac{-1_{(y=c)}}{f(\mathbf{x})_y} \quad$? ($1_{(y=c)}=1$ if $y=c$, otherwise 0) Given that $ f(\mathbf{x})_c = p(y=c|\mathbf{x}) $ and knowing that the sum of probabilities across all classes equals one, $ \sum_c f(\mathbf{x})_c = p(y=c|\mathbf{x}) = 1 $, it seems there should be a relationship between the derivatives across different classes. Thus, shouldn't the derivative $ \frac{\partial}{\partial f(\mathbf{x})_c}(-\log f(\mathbf{x})_y) $ be equivalent to $ \frac{\partial}{\partial f(\mathbf{x})_c}( (1-\sum_{c' \neq y}f(\mathbf{x})_{c'}))=-\frac{1}{ f(\mathbf{x})_y}\quad\frac{\partial}{\partial f(\mathbf{x})_c}( (1-\sum_{c' \neq y}f(\mathbf{x})_{c'})) $? And wouldn't this not equal zero (for $c\neq y$, this equals -1), thereby presenting a contradiction? I'm trying to wrap my head around this concept and would greatly appreciate any insights or explanations you might offer. Thank you!
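One way to see what the quoted formula claims is to treat the predicted probabilities as independent coordinates (which is the convention behind $\partial/\partial f(\mathbf{x})_c$ here) and check the partial derivatives by finite differences; the numbers below are made up purely for illustration.

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])     # hypothetical predicted probabilities
y = 1                              # true class index
eps = 1e-6
loss = lambda q: -np.log(q[y])
for c in range(3):
    q = p.copy(); q[c] += eps                      # perturb only coordinate c, others held fixed
    grad_c = (loss(q) - loss(p)) / eps
    print(c, grad_c, -(1.0 if c == y else 0.0) / p[y])   # matches -1_{y=c} / p_y
```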
Find all functions $f: \mathbb{R} \rightarrow \mathbb{R}$ such that $ f(x^2+y^2) + f(xf(y) + f(x)f(y)) = (f(x+y))^2. $ My attempt: Plugging in $x=y=0$ we get $f(0) + f(f(0)^2) = f(0)^2. $ Setting $x:=y$ and $y:=x$ gets us $f(xf(y)+f(x)f(y)) = f(yf(x)+f(x)f(y)).$ Setting $f(0)=a$ and plugging in $x=0$ and $ y=a $, we get $f(a^2) + f(af(a)) = f(a)^2. $ $x=a$ and $ y=0 $ gives us $ f(a^2) + f(a^2+af(a)) = f(a)^2.$ How can I continue?
I am solving for the rate of change of magnetic field strength ($\frac{dB}{dt}$, because I want to calculate the induced emf after this) in a current-carrying coil of 100 turns (note: the current is changing); the coil is connected to a 30kHz sine wave function generator. And I got confused while taking the derivative of the angle $\theta$. Formula for calculating the magnetic field strength $B$: $$ B = (100)\frac{\mu \times I}{2\pi r} T $$ Where $\mu$ is the permeability of the air, $I$ is the current through the coil, (100) is the number of turns of the coil, and $r$ is the radius (distance) from the coil. Note: $\mu$ and $r$ are constants for our example but the current $I$ is changing, $$\frac{d(B)}{dt}= \frac{100\mu}{2\pi r} \times \frac{d(I)}{dt} $$ How fast is the current changing with respect to time? The current is a 30kHz sine wave in this case. (Just for this example, let's set aside the fact that in an inductor the current and voltage have a 90 degree phase shift.) Hence, $$\frac{d(I)}{dt}= \frac{d(\sin(\theta))}{dt} $$ $$\frac{d(I)}{dt}= \cos(\theta) \times \frac{d(\theta)}{dt} $$ Now, how fast is the angle $\theta$ changing with respect to time? The frequency of the sine wave is 30kHz, so $$ \frac{d(\theta)}{dt} = 30000 \times 2 \pi $$ because in 1 second there will be 30k cycles of $2\pi$ radians completed. $$ \frac{d(\theta)}{dt} = 60000 \pi $$ Hence, $$\frac{d(I)}{dt}= 60000 \pi \times \cos(\theta) $$ So, $$\frac{d(B)}{dt}= \frac{100\mu \times 30000 \times \cos(\theta)}{r} $$ Is the answer correct? I still don't really have an intuitive understanding of what is happening with the angle $\theta$. And what value should I put in for $\theta$ to get the final answer, the rate of change of magnetic field strength $B$?
How to calculate the rate of change of angle $\theta$ with respect to time, if the sine wave frequency is 30kHz?
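A quick numerical cross-check of the chain-rule step in the question above, comparing a finite-difference derivative of $I(t)=\sin(2\pi f t)$ with $2\pi f\cos\theta = 60000\pi\cos\theta$ at an arbitrary instant (the instant chosen is an assumption for the check only):

```python
import numpy as np

f = 30e3                                   # 30 kHz
t = 1e-5                                   # arbitrary time instant
theta = 2 * np.pi * f * t
dt = 1e-10
numeric = (np.sin(2*np.pi*f*(t + dt)) - np.sin(2*np.pi*f*(t - dt))) / (2*dt)
print(numeric, 60000 * np.pi * np.cos(theta))   # the two values agree
```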
> Two cards are drawn from a well shuffled pack of $52$ cards. Find the > probability that one of them is a red card and the other is a queen. **My Attempt** The relevant cards are $26$ red cards and $2$ black queens i.e. in total $28$ cards. I took four cases. **Case 1 : One non-queen red card and one red queen** The probability would be $$\frac{\binom{24}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ **Case 2 : One non-queen red card and one black queen** The probability would be $$\frac{\binom{24}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ **Case 3 : Two red queens** The probability would be $$\frac{\binom{2}{2}}{\binom{28}{2}}$$ **Case 4 : One red queen and one black queen** The probability would be $$\frac{\binom{2}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ So the required probability $$=\frac{48+48+1+4}{\binom{28}{2}}=\frac{101}{378}$$ Is the above solution correct?
How to calculate the rate of change of angle $\theta$ with respect to time, If the sine wave frequency is 30kHz?
Your equation for $C(t)$ can be broken down into equations for $x$ and $y$ separately: $$ x(t) = \frac{1-t^2}{1+t^2} \quad ; \quad y(t) = \frac{2t}{1+t^2} $$ It’s easy to check that $[x(t)]^2 + [y(t)]^2 = 1$ for all $t$. This means that every point $C(t)= (x(t),y(t))$ lies on the unit circle. Also it’s clear that $0 \le x(t) \le 1$ if $0 \le t \le 1$. Can you take it from there? The same sort of reasoning will work whenever you have parametric equations and an implicit equation for a conic. In fact, it will work whenever you have parametric equations and an implicit equation for any curve.
My textbook is asking me to prove that, given $p_n(x,y)$ polynomial of degree $n\geq1$ in $x,y$, it is true that: $$ lim_{|(x,y)|\rightarrow\infty} |p_n(x,y)| = +\infty $$ But if I take, for example, $p_n$ to be $x^2 - y^2$, then i get that the limit does indeed approach $+\infty$ everywhere except along the lines $y = x$ and $y = -x$ on the $x,y$ plane. I fear I'm missing something here, any help?
Is the limit at infinity of an absolute multivariable polynomial always infinity?
Let $f:\mathbb{R}^2\to \mathbb{R}$ be a smooth function such that for all $t$ $f_t:=f(t,\cdot)$ has a unique maximum at $x^*(t)$ (e.g. $f_t$ is a strictly concave function for all $t$). My question is: is the function $t\mapsto x^*(t)$ smooth in general? If not, are there some reasonable conditions on $f$ that guarantee its smoothness?
I'm trying to solve exercise 5.11 from "Probability Essentials", which asks to show that if $X$ is Poisson($\lambda$) then $E|X-\lambda| = \frac{2\lambda^\lambda e^{-\lambda}}{(\lambda -1)!}$. I've shown that $E|X-\lambda| = 2\lambda^\lambda e^{-\lambda} \sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$, but I'm having trouble calculating the sum of this series: $\sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$. Could somebody help? EDIT: This is what I have done: $E|X-\lambda|$ = $\sum_{j=0}^\infty \left(\frac{|j-\lambda|\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\lambda \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\infty \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ - $\sum_{j=\lambda+1}^\infty \left(\frac{(\lambda-j)\lambda^je^{-\lambda}}{j!}\right) $ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum_{k=1}^\infty \left(\frac{k\lambda^{k+\lambda}e^{-\lambda}}{(k+\lambda)!}\right) $ = $2 e^{-\lambda} \lambda^{\lambda}\sum_{k=1}^\infty \left(\frac{k\lambda^{k}}{(k+\lambda)!}\right) $.
Your equation for $C(t)$ can be broken down into equations for $x$ and $y$ separately: $$ x(t) = \frac{1-t^2}{1+t^2} \quad ; \quad y(t) = \frac{2t}{1+t^2} $$ It’s easy to check that $[x(t)]^2 + [y(t)]^2 = 1$ for all $t$. This means that every point $C(t)= (x(t),y(t))$ lies on the unit circle. Also it’s clear that $0 \le x(t) \le 1$ if $0 \le t \le 1$. Can you take it from there? The same sort of reasoning will work whenever you have parametric equations and an implicit equation for a conic. In fact, it will work whenever you have parametric equations and an implicit equation for any curve. A rational quadratic curve will never quite cover an entire conic — there will always be at least one point missing. For example, your parametric equation $C(t)$ will never give you the point $(-1,0)$ no matter what parameter value $t$ you use.
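To make the "take it from there" step concrete, here is a quick SymPy check of the two facts used above: the parametric point always lies on the unit circle, and the point $(-1,0)$ is never reached for any parameter value.

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = (1 - t**2) / (1 + t**2)
y = 2*t / (1 + t**2)
print(sp.simplify(x**2 + y**2))        # 1: every C(t) lies on the unit circle
print(sp.solve(sp.Eq(x, -1), t))       # []: the point (-1, 0) is never reached
```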
Suppose $M$ is a smooth manifold and $A$ is a $\mathcal{C}^\infty(M)$-submodule of $\Gamma(TM)$ with the property that for all $x \in M$, the values of the vector fields in $A$ evaluated at $x$ determine a $p$-dimensional subspace of $T_x M$. Then $A$ defines a smooth distribution $P$ i.e. a smooth subbundle of $TM$ of rank $p$; for every $x \in M$, $P_x$ is given by all the vector fields in $A$ evaluated at $x$. Consider now $T_P \subset \Gamma(TM)$ the $\mathcal{C}^\infty(M)$-submodule of vector fields which are tangent to $P$ i.e. their values are in $P_x$ for each $x \in M$. In other words $T_P = \Gamma(P)$. My question is, why is it that $A = T_P$? A priori, it only seems that $A \subset T_P$, but why is it that if a vector field is in $P_x$ for every $x$ then it must be in $A$? As a preliminary step, it seems to me that it is not necessary for $A$ to be closed under multiplication with smooth functions for it to define a distribution. Indeed, if $A \subset \Gamma(TM)$ is just a collection of vector fields which at every point span a $p$-dimensional subspace, then for every point we find some vector fields in $A$ which are linearly independent at that point, so by continuity of $\det$ they must be linearly independent on a neighborhood of the point, so these vector fields smoothly trivialize $P$ around the point, hence $P$ is a smooth distribution. Am I wrong with anything in this argument? Based on it, it seems that the fact that $A$ is a $\mathcal{C}^\infty(M)$-submodule is essential in proving $T_P \subset A$, but I can't figure out how to use this fact.
Does a distribution uniquely determine a (the) submodule of vector fields which generates it?
I'm trying to solve exercise 5.11 from "Probability Essentials", which asks to show that if $X$ is Poisson($\lambda$) then $E|X-\lambda| = \frac{2\lambda^\lambda e^{-\lambda}}{(\lambda -1)!}$. I've shown that $E|X-\lambda| = 2\lambda^\lambda e^{-\lambda} \sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$, but I'm having trouble calculating the sum of this series: $\sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$. Could somebody help? EDIT: This is what I have done: $E|X-\lambda|$ = $\sum_{j=0}^\infty \left(\frac{|j-\lambda|\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\lambda \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\infty \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ - $\sum_{j=\lambda+1}^\infty \left(\frac{(\lambda-j)\lambda^je^{-\lambda}}{j!}\right) $ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum_{k=1}^\infty \left(\frac{k\lambda^{k+\lambda}e^{-\lambda}}{(k+\lambda)!}\right) $ = $2 e^{-\lambda} \lambda^{\lambda}\sum_{k=1}^\infty \left(\frac{k\lambda^{k}}{(k+\lambda)!}\right) $.
I'm trying to solve exercise 5.11 from "Probability Essentials", which asks to show that if $X$ is Poisson($\lambda$) then $E|X-\lambda| = \frac{2\lambda^\lambda e^{-\lambda}}{(\lambda -1)!}$. I've shown that $E|X-\lambda| = 2\lambda^\lambda e^{-\lambda} \sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$, but I'm having trouble calculating the sum of this series: $\sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$. Could somebody help? EDIT: This is what I have done: $E|X-\lambda|$ = $\sum_{j=0}^\infty \left(\frac{|j-\lambda|\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\lambda \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\infty \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ - $\sum_{j=\lambda+1}^\infty \left(\frac{(\lambda-j)\lambda^je^{-\lambda}}{j!}\right) $ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum_{k=1}^\infty \left(\frac{k\lambda^{k+\lambda}e^{-\lambda}}{(k+\lambda)!}\right) $ = $2 e^{-\lambda} \lambda^{\lambda}\sum_{k=1}^\infty \left(\frac{k\lambda^{k}}{(k+\lambda)!}\right) $.
I'm struggling with the proof of the Eulerian trail theorem in A Walk Through Combinatorics. As you know, the theorem states that "A connected graph G has a closed Eulerian trail if and only if all vertices of G have even degree." I'll only ask about the <- part. The book gives an algorithm for how to actually construct an Eulerian trail. 1. Step: Take any vertex S, and start walking along an edge e1, to the other endpoint A1 of that edge, then walk along any new edge e2 that starts in A1. Continue this way, using new (previously unused) edges at each step, until a closed trail C1 is formed. The first closed trail will be formed when we first revisit a vertex already visited. 2. Step: If C1 = G, then we are done. If not, then choose a vertex V in C1 so that C1 does not contain all edges adjacent to V. 3. Step: Let us now remove all edges of C1 from G. We get a graph in which again all vertices have even degree. Starting at V, let us take another closed trail C2 in the remaining graph. We can then unite C1 and C2 into one closed trail in G. Indeed, if we start walking by C1, we can stop at V, walk through C2, then complete our trail by using the remaining part of C1. Now my question is that in the algorithm, when we start constructing our first C1, it is not guaranteed that C1 includes the starting point S. Again, for C2 it is not actually guaranteed that it includes the vertex V. Then how can we unite C1 and C2?
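Not an answer to the bookkeeping question itself, but for comparison, here is a hedged iterative Python sketch of the same "splice in sub-trails" idea (Hierholzer's algorithm) for an undirected graph given as an edge list; seeing where a finished vertex gets appended to the circuit may help clarify how the sub-trails C1, C2, ... end up united.

```python
from collections import defaultdict

def eulerian_circuit(edges):
    """edges: list of (u, v) pairs; all degrees assumed even, graph connected."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:   # discard edges already traversed
            adj[v].pop()
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)                     # keep extending the current trail
        else:
            circuit.append(stack.pop())         # dead end: vertex finished, splice it in
    return circuit[::-1]

# triangle 0-1-2 plus the cycle 1-3-4-1; prints a closed trail using every edge once
print(eulerian_circuit([(0, 1), (1, 2), (2, 0), (1, 3), (3, 4), (4, 1)]))
```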
Where did the sin(kx)/(kx) term in the two-point correlation function $\xi(x)$ come from?
There are lots of quadratic recursion sequences $$ z_{n+1}=az_n^2+bz_n+c. $$ Inserting a linear transformation $z=\alpha x+\beta$ results in another quadratic recursion $$ x_{n+1}=a'x_n^2+b'x_n+c' $$ with the same qualitative properties. Now one can ask what the simplest, easiest-to-compare examples in such a transformation class are. The linear transformation has 2 free parameters. That generically allows one to pose 2 conditions on the transformed coefficients $a',b',c'$. The condition combinations that have "won" as being popular are $a'=1$, $b'=0$, resulting in the Mandelbrot iteration, and $c'=0$, $a'+b'=0$, giving the Feigenbaum/logistic map. These conditions are themselves quadratic equations in the transformation parameters. Thus it is unsurprising that they can have complex solutions. So it can be of interest to ask when, given real coefficients in the original recursion, the transformation parameters are also real. The task asks when the Mandelbrot map with real $c$ can be transformed into the logistic map using only real parameters. So insert the transformation $$ h(x_{n+1})=h(x_n)^2+c\\ αx_{n+1}+β=α^2x_n^2+2βαx_n+β^2+c\\ a'=α\\ b'=2β\\ c'=(β^2-β+c)/α $$ So $α=-2β$ and $β$ is a root of $(2β-1)^2+4c-1=0$.
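A small SymPy check of the final claim, with $\beta$ taken as the root $(1+\sqrt{1-4c})/2$ of $\beta^2-\beta+c=0$ (real exactly when $c\le\frac14$): the affine map $h(x)=\alpha x+\beta$ with $\alpha=-2\beta$ conjugates the logistic-form recursion $x\mapsto 2\beta\,x(1-x)$ to $z\mapsto z^2+c$.

```python
import sympy as sp

x, c = sp.symbols('x c')
beta = (1 + sp.sqrt(1 - 4*c)) / 2          # root of beta^2 - beta + c = 0, real iff c <= 1/4
alpha = -2*beta
h = lambda t: alpha*t + beta               # the conjugating map z = h(x)
logistic = 2*beta*x*(1 - x)                # transformed recursion with a' + b' = 0, c' = 0
print(sp.simplify(sp.expand(h(logistic) - (h(x)**2 + c))))   # 0, i.e. h ∘ F = Q_c ∘ h
```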
Since e is a real number I know that $e^0 = 1$, but when I enter $z=0$ into the power series definition of $e^z$ I get an output of 0. Am I doing something wrong? $$e^z = \sum_{n=0}^\infty \frac{1}{n!}z^n$$ Setting z = 0: $$e^0 = \sum_{n=0}^\infty \frac{1}{n!}0^n = \sum_{n=0}^\infty 0 = 0$$ What have I done wrong?
Why does the power series form of the exponential equal 0 when evaluated at 0?
Since e is a real number I know that $e^0 = 1$, but when I enter $z=0$ into the power series definition of $e^z$ I get an output of 0. Am I doing something wrong? $$e^z = \sum_{n=0}^\infty \frac{1}{n!}z^n$$ Setting z = 0: $$e^0 = \sum_{n=0}^\infty \frac{1}{n!}0^n = \sum_{n=0}^\infty 0 = 0$$ What have I done wrong?
Since $e$ is a real number I know that $e^0 = 1$, but when I enter $z=0$ into the power series definition of $e^z$ I get an output of 0. Am I doing something wrong? $$e^z = \sum_{n=0}^\infty \frac{1}{n!}z^n$$ Setting z = 0: $$e^0 = \sum_{n=0}^\infty \frac{1}{n!}0^n = \sum_{n=0}^\infty 0 = 0$$ What have I done wrong?
Since $e$ is a real number I know that $e^0 = 1$, but when I enter $z=0$ into the power series definition of $e^z$ I get an output of 0. Am I doing something wrong? $$e^z = \sum_{n=0}^\infty \frac{1}{n!}z^n$$ Setting $z=0$: $$e^0 = \sum_{n=0}^\infty \frac{1}{n!}0^n = \sum_{n=0}^\infty 0 = 0$$ What have I done wrong?
Since $e$ is a real number I know that $e^0 = 1$, but when I enter $z=0$ into the power series definition of $e^z$ I get an output of $0$. Am I doing something wrong? $$e^z = \sum_{n=0}^\infty \frac{1}{n!}z^n$$ Setting $z=0$: $$e^0 = \sum_{n=0}^\infty \frac{1}{n!}0^n = \sum_{n=0}^\infty 0 = 0$$ What have I done wrong?
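The sticking point is the $n=0$ term: under the power-series convention $0^0=1$ (which Python's integer exponentiation also follows), that term equals $1/0!=1$ while every other term vanishes, as this tiny check illustrates.

```python
from math import factorial

print(0**0)                                            # 1, by the usual convention
print(sum(0**n / factorial(n) for n in range(10)))     # 1.0: only the n = 0 term survives
```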
In Lee's Introduction to Smooth Manifolds, Chapter 8 (page 176) it's written that given a manifold $M$ and arbitrary subset $A \subseteq M$, $X$ is said to be a smooth vector field along $A$ if for each point $p \in A$, there is a neighborhood $V$ of $p$ in $M$ and a smooth vector field $\tilde{X}$ on $V$ that agrees with $X$ on $V \cap A$. Just to be certain, this is saying that $V$ is an *open* neighborhood and $\tilde{X}$ being smooth on $V$ means each of the component functions is smooth right? Thank you!
I have a small question about PCA, specifically about calculating the covariance matrix. I know that to calculate the covariance matrix $C$, I have to subtract the mean from the data points and form the data matrix $X$, and thus I have $$ C = \frac{1}{n}XX^T $$ And here comes the question: what if there are data points that are the same, e.g. there are 2 identical data points? Should I subtract 1 from the total number of data points, $n$? Also, in this case, does the data matrix $X$ need to change, i.e. should I exclude the duplicated data point? Thank you
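For reference, a minimal NumPy sketch of the computation as described (data points as columns, mean subtracted, divide by $n$); the duplicated column is simply kept, and the result matches NumPy's own biased covariance estimate. The data here is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((5, 10))                      # 10 points in R^5, stored as columns
data[:, 9] = data[:, 0]                         # make two data points identical
X = data - data.mean(axis=1, keepdims=True)     # subtract the mean from every point
C = X @ X.T / data.shape[1]                     # divide by n (use n-1 for the unbiased estimator)
print(np.allclose(C, np.cov(data, bias=True)))  # True
```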
I'm very interested in the correct solution of this problem.\ So I also tried the convolution approach $$f(x,t)=\mathcal{L}_s^{-1}\left[\exp \left(-x \sqrt{\frac{s}{k}}\right)\right](t)=\frac{k x e^{-\frac{x^2}{4 k t}}}{2 \sqrt{\pi } \sqrt{k^3 t^3}}$$ $$g(t)=\mathcal{L}_s^{-1}\left[\frac{1}{s+b}\right](t)=e^{-b t}=\sum_{j=0}^{\infty} \frac{(-b t)^j}{j!}$$ Now we evaluate the convolution integral $$\int_{0}^{t} f(x,\tau)\cdot g(t-\tau)\ d\tau=\int_0^{t} \sum_{j=0}^{\infty} \frac{k x e^{-\frac{x^2}{4 k \tau }} (-b (t-\tau ))^j}{2 \sqrt{\pi } j! \sqrt{k^3 \tau ^3}}\ d\tau$$ We exchange integration and infinite sum and get (Mathematica helps) $$u(x,t)=\sum_{j=0}^{\infty} \int_0^{t} \frac{k x e^{-\frac{x^2}{4 k \tau }} (-b (t-\tau ))^j}{2 \sqrt{\pi } j! \sqrt{k^3 \tau ^3}}\ d\tau=\sum_{j=0}^{\infty} (-b t)^j \left(\frac{\, _1F_1\left(-j;\frac{1}{2};-\frac{x^2}{4 k t}\right)}{\Gamma (j+1)}-\frac{x\cdot \, _1F_1\left(\frac{1}{2}-j;\frac{3}{2};-\frac{x^2}{4 k t}\right)}{\Gamma \left(j+\frac{1}{2}\right) \sqrt{k t}}\right)$$ **Does anyone know how to get rid of the infinite sum?** Visualization with $[b=1, k=1, 0\le j\le 18]$ [![enter image description here][1]][1] **HINT**: Max. approximation error with $19$ summands is $\approx 3\cdot 10^{-12}$. [1]: https://i.stack.imgur.com/0gc5U.png
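In case it helps with experimenting, here is a small mpmath transcription of the truncated series derived above (parameters $b=k=1$ and $0\le j\le 18$, matching the plot), which can be compared numerically against any candidate closed form.

```python
import mpmath as mp

def u(x, t, b=1.0, k=1.0, jmax=18):
    z = -x**2 / (4*k*t)
    total = mp.mpf(0)
    for j in range(jmax + 1):
        term = (mp.hyp1f1(-j, mp.mpf(1)/2, z) / mp.gamma(j + 1)
                - x * mp.hyp1f1(mp.mpf(1)/2 - j, mp.mpf(3)/2, z)
                  / (mp.gamma(j + mp.mpf(1)/2) * mp.sqrt(k*t)))
        total += (-b*t)**j * term
    return total

print(u(1.0, 1.0))
```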
I have a problem: let $S_n$ be a process such that $ S_n := \xi_1 + \dots + \xi_n$ and let $\tau: \Omega \rightarrow \mathbb{N} \cup \{\infty\}$ be a stopping time, where \begin{equation*} \xi = \begin{cases} 1 & p \\ -1 & 1 - p \end{cases} \end{equation*} I need to prove that $X_n = S_{n + \tau} - S_\tau$ has the same distribution as $S_n$ and is independent of the sigma-algebra $\mathcal{F_\tau}$, and satisfies $Y = X$. I know these facts are true for Brownian motion by the strong Markov property, so I am trying to extend this theorem to the discrete case. Can anyone tell me which direction I should go? I will appreciate any help.
I have a problem: let $S_n$ be a process such that $ S_n := \xi_1 + \dots + \xi_n$ and let $\tau: \Omega \rightarrow \mathbb{N} \cup \{\infty\}$ be a stopping time, where \begin{equation*} \xi = \begin{cases} 1 & p \\ -1 & 1 - p \end{cases} \end{equation*} I need to prove that $X_n = S_{n + \tau} - S_\tau$ has the same distribution as $S_n$ and is independent of the sigma-algebra $\mathcal{F_\tau}$, and satisfies $X = S$ in distribution. I know these facts are true for Brownian motion by the strong Markov property, so I am trying to extend this theorem to the discrete case. Can anyone tell me which direction I should go? I will appreciate any help.
I'm trying to solve exercise 5.11 from "Probability Essentials", which asks to show that if $X$ is Poisson($\lambda$) then $E|X-\lambda| = \frac{2\lambda^\lambda e^{-\lambda}}{(\lambda -1)!}$. I've shown that $E|X-\lambda| = 2\lambda^\lambda e^{-\lambda} \sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$, but I'm having trouble calculating the sum of this series: $\sum_{k=1}^\infty \frac{k\lambda^k}{(k+\lambda)!}$. Could somebody help? EDIT: This is what I have done: $E|X-\lambda|$ = $\sum_{j=0}^\infty \left(\frac{|j-\lambda|\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\lambda \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\infty \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ - $\sum_{j=\lambda+1}^\infty \left(\frac{(\lambda-j)\lambda^je^{-\lambda}}{j!}\right) $ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum_{k=1}^\infty \left(\frac{k\lambda^{k+\lambda}e^{-\lambda}}{(k+\lambda)!}\right) $ = $2 e^{-\lambda} \lambda^{\lambda}\sum_{k=1}^\infty \left(\frac{k\lambda^{k}}{(k+\lambda)!}\right) $.
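A quick numerical check of the target identity for a few integer values of $\lambda$ (computing $E|X-\lambda|$ directly from the Poisson pmf with a recursively updated term to avoid overflow), which may also be handy for testing any closed form proposed for the series:

```python
from math import exp, factorial

def mean_abs_dev(lam, terms=400):
    total, p = 0.0, exp(-lam)          # p = P(X = 0)
    for j in range(terms):
        total += abs(j - lam) * p
        p *= lam / (j + 1)             # update to P(X = j + 1)
    return total

for lam in (1, 2, 5, 10):
    closed_form = 2 * lam**lam * exp(-lam) / factorial(lam - 1)
    print(lam, mean_abs_dev(lam), closed_form)
```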
Let $u_1$ and $g$ be increasing strictly concave functions from $\mathbb{R}$ to $\mathbb{R}$. Let $u_2:=g\circ u_1$. If we regard $u_1$ and $u_2$ as utility functions of two players, this is saying that player $2$ has Arrow-Pratt coefficient of absolute risk aversion greater than that of player $1$. Suppose now $X$ is a risky asset, i.e. a (non-constant) random variable. Let $t_1^*=argmax_{t\in [0,1]} \mathbb{E}(u_1(1-t+tX))$ and $t_2^*=argmax_{t\in [0,1]} \mathbb{E}(u_2(1-t+tX))$. How do I prove that $t_1^*>t_2^*$? Intuitively I expect this to be true since it means that the more risk averse player allocates less money to the risky asset.
How would one prove that the row space and null space are orthogonal complements of each other?
**Introduction** I'm given this quadrature formula: $$\int_a^b f(x)dx = h \sum_{i=0}^{n-1}\left[ f(x_i) + \alpha h f'(x_i) + \beta h^2 f''(x_i) \right] = Q_n(a,b;f)$$ where $$h = \frac{b-a}{n}, \ x_i = a + ih, \ i=0,1,\dots,n$$ My goal is to find $\alpha$ and $\beta$ so that the quadrature formula has the highest degree of exactness. **My question** I noticed that $Q_n(a,b;f)$ can be interpreted as a composite quadrature formula. Since a composite and its simple quadrature formulas have the same degree of exactness, my idea is to work with the simple formula and impose the exactness conditions on it. However, how can I know what simple formula this composite formula was generated from? Can I say that $n=1$ and the simple formula is: $$\int_a^b f(x)dx = h\left[ f(a) + \alpha h f'(a) + \beta h^2 f''(a) \right]$$ ?
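Under the interpretation proposed at the end (treat $Q_n$ as the composite of the one-panel rule with $n=1$ on $[a,b]=[0,h]$ and impose exactness on monomials), a short SymPy sketch finds the coefficients and shows where exactness first fails; this only checks that approach, and is not a claim about the intended solution.

```python
import sympy as sp

h, al, be, x = sp.symbols('h alpha beta x', positive=True)
# one-panel rule on [0, h]:  h * ( f(0) + alpha*h*f'(0) + beta*h^2*f''(0) )
rule = lambda f: h*(f.subs(x, 0) + al*h*sp.diff(f, x).subs(x, 0) + be*h**2*sp.diff(f, x, 2).subs(x, 0))
# exactness on x^1 and x^2 (the case x^0 holds automatically)
sol = sp.solve([sp.Eq(sp.integrate(x**k, (x, 0, h)), rule(x**k)) for k in (1, 2)], [al, be])
print(sol)                                                                # {alpha: 1/2, beta: 1/6}
print(sp.simplify(sp.integrate(x**3, (x, 0, h)) - rule(x**3).subs(sol)))  # h**4/4 != 0, so degree 2
```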
I'm looking for a pairing function $f(x,y)$ which gives unique values for every combination of integers $x,y>0$ and, as a special property, $0$ for $x=0$, $\forall y$.<br> Or to be more general we can also replace $0$ with constants. <br> - We do know the max values $x$ and $y$ can achieve - We are allowed to shift the $x, y$ values, so we also know their min values, which don't need to be $0$ <br> So the more general definition with constants $c_x, c_f$ (instead of $0$): $$ x_{min} \le x \le x_{max}$$ $$ y_{min} \le y \le y_{max}$$ $$\forall y: f(c_x,y) = c_f $$ $$\forall y, \forall x\not= c_x: f(x,y)\not = f(y,x) \not = c_f$$ - It will be used in a computer program. The goal is finding a function with a max result as small as possible (bits needed for representation) while still being easy and fast to compute by a machine (not using too much storage). Therefore integer calculations are appreciated. Floating point can lead to inaccuracies.<br> - The inverse can be anything. We only need to check if two variables $x,y$ lead to the target number.<br> - No case selections are allowed (e.g. for $x=0$). Something like the Kronecker delta function, which can only be $0$ for input $0$ and $1$ for anything else, is also not allowed. Target value sizes: the number of different values $|\{x\}| = 2^{16} = 65536$ and $|\{y\}|$ at least $2^{26} = 67108864$. <br> The size $|\{y\}|$ can be bigger, but $\max f(x,y) < 2^{92}$ Can we find any such function? ---- Here are some **Examples:** **I**) Pairing functions which **do not work** (not $0$ or a constant for any $x = c_x$):<br> Cantor's pairing function: $$f(x,y)=\frac{(x+y)\cdot (x+y+1)}{2} + a$$ Szudzik's pairing function: $$ \begin{eqnarray*} f(x,y) = \begin{cases} y^2 + x, &\text{if }x < y, \\ y^2 + x + y, &\text{if }x \ge y \end{cases} \end{eqnarray*} $$ This has some case selection we do not want. We can shift $y$ to be always larger than $x$ and with this reduce it to: $$f(x,y) = (y+x_{max} +1 )^2 + x$$ Can we modify them to be a constant for a certain $x = c_x$? ---------- **II**) Pairing function which **does work** but needs extra memory/time: <br> If we use $0 \le x \le 2^{16}-1$, it cannot surpass the value $2^{16} = 65536$. The next prime above it is 65537. This is the 6543rd prime number. <br> Let $p(i)$ return the $i$'th prime number.<br> Let $0 \le y \le 2^{26}-1$ <br> With this we can use the pairing function: $$f(x,y) = x \cdot p(y + 6543)$$ This returns a unique value for every combination $x,y$, except for $x = c_x = 0$ it will always return the constant $c_f = 0$.<br> However, this works well in theory, but calculating $p$ takes quite a long time. Storing all values would take a lot of memory ($\approx 268$ MB) - not a nice option. <br> The max value of $f(x,y)$ would be $2^{46.31}$, needing $47$ bits, which is not too far from the optimum of $42$ bits. (Cantor and Szudzik are much bigger) To reduce memory we can split $y$ into two half-bit-size parts. For $2^{26}$ different values this would be $2^{13}$. The first part gets represented by the first $2^{13}$ primes after $2^{16}$, the second part by the $2^{13}$ primes after that. $$f(x,y) = x \cdot p(y_{\text{bits } 1..13} + 6543) \cdot p(y_{\text{bits } 14..26} + 6543 + 2^{13}) $$ This would reduce memory but scales the max value of $f(x,y)$ to $2^{51.3}$, so we need $52$ bits. ------ **Question:** Can we find any such pairing function which does not need to replace the parameters $x,y$ with their primes (or similar) while the resulting value $f(x,y)$ won't get much bigger than $2^{52}$ (at most $2^{92}-1$)?
---- **III**) Pairing function which **might work** but is **too big**:<br> While testing around I came up with <br> $$f(x,y) = ((x+1)^2+y^2) \cdot (x^3+(y+1)^2) \cdot x$$ I have **no proof** that this is a valid pairing function for all combinations $x,y$, but in tests it did work for all combinations $x<2^{8}, y<2^{20}$.<br> However, the results can get too big: it needs up to $121$ bits. <br> Can we find a pairing function with a smaller max value?
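For completeness, here is a tiny sketch of the prime-based construction from section **II**, using SymPy's nth-prime function (whose slowness for large indices is exactly the drawback noted there); uniqueness for $x>0$ follows because $x < 65537 \le p(y+6543)$, so both $x$ and the prime factor can be recovered from the product.

```python
from sympy import prime   # prime(6543) == 65537, the first prime above 2**16

def pair(x, y):
    # f(0, y) == 0 for every y; distinct (x, y) with x > 0 give distinct products
    return x * prime(y + 6543)

print(pair(0, 99), pair(1, 0), pair(2, 0))   # 0, 65537, 131074
```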
proof: Suppose that $3\mid x^3+2x+1$ and $x$ is a rational number, $\frac{p}{q}$, $\gcd(p, q) = 1, q \ne 0$. Substituting $\frac{p}{q}$ into $x^3+2x+1$: $\frac{p^3+2pq^2+1}{q^3} =3d$ for some integer $d$, so $p^3+2pq^2+1 =3dq^3$; this means that $3\mid(p^3+2pq^2+1)$. I got stuck from this point onwards and could not find a contradiction; any hints on how I should proceed with the proof?
$\newcommand{bm}[1]{\mathbf{#1}}$Given the semi-orthogonal fat matrix ${\bm B} \in\mathbb R^{c \times d}$ (i.e., $c\leq d$, $\bm {BB}^\top=\bm I$), the matrix $\bm X \in {\Bbb R}^{m \times n}$, $c$ one-hot vectors $\mathbf y_1, \mathbf y_2, \dots, \mathbf y_c \in \mathbb R^c$, let the cost function $J : {\Bbb R}^{m \times d} \to {\Bbb R}$ be defined by $$J ({\bm W}) := -\frac1n\sum_{i=1}^n\frac1{1 + \frac1{c-1} \sum\limits_{1\leq j\leq c \wedge \bm y_j\neq \bm y_i} \exp \left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i \right)}$$ Let $\bm X = \begin{bmatrix} \bm x_1 & \bm x_2 & \dots & \bm x_n \end{bmatrix}$; $n>c$. I have worked out the gradient $\nabla_{\bm W}J$ as: $$\nabla_{\bm W}J=\frac1{n(c-1)}\bm X\left[\bm M-\mathrm {diag}(\bm M\bm e)\bm Y^\top\right]\bm B$$ where $\bm e = 1^{c\times 1}$. $\bm Y\in (0, 1)^{c\times n}$ is a fixed "one-hot" column vector matrix where $\bm y_i$ corresponds to $\bm x_i$, and $\bm M\in\mathbb R_+^{n\times c}$ is a matrix of scaled exponential elements such that $$M_{ij} = J_i^2\exp\left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i\right)$$ where $J_i$ is the $i$th summation term in $J$. I am trying to understand the convergence properties of $J$. Prima facie looking at $J$, it appears that $J$ is minimized when $\Vert\bm W\Vert\to\infty$ and $(\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i<0 ,\forall ij$. However, when I look at the gradient, we have at convergence: $$\bm X\left[\bm M-\mathrm {diag}(\bm M\bm e)\bm Y^\top\right]\bm B=\bm 0$$ Although I understand that $$(\Vert\bm W\Vert\to\infty;(\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i<0,\forall ij)\implies\nabla_{\bm w}J\to\bm 0\implies\bm M\to\bm Y^\top\implies J\to-1$$ but from looking at the equation, it would appear that there might be local minima in the function since, in general, $\bm X\bm A\bm B=\bm 0$ does not require that $\bm A=\bm 0$. If so, given that $\bm X$ and $\bm B$ are fixed, can we somehow find the conditions on $\bm M$ that result in convergence? --- **Update:** Noting that $\bm {BB}^\top=\bf I$, simplifies the convergence $\nabla_{\bm W}J=\bm 0$ to: $$\bm X\bm M=\bm X\mathrm {diag}(\bm M\bm e)\bm Y^\top$$ $$\implies\bm M^\top\bm X^\top=\bm Y\mathrm {diag}(\bm M\bm e)\bm X^\top$$ This tells me that $\bm M^\top$ is equivalent to a transformation that projects $\bm X^\top$ onto $\bm Y$ scaled by $\mathrm {diag}(\bm M\bm e)$. If we let $\Vert\bm W\Vert\to\infty$ during minimization, then $\bm M\to\bm Y^\top\implies\mathrm {diag}(\bm M\bm e)\to \bm I^n$. I guess then my question is equivalent to the following: "Does there exist a diagonal matrix with distinct elements, i.e., $\mathrm{diag}(\bm M\bm e)\neq k\bm I^n$ for some finite $\Vert\bm W\Vert$ such that the above expression holds?" --- **Calculation of $\nabla_{\bm W}J$** There seems to be some confusion in comments regarding the expression I am getting for the gradients. Here is what I am doing: $$\nabla_{\bm W}J=\bm X(\nabla_{\bm W^\top\bm X}J)^\top$$ where $\nabla_{\bm W^\top\bm X}J=\begin{bmatrix}\nabla_{\bm W^\top\bm x_1}J & \nabla_{\bm W^\top\bm x_2}J & \dots & \nabla_{\bm W^\top\bm x_n}J \end{bmatrix}$. 
Now, $\nabla_{\bm W^\top\bm x_i}J$ can be calculated as: $$\nabla_{\bm W^\top\bm x_i}J=\frac{J_i^2}{n(c-1)}\bm B^\top\sum\limits_{\bm y_i\neq\bm y_j}\exp\left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i\right)(\bm y_j-\bm y_i) \\ =\frac{J_i^2}{n(c-1)}\bm B^\top\sum\limits_{\forall j}\exp\left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i\right)(\bm y_j-\bm y_i) $$ Let $m_j=J_i^2\exp\left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i\right)$, then $$\nabla_{\bm W^\top\bm x_i}J=\frac1{n(c-1)}\bm B^\top\sum\limits_{\forall j}m_j(\bm y_j-\bm y_i) \\ =\frac1{n(c-1)}\bm B^\top\left[\sum\limits_{\forall j}m_j\bm y_j-\bm y_i\sum\limits_{\forall j}m_j\right] $$ Since $\bm y_j$ is one-hot, The term in brackets can be rewritten in vector form. $$\nabla_{\bm W^\top\bm x_i}J=\frac1{n(c-1)}\bm B^\top\left[\bm m_i-\bm m_i^\top\bm e\bm y_i\right]$$ where $\bm m_i=\begin{bmatrix} m_1 & m_2 & \dots & m_c\end{bmatrix}^\top$. From here, one can work upwards, to find $\nabla_{\bm W}J$ as provided previously. Notably, $\bf m_i$ also turns out to be the $i$th row of $\bm M$ mentioned previously. Hopefully, I did not miss something.
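As a sanity check on the gradient expression itself (not on the convergence question), here is a minimal finite-difference comparison on a random small instance. It is a sketch assuming NumPy and standard-basis one-hot vectors $\bm y_j = \bm e_j$, which is how the formula above was derived; dimensions and data are arbitrary.

```python
# Finite-difference check of  grad_W J = (1/(n(c-1))) X [M - diag(M e) Y^T] B
# on a random small instance.  Assumes y_j = e_j (standard one-hot basis).
import numpy as np

rng = np.random.default_rng(0)
m, d, c, n = 5, 7, 3, 10

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
B = Q[:c, :]                                  # semi-orthogonal: B @ B.T = I_c
X = rng.standard_normal((m, n))
labels = rng.integers(0, c, size=n)           # label index l(i) of each x_i
Y = np.eye(c)[:, labels]                      # c x n one-hot columns
W = rng.standard_normal((m, d))

def cost_and_M(W):
    Z = B @ W.T @ X                           # column i = B W^T x_i  (c x n)
    E = np.exp(Z - Z[labels, np.arange(n)])   # E[j, i] = exp((y_j - y_i)^T B W^T x_i)
    S = 1.0 / (1.0 + (E.sum(axis=0) - 1.0) / (c - 1))   # J_i (the j = label term contributes exp(0) = 1)
    M = (S ** 2 * E).T                        # n x c, M[i, j] = J_i^2 exp(...)
    return -S.mean(), M

J0, M = cost_and_M(W)
G = X @ (M - np.diag(M @ np.ones(c)) @ Y.T) @ B / (n * (c - 1))

eps, G_fd = 1e-6, np.zeros_like(W)
for i in range(m):
    for j in range(d):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        G_fd[i, j] = (cost_and_M(Wp)[0] - cost_and_M(Wm)[0]) / (2 * eps)

print("relative difference:", np.linalg.norm(G - G_fd) / np.linalg.norm(G_fd))
```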
## Setting Suppose $Q$ is a compact metric space. Let $\mathcal{P}(Q)$ be the set of Borel probability measures on $Q$. This set is endowed with the topology of weak-* convergence: a sequence $\left\{ m_{N} \right\}$ in $\mathcal{P}(Q)$ converges to $m \in \mathcal{P}(Q)$ if, $\forall \varphi \in C(Q)$, $$ \lim _N \int_Q \varphi(x) d m_N(x)=\int_Q \varphi(x) d m(x) , $$ which can be metrized by the distance $$ \mathbf{d}_1(\mu, \nu)=\sup \left\{\int _{Q} fd\left(\mu -\nu\right) : \operatorname{Lip}\left(f;Q\right)\leq 1\right\}, $$ where $\operatorname{Lip}\left(f;Q\right)$ denotes the minimal Lipschitz constant of $f$. An equivalent distance is given by $$ d_1(\mu, \nu)=\inf _{M\in\Pi(\mu, \nu)} \int_{Q \times Q} d(x, y) \,d M(x, y), $$ where $\Pi (\mu,\nu)$ is the set of all couplings of $\mu$ and $\nu$, i.e. $$ M\left(A\times Q\right)=\mu\left(A\right),\,M\left(Q\times A\right)=\nu\left(A\right) \quad\forall A \in \mathscr{B}\left(Q\right). $$ ## Question I'm reading Pierre Cardaliaguet's MFG notes, in the part about the limit of a sequence of symmetric functions. The author says that the function $$ U\left(m\right)=\sup_{y \in \operatorname{Spt}\left(m\right)}\left| y \right|:\mathcal{P}\left(Q\right)\to \mathbb{R} $$ is not continuous. Here $\operatorname{Spt}(m)$ denotes the support of $m$, which is defined by $$ \operatorname{Spt}(m):=\left\{x \in Q \mid \forall N_x\in \mathcal{O}_{x}:\left(x \in N_x \Rightarrow m\left(N_x\right)>0\right)\right\}. $$ But I have no idea how to prove it. Can someone give me some clues? Thanks so much!
I've seen the following two definitions in my slides: 1. $S \subseteq \aleph$ is semi-decidable iff there exists a partially computable function $g$ where $S = \{x \in \aleph\ |\ g(x)\downarrow \}$ 2. $S \subseteq \aleph$ is r.e. iff $S = \emptyset$ or there exists a totally computable function $f$ where $S = \{y\ |\ \exists x\, f(x) = y\}$ I know that $\aleph_0$ or $\aleph_1$ represent set cardinalities, but what does it mean if there is no subscript and it is used in the above context?
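As a toy illustration of the two quoted definitions (independent of the notation question itself), here is a sketch in which the even naturals play the role of $S$; it is only meant to make the "halting set" and "range of a total function" readings concrete.

```python
# Definition 1: S = {x : g(x) halts}, with g a partially computable function.
def g(x):
    while x % 2 != 0:     # loops forever on odd input, halts on even input
        pass
    return 0

# Definition 2: S = range of a totally computable f (here S = the even naturals).
def f(x):
    return 2 * x
```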
What does it mean for a set to be a member of $\aleph$?
I am reading Steele's *Cauchy-Schwarz Master Class*, and am wondering what "inversion preserving" refers to in the following exercise from Chapter 1: > **Exercise 1.6 (A Sum of Inversion Preserving Summands)** Suppose that $p_k>0$ for $1\leq k\leq n$ and $\sum_{k=1}^np_k=1$. Show that $$\sum_{k=1}^n\left(p_k+\frac1{p_k}\right)^2\geq n^3+2n+\frac1n.$$ So, what "inversion" leaves the summands $(p_k + 1/p_k)^2$ unchanged?
Why are the summands here "inversion preserving"?
When I was in high school, I discovered in an exercise that $$\boxed{\pi=12\int_{0}^{2-\sqrt3}\frac{dt}{1+t^2}}$$since $\frac{\pi}{12}=\arctan( 2-\sqrt3)$, which we were made to demonstrate with half-angle formulas. This formula has always fascinated me and I have memorized it. I tried the following $$\forall |t|<1, \frac{1}{1+t^2}=1-t^2+t^4-t^6+...$$So $$\pi\approx12\left( \alpha-\frac{\alpha^3}{3}+\frac{\alpha^5}{5} \right)\approx 3.1418$$with $\alpha=2-\sqrt3$ I first started with my calculator and then used Wolfram Alpha. Since I became a high school teacher, students frequently ask me how to get the decimals of pi. I know Machin's formula. I need a quick process to be able to answer the students' question without too many calculation steps. Is there such a process? __________________________ **Edit :** Since it's an alternating series, we get $$\left|\pi-12\left(\alpha+...+(-1)^n\frac{\alpha^{2n+1}}{2n+1}\right)\right|\leq \frac {12}{2n+3}\frac{3^{2n+3}}{10^{2n+3}}$$since $\alpha<0.3$ For example, with $n=5$, $$3.1415925<\pi<3.1415928$$
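For what it's worth, here is a tiny numeric sketch of exactly the computation described above: the partial sums $12\sum_{k\le n}(-1)^k\alpha^{2k+1}/(2k+1)$ with $\alpha = 2-\sqrt3$, together with the alternating-series next-term bound (plain Python, nothing else assumed).

```python
# Partial sums of 12*arctan(2 - sqrt(3)) via the alternating series, with the
# next-term bound 12 * a^(2n+3) / (2n+3) used as the error estimate.
from math import sqrt, pi

a = 2 - sqrt(3)
s = 0.0
for n in range(8):
    s += 12 * (-1) ** n * a ** (2 * n + 1) / (2 * n + 1)
    bound = 12 * a ** (2 * n + 3) / (2 * n + 3)
    print(f"n={n}:  sum={s:.10f}  next-term bound={bound:.1e}  |pi - sum|={abs(pi - s):.1e}")
```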
I'm studying the article "Brenier, Y. (1987) Décomposition polaire et réarrangement monotone des champs de vecteurs" (polar factorization and monotone rearrangement of vector-valued functions). In this work a function is built which is expected to have some properties: let $f$ be a function from some compact space $S$ to itself, $f:S \to S$.<br> $f$ is supposed to be bounded, Riemann-integrable and a Borel function.<br> Let $b$ be a measurable and bounded function.<br> Let $g(y)=\sup_x\left(y \cdot f(x)+b(x)-\tfrac12\lVert f(x) \rVert^2\right)$ for $y \in S \cup f(S)$ (NB: I'm unsure about $\cup$; it may be $\cap$).<br> The assertion is made that $g$ is convex, Lipschitz continuous and differentiable, but no proofs are given. I understand that it is most certainly an immediate result for an expert in the field, which I'm not. How can these properties be proven? I trivially started from the definition of convexity: $g(ty_0+(1-t)y_1) = \sup_x\left(ty_0 \cdot f(x)+(1-t)y_1 \cdot f(x)+b(x)-\tfrac12\lVert f(x) \rVert^2\right)$ but I don't know how to handle that further when taking the supremum. I also gave it a try with $S$ a closed subset of $\mathbb{R}$, $b=0$ and $f$ differentiable, but I'm stuck there as well because I then get two candidate extrema for $y \cdot f(x)-\tfrac12\lVert f(x) \rVert^2$: $f'(x)=0$ and $y=f(x)$. But I can't tell which one is a maximum (if it were the second, then $g(y)=\tfrac12 y^2$, which has the desired properties).
Is $\sup_x(y \cdot f(x)+b(x)-\tfrac12\lVert f(x) \rVert^2)$ convex and regular with respect to $y$?
If $u^2 \ge -\dfrac{8}{3}$, then $u \ge -\sqrt{\dfrac{8}{3}}$. Is this the correct convention? I was confused because initially I thought the negative sign would go inside the square root, but then that would lead to imaginary numbers. Thanks.
**Problem**: assume $ a^2 + b^2 + c^2 = 1 $. Calculate the improper integral $\int_{B(\mathbf{0}, 1)} \frac{\mathrm{d} x \mathrm{~d} y \mathrm{~d} z}{1-a x-b y-c z}$ where $B(\mathbf{0}, 1)=\left\{x^2+y^2+z^2 \leq 1\right\}$ is the unit ball in $ \mathbb{R^3}$. **Attempt**: Assuming $ a,b,c \neq 0 $ (I had difficulty with the simpler cases as well, where for example $ a=1 , b=c=0$), I performed the change of variables $ u = 1-ax-by-cz , v = by , w = cz \iff x = \frac{1-u-v-w}{a} , y =v/b , z = w/c $; the absolute value of the Jacobian will be $ \frac{1}{abc} $ and I get that the integrand will be $ \frac{1}{abc} \cdot \frac{1}{u} $. The problem is, I'm having difficulty determining the new set under integration according to the diffeomorphism (induced by the change of variables), hence I can't proceed to calculate the integral. I know the new set of integration will have $ (\frac{1-u-v-w}{a})^2 + (v/b)^2 + (w/c)^2 \leq 1 $ but I don't know how to continue and, hopefully, to use Fubini's theorem. Any ideas? Thanks for the help!
$\newcommand{bm}[1]{\mathbf{#1}}$Given the semi-orthogonal fat matrix ${\bm B} \in\mathbb R^{c \times d}$ (i.e., $c\leq d$, $\bm {BB}^\top=\bm I$), the matrix $\bm X \in {\Bbb R}^{m \times n}$, $c$ one-hot vectors $\mathbf y_1, \mathbf y_2, \dots, \mathbf y_c \in \mathbb R^c$, let the cost function $J : {\Bbb R}^{m \times d} \to {\Bbb R}$ be defined by $$J ({\bm W}) := -\frac1n\sum_{i=1}^n\frac1{1 + \frac1{c-1} \sum\limits_{1\leq j\leq c \wedge \bm y_j\neq \bm y_i} \exp \left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i \right)}$$ Let $\bm X = \begin{bmatrix} \bm x_1 & \bm x_2 & \dots & \bm x_n \end{bmatrix}$; $n>c$. I have worked out the gradient $\nabla_{\bm W}J$ as: $$\nabla_{\bm W}J=\frac1{n(c-1)}\bm X\left[\bm M-\mathrm {diag}(\bm M\bm e)\bm Y^\top\right]\bm B$$ where $\bm e = 1^{c\times 1}$. $\bm Y\in (0, 1)^{c\times n}$ is a fixed "one-hot" column vector matrix where $\bm y_i$ corresponds to $\bm x_i$, and $\bm M\in\mathbb R_+^{n\times c}$ is a matrix of scaled exponential elements such that $$M_{ij} = J_i^2\exp\left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i\right)$$ where $J_i$ is the $i$th summation term in $J$. I am trying to understand the convergence properties of $J$. Prima facie looking at $J$, it appears that $J$ is minimized when $\Vert\bm W\Vert\to\infty$ and $(\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i<0 ,\forall ij$. However, when I look at the gradient, we have at convergence: $$\bm X\left[\bm M-\mathrm {diag}(\bm M\bm e)\bm Y^\top\right]\bm B=\bm 0$$ Although I understand that $$(\Vert\bm W\Vert\to\infty;(\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i<0,\forall ij)\implies\nabla_{\bm w}J\to\bm 0\implies\bm M\to\bm Y^\top\implies J\to-1$$ but from looking at the equation, it would appear that there might be local minima in the function since, in general, $\bm X\bm A\bm B=\bm 0$ does not require that $\bm A=\bm 0$. If so, given that $\bm X$ and $\bm B$ are fixed, can we somehow find the conditions on $\bm M$ that result in convergence? --- **Update:** Noting that $\bm {BB}^\top=\bf I$, simplifies the convergence $\nabla_{\bm W}J=\bm 0$ to: $$\bm X\bm M=\bm X\mathrm {diag}(\bm M\bm e)\bm Y^\top$$ $$\implies\bm M^\top\bm X^\top=\bm Y\mathrm {diag}(\bm M\bm e)\bm X^\top$$ This tells me that $\bm M^\top$ is equivalent to a transformation that projects $\bm X^\top$ onto $\bm Y$ scaled by $\mathrm {diag}(\bm M\bm e)$. If we let $\Vert\bm W\Vert\to\infty$ during minimization, then $\bm M\to\bm Y^\top\implies\mathrm {diag}(\bm M\bm e)\to \bm I^n$. I guess then my question is equivalent to the following: "Does there exist a diagonal matrix with distinct elements, i.e., $\mathrm{diag}(\bm M\bm e)\neq k\bm I^n$ for some finite $\Vert\bm W\Vert$ such that the above expression holds?" --- **Calculation of $\nabla_{\bm W}J$** There seems to be some confusion in comments regarding the expression I am getting for the gradients. Here is what I am doing: $$\nabla_{\bm W}J=\bm X(\nabla_{\bm W^\top\bm X}J)^\top$$ where $\nabla_{\bm W^\top\bm X}J=\begin{bmatrix}\nabla_{\bm W^\top\bm x_1}J & \nabla_{\bm W^\top\bm x_2}J & \dots & \nabla_{\bm W^\top\bm x_n}J \end{bmatrix}$. 
Now, $\nabla_{\bm W^\top\bm x_i}J$ can be calculated as: $$\nabla_{\bm W^\top\bm x_i}J=\frac{J_i^2}{n(c-1)}\bm B^\top\sum\limits_{\bm y_i\neq\bm y_j}\exp\left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i\right)(\bm y_j-\bm y_i) \\ =\frac{J_i^2}{n(c-1)}\bm B^\top\sum\limits_{\forall j}\exp\left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i\right)(\bm y_j-\bm y_i) $$ Let $m_j=J_i^2\exp\left((\bm y_j-\bm y_i)^\top \bm B\bm W^\top\bm x_i\right)$, then $$\nabla_{\bm W^\top\bm x_i}J=\frac1{n(c-1)}\bm B^\top\sum\limits_{\forall j}m_j(\bm y_j-\bm y_i) \\ =\frac1{n(c-1)}\bm B^\top\left[\sum\limits_{\forall j}m_j\bm y_j-\bm y_i\sum\limits_{\forall j}m_j\right] $$ Since $\bm y_j$ is one-hot, The term in brackets can be rewritten in vector form. $$\nabla_{\bm W^\top\bm x_i}J=\frac1{n(c-1)}\bm B^\top\left[\bm m_i-\left(\bm m_i^\top\bm e\right)\bm y_i\right]$$ where $\bm m_i=\begin{bmatrix} m_1 & m_2 & \dots & m_c\end{bmatrix}^\top$. From here, one can work upwards, to find $\nabla_{\bm W}J$ as provided previously. Notably, $\bf m_i$ also turns out to be the $i$th row of $\bm M$ mentioned previously. Hopefully, I did not miss something.
When I was in high school, I discovered in an exercise that $$\boxed{\pi=12\int_{0}^{2-\sqrt3}\frac{dt}{1+t^2}}$$since $\frac{\pi}{12}=\arctan( 2-\sqrt3)$, which we were made to demonstrate with half-angle formulas. This formula has always fascinated me and I have memorized it. I tried the following $$\forall |t|<1, \frac{1}{1+t^2}=1-t^2+t^4-t^6+...$$So $$\pi\approx12\left( \alpha-\frac{\alpha^3}{3}+\frac{\alpha^5}{5} \right)\approx 3.1418$$with $\alpha=2-\sqrt3$ I first started with my calculator and then used Wolfram Alpha. Since I became a high school teacher, students frequently ask me how to get the decimals of pi. I know Machin's formula. I need a quick process to be able to answer the students' question without too many calculation steps. Is there such a process (the advantage of this one is that it gives approximations of the form $a+b\sqrt3,\ (a,b)\in \mathbb Q^2$, and that it allows for a very simple error bound)? __________________________ **Edit :** Since it's an alternating series, we get $$\left|\pi-12\left(\alpha+...+(-1)^n\frac{\alpha^{2n+1}}{2n+1}\right)\right|\leq \frac {12}{2n+3}\frac{3^{2n+3}}{10^{2n+3}}$$since $\alpha<0.3$ For example, with $n=5$, $$3.1415925<\pi<3.1415928$$
With LaTeX: How to show that if $y=f(x+a)$ is an even function, where $f(x)=(x-6)^2\sin(\omega x)$, then $a$ must be $6$? The original question was to solve for which $\omega$ are possible. And there was a step like this: since $y=f(x+a)$ is even, so are $y=(x+a-6)^2$ and $y=\sin(\omega x+\omega a)$. I'm confused by this step.
How to show that if y=f(x+a) is an even function where f(x)=(x-6)^2sin(bx) then a must be 6?
I'm working through Problem 4.16 in Armstrong's *Basic Topology*, which has the following questions: 1) Prove that $O(n)$ is homeomorphic to $SO(n) \times Z_2$. 2) Are these two isomorphic as topological groups? **Some preliminaries:** Let $\mathbb{M_n}$ denote the set of $n\times n$ matrices with real entries. We identify each matrix $A=(a_{ij}) \in \mathbb{M_n}$ with the corresponding point $(a_{11},a_{12},...,a_{1n},a_{21},a_{22},...,a_{2n},...,a_{n1},a_{n2},...,a_{nn}) \in \mathbb{E}^{n^2}$, thus giving $\mathbb{M_n}$ the subspace topology. The *orthogonal group* $O(n)$ denotes the group of orthogonal $n \times n$ matrices $A \in \mathbb{M_n}$, i.e. with $A^\top A = I$ (which forces $\det(A)=\pm 1$). The *special orthogonal group* $SO(n)$ denotes the subgroup of $O(n)$ with $\det(A)=1$. $Z_2=\{-1, 1\}$ denotes the multiplicative group of order 2. **My attempt** For odd $n$, the answer to both questions is **yes**, as we verify below. Consider the mapping $f:O(n)\to SO(n)\times Z_2, A \mapsto(\det(A)\cdot A, \det(A))$. We have the following facts about $f$: - **It is injective.** If $f(A)=f(B)$ then $(\det(A)\cdot A, \det(A))=(\det(B)\cdot B, \det(B))$. Therefore, $\det(A)=\det(B) \neq 0$, so $A=B$. - **It is surjective.** For $(D,d) \in SO(n) \times Z_2$, we can take $dD \in O(n)$, giving $f(dD)=(\det(dD)\cdot dD, \det(dD))=(d^n\cdot \det(D) \cdot dD,d^n \cdot \det(D))=(d^{n+1}D, d^n)=(D,d)$, since $n$ is odd. - **It is a homomorphism.** $f(AB)=(\det(AB)\cdot AB, \det(AB))=(\det(A)\det(B)\cdot AB, \det(A)\det(B))$ $=((\det(A)\cdot A)(\det(B)\cdot B), \det(A)\det(B))=f(A)f(B)$. - **It is continuous.** Let $\mathcal{O} \subseteq SO(n) \times Z_2$ be open. Then $\mathcal{O}=U \times V$ for $U$ open in $SO(n)$ and $V$ open in $Z_2$. Since $SO(n)$ is open in $O(n)$, $U$ is therefore open in $O(n)$. $-U=\{-A\mid A\in U\}$ is also open in $O(n)$. But $f^{-1}(\mathcal{O})=f^{-1}(U\times V)=U\cup -U$. Since $O(n)$ is compact and $SO(n)\times Z_2$ is Hausdorff, we therefore have that $f$ is a homeomorphism. Thus, they are isomorphic as topological groups. <hr> For even $n$, this mapping is not well-defined: if $A \in O(n)$ with $\det(A)=-1$ then $\det(\det(A)\cdot A)=(\det(A))^{n+1}=-1$, so $\det(A)\cdot A \notin SO(n)$. My question then is **are they homeomorphic as topological spaces if $n$ is even?** From the related questions, it seems like for even $n$, the two groups cannot be isomorphic due to <s>one being abelian while the other is not and</s> them having different centers and derived subgroups (I don't fully understand these arguments but I will brush up on them). So they cannot be isomorphic as topological groups. But can they be homeomorphic as topological spaces? <hr> Related questions: https://math.stackexchange.com/questions/3399888/are-son-times-z-2-and-on-isomorphic-as-topological-groups https://math.stackexchange.com/questions/1468198/two-topological-groups-mathrmon-orthogonal-group-and-mathrmson-ti?noredirect=1&lq=1 https://math.stackexchange.com/questions/4537037/understanding-on-homeomorphic-to-son-times-bbb-z-2-proof
How to solve this system of trigonometric equations?
> Use the method of characteristics to find the general solution of the following PDE $$ e^xu_x + u_y = xu. $$ Show explicitly that your result is indeed a solution of the PDE. So I believe I have found the general solution as follows: $\frac{dy}{dx} = e^{-x},$ $\frac{du}{dx} = xue^{-x}$, $\frac{du}{dy} = xu$. Then, the first eq'n gives $C_1 = y + e^{-x}$. And, the second equation gives $C_2 = \ln{u} + (x+1)e^{-x}$. So, the general solution is given by $\ln{u} + (x+1)e^{-x} = \omega(y + e^{-x})$, where $\omega$ is an arbitrary function. We can rearrange to give an explicit general sol'n: $$u(x,y) = \exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\}.$$ Now, I'm struggling to, as the question asks, show explicitly that this is a solution. I'm assuming this is wanting me to substitute my solution into the PDE. But, because I have this arbitrary function it doesn't seem to be working.. (?) I can't get my expression for $u(x,y)$ to give me the PDE, i.e., taking partial derivatives and subbing into the PDE I don't get back an expression for $xu$. This is what I have: \begin{align} u_x &= \frac{\partial}{\partial x} \exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \\ &=\exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \cdot \left( \frac{\partial}{\partial x} [\omega(y + e^{-x})] - \frac{\partial}{\partial x}[(x+1)e^{-x}] \right) \\ &=\exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \cdot \left( \frac{\partial}{\partial x} [\ln{u} + (x+1)e^{-x}] - [-xe^{-x}] \right) \\ &=\exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \cdot \left( \frac{\partial}{\partial x} [\ln{u}] + \frac{\partial}{\partial x}[(x+1)e^{-x}] + [xe^{-x}] \right) \\ &=\exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \cdot \frac{u_x}{u} \\ &= u \cdot \frac{u_x}{u} \\ &= u_x \\ \end{align} \begin{align} u_y &= \frac{\partial}{\partial y} \exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \\ &=\exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \cdot \left( \frac{\partial}{\partial y} [\omega(y + e^{-x})] - \frac{\partial}{\partial y}[(x+1)e^{-x}] \right) \\ &=\exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \cdot \frac{\partial}{\partial y} [\ln{u} + (x+1)e^{-x}] \\ &=\exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \cdot \frac{\partial}{\partial y} [\ln{u}] \\ &=\exp\{\omega(y + e^{-x}) - (x+1)e^{-x}\} \cdot \frac{u_y}{u} \\ &= u \cdot \frac{u_y}{u} \\ &= u_y \\ \end{align} Is this enough to show that it's a solution? What else can I do with this? Have I missed something?
I'm using an ellipsoid $f = x^a + |y|^b + |z|^c - 1 = 0$ to fit some data (a failure envelope), where $x \in [0, 1]$, $y \in [-1, 1]$, and $z \in [-1, 1]$ are all normalized variables and $a$, $b$, $c$ are to-be-optimized coefficients. The fit is not good, so I decided to add some coupling terms into $f$, including symmetric terms such as $d_1x^{d_2}|y|^{d_3}$ and also asymmetric terms such as $\operatorname{sign}(z)\,e_1x^{e_2}|z|^{e_3}$. Ten coupling terms were added into $f$ (let's call the result $f^*$). During optimization, I didn't put in any constraints. So, occasionally, the optimal coefficients may result in overshooting, such as $x > 1$, which should be prevented. My question is: what constraints should I put on the coefficients to keep $f^*$ valid? Or what general principles should I follow to design the constraints? Thanks.
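Not an answer to *which* constraints are appropriate, but in case the mechanics are useful: here is a minimal sketch, assuming SciPy, of how box bounds and an extra "validity" inequality can be attached to the coefficient fit once they have been chosen. The loss, the validity function, the bounds and the number of coefficients below are all placeholders, not recommendations.

```python
# Sketch of the optimizer plumbing only: `loss` should be the squared residual of
# f* over the measured envelope points, and `validity` should return a quantity
# that must stay >= 0 for the fitted surface to be acceptable.  Both are
# placeholders here, as are the bounds and the coefficient count.
import numpy as np
from scipy.optimize import minimize

n_coeff = 13                                   # e.g. 3 exponents + 10 coupling coefficients

def loss(coeff):
    return float(np.sum((coeff - 2.0) ** 2))   # placeholder objective

def validity(coeff):
    return coeff.sum()                         # placeholder quantity required to be >= 0

res = minimize(
    loss,
    x0=np.full(n_coeff, 1.0),
    method="SLSQP",
    bounds=[(0.5, 10.0)] * n_coeff,            # example box bounds on every coefficient
    constraints=[{"type": "ineq", "fun": validity}],
)
print(res.success, res.x)
```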
Consider the triangle formed by joining the circumcenter, the incenter and the centroid of a triangle (is there already a name for this triangle in the literature?). Simulations show that the line joining the incenter and the circumcenter always subtends an obtuse angle at the centroid (except when the triangle is degenerate). [![enter image description here][1]][1] > **Conjecture**: In any triangle, the line joining the incenter and the circumcenter always subtends an obtuse angle at the centroid. Can this be proved or disproved? [1]: https://i.stack.imgur.com/dFevf.png
The question I am asking about is as follows: Prove that the least positive value of $x$ satisfying $\tan(x) = x+1$ lies in the interval $(\frac{\pi}{4}, \frac{\pi}{2})$. The solution to this question in my textbook is done using a graph; however, this method becomes infeasible for more complex equations. Is there any other way to solve this, maybe analytically, using derivatives or something? Anything I tried seemed to fail; could someone please help me with this one? Is there an analytical way to find the smallest intersection of two functions? Any help would be appreciated!
Well, I gave an answer [here][1] and I want to know if it's true for a special example. *Answer:* Not an answer, just some speculation about the Gamma function and Bernstein polynomials, which is of independent interest here but could help. Using (fallaciously) the inverse function of $f(x)$: $$f\left(x\right)=e^{x^{4}-\ln\left(\frac{1}{y}\right)}!,\quad x!=\Gamma(x+1),\quad 0<y<1$$ Then, using the Bernstein form, it seems we have $\forall x>0$: $$\lim_{n\to \infty}\sum_{k=0}^{n}\frac{f\left(\frac{k}{n}\right)n!}{k!\left(n-k\right)!}\left(\ln\left(x^{\frac{1}{6}}+1\right)\right)^{k}\left(1-\ln\left(1+x^{\frac{1}{6}}\right)\right)^{\left(n-k\right)}=y!$$ *End of the answer.* Now the special example: let $f(x)$ be as above and $y=1/2$; do we have, for $0<x\leq 1$: >>$$\lim_{n\to \infty}\sum_{k=0}^{n}\frac{f\left(\frac{k}{n}\right)n!}{k!\left(n-k\right)!}\left(\ln\left(x^{\frac{1}{6}}+1\right)\right)^{k}\left(1-\ln\left(1+x^{\frac{1}{6}}\right)\right)^{\left(n-k\right)}=\sqrt{\frac{\pi}{4}}$$ Is this special example true? Remark: I have no idea how to tackle it rigorously. [1]: https://math.stackexchange.com/questions/4884922/one-of-the-numbers-zeta5-zeta7-zeta9-zeta11-is-irrational/4890066#4890066
A new formula for $\pi$ using cheated Bernstein's polynomials with composition and Gamma function?
**Problem**: Assume $ a^2 + b^2 + c^2 = 1 $. Calculate the improper integral $\int_{B(\mathbf{0}, 1)} \frac{\mathrm{d} x \mathrm{~d} y \mathrm{~d} z}{1-a x-b y-c z}$ where $B(\mathbf{0}, 1)=\left\{x^2+y^2+z^2 \leq 1\right\}$ is the unit ball in $ \mathbb{R^3}$. **Attempt**: Assuming $ a,b,c \neq 0 $ (I had difficulty with the simpler cases as well, where for example $ a=1 , b=c=0$), I performed the change of variables $ u = 1-ax-by-cz , v = by , w = cz \iff x = \frac{1-u-v-w}{a} , y =v/b , z = w/c $; the absolute value of the Jacobian will be $ \frac{1}{abc} $ and I get that the integrand will be $ \frac{1}{abc} \cdot \frac{1}{u} $. The problem is, I'm having difficulty determining the new set under integration according to the diffeomorphism (induced by the change of variables), hence I can't proceed to calculate the integral. I know the new set of integration will have $ (\frac{1-u-v-w}{a})^2 + (v/b)^2 + (w/c)^2 \leq 1 $ but I don't know how to continue and, hopefully, to use Fubini's theorem. Any ideas? Thanks for the help!
I believe this is a Cauchy distribution, but I am having a hard time identifying the parameters. Could someone help me to identify the value of $\gamma$ and $x_0$ if this is indeed a Cauchy distribution? $$f(x) = \frac{2}{\pi\sqrt{3}}\left(1 + \frac{x^2}{3}\right)^{-2}$$
I was doing some problems in complex analysis and I came across this. **Let $f: \mathbb{C} \to \mathbb{C}$ be entire. Then for any bounded set $B$, $f(B)$ is bounded.** Now I know that if an entire function is constant then the above statement is necessarily true. What about a non-constant entire function? All I know is that a non-constant entire function is unbounded (by Liouville's theorem). I am summarising my doubts here: $1)$ What is the image of a bounded set under a non-constant entire function? $2)$ What can we say about its image on an unbounded set? Thanks for your time.
>The life of a semiconductor laser at a constant power is normally distributed with a mean of $7000$ hours and a standard deviation of $600$ hours. >(d) - A product contains three lasers, and the product fails if any of the lasers fails. Assume that the lasers fail independently. What should the mean life equal for $99\%$ of the products to exceed $10,000$ hours before failure? Taken from $\textit{'Applied Statistics and Probability for Engineers'}$;$\;$ $6th$ edition;$\;$$2014$;$\;$p.$152/153$ Let $Y$ be the random variable for the life of the product and $X$ the random variable for life of a laser. We're looking for the mean ${\mu'}$ such that $P(Y>10000)\geq 0.99$; but we also have that $P(Y>10000)=P(X>10000)^3$; and: $$\begin{align*}& P(X>10000) = \sqrt[3]{0.99} \approx 0.9967 \\[7pt] \implies& P(\frac{X-\mu}{600}>\frac{10000-\mu}{600}) =0.9967 \\[7pt] \implies& \Phi(\frac{\mu-10000}{600})=0.9967 \\[7pt] \implies& \frac{\mu-10000}{600}=2.72 \implies \mu=11632 \end{align*}$$ But now, how can I calculate the mean life $\mu'$ of the product or did I misunderstand the question?
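A quick numerical restatement of the laser-level calculation above (SciPy assumed); it reproduces that computation up to the rounding of the $z$-value, and does not address the separate question of what the product-level mean $\mu'$ should be.

```python
# Numerical restatement of the laser-level calculation: choose mu so that
# P(all three independent N(mu, 600^2) lifetimes exceed 10000) = 0.99.
from scipy.stats import norm

p_single = 0.99 ** (1 / 3)        # required P(X > 10000) for one laser
z = norm.ppf(p_single)            # standard-normal quantile (about 2.71)
mu = 10000 + 600 * z
print(p_single, z, mu)            # the text above rounds z to 2.72 and gets 11632
```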
$$\int\frac{1}{y}\frac{1}{\ln y}\,dy$$ $$\int\frac{1}{y}\frac{1}{\ln^2y}\,dy$$ I know that I should use some substitution, but I don't understand how. Is there any other way than substitution? I've tried to understand the integral-calculator.com output, but no luck. Can anyone explain to me how to solve these integrals?
How should these types of integrals be solved? $\int\frac{1}{y}\frac{1}{\ln y}\,dy$
I believe this is a Cauchy distribution, but I am having a hard time identifying the parameters. Could someone help me to identify the value of $\gamma$ and $x_0$ if this is indeed a Cauchy distribution? $$f(x) = \frac{2}{\pi\sqrt{3}}\frac{1}{(1 + x^{2}/3)^2}$$
The set of real functions has the structure of a vector space, and every vector space has a basis. What is a basis of this space, then? My first thought was maybe using Taylor series, but there are still a lot of functions that cannot be expressed through Taylor series, and a linear combination cannot have infinitely many terms (I think). Is there a known basis of this vector space? Is there a way to find it?
I'm having some problems solving this integral: $$ I = \mathcal{P} \int_{-\infty}^{+\infty} \frac{1-e^{2ix}}{x^2} \ dx$$ where $\mathcal{P}$ is the Cauchy principal value. The exercise suggests to use the fact that: $$I_* = \frac{1}{2} \operatorname{Re} \left[I\right]=\mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx$$ since $\sin^2 x = \frac{1}{2} \left(1- \cos(2x)\right)$. ***My solution.*** I went on and tried to solve $I_*$ as follows: I used the fact that the analytic extension of the integrand has no poles, which makes the integral equals to $0$ by using residue theory: $$\lim_{R\to + \infty}\oint_{\Gamma_R} \frac{\sin^2z}{z^2} \ dz = \mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx = 0$$ where the second equality is true since $$\oint_{\Gamma_R}\frac{\sin^2z}{z^2} \ dz =\left(\int_{-R}^{+R} + \int_{C_R}\right) \frac{\sin^2z}{z^2} \ dz$$ where $C_R = \{z = r e^{i \theta}\in \mathbb{C} : 0\le r \le R\}$ and $$| \int_{C_R} \frac{\sin^2 z}{z^2} \ dz | \le \int_{C_R} \frac{1}{|z^2|} \ dz \le \int_{C_R} \frac{1}{|R^2|} \ dz \to 0, \ R\to +\infty$$ $$\lim_{R\to+\infty} \int_{-R}^{+R} \frac{\sin^2z}{z^2} \ dz = \lim_{R\to+\infty} \frac{\sin^2 x}{x^2} \ dx \equiv \mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx$$ Since the residues of this function are all $0$, this means that also $$\mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx =0 $$ Ok, now, since this implies that $\operatorname{Re}I = 0$, I thought that $I$ must have just an imaginary part; for this reason, I then tried to calculate the following: $$\operatorname{Im} [I] = \mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin(2x)}{x^2} \ dx \equiv \mathcal{P}\int_{-\infty}^{+\infty} h(x) \ dx$$ I extended $h(x)\to h(z)$, which has a first order pole in $z=0$: $$\operatorname{Res}\left[h(z) , z=0\right]=\lim_{z\to 0 } \left(z \frac{\sin 2z}{z^2}\right) = 2$$ Then I integrated $h(z)$ as follows: $$\oint_{\Gamma_{r,R} } \frac{\sin 2z}{z^2} \ dz = \left(\int_{-R} ^{-r} + \int_{C_r^-} +\int_{r}^{R} + \int_{C_R} \right) \frac{\sin 2z}{z^2} \ dz $$ where: $$\lim_{r\to 0} \int_{C_r^-} \frac{\sin 2z}{z^2} \ dz \to -i\pi\operatorname{Res}\left[h(z), z=0\right] = -2i\pi $$ $$\left\lvert \int_{C_R} \frac{\sin 2z}{z^2} \ dz \right\rvert \le \int_{C_R} \frac{1}{R^2}\to 0, \ R\to+\infty $$ $$\lim_{r \to 0, \ R\to +\infty} \left(\int_{-R}^{-r} + \int_{r} ^{R} \right) h(z) \ dz \equiv \mathcal{P}\int_{-\infty} ^{+\infty} h(x) \ dx $$ Putting all of this together, we get: $$\mathcal{P}\int_{-\infty} ^{+\infty} \frac{\sin 2x}{x^2 } \ dx = -2i\pi$$ What bothers me the most and that makes me think I did something wrong is that the result of this real valued integral is an imaginary number. Moreover, this would mean $\operatorname{Im} (I) = -2i\pi\Rightarrow I\stackrel{?}{=} 2\pi$ or $I \stackrel{?}{=} -2i\pi$. Did I make some errors? Can you help me getting to the correct solution? I really need help with this because I feel like I'm missing something very important. Thanks a lot in advance for the help!!
proof: Suppose that $3| x^3+2x+1$ and $x$ is a rational number, $\frac{p}{q}$, $gcd(p, q) = 1, q \ne 0$ sub $\frac{p}{q}$ into $x^3+2x+1$: $\frac{p^3+2pq^2+q^3}{q^3} =3d$ for some integer d $p^3+2pq^2+q^3 =3dq^3$ this means that, $3|(p^3+2pq^2+q^3)$ I got stuck from this point onwards and could not find a contradiction, any hints on how should I proceed with the proof?
I'm having some problems solving this integral: $$ I = \mathcal{P} \int_{-\infty}^{+\infty} \frac{1-e^{2ix}}{x^2} \ dx$$ where $\mathcal{P}$ is the Cauchy principal value. The exercise suggests to use the fact that: $$I_* = \frac{1}{2} \operatorname{Re} \left[I\right]=\mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx$$ since $\sin^2 x = \frac{1}{2} \left(1- \cos(2x)\right)$. ***My solution.*** I went on and tried to solve $I_*$ as follows: I used the fact that the analytic extension of the integrand has no poles, which makes the integral equals to $0$ by using residue theory: $$\lim_{R\to + \infty}\oint_{\Gamma_R} \frac{\sin^2z}{z^2} \ dz = \mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx = 0$$ where the second equality is true since $$\oint_{\Gamma_R}\frac{\sin^2z}{z^2} \ dz =\left(\int_{-R}^{+R} + \int_{C_R}\right) \frac{\sin^2z}{z^2} \ dz$$ where $C_R = \{z = r e^{i \theta}\in \mathbb{C} : 0\le r \le R\}$ and $$| \int_{C_R} \frac{\sin^2 z}{z^2} \ dz | \le \int_{C_R} \frac{1}{|z^2|} \ dz \le \int_{C_R} \frac{1}{|R^2|} \ dz \to 0, \ R\to +\infty$$ $$\lim_{R\to+\infty} \int_{-R}^{+R} \frac{\sin^2z}{z^2} \ dz = \lim_{R\to+\infty} \frac{\sin^2 x}{x^2} \ dx \equiv \mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx$$ Since the residues of this function are all $0$, this means that also $$\mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx =0 $$ Ok, now, since this implies that $\operatorname{Re}I = 0$, I thought that $I$ must have just an imaginary part; for this reason, I then tried to calculate the following: $$\operatorname{Im} [I] = \mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin(2x)}{x^2} \ dx \equiv \mathcal{P}\int_{-\infty}^{+\infty} h(x) \ dx$$ I extended $h(x)\to h(z)$, which has a first order pole in $z=0$: $$\operatorname{Res}\left[h(z) , z=0\right]=\lim_{z\to 0 } \left(z \frac{\sin 2z}{z^2}\right) = 2$$ Then I integrated $h(z)$ as follows: $$\oint_{\Gamma_{r,R} } \frac{\sin 2z}{z^2} \ dz = \left(\int_{-R} ^{-r} + \int_{C_r^-} +\int_{r}^{R} + \int_{C_R} \right) \frac{\sin 2z}{z^2} \ dz $$ where: $$\lim_{r\to 0} \int_{C_r^-} \frac{\sin 2z}{z^2} \ dz \to -i\pi\operatorname{Res}\left[h(z), z=0\right] = -2i\pi $$ $$\left\lvert \int_{C_R} \frac{\sin 2z}{z^2} \ dz \right\rvert \le \int_{C_R} \frac{1}{R^2}\to 0, \ R\to+\infty $$ $$\lim_{r \to 0, \ R\to +\infty} \left(\int_{-R}^{-r} + \int_{r} ^{R} \right) h(z) \ dz \equiv \mathcal{P}\int_{-\infty} ^{+\infty} h(x) \ dx $$ Putting all of this together, we get: $$\mathcal{P}\int_{-\infty} ^{+\infty} \frac{\sin 2x}{x^2 } \ dx = -2i\pi$$ What bothers me the most and that makes me think I did something wrong is that the result of this real valued integral is an imaginary number. Moreover, this would mean $\operatorname{Im} (I) = -2i\pi\Rightarrow I\stackrel{?}{=} 2\pi$ or $I \stackrel{?}{=} -2i\pi$. Did I make some errors? Can you help me getting to the correct solution? I really need help with this because I feel like I'm missing something very important. Thanks a lot in advance for the help!!
The [Foster graph](https://en.wikipedia.org/wiki/Foster_graph) is a distance-transitive symmetric bipartite cubic graph on 90 vertices (in fact, the only such graph). I have seen this graph described as the incidence structure of the unique flag-transitive triple cover of the generalized quadrangle associated with $Sp_4(2)$, but this is both not constructive (as I don't know how to build said triple cover without appeal to the Foster graph) and rather heavy on abstractions (surely I shouldn't have to define a bunch of incidence geometry axioms and symplectic groups to talk about this object!). Are there any "nice" constructions that give rise to this graph or to its associated incidence structure? Something in the vein of "count the following structures on the icosahedron as the points, and these other structures as your lines, where incidence is defined by blah" - ideally it would make it obvious that the resulting object inherits some kinds of symmetries from the base objects it's constructed out of.
Is there an elementary construction of the Foster graph or its associated geometry?
I believe this is a Cauchy distribution, but I am having a hard time identifying the parameters. Could someone help me to identify the value of $\gamma$ and $x_0$ if this is indeed a Cauchy distribution? $$f(x) = \frac{2}{\pi\sqrt{3}}\frac{1}{(1 + x^{2}/3)^2}$$