title_body
stringlengths
61
4.12k
upvoted_answer
stringlengths
20
29.9k
downvoted_answer
stringlengths
19
18.2k
How many distributions of $18$ different objects into $3$ different boxes. How many distributions of $18$ different objects into $3$ different boxes are there with twice as many objects in one box as in the other two combined$?$ Soln: It appears to me as an arrangement problem with repetition of some sort. As such I did this: $$\binom{3}{1} \frac{18!}{(2n)! n! n! }$$ i did $\binom{3}{1}$ to account for the fact any of the $3$ boxes could get the double amount. $\frac{18!}{(2n)! n! n! }$ was done partially because I have seen a formula relating to this sort of problem, but personally I am trying to understand the REASON behind why this formula works here, that is if my treatment was correct
If you have $18$ items, the big box must have $12$ and the other two must have $6$ between them. There are $18 \choose 12$ ways to select the ones in the big box, $3$ ways to select the big box, now distribute the other $6$.
Out of $18$ we select any of the $12$ objects to be placed into any of the $3$ boxes.This can be done in $3*\binom{18}{12}$ ways.There remain $6$ distinct objects to be distributed into $2$ distinct boxes.This can be done in $64$ ways. $P(6;0,6)+P(6;1,5)+P(6;2,4)+P(6;3,3)+P(6;4,2)+P(6;5,1)+P(6;6,0)=64...$
Prove $\|x\|+\|y\|+\|z\|+\|x+y+z\| \ge \|x+y\|+\|y+z\|+\|z+x\|$ Assuming $x,y,z$ are complex numbers, or vectors, prove $$\|x\|+\|y\|+\|z\|+\|x+y+z\| \ge \|x+y\|+\|y+z\|+\|z+x\|$$ I've tried replacing $x+y$, $y+z$, $z+x$ with $a, b, c$ respectively to see if it is any easier to prove it and this is the new form: $$2(\|a\|+\|b\|+\|c\|)\le\|a+b+c\|+\|a+b-c\|+\|b-a+c\|+\|b-a-c\|$$ My hunch is that showing that $d(r,s)=\|r\|+\|s\|-\|r+s\|$ is a distance might help using $d(r,s)\le d(r,t)+d(t,s)$. I found these links that seem to have the solution: https://mathoverflow.net/questions/167685/absolute-value-inequality-for-complex-numbers Inequality for absolute values
After squaring of the both sides we need to probe that: $$\sum_{cyc}\left(\|x\|\|x+y+z\|+\|y\|\|z\|\right)\geq\sum_{cyc}\|x+y\|\|x+z\|,$$ which is true by the triangle inequality for complex numbers: $$\|x\|\|x+y+z\|+\|y\|\|z\|=\|x^2+xy+xz\|+\|yz\|\geq$$ $$\geq\|x^2+xy+xz+yz\|=\|x+y\|\|x+z\|.$$ It seems that for vectors it's not true.
After squaring of the both sides we need to probe that: $$\sum_{cyc}\left(\|x\|\|x+y+z\|+\|y\|\|z\|\right)\geq\sum_{cyc}\|x+y\|\|x+z\|,$$ which is true by the triangle inequality for complex numbers: $$\|x\|\|x+y+z\|+\|y\|\|z\|=\|x^2+xy+xz\|+\|yz\|\geq$$ $$\geq\|x^2+xy+xz+yz\|=\|x+y\|\|x+z\|.$$ It seems that for vectors it's not true.
Understanding the behaviour of order statistics of samples of uniform distribution Let $P$ denote any continuous distribution with density $p$ on $[0,1]$ and $Q$ the uniform distribution on $[0,1]$ whose density is $1$. Let $X_1,\ldots,X_n$ be $n$ i.i.d samples drawn from the distribution $Q$ and $X_{(1)},\ldots,X_{(n)}$ be their order statistics and let $Y$ be a random variable distributed acccording to $P$. Define weights as: $$ w_i=\frac{p(X_{(i)})}{q(X_{(i)})}= p(X_{(i)}), 1 \leq i \leq n $$ and $$ \tilde{w}_i = \mathbb{P}(Y \in [X_{(i)},X_{(i+1)}))= F_{P}(X_{(i+1)})-F_P(X_{(i)}). $$ I am trying to understand if there exists any notion of similarity or convergence between the random variables $w_i$ and $\tilde{w}_i$ as $n \to \infty$. Any help towards this would be appreciated. The reason why I encountered this is because of the observation that if I use a Gaussian Kernel Density estimator with weights $w_i$ and $\tilde{w}_i$ respectively, both the weights are able to fit the pdf $p(x)$ perfectly across a range of distributions $P$ and $Q$. I want to know if this is a manifestation of some underlying fact about the above defined weights. I am attaching the plots for completeness:
In short, the two weights sequences you define are asymptotically equivalent in probability. I put a sketch of proof, see below. But I'd recommend to have a closer look at Chapter 7 of [David H.A., Nagaraja H.N. Order Statistics (2003)]. I believe the topics discussed there are pretty close to the current one. Proposition. $\forall\rho>0\;\;\mathbb{P}(|w_i-\tilde{w}_i|>\rho)\to 0, \;n \to \infty$. Proof. a) A basic fact of order statistics is that $X_{(i)}$ converges in probability to $\frac{(i-1)/n\;+\; i/n}{2}$ as $n\to\infty$. b) If we denote $\delta_{i+1}=X_{(i+1)}-X_{(i)}$, then it's possible to get $\delta_{i+1}=\textit{o}(1/n)$ and $$ F_{P}(X_{(i+1)}) = F_P(X_{(i)}) + p(X_{(i)})\cdot\delta_{i+1} + \textit{o}(1/n^2) $$ using the Taylor approximation; the notation here is $p(\bullet)\equiv F^{\prime}_P(\bullet)$. c) Finally, $\mathbb{P}(|w_i-\tilde{w}_i|>\rho) = \mathbb{P}(|p(X_{(i)}) - \left(F_{P}(X_{(i+1)}) - F_P(X_{(i)})\right) |>\rho) = \mathbb{P}(|p(X_{(i)})\cdot\left(1-\delta_{i+1} \right) + \textit{o}(1/n^2)|>\rho). $ (We just used b) and then a).) As we see, the left-hand side of the inequality is of order $\textit{o}(1/n)$ (in probability, see a)). Thus, we can choose appropriate (big enough) $n^\ast$ given any positive $\rho$, which means that the probability of the event $\left\{\ldots >\rho\right\}$ tends to zero. This concludes the proof.
In short, the two weights sequences you define are asymptotically equivalent in probability. I put a sketch of proof, see below. But I'd recommend to have a closer look at Chapter 7 of [David H.A., Nagaraja H.N. Order Statistics (2003)]. I believe the topics discussed there are pretty close to the current one. Proposition. $\forall\rho>0\;\;\mathbb{P}(|w_i-\tilde{w}_i|>\rho)\to 0, \;n \to \infty$. Proof. a) A basic fact of order statistics is that $X_{(i)}$ converges in probability to $\frac{(i-1)/n\;+\; i/n}{2}$ as $n\to\infty$. b) If we denote $\delta_{i+1}=X_{(i+1)}-X_{(i)}$, then it's possible to get $\delta_{i+1}=\textit{o}(1/n)$ and $$ F_{P}(X_{(i+1)}) = F_P(X_{(i)}) + p(X_{(i)})\cdot\delta_{i+1} + \textit{o}(1/n^2) $$ using the Taylor approximation; the notation here is $p(\bullet)\equiv F^{\prime}_P(\bullet)$. c) Finally, $\mathbb{P}(|w_i-\tilde{w}_i|>\rho) = \mathbb{P}(|p(X_{(i)}) - \left(F_{P}(X_{(i+1)}) - F_P(X_{(i)})\right) |>\rho) = \mathbb{P}(|p(X_{(i)})\cdot\left(1-\delta_{i+1} \right) + \textit{o}(1/n^2)|>\rho). $ (We just used b) and then a).) As we see, the left-hand side of the inequality is of order $\textit{o}(1/n)$ (in probability, see a)). Thus, we can choose appropriate (big enough) $n^\ast$ given any positive $\rho$, which means that the probability of the event $\left\{\ldots >\rho\right\}$ tends to zero. This concludes the proof.
What's the difference between a probability mass function and a discrete probability distribution? The Wikipedia page on "probability mass function" says "The probability mass function is often the primary means of defining a discrete probability distribution" (emphasis added). I was under the impression that those two terms were completely synonymous. What's the difference between them?
The difference is subtle. I think it helps to realize that a (probability) distribution is a more general concept. A distribution is something intrinsic to a random variable and determines/describes the behaviour of it. In contrast, a probability mass function (PMF) is one way to concretely describe the distribution quantitatively. Things to note: In general, a distribution does not necessarily have a PMF or a probabilty density function (PDF). However, a discrete distribution always has a PMF, and a PMF always describes a discrete distribution, and therefore the two things are basically the same. You should think of a PMF as just one way to describe/define a dicrete distribution. We could also use the cummulative density function for example, or any other exotic description we can think of. Maybe you can very loosely think like this: The distribution is what carries out the random experiments, and it might look at the PMF to decide what to do, or it might look at something else.
The probability mass function is a function with whom you can calculate probability of a random variable. Discrete probabilty distribution is about a way a random variable can behave in terms of the values it can assume.
Let $f,g$ be continuous functions. If we can always can get x,y s.t. $f(x)<g(x)$ and $g(y)<f(y)$, prove that $f(a)=g(a)$. The whole question: Let $f,g:X\rightarrow \mathbb{R}$ be continuous at $a$. Supposing that for every neighborhood $V$ of $a$ exists points $x,y$ such that $f(x)&lt;g(x)$ and $g(y)&lt;f(y)$, prove that $f(a)=g(a)$. I'm stuck. I've tried to fix some $\epsilon$ and by continuity get some neighborhood of $a$ and so get the points $x,y$ given by hyphotesis. But i couldn't work with them succesfully. I also tried the contrapositive, but i couldn't go much further. Any hints on how to solve this problem?
Consider the continuous function $h(x) = f(x) - g(x)$ over $X$. We note that $\lim_{x \to a}h(x) = h(a)$, call this limit $L$. Every neighborhood of $a$ contains an $x,y$ such that $h(x) &lt; 0 &lt; h(y)$. However, by the definition of a limit: for any $\epsilon &gt; 0$, we can select a neighborhood $V$ such that the $x,y \in V$ must also satisfy $|h(x) - L|&lt; \epsilon$ and $|h(y) - L| &lt; \epsilon$. Thus, for every $\epsilon &gt; 0$, we have $-\epsilon &lt; L &lt; \epsilon$.
Let $ h(x) = f(x) - g(x)$. Consider intervals $(a -1/n ,a + 1/n)$ around a . Thus by intermediate value theorem on the intervals we get a sequence of $ b_n$ such that $h(b_n) = 0$. Now the intersection of these intervals is just one point a and $ b_n \to a$, thus $h(a) = 0 \Rightarrow f(a) = g(a)$
Base (topology) with closed intervals I am curious why it's a problem to define a base using closed sets? For example, my book uses the definition under "Constructing Topologies from Bases" as specified at http://en.wikibooks.org/wiki/Topology/Bases, as opposed to the "definition" listed on this page. I don't see why closed intervals are a problem for example, the point ${1} \in [0,1], [1,2]$ so in particular $ {1} \in [0,1]\cap[1,2]=[1,1]=\{1\}$ I realize that topologies consist of "open sets" but why can't closed sets be a base for (larger) open sets for a topology.... or more generaly, why can't topologies be constructed using closed sets.
$\newcommand{\ms}{\mathscr}$As is pointed out in the post linked from amWhy’s comment, one can construct a topology using closed sets. Recall that $\ms{T}\subseteq\wp(X)$ is a topology on $X$ iff $\varnothing,X\in\ms{T}$; $\bigcup\ms{U}\in\ms{T}$ whenever $\ms{U}\subseteq\ms{T}$; and $U\cap V\in\ms{T}$ whenever $U,V\in\ms T$. Suppose that $\ms T$ is a topology on $X$, and let $\ms C=\{X\setminus U:U\in\ms T\}$, the set of closed sets in $\langle X,\ms T\rangle$. Then it’s immediate from the De Morgan laws that $\ms C$ satisfies the following conditions: $\varnothing,X\in\ms C$; $\bigcap\ms F\in\ms C$ whenever $\ms F\subseteq\ms C$; and $H\cup K\in\ms C$ whenever $H,K\in\ms C$. It’s also clear that a family $\ms C\subseteq\wp(X)$ is the family of closed sets of some topology on $X$ iff $\ms C$ satisfies these conditions. Next, recall that a family $\ms B\subseteq\wp(X)$ is a base for a topology on $X$ iff it satisfies the following conditions: $\bigcup\ms B=X$, and if $B_0,B_1\in\ms B$ and $x\in B_0\cap B_1$, then there is a $B_2\in\ms B$ such that $x\in B_2\subseteq B_0\cap B_1$. In this case $\left\{\bigcup\ms U:\ms U\subseteq\ms B\right\}$ is a topology on $X$, and we say that $\ms B$ is a base for $\ms T$. By looking at the complements of members of a base for a topology on $X$, we can see how the notion of a base for the closed sets ought to be defined. A family $\ms X\subseteq\wp(X)$ is a base for the closed sets of a topology on $X$ iff $\bigcap\ms D=\varnothing$, and if $D_0,D_1\in\ms D$ and $x\notin D_0\cup D_1$, then there is a $D_2\in\ms D$ such that $x\notin D_2\supseteq D_0\cup D_1$. In this case $\left\{\bigcap\ms H:\ms H\subseteq\ms D\right\}$ is the family of closed sets of a topology on $X$, specifically, of the topology for which $\{X\setminus D:D\in\ms D\}$ is a base. Now we can ask whether the closed intervals in $\Bbb R$ are a base for the closed sets of some topology on $\Bbb R$. They certainly cover $\Bbb R$. However, it’s not necessarily true that if $I_0$ and $I_1$ are closed intervals not containing $x$, then there is a closed interval $I_2$ such that $x\notin I_2\supseteq I_0\cup I_1$. For instance, let $I_0=[0,1]$, $I_1=[3,4]$, and $x=2$: any closed interval that contains $[0,1]\cup[3,4]$ necessarily also contains $2$. On the other hand, the collection of all subsets of $\Bbb R$ of the form $(\leftarrow,a]\cup[b,\to)$ with $a&lt;b$ is a base for the closed sets of the usual topology on $\Bbb R$. One can of course also ask whether the closed intervals of $\Bbb R$ are a base for the open sets of some topology on $\Bbb R$. In fact they are, but that topology isn’t the usual one: it’s an easy exercise to show that it’s the discrete topology. (HINT: If $a&lt;b&lt;c$, then $[a,b]\cap[b,c]=\{b\}$.)
The simple answer is that if you wanted closed intervals to form a basis of some topology, then you would, for example, have to allow ${2}$ (which is the intersection of $[1,2]$ and $[2,3]$) to be a closed interval itself. But look at the question in your link. It says: Show that the collection $\mathcal{C}=\{[a,b]:a,b\in\mathbb{R},a&lt;b\}$ of all closed intervals in $\mathbb{R}$ is '''not''' a base for a topology on $\mathbb{R}$. It categorically says that [2,2] is not a valid basis element. So, I guess that should clear it out. However, as a fun fact, if you were to allow singletons, your topology is just the discrete topology. Cheers!
Is $4\underbrace{999 . . . 9}_{224 ({\rm times})}$ prime? Is $4\underbrace{999 . . . 9}_{224 ({\rm times})}$ prime? I wanted to find smallest prime its sum of digits is $2020$. I started with small primes; the smallest three digits prime its sum of digits is 22 is $499$; four digits is $4999$ with sum of digits 31, five digit is $49999$ with sum of 40.For the sum $2020$ we have: $2020=224\times 9+4$ and desired number can be of the form $4\underbrace{999 . . . 9}_{224 ({\rm times})}$ . So this number has at least 225 digits. If it is not prime we have to search for numbers with number of digits more than 225 which of course have digits less than 9 and first digit probably less than 4. I could not check it with my computer. I have these questions: 1- is $4\underbrace{999 . . . 9}_{224 ({\rm times})}$ primes? 2- are numbers of the form $499 . . . 99$ always primes? If so what is theoretical reason? If not what is conditions for it to be prime? Update: the closed form of these numbers is $N=5\times 10^n-1=5(10^n-1)+ 4$, $n ≥ 2$ if n is even we have: $10^{2k}-1=(10^k-1)(10^k+1)$ Since $[10^n-1, 5, 4]=1$ N can be a prime, but brute force gives a counter result. If n is odd N can be composite.
A computer search finds $4259\mid 5\times10^{224}-1$. I know of no elegant proof of this, just brute force.
is prime the form $N=5 \times 10^n-1$ is not prime if $n \neq1,7,13,19,25,...,n-1 \neq 6b$
Drawing a region in the complex plane Let $a,b \in \mathbb{C}$, $b \neq 0 $. and let $$ G_0 = \{ z \in \mathbb{C} : Im ( \frac{ z -a }{b} ) = 0 \} $$ Question is to draw this set. My attempt: Let $z = (x,y) , a = ( \alpha, \beta) , b = ( \alpha', \beta') $. After doing computation I find that the $y$ coordinate of the complex number $w = \frac{ z -a }{b} $ is $$ \frac{-(x - \alpha) \beta' + (y- \beta) \alpha'}{(\alpha')^2 + (\beta')^2} = Im(w)$$ In particular, $Im(w) = 0 $ implies $$ (x-\alpha)\beta' = (y-\beta) \alpha' $$ iff $$ y = \beta' x + ( \beta - \frac{ \alpha \beta'}{\alpha'} )$$ So, we have a line with slope $\beta'$ and $yintercept$ $\beta - \frac{ \alpha \beta'}{\alpha'} $. This is a correct drawing? thanks
Hint. A complex number has imaginary part equal to zero f and only if it is a real number.
Hint. A complex number has imaginary part equal to zero f and only if it is a real number.
Need fractional differential equations resource. I'm studying linear fractional differential equations. Can some one give me a simple resources about the fractional differential equations. Is there any explanation for : $$ \frac{\mathit{\lambda}}{\mathit{\Gamma}{\mathrm{(}}{q}{\mathrm{)}}}\mathop{\int}\limits_{t0}\limits^{t}{{\mathrm{(}}{t}\mathrm{{-}}{s}{\mathrm{)}}^{{q}\mathrm{{-}}{1}}}{x}_{0}\mathrm{(}s\mathrm{)}ds $$ How did become like this $$ \frac{{x}_{0}\hspace{0.33em}\mathit{\lambda}}{\mathit{\Gamma}{\mathrm{(}}{q}{\mathrm{)}}\mathit{\Gamma}{\mathrm{(}}{q}{\mathrm{)}}}\mathop{\int}\limits_{0}\limits^{1}{{\mathrm{(}}{t}\mathrm{{-}}{t}_{0}}{\mathrm{)}}^{{2}{q}\mathrm{{-}}{1}}{\mathrm{(}}{1}\mathrm{{-}}\mathit{\sigma}{\mathrm{)}}^{{q}\mathrm{{-}}{1}}{\mathit{\sigma}}^{{q}\mathrm{{-}}{1}}{d}\mathit{\sigma} $$ Where: $$ {x}_{0}{\mathrm{(}}{t}{\mathrm{)}}\mathrm{{=}}\frac{{x}_{0}{\mathrm{(}}{t}\mathrm{{-}}{t}_{0}{\mathrm{)}}^{{q}\mathrm{{-}}{1}}}{\mathit{\Gamma}{\mathrm{(}}{q}{\mathrm{)}}} $$
You can solve it there by clicking the dsolve button in mathHandbook.com reference on http://drhuang.com/science/mathematics/fractional_calculus/fractional_differential_equation.htm
You can solve it there by clicking the dsolve button in mathHandbook.com reference on http://drhuang.com/science/mathematics/fractional_calculus/fractional_differential_equation.htm
Determining convergence or divergence of series $ \sum_{n=1}^{\infty} \frac{(-1)^n}{ n+\sin (n)} $ I am wondering the convergence or divergence following series $$ \sum_{n=1}^{\infty} \frac{(-1)^n}{ n+\sin (n)} \\ $$ My 1st attempt is 'alternating series test' $$ $$ But, $$\frac{1}{n+\sin(n)}$$ isn't monotone decreasing. SO I failed... $$ $$ Please give me some advice. Thanks in advance.
\begin{align*} \frac{(-1)^n}{n+\sin n}&amp;=\frac{(-1)^n}{n}+\Bigl(\frac{(-1)^n}{n+\sin n}-\frac{(-1)^n}{n}\Bigr)\\ &amp;=\frac{(-1)^n}{n}-\frac{(-1)^n\sin n}{(n+\sin n)n}. \end{align*} $\displaystyle\sum_{n=1}^\infty\frac{(-1)^n}{n}$ converges conditionally. $\displaystyle\sum_{n=1}^\infty\frac{(-1)^n\sin n}{(n+\sin n)n}$ converges absolutely.
Easier: see that $-1 \leq \sin n \leq 1 $, hence the bounds on the summand are $\frac{(-1)^n}{n-1}$ and $\frac{(-1)^n}{n+1}$ which converge by alternating test (Leibniz test) and hence your original series converges.
Determining if a random variable is lognormal I'm struggling to understand a part of the solution to this question: If $X$ and $Y$ are lognormal random variables, is their product $XY$ lognormally distributed? The solution I have suggests looking at $ln(XY) = ln(X) + ln(Y)$. However, it says that $XY$ being lognormal depends on if $ln(X)$ and $ln(Y)$ are joint Normal. That is, $ln(X) + ln(Y)$ may not be normally distributed even if they are individually normally distributed implying $XY$ will not be lognormal. But I don't understand how $ln(X) + ln(Y)$ could be possibly non-Normal if each of them are Normal, as the sum of two normals is a normal.
The sum of two joint normal distributed random variables is again normal. It is important that the random variables are actually joint normal, as the assertion can fail otherwise. The classical example is the following: Let $X$ and $Z$ be independent random variables, where $X$ is standard Gaussian and $Z$ is Rademacher distributed, i.e. $P(Z = 1) = P(Z = -1) = \frac{1}{2}$. Then the random variables $X$ and $Y = ZX$ are both normal [and even uncorrelated!], but their sum isn't normal since $P(X + Y = 0) = P(X(Z + 1) = 0) = P(Z=-1)=\frac{1}{2}$.
As a counter-example to your statement that the "sum of two normals is a normal": Suppose $W \sim N(0,1)$, $S=1$ when $|W| \lt 1$, and $S=-1$ when $|W| \ge 1$. Then $V=SW $ is also normally distributed as $N(0,1)$ but $|W+V| \lt 2$ so in this case the sum $W+V$ is not normally distributed.
How is this property called for mod? We have a name for the property of integers to be $0$ or $1$ $\mathrm{mod}\ 2$ - parity. Is there any similar name for the remainder for any other base? Like a generalization of parity? Could I use parity in a broader sense, just to name the remainder $\mathrm{mod}\ n$?
Actually there is a standard name: residue. There are $5$ residues modulo $5$, namely $0,1,2,3,4$. Every prime greater than $3$ falls into only $2$ residue-classes modulo $6$.
Either AlessioDV's good answer, or you could say: "of the form $nq+r$". as a representation of $\equiv r \pmod n$ for example a number $N$ with $$N\equiv 1\pmod 3$$ has property $N=3q+1$.
How can I calculate $E(N_n)$ and $V(N_n)$ with $N_n = \min(X_1, X_2,\cdots, X_n)$? Consider ($X_1, X_2,\cdots, X_n$) a random sample of a population of $X$ as a Weibull distribution of parameters $(0, \delta, 2), \delta\in\mathbb R^+$, (in short , $X \sim W (0, \delta, 2)$ where $X$ has distribution function $F(x) = 1 - e^{-\left(\frac x\delta\right)^2}$. Consider $$\overline{X}_n = \frac1n\sum\limits_{i = 1}^n X_i\text{ and }N_n = \min(X_1, X_2, \cdots, X_n)$$ I already calculate $E(X)$ and $V(X)$ but now I'm trying to calculate $E(N_n)$ and $V(N_n)$. How can I calculate $E(N)$ and $V(N)$? I have no idea... The solution is $$E(N_n) = \frac{\sqrt\pi}2\frac\delta{\sqrt n}\text{ and }V(N_n) = \left(1 - \frac\pi4\right)\left(\frac\delta{\sqrt n}\right)^2 = \left(1 - \frac\pi4\right)\frac{\delta^2}n.$$ enter image description here
For any $t&gt;0$, we have $$ \{\min(X_1,\ldots,X_n)&gt;t\} = \bigcap_{i=1}^n\{X_i&gt;t\}, $$ and so $$ \mathbb P(N_n &gt; t) = \mathbb P\left(\bigcap_{i=1}^n\{X_i&gt;t\}\right). $$ Since the $X_i$ are independent, $$\mathbb P\left(\bigcap_{i=1}^n\{X_i&gt;t\}\right) = \prod_{i=1}^n \mathbb P(X_i&gt;t). $$ Since the $X_i$ all have $W(0,\delta,2)$ distribution, $$ \prod_{i=1}^n \mathbb P(X_i&gt;t) = \mathbb P(X_1&gt;t)^n = (1-F(t))^n = e^{-\left(\frac x\delta\right)^{2n}}. $$ It follows that $$ \mathbb E[N_n] = \int_0^\infty (1-F_{N_n}(t))\ \mathsf dt = \int_0^\infty e^{-\left(\frac x\delta\right)^{2n}}\ \mathsf dt = \delta\cdot \Gamma \left(1+\frac{1}{2 n}\right), $$ and $$ \mathbb E[N_n^2] = \int_0^\infty 2t(1-F_{N_n}(t))\ \mathsf dt = \int_0^\infty 2te^{-\left(\frac x\delta\right)^{2n}}\ \mathsf dt = \delta^2 \Gamma \left(1+\frac{1}{n}\right). $$ Therefore \begin{align} \operatorname{Var}(N_n) &amp;= \mathbb E[N_n^2] - \mathbb E[N_n]^2\\ &amp;= \delta^2 \Gamma \left(1+\frac{1}{n}\right) - \left(\delta\cdot \Gamma \left(1+\frac{1}{2 n}\right)\right)^2\\ &amp;= \delta ^2 \left(\Gamma \left(1+\frac{1}{n}\right)-\Gamma \left(1+\frac{1}{2 n}\right)^2\right) \end{align}
For any $t&gt;0$, we have $$ \{\min(X_1,\ldots,X_n)&gt;t\} = \bigcap_{i=1}^n\{X_i&gt;t\}, $$ and so $$ \mathbb P(N_n &gt; t) = \mathbb P\left(\bigcap_{i=1}^n\{X_i&gt;t\}\right). $$ Since the $X_i$ are independent, $$\mathbb P\left(\bigcap_{i=1}^n\{X_i&gt;t\}\right) = \prod_{i=1}^n \mathbb P(X_i&gt;t). $$ Since the $X_i$ all have $W(0,\delta,2)$ distribution, $$ \prod_{i=1}^n \mathbb P(X_i&gt;t) = \mathbb P(X_1&gt;t)^n = (1-F(t))^n = e^{-\left(\frac x\delta\right)^{2n}}. $$ It follows that $$ \mathbb E[N_n] = \int_0^\infty (1-F_{N_n}(t))\ \mathsf dt = \int_0^\infty e^{-\left(\frac x\delta\right)^{2n}}\ \mathsf dt = \delta\cdot \Gamma \left(1+\frac{1}{2 n}\right), $$ and $$ \mathbb E[N_n^2] = \int_0^\infty 2t(1-F_{N_n}(t))\ \mathsf dt = \int_0^\infty 2te^{-\left(\frac x\delta\right)^{2n}}\ \mathsf dt = \delta^2 \Gamma \left(1+\frac{1}{n}\right). $$ Therefore \begin{align} \operatorname{Var}(N_n) &amp;= \mathbb E[N_n^2] - \mathbb E[N_n]^2\\ &amp;= \delta^2 \Gamma \left(1+\frac{1}{n}\right) - \left(\delta\cdot \Gamma \left(1+\frac{1}{2 n}\right)\right)^2\\ &amp;= \delta ^2 \left(\Gamma \left(1+\frac{1}{n}\right)-\Gamma \left(1+\frac{1}{2 n}\right)^2\right) \end{align}
Proving the set of the strictly increasing sequences of natural numbers is not enumerable. How would one proceed to prove this statement? The set of the strictly increasing sequences of natural numbers is not enumerable. I've been trying to solve this for quite a while, however I don't even know where to start.
As other answers note, there are lots of fancy ways to prove this. But we can always go back to the basics. A straightforward diagonalization proof-by-contradiction suffices. Suppose there is such an enumeration. Maybe this is it: 1 --&gt; 1, 2, 3, 5, ... 2 --&gt; 4, 5, 7, 100, ... 3 --&gt; 1, 2, 3, 8, ... 4 --&gt; 2, 4, 5, 6, ... Now take the first number of sequence one, and add one to it. That's our first number: 2. Now take the second number of sequence two - 5 - and the number from the previous step - 2. Take the larger and add one: 6. Now take the third number of sequence three - 3 - and the number from the previous step - 6. Take the larger and add one: 7. Now take the fourth number of sequence four - 6 - and the number from the previous step - 7. Take the larger and add one: 8. Keep doing that and construct the sequence of monotone increasing naturals: 2, 6, 7, 8, ... By assumption, this sequence is in our enumeration, but where can it be? It cannot be at spot n for any n because by its construction the nth element of this sequence is larger than the element at spot n of the nth sequence. That's a contradiction, and therefore there cannot be any such enumeration.
Let $\{{s_i}_j\}$ be a countable list of strictly increasing sequences; define $\{c_i\}$ via $c_i = \max (c_{i-1}, {s_i}_i)+1$ and .... presto, Cantor! But it could be simpler (depending on one's idea of simple) to reduce to things we already know are uncountable. For simplicity, we can consider the sequence of differences between terms and not have to worry about the terms being increasing. i.e $a_{i+1} &gt; a_i$ so $b_{i+1} = a_{i+1} - a_i &gt; 0$ so $b_{i+1} \in \mathbb N$ and if $b_0 = a_0$. We have a one-to-one correspondence between $\{$ all increasing sequence of natural numbers $\}$ and $\{$ all sequences of natural numbers $\}$. $\{$ all sequences of natural numbers $\} \supset \{$ all sequences of 1.... 10$\} \cong \{$ all sequences of 0....9 $\} \cong \{$all real numbers between 0 and 1$\}$ which is uncountable.
How can I explain topology to my grandmother? I was recently look at a post on tex.stackexchange about explaining $\LaTeX$ to the OP's grandmother. I was wondering, could the same thing be done for topology? Except in this case the "grandmother" is me. I have not fully understood the gist of topology and its capabilities. To my understanding, topology is the study of spaces but how does that translate into equations and variables? Anything would be helpful. Thanks
Topology, aka "rubber-sheet geometry", is when a teacup is identical to a donut but there is no way a teacup could ever be like this. Topologists worry a lot about odd rings and bottles, some of them are quite concerned by knots while others try to comb hairy balls. All in all, these are rather strange characters...
Firstly, a topology is nothing but a subset of the power set of some set. This special set is required to satisfy some conditions(conditions that we intuit from some "concrete" objects, mainly Euclidean spaces) by definition. So this way, a topology, as a "compact"(do not confuse with term "compact space", I used its daily meaning) set, carries lots of information about the whole big set(which is your space). You can ask what kind of information we can get out of it? That is the crutial point indeed! Since a topology is a set of sets, it tells you in which way your points are connected/interrelated. For instance, due to its topology, some part of your space may perform wild behaviour while some other parts are kind of tame etc. Moreover, you can read from the topology where "too many" points are accumulated and where the less are at and so on... A few words on "doughnot = coffee mug" issue which is always given as a cliche example to a beginner of topology... You consider, as a topologist, a doughnot and a coffee mug as the same; the same in the sense that you care only in which way the points of both shapes are related. Here, for example, you forget about "distance" matter(the distance between any two points) as we have stepped up in a generalization(abstraction) level. I think the key point is, to mention again, a topology is a core which one can reach the information of how the points are "related". (Not an organized entry but I would be appreciated if I could contribute a bit.)
To prove that the limit of a bi-variate function is nonexistent at a point It has been asked to evaluate $$\lim_{(x,y)\to(0,0)} \frac{x^3 + y^3}{x-y}$$ if it exists at all and otherwise to disprove it. After much thoughts , I came up with an idea of substituting $y$ with $x-mx^3$ which devolved the limit to $(2/m)$ and thus served my purpose of disproving the existence of a limit. But, thinking of such half-weird substitutions take some reasonable amount of time; the luxury of which is seldom available at examinations. So, what are other better methods to disprove the existence of this limit? Any general algorithm (??) on disproving the existence of bi-variate limits (without indulging into trick-substitutions) will be also appreciated.
The standard and more effective method to show (by hand calculation) that a limit doesn't exist is to find at least two different paths with different limits as you have done. Of course we can apply the epsilon-delta method using the same two paths but that way shouldn't be so different form the method you have already used to prove that limit doesn't exist. Note that with some practice and a good strategy, figure out such "half-weird substitutions" don't take much time and it is indeed a correct an effective method to proceed. For the proper strategy to follow, refer also to the related What is $\lim_{(x,y)\to(0,0)}\frac{ (x^2y^2}{(x^3-y^3)}$?
The standard and more effective method to show (by hand calculation) that a limit doesn't exist is to find at least two different paths with different limits as you have done. Of course we can apply the epsilon-delta method using the same two paths but that way shouldn't be so different form the method you have already used to prove that limit doesn't exist. Note that with some practice and a good strategy, figure out such "half-weird substitutions" don't take much time and it is indeed a correct an effective method to proceed. For the proper strategy to follow, refer also to the related What is $\lim_{(x,y)\to(0,0)}\frac{ (x^2y^2}{(x^3-y^3)}$?
There exists $c \in [0,1]$ for which $\int_{0}^{1}\sin(x^3) = \int_{0}^{c}\sin(x^2)$ T/F: There exists $c \in [0,1]$ for which $\int_{0}^{1}\sin(x^3) = \int_{0}^{c}\sin(x^2)$ I know the answer it true, and I already saw the proof. What I don't get is this: $\sin$ is monotonically increasing in $[0, 1]$, so from the monotonocity of integrals: $$\sin(x^3) \leq \sin(x^2) \quad \forall x \in [0, 1]$$ $$\int_{0}^{1}\sin(x^3) \leq \int_{0}^{1}\sin(x^2) \quad \forall x \in [0, 1]$$ $$L = \int_{0}^{1}\sin(x^3) \leq \int_{0}^{c}\sin(x^2) + \int_{c}^{1}\sin(x^2) = R \quad \forall x \in [0, 1]$$ So if there exists such $c$, it means that: case 1: $L &lt; R \implies \int_{c}^{1}\sin(x^2) &lt; 0 \quad$ which is false. case 2: $L = R \implies \int_{c}^{1}\sin(x^2) = 0 \implies c = 1 \implies \int_{0}^{1}\sin(x^3) = \int_{0}^{1}\sin(x^2) \quad$ which is again, false So what's going on?
Let $L=\int_0^1\sin(x^3)\,dx$ and consider $$ f(t)=\int_0^t \sin(x^2)\,dx $$ Then $f(0)\le L\le f(1)$ and by the intermediate value theorem $L=f(c)$ for some $c\in[0,1]$. About your doubts. It doesn't make sense to say $$ \int_0^1\sin(x^3)\,dx\le\int_0^1\sin(x^2)\,dx \qquad \text{for all $x\in[0,1]$} $$ as the integrals don't depend on $x$. Then, indeed, $$ L\le\int_0^c\sin(x^2)\,dx+\int_c^1\sin(x^2)\,dx $$ just says that $$ \int_c^1\sin(x^2)\,dx\ge0 $$
Hint : put $f(t)=\int_0^t \sin(x^2)dx$ to conclude
To prove $f(x)=\lim_{n\to \infty}{x^{2n} \over x^{2n}+1 }$ is discontinuous at $x=\pm1$ I came across this question to show that the given $f(x)$ is continuous at all points on $R$ except $x=\pm1$ $$f(x)=\lim_{n\to \infty}{x^{2n} \over x^{2n}+1 }$$ I know that to be continuous at $x=1$, $$\lim_{x\to 1}f(x)=f(1)$$ I found $f(1)$ to be ${1 \over 2}$,but I am stuck as to how to calculate $\lim_{x\to 1}f(x)$ as there are two variables involved but its a question on a single variable calculus book.
The simplest way is likely to compute the pointwise limit at every $x$. When $\lvert x\rvert&lt;1$, $x^{2n}\to0$ as $n\to\infty$, so that $$ f(x):=\lim_{n\to\infty}\frac{x^{2n}}{x^{2n}+1}=\frac{0}{0+1}=0. $$ When $\lvert x\rvert &gt;1$, $x^{2n}\to+\infty$ as $n\to\infty$; so, we can write $$ f(x):=\lim_{n\to\infty}\frac{x^{2n}}{x^{2n}+1}=\lim_{n\to\infty}\frac{1}{1+\frac{1}{x^{2n}}}=\frac{1}{1+0}=1. $$ Finally, when $x=\pm1$, $$ f(x):=\lim_{n\to\infty}\frac{1}{1+1}=\frac{1}{2}. $$ Now, the fact that this function is discontinuous at $x=\pm1$ is clear.
When $x\not=\pm1$ then the limit equals $1$. Therefore the limit $f(x)$ as $x$ approaches $\pm1$ equals one. Otherwise at $x=\pm1$, $f(x)=1/2$. Such a function is not continuous at $x=\pm1$.
Calculate the value of the sum $1+3+5+\cdots+(2n+1)$ I have been thinking about this for a long time, may I know which step of my thinking is wrong as I do not seems to get the correct answer. If I am not going towards the right direction, may I get some help thanks! My attempt: Let $S = 1+3+5+\dotsb+(2n+1)\label{a}\tag{1}.$ Then I rearrange S from the last to first terms: $S = (2n+1)+(2n-1)+(2n-3)+\dotsb+1\label{b}\tag{2}.$ Adding the two series $(1)+(2)$: $$2S = (2n+2)+(2n+2)+(2n+2)+\dotsb+(2n+2),$$ I have $n$ copies of $(2n+2)$. Therefore: $2S = n(2n+2)$ $S = n(n+1)$.
Build a square in the following manner. On your first step, place $1$ block. On your second step put $3$ blocks around that, and on your third step put $5$ blocks around what you have, and so on. I think it should be pretty easy to see that the sum of the blocks is the area of the square for the step you are on. In other words, the sum of the first $n$ odd integers is $$ (2(1) - 1) + (2(2) - 1) + \dotsb + (2(n) - 1) = n^2. $$ Image source: google "sum of first $n$ odd numbers."
A tip: evaluate the first few partial sums. $$1, 4, 9, 16, 25, 36, 49\cdots$$ Nothing familiar ?
Quadratic Congruence (with Chinese Remainder Thm) How do we solve quadratic congruences such as: $x^2 \equiv11 \pmod{39}$ I know I must use the chinese remainder theorem with $p = 13, 3$ but I've only done linear examples and am unsure about how to do quadratic ones.
$$x^2 \equiv 11 \pmod{39} \implies x^2 \equiv 2 \pmod 3 \implies \text{No solution}$$ EDIT In general, when you want to solve for $$x^2 \equiv a \pmod n \,\,\, (\spadesuit)$$ and if $n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$, the idea is to first solve for $$x^2 \equiv a \pmod {p_l^{a_l}} \,\,\, (\clubsuit)$$ You have a solution for your original problem $(\spadesuit)$ iff you have a solution for each $l \in \{1,2,\ldots,k\}$ in $(\clubsuit)$. Once you find solution for each $l$ in $(\clubsuit)$, put them together using Chinese Remainder theorem. For instance, if you have $x^2 \equiv 23 \pmod {77}$, then we need to look at $x^2 \equiv 23 \pmod 7$ and $x^2 \equiv 23 \pmod{11}$ i.e. $x^2 \equiv 2 \pmod 7$ and $x^2 \equiv 1 \pmod{11}$. $$x^2 \equiv 2 \pmod7 \implies x \equiv \pm 3 \pmod 7$$ Similarly, $$x^2 \equiv 1 \pmod{11} \implies x \equiv \pm 1 \pmod{11}$$ Hence, $$x \equiv 3 \pmod 7 \text{ and } x \equiv 1 \pmod{11} \implies x \equiv 45 \pmod{77}$$ $$x \equiv -3 \pmod 7 \text{ and } x \equiv 1 \pmod{11} \implies x \equiv 67 \pmod{77}$$ $$x \equiv 3 \pmod 7 \text{ and } x \equiv -1 \pmod{11} \implies x \equiv 10 \pmod{77}$$ $$x \equiv -3 \pmod 7 \text{ and } x \equiv -1 \pmod{11} \implies x \equiv 32 \pmod{77}$$ Hence, $$x \equiv \pm 10, \pm 32 \pmod{77}$$
$x^2\equiv11(\pmod{39}$ $39$ is a composite number $3\cdot13$ so now problem is : $x^2\equiv11\pmod3$ and $x^2\equiv11\pmod{13}$. if $x^2\equiv a\pmod{p}$ is a problem then it has solution if $a^{((p-1)/2)} \pmod {p}=1$ and have no solution if ^((p-1)/2) mod(p)=p-1 1)x^2=11mod(3) (p-1)/2=1 11^1 mod(3)=2=p-1 so no solution 2)x^2=11 mod(13) (p-1)/2=6 11^6 mod(13)=12=p-1 so no solution
stuck in combinatorics exercice , need of help Bob has : 12 Hats 12 shirts 5 Pants 4 Jackets 3 belts Wearing 1 pair of pants and 1 shirt is OBLIGATORY. 4 out of the 12 Shirts are warm so no jacket has to be with them 5 out of the 12 Shirts Can be worn with a Jacket (not necessarily) The remaining 3 shirts out of the 12 are light to they MUST be worn with a jacket on (Obligatory) 3 pants out of 5 must be worn with belts on Finally Wearing a hat is OPTIONAL in how many Ways can bob get dressed ?? MY ANSWER : Upper body: (4 + 5*5 + 3*4)*13 = 533. Here the 5*5 represents the 5 shirts that can be either (a) with one of the 4 jackets or (b) with no jacket. The final 13 is the 12 hats and no hat. Lower body: other 2 pants CAN be worn with belts. Hence, 2 + 3*3 = 11. Finally, 533*11 = 5863.
Lower body: $3\cdot 3 + 2\cdot 4 = 17$ since 3 pants must be worn with one of 3 belts, and the other 2 pants have the extra choice of no belt.
Lower body: $3\cdot 3 + 2\cdot 4 = 17$ since 3 pants must be worn with one of 3 belts, and the other 2 pants have the extra choice of no belt.
What is a universal property? Sorry, but I do not understand the formal definition of "universal property" as given at Wikipedia. To make the following summary more readable I do equate "universal" with "initial" and omit the tedious details concerning duality. Suppose that $U: D \to C$ is a functor from a category $D$ to a category $C$, and let $X$ be an object of $C$. A universal morphism from $X$ to $U$ [...] consists of a pair $(A,\varphi)$ where $A$ is an object of $D$ and $\varphi: X \to U(A)$ is a morphism in $C$, such that the following universal property is satisfied: Whenever $Y$ is an object of $D$ and $f: X \to U(Y)$ is a morphism in $C$, then there exists a unique morphism $g: A \to Y$ such that the following diagram commutes: $\hspace{5cm}$ What kind of definition is this? Instead of "such that the following universal property is satisfied" one can equivalently say "such that the following property is satisfied". So how can this be a definition of "universal property"? Unfortunately, not even Awodey in his Category Theory gives a concise definition of "universal property". Where do I find a really concise definition of "universal property"? EDIT: I wonder why the attitude "you only have to understand the concrete examples, and the abstract notion will pop out by itself" seems to be accepted in this context. This reminds me of Augustine of Hippo: What, then, is time a universal property? If no one ask of me, I know; if I wish to explain to him who asks, I know not.
I agree with you that this is not about “concrete examples.” More about language. I apologize if my story is elementary, but there is really nothing complex. Maybe you do not realize that “$A$ has universal property” is the same as “$A$ is a universal object” is the same as “an object $A$ is universal.” These are different names of the same term. So the definition of a universal object also defines universal property. Consider the definition of an initial object. “…an initial object is an object… such that…” “Initial” is a property of objects, thus this definition defines a property. Properties are named not only by adjectives (e.g. “transitive”, “injective”), but also by nouns (e.g. “equivalence”, “injection”; “a function $f$ is an injection” is the same as “a function $f$ is injective”). In contrast, the definition of average, i.e. $(x, y)\mapsto \frac{x+y}{2}$, defines not a property. Consider the shorter definition in Wikipedia which you did not cite: An initial morphism from $X$ to $U$ is an initial object in the category $X \downarrow U$. This definition defines a property because it uses the definition of an initial object. The longer definition in Wikipedia which you cited is the shorter definition with the terms “initial object” and “comma category” unfolded. “The universal property of the quotient group” is not a definition, it is a theorem which says that the quotient group $G/N$ is an initial object in a category defined as: object: $(X, f)$ where $X$ is a group and $f:G\to X$ and $N\subseteq ker(f)$; morphism of type $(X_0, f_0)\to (X_1, f_1)$: $g:X_0\to X_1$ such that $g\circ f_0 = f_1$. I have essentially seconded lhf's answer, but he/she did not construct the category. I just can not find explicit construction of this category in textbooks. Wikipedia's definition of the universal property does not include the universal property of the quotient group as a particular case. The problem is that in Wikipedia's definition $f$ is a morphism, but in the case of groups $f$ is a homomorphism such that $N\subseteq ker(f)$. IMHO Wikipedia's definition is not general enough. P.&nbsp;S. I prefer “initial” and “terminal” over “universal”. A universal object is an initial object or a terminal object depending on context. Therefore, any text involving “universal” forces a reader to guess a precise meaning.
Given a diagram or graph a universal property says for the pair (U f) U is a object and f is a unique arrow, a U-cone is terminal or intial in f, if for any other cone over the U-cone the latter must factor through the former in particular through the unique arrow f.
Set of logical statements with contradiction. Let $\Sigma_1, \Sigma_2$ be two finite sets of statements in propositional calculus s.t $\Sigma_1\cup\Sigma_2$ has a contradiction. Prove that there is a statement A s.t $$\Sigma_1\vdash{A}\ and\ \Sigma_2\vdash{\neg{A}}$$ is it still true with infinite $\Sigma_1,\Sigma_2$? My attempt: in case one of the sets is $\phi$ or one of the sets has a contradiction its obvious... So, we left with the case where both aren't empty and both without contradiction. I didn't manage to prove this... Any kind of help will be appreciated Thanks!
Let $A_1,\ldots,A_n\in\Sigma_1$ and $B_1,\ldots,B_m\in\Sigma_2$ the sentences that generate the contradiction. Let $A=A_1\wedge\ldots\wedge A_n$. It is clear that $A_1\wedge\ldots\wedge A_n\vdash A$. Since propositional calculus is sound, $A\wedge(B_1\wedge\ldots\wedge B_m)$ must be false in $\Sigma_2$, or, equivalently, $(B_1\wedge\ldots\wedge B_m)\to\neg A$ must be true. Since propositional calculus is complete, $(B_1\wedge\ldots\wedge B_m)\vdash\neg A$. For the infinite sets case, I have been thinking, but no result so far.
There are a couple of problems with your puzzle. the idea that $\Sigma_1, \Sigma_2$ be two finite sets of statements is in itself allready a contradiction, if they are closed onder modus ponens and so they are infinite sets , there are just an infinite number of theorems that belong to both sets. If contradiction of a set means that $ (\Sigma_1\cup\Sigma_2\vdash {A} $ and $ \Sigma_1\cup\Sigma_2\vdash{\neg{A}}$ you cannot tell which if $ A$ is from $\Sigma_1 $ or from $ \Sigma_2$ so how can you proof that $\Sigma_1\vdash {A} $?
Alternative definition of Gamma function? The Gamma function is defined in terms of an integral as The notation $Γ(t)$ is due to Legendre. If the real part of the complex number $t$ is positive $(Re(t) &gt; 0)$, then the integral $$ \Gamma(t) = \int_0^\infty x^t e^{-x}\,\frac{{\rm d}x}{x} $$ converges absolutely, and is known as the Euler integral of the second kind (the Euler integral of the first kind defines the Beta function). Can it be equivalently defined in terms of a recursive relation as $$ \Gamma(t+1)=t \Gamma(t)$$ $$\Gamma(1)=1$$ with some non-redundant conditions? Thanks!
Baby Rudin gives the following definition: $\forall x\in(0,+\infty):\ \Gamma(x+1)=x\Gamma(x);$ $\forall n\in{\Bbb N}:\ \Gamma(n+1)=n!;$ $\log\Gamma$ is convex in $(0,+\infty)$.
There are an infinitude of functions interpolating $n!$, a few interesting alternatives are discussed here (Bernoulli's, Hadamard's, and Luschny's).
Mathematical Fallacy Analysis I am interested in mathematical fallacy and found some cases about it. I am one of education major college student, and of course I am afraid that students in school will encounter it, especially the lack of understanding ones. Here is the sample. We already know that $(-1)^3 = -1$. Yet, I will show you that it is not a true fact. $(-1)^3 = (-1)^{\frac{6}{2}} = ((-1)^6)^{\frac{1}{2}} = 1^{\frac{1}{2}} = 1$ So, the conclusion is $-1 = 1$. Most of students are easily trapped by such a imaginary number cases and absolute value properties, and that mathematical fallacies I shall look at are in that areas. What I want to know is that: are there any method, study theory, approachment, or anything, which can be used by teacher to make students have capabilities to analyze mathematical fallacy in solution steps and it is more better if they can think critically, implied that they just don't memorize certain math subject's properties.
The example given (or others like it) is exactly what students should be encouraged to keep in mind when they work with fractional exponents, because it demonstrates how easy it is to go astray if you don't pay attention to the precise statement of mathematical laws. The fallacy, of course, lies in the naive expectation that the "associative" law $z^{ab}=(z^a)^b$ holds for all numbers $z$, not just the positive reals. (Note, I'm assuming here that $a$ and $b$ are real numbers; the situation is even more delicate if $a$ and $b$ are complex.) It could be worth having students concoct their own examples of the fallacy, so that they can take "ownership" of it. For students familiar with function notation, it might also be worth presenting it in the form of an abstract question: Is $E(E(x,y),z))$ necessarily equal to $E(x,M(y,z))$? (I trust it's obvious what operations $E$ and $M$ refer to.)
The example given (or others like it) is exactly what students should be encouraged to keep in mind when they work with fractional exponents, because it demonstrates how easy it is to go astray if you don't pay attention to the precise statement of mathematical laws. The fallacy, of course, lies in the naive expectation that the "associative" law $z^{ab}=(z^a)^b$ holds for all numbers $z$, not just the positive reals. (Note, I'm assuming here that $a$ and $b$ are real numbers; the situation is even more delicate if $a$ and $b$ are complex.) It could be worth having students concoct their own examples of the fallacy, so that they can take "ownership" of it. For students familiar with function notation, it might also be worth presenting it in the form of an abstract question: Is $E(E(x,y),z))$ necessarily equal to $E(x,M(y,z))$? (I trust it's obvious what operations $E$ and $M$ refer to.)
solution using synthetic geometry I managed to solve this problem only using complex numbers but I'd like to solve it using synthetic geometry and I can't. Can someone help me to solve this problem using synthetic geometry? Let $ABC$ an acute triangle with $AB &gt; AC$ . Let $O$ its circumcenter and let $D$ the midpoint of $BC$. The circle of diameter $AD$ intersects again $AB$ and $AC$ in $E$ and in $F$, respectively. Let $M$ the midpoint of $EF$. Prove that $MD$ is parallel to $AO$. This is my solution. But, as I wrote above, I'd like to solve it using synthetic geometry and I can't. Setting the origin of the plane in O and the points A, B and C on the circumference of unit radius, we have that $ a \bar{a} = 1 \text{;} \ b \bar{b} = 1 \text{;} \ c \bar{c} = 1 $. Because $ D $ is the midpoint of $ BC $, we can write 1) $ d = \dfrac{b + c}{2} $ and because $ AD $ is a diamtere of the new circle then said $ Q $ his midpoint we have 2) $ q = \dfrac{a + d}{2} = \dfrac{2a + b + c}{4}$ Said $ M_{1} $ the projection of $ Q $ on $ AB $, then $ m_{1} = \frac{1}{2} \left[ \left( \dfrac{\bar{q} - \bar{a}}{\bar{b} - \bar{a}}\right) (b - a) + a + q \right] $ but $\dfrac{1}{\bar{b} - \bar{a}} = \dfrac{1}{\dfrac{1}{b} - \dfrac{1}{a}} = \dfrac{ab}{a – b} $ and so $ m_{1} = \frac{1}{2} \left[ \bar{q} ab (-1) - \dfrac{ab}{a} (- 1) + a + q \right] \Rightarrow $ $m_{1} = \frac{1}{2} \left( a + b + q - ab\bar{q} \right) $ In the same way, said $ M_{2} $ the projection of $ Q $ on $ AC $ we have $ m_{2} = \frac{1}{2} \left[ \dfrac{\bar{q} - \bar{a}}{\bar{c} - \bar{a}} (c - a) + a + q \right] \Rightarrow $ $ m_{2} = \frac{1}{2} \left( a + c + q - ac\bar{q} \right) $ Because $ Q $ is the center of the new circle passing through $ A \text{,} D \text{,} E \text{,} F $ we have that $ M_{1}Q $ is axes of $ AE $ and that $ M_{2}Q $ in axes of $ AF $. So we have that $ M_{1} $ is midpoint of $ AE $ and that $ M_{2} $ is midpoint of $ AF $. So we can write $ m_{1} = \dfrac{a + e}{2} \ \Rightarrow \ e = 2m_{1} – a $ and also $ m_{2} = \dfrac{a + f}{2} \ \Rightarrow \ f = 2m_{2} - a$ The point $ M $ is defined as midpoint of $ EF $ so we have $ m = \dfrac{e + f}{2} = \dfrac{2m_{1} + 2m_{2} - 2a}{2} = m_{1} + m_{2} - a$ therefore replacing $ m_{1} $ and $ m_{2} $ we have $m = \frac{1}{2} a + \frac{1}{2} c + \frac{1}{2} q + \frac{1}{2} a + \frac{1}{2} b + \frac{1}{2} q - \frac{ab\bar{q}}{2} - \frac{ac\bar{q}}{2} - a \ \Rightarrow $ $m = \dfrac{b + c}{2} + q - a\bar{q} \dfrac{b + c}{2}$ We know, furthemore, that $ \bar{q} $ is: 3) $\bar{q} = \dfrac{2\bar{a} + \bar{b} + \bar{c}}{4} = \frac{1}{4} \left( \frac{2}{a} + \frac{1}{b} + \frac{1}{c} \right) = \dfrac{2bc + ab + ac}{4abc}$ Now we write the equation of the parallel line through $D$ parallel to $AO$: let $z$ be a generic point of this line it is possible to write 4) $ \dfrac{z - d}{a - 0} = \dfrac{\bar{z} - \bar{d}}{\bar{a} – 0} $ Replacing $d$ with the expression of 1) and taking into account that $ \bar {a} = \frac{1}{a} $, we get 5) $z - \dfrac{b + c}{2} = a^{2} \left( \bar{z} - \dfrac{\bar{b} + \bar{c}}{2} \right) = a^{2}\bar{z} - \dfrac{a^{2} (\bar{b} + \bar{c})}{2}$ If the $ M $ point belongs to this line, $ m $ must satisfy equation 5). 
Substituting the value of $ m $ we have $\dfrac{b + c}{2} + q - a \bar{q} \cdot \left( \dfrac{b + c}{2} \right) - \dfrac{b + c}{2} = a^{2}\left( \dfrac{\bar{b} + \bar{c}}{2} + \bar{q} - \bar{a}q \cdot \dfrac{\bar{b} + \bar{c}}{2} \right) - \dfrac{a^{2} (\bar{b} + \bar{c})}{2} \Rightarrow $ $ q - a \bar{q} \cdot \left( \dfrac{b + c}{2} \right) = a^{2}\bar{q} - \dfrac{aq( b + c)}{2bc} \ \Rightarrow $ $ q \left( 1 + \dfrac{ab + ac}{2bc} \right) = \bar{q}a \left( a + \dfrac{b + c}{2} \right)$ Substituting the values of $ q $ in 2) and the value of $ \bar{q} $ in 3) we have $ \dfrac{(2a + b + c)(2bc + ab + ac)}{8bc} = \dfrac{a (2bc + ab + ac) (2a + b + c)}{a \cdot 8bc} $ which is, obviously, an identity, so $ MD $ is parallel to $ AO $, as we wanted to prove.
Hints: Construct squares $EDPR$ and $FDQS$ outwardly on the sides $ED$ and $FD$ of $\triangle DEF$. Let the line $PQ$ intersect lines $AB, AC$ at $U, V$ respectively . First show that $MD \perp \operatorname{line} PQ = \operatorname{line}\ UV$. Therefore if $l$ is the line through $A$ that is parallel to $MD$ then $l \perp \operatorname{line} UV$. Using $\triangle DEF \cong \triangle ABC$ (use ratio $AB/AC$ and $\angle A$) and $\operatorname{line} DP \parallel \operatorname{line} AB$ show that $\angle AUV = \angle ACB$. Since $\angle OAB = \frac\pi 2 - \angle ACB$, conclude that $\operatorname{line} AO \perp UV$. Thus lines $AO$ and $l$ through $A$ are both perpendicular to the line $UV$, which implies that $\operatorname{line} AO = l \parallel \operatorname{line} MD$.
I was not able to solve it using synthetic geometry either. Is there anyone who can solve it and post a solution using synthetic geometry methods?
How can I solve $4x + 51y = 9$ using congruences? I'm given: $4x+51y=9$. I am given a hint that when we use $4x=9 \pmod{51}$ we get $x = 15 + 15t$, and also if we use the congruence $51y=9 \pmod 4$ we get $y=3+4s$. They say it's handy to then find the relation between $s$ and $t$. I have no idea how they got those suggestions and I need to know how to do that too. I'm really stuck :(
Write: $4x + 51y = 9$ as $9x - 5x + 45y + 6y = 9$ $\to$ $9| (-5x + 6y)$ $\to$ $-5x + 6y = 9h$ $\to$ $-5x = 3(3h - 2y)$ $\to$ $3|(-5x)$ $\to$ $3|x$. $x = 3k$. Back to the main equation: $4(3k) + 51y = 9$ $\to$ $51y = 9 - 12k$ $\to$ $17y = 3 - 4k$ $\to$ $y = \dfrac{3 - 4k}{17}$. In order for $y$ to be an integer we must have: $k = 5 + 17t$. Thus $x = 3k = 3(5 + 17t) = 15 + 51t$. So $y = \dfrac{3 - 4(5 + 17t)}{17} = \dfrac{-17 - 68t}{17} = -1 - 4t$. Thus the solution is: $(x,y) = \{(15 + 51t, -1 - 4t): t \in \mathbb{Z}\}$. Check: $4x + 51y = 4(15 + 51t) + 51(-1 - 4t) = 60 + 204t - 51 - 204t = 9$.
modulo 51. 4x=9(mod51) 51=3×17 4x=9(mod3) then x=0(mod3) then x=3k k:integer 4x=9(mod17) then 12k=9(mod17) 9k=60k=45(mod17) k=5(mod17) then k=17t+5 t:integer x=3k=3(17t+5)=51t+15 modulo4. 51y=9(mod4) then -y=1(mod4) -y=4t+1 then y=-(4t+1)=-4t-1
Choosing the variance of a random normal variable to fulfill some criteria Suppose you create a random normal variable $X$ with mean zero and variance $\sigma^2$. You wish to choose $\sigma^2$ such that 80% of the time (or $a$% of the time, to make it more general), $X$ is between -1 and 1 (or between $-b$ and $b$ to make it more general). How to calculate $\sigma^2$?
First, note that $$\mathbb{P}(-1 \leq X \leq 1) = \mathbb{P}(X \leq 1) - \mathbb{P}(X \leq -1). $$ Furthermore, we have that $\mathbb{P}(X \leq -1) = \mathbb{P}(X \geq 1) = 1 - \mathbb{P}(X \leq 1)$ by symmetry of the density of the normal cdf. Hence, we obtain $$\mathbb{P}(-1 \leq X \leq 1) = 2 \mathbb{P}(X \leq 1) - 1 = 0.8 \Leftrightarrow \mathbb{P}(X \leq 1) = 0.9$$ Standardizing yields $$\mathbb{P}(X \leq 1) = \mathbb{P}\left(Z \leq \frac{1}{\sigma}\right) = 0.9 ,$$ where $Z \sim N(0,1)$. Looking in the table for the $z$-scores of the normal distribution, we find that $$\frac{1}{\sigma} = 1.282 \Rightarrow \sigma \approx 0.78$$
Hint: Find $\sigma^2$ such that: $$\mathbb{P}(-1 \leq X \leq 1) = 0.8,$$ knowing that $X \sim N(0,\sigma^2)$.
What are the hyperbolic rotation matrices in 3 and 4 dimensions? So the hyperbola-preserving transformation in 2 dimensional space is given by the matrix \begin{pmatrix} \cosh(\phi) &amp; \sinh(\phi) \\ \sinh(\phi) &amp; \cosh(\phi) \end{pmatrix} I'm wondering what such a matrix would be in 3 dimensional space (so that it preserves 2 dimensional hyperboloids) and 4 dimensional space (so that it preserves 3 dimensional hyperboloids). Sources or derivations would be appreciated. Thank you!
In a way, your transformation matrix is a variation of a common 2d rotation matrix $$\begin{pmatrix}\cos\phi&amp;-\sin\phi\\\sin\phi&amp;\cos\phi\end{pmatrix}\;.$$ Where the above preserves the unit circle $x^2+y^2=1$, yours preserves the hyperbola $x^2-y^2=1$. The unit circle here corresponds to the unit sphere in 3d. There are many ways to describe 3d rotations, but one very common one is to describe them as a product of rotations around the coordinate axes. You can do the same for your hyperboloid as well. For example, the one-sheeted hyperboloid $x^2+y^2-z^2=1$ has rotational symmetry around the $z$ axis. So you'd have these three “rotation” matrices: $$ \begin{pmatrix} \cos\alpha &amp; -\sin\alpha &amp; 0 \\ \sin\alpha &amp; \cos\alpha &amp; 0 \\ 0 &amp; 0 &amp; 1 \end{pmatrix} \qquad \begin{pmatrix} \cosh\beta &amp; 0 &amp; \sinh\beta \\ 0 &amp; 1 &amp; 0 \\ \sinh\beta &amp; 0 &amp; \cosh\beta \end{pmatrix} \qquad \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; \cosh\gamma &amp; \sinh\gamma \\ 0 &amp; \sinh\gamma &amp; \cosh\gamma \end{pmatrix} $$ Each of them preserves the hyperboloid, so a product of them will preserve it as well. The two-sheeted hyperboloid $z^2-y^2-x^2=1$ is preserved by the above matrices, too. If you want $x^2-y^2-z^2=1$ instead, you have to change coordinates, so that the rotation around $x$ becomes a regular rotation while the other two use hyperbolic functions. I don't know whether it would make sense to translate any of the other rotation formalisms (like axis-angle or quaternions) into something preserving a hyperboloid. Probably things would become too complicated to make this useful. In four dimensions, you can try the same approach. You have $\binom42=6$ coordinate planes, and for each of them you can give a possible rotation matrix. If the signs of the coordinates in the equation of your hyperboloid are the same, you use a regular rotation, otherwise you use hyperbolic functions. So for example if you have $x^2+y^2+z^2-w^2=1$ then any matrix modifying the $w$ coordinate would use hyperbolic functions, while those that just involve two of $x,y,z$ are regular rotations using circular trigonometric functions.
You can always transform any formula for spheric rotation into a formula for hyperbolic rotation by using the mathematical identities: $$\cos x=\cosh ix$$ $$i\sin x=\sinh ix$$ $$\cosh x=\cos ix$$ $$\sinh x=-i\sin ix$$
A 10 digit positive number and ordered triplets A 10 digit positive number is said to be a “LearnHub”number if its digits are all distinct and it is a multiple of 11111. How many “LearnHub” numbers are there? Find the number of ordered triplets (a, b, c) of positive integers for which LCM (a, b) =1000, LCM (b, c) = 2000 and LCM (c, a) = 2000. How do i do this ? I have worked a lot on them.
Note that $1000=2^35^3$ and $2000=2^45^3$. What does this tell you about the prime factors of $a$, $b$, and $c$? Do you know what the LCM is of, say, $p^rq^s$ and $p^tq^u$, if $p$ and $q$ are primes? Can you see why any number with all 10 digits must be a multiple of 9? Can you see why any number that's a multiple of 9 and a multiple of 11111 must be a multiple of 99999? Now, $99999=100000-1$, so if you multiply it by the 5-digit number abcde, you get abcde00000-abcde. From this you should be able to work out the possibilities for abcde to make the result contain all 10 digits. EDIT: Of course, there's always the computer programmer's solution, where you just ask your computer to look at all the 10-digit multiples of 11111 and count up how many have all 10 digits.
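The "computer programmer's solution" mentioned at the end could look like the following sketch (my rendering of it, in Python):

count = 0
n = -(-10**9 // 11111) * 11111        # first 10-digit multiple of 11111
while n < 10**10:
    if len(set(str(n))) == 10:        # all ten digits distinct
        count += 1
    n += 11111
print(count)                          # number of "LearnHub" numbers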
$1000 = 2^3 \cdot 5^3$ and $2000 = 2^4 \cdot 5^3$. Since the LCM of $a$ and $b$ is $1000$, they contain $2$ at most three times and $5$ at most three times. Also, since the LCMs of the pairs containing $c$ are $2000$, the term $c$ should contain $2$ four times, as $a$ and $b$ can contain $2$ only three times. So $c$ should be one of $2^4 = 16$, $2^4 \cdot 5^1 = 80$, $2^4 \cdot 5^2 = 400$ or $2^4 \cdot 5^3 = 2000$. Either $a$ or $b$ should have $2^3$. Fixing one as $2^3$, the other can take values from $2^0$ to $2^3$. Hence the total combinations $= 2 \cdot 4 - 1 = 7$ (the one combination $2^3, 2^3$ was counted twice). Two of the three should have the power of $5$ equal to $5^3$, whereas the third can take $4$ values in terms of powers of $5$ ($5^0$ to $5^3$). Hence the total number of such arrangements $= (3!/2!) \cdot 4 - 2$ (the $2$ accounts for $5^3,5^3,5^3$, which was counted three times) $= 12 - 2 = 10$. Hence the number of ordered triplets is $10 \cdot 7 = 70$.
Find a derivable function $f$ for which $f(x) - f(x-1) =\alpha ( f(x-1) - f(x-2) )$ Find a derivable function $f$ for which $f(x) - f(x-1) =\alpha ( f(x-1) - f(x-2) )$. My initial conditions would be: $$\begin{align*} f(0) &= 0\\ f(1) &= \beta \end{align*}$$ and $\alpha < 1$ and my domain $[0,+\infty[$ Basically, if looping on integers, every increment will be $\alpha$ times the previous increment, but I want a derivable function. For example, for $\beta = 1$ and $\alpha = 0.5$: $$\begin{align*} f(2) &= 1.5\\ f(3) &= 1.75\\ f(4) &= 1.875 \end{align*}$$ I want to be able to evaluate $f(5.7)$ for instance.
If $\alpha=0$ then any periodic function of period $1$ is a solution. If $\alpha\ne0$ write $\alpha=e^\lambda$ for some $\lambda\in{\mathbb C}$. Then any function $f$ of the form $$f(x):=g(x)+e^{\lambda x} h(x)$$ with $g$ and $h$ periodic of period $1$ is a solution.
If you take a possible solution $f$ to your problem (e.g. Did's) and multiply it by a differentiable function $g$ with $g(x)=g(x+1)$ and $g(0)=1$, e.g. $g(x)=\cos(2\pi x)$, you get a new solution $fg$ which also solves your problem. Hence there is no reasonable way to evaluate $f(5.7)$ with only this information (but there is a reasonable way to evaluate $f(n)$ where $n$ is an integer).
Negating the statement $\exists x \in \Bbb R$ so that $x$ is not an integer, $x > 2016$, and $\lfloor x^2 \rfloor = \lfloor x \rfloor^2$ There exists a real number $x$ so that $x$ is not an integer, $x > 2016$, and $\lfloor x^2 \rfloor = \lfloor x \rfloor^2$. I would like clarification on how to negate this. My idea of negation is for all real numbers $x$, so that $x$ is not an integer, $x>2016$ and $\lfloor x^2\rfloor = \lfloor x\rfloor ^2$. I'm tempted to say for all $x$ so that $x$ is an integer, $x> 2016$, but $\lfloor x^2 \rfloor \neq \lfloor x\rfloor ^2$.
Sometimes, representing the situation symbolically can help. Let's define $p(x)$ means "x is not an integer" $q(x)$ means "x is greater than 2016" $r(x)$ means "$\lfloor x^2\rfloor = \lfloor x\rfloor^2$" Now your original statement is $$\exists x \in \mathbb{R} : p(x) \wedge q(x) \wedge r(x).$$ We can get a simplified version of its negation using the rules of boolean algebra as follows: $$\begin{align*} \neg \left[ \exists x \in \mathbb{R} : p(x) \wedge q(x) \wedge r(x) \right]&amp;\iff \forall x \in \mathbb{R} : \neg \left[p(x) \wedge q(x) \wedge r(x)\right] \\ &amp;\iff \forall x \in \mathbb{R} : \neg p(x) \vee \neg q(x) \vee \neg r(x)\\ \end{align*}$$ To translate back into more natural language: Every real number $x$ has at least one of the following three properties, possibly more: it is an integer, it is less than or equal to 2016, and/or $\lfloor x^2\rfloor \neq \lfloor x \rfloor ^2$.
I read this question somewhat differently than the other two answerers. The phrasing "so that" in the original statement is sloppy at best. Usually, you would have something like "$\exists x\in\mathbb{R}$, where $x$ is not an integer, ..." In fact, in a reasonably decent text, I think you would more likely come across something like "$\exists x\in\mathbb{R}\setminus\mathbb{Z}$." All this to say, I do not think that "$x$ is an integer" should be treated as its own statement or claim. A better phrasing of your original statement, I believe, would be something along the lines of the following: There exists a noninteger real $x$ for which $x&gt;2016$ and $\lfloor x^2 \rfloor = \lfloor x \rfloor^2$. The above statement becomes easy to express in a more symbolically formal way: $$ (\exists x\in\mathbb{R}\setminus\mathbb{Z})(x&gt;2016\land\lfloor x^2 \rfloor = \lfloor x \rfloor^2). $$ The symbolic negation then becomes easy: $$ (\forall x\in\mathbb{R}\setminus\mathbb{Z})(x\leq2016\lor\lfloor x^2 \rfloor \neq \lfloor x \rfloor^2). $$ Finally, the linguistic equivalent, no doubt more meaningful, becomes the following: For all noninteger real $x$, either $x\leq2016$ or $\lfloor x^2 \rfloor \neq \lfloor x \rfloor^2$.
Finding the unknown variable What is the value of $x$ in $x^{x}=25$? How can this be solved in the easiest way of all? I just couldn't deduce any idea regarding where to start.
$$ \begin{align} x^x&amp;=25\tag{1}\\ x\log(x)&amp;=\log(25)\tag{2}\\ \log(x)e^{\log(x)}&amp;=\log(25)\tag{3}\\ \log(x)&amp;=\mathrm{W}(\log(25))\tag{4}\\ x&amp;=e^{\mathrm{W}(\log(25))}\tag{5}\\ &amp;=\frac{\log(25)}{\mathrm{W}(\log(25))}\tag{6} \end{align} $$ Explanation: $(2)$: take log of both sides $(3)$: $x=e^{\log(x)}$ $(4)$: if $we^w=x$, then $w=\mathrm{W}(x)$, $\mathrm{W}$ is Lambert W $(5)$: exponentiate $(4)$ $(6)$: divide $(2)$ by $(4)$
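For anyone wanting to see the closed form evaluated, here is a sketch assuming SciPy's Lambert W implementation (my addition, not part of the original derivation):

import numpy as np
from scipy.special import lambertw

x = np.exp(lambertw(np.log(25)).real)   # step (5); about 2.96
print(x, x**x)                          # x**x returns (approximately) 25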
\begin{align} x^x &= 25 \\ x^2 &= 25 \end{align} possibilities, $x=5$ or $x=-5$ because $5^2 = 25$ and $(-5)^2 =25$ description: \begin{align} -5\times-5 &=(-1)\times5\times(-1)\times5 \\ &=(-1)\times(-1)\times5\times5 \\ &=1\times5\times5 \\ &=25 \end{align}
Parametric Equation of a Circle in 3D Space? So, my dilemma here is... I have an axis. This axis is given to me in the format of the slope of the axis in the x,y and z axes. I need to come up with a parametric equation of a circle. This circle needs to have an axis of rotation at the given axis with a variable radius. I've worked on this problem for days, and still haven't come up with a solution. I'm using this circle to map the path of a satellite, programmed in C. Any help would be greatly appreciated. Thanks!
Let $(a_1,a_2,a_3)$ and $(b_1,b_2,b_3)$ be two unit vectors perpendicular to the direction of the axis and each other, and let $(c_1,c_2,c_3)$ be any point on the axis. (If ${\bf v} = (v_1,v_2,v_3)$ is a unit vector in the direction of the axis, you can choose ${\bf a} = (a_1,a_2,a_3)$ by solving ${\bf a} \cdot {\bf v} = 0$, scaling ${\bf a}$ to make $\|{\bf a}\| = 1$, then letting ${\bf b} = {\bf a} \times {\bf v}$.) Then for any $r$ and $\theta$, the point $(c_1,c_2,c_3) + r\cos(\theta)(a_1,a_2,a_3) + r\sin(\theta)(b_1,b_2,b_3)$ will be at distance $r$ from $(c_1,c_2,c_3)$, and as $\theta$ goes from $0$ to $2\pi$, the points of distance $r$ from $(c_1,c_2,c_3)$ on the plane containing $(c_1,c_2,c_3)$ perpendicular to the axis will be traced out. So the parameterization of the circle of radius $r$ around the axis, centered at $(c_1,c_2,c_3)$, is given by $$x(\theta) = c_1 + r\cos(\theta)a_1 + r\sin(\theta)b_1$$ $$y(\theta) = c_2 + r\cos(\theta)a_2 + r\sin(\theta)b_2$$ $$z(\theta) = c_3 + r\cos(\theta)a_3 + r\sin(\theta)b_3$$
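The construction above translates directly into code; here is a sketch in Python/numpy (the question mentions C, but the steps are identical; the axis, center and radius below are made-up sample values):

import numpy as np

v = np.array([2.0, 1.0, 2.0]); v /= np.linalg.norm(v)  # unit vector along the axis
c = np.array([1.0, 0.0, -1.0])                         # chosen point on the axis
t = np.array([1.0, 0.0, 0.0])                          # any vector not parallel to v
a = t - (t @ v) * v; a /= np.linalg.norm(a)            # solves a . v = 0, normalized
b = np.cross(a, v)                                     # b = a x v completes the frame
r = 2.5
theta = np.linspace(0.0, 2*np.pi, 200)
pts = c + r*np.cos(theta)[:, None]*a + r*np.sin(theta)[:, None]*b
print(np.allclose(np.linalg.norm(pts - c, axis=1), r))  # every point at distance r
print(np.allclose((pts - c) @ v, 0))                    # every offset is perpendicular to v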
If your circle has a unit normal vector $\langle \cos(a), \cos(b), \cos(c)\rangle$, then depending on $c$ you have: $\langle \cos(t-\arcsin(\cos(b)/\sin(c)))/\sqrt{\sin(t)^2 +\cos(t)^2\sec(c)^2},\; \sin(t-\arcsin(\cos(b)/\sin(c)))/\sqrt{\sin(t)^2 +\cos(t)^2\sec(c)^2},\; \cos(t)\sin(c)/\sqrt{\cos(t)^2+\sin(t)^2\cos(c)^2}\rangle$ in general, $\langle \sin(t)\sin(a), -\sin(t)\cos(a), \cos(t)\rangle$ when $\cos(c)=0$, and of course $\langle \cos(t), \sin(t), 0\rangle$ when $\sin(c)=0$. The first two will have $t=0$ be the maximum point.
Finding a Pythagorean triple $a^2 + b^2 = c^2$ with $a+b+c=40$ Let's say you're asked to find a Pythagorean triple $a^2 + b^2 = c^2$ such that $a + b + c = 40$. The catch is that the question is asked at a job interview, and you weren't expecting questions about Pythagorean triples. It is trivial to look up the answer. It is also trivial to write a computer program that would find the answer. There is also plenty of material written about the properties of Pythagorean triples and methods for generating them. However, none of this would be of any help during a job interview. How would you solve this in an interview situation?
Assuming you do have a pen and paper, you could substitute $c = 40 - a - b$ into the first equation to get $$a^2 + b^2 = (40 - a - b)^2 = a^2 + b^2 + 1600 - 80(a + b) + 2ab.$$ Rewriting this equation, you get $$a + b - 20 = \frac{ab}{40}.$$ From this it follows that $ab$ has to be a multiple of $40$, i.e., one of them is a multiple of $5$. That narrows it down to only a few options... If that's still too much brute-force, you could also note that $a + b > 20$ from the above equation, and $a + b < 27$, since $c$ has to be the largest of the three. This leaves only the three pairs $$\{(5,16),(10,16),(15,8)\}.$$ Looking at the earlier equation, you see the third pair is the right one.
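If a pen-and-paper check feels risky in the moment, the same conclusion falls out of a three-line brute force (a sketch, obviously not something to run during the interview itself):

for a in range(1, 40):
    for b in range(a, 40):
        c = 40 - a - b
        if c > b and a*a + b*b == c*c:
            print(a, b, c)    # prints: 8 15 17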
This is a right triangle with a perimeter of 40. For example, the triangle (3,4,5) has perimeter 12, so you can get the solution: (120/12, 160/12, 200/12). The point is that you know some patterns in advance. (5,12,13) etc...
How many $3$-digit numbers are divisible by $4$? I want to find out how many $3$-digit numbers can be formed taking the digits from zero to nine without repetition such that they are divisible by four. My Work: I find out the number pairs for the ones and tens places which are divisible by four (like $04, 40, 08, 80, 12, 16,$ etc.). These are $22$ in number. For any pair, for the hundreds place, I have $8$ digits left (including $0$). So, my rough calculation becomes $8\times 22 = 176$. Then, I find out the number pairs without zero ($16$ in number). Thus, my final answer becomes $176 - 16\times 1 = 160$. Am I correct?
How many three-digit positive integers with distinct digits are divisible by $4$? If this was the intended question, then you are indeed correct. Your explanation could be improved if you made clear that you were subtracting the $16$ strings with leading digit zero. For a number to be divisible by $4$, the number formed by its last two digits (appending leading zeros as necessary) must be divisible by $4$. \begin{array}{c c c c c} \color{red}{00} & \color{blue}{04} & \color{blue}{08} & 12 & 16\\ \color{blue}{20} & 24 & 28 & 32 & 36\\ \color{blue}{40} & \color{red}{44} & 48 & 52 & 56\\ \color{blue}{60} & 64 & 68 & 72 & 76\\ \color{blue}{80} & 84 & \color{red}{88} & 92 & 96 \end{array} Of the $25$ such numbers, three of them have a repeated digit ($\color{red}{00}, \color{red}{44}, \color{red}{88}$). Of the remaining $22$ choices, exactly six include the digit $0$ ($\color{blue}{04}, \color{blue}{08}, \color{blue}{20}, \color{blue}{40}, \color{blue}{60}, \color{blue}{80}$). Since the leading digit of a three-digit positive integer cannot be $0$, in the absence of the restriction that the digits cannot be repeated, we would have nine choices for the leading digit. Given that restriction, we have $9 - 1 = 8$ choices for the leading digit when $0$ is one of the final two digits and $9 - 2 = 7$ choices for the leading digit when $0$ is not one of the final two digits. Since there are six admissible choices for the final two digits that include the digit $0$ and $22 - 6 = 16$ admissible choices for the final two digits that do not include the digit $0$, the number of three-digit numbers that are divisible by $4$ in which no digit is repeated is $$8 \cdot 6 + 7 \cdot 16 = 48 + 112 = 160$$
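The count is easy to confirm by brute force; a minimal sketch:

count = sum(1 for n in range(100, 1000)
            if n % 4 == 0 and len(set(str(n))) == 3)
print(count)   # 160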
HINT: There are $\lfloor2.5-\epsilon\rfloor=2$ single digit multiples of four. (excluding $0$) There are $\lfloor25-\epsilon\rfloor-2=22$ two digit multiples of four. There are $\lfloor 250-\epsilon\rfloor-22-2=225$ three digit multiples of four. You now have to subtract any cases where there are repeated digits.
How is the material conditional treated in Natural Deduction? I'm confused by the definition of the material conditional. In my implementation of propositional-logic I have the following definition of the material conditional: $$\frac{P\to{Q}}{\neg(P\land\neg{Q})}\quad\small\text{[MaterialConditionalElimination]}$$ $$\frac{\neg(P\land\neg{Q})}{P\to{Q}}\quad\small\text{[MaterialConditionalIntroduction]}$$ I should stress that this is a definition because there is (effectively) a bi-directional inference rule. That is to say, whenever you encounter $\neg(P\land\neg{Q})$ you can replace it by $P\to{Q}$ and vice-versa. However, according to Wikipedia, only the first of these rules is found in minimal logic. But I cannot see how you can derive the second from the first with the addition of the principle of explosion. Also, it is the second that serves, if one rule only can be taken, as a definition, because that is the rule that introduces the new $\to$ connective. Update: It seems that the material conditional does not find its way into propositional logic when defined in this natural deductive style. I have therefore taken it out of the aforementioned implementation. Further update: Well, it appears that it does, but only at the classical level. At this level, however, it is equivalent to logical consequence $\Rightarrow$ and therefore I am opting to leave it out still.
The implication sign does not denote material implication in intuitionistic logic. The Wikipedia page does not say what you think it does. It says that $\lnot P \lor Q$ (not $\lnot (P \land \lnot Q)$) entails $P \to Q$ in intuitionistic logic, but not in minimal logic (which is true but the reverse entailment is not provable in intuitionistic logic). In intuitionistic logic (and hence also in minimal logic) $\lnot P \lor Q$ is not equivalent to $\lnot(P \land \lnot Q)$ and neither of those is equivalent to $P \to Q$. To see that $\lnot P \lor Q$ is strictly stronger than $P \to Q$ in intuitionistic logic, take $P \equiv Q$ for $P$ a variable, then $P \to P$ is provable, but $\lnot P \lor P$ expresses the intuitionistically unacceptable law of the excluded middle. To see that $P \to Q$ is strictly stronger than $\lnot(P \land \lnot Q)$, take $Q$ to be a variable and take $P = \lnot\lnot Q$, then $\lnot(\lnot\lnot Q \land \lnot Q)$ is provable, but $\lnot\lnot Q \to Q$ is the intuitionistically unacceptable principle of double-negation elimination.
Let me try to answer my own question, although I am far from sure that I have it right. Perhaps this is just another way of asking it. The first rule, the introduction rule... $$ \frac{\neg(P\land\neg{Q})}{P\to{Q}} $$ ...it looks like it should be taken as a given in minimal logic. This is debatable, however, if you consider $\to$ to be a primitive of the logic, and therefore not requiring a rule for its definition at all. What Wikipedia says is that, additionally, in minimal logic $P\to Q$ logically entails $\neg(P\land\neg{Q})$. I believe that one way of stating this is in the form of the second inference rule, the elimination rule: $$ \frac{P\to{Q}}{\neg(P\land\neg{Q})} $$ Therefore the statements $P\to Q$ and $\neg(P\land\neg{Q})$ can be seen as being entirely equivalent, since we effectively have a bi-directional inference rule. If drawing attention to this equivalence seems churlish, consider that introduction and elimination rules are not generally symmetric in this manner. Consider, for example, the introduction rule for logical consequence: $$ \frac{[P]\;...\;Q}{P\Rightarrow{Q}} $$ Turning this rule on its head is nonsensical, although many (including me) might carelessly view the antecedent and consequent as being equivalent in day to day usage. However, of course the corresponding rule is not the introduction rule just turned on its head, it is Modus Ponens: $$ \frac{P\Rightarrow{Q}\;\;P}{Q} $$ So the rules concerned with the material conditional seem to stand out to me against all the other standard (whatever quite that means) rules in propositional logic in that they seem to define nothing more than a syntactic equivalence. Put another way, it seems to me that $P\to Q$ can be viewed as nothing more than a convenient shorthand for $\neg(P\land\neg Q)$ and it is this usage for which I am seeking clarification.
Finding parametric equations for curve of intersection I am having difficulty finding parametric equations of the curve of intersection of $z = x^2 - y^2$ and the cylinder $x^2 + y^2 = 1$. I am aware that the first equation represents a hyperbolic paraboloid (a saddle), and the second equation represents a cylinder. I think for the second one I can just do $x = \cos(t)$ and $y = \sin(t)$. Not sure about the other one.
We have the two surfaces $$ x^2-y^2 = z\\ x^2+y^2 = 1 $$ Solving for $x^2, y^2$ we have $$ x^2 = \frac 12(1+z)\\ y^2 = \frac 12(1-z) $$ so a parametric representation can be read as $$ (x(s),y(s),z(s)) = \left(\pm\sqrt{\frac 12(1+s)},\pm\sqrt{\frac 12(1-s)},s\right) $$ for $-1\le s \le 1$. Attached is a plot of the four leaves.
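A quick numerical check that this parametrization lies on both surfaces (a sketch for one of the four leaves, numpy assumed):

import numpy as np

s = np.linspace(-1.0, 1.0, 201)
x = np.sqrt((1 + s) / 2); y = np.sqrt((1 - s) / 2); z = s
print(np.max(np.abs(x**2 + y**2 - 1)))   # ~0: on the cylinder
print(np.max(np.abs(x**2 - y**2 - z)))   # ~0: on the other surface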
For the first one use that $$\cosh^2(t)-\sinh^2(t)=1$$ And since $$x^2=z+y^2$$ we get $$z+2y^2=1$$ so $$z=1-2y^2$$
Find all functions $f: \mathbb{N} \rightarrow \mathbb{N}$ which satisfy $ f\left(m^{2}+m n\right)=f(m)^{2}+f(m) f(n) $ Question - Find all functions $f: \mathbb{N} \rightarrow \mathbb{N}$ which satisfy the equation $$ f\left(m^{2}+m n\right)=f(m)^{2}+f(m) f(n) $$ for all natural numbers $m, n$. By putting $m=1$ and $f(1)=k$ we get $f(n+1)=k^2 + kf(n)$. Then the hint says to use $3^2 + 3\cdot1 = 2^2 +2\cdot4$ to get a polynomial relation for $k$. I am not getting how to use this hint... I think I am missing some very easy trick which I have not learnt yet. Any help will be appreciated, thank you.
Putting $n=1$ in the condition for $f$ gives $$f(m^2+m)=f(m)^2+kf(m)$$ Now set $m=3$. By the hint, we have $$f(3^2+3)=f(2^2+2\cdot 4)=f(2)^2+f(2)f(4)$$ which gives us the condition $$f(3)^2+kf(3)=f(2)^2+f(2)f(4)$$ You should be able to find $f(2)$,$f(3)$ and $f(4)$ in terms of $k$ by using your condition for $f(n+1)$. Hope this helps.
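For completeness, the algebra the hint leads to can be carried out symbolically; a sketch with sympy (my addition):

import sympy as sp

k = sp.symbols('k', positive=True)
f = {1: k}
for n in range(1, 4):
    f[n + 1] = sp.expand(k**2 + k*f[n])        # f(n+1) = k^2 + k f(n)
poly = sp.expand(f[3]**2 + k*f[3] - f[2]**2 - f[2]*f[4])
print(sp.factor(poly))   # k**3*(k - 1)*(2*k - 1), so k = 1 over the naturals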
The function which you are searching for has the following properties. $$f(x+y) = f(x)+f(y)$$ and $$f(x*y) = f(x)*f(y)$$ The only function which fits into these constraints is identity function, i.e, $$f(x)=x$$
Does $\lim_{n \to \infty} |a_n|=0$ and $\limsup_{n \to \infty} |b_n|<\infty$ imply $\lim_{n\to\infty} |a_nb_n|=0$? I would like to know if the following statement is true: Let $(a_n)\subset \mathbb{C}$ and $(b_n)\subset \mathbb{C}.$ If $$\lim_{n \rightarrow \infty} |a_n|=0$$ and $$\limsup_{n \rightarrow \infty} |b_n|\leq \alpha<\infty,$$ then $$\lim_{n \rightarrow \infty}|a_n b_n|=0.$$ Thank you very much. Masik
Yes, $\limsup |b_n|\le \alpha$ means that $|b_n|<\alpha+1$ eventually. This means that $0\le |a_nb_n|\le |a_n|(\alpha+1)$ eventually and the RHS $\to0$ as $n\to\infty$.
First of all, we know that $|b_n| \le M$ eventually, for some $M \in \mathbb N$. Therefore $0 \le \lim_{n \to \infty} |a_n|\,|b_n| \le M \lim_{n \to \infty} |a_n| = 0$.
Solve for x $ \sin 30° \sin x \sin 10° = \sin 20° \sin ({80°-x}) \sin 40° $ $ \sin 30° \sin x \sin 10° = \sin 20° \sin ({80°-x}) \sin 40° $ I tried transformation formulas, the $ 2\sin a \sin b $ one. I know the value of $\sin 30°$, but what about the others? Original problem: In triangle ABC, P is an interior point such that $\angle PAB = 10°$, $\angle PBA = 20°$, $\angle PAC = 40°$, $\angle PCA = 30°$; then what kind of triangle is it? I solved it till I got stuck here.
Assume AB = 1. Apply the sine law in PAB to get $PB = 2 \sin 10^\circ$ and $PA = 2 \sin 20^\circ$. Apply the sine law in PAC to get $PC = 4 \sin 20^\circ \sin 40^\circ$. Apply the cosine law in PBC to get $BC$ in terms of those angles. Note that $\cos 100^\circ = - \sin 10^\circ$. $BC^2 = 4 \sin^2 10^\circ + 16 \sin^2 20^\circ\sin^2 40^\circ + 16 \sin 20^\circ \sin 40^\circ \sin^2 10^\circ$ Suggestion: Convert all angles in $BC^2$ to $\sin 10^\circ$ (or $\sin 20^\circ$) by compound angle formulas. Hope that the result is 1.
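The hoped-for result does hold; a quick numerical check (my addition, numpy assumed):

import numpy as np

d = np.pi / 180
BC2 = (4*np.sin(10*d)**2
       + 16*np.sin(20*d)**2 * np.sin(40*d)**2
       + 16*np.sin(20*d)*np.sin(40*d)*np.sin(10*d)**2)
print(BC2)   # 1.0 to machine precision, so BC = AB and the triangle is isosceles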
Like Solve $\frac{\sin(x^\circ)\sin(80^\circ)}{\sin(170^\circ - x^\circ) \sin(70^\circ)} = \frac {\sin(60^\circ)}{\sin (100^\circ)}$ $$\dfrac{\sin(80-x)}{\sin x}=\dfrac{\sin30\sin10}{\sin20\sin40}$$ $$\implies\sin80\cot x-\cos80=\dfrac{\sin30\sin10}{\sin20\sin40}=\dfrac{4\sin10\sin80}{2\sin(3\cdot20)}$$ using https://brainly.in/question/3475250 $$\implies\cos10\cot x=\sin10\left(1+4\cos10\tan30\right)$$ $$\implies\sqrt3\cot x=\sqrt3\tan10+4\sin10$$ Like Simplifying $\tan100^{\circ}+4\sin100^{\circ}$ set $A=360 n-3x$ in $2\sec A\sin x+\tan x=-\tan A$ $$2\sec3x\sin x+\tan x=\tan3x$$ Here $x=10\implies2\cdot\dfrac2{\sqrt3}\sin10+\tan10=\dfrac1{\sqrt3}\iff4\sin10+\sqrt3\tan10=1$ Can you take it from here?
sufficient condition for derivative to be continuous? let $f:\mathbb{R} \rightarrow\mathbb{R}$ be a continuous function at $x_0\in\mathbb{R}$. In addition, $f'(x)$ is defined for every $x\in(x_0-\delta,x_0+\delta)\setminus\{x_0\}$. In addition, I know that $\lim_{x\to x_0} f'(x)=L$. Is it true that $f'(x)$ is continuous at every $x\in(x_0-\delta,x_0+\delta)\setminus\{x_0\}$?
Assuming I understand your notation, no. Let $$ g(x) = \begin{cases} x^2 \sin \frac{1}{x} & x \ne 0 \\ 0 & x = 0\end{cases} $$ $g$ is everywhere differentiable, but the derivative is not continuous. Now let $$ f(x) = g(x - \frac{\delta}{2}) + g(x + \frac{\delta}{2}) $$ It's everywhere differentiable, but the derivative is not continuous at the points $\pm \delta/2$.
$f'(x)$ need not be continuous at every $x \in (x_0 - \delta, x_0 + \delta)$. Example: take $\operatorname{rect}_\delta(x-x_0)$, the rectangle function with center at the point $x_0$ and width $\delta$. Then $f'(x)=\operatorname{rect}_\delta(x-x_0)$ is discontinuous at $x = x_0+\delta/2$ and $x = x_0-\delta/2$. The function $f(x)$, however, is continuous.
Greatest common divisor of real analytic functions Consider two real-valued real analytic functions $f$ and $g$. I want to prove that there exists a greatest common divisor $d$, which is a real analytic function. By greatest common divisor, I mean the following: Common divisor: There exist real analytic functions $q_1, q_2$ such that $f = dq_1, g = dq_2$, and Greateast: If there is any other function $d_1$ that satisfies 1. above, then there exists a real analytic function $q_3$ such that $d = d_1q_3$. I am guessing that a proof could be derived from the Taylor series expansion, but I am not sure how to proceed.
The relevant data are the roots (with multiplicity). A GCD of two analytic functions $f$ and $g$ will be an analytic function $h$ such that for each $a \in \mathbb{R}$ one has $\operatorname{ord}_a (h) = \min \{\operatorname{ord}_a (f), \operatorname{ord}_a (g) \}$ where $\operatorname{ord}_a$ denotes the multiplicity of the root at $a$ (setting it $0$ if there is no root). Put differently the order is the index where the Taylor series expansion around $a$ actually starts. Algebraically the ring of real analytic functions is a Bézout domain, see for example Ring of analytic functions, meaning every finitely generated ideal is principal and in particular any two elements admit a GCD (see GCD domain)
What do you mean by "greatest"? I.e., what would $\max\{\sin x, \cos x\}$ be? Check out the development leading up to GCD among integers: It starts with the division algorithm, i.e., for all $a, b \in \mathbb{Z}$ there are $q, r \in \mathbb{Z}$ with $0 \le r \le \lvert b \rvert$ such that $a = b q + r$. Compare to the same situation for polynomials, where now you have $a(x) = b(x) q(x) + r(x)$, where the degree of $r$ is less than the degree of $b$. From this fundamental relationship you develop the idea of greatest common divisor (be it in the sense of greatest absolute value for integers or largest degree for polynomials). This assumes you have an integral domain with a valuation function (absolute value, degree) and a division algorithm. Real valued functions with pointwise operations (sum, product) aren't an integral domain (there are zero divisors, functions that aren't zero that multiply to the zero function).
Regular Expression for Simple Language I'm having trouble writing a regular expression over the alphabet $\{a, b, c\}$ which produces the set of strings of length 3. I don't really understand how to restrict the length of the string. Obviously you could have 3 $a$'s, or 3 $b$'s or 3 $c$'s, but you could have $aab, aac, aba, \dots $ Is it something like $a^* \cup b^* \cup c^*$?
The regular expression you have given gives the language $$\{\varepsilon, a, aa, aaa, \dots\} \cup \{\varepsilon, b, bb, bbb, \dots\} \cup \{\varepsilon, c, cc, ccc, \dots\}$$ $$= \{\varepsilon, a, aa, aaa, \dots, b, bb, bbb, \dots, c, cc, ccc, \dots\},$$ which is clearly not what you are after. Using only union ($\cup$), concatenation ($\circ$), and the Kleene star ($^*$), you could do something like $$(a \cup b \cup c) \circ (a \cup b \cup c) \circ (a \cup b \cup c).$$
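A small enumeration confirms that this expression matches exactly the strings of length 3; a sketch:

from itertools import product
import re

strings = [''.join(p) for p in product('abc', repeat=3)]
print(len(strings))                                  # 27 = 3^3 strings
pattern = re.compile(r'(a|b|c)(a|b|c)(a|b|c)')
print(all(pattern.fullmatch(s) for s in strings))    # True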
I think it can be simply written as $((a+b+c)^3)^*$ since $(a+b+c)^3$ will produce all strings of length $3$.
Using $\gcd(a,b)$ to find gcd of other values $\gcd(a^2,b)$ and $\gcd(a^3,b)$ If $\gcd(a,b) = p$, a prime, what are the possible values of $\gcd(a^2,b)$ and $\gcd(a^3,b)$? Through examples I've been able to find the answer, but I don't know how to come up with a proof. Update: I think gcd of either is $p^2$, but again this is just by working examples. I'm not familiar with much number theory. I've just started the second section of my textbook, so all I know is just divisibility and primes.
You do not have enough information to answer unambiguously. The most we can say is that $\mathrm{gcd}(a^2,b)$ is either $p$ or $p^2$, and $\mathrm{gcd}(a^3,b)$ is one of $p$, $p^2$, and $p^3$. For suppose $\mathrm{gcd}(a,b)=p$. Then there are integers $x,y$ such that $ax+by=p$, so (squaring this expression) $a^2x^2+b(2axy+by^2)=p^2$, showing that a linear combination of $a^2,b$ with integer coefficients is $p^2$, which implies that $\mathrm{gcd}(a^2,b)$ divides $p^2$. Since $\mathrm{gcd}(a,b)$ divides any combination of $a,b$ and therefore of $a^2,b$, then $p$ divides $\mathrm{gcd}(a^2,b)$. Now: $\gcd(p,p^2)=p$ and $\gcd(p^2,p^2)=p^2$, so certainly $p^2$ is possible. On the other hand, $\gcd(p,p)=p$ and $\gcd(p^2,p)=p$, so $p$ is possible as well. A similar analysis, with similar examples, gives the result for $\mathrm{gcd}(a^3,b)$.
It is easy to prove that if $\gcd(a, b) = m$ then $\gcd(a / m, b / m) = 1$, $\gcd(k a, k b) = k \gcd(a, b)$, and if $\gcd(a, b) = \gcd(a, c) = 1$ then $\gcd(a, b c) = 1$. Splice those together.
Separating 18 people into 5 teams A teacher wants to divide her class of 18 students into 5 teams to work on projects, with two teams of 3 students each and three teams of 4 students each. a) In how many ways can she do this, if the teams are not numbered? b) What is the probability that two of the students, Mia and Max, will be on the same team? [This is not a homework problem.]
Assume first that the teams are labelled, with the two teams of three having labels A and B, and the three teams of four having labels C, D, and E. There are $\binom{18}{3}$ ways to choose the people who will be on Team A. For each of these ways, there are $\binom{15}{3}$ ways to choose Team B. For every way to choose Teams A and B, there are $\binom{12}{4}$ ways to choose Team C, and then $\binom{8}{4}$ ways to choose Team D, and finally, if we like, $\binom{4}{4}$ ways to choose Team E. However, when we remove the labels, the number of choices for the $2$ three-person teams gets divided by $2!$, and the number of choices for the $3$ four-person teams gets divided by $3!$, for a total of $$\frac{\binom{18}{3}\binom{15}{3}\binom{12}{4}\binom{8}{4}\binom{4}{4}}{2!3!}.$$ For the Mia and Max problem, we could count. But we prefer to work directly with probabilities. Imagine that the people first get divided into $2$ groups, of $6$ and $12$, to make up the two kinds of team. The probability that Mia is selected for the group of $6$ is $\frac{6}{18}$. Given that this has happened, the probability Max is chosen for the same team is $\frac{2}{17}$. Thus the probability Mia and Max are in the same team of three is $$\frac{6}{18}\cdot\frac{2}{17}.$$ Similarly, the probability that Mia and Max are in the same group of $4$ is $$\frac{12}{18}\cdot\frac{3}{17}.$$ Add. Remark: Doing problem (b) by counting and dividing is not difficult, just a little more messy-looking. Note that the probabilities are the same for the labelled teams case as for the unlabelled case. It is easier not to make a mistake by counting the number of ways that Mia and Max can be on the same labelled team, and dividing by the number of labelled teams.
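Both numbers are easy to evaluate; a sketch with exact arithmetic (my addition):

from math import comb, factorial
from fractions import Fraction

teams = (comb(18, 3) * comb(15, 3) * comb(12, 4) * comb(8, 4) * comb(4, 4)
         // (factorial(2) * factorial(3)))
print(teams)                                                 # 1072071000
p = Fraction(6, 18)*Fraction(2, 17) + Fraction(12, 18)*Fraction(3, 17)
print(p)                                                     # 8/51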
Starting with the "binomial(18, 3)+binomial(15, 3)+binomial(12, 4)+binomial(8, 4)+binomial(4, 4)" solution I got 1837 possible combinations, but if you start with dividing into the 4 teammember teams first I get "binomial(18, 4)+binomial(14, 4)+binomial(10, 4)+binomial(6, 3)+binomial(3, 3) = 4292 possible combinations", no idea if it's right or not just something to double check.
Is the number of Lebesgue covering dimensions a topological property? Is it correct to characterize the number of dimensions that a topological space has as a 'topological property'? By 'dimensions,' I have in mind Lebesgue covering dimensions. If the answer is yes, then why? If the answer is no, then what kind of a property is the number of dimensions?
Isn’t it obvious from the definition that the covering dimension is a topological invariant? It’s defined purely in terms of open sets and set-theoretic concepts.
There are many different definitions of dimension in different contexts. Some are topological, most are not. Look up: topological dimension.
If $a > b$, is $a^2 > b^2$? Given $a &gt; b$, where $a,b ∈ ℝ$, is it always true that $a^2 &gt; b^2$?
If $\: \color{#c00}{a > b}\: $ then $\: a^2\! -\! b^2 = (\color{#c00}{a\!-\!b})(a\!+\!b) > 0 \iff a\!+\!b >0 $
Given that $ a > b $, it is not always true that $ a^{2} > b^{2} $. One counterexample would be that one of $ a $ or $ b $ is negative, say $ a = 1 $, and $ b = -1 $. Then $$ a^2 = 1^{2} = 1 $$ and $$ b^{2} = (-1)^{2} = 1 $$ making $ a^2 = b^2 $, contradicting the claim that $ a^{2} > b^{2} $.
Estimation, bias, and mean square error Let $X$ be a continuous random variable with pdf $f(x) =\frac{1}{2}(1+ \theta x)$, for $-1 < x < 1$, and $-1 < \theta < 1$ (a) Show that $E(X) = \frac{\theta}{3}$. (b) Given a random sample of size $n$ from a population with pdf $f(X)$, consider the estimator $\hat{\theta} = 3\bar{X}$. Find the variance, bias, and mean squared error of $\hat{\theta}$. For (a) I simply integrate $x f(x)$ For (b) I'm not sure it is related to part (a) or the pdf at all. I'm not quite sure how to proceed. I know that $\operatorname{Var}(\hat{\theta})=E((\hat{\theta}-E[\hat{\theta}])^2)$, $\operatorname{Bias}(\hat{\theta})=E(\hat{\theta}-\theta)$, and $\operatorname{MSE}=\sqrt{\operatorname{Var}(\hat{\theta})}$ Do I just need to manipulate the definitions?
$\newcommand{\var}{\operatorname{var}}\newcommand{\E}{\operatorname{E}}$ \begin{align} \E(\bar X) & = \E\left( \frac {X_1+\cdots+X_n} n \right) \\[8pt] & = \frac 1 n \E(X_1+\cdots+X_n) = \frac 1 n (\E(X_1)+\cdots+\E(X_n)) \\[8pt] & = \frac 1 n \Big( n\E(X_1) \Big) = \E(X_1). \\[25pt] \var( \bar X ) & = \var \left( \frac {X_1+\cdots+X_n} n \right) \\[8pt] & = \frac 1 {n^2} \var(X_1+\cdots+X_n) = \frac 1 {n^2} (\var(X_1)+\cdots+\var(X_n)) \\[8pt] & = \frac 1 {n^2} \Big( n \var(X_1) \Big) = \frac 1 n \var(X_1). \end{align} Based on the above, you should be able to find the bias and the mean squared error.
$$\operatorname{Var}\left(\hat\theta\right)=\frac{3(1-\theta)}{n}$$ $$\operatorname{Bias}\left(\hat\theta\right)=E\left(\hat\theta\right)-\theta=0$$ $$\operatorname{Bias}^2\left(\hat\theta\right)=\frac{27(1-\theta)+4n\theta^2}{9n}$$
The elementary coordinate geometry of polynomials? Of rational expressions? Of radicals? With a few colleagues, we're trying to design an (intermediate) algebra course (US terminology) where we stress the interplay between algebra and geometry. The algebraic topics we would like to cover are (1) linear equation in two variables, (2) quadratic equations in two variables, (3) polynomials in one variable, (4) rational functions in one variable (though we're not sure we want to introduce functions), (5) radicals. For (1) and (2) there are obvious geometric counterparts: lines and conic sections. Question: Are there natural geometric counterparts for (3), (4) and (5)? Are there elementary geometric constructions that naturally lead to these algebraic objects? Side question: Are there (affordable) textbooks or lecture notes out there which have this kind of approach?
This seems rather ambitious for most U.S. college intermediate algebra courses, which typically are not-for-credit remedial courses that lie below the level of college algebra and precalculus courses. Nonetheless, here are some things I've used in precalculus courses that might be of use. To see what the graph of something like $yx^2 + 2y^3 = 3x + 3y$ looks like using a (standard implicit-incapable) graphing calculator, solve for $x$ in terms of $y$ using the quadratic formula (or $y$ in terms of $x$ when possible, but I'm giving an example where we don't have the option of solving for $y$) and then graph both solutions simultaneously as if $x$ and $y$ were switched. That is, if you're using one of the TI graphing calculators, then enter for y1= and y2= the following: y1 = (3+(9-4x(2x^3-3x))^(1/2))/(2x) y2 = (3-(9-4x(2x^3-3x))^(1/2))/(2x) (Note for would-be editors: Please don't LaTeX the expressions above, as what I've given is what the calculator input should be.) To account for the fact that you're graphing the inverse relation, rotate what you see 90 degrees counterclockwise and then reflect the rotated result across the vertical axis. Equivalently, you can reflect what you see about the line $y=x$, but I suspect what I first suggested is easier for students to carry out. Here's a more elaborate example. Suppose we want to know what the graph of $y^4 - 4xy^3 + 2y^2 + 4xy + 1 = 0$ looks like. (Yes, I know about the Newton polygon method, but let's not go there.) Although this can be solved for $y$ in terms of $x$, it is rather difficult to do so and the result is somewhat difficult to interpret graphically by hand. You'll get the 4 different expressions $y = x \pm \sqrt{x^2 - 1} \pm \sqrt{x^2 - x} \pm \sqrt{x^2 + x}$, where the 4 sign permutations are $(+,+,+),$ $(+,-,-)$, $(-,+,-)$, and $(-,-,+)$. On the other hand, it is easy to solve for $x$ in terms of $y$ and the result is $x = \frac{\left(y^2+1\right)^2}{4y\left(y^2-1\right)}$, which can be graphed by hand using standard methods for graphing rational functions. For graphs of polynomial functions, especially when given in (or easily put into) factored form with linear and real-irreducible quadratic factors with real coefficients (and probably best to mostly avoid using real-irreducible quadratics, at least at the beginning), you can discuss how their graphs locally look at each $x$-intercept and how their graphs roughly look globally by using "order of contact with the $x$-axis" notions (which you don't have to define precisely, of course) and end behavior. For example, since $y = (x+2)^3 (2x+1)^2 x (3-x)$ has the form $y = (x)^3(2x)^2(x)(-x) + \;$ lower order terms, or $y = -4x^7 + \;$ lower order terms, a zoomed out view of the graph will look like the graph of $y = -4x^7$, so the graph "enters at the upper left of quadrant 2" and "exits at the lower right of quadrant 4". Also, the graph passes through the $x$-axis at $x=-2$ "in a cubic fashion" so that locally at this zero the graph looks like a version (translated and reflected, the latter because in going from left to right the graph passes from positive $y$-values to negative $y$-values) of the graph of $y = x^3$. The same kind of analysis leads to what the graph locally looks like at the other $x$-intercepts, which I'll assume you know what I'm talking about by now since this is (or at least it used to be) a fairly standard topic in precalculus courses.
In the case of rational functions, one topic that could be investigated is linear fractional transformations (as they're called in complex analysis), or quotients of linear (= affine) functions, by looking at them through the lens of precalculus transformations of the graph of $y = \frac{1}{x}$. Although you definitely don't want to consider the general case, here's the general case version, where I'm assuming $c \neq 0$. (The first equality comes from a 1-step long division calculation.) $\frac{ax+b}{cx+d} = \frac{a}{c} + \frac{b - \frac{ad}{c}}{cx+d}$ $= \frac{a}{c} + \frac{\frac{b}{c} - \frac{ad}{c^2}}{x+\frac{d}{c}}$ $= \frac{a}{c} + \frac{1}{c^2}(bc-ad)\left[ \frac{1}{x - \left(-\frac{d}{c}\right)}\right]$ Note that this shows the graph of $y = \frac{ax+b}{cx+d}$ can be obtained from the graph of $y = \frac{1}{x}$ by a horizontal translation to the right by $-\frac{d}{c}$ units, followed by a vertical stretch by a factor of $\frac{1}{c^2}(bc-ad)$, followed by a vertical translation up by $\frac{a}{c}$ units. For a possibly useful reference, see Edward C. Wallace, "Investigations involving involutions", Mathematics Teacher 81 (1988), pp. 578-579. [See also these related letters in the Reader Reflections column: Thomas Edwards (MT 83, p. 496), Larry Hoehm (MT 83, p. 496), Andrew Berry (MT 95, p. 406), and Sidney H. Kung (MT 97, pp. 227 & 242).] While I'm on the topic, here's an "intermediate algebra appropriate" situation where a linear fractional transformation arises. For which number or numbers $x$, if any, can we find a number whose sum with $x$ equals its product with $x$? If we denote the desired number by $y$, then the condition becomes $x + y = xy$, or $y = \frac{x}{x-1}$.
For 5, for square roots, it seems almost too obvious to use the hypotenuse of right triangles. For higher roots, diagonals on cubes of higher dimensions? For 3, a quadratic in one variable is also a conic section. For higher degrees...so this is for high school right? ...yeah this one isn't obvious.
Prove that if $\chi(G-u-v)=\chi(G)-2$ for all vertices $u, v$ ($u \ne v$) then $G$ is a complete graph. I'm trying to prove this by contradiction: if $G$ isn't a complete graph, there must exist vertices $u,v \in V(G)$ for which the edge $uv \notin E(G)$. Then $u$ and $v$ must have the same color in a proper coloring $C$, and now I'd like to prove that by removing $u, v$ we have $\chi(G-u-v) = \chi(G) - 1$ (that would be the contradiction), but I'm not sure whether it's true.
Consider the contrapositive of the statement instead: If $G$ is not complete then $\exists u, v \neq u \text{ s.t. } \chi(G - u - v) \neq \chi(G) - 2 \quad \quad (1)$. To prove (1), note that since $G$ is not complete, $\exists u, v \neq u$ with the same color in a proper minimal coloring of the graph; for such a pair of vertices we have that $\chi(G - u - v) \geq \chi(G) - 1$ and therefore $\chi(G - u - v) \neq \chi(G)-2$.
The accepted answer assumes a great deal about the existence of 2 vertices of the same color in the proper coloring of an incomplete graph $G$. It is better to assume less. Consider this proof by contradiction instead: Assume $G$ is not a complete graph with $\chi(G)=n$ and that the statement holds. Then, $\exists$ vertices $u$, $v\neq u$ in $G$ s.t. $u$ is not adjacent to $v$. Remove these vertices and attain a graph $G-u-v$. By assumption, $\chi(G-u-v) = \chi(G) - 2 = n - 2$. Now, add back 2 vertices $u$, $v$ that are not adjacent to one another. This graph $G$ can clearly be colored in at most $(n-2) + 1 = n - 1$ colors since the maximum degree of any vertex is $\delta(G) = n - 1$, but this is a contradiction since $\chi(G) \leq n - 1 \implies n \leq n - 1$, a clear falsehood. Thus, $G$ is a complete graph if $\chi(G-u-v) = \chi(G) - 2$. $\Box$
how many permutations of {1,2,...,9} How many permutations of {1,2,…,9} are there such that 1 does not immediately precede 2, 2 does not immediately precede 3, and so forth up to 8 not immediately preceding 9? One obvious example of such a permutation might be 987654321, but there are many others, such as 132465879 or 351724698.
For $\{1,2,3,\dots,n\}$, the numbers are tabulated here, along with lots of information, formulas, references, links, and whatnot. For $n=9$, it says 148329.
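One way to reproduce the tabulated value is inclusion-exclusion over the 8 forbidden adjacencies: gluing $k$ chosen pairs $i,i+1$ into blocks leaves $(9-k)!$ orderings. A sketch:

from math import comb, factorial

n = 9
count = sum((-1)**k * comb(n - 1, k) * factorial(n - k) for k in range(n))
print(count)   # 148329, matching the value quoted above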
Let us do it step by step. First take the total number of permutations $= 9!$ (without any constraint and no repetition). Then, since 12 in sequence is not allowed, calculate the total number of permutations when there is a 12: $7!$. There are 8 such pairs in total, so the answer should be $9!-8\cdot 7!=322560$.
Multiplicative Inverse Element in $\mathbb{Q}[\sqrt[3]{2}]$ So elements of this ring look like $$a+b\sqrt[3]{2}+c\sqrt[3]{4}$$ If I want to find the multiplicative inverse element for the above general element, then I'm trying to find $x,y,z\in\mathbb{Q}$ such that $$(a+b\sqrt[3]{2}+c\sqrt[3]{4})(x+y\sqrt[3]{2}+z\sqrt[3]{4})=1$$ I can see that expanding gives me the system $$ax+2cy+2bz=1$$ $$bx+ay+2cz=0$$ $$cx+by+az=0$$ I don't want to solve this using matrices because I know it will turn ugly. Is there a more elegant way to approach the inverse calculation to avoid the ugly calculation? The only thing I thought of was setting the bottom two equations equal to each other $$bx+ay+2cz=cx+by+az$$ Which seems to indicate that $$a=b, b=c, a=2c$$ but this would make me think $a=b=c=0$ and thus a multiplicative inverse does not exist.
First let me remark that there is a general abstract argument which shows that if $L/K$ is a field extension and $a \in L$ is algebraic over $K$, then the $K$-algebra $K[a]$ is a field. Namely, $K[a]$ is an integral domain which is also a finite-dimensional vector space over $K$. This implies for $0 \neq x \in K[a]$ that the linear map $K[a] \to K[a]$, $y \mapsto x \cdot y$ is surjective, since it is injective, which means that $x$ is invertible. To get a constructive proof, we just have to sit down and make the linear algebra argument here explicit. The underlying vector space of $\mathbb{Q}[\sqrt[3]{2}]$ has basis $1,\sqrt[3]{2},\sqrt[3]{4}$. For some fixed non-zero element $x=a + b \sqrt[3]{2} + c \sqrt[3]{4}$, let's write down the linear map $y \mapsto x \cdot y$ in terms of this basis: $~~~\,1 \mapsto a + b \sqrt[3]{2} + c \sqrt[3]{4}$ $\sqrt[3]{2} \mapsto 2c + a \sqrt[3]{2} + b \sqrt[3]{4} $ $\sqrt[3]{4} \mapsto 2b+2c\sqrt[3]{2} + a \sqrt[3]{4} $ The corresponding matrix is: $$\begin{pmatrix} a &amp; 2c &amp; 2b \\ b &amp; a &amp; 2c \\ c &amp; b &amp; a \end{pmatrix}$$ From linear algebra we know how to invert matrices, for example via Cramer's rule. In this case, we get: $$\frac{1}{a^3 - 6abc + 2b^3 + 4c^3} \cdot \begin{pmatrix} a^2 - 2bc &amp; 2b^2 - 2ac &amp; 4c^2 - 2ab \\ 2c^2 - ab &amp; a^2 - 2bc &amp; 2b^2 - 2ac \\ b^2 - ac &amp; 2c^2 - ab &amp; a^2 - 2bc \end{pmatrix}$$ The determinant $a^3 - 6abc + 2b^3 + 4c^3$ has been computed via the Rule of Sarrus and the cofactors have been computed by the usual formula for $2 \times 2$-determinants. This matrix represents the linear map $y \mapsto x^{-1} \cdot y$ with respect to our basis. Thus, to get $x^{-1}$, we just have to evaluate at $1$, and we get: $$x^{-1} = \frac{1}{a^3 - 6abc + 2b^3 + 4c^3} \cdot ((a^2 - 2bc) + (2c^2 - ab) \sqrt[3]{2} + (b^2 - ac) \sqrt[3]{4})$$ Of course, this method works quite generally. For example, for $x=a + b \sqrt[3]{p} + c \sqrt[3]{p}^2$ we have: $$x^{-1} = \frac{1}{a^3 - 3pabc + pb^3 + p^2 c^3} \cdot ((a^2 - pbc) + (pc^2 - ab) \sqrt[3]{p} + (b^2 - ac) \sqrt[3]{p}^2)$$
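The closed form is easy to verify symbolically; a sketch with sympy (my addition):

import sympy as sp

a, b, c = sp.symbols('a b c')
t = sp.Rational(2)**sp.Rational(1, 3)                   # cube root of 2
x = a + b*t + c*t**2
num = (a**2 - 2*b*c) + (2*c**2 - a*b)*t + (b**2 - a*c)*t**2
d = a**3 - 6*a*b*c + 2*b**3 + 4*c**3
print(sp.expand(x*num - d))                             # 0, so x * (num/d) = 1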
Try these: Rationalizing the denominator 3 Deradicalization of denominators
Proving any product of four consecutive integers is one less than a perfect square Prove or disprove that : Any product of four consecutive integers is one less than a perfect square. OK so I start with $n(n+1)(n+2)(n+3)$ which can be rewritten $n(n+3)(n+1)(n+2)$ After multiplying we get $(n^2 + 3n)(n^2 + 3n + 2)$ How do I proceed from here to end up with something squared $- 1$?
You might note that for any $N$ at all, $$(N-1)(N+1) = N^2-1$$ and so is one less than a perfect square. I presume you have seen this before. Then you could take $N = M+1$ in the formula above, and get $$M\cdot(M+2) = (M+1)^2 - 1$$ is one less than a perfect square. And here you have $M = n^2+3n$, don't you?
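The resulting identity can be confirmed in one symbolic line (a sketch, sympy assumed):

import sympy as sp

n = sp.symbols('n')
print(sp.expand((n**2 + 3*n + 1)**2 - 1 - n*(n + 1)*(n + 2)*(n + 3)))   # 0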
It seems like the way to attack this that doesn't require guessing is this: Start with $N (N+1) (N+2) (N+3) = N^4 + 6 N^3 + 11 N^2 + 6 N = M^2 - 1$ Since the product is "near" $N^4$, $M$ has to be "near" $N^2$. And it's very likely that $M$ is a polynomial in $N$. So set $M = N^2 + aN + b$. Then $M^2-1 = N^4 + 2aN^3 + (2b+a^2)N^2 + 2abN + (b^2-1)$. Setting that into the first equation and equating powers of $N$ gives $2a = 6, 2b+a^2 = 11, 2ab = 6, b^2-1 = 0$, which has the unique solution $a = 3, b = 1$. So $M = N^2 + 3N + 1$ and $N (N+1) (N+2) (N+3) = (N^2 + 3N + 1)^2 -1$.
What's the behavior of $\displaystyle\sum_{n=1}^\infty (z+\sqrt{5}+2i)^{n!}$ outside its radius of convergence? I want to check the behavior of $$\displaystyle\sum_{n=1}^\infty (z+\sqrt{5}+2i)^{n!}$$ outside its radius of convergence. I've tried to use the ratio test as follows: $$\left|\frac{(z+\sqrt{5}+2i)^{(n+1)!}}{(z+\sqrt{5}+2i)^{n!}}\right|=|(z+\sqrt{5}+2i)^{n\cdot n!}|$$ This will converge to zero if $|z+\sqrt{5}+2i|<1$. However, what about the case $|z+\sqrt{5}+2i|=1$? How can I show convergence or divergence in this case? EDIT: I've updated my question to make things more clear. Why was I down-voted for that?
Evidently, this is a power series around $-\sqrt 5 -2i$, so you really should make a change of variable and look at the power series $$\sum z^{n!}.$$ The ratio test for series will indeed give you the correct radius of convergence 1, as you have written in your question. However, the ratio test never helps you for boundary behaviour. On the boundary, you usually use one of the following approaches: terms do not even converge to zero, therefore divergent terms are positive for $z=R$ and converge (because of known series, integral test, etc), therefore absolute convergence on boundary terms converge for $z\not=R$ but on boundary due to Leibniz test/Abel criterion/Raabe test etc.
If we wish to known the radius of convergence of $$\sum_{n=0}^\infty z^{n!}=z+z+z^2+z^6+z^{24}+\cdots, $$ then we calculate, $$\frac{1}{R} = \limsup \sqrt[n]{|\text{$n^{th}$ coefficient}|} = \limsup \sqrt[n]{\{0,2,1,0,0,0,1,0,0,0,\ldots\}}=1. $$ Rudin Principles of Mathematical Analysis - page 69 - Theorem 3.39
Value of $x^2 \sin (\frac1x)$ at $x=0$ What is the value of $y=x^2 \sin(1/x)$ at $x=0$? I see that $x^2 =0$ but $\sin (1/x)$ is undefined. More generally: if a function made up of a product of functions, like $y= a(x)b(x)c(x)d(x)\dots$, then at a specific value of $x$ if one of the functions is $0$ and all the other functions are undefined, does it still mean that $y=0$?
The function $x \, \longmapsto \, x^{2} \sin \Big( \frac{1}{x} \Big)$ is not defined at $x = 0$ because "$1/0$" does not exist. Note also that this function is continuous on $\mathbb{R}^{\ast}$. However, this is not all we can say. Since the function $x \, \longmapsto \, \sin \Big( \frac{1}{x} \Big)$ is bounded, one can prove that : $$ \lim \limits_{x \to 0} x^{2} \sin \Big( \frac{1}{x} \Big) = 0.$$ As a consequence, even though the function $x \, \longmapsto \, x^{2} \sin \Big( \frac{1}{x} \Big)$ is not defined at $x=0$, it has a finite limit as $x$ goes to $0$. So, this function can be extended to a new function, say $\overline{f}$, defined on $\mathbb{R}$ the following way : $$ \overline{f}(x) = \begin{cases} x^{2} \sin \Big( \frac{1}{x} \Big) & \text{if } x \neq 0 \\[2mm] 0 & \text{if } x = 0 \end{cases} $$ This new function $\overline{f}$ is defined at $x=0$ and continuous on $\mathbb{R}$. In a very informal, non-mathematical way, we could say that the value of the function $x \, \longmapsto \, x^{2} \sin \Big( \frac{1}{x} \Big)$ at $x=0$ is $0$. But this is not really the truth!
Pretty sure if one of the functions is undefined, $y$ must also be undefined. Pretty sure this is a question about limits though, where $y\rightarrow 0$ when $x\rightarrow 0$.
What is the importance of eigenvalues/eigenvectors? What is the importance of eigenvalues/eigenvectors?
Short Answer Eigenvectors make understanding linear transformations easy. They are the "axes" (directions) along which a linear transformation acts simply by "stretching/compressing" and/or "flipping"; eigenvalues give you the factors by which this compression occurs. The more directions you have along which you understand the behavior of a linear transformation, the easier it is to understand the linear transformation; so you want to have as many linearly independent eigenvectors as possible associated to a single linear transformation. Slightly Longer Answer There are a lot of problems that can be modeled with linear transformations, and the eigenvectors give very simple solutions. For example, consider the system of linear differential equations \begin{align*} \frac{dx}{dt} &= ax + by\\ \frac{dy}{dt} &= cx + dy. \end{align*} This kind of system arises when you describe, for example, the growth of population of two species that affect one another. For example, you might have that species $x$ is a predator on species $y$; the more $x$ you have, the fewer $y$ will be around to reproduce; but the fewer $y$ that are around, the less food there is for $x$, so fewer $x$s will reproduce; but then fewer $x$s are around so that takes pressure off $y$, which increases; but then there is more food for $x$, so $x$ increases; and so on and so forth. It also arises when you have certain physical phenomena, such as a particle on a moving fluid, where the velocity vector depends on the position along the fluid. Solving this system directly is complicated. But suppose that you could do a change of variable so that instead of working with $x$ and $y$, you could work with $z$ and $w$ (which depend linearly on $x$ and also $y$; that is, $z=\alpha x+\beta y$ for some constants $\alpha$ and $\beta$, and $w=\gamma x + \delta y$, for some constants $\gamma$ and $\delta$) and the system transformed into something like \begin{align*} \frac{dz}{dt} &= \kappa z\\ \frac{dw}{dt} &= \lambda w \end{align*} that is, you can "decouple" the system, so that now you are dealing with two independent functions. Then solving this problem becomes rather easy: $z=Ae^{\kappa t}$, and $w=Be^{\lambda t}$. Then you can use the formulas for $z$ and $w$ to find expressions for $x$ and $y$. Can this be done? Well, it amounts precisely to finding two linearly independent eigenvectors for the matrix $\left(\begin{array}{cc}a & b\\c & d\end{array}\right)$! $z$ and $w$ correspond to the eigenvectors, and $\kappa$ and $\lambda$ to the eigenvalues. By taking an expression that "mixes" $x$ and $y$, and "decoupling it" into one that acts independently on two different functions, the problem becomes a lot easier. That is the essence of what one hopes to do with the eigenvectors and eigenvalues: "decouple" the ways in which the linear transformation acts into a number of independent actions along separate "directions", that can be dealt with independently. A lot of problems come down to figuring out these "lines of independent action", and understanding them can really help you figure out what the matrix/linear transformation is "really" doing.
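The decoupling step is exactly what a numerical eigendecomposition gives you; a minimal sketch with an arbitrary example matrix (numpy assumed, my choice of values):

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
lam, V = np.linalg.eig(A)          # eigenvalues and eigenvector columns
x0 = np.array([1.0, 0.0])
u0 = np.linalg.solve(V, x0)        # x0 in eigen-coordinates
t = 0.5
# each eigen-coordinate evolves independently: u_i(t) = exp(lam_i t) u_i(0)
x_t = V @ (np.exp(lam * t) * u0)
print(x_t)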
An eigenvector $v$ of a matrix $A$ is a direction unchanged by the linear transformation: $Av=\lambda v$. An eigenvalue of a matrix is unchanged by a change of coordinates: $\lambda v =Av \Rightarrow \lambda (Bu) = A(Bu)$. These are important invariants of linear transformations.
Elegantly Proving that $~\sqrt[5]{12}~-~\sqrt[12]5~>~\frac12$ $\qquad$ How could we prove, without the aid of a calculator, that $~\sqrt[5]{12}~-~\sqrt[12]5~>~\dfrac12$ ? I have stumbled upon this mildly interesting numerical coincidence by accident, while pondering on another curious approximation, related to musical intervals. A quick computer search then also revealed that $~\sqrt[7]{12}~-~\sqrt[12]7~>~\tfrac14~$ and $~\sqrt[7]{15}~-~\sqrt[15]7~>~\tfrac13.~$ I am at a loss at finding a meaningful approach for any of the three cases. Moving the negative term to the right hand side, and then exponentiating, is, for painfully obvious reasons, unfeasible. Perhaps some clever manipulation of binomial series might show the way out of this impasse, but I fail to see how...
An approach using binomial series could look as follows: For small positive $x$ and $y$ one has $$(1+x)^{1/5}>1+{x\over5}-{2x^2\over25},\qquad (1+y)^{1/12}<1+{y\over12}\ .$$ Using the ${\tt Rationalize}$ command in Mathematica one obtains, e.g., $12^{1/5}\doteq{13\over8}$. In fact $$12\cdot(8/13)^5-{18\over17}={1398\over 6\,311\,981}>0\ .$$ It follows that $$12^{1/5}>{13\over8}\left(1+{1\over17}\right)^{1/5}>{13\over8}\left(1+{1\over85}-{2\over 85^2}\right)\doteq1.64367\ .$$ In the same way Mathematica produces $5^{1/12}\doteq{8\over7}$, and one then checks that $$5\cdot (7/8)^{12}-{141\over140}=-{136\,294\,769\over2\,405\,181\,685\,760}<0\ .$$ It follows that $$5^{1/12}<{8\over7}\left(1+{1\over140}\right)^{1/12}<{8\over7}\left(1+{1\over12\cdot 140}\right)\doteq1.14354\ .$$ This solution is not as elegant as the solution found by Giovanni Resta, but the involved figures are considerably smaller.
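The two rational inequalities used above can be verified with exact arithmetic; a sketch (my addition):

from fractions import Fraction as F

print(12 * F(8, 13)**5 - F(18, 17) > 0)     # True
print(5 * F(7, 8)**12 - F(141, 140) < 0)    # True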
A quick search through the $5$th and $12$th powers of the first 25 integers yields the fact that $5 \approx \frac {8^{12}}{7^{12}}$ and that $12 \approx \frac {23^{5}}{14^{5}}$. Thus $\sqrt[12]{5}\approx\frac8{7}$ and $\sqrt[5]{12}\approx\frac{23}{14}$. The rest follows. Let $\sqrt[12]{5} = \frac{16(1+a)}{14}$ and let $\sqrt[5]{12} = \frac{23(1+b)}{14}$. Then $\sqrt[5]{12} - \sqrt[12]{5} = \frac{23(1+b)}{14} - \frac{16(1+a)}{14}= \frac{23+23b-16-16a}{14} = \frac {7}{14} + \frac{23b-16a}{14}= \frac {1}{2} + \frac{23b-16a}{14} $. Just need to demonstrate that $23b>16a$
group theory for non-mathematicians A very smart non-mathematician friend is looking to learn about groups, and I was wondering if people might have suggestions (this is NOT a duplicate of this question, since a textbook is not what I am looking for, at least not at first).
Groups and Symmetry: A Guide to Discovering Mathematics, by David W. Farmer. The highlighted title may convince you that it assumes not too much mathematics of the learner. It is a very little book, not of the Definition-Theorem-Proof type. At least in the on-line preview I don't find a single formal mathematical statement, but always beautiful pictures.
I would suggest reading Simon Singh's Fermat's Last Theorem. http://simonsingh.net/books/fermats-last-theorem/ It starts with an introduction to a simple problem and its solutions, and traces the entire history as it evolves toward solving the problem, with the group theory surrounding it, before moving into the advanced theory.
Distance Rate Time problem One morning, Ryan remembered lending a friend a bicycle. After breakfast, Ryan walked over to the friend’s house at 3 miles per hour, and rode the bike back home at 7 miles per hour, using the same route both ways. The round trip took 1.75 hours. What distance did Ryan walk?
HINT: Let the distance between the friend’s house from Ryan's is $d$ miles As we know speed $\displaystyle =\frac{\text{distance}}{\text{time}}\implies $ time $\displaystyle =\frac{\text{distance}}{\text{speed}}$ So, to go to the friend’s house, he took $\frac d3$ hours While returning he took $\frac d7$ hours So, $\frac d3+\frac d7=1.75$
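Solving the resulting equation exactly (a sketch; the computation is elementary):

from fractions import Fraction

d = Fraction(7, 4) / (Fraction(1, 3) + Fraction(1, 7))   # 1.75 / (10/21)
print(d, float(d))                                        # 147/40 = 3.675 miles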
As given, the total time taken is 1.75 hrs. So assume the one-way distance is $d$; then $$1.75=\frac{d}{3}+\frac{d}{7}=\frac{10d}{21},$$ so $d=3.675$ miles is the walking distance.
6 periods and 5 subjects Question : There are 6 periods in each working day of a school. In how many ways can one organize 5 subjects such that each subject is allowed at least one period? Is the answer 1800 or 3600 ? I am confused. Initially this appeared as a simple question. By googling a bit, I am stuck with two answers. Different sites give different answers, and I am unable to decide which is right. Approach 1 (Source) we have 5 sub and 6 periods so their arrangement is 6P5 and now we have 1 period which we can fill with any of the 5 subjects so 5C1 6P5*5C1=3600 Approach 2 (Source) subjects can be arranged in 6 periods in 6P5 ways. Remaining 1 period can be arranged in 5P1 ways. Two subjects are alike in each of the arrangement. So we need to divide by 2! to avoid overcounting. Total number of arrangements = (6P5 x 5P1)/2! = 1800 Alternatively this can be derived using the following approach. 5 subjects can be selected in 5C5 ways. Remaining 1 subject can be selected in 5C1 ways. These 6 subjects can be arranged themselves in 6! ways. Since two subjects are same, we need to divide by 2! Total number of arrangements = (5C5 × 5C1 × 6!)/2! = 1800 Is any of these approaches right, or is the answer different?
Approach 1 is incorrect. It arranges any five of the courses into one period each, then assigns the left over course to some period. It double counts each arrangement by having each of the two courses taught at the same time included in the original five. So the arrangement $12,3,4,5,6$ is counted as $1,3,4,5,6$ plus $2$ in first period and as $2,3,4,5,6$ plus $1$ in first period. Approach 2 is the same, but acknowledges the double counting in the division by $2!$ I cannot follow the argument in the paragraph starting "Alternatively"
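A brute-force enumeration settles the $1800$ vs. $3600$ question directly; this throwaway Python sketch assigns one of the five subjects to each of the six periods and counts the timetables in which every subject appears:

```python
from itertools import product

# 5^6 = 15625 possible timetables; keep those using all five subjects.
count = sum(1 for timetable in product(range(5), repeat=6)
            if len(set(timetable)) == 5)
print(count)  # 1800
```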
I am a student of class 10 and even I had the same question in my book. But neither of your answers is correct; the exact answer is 600. I can explain. _ _ _ _ _ _ imagine these are the 6 spaces where you should put the 5 subjects. The first 5 periods can be filled in 5P5 ways, i.e. 5!=120, and then the remaining one period can be filled with any one subject from the 5, i.e. 5P1=5. Then multiply both, which gives 120*5=600. And Combination cannot be used because this is a problem based on Permutation. Combination is mere selection of objects; Permutation is an orderly arrangement of objects. Here you have to arrange the subjects, not just select them. There you go!
Derivative of definite integral using Taylor's theorem So I want to obtain the following form: For $x, p \in R^n$ $$\nabla f(x+p) = \nabla f(x) + \int^1_0 \nabla^2 f(x+tp)p \,dt$$ for $f$ twice continuously differentiable and $t \in (0,1)$. Taylor's theorem in integral form is: $$f(x) = f(a) + \int^x_a \nabla f(t) (x-t)\,dt$$ from here: https://www.math.upenn.edu/~kazdan/361F15/Notes/Taylor-integral.pdf My approach is: $$f(x+p) = f(x) + \int^{x+p}_x \nabla f(t) (x+p-t)\,dt$$ Reformulated as: $$f(x+p) = f(x) + \int^1_0 \nabla f(x+tp)(1-t)p\,dt$$ $$= f(x) + \int^1_0 \nabla f(x+tp) p \,dt - \int^1_0 \nabla f(x+tp) t p \,dt$$ But I am not sure how to take the derivatives of the integrals with respect to x; am I at a dead end? Should I instead, just begin with $\nabla f(x)$ in place of $f(x)$ when I apply Taylor's theorem?
Okay, I figured out the problem. Taylor's theorem in integral form is actually $$f(x) = f(a) + \int^x_a \nabla f(t) \,dt$$ From there, we have $$f(x+p) = f(x) + \int_0^1 \nabla f(x+tp)p\,dt$$ And we can substitute $\nabla f(x)$ for f(x) into the above statement. EDIT: The above is incorrect. Using https://www.math.washington.edu/~folland/Math425/taylor2.pdf (page 3), (with the same notation regarding $\alpha$) $$f(x+p) = f(x) + \sum_{|\alpha| = 1} \frac{p^{\alpha}}{\alpha!} \int_0^1 \partial^{\alpha} f(x+pt) \,dt $$ $$f(x+p) = f(x) + \sum_{j=1}^n p_j \int_0^1 \partial_j f(x+pt) \,dt = f(x) + p \cdot \int_0^1 \nabla f(x+pt)\,dt $$ Taking the gradient of the above: $$\frac{\partial}{\partial_i} f(x+p) = \frac{\partial}{\partial_i} f(x) + \frac{\partial}{\partial_i} p \cdot \int_0^1 \nabla f(x+pt) \, dt = \frac{\partial}{\partial_i} f(x) + p \cdot \int^1_0 \frac{\partial}{\partial_i} \nabla f(x+pt) \,dt $$ We have overall that $$\nabla f(x+p) = \nabla f(x) + \int^1_0 \nabla^2 f(x+pt) p \, dt$$
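The final identity is also easy to sanity-check numerically. The sketch below is purely illustrative (the test function, point, and step count are arbitrary choices): it compares $\nabla f(x+p)$ with $\nabla f(x)+\int_0^1 \nabla^2 f(x+tp)\,p\,dt$ via a trapezoidal rule.

```python
import numpy as np

# Test function f(x) = sum_i sin(x_i): grad f(x) = cos(x), Hessian = diag(-sin(x)).
def grad(x):
    return np.cos(x)

def hess(x):
    return np.diag(-np.sin(x))

x = np.array([0.3, -0.7, 1.1])
p = np.array([0.5, 0.2, -0.4])
ts = np.linspace(0.0, 1.0, 2001)
integrand = np.array([hess(x + t * p) @ p for t in ts])
integral = np.trapz(integrand, ts, axis=0)   # approximate the dt-integral
print(np.allclose(grad(x + p), grad(x) + integral, atol=1e-6))  # True
```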
connected sum of surfaces When I read Massey's Algebraic Topology: An Introduction, page 9, he points out that the topological type of $S_1 \# S_2$ (here $S_i$ is a surface, and $\#$ is the connected sum, i.e., cutting an open disc $D_i$ from each surface and then gluing the boundary circles through a homeomorphism $h$) does not depend on the choice of discs $D_i$ or the choice of the homeomorphism $h$. The independence of $h$ is quite evident, but I don't know why the choice of discs is irrelevant. Is there any reference?
Wikipedia has an extremely short "Disc Theorem" entry which references Palais' "Extending diffeomorphisms" paper. It says Disc Theorem. For a smooth, connected, $n$-dimensional manifold $M$, if $f, f'\colon D^n \to M$ are two equi-oriented embeddings then they are ambiently isotopic. This is one of the fundamental results in Differential Topology, and in particular it implies that connected sums are well-defined wrt choice of embeddings. There might be a simpler proof in $2$D, but this is the standard result which is typically cited for all dimensions. (Here "ambiently isotopic" means there is an isotopy $H: M\times I \to M$ which begins at the identity map and induces an isotopy between $f$ and $f'$; "equi-oriented" means that $f$ preserves orientation iff $f'$ does. In fact the proof of Palais' theorem shows a bit more: the ambient isotopy can be chosen to be fixed outside of a compact, contractible subspace containing the images of $f$ and $f'$.) It's been a while since I looked at it, but a proof sketch sort of goes like this: First choose a small open tube around the unit interval $I\subset U\subset \mathbb{R}^2$ and an embedding $\gamma\colon \bar U \to M$ where $\gamma(0) = f(0)$, $\gamma(1) = f'(0)$. Now pick a small disc $D\subset U$ centred at $0$, and construct an ambient isotopy in the tube $F\colon U\times I \to U$ which transports $D$ to a disc centred at $1$. Then you have to construct ambient isotopies $H_1, H_2$ on $M$ which shrink $f(D^n)$ and $f'(D^n)$ down to $\gamma(D)$ and $\gamma(F_1(D))$ respectively (I guess this step uses a linearization trick). Then these three isotopies are pieced together to give the result.
If you consider two different locations/sizes of the disc on one surface, you can define a homotopy on that surface that transforms one disc into the other. This works because all discs are homotopically trivial. Hence the topological type of the connected sum does not depend on the particular choice of discs.
Complete induction of $10^n \equiv (-1)^n \pmod{11}$ To prove $10^n \equiv (-1)^n\pmod{11}$, $n\geq 0$, I started an induction. It's $$11|((-1)^n - 10^n) \Longrightarrow (-1)^n -10^n = k*11,\quad k \in \mathbb{Z}. $$ For $n = 0$: $$ (-1)^0 - (10)^0 = 0*11 $$ $n\Rightarrow n+1$ $$\begin{align*} (-1) ^{n+1} - (10) ^{n+1} &amp;= k*11\\ (-1)*(-1)^n - 10*(10)^n &amp;= k*11 \end{align*}$$ But I don't get the next step.
You are not setting up your induction very well. You should not start with the equality you want to establish, namely that $(-1)^{n+1}-10^{n+1}$ is a multiple of $11$. Instead, you should start with the Induction Hypothesis, which is that $(-1)^n - 10^n$ is a multiple of $11$. So: the Inductive Step is to show that if $(-1)^n - 10^n$ is a multiple of $11$, then $(-1)^{n+1} - 10^{n+1}$ is also a multiple of $11$. Let's write out our Induction Hypothesis: it says that $$\text{There exists an integer }k\text{ such that }(-1)^n - 10^n = 11k.$$ What we want to prove is that: $$\text{there exists an integer }\ell\text{ such that }(-1)^{n+1}-10^{n+1}=11\ell.$$ (Note that the multiple may be different, that's why I used a different letter). So now we can try manipulating the expression we want. One possibility is to use the following identity: $$a^{n+1}-b^{n+1} = (a-b)(a^n + a^{n-1}b + a^{n-2}b^2 + \cdots + ab^{n-1}+b^n),$$ if you already know this identity. So we have, with $a=-1$ and $b=10$, $$ (-1)^{n+1} - 10^{n+1} = \Bigl( (-1) - 10\Bigr)\Bigl( (-1)^n + (-1)^{n-1}(10) + \cdots + (-1)10^{n-1} + 10^n\Bigr).$$ Now notice that you don't even need to use the induction hypothesis to conclude that $(-1)^{n+1}-10^{n+1}$ is a multiple of $11$ (as could be seen in mac's answer). If you don't know the identity, then you can perform some purely algebraic manipulations. E.g., $$\begin{align*} (-1)^{n+1} - 10^{n+1} &= -1\left( (-1)^n + 10^{n+1}\right)\\ &= -\left( (-1)^n -10^n + 10^n + 10^{n+1}\right)\\ &= -\left( \Bigl((-1)^n - 10^n\Bigr) + 10^n\Bigl(1 + 10\Bigr)\right)\\ &= -\left( 11k + 10^n(11)\right) &\quad&\text{(by the induction hypothesis)}\\ &= -\left( 11(k+10^n)\right)\\ &= 11\left( -(k+10^n)\right), \end{align*}$$ which gives that $(-1)^{n+1} - 10^{n+1}$ is a multiple of $11$, as desired, from the assumption that $(-1)^n - 10^n$ is a multiple of $11$. But easier still is to use the following property of congruences: Proposition. Let $a,b,c,d,k$ be integers. If $$a\equiv b\pmod{k}\qquad\text{and}\qquad c\equiv d\pmod{k}$$ then $ac\equiv bd\pmod{k}$. Proof. Since $a\equiv b\pmod{k}$, then $k|a-b$, so $k$ divides any multiple of $a-b$; for example, $k|(a-b)c = ac-bc$. Since $k$ divides $ac-bc$, then $ac\equiv bc\pmod{k}$. Since $c\equiv d\pmod{k}$, then $k|c-d$, so $k|(c-d)b = cb-db$, hence $bc\equiv bd\pmod{k}$. Since $ac\equiv bc\pmod{k}$ and $bc\equiv bd\pmod{k}$, then $ac\equiv bd\pmod{k}$. QED Corollary. If $a_1\equiv b_1\pmod{k}$, $a_2\equiv b_2\pmod{k},\ldots, a_n\equiv b_n\pmod{k}$, then $$a_1\cdots a_n\equiv b_1\cdots b_n\pmod{k}.$$ Proof. Induction on $n$. QED (This is where you would want to use induction, rather than the specific case you are looking at). Corollary. If $a\equiv b\pmod{k}$, then for all positive integers $n$, $a^n\equiv b^n\pmod{k}$. Proof. Apply previous corollary with $a_i=a$ and $b_i=b$ for all $i$. QED
Direct proof: $10^n=(11-1)^n=11(\dots)+(-1)^n$
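The claim is also trivial to spot-check numerically (illustrative only):

```python
# Check 10^n == (-1)^n (mod 11) for small n.
for n in range(20):
    assert pow(10, n, 11) == (-1)**n % 11
print("holds for n = 0, ..., 19")
```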
If $\sin\alpha + \cos\alpha = 0.2$, find the numerical value of $\sin2\alpha$. If $\sin\alpha + \cos\alpha = 0.2$, find the numerical value of $\sin2\alpha$. How do I find a value for $\sin\alpha$ or $\cos\alpha$ so I can use a double angle formula? I know how to solve a problem like "If $\cos\alpha = \frac{\sqrt{3}}{2}$ , find $\sin2\alpha$" by using the 'double angle' formula: $\sin2\alpha = 2\sin\alpha\cos\alpha$ like this: Start by computing $\sin\alpha$ $$\sin^2\alpha = 1 -\cos^2\alpha = 1-(\frac{\sqrt{3}}{2})^2 = \frac{1}{4}$$ so $$\sin\alpha = \pm\frac{1}{2}$$ then it's just a simple matter of plugging $\sin\alpha = \pm\frac{1}{2}$ and $\cos\alpha=\frac{\sqrt{3}}{2}$ into $$\sin2\alpha = 2\sin\alpha\cos\alpha$$ to get $$\sin2\alpha = \pm\frac{\sqrt{3}}{2}$$ Where I can not make progress with the question "If $\sin\alpha + \cos\alpha = 0.2$, find the numerical value of $\sin2\alpha$". Is how do I find a value for $\sin\alpha$ or $\cos\alpha$ so I can use a double angle formula? What I have tried: If $\sin\alpha+\cos\alpha = 0.2$ then $\sin\alpha=0.2-\cos\alpha$ and $\cos\alpha=0.2-\sin\alpha$. Should I start by by computing $\sin\alpha$ using $\sin^2\alpha = 1 -\cos^2\alpha = 1-(0.2-\cos\alpha)^2$?
We have $$s^2=(\cos\alpha+\sin\alpha)^2=1+2\cos\alpha\sin\alpha=(0.2)^2$$ so we find $$p=\sin\alpha\cos\alpha=-0.48,$$ hence $\sin2\alpha=2\sin\alpha\cos\alpha=2p=-0.96$. Moreover $\sin\alpha$ and $\cos\alpha$ are the roots of the quadratic equation: $$x^2-sx+p=x^2-0.2x-0.48=0$$
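Numerically, the roots of that quadratic are $0.8$ and $-0.6$, and $\sin2\alpha$ is twice their product; a throwaway check:

```python
import numpy as np

roots = np.roots([1, -0.2, -0.48])     # sin(a) and cos(a), in some order
print(roots, 2 * roots[0] * roots[1])  # [ 0.8 -0.6] -0.96
```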
\begin{align} \sin \alpha+\cos \alpha &= 0.2 \\ (\sin \alpha+\cos \alpha)^2 &= 0.04 \\ \sin^2 \alpha + \cos^2 \alpha + 2\sin \alpha \cos\alpha &= 0.04 \\ 1+\sin 2\alpha &= 0.04 \\ \sin 2\alpha &= -0.96 \end{align} and the result follows directly.
Convergence of $\sum_{n=1}^\infty \frac{1}{n^{a_n}}$ Let $\lim _{n\to\infty}a_n=l$. Show that $\sum_{n=1}^\infty \frac{1}{n^{a_n}}$ converges if $l>1$ and diverges if $l<1$. What happens if $l=1$? I tried to use the ratio test, but could not get a good estimate; I have difficulties with the sequence $a_n$ being involved. I know that $\sum_{n=1}^\infty \frac{1}{n^p}$ converges if $p>1$ and diverges if $p\leq 1$, but I am not sure how to use this exactly. For $l=1$, I guess both things could happen? Certainly we can take $a_n=1$ for all $n$ and then we get the harmonic series, which is divergent. Is there an example where the series converges?
If $l=1$ nothing can be said, since $\sum_{n\geq 1}\frac{1}{n\log^2(n+1)}$ is convergent but $\sum_{n\geq 1}\frac{1}{n\log(n+1)}$ is divergent by Cauchy's condensation test.
Hints:
1. If $l > 1$, prove that there are $q > 1$ and $N$ such that $a_n > q$ for any $n > N$.
2. The same method works for $l < 1$.
3. For $l = 1$, consider the sequence $a_n = 1 + 1/n$.
Direct sum seen as functor. We know that if $(M_i)_{i\in I}, (N_i)_{i\in I}$ and $(P_i)_{i\in I}$ are three families of $R$-modules, where $R$ is a ring with unity, then $$M_i\xrightarrow{f_i}N_i\xrightarrow{g_i}P_i$$ is an exact sequence if, and only if, $$\bigoplus_{i\in I} M_i\xrightarrow{\oplus f_i}\bigoplus_{i\in I}N_i\xrightarrow{\oplus g_i}\bigoplus_{i\in I}P_i$$ is exact. I would like to know if this is true for an arbitrary abelian category when the set $I$ is finite.
Yes, because homology is preserved by finite direct sums. If $M_i\to N_i\to P_i$ is a complex but not exact, it has nonzero homology $H_i$ at the middle terms. The direct sum $\bigoplus M_i\to\bigoplus N_i\to \bigoplus P_i$ will have homology $\bigoplus H_i$ at the middle term which doesn't vanish. Of course if any of the $M_i\to N_i\to P_i$ isn't a complex then neither is the direct sum.
Generating elements of the free group of rank two up to conjugacy I've tried to google how to generate words in two generators up to conjugacy (this means I only want one representative for each conjugacy class). Sadly, I come up with articles that have, in the introduction, things like "It is well known that the elements of the free group of rank two can be enumerated by the rationals". I think I understood that there is a bijection between the conjugacy classes of elements of the free group on two generators and the rational numbers, and this bijection has something to do with the continued fraction associated with a given rational number. I couldn't find any reference on the topic though, so if anyone could give me one I would be grateful. I would also appreciate any other method to compute the conjugacy classes of the free group on two generators. (I need an algorithm for a program I am writing) Thank you!
Every word is conjugate to a cyclically reduced word, and the cyclically reduced conjugates of a cyclically reduced word are all cyclic permutations. This comes directly from wikipedia. From here on, when I say a cyclically reduced word, I mean the smallest cyclically reduced word in lexicographical order in the set of all its cyclic permutations. The quote shows: You can find a cyclically reduced representative for a conjugacy class. A conjugacy class is uniquely determined by one cyclically reduced word. The question becomes how to find a list of cyclically reduced words over the two generators and their inverses. A cyclically reduced word is called a necklace. One can search for "necklace algorithms" to get more information. Practically speaking, you might just look at the answer to this question: Good simple algorithm for generating necklaces in Scheme?. If you just want something easy to implement and don't care about the running time:
1. Generate the next word. (The alphabet is just $1,-1,2,-2$, so one can represent a word in base $4$ and count up.)
2. Check if the word is cyclically reduced. If not, go back to 1.
3. Rotate the word and see if this word is lexicographically smallest among all its rotations. If not, go back to 1.
4. If yes, print this word.
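A direct translation of those steps into code might look like this sketch in Python (the encoding $1,-1,2,-2$ for the generators and their inverses, and all helper names, are my own choices; the lexicographic order on these integer tuples differs from an alphabetical order on letters, but any fixed total order yields a valid canonical representative):

```python
from itertools import product

def is_reduced(w):
    # No letter is immediately followed by its inverse.
    return all(w[i] != -w[i + 1] for i in range(len(w) - 1))

def is_cyclically_reduced(w):
    # Additionally, the last letter must not be the inverse of the first.
    return is_reduced(w) and (len(w) < 2 or w[-1] != -w[0])

def conjugacy_reps(length, alphabet=(1, -1, 2, -2)):
    """Yield one representative per conjugacy class for words of the given
    (positive) length: the smallest rotation of each cyclically reduced word."""
    for w in product(alphabet, repeat=length):
        if is_cyclically_reduced(w):
            if w == min(w[i:] + w[:i] for i in range(len(w))):
                yield w

print(list(conjugacy_reps(2)))
```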
What is the derivative of a vector with respect to its transpose? I've already looked at Vector derivative w.r.t its transpose $\frac{d(Ax)}{d(x^T)}$, but I wasn't able to find the direct answer to my question in that question. What is the value of $$\frac{d}{dx} x^T\text{ ?}$$ My initial intuition is that it is $1$, but I'm not exactly sure of why that would be so.
What sort of object can be the derivative of a vector-valued function whose values are row vectors and whose arguments are column vectors? Generally, what kind of object can be the derivative of a function whose values are members of one vector space $W$ and whose arguments are members of another vector space $V$? $$ f: V\to W $$ The answer is that the value of such a derivative at any point in $V$ is a linear transformation from $V$ into $W$, and it may be a different linear transformation at each point in $V$. But if $f$ is itself linear, then it's the same linear transformation at each point in $V$: it's $f$ itself. Transposition is linear. Therefore the value of its derivative at each point in its domain is itself. Often one represents a linear transformation by a matrix. What would be the matrix in this case? No matter what basis you pick for the domain $V$, it seems natural to pick as a basis of $W$ the set of transposes of the basis vectors you chose for $V$. In that case, the matrix would be the identity matrix.
That depends on how you define the vector derivative. There are generally two ways. One is applying abstract index notation; then $$\frac{d}{dx}x^T=\left(\frac{dx_i}{dx^j}\right)=(\delta_{ij})=(e_1\otimes\cdots\otimes e_n)^T$$ where the $e_i$ are unit vectors whose $i$-th component is one and whose other components are zero. Another way to look at it is to regard it as a directional derivative; then $$\frac{d}{dx}x^T=\lim_{h\to0}\frac{(x+hx)^T-x^T}{h}=x^T$$
The function $f : \mathbb{R} \to \mathbb{R}$ satisfies $f(x) f(y) = f(x + y) + xy$ for all real numbers $x$ and $y.$ Find all possible functions $f.$ I was trying to solve the problem : The function $f : \mathbb{R} \to \mathbb{R}$ satisfies $f(x) f(y) = f(x + y) + xy$ for all real numbers $x$ and $y.$ Find all possible functions $f.$ I started by substituting in $0$ for both, to find that $f(0) = 1$, as it becomes $f(0)f(x) = f(x)$ which can only mean $f(0) = 1$. However past this point, I started struggling, as setting both to $1$, or $1$ and $0$ doesn't reveal anything new about the problem. Trying to set both $f(x)$ terms equal to each-other didn't help either. Thanks!
Here is a solution. Setting $y=1$ gives $f(x)f(1)-x=f(x+1)$; taking $x=-1$ and using $f(0)=1$ yields $f(-1)f(1)=0$. So either $f(1)=0$ or $f(-1)=0$. Suppose $f(1)=0$. Then $f(x+1)=-x$, hence $f(x)=-x+1$ for every $x$, and this function does satisfy the condition. Now assume $f(-1)=0$. Then setting $y=-1$ gives $f(x-1) = x$, i.e. $f(x)=x+1$, which also works. Answer: $f(x)=x+1$ or $f(x)=-x+1$.
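Both candidates are easy to verify symbolically; an illustrative check with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
for f in (lambda t: t + 1, lambda t: 1 - t):
    residual = f(x) * f(y) - (f(x + y) + x * y)
    print(sp.expand(residual))  # prints 0 for both functions
```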
As you said, we can say $f(0)=1$, and then substituting $y=-x$ we get $f(x)f(-x)=f(0)-x^2$, i.e. $f(x)f(-x)=1-x^2$. Considering polynomial solutions, this suggests the function is a linear polynomial. Assuming $f(x)=ax+b$, we get $x+1$ and $-x+1$ as the possible functions.
How can it be proven that any number X is greater than, less than, or equal to any other number Y? I have looked for it on the internet, really, but all I have found are particular cases like 1 > 0, or such. Is there an algebraic proof for proving that x > y, or x = y, or x < y? I thought of using Euclid's Fundamental Theorem of Arithmetic as a tool, and comparing the primes that produce a composite number, but I still would have to prove that those primes (and primes in general) are ones greater (or less) than others.
Different texts may do these differently, but they are all equivalent. $x < y$, $x=y$, or $y < x$ are mutually exclusive and exhaustive by definition. Addition and multiplication (and the inverses, subtraction and division) are given by fiat so that $a+0=a$, $1\times a = a$, $a+(-a)=0$. If $a \ne 0$ then $a\times 1/a = 1$, and $a(b+c)=ab+ac$. These are axioms. From these we can prove basic things like $-(-a)=a$ and $(-a)(-b)=ab$ and $0\times a=0$ and so on. We are then given two essential axioms of order: $x+y < x+z$ if $y < z$, and $xy > 0$ if $x > 0$ and $y > 0$. From there we can prove everything. We can prove that if $x > 0$ then $-x < 0$; that if $x > 0$ and $y < z$ then $xy < xz$; that $x^2 > 0$ if $x \ne 0$, and in particular $1^2 = 1 > 0$; and that if $0 < x < y$ then $0 < 1/y < 1/x$. So that's how we can compare any two rational numbers. But what of the irrationals? Well, when we define the reals it is as limits of sequences of rationals, and the laws of order are extended.
It seems that you're asking how to prove that $\le$ is a total ordering. The answer depends on the definition of $\le$. If I define $x\prec y$ iff $x$ perfectly divides $y$ then $2 \prec 4$ but $3\not\prec 4$ and $4\not\prec 3$ and $3\not= 4$. Suppose I define '$x\le y$' based upon the sign of $(y-x)$:
$(y-x)$ zero or positive: $x\le y$
$(y-x)$ zero or negative: $y\le x$
Since the difference of two numbers must fall into at least one of these cases, the ordering has totality. To be a total order I'd also have to show (antisymmetry) if $a\leq b$ and $b\leq a$ then $a=b$, and (transitivity) if $a\leq b$ and $b\leq c$ then $a\leq c$.
How to solve $2 \sin^2(x)+\cos^2(x)+\cos(x)=0$ I need to solve the following equation: $$2 \sin^2(x)+\cos^2(x)+\cos(x)=0$$ I think that $\sin^2(x)+\cos^2(x) = 1$ is helpful, but I don't know how to apply it here.
You can rewrite the equation to $$2(1-\cos^2x) + \cos^2 x + \cos x = 0$$ Now, introduce a new variable: $y=\cos x$ and first solve for $y$. Then, for each solution $y$, every $x$ that solves the equation $\cos x = y$ solves the original equation.
$2\sin^2(x)+\cos^2(x)+\cos(x)=\sin^2(x)+\sin^2(x)+\cos^2(x)+\cos(x)=\sin^2(x)+\cos(x)+1=(1-\cos^2(x))+\cos(x)+1=2+\cos(x)-\cos^2(x)=0$. This last equation is simpler. Take $z:=\cos(x)$; then $z^2-z-2=0$.
Find all incongruent solutions to $21x \equiv 14 \pmod{91}$ Find all incongruent solutions to $21x \equiv 14 \pmod{91}$. I am able to work out the solution using Euclidean algorithm techniques, but the signs in the expression do not match up with the initial expression when I check my work. So by the linear congruence theorem, my solution has to satisfy: $$21x - 91y = 14$$ but after going through the process with the $\gcd(21, 91)$ my expression ends up as $$91 - 21(4) = 7$$ which I multiply by $2$ to get: $$91(2) - 21(8) = 14$$ Which would mean my solution has to have a negative somewhere in it. I can "put" a negative on one of my values and the original expression would be satisfied, but that is not what I obtained through the work I did. Is the confusion in signs occurring on purpose, or am I doing something wrong?
By definition, the congruence $$21x \equiv 14 \pmod{91} \tag{1}$$ is equivalent to the equation $$21x = 14 + 91t, t \in \mathbb{Z} \tag{2}$$ If we divide each term of equation 2 by $7$, we obtain the equivalent equation $$3x = 2 + 13t, t \in \mathbb{Z}$$ which is equivalent to the congruence $$3x \equiv 2 \pmod{13} \tag{3}$$ Hence, $$21x \equiv 14 \pmod{91} \Longleftrightarrow 3x \equiv 2 \pmod{13}$$ Since $\gcd(3, 13) = 1$, the congruence $3x \equiv 2 \pmod{13}$ has a solution. We can find it by applying the extended Euclidean algorithm. \begin{align*} 13 & = 4 \cdot 3 + 1\\ 3 & = 3 \cdot 1 \end{align*} Solving for $1$ in terms of $3$ and $13$ yields $$1 = 13 - 4 \cdot 3$$ Thus, $$1 \equiv -4 \cdot 3 \pmod{13} \implies -4 \equiv 3^{-1} \pmod{13}$$ Therefore, if we multiply both sides of congruence 3 by $-4$, we obtain $$x \equiv -8 \pmod{13}$$ To find all the solutions of congruence 1, we must find all the solutions of the inequality $$0 \leq -8 + 13t < 91$$ in the integers. \begin{align*} 0 & \leq -8 + 13t < 91\\ 8 & \leq 13t < 99\\ \end{align*} Hence, $1 \leq t \leq 7$. Therefore, the solutions of the congruence $21x \equiv 14 \pmod{91}$ are \begin{align*} x & \equiv 5 \pmod{91}\\ & \equiv 18 \pmod{91}\\ & \equiv 31 \pmod{91}\\ & \equiv 44 \pmod{91}\\ & \equiv 57 \pmod{91}\\ & \equiv 70 \pmod{91}\\ & \equiv 83 \pmod{91} \end{align*} which you can check by direct computation.
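The suggested direct computation fits in one line of Python (illustrative):

```python
# All solutions of 21x ≡ 14 (mod 91) with 0 <= x < 91.
print([x for x in range(91) if 21 * x % 91 == 14])
# [5, 18, 31, 44, 57, 70, 83]
```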
$$21x\equiv 14\pmod{91}\stackrel{:7}\iff 3x\equiv 2\equiv 15\pmod{13}$$ $$\stackrel{:3}\iff x\equiv 5\pmod{13}$$ All integers of the form $13k+5$ for some $k\in\mathbb Z$ are the solutions.
Intersection of two finite abelian subgroups I have a question concerning a proof that a group of order 144 is not simple. Given two Sylow 3-subgroups, $P$ and $Q$, we know that $P$ and $Q$ are both abelian as they are of order $p^2$. Let $M=N_G(P \cap Q)$. Then $P \cap Q$ is a normal subgroup and therefore normal in both $P$ and $Q$. This is where I am running into a problem: the proof goes on to conclude that since $P$ and $Q$ are subsets of $M$, then $PQ \subseteq M$. I am not sure how to prove this to myself. Also, $|P \cap Q|=3$.
Let $G$ be a group of order 144 and assume that $G$ is simple. We will argue through an analysis of the Sylow 3-subgroups of $G$ to arrive at a contradiction. On the way we will use a result that is maybe less well known. Lemma: Let $G$ be a group and $p$ a prime dividing the order of $G$. Assume that for every pair $P, Q \in Syl_p(G)$, $P=Q$ or $P \cap Q = 1$. Then $n_p=|Syl_p(G)| \equiv 1$ mod $|P|$. Note that this generalizes the well-known result that $n_p \equiv 1$ mod $p$. I will not prove the lemma, but to sketch it: fix a $P \in Syl_p(G)$ and let this group act on the set of all Sylow $p$-subgroups by conjugation. The orbit of $P$ itself is $\{P\}$ and if $Q \neq P$, the orbit of $Q$ has length $|P|$. Let's get on with the analysis. Since $G$ is simple there are at least two different $P, Q \in Syl_3(G)$. Put $D = P \cap Q$. Of course $D$ is a proper subgroup of $G$. We are going to show that in fact $D=1$. In that case we can apply the lemma: $n_3 \equiv 1$ mod 9, and together with the fact that $n_3 \in \{1,2,4,8,16\}$, this yields $n_3 =1$, which means that the Sylow 3-subgroup is normal, against our assumption of $G$ being simple. Now suppose $D \neq 1$. Since $P \neq Q$, $D$ is a proper subgroup of $P$, but $|P|=9$, so $|D|=3$. Put $H=N_G(D)$. Indeed $P,Q \subset H$, because $P$ and $Q$ are abelian. Note that since $P$ and $Q$ are different subgroups, $P\neq H$ and index $[H:P] \neq 2$, so index $[H:P] \geq 4$. On the other hand, index $[G:H]$ cannot be $2$, otherwise $H$ would be normal in the simple group $G$; it must be a divisor of $16$ and be at least $4$. All this can only be the case if $|H|=36$, or equivalently, index $[G:H]=4$. Of course core$_G(H)=1$ and this means that $G/$core$_G(H)=G$ can be embedded in $S_4$, which is absurd since $144 \nmid 24$. This is the final contradiction and hence $G$ cannot be simple.
As $P$ and $Q$ are both abelian, each normalizes $P\cap Q$, so by definition each lies in $M$. Let me add that it may not be true that $PQ$ is a group. In general, if $P$ and $Q$ are subgroups of $M$, then $\langle P, Q\rangle$ is a subgroup of $M$. However, in this case $\langle P, Q\rangle= PQ$ because $P$ and $Q$ are abelian.
least value of $\lfloor \frac{a+b}{c}\rfloor+\lfloor \frac{c+b}{a}\rfloor+\lfloor \frac{a+c}{b}\rfloor$ If $a,b,c>0$, then find the least value of $$\bigg\lfloor \frac{a+b}{c}\bigg\rfloor+\bigg\lfloor \frac{c+b}{a}\bigg\rfloor+\bigg\lfloor \frac{a+c}{b}\bigg\rfloor$$ where $\lfloor x\rfloor$ is the floor function of $x$. Plan: Using $$x-1< \lfloor x\rfloor\leq x$$ $$\frac{a+b}{c}-1< \bigg\lfloor \frac{a+b}{c}\bigg\rfloor \leq \frac{a+b}{c}$$ $$\frac{b+c}{a}-1< \bigg\lfloor \frac{b+c}{a}\bigg\rfloor \leq\frac{b+c}{a}$$ $$\frac{c+a}{b}-1< \bigg\lfloor\frac{c+a}{b}\bigg\rfloor \leq \frac{c+a}{b}$$ How do I solve it from here? Help me please.
$$= 3 \bigg\lfloor \frac{a+b}{c} \bigg\rfloor$$ And given all variables are positive, the minimum occurs when $a+b$ is as close to $0$ as possible and $c$ is as large as possible.
Linear programming solution in vertex I want to prove that if the linear programming problem $$\max \{\langle c,x\rangle \ \colon Ax\leqslant b, \ x\geqslant 0\}$$ has a solution, then at least one of the solutions is at a vertex of $$\Omega=\{x\ \colon Ax\leqslant b, \ x\geqslant 0\}.$$ Any ideas on how to approach this problem?
This is actually the main theorem of LP theory: Given a problem in standard form \begin{align} \max\ &c^{\top}x\\ &Ax=b\\ &x\geq 0 \end{align} let $\Omega =\{x\in \mathbb{R}^n\ |\ Ax=b,\ x \geq 0\}$ be the feasible set. Assume that $\operatorname{rank}(A)=m$. Then:
1. If $\Omega \neq \emptyset$ then there exists at least one basic feasible solution to the problem (i.e. $\Omega$ has at least one vertex).
2. If the problem is not unbounded then there exists an optimal basic solution.
Proof of statement 1 Let $x$ be a feasible solution. WLOG assume that $x_1, x_2,\ldots, x_p > 0$ and $x_{p+1},\ldots, x_n =0$. If $A_1,\ldots,A_p$ are linearly independent columns of $A$ then $x$ is a basic feasible solution. Otherwise $A_1,\ldots,A_p$ are linearly dependent and $$\sum_{i=1}^p\lambda_iA_i=0$$ holds with at least one coefficient $\lambda_i \neq 0$. Observe that the equation system can be written as $$\sum_{i=1}^px_iA_i=b$$ Multiplying the first equation by $\epsilon \in \mathbb{R}$ and subtracting it from the last one we get $$\sum_{i=1}^px_iA_i - \epsilon\sum_{i=1}^p\lambda_iA_i=\sum_{i=1}^p(x_i - \epsilon\lambda_i)A_i=b$$ Therefore the vector $$x-\epsilon \lambda=[x_1-\epsilon \lambda_1,\ldots,x_p-\epsilon\lambda_p,0,\ldots,0]^{\top}$$ will be feasible if $$x_i-\epsilon\lambda_i \geq 0 \ \ \ i=1,\ldots,p$$ The solution of this system of inequalities is $\epsilon=\min \{\epsilon_1, \epsilon_2\}$ where $$\epsilon_1 = \max_{1\leq i\leq p}\Big\{\dfrac{x_i}{\lambda_i}\ |\ \lambda_i < 0\Big\}$$ and $$\epsilon_2 = \min_{1\leq i\leq p}\Big\{\dfrac{x_i}{\lambda_i}\ |\ \lambda_i > 0\Big\}$$ Taking $\epsilon=\epsilon_1$ or $\epsilon=\epsilon_2$ the vector $\bar{x}=x-\epsilon \lambda$ has at least one more null component. Now check if the columns of $A$ related to the non null components of $\bar{x}$ are linearly independent. If they are linearly independent $\bar{x}$ is a basic feasible solution; otherwise repeat the whole procedure starting from $\bar{x}$. Proof of statement 2 Let $x$ be an optimal solution. If it is a basic feasible solution then we get the proof. If it is not basic, as in statement 1, we can always construct a new vector $\bar{x}=x-\epsilon \lambda$ which is basic. The objective function value at $\bar{x}$ is $$c^{\top}\bar{x}=c^{\top}x-\epsilon c^{\top}\lambda$$ All we need to show for $\bar{x}$ to be optimal is that $c^{\top}\lambda=0$. Observe that if $c^{\top}\lambda>0$, taking $\epsilon=\epsilon_1<0$ we get $$c^{\top}\bar{x} > c^{\top}x $$ and if $c^{\top}\lambda<0$, taking $\epsilon=\epsilon_2>0$ we get $$c^{\top}\bar{x} > c^{\top}x $$ In both cases we get a contradiction. Therefore $c^{\top}\lambda=0$, $c^{\top}\bar{x} = c^{\top}x $ and $\bar{x}$ is an optimal basic solution. QED REMARKS The theorem refers to problems with equality constraints (standard form problems). As you well know, every LP problem can be transformed into standard form, so the theorem applies to all LP problems. The feasible sets of a generic LP problem and the corresponding standard form problem have the same shape, although they lie in different spaces. Thus there is a one-to-one correspondence between the vertexes of the two feasible sets. The theorem uses the concept of basic solution, but a well-known theorem states that $x$ is a vertex of $\Omega$ if and only if $x$ is a basic feasible solution of the system $Ax=b$.
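Simplex-type solvers exploit exactly this theorem by searching only among basic feasible solutions. As a small illustration (assuming `scipy` is available; the toy LP below is my own), the solver returns a vertex of the feasible set:

```python
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x <= 3,  x, y >= 0
# (linprog minimizes, so negate the objective)
res = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3],
              bounds=[(0, None), (0, None)])
print(res.x)  # [3. 1.], a vertex where two constraints are tight
```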
This is a consequence of the Extreme Value Theorem. I think the best way to understand it is by looking at what happens in the $1$-dimensional case (obviously this is not a proof, but it gives you the intuition as to why it holds). As an example, assume you are dealing with $$ \max\limits_{x \in \mathbb{R}}\{2x\; |\; x\le 2,\; x \ge 0 \} $$ So here, $\Omega = [0,2]$. Since $y=2x$ is linear, it is clear from a quick drawing that the maximum is at $x=2$, a vertex of $\Omega$. Try doing the same thing in $2$-D to convince yourself a little more.
Is it impossible to perfectly fit a polynomial to a trigonometric function on a closed interval? On a closed interval (e.g. $[-\pi, \pi]$), $\cos{x}$ has finitely many zeros. Thus I wonder if we could fit a finite degree polynomial $p:\mathbb{R} \to \mathbb{R}$ perfectly to $\cos{x}$ on a closed interval such as $[-\pi, \pi]$. The Taylor series is $$\cos{x} = \sum_{i=0}^{\infty} (-1)^i\frac{x^{2i}}{(2i)!} = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!}-\dots$$ Using Desmos to graph $\cos{x}$ and $1-\frac{x^2}{2}$ yields: which is clearly imperfect on $[-\pi,\pi]$. Using a degree 8 polynomial (the first 5 terms of the Taylor series above) looks more promising: But upon zooming in very closely, the approximation is still imperfect: There is no finite degree polynomial that equals $\cos{x}$ on all of $\mathbb{R}$ (although I do not know how to prove this either), but can we prove that no finite degree polynomial can perfectly equal $\cos{x}$ on any closed interval $[a,b]\subseteq \mathbb{R}$? Would it be as simple as proving that the remainder term in Taylor's Theorem cannot equal 0? But this would only prove that no Taylor polynomial can perfectly fit $\cos{x}$ on a closed interval...
Yes, it is impossible. Pick any point in the interior of the interval, and any polynomial. If you differentiate the polynomial repeatedly at that point, you will eventually get only zeroes. This doesn't happen for the cosine function, which instead repeats in an infinite cycle of length $4$. Thus the cosine function cannot be a polynomial on a domain with non-empty interior.
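The argument can be made concrete symbolically (an illustration with sympy): the ninth derivative of the degree-8 Taylor polynomial from the question vanishes identically, while the ninth derivative of $\cos$ does not.

```python
import sympy as sp

x = sp.symbols('x')
p8 = sum((-1)**i * x**(2 * i) / sp.factorial(2 * i) for i in range(5))  # degree 8
print(sp.diff(p8, x, 9))          # 0
print(sp.diff(sp.cos(x), x, 9))   # -sin(x), not identically zero
```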
Although this is an incomplete answer unlike the ones I read here, I'd like to offer what I eventually thought of, since the idea still seems original: there can be no polynomial with rational coefficients that agrees exactly with $\cos$ on $[0,1]$, because it would have the wrong integral over this interval ($\sin 1$ being irrational). I believe this argument can be adapted to a different interval $[\alpha,\beta]$ by finding a sub-interval with rational endpoints $[a,b] \subset [\alpha,\beta]$ and using something like the notion of algebraic independence over $\mathbb{Q}$ (search for $a$ and $b$ such that $\sin b - \sin a$ is irrational, which should happen most of the time) and/or Niven's theorem, and possibly extended to real coefficients, since a polynomial with such coefficients can be well-approximated by sequences of polynomials with rational ones. Thank you for your question; it reminds me much of the kind I would've asked when younger!
When $f(0) = 0,$ is it always true that $G(0) = 0,$ where $G$ is the antiderivative of $f$? I have a hunch that it is, but it would be nice if somebody could confirm / disprove it for me. Thank you. Edit: Is it true when the constant of integration is equal to zero?
No, this is not true in general. Let $F(x)$ be an antiderivative of $f(x)$; then $$\int f(x) \, \mathrm{d}x = F(x) + k$$ and now, even if $F(0) = 0$, there's still the $+ k$ constant to contend with. An example would be $f(x) = x\exp x$; then we have $$F(x) = \int x\exp x \, \mathrm{d}x = e^x (x-1) + k$$ Now, even if $k = 0$, then $F(0) = -1$ whilst $f(0) = 0$.
No. The simplest example I can think of? $$f(x) = x$$ Then $G(x) = \frac{x^2}{2} + C$, which is not guaranteed to be always $0$.
Let $a, b$ be two real numbers. Show that $\left|e^{ia}-e^{ib}\right| \leq |a-b|$. The Mean Value Theorem suggests there exists a $c$ between $a$ and $b$ that might help here.
The geometric view is that $|a-b|$ is the length of a curve between $e^{ia}$ and $e^{ib}$ - namely, the curve $e^{i\theta}$ for $\theta\in[a,b]$ (or $[b,a]$, depending on the order of the values.) But the shortest distance between two points is the linear distance, which is for these points $\left|e^{ib}-e^{ia}\right|$.
Lemma: For $h>0$, $1-\cos h < h^2/2.$ Proof: Since $\sin t < t$ for $t>0,$ we have $$1-\cos h = \int_0^h\sin t \, dt < \int_0^h t \, dt = h^2/2.$$ Thm: $|e^{ih}-1| < h$ for $h>0.$ Proof: $|e^{ih}-1|^2 = 2(1-\cos h) < h^2,$ where we have applied the lemma. Now take square roots. Corollary: If $a<b,$ then $|e^{ib} - e^{ia}| < b-a.$ Proof: $$|e^{ib} - e^{ia}| = |e^{ia}(e^{i(b-a)} - 1)| = |e^{i(b-a)} - 1| < b-a,$$ the last inequality coming from the theorem.
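A quick numerical confirmation of the theorem (purely illustrative):

```python
import numpy as np

h = np.linspace(1e-6, 20.0, 100001)
print(np.all(np.abs(np.exp(1j * h) - 1) < h))  # True
```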
Using Ito's Lemma with more than one brownian motion term Question : Let $$ dY_t=c_tdt+d_tdW^1_t+e_tdW^2_t $$ Where $W^1_t,~~W^2_t$ are standard independent brownian motions. I am trying to apply Ito's formula to this, say for example trying to find $d(\frac {1}{Y_t})$ I'm not too sure about how to treat the variance term in Ito's formula. Any hints would be appreciated, Cheers
I'll assume that $c,d,e$ depend only on $t$. In integral form, $$ Y_t=Y_0+\int_0^tc_s\,\mathrm ds+\int_0^td_s\,\mathrm dW_s^1+\int_0^te_s\,\mathrm dW_s^2. $$ Then, for any $C^2$ function $f$ Itô's formula (in differential form) states $$ \mathrm d\left(f(Y_t)\right)=f'(Y_t)\,\mathrm dY_t+\frac12f''(Y_t)\,\mathrm d\langle Y\rangle_t. $$ I think your problem is finding $\mathrm d\langle Y\rangle_t$. To do this, use bilinearity and symmetry of $\langle \cdot,\cdot\rangle$: \begin{multline*} \langle Y\rangle_t=\langle \int_0^\cdot c_s\,\mathrm ds\rangle_t+\langle \int_0^\cdot d_s\,\mathrm dW_s^1\rangle_t+\langle \int_0^\cdot e_s\,\mathrm dW_s^2\rangle_t +2\langle \int_0^\cdot c_s\,\mathrm ds,\int_0^\cdot d_s\,\mathrm dW_s^1\rangle_t\\ +2\langle \int_0^\cdot c_s\,\mathrm ds,\int_0^\cdot e_s\,\mathrm dW_s^2\rangle_t+2\langle\int_0^\cdot e_s\,\mathrm dW_s^2,\int_0^\cdot d_s\,\mathrm dW_s^1\rangle_t. \end{multline*} Now, $\langle \int_0^\cdot c_s\,\mathrm ds\rangle_t=0$ and $\langle \int_0^\cdot c_s\,\mathrm ds,\int_0^\cdot \varphi_s\,\mathrm dW_s^i\rangle_t=0$ because $\int_0^\cdot c_s\,\mathrm ds$ has finite variation. Also, $\langle\int_0^\cdot e_s\,\mathrm dW_s^2,\int_0^\cdot d_s\,\mathrm dW_s^1\rangle_t=0$ because the two Brownian motions are independent. The only terms that remain are $$ \langle Y\rangle_t=\langle \int_0^\cdot d_s\,\mathrm dW_s^1\rangle_t+\langle \int_0^\cdot e_s\,\mathrm dW_s^2\rangle_t =\int_0^t d_s^2\,\mathrm ds+\int_0^t e_s^2\,\mathrm ds=\int_0^t \left(d_s^2+e_s^2\right)\,\mathrm ds. $$ Thus, $$ \mathrm d\left(f(Y_t)\right)=f'(Y_t)\left(c_t\mathrm dt+d_t\mathrm dW^1_t+e_t\mathrm dW^2_t\right)+\frac12f''(Y_t)\left(d_t^2+e_t^2\right)\,\mathrm dt. $$ Lastly, if you look around, you'll see that a general form of Itô's formula can be applied directly to $f(Y_t)=f\circ F(\int c_s\,\mathrm ds,\int d_s\,\mathrm dW_s^1,\int e_s\,\mathrm dW_s^2)$, making directly appear the quadratic variations of $W_t^1$ and $W_t^2$. The two approaches coincide.
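Specializing to the question with $f(y)=1/y$, so $f'(y)=-y^{-2}$ and $f''(y)=2y^{-3}$ (this assumes $Y_t$ stays away from $0$ so that $f$ is $C^2$ along the paths), the formula gives $$ \mathrm d\!\left(\frac{1}{Y_t}\right)=-\frac{1}{Y_t^{2}}\left(c_t\,\mathrm dt+d_t\,\mathrm dW^1_t+e_t\,\mathrm dW^2_t\right)+\frac{d_t^{2}+e_t^{2}}{Y_t^{3}}\,\mathrm dt. $$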
$$ d(1/Y_{t}) = -Y_{t}^{-2}dY_{t} + Y_{t}^{-3}dY_{t}dY_{t} $$ Now substitute $dY_{t}$ into this equation.
How to solve the initial value problem $y'(x)=\lambda \sin(x+y(x))$, $y(0)=1$. For $\lambda \in \mathbb{R}$, consider the initial value problem $y'(x)=\lambda \sin(x+y(x))$, $y(0)=1$. Then this initial value problem has
1. no solution in any neighbourhood of $0$;
2. a solution in $\mathbb{R}$ if $|\lambda|<1$;
3. a solution in a neighbourhood of $0$;
4. a solution in $\mathbb{R}$ only if $|\lambda|>1$.
This is the first time I have encountered this kind of IVP, and I have no idea how to proceed. The entanglement of $x$ and $y(x)$ in $\sin(x+y(x))$ is what is causing me trouble making any headway. So please help me to solve this. Thanks.
Hint: Observe we have \begin{align} y' = F(x, y) \end{align} where $F$ is Lipschitz in the $y$ variable since \begin{align} |F(x, y_1) -F(x, y_2)| = |\lambda| |\sin(x+y_1)-\sin(x+y_2)| \leq |\lambda||y_1-y_2|. \end{align} Note we have used the fact $|\sin u-\sin v| \leq |u-v|$. Now, by the Picard-Lindelöf theorem, one can guarantee local existence, i.e. (3) holds even if we do not know anything about $\lambda$. Moreover, one can use a Banach fixed point argument to show that the ODE has a global solution, i.e. a solution on all of $\mathbb{R}$, if $|\lambda|<1$, which means (2) holds.
You first make a grid of points for the range of $x$. Let's say $x\in[0,1]$; you could select $x=0, 0.1, 0.2, \ldots, 1.0$. Next, substitute $x=0$ into the equation to get $y^\prime(0) = \lambda \sin(0+y(0)) = \lambda\sin(1)$. Then use $y(0.1) = y(0) + 0.1\, y^\prime(0) = 1 + 0.1\,\lambda\sin(1)$. Next, substitute $x=0.1$ into the equation: $y^\prime(0.1) = \lambda \sin(0.1+y(0.1))$. You continue the iteration up to $x=1.0$. The finer the meshing, the more precise the answer.
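In code, that iteration is just Euler's method; a minimal sketch (the value of $\lambda$ and the step size are placeholder choices):

```python
import math

lam = 0.5          # placeholder value of lambda
h, steps = 0.001, 1000
x, y = 0.0, 1.0    # initial condition y(0) = 1
for _ in range(steps):
    y += h * lam * math.sin(x + y)  # Euler step: y_{k+1} = y_k + h * y'(x_k)
    x += h
print(x, y)        # approximate value of y(1)
```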
Integers, rationals and reals as sets? Natural numbers can be represented as pure sets by defining them to contain every number that is smaller than them. Arithmetic can be performed on them using the Peano axioms. Are there any similar definitions for integers, rationals and reals? For example, I could define a rational to be an ordered pair of dividend and divisor. But that would leave the two rationals $\frac{1}{2}$ and $\frac{2}{4}$ not equal to each other, and it would be based on ordered things rather than pure sets.
Yes, the procedure you outline is correct, but you must define equivalence classes: $(a,b)\sim (c,d)$ if $ad=bc$. Thus a rational number is an equivalence class of pairs. One then defines the arithmetic operations of addition and multiplication and shows that they are invariant under equivalence. The same can be done to define the integers from the natural numbers; here the relation is $(n,m)\sim (r,s)$ if $n+s=r+m$. The reals are then defined as Dedekind cuts of the rationals.
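In code one usually stores a canonical representative of each equivalence class rather than the infinite class itself; a minimal sketch (the function name is mine):

```python
from math import gcd

def canonical(p, q):
    """Canonical representative of the class {(a, b) : a*q == b*p, b != 0}."""
    if q == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    if q < 0:
        p, q = -p, -q
    g = gcd(p, q)          # gcd(0, q) == q, so (0, q) maps to (0, 1)
    return (p // g, q // g)

print(canonical(1, 2) == canonical(2, 4))  # True: 1/2 and 2/4 are equivalent
```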
Yes. Define the ordered pair $(a, b)$ as the set $\{a, \{a, b\}\}$ Define the integers as $(\{1, 0\} \times \mathbb N) \setminus (0, 0)$: ordered pairs where the first element represents the sign and the second represents the magnitude, excluding negative $0$. In the first slot, $1$ represents positive, $0$ represents negative. Define the rational number $\frac{p}{q}$ as $\{(a, b): aq=bp\}$: the set of all ordered pairs that reduce to the same thing $(p, q)$ does in lowest terms. (That's not quite what this definition says, but it's easier to say and amounts to the same thing.) Define real numbers as sets of rationals for which if $\frac{p}{q}$ is in the set, then all rationals lower than $\frac{p}{q}$ are in the set. (Also, the empty set and $\mathbb Q$ are specifically excluded from being real numbers.)
Prove that Bernstein Transformation is linear The definition of the transformation is $$(B_nf)(x) = \sum_{k=0}^n f(\frac{k}{n}) {n \choose k}x^k(1-x)^{n-k}$$ How can I show this is a linear map? I know the sum of $B_n$ should be 1, but can I consider $f(\frac{k}{n})$ as a constant?
$$\begin{align} (B_n(f+a g))(x) &= \sum_{k=0}^n (f+a g)(k/n) \binom{n}{k} x^k (1-x)^{n-k} \\ &= \sum_{k=0}^n \big(f(k/n) + a g(k/n)\big) \binom{n}{k} x^k (1-x)^{n-k} \\ &= \sum_{k=0}^n f(k/n) \binom{n}{k} x^k (1-x)^{n-k} + a \sum_{k=0}^n g(k/n) \binom{n}{k} x^k (1-x)^{n-k} \\ &= (B_n f)(x) + a\,(B_n g)(x). \end{align}$$ So $B_n(f+a g) = B_n f + a B_n g$, i.e. $B_n$ is linear.
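A numerical spot-check of this linearity (the test functions, degree, and evaluation point are arbitrary):

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f, g, a, n, x = np.sin, np.cos, 2.0, 10, 0.3
lhs = bernstein(lambda t: f(t) + a * g(t), n, x)
rhs = bernstein(f, n, x) + a * bernstein(g, n, x)
print(np.isclose(lhs, rhs))  # True
```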
Hint: $B_n$ associates to each function $f$ another function, defined as you wrote. In order to show that $B_n$ is linear you have to prove that $B_n(f+g)= B_n(f)+ B_n(g)$ and $B_n(af)=aB_n(f)$ for any functions $f$ and $g$ and any real number $a$. But two functions are equal if they have equal values at each argument $x$.
What's an intuitive way to think about the determinant? In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a $2\times 2$ matrix by the formula. Our teacher showed us how to compute the determinant of an $n \times n$ matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?
Your trouble with determinants is pretty common. They’re a hard thing to teach well, too, for two main reasons that I can see: the formulas you learn for computing them are messy and complicated, and there’s no “natural” way to interpret the value of the determinant, the way it’s easy to interpret the derivatives you do in calculus at first as the slope of the tangent line. It’s hard to believe things like the invertibility condition you’ve stated when it’s not even clear what the numbers mean and where they come from. Rather than show that the many usual definitions are all the same by comparing them to each other, I’m going to state some general properties of the determinant that I claim are enough to specify uniquely what number you should get when you put in a given matrix. Then it’s not too bad to check that all of the definitions for determinant that you’ve seen satisfy those properties I’ll state. The first thing to think about if you want an “abstract” definition of the determinant to unify all those others is that it’s not an array of numbers with bars on the side. What we’re really looking for is a function that takes N vectors (the N columns of the matrix) and returns a number. Let’s assume we’re working with real numbers for now. Remember how those operations you mentioned change the value of the determinant?
1. Switching two rows or columns changes the sign.
2. Multiplying one row by a constant multiplies the whole determinant by that constant.
3. The general fact that number two draws from: the determinant is linear in each row. That is, if you think of it as a function $\det: \mathbb{R}^{n^2} \rightarrow \mathbb{R}$, then $$ \det(a \vec v_1 +b \vec w_1 , \vec v_2 ,\ldots,\vec v_n ) = a \det(\vec v_1,\vec v_2,\ldots,\vec v_n) + b \det(\vec w_1, \vec v_2, \ldots,\vec v_n),$$ and the corresponding condition in each other slot.
4. The determinant of the identity matrix $I$ is $1$.
I claim that these facts are enough to define a unique function that takes in N vectors (each of length N) and returns a real number, the determinant of the matrix given by those vectors. I won’t prove that, but I’ll show you how it helps with some other interpretations of the determinant. In particular, there’s a nice geometric way to think of a determinant. Consider the unit cube in N dimensional space: the set of vectors of length N with coordinates 0 or 1 in each spot. The determinant of the linear transformation (matrix) T is the signed volume of the region gotten by applying T to the unit cube. (Don’t worry too much if you don’t know what the “signed” part means, for now.) How does that follow from our abstract definition? Well, if you apply the identity to the unit cube, you get back the unit cube. And the volume of the unit cube is 1. If you stretch the cube by a constant factor in one direction only, the new volume is that constant. And if you stack two blocks together aligned on the same direction, their combined volume is the sum of their volumes: this all shows that the signed volume we have is linear in each coordinate when considered as a function of the input vectors. Finally, when you switch two of the vectors that define the unit cube, you flip the orientation. (Again, this is something to come back to later if you don’t know what that means.) So there are ways to think about the determinant that aren’t symbol-pushing.
If you’ve studied multivariable calculus, you could think about, with this geometric definition of determinant, why determinants (the Jacobian) pop up when we change coordinates doing integration. Hint: a derivative is a linear approximation of the associated function, and consider a “differential volume element” in your starting coordinate system. It’s not too much work to check that the area of the parallelogram formed by vectors $(a,b)$ and $(c,d)$ is $\begin{vmatrix} a & b \\ c & d \end{vmatrix}$ either: you might try that to get a sense for things.
The determinant of a matrix gives the signed volume of the parallelepiped that is generated by the vectors given by the matrix columns. You can find a very pedagogical discussion on page 16 of A Visual Introduction to Differential Forms and Calculus on Manifolds, Fortney, J.P. (google book link, click on "1 Background Material"). Given a parallelepiped whose edges are given by $ v_1 , v_2 , \dots, v_n \in \mathbb{R}^n $, if you accept these 3 properties: $D(I)=1$, where $I=[e_1,e_2,\dots,e_n]$ (identity matrix); $D(v_1,v_2,\dots,v_n)=0$ if $v_i=v_j$ for any $i\neq j$; $D$ is linear, $$\forall j,\ D(v_1,\dots,v_{j-1},v+cw,v_{j+1},\dots,v_n)=D(v_1,\dots,v_{j-1},v,v_{j+1},\dots,v_n)+cD(v_1,\dots,v_{j-1},w,v_{j+1},\dots,v_n)$$ then you can show that $D$ is the parallelepiped's signed volume and that $D$ is the determinant.
Line integration over a 3-dimensional curve Consider a curve C that is defined by the intersection of the following surfaces: $$x+y+z=0$$ $$and$$ $$x^2+y^2+z^2=K$$ for some non-zero, positive real number K. Find $$I=\int_Cy^2ds$$ I first tried to find the equation for the curve in terms of a vector r(t) and I obtained: $$r(t)=\left\langle \frac{\cos t}{\sqrt{ 2+\sin 2t}},\frac{\sin t}{\sqrt{ 2+\sin 2t}},\frac{-\cos t - \sin t}{\sqrt{ 2+\sin 2t}}\right\rangle$$ But then, to solve for I with this method, one must take the magnitude of the derivative of r, which becomes incredibly complicated. My professor claims there is an easier way to solve this problem, but I cannot figure anything out.
Here is a „non brute-force“ approach (assume WLOG $K=1$.) Note that, by symmetry of the curve, we have $$\int_C x^2 \,\mathrm ds= \int_C y^2 \,\mathrm ds =\int_C z^2 \,\mathrm ds.$$ Hence, $$3 \int_C y^2 \,\mathrm ds=\int_C x^2+y^2+z^2\,\mathrm ds = \int_C 1\,\mathrm ds=\text{Length of the curve } c.$$ It is known that the intersection of a sphere with a plane is a circle (under certain conditions which are fulfilled here.) Also, as can be seen also from your parametrization (compute the distance of two points with time $\pi$ apart), our circle has a diameter of $2$. So the length of the curve $c$ is simply the circumference of a circle with radius $1$, which is $2\pi$. So, by our above argument, $$\bbox[5px,border:2px solid #C0A000]{ I=\int_C y^2\,\mathrm ds = \frac{\text{Length of }c}3=\frac{2\pi}3.}$$
Here is a „brute-force approach“ (assume WLOG $K=1$.) Even with your parametrization the calculations are doable: Note that $$r'(t)=\left(-\frac{2 \sin (t)+\cos (t)}{(\sin (2 t)+2)^{3/2}},\frac{\sin (t)+2 \cos (t)}{(\sin (2 t)+2)^{3/2}},\frac{\sin (t)-\cos (t)}{(\sin (2 t)+2)^{3/2}}\right)$$ so that $$\|r'(t)\|_2=\sqrt{\frac{(\sin (t)-\cos (t))^2}{(\sin (2 t)+2)^3}+\frac{(\sin (t)+2 \cos (t))^2}{(\sin (2 t)+2)^3}+\frac{(2 \sin (t)+\cos (t))^2}{(\sin (2 t)+2)^3}}$$ which reduces to $$\|r'(t)\|_2=\sqrt{\frac{6+6\sin(t)\cos(t)}{(2+\sin(2t))^3}}=\sqrt{3} \sqrt{\frac{1}{(\sin (2 t)+2)^2}}.$$ So $$I=\int_0^{2\pi} \frac{\sqrt{3} \sin ^2(t)}{(\sin (2 t)+2)^2}\,\mathrm dt=2\sqrt{3}\int_0^{\pi} \frac{\sin ^2(t)}{(\sin (2 t)+2)^2}\,\mathrm dt.$$ It remains to calculate the last integral, which I will call $J$. $J$ is a standard integral and we may proceed as follows: $$J=\int_0^\pi \frac{\sin^2(t)}{(2\sin(t)\cos(t)+1)^2}\,\mathrm dt\overset{(1)}=\int_0^\pi \frac{\csc^2(t)}{4\cot^2(t)+4\csc^4(t)+8\cot(t)\csc^2(t)}\,\mathrm dt\overset{(2)}=\int_{-\infty}^\infty\frac{1}{4(u^2+u+1)^2}\,\mathrm du.$$ (1): Expand the fraction by $\csc^4$ (2): Use that $\csc^2=\cot^2+1$ and substitute $u=\cot(t)$. So, by substituting $s=u+\frac12$, $$J=\frac14\int_{-\infty}^\infty \frac{1}{(s^2+\frac34)^2}\,\mathrm ds=\frac{4}9\int_{-\infty}^\infty \frac{1}{(\frac43 s^2+1)^2}\,\mathrm ds.$$ The last integral can be evaluated as here. The final result should be $J=\frac{\pi}{3\sqrt 3}$ so that $$\bbox[5px,border:2px solid #C0A000]{I=\frac{2\pi}3.}$$
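Both computations can be confirmed numerically from the parametrization in the question (a throwaway check, again with $K=1$):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200001)
y = np.sin(t) / np.sqrt(2.0 + np.sin(2.0 * t))
ds = np.sqrt(3.0) / (2.0 + np.sin(2.0 * t))       # |r'(t)| as computed above
print(np.trapz(y**2 * ds, t), 2.0 * np.pi / 3.0)  # both approximately 2.0944
```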
How do I identify the set of points satisfying $|z-1|+|z+1|\leq 2$? How do I identify the set of points satisfying $|z-1|+|z+1|\leq 2$? My idea is: $|z+1|^{2}=|z|^2+1+2x$ $|z-1|^{2}=|z|^2+1-2x$ $|z-1|+|z+1|\leq 2$ $\implies$ $|z+1|\leq 2 -|z-1|$ $(|z+1|)^2\leq (2 -|z-1|)^2$ (* is this step correct?) (because $-3<2$ but $9 \nleq 4$) I am confused at this step of my solution. Can anyone suggest how I can fix this problem?
Think of the triangle inequality. Since the distance from $z=-1$ to $z=1$ is $2$, the only points satisfying the (in)equality are on the line segment from $z=-1$ to $z=1$.
$|z+1|+|z-1|=2$ denotes a line segment. But $|z+1|+|z-1|<2$ is a null set: it represents nothing in the Argand plane. And $|z+1|+|z-1|>2$ represents the region outside an ellipse.
Sum of averages vs average of sums I have essentially a table of numbers -- a time series of measurements. Each row in the table has 5 values for the 5 different categories, and a sum row for the total of all categories. If I take the average of each column and sum the averages together, should it equal the average of the rows' sums (ignoring rounding error, of course)? (I've got a case where the two values keep coming out different by about 30% and I'm wondering just how crazy I am.) Update: See below -- I was (slightly) crazy and had an error in my code.
The average of the entries in a column is the sum of the entries in that column, divided by the number of entries. The number of entries is the number of rows. So the sum of the averages is the sum of all the entries in the table, divided by the number of rows. The average of the row sums is the sum of all entries in the table divided by the number of rows, so you should get the same number either way.
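A two-line numerical demonstration (illustrative):

```python
import numpy as np

table = np.random.rand(8, 5)      # 8 rows (time points), 5 category columns
print(table.mean(axis=0).sum())   # sum of the column averages
print(table.sum(axis=1).mean())   # average of the row sums -- the same number
```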
If I take the average of each column and sum the averages together, should it equal the average of the rows' sums (ignoring rounding error, of course)? Generally, no.
Expectation of a Product of Independent Random Variables I was wondering if there was a formula for the expectation of a product of $n$ independent random variables. I have only seen one for two random variables. I guess what I am asking is: Let $X_1, \dots, X_n$ be $n$ independent random variables. Is there a formula for $E\left(\prod\limits_{i=1}^n X_i\right)$? I need this for a problem I am working on. It would be nice if it was equal to $\prod\limits_{i=1}^n E(X_i)$. While this is true when $n=2$, I'm not sure if this is true for general $n$. If you are able to provide a formula for this I would appreciate a source or some sort of derivation.
If the values are independent then it basically follows from the definition of expectation and the independence of the variables (that the joint probability function is the product of the marginal probability functions): $$\begin{align} \mathsf E(\prod_i X_i) & = \iiint (\;\prod_i x_i f_i(x_i)\;)\; \mathrm d x_n\cdots\mathrm d x_1 \\ & = \int x_n f_n(x_n)\mathrm d x_n \cdot \iiint\prod_{i=1}^{n-1}x_if_i(x_i)\mathrm d x_{n-1}\cdots\mathrm d x_1 \\ & = \prod_i \int x_i f_i(x_i)\mathrm d x_i \\ & = \prod_i \mathsf E(X_i) \end{align}$$ Or you might prefer to use the Law of Iterated Expectation: $$\begin{align} \mathsf E(\prod_{i=1}^n X_i) & = \mathsf E(\mathsf E(X_n\prod_{i=1}^{n-1} X_i\mid X_n)) \\ & = \mathsf E(X_n\mathsf E(\prod_{i=1}^{n-1} X_i)) \\ & = \mathsf E(X_n)\mathsf E(\prod_{i=1}^{n-1} X_i) \\ & = \prod_{i=1}^n \mathsf E(X_i)\end{align}$$
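A Monte Carlo illustration of the identity (the distribution and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=2.0, size=(3, 10**6))  # 3 independent rows, mean 2 each
print(np.prod(X, axis=0).mean())   # expectation of the product, approx 2^3 = 8
print(np.prod(X.mean(axis=1)))     # product of the expectations, also approx 8
```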
Yes, if $X_1, \ldots, X_n$ are independent, the expected value of the product is the product of the expected values.
Show there exists exactly one operator with $\int_A P_T(f)\, d\lambda=\int_{T^{-1}(A)}f\, d\lambda$ Let $T\colon\mathbb{R}\to\mathbb{R}$ be a non-singular function, i.e. a measurable function with the property that $$ \forall A\in\mathcal{B}: \lambda(A)=0 \implies \lambda(T^{-1}(A))=0. $$ Show: There exists exactly one linear operator $P_T\colon L_{\lambda}^1\to L_{\lambda}^1$ so that for all $f\in L_{\lambda}^1$ and all $A\in \mathcal{B}$ it holds that $$ \int_A P_T(f)\, d\lambda=\int_{T^{-1}(A)}f\, d\lambda. $$ Hello, I would prefer to present my own ideas, but I honestly do not have any and am rather stuck. Can you please give me a hint? Greetings, math12. New Edit: The only thing I already know is that $A\mapsto\int_{T^{-1}(A)}f\, d\lambda$ is a signed measure. Can one now apply Radon-Nikodým or something like that?
Consider the signed measure $$ \mu(A) = \int_{T^{-1}(A)}f\, d\lambda. $$ AFTER verifying that $\mu$ satisfies the Radon-Nikodým hypotheses, use the theorem to find a unique $g$ such that $$ \mu(A) = \int_A g\, d\lambda. $$ This $g$ is exactly $P_T(f)$. For the hypotheses, you have to show that $\mu$ is $\sigma$-finite and that $\mu \ll \lambda$. Let me know if you need help with that. Edit: Contrary to what I have written, you do not need to show that $\mu$ is $\sigma$-finite; this hypothesis is for $\lambda$. And by the way, $\lambda$ does not have to be Lebesgue measure: it just has to be $\sigma$-finite for the proof to hold. It seems that, in order to use the theorem, both measures $\lambda$ and $\mu$ have to be $\sigma$-finite. If you know of a proof of the Radon-Nikodým theorem that does not require $\mu$ to be $\sigma$-finite, let me know... :-) But anyway, $\mu$ is not only $\sigma$-finite, it is finite, because $f$ has finite integral. In fact $$ |\mu(A)| = \left|\int_{T^{-1}(A)} f\, d\lambda\right| \leq \int_{\mathbb{R}} |f|\, d\lambda < \infty. $$ Had we missed the hypothesis that $f \in L^1_{\lambda}$? ;-)
$ \int_{T^{-1}(A)}f\,d\lambda = \int_{A}dT\#(f\lambda) $ where $ T\#\nu $ is the push-forward measure of $ \nu $ by $T$. Then by a change of variables you can find that $ T\#(f\lambda) = f\circ T^{-1}\,|\det(DT\circ T^{-1})|^{-1}\lambda $. Let $ P_T(f) = f\circ T^{-1}\,|\det(DT\circ T^{-1})|^{-1} $, which is clearly linear in $f$. I am not sure this proves uniqueness, and you need some more assumptions on $T$, such as differentiability, but perhaps this might help you.
Prove that if $A$ is an $n \times n$ matrix such that $A^{4} = 0$ then $(I_n - A)^{-1}=I_n+A+A^2+A^3$ Prove that if $A$ is an $n \times n$ matrix such that $A^{4}$ = 0 then: $$(I_n - A)^{-1}=I_n+A+A^2+A^3$$ My proof is as follows: $$(I_n - A)(I_n - A)^{-1}=I_n$$ $$(I_n - A)^{-1}=I_n/(I_n - A)$$ $$I_n/(I_n - A)=I_n+A+A^2+A^3$$ $$I_n=(I_n - A)(I_n+A+A^2+A^3)$$ $$I_n=I_n+A+A^2+A^3-A-A^2-A^3-A^4$$ $$I_n=I_n-A^4$$ because we know that: $$A^4=0$$ therefore: $$I_n=I_n$$ Is this an acceptable justification or have I made an error in my logic? *I apologize for any poor formatting
The essence of your proof is correct, but the first few lines are very confusing. Why not something more clear like: In order to show $(I_n-A)^{-1}=I_n + A +A^2 +A^3$, it suffices to show $$ (I_n-A)(I_n + A +A^2 +A^3)=(I_n + A +A^2 +A^3)(I_n-A)=I_n. $$ It's easy to show that the left equals the middle. Then by multiplying through and using the fact that $A^4=0$, it's easy to show that both of them equal the right.
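For a concrete check, here is a short NumPy sketch (the particular nilpotent $A$ is just an example I chose):

```python
import numpy as np

A = np.triu(np.ones((4, 4)), k=1)  # strictly upper triangular, so A^4 = 0
assert not np.linalg.matrix_power(A, 4).any()

I = np.eye(4)
lhs = np.linalg.inv(I - A)
rhs = I + A + A @ A + A @ A @ A
print(np.allclose(lhs, rhs))  # True
```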
To prove $a$ is the inverse of $b$, just compute $ab$ and $ba$: $b$ is the inverse of $a$ exactly when $ab=I=ba$. Thus take $B=I_n+A+A^2+A^3$ and compute $(I_n-A)B$ and $B(I_n-A)$.
What does $ \lvert z-a \rvert = \mathit Re(z)+a $ look like? What does the locus with the equation $ \lvert z-a \rvert = \mathit Re(z)+a $ look like? This is for the applying-complex-numbers topic of an advanced HSC maths course. I was asked to describe the locus. I know that $ \lvert z-a \rvert $ would get me either a perpendicular bisector or a circle. I also know that $ \mathit Re(z) $ refers to the horizontal values on the complex plane. But I just can't imagine what it looks like.
Given $|z-a| = \Re(z)+a$: $$|x+iy-a|=x+a\\|(x-a)+iy|=x+a\\ $$ $$\sqrt{(x-a)^2+y^2}=x+a\\$$ Squaring both sides, $$(x-a)^2+y^2=(x+a)^2\\x^2+a^2-2ax+y^2=x^2+a^2+2ax$$ we get $$y^2=4ax$$ This is a right-opening parabola with focus $(a,0)$.
We can treat complex numbers $z = x + iy$ as equations over $(x, y) \in \mathbb R^2$, and use a geometry plotter to plot them. In this case, the equation system is: $$ \begin{align*} |x + iy - a| & = Re(x+iy) + a \\ |(x - a) + iy| & = x + a\\ \sqrt{(x-a)^2 + y^2} & = x + a \\ (x-a)^2 + y^2 & = (x + a)^2 \end{align*} $$ One can use a tool like Desmos to plot curves like these, with $a$ as a movable parameter.
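Along the same lines, here is a minimal matplotlib sketch of the derived locus $y^2 = 4ax$ (the value $a = 1$ is an arbitrary example, not from either answer):

```python
import numpy as np
import matplotlib.pyplot as plt

a = 1.0  # arbitrary example value of the parameter
y = np.linspace(-4, 4, 400)
x = y**2 / (4 * a)  # the locus y^2 = 4ax, a right-opening parabola

plt.plot(x, y)
plt.scatter([a], [0], marker="x")  # focus at (a, 0)
plt.gca().set_aspect("equal")
plt.title(r"$|z-a| = \mathrm{Re}(z) + a$")
plt.show()
```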
Deriving Conditional Expectations I am really stuck on deriving the basic conditional expectation equations. First, how does one prove the equation below? $$ E[X\mid A] = \frac{E[X\mathbf{1}_A]}{P(A)}. $$ Second, using the equation above, how does one derive the conditional expectation formula: $$E(x|y)= \int_a^bx\frac{f(x,y)}{f(y)}dx$$ I can't seem to figure out how to go from this step $$ E(x|y)=E[X\mid A] = \frac{E[X\mathbf{1}_A]}{P(A)}=\frac{\int_{supp(z)}x\ \mathbf{1}(y=y)\,dx}{f(y)}. $$ How does the indicator function disappear and appear in the bounds of the integration and then create the joint pdf $f(x,y)$ as in equation 1? Thank you.
Most non-measure-theory textbooks define conditional expectation in terms of a sum over a conditional mass function (for discrete cases) or an integral of a conditional density (for continuous cases). 1) Assuming $P[A]>0$, you can prove $E[X|A] = \frac{E[X 1_A]}{P[A]}$ according to the law of total expectation, from my above comment. 2) If you assume $X$ takes values in the interval $[a,b]$, then we define: $$ E[X|Y=y] = \int_a^b x f_{X|Y=y}(x)dx = \int_a^b x \frac{f_{XY}(x,y)}{f_Y(y)}dx$$ where you can motivate the definition for the conditional PDF $f_{X|Y=y}(x) = \frac{f_{XY}(x,y)}{f_Y(y)}$ through various demonstrations, likely found in your textbook. The difficulty, of course, is that the event $\{Y=y\}$ typically has probability 0, and so conditioning on such things is not obvious and needs to be defined separately. You can also motivate the above definition of $E[X|Y=y]$ according to a demonstration similar to that given in your question (I will fix some of the issues with that demonstration below): Fix $y \in \mathbb{R}$ and $\delta>0$ and assume $P[Y \in [y, y+\delta]]>0$. So for small $\delta>0$ we can imagine: \begin{align} E[X|Y=y] &\approx E[X|Y \in [y, y+\delta]] \\ &= \frac{E[X 1_{Y \in [y, y+\delta]}]}{P[Y \in [y, y+\delta]]} \\ &= \frac{\int_{x=a}^b\int_{v=y}^{y+\delta} xf_{XY}(x,v)\,dv\,dx}{P[Y \in [y, y+\delta]]} \\ &= \frac{\int_{x=a}^bx \left[\int_{v=y}^{y+\delta} f_{XY}(x,v)dv\right]dx}{P[Y \in [y, y+\delta]]} \\ &\approx \frac{\int_{x=a}^b x[f_{XY}(x,y)\delta] dx}{f_Y(y)\delta}\\ &=\frac{\int_a^b xf_{XY}(x,y)dx}{f_Y(y)} \end{align}
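A small simulation sketch of the first identity (assuming NumPy; the event $A = \{X > 1\}$ for a standard normal is an arbitrary example I chose):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=10**6)
A = X > 1.0  # conditioning event {X > 1}, chosen arbitrarily

# Both sides of E[X|A] = E[X 1_A] / P(A), estimated empirically
print(X[A].mean())                # direct conditional average
print((X * A).mean() / A.mean())  # E[X 1_A] / P(A)
# For a standard normal both are ~ phi(1)/(1 - Phi(1)) ≈ 1.525
```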
This isn't a proof (see the comments to your question, and Michael's answer for details on that), but let me give a heuristic argument for why you should expect that $$ \operatorname{E} [X \vert A] = \frac{\operatorname{E} [X \mathbf{1}_A]}{\operatorname{P}(A)} $$ I know this isn't exactly what you're asking for, but what I give below might hopefully still be at least somewhat enlightening. Recall from Bayes' rule that, when $A$ and $B$ are events, $$ \operatorname{P} (B \vert A) = \frac{\operatorname{P}(B \cap A)}{\operatorname{P} (A)} $$ Notice also that when $X = \mathbf 1_B$, $$\operatorname{E}[X] = \operatorname{E}[\mathbf 1_B] = \operatorname{P}(B)$$ Given that $\mathbf 1_{B \cap A} = \mathbf 1_B \mathbf 1_A $, we can write the conditional probability above as $$ \operatorname{P} (B \vert A) = \frac{\operatorname{E}[\mathbf 1_B \mathbf 1_A]}{\operatorname{P} (A)} $$ If we again let $X=\mathbf 1_B$, and by analogy with the fact that probabilities of events are expectations of indicator functions for those events, we can define the conditional expectation of indicator functions: $$ \operatorname{E}[X \vert A] =\frac{\operatorname{E}[X \mathbf 1_A]}{\operatorname{P}(A)} = \operatorname{P} (B \vert A)$$ Using the linearity of expectation, we can extend the above definition to the so-called simple functions which are constant on finitely many events: $$ X = \sum_{i=1}^n a_i \mathbf 1_{B_i} $$ Hence, $$ \operatorname{E}[X \vert A] = \frac{\operatorname{E}[X \mathbf 1_A]}{\operatorname{P}(A)} = \sum_{i=1}^n a_i \operatorname{P} (B_i \vert A) $$ What about more general random variables? It turns out that almost all random variables of interest can be approximated by simple functions. Hence, to take the conditional expectation of those, we can first find a sequence of increasingly accurate approximations via simple functions. We know how to find the conditional expectation of each of those approximations. The limit of these approximations is then the conditional expectation of the random variable. (I know I've glossed over a lot of detail in the previous paragraph. However, if you know a little measure theory, the previous construction along with the details should be familiar. If you don't, I don't think the details are going to be particularly helpful.)
Completeness of the space of Riemann integrable functions under $\left \| \cdot \right \|_{\infty}$ on $\left [ 0,1 \right ]$ I read a proof about this topic at this site and was convinced by it, then I tried to construct a Cauchy sequence of functions to see how things apply. Now I'm confused. The sequence I constructed is: $$f_n(x)=\begin{cases} -\ln x & \text{if } x> e^{-n}\\ n & \text{if } x\leq e^{-n} \end{cases}$$ I proved that this sequence is Cauchy and that it belongs to the mentioned space. For the limit I got that $f_n(x)\rightarrow f(x)=-\ln x$ almost everywhere. But $f$ is not bounded on $\left [ 0,1 \right ]$. Did I miss or mess up something?
First observe that the space $R[0,1]$ of all Riemann integrable functions is a subspace of the complete space $B[0,1]$ formed by all bounded functions. So $R[0,1]$ is complete iff it is closed in $B[0,1]$. Theorem. $R[0,1]$ is closed in $B[0,1]$, and hence complete. Proof. Given a sequence $\{f_n\}_n$ in $R[0,1]$ converging to some $f$ in $B[0,1]$, let $D_n$ be the set of points where $f_n$ is discontinuous. It is well known that $D_n$ has measure zero, and hence so does $$ D := \bigcup_n D_n. $$ Notice that if $x$ is not in $D$ then each $f_n$ is continuous at $x$, and hence so is $f$. In other words the discontinuities of $f$ lie in $D$ so $f$ is Riemann integrable. QED
Get $5$ by doing any operations with four $7$s How can one combine four sevens with elementary operations to get $5$? For example $$\dfrac{(7+7)\times7}{7}$$ (though that does not equal $5$). I am not able to do this. Can you solve it or prove that it's impossible?
How about: $$7 - \frac{7+7}{7} = 5$$ $$7 - \log_7 (7·7) = 5$$ $$7 - \frac{\ln (7·7)}{\ln 7} = 5$$ $$\left\lfloor \sqrt{\frac{7^7}{7!}} - 7\right\rfloor = 5$$ $$\lfloor 7\sin 777^\circ\rfloor = 5$$ $$\lfloor 7\cos 7^\circ\rfloor - \frac{7}{7} = 5$$ $$\lfloor 7\cos 7\rfloor = 5 \text{ using radians}$$ You can also use base $174$ and write: $$\sqrt{\frac{77}{7·7}} = 5$$ That can also reduce the amount of sevens by one if you write: $$\frac{\sqrt{77}}{7} = 5$$
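If one wants to search for such identities mechanically, here is a hypothetical brute-force sketch over $+,-,\times,\div$ (my own illustration, not part of the answer above; floors, logs, trig and digit concatenation as used above are not covered):

```python
from fractions import Fraction

OPS = [
    ("+", lambda a, b: a + b),
    ("-", lambda a, b: a - b),
    ("*", lambda a, b: a * b),
    ("/", lambda a, b: a / b if b != 0 else None),
]

def reachable(values):
    """Map each obtainable result to one expression, combining all of `values`."""
    if len(values) == 1:
        return {values[0][0]: values[0][1]}
    out = {}
    for i in range(len(values)):
        for j in range(len(values)):
            if i == j:
                continue
            rest = [values[k] for k in range(len(values)) if k not in (i, j)]
            (a, ea), (b, eb) = values[i], values[j]
            for sym, fn in OPS:
                r = fn(a, b)
                if r is not None:
                    out.update(reachable(rest + [(r, f"({ea}{sym}{eb})")]))
    return out

results = reachable([(Fraction(7), "7")] * 4)
print(results.get(Fraction(5)))  # e.g. (7-((7+7)/7))
```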
$7-\frac{7+7}{7}$. I started looking for a way to use the modulus operator, but this is too simple.
Orthogonal Transformations How can I show that given any two unit vectors in Euclidean space, there is an orthogonal transformation taking one to the other? I considered something like a reflection, but I don't know how to formalize it, or even if it's correct.
For completeness, here's a proof that orthogonal transformations preserve inner products: Take $\Bbb H$ to be a finite-dimensional, linear vector space with element vectors $u$ and $v$ and inner product $\langle u,v\rangle\equiv u^Tv$. $\underline{\text{Show}}$: $\langle u,v\rangle=\langle Mu,Mv\rangle$, for $M$ some orthogonal transformation. $$ \langle u,v\rangle=u^Tv\\ \langle Mu,Mv\rangle=(Mu)^T(Mv)=u^TM^TMv $$ By the definition of orthogonal matrices (i.e. $M^TM=MM^T=1$), $$ \langle Mu,Mv\rangle=u^TM^TMv=u^T(M^TM)v=u^Tv=\langle u,v\rangle $$ QED.
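The question asks how to formalize the reflection idea; here is a minimal NumPy sketch (my own illustration, not part of the answer above) of the standard Householder construction $M = I - 2vv^T$ with $v = (a-b)/\|a-b\|$, which is orthogonal and sends the unit vector $a$ to the unit vector $b$:

```python
import numpy as np

def householder_map(a, b):
    """Orthogonal matrix (a reflection) sending unit vector a to unit vector b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if np.allclose(a, b):
        return np.eye(len(a))
    v = (a - b) / np.linalg.norm(a - b)
    return np.eye(len(a)) - 2.0 * np.outer(v, v)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 0.6, 0.8])  # arbitrary unit vectors
M = householder_map(a, b)
print(np.allclose(M @ a, b))            # True: M takes a to b
print(np.allclose(M.T @ M, np.eye(3)))  # True: M is orthogonal
```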
Here's a proof: Suppose $a$ and $b$ are $n$-dimensional unit vectors, then $$ a^Ta=\sum_{i=1}^{n}(a_i)^2=1\\b^Tb=\sum_{j=1}^{n}(b_j)^2=1 $$ where $a^T$ and $b^T$ are the transposes of their respective vectors. Take $M$ to be a linear transformation between $a$ and $b$. Without loss of generality: $$ Ma=b $$ By the properties of transposes, $(Ma)^T=a^TM^T$, so $a^TM^T=b^T$. Take the norm of both sides of the above equation: $$ (a^TM^T)(Ma)=b^Tb\\ a^T(M^TM)a=1 $$ Since $a^Ta=1$, if $M^TM=1$, then $a^T(M^TM)a=1$. $M$ is orthogonal if and only if $M^TM=1$. So, we may take $M$ to be orthogonal while still satisfying the conditions of the problem. Therefore, there exists a (not necessarily unique) transformation $M$ between $a$ and $b$ (with $a$ and $b$ unit vectors) such that $M$ is orthogonal.
Homework question compound interest If $\$6000$ is invested at 7% compounded continuously, what is the amount after 2 years? I know how to set it up, but at one point I get lost: $$A=Pe^rt$$ $$A=6000^{0.07}(2)$$ Somebody please help.
If compounded continuously as you stated, at a nominal annual rate of 7%, then the amount accumulated is $$A = Pe^{rt},$$ where $P$ is the principal, $r$ is the nominal annual rate of interest, and $t$ is the time in years. Your error is that $t$ was not in the exponent along with $r$ when it should be. For $P = 6000$, $r = 0.07$, and $t = 2$, $A = 6901.64$. The other answers assume that 7% represents an effective annual rate, not a nominal one. If the figure of 7% were assumed to be an effective rate of interest $i$, then specifying the way the interest is compounded, and hence the purpose of the question, becomes pointless. Link to calculation in WolframAlpha
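A one-line numerical check of this (a quick Python sketch):

```python
import math

P, r, t = 6000, 0.07, 2
A = P * math.exp(r * t)  # continuous compounding: A = P e^{rt}
print(round(A, 2))       # 6901.64
```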
In order to calculate the interest over several years, the idea is to do the calculation for the first year and then continue the calculation for the next year with your new result. So let's do the calculation. At first, you'd have $6000\$$ and $6000\$ \cdot 0.07 = 420\$$ interest, so a total of $6420\$$. After the 2nd year, you'd have your $6420\$$ and an additional $6420\$ \cdot 0.07 = 449.40\$$ interest, so a total of $6869.40\$$. Note that we actually calculated the following: $$(6000 \$ + 6000 \$ \cdot 0.07) + (6000 \$ + 6000 \$ \cdot 0.07) \cdot 0.07 = \\ (6000 \$ \cdot 1.07) + (6000 \$ \cdot 1.07)\cdot 0.07 = \\ (6000 \$ \cdot 1.07) \cdot 1.07 = \\ 6000 \$ \cdot 1.07^2 $$ which hopefully sheds some light on the given formula you've tried to apply.
How to prove $\mathbb Z_3\rtimes(\mathbb Z_2\times\mathbb Z_2) \cong S_3\times\mathbb Z_2$? I know that there is a unique semidirect product $\mathbb Z_3\rtimes(\mathbb Z_2\times \mathbb Z_2)$, defined by mapping two of the order-two generators of $\mathbb Z_2\times \mathbb Z_2$ to the inversion automorphism of $\mathbb Z_3$. However, I am not exactly sure how to proceed to show that $\mathbb Z_3\rtimes(\mathbb Z_2\times\mathbb Z_2) \cong S_3\times\mathbb Z_2$. What exactly is the map I could construct?
Let $G = S_3 \times \{\pm 1\}$ (writing the cyclic group multiplicatively). Now map $$ G \to \{\pm 1\} \times \{\pm 1\} $$ by the rule $$ (\sigma, \epsilon) \mapsto (\epsilon \cdot \text{sign}(\sigma), \epsilon). $$ The kernel is $A_3 \times \{1\}$, so we have an exact sequence $$ 1 \to A_3 \to G \to \{\pm 1\} \times \{\pm 1\} \to 1, $$ and there is a splitting $\{\pm 1\} \times \{\pm 1\} \to G$ given by $(-1, 1) \mapsto ((12), 1)$ and $(1, -1) \mapsto (\text{id}, -1)$. It follows that $G$ is some semidirect product of $A_3$ by $\{\pm 1\} \times \{\pm 1\}$ as desired (using $A_3 = \mathbb{Z}/3\mathbb{Z}$). We still must check that it is the non-trivial semi-direct product, i.e. that the lift $((12), 1)$ acts non-trivially on $A_3$ by conjugation. This is indeed true since e.g. $(12)(123)(12) \neq (123)$.
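One way to verify the isomorphism concretely (a sketch assuming SymPy's permutation groups; the particular permutation realization is my own choice, not part of the answer): realize $r=(123)$ generating $\mathbb Z_3$, $s=(12)$ inverting it, and a central involution $t=(45)$, so that $\langle r,s\rangle\cong S_3$ and $\langle t\rangle\cong\mathbb Z_2$:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# r generates Z3; s inverts r by conjugation; t is a central involution
r = Permutation([[0, 1, 2]], size=5)
s = Permutation([[0, 1]], size=5)
t = Permutation([[3, 4]], size=5)

G = PermutationGroup([r, s, t])
print(G.order())                   # 12, the order of S3 x Z2
print(s*r*s == r**-1)              # True: s acts on Z3 by inversion
print(t*r == r*t and t*s == s*t)   # True: t is central
```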
If $G$ is the group defined by the semidirect product you defined and $H$ is the direct product of $S_3\times \mathbb Z_2$, then all elements of $G$ take the form $$g=(x_1,x_2,x_3)$$ where $x_1\in\mathbb Z_3$ and $x_2,x_3\in\mathbb Z_2$. All elements of $H$ take the form $$h=(y_1,y_2)$$ where $y_1\in S_3$ and $y_2\in\mathbb Z_2$. If I understand your definition correctly, then for the group $G$, the following should hold: $$(x_1,0,0)(a,b,c)=(x_1+a,b,c)$$ $$(x_1,1,1)(a,b,c)=(x_1+a,b+1,c+1)$$ $$(x_1,0,1)(a,b,c)=(-x_1+a,b,c+1)$$ $$(x_1,1,0)(a,b,c)=(-x_1+a,b+1,c)$$ Then you should be able to find an isomorphism between $G$ and $H$ by mapping the elements of $G$ that take the form $(x_1,0,0)$ and $(x_1,1,1)$ to elements of $H$ that take the form $(y_1,0)$ and elements of $G$ in the form $(x_1,0,1)$ and $(x_1,1,0)$ to elements of $H$ in the form $(y_1,1)$.
Help understanding the Feynman-Kac formula From wikipedia: Suppose we wish to find the expected value of the function $e^{-\int_0^t V(x(\tau)) d\tau}$ in the case where $x(\tau)$ is some realization of a diffusion process starting at $x(0) = 0$. The Feynman–Kac formula says that this expectation is equivalent to the integral of a solution to a diffusion equation. Specifically, under the conditions that $uV(x) \ge 0$, $$E[e^{-u\int_0^t V(x(\tau)) d\tau}] = \int_{-\infty}^\infty w(x, t) dx$$ where $w(x, 0) = \delta(x)$ and $$\frac{\partial w}{\partial t} = \frac{1}{2} \frac{\partial^2w}{\partial x^2} - uV(x)w $$ My Questions: (1) What is $\delta(x)$ in this context? It's not mentioned by the rest of the wiki article. (2) The "randomness" of the variable whose expectation is being measured is entirely contained within the term $x(\tau)$. But it looks to me like the value of the expectation is independent of the function $x$. It's simply an integral of the function $w$, and the PDE that defines $w$ uses $x$ as a parameter to $w$ but not a function (right?). What am I missing? Thanks for your help.
Check here: http://www.math.sunysb.edu/~ajt/Teaching/560spring2011/PDF/Covariance%20Operator,%20Support%20of%20Wiener%20Measure.pdf This should be helpful.
Is this distribution a binomial distribution? Suppose that three persons (A, B and C) throw at a target: A throws 10 times with probability 0.3 of hitting the target, B throws 15 times with probability 0.2, and C throws 20 times with probability 0.1. Now determine the probability that the target will be hit at least 12 times. My solution is as follows: For each throw, the probability of hitting the target is Pr(H=1) = Pr(A)Pr(A Hit) + Pr(B)Pr(B Hit) + Pr(C)Pr(C Hit), which is Pr(H=1) = (10/45)*0.3 + (15/45)*0.2 + (20/45)*0.1 = 8/45. So the number of hits can be seen as binomially distributed, H ~ Bin(45, 8/45), from which one can get the answer. Am I right to consider the number of hits as binomially distributed? And please give me a hint toward the correct answer; any hints will be appreciated. Thanks.
No. To see why, note that the MGF of the sum $W = X_1 + X_2$ of two independent binomial random variables $$X_i \sim \operatorname{Binomial}(n_i, p_i), \quad i = 1, 2,$$ is $$M_W(t) = M_{X_1}(t) M_{X_2}(t) = (1 + (e^t - 1) p_1)^{n_1} (1 + (e^t - 1) p_2)^{n_2}.$$ This is not in general equal to the MGF of a single binomial random variable $Y$ with parameters $n = n_1 + n_2$, $p = (n_1 p_1 + n_2 p_2)/n$, which would be $$M_Y(t) = \left(1 + (e^t - 1)\frac{n_1 p_1 + n_2 p_2}{n_1 + n_2} \right)^{n_1 + n_2},$$ except in the case where $p_1 = p_2$. It is worth noting that the correct exact probability of the event described in the question is $$\frac{10229891531523289867038696518983728647}{119209289550781250000000000000000000000} \approx 0.0858145.$$ However, the probability described by your solution would be around $0.0905153$.
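To confirm the two numbers quoted above, here is a short sketch that convolves the three binomial PMFs exactly and compares with the single Bin(45, 8/45) from the question's approach (plain Python, no dependencies assumed):

```python
from math import comb

def binom_pmf(n, p):
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Sum of three independent, non-identical binomials
pmf = convolve(convolve(binom_pmf(10, 0.3), binom_pmf(15, 0.2)), binom_pmf(20, 0.1))
print(sum(pmf[12:]))                  # ~0.0858145 (the correct probability)
print(sum(binom_pmf(45, 8/45)[12:]))  # ~0.0905153 (the Bin(45, 8/45) value)
```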
A random variable $H$ is distributed binomially if it represents the number of successes $H=h$ in $n$ independent and identical trials, where $n\ge h$. Hence, $H$ is distributed binomially assuming the independence and identicalness of the throws, in which case we have: $$P(H = h) = \binom{45}{h} p^h (1-p)^{45-h}$$ As for the calculation of $p$, it looks right, but you'll have to clarify what exactly is meant by $P(A)$ and so on. How exactly did you come up with the formula? Did you use the law of total probability? I think $$p = P(hit) = P(hit | A \ throws)P(A \ throws) + P(hit | B \ throws)P(B \ throws) + P(hit | C \ throws)P(C \ throws)$$ where $P(X \ throws)$ is the proportion of throws of $X$ to the total number of throws, assuming independence and identicalness.
Why does $e^{i\pi}=-1$? I will first say that I fully understand how to prove this equation from the use of power series, what I am interested in though is why $e$ and $\pi$ should be linked like they are. As far as I know $\pi$ comes from geometry (although it does have an equivalent analytical definition), and $e$ comes from calculus. I cannot see any reason why they should be linked and the proof doesn't really give any insights as to why the equation works. Is there some nice way of explaining this?
Euler's formula describes two equivalent ways to move in a circle. Starting at the number $1$, see multiplication by $e^{i\pi}$ as a transformation that moves the number: $1 \cdot e^{i\pi}$. Regular exponential growth continuously increases $1$ by some rate; imaginary exponential growth continuously rotates a number in the complex plane. Growing for $\pi$ units of time means going $\pi\,\rm radians$ around a circle. Therefore, $e^{i\pi}$ means starting at $1$ and rotating by $\pi$ (halfway around a circle) to get to $-1$. For more details explaining each step, read this article.
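One can also see the rotation picture numerically (a minimal Python check):

```python
import cmath

# e^{i*pi} lands at -1, up to floating-point error in the imaginary part
print(cmath.exp(1j * cmath.pi))  # (-1+1.2246467991473532e-16j)

# Rotating for t in [0, pi] traces the top half of the unit circle
for t in (0, cmath.pi / 2, cmath.pi):
    z = cmath.exp(1j * t)
    print(round(z.real, 3), round(z.imag, 3))  # (1,0) -> (0,1) -> (-1,0)
```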
$$e^{ix} = \cos x + i\sin x$$ The $\pi$ is, to some extent, an arbitrary choice of the angle.