Prove any integer greater than 11 can be written as a sum of only 3's and only 7's. | $$12=3+3+3+3,\quad 13=3+3+7,\quad 14=7+7.$$
Now assume the result to be true for all $n$ such that $12\le n\le k$, where $k\ge14$. Then consider $n=k+1$.
$n=3+(k-2)$ where, by assumption, $k-2$ can be expressed as an appropriate sum. Hence, by strong induction, every number greater than $11$ can be so expressed. |
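As a sanity check on the claim (not part of the original argument), a brute-force test in Python; the function name `representable` is mine:

```python
def representable(n):
    """True if n = 3a + 7b for some nonnegative integers a, b."""
    return any((n - 7 * b) % 3 == 0 for b in range(n // 7 + 1))
```

Every $n$ from $12$ upward passes, while $11$ (and $4$, $5$, $8$) does not.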
How do these two results not conflict with each other? | "Mathematical Analysis II" uses $\mu$ for Jordan measure, which is not defined for your set $A.$ And while the book defines Lebesgue measure zero sets, it never defines or uses Lebesgue measure.
To answer 1(a), use a finite cover $C_1,\dots,C_k$ of $E$ by $n$-dimensional closed intervals of total measure less than $\epsilon,$ and note that the closure of $E$ is covered by the same intervals, since a finite union of closed sets is closed. (If Jordan measure is defined in terms of half-open intervals or whatever, you might need to enlarge them slightly while keeping the total measure less than $2\epsilon$ say.)
Your answer for 1(b) is correct, as long as you know why that set is a Lebesgue measure zero set. |
The existence of a smooth function taking values $0$ and $1$ on two given closed sets | Do you know the following fact?
If $C \subset \mathbb R^n$ is closed, there is a $C^\infty$ function $\phi_C$ with the property that $\phi_C \ge 0$ and $\phi_C(x) = 0$ if and only if $x \in C$.
You can define $$f(x) = \frac{\phi_K(x) - \phi_F(x)}{\phi_K(x) + \phi_F(x)}.$$
This function is $C^\infty$, equals $1$ on $K$, $-1$ on $F$, and is otherwise strictly in between $-1$ and $1$.
Now use $\sigma = \dfrac 12 (f+1)$. |
canceling double fractions how? | $2\frac{1}{5}$ does not mean $2 \times \frac{1}{5}$. It means $2+\frac{1}{5}$. |
Prove that $\frac{\partial P} {\partial y} =\frac{\partial Q} {\partial x} $, given $f(x,y) = P(x,y) \widehat i + Q(x,y) \widehat j$ | You're on the right track. You have $P = \phi_x$ and $Q = \phi_y$, so $P_y = \phi_{xy} = \phi_{yx} = Q_x$. |
Functional equation $f(r \cos \varphi)+f(r \sin \varphi)=f(r)$ | Consider a monotonic function $ f : [ 0 , + \infty ) \to \mathbb R $ satisfying the functional equation
$$ f ( r \cos \phi ) + f ( r \sin \phi ) = f ( r ) \tag 0 \label 0 $$
for every $ r \in [ 0 , + \infty ) $ and every $ \phi \in \big[ \frac \pi 6 , \frac \pi 4 \big] $. You can show that there is a $ c \in \mathbb R $ such that
$ f ( x ) = c x ^ 2 $ for all $ x \in [ 0 , + \infty ) $. As you've mentioned, functions of this form satisfy the above criteria, and thus they form the class of all solutions.
The trick is to observe that when you consider $ ( r , \phi ) $ as polar coordinates of a point (noting that we have $ r \ge 0 $ and that's valid), $ ( x , y ) = ( r \cos \phi , r \sin \phi ) $ will be the Cartesian coordinates of the same point. In that case, having $ \frac \pi 6 \le \phi \le \frac \pi 4 $ is equivalent to having $ y \le x \le \sqrt 3 \, y $ (since $ \tan \frac \pi 6 = \frac 1 { \sqrt 3 } $ and $ \tan \frac \pi 4 = 1 $).
So we start by considering $ x $ and $ y $ so that $ y \le x \le \sqrt 3 \, y $. Then letting $ r = \sqrt { x ^ 2 + y ^ 2 } $, we would have $ r \in [ 0 , + \infty ) $ and there would be a $ \phi \in \big[ \frac \pi 6 , \frac \pi 4 \big] $ so that
$ x = r \cos \phi $ and $ y = r \sin \phi $. Therefore by \eqref{0} we have
$$ f ( x ) + f ( y ) = f \left( \sqrt { x ^ 2 + y ^ 2 } \right) \tag 1 \label 1 $$
for every $ x $ and $ y $ with $ y \le x \le \sqrt 3 \, y $. Define $ g : [ 0 , + \infty ) \to \mathbb R $ by $ g ( z ) = f \left( \sqrt z \right) $. Then by \eqref{1} we have
$$ g ( x ) + g ( y ) = g ( x + y ) \tag 2 \label 2 $$
for every $ x $ and $ y $ with $ 0 \le y \le x \le 3 y $. As $ 0 \le 0 \le 0 \le 3 \cdot 0 $, we can let $ x = y = 0 $ in \eqref{2} and get $ g ( 0 ) = 0 $. We can now inductively show that
$$ g ( n x ) = n g ( x ) \tag 3 \label 3 $$
for every nonnegative integer $ n $. This holds for $ n = 0 $ and $ n = 1 $. For $ n \ge 2 $, we either have $ n = 2 m $ or $ n = 2 m + 1 $ for some nonnegative integer $ m $ less than $ n $. If $ n = 2 m $, then since $ 0 \le m x \le m x \le 3 m x $, substituting $ m x $ for both $ x $ and $ y $ in \eqref{2} we get $ 2 g ( m x ) = g ( 2 m x ) $. As $ m < n $, by the induction hypothesis we have $ g ( m x ) = m g ( x ) $, which leads to $ g ( n x ) = n g ( x ) $. If $ n = 2 m + 1 $, then since $ n \ge 2 $ we have $ m \ge 1 $ and thus $ 0 \le m x \le ( m + 1 ) x \le 3 m x $, and substituting $ ( m + 1 ) x $ for $ x $ and $ m x $ for $ y $ in \eqref{2} we get $ g \big( ( m + 1 ) x \big) + g ( m x ) = g \big( ( 2 m + 1 ) x \big) $. As $ m + 1 < n $, by the induction hypothesis we have $ g \big( ( m + 1 ) x \big) = ( m + 1 ) g ( x ) $ and $ g ( m x ) = m g ( x ) $, which again leads to $ g ( n x ) = n g ( x ) $.
Now, for $ n > 0 $, we can substitute $ \frac x n $ for $ x $ in \eqref{3} and get
$$ g \left( \frac x n \right) = \frac 1 n g ( x ) \text . \tag 4 \label 4 $$
Substituting $ m x $ for $ x $ in \eqref{4} for some nonnegative integer $ m $ and using \eqref{3}, we find out that $ g ( q x ) = q g ( x ) $ for every nonnegative rational number $ q $. In particular, defining $ c = g ( 1 ) $, we have $ g ( q ) = c q $ for every nonnegative rational number $ q $. Since $ f $ is monotonic, $ g $ is monotonic, too, and hence for every nonnegative real number $ x $ between two nonnegative rational numbers $ p $ and $ q $, $ g ( x ) $ is between $ c p $ and $ c q $. As nonnegative rational numbers are dense in nonnegative real numbers, this forces $ g ( x ) $ to be equal to $ c x $, and thus $ f ( x ) = c x ^ 2 $, as desired. |
Computing Gamma Median by Hand? | In [1], sharp bounds on the median of the gamma distribution are given:
$$
n+\frac{2}{3} < \operatorname{median}(n) < \min\left(n+\log 2,\; n+\frac{2}{3}+(2n+2)^{-1}\right)
$$
where $n$ is the parameter of the gamma distribution $\Gamma(n+1,1)$; the gap between the bounds shrinks as $n$ grows. It is also mentioned that an asymptotic expansion of the median can be derived:
$$
\operatorname{median}(n) = n + \frac{2}{3} + \frac{8}{405n} - \frac{64}{5103n^2} + \cdots
$$
For more information, it's best to review the reference.
[1]: Choi, K. P., On the medians of gamma distributions and an equation of Ramanujan, Proc. Am. Math. Soc. 121, No. 1, 245-251 (1994). ZBL0803.62007. |
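If you actually want the median numerically: for integer $n$ the CDF of $\Gamma(n+1,1)$ has the closed form $1-e^{-x}\sum_{k=0}^{n}x^k/k!$, and Choi's bounds confine the median to $(n, n+1)$, so plain bisection suffices. A sketch (function names are mine):

```python
import math

def gamma_cdf(x, n):
    """CDF of Gamma(n+1, 1) at x, for integer n >= 0."""
    partial = sum(x ** k / math.factorial(k) for k in range(n + 1))
    return 1.0 - math.exp(-x) * partial

def gamma_median(n, tol=1e-12):
    """Median of Gamma(n+1, 1) by bisection on [n, n+1]."""
    lo, hi = float(n), n + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gamma_cdf(mid, n) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, `gamma_median(1)` is about $1.6784$, comfortably between $5/3 \approx 1.667$ and $1+\log 2 \approx 1.693$.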
Diagonalizable & Invertible from an 'eigenpicture' | I am not sure how much sense these eigenpictures make. $2\times 2$ matrices are the easiest to work with, so in the time you spend drawing one you could potentially settle invertibility and diagonalizability by hand...
Anyway. A square matrix is invertible iff the corresponding linear map is surjective, iff the eigenpicture has two nonparallel blue lines (I assume the eigenpicture shows the outputs of all unit vectors; in reality this is another limitation of the picture method).
Similarly, a square matrix is diagonalizable iff the space admits a basis of eigenvectors. In our case this is equivalent to finding (at least) two nonparallel lines such that both the input vector and its blue output vector lie on the same line. The matrix corresponding to the example is not diagonalizable, since the only "line of eigenvectors" is the diagonal in direction $(1,1)$.
To be clear, I think eigenpictures are a great way to introduce and explain the concept of eigenvectors. But they don't generalize well to higher dimensions (not even to 3d) and will never replace solid proofs, since they depend quite heavily on artistic skill and on the matrix being "coarse" enough that its effects are visible in the picture... |
Explanation about a claim of an existing such cycle in a simple graph | We know that $v_0$ has at least $k$ neighbors in $P$. Now these neighbors have indices drawn from $\{1, \dots, l\}$ by definition. So by maximum index, he means: let $j$ be the index such that $v_j$ is a neighbor of $v_0$ and $v_i$ is not a neighbor of $v_0$ for all $i > j$.
Now $j \ge k$ since if not, then $v_0$ does not have $k$ neighbors in $P$ which is a contradiction. You should be able to figure out the length argument from here.
Also, often in graph theory, graphs are assumed to be undirected unless stated otherwise. |
Directional continuous derivative on vectors of a base implies differentiability in $\mathbb{R}^n$ | The key to this question is manipulating the differential quotient to get expressions involving the partial derivatives. You would do well to remember this.
To show that $Df(x)$ exists, we want to show that for every $\alpha \in \mathbb R^n$, the differential quotient $\lim_{h\to 0} \frac{f(x + h\alpha) - f(x)}{h}$ exists. We do something smart here: first, since $\alpha = \sum \alpha_iv_i$, where $\alpha_i$ are scalars and $v_i$ are the basis vectors, we have:
\begin{split}
\frac{f(x+h\alpha) - f(x)}{h} & = \frac{f(x + h(\sum_{i=1}^{\mathbf n} \alpha_iv_i)) - f(x+ h(\sum_{i=1}^{\mathbf{n-1}} \alpha_iv_i))}{h} \\& + \frac{f(x + h(\sum_{i=1}^{\mathbf{n-1}} \alpha_iv_i)) - f(x + h(\sum_{i=1}^{\mathbf {n-2}} \alpha_iv_i))}{h} + \frac{f(x + h(\sum_{i=1}^{\mathbf{n-2}} \alpha_iv_i)) - f(x + h(\sum_{i=1}^{\mathbf {n-3}} \alpha_iv_i))}{h} \\& + \ldots \\ & + \frac{f(x + h\alpha_1v_1) - f(x)}{h}
\end{split}
Right, so what have we done? We broke the sum above into various parts, where you can see that in each fraction, the difference between the two inputs of $f$ is just some scalar multiple of a basis vector.
This is the insight. Allow me to be sketchy from here on.
At this stage, you should remember the mean value theorem in one variable. Something similar holds for partial derivatives as well: if $\frac{\partial g}{\partial x_i}$ exists in a neighbourhood of $a$, then $g(a+\alpha v_i) - g(a) = \alpha \times \frac{\partial g}{\partial x_i} (a + \beta v_i)$ for some $0 < \beta < \alpha$, where $v_i$ is the $i$-th basis vector.
Use that theorem, on each of the above terms. Then, let $h \to 0$, and note that the partial derivatives are continuous, to conclude that the partial derivative is indeed what you expect it to be i.e. that given in your question statement.
|
Open sets in Lebesgue sequence spaces | (0). I am using $l^1$ for the set of absolutely summable sequences and $l_1,l_2$ for the norms and their associated topologies.
(1). If $d_1$ and $d_2$ are metrics on the same set $X$ and there exists $k>0$ such that $d_2(p,q)\leq k\cdot d_1(p,q)$ for all $p,q\in X$, then the topology generated by $d_1$ is stronger than (or equal to) the topology generated by $d_2.$
Taking $k=1$ and $d_i(p,q)=\|p-q\|_i$ for $p,q\in l^1$ and $i\in \{1,2\}$ (note $\|x\|_2\leq \|x\|_1$), the $l_1$-topology on the set $l^1$ is therefore stronger than (or equal to) the topology generated by the $l_2$ norm.
(2). For brevity let $A=\{x\in l^1:\|x\|_1<1\}.$ Observe that any $S\subset l^1$ is $l_2$-open iff $y+rS =\{y+rx:x\in S\}$ is $l_2$-open for all $r>0$ and all $y\in l^1.$
So if $A$ is $l_2$-open then every $l_1$-open ball is $l_2$-open , because the $l_1$-open ball $B_1(y,r)$ (with $y\in l^1$ and $r>0$) is equal to $y+rA.$ This would imply that the $l_2$-topology on the set $l^1$ is stronger than or equal to the $l_1$ topology.
(3). By (1) and (2), if $A$ is $l_2$-open then the $l_1$ and $l_2$ norms generate the same topology on the set $l^1$. But consider $x_i=(x_{i,n})_{n\in N}$ where $x_{i,n}=1/i$ for $n\leq i,$ and $x_{i,n}=0$ for $n>i.$ Let $y=(y_n)_{n\in N}$ where $y_n=0$ for every $n.$
Then $y\in l^1$ and every $l_2$-nbhd of $y$ contains $x_i$ for all but finitely many $i\in N$ because $\|x_i-y\|_2=1/\sqrt i\to 0.$ But $\|x_i\|_1=1$ for every $i$ and $\|y\|_1=0,$ so the $l_1$-open ball $\{z\in l^1:\|z\|_1<1/2\}$, which is an $l_1$-nbhd of $y$, contains no $x_i$ at all. So the topologies on $l^1$ generated by the two norms are not identical. So $A$ is not $l_2$-open. |
Find the sup and inf of a set with parameter | For $\lambda = 1$ we have $$E = \{1\}$$ and we are done.
For $\lambda \not= 1$ we have $$\inf E = 0 \\ \sup E = \infty$$… so no wonder you are struggling.
Assume $\lambda > 1$; then show that $$\lim_{n\to\infty} \frac{m^{\lambda} + n^{\frac{1}{\lambda}}}{m+n} = 0$$ and $$\lim_{m\to\infty} \frac{m^{\lambda} + n^{\frac{1}{\lambda}}}{m+n} = \infty$$
For $0 < \lambda < 1$ substitute $\overline{\lambda} = \frac{1}{\lambda} > 1$ and interchange the roles of $m$ and $n$ to see the results above still hold. |
Let $(X,d)$ be a metric space and $F ⊆ X$ . Show that $F ^- = \{x ∈ X : d(x, F) = 0\}.$ | Let $x\in\overline{F}$; then there exists $(x_n)\in F^{\mathbb{N}}$ such that $d(x,x_n)\rightarrow 0$. For all $n\in\mathbb{N}$, $d(x,F)\leqslant d(x,x_n)$; letting $n\to\infty$, one has $d(x,F)=0$.
Conversely, let $x\in X$ such that $d(x,F)=0$; using the definition of the infimum: $$\forall\varepsilon>0,\exists x_{\varepsilon}\in F\textrm{ s.t. }d(x,x_{\varepsilon})<\varepsilon.$$
Hence the result, taking $\varepsilon=1/n$ to produce a sequence of points of $F$ converging to $x$.
Reminder. For all $x\in X$, $d(x,F)=\inf\limits_{y\in F}d(x,y)$. |
Number of subset combinations with fixed size | If the four subgroups of five people each are distinguishable, then this is correct. By distinguishable I mean something like they are teams with names: "Team 1", "Team 2", "Team 3", and "Team 4". And you consider the case when persons A,B,C,D,E are in "Team 1" and persons F,G,H,I,J are in "Team 2" different from the case when persons A,B,C,D,E are in "Team 2" and persons F,G,H,I,J are in "Team 1".
But if you don't need to distinguish those four groups of five people, then your number is too big. To account for these four groups being indistinguishable, you also need to divide by $4!$. |
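For concreteness, with $20$ people in four groups of five the two counts can be computed directly (the function name is mine):

```python
from math import comb, factorial

def labeled_split(n, k, size):
    """Number of ways to split n people into k *labeled* groups of `size` each."""
    assert n == k * size
    ways, remaining = 1, n
    for _ in range(k):
        ways *= comb(remaining, size)   # choose the next group
        remaining -= size
    return ways

labeled = labeled_split(20, 4, 5)       # 20! / (5!)^4 = 11732745024
unlabeled = labeled // factorial(4)     # divide by 4!  =   488864376
```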
State the domain of each composite function? | a) $f\circ{g}=x^2-3$
b) $g\circ{f}=(x+1)^2-4$
c) $f\circ{f}=x+2$
d) $g\circ{g}=(x^2-4)^2-4$
The second part doesn't make sense, because they are not functions but numbers! |
How to compute angle between vectors a and b if |a|=|b|=|a+b| | Since $|a+b|^2=\langle a+b,a+b \rangle = |a|^2 + 2\langle a, b \rangle + |b|^2$, it follows that $2\langle a,b \rangle=-|a|^2$. Hence $\cos(\alpha)= -\frac{1}{2}$. |
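A quick numeric illustration in the plane (the concrete vectors are my choice): two unit vectors at $120°$ indeed satisfy $|a|=|b|=|a+b|$.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def angle(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return math.acos(dot / (norm(a) * norm(b)))

a = (1.0, 0.0)
b = (-0.5, math.sqrt(3) / 2)              # unit vector 120 degrees from a
s = tuple(x + y for x, y in zip(a, b))    # a + b, also a unit vector
```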
How to calculate $\sum_{k=1}^n \left(k \sum_{i=0}^{k-1} {n \choose i}\right)$ | $\displaystyle\sum_{k=1}^n\left[\sum_{i=0}^{k-1} k{n\choose i}\right]=\sum_{i=0}^{n-1}\left[\sum_{k=i+1}^{n}k{n\choose i}\right]=\sum_{i=0}^{n-1}\left[{n\choose i}\sum_{k=i+1}^n k\right]=$
$\displaystyle = \sum_{i=0}^{n-1}{n\choose i} \left[\dfrac{n(n+1)}{2}-\dfrac{i(i+1)}{2}\right]=\dfrac{n(n+1)}{2}(2^{n}-1)-\sum_{i=0}^{n-1}\dfrac{i(i+1)}{2}{n\choose i}$
$\displaystyle =\dfrac{n(n+1)}{2}(2^{n}-1)-\sum_{i=0}^{n-1}\left[\dfrac{n(n-1)}{2}{n-2\choose i-2}+n{n-1\choose i-1}\right]$
$\displaystyle =\dfrac{n(n+1)}{2}(2^{n}-1)-\dfrac{n(n-1)}{2}(2^{n-2}-1)-n(2^{n-1}-1)$
$\displaystyle =n(3n+1)\cdot 2^{n-3}$ |
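The closed form is easy to verify for small $n$ with exact arithmetic (`Fraction` keeps $2^{n-3}$ honest when $n<3$):

```python
from fractions import Fraction
from math import comb

def lhs(n):
    """The original double sum."""
    return sum(k * sum(comb(n, i) for i in range(k)) for k in range(1, n + 1))

def rhs(n):
    """The closed form n(3n+1) 2^{n-3}."""
    return Fraction(n * (3 * n + 1) * 2 ** n, 8)
```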
Probability density function dependent variable probability | To find $P(Y<1/2)$ you need to integrate $f(x,y)$ over all $(x,y)$ where
$y<1/2$ (yellow region in picture below). Since $f(x,y)=0$ for $(x,y)$ outside
the green triangle, the correct way to set up the double integral is
$$P(Y<1/2)=\int_0^{1/2} \int_0^y 8xy\, dx\,dy.$$
Can you take it from here? |
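If you want to check your final answer afterwards: the inner integral is $\int_0^y 8xy\,\mathrm dx = 4y^3$, and a midpoint rule on the outer integral converges quickly (this assumes, as in the picture, that the density $8xy$ lives on the triangle $0<x<y<1$):

```python
def p_y_less_half(steps=1000):
    """Midpoint rule for P(Y < 1/2) = integral of 4 y^3 over [0, 1/2]."""
    h = 0.5 / steps
    return sum(4 * ((i + 0.5) * h) ** 3 * h for i in range(steps))
```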
Lagrange interpolation uniqueness | We know that the degree of $f$ is less than $n$.
Let us assume that there is another polynomial $g$, with degree strictly less than $n$, such that $g(x_i) = y_i$, $i=1,2,\dots , n$.
Then the polynomial $f-g$ has degree strictly less than $n$ but has $n$ zeros (namely $x_1, x_2, \dots , x_n$), which is a contradiction, since a nonzero polynomial of degree less than $n$ has at most $n-1$ zeros.
Hence, no such $g$ exists. |
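Uniqueness means any correct construction yields the same values; a minimal Lagrange-form evaluator in exact rational arithmetic (names are mine):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial of degree < n at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total
```

Sampling $x^2+1$ at three nodes recovers $x^2+1$ everywhere, exactly as uniqueness demands.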
discuss convexity of the following set? | For this to be a convex set, you need the line segment $L$ between any two points $(x_1, y_1), (x_2,y_2) \in M$ to have every point $(x,y) \in L$ also in $M$.
In this case, it's clear that the set $M$ can only be convex if $a = 0$ and $b>0$ (naturally, $b>a$).
If $a>0$, there will always exist a small $\epsilon > 0$ such that the segment $L$ from $(a,\epsilon)$ to $(\epsilon, a)$ passes through your "donut hole".
Convex sets are important in discussing the uniqueness of solutions in optimization problems --which is what I'm assuming you're studying given the tags you've put on this question. Consider reading the wiki on this topic, the images on the right sidebar illustrate my point: http://en.wikipedia.org/wiki/Convex_set |
On the Equation of the Ellipse Formed by Intersecting a Plane and Cone | Here's a 3-D picture, to explain more clearly that answer.
I chose as $x$-axis the intersection between the given plane and a plane through the axis of the cone, perpendicular to it. I placed the origin at the midpoint $C$ of segment $AB$ (which is the major axis of the ellipse). The $y$-axis is on the given plane, perpendicular to the $x$-axis. If you draw, from the center $M$ of the cone base, a line parallel to the $y$-axis, then it intersects the cone (and the ellipse) at points $I$ and $H$, with coordinates $|x|=CM=(AM-MB)/2$ and $|y|=MH=$ the radius of the base of the cone.
EDIT.
To choose the intersecting plane such that the center of the ellipse is at a given point $C$ inside the cone, let's consider line $VC$, where $V$ is the vertex of the cone (see figure below).
If $C$ lies on the axis of the cone, then just choose a plane through $C$ perpendicular to the axis, to obtain as intersection a circle of centre $C$.
If $C$ doesn't lie on the axis, then consider the plane $\sigma$ through $C$ and the axis, intersecting the cone at two generatrices $VA$ and $VB$, and set
$\angle CVB=\beta$, $\angle CVA=\gamma$, with $\beta>\gamma$. If a plane through $C$, perpendicular to $\sigma$, intersects $VA$, $VB$ at $A$ and $B$ respectively, then $AB$ is the major axis of the intersection ellipse, and $C$ is its center if $AC=BC$.
Let $\theta=\angle VCB$.
From the sine law applied to triangles $VCB$ and $VCA$ one finds
$$
{BC\over VC}={AC\over VC}={\sin\beta\over\sin(\theta+\beta)}=
{\sin\gamma\over\sin(\theta-\gamma)},
\quad\text{whence:}\quad
\tan\theta={2\sin\beta\sin\gamma\over\sin(\beta-\gamma)}.
$$
Finally, we can express angle $\alpha$ between the plane of the ellipse and a plane perpendicular to the axis as a function of $\theta$:
$$
\alpha={\pi+\gamma-\beta\over2}-\theta.
$$ |
function not differentiable at $(0,0)$ | You can consider $$f(x,y) =\left(\frac{xy}{x^2+y^2}, \frac{xy}{x^2+y^2}, \frac{xy}{x^2+y^2}\right)$$
In general, $f:\mathbb{R}^n \to \mathbb{R}^m$ is differentiable iff each component function $f_i$, $1 \leq i \leq m$, is differentiable. Thus studying the differentiability of functions $\mathbb{R}^n \to \mathbb{R}^{m}$ more or less reduces to studying the differentiability of functions $\mathbb{R}^{n} \to \mathbb{R}$. |
can we use vertex degree to detect hamilton path and hamilton circuit? | Check the theorems of Ore and Dirac.
According to the theorem of Ore:
Let $G$ be a (finite and simple) graph with $n ≥ 3$ vertices. We denote by $\deg (v)$ the degree of a vertex $v$ in $G$, i.e. the number of incident edges in $G$ to $v$. Then, Ore's theorem states that if
$\deg (v) + \deg (w) ≥$ $n$ for every pair of non-adjacent vertices $v$ and $w$ of $G$
then $G$ is Hamiltonian.
According to the theorem of Dirac:
A simple graph with $n$ vertices $(n ≥ 3)$ is Hamiltonian if every vertex has degree $n / 2$ or greater. |
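Ore's condition is straightforward to test mechanically; a sketch with the graph as a dict of neighbour sets (the helper name is mine):

```python
from itertools import combinations

def satisfies_ore(adj):
    """Ore's condition: deg(v)+deg(w) >= n for all non-adjacent v != w (n >= 3)."""
    n = len(adj)
    if n < 3:
        return False
    return all(len(adj[v]) + len(adj[w]) >= n
               for v, w in combinations(adj, 2) if w not in adj[v])
```

Note the condition is sufficient, not necessary: the $5$-cycle is Hamiltonian yet fails Ore's bound.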
Jordan/Lebesgue measure of $A\times B$ | It still holds. Just write $A$ as a countable union of bounded subsets; a countable union of measure-zero sets still has measure zero. |
An alternating sum with binomial coefficients is not a prime number | Let
\begin{align}
\phi_{n} = {n \choose 1}-5{n \choose 2}+5^2{n \choose 3}-...+5^{n-1}{n\choose n}
\end{align}
then
\begin{align}
-5 \phi_{n} &= \sum_{r=1}^{n} \binom{n}{r} (-5)^{r} = -1 + \sum_{r=0}^{n} \binom{n}{r} (-5)^{r} \\
&= (1-5)^{n} -1
\end{align}
or
\begin{align}
\phi_{n} = \frac{ 1 -(-4)^{n} }{ 5 }.
\end{align}
For the case of $n=5$ the result is $\phi_{5} = 205 = 5 \cdot 41$, which is the product of two primes. Further results show that $\phi_{n}$ is not prime for $n \geq 5$. |
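The derivation can be double-checked numerically (function names are mine):

```python
from math import comb

def phi_sum(n):
    """The alternating binomial sum defining phi_n."""
    return sum((-5) ** (r - 1) * comb(n, r) for r in range(1, n + 1))

def phi_closed(n):
    """The closed form (1 - (-4)^n) / 5."""
    return (1 - (-4) ** n) // 5
```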
Proof of Caratheodory's Theorem (for Convex Sets) using Radon's Lemma | You started off well, especially by assuming $m$ is minimal. This is not a simple proof, and figuring it all out on your own can be very challenging. Instead of trying to use the affine dependence of the points directly, as done in the proof of Radon's theorem, you can instead apply the theorem itself on the set $\{x_1,...,x_m\}$ (since, as you mentioned, $m\geq n+2$).
This gives us disjoint $I,J\subseteq [m]$ and equivalent convex combinations:
$$\sum_{h\in I}x_h\mu_h=\sum_{h\in J}x_h\mu_h,\sum_{h\in I}\mu_h=\sum_{h\in J}\mu_h=1$$
We can rename the points, and WLOG assume $I=\{1,...,k\},J=\{k+1,...,l\}$, and in addition:
$$\frac{\alpha_1}{\mu_1}=\min_{i\in I}{\frac{\alpha_i}{\mu_i}}:=t$$
(here, the coefficients $\alpha_i$ are the same coefficients you defined for the given $y$ in your question). This gives us:
$$y=\sum_{i=1}^m\alpha_ix_i=\sum_{i=1}^m\alpha_ix_i-t\sum_{i=1}^k\mu_ix_i+t\sum_{i=k+1}^l\mu_ix_i=$$
$$\sum_{i=1}^k(\alpha_i-t\mu_i)x_i+\sum_{i=k+1}^l(\alpha_i+t\mu_i)x_i+\sum_{i=l+1}^m\alpha_ix_i$$
Since $t\mu_i= \frac{\alpha_1\mu_i}{\mu_1}\leq\alpha_i$ for $i\in I$, all of the coefficients in this sum are non-negative. You can also easily verify that the first coefficient is $0$, and the sum of the coefficients is $1$, and therefore we managed to present $y$ as a convex combination of $m-1$ points, in contradiction to the minimality of $m$. |
Linear congruence equations | Notice that neither $5$ nor $7$ is ever a prime factor of $6^N-4$. I'll do the case with $5$ and leave the $7$ case for you to check; having checked these two cases we can safely conclude that the three numbers in question are relatively prime because the only prime factors of $5^N$ and $7^N$ are $5$ and $7$, respectively (so none of the three numbers will share any prime factors).
By contradiction, suppose $6^N-4$ has $5$ as a prime factor for some $N\geq 2$, and thus $6^N-4\equiv 0\pmod 5$. Then $6^N\equiv 4\pmod 5$ and thus $1^N\equiv 4\pmod 5$ since $6\equiv 1\pmod 5$. This is absurd since $1^N=1$ and $1\not\equiv 4\pmod 5$. Hence, $5$ is never a prime factor of $6^N-4$ for any $N$. |
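Both congruence arguments (mod $5$ and mod $7$) can be confirmed empirically; indeed $6^N-4\equiv 2\pmod 5$ for every $N$, matching the argument above:

```python
def never_divisible(limit=200):
    """Check that 6**N - 4 is divisible by neither 5 nor 7 for 2 <= N < limit."""
    return all((6 ** N - 4) % 5 != 0 and (6 ** N - 4) % 7 != 0
               for N in range(2, limit))
```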
Bolzano-Weierstrass proof explanation | It is a proof by construction. Nothing "allows" you to pick those intervals and indices; you are constructing them. However, you are guaranteed the existence of indices with the said property ($n_1<n_2<\dots<n_m$) because you always pick the closed interval containing infinitely many terms of the sequence. |
Rotors/Quaternions: double reflection question | Your understanding is correct. Given two unit vectors $a$ and $b$, their geometric product $R = a b$ is a rotor. The rotor is:
$R = a b = a \cdot b + a \wedge b$
$R = \cos \theta + I \sin \theta$
where $I = \frac{a \wedge b}{\|a \wedge b\|}$. When $R$ is applied to a blade $X$ using the versor product $R X \tilde R$, the total angle of rotation is $2 \theta$. For convenience the rotor is therefore usually constructed with the half angle $\theta/2$, so that the versor product rotates by exactly $\theta$.
This can be seen algebraically by expanding the versor product:
$R X \tilde R = (\cos \theta + I \sin \theta) X (\cos \theta - I \sin \theta)$
$R X \tilde R = \cos^2\theta \ X + (\cos \theta \sin \theta)( -X I + I X ) -
\sin^2\theta \ I X I $
Using the fact that $- X I + I X = -2 X \cdot I$
$R X \tilde R = \cos^2\theta \ X - 2(\cos\theta \sin\theta) X \cdot I - \sin^2\theta \ I X I$
It is convenient for our purpose to express the reflection $I X I$ of the blade $X$ with respect to the dual of the bivector, $I^* = I e_{321}$, instead of $I$, so we get $I X I = (-1)^k I^* X I^* = -2 I^* (I^* \cdot X) + X$, where $k$ is the grade of the blade $X$. So we get:
$R X \tilde R = (\cos^2\theta - \sin^2\theta) \ X - 2(\cos\theta \sin\theta) \ X \cdot I + 2 \sin^2\theta \ I^* (I^* \cdot X)$
Applying the following trigonometric identities:
$\cos 2\theta = \cos^2\theta - \sin^2\theta$
$\sin 2\theta = 2 \cos\theta \sin\theta$
$1-\cos 2\theta = 2 \sin^2\theta$
we finally get:
$R X \tilde R = \cos 2\theta \ X - \sin 2\theta \ X \cdot I + (1 - \cos 2\theta) \ I^* (I^* \cdot X)$
This is the Geometric Algebra version of Rodrigues' formula. As can be seen, the versor product produces a rotation with total angle of rotation of $2 \theta$. That is why the convention is to take angle $\theta/2$. |
Why does list coloring provides a more general setting to discuss the chromatic number? | In list coloring, different vertices can have different lists, while if you're just $k$-coloring a graph, then every vertex has the same $k$ colors available. There are bipartite ($2$-colorable) graphs with arbitrarily high choice number. |
When is it possible to interpret composition as a natural transformation? | Nice question! As user43208 says in comments, I presume you’re assuming $\newcommand{\CC}{\mathcal{C}}\CC$ is Cartesian closed throughout.
The most straightforward way to see this as a natural transformation is to consider $Y$ as fixed. Then $Z^Y \times Y^X$ and $Z^X$ can each be seen as functors of two variables, contravariant in $X$ and covariant in $Z$, i.e. as functors $\newcommand{\op}{\mathrm{op}}\CC^\op \times \CC \to \CC$.
Slightly better is to see $Z^Y \times Y^X$ as “isovariant” in $Y$, i.e. see both as functors $\CC^\op \times \CC^{\mathrm{iso}} \times \CC \to \CC$. (I’ll leave the details of this approach as an exercise.)
But best of all (again as user43208 says) is to move to the language of dinatural and extranatural transformations, which was designed exactly for this sort of situation, where you have a construction like $Z^Y \times Y^X$ which is covariant in some instances of an argument and contravariant in others. You can find full details at the n-lab, but roughly, the way this works is by splitting the co- and contra-variant instances of $Y$ into two different arguments, $Y_+$ and $Y_-$, and thinking about the functor of four variables $Z^{Y_-} \times Y_+^X : \CC^\op \times \CC \times \CC^\op \times \CC \to \CC$, and similarly thinking of $Z^X$ as a functor of those four variables (in which $Y_+$, $Y_-$ just so happen not to appear). Then a dinatural transformation between these is a family of maps $\alpha_{X,Y,Z}$ between “diagonal instances” (i.e. instances where both $Y_+$, $Y_-$ are given the same value $Y$), satisfying a naturality condition saying that for any $X,Y_+, Y_-,Z$ and map $f : {Y_-} \to Y_+$, the two different ways of getting from $Z^{Y_-} \times Y_+^X$ to $Z^X$ are the same.
(Exercise: draw out this naturality diagram! Hint: the “two different ways” go via $Z^{Y_+} \times {Y_+}^X$ and $Z^{Y_-} \times {Y_-}^X$ respectively.)
These three approaches take successively more language to set up; but they encapsulate increasingly more about the naturality of this composition operation. |
Finding areas of different curves in a rectangle | The rectangle is misleading. The integrals don't recognize the bottom of the rectangle, but instead the $x$-axis.
II and III are correct. |
Automorphisms of punctured plane | The automorphism group of $\mathbb{C}\setminus \{0\}$ is generated by linear maps $z\mapsto az$ (with $a\ne 0$) and inversion $z\mapsto 1/z$. So, every element of the group can be written as $z\mapsto az^{\pm 1}$.
Indeed, suppose $f$ is an automorphism of $\mathbb{C}\setminus \{0\}$. Since $f$ is injective, $0$ cannot be an essential singularity (recall Picard's theorem), so it is either a pole or a removable singularity. By replacing $f$ with $1/f$ if necessary, we can assume $0$ is removable. Then $f$ extends to an automorphism of $\mathbb{C}$, so $f(z)=az+b$. And since it fixes $0$, $b=0$.
difference between irreducible element and irreducible polynomial | Irreducible polynomials in $\mathbb{F}[x]$, for $\mathbb{F}$ a field, are examples of irreducible elements of the polynomial ring $\mathbb{F}[x]$.
The polynomial $x^{2}+2$ is irreducible over the rationals (we consider it to have rational coefficients so that we are working over a field) because it cannot be factored into two rational polynomials that are not units (your only choice is to factor it as something like $(1/2)(2x^{2}+4)$, where $1/2$ is a unit). Notice that $x + \sqrt{2}$ is not a unit, but it can't be used in a factorization of $x^{2}+2$ in $\mathbb{Q}[x]$ because it does not have rational coefficients.
The polynomial $x^{2}+4x+4 = (x+2)(x+2)$ is reducible, not irreducible, precisely because it can be factored as $(x+2)(x+2)$, where $x+2$ is not a unit. |
Same eigenvalues, but not similar | Consider the matrices
\begin{align}
A =
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
\ \ \text{ and } \ \
B =
\begin{pmatrix}
1 & 1\\
0 & 1
\end{pmatrix}
\end{align}
which clearly have the same eigenvalues, but they are not similar because $B$ is a Jordan block that can't be diagonalized. |
Benford's law – formula | $$\mathbb P(d) =\sum\limits_{k=10^{n-2}}^{10^{n-1}-1} \log_{10}\left(1+\frac1{10k+d}\right)$$ is an expression for the probability under Benford's law that the $n$th digit being considered is $d$, with $d$ in $\{1,2,3,4,5,6,7,8,9\}$.
For this sum to be meaningful, you need $n\ge 2$; if instead you are considering the first digit then you have $\mathbb P(d) = \log_{10}\left(1+\frac1{d}\right)$.
As an example, suppose $d=7$ and $n=3$ so you are considering the probability that the third digit is $7$ with Benford's law. You want the probability that numbers start $107\ldots$ or $117\ldots$ or $127\ldots$ and so on up to $997\ldots$.
That will be $\log_{10}\left(1+\frac1{107}\right)+\log_{10}\left(1+\frac1{117}\right)+\log_{10}\left(1+\frac1{127}\right) +\cdots+\log_{10}\left(1+\frac1{997}\right)$ which could be written as $\sum\limits_{k=10}^{99} \log_{10}\left(1+\frac1{10k+7}\right)$, i.e. the original expression with $d=7$ and $n=3$. |
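A direct implementation of both cases (the answer restricts $d$ to $1,\dots,9$, but for $n\ge 2$ the same sum also works for $d=0$, and the ten probabilities then sum to $1$):

```python
import math

def benford_digit(d, n):
    """Benford probability that the nth significant digit equals d."""
    if n == 1:
        return math.log10(1 + 1 / d)          # first digit: d in 1..9
    return sum(math.log10(1 + 1 / (10 * k + d))
               for k in range(10 ** (n - 2), 10 ** (n - 1)))
```

`benford_digit(7, 3)` sums the terms for $107, 117, \dots, 997$ exactly as above.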
In Towers of Hanoi (with 3 sticks and n disks without backtracking), do all legal sequences of moves reach the solution? | There cannot be deadlock in the Towers of Hanoi, as you almost always have three moves: you can move the smallest disk to either of the other two pegs, and unless all the disks are on the same peg you can also move exactly one other disk.
There are many ways of proving that any Towers of Hanoi position is solvable. One I like is to show the correspondence between the positions and the points of a Sierpinski triangle. Since a Sierpinski triangle is connected, it is possible to move from any given legal position to any other, and so any Towers of Hanoi position is solvable. |
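For small $n$ the connectivity claim can also be brute-forced: encode a position by the peg of each disk (any assignment of disks to pegs is a legal position, so there are $3^n$ of them) and breadth-first search the move graph. A sketch, with names of my own choosing:

```python
from collections import deque

def legal_moves(state):
    """state[i] = peg (0-2) of disk i, with disk 0 the smallest."""
    n = len(state)
    for disk in range(n):
        if any(state[d] == state[disk] for d in range(disk)):
            continue                      # a smaller disk lies on top of it
        for peg in range(3):
            if peg != state[disk] and not any(state[d] == peg for d in range(disk)):
                yield state[:disk] + (peg,) + state[disk + 1:]

def reachable(n):
    start = (0,) * n
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for t in legal_moves(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen
```

For $n=3$ all $27$ positions are reached from the start, and every position has at least two legal moves, matching the no-deadlock argument.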
Accumulation points of the set $S=\{(\frac {1} {n}, \frac {1} {m}) \space m, n \in \mathbb N\}$ | Find the distance between points $\left(\frac{1}{n}, \frac{1}{m}\right)$ and $\left(\frac{1}{n'}, \frac{1}{m'}\right)$. Use an argument based on Cauchy sequences to conclude that any Cauchy sequence of $S$ (that isn't on the diagonal) will eventually only vary in one of its components. Find the limit.
It's been a loooong time since I did this problem (10 years?) but that should at least be a big hint. |
Closed-form solution of a system of nonlinear differential equations | With a CAS like Maple I find two solutions. The first:
$$
x \left( t \right) = \frac{1}{\gamma} + C_1\,{\rm e}^{-\gamma\,t}, \qquad y \left( t \right) = 0
$$
and the second solution, in implicit form:
$$
-C_1\,\sqrt[a]{y \left( t \right) {\rm e}^{\gamma\,t}} - C_2 - {\rm e}^{\gamma\,t} + \gamma\,y \left( t \right) {\rm e}^{\gamma\,t} = 0,
\qquad
x \left( t \right) = \frac{a\,y \left( t \right) \left( 1 - \gamma\,y \left( t \right) - y' \left( t \right) \right)}{\gamma\,y \left( t \right) + y' \left( t \right)}
$$ |
Global, local and linearized observability | To evaluate the observability of the nonlinear system, you must use Lie derivatives. However, you might be interested to look at the following paper:
Chen, Zhe, Ke Jiang, and James C. Hung. "Local observability matrix
and its application to observability analyses." Conference of IEEE
Industrial Electronics Society. 1990.
The linearized system is time-varying due to the computation of the Jacobians, so the local observability might be useful depending on your application. In some cases, the local observability of the linearized system is equivalent to the observability of the nonlinear system if the system is linearized about the true states. Unfortunately, the true states are not available in practice, so the local observability likely does not match the observability of the nonlinear system.
Nevertheless, local observability is still useful; these caveats should simply be kept in mind.
Prove that $x^2+1$ cannot be a perfect square for any positive integer x? | For any $n$, the square is $n^2$.
The following integer is $n+1$, and its square is $n^2 + 2n + 1$.
Since (assuming $x>0$) $$x^2 < x^2+1 < x^2 + 2x + 1$$
it is also true that $$x = \sqrt{x^2} < \sqrt{x^2+1} < \sqrt{x^2 + 2x + 1} = x+1$$ but there are no integers strictly between $x$ and $x+1$.
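The squeeze argument can also be sanity-checked by brute force; here is a small sketch (the helper name `is_perfect_square` is mine):

```python
import math

def is_perfect_square(n: int) -> bool:
    # n >= 0 is a perfect square iff its integer square root squares back to n
    r = math.isqrt(n)
    return r * r == n

# x^2 + 1 is strictly between x^2 and (x+1)^2, so it is never a square:
counterexamples = [x for x in range(1, 10_000) if is_perfect_square(x * x + 1)]
```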
The closure of the irrational numbers on $\mathbb{R}$ | Here $\tau$ represents the standard topology, namely the one generated by the open intervals in $\Bbb{R}$. So take any real $x$ and any open set $B \in \tau$ containing $x$.
Does $B \cap P$ contain a number other than $x$?
Surely yes! Any open interval inside $B$ contains both rationals and irrationals, since both are dense in $\Bbb{R}$.
Study material for fuzzy logic | If your interest in fuzzy logic is from a mathematical point of view (axiomatizations in propositional and predicate languages, etc.) I would strongly suggest Petr Hájek's book "Metamathematics of Fuzzy Logic" as a first step into this realm.
I suggest you take a look at the following links:
http://www.mathfuzzlog.org/index.php/Mathematical_Fuzzy_Logic
http://en.wikipedia.org/wiki/BL_%28logic%29
http://en.wikipedia.org/wiki/Monoidal_t-norm_logic |
Find the volume of the solid obtained by rotating the region | HINT
Firstly, notice that $y(x) = 9x - (3x)^{2} = 9x - 9x^{2} = 9x(1 - x)\geq 0$ for $0\leq x \leq 1$. Each height $y(x)$ contributes with $2\pi xy(x)\mathrm{d}x$ to the volume of the solid. Hence we conclude the desired volume equals
\begin{align*}
\int_{0}^{1}2\pi xy(x)\mathrm{d}x = 18\pi\int_{0}^{1}x^{2}(1-x)\mathrm{d}x
\end{align*}
Such method is known as the shell integration. Can you proceed from here? |
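If you want to check your final number after proceeding, a quick midpoint-rule evaluation (my own sketch) agrees with the exact value $18\pi\left(\tfrac13-\tfrac14\right)=\tfrac{3\pi}{2}$:

```python
import math

# Midpoint rule for V = 18*pi * \int_0^1 x^2 (1 - x) dx
N = 100_000
total = sum(((i + 0.5) / N) ** 2 * (1 - (i + 0.5) / N) for i in range(N))
V = 18 * math.pi * total / N  # ≈ 3*pi/2
```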
Design a connected graph with smallest diameter | Observe that for $k>1$ the diameter $d$ of an undirected graph with $2^k$ vertices and $k2^{k-1}$ edges is always greater than $1$, because such a graph cannot be complete: the complete graph on $2^k$ vertices has $2^{k-1}(2^k-1)$ edges, and $2^k-1>k$ for $k>1$.
Moreover, let $n$ and $e$ be positive integers, whenever $e \geq n-1$, you can always design an undirected graph with $n$ vertices and $e$ edges whose diameter is at most $2$. For example, let $G=(V,E)$, with
$$V=\left\{x_1, \ldots, x_n \right\},$$
and impose that the set
$$ \left\{(x_1,x_i)\,|\, i=2,\ldots, n\right\} \subseteq E.$$
Then the distance between every pair of vertices is at most $2$.
This should answer your question, since the smallest diameter is $2$ for $k>1$. |
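A quick breadth-first-search check of the star construction above (the vertex labels and the extra edges are arbitrary choices of mine):

```python
from collections import deque

def eccentricity(adj, s):
    # BFS from s; returns the largest distance from s to any vertex
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

n = 8
adj = {i: set() for i in range(n)}
for i in range(1, n):          # star: x_1 (here vertex 0) joined to all others
    adj[0].add(i)
    adj[i].add(0)
for u, v in [(2, 3), (4, 7)]:  # leftover edges may be placed arbitrarily
    adj[u].add(v)
    adj[v].add(u)

diameter = max(eccentricity(adj, s) for s in adj)
```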
Relationship Between Ring of Integers of a Number Field to P-adic integers | Yes. As a DVR $\Bbb{Z}_{\mathfrak{p}}$ is integrally closed in its field of fractions $K$. It contains $\Bbb{Z}$ so it also contains the integral closure of $\Bbb{Z}$ in $K$ which is ... |
Discrete Mathematics Symmetric Diffirence Proof | I feel like this is close to a proof by contradiction, but it isn't formal enough yet; I'm looking for help fixing it:
The symmetric difference of two sets is the set of elements which are in either of the sets but not in their intersection.
The intersection of two sets is the set that contains all elements present in both sets.
So suppose $ A \Delta C = B \Delta C $; then $A = B$, because two sets cannot have the same symmetric difference with $C$ unless they are equal.
Question regarding an absolute value proof | Hint: saying that $s_n\to s$ is equivalent to saying that $|s_n-s|\to 0$. Then use triangle inequality for $|s_n|=|s -(s-s_n)|$. |
$\int^\infty_0\frac{\sin x}{x} \, dx = \frac{1}{2i}\int^\infty_{-\infty} \frac{e^{ix}-1}{x} \, dx$, why? | Note that $\displaystyle\int^\infty_{-\infty} \frac{e^{ix}-1}{x} \, dx$ is not well-defined, in fact
$$\frac{e^{ix}-1} x=\frac{\cos(x)-1} x+\frac{i\sin(x)} x$$
$x\mapsto\dfrac{\sin(x)} x$ is an even function, therefore
$$\frac{1}{2i}\int^\infty_{-\infty} \frac{i\sin(x)} x \, dx=\int^\infty_0\frac{\sin x} x \, dx$$
$x\mapsto\dfrac{\cos(x)-1} x$ is an odd function, but its integral is divergent, because $$\int^M_{0} \frac{\cos(x)-1} x \, dx=\left[\frac{\sin(x)-x} x\right]_{0}^{M}+\int^M_{0} \frac{\sin(x)-x} {x^2} \, dx=A-\int^M_{1}\frac{1} {x}dx$$
where $A$ is finite but $\displaystyle\int^M_{1}\frac{1} {x}dx$ is divergent when $M\to\infty$.
If $x_1$ does not lie in the range space of a space, does exist an $x_2$ in null space of it? | For subspaces $U,V$ of $\Bbb R^n$, we have $U \subseteq V \iff V^\perp \subseteq U^\perp$. Because $W$ is symmetric, the orthogonal complement to its range space is its null space.
We know that the span of $x_1$ is not a subset of the range space of $W$. It follows that the orthogonal complement of this range space is not a subset of the orthogonal complement of $x_1$. Thus, there exists an $x_2$ that is in the orthogonal complement of the range space of $W$ (from which it follows that $W(t_0,t_1)x_2 = 0$) but is not in the orthogonal complement of $x_1$ (from which it follows that $x_2'x_1 \neq 0$).
Another approach: given such an $x_1$, we can produce another vector $x_1^{\perp}$, which is the component of $x_1$ that is orthogonal to the range space of $W$. If we take $x_2 = x_1^\perp$, then we find that
$$
W x_2 = (x_2' W')' = (x_2'W)' = 0.
$$
On the other hand,
$$
x_2'x_1 = (x_1^\perp)'(x_1^\perp + x_1^{||}) = \|x_1^{\perp}\|^2,
$$
where $x_1^{||} = x_1 - x_1^{\perp}$ is the projection of $x_1$ onto the range space of $W$. |
Finding the maxima of $f(z)=\frac{1}{z^3+z^2+z+1}$ and the integral $\int_Cf(z)\mathrm dz$ | We have a partial-fraction decomposition
\begin{align}f(z)&=\frac{1}{2(z+1)}+\frac{1-z}{2(z^2+1)}\\
&=\frac{1}{2(z+1)}+\frac{i(1-z)}{4(z+i)}-\frac{i(1-z)}{4(z-i)}
\end{align}
We then obtain, by Cauchy's integral formula,
\begin{align}
\oint_C f(z)dz=2\pi i\Big(\frac{1}{2}+\frac{i(1+i)}{4}-\frac{i(1-i)}{4}\Big)=0
\end{align} |
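A numeric cross-check of the cancellation, using the standard formula for the residue of $1/\prod_j(z-p_j)$ at a simple pole $p$ (plain complex arithmetic, no libraries):

```python
poles = [-1, 1j, -1j]  # simple poles of f(z) = 1/((z+1)(z^2+1))

def residue(p):
    # residue of 1/prod(z - q) at the simple pole p
    prod = 1
    for q in poles:
        if q != p:
            prod *= p - q
    return 1 / prod

total = sum(residue(p) for p in poles)  # the integral over C is 2*pi*i*total
```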
Every element of $U + V + W$ can be expressed uniquely in the form $u + v + w$ | People are really missing the point of the exercise. If I have two (finite-dimensional) subspaces $S,T$ in some larger vector space, the following are equivalent:
(1) $\dim (S + T) = \dim S + \dim T$
(2) $S \cap T = \{ \; \vec{0} \; \} $
(3) every vector in $S+T$ has only one expression as $s+t,$ with $s \in S, \; t \in T.$
About points (2) and (3): if I have some nonzero vector $v \in S \cap T,$ then I can write $v = v + 0$ with $v \in S,\ 0 \in T,$ or $v = 0 + v$ with $0 \in S,\ v \in T,$ giving non-uniqueness for the expression. For that matter, we get an alternative expression $0 = v + (-v).$ So, given any $w = s+t,$ we get a second expression $w = (s + v) + (t - v).$ That is called nonuniqueness.
Your question is about three subspaces $U,V,W$ which have (pairwise) trivial intersections.
The important words here are "uniquely" and "intersections."
EDIT, Monday, June 17. Notice that if you can prove the equivalence of the three properties above, a really good, careful proof, then the question for three subspaces is just proof by induction (just one such step), using some inequalities:
$$ \dim (U+V) \leq \dim U + \dim V. $$
$$ \dim (U+V+W) \leq \dim (U+V) + \dim W. $$
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
ANOTHER EDIT, same day:
THEOREM 1:
$$ \dim (U+V) \leq \dim U + \dim V, $$ and equality holds if and only if $$ U \cap V = \{ \vec{0} \}. $$
PROOF: by @Sarunas
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
THEOREM 2:
$$ \dim (U+V + W) \leq \dim (U+V) + \dim W, $$ and equality holds if and only if $$ (U+V) \cap W = \{ \vec{0} \}. $$
PROOF: apply Theorem 1.
CAUTION: the condition (for equality) in Theorem 2 above is sometimes stronger than just pairwise trivial intersections. The simplest example is three distinct one-dimensional spaces (lines through the origin) in the plane $\mathbb R^2.$ See MO_COMMON_FALSE_BELIEFS_TILMAN including the comment by @Willie Wong |
To evaluate double integral using Polar coordinates | With the first substitution you get:
$$I = \iint_D(4y- x^2-y^2)\,dxdy=\int_0^\pi \int_0^{4\sin \theta} (4r\sin\theta - r^2)\cdot r\,drd\theta = 8\pi$$
The second substitution should be $x = r\cos\theta, y = 2+r\sin\theta$. The Jacobian is again $r$, and the limits are $r \in [0,2], \theta \in [0,2\pi]$. We get
$$I = \iint_D (-x^2-(y-2)^2+4) \,dxdy = \int_0^{2\pi}\int_0^2 (-r^2+4)\cdot r\,drd\theta = 8\pi$$
which is the same. |
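Both computations can be confirmed numerically; here the $r$-integral of the first substitution is done in closed form for each $\theta$ and the $\theta$-integral by the midpoint rule (a sketch of mine):

```python
import math

# inner integral: \int_0^R (4 r sinθ - r^2) r dr = 4 sinθ R^3/3 - R^4/4 with R = 4 sinθ
N = 100_000
total = 0.0
for i in range(N):
    t = (i + 0.5) * math.pi / N
    R = 4 * math.sin(t)
    total += 4 * math.sin(t) * R ** 3 / 3 - R ** 4 / 4
I = total * math.pi / N  # should be close to 8*pi
```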
Find the element with the highest order in a symmetric group? | The conjugacy classes of $S_n$ are in one-to-one correspondence with the partitions of $n$, that is, a partition of $n$ is an ordered $k$-tuple ($k$ is arbitrary) of positive integers ordered increasingly whose sum is $n$. The partitions of $9$ are
$$
(1,1,1,1,1,1,1,1,1) \\ (1,1,1,1,1,1,1,2) \\ (1,1,1,1,1,1,3) \\ \vdots \\ (1,8), \\
(1,1,1,1,1,2,2) \\ (1,1,1,1,2,3) \\ (1,1,1,2,4) \\ (1,1,2,5) \\ (1,2,6) \\ (2,7), \\
(1,1,1,3,3) \\ (1,1,3,4) \\ (1,3,5) \\ (3,6) \\
(1,4,4) \\ (4,5) \\
(1,1,1,2,2,2) \\ (1,1,2,2,3) \\ (1,2,2,4) \\ (2,2,5) \\
(1,2,3,3) \\ (2,3,4) \\
(3,3,3)\\
(1,2,2,2,2) \\
(2,2,2,3) \\
(9)
$$
So by checking all their LCMs, you quickly notice that your 20 is indeed the correct one. I don't know if there is a general method for $S_n$ though.
Hope that helps, |
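This maximum is in fact a named quantity: Landau's function $g(n)$, the maximum of $\operatorname{lcm}$ over the partitions of $n$. A brute-force sketch (function names are mine):

```python
from math import gcd
from functools import reduce

def partitions(n, max_part=None):
    # yield all partitions of n as tuples with non-increasing parts
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def landau(n):
    # maximal order of an element of S_n = max lcm over partitions of n
    return max(reduce(lambda a, b: a * b // gcd(a, b), p, 1)
               for p in partitions(n))
```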
What is the relation between eigen vectors of AB and BA? | If $v$ is an eigenvector of $AB$ to the eigenvalue $\lambda\neq0$, then $Bv\ne0$
and $$\lambda Bv=B(ABv)=(BA)Bv,$$ which means $Bv$ is an eigenvector for $BA$ with the same eigenvalue.
But if $\lambda=0$ then
$0=\det(AB)=\det(BA)$, so $0$ is an eigenvalue of $BA$ as well. |
Characterizing continuous exponential functions for a topological field | It seems that as-stated, the answer is false. I'm not satisfied with the following counterexample, however, and I'll explain afterwards.
Take $K = \mathbb{C}$ and let $E(z) = e^z$ be the standard complex exponential. Take $E'(z) = \overline{e^z} = e^{\overline{z}}$, where $\overline{z}$ is the complex conjugate of $z$. Then $E'(z)$ is not of the form $E(r z)$, and yet is a perfectly fine homomorphism from the additive to the multiplicative groups of $\mathbb{C}$.
Here's why I'm not satisfied: you can take any automorphism of a field and cook up new exponentials by post-composition or pre-composition. In the case I mentioned, these two coincide.
This won't work in $\mathbb{R}$ because there are no nontrivial continuous automorphisms there. I would be interested in seeing an answer to a reformulation to this problem that reflected this. |
convergence of an integral ( with an inner integral) | For the first question, just note that $\log(1 + t + \sin t) = 2t + O(t^2)$, so that
$$F(x) = \int_{0}^{x} \log (1 + t + \sin t) \, dt = x^2 + O(x^3) $$
near $x = 0$. Also, since the integrand is positive for $t > 0$, we have $F(x) > 0$ for $x > 0$. Thus
$$ \int_{0}^{1} \frac{x^p}{F(x)} \, dx \text{ converges} \quad \Longleftrightarrow \quad \int_{0}^{1} x^{p-2} \, dx \text{ converges}. $$
Therefore we must have $p-2 > -1$, or equivalently $p > 1$.
For the second question, note that $\log t \leq \log(1 + t + \sin t) \leq \log (3t)$ for $t > 1$. That is, we have the asymptotic relation
$$F(x) = x \log x + O(x) \quad \text{as } x \to \infty. $$
So we find that
$$ \int_{1}^{\infty} \frac{x^p}{F(x)} \, dx \text{ converges} \quad \Longleftrightarrow \quad \int_{2}^{\infty} \frac{x^{p-1}}{\log x} \, dx \text{ converges}. $$
Now it is not hard to show that the latter converges if and only if $p-1 < -1$, or equivalently, $p < 0$. |
Prove that a set of functions has the same cardinality as $\mathbb{N}$ | Your set $X$ is not $\mathbb{N}^\mathbb{N}$; it's a subset. So, there is no contradiction.
For each $n\in\mathbb N$, let $X_n=\{f\in\mathbb{N}^\mathbb{N}\,|\,m\geqslant n\Longrightarrow f(m)=f(n)\}$. Then $X=\bigcup_{n\in\mathbb N}X_n$ and therefore all you need to prove is that $|X_n|=|\mathbb{N}|$. That is easy, since $|X_n|$ is equal to the cardinal of the set of all functions from $\{1,2,\ldots,n\}$ to $\mathbb N$, which is equal to the cardinal of $\mathbb N$.
approximating a discrete function with a continuous one | This depends on "how continuous" $f$ is. It's possible to construct continuously differentiable functions that have arbitrarily narrow spikes (e.g. Gaussian as $\sigma \rightarrow 0$). For these functions, sampling at $h$ sized intervals can be arbitrarily wrong.
Using the $\epsilon$-$\delta$ definition of continuity, for any error tolerance $\epsilon > 0$ there exists $\delta > 0$ such that when $|x - \hat x| < \delta$, we have $|f(x) - f(\hat x)| < \epsilon$. In other words, you need to know about $f$ to determine a bound for $h$.
Using differentiability and the mean value theorem, we can deduce that $|f(x) - f(\hat x)| \le |x - \hat x| \sup_{x \in (0, 1)} |f'(x)|$. |
ratio of cylinder to ratio of square prism | The lateral surface area of a cylinder or prism is the surface area that is not the ends. It is the product of the perimeter of the base and the height. Given that the heights are equal, the perimeters of the bases must be equal.
The volume of the cylinder or prism is the area of the base times the height. You are being asked for the ratio of areas of a square and circle that have equal perimeter. |
When is it legal to use $dx$? | Let's start with your $f'(x)= \lim\limits_{\Delta x\to 0} \frac{\Delta (f(x))}{\Delta x} = 2x$ when $f(x)=x^2$
Sometimes the derivative $f'(x)$ is written $\frac{df}{dx}$ or $\frac{d}{dx} f(x)$. All three of these are simply notation for $\lim\limits_{y \to x} \frac{f(y)-f(x)}{y-x}$ or $\lim\limits_{h \to 0} \frac{f(x+h)-f(x)}{h}$ and some people describe this as the limit of the ratio of the change of $f(x)$ to the change in $x$, leading to your $\lim\limits_{\Delta x\to 0} \frac{\Delta (f(x))}{\Delta x}$ or $\lim\limits_{\Delta x\to 0} \frac{\Delta f}{\Delta x}$: the notation $\frac{df}{dx}$ is suggestive of this limit, but the $dx$ does not mean anything on its own here
Since the derivative operation can be applied to different functions, further notation lets $\frac{d}{dx}$ represent a derivative operator. It can be applied more than once: for example a double derivative would involve a double operation and is sometimes written $f''(x)= \frac{d^2}{dx^2}f(x)=\frac{d^2f}{dx^2}$; with $f(x)=x^2$ you would get $\frac{d^2f}{dx^2} = 2$. Again this is notation, and the $dx$ is part of the notation without having a stand-alone meaning; in particular it is not being squared
You will also see $dx$ as part of the notation for integrals: for example $\int_6^9 x^2\, dx = 171$. But while this might be suggestive of a sum (the long s in the integral sign) of lots of pieces for $f(x)$ with widths represented by $dx$, it is again notation and the $dx$ does not have an independent meaning here, just being part of a wider expression for the integral operation
Some people write expressions such as $d(fg)=f\, dg + g\, df$ for the product rule. Typically this is shorthand for $\frac{d}{dx}(fg)=f \frac{d}{dx}g + g \frac{d}{dx}f$ and is another example of the use of notation
There is nothing to prevent people from giving a particular meaning to a bare $dx$ so long as they define it in a meaningful way. Then again it would still be notation, with that definition |
Rudin Chapter 5 Exercise 14 | The forward direction is simple: If $g$ is increasing ($x < y \implies g(x) \le g(y)$), then for $h > 0, g(x + h) \ge g(x)$, and so $$\frac{g(x+ h) - g(x)}{h} \ge 0$$ If $g'(x)$ exists, then
$$g'(x) = \lim_{h \to 0+} \frac{g(x+ h) - g(x)}{h} \ge 0$$
(Note that existence of the two-sided limit implies that the one-sided limit exists and has the same value. Alternatively, you can show that the same inequality also holds for $h < 0$.)
Let $g = f'$ to apply this to your question. |
What does it mean to describe the partition defined by the equivalence class? | If $x + y$ is even, is $y + x$ even?
Is $x + x$ even, for all $x$?
If $x + y$ is even, and $y + z$ is even, is $x + z$ even?
Proving the above will allow you to conclude that this is an equivalent relation. They are asking you to determine what the equivalence classes are. Every integer is in exactly one class, with all the other "related" integers. To what class does 7 belong? 4? -3?
You might also find this page helpful, especially the example of people with the same last name. |
If all second partial derivatives exist and are continuous then all first partial derivatives are also continuous | From the definition $\frac{\partial^2 f}{\partial x_i\partial x_j}:=\frac{\partial }{\partial x_i}\left[\frac{\partial f}{\partial x_j}\right]$, you are saying that $\frac{\partial f}{\partial x_j}$ has continuous partial derivatives. Therefore it is a differentiable function. |
Why is the winding number homotopy invariant? | If you have a homotopy $H : [\alpha, \beta] \times [0; 1] \to \mathbb{C}\setminus \{a \}$, the function $\theta : [\alpha, \beta] \times [0; 1] \to \mathbb{R}/2 \pi \mathbb{Z}$ defined by $\theta(x,t) = \arg(H(x,t)-a)$ is continuous in both variables, and can be lifted into a function $\tilde{\theta}$.
Then you can define the continuous function $n(t) = \frac {\tilde \theta (\beta,t) - \tilde \theta (\alpha,t)}{2 \pi} \in \mathbb{Z}$.
Since $\mathbb{Z}$ is discrete, it has to be a constant map, so the winding numbers of $\gamma = H(.,0)$ and $H(.,1)$ are the same. |
Principal disjunctive normal form conversion problem. | $(P \ \lor \neg P)$ is always true (a tautology). I guess the solution wants to express the normal form using only elements of the original logical proposition (i.e. $P$ and $Q$), and avoids using other symbols (such as $T$). This is why they are using $(P \ \lor \neg P)$.
Moreover, note that $(\neg P \ \lor Q) \land (P \ \lor \neg Q) \neq (\neg P \ \lor Q)$. For example, take $P$ False and $Q$ True. |
What is anti-derivative of this function? | $$\begin{cases}
I=\int \left(\frac{d}{dx}g(x)\right)\cdot\tanh\left(n\cdot f(x)\cdot\frac{d}{dx}f(x)\right)dx \\
g(x) = \frac{1}{1+n\cdot f(x)^2}
\end{cases}
$$
FIRST CASE :
$g(x)$ is the given (known) function and one want to express the next integral in terms of $g(x)$ :
$$
I=\int \left(\frac{dg}{dx}\tanh\left(\frac{n}{2}\frac{d(f^2)}{dx}\right)\right)dx
$$
$g(x) = \frac{1}{1+n\cdot f(x)^2}\quad\implies\quad f(x)^2=\frac{1-g}{n\,g}$
$\frac{d(f^2)}{dx}= -\frac{1}{n\,g^2}\frac{dg}{dx}$
$$
I=\int \left(\frac{dg}{dx}\tanh\left(-\frac{n}{2}\,\frac{1}{n\,g^2}\frac{dg}{dx} \right)\right)dx
$$
Since $\tanh$ is odd, this simplifies to
$$
I=-\int \left(\frac{dg}{dx}\tanh\left(\frac{1}{2g^2}\frac{dg}{dx} \right)\right)dx
$$
So, as expected, the integral is expressed with $g(x)$ only. Further calculus requires the explicit form of $g(x)$, which is not specified in the wording of the question.
SECOND CASE :
$f(x)$ is the given (known) function and one want to express the next integral in terms of $f(x)$ :
$$
I=\int \left(\frac{dg}{dx}\tanh\left(\frac{n}{2}\frac{d(f^2)}{dx}\right)\right)dx
$$
$\frac{dg}{dx}= -\frac{n}{(1+nf^2)^2}\frac{d(f^2)}{dx}$
$$
I=\int \left(-\frac{n}{(1+nf^2)^2}\frac{d(f^2)}{dx}\tanh\left(\frac{n}{2}\frac{d(f^2)}{dx}\right)\right)dx
$$
$$
I=-n\int \left(\frac{1}{(1+nf^2)^2}\frac{d(f^2)}{dx}\tanh\left(\frac{n}{2}\frac{d(f^2)}{dx}\right)\right)dx
$$
So, as expected, the integral is expressed with $f(x)$ only. Further calculus requires the explicit form of $f(x)$, which is not specified in the wording of the question.
an equivalent condition for a function to be convex | HINT:
Use
$$ \sum_{k=1}^n t_k z_k = (1-t_n)( \sum_{k=1}^{n-1} \frac{t_k}{1-t_n} z_k) + t_n z_n$$
Notice that $\sum_{k=1}^{n-1} \frac{t_k}{1-t_n} = 1$. |
what are the difference between metric space and metric linear space? | Consider the graph of $y=x^2$ where $x\in[0,1]$. This is a metric space that is not linear.
A metric and linearity are simply different structures that can be imposed on sets (simultaneously if necessary). |
Can row and column manipulation get the same row echelon from? | No. For example, consider:
$$
M = \begin{bmatrix}
1 & 3 \\
0 & 0
\end{bmatrix}
$$
Notice that $M$ is already in reduced row echelon form so that $M_r = M$. However, if we add $(-3)$ times column one to column two, then we obtain:
$$
M_c = \begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}
$$
which is not row-equivalent to $M_r$. |
How many solutions to $z^{6!}-z^{5!}\in \mathbb{R}$ and $|z|=1$ | $$\sin6w=\sin w\iff 6w=2k\pi+w \text{ or } 6w=2k\pi+\pi-w\iff w=\dfrac{2k\pi}{5} \text{ or } w=\dfrac{2k+1}{7}\pi$$
with $k\in\mathbb{Z}$. Then
$$w=0, \dfrac{2\pi}{5}, \dfrac{4\pi}{5}, \dfrac{6\pi}{5}, \dfrac{8\pi}{5}, \dfrac{\pi}{7}, \dfrac{3\pi}{7}, \dfrac{5\pi}{7}, \dfrac{7\pi}{7}, \dfrac{9\pi}{7}, \dfrac{11\pi}{7}, \dfrac{13\pi}{7}$$ |
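A quick numeric check that these twelve values are distinct and really solve $\sin 6w = \sin w$:

```python
import math

ws = [2 * k * math.pi / 5 for k in range(5)] \
   + [(2 * k + 1) * math.pi / 7 for k in range(7)]
max_residual = max(abs(math.sin(6 * w) - math.sin(w)) for w in ws)
```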
Probability of Defective coins | Please note that Gambler's fallacy does not apply here, because not all coins are equal.
The formula for Bayes's theorem expresses the relationship between $\Pr(A\mid B)$ and $\Pr(B\mid A)$. In our context it can be written as
$$\Pr(\text{Defective}\mid 10H) = \frac{\Pr(10H\mid\text{Defective})\Pr(\text{Defective})}{\Pr(10H)}$$
We knew that
$\Pr(\text{Defective}) = 1/100$
$\Pr(10H\mid\text{Defective}) = 1^{10} = 1$
$\Pr(10H) = 1 \cdot 1/100 + (1/2)^{10} \cdot 99/100 = 1123/102400$
So $\Pr(\text{Defective}\mid 10H) = 1024/1123$. This means the probability that you are holding a defective coin, given that 10 heads came up, is very high. This makes sense because it is unlikely that a good coin would give you 10 consecutive heads.
So, if it is indeed a defective coin, then the probability of getting a head again would be
$$1024/1123 \cdot 1 = 1024/1123$$
Or, if it is not a defective coin, then the probability is
$$99/1123 \cdot 1/2 = 99/2246$$
Therefore,
$$\Pr(\text{Head again}) = 1024/1123 + 99/2246 = 2147/2246$$ |
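The whole computation can be replayed in exact arithmetic with `fractions.Fraction`:

```python
from fractions import Fraction

p_def = Fraction(1, 100)           # prior: the coin is defective
p_10h_def = Fraction(1)            # P(10 heads | defective)
p_10h_good = Fraction(1, 2) ** 10  # P(10 heads | fair)

p_10h = p_10h_def * p_def + p_10h_good * (1 - p_def)
p_def_given_10h = p_10h_def * p_def / p_10h  # Bayes's theorem
p_head_next = p_def_given_10h * 1 + (1 - p_def_given_10h) * Fraction(1, 2)
```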
Intersection of infinite number of maximal ideals in a PID | You could easily use a sledgehammer, namely that in a noetherian ring any ideal has only finitely many minimal prime ideals containing it. In particular any non-zero ideal is contained in only finitely many maximal ideals, i.e. an infinite intersection of maximal ideals must be zero.
Or you use - as suggested in the comments - the more basic fact that in a UFD, any non-zero element has finitely many irreducible factors, and being contained in a maximal ideal is nothing else but saying that the generator of that maximal ideal is an irreducible factor. Basically this is the same idea as above.
How to estimate $\sum_{q \leq Q, n \in \mathbb{Z}} w(n/q)$ with $w$ a smooth function? | $$\sum_n w(x+n)=\sum_k \hat{w}(k) e^{2i\pi kx}$$ then $$\sum_{q=1}^Q \sum_n w(n/q)=\sum_{q=1}^Q \sum_{m=0}^{q-1}\sum_k \hat{w}(k) e^{2i\pi k m/q}=\sum_{q=1}^Q \sum_k \hat{w}(k) \sum_{m=0}^{q-1} e^{2i\pi k m/q}$$
$$= \sum_{q=1}^Q \sum_{k, q|k} \hat{w}(k) q= \frac{Q(Q+1)}{2}\hat{w}(0)+\sum_{k\ne 0} \hat{w}(k) \sum_{q| k,q\le Q}q$$
$$ = \frac{Q(Q+1)}{2}\hat{w}(0)+\sum_{k\ne 0} \hat{w}(k) \sigma_1(k)+O(\sum_{|k|\ge Q} |\hat{w}(k)| \sigma_1(k))$$
Since $\hat{w}$ is rapidly decreasing, for any fixed $r>1$ it is
$$ = \frac{Q(Q+1)}{2}\hat{w}(0)+\sum_{k\ne 0} \hat{w}(k) \sigma_1(k)+O(\sum_{|k|\ge Q} k^{-r})$$ $$=\frac{Q(Q+1)}{2}\hat{w}(0)+\sum_{k\ne 0} \hat{w}(k) \sigma_1(k)+O(Q^{1-r})$$ |
Prove $Ker(A)=Ker(BA)$ if $Ker(B) \cap Ran(A) ={0}$ | 2nd implication: "Ker(BA)⊂Ker(A) since x∈Ker(A) implies BA.x=0⟹A.x∈Ker(A), as Ker(B)∩Ran(A)=0⟹A.x∈Ran(A)⟹A.x=0⟹x∈Ker(A)."
Let $x\in \ker(BA)$. Then $B(Ax) = BAx=0$ and so $Ax\in\ker (B)$.
But $Ax$ lies in the range of $A$, and $\ker(B)\cap range(A)=0$, so $Ax=0$ and thus $x\in\ker(A)$.
all roots of $f(z) = z^n + z^3 + z + 2$ tend to the unit circle | It suffices to show that for any $\varepsilon>0$, there exits $N>0$, such that if $n>N$, then for all $z\in\mathbb{C}$ with $||z|-1|>\varepsilon$, $z^n+z^3+z+2\neq 0$.
If $|z|-1>\varepsilon$, $|z^n+z^3+z+2|\geq|z|^n-|z|^3-|z|-2\geq (1+\varepsilon)^n-(1+\varepsilon)^3-(1+\varepsilon)-2\to\infty$ as $n\to\infty$. Thus there is some $N_1$ such that if $n>N_1$, $z^n+z^3+z+2\neq 0$ for $|z|-1>\varepsilon$.
If $|z|-1<-\varepsilon$, then $|z^n+z^3+z+2|\geq 2-|z|^n-|z|^3-|z|\geq 2-(1-\varepsilon)^n-(1-\varepsilon)^3-(1-\varepsilon)\to
2-(1-\varepsilon)^3 - (1-\varepsilon)>0$ as $n\to\infty$. Thus there is some $N_2$ such that if $n>N_2$, $z^n+z^3+z+2\neq 0$ for $|z|-1<-\varepsilon$.
In conclusion, for $n>\max(N_1,N_2)$, $z^n+z^3+z+2\neq 0$ for $||z|-1|>\varepsilon$. Thus all roots of $z^n+z^3+z+2$ tend to the circle as $n\to\infty$. |
Is the maximal set of injectivity of a measurable map a measurable set? | The maximal good set certainly isn't measurable in general. For instance, if the $\sigma$-algebra on $Y$ is $\{\emptyset,Y\}$, then any function is measurable, regardless of the $\sigma$-algebra on $X$.
I don't know what happens for the nice case of Borel functions on $\mathbb{R}$. The maximal good set is always coanalytic, since it is the complement of the projection of the Borel set $\{(x,y):f(x)=f(y),x\neq y\}\subset\mathbb{R}^2$ (onto either coordinate). |
Probability of a random matrix to be invertable. | I suppose by continuous you mean absolutely continuous, i.e. with a density with respect to Lebesgue measure (the more general case where you allow singular continuous distributions is a bit more subtle).
If
$F(x_1,\ldots,x_m)$ is
a non-constant polynomial, the variety $F^{-1}(0)$ has $m$-dimensional Lebesgue measure $0$, so if $(X_1, \ldots, X_m)$ are random variables with an absolutely continuous joint distribution, $P\{F(X_1,\ldots,X_m) = 0\} = 0$.
In your case, linear dependence of your vectors translates into the determinant of the matrix formed from these vectors being $0$. |
How would I solve integrals of derivatives? | $$I=\int\limits_a^b\int\frac{\mathrm df(t)}{\mathrm dt}\mathrm df(t)\mathrm dt$$
Notice that $$\mathrm df(t)=\frac{\mathrm df(t)}{\mathrm dt}\mathrm dt,$$
so $$I=\int\limits_a^b\int\frac{\mathrm df(t)}{\mathrm dt}\frac{\mathrm df(t)}{\mathrm dt}\mathrm dt\mathrm dt=\int\limits_a^b\int\left(\frac{\mathrm df(t)}{\mathrm dt}\right)^2\mathrm dt\mathrm dt=\int\limits_a^b\int f'\left(t\right)^2\mathrm dt\mathrm dt.$$
There most probably is no simpler form for the second antiderivative of the square of the first derivative than just $$\iint f'(t)^2\mathrm dt^2.$$ |
An exact formula for counting solutions of the Frobenius equation summed to 8 | The number of solutions will be
\begin{eqnarray*}
[x^8]: (1+x+x^2+\cdots)^n = [x^8]: \frac{1}{(1-x)^n}.
\end{eqnarray*}
Now use
\begin{eqnarray*}
[x^m]: \frac{1}{(1-x)^n}= \binom{m+n-1}{n-1}.
\end{eqnarray*} |
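For small $n$ this stars-and-bars count is easy to confirm by enumeration (the choice $n=3$ below is mine):

```python
from itertools import product
from math import comb

m, n = 8, 3
brute = sum(1 for t in product(range(m + 1), repeat=n) if sum(t) == m)
formula = comb(m + n - 1, n - 1)  # [x^m] (1-x)^{-n}
```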
How to model a checking account with continuous-time compounding? | Let $x (t)$ be the amount of money in the account at time $t$ (years). Hence, if no money is spent,
$$\dot x = r x$$
where $r = \ln (1.03)$. If $\$1000$ is spent continuously every month, then we have the ODE
$$\dot x = r x - 12000$$
We have an equilibrium point when we have
$$\bar{x} := \frac{12000}{r} \approx \$406,000$$
in the account, as the interest earned per year then equals the amount of money expended per year. If we have more than $\bar{x}$ in the bank, then our wealth is growing. If we have less than $\bar{x}$ in the bank, then our wealth is decaying. Let us verify. Integrating the non-homogeneous ODE above, we obtain
$$x (t) = \bar{x} + (x_0 - \bar{x}) \, \mathrm{e}^{r t}$$
If $x_0 > \bar{x}$, our wealth is growing. If $x_0 < \bar{x}$, our wealth is decaying. If $x_0 = \bar{x}$, our wealth is stationary. Note that $\bar{x}$ is an unstable equilibrium point. |
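A small numerical sanity check of the equilibrium value and of the closed-form solution (the sample $t$ and $x_0$ are arbitrary choices of mine):

```python
import math

r = math.log(1.03)
xbar = 12000 / r  # equilibrium balance, about $406,000

def x(t, x0):
    # closed-form solution x(t) = xbar + (x0 - xbar) e^{r t}
    return xbar + (x0 - xbar) * math.exp(r * t)

# central-difference check that x'(t) = r x(t) - 12000
t, x0, h = 3.0, 500_000.0, 1e-5
lhs = (x(t + h, x0) - x(t - h, x0)) / (2 * h)
rhs = r * x(t, x0) - 12000
```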
Deciding if numbers can be written as the sum of three squares | If you know the deep theorem (whose proof is very far from being trivial, relying on Lagrange's identity for quaternions)
Every number that is not of the form $4^m(8k+7)$ is the sum of three
integer squares
to check if $154,155,156$ are $\square+\square+\square$ is straightforward.
$154=2\cdot 77$, hence $\nu_2(154)$ is odd and there are no issues;
$155$ is odd and $\equiv 3\pmod{8}$, no issues;
$156=4\cdot 39$ and $39\equiv 7\pmod{8}$, hence $156\neq \square+\square+\square$. |
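Both the brute-force search and the $4^m(8k+7)$ criterion are easy to code, and they agree on $154,155,156$ (function names are mine):

```python
from math import isqrt

def three_squares(n):
    # is n = a^2 + b^2 + c^2 for some integers a, b, c >= 0?
    return any(
        isqrt(n - a * a - b * b) ** 2 == n - a * a - b * b
        for a in range(isqrt(n) + 1)
        for b in range(isqrt(n - a * a) + 1)
    )

def excluded(n):
    # is n of the forbidden form 4^m (8k + 7)?
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7
```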
Greatest Common Divisor of natural numbers | If $d$ divides both $a$ and $b$, then $d^2$ divides $n$.
So the largest possible value for $d$ is the square root of the largest square factor of $n$. |
Regular polygon Interior angles | Let $A,B$ be the consecutive vertices of a regular $n$-vertices polygon and $O$ be it's center.
$\angle AOB=\frac{360^{\circ}}{n}$, $\Delta AOB$ is isosceles, so $\angle OAB = \frac{180^{\circ} - \angle AOB}{2}$ and the interior angle $=2\angle OAB = x^{\circ}$ and $x$ is integer.
We are to find whether it could be an integer $n$ that gives that $x$.
$$180-\frac{360}{n}=x$$
$$\frac{360}{n}=180-x$$
$$n=\frac{360}{180-x}$$
So puts(360%(180-x)?"NO":"YES") should work. :) |
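The same check in Python (the guard $0 < x < 180$ is mine, to keep $n \ge 3$ an integer):

```python
def is_regular_polygon_angle(x: int) -> bool:
    # x is the interior angle of a regular n-gon iff n = 360/(180 - x) is an integer
    return 0 < x < 180 and 360 % (180 - x) == 0
```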
Normal approximation of a binomial distribution | It's a badly phrased question unless some larger context alters the meaning. There is a suggestion of such a broader context where it says this:
A defect rate of 3% is considered acceptable.
A null hypothesis might say that the defect rate is not more than 3%. (That strikes me as being unreasonably high, but I don't really know anything about that.) In that case evidence against the null hypothesis would consist of a high defect rate in a sample, and the "p-value" is the probability that the evidence against the null hypothesis would be at least as strong as what was actually observed, given that the null hypothesis is true. Finding the p-value seems to be what was intended. Still, I would not have written "this event"; rather I might just ask for the p-value, or perhaps ask what the probability is that the evidence against the null hypothesis is at least as strong as what was observed, given that the null hypothesis is true. |
Power of very big numbers | No, this is not correct. For example, The last digit of $2^{10}=1024$ is $4$, but the last digit of $2^0$ is 1. There are many small counter examples; three more are $13^{14}$, $3^{11}$, and $12^{12}$. |
How find this sum of $\sum_{d\mid n}\dfrac{G(d)}{d}$ | If $\rm n=ab$ with $\rm a,b$ coprime, then every divisor $\rm d\mid ab$ has a unique refinement $\rm d=d_1d_2$ satisfying $\rm d_1\mid a$ and $\rm d_2\mid b$. In particular, if $\rm n=2^rm$ with $\rm m$ odd, then every positive divisor $\rm d\mid n$ is uniquely of the form $\rm 2^k v$ with $\rm 0\le k\le r$ and $\rm v\mid m$. Thus
$$\rm \sum_{d\mid n}\frac{G(d)}{d}=\sum_{k=0}^r\sum_{v\mid m}\frac{G(2^kv)}{2^kv}=\sum_{k=0}^r\frac{1}{2^k}\sum_{v\mid m}1=\left(2-\frac{1}{2^{r}}\right)\sigma_0(m)$$
which can be expressed with standard number-theoretic devices as
$$\rm \sum_{d\mid n}\frac{G(d)}{d}=\left(2-|n|_2\right)\sigma_0\left(|n|_2n\right). $$
(see $\rm p$-adic absolute value and divisor sigma function). |
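Assuming $\rm G(d)$ denotes the odd part of $\rm d$ (which is what makes $\rm G(2^kv)/2^kv=2^{-k}$ in the computation above), the identity can be verified in exact arithmetic:

```python
from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def odd_part(d):
    while d % 2 == 0:
        d //= 2
    return d

def identity_holds(n):
    # left side: sum over d | n of G(d)/d with G(d) the odd part of d
    lhs = sum(Fraction(odd_part(d), d) for d in divisors(n))
    r, m = 0, n
    while m % 2 == 0:
        m //= 2
        r += 1
    rhs = (2 - Fraction(1, 2 ** r)) * len(divisors(m))  # (2 - 2^{-r}) sigma_0(m)
    return lhs == rhs
```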
Simple question on linear transformation. | Let $x \in kernel(T) ∩ Range(T)$ . Then $Tx=0$ and $x=Ty$ for some $y$. Hence $0=Tx=T^2y$. This gives $y \in kernel(T^2)=kernel(T)$. Thus $x=Ty=0$.
Conclusion: option (c) is correct (and hence (d) too). |
Find the value of the angle $X$ in the given figure | Let's use the notation as in this picture.
Note that $\overline{DE} \perp \overline{BC}$ and $\overline{DG} \perp \overline{AC}$.
Hence $\angle GDE= 180-\angle EBG$.
As the triangle $\triangle GED$ is isosceles, we conclude $$\angle DEG= \frac{1}{2}( 180-\angle GDE)= \frac{1}{2} \angle EBG$$
Similarly
$$\angle FED= \frac{1}{2} \angle FCE$$
In conclusion
$$\angle FEG =\frac{1}{2} (\angle FCE + \angle EBG)$$ |
On the Lie bracket of the Lie algebra of the group of invertible elements of an algebra | I suggest thinking of the canonical embedding of $A$ as a subalgebra of $\operatorname{End} A$, $x \mapsto (y \mapsto xy)$. |
How many vectors can be "close to mutual orthogonal like 80 degrees" in a high dimensional space? | Take a projective plane of order $9$ which exists since $9$ is a prime power. There are $9^2 + 9 + 1 = 91$ points and the same number of lines. The incidence vectors of the lines form a family of $91$ $(0,1)$-vectors in a space of the same dimension, and each of the line incidence vectors has exactly $10$ $1$s, corresponding to the subset of $10$ points which lie on the line.
By definition of projective plane, each distinct pair of lines intersects in a single point. One can now replace each of the vectors by the $2^{10}$ possible signed versions.
The resulting quasiorthogonal set of "ternary" vectors (i.e., with entries in $\{-1,0,+1\}$) has $91 \cdot 2^{10} = 93184$ elements. As $\arccos(1/10) \approx 84.26^\circ$, only a $5.74^\circ$ deviation from strict orthogonality is needed to achieve a $1024$-fold inflation in the size of a pairwise-nearly-orthogonal set. |
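The incidence facts used above can be checked directly; here is a sketch in Python using the order-$3$ plane $PG(2,3)$ ($13$ points and lines) in place of order $9$, purely to keep the construction short:

```python
from itertools import product

q = 3  # must be prime here, since we invert via Fermat's little theorem

def normalize(v):
    # scale a nonzero vector over GF(q) so its first nonzero entry is 1
    for x in v:
        if x != 0:
            inv = pow(x, q - 2, q)
            return tuple(inv * c % q for c in v)
    return None  # the zero vector is not a projective point

points = sorted({normalize(v) for v in product(range(q), repeat=3)} - {None})
lines = points  # by duality, lines are the same normalized triples

# point p lies on line l iff  p . l == 0 (mod q)
vecs = [[1 if sum(a * b for a, b in zip(p, l)) % q == 0 else 0 for p in points]
        for l in lines]

# dot products between distinct line incidence vectors
dots = {sum(a * b for a, b in zip(u, w))
        for i, u in enumerate(vecs) for w in vecs[:i]}
print(len(vecs), dots)  # 13 lines; every distinct pair meets in 1 point: {1}
```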
What's a method for computing the indefinite integral $\int \dfrac{dz}{(a^2 + z^2)^{3/2}}$? | Hint:
Perform the trigonometric substitution $z=a\tan u$.
Edit:
When $z=a\tan u$, $dz=a\sec^2 u du$
$$\displaystyle \int \dfrac{dz}{(a^2 + z^2)^{3/2}}=\int \dfrac{a\sec^2 u du}{(a^2 + a^2\tan^2u)^{3/2}}=\int \dfrac{a\sec^2 u du}{(a^2(1+\tan^2u))^{3/2}}=\int \dfrac{a\sec^2 u du}{a^3\sec^3u}=\frac{1}{a^2}\int \dfrac{du}{\sec u}=\frac{1}{a^2}\int \cos udu=\frac{\sin u}{a^2}+C$$
Now, back-substitute for $z$: since $\tan u = z/a$, we have $\sin u = \dfrac{z}{\sqrt{a^2+z^2}}$, so $$\int \dfrac{dz}{(a^2 + z^2)^{3/2}} = \dfrac{z}{a^2\sqrt{a^2+z^2}}+C.$$ |
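A quick numeric sanity check (Python; the value $a=2$ and the sample points are arbitrary choices): back-substituting $\sin u = z/\sqrt{a^2+z^2}$ gives the antiderivative $F(z)=z/\bigl(a^2\sqrt{a^2+z^2}\bigr)$, whose derivative should match the integrand.

```python
import math

a = 2.0  # arbitrary positive constant for the check

def F(z):
    # antiderivative obtained after back-substitution
    return z / (a ** 2 * math.sqrt(a ** 2 + z ** 2))

def integrand(z):
    return (a ** 2 + z ** 2) ** -1.5

for z in (-3.0, -0.5, 0.0, 1.0, 4.0):
    h = 1e-6
    central_diff = (F(z + h) - F(z - h)) / (2 * h)
    assert abs(central_diff - integrand(z)) < 1e-8
```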
Verifying the solution | I get $$\partial_x g|_{(x,y)=(0,0)} = e^{\sin (x-y)}\cos (x-y)|_{(x,y)=(0,0)}=1$$ $$\partial_y g|_{(x,y)=(0,0)} = -e^{\sin (x-y)}\cos (x-y)|_{(x,y)=(0,0)}=-1,$$so the affine function is $1 + x - y$. |
Linear Algebra Solution Types Proven with Rank | If the matrix $A$ has columns $\vec a_1, \dots, \vec a_n$, then a system of equations $A \vec x = \vec b$ is equivalent to
$$
x_1 \vec a_1 + x_2 \vec a_2 + \dots + x_n \vec a_n = \vec b
$$
that is, to trying to write $\vec b$ as a linear combination of the columns of $A$. In other words, we are checking if $\vec b$ is in the column space of $A$.
Looking at the rank of $A$ will tell us if the solution, whenever it exists, will be unique.
If the rank of $A$ is equal to $n$, that means the dimension of the column space is equal to $n$: the columns are linearly independent. In this case, for every $\vec b$, there is either a single solution (if it is in the column space of $A$) or no solution (if it's not).
On the other hand, if the rank of $A$ is less than $n$, that means the dimension of the column space is less than $n$: the columns are linearly dependent. In this case, for every $\vec b$, there is either an infinite set of solutions (if it is in the column space of $A$) or no solution (if it's not).
In either case, to decide between the two possibilities, we compare the rank of $A$ to the rank of the augmented matrix $(A \mid \vec b)$.
If $\operatorname{rank}(A) = \operatorname{rank}(A \mid \vec b)$, that means the column space doesn't change when we add a column $\vec b$. This can only happen if $\vec b$ was already in the column space, so there must be a solution.
If $\operatorname{rank}(A) < \operatorname{rank}(A \mid \vec b)$, this means that the column space becomes bigger when we add $\vec b$. This can only happen if $\vec b$ is linearly independent from $\vec a_1, \dots, \vec a_n$, so there is no solution. |
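The criterion can be illustrated with a short script (Python; the helper names are mine, and rank is computed by naive Gaussian elimination over the rationals):

```python
from fractions import Fraction

def rank(rows):
    # row-reduce an exact rational copy of the matrix and count pivots
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def classify(A, b):
    n = len(A[0])
    rA = rank(A)
    rAb = rank([row + [bi] for row, bi in zip(A, b)])  # rank of (A | b)
    if rA < rAb:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

print(classify([[1, 0], [0, 1]], [2, 3]))   # unique solution
print(classify([[1, 1], [2, 2]], [1, 2]))   # infinitely many solutions
print(classify([[1, 1], [2, 2]], [1, 3]))   # no solution
```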
Lower bound on smallest eigenvalue of symmetric matrix | The formula is wrong: the average is always larger than the minimum unless the numbers are all equal. |
Show that there exists a non-negative integer $r$ s.t. $ker(T^r) = ker(T^{r+1})$. | Hint: if $U$ is a subspace of $V$ and $U\neq V$, then $\dim U<\dim V$. |
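Spelled out (a sketch, assuming $V$ is finite-dimensional as the hint requires):

```latex
\[
  \ker T \;\subseteq\; \ker T^2 \;\subseteq\; \ker T^3 \;\subseteq\; \cdots \;\subseteq\; V .
\]
```

The dimensions $\dim\ker T^r$ form a nondecreasing sequence of integers bounded by $\dim V$, so they cannot increase strictly at every step: some $r$ satisfies $\dim\ker T^r = \dim\ker T^{r+1}$, and since $\ker T^r \subseteq \ker T^{r+1}$, the hint then forces $\ker T^r = \ker T^{r+1}$.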
Help trying to find the coefficient in a generating function expansion | Yes you are correct. We have that
$$(1-x^5)^7=1-7x^5+21x^{10}-35x^{15}+35x^{20}+o(x^{20}).$$
and
$$\frac{1}{(1-x)^{2}}=\sum_{n=0}^{\infty} (n+1)x^n.$$
Therefore
$$[x^{20}]\frac{(1-x^5)^7}{(1-x)^{2}}=1\cdot (20+1)-7\cdot (15+1)+21\cdot (10+1)-35\cdot (5+1)+35\cdot 1=-35$$
So the desired coefficient is $5\cdot (-35)=-175$. |
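The extraction can be double-checked by brute force (Python, treating truncated coefficient lists as power series; the final factor of $5$ comes from the original problem):

```python
N = 20  # truncation degree; list index = exponent

def mul(p, q):
    # multiply two truncated power series
    r = [0] * (N + 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j <= N:
                    r[i + j] += a * b
    return r

one_minus_x5 = [0] * (N + 1)
one_minus_x5[0], one_minus_x5[5] = 1, -1

num = [1] + [0] * N            # build (1 - x^5)^7
for _ in range(7):
    num = mul(num, one_minus_x5)

inv = [n + 1 for n in range(N + 1)]   # 1/(1-x)^2 = sum (n+1) x^n

series = mul(num, inv)
print(series[20])      # -35
print(5 * series[20])  # -175
```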
Finding the X- Intercepts for this Equation | Another method is this: if $$x^3+ax^2+bx+c=0$$ has roots $r,s,t$, then by Vieta's formulas you are looking to solve the following system.
$$r+s+t=-a$$
$$rs+rt+st=b$$
$$rst=-c$$
Should you find these $r,s,t$, these are your roots and your factored polynomial is just
$$(x-r)(x-s)(x-t)=0$$
However, solving this system directly is usually impractical. In this case, use the rational root theorem: test the possible rational roots (the factors of $30$ in your case, of the constant term $c$ in general), then divide out the corresponding factor with long division or synthetic division.
For your example, your possible rational roots are $\pm1,\pm2,\pm3,\pm5,\pm6,\pm10,\pm15,\pm30$ (you test both positive and negative factors). You would find your first bit of luck with $2$ (the factor being $x-2$); when you divide, you will get a quotient of $x^2-8x+15$. Then factor that. See reckless reckoner's comment for further insight. |
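The search can be scripted (Python; the cubic $x^3-10x^2+31x-30$ is reconstructed from the example above, since dividing it by $x-2$ gives the quoted quotient $x^2-8x+15$):

```python
def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    # coeffs = [1, a, b, c] for a monic cubic; by the rational root
    # theorem the candidates are the divisors of c, with both signs
    c = coeffs[-1]
    def p(x):
        acc = 0
        for co in coeffs:          # Horner evaluation
            acc = acc * x + co
        return acc
    return [r for d in divisors(c) for r in (d, -d) if p(r) == 0]

print(rational_roots([1, -10, 31, -30]))  # [2, 3, 5]
```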
Fourier Transform of simple functions | 1) The positive frequencies correspond to anticlockwise rotations of the complex exponential $e^{i\phi}$; the negative frequencies correspond to clockwise rotations. When the inverse Fourier transform is performed on the box, the imaginary parts of the positive and negative frequency contributions cancel out, resulting in a real signal.
2) Periodic functions get transformed to Dirac deltas. |
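Point 1) can be checked numerically with a tiny hand-rolled DFT (Python; the box length and signal size are arbitrary choices):

```python
import cmath

def dft(x):
    # direct O(N^2) discrete Fourier transform
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0] * 4 + [0.0] * 12          # a real "box" signal
X = dft(x)
N = len(x)

# For a real signal the spectrum is conjugate-symmetric: X[N-k] = conj(X[k]),
# so each positive frequency pairs with a negative one and the imaginary
# parts cancel when the transform is inverted.
assert all(abs(X[(N - k) % N] - X[k].conjugate()) < 1e-9 for k in range(N))
```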
proof that there is a unique isomorphism between two free modules | You can use the universal property applied to $A \hookrightarrow F_1$. The function $\Phi: F_1 \rightarrow F_1$ is the identity on $A$, so it satisfies the universal property for the map $A \hookrightarrow F_1$. In other words $\Phi$ extends $A \hookrightarrow F_1$ to a map $F_1 \rightarrow F_1$. But there is already such a map, and that is the identity $F_1 \rightarrow F_1$. By uniqueness of the universal property, they must be the same. |