Inferior limit of a sequence of measurable functions Suppose $f_n$ is a sequence of non-negative measurable functions that converges in measure to a measurable function $f$. I am trying to show that $$\int f\leq\liminf_n\int f_n$$ Using Fatou's lemma we know that $$\int\liminf_n f_n\leq\liminf_n\int f_n$$ so it would be enough to show that $$f\leq\liminf_n f_n$$ but, is this even true? am I going in the right direction?
Hints: There exists a subsequence $(f_{n_k})_k$ of $(f_n)_n$ such that $$\liminf_{n \to \infty} \int f_n = \lim_{k \to \infty} \int f_{n_k}.$$ Check that $f_{n_k} \to f$ in measure as $k \to \infty$. There exists a further subsequence, say $(g_j)_j$, of $(f_{n_k})_k$ such that $g_j \to f$ almost everywhere. Applying Fatou's lemma yields, $$\int f \leq \liminf_{j \to \infty} \int g_j= \liminf_{n \to \infty} \int f_n.$$
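A sketch of the extraction step in the second hint, via Borel–Cantelli (a standard argument, spelled out here for completeness): since $f_{n_k} \to f$ in measure, choose indices $k_1 < k_2 < \ldots$ with $$\mu\bigl(\{|f_{n_{k_j}} - f| > 1/j\}\bigr) < 2^{-j}.$$ These measures are summable, so by Borel–Cantelli almost every $x$ lies in only finitely many of these sets, and hence $g_j := f_{n_{k_j}} \to f$ almost everywhere.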
Consider $g_n(x)=\inf\limits_{k \geq n}f_k(x)$; then $g_1 \leq g_2 \leq \ldots \leq g_n \leq \ldots$ and $\liminf f_n(x) = \lim\limits_{n \to \infty} g_n(x)$. By monotone convergence $\int\limits_X f(x)dx = \lim\limits_{n \to \infty} \int\limits_X g_n(x)dx$; moreover $g_n(x) \leq f_n(x)$, so $\int\limits_X g_n(x) dx \leq \int\limits_X f_n(x)dx$.
Black and white beads on a circle There are $n$ beads placed on a circle, $n\ge 3$. They are numbered in random order as viewed clockwise. Beads for which the number of the previous bead is less than the number of the next bead are painted white, and the others black. Two colourations that can be made equal by rotation are considered identical. How many different colourations can occur? I've written a program, and for $n=3,\ldots,11$ I got the answers $2, 1, 6, 7, 18, 25, 58, 93, 186$.
What follows is not an answer but a conjecture and some additional material. We start by working with permutations. I assumed that the bit pattern / black-white pattern at position $q$ reflects the less-than greater-than relation at position $q$ in the permutation (whether the pair at $q$ and $q+1$ is an ascent or a descent, with circular wrap-around). I wrote a Perl program to investigate these necklaces (Perl aficionados are invited to verify and improve this work, e.g. the bit strings can be produced with pack/unpack). The program iterates over all bit patterns and attempts to find a circular permutation that fits the pattern using a backtracking algorithm, which I hope I've implemented correctly. The program produced the following table: $ time ./blg2.pl 12 2>/dev/null 3: 4 2/2 4: 6 4/2 5: 8 6/2 6: 14 12/2 7: 20 18/2 8: 36 34/2 9: 60 58/2 10: 108 106/2 11: 188 186/2 12: 352 350/2 real 7m9.188s user 7m7.926s sys 0m0.045s This table shows the total number of circular colorations obtained and indicates furthermore that for all $n$ all possible bit patterns were realized except two. The totals give the sequence $$4,6,8,14,20,36,60,108,188,352,\ldots $$ which is OEIS A00031 which is simply the substituted cycle index of the cyclic group $$\left.Z(C_n)(A+B)\right|_{A=1,B=1},$$ and can be taken as evidence that the program is working since it is the correct answer (total number of circular patterns with two colors.) Observe that the exact values which are $$2,4,6,12,18,34,58,106,186,350,\ldots $$ also have an OEIS entry namely OEIS A052823 which reflects the underlying necklace combinatorics but not the permutation aspect. The program can also output permutations that fit a given pattern. Here are the examples for $n=3,4,5$ and $n=6$: $ time ./blg2.pl 5 100 3-1-2 110 3-2-1 111 FAIL 000 FAIL 3: 4 2/2 1010 3-1-4-2 1000 4-1-2-3 1110 4-3-2-1 1100 4-2-1-3 1111 FAIL 0000 FAIL 4: 6 4/2 11100 5-3-2-1-4 11110 5-4-3-2-1 10100 4-1-5-2-3 11010 4-2-1-5-3 11000 5-2-1-3-4 10000 5-1-2-3-4 00000 FAIL 11111 FAIL 5: 8 6/2 100000 6-1-2-3-4-5 111000 6-3-2-1-4-5 101000 5-1-6-2-3-4 110100 5-2-1-6-3-4 100100 4-1-5-6-2-3 111110 6-5-4-3-2-1 110000 6-2-1-3-4-5 110110 4-2-1-6-5-3 111100 6-4-3-2-1-5 101010 3-1-5-4-6-2 111010 5-3-2-1-6-4 101100 4-1-6-5-2-3 111111 FAIL 000000 FAIL 6: 14 12/2 Studying the examples we immediately have a conjecture, namely that all black-white patterns can be realized except the monochrome ones (this is obvious as the circularity makes it impossible to have just one run in a circular permutation since not all $n$ elements can be to the left of a larger/smaller element). The conjectured formula is thus $$-2 + \left.Z(C_n)(A+B)\right|_{A=1,B=1}.$$ This is $$-2 + \left.\frac{1}{n} \sum_{d|n} \varphi(d) (A^d+B^d)^{n/d}\right| _{A=1, B=1}.$$ This gives the following CONJECTURE: $$\large{\color{#0A0}{ -2 + \frac{1}{n} \sum_{d|n} \varphi(d) 2^{n/d}}}.$$ Concluding remark. With the amount of data and context now available it should not be difficult to produce a combinatorial proof, which I suspect will turn out to be simple. This is the Perl code that was used to compute the above data. #! 
/usr/bin/perl -w # sub search { my($n, $bits, $q, $avail, $key, $seen, $sofar) = @_; if($q==$n){ if(($bits->[$n-1] == 0 && $sofar->[$n-1] < $sofar->[0]) || ($bits->[$n-1] == 1 && $sofar->[$n-1] > $sofar->[0])){ $seen->{$key} = join('-', @$sofar); } return; } my $pos = 0; while(!exists($seen->{$key}) && $pos<$n-$q){ my $nxttry = $avail->[$pos]; if($q==0 || (($bits->[$q-1] == 0 && $sofar->[$q-1] < $nxttry) || ($bits->[$q-1] == 1 && $sofar->[$q-1] > $nxttry))){ push @$sofar, $nxttry; splice @$avail, $pos, 1; search($n, $bits, $q+1, $avail, $key, $seen, $sofar); splice @$avail, $pos, 0, $nxttry; pop @$sofar; } $pos++; } } MAIN: { my $mx = shift || 6; for(my $n=3; $n<=$mx; $n++){ my $seen = {}; my $failed = {}; for(my $ind=0; $ind<2**$n; $ind++){ my $bits = []; for(my ($pos, $indx)=(0, $ind); $pos<$n; $pos++){ push @$bits, ($indx %2); $indx = ($indx-$bits->[-1])/2; } my $rot; for($rot=0; $rot<$n; $rot++){ my @rotbits = (@$bits[$rot..($n-1)], @$bits[0..($rot-1)]); my $rotkey = join('', @rotbits); last if exists($seen->{$rotkey}) || exists($failed->{$rotkey}); } if($rot==$n){ my $key = join('', @$bits); search($n, $bits, 0, [1..$n], $key, $seen, []); $failed->{$key} = 'FAIL' if !exists($seen->{$key}); } } my $total = scalar(keys %$seen) + scalar(keys %$failed); foreach my $pat (keys %$seen){ print STDERR "$pat " . $seen->{$pat} . "\n"; } foreach my $pat (keys %$failed){ print STDERR "$pat " . $failed->{$pat} . "\n"; } print "$n: $total " . scalar(keys(%$seen)) . "/" . scalar(keys(%$failed)) . "\n"; } } There is a radically simplified version of the above program which is not as fast, however. Instead of iterating over bit patterns and backtracking to find a matching permutation we iterate over all permutations and collect the bit patterns / black-white patterns that appear. The slow-down in the speed when solving the case of $n=10$ is on the order of a factor of $60$. This code uses the factorial number system to iterate over permutations and never allocates more than one permutation at a time. #! /usr/bin/perl -w # sub fact { my ($n) = @_; return 1 if ($n == 0 || $n == 1); return $n*fact($n-1); } MAIN: { my $mx = shift || 6; for(my $n=3; $n<=$mx; $n++){ my $seen = {}; for(my $ind=0; $ind<fact($n); $ind++){ my @perm = (1..$n); for(my ($pos, $indx) = ($n-1, $ind); $pos > 0; $pos--){ my $targ = $indx % ($pos+1); $indx = ($indx-$targ)/($pos+1); my $tmp = $perm[$pos]; $perm[$pos] = $perm[$targ]; $perm[$targ] = $tmp; } my @bits = (); for(my $pos=0; $pos<$n-1; $pos++){ my $bit = ($perm[$pos] < $perm[$pos+1] ? 1 : 0); push @bits, $bit; } push @bits, ($perm[$n-1] < $perm[0] ? 1 : 0); my $rot; for($rot=0; $rot<$n; $rot++){ my @rotbits = (@bits[$rot..($n-1)], @bits[0..($rot-1)]); my $rotkey = join('', @rotbits); last if exists($seen->{$rotkey}); } if($rot==$n){ my $key = join('', @bits); $seen->{$key} = join('-', @perm); } } foreach my $pat (keys %$seen){ print STDERR "$pat " . $seen->{$pat} . "\n"; } print "$n: " . scalar(keys(%$seen)) . "\n"; } }
Let $a_n$ be the number of configurations for this problem and let $b_n$ be the number of two-colored necklaces with $n$ beads (no flips allowed). It is well-known that $b_n=\frac 1n\sum_{d\mid n}\phi(d)2^\frac nd$ (this is OEIS A000031). Computer runs suggest that for odd $n$ all these patterns also occur for this problem, except the two monochromatic ones, so $a_n=b_n-2$. They also suggest that for even $n$ all patterns occur except the ones where one color occurs every second spot. In this case the number of necklaces where black occurs every second spot is $b_{\frac n2}$. We get the same number for the patterns where white occurs every second spot, and then we have double counted the one pattern where black and white alternate all around the necklace, so we get $a_n=b_n-2b_{\frac n2}+1$.
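As a quick check of the formula proposed just above, here is a small Perl sketch in the spirit of the programs in the other answer (the helper phi is an assumed utility, not taken from the original code); it evaluates $b_n$ and the conjectured $a_n$ and reproduces the values $2, 1, 6, 7, 18, 25, 58, 93, 186$ reported in the question for $n=3,\ldots,11$.

    #!/usr/bin/perl -w
    use strict;

    # Euler's totient phi(d): count k in 1..d with gcd(k, d) = 1.
    sub phi {
        my ($d) = @_;
        my $count = 0;
        for my $k (1 .. $d) {
            my ($a, $b) = ($k, $d);
            ($a, $b) = ($b % $a, $a) while $a;   # Euclidean algorithm, gcd ends up in $b
            $count++ if $b == 1;
        }
        return $count;
    }

    # b_n = (1/n) * sum over divisors d of n of phi(d) * 2^(n/d)   (OEIS A000031).
    sub necklaces {
        my ($n) = @_;
        my $sum = 0;
        for my $d (1 .. $n) {
            next if $n % $d;
            $sum += phi($d) * 2 ** ($n / $d);
        }
        return $sum / $n;
    }

    for my $n (3 .. 11) {
        my $b = necklaces($n);
        my $a = ($n % 2) ? $b - 2 : $b - 2 * necklaces($n / 2) + 1;
        print "$n: $a\n";   # 2, 1, 6, 7, 18, 25, 58, 93, 186
    }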
Why is compactness so important? I've read many times that 'compactness' is such an extremely important and useful concept, though it's still not very apparent why. The only theorems I've seen concerning it are the Heine-Borel theorem, and a proof that continuous functions on closed subintervals of R are bounded. It seems like such a strange thing to define; why would the fact that every open cover admits a finite subcover be so useful? Especially as stating "for every" open cover makes compactness a concept that must be a very difficult thing to prove in general - what makes it worth the effort? If it helps answering, I am about to enter my third year of my undergraduate degree, and came to wonder this upon preliminary reading of introductory topology, where I first found the definition of compactness.
As many have said, compactness is sort of a topological generalization of finiteness. And this is true in a deep sense, because topology deals with open sets, and this means that we often "care about how something behaves on an open set", and for compact spaces this means that there are only finitely many possible behaviors. But why is finiteness important? Well, finiteness allows us to construct things "by hand", and constructive results are a lot deeper, and to some extent useful to us. Moreover finite objects are well-behaved ones, so while compactness is not exactly finiteness, it does preserve a lot of this behavior (because it behaves "like a finite set" for important topological properties) and this means that we can actually work with compact spaces. The point we often miss is that given an arbitrary topological space on an infinite set $X$, the well-behaved structures which we can actually work with are the exceptions and the rare instances. This is the case throughout most of mathematics. It's far less likely that a function from $\Bbb R$ to $\Bbb R$ is continuous, differentiable, continuously differentiable, and so on and so forth. And yet, we work so much with these properties. Why? Because those are well-behaved properties, and we can control these constructions and prove interesting things about them. Compact spaces, being "pseudo-finite" in their nature, are also well-behaved and we can prove interesting things about them. So they end up being useful for that reason.
The fact that every continuous function on a closed, bounded interval is Riemann integrable uses the Heine-Borel theorem. Since there are a lot of theorems in real and complex analysis that use the Heine-Borel theorem, the idea of compactness is very important.
Conditional probability - a formal discussion This is a rather philosophical question. P[B|A] is formally defined as P[B and A]/P[B] where A and B are events in a sigma algebra and P is a probability mass function. That is, P[B|A] is just a division of two numbers. If so, how come there are problems where we find it hard to calculate P[B and A] as well as P[B], but it is easy for us to reason about P[B|A] and so we assign a value to P[B|A] immediately without going through the division? (I can't think of an example for this, but I surely recall there are such cases. Can anyone share an example?) To be more concrete, I'd be happy to see an example where it's hard\impossible to calculate P[A and B] or P[B] but it is easy to reason about P[A|B] on intuitive levels along with a justification for this reasoning (I'm talking about sample space and probability function definitions).
I think you mean $P(A|B)$ rather than $P(B|A)$; I'll assume that. It might happen that event $B$, if it happens, controls the conditions for $A$ to happen, which does not imply that one has any idea of how probable $B$ is. As an extreme case, $B$ might logically imply $A$, in which case $P(A|B)=1$ regardless. Another example is if someone tosses a coin but I have no idea whether the coin is fair; for the events $A$: I win the toss, and $B$: the coin is fair, I know by definition that $P(A|B)=0.5$, even though I know nothing about $P(B)$ or $P(A\cap B)$.
Role of Conditional Probability - My thoughts: A need for representing an event in the presence of prior knowledge: Consider the probability of drawing the king of hearts randomly from a standard deck of 52 cards. The probability of this event without any prior knowledge is 1/52. However, if one learns that the card drawn is red, then the probability of getting the king of hearts becomes 1/26. Similarly, if one gathers the knowledge that the card drawn is a face card (ace, king, queen or jack) then the probability shifts to 1/16. So we see that representing an event in the presence of prior knowledge is important, and the conditional event representation (A|H) is the most widely adopted representation for this need. No matter what representation we adopt, we can agree that the conditional event solves a unique concern of representing conditional knowledge. What is most elemental - unconditional vs. conditional: The debate whether conditional probability is more elemental than (unconditional) probability remains an enticing subject for many statisticians [1] as well as philosophers [2]. While the most adopted notation of conditional probability and its ratio representation, viz. P(A|H)=P(AH)/P(H) where P(H)>0, indicates that (unconditional) probability is more elemental, the other school of thought has its logic too. For them, when we say the probability of getting a face value of 2 in a random throw of a fair die is 1/6, we apply the prior knowledge that every throw will land perfectly on a face so that a face will be visible unambiguously, or that the die will not break into pieces when rolled, and so on. Therefore we apply prior knowledge in order to determine a sample space of six face values. No matter what kind of probability is the most elemental, following the notation of conditional probability, we can agree that we speak of (unconditional) probability when we’ve accepted a sample space as the super-most population and we’re not willing to go astray by adding further sample points to this space. Similarly, we speak of conditional probability when we focus on an event with respect to a sub-population of the super-most (absolute in this sense) population. Is there any case which can be solved only by conditional probability: Once again, as long as we accept the ratio representation of the conditional probability, we see that conditional probability can be expressed in terms of unconditional probability. Thus, conceptually, any problem where conditional probability is used can also be solved without the use of conditional probability. However, we must appreciate that for cases where the population and sub-population are not part of the same experiment, the use of conditional probability is really useful (not necessarily inevitable). To explain this further, in the case of finding the probability of the king of hearts given that the card is red, we don’t really need conditional probability, because the population of 52 cards and the sub-population of 26 red cards are very clear to us. However, for cases such as applying a medical test on a cow to determine if it has mad-cow disease, if we know the false positive and false negative probabilities of the test, then to find the probability that a cow has the disease given that it has tested positive, conditional probability can be used with great effect. If I may bring in an analogy of the ‘plus’ and ‘multiplication’ symbols of mathematics, we all know that any problem that uses the multiplication symbol can also be solved without it by mere use of the ‘plus’ symbol.
Similarly, in terms of solving problems, conditional probability can be avoided altogether, just like the multiplication symbol in mathematics. Still, we can appreciate the usefulness of conditional probability just like we can appreciate the use of multiplication in mathematics. ----------Bibliography------------ [1] H. Nguyen and C. Walker, “A history and introduction to the algebra of conditional events and probability logic,” IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 12, pp. 1671-1675, Dec. 1994. [2] A. Hájek, “Conditional Probability,” in Handbook of the Philosophy of Science, Volume 7: Philosophy of Statistics, p. 99, 2011.
Do matrices have a "to the power of" operator? Well I was sure that saying "$A^3$" (where $A$ is an $n\times n$ matrix) is nonsense. Sure one could do $(A\cdot A) A$ But that contains different operators etc. So what did my prof mean by the following statement: show that $A^{25}\mathbf{x} = \mathbf{0}$ has only the trivial solution? (We're also given the determinant of A). I know the proof will probably end with stating: "This means that $A^{25}$ is invertible, so $A^{25}\mathbf{x} = \mathbf{0}$ has only the trivial solution. And well I could state that $\det(A^{25}) = 5^{25} \neq 0$. But then again: I really wonder what the "to the power of" operator means? Or did my prof make a mistake here?
If you want a formal definition of matrix exponentiation for non-negative integer values, just define $A^n = A^{n-1}\cdot A$ and $A^0 = I$. Since matrix multiplication is associative, we won't have any ambiguity there. Edit: As Tobias Kildetoft points out below, it might be wiser to define the base case as $A^1=A$ instead of $A^0=I$, so as to not have to worry about how $\det(A^n)=\det(A)^n$ for $\det(A)=0$ would imply $0^0=1$. Which isn't false, depending on how we want to define it, but is something we might not want to worry about for the purposes of defining matrix powers.
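A small Perl sketch of this definition for $2\times 2$ matrices (an illustration I am adding, with an arbitrarily chosen example matrix): it builds $A^n$ by repeated multiplication and checks the two facts relevant to the question, namely that powers of the same matrix commute ($A^2A^3=A^3A^2=A^5$) and that $\det(A^n)=\det(A)^n$.

    #!/usr/bin/perl -w
    use strict;

    # 2x2 matrices as array refs [[a,b],[c,d]]; A^n defined by A^1 = A, A^n = A^(n-1)*A.
    sub matmul {
        my ($x, $y) = @_;
        my $z = [ [0, 0], [0, 0] ];
        for my $i (0, 1) {
            for my $j (0, 1) {
                for my $k (0, 1) {
                    $z->[$i][$j] += $x->[$i][$k] * $y->[$k][$j];
                }
            }
        }
        return $z;
    }

    sub matpow {
        my ($a, $n) = @_;
        my $p = $a;
        $p = matmul($p, $a) for 2 .. $n;   # repeated multiplication, n >= 1
        return $p;
    }

    sub det  { my ($m) = @_; return $m->[0][0] * $m->[1][1] - $m->[0][1] * $m->[1][0]; }
    sub flat { my ($m) = @_; return join ',', map { @$_ } @$m; }

    my $A = [ [1, 2], [3, 4] ];            # an arbitrary example matrix
    my $left  = matmul(matpow($A, 2), matpow($A, 3));
    my $right = matmul(matpow($A, 3), matpow($A, 2));
    print "A^2*A^3 == A^3*A^2 ? ", (flat($left) eq flat($right) ? "yes" : "no"), "\n";
    print "det(A^5) = ", det(matpow($A, 5)), ", det(A)^5 = ", det($A) ** 5, "\n";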
But matrix multiplication is not commutative, $A^{2}\cdot A^{3} \neq A^{3}\cdot A^{2}$ except in some particular cases. $$ A = \pmatrix{0 & 3 \\ 1&2}; A^2 = \pmatrix{3 & 6 \\ 2 & 7}; A^3 = \pmatrix{6& 21 \\ 9 & 20} $$ $$A^2·A^3 = \pmatrix{72 & 183\\ 75 & 182}; A^3·A^2 = \pmatrix{60 & 183 \\ 61 & 182} $$ So $A^5$ has more than 1 solution, at least in this case.
Probability that X is less than Y directly from joint CDF? Suppose X and Y are arbitrary random variables with joint cdf F. Is it possible to find $P(X<Y)$ or $P(X\le Y)$ directly from F? I think it's easier to think in terms of the equivalent $P(X-Y<0)$, but I'm not sure how to go from there. I know that if X and Y are continuous, we can differentiate the joint cdf to get the joint pdf and we can integrate over that to solve it. But is there a more general way to get it directly from the joint cdf? e.g., what if X or Y is not continuous or discrete?
$\displaystyle \mathbb{P}(X\leq Y)= \int_{-\infty}^{\infty}\int_{-\infty}^{y} f(x,y)dxdy$, assuming $X$ and $Y$ are jointly continuous with joint pdf $f(x,y)$.
Blow up of a solution What exactly does blow up mean, when people say, for example, that a solution (to a pde (say)) blows up. Thanks.
The meaning is, of course, context-dependent... In the context of differential equations, that a solution to an equation with a "time" variable blows up usually means that the maximal domain for which it is defined is finite, so that at the endpoint of that interval something 'bad' happens: either the solution goes to infinity, or it stops being smooth (in a way that makes the differential equation stop having sense, maybe), or something. This is an important phenomenon, one which causes trouble. A couple of examples: Perelman's solution of the Poincaré conjecture—in a very vague sense—consists of a way to 'work around' the fact that certain solutions of a (very complicated non-linear) PDE blow up; the third 'Millennium' Clay problem is (very roughly) the question «do the solutions of the Navier-Stokes equation blow up?». Consider, as a very simple example, the equation $$\frac{\mathrm dx}{\mathrm dt}=x^2.$$ This equation makes sense and satisfies the conditions for existence and uniqueness of solutions on all of the $(t,x)$-plane, but if you solve it (which is easy to do explicitly, as it has the two variables separated) you'll see that all of its solutions have a maximal interval which is a half-line (which is bounded on the left or on the right, depending on the initial condition) and that at the finite end of that interval the solutions become unbounded. We thus say that all solutions of our equation blow up in finite time. There are also equations which have some solutions which blow up and some which live forever. One example is $$\frac{\mathrm dx}{\mathrm dt}=\begin{cases}x^2&\text{if $x\geq0$}\\0&\text{if $x\leq0$}\end{cases}$$ and you'll surely find lots of fun in trying to concoct examples where even more interesting phenomena occur.
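To make the first example concrete (a short worked version of the computation the answer alludes to): separating variables in $\frac{\mathrm dx}{\mathrm dt}=x^2$ with initial condition $x(0)=x_0>0$ gives $$-\frac{1}{x} = t + C, \qquad C = -\frac{1}{x_0}, \qquad\text{so}\qquad x(t)=\frac{x_0}{1-x_0 t},$$ which is defined only on the half-line $t<1/x_0$ and satisfies $x(t)\to\infty$ as $t\to (1/x_0)^-$: the solution blows up in finite time.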
"Blows up" means it goes to infinity. 1/x "blows up" at x = 0.
Summation of an infinite series The sum is as follows: $$ \sum_{n=1}^{\infty} n \left ( \frac{1}{6}\right ) \left ( \frac{5}{6} \right )^{n-1}\\ $$ This is how I started: $$ = \frac{1}{6}\sum_{n=1}^{\infty} n \left ( \frac{5}{6} \right )^{n-1} \\ = \frac{1}{5}\sum_{n=1}^{\infty} n \left ( \frac{5}{6} \right )^{n}\\\\ = \frac{1}{5}S\\ S = \frac{5}{6} + 2\left (\frac{5}{6}\right)^2 + 3\left (\frac{5}{6}\right)^3 + ... $$ I don't know how to group these into partial sums and get the result. I also tried considering it as a finite sum (sum from 1 to n) and applying the limit, but that didn't get me anywhere! PS: I am not looking for the calculus method. I tried to do it directly in the form of the accepted answer, $$ \textrm{if} \ x= \frac{5}{6},\\ S = x + 2x^2 + 3x^3 + ...\\ Sx = x^2 + 2x^3 + 3x^4 + ...\\ S(1-x) = x + x^2 + x^3 + ...\\ \textrm{for x < 1},\ \ \sum_{n=1}^{\infty}x^n = -\frac{x}{x-1}\ (\textrm{I looked up this eqn})\\ S = \frac{x}{(1-x)^2}\\ \therefore S = 30\\ \textrm{Hence the sum} \sum_{n=1}^{\infty} n \left ( \frac{1}{6}\right ) \left ( \frac{5}{6} \right )^{n-1} = \frac{30}{5} = 6 $$
Letting $a = d = 1/6$ and $r = 5/6$, our sum is: $$ S = a + (a + d)r + (a + 2d)r^2 + (a + 3d)r^3 + \cdots $$ Scaling by $r$, we find that: $$ rS = ar + (a + d)r^2 + (a + 2d)r^3 + \cdots $$ Subtracting the two equations (by collecting like powers of $r$), we obtain: $$ (1 - r)S = a + dr + dr^2 + dr^3 + \cdots = a + dr(1 + r + r^2 + \cdots) = a + \frac{dr}{1 - r} $$ Hence, we conclude that: $$ S = \frac{a}{1 - r} + \frac{dr}{(1 - r)^2} = \frac{1/6}{1 - 5/6} + \frac{(1/6)(5/6)}{(1 - 5/6)^2} = \frac{1}{6 - 5} + \frac{(1)(5)}{(6 - 5)^2} = 6 $$
hint: differentiate the identity $$\sum_{k=0}^{\infty} x^k = \frac{1}{1-x} $$
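Carrying that hint through (a short worked version): differentiating $\sum_{k=0}^{\infty} x^k = \frac{1}{1-x}$ term by term for $|x|<1$ gives $$\sum_{k=1}^{\infty} k x^{k-1} = \frac{1}{(1-x)^2},$$ so with $x=\tfrac{5}{6}$, $$\sum_{n=1}^{\infty} n \left(\tfrac{1}{6}\right)\left(\tfrac{5}{6}\right)^{n-1} = \tfrac{1}{6}\cdot\frac{1}{(1-\tfrac{5}{6})^2} = \tfrac{1}{6}\cdot 36 = 6,$$ in agreement with the other answer.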
Limit comparison test; negative outcome It says the outcome must be a positive finite number; what happens if it's a negative finite number? Is it inconclusive, as it is when the limit is 0 or infinity?
If $\lim |a_n/b_n|=0$ and $\sum |b_n|$ converges, then $\sum |a_n|$ converges. If $\lim |a_n/b_n|=\infty$ and $\sum |b_n|$ diverges, then $\sum |a_n|$ diverges.
I'm not sure how the above answer relates, but in my Calculus 3 class, when our teacher finds a negative result (e.g. $-1/10$) when using the limit comparison theorem, he simply combines the theorem with the absolute value theorem, which says that if the series of absolute values converges then so does the series. You then get $1/10$, which is positive. Combining theorems, I believe, is the complicated idea here.
Topology on the space of test functions I am trying to read into the theory of distributions and there is one thing which bothers me. I read that a distribution is a linear, continuous functional from the space of test functions, which, depending on the author, is sometimes defined as the Schwartz space and sometimes as $C^\infty_K$ (smooth functions with compact support). For starters, where does this ambiguity come from and does it matter? Now for the case $C^\infty_K$, I found two definitions of the associated topology. One as the final topology induced by the inclusion maps from $C^\infty(K)$ (I already understand how the topology on those spaces is defined) and one as the final locally convex topology induced by those maps, i.e. the finest locally convex topology which makes those maps continuous. I couldn't prove their equivalence, i.e. how do I show that the final topology induced by those maps is already linear and locally convex?
OK, I think I got a counterexample now, i.e. the final topology w.r.t. to the inclusion maps is not linear. I only worked it through on ${\bf R}$ though. The only assumption that I need is that for given $\epsilon>0$ and $k,n\in{\bf N}$ we find some $f\in C^\infty([-1,1])$ such that $\|f^{(j)}\|_\infty<\epsilon$ for $j<k$ and $f^{(k)}(0)>n$, which I think is easily constructable. Now the set $$A:=\{f\in C^\infty_K:\forall j>0.f(j)<1/f^{(j)}(0)\}$$ (where we set $1/0=\infty$) is open in the final topology, i.e. $A\cap C^\infty([-i,i])$ is open for each $i$. Yet, if the final topology were linear, we would find an neighbourhood $B$ of $0$ such that $B+B\subseteq A$. But any such $B$ contains a basic open neighbourhood from $C^\infty([-i,i])$ of the form $$U_i=\{f\in C^\infty([-i,i]):\forall j<k(i).\|f^{(j)}\|_\infty<\epsilon(i)\}$$ for each $i$. Now we can construct $f\in U_1$ such that $f^{(k(i))}(0)$ is large enough so $f+U_{k(i)+1}\subseteq B+B$ is no longer a subset of $A$. So the majority of books and scripts I read (e.g. Rudin) got it right, when they defined the topology on the space of test functions as the limit topology in the category of locally convex spaces, i.e. the finest locally convex topology which makes the inclusion maps continuous.
Let $K_n$ be the closed ball of radius $n$, so your $C_K^\infty = \bigcup_n C^\infty(K_n)$, and let $f_n:C^\infty(K_n)\longmapsto C_K^\infty$ be the natural embeddings. The system $$ \bigl\{ \bigcup_n f_n[U_n]\, ;\, U_n \text{ is a $0$-neighbourhood in } C^\infty(K_n) \bigr\} $$ is a filter base in $C_K^\infty$, and it is immediate to verify (using $U_n = f_n^{-1}(f_n[U_n])$) that it is a filter base for a fundamental system of $0$-neighbourhoods of a linear topology. An absolutely convex set $U$ is a $0$-neighbourhood in $C_K^\infty$ for the topology described above iff $f_n^{-1}[U]$ is a $0$-neighbourhood in $C^\infty(K_n)$ for each $n$. But the latter is precisely the characterization of a $0$-neighbourhood in the final topology on $C_K^\infty$.
Arc length of the squircle The squircle is given by the equation $x^4+y^4=r^4$. Apparently, its circumference or arc length $c$ is given by $$c=-\frac{\sqrt[4]{3} r G_{5,5}^{5,5}\left(1\left| \begin{array}{c} \frac{1}{3},\frac{2}{3},\frac{5}{6},1,\frac{4}{3} \\ \frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{3}{4},\frac{13}{12} \\ \end{array} \right.\right)}{16 \sqrt{2} \pi ^{7/2} \Gamma \left(\frac{5}{4}\right)}$$ Where $G$ is the Meijer $G$ function. Where can I find the derivation of this result? Searching for any combination of squircle and arc length or circumference has led to nowhere.
You can do this using this empirical formula for the perimeter of a general superellipse: $L=a+b*(((2.5/(n+0.5))^{1/n})*b+a*(n-1)*0.566/n^2)/(b+a*(4.5/(0.5+n^2)))$. This is mentioned in my research.
By your definition, $\mathcal{C} = \{(x,y) \in \mathbb{R}^{2}: x^4 + y^4 = r^4\}$, which can be parametrized as \begin{align} \mathcal{C} = \begin{cases} \left(+\sqrt{\cos (\theta )},+\sqrt{\sin (\theta )} \right)r\\ \left(+\sqrt{\cos (\theta )},-\sqrt{\sin (\theta )} \right)r\\ \left(-\sqrt{\cos (\theta )},+\sqrt{\sin (\theta )} \right)r\\ \left(-\sqrt{\cos (\theta )},-\sqrt{\sin (\theta )} \right)r \end{cases} , \qquad 0 \leq \theta \leq \frac{\pi}{2}, \, 0<r \end{align} Now, view this curve in $\mathbb{R}^{2}_{+}$ as $y = \sqrt[4]{r^4-x^4}$, and observe the symmetry with respect to both axes. It yields that the arc length is just: $$c = 4 \int_{0}^{r} \sqrt{1+\left(\dfrac{d}{dx}\sqrt[4]{r^4-x^4}\right)^2} \,dx = 4 \int_{0}^{r} \sqrt{1+\frac{x^6}{\left(r^4-x^4\right)^{3/2}}} \,dx$$
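As an independent numerical check of this integral (my own addition; it also uses the extra observation, not made in the answer, that the curve is symmetric about the line $y=x$, so the full length is $8$ times the integral from $0$ to $2^{-1/4}$, where the integrand stays bounded):

    #!/usr/bin/perl -w
    use strict;

    # Simpson's rule check of the arc-length formula above, for r = 1.
    sub integrand {
        my ($x) = @_;
        return sqrt(1 + $x ** 6 / (1 - $x ** 4) ** 1.5);
    }

    my $upper = 2 ** (-0.25);   # the point where the curve meets the line y = x
    my $n     = 1000;           # even number of Simpson subintervals
    my $h     = $upper / $n;
    my $sum   = integrand(0) + integrand($upper);
    for my $i (1 .. $n - 1) {
        $sum += ($i % 2 ? 4 : 2) * integrand($i * $h);
    }
    printf "arc length of x^4 + y^4 = 1 is approximately %.4f\n", 8 * $h * $sum / 3;   # roughly 7.02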
Find a confidence interval using as pivotal quantity a function of the MLE Let $X_1,\ldots,X_n$ be a random sample from $f(x\mid\theta)=\theta x^{\theta -1}$ for $0<x<1$. Find a confidence interval for $\theta$ using as pivotal quantity a function of the maximum likelihood estimator for $\theta$. Well, using the logarithmic version of the likelihood function, I got that the MLE of $\theta$ is $T_1=\frac{-n}{\sum\limits_{i=1}^n \log x_i}$. Is this correct? And how could I use a function of it as a pivotal quantity? I know that a pivotal quantity is a function of the sample and the parameter, whose distribution doesn't depend on $\theta$.
Assuming you want a confidence interval for $\theta$, you may use the CLT or a known chi distribution for variances and some integrals to get an interval for the parameter. First, we have that $$ \hat{\theta}=-\frac{n}{\sum ln\ x_i}. $$ For the expected value we have the integral $$ E[X] = \int_0^1 \theta x^{(\theta-1)}x\ \text{d}\theta = \frac{\theta}{1+\theta} $$ And for the second moment $$ \mu_2 = \int_0^1 \theta x^{\theta-1}x^2\ \text{d}\theta = \frac{\theta}{2+\theta} $$ So the variance is $$ Var(X) = \frac{\theta}{2+\theta} - \left( \frac{\theta}{1+\theta} \right)^2 = \\ = \frac{\theta}{(\theta + 1)^2(\theta+2)}. $$ Now then, we know that the following quotient has a chi-squared distribution, so in this case $$ C = \frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1} \text{ and } C = \frac{(n-1)S^2}{\frac{\theta}{(\theta + 1)^2(\theta+2)}} $$ where $S^2 = \frac{1}{n-1}\sum(X_i-\overline{X})^2$. Finally we can work with the random variable $C$ distributed as a chi squared. We want a confidence interval of $(1-\alpha)$. Then $$ P \left( \chi^2_{n-1}(\alpha/2) < C < \chi^2_{n-1}(1-\alpha/2)\right) = 1-\alpha $$ denoting that $\chi^2_{n-1}(\beta)$ is the $\beta$ percentile of the $\chi^2$ which may be found in reference tables. From here we work with the interval only substituting the percentiles with $\frac{1}{D_l}$ and $\frac{1}{D_s}$, lower and superior. $$ \frac{1}{D_l} < C < \frac{1}{D_s} \implies \frac{1}{D_l} < \frac{(n-1)S^2}{\frac{\theta}{(\theta + 1)^2(\theta+2)}} < \frac{1}{D_s} \\\ \\\ \\\ D_l > \frac{\frac{\theta}{(\theta + 1)^2(\theta+2)}}{(n-1)S^2} > D_s \\\ \\\ \\\ D_l((n-1)S^2) > \frac{\theta}{(\theta + 1)^2(\theta+2)} > D_s((n-1)S^2) \\\ \\\ \\\ (D_l((n-1)S^2)) > \frac{\theta}{(\theta + 1)^2(\theta+2)} > (D_s((n-1)S^2)) \\\ \\\ \\\ (\hat{\theta} + 1)^2(\hat{\theta}+2)\left( D_l((n-1)S^2) \right) > \theta > (\hat{\theta}+ 1)^2(\hat{\theta}+2)\left( D_s((n-1)S^2) \right) \\\ \\\ \\\ $$ where, taking the before definition and derivation, $\hat{\theta} = -\frac{n}{\sum ln\ x_i}$ and $S^2 = \frac{1}{n-1}\sum(X_i-\overline{X})^2$. When we sent the $\theta$ to the other side, we simply used its estimate to work around the problem. (However a mathematician should confirm this is valid!) Either way, if you find a better known distribution that may include $\theta$, then try it and tell us if it is simpler! Hope it helps! Note: made some edits due to an error :/
Relation between eigenvalues of $A$ and $B=(f(a_{i,j}))$, where $f$ is concave/convex Let $A$ be an $n\times n$ real and symmetric matrix with unit diagonal. Furthermore, let the entries $0\leq a_{ij} \leq 1, i\neq j$. Now, define a function $f:[0,1] \mapsto [0,1]$ which is concave. For instance we can take $f(x)=2\sin\left(\frac{\pi x}{6}\right)$ which is the transformation of rank correlation coefficients to linear correlation coefficients in a Gaussian copula. This defines $B=(f(a_{i,j}))$, which is still symmetric with unit diagonal and $b_{ij}\geq a_{ij}$. I have three related questions: 1) What is the relationship between the eigenvalues of $A$ and $B$ in this case? Does a general result exist when $f$ is concave or convex? 2) Imagine that $A$ has $t$ negative eigenvalues, how about $B$? 3) If $B$ is positive semi-definite, is $A$? The question arose when I implemented an algorithm to find the nearest (rank) correlation matrix to $A$ (where a requirement is non-negative eigenvalues) which worked as intended. Afterwards, I converted the rank coefficients to linear correlation coefficients and this correlation matrix had some negative eigenvalues. This made me curious about the relation between the eigenvalues as the function $f$ possesses very nice properties (monotone, concave, infinitely differentiable). Perhaps a general result exists in this case?
Some quick observations. Presumably $n>1$. (2) Since your $A$ always has nonnegative entries and your $f$ is a function on $[0,1]$, $B$ must have a nonnegative trace. It follows that $B$ must possess at least one non-negative eigenvalue, but the number of non-negative eigenvalues may vary wildly. Example: let $A$ be indefinite, then: when $f(x)=x$, $B=A$ is indefinite; when $f(x)=1$, all eigenvalues of $B=ee^T$ (where $e$ is the all-one vector) are non-negative. (3) No. In the above example with $f(x)=1$, $B$ is positive semidefinite but $A$ is indefinite.
On the idealizer of the set of elementary wedge products of two vectors in $K^4$, for a field $K$ Let $K$ be a field of characteristic zero. Consider $V=K^4$ with standard basis vectors $e_1,e_2,e_3,e_4$. We can consider the second exterior product $\bigwedge^2 V $ of $V=K^4$ with a basis given by $\{e_1\wedge e_2,e_1\wedge e_3,e_1\wedge e_4, e_2\wedge e_3, e_2\wedge e_4, e_3\wedge e_4\}$. For $\hat x=x_1e_1+x_2e_2+x_3e_3+x_4e_4 \in V$ and $\hat y=y_1e_1+y_2e_2+y_3e_3+y_4e_4 \in V$, we have the elementary wedge product $\hat x \wedge \hat y=\sum_{i<j}(x_iy_j-x_jy_i) e_i \wedge e_j$. Now let $R=K[T_1,...,T_6]$ and Consider $$I :=\{f \in R : f(a_1b_2-a_2b_1,a_1b_3-a_3b_1,a_1b_4-a_4b_1,a_2b_3-a_3b_2,a_2b_4-a_4b_2,a_3b_4-a_4b_3) = 0, \forall (a_1,...,a_4); (b_1,...,b_4)\in K^4\}$$. Then $I $ is an ideal of $R$. My question is : Is $I$ always a principal, radical ideal ? (May assume $K$ is algebraically closed if need be)
First things first: $I$ is the ideal corresponding to the closed embedding $G_k(n)\hookrightarrow \Bbb P(\bigwedge^k K^n)$ of the Grassmannian of $k$-planes in $n$-space into projective space. This enables us to make many determinations about $I$ from knowing what's going on with these geometric objects. As $n,k$ vary, $I$ is not always principal: if $I$ were principal, then $G_k(n)$ would be a subvariety of codimension at most one in $\Bbb P(\bigwedge^k K^n)$ by Krull's Height Theorem. But the dimension of $G_k(n)$ is $k(n-k)$ while the dimension of the projective space in question is $\binom{n}{k}-1$. It is not hard to see that $k(n-k)+1$ is not always greater than $\binom{n}{k}-1$: for instance, choosing $n=5,k=2$ gives that the former is $7$ while the latter is $9$. In your specific case of $k=2,n=4$, the ideal is principal and is generated by $p_{23}p_{14} - p_{13}p_{24} + p_{12}p_{34}$, which may be verified by examining the relations between the coordinates you have written (we write $p_{ij}$ for the determinant of the minor composed of rows $i$ and $j$, or equivalently the coefficient of $e_i\wedge e_j$). This equation may be seen to generate a radical ideal without too much trouble. In general, the equations which generate $I$ are called the Plücker relations, and it's known how to construct all of them (see Wikipedia, for instance). In general, as $n,k$ vary, $I$ is always radical, since the Grassmannian over a field is reduced. This is in fact a special case of a more general theorem: determinantal ideals are radical (see "Are the determinantal ideals prime?" for several links and sources).
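For the case at hand, the quadratic relation can be checked directly from the definitions $p_{ij}=a_ib_j-a_jb_i$ (a short verification, spelled out for concreteness): expanding, $$p_{12}p_{34}=a_1a_3b_2b_4-a_1a_4b_2b_3-a_2a_3b_1b_4+a_2a_4b_1b_3,$$ $$p_{13}p_{24}=a_1a_2b_3b_4-a_1a_4b_2b_3-a_2a_3b_1b_4+a_3a_4b_1b_2,$$ $$p_{14}p_{23}=a_1a_2b_3b_4-a_1a_3b_2b_4-a_2a_4b_1b_3+a_3a_4b_1b_2,$$ and the combination $p_{12}p_{34}-p_{13}p_{24}+p_{14}p_{23}$ cancels term by term, so every elementary wedge product $\hat x\wedge\hat y$ indeed satisfies the generator above.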
Once you fix $x$ and $y$, your question is equivalent to determining the ideal vanishing at a point in $K^6$. Such an ideal is nothing but the maximal ideal corresponding to the point.
Sample space for die throwing experiment I am throwing $8$ non-identical dice, and I want to find the probability of getting a sum of the numbers on the dice equal to $30$. The number of ways of getting the sum equal to $30$ is $125588$ (using multinomial theorem). Is this the number of favorable cases? What will be the sample space? I am a little confused, please help me out...
Answer: Number of favorable cases $ = (-1)^0 {8\choose0}{29\choose7}+(-1)^1 {8\choose1}{23\choose7}+(-1)^2 {8\choose2}{17\choose7}+(-1)^3 {8\choose3}{11\choose7} = 1560780 - 8\cdot245157 + 28\cdot19448 - 56\cdot330 = 125588$ In general, the formula for the number of ways of obtaining the sum $s$ when throwing $n$ dice with $x$ sides each is $$\sum_{k = 0}^{[(s-n)/x]} (-1)^k {n\choose k}{(s-1-xk)\choose (n-1)}$$
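As for the sample space: it is the set of all $6^8 = 1679616$ ordered outcomes of the eight (non-identical) dice, so the probability asked for is $125588/6^8 \approx 0.0748$. A small Perl sketch (my own addition) that counts the favorable outcomes by dynamic programming and confirms the inclusion-exclusion value:

    #!/usr/bin/perl -w
    use strict;

    # ways[s] after k dice = number of ordered outcomes of k six-sided dice summing to s.
    my @ways = (1);                   # zero dice: one way to reach sum 0
    for my $die (1 .. 8) {
        my @next = (0) x (6 * $die + 1);
        for my $s (0 .. $#ways) {
            next unless $ways[$s];
            $next[$s + $_] += $ways[$s] for 1 .. 6;
        }
        @ways = @next;
    }
    my $favorable = $ways[30];        # expect 125588
    my $total     = 6 ** 8;           # size of the sample space: 1679616
    printf "favorable = %d, total = %d, probability = %.5f\n",
        $favorable, $total, $favorable / $total;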
How to find the gradient of norm square How to find the gradient of the function $f(x) = ||g(x)||^2$ where $x \in \Bbb R^d$ and $g: \Bbb R^d \to \Bbb R$.
I will assume $g:\mathbb R^d \to \mathbb R^n$. It's nice to avoid using components. Notice that \begin{equation} f(x) = h(g(x)) \end{equation} where $h(x) = \|x\|^2$. The chain rule tells us that \begin{align} f'(x) &= h'(g(x)) g'(x) \\ &= 2 \underbrace{g(x)^T}_{1 \times n} \underbrace{g'(x)}_{n \times d}. \end{align} If we use the convention that $\nabla f(x)$ is a column vector, rather than a row vector, then \begin{equation} \nabla f(x) = f'(x)^T = 2 g'(x)^T g(x). \end{equation}
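As a concrete instance of this formula (an illustrative example I am adding): if $g(x) = Ax - b$ for an $n\times d$ matrix $A$, then $g'(x) = A$, and the rule gives the familiar least-squares gradient $$\nabla \|Ax-b\|^2 = 2A^T(Ax-b).$$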
Well, we know $x = (x_1,...,x_d) $, hence, $$ ||x||^2 = x_1^2 + ... + x_d^2 $$ Now, $$ \frac{ \partial f}{\partial x_1 } = 2x_1$$ $$ \frac{ \partial f}{\partial x_2 } = 2x_2 $$ .... $$ \frac{ \partial f}{\partial x_k } = 2x_k $$ Hence, $$ grad(f) = (2x_1,....,2x_d) = 2 x $$
Why does this series converge? My question is: Why does the series $$ \sum_{j,k=1}^\infty \frac{1}{j^4+k^4} $$ converge? I tested the convergence with Mathematica and Octave, but I can't find an analytical proof. In fact, numerical computations suggest that the value of the series is $<1$. One obvious thing to do would be to use the generalized harmonic series to see that \begin{align} \sum_{j,k=1}^\infty \frac{1}{j^4+k^4} &= \sum_{k=1}^\infty \frac{1}{2k^4} + \sum_{j,k=1; j\neq k} \frac{1}{j^4+k^4} \\ &= \frac{\pi^4}{180} + \sum_{j=1}^\infty \sum_{k=1}^{j-1} \frac{1}{j^4+k^4} + \sum_{j=1}^\infty \sum_{k=j+1}^{\infty} \frac{1}{j^4+k^4}\\ &\leq \frac{\pi^4}{180} + \sum_{j=1}^\infty \sum_{k=1}^{j-1} \frac{1}{(j-k)^4} + \sum_{j=1}^\infty \sum_{k=j+1}^{\infty} \frac{1}{(j-k)^4} \end{align} but unfortunately the last two (double-)series do not converge. The problem arises when one tries to estimate the Hilbert-Schmidt norm of the Laplacian in $H^2(\mathbb{T}_\pi^2)$.
You can use the inequality $2xy\le x^2+y^2$ (seen by expanding $(x-y)^2\ge0$) to get $2j^2k^2\le j^4+k^4$: $$\sum_{j,k=1}^\infty\frac1{j^4+k^4}\le\frac12\sum_{j=1}^\infty\sum_{k=1}^\infty\frac1{j^2}\frac1{k^2}=\frac12\Bigl(\sum_{j=1}^\infty\frac1{j^2}\Bigr)^2<\infty.$$
Straightforward application of comparison test: $$ \sum_{j,k=1}^\infty \frac{1}{j^4+k^4} < \sum_{j=1}^\infty \frac{1}{j^4} = \frac{\pi^4}{90} $$
Distance between two skew lines I have 2 skew lines $L_A$ and $L_B$ and 2 parallel planes $H_A$ and $H_B$. The line $L_A$ lies in $H_A$ and $L_B$ in $H_B$. If the equations of $H_A$ and $H_B$ are given like this: $x+y+z = 0$ (for $H_A$) $x+y+z = 5$ (for $H_B$) Can I just simply say that the distance between the two lines $L_A$ and $L_B$ is 5, since the two planes they lie in are separated by 5?
No. The distance between the two planes is not 5 in the first place. However, if you find the correct distance between the two planes, then your answer may still be wrong if the lines are parallel. If they are not parallel, then it happens to be correct. You gave the condition of skew lines, but I mention these two cases because it shows that it is not at all trivial why the distance should be as claimed, and there is something crucial about the lines being skew.
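For completeness, the actual distance between the two given planes (a one-line computation): the planes $x+y+z=0$ and $x+y+z=5$ are separated by $$d=\frac{|5-0|}{\sqrt{1^2+1^2+1^2}}=\frac{5}{\sqrt{3}}\approx 2.89,$$ not $5$; and since the lines are skew, this is then also the distance between $L_A$ and $L_B$, as the answer explains.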
Yes, the shortest distance between the skew lines will equal the distance between the planes.
Find three real orthogonal matrices of order $3$ having all integer entries. Find three real orthogonal matrices of order $3$ having all integer entries. I have no idea how to solve the problem. I don't know how to start. If $A$ is such a matrix then $AA^T=A^TA=I_3$. Please help me.
Let $A$ be such a matrix, and $a_1$, $a_2$ and $a_3$ its columns. Given that $A$ is orthogonal means that $$\langle a_i,a_j\rangle =\left\{\begin{array}{ll}1&\text{ if } i=j\\0&\text{ if }i\neq j\end{array}\right.$$ So in particular $\langle a_i,a_i\rangle=1$. Because the $a_i$ have all integer coefficients, it follows that $$a_1,a_2,a_3\in\{(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)\},$$ leaving six options for each of $a_1$, $a_2$ and $a_3$. For any given value of $a_1$ there are precisely $4$ values of $a_2$ such that $\langle a_1,a_2\rangle=0$. For any given values of $a_1$ and $a_2$ there are precisely $2$ values of $a_3$ such that $\langle a_1,a_3\rangle=\langle a_2,a_3\rangle=0$. Hence there are $6\times4\times2=48$ such matrices in total.
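To make this concrete, three such matrices (one possible choice among the $48$; any choice of columns $\pm e_i$ meeting the conditions above works): $$\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix},\qquad \begin{pmatrix}1&0&0\\0&0&-1\\0&1&0\end{pmatrix},\qquad \begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}.$$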
If $a,b,c$ are the entries of the first row of a matrix, then $a^2+b^2+c^2$ should be equal to one. this equation has the following integer solutions : (1,0,0),(0,1,0),(0,0,1),(-1,0,0),(0,-1,0),(0,0,-1). Thus, each row of the desired matrix should be filled with exactly one of these solutions. So, you have 120 possible matrices.
the definition of autocorrelation I find the definition of autocorrelation from wiki: $$ R(s,t)=\frac{E\left[ \left( {{X}_{t}}-{{\mu }_{t}} \right)\left( {{X}_{s}}-{{\mu }_{s}} \right) \right]}{{{\sigma }_{t}}{{\sigma }_{s}}} $$ But I also find the definition of autocorrelation as below somewhere: $$ R(s,t)=E\left[ X(s)X(t) \right] $$ Are these two definitions equal? Can anyone provide the proof?
The bottom definition is the one I often use, i.e. without subtracting the mean and without normalizing. When you take the expectation with the mean subtracted, it is often termed the autocovariance, i.e.: $C_{xx}(\tau) = E[(x(t)-\mu)(x(t+\tau)-\mu)]$. Having said that, both definitions you provide get (confusingly) used, depending on the field of application. But just to make it clear: they are not equivalent.
The first definition is the true definition for the autocorrelation. The second equation you got from "somewhere" just looks like the autocorrelation function for a Gaussian R.V. with $\mu = 0$ and $\sigma^2= 1.$
Let A, B, and C be events such that A and B are both subsets of C. Also, let P(A) = 0.3, P(B) = 0.4, and P(C) = 0.6. Then, P(A|B) could be what Let A, B, and C be events such that A and B are both subsets of C. Also, let P(A) = 0.3, P(B) = 0.4, and P(C) = 0.6. Then, what could P(A|B) be? There is a hint saying that there is not only one answer, and I do not get it.
$$P(A\cup B)\geq \max{\{P(A),P(B)\}}=0.4$$ $$P(A\cup B)\leq \min{\{P(C),P(A)+P(B)\}}=0.6$$ $$P(A\cup B)\in[0.4,0.6]$$ $$P(AB)=P(A)+P(B)-P(A\cup B)\in[0.1,0.3]$$ $$P(A|B)=\frac{P(AB)}{P(B)}\in[0.25,0.75]$$
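Both endpoints are attainable, which is why there is more than one answer. For instance (an illustrative construction I am adding, with the uniform measure on $[0,1]$): taking $A=[0,0.3]$, $B=[0,0.4]$, $C=[0,0.6]$ gives $P(A\cap B)=0.3$ and $P(A|B)=0.75$, while taking $A=[0,0.3]$, $B=[0.2,0.6]$, $C=[0,0.6]$ gives $P(A\cap B)=0.1$ and $P(A|B)=0.25$.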
$$0.6\geq P(A\cup B)=0.3+0.4-P(A\cap B).$$ Thus, $$P\left(A|B\right)=\frac{P(A\cap B)}{P(B)}\geq\frac{0.1}{0.4}\geq\frac{1}{4}.$$ Also, use that $P(A\cap B)\leq P(A).$
Min value of cos(sinx)+sin(cosx) The max value of $f(x) = \sin(\cos x) + \cos(\sin x)$ is equal to $1 + \sin 1$. What is the minimum value of $f(x)$? I found the max value using that the range of $\sin(\cos x)$ is $[- \sin 1, \sin 1]$ and the range of $\cos(\sin x)$ is $[ \cos 1, 1]$, and the maximum of both occurs at $x=0$. However, I can't find the min value using this approach. Please try to find this without using graphing calculators.
Look at this graph of your function to see the multiple minima, essential for understanding and solving your problem.
Can anyone make me understand in simple language why the second condition for being a Euclidean domain is superfluous? Can anyone make me understand in simple language why the second condition for being a Euclidean domain is superfluous? Why is $v(a) \leq v(ab)$ not needed? How can we deduce it from the first one?
Suppose $V$ satisfies only the first Euclidean property, i.e. for all $\,a,b\in D\,$ if $\,b\neq 0\,$ then there are $\,q,r\in D\,$ such that $\, a = qb +r\,$ with $\, V(r) < V(b),\,$ where $V$ maps $D$ into (well-ordered) $\,\Bbb N.\,$ We show how to construct from $V$ another Euclidean function $v$ that satisfies $\, v(a) \le v(ab)\,$ if $\,ab\neq 0$. Derive $\,v\,$ from $\,V\,$ as follows $$\begin{align} v(0) &= V(0)\\ v(a) &= {\rm min}\{ V(b)\ :\ b\in aD\backslash 0\} \end{align}$$ Note $\,v(a)\le V(a)\,$ since it is clear if $\,a = 0,\,$ else it follows by $\, a\in aD\backslash 0$ $v$ is also a Euclidean function: if $\,a,b\in D\,$ and $\,b\neq 0\,$ then $\,v(b) = V(bc)\,$ for $\,0\neq c\in D.\,$ Since $\,V\,$ is a Euclidean function there are $\,q,r\in D\,$ such that $\, a = qbc + r\,$ and $\,V(r) < V(bc) = v(b).\,$ But by above we know $\,v(r)\le V(r)\,$ thus $\,v(r) < v(b),\,$ so $\,v\,$ is a Euclidean function. Note $\, v(a) \le v(ab)\,$ if $\,ab\neq 0\,$ since $\,aD\backslash 0\supseteq abD\backslash 0$ $\,\Rightarrow\,{\rm min}\,V(aD\backslash 0) \le {\rm min}\, V(abD\backslash 0)$ Remark $ $ See the paper cited here by Agargun & Fletcher for a comprehensive study of the logical relationships between various common definitions of Euclidean domains and rings.
The second property is superfluous because only the first one is needed to prove that every ideal of a Euclidean domain is principal.
Proof Involving Rational Numbers I asked this same question last night and got some answers but still can't make sense of this; normally I'd move on, but since I know how to do everything else for the test I'm going to try to get this down. Anyway ..... Stuck on a tutorial question trying to study for a test. The question is: Consider the following statement: "Between any two different rational numbers, there are at least two different rational numbers." (a) Write this statement as a logical expression. The universe is all numbers. Use Q to denote the set of rational numbers. (b) Prove or disprove this statement. Thanks, proofs are what I'm having the hardest time with. There are actually other parts to the question but I know how to do those. Can someone tell me what they'd consider the full answer? Our prof gives us little to no examples so I have nothing to go on, plus I learn best from looking at examples.
For the proof you can just give some explicit construction that works. E.g., note that if $x < y$, then $x < x + \frac{1}{3}(y - x) < x + \frac{2}{3}(y - x) < y$. Then argue that if $x, y \in\mathbb{Q}$, then the two new numbers are rationals, too.
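For part (a), one possible way to write the statement as a logical expression over the universe of all numbers (the exact shape expected may differ by course conventions): $$\forall x\,\forall y\,\Bigl[\bigl(x\in\mathbb{Q}\wedge y\in\mathbb{Q}\wedge x<y\bigr)\rightarrow\exists a\,\exists b\,\bigl(a\in\mathbb{Q}\wedge b\in\mathbb{Q}\wedge a\neq b\wedge x<a<y\wedge x<b<y\bigr)\Bigr].$$ The construction above then proves the statement by exhibiting $a=x+\tfrac13(y-x)$ and $b=x+\tfrac23(y-x)$.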
You can just use the fact that rational numbers are dense. So given $a$ and $b$, there is a rational number in $(a,b)$ and there is a rational number in both of $(a,x)$ and $(x,b)$, where $x$ is any number in $(a,b)$.
equation of circle tangent to line with radius Find the equation of a circle tangent to line $3x + y - 2 = 0$ at $(-1,5)$ and with radius $\sqrt{10}$. I've no idea on how to do this.
As per the question, the circle touches the line at only one point, so the line is a tangent, and there can be two circles touching the given line and passing through $(-1,5)$. If the given line is a tangent to the circle, then the centre lies on the line perpendicular to the given line passing through $(-1,5)$, at a distance of $\sqrt{10}$. The slope of the perpendicular line satisfies $$m_1 m_2=-1$$ where $m_1=-3$, which implies $m_2=1/3$. Let the line be $y=(1/3)x+c$. Since it passes through $(-1,5)$, substitute and get the value of $c$. The resulting line perpendicular to the given line is $$3y=x+16 \qquad (1)$$ To find the equation of the circle we need its centre, which can be found from the distance formula $$(x-(-1))^2+(y-5)^2=(\sqrt{10})^2 \qquad (2)$$ since the circle has a radius of $\sqrt{10}$ units, which implies the centre is at a distance of $\sqrt{10}$ units from the point of tangency. Solving (1) and (2) we get $$x=-4,2$$ and $$y=4,6$$ Therefore the equations of the two circles are $$(x+4)^2+(y-4)^2=(\sqrt{10})^2$$ and $$(x-2)^2+(y-6)^2=(\sqrt{10})^2$$ Hope this helped.
Let the equation be $$10=(x-a)^2+(y-b)^2$$ $$\implies10=(a+1)^2+(b-5)^2\ \ \ \ (1)$$ Again, since the line is tangent to the circle, the distance from the centre $(a,b)$ to the line equals the radius: $$\dfrac{3a+b-2}{\sqrt{3^2+1^2}}=\pm\sqrt{10}$$ $$3a+b-2=\pm10$$ Considering the '+' sign, $$3a+b-2=10\iff b=12-3a\ \ \ \ (2)$$ Use $(1),(2)$ to find $a$ and hence $b$. Similarly, consider the '-' sign.
How to solve for $N$ in $\prod\limits_{k=1}^{N}\left(1-\frac{k}{b}\right) = P$ As the title states, I'm trying to solve for $N$ in $$\displaystyle \prod_{k=1}^{N}\left(1-\frac{k}{b}\right) = P$$ where $0 < P < 1$ and $b > N$ and is an integer. I'm trying to solve this for extremely large values of $b$ and $N$ ($\sim 10^{50}$) so a brute-force numerical approach won't work in any reasonable time. I also can't make approximations for very small values of $P$. I'd like $P$ to be able to take on any value between $0$ and $1$. Can this be solved? If not are there certain approximations that can be made?
If $N$ is small compared to $b$, then each factor satisfies $1-\frac{k}{b} \approx e^{-k/b}$, so the product is approximately equal to $e^{-N(N+1)/(2b)}$, which gives $$\frac{N(N+1)}{2b} \approx -\ln P,$$ so $$N \approx \sqrt{-2b\ln P}.$$ More generally, rewrite your LHS as $$LHS = \prod_{k=1}^N \frac{b-k}{b} = \frac{(b-1)!}{(b-N-1)! b^N}.$$ Now use Stirling's approximation, to get $$LHS\approx \frac{\sqrt{2\pi(b-1)}\, (b-1)^{b-1}/e^{b-1}}{\sqrt{2\pi(b-N-1)}\, b^N (b-1-N)^{b-N-1}/e^{b-N-1}} = \sqrt{\frac{b-1}{b-N-1}}\frac{(b-1)^{b-1} e^{-N}}{b^N (b-N-1)^{b-N-1}}.$$ Now, you have to decide whether $b$ is a lot or a little bigger than $N.$ The first option is easier.
Possible definitions of exponential function I was wondering how many definitions of exponential functions can we think of. The basic ones could be: $$e^x:=\sum_{k=0}^{\infty}\frac{x^k}{k!}$$ also $$e^x:=\lim_{n\to\infty}\bigg(1+\frac{x}{n}\bigg)^n$$ or this one: Define $e^x:\mathbb{R}\rightarrow\mathbb{R}\\$ as unique function satisfying: \begin{align} e^x\geq x+1\\ \forall x,y\in\mathbb{R}:e^{x+y}=e^xe^y \end{align} Can anyone come up with something unusual? (Possibly with some explanation or references).
The exponential function is the unique solution of the initial value problem $y'(x)=y(x) , \quad y(0)=1$.
Here's an "unusual" one: $e$ is the positive real number such that $$ \sqrt{6\log\left(e\sqrt[4]{e}\sqrt[9]{e}\sqrt[16]{e}\sqrt[25]{e}\ldots\right)} = \pi. $$
Matrix inequality after taking inverse Let A and B be Positive definite Matrices with $ A\leq B$ in the sense that $B-A$ is positive definite. Is it true that $A^{-1} \geq B^{-1} $?
Consider $B-A\geq 0$ using Schur complement, this is equivalent to $$\begin{bmatrix}B&I\\I&A^{-1}\end{bmatrix}\geq 0, \quad B>0$$ Since $A^{-1}>0$, now apply the Schur complement one more time to obtain $$A^{-1}-I(B)^{-1}I=A^{-1}-B^{-1}\geq 0$$ therefore we have $A^{-1}\geq B^{-1}$.
It is true. If you have matrices with this conditions ,then you can start from $A≤B$. Now multiply with $A^{-1}$ on the left $$A^{-1} A ≤ A^{-1} B$$ Now you have the identity Matrix on the left. Multiply with $B^{-1}$ on the right and you will get $$B^{-1} ≤ A^{-1}$$
Where is the fault in my proof? I had some spare time, so I was just doing random equations, then accidentally came up with a proof that showed that i was -1. I know this is wrong, but I can't find where I went wrong. Could someone point out where a mathematical error was made? $$(-1)^{2.5}=-1\\ (-1)^{5/2}=-1\\ (\sqrt{-1})^5=-1\\ i^5=-1\\ i=-1$$
Your mistake is that you have "$(-1)^{5/2} = -1$". It actually holds that $(-1)^{5/2} = i$, since by Euler's identity $$(-1)^{5/2} = \left(e^{i\pi}\right)^{5/2} = e^{5i\pi/2} = i.$$ Furthermore you shouldn't write $\sqrt{-1} = i$, because the root isn't defined for negative values and you can get all sorts of wrong proofs by using the rules for square roots in combination with this notation.
$$(-1)^{2.5} =(-1)^2\times (-1)^{1/2}= \sqrt{(-1)} = i$$ is your mistake
Shortest path on a sphere I'm quite a newbie in differential geometry. Calculus is not my cup of tea ; but I find geometrical proofs really beautiful. So I'm looking for a simple - by simple I mean with almost no calculus - proof that the shortest path between two points on a sphere is the arc of the great circle on which they lie. Any hint ? Edit: Or at least a reference ?
Here's a geometric observation that can hardly be called a "proof", but may be appealing nonetheless. If $p$ and $q$ are distinct points of the sphere $S^{n}$, if $C:[0, 1] \to S^{n}$ is a "shortest path" joining $p$ to $q$, and if $F:S^{n} \to S^{n}$ is a distance-preserving map fixing $p$ and $q$, then $F \circ C$ is also a shortest path (because the length of $F \circ C$ is equal to the length of $C$). Assume $q \neq -p$. If you believe there exists a unique shortest path from $p$ to $q$, it's not difficult to see that the "short" great circle arc is the only candidate: Every point not on the great circle through $p$ and $q$ is moved by some isometry of the sphere that fixes $p$ and $q$. If you're thinking specifically of $S^{2}$, reflection $F$ in the plane containing $p$, $q$, and the center of the sphere is an isometry, and $f(x) = x$ if and only if $x$ lies on the great circle through $p$ and $q$. (A similar argument "justifies" that the shortest path between distinct points of the Euclidean plane is the line segment joining them.)
An image is worth a thousand words.... a movie a thousand images, and an equation a thousand movies..... :) Stare at the figure and try to draw a path shorter than that between $A$ and $B$, or $B$ and $C$, or any of the four vertices with any of the other three. As you said, you want a geometrical argument. It should not take long to convince you that the paths drawn on this figure (and the paths you can draw in your mind for the diagonals $A-C$ and $B-D$) are all connected to an arc that is centered at the origin. Those are parts of great circles.
Does $f(dx)$ have any meaning? Simple question, does a differential $dx$ have any meaning composed in a function $f$, such as $\sqrt{dx}$, where $f(x)\neq x$?
Sort of. There is a dual concept to differentials, that of a "tangent vector", which is not unreasonable to think of as a kind of infinitesimal. While $\mathrm{d}x$ is supposed to denote a differential, many unfortunately use the notation when they wish to speak of an infinitesimal. :( Anyways, if $f$ is differentiable at a point $a$ and $\epsilon$ is an infinitesimal, then we have $$ f(a+\epsilon) = f(a) + f'(a) \epsilon $$ Note this is a literal equality and not merely an approximation, as this kind of infinitesimal satisfies $\epsilon^2 = 0$.
Since $dx=\Delta x$, we may view $f(dx)$ as $f(\Delta x)$.
solve $\sin(3x - 4) = \cos(7x)$ I am attempting to solve the equation $$\sin(3x - 4) = \cos(7x),$$ with all numbers in degrees. My process is as such: $\cos(7x) = \sin(90 - 7x)$ $\sin(3x - 4) = \sin(90 - 7x)$ $3x - 4 = 90 - 7x + 360n$; (where $n$ is an integer and $360n$ is added due to cycling) $x = \frac{94}{10} + 36n$ However, when I graph, I see that there is another answer that I have not solved for, which is $\frac{133}{2} + 90n$. How can I achieve this answer?
Draw a graph of $\sin x$. You will quickly see that $\sin a = \sin b$ does not imply $a \equiv b \pmod{360}$ - just see $\sin 60 = \sin 120$. The rest is left to the reader.
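To complement the hint (this is my own addition, not part of the answers), one can check numerically, working in degrees, that both solution families satisfy the original equation:

```python
import math

def lhs(x):  # all angles in degrees
    return math.sin(math.radians(3 * x - 4))

def rhs(x):
    return math.cos(math.radians(7 * x))

# family 1: 3x - 4 = 90 - 7x + 360n          =>  x = 9.4 + 36n
# family 2: 3x - 4 = 180 - (90 - 7x) + 360n  =>  x = -23.5 - 90n, i.e. x = 66.5 + 90m
for n in range(-3, 4):
    assert abs(lhs(9.4 + 36 * n) - rhs(9.4 + 36 * n)) < 1e-9
    assert abs(lhs(66.5 + 90 * n) - rhs(66.5 + 90 * n)) < 1e-9
print("both families solve sin(3x - 4) = cos(7x)")
```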
$$\sin(3x-4)=\sin 3x\cos 4-\cos 3x\sin 4$$ and \begin{align}\cos(7x)+\cos 90&=2\cos\frac{90+7x}{2}\cos\frac{90-7x}{2}\\ &=2\left(\cos^2(45)-\sin^2(7x/2)\right)\\ &=1-2\sin^2(7x/2)\end{align} Now you have two sine expressions; you can compare the graphs from here, if you find the mixed cos and sin forms a bit difficult.
How do you find the second moment of the beta distribution? I'm required to show $ E(Y^2) = \dfrac{\alpha(\alpha + 1)}{(\alpha + \beta + 1)(\alpha + \beta)} $ for the beta distribution using the definition of expectation. Now so far I have $ \int\limits_0^1 {y^2 \dfrac{\Gamma\left( \alpha + \beta \right)}{\Gamma \left( \alpha \right)\Gamma \left( \beta \right)} y^{\alpha-1}(1-y)^{\beta-1} dy} $ and I simplified it so that I pulled the gamma constants out front of the integral and combined $ y^2y^{\alpha-1} $ to be $ y^{\alpha+1} $. I'm not too sure where to continue from here... can anyone help me out?
Try looking at the kernel of the integral - in other words, ignore all the constant factors, focus on the bits involving the variable you are integrating with respect to. Do you recognise it? It's a good idea whenever you see the PDF of a distribution to pay attention to what its kernel is too. So forget about its normalization factor. Now, if a PDF can be written as the product of a normalizing factor $N$ and a kernel $k(x)$, then because I know: $$\int_{-\infty}^{\infty}f_X(x)dx=N\int_{-\infty}^{\infty}k(x)dx=1$$ I also know that: $$\int_{-\infty}^{\infty}k(x)dx=\frac{1}{N}$$ So, learn to recognize your PDF kernels! If you see an integral with the same kernel, but a different constant factor, then you can easily evaluate it: $$\int_{-\infty}^{\infty}A\cdot k(x)dx=\frac{A}{N}$$ Extra hint: the beta distribution has support [0, 1] so we only need our integrals to have those limits. If $X \sim Beta (\alpha,\, \beta)$ then $f_X(x)=\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha -1}(1-x)^{\beta -1}$ so the kernel is just $x^{\alpha -1}(1-x)^{\beta -1}$. You have $ \int\limits_0^1 {y^2 \dfrac{\Gamma\left( \alpha + \beta \right)}{\Gamma \left( \alpha \right)\Gamma \left( \beta \right)} y^{\alpha-1}(1-y)^{\beta-1} dy} $ which has kernel $y^{\alpha+1}(1-y)^{\beta-1} $. Which PDF is this the kernel of, and with what parameters?
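Not part of the answer, but as a sketch (assuming SciPy; the function name `second_moment_numeric` is mine), one can confirm the target formula numerically by integrating $y^2$ against the Beta pdf:

```python
from scipy.integrate import quad
from scipy.special import gamma

def second_moment_numeric(a, b):
    """E[Y^2] for Y ~ Beta(a, b), by integrating y^2 times the pdf over [0, 1]."""
    pdf = lambda y: gamma(a + b) / (gamma(a) * gamma(b)) * y**(a - 1) * (1 - y)**(b - 1)
    return quad(lambda y: y**2 * pdf(y), 0, 1)[0]

for a, b in [(2.0, 3.0), (0.5, 5.0), (4.0, 1.5)]:
    formula = a * (a + 1) / ((a + b + 1) * (a + b))
    print(f"alpha={a}, beta={b}: numeric={second_moment_numeric(a, b):.6f}, formula={formula:.6f}")
```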
You have to use the recurrence formula. The first raw moment is $\mu = \dfrac{B(a+1,b)}{B(a,b)} = \dfrac{a}{a+b}$. The second raw moment is $M_{20} = \dfrac{B(a+2,b)}{B(a,b)} = \dfrac{B(a+2,b)}{B(a+1,b)}\cdot\dfrac{B(a+1,b)}{B(a,b)} = \dfrac{a+1}{a+1+b}\cdot\dfrac{a}{a+b}$. If you need the variance, $V = M_{20} - \mu^2$. This is a present from Agustín F. Correa from Argentina.
The derivative of $e^x$ using the definition of derivative as a limit and the definition $e^x = \lim_{n\to\infty}(1+x/n)^n$, without L'Hôpital's rule Let's define $$ e^x := \lim_{n\to\infty}\left(1+\frac{x} {n}\right)^n, \forall x\in\Bbb R $$ and $$ \frac{d} {dx} f(x) := \lim_{\Delta x\to0} \frac{f(x+\Delta x) - f(x)} {\Delta x} $$ Prove that $$ \frac{d} {dx} e^x = e^x $$ using the definitions of $e^x$ and of the derivative above, without using L'Hôpital's rule or the "logarithm trick" and/or the "inverse function derivative trick". $$ \left( \frac{d} {dx} f^{-1}(x)= \frac{1} {\left(\frac{d}{d(f^{-1}(x))}f(x)\right)(f^{-1}(x))}\right) $$ Or equivalently prove that the following two definitions of $e$ are equivalent: $$ 1)\space\space\space\space e =\lim_{n\to\infty}(1+\frac{1} {n})^n $$ $$ 2) \space\space\space\space e\in\Bbb R,\space\space(\frac{d} {dx} e^x)(x=0) = 1 $$ What I've got is $$ \frac{d}{dx}e^x=e^x \lim_{\Delta x\to0}\frac{e^{\Delta x} - 1} {\Delta x} = e^x \lim_{\Delta x\to0}\lim_{n\to\infty}\frac{\left(1+\frac{\Delta x}{n}\right)^{n}-1}{\Delta x} = e^x \lim_{\Delta x\to0}\frac{e^{0+\Delta x}-e^0}{\Delta x} $$ If I assume that $n\in\Bbb N$ I could use the binomial theorem, but I didn't get much out of it. WolframAlpha just uses L'Hôpital's rule to solve it, but I am looking for an elementary solution. What I'm interested in is basically the equivalence of the two definitions of $e$ mentioned above. And I'd like to get a direct proof rather than an indirect one (I mean one which involves logarithms or the derivatives of inverse functions). I look forward to getting your answers.
One needs to establish the following two properties: 1) $\displaystyle \lim_{x \to 0}\frac{e^{x} - 1}{x} = 1$ 2) $\displaystyle e^{x + y} = e^{x}\cdot e^{y}$ and it turns out that both of these can be derived (albeit with some minor difficulty) using the definition $$e^{x} = \lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^{n}$$ We start with 1) first and that too with limitation $x \to 0+$. We have $\displaystyle \begin{aligned}\lim_{x \to 0+}\frac{e^{x} - 1}{x} &= \lim_{x \to 0+}\dfrac{{\displaystyle \lim_{n \to \infty}\left(1 + \dfrac{x}{n}\right)^{n} - 1}}{x}\\ &= \lim_{x \to 0+}\lim_{n \to \infty}\frac{1}{x}\left\{\left(1 + \dfrac{x}{n}\right)^{n} - 1\right\}\\ &= \lim_{x \to 0+}\lim_{n \to \infty}\frac{1}{x}\left\{\left(1 + x + \dfrac{(1 - 1/n)}{2!}x^{2} + \dfrac{(1 - 1/n)(1 - 2/n)}{3!}x^{3} + \cdots\right) - 1\right\}\\ &= \lim_{x \to 0+}\lim_{n \to \infty}\left(1 + \dfrac{(1 - 1/n)}{2!}x + \dfrac{(1 - 1/n)(1 - 2/n)}{3!}x^{2} + \cdots\right)\\ &= \lim_{x \to 0+}\lim_{n \to \infty}(1 + \phi(x, n))\end{aligned}$ where $\phi(x, n)$ is a finite sum defined by $$\phi(x, n) = \frac{(1 - 1/n)}{2!}x + \cdots + \frac{(1 - 1/n)(1 - 2/n)\cdots(1 - (n - 1)/n)}{n!}x^{n - 1}$$ For fixed positive $x$ the function $\phi(x, n)$ is a increasing sequence bounded by the convergent series $$F(x) = \frac{x}{2!} + \frac{x^{2}}{3!} + \cdots$$ Hence the limit $\lim_{n \to \infty}\phi(x, n)$ exists and let say it is equal to $\phi(x)$. Then $0 \leq \phi(x) \leq F(x)$. Now let $x < 2$ and then we can see that $$F(x) \leq \frac{x}{2} + \frac{x^{2}}{2^{2}} + \frac{x^{3}}{2^{3}} + \cdots = \frac{x}{2 - x}$$ Hence $\lim_{x \to 0+}F(x) = 0$ and therefore $\lim_{x \to 0+}\phi(x) = 0$. We now have $\displaystyle \begin{aligned}\lim_{x \to 0+}\frac{e^{x} - 1}{x} &= \lim_{x \to 0+}\lim_{n \to \infty}1 + \phi(x, n)\\ &= \lim_{x \to 0+}1 + \phi(x)\\ &= 1 + 0 = 1\end{aligned}$ From this it follows that $\lim_{x \to 0+}e^{x} = 1$. To handle the case for $x \to 0-$ we need to use another trick. We show that for $x > 0$ we have $$\lim_{n \to \infty}\left(1 - \frac{x}{n}\right)^{-n} = e^{x}$$ Clearly we have $\displaystyle \begin{aligned}\left(1 - \frac{x}{n}\right)^{-n} - \left(1 + \frac{x}{n}\right)^{n} &= \left(1 + \frac{x}{n}\right)^{n}\left\{\left(1 - \frac{x^{2}}{n^{2}}\right)^{-n} - 1\right\}\\ &< e^{x}\left\{\left(1 - \frac{x^{2}}{n}\right)^{-1} - 1\right\} = \frac{x^{2}e^{x}}{n - x^{2}}\end{aligned}$ and this last expression tends to $0$ as $n \to \infty$ and hence $$\lim_{n \to \infty}\left(1 - \frac{x}{n}\right)^{-n} = \lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^{n} = e^{x}$$ Taking reciprocals we see that $$\lim_{n \to \infty}\left(1 - \frac{x}{n}\right)^{n} = \frac{1}{e^{x}}$$ or in other words $e^{-x} = 1/e^{x}$ for $x > 0$ and by duality it holds for $x < 0$ also. Thus we can see that if $x \to 0-$ then we can write $x = -y$ so that $y \to 0+$ and then $\displaystyle \begin{aligned}\lim_{x \to 0-}\frac{e^{x} - 1}{x} &= \lim_{y \to 0+}\frac{e^{-y} - 1}{-y}\\ &= \lim_{y \to 0+}\frac{e^{y} - 1}{y}\frac{1}{e^{y}} = 1\cdot 1 = 1\end{aligned}$ Thus we have established two properties of $e^{x}$ namely $$\lim_{x \to 0}\frac{e^{x} - 1}{x} = 1,\,\, e^{-x} = \frac{1}{e^{x}}$$ The second property allows us to consider only positive arguments of the exponential function. Thus to establish the fundamental property $e^{x + y} = e^{x} \cdot e^{y}$ we need to consider $x, y > 0$ (for $x = y = 0$ it is obviously true). 
We can see that $\displaystyle \begin{aligned} f(x, y, n) &= \left(1 + \frac{x}{n}\right)^{n}\left(1 + \frac{y}{n}\right)^{n} - \left(1 + \frac{x + y}{n}\right)^{n}\\ &= \left(1 + \frac{x + y}{n} + \frac{xy}{n^{2}}\right)^{n} - \left(1 + \frac{x + y}{n}\right)^{n}\\ &= \left(1 + \frac{x + y}{n}\right)^{n}\left\{\left(1 + \frac{xy}{n(n + x + y)}\right)^{n} - 1\right\}\\ &< e^{x + y}\left\{\left(1 + \frac{xy}{n^{2}}\right)^{n} - 1\right\}\\ &= e^{x + y}\left\{\frac{xy}{n} + \frac{(1 - 1/n)}{2!}\left(\frac{xy}{n}\right)^{2} + \cdots\right\}\\ &< e^{x + y}\left\{\frac{xy}{n} + \left(\frac{xy}{n}\right)^{2} + \cdots\right\}\\ &= e^{x + y}\frac{xy}{n - xy}\end{aligned}$ This shows that for fixed $x, y > 0$ the function $f(x, y, n) \to 0$ as $n \to \infty$. And therefore we have established $e^{x}e^{y} - e^{x + y} = 0$. Now we can easily show that $\displaystyle \begin{aligned}\frac{d}{dx}e^{x} &= \lim_{h \to 0}\frac{e^{x + h} - e^{x}}{h}\\ &= \lim_{h \to 0}e^{x}\cdot\frac{e^{h} - 1}{h} = e^{x} \cdot 1 = e^{x}\end{aligned}$
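Not part of the proof above, just a small numerical illustration (in Python, with names of my own choosing) of the two properties that were derived from the limit definition:

```python
import math

def e_approx(x, n):
    """(1 + x/n)^n, a finite stage of the limit definition of e^x."""
    return (1 + x / n) ** n

n = 10**7
x, y = 0.7, 1.3
# multiplicativity e^{x+y} = e^x e^y, up to the truncation error of finite n
print(e_approx(x, n) * e_approx(y, n), e_approx(x + y, n), math.exp(x + y))

# the difference quotient at 0: (e^h - 1)/h -> 1
for h in (1e-1, 1e-3, 1e-5):
    print(h, (math.exp(h) - 1) / h)
```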
If I am understanding the question correctly, I believe this should do it.
Proof By Induction - Divisibility by $7$. I am attempting to prove by induction that $4^{k+1}+5^{2k-1}$ is divisible by $7$ for every positive integer $k$. But in my attempted inductive step I am not sure where I am going wrong: the expression I end up with is not divisible by $7$.
HINT: You have to prove the truth of $p(k+1)$ using $p(k)$, so you have to take something out of $p(k)$ and then apply it to $p(k+1)$ to establish its truth. You have assumed that $p(k)$ is true, so $4^{k+1}+5^{2k-1}$ must be divisible by 7; say it is $7m$ where $m$ is an integer. So you get $4^{k+1}+5^{2k-1}=7m$. Some flipping will give you $5^{2k-1}=7m-4^{k+1}$. Now what does $p(k+1)$ look like? It will look like $4^{k+2}+5^{2k+1}$. If we prove that $4^{k+2}+5^{2k+1}$ is divisible by $7$ then we are done. Try using $5^{2k-1}=7m-4^{k+1}$ to proceed further.
Before considering your proof, let's gather some insight from a simpler proof using congruences. Below the inductive step follows very simply by using $\,\rm{\color{#C00}{CPR}} = $ Congruence Product Rule to multiply the first two congruences $$\begin{align} {\rm mod}\,\ 7\!:\qquad\ 4\,\ &\equiv\,\ 5^{\large 2}\\[0.3em] 4^{\large K+1}&\equiv -5^{\large 2K-1}\ \ \ {\rm i.e.}\ \ P(K)\\ \overset{\rm{\color{#C00}{CPR}}}\Longrightarrow\ \ \ 4^{\large K+2}&\equiv -5^{\large 2K+1}\ \ \ {\rm i.e.}\ \ P(K\!+\!1) \end{align}$$ The common inductive proofs using divisibility in other answers effectively do the same thing, i.e. they repeat the proof of the Congruence Product Rule in this special case, but expressed in divisibility vs. congruence language (e.g. see here). But the product rule is much less arithmetically intuitive when expressed as unstructured divisibilities, which greatly complicates the discovery of the inductive step. I explain this at length in other answers, e.g. see here. If congruences are unfamiliar then you can instead use the rule in divisibility form as below. This will allow you to structure the induction in the above intuitive arithmetical Product Rule form. $$\begin{align} {\rm mod}\,\ m\!:\, A\equiv a,\, B\equiv b&\ \ \,\Longrightarrow\,\ \ AB\equiv ab\qquad\text{Congruence Product Rule}\\[3pt] m\mid A-a,\ B-b&\,\Rightarrow\, m\mid AB-ab\qquad\text{Divisibility Product Rule}\\[4pt] {\bf Proof}\quad (A-a)B+a(B&-b)\, = AB-ab\end{align}$$ To finish the proof that you started we can proceed as follows $$\begin{align} f(k\!+\!1) - f(k) &=\, 3 \cdot 4^{\large k+1}+ \color{#0a0}{24}\cdot 5^{2k-1}\\ &=\, 3 \cdot 4^{\large k+1}+ \color{#0a0}3 \cdot 5^{2k-1} + \color{#0a0}{21}\cdot 5^{2k-1}\\ &=\, 3\, f(k) + 7n\\ \Rightarrow\qquad f(k\!+\!1)\, &=\, 4\, f(k) + 7n\\[0.3em] \Rightarrow\ \ 7\mid f(k\!+\!1)\,\ &{\rm if}\,\ 7\mid f(k), \ \ {\rm i.e.}\ \ P(k\!+\!1)\ \ {\rm if}\ \ P(k) \end{align}$$ Note that the above says that $\ f(k\!+\!1)\equiv 4\,f(k)\ \pmod{7}\,$ so an easy induction shows that $\ f(k)\equiv 4^{\large k-1}\, f(1)\pmod 7,\ $ so $\ 7\mid f(k)\iff 7\mid f(1)$. Note how using the Product Rule as above makes it much clearer that incrementing the index amounts simply to multiplication by $\,4,\,$ when viewed modulo $\,7.\,$ Once that innate arithmetical structure has been revealed, the proof is easy.
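A quick brute-force check (my own addition, not part of either answer) that the statement and the recurrence used in the inductive step both hold:

```python
# f(k) = 4^(k+1) + 5^(2k-1); check divisibility by 7 and the step
# f(k+1) = 4*f(k) + 7*(3*5^(2k-1)) for a range of k (not a proof, of course).
def f(k):
    return 4 ** (k + 1) + 5 ** (2 * k - 1)

for k in range(1, 50):
    assert f(k) % 7 == 0
    assert f(k + 1) == 4 * f(k) + 7 * 3 * 5 ** (2 * k - 1)
print("f(k) is divisible by 7 for k = 1..49 and the recurrence holds")
```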
Is $\{(a, b, c) \in \mathbb{C}^3 : a^3 = b^3\}$ a subspace of $\mathbb{C}^3$ I have two questions to solve: Is $\{(a, b, c) \in \mathbb{R}^3 : a^3 = b^3\}$ a subspace of $\mathbb{R}^3$? Is $\{(a, b, c) \in \mathbb{C}^3 : a^3 = b^3\}$ a subspace of $\mathbb{C}^3$? For the first one, I proved it is. Then for the second part, I found almost no difference. So I am not sure if I am on the right track. I mean, I believe there must be some differences. But I can't figure it out.
Try with $(e^{i\frac{\pi}{3}}, -1, 0)$ and $(1,1,0)$. Their sum won't satisfy the property $a^{3}=b^{3}$.
Can I multiply matrices in any order? I am trying to complete some fairly simple matrix algebra for a homework task. Is it possible to multiply matrices in any order? Is the method below correct? Thanks
In general, for matrices $A$ and $B$, $$AB \neq BA,$$ so we can't change the order of the matrices in an expression. However, $$(AB)C=A(BC)=ABC,$$ so we can perform the multiplication operations in any order. This is because we can remove all the brackets from the expression for a sequence of multiplications by repeatedly applying the rule. For example: $$(A((BC)(DE)))F$$ $$=A((BC)(DE))F$$ $$=A(BC)(DE)F$$ $$=ABC(DE)F$$ $$=ABCDEF$$ However the terms were originally grouped, we end up with $ABCDEF$.
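A tiny numerical illustration of this point (assuming NumPy; not part of the original answer): the grouping of the products can be changed freely, while the order of the factors cannot.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.normal(size=(3, 3)) for _ in range(3))

# associativity: the order of the *operations* does not matter
assert np.allclose((A @ B) @ C, A @ (B @ C))

# non-commutativity: the order of the *factors* does matter
print(np.allclose(A @ B, B @ A))   # almost surely False for random matrices
```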
The algebraic explanation is that the set of $n\times n$-matrices $M_n(R)$ over a commutative ring $R$ (entries) forms a ring with unity (unit matrix). This includes that the multiplicative monoid is associative, i.e., $A(BC) = (AB)C$ for any matrices $A,B,C\in M_n(R)$.
What are the issues in modern set theory? This is spurred by the comments to my answer here. I'm unfamiliar with set theory beyond Cohen's proof of the independence of the continuum hypothesis from ZFC. In particular, I haven't witnessed any real interaction between set-theoretic issues and the more conventional math I've studied, the sort of place where you realize "in order to really understand this problem in homotopy theory, I need to read about large cardinals." I've even gotten the feeling from several professional mathematicians I've talked to that set theory is no longer relevant, and that if someone were to find some set-theoretic flaw in their axioms (a non-standard model or somesuch), they would just ignore it and try again with different axioms. I also don't personally care for the abstraction of set theory, but this is a bad reason to judge anything, especially at this early stage in my life, and I feel like I'd be more interested if I knew of some ways it interacted with the rest of the mathematical world. So: What do set theorists today care about? How does set theory interact with the rest of mathematics? (more subjective but) Would mathematicians working outside of set theory benefit from thinking about large cardinals, non-standard models, or their ilk? Could you recommend any books or papers that might convince a non-set theorist that the subject as it's currently practiced is worth studying? Thanks a lot!
Set theory today is a vibrant, active research area, characterized by intense fundamental work both on set theory's own questions, arising from a deep historical wellspring of ideas, and also on the interaction of those ideas with other mathematical subjects. It is fascinating and I would encourage anyone to learn more about it. Since the field is simply too vast to summarize easily, allow me merely to describe a few of the major topics that are actively studied in set theory today. Large cardinals. These are the strong axioms of infinity, first studied by Cantor, which often generalize properties true of $\omega$ to a larger context, while providing a robust hierarchy of axioms increasing in consistency strength. Large cardinal axioms often express combinatorial aspects of infinity, which have powerful consequences, even low down. To give one deep example, if there are sufficiently many Woodin cardinals, then all projective sets of reals are Lebesgue measurable, a shocking but very welcome situation. You may recognize some of the various large cardinal concepts---inaccessible, Mahlo, weakly compact, indescribable, totally indescribable, unfoldable, Ramsey, measurable, tall, strong, strongly compact, supercompact, almost huge, huge and so on---and new large cardinal concepts are often introduced for a particular purpose. (For example, in recent work Thomas Johnstone and I proved that a certain forcing axiom was exactly equiconsistent with what we called the uplifting cardinals.) I encourage you to follow the Wikipedia link for more information. Forcing. The subject of set theory came to maturity with the development of forcing, an extremely flexible technique for constructing new models of set theory from existing models. If one has a model of set theory $M$, one can construct a forcing extension $M[G]$ by adding a new ideal element $G$, which will be an $M$-generic filter for a forcing notion $\mathbb{P}$ in $M$, akin to a field extension in the sense that every object in $M[G]$ is constructible algebraically from $G$ and objects in $M$. The interaction of a model of set theory with its forcing extensions provides an extremely rich, intensely studied mathematical context. Independence Phenomenon. The initial uses of forcing were focused on proving diverse independence results, which show that a statement of set theory is neither provable nor refutable from the basic ZFC axioms. For example, the Continuum Hypothesis is famously independent of ZFC, but we now have thousands of examples. Although it is now the norm for statements of infinite combinatorics to be independent, the phenomenon is particularly interesting when it is shown that a statement from outside set theory is independent, and there are many prominent examples. Forcing Axioms. The first forcing axioms were often viewed as unifying combinatorial assertions that could be proved consistent by forcing and then applied by researchers with less knowledge of forcing. Thus, they tended to unify much of the power of forcing in a way that was easily employed outside the field. For example, one sees applications of Martin's Axiom undertaken by topologists or algebraists. Within set theory, however, these axioms are a focal point, viewed as expressing particularly robust collections of consequences, and there is intense work on various axioms and finding their large cardinal strength. Inner model theory. 
This is a huge on-going effort to construct and understand the canonical fine-structural inner models that may exist for large cardinals, the analogues of Gödel's constructible universe $L$, but which may accommodate large cardinals. Understanding these inner models amounts in a sense to the ability to take the large cardinal concept completely apart and then fit it together again. These models have often provided a powerful tool for showing that other mathematical statements have large cardinal strength. Cardinal characteristics of the continuum. This subject is concerned with the diverse cardinal characteristics of the continuum, such as the size of the smallest non-Lebesgue measurable set, the additivity of the null ideal or the cofinality of the order $\omega^\omega$ under eventual domination, and many others. These cardinals are all equal to the continuum under CH, but separate into a rich hierarchy of distinct notions when CH fails. Descriptive set theory. This is the study of various complexity hierarchies at the level of the reals and sets of reals. Borel equivalence relation theory. Arising from descriptive set theory, this subject is an exciting comparatively recent development in set theory, which provides a precise way to understand what otherwise might be a merely informal understanding of the comparative difficulty of classification problems in mathematics. The idea is that many classification problems arising in algebra, analysis or topology turn out naturally to correspond to equivalence relations on a standard Borel space. These relations fit into a natural hierarchy under the notion of Borel reducibility, and this notion provides us with a way to say that one classification problem in mathematics is at least as hard as or strictly harder than another. Researchers in this area are deeply knowledgable both about set theory and also about the subject area in which their equivalence relations arise. Philosophy of set theory. Lastly, let me also mention the emerging subject known as the philosophy of set theory, which is concerned with some of the philosophical issues arising in set theoretic research, particularly in the context of large cardinals, such as: How can we decide when or whether to adopt new mathematical axioms? What does it mean to say that a mathematical statement is true? In what sense is there an intended model of the axioms of set theory? Much of the discussion in this area weaves together profoundly philosophical concerns with extremely technical mathematics concerning deep features of forcing, large cardinals and inner model theory. Remark. I see in your answer to the linked question you mentioned that you may not have been exposed to much set theory at Harvard, and I find this a pity. I would encourage you to look beyond any limiting perspectives you may have encountered, and you will discover the rich, fascinating subject of set theory. The standard introductory level graduate texts would be Jech's book Set Theory and Kanamori's book The Higher Infinite, on large cardinals, and both of these are outstanding. I apologize for this too-long answer...
I find this talk of some area of math being disconnected from another area, and of that being a bad thing, truly bizarre. Not many areas of mathematics pop up in other areas in truly significant ways. Mathematical concepts do, of course: groups show up almost everywhere, as does the concept of continuity, etc. But so do the concepts of union, subset, quotient, projection, the idea that a property restricted to a domain defines an object, the concept of infinity, Borel sets, reducibility between two mathematical structures, the complexity of a mathematical classification, and so on. These are concepts that come from mathematical logic and that are everywhere. Everyone uses them, and some use them without giving proper credit to the role logicians have played in the development of these concepts. I know many algebraists in my department who never seem to use Birkhoff's Ergodic Theorem; in fact many are not even aware of it. Does this mean ergodic theory is useless? Does every person in analysis use sophisticated cancellation theories from group theory? Most likely not; they often use the concept of a group and work with that concept, but I would be very surprised if small cancellation theory were everywhere in analysis. Set theory is no different. Our basic concepts and operations (unions, intersections, projections, infinity, and so on) are everywhere, and we have sophisticated theories that sometimes show up in interesting places (like recently in C^* algebras and even in theoretical financial math, when building certain kinds of limits); over time they will show up in more and more places, as set theory is still very young. It is difficult to understand what makes one feel the need to express oneself in this way towards any subject.
find a topology where the sequence $\left(\frac1n\right)$ converges to $1$ Can someone help me to find a topology where the sequence $\left(\frac1n\right)$ converges to a unique value, namely $1$? I was thinking of the trivial topology $(X,T)$, but it's not the wanted topology because the sequence converges to every point of $X$.
Paint a gigantic "$0$" symbol on $1$, and paint a gigantic "$1$" symbol on $0$. Now use the standard topology, except use the symbols you painted on $0$ and $1$ instead of the usual meanings of $0$ and $1$. To put this another way, let $f : \mathbb{R} \to \mathbb{R}$ be the function defined by $f(0)=1$, $f(1)=0$, and $f(x)=x$ if $x \ne 0,1$. Define $U \subset \mathbb{R}$ to be open in the topology $T$ if and only if $f(U)$ is open in the standard topology.
In the topological space (*) induced by the metric $$d(x,y) = \inf_{n\in \mathbb{Z}} |x-y-n|, \qquad (x,y) \in (0,1]^2$$ then $$\lim_{n \to \infty} d(1,\frac1n) =0$$ (*) Called the circle $\mathbb{S}_1$ or $\mathbb{R}/\mathbb{Z}$
A geometry problem with a cube (solid geometry) I ask you to solve this problem, because the book and I have different answers. Point $M$ is the midpoint of the edge $AB$ of the cube $ABCDA_1B_1C_1D_1$. Find the distance between the straight lines $A_1M$ and $B_1C$, given that the edge of the cube is equal to $a$. Please help me. It's translated from Russian and it's the full context of the problem. The answer in the book is $(\sqrt{6}/3)a$, but my answer is $2a\sqrt3$.
Let $H$ be the point on $A_1M$ such that $A_1H/HM=2$, and $K$ be the point on $B_1C$ such that $CK/KB_1=2$. Then it is not difficult to show that segment $HK$ is perpendicular to both $A_1M$ and $B_1C$ and is thus the distance between the lines. To compute the length of $HK$ it may be useful to note that $HK={2\over3}GF$, where $G$ and $F$ are the midpoints of $A_1A$ and $B_1C_1$ respectively.
HINTS Take the cube edges as parallel to the x-, y-, z-axes. Use vectors to find direction vectors of the two lines. Normalize for length. The minimum distance lies along their cross product; use it to find the distance between the skew lines.
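Following that hint, here is a sketch of the computation (assuming coordinates $A=(0,0,0)$, $B=(a,0,0)$, $C=(a,a,0)$, $A_1=(0,0,a)$, $B_1=(a,0,a)$ and NumPy; these coordinate choices are mine), which reproduces the book's value $\frac{\sqrt6}{3}a$:

```python
import numpy as np

a = 1.0  # edge length; the distance scales linearly with a

A  = np.array([0, 0, 0]); B  = np.array([a, 0, 0]); C = np.array([a, a, 0])
A1 = np.array([0, 0, a]); B1 = np.array([a, 0, a])
M  = (A + B) / 2                      # midpoint of AB

d1 = M - A1                           # direction of line A1M
d2 = C - B1                           # direction of line B1C
n  = np.cross(d1, d2)                 # common perpendicular direction

dist = abs(np.dot(B1 - A1, n)) / np.linalg.norm(n)
print(dist, a * np.sqrt(6) / 3)       # both approximately 0.8165
```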
Paradox in Theory of probability of transition to polar coordinates Let there are two independent random variables $X$,$Y$ with normal distribution. Vector $(X, Y)$ can be considered as a random point on the plane. Let $R$ and $\phi$ polar coordinates of this point. assuming that $X =Y $ we get that the distribution $R^2 = 2X^2$ coincides with the distribution of the square of a random quantity multiplied by 2. At the same time, provided that $\phi = \pi/4$ or $\phi = 5\pi/4$ distribution of random variables $R^2 = X^2 + Y^2$ is the same as the distribution of the sum of squares of two independent standard normal values. Therefore, we got different distributions when $X=Y$ and when $\phi = \pi/4$ or $\phi = 5\pi/4$ and it's a paradox. What's the catch?
The catch is that as soon as you assumed that $X=Y$, they are not independent anymore.
Dirichlet Function Pointwise Convergence We say that a function $f$ is Baire Class $1$ if there is a sequence of functions $f_i \to f$ pointwise where each $f_i$ is continuous. The set of discontinuities of a Baire Class $1$ function $f$ must be a first-category (meagre) set of points. The Dirichlet function $f(x) = \begin{cases} 1 & x \in \mathbb Q \\ 0 & x \notin \mathbb Q \end{cases}$ is not Baire class $1$ since it is discontinuous everywhere. But can't we write the function as $f(x) = \lim_{i\to\infty} \cos(i!\pi x)^{2i}$, where each $f_i(x) = \cos(i!\pi x)^{2i}$ is a continuous function, so that the $f_i$ converge pointwise to the Dirichlet function? What is wrong with this?
Let $0\le n_1<n_2<...<n_k<\dots$ and define $$x=2\sum_{j=1}^\infty \frac{1}{n_j!}$$ Basically, we will pick $\{n_i\}$ to grow "fast enough" so that $\limsup_i \left(\cos n_i!\pi x\right)^{2n_i}>0$ Now $n_i!x = 2K + 2\sum_{j>i} \frac{n_i!}{n_j!}$ for some integer $K$. And:$$0\leq \sum_{j>i} \frac{n_i!}{n_j!} < \frac{2}{n_i^{n_j-n_i}}(1+\frac{1}{n_j}+\frac{1}{n_j^2}+..) = \frac{2}{n_i^{n_j-n_i}}\frac{n_j}{n_j-1}$$ So $n_i!x\pi = 2K\pi + z$ where $0\leq z < \frac{2\pi}{n_i^{n_j-n_i}}\frac{n_j}{n_j-1}$. It's gonna be a little messy, but if you choose a reasonable sequence of $n_i$, then you'll be able to show that $$\limsup_{i\to\infty}\left(\cos n_i!\pi x\right)^{2n_i}>0 $$ In particular then, $\left(\cos j!\pi x\right)^{2j}$ cannot converge to zero.
The set of discontinuities of a Baire Class 1 function $f$ must be a first-category (meagre) set of points; hence the Dirichlet function is not Baire 1. However, $f(x)=\lim_{i\to\infty} \lim_{j\to\infty} \left(\cos(i!\pi x)^{2j}\right)$, which shows that the Dirichlet function is a Baire class 2 function.
Why is the complex domain of cosine naturally a sphere? Near the end of this MAA piece about elliptic curves, the author explains why the complex domain of the cosine function is a sphere: since it's periodic, its domain can be taken as a cylinder, wrapping up the real axis. And because cosine of $\theta\pm i\infty$ is $\infty$, the two ends of the cylinder can be identified with a single point $\infty$. Ok, great, but this sounds to me like a pinched torus. Can I have a clearer explanation why this is a sphere?
Let me start by saying you are right in a sense, and I think the article is at best being unclear, but the larger point that the equation of ellipse cuts out a sphere is also right. There are a few things going on, hence a long answer below. What one is trying to describe is the topology of a set defined by a quadratic equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$. Of course it is crucially important to explain what values $x$ and $y$ are allowed to take. In the context to the article, that means whether one allows only real values or complex ones as well (this is the question of choosing the field of coefficients), and whether one allows infinite values and in what sense (this is a choice of compactification). Both of these choices affect the topology of the solution set. If only finite real values are allowed the result is the familiar ellipse in the plane (which has topology of a circle); now one has various options about how to treat adding infinite values of $x$ and $y$. However the (real) ellipse does not go to infinity in any way (it stays in finite part of the plane; or, in other words, it is already compact so needs no compactification; algebraically, no real vector with large modulus can solve $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$). Thus for this equation over real numbers the second choice is irrelevant. However (spoiler!) it is not irrelevant over complex numbers. To better see what's going on let's stick to the real variables for now, but consider the equation $uw=1$ instead. The solutions to this in $\mathbb{R}^2$ define a hyperbola and this has topology of 2 lines. Now if we compactify $\mathbb{R}^2$ to the sphere - treating all points at infinity as one - we get one extra point, and the topology of solution set is a wedge of two circles. If we treat $u$-infinities and $v$-infinities as different (compactify each $\mathbb{R}$ to $S^1$, so $\mathbb{R}^2$ is compactified to $S^1\times S^1$), then we get 2 extra points ("$(\infty, 0)$" and "$(0, \infty)$") and a topology of a single circle. We can also compactify $\mathbb{R}^2$ to $\mathbb{R}P^2$ (and still get a circle), or to $\mathbb{D}^2$ (and get two closed segments) or any other number of things, some more natural than others. To finish with the discussion over $\mathbb{R}$, note that if we consider $(u-v)(u+v)=1$ instead of $uv=1$, the two curves in $\mathbb{R}^2$ are isomorphic, but in the compactification $S^1\times S^1$ we get now a wedge of two circles instead of a single circle. This is because the coordinate change of $\mathbb{R}^2$ given by $u_n=u-v$, $v_n=u+v$ taking curve $(u-v)(u+v)=1$ to $u_n v_n=1$ does not extend to these compactifications. This never happens for $\mathbb{R}P^2$, which is one big reason why this is usually the prefered choice of compactification. Now over complex numbers, the equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ has a solution set in $\mathbb{C}^2$ which is a cylinder (aka tangent bundle of $S^1$). Then we can compactify by adding one point for "all infinity" and get a "pinched torus" with both infinities of the cylinder filled in by that one extra point; or by adding a point at infinity to each coordinate separately - this is what is done in the article - and still get a "pinched torus", since $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ forces $x$ to have infinite modulus as soon as $y$ does and vice versa, so the only added point is $(\infty, \infty)$. However, if we compactify to $\mathbb{C}P^2$ we get two extra points ($[a:ib:0]$ and $[a:-ib:0]$), and topology of a sphere. 
This is in a sense the "right" compactification of the image, sitting inside the "right compactification" $\mathbb{C}P^2$. Finally, we should note that this is not so much about the domain of $\cos$ but more about the image of $z \to (a\cos z, b\sin z)$ in $\mathbb{C}^2$ and its compactifications. Whether we can extend $\cos z$ or $\sin z$ or $(\cos z, \sin z)$ to a map of some compactification of $\mathbb{C}$ to somewhere depends on where (which extension/compactification of $\mathbb{C}$ or $\mathbb{C}^2$) we are mapping to, and does not really have so much intrinsic meaning.
Trigonometric functions are circular functions. The complex cosine is $\cos z = \frac{e^{iz} + e^{-iz}}{2}$, where $z = x + iy$, with $x$ and $y$ real numbers and $z$ a complex number. Suppose we restrict $z$ so that $|z| = 1 = |x + iy|$. Hence, the image of $z$ is the unit circle in the complex plane. What if we were to look at the map $X = (x, y)$? Let $X = (0, 0)$ correspond to an origin. Then, apparently, the image is $\cos(z) = \cos(0) = 1$. The unit sphere has radius $1$. Consider any rotation of the plane we choose to call the complex plane passing through any point we have chosen to call the origin, namely $X = (0, 0)$, with the property $|z| = 1$. Again, this is the unit circle on that particular complex plane. When $|z| = r$ for any non-negative $r$, we have an entire complex plane which may be mapped onto the unit sphere. So much for the prelims. Perhaps making more sense, write $z = e^x e^{iy}$, which has modulus $r = e^x$. Then realize $\cos(z)$ has modulus $1$ and is entire as $(x,y)$ range over all reals. Recalling that the pre-image of $\cos(z)$ was $z = x + iy$, the domain of $\cos(z)$ may be mapped onto the unit sphere by starting at $z$, drawing a secant to the North Pole of the unit sphere centered at $(0, 0, 1)$, calling $P$ the point where the secant first pierces the unit sphere starting from $z$ in the complex plane, and repeating this for every $z$ in the complex plane. The result is that the points $P$ trace out the unit sphere, so that by reverse stereographic projection we have the unit sphere of all such points $P$ as our domain for $\cos(z)$. For finite $r = e^x$, the North Pole of the unit sphere mapping gets omitted. Now let $x$ approach infinity, so $r = e^x$ approaches infinity. Identify the North Pole of the unit sphere with the point at infinity, and I believe the stereographic mapping for $z = e^x e^{iy}$ becomes onto for all real $x$ and $y$. I hope this helps.
Why is it that when proving trig identities, one must work both sides independently? Suppose that you have to prove the trig identity: $$\frac{\sin\theta - \sin^3\theta}{\cos^2\theta}=\sin\theta$$ I have always been told that I should manipulate the left and right sides of the equation separately, until I have transformed them each into something identical. So I would do: $$\frac{\sin\theta - \sin^3\theta}{\cos^2\theta}$$ $$=\frac{\sin\theta(1 - \sin^2\theta)}{\cos^2\theta}$$ $$=\frac{\sin\theta(\cos^2\theta)}{\cos^2\theta}$$ $$=\sin\theta$$ And then, since the left side equals the right side, I have proved the identity. My problem is: why can't I manipulate the entire equation? In this situation it probably won't make things any easier, but for certain identities, I can see ways to "prove" the identity by manipulating the entire equation, but cannot prove it by keeping both sides isolated. I understand, of course, that I can't simply assume the identity is true. If I assume a false statement, and then derive from it a true statement, I still haven't proved the original statement. However, why can't I do this: $$\frac{\sin\theta - \sin^3\theta}{\cos^2\theta}\not=\sin\theta$$ $$\sin\theta - \sin^3\theta\not=(\sin\theta)(\cos^2\theta)$$ $$\sin\theta(1 - \sin^2\theta)\not=(\sin\theta)(\cos^2\theta)$$ $$(\sin\theta)(\cos^2\theta)\not=(\sin\theta)(\cos^2\theta)$$ Since the last statement is obviously false, is this not a proof by contradiction that the first statement is false, and thus the identity is true? Or, why can't I take the identity equation, manipulate it, arrive at $(\sin\theta)(\cos^2\theta)=(\sin\theta)(\cos^2\theta)$, and then work backwards to arrive at the trig identity. Now, I start with a statement which is obviously true, and derive another statement (the identity) which must also be true - isn't that correct? Another argument that I have heard for keeping the two sides isolated is that manipulating an equation allows you to do things that are not always valid in every case. But the same is true when manipulating just one side of the equation. In my first proof, the step $$\frac{\sin\theta(\cos^2\theta)}{\cos^2\theta}$$ $$=\sin\theta$$ is not valid when theta is $\pi/2$, for example, because then it constitutes division by zero.
Why can't I manipulate the entire equation? You can. The analytical method for proving an identity consists of starting with the identity you want to prove, in the present case $$ \begin{equation} \frac{\sin \theta -\sin ^{3}\theta }{\cos ^{2}\theta }=\sin \theta,\qquad \cos \theta \neq 0 \tag{1} \end{equation} $$ and establish a sequence of identities so that each one is a consequence of the next one. For the identity $(1)$ to be true is enough that the following holds $$ \begin{equation} \sin \theta -\sin ^{3}\theta =\sin \theta \cos ^{2}\theta \tag{2} \end{equation} $$ or this equivalent one $$ \begin{equation} \sin \theta \left( 1-\sin ^{2}\theta \right) =\sin \theta \cos ^{2}\theta \tag{3} \end{equation} $$ or finally this last one $$ \begin{equation} \sin \theta \cos ^{2}\theta =\sin \theta \cos ^{2}\theta \tag{4} \end{equation} $$ Since $(4)$ is true so is $(1)$. The book indicated below illustrates this method with the following identity $$ \frac{1+\sin a}{\cos a}=\frac{\cos a}{1-\sin a}\qquad a\neq (2k+1)\frac{\pi }{2} $$ It is enough that the following holds $$ (1+\sin a)(1-\sin a)=\cos a\cos a $$ or $$ 1-\sin ^{2}a=\cos ^{2}a, $$ which is true if $$ 1=\cos ^{2}a+\sin ^{2}a $$ is true. Since this was proven to be true, all the previous indentities hold, and so does the first identity. Reference: J. Calado, Compêndio de Trigonometria, Empresa Literária Fluminense, Lisbon, pp. 90-91, 1967.
I think it can be done. We are proving that LHS = RHS: by assuming so, you arrive at a universal truth. Similarly, by assuming it's not true, we arrive at a contradiction, which is called proof by contradiction.
Can every uncountable subset of $\mathbb R$ be split up this way? For me this question is like a fish that, any time I (seem to) catch it, manages to slip out of my hands again. If $U$ is an uncountable subset of $\mathbb R$, can it be shown that some $x\in\mathbb R$ exists such that $U\cap(-\infty,x)$ and $U\cap(x,\infty)$ are both uncountable? Thanks in advance.
Let $a$ be the supremum of all $x$ such that $U\cap (-\infty,x)$ is countable ($a=-\infty$ if there are none). Let $b$ be the infimum of all $x$ such that $U\cap (x,\infty)$ is countable, and $\infty$ if there are none. Now $a=b$ would imply that $U$ is countable so we must have $a<b$ and any $a<x<b$ will satisfy your demands.
I could be off here, but the following comes to mind: If $U$ has neither a first nor a last item, for instance if it is the set $\mathbb{Z}$, then for any splitting of the subset $U$ into $U_1=\langle-\infty,x\rangle\cap U$ and $U_2=\langle x,\infty\rangle\cap U$, both $U_1$ and $U_2$ are uncountable. However, if the set has a first or a last item, for example the set $\mathbb{N}$, then $U_1$ xor $U_2$ would be countable. It would be impossible to prove that both subsets are uncountable without further specifying that $U$ has neither a first nor a last item. And if that is so, any $x\in\langle-\infty,\infty\rangle$ would satisfy the criteria, because $U\subseteq\langle-\infty,\infty\rangle$ and $x\in U$.
Prove that there exists a rational number raised to an irrational number that is an irrational number Prove: There exists $a \in \mathbb{Q}$ and $b \in \mathbb{R}\smallsetminus \mathbb{Q}$ such that $a^b \in \mathbb{R} \smallsetminus \mathbb{Q}$. I've tried using $\log_23$, $\sqrt 2$, and $\frac{1}{\sqrt 2}$ for the irrational number, but couldn't find a way to prove $a^b$ was irrational. Is there a way to prove this without using Gelfond–Schneider theorem?
Well, either $2^{\sqrt{2}}$ is irrational and we are done, or $2^{\sqrt{2}}$ is rational, in which case $(2^{\sqrt{2}})^{\sqrt{2}/4}=\sqrt{2}$ is an irrational number that is a rational raised to an irrational power.
Let $p$ be a rational in the form $\frac{a}{b}$ and $q$ be an irrational number. Now, $$p^q=1+\frac{q\ln p}{1!}+\frac{\left(q\ln p\right)^2}{2!}+\cdots$$ For all $p \gt 1$, each term in the series is irrational, thus $p^q$ must be irrational.
The square of a standard Normal random variable I am having a bit of trouble with this: Let $U=Z^2$ where Z is a standard Normal random variable with pdf: $$f_z(z) = \frac{1}{\sqrt{2\pi}} e^{\frac{-z^2}{2}}$$ I want to use the inversion method but have thus far only learned to use this when functions are strictly increasing or decreasing. Since a standard normal distribution function is strictly increasing and then strictly decreasing I thought perhaps I could find some way to use this method. I have the final answer as $f_u(u)= \frac{1}{\sqrt{2\pi}\sqrt{u}}e^{\frac{-u}{2}}$ for $u>0$ But I am not very comfortable with the process of getting to that answer. I used a bit of a walk through and made some assumptions about what was happening. Could anyone help me understand how I should use the above information to reach this answer?
Here is a way to get the answer using more basic principles. $$f_Z(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}, \quad -\infty < z < \infty.$$ This much you already know. Then $$\begin{align*} F_U(u) &= \Pr[U \le u] \\ &= \Pr[Z^2 \le u] \\ &= \Pr[-\sqrt{u} \le Z \le \sqrt{u}] \\ &= \Pr[Z \le \sqrt{u}] - \Pr[Z \le -\sqrt{u}] \\ &= F_Z(\sqrt{u}) - F_Z(-\sqrt{u}), \end{align*}$$ where $F_Z(z)$ is the cumulative distribution function for $Z$. Now differentiate with respect to $u$, taking care to use the chain rule: $$f_U(u) = \frac{f_Z(\sqrt{u})}{2\sqrt{u}} - \frac{f_Z(-\sqrt{u})}{-2\sqrt{u}} = \frac{f_Z(\sqrt{u}) + f_Z(-\sqrt{u})}{2\sqrt{u}}.$$ Now substitute back into the density for $Z$: $$f_U(u) = \frac{1}{\sqrt{2\pi}}\frac{2e^{-u/2}}{2\sqrt{u}} = \frac{e^{-u/2}}{\sqrt{2\pi u}}, \quad u > 0. $$ The distribution of $U$ is known as the chi-square distribution with $1$ degree of freedom.
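A small simulation (my own addition, assuming NumPy and SciPy) that compares the empirical distribution of $Z^2$ with the chi-square distribution with one degree of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u = rng.standard_normal(1_000_000) ** 2   # samples of U = Z^2

# empirical CDF of Z^2 versus the chi-square(1) CDF at a few points
for q in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(q, np.mean(u <= q), stats.chi2.cdf(q, df=1))
```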
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\dsc}[1]{\displaystyle{\color{red}{#1}}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,{\rm Li}_{#1}} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ \begin{align}&\color{#66f}{\large% \int_{-\infty}^{\infty} \frac{\expo{-z^{2}/2}}{\root{2\pi}}\delta\pars{u - z^{2}}\,\dd z} =2\Theta\pars{u}\,\frac{\expo{-u/2}}{\root{2\pi}}\int_{0}^{\infty} \frac{\delta\pars{z - \root{u}}}{2\root{u}}\,\dd z \\[5mm]&=\color{#66f}{\large\Theta\pars{u}\,\frac{u^{-1/2}\ \expo{-u/2}}{\root{2\pi}}} \end{align}
Étale cohomology of projective space I have a very basic question about étale cohomology. Namely I would like to compute the étale cohomology of the projective space over the algebraic closure of $\mathbb F_q$ along with its Frobenius operation: $$H^i(\mathbb P^n_{\mathbb F},\mathbb Z /l)$$ I would expect that it vanishes for $i>2n$ or odd $i$ and is $\mathbb Z/l$ with Frobenius operation by multiplication by $q^{i/2}$ otherwise. Using the Gysin sequence I can check that the cohomology groups look as expected; however, I don't know how to compute the action of the Frobenius. So my questions are: How does one compute the Frobenius operation on cohomology in this example? What are general techniques to compute the Frobenius action on $l$-adic or étale cohomology?
You could count points and use the Lefschetz fixed point formula.
Theorem of Liouville Consider two entire functions with no zeroes and having a ratio equal to unity at infinity. Use Liouville's Theorem to show that they are in fact the same function. My attempt Consider $h(z) = f(z)/g(z)$. First of all, $h$ is entire, since $f$ and $g$ are entire, and $g(z)$ is nonzero for all $z$ in $\mathbb{C}$. The fact that $\lim_{z→\infty} h(z) = \lim_{z→\infty} f(z)/g(z) = 1$ suggests that $h(z)$ is bounded as well. Why: Since $\lim_{z→\infty} f(z)/g(z) = 1$, there exists $N > 0$ such that $|f(z)/g(z) - 1| < 1$ for all $|z| > N$. (Note that I am taking $\epsilon = 1$ for concreteness.) Then, $|f(z)/g(z) - 1| \geq ||f(z)/g(z)| - 1|$ $\implies ||f(z)/g(z)| - 1| < 1$ $\implies |f(z)/g(z)| - 1 < 1$ and $1 - |f(z)/g(z)| < 1$ $\implies 0 < |f(z)/g(z)| < 2$. That is, $|f(z)/g(z)|$ is bounded above by $2$ for $|z| > N$. Moreover, we know that $|f(z)/g(z)|$ has a maximum $M$ on $|z| \leq N$ by the maximum modulus principle (or simply from $|z| \leq N$ being compact). Hence, $|f(z)/g(z)|$ is bounded above by $\max \{2, M\}$ for all $z$ in $\mathbb{C}$. Hence, $h(z)$ is constant by Liouville's Theorem, i.e. $h(z) = f(z)/g(z) = c$ for some constant $c$. Since this is true for all $z$ in $\mathbb{C}$, taking the limit as $z\to \infty$ yields $c = 1$. Hence $f(z) = g(z)$ for all $z$ in $\mathbb{C}$. Is my work correct?
Since both are entire functions without zeros, $h(z) = \frac{f(z)}{g(z)}$ is an entire function. $\lim_{z \rightarrow \infty} \frac{f(z)}{g(z)} = 1$, where $\infty$ can be interpreted as the point at infinity, implies that $h$ is bounded. A bounded entire function is a constant $c$. $h(z) = c$ implies that $f(z) = c g(z)$. However, $\lim_{z \rightarrow \infty} h(z) = 1$ implies that $c = 1$.
Proving $\phi$ is continuous In this video the lecturer gave an example: $\phi : M\to N$, where $M$ is equipped with an arbitrary topology $\mathcal O_M$, and $N$ is equipped with the chaotic topology $\{\varnothing, N\}$. Then any $\phi$ is continuous. It's clear that the preimage of $\varnothing$ must be $\varnothing$ and thus an open set in $M$, but why is the preimage of $N$ an open set in $M$?
The preimage $\phi^{-1}(N) = \{x \in M : \phi(x) \in N\}$ is equal to $M$ and $M$ is open by definition of a topology.
Any map into a space with the trivial topology is continuous (as is any map from a discrete space), which can be easily seen, for instance, by applying the definition that the preimage of any open set is open.
Can epsilon be a matrix? Question In the following expression can $\epsilon$ be a matrix? $$ (H + \epsilon H_1) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) = (E + \epsilon E_1 + \epsilon^2 E_2 + \dots) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) $$ Background So in quantum mechanics we generally have a solution $|m\rangle$ to a Hamiltonian: $$ H | m\rangle = E |m\rangle $$ Now using perturbation theory: $$ (H + \epsilon H_1) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) = (E + \epsilon E_1 + \epsilon^2 E_2 + \dots) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) $$ I was curious and substituted $\epsilon$ as a matrix: $$ \epsilon = \left( \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right) $$ where $\epsilon$ is now a nilpotent matrix, and we get: $$ \left( \begin{array}{cc} H | m \rangle & 0 \\ H_1 |m \rangle + H | m_1\rangle & H |m \rangle \end{array} \right) = \left( \begin{array}{cc} E | m \rangle & 0 \\ E_1 |m \rangle + E | m_1\rangle & E |m \rangle \end{array} \right)$$ which is what we'd expect if we compared powers of $\epsilon$. All this made me wonder if $\epsilon$ could be a matrix? Say something like $| m_k\rangle \langle m_k |$? Say we chose $\epsilon \to \hat I \epsilon$; then there exists a radius of convergence. What is the radius of convergence in the general case of an arbitrary matrix?
I would say there's nothing preventing you from using a matrix as perturbation of a matrix equation as long as you let the limit of the norm converge to $0$.
Yes and No. When I say yes I mean it is possible in several ways. But it does not make sense. As usual for a physicist you did not specify what the space is in which your states (kets) live and thus not what the $H$ and $H_1$ are. But of course they are meant to be operators which you can consider to be generalizations of matrices to infinite dimensions. So when you assume $\epsilon$ to be a matrix you could as well absorb this into $H_1$. Further you put $\epsilon$ to be a constant matrix. Then you can just leave the $\epsilon$ away. The epsilon is there to control the perturbation $H_1$. If you set $\epsilon$ to be a constant you directly solve the problem.
Differential equation with no unique solution Can someone help me find solutions for the following differential equation: $x'=-t\,\text{sign}(x)\sqrt{|x|}$ with $x(\tau)=\xi$. With $x$ a function of $t$ and $\tau$ and $\xi$ constants.
It is possible to compute solutions where $x(t)$ is not changing its sign. Since you wanted to find 'solutions' and not 'all solutions' I will not investigate the case where $x(t)$ changes its sign. (It seems strenuous.) Now let's take the IVP $$\begin{cases} x'&=-t\,\text{sign}(x)\sqrt{|x|} \\ x(\tau)&=\xi \end{cases}$$ Assuming $x(t)>0$ for all $t$: We get $x'=-t \sqrt{x}$ which has the solution (by separation of variables) $$x(t)=\frac{1}{16}(t^2-A)^2>0$$ with $x'(t)=\frac{1}{4}(t^2-A)t$ where $A>0$ is a constant that has the properties: $x'(t)<0$ i.e. $\begin{cases} \text{for $t>0$: }~t^2-A<0 \Rightarrow t^2<A \Rightarrow |t|<\sqrt{A} \Rightarrow 0<t<\sqrt{A} \\ \text{for $t<0$: }~t^2-A>0 \Rightarrow t^2>A \Rightarrow |t|>\sqrt{A} \Rightarrow t<-\sqrt{A} \end{cases}$ $x(\tau)=\xi$ i.e. $\begin{cases} \text{for $\tau>0$: }~\tau^2-A=- 4 \sqrt{\xi}<0 \Rightarrow A=\tau^2 + 4\sqrt{\xi} \\ \text{for $\tau<0$: }~\tau^2-A= 4 \sqrt{\xi}>0 \Rightarrow A=\tau^2 - 4\sqrt{\xi} \\ \text{for $\tau=0$: }~ 0-A=-4\sqrt{\xi}<0 \Rightarrow A=4\sqrt{\xi} \end{cases}$ The first condition is problematic for arbitrarily large $t$. Since $A$ is specified by the second condition we get by the first condition that $x(t)=\frac{1}{16}(t^2-A)^2$ is only a solution for $$t \in \begin{cases} \left(-\infty,-\sqrt{\tau^2+4 \sqrt{\xi}} \right) \cup \left(0,\sqrt{\tau^2+4 \sqrt{\xi}} \right) &\text{ if } \tau>0\\ \left(-\infty,-\sqrt{\tau^2-4 \sqrt{\xi}} \right) \cup \left(0,\sqrt{\tau^2-4 \sqrt{\xi}} \right) &\text{ if } \tau<0 \wedge \tau^2>4 \sqrt{\xi} \\ \left(-\infty,-\sqrt{4 \sqrt{\xi}} \right) \cup \left(0,\sqrt{4 \sqrt{\xi}} \right) &\text{ if } \tau=0 \end{cases}$$ Assuming $x(t)<0$ for all $t$: We get $x'=t\sqrt{-x}$ i.e. $y'=-t\sqrt{y}$ for $y=-x$. So we are back in our first case just with a minus and the solution has to stay negative.
If $t\mapsto x(t)>0$ is a solution defined in some interval $J$ then $$\left(2\sqrt{x(t)}+{1\over2}t^2\right)'={x'(t)\over\sqrt{x(t)}}+t=0\ ,$$ hence $$2\sqrt{x(t)}+{1\over2}t^2={1\over2}c^2$$ for some $c>0$. This allows us to conclude that $$\sqrt{x(t)}={1\over4}(c^2-t^2)\qquad(-c<t<c)\ ,$$ or $$x(t)={1\over16}(c^2-t^2)^2\qquad(-c<t<c)\ .\tag{1}$$ It is easy to check that these functions, together with their negatives, indeed solve the given ODE, and at the same time their graphs fill the complete $(t,x)$-plane, minus the line $x=0$, in the desired way. The function $$x(t):\equiv0\tag{2}$$ is a solution as well, but a special one: The Lipschitz-assumption of the general existence and uniqueness theorem is not fulfilled in the points $(\tau,0)$ with $\tau\ne0$. Therefore it does not come as a surprise that IVPs with such a point as initial point have three solution germs starting there: the solution $(2)$ and two arcs of type $(1)$ going off towards $t=0$. In the direction $|t|\to\infty$ there is just the trivial behavior $(2)$. A sketch of the solution curves in the $(t,x)$-plane makes this behavior clear.
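A short symbolic check (a sketch assuming SymPy, not part of the original answer) that the family $(1)$ solves the ODE on $-c<t<c$, where $x(t)>0$ and hence $\operatorname{sign}(x)\sqrt{|x|}=\sqrt{x}=\tfrac14(c^2-t^2)$:

```python
import sympy as sp

t, c = sp.symbols('t c')
x = (c**2 - t**2) ** 2 / 16                 # the family (1)

lhs = sp.diff(x, t)
rhs = -t * (c**2 - t**2) / 4                # = -t*sqrt(x) on the interval -c < t < c
print(sp.simplify(lhs - rhs))               # prints 0
```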
$f$ strictly increasing does not imply $f'>0$ We know that if a function $f: [a,b] \to \mathbb{R}$ is continuous on $[a,b]$ and differentiable on $(a,b)$, and if $f'>0$ on $(a,b)$, then $f$ is strictly increasing on $[a,b]$. Is there any counterexample that shows the converse fails? I have been trying to come up with simple examples, but they all involve functions that are discontinuous or have derivative $f'=0$, which does not agree with the hypothesis.
Consider $f(x)=x^3$ on $[-1,1]$. It is strictly increasing, but has zero derivative at $0$.
Definition of Strictly Increasing: if $f(x)$ is strictly increasing then, $\forall x\forall y : y>x, f(y)> f(x) $ Definition of Monotonically increasing: if $f(x)$ is monotonically increasing then, $\forall x\forall y : y\geq x, f(y)\geq f(x) $ Definition of Derivative: $f'(x) = \lim_{∆x\to 0} \frac{f(x+∆x)-f(x)}{∆x} $ Postulate: $f(x)$ is a Strictly Increasing function $\iff$ $f'(x) >0$ Proof: WLOG, let $ ∆x\overset{\underset{\mathrm{def}}{}}{=} y-x$ Thus, $y = x + ∆x $ $f'(x) = \lim_{(y-x) \to 0} \frac{f(y)-f(x)}{y-x} $ $\because y > x, ∆x >0 $ and $ y-x > 0 $ also, $\because f(y) > f(x), f(y) - f(x) > 0 $ Thus $ \lim_{∆x\to 0} \frac{f(x+∆x)-f(x)}{∆x} > 0 $ $ \therefore f'(x) > 0$ QED Using the same logic, $f(x)$ is monotonic $\iff f'(x) \geq 0$
How many $6$ digit numbers can you make with the numbers $\{1, 2, 3, 4, 5\}$ so that the digit $2$ appears at least 3 times? I can't seem to understand. I thought about it this way. We take a first example: $222aaa$ for each blank space we have $5$ positions, so in this case the answer would be $125$ ($5 \times 5 \times 5$) numbers. Now the position of the twos can change so we calculate the numbers for that taking this example: $222aaa$ and the possibilities for this are the number of ways you can arrange $6$ digits ($6!$) and divide by repetitions so divided by $2 \cdot (3!)$ so answer $= 20$ So final answer should be $125 \cdot 20 = 2500$. But this answer is wrong and I don't understand why.
Let $S_n$ be the number of ways you can have the digit 2 appear exactly $n$ times, where $n = 3, 4, 5, 6$. We can then add $S_3 + S_4 + S_5 + S_6$ to get our answer. To find $S_n$, note that there are $\binom{6}{n}$ ways of placing the 2's, and for each of those there are $6-n$ places for the other digits; however, we will not include 2 among these other digits, or else we will end up overcounting. So there are $4^{6-n}$ different numbers we can make for each placement. So $S_3 = \binom{6}{3}4^3$, $S_4 = \binom{6}{4}4^2$, $S_5 = \binom{6}{5}\cdot 4 = 24$, $S_6 = \binom{6}{6} = 1$. Now just add these numbers up.
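A brute-force cross-check of this count (my own addition, in Python): enumerate all $5^6$ strings and compare with the closed form $\sum_{n=3}^{6}\binom{6}{n}4^{6-n}$.

```python
from itertools import product
from math import comb

# enumerate all 5^6 = 15625 six-digit strings over {1,...,5}
brute = sum(1 for d in product('12345', repeat=6) if d.count('2') >= 3)

# closed form from the answer above
closed = sum(comb(6, n) * 4 ** (6 - n) for n in range(3, 7))

print(brute, closed)   # both 1545
```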
Since you have only $5$ numbers, you can try it by making cases. Case $1.$ When there are $3$ $2$'s ($3$ spaces left) Case $1.a)$ When all the three numbers are different Total numbers $=4\times 3\times 2=24$ Total permutations $=\frac{6!}{3!}=120$ Total cases $=24\times 120=2880$ Case $1.b)$ When two of the three numbers are the same Total numbers $=12$ Total permutations $=\frac{6!}{3!2!}=60$ Total cases $=12\times 60=720$ Case $2.$ When there are $4$ $2$'s ($2$ places left) Case $2.a)$ When the two numbers are different Total numbers $=4\times 3=12$ Total permutations $=\frac{6!}{4!}=30$ Total cases $=12\times 30=360$ Case $2.b)$ When the two numbers are the same Total numbers $=4$ Total permutations $=\frac{6!}{4!2!}=15$ Total cases $=4\times 15=60$ Case $3.$ When there are five $2$'s Total cases $=4\times 6=24$ Case $4.$ When there are six $2$'s Total cases $=1$ Adding all will give $2880+720+360+60+24+1=4045$
Derivative Notation: Let $f(x)=x$. Is $f′(x^2)=1$ or is $f′(x^2)=2x$? Let $f(x)=x$. Is $f'(x^2)=1$ or is $f'(x^2)=2x$? In other words, using this notation, are we evaluating the derivative at $x^2$?
If you consider the function $f$ of a variable $x$, then $f'$ is likewise a function. When you write $f'(a)$, you evaluate that function at $a$. If $a$ happens to be $x^2$, then you evaluate at $x^2$. It has nothing to do with differentiating with respect to $a$ or $x^2$ or anything else. So, in your example, $f'(a)=1$ no matter what you put in for $a$ when $f(x)=x$. If you consider $g(x)=x^2$ and the composition $(f\circ g)(x) = f(x^2)$, then $(f\circ g)'(x) = f'(g(x))g'(x) = f'(x^2)\cdot 2x$.
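A small symbolic check of this distinction (my addition), using sympy:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Lambda(x, x)                        # f(x) = x
fprime = sp.Lambda(x, sp.diff(f(x), x))    # f' as a function; f'(a) = 1 for every input a

print(fprime(x**2))              # 1   -- f' evaluated at x**2
print(sp.diff(f(x**2), x))       # 2*x -- derivative of the composition f(g(x)) with g(x) = x**2
```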
As $f(x)=x$, then $$\left(f(x^2)\right)'=(x^2)'.$$ The $'$ denotes differentiation on $x$.
Calculate limit of $\sqrt[n]{2^n - n}$ Calculate limit of $\sqrt[n]{2^n - n}$. I know that lim $\sqrt[n]{2^n - n} \le 2$, but don't know where to go from here.
HINT: $\sqrt[n]{2^n-2^{n-1}}\leq\sqrt[n]{2^n-n}\leq\sqrt[n]{2^n}$ $\lim\limits_{n\to\infty}\sqrt[n]{2^n}=\lim\limits_{n\to\infty}2^{\frac{n}{n}}=2^1=2$ $\lim\limits_{n\to\infty}\sqrt[n]{2^n-2^{n-1}}=\lim\limits_{n\to\infty}\sqrt[n]{2^{n-1}} =\lim\limits_{n\to\infty}2^{\frac{n-1}{n}}=2^{\lim\limits_{n\to\infty}\frac{n-1}{n}}=2^1=2$
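A quick numerical illustration of the squeeze (my addition):

```python
# Numerically, (2**n - n)**(1/n) approaches 2 as n grows.
for n in (10, 100, 1000):
    print(n, (2 ** n - n) ** (1 / n))
```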
Hint: $\sqrt[n]{2^n - n} = 2 (1 - \frac{n}{2^n})^{\frac{1}{n}} = 2 ((1 - \frac{n}{2^n})^{\frac{2^n}{n}} )^{\frac{1}{2^n}}$ Now... Do you know Euler?
$ef$ theorem for inertial degree and ramification index $L/K$ is a finite algebraic extension of fields. $O_{P}$ and $O_{\mathcal{P}}$ are DVRs with $P$ and $\mathcal{P}$ maximal ideals respectively, their quotient fields are $K$ and $L$ respectively. Let $e$ be the ramification index of $\mathcal{P} / P$ and $f$ be the inertial degree of $\mathcal{P}/P$. I am working on showing $ef \leq n=[L: K]$ (without separable condition). I do not quite understand several details in the proof. The proof in the book is: Let $\Pi$ be a generator of $\mathcal{P}$ and choose $\omega_{1},\ldots,\omega_{m}$ such that their residues modulo $\mathcal{P}$ are linearly independent over $O_{P}/P$. Then show $\omega_{i}\Pi^{j}$ are linearly independent over $K$. Take a relation $$\sum_{j=0}^{e-1}\sum_{i=1}^{m}a_{ij}\omega_{i}\Pi^{j}=0.$$ If the $a_{ij}$ are not all zero, we can assume they are all in $O_{P}$ and at least one of them is not in $P$ (I understand they could be put in $O_{P}$ since $K$ is the quotient field, but why is at least one not in $P$?). Then consider the elements $$A_{j}=\sum_{i=1}^{m}a_{ij}\omega_{i}.$$ If some $a_{ij}\not \in P$, then $A_{j}$ is a unit in $O_{\mathcal{P}}$. Otherwise, $A_{j}$ is divisible by $\pi$, the generator of $P$, and so $\text{ord}_{\mathcal{P}}(A_{j}) \geq e$. Thus, $\text{ord}_{\mathcal{P}}(\sum_{j=0}^{e-1}A_{j}\Pi^{j}) =j_{0}$ for some $j_{0} < e$ (how to get this conclusion? Just because you have some $A_{j}$ of order $0$ and then multiply by $\Pi^{j}$?). Finally this is a contradiction since $\sum_{j=0}^{e-1}A_{j}\Pi^{j}=0$ (why is it a contradiction? Doesn't $0$ have $\mathcal{P}$-order less than $e$?). Thanks for any kind help you offer!
First question: you have a relation $$\sum\limits_{i,j} a_{ij} \omega_i \Pi^j =0$$ for some $a_{ij} \in K$, not all zero, and you want to justify the claim that you can multiply this equation by some $c \in K$ so that all of the $ca_{ij}$ are in $\mathcal O_P$ with at least one of these not in $P$. The only way this would not be possible is if all the nonzero $a_{ij}$ had the same valuation. That would mean you could choose a $c \in K$ such that all the nonzero terms $ca_{ij}$ were units in $\mathcal O_P$. Taking the equation $$\sum\limits_{i=1}^m \sum\limits_{j=0}^{e-1} ca_{ij} \omega_i \Pi^j = 0 \tag{1}$$ modulo $\mathcal P$ yields the equation $\sum\limits_{i=1}^{m} ca_{i0} \omega_i \equiv 0 \pmod{\mathcal P}$. The linear independence of the cosets $\omega_i + \mathcal P$ over the field $\mathcal O_P/P$ implies that all the $ca_{i0}$ are in $P$. Since the nonzero terms $ca_{i0}$ are units in $\mathcal O_P$, this means that all the $ca_{i0}, 1 \leq i \leq m$ must be $0$. Equation (1) is then $$\sum\limits_{i=1}^m \sum\limits_{j=1}^{e-1} ca_{ij} \omega_i \Pi^j = 0.$$ Divide this equation by $\Pi$ and reduce modulo $P$ again. The same argument as in the previous paragraph implies that all the terms $ca_{i1}, 1 \leq i \leq m$ are all zero. Iterating this argument shows that all the terms $ca_{ij}$ have to be zero. This contradicts the assumption that not all the $a_{ij}$ were zero. Second question: For each $A_j, 0 \leq j \leq e-1$, we have either $\operatorname{ord}_{\mathcal P}(A_j) =0$ or $\operatorname{ord}_{\mathcal P}(A_j) \geq e$ as they explain. Since there is at least one $a_{ij}$ which is not in $P$, this means that there is at least one $j$ for which $\operatorname{ord}_{\mathcal P}(A_j) = 0$. The usual rule $$\operatorname{ord}_{\mathcal P}(x+y) \geq \operatorname{Min}\{ \operatorname{ord}_{\mathcal P}(x), \operatorname{ord}_{\mathcal P}(y)\}$$ is written for nonzero $x, y \in L$, but it continues to make sense if we make the convention that $\operatorname{ord}_{\mathcal P}(0) = \infty$. Remember this is an exact equality if $\operatorname{ord}_{\mathcal P}(x) \neq \operatorname{ord}_{\mathcal P}(y)$. Let $S$ be the set of $0 \leq j \leq e-1$ such that $\operatorname{ord}_{\mathcal P}(A_j) = 0$ (we know it is nonempty), and let $T$ be the set of $0 \leq j \leq e-1$ such that $\operatorname{ord}_{\mathcal P}(A_j) \geq e$. If we set $$X_S = \sum\limits_{j \in S} A_j \Pi^j, X_T = \sum\limits_{j \in T} A_j \Pi^j$$ then we have $$\sum\limits_{j=0}^{e-1} A_j \Pi^j = X_S + X_T = 0.$$ Since all the $\operatorname{ord}_{\mathcal P}(A_j)$ for $j \in S$ are $0$, we see that $\operatorname{ord}_{\mathcal P}(A_j \Pi^j) = j$ for all $j \in S$. Thus all the terms $A_j \Pi^j$ for $j \in S$ have distinct valuations, and so $$\operatorname{ord}_{\mathcal P}(X_S)= j_0$$ where $j_0$ is the smallest number in $S \subseteq \{0,1, ... , e-1\}$. Since all the terms $A_j \Pi^j : j \in T$ have order $\geq e$, the element $X_T$ has order $\geq e$. This implies $$\operatorname{ord}_{\mathcal P}(X_S + X_T) = j_0 < e.$$ In particular, $\sum\limits_{j=0}^{e-1} A_j \Pi^j$ does not have infinite valuation, so it cannot be zero.
$L,K$ are discretely valued fields; the valuation $v$ is surjective onto $\Bbb{Z}\cup \{\infty\}$ on $L$ and onto $e\Bbb{Z}\cup \{\infty\}$ on $K$. $\Pi$ is chosen such that $v(\Pi)=1$. The $\omega_i$ are taken in $O_\mathcal{P}$ to be an $O_P/P$-basis of $O_\mathcal{P}/\mathcal{P}$, i.e. $m=f$. The $O_P/P$-independence of the $\omega_i$ gives that $$v(\sum\limits_{i=1}^m a_{ij} \omega_i) = \inf_i v(a_{ij})$$ Thus $v(\sum\limits_{i=1}^m a_{ij} \omega_i \Pi^j) \in j+ e\Bbb{Z}\cup \{\infty\}$. For $v(b)\ne v(c)$, $v(b+c)=\inf( v(b),v(c))$. Thus $$v(\sum_{j=0}^{e-1} \Pi^j\sum\limits_{i=1}^m a_{ij} \omega_i )=\inf_j v(\Pi^j\sum\limits_{i=1}^m a_{ij} \omega_i)=\inf_j (j+\inf_i v(a_{ij}))$$ That it is $\infty$ means that each $v(a_{ij})=\infty$, i.e. each $a_{ij}=0$.
Why $U\le A \times B$ does not imply $U=\left(A\cap U\right) \times \left( B \cap U \right)$? Let $A,B$ be groups. Can you explain why $U\le A \times B$ does not imply $U=\left(A\cap U\right) \times \left( B \cap U \right)$? This is an exercise in the book The Theory of Finite Groups: An Introduction by H. Kurzweil. The meaning of each symbol is as follows. $A\times B$ is the direct product of $A$ and $B$. $U$ is a subgroup of $A\times B$, thus $U= \lbrace\left(a,b\right)|a \in A,b\in B \rbrace$, $A \cap U=\lbrace a_1|\left(a_1,b_1\right) \in U,a_1\in A \rbrace$, and in the same way we read $\left( B \cap U \right)$.
How about a simple example such as $A=B\ne1$ and $U=\{(x,x)\mid x\in A\}$?
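To make this concrete (my addition), here is a brute-force check with $A=B=\Bbb Z_2$ and the diagonal subgroup; the product of the two projections is strictly larger than $U$:

```python
from itertools import product

# A = B = Z_2 (integers mod 2), U the diagonal subgroup {(x, x) : x in A}.
A = B = [0, 1]
U = {(x, x) for x in A}

proj_A = {a for (a, b) in U}     # what the question denotes A ∩ U
proj_B = {b for (a, b) in U}     # what the question denotes B ∩ U

print(sorted(U))                           # [(0, 0), (1, 1)]  -- two elements
print(sorted(product(proj_A, proj_B)))     # all four pairs, strictly larger than U
```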
Think about it intuitively. Take $A,B = \mathbb{R}$. The LHS $U$ is "some set of points in the plane" whereas the RHS is "all possible x-coordinates from points in $U$ with all possible y-coordinates from points in $U$". Clearly $U\subseteq$ RHS but the RHS can be bigger. For a specific example let $U = \{(0,1),(1,0)\}$. Then the RHS is $\{(0,0),(0,1),(1,0),(1,1)\}$. The point is that the object on the RHS is combining things componentwise that might not have existed when we chose "special" points to be in $U$.
Quadratic Diophantine equation - Find all integer solutions I have a quadratic Diophantine equation, $11x^2-14y^2=1$. I am sorry, I didn't work out how to typeset it correctly, so I used a picture. How can I find all integer solutions? Should I do something with $\mathbb{Z}_{14}$?
The integer equation $11 x^2 - 14 y^2 = 1$ is impossible. However, I do not see that a proof is available for a beginner. There is no simple proof with congruences. There is one using, in effect, continued fractions. The principal genus of this discriminant is two forms, $$ x^2 - 154 y^2 $$ $$ 2 x^2 - 77 y^2, $$ and your form $11x^2 - 14 y^2$ is $SL_2 \mathbb Z$ equivalent to $2x^2 - 77 y^2$ and simply does not integrally represent $1.$ The other genus is their negatives, $$ -x^2 + 154 y^2 $$ $$ -2 x^2 + 77 y^2, $$ Here are a list of the class group, Gauss-Lagrange reduced forms, and the Lagrange cycle of your form 1. 1 24 -10 cycle length 10 2. -1 24 10 cycle length 10 3. 2 24 -5 cycle length 8 4. -2 24 5 cycle length 8 form class number is 4 jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 11 0 -14 0 form 11 0 -14 delta 0 1 form -14 0 11 delta 1 2 form 11 22 -3 -1 -1 0 -1 To Return -1 1 0 -1 0 form 11 22 -3 delta -7 ambiguous 1 form -3 20 18 delta 1 2 form 18 16 -5 delta -4 3 form -5 24 2 delta 12 4 form 2 24 -5 delta -4 ambiguous 5 form -5 16 18 delta 1 6 form 18 20 -3 delta -7 7 form -3 22 11 delta 2 8 form 11 22 -3 form 11 x^2 + 22 x y -3 y^2 minimum was 2rep x = 5 y = 39 disc 616 dSqrt 24 M_Ratio 4.760331 Automorph, written on right of Gram matrix: 2419 5148 18876 40171 ========================================= jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$
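As a finite sanity check (my addition; a bounded search of course proves nothing on its own), a brute-force scan finds no integer solutions in a modest range:

```python
from math import isqrt

# Brute force search for 11 x^2 - 14 y^2 = 1; by symmetry it is enough to take x, y >= 0.
found = []
for y in range(100_000):
    t = 1 + 14 * y * y              # would have to equal 11 * x^2
    if t % 11 == 0:
        x = isqrt(t // 11)
        if 11 * x * x == t:
            found.append((x, y))

print(found)    # []  -- no solutions with 0 <= y < 100000
```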
If you are asking for $(x, y)$ pairs that solve the equation: $$11x^2-14y^2=1$$ you can use the online solver here to show that there are no solutions in integers.
Intermediate value theorem for $\sin x.$ How is intermediate value theorem valid for $\sin x$ in $[0,\pi]$? It has max value $1$ in the interval $[0,\pi]$ which doesn't lie between values given by $\sin0$ and $\sin\pi$.
Let $f\colon[a,b]\longrightarrow\mathbb R$ be a continuous function and let $y\in\mathbb R$. The intermediate value theorem says that if $f(a)\geqslant y\geqslant f(b)$ or if $f(a)\leqslant y\leqslant f(b)$, then there is a $c\in[a,b]$ such that $f(c)=y$. But it says nothing if $y$ lies outside the interval bounded by $f(a)$ and $f(b)$. So, there is no contradiction here.
In general, any time you have a theorem, "If ____, then ____," it really means, "If ____, then ____ and maybe some other stuff not mentioned here also happens." Because when you're working in just about any branch of mathematics, no matter how thoroughly you describe the implications of any mathematical fact there is always some other possible case you could have said something about but didn't. The only things you can rule out are the things the theorem explicitly says cannot happen. Unless a theorem says "and there are no other values outside this interval," don't assume all the values are inside the interval. In your example, the value $\sin(\pi/2)=1$ is part of the "other stuff" that "also happens."
Finding Angle using Geometry In an equilateral triangle $ABC$ the points $D$ and $E$ are on sides $AC$ and $AB$ respectively, such that $BD$ and $CE$ intersect at $P$, and the area of the quadrilateral $ADPE$ is equal to the area of $\Delta BPC$; find $\angle BPE$. When I first tried this question it looked easy, and I was able to guess the answer, but when I tried to prove it I was not able to work it out. I want some help. Thank you. No image was provided in the question; I am attaching my drawing.
The hint. Prove that $$\left(S_{\Delta BPC}\right)^2=S_{\Delta APB}S_{\Delta APC}.$$ I got $$\measuredangle BPE=60^{\circ}.$$ Let $S_{\Delta BPC}=a$, $S_{\Delta PAC}=b$ and $S_{\Delta PAB}=c$. Thus, $$\frac{S_{\Delta PEB}}{c}=\frac{BE}{AB}=\frac{BE}{BE+EA}=\frac{1}{1+\frac{EA}{BE}}=\frac{1}{1+\frac{b}{a}}=\frac{a}{a+b},$$ which gives $$S_{\Delta PEB}=\frac{ac}{a+b}.$$ Similarly, $$S_{\Delta PDC}=\frac{ab}{a+c}.$$ Thus, $$S_{AEPD}=b+c-\frac{ab}{a+c}-\frac{ac}{a+b}=\frac{bc}{a+c}+\frac{bc}{a+b}.$$ Id est, $$\frac{bc}{a+c}+\frac{bc}{a+b}=a$$ or $$a^2=bc$$ or $$\frac{a}{b}=\frac{c}{a}$$ or $$\frac{BE}{AE}=\frac{AD}{CD},$$ which gives $$DC=AE,$$ $$\Delta AEC\cong\Delta CDB,$$ which gives $$\measuredangle BPE=\measuredangle DBC+\measuredangle ECB=\measuredangle DBC+60^{\circ}-\measuredangle ACE=60^{\circ}.$$
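A numerical cross-check of this conclusion (my addition, not part of the answer): with side length $2$ and $AE=DC$, the areas agree and $\angle BPE$ comes out as $60^{\circ}$.

```python
import math

# Equilateral triangle with side 2.
A = (0.0, math.sqrt(3.0)); B = (-1.0, 0.0); C = (1.0, 0.0)

def lerp(P, Q, s):       # the point P + s*(Q - P)
    return (P[0] + s * (Q[0] - P[0]), P[1] + s * (Q[1] - P[1]))

def intersect(P1, P2, Q1, Q2):   # intersection of lines P1P2 and Q1Q2
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = P1, P2, Q1, Q2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    s = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + s * (x2 - x1), y1 + s * (y2 - y1))

def area(*pts):          # shoelace area of a simple polygon
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
    return abs(s) / 2.0

t = 0.5                                # AE = DC = t, the relation derived above
E = lerp(A, B, t / 2.0)                # AE = t along AB (|AB| = 2)
D = lerp(A, C, (2.0 - t) / 2.0)        # AD = 2 - t along AC, so DC = t
P = intersect(B, D, C, E)

PB = (B[0] - P[0], B[1] - P[1]); PE = (E[0] - P[0], E[1] - P[1])
cos_angle = (PB[0] * PE[0] + PB[1] * PE[1]) / (math.hypot(*PB) * math.hypot(*PE))

print(area(A, D, P, E), area(B, P, C))      # equal areas (about 0.3997 each)
print(math.degrees(math.acos(cos_angle)))   # about 60 degrees
```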
IF there exists a set of points P for which the two areas are equal, then one such point P lies on the vertical altitude (axis of symmetry), thus we only need to find that point. Dividing each of the two areas in half with the vertical altitude, the problem can be stated as Area(BPS) = Area (APE), where S is the foot of the altitude (the midpoint of BC). The midpoint (centroid?) of the triangle (with altitudes) divides the area of the whole triangle into six similar triangles, including the two above. So the midpoint of the triangle satisfies the condition, thus the angle is 60 degrees.
building transformation matrix from spherical to cartesian coordinate system How to arrive at the following from given $ x = r\sin \theta \cos \phi, y = r\sin \theta \sin \phi, z=r\cos\theta $ $$ \begin{bmatrix} A_x\\ A_y\\ A_z \end{bmatrix} = \begin{bmatrix} \sin \theta \cos \phi & \cos \theta \cos \phi & -\sin\phi\\ \sin \theta \sin \phi & \cos \theta \sin \phi & \cos\phi\\ \cos\theta & -\sin\theta & 0 \end{bmatrix} \begin{bmatrix} A_r\\ A_\theta\\ A_\phi \end{bmatrix}$$ Also how show that $$ \begin{bmatrix} \hat i\\ \hat j\\ \hat k \end{bmatrix} = \begin{bmatrix} \sin \theta \cos \phi & \cos \theta \cos \phi & -\sin\phi\\ \sin \theta \sin \phi & \cos \theta \sin \phi & \cos\phi\\ \cos\theta & -\sin\theta & 0 \end{bmatrix} \begin{bmatrix} \hat e_r\\ \hat e_\theta\\ \hat e_\phi \end{bmatrix}$$ How to change $(a,b,c)$ into spherical polar coordinates and $ (r ,\theta, \phi)$ into cartesian coordinates using this matrix? Thank you!!
I can partially answer this. I believe your first matrix is not the correct general transformation matrix for cartesian to spherical coordinates because you are missing factors of $\rho$ (the radial coordinate), as well as some other incorrect pieces. So it is not clear what you are trying to show. If you are trying to derive the general transformation matrix from spherical to cartesian, it is: $$\begin{bmatrix} A_x\\ A_y\\ A_z \end{bmatrix} = \begin{bmatrix} \sin \theta \cos \phi & \rho \cos \theta \cos \phi & -\rho \sin \theta \sin\phi\\ \sin \theta \sin \phi & \rho \cos \theta \sin \phi & \rho \sin\theta \cos\phi\\ \cos\theta & -\rho \sin\theta & 0 \end{bmatrix} \begin{bmatrix} A_r\\ A_\theta\\ A_\phi \end{bmatrix} $$ This matrix is formed from the derivative matrix in the following way: $ \sum_{(i,j)} \frac {\partial x^i} {\partial u^j} $ where $$x^1 = \rho\sin \theta \cos \phi, x^2 = \rho\sin \theta \sin \phi, x^3=\rho\cos\theta$$ and $u^1 = \rho, u^2 = \theta, u^3 = \phi $ (Note: the superscripts are indices, not exponents). If you write out all the derivatives, you get this matrix: $$ \begin{bmatrix} A_x\\ A_y\\ A_z \end{bmatrix} = \begin{bmatrix} \frac {\partial (\rho \sin \theta \cos \phi)} {\partial \rho} & \frac {\partial (\rho \sin \theta \cos \phi)} {\partial \theta} & \frac {\partial (\rho \sin \theta \cos \phi)} {\partial \phi}\\ \frac {\partial (\rho \sin \theta \sin \phi)} {\partial \rho} & \frac {\partial (\rho \sin \theta \sin \phi)} {\partial \theta} & \frac {\partial (\rho \sin \theta \sin \phi)} {\partial \phi}\\ \frac {\partial (\rho \cos \theta)} {\partial \rho} & \frac {\partial (\rho \cos \theta)} {\partial \theta} & \frac {\partial (\rho \cos \theta)} {\partial \phi} \end{bmatrix} \begin{bmatrix} A_r\\ A_\theta\\ A_\phi \end{bmatrix} $$ ...which evaluates to the correct general transformation matrix I listed above.
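For what it's worth (my addition), the matrix of partials described above can be generated symbolically; a short sympy sketch:

```python
import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi', positive=True)

# Cartesian coordinates as functions of the spherical ones.
X = sp.Matrix([rho * sp.sin(theta) * sp.cos(phi),
               rho * sp.sin(theta) * sp.sin(phi),
               rho * sp.cos(theta)])

J = X.jacobian([rho, theta, phi])    # the matrix of partials  d x^i / d u^j
sp.pprint(sp.simplify(J))
```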
Get the elements of the Jacobian matrix, then divide them by their respective scale factors.
A question on a compact space Show: If the closure of every discrete subset of a space is compact, then the whole space is compact. Thanks in advance :)
I realize that this is homework, but I don’t see any way to give a useful hint. Let $X$ be a Hausdorff space in which the closure of every discrete subset is compact. Clearly $X$ contains no infinite closed subset and is therefore countably compact. Suppose that $X$ is not compact. Then there are a regular uncountable cardinal $\kappa$ and an open cover $\mathscr{U}=\{U_\xi:\xi<\kappa\}$ such that $U_\xi\subsetneqq U_\eta$ whenever $\xi<\eta<\kappa$, and $\mathscr{U}$ has no finite subcover. For each $\xi<\kappa$ let $x_\xi\in U_{\xi+1}\setminus U_\xi$, and let $Y=\{x_\xi:\xi<\kappa\}$; clearly $Y$ is right-separated (i.e., initial segments are relatively open). Let $\mathscr D=\{D\subseteq Y:D\text{ is discrete}\}$. For $D_0,D_1\in\mathscr{D}$ define $D_0\preceq D_1$ iff $D_0\subseteq D_1$, and $\xi<\eta$ whenever $x_\xi\in D_0$ and $x_\eta\in D_1\setminus D_0$. (In other words, $D_0\preceq D_1$ iff $D_0$ is an initial segment of $D_1$ with respect to the well-order on $Y$ induced by the ordinal subscripts.) Clearly $\langle\mathscr{D},\preceq\rangle$ is a partial order. Suppose that $\mathscr{C}$ is a chain in $\langle\mathscr{D},\preceq\rangle$. Let $D=\bigcup\mathscr{C}$, and suppose that $x_\xi\in D$. Fix $C\in\mathscr{C}$ with $x_\xi\in C$; $C$ is discrete, so there is an open nbhd $V$ of $x_\xi$ such that $V\cap C=\{x_\xi\}$. Let $x_\eta$ be any other element of $D$; if $\eta<\xi$, then $x_\eta\in C$, so $x_\eta\notin V$, and if $\eta>\xi$, then $x_\eta\notin U_\xi$. Thus, $V\cap U_\xi$ is an open nbhd of $x_\xi$ that contains no other element of $D$. Since $x_\xi$ was an arbitrary element of $D$, $D$ is discrete, i.e., $D\in\mathscr{D}$. Clearly $C\preceq D$ for all $C\in\mathscr{C}$, so $D$ is an upper bound for $\mathscr{C}$ in $\mathscr{D}$, and by Zorn’s lemma $\mathscr{D}$ has a maximal element $M$. Let $K=\operatorname{cl}M$; $M$ is discrete, so $K$ is compact. Moreover, the maximality of $M$ implies that $M$ is dense in $Y$ and hence that $K\supseteq Y$. $\mathscr{U}$ is an open cover of $K$, so it has a finite subcover, and since $\mathscr{U}$ is an increasing nest of open sets, there is some $\eta<\kappa$ such that $Y\subseteq K\subseteq U_\eta$, which is absurd, since for example $x_{\eta+1}\in Y\setminus U_\eta$. This contradiction shows that $X$ must be compact. (The Zorn’s lemma argument can of course be replaced by a straightforward transfinite recursion to construct $M$.)
Continuing what Bryce said above: it implies every infinite set is not discrete. So given an infinite set there is a limit point $x$ such that every neighbourhood of it contains all the points of the sequence. So $x$ is a complete accumulation point. Hence every infinite subset of $X$ has a complete accumulation point. Hence $X$ is compact.
Non-trivial continuous limits of repeated iteration This is not an exercise in any book, I've just thought for a while about this idea and couldn't figure it out, so I thought people at Math.SE might help me. Define a function transform as follows. For a function $f$ and each x in its domain, define a T-transform $T(f)(x)$ to be the limit of the sequence formed by repeated iteration with $f$ of x $( x, f(x), f(f(x)), f(f(f(x))), ...)$, and if the sequence doesn't converge, define $T(f)(x)$ to be zero. Are the only possible continuous T-transforms of functions from reals to reals constant functions and the identity function? In other words, is there a non-trivial continuous T-transform of some function? I suspect there is none, but I don't know how to prove it.
There are functions $f$ with continuous, nontrivial $T(f)$. Let $$ f(x) = \begin{cases}x/2 & x\le 0\\x & 0\le x\le 1\\ (x+1)/2 & x\ge 1\end{cases} $$ Then $$ T(f)(x)= \begin{cases}0& x\le 0\\x & 0\le x\le 1\\ 1 & x\ge 1\end{cases} $$ Edit: Here is another nice example. Let $g(x)$ be a triangle wave, defined by setting $g(4k+1)=1$ and $g(4k+3)=-1$ for all integers $k$, and interpolating linearly between. Then $g$ satisfies $g(g(x))=g(x)$, implying that $T(g)(x)=g(x)$, so $T(g)$ is the same triangle wave.
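A quick numerical check of the first example (my addition): iterating $f$ a few hundred times reproduces the claimed $T(f)$.

```python
def f(x):
    if x <= 0:
        return x / 2
    if x <= 1:
        return x
    return (x + 1) / 2

def T(x, iters=200):      # iterate f many times to approximate the limit of the orbit
    for _ in range(iters):
        x = f(x)
    return x

for x0 in (-3.0, -0.5, 0.25, 0.9, 1.0, 2.5, 10.0):
    print(x0, T(x0))      # about 0 for x0 <= 0, x0 itself on [0, 1], about 1 for x0 >= 1
```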
Take $f:x\mapsto|x|$ and we will have $$T(f)(x)=|x|$$ Likewise for $f:x\mapsto-|x|$.
Has error correction been "solved"? I recently came across Dan Piponi's blog post An End to Coding Theory and it left me very confused. The relevant portion is: But in the sixties Robert Gallager looked at generating random sparse syndrome matrices and found that the resulting codes, called Low Density Parity Check (LDPC) codes, were good in the sense that they allowed messages to be transmitted at rates near the optimum rate found by Shannon - the so-called Shannon limit. Unfortunately the computers of the day weren't up to the task of finding the most likely element of M from a given element of C. But now they are. We now have near-optimal error correcting codes and the design of these codes is ridiculously simply. There was no need to use exotic mathematics, random matrices are as good as almost anything else. The past forty years of coding theory has been, more or less, a useless excursion. Any further research in the area can only yield tiny improvements. In summary, he states that the rate of LDPC codes is very near channel capacity—so near that further improvements would not be worth the while. So, my question is: What does modern research in error-correcting codes entail? I noticed that Dan did not mention what channel the rate of LDPC codes approach the capacity of, so maybe there exist channels that LDPC codes don't work well on? What other directions does modern research in the field explore?
This is largely true in the sense that long LDPC codes are within a fraction of a dB of the Shannon limit in AWGN channel. Their design (by a process called density evolution - don't look at me, I don't know the math) also seems to be relatively simple. It depends on stochastic math as opposed to algebra/combinatorics of finite fields/rings. My limited experience only allows me to make the following remarks: The block length of an LDPC code needs to be relatively large. This is one of the reasons they were not chosen to LTE standard (European/Asian 4G cellular standard). My understanding about the `best' code in a radio channel is (as a function of the block length): less than 100 bits => use an algebraic block code, 40-400 bits => use a convolutional code, 300-20000 bits => use a turbo code, 5000 bits+ => use an LDPC code (the intervals overlap as I am not willing to bet the family fortune on any one of them). IIRC this is how it is done in LTE (release 9), except that in that cellular standard the longest data block has 6144 bits (so handled with a turbo code). The design of an LDPC code needs to take into account that it is usually not practical to base it on a random Tanner graph with the prescribed weight distribution. In particular, when it is supposed to be used on a handheld device. The routing problem of the belief propagation algorithm becomes prohibitive otherwise. So the LDPC codes specified for radio communication standards a few years back (DVB-S/T2 in Europe, Mediaflo in the US) were designed around a particular decoding circuitry, or rather the Tanner graphs had to come from a certain kind of a family. I have never been truly up to speed with these aspects, so some progress may have been made. When higher order modulation is used there are further problems and considerations that need to be taken into account. It is not entirely obvious that we should stick to LDPC codes based on bits alone in that case, because the channel will create dependencies among the LLRs of individual bits. But the BP algorithm becomes very complicated, if we want to feed something other than LLRs of individual bits to it. Getting rid of the error floors is still a challenge. Alas, I'm not up to speed with that either. Possible workarounds involve using catenated code with a light weight algebraic outer code handling the residual errors. Caveat: I have only studied this cursorily (enough to implement an LDPC decoder and simulate/analyze some of the codes). These are recollections and impressions rather than hard facts. You can get the minutes of the relevant meetings of workgroup in charge of this within LTE from the web pages of 3GPP (=third generation partnership project). Other types of codes will remain to be useful for specific purposes. There is no substitute to RS (or AG, if you need longer blocks), when you only have hard byte errors. LDPC (and Turbo) codes are designed with soft bit errors (=reliability estimates of received bits) in mind. This is a valid assumption in all wireless communication. Also network coding appears to be a hot topic currently. There the code operates at the level of data packets that may or may not arrive at all. Also look up Raptor code, if this type of problems are of interest to you. Algebraic coding theory is not dead yet, but it does concentrate on problems other than designing ECCs for bulk data transmission. 
An algebraic machinery yielding LDPC code designs guaranteed to have very low error floors (= high minimum Hamming distance) may be out there, but I don't know what it would be.
The problem has been solved since 1993 with the advent of turbo codes. Tribute to Mr Berrou and Glavieux, who discovered those iterative decoders and the concept of exchanging extrinsic information. Then MacKay (unfortunately he just passed away) rediscovered the LDPC codes (Gallager codes). Today the research is focused mainly on which system has the lowest complexity (very important for future networks). Recently polar codes have also been discovered by Arıkan, but I have doubts about how they will beat turbo or LDPC codes.
Calculate limits without L'Hospital's Rule Questions: (A) Calculate $$\lim_{x \to \frac{\pi}{2}} \frac{\cot x}{2x - \pi}$$ (B) Calculate $$\lim_{x ~\to~0^{+}} x^{3}e^{\frac{1}{x}}$$ without using L'Hospital's Rule. Attempted solution: (A) Using the definition of $\cot x$ gives: $$\lim_{x \to \frac{\pi}{2}} \frac{\cot x}{2x - \pi} = \lim_{x \to \frac{\pi}{2}} \frac{\frac{\cos x}{\sin x}}{2x - \pi} = \lim_{x \to \frac{\pi}{2}} \frac{1}{\sin x} \cdot \frac{\cos x}{2x - \pi}$$ Since the first term will turn out to be 1, I carry on without it and making the substitution $y = x - \frac{\pi}{2}$ as well as the fact that $\cos x = \cos (-x)$: $$= \lim_{y \to 0} \frac{\cos (y+\frac{\pi}{2})}{2y} = \lim_{y \to 0} \frac{\cos (-y-\frac{\pi}{2})}{2y}$$ Using the subtraction formula for cosine gives: $$ = \lim_{y \to 0} \frac{\cos(-y) \cos \frac{\pi}{2} + \sin(-y) \sin{\frac{\pi}{2}}}{2y} = \lim_{y \to 0} \frac{\sin(-y)}{2y} =$$ $$ \lim_{y \to 0} \frac{- \sin(y)}{2y} = -\frac{1}{2}\lim_{y \to 0} \frac{\sin(y)}{y} = -\frac{1}{2} \cdot 1 = -\frac{1}{2}$$ Does this look reasonable? (B) I know that the limit does not exist (it "becomes" $\infty$). I figure that $e^{x}$ is continuous function with respect to x and so: $$\lim_{x ~\to~0^{+}} x^{3}e^{\frac{1}{x}} = \lim_{x ~\to~0^{+}} x^{3} \cdot \lim_{x ~\to~0^{+}} e^{\frac{1}{x}} = 0 \cdot e^{\lim_{x ~\to~0^{+}} \frac{1}{x}} $$ I am not sure where to go on from here. Will it be an argument based on $\frac{1}{x}$ growing faster towards infinity than $x^3$ shrinks towards $0$? Or is there a smart algebraic trick that can be used? In many cases, the limit appears to give a "zero times infinity" expression, but after figuring out the secret steps you can change it to something that you can calculate the limit from. How do you know when to stop trying and declare that the limit does not exist?
Your first calculation looks good as far as I can tell. You cannot write $$\lim_{x\to0^+}x^3e^{1/x}=\lim_{x\to0^+}x^3\cdot\lim_{x\to0^+}e^{1/x}$$ because the limit of a product should only be said to equal the product of the limits when all three limits exist. In general, one or both limits on the right might not exist and yet the limit on the left could still exist. It might help to let $z=1/x$, and note that $$\lim_{x\to0^+}x^3e^{1/x}=\lim_{z\to\infty}\frac{e^z}{z^3}$$ Then can you show that for large $z$, $e^z>cz^4$? One method is to: show that $e^x>1$ for large enough $x$; show that for some $x_0$, $e^{x_0}>x_0$, and then, since we have already established that $e^x$ has a larger derivative than $x$, it follows that for large enough $x$, $e^x>x$; repeat to show $e^x>\frac{1}{2}x^2$; repeat to show $e^x>\frac{1}{6}x^3$; repeat to show $e^x>\frac{1}{24}x^4$. So $$\lim_{x\to0^+}x^3e^{1/x}=\lim_{z\to\infty}\frac{e^z}{z^3}>\lim_{z\to\infty}\frac{cz^4}{z^3}=\infty$$
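A numerical sanity check of both limits (my addition):

```python
import math

# (A)  cot(x) / (2x - pi)  as  x -> pi/2   (approaches -1/2)
for h in (1e-2, 1e-4, 1e-6):
    x = math.pi / 2 + h
    print((math.cos(x) / math.sin(x)) / (2 * x - math.pi))

# (B)  x^3 * exp(1/x)  as  x -> 0+   (grows without bound)
for x in (0.1, 0.05, 0.02):
    print(x, x ** 3 * math.exp(1 / x))
```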
(A) To prove $\displaystyle\lim_{x \to 0} \frac{\sin x}{x} = 1$, just look at the wedge of a circle and notice that the arc and the height are about the same. With an appropriate change of angles you can get what you need. I think in your case it might be useful that $\displaystyle \lim_{x \to 0} \frac{\tan x }{x} =1$
How to prove that $(a^m)^n=a^{mn}$ where $a,m,n$ are real numbers and a>0? I know how to prove the equality when $m$ is a rational number and $n$ is an integer, but do not know how to go about proving this for real numbers. On a semi-related note, I was trying to prove this when both $m$ and $n$ are rational, and found out that I have to prove that $(\frac{1}{z})^{\frac{1}{y}}$=$\frac{1}{z^{\frac{1}{y}}}$. Does this need to be proven or can I accept it as a definition?
The very first thing you need to do is ask yourself what the definitions are. Without proper definitions, you'll never have a complete proof. So, if $a>0$ and $m\in \Bbb{R}$, how are you even supposed to define $a^m$? This is not at all a trivial task. For example, here's one possible approach to things: First, define $\exp: \Bbb{R} \to \Bbb{R}$ by $\exp(x):= \sum_{n=0}^{\infty}\frac{x^n}{n!}$. You of course have to check that this series converges for every $x\in \Bbb{R}$. Check basic properties of $\exp$, such as $\exp(0) = 1$ and for all $x,y \in \Bbb{R},$ $\exp(x+y) = \exp(x)\cdot \exp(y)$. Also, verify that $\exp:\Bbb{R} \to (0,\infty)$ is an invertible function. Since $\exp:\Bbb{R} \to (0,\infty)$ is invertible, we can consider its inverse function, which we denote as $\log:(0,\infty) \to \Bbb{R}$. Then, verify all the basic properties of $\log$, such as for all $a,b>0$, $\log(ab) = \log(a) + \log(b)$. Finally, given $a>0$ and $m\in \Bbb{R}$, we define $a^m := \exp(m \log(a))$. From this point, it is a simple matter to use the various properties of the exponential and logarithmic functions: for any $a>0$ and $m,n \in \Bbb{R}$ \begin{align} a^{m+n} &:= \exp((m+n)\log (a)) \\ &= \exp[m\log(a) + n \log (a)]\\ &= \exp[m \log(a)] \cdot \exp[n\log(a)] \\ &:= a^m \cdot a^n \tag{$*$} \end{align} Similarly, \begin{align} (a^m)^n &:= \exp[n \log(a^m)] \\ &:= \exp[n \log(\exp(m \log(a)))] \\ &= \exp[nm \log(a)] \tag{since $\log \circ \exp = \text{id}_{\Bbb{R}}$} \\ &:= a^{nm} \\ &= a^{mn} \end{align} where in the last line, we make use of commutativity of multiplication of real numbers. Note that steps 1,2,3 are not at all trivial, and indeed there are entire chapters of calculus/analysis textbooks devoted to proving these facts carefully. So, while I only listed out various statements, if you want the proofs for the statements I made, you should take a look at any analysis textbook, for example, Rudin's Principles of Mathematical Analysis, or Spivak's Calculus (I recall Spivak motivating these things pretty nicely). As for your other question, yes it is something which needs to be proven. This result can be easily deduced from two other facts. For any $x\in \Bbb{R}$, $1^x = 1$. (proof: $1^x := \exp[x \log(1)] = \exp[0] = 1$) For any $a,b > 0$ and $x\in \Bbb{R}$, $(ab)^x = a^x b^x$. The proof is a few lines, once you use the properties of $\exp$ and $\log$. Now, if $z>0$, then for any $x\in \Bbb{R}$, \begin{align} z^x \cdot \left(\frac{1}{z}\right)^x &= \left(z\cdot \frac{1}{z}\right)^x = 1^x = 1 \end{align} Hence, $\left(\frac{1}{z}\right)^x = \frac{1}{z^x}$. In particular, you can take $x=1/y$ to prove what you wanted. Edit: Motivating the definition $a^x := \exp(x\log(a))$, for $a>0, x \in \Bbb{R}$. The long story short: this definition is unique in certain sense, and is almost forced upon us once we impose a few regularity conditions. Now, let me once again stress that you should be careful to distinguish between definitions, theorems and motivation. Different authors have different starting points, so Author 1 may have one set definitions and motivations, and hence different theorems, while author 2 may have a completely different set of definitions, and hence have different theorems, and motivation. So, let's start with some motivating remarks. Fix a number $a>0$. Then, we usually start start by defining $a^1 = a$. 
Next, given a positive integer $m\in \Bbb{N}$, we define $a^m = \underbrace{a\cdots a}_{\text{$m$ times}}$ (If you want to be super formal, then ok, this is actually a recursive definition: $a^1:= 1$, and then for any integer $m\geq 2$, we recursively define $a^{m}:= a\cdot a^{m-1}$). Now, at this point what we observe from the definition is that for any positive integers $m,n\in \Bbb{N}$, we have $a^{m+n} = a^m \cdot a^n$. The proof of this fact follows very easily by induction. Next, we typically define $a^0 = 1$. Why do we do this? One answer is that it is a definition, so we can do whatever we want. Another answer, is that we are almost forced to do so. Why? notice that for any $m\in \Bbb{N}$, we have $a^m = a^{m+0}$, so if we want this to be equal to $a^m \cdot a^0$, then we had better define $a^0 = 1$. Next, if $m>0$ is an integer, then we usually define $a^{-m} := \dfrac{1}{a^{m}}$. Once again, this is just a definition, so we can do whatever we want. The motivation for making this definition is that we have $1 =: a^0 = a^{-m+m}$ for any positive integer $m$. So, if we want the RHS to equal $a^{-m}\cdot a^m$, then we had better define $a^{-m}:= \frac{1}{a^m}$. Similarly, if $m>0$, then we define $a^{1/m} = \sqrt[m]{a}$ (assuming you've somehow proven existence of $m^{th}$ roots of positive real numbers). Again, this is just a definition. But why do we do this? Because we have $a =: a^1 = a^{\frac{1}{m} + \dots +\frac{1}{m}}$, so if we want the RHS to equal $(a^{\frac{1}{m}})^m$, then of course, we had better define $a^{1/m}:= \sqrt[m]{a}$. Finally, we define $a^{\frac{m}{n}}$, for $m,n \in \Bbb{Z}$ and $n >0$ as $a^{m/n} = (a^{1/n})^m$. Once again, this is just a definition, so we can do whatever we want, but the reason we do this is to ensure the equality $a^{m/n} = a^{1/n + \dots + 1/n} = (a^{1/n})^m$ is true. Now, let's think slightly for what we have done. We started with a number $a>0$, and we defined $a^1 := a$, and we managed to define $a^x$ for every rational number $x$, simply by the requirement that the equation $a^{x+y} = a^x a^y$ hold true for all rational $x,y$. So, if you actually read through everything once again, what we have actually done is shown the following theorem: Given $a>0$, there exists a unique function $F_a:\Bbb{Q} \to \Bbb{R}$ such that $F_a(1) = a$, and such that for all $x,y\in \Bbb{Q}$, $F_a(x+y) = F_a(x)\cdot F_a(y)$. (Note that rather than writing $a^x$, I'm just writing $F_a(x)$, just to mimic the function notation more) Our motivation has actually been to preserve the functional equation $F_a(x+y) = F_a(x)\cdot F_a(y)$ as much as possible. Now, we can ask whether we can extend the domain from $\Bbb{Q}$ to $\Bbb{R}$, while preserving the functional equation, and if such an extension is unique. If the answer is yes, then we just define $a^x := F_a(x)$ for all real numbers $x$, and then we are happy. It turns out that if we impose a continuity requirement, then the answer is yes; i.e the following theorem is true: Given $a>0$, there exists a unique continuous function $F_a:\Bbb{R} \to \Bbb{R}$ such that $F_a(1) = a$, and such that for all $x,y\in \Bbb{R}$, $F_a(x+y) = F_a(x)\cdot F_a(y)$. Uniqueness is pretty easy (because $\Bbb{Q}$ is dense in $\Bbb{R}$ and $F_a$ is continuous). The tough part is showing the existence of such an extension. 
Of course, if you already know about the $\exp$ function and its basic properties like 1,2,3, then you'll see that the function $F_a:\Bbb{R} \to \Bbb{R}$ defined by $F_a(x):= \exp(x \ln(a))$ has all the nice properties (i.e is continuous, it satisfies that functional equation, and $F_a(1) = a$). Because of this existence and uniqueness result, this is the only reasonable way to define $a^x \equiv F_a(x) := \exp(x \log(a))$; anything other than this would be pretty absurd. The purpose of the rest of my answer is to try to motivate how anyone could even come up with the function $F_a(x) = \exp(x\ln(a))$; sure the existence and uniqueness result is very nice and powerful, but how could you try to come up with it by yourself? This certainly doesn't come from thin air (though at some points we have to take certain leaps of faith, and then check that everything works out nicely). To do this, let's start with a slightly more restrictive requirement. Let's try to find a function $f:\Bbb{R} \to \Bbb{R}$ with the following properties: for all $x,y\in\Bbb{R}$, $f(x+y) = f(x)\cdot f(y)$ $f$ is non-zero; i.e there exists $x_0\in \Bbb{R}$ such that $f(x_0) \neq 0$. $f$ is differentiable at $0$. The first two conditions seem reasonable, but the third one may seem a little strange, but let's just impose it for now (it's mainly there to try to motivate things and hopefully simlify the argument and to convince you that $x\mapsto \exp(x\ln(a))$ didn't come from thin air). First, we shall deduce some elementary consequences of properties 1,2,3: In (2), we assumed $f$ is non-zero at a single point. We'll now show that $f$ is no-where vanishing, and that $f(0)=1$. Proof: we have for any $x\in\Bbb{R}$, $f(x) \cdot f(x_0-x) = f(x_0) \neq 0$. Hence, $f(x) \neq 0$. In particular, $f(0) = f(0+0) = f(0)^2$. Since $f(0)\neq 0$, we can divide it on both sides to deduce $f(0) = 1$. We also have for every $x \in \Bbb{R}$, $f(x)>0$. Proof: We have \begin{align} f(x) = f(x/2 + x/2) = f(x/2)\cdot f(x/2) = f(x/2)^2 > 0, \end{align} where the last step is because $f(x/2) \neq 0$ (this is why in real analysis, we always impose the condition $a = f(1) > 0$). $f$ is actually differentiable on $\Bbb{R}$ (not just at the origin). This is because for $t\neq 0$, we have \begin{align} \dfrac{f(x+t) - f(x)}{t} &= \dfrac{f(x)\cdot f(t) - f(x) \cdot f(0)}{t} = f(x) \cdot \dfrac{f(0+t) - f(0)}{t} \end{align} now, the limit as $t\to 0$ exists by hypothesis since $f'(0)$ exists. This shows that $f'(x)$ exists and $f'(x) = f'(0) \cdot f(x)$. As a result of this, it immediately follows that $f$ is infinitely differentiable. Now, we consider two cases. Case ($1$) is that $f'(0) = 0$. Then, we have $f'(x) = 0$ for all $x$, and hence $f$ is a constant function, $f(x) = f(0) = 1$ for all $x$. This is clearly not very interesting. We want a non-constant function with all these properties. So, let's assume in addition that $f'(0) \neq 0$. With this, we have that $f'(x) = f'(0)\cdot f(x)$; this is a product of a non-zero number and a strictly positive number. So, this means the derivative $f'$ always has the same sign. So, $f$ is either strictly increasing or strictly decreasing. Next, notice that $f''(x) = [f'(0)]^2 f(x)$, is always strictly positive; this coupled with $f(x+y) = f(x)f(y)$ implies that $f$ is injective and has image equal to $(0,\infty)$. i.e $f:\Bbb{R} \to (0,\infty)$ is bijective. Theorem 1. 
Let $f:\Bbb{R} \to \Bbb{R}$ be a function such that: for all $x,y\in \Bbb{R}$, $f(x+y) = f(x)f(y)$ $f$ is non-zero $f$ is differentiable at the origin, with $f'(0) \neq 0$ Suppose $g:\Bbb{R} \to \Bbb{R}$ is a function which also satisfies all these properties. Then, there exists a number $c\in \Bbb{R}$ such that for all $x\in \Bbb{R}$, $g(x) = f(cx)$. In other words, such functions are uniquely determined by a constant $c$. Conversely, for any non-zero $c\in \Bbb{R}$, the function $x\mapsto f(cx)$ satisfies the three properties above. Proof To prove this, we use a standard trick: notice that \begin{align} \dfrac{d}{dx}\dfrac{g(x)}{f(cx)} &= \dfrac{f(cx) g'(x) - g(x) cf'(cx)}{[f(cx)]^2} \\ &= \dfrac{f(cx) g'(0) g(x) - g(x) c f'(0) f(cx)}{[f(cx)]^2} \\ &= \dfrac{g'(0) - c f'(0)}{f(cx)} \cdot g(x) \end{align} Therefore, if we choose $c = \dfrac{f'(0)}{g'(0)}$, then the derivative of the function on the LHS is always zero. Therefore, it must be a constant. To evaluate the constant, plug in $x=0$, and you'll see the constant is $1$. Thus, $g(x) = f(cx)$, where $c= \frac{g'(0)}{f'(0)}$. This completes the proof of the forward direction. The converse is almost obvious Remark Notice also that from $g(x) = f(cx)$, by plugging in $x=1$, we get $g(1) = f(c)$, and hence $c = (f^{-1} \circ g)(1) = \frac{g'(0)}{f'(0)}$ (recall that we already stated that such functions are invertible from $\Bbb{R} \to (0,\infty)$). It is this relation $c = (f^{-1} \circ g)(1)$, which is the key to understanding where $x\mapsto \exp(x\ln(a))$ comes from. We're almost there. Now, once again, just recall that we have been assuming the existence of a function $f$ with all these properties. We haven't proven the existence yet. Now, how do we go about trying to find such a function $f$? Well, recall that we have the fundamental differential equation $f'(x) = f'(0) f(x)$. From this, it follows that for every positive integer $n$, $f^{(n)}(0) = [f'(0)]^n$. We may WLOG suppose that $f'(0) = 1$ (other wise consider the function $x\mapsto f\left(\frac{x}{f'(0)}\right)$), then we get $f^{(n)}(0) = 1$. Finally, if we make the leap of faith that our function $f$ (which is initially assumed is only differentiable at $0$ with $f'(0) = 1$, and then proved it is $C^{\infty}$ on $\Bbb{R}$) is actually analytic on $\Bbb{R}$, then we know that the function $f$ must equal its Taylor series: \begin{align} f(x) &= \sum_{n=0}^{\infty} \dfrac{f^{(n)}(0)}{n!} x^n = \sum_{n=0}^{\infty}\dfrac{x^n}{n!} \end{align} This is one of the many ways of how one might guess the form of the exponential function, $\exp$. So, we now take this as a definition: $\exp(x):= \sum_{n=0}^{\infty}\frac{x^n}{n!}$. Of course using basic power series techniques, we can show that $\exp$ is differentiable everywhere, and satisfies that functional equation with $\exp(0)=\exp'(0) = 1$. So, now, back to our original problem. Given any $a>0$, we initially wanted to find a function $F_a:\Bbb{R} \to \Bbb{R}$ such that $F_a$ satisfies the functional equation, and $F_a(1) = a$, and such that $F_a$ is differentiable at $0$ with $F_a'(0) \neq 0$. Well, in this case, both $F_a$ and $\exp$ satisfy the hypothesis of theorem 1. Thus, there exists a constant $c \in \Bbb{R}$ such that for all $x\in \Bbb{R}$, $F_a(x) = \exp(cx)$. To evaluate the constant $c$, we just plug in $x=1$, to get $c = (\exp^{-1}\circ F_a)(1) := \log(a)$. Therefore we get $F_a(x) = \exp(x \log(a))$. This is why we come up with the definition $a^x := \exp(x\log(a))$.
The rule is merely a specific case of the addition law of indices/powers. Consider: $$(a^m)^n=a^m\times a^m \times a^m \times a^m \times a^m...\times a^m$$ where there are $n$ lots of $a^m$ multiplied together. We have the law that $$a^p \times a^q=a^{p+q}$$ Applying this to your question we have: $$(a^m)^n=a^{m+m+m+m+m...+m}$$ where we have $n$ lots of $m$. Now, $n$ lots of $m$ is equal to $n\times m$ which is equal to $mn$. So, finally, we have that $$(a^m)^n=a^{mn}$$ as required. Edit: Since writing the above proof for positive integer exponents I have been shown a proof that $\ln {a^x}=x\ln a $ without using what we are trying to prove in the question itself, which allows us to use the laws of logs etc to answer the question, as previous answers have done: Consider $$f(x)=\ln {x^n} -n\ln x\implies f'(x)=\frac{nx^{n-1}}{x^n}-\frac{n}{x}=\frac{n}{x}-\frac{n}{x}=0$$ This means that $f(x)$ must be equal to some constant, $c$, as only constants diiferentiate to $0$. Let's try to find $c$: We have $$\ln {x^n} -n\ln x=c$$ Let $x=1$: $$\ln1-n\ln1=c=0-0=0$$ So we have $c=0$, leaving us with $$\ln {x^n} =n\ln x$$ For the sake of completeness, I'll now go on to answer the question for all real exponents. Note that $\ln {x^n} =n\ln x$ is true for all $n\in\mathbb R$. Let $x=a^m$: $$e^{\ln {x^n}}=e^{\ln {(a^m)^n}}=(a^m)^n$$ But we also have $$e^{\ln {x^n}}=e^{n\ln {x}}=e^{n\ln {a^m}}=e^{mn\ln {a}}=e^{\ln {a^{mn}}}=a^{mn}$$ So, at last, we have: $$(a^m)^n=a^{mn}$$. Thanks to peek-a-boo, among others, who attempted to make me understand this in more detail.
Questions regarding The Fundamental Theorem of Calculus FTC1 suggested that $$ F'(x) = \frac{d}{dx}\int_a^xf(t)dt = f(x) $$ and the Chain Rule says that $$ \frac{\mathrm{d}}{\mathrm{d}x}F\big(g(x)\big)=F'\big(g(x)\big)·g'(x). $$ That leads to $$ \frac{\mathrm{d}}{\mathrm{d}x}F\big(g(x)\big) = g'(x)·\frac{\mathrm{d}}{\mathrm{d}g(x)} \int\limits_a^{g(x)}\!\!f(t)\,\mathrm{d}t = f\big(g(x)\big)·g'(x) $$ Is it right to combine these two? If it is, what should I do if I want to find $F'(x)$ instead of $\frac{\mathrm{d}}{\mathrm{d}x}F\big(g(x)\big)$ in the combined equation? I'm actually confused in the following question: f(t) = $ \int\limits_2^t( {\sqrt {\frac{7}{4}+u^3}}) du $ F(x) = $ \int\limits_1^{\sin x}f(t)dt $ Find: $F''(\pi)$
Leibniz integral rule is what you need $$\frac d{dx}F(x)=\frac d{dx}\int_{g(x)}^{h(x)}f(t)dt =f(h(x))h'(x) - f(g(x))g'(x)$$ $$F'(x)=f(\sin x)\cdot\cos(x)=\cos(x)\int_2^{\sin(x)}\sqrt{u^3+7/4}\ \ du$$ Can you take it from here. PS: I see you are trying to derive this formula. Look at the wikipedia page for the proofs.
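If I differentiate $F'(x)=\cos x\int_2^{\sin x}\sqrt{u^3+7/4}\,du$ once more and set $x=\pi$, the integral term is killed by $\sin\pi=0$ and what remains is $\cos^2(\pi)\sqrt{7/4}=\sqrt7/2\approx1.3229$. A finite-difference check with nested numerical integration (my addition, assuming scipy is available) agrees:

```python
import numpy as np
from scipy.integrate import quad

def f(t):        # f(t) = integral from 2 to t of sqrt(7/4 + u^3) du
    return quad(lambda u: np.sqrt(7 / 4 + u ** 3), 2, t)[0]

def F(x):        # F(x) = integral from 1 to sin(x) of f(t) dt
    return quad(f, 1, np.sin(x))[0]

h = 1e-2         # central-difference estimate of F''(pi)
print((F(np.pi + h) - 2 * F(np.pi) + F(np.pi - h)) / h ** 2)
print(np.sqrt(7) / 2)     # the exact value, about 1.3229
```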
Hint: You know how to differentiate $F(g(x)).$ You want to differentiate $F(x).$ That's just the first one when $g(x)=x.$ I see you've seen that without sufficient context, you may be unclear about what you want, therefore misleading potential answerers about your real problem. Here, instead of being all vague about arbitrary functions, you should have noted right from the get go what it was that was bothering you about a specific question. You only added this later after probably realising that no one seems to understand what you mean. I hope this is a lesson for next time. In any case, to explain your difficulty, you want to find $F''(π),$ where $$F(x)=\int_1^{\sin x}{f(t)\mathrm d t},$$ and $$f(t)=\int_2^t{\sqrt{\frac74 + u^3}\mathrm d u}.$$ Now, $$F'(x)=(\sin x)'f(\sin x)=\cos x\int_2^{\sin x}{\sqrt{\frac74 + u^3}\mathrm d u}.$$ Thus, we have that $$F''(x)=(\cos x)'\int_2^{\sin x}{\sqrt{\frac74 + u^3}\mathrm d u}+(\cos x)(\sin x)'\sqrt{\frac74 + \sin^3x}=-\sin x\int_2^{\sin x}{\sqrt{\frac74 + u^3}\mathrm d u}+(\cos x)^2\sqrt{\frac74 + \sin^3x}.$$ Now set $x=π.$
Probability of Running Around Laps The question is as follows: Jack is supposed to run laps around the outdoor track. At the start of each lap, including the first, there is always an $8$ percent chance that Jack will not run for the day. What is the probability that Jack will run (a) no laps? (b) at least four laps? (c) exactly four laps? (a) The chances of running no laps is $8$%, or $\frac{2}{25}$. (b) The chances of running at least four laps is $(\frac{23}{25})^4$. I am not completely sure about (c), especially in making sure whether he will run the exact four laps. If there are any errors, please let me know. Any help will be greatly appreciated.
You are correct so far. For c, he needs to run at least four laps, which is your answer to b, and then run no more laps, which is your answer to a.
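For concreteness (my addition), the exact values as fractions:

```python
from fractions import Fraction

p_skip = Fraction(2, 25)    # 8% chance, checked at the start of each lap, of not running
p_run = 1 - p_skip          # 23/25

print(p_skip)                       # (a) no laps
print(p_run ** 4)                   # (b) at least four laps
print(p_run ** 4 * p_skip)          # (c) exactly four laps
print(float(p_run ** 4 * p_skip))   # about 0.0573
```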
In order to run exactly 4 laps he must run a) the first lap which has probability, as you say, 2/25 b) the second lap which also has probability 2/25 c) the third lap which also has probability 2/25 d) the fourth lap which also has probability 2/25 e) NOT run the fifth lap which has probability 23/25. The probability of running exactly 4 laps is $\left(\frac{2}{25}\right)^4\left(\frac{23}{25}\right)$
Limit of integration can't be the same as variable of integration? I am told that an expression like $$ \int_a^x f(x)dx $$ is not well formed, i.e. it should be $$ \int_a^xf(t)dt $$ or similar. Why is it that the limits of integration can't depend on the variable of integration?
In mathematics, it's generally regarded as a bad idea for the same symbol to have two different meanings in the same expression. In this case, the variable being integrated with respect to effectively disappears, and a new variable (really two new variables, the bounds of integration) takes over. To call them the same thing can make things confusing sometimes (although not always). This is more of a stylistic than a strictly logical concern, at least in one variable.
your question is very valid and I see all the answers posted say a very comfortable no. I can give a physical situation where the limit is also a function of the variable in the integral (one which I am trying to solve myself). It goes like this: The solar radiation intensity ($W/m^2$) at a point on the surface of the earth depends on the angle (theta) between the incident ray and the normal (radius vector) at the point. The exercise is to find the total radiant energy ($J/m^2$) over the period of a year. Now, theta as a function of time varies continuously and the limit of integration would cover the longitudinal sweep from sunrise to sunset. Bear in mind that this longitudinal sweep of daylight (from sunrise to sunset) varies with latitude of the point and also the time of year; thus the daily limit of integration is also a function of time (and this has to be summed up over a year). I hope the problem statement is clear and I have worked out the angle relationships between theta, latitude, tilt of earth axis and angle swept by the earth in its orbit around the sun (taking the northern hemisphere summer solstice as the starting point). It is a very practical problem and I'm sure that a solution to this exists that may have been discovered or is yet to be discovered.
Application of Jensen's Inequality-Positive Definite Matrices-Probability I'm given the question for $A$ positive definite matrix in $\mathbb{R}^n$, use Jensen's inequality to prove $(x^TAx)(x^TA^{-1}x)\geq1$ for unit vectors $x\in\mathbb{R}^n$. The hint is to think about $A$ diagonal and then diagonalize $A$ in the general case. (If $A$ is diagonal, $(x^TAx)(x^TA^{-1}x)=1$, but positive definite does not imply diagonalizable). I know $x^TAx>0$ for all $x\in\mathbb{R}^n$. But in order to use Jensen's inequality, I need to have a convex function. Here is what I have thought about: Let $g(y)=1/y$ is a convex function, then $g(\mathbb{E}y)\leq\mathbb{E}(g(y))$. I want then to get $y=(x^TAx)(x^TA^{-1}x)$ to see $\frac{1}{(x^TAx)(x^TA^{-1}x)}\leq 1$. I know this does not work, but it is how I am trying to use Jensen.
Hint: If $D$ is diagonal with elements $d_1,d_2\ldots, d_n$, then $D^{-1}$ is diagonal with elements $\frac1{d_1},\ldots,\frac1{d_n}$. Then $$x^TDx = \sum_i d_ix_i^2 = E(Y)$$ where $Y$ takes value $d_i$ with probability $x_i^2$. (Note that $x$ has norm 1, so the probabilities add up to 1.) What is the corresponding statement for $x^TD^{-1}x$? Now apply Jensen's inequality $Ef(Y)\ge f(E(Y))$ with a suitable choice of $f$. Spoiler below: $f(y)=\frac1y$.
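A quick random numerical check of the inequality (my addition):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    M = rng.standard_normal((4, 4))
    A = M @ M.T + 4 * np.eye(4)              # a symmetric positive definite matrix
    x = rng.standard_normal(4)
    x /= np.linalg.norm(x)                   # unit vector
    print((x @ A @ x) * (x @ np.linalg.inv(A) @ x))   # always >= 1
```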
Fix $x$, the map $B\xrightarrow{\quad\pi\quad}\left\langle x,Bx\right\rangle $ is positive definite on $M\left(n,\mathbb{C}\right)$. So by Cauchy-Schwarz inequality, \begin{equation} \left|\pi\left(A^{1/2}A^{-1/2}\right)\right|^{2}\leq\pi\left(A^{1/2}A^{1/2}\right)\pi\left(A^{-1/2}A^{-1/2}\right), \end{equation} i.e., $\left\langle x,Ax\right\rangle \left\langle x,A^{-1}x\right\rangle \geq1$, with $\left\Vert x\right\Vert =1$.
projective geometry and projective space Let $V$ be a vector space over the field $F_q$; we denote the set of all subspaces of $V$ by $\mathcal{P}(V)$. I saw some references that call $\mathcal{P}(V)$ a projective space and some references that call it a projective geometry. What is the difference between a projective geometry and a projective space?
A geometry is defined as a structure $G=(\Omega,I)$ consisting of a set $\Omega$ and a relation $I$. That definition is pretty abstract and can be used on a whole variety of objects and relations. Once you take points and lines as the elements of $\Omega$ and incidence as the relation $I$, you get projective geometry. Now we introduce the axioms of projective geometry (which I am not going to write here) and we can define a projective space as a geometry $G=(\Omega,I)$ which satisfies the given axioms. Beutelspacher, Albrecht; Rosenbaum, Ute, Projective geometry. From foundations to applications, Vieweg Studium 41. Aufbaukurs Mathematik. Braunschweig: Vieweg (ISBN 3-528-17241-X/pbk). x, 265 p. (2004). ZBL1050.51001.
Projective geometry is the study of projective spaces.
Find $\lim_{x\to ∞} (\sqrt[3]{x^{3}+3x^{2}}-\sqrt{x^{2}-2x})$ without L'Hopital or Taylor series. $$\large \lim_{x\to ∞} (\sqrt[3]{x^{3}+3x^{2}}-\sqrt{x^{2}-2x})$$ My try is as follows: $$\large \lim_{x\to ∞} (\sqrt[3]{x^{3}+3x^{2}}-\sqrt{x^{2}-2x})=$$$$ \lim_{x\to ∞}x\left(\sqrt[3]{1+\frac{3}{x}}-\sqrt{1\ -\frac{2}{x}}\right)$$$$=\lim_{x\to ∞}x\lim_{x\to ∞}\left(\sqrt[3]{1+\frac{3}{x}}-\sqrt{1\ -\frac{2}{x}}\right)$$ which is $∞×0$ , but clearly this zero is not exactly zero. I was thinking about generalized binomial theorem, but seems it will make the limit difficult, so how this kind of limits can be solved without using Taylor series or L'Hopital's rule?
We first note that for any positive integer $n$ and any real $a$, $$\lim_{x\to \infty}x\left(\sqrt[n]{1+\frac{a}{x}}-1\right)= \lim_{s\to 1}a\frac{s-1}{s^n-1}=\lim_{s\to 1}\frac{a}{s^{n-1}+s^{n-2}+\dots +s +1}=\frac{a}{n}$$ where $s=\sqrt[n]{1+a/x}$ and therefore $a/x=s^n-1$, and $x=a/(s^n-1)$. Hence, from your work, we split the limit in two: $$\begin{align}\lim_{x\to +\infty} (\sqrt[3]{x^{3}+3x^{2}}-\sqrt{x^{2}-2x}) &=\lim_{x\to +\infty}x\left(\sqrt[3]{1+\frac{3}{x}}-\sqrt{1\ -\frac{2}{x}}\right) \\&=\lim_{x\to +\infty}x\left(\sqrt[3]{1+\frac{3}{x}}-1\right)-\lim_{x\to \infty}x\left(\sqrt[2]{1 +\frac{-2}{x}}-1\right)\\&=\frac{3}{3}-\frac{-2}{2}=1+1=2. \end{align}$$ P.S. Note that on the other side, $$\lim_{x\to -\infty} (\sqrt[3]{x^{3}+3x^{2}}-\sqrt{x^{2}-2x})=-\infty$$
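A numerical check (my addition) that the limit is indeed $2$ as $x\to+\infty$:

```python
# (x^3 + 3x^2)^(1/3) - (x^2 - 2x)^(1/2) for increasingly large x
for x in (1e2, 1e4, 1e6):
    print(x, (x ** 3 + 3 * x ** 2) ** (1 / 3) - (x ** 2 - 2 * x) ** 0.5)
```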
From OP's work $$L=\lim_{x \rightarrow \infty} x \left[ \left( 1+\frac{3}{x} \right)^{1/3}-\left( 1-\frac{2}{x} \right)^{1/2}\right]= \lim_{x \rightarrow \infty} x \left[ \left( 1+\frac{3}{3x} \right)-\left( 1-\frac{2}{2x} \right)\right]= \lim_{x \rightarrow \infty}x \frac{2}{x}=2. $$
Is there any sequence of functions in $C_c (\mathbb R)$ which converges pointwise only to $e^{-x^2}$ This question has been posted earlier. I could not understand the solution. Can anyone please help me to understand the solution? Let $C_c(\mathbb{R})$ = { $f : \mathbb{R} \rightarrow \mathbb{R}$ | $f$ is continuous and there exists a compact set $K$ such that $f(x) = 0$ for all $x \in K^c$ }. Let $g(x) = e^{-x^2}$ for all $x \in \mathbb{R}$. Which of the following statements is true? 1. There exists a sequence $\{f_n\}$ in $C_c(\mathbb{R})$ such that $f_n \rightarrow g$ uniformly. 2. There exists a sequence $\{f_n\}$ in $C_c (\mathbb{R})$ such that $f_n \rightarrow g$ pointwise. 3. If a sequence in $C_c (\mathbb{R})$ converges pointwise to $g$ then it must converge uniformly to $g$. 4. There does not exist any sequence in $C_c (\mathbb{R})$ converging pointwise to $g$. I cannot understand the solution of 3. Can anyone please make me understand? The given solution is: False. Start from $f_n$ but add a function $\psi_n\in C_c^{\infty}(\mathbb{R})$ with $\operatorname{supp}\psi_n\subset [n+1,n+2]$, and such that $\psi_n(\xi)=-1$ for some $\xi\in (n+1,n+2)$. Then $g_n:=f_n+\psi_n\in C_c^{\infty}(\mathbb{R})$ and $g_n(x)\to e^{-x^2}$ for all $x\in \mathbb{R}$, but $g_n$ does not converge uniformly to $e^{-x^2}$ because $$\sup_{x\in \mathbb{R}}|g_n(x)-e^{-x^2}|\geq |g_n(\xi)-e^{-\xi^2}|\geq 1 $$
The point is that you can take a sequence, even one that converges uniformly to $g$, and tweak it in such a way that it still converges pointwise, but not uniformly. Consider first $g=0$, and take $g_n$ to be a "traveling bump". For instance, $$ g_n(t)=\begin{cases} 0,&\ t\not\in[n,n+1]\\ \ \\ 2t-2n,&\ t\in [n,n+1/2]\\ \ \\ -2t+2n+2,&\ t\in [n+1/2,n+1] \end{cases} $$ For any fixed $t$, if $n>t$ then $g_n(t)=0$. So $g_n\to0$ pointwise. But not uniformly, because $$g_n(n+1/2)-g(n+1/2)=1$$ for all $n$. The solution you were given is basically to add the above example to a sequence $f_n$ converging to $e^{-x^2}$.
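To make the "traveling bump" concrete, here is a short Python sketch of my own (the function `g` is just this answer's $g_n$): at any fixed $t$ the values eventually become $0$, while the maximum over $t$ stays at $1$ for every $n$.

```python
def g(n, t):
    # Triangular bump of height 1 supported on [n, n+1]
    if t < n or t > n + 1:
        return 0.0
    return 2*(t - n) if t <= n + 0.5 else -2*(t - n) + 2

t = 0.3                                   # an arbitrary fixed point
print([g(n, t) for n in range(5)])        # zero once n > t, so g_n -> 0 pointwise
print([g(n, n + 0.5) for n in range(5)])  # always 1.0, so sup |g_n - 0| does not go to 0
```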
Quick answer: For Q.1 and Q.2, Yes. The following is a well-known fact in general topology and functional analysis. For any locally compact Hausdorff space $\Omega$, the set $C_c(\Omega)$ is a linear subspace of $C_0(\Omega)$. Moreover, $C_c(\Omega)$ is norm-dense in $C_0(\Omega)$ with respect to the supremum norm. For your particular case, $\mathbb{R}$ is a locally compact Hausdorff topological space with respect to the usual topology and $x\mapsto \exp(-x^2)$ is an element of $C_0(\mathbb{R})$. The result follows immediately.
Sum of coefficients of $x^i$ (Multinomial theorem application) A polynomial in $x$ is defined by $$a_0+a_1x+a_2x^2+ \cdots + a_{2n}x^{2n}=(x+2x^2+ \cdots +nx^n)^2.$$ Show that the sum of all $a_i$, for $i\in\{n+1,n+2, \ldots , 2n\}$, is $$ \frac {n(n+1)(5n^2+5n+2)} {24}.$$ I don't know how to proceed. I know the Multinomial theorem, however, I have problems in applying it. Any help will be appreciated as it will help me understand the theorem well. Thanks!
Here is an easy method using multinomial coefficients. Put $x=1$ to get the sum of all the coefficients. Now, we want to evaluate $\sum_{i=0}^na_i$, then we will subtract that from sum of all coefficients. Observe that these coefficients will remain unaltered even in the following expansion (because the additional terms do not contribute to powers less than $x^{n+1}$): $$(x+2x^2+3x^3+...)^2 = x^2(1+2x+3x^2+...)^2$$ $$ = x^2\Bigg(\frac{1}{(1-x)^2}\Bigg)^2$$ $$ = \frac{x^2}{(1-x)^4}$$ $$ = x^2\sum_{m=0}^\infty\binom{m+4-1}{4-1}x^m$$ Now, apply the identity that $$\sum_{i=k}^n\binom{i}{k} = \binom{n+1}{k+1}$$ and you are done. Hope it helps:)
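If you want to verify the closed form before proving it, here is a small sympy sketch I wrote (assuming sympy is available; variable names are my own) that expands $(x+2x^2+\cdots+nx^n)^2$ for a few small $n$ and compares the tail sum of coefficients with $\frac{n(n+1)(5n^2+5n+2)}{24}$.

```python
import sympy as sp

x = sp.symbols('x')
for n in range(2, 8):
    p = sp.expand(sum(i * x**i for i in range(1, n + 1))**2)
    tail = sum(p.coeff(x, i) for i in range(n + 1, 2*n + 1))   # a_{n+1} + ... + a_{2n}
    closed = sp.Rational(n*(n + 1)*(5*n**2 + 5*n + 2), 24)
    print(n, tail, closed, tail == closed)
```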
You can check the link below; it discusses a problem similar to yours. https://www.mathsdiscussion.com/forum/topic/sum-of-coefficients/?part=1#postid-55
Evaluate $\int \frac{x^2}{x-1} \,dx$ Evaluate $\int \frac{x^2}{x-1} \,dx$ (A) $2x^2+x+\ln|x-1|+C$ (B) $\frac{x^2}{2}+x+\ln|x+1|+C$ (C) $\frac{x^2}{2}+x+\ln|x-1|+C$ (D) $x^2+x+\ln|x-1|+C$ My attempt: Let $u=x-1$, so $du=dx$ and $x=u+1$. $\int \frac{(u+1)^2}{u}\,du\\ =\int u +2+\frac{1}{u}\,du\\ =\frac{u^2}{2}+2u+\ln|u|+C\\ =\frac{(x-1)^2}{2}+2(x-1)+\ln|x-1|+C$ Simplify: $\frac{x^2-3}{2}+x+\ln|x-1|+C$ This is not among the options.
You have the right answer; you just have a different constant. Using $C_1$ instead of $C$, set your answer $$\frac{x^2-3}{2}+x+\ln|x-1|+C_1$$ equal to option (C) and simplify. You'll get $C-C_1=-\frac32$, which is fine since the difference is constant. Now, as an alternative approach to the problem, consider that you could find the derivative of each option and see which one reduces to $\frac{x^2}{x-1}$. Differentiation is typically easier than integration, and with multiple-choice questions it can be helpful to work backwards. As a side note, it isn't really correct to say that $\int\frac1x\ dx=\ln|x|+C$, because of the discontinuity of $\frac1x$ at $0$. It's a bit more nuanced than that: $$ \int\frac1x\ dx = \begin{cases} \ln|x| + C_1, & \text{if $x > 0$} \\ \ln|x| + C_2, & \text{if $x < 0$} \end{cases}$$ So in that sense, even the given answers are not entirely right.
After some rather basic algebraic manipulations, all you're left with is a pretty simple $u$-substitution problem: $$ \begin{align} \int \frac{x^2}{x-1} \,dx &=\int \frac{x^2-1+1}{x-1} \,dx\\ &=\int \left(\frac{x^2-1}{x-1}+\frac{1}{x-1}\right) \,dx\\ &=\int \frac{(x-1)(x+1)}{x-1}\,dx+\int\frac{1}{x-1} \,dx\\ &=\int \left(x+1\right)\,dx+\int\frac{1}{x-1}\frac{d}{dx}\left(x-1\right) \,dx\\ &=\frac{x^2}{2}+x+\int\frac{1}{x-1}\,d\left(x-1\right)\\ &=\frac{x^2}{2}+x+\ln{\left|x-1\right|}+C.\\ \end{align} $$
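As a sanity check on option (C), one can differentiate the candidate antiderivative and confirm it reduces to the integrand; here is a minimal sympy sketch of my own (working on $x>1$ so that the absolute value can be dropped):

```python
import sympy as sp

x = sp.symbols('x')
candidate = x**2/2 + x + sp.log(x - 1)                      # option (C) for x > 1, constant omitted
print(sp.simplify(sp.diff(candidate, x) - x**2/(x - 1)))    # prints 0, so (C) is an antiderivative
```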
Find the point, if any, at which the graph of $f(x) = \sqrt{8x^2+x-3}$ has a horizontal tangent line Section 2.5 #14 Find the point, if any, at which the graph of $f(x) = \sqrt{8x^2+x-3}$ has a horizontal tangent line. Okay, so having a horizontal tangent line at a point on the graph means that the slope of that tangent line is zero. The derivative of a function is another function that tells us the slope of the tangent line at any given point on the graph of the original function. Thus, to find where the graph of $f(x) = \sqrt{8x^2+x-3}$ has a horizontal tangent line, we need to take the derivative, set it equal to zero, and solve for $x$. This will give us the $x$-coordinate of where the graph of $f(x)$ has a horizontal tangent line. To find the corresponding $y$ value, we plug the $x$ value that we found into the original equation. In this problem, when we plug the $x$ value we find into the original equation, we get an imaginary number, which means that no point on the graph of $f(x)$ has a horizontal tangent line, and thus our answer is DNE, does not exist. Let's go through the motions!!! $f(x) = \sqrt{8x^2+x-3}=(8x^2+x-3)^{1/2}$ $f'(x) = \frac{d}{dx}(8x^2+x-3)^{1/2}$ Time to do the chain rule!!! $$\begin{align} f'(x) &= \frac{(8x^2+x-3)^{-1/2}}{2}\frac{d}{dx}(8x^2+x-3)\\ &= \frac{(8x^2+x-3)^{-1/2}}{2}(16x+1)\\ &= \frac{(16x+1)}{2(8x^2+x-3)^{1/2}} \end{align}$$ Alright, we have our derivative. We want to find horizontal tangent lines, so we set this equal to zero and solve for $x$: $$0 = \frac{(16x+1)}{2(8x^2+x-3)^{1/2}}$$ Multiplying both sides of the equation by $2(8x^2+x-3)^{1/2}$ we get $0 = 16x+1$, and thus $x = \frac{-1}{16}$. Now, we plug this value into the original equation to get the corresponding $y$ value, because remember, we are looking for a point on the graph where the horizontal line is tangent, so our answer will be in $(x,y)$ format, if it exists (which, in this case, it won't). $f(\frac{-1}{16}) = \sqrt{8(\frac{-1}{16})^2+\frac{-1}{16}-3}$ But $8(\frac{-1}{16})^2+\frac{-1}{16}-3<0$, so taking its square root will give us an imaginary number. Thus the answer is DNE.
Given $$f(x) = \sqrt{8x^2+x-3}$$ the horizontal tangent lines of $f(x)$ occur at points in the domain of $f(x)$ where $f'(x)=0$. You have correctly found the derivative $$f'(x)=\frac{16x+1}{2\sqrt{8x^2+x-3}}$$ which is a quotient, and a quotient is zero exactly where its numerator is zero (and its denominator is defined and non-zero). Solving $16x+1=0$, we find that $x=-1/16.$ In order to have a horizontal tangent line at $x=-1/16$, $f(x)$ must be defined at $x=-1/16$. The domain of $f(x)=\sqrt{8x^2+x-3}$ is where $8x^2+x-3\ge0$. Solving this inequality, we find the domain of $f(x)$ as $$x\ge \frac{1}{16}\left(\sqrt{97}-1\right)$$ $$x\le \frac{1}{16}\left(-1-\sqrt{97}\right)$$ therefore, since $$\frac{-1-\sqrt{97}}{16} < \frac{-1}{16} < \frac{\sqrt{97}-1}{16},$$ we see that $x=-1/16$ is not in the domain of $f(x)$. So, $f(x)$ doesn't have any horizontal tangent lines.
First off, I assume you are working in $\mathbb{R}$, yes? What is the domain of your original function? Since $$f\left(x\right)=\sqrt{8x^{2}+x-3}\text{, the domain is implied: }8x^{2}+x-3\ge 0.$$Solve this to find the domain. Then, when you find the $x$-value of the point where the tangent is $0$, see if that point is in the domain and if it is, you should end up with a real $y$-value, i.e., $y\in\mathbb{R}$.
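A quick numerical check of that domain condition (my own sketch, not part of either answer): the roots of $8x^2+x-3$ bracket $x=-1/16$, and the quadratic is negative there, so the candidate point is outside the domain.

```python
import math

x0 = -1/16                                # zero of the numerator of f'(x)
r1 = (-1 - math.sqrt(97)) / 16            # roots of 8x^2 + x - 3
r2 = (-1 + math.sqrt(97)) / 16
print(r1, r2)                             # approx -0.678 and 0.553
print(8*x0**2 + x0 - 3)                   # approx -3.03 < 0, so x0 is not in the domain of f
```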
For a given non-constant polynomial $f(x)$ with integer coefficients, how many solutions are there to $f(x)\equiv 0 \pmod{n}$ where $n$ is composite? For a given non-constant polynomial $f(x)$ with integer coefficients, how many solutions are there to $f(x)\equiv 0 \pmod{n}$ where $n$ is composite? Is there a general way to determine the number of incongruent solutions modulo $n$? My first idea is that we can of course break $n$ into its prime power factorization and look at $f(x)\equiv 0 \pmod{p_{i}^{e_{i}}}$ where $p_{i}^{e_{i}}$ appears as a prime power factor in $n$. Here's where I start to become confused: if $f(x)=x$ then the Chinese remainder theorem tells us that the solution is unique modulo $n$, but if $f(x)$ is non-constant and non-linear then we need to use the lifting method to solve $f(x)\equiv 0$ for each modulus $p_{i}^{e_{i}}$ - but so far the method tells us nothing about the number of solutions. I presume I am not incorrect in saying that the number of incongruent solutions to $f(x)\equiv 0 \pmod{p_{i}^{e_{i}}}$ is at most $\min(\deg(f), p_{i}^{e_{i}})$, but is there a general way to determine precisely how many solutions there are?
First factor $m$ into powers of primes. By CRT, $$N(m) = \prod_{i} N(p_i^{a_i}), \qquad m = \prod_i p_i^{a_i},$$ where $N$ denotes the number of roots. Then, for each prime factor $p$ of $m$, a polynomial $P(x)$ can be reduced (using $x^p \equiv x \pmod p$) to a polynomial $P'(x)$ with degree $\le p$. If $P'(x)$ is a factor of $x^p - x$ modulo $p$ (after dividing, the remainder has all coefficients divisible by $p$), then it has exactly $n$ roots, where $n$ is the degree of $P'(x)$. To pass from $p$ to $p^a$, use Hensel's Lemma.
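To see the CRT factorisation of the root count in action, here is a brute-force Python sketch of my own (the polynomial $x^2+1$ and the modulus $65$ are arbitrary illustrative choices, and `factorint` comes from sympy):

```python
from sympy import factorint

def count_roots(f, m):
    """Count x in {0, ..., m-1} with f(x) divisible by m, by exhaustive search."""
    return sum(1 for x in range(m) if f(x) % m == 0)

f = lambda x: x**2 + 1                # example polynomial
n = 65                                # 65 = 5 * 13
total = count_roots(f, n)
per_factor = [count_roots(f, p**e) for p, e in factorint(n).items()]
print(total, per_factor)              # the total equals the product of the per-prime-power counts
```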
What is the double sum of: $\sum\limits_{n=0}^\infty \sum\limits_{m=0}^\infty \frac{ \sin[ka(m-n)]}{(m-n)} , m \neq n $ where $k$ and $a$ are constants. How to treat this double sum?
Hint: Use the fact that $$\frac{\sin[(m-n)ka]}{m-n} = \frac 1 2\int_{-ka}^{ka}e^{i(m-n)t}dt$$
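A quick numerical confirmation of the hint's identity (my own sketch, using numpy and scipy; by symmetry only the cosine part of $e^{i(m-n)t}$ contributes):

```python
import numpy as np
from scipy.integrate import quad

k, a = 1.3, 0.7
for d in [1, 2, 5]:                                    # d plays the role of m - n
    lhs = np.sin(k*a*d) / d
    rhs = 0.5 * quad(lambda t: np.cos(d*t), -k*a, k*a)[0]
    print(d, lhs, rhs)                                 # the two values agree for each d
```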
This is what I got. If there is any error, please let me know.
Can we calculate $2^k$ using this easy Taylor series? Trying to calculate $2^k$ by hand for $k\in[0,1]$, it's tempting to use the Taylor expansion of $x^k$ around $x=1$, to get: $$2^k = 1^k + \frac{k (1)^{k-1}}{1!} + \frac{k(k-1) (1)^{k-2}}{2!} + \ldots =1+k +\frac{k(k-1)}{2!}+\ldots =\sum_{n=0}^\infty\binom{k}{n}$$ Unfortunately, $2$ lies exactly on the radius of convergence $r = 1$, so in theory this may not converge. Can we prove this converges to the correct value for all $k$? Numerically it does seem to. What can be said about the rate of convergence? It seems quite slow. Can we bound the convergence?
Regarding convergence of the series, the $n$th term is $$a_n = \frac{k(k-1) \ldots (k - n +1)}{n!}.$$ We have $$\frac{a_{n+1}}{a_n} = \frac{k-n}{n+1} = - \frac{n-k}{n+1} < 0,$$ and the series is alternating for $n > k.$ Note that $$\frac{|a_{n}|}{|a_{n+1}|} = \frac{n+1}{n-k} = \frac{1+1/n}{1-k/n} = 1 + \frac{1+k}{n} +O\left(\frac1{n^2}\right),$$ and $$\lim_{n \to \infty} \left(n \frac{|a_n|}{|a_{n+1}|}- (n+1)\right) = k > 0.$$ There exists $N \in \mathbb{N}$ such that for $n > N$ $$n \frac{|a_n|}{|a_{n+1}|}- (n+1) > \frac{k}{2} \\ \implies |a_{n+1}| < \frac{2}{k}\left(n|a_n| - (n+1)|a_{n+1}|\right).$$ Thus for all $m > N$, the RHS forms a telescoping sum and $$\sum_{n = N}^m |a_{n+1}| < \frac{2}{k}\left(N|a_N| - (m+1)|a_{m+1}|\right) < \frac{2}{k}N|a_N|.$$ The series $\sum|a_n|$ is positive and bounded, and, hence, convergent. Therefore, the series $\sum_{n=0}^\infty\binom{k}{n}$ is absolutely convergent for $k > 0$. As an alternating series an error bound is $$\left|\sum_{n=m+1}^\infty\binom{k}{n}\right| \leqslant \left|\binom{k}{m+1}\right|.$$
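For a feel of how slowly the partial sums approach $2^k$, here is a short Python sketch of my own (the choice $k=0.5$ is arbitrary); the terms are generated by the recurrence $\binom{k}{n+1}=\binom{k}{n}\frac{k-n}{n+1}$.

```python
k = 0.5
target = 2**k
term, partial = 1.0, 0.0
for n in range(60):
    partial += term
    if n % 10 == 0:
        print(n, partial, abs(partial - target))   # the error decays slowly, roughly like n**-(1+k)
    term *= (k - n) / (n + 1)                      # binomial(k, n+1) from binomial(k, n)
```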
Since $\binom{k}{n}=0$ for $n > k$, the series on the left side is always a finite sum. So to compute $2^k$ exactly we need $k+1$ terms of this sum. So you cannot speak of some sort of convergence.
Some Results in $\mathbb{Z} [\sqrt{10}]$ This is a question from an old Oxford undergrad paper on calculations in $\mathbb{Z} [\sqrt{10}]$. We equip this ring with the Euclidean function $d(a+b\sqrt{10})=|a^2-10b^2|$. I want to prove the following results: If $d(x)=1$, then $\frac{1}{x} \in \mathbb{Z} [\sqrt{10}]$ Any non-zero element of $\mathbb{Z} [\sqrt{10}]$ which is not a unit can be expressed as a product of finitely many irreducibles in $\mathbb{Z} [\sqrt{10}]$ The ideal generated by $2$ and $\sqrt{10}$ is not principal in $\mathbb{Z} [\sqrt{10}]$ Thoughts so far Suppose $x=a+b\sqrt{10}$. Clearly if $x$ is a unit then $d(x)=1$, though I'm not sure if this helps. Are we OK simply to note that $\frac{1}{x}=\frac{a-b\sqrt{10}}{a^2-10b^2}$ and, since $d(x)=1$, the denominator is either $1$ or $-1$? I know this is true in general in a principal ideal domain and every Euclidean ring is a principal ideal domain, but this proof is lengthy. Is there any calculation one can perform in $\mathbb{Z} [\sqrt{10}]$ to demonstrate this property more quickly? Any help would be appreciated; I'm not actually too sure what this ideal looks like. Could someone put it in set notation for me? Many thanks.
It seems to me that (1) and (3) are dealt with adequately in the comments (but I would be happy to incorporate that here if you need). For (2), the key point is that the ring is Noetherian, which is effectively a consequence of the Hilbert basis theorem, implying that a polynomial ring $\mathbb{Z}[x]$ is Noetherian, and the fact that a quotient ring of a Noetherian ring is Noetherian. Given this, every element can be written as a product of irreducibles, since otherwise you obtain an infinite ascending chain of ideals. Here are some more details for (3): First, observe that $d(xy)=d(x)d(y)$ for all $x,y$ and that $d(2)=4$ and $d(\sqrt{10})=10$. Thus any common divisor $x$ has $d(x)|2$. If $x$ is a non-unit this implies $d(x)=2$. Thus assuming a non-unit common divisor $x$ exists, there are integers $a,b$ with $$\pm 2=a^2-10b^2.$$ Consider this equation modulo $10$. It implies that either $2$ or $8$ is a square mod $10$, but the squares modulo $10$ are just $0,1,4,9,6,5$. Hence there is no non-unit common divisor $x$. On the other hand, the ideal is proper since its elements are all of the form $a+b\sqrt{10}$ with $a$ even. Therefore it is not principal.
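The modular step in (3) can be checked mechanically; here is a tiny Python sketch of my own listing the squares modulo $10$ and confirming that neither $2$ nor $8 \equiv -2$ appears among them.

```python
squares_mod_10 = {x*x % 10 for x in range(10)}
print(sorted(squares_mod_10))                      # [0, 1, 4, 5, 6, 9]
print(2 in squares_mod_10, 8 in squares_mod_10)    # False False: a^2 - 10*b^2 = +-2 is impossible
```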
The set $\mathbb{Z}[\sqrt{10}]$ makes use of half-primes, which must occur in even numbers. This is pretty normal for many of these sorts of systems. These half-primes live in some other part that has this same ratio, i.e. $a\sqrt{5}+b\sqrt{2}$. In some systems, like $\mathbb{Z}[\sqrt{3}]$, there are even units in there, e.g. $(\sqrt{6}+\sqrt{2})/2$. In the case of $\mathbb{Z}[\sqrt{10}]$ (and many other schemes), there are subspaces that hold primes, but there is no unit to move to that space. An example is that of $\mathbb{Z}[\sqrt{210}]$, which has three subspaces $a\sqrt{3}+b\sqrt{70}$ and $a\sqrt{2}+b\sqrt{105}$ and $a\sqrt{6}+b\sqrt{35}$, and a unit $\sqrt{15}+\sqrt{14}$, which transitions to another set of subspaces. This group has no fewer than eight sub-spaces, linked in pairs, and primes can be individually found in any of the main group, or the three subgroups (3), (2), (6). A number appears in the main group if the product of the special primes of the subgroups multiplies up to a square. For example, the prime factors for $29$ are $\sqrt{35}+\sqrt{6}$, and for $7$, one can find it in $\sqrt{105}+7\sqrt{2}= \sqrt{7}\,(\sqrt{15}+\sqrt{14})$, but since $6\cdot 2$ gives $12$, one needs a further '3' type prime to make it appear in $\mathbb{Z}[\sqrt{210}]$. That is pretty much the order of the day for composite numbers in this type.
Which is right way to calculate percentage? A student gets the following marks. 50 out of 100 120 out of 150 30 out of 50 In first method : I calculate the percentage as (sum of obtained marks) / (Total marks) * 100. Hence [(50 + 120 + 30) / (100 + 150 + 50)] * 100 = (200 / 300) * 100 = 66.66% In second method : I calculate individual percentages and divide by three as: (50% + 80% + 60%) / 3 = (190 / 3) = 63.33% Why are the two percentages different and which one is the correct percentage?
First notice that the two results are close together, although you can probably create some strange case in which they aren't close. Second there is no "right" answer. Whoever creates the scoring scheme can decide how to combine the marks. Third the two answers are the result of two different weighting schemes. There are infinitely many such schemes.
Your first method would be correct. The cause for the difference between the two methods is that the second one doesn't take into account the differing amounts of total marks between assignments. This leads to an unequal weighting between assignments that are worth more or less than others. For an extreme example, say a student got the following marks: 2 out of 2 (100%) 20 out of 100 (20%) The first method would give us a correct answer of $\frac{2+20}{2+100} \approx 21.57\%$. The second method gives us an incorrect answer of $\frac{100+20}{2} = 60\%$
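The difference between the two weighting schemes is easy to reproduce in a few lines of Python (my own sketch, using the marks from the original question):

```python
obtained = [50, 120, 30]
totals = [100, 150, 50]

pooled = 100 * sum(obtained) / sum(totals)                                    # every mark weighted equally
per_test = sum(100 * o / t for o, t in zip(obtained, totals)) / len(totals)   # every test weighted equally
print(round(pooled, 2), round(per_test, 2))                                   # 66.67 vs 63.33
```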
Do most mathematicians know most topics in mathematics? How many topics outside of his or her specialization is an average mathematician familiar with? For example, does an average group theorist know enough of partial differential equations to pass a test in a graduate-level PDE course? Also, what are the "must-know" topics for any aspiring mathematician? Why? As a graduate student, should I focus more on breadth (choosing a wide range of classes that are relatively pair-wise unrelated, e.g., group theory and PDEs) or depth (e.g., measure theory and functional analysis)?
Your question is philosophical rather than mathematical. A colleague of mine told me the following metaphor / illustration once when I was a bachelor student and he did his PhD. And since now some years have passed I can relate. It is hard to write it. Think about drawing a huge circle in the air, zooming in, and then drawing a huge circle again.

This is all knowledge:

    [--------------------------------------------]

All knowledge contains a lot, and math is only a tiny part in it - marked with the cross:

    [---------------------------------------x----]

Zooming in:

    [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]

Mathematical research is divided into many topics. Algebra, number theory, and many others, but also numerical mathematics. That is this tiny part here:

    [xxxxxxxxxxxxxxxxxxxoxxxxxxxxxxxxxxxxxxxxxxxx]

Zooming in:

    [oooooooooooooooooooooooooooooooooooooooooooo]

Numerical Math is divided into several topics as well, like ODE numerics, optimisation etc. And one of them is FEM-Theory for PDEs:

    [oooooooooooooooooooρoooooooooooooooooooooooo]

And that is the part of knowledge, where I feel comfortable saying "I know a bit more than most other people in the world". Now after some years, I would extend that illustration one more step: My knowledge in that part rather looks like

    [ ρ ρρ ρ ρ ρ ρ ρ]

I still only know "a bit" about it, most of it I don't know, and most of what I had learned is already forgotten. (Actually FEM-Theory is still a huge topic, that contains e.g. different kinds of PDEs [elliptic, parabolic, hyperbolic, other]. So you could do the "zooming" several times more.)

Another small wisdom is: Someone who finished school thinks he knows everything. Once he gained his masters degree, he knows that he knows nothing. And after the PhD he knows that everyone around him knows nothing as well.

Asking about your focus: IMO use the first few years to explore topics in math to find out what you like. Then go deeper - if you found what you like. Are there "must know" topics? There are basics that you learn in the first few terms. Without them it is hard to "speak" and "do" math. You will learn the tools that you need to dig deeper. After that feel free to enjoy math :) If your research focus is for example on PDE numerics (as mine is) but you also like pure math - go ahead and take a lecture. Will it help you? Maybe, maybe not. But for sure you had fun gaining knowledge, and that is what counts. Don't think too much about what lectures to attend. Everything will turn out all right. I think most mathematicians will agree with that statement.
Certainly not. For example, the great mathematician Grothendieck was insufficiently well acquainted with arithmetic to recognize the integer $57$ as a non-prime. The many accounts of this story can be accessed by an internet search for the key terms; say, look for grothendieck prime 57. An earlier example is given by Ian Stewart on page 72 of his book Professor Stewart's Cabinet of Mathematical Curiosities, which I quote verbatim without any endorsement as to the degree of its truth: Ernst Kummer was a German Algebraist, who did some of the best work on Fermat's Last Theorem before the modern era. However, he was poor at arithmetic, so he always asked his students to do the calculations for him. On one occasion he needed to work out $9\times7$. "Umm ... nine times seven is ... nine times ... seven ... is ..." $\qquad$"Sixty-one," suggested one student. Kummer wrote this on the blackboard. $\qquad$"No, Professor! It should be sixty-seven!" said another. $\qquad$"Come, come, gentlemen," said Kummer. "It can't be both. It must be one or the other."