Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed? And if so, how would one prove this?
Here I will augment the excellent answer by whuber by showing the mathematical form of your general model and the sufficient conditions that imply a normal distribution for $Y|X$. Consider the general hierarchical model form: $$\begin{align} X|Y=y &\sim \text{N}(\mu(y),\sigma^2(y)), \\[6pt] Y &\sim \text{N}(\mu_*,\sigma^2_*). \\[6pt] \end{align}$$ This model gives the joint density kernel: $$\begin{align} f_{X,Y}(x,y) &= f_{X|Y}(x|y) f_{Y}(y) \\[12pt] &\propto \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Big( \frac{x-\mu(y)}{\sigma(y)} \Big)^2 \Bigg) \exp \Bigg( -\frac{1}{2} \Big( \frac{y-\mu_*}{\sigma_*} \Big)^2 \Bigg) \\[6pt] &= \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \Big( \frac{x-\mu(y)}{\sigma(y)} \Big)^2 + \Big( \frac{y-\mu_*}{\sigma_*} \Big)^2 \Bigg] \Bigg) \\[6pt] &= \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(x-\mu(y))^2 \sigma_*^2 + (y-\mu_*)^2 \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg), \\[6pt] \end{align}$$ which gives the conditional density kernel: $$\begin{align} f_{Y|X}(y|x) &\overset{y}{\propto} \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(x-\mu(y))^2 \sigma_*^2 + (y-\mu_*)^2 \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg). \\[6pt] \end{align}$$ (Note that no terms can be dropped from the numerator at this stage: since $\sigma(y)$ may depend on $y$, even the $x^2 \sigma_*^2$ part of the expanded numerator contributes to the $y$-dependence.) In general, this is not the form of a normal density. However, suppose we impose the following conditions on the conditional mean and variance of $X|Y$: $$\mu(y) = a + by \quad \quad \quad \quad \quad \sigma^2(y) = \sigma^2.$$ These conditions mean that we require $\mu(y) \equiv \mathbb{E}(X|Y=y)$ to be an affine function of $y$ and we require $\sigma^2(y) \equiv \mathbb{V}(X|Y=y)$ to be a fixed value. 
Incorporating these conditions gives: $$\begin{align} f_{Y|X}(y|x) &\overset{y}{\propto} \frac{1}{\sigma} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(x-a-by)^2 \sigma_*^2 + (y-\mu_*)^2 \sigma^2}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(b^2 y^2 - 2b(x-a) y) \sigma_*^2 + (y^2-2y\mu_*) \sigma^2}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &= \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\sigma^2 + b^2 \sigma_*^2 ) y^2 - 2(b(x-a) \sigma_*^2 + \mu_* \sigma^2) y}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &= \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{y^2 - 2[(b(x-a) \sigma_*^2 + \mu_* \sigma^2)/(\sigma^2 + b^2 \sigma_*^2) ] y}{\sigma^2 \sigma_*^2/(\sigma^2 + b^2 \sigma_*^2 ) } \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{1}{\sigma^2 \sigma_*^2/(\sigma^2 + b^2 \sigma_*^2 )} \cdot \Big( y - \frac{b(x-a) \sigma_*^2 + \mu_* \sigma^2}{\sigma^2 + b^2 \sigma_*^2} \Big)^2 \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \text{N} \Bigg( y \Bigg| \frac{b(x-a) \sigma_*^2 + \mu_* \sigma^2}{\sigma^2 + b^2 \sigma_*^2}, \frac{\sigma^2 \sigma_*^2}{\sigma^2 + b^2 \sigma_*^2} \Bigg). \\[6pt] \end{align}$$ Here we see that we have a normal distribution for $Y|X$, which confirms that the above conditions on the conditional mean and variance of $X|Y$ are sufficient to give this property. (As a sanity check, these conditional moments agree with the standard bivariate-normal conditioning formulae, since under the stated conditions $\mathbb{V}(X) = \sigma^2 + b^2 \sigma_*^2$ and $\operatorname{Cov}(X,Y) = b \sigma_*^2$.)
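These closed forms can be checked numerically: under the two conditions, $(X,Y)$ is jointly normal with $\mathbb{V}(X) = \sigma^2 + b^2\sigma_*^2$ and $\operatorname{Cov}(X,Y) = b\sigma_*^2$, so the conditional moments of $Y|X$ must agree with the standard bivariate-normal conditioning formulae. A small sketch in Python (for illustration; the parameter draws are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Conditional moments of Y | X = x implied by joint normality of (X, Y):
# E[X] = a + b*mu_*, V(X) = sigma^2 + b^2 sigma_*^2, Cov(X, Y) = b sigma_*^2.
def cond_moments_bvn(x, a, b, mu_s, sig2, sig2_s):
    var_x = sig2 + b**2 * sig2_s
    cov_xy = b * sig2_s
    mean = mu_s + cov_xy / var_x * (x - (a + b * mu_s))
    var = sig2_s - cov_xy**2 / var_x
    return mean, var

# Closed form obtained by completing the square in the hierarchical derivation.
def cond_moments_derived(x, a, b, mu_s, sig2, sig2_s):
    denom = sig2 + b**2 * sig2_s
    return (b * (x - a) * sig2_s + mu_s * sig2) / denom, sig2 * sig2_s / denom

# The two routes agree for arbitrary parameter values.
for _ in range(100):
    a, b, mu_s, x = rng.normal(size=4)
    sig2, sig2_s = rng.uniform(0.2, 3.0, size=2)
    m1, v1 = cond_moments_bvn(x, a, b, mu_s, sig2, sig2_s)
    m2, v2 = cond_moments_derived(x, a, b, mu_s, sig2, sig2_s)
    assert np.isclose(m1, m2) and np.isclose(v1, v2)
```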
{ "language": "en", "url": "https://stats.stackexchange.com/questions/602428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
KL divergence between two univariate Gaussians I need to determine the KL-divergence between two Gaussians. I am comparing my results to these, but I can't reproduce their result. My result is obviously wrong, because the KL is not 0 for KL(p, p). I wonder where I am making a mistake and ask if anyone can spot it. Let $p(x) = N(\mu_1, \sigma_1)$ and $q(x) = N(\mu_2, \sigma_2)$. From Bishop's PRML I know that $$KL(p, q) = - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx$$ where integration is done over the whole real line, and that $$\int p(x) \log p(x) dx = -\frac{1}{2} (1 + \log 2 \pi \sigma_1^2),$$ so I restrict myself to $\int p(x) \log q(x) dx$, which I can write out as $$-\int p(x) \log \frac{1}{(2 \pi \sigma_2^2)^{(1/2)}} e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}} dx,$$ which can be separated into $$\frac{1}{2} \log (2 \pi \sigma_2^2) - \int p(x) \log e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}} dx.$$ Taking the log I get $$\frac{1}{2} \log (2 \pi \sigma_2^2) - \int p(x) \bigg(-\frac{(x-\mu_2)^2}{2 \sigma_2^2} \bigg) dx,$$ where I separate the sums and get $\sigma_2^2$ out of the integral. $$\frac{1}{2} \log (2 \pi \sigma^2_2) + \frac{\int p(x) x^2 dx - \int p(x) 2x\mu_2 dx + \int p(x) \mu_2^2 dx}{2 \sigma_2^2}$$ Letting $\langle \rangle$ denote the expectation operator under $p$, I can rewrite this as $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\langle x^2 \rangle - 2 \langle x \rangle \mu_2 + \mu_2^2}{2 \sigma_2^2}.$$ We know that $var(x) = \langle x^2 \rangle - \langle x \rangle ^2$. 
Thus $$\langle x^2 \rangle = \sigma_1^2 + \mu_1^2$$ and therefore $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + \mu_1^2 - 2 \mu_1 \mu_2 + \mu_2^2}{2 \sigma_2^2},$$ which I can put as $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2}.$$ Putting everything together, I get to \begin{align*} KL(p, q) &= - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx\\\\ &= \frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} (1 + \log 2 \pi \sigma_1^2)\\\\ &= \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2}. \end{align*} This is wrong, since it equals $\frac{1}{2}$ rather than $0$ for two identical Gaussians. Can anyone spot my error? Update Thanks to mpiktas for clearing things up. The correct answer is: $KL(p, q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2}$
I did not have a look at your calculation but here is mine with a lot of details. Suppose $p$ is the density of a normal random variable with mean $\mu_1$ and variance $\sigma^2_1$, and that $q$ is the density of a normal random variable with mean $\mu_2$ and variance $\sigma^2_2$. The Kullback-Leibler distance from $q$ to $p$ is: $$\int \left[\log( p(x)) - \log( q(x)) \right] p(x) dx$$ \begin{align}&=\int \left[ -\frac{1}{2} \log(2\pi) - \log(\sigma_1) - \frac{1}{2} \left(\frac{x-\mu_1}{\sigma_1}\right)^2 + \frac{1}{2}\log(2\pi) + \log(\sigma_2) + \frac{1}{2} \left(\frac{x-\mu_2}{\sigma_2}\right)^2 \right]\times \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] dx\\&=\int \left\{\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2} \left[ \left(\frac{x-\mu_2}{\sigma_2}\right)^2 - \left(\frac{x-\mu_1}{\sigma_1}\right)^2 \right] \right\}\times \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] dx\\& =E_{1} \left\{\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2} \left[ \left(\frac{x-\mu_2}{\sigma_2}\right)^2 - \left(\frac{x-\mu_1}{\sigma_1}\right)^2 \right]\right\}\\&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} E_1 \left\{(X-\mu_2)^2\right\} - \frac{1}{2\sigma_1^2} E_1 \left\{(X-\mu_1)^2\right\}\\ &=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} E_1 \left\{(X-\mu_2)^2\right\} - \frac{1}{2};\end{align} (Now note that $(X - \mu_2)^2 = (X-\mu_1+\mu_1-\mu_2)^2 = (X-\mu_1)^2 + 2(X-\mu_1)(\mu_1-\mu_2) + (\mu_1-\mu_2)^2$) \begin{align}&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} \left[E_1\left\{(X-\mu_1)^2\right\} + 2(\mu_1-\mu_2)E_1\left\{X-\mu_1\right\} + (\mu_1-\mu_2)^2\right] - \frac{1}{2}\\&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}.\end{align}
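The closed form above is easy to verify by numerical integration. The following sketch (in Python, with arbitrary parameter values) checks it against a direct grid evaluation of $\int p(x)\,[\log p(x) - \log q(x)]\,dx$, and confirms that it vanishes when $p = q$:

```python
import numpy as np

def kl_normal(mu1, s1, mu2, s2):
    """Closed-form KL(p || q) for p = N(mu1, s1^2), q = N(mu2, s2^2)."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def log_npdf(x, mu, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (x - mu)**2 / (2 * s**2)

mu1, s1, mu2, s2 = 0.3, 1.2, -0.5, 0.8  # arbitrary example parameters

# Numerical integral of p(x) * [log p(x) - log q(x)] on a fine grid.
x = np.linspace(-15.0, 15.0, 600001)
p = np.exp(log_npdf(x, mu1, s1))
integrand = p * (log_npdf(x, mu1, s1) - log_npdf(x, mu2, s2))
kl_numeric = np.sum(integrand) * (x[1] - x[0])

assert np.isclose(kl_normal(mu1, s1, mu2, s2), kl_numeric, atol=1e-6)
assert kl_normal(0.0, 1.0, 0.0, 1.0) == 0.0   # KL(p, p) = 0
```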
{ "language": "en", "url": "https://stats.stackexchange.com/questions/7440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "138", "answer_count": 2, "answer_id": 0 }
How to compute the standard error of the mean of an AR(1) process? I try to compute the standard error of the mean for a demeaned AR(1) process $x_{t+1} = \rho x_t + \varepsilon_{t+1} =\sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t+1-i}$ Here is what I did: $$ \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\ &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} \sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t-i}\right) \\ &= \frac{1}{N^2} Var\begin{pmatrix} \rho^0 \varepsilon_0 + & \rho^1 \varepsilon_{-1} + & \rho^2 \varepsilon_{-2} + & \cdots & \rho^{\infty} \varepsilon_{-\infty} + \\ \rho^0 \varepsilon_1 + & \rho^1 \varepsilon_{0} + & \rho^2 \varepsilon_{-1} + & \cdots & \rho^{\infty} \varepsilon_{1-\infty} + \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^0\varepsilon_{N-1} + & \rho^1 \varepsilon_{N-2} + & \rho^2 \varepsilon_{N-3} + & \cdots & \rho^{\infty} \varepsilon_{N-1-\infty} + \\ \end{pmatrix} \\ &= \frac{1}{N^2} Var\begin{pmatrix} \rho^0 \varepsilon_{N-1} + \\ (\rho^0 + \rho^1) \varepsilon_{N-2} + \\ (\rho^0 + \rho^1 + \rho^2) \varepsilon_{N-3} + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2}) \varepsilon_{1} + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1}) \varepsilon_{0} + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N}) \varepsilon_{-1} + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1}) \varepsilon_{-2} + \\ \cdots\\ \end{pmatrix} \\ &= \frac{\sigma_{\varepsilon}^2}{N^2} \begin{pmatrix} \rho^0 + \\ (\rho^0 + \rho^1) + \\ (\rho^0 + \rho^1 + \rho^2) + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2}) + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1}) + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N}) + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1}) + \\ \cdots\\ \end{pmatrix} \\ &= \frac{N \sigma_{\varepsilon}^2}{N^2} (\rho^0 + \rho^1 + \dots + \rho^{\infty}) \\ &= \frac{\sigma_{\varepsilon}^2}{N} \frac{1}{1 - \rho} \\ \end{align*} $$ Probably, not every step is done in 
the most obvious way, so let me add some thoughts. In the third row, I just write out the two sum signs. Here, the matrix has $N$ rows. In the fourth row, I realign the matrix so that there is one row for every epsilon, so the number of rows is infinite here. Note that the last three parts in the matrix have the same number of elements, just differing by a factor $\rho$ in each row. In the fifth row, I apply the rule that the variance of a sum of independent shocks is the sum of the variances of those shocks, and notice that each $\rho^j$ element is summed up $N$ times. The end result looks neat, but is probably wrong. Why do I think so? Because I ran a Monte Carlo simulation in R and things don't add up:

nrMCS <- 10000
N <- 100
pers <- 0.9
means <- numeric(nrMCS)
for (i in 1:nrMCS) {
  means[i] <- mean(arima.sim(list(order=c(1,0,0), ar=pers), n = N))
}
#quantile(means, probs=c(0.025, 0.05, 0.5, 0.95, 0.975))
#That is the empirical standard error
sd(means)
0.9459876
#This should be the standard error according to my formula
1/(N*(1-pers))
0.1

Any hints on what I am doing wrong would be great! Or maybe a hint about where I can find the correct derivation (I couldn't find anything). Is the problem maybe that I assume independence between the same errors? $$Var(X + X) = Var(2X) = 4Var(X) \neq 2Var(X)$$ I thought about that, but I don't see where I make that erroneous assumption in my derivation. UPDATE I forgot to square the rhos, as Nuzhi correctly pointed out. Hence it should look like: $$ Var(\overline{x}) = \frac{\sigma_{\varepsilon}^2}{N^2} \begin{pmatrix} \rho^{2\times0} + \\ (\rho^0 + \rho^1)^2 + \\ (\rho^0 + \rho^1 + \rho^2)^2 + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2})^2 + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1})^2 + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N})^2 + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1})^2 + \\ \cdots\\ \end{pmatrix} $$
Well, there are three things, as I see it, with this question:

1) In your derivation, when you take the variance, the terms inside should get squared ($\rho$ included), and you should end up with the expression below. (I didn't consider the autocovariance earlier; sorry about that.) $$ Var(\overline{x}) = \frac{\sigma_{\varepsilon}^2}{N} \frac{1}{1 - \rho^2} + \sum\limits_{t=0}^{N-1}\sum\limits_{j=0,\, j\neq t}^{N-1}\frac{\sigma_{\varepsilon}^2}{N^2} \frac{1}{1 - \rho^2}\rho^{|j-t|}$$

2) In your code you have calculated the variance of $\overline{x}$; for the standard error, the code should take the square root of that answer.

3) You have assumed that the white noise is generated from a $(0,1)$ distribution, when in fact white noise only has to have constant variance. I don't know what value of the constant variance R uses to generate the time series; perhaps you could check on that.

Hope this helps you :)
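For the record, the expression in 1) can be checked against an equivalent closed form, and its square root reproduces the simulated standard error from the question. A sketch in Python, assuming unit innovation variance (which I believe is what arima.sim uses by default):

```python
import numpy as np

def var_xbar(N, rho, sig2_eps=1.0):
    """Var of the sample mean of a stationary AR(1): full double sum of autocovariances."""
    # gamma(h) = sig2_eps * rho^|h| / (1 - rho^2)
    idx = np.arange(N)
    gamma = sig2_eps * rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho**2)
    return gamma.sum() / N**2

def var_xbar_closed(N, rho, sig2_eps=1.0):
    """Equivalent closed form: sig2/(N^2 (1-rho^2)) * (N + 2*sum_{k=1}^{N-1} (N-k) rho^k)."""
    k = np.arange(1, N)
    return sig2_eps * (N + 2 * np.sum((N - k) * rho**k)) / (N**2 * (1 - rho**2))

N, rho = 100, 0.9
assert np.isclose(var_xbar(N, rho), var_xbar_closed(N, rho))
se = np.sqrt(var_xbar(N, rho))
```

For $N = 100$ and $\rho = 0.9$ this gives a standard error of about 0.95, in line with the simulated value 0.9459876 in the question, and far from the naive $\sigma_\varepsilon^2/(N(1-\rho)) = 0.1$.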
{ "language": "en", "url": "https://stats.stackexchange.com/questions/40585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Correlation coefficient for a uniform distribution on an ellipse I am currently reading a paper that claims that the correlation coefficient for a uniform distribution on the interior of an ellipse $$f_{X,Y} (x,y) = \begin{cases}\text{constant} & \text{if} \ (x,y) \ \text{inside the ellipse} \\ 0 & \text{otherwise} \end{cases}$$ is given by $$\rho = \sqrt{1- \left(\frac{h}{H}\right)^2 }$$ where $h$ and $H$ are the vertical heights at the center and at the extremes respectively. The author does not reveal how he reaches that and instead only says that we need to change scales, rotate, translate and of course integrate. I would very much like to retrace his steps but I am a bit lost with all that. I would therefore be grateful for some hints. Thank you in advance. Oh and for the record Châtillon, Guy. "The balloon rules for a rough estimate of the correlation coefficient." The American Statistician 38.1 (1984): 58-60 It's quite amusing.
Let $(X,Y)$ be uniformly distributed on the interior of the ellipse $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$ where $a$ and $b$ are the semi-axes of the ellipse. Then, $X$ and $Y$ have marginal densities \begin{align} f_X(x) &= \frac{2}{\pi a^2}\sqrt{a^2-x^2}\,\,\mathbf 1_{-a,a}(x),\\ f_Y(y) &= \frac{2}{\pi b^2}\sqrt{b^2-y^2}\,\,\mathbf 1_{-b,b}(y), \end{align} and it is easy to see that $E[X] = E[Y] = 0$. Also, \begin{align} \sigma_X^2 = E[X^2] &= \frac{2}{\pi a^2}\int_{-a}^a x^2\sqrt{a^2-x^2}\,\mathrm dx\\ &= \frac{4}{\pi a^2}\int_0^a x^2\sqrt{a^2-x^2}\,\mathrm dx\\ &= \frac{4}{\pi a^2}\times a^4 \frac 12\frac{\Gamma(3/2)\Gamma(3/2)}{\Gamma(3)}\\ &= \frac{a^2}{4}, \end{align} and similarly, $\sigma_Y^2 = \frac{b^2}{4}$. Finally, $X$ and $Y$ are uncorrelated random variables. Let \begin{align} U &= X\cos \theta - Y \sin \theta\\ V &= X\sin \theta + Y \cos \theta \end{align} which is a rotation transformation applied to $(X,Y)$. Then, $(U,V)$ are uniformly distributed on the interior of an ellipse whose axes do not coincide with the $u$ and $v$ axes. But, it is easy to verify that $U$ and $V$ are zero-mean random variables and that their variances are \begin{align} \sigma_U^2 &= \frac{a^2\cos^2\theta + b^2\sin^2\theta}{4}\\ \sigma_V^2 &= \frac{a^2\sin^2\theta + b^2\cos^2\theta}{4} \end{align} Furthermore, $$\operatorname{cov}(U,V) = (\sigma_X^2-\sigma_Y^2)\sin\theta\cos\theta = \frac{a^2-b^2}{8}\sin 2\theta$$ from which we can get the value of $\rho_{U,V}$. 
Now, the ellipse on whose interior $(U,V)$ is uniformly distributed has equation $$\frac{(u \cos\theta + v\sin \theta)^2}{a^2} + \frac{(-u \sin\theta + v\cos \theta)^2}{b^2} = 1,$$ that is, $$\left(\frac{\cos^2\theta}{a^2} + \frac{\sin^2\theta}{b^2}\right) u^2 + \left(\frac{\sin^2\theta}{a^2} + \frac{\cos^2\theta}{b^2}\right) v^2 + \left(\left(\frac{1}{a^2} - \frac{1}{b^2}\right)\sin 2\theta \right)uv = 1,$$ which can also be expressed as $$\sigma_V^2\cdot u^2 + \sigma_U^2\cdot v^2 -2\rho_{U,V}\sigma_U\sigma_V\cdot uv = \frac{a^2b^2}{4}\tag{1}$$ Setting $u = 0$ in $(1)$ gives $\displaystyle h = \frac{ab}{\sigma_U}$, while implicit differentiation of $(1)$ with respect to $u$ gives $$\sigma_V^2\cdot 2u + \sigma_U^2\cdot 2v\frac{\mathrm dv}{\mathrm du} -2\rho_{U,V}\sigma_U\sigma_V\cdot \left(v + u\frac{\mathrm dv}{\mathrm du}\right) = 0,$$ that is, the tangent to the ellipse $(1)$ is horizontal at the two points $(u,v)$ on the ellipse for which $$\rho_{U,V}\sigma_U\cdot v = \sigma_V\cdot u.$$ The value of $H$ can be figured out from this, and will (in the unlikely event that I have made no mistakes in doing the above calculations) lead to the desired result.
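To close the loop numerically: the total vertical extent $H$ of the rotated ellipse can be obtained by maximising $v$ over the boundary, and the claimed balloon rule $\rho = \sqrt{1 - (h/H)^2}$ (which recovers $|\rho_{U,V}|$) can then be checked against the moments derived above. A sketch with arbitrary values $a = 3$, $b = 1$, $\theta = 0.7$:

```python
import numpy as np

a, b, theta = 3.0, 1.0, 0.7   # arbitrary semi-axes and rotation angle

# Moments of (U, V) from the derivation above.
sU = np.sqrt((a**2 * np.cos(theta)**2 + b**2 * np.sin(theta)**2) / 4)
sV = np.sqrt((a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) / 4)
cov_uv = (a**2 - b**2) * np.sin(2 * theta) / 8
rho = cov_uv / (sU * sV)

# Boundary of the rotated ellipse: (u, v) = R_theta (a cos t, b sin t).
t = np.linspace(0.0, 2.0 * np.pi, 200001)
v = a * np.cos(t) * np.sin(theta) + b * np.sin(t) * np.cos(theta)

H = v.max() - v.min()   # total vertical extent ("height at the extremes")
h = a * b / sU          # vertical height at the center, from setting u = 0 in (1)

assert np.isclose(abs(rho), np.sqrt(1.0 - (h / H)**2), atol=1e-6)
```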
{ "language": "en", "url": "https://stats.stackexchange.com/questions/182293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Marginal distribution of normal random variable with a normal mean I have a question about a calculation involving the conditional and marginal densities of two normal distributions. I have random variables $X\mid M \sim \text{N}(M,\sigma^2)$ and $M \sim \text{N}(\theta, s^2)$, with conditional and marginal densities given by: $$\begin{equation} \begin{aligned} f(x|m) &= \frac{1}{\sigma \sqrt{2\pi}} \cdot \exp \Big( -\frac{1}{2} \Big( \frac{x-m}{\sigma} \Big)^2 \Big), \\[10pt] f(m) &= \frac{1}{s \sqrt{2\pi}} \cdot \exp \Big( - \frac{1}{2} \Big( \frac{m-\theta}{s} \Big)^2 \Big). \end{aligned} \end{equation}$$ I would like to know the marginal distribution of $X$. I have multiplied the above densities to form the joint density, but I cannot successfully integrate the result to get the marginal density of interest. My intuition tells me that this is a normal distribution with different parameters, but I can't prove it.
Your intuition is correct - the marginal distribution of a normal random variable with a normal mean is indeed normal. To see this, we first re-frame the joint distribution as a product of normal densities by completing the square: $$\begin{equation} \begin{aligned} f(x,m) &= f(x|m) f(m) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( -\frac{1}{2} \Big[ \Big( \frac{x-m}{\sigma} \Big)^2 + \Big( \frac{m-\theta}{s} \Big)^2 \Big] \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( -\frac{1}{2} \Big[ \Big( \frac{1}{\sigma^2}+\frac{1}{s^2} \Big) m^2 -2 \Big( \frac{x}{\sigma^2} + \frac{\theta}{s^2} \Big) m + \Big( \frac{x^2}{\sigma^2} + \frac{\theta^2}{s^2} \Big) \Big] \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( -\frac{1}{2 \sigma^2 s^2} \Big[ (s^2+\sigma^2) m^2 -2 (x s^2+ \theta \sigma^2) m + (x^2 s^2+ \theta^2 \sigma^2) \Big] \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( - \frac{s^2+\sigma^2}{2 \sigma^2 s^2} \Big[ m^2 -2 \cdot \frac{x s^2 + \theta \sigma^2}{s^2+\sigma^2} \cdot m + \frac{x^2 s^2 + \theta^2 \sigma^2}{s^2+\sigma^2} \Big] \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( - \frac{s^2+\sigma^2}{2 \sigma^2 s^2} \Big( m - \frac{x s^2 + \theta \sigma^2}{s^2+\sigma^2} \Big)^2 \Big) \\[6pt] &\quad \quad \quad \text{ } \times \exp \Big( \frac{(x s^2 + \theta \sigma^2)^2}{2 \sigma^2 s^2 (s^2+\sigma^2)} - \frac{x^2 s^2 + \theta^2 \sigma^2}{2 \sigma^2 s^2} \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( - \frac{s^2+\sigma^2}{2 \sigma^2 s^2} \Big( m - \frac{x s^2 + \theta \sigma^2}{s^2+\sigma^2} \Big)^2 \Big) \cdot \exp \Big( -\frac{1}{2} \frac{(x-\theta)^2}{s^2+\sigma^2} \Big) \\[10pt] &= \sqrt{\frac{s^2+\sigma^2}{2\pi \sigma^2 s^2}} \cdot \exp \Big( - \frac{s^2+\sigma^2}{2 \sigma^2 s^2} \Big( m - \frac{x s^2 + \theta \sigma^2}{s^2+\sigma^2} \Big)^2 \Big) \\[6pt] &\quad \times \sqrt{\frac{1}{2\pi (s^2+\sigma^2)}} \cdot \exp \Big( -\frac{1}{2} \frac{(x-\theta)^2}{s^2+\sigma^2} \Big) \\[10pt] &= \text{N} \Big( 
m \Big| \frac{xs^2+\theta\sigma^2}{s^2+\sigma^2}, \frac{s^2 \sigma^2}{s^2+\sigma^2} \Big) \cdot \text{N}(x|\theta, s^2+\sigma^2). \end{aligned} \end{equation}$$ We then integrate out $m$ to obtain the marginal density $f(x) = \text{N}(x|\theta, s^2+\sigma^2)$. From this exercise we see that $X \sim \text{N}(\theta, s^2+\sigma^2)$.
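The integration step can be confirmed numerically: evaluating $\int f(x|m)\,f(m)\,\mathrm dm$ on a fine grid of $m$ values reproduces the $\text{N}(x|\theta, s^2+\sigma^2)$ density. A sketch with arbitrary parameter values:

```python
import numpy as np

def npdf(z, mu, var):
    """Normal density with mean mu and variance var."""
    return np.exp(-0.5 * (z - mu)**2 / var) / np.sqrt(2 * np.pi * var)

theta, s2, sig2 = 1.0, 2.0, 0.5   # arbitrary example values
m = np.linspace(-25.0, 25.0, 500001)  # integration grid for the mean parameter
dm = m[1] - m[0]

for x in (-1.0, 0.0, 2.5):
    # f(x) = integral of f(x|m) f(m) dm, evaluated numerically
    fx = np.sum(npdf(x, m, sig2) * npdf(m, theta, s2)) * dm
    assert np.isclose(fx, npdf(x, theta, s2 + sig2), atol=1e-6)
```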
{ "language": "en", "url": "https://stats.stackexchange.com/questions/372062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 1 }
KL divergence between two univariate Gaussians I need to determine the KL-divergence between two Gaussians. I am comparing my results to these, but I can't reproduce their result. My result is obviously wrong, because the KL is not 0 for KL(p, p). I wonder where I am making a mistake and ask if anyone can spot it. Let $p(x) = N(\mu_1, \sigma_1)$ and $q(x) = N(\mu_2, \sigma_2)$. From Bishop's PRML I know that $$KL(p, q) = - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx$$ where integration is done over the whole real line, and that $$\int p(x) \log p(x) dx = -\frac{1}{2} (1 + \log 2 \pi \sigma_1^2),$$ so I restrict myself to $\int p(x) \log q(x) dx$, which I can write out as $$-\int p(x) \log \frac{1}{(2 \pi \sigma_2^2)^{(1/2)}} e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}} dx,$$ which can be separated into $$\frac{1}{2} \log (2 \pi \sigma_2^2) - \int p(x) \log e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}} dx.$$ Taking the log I get $$\frac{1}{2} \log (2 \pi \sigma_2^2) - \int p(x) \bigg(-\frac{(x-\mu_2)^2}{2 \sigma_2^2} \bigg) dx,$$ where I separate the sums and get $\sigma_2^2$ out of the integral. $$\frac{1}{2} \log (2 \pi \sigma^2_2) + \frac{\int p(x) x^2 dx - \int p(x) 2x\mu_2 dx + \int p(x) \mu_2^2 dx}{2 \sigma_2^2}$$ Letting $\langle \rangle$ denote the expectation operator under $p$, I can rewrite this as $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\langle x^2 \rangle - 2 \langle x \rangle \mu_2 + \mu_2^2}{2 \sigma_2^2}.$$ We know that $var(x) = \langle x^2 \rangle - \langle x \rangle ^2$. 
Thus $$\langle x^2 \rangle = \sigma_1^2 + \mu_1^2$$ and therefore $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + \mu_1^2 - 2 \mu_1 \mu_2 + \mu_2^2}{2 \sigma_2^2},$$ which I can put as $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2}.$$ Putting everything together, I get to \begin{align*} KL(p, q) &= - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx\\\\ &= \frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} (1 + \log 2 \pi \sigma_1^2)\\\\ &= \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2}. \end{align*} This is wrong, since it equals $\frac{1}{2}$ rather than $0$ for two identical Gaussians. Can anyone spot my error? Update Thanks to mpiktas for clearing things up. The correct answer is: $KL(p, q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2}$
OK, my bad. The error is in the last equation: \begin{align} KL(p, q) &= - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx\\\\ &=\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} (1 + \log 2 \pi \sigma_1^2)\\\\ &= \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} \end{align} Note the missing $-\frac{1}{2}$. The last line becomes zero when $\mu_1=\mu_2$ and $\sigma_1=\sigma_2$.
{ "language": "en", "url": "https://stats.stackexchange.com/questions/7440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "138", "answer_count": 2, "answer_id": 1 }
How to compute the standard error of the mean of an AR(1) process? I try to compute the standard error of the mean for a demeaned AR(1) process $x_{t+1} = \rho x_t + \varepsilon_{t+1} =\sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t+1-i}$ Here is what I did: $$ \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\ &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} \sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t-i}\right) \\ &= \frac{1}{N^2} Var\begin{pmatrix} \rho^0 \varepsilon_0 + & \rho^1 \varepsilon_{-1} + & \rho^2 \varepsilon_{-2} + & \cdots & \rho^{\infty} \varepsilon_{-\infty} + \\ \rho^0 \varepsilon_1 + & \rho^1 \varepsilon_{0} + & \rho^2 \varepsilon_{-1} + & \cdots & \rho^{\infty} \varepsilon_{1-\infty} + \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^0\varepsilon_{N-1} + & \rho^1 \varepsilon_{N-2} + & \rho^2 \varepsilon_{N-3} + & \cdots & \rho^{\infty} \varepsilon_{N-1-\infty} + \\ \end{pmatrix} \\ &= \frac{1}{N^2} Var\begin{pmatrix} \rho^0 \varepsilon_{N-1} + \\ (\rho^0 + \rho^1) \varepsilon_{N-2} + \\ (\rho^0 + \rho^1 + \rho^2) \varepsilon_{N-3} + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2}) \varepsilon_{1} + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1}) \varepsilon_{0} + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N}) \varepsilon_{-1} + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1}) \varepsilon_{-2} + \\ \cdots\\ \end{pmatrix} \\ &= \frac{\sigma_{\varepsilon}^2}{N^2} \begin{pmatrix} \rho^0 + \\ (\rho^0 + \rho^1) + \\ (\rho^0 + \rho^1 + \rho^2) + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2}) + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1}) + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N}) + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1}) + \\ \cdots\\ \end{pmatrix} \\ &= \frac{N \sigma_{\varepsilon}^2}{N^2} (\rho^0 + \rho^1 + \dots + \rho^{\infty}) \\ &= \frac{\sigma_{\varepsilon}^2}{N} \frac{1}{1 - \rho} \\ \end{align*} $$ Probably, not every step is done in 
the most obvious way, so let me add some thoughts. In the third row, I just write out the two sum signs. Here, the matrix has $N$ rows. In the fourth row, I realign the matrix so that there is one row for every epsilon, so the number of rows is infinite here. Note that the last three parts in the matrix have the same number of elements, just differing by a factor $\rho$ in each row. In the fifth row, I apply the rule that the variance of a sum of independent shocks is the sum of the variances of those shocks, and notice that each $\rho^j$ element is summed up $N$ times. The end result looks neat, but is probably wrong. Why do I think so? Because I ran a Monte Carlo simulation in R and things don't add up:

nrMCS <- 10000
N <- 100
pers <- 0.9
means <- numeric(nrMCS)
for (i in 1:nrMCS) {
  means[i] <- mean(arima.sim(list(order=c(1,0,0), ar=pers), n = N))
}
#quantile(means, probs=c(0.025, 0.05, 0.5, 0.95, 0.975))
#That is the empirical standard error
sd(means)
0.9459876
#This should be the standard error according to my formula
1/(N*(1-pers))
0.1

Any hints on what I am doing wrong would be great! Or maybe a hint about where I can find the correct derivation (I couldn't find anything). Is the problem maybe that I assume independence between the same errors? $$Var(X + X) = Var(2X) = 4Var(X) \neq 2Var(X)$$ I thought about that, but I don't see where I make that erroneous assumption in my derivation. UPDATE I forgot to square the rhos, as Nuzhi correctly pointed out. Hence it should look like: $$ Var(\overline{x}) = \frac{\sigma_{\varepsilon}^2}{N^2} \begin{pmatrix} \rho^{2\times0} + \\ (\rho^0 + \rho^1)^2 + \\ (\rho^0 + \rho^1 + \rho^2)^2 + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2})^2 + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1})^2 + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N})^2 + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1})^2 + \\ \cdots\\ \end{pmatrix} $$
This is the R code, btw:

nrMCS <- 10000
N <- 100
pers <- 0.9
means <- numeric(nrMCS)
for (i in 1:nrMCS) {
  means[i] <- mean(arima.sim(model=list(ar=c(pers)), n = N, mean=0, sd=1))
}
#Simulation answer
ans1 <- sd(means)

#This should be the standard error according to the given formula
cov <- 0
for(i in 1:N){
  for(j in 1:N){
    cov <- cov + (1/((N^2)*(1-pers^2)))*pers^abs(j-i)
  }
}
ans2 <- sqrt(cov)
{ "language": "en", "url": "https://stats.stackexchange.com/questions/40585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Probability statistics I pick sports for fun with my friends. We pick 5 sports picks every day. What is the probability of going at least 3/5 for 50 days in a row? How would you set this up?
Well, you haven't told us much in your question formulation, so I am going to assume that the chance of picking correctly is the same as the chance of picking incorrectly, i.e., each pick is correct with probability $0.5$. Now, to solve this problem, let's think about the probability of picking at least 3/5 sports correctly in one day. Define $X=$ the number of sports correctly picked in a day. Then, the probability of picking at least 3 out of the 5 sports correctly is the following: \begin{align*} Pr(X\geq3) &= Pr(X=3) + Pr(X=4) + Pr(X=5)\\ &={5\choose 3}\left(\frac{1}{2}\right)^3\left(\frac{1}{2}\right)^2+{5\choose 4}\left(\frac{1}{2}\right)^4\left(\frac{1}{2}\right)^1+{5\choose 5}\left(\frac{1}{2}\right)^5\left(\frac{1}{2}\right)^0\\ &=10\left(\frac{1}{2}\right)^5+5\left(\frac{1}{2}\right)^5+1\left(\frac{1}{2}\right)^5\\ &=\left(\frac{1}{2}\right)^5(10+5+1)\\ &=\left(\frac{1}{2}\right)^5(16)\\ &=0.5 \end{align*} And so, the probability of selecting at least 3 out of the 5 sports correctly in one day is 0.5. This shouldn't come as a surprise: with success probability $0.5$ the binomial distribution is symmetric, so $Pr(X\geq 3) = Pr(X\leq 2)$, and since these two events together have probability 1, each must equal 0.5. Now, ultimately, you want to know the probability of picking at least 3 out of the 5 sports correctly for 50 days in a row. Assuming the days are independent, this probability is the following: \begin{align*} Pr(X\geq 3 \text{ for }50 \text{ days in a row})&=Pr(X\geq 3 \text{ on day 1 and day 2 ... and day 50})\\ &=Pr(X\geq 3 \text{ on day 1})\times\cdots\times Pr(X\geq 3 \text{ on day 50})\\ &=\left(\frac{1}{2}\right)^5(16)\times\cdots\times \left(\frac{1}{2}\right)^5(16)\\ &=\left(\left(\frac{1}{2}\right)^5(16)\right)^{50}\\ &=(0.50)^{50}\\ &\approx 8.881784\times 10^{-16} \end{align*} So there is practically a 0% chance of it occurring.
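The two steps above are easy to reproduce in a few lines (assuming, as in the answer, fair 50/50 picks and independent days):

```python
from math import comb

p = 0.5   # assumed chance of a correct pick

# P(X >= 3) for X ~ Binomial(5, 0.5): one day's success probability
p_day = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))
assert p_day == 0.5

# 50 independent days in a row
p_streak = p_day ** 50
assert abs(p_streak - 8.881784e-16) < 1e-21
```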
{ "language": "en", "url": "https://stats.stackexchange.com/questions/182642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Monty hall problem, getting different probabilities using different formulas? In my Monty hall problem, I am computing the probability $P(H=1|D=3)$, i.e. the prize is behind door 1 and the 3rd door is opened. $P(H=1|D=3) = p(H=1) \cdot \frac{p(D=3|H=1)}{p(D=3)} = \frac{1/3 \cdot 1/2}{1/3} = 1/2 = 50\%$ $P(H=1|D=3) = p(H=1) \frac{p(D=3|H=1)}{\sum_{i=1}^3 p(H=i) \cdot p(D=3|H=i)} = \frac{1/3 \cdot 1/2}{1/3 \cdot 1/2 + 1/3 \cdot 1 + 0} = 1/3 = 33\%$ When I use Bayes' formula without the summation in the denominator I get 50%, and when I use the summation in the denominator I get 33%. Why is there a difference?
Your problem is that $P(D=3 \mid H=2) = \frac{1}{2}$, but you incorrectly wrote that it equals 1. Explanation: Let $D$ be the door with the prize. Let $H$ be the door Mr. Hall opens. The joint distribution of $D$ and $H$ is described in this table: $$ \begin{array}{cccc} &\text{H=1}&\text{H=2}&\text{H=3} \\ D = 1 & 0 & \frac{1}{6} & \frac{1}{6} \\ D = 2 & \frac{1}{6} & 0 & \frac{1}{6} \\ D = 3 & \frac{1}{6} & \frac{1}{6} & 0 \end{array}$$ The problem is in the denominator of your second formula: $P(H=1|D=3) = p(H=1) \frac{p(D=3|H=1)}{\sum_{i=1}^3 p(H=i) \cdot p(D=3|H=i)} = \frac{1/3 \cdot 1/2}{1/3 \cdot 1/2 + 1/3 \cdot 1 + 0} = 1/3 = 33\%$ There you wrote $P(D=3 \mid H=2) = 1$. That is incorrect: $$P(D=3\mid H=2) = \frac{P(D=3,H=2)}{P(H=2)} = \frac{1/6}{1/3} = \frac{1}{2}.$$ Make that correction and you have: $P(H=1|D=3) = p(H=1) \frac{p(D=3|H=1)}{\sum_{i=1}^3 p(H=i) \cdot p(D=3|H=i)} = \frac{1/3 \cdot 1/2}{1/3 \cdot 1/2 + 1/3 \cdot 1/2 + 0} = 1/2,$ which is correct. An additional comment: This analysis does not solve the Monty Hall problem, because it completely neglects the door $C$ that the contestant chooses.
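The table can be checked mechanically with exact arithmetic. The following sketch enumerates the joint distribution and reproduces both the disputed likelihood and the corrected posterior:

```python
from fractions import Fraction

F = Fraction
# Joint table from the answer (D = prize door, H = opened door):
# P(D=d, H=h) = 1/6 when h != d, and 0 when h == d.
joint = {(d, h): (F(1, 6) if d != h else F(0)) for d in (1, 2, 3) for h in (1, 2, 3)}

# The disputed likelihood: P(D=3 | H=2) = P(D=3, H=2) / P(H=2) = 1/2, not 1.
p_h2 = sum(joint[(d, 2)] for d in (1, 2, 3))
assert joint[(3, 2)] / p_h2 == F(1, 2)

# The posterior in the question: P(prize behind door 1 | door 3 opened) = 1/2.
p_h3 = sum(joint[(d, 3)] for d in (1, 2, 3))
assert joint[(1, 3)] / p_h3 == F(1, 2)
```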
{ "language": "en", "url": "https://stats.stackexchange.com/questions/213693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Writing MA and AR representations I have to determine if $$(1 - 1.1B + 0.8B^2)Y_t = (1 - 1.7B + 0.72B^2)a_t$$ is stationary, invertible or both. I have shown that $\Phi(B) = 1 - 1.1B + 0.8B^2 = 0$ when $B_{1,2} = 0.6875 \pm 0.8817i$, whose moduli are both larger than 1, hence is stationary. Similarly, I have shown that $\Theta(B) = 1 - 1.7B + 0.72B^2 = 0$, when $B_1 = 1.25 > 1$ and $B_2 = 1.11 > 1$, hence is invertible. I also need to express the model as a MA and AR representation if it exists; which they do as I have already shown. However, to write as an MA process, I would need to write as: $$Y_t = \frac{1 - 1.7B + 0.72B^2}{1 - 1.1B + 0.8B^2}a_t$$ and for an AR process as: $$\frac{1 - 1.1B + 0.8B^2}{1 - 1.7B + 0.72B^2}Y_t = a_t$$ However, I am confused on how to do this given the division of the quadratic expressions. Should I use long division or is there some expansion formula I should be using?
Try the partial fraction decomposition: \begin{align} \frac{1}{(1 - \alpha B)(1 - \beta B)} & = \frac{\alpha/(\alpha - \beta)}{1 - \alpha B} + \frac{\beta/(\beta - \alpha)}{1 - \beta B} \\ & = (\alpha - \beta)^{-1} \left( \alpha(1 - \alpha B)^{-1} - \beta (1 - \beta B)^{-1} \right) \\ & = (\alpha - \beta)^{-1} \left( \alpha \sum_{k = 0}^\infty \alpha^k B^k - \beta \sum_{k = 0}^\infty \beta^k B^k \right) \\ & = (\alpha - \beta)^{-1} \sum_{k = 0}^\infty (\alpha^{k+1} - \beta^{k+1}) B^k \end{align} and apply it to both cases like: \begin{align} \frac{1 + c B + d B^2}{(1 - \alpha B)(1 - \beta B)} & = (1 + c B + d B^2) (\alpha - \beta)^{-1} \sum_{k = 0}^\infty (\alpha^{k+1} - \beta^{k+1}) B^k \\ & = (\alpha - \beta)^{-1} \sum_{k = 0}^\infty (\alpha^{k+1} - \beta^{k+1}) (1 + c B + d B^2) B^k \\ & = (\alpha - \beta)^{-1} \sum_{k = 0}^\infty (\alpha^{k+1} - \beta^{k+1}) (B^k + c B^{k+1} + d B^{k+2}) \end{align} and by distributing and reindexing the summations, we have \begin{align} & = (\alpha - \beta)^{-1} \left( \sum_{k = 0}^\infty (\alpha^{k+1} - \beta^{k+1}) B^k + \sum_{k = 1}^\infty c (\alpha^k - \beta^k) B^k + \sum_{k = 2}^\infty d (\alpha^{k-1} - \beta^{k-1}) B^k \right) \\ & = (\alpha - \beta)^{-1} \left( (\alpha - \beta) + (\alpha^2 - \beta^2 + c(\alpha - \beta)) B + \sum_{k = 2}^\infty [(\alpha^{k+1} - \beta^{k+1}) + c (\alpha^k - \beta^k) + d (\alpha^{k-1} - \beta^{k-1})] B^k \right) \\ & = 1 + (\alpha + \beta + c) B + (\alpha - \beta)^{-1} \sum_{k = 2}^\infty [(\alpha^{k+1} - \beta^{k+1}) + c (\alpha^k - \beta^k) + d (\alpha^{k-1} - \beta^{k-1})] B^k \end{align} Assuming I haven't made any mistakes, this gives the AR representation when $\alpha = 0.9$, $\beta = 0.8$, $c = -1.1$, and $d = 0.8$ and gives the MA process when $\alpha, \beta = 0.55 \pm 0.70534 i$ (the reciprocals of the AR-polynomial roots $0.68750 \pm 0.88167 i$, so that $\alpha + \beta = 1.1$ and $\alpha\beta = 0.8$), $c = -1.7$, and $d = 0.72$. 
Perhaps you could even simplify this further using the difference-of-$n$th-powers formula (you could certainly cancel the $(\alpha - \beta)^{-1}$ term this way, but I don't know if you would call the result "simpler").
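Rather than expanding the partial fractions by hand, you can also generate the $\psi$-weights numerically from the recursion implied by $\Phi(B)\psi(B) = \Theta(B)$, i.e. $\psi_j = \theta_j + 1.1\,\psi_{j-1} - 0.8\,\psi_{j-2}$ for the MA representation. A sketch of mine (not from the answer above):

```python
def psi_weights(ar, ma, n):
    """First n weights psi_j of Theta(B)/Phi(B), where ar = [1, -phi1, -phi2, ...]
    and ma = [1, theta1, theta2, ...] are the coefficients of the lag polynomials."""
    psi = []
    for j in range(n):
        acc = ma[j] if j < len(ma) else 0.0
        # move the AR terms to the right-hand side: psi_j = theta_j - sum_k ar[k]*psi_{j-k}
        for k in range(1, min(j, len(ar) - 1) + 1):
            acc -= ar[k] * psi[j - k]
        psi.append(acc)
    return psi

# Y_t = Theta(B)/Phi(B) a_t with Phi = 1 - 1.1B + 0.8B^2, Theta = 1 - 1.7B + 0.72B^2
psi = psi_weights([1.0, -1.1, 0.8], [1.0, -1.7, 0.72], 6)
print(psi[:3])  # psi_1 = -0.6 matches the closed form alpha + beta + c = 1.1 - 1.7
```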
{ "language": "en", "url": "https://stats.stackexchange.com/questions/513073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding method of moments estimate for density function $f(x|\alpha) = \frac {\Gamma(2\alpha)} {\Gamma(\alpha)^2}[x(1-x)]^{\alpha - 1}$ Suppose that $X_1, X_2, ..., X_n$ are i.i.d random variables on the interval $[0,1]$ with the density function $$ f(x|\alpha) = \frac {\Gamma(2\alpha)} {\Gamma(\alpha)^2}[x(1-x)]^{\alpha - 1} $$ where $\alpha > 0$ is a parameter to be estimated from the sample. It can be shown that \begin{align} E(X) &= \frac 1 2 \\ \text{Var}(X) &= \frac 1 {4(2\alpha +1)} \end{align} How can the method of moments be used to estimate $\alpha$? My attempt It is clear that the first moment of $X$ is $\mu_1 = E(X) = \frac 1 2$. The second moment of $X$ is given by \begin{align} \mu_2 &= E(X^2) \\ &= \text{Var}(X) + (E(X))^2 \\ &= \frac 1 {4(2\alpha + 1)} + \frac 1 4 \\ &= \frac {\alpha + 1} {2(2\alpha + 1)} \end{align} Thus we have the relation $$ \alpha = \frac {1 - 2\mu_2} {4\mu_2 - 1} $$ Using the method of moments, we obtain \begin{align} \hat{\alpha} &= \frac {1 - 2\hat{\mu_2}} {4\hat{\mu_2} - 1} \\ &= \frac {1 - \frac 2 n \sum_{i=1}^n X_i^2} {\frac 4 n \sum_{i=1}^n X_i^2 - 1} \\ &= \frac {n - 2 \sum_{i=1}^n X_i^2} {4 \sum_{i=1}^n X_i^2 - n} \end{align} Solution provided $$ \hat{\alpha} = \frac n {8 \sum_{i=1}^n X_i^2 - 2n} - \frac 1 2 $$ Did I apply the method of moments correctly for this question? I can't seem to obtain the form as suggested in the sample solution provided. Any advice would be greatly appreciated!
You are right. In fact \begin{align} \frac n {8 \sum_{i=1}^n X_i^2 - 2n} - \frac 1 2&= \frac{2n - 8 \sum_{i=1}^n X_i^2 + 2n}{16 \sum_{i=1}^n X_i^2 - 4n}\\ &= \frac{4n - 8 \sum_{i=1}^n X_i^2}{16 \sum_{i=1}^n X_i^2 - 4n}\\ &=\frac {n - 2 \sum_{i=1}^n X_i^2} {4 \sum_{i=1}^n X_i^2 - n}\,. \end{align} The last equality is just a simplification by $4$.
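The equivalence of the two forms can also be confirmed numerically; here is a small check of mine with exact rational arithmetic on an arbitrary made-up sample:

```python
from fractions import Fraction

x = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4), Fraction(2, 5)]  # any sample in (0, 1)
n = len(x)
s2 = sum(xi * xi for xi in x)  # sum of X_i^2

your_form = (n - 2 * s2) / (4 * s2 - n)
book_form = n / (8 * s2 - 2 * n) - Fraction(1, 2)
print(your_form == book_form)  # True
```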
{ "language": "en", "url": "https://stats.stackexchange.com/questions/541121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to compute the standard error of the mean of an AR(1) process? I try to compute the standard error of the mean for a demeaned AR(1) process $x_{t+1} = \rho x_t + \varepsilon_{t+1} =\sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t+1-i}$ Here is what I did: $$ \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\ &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} \sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t-i}\right) \\ &= \frac{1}{N^2} Var\begin{pmatrix} \rho^0 \varepsilon_0 + & \rho^1 \varepsilon_{-1} + & \rho^2 \varepsilon_{-2} + & \cdots & \rho^{\infty} \varepsilon_{-\infty} + \\ \rho^0 \varepsilon_1 + & \rho^1 \varepsilon_{0} + & \rho^2 \varepsilon_{-1} + & \cdots & \rho^{\infty} \varepsilon_{1-\infty} + \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^0\varepsilon_{N-1} + & \rho^1 \varepsilon_{N-2} + & \rho^2 \varepsilon_{N-3} + & \cdots & \rho^{\infty} \varepsilon_{N-1-\infty} + \\ \end{pmatrix} \\ &= \frac{1}{N^2} Var\begin{pmatrix} \rho^0 \varepsilon_{N-1} + \\ (\rho^0 + \rho^1) \varepsilon_{N-2} + \\ (\rho^0 + \rho^1 + \rho^2) \varepsilon_{N-3} + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2}) \varepsilon_{1} + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1}) \varepsilon_{0} + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N}) \varepsilon_{-1} + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1}) \varepsilon_{-2} + \\ \cdots\\ \end{pmatrix} \\ &= \frac{\sigma_{\varepsilon}^2}{N^2} \begin{pmatrix} \rho^0 + \\ (\rho^0 + \rho^1) + \\ (\rho^0 + \rho^1 + \rho^2) + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2}) + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1}) + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N}) + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1}) + \\ \cdots\\ \end{pmatrix} \\ &= \frac{N \sigma_{\varepsilon}^2}{N^2} (\rho^0 + \rho^1 + \dots + \rho^{\infty}) \\ &= \frac{\sigma_{\varepsilon}^2}{N} \frac{1}{1 - \rho} \\ \end{align*} $$ Probably, not every step is done in 
the most obvious way, so let me add some thoughts. In the third row, I just write out the two sum signs. Here, the matrix has N rows. In the fourth row, I realign the matrix so that there is one row for every epsilon, so the number of rows is infinite here. Note that the last three parts in the matrix have the same number of elements, differing only by a factor $\rho$ in each row. In the fifth row, I apply the rule that the variance of the sum of independent shocks is the sum of the variances of those shocks and notice that each $\rho^j$ element is summed up $N$ times. The end result looks neat, but is probably wrong. Why do I think so? Because I ran an MCS in R and things don't add up:

nrMCS <- 10000
N <- 100
pers <- 0.9
means <- numeric(nrMCS)
for (i in 1:nrMCS) {
  means[i] <- mean(arima.sim(list(order=c(1,0,0), ar=pers), n = N))
}
#quantile(means, probs=c(0.025, 0.05, 0.5, 0.95, 0.975))
#That is the empirical standard error
sd(means)
0.9459876
#This should be the standard error according to my formula
1/(N*(1-pers))
0.1

Any hints on what I am doing wrong would be great! Or maybe a hint where I can find the correct derivation (I couldn't find anything). Is the problem maybe that I assume independence between the same errors? $$Var(X + X) = Var(2X) = 4Var(X) \neq 2Var(X)$$ I thought about that, but don't see where I make that erroneous assumption in my derivation. UPDATE I forgot to square the rhos, as Nuzhi correctly pointed out. Hence it should look like: $$ Var(\overline{x}) = \frac{\sigma_{\varepsilon}^2}{N^2} \begin{pmatrix} \rho^{2\times0} + \\ (\rho^0 + \rho^1)^2 + \\ (\rho^0 + \rho^1 + \rho^2)^2 + \\ \cdots \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2})^2 + \\ (\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1})^2 + \\ (\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N})^2 + \\ (\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1})^2 + \\ \cdots\\ \end{pmatrix} $$
Don't know if it qualifies as a formal answer, but two notes. On the simulation side: sd(means) is in fact already the Monte Carlo estimate of the standard error of $\bar{x}$ — each replication yields one mean of $N$ observations, so the spread of those 10000 means is the standard deviation of the sampling distribution of the estimator, and dividing it by sqrt(N) again would be double-counting. (It would make more sense to call the derived quantity $SE(\bar{x}) = \sqrt{Var(\bar{x})}$ in the derivation, since that is what sd(means) estimates.) On the derivation side: the variance of an AR(1) process is the variance of the error term divided by (1 - phi^2), where you had N*(1-phi) (the N wouldn't be there if it were just the variance). Once the coefficient sums are squared, as in your update, the large-$N$ result is $Var(\bar{x}) \approx \sigma_\varepsilon^2 / (N(1-\phi)^2)$, i.e. a standard error of $1/(\sqrt{100}\,(1-0.9)) = 1$ here — roughly consistent with the simulated 0.946, with the remaining gap a finite-sample effect. varianceAR1 simple AR(1) derivation on p. 36
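For what it's worth, Var(x̄) can be computed exactly, with no simulation, from the stationary autocovariances $\gamma(k) = \sigma_\varepsilon^2 \rho^k/(1-\rho^2)$ via $Var(\bar{x}) = N^{-2}\left[N\gamma(0) + 2\sum_{k=1}^{N-1}(N-k)\gamma(k)\right]$. A sketch of mine in Python with the question's values:

```python
def ar1_mean_sd(N, rho, sigma_eps=1.0):
    """Exact standard deviation of the sample mean of a stationary AR(1)."""
    gamma0 = sigma_eps**2 / (1 - rho**2)
    var = N * gamma0
    for k in range(1, N):
        var += 2 * (N - k) * gamma0 * rho**k
    return (var / N**2) ** 0.5

exact = ar1_mean_sd(100, 0.9)             # ~0.95, matching the simulated 0.946
asymptotic = 1 / (100**0.5 * (1 - 0.9))   # sigma_eps / (sqrt(N)(1 - rho)) = 1
print(exact, asymptotic)
```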
{ "language": "en", "url": "https://stats.stackexchange.com/questions/40585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Finding the PMF of conditional probability, poisson process. Don't understand where $10^6$ goes "Customers arrive at a bank according to a Poisson process with rate 6 per hour. State (together with a proof) clearly the (conditional) probability mass function of the numbers of customers arrived during the first 20 minutes, given that 10 customers have arrived during the first hour." I did this: $$ P(X_{\frac{1}{3}} = x | X_1 = 10) $$ From the definition of joint probability, we know $$ P(X_{\frac{1}{3}} = x , X_1 = 10) = P(X_{\frac{1}{3}} = x | X_1 = 10) P(X_1 = 10)$$ Re-aranging gives: $$ P(X_{\frac{1}{3}} = x | X_1 = 10) = \frac{ P(X_{\frac{1}{3}} = x , X_1 = 10)} {P(X_1 = 10)}$$ Let's first calculate $P(X_{\frac{1}{3}} = x , X_1 = 10)$: This becomes: $$ P(X_{\frac{1}{3}} = x , X_{\frac{2}{3}}= y) = P(X_{\frac{1}{3}} = x) P(X_{\frac{2}{3}} = y) \hspace{1cm} y = 10 - x $$ Using the Poisson formula, you get: $$P(X_{\frac{1}{3}} = x) = \frac{e^{-2}2^x}{x!} \hspace 2cm P(X_{\frac{2}{3}}= y) = \frac{e^{-4}4^y}{y!}$$ And so $$ P(X_{\frac{1}{3}} = x , X_{\frac{2}{3}}= y) = e^{-6} \frac{2^x4^y}{x!y!} $$ Also, using the Poisson formula, we get: $$ P(X_1 = 10) = \frac{e^{-6}6^{10}}{10!} $$ So $$\frac{ P(X_{\frac{1}{3}} = x , X_1 = 10)} {P(X_1 = 10)} = \frac{e^{-6} \frac{2^x4^y}{x!y!}}{\frac{e^{-6}6^{10}}{10!}} $$ Which I get to be $$ \frac{10! 2^x 4^y}{x!y! 6^{10}}$$ But in the answers, it says it should be $$ \binom{10}{x} \left( \frac{1}{3} \right)^x \left( \frac{2}{3} \right)^{10 - x} $$ Why is this?
I assume in your question you mean $6^{10}$ as opposed to $10^6$. So, let's start with what you have: $$\frac{10! 2^x 4^{10-x}}{x!(10-x)!6^{10}} = \frac{10!}{x!(10-x)!}\times\frac{2^x 4^{10-x}}{6^{10}}.$$ The first term is $\binom{10}{x}.$ Let's pull $2^{10}$ out of the top and bottom of the second term and cancel them, leaving us with $$\binom{10}{x} \frac{1^x 2^{10-x}}{3^{10}} = \binom{10}{x} \frac{1^x}{3^x} \frac{2^{10-x}}{3^{10-x}} = \binom{10}{x}\left(\frac13\right)^x\left(\frac23\right)^{10-x}$$ which is the answer you want.
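A quick numeric check of this algebra (my own sketch): the ratio of Poisson terms matches the Binomial(10, 1/3) pmf for every x.

```python
from fractions import Fraction
from math import comb, factorial

# 20 minutes is 1/3 of the hour, so the target pmf is Binomial(10, 1/3)
for x in range(11):
    y = 10 - x
    ratio_form = Fraction(factorial(10) * 2**x * 4**y,
                          factorial(x) * factorial(y) * 6**10)
    binom_form = comb(10, x) * Fraction(1, 3)**x * Fraction(2, 3)**y
    assert ratio_form == binom_form
print("identical for all x")
```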
{ "language": "en", "url": "https://stats.stackexchange.com/questions/46241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$? This question is from DEGROOT's PROBABILITY and STATISTICS. Problem Suppose that two dice are to be rolled repeatedly and the sum $T$ of the two numbers is to be observed for each roll. We shall determine the probability $p$ that the value $T =7$ will be observed before the value $T =8$ is observed. Solution The desired probability $p$ could be calculated directly as follows: We could assume that the sample space $S$ contains all sequences of outcomes that terminate as soon as either the sum $T = 7$ or the sum $T = 8$ is obtained. Then we could find the sum of the probabilities of all the sequences that terminate when the value $T = 7$ is obtained. However,there is a simpler approach in this example. We can consider the simple experiment in which two dice are rolled. If we repeat the experiment until either the sum $T = 7$ or the sum $T = 8$ is obtained, the effect is to restrict the outcome of the experiment to one of these two values. Hence, the problem can be restated as follows: Given that the outcome of the experiment is either $T = 7$ or $T = 8$, determine the probability $p$ that the outcome is actually $T = 7$. If we let $A$ be the event that $T = 7$ and let $B$ be the event that the value of $T$ is either $7$ or $8$, then $A ∩ B = A$ and $$ p = Pr(A|B) = \frac{Pr(A ∩ B)}{Pr(B)} =\frac{Pr(A)}{Pr(B)} $$ But $Pr(A) = 6/36$ and $Pr(B) = (6/36) + (5/36) = 11/36$. Hence, $p = 6/11$. Now, my doubts are * *Why does the author say We could assume that the sample space $S$ contains all sequences of outcomes that terminate as soon as either the sum $T = 7$ or the sum $T = 8$ is obtained. Then we could find the sum of the probabilities of all the sequences that terminate when the value $T = 7$ is obtained. 
*How can we go from lengthy sequences of outcomes that terminate as soon as either the sum $T = 7$ or the sum $T = 8$ is obtained to just the outcome of the experiment for which either $T = 7$ or $T = 8$ ?
Question 1 * *Why does the author say We could assume that the sample space $S$ contains all sequences of outcomes that terminate as soon as either the sum $T = 7$ or the sum $T = 8$ is obtained. Then we could find the sum of the probabilities of all the sequences that terminate when the value $T = 7$ is obtained. Answer Sample space $S$ has $m \rightarrow \infty$ sequences of length $n \rightarrow \infty$ that end in either $7$ or $8$. Out of these sequences we're interested in summing up the probabilities of all the series that end in a $7$. The probability of a sequence of precisely $n$ throws ending in a $7$ is: $$ P_n(7) = \left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36} $$ However, since $n$ can take any value up to infinity, the overall probability of ending a sequence of any length in a $7$ is the sum of the prob. of ending a seq. after one throw plus the prob. of ending a sequence after two throws, and so on. This is the geometric series: $$ \Phi_7 = P_1(7) + P_2(7) + P_3(7) + ... + P_n(7) $$ which, as $n \rightarrow \infty$, sums up to (basic geometric sum formula) $$ \Phi_7 = \lim_{n \rightarrow \infty} \frac{\frac{6}{36}\left(1-\left(\frac{25}{36}\right)^n\right)}{1-\frac{25}{36}} = \lim_{n \rightarrow \infty} \frac{6}{11}\left(1-\left(\frac{25}{36}\right)^n\right) = \frac{6}{11} $$ This is the probability of ending a sequence of throws in a $7$ without ever hitting $8$. It's the answer you're looking for using the first, "more complicated" method. Question 2 * *How can we go from lengthy sequences of outcomes that terminate as soon as either the sum $T = 7$ or the sum $T = 8$ is obtained to just the outcome of the experiment for which either $T = 7$ or $T = 8$ ? Answer This will become clear if we rephrase the first method a little bit. Sample space $S$ has $m \rightarrow \infty$ sequences of length $n \rightarrow \infty$ which end in either a $7$ or an $8$. 
The probability that a sequence of length $n$ ends in $7$, given that it ends in either $7$ or $8$ (two disjoint events, so the probability of their union is the sum $P_n(7) + P_n(8)$), is $$ \frac{P_n(7)}{P_n(7) + P_n(8)} = \frac{\left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36}}{\left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36} + \left(\frac{25}{36}\right)^{n-1} \cdot \frac{5}{36}} = \frac{6}{11} $$ This is a lot of LaTeX for not a very impressive statement, but it is useful because we can now use it to prove by induction the jump from a sequence of length $n$ to a sequence of length $1$. If we run the same formula for $n-1$ we get $$ \frac{P_{n-1}(7)}{P_{n-1}(7) + P_{n-1}(8)} = \frac{\left(\frac{25}{36}\right)^{n-2} \cdot \frac{6}{36}}{\left(\frac{25}{36}\right)^{n-2} \cdot \frac{6}{36} + \left(\frac{25}{36}\right)^{n-2} \cdot \frac{5}{36}} = \frac{6}{11} $$ But this means that $$ \frac{P_n(7)}{P_n(7) + P_n(8)} = \frac{P_{n-1}(7)}{P_{n-1}(7) + P_{n-1}(8)} $$ and it follows, by induction, that $$ \frac{P_n(7)}{P_n(7) + P_n(8)} = \frac{P_{1}(7)}{P_{1}(7) + P_{1}(8)} $$ Therefore, whatever value $n$ takes, the probability of a sequence of that length ending in $7$, given that it ends in either $7$ or $8$, equals the probability of a sequence of length $1$ ending in $7$ given the same condition — and the latter is just $\frac{6/36}{6/36 + 5/36} = \frac{6}{11}$.
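Both routes are easy to check numerically (my own sketch): summing $P_n(7)$ over sequence lengths converges to $6/11$, the same value as the single-throw conditional probability.

```python
# P_n(7): neither 7 nor 8 for n-1 throws (prob 25/36 each time), then a 7 (prob 6/36)
p7, p8, p_neither = 6 / 36, 5 / 36, 25 / 36

total = sum(p_neither ** (n - 1) * p7 for n in range(1, 500))  # geometric series
one_throw = p7 / (p7 + p8)                                     # conditional shortcut
print(total, one_throw)  # both ~0.54545... = 6/11
```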
{ "language": "en", "url": "https://stats.stackexchange.com/questions/71783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Derivation of $\frac{\sum (x - \bar{x})^2}{N} = \frac{\sum x^2 - \frac{(\sum x)^2}{N}}{N}$ I saw the above equation in an introductory statistics textbook, as a shortcut for evaluating the variance of a population. I tried to prove it myself:     $$\sigma^2 = \frac{\sum (x - \bar{x})^2}{N} \tag{1}$$ $$\sigma^2 = \frac{\sum x^2 - \frac{(\sum x)^2}{N}}{N} \tag{2}$$ We are given that $(1) = (2)$: $$\frac{\sum (x - \bar{x})^2}{N} = \frac{\sum x^2 - \frac{(\sum x)^2}{N}}{N} \tag{3}$$ Multiply $(3)$ through by $N$: $$\sum(x - \bar{x})^2 = \sum x^2 - \frac{(\sum x)^2}{N} \tag{4}$$ Expand the LHS in $(4)$: $$\sum\left(x^2 - 2x\bar{x} + \bar{x}^2\right) = {\sum x^2 - \frac{(\sum x)^2}{N}} \tag{5}$$ Expanding both sides in $(5)$: $$\sum x^2 - 2x\sum\bar{x} + \sum\bar{x}^2 = \sum x^2 - \frac{\sum x\sum x}{N} \tag{6}$$ From $(6)$: $$\sum\bar{x}^2 - 2\bar{x}\sum{x} = -\bar{x}\sum{x} \tag{7}$$ From $(7)$: $$\sum\bar{x}^2 = \bar{x}\sum{x} \tag{8}$$ I don't know how to make the LHS equal RHS in $(8)$.
Starting from what you know: $\sigma^2 =\dfrac{\sum (x - \bar{x})^2}{N} $ $= \dfrac{ \sum\left(x^2 - 2x\bar{x} +\bar{x}^2 \right)}{N}$ $= \dfrac{\sum x^2}N - \dfrac{2\sum x\bar{x}}N +\dfrac{\sum\bar{x}^2}N $ $=\dfrac{\sum x^2}N - 2\bar{x}\dfrac{\sum x}{N} + \bar{x}^2 $ $=\dfrac{\sum x^2}N - {2\bar{x}^2} + \bar{x}^2 $ $= \dfrac{\sum x^2}N - {\bar{x}^2} $ Finally, since $\bar{x} = \frac{\sum x}{N}$, we have $\bar{x}^2 = \frac{(\sum x)^2}{N^2}$, so $\sigma^2 = \dfrac{\sum x^2}{N} - \dfrac{(\sum x)^2}{N^2} = \dfrac{\sum x^2 - \frac{(\sum x)^2}{N}}{N}$, which is exactly the shortcut formula $(2)$. Incidentally, your equation $(8)$ is not a dead end — it is an identity: $\sum \bar{x}^2 = N\bar{x}^2$ and $\bar{x}\sum x = N\bar{x}^2$, so both sides are already equal.
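The algebraic identity is easy to sanity-check on data (a sketch of mine with made-up numbers):

```python
data = [2.0, 3.5, 7.0, 1.25, 4.0]
N = len(data)
mean = sum(data) / N

definition = sum((x - mean) ** 2 for x in data) / N             # sum (x - xbar)^2 / N
shortcut = (sum(x * x for x in data) - sum(data) ** 2 / N) / N  # (sum x^2 - (sum x)^2/N) / N
print(definition, shortcut)  # equal up to rounding
```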
{ "language": "en", "url": "https://stats.stackexchange.com/questions/288517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
urn with two colors of balls; probability of selecting specific color 10 balls in an urn, 6 Black and 4 White. Three are removed, color not noted. What is probability that a white ball will be chosen next? The answer is 2/5, so my reasoning below must be faulty. After the initial three balls are removed, there will be 4 possible configurations:
You calculated $P(A)$, $P(B)$, $P(C)$ and $P(D)$ incorrectly. A can happen in $\binom{6}{3} = 20$ ways, B in $\binom{6}{2} * \binom{4}{1} = 60$ ways, C can happen in $\binom{6}{1} * \binom{4}{2} = 36$ ways, D can happen in $4$ ways. To check, there are $20+60+36+4=120$ total ways of removing $3$ balls at random, which is $\binom{10}{3}$. The answer is then $\frac{4}{7} * \frac{20}{120} + \frac{3}{7} * \frac{60}{120} + \frac{2}{7} * \frac{36}{120} + \frac{1}{7} * \frac{4}{120} = \frac{2}{5}$
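The full computation can be reproduced by enumerating the four configurations with binomial coefficients (a sketch of mine, using exact arithmetic):

```python
from fractions import Fraction
from math import comb

total = comb(10, 3)  # ways to remove 3 of the 10 balls
p_white = Fraction(0)
for w in range(4):                    # w = number of white balls among the 3 removed
    ways = comb(6, 3 - w) * comb(4, w)
    p_white += Fraction(ways, total) * Fraction(4 - w, 7)  # white left / 7 remaining
print(p_white)  # 2/5
```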
{ "language": "en", "url": "https://stats.stackexchange.com/questions/301320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
A simple probability problem There are $n$ applicants for the director of computing. The applicants are interviewed independently by each member of the three-person search committee and ranked from $1$ to $n$. A candidate will be hired if he or she is ranked first by at least two of the three interviewers. Find the probability that a candidate will be accepted if the members of the committee really have no ability at all to judge the candidates and just rank the candidates randomly. My reasoning: * *There are $n \cdot (n-1) \cdot (n-2)$ many ways the interviewers put a different person at the first position. *So, $n^3 - n \cdot (n-1) \cdot (n-2)$ ways that at least two interviewers will put the same person at the first position *So, $\frac{n^3 - n \cdot (n-1) \cdot (n-2)}{n^3} = \frac{3n^2-2n}{n^3}$ is the desired probability But the answer given is $\frac{3n-2}{n^3}$ Where is the problem in my reasoning? thanks.
${3\choose2} \cdot 1/n \cdot 1/n \cdot (1 - 1/n)$ --this represents two out of the three committee members ranking a given candidate first times the third committee member ranking someone else first. ${3\choose3} \cdot 1/n \cdot 1/n \cdot 1/n$ --this represents all three committee members ranking that candidate first. Add the two up: $$ \array{ & & 3 \cdot 1/n^2 \cdot (n-1)/n + 1/n^3 & = \\ & = & 3(n-1)/n^3 + 1/n^3 & = \\ & = & (3n - 3)/n^3 + 1/n^3 & = \\ & = & (3n - 3 + 1)/n^3 & = \\ & = & (3n - 2)/n^3 }$$ As for where your own reasoning differs: $n^3 - n(n-1)(n-2) = 3n^2 - 2n$ counts the outcomes in which some candidate is ranked first by at least two members, so $\frac{3n^2-2n}{n^3}$ is the probability that somebody gets hired. The book's $\frac{3n-2}{n^3}$ is the probability that one particular candidate gets hired. Since at most one candidate can receive two of the three first ranks, these per-candidate events are mutually exclusive, and indeed $n \cdot \frac{3n-2}{n^3} = \frac{3n^2-2n}{n^3}$ — your calculation and the book's simply answer different questions.
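For a small $n$ the formula can be verified by brute-force enumeration of all $n^3$ equally likely first-place votes (my own sketch):

```python
from fractions import Fraction
from itertools import product

n = 4
candidate = 0  # track one particular candidate

# count vote triples in which this candidate gets at least two first-place ranks
hired = sum(1 for votes in product(range(n), repeat=3) if votes.count(candidate) >= 2)
p_particular = Fraction(hired, n**3)
p_formula = Fraction(3 * n - 2, n**3)
print(p_particular, p_formula)  # both 10/64 = 5/32
```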
{ "language": "en", "url": "https://stats.stackexchange.com/questions/301663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Least square regression weight estimation for $\beta_1$ and $\beta_2$? So, I have been looking at this post, and others that are similar and know that the least square estimation of $\beta_1,\beta_2$ will be $(X^TX)^{-1}X^TY$, where the model is $Y_i = \beta_1x_{1i}+ \beta_2x_{2i} + \epsilon_i$. But what is the generally derived formula in this case for $\hat{\beta_1}$ and $\hat{\beta_2}$. How can represent them in a scalar multiplied out value/weight format rather than a matrix/vector format?
Based on the comments above (by @Sycorax): $$X^TX = \begin{pmatrix} x_{11} & x_{12} & ... &x_{1n} \\ x_{21} & x_{22} & ... &x_{2n} \end{pmatrix} \begin{pmatrix} x_{11} & x_{21} \\ x_{12} & x_{22} \\ \vdots &\vdots\\ x_{1n} & x_{2n} \end{pmatrix} = \begin{pmatrix} x_{11}^2 + x_{12}^2 + \:...\: x_{1n}^2 & x_{11}x_{21} + x_{12}x_{22} + \:...\: +x_{1n}x_{2n} \\ x_{11}x_{21} + x_{12}x_{22} + \:...\: +x_{1n}x_{2n} & x_{21}^2 + x_{22}^2 + \:...\: x_{2n}^2 \\ \end{pmatrix} = \begin{pmatrix} \sum x_{1i}^2 & \sum x_{1i}x_{2i}\\ \sum x_{1i}x_{2i} & \sum x_{2i}^2 \\ \end{pmatrix}$$ $$(X^TX)^{-1} = \dfrac{1}{\sum x_{1i}^2 \sum x_{2i}^2 - (\sum x_{1i}x_{2i})^2} \begin{pmatrix} \sum x_{2i}^2 & - \sum x_{1i}x_{2i} \\ - \sum x_{1i}x_{2i} & \sum x_{1i}^2 \end{pmatrix}$$ $$X^T Y = \begin{pmatrix} x_{11} & x_{12} & ... &x_{1n} \\ x_{21} & x_{22} & ... &x_{2n} \end{pmatrix} \begin{pmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{pmatrix} = \begin{pmatrix} x_{11}y_1 + x_{12}y_2 + \: ... \: + x_{1n}y_n \\ x_{21}y_1 + x_{22}y_2 + \: ... \: + x_{2n}y_n \end{pmatrix} = \begin{pmatrix} \sum x_{1i}y_i \\ \sum x_{2i}y_i \end{pmatrix} $$ $$\therefore \beta = (X^TX)^{-1}X^TY = \begin{pmatrix} \hat{\beta_1} \\ \hat{\beta_2} \end{pmatrix} = \dfrac{1}{\sum x_{1i}^2 \sum x_{2i}^2 - (\sum x_{1i}x_{2i})^2}\begin{pmatrix} \sum x_{2i}^2 \sum x_{1i}y_i - \sum x_{1i}x_{2i} \sum x_{2i}y_i \\ -\sum x_{1i}x_{2i} \sum x_{1i}y_i + \sum x_{1i}^2 \sum x_{2i}y_i \end{pmatrix} $$
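The scalar formulas can be checked against the normal equations directly (a sketch of mine with made-up data): the fitted residuals must be orthogonal to both regressors.

```python
# A made-up no-intercept two-regressor dataset
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0]
y  = [3.1, 4.2, 9.9, 10.4, 16.0]

s11 = sum(a * a for a in x1)
s22 = sum(a * a for a in x2)
s12 = sum(a * b for a, b in zip(x1, x2))
s1y = sum(a * b for a, b in zip(x1, y))
s2y = sum(a * b for a, b in zip(x2, y))

# closed-form betas from the 2x2 inverse above
det = s11 * s22 - s12**2
b1 = (s22 * s1y - s12 * s2y) / det
b2 = (s11 * s2y - s12 * s1y) / det

resid = [yi - b1 * a - b2 * b for yi, a, b in zip(y, x1, x2)]
print(sum(r * a for r, a in zip(resid, x1)))  # ~0
print(sum(r * b for r, b in zip(resid, x2)))  # ~0
```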
{ "language": "en", "url": "https://stats.stackexchange.com/questions/529529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$N(\theta,\theta)$: MLE for a Normal where mean=variance $\newcommand{\nd}{\frac{n}{2}}$For an $n$-sample following a Normal$(\mu=\theta,\sigma^2=\theta)$, how do we find the mle? I can find the root of the score function $$ \theta=\frac{1\pm\sqrt{1-4\frac{s}{n}}}{2},s=\sum x_i^2, $$ but I don't see which one is the maximum. I tried to substitute in the second derivative of the log-likelihood, without success. For the likelihood, with $x=(x_1,x_2,\ldots,x_n)$, $$ f(x) = (2\pi)^{-n/2} \theta^{-n/2} \exp\left( -\frac{1}{2\theta}\sum(x_i-\theta)^2\right), $$ then, with $s=\sum x_i^2$ and $t=\sum x_i$, $$ \ln f(x) = -\nd \ln(2\pi) -\nd\ln\theta-\frac{s}{2\theta}-t+\nd\theta, $$ so that $$ \partial_\theta \ln f(x) = -\nd\frac{1}{\theta}+\frac{s}{2\theta^2}+\nd, $$ and the roots are given by $$ \theta^2-\theta+\frac{s}{n}=0. $$ Also, $$ \partial_{\theta,\theta} \ln f(x) = \nd \frac{1}{\theta^2} - \frac{s}{\theta^3}. $$
Recall that the normal distribution $N(\mu, \sigma^2)$ has pdf $f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp {\left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)},$ Note here that $\mu = \theta$ and $\sigma^2 = \theta$ and therefore $\sigma = \sqrt{\theta}$ \begin{aligned} L(x_1,x_2,...,x_n | \theta) &= \prod_{i=1}^n f(x_i | \theta) \\ &= \prod_{i=1}^n \frac{1}{\sqrt{2 \pi \theta}} \ \exp \Big \{ - \frac{1}{2 \theta} (x_i - \theta)^2 \Big\} \\ & = (2 \pi)^{-n/2} (\theta)^{-n /2} \prod_{i=1}^n \ \exp \Big \{ - \frac{1}{2 \theta} (x_i - \theta)^2 \Big\} \\ & = (2 \pi)^{-n/2} (\theta)^{-n /2} \ \exp \Big \{ - \frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)^2 \Big\} \\ \log L& = - \frac{n}{2} \log(2\pi) - \frac{n}{2} \log(\theta) - \frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)^2 \end{aligned} Consider the term $\frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)^2$ which can be expanded and simplified \begin{aligned} \frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)^2 & = \frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)(x_i - \theta) \\ & = \frac{1}{2 \theta} \sum_{i=1}^n \left( x_i^2 - 2 \theta x_i + \theta^2 \right) \\ & = \frac{1}{2 \theta} \left( \sum_{i=1}^n (x_i^2) - 2 \theta \sum_{i=1}^n (x_i) + n\theta^2 \right) \\ & = \frac{1}{2 \theta} \sum_{i=1}^n (x_i^2) - \sum_{i=1} (x_i) + \frac{n\theta}{2} \end{aligned} We can now compute the derivative with respect to $\theta$, equate to zero and solve for $\theta$ \begin{aligned} \log L& = - \frac{n}{2} \log(2\pi) - \frac{n}{2} \log(\theta) - \left( \frac{1}{2 \theta} \sum_{i=1}^n (x_i^2) - \sum_{i=1} (x_i) + \frac{n\theta}{2} \right) \\ \frac{d}{d\theta} \log L & = \frac{-n}{2\theta} - \left( \frac{-1}{2\theta^2} \sum_{i=1}^n (x_i^2) + \frac{n}{2} \right) = 0 \\ & = \frac{-n}{2\theta} + \frac{1}{2\theta^2} \sum_{i=1}^n (x_i^2) - \frac{n}{2} \\ & = - \theta^2 - \theta + \frac{1}{n} \sum_{i=1}^n (x_i^2) \\ &\text{let $s = \frac{1}{n} \sum_{i=1}^n (x_i^2)$} \\ 0 & = - \theta^2 - \theta + s \\ \hat \theta 
&= \frac{\sqrt{1 + 4s} -1 }{2} \end{aligned} Since $s = \frac{1}{n}\sum_{i=1}^n x_i^2 > 0$, we have $\sqrt{1+4s} > 1$, so this root is positive, while the quadratic's other root, $-\frac{1+\sqrt{1+4s}}{2}$, is negative and therefore outside the parameter space ($\theta > 0$). Moreover, $\frac{d^2}{d\theta^2} \log L = \frac{n}{2\theta^2} - \frac{1}{\theta^3}\sum_{i=1}^n x_i^2$, which is negative exactly when $\theta < 2s$; the root above satisfies $\frac{\sqrt{1+4s}-1}{2} < \frac{(1+4s)-1}{2} = 2s$, so it is indeed a maximum — the unique one in the parameter space.
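A quick numerical confirmation (my own sketch, with made-up data): the closed form zeroes the score and beats nearby parameter values.

```python
import math

x = [0.8, 1.3, 2.1, 0.5, 1.7, 0.9]   # made-up positive data
n = len(x)
s = sum(xi * xi for xi in x) / n     # (1/n) * sum x_i^2

theta_hat = (math.sqrt(1 + 4 * s) - 1) / 2

def loglik(theta):
    return (-n / 2 * math.log(2 * math.pi * theta)
            - sum((xi - theta) ** 2 for xi in x) / (2 * theta))

# score = d/dtheta log L = -n/(2 theta) + n s/(2 theta^2) - n/2
score = -n / (2 * theta_hat) + n * s / (2 * theta_hat**2) - n / 2
print(score)  # ~0
print(loglik(theta_hat) > loglik(0.9 * theta_hat),
      loglik(theta_hat) > loglik(1.1 * theta_hat))
```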
{ "language": "en", "url": "https://stats.stackexchange.com/questions/56295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Prove that sample covariance matrix is positive definite Consider the $p \times p$ sample covariance matrix: $$\mathbf{S} = \frac{1}{n-1} \cdot \mathbf{Y}_\mathbf{c}^\text{T} \mathbf{Y}_\mathbf{c} \quad \quad \quad \mathbf{Y}_\mathbf{c} = \mathbf{C} \mathbf{Y},$$ where $\mathbf{C} = \mathbf{I}-\frac{1}{n} \mathbf{1} \mathbf{1}^\text{T}$ is the $n \times n$ centering matrix and $\mathbf{Y}$ is an $n \times p$ matrix. How can it be proved that if the variables are continuous, not linearly related, and $n-1> p$, then the sample covariance matrix is positive definite? The following clue is found in Rencher's book:
First, let's simplify the equation for your sample covariance matrix. Using the fact that the centering matrix is symmetric and idempotent you get the $p \times p$ form: $$\begin{align} \mathbf{S} &= \frac{1}{n-1} \cdot \mathbf{Y}_\mathbf{c}^\text{T} \mathbf{Y}_\mathbf{c} \\[6pt] &= \frac{1}{n-1} \cdot (\mathbf{C} \mathbf{Y})^\text{T} (\mathbf{C} \mathbf{Y}) \\[6pt] &= \frac{1}{n-1} \cdot \mathbf{Y}^\text{T} \mathbf{C}^\text{T} \mathbf{C} \mathbf{Y} \\[6pt] &= \frac{1}{n-1} \cdot \mathbf{Y}^\text{T} \mathbf{C} \mathbf{Y}. \\[6pt] \end{align}$$ This is a simple quadratic form in $\mathbf{Y}$. I will show that this matrix is non-negative definite (or "positive semi-definite" if you prefer) but it is not always positive definite. To do this, consider an arbitrary non-zero column vector $\mathbf{z} \in \mathbb{R}^p - \{ \mathbf{0} \}$ and let $\mathbf{a} = \mathbf{Y} \mathbf{z} \in \mathbb{R}^n$ be the resulting column vector. Since the centering matrix is non-negative definite (it has one eigenvalue equal to zero and the rest are equal to one) you have: $$\begin{align} \mathbf{z}^\text{T} \mathbf{S} \mathbf{z} &= \frac{1}{n-1} \cdot \mathbf{z}^\text{T} \mathbf{Y}^\text{T} \mathbf{C} \mathbf{Y} \mathbf{z} \\[6pt] &= \frac{1}{n-1} \cdot (\mathbf{Y} \mathbf{z})^\text{T} \mathbf{C} \mathbf{Y} \mathbf{z} \\[6pt] &= \frac{1}{n-1} \cdot \mathbf{a}^\text{T} \mathbf{C} \mathbf{a} \geqslant 0. \\[6pt] \end{align}$$ This shows that $\mathbf{S}$ is non-negative definite. However, it is not always positive definite. To see this, take any $\mathbf{z} \neq \mathbf{0}$ giving $\mathbf{a} = \mathbf{Y} \mathbf{z} \propto \mathbf{1}$ and substitute into the quadratic form to get $\mathbf{z}^\text{T} \mathbf{S} \mathbf{z} = 0$. Update: This update is based on the additional information you have added in your edit to the question and your comments. In order to get a positive definite sample variance matrix you need $\mathbf{a}^\text{T} \mathbf{C} \mathbf{a} > 0$. 
If $n-1>p$ and the columns of $\mathbf{Y}$ together with the vector $\mathbf{1}$ are linearly independent (i.e., the variables are not exactly linearly related — which holds with probability one for continuous variables), then $\mathbf{Y} \mathbf{z} \propto \mathbf{1}$ implies $\mathbf{z} = \mathbf{0}$. The contrapositive implication is that $\mathbf{a}^\text{T} \mathbf{C} \mathbf{a} > 0$ for all $\mathbf{z} \neq 0$, which establishes that the sample covariance matrix is positive definite. Presumably this is what you are looking for.
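A small numerical illustration of the claim (my own sketch, with $p = 2$, $n = 5$ and made-up data): when there is no exact linear relation, the leading principal minors of $\mathbf{S}$ are positive, so $\mathbf{S}$ is positive definite by Sylvester's criterion.

```python
# y has n = 5 rows and p = 2 columns, with no exact linear relation among
# the columns and the constant vector, so the n - 1 > p condition applies.
y = [[1.0, 2.0], [2.0, 1.5], [3.0, 4.0], [4.5, 3.0], [5.0, 6.0]]
n, p = len(y), len(y[0])

means = [sum(row[j] for row in y) / n for j in range(p)]
yc = [[row[j] - means[j] for j in range(p)] for row in y]  # centered data C Y

# S = Yc' Yc / (n - 1)
S = [[sum(yc[i][a] * yc[i][b] for i in range(n)) / (n - 1) for b in range(p)]
     for a in range(p)]

minor1 = S[0][0]
minor2 = S[0][0] * S[1][1] - S[0][1] * S[1][0]
print(minor1 > 0 and minor2 > 0)  # True: S is positive definite
```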
{ "language": "en", "url": "https://stats.stackexchange.com/questions/487510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Expectation of bootstrap variance estimators I'm quite unclear how they derived (5) from (4). Since $\bar{X}$ and $s^2$ are unbiased estimators, I believe my lack of follow through comes from not knowing how to compute $\mathbb{E}[\bar{X}^2 s^2]$. Any help is appreciated.
$\bullet$ Since the data is normally distributed, $\bar X$ and $X_i-\bar X$ and thus $s^2$ are independently distributed. $\bullet$ The variance of $s^2$ is $$\operatorname{Var}[s^2] = \frac{2\sigma^4}{(n-1) }.\tag 1\label 1$$ Now \begin{align}\frac{4}n \mathbb E\left[\bar X^2s^2\right]+ \frac{2}{n^2}\mathbb E\left[s^4\right]&=\frac{4}n \mathbb E\left[\bar X^2\right]\mathbb E\left[s^2\right]+ \frac{2}{n^2}\mathbb E\left[s^4\right]\\ &=\frac{4}n \left[\operatorname{Var}\left[\bar X\right]+ \left(\mathbb E[\bar X]\right)^2\right]\mathbb E\left[s^2\right]+ \frac{2}{n^2}\left[\operatorname{Var}\left[s^2\right]+ \left(\mathbb E\left[s^2\right]\right)^2\right] \\ &= \frac{4}n \left[\frac{\sigma^2}n +\mu^2\right]\sigma^2+ \frac{2}{n^2}\left[\frac{2\sigma^4}{n-1}+\sigma^4\right]~~\textrm{using}~~\eqref{1}\\ &= \frac{4\sigma^4}{n^2} +\frac{4\mu^2\sigma^2}{n}+ \frac{2}{n^2}\left[\frac{2\sigma^4+n\sigma^4-\sigma^4}{n-1}\right]\\ &= \frac{4\sigma^4}{n^2} +\frac{4\mu^2\sigma^2}{n}+ \frac{2}{n^2}\frac{(n+1)\sigma^4}{n-1}.\end{align}
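As a sanity check on the algebra (a sketch of mine), one can plug arbitrary numeric values of $\mu$, $\sigma^2$, $n$ into both the unsimplified and the final expressions using exact rational arithmetic:

```python
from fractions import Fraction

mu, sig2, n = Fraction(2), Fraction(9), 5  # arbitrary values; sigma^2 = 9

E_xbar2 = sig2 / n + mu**2              # E[Xbar^2] = Var[Xbar] + (E[Xbar])^2
E_s4 = 2 * sig2**2 / (n - 1) + sig2**2  # E[s^4] = Var[s^2] + (E[s^2])^2

lhs = Fraction(4, n) * E_xbar2 * sig2 + Fraction(2, n**2) * E_s4
rhs = (4 * sig2**2 / n**2 + 4 * mu**2 * sig2 / n
       + Fraction(2, n**2) * (n + 1) * sig2**2 / (n - 1))
print(lhs == rhs)  # True
```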
{ "language": "en", "url": "https://stats.stackexchange.com/questions/593166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Equation for the equipotential lines? What is the equation for the equipotential lines in the $x$-$y$ plane for a dipole oriented along the $x$ axis?
In three dimensions, the electric potential $V$ of a pure dipole $\mathbf p$ located at the origin is given by \begin{align} V(\mathbf x) = \frac{1}{4\pi\epsilon_0}\frac{\mathbf p\cdot\mathbf x}{|\mathbf x|^3} \end{align} If the dipole is oriented along the $x$-axis, then we have $\mathbf p = p\hat{\mathbf x}$ which gives $\mathbf p \cdot\mathbf x = px$. Moreover, notice that \begin{align} \frac{1}{|\mathbf x|^3} = \frac{1}{(\sqrt{x^2+y^2+z^2})^3} = \frac{1}{(x^2+y^2+z^2)^{3/2}} \end{align} Putting this all together gives the following expression for the potential: \begin{align} V(\mathbf x) = \frac{1}{4\pi\epsilon_0}\frac{px}{(\sqrt{x^2+y^2+z^2})^3} \end{align} As pointed out in the comments, the equation for an equipotential is then obtained by setting this expression to a constant. This gives \begin{align} \frac{x}{(\sqrt{x^2+y^2+z^2})^3} = \mathrm{const.} \end{align} If one is interested only in the equation for equipotentials in the $x$-$y$ plane, one can set $z=0$ which gives precisely your quoted result.
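As an illustrative numerical check (my own sketch; units and values are arbitrary): the exact potential of two opposite point charges separated by a small distance $d$ along the $x$-axis approaches the pure-dipole expression above with $p = qd$:

```python
import math

def v_two_charges(q, d, x, y, z, eps0=1.0):
    """Exact potential of +q at (d/2, 0, 0) and -q at (-d/2, 0, 0)."""
    rp = math.sqrt((x - d / 2) ** 2 + y ** 2 + z ** 2)
    rm = math.sqrt((x + d / 2) ** 2 + y ** 2 + z ** 2)
    return q / (4 * math.pi * eps0) * (1 / rp - 1 / rm)

def v_dipole(p, x, y, z, eps0=1.0):
    """Pure-dipole potential p*x / (4*pi*eps0*r^3) quoted above."""
    r = math.sqrt(x * x + y * y + z * z)
    return p * x / (4 * math.pi * eps0 * r ** 3)

q, d = 1.0, 1e-3          # small separation, so the dipole term dominates
p = q * d
for (x, y) in [(2.0, 1.0), (-1.0, 3.0), (0.5, 0.5)]:
    exact = v_two_charges(q, d, x, y, 0.0)
    approx = v_dipole(p, x, y, 0.0)
    assert abs(exact - approx) <= 1e-5 * abs(approx)
```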
{ "language": "en", "url": "https://physics.stackexchange.com/questions/74312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What forces are exerted on a clothespin in space? Let's say a clothespin is modeled as a simple torsion spring as follows. Given:

* $p_1,\ p_2,\ p_3$: point-like objects of equal mass in 2-D space.
* All objects float in space, i.e. the center of mass will not change.
* At time $t_0$ a torsion spring is inserted at $p_1$, such that it exerts torque on $p_2$ and $p_3$, with:
  * $\theta$: the angle of twist from its equilibrium position in radians
  * $\kappa$: the spring's torsion coefficient
  * $\tau = -\kappa \theta$: the torque exerted by the spring

Question: what are the resulting forces on $p_1$, $p_2$ and $p_3$? My answer: Because all objects have equal mass, we can leave mass out of the equation. $F_2$ is a force perpendicular to $p_1 p_2$ of magnitude $\dfrac{\tau }{ |p_1-p_2|}$. By Newton's 3rd law, $F_2'$ is a force of equal magnitude and opposite direction as $F_2$. Similarly for $F_3$ and $F_3'$.
If $\theta$ is the angle between the arms, displaced from the equilibrium $\theta_0$ by $\Delta \theta$, then the torque applied is $\tau =-\kappa \Delta \theta$; assume equal masses $m$ and initially motionless parts. The first step is the kinematics, where the accelerations of 2 and 3 are related to the acceleration of 1 and the common angle. For simplification we take 1 to be not accelerating in the horizontal direction, $\ddot{x}_1=0$ (as seen in the figure below). $$ \begin{aligned} \ddot{x}_2 &= \ddot{x}_1 - \ell \cos \left( \frac{\theta}{2} \right) \frac{ \ddot{\theta}}{2} & \ddot{x}_3 &= \ddot{x}_1 + \ell \cos \left( \frac{\theta}{2} \right) \frac{ \ddot{\theta}}{2} \\ \ddot{y}_2 &= \ddot{y}_1 + \ell \sin \left( \frac{\theta}{2} \right) \frac{\ddot{\theta}}{2} & \ddot{y}_3 &= \ddot{y}_1 + \ell \sin \left( \frac{\theta}{2} \right) \frac{\ddot{\theta}}{2} \end{aligned} $$ Now for the equations of motion of each part. We start with free-body diagrams in order to sum up the forces on each part.
$$\begin{aligned} -Fr_2 \sin \left( \frac{\theta}{2} \right) + Fr_3 \sin \left( \frac{\theta}{2} \right) + Fn_2 \cos \left( \frac{\theta}{2} \right) + Fn_3 \cos \left( \frac{\theta}{2} \right) & = m \ddot{x}_1 = 0 \\ -Fr_2 \cos \left( \frac{\theta}{2} \right) - Fr_3 \cos \left( \frac{\theta}{2} \right) + Fn_2 \sin \left( \frac{\theta}{2} \right) + Fn_3 \sin \left( \frac{\theta}{2} \right) &= m \ddot{y}_1 \end{aligned} $$ The EOM are written in components along and normal to each arm: $$\begin{aligned} m \ddot{x}_2 \cos \left( \frac{\theta}{2} \right) - m \ddot{y}_2 \sin \left( \frac{\theta}{2} \right) &= -Fn_2 \\ m \ddot{y}_2 \cos \left( \frac{\theta}{2} \right) + m \ddot{x}_2 \sin \left( \frac{\theta}{2} \right) & = Fr_2 \\ 0 & =\ell Fn_2 + \tau \end{aligned} $$ which gives $Fn_2 =-\frac{\tau}{\ell}$, and $$\begin{aligned} m \ddot{x}_3 \cos \left( \frac{\theta}{2} \right) + m \ddot{y}_3 \sin \left( \frac{\theta}{2} \right) &= -Fn_3 \\ m \ddot{y}_3 \cos \left( \frac{\theta}{2} \right) - m \ddot{x}_3 \sin \left( \frac{\theta}{2} \right) & = Fr_3 \\ 0 & =\ell Fn_3 - \tau \end{aligned} $$ which gives $Fn_3 =\frac{\tau}{\ell}$. Combining all of the above equations and substituting into the kinematics gives $$ \begin{aligned} -Fn_2 \cos \left(\theta \right) + Fr_2 \sin \left(\theta \right) - 2 Fn_3 &= m \ell \frac{\ddot{\theta}}{2} \\ Fr_2 \cos \left(\theta \right) + Fn_2 \sin \left(\theta \right) + 2 Fr_3 &= 0 \\ - Fn_3 \cos \left(\theta\right) - Fr_3 \sin \left(\theta \right) + 2 Fn_2 &= - m \ell \frac{\ddot{\theta}}{2} \\ Fr_3 \cos \left(\theta \right) - Fn_3 \sin \left(\theta \right) + 2 Fr_2 &= 0 \end{aligned} $$ The above is solved with $$\boxed{\frac{3 \tau (\cos\theta-2)}{\ell (\sin^2\theta+3)} = m \ell \frac{\ddot{\theta}}{2}}$$ and $$\begin{aligned} Fr_2 & = \frac{\tau \sin\theta ( 2-\cos\theta)}{\ell (\sin^2\theta+3)} \\ Fn_2 &= -\frac{\tau}{\ell} \\ Fr_3 &= \frac{\tau \sin\theta ( 2-\cos\theta)}{\ell (\sin^2\theta+3)} \\ Fn_3 &= \frac{\tau}{\ell} \end{aligned}$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the procedure (matrix) for change of basis to go from Cartesian to polar coordinates and vice versa? I'm following along with these notes, and at a certain point it talks about change of basis to go from polar to Cartesian coordinates and vice versa. It gives the following relations: $$\begin{pmatrix} A_r \\ A_\theta \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} A_x \\ A_y \end{pmatrix}$$ and $$\begin{pmatrix} A_x \\ A_y \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} A_r \\ A_\theta \end{pmatrix}$$ I was struggling to figure out how these were arrived at, and then I noticed what is possibly a mistake. In (1), shouldn't it read $$A_r=A_x+A_y$$ Is this a mistake, or am I making a wrong assumption somewhere? I'm kinda stuck here, and would appreciate some inputs on this. Thanks.
I just wanted to build on @Hayate's answer here. While the mathematical identities are correct, there are a few steps omitted. Specifically, there is no explicit calculation for $\frac{\partial r}{\partial x}$, $\frac{\partial r}{\partial y}$, $\frac{\partial \theta}{\partial x}$, and $\frac{\partial \theta}{\partial y}$. Using the formula $r = \sqrt{x^2 + y^2}$, we have $\frac{\partial r}{\partial x} = \frac{2x}{2\sqrt{x^2+y^2}} = \frac{x}{r} = \cos(\theta)$ and $\frac{\partial r}{\partial y} = \frac{2y}{2\sqrt{x^2+y^2}} = \frac{y}{r} = \sin(\theta)$. In addition, we have the formula $\theta = \tan^{-1}(\frac{y}{x})$, from which it follows that $\frac{\partial \theta}{\partial x} = \frac{1}{1+(\frac{y}{x})^2} \frac{-y}{x^2} = \frac{-y}{x^2+y^2} = \frac{-y}{r^2} = \frac{-\sin(\theta)}{r}$ and $\frac{\partial \theta}{\partial y} = \frac{1}{1+(\frac{y}{x})^2} \frac{1}{x} = \frac{x}{x^2+y^2} = \frac{x}{r^2} = \frac{\cos(\theta)}{r}$. The matrix entries $-\sin\theta$ and $\cos\theta$ are the corresponding physical (orthonormal-basis) components $r\,\frac{\partial \theta}{\partial x}$ and $r\,\frac{\partial \theta}{\partial y}$, so this agrees with the transformation that was posted by @Hayate. Hopefully that helps. -Paul
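Here's a quick finite-difference check of these partial derivatives (my own sketch; the evaluation point and step size are arbitrary):

```python
import math

def polar(x, y):
    return math.hypot(x, y), math.atan2(y, x)

x, y = 1.2, 0.7
r, th = polar(x, y)
h = 1e-6

# central differences of r(x, y) and theta(x, y)
dr_dx = (polar(x + h, y)[0] - polar(x - h, y)[0]) / (2 * h)
dr_dy = (polar(x, y + h)[0] - polar(x, y - h)[0]) / (2 * h)
dth_dx = (polar(x + h, y)[1] - polar(x - h, y)[1]) / (2 * h)
dth_dy = (polar(x, y + h)[1] - polar(x, y - h)[1]) / (2 * h)

assert abs(dr_dx - math.cos(th)) < 1e-6
assert abs(dr_dy - math.sin(th)) < 1e-6
assert abs(dth_dx - (-math.sin(th) / r)) < 1e-6
assert abs(dth_dy - math.cos(th) / r) < 1e-6
```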
{ "language": "en", "url": "https://physics.stackexchange.com/questions/150978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Time in movement for forces as functions of position Let there be an object at rest of mass $1\,\mathrm{kg}$ at $x=0$ and a force acted upon it which can be described by the equation $F(x) = \frac{1}{(1-2x)^2}$ with $x$ belonging in $[0,1/2]$. I want to know in how much time it will move by $1/2\ \mathrm{m}$, so what I need is a way to connect $x$ with $t$ (where $t$ is time). Newton's second law of motion states that $F=ma$ where $m$ is the mass of the object and $a$ its acceleration, so $F=ma=1a=a$, but $a= \frac{du}{dt}= \frac{du}{dx} \frac{dx}{dt}= \frac{du}{dx} u$ so $\frac{1}{(1-2x)^2} = \frac{du}{dx} u \iff \frac{dx}{(1-2x)^2} = u \, du \iff \frac{1}{2} \frac{1}{1-2x} = \frac{1}{2} u^2 \iff \frac{1}{1-2x} = u^2$
Should be $$\int_0^x \frac{dx}{(1-2x)^2}=\int_0^u udu$$ $$\frac{1}{2}\left[\frac{1}{1-2x}\right]_0^x=\frac{u^2}{2}$$ $$u^2=\frac{1}{1-2x}-1$$ So $$\frac{1}{(1-2x)^2}=(u^2+1)^2$$ Then $$\frac{du}{dt}=\frac{1}{(1-2x)^2}=(u^2+1)^2$$ $$\int_0^u\frac{du}{(u^2+1)^2}=\int_0^t dt$$ By substitution with $u=\tan\theta$, we have $$t=\frac{1}{2}\left[\arctan u + \frac{u}{1+u^2}\right]$$ Now, when $x\to 1/2$, $u\to+\infty$, so the time taken is $\pi/4$.
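A quick numerical sanity check of the final expression (my own sketch): the claimed antiderivative should differentiate back to $1/(1+u^2)^2$ and tend to $\pi/4$ as $u\to\infty$.

```python
import math

def t_of_u(u):
    """Claimed antiderivative of 1/(1+u^2)^2 with t(0) = 0."""
    return 0.5 * (math.atan(u) + u / (1 + u * u))

# dt/du should equal 1/(1+u^2)^2 (central-difference check).
h = 1e-6
for u in (0.3, 1.0, 2.5):
    deriv = (t_of_u(u + h) - t_of_u(u - h)) / (2 * h)
    assert abs(deriv - 1 / (1 + u * u) ** 2) < 1e-8

# As u -> infinity (i.e. x -> 1/2), t -> pi/4.
assert abs(t_of_u(1e9) - math.pi / 4) < 1e-8
```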
{ "language": "en", "url": "https://physics.stackexchange.com/questions/271115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minkowski metric under coordinate change We are given the Minkowski metric $g_{ab}=\textbf{diag}(-1,1,1,1)$ and want to calculate $\tilde g_{ab}$ in the coordinates \begin{align} \tilde x^0 &= t-z \\ \tilde x^1 &= r\\ \tilde x^2 &= \phi\\ \tilde x^3 &= z\\ \end{align} where $(r,\phi)$ are plane polar coordinates in the $(x,y)$ plane. Now I thought that this would be rather easy to calculate when we use that \begin{align} ds^2 = g_{ij} \, dx^i \, dx^j = -dt^2 + dx^2 + dy^2 + dz ^2. \end{align} Calculating this would give me \begin{align} t &= \tilde x ^0 + z \rightarrow dt = d\tilde x ^0 + dz\rightarrow dt^2 = (d\tilde x^0) ^2 + dz^2 + 2 d\tilde x^0 dz \\ x&=r \cos (\phi) \rightarrow dx = \cos(\phi) dr - r\sin(\phi) d\phi \\ y &= r\sin(\phi) \rightarrow dy = \sin(\phi) dr + r \cos(\phi) d\phi \\ z&= \tilde x ^3 \rightarrow dz = d\tilde x^3 \end{align} and so \begin{align} ds^2 &= - (d\tilde x^0 +dz)^2 + dr^2 + r^2 d\phi ^2 +(dz)^2 =\\ &= -(d\tilde x^0)^2 + dr^2 + r^2 d\phi ^2 - 2 d\tilde x^0 \, dz . \end{align} Can this be true? I don't think so, because I tried verifying that $g(X,X) = \tilde g (\tilde X,\tilde X) $ for some arbitrary vector in the old and the new coordinates and it does not give me the same. EDIT: I have tried verifying $g(X,X)=\tilde g (\tilde X, \tilde X)$ with the vector $X=(1,1,0,0)^T$. For this $X$, I obviously get $g(X,X)=0$. This vector in the new coordinates $\tilde X$ should be \begin{align} \tilde X = (1,\frac{1}{\cos\phi},-\frac{\sin\phi}{r},1)^T \end{align} if I am not mistaken and therefore $\tilde g(\tilde X,\tilde X) \neq 0$.
You have calculated $ds^2$ correctly: $$ ds^2 = -(d\tilde{x}^0)^2 - 2 \, d\tilde{x}^0 \, d\tilde{x}^3 + (d\tilde{x}^1)^2 + (\tilde{x}^1)^2 (d\tilde{x}^2)^2 $$ But you have transformed $X$ incorrectly. We have $$\begin{align} \tilde{x}^0 &= t - z \\ \tilde{x}^1 &= r = \sqrt{x^2+y^2}, \\ \tilde{x}^2 &= \phi = \arctan(y/x), \\ \tilde{x}^3 &= z, \end{align}$$ so $$\begin{align} d\tilde{x}^0 &= dt - dz, \\ d\tilde{x}^1 &= \frac{x \, dx + y \, dy}{\sqrt{x^2+y^2}} = \cos\phi \, dx + \sin\phi \, dy, \\ d\tilde{x}^2 &= \frac{x \, dy - y \, dx}{x^2+y^2} = \frac{\cos\phi \, dy - \sin\phi \, dx}{r} \\ d\tilde{x}^3 &= dz \end{align}$$ which gives $$\begin{align} \tilde{X}^0 &= 1 - 0 = 1, \\ \tilde{X}^1 &= \cos\phi \cdot 1 + \sin\phi \cdot 0 = \cos\phi, \\ \tilde{X}^2 &= \frac{\cos\phi \cdot 0 - \sin\phi \cdot 1}{r} = -\frac{\sin\phi}{r}, \\ \tilde{X}^3 &= 0. \end{align}$$ Thus, $$\tilde{g}(\tilde{X}, \tilde{X}) = -1^2 - 2 \cdot 1 \cdot 0 + (\cos\phi)^2 + r^2 \left(-\frac{\sin\phi}{r}\right)^2 = 0 = g(X, X).$$
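As a numerical cross-check (my own sketch), the transformed components reproduce $g(X,X)$ for arbitrary vectors, not just the example above. Note that expanding $-dt^2+dr^2+r^2d\phi^2+dz^2$ in the new coordinates leaves only the cross term $-2\,d\tilde x^0\,d\tilde x^3$ alongside $-(d\tilde x^0)^2 + dr^2 + r^2 d\phi^2$, with no separate $(d\tilde x^3)^2$ term:

```python
import math

def tilde_components(X, phi, r):
    """Components of X = (X^t, X^x, X^y, X^z) in the new coordinates,
    using the differentials d(tilde x^0), dr, d(phi), dz derived above."""
    Xt, Xx, Xy, Xz = X
    return (Xt - Xz,
            math.cos(phi) * Xx + math.sin(phi) * Xy,
            (math.cos(phi) * Xy - math.sin(phi) * Xx) / r,
            Xz)

def ds2_tilde(Xtil, r):
    X0, X1, X2, X3 = Xtil
    return -X0 ** 2 - 2 * X0 * X3 + X1 ** 2 + r ** 2 * X2 ** 2

def ds2_mink(X):
    Xt, Xx, Xy, Xz = X
    return -Xt ** 2 + Xx ** 2 + Xy ** 2 + Xz ** 2

r, phi = 2.0, 0.7
for X in [(1, 1, 0, 0), (1, 0, 1, 0), (2, 1, -1, 3)]:
    assert abs(ds2_mink(X) - ds2_tilde(tilde_components(X, phi, r), r)) < 1e-12
```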
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Getting from $E^2 - p^2c^2 = m^2c^4$ to $E = \gamma mc^2$ What is each mathematical step (in detail) that one would take to get from: $E^2 - p^2c^2 = m^2c^4$ to $E = \gamma mc^2$, where $\gamma$ is the relativistic dilation factor. This is for an object in motion. NOTE: in the answer, I would like full explanation. E.g. when explaining how to derive $x$ from $\frac{x+2}{2}=4$, rather than giving an answer of "$\frac{x+2}{2}=4$, $x+2 = 8$, $x = 6$" give one where you describe each step, like "times 2 both sides, -2 both sides" but of course still with the numbers on display. (You'd be surprised at how people would assume not to describe in this detail).
Starting with relativistic momentum $$p^2 = \left( \gamma m v \right)^2 = \frac{m^2 v^2}{1 - \frac{v^2}{c^2}}$$ one then gets $$E = \pm \sqrt{ m^2 c^4 + p^2 c^2 } = \pm \sqrt{ m^2 c^4 + \frac{m^2 v^2 c^2}{1 - \frac{v^2}{c^2}} } = \pm mc^2 \sqrt{\frac{1- \frac{v^2}{c^2}}{1- \frac{v^2}{c^2}} + \frac{\frac{v^2}{c^2}}{1- \frac{v^2}{c^2}}} = \pm \gamma mc^2$$
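As a numerical sanity check (my own sketch, in units where $c=1$ scaled back in explicitly), $E=\gamma mc^2$ and $p=\gamma m v$ do satisfy $E^2 - p^2c^2 = m^2c^4$ identically:

```python
import math

def gamma(v, c=1.0):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

m, c = 2.0, 1.0
for v in (0.1, 0.5, 0.9):
    g = gamma(v, c)
    E = g * m * c ** 2          # candidate energy
    p = g * m * v               # relativistic momentum
    # E^2 - p^2 c^2 should reduce back to (m c^2)^2
    assert abs(E ** 2 - (p * c) ** 2 - (m * c ** 2) ** 2) < 1e-9
```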
{ "language": "en", "url": "https://physics.stackexchange.com/questions/28568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
moment of inertia when a shape is cut A disk of radius $r_1$ is cut out from the middle of a bigger disk of radius $r_2$ ($r_2>r_1$). If the annular ring left has mass $M$, then find the moment of inertia about the axis passing through its centre and perpendicular to its plane.
I suppose your disk has uniform density. Then the mass of the whole disk is $$M\frac{\pi r_2^2}{\pi (r_2^2-r_1^2)}=M\frac{r_2^2}{r_2^2-r_1^2}$$ and the mass of the smaller disk is $$M\frac{\pi r_1^2}{\pi (r_2^2-r_1^2)}=M\frac{r_1^2}{r_2^2-r_1^2}$$ The moment of inertia of the whole disk is $$\frac{1}{2}M\frac{r_2^2}{r_2^2-r_1^2}r_2^2$$ The moment of inertia of the smaller disk is $$\frac{1}{2}M\frac{r_1^2}{r_2^2-r_1^2}r_1^2$$ Hence the moment of inertia of the ring is $$\frac{1}{2}M\frac{r_2^2}{r_2^2-r_1^2}r_2^2-\frac{1}{2}M\frac{r_1^2}{r_2^2-r_1^2}r_1^2=\frac{1}{2}M\frac{r_2^4-r_1^4}{r_2^2-r_1^2}=\frac{1}{2}M(r_1^2+r_2^2)$$
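A quick numerical check of the closed form (my own sketch), integrating $r^2\,dm$ over thin rings of the annulus directly:

```python
import math

def annulus_inertia(M, r1, r2, n=100_000):
    """Direct integration of r^2 dm over thin rings of a uniform annulus."""
    rho = M / (math.pi * (r2 ** 2 - r1 ** 2))     # surface density
    dr = (r2 - r1) / n
    I = 0.0
    for i in range(n):
        r = r1 + (i + 0.5) * dr                   # midpoint rule
        I += (2 * math.pi * r * dr * rho) * r ** 2
    return I

M, r1, r2 = 3.0, 0.5, 2.0
closed_form = 0.5 * M * (r1 ** 2 + r2 ** 2)
assert abs(annulus_inertia(M, r1, r2) - closed_form) < 1e-6 * closed_form
```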
{ "language": "en", "url": "https://physics.stackexchange.com/questions/227891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
What is the stiffness of a crushed rod or cylinder? If you are crushing a uniform rod between two plates with a known force, how do I estimate the deflection (and hence the stiffness) of the rod? I am interested in the overall deflection, including the effects of the contact and in the rest of the (blue) volume below. Heuristically I see that the stiffness $k = \frac{F}{\delta}$ should be inversely proportional to the diameter and proportional to the length $$ k \propto \frac{\ell}{d} $$ I wonder if there is an analytical expression that shows the dependency on diameter, length and force applied.
Overall deflection Considering that there is a small region of contact, and we can use the Hertzian model it seems that there is an analytical solution 1 (although I would not call this crushing) $$2\delta = \frac{P}{L} (V_1 + V_2)\left[1 + \log\left\{\frac{2L^3}{(V_1 +V_2) P d}\right\}\right]$$ where $V_i = (1 - \nu_i^2)/(\pi E)$. If we assume that the planes are infinitely rigid compared to the cylinder we obtain $$2\delta = \frac{P V_1}{L} \left[1 + \log\left\{\frac{2L^3}{V_1 P d}\right\}\right]$$ or $$2 \delta = \frac{P}{\pi E_1 L} \left(1 - \nu_1^2\right) \log{\left[\frac{2 \pi E_1 L^3}{d P \left(1 - \nu_1^2\right)} \right]}$$ This equation can be inverted to obtain $$P = \frac{2 \pi E_1 L^3}{d \left(1 -\nu_1^2\right)} e^{\operatorname{LambertW}{\left (- \frac{d \delta}{L^2} \right )}}$$ Stress at the interior We can model the cylinder as a 2D problem: a disk with radial forces in the poles. The stress function for a disk of diameter $d$ with center in the origin, and radial inward and opposite forces $P$ placed at $(0, d/2)$ and $(0, -d/2)$ is given by $$\phi = x\arctan\left[\frac{x}{d/2 - y}\right] + x\arctan\left[\frac{x}{d/2 + y}\right] + \frac{P}{\pi d}(x^2 + y^2)$$ We know that the stresses are given by \begin{align} \sigma_{xx} = \frac{\partial^2 \phi}{\partial x^2}\\ \sigma_{yy} = \frac{\partial^2 \phi}{\partial y^2}\\ \sigma_{xy} = -\frac{\partial^2 \phi}{\partial x \partial y} \end{align} that gives $$\sigma_{xx} = 2 \left[\frac{P}{\pi d} - \frac{32 x^{4}}{\left(d + 2 y\right)^{5} \left(\frac{4 x^{2}}{\left(d + 2 y\right)^{2}} + 1\right)^{2}} - \frac{32 x^{4}}{\left(d - 2 y\right)^{5} \left(\frac{4 x^{2}}{\left(d - 2 y\right)^{2}} + 1\right)^{2}} + \frac{8 x^{2}}{\left(d + 2 y\right)^{3} \left(\frac{4 x^{2}}{\left(d + 2 y\right)^{2}} + 1\right)} + \frac{8 x^{2}}{\left(d - 2 y\right)^{3} \left(\frac{4 x^{2}}{\left(d - 2 y\right)^{2}} + 1\right)}\right]$$ $$\sigma_{yy} = 2 \left[\frac{P}{\pi d} - \frac{8 x^{2}}{\left(d + 2 y\right)^{3} \left(\frac{4 
x^{2}}{\left(d + 2 y\right)^{2}} + 1\right)^{2}} - \frac{8 x^{2}}{\left(d - 2 y\right)^{3} \left(\frac{4 x^{2}}{\left(d - 2 y\right)^{2}} + 1\right)^{2}} + \frac{2}{\left(d + 2 y\right) \left(\frac{4 x^{2}}{\left(d + 2 y\right)^{2}} + 1\right)} + \frac{2}{\left(d - 2 y\right) \left(\frac{4 x^{2}}{\left(d - 2 y\right)^{2}} + 1\right)}\right]$$ $$\sigma_{xy} = - 8 x \left[\frac{4 x^{2}}{\left(d + 2 y\right)^{4} \left(\frac{4 x^{2}}{\left(d + 2 y\right)^{2}} + 1\right)^{2}} - \frac{4 x^{2}}{\left(d - 2 y\right)^{4} \left(\frac{4 x^{2}}{\left(d - 2 y\right)^{2}} + 1\right)^{2}} - \frac{1}{\left(d + 2 y\right)^{2} \left(\frac{4 x^{2}}{\left(d + 2 y\right)^{2}} + 1\right)} + \frac{1}{\left(d - 2 y\right)^{2} \left(\frac{4 x^{2}}{\left(d - 2 y\right)^{2}} + 1\right)}\right]$$ and for strains \begin{align} \epsilon_{xx} &= \frac{1}{E}(\sigma_{xx} - \nu \sigma_{yy})\\ \epsilon_{yy} &= \frac{1}{E}(\sigma_{yy} - \nu \sigma_{xx})\\ \epsilon_{xy} &= \frac{\sigma_{xy}}{G} \, . \end{align} For displacements, there are two options that come to my mind.

1. Rewrite the stress function in polar coordinates, and then use the Michell solution for displacements. The stress function should look something like $$\phi(r,\theta) = r\theta \sin\theta + \frac{2P}{\pi d}r^2$$
2. Integrate the strains \begin{align} u_x = \int\epsilon_{xx} dx + f(y)\\ u_y = \int\epsilon_{yy} dy + g(x) \end{align} with $2\epsilon_{xy} = \partial u_x/\partial y + \partial u_y/\partial x$, then differentiate this equation w.r.t. $y$ and $x$ and solve for $f$ and $g$.

References

1. Puttock, M. J., & Thwaite, E. G. (1969). Elastic compression of spheres and cylinders at point and line contact. Melbourne, Australia: Commonwealth Scientific and Industrial Research Organization.
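Here is a numerical round-trip check of the deflection formula and its Lambert-W inversion (my own sketch; the material values `E1`, `nu1`, `L`, `d` are just illustrative steel-like numbers, not from the reference). One detail worth noting: for realistic parameters, where $P$ is much smaller than the prefactor, the physically relevant root lies on the lower branch $W_{-1}$, implemented here with a Newton iteration:

```python
import math

def lambert_w_m1(z, tol=1e-12):
    """Lower branch W_{-1}(z) for z in (-1/e, 0), via Newton iteration."""
    w = math.log(-z) - math.log(-math.log(-z))   # standard starting guess
    for _ in range(200):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (1 + w))
        w -= step
        if abs(step) < tol * max(1.0, abs(w)):
            break
    return w

E1, nu1, L, d = 200e9, 0.3, 0.05, 0.01   # illustrative (steel-like) numbers
k = (1 - nu1 ** 2) / (math.pi * E1 * L)
A = 2 * L ** 2 / (d * k)                 # equals 2*pi*E1*L^3 / (d*(1 - nu1^2))

def deflection(P):
    # 2*delta = k * P * log(A / P), the compression formula quoted above
    return 0.5 * k * P * math.log(A / P)

def load(delta):
    # quoted inversion P = A * exp(W(-d*delta/L^2)); the physically small
    # root (P << A) sits on the W_{-1} branch, not the principal one
    return A * math.exp(lambert_w_m1(-d * delta / L ** 2))

P = 1e4                                  # 10 kN line load
assert abs(load(deflection(P)) - P) < 1e-6 * P
```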
{ "language": "en", "url": "https://physics.stackexchange.com/questions/308841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
About Fresnel's paper "Memoir on the diffraction of light" In Fresnel's paper "On the diffraction of light" he says that he expresses the difference RA+AF-RF of Fig. 14 as a series, and he extracts equation (1). Figure 14: Let $\rm F$ be any point on the receiving screen outside the shadow. The difference of path traversed by the direct ray, and by the ray reflected at the edge of the opaque body, and meeting the direct ray at this point, is $\rm RA+AF-RF$. Let us represent $\rm FT$ by $x$ and express in series the values of $\rm RF$, $\rm AR$, and $\rm AF$. Then, if we neglect all terms involving any power of $x$ or of $c$ higher than the second, since they are very small compared with distances $a$ and $b$, the terms which contain $c$ will disappear and we shall have for the difference of path traversed $$d=\frac{a}{2b(a+b)}x^2\tag{1}$$ I have tried to solve this with the Maclaurin binomial series $$(1+x)^m=1+mx+\frac{m(m-1)x^2}{2}$$ but I do not arrive at the same result. Are there any ideas on how it is solved? For a more detailed description of the geometry, see this excerpt:
First, we have that the triangles $RAB$ and $RTC$ are similar, so that $$TC=RC\,\left(\frac{AB}{RB}\right)=(a+b)\,\left(\frac{c/2}{a}\right)=\frac{(a+b)c}{2a}\,.$$ This means $$\begin{align} RF&=\sqrt{FC^2+RC^2}=\sqrt{(FT+TC)^2+RC^2}=\sqrt{\left(x+\frac{(a+b)c}{2a}\right)^2+(a+b)^2} \\ &=(a+b)\,\sqrt{1+\left(\frac{x}{a+b}+\frac{c}{2a}\right)^2}\approx(a+b)\left(1+\frac{1}{2}\,\left(\frac{x}{a+b}+\frac{c}{2a}\right)^2\right) \\ &=a+b+\frac{x^2}{2(a+b)}+\frac{cx}{2a}+\frac{(a+b)c^2}{8a^2}\,. \end{align}$$ Next, $$\begin{align} RA&=\sqrt{RB^2+AB^2}=\sqrt{a^2+\left(\frac{c}{2}\right)^2}=a\,\sqrt{1+\left(\frac{c}{2a}\right)^2} \\&\approx a\,\left(1+\frac{1}{2}\,\left(\frac{c}{2a}\right)^2\right)=a+\frac{c^2}{8a} \end{align}$$ and $$\begin{align} AF&=\sqrt{AM^2+MF^2}=\sqrt{BC^2+(FC-MC)^2}=\sqrt{BC^2+(FC-AB)^2} \\ &=\sqrt{b^2+\left(x+\frac{(a+b)c}{2a}-\frac{c}{2}\right)^2}=b\,\sqrt{1+\left(\frac{x}{b}+\frac{c}{2a}\right)^2} \\ &\approx b\,\left(1+\frac{1}{2}\,\left(\frac{x}{b}+\frac{c}{2a}\right)^2\right) =b+\frac{x^2}{2b}+\frac{cx}{2a}+\frac{bc^2}{8a^2}\,. \end{align}$$ Finally, we get $$\begin{align}d&=RA+AF-RF \\ &\approx\small\left(a+\frac{c^2}{8a}\right)+\left(b+\frac{x^2}{2b}+\frac{cx}{2a}+\frac{bc^2}{8a^2}\right)-\left(a+b+\frac{x^2}{2(a+b)}+\frac{cx}{2a}+\frac{(a+b)c^2}{8a^2}\right) \\ &=\frac{x^2}{2b}-\frac{x^2}{2(a+b)}=\frac{ax^2}{2b(a+b)}\,. \end{align}$$
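One can check the expansion numerically by computing the exact path difference from the same geometry (my own sketch; the values of $a$, $b$, $c$, $x$ are arbitrary small offsets):

```python
import math

def path_difference(a, b, c, x):
    """Exact RA + AF - RF from the coordinates used in the answer."""
    tc = (a + b) * c / (2 * a)                    # the segment TC
    rf = math.sqrt((x + tc) ** 2 + (a + b) ** 2)
    ra = math.sqrt(a ** 2 + (c / 2) ** 2)
    af = math.sqrt(b ** 2 + (x + tc - c / 2) ** 2)
    return ra + af - rf

a, b, c = 1.0, 2.0, 1e-3
for x in (1e-3, 5e-3, 1e-2):
    exact = path_difference(a, b, c, x)
    fresnel = a * x ** 2 / (2 * b * (a + b))
    assert abs(exact - fresnel) <= 1e-3 * fresnel
```

In particular, the terms containing $c$ cancel exactly through second order, as Fresnel states.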
{ "language": "en", "url": "https://physics.stackexchange.com/questions/419390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Adjoint representation of $SU(2)$ I'm trying to understand how the $SU(2)$ representations work. We know that the fundamental representation of $SU(2)$ is $\frac{1}{2} \sigma^{\alpha}$ where $\sigma^{\alpha}$ the Pauli matrices. These are 2x2 matrices that follow the Lie algebra. How can we derive a 3x3 representation (and basis) for $su(2)$? When I try the following basis I don't get the same algebra as the Pauli matrices, shouldn't that be the case? $T_1 = \frac{1}{\sqrt{2}}\begin{pmatrix}0&i&0\\i&0&i\\0&i&0\end{pmatrix}$ $T_2 = \begin{pmatrix} i&0&0\\0&0&0\\0&0&-i\end{pmatrix}$ $T_3 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0&1&0\\-1&0&1\\0 &-1&0\end{pmatrix}$
Okay so basically with the change below everything is correct. An adjoint representation of SU(2) is the following: $ T_1 = \frac{1}{\sqrt{2}}\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix} $ $ \;\;\;\;\;\;T_2 =\frac{1}{\sqrt{2}} \begin{pmatrix}0&-i&0\\i&0&-i\\0&i&0\end{pmatrix} $ $ \;\;\;\;\;\;T_3 =\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix} $
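A quick check (my own sketch, in plain Python so no libraries are needed) that these matrices close into the $su(2)$ algebra $[T_a, T_b] = i\epsilon_{abc}T_c$, just like the Pauli matrices $\frac{1}{2}\sigma^a$ do:

```python
# Spin-1 generators as plain nested lists; verify the su(2) commutators.
s = 2 ** -0.5
T1 = [[0, s, 0], [s, 0, s], [0, s, 0]]
T2 = [[0, -s * 1j, 0], [s * 1j, 0, -s * 1j], [0, s * 1j, 0]]
T3 = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def scale(c, A):
    return [[c * A[i][j] for j in range(3)] for i in range(3)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(3) for j in range(3))

assert close(commutator(T1, T2), scale(1j, T3))
assert close(commutator(T2, T3), scale(1j, T1))
assert close(commutator(T3, T1), scale(1j, T2))
```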
{ "language": "en", "url": "https://physics.stackexchange.com/questions/519173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Diagonalizing a given Hamiltonian The following Hamiltonian, which has to be diagonalized, is given: $H = \epsilon(f^{\dagger}_1f_1 + f_2^{\dagger}f_2)+\lambda(f_1^{\dagger}f_2^{\dagger}+f_1f_2)$ where $f_i^{\dagger}$ and $f_i$ represent fermionic creation and annihilation operators. Right now I am not sure how to approach this problem. My idea is to use some kind of Bogoliubov transformation. I would be thankful for ideas on how to approach this problem.
Since we have only two fermion creation operators, we are dealing with a finite-dimensional system. In that case, I find that it is often easier to write out matrices and do algebra on those. In the basis $(|00\rangle,|01\rangle,|10\rangle,|11\rangle)$ we have: \begin{align} f_1 &= \begin{pmatrix} 0 & 0 & 1 & 0\\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ \end{pmatrix}\\ f_2 &= \begin{pmatrix} 0&1&0&0\\ 0&0&0&0 \\ 0&0&0&1\\ 0&0&0&0 \end{pmatrix} \end{align} Therefore: $$H = \begin{pmatrix} 0 & 0&0 & \lambda\\ 0 & \epsilon & 0& 0 \\ 0 & 0 & \epsilon &0 \\ \lambda & 0& 0 & 2\epsilon \end{pmatrix}$$ The states $|01\rangle$ and $|10\rangle$ are eigenstates with eigenvalue $\epsilon$. On the orthogonal subspace, generated by $|00\rangle$ and $|11\rangle$, the induced Hamiltonian is $\begin{pmatrix} 0 & \lambda \\ \lambda & 2\epsilon \end{pmatrix} = \epsilon \mathbb I_2 + \lambda \sigma_x - \epsilon \sigma_z$. Therefore the eigenvalues are $\epsilon \pm \sqrt{\lambda^2 + \epsilon^2}$ and the eigenvectors are: $$|+\rangle = \frac{1}{\sqrt{2(\lambda^2+\epsilon^2 -\epsilon \sqrt{\epsilon^2 +\lambda^2}) }}\begin{pmatrix}-\epsilon + \sqrt{\epsilon^2 + \lambda^2} \\ \lambda\end{pmatrix} $$ and $$|-\rangle = \frac{1}{\sqrt{2(\lambda^2+\epsilon^2 +\epsilon \sqrt{\epsilon^2 +\lambda^2}) }}\begin{pmatrix}-\epsilon - \sqrt{\epsilon^2 + \lambda^2} \\ \lambda\end{pmatrix} $$ In the basis $|01\rangle,|10\rangle,|+\rangle,|-\rangle$, we have: $$H = \begin{pmatrix} \epsilon & 0 & 0 & 0\\ 0 & \epsilon &0 & 0\\ 0& 0 & \epsilon+\sqrt{\lambda^2+\epsilon^2} & 0 \\ 0 & 0 & 0& \epsilon -\sqrt{\lambda^2 + \epsilon^2}\end{pmatrix}$$ To write this as a Bogoliubov transform, we remark that we can write $H = E_0 + E_1 ( c_1^\dagger c_1 + c_2^\dagger c_2)$, with $c_1$ and $c_2$ independent fermion annihilation operators, when: \begin{align} E_0 &= \epsilon - \sqrt{\lambda^2 + \epsilon^2} \\ E_1 &= \sqrt{\lambda^2 + \epsilon^2} \end{align} and the eigenstates of $c_1^\dagger c_1$ and $c_2^\dagger c_2$ are: \begin{align} |00\rangle' &=
|-\rangle \\ |01\rangle' &= |01\rangle\\ |10\rangle' &= |10\rangle\\ |11\rangle' &= |+\rangle \end{align} Solving for $c_1,c_2$ as linear combinations of $f_1,f_2,f_1^\dagger,f_2^\dagger$ (as in a Bogoliubov transform), we get: \begin{align} c_1 &=\frac{1}{\sqrt{2(\lambda^2+\epsilon^2 +\epsilon \sqrt{\epsilon^2 +\lambda^2}) }} \left( (-\epsilon -\sqrt{\epsilon^2+ \lambda^2})f_1 + \lambda f_2^\dagger\right) \\ &\qquad+\frac{1}{\sqrt{2(\epsilon^2 + \lambda^2 - \epsilon\sqrt{\epsilon^2+ \lambda^2})}} \left( (-\epsilon +\sqrt{\epsilon^2+ \lambda^2})f_2 + \lambda f_1^\dagger\right) \\ c_2 &=\frac{1}{\sqrt{2(\lambda^2+\epsilon^2 -\epsilon \sqrt{\epsilon^2 +\lambda^2}) }} \left( (-\epsilon + \sqrt{\epsilon^2+ \lambda^2})f_1 + \lambda f_2^\dagger\right) \\ &\qquad+\frac{1}{\sqrt{2(\epsilon^2 + \lambda^2 + \epsilon\sqrt{\epsilon^2+ \lambda^2})}} \left( (-\epsilon -\sqrt{\epsilon^2+ \lambda^2})f_2 + \lambda f_1^\dagger\right) \\ \end{align} There might be smarter/more efficient ways to perform the calculations, but this does the job.
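As a sanity check (my own sketch, with arbitrary numerical values for $\epsilon$ and $\lambda$), one can rebuild $H$ from the matrices for $f_1$ and $f_2$ given above and confirm both the $4\times 4$ form and the eigenvector of the $2\times 2$ block:

```python
import math

# Matrices for f1, f2 as given above, in the basis (|00>, |01>, |10>, |11>).
f1 = [[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
f2 = [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
eps, lam = 1.3, 0.7

def dag(A):
    return [[A[j][i] for j in range(4)] for i in range(4)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def add(A, B, ca=1.0, cb=1.0):
    return [[ca * A[i][j] + cb * B[i][j] for j in range(4)] for i in range(4)]

# H = eps (f1† f1 + f2† f2) + lam (f1† f2† + f1 f2)
H = add(add(mul(dag(f1), f1), mul(dag(f2), f2), eps, eps),
        add(mul(dag(f1), dag(f2)), mul(f1, f2)), 1.0, lam)

expected = [[0, 0, 0, lam], [0, eps, 0, 0], [0, 0, eps, 0], [lam, 0, 0, 2 * eps]]
assert all(abs(H[i][j] - expected[i][j]) < 1e-12
           for i in range(4) for j in range(4))

# Eigenvector check for the 2x2 block [[0, lam], [lam, 2 eps]]:
root = math.sqrt(eps ** 2 + lam ** 2)
v = [root - eps, lam]                    # the |+> direction (unnormalized)
Hv = [lam * v[1], lam * v[0] + 2 * eps * v[1]]
assert all(abs(Hv[i] - (eps + root) * v[i]) < 1e-12 for i in range(2))
```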
{ "language": "en", "url": "https://physics.stackexchange.com/questions/706814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Getting from $E^2 - p^2c^2 = m^2c^4$ to $E = \gamma mc^2$ What is each mathematical step (in detail) that one would take to get from: $E^2 - p^2c^2 = m^2c^4$ to $E = \gamma mc^2$, where $\gamma$ is the relativistic dilation factor. This is for an object in motion. NOTE: in the answer, I would like full explanation. E.g. when explaining how to derive $x$ from $\frac{x+2}{2}=4$, rather than giving an answer of "$\frac{x+2}{2}=4$, $x+2 = 8$, $x = 6$" give one where you describe each step, like "times 2 both sides, -2 both sides" but of course still with the numbers on display. (You'd be surprised at how people would assume not to describe in this detail).
Starting with your given equation, we add $p^2 c^2$ to both sides to get $$ E^2=m^2 c^4 + p^2 c^2$$ now using the definition of relativistic momentum $p=\gamma m v$ we substitute that in above to get $$E^2 = m^2 c^4 +(\gamma m v)^2 c^2=m^2 c^4 +\gamma^2 m^2 v^2 c^2$$ Now, factoring out a common $m^2 c^4$ from both terms on the RHS in anticipation of the answer we get $$E^2=m^2 c^4 (1+\frac{v^2}{c^2}\gamma^2)$$ Now using the definition of $\gamma$ as $$\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$$ and substituting this in for $\gamma$ we get $$E^2=m^2 c^4 \left(1+\frac{\frac{v^2}{c^2}}{1-\frac{v^2}{c^2}}\right)$$ and making a common denominator for the item in parenthesis we get $$E^2=m^2 c^4 \left( \frac{1}{1-\frac{v^2}{c^2}} \right)=m^2 c^4 \gamma^2$$ Taking the square root of both sides gives $$E=\pm \gamma mc^2$$ Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/28568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
About the conserved charge for the ghost number current in $bc$ conformal field theory (skip disclaimer) I have a question about the conserved charge for the ghost number current in $bc$ conformal field theory in Polchinski's string theory p62. It is said: For the ghost number current (2.5.14), $j=-:bc:$, the charge is $$ N^g= \frac{1}{2 \pi i} \int_0^{2 \pi} dw j_w = \sum_{n=1}^{\infty} ( c_{-n} b_n -b_{-n} c_n ) + c_0 b_0 - \frac{1}{2} \,\,\,\, (2.7.22) $$ First, what is $j_w$? It seems it hasn't been defined before. Is the current (2.5.14) in the variable $w$, i.e. $z=\exp(-iw)$? Then how does one derive Eq. (2.7.22)? I tried to derive it, but my result differs in the constant term and involves lots of (conformal) normal ordering: $$ N^g= \frac{1}{2 \pi i} \int_0^{2 \pi} dw j_w = \frac{1}{2 \pi i} \oint dz j_z $$ $$ = -\frac{1}{2 \pi i} \oint dz :bc: =\frac{-1}{2 \pi i} \oint dz \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} \frac{ : b_m c_n:}{z^{m+n+1}} (\mathrm{omit} \,\,\, \lambda \,\,\, \mathrm{in} \,\,\, \mathrm{2.7.16})$$ $$ = - \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} : b_m c_n: \delta_{m+n,0} = - \sum_{m=-\infty}^{\infty} : b_m c_{-m}: $$ $$ =- :b_0 c_0: - \sum_{m=-\infty}^{-1} : b_m c_{-m}: - \sum_{m=1}^{\infty} : b_m c_{-m}: = -:b_0 c_0: - \sum_{m=1}^{\infty} : b_{-m} c_{m}: - \sum_{m=1}^{\infty} : b_m c_{-m}: $$ $$ = :c_0b_0: - 1 - \sum_{m=1}^{\infty}: b_{-m} c_{m}: - : c_{-m} b_{m}: - \sum_{m=1}^{\infty} \{ :b_{m},c_{-m}: \} $$ $$ = \sum_{n=1}^{\infty} (:c_{-n} b_n: - :b_n c_{-n}:) + :c_0b_0: - \infty$$ $$ = \sum_{n=1}^{\infty} (\circ :c_{-n} b_n: \circ- \circ :b_n c_{-n}: \circ) + \circ :c_0b_0: \circ - \infty$$ $\circ \cdots \circ$ indicates creation-annihilation normal ordering. I guess there is a huge problem in my derivation, but how does one get (2.7.22) correctly?
I believe you are both missing something. As $z$ is related to $w$ by a conformal transformation you need to use (2.5.17) to relate $j_w(w)$ and $j_z(z)$. This is my derivation: \begin{align} N^\mathrm{g} = \frac{1}{2\pi i } \int_0^{2\pi}dw j(w) \end{align} From (2.5.17) we have \begin{align} (\partial_w z) j_z(z) = j_w(w) + \frac{2\lambda-1}{2} \frac{\partial^2_wz}{\partial_w z} \end{align} with $z= e^{-iw}$ we have $\partial_w z = -i z$ and $\partial^2_wz = -z$ so that \begin{align} -i z j_z(z) = j_w(w) + \frac{2\lambda-1}{2} \frac{-z}{-iz} \Rightarrow j_w (w) = -i z j(z) +i \frac{2\lambda-1}{2} \end{align} Using this and $dw= idz/z$ we have \begin{align} N^\mathrm{g} =&\, \frac{1}{2\pi i } \oint \frac{idz}{z} i \left( - z j(z) + \frac{2\lambda-1}{2} \right) \nonumber\\ =&\, -\frac{1}{2\pi i} \oint dz \, \left( - j(z) + \frac{2\lambda-1}{2z} \right) \end{align} We now have \begin{align} N^\mathrm{g} =&\, - \frac{1}{2\pi i} \oint dz\, \left( :bc:(z) + \frac{2\lambda-1}{2z} \right) = - \frac{1}{2\pi i} \oint dz\, \left( \substack{\circ\\\circ} b c \substack{\circ\\\circ}(z) +\frac{1-\lambda}{z}+ \frac{2\lambda-1}{2z} \right) \end{align} where we used that \begin{align} :bc:(z) =&\, \lim_{z'\rightarrow z} :b(z) c(z'): = \lim_{z'\rightarrow z} \substack{\circ\\\circ} b(z) c(z') \substack{\circ\\\circ}+ \lim_{z'\rightarrow z}\frac{(z/z')^{1-\lambda} -1}{z-z'} \nonumber\\ =&\, \substack{\circ\\\circ}b c \substack{\circ\\\circ} (z) +\frac{1-\lambda}{z} \end{align} Let us first quickly do the last two terms of $N^\mathrm{g}$.
They give \begin{align} -\frac{1}{2\pi i} \oint dz \frac{1}{2z} =-\frac{1}{2} \end{align} The first terms gives \begin{align} & \, -\frac{1}{2\pi i} \oint \sum_{m=-\infty}^\infty \sum_{n=-\infty}^\infty \frac{\substack{\circ\\\circ} b_m c_n \substack{\circ\\\circ} }{z^{m+\lambda +n+1-\lambda}} = -\frac{1}{2\pi i} \oint \sum_{m=-\infty}^\infty \sum_{n=-\infty}^\infty \frac{\substack{\circ\\\circ} b_m c_n \substack{\circ\\\circ} }{z^{m+ +n+1} }\nonumber\\ =&\, -\sum_{m=-\infty}^\infty \ \sum_{n=-\infty}^\infty \substack{\circ\\\circ}b_m c_n \substack{\circ\\\circ}\delta_{m+n,0} = -\sum_{m=-\infty}^\infty \substack{\circ\\\circ} b_m c_{-m} \substack{\circ\\\circ}\nonumber\\ =& \, -\sum_{m=-\infty}^{-1} \substack{\circ\\\circ}b_m c_{-m} \substack{\circ\\\circ} -\substack{\circ\\\circ} b_0 c_0 \substack{\circ\\\circ} -\sum_{m=1}^{\infty} \substack{\circ\\\circ} b_m c_{-m} \substack{\circ\\\circ} \nonumber\\ =& \, -\sum_{m=-\infty}^{-1} b_m c_{-m} + c_0 b_0 +\sum_{m=1}^{\infty} c_{-m} b_m = \sum_{m=1}^{\infty} (c_{-m}b_m- b_{-m} c_{m} ) + c_0 b_0 \end{align} Combining both contributions we find \begin{align} N^\mathrm{g} =&\, \sum_{m=1}^{\infty} (c_{-m}b_m- b_{-m} c_{m} ) + c_0 b_0 - \frac{1}{2} \end{align}
{ "language": "en", "url": "https://physics.stackexchange.com/questions/72257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Showing $K_\pm$ are raising/lowering operators In this post, I have the following operators defined: $$K_1=\frac 14(p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3 = \frac 14 (p^2+q^2)$$ I am given $ J_3|m\rangle = m|m\rangle$ and asked to show that $K_\pm \equiv K_1 \pm i K_2$ are ladder operators. My approach (raising operator): $$K_+|m\rangle=K_1|m\rangle+iK_2|m\rangle$$ $$=K_1|m\rangle+[J_3,K_1]|m\rangle$$ $$=K_1|m\rangle+ (J_3K_1-K_1J_3)|m\rangle$$ $$=K_1|m\rangle+J_3K_1|m\rangle-K_1m|m\rangle$$ First off, I'm unsure if this is the correct approach, and then I'm also lost on what to do next.
Are $p$ and $q$ the standard momentum and position operators in $L^2(\mathbb R)$? If the answer is positive, then: $$K_\pm := K_1 \pm iK_2 = \frac{1}{2}\left(\frac{1}{\sqrt{2}}(p\pm iq) \right)^2\:.$$ In other words, introducing the standard operators $a = \frac{1}{\sqrt{2}}(p- iq) $ and $a^\dagger = \frac{1}{\sqrt{2}}(p+ iq) $ for the harmonic oscillator: $$K_+ = \frac{1}{2}a^\dagger a^\dagger \:,\quad K_- = \frac{1}{2}aa\:.$$ Similarly: $$J_3 = \frac 14 (p^2+q^2) = \frac{1}{4} \left(a^\dagger a + a a^\dagger\right) = \frac{1}{2} (a^\dagger a + \frac{1}{2}I)\:.$$ This last identity implies that, in the eigenvalue equation $$J_3\psi_m = m\psi_m\:, $$ it must be $m= \frac{1}{2}(n+1/2)$ for $n=0,1,2\ldots$ and $$\psi_m = |(4m-1)/2\rangle\:,$$ where $|n\rangle$ is the standard basis of the harmonic oscillator with $n=0,1,2,\ldots$. Let us come to the action of $K_\pm$ on the vectors $\psi_m$. $$K_+ \psi_m = \frac{1}{2}a^\dagger a^\dagger |(4m-1)/2\rangle = \frac{1}{2} a^\dagger \sqrt{(4m-1)/2+1}|(4m-1)/2+1\rangle $$ $$K_+ \psi_m = \frac{1}{2} \sqrt{\frac{4m+3}{2}}\sqrt{\frac{4m+1}{2}}|\frac{4m+3}{2}\rangle = \frac{1}{4}\sqrt{(4m+3)(4m+1)}\psi_{m+1}\:.$$ We have found that: $$K_+ \psi_m = \sqrt{\left(m+\frac{3}{4}\right)\left(m+\frac{1}{4}\right)}\psi_{m+1}\:.$$ Similarly $$K_- \psi_m = \frac{1}{2}a a |(4m-1)/2\rangle = \frac{1}{2} a \sqrt{(4m-1)/2}|(4m-1)/2-1\rangle $$ $$K_- \psi_m = \frac{1}{2} \sqrt{\frac{4m-1}{2}}\sqrt{\frac{4m-3}{2}}|\frac{4m-5}{2}\rangle = \frac{1}{4}\sqrt{(4m-1)(4m-3)}\psi_{m-1}\:,$$ so that: $$K_- \psi_m = \sqrt{\left(m-\frac{1}{4}\right)\left(m-\frac{3}{4}\right)}\psi_{m-1}\:,$$ where $K_- \psi_m=0$ if $m=1/4,3/4$.
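These ladder relations can be checked numerically with truncated harmonic-oscillator matrices (a minimal sketch assuming numpy; the Fock-space truncation size is arbitrary):

```python
import numpy as np

N = 12                                          # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)    # a|n> = sqrt(n)|n-1>
ad = a.T                                        # a^dagger

Kp = 0.5 * ad @ ad                              # K_+ = (1/2) a^dagger a^dagger
Km = 0.5 * a @ a                                # K_- = (1/2) a a

def m_of(n):
    """|n> = psi_m with m = (n + 1/2)/2."""
    return 0.5 * (n + 0.5)

e4 = np.zeros(N); e4[4] = 1.0                   # |4> = psi_{9/4}
up4 = (Kp @ e4)[6]                              # coefficient of psi_{m+1} = |6>
down4 = (Km @ e4)[2]                            # coefficient of psi_{m-1} = |2>
```

Comparing `up4` and `down4` with $\sqrt{(m+\tfrac34)(m+\tfrac14)}$ and $\sqrt{(m-\tfrac14)(m-\tfrac34)}$ at $m=9/4$ confirms the overall normalization of the ladder coefficients.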
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Jackson equation 2.9; How to derive it? I wanted to understand the derivation of Jackson's equation (2.9) that you can find here on page 31 (50/661) from the potential given earlier. This equation is written as: $$ \textbf{F}=\frac{q}{y^2}\left[Q-\frac{qa^3(2y^2-a^2)}{y(y^2-a^2)^2}\right]\frac{\textbf{y}}{y} $$ Does anybody know how to do this as there are apparently many steps left out in between?
The problem is to find the force a charge $q$ feels from the image charge it produces on a conducting isolated sphere containing a total charge $Q$. We are given in 2.8 that the potential is \begin{equation} \Phi(\mathbf{x})=\frac{q}{|\mathbf{x} - \mathbf{y}|} - \frac{rq}{|\mathbf{x} - r^2 \mathbf{y}|} + \frac{Q + rq}{|\mathbf{x}|}, \end{equation} where $r$ is the radius of the sphere in units of the distance of the charge $q$ from the center of the sphere. To find the force on the charge $q$, we should differentiate this expression to get the electric field. However, we need not consider the first term since it is the term generated by the charge $q$ itself, so it represents a self-force and must be dropped. Keeping the other two terms, we find that the electric field from the sphere at a point $\mathbf{x}$ on the line through the charge and the center is \begin{equation} \mathbf{E}(\mathbf{x})=\left( - \frac{rq}{(\mathbf{x} - r^2 \mathbf{y})^2} + \frac{Q+rq}{x^2}\right)\hat{y} \end{equation} Since we are interested in the force on the charge $q$, we should evaluate this expression at the point $\mathbf{y}$. We find \begin{equation} \begin{aligned} \mathbf{E}(\mathbf{y})&=\left( - \frac{rq}{(\mathbf{y} - r^2 \mathbf{y})^2} + \frac{Q+rq}{y^2}\right)\hat{y} \\ &=\left( - \frac{rq}{(1 - r^2)^2} + Q+rq\right)\frac{\hat{y}}{y^2} \\ &= \left( \frac{-rq + rq(1-r^2)^2}{(1 - r^2)^2} + Q\right)\frac{\hat{y}}{y^2} \\ &= \left(Q + rq\frac{(1-r^2)^2-1}{(1 - r^2)^2} \right)\frac{\hat{y}}{y^2} \\ &= \left(Q + rq\frac{r^4-2r^2}{(1 - r^2)^2} \right)\frac{\hat{y}}{y^2} \\ &= \left(Q + r^3q\frac{r^2-2}{(1 - r^2)^2} \right)\frac{\hat{y}}{y^2} \\ &= \left(Q - r^3q\frac{2-r^2}{(1 - r^2)^2} \right)\frac{\hat{y}}{y^2} \end{aligned} \end{equation} Plugging in $\mathbf{F} = q\mathbf{E}$, we get 2.9.
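One can confirm with a computer algebra system that the final bracket reproduces the bracket of Jackson's (2.9) once $r=a/y$ is substituted (a minimal sketch assuming sympy; symbol names are mine):

```python
import sympy as sp

q, Q, a, y = sp.symbols('q Q a y', positive=True)
r = a / y     # sphere radius in units of the charge's distance from the center

# Bracket obtained in the derivation above
bracket_r = Q - r**3 * q * (2 - r**2) / (1 - r**2)**2

# Bracket of Jackson's equation (2.9)
bracket_jackson = Q - q * a**3 * (2*y**2 - a**2) / (y * (y**2 - a**2)**2)

difference = sp.simplify(bracket_r - bracket_jackson)
```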
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I calculate the electric field due to a uniformly charged semicircle? I want to find the y-component of the electric field at a point $b$ on the x-axis due to a charged semicircle centred around the origin. I found $dq$ using $$\frac{dq}{Q} = \frac{dθ}{π}$$ $$dq = \frac{Q}{π}dθ$$ And the distance $r$ from $dq$ to $b$ using the cosine law $$r^2 = a^2 + b^2 - 2ab\cosθ$$ Subbing this into $dE = \frac{kdq}{r^2}$ gives $$dE = \frac{k\frac{Q}{π}}{a^2 + b^2 - 2ab\cosθ}dθ$$ Since I only want the y-component of the electric field, I multiplied the whole thing by $\sinθ$ and integrated to get $$E_y = \int\frac{k\frac{Q}{π}\sinθ}{a^2 + b^2 - 2ab\cosθ}dθ$$ $$E_y = \frac{kQ}{2πab}\ln\left(\frac{a^2+b^2+2ab}{a^2+b^2-2ab}\right)$$ The answer that I should be getting is $$E_y = \frac{2kQ}{\pi}\frac{1}{b^2-a^2}$$
Assume $b>a$, as shown in your diagram. First, the answer you were given must be wrong. Because if we let $a\rightarrow 0$ and fix $b$, then the $y$-component should go to zero. But that answer goes to the Coulomb field of a point charge. The $y$ component is not obtained by simple multiplication by $\sin\theta$. You have $$d\vec{E} = \frac{k\frac{Q}{π}}{(a^2 + b^2 - 2ab\cos \theta)^{3/2}}d\theta((b-a\cos\theta) \hat{i}+a\sin\theta \hat{j})$$ So the $y$ component is $$dE_y = \frac{k\frac{Q}{π}}{(a^2 + b^2 - 2ab\cos\theta)^{3/2}}d\theta a\sin\theta$$ Then $$E_y= kQ/\pi \int_0^\pi \frac{a\sin\theta d\theta}{(a^2+b^2-2ab\cos\theta)^{3/2}} $$ $$=\frac{kQ}{2\pi b} \int_0^\pi \frac{d(a^2+b^2-2ab\cos\theta)}{(a^2+b^2-2ab\cos\theta)^{3/2}}$$ $$=\frac{kQ}{\pi b}\left[(a^2+b^2-2ab\cos\theta)^{-1/2}\right]_\pi^0$$ $$=\frac{kQ}{\pi b}\left[(a^2+b^2-2ab)^{-1/2}-(a^2+b^2+2ab)^{-1/2}\right]$$ $$=\frac{kQ}{\pi b}\left[\frac{1}{b-a}-\frac{1}{b+a}\right]$$ $$=\frac{2kQ}{\pi}\frac{a}{b}\frac{1}{b^2-a^2}$$ If instead you have $a>b$, then the answer is $$E_y=\frac{kQ}{\pi b}\left[\frac{1}{a-b}-\frac{1}{a+b}\right]$$ $$=\frac{2kQ}{\pi}\frac{1}{a^2-b^2}$$
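The closed form can be spot-checked by numerically integrating the $y$-component integrand (a sketch assuming numpy; the trapezoid rule is written out by hand to stay version-independent):

```python
import numpy as np

k, Q, a, b = 1.0, 1.0, 1.0, 2.0          # arbitrary test values with b > a

theta = np.linspace(0.0, np.pi, 20001)
f = (k * Q / np.pi) * a * np.sin(theta) \
    / (a**2 + b**2 - 2*a*b*np.cos(theta))**1.5

h = theta[1] - theta[0]                  # composite trapezoid rule
Ey_numeric = h * (0.5*f[0] + f[1:-1].sum() + 0.5*f[-1])

Ey_closed = (2*k*Q/np.pi) * (a/b) / (b**2 - a**2)
```

With $a=1$, $b=2$ both values come out at $1/(3\pi)$, which also makes the extra factor $a/b$ relative to the "expected" answer in the question easy to see.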
{ "language": "en", "url": "https://physics.stackexchange.com/questions/360375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability of $\frac{-1}{\sqrt2} S_x + S_z $ I have a state $\left|\Psi\right>=\frac{\left|1\right>+\left|0\right>}{\sqrt{2}},$ in the $z$-spin basis and want to calculate the probability of this state for the eigenvectors of the operator $\frac{-1}{\sqrt2} S_x + S_z $, which are $\begin{pmatrix} 1-\sqrt2\\ 1 \end{pmatrix}$ and $\begin{pmatrix} 1+\sqrt2\\ 1 \end{pmatrix}$ (in the $z$-basis). So I take the norm squared of $\langle\begin{pmatrix} 1\pm\sqrt2\\ 1 \end{pmatrix}|\Psi\rangle$, which gives me 1 in both cases, which is no good for a probability. Where am I wrong?
Your operator is $\frac{1}{\sqrt{2}}S_x +S_z$. The eigenvectors of this operator are not what you have written down. They are $v_1 = \frac{1}{\sqrt{6-2\sqrt{6}}}\begin{pmatrix} 1\\ \sqrt{3} - \sqrt{2} \end{pmatrix}$ $v_2 = \frac{1}{\sqrt{6+2\sqrt{6}}}\begin{pmatrix} 1\\ -\sqrt{3} - \sqrt{2} \end{pmatrix}$ Thus, for the state $\left|\Psi\right>=\frac{\left|1\right>+\left|0\right>}{\sqrt{2}} = \begin{pmatrix} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{pmatrix}$, the probabilities of the two outcomes are $|\langle v_1\vert\Psi\rangle|^2 = \frac{1}{6}(3+\sqrt{3})$ $|\langle v_2\vert\Psi\rangle|^2 = \frac{1}{6}(3-\sqrt{3})$
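A quick numerical cross-check of these probabilities (a sketch assuming numpy; $S_i=\sigma_i/2$ in $\hbar=1$ units):

```python
import numpy as np

Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
Sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])

H = Sx / np.sqrt(2) + Sz                 # the operator (1/sqrt(2)) S_x + S_z
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # the state |Psi>

vals, vecs = np.linalg.eigh(H)           # columns: orthonormal eigenvectors
probs = np.abs(vecs.T @ psi)**2          # Born-rule probabilities
```

The two probabilities sum to 1, as they must for properly normalized eigenvectors, which is exactly the check the question's unnormalized vectors fail.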
{ "language": "en", "url": "https://physics.stackexchange.com/questions/437162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Lagrangian equations of motion for ball rolling on turntable The equations governing the motion of a ball of mass $m$, radius $R$ rolling on a table rotating at constant angular velocity $ \Omega $ which are derived using Newton's laws are: (I present these for comparison) \begin{align*} (I+mR^2) \dot{\omega}_x &= (I+2mR^2)\Omega \omega_y-mR \Omega^2y \\ (I+mR^2) \dot{\omega}_y &= -(I+2mR^2)\Omega \omega_x + mR \Omega^2 x \\ \dot{x} &= R \omega_y \\ \dot{y} &= -R \omega_x \end{align*} Where $x,y,\omega_x,\omega_y$ are absolute values measured in the rotating frame ($x,y$ being positions and $\omega_x,\omega_y$ angular velocities of the ball). To express the position, velocity, etc in the inertial $XYZ$ frame we can perform a change of variables: \begin{align*} X=x \cos \theta - y \sin \theta \\ Y=x \sin \theta + y \cos \theta \end{align*} The equations written above are correct as far as I know. Now I've tried to derive these equations using the Lagrangian approach, but my equations differ slightly from the above. I'll share my work here: We start with the standard formulation of the Lagrangian equations of motion: $$\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q_i}} \right) - \frac{\partial L}{\partial q_i}=Q_i$$ For this system, there are no non-conservative forces doing work at any time so $Q_i=0$ (Assuming no-slip here). Now the kinetic energy of the system is: $$T=\frac{1}{2}m \| v \| ^2 + \frac{1}{2} I \| \omega \|^2 $$ Where $v$,$\omega$ are the absolute linear and angular velocities of the ball, $I$ is the moment of inertia of the center of mass. 
I'll proceed in the rotating frame basis $xyz$, where the position $r$ of the ball is given by $$\vec{r}=x \hat{i} + y \hat{j}$$ Using velocity kinematics in rotating frames, the absolute velocity of the ball is given by: $$\vec{v}=\vec{\Omega} \times \vec{r} + \vec{v}_{xyz}$$ Where $\vec{v}_{xyz}$ is the velocity of the ball in the rotating frame: $$\vec{v}_{xyz} = \dot{x} \hat{i} + \dot{y} \hat{j}$$ So the absolute velocity $\vec{v}$ after expanding gives: $$\vec{v} = (\dot{x}-\Omega y) \hat{i} + (\dot{y} +\Omega x) \hat{j} $$ It follows from taking the magnitude of the absolute velocity: $$\| v \|^2 = (\dot{x}-\Omega y)^2 + (\dot{y}+\Omega x)^2$$ The first unknown term in the Lagrangian equations of motion. The absolute angular velocity $\vec{\omega}$ is more straight-forward: $$\vec{\omega} = \vec{\Omega} + \vec{\omega}_{xyz}$$ Where $\vec{\omega}_{xyz}$ is the angular velocity of the ball in the rotating frame, where one can show it DOES NOT have a component in the $\hat{k}$ direction if we impose no-slip kinematics (which we are), so: $$\vec{\omega}_{xyz} = \omega_x \hat{i} + \omega_y \hat{j}$$ And since $$\vec{\Omega} = \Omega \hat{k}$$ The absolute angular velocity is: $$\vec{\omega} = \omega_x \hat{i} + \omega_y \hat{j} + \Omega \hat{k} $$ Then it follows that: $$\| \omega \|^2 = \omega_x^2 + \omega_y^2 + \Omega^2$$ The potential energy of the system is constant and doesn't affect the equations of motion so, the Lagrangian becomes: $$L=\frac{1}{2} m \left[ (\dot{x}-\Omega y)^2 + (\dot{y}+\Omega x)^2 \right] + \frac{1}{2} I \left[ \omega_x^2 + \omega_y^2 + \Omega^2 \right] $$ Now the constraint equations are the no-slip conditions: $$\vec{v}_{xyz} = \vec{\omega}_{xyz} \times \vec{R}$$ Where $\vec{R} = R \hat{k} $ of course, then we have two conditions: \begin{align*} \dot{x} &= R \omega_y \\ \dot{y} &= - R \omega_x \end{align*} These are non-holonomic constraints, but given their simple nature I opted out of using Lagrange multipliers and simply 
substituted them into the equations of motion (I reworked it with Lagrange multipliers afterwards and got the same thing). Now after substituting the constraints, the Lagrangian becomes: $$L=\frac{1}{2} \left( m + \frac{I}{R^2} \right) (\dot{x}^2 + \dot{y}^2) - m \Omega \dot{x} y + m \Omega \dot{y} x + \frac{1}{2} m \Omega^2 (x^2 + y^2) + \frac{1}{2} I \Omega^2 $$ From here, applying the Lagrangian equation above with $Q_i=0$ I get the following equations of motion: $$\left( m+\frac{I}{R^2} \right) \ddot{x} - 2m \Omega \dot{y} - m \Omega^2 x = 0 $$ $$\left( m+\frac{I}{R^2} \right) \ddot{y} + 2m \Omega \dot{x} - m \Omega^2 y = 0$$ And using the no-slip conditions again we can rewrite: \begin{align*} \left( I + mR^2 \right) \dot{\omega}_x &= 2mR^2 \Omega \omega_y - mR \Omega^2 y \\ \left( I + mR^2 \right) \dot{\omega}_y &= -2mR^2 \Omega \omega_x + mR \Omega^2 x \end{align*} Now if you compare these last two equations with the ones I wrote in the beginning, the only difference is in the first term on the right-hand side. Look at these two for instance: \begin{align*} (I+mR^2) \dot{\omega}_x &= (I+2mR^2)\Omega \omega_y-mR \Omega^2y \\ \left( I + mR^2 \right) \dot{\omega}_x &= 2mR^2 \Omega \omega_y - mR \Omega^2 y \end{align*} The ONLY difference is the missing $I$ term! I'm missing the moment of inertia for some reason, why is that? What's wrong with my Lagrangian approach?
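The Euler-Lagrange algebra in the question can be verified symbolically from the substituted Lagrangian (a minimal sketch assuming sympy; symbol names are mine):

```python
import sympy as sp

t = sp.symbols('t')
m, Imom, R, W = sp.symbols('m I R Omega', positive=True)  # W stands for Omega
x = sp.Function('x')(t)
y = sp.Function('y')(t)

# The Lagrangian after substituting the no-slip constraints
L = (sp.Rational(1, 2)*(m + Imom/R**2)*(x.diff(t)**2 + y.diff(t)**2)
     - m*W*x.diff(t)*y + m*W*y.diff(t)*x
     + sp.Rational(1, 2)*m*W**2*(x**2 + y**2)
     + sp.Rational(1, 2)*Imom*W**2)

def euler_lagrange(q):
    """d/dt (dL/dq') - dL/dq for generalized coordinate q(t)."""
    return sp.expand(sp.diff(L.diff(q.diff(t)), t) - L.diff(q))

eq_x = euler_lagrange(x)
eq_y = euler_lagrange(y)
```

This confirms that the equations of motion following from this Lagrangian are $(m+I/R^2)\ddot{x}-2m\Omega\dot{y}-m\Omega^2 x=0$ and $(m+I/R^2)\ddot{y}+2m\Omega\dot{x}-m\Omega^2 y=0$, so the algebra itself is not where the missing $I$ term comes from.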
Newton-Euler equations \begin{align*} & \underbrace{\left[ \begin {array}{cccc} m&0&0&0\\ 0&m&0&0 \\ 0&0&{\it I_s}&0\\ 0&0&0&{\it I_s} \end {array} \right]}_{\boldsymbol A} \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \dot{\omega_x} \\ \dot{\omega_y} \\ \end{bmatrix}= \underbrace{\begin{bmatrix} -m \left( -{\Omega}^{2}x-2\,\Omega\,{\dot y} \right) \\ -m \left( -{\Omega}^{2}y+2\,\Omega\,{\dot x} \right) \\ 0 \\ 0 \\ \end{bmatrix} }_{\boldsymbol b}+\boldsymbol C_B^T\,\boldsymbol\lambda_n\\\\ \end{align*} where $~I_s=m\,\frac 25\,\rho^2~$ is the sphere moment of inertia, $\rho~$ the sphere radius, and $~\boldsymbol C_B^T~$ the distribution matrix of the generalized constraint forces $~ \boldsymbol\lambda_n$ Kinematics \begin{align*} &\begin{bmatrix} \dot{x} \\ \dot{y} \\ {\omega_x} \\ {\omega_y} \\ \end{bmatrix}=\underbrace{\left[ \begin {array}{cc} 0&-\rho\\ \rho&0 \\ 1&0\\ 0&1\end {array} \right]}_{\boldsymbol J} \begin{bmatrix} \omega_x\\ \omega_y \\ \end{bmatrix} \end{align*} The equations of motion \begin{align*} &\boldsymbol J^T\,\boldsymbol A\,\boldsymbol J\,\begin{bmatrix} \dot\omega_x\\ \dot\omega_y \\ \end{bmatrix}=\boldsymbol J^T\,\boldsymbol b\\ &\Rightarrow\\\\ &\begin{bmatrix} \dot{\omega}_x \\ \dot{\omega}_y \\ \end{bmatrix}=\left[ \begin {array}{c} {\frac {\rho\,m\Omega\, \left( \Omega\,y+2 \,\omega_{{y}}\rho \right) }{{\rho}^{2}m+I_{{s}}}} \\ -{\frac {\rho\,m\Omega\, \left( \Omega\,x+2\, \omega_{{x}}\rho \right) }{{\rho}^{2}m+I_{{s}}}}\end {array} \right] \tag{1}\\ &\text{and}\\ &\dot{x}=-\rho\,\omega_y \tag{2}\\ &\dot{y}=\rho\,\omega_x \tag{3} \\ \end{align*} Inertial system sphere position \begin{align*} & \boldsymbol R_s=\left[ \begin {array}{ccc} \cos \left( \Omega\,t \right) &-\sin \left( \Omega\,t \right) &0\\ \sin \left( \Omega\,t \right) &\cos \left( \Omega\,t \right) &0\\ 0&0&1 \end {array} \right] \begin{bmatrix} x \\ y \\ \rho \\ \end{bmatrix} \end{align*} the rotation matrix $\boldsymbol S~$ between local coordinate system and inertial system is: 
\begin{align*} &\boldsymbol S=\left[ \begin {array}{ccc} \cos \left( \varphi _{{z}} \right) &-\sin \left( \varphi _{{z}} \right) &0\\ \sin \left( \varphi _{{z}} \right) &\cos \left( \varphi _{{z}} \right) &0 \\ 0&0&1\end {array} \right] \, \left[ \begin {array}{ccc} \cos \left( \varphi _{{y}} \right) &0&\sin \left( \varphi _{{y}} \right) \\ 0&1&0 \\ -\sin \left( \varphi _{{y}} \right) &0&\cos \left( \varphi _{{y}} \right) \end {array} \right] \,\left[ \begin {array}{ccc} 1&0&0\\ 0&\cos \left( \varphi _{{x}} \right) &-\sin \left( \varphi _{{x}} \right) \\ 0&\sin \left( \varphi _{{x}} \right) &\cos \left( \varphi _{{x}} \right) \end {array} \right]\\\ &\text{with}\\ &\boldsymbol{\dot{S}}=\left[ \begin {array}{ccc} 0&-\omega_{{z}}&\omega_{{y}} \\ \omega_{{z}}&0&-\omega_{{x}}\\ -\omega_{{y}}&\omega_{{x}}&0\end {array} \right] \boldsymbol S\,\\\\ &\Rightarrow\\ &\begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \\ \end{bmatrix}=\left[ \begin {array}{ccc} \cos \left( \varphi _{{z}} \right) \cos \left( \varphi _{{y}} \right) &-\sin \left( \varphi _{{z}} \right) &0 \\ \sin \left( \varphi _{{z}} \right) \cos \left( \varphi _{{y}} \right) &\cos \left( \varphi _{{z}} \right) &0 \\ -\sin \left( \varphi _{{y}} \right) &0&1 \end {array} \right] \,\begin{bmatrix} \dot\varphi_x \\ \dot\varphi_y \\ \dot\varphi_z \\ \end{bmatrix} \end{align*} from here you obtain \begin{align*} \begin{bmatrix} \dot{\varphi_x}\\ \dot{\varphi_y} \\ \dot{\varphi_z} \\ \end{bmatrix}= \left[ \begin {array}{ccc} {\frac {\cos \left( \varphi _{{z}} \right) }{\cos \left( \varphi _{{y}} \right) }}&{\frac {\sin \left( \varphi _{{z}} \right) }{\cos \left( \varphi _{{y}} \right) }}&0 \\ -\sin \left( \varphi _{{z}} \right) &\cos \left( \varphi _{{z}} \right) &0\\ {\frac {\cos \left( \varphi _{{z}} \right) \sin \left( \varphi _{{y}} \right) }{\cos \left( \varphi _{{y}} \right) }}&{\frac {\sin \left( \varphi _{{z}} \right) \sin \left( \varphi _{{y}} \right) }{\cos \left( \varphi _{{y}} \right) }}&1\end {array} 
\right] \begin{bmatrix} \omega_x \\ \omega_y \\ \Omega \\ \end{bmatrix} \tag {4} \end{align*} Altogether you obtain 7 first-order differential equations. Euler-Lagrange with non-holonomic constraint equations \begin{align*} &\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{\boldsymbol{w}}}\right)^T-\left( \frac{\partial \mathcal{L}}{\partial \boldsymbol{w}}\right)^T=\left[\frac{\partial \boldsymbol{R}}{\partial \boldsymbol{w}}\right]^T\,\boldsymbol{F}_s+ \left(\frac{\partial \boldsymbol{g}_n}{\partial \dot{\boldsymbol{w}}}\right)^T\boldsymbol{\lambda}_n\tag A \end{align*} where \begin{align*} & \boldsymbol{R}=\begin{bmatrix} x \\ y \\ \end{bmatrix}\\ & \dot{\boldsymbol{w}}=\begin{bmatrix} \dot{x} \\ \dot{y} \\ \omega_x \\ \omega_y \\ \end{bmatrix}~, {\boldsymbol{w}}=\begin{bmatrix} {x} \\ {y} \\ {\varphi}_x \\ {\varphi}_y \\ \end{bmatrix}\\\\ &\mathcal{L}=\frac{m}{2}\left(\dot{x}^2+\dot{y}^2\right)+ \frac{I_s}{2}\left(\omega_x^2+\omega_y^2+\Omega^2\right) \\\\ &\boldsymbol F_s=\begin{bmatrix} -m \left( -{\Omega}^{2}x-2\,\Omega\,{\dot y} \right) \\ -m \left( -{\Omega}^{2}y+2\,\Omega\,{\dot x} \right) \\ \end{bmatrix} \\\\ &\text{and the non-holonomic constraint equations }\\ &\boldsymbol g_n= \left[ \begin {array}{c} {\dot{y}}-\rho\,\omega_{{x}} \\ {\dot{x}}+\rho\,\omega_{{y}}\end {array} \right] \end{align*} from equation (A) you obtain: \begin{align*} &\underbrace{\left[ \begin {array}{cccccc} m&0&0&0&0&-1\\ 0&m&0&0 &-1&0\\ 0&0&I_s&0&\rho&0\\ 0&0 &0&I_s&0&-\rho\\ 0&-1&\rho&0&0&0 \\ -1&0&0&-\rho&0&0\end {array} \right]}_{\boldsymbol A_L} \underbrace{\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \dot\omega_x \\ \dot\omega_y \\ \lambda_n \end{bmatrix}}_{\boldsymbol{\ddot{w}}} =\underbrace{\begin{bmatrix} -m \left( -{\Omega}^{2}x-2\,\Omega\,{\dot y} \right) \\ -m \left( -{\Omega}^{2}y+2\,\Omega\,{\dot x} \right) \\ 0 \\ 0 \\ 0 \\ \end{bmatrix} }_{\boldsymbol b_L}\tag{B} \\\ &\Rightarrow \end{align*} substitute 
$~\dot{y}=\rho\,\omega_x~,\dot{x}=-\rho\,\omega_y~$ into equation (B) and solve for $~\ddot{\boldsymbol{w}}~$ you obtain \begin{align*} &\begin{bmatrix} \dot{\omega}_x \\ \dot{\omega}_y \\ \end{bmatrix}=\left[ \begin {array}{c} {\frac {\rho\,m\Omega\, \left( \Omega\,y+2 \,\omega_{{y}}\rho \right) }{{\rho}^{2}m+I_{{s}}}} \\ -{\frac {\rho\,m\Omega\, \left( \Omega\,x+2\, \omega_{{x}}\rho \right) }{{\rho}^{2}m+I_{{s}}}}\end {array} \right] \end{align*} These results are equal to the results of equation (1). Edit \begin{align*} &\textbf{how to obtain the Jacobi-Matrix $~\boldsymbol J~$ (kinematic equations) }\\\\ &\text{the non-holonomic constraint equations are}\\ &\boldsymbol g_n=\begin{bmatrix} \dot{x}-\rho\omega_y \\ \dot{y}+\rho\omega_x \\ \end{bmatrix}=\boldsymbol 0\\\\ &\text{obtain the time derivative }\\ &\boldsymbol{\dot{q}}_n= \underbrace{\left[ \begin {array}{cccc} 1&0&0&-\rho\\ 0&1&\rho&0 \end {array} \right]}_{\boldsymbol C_n} \underbrace{\begin{bmatrix} \dot{x} \\ \dot{y} \\ \omega_x \\ \omega_y \\ \end{bmatrix}}_{\boldsymbol{\dot{w}}}=\boldsymbol C_n\,\boldsymbol{\dot{w}}=\boldsymbol 0\tag{c} \end{align*} you have two constraint equations and four velocities $\boldsymbol{\dot{w}}~$ thus the generalized velocities are two (4-2) if you choose the generalized velocities $~\omega_x~,\omega_y$ you can obtain from equation (c) \begin{align*} &\begin{bmatrix} \dot{x} \\ \dot{y} \\ \omega_x \\ \omega_y \\ \end{bmatrix}= \underbrace{\left[ \begin {array}{cc} 0&\rho\\ -\rho&0 \\ 1&0\\ 0&1\end {array} \right]}_{\boldsymbol J}\begin{bmatrix} \omega_x \\ \omega_y \\ \end{bmatrix}\\\ &\text{according to d'Alembert's principle }\\ &\boldsymbol J^T\,\boldsymbol C^T_n= \left[ \begin {array}{cccc} 0&-\rho&1&0\\ \rho&0&0& 1\end {array} \right] \,\left[ \begin {array}{cc} 1&0\\ 0&1 \\ 0&\rho\\ -\rho&0\end {array} \right]= \left[ \begin {array}{cc} 0&0\\ 0&0\end {array} \right] ~ \surd \end{align*}
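The reduction $\boldsymbol J^T\boldsymbol A\,\boldsymbol J\,\dot{\boldsymbol\omega}=\boldsymbol J^T\boldsymbol b$ can be carried out symbolically as a check (a sympy sketch; the no-slip relations $\dot x=-\rho\omega_y$, $\dot y=\rho\omega_x$ of eqs. (2) and (3) are substituted into $\boldsymbol b$):

```python
import sympy as sp

m, Is, rho, W = sp.symbols('m I_s rho Omega', positive=True)
x, y, wx, wy = sp.symbols('x y omega_x omega_y')

xd, yd = -rho*wy, rho*wx                  # no-slip: xdot, ydot

A = sp.diag(m, m, Is, Is)
b = sp.Matrix([-m*(-W**2*x - 2*W*yd),     # rotating-frame fictitious forces
               -m*(-W**2*y + 2*W*xd),
               0,
               0])
J = sp.Matrix([[0, -rho],
               [rho, 0],
               [1, 0],
               [0, 1]])

wdot = sp.simplify((J.T * A * J).inv() * (J.T * b))   # (wx_dot, wy_dot)
```

Solving the stated system this way gives $\dot\omega_x=\rho m\Omega(\Omega y+2\rho\omega_y)/(\rho^2 m+I_s)$ and $\dot\omega_y=-\rho m\Omega(\Omega x+2\rho\omega_x)/(\rho^2 m+I_s)$.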
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Derivation of the Electromagnetic Stress-Energy Tensor in Flat Space-time I am working on deriving the electromagnetic stress energy tensor using the electromagnetic tensor in the $(-, +, +, +)$ sign convention. However, I have hit a snag and cannot figure out where I have gone wrong. $$ F^{\mu \alpha}= \begin{bmatrix} 0 & \frac{E_{x}}{c} & \frac{E_{y}}{c} & \frac{E_{z}}{c} \\ -\frac{E_{x}}{c} & 0 & B_{z} & -B_{y} \\ -\frac{E_{y}}{c} & -B_{z} & 0 & B_{x} \\ -\frac{E_{z}}{c} & B_{y} & -B_{x} & 0 \\ \end{bmatrix} $$ $$ F^{\mu}_{\alpha} = \begin{bmatrix} 0 & \frac{E_{x}}{c} & \frac{E_{y}}{c} & \frac{E_{z}}{c} \\ \frac{E_{x}}{c} & 0 & B_{z} & -B_{y} \\ \frac{E_{y}}{c} & -B_{z} & 0 & B_{x} \\ \frac{E_{z}}{c} & B_{y} & -B_{x} & 0 \\ \end{bmatrix} $$ $$ T^{\mu\nu} = \frac{1}{\mu_0}(F^{\mu \alpha}F^{v}_{\alpha} - \frac{1}{4}\eta^{\mu\nu}F_{\alpha\beta}F^{\alpha \beta})$$ Doing matrix multiplication of the matrices $F^{\mu \alpha}$ and $F^{\nu}_{\alpha}$ from above gives $$ F^{\mu \alpha}F^{\nu}_{\alpha} = \begin{bmatrix} (\frac{E}{c})^{2} & -B_{z}\frac{E_{y}}{c} + B_{y}\frac{E_{z}}{c} & \frac{E_{x}}{c}B_{z} - \frac{E_{z}}{c}B_{x} & -\frac{E_{x}}{c}B_{y} + \frac{E_{y}}{c}B_{x} \\ B_{z}\frac{E_{y}}{c} - B_{y}\frac{E_{z}}{c} & -B_{z}^{2} - B_{y}^{2} - (\frac{E_{x}}{c})^{2} & -\frac{E_{x}}{c}\frac{E_{y}}{c} + B_{y}B_{x} & \frac{E_{x}}{c}\frac{E_{z}}{c} + B_{z}B_{x} \\ -B_{z}\frac{E_{x}}{c} + B_{x}\frac{E_{z}}{c} & -\frac{E_{y}}{c}\frac{E_{x}}{c} + B_{x}B_{y} & -(\frac{E_{y}}{c})^{2}-B_{z}^{2}-B_{x}^{2} & -\frac{E_{y}}{c}\frac{E_{z}}{c} + B_{z}B_{y} \\ B_{y}\frac{E_{x}}{c} - B_{x}\frac{E_{y}}{c} & -\frac{E_{z}}{c}\frac{E_{x}}{c} + B_{x}B_{z} & -\frac{E_{z}}{c}\frac{E_{y}}{c} + B_{y}B_{z} & -(\frac{E_{z}}{c})^{2}-B_{y}^{2}-B_{x}^{2} \\ \end{bmatrix} $$ Subtracting the $\frac{1}{4}\eta^{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}= \frac{1}{4}\eta^{\mu\nu}[2(B^{2} - (\frac{E}{c})^{2})]$ and multiplying by $\frac{1}{\mu_{0}}$ gives $$ T^{\mu\nu}=\frac{1}{\mu_{0}} \begin{bmatrix} 
(\frac{E}{c})^{2} + \frac{1}{2}(B^{2} - (\frac{E}{c})^{2}) & -B_{z}\frac{E_{y}}{c} + B_{y}\frac{E_{z}}{c} & \frac{E_{x}}{c}B_{z} - \frac{E_{z}}{c}B_{x} & -\frac{E_{x}}{c}B_{y} + \frac{E_{y}}{c}B_{x} \\ B_{z}\frac{E_{y}}{c} - B_{y}\frac{E_{z}}{c} & -B_{z}^{2} - B_{y}^{2} - (\frac{E_{x}}{c})^{2} - \frac{1}{2}(B^{2} - (\frac{E}{c})^{2}) & -\frac{E_{x}}{c}\frac{E_{y}}{c} + B_{y}B_{x} & \frac{E_{x}}{c}\frac{E_{z}}{c} + B_{z}B_{x} \\ -B_{z}\frac{E_{x}}{c} + B_{x}\frac{E_{z}}{c} & -\frac{E_{y}}{c}\frac{E_{x}}{c} + B_{x}B_{y} & -(\frac{E_{y}}{c})^{2}-B_{z}^{2}-B_{x}^{2} - \frac{1}{2}(B^{2} - (\frac{E}{c})^{2}) & -\frac{E_{y}}{c}\frac{E_{z}}{c} + B_{z}B_{y} \\ B_{y}\frac{E_{x}}{c} - B_{x}\frac{E_{y}}{c} & -\frac{E_{z}}{c}\frac{E_{x}}{c} + B_{x}B_{z} & -\frac{E_{z}}{c}\frac{E_{y}}{c} + B_{y}B_{z} & -(\frac{E_{z}}{c})^{2}-B_{y}^{2}-B_{x}^{2} - \frac{1}{2}(B^{2} - (\frac{E}{c})^{2}) \\ \end{bmatrix} $$ However, the textbook definition of the electromagnetic stress energy tensor is: $$ T^{\mu\nu} = \begin{bmatrix} \frac{1}{2}(\epsilon_{0} |E|^{2} + \frac{1}{\mu_{0}}|B|^{2}) & \frac{S_{x}}{c} & \frac{S_{y}}{c} & \frac{S_{z}}{c} \\ \frac{S_{x}}{c} & -\sigma_{xx} & -\sigma_{xy} & -\sigma_{xz} \\ \frac{S_{y}}{c} & -\sigma_{yx} & -\sigma_{yy} & -\sigma_{yz} \\ \frac{S_{z}}{c} & -\sigma_{zx} & -\sigma_{zy} & -\sigma_{zz} \\ \end{bmatrix} $$ with $\vec{S} = \frac{1}{\mu_{0}}(\vec{E} \times \vec{B})$ and $\sigma_{ij} = \epsilon_{0} E_{i}E_{j} + \frac{1}{\mu_{0}}B_{i}B_{j} - \frac{1}{2}(\epsilon_{0} E^{2} + \frac{1}{\mu_{0}}B^{2})\delta_{ij} $ So, I know my $T^{01} = T^{10}$, $T^{02} = T^{20}$, and $T^{03} = T^{30}$ but they do not. They are of opposite signs. What did I do incorrectly?
You could do yourself a big favor by working in units with $c=1$. It's really cumbersome writing all the factors of $c$. You can always reinsert them at the end if you want a result in SI. Or if you want to compare with Wikipedia's equation that's expressed in SI, just drop all the $c$'s from WP's version. You detected the problem because the final result lacked the proper symmetry. So look at your calculation for the first place where that symmetry is lost. This happens at the very first step, where you calculate $F^{\mu\alpha}F^\nu{}_\alpha$. This should be the same as $g_{\beta\alpha}F^{\mu\alpha}F^{\nu\beta}$, which is manifestly symmetric in $\mu$ and $\nu$. For example, its 01 component is $F^{02}F^{12}g_{22}+F^{03}F^{13}g_{33}=E_yB_z-E_zB_y$, which is the same as its 10 component. I think the problem is that you say you found this result by matrix multiplication. The ordinary rules of matrix multiplication assume that both matrices are written in mixed upper-lower index form. Also note that in general there is a distinction between $T^\mu{}_\nu$ and $T_\nu{}^\mu$, so you can't just write $T^\mu_\nu$ without ambiguity. If this were just an issue with mathjax, the syntax that works is this: T^\mu{}_\nu.
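The point about contracting the right index slot is easy to see numerically: the proper contraction $F^{\mu\alpha}F^{\nu\beta}g_{\alpha\beta}$ is symmetric, while naively matrix-multiplying $F^{\mu\alpha}$ with $F^{\nu}{}_{\alpha}$ is not (a numpy sketch in $c=1$ units with arbitrary field values):

```python
import numpy as np

E = np.array([1.0, 2.0, 3.0])             # arbitrary test fields, c = 1
B = np.array([-1.0, 0.5, 2.0])

g = np.diag([-1.0, 1.0, 1.0, 1.0])        # (-,+,+,+) metric

F = np.zeros((4, 4))                      # F^{mu nu} as in the question
F[0, 1:], F[1:, 0] = E, -E
F[1, 2], F[2, 1] = B[2], -B[2]
F[3, 1], F[1, 3] = B[1], -B[1]
F[2, 3], F[3, 2] = B[0], -B[0]

# Proper contraction: S^{mu nu} = F^{mu a} F^{nu b} g_{ab}
S = np.einsum('ma,nb,ab->mn', F, F, g)

# Naive matrix product of F^{mu a} with the mixed tensor F^{nu}{}_{a}:
# this contracts the wrong slot and destroys the symmetry.
F_mixed = F @ g
naive = F @ F_mixed
```

The $01$ component of the symmetric contraction is $(\vec E\times\vec B)_x$, the Poynting flux, while the naive product picks up the opposite sign there, which is exactly the sign flip seen in the question's $T^{0i}$ row.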
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Boyer-Lindquist coordinates to Cartesian coordinates The Kerr metric in Boyer-Lindquist coordinates is given by \begin{align} ds^2 &= -\left[ \frac{r^2 + a^2 \cos^2(\theta) - 2mr}{r^2+ a^2 \cos^2(\theta)} \right] dt^2 -\frac{4mra \sin^2(\theta)}{r^2 + a^2 \cos^2(\theta)} dt d\phi \\ & + \left[ \frac{r^2 + a^2 \cos^2(\theta)}{r^2 - 2mr + a^2} \right] dr^2 + \left(r^2 + a^2 \cos^2(\theta) \right) d\theta^2 \\ & + \left[ r^2 + a^2 +\frac{2mra^2 \sin^2(\theta)}{r^2 + a^2 \cos^2(\theta)} \right] \sin^2(\theta) d\phi^2 \end{align} I want to find the proper transformation to the Cartesian form \begin{align} ds^2 &= -dt^2 + dx^2 + dy^2 + dz^2 \\ & + \frac{2mr^3}{r^4+ a^2 z^2} \left[ dt + \frac{r(xdx+ydy)}{a^2+ r^2} + \frac{a(ydx - xdy)}{a^2+ r^2} + \frac{z}{r}dz \right]^2 \end{align} First of all, I know how to deal with the case $m=0$, i.e., \begin{align} ds^2 &= -dt^2 + \frac{r^2+ a^2 \cos^2(\theta)}{r^2 + a^2} dr^2 + (r^2+ a^2 \cos^2(\theta)) d\theta^2 + (r^2+ a^2) \sin^2(\theta) d\phi^2 \end{align} In this case, taking spheroidal coordinates $x=\sqrt{r^2+ a^2} \sin(\theta) \cos(\phi), y=\sqrt{r^2+ a^2} \sin(\theta)\sin(\phi), z=r\cos(\theta)$, this reduces to \begin{align} ds^2 = - dt^2 + dx^2 + dy^2 + dz^2 \end{align} What about $m\neq 0$? In this case, after plugging the spheroidal coordinates into the metric, I obtain a different form. How to convert Boyer-Lindquist coordinates to Cartesian coordinates in the case of $m\neq 0$? I know it can be reached after some consecutive transformations: Cartesian -> original Kerr (Eddington) -> time shift -> Boyer-Lindquist... The reason I post this question is that I want to write the transformation rule $\frac{dx'^{\mu}}{dx^{\nu}}$ between these two coordinate systems.
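The $m=0$ reduction stated above can be verified symbolically by pulling back $dx^2+dy^2+dz^2$ through the spheroidal map (a sketch assuming sympy):

```python
import sympy as sp

r, th, ph, a = sp.symbols('r theta phi a', positive=True)

# Spheroidal coordinates for the m = 0 case
X = sp.sqrt(r**2 + a**2) * sp.sin(th) * sp.cos(ph)
Y = sp.sqrt(r**2 + a**2) * sp.sin(th) * sp.sin(ph)
Z = r * sp.cos(th)

coords = (r, th, ph)
Jac = sp.Matrix([[sp.diff(f, c) for c in coords] for f in (X, Y, Z)])

# Pullback of the flat spatial metric dx^2 + dy^2 + dz^2
g = (Jac.T * Jac).applyfunc(sp.simplify)
```

The resulting diagonal components are exactly the $dr^2$, $d\theta^2$ and $d\phi^2$ coefficients of the $m=0$ line element, with vanishing cross terms.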
The relation between the two coordinate systems is given by \begin{align} & d\tilde{t} = dt + \frac{2mr}{r^2+a^2 -2mr} dr \\ &dx= \sin(\theta)\left( \cos(\bar{\phi}) - \frac{a \left( r \sin(\bar{\phi}) + a \cos(\bar{\phi}) \right)}{r^2+a^2 -2mr} \right)dr \\ &\qquad \qquad + \cos(\theta) \left( r \cos(\bar{\phi}) - a \sin(\bar{\phi}) \right) d\theta - \left( r\sin(\bar{\phi}) + a \cos(\bar{\phi}) \right) \sin(\theta)d \phi \\ & dy =\sin(\theta) \left( \sin(\bar{\phi}) + \frac{a \left( r \cos(\bar{\phi}) - a \sin(\bar{\phi}) \right)}{r^2+ a^2 -2mr} \right)dr \\ & \qquad \qquad + \cos(\theta) \left( r \sin(\bar{\phi}) + a \cos(\bar{\phi}) \right) d\theta + \sin(\theta) \left(r\cos(\bar{\phi}) - a \sin(\bar{\phi}) \right) d {\phi} \\ & dz = \cos(\theta) dr -r\sin(\theta) d\theta \end{align} where $\bar{\phi} = \phi + \int\frac{a}{r^2+a^2 -2mr} dr$, and with this we obtain the desired result.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/599094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A confusion in the Derivation of Lorentz Transformation My doubt is in equations (1) and (2). Aren't $x$, $y$ and $z$ also radii? EDIT Thank you guys for trying to give a wonderful explanation, but I figured out the answer myself, and it was just my silly interpretation. I thought that the value of $ct$ on the $x$ axis would be equal to $ct$, forgetting that the value is given by a perpendicular and not by an arc (of the sphere). I was thinking that where the sphere touches the $x$ axis, that is the value of $ct$, but it isn't. The value of $ct$ for $x$ is given by a point on the axis that lies perpendicular to the axis.
Think of the coordinates as a single object: $\mathbf{x}=\left(x,\,y,\,z\right)$. A point $P$ located at $\mathbf{P}=\left(a,\,b,\,c\right)$ would look like this: Now what is the distance between the origin ($\mathbf{x}=(0,\,0,\,0)$) and $\mathbf{P}$? For this we need to use the Pythagorean theorem: the square of the diagonal is equal to the sum of the squares of the sides (e.g., $d^2=x^2+y^2+z^2$). In 2 dimensions, this is done by $$a^2+b^2=c^2 \leftrightarrow x^2+y^2=r^2$$ But if we wanted to find the distance from some point that isn't the origin, we use $$r^2=\left(x-x_0\right)^2+\left(y-y_0\right)^2$$ where $x,\,y$ are the positions of the point and $x_0,\,y_0$ the reference position. In the subsequent discussion, $x_0=y_0=0$. In 3D, we can actually do the above 2D slice twice. Solve for $r$ in the $x$-$y$ plane and then solve for $d$ in the $r$-$z$ plane (using $x=a$, $y=b$, and $z=c$): $$ r^2=a^2+b^2\to r=\sqrt{a^2+b^2} $$ $$ d^2=r^2+c^2\to d^2=\left(\sqrt{a^2+b^2}\right)^2+c^2=a^2+b^2+c^2 $$ We could also have done this in the $y$-$z$ plane first: $$ r^2=b^2+c^2\to r=\sqrt{b^2+c^2} \\ d^2=r^2+a^2\to d^2=\left(\sqrt{b^2+c^2}\right)^2+a^2=a^2+b^2+c^2 $$ so clearly the ordering of it doesn't matter, we get $d=\sqrt{a^2+b^2+c^2}$ as the distance from the origin to the point $P$ when doing it both ways. Next, what happens if we let $y=b\to y=-b$? Since we take the square of $y$, we get $y^2=\left(-b\right)^2=+b^2$ which doesn't change anything, our distance is still $d=\sqrt{a^2+b^2+c^2}$. The same thing happens if we swap the other values for their negatives. That means that there are at least 8 points $P$ that have identical distances from the origin. Let's try using some values, just to get an idea here. Suppose we let $\mathbf{P}=\left(1,2,2\right)$. 
The distance $d$ is then $$d=\sqrt{1^2+2^2+2^2}=\sqrt{1+4+4}=\sqrt{9}=3$$ If we consider a sphere with constant radius (i.e., $d$ does not change), then if we change $x=1\to x=0$ we must have that $$\sqrt{y^2+z^2}=3$$ Obviously $y=z=3=d$ is not a solution because $$\sqrt{3^2+3^2}=\sqrt{9+9}=\sqrt{18}>4>d$$ Equally as well, $x=y=z=3=d$ is not a valid solution because $$\sqrt{3^2+3^2+3^2}=\sqrt{9+9+9}=\sqrt{27}>5>d$$ So clearly $x,\,y,\,z$ are not also the radius. Individually, the values could be the radius, but that would require the other two values being zero: $$\sqrt{3^2+0^2+0^2}=\sqrt{9+0+0}=\sqrt{9}=3=d$$
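The arithmetic in this example is easy to check in a couple of lines (plain Python):

```python
import math

def dist(p):
    """Euclidean distance of point p from the origin."""
    return math.sqrt(sum(c * c for c in p))

d = dist((1, 2, 2))        # the point P above; d = 3
```

Trying `dist((3, 3, 0))` and `dist((3, 3, 3))` reproduces the inequalities in the text: both exceed the radius 3, so the individual coordinates cannot all equal the radius.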
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can one prove the equation for a system of lenses? Let's say we have a system of 2 lenses which are placed at a non-negligible distance $L$ from each other. How can I prove, without using geometrical construction and principal planes, that $$\frac 1f = \frac {1}{f_1} + \frac {1}{f_2} -\frac{L}{f_1 f_2} $$ describes the system's focal length? For example, with $ L \ll f_1,\, f_2 $ : $\dfrac{1}{f_1} = \dfrac{1}{o_1} + \dfrac{1}{i_1}$ where $o_1$ is the object distance and $i_1$ the image distance of the first lens. And of course, $\dfrac{1}{f_2} = \dfrac{1}{o_2} + \dfrac{1}{i_2}$ . Now you can say the image distance of the first lens is the object distance of the second lens (paying attention to conventions about what is negative and what is positive): $o_2 = -i_1$ Using this relation one obtains $\dfrac{1}{o_1} + \dfrac{1}{i_2} = \dfrac{1}{f_1} + \dfrac{1}{f_2} = \dfrac{1}{f}$ I already tried saying $o_2 = -i_1 + L$ but this didn't lead anywhere.
Use ray transfer matrices. The system matrix is given by $$ S=\begin{pmatrix} 1 & 0 \\ -\frac{1}{f_2} & 1 \\ \end{pmatrix} \begin{pmatrix} 1 & L \\ 0 & 1 \\ \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_1} & 1 \\ \end{pmatrix} $$ The back focal length $BFL$ is determined by the point where all the rays in an incoming collimated beam meet after the second lens: $$ \begin{pmatrix} 1 & BFL \\ 0 & 1 \\ \end{pmatrix} S \begin{pmatrix} z \\ 0 \\ \end{pmatrix} = \begin{pmatrix} 0 \\ \text{X} \\ \end{pmatrix} $$ for all $z$. Solve for $BFL$ and then find $f$ by adding the distance of the second lens to the origin of coordinates. The origin is not arbitrary, it is chosen to a specific point on purpose in order to give a neat symmetric expression.
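As a cross-check (not part of the original answer), the matrix product can be evaluated numerically: for a system matrix $S=\begin{pmatrix} A & B \\ C & D \end{pmatrix}$ in this ray-vector convention, the lower-left element is $C=-1/f$, which should reproduce the combination formula from the question. A sketch in plain Python:

```python
def matmul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def system_matrix(f1, f2, L):
    lens1 = [[1, 0], [-1 / f1, 1]]
    gap   = [[1, L], [0, 1]]
    lens2 = [[1, 0], [-1 / f2, 1]]
    # Rays hit lens1 first, so it sits rightmost in the product
    return matmul(lens2, matmul(gap, lens1))

f1, f2, L = 0.10, 0.25, 0.04
S = system_matrix(f1, f2, L)
power = -S[1][0]                          # lower-left element is -1/f
formula = 1 / f1 + 1 / f2 - L / (f1 * f2)
assert abs(power - formula) < 1e-9
```

Multiplying the matrices symbolically gives $-C = 1/f_1 + 1/f_2 - L/(f_1 f_2)$ exactly, which is the formula in question.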
{ "language": "en", "url": "https://physics.stackexchange.com/questions/199966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Angles in a static equilibrium I have three masses $\left(F_\alpha, \, F_\beta , \, \text{and} \,F_g \right)$ with 2 pulleys, and a wind variable, in static equilibrium. I have already calculated the corresponding forces for the 3 masses by multiplying them by $9.81 \, \frac{\mathrm{m}}{\mathrm{s}^{2}}$ (gravitational acceleration). $$ \begin{alignat}{7} & F_{\text{wind}} && ~=~ & 60 \phantom{.0} & \, \mathrm{N} \\[2px] & F_{\alpha} && = & 313.9 & \, \mathrm{N} \\[2px] & F_{\beta} && = & 619 \phantom{.0} & \, \mathrm{N} \\[2px] & F_{g} && = & 882.9 & \, \mathrm{N} \\ \end{alignat} $$ I'm required to find the angles for vectors $F_\alpha$ and $F_\beta$ as shown in the equations below (which are derived from the vectors' individual $x$ and $y$ components): $$ \begin{alignat}{7} F_α \, \cos{\left( α \right)} & \, + \, F_β \, \cos {\left( β \right)} && + F_\text{wind} & ~=~ 0 \tag{1} \\[2px] F_α \, \sin{\left(α\right)} & \, + \, F_β \, \sin {\left( β \right)} && - F_g & ~=~ 0 \tag{2} \end{alignat} $$ Replacing these with actual values: $$ -313.9\cos α + 619\cos β + 60 = 0 \tag{1} $$ $$ 313.9\sin α + 619\sin β - 882.9 = 0 \tag{2} $$ How do I find the angles $α$ and $β$ from these two equations? Edit 2: I have re-organized the equations and squared them, as such: $$ \cos^2 α = \frac{619^2 \cos^2 β + 60^2 + 2\left(619\cos β\right)\left(60\right)}{313.9^2} $$ $$ \sin^2 α = \frac{619^2 \sin^2 β + 882.9^2 - 2\left(619\sin β\right)\left(882.9\right)}{313.9^2} $$
You can eliminate the angle $\alpha$ from the equations with the trick the other answers give you *. But then you will end up with an equation of the form $$ A \cos \beta + B \sin \beta + C = 0$$ To solve this do the following transformation $$ \left. \begin{align} A & = R \cos \psi \\ B & = R \sin \psi \end{align} \right\} \begin{aligned} R & = \sqrt{A^2+B^2} \\ \psi & = \arctan\left( \frac{B}{A} \right) \end{aligned} $$ The equation is now $$ \cos\beta\cos\psi + \sin\beta \sin\psi = \cos(\beta-\psi) = -\frac{C}{R} $$ which is solved for $$ \begin{split} \beta & = \arccos\left( -\frac{C}{R} \right) + \psi \\ & = \arccos\left( -\frac{C}{\sqrt{A^2+B^2}} \right) + \arctan\left( \frac{B}{A} \right)\end{split}$$ footnotes: * *make the equations of this form $$\begin{align} \cos \alpha & = a \cos\beta+c_x \\ \sin \alpha & = -a \sin \beta + c_y \end{align}$$ *square both sides and add them for $$ 1 = 2 a c_x \cos\beta - 2 a c_y \sin\beta + c_x^2 + c_y^2 +a^2 $$ $$ \left(2 a c_x\right) \cos\beta + \left(- 2 a c_y\right) \sin\beta + \left(c_x^2 + c_y^2 +a^2-1\right) = 0 $$ *Match the $A$, $B$ and $C$ coefficients. *Once $\beta$ is known, then divide the two equations above for $$ \tan \alpha = \frac{c_y - a \sin\beta}{c_x + a \cos\beta} $$ Edit 1 Here is the actual solution: $$\left. \begin{align} -313.9 \cos(\alpha) + 619 \cos(\beta) + 60 & = 0 \\ 313.9 \sin(\alpha) + 619 \sin(\beta) - 882.9 & = 0 \end{align} \right\} \begin{aligned} 313.9 \cos(\alpha) & = 619 \cos(\beta) + 60 \\ 313.9 \sin(\alpha) & = - 619 \sin(\beta) + 882.9 \end{aligned} $$ Square and add the two equations (on each side) to get $$ \left.
98533.21 = 74280 \cos(\beta) - 1093030.2 \sin(\beta) + 1166273.41 \right\}\\ 74280 \cos(\beta) - 1093030.2 \sin(\beta) + 1067740.2 = 0 $$ $$ \begin{aligned} \beta & = \arccos\left( -\frac{C}{\sqrt{A^2+B^2}} \right) + \arctan\left( \frac{B}{A} \right) \\ A & = 74280\\ B & = -1093030.2 \\ C & = 1067740.2\\ \beta &= 1.41284652 = 80.9501426° \\ \end{aligned} $$ Finally, $\alpha$ can be solved with the 2nd equation: $$ \sin(\alpha) = 2.81267919-1.97196559 \sin(\beta) $$ $$ \alpha = 1.04567064 = 59.9125144° $$ Now you can plug the values of $\alpha$ and $\beta$ into the two original equations to confirm it balances the forces.
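The closed-form recipe above can be verified numerically. The sketch below recomputes $A$, $B$, $C$ from the given forces and uses `math.atan2` for $\psi$, which handles the quadrant automatically (more robustly than a bare arctan):

```python
import math

Fa, Fb, Fw, Fg = 313.9, 619.0, 60.0, 882.9

# Coefficients from squaring and adding the two equilibrium equations
A = 2 * Fb * Fw                        #  74280
B = -2 * Fb * Fg                       # -1093030.2
C = Fb**2 + Fw**2 + Fg**2 - Fa**2      #  1067740.2

R = math.hypot(A, B)
beta = math.acos(-C / R) + math.atan2(B, A)

# Back-substitute into the two original equations to recover alpha
alpha = math.atan2((Fg - Fb * math.sin(beta)) / Fa,
                   (Fw + Fb * math.cos(beta)) / Fa)

# Both force balances close to round-off
assert abs(-Fa * math.cos(alpha) + Fb * math.cos(beta) + Fw) < 1e-6
assert abs(Fa * math.sin(alpha) + Fb * math.sin(beta) - Fg) < 1e-6
# Matches the quoted answers 80.95 and 59.91 degrees
assert abs(math.degrees(beta) - 80.9501426) < 1e-3
assert abs(math.degrees(alpha) - 59.9125144) < 1e-3
```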
{ "language": "en", "url": "https://physics.stackexchange.com/questions/240532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Oscillation of non-uniform plank on parallel springs A plank of length $l$ and mass $m$ is placed on two parallel springs, each with spring constant $k$ and equidistant from the plank's horizontal center of gravity. When the plank is displaced from its equilibrium position, it's expected to oscillate at this frequency: $$ f = \frac{1}{2\pi}\sqrt{\frac{2k}{m}} $$ Now suppose the plank isn't of uniform density, such that at equilibrium the force acting on spring 1 is $amg$ and that on spring 2 is $bmg$, where $a + b = 1$. At what frequency will the plank oscillate when it's displaced from its equilibrium position? Is it possible to obtain an estimate of its horizontal center of gravity at equilibrium (or rather, constants $a$, $b$) by observing the oscillations, say by taking a Fourier transform of the plank's oscillation waveform?
First you decide on the independent degrees of freedom. I choose the center of the plank and I track the translation $x$ and rotation $\theta$ from the equilibrium conditions. The displacements of each spring are: $$\begin{align} x_1 & = x - \frac{\ell}{2} \theta \\ x_2 & = x + \frac{\ell}{2} \theta \end{align} $$ Each spring force is: $$\left. \begin{align} F_1 & = a m g - k x_1 \\ F_2 & = b m g - k x_2 \end{align} \right\} \begin{aligned} F_1 & = a m g + \frac{\ell}{2} k \theta - k x \\ F_2 & = b m g - \frac{\ell}{2} k \theta - k x \end{aligned}$$ The displacement of the center of mass is $x_C = x + \frac{\ell}{2} (b-a) \theta$ and hence the center of mass acceleration (needed for the equations of motion) is $ \ddot{x}_C = \ddot{x} + \frac{\ell}{2} (b-a) \ddot{ \theta}$. The EOM are: $$\begin{align} m \ddot{x}_C & = F_1 + F_2 - m g \\ I_C \ddot{\theta} & = a \ell F_1 - b \ell F_2 \end{align} $$ where $I_C$ is the mass moment of inertia about the center of mass. The above is solved by $x(t) = X \sin \omega t$ and $\theta(t) = \Theta \sin \omega t$. This produces the system of equations $$ \begin{align} 2 k X & = m \omega^2 \left( X + \Theta \frac{\ell}{2} (b-a) \right) \\ k X \ell (a-b) + k \frac{\ell^2}{2} \Theta (a+b) & = I_C \Theta \omega^2 \end{align} $$ This has two solutions for frequency $\omega_T$ and $\omega_R$ for translational and rotational modes of vibration. The two degrees of freedom are coupled with $$ -\frac{X}{\Theta} = \frac{\frac{\ell}{2} m \omega^2 (a-b)}{ (2k-m \omega^2)}$$ The left hand side of this equation is the center of rotation position (distance) from the center of the plank. Pure translation occurs when $\omega^2 = 2 \frac{k}{m}$ and pure rotation when $a=b$.
$$\begin{align} \omega^2_T & = \frac{k}{m} \left(1+ \frac{m \ell^2 (1-2 a b)}{2 I_C} + \sqrt{ 1 + \left( \frac{m \ell^2 (1-2 a b)}{2 I_C} \right)^2 - \frac{2 a b m \ell^2}{I_C} } \right) \\ \omega^2_R & = \frac{k}{m} \left(1+ \frac{m \ell^2 (1-2 a b)}{2 I_C} - \sqrt{ 1 + \left( \frac{m \ell^2 (1-2 a b)}{2 I_C} \right)^2 - \frac{2 a b m \ell^2}{I_C} } \right) \end{align} $$ Edit 1 To estimate $a$ and $b$ from the resulting motion, maybe you can solve the equations of motion using the normalized frequency $n^2 = \frac{\omega^2}{2 k/m}$ and center of rotation location $r=-\frac{X}{\Theta}$. $$ \begin{align} a &= \frac{2 I_C n^2}{m \ell^2} + \frac{ r (1-n^2) (2 r+\ell)}{n^2 \ell^2} \\ b &= \frac{2 I_C n^2}{m \ell^2} + \frac{ r (1-n^2) (2 r-\ell)}{n^2 \ell^2} \end{align} $$
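The two-mode structure can be sanity-checked numerically by posing the amplitude equations as a 2x2 generalized eigenproblem in $\lambda=\omega^2$ and solving $\det(K-\lambda M)=0$ with the quadratic formula. The sketch below rederives $M$ and $K$ from scratch (springs at $\pm\ell/2$, COM offset $(\ell/2)(b-a)$, lever arms $-b\ell$ and $a\ell$ about the COM), so the matrices are my own convention rather than a transcription of the equations above; for the uniform plank both give the same two modes:

```python
import math

def plank_freqs(k, m, ell, I_c, a, b):
    """Natural frequencies (rad/s) of a rigid plank on two springs at +-ell/2;
    coordinates are COM translation x and rotation theta."""
    M = [[m, 0.0], [0.0, I_c]]
    K = [[2 * k, (a - b) * ell * k],
         [(a - b) * ell * k, (a**2 + b**2) * ell**2 * k]]
    # det(K - lam*M) = 0  ->  quadratic  qa*lam^2 + qb*lam + qc = 0
    qa = M[0][0] * M[1][1]
    qb = -(K[0][0] * M[1][1] + K[1][1] * M[0][0])
    qc = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    disc = math.sqrt(qb * qb - 4 * qa * qc)
    lams = sorted([(-qb - disc) / (2 * qa), (-qb + disc) / (2 * qa)])
    return [math.sqrt(l) for l in lams]

# Uniform plank (a = b = 1/2, I_c = m l^2 / 12): the modes decouple into
# pure translation omega^2 = 2k/m and pure rotation omega^2 = k l^2 / (2 I_c)
w1, w2 = plank_freqs(k=1.0, m=1.0, ell=1.0, I_c=1.0 / 12, a=0.5, b=0.5)
assert abs(w1**2 - 2.0) < 1e-9
assert abs(w2**2 - 6.0) < 1e-9
```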
{ "language": "en", "url": "https://physics.stackexchange.com/questions/242757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the 4-dimensional matrix representation of rotation operator? The rotation operator is $$\exp\left(-i\frac{\theta}{2}\boldsymbol{J}\cdot\hat{\boldsymbol{n}}\right).$$ * *If $\boldsymbol{\sigma}$ is the Pauli matrix, the operator can be written in matrix form as $$\boldsymbol{1}\cos(\phi/2)-i\boldsymbol{\sigma}\cdot\hat{\boldsymbol{n}}\sin(\phi/2).$$ *But when $J$ is the spin-3/2 operator, $J$ is 4-dimensional. Is there a matrix representation of the operator $\exp\left(-i\frac{\theta}{2}\boldsymbol{J}\cdot\hat{\boldsymbol{n}}\right)$? I find that $\{J_x,J_y\}\neq0$ for spin-3/2, unlike the Pauli matrices. *What is the case when $J$ is the spin-1 operator?
There are several ways to proceed. First, be aware that Mathematica has a built-in function called WignerD and this function will give you the matrix element of a rotation matrix. I have found that this function does not seem to use the same parametrization as Varshalovich, Dmitriĭ Aleksandrovich, Anatolij Nikolaevič Moskalev, and Valerii Kel'manovich Khersonskii. Quantum theory of angular momentum. 1988., which in my opinion remains the bible. It seems you need to use the negative of all the angles in WignerD to obtain the formulas of Varshalovich. There are various closed-form expressions for \begin{align} d^j_{mm'}(\beta)&:= \langle jm\vert R_y(\beta)\vert jm'\rangle \, \end{align} such as \begin{align} d^j_{mm'}(\beta)&=(-1)^{j-m'} \sqrt{(j+m)!(j-m)!(j+m')!(j-m')!}\\ &\times \sum_k (-1)^k \frac{\left(\cos\frac{1}{2}\beta\right)^{m+m'+2k} \left(\sin\frac{1}{2}\beta\right)^{2j-m-m'-2k}} {k!(j-m-k)!(j-m'-k)!(m+m'+k)!}\, . \end{align} There is also an expression in terms of Jacobi polynomials: \begin{align} d^j_{mm'}(\beta)&=\xi_{mm'} \sqrt{\frac{s!(s+\mu+\nu)!}{(s+\mu)!(s+\nu)!}} \left(\sin\textstyle\frac{1}{2}\beta\right)^\mu \left(\cos\textstyle\frac{1}{2}\beta\right)^\nu P_s^{\mu,\nu}(\cos\beta) \end{align} with $$ \mu=\vert m-m'\vert\, ,\quad \nu=\vert m+m'\vert\, ,\quad s=j-\frac{1}{2}(\mu+\nu) $$ and the phase $$ \xi_{mm'}=\left\{\begin{array}{cc} 1&\quad \hbox{if } m'\ge m\, \\ (-1)^{m'-m}&\quad \hbox{if } m'<m\, .\end{array}\right. $$ Finally, there is a method based on recursion relations. This is described in detail in Wolters, G. F. "Simple method for the explicit calculation of d-functions." Nuclear Physics B 18.2 (1970): 625-653.
Starting with $$ \langle jm\vert R_y(\beta) L_x\vert jm'\rangle $$ one can obtain a recursion relation \begin{align} &\sqrt{(j-m')(j+m'+1)}d^j_{m,m'+1}(\beta) +\sqrt{(j+m')(j-m'+1)}d^j_{m,m'-1}(\beta)\\ &\qquad = 2\hbox{cosec}(\beta)(m'\cos\beta-m)d^j_{mm'}(\beta) \end{align} The function \begin{align} d^j_{mj}(\beta)&={2j \choose j+m}^{1/2} \left(\cos\textstyle\frac{1}{2}\beta\right)^{j+m} \left(\sin\textstyle\frac{1}{2}\beta\right)^{j-m}\, ,\\ \end{align} can be used as a seed for the recursion. As a matrix with elements $d^{3/2}_{mm'}(\beta)$, the explicit results for $j=3/2$ is \begin{align} &R_y(\beta)=\\ &{\scriptsize\left( \begin{array}{cccc} \cos ^3\left(\frac{\beta }{2}\right) & -\sqrt{3} \cos ^2\left(\frac{\beta }{2}\right) \sin \left(\frac{\beta }{2}\right) & \sqrt{3} \cos \left(\frac{\beta }{2}\right) \sin ^2\left(\frac{\beta }{2}\right) & -\sin ^3\left(\frac{\beta }{2}\right) \\ \sqrt{3} \cos ^2\left(\frac{\beta }{2}\right) \sin \left(\frac{\beta }{2}\right) & \frac{1}{2} \cos \left(\frac{\beta }{2}\right) (3 \cos (\beta )-1) & -\frac{1}{2} (3 \cos (\beta )+1) \sin \left(\frac{\beta }{2}\right) & \sqrt{3} \cos \left(\frac{\beta }{2}\right) \sin ^2\left(\frac{\beta }{2}\right) \\ \sqrt{3} \cos \left(\frac{\beta }{2}\right) \sin ^2\left(\frac{\beta }{2}\right) & \frac{1}{2} (3 \cos (\beta )+1) \sin \left(\frac{\beta }{2}\right) & \frac{1}{2} \cos \left(\frac{\beta }{2}\right) (3 \cos (\beta )-1) & -\sqrt{3} \cos ^2\left(\frac{\beta }{2}\right) \sin \left(\frac{\beta }{2}\right) \\ \sin ^3\left(\frac{\beta }{2}\right) & \sqrt{3} \cos \left(\frac{\beta }{2}\right) \sin ^2\left(\frac{\beta }{2}\right) & \sqrt{3} \cos ^2\left(\frac{\beta }{2}\right) \sin \left(\frac{\beta }{2}\right) & \cos ^3\left(\frac{\beta }{2}\right) \\ \end{array} \right)} \end{align} with the columns and rows ordered as $3/2,1/2,-1/2,-3/2$.
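The closed-form matrix can be cross-checked against a direct computation of $e^{-i\beta J_y}$ for $j=3/2$. The sketch below builds $J_y$ from the ladder-operator matrix elements and exponentiates with a plain Taylor series (to stay dependency-free; fine for small matrices), using the same basis ordering $m = 3/2, 1/2, -1/2, -3/2$ as above. Since $-i\beta J_y = -\beta(J_+-J_-)/2$ is real, no complex arithmetic is needed:

```python
import math

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(a, terms=40):
    # Plain Taylor series exp(A) = sum_n A^n / n!  (fine for small ||A||)
    n = len(a)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for t in range(1, terms):
        power = matmul(power, a)
        result = [[result[i][j] + power[i][j] / math.factorial(t)
                   for j in range(n)] for i in range(n)]
    return result

# <m+1|J+|m> = sqrt(j(j+1) - m(m+1)) for j = 3/2, basis m = 3/2 ... -3/2
s3 = math.sqrt(3.0)
Jp = [[0, s3, 0, 0], [0, 0, 2, 0], [0, 0, 0, s3], [0, 0, 0, 0]]
beta = 0.7
# -i*beta*J_y = -beta*(J+ - J-)/2, with J- the transpose of J+
A = [[-beta * (Jp[i][j] - Jp[j][i]) / 2 for j in range(4)] for i in range(4)]
d = expm(A)

c, s = math.cos(beta / 2), math.sin(beta / 2)
assert abs(d[0][0] - c**3) < 1e-9                # d_{3/2,3/2}
assert abs(d[0][1] - (-s3 * c**2 * s)) < 1e-9    # d_{3/2,1/2}
assert abs(d[3][0] - s**3) < 1e-9                # d_{-3/2,3/2}
```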
{ "language": "en", "url": "https://physics.stackexchange.com/questions/313973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
A small error in Landau & Lifschitz "Mechanics" (3rd ed.)? I think I found a small error in Landau & Lifschitz "Mechanics" (3rd ed.). In section 28 (Anharmonic oscillations), they are discussing how to solve the following anharmonic oscillator problem: $$\ddot{x}+\omega_0^2 x = -\alpha x^2-\beta x^3 \tag{28.9}$$ They show how one can solve the anharmonic oscillator problem perturbatively by expanding the solution in powers of the initial amplitude $a$ and supposing that the fundamental frequency is also shifted, the shift being given by another power series in the amplitude. The ansatz for the lowest-order solution is the following: $$x^{(1)}=a\cos \omega t \tag{28.10}$$ One can grind out the higher order equations and arrive at the equation for the 3rd order perturbation: $$\ddot{x}^{(3)}+\omega_0^2 x^{(3)}=-2\alpha x^{(1)}x^{(2)}-\beta x^{(1)3}+2\omega_0 \omega ^{(2)} x^{(1)}\tag{pg. 87}$$ Now, first I want to mention that in that equation we have $\beta x^{(1)3}$ which we can write out as $$\begin{align}\beta x^{(1)3}&=\beta a^3\cos^3\omega t\\ &=a^3\left(\color{red}{\frac{1}{4}\beta} \cos 3\omega t \color{red}{+ \frac{3}{4}\beta} \cos \omega t\right) \end{align}$$ But in the next line the following full expansion is given (where I highlight the error in red): $$\begin{align} \ddot{x}^{(3)}+\omega_0^2 x^{(3)}=a^3&\left[\color{red}{\frac{1}{4}\beta}-\frac{\alpha^2}{6\omega_0^2}\right]\cos 3\omega t +\\ &+a\left[2\omega_0\omega^{(2)}+\frac{5a^2\alpha^2}{6\omega_0^2}\color{red}{- \frac{3}{4}a^2\beta}\right]\cos\omega t \tag{pg. 87}\end{align}$$ So it looks like a spurious minus sign has entered, which carries on throughout the rest of the section. [Question]: Is this actually a mistake? I can't find any errata for the book so I can't validate this.
I think there are more problems with this solution, especially with the perturbation theory. For this kind of problem one has to use the Poincaré-Lindstedt method [1] [2]. The small parameters in the differential equation above are $\alpha$ and $\beta$, so we can write down (the same can be done for $\beta$): $$\ddot{x} + \omega_0^2x = -\alpha\left(x^2+\frac{\beta}{\alpha}x^3 \right).$$ Here $\alpha$ will be our small parameter and we are looking for a solution of the form $$x(t) = A\cos(\omega t) + \alpha x_1(t) + \alpha^2x_2(t) + \ldots$$ $$\omega = \omega_0 + \alpha\omega_1 + \alpha^2\omega_2 + \ldots$$ with initial conditions $x(0)=A$ and $\dot{x}(0)=0$. First of all, we change variables $\tau = \omega t$, so that we get $$\omega^2 x''+ \omega^2_0 x = -\alpha\left(x^2 + \frac{\beta}{\alpha}x^3 \right),$$ where $x''$ denotes the second derivative with respect to $\tau$. We insert our series for $x$ and $\omega$ and extract terms of the same order in $\alpha$: $$\left[ \omega_0^2 + 2\omega_0\omega_1 \alpha + \alpha^2 (\omega_1^2 + 2\omega_0\omega_2) + \ldots \right] \cdot \left[-A\cos\tau + \alpha x''_1 + \alpha^2 x''_2 + \ldots \right] + \omega_0^2 \left[A\cos\tau + \alpha x_1 + \alpha^2 x_2 + \ldots \right] = -\alpha \left[A^2\cos^2\tau + 2\alpha x_1 A\cos\tau + \frac{\beta}{\alpha}\left(A^3\cos^3\tau + 3\alpha x_1 A^2\cos^2\tau \right) + \ldots \right]$$ 1st order terms $$\begin{align} x''_1 + x_1 & = \frac{2A\omega_1}{\omega_0}\cos\tau - \frac{A^2}{\omega^2_0}\cos^2\tau - \frac{\beta}{\alpha}\frac{A^3}{\omega_0^2}\cos^3\tau \\ & = \cos\tau\left[\frac{2A\omega_1}{\omega_0}-\frac{\beta}{\alpha}\frac{3 A^3}{4\omega_0^2}\right] - \frac{1}{2}\frac{A^2}{\omega_0^2} - \frac{1}{2}\frac{A^2}{\omega_0^2}\cos 2\tau - \frac{1}{2}\frac{\beta}{\alpha}\frac{A^3}{\omega_0^2}\cos 3\tau, \end{align} $$ with $x_1(0)=0$ and $x'_1(0)=0$.
If we want to kill the resonant term (the one with $\cos\tau$) we need to set $$\omega_1 = \frac{\beta}{\alpha}\frac{3}{8}\frac{A^2}{\omega_0}.$$ Then $x_1(\tau)$ solves to $$x_1(\tau) = \frac{A^2}{\omega_0^2}\left[-\frac{1}{2} + \left(\frac{1}{3} - \frac{1}{16}\frac{\beta}{\alpha}A\right)\cos\tau + \frac{1}{6}\cos 2\tau + A\frac{1}{16}\frac{\beta}{\alpha}\cos 3\tau \right].$$ 2nd order terms $$x_2'' + x_2 = \frac{A}{\omega_0^2}(\omega_1^2 + 2\omega_0\omega_2)\cos\tau - 2\frac{\omega_1}{\omega_0}x''_1 - x_1\frac{2A}{\omega_0^2}\cos\tau- x_1\frac{\beta}{\alpha}\frac{3A^2}{\omega_0^2}\cos^2\tau,$$ with $x_2(0)=0$ and $x'_2(0)=0$. This is a real mess which I am not going to post here, but $$\omega_2 = \frac{A^2}{\omega_0^3}\left[\frac{A}{4}\frac{\beta}{\alpha}-\frac{3A^2}{64}\left(\frac{\beta}{\alpha}\right)^2-\frac{3A^2}{64}-\frac{5}{12}\right]$$
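The quoted $x_1(\tau)$ can be verified directly: apply $\partial_\tau^2 + 1$ term by term (each $\cos n\tau$ maps to $(1-n^2)\cos n\tau$) and compare with the non-secular right-hand side. A small numerical sketch, where the values of $A$, $\omega_0$ and $r=\beta/\alpha$ are arbitrary test inputs:

```python
import math

A, w0, r = 1.3, 2.0, 0.6  # arbitrary test values; r stands for beta/alpha

def x1(t):
    # First-order correction quoted above (t plays the role of tau)
    return (A**2 / w0**2) * (-0.5
                             + (1/3 - r * A / 16) * math.cos(t)
                             + math.cos(2 * t) / 6
                             + r * A * math.cos(3 * t) / 16)

def x1pp(t):
    # Exact second derivative, term by term (cos n t -> -n^2 cos n t)
    return (A**2 / w0**2) * (-(1/3 - r * A / 16) * math.cos(t)
                             - 4 * math.cos(2 * t) / 6
                             - 9 * r * A * math.cos(3 * t) / 16)

def rhs(t):
    # Non-secular part of the first-order equation; the cos(tau) term was
    # removed by the choice omega_1 = (beta/alpha) * 3 A^2 / (8 w0)
    return (-0.5 * A**2 / w0**2
            - 0.5 * (A**2 / w0**2) * math.cos(2 * t)
            - 0.5 * r * (A**3 / w0**2) * math.cos(3 * t))

for t in [0.0, 0.3, 1.1, 2.7]:
    assert abs(x1pp(t) + x1(t) - rhs(t)) < 1e-12
assert abs(x1(0.0)) < 1e-12  # initial condition x1(0) = 0
```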
{ "language": "en", "url": "https://physics.stackexchange.com/questions/367204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Covariant Maxwell equations invariant under parity transformation I tried to prove that the Maxwell equations are invariant under parity transformations. Therefore I used the covariant formulation of the Maxwell equations \begin{align} \partial_{\nu}F^{\nu\mu} &= \frac{4\pi}{c}j^{\mu}\\ \partial_{\nu}\tilde{F}^{\nu\mu} &= 0 \end{align} and the parity transformation given by \begin{align} P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \end{align} Regarding only the first equation $\partial_{\nu}F^{\nu\mu} = \frac{4\pi}{c}j^{\mu}$ we have \begin{align} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \cdot \begin{pmatrix} \frac{1}{c}\frac{\partial}{\partial \text{t}} \\ \vec{\nabla} \end{pmatrix} = \begin{pmatrix} \frac{1}{c}\frac{\partial}{\partial \text{t}} \\ -\vec{\nabla} \end{pmatrix} \end{align} as well as \begin{align} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \cdot \begin{pmatrix} c\rho \\ \vec{j} \end{pmatrix} = \begin{pmatrix} c\rho \\ -\vec{j} \end{pmatrix} \end{align} and \begin{align} P \cdot F^{\nu\mu} &= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}\begin{pmatrix} 0 & -E^1 & -E^2 & -E^3 \\ E^1 & 0 & -B^3 & B^2 \\ E^2 & B^3 & 0 & -B^1 \\ E^3 & -B^2 & B^1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -E^1 & -E^2 & -E^3 \\ -E^1 & 0 & B^3 & -B^2 \\ -E^2 & -B^3 & 0 & B^1 \\ -E^3 & B^2 & -B^1 & 0 \end{pmatrix} \end{align} Based on these calculations, is there a way to see that the Maxwell equations are invariant under parity transformations and if so how do I see it?
As current is a vector, it is not invariant under parity. Therefore neither is Ampère's law.
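To make this concrete (not part of the original answer): a tensor transforms with two factors of $P$, i.e. $F'^{\mu\nu} = P^\mu{}_\alpha P^\nu{}_\beta F^{\alpha\beta}$, matrix form $F' = PFP^T$, not the single left-multiplication from the question. With the question's $F^{\nu\mu}$ (where $F^{0i}=-E^i$) this flips the electric entries and leaves the magnetic block unchanged, i.e. $\vec E \to -\vec E$ (polar vector) and $\vec B \to \vec B$ (axial vector); combined with $\vec j \to -\vec j$ and $\vec\nabla \to -\vec\nabla$ the equations keep their form. A quick check in Python (field values arbitrary):

```python
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 4.0, 5.0, 6.0

# Field-strength tensor in the question's convention: F[0][i] = -E_i
F = [[0, -Ex, -Ey, -Ez],
     [Ex, 0, -Bz, By],
     [Ey, Bz, 0, -Bx],
     [Ez, -By, Bx, 0]]
P = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

Fp = matmul(matmul(P, F), P)  # P is symmetric, so F' = P F P^T = P F P

# Time-space entries change sign (F'[0][i] = +E_i means E' = -E),
# while the purely spatial block, i.e. B, is unchanged
assert Fp[0][1] == Ex and Fp[0][2] == Ey and Fp[0][3] == Ez
assert Fp[1][2] == -Bz and Fp[2][3] == -Bx and Fp[3][1] == -By
```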
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Derivation of the Electromagnetic Stress-Energy Tensor in Flat Space-time I am working on deriving the electromagnetic stress energy tensor using the electromagnetic tensor in the $(-, +, +, +)$ sign convention. However, I have hit a snag and cannot figure out where I have gone wrong. $$ F^{\mu \alpha}= \begin{bmatrix} 0 & \frac{E_{x}}{c} & \frac{E_{y}}{c} & \frac{E_{z}}{c} \\ -\frac{E_{x}}{c} & 0 & B_{z} & -B_{y} \\ -\frac{E_{y}}{c} & -B_{z} & 0 & B_{x} \\ -\frac{E_{z}}{c} & B_{y} & -B_{x} & 0 \\ \end{bmatrix} $$ $$ F^{\mu}_{\alpha} = \begin{bmatrix} 0 & \frac{E_{x}}{c} & \frac{E_{y}}{c} & \frac{E_{z}}{c} \\ \frac{E_{x}}{c} & 0 & B_{z} & -B_{y} \\ \frac{E_{y}}{c} & -B_{z} & 0 & B_{x} \\ \frac{E_{z}}{c} & B_{y} & -B_{x} & 0 \\ \end{bmatrix} $$ $$ T^{\mu\nu} = \frac{1}{\mu_0}(F^{\mu \alpha}F^{v}_{\alpha} - \frac{1}{4}\eta^{\mu\nu}F_{\alpha\beta}F^{\alpha \beta})$$ Doing matrix multiplication of the matrices $F^{\mu \alpha}$ and $F^{\nu}_{\alpha}$ from above gives $$ F^{\mu \alpha}F^{\nu}_{\alpha} = \begin{bmatrix} (\frac{E}{c})^{2} & -B_{z}\frac{E_{y}}{c} + B_{y}\frac{E_{z}}{c} & \frac{E_{x}}{c}B_{z} - \frac{E_{z}}{c}B_{x} & -\frac{E_{x}}{c}B_{y} + \frac{E_{y}}{c}B_{x} \\ B_{z}\frac{E_{y}}{c} - B_{y}\frac{E_{z}}{c} & -B_{z}^{2} - B_{y}^{2} - (\frac{E_{x}}{c})^{2} & -\frac{E_{x}}{c}\frac{E_{y}}{c} + B_{y}B_{x} & \frac{E_{x}}{c}\frac{E_{z}}{c} + B_{z}B_{x} \\ -B_{z}\frac{E_{x}}{c} + B_{x}\frac{E_{z}}{c} & -\frac{E_{y}}{c}\frac{E_{x}}{c} + B_{x}B_{y} & -(\frac{E_{y}}{c})^{2}-B_{z}^{2}-B_{x}^{2} & -\frac{E_{y}}{c}\frac{E_{z}}{c} + B_{z}B_{y} \\ B_{y}\frac{E_{x}}{c} - B_{x}\frac{E_{y}}{c} & -\frac{E_{z}}{c}\frac{E_{x}}{c} + B_{x}B_{z} & -\frac{E_{z}}{c}\frac{E_{y}}{c} + B_{y}B_{z} & -(\frac{E_{z}}{c})^{2}-B_{y}^{2}-B_{x}^{2} \\ \end{bmatrix} $$ Subtracting the $\frac{1}{4}\eta^{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}= \frac{1}{4}\eta^{\mu\nu}[2(B^{2} - (\frac{E}{c})^{2})]$ and multiplying by $\frac{1}{\mu_{0}}$ gives $$ T^{\mu\nu}=\frac{1}{\mu_{0}} \begin{bmatrix} 
(\frac{E}{c})^{2} + \frac{1}{2}(B^{2} - (\frac{E}{c})^{2}) & -B_{z}\frac{E_{y}}{c} + B_{y}\frac{E_{z}}{c} & \frac{E_{x}}{c}B_{z} - \frac{E_{z}}{c}B_{x} & -\frac{E_{x}}{c}B_{y} + \frac{E_{y}}{c}B_{x} \\ B_{z}\frac{E_{y}}{c} - B_{y}\frac{E_{z}}{c} & -B_{z}^{2} - B_{y}^{2} - (\frac{E_{x}}{c})^{2} - \frac{1}{2}(B^{2} - (\frac{E}{c})^{2}) & -\frac{E_{x}}{c}\frac{E_{y}}{c} + B_{y}B_{x} & \frac{E_{x}}{c}\frac{E_{z}}{c} + B_{z}B_{x} \\ -B_{z}\frac{E_{x}}{c} + B_{x}\frac{E_{z}}{c} & -\frac{E_{y}}{c}\frac{E_{x}}{c} + B_{x}B_{y} & -(\frac{E_{y}}{c})^{2}-B_{z}^{2}-B_{x}^{2} - \frac{1}{2}(B^{2} - (\frac{E}{c})^{2}) & -\frac{E_{y}}{c}\frac{E_{z}}{c} + B_{z}B_{y} \\ B_{y}\frac{E_{x}}{c} - B_{x}\frac{E_{y}}{c} & -\frac{E_{z}}{c}\frac{E_{x}}{c} + B_{x}B_{z} & -\frac{E_{z}}{c}\frac{E_{y}}{c} + B_{y}B_{z} & -(\frac{E_{z}}{c})^{2}-B_{y}^{2}-B_{x}^{2} - \frac{1}{2}(B^{2} - (\frac{E}{c})^{2}) \\ \end{bmatrix} $$ However, the textbook definition of the electromagnetic stress energy tensor is: $$ T^{\mu\nu} = \begin{bmatrix} \frac{1}{2}(\epsilon_{0} |E|^{2} + \frac{1}{\mu_{0}}|B|^{2}) & \frac{S_{x}}{c} & \frac{S_{y}}{c} & \frac{S_{z}}{c} \\ \frac{S_{x}}{c} & -\sigma_{xx} & -\sigma_{xy} & -\sigma_{xz} \\ \frac{S_{y}}{c} & -\sigma_{yx} & -\sigma_{yy} & -\sigma_{yz} \\ \frac{S_{z}}{c} & -\sigma_{zx} & -\sigma_{zy} & -\sigma_{zz} \\ \end{bmatrix} $$ with $\vec{S} = \frac{1}{\mu_{0}}(\vec{E} \times \vec{B})$ and $\sigma_{ij} = \epsilon_{0} E_{i}E_{j} + \frac{1}{\mu_{0}}B_{i}B_{j} - \frac{1}{2}(\epsilon_{0} E^{2} + \frac{1}{\mu_{0}}B^{2})\delta_{ij} $ So, I know my $T^{01} = T^{10}$, $T^{02} = T^{20}$, and $T^{03} = T^{30}$ but they do not. They are of opposite signs. What did I do incorrectly?
Ok...I have done a little more work and I think I have it. If $$ F^{\mu \alpha}= \begin{bmatrix} 0 & \frac{E_{x}}{c} & \frac{E_{y}}{c} & \frac{E_{z}}{c} \\ -\frac{E_{x}}{c} & 0 & B_{z} & -B_{y} \\ -\frac{E_{y}}{c} & -B_{z} & 0 & B_{x} \\ -\frac{E_{z}}{c} & B_{y} & -B_{x} & 0 \\ \end{bmatrix} $$ and $$ T^{\mu\nu} = \frac{1}{\mu_0}(F^{\mu \alpha}F^{\nu}{}_{\alpha} - \frac{1}{4}\eta^{\mu\nu}F_{\alpha\beta}F^{\alpha \beta})$$ then letting c = 1 $$ F^{\mu \alpha}F^{\nu}{}_{\alpha} = \eta_{\beta\alpha}F^{\mu\alpha}F^{\nu\beta} $$ If we let $\mu = 0,1,2,3$, $\nu = 0, 1, 2, 3$, and $\alpha=\beta$, summing over repeated indices gives, for $\mu = 0$ and $\nu = 0$, $$ \eta_{\beta \alpha}F^{0\alpha}F^{0\beta} = \eta_{00}F^{00}F^{00}+\eta_{11}F^{01}F^{01}+ \eta_{22}F^{02}F^{02} + \eta_{33}F^{03}F^{03} = E_{x}^{2} + E_{y}^{2} + E_{z}^{2} $$ for $\mu = 1$ and $\nu = 0$ $$ \eta_{\beta \alpha}F^{1\alpha}F^{0\beta} = \eta_{00}F^{10}F^{00}+\eta_{11}F^{11}F^{01}+ \eta_{22}F^{12}F^{02} + \eta_{33}F^{13}F^{03} = B_{z}E_{y} - B_{y}E_{z} $$ and so on... Is this correct? When looking at the indices for the metric will $\alpha=\beta$ always be true when working in general relativity?
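The index gymnastics can be cross-checked numerically with $\eta=\mathrm{diag}(-1,1,1,1)$ and $c=\mu_0=1$: build $T^{\mu\nu} = F^{\mu\alpha}F^\nu{}_\alpha - \frac14\eta^{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}$ by brute-force summation and compare $T^{00}$ with the energy density and $T^{0i}$ with the Poynting components (field values arbitrary). A sketch in Python:

```python
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 0.5, -1.5, 2.5

# Contravariant F in the (-,+,+,+) convention used above, c = 1
F = [[0, Ex, Ey, Ez],
     [-Ex, 0, Bz, -By],
     [-Ey, -Bz, 0, Bx],
     [-Ez, By, -Bx, 0]]
eta = [-1.0, 1.0, 1.0, 1.0]  # diagonal of the metric

def T(mu, nu):
    # F^{mu a} F^nu_a = sum_a eta_aa F^{mu a} F^{nu a} for a diagonal metric
    first = sum(eta[a] * F[mu][a] * F[nu][a] for a in range(4))
    # F_{ab} F^{ab} = sum eta_aa eta_bb (F^{ab})^2 = 2 (B^2 - E^2)
    scalar = sum(eta[a] * eta[b] * F[a][b] ** 2
                 for a in range(4) for b in range(4))
    emn = eta[mu] if mu == nu else 0.0
    return first - 0.25 * emn * scalar

E2 = Ex**2 + Ey**2 + Ez**2
B2 = Bx**2 + By**2 + Bz**2
S = (Ey * Bz - Ez * By, Ez * Bx - Ex * Bz, Ex * By - Ey * Bx)

assert abs(T(0, 0) - 0.5 * (E2 + B2)) < 1e-12   # energy density
for i in range(3):
    assert abs(T(0, i + 1) - S[i]) < 1e-12      # Poynting vector, right sign
assert abs(sum(eta[m] * T(m, m) for m in range(4))) < 1e-9  # traceless
```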
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Current density $\mathbf{J}$ of particle with magnetic dipole moment $\mathbf{m}$ I'm solving some exercises on magnetostatics, and encountered this one, on which I'm having some trouble. Given a particle of magnetic dipole moment $\mathbf{m}$, show that its current density is given as $\mathbf{J} = \left( \mathbf{m} \times \nabla \right) \delta \left( \mathbf{r} - \mathbf{r}_{0} \right)$, where $\mathbf{r}_{0}$ is the vector position of the particle. I started from the stationary Ampere's law, given that the magnetic field $\mathbf{B}_{m}$ is due only to the magnetic dipole moment $$ \nabla \times \mathbf{B}_{m} \left( \mathbf{r} - \mathbf{r}_{0} \right) = \mu_{0} \; \mathbf{J} \left( \mathbf{r} \right)$$ So that $$ \boxed{\mathbf{J} \left( \mathbf{r} \right) = \frac{1}{\mu_{0}} \nabla \times \mathbf{B}_{m} \left( \mathbf{r} - \mathbf{r}_{0} \right)} \; \; \; \; (1)$$ Now, in terms of the magnetic dipole moment, the corresponding magnetic field is given as $$ \boxed{\mathbf{B}_{m} \left( \mathbf{r} - \mathbf{r}_{0} \right) = \frac{\mu_{0}}{4\pi} \left[ \frac{\mathbf{m}\cdot\left(\mathbf{r} - \mathbf{r}_{0}\right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{5}} \left( \mathbf{r} - \mathbf{r}_{0} \right) - \frac{\mathbf{m}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}} \right]} \; \; \; \; (2) $$ Replacing, then, (2) into (1) $$ \mathbf{J}\left( \mathbf{r} \right) = \frac{1}{\mu_{0}} \nabla \times \left( \frac{\mu_{0}}{4\pi} \left[ \frac{\mathbf{m}\cdot\left(\mathbf{r} - \mathbf{r}_{0}\right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{5}} \left( \mathbf{r} - \mathbf{r}_{0} \right) - \frac{\mathbf{m}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}} \right] \right) $$ $$ = \frac{1}{4 \pi} \left[ \nabla \times \left( \frac{\mathbf{m}\cdot\left(\mathbf{r} - \mathbf{r}_{0}\right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{5}} \left( \mathbf{r} - \mathbf{r}_{0} \right) \right) - \nabla \times \left( \frac{\mathbf{m}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}}
\right) \right] $$ $$ = \frac{1}{4 \pi} \left[ \nabla \left( \frac{\mathbf{m}\cdot\left(\mathbf{r} - \mathbf{r}_{0}\right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{5}} \right) \times \left( \mathbf{r} - \mathbf{r}_{0} \right) + \frac{\mathbf{m}\cdot\left(\mathbf{r} - \mathbf{r}_{0}\right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{5}} \nabla \times \left( \mathbf{r} - \mathbf{r}_{0} \right) \right. $$ $$ \left. - \nabla \left( \frac{\mathbf{1}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}} \right) \times \mathbf{m} - \frac{\mathbf{1}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}} \nabla \times \mathbf{m} \right] $$ Now, as $\nabla \times \left( \mathbf{r} - \mathbf{r}_{0} \right)=0$ and $\nabla \times \mathbf{m}=0$ $$ \boxed{\mathbf{J}\left( \mathbf{r} \right) = \frac{1}{4 \pi} \left[ \nabla \left( \frac{\mathbf{m}\cdot\left(\mathbf{r} - \mathbf{r}_{0}\right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{5}} \right) \times \left( \mathbf{r} - \mathbf{r}_{0} \right) - \nabla \left( \frac{\mathbf{1}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}} \right) \times \mathbf{m} \right]} \; \; \; \; (3) $$ Now, by components $$ \left( \nabla \left[ \frac{\mathbf{m}\cdot\left(\mathbf{r} - \mathbf{r}_{0}\right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{5}} \right] \right)_{i} = \partial_{i} \left( \frac{m_{j} (x_{j} - x^{0}_{j}) }{ [(x_{l} - x^{0}_{l})(x_{l} - x^{0}_{l})]^{5/2} } \right)$$ $$ = m_{j} \left\{ \frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} \partial_{i} (x_{j} - x^{0}_{j}) + (x_{j} - x^{0}_{j}) \partial_{i} [(x_{l} - x^{0}_{l})(x_{l} - x^{0}_{l})]^{-5/2} \right\} $$ $$ = m_{j} \left\{ \frac{\delta_{ij}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} - \frac{5}{2}\frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {7}}2(x_{l} - x^{0}_{l})\delta_{il}(x_{j} - x^{0}_{j}) \right\} $$ $$ = m_{j} \left\{ \frac{\delta_{ij}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} - 5\frac{(x_{i} - x^{0}_{i})(x_{j} - x^{0}_{j})}{\left| \mathbf{r} - 
\mathbf{r}_{0} \right|^ {7}} \right\} $$ $$ = \frac{m_{i}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} - 5 \frac{m_{j} (x_{j} - x^{0}_{j})}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {7}}(x_{i} - x^{0}_{i}) $$ $$ \boxed{\left( \nabla \left[ \frac{\mathbf{m}\cdot\left(\mathbf{r} - \mathbf{r}_{0}\right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{5}} \right] \right)_{i} = \left[ \frac{\mathbf{m}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} - 5 \frac{\mathbf{m} \cdot (\mathbf{r} - \mathbf{r}_{0}) }{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {7}}(\mathbf{r} - \mathbf{r}_{0}) \right]_{i}} \; \; \; \; (4) $$ Likewise $$ \left( \nabla \left[ \frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}} \right] \right)_{i} = \partial_{i} [(x_{j} - x_{j}^{0})(x_{j} - x_{j}^{0})]^{-3/2} $$ $$ = -\frac{3}{2} \frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}}2(x_{j} - x_{j}^{0})\delta_{ij} $$ $$ = -3 \frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} (x_{i} - x_{i}^{0}) $$ $$ \boxed{\left( \nabla \left[ \frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}} \right] \right)_{i} = \left( -3 \frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} (\mathbf{r} - \mathbf{r}_{0}) \right)_{i}} \; \; \; \; (5) $$ Replacing (4) and (5) into (3) $$ \mathbf{J}\left( \mathbf{r} \right) = \frac{1}{4 \pi} \left[ \left( \frac{\mathbf{m}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} - 5 \frac{\mathbf{m} \cdot (\mathbf{r} - \mathbf{r}_{0}) }{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {7}}(\mathbf{r} - \mathbf{r}_{0}) \right) \times \left( \mathbf{r} - \mathbf{r}_{0} \right) - \left( -3 \frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} (\mathbf{r} - \mathbf{r}_{0}) \right) \times \mathbf{m} \right] $$ And, because $\left( \mathbf{r} - \mathbf{r}_{0} \right) \times \left( \mathbf{r} - \mathbf{r}_{0} \right) = 0$ $$ \mathbf{J}\left( \mathbf{r} \right) = \frac{1}{4 \pi} \left[ \left( \frac{\mathbf{m}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} \right) 
\times \left( \mathbf{r} - \mathbf{r}_{0} \right) + 3 \frac{1}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} (\mathbf{r} - \mathbf{r}_{0}) \times \mathbf{m} \right] $$ $$ \mathbf{J}\left( \mathbf{r} \right) = \frac{1}{4 \pi} \left(-2 \left( \frac{\mathbf{m}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} \right) \times \left( \mathbf{r} - \mathbf{r}_{0} \right) \right)$$ $$ \boxed{\mathbf{J}\left( \mathbf{r} \right) = -\frac{1}{2 \pi} \mathbf{m} \times \frac{\left( \mathbf{r} - \mathbf{r}_{0} \right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}}} $$ Here is where I'm stuck. I'm aware of the identity $$ \nabla \cdot \left[ \frac{\mathbf{r} - \mathbf{r}_{0}}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^{3}} \right] = 4 \pi \delta (\mathbf{r} - \mathbf{r}_{0})$$ So, assuming that the answer presented in the statement is true (and that my calculations were correct), then it must be true that $$ -\frac{1}{2 \pi} \frac{\left( \mathbf{r} - \mathbf{r}_{0} \right)}{\left| \mathbf{r} - \mathbf{r}_{0} \right|^ {5}} = \nabla \delta \left( \mathbf{r} - \mathbf{r}_{0} \right) $$ but, if this is indeed true, I'm not sure how to prove this. Thanks in advance for any help!
A Herculean effort. Well done. It looks correct though I had only a brief look. Griffiths (Introduction to Electrodynamics) has a quite nice section on 3d delta functions in Sec. 1.5 The trick with delta functions is that they only make sense inside the integral. You want to analyse $\boldsymbol{\nabla}.\left(\frac{\mathbf{r}-\mathbf{r}_0}{\left|\mathbf{r}-\mathbf{r}_0\right|^3}\right)$. First, simplify it to: $$\boldsymbol{\nabla}.\left(\frac{\mathbf{r}-\mathbf{r}_0}{\left|\mathbf{r}-\mathbf{r}_0\right|^3}\right)=\boldsymbol{\nabla}.\left(\frac{\boldsymbol{r}}{\left|\boldsymbol{r}\right|^3}\right)\bigg{|}_{\boldsymbol{r}=\mathbf{r}-\mathbf{r}_0}$$ Note, that since $\mathbf{r}_0$ is constant, there is no difficulty in switching from $\boldsymbol{\nabla}$ with respect to $\mathbf{r}$, to $\boldsymbol{\nabla}$ with respect to $\boldsymbol{r}$. Next, evaluate expression for $\boldsymbol{r}\neq\mathbf{0}$, you will find $$\boldsymbol{\nabla}.\left(\frac{\boldsymbol{r}}{\left|\boldsymbol{r}\right|^3}\right)=0 \mbox{ for } \boldsymbol{r}\neq\mathbf{0}$$ This is a good start, if you are aiming to end up with a delta function. Next stick your candidate into an integral. Let $V$ be a sphere with radius $R$, and centre at $\mathbf{r}_0$ $$\int_V d^3 r \boldsymbol{\nabla}.\left(\frac{\boldsymbol{r}}{\left|\boldsymbol{r}\right|^3}\right)=\oint_{\partial V} d^2\, r\boldsymbol{\hat{r}}.\frac{\boldsymbol{r}}{\left|\boldsymbol{r}\right|^3}=\oint_{\partial V} d^2\, r \frac{1}{r^2}=\int_\mbox{solid angle} R^2 d^2\Omega \frac{1}{R^2}=4\pi$$ So you have a situation where your quantity is zero everywhere except one point, and at that one point, if you integrate your quantity, it gives a finite constant value = that's a delta function. You can do it with more rigour, but basically all you need is here, so: $$\boldsymbol{\nabla}.\left(\frac{\boldsymbol{r}}{\left|\boldsymbol{r}\right|^3}\right)=4\pi\delta\left(\boldsymbol{r}\right)$$ Having said it, your approach is strange. 
You are using the magnetic field to find the magnetic dipole. I would go in the other direction. I would define the magnetic dipole to be either a delta-function of magnetization, or the current due to a point-like loop of current (yet another way to go is to consider a low-order series expansion of the relevant Green's function - I will ignore it here). The second approach is more fun, so let us go with it. Assume the current is circulating in the anti-clockwise direction around the z-axis, in the $z=0$ plane, and that the loop radius is $R$. The current density is then (using cylindrical coordinates $\rho,\phi,z$): $$\mathbf{J}=\alpha\delta\left(\rho-R\right)\delta\left(z\right)\boldsymbol{\hat{\phi}}$$ If the current through the loop is $I$, by integrating in the $\mathcal{P}=\{y=0,x>0\}$ plane we can find: $$I=\int_\mathcal{P}dx\,dz\,\mathbf{\hat{y}}.\mathbf{J}=\alpha \int_0^\infty d\rho\, \delta\left(\rho-R\right)=\alpha$$ Thus $\alpha=I$. As before, delta functions only make sense inside an integral, so let us stick our expression inside an integral with an arbitrary well-behaved vector field $\mathbf{V}$: $$\int_V d^3 r \mathbf{V}.\mathbf{J}=I\int dz\, \int_0^\infty d\rho\int_0^{2\pi}\rho\,d\phi V_\phi \delta\left(\rho-R\right)\delta\left(z\right) = I R\int_0^{2\pi}d\phi \,V_\phi\bigg|_{\rho=R,\,z=0}=I \oint_{\partial \mathcal{D}} dl \mathbf{\hat{l}}.\mathbf{V}=I \int_{\mathcal{D}} d^2\rho \mathbf{\hat{z}}.\boldsymbol{\nabla}\times\mathbf{V}$$ In the penultimate step I have introduced a disk region ($\mathcal{D}$) of radius $R$, lying in the $z=0$ plane at the origin, and interpreted the integral as the integral around the boundary of that disk. We can then use the inverse of the Stokes theorem (which works in simply-connected space) and deduce the expression for the integral.
We can now take the limit $R\to 0$ to get: $$\lim_{R\to 0}\int_V d^3 r \mathbf{V}.\mathbf{J}=I\pi R^2 \mathbf{\hat{z}}.\left(\boldsymbol{\nabla}\times\mathbf{V}\right)\bigg|_{\mathbf{r}=0}$$ All that remains to show, by integration by parts, is that: $$\int_V d^3 r \mathbf{V}.\boldsymbol{\nabla}\times\mathbf{\hat{z}}\delta\left(\mathbf{r}\right)=\mathbf{\hat{z}}.\left(\boldsymbol{\nabla}\times\mathbf{V}\right)\bigg|_{\mathbf{r}=0}$$ Thus, setting $\mathbf{m}=\mathbf{\hat{z}}I\pi R^2$: $$\mathbf{J}=\boldsymbol{\nabla}\times\mathbf{m}\delta\left(\mathbf{r}\right)$$ where $\mathbf{m}$ is a constant, so it can be taken outside the derivative. There still seems to be a sign mismatch; because I have seen such problems before, I am inclined to think that my sign is correct, but I could be wrong.
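Both facts used in this derivation — the divergence vanishing away from the origin and the radius-independent flux of $4\pi$ — can be confirmed numerically. Here is a small pure-Python sketch (standard library only; the test point, radii and grid resolutions are arbitrary choices):

```python
import math

def F(x, y, z):
    """The vector field (r - r0) / |r - r0|^3, with r0 at the origin."""
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

# 1) The divergence vanishes away from the origin (central differences).
h = 1e-5
x0, y0, z0 = 0.7, -0.3, 0.5
div = ((F(x0 + h, y0, z0)[0] - F(x0 - h, y0, z0)[0]) +
       (F(x0, y0 + h, z0)[1] - F(x0, y0 - h, z0)[1]) +
       (F(x0, y0, z0 + h)[2] - F(x0, y0, z0 - h)[2])) / (2 * h)
print(div)   # ~ 0

# 2) The flux through a sphere of radius R equals 4*pi for every R.
def flux(R, n=200):
    total = 0.0
    dth, dph = math.pi / n, 2 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            ph = (j + 0.5) * dph
            p = (R * math.sin(th) * math.cos(ph),
                 R * math.sin(th) * math.sin(ph),
                 R * math.cos(th))
            f = F(*p)
            rhat_dot_f = (p[0] * f[0] + p[1] * f[1] + p[2] * f[2]) / R
            total += rhat_dot_f * R * R * math.sin(th) * dth * dph
    return total

print(flux(1.0), flux(3.0), 4 * math.pi)   # all three agree
```

The printed divergence sits at finite-difference noise level, and the flux matches $4\pi$ for both radii — exactly the behaviour summarized by the distributional identity $\boldsymbol{\nabla}.\left(\boldsymbol{r}/\left|\boldsymbol{r}\right|^3\right)=4\pi\delta\left(\boldsymbol{r}\right)$.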
Source: https://physics.stackexchange.com/questions/536989
Doubt in a property of the Laplace equation One of the Laplace equation's properties says that maxima and minima can only occur at the boundaries. Okay, so let's take 2 positive charges, one at the origin and the other a distance $d$ apart on the $x$-axis. So the potential between them would be somewhat like what I have drawn in the image. Now let's take the region between $x=d/3$ and $x=2d/3$ and apply the Laplace equation there (as there is no charge in this region), so the potential's maxima and minima should occur at the boundary. But its maximum occurs at $x=d/2$?
(Assuming that you're referring to the $3$D case and not the $1$D case). In your example, the potential at the midpoint is not an extremum. Let's place two unit charges at $x = \pm d/2$, and look at the resulting potential $V(x,y)$ (taking $V(r \to \infty) = 0$). At $x = 0$, the potential is $V(0,0) = \frac{2}{d} + \frac{2}{d} = \frac{4}{d}$ Along the $x$ axis (close to $x = 0$), we have \begin{align} V(x=\varepsilon, 0) &= \frac{1}{\frac{d}{2} + \varepsilon} + \frac{1}{\frac{d}{2} - \varepsilon}\\ &= \frac{2}{d} \left(\frac{1}{1 + \frac{2 \varepsilon}{d}} + \frac{1}{1 - \frac{2 \varepsilon}{d}}\right)\\ &= \frac{4}{d} \left(1 + 4 \frac{\varepsilon^2}{d^2} + o\left(\frac{\varepsilon^2}{d^2}\right) \right) > V(0, 0). \end{align} But along the $y$ axis, we have: \begin{align} V(x=0, y=\varepsilon) &= \frac{1}{\sqrt{\frac{d^2}{4} + \varepsilon^2}} + \frac{1}{\sqrt{\frac{d^2}{4} + \varepsilon^2}}\\ &= \frac{4}{d} \frac{1}{\sqrt{1 + \frac{4 \varepsilon^2}{d^2}}}\\ &= \frac{4}{d} \left(1 - 2 \frac{\varepsilon^2}{d^2} + o\left(\frac{\varepsilon^2}{d^2}\right) \right) < V(0, 0). \end{align} So while the partial derivatives along $x$ and $y$ are indeed zero, this corresponds neither to a maximum nor a minimum of $V$ but rather to a saddle point.
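The saddle-point claim is easy to confirm numerically. A plain-Python sketch (Gaussian units; the unit charges, $d=1$ and the step $\varepsilon$ are arbitrary illustrative choices):

```python
import math

def V(x, y, d=1.0):
    """Potential of two unit charges at (+-d/2, 0), Gaussian units."""
    return 1.0 / math.hypot(x - d / 2, y) + 1.0 / math.hypot(x + d / 2, y)

eps = 1e-3
V0 = V(0.0, 0.0)          # equals 4/d
print(V(eps, 0.0) - V0)   # positive: V grows along x
print(V(0.0, eps) - V0)   # negative: V falls along y
```

The midpoint is therefore a saddle of $V$, not a maximum or a minimum, consistent with the expansions above.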
Source: https://physics.stackexchange.com/questions/575641
Computing Young's modulus of an ideal elastic substance using its thermodynamic equation of state The equation of state of an ideal elastic substance is: \begin{equation} \mathcal{F} = KT \left[\left(\frac{L}{L_0}\right) - \left(\frac{L}{L_0}\right)^{-2}\right] \tag{1} \end{equation} Where $K$ is a constant and $L_0$ (the value of $L$ at zero tension) is a function of temperature only ($L_0(T)$). Show that the isothermal Young's modulus is given by \begin{equation} Y = \frac{\mathcal{F}}{A} + \frac{3KTL_0^2}{AL^2} \tag{2} \end{equation} Exercise 2.7.a of: Heat and Thermodynamics, 7th Revised edition. Mark W. Zemansky; Richard H. Dittman If: \begin{equation} Y = \frac{L}{A}\left(\frac{\partial \mathcal{F}}{\partial L}\right)_T \tag{3} \end{equation} we can expand equation 1 as: \begin{equation} \begin{aligned} \mathcal{F} & = KT \left[\left(\frac{L}{L_0}\right)-\left(\frac{L}{L_0}\right)^{-2}\right] \\ & = \frac{KTL}{L_0} - \frac{KTL_0^2}{L^2} \end{aligned} \tag{4} \end{equation} Substituting 4 in 3, solving the derivatives and reducing terms: \begin{equation} \begin{aligned} Y & = \frac{L}{A} \left(\frac{\partial}{\partial L}\right)_T \left[\frac{KTL}{L_0} - \frac{KTL_0^2}{L^2} \right] \\ & = \frac{L}{A} \left[ \left(\frac{\partial}{\partial L} \frac{KTL}{L_0}\right)_T - \left(\frac{\partial}{\partial L}\frac{KTL_0^2}{L^2}\right)_T \right] \\ & = \frac{L}{A} \left[\frac{KT}{L_0} + \frac{2KL_0^2T}{L^3} \right] \\ & = \frac{L}{A} \left[\frac{KTL^3 + 2KL_0^3T}{L^3 L_0} \right] \\ & = \frac{KTL^3 + 2KL_0^3T}{AL^2 L_0} \\ & = KT\frac{L^3 +2L_0^3}{AL^2 L_0} \\ & = KT\left[ \frac{L}{AL_0} + \frac{2L_0^2}{AL^2} \right] \\ & = \frac{KTL}{AL_0} + \frac{2KTL_0^2}{AL^2} \\ \end{aligned} \end{equation} Which gives: \begin{equation} \boxed{ \frac{KTL}{AL_0} + \frac{2KTL_0^2}{AL^2} \neq \frac{\mathcal{F}}{A} + \frac{3KTL_0^2}{AL^2} } \end{equation} Then just solving the derivative $\partial \mathcal{F}/\partial L$ of eq. 3 with eq. 1
leads me to a path where I miss the term $\mathcal{F}/A$ in eq. 2. That's where I think I'm missing something in the theory, which is what I'm looking for. Even if I can demonstrate that: \begin{equation} \mathcal{F} = \frac{KTL}{L_0} \end{equation} then: \begin{equation} \boxed{ \frac{\mathcal{F}}{A} + \frac{2KTL_0^2}{AL^2} \neq \frac{\mathcal{F}}{A} + \frac{3KTL_0^2}{AL^2} } \end{equation}
Chemomechanics answered your question; I am merely doing the algebra: You say $$ \boxed{ \frac{k T L}{A L_0} + \frac{2 k T L_0^2}{A L^2} \neq \frac{\mathcal F}{A}+\frac{3 k T L_0^2}{A L^2} } $$ but this is not true: from your Eq (1) $$ \frac{\mathcal F}{A} =\frac{kT L}{AL_0}-\frac{k T L_0^2}{A L^2} $$ Then $$\boxed{\boxed{ \frac{\mathcal F}{A} + \frac{3 k T L_0^2}{A L^2} =\frac{kT L}{AL_0}+ \frac{2 k T L_0^2}{A L^2} }} $$
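The same algebra can be confirmed by differentiating the equation of state numerically — a plain-Python sketch, where the values of $K$, $A$, $T$, $L_0$ and $L$ are arbitrary:

```python
K, A, T, L0 = 2.0, 0.5, 300.0, 1.0   # arbitrary illustrative values

def tension(L):
    """Equation of state: F = K T [(L/L0) - (L/L0)^(-2)]."""
    return K * T * ((L / L0) - (L / L0) ** (-2))

def young_numeric(L, h=1e-6):
    """Y = (L/A) dF/dL at constant T, via a central difference."""
    return (L / A) * (tension(L + h) - tension(L - h)) / (2 * h)

def young_closed(L):
    """The book's result: Y = F/A + 3 K T L0^2 / (A L^2)."""
    return tension(L) / A + 3 * K * T * L0 ** 2 / (A * L ** 2)

L = 1.3
print(young_numeric(L), young_closed(L))   # the two values agree
```

Any choice of $L > 0$ gives the same agreement, since the identity holds term by term.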
Source: https://physics.stackexchange.com/questions/724962
The two causes for the factor 2 in the Coriolis effect While reading this document on the Coriolis effect, http://empslocal.ex.ac.uk/people/staff/gv219/classics.d/Persson98.pdf, I saw the following sentence Two kinematic effects each contribute half of the Coriolis acceleration: relative velocity and the turning of the frame of reference. And this is the reason why the Coriolis term has that factor $2$. Unfortunately it does not specify anything about these two causes. Does anyone have some further explanation for how "relative velocity" and "turning of the frame" actually give rise to the Coriolis term?
Take a free particle moving on a plane in polar coordinates $$ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} r \cos \theta \\ r \sin \theta \end{pmatrix}$$ The velocity is found from the chain rule, with clear separation for radial and tangential components: $$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{vmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{vmatrix} \begin{pmatrix} \dot{r} \\ r \dot{\theta}\end{pmatrix} $$ The acceleration is found again by differentiation $$ \begin{pmatrix} \ddot{x} \\ \ddot{y} \end{pmatrix} = \frac{{\rm d}\begin{vmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{vmatrix}}{{\rm d}t} \begin{pmatrix} \dot{r} \\ r \dot{\theta}\end{pmatrix} + \begin{vmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{vmatrix} \frac{{\rm d}\begin{pmatrix} \dot{r} \\ r \dot{\theta}\end{pmatrix}}{{\rm d}t}$$ $$ =\begin{vmatrix} 0 & -\dot{\theta} \\ \dot{\theta} & 0 \end{vmatrix}\begin{vmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{vmatrix}\begin{pmatrix} \dot{r} \\ r \dot{\theta}\end{pmatrix} + \begin{vmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{vmatrix}\begin{pmatrix} \ddot{r} \\ r \ddot{\theta}+\dot{r} \dot{\theta}\end{pmatrix} $$ $$ = \begin{vmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{vmatrix} \left[\begin{vmatrix} 0 & -\dot{\theta} \\ \dot{\theta} & 0 \end{vmatrix}\begin{pmatrix} \dot{r} \\ r \dot{\theta}\end{pmatrix}+ \begin{pmatrix} \ddot{r} \\ r \ddot{\theta}+\dot{r} \dot{\theta}\end{pmatrix}\right] $$ The above is first a rotation matrix by $\theta$, then the effect of the rotation on the (local) velocity and finally the (local) acceleration. Notice in the radial direction the local acceleration is just $\ddot{r}$, and in the tangential direction it has two terms. One is Euler's acceleration $r \ddot{\theta}$ and the other 1/2 the coriolis term. 
This part is due to the change in direction of the radial velocity. $$ \begin{pmatrix} \ddot{x} \\ \ddot{y} \end{pmatrix} = ({\rm Rotation}) \left[ \begin{pmatrix} -r \dot{\theta}^2 \\ \dot{r} \dot{\theta} \end{pmatrix} + \begin{pmatrix} \ddot{r} \\ r \ddot{\theta} + \dot{r} \dot{\theta} \end{pmatrix} \right] $$ Now the first part $(-r \dot{\theta}^2, \dot{r} \dot{\theta} )$ contains the centrifugal acceleration in the radial direction and the change in orientation of the tangential acceleration, which is the other half of the Coriolis effect. But I find all this confusing. I'd rather look at a picture: The changes in the velocity vector in radial coordinates (where the center of rotation is in the $-x$ direction). The two $\dot{r} \dot{\theta}$ terms in the Coriolis term are from a) the turning of $\dot{r}$ and b) the extension of $r \dot{\theta}$.
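The polar decomposition of the acceleration — with its factor-2 Coriolis term — can be sanity-checked numerically: pick any smooth $r(t)$, $\theta(t)$, differentiate the Cartesian coordinates twice by finite differences, and compare with the rotated polar-form components. A plain-Python sketch with an arbitrary sample trajectory:

```python
import math

# An arbitrary smooth sample trajectory in polar form.
def r(t):  return 1.0 + 0.1 * t * t
def th(t): return 0.5 * t

def x(t): return r(t) * math.cos(th(t))
def y(t): return r(t) * math.sin(th(t))

t0, h = 1.0, 1e-3
# Cartesian acceleration by a second-order central difference.
ax = (x(t0 + h) - 2 * x(t0) + x(t0 - h)) / h ** 2
ay = (y(t0 + h) - 2 * y(t0) + y(t0 - h)) / h ** 2

# Polar-form acceleration for this trajectory (derivatives done by hand):
rd, rdd = 0.2 * t0, 0.2
thd, thdd = 0.5, 0.0
a_rad = rdd - r(t0) * thd ** 2          # radial part (with centrifugal term)
a_tan = r(t0) * thdd + 2 * rd * thd     # the factor 2 here is the Coriolis term
c, s = math.cos(th(t0)), math.sin(th(t0))
print(ax - (c * a_rad - s * a_tan))   # ~ 0
print(ay - (s * a_rad + c * a_tan))   # ~ 0
```

Dropping the factor 2 in `a_tan` makes the residuals clearly non-zero, which is a quick way to see that both $\dot{r}\dot{\theta}$ contributions are needed.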
Source: https://physics.stackexchange.com/questions/248850
Coupled Harmonic Oscillator - Solve by diagonalization I have a problem where I have two massive particles $M$ and one particle with mass $m<M$. The two particles on the outside are coupled with the one in the middle with two springs. The Hamiltonian for the system is given by: \begin{equation} H=\frac{p_1^2}{2M}+\frac{p_2^2}{2M}+\frac{p_3^2}{2m} + \frac{1}{2}k(x_3-x_1-d)^2+\frac{1}{2}k(x_2-x_3-d)^2 \end{equation} ($d$ is the length of the spring at equilibrium.) I have replaced \begin{equation} q_1 = x_1\\ q_2 = x_2-2d\\ q_3 = x_3 -d \end{equation} and rewrote the potential as a matrix equation: \begin{equation} \frac{k}{2}\vec{q}^T A\vec{q} \end{equation} with \begin{equation} A = \begin{pmatrix} 1 & 0 &-1\\ 0 & 1 & -1\\ -1 & -1& 1\\ \end{pmatrix}\qquad \vec{q} = \begin{pmatrix}q_1\\q_2\\q_3\end{pmatrix} \end{equation} I can find the diagonal representation: \begin{equation} A = \begin{pmatrix} 1 & 0 &0\\ 0 & 1+\sqrt{2} & 0\\ 0 & 0& 1-\sqrt{2}\\ \end{pmatrix} \end{equation} with the corresponding eigenvectors \begin{equation} \begin{pmatrix} -1\\1\\0 \end{pmatrix}\qquad \begin{pmatrix} -\frac{1}{\sqrt{2}}\\-\frac{1}{\sqrt{2}}\\1 \end{pmatrix}\qquad \begin{pmatrix} \frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}}\\1 \end{pmatrix} \end{equation} This probably means to switch to new coordinates \begin{equation} y_1 = -q_1+q_2\\ y_2 = -\frac{1}{\sqrt{2}}q_1-\frac{1}{\sqrt{2}}q_2+q_3\\ y_3 = \frac{1}{\sqrt{2}}q_1+\frac{1}{\sqrt{2}}q_2+q_3 \end{equation} But what do I do with the momentum operators? They should change accordingly, but I am confused as to how exactly.
You solve the last 3 equations for the $x_i$ and express them as functions of the normal coordinates $y_i$. Then you differentiate these equations and multiply the velocities by the masses $m_i$ to obtain the momenta $$p_i=m_i\frac{dx_i}{dt}=m_i f_i(dy_1/dt,dy_2/dt,dy_3/dt)$$ These expressions for the momenta in terms of the new coordinate velocities you insert into the original Hamiltonian.
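As a quick consistency check of the diagonalization quoted in the question — checking only the linear algebra of the matrix $A$ as written there, not the physics — one can verify $A\vec v=\lambda\vec v$ for each stated eigenpair in plain Python:

```python
import math

# The matrix A and its eigenpairs exactly as stated in the question.
A = [[1, 0, -1],
     [0, 1, -1],
     [-1, -1, 1]]

s = 1 / math.sqrt(2)
pairs = [
    (1.0,                 [-1.0, 1.0, 0.0]),
    (1.0 + math.sqrt(2),  [-s, -s, 1.0]),
    (1.0 - math.sqrt(2),  [s, s, 1.0]),
]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

for lam, v in pairs:
    Av = matvec(A, v)
    resid = max(abs(Av[i] - lam * v[i]) for i in range(3))
    print(lam, resid)   # residuals at rounding level
```

All three residuals vanish to machine precision, so the stated eigen-decomposition is internally consistent.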
Source: https://physics.stackexchange.com/questions/294591
$\phi^4$-theory: interpreting the RG flow I'm referring to these lectures on the Renormalization Group, and more precisely the figure of the RG flow for the $\phi^4$-theory on page 18. The Lagrangian in the notation used in the text is $$\mathcal L = -\frac{1}{2}(\partial_\mu \phi)^2 - V(\phi)\quad \text{with } \quad V(\phi)=\sum_n \mu^{d-n(d-2)}\frac{g_{2n}}{(2n)!}\phi^{2n}\,, $$ where $\mu$ is the hard cutoff. For convenience, here is the pertinent figure for the RG flow for dimension $d=4-\epsilon$ with $\epsilon>0$ As explained in the text: "F describes a massless interacting theory that interpolates between a free theory in the UV and the Ising Model in the IR" My question is very simple: how can the line F represent massless theories, given that the coupling constant $g_2=m$ along that line is non-zero?
I have taken a more direct approach to this task, working to first order in $\epsilon$. Let's start with a direct calculation of the propagator: $$ \langle \phi(p) \phi(-p) \rangle = \frac{1}{p^2+m^2} - \frac{4 \cdot 3 \cdot g}{(p^2 + m^2)^2} \int^\Lambda \frac{d^{4-\epsilon}k}{(2\pi)^{4-\epsilon}} \frac{1}{k^2+m^2} + \dots $$ $$ \int^\Lambda \frac{d^{d}k}{(2\pi)^d} \frac{1}{k^2+m^2} = \frac{Vol(S^{d-1})}{(2\pi)^d} \int_0^\Lambda \frac{k^{d-1} dk}{k^2 + m^2} = \frac{ \pi^{d/2}}{(2\pi)^d \Gamma(d/2)} \int_0^\Lambda \frac{(k^2)^{d/2-1} d(k^2)}{k^2 + m^2} $$ Using $g\sim \epsilon$, we fix $d=4$: $$ \int^\Lambda \frac{d^{d}k}{(2\pi)^d} \frac{1}{k^2+m^2} = \frac{1}{(4\pi)^2} \left(\Lambda^2 - m^2\ln \frac{\Lambda^2 + m^2}{m^2}\right) $$ Expanding the obtained correlation function: $$ \langle \phi(p) \phi(-p) \rangle \approx \frac{1}{p^2+m^2}\left[1 - \frac{4\cdot 3 \cdot g\Lambda^2}{p^2 + m^2} \frac{1}{(4\pi)^2} \left(1 - \frac{m^2}{\Lambda^2}\ln \frac{\Lambda^2}{m^2}\right) \right] \approx \frac{1}{p^2} \left[1 - \frac{4\cdot 3 \cdot g\Lambda^2/ (4\pi)^2 + m^2}{p^2} \right] $$ So at the critical WF point $m^2_{\star} \approx -\frac{1}{6} \Lambda^2 \epsilon$ , $\tilde{g}_{\star} = \frac{2\pi^2}{9}\epsilon$ we have: $$ \langle \phi(p) \phi(-p) \rangle_{\star} = \frac{1}{p^2} + O(\epsilon^2) $$ We have massless particles on the curve (to first order in $\epsilon$): $$ m^2 = - 3 g\Lambda^2/ 4\pi^2$$ This line obviously connects the Gaussian fixed point ($m^2 = g =0$) and the Wilson-Fisher fixed point. Also $$ \frac{d}{ds}\left(4\cdot 3 \cdot g\Lambda^2/ (4\pi)^2 + m^2\right) =0 $$ under the RG equations to first order in $\epsilon$: $$ \begin{cases} \frac{dm^2}{ds} = 2m^2 + \frac{3}{2\pi^2} g \Lambda^2\\ \frac{dg}{ds} = \epsilon g - \frac{9}{2\pi^2} g^2 \end{cases} $$ I would appreciate it a lot if somebody explained to me the relation of this calculation to Abdelmalek Abdesselam's answer.
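The one-loop integral used in this calculation can be checked numerically; here is a plain-Python midpoint-rule sketch (the values of $\Lambda$ and $m^2$ are arbitrary):

```python
import math

def loop_integral(Lam, m2, n=100000):
    """Midpoint-rule evaluation of Int_{|k|<Lam} d^4k/(2 pi)^4 1/(k^2+m^2)."""
    vol_s3 = 2 * math.pi ** 2      # surface area of the unit 3-sphere
    dk = Lam / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        total += k ** 3 / (k * k + m2) * dk
    return vol_s3 / (2 * math.pi) ** 4 * total

def closed_form(Lam, m2):
    return (Lam ** 2 - m2 * math.log((Lam ** 2 + m2) / m2)) / (4 * math.pi) ** 2

num = loop_integral(3.0, 0.5)
ref = closed_form(3.0, 0.5)
print(num, ref)   # the two values agree
```

This confirms the $d=4$ result $\frac{1}{(4\pi)^2}\left(\Lambda^2 - m^2\ln\frac{\Lambda^2+m^2}{m^2}\right)$ quoted above.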
Source: https://physics.stackexchange.com/questions/340005
Where did the square root come from in this spin 1/2 spinor equation? This is coming from the spin-$1\over 2$ section of David J. Griffiths' textbook Introduction to Quantum Mechanics. My textbook gives the generic expression for a spinor as $$\chi= \begin{pmatrix} a \\ b \\ \end{pmatrix}= a\chi_+ +b\chi_-$$ $$\chi_+= \begin{pmatrix} 1 \\ 0 \\ \end{pmatrix},\chi_-= \begin{pmatrix} 0 \\ 1 \\ \end{pmatrix}$$ The textbook then uses this general form to find the spinor for $S_x$ (for spin $1 \over 2$). It finds the eigenvalue as $\pm{{\hbar} \over 2}$, and then solves for the eigenspinors; here I get confused. These are the steps that the book does: $${\hbar \over 2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \\ \end{pmatrix} = \pm{\hbar \over 2} \begin{pmatrix} \alpha \\ \beta\\ \end{pmatrix}$$ The normalized eigenspinors of $\mathbf S_x$ are $$\chi_+^{(x)}=\begin{pmatrix} {1\over \sqrt{2}} \\ {1\over \sqrt{2}}\\ \end{pmatrix},\Big(eigenvalue+{\hbar\over 2} \Big )\ ; \ \chi_-^{(x)}=\begin{pmatrix} {1\over \sqrt{2}} \\ -{1\over \sqrt{2}}\\ \end{pmatrix},\Big(eigenvalue-{\hbar\over 2} \Big ).$$ Then it says "As the eigenvectors of a hermitian matrix, they span the space; the generic spinor equation $\chi$ can be expressed as a linear combination of them:" $$\chi=\Big({a+b \over \sqrt{2}} \Big)\chi_+^{(x)}+\Big({a-b\over \sqrt{2}} \Big)\chi_-^{(x)} $$ So a couple of things confuse me. First of all, when the book says (eigenvalue$ + {\hbar \over 2}$) does it mean that you add an eigenvalue to ${\hbar \over 2}$ (which is already an eigenvalue) or that the eigenvalue that belongs to the spinor is ${\hbar \over 2}$? The second thing is that I don't know where $\sqrt 2$ comes from in the denominators of the last equation. I understand where it came from in the equations for the individual spinors (from normalizing them).
But the full expansion of the equation is $$\chi=\Big({a+b \over \sqrt{2}} \Big)\begin{pmatrix} {1\over \sqrt 2}\\ {1\over \sqrt 2}\\ \end{pmatrix}+\Big({a-b\over \sqrt{2}} \Big)\begin{pmatrix} {1\over \sqrt 2}\\ -{1\over \sqrt 2}\\ \end{pmatrix} .$$ The square roots are located within the $\chi_{\pm}$. It doesn't seem right to me that you would bring the square root out of a $\chi_{\pm}$ and then still write $\chi_{\pm}$, because it contains the square roots. I can't imagine the textbook would make such a mistake, but this is the only thing I can think of. So my two questions are: does (eigenvalue$ + {\hbar \over 2}$) mean that you add an eigenvalue to ${\hbar \over 2}$, and where did $\sqrt 2$ come from in the last equation?
For your first question, it is saying that the spinor to the immediate left has eigenvalue $+\frac{\hbar}{2}$ (think of it as saying "eigenvalue of $+\frac{\hbar}{2}$"); nothing is being added. For your second question, as @probably_someone suggested, calculate the magnitude of the spinor. Since the eigenspinors $\chi_{\pm}^{(x)}$ are orthonormal, the squared magnitude is just the sum of the squared moduli of the expansion coefficients: $$ |\chi|^2 = \left|{a+b \over \sqrt{2}}\right|^2 + \left|{a-b \over \sqrt{2}}\right|^2 $$ $$ = \frac{|a|^2 + |b|^2 + a b^* + a^* b}{2} + \frac{|a|^2 + |b|^2 - a b^* - a^* b}{2} $$ $$ = |a|^2 + |b|^2 $$ From the normalization constraint on $a$ and $b$, we know that $|a|^2 + |b|^2 = 1$, and so $$ |\chi|^2 = 1 $$ and the state is properly normalized. (Note that you cannot simply add the squared moduli of the four terms $\frac{a}{\sqrt 2}\chi_+^{(x)} + \frac{b}{\sqrt 2}\chi_+^{(x)} + \frac{a}{\sqrt 2}\chi_-^{(x)} - \frac{b}{\sqrt 2}\chi_-^{(x)}$ separately: the first two terms are parallel, not orthogonal, so they must be combined before taking the magnitude.)
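The normalization argument is easy to verify for a concrete pair $(a,b)$ in plain Python (the particular complex values below are arbitrary, chosen so that $|a|^2+|b|^2=1$):

```python
import cmath, math

s = 1 / math.sqrt(2)
chi_plus_x  = (s, s)    # eigenspinor of S_x with eigenvalue +hbar/2
chi_minus_x = (s, -s)   # eigenspinor of S_x with eigenvalue -hbar/2

# An arbitrary normalized pair (a, b) with |a|^2 + |b|^2 = 1.
a = complex(0.6, 0.0)
b = 0.8 * cmath.exp(0.7j)

cp = (a + b) / math.sqrt(2)   # coefficient of chi_plus_x
cm = (a - b) / math.sqrt(2)   # coefficient of chi_minus_x

chi = (cp * chi_plus_x[0] + cm * chi_minus_x[0],
       cp * chi_plus_x[1] + cm * chi_minus_x[1])

print(chi[0] - a, chi[1] - b)             # both ~ 0: the expansion recovers (a, b)
norm2 = abs(chi[0]) ** 2 + abs(chi[1]) ** 2
print(norm2)                              # 1.0 up to rounding
```

The check works for any normalized $(a,b)$, including complex amplitudes.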
Source: https://physics.stackexchange.com/questions/404363
Eigenvalues of a two particle system in a coupled vs. uncoupled basis Consider a system of two distinguishable spin-1/2 particles with Hamiltonian \begin{align} H &= \frac{\alpha}{4} \vec{\sigma}_1 \cdot\vec{\sigma}_2.\\ \end{align} where $\vec{\sigma}_1 = (\sigma_x\otimes 1, \sigma_y\otimes 1, \sigma_z\otimes 1)$ and $\vec{\sigma}_2 = (1\otimes \sigma_x,1\otimes \sigma_y,1\otimes \sigma_z)$ . In the uncoupled z-basis, we can write the Hamiltonian as \begin{align} H&= \frac{\alpha}{4}\left(\sigma_{x}\otimes\sigma_{x}+\sigma_{y}\otimes\sigma_{y}+\sigma_{z}\otimes\sigma_{z}\right)\\ &= \frac{\alpha}{4}\left(\sigma_{x}+\sigma_{y}+\sigma_{z}\right)\otimes\left(\sigma_{x}+\sigma_{y}+\sigma_{z}\right)\\ &= \frac{\alpha}{4}\begin{pmatrix}1 & 1-i\\ 1+i & -1\end{pmatrix}\otimes \begin{pmatrix}1 & 1-i\\ 1+i & -1\end{pmatrix} \end{align} The matrix $$\begin{pmatrix}1 & 1-i\\ 1+i & -1\end{pmatrix}$$ has eigenvalues $\pm\sqrt{3}$, so in the uncoupled diagonal-basis $$H = \frac{3\alpha}{4}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\otimes\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$ which has eigenvectors $$\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix}\hspace{2mm}, \hspace{2mm}\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix} \\ \begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix}\hspace{2mm}, \hspace{2mm} \begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix}$$ with respective eigenvalues $3\alpha/4, -3\alpha/4, -3\alpha/4, 3\alpha/4$. 
We could've rewritten the Hamiltonian as \begin{align} H &= \frac{\alpha}{2}\left[\left(\frac{1}{2}\vec{\sigma}_1+\frac{1}{2}\vec{\sigma}_2\right)^2 - \left(\frac{1}{2}\vec{\sigma}_1\right)^2 - \left(\frac{1}{2}\vec{\sigma}_2\right)^2\right]\\ &=\frac{\alpha}{2}\left[s(s+1) - \frac{1}{2}\left(\frac{1}{2} +1\right) - \frac{1}{2}\left(\frac{1}{2} +1\right)\right]\\ &=\frac{\alpha}{2}\left[s(s+1) - \frac{3}{2}\right] \end{align} where $s$ is the the spin in the coupled basis ($s=0$ or $1$). Therefore the eigenvalues of the Hamiltonian in the coupled basis are $-3\alpha/4$ (with degeneracy 1) and $\alpha/4$ (with degeneracy 3). The eigenvalues of the Hamiltonian shouldn't depend on your choice of basis, but in the above I get different eigenvalues in the coupled and uncoupled bases. Where am I going wrong? Solution (thanks to Vadim): In the $|\uparrow\uparrow\rangle, |\uparrow\downarrow\rangle,|\downarrow\uparrow\rangle,|\downarrow\downarrow\rangle$ basis the Hamiltonian takes the form \begin{align} H&= \frac{\alpha}{4}\left(\sigma_{x}\otimes\sigma_{x}+\sigma_{y}\otimes\sigma_{y}+\sigma_{z}\otimes\sigma_{z}\right)\\ &= \frac{\alpha}{4}\begin{pmatrix}1&0&0&0\\0&-1&2&0\\0&2&-1&0\\0&0&0&1\end{pmatrix} \end{align} which has eigenvalues $-3\alpha/4$ and $\alpha/4$. This is not the same as \begin{align} \frac{\alpha}{4}\left(\sigma_{x}+\sigma_{y}+\sigma_{z}\right)\otimes\left(\sigma_{x}+\sigma_{y}+\sigma_{z}\right) = \frac{\alpha}{4}\begin{pmatrix}1&1-i&1-i&-2i\\1+i&-1&2&-1+i\\1+i&2&-1&-1+i\\2i&-1-i&-1-i&1\end{pmatrix} \end{align} which has eigenvalues $\pm3\alpha/4$.
The error is in the first approach: $$\sigma_x\otimes\sigma_x + \sigma_y\otimes\sigma_y + \sigma_z\otimes\sigma_z \neq (\sigma_x + \sigma_y + \sigma_z)\otimes(\sigma_x + \sigma_y + \sigma_z),$$ as it is easy to verify by writing down these matrices in the 4-by-4 basis $|\uparrow\uparrow\rangle, |\uparrow\downarrow\rangle, |\downarrow\uparrow\rangle, |\downarrow\downarrow\rangle.$ Working with 4-by-4 matrices may seem daunting at first, but it is actually quite easy, once you get a grasp of how they nest one within the other, e.g. $$ \sigma_x^{(1)}\otimes\sigma_x^{(2)} =\begin{pmatrix} 0&\sigma_x^{(2)}\\\sigma_x^{(2)}&0\end{pmatrix} = \begin{pmatrix} 0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0 \end{pmatrix}$$ $$ \sigma_x^{(1)}\otimes\sigma_y^{(2)} =\begin{pmatrix} 0&\sigma_y^{(2)}\\\sigma_y^{(2)}&0\end{pmatrix} = \begin{pmatrix} 0&0&0&-i\\0&0&i&0\\0&-i&0&0\\i&0&0&0 \end{pmatrix}$$ Incidentally, it is also helpful when dealing with the $\gamma$-matrices in the Dirac equation.
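Both claims — the 4-by-4 form of $\sigma_x\otimes\sigma_x+\sigma_y\otimes\sigma_y+\sigma_z\otimes\sigma_z$ and its inequality with the Kronecker product of the sums — can be checked with a few lines of plain Python (no libraries are needed for 2-by-2 Kronecker products):

```python
def kron(A, B):
    """Kronecker product of two 2x2 matrices, giving a 4x4 matrix."""
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(4)] for i in range(4)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

# The correct operator: a sum of Kronecker products.  Numerically it equals
# [[1,0,0,0],[0,-1,2,0],[0,2,-1,0],[0,0,0,1]], as in the answer above.
H = madd(madd(kron(sx, sx), kron(sy, sy)), kron(sz, sz))

# The erroneous first approach: the Kronecker product of the sums.
s_sum = [[sx[i][j] + sy[i][j] + sz[i][j] for j in range(2)] for i in range(2)]
W = kron(s_sum, s_sum)
print(H == W)   # False: the two operators differ

# The singlet combination (0, 1, -1, 0) picks out the -3 eigenvalue of H.
v = [0, 1, -1, 0]
Hv = [sum(H[i][j] * v[j] for j in range(4)) for i in range(4)]
print(Hv)       # numerically -3 * v
```

Restoring the prefactor $\alpha/4$ then gives the eigenvalues $\alpha/4$ (threefold) and $-3\alpha/4$, matching the coupled-basis result.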
Source: https://physics.stackexchange.com/questions/542197
Why is the derivative of a vector equal to the derivative of its rectilinear components? Take a vector $\mathbf A=t^4\mathbf i +t^2\mathbf j$, and call the unit vector along the direction of $\mathbf A$ $\mathbf k$; the magnitude of this vector $\mathbf A$ along $\mathbf k$ will be $\sqrt{t^8+t^4}$, and thus the vector will be $\sqrt{t^8+t^4}\ \mathbf k$. So why is its derivative $d\mathbf A/dt=4t^3\mathbf i+2t\mathbf j$? Will its magnitude be the same as the magnitude of the derivative of the vector along $\mathbf k$? How can we distribute its derivative along $\mathbf i$ and $\mathbf j$, and how can we prove that its magnitude is the same as the magnitude of the derivative of the vector along $\mathbf k$?
After @gandalf61's answer above, the following is just tautology, but let's do it just for the sake of the question. $$\begin{align} \mathbf A &=t^4\mathbf i +t^2\mathbf j \\ \end{align}$$ Or, in the direction of $\mathbf A$, $$\begin{align} \mathbf A &= |A|.\mathbf k \\ \end{align}$$ Where $\mathbf k$ is the unit vector in the direction of $\mathbf A.$ Thus: $$\begin{align} \mathbf A &= \sqrt{t^8+t^4}.\frac{t^4\mathbf i +t^2\mathbf j}{\sqrt{t^8+t^4}} \\ \end{align}$$ Now, we can derive $\frac{d\mathbf A}{dt}$: $$\begin{align} \frac{d\mathbf A}{dt} &= \frac{d|A|}{dt}.\mathbf k + |A|.\frac{d\mathbf k}{dt} \\ &=\frac{8t^7 + 4t^3}{2\sqrt{t^8+t^4}}.\frac{t^4\mathbf i +t^2\mathbf j}{\sqrt{t^8+t^4}} + \sqrt{t^8+t^4}.\frac{2t^7 \mathbf i - 2t^9\mathbf j}{(t^8+t^4)^{3/2}} \\ &= \frac{(4t^{11} + 2t^7 + 2t^7) \mathbf i + (4t^9 + 2t^5 -2t^9)\mathbf j}{t^8+t^4} \\ &= \frac{t^4[(4t^7 + 4t^3)\mathbf i + (2t^5 + 2t) \mathbf j]}{t^4(t^4+1)} \\ &= \frac{(t^4+1)(4t^3 \mathbf i + 2t\mathbf j)}{t^4+1} \\ &= 4t^3 \mathbf i + 2t\mathbf j \end{align}$$ So, we see that the derivative of $\mathbf A$ is indeed the same whether we represent $\mathbf A$ in terms of $\mathbf i$ and $\mathbf j$, or in terms of $\mathbf k$ (and subsequently $\mathbf k$ in terms of $\mathbf i$ and $\mathbf j$).
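The agreement of the two representations can also be confirmed numerically — a plain-Python sketch, where the evaluation point $t_0$ is arbitrary:

```python
import math

def A_cart(t):
    """The vector written in rectilinear components: (t^4, t^2)."""
    return (t ** 4, t ** 2)

def A_polar(t):
    """The same vector written as |A| * k, with k the unit vector along A."""
    mag = math.sqrt(t ** 8 + t ** 4)
    k = (t ** 4 / mag, t ** 2 / mag)
    return (mag * k[0], mag * k[1])

t0, h = 1.3, 1e-6
for f in (A_cart, A_polar):
    dx = (f(t0 + h)[0] - f(t0 - h)[0]) / (2 * h)
    dy = (f(t0 + h)[1] - f(t0 - h)[1]) / (2 * h)
    print(dx - 4 * t0 ** 3, dy - 2 * t0)   # both differences ~ 0
```

Both representations give $d\mathbf A/dt = 4t^3\mathbf i + 2t\mathbf j$ to numerical precision, as the algebra above requires.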
Source: https://physics.stackexchange.com/questions/651296
Calculate The Electric Field Generated By A Quarter Circle I'm just beginning to study electrostatics and have a question about the following problem: A quarter circle of radius $R$ is uniformly charged with a total charge $Q$. What is the electric field at the origin, which is the center of the arc? My Idea: We can use $E =\frac{F_e}{q}$ where $q$ is the point charge at the origin. First of all, the linear charge density, $\lambda$, would be $\frac{Q}{\frac{\pi R}{2}} = \frac{2Q}{\pi R}.$ If we position the quarter circle such that the two ends of the arc are at $\frac{3 \pi}{4}$ and $\frac{5 \pi}{4}$ radians, then the $y$-forces will cancel out. The only force on the origin will be in the $x$-direction and can be calculated by finding twice the force applied by the half of the arc from $\frac{3 \pi}{4}$ to $\pi$ (due to symmetry). Choose a point $X$ on the arc that has an angle of $\theta$ (in polar form) and consider the width of the arc to be $d \theta$. Then, the total charge is $\lambda d \theta = \frac{2Q}{ \pi R} d \theta$ and the distance is $R$. So, by Coulomb's Law, the total electrostatic force felt is $$\frac{1}{4 \pi \epsilon_0} \frac{ \frac{2Q}{ \pi R} d \theta q}{R^2} = \frac{Q q}{2 \pi^2 \epsilon_0 R^3} d \theta$$ Thus, the force in the $x$-direction is $$\frac{Q q}{2 \pi^2 \epsilon_0 R^3} ( - \cos(\theta)) d\theta$$ Now, we integrate this: $$F_e = 2 \int_{\frac{3 \pi}{4}}^\pi \frac{Q q}{2 \pi^2 \epsilon_0 R^3} (-\cos(\theta)) d\theta = \frac{Qq}{\pi^2 \epsilon_0 R^3} \int_{\frac{3 \pi}{4}}^\pi - \cos(\theta) d\theta = \frac{ \sqrt{2}}{2} \frac{Qq}{\pi^2 \epsilon_0 R^3}$$ So, the electric field is $$\frac{ \sqrt{2}}{2} \frac{Q}{\pi^2 \epsilon_0 R^3}$$ However, apparently, the correct answer should be $$\frac{ \sqrt{2}}{2} \frac{Q}{\pi^2 \epsilon_0 R^2}$$ Can anyone see why I am off by a factor of $R$? What did I mess up?
From $\vec{E} = \vec{F}/q$ we can arrive at a more convenient formula for finding the electric field at the origin. A charge element at angle $\theta$ sits at $R(\cos\theta, \sin\theta)$, so the field it produces at the origin points along $-(\hat{x}\cos\theta + \hat{y}\sin\theta)$, giving $$ \vec{E} = -\frac{1}{4\pi\epsilon_0} \int \frac{dq}{r^2} ( {\hat{x} \cos\theta + \hat{y} \sin\theta} ). $$ Then using $dq = \lambda dl = \lambda r d\theta$ as pointed out by Aryan Komarla, where $r = R$ in your case, we can arrive at \begin{equation}\tag{1}\label{eqn:electric-field-arc} \begin{array}{rcl} \vec{E} & = & \displaystyle -\frac{1}{4\pi\epsilon_0} \int \frac{R\lambda d\theta}{R^2} ( {\hat{x} \cos\theta + \hat{y} \sin\theta} ) \newline & = & \displaystyle -\frac{1}{4\pi\epsilon_0} \frac{\lambda}{R} \int ( {\hat{x} \cos\theta + \hat{y} \sin\theta} ) d\theta. \end{array} \end{equation} Since you have already chosen the integral's lower and upper bounds to be $\frac34 \pi$ and $\frac54 \pi$, then $$ \begin{array}{rcl} \vec{E} & = & \displaystyle -\frac{1}{4\pi\epsilon_0} \frac{\lambda}{R} \int_{\frac34 \pi}^{\frac54 \pi} ( {\hat{x} \cos\theta + \hat{y} \sin\theta} ) d\theta \newline & = & \displaystyle -\frac{1}{4\pi\epsilon_0} \frac{\lambda}{R} \left[ {\hat{x} \sin\theta - \hat{y} \cos\theta}\right]_{\theta = \frac34 \pi}^{\frac54 \pi} \newline & = & \displaystyle -\frac{1}{4\pi\epsilon_0} \frac{\lambda}{R} \left[ \hat{x}(-\tfrac12\sqrt{2} - \tfrac12\sqrt{2} ) - \hat{y}(-\tfrac12\sqrt{2} + \tfrac12\sqrt{2}) \right] \newline & = & \displaystyle \frac{1}{4\pi\epsilon_0} \frac{\lambda}{R} \sqrt{2} \hat{x}. \end{array} $$ Finally, using your result $\lambda = 2Q/\pi R$ leads us to $$ \begin{array}{rcl} \vec{E} & = & \displaystyle \frac{1}{4\pi\epsilon_0} \frac{2Q}{\pi R^2} \sqrt{2} \hat{x} \newline & = & \displaystyle \frac{\sqrt{2}}{2} \frac{Q}{\pi^2 \epsilon_0 R^2} \hat{x}, \end{array} $$ which is what you expect. By the way, Eqn. \eqref{eqn:electric-field-arc} is more general, and you can choose any initial and final angles you want.
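As a brute-force check, one can sum the Coulomb fields of many small charge elements on the arc and compare with the closed form; a plain-Python sketch (the values of $Q$ and $R$ are arbitrary):

```python
import math

eps0 = 8.8541878128e-12       # vacuum permittivity, SI
Q, R = 1e-9, 0.25             # arbitrary test values
lam = Q / (math.pi * R / 2)   # linear charge density on the quarter arc

n = 100000
dth = (math.pi / 2) / n
Ex = Ey = 0.0
for i in range(n):
    th = 3 * math.pi / 4 + (i + 0.5) * dth
    dq = lam * R * dth        # note the factor R in the arc length
    # The field at the origin from a charge at angle th points along
    # -(cos th, sin th), i.e. away from the charge.
    Ex -= dq * math.cos(th) / (4 * math.pi * eps0 * R ** 2)
    Ey -= dq * math.sin(th) / (4 * math.pi * eps0 * R ** 2)

expected = (math.sqrt(2) / 2) * Q / (math.pi ** 2 * eps0 * R ** 2)
print(Ex, Ey, expected)   # Ex matches `expected`, Ey ~ 0
```

Dropping the factor `R` in `dq` reproduces the asker's extra factor of $R$ in the denominator.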
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculating the amount of work done to assemble a net charge on a sphere I've been reviewing electrostatics using an old exam and I stumbled upon this question: Calculate the amount of work required to assemble a net charge of $+Q$ on a spherical conductor of radius $R$. If an additional charge of $-Q$ were to be assembled on a concentric spherical conductor of radius $R+a$, what amount of work would the entire process require? Now the first part is not that difficult, we just do: $$\vec E = \frac{Q}{4 \pi \epsilon_0 R^2} \hat r \, \text{(From Gauss's Law)}$$ $$\begin{align} W & = \frac{\epsilon_0}{2} \int E^2 \, d\tau \\ & = \left(\frac{\epsilon_0}{2}\right) \left(\frac{Q^2}{(4 \pi \epsilon_0)^2}\right) \int d \Omega \int_R^{\infty} \frac{1}{R'^{4}} R'^{2} dR'\\ & = \frac{4\pi Q^2}{32 \pi^2 \epsilon_0} \frac{1}{R} \\ &= \frac{Q^2}{8 \pi \epsilon_0 R} \\ \end{align}$$ But for the second part, according to a solution that a friend of mine gave me, the only thing that we need to do to calculate the total work is: $$\begin{align} W_{tot} & = \frac{\epsilon_0}{2} \int E^2 d \tau \\ & = \frac{4 \pi Q^2}{32 \pi^2 \epsilon_0} \int_R^{R+a} \frac{1}{R'^2} dR' \\ & = \frac{Q^2}{8 \pi \epsilon_0} \left(\frac{1}{R} - \frac{1}{R+a}\right) \\ \end{align}$$ But according to equation $(2.47)$ of Griffiths, the total work should be equal to: $$\begin{align} W_{tot} & = \frac{\epsilon_0}{2} \int (E_1+E_2)^2 d\tau \\ & = \frac{\epsilon_0}{2} \int (E_1^2 + E_2^2 + 2E_1 \cdot E_2) d\tau \\ & = W_1 + W_2 + \epsilon_0 \int E_1 \cdot E_2 d \tau \\ \end{align}$$ Wherein for this case $W_1$ is the work required for a sphere of radius $R$ as shown earlier, and $W_2$ is the work required for a sphere of radius $R+a$. Is the first method correct?
Answer: The two methods are both correct. As I have suggested in the comment area, the total work calculated by using the first method should be $$W_{tot}=\frac{Q^2}{8\pi\epsilon_0}(\frac{1}{R}-\frac{1}{R+a}), $$ since $\int \frac{1}{r^2}dr=-\frac{1}{r}+\text{Constant}$. Next we will calculate the total work by the second method, i.e. equation $(2.47)$ of Griffiths. As indicated in the problem, we have \begin{align} \mathbf{E}_1 & =\frac 1 {4\pi\epsilon_0}\frac{Q}{r^2}\hat{\mathbf{r}},\ \text{while}\ r\ge R\\ \mathbf{E}_2 & =-\frac 1 {4\pi\epsilon_0}\frac{Q}{r^2}\hat{\mathbf{r}},\ \text{while}\ r\ge R+a \end{align} So, \begin{align} E_1^2 & =\frac{Q^2}{16\pi^2\epsilon_0^2r^4}\\ E_2^2 & =\frac{Q^2}{16\pi^2\epsilon_0^2r^4}\\ E_1\cdot E_2 & =-\frac{Q^2}{16\pi^2\epsilon_0^2r^4} \end{align} And \begin{align} W_1 & =\frac{\epsilon_0}{2}\int E_1^2d\tau\\ & =\frac{Q^2}{8\pi\epsilon_0}\int_R^\infty\frac{1}{r^2}dr\\ & =\frac{Q^2}{8\pi\epsilon_0R} \end{align} By using the same method, we get $W_2$ as follows: $$ W_2=\frac{Q^2}{8\pi\epsilon_0(R+a)} $$ Finally, the cross term is \begin{align} \epsilon_0 \int \mathbf{E}_1 \cdot \mathbf{E}_2 d\tau & =-\frac{Q^2}{4\pi\epsilon_0}\int_{R+a}^\infty \frac{1}{r^2}dr\\ & =-\frac{Q^2}{8\pi\epsilon_0}\frac{2}{R+a} \end{align} Then, adding them all up, we have \begin{align} W_{tot} & =W_1+W_2+\epsilon_0\int \mathbf{E}_1\cdot\mathbf{E}_2d\tau \\ & =\frac{Q^2}{8\pi\epsilon_0}\frac{1}{R}+\frac{Q^2}{8\pi\epsilon_0}\frac{1}{R+a}-\frac{Q^2}{8\pi\epsilon_0}\frac{2}{R+a}\\ & =\frac{Q^2}{8\pi\epsilon_0}(\frac{1}{R}-\frac{1}{R+a}) \end{align} Conclusion: The results are the same by the two methods. In the first method, when we wrote the formula of $W_{tot}$, the electric field $E$ is the final field after using the superposition principle. In the second, we also used the superposition principle, but we wrote it in the form of $W$ explicitly. I mean that the two methods are the same, but have different forms.
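A numeric cross-check of the final result (illustrative values $\epsilon_0 = Q = R = 1$, $a = 0.5$): integrating the field energy density between the shells reproduces the closed form.

```python
import numpy as np

# Illustrative units: eps0 = Q = R = 1, a = 0.5 (not SI values).
eps0, Q, R, a = 1.0, 1.0, 1.0, 0.5
r = np.linspace(R, R + a, 200_001)   # E vanishes for r < R and for r > R + a
E = Q / (4 * np.pi * eps0 * r**2)
W = np.trapz(eps0 / 2 * E**2 * 4 * np.pi * r**2, r)

expected = Q**2 / (8 * np.pi * eps0) * (1 / R - 1 / (R + a))
print(W, expected)
```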
{ "language": "en", "url": "https://physics.stackexchange.com/questions/226456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finite difference method applied to the 2D time-independent Schrödinger equation The 1D Schrödinger equation $$ E\psi=\left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+\hat{V}\right]\psi $$ can be solved using a finite difference scheme. The kinetic energy operator is represented by the matrix $$ -\frac{\hbar^2}{2m}\left( \begin{array}{cccc} 2 & -1 & 0 & \dots & \dots & 0 \\ -1 & 2 & -1 & \dots & \dots & 0 \\ 0 & -1 & 2 & -1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & & \vdots \\ \vdots & \vdots & \vdots & & \ddots & -1 \\ 0 & \dots & \dots & 0 & -1 & 2 \\ \end{array} \right) $$ Similarly, the potential energy operator has the form $$ \left( \begin{array}{cccc} V_1 & 0 & 0 & \dots & \dots & 0 \\ 0 & V_2 & 0 & \dots & \dots & 0 \\ 0 & 0 & V_3 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & & \vdots \\ \vdots & \vdots & \vdots & & \ddots & 0 \\ 0 & \dots & \dots & 0 & 0 & V_N \\ \end{array} \right) $$ My question is: what is the general form for the 2D case? I know the above matrices are generated from the three-point stencil, where $$ \frac{\partial^2 \psi}{\partial x^2} = \frac{\psi_{i+1}+\psi_{i-1}-2\psi_{i}}{\Delta x^2} $$ So with a five-point stencil for the 2d case, I'll have $$ \frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2}= \frac{\psi_{i+1,j}+\psi_{i-1,j}-2\psi_{i,j}}{\Delta x^2} + \frac{\psi_{i,j+1}+\psi_{i,j-1}-2\psi_{i,j}}{\Delta y^2} $$ But I'm unsure of how to convert this into the relevant matrices. [edit] I think I've managed to derive a solution by following this video. 
The kinetic energy operator is $$ -\frac{\hbar^2}{2m}\left( \begin{array}{cccc} 4 & -1 & 0 & -1 & 0 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 & -1 & 0 & 0 & 0 \\ 0 & -1 & 4 & 0 & 0 & -1& 0 & 0\\ -1 & 0 & 0 & 4 & -1 & 0& -1 & 0\\ 0 & -1 & 0 & -1 & 4 & -1 & 0 & -1\\ 0 & 0 & -1 & 0 & -1 & 4 & 0 & 0\\ & & & \vdots & & & & \\ \end{array} \right) $$ And the potential energy operator is $$ \left( \begin{array}{cccc} v_{11} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & v_{21} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \ddots & 0 & 0 & 0& 0 & 0\\ 0 & 0 & 0 & v_{N1} & 0 & 0& 0 & 0\\ 0 & 0 & 0 & 0 & v_{12} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & \ddots & 0 & 0\\ & & & \vdots & & & & v_{NM} \\ \end{array} \right) $$
The natural solution to this problem is to store $\psi$ in a 2D matrix $\Psi = \Psi_{ij}$. Let's call $$D = \begin{pmatrix} 2 & -1 & 0 & \cdots & \cdots & 0 \\ -1 & 2 & -1 & \cdots & \cdots & 0 \\ 0 & -1 & 2 & -1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & & \vdots\\ \vdots & \vdots & \vdots & & \ddots & -1 \\ 0 & 0 & 0 & \cdots & -1 & 2 \end{pmatrix}$$ then, as you noticed, $D\Psi$ represents $\frac{\textrm{d}^2\psi}{\textrm{d}x^2}$. To find a way of calculating $\frac{\textrm{d}^2\psi}{\textrm{d}y^2}$, you can think about the fact that $\Psi^\top$ represents $\psi$ with the $x$ and $y$ axes swapped, so $D\Psi^\top$ represents $\frac{\textrm{d}^2\psi}{\textrm{d}y^2}$, with swapped axes. However, $D$ is symmetric, so it's judicious to write $D=\, D^\top$, which finally gives $D\Psi^\top = (\Psi D)^\top$: we can now swap the axes once again, so $\frac{\textrm{d}^2\psi}{\textrm{d}y^2}$ is represented by $\Psi D$. Then, you can represent $\Delta\psi$ by $D\Psi + \Psi D$. Notice that here I implicitly assumed that $\Psi$ was a square matrix: if that's not the case, you will have to create two matrices $D_x$ and $D_y$, which are similar to $D$, but with different dimensions.
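As a cross-check of the $D\Psi + \Psi D$ construction, the same 2D operator can be assembled explicitly with Kronecker products acting on the flattened wavefunction (a numpy sketch; the $1/\Delta x^2$ prefactor and boundary details are omitted, as in the matrices above):

```python
import numpy as np

N = 3                                                 # 3x3 grid, matching the 9x9 example
D = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)  # 1D second-difference matrix
L2 = np.kron(np.eye(N), D) + np.kron(D, np.eye(N))    # 2D operator on the flattened Psi

Psi = np.random.rand(N, N)
lhs = (L2 @ Psi.flatten()).reshape(N, N)              # Kronecker form
rhs = D @ Psi + Psi @ D                               # matrix form from the answer
print(np.allclose(lhs, rhs), L2[0, 0])                # True 4.0
```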
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Transforming inertia tensor from corner of a beam to the centre So I have a beam with mass M and sides a (x-direction), b (y-direction) and c (z-direction). I have figured out that the inertia tensor in the corner is $$I = M\begin{bmatrix} \frac{b^2 + c^2}{3} & -\frac{ab}{4} & -\frac{ac}{4}\\ -\frac{ab}{4} & \frac{a^2+c^2}{3} & -\frac{bc}{4} \\ -\frac{ac}{4} & -\frac{bc}{4} & \frac{a^2+b^2}{3}\end{bmatrix}$$ and in the middle a nice diagonal matrix $$I = \frac{M}{12}\begin{bmatrix} b^2+c^2 & 0 & 0\\ 0 & a^2 + c^2 & 0\\ 0 & 0 & a^2 + b^2 \end{bmatrix}$$ and I know the vector you should use to transform from the corner to the middle is $$ R = \begin{bmatrix} \frac{a}{2} \\ \frac{b}{2} \\ \frac{c}{2} \end{bmatrix}$$ but I don't really get how I would, for example, transform the $I_{zz}$ component from the corner to the middle with the transform formula $I_a = I_b + MR^2$
The 3D form of the parallel axis theorem is defined as follows. $$ \mathbf{I}_A = \mathbf{I}_C + m \left( -[\boldsymbol{c} \times][\boldsymbol{c} \times] \right) $$ where $[\boldsymbol{c}\times] = \pmatrix{0 & -z & y\\z & 0 & -x\\-y & x & 0}$ is the 3×3 cross product operator matrix. Combined, the above expression is $$ \mathbf{I}_A = \mathbf{I}_C + m \begin{pmatrix} y^2+z^2 & -x y & -x z \\ -x y & x^2 + z^2 & -y z \\ -x z & -y z & x^2+y^2 \end{pmatrix} $$ Do you see the pattern in the above? The diagonal elements are the sum of squares of the vector components excluding the row of the diagonal (1st row has 2nd and 3rd components, etc.), and the off-diagonal entries contain the negative product of the vector components of the row and column (the 2nd row, 3rd column contains $-y z$). Using the above you can solve for $\mathbf{I}_C$ at the center given the mass moment of inertia tensor at the corner $\mathbf{I}_A$ and the position vector of the center $\boldsymbol{c} = \pmatrix{ a/2 \\ b/2 \\ c/2 }$.
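A short numpy sketch of this transformation for the beam in the question (illustrative values for $M$, $a$, $b$, $c$): subtracting $m\left((\boldsymbol{c}\cdot\boldsymbol{c})\,\mathbf{1} - \boldsymbol{c}\boldsymbol{c}^\top\right)$, which equals $-m[\boldsymbol{c}\times][\boldsymbol{c}\times]$, from the corner tensor recovers the diagonal centre tensor.

```python
import numpy as np

M, a, b, c = 2.0, 1.0, 2.0, 3.0      # illustrative values, not from the question
I_corner = M * np.array([
    [(b**2 + c**2) / 3, -a * b / 4,        -a * c / 4],
    [-a * b / 4,        (a**2 + c**2) / 3, -b * c / 4],
    [-a * c / 4,        -b * c / 4,        (a**2 + b**2) / 3],
])
cvec = np.array([a / 2, b / 2, c / 2])
# m * (c.c * I - c c^T) is the matrix written out in the answer above
shift = M * ((cvec @ cvec) * np.eye(3) - np.outer(cvec, cvec))

I_centre = I_corner - shift
expected = M / 12 * np.diag([b**2 + c**2, a**2 + c**2, a**2 + b**2])
print(np.allclose(I_centre, expected))   # True
```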
{ "language": "en", "url": "https://physics.stackexchange.com/questions/502559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Defining the exchange operator $P_{12}$ without reference to dummy variables Let $\Psi \in \mathcal{L}^{2}(\mathbb{R}^{6}; \mathbb{C}^{2}\otimes\mathbb{C}^{2})$ so that $\Psi$ represents the state of two interacting fermions. Traditionally, we define the exchange operator $P_{12}$ via \begin{align*} P_{12}\Psi(\mathbf{r}_1, \mathbf{r}_2) = \Psi(\mathbf{r}_2, \mathbf{r}_1) \end{align*} but pretty soon we realize this is incorrect, as we know that we need to interchange the spin state as well. This is often achieved by writing \begin{align*} P_{12}\Psi(\mathbf{r}_1, \sigma_1, \mathbf{r}_2, \sigma_2) = \Psi(\mathbf{r}_2, \sigma_2, \mathbf{r}_1, \sigma_1) \end{align*} but this is still awkward, as the spin state is a dependent variable in the theory. To avoid these problems I would like to define the exchange operator without any reference to the dummy variables $\mathbf{r}_1$ and $\mathbf{r}_2$. This is what I have so far: Suppose $\{\psi_{n}\}$ spans $\mathcal{L}^{2}(\mathbb{R}^{3}, \mathbb{C})$. Then any function $\Psi \in \mathcal{L}^{2}(\mathbb{R}^{6}; \mathbb{C}^{2}\otimes\mathbb{C}^{2})$ can be written as \begin{align}\label{separate} \Psi &= \sum_{n,m} a_{nm} \begin{pmatrix} \psi_{n} \\ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \\ \psi_{m} \end{pmatrix} + b_{nm} \begin{pmatrix} \psi_{n} \\ 0 \end{pmatrix} \otimes \begin{pmatrix} \psi_{m} \\ 0 \end{pmatrix} + c_{nm} \begin{pmatrix} 0 \\ \psi_{n} \end{pmatrix} \otimes \begin{pmatrix} 0 \\ \psi_{m} \end{pmatrix} + d_{nm} \begin{pmatrix} 0 \\ \psi_{n} \end{pmatrix} \otimes \begin{pmatrix} \psi_{m} \\ 0 \end{pmatrix} \end{align} where the first part of the tensor product is assumed to act on $\mathbf{r}_1$ and the second assumed to act on $\mathbf{r}_2$. Now we define $P_{12}$ on the basis, interchanging the functions rather than the dummy variables.
Exchange of the spin-coordinate is now trivial, defined by \begin{align*} P_{12}^{\mathrm{spin}} \begin{pmatrix} \psi_{n} \\ \psi_{m} \end{pmatrix} \otimes \begin{pmatrix} \psi_{j} \\ \psi_{k} \end{pmatrix} = \begin{pmatrix} \psi_{m} \\ \psi_{n} \end{pmatrix} \otimes \begin{pmatrix} \psi_{k} \\ \psi_{j} \end{pmatrix} \end{align*} However, what I would consider the natural space exchange definition \begin{align*} P_{12}^{\mathrm{space}} \begin{pmatrix} \psi_{n} \\ \psi_{m} \end{pmatrix} \otimes \begin{pmatrix} \psi_{j} \\ \psi_{k} \end{pmatrix} = \begin{pmatrix} \psi_{j} \\ \psi_{k} \end{pmatrix} \otimes \begin{pmatrix} \psi_{n} \\ \psi_{m} \end{pmatrix} \end{align*} gives results which are in disagreement with the notation found in standard quantum mechanics texts. For instance, take $\Psi(\mathbf{r}_1, \mathbf{r}_2) = \psi(\mathbf{r}_1)\psi(\mathbf{r}_2)(\uparrow \downarrow - \downarrow \uparrow)$. This is clearly antisymmetric, but is symmetric under $P_{12}^{\mathrm{space}}P_{12}^{\mathrm{spin}}$. This can be patched up by defining $P_{12}^{\mathrm{space}}$ differently on each of the four types of terms in \ref{separate}, but that seems so unnatural that I feel like I'm on the wrong track. Is there a natural definition of $P_{12}$ which doesn't make reference to the dummy variables $\mathbf{r}_1$ and $\mathbf{r}_2$?
The column vector notation can be used to express the result of exchange of spatial or spin arguments of the psi function, if we adopt this convention: if the symbol is on the left side of the tensor product, it belongs to particle 1; if it is on the right side, it belongs to particle 2. This means that when the symbol $\psi_n$ is moved from the left vector to the right vector as part of your operation $P_{12}^{space}$, the particle index of the spatial argument changes from 1 to 2, which is desired, but the symbol then multiplies the spin vector of the 2nd particle, not the spin vector of the 1st one as it should. A similar problem occurs for your operator $P_{12}^{spin}$. Consequently, one cannot represent the exchange operations for spatial or spin arguments in the way you wrote. Using the original function notation it can be derived that the exchange operators work this way in the column vector notation: \begin{align} P_{12}^{space} \begin{pmatrix} f_+ \\ f_- \end{pmatrix} \otimes \begin{pmatrix} g_+ \\ g_- \end{pmatrix} = \end{align} \begin{align} = \begin{pmatrix} g_+ \\ . \end{pmatrix} \otimes \begin{pmatrix} f_+ \\ . \end{pmatrix} + \begin{pmatrix} g_- \\ . \end{pmatrix} \otimes \begin{pmatrix} . \\ f_+ \end{pmatrix} + \begin{pmatrix} . \\ g_+ \end{pmatrix} \otimes \begin{pmatrix} f_- \\ . \end{pmatrix} + \begin{pmatrix} . \\ g_- \end{pmatrix} \otimes \begin{pmatrix} . \\ f_- \end{pmatrix}. \end{align} These 4 terms cannot be simplified into a lesser number of terms, because they are linearly independent. For spin exchange the result is similar, just exchange $f$ with $g$ everywhere. You can then try to verify that subsequent application of $P_{12}^{space}$ and $P_{12}^{spin}$ on the ket \begin{align} \begin{pmatrix} \psi\\ . \end{pmatrix} \otimes \begin{pmatrix} . \\ \psi \end{pmatrix} - \begin{pmatrix} . \\ \psi \end{pmatrix} \otimes \begin{pmatrix} \psi\\ . \end{pmatrix} \end{align} results in minus this ket.
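The bookkeeping above can be made concrete by discretising the spatial coordinate, so the state is an array $\Psi[x_1,\sigma_1,x_2,\sigma_2]$ and the exchange operators become plain axis permutations (a numpy sketch with an illustrative orbital $\psi$):

```python
import numpy as np

n = 5
psi = np.random.rand(n)
psi /= np.linalg.norm(psi)                            # illustrative spatial orbital
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Psi = psi(x1) psi(x2) (up dn - dn up), stored as Psi[x1, s1, x2, s2]
spin = np.einsum('a,b->ab', up, dn) - np.einsum('a,b->ab', dn, up)
Psi = np.einsum('i,j,ab->iajb', psi, psi, spin)

P_full = Psi.transpose(2, 3, 0, 1)    # (x1, s1) <-> (x2, s2)
P_space = Psi.transpose(2, 1, 0, 3)   # x1 <-> x2 only
P_spin = Psi.transpose(0, 3, 2, 1)    # s1 <-> s2 only

print(np.allclose(P_full, -Psi),      # True: antisymmetric under full exchange
      np.allclose(P_space, Psi),      # True: symmetric in space
      np.allclose(P_spin, -Psi))      # True: antisymmetric in spin
```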
{ "language": "en", "url": "https://physics.stackexchange.com/questions/313776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can the same density matrix represent two (or more) different ensembles? Given an ensemble, i.e., a collection of states and the respective probabilities $\{p_i,|i\rangle\}$, one can uniquely construct the density matrix using $\rho=\sum_ip_i|i\rangle\langle i|$. Is the converse also true? Given a density matrix, can we uniquely say which ensemble it refers to, i.e., reconstruct the set $\{p_i,|i\rangle\}$? When I say the ensembles are different, I mean: can the ensembles be distinguished on the basis of the expectation value of some observable?
No, we can't. For example, the ensembles $$\left\{ \left( \frac{1}{3}, |\uparrow\rangle \right), \left( \frac{2}{3}, \frac{1}{\sqrt{2}} (|\uparrow\rangle + |\downarrow\rangle) \right) \right\}$$ and \begin{align*} \left\{ \left( \frac{1}{2} + \frac{\sqrt{5}}{6}, \sqrt{\frac{1}{2} + \frac{1}{2\sqrt{5}}} |\uparrow\rangle + \sqrt{\frac{2}{\sqrt{5}+5}} |\downarrow\rangle \right),\\ \left( \frac{1}{2} - \frac{\sqrt{5}}{6}, \frac{1-\sqrt{5}}{\sqrt{10-2 \sqrt{5}}} |\uparrow\rangle + \sqrt{\frac{1}{2}+\frac{1}{2 \sqrt{5}}}|\downarrow\rangle \right) \right\} \end{align*} both correspond to the same non-degenerate density matrix $$\rho = \left(\begin{array}{cc} \frac{2}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} \end{array} \right).$$ They are completely statistically indistinguishable. (Sorry, there's probably a simpler counterexample.)
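A numeric check of the claim (the second ensemble above is, up to rounding, the eigen-ensemble of $\rho$, so here it is rebuilt from the spectral decomposition rather than transcribed):

```python
import numpy as np

up = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho1 = (1 / 3) * np.outer(up, up) + (2 / 3) * np.outer(plus, plus)

# Spectral decomposition of rho gives the second ensemble,
# with weights 1/2 +- sqrt(5)/6.
w, V = np.linalg.eigh(rho1)
rho2 = sum(w[k] * np.outer(V[:, k], V[:, k]) for k in range(2))

print(np.allclose(rho1, [[2 / 3, 1 / 3], [1 / 3, 1 / 3]]))   # True
print(np.allclose(rho1, rho2))                               # True
print(np.sort(w))   # eigen-probabilities 1/2 -+ sqrt(5)/6
```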
{ "language": "en", "url": "https://physics.stackexchange.com/questions/404241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Gauge boson layout in $\rm SU(5)$ GUT unification It is known that the theory of Grand Unification, $\rm SU(5)$ GUT, encapsulates the gauge groups of the standard model in one large group $\rm SU(5)$. But is it possible to show schematically how to extract a massless photon and a Z boson from it? The problem I see is that in standard (SM) notation there are a few operators that create them: $A_\mu = W^3_\mu\sin\theta_{W} + B_\mu\cos\theta_{W}$ and $Z_\mu = W^3_\mu\cos\theta_{W} - B_\mu\sin\theta_{W}$, corresponding to the 2x2 $\rm SU(2)$ block in the SM. But it seems that $\rm SU(5)$ has a different structure: in the 2x2 $\rm SU(2)$ block the signs are incorrect and the constants near the fields $W$ and $B$ do not correspond to the weak mixing angle. So what is the correct way to look at all this? $V_μ = \begin{pmatrix} G_1^1-\frac{2}{\sqrt{30}}B & G_2^1 & G_3^1 & \bar{X^1} & \bar{Y^1}\\ G_1^2 & G_2^2-\frac{2}{\sqrt{30}}B & G_3^2 & \bar{X^2} & \bar{Y^2}\\ G_1^3 & G_2^3 & G_3^3-\frac{2}{\sqrt{30}}B & \bar{X^3} & \bar{Y^3}\\ X^1 & X^2 & X^3 & \frac{1}{\sqrt{2}}W^3+\frac{3}{\sqrt{30}}B & W^{+}\\ Y^1 & Y^2 & Y^3 & W^{-} & -\frac{1}{\sqrt{2}}W^3+\frac{3}{\sqrt{30}}B\\ \end{pmatrix}$
I am rewriting this to avoid sign confusions, which I originally tried to sidestep by switching the location of the v.e.v.. The mass matrix which you are trying to diagonalize arises out of the 55 entry of the square of your vector field matrix element surviving action on the 5 Higgs field v.e.v., $$ \left (\sqrt{\frac{3}{5}}B-W^3\right )^2 , $$ where I have taken out a common normalization factor, to be incorporated into your embedding normalizations, which does not affect the mixing. Now, define $\tan^2\theta = 3/5$. The above mass term then, up to normalization, may be re-expressed as $$ (B,W^3) \begin{pmatrix} \sin^2 \theta & -\sin \theta\cos \theta \\ -\sin \theta\cos \theta & \cos^2 \theta \end{pmatrix} \begin{pmatrix} B \\ W^3 \end{pmatrix}, $$ whose evident symmetric (zero-determinant) mass-matrix eigenvectors are $(\cos\theta,\sin\theta)^T$ and $(-\sin\theta,\cos\theta)^T$, respectively, orthogonal to each other, of course. The first one has vanishing eigenvalue, so zero mass. The matrix then collapses to $$ \begin{pmatrix} \cos\theta& -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} \begin{pmatrix} \cos\theta& \sin\theta \\ - \sin\theta & \cos\theta \end{pmatrix}. $$ Mindful of irrelevant common factors to be incorporated in the couplings, the massless eigenvector combination amounts to $$ \cos\theta ~B +\sin\theta ~W^3 \mapsto \sqrt{5/8} ~B + \sqrt{3/8} ~W^3 , $$ and the massive one to $$ \mapsto - \sqrt{3/8} B + \sqrt{5/8} W^3, $$ identifiable with the γ and the Z, respectively, the routine expressions you wrote. I don't see any sign discrepancies! The square of the sine of the mixing angle at the GUT scale is then $$ \sin^2\theta = {\tan^2\theta\over 1+\tan^2\theta}= 3/8 \approx 0.38, $$ to be compared to the physical 0.23. In fact, this is sufficiently close (and is dealt with by renormalizing down from the GUT scale to the SM SSB scale) that it counted as an early encouraging success of the SU(5) model.
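A small numeric check of the diagonalization (an illustrative numpy sketch): with $\tan^2\theta = 3/5$ the mass matrix annihilates the photon combination, the orthogonal combination has eigenvalue 1, and $\sin^2\theta = 3/8$.

```python
import numpy as np

theta = np.arctan(np.sqrt(3 / 5))            # tan^2(theta) = 3/5
s, c = np.sin(theta), np.cos(theta)
Mmass = np.array([[s * s, -s * c],
                  [-s * c, c * c]])          # mass matrix, up to normalization

photon = np.array([c, s])                    # cos(t) B + sin(t) W3
zboson = np.array([-s, c])                   # -sin(t) B + cos(t) W3

massless = np.allclose(Mmass @ photon, 0.0)     # zero eigenvalue -> massless
massive = np.allclose(Mmass @ zboson, zboson)   # eigenvalue 1
print(massless, massive, s * s)                 # True True ~0.375
```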
{ "language": "en", "url": "https://physics.stackexchange.com/questions/673845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quantum gate: Phase shift I don't understand how to apply a phase shift gate to a qubit. For example, how to map $|\psi_0\rangle = \cos (30^\circ) |0\rangle + \sin (30^\circ) |1\rangle$ to $|\psi_1\rangle = \cos(-15^\circ) |0\rangle + \sin(-15^\circ) |1\rangle$?
So, you have two vectors. Let $|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $|1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. So, your initial vector is $\begin{pmatrix} \cos \frac{\pi}{6} \\ \sin \frac{\pi}{6} \end{pmatrix}$ and your final vector is $\begin{pmatrix} \cos \frac{-\pi}{12} \\ \sin \frac{-\pi}{12} \end{pmatrix}$. The phase difference between these two vectors, $\theta$, is $$\theta = \cos^{-1} \left[ \frac{\begin{pmatrix} \cos \frac{\pi}{6} \\ \sin \frac{\pi}{6} \end{pmatrix} \cdot \begin{pmatrix} \cos \frac{-\pi}{12} \\ \sin \frac{-\pi}{12} \end{pmatrix}}{\left|\begin{pmatrix} \cos \frac{\pi}{6} \\ \sin \frac{\pi}{6} \end{pmatrix}\right| \left| \begin{pmatrix} \cos \frac{-\pi}{12} \\ \sin \frac{-\pi}{12} \end{pmatrix}\right|} \right].$$ Evaluate $\theta$ and plug the value into $$\begin{pmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{pmatrix}.$$ The resulting matrix will be your gate.
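A numpy sketch of the angle computation described above: for these two real vectors the arccos of the normalised dot product gives $45^\circ$, and a real rotation by $-\theta$ indeed carries $|\psi_0\rangle$ to $|\psi_1\rangle$ (the rotation matrix here is only a cross-check on the angle, not the diagonal gate itself).

```python
import numpy as np

v0 = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])       # |psi_0>
v1 = np.array([np.cos(-np.pi / 12), np.sin(-np.pi / 12)])   # |psi_1>

theta = np.arccos(v0 @ v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
print(np.degrees(theta))   # 45 degrees

# A real rotation by -theta maps v0 onto v1 (30 - 45 = -15 degrees)
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
print(np.allclose(R @ v0, v1))   # True
```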
{ "language": "en", "url": "https://physics.stackexchange.com/questions/56881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solving the Differential Equation for the Two-Body Problem So, I'm following the derivation in D. Morin, Introduction to Classical Mechanics, of the equations for a two-body system. I understand all of it, aside from this one step. When he's talking about solving the Lagrangian for $r(\theta)$, I don't follow the step: $$\tag{*}L = \frac {1}{2} m\dot{r}^{2}+ \frac{\ell^2}{2mr^2}+u(r)$$ and somehow he ends up with $$\tag{7.16}(\frac{1}{r^2} \frac{dr}{d \theta})^2 = \frac{2mE}{\ell^2}- \frac{1}{r^2} - \frac{2mu(r)}{\ell^2}.$$ This is probably a case where my lack of differential equation skills causes trouble.
I think there is a sign problem in one of your formulae; however, with your conventions, this is my derivation: The Lagrangian $L$ is $L = \frac {1}{2} m\dot{r}^{2}+ \frac{l^2}{2mr^2}+u(r) = K - V$ where $K$ is the kinetic energy $\frac {1}{2} m\dot{r}^{2} + \frac{l^2}{2mr^2}$ and $V$ is the potential $V(r) = - u(r)$. The constant energy is then $E = K + V = \frac {1}{2} m\dot{r}^{2} + \frac{l^2}{2mr^2} - u(r)$. Multiplying by $\frac{2m}{l^2}$, we get: $\frac{2m}{l^2} E = \frac{2m}{l^2}(\frac {1}{2} m\dot{r}^{2}) + \frac{1}{r^2} - \frac{2m}{l^2} u(r)$. With $l = m \dot\theta r^2$, you have $\frac{2m}{l^2}(\frac {1}{2} m\dot{r}^{2}) = \frac{2m}{(\large m \dot\theta r^2)^2}(\frac {1}{2} m\dot{r}^{2}) = (\frac{\dot r}{\dot \theta r^2})^2 = (\frac{1}{r^2} \frac{dr}{d \theta})^2$, so we have: $$\frac{2m}{l^2} E = (\frac{1}{r^2} \frac{dr}{d \theta})^2 + \frac{1}{r^2} - \frac{2m}{l^2} u(r)$$ that is: $$(\frac{1}{r^2} \frac{dr}{d \theta})^2 = \frac{2m}{l^2} E - \frac{1}{r^2} + \frac{2m}{l^2} u(r)$$ So I do not understand the minus sign in front of the last term of your second formula.
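The identity (with the plus sign) can be checked symbolically for a concrete case. Taking the attractive inverse-square potential $u(r) = k/r$, whose orbit is the conic $r(\theta) = p/(1+e\cos\theta)$ with $p = \ell^2/(mk)$ and $E = (e^2-1)mk^2/(2\ell^2)$, the relation holds exactly (a sympy sketch):

```python
import sympy as sp

theta, e, m, k, l = sp.symbols('theta e m k l', positive=True)
p = l**2 / (m * k)                        # semi-latus rectum
r = p / (1 + e * sp.cos(theta))           # conic-section orbit
E = (e**2 - 1) * m * k**2 / (2 * l**2)    # orbital energy for u(r) = k/r

lhs = (sp.diff(r, theta) / r**2)**2
rhs = 2 * m * E / l**2 - 1 / r**2 + 2 * m * (k / r) / l**2
print(sp.simplify(lhs - rhs))             # 0
```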
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fourier Transforming the Klein Gordon Equation Starting with the Klein Gordon in position space, \begin{align*} \left(\frac{\partial^2}{\partial t^2} - \nabla^2+m^2\right)\phi(\mathbf{x},t) = 0 \end{align*} And using the Fourier Transform: $\displaystyle\phi(\mathbf{x},t) = \int \frac{d^3p}{(2\pi)^3}e^{i \mathbf{p} \cdot\mathbf{x}}\phi(\mathbf{p},t)$: \begin{align*} \int \frac{d^3p}{(2\pi)^3}\left(\frac{\partial^2}{\partial t^2} - \nabla^2+m^2\right)e^{i \mathbf{p} \cdot\mathbf{x}}\phi(\mathbf{p},t)&=0 \\ \int \frac{d^3p}{(2\pi)^3}e^{i \mathbf{p} \cdot\mathbf{x}}\left(\frac{\partial^2}{\partial t^2} +|\mathbf{p}|^2+m^2\right)\phi(\mathbf{p},t)&=0 \end{align*} Now I don't understand why we are able to get rid of the integral, to be left with \begin{align*} \left(\frac{\partial^2}{\partial t^2} +|\mathbf{p}|^2+m^2\right)\phi(\mathbf{p},t)=0 \end{align*}
The reason you can get rid of the integral and the exponential is due to the uniqueness of the Fourier transform. Explicitly, multiplying by $e^{-i \mathbf{p}' \cdot \mathbf{x}}$ and integrating over $\mathbf{x}$, we have \begin{align} \int \frac{ \,d^3p }{ (2\pi)^3 } e ^{ i {\mathbf{p}} \cdot {\mathbf{x}} } \left( \partial _t ^2 + {\mathbf{p}} ^2 + m ^2 \right) \phi ( {\mathbf{p}} , t ) & = 0 \\ \int d ^3 x \frac{ \,d^3p }{ (2\pi)^3 } e ^{ i ( {\mathbf{p}} - {\mathbf{p}} ' ) \cdot {\mathbf{x}} } \left( \partial _t ^2 + {\mathbf{p}} ^2 + m ^2 \right) \phi ( {\mathbf{p}} , t ) & = 0 \\ \left( \partial _t ^2 + {\mathbf{p}} ^{ \prime 2} + m ^2 \right) \phi ( {\mathbf{p}'} , t ) & = 0 \end{align} where we have used \begin{equation} \int d ^3 x \, e ^{ i ( {\mathbf{p}} - {\mathbf{p}} ' ) \cdot \mathbf{x} } = (2\pi)^3 \, \delta ^{(3)} ( {\mathbf{p}} - {\mathbf{p}} ' ) \end{equation} and \begin{equation} \int \frac{ d ^3 p }{ (2\pi)^3 } \, (2\pi)^3 \, \delta ^{(3)} ( {\mathbf{p}} - {\mathbf{p}} ' ) f ( {\mathbf{p}} ) = f ( {\mathbf{p}} ' ) \end{equation}
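The orthogonality identity used here has a simple discrete analogue that can be checked numerically: on a periodic grid of $N$ points, $\sum_x e^{i(p-p')x} = N\,\delta_{pp'}$, the Kronecker-delta counterpart of $(2\pi)^3\delta^{(3)}$ (a numpy sketch):

```python
import numpy as np

N, L = 64, 2 * np.pi
x = np.arange(N) * L / N              # grid points
ps = 2 * np.pi * np.arange(N) / L     # allowed momenta on the grid
F = np.exp(1j * np.outer(ps, x))      # F[p, x] = e^{i p x}

M = F @ F.conj().T                    # M[p, p'] = sum_x e^{i (p - p') x}
print(np.allclose(M, N * np.eye(N)))  # True
```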
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 4, "answer_id": 3 }
Hamiltonian matrix written in a certain basis If I have a matrix H=$\left( \begin{array}{cc} 0 & b \\ d & 0 \\ \end{array} \right)$ written in the basis |1>=$\left( \begin{array}{c} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \\ \end{array} \right)$ and |2>=$\left( \begin{array}{c} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \\ \end{array} \right)$ shouldn't the following be true: <1|H|1>=0 and <2|H|2>=0, since they are the matrix elements? But if I calculate them directly by matrix multiplication I get the other elements b and d. Where am I wrong? Thank you in advance.
It appears you don't understand the concept of coordinates being based on the basis you choose. And also the H you gave is ambiguous. Is it in the "f" basis, or the "e" basis? (see below) $|1>_{f}$ = $ \begin{pmatrix} \frac{1}{\sqrt{2}}\\ \frac{-1}{\sqrt{2}} \end{pmatrix}_{e} $ and not $|1>_{f}$ = $ \begin{pmatrix} 1 \\ 0 \end{pmatrix}_{e} $ So, assuming that your H is in the "e" basis: <1|H|1> = $ \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{pmatrix}_{e} $ $ \begin{pmatrix} 0 & b\\ d & 0 \end{pmatrix}_{e} $$ \begin{pmatrix} \frac{1}{\sqrt{2}}\\ \frac{-1}{\sqrt{2}} \end{pmatrix}_{e} = $ ... do at least this part yourself Similarly, <2|H|2> = $ \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix}_{e} $ $ \begin{pmatrix} 0 & b\\ d & 0 \end{pmatrix}_{e} $$ \begin{pmatrix} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} \end{pmatrix}_{e} $ = ... do at least this part yourself Note: <1|H|2> = $ \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{pmatrix}_{e} $ $ \begin{pmatrix} 0 & b\\ d & 0 \end{pmatrix}_{e} $$ \begin{pmatrix} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} \end{pmatrix}_{e} $ and you can figure out <2|H|1> similarly. Otherwise, if you gave H in the "f" basis, then yes, <1|H|1> = 0.
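A numpy sketch of these matrix elements with illustrative values $b=2$, $d=3$ (assuming, as above, that $H$ is given in the "e" basis):

```python
import numpy as np

b, d = 2.0, 3.0                      # illustrative values, not from the question
H = np.array([[0.0, b], [d, 0.0]])   # H in the "e" basis
f1 = np.array([1.0, -1.0]) / np.sqrt(2)
f2 = np.array([1.0, 1.0]) / np.sqrt(2)

elem11 = f1 @ H @ f1   # <1|H|1> = -(b + d)/2, not 0
elem22 = f2 @ H @ f2   # <2|H|2> =  (b + d)/2, not 0
print(elem11, elem22)
```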
{ "language": "en", "url": "https://physics.stackexchange.com/questions/306460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove a force driven by a cross product between a vector and its velocity gives a spiral movement parallel to the vector I was given a problem in a Classical Mechanics course that went something like the following: "Consider a particle of mass $m$ moving under the presence of a force $\vec{F} = k\hat{x}\times\vec{v}$, where $\hat{x}$ is an unit vector in the positive direction of the $x$ axis and $k$ any constant. Prove that the movement of a particle under this force is restricted to a circular motion with angular velocity $\vec{\omega}=(k/m)\hat{x}$ or, in a more general case, a spiral movement parallel to the direction of $\hat{x}$." In an elementary college Electrodynamics course you can see and solve that a magnetic force such as $$m\ddot{\vec{r}}=\frac{q}{c}\left(\vec{v}\times\vec{B}\right)$$ with a magnetic field, say, $\vec{B}=B_0\hat{z}$, can drive a particle through a spiral movement in the precise direction of the magnetic field you customize, involving a cyclotron frequency and so on, if and only if you input suitable initial conditions for the motion in x, y and z. My inquiry then is, how can you prove the relation given above for the angular velocity and conclude a spiral movement, from a classical mechanics perspective? I can see there's a link between both procedures, but I cannot solve the first one without giving a glimpse to the latter.
With an aid of differential geometry, velocity, acceleration and jerk can be written as: \begin{align*} \mathbf{v} &= \dot{s} \, \mathbf{T} \\ &= v \, \mathbf{T} \\ \mathbf{a} &= \ddot{s} \, \mathbf{T}+ \kappa \, \dot{s}^2\mathbf{N} \\ \mathbf{b} &= (\dddot{s}-\kappa^2 \dot{s}^3) \mathbf{T}+ (3\kappa \dot{s} \ddot{s}+\dot{\kappa} \dot{s}^2) \mathbf{N}+ \kappa \tau \dot{s}^3 \mathbf{B} \\ \end{align*} Now $$ \mathbf{a}=\boldsymbol{\omega} \times \mathbf{v} \implies \mathbf{a} \perp \mathbf{T} \implies \ddot{s}=0 \implies \dot{s}=\text{constant} \implies v=u$$ and $$ \mathbf{b}= \dot{\mathbf{a}}=\boldsymbol{\omega} \times \mathbf{a} \implies \mathbf{b} \perp \mathbf{a} \implies \mathbf{b} \perp \mathbf{N} \implies \dot{\kappa}\dot{s}^2=0 \implies \kappa=\text{constant}$$ Also, \begin{align*} \int \mathbf{a} \, dt &= \boldsymbol{\omega} \times \int \mathbf{v} \, dt \\ \mathbf{v} &= \mathbf{u}+\boldsymbol{\omega} \times \mathbf{r} \\ \mathbf{r}(0) &= \mathbf{0} \\ \dot{\mathbf{r}}(0) &= \mathbf{u} \\ \boldsymbol{\omega} \cdot \mathbf{v} &= \boldsymbol{\omega} \cdot \mathbf{u} \\ &= \text{constant} \end{align*} We have \begin{align*} \mathbf{v} \times \mathbf{a} &= \mathbf{v} \times (\boldsymbol{\omega} \times \mathbf{v}) \\ &= v^2 \boldsymbol{\omega}-(\boldsymbol{\omega} \cdot \mathbf{v})\mathbf{v} \\ \kappa &= \frac{|v^2 \boldsymbol{\omega}-(\boldsymbol{\omega} \cdot \mathbf{v})\mathbf{v}|} {v^3} \\ &= \frac{|\boldsymbol{\omega} \times \mathbf{v}|}{v^2} \\ &= \frac{|\boldsymbol{\omega} \times \mathbf{u}|}{u^2} \\ |\mathbf{a}| &= |\boldsymbol{\omega} \times \mathbf{u}| \\ &= \text{constant} \\ \mathbf{a} \times \mathbf{b} &= a^2 \boldsymbol{\omega}-(\boldsymbol{\omega} \cdot \mathbf{a}) \mathbf{a} \\ &= a^2 \boldsymbol{\omega} \\ \tau &= \frac{\mathbf{v} \cdot a^2\boldsymbol{\omega}} {(\mathbf{v} \times \mathbf{a})^2} \\ &= \frac{\boldsymbol{\omega} \cdot \mathbf{v}}{v^2} \\ &= \frac{\boldsymbol{\omega} \cdot \mathbf{u}}{u^2} \\ &= \text{constant} \end{align*} Both 
$\kappa$ and $\tau$ are constants, implying the path is a helix. If $\mathbf{u} \cdot \boldsymbol{\omega}=0$ (so $\tau=0$), it is a circle; if $\mathbf{u} \times \boldsymbol{\omega}=\mathbf{0}$ (so $\kappa=0$), it is a straight line. Fitting the initial conditions: $$\fbox{$\quad \mathbf{r}=\mathbf{u}t+ \frac{\mathbf{u} \times \boldsymbol{\omega}}{\omega^2}(\cos \omega t-1)+ \frac{\boldsymbol{\omega} \times (\mathbf{u} \times \boldsymbol{\omega})}{\omega^{3}}(\sin \omega t-\omega t) \quad$}$$ Some facts from differential geometry: \begin{align*} s &= \int |\mathbf{v}| \, dt \tag{arclength} \\ \dot{s} &= |\mathbf{v}| \tag{speed} \\ &= v \\ \mathbf{T} &= \frac{\mathbf{v}}{v} \tag{tangent vector}\\ \mathbf{B} &= \frac{\mathbf{v} \times \mathbf{a}}{|\mathbf{v} \times \mathbf{a}|} \tag{binormal vector} \\ \mathbf{N} &= \mathbf{B} \times \mathbf{T} \tag{normal vector} \\ \kappa &= \frac{|\mathbf{v} \times \mathbf{a}|}{v^3} \tag{curvature} \\ \tau &= \frac{\mathbf{v} \cdot \mathbf{a} \times \mathbf{b}} {(\mathbf{v} \times \mathbf{a})^2} \tag{torsion} \end{align*}
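The boxed trajectory can be checked numerically against a direct integration of $\mathbf{a} = \boldsymbol{\omega}\times\mathbf{v}$ (a numpy sketch with illustrative values of $\boldsymbol{\omega}$ and $\mathbf{u}$, using a hand-rolled RK4 step):

```python
import numpy as np

omega = np.array([0.0, 0.0, 1.7])   # plays the role of (k/m) x_hat; illustrative
u = np.array([0.3, -0.5, 0.4])      # illustrative initial velocity
w = np.linalg.norm(omega)

def closed_form(t):
    # The boxed formula, with r(0) = 0 and v(0) = u.
    return (u * t
            + np.cross(u, omega) / w**2 * (np.cos(w * t) - 1)
            + np.cross(omega, np.cross(u, omega)) / w**3 * (np.sin(w * t) - w * t))

def deriv(y):
    v = y[3:]
    return np.concatenate([v, np.cross(omega, v)])   # r' = v, v' = omega x v

y = np.concatenate([np.zeros(3), u])
dt, steps = 1e-3, 5000
for _ in range(steps):                               # classic RK4
    k1 = deriv(y); k2 = deriv(y + dt / 2 * k1)
    k3 = deriv(y + dt / 2 * k2); k4 = deriv(y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

err = np.linalg.norm(y[:3] - closed_form(steps * dt))
print(err)   # tiny: the two trajectories agree
```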
{ "language": "en", "url": "https://physics.stackexchange.com/questions/317192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Finding the charge distribution on a charged sphere - image method Considering the case where we have a grounded sphere (so the potential on the surface is 0) and the following charge system: Now, I'm trying to solve the case for when, instead of a grounded sphere, we have a charged one, which I assumed to be the same as before, but with an extra charge Q at the centre of the sphere. Solving the Laplace equation for $R\leq r$, I got that: $$\phi (x,y,z)=\frac{1}{4\pi \epsilon_0}[\frac{q}{\sqrt{(D-x)^2 + y^2+z^2}}+\frac{qR}{D\sqrt{(x-b)^2 + y^2+z^2}}+\frac{Q}{\sqrt{x^2 + y^2+z^2}}]$$ But now how do I find the charge distribution around the surface of the sphere? I was thinking of changing everything to spherical coordinates and then solving the equation which comes from Gauss's theorem: $\sigma =-\epsilon_0 \frac{\partial \phi}{\partial r}$ at $r=R$ (the outward normal derivative), with: $$\frac{\partial \phi}{\partial r}=\frac{\partial \phi}{\partial x}\frac{\partial x}{\partial r}+\frac{\partial \phi}{\partial y}\frac{\partial y}{\partial r}+\frac{\partial \phi}{\partial z}\frac{\partial z}{\partial r}$$ But isn't there another way of doing this?
The system can be written as $\phi (r,\theta) =\frac{1}{4\pi \epsilon_0}[\frac{Q}{|r-d|}+\frac{Q_i}{|r-d_i|}] $ Now for $r=a$ we want $ 0=\phi (r=a,\theta) =\frac{1}{4\pi \epsilon_0}[\frac{Q_i}{|a-d_i|}+\frac{Q}{|a-d|}]\\ \Rightarrow Q^2|a-d_i|^2=Q_i^2|a-d|^2\\ \Leftrightarrow Q^2(a^2+d_i^2-2ad_i\cos(\theta))=Q_i^2(a^2+d^2-2ad\cos(\theta))\\ \Leftrightarrow Q^2(a^2+d_i^2)-Q_i^2(a^2+d^2)=2a\cos(\theta)(Q^2d_i-Q_i^2d) $ This must hold for every $\theta$ (the potential on the grounded sphere is uniformly zero, $\phi (r=a,\theta)=\phi (r=a)$), so both sides must vanish separately: $Q^2d_i-Q_i^2d=0\quad (1)$ and therefore $Q^2(a^2+d_i^2)-Q_i^2(a^2+d^2)=0\quad (2)$. Hence $0\overset{(2)}{=}(a^2+d_i^2)-\frac{Q_i^2}{Q^2}(a^2+d^2)\\ \overset{(1)}{=}(a^2+d_i^2)-\frac{d_i}{d}(a^2+d^2)\\ =d_i^2-\frac{(a^2+d^2)}{d}d_i+a^2\\ =(d_i-\frac{a^2}{d})(d_i-d) $ The solution $d_i=d$ would put the image outside the sphere, right on top of the real charge, so instead we take $d_i=\frac{a^2}{d}$, which implies $Q_i=\pm\frac{a}{d}Q$; the potential can only vanish if the two charges have opposite signs, so we take $Q_i=-\frac{a}{d}Q$. Notice that this means $d_i d=a^2$, which is also called a sphere inversion, $OP\times OP^{\prime} =r^2$ (if we were working in $\mathbb{C}$ instead of $\mathbb{R}^2$ this would read $z\overline{z}=|z|^2$); more can be found in the Wikipedia article on the method of images. Putting this back into our system we get $\phi (r,\theta)\\ =\frac{1}{4\pi \epsilon_0}[\frac{Q}{|r-d|}+\frac{Q_i}{|r-d_i|}]\\ =\frac{Q}{4\pi \epsilon_0}[\frac{1}{|r-d|}+\frac{-a/d}{|r-a^2/d|}]\\ =\frac{Q}{4\pi \epsilon_0}[\frac{1}{|r-d|}+\frac{-1}{|rd/a-a|}]\\ =\frac{Q}{4\pi \epsilon_0}[\frac{1}{\sqrt{r^2+d^2-2rd\cos(\theta)}}+\frac{-1}{\sqrt{r^2d^2/a^2+a^2-2rd\cos(\theta)}}]\\ $ The derivative evaluated at $r=a$ gives $\phi_r (r,\theta)|_{r=a}\\ =\frac{Q}{4\pi \epsilon_0}[-\frac{d\cos(\theta)-r}{(r^2+d^2-2rd\cos(\theta))^{3/2}}+\frac{d\cos(\theta)-rd^2/a^2}{(r^2d^2/a^2+a^2-2rd\cos(\theta))^{3/2}}]|_{r=a}\\ =\frac{Q}{4\pi
\epsilon_0}[-\frac{d\cos(\theta)-a}{(a^2+d^2-2ad\cos(\theta))^{3/2}}+\frac{d\cos(\theta)-d^2/a}{(d^2+a^2-2ad\cos(\theta))^{3/2}}]\\ =\frac{Q}{4\pi \epsilon_0}[\frac{a^2-d^2}{a(d^2+a^2-2ad\cos(\theta))^{3/2}}]\\ $ Finally we want the total charge on the surface: $Q_{\text{total}}=\int\sigma(\theta)d\Omega\\ =\int_{-\pi}^{\pi}\int_0^{\pi}\epsilon_0\phi_r(r=a,\theta)a^2\sin(\theta)d\theta d\varphi\\ =\int_{-\pi}^{\pi}\int_{0}^{\pi}\epsilon_0\left(\frac{Q}{4\pi \epsilon_0}\left[\frac{a^2-d^2}{a(d^2+a^2-2ad\cos(\theta))^{3/2}}\right]\right)a^2\sin(\theta)d\theta d\varphi\\ =\frac{a(a^2-d^2)Q}{4\pi}\int_{-\pi}^{\pi}d\varphi\int_{0}^{\pi}\frac{\sin(\theta)}{(d^2+a^2-2ad\cos(\theta))^{3/2}}d\theta\\ =\frac{a(a^2-d^2)Q}{2}\int_{0}^{\pi}\frac{\sin(\theta)}{(d^2+a^2-2ad\cos(\theta))^{3/2}}d\theta\\ =\frac{a(a^2-d^2)Q}{2}\left[\frac{-1/(ad)}{\sqrt{d^2+a^2-2ad\cos(\theta)}}\right]_{0}^{\pi}\\ =\frac{(a^2-d^2)Q}{2d}[\frac{1}{\sqrt{d^2+a^2-2ad}}-\frac{1}{\sqrt{d^2+a^2+2ad}}]\\ =\frac{(a^2-d^2)Q}{2d}[\frac{1}{\sqrt{(d-a)^2}}-\frac{1}{\sqrt{(d+a)^2}}]\\ =\frac{(a^2-d^2)Q}{2d}[\frac{1}{(d-a)}-\frac{1}{(d+a)}]\\ =\frac{(a^2-d^2)Q}{2d}[\frac{2a}{(d^2-a^2)}]\\ =\frac{-aQ}{d}\\ $ where the factor $\frac{-a}{d}$ gives the fraction of the charge $Q$ at distance $d$ that is induced on the grounded sphere of radius $a$.
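The closed form is easy to cross-check numerically. A plain-Python sketch (with arbitrarily chosen values $a=1$, $d=2$, $Q=1$) evaluates the $\theta$ integral with Simpson's rule and recovers $Q_{\text{total}}=-aQ/d$:

```python
import math

a, d, Q = 1.0, 2.0, 1.0   # sphere radius, charge distance, charge (arbitrary)

def f(theta):
    # integrand of the theta integral above
    return math.sin(theta) / (d*d + a*a - 2*a*d*math.cos(theta))**1.5

# composite Simpson rule on [0, pi]
n = 2000
h = math.pi / n
I = f(0.0) + f(math.pi)
for i in range(1, n):
    I += (4 if i % 2 else 2) * f(i * h)
I *= h / 3.0

# prefactor a(a^2-d^2)Q/2 already includes the 2*pi from the phi integral
Q_total = 0.5 * a * (a*a - d*d) * Q * I
print(Q_total)   # close to -a*Q/d = -0.5
```

For these numbers the integral is $\frac{1}{ad}\left[\frac{1}{d-a}-\frac{1}{d+a}\right]=\frac13$, so the numerical result reproduces $-aQ/d$.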
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hamiltonian of two coupled oscillators Let's say I have this system: two different masses with three different springs. It's not very nice to do, but I can find the eigenvalues of this system (it's not nice because the two masses are different and the three springs are different). The Hamiltonian is: $$ H=\frac{p_{1}^{2}}{2m_{1}}+\frac{p_{2}^{2}}{2m_{2}}+\frac{k_{1}x_{1}^{2}}{2}+\frac{k_{2}x_{2}^{2}}{2}+\frac{k\left(x_{2}-x_{1}\right)^{2}}{2} $$ and the expression for $\omega$ (the normal mode frequencies after diagonalization): $$ \omega_{1,2}^{2}=\frac{\frac{m_{2}k_{1}+m_{2}k+km_{1}+m_{1}k_{2}}{m_{1}m_{2}}\pm\sqrt{\left(\frac{m_{2}k_{1}+m_{2}k+km_{1}+m_{1}k_{2}}{m_{1}m_{2}}\right)^{2}-4\left(\frac{k_{1}k+k_{1}k_{2}+kk_{2}}{m_{1}m_{2}}\right)}}{2} $$ Now, let's say I want to write the Hamiltonian in terms of $$ \omega_{1} , \omega_{2} $$ How do I do that? I don't see a way of fixing the omegas to substitute into the Hamiltonian. Thanks! Edit for ytlu: Hey :) Thanks again, but still I don't understand. I don't mind finding the exact omegas. I just want to write the Hamiltonian in a form like this: $$ H=\frac{\hat{p}_{1}^{2}}{2m_{1}}+\frac{\hat{p}_{2}^{2}}{2m_{2}}+\frac{m_{1}\omega_{1}^{2}\hat{x}_{1}^{2}}{2}+\frac{m_{2}\omega_{2}^{2}\hat{x}_{2}^{2}}{2} $$ meaning, a decoupled Hamiltonian. But I'm not sure if it is right to write it and how to show it. For example, if you have the same system but with all the masses equal and all the springs equal then you get $$ \omega_{1}=\sqrt{3k/m} $$ $$ \omega_{2}=\sqrt{k/m} $$ and then it is straightforward to substitute the k into the Hamiltonian to get this form: $$ H=\frac{\hat{p}_{1}^{2}}{2m_{1}}+\frac{\hat{p}_{2}^{2}}{2m_{2}}+\frac{m_{1}\omega_{1}^{2}\hat{x}_{1}^{2}}{2}+\frac{m_{2}\omega_{2}^{2}\hat{x}_{2}^{2}}{2} $$ I'm looking for a way of doing it when all the masses and springs are different
Starting from the dynamical matrix for the normal mode of the coupled oscillation. The normal mode is defined as: $$ \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = e^{i\omega t} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix}. \tag{1} $$ And the equation for the normal mode: $$ -\omega^2 \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -\frac{k_1 + k}{m_1} & \frac{k}{m_1} \\ \frac{k}{m_2} & -\frac{k_2 + k}{m_2} \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} \equiv \mathbf A \vec \xi. \tag{2} $$ The dynamical matrix $\mathbf A$ is not symmetric. To fix this problem, we re-write Eq. 1 as $$ \vec y \equiv \begin{pmatrix} \sqrt{m_1} x_1(t) \\ \sqrt{m_2} x_2(t) \end{pmatrix} = e^{i\omega t} \begin{pmatrix} \sqrt{m_1} \xi_1 \\ \sqrt{m_2}\xi_2 \end{pmatrix}. $$ Equation 2 becomes: \begin{align*} -\omega^2 \begin{pmatrix} \sqrt{m_1} \xi_1 \\\sqrt{m_2} \xi_2 \end{pmatrix} &= \begin{pmatrix} \sqrt{m_1} & 0 \\ 0 & \sqrt{m_2} \end{pmatrix} \begin{pmatrix} -\frac{k_1 + k}{m_1} & \frac{k}{m_1} \\ \frac{k}{m_2} & -\frac{k_2 + k}{m_2} \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{m_1}} & 0 \\ 0 & \frac{1}{\sqrt{m_2}} \end{pmatrix} \begin{pmatrix} \sqrt{m_1} & 0 \\ 0 & \sqrt{m_2} \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} \\ &= \begin{pmatrix} -\frac{k_1 + k}{m_1} & \frac{k}{\sqrt{m_1 m_2}} \\ \frac{k}{\sqrt{m_2 m_1}} & -\frac{k_2 + k}{m_2} \end{pmatrix} \begin{pmatrix} \sqrt{m_1} \xi_1 \\\sqrt{m_2} \xi_2 \end{pmatrix} \\ \end{align*} In terms of the vector $\vec y$, the dynamical matrix becomes symmetric: $$ -\omega^2 \vec y = \begin{pmatrix} -\frac{k_1 + k}{m_1} & \frac{k}{\sqrt{m_1 m_2}} \\ \frac{k}{\sqrt{m_2 m_1}} & -\frac{k_2 + k}{m_2} \end{pmatrix} \begin{pmatrix} \sqrt{m_1} \xi_1 \\\sqrt{m_2} \xi_2 \end{pmatrix} \equiv\mathbf B \vec y. \tag{3} $$ Thus, we can diagonalize the Hamiltonian using the eigenvectors of the matrix $\mathbf B$. They are mutually orthogonal.
Diagonalization of the Hamiltonian: write the Hamiltonian in terms of $\vec y$, defined as $$ \vec y(t) \equiv \begin{pmatrix} \sqrt{m_1} x_1(t) \\ \sqrt{m_2} x_2(t) \end{pmatrix}. $$ The Hamiltonian becomes \begin{align*} H &= \frac{1}{2} \left(\frac{\sqrt{m_1} dx_1}{dt}\right)^2 + \frac{1}{2} \left(\frac{\sqrt{m_2} dx_2}{dt}\right)^2 \\ &+ \frac{1}{2} \begin{pmatrix} \sqrt{m_1}x_1(t) & \sqrt{m_2}x_2(t) \end{pmatrix} \begin{pmatrix} +\frac{k_1 + k}{m_1} & -\frac{k}{\sqrt{m_1 m_2}} \\ -\frac{k}{\sqrt{m_2 m_1}} & +\frac{k_2 + k}{m_2} \end{pmatrix} \begin{pmatrix} \sqrt{m_1} x_1(t) \\ \sqrt{m_2}x_2(t) \end{pmatrix}\\ &= \frac{1}{2} \dot{\vec y}^T \dot{\vec y} + \frac{1}{2} \vec y^T (-\mathbf B) \vec y \end{align*} (note that the potential matrix here is $-\mathbf B$, since $\mathbf B$ in Eq. 3 carries an overall minus sign). Because $\mathbf B$ is a symmetric matrix, its eigenvectors form the rows of an orthogonal matrix $$ \mathbf R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} =\begin{pmatrix}\hat v_1^T \\ \hat v_2^T \end{pmatrix} $$ where $\hat v_1$ and $\hat v_2$ are the two eigenvectors of $-\mathbf B$: \begin{align*} -\mathbf B \hat v_1 &= \lambda_1 \hat v_1 = \omega_1^2 \hat v_1 \\ -\mathbf B \hat v_2 &= \lambda_2 \hat v_2 = \omega_2^2 \hat v_2 \end{align*} $\mathbf R \vec y = \vec \eta$, and $\vec y^T \mathbf R^T = \vec \eta^T$, with $\mathbf R^T \mathbf R =\mathbf I$. Apply the transformation to the above Hamiltonian: \begin{align*} H &= \frac{1}{2} \dot{\vec y}^T \dot{\vec y} + \frac{1}{2} \vec y^T (-\mathbf B) \vec y\\ &= \frac{1}{2} \dot{\vec y}^T \left( \mathbf R^T \mathbf R\right) \dot{\vec y} + \frac{1}{2} \vec y^T\left( \mathbf R^T \mathbf R\right) (-\mathbf B) \left( \mathbf R^T \mathbf R\right)\vec y\\ &= \frac{1}{2} \dot{\vec \eta}^T \dot{\vec \eta} + \frac{1}{2} \vec \eta^T \mathbf R (-\mathbf B) \mathbf R^T \vec \eta\\ &= \frac{1}{2} \left(\dot{\eta_1}^2 + \dot{\eta_2}^2\right) + \frac{1}{2} \left(\omega_1^2 \eta_1^2 + \omega_2^2 \eta_2^2 \right) \end{align*} where $\mathbf R (-\mathbf B) \mathbf R^T$ is a diagonal matrix.
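A quick numerical check of this construction, in plain Python: the eigenvalues of the symmetric mass-weighted stiffness matrix reproduce the closed-form $\omega_{1,2}^2$ quoted in the question (with a positive overall sign), and the equal-mass, equal-spring case reduces to $\omega^2 = k/m$ and $3k/m$:

```python
import math

def omega2(m1, m2, k1, k2, k):
    # eigenvalues of the symmetric mass-weighted stiffness matrix (= -B)
    b11 = (k1 + k) / m1
    b22 = (k2 + k) / m2
    b12 = -k / math.sqrt(m1 * m2)
    tr, det = b11 + b22, b11 * b22 - b12 * b12
    disc = math.sqrt(tr * tr - 4 * det)
    return 0.5 * (tr - disc), 0.5 * (tr + disc)

def omega2_formula(m1, m2, k1, k2, k):
    # the closed form quoted in the question (overall sign made positive)
    A = (m2 * k1 + m2 * k + k * m1 + m1 * k2) / (m1 * m2)
    B = (k1 * k + k1 * k2 + k * k2) / (m1 * m2)
    disc = math.sqrt(A * A - 4 * B)
    return 0.5 * (A - disc), 0.5 * (A + disc)

lo, hi = omega2(1.3, 2.7, 0.8, 1.9, 0.5)          # arbitrary parameters
flo, fhi = omega2_formula(1.3, 2.7, 0.8, 1.9, 0.5)
print(abs(lo - flo), abs(hi - fhi))   # both ~0

# equal masses and springs: omega^2 = k/m and 3k/m
lo_eq, hi_eq = omega2(1.0, 1.0, 2.0, 2.0, 2.0)
print(lo_eq, hi_eq)                   # 2.0 and 6.0
```

The agreement follows because the trace and determinant of the mass-weighted matrix equal the two coefficients appearing under the square root in the quoted formula.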
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Problem with calculating density matrix I am trying to calculate the density matrix for the following mixed state $|\Omega\rangle$: $|\Omega\rangle=\frac{1}{2}(|\Psi\rangle+|\Phi\rangle)$ $|\Psi\rangle=\frac{1}{\sqrt{2}}(|u\rangle+|d\rangle)$ $|\Phi\rangle=\frac{1}{\sqrt{2}}(|u\rangle-|d\rangle)$ using two different paths but the answers I get are different. The first result seems to be correct and makes sense to me, but I can not arrive at the same result using the second method! Can you please tell me where I am making the mistake? * *$\rho=\sum_{1}^{k}p_{k}\rho_{k}$ $\rho_{\Psi}=|\Psi\rangle\langle\Psi|= \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix}= \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix} $ $\rho_{\Phi}=|\Phi\rangle\langle\Phi|= \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{pmatrix}= \begin{pmatrix} \frac{1}{2} & \frac{-1}{2} \\ \frac{-1}{2} & \frac{1}{2} \end{pmatrix} $ $\rho=\frac{1}{2}\left( \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix} + \begin{pmatrix} \frac{1}{2} & \frac{-1}{2} \\ \frac{-1}{2} & \frac{1}{2} \end{pmatrix}\right)= \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{pmatrix} $ *$|\Omega\rangle=\frac{1}{2}(|\Psi\rangle+|\Phi\rangle)= \frac{1}{2}(\frac{(|u\rangle+|d\rangle)}{\sqrt{2}}+\frac{(|u\rangle-|d\rangle)}{\sqrt{2}})=\frac{2|u\rangle}{2\sqrt{2}}=\frac{|u\rangle}{\sqrt{2}}$ $\rho=|\Omega\rangle\langle\Omega|= \begin{pmatrix} \frac{1}{\sqrt{2}} \\ 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} & 0 \end{pmatrix}= \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 0 \end{pmatrix} $
The second approach is not a valid approach because your final $\rho$ is not a pure state (and thus cannot be represented as a state vector). The fact that only pure states can be represented as vectors (bras/kets or row/column vectors) is the reason we need something more general, such as the density matrix, in the first place. $\rho_\Omega$ is an incoherent mixture of pure states, so it can only be calculated via a procedure such as what you did in #1, i.e. as a convex sum of other density matrices (which may or may not themselves be pure states). In your second approach, the key misconception was the belief that you can write an incoherent mixture as some pure state, $\left|\Omega\right\rangle$. This would only work if the mixture were a coherent mixture of states (i.e. a quantum superposition state). Note an indication that this wasn't correct is the fact that $\left|\Omega\right\rangle$ isn't a valid/normalized state (i.e. $\langle\Omega|\Omega\rangle = \tfrac{1}{2} \ne 1$).
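The difference between the two procedures is easy to see numerically; here is a minimal sketch in plain Python (no external libraries):

```python
from math import sqrt

def outer(v):
    # |v><v| for a real 2-component vector
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

psi = [1 / sqrt(2),  1 / sqrt(2)]   # (|u> + |d>)/sqrt(2)
phi = [1 / sqrt(2), -1 / sqrt(2)]   # (|u> - |d>)/sqrt(2)

# correct procedure: convex sum of the two pure-state density matrices
rho_psi, rho_phi = outer(psi), outer(phi)
rho = [[0.5 * rho_psi[i][j] + 0.5 * rho_phi[i][j] for j in range(2)]
       for i in range(2)]
print(rho)   # ~ [[0.5, 0], [0, 0.5]], the maximally mixed state

# the vector Omega = (psi + phi)/2 is not normalized: <Omega|Omega> = 1/2
omega = [(psi[i] + phi[i]) / 2 for i in range(2)]
norm2 = sum(c * c for c in omega)
print(norm2)   # ~ 0.5, not 1 -- so Omega is not a valid state vector

# purity distinguishes the two: Tr(rho^2) = 1/2 < 1 for the mixture
purity = sum(rho[i][j] * rho[j][i] for i in range(2) for j in range(2))
print(purity)   # ~ 0.5
```

The purity $\operatorname{Tr}\rho^2 = \tfrac12$ confirms that the correct $\rho$ cannot be any single ket's projector, for which the purity would be $1$.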
{ "language": "en", "url": "https://physics.stackexchange.com/questions/658347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
College Board Problem: Conservation of Momentum -> Conservation of Energy in a Spring In my AP Physics C class today, we ran into a problem written by College Board whose answer we disputed. The problem is as follows: A block of mass $M=5.0 \ \mathrm{kg}$ is hanging at equilibrium from an ideal spring with spring constant $k=250 \ \mathrm{N/m}$. An identical block is launched up into the first block. The new block is moving with a speed of $v=5.0 \ \mathrm{m/s}$ when it collides with and sticks to the original block. Calculate the maximum compression of the spring after the collision of the two blocks. According to the College Board answer key, the answer is $0.5 \ \mathrm{m}$: $p_1=p_2$ $Mv_0=(M+M)v_2$ $v_2=\frac{1}{2}v_0= \left (\frac{1}{2} \right)\left (5.0 \frac{m}{s}\right)$ $v_2=2.5 \frac{m}{s}$ $K_1 + U_1=K_2+U_2$ $\frac{1}{2}mv_1^2 +0=0+\frac{1}{2}kx_2^2$ $x_2=\sqrt{\frac{m}{k}}v_1= \sqrt{\frac{(10 \ \mathrm{kg})}{\left(250 \frac{N}{m}\right)}} \left(2.5 \frac{m}{s}\right)$ $x_2=0.50 \ \mathrm{m}=50 \ \mathrm{cm}$ However, half of us disputed this during class. We argued that, yes, $U_2$ includes $\frac{1}{2}kx^2$, but it also includes gravitational potential energy at the maximum compression (that is, when it compresses $x$ meters from equilibrium, the mass $M$ is $x$ meters higher above ground). Thus $K_1+U_1=K_2+U_2$ is $\frac{1}{2}mv^2+0=0+\frac{1}{2}kx^2+mgx$. When $mgx$ is included, $x$ is $0.24 \ \mathrm{m}$, not $0.5 \ \mathrm{m}$. My physics teacher reluctantly agreed with College Board but could not give a solid explanation why. He said he would e-mail College Board, but in the meantime, I would very much appreciate any input from people who know the answer.
I guess this is a homework/check-my-work problem, so by the letter of the law I should not answer, but I would argue there is broad interest in solving it correctly given that a supposedly reputable source is presenting an incorrect solution. Here is how I would do this. Initially, the spring is stretched distance $d=mg/k$ below its equilibrium position. Choose the position of the hanging block as $y=0$, so the gravitational potential energy immediately after the collision is zero. In terms of the speed $v=v_0/2$ of the blocks after the collision and the mass $m$ of one block, the total energy immediately after the collision is \begin{align} E_i &= \frac{1}{2}(2m) v^2 + \frac{1}{2}kd^2\\ &= \frac{1}{4}mv_0^2 + \frac{1}{2}\frac{m^2g^2}{k}. \end{align} Let $h$ be the distance of the blocks above the equilibrium position of the spring when the blocks are at their maximum height. At this point, the blocks are at rest, so their total energy is \begin{align} E_f &= \frac{1}{2}kh^2 + 2mg(h + d)\\ &= \frac{1}{2}kh^2 + 2mgh + \frac{2m^2g^2}{k}. \end{align} Using conservation of energy, \begin{align} &\frac{1}{4}mv_0^2 + \frac{1}{2}\frac{m^2g^2}{k} = \frac{1}{2}kh^2 + 2mgh + \frac{2m^2g^2}{k}\\ \rightarrow &\frac{1}{2}kh^2 + 2mgh + \frac{3}{2}\frac{m^2g^2}{k} - \frac{1}{4}mv_0^2 = 0. \end{align} We can solve this quadratic equation for $h$ to obtain \begin{align} h = -\frac{2mg}{k} + \sqrt{\frac{m^2g^2}{k^2} + \frac{mv_0^2}{2k}}. \end{align} In terms of the given numbers, $v_0 = 5.0\,\text{m}/\text{s}$, $m=5.0\,\text{kg}$, and $k=250\,\text{N}/\text{m}$, we get $\boxed{h=15\,\text{cm}.}$ Note that if we set $g=0$ so that there is no gravity, we get \begin{align} h = \sqrt{\frac{mv_0^2}{2k}} = \boxed{50\,\text{cm}.} \end{align} We are left to conclude that the author of the solution was likely in free-fall at the time of its writing.
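For the record, the numbers can be checked in a couple of lines of Python (using $g=9.8\,\mathrm{m/s^2}$):

```python
import math

m, k, v0 = 5.0, 250.0, 5.0   # kg, N/m, m/s

def h_max(g):
    # positive root of (k/2) h^2 + 2 m g h + (3/2) m^2 g^2 / k - m v0^2 / 4 = 0
    return -2 * m * g / k + math.sqrt((m * g / k) ** 2 + m * v0 ** 2 / (2 * k))

print(h_max(9.8))   # ~0.145 m, i.e. about 15 cm
print(h_max(0.0))   # 0.5 m -- the College Board answer, valid only without gravity
```

Setting $g=0$ in the same formula reproduces the answer key's $50\,\text{cm}$, which confirms that the key simply dropped the gravitational term.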
{ "language": "en", "url": "https://physics.stackexchange.com/questions/680196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Should normalisation factor in a QM always be positive? I have a fairly simple question about a normalisation factor. After normalising a wavefunction for a particle in an infinite square well on an interval $-L/2<x<L/2$ I got a quadratic equation for a normalisation factor $A_0$ which has a solution like this: $$A_0 = - \frac{8\pi}{6} \pm \sqrt{\frac{8^2\pi^2}{36}+\frac{3}{4}}$$ The sign $\pm$ gives me two options while a normalisation factor can only be one value. So I want to know if there are any criteria on which I could decide which value to choose. Is it possible that the normalisation factor should always be a positive value? I added the procedure I used to derive this equation:
I think you made a mistake. Using your wavefunction and noting that $$\int^\frac{L}{2}_{-\frac{L}{2}} \sin\left(\frac{\pi x}{L}\right)\sin\left(\frac{2\pi x}{L}\right) dx = \frac{4L}{3\pi}$$ we see that $$1=\int^{L/2}_{-L/2}|\psi(x)|^2 dx = \frac{2}{L}\left(|A_0|^2\frac{L}{2} +\frac{4L}{3\pi} \left(A_0 + A_0^*\right)+ \frac{1}{4}\frac{L}{2}\right) = \left(|A_0|^2 +\frac{8}{3\pi} \left(A_0 + A_0^*\right)+ \frac{1}{4}\right)$$ and so we have the constraint $$\boxed{|A_0|^2 +\frac{8}{3\pi} \left(A_0 + A_0^*\right)+ \frac{1}{4} = 1}$$ If we let $A_0 = a+ib$, with $a,b$ real, this reduces to $$a^2+b^2 +\frac{16}{3\pi}a + \frac{1}{4} = 1 $$ If we make the choice that $A_0$ is real, that is $A_0 = a$, we find $$A_0^2 +\frac{16}{3\pi} A_0 + \frac{1}{4} = 1 ~~\implies ~~ \boxed{A_0 = -\frac{8}{3\pi}\pm\sqrt{\left(\frac{8}{3\pi}\right)^2+\frac{3}{4}}}$$ which is what I am guessing you were trying to get. We see that the most general solution involves both $a$ and $b$ - it is complex. To find a general result, parameterized by $a$, we solve $$a^2+b^2 +\frac{16}{3\pi}a + \frac{1}{4} = 1~~\implies ~~ b = \pm \sqrt{\frac{3}{4}-a^2-\frac{16}{3\pi}a}$$ and so our general solution is $$\boxed{A_0 = a \pm i\sqrt{\frac{3}{4}-a^2-\frac{16}{3\pi}a}} $$ (note we must restrict $a$ to take values for which $\sqrt{\frac{3}{4}-a^2-\frac{16}{3\pi}a}$ is real). So we have a range of possible solutions. I cannot see any way to make a choice of one solution over any other. Added after comments below: It is likely that the question (wherever you found it) has a typo and the interval of interest is actually $[0,L]$. This makes more sense, because the first two energy eigenstates of an infinite square well potential on the interval $[0,L]$ are (up to a normalization) $\sin\left(\frac{\pi x}{L}\right)$ and $\sin\left(\frac{2\pi x}{L}\right)$. Under this scenario, the problem of normalization is much simpler, because these two functions are orthogonal on this interval.
This means that we have $$1 = \int^L_0 dx |\psi(x)|^2 = |A_0|^2 + \frac{1}{4} \implies |A_0| = \frac{\sqrt{3}}{2}$$ so we would have $A_0 = e^{i\theta}\frac{\sqrt{3}}{2}$ for arbitrary, real $\theta$.
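One can verify numerically that both real roots of the quadratic satisfy the normalization constraint, which is the point: normalization alone cannot pick one over the other. A short Python sketch:

```python
import math

# real solutions of  A0^2 + (16/(3*pi)) A0 + 1/4 = 1
b = 16 / (3 * math.pi)            # linear coefficient of the quadratic
roots = [-b / 2 + math.sqrt((b / 2) ** 2 + 0.75),
         -b / 2 - math.sqrt((b / 2) ** 2 + 0.75)]

# plug each root back into the constraint |A0|^2 + (8/(3*pi))(A0 + A0*) + 1/4
norms = [A0 * A0 + (8 / (3 * math.pi)) * (2 * A0) + 0.25 for A0 in roots]
print(roots)   # one positive root and one negative root
print(norms)   # both evaluate to 1, so both choices normalize the state
```

Both roots give a unit-norm wavefunction; they differ only by an overall (real) phase-like freedom, consistent with the $e^{i\theta}$ freedom noted at the end.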
{ "language": "en", "url": "https://physics.stackexchange.com/questions/73210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability of measuring two observables in a mixed state Let's say I have a density matrix in the usual basis $$ \rho = \left( \begin{array}{cccc} \frac{3}{14} & \frac{3}{14} & 0 & 0 \\ \frac{3}{14} & \frac{3}{14} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{4}{7} \\ \end{array} \right) $$ built from the two states $$|v_1\rangle=\frac{1}{\sqrt{2}}\left( \begin{array}{c} 1\\ 1 \\ 0 \\ 0 \\ \end{array} \right);|v_2\rangle=\left( \begin{array}{c} 0\\ 0 \\ 0 \\ 1 \\ \end{array} \right)$$ with weights $\frac{3}{7}$ and $\frac{4}{7}$ respectively, and two observables A and B $$A=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \\ \end{array} \right) $$ $$B=\left( \begin{array}{cccc} 3 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \\ \end{array} \right) $$ If you measure $A$ and $B$ at the same time, what you are doing is measuring $A$ with some state and measuring $B$ with a state that is not necessarily the same, but may be. So if I am trying to find the probability of measuring 2 and 4, is that $p(2)\times p(4)$? That's $p(2) = \frac{4}{7}$ and $p(4) = \frac{11}{14}$, so obtaining the two at the same time is $ \frac{4}{7}\times\frac{11}{14} = \frac{22}{49}$. What confuses me is why this is lower than $\frac{4}{7}$: since $|v_2\rangle$ has that chance of being measured, the other state just adds chances of measuring 4 with B. What's really going on here?
If you are measuring both observables at the same time (which is possible, since the two observables commute), then you have to do one measurement which measures both quantities. Therefore the measurement must result in one of the common eigenstates of $A$ and $B$. Now it turns out that $A$ and $B$ have a unique common set of eigenstates, which are just the basis states. Therefore the probabilities are just the diagonal elements of the density matrix, that is (naming $a$ the measurement result of $A$ and $b$ the measurement result of $B$): $$\begin{aligned} p(a=1 \land b=3) &= \frac{3}{14}\\ p(a=1 \land b=4) &= \frac{3}{14}\\ p(a=2 \land b=3) &= 0\\ p(a=2 \land b=4) &= \frac{4}{7} \end{aligned}$$ Note that $p(a=2 \land b=4) \ne p(a=2)\cdot p(b=4)$ since the events are not independent of each other. But that's not specifically quantum; if you know $a=2$, you already know that $b$ must be $4$, while from learning $a=1$ you don't learn anything about the measurement result $b$. Already classical probability theory tells you that in that case, the product formula for the probabilities doesn't hold. Indeed, given that there is a single combined measurement, you could split the measurement into two steps; first doing the measurement (which produces the values for $a$ and $b$), and then reading the measurement results (which is a completely classical process, described by ordinary probability theory).
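The bookkeeping can be reproduced exactly with Python's `fractions.Fraction`; a small sketch:

```python
from fractions import Fraction as F

# diagonal of rho in the common eigenbasis of A and B
diag = [F(3, 14), F(3, 14), F(0), F(4, 7)]
a_vals = [1, 1, 2, 2]   # eigenvalues of A on the basis states
b_vals = [3, 4, 3, 4]   # eigenvalues of B on the basis states

def p(a, b):
    # joint probability of outcomes (a, b) in the single combined measurement
    return sum(w for w, av, bv in zip(diag, a_vals, b_vals)
               if av == a and bv == b)

p_a2 = sum(w for w, av in zip(diag, a_vals) if av == 2)   # marginal for a = 2
p_b4 = sum(w for w, bv in zip(diag, b_vals) if bv == 4)   # marginal for b = 4

print(p(2, 4))       # 4/7
print(p_a2 * p_b4)   # 22/49 -- the (incorrect) independence assumption
```

This makes the correlation explicit: the joint probability $4/7$ exceeds the product of the marginals because the outcome $a=2$ already forces $b=4$.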
{ "language": "en", "url": "https://physics.stackexchange.com/questions/110059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Matrix exponentiation of Pauli matrix I was working through the operation of the time reversal operator on a spinor as was answered in this question, however, I cannot figure out how this step was done: $$e^{-i \large \frac{\pi}{2} \sigma_y} = -i\sigma_y.$$ I suspect it has something to do with a Taylor series expansion. Here $\sigma_y$ is the Pauli matrix, which has the form $\sigma_y=\begin{pmatrix}0&-i\\i&0\end{pmatrix}$.
The relation is shown using a Taylor series of the exponential: $e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...$ so that $e^{-i\pi/2\sigma_y}$ can be expanded. $e^{-\frac{i\pi\sigma_y}{2}}=1+\left(-\frac{i\pi\sigma_y}{2}\right)+\frac{\left(-\frac{i\pi\sigma_y}{2}\right)^2}{2!}+\frac{\left(-\frac{i\pi\sigma_y}{2}\right)^3}{3!}+\frac{\left(-\frac{i\pi\sigma_y}{2}\right)^4}{4!}+\frac{\left(-\frac{i\pi\sigma_y}{2}\right)^5}{5!}+...$ Noting that $\sigma_y^2=\begin{pmatrix}0&-i\\i&0\end{pmatrix}\begin{pmatrix}0&-i\\i&0\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}=I$ then \begin{equation} \begin{aligned} e^{-\frac{i\pi\sigma_y}{2}}&=1-i\sigma_y(\pi/2)-\frac{(\pi/2)^2}{2!}+i\sigma_y\frac{(\pi/2)^3}{3!}+\frac{(\pi/2)^4}{4!}-i\sigma_y\frac{(\pi/2)^5}{5!}+...\\ &=\bigg\{1-\frac{(\pi/2)^2}{2!}+\frac{(\pi/2)^4}{4!}+...\bigg\}-i\sigma_y\bigg\{(\pi/2)-\frac{(\pi/2)^3}{3!}+...\bigg\}\\ &=\cos(\pi/2)-i\sigma_y\sin(\pi/2)\\ &=-i\sigma_y \end{aligned} \end{equation} Here the Taylor series for cos and sin were used to simplify the infinite sequence: $\cos(x)=1-\frac{x^2}{2!}+\frac{x^4}{4!}+...$ and $\sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}+...$
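The series really does converge to $-i\sigma_y$; here is a direct check in plain Python that sums the Taylor series of the $2\times 2$ matrix exponential term by term:

```python
import math

def matmul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sigma_y = [[0, -1j], [1j, 0]]
M = [[-1j * (math.pi / 2) * sigma_y[i][j] for j in range(2)] for i in range(2)]

# exp(M) = sum_n M^n / n!, built iteratively
result = [[1 + 0j, 0j], [0j, 1 + 0j]]   # n = 0 term (identity)
term = [[1 + 0j, 0j], [0j, 1 + 0j]]
for n in range(1, 30):
    term = matmul(term, M)
    term = [[term[i][j] / n for j in range(2)] for i in range(2)]
    result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]

minus_i_sigma_y = [[0, -1], [1, 0]]     # -i * sigma_y
err = max(abs(result[i][j] - minus_i_sigma_y[i][j])
          for i in range(2) for j in range(2))
print(err)   # ~0 (machine precision)
```

Thirty terms are far more than enough, since $\|M\|\sim\pi/2$ and the factorial in the denominator dominates quickly.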
{ "language": "en", "url": "https://physics.stackexchange.com/questions/510221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Second-order energies of a quartic pertubation of a harmonic oscillator A homework exercise was to calculate the second-order perturbation of a quantum anharmonic oscillator with the potential $$ V(x) = \frac{1}{2}x^2 + \lambda x^4. $$ We set $\hbar = 1$, $m=1$, etc. Using the harmonic oscillator $H = \frac{1}{2}p^2 + \frac{1}{2}x^2$ as my basis hamiltonian, I calculated the perturbed ground state energy multiplication factors as $$ E_0(\lambda) = 1 + \frac{3}{4} \lambda - \frac{21}{\color{red}{8}} \lambda^2 + \mathcal{O}(\lambda^3) $$ while our lecture notes state $$ E_0(\lambda) = 1 + \frac{3}{4} \lambda - \frac{21}{\color{red}{16}} \lambda^2 + \mathcal{O}(\lambda^3). $$ I did not find any sources in literature, neither did I find a mistake in my calculations yet. Which one is correct?
It seems that your result is not correct (with your convention for the Hamiltonian): Starting from your Hamiltonian $H = \frac{1}{2}p^2 + \frac{1}{2}x^2 + \lambda x^4$, and with: $$E_0^{(1)} = V_{00}, \quad E_0^{(2)} = \sum\limits_{m \neq 0} \frac{|V_{0m}|^2}{E_0-E_m} \tag{1}$$ Here: $E_n^{(0)} = n + \frac{1}{2}$, with $V_{00} = \lambda \langle 0|X^4|0\rangle$ and $|V_{0m}|^2 = \lambda^2 |\langle 0|X^4|m\rangle|^2$, and with $X = \frac{1}{\sqrt{2}} (a+a^+), P = \frac{i}{\sqrt{2}}(a^+-a)$ By applying successively ($4$ times) the operator $X = \frac{1}{\sqrt{2}} (a+a^+)$ on the state $|0\rangle$ (with the rules $a|n\rangle = \sqrt{n}|n-1\rangle$ and $a^+|n\rangle = \sqrt{n+1}|n+1\rangle$) you find: $X^4|0\rangle = \dfrac{1}{\sqrt{2}^4}(\sqrt{24}|4\rangle + 2\sqrt{18}|2\rangle + 3|0\rangle) \tag{2}$ So, finally: $$E_0^{(1)} = \frac{3}{4} \lambda\tag{3}$$ $$E_0^{(2)} = -\frac{1}{2^4}\left(\frac{24}{4} + \frac{4\cdot 18}{2}\right)\lambda^2 = - \frac{42}{16} \lambda^2 = - \frac{21}{8} \lambda^2 \tag{4}$$ So, finally, the (absolute) modified energy for the ground state is: $$E_0(\lambda) = \frac{1}{2} + \frac{3}{4} \lambda - \frac{21}{8} {\lambda^2} \tag{5}$$ This is compatible with the other convention for the Hamiltonian (in my reference) which is: $H' = p^2 + x^2 + \lambda x^4 = 2(\frac{1}{2}p^2 + \frac{1}{2}x^2 + \frac{\lambda}{2} x^4)$, the (absolute) modified energy for the ground state is then: $E'_0(\lambda) = 2 E_0(\frac{\lambda}{2}) = 2(\frac{1}{2} + \frac{3}{4} \frac{\lambda}{2} - \frac{21}{8} \frac{{\lambda^2}}{4})=1 + \frac{3}{4} \lambda - \frac{21}{16} {\lambda^2} \tag{6}$ Now, if you want relative factors, you have to consider $\frac{E_0(\lambda)}{E_0(0)}$ or $\frac{E'_0(\lambda)}{E'_0(0)}$, depending on the Hamiltonian you are considering, so with your Hamiltonian $H$, you have: $$\frac{E_0(\lambda)}{E_0(0)} = 1+ \frac{3}{2} \lambda - \frac{21}{4} {\lambda^2} \tag{7}$$ while, with the Hamiltonian $H'$, we get: $$\frac{E'_0(\lambda)}{E'_0(0)}=1 + \frac{3}{4}
\lambda - \frac{21}{16} {\lambda^2}\tag{8}$$
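The matrix elements in Eq. (2) and the sums (3)-(4) can be checked numerically in a truncated oscillator basis; a plain-Python sketch (the truncation $N=12$ is safe here because $X^4|0\rangle$ only reaches $|4\rangle$):

```python
from math import sqrt

N = 12   # truncated harmonic-oscillator basis

# <i|X|j> for X = (a + a^dagger)/sqrt(2)
X = [[0.0] * N for _ in range(N)]
for j in range(N):
    if j >= 1:
        X[j - 1][j] = sqrt(j / 2)        # lowering part
    if j + 1 < N:
        X[j + 1][j] = sqrt((j + 1) / 2)  # raising part

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

X2 = matmul(X, X)
X4 = matmul(X2, X2)

E1 = X4[0][0]                                          # <0|X^4|0>
E2 = sum(X4[m][0] ** 2 / (0 - m) for m in range(1, N)) # E_0 - E_m = -m

print(E1)   # ~0.75   (= 3/4)
print(E2)   # ~-2.625 (= -21/8)
```

Only the $m=2$ and $m=4$ terms contribute to the second-order sum, matching the two terms inside the parentheses of Eq. (4).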
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
QFT: differential cross section from center of mass to lab frame I have the following process: two ingoing particles, a photon hitting a nucleus, and two outgoing particles, the nucleus and a pion. I have computed $|M|^2$ and the differential cross section in the center of mass frame $\frac{d \sigma}{d \Omega_{CM}}$; I now have to go to the lab frame, where the nucleus is initially at rest, consider the limit of an infinitely massive nucleus $M_N \to \infty$, and compute $\frac{d \sigma}{d \Omega_{lab}}$. Is there a general procedure to go from the first to the second? I first wrote $\frac{d \sigma}{dt}$ and then multiplied it by a rather complicated expression that I found in a book to obtain $\frac{d \sigma}{d \Omega_{lab}}$. However, taking the infinitely massive nucleus limit, the result I get is not what I'm supposed to obtain.
A Lorentz boost can be used to go from the center of mass frame to the lab frame. Mandelstam variables are invariant under a boost. Here is the boost procedure for Compton scattering: In the center of mass frame, let $p_1$ be the inbound photon, $p_2$ the inbound electron, $p_3$ the scattered photon, $p_4$ the scattered electron. \begin{equation*} p_1=\begin{pmatrix}\omega\\0\\0\\ \omega\end{pmatrix} \qquad p_2=\begin{pmatrix}E\\0\\0\\-\omega\end{pmatrix} \qquad p_3=\begin{pmatrix} \omega\\ \omega\sin\theta\cos\phi\\ \omega\sin\theta\sin\phi\\ \omega\cos\theta \end{pmatrix} \qquad p_4=\begin{pmatrix} E\\ -\omega\sin\theta\cos\phi\\ -\omega\sin\theta\sin\phi\\ -\omega\cos\theta \end{pmatrix} \end{equation*} where $E=\sqrt{\omega^2+m^2}$. It is easy to show that \begin{equation} \langle|\mathcal{M}|^2\rangle = \frac{e^4}{4} \left( \frac{f_{11}}{(s-m^2)^2} +\frac{f_{12}}{(s-m^2)(u-m^2)} +\frac{f_{12}^*}{(s-m^2)(u-m^2)} +\frac{f_{22}}{(u-m^2)^2} \right) \end{equation} where \begin{equation} \begin{aligned} f_{11}&=-8 s u + 24 s m^2 + 8 u m^2 + 8 m^4 \\ f_{12}&=8 s m^2 + 8 u m^2 + 16 m^4 \\ f_{22}&=-8 s u + 8 s m^2 + 24 u m^2 + 8 m^4 \end{aligned} \end{equation} for the Mandelstam variables $s=(p_1+p_2)^2$, $t=(p_1-p_3)^2$, $u=(p_1-p_4)^2$. Next, apply a Lorentz boost to go from the center of mass frame to the lab frame in which the electron is at rest. \begin{equation*} \Lambda= \begin{pmatrix} E/m & 0 & 0 & \omega/m\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ \omega/m & 0 & 0 & E/m \end{pmatrix}, \qquad \Lambda p_2=\begin{pmatrix}m \\ 0 \\ 0 \\ 0\end{pmatrix} \end{equation*} The Mandelstam variables are invariant under a boost. \begin{equation} \begin{aligned} s&=(p_1+p_2)^2=(\Lambda p_1+\Lambda p_2)^2 \\ t&=(p_1-p_3)^2=(\Lambda p_1-\Lambda p_3)^2 \\ u&=(p_1-p_4)^2=(\Lambda p_1-\Lambda p_4)^2 \end{aligned} \end{equation} In the lab frame, let $\omega_L$ be the angular frequency of the incident photon and let $\omega_L'$ be the angular frequency of the scattered photon. 
\begin{equation} \begin{aligned} \omega_L&=\Lambda p_1\cdot(1,0,0,0)=\frac{\omega^2}{m}+\frac{\omega E}{m} \\ \omega_L'&=\Lambda p_3\cdot(1,0,0,0)=\frac{\omega^2\cos\theta}{m}+\frac{\omega E}{m} \end{aligned} \end{equation} It follows that \begin{equation} \begin{aligned} s&=(p_1+p_2)^2=2m\omega_L+m^2 \\ t&=(p_1-p_3)^2=2m(\omega_L' - \omega_L) \\ u&=(p_1-p_4)^2=-2 m \omega_L' + m^2 \end{aligned} \end{equation} Compute $\langle|\mathcal{M}|^2\rangle$ from $s$, $t$, and $u$ that involve $\omega_L$ and $\omega_L'$. \begin{equation*} \langle|\mathcal{M}|^2\rangle= 2e^4\left( \frac{\omega_L}{\omega_L'}+\frac{\omega_L'}{\omega_L} +\left(\frac{m}{\omega_L}-\frac{m}{\omega_L'}+1\right)^2-1 \right) \end{equation*} From the Compton formula \begin{equation*} \frac{1}{\omega_L'}-\frac{1}{\omega_L}=\frac{1-\cos\theta_L}{m} \end{equation*} we have \begin{equation*} \cos\theta_L=\frac{m}{\omega_L}-\frac{m}{\omega_L'}+1 \end{equation*} Hence \begin{equation*} \langle|\mathcal{M}|^2\rangle= 2e^4\left( \frac{\omega_L}{\omega_L'}+\frac{\omega_L'}{\omega_L}+\cos^2\theta_L-1 \right) \end{equation*} The differential cross section for Compton scattering is \begin{equation*} \frac{d\sigma}{d\Omega}\propto \left(\frac{\omega_L'}{\omega_L}\right)^2\langle|\mathcal{M}|^2\rangle \end{equation*}
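The two key facts used above, that $s$, $t$, $u$ are boost invariants and that the boosted kinematics obey the Compton formula, are easy to verify numerically; a plain-Python sketch with arbitrary values ($m=1$, $\omega=0.7$, in natural units):

```python
import math

m, w = 1.0, 0.7          # electron mass and CM photon energy (arbitrary)
E = math.sqrt(w * w + m * m)
th, ph = 1.1, 0.4        # CM scattering angles

p1 = [w, 0, 0, w]
p2 = [E, 0, 0, -w]
p3 = [w, w*math.sin(th)*math.cos(ph), w*math.sin(th)*math.sin(ph), w*math.cos(th)]
p4 = [E, -p3[1], -p3[2], -p3[3]]

def dot(p, q):
    # Minkowski product, metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def boost(p):
    # boost along z that brings p2 to rest
    return [E/m*p[0] + w/m*p[3], p[1], p[2], w/m*p[0] + E/m*p[3]]

def mandelstam(a, b, c, d):
    s = dot([a[i]+b[i] for i in range(4)], [a[i]+b[i] for i in range(4)])
    t = dot([a[i]-c[i] for i in range(4)], [a[i]-c[i] for i in range(4)])
    u = dot([a[i]-d[i] for i in range(4)], [a[i]-d[i] for i in range(4)])
    return s, t, u

cm = mandelstam(p1, p2, p3, p4)
lab = mandelstam(*[boost(p) for p in (p1, p2, p3, p4)])
print(max(abs(x - y) for x, y in zip(cm, lab)))   # ~0: s, t, u are invariant

# Compton formula in the lab frame, with theta_L read off the boosted photon
wL = boost(p1)[0]
p3L = boost(p3)
wLp = p3L[0]
cos_thL = p3L[3] / p3L[0]
print(abs((1/wLp - 1/wL) - (1 - cos_thL) / m))    # ~0
```

The second check is exactly the step connecting $t=2m(\omega_L'-\omega_L)$ to $t=-2\omega_L\omega_L'(1-\cos\theta_L)$.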
{ "language": "en", "url": "https://physics.stackexchange.com/questions/59901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Harmonic oscillator identity : show $ \sum_{k = 0}^{n-1} \phi_k(x)^2 = \phi_n'(x)^2 + (n - \frac{x^2}{4})\phi_n(x)^2 $ I am reading about Hermite polynomials in a math textbook and I am sure they are working too hard. Let $H = p^2 + x^2$ be the quantum mechanical harmonic oscillator. Or perhaps $H = \frac{1}{2m}p^2 + \frac{1}{2} m \omega^2 x^2$. Let $|n\rangle$ be the eigenstates so that $H|n \rangle = (n + \frac{1}{2})|n \rangle$ and let $\phi_n(x) = \langle x | n \rangle$. Then I found the identities: $$ \sum_{k = 0}^{n-1} \phi_k(x)^2 = \phi_n'(x)^2 + (n - \frac{x^2}{4})\phi_n(x)^2 \neq \color{#999999}{ \langle \phi_n | \frac{d}{dx} \frac{d}{dx} | \phi_n \rangle + \langle \phi_n | (n - \frac{x^2}{4}) | \phi_n \rangle }$$ The identity looked a bit mysterious, but then I tried doing it in Bra-ket notation: $$ \sum_{k = 0}^{n-1} \big\langle x \big| k \big\rangle \big\langle k \big| x \big\rangle = |\big\langle x \big| p \big| n \big\rangle|^2 + (n - \frac{x^2}{4})|\big\langle x \big| n \big\rangle|^2 \neq \color{#999999}{\Big\langle n \Big| \Big(\underbrace{\frac{d^2}{dx^2} - \frac{x^2}{4}}_{p^2} +n\Big) \Big| n \Big\rangle }\tag{$\ast$}$$ I am totally botching the normalization here... can someone help me prove the identity $*$?
I got something, although coefficients differ. It may be just a matter of definitions, but correct me if I bungled any. You need two things: 1. The index raising and lowering identities for the $\phi_n$-s in position representation: $$ \begin{eqnarray} \frac{1}{\sqrt{2}}\left(\sqrt{\frac{m\omega}{\hbar}}x + \sqrt{\frac{\hbar}{m\omega}} \frac{d}{dx} \right)\phi_n(x) &=& \sqrt{n}\;\phi_{n - 1}(x)\\ \frac{1}{\sqrt{2}}\left(\sqrt{\frac{m\omega}{\hbar}}x - \sqrt{\frac{\hbar}{m\omega}} \frac{d}{dx} \right)\phi_n(x) &=& \sqrt{n + 1}\;\phi_{n + 1}(x) \end{eqnarray} $$ Take into account the scaling of the $\phi_n$-s, $$ \phi_n(x) \equiv \phi_n(\xi) = \frac{a}{\sqrt{2^n n!}}e^{-\frac{1}{2}\xi^2}H_n(\xi),\;\;\;\text{for}\;\; a = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4},\;\;\;\xi = \sqrt{\frac{m\omega}{\hbar}}x $$ and rewrite the identities in compact form, $$ \begin{eqnarray} \frac{1}{\sqrt{2}}\left(\xi \phi_n + \frac{d\phi_n }{d\xi}\right)&=& \sqrt{n}\;\phi_{n - 1}\\ \frac{1}{\sqrt{2}}\left(\xi \phi_n - \frac{d\phi_n }{d\xi}\right)&=& \sqrt{n + 1}\;\phi_{n + 1} \end{eqnarray} $$ If we multiply them side by side we get $$ \frac{1}{2}\left(\xi^2\phi_n^2(\xi) - \left( \frac{d\phi_n }{d\xi}\right)^2 \right) = \sqrt{n(n+1)}\phi_{n-1}\phi_{n+1} $$ 2. Turán's identity for Hermite polynomials: $$ H_n^2(\xi) - H_{n-1}(\xi)H_{n+1}(\xi) = (n-1)!\sum_{k=0}^{n-1}{\frac{2^{n-k}}{k!}H_k^2(\xi)} $$ Multiply both sides by $\frac{a^2 e^{-\xi^2}}{2^n(n-1)!}$ and rearrange to get the corresponding identity in the $\phi_n$-s: $$ n\left(\frac{ae^{-\xi^2/2}H_n(\xi)}{\sqrt{2^n n!}}\right)^2 - \sqrt{n(n+1)} \left(\frac{ae^{-\xi^2/2}H_{n-1}(\xi)}{\sqrt{2^{n-1}(n-1)!}}\right)\left(\frac{ae^{-\xi^2/2}H_{n+1}(\xi)}{\sqrt{2^{n+1}(n+1)!}}\right) = \sum_{k=0}^{n-1}{\left(\frac{ae^{-\xi^2/2}H_k}{\sqrt{2^k k!}}\right)^2} $$ $$ \Rightarrow \;\; n\phi_n^2 -\sqrt{n(n+1)}\phi_{n-1}\phi_{n+1} = \sum_{k=0}^{n-1}{\phi_k^2} $$ Now substitute the expression for $\sqrt{n(n+1)}\phi_{n-1}\phi_{n+1}$ obtained from
the lowering and raising identities and obtain $$ \sum_{k=0}^{n-1}{\phi_k^2} = \frac{1}{2}\left( \frac{d\phi_n }{d\xi}\right)^2 + \left(n - \frac{\xi^2}{2} \right) \phi_n^2 $$ Revert the scaling if needed.
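The closing identity is easy to confirm symbolically for small $n$; a minimal sympy sketch (the function names are mine, and the overall constant $a$ is set to 1 since it multiplies both sides and cancels):

```python
import sympy as sp

x = sp.symbols('x', real=True)

def phi(n):
    # dimensionless eigenfunction; the constant a is set to 1 since it
    # appears squared on both sides of the identity and cancels
    return sp.exp(-x**2/2) * sp.hermite(n, x) / sp.sqrt(2**n * sp.factorial(n))

def identity_holds(n):
    lhs = sum(phi(k)**2 for k in range(n))
    rhs = sp.Rational(1, 2) * sp.diff(phi(n), x)**2 + (n - x**2/2) * phi(n)**2
    return sp.simplify(lhs - rhs) == 0
```

Running `identity_holds(n)` for the first few $n$ checks the final boxed relation directly.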
{ "language": "en", "url": "https://physics.stackexchange.com/questions/227401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sudden release of condensate from trap - Ermakov equation - Scaling solution This is related to the scaling solution of the hydrodynamic equations. I get a relation for the scaling parameter $b$: $\ddot{b} = -\omega^2(t)\,b + \omega_0^2/b^3$ When the trap for the condensate is suddenly switched off, $\omega(t)$ goes to zero, so you get the equation $\ddot{b} = \omega_0^2/b^3$ with initial conditions $b(0)=1$ and $\dot{b}(0) = 0$. What would be the solution for $b(t)$?
The equation is $$\frac{d^2b}{dt^2} = \frac{\omega_0^2}{\, b^3 \, }$$ Multiply both sides of the equation by $\frac{db}{dt}$ and obtain $$\frac{d^2b}{dt^2} \, \frac{db}{dt} = \frac{\omega_0^2}{\, b^3 \, } \, \frac{db}{dt}$$ which can be interpreted as $$\frac{db}{dt} \, \frac{d}{dt}\left(\frac{db}{dt}\right) = \Big(\omega_0^2\, b^{-3}\Big) \, \frac{db}{dt}$$ which by going backwards with chain rule is the same as $$\frac{1}{2}\, \frac{d}{dt}\left(\frac{db}{dt}\right)^2 = \frac{d}{dt} \Big(\omega_0^2\, \frac{b^{-2}}{-2}\Big)$$ $$\frac{1}{2}\, \frac{d}{dt}\left(\frac{db}{dt}\right)^2 = -\, \frac{1}{2}\frac{d}{dt} \Big(\omega_0^2\, b^{-2}\Big)$$ and after cancelling the one half $$ \frac{d}{dt}\left(\frac{db}{dt}\right)^2 = -\,\frac{d}{dt} \Big(\omega_0^2\, b^{-2}\Big)$$ Integrate both sides with respect to $t$ $$ \left(\frac{db}{dt}\right)^2 = E_0 - \omega_0^2\, b^{-2}$$ $$ \left(\frac{db}{dt}\right)^2 = \frac{E_0 \, b^2 - \omega_0^2}{b^2}$$ $$ b^2 \, \left(\frac{db}{dt}\right)^2 = {E_0 \, b^2 - \omega_0^2}$$ $$ \left(b \, \frac{db}{dt}\right)^2 = {E_0 \, b^2 - \omega_0^2}$$ $$ \left(\frac{1}{2} \, \frac{d(b^2)}{dt}\right)^2 = {E_0 \, b^2 - \omega_0^2}$$ $$ \left(\frac{d(b^2)}{dt}\right)^2 = {4 \, E_0 \, b^2 - 4 \, \omega_0^2}$$ When $\frac{db}{dt}(0) = 0$ and $b(0) = 1$ we arrive at $E_0 = \omega_0^2$. 
Change the dependent variable by setting $u = b^2$ and the equation becomes $$ \left(\frac{du}{dt}\right)^2 = {4 \, E_0 \, u - 4 \, \omega_0^2}$$ or after taking square root on both sides $$ \frac{du}{dt} = \pm \, \sqrt{\, 4 \, E_0 \, u - 4 \, \omega_0^2 \, }$$ This is a separable equation $$ \frac{du}{ 2\, \sqrt{\, E_0 \, u - \omega_0^2 \, }} = \pm \, dt$$ $$ \frac{d\big(E_0 \, u - \omega_0^2 \big)}{ 2\, \sqrt{\, E_0 \, u - \omega_0^2 \, }} = \pm \, E_0 \, dt$$ $$ d \Big( \sqrt{\, E_0 \, u - \omega_0^2 \, } \Big) = \pm \, E_0 \, dt$$ Integrate both sides $$\sqrt{\, E_0 \, u - \omega_0^2 \, } = C_0 \pm E_0 \, t $$ Square both sides $${\, E_0 \, u - \omega_0^2 \, } = \big(\, C_0 \pm E_0 \, t \,\big)^2 $$ and solve for $u$ $$ u = \frac{1}{E_0} \, \big(\, C_0 \pm E_0 \, t \, \big)^2 + \frac{\omega_0^2}{E_0}$$ Return back to $u = b^2$ $$b^2 = \frac{1}{E_0} \, \big(\, C_0 \pm E_0 \, t \, \big)^2 + \frac{\omega_0^2}{E_0}$$ so finally $$b(t) = \pm \, \sqrt{ \, \frac{1}{E_0} \, \big(\, C_0 \pm E_0 \, t \, \big)^2 + \frac{\omega_0^2}{E_0} \, }$$ However, we know that $E_0 = \omega_0^2$ so $$b(t) = \pm \, \sqrt{ \, \frac{1}{\omega_0^2} \, \big(\, C_0 \pm \omega_0^2 \, t \, \big)^2 + 1 \, }$$ Thus, $b(0) = 1$ is possible when $C_0 = 0$ and finally $$b(t) = \sqrt{ \, \omega_0^2 \, t^2 + 1 \, }$$
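One can confirm that this $b(t)$ really solves the Ermakov equation with the stated initial conditions; a short sympy check (variable names are mine):

```python
import sympy as sp

t = sp.symbols('t', real=True)
w0 = sp.symbols('omega_0', positive=True)

b = sp.sqrt(w0**2 * t**2 + 1)

# residual of the Ermakov equation b'' = omega_0^2 / b^3 after the trap is off
residual = sp.simplify(sp.diff(b, t, 2) - w0**2 / b**3)
b0 = b.subs(t, 0)
db0 = sp.diff(b, t).subs(t, 0)
```

A vanishing `residual` together with `b0 = 1` and `db0 = 0` verifies the solution.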
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\int_{-\infty}^{\infty} |\psi(x)|^2 ~ dx = 1$ when $\psi(x) = C\exp\left(-\frac{x^2}{2a^2} + \frac{ix^3}{3a^3}\right)$ The information given is: Consider a state $|\psi\rangle $ describing a quantum particle on a line, whose position representation $\langle x|\psi\rangle = \psi(x)$ is given by: \begin{gather*} \psi(x) = C\exp\left(-\frac{x^2}{2a^2} + \frac{ix^3}{3a^3}\right), \end{gather*} where $a$ and $C$ are positive constants. Find the constant $C$ by requiring that this state be normalized. My progress so far: For the state to be normalized it must satisfy $\int_{-\infty}^{\infty} |\psi(x)|^2 \text{d} x = 1$. Calculate the integral $\int_{-\infty}^{\infty} |\psi(x)|^2 \text{d}x$. \begin{align*} \int_{-\infty}^{\infty} |\psi(x)|^2 \text{d}x &= 1\\ \int_{-\infty}^{\infty} \left| C\exp(-\frac{x^2}{2a^2} + \frac{ix^3}{3a^3})\right|^2 \text{d}x &= 1\\ C^2 \int_{-\infty}^{\infty} \left|\exp(-\frac{x^2}{2a^2} + \frac{ix^3}{3a^3})\right|^2 \text{d}x &= 1\\ \end{align*} I know that the result should be: \begin{align*} C^2 \int_{-\infty}^\infty e^{-x^2/a^2} \text{d}x &= 1\\ C^2 &= \frac{1}{a\sqrt{\pi}} \end{align*} But I fail to see how. Help is greatly appreciated, thanks.
You can write $$ \exp \left(-\frac{x^2}{2a^2} + \frac{ix^3}{3a^3}\right) = \exp \left(-\frac{x^2}{2a^2}\right)\exp\left(\frac{ix^3}{3a^3}\right). $$ When you take the magnitude of this complex number, you drop the term with the $i$, since $|\exp(i\times \text{anything})|=1$. Therefore, $$ |C|^2 \int_{-\infty}^\infty \exp\left(-\frac{x^2}{a^2}\right)\,\mathrm{d}x = 1. $$ Evaluating the Gaussian integral gives $|C|^2\, a\sqrt{\pi} = 1$, so $C = \left(a\sqrt{\pi}\right)^{-1/2}$.
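The remaining Gaussian integral can be checked symbolically; a quick sympy sketch (names are mine):

```python
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)

# the Gaussian integral left over after the phase factor drops out
gauss = sp.integrate(sp.exp(-x**2 / a**2), (x, -sp.oo, sp.oo))
C_squared = 1 / gauss   # |C|^2 from the normalization condition
```

`gauss` comes out as $a\sqrt{\pi}$, fixing $|C|^2 = 1/(a\sqrt{\pi})$.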
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why doesn't this way of calculating the moment of inertia of a sphere work? Instead of the usual approach of integrating a bunch of discs, I do it differently. I integrate by subdividing the sphere into a bunch of concentric spheres containing mass on their surface. The integral becomes: $\int_0^R r^2 \cdot (4\pi r^2) \cdot \rho \space dr$ which ends up equaling $\frac{3}{5}MR^2$ Inside the integral I'm taking the squared distance from the axis times the surface area of the sphere times the density. The right answer is of course $\frac{2}{5}MR^2$, but why does my approach fail? Conceptually, why does this method of breaking the sphere into smaller shells not work?
Since $V=\int_0^R (4\pi r^2) \space dr$ produces the correct formula for the volume of a sphere, the problem is elsewhere. Namely in the $r^2$ factor inside the volume integral. The problem is that MMOI is measured by the perpendicular distance of a particle to an axis. So the $r^2$ factor inside the integral is incorrect, as it used the radial distance and not the perpendicular distance. The correct way to integrate is to consider spherical coordinates and use $$ V = \int_0^R \int_{-\pi/2}^{\pi/2} \int_0^{2 \pi} ( r^2 \cos \psi) \space {\rm d}\varphi \space {\rm d}\psi \space {\rm d} r = \frac{4}{3} \pi R^3$$ and since $m = \rho V$ the MMOI tensor is $$ \mathrm{I} = \frac{m}{V} \int_0^R \int_{-\pi/2}^{\pi/2} 2 \pi \begin{bmatrix} \frac{ r^4 \cos \psi (\sin^2 \psi+1)}{2} & & \\ & r^4 \cos^3 \psi & \\ & & \frac{ r^4 \cos \psi (\sin^2 \psi+1)}{2} \end{bmatrix} (2 \pi r^2 \cos \psi)\space {\rm d}\psi \space {\rm d} r $$ $$ \mathrm{I} = \frac{m}{V} \begin{bmatrix} \frac{8}{15} \pi R^5 & & \\ & \frac{8}{15} \pi R^5 & \\ & & \frac{8}{15} \pi R^5 \end{bmatrix} = \begin{bmatrix} \frac{2}{5} m R^2 & & \\ & \frac{2}{5} m R^2 & \\ & & \frac{2}{5} m R^2 \end{bmatrix}$$ The definition of the MMOI tensor from the volume integral is $$ I = \frac{m}{V} \int \begin{bmatrix} y^2+z^2 & -x y & -x z \\ -x y & x^2+z^2 & -y z \\ -x z & -y z & x^2+y^2 \end{bmatrix} {\rm d}V $$ and the position in spherical coordinates is $$ \pmatrix{x \\ y \\ z} = \pmatrix{ r \cos \psi \cos \varphi \\ r \sin \psi \\ r \cos \psi \sin \varphi} $$
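The two weightings are easy to contrast symbolically; a small sympy sketch comparing the shell (radial-distance) integral with the correct perpendicular-distance integral (names are mine):

```python
import sympy as sp

r, psi, varphi = sp.symbols('r psi varphi')
R = sp.symbols('R', positive=True)

V = sp.Rational(4, 3) * sp.pi * R**3

# shell approach: weights each shell by the radial distance squared (gives I/m)
naive = sp.integrate(r**2 * 4*sp.pi*r**2, (r, 0, R)) / V

# correct approach: weight by the perpendicular distance to the z-axis,
# x^2 + y^2 = (r cos psi)^2, with volume element r^2 cos(psi) dphi dpsi dr
correct = sp.integrate((r*sp.cos(psi))**2 * r**2 * sp.cos(psi),
                       (varphi, 0, 2*sp.pi),
                       (psi, -sp.pi/2, sp.pi/2),
                       (r, 0, R)) / V
```

The shell weighting reproduces the questioner's $\tfrac{3}{5}R^2$, while the perpendicular-distance weighting gives $\tfrac{2}{5}R^2$.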
{ "language": "en", "url": "https://physics.stackexchange.com/questions/656582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 4 }
Lagrangian derivation of Thomson scattering cross-section (ie photon-electron) Does anyone know a quick way to obtain the classical Thomson scattering cross-section (for photons scattering on electrons) from QED, avoiding the lengthy calculation yielding the Compton Klein-Nishina cross-section and taking the $m\to\infty$ limit at the end of the day?
Actually, there is an easy way to derive Klein-Nishina. First derive $\langle|\mathcal{M}|^2\rangle$ for the center of mass frame and then use a Lorentz boost to go to the lab frame. Then set $\omega=\omega'$ to get the Thomson formula. In the center of mass frame, let $p_1$ be the inbound photon, $p_2$ the inbound electron, $p_3$ the scattered photon, $p_4$ the scattered electron. \begin{equation*} p_1=\begin{pmatrix}\omega\\0\\0\\ \omega\end{pmatrix} \qquad p_2=\begin{pmatrix}E\\0\\0\\-\omega\end{pmatrix} \qquad p_3=\begin{pmatrix} \omega\\ \omega\sin\theta\cos\phi\\ \omega\sin\theta\sin\phi\\ \omega\cos\theta \end{pmatrix} \qquad p_4=\begin{pmatrix} E\\ -\omega\sin\theta\cos\phi\\ -\omega\sin\theta\sin\phi\\ -\omega\cos\theta \end{pmatrix} \end{equation*} where $E=\sqrt{\omega^2+m^2}$. It is easy to show that \begin{equation} \langle|\mathcal{M}|^2\rangle = \frac{e^4}{4} \left( \frac{f_{11}}{(s-m^2)^2} +\frac{f_{12}}{(s-m^2)(u-m^2)} +\frac{f_{12}^*}{(s-m^2)(u-m^2)} +\frac{f_{22}}{(u-m^2)^2} \right) \end{equation} where \begin{equation} \begin{aligned} f_{11}&=-8 s u + 24 s m^2 + 8 u m^2 + 8 m^4 \\ f_{12}&=8 s m^2 + 8 u m^2 + 16 m^4 \\ f_{22}&=-8 s u + 8 s m^2 + 24 u m^2 + 8 m^4 \end{aligned} \end{equation} for the Mandelstam variables $s=(p_1+p_2)^2$, $t=(p_1-p_3)^2$, $u=(p_1-p_4)^2$. Next, apply a Lorentz boost to go from the center of mass frame to the lab frame in which the electron is at rest. \begin{equation*} \Lambda= \begin{pmatrix} E/m & 0 & 0 & \omega/m\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ \omega/m & 0 & 0 & E/m \end{pmatrix}, \qquad \Lambda p_2=\begin{pmatrix}m \\ 0 \\ 0 \\ 0\end{pmatrix} \end{equation*} The Mandelstam variables are invariant under a boost. 
\begin{equation} \begin{aligned} s&=(p_1+p_2)^2=(\Lambda p_1+\Lambda p_2)^2 \\ t&=(p_1-p_3)^2=(\Lambda p_1-\Lambda p_3)^2 \\ u&=(p_1-p_4)^2=(\Lambda p_1-\Lambda p_4)^2 \end{aligned} \end{equation} In the lab frame, let $\omega_L$ be the angular frequency of the incident photon and let $\omega_L'$ be the angular frequency of the scattered photon. \begin{equation} \begin{aligned} \omega_L&=\Lambda p_1\cdot(1,0,0,0)=\frac{\omega^2}{m}+\frac{\omega E}{m} \\ \omega_L'&=\Lambda p_3\cdot(1,0,0,0)=\frac{\omega^2\cos\theta}{m}+\frac{\omega E}{m} \end{aligned} \end{equation} It follows that \begin{equation} \begin{aligned} s&=(p_1+p_2)^2=2m\omega_L+m^2 \\ t&=(p_1-p_3)^2=2m(\omega_L' - \omega_L) \\ u&=(p_1-p_4)^2=-2 m \omega_L' + m^2 \end{aligned} \end{equation} Compute $\langle|\mathcal{M}|^2\rangle$ from $s$, $t$, and $u$ that involve $\omega_L$ and $\omega_L'$. \begin{equation*} \langle|\mathcal{M}|^2\rangle= 2e^4\left( \frac{\omega_L}{\omega_L'}+\frac{\omega_L'}{\omega_L} +\left(\frac{m}{\omega_L}-\frac{m}{\omega_L'}+1\right)^2-1 \right) \end{equation*} From the Compton formula \begin{equation*} \frac{1}{\omega_L'}-\frac{1}{\omega_L}=\frac{1-\cos\theta_L}{m} \end{equation*} we have \begin{equation*} \cos\theta_L=\frac{m}{\omega_L}-\frac{m}{\omega_L'}+1 \end{equation*} Hence \begin{equation*} \langle|\mathcal{M}|^2\rangle= 2e^4\left( \frac{\omega_L}{\omega_L'}+\frac{\omega_L'}{\omega_L}+\cos^2\theta_L-1 \right) \end{equation*} The differential cross section for Compton scattering is \begin{equation*} \frac{d\sigma}{d\Omega}\propto \left(\frac{\omega_L'}{\omega_L}\right)^2\langle|\mathcal{M}|^2\rangle \end{equation*} Set $\omega_L=\omega_L'$ to obtain the Thomson scattering cross section \begin{equation*} \frac{d\sigma}{d\Omega}\propto (1+\cos^2\theta_L) \end{equation*} See the following link for a complete derivation of $\langle|\mathcal{M}|^2\rangle$ in the center of mass frame. http://www.eigenmath.org/compton-scattering.pdf
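The kinematic relations between $s$, $t$, $u$ and the lab-frame frequencies follow directly from the four-vectors; a sympy sketch (names are mine; metric signature $(+,-,-,-)$):

```python
import sympy as sp

w, m = sp.symbols('omega m', positive=True)
th, ph = sp.symbols('theta varphi', real=True)
E = sp.sqrt(w**2 + m**2)
g = sp.diag(1, -1, -1, -1)          # Minkowski metric

def mdot(a, b):
    return sp.expand((a.T * g * b)[0, 0])

p1 = sp.Matrix([w, 0, 0, w])
p2 = sp.Matrix([E, 0, 0, -w])
p3 = sp.Matrix([w, w*sp.sin(th)*sp.cos(ph), w*sp.sin(th)*sp.sin(ph), w*sp.cos(th)])
p4 = p1 + p2 - p3                   # four-momentum conservation

s = mdot(p1 + p2, p1 + p2)
t = mdot(p1 - p3, p1 - p3)
u = mdot(p1 - p4, p1 - p4)

wL  = (w**2 + w*E) / m              # lab-frame incident frequency
wLp = (w**2*sp.cos(th) + w*E) / m   # lab-frame scattered frequency
```

The checks below confirm the three boxed relations and the constraint $s+t+u=2m^2$.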
{ "language": "en", "url": "https://physics.stackexchange.com/questions/249070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Smallest relative velocity driving a two-stream instability The physical picture is a two-stream system of cold electrons and ions (i.e. $T_i=T_e=0$) with relative velocities $V_i$ and $V_e$. The dispersion relation obtained is $$D(\omega)=1-\frac{\omega_{pi}^2}{(\omega-kV_i)^2}-\frac{\omega_{pe}^2}{(\omega-kV_e)^2}=0$$ where $\omega_{pi}$ and $\omega_{pe}$ respectively denote the ion and electron plasma frequencies. I am asked to give the minimum relative velocity $V_e-V_i$ that may drive an instability. What I know is that there can be an instability if $D(\omega)<0$, because such a situation gives two real and two imaginary roots; the latter give exponential solutions if $\text{Im}(\omega)>0$.
Background The first thing to do is use a different reference frame to simplify things by going into the ion rest frame. Thus, the dispersion relation goes to: $$ D\left( \omega \right) = 1 - \left( \frac{ \omega_{pi} }{ \omega } \right)^{2} - \left( \frac{ \omega_{pe} }{ \left( \omega - \mathbf{k} \cdot \mathbf{V}_{o} \right) } \right)^{2} = 0 \tag{1} $$ where $\mathbf{V}_{o} = \mathbf{V}_{e} - \mathbf{V}_{i}$. We can further simplify things by rewriting $D\left( \omega \right) = 1 - F\left( \omega \right)$, where this new term is given by: $$ F\left( \omega \right) = \left( \frac{ \omega_{pi} }{ \omega } \right)^{2} + \left( \frac{ \omega_{pe} }{ \left( \omega - \mathbf{k} \cdot \mathbf{V}_{o} \right) } \right)^{2} = 1 \tag{2} $$ We can see that $F\left( \omega \right)$ has two poles at $\omega = 0$ and $\omega = \mathbf{k} \cdot \mathbf{V}_{o}$ and a minimum at $\partial F/\partial \omega = 0$, given by: $$ \begin{align} \frac{ \partial F }{ \partial \omega } & = - \frac{ 2 }{ \omega } \left( \frac{ \omega_{pi} }{ \omega } \right)^{2} - \frac{ 2 }{ \left( \omega - \mathbf{k} \cdot \mathbf{V}_{o} \right) } \left( \frac{ \omega_{pe} }{ \left( \omega - \mathbf{k} \cdot \mathbf{V}_{o} \right) } \right)^{2} = 0 \tag{3a} \\ & \Rightarrow \frac{ \omega_{pi}^{2} }{ \omega^{3} } + \frac{ \omega_{pe}^{2} }{ \left( \omega - \mathbf{k} \cdot \mathbf{V}_{o} \right)^{3} } = 0 \tag{3b} \\ \omega_{pe}^{2} \ \omega^{3} & = - \omega_{pi}^{2} \ \left( \omega - \mathbf{k} \cdot \mathbf{V}_{o} \right)^{3} \tag{3c} \end{align} $$ After we make a few substitutions (i.e., $\zeta = \tfrac{\omega}{k \ V_{o}}$ and $\alpha = \tfrac{\omega_{pi}^{2}}{\omega_{pe}^{2}}$) and assume everything is one-dimensional (i.e., $\mathbf{k}$ is parallel to $\mathbf{V}_{o}$), Equation 3c can be reduced to: $$ \zeta^{3} + \alpha \ \left( \zeta - 1 \right)^{3} = 0 \tag{4} $$ The three roots of Equation 4 are messy, but note that if this is a proton-electron plasma then $\alpha = \tfrac{m_{e}}{m_{p}}$, where
$m_{s}$ is the mass of species $s$. Thus, we can expand in a Taylor series for small $\alpha$ to find more simplified results. We can also take advantage of the fact that we are looking for the real part of the frequency, so we can rearrange Equation 4 to get: $$ \zeta^{3} = \alpha \left( 1 - \zeta \right)^{3} \tag{5} $$ If we take the cubic root of Equation 5, we can solve for $\zeta$ to find: $$ \begin{align} \zeta & = \frac{ \alpha^{1/3} }{ 1 + \alpha^{1/3} } \tag{6a} \\ & = \frac{ 1 }{ 1 + \alpha^{-1/3} } \tag{6b} \\ \omega_{sol} & = \frac{ k \ V_{o} }{ 1 + \left( \frac{ m_{p} }{ m_{e} } \right)^{1/3} } \tag{6c} \end{align} $$ where we replaced our normalized parameters with the original inputs. Threshold Drift Velocity To find the threshold for instability, we use the results in Equation 6c and impose an additional constraint that $F\left( \omega_{sol} \right) > 1$, which gives us: $$ \begin{align} F\left( \omega_{sol} \right) & = \left( \frac{ \omega_{pe} }{ k \ V_{o} } \right)^{2} \left[ 1 + \left( \frac{ m_{e} }{ m_{p} } \right)^{1/3} \right]^{3} > 1 \tag{7a} \\ \left( k \ V_{o} \right)^{2} & < \omega_{pe}^{2} \ \left[ 1 + \left( \frac{ m_{e} }{ m_{p} } \right)^{1/3} \right]^{3} \tag{7b} \end{align} $$
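Equations 5–7a can be checked symbolically; a small sympy sketch, parametrizing by $\beta \equiv \alpha^{1/3}$ so everything stays rational (names are mine):

```python
import sympy as sp

# beta = alpha**(1/3) keeps all expressions rational in beta
beta, k, V, wpe = sp.symbols('beta k V omega_pe', positive=True)
alpha = beta**3

zeta = beta / (1 + beta)                                      # Eq. 6a
cubic_residual = sp.simplify(zeta**3 - alpha * (1 - zeta)**3)  # Eq. 5

w = zeta * k * V                                    # omega_sol, Eq. 6c
F = alpha * wpe**2 / w**2 + wpe**2 / (w - k*V)**2   # Eq. 2 with wpi^2 = alpha*wpe^2
F_claimed = (wpe / (k * V))**2 * (1 + beta)**3      # Eq. 7a
```

A vanishing `cubic_residual` confirms the root, and `F` matches the closed form used in the threshold condition.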
{ "language": "en", "url": "https://physics.stackexchange.com/questions/341998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rotation matrix with deficit angle I need to find the rotation matrix for a space with a deficit angle. The question is as pictured. The following is my answer to the question: If $\theta$ could vary between $0$ and $2 \pi$, $$ R(\theta) = \begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{pmatrix} $$ In this space, instead of rotating $2 \pi$ to get to the same point, we rotate $2 \pi - \phi$. So a rotation of $2 \pi$ (full circle) in this funny space is equivalent to a rotation of $2 \pi - \phi$ in ordinary space. So a rotation of $\theta$ in the ordinary space is equivalent to a rotation of $\frac{\theta}{1 - \frac{\phi}{2 \pi}}$ in the funny space. Thus, with the new metric, we let $ \theta \rightarrow \frac{\theta}{1-\frac{\phi}{2 \pi}}$ and we have $$ R(\theta) = \begin{pmatrix} \cos\Big(\frac{\theta}{1-\frac{\phi}{2 \pi}}\Big) & \sin\Big(\frac{\theta}{1-\frac{\phi}{2 \pi}}\Big) \\ -\sin\Big(\frac{\theta}{1-\frac{\phi}{2 \pi}}\Big) & \cos\Big(\frac{\theta}{1-\frac{\phi}{2 \pi}}\Big) \end{pmatrix} $$ $$ \therefore R(0) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$ and $$ R(2 \pi - \phi ) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$ This satisfies the requirement that $R(0) = R(2 \pi - \phi) = I_{2} $. Is this the correct rotation matrix and are my steps logical? Thank you.
I think that your metric is not correct. Why? Your new polar coordinates are: $x=r\cos \left( {\frac {2\pi \,\theta}{2\,\pi -\phi}} \right) $ $y=r\sin \left( {\frac {2 \pi \,\theta}{2\,\pi -\phi}} \right) $ The Jacobi matrix is: $J=\left[ \begin {array}{cc} \cos \left( 2\,{\frac {\pi \,\theta}{2\, \pi -\phi}} \right) &-2\,r\sin \left( 2\,{\frac {\pi \,\theta}{2\,\pi -\phi}} \right) \pi \left( 2\,\pi -\phi \right) ^{-1} \\ \sin \left( 2\,{\frac {\pi \,\theta}{2\,\pi -\phi }} \right) &2\,r\cos \left( 2\,{\frac {\pi \,\theta}{2\,\pi -\phi}} \right) \pi \left( 2\,\pi -\phi \right) ^{-1}\end {array} \right] $ and the metric: $g=J^T\,J=\left[ \begin {array}{cc} 1&0\\ 0&\,{\frac {4{\pi }^{2}}{ \left( 2\,\pi -\phi \right) ^{2}}r^2}\end {array} \right] $ If you know the equations for $x$ and $y$ you can calculate the transformation matrix $R$ with this equation: $J=R\,H$, with the matrix $H_{i,i}=\sqrt{g_{i,i}}\,,H_{i,j}=0$ $H= \left[ \begin {array}{cc} 1&0\\ 0&2\,{\frac {\pi \, r}{2\,\pi -\phi}}\end {array} \right] $ $R=J\,H^{-1}$ $R=\left[ \begin {array}{cc} \cos \left( 2\,{\frac {\pi \,\theta}{2\, \pi -\phi}} \right) &-\sin \left( 2\,{\frac {\pi \,\theta}{2\,\pi - \phi}} \right) \\ \sin \left( 2\,{\frac {\pi \, \theta}{2\,\pi -\phi}} \right) &\cos \left( 2\,{\frac {\pi \,\theta}{2 \,\pi -\phi}} \right) \end {array} \right] $ This is your transformation matrix. Remark: I used the symbolic program Maple to do the calculations.
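The Jacobian, metric, and rotation factor above are easy to reproduce symbolically; a short sympy sketch (names are mine):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
c = 2*sp.pi / (2*sp.pi - phi)

x = r * sp.cos(c * theta)
y = r * sp.sin(c * theta)

# Jacobi matrix of (x, y) with respect to (r, theta)
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])

g = sp.simplify(J.T * J)               # the metric
H = sp.Matrix([[1, 0], [0, c * r]])    # H_ii = sqrt(g_ii)
Rmat = sp.simplify(J * H.inv())        # the rotation factor of J = R H
```

The checks confirm $g = \mathrm{diag}(1, 4\pi^2 r^2/(2\pi-\phi)^2)$ and that $R$ is orthogonal.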
{ "language": "en", "url": "https://physics.stackexchange.com/questions/423772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Spin of 3 particles I am trying to decompose the isospins of a three particle state using Clebsch-Gordan coefficients such as: $|1,1\rangle \otimes |1/2,-1/2\rangle \otimes |1,0\rangle$ Decomposing the first two states gives: $|1,1\rangle \otimes |1/2,-1/2\rangle = \sqrt{\frac{1}{3}}|3/2,1/2\rangle + \sqrt{\frac{2}{3}}|1/2,1/2\rangle$ And then these combined with the third state give: $|3/2,1/2\rangle \otimes |1,0 \rangle = \sqrt{\frac{3}{5}}|5/2,1/2\rangle + \sqrt{\frac{1}{15}}|3/2,1/2\rangle - \sqrt{\frac{1}{3}}|1/2,1/2\rangle$ $|1/2,1/2\rangle \otimes |1,0 \rangle = \sqrt{\frac{2}{3}}|3/2,1/2\rangle + \sqrt{\frac{1}{3}}|1/2,1/2\rangle$ When I combine these all together I get: $|1,1\rangle \otimes |1/2,-1/2\rangle \otimes |1,0\rangle = \sqrt{\frac{1}{5}} |5/2,1/2\rangle + \frac{10+\sqrt{5}}{15} |3/2,1/2\rangle + \frac{-1+\sqrt{2}}{3}|1/2,1/2\rangle$ Which has to be incorrect as this state is not normalised. Basically my question is, what am I doing wrong? Edit: What I'm attempting to calculate is amplitudes for processes like $\Lambda p \to \Lambda p \pi^0$ using isospin states for all of the particles.
So the key point here is to realize that the coupling $$ 1\otimes \frac{1}{2}\otimes 1 $$ will contain some final $J$ values more than once. Indeed $$ 1\otimes \frac{1}{2}=\frac{3}{2}\oplus \frac{1}{2} \tag{1} $$ and coupling this to $1$ will produce, for instance, two types of states with final $J=\frac{1}{2}$, depending on the intermediate $J_{12}$ value. Thus, one copy will come from the $J_{12}=\frac{3}{2}$ states and the other will come from the $J_{12}=\frac{1}{2}$ states of (1). To be systematic write $$ \vert 1,1\rangle \vert\textstyle\frac{1}{2},-\frac{1}{2}\rangle =\frac{1}{\sqrt{3}}\vert \frac{3}{2}\frac{1}{2}\rangle + \sqrt{\frac{2}{3}} \vert\frac{1}{2}\frac{1}{2}\rangle\, .\tag{2} $$ Coupling the $\frac{3}{2}$ state to $1$, the part proportional to final $J=\frac{1}{2}$ using the CG $C_{3/2,1/2;1,0}^{1/2,1/2}=-\frac{1}{\sqrt{3}}$ yields $\vert \frac{1}{2}\frac{1}{2};J_{12}=\frac{3}{2}\rangle$ with $$ \langle \textstyle\frac{1}{2}\frac{1}{2};J_{12}=\frac{3}{2}\vert 1,1;\frac{1}{2},-\frac{1}{2} ; 1,0\rangle =-\frac{1}{3} $$ but going through the $J_{12}=\frac{1}{2}$ produces a different $J=\frac{1}{2}$ state with $$ \langle \textstyle\frac{1}{2}\frac{1}{2};J_{12}=\frac{1}{2}\vert 1,1; \frac{1}{2},-\frac{1}{2} ; 1,0\rangle =+\frac{\sqrt{2}}{3} $$ You can actually check that $\vert\textstyle\frac{1}{2}\frac{1}{2};J_{12}=\frac{1}{2}\rangle$ is really different from $\vert\textstyle\frac{1}{2}\frac{1}{2};J_{12}=\frac{3}{2}\rangle$ by computing their explicit expressions in terms of $j_1=1,j_2=\frac{1}{2}, j_3=1$ states; you will see that these are distinct linear combinations of the $j_1=1,j_2=\frac{1}{2}, j_3=1$ states.
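The bookkeeping over the intermediate $J_{12}$ can be automated with sympy's Clebsch-Gordan class; summing the squared amplitudes over all $(J_{12}, J)$ paths — keeping the two $J=\frac{1}{2}$ copies distinct — recovers unit norm, which is exactly what fails if the copies are merged (a sketch; names are mine):

```python
import sympy as sp
from sympy.physics.quantum.cg import CG

half = sp.Rational(1, 2)

# initial product state |1,1> |1/2,-1/2> |1,0>
j1, m1 = sp.Integer(1), sp.Integer(1)
j2, m2 = half, -half
j3, m3 = sp.Integer(1), sp.Integer(0)

amps = {}
total = sp.Integer(0)
for J12 in (half, 3*half):                     # 1 (x) 1/2 = 3/2 (+) 1/2
    c12 = CG(j1, m1, j2, m2, J12, m1 + m2).doit()
    J = abs(J12 - j3)
    while J <= J12 + j3:                       # couple J12 to j3 = 1
        amp = c12 * CG(J12, m1 + m2, j3, m3, J, m1 + m2 + m3).doit()
        amps[(J12, J)] = amp
        total += amp**2
        J += 1
```

The two $J=\frac{1}{2}$ amplitudes have squared magnitudes $\frac{1}{9}$ and $\frac{2}{9}$, matching the $-\frac{1}{3}$ and $+\frac{\sqrt{2}}{3}$ overlaps above.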
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Holstein Primakoff - Showing Spin-Relation $S^2 = S(S+1)$ I'm trying to prove that the Holstein-Primakoff transformation \begin{align} S_i^{+} &= \sqrt{2S-n_i}b_i \\ S_i^{-} &= b_i^{\dagger} \sqrt{2S -n_i} \\ S_i^{z} &= S-n_i \end{align} satisfies the condition $$ \vec S^2=S(S+1)$$ but I get the following (with $S_x=\frac{1}{2}( S^+ + S^-) $ and $S_y=\frac{1}{2i} (S^+-S^-)$): \begin{align*} S^{2} &= S_x^{2} + S_y^{2} + S_z^{2} = \frac{1}{4} \left( S^{+} + S^{-}\right)^{2} - \frac{1}{4} (S^{+}- S^{-})^{2} + (S-n_i)^{2} \\ &= \frac{1}{4} (2 S^{+} S^{-} + 2 S^{-} S^{+}) + S^{2} - 2Sn_i + n_i^{2} \\ &= \frac{1}{2} \left( \sqrt{2S-n_i} b_ib_i^{\dagger} \sqrt{2S-n_i} + b_i^{\dagger}(2S-n_i) b_i \right) + S^{2} - 2Sn_i + n_i^{2} \\ &= \frac{1}{2} \left( \sqrt{2S-n_i}(1+n_i) \sqrt{2S-n_i} + b_i^{\dagger}(2S-n_i) b_i \right) + S^{2} - 2Sn_i + n_i^{2} \\ &= \frac{1}{2} \left( n_i (2S-n_i) + 2S -n_i + b_i^{\dagger}(2S-n_i) b_i \right) + S^{2} - 2Sn_i + n_i^{2} \\ &= S^{2} + \frac{n_i^{2}}{2} + S - \frac{n_i}{2} - \frac{1}{2} b_i^{\dagger} b_i^{\dagger} b_i b_i \end{align*} So my question is whether the Holstein-Primakoff transformation satisfies this condition or not. It should satisfy the condition, because otherwise it would be unphysical to use this transformation.
What you've done is treat $n_i$ as a number, which is not completely honest, since it is really an operator $\hat n$. Thus, for instance, \begin{align} S^+ \vert n\rangle &= \sqrt{2S-\hat n}\, b\vert n\rangle = \sqrt{2S-\hat n} \sqrt{n}\vert n-1\rangle = \sqrt{(2S-n+1)n}\vert n-1\rangle\\ S^- \vert n\rangle &= b^\dagger \sqrt{2S-\hat n} \vert n\rangle = \sqrt{(2S-n)(n+1)} \vert n+1\rangle \end{align} so that \begin{align} S^+S^-\vert n\rangle &= \sqrt{(2S-n)(n+1)} S^+ \vert n+1\rangle =\sqrt{(2S-n)(n+1)} \sqrt{(2S-n)(n+1)} \vert n\rangle \\ &=(2S-n)(n+1) \vert n\rangle \\ S^-S^+\vert n\rangle &=(2S-n+1)n \vert n\rangle\, . \end{align} If my algebra is right, this should take care of it.
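With these eigenvalues on $|n\rangle$, checking that $\vec S^2 = \frac{1}{2}(S^+S^- + S^-S^+) + S_z^2$ gives $S(S+1)$ is a one-line symbolic computation; a sympy sketch (names are mine):

```python
import sympy as sp

S, n = sp.symbols('S n')

SpSm = (2*S - n) * (n + 1)   # eigenvalue of S+ S- on |n>
SmSp = (2*S - n + 1) * n     # eigenvalue of S- S+ on |n>
Sz = S - n                   # eigenvalue of S_z on |n>

S_squared = sp.expand(sp.Rational(1, 2) * (SpSm + SmSp) + Sz**2)
```

All $n$-dependence cancels, leaving $S^2 + S = S(S+1)$ on every number state.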
{ "language": "en", "url": "https://physics.stackexchange.com/questions/369249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How many degrees of freedom does the covariant derivative of the Riemann tensor have? It is a commonly cited result that the Riemman tensor $R_{abcd}$ has $\frac{1}{12}n^2(n^2-1)$ degrees of freedom in $n$ dimensions, which follows from the identities of the Riemann tensor. This gives $20$ degrees of freedom in $3+1$ dimensions. How many does the covariant derivative $R_{abcd;e}$ have?
The covariant derivative of the Riemann tensor should add a factor of $n$ to the degrees of freedom you had before, giving $\frac{1}{12}n^3(n^2-1)$. However, the second Bianchi identity restricts the number of components: $$R_{ab[cd;e]} = 0.$$ It is then enough to count how many tensors of rank 5 with the last three indices antisymmetrized there are and subtract it from $\frac{1}{12}n^3(n^2-1)$. There are $\frac{n!}{r!(n-r)!}$ independent components in a rank $r$ antisymmetric tensor. Thus, $R_{ab[cd;e]} = 0$ gives us $$\frac{n(n-1)}{2} \times \frac{n!}{3!(n-3)!} = \frac{1}{12}n^2(n-1)^2(n-2)$$ independent components, where the factor $\frac{n(n-1)}{2} $ comes from the first two indices, which are antisymmetric. So if I didn't miss anything, the final result should be: $$\frac{1}{12}n^3(n^2-1) - \frac{1}{12}n^2(n-1)^2(n-2) = \frac{1}{12}n^2(n-1)\Big(n(n+1) - (n-1)(n-2)\Big) = \frac{1}{12}n^2(n-1)(4n-2) = \frac{1}{6}n^2(n-1)(2n-1).$$ In 4D we get 56 independent components.
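The counting is easy to double-check numerically; a small sketch using exact fractions (names are mine):

```python
from fractions import Fraction

def grad_riemann_dof(n):
    # n times the Riemann dof, minus the second-Bianchi constraints
    raw = Fraction(n**3 * (n**2 - 1), 12)
    bianchi = Fraction(n**2 * (n - 1)**2 * (n - 2), 12)
    return raw - bianchi

def closed_form(n):
    return Fraction(n**2 * (n - 1) * (2*n - 1), 6)
```

For $n=4$ the count is 56, and the subtraction agrees with the closed form for every dimension tried.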
{ "language": "en", "url": "https://physics.stackexchange.com/questions/658054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove an action expansion's even-indexed terms have to be integrated, where the odd-indexed terms are only derivatives of the potential (WKB) After assuming a wavefunction of a form: $$ \psi \approx A \exp{\left(i \frac{S(x)}{\hbar}\right)}$$ and letting $$S = \hbar^0 S_0 + \hbar^1 S_1 + \hbar^2 S_2 +...$$ The odd-indexed terms of the action for a one-dimensional potential in the time-independent Schrodinger equation do not require integration if $p$ is known and differentiable. However the even-terms require non-trivial integration(which is extensively more computationally taxing). p is defined by: $$ p = \sqrt{2 m ( E - V(x))} $$ The terms: $$ S_0^\prime = p$$ $$ \boxed{S_0 = \int dx\, p =\pm \int dx \sqrt{2m(E-V(x))} } $$ $$ S_1^\prime = \frac{i}{2}\frac{1}{p}\frac{d p}{d x} $$ $$ \boxed{S_1 = \frac{i}{2} ln(p) } $$ $$ S_2^\prime = \frac{1}{8 p^3} \left(\frac{d p}{d x} \right)^2 - \frac{1}{4} \left(\frac{1}{p^2} \frac{d^2 p}{d x^2} - \frac{1}{p^3} \left(\frac{d p}{d x}\right)^2\right)$$ $$ \boxed{S_2 = \int dx \left(\frac{1}{4 p^2} \frac{d^2 p}{d x^2} + \frac{3}{8 p^3} \left( \frac{d p}{d x}\right)^2 \right)}\mathrm{\,requires\,\,integration} $$ Now for $S_3$: $$ S_3^\prime = -\frac{i}{8 p^3} \frac{d^3 p}{d x^3} + \frac{3}{4} \left( \frac{i}{p^4} \frac{d p}{d x} \frac{d^2 p}{d x^2} - \frac{1}{p^5} \left( \frac{d p}{d x}\right)^3 \right)$$ After making an educated guess that: $$ \frac{d}{dx} \left( -\frac{i}{8 p^3} \frac{d^2 p}{d x^2} + \frac{3 i}{16 p^4} \left(\frac{d p}{d x}\right)^2\right) = -\frac{i}{8 p^3} \frac{d^3 p}{d x^3} + \frac{3}{4} \left( \frac{i}{p^4} \frac{d p}{d x} \frac{d^2 p}{d x^2} - \frac{1}{p^5} \left( \frac{d p}{d x}\right)^3 \right)$$ Then $$\int dx S_3^\prime = \int dx \frac{d}{dx} \left( -\frac{i}{8 p^3} \frac{d^2 p}{d x^2} + \frac{3 i}{16 p^4} \left(\frac{d p}{d x}\right)^2\right)$$ Clearly because of my guess $$\boxed{S_3 = -\frac{i}{8 p^3} \frac{d^2 p}{d x^2} + \frac{3 i}{16 p^4} \left(\frac{d p}{d x}\right)^2} $$ How do we know that 
$S_2$ cannot be retrieved the same way? Surely I cannot try an ansatz for every possible function that could be a candidate for $S_2$. This problem has come up in general for different types of problems outside the scope of quantum mechanics. Is it only a consequence of this equation(1-D time-independent Schrodinger equation) that we have odd-indexed and even-indexed terms this way? Would it be different for 3-D?
The governing Schrödinger equation $$(e^{\frac{i}{\hbar}S})^{\prime\prime}~=~-k(x)^2 e^{\frac{i}{\hbar}S} ~\Leftrightarrow~ S^{\prime2} ~=~p(x)^2+i\hbar S^{\prime\prime} $$ can be turned into a fixed point equation $$S^{\prime} ~=~ \sqrt{p^2+i\hbar S^{\prime\prime}} ~=~ \sqrt{p^2+i\hbar \frac{d}{dx}\sqrt{p^2+i\hbar S^{\prime\prime}}} ~=~ \sqrt{p^2+i\hbar \frac{d}{dx}\sqrt{p^2+i\hbar \frac{d}{dx}\sqrt{p^2+i\hbar S^{\prime\prime}}}} ~=~ \ldots $$ Expanding $$S^{\prime}~=~\sum_{n=0}^{\infty} (i\hbar)^{n}S^{\prime}_n,$$ one gets $$S^{\prime}_0 ~=~ p,\qquad S^{\prime}_1 ~=~ \frac{p^{\prime}}{2p},\qquad S^{\prime}_2 ~=~ \frac{p^{\prime\prime}}{4p^2}-\frac{3(p^{\prime})^2}{8p^3}, $$ $$ S^{\prime}_3 ~=~ \frac{p^{\prime\prime\prime}}{8p^3}-\frac{3p^{\prime}p^{\prime\prime}}{4p^4}+\frac{3(p^{\prime})^3}{4p^5},\qquad, \ldots $$ OP then asks if there is a method to check if $f=S^{\prime}_n$ is a total derivative? Yes, one can check if the Euler-Lagrange operator $$E(f)~=~\sum_{n=0}^{\infty} \left( - \frac{d}{dx}\right)^n \frac{\partial f}{\partial p^{(n)}}$$ vanishes on $f=S^{\prime}_n$. E.g. $$E(S^{\prime}_0)~=~1~\neq~ 0, \qquad E(S^{\prime}_1)~=~ 0,\qquad E(S^{\prime}_2)~\neq~ 0,\qquad E(S^{\prime}_3)~=~ 0,\qquad\ldots. $$ Here we are using the fact that a local functional can be written as a boundary term if and only if the Euler-Lagrange equation vanishes identically. See also this Phys.SE answer. At least in this way it is possible to check operationally order by order in $n$ whether $S^{\prime}_n$ is a total derivative or not. We have not checked the calculations beyond OP's claims, nor investigated OP's conjectures.
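The Euler-Lagrange test can be run mechanically with sympy, treating $p(x)$ as an arbitrary function; a sketch (names are mine; the operator sum is truncated at fourth derivatives, which suffices for these $f$):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')(x)

def euler_lagrange(f, max_order=4):
    # E(f) = sum_n (-d/dx)^n  d f / d p^(n);
    # E(f) vanishes identically iff f is a total derivative
    result = sp.Integer(0)
    for k in range(max_order + 1):
        pk = p.diff(x, k) if k else p
        result += (-1)**k * sp.diff(f, pk).diff(x, k)
    return sp.simplify(result)

dp = p.diff(x)
S1p = dp / (2*p)
S2p = p.diff(x, 2)/(4*p**2) - 3*dp**2/(8*p**3)
S3p = (p.diff(x, 3)/(8*p**3)
       - 3*dp*p.diff(x, 2)/(4*p**4)
       + 3*dp**3/(4*p**5))
```

Running the operator on $S'_1$, $S'_2$, $S'_3$ reproduces the pattern claimed above: only the even-indexed term fails the test.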
{ "language": "en", "url": "https://physics.stackexchange.com/questions/59366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Equilibrium of a Solid Body To find the torques around B (see the figure below), I took the cross product as follows. We know that $W=160\ \mathrm{N}$, but when I insert that value into the equation in the red box, I get $F_3=46.2\ \mathrm{N}$, whereas they get $138.6\ \mathrm{N}$. How do they get that? The only possible way that can happen is if they multiply $F_3$ by $3$.
The book is wrong and you are correct. $$\begin{pmatrix} -\frac{L}{2} \cos 60° \\ \frac{L}{2} \sin 60° \\ 0 \end{pmatrix} \times \begin{pmatrix} 0 \\ -W \\ 0 \end{pmatrix} + \begin{pmatrix} -L \cos 60° \\ L \sin 60° \\ 0 \end{pmatrix} \times \begin{pmatrix} F_3 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$ $$\rightarrow \begin{pmatrix} 0 \\ 0 \\ W \frac{L}{2} \cos 60° - F_3 L \sin 60° =0 \end{pmatrix}$$ $$ F_3 = \frac{W}{2} \frac{\cos 60°}{\sin 60°} = \frac{80}{\sqrt{3}} \approx 46.19\ \mathrm{N} $$
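A quick numerical check of this torque balance (a sketch; the rod length $L$ cancels, so any positive value works):

```python
import numpy as np

W, L = 160.0, 1.0                          # weight in N; L arbitrary
c60, s60 = np.cos(np.pi/3), np.sin(np.pi/3)

r_W  = np.array([-L/2*c60, L/2*s60, 0.0])  # lever arm of the weight
r_F3 = np.array([-L*c60,   L*s60,   0.0])  # lever arm of F_3

# z-component of r_W x (0, -W, 0) must be balanced by F_3*L*sin60
tau_W = np.cross(r_W, np.array([0.0, -W, 0.0]))[2]  # = W*(L/2)*cos60
F3 = tau_W / (L*s60)
print(F3)  # ≈ 46.19, not 138.6
```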
{ "language": "en", "url": "https://physics.stackexchange.com/questions/249010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transformation matrix of a strain tensor If the stress $\sigma_{xx}$ is applied to an isotropic, three-dimensional body, the following strain tensor results: $$\boldsymbol\epsilon=\left(\begin{matrix}\frac{1}{E}\sigma _{xx} & 0 & 0 \\0 & -\frac{\nu}{E}\sigma _{xx} & 0 \\0 & 0 & -\frac{\nu}{E}\sigma _{xx}\end{matrix}\right)$$ Now the tensor is to be rotated around the y-axis by the angle $\alpha$. The transformation should be carried out with $ A'=QAQ^T $. What would the transformation matrix $ Q $ be in this case?
I use the following notation: the transformation matrix ${_B^I}\mathbf Q$ transforms vector components from the rotating system (index B) to the inertial system (index I). Rotation about the x-axis, angle $~\alpha~$ between y and y': $${_B^I}\,Q_x=\left[ \begin {array}{ccc} 1&0&0\\ 0&\cos \left( \alpha \right) &-\sin \left( \alpha \right) \\ 0& \sin \left( \alpha \right) &\cos \left( \alpha \right) \end {array} \right] $$ rotation about the y-axis, angle $~\alpha~$ between x and x': $${_B^I}\,Q_y= \left[ \begin {array}{ccc} \cos \left( \alpha \right) &0&\sin \left( \alpha \right) \\ 0&1&0\\ -\sin \left( \alpha \right) &0&\cos \left( \alpha \right) \end {array} \right] $$ rotation about the z-axis, angle $~\alpha~$ between x and x': $${_B^I}\,Q_z=\left[ \begin {array}{ccc} \cos \left( \alpha \right) &-\sin \left( \alpha \right) &0\\ \sin \left( \alpha \right) &\cos \left( \alpha \right) &0\\ 0&0&1\end {array} \right] $$ vector transformation from the B to the I system: $$\mathbf v_I={_B^I}\mathbf Q\,\mathbf v_B$$ matrix transformation: $$\mathbf M_I={_B^I}\mathbf Q \,\mathbf M_B\,{_I^B}\mathbf Q =\mathbf Q\,\mathbf M_B\,\mathbf Q^T\\ \mathbf M_B={_I^B}\mathbf Q \,\mathbf M_I\,{_B^I}\mathbf Q =\mathbf Q^T\,\mathbf M_I\,\mathbf Q $$ your matrix is $$\mathbf \epsilon_I= \left[ \begin {array}{ccc} \epsilon_{{11}}&0&0\\ 0& \epsilon_{{22}}&0\\ 0&0&\epsilon_{{22}}\end {array} \right] \\ \mathbf \epsilon_B=Q^T\,\mathbf \epsilon_I\,\mathbf Q $$ for $~\mathbf Q=\mathbf Q_x~$ you obtain $$\mathbf \epsilon_B=\mathbf \epsilon_I$$ for $~\mathbf Q=\mathbf Q_y~$ $$\mathbf \epsilon_B=\left[ \begin {array}{ccc} \left( \cos \left( \alpha \right) \right) ^{2}\epsilon_{{11}}+\epsilon_{{22}}- \left( \cos \left( \alpha \right) \right) ^{2}\epsilon_{{22}}&0&\cos \left( \alpha \right) \sin \left( \alpha \right) \left( -\epsilon_{{22}}+\epsilon_ {{11}} \right) \\ 0&\epsilon_{{22}}&0 \\ \cos \left( \alpha \right) \sin \left( \alpha \right) \left( -\epsilon_{{22}}+\epsilon_{{11}} \right) &0& \left( \cos \left( \alpha \right) 
\right) ^{2}\epsilon_{{22}}+\epsilon_{{11} }- \left( \cos \left( \alpha \right) \right) ^{2}\epsilon_{{11}} \end {array} \right] $$ for $~\mathbf Q=\mathbf Q_z~$ $$\mathbf \epsilon_B= \left[ \begin {array}{ccc} \left( \cos \left( \alpha \right) \right) ^{2}\epsilon_{{11}}+\epsilon_{{22}}- \left( \cos \left( \alpha \right) \right) ^{2}\epsilon_{{22}}&-\cos \left( \alpha \right) \sin \left( \alpha \right) \left( -\epsilon_{{22}}+\epsilon_ {{11}} \right) &0\\ -\cos \left( \alpha \right) \sin \left( \alpha \right) \left( -\epsilon_{{22}}+\epsilon_{{11}} \right) & \left( \cos \left( \alpha \right) \right) ^{2}\epsilon_{{ 22}}+\epsilon_{{11}}- \left( \cos \left( \alpha \right) \right) ^{2} \epsilon_{{11}}&0\\ 0&0&\epsilon_{{22}}\end {array} \right] $$
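As a cross-check, the $Q_y$ case can be reproduced symbolically; a short sympy sketch (the symbol names are mine):

```python
import sympy as sp

alpha, e11, e22 = sp.symbols('alpha epsilon_11 epsilon_22')

# rotation about the y-axis
Qy = sp.Matrix([[ sp.cos(alpha), 0, sp.sin(alpha)],
                [ 0,             1, 0            ],
                [-sp.sin(alpha), 0, sp.cos(alpha)]])

eps_I = sp.diag(e11, e22, e22)          # diagonal strain tensor from above
eps_B = sp.simplify(Qy.T * eps_I * Qy)  # M_B = Q^T M_I Q
print(eps_B)
```

The result matches the matrix above: the diagonal entries are $\cos^2\alpha\,\epsilon_{11}+\sin^2\alpha\,\epsilon_{22}$ and its permutation, with off-diagonal $\sin\alpha\cos\alpha\,(\epsilon_{11}-\epsilon_{22})$.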
{ "language": "en", "url": "https://physics.stackexchange.com/questions/666320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to determine the probabilities for a cuboid die? Imagine we take a cuboid with sides $a, b$ and $c$ and throw it like a usual die. Is there a way to determine the probabilities of the different outcomes $P_{ab}, P_{bc}$ and $P_{ac}$? With $ab$, $bc$, $ac$ I label the three distinct faces of the die (let us not distinguish between the two opposite sides). I would guess that the probabilities cannot be calculated solely by the weights of the different areas (something like $P^\text{try}_{ab}=ab/(ab+bc+ac)$). I believe this is a complicated physical problem which can in principle depend on a lot of factors like air friction, friction with the table, the material of the die, etc. However, I would like to simplify it as much as possible and assume we can calculate the probabilities just by knowing the lengths of the different sides. For a start, let us assume that two sides $b=c$ are equal (that is, we only have two events $ab$ and $bb$). Now with dimensional analysis we know that the probabilities $P_{ab}$ and $P_{bb}$ can only be functions of the ratio $\rho=a/b$. We also have $P_{ab}(\rho)+P_{bb}(\rho)=1$ and we know, for example, that (i) $P_{ab}(0)=0$, (ii) $P_{ab}(1)=2/3$ and (iii) $P_{ab}(\rho\rightarrow\infty)=1$. My question is: is there a way to determine $P_{ab}(\rho)$? Bonus: Since I am too lazy to perform the experiment, would there be a way to run this through some 3D rigid-body physics simulation and determine the probabilities by using a huge number of throws (of course this is doomed to fail for extreme values of $\rho$)? Remark: Actually, the function $P^\text{try}$ given above fulfills all three properties (i)-(iii). For $b=c$ we have $P^\text{try}_{ab}=2\frac{ab}{2ab+b^2}=\frac{2\rho}{1+2\rho}$ (the additional factor of 2 comes from the fact that we have four sides $ab$ instead of two, as in the asymmetric die above)
I think a reasonable first approximation can be made like this: choose an arbitrary orientation for the die, and figure out, if the die were released in that orientation with its lowermost point resting on a surface, which side would it fall on? That can be easily calculated; you just draw a line going straight down from the center of mass, and whichever face it passes through, that's the one that will land down. We can then assume (this is the unjustified approximation) that the orientation of the die before it begins its final tip is uniformly random, so the problem is reduced to finding the proportion of all possible directions which pass through each face. Hopefully it's clear that that is just the solid angle subtended by the face divided by the total solid angle, $4\pi$. For a cuboid, we can figure out the solid angle of a face by doing an integral, $$\Omega = \iint_S\frac{\hat r\cdot\mathrm{d}A}{r^2}$$ as given on MathWorld. Take the face to lie in the plane $z = Z$ (capital letters will be constants) and to have dimensions $2X\times 2Y$. The integral then becomes $$\begin{align}\Omega &= \int_{-Y}^{Y}\int_{-X}^{X} \frac{(x\hat{x} + y\hat{y} + Z\hat{z})\cdot\hat{z}}{(x^2 + y^2 + Z^2)^{3/2}}\mathrm{d}x\mathrm{d}y \\ &= Z\int_{-Y}^{Y}\int_{-X}^{X} \frac{1}{(x^2 + y^2 + Z^2)^{3/2}}\mathrm{d}x\mathrm{d}y\\ &= 4\tan^{-1}\biggl(\frac{XY}{Z\sqrt{X^2 + Y^2 + Z^2}}\biggr) \end{align}$$ This gets divided by $4\pi$ to produce the probability. Translating into your notation, and multiplying by 2 to take into account the two opposite sides under one probability, this becomes $$P_{ab} = \frac{2}{\pi}\tan^{-1}\biggl(\frac{ab}{c\sqrt{a^2 + b^2 + c^2}}\biggr)$$ $P_{bc}$ and $P_{ca}$ are the same under the appropriate permutation of the labels. 
Some sanity checks show that this is at least a reasonable candidate solution: * *$P_{ab}$ is symmetric in $a$ and $b$ *$P_{ab}$ is directly related to $a$ and $b$ and inversely related to $c$ *$P_{ab} \to 1$ as $a,b\to\infty$ or $c\to 0$ *$P_{ab} \to 0$ as $a,b\to 0$ or $c\to\infty$ *For three equal sides, $$P_{aa} = \frac{2}{\pi}\tan^{-1}\biggl(\frac{a}{\sqrt{3a^2}}\biggr) = \frac{2}{\pi}\tan^{-1}\frac{1}{\sqrt{3}} = \frac{1}{3}$$ *For two equal sides, $b = c$, as in your simplified example: $$\begin{align}P_{ab} &= \frac{2}{\pi}\tan^{-1}\biggl(\frac{ab}{b\sqrt{a^2 + 2b^2}}\biggr) = \frac{2}{\pi}\tan^{-1}\biggl(\frac{\rho}{\sqrt{\rho^2 + 2}}\biggr) \\ P_{bb} &= \frac{2}{\pi}\tan^{-1}\biggl(\frac{b^2}{a\sqrt{a^2 + 2b^2}}\biggr) = \frac{2}{\pi}\tan^{-1}\biggl(\frac{1}{\rho\sqrt{\rho^2 + 2}}\biggr)\end{align}$$ * *$P_{ab}(0) = 0$ as expected *$P_{ab}(1) = \frac{2}{\pi}\tan^{-1}\frac{1}{\sqrt{3}} = \frac{2}{\pi}\frac{\pi}{6} = \frac{1}{3}$ for one particular pair of opposite sides, as expected *$P_{ab}(\infty) = \frac{2}{\pi}\tan^{-1}0 = \frac{2}{\pi}\frac{\pi}{2} = 1$, as expected *$P_{ab} + P_{bc} + P_{ca} = 1$ can be shown using the identity $$\tan^{-1} x + \tan^{-1} y = \tan^{-1}\frac{x + y}{1 - xy}$$ from which follows $$\begin{align}\tan^{-1} x + \tan^{-1} y + \tan^{-1} z &= \tan^{-1}\frac{(x + y)/(1 - xy) + z}{1 - z(x + y)/(1 - xy)} \\ &= \tan^{-1}\frac{(x + y + z - xyz)/(1 - xy)}{(1 - xy - zx - zy)/(1 - xy)} \\ &= \tan^{-1}\frac{x + y + z - xyz}{1 - xy - zx - zy} \end{align}$$ The relevant products are $$\begin{align} x + y + z &= \frac{ab}{c\sqrt{a^2 + b^2 + c^2}} + \frac{bc}{a\sqrt{a^2 + b^2 + c^2}} + \frac{ca}{b\sqrt{a^2 + b^2 + c^2}} \\ &= \frac{(ab)^2 + (bc)^2 + (ca)^2}{abc\sqrt{a^2 + b^2 + c^2}} \\ xy &= \frac{ab}{c\sqrt{a^2 + b^2 + c^2}}\frac{bc}{a\sqrt{a^2 + b^2 + c^2}} \\ &= \frac{b^2}{a^2 + b^2 + c^2} \text{and cyclic permutations, so}\\ xy + yz + zx &= 1 \\ xyz &= \frac{abc}{(a^2 + b^2 + c^2)^{3/2}} \\ x + y + z - xyz &= \frac{[(ab)^2 + (bc)^2 + 
(ca)^2](a^2 + b^2 + c^2) - (abc)^2}{abc(a^2 + b^2 + c^2)^{3/2}} \neq 0 \end{align}$$ so you wind up with $\tan^{-1}\frac{x + y + z - xyz}{0} = \frac{\pi}{2}$, giving $\sum P = 1$.
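A small numeric sketch of the resulting formula, checking the cube case and that the three probabilities sum to 1:

```python
import math

def face_probs(a, b, c):
    """Solid-angle probabilities (P_ab, P_bc, P_ca) for an a x b x c cuboid."""
    s = math.sqrt(a*a + b*b + c*c)
    P = lambda p, q, r: (2/math.pi) * math.atan(p*q / (r*s))
    return P(a, b, c), P(b, c, a), P(c, a, b)

print(face_probs(1, 1, 1))       # each probability is 1/3 for a cube
print(sum(face_probs(2, 3, 5)))  # 1.0 up to rounding
```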
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
How to find the logarithm of Pauli Matrix? When I solve some physics problems, it helps a lot if I can find the logarithm of a Pauli matrix. E.g. for $\sigma_{x}=\left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right)$, find the matrix $A$ such that $e^{A}=\sigma_{x}$. At first, I found a formula that works only for real matrices: $$\exp\left[\left(\begin{array}{cc} a & b\\ c & d \end{array}\right)\right]=\frac{e^{\frac{a+d}{2}}}{\triangle}\left(\begin{array}{cc} \triangle \cosh(\frac{\triangle}{2})+(a-d)\sinh(\frac{\triangle}{2}) & 2b\cdot \sinh(\frac{\triangle}{2})\\ 2c\cdot \sinh(\frac{\triangle}{2}) & \triangle \cosh(\frac{\triangle}{2})+(d-a)\sinh(\frac{\triangle}{2}) \end{array}\right)$$ where $\triangle=\sqrt{\left(a-d\right)^{2}+4bc}$, but this formula yields no solution for this example. After that, I tried to Taylor-expand the logarithm of $\sigma_{x}$: $$ \log\left[I+\left(\sigma_{x}-I\right)\right]=\left(\sigma_{x}-I\right)-\frac{\left(\sigma_{x}-I\right)^{2}}{2}+\frac{\left(\sigma_{x}-I\right)^{3}}{3}... $$ $$ \left(\sigma_{x}-I\right)=\left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} -2 & 0\\ 0 & 0 \end{array}\right)\left(\begin{array}{cc} -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} \end{array}\right) $$ \begin{eqnarray*} \log\left[I+\left(\sigma_{x}-I\right)\right] & = & \left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right)\left[\left(\begin{array}{cc} -2 & 0\\ 0 & 0 \end{array}\right)-\left(\begin{array}{cc} \frac{\left(-2\right)^{2}}{2} & 0\\ 0 & 0 \end{array}\right)...\right]\left(\begin{array}{cc} -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} \end{array}\right)\\ & = & \left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} -\infty & 0\\ 0 & 0 \end{array}\right)\left(\begin{array}{cc} -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} \end{array}\right) \end{eqnarray*} This method also can't give me a solution, since the series diverges (hence the $-\infty$ entry).
As March comments: $e^{ia(\hat{n}\cdot\vec{\sigma})}=I\cos(a)+i(\hat{n}\cdot\vec{\sigma})\sin(a)$ would be of use. For this example, set $a=\frac{\pi}{2}$, $\hat{n}\cdot\vec{\sigma}=\sigma_{x}$, and the Euler formula is rewritten as: $$ e^{i\frac{\pi}{2}\sigma_{x}}=i\sigma_{x}=e^{i\frac{\pi}{2}I}\sigma_{x} \tag{1}$$ Since $[\sigma_{x},I]=0$, we can combine the two exponentials to get: $$ e^{i\frac{\pi}{2}\sigma_{x}-i\frac{\pi}{2}I}=\sigma_{x} \tag{2}$$ Finally, we get a solution for $A$: $A=i\frac{\pi}{2}\sigma_{x}-i\frac{\pi}{2}I=\left(\begin{array}{cc} -i\frac{\pi}{2} & i\frac{\pi}{2}\\ i\frac{\pi}{2} & -i\frac{\pi}{2} \end{array}\right)$. If we consider periodic conditions in equations (1) and (2), we may get the same result as higgsss gets here. The procedure is identical for the other Pauli matrices apart from the subscript; as a result, $A=i\frac{\pi}{2}(\sigma_{j}-I)$ is a solution for $e^{A}=\sigma_{j}$, $j\in\{x,y,z\}$.
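A quick numerical verification that this $A$ indeed exponentiates to $\sigma_x$ (a sketch; the matrix exponential is computed by eigendecomposition, valid here because $\sigma_x - I$ is real symmetric):

```python
import numpy as np

sigma_x = np.array([[0., 1.], [1., 0.]])
B = sigma_x - np.eye(2)            # A = i*pi/2 * B

w, V = np.linalg.eigh(B)           # B = V diag(w) V^T
expA = V @ np.diag(np.exp(1j*np.pi/2*w)) @ V.conj().T

print(np.allclose(expA, sigma_x))  # True
```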
{ "language": "en", "url": "https://physics.stackexchange.com/questions/225668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
What is the vector corresponding to a two-particle product state $|\psi\rangle_1|\psi\rangle_2$? Let's suppose we have two particles described by \begin{equation} |\psi\rangle_1 = \begin{pmatrix} a \\ b \end{pmatrix}_1 \text{ and } |\psi\rangle_2 = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}_2 \end{equation} If the particles don't interact, the product state $|\psi\rangle$ of both states describes the whole system. \begin{equation} |\psi\rangle=|\psi\rangle_1 |\psi\rangle_2 \end{equation} If I now want to write the state as a vector, I get the following \begin{equation} |\psi\rangle =\begin{pmatrix} a \\ b \\ \alpha \\ \beta \end{pmatrix} \end{equation} Is this the correct way of doing this? Assuming both one-particle states have length $1$, we get \begin{equation} \sqrt{|a|^2+|b|^2+|\alpha|^2 + |\beta|^2}=\sqrt{2} \end{equation} Obviously the product state should be normalized! What is the right way to fix this? Should we define the product state as follows \begin{equation} \frac{1}{\sqrt{2}}\begin{pmatrix} a \\ b \\ \alpha \\ \beta \end{pmatrix} \end{equation} or should we redefine the length of the vector like this \begin{equation} \sqrt{|a|^2+|b|^2} \cdot \sqrt{|\alpha|^2 + |\beta|^2} = 1 \end{equation} Or are both approaches possible? Or are both wrong? I never really found anything on this in the literature, because texts all stick to the bra-ket notation.
The tensor product of two 2-entry vectors is not simply a 4-entry vector and does not behave like it, e.g. when you think about things as simple addition of two such objects. For example, $$\left( \begin{array}{c} a \\ b \end{array} \right) \otimes \left( \begin{array}{c} c \\ d \end{array} \right) + \left( \begin{array}{c} a \\ b \end{array} \right) \otimes \left( \begin{array}{c} e \\ f \end{array} \right) = \left( \begin{array}{c} a \\ b \end{array} \right) \otimes \left( \begin{array}{c} c+e \\ d+f \end{array} \right) \neq \left( \begin{array}{c} a+a \\ b+b \end{array} \right) \otimes \left( \begin{array}{c} c+e \\ d+f \end{array} \right).$$ This gets even worse with the norm or things like this since the right norm is the product of the norms of the two vectors: $$ \left\| \left( \begin{array}{c} a \\ b \end{array} \right) \otimes \left( \begin{array}{c} c \\ d \end{array} \right) \right\| = \left\| \left( \begin{array}{c} a \\ b \end{array} \right) \right\|_1 \cdot \left\| \left( \begin{array}{c} c \\ d \end{array} \right) \right\|_2. $$ There is actually a way of rewriting a tensor product of two vectors as one vector, but in a different way than you did. The tensor product of our spaces is four-dimensional, with a basis e.g. given as $$ \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ 0 \end{array} \right), \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 1 \end{array} \right), \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ 0 \end{array} \right), \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 1 \end{array} \right) .$$ In this basis, the four-entry vector from your question correctly reads $$ \left( \begin{array}{c} a \alpha \\ a \beta \\ b \alpha \\ b \beta \end{array} \right) $$ and now you can compute the norm the usual way: $$ \sqrt{ a^2 \alpha^2 + a^2 \beta^2 + b^2 \alpha^2 + b^2 \beta^2 } = 1.$$
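In code, the four-entry vector in this basis is just the Kronecker product of the two spinors; a minimal numpy sketch (the example amplitudes are mine):

```python
import numpy as np

psi1 = np.array([1, 1]) / np.sqrt(2)    # (a, b), normalized
psi2 = np.array([1, 1j]) / np.sqrt(2)   # (alpha, beta), normalized

psi = np.kron(psi1, psi2)  # (a*alpha, a*beta, b*alpha, b*beta)
print(psi)
print(np.linalg.norm(psi))  # ≈ 1.0, the product of the two norms
```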
{ "language": "en", "url": "https://physics.stackexchange.com/questions/355797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Problem with loop Integral (HQET) I have come across the Integral: $$ \int_0^{\infty}dx [x^2-ixa+c]^{n-\frac{d}{2}}e^{-bx},$$ where $n = 1,2 ; a,b,c,d \in \mathbb{R}; b,d > 0$. This integral should contain some divergences for $d \rightarrow 4$ (for $c=0$). I guess one must be able to write it as some combination of Gamma functions.
I'll start by completing the square, then use the binomial theorem twice before simplifying the resulting expression. Let's go. \begin{align} I&=\int_0^\infty (x^2-iax+c)^{n-\frac{d}{2}} e^{-bx}\,dx \\ &=\int_0^\infty \left[\left(x-\frac{ia}{2}\right)^2+\frac{a^2+4c}{4}\right]^{n-\frac{d}{2}} e^{-bx}\,dx \\ &=\int_0^\infty \sum_{k=0}^{n-\frac{d}{2}} {n-\frac{d}{2}\choose k} \left(x-\frac{ia}{2}\right)^{2k} \left(\frac{a^2+4c}{4} \right)^{n-\frac{d}{2}-k} e^{-bx}\,dx \\ &=\int_0^\infty \sum_{k=0}^{n-\frac{d}{2}}\sum_{m=0}^{2k} {n-\frac{d}{2}\choose k} {2k \choose m} \left(\frac{a^2+4c}{4} \right)^{n-\frac{d}{2}-k} \left(-\frac{ia}{2} \right)^{2k-m} x^m e^{-bx}\,dx \\ &= \sum_{k=0}^{n-\frac{d}{2}}\sum_{m=0}^{2k} {n-\frac{d}{2}\choose k} {2k \choose m} \left(\frac{a^2+4c}{4} \right)^{n-\frac{d}{2}-k} \left(-\frac{ia}{2} \right)^{2k-m} \int_0^\infty x^m e^{-bx}\,dx \\ &= \sum_{k=0}^{n-\frac{d}{2}}\sum_{m=0}^{2k} {n-\frac{d}{2}\choose k} {2k \choose m} \left(\frac{a^2+4c}{4} \right)^{n-\frac{d}{2}-k} \left(-\frac{ia}{2} \right)^{2k-m}\frac{m!}{b^{m+1}} \\ \end{align} After using Wolfram Alpha to simplify the inner sum, we obtain \begin{align} I&= e^{-iab/2}\sum_{k=0}^{n-\frac{d}{2}} \frac{\Gamma\left(2k+1, -\frac{iab}{2}\right)}{b^{2k+1}} {n-\frac{d}{2}\choose k} \left(\frac{a^2+4c}{4} \right)^{n-\frac{d}{2}-k} \\ \end{align}
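The elementary integral used in the second-to-last step, $\int_0^\infty x^m e^{-bx}\,dx = m!/b^{m+1}$, can be checked symbolically (a quick sympy sketch):

```python
import sympy as sp

x, b = sp.symbols('x b', positive=True)

# check  int_0^oo x^m e^(-b x) dx = m!/b^(m+1)  for the first few m
vals = [sp.integrate(x**m * sp.exp(-b*x), (x, 0, sp.oo)) for m in range(4)]
print(vals)
```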
{ "language": "en", "url": "https://physics.stackexchange.com/questions/447157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Momentum of a moving object in FRW metric according to an observer comoving with cosmic expansion I would like to show that in an FRW metric the momentum of a freely falling object decays as the inverse of the scale factor. I know there are many proofs and arguments for this but I am trying to get this starting from geodesics and having some trouble. General logic My general approach is the following. The object is moving along some path. I want to find the momentum of the object that would be measured by a coincident, locally inertial observer, i.e. a detector that is freely falling and locally cannot tell that spacetime is curved. In the locally inertial frame where the observer is momentarily at rest the 4-velocity of the observer has components $u^\mu\equiv d\xi^\mu/d\sigma= (1,0,0,0)$, where $\xi^\mu$ are the coordinates of this local inertial (cartesian) frame and $\sigma$ is the observer's proper time (in this frame $d\sigma=d\xi^0$). In this same frame the object has 4-momentum $p^\mu \equiv md\xi^\mu/d\tau$, where $\tau$ is the object's proper time, and $m$ the rest mass. Since special relativity holds in this frame the components of $p^\mu$ have the values $p^\mu=(E_\mathrm{obs},\mathbf{p}_\mathrm{obs})$, the energy and momentum that would be measured by the observer at rest in this frame. They obey $E_\mathrm{obs}^2-\mathbf{p}_\mathrm{obs}^2=m^2$. We can retrieve the energy $E$ by contracting $u^\mu$ with $p^\mu$: $$ E_\mathrm{obs}=g_{\mu\nu}u^\mu p^\nu, $$ where $g_{\mu\nu}=\text{diag}(1,-1,-1,-1)$ are the components of the metric in this locally inertial coordinate system. The magnitude of the 3-momentum in this system is just $$ |\mathbf{p}_\mathrm{obs}| = \sqrt{(g_{\mu\nu}u^\mu p^\nu)^2 - m^2}. $$ The right hand side is invariant under general coordinate transformations. 
Therefore, I think we can calculate the RHS in any coordinate system we want and the value will be the magnitude of the 3-momentum as measured by the observer corresponding to the path $u^\mu$ in a locally inertial frame. Is this ok so far? FRW metric $x^\mu = (t, \mathbf{x})$ are the coordinates of the FRW metric, defined by: $$ d\tau^2 = dt^2 - a(t)^2 \left(d\mathbf{x}^2 + K \frac{(\mathbf{x}\cdot d\mathbf{x})^2}{1-K\mathbf{x}^2} \right), $$ where $K =0$, $+1$, or $−1$. The geodesic paths can be found using the Euler-Lagrange equations (see this post). The resulting equations are \begin{align} 0 &= \frac{d^2 t}{d\tau^2} + a\dot{a} \left(\mathbf{x}'^2 +\frac{K(\mathbf{x}\cdot \mathbf{x}')^2}{1-K \mathbf{x}^2}\right)\\ \mathbf{0} &= \frac{d}{d\tau} \left[ a^2\left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right) \right] - \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} a^2 \left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right), \end{align} where a prime means $d/d\tau$ and $\dot{a}=da/dt$ is a function of $t$. The simple solutions to these equations are $t=\tau, \mathbf{x}=\text{const}$. These solutions correspond to the "comoving" observers that move along with the cosmic expansion. The world lines of these observers all have 4-velocity $u^\mu=(1,0,0,0)$ in this coordinate system. When the object is at some location $\mathbf{x}$ at time $t$ I want to get the momentum as measured in a locally inertial frame in which the comoving observer that sits at position $\mathbf{x}$ is momentarily at rest. To get the equations of motion for an arbitrary freely falling object you have to integrate the above equations. I was able to do this only for the second equation but I think that's enough. 
Define $\mathbf{f}$ by $$ \mathbf{f} \equiv a^2\left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right), $$ to write the second equation as $$ \mathbf{0} = \frac{d \mathbf{f}}{d\tau} - \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} \mathbf{f}. $$ Multiply through by the integrating factor $\sqrt{1-K\mathbf{x}^2}$ to get \begin{align} \mathbf{0} &= \sqrt{1-K\mathbf{x}^2}\frac{d \mathbf{f}}{d\tau} - \frac{K (\mathbf{x} \cdot \mathbf{x}')}{\sqrt{1-K\mathbf{x}^2}} \mathbf{f} \\ &= \sqrt{1-K\mathbf{x}^2}\frac{d \mathbf{f}}{d\tau} + \mathbf{f} \frac{d}{d\tau}\sqrt{1-K\mathbf{x}^2} \\ &= \frac{d}{d\tau}\left(\mathbf{f} \sqrt{1-K\mathbf{x}^2}\right). \end{align} The solution is \begin{align} \frac{\mathbf{c}}{\sqrt{1-K\mathbf{x}^2}} = a^2\left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right), \end{align} for some constant 3-vector $\mathbf{c}$. The term $(\mathbf{x}\cdot\mathbf{x}')$ can be isolated by dotting this equation with $\mathbf{x}$: \begin{align} \frac{\mathbf{c}\cdot\mathbf{x}}{\sqrt{1-K\mathbf{x}^2}} &= a^2\left((\mathbf{x} \cdot \mathbf{x}') + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}^2}{1-K\mathbf{x}^2}\right)\\ &= a^2 (\mathbf{x} \cdot \mathbf{x}') \left(1+ \frac{K\mathbf{x}^2}{1-K\mathbf{x}^2}\right)\\ &= a^2 \frac{\mathbf{x} \cdot \mathbf{x}'}{1-K\mathbf{x}^2}. \end{align} Now $\mathbf{x}'$ can be written in terms of $\mathbf{x}$: \begin{align} a^2 \mathbf{x}' &= \frac{\mathbf{c}}{\sqrt{1-K\mathbf{x}^2}} - a^2 \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\\ &= \frac{\mathbf{c}}{\sqrt{1-K\mathbf{x}^2}} - \frac{K(\mathbf{c}\cdot\mathbf{x})\mathbf{x}}{\sqrt{1-K\mathbf{x}^2}}\\ &= \frac{\mathbf{c} - K(\mathbf{c}\cdot\mathbf{x})\mathbf{x}}{\sqrt{1-K\mathbf{x}^2}}. 
\end{align} Finally the spatial part of the object's 4-momentum is \begin{align} p^i = m\frac{d\mathbf{x}}{d\tau} = m \frac{\mathbf{c} - K(\mathbf{c}\cdot\mathbf{x})\mathbf{x}}{a^2\sqrt{1-K\mathbf{x}^2}}. \end{align} Now to try to use the $|\mathbf{p}_\mathrm{obs}| = \sqrt{(g_{\mu\nu}u^\mu p^\nu)^2 - m^2}$ formula I had earlier. Since $u^\mu=(1,0,0,0)$, $g_{00}=1$, and $g_{0i}=0$ at $\mathbf{x}$, $|\mathbf{p}_\mathrm{obs}| = \sqrt{(p^0)^2 - m^2}$. Writing out $g_{\mu\nu}p^\mu p^\nu = m^2$ gives \begin{align} m^2 &= (p^0)^2 - m^2 a^2\left(\mathbf{x}'^2 + K \frac{(\mathbf{x}\cdot \mathbf{x}')^2}{1-K\mathbf{x}^2} \right) \\ &= (p^0)^2 - m^2 (\mathbf{f} \cdot \mathbf{x}') \\ &= (p^0)^2 - m^2 \frac{\mathbf{c}}{\sqrt{1-K\mathbf{x}^2}} \cdot \frac{\mathbf{c} - K(\mathbf{c}\cdot\mathbf{x})\mathbf{x}}{a^2\sqrt{1-K\mathbf{x}^2}} \\ & = (p^0)^2 - m^2 \frac{\mathbf{c}^2 - K(\mathbf{c}\cdot\mathbf{x})^2}{a^2 (1-K\mathbf{x}^2)}, \end{align} and I get $$ |\mathbf{p}_\mathrm{obs}| = \sqrt{(p^0)^2 - m^2} = \frac{m}{a}\sqrt{\frac{\mathbf{c}^2 - K(\mathbf{c}\cdot\mathbf{x})^2}{1-K\mathbf{x}^2}}. $$ But I don't think this is proportional to $1/a$ in general. If the object passes through the origin then everything works as I expect. In this case $\mathbf{c}$ is parallel to $\mathbf{x}'$ when $\mathbf{x}=0$ and the equation of motion keeps it moving along a straight line parallel to this initial $\mathbf{x}'$ so $\mathbf{x}$ remains parallel to $\mathbf{c}$ and $(\mathbf{c} \cdot \mathbf{x})^2 = \mathbf{c}^2\mathbf{x}^2$ and the term in the square root is just a constant $|\mathbf{c}|$. But for an object flying somewhere else, not passing through the origin I think the angle between $\mathbf{c}$ and $\mathbf{x}$ will change as it goes (and in particular it will not always be $0$) and so the momentum observed by a locally inertial observer will not simply scale as $1/a$. 
I guess maybe the angle between $\mathbf{c}$ and $\mathbf{x}$ changes as $\mathbf{x}^2$ changes in just such a way as to keep the term in the square root constant but I don't see how to show that. Or do I have the entire logic of the exercise wrong?
It turns out that last factor in the square root is actually a constant -- so everything works nicely and momentum decays as $1/a$ for any object moving on a geodesic as it's supposed to. I took the derivative of the term in the square root: $$ \frac{\mathbf{c}^2-K(\mathbf{c}\cdot \mathbf{x})^2}{1-K\mathbf{x}^2} \equiv A. $$ This introduces the terms $\mathbf{c}\cdot \mathbf{x}'$ and $\mathbf{x}\cdot \mathbf{x}'$, which we can get rid of using some of the formulas in the original post: \begin{align} \mathbf{c}\cdot \mathbf{x}' &= \frac{\mathbf{c}^2-K(\mathbf{c}\cdot \mathbf{x})^2}{a^2\sqrt{1-K\mathbf{x}^2}} = \frac{\sqrt{1-K\mathbf{x}^2}}{a^2}A \\ \mathbf{x}\cdot \mathbf{x}' &= \frac{1}{a^2}\sqrt{1-K\mathbf{x}^2}(\mathbf{c}\cdot \mathbf{x}) \end{align} \begin{align} \frac{dA}{d\tau} &=\frac{-2K (\mathbf{c}\cdot \mathbf{x})(\mathbf{c}\cdot \mathbf{x}')}{1-K\mathbf{x}^2} + \frac{\left(\mathbf{c}^2-K(\mathbf{c}\cdot \mathbf{x})^2\right)2K(\mathbf{x}\cdot \mathbf{x}')}{(1-K\mathbf{x}^2)^2}\\ &= \frac{-2K (\mathbf{c}\cdot \mathbf{x})}{1-K\mathbf{x}^2}\frac{\sqrt{1-K\mathbf{x}^2}}{a^2}A + \frac{A}{1-K\mathbf{x}^2} \frac{2K}{a^2}\sqrt{1-K\mathbf{x}^2}(\mathbf{c}\cdot \mathbf{x})\\ &= 0. \end{align} So $$ |\mathbf{p}_\mathrm{obs}| = \frac{m}{a}\sqrt{A}= \frac{\text{const}}{a}. $$
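The same cancellation can be checked symbolically. Writing $u=\mathbf{c}\cdot\mathbf{x}$, $v=\mathbf{x}^2$, $C=\mathbf{c}^2$, and absorbing the common factor $\sqrt{1-K\mathbf{x}^2}/a^2$ into one symbol $s$, a short sympy sketch:

```python
import sympy as sp

tau, K, C, s = sp.symbols('tau K C s')
u = sp.Function('u')(tau)   # u = c.x
v = sp.Function('v')(tau)   # v = x.x

A = (C - K*u**2) / (1 - K*v)
dA = sp.diff(A, tau)

# geodesic relations derived in the post:  u' = s*A,  v' = 2*s*u
dA = dA.subs({u.diff(tau): s*A, v.diff(tau): 2*s*u})
print(sp.simplify(dA))  # 0 -> A is conserved along the geodesic
```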
{ "language": "en", "url": "https://physics.stackexchange.com/questions/476082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Area of Kerr-Newman event horizon I want to calculate the area of the event horizon of a Kerr-Newman black hole using Boyer-Lindquist coordinates. I searched the web a lot, but I could not find any information about calculating the event-horizon radius for Kerr-Newman. Can anyone help me find some information, or show me how to calculate it?
In geometrical-Gaussian units with $G$, $c$, and $\frac{1}{4\pi\epsilon_0}$ equal to 1, the Kerr-Newman metric for a black hole of mass $M$, angular momentum $J=aM$, and charge $Q$ is $$\begin{align} ds^2= &-\left(1-\frac{2Mr-Q^2}{r^2+a^2\cos^2{\theta}}\right)dt^2 +\frac{r^2+a^2\cos^2{\theta}}{r^2-2Mr+a^2+Q^2}dr^2\\ &+(r^2+a^2\cos^2{\theta)}\,d\theta^2 +\left(r^2+a^2+\frac{a^2(2Mr-Q^2)\sin^2{\theta}}{r^2+a^2\cos^2{\theta}}\right)\sin^2{\theta}\,d\phi^2\\ &-\frac{2a(2Mr-Q^2)\sin^2{\theta}}{r^2+a^2\cos^2{\theta}}\,dt\,d\phi \end{align}$$ in Boyer-Lindquist coordinates $(t,r,\theta,\phi)$. (When $Q$ is zero, this reduces to Wikipedia’s form for the Kerr metric. Wikipedia’s form for the Kerr-Newman metric is equivalent to the above, but seems less straightforward.) The $g_{rr}$ component of the metric tensor is infinite when the denominator $r^2-2Mr+a^2+Q^2$ is zero. This happens at two radial coordinates, $$r_\pm=M\pm\sqrt{M^2-a^2-Q^2}.$$ The event horizon is at $r_+$. We want to find the area of this surface. The 2D metric on the surface $t=$ constant and $r=r_+$ is $$ds_+^2= (r_+^2+a^2\cos^2{\theta)}\,d\theta^2 +\left(r_+^2+a^2+\frac{a^2(2Mr_+-Q^2)\sin^2{\theta}}{r_+^2+a^2\cos^2{\theta}}\right)\sin^2{\theta}\,d\phi^2$$ and the area element on this surface is $$\begin{align} dA_+&=\sqrt{\det{g_+}}\,d\theta\,d\phi\\ &=\sqrt{(r_+^2+a^2\cos^2{\theta})\left(r_+^2+a^2+\frac{a^2(2Mr_+-Q^2)\sin^2{\theta}}{r_+^2+a^2\cos^2{\theta}}\right)}\sin{\theta}\,d\theta\,d\phi\\ &=\sqrt{(r_+^2+a^2\cos^2{\theta})(r_+^2+a^2)+a^2(2Mr_+-Q^2)(1-\cos^2{\theta})}\sin{\theta}\,d\theta\,d\phi\\ &=\sqrt{(r_+^4+a^2r_+^2+2Ma^2r_+-a^2Q^2)+a^2(r_+^2-2Mr_++a^2+Q^2)\cos^2{\theta}}\sin{\theta}\,d\theta\,d\phi. 
\end{align}$$ Conveniently, the coefficient of $\cos^2{\theta}$ in the square root vanishes by the definition of $r_+$, $$r_+^2-2Mr_++a^2+Q^2=0,$$ and, using this equation to eliminate $M$ in the first term in the square root, what's left under the square root becomes the perfect square $(r_+^2+a^2)^2$. Thus the area element simplifies to the trivial-to-integrate $$dA_+=(r_+^2+a^2)\sin{\theta}\,d\theta\,d\phi.$$ Integrating over $\theta$ from $0$ to $\pi$ and over $\phi$ from $0$ to $2\pi$ gives the area of the event horizon, $$A_+=4\pi(r_+^2+a^2)=4\pi\left(2M^2-Q^2+2M\sqrt{M^2-a^2-Q^2}\right).$$
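A quick numeric sketch of the final result, checking the Schwarzschild limit $A_+=16\pi M^2$ and agreement with the closed form (geometrical units):

```python
import math

def horizon_area(M, a, Q):
    """Area of the Kerr-Newman outer horizon, A = 4*pi*(r_+^2 + a^2)."""
    r_plus = M + math.sqrt(M**2 - a**2 - Q**2)
    return 4*math.pi*(r_plus**2 + a**2)

# Schwarzschild limit a = Q = 0: A = 16*pi*M^2
print(horizon_area(1.0, 0.0, 0.0) / (16*math.pi))  # 1.0

# agreement with 4*pi*(2M^2 - Q^2 + 2M*sqrt(M^2 - a^2 - Q^2))
M, a, Q = 1.0, 0.6, 0.5
closed = 4*math.pi*(2*M**2 - Q**2 + 2*M*math.sqrt(M**2 - a**2 - Q**2))
print(abs(horizon_area(M, a, Q) - closed) < 1e-12)  # True
```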
{ "language": "en", "url": "https://physics.stackexchange.com/questions/482962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to write units when multiple terms are involved in a derivation? Say I am going to write down the steps of some calculations to get the final value of $s$ from an equation like this: $$ s = s_0 + \frac12 gt^2. $$ Let us say $s_0 = 20\,\mathrm{m}$, $g = 10\,\mathrm{ms^{-2}}$, and $t = 2\,\mathrm{s}$. What is the right way to write down the units in every step so that the formula, LHS, and RHS are always consistent in terms of units? Alternative 1 \begin{align*} s &= s_0 + \frac12gt^2 \\ &= 20\,\mathrm{m} + \frac{1}{2} \cdot 10\,\mathrm{m\,s^{-2}} \cdot (2\,\mathrm{s})^2 \\ &= 20\,\mathrm{m} + 20\,\mathrm{m} \\ &= 40\,\mathrm{m}. \end{align*} Alternative 2 \begin{align*} s &= s_0 + \frac12gt^2 \\ &= 20\,\mathrm{m} + \frac{1}{2} \cdot 10\,\mathrm{m\,s^{-2}} \cdot 2^2\,\mathrm{s^2} \\ &= 20\,\mathrm{m} + 20\,\mathrm{m} \\ &= 40\,\mathrm{m}. \end{align*} Alternative 3 \begin{align*} s &= s_0 + \frac12gt^2 \\ &= 20\,\mathrm{m} + \left(\frac{1}{2} \cdot 10 \cdot 2^2\right)\,\mathrm{m} \\ &= 20\,\mathrm{m} + 20\,\mathrm{m} \\ &= 40\,\mathrm{m}. \end{align*} Alternative 4 \begin{align*} s &= s_0 + \frac12gt^2 \\ &= \left(20 + \frac{1}{2} \cdot 10 \cdot 2^2\right)\,\mathrm{m} \\ &= 20\,\mathrm{m} + 20\,\mathrm{m} \\ &= 40\,\mathrm{m}. \end{align*} Are all alternatives correct? Is there any alternative above, or another alternative, that is widely used in the literature?
I perfectly agree with garyp: There is no right or wrong way to put the units into the equations. However, in my experience people tend to make mistakes if they rewrite the equations. That's why I recommend * *converting all units to the standard SI units (no prefixes like nano or kilo), *doing the insertion and calculation of the numbers as a side calculation, where we do not include the units, and *finally writing just the final result with the SI unit as the solution. Hence, your calculation would read $$ s = s_0 + \frac{1}{2}g t ^2 = \fbox{$40\,\mathrm{m}$} $$ and the side calculation would read $$ 20 + \frac{1}{2} \cdot 10 \cdot 2^2 = 20 + 20 = 40. $$
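The same workflow translates directly into a quick script: keep the numbers in SI base units, let comments carry the units, and attach the unit only to the final result. A minimal sketch:

```python
# all values in SI base units; comments carry the units
s0 = 20.0   # m
g = 10.0    # m s^-2
t = 2.0     # s

# side calculation: numbers only, s = s0 + (1/2) g t^2
s = s0 + 0.5 * g * t**2
assert s == 40.0

# final result stated with its SI unit
print(f"s = {s} m")
```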
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Motion of a projectile in a rotated coordinate system, and converting equations to the new system Context: This in essence is explained on Mathematics of Classical & Quantum Physics by Byron & Fuller. This is from Section $1.4$, pages $12-13$. So, suppose we launch a projectile (mass $m$) in the $x_1x_2$ plane, with initial velocity $\mathbf{v_0}$, at an angle of $\theta$ from the positive $x_1$-axis. We would like to consider the equations of motion in the $x_1'x_2'$ plane, which (to my understanding) should be formed by rotating the former coordinates (call them $K$) by $\theta$ to get a new set ($K'$). We keep things simple and only assume a downward force, $-mg$, per Newton's law. The text communicates this situation with this diagram. (I believe there is a typo; it should be $x_2$ and $x_2'$ wherever $y_2$ and $y_2'$, respectively, are mentioned.) To keep things brief, the text claims the following equations of motion, in the $K$ frame and the $K'$ frame respectively: $$\begin{align*} \newcommand{\t}{\theta} m \ddot x_1 = 0 &\xrightarrow{\text{rotate $K \to K'$}} m \ddot x_1' = -mg \sin \t \tag{1} \\ m \ddot x_2 = -mg &\xrightarrow{\text{rotate $K \to K'$}} m \ddot x_2' = -mg \cos \t \tag{2}\\ x_1(0) = 0 &\xrightarrow{\text{rotate $K \to K'$}} x_1'(0) = 0 \tag{3} \\ x_2(0) = 0 &\xrightarrow{\text{rotate $K \to K'$}} x_2'(0) = 0 \tag{4} \\ \dot x_1(0) = v_0 \cos \t &\xrightarrow{\text{rotate $K \to K'$}} \dot x_1'(0) = v_0 \tag{5} \\ \dot x_2(0) = v_0 \sin \t &\xrightarrow{\text{rotate $K \to K'$}} \dot x_2'(0) = 0 \tag{6} \\ x_2 = \frac{-g}{2v_0^2 \cos^2 \t} x_1^2 + x_1 \tan \t &\xrightarrow{\text{rotate $K \to K'$}} x_1' = x_2' \tan \t + v_0 \sqrt{ \frac{2 x_2'}{-g \cos \t}} \tag{7} \end{align*}$$ My goal is to verify these. That is, I want to figure out how to convert from $K$ to $K'$, and then verify the above conversions. I would think that the natural conversion is simply through a rotation matrix. 
As you'll recall, the matrix associated with rotating $\mathbb{R}^2$ by $\theta$ counterclockwise is $$R = \begin{pmatrix} \cos \t & -\sin \t \\ \sin \t & \cos \t \end{pmatrix} \newcommand{\m}[1]{\begin{pmatrix} #1 \end{pmatrix}}$$ Hence, to verify $(1)$ and $(2)$, we would look at this calculation: $$\begin{align*}\m{ \cos \t & -\sin \t \\ \sin \t & \cos \t} \m{ \ddot x_1 \\ \ddot x_2} &= \m{ \cos \t & -\sin \t \\ \sin \t & \cos \t} \m{ 0 \\ -g} \\ &= \m{g \sin \t \\ -g \cos \t} \end{align*} $$ This does not give me the claimed $(\ddot x_1', \ddot x_2')^T = (-g \sin \t, -g \cos \t)^T$, however... Looking at $(3)$ and $(4)$ is trivial, so we'll ignore them; the subsequent pair, $(5)$ and $(6)$, give us $$\begin{align*}\m{ \cos \t & -\sin \t \\ \sin \t & \cos \t} \m{ \dot x_1(0) \\ \dot x_2(0) } &= \m{ \cos \t & -\sin \t \\ \sin \t & \cos \t} \m{ v_0 \cos \t \\ v_0 \sin \t} \\ &= \m{ v_0 \cos^2 \t - v_0 \sin^2 \t \\ v_0 \sin \t \cos \t + v_0 \sin \t \cos \t } \end{align*}$$ again problematic. What bugs me further is that if I tweak $R$ a bit, to get, say, $$S = R^T = R^{-1}= \begin{pmatrix} \cos \t & \sin \t \\ -\sin \t & \cos \t \end{pmatrix}$$ then something else happens. In $(5)$ and $(6)$, applying $S$ as the transformation $K \to K'$ gives $$\begin{align*}\m{ \cos \t & \sin \t \\ -\sin \t & \cos \t} \m{ \dot x_1(0) \\ \dot x_2(0) } &= \m{ \cos \t & \sin \t \\ -\sin \t & \cos \t} \m{ v_0 \cos \t \\ v_0 \sin \t} \\ &= \m{ v_0 \cos^2 \t + v_0 \sin^2 \t \\ -v_0 \sin \t \cos \t + v_0 \sin \t \cos \t } \\ &=\m{ v_0 \\ 0} \\ \end{align*}$$ as we'd expect! And if I look at $(1)$ and $(2)$ again, we get $$\begin{align*}\m{ \cos \t & \sin \t \\ -\sin \t & \cos \t} \m{ \ddot x_1 \\ \ddot x_2} &= \m{ \cos \t & \sin \t \\ -\sin \t & \cos \t} \m{ 0 \\ -g} \\ &= \m{-g \sin \t \\ -g \cos \t} \end{align*} $$ But this does not make any sense to me. $S$ corresponds to a clockwise rotation by $\t$, not counterclockwise. Why should $S$ be giving the correct answers? Can anyone explain this strange discrepancy to me?
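The active/passive distinction at the heart of the question can be checked numerically: rotating the *axes* counterclockwise by $\theta$ means a fixed vector's *components* transform with $R^{-1} = R^T$. A minimal sketch (the values $v_0 = 5$, $\theta = 30^\circ$, $g = 10$ are arbitrary):

```python
import math

theta = math.radians(30.0)  # rotation angle of the axes (arbitrary)
v0 = 5.0                    # launch speed (arbitrary)
g = 10.0                    # gravitational acceleration (arbitrary)

# passive transformation: components of a fixed vector in axes rotated
# counterclockwise by theta, i.e. apply R^{-1} = R^T
def to_primed(vx, vy):
    return ( math.cos(theta)*vx + math.sin(theta)*vy,
            -math.sin(theta)*vx + math.cos(theta)*vy)

# initial velocity in K -> (v0, 0) in K', matching (5) and (6)
vx_p, vy_p = to_primed(v0*math.cos(theta), v0*math.sin(theta))
assert abs(vx_p - v0) < 1e-12 and abs(vy_p) < 1e-12

# gravity (0, -g) in K -> (-g sin(theta), -g cos(theta)) in K', matching (1) and (2)
gx_p, gy_p = to_primed(0.0, -g)
assert abs(gx_p + g*math.sin(theta)) < 1e-12
assert abs(gy_p + g*math.cos(theta)) < 1e-12
```

So the "discrepancy" is that $R$ rotates vectors while $R^T$ re-expresses an unchanged vector in rotated axes, which is what the $K \to K'$ change of frame requires.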
The equations of motion also seem to be correct for what it's worth. (I also apologize in advance if it's a little too basic; physics is far from my forte.)
If you rotate the gravity vector to the rotated frame $~K'~$ you obtain: \begin{align*} &\vec{g}'=\left[ \begin {array}{ccc} \cos \left( \theta \right) &\sin \left( \theta \right) &0\\ -\sin \left( \theta \right) & \cos \left( \theta \right) &0\\ 0&0&1\end {array} \right]\,\begin{bmatrix} 0 \\ -g \\ 0 \\ \end{bmatrix}=-g\begin{bmatrix} \sin(\theta) \\ \cos(\theta) \\ 0 \\ \end{bmatrix} \end{align*} from here $$x'=v_0\,t+(\vec{g}')_x\,\frac{t^2}{2}=v_0\,t-g\,\sin(\theta)\,\frac{t^2}{2}\\ y'=(\vec{g}')_y\,\frac{t^2}{2}=-g\,\cos(\theta)\,\frac{t^2}{2}$$
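These closed forms can be cross-checked against the unprimed trajectory: transforming $x(t) = v_0\cos\theta\,t$, $y(t) = v_0\sin\theta\,t - \tfrac{1}{2}gt^2$ into the rotated axes should reproduce $x'(t)$ and $y'(t)$ above. A minimal sketch (the values $\theta = 25^\circ$, $v_0 = 8$, $g = 9.81$ are arbitrary):

```python
import math

theta, v0, g = math.radians(25.0), 8.0, 9.81  # arbitrary test values

for t in (0.0, 0.3, 1.0, 2.5):
    # trajectory in the unprimed frame K
    x = v0*math.cos(theta)*t
    y = v0*math.sin(theta)*t - 0.5*g*t**2
    # same point expressed in the rotated axes K' (apply R^T)
    xp = math.cos(theta)*x + math.sin(theta)*y
    yp = -math.sin(theta)*x + math.cos(theta)*y
    # closed forms from the answer
    assert abs(xp - (v0*t - 0.5*g*math.sin(theta)*t**2)) < 1e-9
    assert abs(yp - (-0.5*g*math.cos(theta)*t**2)) < 1e-9
```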
{ "language": "en", "url": "https://physics.stackexchange.com/questions/724195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }