Composition of Lorentz Transformations If a particle is moving in the $x$-direction with velocity $c/2$, then the Lorentz transformation $\Lambda = \begin{pmatrix}\gamma & -\beta \gamma & 0 & 0 \\ -\beta \gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix}\cosh\ \phi & -\sinh\ \phi & 0 & 0 \\ -\sinh\ \phi & \cosh\ \phi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix}\frac{2}{\sqrt{3}} & -\frac{1}{\sqrt{3}} & 0 & 0 \\ -\frac{1}{\sqrt{3}} & \frac{2}{\sqrt{3}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$, where the rapidity $\phi$ is given by $\tanh\ \phi = \frac{v}{c}$. Subsequently, if the particle is moving in the $y$-direction with velocity $c/2$, then the Lorentz transformation $\Lambda' = \begin{pmatrix}\gamma & 0 & -\beta \gamma & 0 \\ 0 & 1 & 0 & 0 \\ -\beta \gamma & 0 & \gamma & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix}\cosh\ \phi & 0 & -\sinh\ \phi & 0 \\ 0 & 1 & 0 & 0 \\ -\sinh\ \phi & 0 & \cosh\ \phi & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix}\frac{2}{\sqrt{3}} & 0 & -\frac{1}{\sqrt{3}} & 0 \\ 0 & 1 & 0 & 0 \\ -\frac{1}{\sqrt{3}} & 0 & \frac{2}{\sqrt{3}} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$. Therefore, the combined transformation $\Lambda''(w) = \Lambda' \Lambda = \begin{pmatrix}\cosh^{2}\ \phi & -\sinh\ \phi\ \cosh\ \phi & -\sinh\ \phi & 0 \\ -\sinh\ \phi & \cosh\ \phi & 0 & 0 \\ -\sinh\ \phi\ \cosh\ \phi & \sinh^{2}\ \phi & \cosh\ \phi & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$. But now I'm having a bit of trouble finding the boost velocity $w$ for $\Lambda''(w)$. Any suggestions?
If you are doing what I think you are doing, then you are trying to get the boost matrix for an arbitrary direction. The way to go about that is to use the generalized boost matrix (see J. D. Jackson, page 547, or http://en.wikipedia.org/wiki/Lorentz_transformation). The matrix against which you can compare to get the velocity is the general boost matrix given on the Wikipedia page. In particular, look at the first row and first column: then $\Lambda_{11} = \gamma, \frac{c \Lambda_{12}}{\Lambda_{11}} = v_x, \frac{c \Lambda_{13}}{\Lambda_{11}} = v_y$ and $\frac{c \Lambda_{14}}{\Lambda_{11}} = v_z$ (up to the sign convention of the off-diagonal entries). Aside: Are you sure your last matrix is correct? It isn't symmetric.
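A quick numerical check of this recipe (my own sketch, not part of the original answer): with the $-\beta\gamma$ sign convention of the question's matrices, the velocity components come out as $v_i/c = -\Lambda''_{0i}/\Lambda''_{00}$.

```python
import numpy as np

# Compose the two c/2 boosts numerically and read the boost velocity of
# Lambda'' off its first row, using v_i/c = -Lambda''[0, i] / Lambda''[0, 0]
# for a boost written with -beta*gamma off-diagonal entries.
def boost(beta, axis):
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[axis, axis] = g
    L[0, axis] = L[axis, 0] = -beta * g
    return L

Lx = boost(0.5, 1)            # boost along x with v = c/2
Ly = boost(0.5, 2)            # boost along y with v = c/2
Lpp = Ly @ Lx                 # combined transformation Lambda'' = Lambda' Lambda

gamma_w = Lpp[0, 0]                    # = cosh^2(phi) = 4/3
v = -Lpp[0, 1:] / Lpp[0, 0]            # (v_x, v_y, v_z) in units of c
speed = np.linalg.norm(v)              # |w|/c = sqrt(7)/4 ~ 0.661
print(gamma_w, v, speed)
print(1.0 / np.sqrt(1.0 - speed**2))   # consistency check: equals gamma_w
```

Note that the combined boost is not a pure boost (the matrix is not symmetric, as the aside points out), but the first row still encodes $\gamma$ and the velocity.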
{ "language": "en", "url": "https://physics.stackexchange.com/questions/174624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Using the uncertainty principle to estimate the ground state energy of hydrogen I have been reading through this estimate of the ground state energy of hydrogen and others like it. In this one it says it is using the uncertainty principle but then proceeds to use the following: $$pr=\hbar$$ But why is it not using: $$pr=\frac{\hbar}{2}$$ which is more in line with the uncertainty principle and with what comes earlier in the derivation?
Note that $\Delta p_x \Delta r$ does not satisfy the uncertainty principle in the strict sense since $r$ is not conjugate to $p_x$ (or $p_y$ and $p_z$). Instead you can consider $\Delta p_x \Delta x$. The ground state of the hydrogen atom is \begin{equation} \psi_0(r) = \frac{1}{\sqrt{\pi a^3}} e^{-r/a}, \end{equation} where $a$ is the Bohr radius. First of all, $\left< x \right> = 0$, because $\psi_0$ is spherically symmetric. The fluctuation in $x$ is thus given by \begin{equation} \Delta x = \sqrt{\left< x^2 \right>} = \sqrt{\frac{\left< r^2 \right>}{3}}, \end{equation} since the ground state has spherical symmetry. For the expectation value of $r^2$, we find \begin{align} \left< r^2 \right> & = \frac{4\pi}{\pi a^3} \int_0^\infty dr \, r^4 e^{-2r/a} \\ & = \frac{4}{a^3} \frac{1}{2^4} \frac{d^4}{d(1/a)^4} \int_0^\infty dr \, e^{-2r/a} \\ & = \frac{1}{8a^3} \frac{d^4}{d(1/a)^4} \frac{1}{1/a} = \frac{4!}{8} a^2 = 3a^2, \end{align} and we obtain $\Delta x = a$. Next, $\left< \vec p \right> = 0$ because the wave function is real and normalizable ($\vec p$ is hermitian) so we have \begin{equation} \Delta p_x = \sqrt{\left< p_x^2 \right>} = \sqrt{\frac{\left< p^2 \right>}{3}}, \end{equation} just as before. With $\left< p^2 \right> = \frac{\hbar^2}{a^2}$ (see the answer of gonenc), you find $\Delta p_x = \frac{\hbar}{\sqrt{3}a}$. Finally, we obtain \begin{equation} \Delta x \Delta p_x = \frac{\hbar}{\sqrt{3}} \approx 0.58 \hbar > \frac{\hbar}{2}. \end{equation} Also note that since $\Delta r = \frac{\sqrt{3}}{2} a$ (see the answer of gonenc), we have \begin{equation} \Delta r \Delta p_x = \frac{\hbar}{2}. \end{equation}
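For what it's worth, the expectation values above are easy to cross-check numerically (my addition, not part of the original answer; units $\hbar = a = 1$, so the targets are $\langle r^2\rangle = 3$, $\langle p^2\rangle = 1$ and $\Delta x\,\Delta p_x = 1/\sqrt{3}$):

```python
import numpy as np
from scipy.integrate import quad

# Cross-check of the hydrogen ground-state expectation values, hbar = a = 1.
# |psi_0|^2 = exp(-2r)/pi, and the radial measure contributes 4*pi*r^2.
r2 = quad(lambda r: 4 * r**2 * np.exp(-2 * r) * r**2, 0, np.inf)[0]

# <p^2> = -<psi| laplacian |psi>: for psi ~ exp(-r),
# laplacian psi = (1 - 2/r) * exp(-r).
p2 = -quad(lambda r: 4 * r**2 * np.exp(-r) * (1 - 2 / r) * np.exp(-r), 0, np.inf)[0]

dx = np.sqrt(r2 / 3)           # = 1  (i.e. the Bohr radius a)
dpx = np.sqrt(p2 / 3)          # = 1/sqrt(3)
print(r2, p2, dx * dpx)        # ~ 3, 1, 0.577... = 1/sqrt(3)
```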
{ "language": "en", "url": "https://physics.stackexchange.com/questions/183960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Hindered rotation model for flexible polymers: deriving the Flory characteristic ratio In the hindered rotation model we assume constant bond angles $\theta$ and lengths $\ell$, with torsion angles between adjacent monomers being hindered by a potential $U(\phi_i)$. In Rubinstein's book, problem 2.9 asks us to derive the Flory characteristic ratio for such a model, which is given as $C_\infty=\frac{1+\cos\theta}{1-\cos\theta}\cdot\frac{1+\langle\cos\phi\rangle}{1-\langle\cos\phi\rangle}$. I am not sure where to start in working out the correlations between bonds to prove this relation. Starting from $\langle \vec{r_i}\cdot \vec{r_j}\rangle$ it seems from earlier derivations that I am expecting the correlations to be of the form $\langle \vec{r_i}\cdot \vec{r_j}\rangle = \ell^2\left(\cos^{|i-j|}\theta + \langle \cos\phi\rangle^{|i-j|}\right)$, but I am having a hard time seeing how to show that. Any insights into how to see the correlations geometrically (or an indication that I am on the wrong track entirely) would be greatly appreciated. Incidentally, it's a bit odd that there are no tags specifically for polymer physics.
The vector $\mathrm{r}_i$ is between the beads $i$ and $i+1$. Define a local coordinate system so that $x$ is along $\mathrm{r}_i$. The coordinate $y$ is defined so that $\mathrm{r}_{i-1}$, $\mathrm{r}_{i}$ are both on the same plane, and $z$ is normal to this plane. Thus we can write \begin{align} \mathrm{r}_{i-1} &= (\ell\cos\theta, -\ell\sin\theta, 0)_i^T \\ \mathrm{r}_i &= (\ell, 0, 0)_i^T \\ \mathrm{r}_{i+1} &= (\ell\cos\theta, -\ell\sin\theta\cos\varphi_{i+1}, \ell\sin\theta\sin\varphi_{i+1})_i^T \end{align} Or after working out the full coordinate transformation matrix from the above information, $$\begin{pmatrix}x \\ y \\ z \end{pmatrix}_i = A_i\begin{pmatrix}x' \\ y' \\ z' \end{pmatrix}_{i+1}$$ where $$A_i = \begin{pmatrix} \cos\theta & -\sin \theta & 0 \\ -\sin\theta\cos\varphi_{i+1} & -\cos\theta\cos\varphi_{i+1} & -\sin\varphi_{i+1} \\ \sin\theta\sin\varphi_{i+1} & \cos\theta\sin\varphi_{i+1} & -\cos\varphi_{i+1} \end{pmatrix}$$ Now \begin{align} \langle R^2 \rangle &= \left \langle \left(\sum_{i=1}^n \mathbf{r}_i\right) \cdot \left(\sum_{j=1}^n \mathbf{r}_j\right) \right\rangle = \sum_{i=1}^n \sum_{j=1}^n \langle \mathbf{r}_i \cdot \mathbf{r}_j\rangle \\ &= \sum_{i=1}^n \left( \sum_{j=1}^{i-1} \langle \mathbf{r}_i \cdot \mathbf{r}_j\rangle + \langle \| \mathbf{r}_i \|^2\rangle + \sum_{j=i+1}^{n} \langle \mathbf{r}_i \cdot \mathbf{r}_j\rangle \right) \\ &= n\ell^2 + 2\sum_{i=1}^n \sum_{j=i+1}^{n} \langle \mathbf{r}_i \cdot \mathbf{r}_j\rangle \end{align} Using the transition matrix and noting that $j>i$, \begin{align} \langle \mathbf{r}_i \cdot \mathbf{r}_j\rangle &= \ell^2 \langle(1, 0, 0) A_i \cdots A_{j-1} (1, 0, 0)^T\rangle = \ell^2 \langle A_i \cdots A_{j-1}\rangle_{11} \\ &=\ell^2 (\langle A\rangle^{j-i})_{11} \end{align} Where $A$ is any of the matrices $A_i$, for example we could set $A = A_1$. 
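As a sanity check of the key factorization step $\langle A_i \cdots A_{j-1}\rangle = \langle A\rangle^{j-i}$ (valid because the torsion angles $\varphi_i$ are independent), here is a small Monte Carlo test I added, using a uniform (hence symmetric) torsion distribution so that $\langle\cos\varphi\rangle = \langle\sin\varphi\rangle = 0$; the matrix $A$ is the one defined above.

```python
import numpy as np

# Monte Carlo check that the average of a product of independent bond
# matrices factorizes: <A_i ... A_{j-1}> = <A>^(j-i).
rng = np.random.default_rng(0)
theta = 0.6            # fixed bond angle (arbitrary choice)

def A(phi):
    c, s = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    return np.array([[c,      -s,      0.0],
                     [-s * cp, -c * cp, -sp],
                     [s * sp,  c * sp,  -cp]])

n_samples, span = 50_000, 3    # span = j - i
prod_avg = np.zeros((3, 3))
for _ in range(n_samples):
    P = np.eye(3)
    for phi in rng.uniform(-np.pi, np.pi, span):
        P = P @ A(phi)
    prod_avg += P
prod_avg /= n_samples

# <A> for uniform phi: <cos phi> = <sin phi> = 0, so only the first row survives.
A_avg = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
err = np.max(np.abs(prod_avg - np.linalg.matrix_power(A_avg, span)))
print(err)   # small: Monte Carlo sampling error only
```

The $(1,1)$ element of `A_avg**span` is $\cos^{3}\theta$, reproducing the free-rotation result $\langle \mathbf{r}_i \cdot \mathbf{r}_j\rangle = \ell^2\cos^{|i-j|}\theta$ in this special case.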
Thus $$\langle R^2 \rangle = n\ell^2 + 2\ell^2 \sum_{i=1}^n \sum_{j=i+1}^{n} (\langle A\rangle^{j-i})_{11} = n\ell^2 + 2\ell^2 \left(\sum_{i=1}^n (n-i) \langle A\rangle^{i}\right)_{11}$$ So towards the characteristic ratio: \begin{align} \frac{\langle R^2 \rangle}{n\ell^2} &= 1 + \frac{2}{n} \left(n\langle A\rangle(I-\langle A\rangle^{n})(I-\langle A\rangle)^{-1}\right. \\ & \qquad \left. - \langle A\rangle(I-(n+1)\langle A\rangle^{n} + n\langle A\rangle^{n+1})(I-\langle A\rangle)^{-2}\right)_{11} \\ & = \left((I+\langle A\rangle)(I-\langle A\rangle)^{-1} - \frac{2\langle A \rangle}{n}(I-\langle A\rangle^{n})(I-\langle A\rangle)^{-2}\right)_{11} \end{align} Correlation between two beads tends to zero as distance goes to infinity, i.e. $\lim_{n\to\infty}\langle A\rangle^{n} = 0$, so $$C_\infty = \left((I+\langle A\rangle)(I-\langle A\rangle)^{-1}\right)_{11}$$ If the potential is symmetric, by the virtue of sine being odd $\langle \sin\varphi\rangle = 0$, and we have $$\langle A\rangle = \begin{pmatrix} \cos\theta & -\sin \theta & 0 \\ -\sin\theta\langle\cos\varphi\rangle & -\cos\theta\langle\cos\varphi\rangle & 0 \\ 0 & 0 & -\langle\cos\varphi\rangle \end{pmatrix}$$ Remembering that for a 3x3 matrix $T$ $$T^{-1} = \frac{1}{\det(T)}\begin{pmatrix}\det\begin{pmatrix} T_{22} & T_{23} \\ T_{32} & T_{33} \end{pmatrix} & \det\begin{pmatrix} T_{13} & T_{12} \\ T_{33} & T_{32} \end{pmatrix} & \det\begin{pmatrix} T_{12} & T_{13} \\ T_{22} & T_{23} \end{pmatrix} \\ \det\begin{pmatrix} T_{23} & T_{21} \\ T_{33} & T_{31} \end{pmatrix} & \det\begin{pmatrix} T_{11} & T_{13} \\ T_{31} & T_{33} \end{pmatrix} & \det\begin{pmatrix} T_{13} & T_{11} \\ T_{23} & T_{21} \end{pmatrix} \\ \det\begin{pmatrix} T_{21} & T_{22} \\ T_{31} & T_{32} \end{pmatrix} & \det\begin{pmatrix} T_{12} & T_{11} \\ T_{32} & T_{31} \end{pmatrix} & \det\begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix} \end{pmatrix}$$ we have $$C_\infty = 
\frac{(1+\cos\theta)(1+\cos\theta\langle\cos\varphi\rangle) + \sin^2\theta\,\langle\cos\varphi\rangle}{(1-\cos\theta)(1+\cos\theta\langle\cos\varphi\rangle) - \sin^2\theta\,\langle\cos\varphi\rangle}$$ Writing $\sin^2\theta = (1-\cos\theta)(1+\cos\theta)$ and factoring, this gives the final relation $$C_\infty = \frac{(1+\cos\theta)(1+\langle\cos\varphi\rangle)}{(1-\cos\theta)(1-\langle\cos\varphi\rangle)}$$
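The last simplification can be verified symbolically; this is my own check, not part of the derivation (here $m$ stands for $\langle\cos\varphi\rangle$):

```python
import sympy as sp

# Check that the (1,1) element of (I + <A>)(I - <A>)^(-1) equals the
# stated Flory characteristic ratio.
th, m = sp.symbols('theta m')
Aavg = sp.Matrix([[sp.cos(th),      -sp.sin(th),     0],
                  [-m * sp.sin(th), -m * sp.cos(th), 0],
                  [0,               0,              -m]])
C = sp.simplify(((sp.eye(3) + Aavg) * (sp.eye(3) - Aavg).inv())[0, 0])
target = (1 + sp.cos(th)) * (1 + m) / ((1 - sp.cos(th)) * (1 - m))

num = (C - target).subs({th: 0.7, m: 0.3})   # arbitrary sample point
print(sp.N(num))   # ~ 0 (within floating error)
```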
{ "language": "en", "url": "https://physics.stackexchange.com/questions/256207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Generalized Euler substitution doesn't seem to work when the integration variable has a dimension I came across Euler substitutions while trying to evaluate the integral $\int \frac{y^2}{x^2+y^2+z^2 + x\sqrt{x^2+y^2+z^2}} dy$, where $x, y, z$ are length quantities. The generalized substitution at the bottom of the page, $\sqrt{ax^2 + bx + c} = \sqrt{a} + xt$ where $x$ is the integration variable, is the one I'd prefer to memorize since it's more powerful than the first three together and has no additional conditions. But how does it work in physics where the $x$ has a dimension? The $\sqrt{a}$ term never has the same unit as the other terms. In my case it can be written as $\sqrt{y^2 + x^2 + z^2} - 1 = yt$. I can't subtract a dimensionless quantity from a dimensional one. The other three substitutions don't have this problem.
You can always make a substitution that makes the integration variable dimensionless, and then this isn't an issue. Rewrite your integral as $$ \int \frac{y^2}{ x^2+z^2 + y^2 + x \sqrt{x^2 + z^2} \sqrt{\frac{y^2}{x^2+z^2} + 1} } dy. $$ Substitute the dimensionless variable $\eta = y/\sqrt{x^2 + z^2}$: $$ (x^2 + z^2)^{3/2} \int \frac{\eta^2 d \eta}{(x^2 + z^2)(1 + \eta^2) + x \sqrt{x^2 + z^2} \sqrt{\eta^2 + 1}} = \sqrt{x^2 + z^2} \int \frac{\eta^2 d \eta}{1 + \eta^2 + \frac{x}{\sqrt{x^2 + z^2}} \sqrt{\eta^2 + 1}}. $$ The Euler substitution is now $\sqrt{\eta^2 + 1} = 1 + \eta t$, which is dimensionally self-consistent since both $\eta$ and $t$ are dimensionless. In fact, the entire integral is dimensionless, and the pre-factor of $\sqrt{x^2 + z^2}$ ensures that the whole thing has dimensions of length like your original integral did. (Whether or not this substitution makes the integral any easier I don't know.)
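A quick numerical spot check (my addition) that the rewriting preserves the integrand, including the $dy = \sqrt{x^2+z^2}\,d\eta$ Jacobian:

```python
import numpy as np

# With rho = sqrt(x^2 + z^2) and y = eta * rho, verify that
# integrand_y(y) * dy/deta equals the dimensionless integrand times rho.
x, z = 2.0, 3.0          # arbitrary lengths
rho = np.hypot(x, z)
eta = 0.7                # arbitrary dimensionless point
y = eta * rho

lhs = y**2 / (x**2 + y**2 + z**2 + x * np.sqrt(x**2 + y**2 + z**2)) * rho
rhs = rho * eta**2 / (1 + eta**2 + (x / rho) * np.sqrt(eta**2 + 1))
print(lhs, rhs)   # equal
```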
{ "language": "en", "url": "https://physics.stackexchange.com/questions/332695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A geometrical calculation in Fresnel's paper "Memoir on the diffraction of light" 1819 It is a geometrical problem which I find difficult to solve while reading Fresnel's paper "Memoir on the diffraction of Light". According to the figure, Fresnel sets $z$ as the distance of the element $nn'$ from the point $M$ (I suppose $z=nM$), $a=CA$, $b=AB$; $IMA$ is an arc with center $C$, and $EMF$ is an arc with center $P$, tangent to the first arc at the point $M$. Eventually, Fresnel calculates that the distance $$nS=\frac{z^2(a+b)}{2ab}.$$ (I believe it is an approximation, i.e. $nS\approx\frac{z^2(a+b)}{2ab}$.) (1) How does he find that result? (2) In my attempt I find that $nS\approx\frac{z^2}{2PM}$, pretty close, but I cannot find the value of $PM$. Are there any ideas? (3) You can find the original paper here (page 119): https://archive.org/stream/wavetheoryofligh00crewrich#page/118
First, we have that the triangles $RAB$ and $RTC$ are similar, so that $$TC=RC\,\left(\frac{AB}{RB}\right)=(a+b)\,\left(\frac{c/2}{a}\right)=\frac{(a+b)c}{2a}\,.$$ This means $$\begin{align} RF&=\sqrt{FC^2+RC^2}=\sqrt{(FT+TC)^2+RC^2}=\sqrt{\left(x+\frac{(a+b)c}{2a}\right)^2+(a+b)^2} \\ &=(a+b)\,\sqrt{1+\left(\frac{x}{a+b}+\frac{c}{2a}\right)^2}\approx(a+b)\left(1+\frac{1}{2}\,\left(\frac{x}{a+b}+\frac{c}{2a}\right)^2\right) \\ &=a+b+\frac{x^2}{2(a+b)}+\frac{cx}{2a}+\frac{(a+b)c^2}{8a^2}\,. \end{align}$$ (We expand the square root with the binomial series to obtain the approximation.) Next, $$\begin{align} RA&=\sqrt{RB^2+AB^2}=\sqrt{a^2+\left(\frac{c}{2}\right)^2}=a\,\sqrt{1+\left(\frac{c}{2a}\right)^2} \\&\approx a\,\left(1+\frac{1}{2}\,\left(\frac{c}{2a}\right)^2\right)=a+\frac{c^2}{8a} \end{align}$$ and $$\begin{align} AF&=\sqrt{AM^2+MF^2}=\sqrt{BC^2+(FC-MC)^2}=\sqrt{BC^2+(FC-AB)^2} \\ &=\sqrt{b^2+\left(x+\frac{(a+b)c}{2a}-\frac{c}{2}\right)^2}=b\,\sqrt{1+\left(\frac{x}{b}+\frac{c}{2a}\right)^2} \\ &\approx b\,\left(1+\frac{1}{2}\,\left(\frac{x}{b}+\frac{c}{2a}\right)^2\right) =b+\frac{x^2}{2b}+\frac{cx}{2a}+\frac{bc^2}{8a^2}\,. \end{align}$$ Finally, we get $$\begin{align}d&=RA+AF-RF \\ &\approx\small\left(a+\frac{c^2}{8a}\right)+\left(b+\frac{x^2}{2b}+\frac{cx}{2a}+\frac{bc^2}{8a^2}\right)-\left(a+b+\frac{x^2}{2(a+b)}+\frac{cx}{2a}+\frac{(a+b)c^2}{8a^2}\right) \\ &=\frac{x^2}{2b}-\frac{x^2}{2(a+b)}=\frac{ax^2}{2b(a+b)}\,. \end{align}$$ We transform the figure into its mirror image and take, approximately, $z=m'A$: Triangles $RTC$, $ARB$ are similar (Fig. 1). So: $$\frac{TC}{AB}=\frac{a+b}{a}$$ Also, triangles $FRC$, $m'RB$ are similar. So: $$\frac{FC}{m'B}=\frac{a+b}{a}$$ This means that (with $x= FT$): $$\frac{x}{m'A}=\frac{TC}{AB}=\frac{a+b}{a}$$ With $z\approx m'A$ we have: $$\frac{x}{z}=\frac{a+b}{a}\tag{1}$$ We calculated the difference $d$ above as: $$d=\frac{a}{2b(a+b)}x^2\tag{2}$$ Combining (1) and (2): $$d= nS =\frac{a+b}{2ab}z^2\tag{3}$$
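One can also check Fresnel's approximation numerically from the exact square-root expressions above (my addition; $x$ and $z$ are related by Eq. (1)):

```python
import numpy as np

# Compare d = RA + AF - RF, computed from the exact distances, with the
# approximate formula nS ~ z^2 (a+b)/(2ab), for small z and c.
a, b = 1.0, 0.5
c, z = 1e-3, 1e-3
x = z * (a + b) / a                     # Eq. (1)

RA = np.sqrt(a**2 + (c / 2)**2)
RF = np.sqrt((x + (a + b) * c / (2 * a))**2 + (a + b)**2)
AF = np.sqrt(b**2 + (x + (a + b) * c / (2 * a) - c / 2)**2)

d_exact = RA + AF - RF
d_approx = z**2 * (a + b) / (2 * a * b)
print(d_exact, d_approx)   # agree to leading order
```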
{ "language": "en", "url": "https://physics.stackexchange.com/questions/422721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Geodesics for FRW metric using variational principle I am trying to find geodesics for the FRW metric, $$ d\tau^2 = dt^2 - a(t)^2 \left(d\mathbf{x}^2 + K \frac{(\mathbf{x}\cdot d\mathbf{x})^2}{1-K\mathbf{x}^2} \right), $$ where $\mathbf{x}$ is 3-dimensional and $K=0$, $+1$, or $-1$.

Geodesic equation

Using the Christoffel symbols from Weinberg's Cosmology (Eqs. 1.1.17-20) in the geodesic equation I get: \begin{align} 0 &= \frac{d^2 t}{d\lambda^2} + a\dot{a} \left[ \left( \frac{d\mathbf{x}}{d\lambda} \right)^2 +\frac{K(\mathbf{x}\cdot \frac{d\mathbf{x}}{d\lambda})^2}{1-K \mathbf{x}^2}\right], &\text{($t$ equation)}\\ 0 &= \frac{d^2\mathbf{x}}{d\lambda^2} + 2 \frac{\dot{a}}{a}\frac{dt}{d\lambda}\frac{d\mathbf{x}}{d\lambda} + \left[ \left(\frac{d\mathbf{x}}{d\lambda}\right)^2 + \frac{K(\mathbf{x} \cdot \frac{d\mathbf{x}}{d\lambda})^2}{1-K\mathbf{x}^2} \right]K\mathbf{x}, &\text{($\mathbf{x}$ equation)} \end{align} where $\lambda$ is the affine parameter, and $\dot{a}=da/dt$.

Variational principle

It should also be possible to get the geodesics by finding the paths that extremize the proper time $\int d\tau$, i.e. using the Euler-Lagrange equations with a Lagrangian equal to the square root of the $d\tau^2$ I wrote above: $$ L = \frac{d\tau}{dp}= \sqrt{ t'^2 - a(t)^2 \left(\mathbf{x}'^2 + K \frac{(\mathbf{x}\cdot \mathbf{x}')^2}{1-K\mathbf{x}^2} \right) }, $$ where a prime is the derivative with respect to the variable $p$ that parameterizes the path. When I try this $L$ in the E-L equation for $t$ I get the same equation as above. However, when I try the E-L equation for $\mathbf{x}$ my result does not agree with the geodesic equation.
I find $$ \frac{\partial L}{\partial \mathbf{x}} = -\frac{1}{L} \frac{a^2 K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} \left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right), $$ and $$ \frac{\partial L}{\partial \mathbf{x}'} = -\frac{a^2}{L} \left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right). $$ I write the E-L equation $$\frac{d}{dp}\frac{\partial L}{\partial \mathbf{x}'}=\frac{\partial L}{\partial \mathbf{x}},$$ and then multiply both sides by $dp/d\tau$ to replace $p$ with $\tau$ everywhere and get rid of the $L$'s in the denominators (using the fact that $1/L=dp/d\tau$ and changing the meaning of the primes to mean derivatives with respect to proper time $\tau$). I get $$ \frac{d}{d\tau} \left[ a^2\left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right) \right] = \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} a^2 \left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right). $$ I cannot rearrange this into the formula from the geodesic equation and suspect that the two sets of equations are not equivalent. I've gone through both methods a couple of times but haven't spotted any errors. Can anyone tell me where the inconsistency (if there actually is one) is coming from? [Interestingly, the E-L equation can be integrated once with an integrating factor of $\sqrt{1-K\mathbf{x}^2}$, whereas I don't see how to do so with the geodesic equation (not that I am very good at solving differential equations).]
I think the equations may be consistent after all. First a solution to the EL equation for $\mathbf{x}$ also satisfies the geodesic equation: Starting with the EL equation I have above: $$ \frac{d}{d\tau} \left[ a^2\left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right) \right] = \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} a^2 \left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right), $$ define $\mathbf{f}$ as $$ \mathbf{f} \equiv a^2\left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right),$$ so the EL equation is $$ \frac{d\mathbf{f}}{d\tau} - \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} \mathbf{f}=\mathbf{0}. $$ Note that \begin{align} \mathbf{f} \cdot \mathbf{x} &= a^2\left(\mathbf{x} \cdot \mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}^2}{1-K\mathbf{x}^2}\right) \\ &= a^2 (\mathbf{x} \cdot \mathbf{x}') \left(1 + \frac{K\mathbf{x}^2}{1-K\mathbf{x}^2}\right) \\ &= \frac{a^2 (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2}, \end{align} and $$ \mathbf{f} \cdot \mathbf{x}' = a^2\left(\mathbf{x}'^2 + \frac{K(\mathbf{x} \cdot \mathbf{x}')^2}{1-K\mathbf{x}^2}\right) \equiv Q, $$ ($Q$ appears in the geodesic equation for $\mathbf{x}$). Next dot the EL equation with $\mathbf{x}$: \begin{align} 0 &= \frac{d\mathbf{f}}{d\tau} \cdot \mathbf{x} - \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} \mathbf{f}\cdot \mathbf{x} \\ &=\frac{d}{d\tau}\left(\mathbf{f} \cdot \mathbf{x}\right) - \mathbf{f} \cdot \mathbf{x}' - a^2 \frac{K (\mathbf{x} \cdot \mathbf{x}')^2}{(1-K\mathbf{x}^2)^2}, \end{align} so $$ \frac{d}{d\tau}\left(\mathbf{f} \cdot \mathbf{x}\right) = Q +a^2 \frac{K (\mathbf{x} \cdot \mathbf{x}')^2}{(1-K\mathbf{x}^2)^2}. 
$$ Now go back to the original EL equation (first equation) and apply the $d/d\tau$ inside the brackets: $$ \text{EL LHS} = \frac{d}{d\tau}\left(a^2 \mathbf{x}'\right) + \left[\frac{d}{d\tau}\left( a^2 \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2}\right)\right]\mathbf{x} + a^2 \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2}\mathbf{x}'. $$ The last term above cancels with the first term on the right hand side of the EL equation. Moving everything that's left to one side you get \begin{align} \mathbf{0} &= \frac{d}{d\tau}\left(a^2 \mathbf{x}'\right) + \left[\frac{d}{d\tau}\left( a^2 \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2}\right)\right]\mathbf{x} - \frac{a^2 K^2(\mathbf{x} \cdot \mathbf{x}')^2\mathbf{x}}{(1-K\mathbf{x}^2)^2} \\ &= \frac{d}{d\tau}\left(a^2 \mathbf{x}'\right) + \left[\frac{d}{d\tau}\left( a^2 \frac{(\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2}\right) - \frac{a^2 K(\mathbf{x} \cdot \mathbf{x}')^2}{(1-K\mathbf{x}^2)^2}\right]K\mathbf{x}\\ &= \frac{d}{d\tau}\left(a^2 \mathbf{x}'\right) + \left[\frac{d}{d\tau}\left( \mathbf{f} \cdot \mathbf{x}\right) - \frac{a^2 K(\mathbf{x} \cdot \mathbf{x}')^2}{(1-K\mathbf{x}^2)^2}\right]K\mathbf{x} \\ &= \frac{d}{d\tau}\left(a^2 \mathbf{x}'\right) + Q K\mathbf{x}, \end{align} which, after dividing both sides by $a^2$, is exactly the geodesic equation from my original question. If you want to start with a solution to the geodesic equation and show it satisfies the EL equation you can almost reverse the steps. The only new thing you need to show is the reverse of the very last step, that the geodesic equation implies $$ Q = \frac{d}{d\tau}\left(\mathbf{f} \cdot\mathbf{x}\right) - \frac{a^2 K(\mathbf{x} \cdot \mathbf{x}')^2}{(1-K\mathbf{x}^2)^2}. $$ You start by dotting the geodesic equation with $\mathbf{x}$, and then start rearranging (using the definitions of $\mathbf{f}$ and $Q$ at some point).
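The whole rearrangement can be packaged as the off-shell identity $\mathrm{EL} - K(\mathbf{x}\cdot\mathrm{EL})\,\mathbf{x} = \frac{d}{d\tau}(a^2\mathbf{x}') + QK\mathbf{x}$, i.e. $a^2$ times the geodesic equation. Here is a symbolic check I added (the explicit test path and $K=-1$ are my arbitrary choices; the identity holds for any smooth path):

```python
import sympy as sp

# With f = a^2 (x' + K (x.x') x / (1 - K x^2)), the Euler-Lagrange vector
# EL = df/dtau - [K (x.x')/(1 - K x^2)] f satisfies
#   EL - K (x.EL) x = d/dtau(a^2 x') + Q K x,
# which is a^2 times the geodesic equation. Verified on an explicit path.
tau = sp.symbols('tau')
K = -1                                   # open FRW; keeps 1 - K x^2 nonzero
a = 2 + tau                              # arbitrary smooth a(tau) along the path
x = sp.Matrix([tau, tau**2, 1 + tau])    # arbitrary smooth test path
xp = x.diff(tau)

r2 = (x.T * x)[0]
rp = (x.T * xp)[0]
f = a**2 * (xp + K * rp * x / (1 - K * r2))
Q = a**2 * ((xp.T * xp)[0] + K * rp**2 / (1 - K * r2))

EL = f.diff(tau) - K * rp / (1 - K * r2) * f
geo = (a**2 * xp).diff(tau) + Q * K * x
resid = (EL - K * (x.T * EL)[0] * x - geo).applyfunc(sp.simplify)
print(resid.T)   # Matrix([[0, 0, 0]])
```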
{ "language": "en", "url": "https://physics.stackexchange.com/questions/475877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
X-ray diffraction intensity and Laue equations My textbook, Solid-State Physics, Fluidics, and Analytical Techniques in Micro- and Nanotechnology, by Madou, says the following in a section on X-Ray Intensity and Structure Factor $F(hkl)$: In Figure 2.28 we have plotted $y = \dfrac{\sin^2(Mx)}{\sin^2(x)}$. This function is virtually zero except at the points where $x = n\pi$ ($n$ is an integer including zero), where it rises to the maximum value of $M^2$. The width of the peaks and the prominence of the ripples are inversely proportional to $M$. Remember that there are three sums in Equation 2.38. For simplicity we only evaluated one sum to calculate the intensity in Equation 2.39. The total intensity equals: $$I \propto \dfrac{\sin^2 \left( \dfrac{1}{2} M \mathbf{a}_1 \cdot \Delta \mathbf{k} \right)}{ \sin^2 \left( \dfrac{1}{2} \mathbf{a}_1 \cdot \Delta \mathbf{k} \right)} \times \dfrac{\sin^2 \left( \dfrac{1}{2} N \mathbf{a}_2 \cdot \Delta \mathbf{k} \right)}{ \sin^2 \left( \dfrac{1}{2} \mathbf{a}_2 \cdot \Delta \mathbf{k} \right)} \times \dfrac{\sin^2 \left( \dfrac{1}{2} P \mathbf{a}_3 \cdot \Delta \mathbf{k} \right)}{ \sin^2 \left( \dfrac{1}{2} \mathbf{a}_3 \cdot \Delta \mathbf{k} \right)} \tag{2.40}$$ so that the diffracted intensity will equal zero unless all three quotients in Equation 2.40 take on their maximum values at the same time. This means that the three arguments of the sine terms in the denominators must be simultaneously equal to integer multiples of $2\pi$, or the peaks occur only when: $$\mathbf{a}_1 \cdot \Delta \mathbf{k} = 2 \pi e$$ $$\mathbf{a}_2 \cdot \Delta \mathbf{k} = 2 \pi f$$ $$\mathbf{a}_3 \cdot \Delta \mathbf{k} = 2 \pi g$$ These are, of course, the familiar Laue equations. 
I could be mistaken, but I see two possible errors here:

1. Since we have that the function is virtually zero except at the points where $x = n \pi$, where $n$ is an integer, we use L'Hôpital's rule to get that $\dfrac{2M \cos(Mx)}{2\cos(x)} = \dfrac{M \cos(Mx)}{\cos(x)}$, which is a maximum of $M$ -- not $M^2$ -- for $x$.

2. Assuming that we require that the arguments of the sine terms of the three denominators equal integer multiples of $2\pi$, we have that $$\dfrac{1}{2} \mathbf{a}_1 \cdot \Delta \mathbf{k} = 2\pi e \Rightarrow \mathbf{a}_1 \cdot \Delta \mathbf{k} = 4 \pi e$$ However, as the author indicates, the Laue equation is $\mathbf{a}_1 \cdot \Delta \mathbf{k} = 2 \pi e$. So should it not be the case that we require that the arguments of the sine terms of the three denominators equal integer multiples of $\pi$, so that we have that $$\dfrac{1}{2} \mathbf{a}_1 \cdot \Delta \mathbf{k} = \pi e \Rightarrow \mathbf{a}_1 \cdot \Delta \mathbf{k} = 2\pi e$$

I would greatly appreciate it if people would please take the time to review this.
On applying L'Hôpital's rule, we get $y = \frac{2M\sin(Mx)\cos(Mx)}{2\sin(x)\cos(x)}$. At $x = n\pi$ the factor $\frac{\sin(Mx)}{\sin(x)}$ tends to $\pm M$ (by another application of L'Hôpital's rule), while $\frac{\cos(Mx)}{\cos(x)} \to \pm 1$ with the same sign, giving $y = M^{2}$. This proves that $\frac{\sin^{2}(Mx)}{\sin^{2}x}$ has maxima of height $M^2$ at $x = n\pi$, where $n$ is any integer and not just an even integer. In $\frac{\sin^{2}(\frac{1}{2}M\vec{a_{1}}\cdot\vec{\Delta k})}{\sin^{2}(\frac{1}{2}\vec{a_{1}}\cdot\vec{\Delta k})}$, we have $x = \frac{1}{2}\vec{a_{1}}\cdot\vec{\Delta k}$, therefore $$\dfrac{1}{2} \mathbf{a}_1 \cdot \Delta \mathbf{k} = \pi e \Rightarrow \mathbf{a}_1 \cdot \Delta \mathbf{k} = 2 \pi e$$
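This is easy to confirm numerically (my addition, not from the original answer):

```python
import numpy as np

# Near x = n*pi the interference function sin^2(Mx)/sin^2(x) approaches
# M^2 for ANY integer n, odd or even.
M = 7
ys = [np.sin(M * (n * np.pi + 1e-7))**2 / np.sin(n * np.pi + 1e-7)**2
      for n in range(4)]
print(ys)   # each value ~ 49 = M^2
```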
{ "language": "en", "url": "https://physics.stackexchange.com/questions/526415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Number of degrees of freedom for a sphere on an inclined plane How many generalized coordinates are required to describe the dynamics of a solid sphere rolling without slipping on an inclined plane? What I think is that there are two translational degrees of freedom and three rotational degrees of freedom. But I'm not able to write the rolling constraints to see if they are integrable or not. So please help me through this.
You have one holonomic constraint equation, $~z=a~$, and two nonholonomic constraint equations: the relative velocity components at the contact point between the sphere and the inclined plane are zero, thus: $$\begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \\ \end{bmatrix}\times \begin{bmatrix} 0 \\ 0\\ a \\ \end{bmatrix}-\begin{bmatrix} \dot{x} \\ \dot{y}\\ 0 \\ \end{bmatrix}=\begin{bmatrix} 0\\ 0\\ 0 \\ \end{bmatrix} $$ You obtain two equations: $$\omega_x\,a+\dot x=0\tag 1$$ $$\omega_y\,a-\dot y=0\tag 2$$ where: $$\omega_x=\dot\varphi -\sin \left( \vartheta \right) \dot\psi $$ $$\omega_y=\cos \left( \varphi \right) \dot\vartheta +\cos \left( \vartheta \right) \sin \left( \varphi \right) \dot\psi $$ Of the six possible velocities of the sphere, $~\dot x\,,\dot y\,,\dot z\,,\dot\varphi\,,\dot\vartheta\,\,,\dot\psi~$, just three are independent. Solving Eqs. (1), (2) for $~\dot x,~\dot y$, you obtain, with $~\dot z=0$, $$ \left[ \begin {array}{c} {\dot x}\\{\dot y} \\\dot \varphi \\\dot \vartheta \\\dot \psi \end {array} \right]=\left[ \begin {array}{ccc} \sin \left( \psi \right) \cos \left( \vartheta \right) a&\cos \left( \psi \right) a&0\\ -\cos \left( \psi \right) \cos \left( \vartheta \right) a&\sin \left( \psi \right) a&0\\ 1&0&0 \\ 0&1&0\\ 0&0&1\end {array} \right]\,\underbrace{\left[ \begin {array}{c} \dot\varphi \\ \dot\vartheta \\ \dot \psi \end {array} \right] }_{\vec{\dot{q}}} $$ Thus you have 3 generalized velocity coordinates, $~\dot\varphi~,\dot\vartheta~,\dot\psi$. You can also solve Eqs.
(1), (2) for $~\dot \varphi,~\dot \vartheta$: $$\left[ \begin {array}{c} {\dot x}\\{\dot y} \\\dot \varphi \\\dot \vartheta \\\dot \psi \end {array} \right]=\left[ \begin {array}{ccc} 1&0&0\\ 0&1&0 \\ {\frac {\sin \left( \psi \right) }{a\cos \left( \vartheta \right) }}&-{\frac {\cos \left( \psi \right) }{a\cos \left( \vartheta \right) }}&0\\ {\frac {\cos \left( \psi \right) }{a}}&{\frac {\sin \left( \psi \right) }{a}}&0 \\ 0&0&1\end {array} \right]\,\begin{bmatrix} \dot x \\ \dot y \\ \dot\psi \\ \end{bmatrix}$$ Again you have 3 generalized velocity coordinates, $~\dot x~,\dot y,~\dot \psi$. Edit: the equations of motion for a ball rolling on a plane. I use the Newton-Euler approach: \begin{align*} &\boldsymbol J^T\,\boldsymbol M\,\boldsymbol J\,\boldsymbol{\ddot{q }}=\boldsymbol J^T\left(\boldsymbol f_A-\boldsymbol M\,\underbrace{\boldsymbol{\dot{J}}\,\boldsymbol{\dot{q}}}_{\boldsymbol f_Z}\right)\tag 1\\ &\text{where}\\ &\boldsymbol{\dot{J}}=\frac{\partial \left(\boldsymbol J\,\boldsymbol{\dot{q}}\right)}{\partial \boldsymbol q} \end{align*} \begin{align*} &\boldsymbol q=\left[ \begin {array}{c} x\\ y\\ \psi\end {array} \right] \\ &\boldsymbol M=\left[ \begin {array}{cccccc} m&0&0&0&0&0\\ 0&m&0&0 &0&0\\ 0&0&m&0&0&0\\ 0&0&0&\Theta k&0&0\\ 0&0&0&0&\Theta k&0\\ 0&0&0 &0&0&\Theta k\end {array} \right] ~, \boldsymbol J= \left[ \begin {array}{ccc} 1&0&0\\ 0&1&0 \\ 0&0&0\\ {\frac {\sin \left( \psi \right) }{a\cos \left( \vartheta \right) }}&-{\frac {\cos \left( \psi \right) }{a\cos \left( \vartheta \right) }}&0 \\ {\frac {\cos \left( \psi \right) }{a}}&{\frac { \sin \left( \psi \right) }{a}}&0\\ 0&0&1\end {array} \right]\\ &\boldsymbol f_z=\left[ \begin {array}{c} 0\\ 0\\ 0 \\ {\frac { \left( \cos \left( \psi \right) {\it \dot{x}} +{\it \dot{y}}\,\sin \left( \psi \right) \right) \dot{\psi} }{a\cos \left( \vartheta \right) }}\\ -{\frac { \left( -\cos \left( \psi \right) {\it \dot{y}}+{\it \dot{x}}\,\sin \left( \psi \right) \right) \dot{\psi} }{a}}\\ 0\end {array} \right]~,
\boldsymbol f_A=\left[ \begin {array}{c} F_x\\ 0\\ -mg\\ 0\\ 0\\ \tau_\psi \end {array} \right] \end{align*} and from the nonholonomic constraint equations you get two additional differential equations \begin{align*} &F_{c1}=\left( \sin \left( \psi \right) \cos \left( \vartheta \right) \dot \varphi +\cos \left( \psi \right) \dot \vartheta \right) a-{ \dot x} \\ &F_{c2}=- \left( \cos \left( \psi \right) \cos \left( \vartheta \right) \dot \varphi -\sin \left( \psi \right) \dot\vartheta \right) a-{\dot y} \\ &\Rightarrow\\ &\dot{\varphi}={\frac {-\cos \left( \psi \right) {\it \dot{y}}+{\it \dot{x}}\,\sin \left( \psi \right) }{a\cos \left( \vartheta \right) }} \tag 2\\ &\dot{\vartheta}={\frac {\cos \left( \psi \right) {\it \dot{x}}+{\it \dot{y}}\,\sin \left( \psi \right) }{a}}\tag 3 \end{align*} Altogether you have eight first-order differential equations (Eqs. (1), (2), (3)) to solve this problem.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/603105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Expanding the Graphene Hamiltonian near Dirac points up to the second-order term I was trying to solve the graphene Hamiltonian near the Dirac points up to the second-order term for the nearest neighbors. Expanding the function near the Dirac point, we get $$g(K+q)=\frac{3ta}{2}e^{i\pi/3}(q_x-iq_y)+\frac{3ta^2}{8}e^{i\pi/3}(q_x-iq_y)^2$$ Now the Hamiltonian should be $$ \begin{pmatrix} 0 & g(K+q)\\ g^*(K+q) &0 \end{pmatrix} $$ and the energy will be of the form $E=\pm |g(K+q)|$. Solving for only the first-order term, we get the massless Dirac equation $E=\pm\hbar v|q|$, which I got. While looking for the so-called trigonal warping, I need the second term, $$=\mp\frac{3ta^2}{8}\sin(3\theta_p)|q|^2 $$ where $\theta_p=\arctan(\frac{q_x}{q_y})$. Now I have tried solving for this term many times in many ways, but I can't get around that $\sin(3\theta_p)$ term. I got $$E =\sqrt{\frac{9t^2a^2}{4}(q_x^2+q_y^2)+ \frac{9t^2a^4}{64}(q_x^4+q_y^4)+ \frac{9t^2a^3}{16} (q_x^3+q_xq_y^2) } $$ Now how do I take the square root of this thing and get that $\sin(3\theta_p)$ term? Please help me!! Also any information about this trigonal warping would be useful. Thank You!!
Something seems to be off with the energy term here... Your last term goes as $\cos\theta$. Let's try again :) $$ H = \begin{pmatrix} 0&-t f_\mathbf{q} \\ -tf_\mathbf{q}^*&0 \end{pmatrix}\,, $$ where $f_\mathbf{q} = 1 + e^{i\mathbf{q}\cdot\mathbf{d}_1}+ e^{i\mathbf{q}\cdot\mathbf{d}_2}$ and $\mathbf{d}_1$ and $\mathbf{d}_2$ are the basis vectors. If the Dirac point is at $\mathbf{q} = \mathbf{K}$, for momenta close to it, we write $$ f_\mathbf{q} = 1 + e^{i\left(\mathbf{q}+\mathbf{K}\right)\cdot\mathbf{d}_1}+ e^{i\left(\mathbf{q}+\mathbf{K}\right)\cdot\mathbf{d}_2} = 1 + e^{i\mathbf{k}\cdot\mathbf{d}_1} e^{i\mathbf{K}\cdot\mathbf{d}_1} + e^{i\mathbf{k}\cdot\mathbf{d}_2} e^{i\mathbf{K}\cdot\mathbf{d}_2}\,. $$ For small $k$, we get $$ f_\mathbf{q} \approx 1 + (1+i\mathbf{k}\cdot\mathbf{d}_1 - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_1\right)^2) e^{i\mathbf{K}\cdot\mathbf{d}_1} + (1+i\mathbf{k}\cdot\mathbf{d}_2 - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_2\right)^2) e^{i\mathbf{K}\cdot\mathbf{d}_2} \\ = 1 + e^{i\mathbf{K}\cdot\mathbf{d}_1} + e^{i\mathbf{K}\cdot\mathbf{d}_2} + (i\mathbf{k}\cdot\mathbf{d}_1 ) e^{i\mathbf{K}\cdot\mathbf{d}_1} + (i\mathbf{k}\cdot\mathbf{d}_2) e^{i\mathbf{K}\cdot\mathbf{d}_2} - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_1\right)^2 e^{i\mathbf{K}\cdot\mathbf{d}_1} - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_2\right)^2 e^{i\mathbf{K}\cdot\mathbf{d}_2} \,. 
$$ Using $\mathbf{K} = 4\pi(0, 1)/(3\sqrt{3}a)$ and $\mathbf{d}_{1/2}=a(3,\pm\sqrt{3})/2$, we have $e^{i\mathbf{K}\cdot\mathbf{d}_1} = e^{ 2\pi i/3}$ and $e^{i\mathbf{K}\cdot\mathbf{d}_2} = e^{- 2\pi i/3}$: $$ f_\mathbf{q} = i\mathbf{k}\cdot(\mathbf{d}_1 e^{ 2\pi i/3} + \mathbf{d}_2 e^{- 2\pi i/3}) - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_1\right)^2 e^{2\pi i/3} - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_2\right)^2 e^{- 2\pi i/3} \\ = -\frac{3a}{2}ik e^{-i \theta} - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_1\right)^2 e^{ 2\pi i/3} - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_2\right)^2 e^{-2\pi i/3} \,, $$ where $\mathbf{k} = k(\cos\theta,\sin\theta)$. We know that the dispersion is $\pm t\sqrt{f_\mathbf{q}f_\mathbf{q}^*}$, so let's deal with the square root. For small $k$ (keeping terms to third order), we have $$ f_\mathbf{q}f_\mathbf{q}^* \approx \frac{9a^2}{4}k^2 -\frac{3a}{2}ik e^{-i \theta} \left[ - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_1\right)^2 e^{ -2\pi i/3} - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_2\right)^2 e^{2\pi i/3} \right] +\frac{3a}{2}ik e^{i \theta} \left[ - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_1\right)^2 e^{ 2\pi i/3} - \frac{1}{2}\left(\mathbf{k}\cdot\mathbf{d}_2\right)^2 e^{-2\pi i/3} \right] \\ =\frac{9}{4}a^2k^2+\frac{9}{8}a^3k^3\sin(3\theta)\,. $$ You can check this in, say, Mathematica. From this, $$ \sqrt{f_\mathbf{q}f_\mathbf{q}^*} = \sqrt{\frac{9}{4}a^2k^2+\frac{9}{8}a^3k^3\sin(3\theta)} = \frac{3}{2}ak\sqrt{1+\frac{1}{2}ak\sin(3\theta)} \approx \frac{3}{2}ak\left(1+\frac{1}{4}ak\sin(3\theta)\right)\,, $$ as required.
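As a numerical alternative to the Mathematica check mentioned above, one can compare the exact $|f_\mathbf{q}|^2$ with the third-order result $\frac{9}{4}a^2k^2+\frac{9}{8}a^3k^3\sin(3\theta)$ at small $k$; the residual should be of order $k^4$. A sketch in Python with $a=1$:

```python
import numpy as np

a = 1.0
d1 = np.array([1.5 * a, np.sqrt(3) * a / 2])
d2 = np.array([1.5 * a, -np.sqrt(3) * a / 2])
K = np.array([0.0, 4 * np.pi / (3 * np.sqrt(3) * a)])

def f_abs2(k):
    """Exact |f|^2 evaluated at momentum K + k."""
    q = K + k
    f = 1 + np.exp(1j * q @ d1) + np.exp(1j * q @ d2)
    return abs(f) ** 2

kmag, th = 1e-3, 0.4
k = kmag * np.array([np.cos(th), np.sin(th)])
exact = f_abs2(k)
approx = 9 / 4 * a**2 * kmag**2 + 9 / 8 * a**3 * kmag**3 * np.sin(3 * th)
residual = abs(exact - approx)   # should be O(k^4) ~ 1e-12 here
```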
{ "language": "en", "url": "https://physics.stackexchange.com/questions/659577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Spin $\frac{3}{2}$ representation in Georgi's book? Georgi's book Lie Algebras in Particle Physics 2ed equation 3.32 lists the spin operators in the spin $\frac{3}{2}$ representation as: $$J_1=\left( \begin{array}{cccc} 0 & \sqrt{\frac{3}{2}} & 0 & 0 \\ \sqrt{\frac{3}{2}} & 0 & 2 & 0 \\ 0 & 2 & 0 & \sqrt{\frac{3}{2}} \\ 0 & 0 & \sqrt{\frac{3}{2}} & 0 \\ \end{array} \right)$$ $$J_2=\left( \begin{array}{cccc} 0 & -i\sqrt{\frac{3}{2}} & 0 & 0 \\ i\sqrt{\frac{3}{2}} & 0 & -i2 & 0 \\ 0 & i2 & 0 & -i\sqrt{\frac{3}{2}} \\ 0 & 0 & i\sqrt{\frac{3}{2}} & 0 \\ \end{array} \right)$$ $$J_3=\left( \begin{array}{cccc} \frac{3}{2} & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & -\frac{1}{2} & 0 \\ 0 & 0 & 0 & -\frac{3}{2} \\ \end{array} \right)$$ but the commutators don't seem to work out $[J_1,J_2]\neq i J_3$. What gives? I wrote the following Mathematica command j[n,s] to generate the n=1,2, or 3 spin s matrix. It generates the Pauli matrices and spin $1$ matrices correctly, but doesn't match the $3/2$ rep in Georgi's book. j[3,s_/;IntegerQ[2s+1]&&s>0]:=SparseArray[Band[{1,1}]->Table[i,{i,s,-s,-1}],2s+1]; jplus[s_/;IntegerQ[2s+1]]:=SparseArray[Band[{1,2}]->Table[Sqrt[(s+1+m)(s-m)/2],{m,s-1,-s,-1}],2s+1]; jminus[s_/;IntegerQ[2s+1]]:=SparseArray[Band[{2,1}]->Table[Sqrt[(s+m)(s-m+1)/2],{m,s,1-s,-1}],2s+1]; j[1,s_/;IntegerQ[2s+1]]:=(jplus[s]+jminus[s])/Sqrt[2]; j[2,s_/;IntegerQ[2s+1]]:=(jplus[s]-jminus[s])/(I Sqrt[2]);
There is a typo in the book's equation, and there doesn't appear to be an easily accessible online errata. If one follows the formulas the book gives for $J_\pm$: $$J_+=\frac{1}{\sqrt{2}}\left(J_1+i J_2\right)$$ $$J_-=\frac{1}{\sqrt{2}}\left(J_1-i J_2\right)$$ $$J_{+,m'm}=\frac{\sqrt{\left(s-m\right) \left(m+s+1\right)} \delta _{m+1,m'}}{\sqrt{2}}$$ $$J_{-,m'm}=\frac{\sqrt{\left(s+m\right) \left(s-m+1\right)} \delta _{m-1,m'}}{\sqrt{2}}$$ one finds: $$J_1=\left( \begin{array}{cccc} 0 & \frac{\sqrt{3}}{2} & 0 & 0 \\ \frac{\sqrt{3}}{2} & 0 & 1 & 0 \\ 0 & 1 & 0 & \frac{\sqrt{3}}{2} \\ 0 & 0 & \frac{\sqrt{3}}{2} & 0 \\ \end{array} \right)$$ $$J_2=\left( \begin{array}{cccc} 0 & -\frac{1}{2} \left(i \sqrt{3}\right) & 0 & 0 \\ \frac{i \sqrt{3}}{2} & 0 & -i & 0 \\ 0 & i & 0 & -\frac{1}{2} \left(i \sqrt{3}\right) \\ 0 & 0 & \frac{i \sqrt{3}}{2} & 0 \\ \end{array} \right)$$ $$J_3=\left( \begin{array}{cccc} \frac{3}{2} & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & -\frac{1}{2} & 0 \\ 0 & 0 & 0 & -\frac{3}{2} \\ \end{array} \right)$$
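The corrected matrices are easy to check numerically — in particular that $[J_1,J_2]=iJ_3$ holds. A sketch in Python/NumPy (rather than the Mathematica of the question), building $J_\pm$ from the standard ladder elements without the $1/\sqrt{2}$:

```python
import numpy as np

s = 1.5
ms = np.arange(s, -s - 1, -1)      # m = 3/2, 1/2, -1/2, -3/2 (row order)
dim = len(ms)

# <m+1| J_+ |m> = sqrt((s - m)(s + m + 1)); J_- is its transpose
Jp = np.zeros((dim, dim))
for col in range(1, dim):
    m = ms[col]
    Jp[col - 1, col] = np.sqrt((s - m) * (s + m + 1))
Jm = Jp.T

J1 = (Jp + Jm) / 2
J2 = (Jp - Jm) / 2j
J3 = np.diag(ms)
comm = J1 @ J2 - J2 @ J1           # should equal i * J3
```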
{ "language": "en", "url": "https://physics.stackexchange.com/questions/212553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to find the logarithm of Pauli Matrix? When I solve some physics problem, it helps a lot if I can find the logarithm of Pauli matrix. e.g. $\sigma_{x}=\left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right)$, find the matrix $A$ such that $e^{A}=\sigma_{x}$. At first, I find a formula only for real matrix: $$\exp\left[\left(\begin{array}{cc} a & b\\ c & d \end{array}\right)\right]=\frac{e^{\frac{a+d}{2}}}{\triangle}\left(\begin{array}{cc} \triangle \cosh(\frac{\triangle}{2})+(a-d)\sinh(\frac{\triangle}{2}) & 2b\cdot \sinh(\frac{\triangle}{2})\\ 2c\cdot \sinh(\frac{\triangle}{2}) & \triangle \cosh(\frac{\triangle}{2})+(d-a)\sinh(\frac{\triangle}{2}) \end{array}\right)$$ where $\triangle=\sqrt{\left(a-d\right)^{2}+4bc}$ but there is no solution for the formula on this example; After that, I try to Taylor expand the logarithm of $\sigma_{x}$: $$ \log\left[I+\left(\sigma_{x}-I\right)\right]=\left(\sigma_{x}-I\right)-\frac{\left(\sigma_{x}-I\right)^{2}}{2}+\frac{\left(\sigma_{x}-I\right)^{3}}{3}... $$ $$ \left(\sigma_{x}-I\right)=\left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} -2 & 0\\ 0 & 0 \end{array}\right)\left(\begin{array}{cc} -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} \end{array}\right) $$ \begin{eqnarray*} \log\left[I+\left(\sigma_{x}-I\right)\right] & = & \left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right)\left[\left(\begin{array}{cc} -2 & 0\\ 0 & 0 \end{array}\right)-\left(\begin{array}{cc} \frac{\left(-2\right)^{2}}{2} & 0\\ 0 & 0 \end{array}\right)...\right]\left(\begin{array}{cc} -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} \end{array}\right)\\ & = & \left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} -\infty & 0\\ 0 & 0 \end{array}\right)\left(\begin{array}{cc} -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} \end{array}\right) \end{eqnarray*} this method also can't give me a solution.
Observe that \begin{equation} \sigma_{z} = \begin{pmatrix}1&0\\0&-1\end{pmatrix} = \exp(B) = \sum_{r=0}^{\infty} \frac{B^{r}}{r!} \end{equation} with \begin{equation} B = i\pi\begin{pmatrix}2m&0\\0&2n+1\end{pmatrix}, \end{equation} where $m,n\in\mathbb{Z}$. Next, notice that \begin{equation} \sigma_{x} = U \sigma_{z} U^{\dagger} \end{equation} with \begin{equation} U = \exp(-i\pi\sigma_{y}/4)= \frac{1}{\sqrt{2}}(I - i\sigma_{y})= \frac{1}{\sqrt{2}} \begin{pmatrix}1&-1\\1&1\end{pmatrix}. \end{equation} Hence, we have \begin{equation} \sigma_{x} = \sum_{r=0}^{\infty} U\frac{B^{r}}{r!} U^{\dagger} = \sum_{r=0}^{\infty} \frac{(UBU^{\dagger})^{r}}{r!} = \exp(A) \end{equation} with \begin{equation} \begin{split} A &= UBU^{\dagger} = i\pi U\left[\left(m+n+\frac{1}{2}\right)I + \left(m-n-\frac{1}{2}\right)\sigma_{z}\right]U^{\dagger}\\ &= i\pi \left[\left(m+n+\frac{1}{2}\right)I + \left(m-n-\frac{1}{2}\right)\sigma_{x}\right]\\ &= i\pi\begin{pmatrix}m + n + 1/2&m - n - 1/2\\m - n - 1/2&m + n + 1/2\end{pmatrix}. \end{split} \end{equation}
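The claim $e^A=\sigma_x$ can be verified numerically for a few integer pairs $(m,n)$. A sketch using a small eigendecomposition-based matrix exponential (so no SciPy dependency is assumed; $A$ is normal here, so this is safe):

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (valid for the normal matrices here)."""
    w, V = np.linalg.eig(M)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

results = []
for m, n in [(0, 0), (1, -1), (-2, 3)]:
    A = 1j * np.pi * np.array([[m + n + 0.5, m - n - 0.5],
                               [m - n - 0.5, m + n + 0.5]])
    results.append(np.allclose(expm(A), sigma_x))
```

The eigenvalues of $A$ are $2\pi i m$ and $i\pi(2n+1)$, which exponentiate to $1$ and $-1$ for any integers $m, n$.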
{ "language": "en", "url": "https://physics.stackexchange.com/questions/225668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Relation between Spin 1 representation and angular momentum and $SO(3)$ This is a naive question. It occurred to me while studying in detail the Spin 1 angular momentum matrices. The generators of $SO(3)$ are $J_x= \begin{pmatrix} 0&0&0 \\ 0&0&-1 \\ 0&1&0 \end{pmatrix} \hspace{1cm} J_y=\begin{pmatrix} 0&0&1 \\ 0&0&0 \\ -1&0&0 \end{pmatrix} \hspace{1cm} J_z= \begin{pmatrix} 0&-1&0 \\ 1&0&0 \\ 0&0&0 \end{pmatrix} $ And the Spin 1 generators are $J_x= \dfrac{1}{2} \begin{pmatrix} 0&\sqrt{2}&0 \\ \sqrt{2}&0&\sqrt{2} \\ 0&\sqrt{2}&0 \end{pmatrix} \hspace{1cm} J_y= \dfrac{1}{2}\begin{pmatrix} 0&-i\sqrt{2}&0 \\ i\sqrt{2}&0&-i\sqrt{2} \\ 0&i\sqrt{2}&0 \end{pmatrix} \hspace{1cm} J_z= \begin{pmatrix} 1&0&0 \\ 0&0&0 \\ 0&0&-1 \end{pmatrix} $ Why are the Spin 1 representation generators different from the $SO(3)$ generators if both concern rotations in 3D space and both are $3\times 3$ matrices? Is there a relation between them?
The two representations are unitarily equivalent to each other, except for an overall factor of $i$. To be clear, I'll write $J$ and $\tilde J$ for the generators in the two different representations. One representation is $$ J_x = \left( \begin{matrix} 0&0&0\\ 0&0&-1 \\ 0&1&0\end{matrix} \right) \hskip1cm J_y = \left( \begin{matrix} 0&0&1\\ 0&0&0 \\ -1&0&0\end{matrix} \right) \hskip1cm J_z = \left( \begin{matrix} 0&-1&0\\ 1&0&0 \\ 0&0&0\end{matrix} \right) $$ and the other is $$ \tilde J_x = \frac{1}{\sqrt{2}}\left( \begin{matrix} 0&1&0\\ 1&0&1 \\ 0&1&0\end{matrix} \right) \hskip1cm \tilde J_y = \frac{i}{\sqrt{2}}\left( \begin{matrix} 0&-1&0\\ 1&0&-1 \\ 0&1&0\end{matrix} \right) \hskip1cm \tilde J_z = \left( \begin{matrix} 1&0&0\\ 0&0&0 \\ 0&0&-1\end{matrix} \right). $$ The $J$s are anti-hermitian and $\tilde J$s are hermitian. That's just a matter of convention, because we can multiply the $J$s by $i$ to make them hermitian. The unitary matrix $$ U = \frac{1}{\sqrt{2}} \left( \begin{matrix} 1&0&-1\\ i&0&i \\ 0&-\sqrt{2}&0\end{matrix} \right) $$ satisfies $$ i\,J_x U = U\tilde J_x \hskip2cm i\,J_y U = U\tilde J_y \hskip2cm i\,J_z U = U\tilde J_z, $$ which proves that the two representations are equivalent except for the overall factor of $i$. These identities could be written in the form $i\,J=U\tilde J U^{-1}$ instead, but the way I wrote them above makes them easier to check.
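The intertwining identities (and the unitarity of $U$) are quick to confirm numerically; a sketch:

```python
import numpy as np

# SO(3) generators (antisymmetric convention)
Jx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=complex)
Jy = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex)
Jz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)

# Spin-1 generators (hermitian convention)
s2 = np.sqrt(2)
Jxt = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / s2
Jyt = 1j * np.array([[0, -1, 0], [1, 0, -1], [0, 1, 0]]) / s2
Jzt = np.diag([1.0, 0.0, -1.0]).astype(complex)

U = np.array([[1, 0, -1],
              [1j, 0, 1j],
              [0, -s2, 0]]) / s2

checks = [np.allclose(1j * J @ U, U @ Jt)
          for J, Jt in zip((Jx, Jy, Jz), (Jxt, Jyt, Jzt))]
unitary = np.allclose(U @ U.conj().T, np.eye(3))
```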
{ "language": "en", "url": "https://physics.stackexchange.com/questions/438522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to compute the mass of AdS-Schwarzschild by the ADM mass formula? I want to compute the mass of AdS-Schwarzschild by the ADM mass formula but I could not find where I am wrong. The AdS-Schwarzschild line element is: $$ ds^2 =-f dt^2 +\frac{dr^2}{f} +r^2 d\sigma^2_{d-1} $$ where: $$ f=k+\frac{r^2}{L^2}-\frac{\omega^{d-2}}{r^{d-2}} $$ The ADM mass formula is $$ M=\int (k-k_0)\sqrt{\sigma}d^{d-1}x $$ $k$ is the extrinsic curvature of $S_{t=cte,r=cte}$ and $k_0$ the extrinsic curvature of $S_{t=cte,r=cte}$ in pure AdS. $$ k=\sigma^{\alpha\beta}k_{\alpha\beta}=\frac{1}{r^2}\Gamma^r_{\alpha\beta}n_r $$ $n$ is the normal vector to the surface $r=cte$, $n_\alpha=(\frac{1}{\sqrt{f}},0,...,0)$. $$ k=\frac{1}{r^2}\frac{1}{2}g^{rr}\partial_rg_{\alpha\beta}\frac{1}{\sqrt{f}}=\frac{\sqrt{f}}{r} $$ $k_0$ has the same relation as $k$ but with $f=k+\frac{r^2}{L^2}$. $$ M=\lim _{r\to\infty}\int (k-k_0)\sqrt{\sigma}d^{d-1}x=V_{d-1} r^{d-1} (\frac{\sqrt{k+\frac{r^2}{L^2}-\frac{\omega^{d-2}}{r^{d-2}}}}{r}-\frac{\sqrt{k+\frac{r^2}{L^2}}}{r})=V_{d-1} r^{d-1} L((1+\frac{kL^2}{r^2}-\frac{\omega^{d-2}L^2}{r^d})^{\frac{1}{2}}-(1+\frac{kL^2}{r^2})^{\frac{1}{2}})=V_{d-1} r^{d-1}L (-\frac{\omega^{d-2}L^2}{r^d})=\lim_{r\to\infty}V_{d-1} L (-\frac{\omega^{d-2}L^2}{r})=0 $$ I don't know where it is wrong.
I think the issue is that you’re using a formula for the ADM mass which assumes unit lapse. Consider a spherically symmetric spacetime of the form \begin{align} {\rm d}s^2 & = -f(r) \, {\rm d}t^2 + h(r)\, {\rm d}r^2 + r^2 ({\rm d}\theta^2 + \sin^2\theta\, {\rm d}\varphi^2) \end{align} From the outward pointing unit normal 3-vector $(s^r,s^\theta,s^\varphi)=(h(r)^{-1/2},0,0)$ we compute the trace of the extrinsic curvature, \begin{align} k & = D_i s^i \\ & = \partial_r s^r + (\Gamma^r_{rr}+\Gamma^\theta_{\theta r} + \Gamma^\varphi_{\varphi r})s^r \\ & = -\frac{1}{2}\frac{h’(r)}{h(r)^{3/2}} + \left[\frac{h’(r)}{2h(r)} + \frac{1}{r} + \frac{1}{r}\right]\frac{1}{\sqrt{h(r)}} \\ & = \frac{2}{r}\frac{1}{\sqrt{h(r)}} \end{align} Specializing to the case of Schwarzschild-AdS$_4$, \begin{align} f(r) & = 1 - \frac{2m}{r} + a^2 r^2 \\ h(r) & = \frac{1}{1 - \frac{2m}{r} + a^2 r^2} \end{align} we obtain \begin{align} k & = \frac{2}{r}\left(1-\frac{2m}{r}+a^2r^2\right)^{1/2} \\ k_0 & = \frac{2}{r}\left(1 + a^2 r^2\right)^{1/2} \end{align} The ADM mass is \begin{align} M_{\rm ADM} & = -\frac{1}{8\pi}\int {\rm d}^2 x \sqrt{\sigma} N(k-k_0) \\ & = -\frac{1}{8\pi} \lim_{r\to\infty} (4\pi r^2) \left(1 - \frac{2m}{r} + a^2 r^2\right)^{1/2} \frac{2}{r}\left[\left(1 - \frac{2m}{r} + a^2 r^2\right)^{1/2} - \left(1 + a^2 r^2\right)^{1/2}\right] \\ & = m \end{align}
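The limit can also be checked numerically; a sketch for the Schwarzschild-AdS$_4$ case with hypothetical values $m=1/2$, $a=1$:

```python
import numpy as np

m, a = 0.5, 1.0   # hypothetical mass parameter and inverse AdS radius

def M_adm(r):
    f = 1 - 2 * m / r + a**2 * r**2
    f0 = 1 + a**2 * r**2
    k = 2 / r * np.sqrt(f)      # extrinsic curvature trace
    k0 = 2 / r * np.sqrt(f0)    # pure-AdS reference
    N = np.sqrt(f)              # lapse
    return -(1 / (8 * np.pi)) * (4 * np.pi * r**2) * N * (k - k0)

vals = [M_adm(r) for r in (10.0, 100.0, 1000.0)]   # converges to m quickly
```

Algebraically $M_{\rm ADM}(r) = 2m/(1+\sqrt{f_0/f}) = m + O(1/r^3)$, so the convergence is fast.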
{ "language": "en", "url": "https://physics.stackexchange.com/questions/716943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Matrix operation in Dirac matrices If we define $\alpha_i$ and $\beta$ as Dirac matrices which satisfy all of the conditions of spin $1/2$ particles, and $p$ is the momentum of the particle, then how can we get the matrix form \begin{equation} \alpha_i p_i= \begin{pmatrix} p_z & p_x-ip_y \\ p_x+ip_y & -p_z \end{pmatrix}. \end{equation}
It's just a matrix manipulation. Let $\sigma_i$ be the Pauli matrices. \begin{equation} \alpha_i p_i= \begin{pmatrix} 0& \sigma_i \\ \sigma_i & 0 \end{pmatrix} p_i . \end{equation} $ \alpha_i p_i= \begin{pmatrix} 0& p_1 \sigma_1 \\ p_1\sigma_1 & 0 \end{pmatrix} + \begin{pmatrix} 0& p_2 \sigma_2 \\ p_2\sigma_2 & 0 \end{pmatrix} + \begin{pmatrix} 0& p_3 \sigma_3 \\ p_3\sigma_3 & 0 \end{pmatrix} $ But $ \sigma_1 p_1 = \begin{pmatrix} 0& 1 \\ 1 & 0 \end{pmatrix}p_1=\begin{pmatrix} 0& p_1 \\ p_1 & 0 \end{pmatrix}$ , $ \sigma_2 p_2= \begin{pmatrix} 0& -i \\ i & 0 \end{pmatrix}p_2=\begin{pmatrix} 0& -ip_2 \\ ip_2 & 0 \end{pmatrix}$ $ \sigma_3 p_3= \begin{pmatrix} 1& 0 \\ 0 & -1 \end{pmatrix}p_3= \begin{pmatrix} p_3& 0 \\ 0 & -p_3 \end{pmatrix}$ Now adding these we get ($1\rightarrow x $, $2\rightarrow y $, $3\rightarrow z $), \begin{equation} \alpha_i p_i= \begin{pmatrix} p_z & p_x-ip_y \\ p_x+ip_y & -p_z \end{pmatrix} . \end{equation}
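A quick numerical confirmation of the sum (a sketch with arbitrary momentum components):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

px, py, pz = 0.3, -1.1, 2.0          # arbitrary momentum components
total = px * sigma[0] + py * sigma[1] + pz * sigma[2]

expected = np.array([[pz, px - 1j * py],
                     [px + 1j * py, -pz]])
```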
{ "language": "en", "url": "https://physics.stackexchange.com/questions/48044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The potential and the intensity of the gravitational field in the axis of a circular plate Calculate the potential and the intensity of the gravitational field at a distance $x> 0$ in the axis of thin homogeneous circular plate of radius $a$ and mass $M$. Could anybody describe how to calculate this? Slowly and in detail. I'm helpless. Answer is: potencial $\phi = - \frac {2 \kappa M}{a^2}(\sqrt{a^2+x^2}-x)$ and intesity $K = \frac {2 \kappa M}{a^2} \left( \frac{x}{\sqrt{a^2+x^2}}-1 \right)$
First let us calculate the potential for a ring of radius $a$ at a distance $x$ from the center along the axis. Potential due to an infinitesimal mass element $dm$ will be $$\frac{-Gdm}{\sqrt{a^2+x^2}}$$ Potential due to the ring is then $$\int{\frac{-Gdm}{\sqrt{a^2+x^2}}}=\frac{-G}{\sqrt{a^2+x^2}}\int{dm}=\frac{-Gm}{\sqrt{a^2+x^2}}$$ since $G, a, x$ are constant. Now let us break the disc into infinitesimal rings of mass $dm=2\pi r\,dr\,\frac{M}{\pi a^2}$ ($=$ area $\times$ surface density). The potential due to a ring of radius $r$ and mass $dm$ as given above is $$\frac{-Gdm}{\sqrt{r^2+x^2}}=\frac{-2GMrdr}{a^2\sqrt{r^2+x^2}}$$ Integrating this from $0$ to $a$ $$\int{\frac{-2GMrdr}{a^2\sqrt{r^2+x^2}}}$$ $$=\frac{-GM}{a^2}\int{\frac{2rdr}{\sqrt{r^2+x^2}}}$$ putting $t^2=r^2+x^2$ and $2rdr=2tdt$ $$=\frac{-GM}{a^2}\int{\frac{2tdt}{\sqrt{t^2}}}$$ $$=\frac{-GM}{a^2}[2t]^{\sqrt{a^2+x^2}}_{x}$$ $$=\frac{-2GM}{a^2}({\sqrt{a^2+x^2}}-{x})$$ For the intensity, it can be seen by symmetry that it is along the axis, so we work only with axial components. So, for the ring, $$\int\frac{-Gdm\cos\theta}{a^2+x^2}$$ where $\theta$ is half of the angle subtended at the point by the ring $$\cos\theta=\frac{x}{\sqrt{a^2+x^2}}$$ $$K=\int\frac{-Gxdm}{(a^2+x^2)^{3/2}}=\frac{-Gxm}{(a^2+x^2)^{3/2}}$$ For a disc, based on the same reasoning as in the potential, it is $$K=\int\frac{-Gxdm}{(r^2+x^2)^{3/2}}$$ $$=\int\frac{-2GMxrdr}{a^2(r^2+x^2)^{3/2}}$$ $$=\int\frac{-2GMxtdt}{a^2(t^2)^{3/2}}$$ $$=\int\frac{-2GMxdt}{a^2t^2}$$ $$=\frac{-2GMx}{a^2}[\frac{-1}{t}]^{\sqrt{a^2+x^2}}_x$$ $$K=\frac {2 G M}{a^2} \left( \frac{x}{\sqrt{a^2+x^2}}-1 \right)$$
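Both results can be sanity-checked numerically: the potential by summing ring contributions directly, and the intensity via $K=-\,\mathrm{d}\phi/\mathrm{d}x$ with finite differences. A sketch in units $G=M=a=1$:

```python
import numpy as np

G, M, a = 1.0, 1.0, 1.0

def phi(x):
    return -2 * G * M / a**2 * (np.sqrt(a**2 + x**2) - x)

def K(x):
    return 2 * G * M / a**2 * (x / np.sqrt(a**2 + x**2) - 1)

x = 0.7

# Rebuild the potential by summing rings of radius r and width a/N (midpoint rule)
N = 20000
r = (np.arange(N) + 0.5) * (a / N)
dm = 2 * np.pi * r * (a / N) * M / (np.pi * a**2)
phi_num = np.sum(-G * dm / np.sqrt(r**2 + x**2))

# The field should be minus the derivative of the potential
h = 1e-5
dphi_dx = (phi(x + h) - phi(x - h)) / (2 * h)
```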
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to write the equation of a field line of an electrostatic field? How can we write the equations of a line of force between two charges, say $q$ and $q'$? As an example, you may consider the simpler case of two opposite charges $+q$ and $-q$, and focus on the field line emerging out of $+q$ by making an angle of $\theta$ and reaching $-q$ with an angle $\phi$ with respect to the $x$-axis (see picture below).
Let a charge $+q$ be at the point $(a, 0)$ and a charge $-q$ be at the point $(-a, 0)$. Then the electric field at a point $(x, y)$ is \begin{equation}\tag{e1}\label{e1} \vec{E} = q\vec{r}\left(\frac{1}{r_2^3} - \frac{1}{r_1^3}\right) - qa\hat{e}_x\left(\frac{1}{r_1^3} + \frac{1}{r_2^3}\right), \end{equation} where $\vec{r} = x\hat{e}_x + y\hat{e}_y$, $\vec{r}_1 = \vec{r} + a\hat{e}_x$ and $\vec{r}_2 = \vec{r} - a\hat{e}_x$. The equation of lines of force is \begin{equation}\tag{e2}\label{e2} \frac{\partial y}{\partial x} = \frac{y}{x + a\frac{r_2^3 + r_1^3}{r_2^3 - r_1^3}}. \end{equation} We will now solve equation \eqref{e2}. We first rearrange it as \begin{equation} \left((r_2^3 - r_1^3)x + a(r_2^3 + r_1^3)\right)\frac{\partial y}{\partial x} = (r_2^3 - r_1^3)y. \end{equation} Multiplying both sides by $y/(r_1^3 r_2^3)$, \begin{equation} \left(\frac{x + a}{r_1^3} - \frac{x - a}{r_2^3}\right)y\frac{\partial y}{\partial x} = \frac{y^2}{r_1^3} - \frac{y^2}{r_2^3}. \end{equation} We substitute $y^2 = r_1^2 - (x + a)^2$ in the first factor on the right hand side and $y^2 = r_2^2 - (x - a)^2$ in the second factor to get \begin{equation} \left(\frac{x + a}{r_1^3} - \frac{x - a}{r_2^3}\right)y\frac{\partial y}{\partial x} = \frac{1}{r_1} - \frac{(x+a)^2}{r_1^3} - \frac{1}{r_2} + \frac{(x-a)^2}{r_2^3} \end{equation} or, \begin{equation}\tag{e3}\label{e3} \frac{1}{r_1} - \frac{1}{r_2} - \frac{(x+a)}{r_1^3}\left((x + a) + y\frac{\partial y}{\partial x}\right) + \frac{(x-a)}{r_2^3}\left((x - a) + y\frac{\partial y}{\partial x}\right) = 0.
\end{equation} We use the derivatives of $r_1$ and $r_2$ with respect to $x$ \begin{eqnarray*} \frac{\partial r_1}{\partial x} &=& \frac{x + a}{r_1} + \frac{y}{r_1}\frac{\partial y}{\partial x} \\ \frac{\partial r_2}{\partial x} &=& \frac{x - a}{r_2} + \frac{y}{r_2}\frac{\partial y}{\partial x} \end{eqnarray*} in equation \eqref{e3} to get \begin{equation} \frac{1}{r_1} - \frac{x+a}{r_1^2}\frac{\partial r_1}{\partial x} - \frac{1}{r_2} + \frac{x-a}{r_2^2}\frac{\partial r_2}{\partial x} = 0, \end{equation} or \begin{equation} \frac{d}{dx}\left(\frac{x + a}{r_1}\right) - \frac{d}{dx}\left(\frac{x - a}{r_2}\right) = 0, \end{equation} from which we readily get \begin{equation}\tag{e4}\label{e4} \frac{x + a}{r_1} - \frac{x - a}{r_2} = C, \end{equation} where $C$ is a constant, as the solution of the differential equation (e2). This is also the solution given in article 63 of 'The Mathematical Theory of Electricity and Magnetism' by Sir James Jeans (5th edition).
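Equation (e4) says that $C(x,y)=\frac{x+a}{r_1}-\frac{x-a}{r_2}$ is constant along field lines, which is equivalent to $\vec E\cdot\nabla C=0$ wherever the field is defined. That is easy to test numerically with finite differences; a sketch with $q=a=1$ (prefactor constants dropped):

```python
import numpy as np

q, a = 1.0, 1.0   # +q at (a, 0), -q at (-a, 0)

def E(x, y):
    r1 = np.hypot(x + a, y)   # distance from the charge at (-a, 0)
    r2 = np.hypot(x - a, y)   # distance from the charge at (+a, 0)
    Ex = q * (x - a) / r2**3 - q * (x + a) / r1**3
    Ey = q * y / r2**3 - q * y / r1**3
    return Ex, Ey

def C(x, y):
    r1 = np.hypot(x + a, y)
    r2 = np.hypot(x - a, y)
    return (x + a) / r1 - (x - a) / r2

rng = np.random.default_rng(0)
h = 1e-6
errs = []
while len(errs) < 50:
    x, y = rng.uniform(-3, 3, size=2)
    if min(np.hypot(x - a, y), np.hypot(x + a, y)) < 0.3:
        continue   # stay away from the charges
    Ex, Ey = E(x, y)
    gx = (C(x + h, y) - C(x - h, y)) / (2 * h)
    gy = (C(x, y + h) - C(x, y - h)) / (2 * h)
    # normalized directional derivative of C along E; should vanish
    errs.append(abs(Ex * gx + Ey * gy) / (np.hypot(Ex, Ey) * np.hypot(gx, gy) + 1e-30))
max_err = max(errs)
```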
{ "language": "en", "url": "https://physics.stackexchange.com/questions/277835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 2 }
Partitioning the kinetic energy into components in relativity In classical physics, kinetic energy is defined as $$ KE = \frac{1}{2}m v_x^2 + \frac{1}{2}m v_y^2 + \frac{1}{2}m v_z^2 $$ So, by defining $ KE_x = \frac{1}{2} m v_x^2 $, $ KE_y = \frac{1}{2} m v_y^2 $, $ KE_z = \frac{1}{2} m v_z^2 $, we know the contribution of each component to the total kinetic energy. However, in the case of relativistic kinetic energy, its definition is $$ KE = \sqrt{(mc^2)^2+(p_x^2+p_y^2+p_z^2) c^2}-mc^2 $$ Now, partitioning this into components seems impossible. Does this mean that thinking of an "x component" of kinetic energy becomes meaningless in relativistic theory?
What you are looking for is a function of $f(v_i)$ such that $f(v_x) + f(v_y) + f(v_z) = KE$. In the case of Newtonian physics, $f(v_i) = \frac{1}{2}mv_i^2$. In the case of special relativity, it is impossible to find any such function. Proof by contradiction: Suppose that such an $f$ did exist. Now imagine three objects of mass $m$, each with a different velocity: Object $A$ is stationary. Its kinetic energy, denoted $KE_A$, is equal to $0$. This implies that $f(0)+f(0)+f(0)=0$, and therefore $f(0)=0$. Object B is traveling in the $x$ direction with velocity $v_x=\frac{3c}{5}$. Therefore it has kinetic energy $KE_B = \frac{1}{4}mc^2$. Together with the result for object A, this implies that $f(\frac{3c}{5}) = \frac{1}{4}mc^2$. Object C is traveling in both the $x$ direction and the $y$ direction. Its velocity components are $v_x=v_y=\frac{3c}{5}$. Its total velocity is $v = \sqrt{2}\frac{3c}{5}$. We have: $\gamma=\frac{1}{\sqrt{1-\frac{18}{25}}} = \frac{5}{\sqrt{7}}$. Therefore, $KE_C=\frac{5\sqrt{7}-7}{7}mc^2$. However, it must also be true that $KE_c = f(\frac{3c}{5}) + f(\frac{3c}{5}) = \frac{1}{2}mc^2$. Since these two answers are different, we have a contradiction. Q.E.D.
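The arithmetic behind the contradiction is easy to reproduce; a sketch in units where $m=c=1$:

```python
import math

def gamma(v):
    return 1.0 / math.sqrt(1.0 - v * v)

def ke(vx, vy=0.0, vz=0.0):
    """Relativistic kinetic energy divided by m c^2."""
    v = math.sqrt(vx * vx + vy * vy + vz * vz)
    return gamma(v) - 1.0

ke_B = ke(0.6)              # object B: v_x = 3c/5, gives 1/4
ke_C = ke(0.6, 0.6)         # object C: v_x = v_y = 3c/5, gives 5/sqrt(7) - 1
additive_guess = 2 * ke_B   # what a per-component f(v_i) would require
```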
{ "language": "en", "url": "https://physics.stackexchange.com/questions/422050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Analytical expressions for acceleration due to zonal harmonics of a gravitational field? Wikipedia's Geopotential_model; The deviations of Earth's gravitational field from that of a homogeneous sphere discusses the expansion of the potential in spherical harmonics. The first few zonal harmonics ($\theta$ dependence only) are seen after the monopole term in $$u = -\frac{GM}{r} - \sum_{n=2} J^0_n \frac{P^0_n(\sin \theta)}{r^{n+1}}$$ where $P^0_n$ are Legendre polynomials. I want to calculate the first three terms for $J_2, J_3, J_4$ by hand. I have $$P^0_2(\sin \theta) = \frac{1}{2}(3 \sin^2 \theta - 1)$$ $$P^0_3(\sin \theta) = \frac{1}{2}(5 \sin^3 \theta - 3 \sin \theta)$$ $$P^0_4(\sin \theta) = \frac{1}{8}(35 \sin^4 \theta - 30 \sin^2 \theta + 3)$$ Since these terms are cylindrically symmetric I can write $$\sin^2(\theta) = \frac{x^2+y^2}{r^2} = \frac{x^2+y^2}{x^2+y^2+z^2} $$ The $J_2$ term in the potential is then: $$u_{J_2} = -J_2 \frac{1}{2} \frac{1}{r^3} \frac{3x^2 + 3y^2 - r^2}{r^2} = -J_2 \frac{1}{2} \frac{1}{r^5} (2x^2 + 2y^2 - z^2)$$ and the acceleration from this would be the negative gradient $-\nabla u$ or $$\mathbf{a_{J_2}} = -\nabla u_{J_2}$$ Using this Wolfram Alpha link to make sure I don't make errors taking derivatives, I get (after a slight adjustment) $$a_x = J_2 \frac{x}{r^7} \left( \frac{9}{2} z^2 - 3(x^2 + y^2) \right)$$ $$a_y = J_2 \frac{y}{r^7} \left( \frac{9}{2} z^2 - 3(x^2 + y^2)\right)$$ $$a_z = J_2 \frac{z}{r^7} \left( \frac{3}{2}z^2 - 6 (x^2 + y^2)\right)$$ and these look very similar to but not the same as the results in Wikipedia's Geopotential_model; The deviations of Earth's gravitational field from that of a homogeneous sphere: $$a_x = J_2 \frac{x}{r^7} \left(6 z^2 - \frac{3}{2}(x^2 + y^2)\right)$$ $$a_y = J_2 \frac{y}{r^7} \left(6 z^2 - \frac{3}{2}(x^2 + y^2)\right)$$ $$a_z = J_2 \frac{z}{r^7} \left(3 z^2 - \frac{9}{2}(x^2 + y^2)\right)$$ I'm close but I can't reproduce Wikipedia's result here.
Once I'm confident with the process I can continue for the $J_3$ and $J_4$ terms and start doing numerical integration of orbits.
Let's look at @mmeent's comment suggesting that the spherical coordinates used in the linked Wikipedia article set the polar angle equal to zero at the equator rather than the pole. where spherical coordinates (r, θ, φ) are used, given here in terms of cartesian (x, y, z) for reference While that link shows $\theta = 0$ at the "north pole" (how I've usually seen spherical coordinates defined) the equations directly below that line do indeed define $\theta = 0$ to be the equator with $z=0$: $$x = r \cos \theta \cos \phi$$ $$y = r \cos \theta \sin \phi$$ $$z = r \sin \theta$$ $$\sin^2(\theta) = \frac{z^2}{r^2} = \frac{z^2}{x^2+y^2+z^2} $$ then (noting that in the original question I'd put a minus sign where none existed): $$u_{J_2} = +J_2 \frac{1}{2} \frac{1}{r^3} \frac{3z^2 - r^2}{r^2} = J_2 \frac{1}{2} \frac{1}{r^5} (2z^2 - (x^2 + y^2))$$ and using $\mathbf{a_{J_2}} = -\nabla u_{J_2}$ and Wolfram Alpha I get: $$a_x = J_2 \frac{x}{r^7} \left( 6 z^2 - \frac{3}{2}(x^2 + y^2) \right)$$ $$a_y = J_2 \frac{y}{r^7} \left( 6 z^2 - \frac{3}{2}(x^2 + y^2) \right)$$ $$a_z = J_2 \frac{z}{r^7} \left( 3 z^2 - \frac{9}{2}(x^2 + y^2) \right)$$ which agrees with Wikipedia.
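The corrected components can be double-checked against a finite-difference gradient of $u_{J_2}$; a sketch with $J_2=1$ and an arbitrary test point:

```python
import numpy as np

J2 = 1.0

def u(p):
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    return J2 * (2 * z * z - x * x - y * y) / (2 * r**5)

def a_closed(p):
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    rho2 = x * x + y * y
    return np.array([
        J2 * x / r**7 * (6 * z * z - 1.5 * rho2),
        J2 * y / r**7 * (6 * z * z - 1.5 * rho2),
        J2 * z / r**7 * (3 * z * z - 4.5 * rho2),
    ])

p = np.array([0.8, -0.5, 1.3])
h = 1e-6
grad = np.array([(u(p + h * e) - u(p - h * e)) / (2 * h) for e in np.eye(3)])
acc = -grad   # a = -grad u
```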
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Contradiction between Force and Torque equations A thin uniform rod of mass $M$ and length $L$ and cross-sectional area $A$, is free to rotate about a horizontal axis passing through one of its ends (see figure). What is the value of shear stress developed at the centre of the rod, immediately after its release from the horizontal position shown in the figure? Firstly, we can find the angular acceleration $\alpha$ of the rod by applying the Torque equation about hinge point A as follows: $$ \frac{MgL}{2} = \frac{ML^2}{3} \alpha$$ $$\alpha = \frac{3g}{2L}$$ So the acceleration of the centre of the rod equals $\alpha \cdot \frac{L}{2} = \frac{3g}{4}$, Hence the hinge force $F_H = \frac{Mg}{4}$ Now consider an imaginary cut at the centre of the rod, dividing it into two halves. To account for the effect of one half on the other, we can add a shear force $F$ acting tangentially to the cross-section area on each half. Now focusing on the left-most half, the acceleration of its centre of mass should be equal to $\alpha$ times its distance from hinge point A So, $a_{cm} = {\frac{L}{4}} \cdot {\frac{3g}{2L}} = \frac{3g}{8}$ Keeping in mind that the mass of the left half is $\frac{M}{2}$ and applying force equation, we get the following: $$\frac{Mg}{2} + F - F_H = \frac{3Mg}{16}$$ $$F - F_H = \frac{-5Mg}{16}$$ $$F = \frac{Mg}{4} - \frac{5Mg}{16} = -\frac{Mg}{16}$$ But if we apply the Torque equation about hinge point A, for the left half: $$\frac{Mg}{2} \cdot \frac{L}{4} + F \cdot \frac{L}{2} = \frac{\left(\frac{M}{2}\right)\left(\frac{L}{2}\right)^2}{3} \cdot \alpha $$ $$ \frac{MgL}{8} + \frac{FL}{2} = \frac{ML^2}{24} \cdot \frac{3g}{2L}$$ $$ \frac{FL}{2} = \frac{MgL}{16} - \frac{MgL}{8} = -\frac{MgL}{16}$$ $$F = -\frac{Mg}{8}$$ Why is there a contradiction?
Your initial calculations are correct. The pin force is indeed $\tfrac{m g}{4} $ a fact that kind of surprised me the first time I encountered this problem. Your idealization on the second part is where things were missed. I am using the sketch below, and I am counting positive directions as downwards (same as gravity) and positive angles as clock-wise. Notice each half-bar has mass $m/2$ and mass moment of inertia about its center of mass $ \tfrac{1}{12} \left( \tfrac{m}{2} \right) \left( \tfrac{\ell}{2} \right)^2 = \tfrac{m \ell^2}{96}$ Let's look at the equations of motion for the two half-bars as they are derived from the free body diagrams. $$ \begin{aligned} \tfrac{m}{2} a_G & = \tfrac{m}{2} g - F_C - F_A \\ \tfrac{m \ell^2}{96} \alpha & = -\tau_C - \tfrac{\ell}{4} F_C + \tfrac{\ell}{4} F_A \\ \tfrac{m}{2} a_H & = \tfrac{m}{2} g + F_C \\ \tfrac{m \ell^2}{96} \alpha & = \tau_C - \tfrac{\ell}{4} F_C \\ \end{aligned} $$ And consider the kinematics, where it all acts like a rotating rigid bar, with point accelerations $a_G = \tfrac{\ell}{4} \alpha$ and $a_H = \tfrac{3 \ell}{4} \alpha$ The solution to the above 4×4 system of equations is $$ \begin{aligned} F_A & = \tfrac{m g}{4} & F_C & = \tfrac{m g}{16} \\ \alpha & = \tfrac{3 g}{2 \ell} & \tau_C &= \tfrac{m g \ell}{32} \end{aligned} $$ I think because you did not account for the torque transfer $\tau_C$ between the half-bars, you got $F_C = \tfrac{m g}{8}$ which is incorrect. Note, I used PowerPoint and IguanaTex plugin for the sketches.
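The 4×4 system above can be solved directly in code; a sketch with $m=g=\ell=1$ that reproduces the quoted values:

```python
import numpy as np

m = g = L = 1.0

# Unknowns ordered [F_A, F_C, tau_C, alpha]; the kinematics a_G = (L/4) alpha
# and a_H = (3L/4) alpha are already substituted into the equations of motion.
A = np.array([
    [1.0,    1.0,   0.0, m * L / 8],        # m/2 * a_G = m g/2 - F_C - F_A
    [-L / 4, L / 4, 1.0, m * L**2 / 96],    # I alpha = -tau_C - L/4 F_C + L/4 F_A
    [0.0,   -1.0,   0.0, 3 * m * L / 8],    # m/2 * a_H = m g/2 + F_C
    [0.0,    L / 4, -1.0, m * L**2 / 96],   # I alpha = tau_C - L/4 F_C
])
b = np.array([m * g / 2, 0.0, m * g / 2, 0.0])
F_A, F_C, tau_C, alpha = np.linalg.solve(A, b)
```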
{ "language": "en", "url": "https://physics.stackexchange.com/questions/718414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is equivalent resistance in parallel circuit always less than each individual resistor? There are $n$ resistors connected in a parallel combination given below. $$\frac{1}{R_{ev}}=\frac{1}{R_{1}}+\frac{1}{R_{2}}+\frac{1}{R_{3}}+\frac{1}{R_{4}}+\frac{1}{R_{5}}.......\frac{1}{R_{n}}$$ Foundation Science - Physics (class 10) by H.C. Verma states (Pg. 68) For two resistances $R_{1}$ and $R_{2}$ connected in parallel, $$\frac{1}{R_{ev}}=\frac{1}{R_{1}}+\frac{1}{R_{2}}=\frac{R_{1}+R_{2}}{R_{1}R_{2}}$$ $$R_{ev}=\frac{R_{1}R_{2}}{R_{1}+R_{2}}$$ We see that the equivalent resistance in a parallel combination is less than each of the resistances. I observe this every time I do an experiment on parallel resistors or solve a parallel combination problem. How can we prove $R_{ev}<R_{1},R_{2},R_{3},...R_{n}$ or that $R_{ev}$ is less than the Resistor $R_{min}$, which has the least resistance of all the individual resistors?
We can prove it by induction. Let $$ \frac{1}{R^{(n)}_{eq}} = \frac{1}{R_1} + \cdots+ \frac{1}{R_n} $$ Now, when $n=2$, we find $$ \frac{1}{R^{(2)}_{eq}} = \frac{1}{R_1} + \frac{1}{R_2} \implies R_{eq}^{(2)} = \frac{R_1 R_2}{R_1+R_2} = \frac{R_1}{1+\frac{R_1}{R_2}} = \frac{R_2}{1+\frac{R_2}{R_1}} $$ Since $\frac{R_1}{R_2} > 0$, we see that $R^{(2)}_{eq} < R_1$ and $R^{(2)}_{eq} < R_2$ or equivalently $R^{(2)}_{eq} < \min(R_1, R_2)$. Now, suppose it is true that $R^{(n)}_{eq} < \min (R_1, \cdots, R_n)$. Then, consider $$ \frac{1}{R^{(n+1)}_{eq}} = \frac{1}{R_1} + \cdots+ \frac{1}{R_n} + \frac{1}{R_{n+1}} = \frac{1}{R^{(n)}_{eq}} + \frac{1}{R_{n+1}} $$ Using the result from $n=2$, we find $$ R^{(n+1)}_{eq} < \min ( R_{n+1} , R^{(n)}_{eq} ) < \min ( R_{n+1} , \min (R_1, \cdots, R_n)) $$ But $$ \min ( R_{n+1} , \min (R_1, \cdots, R_n)) = \min ( R_{n+1} , R_1, \cdots, R_n) $$ Therefore $$ R^{(n+1)}_{eq} < \min ( R_1, \cdots, R_n , R_{n+1} ) $$ Thus, we have shown that the above relation holds for $n=2$, and further that whenever it holds for $n$, it also holds for $n+1$. Thus, by induction, it is true for all $n\geq2$.
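The result is also easy to exercise numerically on random resistor sets; a quick sketch:

```python
import random

def r_parallel(resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

random.seed(1)
all_less = True
for _ in range(1000):
    rs = [random.uniform(0.1, 100.0) for _ in range(random.randint(2, 8))]
    if not r_parallel(rs) < min(rs):
        all_less = False
```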
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Contradiction between Force and Torque equations A thin uniform rod of mass $M$ and length $L$ and cross-sectional area $A$, is free to rotate about a horizontal axis passing through one of its ends (see figure). What is the value of shear stress developed at the centre of the rod, immediately after its release from the horizontal position shown in the figure? Firstly, we can find the angular acceleration $\alpha$ of the rod by applying the Torque equation about hinge point A as follows: $$ \frac{MgL}{2} = \frac{ML^2}{3} \alpha$$ $$\alpha = \frac{3g}{2L}$$ So the acceleration of the centre of the rod equals $\alpha \cdot \frac{L}{2} = \frac{3g}{4}$, Hence the hinge force $F_H = \frac{Mg}{4}$ Now consider an imaginary cut at the centre of the rod, dividing it into two halves. To account for the effect of one half on the other, we can add a shear force $F$ acting tangentially to the cross-section area on each half. Now focusing on the left-most half, the acceleration of its centre of mass should be equal to $\alpha$ times its distance from hinge point A So, $a_{cm} = {\frac{L}{4}} \cdot {\frac{3g}{2L}} = \frac{3g}{8}$ Keeping in mind that the mass of the left half is $\frac{M}{2}$ and applying force equation, we get the following: $$\frac{Mg}{2} + F - F_H = \frac{3Mg}{16}$$ $$F - F_H = \frac{-5Mg}{16}$$ $$F = \frac{Mg}{4} - \frac{5Mg}{16} = -\frac{Mg}{16}$$ But if we apply the Torque equation about hinge point A, for the left half: $$\frac{Mg}{2} \cdot \frac{L}{4} + F \cdot \frac{L}{2} = \frac{\left(\frac{M}{2}\right)\left(\frac{L}{2}\right)^2}{3} \cdot \alpha $$ $$ \frac{MgL}{8} + \frac{FL}{2} = \frac{ML^2}{24} \cdot \frac{3g}{2L}$$ $$ \frac{FL}{2} = \frac{MgL}{16} - \frac{MgL}{8} = -\frac{MgL}{16}$$ $$F = -\frac{Mg}{8}$$ Why is there a contradiction?
You can obtain the results like this (following your approach). I) Obtain $F_H$, $a_{CM}$, $\alpha$ from the sum of the forces at the center of mass $$M\,a_{CM}=M\,g-F_H$$ the sum of the torques about the center of mass $$I_{CM}\,\alpha=\frac{F_H\,L}{2}$$ and the kinematic relation $$\alpha=\frac{2\,a_{CM}}{L}$$ With $~I_{CM}=\frac{M\,L^2}{12}$ $\Rightarrow$ $$F_H= \frac{M\,g}{4}~,\quad a_{CM}=\frac{3\,g}{4}~,\quad\alpha=\frac{3\,g}{2\,L}$$ II) For the left half, where you "cut" the rod, the internal load at the cut consists of a shear force $F_C$ and a bending moment (internal torque) $\tau_C$ — omitting $\tau_C$ is what produces your contradiction. Sum of the forces at the center of mass G: \begin{align*} &\frac{M\,a_{G}}{2}=\frac{M\,g}{2}-F_H-F_C\tag 1 \end{align*} sum of the torques about the center of mass: \begin{align*} I_{G}\,\alpha=\frac{F_H\,L}{4}-\frac{F_C\,L}{4}-\tau_C \tag 2 \end{align*} and the kinematic relation $$\frac{a_{CM}}{\frac L2}=\frac{a_{G}}{\frac L4}\quad\Rightarrow\quad a_G=\frac 38\,g$$ With $~I_G~=\frac{1}{12}\frac{M}{2}\left(\frac{L}{2}\right)^2~$ and equations (1), (2) you obtain \begin{align*} &F_C=\frac{1}{16}\,M\,g\quad,\quad\tau_C=\frac{1}{32}\,M\,g\,L \end{align*}
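As a sanity check of these numbers (not part of the original answer), the balance equations can be evaluated with exact rational arithmetic, setting $M = g = L = 1$:

```python
from fractions import Fraction as F

M = g = L = F(1)

# Part I: whole rod, torque about the hinge A
I_A = M * L**2 / 3
alpha = (M * g * L / 2) / I_A            # angular acceleration 3g/(2L)
a_cm = alpha * L / 2                     # 3g/4
F_H = M * g - M * a_cm                   # hinge force Mg/4

# Part II: left half, cut at the rod's midpoint
a_G = alpha * L / 4                      # acceleration of its center of mass, 3g/8
F_C = M * g / 2 - F_H - M / 2 * a_G      # shear force at the cut, from eq. (1)
I_G = F(1, 12) * (M / 2) * (L / 2)**2
tau_C = F_H * L / 4 - F_C * L / 4 - I_G * alpha   # bending moment, from eq. (2)

assert alpha == F(3, 2) and F_H == F(1, 4)
assert F_C == F(1, 16) and tau_C == F(1, 32)
```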
{ "language": "en", "url": "https://physics.stackexchange.com/questions/718414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Derive $\frac{\mathrm{d}}{\mathrm{d}t}(\gamma m\mathbf{v}) = e\mathbf{E}$ from elementary principles? It is experimentally known that the equation of motion for a charge $e$ moving in a static electric field $\mathbf{E}$ is given by: $$\frac{\mathrm{d}}{\mathrm{d}t} (\gamma m\mathbf{v}) = e\mathbf{E}$$ Is it possible to show this using just Newton's laws of motion for the proper frame of $e$, symmetry arguments, the Lorentz transformations and other additional principles?
If I understand your question correctly, you can show it in the following way. First, use some results from special relativity, namely the Lorentz transformations for the force $\mathbf F $, the radius-vector $\mathbf r$ and the velocity $\mathbf v$ ($\mathbf u$ is the velocity of the inertial frame): $$ \mathbf r' = \mathbf r + \Gamma \mathbf u \frac{(\mathbf u \cdot \mathbf r)}{c^{2}} - \gamma \mathbf u t = \mathbf r + \Gamma \mathbf u \frac{(\mathbf u \cdot \mathbf r)}{c^{2}} \quad(t = 0) \quad \Rightarrow r'^{2} = r^{2} + \frac{(\mathbf u \cdot \mathbf r)^{2}\gamma^{2}}{c^{2}}, $$ $$ (\mathbf u \cdot \mathbf r') = \gamma (\mathbf u \cdot \mathbf r), \quad (\mathbf v' \cdot \mathbf r') = \frac{(\mathbf r \cdot \mathbf v)}{\gamma (1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}})} - \gamma (\mathbf r \cdot \mathbf u), $$ $$ \frac{\mathbf F}{\gamma(1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}})} = \mathbf F' + \gamma \frac{\mathbf u (\mathbf F' \cdot \mathbf v')}{c^{2}} + \Gamma \mathbf u \frac{(\mathbf u \cdot \mathbf F')}{c^{2}} \qquad (.1). $$ Second, apply (.1) to Coulomb's law. You may do so because Coulomb's law contains no information about the speed of interaction, which can be argued from a thought experiment with two charges bound by a stiff spring at rest.
So, $$ \frac{\mathbf F}{\gamma(1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}})} = \frac{Qq}{r'^{3}}\!{\left[\mathbf r' + \gamma \mathbf u \frac{(\mathbf r' \cdot \mathbf v')}{c^{2}} + \Gamma \mathbf u \frac{(\mathbf u \cdot \mathbf r')}{c^{2}}\right]} = $$ $$ = \frac{Qq}{r'^{3}}\!{\left[\mathbf r + \Gamma \mathbf u \frac{(\mathbf u \cdot \mathbf r)}{c^{2}} + \gamma \frac{\mathbf u}{c^{2}} \left(\frac{(\mathbf v \cdot \mathbf r)}{\gamma(1 - \frac{(\mathbf u \cdot \mathbf v)}{c^{2}})} - \gamma (\mathbf u \cdot \mathbf r)\right) + \Gamma \gamma \mathbf u \frac{(\mathbf u \cdot \mathbf r)}{c^{2}}\right]} = $$ $$ = \frac{Qq}{r'^{3}}\!{\left[\mathbf r + \Gamma \mathbf u \frac{(\mathbf u \cdot \mathbf r)}{c^{2}}(1 + \gamma) - \frac{\mathbf u}{c^{2}}\gamma^{2}(\mathbf u \cdot \mathbf r) + \gamma \frac{\mathbf u}{c^{2}}\frac{(\mathbf v \cdot \mathbf r)}{\gamma(1 - \frac{(\mathbf u \cdot \mathbf v)}{c^{2}})}\right]} = | \Gamma (1 + \gamma) = \gamma^{2} | = $$ $$ = \frac{Qq}{r'^{3}}\!{\left[\mathbf r + \gamma^{2} \frac{\mathbf u}{c^{2}}(\mathbf u \cdot \mathbf r) - \frac{\mathbf u}{c^{2}}\gamma^{2}(\mathbf u \cdot \mathbf r) + \gamma \frac{\mathbf u}{c^{2}}\frac{(\mathbf v \cdot \mathbf r)}{\gamma(1 - \frac{(\mathbf u \cdot \mathbf v)}{c^{2}})}\right]} = $$ $$ = \frac{Qq}{r'^{3}}\!{\left[\mathbf r + \frac{\mathbf u (\mathbf v \cdot \mathbf r)}{c^{2}(1 - \frac{(\mathbf u \cdot \mathbf v)}{c^{2}})}\right]} = |\mathbf u (\mathbf v \cdot \mathbf r) = \left[ \mathbf v \times [\mathbf u \times \mathbf r ] \right] + \mathbf r (\mathbf u \cdot \mathbf v)| = $$ $$ = \frac{Qq}{r'^{3}}\left[ \mathbf r + \frac{[\mathbf v \times \frac{[\mathbf u \times \mathbf r]}{c^{2}}]}{1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}} + \frac{\mathbf r \frac{(\mathbf u \cdot \mathbf v)}{c^{2}}}{1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}}\right] = $$ $$ = \frac{Qq}{r'^{3}}\left[ \frac{[\mathbf v \times \frac{[\mathbf u \times \mathbf r]}{c^{2}}]}{1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}} + \frac{\mathbf
r \left(\frac{(\mathbf v \cdot \mathbf u)}{c^{2}} + 1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}\right)}{1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}}\right] = \frac{Qq}{r'^{3}}\left[ \frac{[\mathbf v \times [\mathbf u \times \mathbf r]]}{c^{2}\left(1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}\right)} + \frac{\mathbf r}{1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}}\right]. $$ After that, using $\mathbf r'^{3} = \left(r'^{2}\right)^{\frac{3}{2}} = (r^{2} + \gamma^{2}\frac{(\mathbf r \cdot \mathbf u)^{2}}{c^{2}})^{\frac{3}{2}}$, we find that $$ \frac{\mathbf F}{\gamma(1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}})} = \frac{qQ}{(r^{2} + \gamma^{2}\frac{(\mathbf r \cdot \mathbf u)^{2}}{c^{2}})^{\frac{3}{2}}}\!{\left[\frac{[\mathbf v \times \frac{[\mathbf u \times \mathbf r]}{c^{2}}]}{1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}} + \frac{\mathbf r}{1 - \frac{(\mathbf v \cdot \mathbf u)}{c^{2}}}\right]} \Rightarrow $$ $$ \Rightarrow \mathbf F = \frac{q Q \gamma}{(r^{2} + \gamma^{2}\frac{(\mathbf r \cdot \mathbf u)^{2}}{c^{2}})^{\frac{3}{2}}}\!{\left[\mathbf r + \left[\mathbf v \times \frac{[\mathbf u \times \mathbf r]}{c^{2}}\right]\right]} \qquad (.2). $$ Using the designations $$ \mathbf E = \frac{Q\gamma \mathbf r}{\left(r^{2} + \frac{\gamma^{2}}{c^{2}}(\mathbf u \cdot \mathbf r)^{2} \right)^{\frac{3}{2}}}, \quad \mathbf B = \frac{1}{c}[\mathbf u \times \mathbf E], $$ (.2) can be rewritten as $$ \mathbf F = q\mathbf E + \frac{q}{c}[\mathbf v \times \mathbf B]. $$ Finally, recall that $$ \mathbf F = \frac{d}{dt}\left(\frac{m \mathbf v }{\sqrt{1 - \frac{v^{2}}{c^{2}}}}\right) = \frac{m\mathbf v (\frac{\mathbf v \cdot \mathbf a}{c^{2}})}{\left(1 - \frac{v^{2}}{c^{2}}\right)^{\frac{3}{2}}} + \frac{m \mathbf a}{\sqrt{1 - \frac{v^{2}}{c^{2}}}}. $$
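As a quick numerical cross-check of this last identity (an aside, with $c = m = 1$ and an arbitrary smooth subluminal velocity profile), one can compare a finite-difference derivative of $\gamma m v$ against the closed form in one dimension:

```python
import math

c = m = 1.0

def v(t):            # arbitrary smooth velocity profile, |v| < c
    return 0.5 * math.sin(t)

def a(t):            # its time derivative (the acceleration)
    return 0.5 * math.cos(t)

def p(t):            # relativistic momentum gamma * m * v
    return m * v(t) / math.sqrt(1.0 - v(t)**2 / c**2)

t, h = 0.3, 1e-6
dp_dt = (p(t + h) - p(t - h)) / (2 * h)   # numerical d(gamma m v)/dt

vt, at = v(t), a(t)
force = (m * vt * (vt * at / c**2) / (1.0 - vt**2 / c**2)**1.5
         + m * at / math.sqrt(1.0 - vt**2 / c**2))

assert abs(dp_dt - force) < 1e-8
```

In one dimension the two terms combine into $\gamma^3 m a$, which the finite difference reproduces.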
{ "language": "en", "url": "https://physics.stackexchange.com/questions/2978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 5 }
Calculating the expectation value for kinetic energy $\langle E_k \rangle$ for a known wave function I have a wavefunction ($a=1nm$): $$\psi=Ax\exp\left[\tfrac{-x^2}{2a}\right]$$ for which I already calculated the normalisation factor (in my other topic): $$A = \sqrt{\frac{2}{a\sqrt{\pi a}}} = 1.06\frac{1}{nm\sqrt{nm}}$$ What I want to know is how to calculate the expectation value for a kinetic energy. I have tried to calculate it analyticaly but i get lost in the integration: \begin{align} \langle E_k \rangle &= \int\limits_{-\infty}^{\infty} \overline\psi\hat{T}\psi \,dx = \int\limits_{-\infty}^{\infty} Ax \exp \left[{-\tfrac{x^2}{2a}}\right]\left(-\tfrac{\hbar^2}{2m}\tfrac{d^2}{dx^2}Ax \exp \left[{-\tfrac{x^2}{2a}}\right]\right)\,dx =\dots \end{align} At this point I go and solve the second derivative and will continue after this: \begin{align} &\phantom{=}\tfrac{d^2}{dx^2}Ax \exp \left[{-\tfrac{x^2}{2a}}\right] = A\tfrac{d^2}{dx^2}x \exp \left[{-\tfrac{x^2}{2a}}\right]= A\tfrac{d}{dx}\left(\exp \left[{-\tfrac{x^2}{2a}}\right]-\tfrac{2x^2}{2a}\exp \left[{-\tfrac{x^2}{2a}}\right]\right)= \\ &=A \left(-\tfrac{2x}{2a}\exp \left[{-\tfrac{x^2}{2a}}\right] - \tfrac{1}{a}\tfrac{d}{dx}x^2\exp \left[{-\tfrac{x^2}{2a}}\right]\right) = \\ &=A \left(-\tfrac{x}{a}\exp \left[{-\tfrac{x^2}{2a}}\right] - \tfrac{2x}{a}\exp \left[{-\tfrac{x^2}{2a}}\right] + \tfrac{x^3}{a^2}\exp \left[{-\tfrac{x^2}{2a}}\right]\right) = \\ &= A \left(-\tfrac{3x}{a}\exp \left[{-\tfrac{x^2}{2a}}\right] + \tfrac{x^3}{a^2}\exp \left[{-\tfrac{x^2}{2a}}\right]\right) \end{align} Ok so now I can continue the integration: \begin{align} \dots &= \int\limits_{-\infty}^{\infty} Ax \exp \left[{-\tfrac{x^2}{2a}}\right]\left(-\tfrac{\hbar^2}{2m} A \left(-\tfrac{3x}{a}\exp \left[{-\tfrac{x^2}{2a}}\right] + \tfrac{x^3}{a^2}\exp \left[{-\tfrac{x^2}{2a}}\right]\right)\right)\,dx = \\ &= \int\limits_{-\infty}^{\infty} -\frac{A^2\hbar^2}{2m}x\exp\left[-\tfrac{x^2}{2a}\right] \left(-\tfrac{3x}{a}\exp 
\left[{-\tfrac{x^2}{2a}}\right] + \tfrac{x^3}{a^2}\exp \left[{-\tfrac{x^2}{2a}}\right]\right) \,dx\\ &= \int\limits_{-\infty}^{\infty} \frac{A^2\hbar^2}{2m}\left(\tfrac{3x^2}{a}\exp \left[{-\tfrac{x^2}{a}}\right] - \tfrac{x^4}{a^2}\exp \left[{-\tfrac{x^2}{a}}\right]\right) \,dx\\ &= \frac{A^2\hbar^2}{2m} \underbrace{\int\limits_{-\infty}^{\infty}\left(\tfrac{3x^2}{a}\exp \left[{-\tfrac{x^2}{a}}\right] - \tfrac{x^4}{a^2}\exp \left[{-\tfrac{x^2}{a}}\right]\right) \,dx}_{\text{How do I solve this?}}=\dots\\ \end{align} This is the point where I admitted to myself that I was lost in an integral and used WolframAlpha to help myself. Well, I got a weird result. My professor somehow got this ($m$ is the mass of an electron) but I don't know how: \begin{align} \dots = \frac{\hbar^2}{2m}\cdot\frac{3}{2a} = \frac{3\hbar^2}{4ma} = 0.058eV \end{align} Can anyone help me understand the last integral? How can I solve it? Is it possible analytically (it looks like the professor did it, but I am not sure about it)?
In your problem, you need integrals of the kind: $I_{2n} = \int_{-\infty}^{\infty} x^{2n} e^{- \large \frac{x^2}{a}} ~ dx$ Note first that $I_0 = (\pi)^\frac{1}{2} (\frac{1}{a})^{-\frac{1}{2}} = \sqrt{\pi a}$ Now, it is easy to see that there is a recurrence relation between the integrals: $$I_{2n+2} = - \frac{\partial I_{2n}}{\partial (\frac{1}{a}) } $$ For instance, $$I_2 = - \frac{\partial I_{0}}{\partial (\frac{1}{a}) } = \frac{1}{2}(\pi)^\frac{1}{2} (\frac{1}{a})^ {- \large\frac{3}{2}} = \frac{1}{2}(\pi)^\frac{1}{2} ~a^ {\large\frac{3}{2}}$$ $$I_4 = - \frac{\partial I_{2}}{\partial (\frac{1}{a}) } = \frac{3}{2} \frac{1}{2}(\pi)^\frac{1}{2} (\frac{1}{a})^ {- \large\frac{5}{2}} = \frac{3}{2} \frac{1}{2}(\pi)^\frac{1}{2} ~a^ {\large\frac{5}{2}}$$ A general formula is: $$I_{2n} = I_0 ~(2n-1)!! ~\left(\frac{a}{2}\right)^n = \frac{(\pi)^\frac{1}{2}}{2^n} ~(2n-1)!! ~a^{n+\frac{1}{2}}$$ where $(2n-1)!! = (2n-1)(2n-3)\cdots 5\cdot 3\cdot 1$. For your integral, this gives $\frac{A^2\hbar^2}{2m}\left(\frac{3}{a}I_2 - \frac{1}{a^2}I_4\right) = \frac{A^2\hbar^2}{2m}\cdot\frac{3}{4}\sqrt{\pi a} = \frac{3\hbar^2}{4ma}$, which is your professor's result.
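These closed forms are easy to verify numerically; the sketch below checks $I_0$, $I_2$, $I_4$ with a simple midpoint rule and then reproduces the quoted $\frac{3\hbar^2}{4ma}$ in units $\hbar = m = 1$ (units chosen only for the check):

```python
import math

def I(n, a=1.0, L=12.0, steps=200_000):
    """Midpoint-rule approximation of the integral of x^n * exp(-x^2/a) over the real line."""
    h = 2 * L / steps
    return h * sum(x ** n * math.exp(-x * x / a)
                   for x in (-L + (k + 0.5) * h for k in range(steps)))

a = 1.0
# closed form I_{2n} = sqrt(pi)/2^n * (2n-1)!! * a^(n+1/2)
assert abs(I(0, a) - math.sqrt(math.pi) * a**0.5) < 1e-6
assert abs(I(2, a) - math.sqrt(math.pi) / 2 * a**1.5) < 1e-6
assert abs(I(4, a) - 3 * math.sqrt(math.pi) / 4 * a**2.5) < 1e-6

# the expectation value, with hbar = m = 1
A2 = 2 / (a * math.sqrt(math.pi * a))                # normalisation constant A^2
Ek = 0.5 * A2 * (3 / a * I(2, a) - 1 / a**2 * I(4, a))
assert abs(Ek - 3 / (4 * a)) < 1e-6                  # = 3*hbar^2 / (4*m*a)
```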
{ "language": "en", "url": "https://physics.stackexchange.com/questions/72950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Calculating angle of min deviation of prism Two rays are incident at angles of $40^\circ$ and $60^\circ$ on one face of an equilateral triangular prism, and their angles of deviation are equal. Find the angle of minimum deviation.
The graph of angle of deviation vs angle of incidence is a $U$ shape. The fact that the angle of deviation is the same for these two rays means that a ray which is incident at $40$ degrees to the normal will emerge at $60$ degrees to the normal. This allows us to find the refractive index. $$\sin i_1 = \sin 40 = n\sin r_1$$ $$n\sin i_2 = \sin r_2 = \sin 60$$ From geometry, $r_1+i_2 = A = 60$ therefore $$\sin r_1 = \sin (60-i_2) = \sin60\cos i_2 - \cos 60\sin i_2$$ hence $$\begin{eqnarray} \sin 40 &=& n\left(\sin 60\cos i_2 - \cos 60\sin i_2\right)\\ &=& \sin 60 \;n\cos i_2 - \cos 60 \;n\sin i_2\\ &=& \sin60 \;n\cos i_2 - \cos 60\sin 60\end{eqnarray}$$ $$n\cos i_2 = \frac{\sin 40}{\sin 60} + \cos 60 = 1.24223$$ $$n^2\cos^2 i_2 = n^2 - (n\sin i_2)^2 = n^2 - (\sin 60)^2 = n^2 - 0.75 = 1.54313$$ $$n^2 = 2.29313$$ $$n = 1.51431$$ When deviation is a minimum then $$\sin \frac{A+D}2 = n\sin \frac A2 = 1.51431\times\sin 30 = 0.75715$$ $$\frac{A+D}2 = 49.2\; degrees$$ $$D = 98.4 - 60 = 38.4 \;degrees$$
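The arithmetic above can be reproduced in a few lines (a quick check; angles in degrees are converted to radians):

```python
import math

A = 60.0                      # apex angle of the equilateral prism, degrees
i1 = math.radians(40.0)       # incidence angle of the first ray
r2 = math.radians(60.0)       # its exit angle (= incidence angle of the second ray)

# sin40 = n sin(60 - i2) expanded, with n sin(i2) = sin60 from the exit face:
n_cos_i2 = math.sin(i1) / math.sin(r2) + math.cos(r2)
n = math.sqrt(n_cos_i2**2 + math.sin(r2)**2)   # n^2 = (n cos i2)^2 + (n sin i2)^2

# minimum deviation from sin((A + D)/2) = n sin(A/2)
D = 2 * math.degrees(math.asin(n * math.sin(math.radians(A / 2)))) - A

assert abs(n - 1.51431) < 1e-4
assert abs(D - 38.4) < 0.1
```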
{ "language": "en", "url": "https://physics.stackexchange.com/questions/255708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrix representation of spin-2 system? I am surprised no one has asked this before, but what is the matrix representation of a spin-2 system? Also, what are the equivalent of the Pauli matrices for the system?
The irreducible representation of $su(2)$ corresponding to spin 2 is 5-dimensional. One possible choice of explicit $5\times5$ matrices for spin-2 angular momentum is $$J_1=\left( \begin{array}{ccccc} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & \sqrt{\frac{3}{2}} & 0 & 0 \\ 0 & \sqrt{\frac{3}{2}} & 0 & \sqrt{\frac{3}{2}} & 0 \\ 0 & 0 & \sqrt{\frac{3}{2}} & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ \end{array} \right)$$ $$J_2=\left( \begin{array}{ccccc} 0 & -i & 0 & 0 & 0 \\ i & 0 & -i \sqrt{\frac{3}{2}} & 0 & 0 \\ 0 & i \sqrt{\frac{3}{2}} & 0 & -i \sqrt{\frac{3}{2}} & 0 \\ 0 & 0 & i \sqrt{\frac{3}{2}} & 0 & -i \\ 0 & 0 & 0 & i & 0 \\ \end{array} \right)$$ $$J_3=\left( \begin{array}{ccccc} 2 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & -2 \\ \end{array} \right)$$ You can verify that they are Hermitian; that the eigenvalues of each one are -2, -1, 0, 1, and 2; that $$[J_i,J_j]=i\epsilon_{ijk}J_k;$$ and that $$J_1^2+J_2^2+J_3^2=2(2+1)I.$$ This paper discusses the construction of spin matrices for arbitrary spin.
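All four stated properties can be verified mechanically; a short check, assuming NumPy is available:

```python
import numpy as np

s = np.sqrt(1.5)
J1 = np.array([[0, 1, 0, 0, 0],
               [1, 0, s, 0, 0],
               [0, s, 0, s, 0],
               [0, 0, s, 0, 1],
               [0, 0, 0, 1, 0]], dtype=complex)
J2 = 1j * np.array([[0, -1, 0, 0, 0],
                    [1, 0, -s, 0, 0],
                    [0, s, 0, -s, 0],
                    [0, 0, s, 0, -1],
                    [0, 0, 0, 1, 0]], dtype=complex)
J3 = np.diag([2, 1, 0, -1, -2]).astype(complex)

# Hermiticity, spectrum, the commutator [J1, J2] = i J3, and the Casimir 2(2+1) I
assert all(np.allclose(J, J.conj().T) for J in (J1, J2, J3))
assert np.allclose(np.sort(np.linalg.eigvalsh(J1)), [-2, -1, 0, 1, 2])
assert np.allclose(J1 @ J2 - J2 @ J1, 1j * J3)
assert np.allclose(J1 @ J1 + J2 @ J2 + J3 @ J3, 6 * np.eye(5))
```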
{ "language": "en", "url": "https://physics.stackexchange.com/questions/468129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How the equation of a projectile represents a parabola? I am not able to prove that the equation of motion of a projectile is a parabola. The book simply says that the equation given below is the equation of a parabola but does not clarify why $$y= (\tan\theta)x - \frac{g}{2(u\cos\theta)^2}x^2$$ But the standard equations of a parabola are (i) $y^2=4ax$ (ii) $y^2=-4ax$ (iii) $x^2=4ay$ (iv) $x^2=-4ay$
None of the four options seem like the proper definition for a parabola. Maybe if you show that $y \sim x^2$ it would suffice. Specifically, the vertex form of a parabola is $y = a (x-x_0)^2 + d$, where $a$, $x_0$ and $d$ are constants. Bring the given equation in this form to show it is indeed a parabola. * *Equate the two expressions $$ (\tan \theta) x - \frac{g}{2 v^2 \cos^2 \theta} x^2 = a \left(x - x_0 \right)^2 + d $$ *Foil the right-hand side to expand the terms $$ (\tan \theta) x - \frac{g}{2 v^2 \cos^2 \theta} x^2 = a x^2 -2 a x x_0 + a x_0^2 + d $$ *Match the coefficients of $x^2$, $x$ and the constant $$ \begin{aligned} - \frac{g}{2 v^2 \cos^2 \theta} x^2 &= a x^2 \\ (\tan \theta) x &= -2 a x x_0 \\ 0 & = a x_0^2 + d \end{aligned}$$ *Solve the three equations for $a$, $x_0$ and $d$ respectively $$ \begin{aligned} a & = -\frac{g}{2 v^2 \cos^2 \theta} \\ x_0 & = \frac{v^2 \sin \theta \cos \theta}{g} \\ d & = \frac{v^2 \sin^2 \theta}{2g} \end{aligned}$$ *Since all the terms above are constant and do not depend on $x$, the two expressions are indeed equal to each other, and we have the trajectory in "parabolic" form $$y = \underbrace{ \left( -\frac{g}{2 v^2 \cos^2 \theta} \right)}_a \left( x - \underbrace{\left( \frac{v^2 \sin \theta \cos \theta}{g} \right)}_{x_0} \right)^2 + \underbrace{ \left(\frac{v^2 \sin^2 \theta}{2g}\right)}_d$$
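One can confirm numerically that the projectile equation from the question, $y = (\tan\theta)x - \frac{g}{2(v\cos\theta)^2}x^2$, matches a vertex form $a(x-x_0)^2 + d$ with constants $x_0 = \frac{v^2\sin\theta\cos\theta}{g}$ (apex abscissa) and $d = \frac{v^2\sin^2\theta}{2g}$ (apex height); the sample values $v = 10$, $\theta = 50^\circ$, $g = 9.8$ below are arbitrary:

```python
import math

g, v, th = 9.8, 10.0, math.radians(50.0)

def y_original(x):
    return math.tan(th) * x - g / (2 * v**2 * math.cos(th)**2) * x**2

a = -g / (2 * v**2 * math.cos(th)**2)           # leading coefficient
x0 = v**2 * math.sin(th) * math.cos(th) / g     # apex abscissa
d = v**2 * math.sin(th)**2 / (2 * g)            # apex height

def y_vertex(x):
    return a * (x - x0)**2 + d

assert all(abs(y_original(x) - y_vertex(x)) < 1e-9 for x in (0.0, 1.3, 5.0, 9.9))
assert abs(y_original(x0) - d) < 1e-12          # the vertex is the apex of the trajectory
assert abs(y_original(2 * x0)) < 1e-9           # the horizontal range is 2*x0
```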
{ "language": "en", "url": "https://physics.stackexchange.com/questions/745055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculating the expectation value for kinetic energy $\langle E_k \rangle$ for a known wave function I have a wavefunction ($a=1nm$): $$\psi=Ax\exp\left[\tfrac{-x^2}{2a}\right]$$ for which I already calculated the normalisation factor (in my other topic): $$A = \sqrt{\frac{2}{a\sqrt{\pi a}}} = 1.06\frac{1}{nm\sqrt{nm}}$$ What I want to know is how to calculate the expectation value for a kinetic energy. I have tried to calculate it analyticaly but i get lost in the integration: \begin{align} \langle E_k \rangle &= \int\limits_{-\infty}^{\infty} \overline\psi\hat{T}\psi \,dx = \int\limits_{-\infty}^{\infty} Ax \exp \left[{-\tfrac{x^2}{2a}}\right]\left(-\tfrac{\hbar^2}{2m}\tfrac{d^2}{dx^2}Ax \exp \left[{-\tfrac{x^2}{2a}}\right]\right)\,dx =\dots \end{align} At this point I go and solve the second derivative and will continue after this: \begin{align} &\phantom{=}\tfrac{d^2}{dx^2}Ax \exp \left[{-\tfrac{x^2}{2a}}\right] = A\tfrac{d^2}{dx^2}x \exp \left[{-\tfrac{x^2}{2a}}\right]= A\tfrac{d}{dx}\left(\exp \left[{-\tfrac{x^2}{2a}}\right]-\tfrac{2x^2}{2a}\exp \left[{-\tfrac{x^2}{2a}}\right]\right)= \\ &=A \left(-\tfrac{2x}{2a}\exp \left[{-\tfrac{x^2}{2a}}\right] - \tfrac{1}{a}\tfrac{d}{dx}x^2\exp \left[{-\tfrac{x^2}{2a}}\right]\right) = \\ &=A \left(-\tfrac{x}{a}\exp \left[{-\tfrac{x^2}{2a}}\right] - \tfrac{2x}{a}\exp \left[{-\tfrac{x^2}{2a}}\right] + \tfrac{x^3}{a^2}\exp \left[{-\tfrac{x^2}{2a}}\right]\right) = \\ &= A \left(-\tfrac{3x}{a}\exp \left[{-\tfrac{x^2}{2a}}\right] + \tfrac{x^3}{a^2}\exp \left[{-\tfrac{x^2}{2a}}\right]\right) \end{align} Ok so now I can continue the integration: \begin{align} \dots &= \int\limits_{-\infty}^{\infty} Ax \exp \left[{-\tfrac{x^2}{2a}}\right]\left(-\tfrac{\hbar^2}{2m} A \left(-\tfrac{3x}{a}\exp \left[{-\tfrac{x^2}{2a}}\right] + \tfrac{x^3}{a^2}\exp \left[{-\tfrac{x^2}{2a}}\right]\right)\right)\,dx = \\ &= \int\limits_{-\infty}^{\infty} -\frac{A^2\hbar^2}{2m}x\exp\left[-\tfrac{x^2}{2a}\right] \left(-\tfrac{3x}{a}\exp 
\left[{-\tfrac{x^2}{2a}}\right] + \tfrac{x^3}{a^2}\exp \left[{-\tfrac{x^2}{2a}}\right]\right) \,dx\\ &= \int\limits_{-\infty}^{\infty} \frac{A^2\hbar^2}{2m}\left(\tfrac{3x^2}{a}\exp \left[{-\tfrac{x^2}{a}}\right] - \tfrac{x^4}{a^2}\exp \left[{-\tfrac{x^2}{a}}\right]\right) \,dx\\ &= \frac{A^2\hbar^2}{2m} \underbrace{\int\limits_{-\infty}^{\infty}\left(\tfrac{3x^2}{a}\exp \left[{-\tfrac{x^2}{a}}\right] - \tfrac{x^4}{a^2}\exp \left[{-\tfrac{x^2}{a}}\right]\right) \,dx}_{\text{How do I solve this?}}=\dots\\ \end{align} This is the point where I admitted to myself that I was lost in an integral and used WolframAlpha to help myself. Well, I got a weird result. My professor somehow got this ($m$ is the mass of an electron) but I don't know how: \begin{align} \dots = \frac{\hbar^2}{2m}\cdot\frac{3}{2a} = \frac{3\hbar^2}{4ma} = 0.058eV \end{align} Can anyone help me understand the last integral? How can I solve it? Is it possible analytically (it looks like the professor did it, but I am not sure about it)?
My statistical physics professor calls these Laplace integrals $I(h)$. $$I(h)=\int_{0}^{\infty}x^{h}e^{-a^2x^2}dx$$ Note that, for even $h$, $$\int_{-\infty}^{\infty}x^{h}e^{-a^2x^2}dx=2I(h) $$ (for odd $h$ the integral over the whole real line vanishes by symmetry). Some values: $$I(0)=\frac{\sqrt{\pi}}{2a}, \quad I(1)=\frac{1}{2a^2}, \quad I(2)=\frac{\sqrt{\pi}}{4a^3},\quad I(3)=\frac{1}{2a^4}, \quad I(4)=\frac{3\sqrt{\pi}}{8a^5} $$ You may brute-force these by integrating by parts to get rid of $x^{h}$ and then using the classical result for $I(0)$, or you may use induction over $h$, or some other method.
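A quick numerical spot-check of the tabulated values (a sketch; $a = 2$ is an arbitrary test value and the upper limit is cut off where the integrand is negligible):

```python
import math

def I(h, a, L=10.0, steps=100_000):
    """Midpoint-rule approximation of the integral of x^h * exp(-a^2 x^2) over [0, inf)."""
    dx = L / steps
    return dx * sum(x ** h * math.exp(-(a * x) ** 2)
                    for x in ((k + 0.5) * dx for k in range(steps)))

a = 2.0
sq = math.sqrt(math.pi)
expected = [sq / (2 * a),         # I(0)
            1 / (2 * a ** 2),     # I(1)
            sq / (4 * a ** 3),    # I(2)
            1 / (2 * a ** 4),     # I(3)
            3 * sq / (8 * a ** 5)]  # I(4)
for h, value in enumerate(expected):
    assert abs(I(h, a) - value) < 1e-6
```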
{ "language": "en", "url": "https://physics.stackexchange.com/questions/72950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What is the "associated scalar equation" of equations of motion? In an essay I am reading on celestial mechanics the equations of motion for a 2 body problem is given as: $$\mathbf{r}''=\nabla(\frac{\mu}{r})=-\frac{\mu \mathbf{r}}{r^3}$$ Fine. Then it says the "associated scalar equation" is: $$r''=-\frac{\mu}{r^2}+\frac{c^2}{r^3}$$ I've never heard of such a thing. Can someone please explain what the "associated scalar equation" of an equation of motion is. If it is just the equation of motion in scalar form, then why does that extra term $\frac{c^2}{r^3}$ appear? Oh, $\mu$ is the mass constant. It's not clear from the essay what $c^2$ is. It might be the speed of light squared, or perhaps a constant of integration. EDIT: The essay in question can be found here. The equations in question are found on page 5.
The "associated scalar equation" is just the formula for the time evolution of the scalar magnitude of the displacement, $r$, rather than all its vector components. It really only makes sense to write such an equation if the right-hand side can be expressed in terms of $r$ only, and not $\mathbf{r}$. Then you can use it to analyze the evolution of $r$ in simple scalar terms, without worrying about vector quantities. To see where it comes from, first note the scalar $r$ can be written $r = \sqrt{\mathbf{r} \cdot \mathbf{r}}$. Then $$ r' = \frac{1}{2} (\mathbf{r} \cdot \mathbf{r})^{-1/2} (\mathbf{r} \cdot \mathbf{r}' + \mathbf{r}' \cdot \mathbf{r}) = \frac{\mathbf{r}'\cdot\mathbf{r}}{r}. $$ Continuing with the next derivative, we find \begin{align} r'' & = \frac{1}{r^2} \left((\mathbf{r}'' \cdot \mathbf{r} + \mathbf{r}' \cdot \mathbf{r}') r - (\mathbf{r}' \cdot \mathbf{r}) r'\right) \\ & = \frac{1}{r^2} \left(\left(-\frac{\mu}{r^3} \mathbf{r} \cdot \mathbf{r} + \mathbf{r}' \cdot \mathbf{r}'\right) r - \frac{(\mathbf{r}'\cdot\mathbf{r})^2}{r}\right), \end{align} where we use the formula we found for $r'$ as well as $\mathbf{r}'' = -\mu \mathbf{r} / r^3$. Recalling $\mathbf{r} \cdot \mathbf{r} = r^2$, we can write $$ r'' = -\frac{\mu}{r^2} + \frac{1}{r^3} \left((\mathbf{r}' \cdot \mathbf{r}') (\mathbf{r} \cdot \mathbf{r}) - (\mathbf{r}' \cdot \mathbf{r})^2\right), $$ which is the same form as the given associated scalar equation. It remains to show that the parenthesized expression is constant. Recognizing and then manipulating some triple products yields \begin{align} r'' & = -\frac{\mu}{r^2} - \frac{1}{r^3} \mathbf{r} \cdot (\mathbf{r}' \times (\mathbf{r}' \times \mathbf{r})) \\ & = -\frac{\mu}{r^2} - \frac{1}{r^3} (\mathbf{r} \times \mathbf{r}') \cdot (\mathbf{r}' \times \mathbf{r}). \end{align} But $\mathbf{r}' \times \mathbf{r}$ is just the specific relative angular momentum $\mathbf{h}$, which is conserved in the two-body problem. 
Thus we recover the given formula with the constant $c^2 = \mathbf{h} \cdot \mathbf{h}$.
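The parenthesized combination is Lagrange's identity in disguise: $(\mathbf{r}' \cdot \mathbf{r}')(\mathbf{r} \cdot \mathbf{r}) - (\mathbf{r}' \cdot \mathbf{r})^2 = |\mathbf{r} \times \mathbf{r}'|^2$, which is why it reduces to the conserved $h^2$. A quick numerical spot-check with random vectors:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

random.seed(0)
for _ in range(100):
    r = [random.uniform(-1, 1) for _ in range(3)]
    v = [random.uniform(-1, 1) for _ in range(3)]
    lhs = dot(v, v) * dot(r, r) - dot(v, r) ** 2
    h = cross(r, v)                    # specific angular momentum r x r'
    assert abs(lhs - dot(h, h)) < 1e-12
```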
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solution to invariance equation when deriving Lorentz transformation Requiring that the spacetime interval (1 spatial dimension) between the origin and an event $(t,\, x)$ stays constant under a transformation between reference frames: $$c^2t^2-x^2= c^2t'^2- x'^2$$ we want to find a solution to that equation, i.e. express $(t,\, x)$ in terms of $(t',\,x')$. In Landau/Lifshitz II it is stated that the general solution is $$x= x' \mathrm{cosh} (\psi)+ ct' \mathrm{sinh} (\psi),\quad ct= x' \mathrm{sinh}(\psi) + ct' \mathrm{cosh}(\psi)$$ with $\psi$ being the rotating angle in the $tx$ - plane. It is clear to me that the equation is then satisfied, but how would one find this solution starting from the given equation and why is it the most general one?
The different types of Lorentz transformations can be categorized using group theory, but in this case, I think one can convince oneself with a bit of algebra. Let's write our transformation in matrix notation. Representing our event as a column vector, $\begin{pmatrix} ct \\ x \end{pmatrix}$, the transformation you wrote above is given as $\begin{pmatrix} \cosh (\psi) & \sinh(\psi) \\ \sinh (\psi) & \cosh(\psi) \end{pmatrix}$ since $\begin{pmatrix} \cosh (\psi) & \sinh(\psi) \\ \sinh (\psi) & \cosh(\psi) \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix} = \begin{pmatrix} ct \cosh(\psi) + x \sinh(\psi) \\ ct \sinh(\psi) + x \cosh(\psi) \end{pmatrix}\,.$ More generally, transformations can be represented by 2 x 2 matrices, $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}\,.$ The spacetime interval is given by $\begin{pmatrix} ct & x \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix} = c^2 t^2 - x^2\,.$ Let's act on this value by a general transformation $\begin{pmatrix} ct & x \end{pmatrix} \begin{pmatrix} a & c \\ b & d\end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} a & b \\ c & d\end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}\,.$ Note that the transformation acts twice - once for each copy of the event. Demanding that this value stay the same is identical to demanding $\begin{pmatrix} a & c \\ b & d\end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} a & b \\ c & d\end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\,.$ Doing out the matrix multiplication, we get the constraints $a^2 - c^2 = 1$, $b^2 - d^2 = -1$, and $ab-cd=0$. Assuming we are rotating in the $tx$ plane, $b$ and $c$ cannot equal zero, and we can rule out simple solutions like $a=\pm 1$, $d = \pm 1$. As far as I can tell, hyperbolic functions are the only functions that satisfy those above constraints.
Note that through this formulation, we've essentially reduced our physics question of Lorentz transformations to a mathematical question of matrices - specifically, what sort of matrices satisfy the last matrix equation. Matrices that do this are part of the indefinite orthogonal group $O(p,q)$ In this case, we have one space and one time component, so our group is $O(1,1)$ - I found a more mathematical discussion of how to get all the members of $O(1,1)$ here, in case the above wasn't sufficiently convincing.
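A quick numerical check (an aside) that the hyperbolic matrix satisfies $\Lambda^{\mathsf T}\eta\Lambda = \eta$ and preserves the interval, for an arbitrary rapidity:

```python
import math

psi = 0.73                            # arbitrary rapidity
a, b = math.cosh(psi), math.sinh(psi)
c, d = math.sinh(psi), math.cosh(psi)

# the entries of Lambda^T eta Lambda, which must reproduce eta = diag(1, -1)
assert abs(a * a - c * c - 1) < 1e-12
assert abs(b * b - d * d + 1) < 1e-12
assert abs(a * b - c * d) < 1e-12

# interval invariance for a sample event
ct, x = 2.0, 1.3
ct2, x2 = a * ct + b * x, c * ct + d * x
assert abs((ct**2 - x**2) - (ct2**2 - x2**2)) < 1e-9
```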
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Massless Kerr black hole The Kerr metric has the following form: $$ ds^2 = -\left(1 - \frac{2GMr}{r^2+a^2\cos^2(\theta)}\right) dt^2 + \left(\frac{r^2+a^2\cos^2(\theta)}{r^2-2GMr+a^2}\right) dr^2 + \left(r^2+a^2\cos^2(\theta)\right) d\theta^2 + \left(r^2+a^2+\frac{2GMra^2}{r^2+a^2\cos^2(\theta)}\right)\sin^2(\theta) d\phi^2 - \left(\frac{4GMra\sin^2(\theta)}{r^2+a^2\cos^2(\theta)}\right) d\phi\, dt $$ This metric describes a rotating black hole. If one considers $M=0$: $$ ds^2 = - dt^2 + \left(\frac{r^2+a^2\cos^2(\theta)}{r^2+a^2}\right) dr^2 + \left(r^2+a^2\cos^2(\theta)\right) d\theta^2 + \left(r^2+a^2\right)\sin^2(\theta) d\phi^2 $$ This metric is a solution of the Einstein equations in vacuum. What is the physical interpretation of such a solution?
It's simply flat space in Boyer-Lindquist coordinates. By writing $\begin{cases} x=\sqrt{r^2+a^2}\sin\theta\cos\phi\\ y=\sqrt{r^2+a^2}\sin\theta\sin\phi\\ z=r\cos\theta \end{cases}$ you'll get good ol' $\mathbb{M}^4$.
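This can be spot-checked numerically without symbolic algebra: finite-difference the coordinate map to get the Jacobian, build the induced spatial metric $g_{ij} = \sum_k \partial_i x^k \, \partial_j x^k$, and compare with the $M=0$ line element at a sample point (the values of $a$, $r$, $\theta$, $\phi$ below are arbitrary; the $-dt^2$ part is trivially unchanged):

```python
import math

a = 0.7

def xyz(r, th, ph):
    rho = math.sqrt(r * r + a * a)
    return (rho * math.sin(th) * math.cos(ph),
            rho * math.sin(th) * math.sin(ph),
            r * math.cos(th))

def metric(r, th, ph, h=1e-6):
    """Induced spatial metric g_ij = sum_k (dx^k/dq^i)(dx^k/dq^j) by central differences."""
    q = (r, th, ph)
    J = []
    for i in range(3):
        qp, qm = list(q), list(q)
        qp[i] += h; qm[i] -= h
        xp, xm = xyz(*qp), xyz(*qm)
        J.append([(p - m) / (2 * h) for p, m in zip(xp, xm)])
    return [[sum(J[i][k] * J[j][k] for k in range(3)) for j in range(3)] for i in range(3)]

r, th, ph = 1.3, 0.9, 0.4
g = metric(r, th, ph)

g_rr = (r * r + a * a * math.cos(th) ** 2) / (r * r + a * a)
g_thth = r * r + a * a * math.cos(th) ** 2
g_phph = (r * r + a * a) * math.sin(th) ** 2

assert abs(g[0][0] - g_rr) < 1e-6
assert abs(g[1][1] - g_thth) < 1e-6
assert abs(g[2][2] - g_phph) < 1e-6
assert all(abs(g[i][j]) < 1e-6 for i in range(3) for j in range(3) if i != j)
```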
{ "language": "en", "url": "https://physics.stackexchange.com/questions/593675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
Evaluating volume integral for electric potential in an infinite cylinder with uniform charge density Suppose I have an infinitely long cylinder of radius $a$, and uniform volume charge density $\rho$. I want to brute force my way through a calculation of the potential on the interior of the cylinder using the relation: $$ \Phi(\mathbf{x}) = \frac{1}{4\pi \epsilon_0} \int d^3 x' \frac{\rho(\mathbf{x}')}{|\mathbf{x} - \mathbf{x}'|} $$ To simplify the integral, I place my axes so that $\mathbf{x}$ points along the $x$-axis. Thus $$ \mathbf{x} = x \mathbf{\hat{x}} $$ $$ \mathbf{x}' = r' \cos \phi' \mathbf{\hat{x}} + r' \sin \phi' \mathbf{\hat{y}} + z' \mathbf{\hat{z}} $$ $$ \rho(\mathbf{x}') = \rho $$ The resulting volume integral is then: $$ \Phi(\mathbf{x}) = \frac{\rho}{4 \pi \epsilon_0} \int_{-\infty}^{\infty} dz' \int_0^{2\pi} d\phi' \int_0^a r' dr' \frac{1}{\sqrt{x^2 + r'^2 - 2xr'\cos\phi' + z'^2}} $$ How do I evaluate this integral? I've tried to do some substitutions ($\mu = \cos \phi'$, $z' = \sinh\theta$), but nothing has given anything workable. To be clear, I understand that this problem is reasonably easily solvable by first finding the electric field with Gauss's law and then taking the line integral. I would just like to know how to take this integral and, if possible, get some insight into why the integral in this easy problem is stupid hard.
I drop the constant and the prime signs, and focus on the integral: $$ \Phi(\mathbf{x}) = \int_{-\infty}^{\infty} dz \int_0^{2\pi} d\phi \int_0^a r dr \frac{1}{\sqrt{x^2 + r^2 - 2xr\cos\phi + z^2}} $$ The integral is divergent. To remove the divergence, we change the reference point of the potential from $x=\infty$ to $x=0$. Thus, I write the integrand as an integral over an auxiliary variable and change the lower limit, which only adds an (infinite) constant to the potential. $$ \frac{1}{\sqrt{x^2 + r^2 - 2xr\cos\phi + z^2}} =- \int^x_\infty d\xi \frac{\xi - r\cos \phi}{\left(\xi^2 + r^2 - 2\xi r\cos\phi + z^2\right)^{3/2}}\\ \to -\int^x_0 d\xi \frac{\xi - r\cos \phi}{\left(\xi^2 + r^2 - 2\xi r\cos\phi + z^2\right)^{3/2}} $$ The new potential then reads $$ \Phi(\mathbf{x}) = -\int^x_0 d\xi \int^{2\pi}_0 d\phi \int_0^a rdr\int_{-\infty}^\infty dz \frac{\xi - r\cos \phi}{\left(\xi^2 + r^2 - 2\xi r\cos\phi + z^2\right)^{3/2}}\\ = -\int^x_0 d\xi \int^{2\pi}_0 d\phi \int_0^a rdr \frac{2(\xi - r\cos \phi)}{\left(\xi^2 + r^2 - 2\xi r\cos\phi\right)} \to I_1 - I_2 $$ The integral over $z$ can be carried out by a trigonometric substitution. Next, I integrate over $\phi$ by a complex contour integral on the unit circle; details are in the appendices at the bottom. $$ I_1 = 2\xi \int^{2\pi}_0 d\phi \frac{1}{\left(\xi^2 + r^2 - 2\xi r\cos\phi\right)} = \frac{4\pi\xi}{r_>^2-r_<^2} $$ $$ I_2 = 2 r \int^{2\pi}_0 d\phi \frac{\cos\phi}{\left(\xi^2 + r^2 - 2\xi r\cos\phi\right)} =\frac{4\pi r}{r_>^2-r_<^2} \frac{r_<}{r_>} $$ These integrals are done by the change of variables $Z = e^{i\phi}$, which turns them into closed contour integrals on the unit circle. $r_>$ denotes the larger of $r$ and $\xi$, and $r_<$ the smaller.
$$ \Phi(\mathbf{x}) =- 4 \pi \int^x_0 d\xi \int_0^a rdr \left[ \xi - r\frac{r_<}{r_>} \right] \frac{1}{r_>^2-r_<^2} $$ For $x < a$ : $$ \Phi(\mathbf{x}) =- 4 \pi \int^x_0 d\xi \left\{\int_0^\xi rdr \left[ \xi - r\frac{r}{\xi} \right] \frac{1}{\xi^2-r^2} + \int_\xi^a rdr \left[ \xi - r\frac{\xi}{r} \right] \frac{1}{r^2-\xi^2} \right\} \\ =- 4 \pi \int^x_0 d\xi \left\{\int_0^\xi rdr \frac{1}{\xi} + 0 \right\} =- 2 \pi \int^x_0 \xi d \xi = -\pi x^2 $$ For $x > a$ : $$ \Phi(\mathbf{x}) =- 4 \pi \int^a_0 rdr \left\{\int_0^r d\xi \left[ \xi - r\frac{\xi}{r} \right] \frac{1}{r^2-\xi^2} + \int_r^x d\xi \left[ \xi - r\frac{r}{\xi} \right] \frac{1}{\xi^2-r^2} \right\} \\ =- 4 \pi \int^a_0 rdr \left\{ 0 + \int_r^x d\xi \frac{1}{\xi} \right\}\\ =- \pi a^2 \left( 2 \ln x - 2 \ln a + 1 \right). $$ A first quick check of the result is the continuity of the potential at $x = a$, where both forms render $\Phi(a) = -\pi a^2$. Appendix A For $0< b < 1$, the integral over the full angle $\phi$: $$ I_1 = \int^{2\pi}_0 d\phi \frac{1}{\left(1 + b^2 - 2b \cos\phi\right)} = \frac{2\pi}{1-b^2} $$ Let $Z = e^{i\phi}$, hence $d\phi= -i \frac{dZ}{Z}$. Write $I_1$ as $$ I_1 = -i \oint_{\text{unit circle}} \frac{dZ}{Z \left(1 + b^2 \right) - b \left( Z^2 + 1 \right)} = + \frac{i}{b} \oint_{\text{unit circle}} \frac{dZ}{(Z-b)(Z-\frac{1}{b})} \\ = \frac{-2\pi}{b} Res(b) = \frac{-2\pi}{b} \frac{1}{b-\frac{1}{b}} = \frac{2\pi}{1-b^2} $$ Appendix B For $0< b < 1$, the integral over the full angle $\phi$: $$ I_2 = \int^{2\pi}_0 d\phi \frac{\cos\phi}{\left(1 + b^2 - 2b \cos\phi\right)} = \frac{2\pi b}{1 -b^2} $$ Let $Z = e^{i\phi}$: $$ I_2 = \frac{i}{2b} \oint_{\text{unit circle}} \frac{Z^2 + 1}{(Z-b)(Z-\frac{1}{b})} \frac{dZ}{Z}\\ = \frac{i}{2b} 2\pi i \left\{ Res(0) + Res(b)\right\} = - \frac{\pi}{b} \left\{ 1 - \frac{1+b^2}{1-b^2}\right\} = \frac{2\pi b}{1 -b^2} $$
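The two residue results in the appendices are easy to confirm numerically (a sketch of mine, using midpoint-rule quadrature, which is spectrally accurate for smooth periodic integrands; the sample value $b=0.6$ is arbitrary):

```python
import math

def kernel(b, phi):
    # the common denominator 1 + b² − 2b cos φ
    return 1.0 / (1 + b * b - 2 * b * math.cos(phi))

def I1(b, n=2000):
    # midpoint rule for ∫_0^{2π} dφ / (1 + b² − 2b cos φ)
    h = 2 * math.pi / n
    return sum(h * kernel(b, (k + 0.5) * h) for k in range(n))

def I2(b, n=2000):
    # midpoint rule for ∫_0^{2π} dφ cos φ / (1 + b² − 2b cos φ)
    h = 2 * math.pi / n
    return sum(h * math.cos((k + 0.5) * h) * kernel(b, (k + 0.5) * h) for k in range(n))

b = 0.6
print(abs(I1(b) - 2 * math.pi / (1 - b * b)))      # ~0: matches 2π/(1−b²)
print(abs(I2(b) - 2 * math.pi * b / (1 - b * b)))  # ~0: matches 2πb/(1−b²)
```

Both differences come out at machine precision, confirming Appendices A and B.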
{ "language": "en", "url": "https://physics.stackexchange.com/questions/616430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Are any points at rest on a globe spinning both horizontally and vertically? Suppose there was a globe which could be spun about polar and horizontal axes. It is first spun about the polar axis, then promptly spun about the horizontal axis. Would any points be at rest?
The components of a point on the sphere's surface are: \begin{align*} &\mathbf{R}=\begin{bmatrix} x \\ y \\ z \\ \end{bmatrix}= \left[ \begin {array}{c} \cos \left( \theta_0 \right) \sin \left( \phi_0 \right) \\ \sin \left( \theta_0 \right) \sin \left( \phi_0 \right) \\ \cos \left( \phi_0 \right) \end {array} \right] \end{align*} You rotate the position vector $~\mathbf R~$ to obtain the final position vector $~\mathbf R_f~$: \begin{align*} &\mathbf{R}_f=\mathbf S_y(\varphi_2)\,\mathbf S_z(\varphi_1)\,\mathbf{R}= \left[ \begin {array}{c} \cos \left( \varphi _{{2}} \right) \cos \left( \varphi _{{1}} \right) x-\cos \left( \varphi _{{2}} \right) \sin \left( \varphi _{{1}} \right) y+\sin \left( \varphi _{{2}} \right) z\\ \sin \left( \varphi _{{1}} \right) x+ \cos \left( \varphi _{{1}} \right) y\\ -\sin \left( \varphi _{{2}} \right) \cos \left( \varphi _{{1}} \right) x+\sin \left( \varphi _{{2}} \right) \sin \left( \varphi _{{1}} \right) y+ \cos \left( \varphi _{{2}} \right) z\end {array} \right] \tag 1\\ \end{align*} where $~\varphi_1~$ is the rotation about the z-axis and $~\varphi_2~$ the rotation about the new y-axis. From here, e.g., requiring that $~\mathbf R_f=[0,0,1]~$, you obtain from equation (1) that $~\varphi_1=-\theta_0~,\varphi_2=-\phi_0$
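Plugging $\varphi_1=-\theta_0,\ \varphi_2=-\phi_0$ into equation (1) indeed sends the chosen point to the pole; a quick numerical sketch (the sample starting angles below are arbitrary):

```python
import math

def R_f(theta0, phi0, p1, p2):
    # the three components of equation (1) applied to the point (θ₀, φ₀)
    x = math.cos(theta0) * math.sin(phi0)
    y = math.sin(theta0) * math.sin(phi0)
    z = math.cos(phi0)
    return (math.cos(p2) * math.cos(p1) * x - math.cos(p2) * math.sin(p1) * y + math.sin(p2) * z,
            math.sin(p1) * x + math.cos(p1) * y,
            -math.sin(p2) * math.cos(p1) * x + math.sin(p2) * math.sin(p1) * y + math.cos(p2) * z)

theta0, phi0 = 0.7, 1.1                     # arbitrary starting point
print(R_f(theta0, phi0, -theta0, -phi0))    # → (0, 0, 1) up to rounding
```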
{ "language": "en", "url": "https://physics.stackexchange.com/questions/722642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Velocity from velocity potential I have this homework question and I get a different answer to the solutions. In cylindrical polar coordinates $(r,\theta,z)$, the velocity potential of a flow is given by: $$\phi = -\frac{Ua^2r}{b^2-a^2}\left(1+\frac{b^2}{r^2}\right)\cos\theta$$ Find the velocity. I get the velocity as: $$v = \left(-\frac{Ua^2}{b^2-a^2}\left(1+\frac{b^2}{r^2}\right)\cos\theta + \frac{2Ua^2b^2}{(b^2-a^2)r^2}\cos\theta\right)e_r + \left(\frac{Ua^2}{b^2-a^2}\left(1+\frac{b^2}{r^2}\right)\sin\theta\right) e_{\theta}$$ The answer misses out the second term in the $r$ direction, but I can't see where I've gone wrong. Any help appreciated.
For a flow in polar coordinates, the stream function $\phi$ leads to the velocities as $$ v_r=\frac{1}{r}\frac{\partial\phi}{\partial\theta}\qquad v_\theta=-\frac{\partial\phi}{\partial r} $$ and not $v_r=\partial_r\phi$ and $v_\theta=\partial_\theta\phi$. Thus, $$ v_r=\frac{1}{r}\frac{\partial}{\partial \theta}\left(-\frac{Ua^2r}{b^2-a^2}\left(1+\frac{b^2}{r^2}\right)\cos\theta\right) \\ = \frac{1}{r}\left(\frac{Ua^2r}{b^2-a^2}\left(1+\frac{b^2}{r^2}\right)\sin\theta\right) \\ =\frac{Ua^2}{b^2-a^2}\left(1+\frac{b^2}{r^2}\right)\sin\theta $$ and $$ v_\theta=-\frac{\partial}{\partial r}\left(-\frac{Ua^2r}{b^2-a^2}\cos\theta-\frac{Ua^2}{b^2-a^2}\frac{b^2}{r}\cos\theta\right) \\ = +\frac{Ua^2}{b^2-a^2}\cos\theta-\frac{Ua^2}{b^2-a^2}\frac{b^2}{r^2}\cos\theta $$ If $v_0\equiv Ua^2/(b^2-a^2)$, then the vector velocity is $$ \vec{v} = v_0\left(1+\frac{b^2}{r^2}\right)\sin\theta\hat{r}+v_0\left(1-\frac{b^2}{r^2}\right)\cos\theta\hat{\theta} $$
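One can confirm the components obtained above by differentiating $\phi$ numerically (a sketch of mine; the sample values of $U$, $a$, $b$, $r$, $\theta$ are arbitrary, and the derivatives are taken with the relations used in this answer, $v_r = \frac{1}{r}\partial_\theta\phi$, $v_\theta = -\partial_r\phi$):

```python
import math

U, a, b = 1.3, 1.0, 2.0                 # arbitrary sample constants
v0 = U * a**2 / (b**2 - a**2)

def phi(r, th):
    return -U * a**2 * r / (b**2 - a**2) * (1 + b**2 / r**2) * math.cos(th)

def v_numeric(r, th, h=1e-6):
    # central differences for v_r = (1/r) ∂φ/∂θ and v_θ = −∂φ/∂r
    vr = (phi(r, th + h) - phi(r, th - h)) / (2 * h * r)
    vt = -(phi(r + h, th) - phi(r - h, th)) / (2 * h)
    return vr, vt

r, th = 1.5, 0.8
vr, vt = v_numeric(r, th)
print(abs(vr - v0 * (1 + b**2 / r**2) * math.sin(th)))   # ~0
print(abs(vt - v0 * (1 - b**2 / r**2) * math.cos(th)))   # ~0
```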
{ "language": "en", "url": "https://physics.stackexchange.com/questions/79835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Gauge-fixing of an arbitrary field: off-shell & on-shell degrees of freedom How to count the number of degrees of freedom of an arbitrary field (vector or tensor)? In other words, what is the mathematical procedure of gauge fixing?
In this answer, we summarize the results. The analysis itself can be found in textbooks, see e.g. Refs. 1 & 2. $\downarrow$ Table 1: Massless spin $j$ field in $D$ spacetime dimensions. $$\begin{array}{ccc} \text{Massless}^1 & \text{Off-shell DOF}^2 & \text{On-shell DOF}^3 \cr j=0 & 1 & 1 \cr j=\frac{1}{2} & n & \frac{n}{2} \cr j=1 & D-1 & D-2 \cr j=\frac{3}{2} & n(D-1) & \frac{n}{2}(D-3) \cr j=2 & \frac{D}{2}(D-1) & \frac{D}{2}(D-3) \cr \vdots &\vdots &\vdots \cr \text{Integer spin }j\in\mathbb{N}_0 & \begin{pmatrix} D+j-2 \cr D-2 \end{pmatrix}+ \begin{pmatrix} D+j-5 \cr D-2 \end{pmatrix}& \begin{pmatrix} D+j-4 \cr D-4 \end{pmatrix}+ \begin{pmatrix} D+j-5 \cr D-4 \end{pmatrix}\cr \text{Integer spin }D=4 & j^2+2 & 2-\delta^j_0 \cr \text{Integer spin }D=5 & \frac{1}{6}(2j+1)(j^2+j+6) & 2j+1\cr \vdots &\vdots &\vdots \cr \text{Half-int. spin }j\in\mathbb{N}_0+\frac{1}{2} & n\begin{pmatrix} D+j-\frac{5}{2} \cr D-2 \end{pmatrix}+ n\begin{pmatrix} D+j-\frac{9}{2} \cr D-2 \end{pmatrix}& \frac{n}{2}\begin{pmatrix} D+j-\frac{9}{2}\cr D-4 \end{pmatrix} \cr \text{Half-int. spin }D=4 &n(j^2+\frac{3}{4}) &\frac{n}{2} \cr \text{Half-int. spin }D=5 &\frac{n}{6}(2j+1)(j^2+j+\frac{9}{4}) &\frac{n}{4}(2j+1) \cr \vdots &\vdots &\vdots \cr \end{array}$$ $^1$For massive multiplets, go up 1 spacetime dimension, i.e. change $D\to D+1$ (without changing the number $n$ of spinor components). E.g. the on-shell DOF for massive 4D fields famously has a factor $2j+1$, cf. the row $D=5$ in Table 1. $^2$ Off-shell DOF = #(components) - #(gauge transformations). $^3$ On-shell DOF = #(helicity states) = (Classical DOF)/2, where Classical DOF = #(initial conditions). $n$ = #(spinor components). E.g. a Dirac spinor has $n=2^{[D/2]}$ complex components, while a Majorana spinor has $n=2^{[D/2]}$ real components. $\downarrow$ Table 2: Antisymmetric $p$-form gauge potential in $D$ spacetime dimensions, $p\in\mathbb{N}_0$. 
$$\begin{array}{ccc} p\text{-form gauge potential}& \text{Off-shell DOF} & \text{On-shell DOF} \cr & \begin{pmatrix} D-1 \cr p \end{pmatrix} & \begin{pmatrix} D-2 \cr p \end{pmatrix} \cr \end{array}$$ References: * *D.Z. Freedman & A. Van Proeyen, SUGRA, 2012. *H. Nastase, Intro to SUGRA, arXiv:1112.3502; chapter 5.
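The closed binomial formulas of Table 1 can be spot-checked against the explicit $D=4$ and $D=5$ rows (a quick sketch of mine; the convention that a binomial with negative upper entry vanishes is my reading of the table):

```python
from math import comb

def c(n, k):
    # binomial with the convention c(n, k) = 0 for n < 0
    return comb(n, k) if n >= 0 else 0

def off_shell(D, j):
    # integer-spin row of Table 1
    return c(D + j - 2, D - 2) + c(D + j - 5, D - 2)

def on_shell(D, j):
    return c(D + j - 4, D - 4) + c(D + j - 5, D - 4)

for j in range(1, 7):
    # D = 4 rows: off-shell j² + 2, on-shell 2 (for j ≥ 1)
    assert off_shell(4, j) == j * j + 2
    assert on_shell(4, j) == 2
    # D = 5 rows: off-shell (2j+1)(j²+j+6)/6, on-shell 2j + 1
    assert 6 * off_shell(5, j) == (2 * j + 1) * (j * j + j + 6)
    assert on_shell(5, j) == 2 * j + 1

print("general binomial formulas reproduce the D=4 and D=5 rows")
```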
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 0 }
Infinitesimal generator of a flow of Lorentz transformations in spacetime I'm considering the following matrices, which I know form a flow (one-parameter family) of Lorentz transformations in spacetime. I want to know how to calculate the infinitesimal generator of this flow. Unfortunately I have no particular knowledge of Lie algebras, so I need an explanation that does not assume it. $$\begin{pmatrix} \frac{4- \cos(\rho)}{3} & \frac{2- 2\cos(\rho)}{3} & 0 & -\frac{\sin(\rho)}{\sqrt{3}} \\ \frac{2\cos(\rho) - 2}{3} & \frac{4- \cos(\rho)}{3} & 0 & \frac{2\sin(\rho)}{\sqrt{3}}\\ 0 & 0 & 1 & 0 \\ -\frac{\sin(\rho)}{\sqrt{3}} & -\frac{2\sin(\rho)}{\sqrt{3}} & 0 & \cos(\rho) \\ \end{pmatrix}$$ Thank you so much for your help
I'm a bit suspicious of the 22 entry of the matrix you write down, $$M=\begin{pmatrix} \frac{4- \cos(\rho)}{3} & \frac{2- 2\cos(\rho)}{3} & 0 & -\frac{\sin(\rho)}{\sqrt{3}} \\ \frac{2\cos(\rho) - 2}{3} & \frac{4- \cos(\rho)}{3} & 0 & \frac{2\sin(\rho)}{\sqrt{3}}\\ 0 & 0 & 1 & 0 \\ -\frac{\sin(\rho)}{\sqrt{3}} & -\frac{2\sin(\rho)}{\sqrt{3}} & 0 & \cos(\rho) \\ \end{pmatrix}$$ whose logarithm you are invited to take. I suspect that entry to be something like $(4\cos \rho -1)/3$ instead---see below. Your time appears to be in the 4th component, unlike the first one in the conventional notation. In any case, observe $M=\mathbb{1}$ as $\rho\to 0$, so to find its logarithm, we expand in the first two powers of $\rho$, $$ M=\mathbb{1} -\frac{\rho}{\sqrt{3}}\begin{pmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 0 & -2\\ 0 & 0 & 0 & 0 \\ 1 & 2& 0 & 0 \\ \end{pmatrix}+ \frac{\rho^2}{6} \begin{pmatrix} 1 & 2 & 0 & 0 \\ - 2 & 1& 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 &0 & 0 & -3 \\ \end{pmatrix} +O(\rho^3). $$ Let us call the first big matrix A and the second one B. Note $$A^2=\begin{pmatrix} 1 & 2 & 0 & 0 \\ - 2 & -4& 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 &0 & 0 & -3 \\ \end{pmatrix} . $$ Now, if the 22 entry of B were -4 instead of 1, we'd have $A^2=B$, and thus $A^3=-3A$, $A^4=-3A^2$, etc... so (glory!) you can confirm $$ M=\mathbb{1} -\frac{\rho}{\sqrt{3}}A+ \frac{1}{2} \left(-\frac{\rho A}{\sqrt{3}}\right)^2 +...=e^{-\frac{\rho }{\sqrt{3}}A}, $$ since the expansion of the exponential reduces to $$ =\mathbb{1} -\sin\rho ~ \frac{A}{\sqrt{3}} +\frac{1-\cos\rho}{3} A^2 , $$ by the above recursive rules. You would then, indeed, call this logarithm $A/\sqrt{3}$ of the exponential, up to the parameter $-\rho$, the generator of the group element M. In your specific case, you see it is a linear combination of a spacetime rotation (antisymmetric elements) and a boost-like strain (symmetric elements). 
However, as it stands, your B is problematic, which is why I am convinced it is wrong, and should be my proposed expression, instead.
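The key algebraic claims here, $A^3=-3A$ (given the corrected $B=A^2$) and the value of the disputed 22 entry of the closed form, can be checked numerically (a small sketch of mine; plain-list matrix products, no external libraries, with an arbitrary sample $\rho$):

```python
import math

A = [[0, 0, 0, 1],
     [0, 0, 0, -2],
     [0, 0, 0, 0],
     [1, 2, 0, 0]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A2 = matmul(A, A)
A3 = matmul(A2, A)
assert A3 == [[-3 * a for a in row] for row in A]   # the recursion A³ = −3A holds

# closed form M = 1 − sin ρ · A/√3 + (1 − cos ρ)/3 · A²; check the disputed 22 entry
rho = 0.9
M22 = 1 - math.sin(rho) * A[1][1] / math.sqrt(3) + (1 - math.cos(rho)) / 3 * A2[1][1]
print(abs(M22 - (4 * math.cos(rho) - 1) / 3))   # ~0: entry is (4cosρ−1)/3, not (4−cosρ)/3
```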
{ "language": "en", "url": "https://physics.stackexchange.com/questions/366097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to derive the Klein-Nishina formula from the Dirac equation? I'm looking for the simplest demonstration of the Klein-Nishina formula, from the Dirac equation without the field described as a quantum operator: https://en.wikipedia.org/wiki/Klein%E2%80%93Nishina_formula Consider $\psi$ as a "classical" spinor field (not a quantum operator), satisfying the Dirac equation : $$\tag{1} \gamma^a \partial_a\psi + i m \psi = 0. $$ How can we deduce the following Klein-Nishina formula? $$\tag{2} \frac{d\sigma}{d\Omega} = \frac{r_{\mathrm{c}}^2}{2} \Big( P(E, \vartheta) + \frac{1}{P(E, \vartheta)} - \sin^2 \vartheta \Big) P^2(E, \vartheta), $$ where $r_{\mathrm{c}}$ is the classical electron radius and $$\tag{3} P(E, \vartheta) = \frac{1}{1 + \frac{E}{m c^2}(1 - \cos{\vartheta})}. $$ The formula (2) was derived in 1928 to the lowest non-trivial order, after Dirac published his equation and before QFT was formulated (i.e. QED), so I'm expecting that the derivation isn't very complicated.
In the center of mass frame, let $p_1$ be the inbound photon, $p_2$ the inbound electron, $p_3$ the scattered photon, $p_4$ the scattered electron. \begin{equation*} p_1=\begin{pmatrix}\omega\\0\\0\\ \omega\end{pmatrix} \qquad p_2=\begin{pmatrix}E\\0\\0\\-\omega\end{pmatrix} \qquad p_3=\begin{pmatrix} \omega\\ \omega\sin\theta\cos\phi\\ \omega\sin\theta\sin\phi\\ \omega\cos\theta \end{pmatrix} \qquad p_4=\begin{pmatrix} E\\ -\omega\sin\theta\cos\phi\\ -\omega\sin\theta\sin\phi\\ -\omega\cos\theta \end{pmatrix} \end{equation*} where $E=\sqrt{\omega^2+m^2}$. It is easy to show that \begin{equation} \langle|\mathcal{M}|^2\rangle = \frac{e^4}{4} \left( \frac{f_{11}}{(s-m^2)^2} +\frac{f_{12}}{(s-m^2)(u-m^2)} +\frac{f_{12}^*}{(s-m^2)(u-m^2)} +\frac{f_{22}}{(u-m^2)^2} \right) \end{equation} where \begin{equation} \begin{aligned} f_{11}&=-8 s u + 24 s m^2 + 8 u m^2 + 8 m^4 \\ f_{12}&=8 s m^2 + 8 u m^2 + 16 m^4 \\ f_{22}&=-8 s u + 8 s m^2 + 24 u m^2 + 8 m^4 \end{aligned} \end{equation} for the Mandelstam variables $s=(p_1+p_2)^2$, $t=(p_1-p_3)^2$, $u=(p_1-p_4)^2$. Next, apply a Lorentz boost to go from the center of mass frame to the lab frame in which the electron is at rest. \begin{equation*} \Lambda= \begin{pmatrix} E/m & 0 & 0 & \omega/m\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ \omega/m & 0 & 0 & E/m \end{pmatrix}, \qquad \Lambda p_2=\begin{pmatrix}m \\ 0 \\ 0 \\ 0\end{pmatrix} \end{equation*} The Mandelstam variables are invariant under a boost. \begin{equation} \begin{aligned} s&=(p_1+p_2)^2=(\Lambda p_1+\Lambda p_2)^2 \\ t&=(p_1-p_3)^2=(\Lambda p_1-\Lambda p_3)^2 \\ u&=(p_1-p_4)^2=(\Lambda p_1-\Lambda p_4)^2 \end{aligned} \end{equation} In the lab frame, let $\omega_L$ be the angular frequency of the incident photon and let $\omega_L'$ be the angular frequency of the scattered photon. 
\begin{equation} \begin{aligned} \omega_L&=\Lambda p_1\cdot(1,0,0,0)=\frac{\omega^2}{m}+\frac{\omega E}{m} \\ \omega_L'&=\Lambda p_3\cdot(1,0,0,0)=\frac{\omega^2\cos\theta}{m}+\frac{\omega E}{m} \end{aligned} \end{equation} It follows that \begin{equation} \begin{aligned} s&=(p_1+p_2)^2=2m\omega_L+m^2 \\ t&=(p_1-p_3)^2=2m(\omega_L' - \omega_L) \\ u&=(p_1-p_4)^2=-2 m \omega_L' + m^2 \end{aligned} \end{equation} Computing $\langle|\mathcal{M}|^2\rangle$ from these expressions for $s$, $t$, and $u$ in terms of $\omega_L$ and $\omega_L'$ gives \begin{equation*} \langle|\mathcal{M}|^2\rangle= 2e^4\left( \frac{\omega_L}{\omega_L'}+\frac{\omega_L'}{\omega_L} +\left(\frac{m}{\omega_L}-\frac{m}{\omega_L'}+1\right)^2-1 \right) \end{equation*} From the Compton formula \begin{equation*} \frac{1}{\omega_L'}-\frac{1}{\omega_L}=\frac{1-\cos\theta_L}{m} \end{equation*} we have \begin{equation*} \cos\theta_L=\frac{m}{\omega_L}-\frac{m}{\omega_L'}+1 \end{equation*} Hence \begin{equation*} \langle|\mathcal{M}|^2\rangle= 2e^4\left( \frac{\omega_L}{\omega_L'}+\frac{\omega_L'}{\omega_L}+\cos^2\theta_L-1 \right) \end{equation*} The differential cross section for Compton scattering is \begin{equation*} \frac{d\sigma}{d\Omega}\propto \left(\frac{\omega_L'}{\omega_L}\right)^2\langle|\mathcal{M}|^2\rangle \end{equation*}
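As a numerical cross-check (my own sketch, in units $e=m=1$; the sample kinematics $\omega=1$, $\theta=1.2$ are arbitrary), the $\langle|\mathcal{M}|^2\rangle$ built from $f_{11}$, $f_{12}$, $f_{22}$ agrees with the lab-frame form $2e^4(\omega_L/\omega_L' + \omega_L'/\omega_L + \cos^2\theta_L - 1)$:

```python
import math

m, w, theta = 1.0, 1.0, 1.2        # electron mass, CM photon energy, CM angle (sample values)
E = math.sqrt(w * w + m * m)

# lab-frame photon energies obtained from the boost
wL  = w * w / m + w * E / m
wLp = w * w * math.cos(theta) / m + w * E / m

# Mandelstam invariants in terms of the lab energies
s = 2 * m * wL + m * m
u = -2 * m * wLp + m * m

f11 = -8*s*u + 24*s*m**2 + 8*u*m**2 + 8*m**4
f12 =  8*s*m**2 + 8*u*m**2 + 16*m**4
f22 = -8*s*u + 8*s*m**2 + 24*u*m**2 + 8*m**4

# f12 is real here, so the f12 and f12* terms combine
M2_stu = 0.25 * (f11 / (s - m*m)**2 + 2 * f12 / ((s - m*m) * (u - m*m)) + f22 / (u - m*m)**2)

cos_tL = m / wL - m / wLp + 1      # lab scattering angle from the Compton formula
M2_lab = 2 * (wL / wLp + wLp / wL + cos_tL**2 - 1)

print(abs(M2_stu - M2_lab))        # ~0: both expressions agree (in units of e⁴)
```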
{ "language": "en", "url": "https://physics.stackexchange.com/questions/416155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Peskin & Schroeder's way of showing $Z_1=Z_2$ via integration by parts I am trying to follow Peskin & Schroeder's textbook on Renormalization. I tried a few ways but this does not match the textbook. Equation (10.43) in Peskin reads \begin{align} \delta_2 = -\frac{e^2}{(4\pi)^{\frac{d}{2}}} \int_0^1 dx \frac{\Gamma\left(2-\frac{d}{2}\right)}{\left( (1-x)^2 m^2 + x \mu^2 \right)^{2-\frac{d}{2}}} \left[ (2-\epsilon) x - \frac{\epsilon}{2} \frac{2x(1-x)m^2}{\left( (1-x)^2 m^2 + x \mu^2 \right)} (4-2x - \epsilon (1-x)) \right]. \end{align} and equation (10.46) in Peskin reads \begin{align} \delta_1 &= -\frac{e^2}{(4\pi)^{\frac{d}{2}}} \int_0^1 dz (1-z) \\ &\left\{ \frac{\Gamma\left(2-\frac{d}{2}\right)}{\left( (1-z)^2 m^2 + z \mu^2 \right)^{2-\frac{d}{2}}} \frac{(2-\epsilon)^2}{2} + \frac{\Gamma \left(3-\frac{d}{2}\right)}{\left( (1-z)^2 m^2 + z \mu^2 \right)^{3-\frac{d}{2}}} \left( 2 (1-4z + z^2) - \epsilon(1-z)^2 \right) m^2 \right\}. \end{align} Using integration by parts I want to get from (10.46) to (10.43). My first attempt was to rewrite equation (10.46) as \begin{align} \delta_1 = -\frac{e^2}{(4\pi)^{\frac{d}{2}}} \int_0^1 dz(1-z) \frac{\Gamma(2-\frac{d}{2})}{((1-z)^2 m^2 + z \mu^2)^{2-\frac{d}{2}}} \left[ \frac{(2-\epsilon)^2}{2} + \frac{(2-\frac{d}{2})}{((1-z)^2 m^2 + z \mu^2)} (2(1-4z+z^2) -\epsilon (1-z)^2 ) m^2\right] \end{align} and then integrate by parts. [Replacing $(1-z) \rightarrow x$ is not a good choice.] At first I just computed with Mathematica, and later I noticed that I have a problem with the boundary term. Do you have any ideas?
Equation (10.43) in Peskin reads \begin{align} \delta_2 = -\frac{e^2}{(4\pi)^{\frac{d}{2}}} \int_0^1 dx \frac{\Gamma\left(2-\frac{d}{2}\right)}{\left( (1-x)^2 m^2 + x \mu^2 \right)^{2-\frac{d}{2}}} \left[ (2-\epsilon) x - \frac{\epsilon}{2} \frac{2x(1-x)m^2}{\left( (1-x)^2 m^2 + x \mu^2 \right)} (4-2x - \epsilon (1-x)) \right]. \end{align} and equation (10.46) in Peskin reads \begin{align} \delta_1 &= -\frac{e^2}{(4\pi)^{\frac{d}{2}}} \int_0^1 dz (1-z) \\ &\left\{ \frac{\Gamma\left(2-\frac{d}{2}\right)}{\left( (1-z)^2 m^2 + z \mu^2 \right)^{2-\frac{d}{2}}} \frac{(2-\epsilon)^2}{2} + \frac{\Gamma \left(3-\frac{d}{2}\right)}{\left( (1-z)^2 m^2 + z \mu^2 \right)^{3-\frac{d}{2}}} \left( 2 (1-4z + z^2) - \epsilon(1-z)^2 \right) m^2 \right\}. \end{align} We want to get from (10.46) to (10.43) using integration by parts. Using \begin{align} &\frac{d}{dz}\left[ \frac{\Gamma\left(2-\frac{d}{2}\right)}{\left( (1-z)^2 m^2 + z \mu^2 \right)^{ 2-\frac{d}{2}}} \right] = \frac{\Gamma\left(3-\frac{d}{2}\right)}{\left( (1-z)^2 m^2 + z \mu^2 \right)^{ 3-\frac{d}{2}}} \left( 2m^2(1-z) - \mu^2 \right). \end{align} Now we subtract $\delta_2$ from $\delta_1$ and collect the $(1-2z)$ terms. Replacing the $(1-2z)$ terms by total derivatives, we have \begin{align} \delta_1 - \delta_2 &\equiv -\frac{\epsilon}{2}\frac{e^2}{(4\pi)^{\frac{d}{2}}} \int_0^1 dz (1-z) \frac{\Gamma\left(2-\frac{d}{2}\right)}{\left((1-z)^2 m^2 + z \mu^2\right)^{3-\frac{d}{2}}} \left( 2 m^2 (1-z)(1+ z(2-\epsilon) )- z \mu^2 (1-\epsilon) \right) . \end{align} So at this point, we see that the finite parts of $\delta_1$ and $\delta_2$ coincide, i.e., in the limit $\epsilon \rightarrow 0$, $\delta_1 -\delta_2 \rightarrow 0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/581281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Massless Kerr black hole Kerr metric has the following form: $$ ds^2 = -\left(1 - \frac{2GMr}{r^2+a^2\cos^2(\theta)}\right) dt^2 + \left(\frac{r^2+a^2\cos^2(\theta)}{r^2-2GMr+a^2}\right) dr^2 + \left(r^2+a^2\cos^2(\theta)\right) d\theta^2 + \left(r^2+a^2+\frac{2GMra^2}{r^2+a^2\cos^2(\theta)}\right)\sin^2(\theta) d\phi^2 - \left(\frac{4GMra\sin^2(\theta)}{r^2+a^2\cos^2(\theta)}\right) d\phi\, dt $$ This metric describes a rotating black hole. If one considers $M=0$: $$ ds^2 = - dt^2 + \left(\frac{r^2+a^2\cos^2(\theta)}{r^2+a^2}\right) dr^2 + \left(r^2+a^2\cos^2(\theta)\right) d\theta^2 + \left(r^2+a^2\right)\sin^2(\theta) d\phi^2 $$ This metric is a solution of the Einstein equations in vacuum. What is the physical interpretation of such a solution?
A reference which answers this is Visser (2008). It discusses the limits of vanishing mass $M \rightarrow 0$, and rotation parameter $a \rightarrow 0$. Your example is in $\S5$. Visser comments "This is flat Minkowski space in so-called “oblate spheroidal” coordinates...", as described in a different answer here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/593675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 1 }
Parametric representation of the motion of a particle in the context of Kruskal coordinates In "N. Straumann - General Relativity", discussing the Kruskal continuation of the Schwarzschild solution, the following is considered for radial timelike geodesics: $$d\tau=(\frac{2m}{r}-\frac{2m}{R})^{\frac{-1}{2}}dr$$ with the following parametric solution: $$r=\frac{R}{2}(1+\cos{\eta})$$ $$\tau=(\frac{R^3}{8m})^{\frac{1}{2}}(\eta+\sin{\eta})$$ I would like to verify this parametric representation of the motion. So I have computed $$dr=-\frac{R}{2}\sin{\eta}d\eta$$ $$d\tau=(\frac{R^3}{8m})^{\frac{1}{2}}(1+\cos{\eta})d\eta$$ And so I should have: $$(\frac{R^3}{8m})^{\frac{1}{2}}(1+\cos{\eta})d\eta=-\Big(\frac{2m}{\frac{R}{2}(1+\cos{\eta})}-\frac{2m}{R}\Big)^{\frac{-1}{2}}\frac{R}{2}\sin{\eta}d\eta$$ So from this it seems that the identity does not hold... thus I must be making a mistake somewhere. Can you help me?
Correct me if I am wrong, but I think it should be $$\text{d}\tau = -\left(\frac{r_s}{r}-\frac{r_s}{R}\right)^{-\frac 12}\text{d}r$$ with $r_s = 2M$. This would be consistent for an infalling motion with the changes in the proper time and radius when compared to the changes in the parameter $\eta\,$ ($\text{d}r <0$ while we want $\text{d}\tau > 0$). With this, you can show that the parametrization indeed holds. One has \begin{align*}\left(\frac{r_s}{r}-\frac{r_s}{R}\right)^{-\frac 12} &= \frac{1}{\sqrt{r_s}}\left(\frac{1-\frac 12 (1+\cos\eta)}{\frac R2 (1+\cos\eta)}\right)^{-\frac 12} \\ &= \frac{1}{\sqrt{r_s}} \left(\frac{\frac 12 - \frac 12 \cos\eta}{\frac R2 (1+\cos\eta)}\right)^{-\frac 12} \\ &= \sqrt{\frac{R}{r_s}} \sqrt{\frac{1+\cos\eta}{1-\cos\eta}} \\ &= \sqrt{\frac{R}{r_s}} \frac{1}{\tan \frac \eta 2} \end{align*} Plugging this in leads to \begin{align*} \text{d}\tau &= -\left(\frac{r_s}{r}-\frac{r_s}{R}\right)^{-\frac 12}\text{d}r \\ &= \sqrt{\frac{R}{r_s}} \frac{1}{\tan \frac \eta 2} \frac{R}{2}\sin\eta \,\text{d}\eta \\ &= \sqrt{\frac{R^3}{4r_s}} \frac{\sin\eta}{\tan \frac \eta 2}\text{d}\eta \end{align*} Notice that \begin{align*} \frac{\sin\eta}{\tan \frac \eta 2} &= \sin\eta \frac{\cos \frac \eta 2}{\sin \frac \eta 2} \\ &= 2\sin\frac \eta 2\,\cos\frac \eta 2\, \frac{\cos \frac \eta 2}{\sin \frac \eta 2} \\ &= 2\,\cos^2\frac \eta 2 \\ &= 2 \left(\frac{1+\cos\eta}{2}\right) \\ &= 1+\cos\eta \end{align*} Hence \begin{equation*} \text{d}\tau = \sqrt{\frac{R^3}{4r_s}} (1+\cos\eta)\text{d}\eta = \left(\frac{R^3}{8m}\right)^{\frac 12} (1+\cos\eta) \text{d}\eta \end{equation*}
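The chain of equalities above can also be verified numerically (a quick sketch with arbitrary sample values $m=1$, $R=10$, and $r_s = 2m$):

```python
import math

m, R = 1.0, 10.0     # arbitrary sample values
rs = 2 * m

def lhs(eta):
    # (r_s/r − r_s/R)^(−1/2) · |dr/dη| with r = (R/2)(1 + cos η)
    r = R / 2 * (1 + math.cos(eta))
    return (rs / r - rs / R) ** -0.5 * (R / 2) * math.sin(eta)

def rhs(eta):
    # dτ/dη from the parametrization, (R³/8m)^(1/2) (1 + cos η)
    return math.sqrt(R**3 / (8 * m)) * (1 + math.cos(eta))

for eta in (0.3, 1.0, 2.0):
    print(abs(lhs(eta) - rhs(eta)))   # ~0 for 0 < η < π
```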
{ "language": "en", "url": "https://physics.stackexchange.com/questions/621721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Quantum Mechanics: Does $\vec{A} \cdot \vec{p} = \frac{1}{2} \vec{B}\cdot\vec{L}$? In a quantum mechanics question context, I noticed the need to prove that for a constant magnetic field $\mathbf{B}$, The vector potential $\mathbf{A}$ and the angular momentum operator $\mathbf{L}$, satisfy: $$\mathbf{A}\cdot\mathbf{p} = \frac{1}{2} \mathbf{B}\cdot \mathbf{L}$$ Where $\mathbf{p}$ is the momentum operator. Without loss of generality, I can take a constant $\mathbf{B} = B_0 \mathbf{\hat{z}}$, and get: $$\mathbf{A} = \frac{1}{2} B_0 (x_2,-x_1)$$ And using a Weyl ordering for $\mathbf{A}\cdot\mathbf{p}$ I get: $$ \frac{1}{2}(A_j p_j + p_j A_j) = \frac{1}{4} B_0 \left(x_2 p_1 - x_1 p_2 + p_1 x_2 - p_2 x_1\right) = \frac{1}{2} \mathbf{B} \cdot \mathbf{L}$$ As requested. Is it possible to prove this for a general, not necessarily constant $\mathbf{B}$?
This identity doesn't hold for non-constant magnetic fields. For instance, with: $$\mathbf{A} = \frac{1}{2}B_0 k_0 ( -y^2, x^2) \Rightarrow \mathbf{B} = B_0 k_0 (0,0,x+y)$$ We get on the one hand: $$ A_j p_j + p_j A_j = \frac{1}{2} B_0 k_0 ( -y^2 p_x + x^2 p_y - p_x y^2 + p_y x^2) = B_0 k_0 (x^2 p_y-y^2 p_x)$$ whereas, symmetrizing the right-hand side in the same way: $$(x+y)L_z + L_z(x+y) = (x+y)(xp_y-yp_x) + (xp_y-yp_x)(x+y) \\= 2x^2p_y - 2y^2p_x + 2xy(p_y-p_x) + x[p_y,y] - y[p_x,x] \\= 2x^2p_y - 2y^2p_x + 2xy(p_y-p_x) - i\hbar(x-y)$$ so that $$\frac{1}{2}\mathbf{B}\cdot \mathbf{L} = \frac{1}{2}B_0 k_0 \left((x+y)L_z + L_z (x+y)\right) = B_0k_0\left(x^2p_y-y^2p_x + xy(p_y-p_x) - i\frac{\hbar}{2}(x-y)\right)$$ And we can see that proper Weyl ordering produces a difference of: $$\mathbf{A}\cdot \mathbf{p} - \frac{1}{2}\mathbf{B}\cdot \mathbf{L} = B_0k_0 \left(xy(p_x-p_y) + i \frac{\hbar}{2} (x-y)\right)$$ And without Weyl ordering, i.e. when interpreting $\mathbf{B}\cdot\mathbf{L}= B_j L_j = B_z (xp_y-yp_x)$, the discrepancy persists: $$\mathbf{A}\cdot \mathbf{p}-\frac{1}{2}B_jL_j = \frac{1}{2}B_0k_0\, xy(p_x-p_y)$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/741354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a better, faster way to do this projectile motion question? The question is In a combat exercise, a mortar at M is required to hit a target at O, which is taking cover 25 m behind a structure of negligible width 10 m tall. This mortar can only fire at an angle of 45 degrees to the horizontal, but can fire shells of any velocity. Find the minimum initial velocity required to hit the target. I solved it as follows. Forming a parabola with $ r_v $ against $ r_h $: $ r_v = u_vt + \frac{1}{2}a_vt^{2} $ (1) $ r_h = u_ht + \frac{1}{2}a_ht^{2} $ but $ a_h = 0 $ so $ r_h = u_ht $ $ t = \frac{r_h}{u_h} $ (2) Substituting (2) into (1): $ r_v = u_v\frac{r_h}{u_h} + \frac{1}{2}a_v\frac{r_h^{2}}{u_h^{2}} $ (3) and because the angle of inclination is 45° $ u_v = u_h = \frac{u}{\sqrt{2}} $ (4) From (4) and (3): $ r_v = r_h^{2}\frac{a}{u^{2}} + r_h $ (5) Let the distance between the mortar and the building be $ d $. Then when $ r_h = d + 25 $, $ r_v = 0 $. (6) From (5) and (6): $ 0 = (d + 25)\frac{a}{u^{2}} + 1 $ so $ u^2 = -a(d + 25) $ (7) Substituting (7) into (5): $ r_v = -r_h^{2}\frac{1}{(d + 25)} + r_h $ (8) We also know that to clear the building, when $ r_h = d $, $ r_v > 10 $. (9) From (9) and (8): $ 10 < -d^{2} \frac{1}{(d + 25)} + d $ After simplifying... ($ d + 25 $ is positive) $ d > \frac{50}{3} $ (10) Rearranging (7): $ d = -\frac{u^{2}}{a} - 25 $ (11) And then from (10) and (11) and with $ a = -9.8 $: $ \frac{50}{3} < \frac{u^{2}}{9.8} - 25 $ Simplifying, and with the knowledge that $ u > 0 $: $ u > 20.2073... $ So the minimum initial velocity required to hit the target is 20 m/s (2 s. f.). Huzzah! My question is: is there a faster way to solve the problem?
Whether this is any faster is debatable, but you could do it this way: The trajectory of the shell is symmetric, so $M$ firing a shell at $O$ is the same as $O$ firing a shell at $M$. So all you have to do is consider $O$ firing the mortar at 45° and ask what is the minimum velocity required to clear the wall. So, if $O$ fires the shell at 45° the equations of motion are: $$\begin{align} x &= t \frac{v}{\sqrt{2}} \\ y &= t \frac{v}{\sqrt{2}} - \frac{1}{2} g t^2 \end{align}$$ If we require that the trajectory passes through the point $(25, 10)$ then this gives us two simultaneous equations in $t$ and $v$: $$\begin{align} 25 &= t \frac{v}{\sqrt{2}} \\ 10 &= t \frac{v}{\sqrt{2}} - \frac{1}{2} g t^2 \end{align}$$ It's $v$ we're interested in, so we rearrange the first equation to get: $$ t = \frac{25\sqrt{2}}{v} $$ and substitute it in the second equation to get: $$ 10 = \frac{25\sqrt{2}}{v} \frac{v}{\sqrt{2}} - \frac{1}{2} g \left( \frac{25\sqrt{2}}{v} \right)^2 $$ and a quick rearrangement gives: $$ v = \sqrt{\frac{g \space 25^2}{15}} = 20.2 m/s $$
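The arithmetic of the closed form can be checked quickly (a small sketch; $g=9.8\ \mathrm{m/s^2}$ as in the answer):

```python
import math

g = 9.8
v = math.sqrt(g * 25**2 / 15)      # the closed form derived above
print(v)                           # ≈ 20.2073 m/s

# sanity check: the 45° shot fired from O with this v passes through (25, 10)
t = 25 * math.sqrt(2) / v
y = t * v / math.sqrt(2) - 0.5 * g * t**2
print(y)                           # ≈ 10.0
```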
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to derive the relation between Euler angles and angular velocity and get the same form as mentioned in the bellow figure How to derive the relation between Euler angles and angular velocity and get this form: $$ \left. \begin{cases}{} P \\ Q\\ R \\ \end{cases} \right\}= \left[ \begin{array}{c} 1&0&-\sin\Theta\\ 0&\cos\Phi&\cos\Theta\sin\Phi\\ 0&-\sin\Phi&\cos\Theta\cos\Phi \end{array} \right] \left. \begin{cases}{} \dot{\Phi} \\ \dot{\Theta}\\ \dot{\Psi} \\ \end{cases} \right\} $$ $$ \left. \begin{cases}{} \dot{\Phi} \\ \dot{\Theta}\\ \dot{\Psi} \\ \end{cases} \right\}= \left[ \begin{array}{c} 1&\sin\Phi\tan\Theta&\cos\Phi\tan\Theta\\ 0&\cos\Phi&-\sin\Phi\\ 0&\sin\Phi\sec\Theta&\cos\Phi\sec\Theta \end{array} \right] \left. \begin{cases}{} P \\ Q\\ R \\ \end{cases} \right\} $$
How to derive the relation between euler angles and angular velocity \begin{align*} &\text{The equations to calculate the angular velocity $\vec{\omega}$ for a given transformation matrix $S$ are: } \\\\ &\left[_B^I\dot{S}\right]=\left[\tilde{\vec{\omega}}_I\right]\left[_B^I S\right]\,\quad \Rightarrow \left[\tilde{\vec{\omega}}_I\right]=\left[_B^I\dot{S}\right]\left[_I^B S\right]\\ &\text{or}\\ &\left[_B^I\dot{S}\right]=\left[_B^I S\right]\left[\tilde{\vec{\omega}}_B\right]\,\quad \Rightarrow \left[\tilde{\vec{\omega}}_B\right]=\left[_I^B S\right]\left[_B^I\dot{S}\right]\\ &\text{with}\\ &\left[_B^I S\right]\,\left[_I^B S \right]= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}\\\\ &\text{$\left[_B^I{S}\right]$ Transformation matrix between B- System and I-System }\\ &\text{$\left[\vec{\omega}\right]_B$ Vector components B-System}\\ &\text{$\left[\vec{\omega}\right]_I$ Vector components I-System and}\\ &\tilde{\vec{\omega}}= \begin{bmatrix} 0 & -\omega_z &\omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \\ \end{bmatrix}\\\\ &\textbf{Example: Transformation matrix Euler angle}\\ &\left[_B^I{S}\right]=S_z(\psi)\,S_y(\vartheta)\,S_z(\varphi)\\ &S_z(\psi)=\left[ \begin {array}{ccc} \cos \left( \psi \right) &-\sin \left( \psi \right) &0\\ \sin \left( \psi \right) &\cos \left( \psi \right) &0\\ 0&0&1\end {array} \right]\\ &S_y(\vartheta)=\left[ \begin {array}{ccc} \cos \left( \vartheta \right) &0&\sin \left( \vartheta \right) \\ 0&1&0 \\ -\sin \left( \vartheta \right) &0&\cos \left( \vartheta \right) \end {array} \right]\\ &S_z(\varphi)=\left[ \begin {array}{ccc} \cos \left( \varphi \right) &-\sin \left( \varphi \right) &0\\ \sin \left( \varphi \right) &\cos \left( \varphi \right) &0\\ 0&0&1\end {array} \right]\\ &\Rightarrow\\ &\vec{\omega}_B= \left[ \begin {array}{ccc} 0&\sin \left( \varphi \right) &-\cos \left( \varphi \right) \sin \left( \vartheta \right) \\ 0&\cos \left( \varphi \right) &\sin \left( \varphi \right) \sin \left( 
\vartheta \right) \\ 1 &0&\cos \left( \vartheta \right) \end {array} \right] \begin{bmatrix} \dot{\varphi} \\ \dot{\vartheta}\\ \dot{\psi} \\ \end{bmatrix}\\ &\begin{bmatrix} \dot{\varphi} \\ \dot{\vartheta}\\ \dot{\psi} \\ \end{bmatrix}= \left[ \begin {array}{ccc} {\frac {\cos \left( \varphi \right) \cos \left( \vartheta \right) }{\sin \left( \vartheta \right) }}&-{ \frac {\sin \left( \varphi \right) \cos \left( \vartheta \right) }{ \sin \left( \vartheta \right) }}&1\\ \sin \left( \varphi \right) &\cos \left( \varphi \right) &0\\ -{\frac {\cos \left( \varphi \right) }{\sin \left( \vartheta \right) }}&{\frac {\sin \left( \varphi \right) }{\sin \left( \vartheta \right) }}&0\end {array} \right] \begin{bmatrix} \omega_x \\ \omega_y\\ \omega_z \\ \end{bmatrix}_B\\ &\vec{\omega}_I= \left[ \begin {array}{ccc} \cos \left( \psi \right) \sin \left( \vartheta \right) &-\sin \left( \psi \right) &0\\ \sin \left( \psi \right) \sin \left( \vartheta \right) &\cos \left( \psi \right) &0\\ \cos \left( \vartheta \right) &0& 1\end {array} \right] \begin{bmatrix} \dot{\varphi} \\ \dot{\vartheta}\\ \dot{\psi} \\ \end{bmatrix}\\ &\begin{bmatrix} \dot{\varphi} \\ \dot{\vartheta}\\ \dot{\psi} \\ \end{bmatrix}= \left[ \begin {array}{ccc} {\frac {\cos \left( \psi \right) }{\sin \left( \vartheta \right) }}&{\frac {\sin \left( \psi \right) }{\sin \left( \vartheta \right) }}&0\\ -\sin \left( \psi \right) &\cos \left( \psi \right) &0\\ -{\frac { \cos \left( \vartheta \right) \cos \left( \psi \right) }{\sin \left( \vartheta \right) }}&-{\frac {\cos \left( \vartheta \right) \sin \left( \psi \right) }{\sin \left( \vartheta \right) }}&1\end {array} \right] \begin{bmatrix} \omega_x \\ \omega_y\\ \omega_z \\ \end{bmatrix}_I \end{align*}
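As a consistency check, the matrix mapping $(\dot\varphi,\dot\vartheta,\dot\psi)$ to $\vec\omega_B$ and the stated inverse mapping really are inverses of each other; a numerical sketch of mine (the sample angles are arbitrary, with $\sin\vartheta\neq 0$):

```python
import math

phi, th = 0.6, 1.1   # arbitrary sample angles with sin(ϑ) ≠ 0
s, c = math.sin, math.cos

# matrix taking (dφ/dt, dϑ/dt, dψ/dt) to the body components of ω
M = [[0,  s(phi), -c(phi) * s(th)],
     [0,  c(phi),  s(phi) * s(th)],
     [1,  0,       c(th)]]

# the stated inverse mapping
Minv = [[ c(phi) * c(th) / s(th), -s(phi) * c(th) / s(th), 1],
        [ s(phi),                  c(phi),                 0],
        [-c(phi) / s(th),          s(phi) / s(th),         0]]

P = [[sum(Minv[i][k] * M[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
for row in P:
    print([round(x, 12) for x in row])   # identity matrix
```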
{ "language": "en", "url": "https://physics.stackexchange.com/questions/420695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Commutation relations for creation, annihilator operators $a_\mathbf{p}^\dagger, a_\mathbf{p}$ Write the field $\phi$ and momentum $\pi$ in terms of creation and annihilation operators $a_\mathbf{p}^\dagger, a_\mathbf{p}$ $$ \phi(\mathbf{x}) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2\omega_\mathbf{p}}} [a_\mathbf{p}e^{i\mathbf{p}\cdot\mathbf{x}} + a_\mathbf{p}^\dagger e^{-i\mathbf{p}\cdot\mathbf{x}}], $$ $$ \pi(\mathbf{x}) = \int \frac{d^3p}{(2\pi)^3} (-i) \sqrt{\frac{\omega_\mathbf{p}}{2}} [a_\mathbf{p}e^{i\mathbf{p}\cdot\mathbf{x}} - a_\mathbf{p}^\dagger e^{-i\mathbf{p}\cdot\mathbf{x}}]. $$ The goal is to show that $$ [a_\mathbf{p}, a_\mathbf{q}] = [a_\mathbf{p}^\dagger, a_\mathbf{q}^\dagger] = 0, $$ $$ [a_\mathbf{p}, a_\mathbf{q}^\dagger] = (2\pi)^3 \delta^{(3)}(\mathbf{p}-\mathbf{q}). $$ I have no luck in arriving at the commutation relations. Take inverse Fourier transform, $$ \tilde{\phi}(\mathbf{p}) = \int d^3x\ \phi(\mathbf{x}) e^{-i\mathbf{p}\cdot\mathbf{x}} = \frac{1}{\sqrt{2\omega_\mathbf{p}}} a_\mathbf{p} + \frac{1}{\sqrt{2\omega_\mathbf{-p}}} a_\mathbf{-p}^\dagger, $$ $$ \tilde{\pi}(\mathbf{p}) = \int d^3x\ \pi(\mathbf{x}) e^{-i\mathbf{p}\cdot\mathbf{x}} = (-i)\Bigg(\sqrt{\frac{\omega_\mathbf{p}}{2}} a_\mathbf{p} - \sqrt{\frac{\omega_\mathbf{-p}}{2}} a_\mathbf{-p}^\dagger\Bigg). $$ Then $$ a_\mathbf{p} = \frac{1}{2} \Bigg(\sqrt{2\omega_\mathbf{p}} \tilde{\phi}(\mathbf{p}) + i\sqrt{\frac{2}{\omega_\mathbf{p}}} \tilde{\pi}(\mathbf{p}) \Bigg), $$ $$ a_\mathbf{-p}^\dagger = \frac{1}{2} \Bigg(\sqrt{2\omega_\mathbf{p}} \tilde{\phi}(\mathbf{p}) - i\sqrt{\frac{2}{\omega_\mathbf{p}}} \tilde{\pi}(\mathbf{p}) \Bigg). 
$$ Using $[\phi(\mathbf{x}), \phi(\mathbf{y})] = [\pi(\mathbf{x}), \pi(\mathbf{y})] = 0$, $[\phi(\mathbf{x}), \pi(\mathbf{y})] = i\delta^{(3)}(\mathbf{x}-\mathbf{y})$, \begin{align} [a_\mathbf{p}, a_\mathbf{q}] &= \frac{1}{4} \int d^3x d^3y\ 2i\sqrt{\frac{\omega_\mathbf{p}}{\omega_\mathbf{q}}} [\phi(\mathbf{x}), \pi(\mathbf{y})] e^{-i\mathbf{x}\cdot\mathbf{p}} e^{-i\mathbf{y}\cdot\mathbf{q}} + 2i\sqrt{\frac{\omega_\mathbf{q}}{\omega_\mathbf{p}}} [\pi(\mathbf{x}), \phi(\mathbf{y})] e^{-i\mathbf{x}\cdot\mathbf{p}} e^{-i\mathbf{y}\cdot\mathbf{q}} \\ &= \frac{1}{4} \int d^3x d^3y\ 2i\sqrt{\frac{\omega_\mathbf{p}}{\omega_\mathbf{q}}} i\delta^{(3)}(\mathbf{x}-\mathbf{y}) e^{-i\mathbf{x}\cdot\mathbf{p}} e^{-i\mathbf{y}\cdot\mathbf{q}} + 2i\sqrt{\frac{\omega_\mathbf{q}}{\omega_\mathbf{p}}} (-i)\delta^{(3)}(\mathbf{y}-\mathbf{x}) e^{-i\mathbf{x}\cdot\mathbf{p}} e^{-i\mathbf{y}\cdot\mathbf{q}} \\ &= -\frac{i}{2} \int d^3x \Bigg(\sqrt{\frac{\omega_\mathbf{p}}{\omega_\mathbf{q}}} - \sqrt{\frac{\omega_\mathbf{q}}{\omega_\mathbf{p}}} \Bigg) e^{-i\mathbf{x}\cdot(\mathbf{p} + \mathbf{q})} \end{align} Why is this equal to $0$? It's not true that $\omega_\mathbf{p} = \omega_\mathbf{q}$, is it?
The only dependence on $\mathbf{x}$ that remains is in the exponent factor. Integrating it we get $\delta$-function, \begin{equation} \int d^3x\, e^{-i\mathbf{x}\cdot(\mathbf{p}+\mathbf{q})}=(2\pi)^3\delta^{(3)}(\mathbf{p}+\mathbf{q}) \end{equation} That means that we can replace $\omega_\mathbf{q}$ with $\omega_{-\mathbf{p}}=\omega_\mathbf{p}$. That results in cancellation.
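The cancellation can be checked directly: once the $\delta$-function sets $\mathbf{q} = -\mathbf{p}$, the bracket multiplying it vanishes because $\omega_\mathbf{p}$ depends only on $|\mathbf{p}|$. A minimal numeric illustration (the mass and momentum values are arbitrary):

```python
import math

m = 0.7                      # arbitrary mass
p = (0.3, -1.2, 0.5)         # arbitrary 3-momentum

def omega(px, py, pz):
    # relativistic dispersion: omega_p = sqrt(|p|^2 + m^2)
    return math.sqrt(px*px + py*py + pz*pz + m*m)

w_p = omega(*p)
w_mp = omega(-p[0], -p[1], -p[2])   # omega_{-p}
bracket = math.sqrt(w_p / w_mp) - math.sqrt(w_mp / w_p)
```

Since $\omega_{-\mathbf p} = \omega_{\mathbf p}$, the bracket is identically zero, and with it $[a_\mathbf{p}, a_\mathbf{q}]$.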
{ "language": "en", "url": "https://physics.stackexchange.com/questions/503383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do sharp edges of a metallic conductor have more charges than flat edges? Why do charges accumulate on sharp edges more than on flat edges?
Like charges on the surface of a conductor repel each other. Let us consider two identical charges $q$ in points A and B on the surface of a conductor in an area where the curvature of the surface is $R$, so OA=OB=R (see the picture). Let us assume that the length of arc AB is $l$, and $l\ll R$. Then angle $\alpha=l/R$. The distance between the charges AB=$\delta=R\cdot2\sin\frac{\alpha}{2}$. The Coulomb force between the charges is directed along AB, and its magnitude is $\frac{q^2}{\delta^2}$. However, the normal component of the Coulomb force cannot move the charges, so we need to calculate the tangential component of the Coulomb force. The angle between AB and the red dashed tangent to the curve is $\frac{\alpha}{2}$, so the tangential projection of the Coulomb force is $$\frac{q^2}{\delta^2}\cos\frac{\alpha}{2}=\frac{q^2}{R^2\cdot 4\sin^2\frac{\alpha}{2}}\cos\frac{\alpha}{2}=\frac{q^2}{\frac{l^2}{4 x^2}\cdot 4\sin^2 x}\cos x=\frac{q^2}{l^2}\frac{x^2\cos x}{\sin^2 x},$$ where $x=\frac{l}{2 R}\ll 1$. Let us use the following approximation: $$\frac{x^2\cos x}{\sin^2 x}\approx \frac{x^2(1-\frac{x^2}{2})}{(x-\frac{x^3}{6})^2}\approx\frac{x^2(1-\frac{x^2}{2})}{x^2-\frac{x^4}{3}}=\frac{1-\frac{x^2}{2}}{1-\frac{x^2}{3}}\approx(1-\frac{x^2}{2})(1+\frac{x^2}{3})\approx$$ $$\approx 1-\frac{x^2}{6}. $$Thus, if the arc length $l$ is constant and the radius of curvature decreases, $x$ increases, and the tangential component of the Coulomb force between the charges decreases. Therefore, one can expect that the charge density will be greater in the areas where the curvature is greater.
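The Taylor approximation at the end can be verified numerically; a small sketch comparing $f(x)=\frac{x^2\cos x}{\sin^2 x}$ with $1-\frac{x^2}{6}$ for a few small $x$ (recall $x = \frac{l}{2R}$, so a smaller radius of curvature means a larger $x$ and a smaller tangential force):

```python
import math

def f(x):
    # tangential-force factor from the answer: x^2 cos x / sin^2 x
    return x*x*math.cos(x) / math.sin(x)**2

xs = (0.2, 0.1, 0.05)
errs = [abs(f(x) - (1 - x*x/6)) for x in xs]   # error of the 1 - x^2/6 approximation
```

The error shrinks like $x^4$, and $f$ is monotonically decreasing in $x$, as the argument requires.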
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
4-momentum and an $y$ component of momentum I have 2 coordinate systems which move along $x,x'$ axis. I have derived a Lorentz transformation for an $x$ component of momentum, which is one part of an 4-momentum vector $p_\mu$. This is my derivation: $$ \scriptsize \begin{split} p_x &= mv_x \gamma(v_x)\\ p_x &= \frac{m (v_x'+u)}{\left(1+v_x' \frac{u}{c^2}\right) \sqrt{1 - \left(v_x' + u \right)^2 / c^2 \left( 1+ v_x' \frac{u}{c^2} \right)^2}} \\ p_x &= \frac{m (v_x'+u) \left( 1+ v_x' \frac{u}{c^2} \right)}{\left(1+v_x' \frac{u}{c^2}\right) \sqrt{\left[c^2 \left( 1+ v_x' \frac{u}{c^2} \right)^2 - \left(v_x' + u \right)^2 \right] / c^2 }} \\ p_x &= \frac{m (v_x'+u)}{\sqrt{\left[c^2 \left( 1+ v_x' \frac{u}{c^2} \right)^2 - \left(v_x' + u \right)^2 \right] / c^2 }} \\ p_x &= \frac{m (v_x'+u)}{\sqrt{\left[c^2 \left( 1+ 2 v_x' \frac{u}{c^2} + v_x'^2 \frac{u^2}{c^4} \right) - v_x'^2 - 2 v_x' u - u^2 \right] / c^2 }} \\ p_x &= \frac{mv_x'+mu}{\sqrt{\left[c^2 + 2 v_x'u + v_x'^2 \frac{u^2}{c^2} - v_x'^2 - 2 v_x' u - u^2 \right] / c^2 }} \\ p_x &= \frac{mv_x'+mu}{\sqrt{\left[c^2 + v_x'^2 \frac{u^2}{c^2} - v_x'^2 - u^2 \right] / c^2 }} \\ p_x &= \frac{mv_x'+mu}{\sqrt{1 + v_x'^2 \frac{u^2}{c^4} - \frac{v_x'^2}{c^2} - \frac{u^2}{c^2} }} \\ p_x &= \frac{mv_x'+mu}{\sqrt{\left(1 - \frac{u^2}{c^2}\right) \left(1-\frac{v_x'^2}{c^2} \right)}} \\ p_x &= \gamma \left[mv_x' \gamma(v_x') + mu \gamma(v_x') \right] \\ p_x &= \gamma \left[mv_x' \gamma(v_x') + \frac{mc^2 \gamma(v_x') u}{c^2} \right] \\ p_x &= \gamma \left[p_x' + \frac{W'}{c^2} u\right] \end{split} $$ I tried to derive Lorentz transformation for momentum also in $y$ direction, but i can't seem to get relation $p_y=p_y'$ because in the end i can't get rid of $2v_x'\frac{u}{c^2}$ and $\frac{v_y'^2}{c^2}$. Here is my attempt. 
$$ \scriptsize \begin{split} p_y &= m v_y \gamma(v_y)\\ p_y &= \frac{m v_y'}{\gamma \left(1 + v_x' \frac{u}{c^2}\right) \sqrt{1 - v_y'^2/c^2\left( 1 + v_x' \frac{u}{c^2} \right)^2}}\\ p_y &= \frac{m v_y' \left( 1 + v_x' \frac{u}{c^2} \right)}{\gamma \left(1 + v_x' \frac{u}{c^2}\right) \sqrt{\left[c^2\left( 1 + v_x' \frac{u}{c^2} \right)^2 - v_y'^2\right]/c^2}}\\ p_y &= \frac{m v_y'}{\gamma \sqrt{\left[c^2\left( 1 + v_x' \frac{u}{c^2} \right)^2 - v_y'^2\right]/c^2}}\\ p_y &= \frac{m v_y'}{\gamma \sqrt{\left[c^2\left( 1 + 2 v_x' \frac{u}{c^2} + v_x'^2 \frac{u^2}{c^4}\right) - v_y'^2\right]/c^2}}\\ p_y &= \frac{m v_y'}{\gamma \sqrt{\left[c^2 + 2 v_x' u + v_x'^2 \frac{u^2}{c^2} - v_y'^2\right]/c^2}}\\ p_y &= \frac{m v_y'}{\gamma \sqrt{1 + 2 v_x' \frac{u}{c^2} + v_x'^2 \frac{u^2}{c^4} - \frac{v_y'^2}{c^2}}}\\ \end{split} $$ This is where it ends for me; could someone point the way and show me how I can get $p_y = p_y'$? I know I am very close.
Well this is how the $p_y$ part of a four-momentum is put together. \begin{equation} \scriptsize \begin{split} p &= m v \gamma(v)\\ &\Downarrow\\ p_y &= m v_y \gamma(v) = m v_y \gamma \left( \sqrt{v_x^2 + v_y^2 + v_z^2}\right) = m v_y \gamma \left( \sqrt{v_x^2 + 0 + 0}\right) = m v_y \gamma(v_x) =\\ &= \frac{m v_y'}{\gamma \left(1 + v_x' \frac{u}{c^2}\right) \sqrt{1 - \frac{\left(v_x' + u\right)^2}{c^2 \left(1 + v_x' \frac{u}{c^2}\right)^2}}} = \frac{mv_y'}{\gamma \left(1 + v_x' \frac{u}{c^2}\right) \sqrt{\frac{c^2 \left(1 + v_x' \frac{u}{c^2}\right)^2 - \left(v_x' + u\right)^2}{c^2 \left(1 + v_x' \frac{u}{c^2}\right)^2}}}=\\ &= \frac{mv_y' \left(1 + v_x' \frac{u}{c^2}\right)}{\gamma \left(1 + v_x' \frac{u}{c^2}\right) \sqrt{\left[c^2 \left(1 + v_x' \frac{u}{c^2}\right)^2 - \left(v_x' + u\right)^2\right] / c^2}} = \frac{mv_y'}{\gamma \sqrt{\left[c^2 \left(1 + v_x' \frac{u}{c^2}\right)^2 - \left(v_x' + u\right)^2\right] / c^2}}=\\ &= \frac{mv_y'}{\gamma \sqrt{\left[c^2 \left(1 + 2 v_x' \frac{u}{c^2} + {v_x'}^2 \frac{u^2}{c^4}\right) - {v_x'}^2 - 2 {v_x'}u - u^2\right] / c^2}}=\\ & = \frac{mv_y'}{\gamma \sqrt{\left[c^2 + 2 v_x' u + {v_x'}^2 \frac{u^2}{c^2} - {v_x'}^2 - 2 {v_x'}u - u^2\right] / c^2}}= \frac{mv_y'}{\gamma \sqrt{\left[c^2 + {v_x'}^2 \frac{u^2}{c^2} - {v_x'}^2 - u^2\right] / c^2}}=\\ & = \frac{mv_y'}{\gamma \sqrt{1 + {v_x'}^2 \frac{u^2}{c^4} - \frac{{v_x'}^2}{c^2} - \frac{u^2}{c^2}}}= \frac{mv_y'}{\gamma \sqrt{\left(1 - \frac{u^2}{c^2}\right) \left(1-\frac{{v_x'}^2}{c^2}\right)}}= mv_y' \gamma(v_x')\\ \end{split} \end{equation} In our case $v_x' = v'$ and we can modify the last part of this big equation so that we get: \begin{equation} \scriptsize \begin{split} p_y= mv_y' \gamma(v')\\ \end{split} \end{equation} Now we can see the Lorentz tr. and reverse Lorentz tr., which are: \begin{equation} \scriptsize \begin{split} &\boxed{p_y=p_y'} ~~~\boxed{p_y'=p_y}\\ \end{split} \end{equation}
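A numerical cross-check of the boxed result (a sketch; $u$ and the primed velocity components are arbitrary, with $c=1$): compute $v_x, v_y$ from relativistic velocity addition, form $p_y = m v_y\gamma(v)$ using the full speed $v$, and compare with $m v_y'\gamma(v')$:

```python
import math

c = 1.0
m = 1.0
u = 0.5                 # frame velocity along x (arbitrary)
vxp, vyp = 0.3, 0.4     # particle velocity components in the primed frame (arbitrary)

def gamma(v):
    return 1.0 / math.sqrt(1.0 - v*v / c**2)

# relativistic velocity addition for a boost along x
denom = 1.0 + vxp * u / c**2
vx = (vxp + u) / denom
vy = vyp / (gamma(u) * denom)

v = math.hypot(vx, vy)          # full speed in the unprimed frame
vp = math.hypot(vxp, vyp)       # full speed in the primed frame

py = m * vy * gamma(v)          # p_y = m v_y gamma(v)
pyp = m * vyp * gamma(vp)       # p_y' = m v_y' gamma(v')
```

The two agree even when both $v_x'$ and $v_y'$ are nonzero; this works because $\gamma(v) = \gamma(u)\,\gamma(v')\left(1 + v_x' \frac{u}{c^2}\right)$.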
{ "language": "en", "url": "https://physics.stackexchange.com/questions/45811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to calculate the intensity of the interference of two waves in a given point? There are two different point sources which produce spherical waves with the same power, amplitude, ω, wavenumber and phase. I can calculate the intensity of each wave in a point: $$ I_1 = P / (4 \pi r_1^2) $$ $$ I_2 = P / (4 \pi r_2^2) $$ But how can I calculate the intensity of the resulting wave in that point?
I found the solution, but forgot to add it here. But since a considerable number of people has seen the question, the answer could be useful. Notation * *$r_1$: distance between the given point and the focus 1. *$r_2$: distance between the given point and the focus 2. *$A_1$: amplitude in the given point of the wave generated by focus 1. *$A_2$: amplitude in the given point of the wave generated by focus 2. *$A_r$: amplitude in the given point of the resulting wave. *$I_1$: intensity in the given point of the wave generated by focus 1. *$I_2$: intensity in the given point of the wave generated by focus 2. *$I_r$: intensity in the given point of the resulting wave. *$P$: power of the initial waves. *$\varphi$: phase difference between the initial waves in the given point. *$k$: wavenumber of the initial and resulting waves. *$\omega$: angular frequency of the initial and resulting waves. *$v$ : velocity of the initial and resulting waves. Preface I will use the formula $$ I = \frac{1}{2}\rho v \omega^2 A^2 $$ The resulting wave will be a wave of amplitude $A_r$ with the same $k$ and $\omega$ than the initial waves. Moreover, $\rho$ and $v$ will also be the same because they only depend on the environment. Then, $$ \begin{cases} I_r = \frac{1}{2}\rho v \omega^2 A_r^2 \\ I_2 = \frac{1}{2}\rho v \omega^2 A_2^2 \\ \end{cases} \implies I_r = I_2 \left(\frac{A_r}{A_2}\right)^2 = \frac{P}{4\pi r_2^2} \left(\frac{A_r}{A_2}\right)^2 $$ In order to express $I_r$ in terms of $r_1$ and $r_2$ instead of $A_1$ and $A_2$, I will use that the amplitude of an spherical wave is inversely proportional to the distance to the focus. 
That is: $$ \frac{A_1}{A_2} = \frac{r_2}{r_1} $$ Answer In a point with destructive interference ($\varphi = \pi$) The resulting amplitude will be the difference of amplitudes: $$ A_r = |A_1 - A_2| $$ Then, the resulting intensity is $$ I_r = \frac{P}{4\pi r_2^2} \left(\frac{A_r}{A_2}\right)^2 = \frac{P}{4\pi r_2^2} \left(\frac{A_1 - A_2}{A_2}\right)^2 = \frac{P}{4\pi r_2^2} \left(\frac{A_1}{A_2}-1\right)^2 = \frac{P}{4\pi r_2^2} \left(\frac{r_2}{r_1}-1\right)^2 = \frac{P}{4\pi} \left(\frac{r_2-r_1}{r_1 r_2}\right)^2 $$ In a point with constructive interference ($\varphi = 0$) The resulting amplitude will be the sum of amplitudes: $$ A_r = A_1 + A_2 $$ Then, the resulting intensity is $$ I_r = \frac{P}{4\pi r_2^2} \left(\frac{A_r}{A_2}\right)^2 = \frac{P}{4\pi r_2^2} \left(\frac{A_1 + A_2}{A_2}\right)^2 = \frac{P}{4\pi r_2^2} \left(\frac{A_1}{A_2}+1\right)^2 = \frac{P}{4\pi r_2^2} \left(\frac{r_2}{r_1}+1\right)^2 = \frac{P}{4\pi} \left(\frac{r_1+r_2}{r_1 r_2}\right)^2 $$ In general The resulting amplitude will be ($\varphi$ is the phase difference): $$ A_r = \sqrt{A_1^2 + A_2^2 + 2 A_1 A_2 \cos{\varphi}} $$ Then, the resulting intensity is $$ I_r = \frac{P}{4\pi r_2^2} \left(\frac{A_r}{A_2}\right)^2 = \frac{P}{4\pi r_2^2} \frac{A_1^2 + A_2^2 + 2 A_1 A_2 \cos{\varphi}}{A_2^2} = \frac{P}{4\pi r_2^2} \left(\left(\frac{A_1}{A_2}\right)^2 + 1 + 2 \frac{A_1}{A_2} \cos{\varphi}\right) = \frac{P}{4\pi r_2^2} \left(\left(\frac{r_2}{r_1}\right)^2 + 1 + 2 \frac{r_2}{r_1} \cos{\varphi}\right) = \frac{P}{4\pi} \frac{r_1^2 + r_2^2 + 2 r_1 r_2 \cos{\varphi}}{(r_1 r_2)^2} $$
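The general formula can be sanity-checked against a direct phasor sum: a sketch where each wave is a complex amplitude $\propto e^{i\,\text{phase}}/r_i$ and the intensity is proportional to the squared modulus (the values of $P$, $r_1$, $r_2$ are arbitrary):

```python
import cmath
import math

P = 10.0
r1, r2 = 2.0, 3.0   # arbitrary distances to the two sources

def I_general(phi):
    # the final formula of the answer
    return P / (4*math.pi) * (r1**2 + r2**2 + 2*r1*r2*math.cos(phi)) / (r1*r2)**2

def I_phasor(phi):
    # direct superposition of two spherical-wave phasors, amplitude ~ 1/r
    amp = 1.0/r1 + cmath.exp(1j*phi)/r2
    return P / (4*math.pi) * abs(amp)**2

I_con = P / (4*math.pi) * ((r1 + r2) / (r1*r2))**2   # constructive, phi = 0
I_des = P / (4*math.pi) * ((r2 - r1) / (r1*r2))**2   # destructive, phi = pi
```

The general expression reproduces the constructive and destructive special cases and matches the phasor computation at intermediate phase differences.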
{ "language": "en", "url": "https://physics.stackexchange.com/questions/54556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
George Green's derivation of Poisson's equation I was reading George Green's An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, and I got confused on one step in his derivation of Poisson's Equation. Specifically, how does Green conclude that: $$\delta\left(2\pi a^2\varrho-\frac{2}{3}\pi b^2\varrho\right)=-4\pi\varrho.$$ Here are two pages to provide context; I understand everything except for the equality above.
Let's first derive the value of $V$ inside the small sphere: $$ V_\text{sphe} = \rho\int\frac{\text{d}x'\text{d}y'\text{d}z'}{r'}, $$ Where the sphere is sufficiently small such that $\rho$ can be considered constant. We can orientate the axes such that $p$ lies on the $z'$ axis. In spherical coordinates, the integral then has the form $$ \begin{align} V_\text{sphe} &= \rho\int_0^a\text{d}r'\int_0^{2\pi}\text{d}\varphi \int_0^{\pi}\frac{r'^2\sin\theta}{\sqrt{r'^2 + b^2 - 2r'b\cos\theta}}\text{d}\theta\\ &=\frac{2\pi}{b}\rho\int_0^a r' \left(\sqrt{r'^2+b^2+2r'b}-\sqrt{r'^2+b^2-2r'b}\right)\text{d}r'\\ &= \frac{2\pi}{b}\rho\int_0^a r'(r'+b - |r'-b|)\text{d}r'\\ &=\frac{4\pi}{b}\rho\left[\int_0^b r'^2\text{d}r' + \int_b^a r'b\,\text{d}r'\right]\\ &= \frac{4\pi}{3}\rho b^2 + 2\pi\rho a^2 - 2\pi\rho b^2 = 2\pi\rho a^2 - \frac{2\pi}{3}\rho b^2. \end{align} $$ Since $$ b^2 = (x-x_l)^2 + (y-y_l)^2 + (z-z_l)^2, $$ we get $$ \begin{align} \frac{\partial b^2}{\partial x} &= 2(x-x_l),\qquad \frac{\partial^2 b^2}{\partial x^2} = 2 = \frac{\partial^2 b^2}{\partial y^2} = \frac{\partial^2 b^2}{\partial z^2} \end{align} $$ so that $$ \delta b^2 = \frac{\partial^2 b^2}{\partial x^2} + \frac{\partial^2 b^2}{\partial y^2} + \frac{\partial^2 b^2}{\partial z^2} = 6 $$ and $\delta a^2 = 0$ since $a$ is a constant. Therefore, $$ \delta V = \delta V_\text{sphe} = -\frac{2\pi}{3}\rho (\delta b^2) + \left(2\pi a^2 - \frac{2\pi}{3} b^2\right)\delta\rho = -4\pi\rho. $$ The term with $\delta\rho$ disappears since $a$ and $b$ are exceedingly small.
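The result $V_\text{sphe} = 2\pi\rho a^2 - \frac{2\pi}{3}\rho b^2$ can be checked by brute-force numerical integration of the double integral over $r'$ and $\theta$ (a sketch with arbitrary values $\rho = 1$, $a = 1$, $b = 0.5$; a midpoint rule is used since the integrand, while bounded, has an integrable kink near $r' = b$, $\theta = 0$):

```python
import numpy as np

rho, a, b = 1.0, 1.0, 0.5   # arbitrary: density, sphere radius, distance of p from centre
n = 1200

# midpoint grids in r' and theta (the phi integral just contributes 2*pi)
r = (np.arange(n) + 0.5) * (a / n)
th = (np.arange(n) + 0.5) * (np.pi / n)
R, T = np.meshgrid(r, th, indexing="ij")

integrand = R**2 * np.sin(T) / np.sqrt(R**2 + b**2 - 2.0*R*b*np.cos(T))
V_num = 2.0*np.pi * rho * integrand.sum() * (a/n) * (np.pi/n)

V_exact = 2.0*np.pi*rho*a**2 - (2.0*np.pi/3.0)*rho*b**2
```

The numeric value lands on the closed form to a few parts in a thousand, confirming the angular and radial integrations above.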
{ "language": "en", "url": "https://physics.stackexchange.com/questions/217835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Approximating an expression for a potential In a problem which I was doing, I came across an expression for the potential $V$ of a system as follows $$V = k\left(\frac{1}{l - x} + \frac{1}{l + x}\right)\tag{1}\label{1}$$ where $k$ is a constant, $l$ and $x$ are distances and $l \gg x$. Now I went to find an approximate expression $$V = k\left(\frac{(l + x) + (l - x)}{l^2 - x^2}\right)$$ and reasoning that since $l \gg x$, $l^2 - x^2 \approx l^2$ and thus $$V \approx \frac{2k}{l} \tag{2}\label{2}$$ but this turns out to be wrong as the potential is expected to for an harmonic oscillator and thus propotional to $x^2$. The right way to approximate is \begin{align} V & = \frac{k}{l}\left(\frac{1}{1 - x / l} + \frac{1}{1 + x / l}\right) \\ & \approx \frac{k}{l}\left(\left(1 + \frac{x}{l} + \frac{x^2}{l^2}\right) + \left(1 - \frac{x}{l} + \frac{x^2}{l^2}\right)\right) \\ & \approx \frac{k}{l}\left(2 + \frac{2x^2}{l^2}\right) \end{align} Ignoring the constant $2$ as I'm concerned about the differences in the potential, I get $$ V \approx \frac{2kx^2}{l^3} \label{3}\tag{3}$$ which is correct. What mistake did I do in my approximation method?
We have $$ x^2 \ll l^2 \implies \frac{2x^2}{l^2} \ll 2 \implies \frac{2x^2}{l^2} + 2 \approx 2 $$ So, $$ V \approx \frac{k}{l} \bigg(2 + \frac{2x^2}{l^2}\bigg) \approx 2\frac{k}{l} $$ In other words, your approximation was not a mistake; it simply stops at leading order. Replacing $l^2 - x^2$ by $l^2$ discards exactly the $x^2$-dependent piece of the potential, so only the constant term survives. To see the harmonic term you must keep the expansion to second order in $x/l$, as in your second calculation.
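Numerically, the difference between the exact potential and $2k/l$ is indeed the harmonic piece $2kx^2/l^3$; a quick check (arbitrary $k = l = 1$):

```python
k = 1.0
l = 1.0

def V(x):
    # exact potential: k (1/(l-x) + 1/(l+x)) = 2 k l / (l^2 - x^2)
    return k * (1.0/(l - x) + 1.0/(l + x))

ratios = []
for x in (1e-2, 1e-3, 1e-4):
    correction = V(x) - 2.0*k/l          # what the leading-order approximation discards
    predicted = 2.0*k*x*x / l**3         # harmonic term from the 2nd-order expansion
    ratios.append(correction / predicted)
```

The ratio tends to 1 as $x/l \to 0$: the constant $2k/l$ is the zeroth-order value, and the oscillator term only appears at second order.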
{ "language": "en", "url": "https://physics.stackexchange.com/questions/495163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Minimum of the free energy in Landau-Ginzburg theory I'm reading David Tong's lectures and this page: I understand how to get the solution $m_0$ when $m$ has no spatial dependence. But I do not understand how one can find the solution when $m = m(x)$ $m = m_0\tanh(\sqrt{\frac{-a}{2c}} x)$ In particular, where does tanh come from? I'd appreciate your help. Thank you.
You need to solve the differential equation: $$ \frac{d^2m}{d x^2} = \frac{am}{c} + \frac{2b m^3}{c} $$ We multiply this equation by $\frac{dm}{dx}$ (if $\frac{dm}{dx}=0$ we obtain the 2 vacuum solutions): $$ \frac{dm}{dx}\frac{d^2m}{d x^2} = \frac{dm}{dx}\frac{am}{c} + \frac{dm}{dx}\frac{2b m^3}{c} $$ $$ \frac{1}{2}\frac{d}{dx}\left(\frac{dm}{dx}\right)^2 = \frac{1}{2}\frac{d}{dx}\frac{am^2}{c} + \frac{1}{2}\frac{d}{dx}\frac{bm^4}{c} $$ $$ \left(\frac{dm}{dx}\right)^2 = \frac{am^2}{c} + \frac{bm^4}{c} + C $$ The integration constant $C$ must not be dropped: for the domain wall we want $m\to\pm m_0$ and $\frac{dm}{dx}\to 0$ as $x\to\pm\infty$, where $m_0^2 = -\frac{a}{2b}$. This fixes $$ C = -\frac{am_0^2}{c} - \frac{bm_0^4}{c} = \frac{a^2}{4bc} $$ With this value the right-hand side becomes a perfect square: $$ \left(\frac{dm}{dx}\right)^2 = \frac{b}{c}\left(m^2 + \frac{a}{2b}\right)^2 = \frac{b}{c}\left(m_0^2 - m^2\right)^2 $$ so $$ \frac{dm}{m_0^2 - m^2} = \sqrt{\frac{b}{c}}\, dx $$ Integrating the left side gives an inverse hyperbolic tangent: $$ \frac{1}{m_0}\operatorname{artanh}\frac{m}{m_0} = \sqrt{\frac{b}{c}}\, x + \text{const} $$ Choosing the wall centred at $x=0$ and using $m_0\sqrt{\frac{b}{c}} = \sqrt{\frac{-a}{2c}}$, $$ m(x) = m_0 \tanh\left(\sqrt{\frac{-a}{2c}}\, x\right) $$ That is where the $\tanh$ comes from: the first integral with the correct boundary conditions is a perfect square, and $\int \frac{dm}{m_0^2-m^2}$ is an $\operatorname{artanh}$.
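One can check numerically that the quoted profile $m(x) = m_0\tanh\left(\sqrt{-a/2c}\,x\right)$ with $m_0^2 = -a/2b$ really solves $m'' = \frac{am}{c} + \frac{2bm^3}{c}$; a sketch using finite differences (parameter values arbitrary, with $a<0$):

```python
import math

a, b, c = -1.0, 1.0, 1.0            # arbitrary Landau coefficients with a < 0
m0 = math.sqrt(-a / (2.0*b))        # vacuum value
kk = math.sqrt(-a / (2.0*c))        # inverse wall width

def m(x):
    return m0 * math.tanh(kk*x)

h = 1e-4
resid = []
for x in (-1.5, -0.2, 0.0, 0.3, 2.0):
    m2 = (m(x + h) - 2.0*m(x) + m(x - h)) / h**2   # finite-difference m''
    rhs = a*m(x)/c + 2.0*b*m(x)**3/c
    resid.append(abs(m2 - rhs))
```

The residual vanishes to finite-difference accuracy at every sample point, so the tanh domain wall is indeed a solution.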
{ "language": "en", "url": "https://physics.stackexchange.com/questions/528152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Parameterizing a universe with non-zero curvature, some matter, no dark matter, and no dark energy Defining $\Omega_i$ by $\rho_i (t_0) = \Omega_i \rho_{c_0}$, we can obtain the below equality. $$H^2 = H_0^2 \left(\frac{\Omega_r}{a^4} + \frac{\Omega_m}{a^3}+\frac{\Omega_k}{a^2} + \Omega_\Lambda\right)$$ What is the meaning of the $\Omega$ parameters? What do they sum up to? Each of the parameters is used as a coefficient to represent one of the partial densities in terms of the critical density of the universe. The sum is equal to $1$, which implies that we live in a flat universe. The density parameter $\Omega$ can be defined as the ratio of actual or observed density $\rho$ to the critical density $\rho_c$ of Friedmann's universe. Let's set $\Omega_r=\Omega_\Lambda=0$, so we have non-zero curvature and some matter. Show that when $\Omega_k<0$ the solutions can be written in parametric form as $$ \begin{align} t (\theta) &= A \left(\sinh\theta - \theta\right) \\ a (\theta) &= B \left(\cosh \theta - 1 \right) \end{align} $$ I understand that using Friedmann's equations, $$ H^2 = \left( \frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3} $$ might be helpful. I derived that for a normal Einstein-de Sitter universe $$\frac{\dot{a}^2}{a^2} = \frac{8\pi G\rho_0}{3a^3} \implies \dot{a}^2a = \frac{8\pi G \rho_0}{3} \implies \text{, with $\dot{a} = \frac{da}{dt}$, } \int \sqrt{a}\, da = \int \sqrt{\frac{8\pi G \rho_0}{3}}\, dt \implies \frac{2a^{\frac{3}{2}}}{3} = 2t \sqrt{\frac{2\pi G \rho_0}{3}} \implies a(t) = t^\frac{2}{3} \sqrt[3]{6 \pi G \rho_0} \iff a(t) \propto t^{\frac{2}{3}} \implies H_0 = \frac{\dot{a}}{a}\Big|_{t_0} = \frac{2t_0^{-\frac{1}{3}} }{3t_0^{\frac{2}{3}}} = \frac{2}{3t_0} \implies t_0 = \frac{2}{3H_0}$$ However, I don't understand how to parameterize the time and scale factor themselves as functions of $\theta$.
The Friedmann equation for these models can be written $$ \dot{a}^2 = H_0^2(\frac{\Omega_{m}}{a} + 1 - \Omega_{m}) $$ For a universe with both matter and nonzero curvature, we have $$ \frac{\dot{a}^2}{a^2} = H_0^2(\Omega_m a^{-3} + \Omega_k a^{-2}) \implies (\frac{da}{dt})^2 = H_0^2(\Omega_m a^{-1} + \Omega_k) $$ Therefore, $$ H_0dt = \frac{da}{\sqrt{\Omega_ma^{-1} + \Omega_k}} = \frac{1}{\sqrt{\Omega_m}} \frac{a^{\frac{1}{2}}da}{\sqrt{1+a(\frac{\Omega_k}{\Omega_m})} } $$ It turns out that it is easier to first solve for the conformal time $d\eta = \frac{dt}{a}$. We have $$ \eta = \int d \eta = \int \frac{dt}{a} = \frac{1}{H_0\sqrt{\Omega_m}} \int \frac{a^{-\frac{1}{2}} da}{\sqrt{1+a(\frac{\Omega_k}{\Omega_m})}} $$ Here begins our solution for positive curvature, i.e. $k > 0$ and therefore $\Omega_k = \frac{-k}{H_0^2} < 0$. Then, let $u^2 = \frac{-\Omega_k}{\Omega_m} a$, so that $u = \sqrt{\frac{-\Omega_k}{\Omega_m}}a^{\frac{1}{2}}$ and $du = \frac{1}{2}\sqrt{\frac{-\Omega_k}{\Omega_m}}a^{-\frac{1}{2}}\, da$.
We have $$ \eta = \frac{2}{H_0\sqrt{\Omega_m}}\sqrt{\frac{\Omega_m}{-\Omega_k}} \int \frac{du}{\sqrt{1 - u^2}} = \frac{2}{H_0\sqrt{-\Omega_k}} \sin^{-1} u $$ Inverting $$ u = \sin\frac{\theta}{2}, \; \theta = H_0 \eta \sqrt{-\Omega_k} $$ under the same change of variables $a \to u$, $u^2 du = \frac{1}{2} \left(\frac{-\Omega_k}{\Omega_m}\right)^\frac{3}{2} a^\frac{1}{2} da$, the equation for $t$ becomes $$ \frac{1}{H_0\sqrt{\Omega_m}} \int \frac{a^\frac{1}{2} da}{\sqrt{1+a(\frac{\Omega_k}{\Omega_m})}} = \frac{2}{H_0\sqrt{\Omega_m}} (\frac{\Omega_m}{-\Omega_k})^\frac{3}{2} \int \frac{u^2du}{\sqrt{1 - u^2}} $$ Now, changing $u = \sin \frac{\theta}{2}, du = \frac{1}{2} \cos \frac{\theta}{2} d\theta$, we have $$ t = \frac{2 \Omega_m}{H_0(-\Omega_k)^\frac{3}{2}} \int \frac{\sin^2 \frac{\theta}{2} \cos \frac{\theta}{2} d \theta}{2\sqrt{1 - \sin^2 \frac{\theta}{2}}} = \frac{\Omega_m}{H_0(-\Omega_k)^\frac{3}{2}} \int \sin^2 \frac{\theta}{2} d\theta $$ Now, using $\cos \theta = \cos^2 \frac{\theta}{2} - \sin^2 \frac{\theta}{2} = 1 - 2\sin^2 \frac{\theta}{2}$, we find $$ t = \frac{\Omega_m}{2H_0(-\Omega_k)^\frac{3}{2}} \int [1 - \cos \theta] d\theta = \frac{\Omega_m}{2H_0(-\Omega_k)^\frac{3}{2}}(\theta - \sin \theta) $$ Finally, recall that $a = -\frac{\Omega_m}{\Omega_k}u^2 = -\frac{\Omega_m}{\Omega_k}\sin^2 \frac{\theta}{2}$ so that we have a parametric solution for a cycloid, with $\theta = H_0 \sqrt{-\Omega_k}\eta$, $$ t(\theta) = \frac{\Omega_m}{2H_0 (-\Omega_k)^\frac{3}{2}}(\theta - \sin \theta), a(\theta) = \frac{\Omega_m}{2(-\Omega_k)}(1 - \cos \theta ), \textbf{ for positive curvature.} $$ Similarly, using $\theta$ as a dummy variable to parameterize a negatively curved universe, we can make a substitution $Q$, where $Q = \frac{\sinh^2\frac{\theta}{2}}{a} = \frac{1 - \Omega_{m}}{\Omega_{m}}$, to obtain: $$ t(\theta) = \frac{\Omega_{m}}{2H_0\Omega_k^\frac{3}{2}}(\sinh \theta - \theta), a(\theta) = \frac{\Omega_{m}}{2\Omega_k}(\cosh \theta - 1), \textbf{ for
negative curvature.} $$ Take solutions from previous exercise and use perturbative expansion when $|\Omega_k|\ll 1$. How does the age of the universe vary with $H_0$ and $\Omega_k$, when $\Omega_k$ is small? For the universe with a negative curvature, the time equation is as follows, knowing that $\Omega_m + \Omega_k = 1$: $$ t(\theta) = \frac{1-\Omega_k}{2H_0\Omega_{k}^\frac{3}{2}}(\sinh \theta - \theta) $$ Since $\Omega_k = \frac{-k}{H_0^2}$, $\Omega_k < 0$ corresponds to a closed universe, where $k > 0$, which reaches a maximum turnaround scale factor $a_{ta} = \frac{\Omega_m}{-\Omega_k}$ at time $t_{ta}$, so $H_0 t_{ta} = \frac{\pi}{2}\frac{\Omega_m}{(-\Omega_k)^{\frac{3}{2}}}$. In similar fashion, for the universe with a positive curvature, the time equation is as follows, again knowing that $\Omega_m + \Omega_k = 1$: $$ t(\theta) = \frac{1 - \Omega_k}{2H_0 (-\Omega_k)^\frac{3}{2}}(\theta - \sin \theta) $$ In both cases, as $\Omega_k \to 0$, both $t_{ta}, a_{ta} \to \infty$ and the solution approaches that of a flat universe without turnaround, i.e. $a(t) = (\frac{3}{2} \sqrt{\Omega_m} H_0 t)^{\frac{2}{3}}$, an asymptotically flat universe with an exceedingly large age. $\textbf{Correction}$: As $\Omega_k \to 0$, the universe becomes primarily matter-dominated, implying that it asymptotically reaches a flat universe with the solution to elapsed time given by that of the Einstein-de Sitter model of the universe: $t = \frac{2}{3H_0}$.
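The closed-universe parametric solution can be verified against the Friedmann equation $\dot a^2 = H_0^2(\Omega_m/a + \Omega_k)$ directly, since $\dot a = (da/d\theta)/(dt/d\theta)$. A sketch (arbitrary values $\Omega_m = 1.3$, $\Omega_k = -0.3$, $H_0 = 1$, so $\Omega_m + \Omega_k = 1$; the amplitudes used are $A = \frac{\Omega_m}{2(-\Omega_k)}$ for $a$ and $B = \frac{\Omega_m}{2H_0(-\Omega_k)^{3/2}}$ for $t$):

```python
import math

H0 = 1.0
Om, Ok = 1.3, -0.3                      # closed universe: Omega_k < 0, Om + Ok = 1
A = Om / (2.0 * (-Ok))                  # amplitude of a(theta)
B = Om / (2.0 * H0 * (-Ok)**1.5)        # amplitude of t(theta)

resid = []
for theta in (0.5, 1.0, 2.0, 3.0):
    a = A * (1.0 - math.cos(theta))
    # chain rule: adot = (da/dtheta) / (dt/dtheta)
    adot = A * math.sin(theta) / (B * (1.0 - math.cos(theta)))
    lhs = adot**2
    rhs = H0**2 * (Om/a + Ok)
    resid.append(abs(lhs - rhs))
```

Both sides agree to machine precision at every sampled $\theta$, so the cycloid really solves the matter-plus-curvature Friedmann equation.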
{ "language": "en", "url": "https://physics.stackexchange.com/questions/652741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evolution of Euler's angles in time The general motion of a rigid body over time can be determined in the body frame by solving Euler's equations, selecting the principal axes for the body axes. Also, Euler's angles can be used to transform from the rotating body frame to an inertial space frame. But, in my physics mechanics textbooks and on this site, I have found no discussion of evaluating the motion over time in the space frame. I did find the motion over time in the space frame addressed on page 10 of the following reference:http://dma.ing.uniroma1.it/users/lss_da/MATERIALE/Textbook.pdf. (This reference was provided by @JAlex in response to the question How do the inertia tensor varies when a rigid body rotates in space? on this exchange.) Is the approach on page 10 of the reference the standard way to determine the motion in time in the space frame? Is the motion in time in the space frame important? If so, why is it not addressed in many standard physics mechanics textbooks that address the general motion of a rigid body? Perhaps quaternions can be used as mentioned in a response by @John Alexiou to Euler's Angles and Uniquely Defining the Orientation of a Rigid Body on this exchange?
Euler Equation \begin{align*} &\mathbf{I}\,{\dot{\omega}}+\mathbf\omega\times \left(\mathbf{I}\,\mathbf\omega\right)=\mathbf\tau\tag 1 \end{align*} and the kinematic equation \begin{align*} &\mathbf{\dot{\phi}}=\mathbf{A}(\mathbf{\phi})\,\mathbf{\omega}\tag 2 \end{align*} all vector components and the inertia tensor must be given either in the body system or in the inertial system; the inertia tensor in the inertial system: \begin{align*} &\mathbf{I}_I=\mathbf{R}\,\mathbf{I}_B\,\mathbf{R}^T \end{align*} equation (2) in inertial system: with \begin{align*} & \left[ \begin {array}{ccc} 0&-\omega_{{z}}&\omega_{{y}} \\ \omega_{{z}}&0&-\omega_{{x}}\\ -\omega_{{y}}&\omega_{{x}}&0\end {array} \right]_B=\mathbf{R}^T\,\mathbf{\dot{R}}\quad\Rightarrow\quad \mathbf{\omega}_B=\mathbf{J}_R(\mathbf{\phi})\,\mathbf{\dot{\phi}}_B~\Rightarrow~ \mathbf{\dot{\phi}}_B=\underbrace{\mathbf{J}_R^{-1}}_{\mathbf{A}}\,\mathbf{\omega}_B \end{align*} \begin{align*} &\mathbf{\dot{\phi}}_I=\underbrace{\mathbf{R}\,\mathbf{A}\mathbf{R^T\,}}_{\mathbf{A}(\mathbf{\phi})}\mathbf{\omega}_I \end{align*} the rotation matrix $~\mathbf{R}~$ can be built up from three rotation matrices, for example \begin{align*} \mathbf{R}&=\mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z\\ &= \left[ \begin {array}{ccc} 1&0&0\\ 0&\cos \left( \phi_{{x}} \right) &-\sin \left( \phi_{{x}} \right) \\ 0&\sin \left( \phi_{{x}} \right) &\cos \left( \phi_{{x}} \right) \end {array} \right] \, \left[ \begin {array}{ccc} \cos \left( \phi_{{y}} \right) &0&\sin \left( \phi_{{y}} \right) \\ 0&1&0\\ -\sin \left( \phi_{{y}} \right) &0&\cos \left( \phi_{{y}} \right) \end {array} \right] \, \left[ \begin {array}{ccc} \cos \left( \phi_{{z}} \right) &-\sin \left( \phi_{{z}} \right) &0\\ \sin \left( \phi_{{z}} \right) &\cos \left( \phi_{{z}} \right) &0\\ 0&0&1\end {array} \right]\tag 3 \end{align*} * *$~\mathbf I~$ inertia tensor *$~\mathbf\omega~$ angular velocity vector *$~\mathbf\tau~$ external torque vector *$~\mathbf{\dot{\phi}}~$ Euler angle rate vector
*$~\mathbf R~$ rotation matrix between B-system and I-system *$~I~$ inertial system *$~B~$ body system with equation (3) \begin{align*} & \mathbf{\omega}_B=\mathbf{J}_R(\mathbf{\phi})\,\mathbf{\dot{\phi}}_B\quad, \mathbf{J}_R=\left[ \begin {array}{ccc} \cos \left( \phi_{{y}} \right) \cos \left( \phi_{{z}} \right) &\sin \left( \phi_{{z}} \right) &0 \\ -\cos \left( \phi_{{y}} \right) \sin \left( \phi_ {{z}} \right) &\cos \left( \phi_{{z}} \right) &0\\ \sin \left( \phi_{{y}} \right) &0&1\end {array} \right] \quad\Rightarrow\\ &\mathbf A_B=\left[ \begin {array}{ccc} {\frac {\cos \left( \phi_{{z}} \right) }{ \cos \left( \phi_{{y}} \right) }}&-{\frac {\sin \left( \phi_{{z}} \right) }{\cos \left( \phi_{{y}} \right) }}&0\\ \sin \left( \phi_{{z}} \right) &\cos \left( \phi_{{z}} \right) &0 \\ -{\frac {\sin \left( \phi_{{y}} \right) \cos \left( \phi_{{z}} \right) }{\cos \left( \phi_{{y}} \right) }}&{\frac {\sin \left( \phi_{{y}} \right) \sin \left( \phi_{{z}} \right) }{\cos \left( \phi_{{y}} \right) }}&1\end {array} \right]\quad, \mathbf{A}_I=\left[ \begin {array}{ccc} 1&{\frac {\sin \left( \phi_{{x}} \right) \sin \left( \phi_{{y}} \right) }{\cos \left( \phi_{{y}} \right) }}&-{ \frac {\cos \left( \phi_{{x}} \right) \sin \left( \phi_{{y}} \right) } {\cos \left( \phi_{{y}} \right) }}\\ 0&\cos \left( \phi_{{x}} \right) &\sin \left( \phi_{{x}} \right) \\ 0&-{\frac {\sin \left( \phi_{{x}} \right) }{\cos \left( \phi_{{y}} \right) }}&{\frac {\cos \left( \phi_{{x}} \right) } {\cos \left( \phi_{{y}} \right) }}\end {array} \right] \end{align*}
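Here is a quick numerical check of the $\mathbf{J}_R$ relation for the convention $\mathbf{R} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ with the standard elementary rotation matrices (a sketch; angles and rates arbitrary): build $\mathbf{\dot R}$ by finite differences, extract $\mathbf{\omega}_B$ from $\mathbf{R}^T\mathbf{\dot R}$, and compare with $\mathbf{J}_R\,\mathbf{\dot\phi}$:

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def R(px, py, pz):
    return Rx(px) @ Ry(py) @ Rz(pz)

px, py, pz = 0.3, -0.4, 0.7        # arbitrary angles
dx, dy, dz = 0.2, 0.5, -0.1        # arbitrary angle rates

h = 1e-6
Rdot = (R(px + dx*h, py + dy*h, pz + dz*h)
        - R(px - dx*h, py - dy*h, pz - dz*h)) / (2*h)
Omega = R(px, py, pz).T @ Rdot     # tilde(omega) in the body system
omega_fd = np.array([Omega[2, 1], Omega[0, 2], Omega[1, 0]])

cy, sy = np.cos(py), np.sin(py)
cz, sz = np.cos(pz), np.sin(pz)
JR = np.array([[cy*cz, sz, 0.0],
               [-cy*sz, cz, 0.0],
               [sy, 0.0, 1.0]])
omega_B = JR @ np.array([dx, dy, dz])
```

The finite-difference angular velocity matches $\mathbf{J}_R\,\mathbf{\dot\phi}$, and $\mathbf{R}^T\mathbf{\dot R}$ is skew-symmetric as it must be.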
{ "language": "en", "url": "https://physics.stackexchange.com/questions/707276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I calculate the perturbations to the metric determinant to 3rd order? From the post How do I calculate the perturbations to the metric determinant?, I'm trying to calculate the expansion of the metric's determinant $\sqrt{-g}$ up to 3rd order. I saw the procedure in another post, but I'm not understanding this step: \begin{align} &= \sqrt{-\det{b}}\left(1 + \frac{1}{2}\operatorname{tr}(b^{-1}h)-\frac{1}{4}\operatorname{tr}{(b^{-1}h)^2} + \frac{1}{2}\left(\frac{1}{2}\operatorname{tr}(b^{-1}h)-\frac{1}{4}\operatorname{tr}{(b^{-1}h)^2}\right)^2\right) + \mathcal O(h^3)\\ &= \sqrt{-\det{b}}\left(1 + \frac{1}{2}\operatorname{tr}(b^{-1}h)-\frac{1}{4}\operatorname{tr}{(b^{-1}h)^2} + \frac{1}{8}\operatorname{tr}^2{(b^{-1}h)}\right) + \mathcal O(h^3)\\ \end{align} I also don't understand how Tr and Tr² are calculated. Can anybody help me?
Following the calculations from the post you reference, note that all they have done is Taylor expand the logarithm and then the exponential. For that, we need to know the following Taylor identities: \begin{align} \exp(x)=&1+x+\frac{1}{2!}x^2+\frac{1}{3!}x^3+\dots\\ \log(1+x)=&x-\frac{1}{2}x^2+\frac{1}{3}x^3+\dots \end{align} Therefore we have: \begin{align} \sqrt{-\det{g}}=&\sqrt{-\det{b}}\exp{\left[\frac{1}{2}\log{\det{(1+b^{-1}h)}}\right]}\\ =&\sqrt{-\det{b}}\exp{\left[\frac{1}{2}\rm{tr}\,{\log{(1+b^{-1}h)}}\right]}\\ =&\sqrt{-\det{b}}\exp{\left[\frac{1}{2}\rm{tr}\left[b^{-1}h-\frac{1}{2}(b^{-1}h)^2+\frac{1}{3}(b^{-1}h)^3+\dots\right]\right]}\\ =&\sqrt{-\det{b}}\exp{\left[\frac{1}{2}\rm{tr}(b^{-1}h)-\frac{1}{4}\rm{tr}(b^{-1}h)^2+\frac{1}{6}\rm{tr}(b^{-1}h)^3+\dots\right]}\\ =&\sqrt{-\det{b}}\left[1+\frac{1}{2}\rm{tr}(b^{-1}h)-\frac{1}{4}\rm{tr}(b^{-1}h)^2+\frac{1}{6}\rm{tr}(b^{-1}h)^3\right.\\ &\left.+\frac{1}{2}\left(\frac{1}{2}\rm{tr}(b^{-1}h)-\frac{1}{4}\rm{tr}(b^{-1}h)^2+\frac{1}{6}\rm{tr}(b^{-1}h)^3\right)^2+\frac{1}{3!}\left(\frac{1}{2}\rm{tr}(b^{-1}h)-\frac{1}{4}\rm{tr}(b^{-1}h)^2+\frac{1}{6}\rm{tr}(b^{-1}h)^3\right)^3+\dots\right]. \end{align} From there you just need to substitute $b$ for your background metric (being $b=\eta$, the Minkowski metric, for flat space) and compute just up to the order-3 terms in the same way they do in the original post. Hope this is useful :)
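As a numerical sanity check of the trace expansion, one can compare the exact $\sqrt{-\det(b+h)}$ against the expanded form for a small random symmetric perturbation of a Minkowski background (a sketch in NumPy; the residual should be fourth order in $h$):

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.diag([-1.0, 1.0, 1.0, 1.0])          # Minkowski background
h = 1e-3 * rng.standard_normal((4, 4))
h = (h + h.T) / 2                            # symmetric perturbation

exact = np.sqrt(-np.linalg.det(b + h))

X = np.linalg.inv(b) @ h                     # b^{-1} h
t1 = np.trace(X)
t2 = np.trace(X @ X)
t3 = np.trace(X @ X @ X)

# sqrt(det(1+X)) = exp( tr X / 2 - tr X^2 / 4 + tr X^3 / 6 - ... ),
# with the exponential itself expanded to cubic order
e = 0.5*t1 - 0.25*t2 + t3/6
approx = np.sqrt(-np.linalg.det(b)) * (1 + e + e**2/2 + e**3/6)

print(abs(exact - approx))   # O(h^4), far below the size of the kept terms
```

With $h \sim 10^{-3}$ the discrepancy sits around $10^{-12}$, confirming that all terms through third order are captured.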
{ "language": "en", "url": "https://physics.stackexchange.com/questions/725670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Divergence not defined I’m currently working on the practice problems in Introduction to Electrodynamics by Griffiths. I got confused by the solution to this problem. What does “ill-defined divergence” even mean? I understand how and when to use delta function, but I don’t understand how divergence is not defined.
I think we can use $$ \nabla \cdot (\psi \vec{a}) = \vec{a} \cdot \nabla \psi + \psi \nabla \cdot \vec{a} $$ to see what happens for $n \lt -2$. $$\begin{align*} \nabla \cdot (r^{-3} \hat{r}) &= \left( \frac{1}{r^2} \hat{r} \right) \cdot \nabla \frac{1}{r} + \frac{1}{r} \nabla \cdot \frac{1}{r^2} \hat{r} \\ &= - \frac{1}{r^4} + \frac{1}{r} \nabla \cdot \frac{1}{r^2} \hat{r} && \left( \nabla \frac{1}{r} = - \frac{1}{r^2}\,\hat{r} \right) \\ &= - \frac{1}{r^4} + \frac{4 \pi}{r} \delta^3(\vec{r}) \end{align*}$$ As $r$ tends toward $0$ (the following misuses the delta function, which is meaningless outside of an integral), $$\begin{align*} \nabla \cdot (r^{-3} \hat{r}) &= - \frac{1}{r^4} + \frac{4 \pi}{r} \delta^3(\vec{r}) \\ &= -\infty + \infty \cdot \infty \end{align*}$$ We cannot assign a meaningful value to $\nabla \cdot (r^{-3} \hat{r})$, so it is called "ill-defined".
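Away from the origin the radial divergence formula $\nabla\cdot\big(F(r)\,\hat r\big)=\frac{1}{r^2}\partial_r\!\left(r^2F\right)$ makes the pattern explicit; a quick SymPy sketch (the $4\pi\,\delta^3(\vec r)$ piece at $r=0$ is of course invisible to this pointwise computation):

```python
import sympy as sp

r, n = sp.symbols('r n', positive=True)

def div_radial(F):
    # divergence of F(r) r-hat in spherical coordinates, valid for r > 0
    return sp.simplify(sp.diff(r**2 * F, r) / r**2)

print(div_radial(r**n))      # (n + 2) * r**(n - 1)
print(div_radial(r**-2))     # 0: the delta-function piece lives only at r = 0
print(div_radial(r**-3))     # -1/r**4, blowing up as r -> 0
```

For $n>-2$ the divergence is an ordinary power of $r$; exactly at $n=-2$ it vanishes pointwise and the whole content is the delta function; for $n<-2$ the $-1/r^4$-type term diverges too fast at the origin for any distributional assignment to fix it.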
{ "language": "en", "url": "https://physics.stackexchange.com/questions/738354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Vector space of $\mathbb{C}^4$ and its basis, the Pauli matrices How do I write an arbitrary $2\times 2$ matrix as a linear combination of the three Pauli Matrices and the $2\times 2$ unit matrix? Any example for the same might help ?
A slow construction would go... $$ \begin{pmatrix}a&b\\c&d\end{pmatrix} = a\begin{pmatrix}1&0\\0&0\end{pmatrix} +b\begin{pmatrix}0&1\\0&0\end{pmatrix} +c\begin{pmatrix}0&0\\1&0\end{pmatrix} +d\begin{pmatrix}0&0\\0&1\end{pmatrix} $$ $$ \begin{pmatrix}1&0\\0&0\end{pmatrix} =\frac{1}{2} \begin{pmatrix}1&0\\0&1\end{pmatrix} + \frac{1}{2} \begin{pmatrix}1&0\\0&-1\end{pmatrix} =\frac{1}{2}1_2+\frac{1}{2}\sigma_3 $$ $$ \begin{pmatrix}0&1\\0&0\end{pmatrix} =\ ... $$ $$ \Longrightarrow \begin{pmatrix}a&b\\c&d\end{pmatrix} = \frac{a}{2}1_2+\frac{a}{2}\sigma_3+\ ...\ (\text{other combinations of the four matrices}) $$
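The fast route is the trace inner product: since $\operatorname{tr}(\sigma_i\sigma_j)=2\delta_{ij}$ for the set $\{1_2,\sigma_1,\sigma_2,\sigma_3\}$, the coefficients are simply $c_i=\tfrac12\operatorname{tr}(\sigma_i M)$. A small NumPy sketch:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, sx, sy, sz]

def pauli_coeffs(M):
    # c_i = tr(sigma_i M) / 2, using tr(sigma_i sigma_j) = 2 delta_ij
    return [np.trace(s @ M) / 2 for s in basis]

M = np.array([[1.0, 2.0 - 1j], [0.5j, -3.0]])
c = pauli_coeffs(M)
rebuilt = sum(ci * s for ci, s in zip(c, basis))
print(np.allclose(rebuilt, M))   # True
```

For a general complex matrix the coefficients come out complex; a Hermitian $M$ gives four real coefficients.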
{ "language": "en", "url": "https://physics.stackexchange.com/questions/23846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Equivalent Rotation using Baker-Campbell-Hausdorff relation Is there a way in which one can use the BCH relation to find the equivalent angle and the axis for two rotations? I am aware that one can do it in a precise way using Euler Angles but I was wondering whether we can use just the algebra of the rotation group to perform the same computation?
As $\mathrm{SO}(3)$ is a connected group, $\exp(\mathsf{L}(\mathrm{SO}(3))) = \mathrm{SO}(3)$ and hence this should – in theory – work. Let us work in the fundamental representation of $\mathrm{SO}(3)$, that is orthogonal 3x3 matrices. Assume you have a rotation $B$ acting first and a second rotation $A$, the resulting rotation is then given by $AB \equiv C \in \mathrm{SO}(3)$. Furthermore, we can express $A$, $B$ and $C$ by $\exp(a)$, $\exp(b)$ and $\exp(c)$ for $a,b,c \in \mathsf{L}(\mathrm{SO}(3))$. We then have¹ $$ \exp(a) \exp(b) = AB = C = \exp(c) = \exp\left(a + b + \frac{1}{2}[a,b] + \frac{1}{12} [ a, [a,b]] - \frac{1}{12}[b,[a,b]]+ \ldots\right)\quad.$$ Now, the problem with verifying this by an example is that these commutators are rather ugly. I shall do two examples: First example: Two rotations about the $x$ axis Take $A$ to rotate about $(1,0,0)$ by $\theta$ and $B$ to rotate about the same axis by $\phi$. We then have $$ A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) \end{pmatrix}$$ and similarly for $B$. The associated $a$ is then simply: $$ a = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -\theta \\ 0 & \theta & 0 \end{pmatrix}\quad $$ and again similarly for $b$ with $\theta \to \phi$. You can check easily that $\exp(a)$ gives you indeed $A$. Now since $a$ and $b$ commute, we have $[a,b] = 0$ and hence $c = a + b$ - which is $$ c = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -\theta -\phi \\ 0 & \theta + \phi & 0 \end{pmatrix}\quad.$$ This very likely illuminates better than $AB$ that two rotations about the same axis are equivalent to one rotation by the sum of the angles. You can again check that $\exp(c)$ gives you $C$. Second Example: One rotation about $y$, a second about $x$. This one is more difficult, as we will have to calculate annoying commutators. The result presented here will hence only be approximate, not exact. 
Take $$ A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) \end{pmatrix} \qquad B = \begin{pmatrix} \cos(\phi) & 0 & \sin(\phi) \\ 0 & 1 & 0 \\ -\sin(\phi) & 0 & \cos(\phi) \end{pmatrix} \quad .$$ You can calculate that $$ AB = C = \begin{pmatrix} \cos(\phi) & 0 & \sin(\phi) \\ \sin(\theta) \sin(\phi) & \cos(\theta) & -\sin(\theta) \cos(\phi) \\ \sin(\phi)\cos(\theta) & \sin(\theta) & \cos(\phi)\cos(\theta) \end{pmatrix} \quad .$$ Similarly to the above, we have $$ a = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -\theta \\ 0 & \theta & 0 \end{pmatrix} \qquad b = \begin{pmatrix} 0 & 0 & \phi \\ 0 & 0 & 0 \\ -\phi & 0 & 0 \end{pmatrix} \quad.$$ Now the tricky part is to calculate $$ c = a + b + \frac{1}{2} [ a,b] + \frac{1}{12} [ a, [a,b]] - \frac{1}{12} [b,[a,b]] + \ldots $$ to such a precision that $\exp(c)$ gives remotely sensible results. At this, I mostly failed, but here's what I got: $$ \frac{1}{2} [ a,b] = \frac{1}{2} \begin{pmatrix} 0 & -\theta\phi & 0 \\ \theta\phi & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\quad,$$ which looks an awful lot like the element of the Lie algebra basis corresponding to a rotation about the $z$ axis, but unfortunately doesn’t fit in at all (something linear in either $a$ or $b$ would have been nice…). I then went on to calculate $[a,[a,b]]$ and $[b,[a,b]]$ and arrived at $$ c \approx \begin{pmatrix} 0 & -\frac{1}{2}\theta\phi & \phi - \frac{1}{12} \theta^2 \phi \\ \frac{1}{2} \theta \phi & 0 & -\theta + \frac{1}{12} \theta\phi^2 \\ -\phi + \frac{1}{12} \theta^2 \phi & \theta - \frac{1}{12} \theta \phi^2 & 0 \end{pmatrix} \quad . $$ The nice thing here is that this is still an antisymmetric matrix and hence (can be) in $\mathsf{L}(\mathrm{SO}(3))$. In order to now compare this to anything, we have to approximate $C$. Recall the expression from above. As a first approximation, I will set $\cos(x) = 1 - \frac{1}{2}x^2$, $\sin(x) = x - \frac{1}{6} x^3$. 
I then get $$ C \approx \begin{pmatrix} 1 - \frac{\phi^2}{2} & 0 & \phi - \frac{\phi^3}{6} \\ \left(\theta - \frac{\theta^3}{6}\right) \left(\phi - \frac{\phi^3}{6}\right) & 1 - \frac{\theta^2}{2} & -\left(1-\frac{\phi^2}{2}\right)\left(\theta - \frac{\theta^3}{6}\right) \\ -\left(1-\frac{\theta^2}{2}\right)\left(\phi-\frac{\phi^3}{6}\right) & \theta - \frac{\theta^3}{6} & \left(1 - \frac{\theta^2}{2}\right)\left(\phi - \frac{\phi^3}{6}\right) \end{pmatrix} \quad ,$$ expanding out the brackets and throwing away anything of order four, I arrive at $$ C \approx \begin{pmatrix} 1 - \frac{\phi^2}{2} & 0 & \phi - \frac{\phi^3}{6} \\ \theta\phi & 1 - \frac{\theta^2}{2} & -\theta + \frac{\phi^2\theta}{2} \\ -\phi + \frac{\theta^2\phi}{2} & \theta - \frac{\theta^3}{6} & \phi - \frac{\theta^2\phi}{2} \end{pmatrix}\quad.$$ This expression should be roughly equal to $$ 1_3 + c + \frac{1}{2} c^2 + \frac{1}{6} c^3 \quad,$$ which is the expansion of $\exp(c)$. After again throwing away everything of order four, we arrive at $$ \exp(c) \approx \begin{pmatrix} 1-\frac{\phi^2}{2} & 0 & \phi - \frac{\phi^3}{6} \\ \theta\phi & 1 - \frac{\theta^2}{2} & -\theta+\frac{\theta^3}{6} +\frac{\theta\phi^2}{2}\\ -\phi +\frac{\theta^2 \phi}{2} & \theta - \frac{\theta^3}{6} & 1 - \frac{\theta^2}{2} - \frac{\phi^2}{2} \end{pmatrix} \quad .$$ The remaining ‘wrong’ terms here most probably cancel with higher orders of $c$, but I have to admit I am slightly too lazy for that. Conclusion The main problem with the BCH formula is really that, in general, $[a,b] \neq 0$ and you hence most often not even get an exact expression for $c$ – from which one could most likely deduce angle and axis of rotation without evaluating that pesky exponential. Without an exact expression for $c$, however, all is lost, as non-exact expressions merely rely on the fact that for infinitesimal angles of rotation, all rotations commute. 
I would love to hear other opinions, though, especially regarding the ‘theoretical’ part what one could do with $c$, if it was known exactly.
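For a concrete feel of how good the truncated series is, one can compare $e^a e^b$ against $e^c$ with $c$ kept through the third-order BCH terms (a NumPy sketch using the same two generators as the second example; the plain Taylor-series exponential below is fine for these small $3\times3$ matrices):

```python
import numpy as np

def expm(X, terms=40):
    # plain Taylor series for the matrix exponential; adequate for small norms
    out, term = np.eye(len(X)), np.eye(len(X))
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

def comm(x, y):
    return x @ y - y @ x

theta, phi = 0.05, 0.07
a = np.array([[0, 0, 0], [0, 0, -theta], [0, theta, 0]])  # generator of the x-rotation
b = np.array([[0, 0, phi], [0, 0, 0], [-phi, 0, 0]])      # generator of the y-rotation

# BCH truncated after the double commutators
c = a + b + comm(a, b)/2 + comm(a, comm(a, b))/12 - comm(b, comm(a, b))/12

err = np.max(np.abs(expm(a) @ expm(b) - expm(c)))
print(err)   # tiny: the neglected BCH terms are fourth order in the angles
```

For angles of a few hundredths of a radian the mismatch is below $10^{-6}$, while for order-one angles the truncation error becomes visible, which is exactly the limitation discussed above.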
{ "language": "en", "url": "https://physics.stackexchange.com/questions/29100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
$N$ coupled quantum harmonic oscillators I want to find the wave functions of $N$ coupled quantum harmonic oscillators having the following hamiltonian: \begin{eqnarray} H &=& \sum_{i=1}^N \left(\frac{p^2_i}{2m_i} + \frac{1}{2}m_i\omega^2 x^2_i + \frac{\kappa}{2} (x_i-x_{i+1})^2 \right)\,, \qquad x_{N+1}=0\,,\\ &=& \frac{1}{2}p^T Mp + \frac{1}{2}x^TKx\,, \end{eqnarray} where $M=\text{diag}(\frac{1}{m_1}, \cdots,\frac{1}{m_N})$ and $K$ is a real symmetric $N\times N$ matrix with positive eigenvalues, \begin{equation} K= \begin{pmatrix} k'_1& -\kappa & 0 & \cdots & 0 \\ -\kappa & k'_2& -\kappa & \ddots & \vdots \\ 0 & -\kappa & \ddots& \ddots & 0\\ \vdots & \ddots &\ddots & k'_{N-1}&-\kappa \\ 0&\cdots & 0 & -\kappa & k'_N \end{pmatrix} \end{equation} with $k'_i = m_i\omega^2+2\kappa$ but $k'_{1,N} = m_{1,N}\omega^2+\kappa$. By choosing a basis which diagonalizes the matrix $K$, the hamiltonian can be express as the sum of uncoupled harmonic oscillators hamiltonian. As an example, consider two coupled quantum harmonic oscillators with hamiltonian \begin{equation} H = \frac{p^2_1}{2m_1} + \frac{p^2_2}{2m_2} + \frac{1}{2}m_1\omega^2 x^2_1 + \frac{1}{2}m_2 \omega^2 x^2_2 + \frac{\kappa}{2} (x_1-x_2)^2 \,. \end{equation} We make the following changes of variables (normal coordinates) \begin{eqnarray} x &=& \frac{x_1 - x_2}{\sqrt{2}} \,, \\ X &=& \frac{m_1 x_1 + m_2 x_2}{M\sqrt{2}}\,, \end{eqnarray} or equivalently, \begin{eqnarray} x_1 &=& \frac{1}{\sqrt{2}}\left(X + \frac{m_2}{M}x\right) \,, \\ x_2 &=& \frac{1}{\sqrt{2}}\left(X - \frac{m_1}{M}x\right) \,, \end{eqnarray} where $M=(m_1+m_2)/2$. Then the hamiltonian becomes \begin{equation} H = \frac{p^2_x}{2\mu} + \frac{1}{2}\mu\omega_-^2 x^2 + \frac{p^2_X}{2M} + \frac{1}{2}M\omega_+^2 X^2 \,. \end{equation} where $\displaystyle\mu = \frac{m_1m_2}{M}$ and $\omega_+^2=\omega^2$ and $\omega_-^2 = \omega^2 + 2\kappa/\mu$. 
The wave functions are \begin{equation} \Psi_{mn}(x_1,x_2) = \frac{1}{\sqrt{\pi x_0X_0}}\frac{e^{-x^2/2x_0^2}}{\sqrt{m!\,2^m}}\frac{e^{-X^2/2X_0^2}}{\sqrt{n!\,2^n}}H_m\left(\frac{x}{x_0} \right)H_n\left(\frac{X}{X_0} \right) \,, \end{equation} where $x=x(x_1,x_2)$ and $X=X(x_1,x_2)$ and $\displaystyle x_0=\sqrt{\frac{\hbar}{\mu\omega_-}}$ and $\displaystyle X_0=\sqrt{\frac{\hbar}{M\omega_+}}$. How does all this work using matrices "formalism"? And how to extend it to $N$ CQHO? Ultimately, I would like to redemonstrate (8) and (13) in http://arxiv.org/pdf/hep-th/9303048.pdf
(If I did not make obvious algebra errors) the elegant solution to this problem is to realize that the matrix $$ \Lambda=\left(\begin{array}{lllr} 0&1&0\ldots&0\\ 0&0&1&\ldots 0\\ \vdots&\vdots&\vdots&1\\ 1&0&0\ldots&0 \end{array}\right) $$ actually commutes with $H$ since $\Lambda$ basically maps $x_{i+1}$ to $x_i$. As a result, $H$ and $\Lambda$ have a common set of eigenvectors. Since $\Lambda^n=1_{n\times n}$, the eigenvalues of $\Lambda$ satisfy $\lambda_k^n=1$ so are the $n$’th root of unity. $$ \lambda_k=e^{2\pi i k/n}=\omega_n^k\, . $$ The eigenvectors are then easily found to be the Fourier vectors, i.e. $$ v_k= \frac{1}{\sqrt{n}}\left(\begin{array}{c} \omega_n^k\\ (\omega_n^k)^2\\ \vdots \\ 1 \end{array}\right)\, . $$ With these you can construct the matrix $U$ of eigenvectors that will diagonalise $H$.
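The claim that the Fourier vectors diagonalise the shift is easy to verify numerically (a NumPy sketch; `Lam` is the cyclic shift matrix $\Lambda$ from above, with a 1 in row $i$, column $i+1 \bmod n$):

```python
import numpy as np

n = 5
Lam = np.roll(np.eye(n), 1, axis=1)   # row i has a 1 in column (i+1) mod n

w = np.exp(2j*np.pi/n)                # omega_n
ok = True
for k in range(n):
    v = w**(k*np.arange(1, n + 1)) / np.sqrt(n)   # Fourier vector v_k
    ok &= np.allclose(Lam @ v, w**k * v)          # eigenvalue omega_n^k
print(ok)   # True
```

Stacking the $v_k$ as columns gives the unitary $U$ that simultaneously diagonalises $\Lambda$ and any Hamiltonian commuting with it; note this relies on the periodic identification $x_{N+1}=x_1$, i.e. on $H$ being circulant.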
{ "language": "en", "url": "https://physics.stackexchange.com/questions/209424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Projectiles on inclined planes with coefficient of restitution This is the problem I am currently attempting. So far, I've resolved the velocity parallel and perpendicular to the plane to get, perpendicular: $u \sin \theta$ upon launch and $-u\sin \theta$ on landing. Parallel: $u \cos \theta - 2u \sin \theta \tan \alpha$. Where do I go from here?
@EDIT: SOLVED IT! :) The key part I was missing was that $e$ only acts on the component perpendicular to the slope (the y-component), i.e. $u_{r,x} = v_x$ and $u_{r,y} = -e v_y$. Huge thanks to @Floris for spotting this, and all the help!! Always start with a diagram :) I tried solving this two ways: relative to vertical/horizontal and relative to the slope. The reflection for vertical/horizontal is horrendous, and it works out much neater to just resolve relative to the slope. Relative to the slope of the plane Projection \begin{align} {v_x \choose v_y} &= {u_x + a_x t_1\choose u_y + a_y t_1} \\ {s_x \choose 0} &= {u_x t_1 + \frac{1}{2}a_x t_1^2\choose u_y t_1 + \frac{1}{2}a_y t_1^2} \end{align} where $u_x = u\cos(\theta)$, $u_y = u\sin(\theta)$, $a_x = -mg \sin(\alpha)$, $a_y = -mg \cos(\alpha)$ and $s_y = 0$. Reflection $$ \vec{u_{r}} = \vec{v} - 2(\vec{v} \cdot \hat{n})\hat{n} $$ where $\hat{n} = {0 \choose 1}$ is the unit normal vector to the slope. $$ {u_{r,x} \choose u_{r,y}} = {v_x \choose v_y} - 2 ({v_x \choose v_y} \cdot {0 \choose 1}) {0 \choose 1} $$ Rearrange the equation and remember that the velocity perpendicular to the plane is reduced by a factor of $e$. $$ {u_{r,x} \choose u_{r,y}} = {v_x \choose -ev_y} $$ Rebound \begin{align} {-s_x \choose -s_y} &= {u_{r,x} t_2 + \frac{1}{2}a_x t_2^2\choose u_{r,y} t_2 + \frac{1}{2}a_y t_2^2} \\ {s_x \choose 0} &= {-v_x t_2 - \frac{1}{2}a_x t_2^2 \choose -ev_y t_2 + \frac{1}{2}a_y t_2^2} \end{align} By using $s_y = 0$, we can immediately solve for $t_1$ and $t_2$ $$ t_1 = -\frac{2u_y}{a_y} \quad t_2 = \frac{2ev_y}{a_y} $$ By plugging our solution for $t_1$ into $v_y = u_y + a_y t_1$, we get $$ v_y = -u_y $$ which in hindsight is obvious, because acceleration perpendicular to the plane is constant. Now for the fun part: setting $s_x$ during projection equal to the $s_x$ during the rebound.
$$ u_x t_1 + \frac{1}{2}a_x t_1^2 = -v_x t_2 - \frac{1}{2}a_x t_2^2 $$ Plug in for time $$ [u_x + \frac{1}{2}a_x (-\frac{2u_y}{a_y}) ](-\frac{2u_y}{a_y}) = [-v_x - \frac{1}{2}a_x \frac{2ev_y}{a_y}] \frac{2ev_y}{a_y} $$ Cancel $\frac{2}{a_y}$ from both sides $$ [- u_x + u_y\frac{a_x}{a_y} ]u_y = [-v_x - ev_y \frac{a_x}{a_y}] ev_y $$ Multiply both sides by $-1$ and factor out $u_y$ from the left and $v_y$ from the right. $$ [\frac{u_x}{u_y} - \frac{a_x}{a_y} ]u_y^2 = [\frac{v_x}{v_y} + e\frac{a_x}{a_y}] ev_y^2 $$ Remember that $v_y = -u_y$, so we can cancel $v_y^2$ and $u_y^2$ from both sides. $$ \frac{u_x}{u_y} - \frac{a_x}{a_y} = e\frac{v_x}{v_y} + e^2\frac{a_x}{a_y} $$ Now we want to plug in for $v_x/v_y$ $$ v_x = u_x + a_x t_1 = u_x -2 \frac{a_x}{a_y}u_y = (\frac{u_x}{u_y} - 2 \frac{a_x}{a_y})u_y \\ \therefore \frac{v_x}{v_y} = 2 \frac{a_x}{a_y} - \frac{u_x}{u_y} $$ Plug this back in $$ \frac{u_x}{u_y} - \frac{a_x}{a_y} = e(2 \frac{a_x}{a_y} - \frac{u_x}{u_y}) + e^2\frac{a_x}{a_y} $$ This is a quadratic in $e$, so rearrange into an obviously quadratic form $$ \frac{a_x}{a_y} e^2 + (2 \frac{a_x}{a_y} - \frac{u_x}{u_y})e + \frac{a_x}{a_y} - \frac{u_x}{u_y} = 0 $$ Solve using the quadratic formula $e = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$ \begin{align} b^2 - 4ac &= (2 \frac{a_x}{a_y} - \frac{u_x}{u_y})^2 - 4 (\frac{a_x}{a_y})(\frac{a_x}{a_y} - \frac{u_x}{u_y}) \\ &= 4 (\frac{a_x}{a_y})^2 - 4 \frac{a_x}{a_y}\frac{u_x}{u_y} + (\frac{u_x}{u_y})^2 - 4(\frac{a_x}{a_y})^2 + 4\frac{a_x}{a_y}\frac{u_x}{u_y} \\ &= (\frac{u_x}{u_y})^2 \end{align} $$ e = \frac{(\frac{u_x}{u_y} - 2 \frac{a_x}{a_y}) \pm \frac{u_x}{u_y}}{2\frac{a_x}{a_y}} \\ \therefore \quad e_- = -1, \quad e_+ = \frac{u_x}{u_y}\frac{a_y}{a_x} - 1 $$ Remember the definitions from the start: $u_x = u\cos(\theta)$, $u_y = u\sin(\theta)$, $a_x = -mg \sin(\alpha)$, $a_y = -mg \cos(\alpha)$, so $$ \frac{u_x}{u_y} = \cot(\theta) \quad \frac{a_x}{a_y} = \tan(\alpha) $$ Plug this into our expression for $e_+$ and voila! 
$$ e_+ = \cot(\theta) \cot(\alpha) - 1 $$
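As a sanity check, the claimed root can be substituted back into the quadratic in $e$ symbolically. A SymPy sketch, writing $R = u_x/u_y = \cot\theta$ and $A = a_x/a_y = \tan\alpha$ (symbol names introduced here just for brevity), so that the physical root is $e = R/A - 1 = \cot\theta\cot\alpha - 1$:

```python
import sympy as sp

R, A, e = sp.symbols('R A e')   # R = u_x/u_y, A = a_x/a_y

quadratic = A*e**2 + (2*A - R)*e + (A - R)

# the quadratic factors, exposing both roots e = -1 and e = R/A - 1
print(sp.expand((e + 1)*(A*e + A - R)) == sp.expand(quadratic))   # True

# e = R/A - 1 = cot(theta)*cot(alpha) - 1 is the physical root
print(sp.simplify(quadratic.subs(e, R/A - 1)))   # 0
```

The spurious root $e=-1$ corresponds to a perfectly elastic bounce traced backwards, which is why only $e_+$ survives.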
{ "language": "en", "url": "https://physics.stackexchange.com/questions/232303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Non-zero components of the Riemann tensor for the Schwarzschild metric Can anyone tell me which are the non-zero components of the Riemann tensor for the Schwarzschild metric? I've been searching for these components for about 2 weeks, and I've found a few sites, but the problem is that each one of them shows different components, in number and form. I´ve calculated a few components but I don't know if they are correct. I'm using the form of the metric: $$ds^2 = \left(1-\frac{2m}{r}\right)dt^2 + \left(1-\frac{2m}{r}\right)^{-1} dr^2 + r^2 d\theta^2 + r^2\sin^2\theta \, d\phi^2.$$
According to Mathematica, and assuming I haven't made any silly errors typing in the metric, I get the non-zero components of $R^\mu{}_{\nu\alpha\beta}$ to be: {1, 2, 1, 2} -> (2 G M)/(r^2 (-2 G M + c^2 r)), {1, 2, 2, 1} -> -((2 G M)/(r^2 (-2 G M + c^2 r))), {1, 3, 1, 3} -> -((G M)/(c^2 r)), {1, 3, 3, 1} -> (G M)/(c^2 r), {1, 4, 1, 4} -> -((G M Sin[\[Theta]]^2)/(c^2 r)), {1, 4, 4, 1} -> (G M Sin[\[Theta]]^2)/(c^2 r), {2, 1, 1, 2} -> (2 G M (-2 G M + c^2 r))/(c^4 r^4), {2, 1, 2, 1} -> -((2 G M (-2 G M + c^2 r))/(c^4 r^4)), {2, 3, 2, 3} -> -((G M)/(c^2 r)), {2, 3, 3, 2} -> (G M)/(c^2 r), {2, 4, 2, 4} -> -((G M Sin[\[Theta]]^2)/(c^2 r)), {2, 4, 4, 2} -> (G M Sin[\[Theta]]^2)/(c^2 r), {3, 1, 1, 3} -> (G M (2 G M - c^2 r))/(c^4 r^4), {3, 1, 3, 1} -> (G M (-2 G M + c^2 r))/(c^4 r^4), {3, 2, 2, 3} -> (G M)/(r^2 (-2 G M + c^2 r)), {3, 2, 3, 2} -> (G M)/(r^2 (2 G M - c^2 r)), {3, 4, 3, 4} -> (2 G M Sin[\[Theta]]^2)/(c^2 r), {3, 4, 4, 3} -> -((2 G M Sin[\[Theta]]^2)/(c^2 r)), {4, 1, 1, 4} -> (G M (2 G M - c^2 r))/(c^4 r^4), {4, 1, 4, 1} -> (G M (-2 G M + c^2 r))/(c^4 r^4), {4, 2, 2, 4} -> (G M)/(r^2 (-2 G M + c^2 r)), {4, 2, 4, 2} -> (G M)/(r^2 (2 G M - c^2 r)), {4, 3, 3, 4} -> -((2 G M)/(c^2 r)), {4, 3, 4, 3} -> (2 G M)/(c^2 r),
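These can be reproduced symbolically from scratch, e.g. with SymPy. The sketch below uses units $G=c=1$ and signature $(-,+,+,+)$; since $R^\mu{}_{\nu\alpha\beta}$ is unchanged under an overall sign flip of the metric, the components agree with the list above after setting $G=c=1$ (indices $1,2,3,4 \leftrightarrow t,r,\theta,\phi$):

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # Schwarzschild, G = c = 1
ginv = g.inv()

# Christoffel symbols Gamma[a][b][c] = Γ^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(4))/2
           for c in range(4)] for b in range(4)] for a in range(4)]

# R^a_{bcd} = d_c Γ^a_{db} - d_d Γ^a_{cb} + Γ^a_{ce} Γ^e_{db} - Γ^a_{de} Γ^e_{cb}
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    expr += sum(Gamma[a][c][e]*Gamma[e][d][b] - Gamma[a][d][e]*Gamma[e][c][b]
                for e in range(4))
    return sp.simplify(expr)

# compare with the {1,2,1,2} and {3,4,3,4} entries of the list above
print(sp.simplify(riemann(0, 1, 0, 1) - 2*M/(r**2*(r - 2*M))))   # 0
print(sp.simplify(riemann(2, 3, 2, 3) - 2*M*sp.sin(th)**2/r))    # 0
```

Looping `riemann` over all index combinations recovers the full list, and differing sets of "non-zero components" quoted by various sources usually just reflect different index positions (all-lower vs. mixed) or sign conventions.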
{ "language": "en", "url": "https://physics.stackexchange.com/questions/295814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Scattering amplitude with a change in basis of fields Suppose I know the Feynman rules for the scattering process $\pi^j \pi^k \rightarrow \pi^l \pi^m$ where $j,k,l,m$ can be $1, 2$ or $3$. Define the charged pion fields as $\pi^\pm=\frac{1}{\sqrt{2}}(\pi^1 \pm i \pi^2)$ and neutral pion field as $\pi^0=\pi^3$. I would like to derive the scattering amplitudes for processes like $\pi^+ \pi^- \rightarrow \pi^+ \pi^-$ from my knowledge of scattering amplitude of $\pi^j \pi^k \rightarrow \pi^l \pi^m$. How should I proceed? I suppose it can be done by clever change of indices in Feynman rules, but I am unable to see how exactly.
I assume your Lagrangian might have a term of the form $\bar{N}\vec{\pi}.\vec{\tau}\gamma^5N$, or something with $\vec{\pi}.\vec{\tau}$. One method is to expand the following and see how $\pi^+$ and $\pi^-$ come into your Lagrangian, \begin{align*} \vec{\pi}.\vec{\tau} &= \pi^1\sigma^1 + \pi^2\sigma^2 +\pi^3\sigma^3 \\ &= \begin{pmatrix} 0 && \pi^1 \\ \pi^1 && 0 \end{pmatrix} + \begin{pmatrix} 0 && -i \pi^2 \\ i \pi^2 && 0 \end{pmatrix} + \begin{pmatrix} \pi^3 && 0 \\ 0 && -\pi^3 \end{pmatrix} \\ &= \begin{pmatrix} \pi^3 && \pi^1 - i \pi^2 \\ \pi^1 + i\pi^2 && -\pi^3 \end{pmatrix} \\ &= \begin{pmatrix} \pi^0 && \sqrt{2}\pi^- \\ \sqrt{2}\pi^+ && -\pi^0 \end{pmatrix}. \end{align*} Then expand your Lagrangian in these fields, \begin{align*} \bar{N} \vec{\pi}.\vec{\tau}\gamma^5 N &= \begin{pmatrix} \bar{p} && \bar{n} \end{pmatrix} \begin{pmatrix} \pi^0 && \sqrt{2}\pi^- \\ \sqrt{2}\pi^+ && -\pi^0 \end{pmatrix}\gamma^5 \begin{pmatrix} p \\ n \end{pmatrix}\\ &= \bar{p}\pi^0\gamma^5 p + \sqrt{2} \bar{p}\pi^-\gamma^5 n + \sqrt{2} \bar{n} \pi^+ \gamma^5 p - \bar{n} \pi^0 \gamma^5 n \end{align*} After expanding this out you can read off your Feynman rules with these new fields.
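The matrix identity in the first display is easy to confirm symbolically (a SymPy sketch with the same field definitions):

```python
import sympy as sp

pi1, pi2, pi3 = sp.symbols('pi1 pi2 pi3', real=True)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

pi_tau = pi1*s1 + pi2*s2 + pi3*s3      # vec(pi) . vec(tau)

pip = (pi1 + sp.I*pi2)/sp.sqrt(2)      # pi^+
pim = (pi1 - sp.I*pi2)/sp.sqrt(2)      # pi^-
pi0 = pi3                              # pi^0

target = sp.Matrix([[pi0, sp.sqrt(2)*pim], [sp.sqrt(2)*pip, -pi0]])
print(sp.simplify(pi_tau - target))    # zero matrix
```

Once the interaction matrix is written in the charged basis, the Feynman rule for any process like $\pi^+\pi^-\to\pi^+\pi^-$ follows by reading off which field combinations multiply each vertex, exactly as in the nucleon example above.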
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Computation of the self-energy term of the exact propagator for $\varphi^3$ theory in Srednicki In M. Srednicki "Quantum field theory", Section 14 -Loop corrections to the propagator-, the exact propagator $\mathbf {\tilde \Delta} (k^2)$ is stated as $$\frac{1}{i} \mathbf {\tilde \Delta} (k^2) = \frac{1}{i} \tilde \Delta (k^2) + \frac{1}{i} \tilde \Delta (k^2) [i \Pi (k^2)] \frac{1}{i} \tilde \Delta (k^2) + O(g^4)\tag{14.2}$$ where: $\tilde \Delta (k^2) = \frac{1}{k^2 + m^2 - i \epsilon}$ free-field propagator $i \Pi (k^2) = \frac{1}{2} (i g)^2 (\frac{1}{i})^2 \int \frac{d^d l}{(2 \pi)^d} \tilde \Delta ((l + k)^2) \tilde \Delta (l^2) - i (A k^2 + B m^2) + O(g^4)$ self-energy Many pages are then dedicated to the laborious calculation of the self-energy term. I could understand all of the many passages, however almost at the end of the demonstration to compute the expression $$\Pi (k^2) = \frac{1}{2} \alpha \int_0 ^1 dx D ln (D / D_0) - \frac{1}{12} \alpha (k^2 + m^2) + O(\alpha^2)\tag{14.43}$$ where: $D = x (1 - x) k^2 + m^2$ $D_0 = D \vert_{k^2 = -m^2}$ the integral over $x$ is solved in closed form as $$\Pi (k^2) = \frac{1}{12} \alpha [c_1 k^2 + c_2 m^2 + 2 k^2 f(r)] + O(\alpha^2)\tag{14.44}$$ where: $c_1 = 3 - \pi \sqrt{3}$ $c_2 = 3 - 2 \pi \sqrt{3}$ $f(r) = r^3 tanh^{-1}(1/r)$ $r = (1 + 4 m^2 / k^2)^{1/2}$ My question is: How to move from Eq. (14.43) to Eq. (14.44)? Any hint, or any link where I can find it?
It is very easy to integrate it with Mathematica. From Eq.(14.14) and Eq.(14.42) we have $D=x(1-x)k^2+m^2$ and $D_0=[1-x(1-x)]m^2$, so $$ \frac\alpha2\int_0^1\text{d}x\ D\ln\frac D{D_0}=\frac\alpha{12}\left[4(k^2+m^2)-\sqrt3(k^2+2m^2)\pi+2\sqrt{\frac{(k^2+4m^2)^3}{k^2}}\tanh^{-1}\sqrt{\frac{k^2}{k^2+4m^2}}\right]. $$ (Note the inverse hyperbolic tangent here: for $k^2>0$ the roots of $D$ in $x$ are real, so the $\ln D$ piece integrates to logarithms, i.e. $\tanh^{-1}$, while the $\pi\sqrt3$ constants come from the complex roots of $D_0$.) Therefore, $$ \begin{aligned} \Pi(k^2)=&\ \frac\alpha2\int_0^1\text{d}x\ D\ln\frac D{D_0}-\frac\alpha{12}(k^2+m^2)+\mathcal O(\alpha^2) \\ =&\ \frac\alpha{12}\left[(3-\pi\sqrt3)k^2+(3-2\pi\sqrt3)m^2+2k^2\left(1+\frac{4m^2}{k^2}\right)^\frac32\tanh^{-1}\frac1{\left(1+\frac{4m^2}{k^2}\right)^\frac12}\right]+\mathcal O(\alpha^2) \\ =&\ \frac\alpha{12}\left[c_1k^2+c_2m^2+2k^2f(r)\right]+\mathcal O(\alpha^2). \end{aligned} $$ That's all.
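Without Mathematica, the step from (14.43) to (14.44) can still be checked numerically: evaluate the Feynman-parameter integral on a grid and compare with the closed form (a NumPy sketch, dividing out the common factor of $\alpha$ and taking spacelike $k^2>0$ so that $r>1$ and $\tanh^{-1}(1/r)$ is real):

```python
import numpy as np

def lhs(k2, m2):
    # (1/2) ∫₀¹ D ln(D/D₀) dx − (1/12)(k² + m²), i.e. Π/α from (14.43)
    x = np.linspace(0.0, 1.0, 200001)
    D = x*(1 - x)*k2 + m2
    D0 = (1 - x*(1 - x))*m2
    y = D*np.log(D/D0)
    integral = np.sum(y[1:] + y[:-1])*0.5*(x[1] - x[0])   # trapezoid rule
    return 0.5*integral - (k2 + m2)/12

def rhs(k2, m2):
    # Π/α from (14.44), with f(r) = r³ tanh⁻¹(1/r)
    c1 = 3 - np.pi*np.sqrt(3)
    c2 = 3 - 2*np.pi*np.sqrt(3)
    r = np.sqrt(1 + 4*m2/k2)
    return (c1*k2 + c2*m2 + 2*k2*r**3*np.arctanh(1/r))/12

for k2, m2 in [(1.0, 1.0), (3.0, 0.5), (0.2, 2.0)]:
    print(lhs(k2, m2) - rhs(k2, m2))   # ≈ 0 in each case
```

Agreement at the $10^{-8}$ level across several $(k^2, m^2)$ pairs confirms both the constants $c_{1,2}$ and that the special function is $\tanh^{-1}$, not $\arctan$.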
{ "language": "en", "url": "https://physics.stackexchange.com/questions/485832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Issues with Feynman parameters As a sanity check, I have tried to evaluate a Feynman parameter integral, and have been unable to reproduce the textbook result. I wish to verify the identity $$\frac{1}{ABC} = \int\limits_0^1\int\limits_0^1\int\limits_0^1dxdydz\frac{2\delta(x+y+z-1)}{[Ax + By + Cz]^3} ~\hat{=}~I.$$ We can use the delta function to do the integral over $dz$. $$I = \int\limits_0^1\int\limits_0^1 dxdy \frac{2}{[C + (A-C)x + (B-C)y]^3}.$$ To do the integral over $y$ we introduce the substitution $$w = C + (A-C)x + (B-C)y$$ This will give a factor of $(B-C)^{-1}$, and change the limits \begin{align*} I &= \int\limits_0^1 \frac{1}{B-C}dx \int\limits_{C + (A-C)x}^{B+(A-C)x}dw \frac{2}{w^3} \\ &= \int\limits_0^1 dx \frac{1}{B-C}\left[\frac{1}{[C+(A-C)x]^2} - \frac{1}{[B+(A-C)x]^2}\right] \end{align*} The first integral is the Feynman parameter identity for 2 terms in the denominator, which is much easier to verify and gives $1/AC$. The second can be solved by essentially the same substitution as above. $$w = B + (A-C)x$$ which gives a factor of $(A-C)^{-1}$ in similar fashion to before. \begin{align*} I &= \frac{1}{(B-C)AC} - \frac{1}{(B-C)(A-C)}\int\limits_{B}^{A+B-C}\frac{dw}{w^2} \\ &= \frac{1}{(B-C)AC} - \frac{1}{(B-C)(A-C)}\left[\frac{1}{B} - \frac{1}{A+B-C}\right]\\ &= \frac{1}{B-C}\left[\frac{1}{AC} - \frac{1}{A-C}\frac{A-C}{B(A+B-C)}\right] \\ &= \frac{1}{B-C}\frac{B(A+B-C) - AC}{ABC(A+B-C)} \\ &= \frac{1}{ABC} + \frac{1}{AB(A+B-C)} \end{align*} I don't know where this extra term has come from, and I can't seem to figure out where I went wrong.
The problem is in the very first step sadly. When you resolve the $\delta$ function you are putting $$ z = 1 - x - y\,. $$ This will hold only when $x+y \leq 1$ because you are integrating only in the region $z \in [0,1]$. In other words, the zero of the $\delta$ function sometimes falls outside of the region of integration and thus gives no contribution. The correct way to resolve the $\delta$ function is $$ I = \int_0^1 \mathrm{d}x \int_{0}^{1-x}\mathrm{d}y \frac{2}{[C + (A-C)x + (B-C)y]^3}\,. $$ If you follow the same exact steps as you did in your post with the limits modified this way, you'll get the right answer.
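The corrected identity, with $y$ integrated only up to $1-x$, is easy to test numerically (a NumPy midpoint-grid sketch; the accuracy is limited by the jagged treatment of the diagonal boundary, so only a few digits are expected):

```python
import numpy as np

A, B, C = 1.0, 2.0, 3.0
n = 1500
h = 1.0/n
u = (np.arange(n) + 0.5)*h
X, Y = np.meshgrid(u, u, indexing='ij')
inside = X + Y < 1.0                         # the region where z = 1 - x - y >= 0

integrand = 2.0/(A*X + B*Y + C*(1.0 - X - Y))**3
approx = np.sum(integrand[inside])*h*h

print(approx, 1.0/(A*B*C))   # both close to 1/6
```

Running the same grid over the full unit square instead of the triangle reproduces the spurious extra term found above, which makes the origin of the error vivid.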
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding common eigenvectors for two commuting hermitian matrices Let $A = \begin{bmatrix} 1 &0 &0 \\ 0& 0& 0\\ 0&0 &1 \end{bmatrix}$ and $B = \begin{bmatrix} 0 &0 &1 \\ 0& 1& 0\\ 1&0 &0 \end{bmatrix}$ the representation of two hermitian operators in a $(\phi_{1},\phi_{2},\phi_{3})$ basis. Find a common basis of eigenvectors of the two operators... So... is easily shown that both matrices commute and are hermitian, the corresponding eigenvalues and eigenvectors are: * *For $A$: $a_1 = 0$ with corresponding $\begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}$ Eigenvector, $a_2 = 1$ with corresponding $\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix} , \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$ Eigenvectors *For $B$: $b_1 = 1$ with Eigenvectors $\begin{bmatrix} 1\\ 0\\ 1 \end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}$ , $b_2 = -1 $ with corresponding $\begin{bmatrix} 1\\ 0\\ -1 \end{bmatrix}$ Eigenvector How can I find a common set of Eigenvectors?
Just use the matrix of the eigenvectors of B: $$ U = \left( \begin{matrix} 1&1 &0 \\ 0& 0&1\\ 1& -1&0 \end{matrix} \right) $$ With this matrix, you find that: $$ U^{-1}AU = \left( \begin{matrix} 1&0&0 \\ 0& 1&0\\ 0& 0& 0 \end{matrix} \right)$$ and $$ U^{-1}BU = \left( \begin{matrix} 1&0&0 \\ 0& -1&0\\ 0& 0& 1 \end{matrix} \right), $$ which are both diagonal matrices.
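Quickly verified numerically (a NumPy sketch; the columns of $U$ are the three eigenvectors listed in the answer):

```python
import numpy as np

A = np.diag([1.0, 0.0, 1.0])
B = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])

U = np.array([[1.0,  1.0, 0.0],
              [0.0,  0.0, 1.0],
              [1.0, -1.0, 0.0]])   # columns: (1,0,1), (1,0,-1), (0,1,0)
Ui = np.linalg.inv(U)

print(np.round(Ui @ A @ U, 12))   # diag(1, 1, 0)
print(np.round(Ui @ B @ U, 12))   # diag(1, -1, 1)
```

Reading off the diagonals column by column, the common eigenvectors carry the eigenvalue pairs $(a,b) = (1,1)$, $(1,-1)$ and $(0,1)$, which resolves the degeneracy in each operator separately.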
{ "language": "en", "url": "https://physics.stackexchange.com/questions/723538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Having trouble understanding Re-derivation of Young’s Equation I was going over section 2.1 of this article regarding Young's Equation, and for some reason I wasn't able to derive E10 from E9 (see image below). I know it is more about the trigonometry of the problem, but when I tried to derive it myself it became cumbersome real fast and I didn't get to E10. I would appreciate it if someone could help me with that.
The first parenthesis in E9 is simplified into $$ \begin{aligned} &\sin^2\theta + \cos\theta \frac{-(1 - \cos\theta)(2 + \cos\theta)}{1 + \cos\theta} \\ = &(1 - \cos^2\theta) - \cos\theta \frac{2 - \cos\theta - \cos^2\theta}{1 + \cos\theta} \\ = &1 - \cos\theta \frac{\cos\theta(1 + \cos\theta) + (2 - \cos\theta - \cos^2\theta)}{1 + \cos\theta} \\ = &1 - \cos\theta \frac{2}{1 + \cos\theta} \\ = &\frac{1 - \cos\theta}{1 + \cos\theta} \end{aligned} $$ The second parenthesis in E9 is simplified into $$ \begin{aligned} &2(1 - \cos\theta) - \frac{(1 - \cos\theta)(2 + \cos\theta)}{1 + \cos\theta} \\ = &\frac{2 (1-\cos\theta)(1+\cos\theta) - (1-\cos\theta)(2+\cos\theta)}{1+\cos\theta} \\ = &\frac{2 - 2\cos^2\theta - 2 + \cos\theta + \cos^2\theta}{1+\cos\theta} \\ = &\frac{-\cos^2\theta + \cos\theta}{1 + \cos\theta} \\ = &\cos\theta \frac{1 - \cos\theta}{1 + \cos\theta} \end{aligned} $$ Now use these simplifications in the E9 $$(\gamma_{sl} - \gamma_{so}) \frac{1 - \cos\theta}{1 + \cos\theta} + \gamma \cos\theta \frac{1 - \cos\theta}{1 + \cos\theta} = 0$$ $$\frac{1 - \cos\theta}{1 + \cos\theta} \Bigl( (\gamma_{sl} - \gamma_{so}) + \gamma \cos\theta \Bigr) = 0$$ I do not know specifics about the above equation, but from purely mathematical point of view it has a singularity at $\theta = \pi$, and for $\theta = 0$ you cannot assume the term in parenthesis to be equal to zero. If we neglect these two special cases for $\theta$ the Eq. E9 is simplified into $$\boxed{\gamma_{sl} + \gamma \cos\theta = \gamma_{so}} \tag {E10}$$
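Both simplifications can be checked symbolically (a SymPy sketch, with $\sin^2\theta$ rewritten as $1-\cos^2\theta$ so everything is rational in $\cos\theta$):

```python
import sympy as sp

th = sp.symbols('theta')
g, gsl, gso = sp.symbols('gamma gamma_sl gamma_so')
c = sp.cos(th)

# first parenthesis of E9 (sin²θ = 1 − cos²θ)
p1 = (1 - c**2) - c*(1 - c)*(2 + c)/(1 + c)
# second parenthesis of E9
p2 = 2*(1 - c) - (1 - c)*(2 + c)/(1 + c)

print(sp.simplify(p1 - (1 - c)/(1 + c)))      # 0
print(sp.simplify(p2 - c*(1 - c)/(1 + c)))    # 0

E9 = (gsl - gso)*p1 + g*p2
print(sp.simplify(E9 - (1 - c)/(1 + c)*(gsl - gso + g*c)))   # 0
```

The common prefactor $\frac{1-\cos\theta}{1+\cos\theta}$ then drops out for $0<\theta<\pi$, leaving exactly E10.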
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Verification of the Poincare Algebra The generators of the Poincare group $P(1;3)$ are supposed to obey the following commutation relation to be verified: $$\left[ M^{\mu\nu}, P^{\rho} \right] = i \left(g^{\nu\rho} P^{\mu} - g^{\mu\rho} P^{\nu} \right)$$ where $M^{\mu\nu}$ are the 6 generators of the Lorentz group and $P^\mu$ are the 4 generators of the four-dimensional translation group $T(4)$. For $\mu = 3, \nu=1, \rho=0$ the LHS becomes: $ [M^{31},P^{0}] = M^{31}P^{0} - P^{0}M^{31}$. Here $M^{31} = J^2 = -J_2= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & 0 & 0 \\ 0 & i & 0 & 0 \end{pmatrix}$ and $ P^0 = P_0 = -i \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$. My question is: how can I multiply $M^{31}$ and $P^0$ when they are $4\times4$ and $5\times 5$ matrices respectively?
Consider $$M_{31} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \text{ and } P_0 = -i \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},$$ Then the commutator vanishes! As expected from $\left[ M_{31}, P_{0} \right] = i \left(g_{10} P_{3} - g_{30} P_{1} \right) = 0$. If you take $$M_{01} = \begin{pmatrix} 0 & i & 0 & 0 & 0 \\ i & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \text{ and } P_0 = -i \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},$$ then $$ \left[M_{01},P_0\right] = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} = i P_1, $$ which matches $\left[M_{01},P_0\right] = i\left(g_{10}P_0 - g_{00}P_1\right) = iP_1$ for the mostly-plus signature $g_{00} = -1$. And so on!
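Both commutators can be multiplied out directly with the $5\times5$ matrices (a NumPy sketch; note the overall factor of $i$ multiplying $P_1$ in the second commutator depends on the signature convention adopted for the algebra):

```python
import numpy as np

def E(i, j, n=5):
    # matrix unit with a single 1 at row i, column j
    m = np.zeros((n, n), dtype=complex)
    m[i, j] = 1.0
    return m

M31 = 1j*(E(3, 1) - E(1, 3))   # rotation generator, embedded in 5x5
M01 = 1j*(E(0, 1) + E(1, 0))   # boost generator in the t-x plane
P0, P1 = -1j*E(0, 4), -1j*E(1, 4)

def comm(a, b):
    return a @ b - b @ a

print(np.allclose(comm(M31, P0), 0))        # True: [M31, P0] = 0
print(np.allclose(comm(M01, P0), 1j*P1))    # True: [M01, P0] is proportional to P1
```

This $5\times5$ embedding, with the Lorentz block in the upper-left $4\times4$ and the translations in the last column, is what makes the mixed-size multiplication in the question well defined.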
{ "language": "en", "url": "https://physics.stackexchange.com/questions/127690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does this imaginary number mean for time and velocity? As some have pointed out in the chat, perhaps the question that I should have asked is, am I really integrating for velocity? My integration might be misleading in that it integrates for something but probably not velocity. Velocity could not be in the same direction as acceleration unless the object is already at terminal velocity - in which case acceleration = 0 and it therefore has no direction. A paradox, which perhaps explains my imaginary numbers. I believe that I have understood my conceptual misunderstanding. In regards to 2D kinematics, I essentially did partial fractions and integrated for time as a function of velocity, and suddenly, I have to deal with an imaginary number! My specific problem is not so much about my working out - it's about why I am obtaining imaginary numbers. Is this inherent to the coefficient of drag and lift equations? I can't see why it would be. I initially had multiple acceleration functions in terms of velocity, using the equations for coefficient of drag, lift and gravity. \begin{align} ma_x &= F_L\sin\theta-F_D\cos\theta \\ ma_y&=mg-F_L\cos\theta-F_D\sin\theta \end{align} with $$ F=C\frac{pv^2}{2}A. $$ I broke this down into horizontal ($h$) and vertical components ($l$). My third line comes from expanding and simplifying the equations for the coefficients of drag, lift and gravity. $$ \text{acceleration}^2 = h^2 +l^2$$ $$ \frac{dv}{dt} = \sqrt{h^2 +l^2 }$$ $$\sqrt{h^2 +l^2 } = \sqrt{av^4 -bv^2 + \frac{c}{a} }$$ $v$ is velocity, $t$ is time, and all other letters are pre-determined constants. Rearrange the equation to produce integrals and isolate $\mathrm dt$: $$\begin{align}\int \left ( \frac{\mathrm dv}{\sqrt{av^4 -bv^2 + \frac{c}{a}}} \right ) &= \int \mathrm dt \\ \end{align} \\ \frac{\mathrm dv}{\sqrt{a[\left(v^2 - \frac{b}{2a}\right)^2 - \left(\frac{b}{2a}\right)^2 + \frac{c}{a}]}} \ $$ In order to keep things easier to read, substitute a value for $D$. 
$$D = \left(\frac{b}{2a}\right)^2 - \frac{c}{a}$$ $$ \\ \frac{\mathrm dv}{\sqrt{a[\left(v^2 - \frac{b}{2a}\right)^2 - D]}} \\ \frac{1}{\sqrt{a}} \frac{\mathrm dv}{\sqrt{(v^2 - \frac{b}{2a})^{2} - D}} $$ Pretend for a moment that constant $1/\sqrt a$ is not there, to make it easier to write. $$ \frac{\mathrm dv}{((v^2 - \frac{b}{2a}) + D^{1/2})^{1/2}\cdot((v^2 - \frac{b}{2a}) - D^{1/2})^{1/2}} $$ Partial fractions. $$ \int \frac{A}{((v^2 - \frac{b}{2a}) + D^{1/2})^{1/2}} + \frac{B}{((v^2 - \frac{b}{2a}) - D^{1/2})^{1/2}} \mathrm dv = t $$ Equate coefficients. $$ A\sqrt{-2\sqrt{D}} = 1. $$ This is what makes us have the imaginary number. $$ A = \frac{1}{\sqrt{-2\sqrt{D}}} $$ and $$ B = \frac{1}{(4D)^{1/4}}. $$ I did everything else on Wolfram Alpha, including the integration. The result is, $$ \frac{\log\left(\sqrt{2}\sqrt{\frac{-2a\sqrt{D}+2av^2-b}{a}}+2v\right)}{\sqrt{2}\sqrt[4]{D}}+\frac{\log\left(\sqrt{2}\sqrt{\frac{2a\sqrt{D}+2av^2-b}{a}}+2v\right)}{\sqrt{2}\sqrt{-\sqrt D}}=t $$
Complex velocity doesn't make sense in physics so you have to choose the parameters $a,b,c$ so you don't get an imaginary velocity. \begin{align*} & \sqrt{a v^4- b v^2+\frac{c}{a}}\quad\Rightarrow\quad a v^4- b v^2+\frac{c}{a} \ge 0\\ &v^2 \mapsto x\quad\Rightarrow\\ &g_1=a x^2- b x+\frac{c}{a} \ge 0\\ &g_1=a\,(x-\tau_1)\,(x-\tau_2)\ge 0\quad\text{with:}\\ &\tau_1=\frac{b}{2\,a}+\frac{1}{2\,a}\,\sqrt{b^2-4\,c}\\ &\tau_2=\frac{b}{2\,a}-\frac{1}{2\,a}\,\sqrt{b^2-4\,c}\\ &\quad \Rightarrow\quad \\&b^2-4\,c \ge 0\quad b\ge 2\,\sqrt{c} \,\quad c \ge 0\\ &\text{with:}\quad v^2=x\quad\Rightarrow\quad x > 0&\\\quad \Rightarrow\\ &g_1\ge 0\quad \,\Rightarrow\quad \\\\&x - \tau_1\ge 0 \quad\text{and}\quad x-\tau_2\ge 0 \\&\text{or}\\ &x- \tau_1\le 0 \quad\text{and}\quad x- \tau_2\le 0 \end{align*} Consequences: \begin{align*} &c \ge 0\\ &b \ge 2\sqrt{c}\\ &a > 0\,,\text{$x$ must be positive !!}\\\\ & \tau_1 \le x < \infty \end{align*} Example: \begin{align*} &c=3\,,b=2\sqrt{3}+3\,,a=2\\ &\Rightarrow\\ &\tau_1=2.98\,,\tau_2=0.25\\ &x=4 > \tau_1\,\Rightarrow\quad g_1=7.6 > 0\\ &v=\sqrt{x}=2\,,\checkmark \end{align*}
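The example at the end is easy to check numerically (a small Python sketch of the numbers above):

```python
import math

# constants from the example
c = 3.0
b = 2 * math.sqrt(3) + 3
a = 2.0

disc = math.sqrt(b**2 - 4 * c)
tau1 = (b + disc) / (2 * a)
tau2 = (b - disc) / (2 * a)
print(round(tau1, 2), round(tau2, 2))    # 2.98 0.25

x = 4.0                                  # any x above tau1 works
g1 = a * x**2 - b * x + c / a
print(round(g1, 1), math.sqrt(x))        # 7.6 2.0 -- real velocity
```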
{ "language": "en", "url": "https://physics.stackexchange.com/questions/411362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The Electromagnetic Tensor and Minkowski Metric Sign Convention I am trying to figure out how to switch between Minkowski metric tensor sign conventions of (+, -, -, -) to (-, +, +, +) for the electromagnetic tensor $F^{\alpha \beta}$. For the convention of (+, -, -, -) I know the contravariant and covariant forms of the electromagnetic tensor are: $$ F^{\alpha \beta} = \begin{bmatrix} 0 & -\frac{E_{x}}{c} & -\frac{E_{y}}{c} & -\frac{E_{z}}{c} \\ \frac{E_{x}}{c} & 0 & -B_{z} & B_{y} \\ \frac{E_{y}}{c} & B_{z} & 0 & -B_{x} \\ \frac{E_{z}}{c} & -B_{y} & B_{x} & 0 \\ \end{bmatrix} $$ and $$ F_{\alpha \beta} = \eta_{\alpha \mu} F^{\mu \nu} \eta_{\nu \beta} = \begin{bmatrix} 0 & \frac{E_{x}}{c} & \frac{E_{y}}{c} & \frac{E_{z}}{c} \\ -\frac{E_{x}}{c} & 0 & -B_{z} & B_{y} \\ -\frac{E_{y}}{c} & B_{z} & 0 & -B_{x} \\ -\frac{E_{z}}{c} & -B_{y} & B_{x} & 0 \\ \end{bmatrix}. $$ Now for the convention of (-, +, +, +) are the contravariant and covariant forms of the electromagnetic tensor just switched from above along with signs?: $$ F^{\alpha \beta}= \begin{bmatrix} 0 & \frac{E_{x}}{c} & \frac{E_{y}}{c} & \frac{E_{z}}{c} \\ -\frac{E_{x}}{c} & 0 & B_{z} & -B_{y} \\ -\frac{E_{y}}{c} & -B_{z} & 0 & B_{x} \\ -\frac{E_{z}}{c} & B_{y} & -B_{x} & 0 \\ \end{bmatrix} $$ and $$ F_{\alpha \beta} = \eta_{\alpha \mu} F^{\mu \nu} \eta_{\nu \beta} = \begin{bmatrix} 0 & -\frac{E_{x}}{c} & -\frac{E_{y}}{c} & -\frac{E_{z}}{c} \\ \frac{E_{x}}{c} & 0 & B_{z} & -B_{y} \\ \frac{E_{y}}{c} & -B_{z} & 0 & B_{x} \\ \frac{E_{z}}{c} & B_{y} & -B_{x} & 0 \\ \end{bmatrix}~? $$ Basically, I am trying to figure out how to switch between the two sign conventions.
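For the first convention, the index lowering can be verified symbolically (a SymPy sketch). Note also that the two metrics differ only by an overall sign, which enters twice in $\eta_{\alpha \mu} F^{\mu \nu} \eta_{\nu \beta}$ and therefore cancels:

```python
import sympy as sp

Ex, Ey, Ez, Bx, By, Bz, c = sp.symbols('E_x E_y E_z B_x B_y B_z c')

# contravariant tensor in the (+,-,-,-) convention quoted above
F_up = sp.Matrix([
    [0,     -Ex/c, -Ey/c, -Ez/c],
    [Ex/c,  0,     -Bz,    By  ],
    [Ey/c,  Bz,    0,     -Bx  ],
    [Ez/c, -By,    Bx,     0   ],
])

eta_pm = sp.diag(1, -1, -1, -1)
eta_mp = sp.diag(-1, 1, 1, 1)

F_down = eta_pm * F_up * eta_pm
print(F_down[0, 1], F_down[1, 2])          # E_x/c  -B_z, as stated
print(eta_mp * F_up * eta_mp == F_down)    # True: the two sign flips cancel
```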
I use this way: \begin{equation}\tag{1} F_{ab} = \partial_a \, A_b - \partial_b \, A_a, \end{equation} where \begin{equation}\tag{2} A^a = (\phi, \, A_x, \, A_y, \, A_z), \qquad\qquad A_a = (\phi, - A_x, - A_y, - A_z). \end{equation} Then, we have: \begin{align} E_i &= \Big( -\, \vec{\nabla} \, \phi - \frac{\partial \vec{A}}{\partial t} \Big)_i, \tag{3} \\[12pt] B_i &= (\vec{\nabla} \times \vec{A})_i. \tag{4} \end{align} Definition (1) and sign convention (2) imply \begin{equation}\tag{5} F_{0 i} = \partial_0 \, A_i - \partial_i \, A_0 \equiv E_i. \end{equation} Also: $F^{0 i} = -\, E_i$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/476673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Landau-Lifshitz Equation of Motion for Triangular Heisenberg Antiferromagnet There is a paper (PhysRevB.95.014435) in which the dispersion relation for some Heisenberg model on the honeycomb lattice is derived from the Landau-Lifshitz equation: \begin{align} \frac{d S_i}{dt} = - S_i \times \mathcal H_{\rm eff} \end{align} Their attempt from Eq. 2 to Eq.4 is pretty simple and I'll try the same for the 2D triangular Heisenberg antiferromagnet (THAF) (in xy-plane), which has a much simpler Hamiltonian: \begin{align} \mathcal H = \sum_{\langle {ij}\rangle } J S_i S_j,\quad \mathcal H_{\rm eff} = J \sum_j S_j \end{align} where $\langle {ij}\rangle$ sums over all nearest neighbors. There are some papers out there (for example PhysRevB.74.180403) which have derived the dispersion to be \begin{align} \omega_{\bf k} = \sqrt{(1- \gamma_{\bf k} ) ( 1+ 2 \gamma_{\bf k} ) } \label{eq:thaf_disp} \end{align} with \begin{align} \gamma_{\bf k} = \frac{1}{z} \sum_{j} \mathrm{e}^{i \bf{k}( \bf{R}_i - \bf{R}_j )} = \frac{1}{3}\left(\cos k_{x}+2 \cos \frac{k_{x}}{2} \cos \frac{\sqrt{3}}{2} k_{y}\right) \, . \end{align} The ground-state of the THAF is the $120^{\circ}$-Neel order. My idea is similar to the derivation in Linear Spin Wave Theory and I'm starting by some rotation of spin vectors \begin{align} S_{i \in A} &= (\delta m_i^{x}, \delta m_i^{y}, 1) \\ S_{i \in B } &= ( \sqrt{3}/2 \delta m_i^{y} - 1/2 \delta m_i^{x}, -\sqrt{3}/2 \delta m_i^{x} - 1/2 \delta m_i^{y}, 1) \\ S_{i \in C} &= ( -\sqrt{3}/2 \delta m_i^{y} - 1/2 \delta m_i^{x}, \sqrt{3}/2 \delta m_i^{x} - 1/2 \delta m_i^{y}, 1) \end{align} where A,B,C are the three sublattices of the ground-state and $\delta m \ll 1$ . 
Then I tried to solve the Landau-Lifshitz equation: \begin{align*} \frac{d S_{i \in A}}{dt} &=- \begin{pmatrix} \delta m_i^{x} \\ \delta m_i^{y} \\ 1 \end{pmatrix} \times \left(\sum_j J S_{j\in B} + J S_{j \in C}\right) =- \sum_j J \begin{pmatrix} \delta m_i^{x} \\ \delta m_i^{y} \\ 1 \end{pmatrix} \times \begin{pmatrix} - \delta m_j^{x} \\ - \delta m_j^{y} \\ 2 \end{pmatrix} \approx - \sum_jJ \begin{pmatrix} \delta m_j^{y} + 2 \delta m_i^{y} \\ - \delta m_j^{x} - 2 \delta m_i^{x} \\ 0 \end{pmatrix} \\ \frac{d S_{i \in B}}{d t} &= -\begin{pmatrix} \frac{\sqrt{3}}{2} \delta m_i^{y} - \frac{1}{2}\delta m_i^{x} \\ -\frac{\sqrt{3}}{2} \delta m_i^{x} - \frac{1}{2} \delta m_i^{y} \\ 1 \end{pmatrix} \times \left(\sum_j J S_{j \in A} + J S_{j \in C} \right) \\ &= - \sum_j J \begin{pmatrix} \frac{\sqrt{3}}{2} \delta m_i^{y} - \frac{1}{2} \delta m_i^{x} \\ -\frac{\sqrt{3}}{2} \delta m_i^{x} - \frac{1}{2} \delta m_i^{y} \\ 1 \end{pmatrix} \times \begin{pmatrix} \frac{1}{2} \delta m_j^{x} - \frac{\sqrt{3}}{2} \delta m_j^{y} \\ \frac{\sqrt{3}}{2} \delta m_j^{x} + \frac{1}{2} \delta m_j^{y} \\ 2 \end{pmatrix} \approx - \sum_j J \begin{pmatrix} -(\sqrt{3} \delta m_i^{x} + \delta m_i^{y}) - ( \frac{\sqrt{3}}{2} \delta m_j^{x} + \frac{1}{2} \delta m_j^{y} ) \\ \frac{1}{2} \delta m_j^{x} - \frac{\sqrt{3}}{2} \delta m_j^{y} - (\sqrt{3} \delta m_i^{y} - \delta m_i^{x}) \\ 0 \end{pmatrix} \\ &=\sum_j J\begin{pmatrix} \frac{\sqrt{3}}{2} (2 \delta m_i^{x} + \delta m_j^{x} ) + \frac{1}{2}(2 \delta m_i^{y} +\delta m_j^{y} ) \\ \frac{\sqrt{3}}{2} (2\delta m_i^{y} + \delta m_j^{y} ) -\frac{1}{2} (2\delta m_i^{x} + \delta m_j^{x} ) \\ 0 \end{pmatrix} \\ \frac{d S_{i \in C}}{d t} &= - \sum_j \begin{pmatrix} -\frac{\sqrt{3}}{2} \delta m_i^{y} - \frac{1}{2} \delta m_i^{x} \\ \frac{\sqrt{3}}{2} \delta m_i^{x} - \frac{1}{2} \delta m_i^{y} \\ 1 \end{pmatrix} \times \begin{pmatrix} \frac{\sqrt{3}}{2} \delta m_j^{y} + \frac{1}{2} \delta m_j^{x} \\ -\frac{\sqrt{3}}{2} \delta m_j^{x} + \frac{1}{2} 
\delta m_j^{y} \\ 2 \end{pmatrix} \approx - \sum_j J \begin{pmatrix} \sqrt{3} \delta m_i^{x} - \delta m_i^{y} - (-\frac{\sqrt{3}}{2} \delta m_j^{x} + \frac{1}{2} \delta m_j^{y}) \\ (\frac{\sqrt{3}}{2} \delta m_j^{y} + \frac{1}{2} \delta m_j^{x}) + \sqrt{3} \delta m_i^{y} + \delta m_i^{x} \\ 0 \end{pmatrix} \\ &= \sum_j J \begin{pmatrix} \frac{1}{2} (2\delta m_i^{y} + \delta m_j^{y}) - \frac{\sqrt{3}}{2} (2 \delta m_i^{x} + \delta m_j^{x}) \\ - \frac{\sqrt{3}}{2} (2\delta m_i^{y} + \delta m_j^{y}) - \frac{1}{2} (2\delta m_i^{x} + \delta m_j^{x}) \\ 0 \end{pmatrix} \end{align*} By using the Bloch theorem: \begin{align} \delta m_i^{x} = X \exp(i \left( \bf{k} \bf{R}_i - \omega t \right) ), \quad \delta m_i^{y} = Y \exp(i \left( \bf{k} \bf{R}_i - \omega t \right) ) \end{align} Since I now have only one sublattice, I don't need $X_A$, $X_B$ and $X_C$ etc. like in the paper. If you compare the left-hand and right-hand sides of those equations of motion, they all have the same structure. This structure looks like \begin{align} i \omega \begin{pmatrix} X \\ Y \end{pmatrix} \mathrm{e}^{i (\bf{k} \bf{R}_i - \omega t)} = \sum_j J \begin{pmatrix} - 2 Y \\ 2X \end{pmatrix}\mathrm{e}^{i (\bf{k} \bf{R}_i - \omega t)} + \sum_j J\begin{pmatrix} -Y \\ X \end{pmatrix} \mathrm{e}^{i (\bf{k} \bf{R}_j - \omega t)} \end{align} where the Bloch theorem is already used. This would then lead to the following matrix \begin{align} i \omega \begin{pmatrix} X \\ Y \end{pmatrix} = J \begin{pmatrix} 0 & -2 - \gamma_k \\ 2 + \gamma_k & 0 \end{pmatrix} \begin{pmatrix} X \\ Y \end{pmatrix} = H \begin{pmatrix} X \\ Y \end{pmatrix} \end{align} The paper suggested using $\psi^{\pm} = (X\pm iY)/\sqrt{2}$. 
This can be achieved by the matrix \begin{align} U = \begin{pmatrix} 1 & i \\ 1 & -i \end{pmatrix} \end{align} and by calculating $i/2 \sigma_z UHU^{-1}$ I ended up with a Hermitian matrix which uses $\psi^{\pm}$ as the amplitudes, as suggested in the paper above: \begin{align} \begin{pmatrix} - \gamma_k - 2 & 0 \\ 0 & \gamma_k + 2 \end{pmatrix} \end{align} which would lead to $\omega_k = \pm \sqrt{(\gamma_k + 2)^2}$. This is obviously wrong, but I cannot figure out where my mistake is or where my thinking goes wrong.
I see two possible problems in your consideration. 1. You've investigated perturbations of the ferromagnetic ground state. When the spin variations $\delta m$ are zero, the spins on the three sublattices are the same: $$ S_i = (0, 0, 1),\quad \forall i. $$ 2. The Landau-Lifshitz equation is a nonlinear one. The effective field ${\cal H}_{i,{\rm eff}}$ depends on neighboring spins. Hence you need to take into account variations of the effective field: $$ \frac{d \delta S_i}{dt} = -\delta S_i \times {\cal H}_{i,{\rm eff}} - S_i \times \delta {\cal H}_{i,{\rm eff}}. $$ I didn't analyze your application of the Bloch theorem; I think there could also be problems there. The Néel state on the triangular lattice is invariant under translation of states of triangular cells of spins, not of individual spins.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/589583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Problem with the proof that for every timelike vector there exists an inertial coordinate system in which its spatial coordinates are zero I am reading lecture notes on special relativity and I have a problem with the proof of the following proposition. Proposition. If $X$ is timelike, then there exists an inertial coordinate system in which $X^1 = X^2 = X^3 = 0$. The proof states that as $X$ is timelike, it has components of the form $(a, p\,\mathbf{e})$, where $\mathbf{e}$ is a unit spatial vector and $\lvert a \rvert > \lvert p \rvert$. Then one considers the following four four-vectors: \begin{align*} \frac{1}{\sqrt{a^2 - p^2}}(a, p\,\mathbf{e}) & & \frac{1}{\sqrt{a^2 - p^2}}(p, a\,\mathbf{e}) & & (0, \mathbf{q}) & & (0, \mathbf{r})\,, \end{align*} where $\mathbf{q}$ and $\mathbf{r}$ are chosen so that $(\mathbf{e}, \mathbf{q}, \mathbf{r})$ form an orthonormal triad in Euclidean space. Then the proof concludes that these four-vectors define an explicit Lorentz transformation and stops there. For me this explicit Lorentz transformation is represented by the following matrix. \begin{bmatrix} \frac{1}{\sqrt{a^2 - p^2}} a & \frac{p}{\sqrt{a^2 - p^2}} & 0 & 0 \\ \frac{p}{\sqrt{a^2 - p^2}} e^1 & \frac{a}{\sqrt{a^2 - p^2}} e^1 & q^1 & r^1 \\ \frac{p}{\sqrt{a^2 - p^2}} e^2 & \frac{a}{\sqrt{a^2 - p^2}} e^2 & q^2 & r^2 \\ \frac{p}{\sqrt{a^2 - p^2}} e^3 & \frac{a}{\sqrt{a^2 - p^2}} e^3 & q^3 & r^3 \\ \end{bmatrix} However, multiplying the column vector $(X^0, X^1, X^2, X^3)$ by the matrix above does not seem to yield a column vector whose spatial components are zero. What did I miss?
Actually, your matrix can be greatly simplified as $$ M = \begin{bmatrix} \frac{1}{\sqrt{a^2 - p^2}} a & \frac{p}{\sqrt{a^2 - p^2}} & 0 & 0 \\ \frac{p}{\sqrt{a^2 - p^2}} & \frac{a}{\sqrt{a^2 - p^2}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$ since $(\mathbf{e}, \mathbf{q}, \mathbf{r})$ forms an orthonormal triad, hence $e_1 = q_2 = r_3 = 1$ and the other coefficients are zero. Now, as pointed out by Valter Moretti in his comment, the matrix you're looking for is the inverse of this matrix. An easy calculation gives the inverse. $$ M^{-1} = \frac{1}{(a^2 - p^2)^{\frac{3}{2}}} \begin{bmatrix} a(a^2 - p^2) & -p(a^2 - p^2) & 0 & 0 \\ -p(a^2 - p^2) & a(a^2 - p^2) & 0 & 0 \\ 0 & 0 & (a^2 - p^2)^{\frac{3}{2}} & 0 \\ 0 & 0 & 0 & (a^2 - p^2)^{\frac{3}{2}} \end{bmatrix} $$ Finally, one easily checks the result as follows. $$ M^{-1} \times \begin{bmatrix} a \\ p \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} \sqrt{a^2 - p^2} \\ 0 \\ 0 \\ 0 \end{bmatrix} $$ Note that taking $\mathbf{q}$ and $\mathbf{r}$ such that $(\mathbf{e}, \mathbf{q}, \mathbf{r})$ forms an orthonormal triad is equivalent to doing a spatial rotation such that only $X^1$ is nonvanishing. Thus, a more geometric, less algebraic, proof will start with a spatial rotation then will proceed with a boost along the $x$-axis.
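A quick numerical sanity check of the inverse (a NumPy sketch, with arbitrary timelike numbers $|a| > |p|$):

```python
import numpy as np

a, p = 2.0, 0.7                       # any timelike choice, |a| > |p|
g = 1.0 / np.sqrt(a**2 - p**2)

M = np.array([
    [g * a, g * p, 0, 0],
    [g * p, g * a, 0, 0],
    [0,     0,     1, 0],
    [0,     0,     0, 1],
])

X = np.array([a, p, 0.0, 0.0])
Y = np.linalg.inv(M) @ X
print(Y)                              # [sqrt(a^2 - p^2), 0, 0, 0]
print(np.isclose(Y[0], np.sqrt(a**2 - p**2)))
```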
{ "language": "en", "url": "https://physics.stackexchange.com/questions/592938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Dimensional regularization: order of integration This is a two-loop calculation in dim reg where I seem to be getting different results by integrating it in different orders. I am expanding it about $D=1$. What rule am I breaking? $$\int \frac{d^{D} p}{(2\pi)^D}\frac{d^{D}q}{(2\pi)^D}\frac{p^2+4m^2}{(q^2+m^2)((q-p)^2+m^2)}=?$$ If we integrate $q$ first, the inner integral converges in $D=1$ $$\int \frac{d^{D}q}{(2\pi)^D}\frac{1}{(q^2+m^2)((q-p)^2+m^2)}=\frac{1}{m}\frac{1}{p^2+4m^2}$$ then integrating over $p$, by the rules of dimensional regularization we have $$\int \frac{d^{D} p}{(2\pi)^D}\frac{1}{m}\frac{p^2+4m^2}{p^2+4m^2}=0.$$ If we integrate $p$ first we have $$\int \frac{d^{D} p}{(2\pi)^D}\frac{p^2+4m^2}{(q-p)^2+m^2}=\int \frac{d^{D} u}{(2\pi)^D}\frac{u^2+2u\cdot q +q^2+4m^2}{u^2+m^2}=\frac{q^2+4m^2}{2m}+\int \frac{d^{D} u}{(2\pi)^D}\frac{u^2}{u^2+m^2}$$ where in the last equality we threw away the $u\cdot q$ term by symmetric integration, and split off the terms in the numerator that converge in $D=1$. The remaining term is a common integral in dim reg (though perhaps the limit $D=1$ is not) $$\int \frac{d^{D} u}{(2\pi)^D}\frac{u^2}{u^2+m^2}=\frac{1}{(4\pi)^{D/2}}\frac{D}{2}\Gamma(-D/2)(m^2)^{D/2}= -\frac{m^2}{2m}$$ where we used $\Gamma(-1/2)=-2\sqrt{\pi}$. Now integrating the outer integral over $q$ using the same rules discussed above, $$\int \frac{d^{D} q}{(2\pi)^D}\frac{1}{2m}\frac{q^2+3m^2}{q^2+m^2}=\frac{1}{2}\neq 0$$ What part is invalid?
I'd say, when you set $$ \int \frac{d^{D}q}{(2\pi)^D}\frac{1}{(q^2+m^2)((q-p)^2+m^2)}=\frac{1}{m}\frac{1}{p^2+4m^2} $$ the correct answer actually has a $+\mathcal O(d-1)$ piece. The $p$ integral has $1/(d-1)$ divergences which, when multiplied by the missing subleading piece, leaves a finite contribution. We can do the integrals exactly in $d$. For the $q$ integral we combine denominators à la Feynman, and do the linear shift $q\to q+(1-x)p$. We get $$ \frac{2 \pi ^{d/2}}{(2 \pi )^d \Gamma \left(\frac{d}{2}\right)}\int_0^\infty\frac{q^{d-1}}{\left(m^2-p^2 (x-1) x+q^2\right)^2}\,\mathrm dq=\frac{2^{-d-1} (2-d) \pi ^{1-\frac{d}{2}} \csc \left(\frac{\pi d}{2}\right) }{\Gamma \left(\frac{d}{2}\right)}\left(\frac{1}{m^2-p^2 (x-1) x}\right)^{2-\frac{d}{2}} $$ Next we evaluate the $p$ integral: \begin{align} -\frac{2^{-d-1} (d-2) \pi ^{1-\frac{d}{2}} \left(2 \pi ^{d/2} \csc \left(\frac{\pi d}{2}\right)\right)}{(2 \pi )^d \Gamma \left(\frac{d}{2}\right)^2}\int_0^\infty p^{d-1} \left(4 m^2+p^2\right) \left(\frac{1}{m^2-p^2 (x-1) x}\right)^{2-\frac{d}{2}}\,\mathrm dp=\\ =-2^{-2 d-1} d \pi ^{-d} m^{2 d-2} (-((x-1) x))^{-\frac{d}{2}-1} (8 d (x-1) x+d-8 (x-1) x) \Gamma (-d) \end{align} Finally, we perform the $x$ integral: \begin{align} \frac{2^{-2 d-1} (d-2) \pi ^{1-d} m^{2 d-2} \left(\csc \left(\frac{\pi d}{2}\right) \Gamma \left(\frac{d}{2}+1\right) \Gamma (-d)\right)}{\Gamma \left(2-\frac{d}{2}\right) \Gamma \left(\frac{d}{2}\right)^2}\int_0^1((1-x) x)^{-\frac{d}{2}-1} (-8 d (1-x) x+d+8 (1-x) x)\,\mathrm dx=\\ \color{red}{\frac{2^{1-2 d} \pi ^{2-d} m^{2 d-2} \csc ^2\left(\frac{\pi d}{2}\right)}{\Gamma \left(\frac{d}{2}\right)^2}} \end{align} For $d\to 1$ you can expand this as $$ =\frac12+\big(\log\frac{m}{\pi}+\frac12\gamma\big)(d-1)+O(d-1)^2 $$ where I haven't bothered to include the standard $\mu^\epsilon$ scale to get a dimensionally consistent series. We correctly reproduce the leading $1/2$ result.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/619289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Balmer proportionality How did Johannes Balmer arrive at $$ \lambda \propto \frac{n^2}{n^2-4}, \quad (n=3,4,\dots), $$ and then how did Rydberg mathematically derive $$ \frac{1}{\lambda}=R\left(\frac{1}{n^2_1}-\frac{1}{n^2_2}\right)? $$ I know $n$ stands for the shells, but the textbook doesn't define what $n$ is at first. Was this because Balmer did not know what shells were at that time?
I recommend reading Balmer's original paper "Notiz über die Spektrallinien des Wasserstoffs" (1885). Balmer took the known wavelengths of the visible hydrogen spectrum ($H_\alpha$, $H_\beta$, $H_\gamma$, $H_\delta$) as measured by Ångström with high precision. He recognized they are related by certain fractions. $$\begin{array}{c|c c c} & \lambda \\ \hline H_\alpha & 656.2 \text{ nm} &= 364.56 \text{ nm} \cdot \frac{9}{5} &= 364.56 \text{ nm} \cdot \frac{3^2}{3^2-4} \\ \hline H_\beta & 486.1 \text{ nm} &= 364.56 \text{ nm} \cdot \frac{4}{3} &= 364.56 \text{ nm} \cdot \frac{4^2}{4^2-4} \\ \hline H_\gamma & 434.0 \text{ nm} &= 364.56 \text{ nm} \cdot \frac{25}{21} &= 364.56 \text{ nm} \cdot \frac{5^2}{5^2-4} \\ \hline H_\delta & 410.1 \text{ nm} &= 364.56 \text{ nm} \cdot \frac{9}{8} &= 364.56 \text{ nm} \cdot \frac{6^2}{6^2-4} \end{array}$$ This could be summarized in one formula. $$\lambda=364.56 \text{ nm} \cdot \frac{n^2}{n^2-4} \quad\text{with }n=3,4,5,6$$ You see, there was no physics involved here, "only" guessing a formula which exactly fits the experimentally measured numbers. Rydberg rewrote Balmer's formula using the reciprocal wavelength because then it gets the simpler form of a difference between two terms. $$\frac{1}{\lambda}=\frac{1}{91.13\text{ nm}}\left(\frac{1}{2^2}-\frac{1}{n^2}\right) \quad\text{with }n=3,4,5,6,...$$ He predicted there would be even more spectral lines in the hydrogen spectrum according to this Rydberg formula (1888). $$\frac{1}{\lambda}=\frac{1}{91.13\text{ nm}}\left(\frac{1}{m^2}-\frac{1}{n^2}\right) \quad\text{with }m,n=1,2,3,4,5,...$$ And indeed, soon experimental physicists found these series of spectral lines in the ultraviolet and infrared part of the hydrogen spectrum. $$\begin{align} \text{Lyman series:}\quad & \frac{1}{\lambda}=\frac{1}{91.13\text{ nm}}\left(\frac{1}{1^2}-\frac{1}{n^2}\right) & \text{with }n=2,3,4,5,... 
\\ \text{Paschen series:}\quad & \frac{1}{\lambda}=\frac{1}{91.13\text{ nm}}\left(\frac{1}{3^2}-\frac{1}{n^2}\right) & \text{with }n=4,5,6,7,... \\ \text{Brackett series:}\quad & \frac{1}{\lambda}=\frac{1}{91.13\text{ nm}}\left(\frac{1}{4^2}-\frac{1}{n^2}\right) & \text{with }n=5,6,7,8,... \end{align}$$ Again, there was no physical theory available yet. This had to wait until the invention of quantum mechanics, beginning with the Bohr model (1913) and its explanation of the Rydberg formula.
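The numbers in the table are easy to reproduce (a short Python sketch of Balmer's formula):

```python
# lambda = h * n^2 / (n^2 - 4) with Balmer's constant h = 364.56 nm
h = 364.56
for n in range(3, 7):
    lam = h * n**2 / (n**2 - 4)
    print(n, round(lam, 1))   # 656.2, 486.1, 434.0, 410.1 nm

print(round(h / 4, 1))        # 91.1 nm -- the prefactor in the Rydberg form
```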
{ "language": "en", "url": "https://physics.stackexchange.com/questions/734989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Height of the atmosphere - conflicting answers Okay. I have two ways of working out the height of the atmosphere from pressure, and they give different answers. Could someone please explain which one is wrong and why? (assuming the density is constant throughout the atmosphere) 1) $P=h \rho g$, $\frac{P}{\rho g} = h = \frac{1.01\times 10^5}{1.2\times9.81} = 8600m$ 2) Pressure acts over SA of Earth. Let r be the radius of the Earth. Area of the Earth is $4 \pi r^2$ Volume of the atmosphere is the volume of a sphere with radius $(h+r)$ minus the volume of a sphere with radius $r$. $\frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3$ Pressure exerted by the mass of the atmosphere is: $P=\frac{F}{A}$ $PA=mg$ $4\pi r^2 P = \rho V g$ $4\pi r^2 P = \rho g (\frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3)$ $\frac{4\pi r^2 P}{\rho g} = \frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3$ $3 \times \frac{r^2 P}{\rho g} = (h+r)^3 - r^3$ $3 \times \frac{r^2 P}{\rho g} + r^3 = (h+r)^3$ $(3 \times \frac{r^2 P}{\rho g} + r^3)^{\frac{1}{3}} - r = h$ $(3 \times \frac{(6400\times10^3)^2 \times 1.01 \times 10^5}{1.23 \times 9.81} + (6400\times10^3)^3)^{\frac{1}{3}} - (6400\times10^3) = h = 8570m$ I know that from Occams razor the first is the right one, but surely since $h\rho g$ comes from considering the weight on the fluid above say a 1m^2 square, considering the weight of the atmosphere above a sphere should give the same answer?
Neither calculation is anything approaching physically realistic, but I guess you know that and you're just interested in why the two approaches give different answers. Take your equation from your second method: $$ 4\pi r^2 P = \rho V g $$ If the area is a flat sheet you have $V = Ah$ and $A = 4\pi r^2$, and substituting this in your equation gives: $$ 4\pi r^2 P = \rho 4\pi r^2 h g $$ and dividing both sides by $4\pi r^2$ gives you back $P = \rho h g$ as in your first method. However in the second method you've taken the volume to be a spherical shell with inner surface area of $4 \pi r^2$ and thickness $h$, and the volume of this shell is greater than $Ah$ i.e. $$ \frac{4}{3}\pi (r + h)^3 - \frac{4}{3}\pi r^3 \gt 4\pi r^2 h $$ The greater volume is why your second method gives you a smaller height.
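The size of the discrepancy is easy to see numerically (a sketch using the same $\rho = 1.2$ in both formulas, whereas the question mixes 1.2 and 1.23):

```python
P, rho, g = 1.01e5, 1.2, 9.81
r = 6400e3

h_flat = P / (rho * g)
h_shell = (3 * r**2 * P / (rho * g) + r**3) ** (1 / 3) - r

print(round(h_flat), round(h_shell))      # the flat-sheet value is larger
print(round(h_flat - h_shell, 1))         # ~ h_flat**2 / r, about a dozen metres
```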
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Derivation of Total Momentum Operator for Klein-Gordon Field Quantization I am studying the second chapter of Peskin and Schroeder's QFT text. In equations 2.27 and 2.28, the book defines the field operators: $$ \phi(x) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2 w_p}} (a_p + a^\dagger_{-p}) \, e^{ipx} \\ \pi(x) = \int \frac{d^3p}{(2\pi)^3} (-i) \sqrt{\frac{w_p}{2}} (a_p - a^\dagger_{-p}) \, e^{ipx} $$ The momentum operator is then calculated in equation 2.33. However, my own derivation gives a different answer. I am reproducing my steps hoping that someone will be able to find where I went wrong. Starting with the definition of the momentum (conserved charge of spatial translations): $$ \mathbf{P} = -\int d^3x \, \pi(x) \nabla \phi(x) \\ \mathbf{P} = -\int d^3x \, \Bigg[ \int \frac{d^3p}{(2\pi)^3} (-i) \sqrt{\frac{w_p}{2}} (a_p - a^\dagger_{-p}) \, e^{ipx} \Bigg]\Bigg[\int \frac{d^3p'}{(2\pi)^3} \frac{1}{\sqrt{2 w_{p'}}} (a_{p'} + a^\dagger_{-p'}) \, \nabla e^{ip'x} \Bigg] \\ \mathbf{P} = -\int \int \frac{d^3p}{(2\pi)^3} \frac{d^3p'}{(2\pi)^3} \int d^3x \, (-i^2) \, \mathbf{p}' \, e^{i(p+p')x} \, \Bigg[\sqrt{\frac{w_p}{2}} (a_p - a^\dagger_{-p}) \, \Bigg]\Bigg[\frac{1}{\sqrt{2 w_{p'}}} (a_{p'} + a^\dagger_{-p'}) \Bigg] \\ \mathbf{P} = -\int \int \frac{d^3p}{(2\pi)^3} \frac{d^3p'}{(2\pi)^3} \mathbf{p}' (2\pi)^3 \delta(p+p') \, \sqrt{\frac{w_p}{2}} (a_p - a^\dagger_{-p}) \, \frac{1}{\sqrt{2 w_{p'}}} (a_{p'} + a^\dagger_{-p'}) \\ \mathbf{P} = -\int \int \frac{d^3p}{(2\pi)^3} (-\mathbf{p}) \sqrt{\frac{w_p}{2}} (a_p - a^\dagger_{-p}) \, \frac{1}{\sqrt{2 w_{-p}}} (a_{-p} + a^\dagger_{p}) \\ $$ Since $w_{p} = w_{-p} = |p|^2 + m^2$, we get $$ \mathbf{P} = -\int \frac{d^3p}{2(2\pi)^3} (-\mathbf{p}) \sqrt{\frac{w_p}{w_{p}}} (a_p - a^\dagger_{-p}) (a_{-p} + a^\dagger_{p}) \\ \mathbf{P} = \int \frac{d^3p}{(2\pi)^3} \frac{\mathbf{p}}{2} (a_p - a^\dagger_{-p}) (a_{-p} + a^\dagger_{p}) \\ \mathbf{P} = \int \frac{d^3p}{(2\pi)^3} \frac{\mathbf{p}}{2} \bigg[ a_p a_{-p} + a_p 
a^\dagger_{p} -a^\dagger_{-p} a_{-p} - a^\dagger_{-p} a^\dagger_{p} \bigg] \\ \mathbf{P} = \int \frac{d^3p}{(2\pi)^3} \frac{\mathbf{p}}{2} \bigg[ a_p a_{-p} - a^\dagger_{-p} a^\dagger_{p} \bigg] + \int \frac{d^3p}{(2\pi)^3} \frac{\mathbf{p}}{2} \bigg[ a_p a^\dagger_{p} -a^\dagger_{-p} a_{-p}\bigg] \\ $$ The first integral is odd with respect to $\mathbf{p}$, and vanishes. For the second term, we can formally prove that $a^\dagger_{-p} a_{-p} = a^\dagger_{p} a_{p}$, but we can also argue this by noting that this operator pair creates a particle but then destroys it, with any possible constants only depending on the magnitude of $\mathbf{p}$. This line of reasoning gives us: $$ \mathbf{P} = \int \frac{d^3p}{(2\pi)^3} \frac{\mathbf{p}}{2} \bigg( a_p a^\dagger_{p} -a^\dagger_{p} a_{p}\bigg) = \int \frac{d^3p}{(2\pi)^3} \frac{\mathbf{p}}{2} \, [ a_p,a^\dagger_{p}] \\ $$ The commutator here is proportional to the delta function, and hence this expression doesn't match what Peskin & Schroeder, and other QFT books have, i.e., $$ \mathbf{P} = \int \frac{d^3p}{(2\pi)^3} \mathbf{p} \, a^\dagger_{p} a_p $$ UPDATE: I realized later that my assumption that $a^\dagger_{-p} a_{-p} = a^\dagger_{p} a_{p}$ was wrong. When I was trying to prove this using the expansion of the ladder operators in terms of $\phi(x)$ and $\pi(x)$, I was making an algebra error.
From $$\mathbf{P} = \int \frac{d^3p}{(2\pi)^3} \frac{\mathbf{p}}{2} \bigg( a_p a^\dagger_{p} -a^\dagger_{-p} a_{-p}\bigg)$$ you actually have $$ \mathbf{P} = \int \frac{d^3p}{(2\pi)^3} \left(\frac{\mathbf{p}}{2}a_p a^\dagger_{p} - \frac{\mathbf{p}}{2}a^\dagger_{-p} a_{-p}\right) = \int \frac{d^3p}{(2\pi)^3} \left(\frac{\mathbf{p}}{2}a_p a^\dagger_{p} - \frac{-\mathbf{p}}{2}a^\dagger_{p} a_{p}\right) = \int \frac{d^3p}{(2\pi)^3} \left(\frac{\mathbf{p}}{2}a^\dagger_{p} a_p + \frac{\mathbf{p}}{2}a^\dagger_{p} a_{p}\right) + \mbox{renormalization term.}\\ $$ Dropping the infinite renormalization term due to $\delta({\bf 0})$ $$ \mathbf{P} = \int \frac{d^3p}{(2\pi)^3} \mathbf{p}\: a^\dagger_{p} a_p \:. $$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/375585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 1, "answer_id": 0 }
Proof of form of 4D rotation matrices I am considering rotations in 4D space. We use $x, y, z, w$ as coordinates in a Cartesian basis. I have found sources that give a parameterization of the rotation matrices as \begin{align} &R_{yz}(\theta) = \begin{pmatrix} 1&0&0&0\\0&\cos\theta&-\sin\theta&0\\0&\sin\theta&\cos\theta&0\\0&0&0&1 \end{pmatrix}, R_{zx}(\theta) = \begin{pmatrix} \cos\theta&0&\sin\theta&0\\0&1&0&0\\-\sin\theta&0&\cos\theta&0\\0&0&0&1 \end{pmatrix},\\ &R_{xy}(\theta) = \begin{pmatrix} \cos\theta&-\sin\theta&0&0\\\sin\theta&\cos\theta&0&0\\0&0&1&0\\0&0&0&1 \end{pmatrix}, R_{xw}(\theta) = \begin{pmatrix} \cos\theta&0&0&-\sin\theta\\0&1&0&0\\0&0&1&0\\\sin\theta&0&0&\cos\theta \end{pmatrix},\\ &R_{yw}(\theta) = \begin{pmatrix} 1&0&0&0\\0&\cos\theta&0&-\sin\theta\\0&0&1&0\\0&\sin\theta&0&\cos\theta \end{pmatrix}, R_{zw}(\theta) = \begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&\cos\theta&-\sin\theta\\0&0&\sin\theta&\cos\theta \end{pmatrix}, \end{align} where the subscript labels a plane that is being rotated. This seems to be a very intuitive extension of lower dimensional rotations. However, I would really like to see a proof that these are correct, and I'm not sure how I could go about doing that. By correct, I mean that these 6 matrices can generate any 4D rotation. My initial attempt was to construct a set of transformations from the definition of the transformations (as matrices) that define a 4D rotation, \begin{align} \{R|RR^T = I\}, \end{align} where $I$ is the identity matrix (4D), but this has 16 (constrained) parameters and I thought that there must be an easier way.
For each of your 4D rotation matrices $\mathbf R$: if this equation $$\mathbf Z^T\, \mathbf Z= \left(\mathbf R\,\mathbf Z\right)^T\,\left(\mathbf R\,\mathbf Z\right)$$ is fulfilled, the rotation matrix $\mathbf R$ is orthonormal, $\mathbf R^T\,\mathbf R=\mathbf I_4$, where $$\mathbf Z= \begin{bmatrix} x \\ y \\ z \\ w \\ \end{bmatrix}$$ Edit: you can also check the determinant of the rotation matrix; if the determinant of the rotation matrix is equal to one, is the matrix orthonormal?
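All six matrices can be checked at once (a NumPy sketch, building each plane rotation and testing $\mathbf R^T\mathbf R = \mathbf I_4$ and $\det \mathbf R = 1$):

```python
import numpy as np

def plane_rotation(i, j, theta, n=4):
    """Rotation by theta in the (i, j) coordinate plane of R^n."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

# (y,z), (z,x), (x,y), (x,w), (y,w), (z,w) with x,y,z,w = 0,1,2,3
planes = [(1, 2), (2, 0), (0, 1), (0, 3), (1, 3), (2, 3)]
theta = 0.7
for i, j in planes:
    R = plane_rotation(i, j, theta)
    print(np.allclose(R.T @ R, np.eye(4)), round(np.linalg.det(R), 10))
```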
{ "language": "en", "url": "https://physics.stackexchange.com/questions/652658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Height of the atmosphere - conflicting answers Okay. I have two ways of working out the height of the atmosphere from pressure, and they give different answers. Could someone please explain which one is wrong and why? (assuming the density is constant throughout the atmosphere) 1) $P=h \rho g$, $\frac{P}{\rho g} = h = \frac{1.01\times 10^5}{1.2\times9.81} = 8600m$ 2) Pressure acts over SA of Earth. Let r be the radius of the Earth. Area of the Earth is $4 \pi r^2$ Volume of the atmosphere is the volume of a sphere with radius $(h+r)$ minus the volume of a sphere with radius $r$. $\frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3$ Pressure exerted by the mass of the atmosphere is: $P=\frac{F}{A}$ $PA=mg$ $4\pi r^2 P = \rho V g$ $4\pi r^2 P = \rho g (\frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3)$ $\frac{4\pi r^2 P}{\rho g} = \frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3$ $3 \times \frac{r^2 P}{\rho g} = (h+r)^3 - r^3$ $3 \times \frac{r^2 P}{\rho g} + r^3 = (h+r)^3$ $(3 \times \frac{r^2 P}{\rho g} + r^3)^{\frac{1}{3}} - r = h$ $(3 \times \frac{(6400\times10^3)^2 \times 1.01 \times 10^5}{1.23 \times 9.81} + (6400\times10^3)^3)^{\frac{1}{3}} - (6400\times10^3) = h = 8570m$ I know that from Occams razor the first is the right one, but surely since $h\rho g$ comes from considering the weight on the fluid above say a 1m^2 square, considering the weight of the atmosphere above a sphere should give the same answer?
The first formula is just a first order expansion in $1/r$ of the second formula which is thus the exact one. The expansion is: $$h = \frac{P}{\rho g} - \left( \frac{P}{\rho g} \right)^2 \frac{1}{r} + \frac{5}{3} \left( \frac{P}{\rho g} \right)^3 \frac{1}{r^2} + \ldots$$
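The expansion claim is easy to confirm numerically; one consistent density is used below (note the question uses $\rho = 1.2$ in its first method and $\rho = 1.23$ in its second). This is my sketch, not part of the answer.

```python
# Exact (spherical) vs. first-order (flat) height of a constant-density atmosphere.
P = 1.01e5   # Pa
rho = 1.2    # kg/m^3 (one consistent value)
g = 9.81     # m/s^2
r = 6.4e6    # m, Earth radius

h0 = P / (rho * g)                                        # flat-Earth formula (1)
h_exact = (3 * r**2 * P / (rho * g) + r**3) ** (1 / 3) - r  # spherical formula (2)

print(h0, h_exact)   # ≈ 8579.7 m vs ≈ 8568.2 m

# the difference matches the next term of the expansion, (P/(rho g))^2 / r
print(h0 - h_exact, h0**2 / r)
```

With a single density the two answers differ by about 11.5 m, exactly the size of the $\left(P/\rho g\right)^2/r$ correction quoted above.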
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to get "complex exponential" form of wave equation out of "sinusoidal form"? I am a novice on QM and until now I have always been using the sinusoidal form of the wave equation: $$A = A_0 \sin(kx - \omega t)$$ Well, in QM everyone uses the complex exponential form of the wave equation: $$A = A_0\, e^{i(kx - \omega t)}$$ QUESTION: How do I mathematically derive the exponential equation from the sinusoidal one? Are there any catches? I did read the Wikipedia article, but there is no derivation there.
You asked about the second equation. See below: $e^{ix}{}= 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \frac{(ix)^6}{6!} + \frac{(ix)^7}{7!} + \frac{(ix)^8}{8!} + \cdots \\[8pt] {}= 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \frac{x^6}{6!} - \frac{ix^7}{7!} + \frac{x^8}{8!} + \cdots \\[8pt] {}= \left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - \cdots \right) + i\left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \right) \\[8pt] {}= \cos x + i\sin x \ .$ To calculate the expansions I have used in the above equation, you need to understand the procedure for finding Taylor expansions of functions. This youtube video teaches the procedure: http://www.youtube.com/watch?v=GUtLtRDox3c
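The series manipulation can be verified numerically — a small check (my addition) that a truncated Taylor series for $e^{ix}$ lands on $\cos x + i\sin x$:

```python
import cmath
import math

def exp_taylor(z, n_terms=30):
    """Partial sum of the Taylor series for e^z, as used in the answer."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= z / (k + 1)   # z^k/k!  ->  z^(k+1)/(k+1)!
    return total

x = 1.2345
lhs = exp_taylor(1j * x)                  # series for e^{ix}
rhs = complex(math.cos(x), math.sin(x))   # cos x + i sin x
assert abs(lhs - rhs) < 1e-12
assert abs(lhs - cmath.exp(1j * x)) < 1e-12
```

Thirty terms are far more than enough for $|x|\sim 1$, since the factorial in the denominator dominates quickly.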
{ "language": "en", "url": "https://physics.stackexchange.com/questions/53005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Types of photon qubit encoding How many types of qubit encoding on photons exist nowadays? I know only two: * *Encoding on polarization: $$ \lvert \Psi \rangle = \alpha \lvert H \rangle + \beta \lvert V \rangle $$ $$ \lvert H \rangle = \int_{-\infty}^{\infty} d\mathbf{k}\ f(\mathbf{k}) e^{-iw_k t} \hat{a}^\dagger_{H}(\mathbf{k}) \lvert 0 \rangle_\text{Vacuum} $$ $$ \lvert V \rangle = \int_{-\infty}^{\infty} d\mathbf{k}\ f(\mathbf{k}) e^{-iw_k t} \hat{a}^\dagger_{V}(\mathbf{k}) \lvert 0 \rangle_\text{Vacuum} $$ *Time-bin: $$ \lvert \Psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle $$ $$ \lvert 0 \rangle = \int_{-\infty}^{\infty} dz\ f\left(\frac{t -z/c}{\delta t_{ph}}\right) e^{-i w_0 (t-z/c)} \hat{a}^\dagger(z) \lvert 0 \rangle_\text{Vacuum} $$ $$ \lvert 1 \rangle = \int_{-\infty}^{\infty} dz\ f\left(\frac{t -z/c+\tau}{\delta t_{ph}}\right) e^{-i w_0 (t-z/c+\tau)} \hat{a}^\dagger(z) \lvert 0 \rangle_\text{Vacuum} $$ Is there anything else?
Yeah there is a couple of 'em - off the top of my head I can think of: \begin{align} |\uparrow\;\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |\downarrow\;\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |g\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |e\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |L\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |R\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |H\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |V\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |0\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |1\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |+\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |-\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix} \end{align} But these are only the naming conventions in the northern hemisphere. In the southern hemisphere they are labeled; \begin{align} |\downarrow\;\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |\uparrow\;\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |e\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |g\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |R\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |L\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |V\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |H\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |1\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |0\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ |-\rangle\;=\; \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad&\qquad |+\rangle\;=\; \begin{pmatrix} 0 \\ 1 \end{pmatrix} \end{align} So as you can see you have to be careful about who you are talking to, otherwise you could get your calculations all backwards.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/59731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What is the depth in meters of the pond? A small spherical gas bubble of diameter $d= 4$ μm forms at the bottom of a pond. When the bubble rises to the surface its diameter is $n=1.1$ times bigger. What is the depth in meters of the pond? Note: water's surface tension and density are $σ= 73 \times 10^{-3} \mbox{ N}$ and $ρ= 10^3 \mbox{ kg/m}^3$, respectively. The gas expansion is assumed to be isothermal. My attempts: I used the equation of pressure: $P_1V_1=P_2V_2$ where $P_1$ is the pressure at the top and $V_1$ is the volume of bubble at the top and $P_2$ is the pressure at the bottom, and $V_2$ is the volume of the bubble at the bottom Because the bubble at the bottom, it received the hydrostatic pressure, so the equation became: $P_1V_1$= $(P_0+\rho g d) V_2$ Since $V_1$ and $V_2$ is sphere, we can use the sphere volume. And $P_1$ is same with atmospheric pressure= $10^5 \mbox{ Pa}$ $10^5 \cdot (\frac{4}{3} \pi r_1^3) = (10^5 + 10^3 \cdot 9.8 \cdot d) \cdot (\frac{4}{3} \pi r_2^3)$ Cancel out the $\frac{4}{3}\pi$ and we get: $10^5 \cdot (r_1)^3 = (r_2)^3 \cdot (10^5 + 9800d)$ Substitute $r_1 = 4 \times 1.1= 4.4 \mbox{ μm}= 4.4\times10^{-6} \mbox{ m}$ and $r_2 = 4 \times 10^{-6} \mbox{ m}$ $10^5 \cdot (4.4 \times 10^{-6})^3 = (4 \times 10^{-6})^3 \cdot (10^5 + 9800d)$ $10^{-13} \cdot (4.4)^3 = 64 \times 10^{-18} \times 10^5 + 64 \times 10^{-18} \times 9800d$ $85.184 \times 10^{-13} = 64 \times 10^{-13} + 627200 \times 10^{-18} d$ $21.184 \times 10^{-13} = 627200 \times 10^{-18} d$ $d= 3.37 \mbox{ m}$ So, the depth of the pond is $3.37 \mbox{ m}$. My question: what is the useful of $σ = 73 \times 10^{-3} \mbox{ N}$ ? I really confused about it. Thanks
The surface tension of the bubble would help you to find out how much work the bubble is doing while it increases its volume.
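For reference, the question's own isothermal bookkeeping (which neglects the surface-tension term the answer discusses) can be reproduced in a few lines; this is my sketch. Note that the bubble size cancels entirely, so the quoted diameter only enters through the ratio $n = 1.1$.

```python
P0 = 1.0e5    # Pa, atmospheric pressure at the surface
rho = 1.0e3   # kg/m^3
g = 9.8       # m/s^2 (the value used in the question)
n = 1.1       # ratio of surface diameter to bottom diameter

# isothermal: P_bottom * V_bottom = P_top * V_top, with V ~ r^3 and
# r_top = n * r_bottom, so the bubble radius cancels out:
#   (P0 + rho g d) = P0 * n^3
d = P0 * (n**3 - 1) / (rho * g)
print(d)   # ≈ 3.38 m, consistent with the question's 3.37 m up to rounding
```

Including surface tension would add a Laplace-pressure contribution to the gas pressure inside the bubble, which is what makes $\sigma$ relevant at micrometre radii.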
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Apparent dimensional mismatch after taking derivative Suppose I have a variable $x$ and a constant $a$, each having the dimension of length. That is $[x]=[a]=[L]$ where square brackets denote the dimension of the physical quantity contained within them. Now, we wish to take the derivative of $u = log (\frac{x^2}{a^2})-log (\frac{a^2}{x^2})$. Here, we have taken the natural logarithm. It is clear that $u$ is a dimensionless function. $$\frac{du}{dx} = \frac{a^2}{x^2}.\frac{2x}{a^2} - \frac{x^2}{a^2}.(-2a^2).\frac{2x}{x^3} \\ = \frac{1}{x} - 4. $$ Here, the dimensions of the two terms on the right do not match. The dimension of the first term is what I expected. Where am I going wrong?
Where am I going wrong? Recall $$\frac{d}{dx}f(g(x))=f'(g(x))g'(x)$$ with $$f(\cdot) = \ln(\cdot) \rightarrow f'(\cdot) = \frac{1}{\cdot}$$ and $$g(x) = \frac{a^2}{x^2} \rightarrow g'(x) = \frac{-2a^2}{x^3}$$ Thus $$\frac{d}{dx}\ln\frac{a^2}{x^2} = \frac{1}{\frac{a^2}{x^2}}\frac{-2a^2}{x^3} = -\frac{2}{x}$$ An alternative approach is to recognize $$\ln x^{-2} = -2\ln x$$ thus $$ \ln\frac{a^2}{x^2} = \ln a^2 + \ln x^{-2} = \ln a^2 -2 \ln x$$ for which we can immediately write the derivative.
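The chain-rule computation can be confirmed symbolically; a quick sympy check (my addition) of both the single term and the question's full expression:

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# the term the question differentiated incorrectly: d/dx ln(a^2/x^2) = -2/x
assert sp.simplify(sp.diff(sp.log(a**2 / x**2), x) + 2 / x) == 0

# the full expression from the question: u = ln(x^2/a^2) - ln(a^2/x^2)
u = sp.log(x**2 / a**2) - sp.log(a**2 / x**2)
du = sp.simplify(sp.diff(u, x))
assert sp.simplify(du - 4 / x) == 0   # du/dx = 4/x: dimensionally 1/length throughout
print(du)
```

Both terms contribute $+2/x$, so the derivative is $4/x$ — every term carries dimension $1/[L]$, with no dimensionless stray term.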
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Force distribution on corner supported plane This question has been annoying me for a while. If you have a completely rigid rectangular plate of width and height x and y that is supported on each corner (A,B,C,D) and has force (F) directly in its center then I think the force on each corner support will be F/4. What I want to know is how to prove this? Obviously there are 4 unknowns so we require 4 equations. $$\sum F_z = 0$$ $$\therefore F_A+F_B+F_C+F_D=F$$ Also $$\sum M = 0$$ Now taking the moments about point A \begin{equation*} \begin{vmatrix} i & j & k \\ x & 0 & 0 \\ 0 & 0 & F_B \\ \end{vmatrix} + \begin{vmatrix} i & j & k \\ x & y & 0 \\ 0 & 0 & F_C \\ \end{vmatrix} +\begin{vmatrix} i & j & k \\ 0 & y & 0 \\ 0 & 0 & F_D \\ \end{vmatrix}+ \begin{vmatrix} i & j & k \\ x/2 & y/2 & 0 \\ 0 & 0 & -F \\ \end{vmatrix}=0 \end{equation*} As the sum of moments equals 0, let i and j = 0 $$\therefore F_C+F_D=F/2$$ and $$F_B+F_C=F/2$$ Then taking the moments about another point to get the 4th equation. But no matter what location I use, the equations will not solve. I have been using a matrix to find $$F_A, F_B, F_C, F_D$$ See below. But when getting the det of the first matrix the answer is always equal to 0. \begin{equation*} \begin{vmatrix} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ ? & ? & ? & ? \\ \end{vmatrix} * \begin{vmatrix} F_A \\ F_B \\ F_C \\ F_D \\ \end{vmatrix} =\begin{vmatrix} 1 \\ 0.5 \\ 0.5 \\ ? \\ \end{vmatrix}*F \end{equation*} Can someone please tell me what I am doing wrong? Thank you in advance.
I think I have a solution. Considering the corner forces $A$, $B$, $C$ and $D$ you have a system of 3 equations and 4 unknowns $$\begin{align} A + B + C + D & = F \\ \frac{y}{2} \left(C+D-A-B\right) &= 0 \\ \frac{x}{2} \left(A+D-B-C\right) & = 0 \end{align}$$ $$\begin{vmatrix} 1 & 1 & 1 & 1 \\ -\frac{y}{2} & -\frac{y}{2} & \frac{y}{2} & \frac{y}{2} \\ \frac{x}{2} & -\frac{x}{2} & -\frac{x}{2} & \frac{x}{2} \end{vmatrix} \begin{vmatrix} A \\ B \\ C \\ D \end{vmatrix} = \begin{vmatrix} F \\ 0 \\ 0 \end{vmatrix} $$ What if we consider the forces as deviations from $\frac{F}{4}$ such that $$\begin{align} A & = \frac{F}{4} + U \\ B & = \frac{F}{4} + V \\ C & = \frac{F}{4} + W \\ D & = \frac{F}{4} + G \end{align} $$ $$ \begin{vmatrix} 1 & 1 & 1 & 1 \\ -\frac{y}{2} & -\frac{y}{2} & \frac{y}{2} & \frac{y}{2} \\ \frac{x}{2} & -\frac{x}{2} & -\frac{x}{2} & \frac{x}{2} \end{vmatrix} \begin{vmatrix} \frac{F}{4} \\ \frac{F}{4} \\ \frac{F}{4} \\ \frac{F}{4} \end{vmatrix} + \begin{vmatrix} 1 & 1 & 1 & 1 \\ -\frac{y}{2} & -\frac{y}{2} & \frac{y}{2} & \frac{y}{2} \\ \frac{x}{2} & -\frac{x}{2} & -\frac{x}{2} & \frac{x}{2} \end{vmatrix} \begin{vmatrix} U \\ V \\ W \\ G \end{vmatrix} = \begin{vmatrix} F \\ 0 \\ 0 \end{vmatrix} $$ $$ \begin{vmatrix} F \\ 0 \\ 0 \end{vmatrix} + \begin{vmatrix} 1 & 1 & 1 & 1 \\ -\frac{y}{2} & -\frac{y}{2} & \frac{y}{2} & \frac{y}{2} \\ \frac{x}{2} & -\frac{x}{2} & -\frac{x}{2} & \frac{x}{2} \end{vmatrix} \begin{vmatrix} U \\ V \\ W \\ G \end{vmatrix} = \begin{vmatrix} F \\ 0 \\ 0 \end{vmatrix} $$ $$ \begin{vmatrix} 1 & 1 & 1 & 1 \\ -\frac{y}{2} & -\frac{y}{2} & \frac{y}{2} & \frac{y}{2} \\ \frac{x}{2} & -\frac{x}{2} & -\frac{x}{2} & \frac{x}{2} \end{vmatrix} \begin{vmatrix} U \\ V \\ W \\ G \end{vmatrix} = \begin{vmatrix} 0 \\ 0 \\ 0 \end{vmatrix} $$ Now a possible solution is $U=0$, $V=0$, $W=0$ and $G=0$. I think, to go beyond this method, you need to assume a non-rigid frame, with some simple stiffness matrix, and in the end find the limit where $k \rightarrow \infty$.
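The underdetermined 3×4 system can also be handed to a least-squares solver; numpy's `lstsq` returns the minimum-norm solution, which is exactly the symmetric split $U=V=W=G=0$, i.e. $F/4$ per corner. This is a numerical sketch (my addition, with arbitrary plate dimensions) — physically, singling out this particular solution still needs the stiffness argument mentioned at the end.

```python
import numpy as np

x, y, F = 2.0, 1.0, 100.0   # plate dimensions and central load (arbitrary values)

# rows: sum of vertical forces, moments about the two centre axes
A = np.array([
    [1.0,   1.0,  1.0,  1.0],
    [-y/2, -y/2,  y/2,  y/2],
    [ x/2, -x/2, -x/2,  x/2],
])
b = np.array([F, 0.0, 0.0])

# lstsq on an underdetermined system returns the minimum-norm solution
corner_forces, *_ = np.linalg.lstsq(A, b, rcond=None)
print(corner_forces)   # ≈ [25. 25. 25. 25.]
assert np.allclose(corner_forces, F / 4)
```

This works because $AA^T$ is diagonal here, so the minimum-norm formula $A^T(AA^T)^{-1}b$ distributes $F$ evenly across the four corners.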
{ "language": "en", "url": "https://physics.stackexchange.com/questions/243626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Simple Electric Field Problem Solve the Electric Field distance z above a circular loop of radius r. The charge/length = $\lambda$ The arc-length is 2$\pi$r. So the smallest portion of the circle is 2$\pi r \delta \theta$ and charge is therefore \begin{align} q&=2\pi r \delta \theta*\lambda \\ R&=\sqrt{r^2+z^2}= \text{constant} \\ E&= \frac {1}{4 \pi \epsilon _o}\frac {q}{R^2}=\frac {1}{4 \pi \epsilon _o}\frac {2\pi r \delta \theta*\lambda}{\sqrt{r^2+z^2}}\end{align} And we only need the z component. $$E =\frac {1}{4 \pi \epsilon _o}\frac {2\pi r \delta \theta*\lambda}{\sqrt{r^2+z^2}}\sin(\theta)=\frac {1}{4 \pi \epsilon _o}\frac {2\pi r \delta \theta*\lambda}{\sqrt{r^2+z^2}} \frac {z}{\sqrt{z^2+r^2}}$$ and everything is constant except for $\delta \theta$ $$E=\frac {1}{4 \pi \epsilon _o}\frac {2\pi r \lambda}{\sqrt{r^2+z^2}} \frac {z}{\sqrt{z^2+r^2}}\int_0^{2\pi}{\delta \theta}$$ So I thought the correct answer must be: $$E=\frac {1}{4 \pi \epsilon _o}\frac {2\pi r \lambda z}{[r^2+z^2]^{3/2}} \cdot 2\pi$$ But the correct answer does not multiply by 2$\pi$ Correct: $$E=\frac {1}{4 \pi \epsilon _o}\frac {2\pi r \lambda z}{[r^2+z^2]^{3/2}}$$ Why was I wrong? where did I slip up? Thanks!
The length element should be $r d\theta$ not $2\pi r d\theta$. So the charge element is $$dq=\lambda r d\theta$$ but not $$dq=\lambda 2\pi r d\theta.$$
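A brute-force numerical check of the corrected element (a sketch, my addition; the test values for $\lambda$, $r$, $z$ are arbitrary): summing the vector contributions of each $dq = \lambda r\,d\theta$ shows the transverse components cancel and the axial component matches the closed form.

```python
import math

eps0 = 8.8541878128e-12        # permittivity of free space, F/m
lam, r, z = 1e-9, 0.05, 0.10   # line charge density (C/m), ring radius, height

k = 1 / (4 * math.pi * eps0)
N = 20_000
Ex = Ey = Ez = 0.0
for i in range(N):
    theta = 2 * math.pi * (i + 0.5) / N
    dq = lam * r * (2 * math.pi / N)   # corrected element: dq = lam * r * dtheta
    # vector from the charge element at (r cos, r sin, 0) to the field point (0, 0, z)
    dx, dy, dz = -r * math.cos(theta), -r * math.sin(theta), z
    d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
    Ex += k * dq * dx / d3
    Ey += k * dq * dy / d3
    Ez += k * dq * dz / d3

E_formula = k * 2 * math.pi * r * lam * z / (r**2 + z**2) ** 1.5
assert abs(Ex) < 1e-9 * abs(Ez) and abs(Ey) < 1e-9 * abs(Ez)  # transverse parts cancel
assert math.isclose(Ez, E_formula, rel_tol=1e-9)
```

Using $2\pi r\,d\theta$ instead would inflate every `dq` by $2\pi$, and the numeric result would overshoot the formula by exactly that factor.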
{ "language": "en", "url": "https://physics.stackexchange.com/questions/288654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do one show that the Pauli Matrices together with the Unit matrix form a basis in the space of complex 2 x 2 matrices? In other words, show that a complex 2 x 2 Matrix can in a unique way be written as $$ M = \lambda _ 0 I+\lambda _1 \sigma _ x + \lambda _2 \sigma _y + \lambda _ 3 \sigma_z $$ If$$M = \Big(\begin{matrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{matrix}\Big)= \lambda _ 0 I+\lambda _1 \sigma _ x + \lambda _2 \sigma _y + \lambda _ 3 \sigma_z $$ I get the following equations $$ m_{11}=\lambda_0+\lambda_3 \\ m_{12}=\lambda_1-i\lambda_2 \\ m_{21}=\lambda_1+i\lambda_2 \\ m_{22}=\lambda_0-\lambda_3 $$
Let $M_2(\mathbb{C})$ denote the set of all $2\times2$ complex matrices. We also note that dim$(M_2(\mathbb{C}))=4$, because if $M\in M_2(\mathbb{C})$ and $M=\left( \begin{array}{cc} z_{11} & z_{12}\\ z_{21} & z_{22} \\ \end{array} \right)$, where $z_{ij}\in \mathbb{C}$, then $M=\left( \begin{array}{cc} z_{11} & z_{12}\\ z_{21} & z_{22} \\ \end{array} \right)=z_{11}\left( \begin{array}{cc} 1 & 0\\ 0 & 0 \\ \end{array} \right)+z_{12}\left( \begin{array}{cc} 0 & 1\\ 0 & 0 \\ \end{array} \right)+z_{21}\left( \begin{array}{cc} 0 & 0\\ 1 & 0 \\ \end{array} \right)+z_{22}\left( \begin{array}{cc} 0 & 0\\ 0 & 1 \\ \end{array} \right)$. The standard four Pauli matrices are: $I=\left( \begin{array}{cc} 1 & 0\\ 0 & 1 \\ \end{array} \right),~~ \sigma_1=\left( \begin{array}{cc} 0 & 1\\ 1 & 0 \\ \end{array} \right),~~ \sigma_2=\left( \begin{array}{cc} 0 & -i\\ i & 0 \\ \end{array} \right),~~ \sigma_3=\left( \begin{array}{cc} 1 & 0\\ 0 & -1 \\ \end{array} \right)$. It is straightforward to show that these four matrices are linearly independent. This can be done as follows. Let $c_\mu\in \mathbb{C}$ such that $c_0I+c_1\sigma_1+c_2\sigma_2+c_3\sigma_3=$ O (zero matrix). This gives $\left( \begin{array}{cc} c_0+c_3 & c_1-ic_2\\ c_1+ic_2 & c_0-c_3 \\ \end{array} \right)=\left( \begin{array}{cc} 0 & 0\\ 0 & 0 \\ \end{array} \right)$ which further gives the following solution: $c_0=c_1=c_2=c_3=0$. It is left to show that $\{I,\sigma_i\}$ where $i = 1,2,3$ spans $M_2(\mathbb{C})$. And this can be accomplished in the following way: $M=c_0I+c_1\sigma_1+c_2\sigma_2+c_3\sigma_3$ gives $\left( \begin{array}{cc} c_0+c_3 & c_1-ic_2\\ c_1+ic_2 & c_0-c_3 \\ \end{array} \right)=\left( \begin{array}{cc} z_{11} & z_{12}\\ z_{21} & z_{22} \\ \end{array} \right)$ which further gives the following equations: $c_0+c_3=z_{11},~c_0-c_3=z_{22},~c_1-ic_2=z_{12},~c_1+ic_2=z_{21}$. 
Solving these equations, one obtains $c_0=\frac{1}{2}(z_{11}+z_{22}),~c_1=\frac{1}{2}(z_{12}+z_{21}),~c_2=\frac{1}{2}i(z_{12}-z_{21}),~c_3=\frac{1}{2}(z_{11}-z_{22})$.
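The solved coefficients can be cross-checked numerically. Since $\mathrm{tr}(\sigma_\mu\sigma_\nu)=2\delta_{\mu\nu}$ for $\sigma_0=I$ and the three Pauli matrices, the coefficients of any $M$ are $\lambda_\mu = \mathrm{tr}(\sigma_\mu M)/2$, which reproduces the closed forms above (a sketch, my addition, using an arbitrary test matrix):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, sx, sy, sz]

M = np.array([[1 + 2j, 3 - 1j],
              [0.5j, -2 + 1j]])

# tr(sigma_mu sigma_nu) = 2 delta_{mu nu}  =>  lambda_mu = tr(sigma_mu M) / 2
lam = [np.trace(s @ M) / 2 for s in basis]

M_rec = sum(l * s for l, s in zip(lam, basis))
assert np.allclose(M_rec, M)                             # decomposition reproduces M
assert np.isclose(lam[0], (M[0, 0] + M[1, 1]) / 2)       # c_0 = (z11 + z22)/2
assert np.isclose(lam[2], 1j * (M[0, 1] - M[1, 0]) / 2)  # c_2 = i(z12 - z21)/2
assert np.isclose(lam[3], (M[0, 0] - M[1, 1]) / 2)       # c_3 = (z11 - z22)/2
```

The trace formula is just the answer's linear system solved once and for all: it works for every $M$ because the basis is orthogonal under the Hilbert–Schmidt inner product.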
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
Zero mass Kerr metric When mass in Kerr metric is put to zero we have $$ds^{2}=-dt^{2}+\frac{r^{2}+a^{2}\cos^{2}\theta}{r^{2}+a^{2}}dr^{2}+\left(r^{2}+a^{2}\cos^{2}\theta\right)d\theta^{2}+\left(r^{2}+a^{2}\right)\sin^{2}\theta d\phi^{2},$$ where $a$ is a constant. This is a flat metric. What exactly is the coordinate transformation that changes this into the usual Minkowski spacetime metric form $$ds^{2}=-dt^{2}+dx^{2}+dy^{2}+dz^{2}?$$
As mentioned in @Umaxo's comment, according to Boyer-Lindquist coordinates - Line element: The coordinate transformation from Boyer–Lindquist coordinates $r,\theta,\phi$ to Cartesian coordinates $x,y,z$ is given (for $m\to 0$) by: $$\begin{align} x &= \sqrt{r^2+a^2}\sin\theta\cos\phi \\ y &= \sqrt{r^2+a^2}\sin\theta\sin\phi \\ z &= r\cos\theta \end{align}$$ Proving that this is the desired transformation is a straightforward but very tedious task. First calculate the differentials of the above: $$\begin{align} dx &= \frac{r}{\sqrt{r^2+a^2}}\sin\theta\cos\phi\ dr \\ &+ \sqrt{r^2+a^2}\cos\theta\cos\phi\ d\theta \\ &- \sqrt{r^2+a^2}\sin\theta\sin\phi\ d\phi \\ dy &= \frac{r}{\sqrt{r^2+a^2}}\sin\theta\sin\phi\ dr \\ &+ \sqrt{r^2+a^2}\cos\theta\sin\phi\ d\theta \\ &+ \sqrt{r^2+a^2}\sin\theta\cos\phi\ d\phi \\ dz &= \cos\theta\ dr - r\sin\theta\ d\theta \end{align}$$ Then insert these differentials into the Minkowski metric: $$\begin{align} ds^2 &= -dt^2+dx^2+dy^2+dz^2 \\ &\text{... omitting the lengthy algebra here, exploiting $\cos^2\alpha+\sin^2\alpha=1$ several times} \\ &= -dt^2 +\frac{r^2+a^2\cos^2\theta}{r^2+a^2}dr^2 +\left(r^2+a^2\cos^2\theta\right)d\theta^2 +\left(r^2+a^2\right)\sin^2\theta\ d\phi^2 \end{align}$$ which is the Kerr-metric for zero mass $M$, angular momentum $J$, and charge $Q$.
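The "lengthy algebra" elided above can be delegated to a computer algebra system; the following sympy sketch (my addition) pulls the flat spatial metric back through the coordinate map and compares it with the $m \to 0$ Kerr line element.

```python
import sympy as sp

r, a, th, ph = sp.symbols('r a theta phi', positive=True)

# the m -> 0 Boyer-Lindquist -> Cartesian map
x = sp.sqrt(r**2 + a**2) * sp.sin(th) * sp.cos(ph)
y = sp.sqrt(r**2 + a**2) * sp.sin(th) * sp.sin(ph)
z = r * sp.cos(th)

coords = (r, th, ph)
J = sp.Matrix([[sp.diff(f, q) for q in coords] for f in (x, y, z)])
G = sp.simplify(J.T * J)   # pullback of dx^2 + dy^2 + dz^2

expected = sp.diag((r**2 + a**2 * sp.cos(th)**2) / (r**2 + a**2),
                   r**2 + a**2 * sp.cos(th)**2,
                   (r**2 + a**2) * sp.sin(th)**2)
assert sp.simplify(G - expected) == sp.zeros(3, 3)
```

The off-diagonal terms vanish under $\cos^2\alpha+\sin^2\alpha=1$, exactly the identity invoked in the hand calculation, and the diagonal reproduces the three spatial metric coefficients.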
{ "language": "en", "url": "https://physics.stackexchange.com/questions/490819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Manipulation of the diffusive term in MHD induction equation I am trying to solve the magnetohydrodynamic (MHD) equations with a spatially varying resistivity, $\eta$. To remove some of the numerical stiffness from my finite volume approach, I am trying to get rid of these curl expressions with some vector calculus identities. The expression that is causing me issue is: $$ \nabla\times\left(\eta\nabla\times B\right) $$ I have also seen this written as: $$ \nabla\cdot\left(\eta\left(\nabla B-\nabla B^{T}\right)\right) $$ such as in the paper: Space–time adaptive ADER-DG schemes for dissipative flows: Compressible Navier–Stokes and resistive MHD equations, Computer Physics Communications. My question is: are these two expressions equal? I can kind of see how they might be using the cross product rule: https://en.wikipedia.org/wiki/Vector_calculus_identities but I'm a little uneasy of using this identity with the vector operator $\nabla$. Would anybody kindly be able to shed any light on this for me, and possibly take me through the steps to cast the first expression as the second form? Thank you in advance. P.S. This is my first post, I hope it's OK.
There are several vector/tensor calculus rules that will come in handy, so I will defined them here first (in no particular order): $$ \begin{align} \nabla \cdot \left[ \nabla \mathbf{A} - \left( \nabla \mathbf{A} \right)^{T} \right] & = \nabla \times \left( \nabla \times \mathbf{A} \right) \tag{0a} \\ \nabla \cdot \left( f \ \mathbf{A} \right) & = f \nabla \cdot \mathbf{A} + \nabla f \cdot \mathbf{A} \tag{0b} \\ \nabla \times \left( f \ \mathbf{A} \right) & = f \nabla \times \mathbf{A} + \nabla f \times \mathbf{A} \tag{0c} \\ \nabla \left( \mathbf{A} \cdot \mathbf{B} \right) & = \mathbf{A} \times \left( \nabla \times \mathbf{B} \right) + \mathbf{B} \times \left( \nabla \times \mathbf{A} \right) + \left( \mathbf{A} \cdot \nabla \right) \mathbf{B} + \left( \mathbf{B} \cdot \nabla \right) \mathbf{A} \tag{0d} \\ \mathbf{A} \cdot \left( \nabla \mathbf{B} \right)^{T} & = \left( \mathbf{A} \times \nabla \right) \times \mathbf{B} + \mathbf{A} \left( \nabla \cdot \mathbf{B} \right) \tag{0e} \\ \left( \nabla \mathbf{B} \right) \cdot \mathbf{A} & = \mathbf{A} \times \left( \nabla \times \mathbf{B} \right) + \left( \mathbf{A} \cdot \nabla \right) \mathbf{B} \tag{0f} \\ \nabla \cdot \left( \mathbf{A} \times \mathbf{B} \right) & = \mathbf{B} \cdot \left( \nabla \times \mathbf{A} \right) - \mathbf{A} \cdot \left( \nabla \times \mathbf{B} \right) \tag{0g} \\ \nabla \times \left( \nabla \times \mathbf{A} \right) & = \nabla \left( \nabla \cdot \mathbf{A} \right) - \nabla^{2} \mathbf{A} \tag{0h} \\ \nabla \times \left( \mathbf{A} \times \mathbf{B} \right) & = \nabla \cdot \left( \mathbf{A} \mathbf{B} \right)^{T} - \left( \mathbf{A} \mathbf{B} \right) \tag{0i} \end{align} $$ From these relations, one can show the following: $$ \nabla \cdot \left\{ \eta \left[ \nabla \mathbf{B} - \left( \nabla \mathbf{B} \right)^{T} \right] \right\} = \nabla \times \left( \eta \nabla \times \mathbf{B} \right) + \left( \nabla \eta \cdot \nabla \right) \mathbf{B} - \left( \nabla \eta \times \nabla 
\right) \times \mathbf{B} \tag{1} $$ where we have taken advantage of Maxwell's equations to eliminate the divergence of the magnetic field term. The second term on the right-hand side can be expanded to the following form: $$ \left( \nabla \eta \cdot \nabla \right) \mathbf{B} = \left( \mathbf{B} \cdot \nabla \right) \nabla \eta - \nabla^{2} \eta \mathbf{B} - \nabla \times \left( \nabla \eta \times \mathbf{B} \right) \tag{2} $$ where all the terms on the right-hand side involve second order derivatives of $\eta$. Generally, to simplify this down to make the two expressions of interest in your question equal one needs to make assumptions about the properties of the system. For instance, the $\nabla \times \left( \eta \nabla \times \mathbf{B} \right)$ comes from an approximation of Ohm's law and Ampere's law, i.e., $\mathbf{E} \approx \eta \mathbf{j}$ and $\mathbf{j} \propto \nabla \times \mathbf{B}$. If there are no local electric field sources (i.e., no excess charges), then $\nabla \cdot \mathbf{E} = 0$, which implies: $$ \nabla \cdot \left( \eta \mathbf{j} \right) = \eta \nabla \cdot \mathbf{j} + \nabla \eta \cdot \mathbf{j} = 0 \tag{3} $$ If $\mathbf{j} \propto \nabla \times \mathbf{B}$ is true, then the first term is zero as the divergence of the curl of a vector is always zero so we are left with: $$ \nabla \cdot \left( \eta \mathbf{j} \right) \approx \nabla \eta \cdot \mathbf{j} = 0 \tag{4} $$ The right-hand side can be rewritten as $\nabla \eta \cdot \left( \nabla \times \mathbf{B} \right) = 0$. We can then use Equation 0g above to show that the following is also true: $$ \nabla \cdot \left( \nabla \eta \times \mathbf{B} \right) = 0 \tag{5} $$ where we have used the fact that the curl of the gradient of a scalar is always zero. 
We also know another relationship from Faraday's law where: $$ \begin{align} \nabla \times \mathbf{E} & = - \frac{ \partial \mathbf{B} }{ \partial t } \tag{6a} \\ & = \nabla \times \left( \eta \ \mathbf{j} \right) \tag{6b} \\ & = \eta \nabla \times \mathbf{j} + \nabla \eta \times \mathbf{j} \tag{6c} \\ & = \frac{ 1 }{ \mu_{o} } \left[ \eta \nabla \times \left( \nabla \times \mathbf{B} \right) + \nabla \eta \times \left( \nabla \times \mathbf{B} \right) \right] \tag{6d} \\ & = \frac{ 1 }{ \mu_{o} } \nabla \times \left( \eta \nabla \times \mathbf{B} \right) \tag{6e} \end{align} $$ where $\mu_{o}$ is the permeability of free space. My question is: are these two expressions equal? In general, no. Under the right approximations, yes.
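Identities like 0h can be spot-checked on concrete fields before being used in a derivation; below is a sympy check (my addition) of $\nabla\times(\nabla\times\mathbf{A}) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A}$ for an arbitrary polynomial field.

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# an arbitrary polynomial vector field A for the spot check
A = (x**2 * y) * N.i + (y * z**3) * N.j + (x * z + y**2) * N.k

lhs = curl(curl(A))

# vector Laplacian, computed componentwise
lap = Vector.zero
for e in (N.i, N.j, N.k):
    comp = A.dot(e)
    lap += (sp.diff(comp, x, 2) + sp.diff(comp, y, 2) + sp.diff(comp, z, 2)) * e

rhs = gradient(divergence(A)) - lap
assert all(sp.simplify((lhs - rhs).dot(e)) == 0 for e in (N.i, N.j, N.k))
```

A single polynomial field is of course not a proof, but it catches sign and index errors quickly when juggling the longer identities 0a–0i.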
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding Locally flat coordinates on a unit sphere I know this is more of a math question, but no one in the Mathematics community was able to give me an answer, and since physicists are familiar with General Relativity, I thought I might get an answer. Imagine a unit sphere and the metric is: $$ds^2 = d\theta ^2 + \cos^2(\theta) d\phi^2$$ I want to find Locally Flat Coordinates (I think they're called Riemann Normal Coordinates) on the point $(\frac{\pi}{4}, 0)$, so what I need are coordinates such that the metric would reduce to the Kronecker Delta and the Christoffel Symbols should vanish. I start by the following translation: $$\theta' = \theta - \frac{\pi}{4}$$ then do the following substitution by guessing: $$\frac{f(\theta')}{\cos(\theta)} d\phi' = d\phi$$ And the condition is $f(0)$ should be 1, so the metric becomes: $$ds^2 = d\theta' + f^2(\theta')d\phi'$$ And it is a matter of finding $f(\theta')$. I calculate the Christoffel Symbols: $$\Gamma^{\lambda}_{\mu\nu} = \frac{1}{2} g^{\lambda \alpha}(\partial_{\mu}g_{\alpha \nu} + \partial_{\nu}g_{\mu \alpha} - \partial_{\alpha}g_{\mu \nu})$$ And make them vanish. So what I get is: $$\frac{f'(0)f(0)}{f^2(0)} = 0$$ Obviously, $f(\theta')=\cos(\theta')$ is a solution which is the thing I know is correct. However, there are infinite functions that satisfy the above conditions. Are all of these functions eligible to make the new coordinates Riemann normal coordinates?
starting with the components of the unit sphere: \begin{align*} &\begin{bmatrix} x \\ y \\ z \\ \end{bmatrix}=\left[ \begin {array}{c} \cos \left( \phi \right) \sin \left( \theta \right) \\ \sin \left( \phi \right) \sin \left( \theta \right) \\ \cos \left( \theta \right) \end {array} \right] \end{align*} from here \begin{align*} &\begin{bmatrix} dx \\ dy \\ dz \\ \end{bmatrix}=\underbrace{\left[ \begin {array}{cc} \cos \left( \phi \right) \cos \left( \theta \right) &-\sin \left( \phi \right) \sin \left( \theta \right) \\ \sin \left( \phi \right) \cos \left( \theta \right) &\cos \left( \phi \right) \sin \left( \theta \right) \\ -\sin \left( \theta \right) &0\end {array} \right]}_{\mathbf J}\, \left[ \begin {array}{c} d\theta \\ d\phi \end {array} \right] \end{align*} and the metric \begin{align*} &\mathbf{G}=\mathbf J^T\,\mathbf J=\left[ \begin {array}{cc} 1&0\\ 0& \left( \sin \left( \theta \right) \right) ^{2}\end {array} \right] \end{align*} now we are looking for the transformation matrix $~\mathbf{T}~$ that transforms the metric to the unit matrix \begin{align*} &\mathbf{T}^T\,\mathbf{G}\,\mathbf T=\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}\quad\Rightarrow\quad \mathbf{T}=\begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{\sin(\theta)} \\ \end{bmatrix} \end{align*} hence \begin{align*} &\begin{bmatrix} dx \\ dy \\ dz \\ \end{bmatrix}\mapsto \underbrace{\mathbf{J}\,\mathbf T}_{\mathbf{T}_n}\, \left[ \begin {array}{c} d\theta \\ d\varphi \end {array} \right] \end{align*} and the new metric is: \begin{align*} &dx^2+dy^2+dz^2\mapsto d\theta^2+d\phi^2 \end{align*} where $~\mathbf{T}_n~$ is a function of $~\theta~,\phi~$ \begin{align*} &\mathbf{T}_n(\theta=\pi/4~,\phi=0)= \left[ \begin {array}{cc} \frac 12\,\sqrt {2}&0\\ 0&1 \\ -\frac{1}{2}\,\sqrt {2}&0\end {array} \right] \end{align*}
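The claimed transformation can be checked numerically: $\mathbf{T}_n^T\,\mathbf{T}_n$ should equal the $2\times 2$ identity away from the poles, and at $(\pi/4, 0)$ it should reproduce the matrix above (my sketch, not part of the answer):

```python
import numpy as np

def Tn(theta, phi):
    """J(theta, phi) times T = diag(1, 1/sin(theta)), as in the answer."""
    J = np.array([
        [np.cos(phi) * np.cos(theta), -np.sin(phi) * np.sin(theta)],
        [np.sin(phi) * np.cos(theta),  np.cos(phi) * np.sin(theta)],
        [-np.sin(theta),               0.0],
    ])
    T = np.diag([1.0, 1.0 / np.sin(theta)])
    return J @ T

# at the point of interest, and at a few other points away from the poles,
# the pulled-back metric T_n^T T_n is the 2x2 identity
for theta, phi in [(np.pi / 4, 0.0), (1.0, 0.3), (2.0, -1.2)]:
    M = Tn(theta, phi)
    assert np.allclose(M.T @ M, np.eye(2))

print(Tn(np.pi / 4, 0.0))   # matches the matrix given at (pi/4, 0)
```

Note this only rescales the frame pointwise; it does not produce a global flat coordinate chart on the sphere, which is why normal coordinates are only *locally* flat.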
{ "language": "en", "url": "https://physics.stackexchange.com/questions/718912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Comparing Static Frictions In this figure, which of the static frictional forces will be more? My aim isn't to solve this particular problem but to learn how is static friction distributed . Since each of the rough-surfaces are perfectly capable of providing the $-1N$ horizontal frictional force but why don't they ? This is kind of ambiguity that who will provide a bigger share in total static friction. And as the surface have different $\mu$, so we can't even invoke symmetry.
The contact forces with two blocks are $N_1 = m_1 g + m_2 g$ for the bottom block (to the floor) and $N_2 = m_2 g$ for the top block (to the 1st block). The available traction is $F^\star_1 = \mu_1 (m_1+m_2)\,g$ and $F^\star_2 = \mu_2 m_2\, g$ or $$ \begin{pmatrix}F_1^\star\\F_2^\star\end{pmatrix} = \begin{bmatrix}1&-1\\0&1\end{bmatrix} ^{-1} \begin{pmatrix}\mu_1 m_1\,g\\\mu_2 m_2\,g\end{pmatrix} = \begin{pmatrix}\mu_1 (m_1+m_2)\,g\\\mu_2 m_2\,g\end{pmatrix} $$ The balance of horizontal forces is $$ \boxed{ \begin{pmatrix}P_1\\P_2\end{pmatrix} - \begin{bmatrix}1&-1\\0&1\end{bmatrix} \begin{pmatrix}F_1\\F_2\end{pmatrix} = \begin{pmatrix}m_1 \ddot{x}_1\\m_2 \ddot{x}_2\end{pmatrix}} $$ where $P_1$, $P_2$ are any applied forces on the blocks (in your case $P_1=1N,\; P_2=0N$) and $F_1$, $F_2$ are the friction forces. Here comes the fun part: Assume blocks are sticking and solve for the required friction $F_1$, $F_2$ when $\ddot{x}_1=\ddot{x}_2=0$ $$ \begin{pmatrix}F_1\\F_2\end{pmatrix}_{stick} = \begin{bmatrix}1&-1\\0&1\end{bmatrix} ^{-1} \begin{pmatrix}P_1\\P_2\end{pmatrix} = \begin{pmatrix}P_1+P_2\\P_2\end{pmatrix} $$ Find the cases where required friction exceeds traction $$ \begin{pmatrix}F_1\\F_2\end{pmatrix}_{stick} > \begin{pmatrix}F_1^\star\\F_2^\star\end{pmatrix} = \begin{pmatrix}\mu_1 (m_1+m_2)\,g\\\mu_2 m_2\,g\end{pmatrix} $$ For those cases set $F_i = F_i^\star$ otherwise set $\ddot{x}_i = \ddot{x}_{i-1}$ and solve the balance of horizontal forces. 
Example 1, All slipping: $$ \begin{pmatrix}P_1\\P_2\end{pmatrix} - \begin{bmatrix}1&-1\\0&1\end{bmatrix} \begin{pmatrix}\mu_1 (m_1+m_2)\,g\\\mu_2 m_2\,g\end{pmatrix} = \begin{pmatrix}m_1 \ddot{x}_1\\m_2 \ddot{x}_2\end{pmatrix} $$ to be solved for $\ddot{x}_1$ and $\ddot{x}_2$ Example 2, All sticking: $$\begin{pmatrix}P_1\\P_2\end{pmatrix} - \begin{bmatrix}1&-1\\0&1\end{bmatrix} \begin{pmatrix}F_1\\F_2\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}$$ to be solved for $F_1$ and $F_2$ Example 3, Bottom slipping, top sticking: $$ \begin{pmatrix}P_1\\P_2\end{pmatrix} - \begin{bmatrix}1&-1\\0&1\end{bmatrix} \begin{pmatrix}\mu_1 (m_1+m_2)\,g\\ F_2\end{pmatrix} = \begin{pmatrix}m_1 \ddot{x}_1\\m_2 \ddot{x}_1\end{pmatrix} $$ to be solved for $F_2$ and $\ddot{x}_1$. The matrix $A=\begin{bmatrix}1&-1\\0&1\end{bmatrix}$ is the connectivity matrix, and it can be expanded if you have more blocks. See my full solution here of similar problem in more detail: https://physics.stackexchange.com/a/79182/392
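The stick/slip bookkeeping above can be turned into a few lines of code. The masses and friction coefficients below are illustrative assumptions, since the figure only fixes $P_1 = 1\,$N and $P_2 = 0\,$N (a sketch, my addition):

```python
import numpy as np

def block_stack(P1, P2, m1, m2, mu1, mu2, g=9.81):
    """Stick/slip decision for two stacked blocks, following the boxed balance.

    m1, m2, mu1, mu2 are illustrative assumptions; the question only
    fixes the applied forces P1 = 1 N and P2 = 0 N.
    """
    A = np.array([[1.0, -1.0], [0.0, 1.0]])                # connectivity matrix
    F_max = np.array([mu1 * (m1 + m2) * g, mu2 * m2 * g])  # available traction

    # friction required for everything to stick: solve A F = P with zero accel
    F_stick = np.linalg.solve(A, np.array([P1, P2]))       # = (P1 + P2, P2)

    if np.all(np.abs(F_stick) <= F_max):
        return "stick", F_stick
    return "slip somewhere", F_stick

state, F = block_stack(P1=1.0, P2=0.0, m1=1.0, m2=1.0, mu1=0.2, mu2=0.3)
print(state, F)   # sticks: required friction (1, 0) N is well below traction
```

For these values the bottom surface supplies the whole 1 N and the top interface supplies none — which is the answer to the original "who provides the share" question in the sticking regime.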
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Moment of inertia of a hollow sphere wrt the centre? I've been trying to compute the moment of inertia of a uniform hollow sphere (thin walled) wrt the centre, but I'm not quite sure what was wrong with my initial attempt (I've come to the correct answer now with a different method). Ok, here was my first method: Consider a uniform hollow sphere of radius $R$ and mass $M$. On the hollow sphere, consider a concentric ring of radius $r$ and thickness $\text{d}x$. The mass of the ring is therefore $\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi r\cdot\text{d}x$. Now, use $r^2 = R^2 - x^2:$ $$\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi \left(R^2 - x^2 \right)^{1/2}\text{d}x$$ and the moment of inertia of a ring wrt the centre is $I = MR^2$, therefore: $$\text{d}I = \text{d}m\cdot r^2 = \frac{M}{4\pi R^2}\cdot 2\pi\left(R^2 - x^2\right)^{3/2}\text{d}x $$ Integrating to get the total moment of inertia: $$I = \int_{-R}^{R} \frac{M}{4\pi R^2} \cdot 2\pi\cdot \left(R^2 - x^2\right)^{3/2}\ \text{d}x = \frac{3MR^2 \pi}{16}$$ which obviously isn't correct as the real moment of inertia wrt the centre is $\frac{2MR^2}{3}$. What was wrong with this method? Was it how I constructed the element? Any help would be appreciated, thanks very much.
The mass of the ring is wrong. The ring ends up at an angle, so its total width is not $dx$ but $\frac{dx}{\sin\theta}$. You made what I believe was a typo when you wrote $$\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi \left(R^2 - x^2 \right)\text{d}x$$ because based on what you wrote further down, you intended to write $$\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi \sqrt{\left(R^2 - x^2 \right)}\text{d}x$$ This problem is much better done in polar coordinates - instead of $x$, use $\theta$. But the above is the basic reason why you went wrong. In essence, $\sin\theta=\frac{r}{R}$ so you could write $$\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi \frac{r}{\sin\theta} \ \text{d}x \\ = \frac{M}{4\pi R^2}\cdot 2\pi \frac{r}{\frac{r}{R}} \ \text{d}x\\ = \frac{M}{4\pi R^2}\cdot 2\pi R \ \text{d}x\\ = \frac{M}{2 R} \ \text{d}x$$ Now we can substitute this into the integral: $$I = \int_{-R}^{R} \frac{M}{2 R} \cdot \left(R^2 - x^2\right)\ \text{d}x \\ = \frac{M}{2R}\left[{2R^3-\frac23 R^3}\right]\\ = \frac23 M R^2$$
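A quick numerical check (not part of the original answer) confirms both values for $M=R=1$: the corrected mass element gives $\tfrac{2}{3}MR^2$, while the flawed element of the question reproduces its $\tfrac{3\pi MR^2}{16}$:

```python
import numpy as np

# Midpoint-rule check of both integrals for M = R = 1.
M, R, n = 1.0, 1.0, 400_000
x = np.linspace(-R, R, n + 1)
xm = 0.5 * (x[:-1] + x[1:])                 # midpoints
dx = 2 * R / n

I_correct = np.sum((M / (2 * R)) * (R**2 - xm**2)) * dx
I_flawed = np.sum((M / (4 * np.pi * R**2)) * 2 * np.pi * (R**2 - xm**2) ** 1.5) * dx
print(I_correct, 2 / 3)            # ≈ 0.6667 = (2/3) M R²
print(I_flawed, 3 * np.pi / 16)    # ≈ 0.5890 = 3π M R²/16
```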
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Physical reason for Lorentz Transformation Seeing the mathematical derivation of the Lorentz Transformation for time coordinates of an event for two observers we get the term $$t'=-\frac{v/c^2}{\sqrt{1-\frac{v^2}{c^2}}}x+\frac1{\sqrt{1-\frac{v^2}{c^2}}}t$$ Now how to make sense physically of the $t-\frac{vx}{c^2}$ factor. I am looking for an argument along the lines of the following. When relating spatial coordinates, one observer measures the length separation between an event and the second observer in his frame and tells the other observer that this should be your length, which the second observer denies due to relativity of simultaneity and multiplies by the $\gamma$ factor to get the correct length.
The physical reason IS the constancy of the velocity of light... since I'm writing on a tablet the answer won't be complete, but expect it to get you to the mathematical crossroad. Constancy of the velocity of light implies that \begin{equation} \frac{d|\vec{x}|}{dt} = c, \quad\Rightarrow\quad d|\vec{x}| = c\,dt. \end{equation} Since $d|\vec{x}| = \sqrt{dx^2 + dy^2 + dz^2}$, it follows that \begin{equation} dx^2 + dy^2 + dz^2 = c^2\,dt^2 \quad\Rightarrow\quad dx^2 + dy^2 + dz^2 - c^2\,dt^2 = 0. \end{equation} From here it is straightforward to see that the set (or group) of transformations preserving this quantity are those known as Lorentz transformations. Now I leave you to analyze the "generalization" to nonvanishing intervals, for massive particles. Hint: define a four-dimensional metric! (continuation... after a few days) The interval As exposed previously, the physical condition of constancy of the speed of light leads to the conclusion that All equivalent observers are connected through a transformation which preserves the condition $$dx^2 + dy^2 + dz^2 - c^2\,dt^2 = 0.$$ This can be generalized to the preservation of the quantity $$ I = dx^2 + dy^2 + dz^2 - c^2\,dt^2, $$ called the interval. Notice that the interval can be written as $$ I = dX^t\, \eta\, dX = \begin{pmatrix} c\,dt & dx & dy & dz \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c\,dt \\ dx \\ dy \\ dz \end{pmatrix} $$ Invariance of the interval In order for the interval to be invariant under a transformation $X' = M\, X$, one needs \begin{align} I &= I' \notag \\ X^t\, \eta\, X &= (M\, X)^t\, \eta\, M\,X \notag \\ &= X^t\, M^t\, \eta\, M\,X \notag \\ \Rightarrow\quad \eta &= M^t\, \eta\, M. \tag{*} \end{align} Therefore the problem is to find a set of transformations $M$ satisfying Eq. (*). 
Two-dimensional case Finding a general 4 by 4 matrix $M$ preserving the Minkowski metric ($\eta$) requires a lot of algebra, but one can easily find the transformation preserving the 2 by 2 restriction to the $(ct,x)$-plane. Propose a matrix $$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$ and solve the equation $$ \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} a & c \\ b & d \end{pmatrix} \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} -a^2+c^2 & -ab+cd \\ -ab+cd & -b^2+d^2 \end{pmatrix}, $$ which is simple if one uses the identity $\cosh^2\theta - \sinh^2\theta = 1$, and the condition $M(\theta\to 0) = \mathbf{1}$. Thus, $$M = \begin{pmatrix} \cosh\theta & -\sinh\theta \\ -\sinh\theta & \cosh\theta \end{pmatrix}.$$ Relation with the velocity In a similar fashion to Euclidean geometry, in which $$ \frac{y}{x} = \tan\theta, $$ one uses the transformation $M$ above to relate the $ct$ coordinate with the $x$ coordinate $$ \frac{v}{c} \equiv \frac{x}{ct} = \mathop{\mathrm{tanh}}\theta. $$ Now, \begin{align} \mathop{\mathrm{tanh}^2}\theta &= 1 - \mathop{\mathrm{sech}^2}\theta \notag \\ &= 1 - \tfrac{1}{\cosh^2\theta} \notag \\ \Rightarrow\quad \cosh\theta &= \frac{1}{\sqrt{1 - \left(\frac{v}{c}\right)^2}} \notag \\ \sinh\theta &= \frac{\frac{v}{c}}{\sqrt{1 - \left(\frac{v}{c}\right)^2}}. \notag \end{align} Finally, from the relation $X' = M\, X$, one obtains the usual relations \begin{align} x' &= -\sinh\theta\cdot ct +\cosh\theta\cdot x \notag\\ &= \frac{1}{\sqrt{1 - \left(\frac{v}{c}\right)^2}}\left( x - vt \right) \notag \\ t' &= \cosh\theta \cdot t -\sinh\theta\cdot \tfrac{x}{c} \notag\\ &= \frac{1}{\sqrt{1 - \left(\frac{v}{c}\right)^2}}\left( t - \tfrac{v}{c^2}x \right). \notag \end{align}
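The algebra above is easy to verify numerically, e.g. checking $M^T\eta M=\eta$ and the resulting $x'$, $t'$ formulas in units with $c=1$ (sample values are mine):

```python
import numpy as np

# Check M(θ)ᵀ η M(θ) = η and the resulting primed coordinates, with c = 1.
v = 0.6                                   # sample velocity, |v| < c = 1
theta = np.arctanh(v)                     # rapidity, tanh θ = v/c
ch, sh = np.cosh(theta), np.sinh(theta)
Mb = np.array([[ch, -sh], [-sh, ch]])     # 2D boost matrix
eta = np.diag([-1.0, 1.0])

t, x = 2.0, 1.0
ct_p, x_p = Mb @ np.array([t, x])         # X' = M X with X = (ct, x), c = 1
gamma = 1 / np.sqrt(1 - v**2)

print(np.allclose(Mb.T @ eta @ Mb, eta))        # interval preserved
print(np.allclose(x_p, gamma * (x - v * t)))    # x' = γ (x − v t)
print(np.allclose(ct_p, gamma * (t - v * x)))   # t' = γ (t − v x / c²)
```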
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Stability and Laplace's equation Consider four positive charges of magnitude $q$ at four corners of a square and another charge $Q$ placed at the origin. What can we say about the stability at this point? My attempt goes like this. I considered 4 charges placed at $(1,0)$, $(0,1)$, $(-1,0)$, $(0,-1)$ and computed the potential and its derivatives. When using the partial derivative test, the result was that the origin is a stable equilibrium position. $$V(x,y)=k[\frac{1}{\sqrt{(x-1)^2 + y^2}}+\frac{1}{\sqrt{(x+1)^2 + y^2}}+\frac{1}{\sqrt{x^2 + (y-1)^2}}+\frac{1}{\sqrt{x^2 + (y+1)^2}}] $$ $$\partial_x V= -k[\frac{x-1}{((x-1)^2 + y^2)^\frac{3}{2}}+ \frac{x+1}{((x+1)^2 + y^2)^\frac{3}{2}} + \frac{x}{(x^2 + (y-1)^2)^\frac{3}{2}} + \frac{x}{(x^2 + (y+1)^2)^\frac{3}{2}}] $$ $$\partial_{xx} V= k[\frac{2(x-1)^2 -y^2}{((x-1)^2 + y^2)^\frac{5}{2}} + \frac{2(x+1)^2 -y^2}{((x+1)^2 + y^2)^\frac{5}{2}} + \frac{2x^2 -(y-1)^2}{(x^2 + (y-1)^2)^\frac{5}{2}} +\frac{2x^2 -(y+1)^2}{(x^2 + (y+1)^2)^\frac{5}{2}}] $$ $$\partial_{yx} V= 3k[\frac{(x-1)y}{((x-1)^2 + y^2)^\frac{5}{2}} + \frac{(x+1)y}{((x+1)^2 + y^2)^\frac{5}{2}} + \frac{x(y-1)}{(x^2 + (y-1)^2)^\frac{5}{2}} +\frac{x(y+1)}{(x^2 + (y+1)^2)^\frac{5}{2}}] $$ $ \partial_{yy}$ is the same as $\partial_{xx}$ except for an $x$ and $y$ exchange, by symmetry. At the origin (the equilibrium point), $\partial_{xx}$ and $\partial_{yy}$ are positive while $\partial_{yx}=\partial_{xy}=0$. Hence by the partial derivative test for stability I have a stable equilibrium: a local minimum of the potential. Now starts my confusion. According to what I learnt, Laplace's equation $\Delta V=0$ for the potential in a charge-free region (I take the region without charges with the origin in it) implies there can never be a local minimum or maximum within the boundary. This contradicts the above conclusion that we have a minimum of potential. Please help me to see the cause of this contradiction.
The Coulomb Potential is a solution to Laplace's equation in 3 dimensions. In 2 dimensions the equivalent solution is a logarithmic potential. You have written down the Coulomb potential for 4 charges but then treat the problem as 2 dimensional, which is causing your problems. To resolve this you need to add a load of $z^2$s to your potential. If you consider placing a fifth charge at the origin it will be repelled by each of the 4 original charges, so it is not surprising that the forces acting on it in the xy plane push it back to the centre. It is, however, clearly unstable in the z direction as it is repelled by the entire existing arrangement of charges.
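A finite-difference check (my own units, $kq=1$, charges at unit distance) makes this concrete: the origin is a minimum in $x$ and $y$ but a maximum in $z$, and the three second derivatives sum to zero as Laplace's equation requires:

```python
import numpy as np

# Four unit charges at (±1, 0, 0) and (0, ±1, 0); 3D Coulomb potential.
charges = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

def V(x, y, z):
    return sum(1.0 / np.sqrt((x - a)**2 + (y - b)**2 + (z - c)**2)
               for a, b, c in charges)

h = 1e-4
Vxx = (V(h, 0, 0) - 2 * V(0, 0, 0) + V(-h, 0, 0)) / h**2
Vyy = (V(0, h, 0) - 2 * V(0, 0, 0) + V(0, -h, 0)) / h**2
Vzz = (V(0, 0, h) - 2 * V(0, 0, 0) + V(0, 0, -h)) / h**2
print(Vxx, Vyy, Vzz)        # ≈ 2, 2, −4: stable in x, y, unstable in z
print(Vxx + Vyy + Vzz)      # ≈ 0, as Laplace's equation demands
```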
{ "language": "en", "url": "https://physics.stackexchange.com/questions/198094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the minimal G-force curve in 2-dimensional space? Given two parallel roads, which need to be connected, what shape of curve would produce the minimum overall horizontal G-force(s) on travelers? Is it a $sin$ or $cos$ wave? Is it a basic cubic function? Is it something else? I'm working on an engineering project, not actually involving roads, but the road analogy is easier to understand than my actual project (it involves more complex topics, like aerodynamics, which would just confuse the problem needlessly).
The answer is two arcs. One arc with a constant gee loading in one direction and then flipping to the opposite direction. This is called the bang-bang method, and it is not very smooth, but the gee forces never exceed the specified maximum. Given a path $y(x)$ the instantaneous radius of curvature at each $x$ is $$ \rho = \frac{ \left(1+ \left(\frac{{\rm d}y}{{\rm d}x}\right)^2 \right)^\frac{3}{2} }{ \frac{{\rm d}^2 y}{{\rm d}x^2} } $$ The lateral acceleration is $a_L = \frac{v^2}{\rho}$ so we are comparing paths using the parameter $\gamma = \frac{L}{\rho}$. Here are some possible curves (use $L$ for the transition length, and $W$ for the step width) $$ \begin{align} y(x) &= \tfrac{W}{2} \sin \left(\frac{\pi x}{L} \right) & \text{harmonic}\\ y(x) &= \begin{cases} -\frac{L^2-W^2}{4 W} + \sqrt{ \left( \frac{(L^2+W^2)^2}{16 W^2}-\left( \frac{L}{2}+x \right)^2 \right)} & x<0 \\ \frac{L^2-W^2}{4 W} - \sqrt{ \left( \frac{(L^2+W^2)^2}{16 W^2}-\left( \frac{L}{2}-x \right)^2 \right)} & x>0 \end{cases} & \text{arcs} \\ y(x) &=- \tfrac{W}{2} \frac{{\rm erf}\left(\frac{2 \pi x}{L} \right)}{{\rm erf}(\pi)} & \text{smooth} \end{align} $$ Above ${\rm erf}(x)$ is the error function $$\begin{align} \frac{L}{\rho(x)} & = \frac{4\pi^2 L^2 W \sin\left( \frac{\pi x}{L} \right)}{ \left( \pi^2 W^2 \cos^2 \left(\frac{\pi x}{L}\right)+4 L^2\right)^\frac{3}{2}} & \text{harmonic} \\ \frac{L}{\rho} & = \pm \frac{4 L W}{L^2+W^2} & \text{arcs} \\ \frac{L}{\rho(x)} & = \frac{ 16 L W x \pi^\frac{5}{2} {\rm e}^\frac{8 \pi^2 x^2}{L^2} {\rm erf}(\pi)^2}{\left( L^2 {\rm e}^\frac{8 \pi^2 x^2}{L^2} {\rm erf}(\pi)^2+4 \pi W^2\right)^\frac{3}{2}} & \text{smooth} \end{align} $$ The peak for the harmonic is $\frac{L}{\rho} = \frac{\pi^2 W}{2 L}$ at $x=\frac{L}{2}$, which is always a higher value than the arcs solution. The peak for the smooth is not easy to find analytically, but for some test cases I looked at it was much higher than the arcs solution.
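A numerical comparison of the peak $L/\rho$ for the three curves supports this ordering; $L=10$, $W=1$ are sample values of my own, not from the question:

```python
import math
import numpy as np

# Peak load parameter L/ρ for the three transition curves; L, W are samples.
L, W = 10.0, 1.0
x = np.linspace(-L / 2, L / 2, 40001)

def peak_gamma(y):
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    return np.max(np.abs(L * d2y / (1 + dy**2) ** 1.5))

erf = np.vectorize(math.erf)
g_harm = peak_gamma((W / 2) * np.sin(np.pi * x / L))
g_smooth = peak_gamma(-(W / 2) * erf(2 * np.pi * x / L) / math.erf(math.pi))
g_arcs = 4 * L * W / (L**2 + W**2)
print(g_arcs, g_harm, g_smooth)   # the arcs carry the smallest peak load
```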
{ "language": "en", "url": "https://physics.stackexchange.com/questions/250905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Product of two Pauli matrices for two spin $1/2$ In the lecture, my professor wrote this on the board $$ \begin{equation} \begin{split} (\vec{\sigma}_{1}\cdot\vec{\sigma}_{2})|++\rangle &= |++\rangle \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(\blacktriangledown)\\ (\vec{\sigma}_{1}\cdot\vec{\sigma}_{2})(|+-\rangle+|-+\rangle) &= (|+-\rangle+|-+\rangle)\\ (\vec{\sigma}_{1}\cdot\vec{\sigma}_{2})(|+-\rangle-|-+\rangle) &= -3(|+-\rangle-|-+\rangle) \end{split} \end{equation} $$ but I don't get how these are correct. I know that $$ \begin{equation} \begin{split} |1\;1\rangle &= |++\rangle \\ |1\;0\rangle &= \frac{1}{\sqrt{2}}(|+-\rangle+|-+\rangle) \\ |0\;0\rangle &= \frac{1}{\sqrt{2}}(|+-\rangle-|-+\rangle) \end{split} \end{equation} $$ I will work out equation $(\blacktriangledown)$ in the usual matrix representation of the eigenstates of $S_z$ basis: $$ |+\rangle=\begin{pmatrix}1\\ 0 \end{pmatrix},\;\;\;\;\;\;\;\;\;\;\;\;\;\;|-\rangle=\begin{pmatrix}0\\ 1 \end{pmatrix}, $$ So we have $$ \begin{equation} \begin{split} (\vec{\sigma}_{1}\cdot\vec{\sigma}_{2})|+\rangle_{1}\otimes|+\rangle_{2}&=&\vec{\sigma}_{1}|+\rangle_{1}\otimes\vec{\sigma}_{2}|+\rangle_{2}\\&=&\begin{pmatrix}1 & 1-i\\ 1+i & -1 \end{pmatrix}_{1}\begin{pmatrix}1\\ 0 \end{pmatrix}_{1}\otimes\begin{pmatrix}1 & 1-i\\ 1+i & -1 \end{pmatrix}_{2}\begin{pmatrix}1\\ 0 \end{pmatrix}_{2}\\&=&\begin{pmatrix}1\\ 1+i \end{pmatrix}_{1}\otimes\begin{pmatrix}1\\ 1+i \end{pmatrix}_{2} \end{split} \end{equation} $$ but this is not $|++\rangle=|+\rangle\otimes|+\rangle$. What did I do wrong here? What have I misunderstood?
Your expression for: $$(\vec \sigma_1 \cdot \vec \sigma_2) |+\rangle_1 \otimes |+\rangle_2=\vec \sigma_1 |+\rangle_1\otimes \vec \sigma_2 |+\rangle_2$$ is wrong. It should read: $$(\vec \sigma_1 \cdot \vec \sigma_2) |+\rangle_1 \otimes |+\rangle_2=\sigma_{1x}|+\rangle_1\otimes \sigma_{2x}|+\rangle_2+\sigma_{1y}|+\rangle_1\otimes \sigma_{2y}|+\rangle_2+$$ $$\sigma_{1z}|+\rangle_1\otimes \sigma_{2z}|+\rangle_2$$ $$=\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}$$ $$+\begin{pmatrix}0&-i\\i&0\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}0&-i\\i&0\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}$$ $$+\begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}$$ $$=\begin{pmatrix} 0 \\1 \end{pmatrix}\otimes\begin{pmatrix} 0 \\1 \end{pmatrix}$$ $$+\begin{pmatrix} 0 \\i \end{pmatrix}\otimes \begin{pmatrix} 0 \\i\end{pmatrix}$$ $$+\begin{pmatrix} 1 \\0 \end{pmatrix}\otimes \begin{pmatrix} 1\\0\end{pmatrix}$$ $$=\begin{pmatrix} 0 \\1 \end{pmatrix}\otimes\begin{pmatrix} 0 \\1 \end{pmatrix}$$ $$-\begin{pmatrix} 0 \\1 \end{pmatrix}\otimes \begin{pmatrix} 0 \\1\end{pmatrix}$$ $$+\begin{pmatrix} 1 \\0 \end{pmatrix}\otimes \begin{pmatrix} 1\\0\end{pmatrix}$$ $$=\begin{pmatrix} 1 \\0 \end{pmatrix}\otimes \begin{pmatrix} 1\\0\end{pmatrix}$$ i.e. I think you have to expand the dot product into its three components and put each pair of components through the tensor product before summing.
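One can confirm all three blackboard identities directly with explicit Kronecker products:

```python
import numpy as np

# σ₁·σ₂ as a 4×4 matrix: sum of Kronecker products of each Pauli component.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
dot = sum(np.kron(s, s) for s in (sx, sy, sz))

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
upup = np.kron(up, up)
triplet = np.kron(up, dn) + np.kron(dn, up)
singlet = np.kron(up, dn) - np.kron(dn, up)

print(np.allclose(dot @ upup, upup))              # eigenvalue +1
print(np.allclose(dot @ triplet, triplet))        # eigenvalue +1
print(np.allclose(dot @ singlet, -3 * singlet))   # eigenvalue −3
```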
{ "language": "en", "url": "https://physics.stackexchange.com/questions/260916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Angular velocity by velocities of 3 particles of the solid Velocities of 3 particles of the solid, which don't lie on a single straight line, $V_1, V_2, V_3$ are given (as vector-functions). Radius-vectors $r_1, r_2$ from the third particle to the first and second are given as well. How could I find the angular velocity $w$ of the solid? I tried to solve this problem using Euler's theorem: $V_2=V_3+[w \times r_2]$, $V_1=V_3+[w \times r_1]$. After this step I tried to consider different cases: if $V_1 $ is not collinear to $V_2$ we could write $w = k*[(V_2-V_3) \times (V_1-V_3)]$. However, it doesn't really help. The second case is even more difficult to analyze. My second attempt consisted in multiplying these equations (via scalar or vector products) by appropriate vectors. However, I didn't really succeed.
The algebra is not especially nice, but it is just algebra. This is rigid body rotation, taking point 3 as the origin of coordinates, so effectively $$\mathbf{r}_1=\mathbf{R}_1-\mathbf{R}_3, \qquad \mathbf{r}_2=\mathbf{R}_2-\mathbf{R}_3. $$ We start as you suggested, and abbreviate $$ \mathbf{v}_1=\mathbf{V}_1-\mathbf{V}_3, \qquad \mathbf{v}_2=\mathbf{V}_2-\mathbf{V}_3, $$ so that $$ \mathbf{v}_1 = \boldsymbol{\omega}\times\mathbf{r}_1, \qquad \mathbf{v}_2 = \boldsymbol{\omega}\times\mathbf{r}_2. $$ Now since the three points are not collinear, we can let $$ \boldsymbol{\omega} = a\,\mathbf{r}_1 + b\,\mathbf{r}_2 + c\, \mathbf{r}_1\times\mathbf{r}_2 $$ but we must remember that $\mathbf{r}_1$ and $\mathbf{r}_2$ will not in general be orthogonal. We can obtain $c$ directly, from either of the two equivalent equations \begin{align*} \mathbf{r}_2\cdot\mathbf{v}_1 &= \mathbf{r}_2\cdot\boldsymbol{\omega}\times\mathbf{r}_1 = \boldsymbol{\omega}\cdot\mathbf{r}_1\times\mathbf{r}_2 = c |\mathbf{r}_1\times\mathbf{r}_2|^2 \\ \mathbf{r}_1\cdot\mathbf{v}_2 &= \mathbf{r}_1\cdot\boldsymbol{\omega}\times\mathbf{r}_2 = -\boldsymbol{\omega}\cdot\mathbf{r}_1\times\mathbf{r}_2 = -c |\mathbf{r}_1\times\mathbf{r}_2|^2 \\ \Rightarrow\quad c&= \frac{\mathbf{r}_2\cdot\mathbf{v}_1}{|\mathbf{r}_1\times\mathbf{r}_2|^2} = -\frac{\mathbf{r}_1\cdot\mathbf{v}_2}{|\mathbf{r}_1\times\mathbf{r}_2|^2} \end{align*} where we took advantage of the properties of the scalar triple product. The other coefficients come from scalar products with $\mathbf{r}_1\times\mathbf{r}_2$. We use the general identity $$ (\mathbf{A}\times\mathbf{B})\cdot(\mathbf{C}\times\mathbf{D}) = (\mathbf{A}\cdot\mathbf{C})\,(\mathbf{B}\cdot\mathbf{D}) - (\mathbf{B}\cdot\mathbf{C})\,(\mathbf{A}\cdot\mathbf{D}) $$ and a special case of this, which we use, is $|\mathbf{r}_1\times\mathbf{r}_2|^2=|\mathbf{r}_1|^2|\mathbf{r}_2|^2-(\mathbf{r}_1\cdot\mathbf{r}_2)^2$. 
\begin{align*} \mathbf{r}_1\times\mathbf{r}_2 \cdot \mathbf{v}_2 &= (\mathbf{r}_1\times\mathbf{r}_2 ) \cdot (\boldsymbol{\omega}\times\mathbf{r}_2) \\ &= \left( a|\mathbf{r}_1|^2 + b(\mathbf{r}_1\cdot\mathbf{r}_2) \right)\, |\mathbf{r}_2|^2- \left( a(\mathbf{r}_1\cdot\mathbf{r}_2) + b|\mathbf{r}_2|^2 \right)\, (\mathbf{r}_1\cdot\mathbf{r}_2) \\ &= a |\mathbf{r}_1\times\mathbf{r}_2|^2 \\ \Rightarrow\quad a&=\frac{\mathbf{r}_1\times\mathbf{r}_2 \cdot \mathbf{v}_2 }{|\mathbf{r}_1\times\mathbf{r}_2|^2} \\ \mathbf{r}_1\times\mathbf{r}_2 \cdot \mathbf{v}_1 &= (\mathbf{r}_1\times\mathbf{r}_2 ) \cdot (\boldsymbol{\omega}\times\mathbf{r}_1) \\ &= \left( a|\mathbf{r}_1|^2 + b(\mathbf{r}_1\cdot\mathbf{r}_2) \right)\,(\mathbf{r}_1\cdot\mathbf{r}_2) - \left( a(\mathbf{r}_1\cdot\mathbf{r}_2) + b|\mathbf{r}_2|^2 \right) \, |\mathbf{r}_1|^2 \\ &= -b |\mathbf{r}_1\times\mathbf{r}_2|^2 \\ \Rightarrow\quad b &=-\frac{\mathbf{r}_1\times\mathbf{r}_2 \cdot \mathbf{v}_1}{|\mathbf{r}_1\times\mathbf{r}_2|^2} \end{align*} I hope I haven't made any slips, you should definitely check!
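A quick numerical check of the final formulas (random test data of my own): build $\mathbf{v}_i = \boldsymbol{\omega}\times\mathbf{r}_i$ from a known $\boldsymbol{\omega}$ and recover it from the $a$, $b$, $c$ coefficients.

```python
import numpy as np

# Pick a random ω and two position vectors, form v₁ = ω × r₁, v₂ = ω × r₂,
# then reconstruct ω = a r₁ + b r₂ + c (r₁ × r₂) with the derived coefficients.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
r1, r2 = rng.normal(size=3), rng.normal(size=3)
v1, v2 = np.cross(w, r1), np.cross(w, r2)

n = np.cross(r1, r2)
n2 = n @ n                    # |r₁ × r₂|²
a = (n @ v2) / n2
b = -(n @ v1) / n2
c = (r2 @ v1) / n2
w_rec = a * r1 + b * r2 + c * n
print(np.allclose(w_rec, w))  # True
```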
{ "language": "en", "url": "https://physics.stackexchange.com/questions/431674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A certain regularization and renormalization scheme In a certain lecture of Witten's about some QFT in $1+1$ dimensions, I came across these two statements of regularization and renormalization, which I could not prove: (1) $\int ^\Lambda \frac{d^2 k}{(2\pi)^2}\frac{1}{k^2 + q_i ^2 \vert \sigma \vert ^2} = - \frac{1}{2\pi} \ln \vert q_i \vert - \frac{1}{2\pi}\ln \frac{\vert \sigma\vert}{\mu}$ (..there was an overall $\sum _i q_i$ in the above but I don't think that is germane to the point..) (2) $\int ^\Lambda \frac{d^2 k}{(2\pi)^2}\frac{1}{k^2 + \vert \sigma \vert ^2} = \frac{1}{2\pi} \left(\ln \frac{\Lambda}{\mu} - \ln \frac{\vert \sigma \vert }{\mu} \right)$ I tried doing dimensional regularization and Pauli–Villars (motivated by seeing that $\mu$, which looks like an IR cut-off) but nothing helped me reproduce the above equations. I would be glad if someone could help prove these two equations.
Let's just look at the integral $$\int \frac{d^2k}{(2\pi)^2} \frac{1}{k^2+\alpha^2}.$$ The other integrals should follow from this one. Introduce the Pauli-Villars regulator, $$\begin{eqnarray*} \int \frac{d^2k}{(2\pi)^2} \frac{1}{k^2+\alpha^2} &\rightarrow& \int \frac{d^2k}{(2\pi)^2} \frac{1}{k^2+\alpha^2} - \int \frac{d^2k}{(2\pi)^2} \frac{1}{k^2+\Lambda^2} \\ &=& (\Lambda^2-\alpha^2)\int \frac{d^2k}{(2\pi)^2} \frac{1}{(k^2+\alpha^2)(k^2+\Lambda^2)} \\ &=& (\Lambda^2-\alpha^2)\int_0^1 dx\, \int\frac{d^2k}{(2\pi)^2} \frac{1}{(k^2 + \beta^2)^2} \\ &=& (\Lambda^2-\alpha^2)\int_0^1 dx\, \frac{1}{2} \frac{2\pi}{(2\pi)^2} \int_0^\infty dk^2\,\frac{1}{(k^2 + \beta^2)^2} \\ &=& (\Lambda^2-\alpha^2) \frac{1}{4\pi} \int_0^1 dx\, \frac{1}{\beta^2} \\ &=& (\Lambda^2-\alpha^2) \frac{1}{4\pi} \int_0^1 dx\, \frac{1}{\Lambda^2 - x(\Lambda^2-\alpha^2)} \\ &=& -\frac{1}{2\pi} \ln \frac{|\alpha|}{\Lambda} \end{eqnarray*}$$ Where we have combined denominators with the Feynman parameter $x$, with the intermediate variable $\beta^2 = \Lambda^2 - x(\Lambda^2-\alpha^2)$. Of course, this could also be approached with dimensional regularization with the same result. Addendum: After regularization we must renormalize. Using the minimal subtraction prescription we find $$\int \frac{d^2k}{(2\pi)^2} \frac{1}{k^2+\alpha^2} \rightarrow -\frac{1}{2\pi} \ln \frac{|\alpha|}{\mu},$$ as required.
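The last Feynman-parameter integral is easy to confirm numerically for sample values of $\alpha$ and $\Lambda$ (chosen by me):

```python
import numpy as np

# Midpoint-rule evaluation of the Feynman-parameter integral,
# (Λ² − α²)/(4π) ∫₀¹ dx / (Λ² − x (Λ² − α²)), vs. −(1/2π) ln(|α|/Λ).
alpha, Lam = 0.7, 50.0
x = np.linspace(0.0, 1.0, 2_000_001)
xm = 0.5 * (x[:-1] + x[1:])        # midpoints
dx = x[1] - x[0]

I_num = (Lam**2 - alpha**2) / (4 * np.pi) \
    * np.sum(dx / (Lam**2 - xm * (Lam**2 - alpha**2)))
I_exact = -np.log(abs(alpha) / Lam) / (2 * np.pi)
print(I_num, I_exact)              # both ≈ 0.6794
```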
{ "language": "en", "url": "https://physics.stackexchange.com/questions/28194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is the spin 1/2 rotation matrix taken to be counterclockwise? The spin 1/2 rotation matrix around the $z$-axis I worked out to be $$ e^{i\theta S_z}=\begin{pmatrix} \exp\frac{i\theta}{2}&0\\ 0&\exp\frac{-i\theta}{2}\\ \end{pmatrix} $$ Is this taken to be anti-clockwise around the $z$-axis?
For your example, we have $e^{i\theta S_z}\mathbf{S}e^{-i\theta S_z}=\begin{pmatrix}\cos\theta & -\sin\theta&0\\\sin\theta & \cos\theta&0\\0&0&1\end{pmatrix}\mathbf{S}$, with $e^{i\theta S_z}=\begin{pmatrix}e^{i\frac{\theta }{2}} & 0\\ 0 & e^{-i\frac{\theta }{2}}\end{pmatrix}$ and $\mathbf{S}=\begin{pmatrix}S_x\\ S_y\\ S_z\end{pmatrix}$ representing the spin-1/2 operators. Comments: In fact, for the most general spin rotation, we have $$U\mathbf{S}U^\dagger=A\mathbf{S}\rightarrow (1)$$, where $U$ represents the general spin rotation operator $U=e^{i\alpha S_z}e^{i\beta S_y}e^{i\gamma S_z}=\begin{pmatrix}\cos{\frac{\beta }{2}}e^{i\frac{\alpha + \gamma}{2}} & \sin{\frac{\beta }{2}}e^{i\frac{\alpha - \gamma}{2}}\\ -\sin{\frac{\beta }{2}}e^{i\frac{\gamma-\alpha}{2}} & \cos{\frac{\beta }{2}}e^{-i\frac{\alpha + \gamma}{2}}\end{pmatrix}\in SU(2)$, and $A=\begin{pmatrix}\cos\alpha \cos\beta\cos\gamma-\sin\alpha\sin\gamma& -\sin\alpha \cos\beta\cos\gamma-\cos\alpha\sin\gamma &\sin\beta\cos\gamma\\ \cos\alpha \cos\beta\sin\gamma+\sin\alpha\cos\gamma & -\sin\alpha \cos\beta\sin\gamma+\cos\alpha\cos\gamma&\sin\beta\sin\gamma\\-\cos\alpha\sin\beta&\sin\alpha\sin\beta&\cos\beta\end{pmatrix}$ $\in SO(3)$ with the three Euler angles $\alpha,\beta,\gamma$. Eq.(1) gives the map from $SU(2)$ to $SO(3)$ and the relation $SO(3)\cong SU(2)/Z_2.$ Remarks: $e^{i\theta S_x}=\begin{pmatrix}\cos{\frac{\theta }{2}} & i\sin{\frac{\theta }{2}}\\ i\sin{\frac{\theta }{2}} & \cos{\frac{\theta }{2}} \end{pmatrix},e^{i\theta S_y}=\begin{pmatrix}\cos{\frac{\theta }{2}} & \sin{\frac{\theta }{2}}\\ -\sin{\frac{\theta }{2}} & \cos{\frac{\theta }{2}}\end{pmatrix},e^{i\theta S_z}=\begin{pmatrix}e^{i\frac{\theta }{2}} & 0\\ 0 & e^{-i\frac{\theta }{2}}\end{pmatrix}.$
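The conjugation identity $e^{i\theta S_z}\mathbf{S}e^{-i\theta S_z}$ can be checked numerically component by component, with $\mathbf{S}=\boldsymbol{\sigma}/2$:

```python
import numpy as np

# Spin-1/2 operators S = σ/2 and the z-axis conjugation U S U† = R_z(θ) S.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

theta = 0.73                                                     # sample angle
U = np.diag([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)])   # e^{iθ S_z}
Ud = U.conj().T

print(np.allclose(U @ sx @ Ud, np.cos(theta) * sx - np.sin(theta) * sy))
print(np.allclose(U @ sy @ Ud, np.sin(theta) * sx + np.cos(theta) * sy))
print(np.allclose(U @ sz @ Ud, sz))
```

The three checks reproduce exactly the rows of the $3\times 3$ rotation matrix in the first displayed equation.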
{ "language": "en", "url": "https://physics.stackexchange.com/questions/91483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Apparent dimensional mismatch after taking derivative Suppose I have a variable $x$ and a constant $a$, each having the dimension of length. That is $[x]=[a]=[L]$ where square brackets denote the dimension of the physical quantity contained within them. Now, we wish to take the derivative of $u = log (\frac{x^2}{a^2})-log (\frac{a^2}{x^2})$. Here, we have taken the natural logarithm. It is clear that $u$ is a dimensionless function. $$\frac{du}{dx} = \frac{a^2}{x^2}.\frac{2x}{a^2} - \frac{x^2}{a^2}.(-2a^2).\frac{2x}{x^3} \\ = \frac{1}{x} - 4. $$ Here, the dimensions of the two terms on the right do not match. The dimension of the first term is what I expected. Where am I going wrong?
I think the second half of your derivative is wrong: $ \frac{d}{dx} \log\left( \frac{a^2}{x^2}\right) = \frac{x^2}{a^2} \cdot \frac{d}{dx} \left(a^2 x^{-2}\right) = \frac {x^2} {a^2} \left(-2a^2\right) x^{-3} = \frac{-2}{x} $ which has the correct dimension.
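A finite-difference check (sample values for $a$ and $x$) confirms the coefficient is $-2$, so the derivative is $-2/x$ with dimension $1/L$:

```python
import numpy as np

# Central-difference check that d/dx log(a²/x²) = −2/x.
a, x, h = 3.0, 1.7, 1e-6
num = (np.log(a**2 / (x + h)**2) - np.log(a**2 / (x - h)**2)) / (2 * h)
print(num, -2 / x)    # both ≈ −1.17647
```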
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Line element in Kruskal coordinates I am trying to calculate the line element in Kruskal coordinates; these coordinates keep the Schwarzschild angular coordinates but replace $t$ and $r$ by two new variables. $$ T = \sqrt{\frac{r}{2GM} - 1} \ e^{r/4GM} \sinh \left( \frac{t}{4GM} \right) \\ X = \sqrt{\frac{r}{2GM} - 1} \ e^{r/4GM} \cosh \left( \frac{t}{4GM} \right) $$ Wikipedia gives the resulting line element. $$ ds^2 = \frac{32 G^3M^3}{r} e^{-r/2GM} (-dT^2 + dX^2) + r^2d\Omega^2 $$ I tried to calculate the metric tensor using $ds^2 = g_{ij} \ dx^i dx^j$. As $T$ and $X$ show no dependence on $\theta$ and $\phi$, the $d\Omega$ term seems to make sense, but the calculation of the first component of $g$ was not working. $$ g_{tt} = J^TJ = \frac{\partial T}{\partial t} \frac{\partial T}{\partial t} + \frac{\partial X}{\partial t} \frac{\partial X}{\partial t}\\ = \frac{1}{32} \left( \frac{r}{GM} - 2 \right) \frac{ e^{\frac{1}{2} \frac{r}{GM}}}{G^2M^2} \left( \cosh^2 \left( \frac{t}{4GM} \right) + \sinh^2 \left( \frac{t}{4GM} \right) \right) $$ Is this the right way to compute the line element? What would be a better way to calculate it (maybe starting from the Schwarzschild coordinates)?
I don't think you can derive the line element with the Jacobian $J$ alone. The Kruskal-Szekeres line element Beginning with the Schwarzschild line element: \begin{align*} &\boxed{ds^2 =\left(1-\frac{r_s}{r}\right)\,dt^2-\left(1-\frac{r_s}{r}\right)^{-1}\,dr^2-r^2\,d\Omega^2}\\\\ r_s &:=\frac{2\,G\,M}{c^2} \,,\quad \text{for 2-dimensional space}\\ ds^2 & =\left(1-\frac{r_s}{r}\right)\,dt^2-\left(1-\frac{r_s}{r}\right)^{-1}\,dr^2 \end{align*} Step I) \begin{align*} &\text{for} \quad ds^2=0\\ 0&=\left(1-\frac{r_s}{r}\right)\,dt^2-\left(1-\frac{r_s}{r}\right)^{-1}\,dr^2\,,\Rightarrow\\ \left(\frac{dt}{dr}\right)^2&=\left(1-\frac{r_s}{r}\right)^{-2}\,,\Rightarrow \quad t(r)=\pm\underbrace{\left[r+r_s\ln\left(\frac{r}{r_s}-1\right)\right]}_{r^*}\\ &\Rightarrow\\ \frac{dr^*}{dr}&=\left(1-\frac{r_s}{r}\right)^{-1}\,,\quad \frac{dr}{dr^*}=\left(1-\frac{r_s}{r}\right)\,,&(1) \end{align*} Step II) \begin{align*} &\text{New coordinates}\\ u & =t+r^* \\ v & =t-r^*\\ &\Rightarrow\\ t&=\frac{1}{2}(u+v)\,,\quad dt=\frac{1}{2}(du+dv)\\ r^*&=\frac{1}{2}(u-v)\,,\quad dr^*=\frac{1}{2}(du-dv)\\ dr&=\left(1-\frac{r_s}{r}\right)\,dr^*=\frac{1}{2}\,\left(1-\frac{r_s}{r}\right) (du-dv) \quad\quad(\text{With equation (1)})\\ \Rightarrow \end{align*} \begin{align*} ds^2 &=\left(1-\frac{r_s}{r}\right)\,du\,dv \end{align*} Step III) \begin{align*} r^* & =\left[r+r_s\ln\left(\frac{r}{r_s}-1\right)\right]= \frac{1}{2}(u-v)\,\Rightarrow\\ \left(\frac{r}{r_s}-1\right)&=\exp\left(-\frac{r}{r_s}\right) \,\exp\left(\frac{1}{2\,r_s}(u-v)\right)\\ \left(1-\frac{r_s}{r}\right)&=\frac{r_s}{r}\left(\frac{r}{r_s}-1\right)\\ \,\Rightarrow\\\\ ds^2&=\frac{r_s}{r}\,\exp\left(-\frac{r}{r_s}\right) \,\exp\left(\frac{1}{2\,r_s}(u-v)\right)\,du\,dv \end{align*} Step IV) \begin{align*} &\text{New coordinates}\\ U= & -\exp\left(\frac{u}{2\,r_s}\right) \,,\quad \frac{dU}{du}=-\frac{1}{2\,r_s}\,\exp\left(\frac{u}{2\,r_s}\right)\\ V= & \exp\left(-\frac{v}{2\,r_s}\right) \,,\quad 
\frac{dV}{dv}=-\frac{1}{2\,r_s}\,\exp\left(-\frac{v}{2\,r_s}\right)\\ \,\Rightarrow\\\\ ds^2&=\frac{4\,r_s^3}{r}\exp\left(-\frac{r}{r_s}\right) \,dU\,dV \end{align*} Step V) \begin{align*} &\text{New coordinates}\\ U & =T-X\,,\quad dU=dT-dX \\ V & =T+X\,,\quad dV=dT+dX\\ \,\Rightarrow\\\\ &\boxed{ds^2=\frac{4\,r_s^3}{r}\exp\left(-\frac{r}{r_s}\right) \left(dT^2-dX^2\right)} \end{align*} With Matrices and Vectors The Kruskal-Szekeres line element Beginning with : \begin{align*} ds^2 & =a\,du\,dv\\ &\Rightarrow\\ g&=\frac{1}{2}\begin{bmatrix} 0 & a \\ a & 0 \\ \end{bmatrix}\\\\ q'&=\begin{bmatrix} du \\ dv \\ \end{bmatrix}\,,\quad q=\begin{bmatrix} u \\ v \\ \end{bmatrix} \,,\quad a=\left(1-\frac{r_s}{r}\right) \end{align*} Step I) \begin{align*} R&= \begin{bmatrix} \frac{1}{2}(u+v) \\ \frac{1}{2}(u-v) \\ \end{bmatrix} \,\Rightarrow\quad J_1=\frac{dR}{dq}= \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \\ \end{bmatrix}\\\\ ds^2=&a\,q'^T\,J_1^T\,\eta\,J_1\,q'=a\,du\,dv \end{align*} where $\eta= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\\\\$ Step II) \begin{align*} a&\mapsto \frac{r_s}{r}\,{\rm e}^{-\frac{r}{r_s}}\,{\rm e}^{\frac{u-v}{2\,r_s}} \\\\ ds^2&=a\,du\,dv=\frac{r_s}{r}\,{\rm e}^{-\frac{2\,r-u+v}{2\,r_s}}\,du\,dv \end{align*} Step III) \begin{align*} R & = \begin{bmatrix} -\exp\left(\frac{u}{2\,r_s}\right) \\ \exp\left(-\frac{v}{2\,r_s}\right) \\ \end{bmatrix}\,,\Rightarrow\quad J_2=\left(\frac{dR}{dq}\right)^{-1}=\begin{bmatrix} -\frac{2\,r_s}{\exp\left(\frac{u}{2\,r_s}\right)} & 0 \\ 0 & -\frac{2\,r_s}{\exp\left(-\frac{v}{2\,r_s}\right)} \\ \end{bmatrix}\\\\ ds^2=&q'^T\,J_2^T\,g\,J_2\,q'= \frac{4\,r_s^3\,\exp\left(-\frac{r}{r_s}\right)}{r}\,dU\,dV \,,\quad q'=\begin{bmatrix} dU \\ dV \\ \end{bmatrix} \end{align*} Step IV) \begin{align*} R & = \begin{bmatrix} T-X \\ T+X \\ \end{bmatrix}\,,\Rightarrow\quad J_3=\frac{dR}{dq}=\begin{bmatrix} 1 & -1 \\ 1 & 1 \\ 
\end{bmatrix}\\\\ ds^2=&q'^T\,J_3^T\,J_2^T\,g\,J_2\,J_3\,q' = \frac{4\,r_s^3\,\exp\left(-\frac{r}{r_s}\right)}{r}\left( dT^2-dX^2 \right) \end{align*}
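As a numerical sanity check (my own, in units $G=M=c=1$ so $r_s=2$, and in the question's signature $(-,+)$): push the Kruskal metric back through the coordinate change and compare with the Schwarzschild metric.

```python
import numpy as np

# Map (t, r) → (T, X), build the Jacobian by central differences,
# and pull the Kruskal metric back to (t, r) coordinates.
def TX(t, r):
    f = np.sqrt(r / 2 - 1) * np.exp(r / 4)
    return np.array([f * np.sinh(t / 4), f * np.cosh(t / 4)])

t, r, h = 0.3, 3.0, 1e-6        # sample exterior point, r > r_s = 2
J = np.column_stack([(TX(t + h, r) - TX(t - h, r)) / (2 * h),
                     (TX(t, r + h) - TX(t, r - h)) / (2 * h)])  # ∂(T,X)/∂(t,r)

G_TX = (32.0 / r) * np.exp(-r / 2) * np.diag([-1.0, 1.0])       # Kruskal metric
g = J.T @ G_TX @ J                                              # pulled back
g_schw = np.diag([-(1 - 2 / r), 1 / (1 - 2 / r)])               # Schwarzschild
print(np.allclose(g, g_schw, atol=1e-7))                        # True
```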
{ "language": "en", "url": "https://physics.stackexchange.com/questions/407108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Potential due to line charge: Incorrect result using spherical coordinates Context This is not a homework problem. The answer to this problem is well known and can be found in [1]. The potential of a line of charge situated between $x=-a$ and $x=+a$ ``can be found by superposing the point charge potentials of infinitesimal charge elements. [1]'' Adjusting from [1] ($b\to a$), the answer to the problem below is $$ \boxed{ \Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \frac{Q}{2\, a} \left[ \ln{\left(\frac{ a+ \sqrt{a ^2 + r^2}}{ -a+ \sqrt{a ^2 + r^2}}\right)} \right] \, \,.} $$ Yet, because I am practicing using the curvilinear spherical coordinate system, I attempted to work this problem in that system. I know that $$\Phi( \mathbf{r} ) = \frac{1}{4\,\pi\,\epsilon_o} \int \frac { 1} { \left\| \mathbf{r}-\mathbf{r}^\prime\right\| }\rho(\mathbf{r}^\prime) \,d\tau^\prime \,$$ I also know that $$ \rho(r,\theta,\varphi) = \frac{Q}{2\,a} \,\frac{H{\left(r\right)}- H{\left(r-a \right)}}{1}\,\frac{\delta{\left(\theta-\frac{\pi}{2}\right)}}{r}\, \frac{\delta(\varphi-0) + \delta(\varphi-\pi) }{r\,\sin\theta} \,.$$ Further, since \begin{equation} \begin{aligned} x &= r \sin\theta \cos\varphi , \\ y &= r \sin\theta \sin\varphi , \\ z &= r \cos\theta , \end{aligned} \end{equation} I know that the expression of the distance between two vectors in spherical coordinates is given by the equation \begin{align} \|\mathbf{r}-\mathbf{r}^\prime\| = \sqrt{r^2+r'^2-2rr'\left[ \sin(\theta)\sin(\theta')\,\cos(\phi-\phi') +\cos(\theta)\cos(\theta')\right]}. 
\end{align} Finally, we are given that the obervation points, $\mathbf{r}$, are restricted as given by the equation $$\mathbf{r} = \left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right) .$$ Putting these togehter, we have that $$ \Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \int \frac { \frac{Q}{2\,a} \,\frac{H{\left(r^\prime-0\right)}- H{\left(a-r^\prime \right)}}{1}\,\frac{\delta{\left(\theta^\prime-\frac{\pi}{2}\right)}}{r^\prime}\, \frac{\delta(\varphi^\prime-0) + \delta(\varphi^\prime-\pi) }{r^\prime\,\sin\theta^\prime} } { \sqrt{r^2+r'^2-2rr'\left[ \sin(\theta)\sin(\theta')\,\cos(\phi-\phi') +\cos(\theta)\cos(\theta')\right]} } \, {r^\prime}^2\,\sin\theta^\prime\,dr^\prime\,d\theta^\prime\,d\phi^\prime \,.$$ Based on the point of observation, we rewrite the potential according to the equation $$ \Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \frac{Q}{2\,a} \, \int \frac { \left[H{\left(r^\prime-0\right)}- H{\left(a-r^\prime \right) } \right] \, \delta{\left(\theta^\prime-\frac{\pi}{2}\right)} \, \left[\delta(\varphi^\prime-0) + \delta(\varphi^\prime-\pi) \right] } { \sqrt{r^2+{r^\prime}^2-2\,r\,r^\prime\, \sin(\theta')\,\cos(\pi\pm \frac{\pi}{2}-\phi') } } \,dr^\prime\,d\theta^\prime\,d\phi^\prime \,.$$ Upon taking the angular integrals I rewrite the potential according to equation $$ \Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \frac{Q}{2\,a} \, \int_0^a \frac { 2 } { \sqrt{r^2+{r^\prime}^2 } } \,dr^\prime \,.$$ I know that $$ \int \frac{dx}{\sqrt{x^2 \pm a^2}} = \ln{\left(x+ \sqrt{x^2 \pm a^2}\right)} \,. $$ Therefore, $$ \Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \frac{Q}{ a} \left[ \ln{\left(r^\prime+ \sqrt{{r^\prime}^2 + r^2}\right)} \right]_0^a \,. 
$$ Upon evaluation of the limits of integration, I have the incorrect result that $$ \boxed{ \Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \frac{Q}{ a} \left[ \ln{\left(\frac{ a+ \sqrt{a ^2 + r^2}}{r}\right)} \right] \, \,.} $$ Question The result should be identical no matter what coordinate system that I choose. I have a gap in my understanding. Please help by identifying and stating the error in my analysis? Bibliography [1] http://hyperphysics.phy-astr.gsu.edu/hbase/electric/potlin.html
Adjusting from [1] $(b→a)$, the answer to the problem below is $$\Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \frac{Q}{ a} \left[ \ln{\left(\frac{ a+ \sqrt{a ^2 + r^2}}{ -a+ \sqrt{a ^2 + r^2}}\right)} \right]$$ This is wrong by a factor of 2: in the original answer they use the linear density $\lambda$, which you substituted with $Q/a$, while it should be $Q/(2a)$. $$\Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \frac{Q}{2\,a} \, \int_0^a \frac { 2 } { \sqrt{r^2+{r^\prime}^2 } } \,dr^\prime$$ From here you can use the fact that the integrand is symmetric under the change of variables $r' \to -r'$, so you can write $2\int_0^a = \int_{-a}^a$ and the desired answer follows trivially. Another option is to start with your final expression $$\Phi{\left(r, \frac{\pi}{2},\pi\pm \frac{\pi}{2}\right)} = \frac{1}{4\,\pi\,\epsilon_o} \frac{Q}{ a} \left[ \ln{\left(\frac{ a+ \sqrt{a ^2 + r^2}}{r}\right)} \right]$$ And realise that $f(a):=\ln{\left(\frac{ a+ \sqrt{a ^2 + r^2}}{r}\right)}$ is odd under the change $a\to-a$. So you can rewrite $f(a)=\frac{1}{2}(f(a)-f(-a))$. This also gives you the desired result.
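The symmetry argument is easy to verify with numbers. A minimal stdlib-only check (the values of $r$ and $a$ are arbitrary): the antiderivative $F(x)=\ln\left(x+\sqrt{x^2+r^2}\right)$ gives the same value whether you integrate over $(-a,a)$ or double the $(0,a)$ integral, and both match the boxed logarithm.

```python
import math

r, a = 0.7, 1.3   # arbitrary test values

# antiderivative of 1/sqrt(r^2 + x^2)
def F(x):
    return math.log(x + math.sqrt(x * x + r * r))

two_sided = F(a) - F(-a)           # integral from -a to a
doubled = 2.0 * (F(a) - F(0.0))    # 2 * (integral from 0 to a)
assert abs(two_sided - doubled) < 1e-12

# both equal the boxed answer ln((a + sqrt(a^2+r^2)) / (-a + sqrt(a^2+r^2)))
boxed = math.log((a + math.sqrt(a * a + r * r)) /
                 (-a + math.sqrt(a * a + r * r)))
assert abs(two_sided - boxed) < 1e-12
```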
{ "language": "en", "url": "https://physics.stackexchange.com/questions/749536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The analytical result for free massless fermion propagator For massless fermion, the free propagator in quantum field theory is \begin{eqnarray*} & & \langle0|T\psi(x)\bar{\psi}(y)|0\rangle=\int\frac{d^{4}k}{(2\pi)^{4}}\frac{i\gamma\cdot k}{k^{2}+i\epsilon}e^{-ik\cdot(x-y)}. \end{eqnarray*} In Peskin & Schroeder's book, An introduction to quantum field theory (edition 1995, page 660, formula 19.40), they obtained the analytical result for this propagator, \begin{eqnarray*} & & \int\frac{d^{4}k}{(2\pi)^{4}}\frac{i\gamma\cdot k}{k^{2}+i\epsilon}e^{-ik\cdot(x-y)}=-\frac{i}{2\pi^{2}}\frac{\gamma\cdot(x-y)}{(x-y)^{4}} .\tag{19.40} \end{eqnarray*} Question: Is this analytical result right? Actually I don't know how to obtain it.
The calculation of the propagator in four dimensions is as follows. \begin{eqnarray*} \int\frac{d^4 k}{(2\pi)^4}e^{-ik\cdot (x-y)}\frac{1}{k^2} &=& i\int \frac{d^4 k_E}{(2\pi)^4}e^{ik_E\cdot (x_E-y_E)}\frac{1}{-k_E^2} \\ &=& \frac{-i}{(2\pi)^4} \left( \int_0^{2\pi}d\theta_3 \int_0^{\pi}d\theta_2 \sin \theta_2 \right) \int_0^{\infty} dk_E k_E^3 \frac{1}{k_E^2} \int_0^{\pi}d\theta_1 \sin^2 \theta_1 e^{ik_E | x_E-y_E | \cos \theta_1} \\ &=& \frac{-i4\pi}{(2\pi)^4} \int_0^{\infty} dk_E k_E \int_0^{\pi}d\theta_1 \frac{1-\cos 2\theta_1}{2} e^{ik_E | x_E-y_E | \cos \theta_1} \\ &=& \frac{-i}{4\pi^3} \frac{1}{| x_E-y_E |^2} \int_0^{\infty} ds s \left(\frac{\pi}{2} J_0(s)- \frac{\pi i^2}{2} J_2(s)\right) \end{eqnarray*} where $s\equiv k_E\| x_E-y_E \| $, the $J_n(s)$'s are Bessel functions, and I made use of the Hansen–Bessel formula. \begin{eqnarray*} &=& \frac{-i}{4\pi^3} \frac{1}{| x_E-y_E |^2} \int_0^{\infty} ds s \frac{\pi}{2} \frac{2}{s} J_1(s) \\ &=& -\frac{i}{4\pi^2} \frac{1}{| x_E-y_E |^2} \int_0^{\infty} ds \, J_1(s) \\ &=& -\frac{i}{4\pi^2} \frac{1}{| x_E-y_E |^2} \\ &=& \frac{i}{4\pi^2} \frac{1}{(x-y)^2} \end{eqnarray*}
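The final Bessel step, $\int_0^\infty J_1(s)\,ds = 1$, can be checked numerically. This stdlib-only sketch evaluates $J_n$ from the Hansen–Bessel integral used above and verifies $\int_0^S J_1(s)\,ds = 1 - J_0(S)$, which tends to $1$ as $S\to\infty$ (the cutoff $S=40$ and the quadrature step counts are arbitrary choices):

```python
import math

def bessel_j(n, x, m=400):
    # Hansen-Bessel formula: J_n(x) = (1/pi) * int_0^pi cos(n t - x sin t) dt,
    # evaluated by the trapezoidal rule (very accurate here: the integrand's
    # odd derivatives vanish at both endpoints)
    h = math.pi / m
    total = 0.5 * (math.cos(0.0) + math.cos(n * math.pi))
    for k in range(1, m):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def simpson(f, a, b, n):
    # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0

S = 40.0
partial = simpson(lambda s: bessel_j(1, s), 0.0, S, 2000)
# int_0^S J_1 = 1 - J_0(S), which tends to 1 since J_0(S) -> 0
assert abs(partial - (1.0 - bessel_j(0, S))) < 1e-6
assert abs(partial - 1.0) < 0.01
```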
{ "language": "en", "url": "https://physics.stackexchange.com/questions/263846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Does the unit hypercube in Minkowski space always have the 4-volume of 1? Suppose we have a unit hypercube in Minkowski space defined by the column vectors in the identity matrix $$ \mathbf I = \begin{bmatrix} 1 & 0 & 0 & 0 \\[0.3em] 0 & 1 & 0 & 0 \\[0.3em] 0 & 0 & 1 & 0 \\[0.3em] 0 & 0 & 0 & 1 \end{bmatrix}$$ Now, the length of one edge would have units of time, but this is solved by multiplying the time interval by the speed of light, $c = 1$. Obviously, this hypercube would have the 4-volume of 1, as seen by its determinant: $$\det \left(\begin{bmatrix} 1 & 0 & 0 & 0 \\[0.3em] 0 & 1 & 0 & 0 \\[0.3em] 0 & 0 & 1 & 0 \\[0.3em] 0 & 0 & 0 & 1 \end{bmatrix}\right) = \det \mathbf I = 1$$ Now, I have performed some numerical testing using the Lorentz transformation written as a matrix, $$ \begin{bmatrix} t' \\ x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \gamma & -\gamma\,v_x & -\gamma\,v_y & -\gamma\,v_z \\ -\gamma\,v_x & 1+\frac{(\gamma-1)\,v_x^2}{v^2} & \frac{(\gamma-1)\,v_x v_y}{v^2} & \frac{(\gamma-1)\,v_x v_z}{v^2} \\ -\gamma\,v_y & \frac{(\gamma-1)\,v_x v_y}{v^2} & 1+\frac{(\gamma-1)\,v_y^2}{v^2} & \frac{(\gamma-1)\,v_y v_z}{v^2} \\ -\gamma\,v_z & \frac{(\gamma-1)\,v_x v_z}{v^2} & \frac{(\gamma-1)\,v_y v_z}{v^2} & 1+\frac{(\gamma-1)\,v_z^2}{v^2} \end{bmatrix} \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix}, $$ where $\gamma = \frac{1}{\sqrt{1-v_x^2-v_y^2-v_z^2}}$ and $v^2 = v_x^2+v_y^2+v_z^2$, and the determinant of the resulting matrix always seems to be $1$ for all $\{v_x, v_y, v_z\}$, even when $\sqrt{v_x^2 + v_y^2 + v_z^2} >1$, indicating that this "cube" will always have the same 4-volume, regardless of the inertial frame of reference (including tachyonic ones). It seems that if the 4-volume of an arbitrary "hypercube" is 1 in one inertial reference frame, it must also have the 4-volume equal to 1 in every other inertial reference frame. Is this really true? How would one prove such a proposition?
In terms of matrix components, Lorentz transformations have matrices that satisfy $$\eta = \Lambda^T \eta \Lambda$$ where $\eta$ is the Minkowski metric. Taking determinants of each side, we have $$|\eta| = |\Lambda^T| |\eta| |\Lambda| = |\Lambda|^2 |\eta|$$ which implies that $|\Lambda| = \pm 1$. Since a transformation by $A$ changes volume by $|A|$, this implies that Lorentz transformations preserve the (absolute) volume.
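The algebra above is easy to confirm numerically. This stdlib-only sketch (the test velocity is arbitrary but subluminal) builds the general boost matrix from the question and checks both $|\Lambda| = 1$ and $\Lambda^T \eta \Lambda = \eta$:

```python
import math

def boost(vx, vy, vz):
    # general boost matrix (c = 1) for |v| < 1, as written in the question
    v2 = vx * vx + vy * vy + vz * vz
    g = 1.0 / math.sqrt(1.0 - v2)            # gamma
    v = (vx, vy, vz)
    L = [[g, -g * vx, -g * vy, -g * vz]]
    for i in range(3):
        row = [-g * v[i]]
        for j in range(3):
            row.append((g - 1.0) * v[i] * v[j] / v2 + (1.0 if i == j else 0.0))
        L.append(row)
    return L

def det(M):
    # cofactor expansion along the first row (fine for a 4x4)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

L = boost(0.3, -0.4, 0.5)                    # arbitrary test velocity
assert abs(det(L) - 1.0) < 1e-12             # unit 4-volume preserved

eta = [[1.0, 0, 0, 0], [0, -1.0, 0, 0], [0, 0, -1.0, 0], [0, 0, 0, -1.0]]
Lt = [list(c) for c in zip(*L)]
P = matmul(matmul(Lt, eta), L)               # should reproduce eta
assert all(abs(P[i][j] - eta[i][j]) < 1e-12 for i in range(4) for j in range(4))
```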
{ "language": "en", "url": "https://physics.stackexchange.com/questions/283186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hamiltonian for a magnetic field An atom has a magnetic moment, $\mu = -g\mu_B S$, where $S$ is the electronic spin operator ($S=(S_x,S_y,S_z)$) and the $S_i$ are the Pauli matrices, given below. The atom has a spin $\frac{1}{2}$ nuclear magnetic moment and the Hamiltonian of the system is \begin{gather*} H = -\mu \cdot B + \frac{1}{2}A_0S_z \end{gather*} The first term is the Zeeman term, the second is the Fermi contact term and $A_0$ is a real number. Obtain the Hamiltonian in matrix form for a magnetic field, $B=(B_x,B_y,B_z)$. Show that when the atom is placed in a magnetic field of strength B, aligned with the z axis, transitions between the ground and excited states of the atom occur at energies: \begin{gather*} E= g\mu_B B + \frac{1}{2}A_0 \end{gather*} The Pauli Matrices are: \begin{gather*} S_x = \frac{1}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \ S_y = \frac{1}{2} \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} , \ S_z = \frac{1}{2} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \end{gather*} Where do I even start for a solution to this problem? I am unclear as to how to formulate the $B$ matrix. If I can get that, hopefully the second part will become apparent.
The Hamiltonian is calculated as \begin{align} H =& \, g \mu_B \, \left(B_x S_x + B_y S_y + B_z S_z\right) \, + \, \frac{1}{2}A_0 S_z \\ =& \, \frac{g \mu_B}{2} \, \left(B_x \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + B_y \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} + B_z \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\right) \, + \, \frac{1}{4}A_0 \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \\ =& \, \frac{g \mu_B}{2} \, \begin{bmatrix} B_z & B_x-iB_y \\ B_x+iB_y & -B_z \end{bmatrix} \, + \, \frac{1}{4} \begin{bmatrix} A_0 & 0 \\ 0 & -A_0 \end{bmatrix} \\ \end{align} In the case of a constant magnetic field aligned with the $z-$axis, $B_x = B_y=0$ and $B_z = B$. Then $$H = \, \frac{g \mu_B}{2} \, \begin{bmatrix} B_z & 0 \\ 0 & -B_z \end{bmatrix} \, + \, \frac{1}{4} \begin{bmatrix} A_0 & 0 \\ 0 & -A_0 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} g\mu_B\, B_z+\frac{1}{2}A_0 & 0 \\ 0 & - \, g\mu_B\,B_z-\frac{1}{2}A_0 \end{bmatrix} $$ By solving the linear eigenvalue equation $$H \, | \psi \rangle = \lambda\, | \psi \rangle $$ you would get the basis energy states (the eigenvectors $| \psi \rangle$) and their energy levels (the eigenvalues $\lambda$). Since $H$ is a 2 by 2 matrix, so $$ | \psi \rangle = \begin{bmatrix}\psi_1 \\ \psi_2 \end{bmatrix}$$ the equation is $$\begin{bmatrix} \frac{1}{2} g\mu_B\, B_z+\frac{1}{4}A_0 & 0 \\ 0 & - \, \frac{1}{2} g\mu_B\,B_z-\frac{1}{4}A_0 \end{bmatrix} \, \begin{bmatrix}\psi_1 \\ \psi_2 \end{bmatrix} = \lambda \, \begin{bmatrix}\psi_1 \\ \psi_2 \end{bmatrix}$$ so it is easy to see that the eigenvectors are $$\begin{bmatrix} 1 \\ 0 \end{bmatrix} \text { and } \begin{bmatrix} 0 \\ 1\end{bmatrix}$$ with energy levels $$ \frac{1}{2} g\mu_B\, B_z+\frac{1}{4}A_0 \,\, \text { and }\,\, - \frac{1}{2} g\mu_B\, B_z-\frac{1}{4}A_0$$ respectively. There are only two eigenstates and the transition from one to the other happens when the energy is equal to the difference of the energy levels, i.e.
$$\left(\frac{1}{2} g\mu_B\, B_z+\frac{1}{4}A_0 \right) - \left( - \frac{1}{2} g\mu_B\, B_z-\frac{1}{4}A_0\right) = g\mu_B\, B_z+\frac{1}{2}A_0$$
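A short numeric check of this result (stdlib only; the values of $g$, $\mu_B$, $B$, $A_0$ are arbitrary test numbers): building $H$ for a $z$-aligned field and taking the 2x2 eigenvalues from the trace and determinant reproduces the transition energy $g\mu_B B + \frac{1}{2}A_0$.

```python
import cmath

def hamiltonian(Bx, By, Bz, g, muB, A0):
    # H = g*muB*(Bx Sx + By Sy + Bz Sz) + (A0/2) Sz, with S_i = sigma_i / 2
    sx = [[0.0, 0.5], [0.5, 0.0]]
    sy = [[0.0, -0.5j], [0.5j, 0.0]]
    sz = [[0.5, 0.0], [0.0, -0.5]]
    return [[g * muB * (Bx * sx[i][j] + By * sy[i][j] + Bz * sz[i][j])
             + 0.5 * A0 * sz[i][j] for j in range(2)] for i in range(2)]

def eig2(M):
    # eigenvalues of a 2x2 matrix from its trace and determinant
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr - d) / 2.0, (tr + d) / 2.0

g, muB, B, A0 = 2.0, 1.0, 0.7, 0.3   # arbitrary test values
lo, hi = eig2(hamiltonian(0.0, 0.0, B, g, muB, A0))
# transition energy = difference of the two energy levels
assert abs((hi - lo).real - (g * muB * B + 0.5 * A0)) < 1e-12
assert abs((hi - lo).imag) < 1e-12
```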
{ "language": "en", "url": "https://physics.stackexchange.com/questions/440351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Optimizing engines to produce a certain torque and net force Say we have $n$ engines sitting on a rigid body. Each engine has position $R_i$ and points in a certain direction and generates a force in that direction, $F_i$. The magnitude of that force ($k_i$) follows this constraint: $0 \leq k_i \leq m_i$. In other words, the engine can only output so much force. I want to be able to find the values of $k$ that would bring the resultant force $F$ and torque $T$ closest to a desired value. Based on the resultant force equation, I know that $$F = \sum_{i=1}^{n} (k_iF_i)$$ and the torque: $$T = \sum_{i=1}^{n} (R_i) \times (k_iF_i)$$ where $R_i$ is the position of the engine relative to the point of application of the resultant force. However, I'm unsure of how to best proceed from here. edit: I solved the problem one way using linear programming and minimizing the absolute value of the difference between each component of the ideal and actual force/torque, not optimizing for fuel. This allowed me to put constraints on the engine's force magnitude.
The problem is one of least squares (until the point where the magnitude is capped). Consider the target force vector $\vec{F}$ and the target moment vector $\vec{T}$ as the right-hand side $\boldsymbol{b}$ of a linear system of equations, with the vector $\boldsymbol{x}$ of $n$ force magnitudes as the unknowns. $$ \mathbf{A}\;\boldsymbol{x} = \boldsymbol{b} $$ $$ [\mathbf{A}]_{6\times n}\; \begin{pmatrix}F_{1}\\ F_{2}\\ \vdots\\ F_{n} \end{pmatrix}_{n\times1} = \begin{pmatrix}\vec{F}\\ \vec{T} \end{pmatrix}_{6\times1} \tag{1}$$ We will get into what the coefficient matrix $\mathbf{A}$ is below. For now consider a case where $n \geq 6$; then the minimum-norm solution is given by $$ \boldsymbol{x} = \mathbf{A}^\top \left( \mathbf{A} \mathbf{A}^\top \right)^{-1} \boldsymbol{b} \tag{2}$$ where $^\top$ is the matrix transpose. So what is $\mathbf{A}$? There are 6 rows and $n$ columns to this matrix, and the first 3 rows are filled with all $n$ force direction vectors $\vec{z}_i$, and the last 3 rows with all $n$ torque directions $\vec{r}_i \times \vec{z}_i$. $$ \mathbf{A} = \begin{bmatrix}\vec{z}_{1} & \vec{z}_{2} & \cdots & \vec{z}_{n}\\ \vec{r}_{1}\times\vec{z}_{1} & \vec{r}_{2}\times\vec{z}_{2} & \cdots & \vec{r}_{n}\times\vec{z}_{n} \end{bmatrix}_{6\times n} \tag{3}$$ The result isn't guaranteed to be within the force limits, but it will be the least possible force system overall. Reduced Example Consider a planar example (for simplicity with 3 DOF instead of 6) with $n=4$ forces arranged in a rectangle of size $a$, $b$, and each direction pointing to the next force location.
$$\begin{aligned} \vec{r}_1 &= \pmatrix{-\tfrac{a}{2} \\ -\tfrac{b}{2} } & \vec{z}_1 &= \pmatrix{1\\0} & \vec{r}_1 \times \vec{z}_1 = \tfrac{b}{2} \\ \vec{r}_2 &= \pmatrix{ \tfrac{a}{2} \\ -\tfrac{b}{2} } & \vec{z}_2 &= \pmatrix{0\\1} & \vec{r}_2 \times \vec{z}_2 = \tfrac{a}{2}\\ \vec{r}_3 &= \pmatrix{ \tfrac{a}{2} \\ \tfrac{b}{2} } & \vec{z}_3 &= \pmatrix{-1\\0} & \vec{r}_3 \times \vec{z}_3 = \tfrac{b}{2} \\ \vec{r}_4 &= \pmatrix{-\tfrac{a}{2} \\ \tfrac{b}{2} } & \vec{z}_4 &= \pmatrix{0\\-1} & \vec{r}_4 \times \vec{z}_4 = \tfrac{a}{2} \\ \end{aligned} $$ with the target force $\vec{F}= \pmatrix{3 \\ 2} $ and moment $T=\pmatrix{1}$ $$ \boldsymbol{b} = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} $$ The coefficient matrix is composed from (3) $$ \mathbf{A} = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ \tfrac{b}{2} & \tfrac{a}{2} & \tfrac{b}{2} & \tfrac{a}{2}\end{bmatrix} $$ and solution from (2) $$ \pmatrix{F_1 \\ F_2 \\ F_3 \\ F_4} = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ \tfrac{b}{2} & \tfrac{a}{2} & \tfrac{b}{2} & \tfrac{a}{2}\end{bmatrix}^\top \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \tfrac{a^2+b^2}{2} \end{bmatrix} ^{-1} \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} = \pmatrix{\tfrac{3}{2} + \tfrac{b}{a^2+b^2} \\ 1+\tfrac{a}{a^2+b^2} \\ -\tfrac{3}{2}+\tfrac{b}{a^2+b^2} \\ -1+\tfrac{a}{a^2+b^2}} $$ Let us check the result $$ \vec{F}= F_1 \vec{z}_1 + F_2 \vec{z}_2 + F_3 \vec{z}_3 + F_4 \vec{z}_4 = \pmatrix{3\\2} \; \checkmark$$ $$ \vec{T} =F_1 (\vec{r}_1 \times \vec{z}_1) + F_2 (\vec{r}_2 \times \vec{z}_2) + F_3 (\vec{r}_3 \times \vec{z}_3) + F_4 (\vec{r}_4 \times \vec{z}_4) = \pmatrix{1} \; \checkmark$$ This method also solves the "Find the forces of the four legs of a table" problem given an arbitrary load on the table surface.
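The reduced example is easy to reproduce in code. A stdlib-only sketch of Eq. (2) (here $a=2$, $b=1$ and the target $(\vec{F}, T) = (3, 2, 1)$ are the numbers used above; `solve` is a small hand-rolled Gauss-Jordan helper, not part of the original answer):

```python
def solve(M, rhs):
    # Gauss-Jordan elimination with partial pivoting (small dense systems)
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0.0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

a, b = 2.0, 1.0                       # rectangle dimensions
A = [[1.0, 0.0, -1.0, 0.0],           # force x-components z_i
     [0.0, 1.0, 0.0, -1.0],           # force y-components z_i
     [b / 2, a / 2, b / 2, a / 2]]    # torque arms r_i x z_i
target = [3.0, 2.0, 1.0]              # desired (Fx, Fy, T)

# x = A^T (A A^T)^{-1} b  -- Eq. (2)
AAT = [[sum(A[i][k] * A[j][k] for k in range(4)) for j in range(3)]
       for i in range(3)]
y = solve(AAT, target)
x = [sum(A[i][j] * y[i] for i in range(3)) for j in range(4)]

# closed form from the worked example
v = a * a + b * b
expected = [1.5 + b / v, 1.0 + a / v, -1.5 + b / v, -1.0 + a / v]
assert all(abs(xi - ei) < 1e-12 for xi, ei in zip(x, expected))

# and the force magnitudes reproduce the target force and torque
recovered = [sum(A[i][j] * x[j] for j in range(4)) for i in range(3)]
assert all(abs(r - t) < 1e-12 for r, t in zip(recovered, target))
```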
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }