Inertia

In power systems engineering, "inertia" is a concept that typically refers to rotational inertia or rotational kinetic energy. For synchronous systems that run at some nominal frequency (e.g. 50 Hz or 60 Hz), inertia is the energy that is stored in the rotating masses of equipment electro-mechanically coupled to the system, e.g. generator rotors, flywheels, turbine shafts.

Derivation

Below is a basic derivation of power system rotational inertia from first principles, starting from the basics of circle geometry and ending at the definition of moment of inertia (and its relationship to kinetic energy).

The length of a circle arc is given by:

[math] L = \theta r [/math]

where
[math]L[/math] is the length of the arc (m)
[math]\theta[/math] is the angle of the arc (radians)
[math]r[/math] is the radius of the circle (m)

A point at radius r on a cylindrical body rotating about the axis of its centre of mass therefore has a rotational (tangential) velocity of:

[math] v = \frac{\theta r}{t} [/math]

where
[math]v[/math] is the rotational velocity (m/s)
[math]t[/math] is the time it takes for the mass to rotate L metres (s)

Alternatively, rotational velocity can be expressed as:

[math] v = \omega r [/math]

where
[math]\omega = \frac{\theta}{t} = \frac{2 \pi n}{60}[/math] is the angular velocity (rad/s)
[math]n[/math] is the speed in revolutions per minute (rpm)

The kinetic energy of a circular rotating mass can be derived from the classical Newtonian expression for the kinetic energy of rigid bodies:

[math] KE = \frac{1}{2} mv^{2} = \frac{1}{2} m(\omega r)^{2}[/math]

where
[math]KE[/math] is the rotational kinetic energy (Joules, i.e. kg·m²/s²; in power systems it is commonly quoted in MW·s, where 1 MW·s = 10⁶ J)
[math]m[/math] is the mass of the rotating body (kg)

Alternatively, rotational kinetic energy can be expressed as:

[math] KE = \frac{1}{2} J\omega^{2} [/math]

where [math]J = mr^{2}[/math] is called the moment of inertia (kg·m²). (This form applies to a point mass or thin ring at radius r; for a distributed body, J is obtained by integrating r² dm.)

Notes about the moment of inertia:
In physics, the moment of inertia [math]J[/math] is normally denoted as [math]I[/math]. In electrical engineering, the convention is for the letter "i" to be reserved for current, so the moment of inertia is usually written with the letter "j" instead (just as the complex number operator i in mathematics becomes j in electrical engineering).
Moment of inertia is also referred to as [math]WR^{2}[/math] or [math]WK^{2}[/math], where [math]WK^{2} = \frac{1}{2} WR^{2}[/math]. WR² literally stands for weight × radius squared, and is often quoted in imperial units of lb·ft² or slug·ft². Conversion factors: 1 lb·ft² = 0.04214 kg·m², 1 slug·ft² = 1.356 kg·m².

Normalised Inertia Constants

The moment of inertia can be expressed as a normalised quantity called the inertia constant H, calculated as the ratio of the rotational kinetic energy of the machine at nominal speed to its rated power (VA):

[math]H = \frac{1}{2} \frac{J \omega_0^{2}}{S_{b}}[/math]

where
[math]H[/math] is the inertia constant (s)
[math]\omega_{0} = 2 \pi \times \frac{n}{60}[/math] is the nominal mechanical angular frequency (rad/s)
[math]n[/math] is the nominal speed of the machine (revolutions per minute)
[math]S_{b}[/math] is the rated power of the machine (VA)

Generator Inertia

The moment of inertia for a generator depends on its mass and apparent radius, which in turn is largely driven by its prime mover type.
Based on actual generator data, the normalised inertia constants for different types and sizes of generators are summarised in the table below:

| Machine type | Number of samples | MVA rating (min) | MVA rating (median) | MVA rating (max) | H (min) | H (median) | H (max) |
|---|---|---|---|---|---|---|---|
| Steam turbine | 45 | 28.6 | 389 | 904 | 2.1 | 3.2 | 5.7 |
| Gas turbine | 47 | 22.5 | 99.5 | 588 | 1.9 | 5.0 | 8.9 |
| Hydro turbine | 22 | 13.3 | 46.8 | 312.5 | 2.4 | 3.7 | 6.8 |
| Combustion engine | 26 | 0.3 | 1.25 | 2.5 | 0.6 | 0.95 | 1.6 |

Relationship between Inertia and Frequency

Inertia is the stored kinetic energy in the rotating masses coupled to the power system. Whenever there is a mismatch between generation and demand (either a deficit or excess of energy), the difference in energy is made up by the system inertia. For example, suppose a generator suddenly disconnects from the network. In that instant, the equilibrium between generation and demand is broken and demand exceeds generation. Because energy must be conserved, there must always be energy balance in the system, and the instantaneous deficit in energy is supplied by the system inertia. However, the kinetic energy in the rotating masses is finite, and when this energy is used to supply demand, the rotating masses begin to slow down. In aggregate, the speed of rotation of these rotating masses is roughly proportional to the system frequency, and so the frequency begins to fall. New generation must be added to the system to re-establish the equilibrium between generation and demand and restore system frequency, i.e. put enough kinetic energy back into the rotating masses that they rotate at a speed corresponding to nominal frequency (50/60 Hz).

The figure on the right illustrates this concept by way of a tank of water, where system demand is a flow of water out of a tap at the bottom of the tank and generation is a hose that tops up the water in the tank (the system operator manages the tap on the hose, which determines how much water comes out of it). The system frequency is the water level and the inertia is the volume of water in the tank. This analogy is instructive because it is easy to visualise that if the system inertia were very large, then the volume of water and the tank itself would also be very large. A deficit of generation would therefore cause the system frequency to fall, but at a slower rate than if the system inertia were small. Likewise, excess generation would fill up the tank and cause frequency to rise, but again at a slower rate than if the inertia were small. It can therefore be said that system inertia is related to the rate at which frequency rises or falls in a system whenever there is a mismatch between generation and load. The standard industry term for this is the Rate of Change of Frequency (RoCoF).
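As a hedged, worked illustration of the formulas above (the machine parameters J, speed and MVA rating below are invented for the example, not taken from the table), the following Python sketch computes the stored kinetic energy and inertia constant H, and then the speed that remains after a given amount of energy has been drawn out of the rotating masses using KE = ½Jω²:

import math

# Assumed example machine (hypothetical values, for illustration only)
J = 30000.0        # moment of inertia (kg.m^2)
n = 3000.0         # nominal speed (rpm)
S_b = 200e6        # rated power (VA)

omega0 = 2 * math.pi * n / 60          # nominal mechanical angular frequency (rad/s)
KE = 0.5 * J * omega0**2               # stored kinetic energy at nominal speed (J)
H = KE / S_b                           # inertia constant (s)
print(f"KE = {KE/1e6:.1f} MW.s, H = {H:.2f} s")

# If dE joules are drawn from the rotating masses (generation deficit),
# the remaining kinetic energy fixes the new speed via KE = 1/2 J omega^2.
dE = 5e6                               # assumed energy deficit (J)
omega1 = math.sqrt(2 * (KE - dE) / J)  # new angular speed (rad/s)
print(f"speed falls from {omega0:.1f} to {omega1:.1f} rad/s "
      f"({100 * (1 - omega1/omega0):.2f}% drop, i.e. the same fractional drop in frequency)")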
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...

Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ = 7 and 13 TeV with ALICE (Elsevier, 2017-11) Two-particle correlations in relative azimuthal angle (Δφ) and pseudorapidity (Δη) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...

The new Inner Tracking System of the ALICE experiment (Elsevier, 2017-11) The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...

Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE (Elsevier, 2017-11) Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...

Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE (Elsevier, 2017-11) The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and $\sqrt{s_{\rm NN}}=8.16$ TeV, respectively. In Pb–Pb collisions, the J/ψ and ψ(2S) nuclear ...

Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...

Jet-hadron correlations relative to the event plane at the LHC with ALICE (Elsevier, 2017-11) In ultra-relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...

Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE (Elsevier, 2017-11) We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...

Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This makes it possible to select events with the same centrality ...

First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...

First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...

D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...

Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...

Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...

$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...

J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
I have a symmetric positive-definite matrix $A\in \mathbb{R}^{n\times n}$. Its eigenvectors $e_i$ are an orthonormal basis of $\mathbb{R}^n$. Are the $n^2$ matrices $[e_i\,e_j^T]$ a basis of $\mathbb{R}^{n\times n}$? I have noticed an interesting property: $[e_i\,e_j^T]$ commutes with $A$ iff $\lambda_i=\lambda_j$. If $A$ does not commute with $[e_a\,e_b^T]$ and $[e_c\,e_d^T]$, can it commute with $\left( [e_a\,e_b^T]+[e_c\,e_d^T] \right)$? Would the matrices $[\bar{e}_i\,\bar{e}_j^T]$ (built from eigenvectors with the same eigenvalue) generate the space of all matrices that commute with $A$?
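A quick numerical sanity check of the two claims above (this is just an illustration with a random matrix, not a proof):

import numpy as np

n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # random symmetric positive-definite matrix
lam, E = np.linalg.eigh(A)           # columns of E are orthonormal eigenvectors e_i

# 1) The n^2 outer products e_i e_j^T span R^{n x n}:
outer = [np.outer(E[:, i], E[:, j]).ravel() for i in range(n) for j in range(n)]
print("rank of the n^2 outer products:", np.linalg.matrix_rank(np.array(outer)))  # n^2

# 2) e_i e_j^T commutes with A iff lambda_i == lambda_j
#    (a generic random A has distinct eigenvalues, so only i == j commute):
for i in range(n):
    for j in range(n):
        P = np.outer(E[:, i], E[:, j])
        assert np.allclose(A @ P, P @ A) == np.isclose(lam[i], lam[j])
print("commutation check passed")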
Series of Power over Factorial Converges Theorem Proof If $x = 0$ the result is trivially true as: $\forall n \ge 1: \dfrac {0^n} {n!} = 0$ If $x \ne 0$ we have: $\left|{\dfrac{\left({\dfrac {x^{n+1}} {(n+1)!}}\right)}{\left({\dfrac {x^n}{n!}}\right)}}\right| = \dfrac {\left|{x}\right|} {n+1} \to 0$ as $n \to \infty$. This follows from the results: Sequence of Powers of Reciprocals is Null Sequence, where $\dfrac 1 n \to 0$ as $n \to \infty$ The Squeeze Theorem for Real Sequences, as $\dfrac 1 {n + 1} < \dfrac 1 n$ The Multiple Rule for Real Sequences, putting $\lambda = \left|{x}\right|$. $\blacksquare$ Alternatively, the Comparison Test could be used but this is more cumbersome in this instance. Another alternative is to view this as an example of Radius of Convergence of Power Series over Factorial setting $\xi = 0$. Also see Equivalence of Definitions of Exponential Function, where it is shown that this series converges to the exponential function.
Cameron Murray has a great post about the challenge of reforming economics in which he points out two challenges: social and technical. The social challenge is that different "schools" are tribal, and reconciliation isn't rewarded. Just read Murray on this. The second challenge is something that I have tried to work towards answering:

[H]ow do you teach a pluralist program when there is no recognised structure for presenting content from many schools of thought, which can often be contradictory, and when very few academics are themselves sufficiently trained to do so? ... What is needed is a way to structure the exploration of economic analysis by arranging economic problems around some core domains. Approaches from various schools of thought can be brought into the analysis where appropriate, with the common ground and links between them highlighted.

Despite being completely out of the mainstream, the information equilibrium framework does not have to subscribe to a specific school of economic thought. I actually thought this is what you were supposed to mean by framework (other economists disagree and include model-specific results in what they call frameworks). In fact, I defined framework by something that is not model specific:

One way to understand what a framework is is to ask whether the world could behave in a different way in your framework ... Can you build a market monetarist model in your framework? It doesn't have to be empirically accurate (the IT framework version is terrible), but you should be able to at least formulate it. If the answer is no, then you don't have a framework -- you have a set of priors.

This is what pushed me to try and formulate the MMT and Post-Keynesian (PK) models that use "Stock Flow Consistent" (SFC) analysis as information equilibrium models. The fact that my criticism of an aspect of SFC analysis upset the MMT and PK tribes (see the post and comments) led me to not end up posting the work I'd done. But in the interest of completeness, and to show that the information equilibrium framework allows you to talk about completely different schools of economics with the same language, let me show the model SIM from Godley and Lavoie as an information equilibrium model.

SFC models as an information equilibrium model

First, divide through by $Y$ (this represents an overall scale invariance), so all the variables below are fractions of total output (I didn't change the variable names, though, because it would get confusing).

$$ B \equiv G - T $$

Define $x$ to be a vector of consumption, the variable $B$, taxes, disposable income and high powered money:

$$ x = \begin{bmatrix} C \\ B \\ T \\ Y_{D} \\ H \end{bmatrix} $$

Define the matrix $A$ to be

$$ A = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & -1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & -\alpha_{1} & -\alpha_{2} \\ 0 & 0 & 1 & 1 & 0 \end{bmatrix} $$

Define the vector $b$ to be

$$ b = \begin{bmatrix} -1 \\ 0 \\ -\theta \\ 0 \\ -1 \end{bmatrix} $$

The SFC model SIM from Godley and Lavoie is then

$$ A x + b = 0 $$

$$ H \rightleftarrows Y_{D} $$

with [Ed. note: I originally got my notes confused because I wrote $Y_{D}$ as $D$ through part of them and $B$ instead of the $Y_{D}$ I use here, so left off the following equation]

$$ B \equiv \int_{\Gamma} dY_{D} $$

where the second equation is an information equilibrium relationship [and the third is a path integral; in the model SIM, they take $\Gamma$ to effectively be a time step].
The issue that I noticed (and that upset the SFC community) is that it's assumed that the information transfer index is 1, so that instead of

$$ H \sim Y_{D}^{k} $$

you just have

$$ H \sim Y_{D} $$

and the velocity of high powered money is equal to 1. Also, there is no partial equilibrium -- only general equilibrium -- so you never have high powered money that isn't in correspondence with debt (or actually, in the SFC model, exactly equal to debt). Even with this assumption, however, the model can still be interpreted as an information equilibrium model. There is supply and demand for government debt that acts as money. This money is divided up to fund various measures, e.g. consumption.

Market monetarism as an information equilibrium model

Over time, I have attempted to put the various models Scott Sumner writes down into the information equilibrium framework. The first three are described better here.

1) u : NGDP ⇄ W/H

The variable u is the unemployment rate, H is total hours worked and W is total nominal wages.

2) (W/H)/(PY/L) ⇄ u

PY is nominal output (P is the price level), L is the total number of people employed and u is the unemployment rate.

3) (1/P) : M/P ⇄ M

where M is the money supply. This may look a bit weird, but it could potentially work if Sumner didn't insist on an information transfer index k = 1 (if k is not 1, that opens the door to a liquidity trap, however). As it is, it predicts that the price level is constant in general equilibrium and unexpected growth shocks are deflationary in the short run.

4) V : NGDP ⇄ MB and i ⇄ V

where V is velocity, MB is the monetary base and i is the nominal interest rate. So that in general equilibrium we have:

V = k NGDP/MB
log i = α log V

Or more compactly:

log i = α log NGDP/MB + β

More models!

More mainstream Keynesian and other models all appear here or in my paper. Here's a model that is based on the Solow model. However, I think showing how the framework can illustrate both Market Monetarism and Post Keynesianism using the same tools gives an idea of how useful it is. I can even put John Cochrane's asset pricing equation approach in the framework! The interesting part is that it lays bare some assumptions (e.g. that the IS-LM model is an AD-AS model with low inflation). And despite my protests, expectations can be included. It just involves looking at the model temporally rather than instantaneously.
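As a small worked illustration of the general-equilibrium relations in model 4) above (the numerical values of NGDP, MB, k, α and β below are made up purely for the example, not fitted to any data):

import math

# Assumed illustrative values -- not calibrated to any real economy
NGDP = 20.0e12   # nominal output
MB = 4.0e12      # monetary base
k = 1.0          # information transfer index
alpha = 0.5      # assumed exponent in log i = alpha log V + beta
beta = -4.0      # assumed constant

V = k * NGDP / MB                       # V = k NGDP/MB
log_i = alpha * math.log(V) + beta      # log i = alpha log V + beta
i = math.exp(log_i)
print(f"V = {V:.2f}, implied nominal interest rate i = {100*i:.2f}%")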
These notes contain a proof. The digraph used in this proof is a little more complicated than the one that you have in mind: each point of the partial order corresponds to two vertices of the digraph. Let $\langle P,\preceq\rangle$ be the partially ordered set; I’ll write $p\prec q$ to indicate that $p\preceq q$ and $p\ne q$. For each $p\in P$ the digraph $D$ will have two vertices, $p^-$ and $p^+$; in addition, it will have a source vertex $s$ and a sink vertex $t$. For each $p\in P$, $D$ has edges $\langle s,p^-\rangle$ and $\langle p^+,t\rangle$; in addition, for each $p,q\in P$ with $p\prec q$ it has $\langle p^-,q^+\rangle$. Each edge has capacity $1$. Let $f$ be a maximal flow in $D$, and let $\langle S,T\rangle$ be a minimal cut constructed by the Ford-Fulkerson algorithm. Note that every path from $s$ to $t$ has the form $s\to p^-\to q^+\to t$ for some $p,q\in P$ with $p\prec q$; thus $|f|$ is the number of $p\in P$ such that $f(s,p^-)=1$, i.e. such that there is a $q\in P$ with $f(p^-,q^+)=1$. Let $C=\{p\in P:f(s,p^-)=0\}$; for future reference note that $|C|=|P|-|f|$. If $p\in P\setminus C$, then $f(s,p^-)=1$, and there must therefore be a unique ‘successor’ $\sigma(p)\in P$ such that $f(p^-,\sigma(p)^+)=1$, and hence $p\prec\sigma(p)$. Thus, for each $p\in P$ there is a well-defined chain $$p=p_0\prec p_1\prec\ldots\prec p_n\in C$$ in $P$ such that $p_{k+1}=\sigma(p_k)$ for $k=0,\dots,n-1$. ($P$ is finite, so each chain must terminate, and the elements of $C$ are the only elements without successors.) For $p\in P\setminus C$ let $\xi(p)$ be the unique element of $C$ at which the chain from $p$ terminates, and let $\xi(p)=p$ for $p\in C$; then $$\Big\{\{p\in P:\xi(p)=q\}:q\in C\Big\}$$ partitions $P$ into $|C|$ chains. To complete the proof we must show that $P$ also has an antichain of cardinality $|C|$. The desired antichain is $A=\{p\in P:p^-\in S\text{ and }p^+\in T\}$. Proving that it’s an antichain isn’t too hard. Suppose that $p,q\in A$ with $p\ne q$. Then $p^-\in S$ and $q^+\in T$, so $f(p^-,q^+)\ne0$: either $\langle p^-,q^+\rangle$ isn’t an edge of $D$ at all, or $f(p^-,q^+)=1$, and I leave it to you to show that the latter is impossible: it would unbalance the flow at $p^-$. (The argument here uses the hypothesis that the minimal cut was constructed using the F-F algorithm.) It follows that $p\not\preceq q$, and by symmetry (or a similar argument) $q\not\preceq p$. Thus, $A$ is an antichain. The last step is to show that the edges contributing capacity towards $c(S,T)$ are precisely the edges $\langle s,p^-\rangle$ and $\langle p^+,t\rangle$ such that $p\notin A$; this also uses the hypothesis that the minimal cut was constructed using the F-F algorithm, to show that there are no edges of the form $\langle p^-,q^+\rangle$ with $p^-\in S$ and $q^+\in T$. Once you’ve worked out these details, you’ll have shown that $c(S,T)=|P|-|A|$ and hence that $|A|=|P|-c(S,T)=|P|-|f|=|C|$, as desired.
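Here is a small, hedged Python sketch of the construction above (using networkx for the max-flow computation, and divisibility on {1,...,6} as the example partial order; the example poset is my own, not from the proof):

import networkx as nx
from itertools import combinations

# Example poset: P = {1,...,6} ordered by divisibility (p < q iff p divides q, p != q)
P = list(range(1, 7))
less = lambda p, q: p != q and q % p == 0

# Build the digraph D: source s, sink t, and two vertices (p,'-'), (p,'+') per element.
D = nx.DiGraph()
for p in P:
    D.add_edge('s', (p, '-'), capacity=1)
    D.add_edge((p, '+'), 't', capacity=1)
for p, q in combinations(P, 2):
    if less(p, q):
        D.add_edge((p, '-'), (q, '+'), capacity=1)
    elif less(q, p):
        D.add_edge((q, '-'), (p, '+'), capacity=1)

flow_value, flow = nx.maximum_flow(D, 's', 't')
# |C| = |P| - |f| is the number of chains in the cover described above.
print("minimum number of chains covering P:", len(P) - flow_value)  # expect 3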
(Sorry, was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
Experimental implementation of a quantum computing algorithm strongly relies on the ability to construct required unitary transformations applied to the input quantum states. In particular, near-term linear optical computing requires universal programmable interferometers, capable of implementing an arbitrary transformation of input optical modes. So far these devices were composed as a circuit with well defined building blocks, such as balanced beamsplitters. This approach is vulnerable to manufacturing imperfections

Weyl points, synthetic magnetic monopoles in the 3D momentum space, are the key features of topological Weyl semimetals. The observation of Weyl points in ultracold atomic gases usually relies on the realization of high-dimensional spin-orbit coupling (SOC) for two pseudospin states (i.e., spin-1/2), which requires complex laser configurations and precise control of laser parameters, thus has not been realized in experiment. Here we propose that robust Weyl points can be realized using 1D triple-well superlattices

The vast and growing number of publications in all disciplines of science cannot be comprehended by a single human researcher. As a consequence, researchers have to specialize in narrow sub-disciplines, which makes it challenging to uncover scientific connections beyond one's own field of research. Thus access to structured knowledge from a large corpus of publications could help pushing the frontiers of science. Here we demonstrate a method to build a semantic network from published scientific literature, which we call SemNet. We

Two time-reversal quantum key distribution (QKD) schemes are the quantum entanglement based device-independent (DI)-QKD and measurement-device-independent (MDI)-QKD. The recently proposed twin field (TF)-QKD, also known as phase-matching (PM)-QKD, has improved the key rate bound from $O\left( \eta \right )$ to $O\left( \sqrt {\eta} \right )$ with $\eta$ the channel transmittance. In fact, TF-QKD is a kind of MDI-QKD but based on single-photon detection. In this paper, we propose a different PM-QKD

We analyse the charging process of quantum batteries with general harmonic power. To describe the charge efficiency, we introduce the charge saturation and the charging power, and divide the charging mode into the saturated charging mode and the unsaturated charging mode. The relationships between the time-dependent charge saturation and the parameters of the general driving field are discussed both analytically and numerically. And according to the Floquet

The Carnot cycle combines reversible isothermal and adiabatic strokes to obtain optimal efficiency, at the expense of a vanishing power output. Here, we construct quantum Carnot-analog cycles, operating irreversibly at non-vanishing power. Swift thermalization is obtained utilizing shortcut to equilibrium protocols and the isolated strokes employ frictionless shortcut to adiabaticity protocols. We solve the dynamics for a working medium composed of a particle in

We consider a fractional generalization of the two-dimensional (2D) quantum-mechanical Kepler problem corresponding to the 2D hydrogen atom. Our main finding is that the solution for the discrete spectrum exists only for $\mu>1$ (more specifically $1 < \mu \leq 2$, where $\mu=2$ corresponds to the "ordinary" 2D hydrogenic problem), where $\mu$ is the L\'evy index.
We show also that in fractional 2D hydrogen atom, the orbital momentum degeneracy is lifted so that its energy starts to depend not only on principal quantum number $n$ but also

We introduce a model to study the collisions of two ultracold diatomic molecules in one dimension interacting via pairwise potentials. We present results for this system, and argue that it offers lessons for real molecular collisions in three dimensions. We analyze the distribution of the adiabatic potentials in the hyperspherical coordinate representation as well as the distribution of the four-body bound states in the adiabatic approximation (i.e. no coupling between adiabatic channels). It is found that while the adiabatic

We analyse quasi-periodically driven quantum systems that can be mapped exactly to periodically driven ones and find Floquet Time Spirals in analogy with spatially incommensurate spiral magnetic states. Generalising the mechanism to many-body systems we discover that a form of discrete time-translation symmetry breaking can also occur in quasi-periodically driven systems. We construct a discrete time quasi-crystal stabilised by many-body localisation, which persists also under perturbations that break the

We present a detailed study of the topological Schwinger model [$\href{this http URL}{Phys. \; Rev.\; D \; {\bf 99},\;014503 \; (2019)}$], which describes (1+1) quantum electrodynamics of an Abelian $U(1)$ gauge field coupled to a symmetry-protected topological matter sector, by means of a class of $\mathbb{Z}_N$ lattice gauge theories. Employing density-matrix renormalization group techniques that exactly implement Gauss'
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.

@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.

It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors).

Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...

$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.

Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x+\frac18-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$

We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)

@TheSimpliFire Hey! With $4\pmod {10}$ and $0\pmod 4$, this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-1)$, which means $m_1$ is even. We get $4\pmod {20}$ now :P

Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that: For distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. It is of anticipation that there will be much fewer solutions for incr...
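A quick brute-force check of that last conjecture over a small range (illustrative only; it obviously proves nothing beyond the range searched):

# Search for distinct positive integers a, b with a^b - b^a = a + b.
solutions = [(a, b)
             for a in range(1, 60)
             for b in range(1, 60)
             if a != b and a**b - b**a == a + b]
print(solutions)  # expect [(2, 5)] in this range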
The tale that hash tables are amortized $\Theta(1)$ is an oversimplification. This is only true if:
- The amount of data to hash per item is trivial compared to the number of Keys and the speed of hashing a Key is fast - $k$.
- The number of Collisions is small - $c$.
- We do not take into account time needed to Resize the hash table - $r$.

Large strings to hash

If the first assumption is false the running time will go up to $\Theta(k)$. This is definitely true for large strings, but for large strings a simple comparison would also have a running time of $\Theta(k)$. So a hash is not asymptotically slower, although hashing will always be slower than a simple comparison, because a comparison has an early opt-out (so it is $O(k)$ but often finishes in constant time), whereas hashing always has to process the full string ($\Theta(k)$). Note that integers grow very slowly. 8 bytes can store values up to $10^{18}$; 8 bytes is a trivial amount to hash. If you want to store bigints then just think of them as strings.

Slow hash algorithm

If the amount of time spent hashing is non-trivial compared to the storage of the data then obviously the $\Theta(1)$ assumption becomes untenable. Unless a cryptographic hash is used this should not be an issue. What matters is that $n \gg k$. As long as that holds, $\Theta(1)$ is a fair statement.

Many collisions

If the hashing function is poor, or the hash table is small, or the size of the hash table is awkward, collisions will be frequent and the running time will degrade. The hashing function should be chosen so that collisions are rare whilst still being as fast as possible; when in doubt, opt for fewer collisions at the expense of slower hashing. A rule of thumb is that the hashing table should always be less than 75% full. And the size of the hashing table should not have any correlation with the hashing function. Often the size of the hashing table is (relatively) prime.

Resizing the hash table

Because a nearly full hash table will give too many collisions and a big (empty) hash table is a waste of space, many implementations allow the hash table to grow (and shrink!) as needed. The growing of a table can involve a full copy of all items (and possibly a reshuffle), because the storage needs to be contiguous for performance reasons. Only in pathological cases will the resizing of the hash table be an issue, so the (costly but rare) resizes are amortized across many calls.

Running time

So the real running time of a hash table is $\Theta(kcr)$. Each of $k$, $c$, $r$ on average is assumed to be a (small) constant in the amortized running time, and thus we say that $\Theta(1)$ is a fair statement.

To get back to your questions

Please excuse me for paraphrasing; I've tried to extract different sets of meanings, feel free to comment if I've missed some.

You seem to be concerned about the length of the output of the hash function. Let's call this $m$ ($n$ is generally taken to be the number of items to be hashed). $m$ will be $\log(n)$ because $m$ needs to uniquely identify an entry in the hash table. This means that $m$ grows very slowly. At 64 bits the number of hash table entries will take up a sizeable portion of worldwide available RAM. At 128 bits it will far exceed the available disk storage on planet earth. Producing a 128 bit hash is not much harder than a 32 bit hash, so no, the time to create a hash is not $O(m)$ (or $O(\log(n))$ if you will).

The hash function going through $\log(n)$ bits of the element is going to take $\Theta(\log(n))$ time.
But the hash function does not go through $\log(n)$ bits of the elements. Per one item (!!) it only goes through $O(k)$ data. Also, the length of the input ($k$) has no relation with the number of elements. This matters, because some non-hashing algorithms have to examine many elements in the collection to find a (non-)matching element. The hash table only does 1 or 2 comparisons per item under consideration on average before reaching a conclusion.

Why are hash tables efficient for storing variable-length elements?

Because irrespective of the length of the input ($k$) the length of the output ($m$) is always the same, collisions are rare and lookup time is constant. However when the key length $k$ grows large compared to the number of items in the hash table ($n$) the story changes...

Why are hash tables efficient for storing large strings?

Hash tables are not very efficient for very large strings. If not $n \gg k$ (i.e. the size of the input is rather large compared to the number of items in the hash table) then we can no longer say that the hash has a constant running time, but must switch to a running time of $\Theta(k)$, especially because there is no early out. You have to hash the full key. If you're only storing a limited number of items then you may be much better off using a sorted storage, because when comparing $k_1 \ne k_2$ you can opt out as soon as a difference is seen. However if you know your data, you can choose not to hash the full key, but only the (known or assumed) volatile part of it, restoring the $\Theta(1)$ property whilst keeping the collisions in check.

Hidden constants

As everyone ought to know, $\Theta(1)$ simply means that the time per element processed is a constant. This constant is quite a bit larger for hashing than for simple comparison. For small tables a binary search will be faster than a hash lookup, because e.g. 10 binary comparisons might very well be faster than a single hash. For small datasets alternatives to hash tables should be considered. It's on large datasets that hash tables truly shine.
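To make the early-out point concrete, here is a small, hedged Python timing sketch (exact numbers will vary by machine and Python build): hashing a long string must touch every character, whereas comparing two long strings that differ at the first character returns almost immediately. Distinct key objects are used because CPython caches a string's hash inside the object after the first computation.

import timeit

k = 10**6
# 100 distinct long keys (distinct objects, so per-object hash caching does not
# hide the per-character cost of the first hash computation)
keys = ["x" * k + str(i) for i in range(100)]
a = "x" * k
b = "y" + "x" * (k - 1)     # differs from `a` at the very first character

t_hash = timeit.timeit(lambda: [hash(s) for s in keys], number=1)
t_cmp = timeit.timeit(lambda: [a == b for _ in range(100)], number=1)

print(f"first-time hash of 100 strings of length {k}: {t_hash:.4f} s")
print(f"100 comparisons that fail at character 0:    {t_cmp:.4f} s")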
Fundamental principle I: stars are self-gravitating bodies in dynamical equilibrium due to a balance of gravity and internal pressure forces.

Equation of hydrostatic equilibrium: consider a small volume element at a distance r from the centre (cross section δS, length δr); balancing pressure and gravity gives
$$\frac{dP_r}{dr} = -\frac{GM_r\rho_r}{r^2}.$$

Equation of distribution of mass:
$$\frac{dM_r}{dr} = 4\pi r^2\rho_r.$$

Dimensional analysis: use this to estimate the central pressure, by considering a point part-way out and approximating the gradients by ratios of characteristic values.

Dynamical timescale: the dynamical timescale is the time it would take the star to collapse completely if pressure forces were negligible. From the equation of motion for an inward displacement, putting $r \sim R_s$ and $M_r \sim M_s$ to estimate, $t_{\rm dyn} \sim \sqrt{R_s^3/GM_s}$.

Virial theorem: start with
$$4\pi r^3\,dP_r = -\left(\frac{GM_r}{r}\right)4\pi r^2\rho_r\,dr$$
and integrate over the star. Integrating the left-hand side by parts, we can cancel the first term as it is zero at both limits, leaving
$$-3\int P\,dV = \Omega \quad \text{(total gravitational energy)} \qquad (1)$$

Thermal energy per unit volume: $u = \tfrac{f}{2}nkT$ (f represents degrees of freedom). Substitute all this in, recall the ideal gas law, and write the total thermal energy as U; then back in (1) this gives
$$3(\gamma-1)U + \Omega = 0.$$
For a fully ionised, ideal gas $\gamma = 5/3$, so the total energy is
$$E = U + \Omega = \frac{3\gamma-4}{3(\gamma-1)}\,\Omega = \tfrac12\Omega = -U.$$

Now for a quick summary to remind you of the steps to take if you ever need to derive this:
1. Start with the equation of hydrostatic equilibrium
2. Multiply by 4πr³ and integrate over the radius of the star
3. Substitute for gravitational energy
4. Consider thermal energy

Fundamental principle II - negative 'heat capacity': for the above case, if total energy decreases, thermal energy increases and the star heats up. Nuclear burning is self-regulatory: if nuclear burning and thus total energy decrease, the core contracts and heats up; this causes nuclear burning to then increase again, as it is highly temperature dependent. Conversely, if there is an increase in nuclear burning, the core will expand and cool, decreasing it again.

Unstable stars: if γ = 4/3, then E = 0, i.e. the star has zero binding energy. In this case, the star is easily disrupted, causing rapid mass loss.

Periods when nuclear burning is not active: during star formation, energy is lost from the surface, so the proto-star contracts and heats up until hydrogen burning is ignited. When hydrogen fuel is exhausted, the same thing happens, and if the star is massive enough, fusion of heavier elements can occur.

Fundamental Principle III: since stars lose energy by radiation, stars supported by thermal pressure require an energy source to avoid collapse.

Thermal timescale (aka Kelvin-Helmholtz timescale): $t_{\rm KH} \sim E_{\rm th}/L_s \sim GM_s^2/(2R_sL_s)$ ~ 15 million years, i.e. too short to provide energy for a stellar lifetime. Since thermal energy ~ gravitational energy, it is clear that for stars there must be another mechanism - this is where nuclear fusion comes in.

Nuclear timescale: $t_{\rm nuc} \sim \eta M_c c^2/L_s$, where η is the efficiency of H->He fusion and $M_c$ is the mass of the core. Working this out gives us the timescale we would expect for a star.

Energy loss at a stellar surface is compensated for by energy release from nuclear reactions in the stellar interior: $\frac{dL_r}{dr} = 4\pi r^2\rho_r\varepsilon_r$ for any elementary shell, where $\varepsilon_r$ is the nuclear energy released per unit mass per second.

Energy transport

Conduction: negligible except in degenerate matter.

Radiation: the interior consists of X-ray photons which undergo a random walk over ~5×10³ yr and are degraded to optical frequencies.

Convection: consider a bubble with initial pressure and density $P_1$, $\rho_1$, rising by an amount dr. Ambient pressure and density are $P_2$, $\rho_2$.

Consider a spherical shell, area A = 4πr², radius r, thickness dr. Radiation pressure corresponds to a momentum flux; the rate of deposition of momentum in the shell r -> r+dr is equation (i). Opacity, κ, is a measure of absorption.
Solution: where we define the optical depth τ (and 1/κρ is the mean free path). If τ >> 1, the material is optically thick; if τ << 1, the material is optically thin. The rate of momentum absorption in the shell is equal to equation (i). So putting it all together, we obtain our final equation for radiative energy transport.

Assuming the bubble remains in pressure equilibrium with the ambient medium, then (binomial expansion) we can relate its density after the rise to the ambient density. We have convective instability when the bubble keeps rising, i.e. if it stays less dense than its surroundings. Now substitute the expressions we worked out for $\rho_2$ and the bubble density into our condition for instability -> instability when (dropping the subscripts) the resulting gradient condition holds. Alternatively, we can express the instability condition as

actual temperature gradient > adiabatic (critical) temperature gradient.

Fundamental principle summary: before we forge ahead, let's just check that we're not too lost on everything covered above.
· Stars are self-gravitating bodies in dynamic equilibrium due to a balance of gravity and internal pressure forces (hydrostatic equilibrium).
· Stars lose energy by radiation from the surface. Stars supported by thermal pressure require an energy source to avoid collapse, e.g. nuclear and gravitational energy ($3(\gamma-1)U+\Omega=0$).
· Temperature structure is largely determined by the mechanism by which energy is transported from the core to the surface: conduction, convection and radiation.
· The central temperature is determined by the characteristic temperature for the appropriate nuclear fusion reactions ($10^7$ K for hydrogen, $10^8$ K for helium).
· Normal stars have a negative 'heat capacity' (virial theorem), i.e. they heat up when their total energy decreases.
· In a non-degenerate core, nuclear burning is self-regulatory (negative feedback system).
· The global structure of a star is determined by the simultaneous satisfaction of these principles, and the local properties of a star are determined by the global structure (so it all boils down to making sure everything works as we've said it does). Mathematically, this requires the simultaneous solution of a set of coupled, non-linear differential equations with boundary conditions, but before you hide under your covers at the prospect of this, that isn't something we're even going to attempt here.

Equations of stellar structure summary (aka the ones to remember):
1. distribution of mass
2. hydrostatic equilibrium
3. energy generation
4. energy transport
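As a rough numerical illustration of the timescales mentioned above (solar values assumed; the efficiency η ≈ 0.007 and core mass fraction 0.1 are standard assumed values, not taken from these notes):

import math

# Approximate solar values (SI)
G = 6.674e-11        # gravitational constant
M = 1.989e30         # solar mass (kg)
R = 6.96e8           # solar radius (m)
L = 3.828e26         # solar luminosity (W)
yr = 3.156e7         # seconds per year

# Dynamical (free-fall) timescale: collapse time if pressure support vanished
t_dyn = math.sqrt(R**3 / (G * M))
print(f"t_dyn ~ {t_dyn:.0f} s (~{t_dyn/3600:.1f} h)")

# Kelvin-Helmholtz (thermal) timescale: thermal energy (~|Omega|/2 by the virial
# theorem) divided by the luminosity
t_KH = G * M**2 / (2 * R * L)
print(f"t_KH ~ {t_KH/yr:.2e} yr")    # of order 1.5e7 yr, the '15 million years' above

# Nuclear timescale: eta * M_c * c^2 / L, with eta ~ 0.007 for H -> He and M_c ~ 0.1 M
c = 3.0e8
t_nuc = 0.007 * 0.1 * M * c**2 / L
print(f"t_nuc ~ {t_nuc/yr:.2e} yr")  # of order 1e10 yr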
Suppose $A\in\mathbb{R}^{n\times n}$ is symmetric and positive definite and that we have several symmetric matrices $B_i\in\mathbb{R}^{n\times n}$ that are low-rank and indefinite. I need an algorithm for solving the following optimization problem:$$\begin{align*}\min_{x\in\mathbb{R}^n}\ & x^\top Ax\\\textrm{s.t.}\ & \|x\|_2^2=1\\& x^\top B_i x=0\ \forall i\in\{1,\ldots,k\}.\end{align*}$$ Any suggestions? I am hoping it is somehow equivalent to an eigenvalue problem. Assorted observations: If the second constraint is removed, the output is the minimal eigenvector of $A$. The Lagrange multiplier expression is $Ax=\lambda x + \sum_i \mu_iB_ix$. So, if we somehow know the dual variables $\{\mu_i\}_{i=1}^k$, then $x$ is an eigenvector of $A-\sum_i\mu_i B_i$ with minimal eigenvalue $\lambda$. So, we could think of $x$ as a very nonlinear function $x(\mu_1,\ldots,\mu_k)$. If one of the $B_i$'s is positive definite, then this problem is not satisfiable (implies $x=0$). But for my specific problem I can prove the feasible region is nonempty. This can be approached using semidefinite relaxation, but this would be very slow for large $n$.
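Not the eigenvalue-style answer the question is hoping for, but as a hedged baseline one can always hand the stated problem to a generic constrained local solver. The sketch below uses scipy's SLSQP on a random instance (random instances are not guaranteed to be feasible, and a local solver gives no global guarantee):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, k = 10, 2

# Random symmetric positive-definite A and low-rank symmetric indefinite B_i
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
Bs = []
for _ in range(k):
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    Bs.append(np.outer(u, u) - np.outer(v, v))   # rank-2, indefinite

cons = [{'type': 'eq', 'fun': lambda x: x @ x - 1.0}]
cons += [{'type': 'eq', 'fun': (lambda B: lambda x: x @ B @ x)(B)} for B in Bs]

x0 = rng.standard_normal(n)
x0 /= np.linalg.norm(x0)
res = minimize(lambda x: x @ A @ x, x0, constraints=cons, method='SLSQP')

x = res.x
print("objective x^T A x =", x @ A @ x)
print("||x||^2 =", x @ x, " x^T B_i x =", [float(x @ B @ x) for B in Bs])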
The magnetic field due to a magnetic dipole moment, $\boldsymbol{m}$, at a point $\boldsymbol{r}$ relative to it may be written $$ \boldsymbol{B}(\boldsymbol{r}) = \frac{\mu_0}{4\pi r^3}[3\boldsymbol{\hat{r}}(\boldsymbol{\hat{r}} \cdot \boldsymbol{m}) - \boldsymbol{m}], $$ where $\mu_0$ is the vacuum permeability. In geomagnetism, it is usual to write the radial and angular components of $\boldsymbol{B}$ as: $$ \begin{align*} B_r & = -2B_0\left(\frac{R_\mathrm{E}}{r}\right)^3\cos\theta, \\ B_\theta & = -B_0\left(\frac{R_\mathrm{E}}{r}\right)^3\sin\theta, \\ B_\phi &= 0, \end{align*} $$ where $\theta$ is the polar (colatitude) angle (relative to the magnetic North pole), $\phi$ is the azimuthal angle (longitude), and $R_\mathrm{E}$ is the Earth's radius, about 6370 km. See below for a derivation of these formulae. With this definition, $B_0$ denotes the magnitude of the mean value of the field at the magnetic equator on the Earth's surface, about $31.2\; \mathrm{\mu T}$.

With these definitions, we can plot the dipole component of the Earth's magnetic field as a Matplotlib streamplot. We first construct a meshgrid of $(x,y)$ coordinates and convert them into polar coordinates with the relations:$$\begin{align*}r &= \sqrt{x^2 + y^2},\\\theta &= \mathrm{atan2}(y, x),\end{align*}$$where the two-argument arctangent function, $\mathrm{atan2}$, is implemented in NumPy as arctan2. We can then use the formulae above to calculate $B_r$ and $B_\theta$ and convert back to Cartesian coordinates to plot the streamplot. The $\theta$ coordinate is offset by $\alpha = 9.6^\circ$ to account for the current tilt of the Earth's magnetic dipole with respect to its rotational axis.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

# Mean magnitude of the Earth's magnetic field at the equator in T
B0 = 3.12e-5
# Radius of Earth, Mm (10^6 m: mega-metres!)
RE = 6.370
# Deviation of magnetic pole from axis
alpha = np.radians(9.6)

def B(r, theta):
    """Return the magnetic field vector at (r, theta)."""
    fac = B0 * (RE / r)**3
    return -2 * fac * np.cos(theta + alpha), -fac * np.sin(theta + alpha)

# Grid of x, y points on a Cartesian grid
nx, ny = 64, 64
XMAX, YMAX = 40, 40
x = np.linspace(-XMAX, XMAX, nx)
y = np.linspace(-YMAX, YMAX, ny)
X, Y = np.meshgrid(x, y)
r, theta = np.hypot(X, Y), np.arctan2(Y, X)

# Magnetic field vector, B = (Bx, By), as separate components
Br, Btheta = B(r, theta)
# Transform to Cartesian coordinates: NB make North point up, not to the right.
c, s = np.cos(np.pi/2 + theta), np.sin(np.pi/2 + theta)
Bx = -Btheta * s + Br * c
By = Btheta * c + Br * s

fig, ax = plt.subplots()

# Plot the streamlines with an appropriate colormap and arrow style
color = 2 * np.log(np.hypot(Bx, By))
ax.streamplot(x, y, Bx, By, color=color, linewidth=1, cmap=plt.cm.inferno,
              density=2, arrowstyle='->', arrowsize=1.5)

# Add a filled circle for the Earth; make sure it's on top of the streamlines.
ax.add_patch(Circle((0,0), RE, color='b', zorder=100))

ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_xlim(-XMAX, XMAX)
ax.set_ylim(-YMAX, YMAX)
ax.set_aspect('equal')
plt.show()

For now, consider the dipole to lie along the $y$-axis: $\boldsymbol{m} = m\boldsymbol{\hat{\jmath}}$ and define $\boldsymbol{B_0}$ to be the field perpendicular to $\boldsymbol{m}$ at a distance $R_\mathrm{E}$ from it (i.e.
on the Earth's magnetic equator):$$\boldsymbol{B_0} = -\frac{\mu_0m}{4\pi R_\mathrm{E}^3} \boldsymbol{\hat{\jmath}} = B_0\boldsymbol{\hat{\jmath}}.$$Then,$$\boldsymbol{B}(\boldsymbol{r}) = -B_0 \left(\frac{R_\mathrm{E}}{r}\right)^3 [3\cos\theta \boldsymbol{\hat{r}} - \boldsymbol{\hat{\jmath}}].$$The radial component of the magnetic field is therefore$$B_r = \boldsymbol{B}\cdot\boldsymbol{\hat{r}} = -B_0\left(\frac{R_\mathrm{E}}{r}\right)^3[3\cos\theta - \cos\theta] = -2B_0\left(\frac{R_\mathrm{E}}{r}\right)^3\cos\theta,$$Its angular component is perpendicular to this:$$B_\theta = \boldsymbol{B}\cdot\boldsymbol{\hat{\theta}} = - B_0 \left(\frac{R_\mathrm{E}}{r}\right)^3 [-\boldsymbol{\hat{\jmath}} \cdot \boldsymbol{\hat{\theta}}] = -B_0\left(\frac{R_\mathrm{E}}{r}\right)^3\sin\theta.$$Finally, $B_\phi = 0$: the field is symmetric about the axis of the dipole.
I have never encountered this before (encountered it now in Sean Carroll's GR text when discussing the benefit of solving problems in locally inertial references frame). Let $\gamma$ be the Lorentz factor, $g_{\mu\nu}$ be the metric tensor, and let $U^\mu$ and $V^\mu$ be 2 four-velocities. Can anyone explain the following definition? $$\gamma = -g_{\mu\nu}U^\mu V^\nu $$
Bull. Korean Math. Soc. Published online August 6, 2019. Saadoun Mahmoudi and Karim Samei (Bu Ali Sina University). Abstract: In this paper, we introduce $SR$-additive codes as a generalization of the classes of $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$ and $\mathbb{Z}_{2}\mathbb{Z}_{2}[u]$-additive codes, where $S$ is an $R$-algebra and an $SR$-additive code is an $R$-submodule of $S^{\alpha}\times R^{\beta}$. In particular, the definitions of bilinear forms, weight functions and Gray maps on the classes of $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$ and $\mathbb{Z}_{2}\mathbb{Z}_{2}[u]$-additive codes are generalized to $SR$-additive codes. Also the Singleton bound for $SR$-additive codes and some results on one-weight $SR$-additive codes are given. Among other important results, we obtain the structure of $SR$-additive cyclic codes. As some results of the theory, the structures of cyclic $\mathbb{Z}_{2}\mathbb{Z}_{4}$, $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$, $\mathbb{Z}_{2}\mathbb{Z}_{2}[u]$, $(\mathbb{Z}_{2})(\mathbb{Z}_{2} + u\mathbb{Z}_{2} + u^{2}\mathbb{Z}_{2})$, $(\mathbb{Z}_{2} + u\mathbb{Z}_{2})(\mathbb{Z}_{2} + u\mathbb{Z}_{2} + u^{2}\mathbb{Z}_{2})$, $(\mathbb{Z}_{2})(\mathbb{Z}_{2} + u\mathbb{Z}_{2} + v\mathbb{Z}_{2})$ and $(\mathbb{Z}_{2} + u\mathbb{Z}_{2})(\mathbb{Z}_{2} + u\mathbb{Z}_{2} + v\mathbb{Z}_{2})$-additive codes are presented. Keywords: Additive code, Chain ring, Galois ring
Basically, you do not compute these lengths $2^n$ or $n^2$, but only the from-to computations that the automaton would be able to make on inputs of these particular lengths. For the first two problems the automaton computes (in its internal states) the relation $E_\ell = \{ (p,q)\in Q\times Q \mid q\in\delta^*(p,y) \text{ for some $y$ with $|y|=\ell$} \}$ for some value of $\ell$ (which depends on $|x|$). Then $E_{2\ell}=E^2_\ell$, which helps to solve the first problem. Here $E^2$ is the composition $E\circ E$ of the binary relation $E$ with itself: $E^2 = \{ (p,r) \mid \text{ for some } q\in Q \text{ both } (p,q)\in E \text{ and } (q,r)\in E\}$

I add some details. This is still rather formal, without much intuition. I hope it helps you in the right direction. Assume we are given a FSA $\mathcal A= (Q,\Sigma,\delta,q_0,F) $ for language $A$. We will construct an automaton for your first problem. The new states are of the form $(p,E)$ where $p\in Q$ and $E\subseteq Q\times Q$; i.e., the new state set equals $Q\times{\mathcal P}(Q\times Q)$. Note that the number of possible relations $E$ is finite, hence adding such a relation to the states of the automaton will still lead to a finite state automaton. The second component will always contain the relation $E_{2^n}$ where $n$ is the length of the input read. The initial state equals $(q_0,E_1)$, where $E_1$ is the one-step relation on $Q$, just as defined above. Now from every $(p,E)$ reading letter $a$ we move to state $(\delta(p,a),E^2)$. Thus, the first component copies the original move in $\mathcal A$, the second component steps from $E_{2^n}$ to $E_{2^n}\circ E_{2^n}=E_{2^{n+1}}$ following the recipe above. When is a state $(p,E)$ final? When there is a pair $(p,q)\in E$ such that $q\in F$ is final in $\mathcal A$.

The $n^2$ problem is along similar lines, but actually slightly more involved (I think). It uses the fact that $(n+1)^2 = n^2 + 2n +1$. The automaton stores both $E_{n^2}$ and $E_{n}$ to easily update the square.
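For intuition, here is a small, hedged Python sketch of the construction for the first problem (I am assuming the language in question is $\{x : xy\in A \text{ for some } y \text{ with } |y|=2^{|x|}\}$, since the original problem statement is not quoted above); it tracks the pair $(p, E_{2^n})$ exactly as described, squaring the relation once per input letter:

def accepts(delta, q0, finals, sigma, x):
    """Simulate the (p, E) automaton: p tracks delta*(q0, x), and E tracks
    E_{2^n} = {(p, q) : q reachable from p by some word of length 2^n},
    where n is the number of letters read so far."""
    E = {(p, delta[p][a]) for p in delta for a in sigma}   # E_1: one-step relation
    p = q0
    for a in x:
        p = delta[p][a]                                            # copy the move of A
        E = {(r, t) for (r, s) in E for (s2, t) in E if s == s2}   # E -> E o E
    # accept iff some q with (p, q) in E is final in A
    return any(q in finals for (r, q) in E if r == p)

# Tiny example: A accepts words over {a} of even length (states 0 and 1, 0 final).
delta = {0: {'a': 1}, 1: {'a': 0}}
print(accepts(delta, 0, {0}, {'a'}, 'aa'))   # |x| = 2: is x.y in A for some |y| = 4? -> True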
The Schwinger effect can be calculated in the world-line formalism by coupling the particle to the target space potential $A$. My question relates to how this calculation might extend to computing particle creation in an accelerating frame of reference, i.e. the Unruh effect. Consider the one-loop world-line path integral: $$Z_{S_1} ~=~ \int^\infty_0 \frac{dt}{t} \int d[X(\tau)] e^{-\int_0^t d\tau g^{\mu\nu}\partial_\tau X_\mu \partial_\tau X_\nu},$$ where $g_{\mu\nu}$ is the target space metric in a (temporarily?) accelerating reference frame in flat space and the path integral is over periodic fields on $[0,t]$, $t$ being the modulus of the circular world-line. If the vacuum is unstable to particle creation, then the imaginary part of this should correspond to particle creation. Since diffeomorphism invariance is a symmetry of the classical 1-dimensional action here, but not of $Z_{S^1}$, since it depends on the reference frame, can I think of the Unruh effect as an anomaly in the one-dimensional theory, i.e. a symmetry that gets broken in the path integral measure when I quantize? This question would also apply to string theory: is target space diffeomorphism invariance anomalous on the worldsheet?
Show that if $0\leq x < n$, $n \geq 1$, and $n\in\mathbb{N}$ then $$ 0 \leq e^{-x} - \left( 1 - \frac{x}{n} \right)^n \leq \frac{x^2e^{-x}}{n}, $$ by using induction.

Progress: I decided to split the problem up into two parts, (i) and (ii).

(i) $ 0 \leq e^{-x} - \left ( 1- \frac{x}{n} \right ) ^ n $.

(ii) $e^{-x} - \left ( 1- \frac{x}{n} \right ) ^ n \leq \frac{x^2e^{-x}}{n}.$

For part (i), Base case: ($n = 1$) We have $$\begin{align} 1-x &\leq e^{-x} \\ e^{-x} +x -1 & \geq 0. \end{align}$$ Differentiating gives $$ \begin{align} \frac{d}{dx} \left ( e^{-x} +x -1 \right )& = -e^{-x} +1 \\ & \geq 0 \end{align}$$ for $0 \leq x <1$. Thus the function is non-decreasing on $\left[ 0,1 \right )$, and since it equals $0$ at $x=0$, this implies $e^{-x} + x -1 \geq 0$; thus $ 1-x \leq e^{-x}$ on the required interval. (My calc isn't very strong but I assume this is the right process to prove one function is $\geq$ another.)

Induction step: Assume the formula holds for some arbitrary positive integer $n=k$, that is, $$0 \leq e^{-x} - \left( 1-\frac{x}{k} \right )^k.$$ Using this, we must now show that the formula holds for $n=k+1$, i.e. $$ 0 \leq e^{-x} - \left ( 1- \frac{x}{k+1} \right ) ^{k+1}. $$ So how on earth do we use the induction step to get the required equation for $n=k+1$?
In studying some physical propagator, I came across the following sum$$\sum_{n = -\infty}^{+\infty} \frac{ a^n }{ \sin^2(z + n \pi \tau) }\ . $$Obviously, my question is how to evaluate this sum. To some extent, I understand the result when $a = 1$. Loosely speaking, without properly regularizing, we have $$ \sum_{n \in \mathbb{Z}} \frac{ 1 }{ \sin^2(z + n \pi \tau) } = - \sum_{n \in \mathbb{Z}} \partial_z \partial_z \ln \sin(z + n\pi\tau) = - \partial_z^2 \ln \prod_{n \in \mathbb{Z}} \sin(z+n\pi\tau) \ . $$ The final infinite product can be identified with $\theta_1(z/\pi|\tau)$, where $q = e^{2\pi i \tau}$, so up to regularization issues, we have $$ \sum_{n\in \mathbb{Z}} \frac{ 1 }{ \sin^2(z + n\pi \tau) } = - \partial_z \partial_z \ln \theta_1(z/\pi|\tau) $$ However in the presence of $a^n$, I can't pull off this trick again (as far as I can see). Suggestions on literature/references and more tricks are welcome!
I was presented with the following problem; Show that if $\sum b_n$ is a rearrangement of a series $\sum a_n$ , and $a_n$ diverges to $\infty$, then $\sum b_n = \infty$. How would one solve this? It seems intuitively true, but how could I show it? I suppose you mean that $a_n\to +\infty$. Under this assumption, we can proceed as follows. Let $M>0$ be an arbitrary number. Then there is some $N>0$ such that $a_n>1$ for any $n>N$. Since $\{b_n\}$ is a rearrangement of $\{a_n\}$, then there is some $N'>0$ such that $a_1,\ldots,a_N$ are contained in $b_1,\ldots,b_{N'}$. Set $B=\sum_{k=1}^{N'}b_k$. Then for any $T>|B|+M+N'$, we have $$\sum_{k=1}^{T}b_k=\sum_{k=1}^{N'}b_k+\sum_{k=N'+1}^{T}b_k\ge B+(T-N')>M.$$ Therefore, $\sum b_n\to+\infty$. If we only know $\sum a_n=+\infty$, we can say nothing about $\sum b_n$. See Riemann rearrangement theorem.
Let $A$ be the free associative algebra over a field $k$ generated by countably many indeterminates $x_1, x_2, \ldots$. I want to show that for any $n$, $x_1 \ldots x_n$ is not in the ideal $I$ generated by $S=\{x_i^2, x_ix_j+x_jx_i : i,j \geq 1\}$. My attempt: Suppose for a contradiction that it is. Then it would be a sum of terms of the form $xsy$ where $x,y \in A$ and $s \in S$. I would like to argue that each such term $xsy$ must be $0$ since $s$ has the form $x_i^2$ (so it has degree $2$ in $x_i$, whereas $x_1 \ldots x_n$ does not) or $x_ix_j+x_jx_i$ (and this is 'bad' because it contributes two terms where the orders in which $x_i$ and $x_j$ occur are flipped). A problem with this argument (other than it being too informal), is that it might be that (the 'bad' part of) such a term $xsy$ cancels with (the 'bad' part of) another term $xs'y$, neither term being $0$. How do I correct and formalize my argument? Footnote: I am only interested in the case where $k$ has characteristic not equal to $2$; in this case the ideal $I$ can be generated just by the relations $\{ x_ix_j+x_jx_i : i,j \geq 1\}$, since $x_i^2 = \frac{1}{2} (x_ix_i+x_ix_i)$. Feel free to assume this, but if there is a proof that works in full generality even in characteristic $2$ I would love to see it.
I have a matrix of $n \times n$ dimension: $$ K - \omega^2 M = \begin{pmatrix} 2\omega_0^2 - \omega^2 & - \omega_0^2 & 0 & \cdots & 0 \\ - \omega_0^2 & 2\omega_0^2 - \omega^2 & -\omega_0^2 & \cdots & 0 \\ 0 & -\omega_0^2 & 2\omega_0^2-\omega^2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 2\omega_0^2-\omega^2 \end{pmatrix} $$ And I want a solution to the equation: $$ \left( K - \omega^2 M \right) \cdot \mathbf{c} = 0, \quad \mathbf{c} = \left( c_1, c_2, \cdots, c_n \right)^T $$ The first problem is obviously the characteristic equation: $$ \det \left( K - \omega^2 M \right) = 0$$ which is too hard for Mathematica to handle (it couldn't simplify even when I plugged in a general eigenvalue), so I carried this out manually and the eigenvalues are: $$ \omega_k = 2 \omega_0 \cos \frac{k \pi}{2(n+1)}, \quad 1 \leq k \leq n $$ My problem is to obtain eigenvectors for the general case $n$, which I can set to any integer value and have it evaluated. The definition of the matrix is simple so far:

M = Table[Table[KroneckerDelta[i, j] (2 - a^2) - KroneckerDelta[i, j + 1] - KroneckerDelta[i, j - 1], {j, 1, n}], {i, 1, n}]

Where I can set $n$ to any value ($n = 5$ for example). Notice that this is a nondimensionalised matrix with $\omega = a \omega_0$ and for the sake of clarity $\omega_0$ was cancelled $n$ times. Now here comes the problem: I will need a column vector of arbitrary length, but I can't write:

c = Table[ci, {i, 1, n}]

because Mathematica does not recognise "i" as a variable in "ci". Although this is the desired result for $n = 5$:

c = {c1, c2, c3, c4, c5}

The next thing is the solution to the problem with the correct eigenvalues:

S1 = Solve[Dot[M /. a -> 2 Cos[1 Pi/(2 n + 2)], c] == 0, c]
S2 = Solve[Dot[M /. a -> 2 Cos[2 Pi/(2 n + 2)], c] == 0, c]
S3 = Solve[Dot[M /. a -> 2 Cos[3 Pi/(2 n + 2)], c] == 0, c]
...
Sn = Solve[Dot[M /. a -> 2 Cos[n Pi/(2 n + 2)], c] == 0, c]

Again, how can I rewrite this for some arbitrary eigenvalue, so I can handle it in general form? The desired result is several lists of eigenvectors:

L1 = Flatten[{c1 /. S1, c2 /. S1, c3 /. S1, ..., cn /. Sn}]
L2 = Flatten[{c1 /. S2, c2 /. S2, c3 /. S2, ..., cn /. Sn}]
L3 = Flatten[{c1 /. S3, c2 /. S3, c3 /. S3, ..., cn /. Sn}]
...
Ln = Flatten[{c1 /. Sn, c2 /. Sn, c3 /. Sn, ..., cn /. Sn}]

And the final step is to plot all solutions:

Table[ListPlot[Li], {i, 1, n}]

Now all of this is obtainable by simply invoking:

e = Eigenvectors[M]

Which is simply a matrix of eigenvectors (one can think of a basis in which $M$ is diagonal). The problem is that Mathematica doesn't really know about the beauty and simplicity of the eigenvalues of such a matrix. As a result, the eigenvalues for e.g. $n = 6$ are pretty nasty, involving complex numbers and such - it's because $\cos \frac{\pi}{7}$ is really not a nice closed-form expression. Then the problem is that Mathematica cannot find eigenvectors for $n = 6$ in a suitable form (a typical eigenvector is "2 - a^2 - Root[...]" with strange things like #1) when the problem is obviously ONLY in the eigenvalues (when plugging in some eigenvalue manually I can obtain the corresponding eigenvector). My question is: how can I generalize those expressions for $\mathbf{c}$, $S_n$, $L_n$ and so on, or, alternatively, how can I obtain eigenvectors for every $n$ with Eigenvectors[M] without some time-consuming procedure involving #1 and Root[...] and so they are SORTED by corresponding eigenvalues? P.S.: I know that the eigenvectors are stationary waves.
On $C$-Bochner curvature tensor of a contact metric manifold Bull. Korean Math. Soc. 2005 Vol. 42, No. 4, 713-724 Published online December 1, 2005 Jeong-Sik Kim, Mukut Mani Tripathi, and Jaedong Choi Mathematical Information, Yosu National University; Lucknow University; Korea Air Force Academy Abstract : We prove that a $(\kappa ,\mu )$-manifold with vanishing $E$-Bochner curvature tensor is a Sasakian manifold. Several interesting corollaries of this result are drawn. Non-Sasakian $(\kappa ,\mu )$-manifolds with $C$-Bochner curvature tensor $B$ satisfying $B\left( \xi ,X\right) \cdot S=0$, where $S$ is the Ricci tensor, are classified. $N(\kappa )$-contact metric manifolds $M^{2n+1}$, satisfying $B\left( \xi ,X\right) \cdot R=0$ or $B\left( \xi ,X\right) \cdot B=0$ are classified and studied.
Let's try an example. Let's say you're trying to prove the following: Simple Theorem over the natural numbers: If $n$ is even, then $n+1$ is odd. What he's trying to say is that you don't need an independent proof that $n$ is even. In the natural numbers, this isn't even true, since not all numbers are even. What he's trying to say is that when trying to prove an implication like the Simple Theorem above, you're allowed to introduce the assumption that $n$ is even, and from that assumption, together with whatever other axioms you're using, try to prove that $n+1$ is odd. If you succeed in this, you can now correctly conclude that the implication is true. From his paper: The rule makes intuitive sense, a proof justifying A ⊃ B true assumes, hypothetically, the left-hand side of the implication so that A true, and uses this to show the right-hand side of the implication by proving B true. The proof of A⊃B true constructs a proof of B true from the additional assumption that A true. The whole painful stuff with the bars and labels is a variation of the pain that comes with other deduction systems when you're trying to create a proof context to hold the introduction of an assumption. When these contexts get nested in a complicated proof, it requires machinery analogous to lexical variable scoping rules in programming languages. The assumptions have to come and go properly so they don't allow unsound deductions in other parts of the proof. EDIT: per request of original poster. OK, let's dig into this some more. I'm going to use some different notation. The notation $$\Gamma \vdash \sigma$$ means some set of axioms $\Gamma$ proves some formula $\sigma$, where the turnstile $\vdash$ means "proves". Now the inference rule we're looking at can be written like this. We can conclude $$\Gamma \vdash A \rightarrow B$$ if we can prove $$\{\Gamma; A\} \vdash B$$ where the semicolon means we've added formula $A$ to the set of axioms $\Gamma$. Note that this also creates what I've been calling a "proof context" where we're keeping track of assumptions we've been making. The Pfenning paper uses labels on bars to do the same thing. Let's try it on a complicated example with nested proof contexts: prove the tautology $$A \rightarrow (B \rightarrow (C \rightarrow B))$$ In our notation, proving this tautology looks like this: $$\{\} \vdash A \rightarrow (B \rightarrow (C \rightarrow B))$$ That is, $\Gamma$ is empty since we're proving a tautology. Here we go, using our rule of inference. We can prove $$\{\} \vdash A \rightarrow (B \rightarrow (C \rightarrow B))$$ if we can prove $$\{A\} \vdash B \rightarrow (C \rightarrow B)$$ Let's keep going. We can in turn prove $$\{A\} \vdash B \rightarrow (C \rightarrow B)$$ if we can prove $$\{A, B\} \vdash (C \rightarrow B)$$ One more application of our rule: We can prove $$\{A, B\} \vdash (C \rightarrow B)$$ if we can prove $$\{A, B, C\} \vdash B$$ But this follows easily: since B is already an axiom, it follows that we can prove it. But we only got this axiom from our inference rule that allowed us to add it safely.
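The same bookkeeping is what a proof assistant does mechanically. Here is the nested example above written in Lean 4 (my own sketch, not from Pfenning's paper); each fun-binder introduces one hypothesis into the local context, playing the role of a labelled bar, and the hypothesis is discharged when the binder's scope ends:

-- proves the tautology A → (B → (C → B)) by assuming A, then B, then C in turn
example (A B C : Prop) : A → (B → (C → B)) :=
  fun _hA hB _hC => hB   -- conclude B directly from the hypothesis hB

The underscore-prefixed names mark assumptions that are introduced but never used, exactly like the hypotheses A and C in the hand proof.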
On $(\alpha, \beta)$-fuzzy subalgebras of $BCK/BCI$-algebras Bull. Korean Math. Soc. 2005 Vol. 42, No. 4, 703-711 Published online December 1, 2005 Young Bae Jun Gyeongsang National University Abstract : Using the \emph{belongs to} relation ($\in$) and \emph{quasi-coincidence with} relation (q) between fuzzy points and fuzzy sets, the concept of $(\alpha, \beta)$-fuzzy subalgebras where $\alpha,$ \, $\beta$ are any two of $\{\in,$ ${\rm q},$ $\in \!\vee \, {\rm q},$ $\in \!\wedge \, {\rm q}\}$ with $\alpha \ne \, \in \!\wedge \, {\rm q}$ is introduced, and related properties are investigated. Keywords : belong to, quasi-coincident with, $(\alpha, \beta)$-fuzzy subalgebra
For example, consider the electromagnetic theory given by\begin{align}I=-\frac{1}{4}\int d^4x\, F_{\mu\nu}F^{\mu\nu},\end{align}where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$. The action has a symmetry given by the space-time translations\begin{align}\delta A_\mu=-\epsilon^\alpha\partial_\alpha A_\mu,\end{align}with $\epsilon^\alpha$ the transformation parameter. In this case the energy-momentum tensor will be given by\begin{align}T^{\alpha}_\nu=F^{\mu\alpha}\partial_\nu A_\mu-\frac{1}{4}\delta^{\alpha}_\nu F_{\lambda\rho}F^{\lambda\rho}.\end{align}Everything seems to be ok, but note $T^{\alpha}_\nu$ will not be gauge-invariant and this is because $\delta A_\mu=-\epsilon^\alpha\partial_\alpha A_\mu$ is not gauge-invariant. This seems to be a problem, so one can do some sleight-of-hand and perform an 'improved translation', which is a space-time translation + a particular gauge transformation\begin{align}\delta A_\mu=-\epsilon^{\alpha}\partial_\alpha A_\mu+\partial_\mu(\epsilon^{\alpha}A_\alpha)=F_{\mu\alpha}\epsilon^\alpha.\end{align}Note that this transformation is gauge-invariant. For completeness, the (gauge-invariant) energy-momentum tensor will be\begin{align}T^{\alpha}_\nu=-F^{\alpha\beta}F_{\nu\beta}+\frac{1}{4}\delta^{\alpha}_\nu F^{\gamma\delta}F_{\gamma\delta}.\end{align}So this was a happy example. My questions are: What would happen if I couldn't always perform the 'improved translation'? Should the energy-momentum tensor always be gauge-invariant? What does this imply? (Furthermore, what would be the consequences for the quantum theory?) Are there examples of energy-momentum tensors that are not gauge-invariant? (or perhaps of the loss of some more (or less) "dramatic" symmetry?)
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover which consists of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half open interval of $U_1$ and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure What I hope from such a more direct computation is to get deeper rigorous and intuitive insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set. Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value? @AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$ which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why by doing what we have done we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$? $\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check it is convergent, and then compute its value The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure What I hope from such a more direct computation is to get deeper rigorous and intuitive insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set Alessandro: and typo for the third $\Bbb{I}$ in the quote, which should be $\Bbb{Q}$ (cont.) We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by b and a respectively Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals, and together, let the differences be $c_i = q_{n(i)} - q_{m(i)}$. These form a series that is bounded from above and below Hence (also a typo in the subscript just above): $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$ Consider the partial sums of the above series. Note every partial sum is telescoping since in finite series, addition associates and thus we are free to cancel out. By the construction of the cover $C$ every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum, by moving through the stages of the construction of $C$, i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable sequence is also telescoping and: @AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing but re-expressed differently. If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$ you can't conclude anything about the topologies, if however the function is continuous, then you can say stuff about the topologies @Overflow2341313 Could you send a picture or a screenshot of the problem? nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals. This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals, thus they are empty and do not contribute to the sum So there are only countably many disjoint intervals in the cover $C$ @Perturbative Okay similar problem if you don't mind guiding me in the right direction. If a function f exists, with the same setup (X, t) -> (Y,S), that is 1-1, open, and continuous but not onto, construct a topological space which is homeomorphic to the space (X, t). Simply restrict the codomain so that it is onto? Making it bijective and hence invertible. hmm, I don't understand. While I do start with an uncountable cover and using the axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird... In a schematic, we have the following, I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set: @Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology t on R such that f: R, U -> R, t defined by f(x) = x^2 is an open map where U is the "usual" topology defined by U = {x in U | x in U implies that x in (a,b) \subseteq U}. To do this... the smallest t can be is the trivial topology on R - {\emptyset, R} But, we required that everything in U be in t under f?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse I'm not sure if adding the additional condition that $f$ is an open map will make a difference For those who are not very familiar with this interest of mine, besides the maths, I am also interested in the notion of a "proof space", that is the set or class of all possible proofs of a given proposition and their relationship An element of a proof space is a proof, which consists of steps forming a path in this space For that I have a postulate that given two paths A and B in proof space with the same starting point and a proposition $\phi$: if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or $\phi$ is unprovable along $B$ under the current formal system Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
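A quick numerical experiment for the conjectured equivalence (not a proof; the parameter choices c = 3 and alpha = 1 below are only illustrative), printing the ratio of the two sums for growing K and working in log-space to avoid overflow:

import math

def log_binom(K, n):
    return math.lgamma(K + 1) - math.lgamma(n + 1) - math.lgamma(K - n + 1)

def log_sum_exp(logs):
    m = max(logs)
    return m + math.log(sum(math.exp(x - m) for x in logs))

def ratio(K, c, alpha):
    logz = -alpha * math.log(K)     # z_K = K^{-alpha}
    lhs = log_sum_exp([log_binom(K, n) + log_binom(K, n + c) + (n + c / 2) * logz
                       for n in range(K - c + 1)])
    rhs = log_sum_exp([2 * log_binom(K, n) + n * logz for n in range(K + 1)])
    return math.exp(lhs - rhs)

for K in (100, 400, 1600, 6400):
    print(K, ratio(K, c=3, alpha=1.0))

If the conjecture is right, the printed ratios should drift towards 1 as K grows.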
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by k; see this PSE post for details http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
I just wanted to clarify the difference between the Algebra and $\sigma$-algebra: Algebra: If $A_1, A_2 \ldots $ are in $\mathscr A$, then $\bigcup_{i = 1}^{n} A_i \in \mathscr A$ $\sigma$- Algebra: $A_1, A_2 \ldots $ are in $\mathscr A$, then $\bigcup_{i = 1}^{\infty} A_i \in \mathscr A$ Notice the difference is just between $n$ and $\infty$. Let's use this following as illustration: (1) Let $X$ be a set; let $A, B, C, D \subset X$ (2) Let $\mathscr A_1 = \{\emptyset, A, A^c, X\}$ and $\mathscr A_2 = \{\emptyset, B, B^c, X\}$ be $\sigma$-algebras of $X$ (3) Then $\mathscr A_1 \cup \mathscr A_2 = \{\emptyset, A, A^c, B, B^c, X\}$ (4) Here, $\mathscr A_1 \cup \mathscr A_2$ is algebra because $\emptyset \cup A \cup A^c \cup B \cup B^c \cup X = X$ (5) But $\mathscr A_1 \cup \mathscr A_2$ is not a $\sigma$-algebra because in order to qualify, any pick (or any length) of union of elements of $\mathscr A_1 \cup \mathscr A_2$ has to be in $\mathscr A_1 \cup \mathscr A_2$ also. (6) Here, since $A \cup B$ is not in $\mathscr A_1 \cup \mathscr A_2$, therefore $\mathscr A_1 \cup \mathscr A_2$ is not $\sigma$-algebra (7) However, if $A, B \subset X$ only, then $\mathscr A_1 \cup \mathscr A_2$ is $\sigma$-algebra since $A \cup B = X$ and $ X \in \mathscr A_1 \cup \mathscr A_2$. Please let me know if I am wrong, especially (5) to (7). Thank you very much for your time and effort.
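A quick brute-force check of points (5) and (6) for a concrete choice (my own toy instance: $X = \{1,2,3,4\}$, $A = \{1,2\}$, $B = \{1,3\}$), testing whether $\mathscr A_1 \cup \mathscr A_2$ is closed under pairwise unions:

X = frozenset({1, 2, 3, 4})
A = frozenset({1, 2})
B = frozenset({1, 3})
family = {frozenset(), A, X - A, B, X - B, X}        # the family A_1 union A_2
closed = all((U | V) in family for U in family for V in family)
print(closed)                      # closed under pairwise unions?
print(A | B, (A | B) in family)    # the union A u B discussed in (5) and (6)

Swapping in other choices of $A$ and $B$ (for example $B = A^c$) shows how the answer depends on the particular sets, which is what point (7) is getting at.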
Let us take the presheaf $\mathcal{F}$ of bounded real functions on the real line $\mathbb{R}$. Then for each $U\subset \mathbb{R}$ open we have $$U \mapsto \mathcal{F}(U) = \{ f\colon U \longrightarrow \mathbb{R} \mid \sup_U |f| < \infty \}$$It is clearly a presheaf. Now let's see the sheaf requirements. Fix $U\subset \mathbb{R}$ and an open covering $U=\cup_i U_i$. 1. For $s,t \in \mathcal{F}(U)$, we need that$$s|_{U_i} = t|_{U_i}, \forall i \Rightarrow s=t$$ 2. For a family $\{s_i \in \mathcal{F}(U_i)\}$ we need that$$s_i|_{U_i\cap U_j} = s_j|_{U_i\cap U_j}, \forall i,j \Rightarrow \exists s \in \mathcal{F}(U) \mid s|_{U_i}= s_i$$ It is not hard to see that 1. holds. Now for 2. take $U_i = (i-2,i+2)$ an open interval for each $i\in \mathbb{Z}$. We have $\mathbb{R} = \cup U_i$. Define$$s_i\colon U_i \longrightarrow \mathbb{R}, \, s_i(t) = t$$Then $\sup_{U_i} |s_i| = \max\{|i-2|,|i+2|\}$ and $s_i \in \mathcal{F}(U_i)$. Suppose that there exists $s\in\mathcal{F}(\mathbb{R})$ such that $s|_{U_i}= s_i$. Let $N \in \mathbb{Z}$ be such that $N> \sup_{\mathbb{R}} |s|$. Then we have a contradiction, since $$s(N) = s_N(N) = N > \sup_{\mathbb{R}} |s|.$$
Bolzano-Weierstrass Theorem/General Form Theorem Proof The proof of this theorem will be given as a series of lemmas that culminate in the actual theorem in the end. Unless otherwise stated, all real spaces occurring in the proofs are equipped with the euclidean metric/topology. Lemma 0: Suppose $S' \subseteq S \subseteq \R$. Then, any limit point of $S'$ is a limit point of $S$. Proof: Consider any limit point $l$ of $S'$ and fix an $\epsilon > 0$. Then, by definition: $\paren {\map {B_\epsilon} l \setminus \set l} \cap S' \ne \O$ Thus, there is a real $s_\epsilon$ in both $\map {B_\epsilon} l \setminus \set l$ and $S'$. But since $S' \subseteq S$, $s_\epsilon \in S'$ implies $s_\epsilon \in S$. So, in other words, $s_\epsilon$ is in both $\map {B_\epsilon} l \setminus \set l$ and $S$. That is: $\paren {\map {B_\epsilon} l \setminus \set l} \cap S \ne \O$. This is exactly what it means for $l$ to be a limit point of $S$. This trivial lemma is given purely for the sake of argumentative completeness. It is assumed implicitly in all the proofs below. $\Box$ Lemma 1: Suppose $S$ is a non-empty subset of the reals such that its supremum, $\map \sup s$, exists. If $\map \sup s \notin S$, then $\map \sup s$ is a limit point of $S$. Proof: Aiming for a contradiction, suppose $\map \sup s$ were not a limit point of $S$. So, by the negation of the definition of a limit point, there is an $\epsilon > 0$ such that: $\paren {\map {B_\epsilon} {\map \sup s} \setminus \set {\map \sup s} } \cap S = \O$ Since $\map \sup s \notin S$, adding back $\map \sup s$ to $\map {B_\epsilon} {\map \sup s} \setminus \set {\map \sup s}$ still gives an empty intersection with $S$. That is: $\map {B_\epsilon} {\map \sup s} \cap S = \openint {\map \sup s - \epsilon} {\map \sup s + \epsilon} \cap S = \O$ So, since $\openint {\map \sup s - \epsilon} {\map \sup s} \subset \openint {\map \sup s - \epsilon} {\map \sup s + \epsilon}$, we also have: $\openint {\map \sup s - \epsilon} {\map \sup s} \cap S = \O$ Now, because $\epsilon > 0$, $\openint {\map \sup s - \epsilon} {\map \sup s}$ is non-empty. So, there is a real $r$ such that $\map \sup s - \epsilon < r < \map \sup s$. This $r$ is an upper bound on $S$. To see this, note that for any $s \in S$, $s < \map \sup s$. Indeed, $s \le \map \sup s - \epsilon$ because otherwise, $\map \sup s - \epsilon < s < \map \sup s$ and $s$ would be in $\openint {\map \sup s - \epsilon} {\map \sup s}$, contradicting what we established earlier: that $\openint {\map \sup s - \epsilon} {\map \sup s}$ cannot have an element of $S$. Hence, we finally have $s \le \map \sup s - \epsilon < r < \map \sup s$, making $r$ an upper bound on $S$ that is smaller than $\map \sup s$. This contradicts the fact that $\map \sup s$ is the least upper bound of $S$, as guaranteed by the Continuum Property. $\Box$ Lemma 2: Suppose $S$ is a non-empty subset of the reals such that its infimum, $\map \inf s$, exists. If $\map \inf s \notin S$, then $\map \inf s$ is a limit point of $S$. Proof: The proof is entirely analogous to that of the first lemma. $\Box$ ($\tilde s$ is used to mean $\map \sup s$, $\underline s$ is used to mean $\map \inf s$ throughout.) Lemma 3: Every bounded, infinite subset $S$ of $\R$ has a limit point. Proof: As $S$ is bounded, it is certainly bounded above. Also, since it is infinite by hypothesis, it is of course non-empty. Hence, by the completeness axiom of the reals, $\tilde s_0 = \sup S$ exists as a real. Now, there are two cases: Case 1.0: $\tilde s_0 \notin S$: Then, by Lemma 1 above, $\tilde s_0$ is a limit point of $S$ and we are done. Case 2.0: $\tilde s_0 \in S$: Then, because $S$ is infinite, $S_1 = S \setminus \{\tilde s_0\}$ is non-empty. Of course, as $S_1 \subset S$, it is still bounded above because $S$ is.
Hence, $\tilde s_1 = \sup S_1$ exists, again, by the completeness axiom of the reals. So, yet again, we have two cases: Case 1.1:either $\tilde s_1 \notin S_1$, in which case we stop because we get a limit point of $S_1$ (and hence of $S$ as $S_1 \subset S$), Case 2.1:or $\tilde s_1 \in S_1$, in which case we continue our analysis with $\tilde s_2 = \sup S_1 \setminus \set {\tilde s_1} = \sup S \setminus \set {\tilde s_0, \tilde s_1}$. Continuing like this, we note that our analysis stops after a finite number of steps if and only if we ever reach a case of the form Case 1.k for some $k \in \N$. In this case, $\tilde s_k = \sup S_k \notin S_k$ and we use Lemma 1 to show that $\tilde s_k$ is a limit point of $S_k$ and, therefore, of $S$. Otherwise, the proof continues indefinitely if we keep getting cases of the form Case 2.k for all $k \in \N$. In that case, $\tilde s_k \in S_k$ and we get a sequence $\tilde S = \sequence {\tilde s_k}_{k \mathop \in \N}$ of reals with the following properties: Each $\tilde s_k$ is in $S$. This is because, as remarked earlier, the only way we get our sequence is if $\tilde s_k \in S_k$. But $S_k$ is either $S$ when $k = 0$ or $S \setminus \{\tilde s_0, \ldots , \tilde s_{k-1}\}$ when $k \geq 1$. In both cases, $S_k$ is a subset of $S$. From this fact, the claim easily follows. $\tilde s_k > \tilde s_{k+1}$. To see this, note that $\tilde s_{k+1} \in S_{k+1} = S \setminus \set {\tilde s_0, \ldots , \tilde s_k} = S_k \setminus \set{ \tilde s_k}$. So, firstly, $\tilde s_{k+1} \ne \tilde s_k$ and, secondly, because $\tilde s_k$ is by construction an upper bound on $S_k$ (and therefore on its subset $S_{k+1}$), we have $\tilde s_k \ge \tilde s_{k+1}$. Combining both these facts gives our present claim. Now, the first property says that the set of all the $\tilde s}_k$'s, which is $\tilde S$, is a subset of $S$. So, it is bounded because $S$ is. Then, certainly, it is also bounded below. Also, $\tilde S$ is obviously non-empty because it is infinite. Hence, one final application of the completeness axiom of the reals gives that $\underline{s} = \inf \tilde S$ exists as a real. Note that $\underline{s} \notin \tilde S$. Otherwise, if $\underline{s} = \tilde s_k$ for some $k \in \N$, by the second property of our sequence, we would have $\underline{s} > \tilde s_{k+1}$. This would contradict the fact that $\underline{s}$ is a lower bound on $\tilde S$. But then, by Lemma 2 above, $\underline{s}$ is a limit point of the set $\tilde S$ and, therefore, of its superset $S$. $\Box$ Before moving onto the proof of the main theorem, I skim over the elementary concept of projection that will be used in the proof. Fix positive integers $m, n$ where $m \leq n$. Then, for any set $X$, There is a function $\pi_{1, \ldots ,m}:X^n \to X^m$ such that $\pi_{1, \ldots ,m}(x_1, \ldots , x_m , \ldots , x_n) = (x_1, \ldots , x_m)$. Essentially, $\pi_{1, \ldots ,m}$ takes in a coordinate of $n$ elements of $X$ and simply outputs the first $m$ elements of that coordinate. There is a function $\pi_m:X^n \to X$ such that $\pi_m(x_1, \ldots , x_m , \ldots , x_n) = x_m$. Essentially, $\pi_m$ takes in a coordinate of $n$ elements of $X$ and outputs just the $m^\text{th}$ element of that coordinate. In general, for positive integers $i \leq j < n$, there is a function $\pi_{i, \ldots , j}:X^n \to X^{j - i + 1}$ such that $\pi_{i, \ldots ,j}(x_1, \ldots , x_i , \ldots , x_j , \ldots , x_n) = (x_i, \ldots , x_j)$. 
We begin with an easy lemma: Lemma 4: For positive integers $m < n$ and $S \subseteq X^n$, $S \subseteq \pi_{1, \ldots ,m}(S) \times \pi_{m+1, \ldots , n}(S)$. Proof: Fix any $(x_1, \ldots , x_m , x_{m+1} , \ldots , x_n) \in S$. Then, by the Definition:Image of Subset under Mapping, $(x_1, \ldots , x_m) \in \pi_{1, \ldots ,m}(S)$ because $\pi_{1, \ldots ,m}(x_1, \ldots , x_m , x_{m+1} , \ldots , x_n) = (x_1, \ldots , x_m)$. Similarly, $(x_{m+1} , \ldots , x_n) \in \pi_{m+1, \ldots , n}(S)$. So, by Definition:Cartesian Product, $(x_1, \ldots , x_m , x_{m+1} , \ldots , x_n) \in \pi_{1, \ldots ,m}(S) \times \pi_{m+1, \ldots , n}(S)$. Since $(x_1, \ldots , x_m , x_{m+1} , \ldots , x_n)$ was an arbitrary element of $S$, this means that $S \subseteq \pi_{1, \ldots ,m}(S) \times \pi_{m+1, \ldots , n}(S)$. $\Box$ Lemma 5: For positive integers $i \leq j \leq n$ and $S \subseteq \R^n$, if $S$ is a bounded space in $\R^n$, then so is $\pi_{i, \ldots ,j}(S)$ in $\R^{j - i + 1}$. Proof: For a contradiction, assume otherwise. So, by the negation of the definition of a bounded space, for every $K \in \R$, there are $x=(x_i, \ldots , x_j)$ and $y=(y_i, \ldots , y_j) $ in $\pi_{i, \ldots ,j}(S)$ such that $d(x,y) = |x - y| = \sqrt{\sum\limits_{s=i}^{j}(x_s -y_s)^2} > K$ where we get the formula $|x - y| = \sqrt{\sum\limits_{s=i}^{j}(x_s -y_s)^2}$ because we are working with the euclidean metric on all real spaces (after a suitable change of variables in the summation). Now, by definition of the image set $\pi_{i, \ldots ,j}(S)$, there are points $x' = (x_1, \ldots , x_i, \ldots , x_j , \ldots , x_n)$ and $y' = (y_1, \ldots , y_i, \ldots , y_j , \ldots , y_n)$ in $S$ from which $x$ and $y$ originated as coordinate components. Therefore, $d(x',y') = |x'-y'| = \sqrt{\sum\limits_{s=1}^{n}(x_s -y_s)^2} \geq \sqrt{\sum\limits_{s=i}^{j}(x_s -y_s)^2} > K$ contradicting the fact that $S$ is a bounded space. $\Box$ Lemma 6: For any function $f: X \to Y$ and subset $S \subseteq X$, if $S$ is infinite and $f(S)$ is finite, then there exists some $y \in f(S)$ such that $f^{-1}(y) \cap S$ is infinite. Here, $f^{-1}(y)$ is the preimage of the element $y$. Proof: If there weren't such an element in $f(S)$, then for all $y \in f(S)$, $f^{-1}(y) \cap S$ would be finite. Also, since $f(S)$ is finite, we may list its elements: $y_1, \ldots , y_n$ (there must be at least one image element as $S$ is non-empty). Then, by repeated applications of Union of Finite Sets is Finite, we get that: $\bigcup\limits_{y \in f(S)}(f^{-1}(y) \cap S) = (f^{-1}(y_1) \cap S) \cup \cdots \cup (f^{-1}(y_n) \cap S)$ must be finite. But notice that: \(\displaystyle \bigcup\limits_{y \in f(S)}(f^{-1}(y) \cap S)\) \(=\) \(\displaystyle \left[\bigcup\limits_{y \in f(S)}f^{-1}(y)\right] \cap S\) Intersection Distributes over Union \(\displaystyle \) \(=\) \(\displaystyle f^{-1}(f(S)) \cap S\) Preimage of Union under Mapping/Family of Sets \(\displaystyle \) \(=\) \(\displaystyle S\) Subset of Domain is Subset of Preimage of Image This contradicts the fact that $S$ is infinite. $\Box$ Proof: We proceed by induction on the positive integer $n$: (Base Case) When $n = 1$, the theorem is just Lemma 3 above which has been adequately proven. (Inductive Step) Suppose that the theorem is true for some positive integer $n$. We must show that it is also true for the positive integer $n+1$. So, fix any infinite, bounded subset $S$ of $\R^{n + 1}$. 
Consider the image of $S$ under the projection functions $\pi_{1, \ldots , n}$ and $\pi_{n+1}$: $S_{1, \ldots , n} = \pi_{1, \ldots , n}(S)$ and $S_{n+1} = \pi_{n+1}(S)$. Then, Because $S$ is a bounded space of $\R^{n + 1}$, $S_{1, \ldots , n}$ and $S_{n+1}$ must be bounded spaces of $\R^n$ and $\R$ respectively by Lemma 5. Also, $S \subseteq S_{1, \ldots , n} \times S_{n+1}$ by Lemma 4. So, by the fact that $S$ is infinite and Subset of Finite Set is Finite, $S_{1, \ldots , n} \times S_{n+1}$ is infinite. But then, by Product of Finite Sets is Finite, either $S_{1, \ldots , n}$ or $S_{n+1}$ must be infinite. Let us analyze the case that $S_{1, \ldots , n}$ is infinite first. Then, $S_{1, \ldots , n}$ is an infinite bounded space of $\R^n$. So, by the induction hypothesis, it has a limit point $l_{1, \ldots , n}=(l_1, \ldots , l_n)$. By definition, for every $\epsilon > 0$, there is an $s_\epsilon \in (B_\epsilon(l) \setminus \{l\}) \cap S_{1, \ldots , n}$. To this $s_\epsilon$, which is in $S_{1, \ldots , n}$, there corresponds the set of all $(n+1)^\text{th}$ coordinates of $S$-elements that have $s_\epsilon=(s_{\epsilon,1}, \ldots , s_{\epsilon, n})$ as their first $n$ coordinates: $\tilde S_{\epsilon, n+1} = \pi_{n+1}(\pi_{1, \ldots , n}^{-1}(s_\epsilon) \cap S) \subseteq S_{n+1}$ and collect every element of such sets in one set: $\tilde S_{n+1} = \bigcup\limits_{\epsilon > 0} \tilde S_{\epsilon, n+1} = \pi_{n+1}(\left[\bigcup\limits_{\epsilon > 0}\pi_{1, \ldots , n}^{-1}(s_\epsilon)\right] \cap S) \subseteq S_{n+1}$. Now, if $\tilde S_{n+1}$ is $\blacksquare$ Also known as Some sources refer to this result as the Weierstrass-Bolzano theorem. Source of Name
Note The constant acceleration equations apply from the first instant in time after the projectile leaves the launcher to the last instant in time before the projectile hits something, such as the ground. Once the projectile makes contact with the ground, the ground exerts a huge force on the projectile causing a drastic change in the acceleration of the projectile over a very short period of time until, in the case of a projectile that doesn’t bounce, both the acceleration and the velocity become zero. To take this zero value of velocity and plug it into constant acceleration equations that are devoid of post-ground-contact acceleration information is a big mistake. In fact, at that last instant in time during which the constant acceleration equations still apply, when the projectile is at ground level but has not yet made contact with the ground, (assuming that ground level is the lowest elevation achieved by the projectile) the magnitude of the velocity of the projectile is at its biggest value, as far from zero as it ever gets! Consider an object in freefall with a non-zero initial velocity directed either horizontally forward; or both forward and vertically (either upward or downward). The object will move forward, and upward or downward—perhaps upward and then downward—while continuing to move forward. In all cases of freefall, the motion of the object (typically referred to as the projectile when freefall is under consideration) all takes place within a single vertical plane. We can define that plane to be the \(x\)-\(y\) plane by defining the forward direction to be the \(x\) direction and the upward direction to be the \(y\) direction. One of the interesting things about projectile motion is that the horizontal motion is independent of the vertical motion. Recall that in freefall, an object continually experiences a downward acceleration of \(9.80\dfrac{m}{s^2}\) but has no horizontal acceleration. This means that if you fire a projectile so that it is approaching a wall at a certain speed, it will continue to get closer to the wall at that speed, independently of whether it is also moving upward and/or downward as it approaches the wall. An interesting consequence of the independence of the vertical and horizontal motion is the fact that, neglecting air resistance, if you fire a bullet horizontally from, say, shoulder height, over flat level ground, and at the instant the bullet emerges from the gun, you drop a second bullet from the same height, the two bullets will hit the ground at the same time. The forward motion of the fired bullet has no effect on its vertical motion. The most common mistake that folks make in solving projectile motion problems is combining the \(x\) and \(y\) motion in one standard constant-acceleration equation. Don’t do that. Treat the \(x\)-motion and the \(y\)-motion separately. In solving projectile motion problems, we take advantage of the independence of the horizontal \((x)\) motion and the vertical \((y)\) motion by treating them separately. The one thing that is common to both the \(x\) motion and the \(y\) motion is the time. The key to the solution of many projectile motion problems is finding the total time of “flight.” For example, consider the following example. Example \(\PageIndex{1}\) A projectile is launched with a velocity of \(11 m/s\) at an angle of \(28^\circ\) above the horizontal over flat level ground from a height of \(2.0 m\) above ground level. How far forward does it go before hitting the ground? 
(Assume that air resistance is negligible.) Solution Before getting started, we better clearly establish what we are being asked to find. We define the forward direction as the \(x\) direction so what we are looking for is a value of \(x\). More specifically, we are looking for the distance, measured along the ground, from that point on the ground directly below the point at which the projectile leaves the launcher, to the point on the ground where the projectile hits. This distance is known as the range of the projectile. It is also known as the range of the launcher for the given angle of launch and the downrange distance traveled by the projectile. Okay, now that we know what we’re solving for, let’s get started. An initial velocity of \(11 m/s\) at \(28^\circ\) above the horizontal, eh? Uh oh! We’ve got a dilemma. The key to solving projectile motion problems is to treat the \(x\) motion and the \(y\) motion separately. But we are given an initial velocity \(v_0\) which is a mix of the two of them. We have no choice but to break up the initial velocity into its \(x\) and \(y\) components. Now we’re ready to get started. We’ll begin with a sketch which defines our coordinate system, thus establishing the origin and the positive directions for \(x\) and \(y\). Recall that in projectile motion problems, we treat the \(x\) and \(y\) motion separately. Let’s start with the \(x\) motion. It is the easier part because there is no acceleration. x motion \[x=V_{0x} t\label{13-1}\] Note that for the \(x\)-motion, we start with the constant acceleration equation that gives the position as a function of time. (Imagine having started a stopwatch at the instant the projectile lost contact with the launcher. The time variable t represents the stopwatch reading.) As you can see, because the acceleration in the \(x\) direction is zero, the equation quickly simplifies to \(x=V_{0x}t\). We are “stuck” here because we have two unknowns, \(x\) and \(t\), and only one equation. It's time to turn to the \(y\) motion. It should be evident that it is the y motion that yields the time: the projectile starts off at a known elevation \((y = 2.0 m)\) and the projectile motion ends when the projectile reaches another known elevation, namely, \(y = 0\). y-motion \[y=y_0+V_{0y}t+\dfrac{1}{2}a_yt^2 \label{13-2}\] This equation tells us that the \(y\) value at any time \(t\) is the initial y value plus some other terms that depend on \(t\). It’s valid for any time \(t\), starting at the launch time \(t = 0\), while the object is in projectile motion. In particular, it is applicable to that special time \(t\), the last instant before the object makes contact with the ground, that instant that we are most interested in, the time when \(y = 0\). What we can do, is to plug \(0\) in for \(y\), and solve for that special time \(t\) that, when plugged into Equation \(\ref{13-2}\), makes \(y\) be \(0\). When we rewrite Equation \(\ref{13-2}\) with y set to 0, the symbol \(t\) takes on a new meaning. Instead of being a variable, it becomes a special time, the time that makes the \(y\) in the actual Equation \(\ref{13-2}\) \((y=y_0+V_{0y}t+\dfrac{1}{2}a_yt^2)\) zero.
\[0=y_0+V_{0y}t_{\ast}+\dfrac{1}{2}a_yt_{\ast}^2 \label{13-3}\] To emphasize that the time in Equation \(\ref{13-3}\) is a particular instant in time rather than the variable time since launch, I have written it as \(t_{\ast}\) to be read “\(t\) star.” Everything in Equation \(\ref{13-3}\) is a given except \(t_{\ast}\) so we can solve Equation \(\ref{13-3}\) for \(t_{\ast}\). Recognizing that Equation \(\ref{13-3}\) is a quadratic equation in \(t_{\ast}\) we first rewrite it in the form of the standard quadratic equation \(ax^2+bx+c=0\). This yields: \[\dfrac{1}{2}a_yt_{\ast}^2+V_{0y}t_{\ast}+y_0=0\] Then we use the quadratic formula \(x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}\) which for the case at hand appears as: \[t_{\ast}=\dfrac{-V_{0y}\pm\sqrt{V_{0y}^2-4(\dfrac{1}{2}a_y)y_0}}{2(\dfrac{1}{2}a_y)}\] which simplifies to \[t_{\ast}=\dfrac{-V_{0y}\pm\sqrt{V_{0y}^2-2a_yy_0}}{a_y}\] Substituting values with units yields: \[t_{\ast}=\dfrac{-5.16\dfrac{m}{s}\pm\sqrt{(5.16\dfrac{m}{s})^2-2(-9.80\dfrac{m}{s^2})2.0m}}{-9.80\dfrac{m}{s^2}}\] which evaluates to \(t_{\ast}=-0.301s\) and \(t_{\ast}=1.36s\) We discard the negative answer because we know that the projectile hits the ground after the launch, not before the launch. Recall that \(t_{\ast}\) is the stopwatch reading when the projectile hits the ground. Note that the whole time it has been moving up and down, the projectile has been moving forward in accord with Equation \(\ref{13-1}\), \(x=V_{0x}t\). At this point, all we have to do is plug \(t_{\ast}=1.36s\) into Equation \(\ref{13-1}\) and evaluate: \[\begin{align*} x&=V_{0x}t_{\ast} \\[5pt] &=9.71\dfrac{m}{s}(1.36s) \\[5pt] &=13m \end{align*}\] This is the answer. The projectile travels \(13 m\) forward before it hits the ground.
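A short numerical re-check of this worked example (a sketch; the script and its variable names are mine, not part of the original text), recomputing the velocity components, \(t_{\ast}\) and the range directly from the given values:

import math

v0, theta, y0, g = 11.0, math.radians(28.0), 2.0, 9.80
v0x, v0y = v0 * math.cos(theta), v0 * math.sin(theta)
ay = -g

# solve 0 = y0 + v0y*t + 0.5*ay*t^2 and keep the positive root
disc = math.sqrt(v0y**2 - 2.0 * ay * y0)
t_star = (-v0y - disc) / ay            # the positive root, since ay < 0
x_range = v0x * t_star

print(f"v0x = {v0x:.2f} m/s, v0y = {v0y:.2f} m/s")
print(f"t* = {t_star:.2f} s, range = {x_range:.1f} m")

Run as-is it reports a time of flight of about 1.36 s and a range of about 13 m, in agreement with the result above.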
Consider the 1D Poisson equation $$\nabla^2 u = f.$$ Using the finite difference method on cell corner data and a uniform grid with ghost points, I think we can write the system of equations with Neumann BCs as: $$Au = f-Au_{BC},$$ where $$ \begin{aligned} A &= \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc} -2 & 2 & & & & & \\ 1 & -2 & 1 & & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & & 1 & -2 & 1 \\ & & & & & 2 & -2 \\ \end{array} \right] \\ Au_{BC} &= \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc} 2 \hat{n} \theta \Delta x \\ 0 \\ \\ \vdots \\ \\ 0 \\ 2 \hat{n} \theta \Delta x \\ \end{array} \right] \end{aligned} $$ And $\hat{n},\theta$ are the outward facing normal and the prescribed slope of the derivative at the boundaries. Notably, the system is singular, which can be addressed by removing the mean of the RHS. Question I know that $A$ must be symmetric, positive definite. I've done some tests without multiplying the first and last rows by 0.5 and they seem to work fine, so my question is: Must the first and last rows of the left and right hand side be multiplied by 0.5? In other words, are there cases where it will not work without the multiplication by 0.5? Notes I imagine that both sides of the equation would be balanced the same with and without this multiplication. As a final note, I know that the code / algorithm may be more clear with the multiplication, since the equation explicitly satisfies symmetry, but I'm more interested in whether the multiplication is necessary or not. Any help is greatly appreciated.
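A small numerical experiment along these lines (a sketch, not a definitive answer; the manufactured solution, helper names and parameters are mine). It builds the matrix above on \([0,1]\) for \(u(x)=\cos(\pi x)\), so that \(\theta = 0\) at both ends and \(f = -\pi^2\cos(\pi x)\), removes the mean of the right-hand side, solves in the least-squares sense, and prints the maximum error with and without the factor of 0.5 on the first and last rows:

import numpy as np

def max_error(n, scale_ends):
    dx = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    A[0, 1] = 2.0 / dx**2        # ghost-point Neumann rows, as in the question
    A[-1, -2] = 2.0 / dx**2
    b = -np.pi**2 * np.cos(np.pi * x)   # f; the Neumann data theta is zero here
    if scale_ends:
        A[0, :] *= 0.5; b[0] *= 0.5
        A[-1, :] *= 0.5; b[-1] *= 0.5
    b = b - b.mean()                    # remove the mean of the RHS
    u = np.linalg.lstsq(A, b, rcond=None)[0]
    u -= u.mean()                       # fix the arbitrary additive constant
    exact = np.cos(np.pi * x)
    exact -= exact.mean()
    return np.abs(u - exact).max()

for scaled in (False, True):
    print("scaled ends:", scaled, [max_error(n, scaled) for n in (21, 41, 81)])

Comparing the two rows of output, and how the errors behave as the grid is refined, is one concrete way to probe whether the 0.5 scaling matters in practice.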
An important concept in plasma physics is the Debye length, which describes the screening of a charge's electrostatic potential due to the net effect of the interactions it undergoes with the other mobile charges (electrons and ions) in the system. It can be shown that, given a set of reasonable assumptions about the behaviour of charges in the plasma, the electric potential due to a "test charge", $q_\mathrm{T}$ is given by$$\phi = \frac{q_\mathrm{T}}{4\pi\epsilon_0 r}\exp\left(-\frac{r}{\lambda_\mathrm{D}}\right),$$where the electron Debye length,$$\lambda_\mathrm{D} = \sqrt{\frac{\epsilon_0 T_e}{e^2n_0}},$$for an electron temperature $T_e$ expressed as an energy (i.e. $T_e = k_\mathrm{B}T_e'$ where $T_e'$ is in K) and number density $n_0$. Rigorous derivations, starting from Gauss' Law and solving the resulting Poisson equation with a Green's function are given elsewhere (e.g. Section 7.2.2. in J. P. Freidberg, Plasma Physics and Fusion Energy, CUP (2008)). The following Python code plots the shielded and unshielded Coulomb potential due to a point test charge $q_\mathrm{T} = +e$, assuming an electron temperature and density typical of a tokamak magnetic confinement nuclear fusion device.

import numpy as np
from scipy.constants import k as kB, epsilon_0, e
from matplotlib import rc
import matplotlib.pyplot as plt

rc('font', **{'family': 'serif', 'serif': ['Computer Modern'], 'size': 16})
rc('text', usetex=True)
# We need the following so that the legend labels are vertically centred on
# their indicator lines.
rc('text.latex', preview=True)

def calc_debye_length(Te, n0):
    """Return the Debye length for a plasma characterised by Te, n0.

    The electron temperature Te should be given in eV and the density,
    n0, in m-3. The Debye length is returned in m.

    """

    return np.sqrt(epsilon_0 * Te / e / n0)

def calc_unscreened_potential(r, qT):
    return qT * e / 4 / np.pi / epsilon_0 / r

def calc_e_potential(r, lam_De, qT):
    return calc_unscreened_potential(r, qT) * np.exp(-r / lam_De)

# plasma electron temperature (eV) and density (m-3) for a typical tokamak.
Te, n0 = 1.e8 * kB / e, 1.e20
lam_De = calc_debye_length(Te, n0)
print(lam_De)

# range of distances to plot phi for, in m.
rmin = lam_De / 10
rmax = lam_De * 5
r = np.linspace(rmin, rmax, 100)

qT = 1
phi_unscreened = calc_unscreened_potential(r, qT)
phi = calc_e_potential(r, lam_De, qT)

# Plot the figure. Apologies for the ugly and repetitive unit conversions from
# m to µm and from V to mV.
fig, ax = plt.subplots()
ax.plot(r*1.e6, phi_unscreened * 1000,
        label=r'Unscreened: $\phi = \frac{e}{4\pi\epsilon_0 r}$')
ax.plot(r*1.e6, phi * 1000,
        label=r'Screened: $\phi = \frac{e}{4\pi\epsilon_0 r}'
              r'e^{-r/\lambda_\mathrm{D}}$')
ax.axvline(lam_De*1.e6, ls='--', c='k')
ax.annotate(xy=(lam_De*1.1*1.e6, max(phi_unscreened)/2 * 1000),
            s=r'$\lambda_\mathrm{D} = %.1f \mathrm{\mu m}$' % (lam_De*1.e6))
ax.legend()
ax.set_xlabel(r'$r/\mathrm{\mu m}$')
ax.set_ylabel(r'$\phi/\mathrm{mV}$')
plt.savefig('debye_length.png')
plt.show()
Figure 4 shows the system frequency response to a generator trip at different levels of system inertia. It can be seen that the rate of change of the frequency decline increases as the system inertia is decreased. Furthermore, the minimum frequency that the system falls to (called the frequency nadir) is also lower as system inertia is decreased.
Definition:Free Group on Set Definition Let $X$ be a set. The free group on $X$ is a pair $(F, \iota)$, where $F$ is a group and $\iota : X \to F$ is a mapping, that can be defined as follows: Definition 1: by universal property For every $X$-pointed group $(G, \kappa)$ there exists a unique group homomorphism $\phi : F \to G$ such that $\phi \circ \iota = \kappa$, that is, a morphism of pointed groups $F \to G$. Definition 2: As the group of reduced group words The free group on $X$ is the pair $(F, \iota)$ where:
Recently, I asked a question on Math SE. No response yet. This question is related to that question, but with more technical details oriented toward computer science. Given two DFAs $A = (Q, \Sigma, \delta, q_1, F_1)$ and $B = (Q, \Sigma, \delta, q_2, F_2)$, the set of states, the input alphabet and the transition function of $A$ and $B$ are the same, while the initial states and the final (accepting) states may differ. Let $L_1$ and $L_2$ be the languages accepted by $A$ and $B$, respectively. There are four cases: $q_1 = q_2$ and $F_1 = F_2$. $q_1 \neq q_2$ and $F_1 = F_2$. $q_1 = q_2$ and $F_1 \neq F_2$. $q_1 \neq q_2$ and $F_1 \neq F_2$. My question is: What are the differences between $L_1$ and $L_2$ in cases 2, 3 and 4? I have a more specific question along this line. The transition monoid of an automaton is the set of all functions on the set of states induced by input strings. See the page for more details. The transition monoid can be regarded as a monoid acting on the set of states. See this Wiki page for more details. In much of the literature, an automaton is called strongly connected when the monoid action is transitive, i.e. for every pair of states there is at least one transition (input string) leading from one state to the other. If $A$ and $B$ are strongly connected automata, what are the differences between $L_1$ and $L_2$ in cases 2, 3 and 4 above? Is there any literature discussing these issues in detail? I have searched many books and articles and found nothing helpful so far. I believe I don't have the appropriate key words yet. Thus I am seeking help. Any pointers/references will be appreciated very much.
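Not an answer to the theory question, but here is a small Python sketch of the setup (the example automaton, its transition table and the helper names are made up): two DFAs share the same transition function but differ in start and accepting states, and their languages are compared on all strings up to a fixed length.

```python
from itertools import product

# Shared structure: states, alphabet, transition function (a hypothetical 3-state example)
states = {0, 1, 2}
alphabet = "ab"
delta = {(0, "a"): 1, (0, "b"): 0,
         (1, "a"): 2, (1, "b"): 0,
         (2, "a"): 2, (2, "b"): 1}

def accepts(start, finals, word):
    q = start
    for ch in word:
        q = delta[(q, ch)]
    return q in finals

def language_up_to(start, finals, max_len):
    """All accepted strings of length <= max_len (a finite approximation of L)."""
    words = [""] + ["".join(p) for n in range(1, max_len + 1)
                    for p in product(alphabet, repeat=n)]
    return {w for w in words if accepts(start, finals, w)}

# Case 2 of the question: same accepting states, different initial states.
L1 = language_up_to(start=0, finals={2}, max_len=5)
L2 = language_up_to(start=1, finals={2}, max_len=5)
print(sorted(L1 - L2), sorted(L2 - L1))  # inspect how the two languages differ
```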
Conditions for C^1 Smooth Solution of Euler's Equation to have Second Derivative Theorem Let $y$ be a real function of $x$. Let $y$ have a continuous first derivative and satisfy Euler's equation: $F_y - \dfrac \d {\d x} F_{y'} = 0$ Then $\map y x$ has continuous second derivatives wherever: $F_{y' y'} \sqbrk {x, \map y x, \map {y'} x} \ne 0$ Proof Consider the difference \(\displaystyle \Delta F_{y'}\) \(=\) \(\displaystyle F_{y'} \sqbrk {x + \Delta x, y + \Delta y, y' + \Delta y'} - F_{y'} \sqbrk {x, y, y'}\) \(\displaystyle \) \(=\) \(\displaystyle \Delta x \overline F_{y' x} + \Delta y \overline F_{y' y} + \Delta y' \overline F_{y' y'}\) by the Multivariate Mean Value Theorem, where the overbar indicates that the derivatives are evaluated along certain intermediate curves. Divide $\Delta F_{y'}$ by $\Delta x$ and consider the limit $\Delta x \to 0$: $\displaystyle \lim_{\Delta x \to 0} \frac {\Delta F_{y'} } {\Delta x} = \lim_{\Delta x \to 0} \paren {\overline F_{y' x} + \frac {\Delta y} {\Delta x} \overline F_{y' y} + \frac {\Delta y'} {\Delta x} \overline F_{y' y'} }$ The limit on the left hand side exists, because by Euler's equation $\dfrac \d {\d x} F_{y'} = F_y$. Existence of the second derivatives of $F$ and their continuity are guaranteed by the conditions of the theorem: $\displaystyle \lim_{\Delta x \to 0} \overline F_{y' x} = F_{y' x}$, $\displaystyle \lim_{\Delta x \to 0} \overline F_{y' y} = F_{y' y}$, $\displaystyle \lim_{\Delta x \to 0} \overline F_{y' y'} = F_{y' y'}$ Similarly: $\displaystyle \lim_{\Delta x \to 0} \frac {\Delta y} {\Delta x} = y'$ By the Product Rule for Limits of Functions, it follows that the remaining limit $\displaystyle \lim_{\Delta x \to 0} \frac {\Delta y'} {\Delta x} = y''$ exists wherever $F_{y' y'} \ne 0$. Euler's equation and continuity of the necessary derivatives of $F$ and $y$ imply that $y''$ is continuous.
Zeta-function method for regularization (zeta-function regularization) Regularization and renormalization procedures are essential issues in contemporary physics — without which it would simply not exist, at least in the form known today (2000). They are also essential in supersymmetry calculations. Among the different methods, zeta-function regularization — which is obtained by analytic continuation in the complex plane of the zeta-function of the relevant physical operator in each case — might well be the most beautiful of all. Use of this method yields, for instance, the vacuum energy corresponding to a quantum physical system (with constraints of any kind, in principle). Assuming the corresponding Hamiltonian operator, $H$, has a spectral decomposition of the form (think, as simplest case, of a quantum harmonic oscillator): $H \varphi_n = \lambda_n \varphi_n$, with $n \in I$ some set of indices (which can be discrete, continuous, mixed, multiple, etc.), then the quantum vacuum energy is obtained as follows [a5], [a6]: $E_0 = \frac{1}{2} \sum_{n \in I} \lambda_n = \frac{1}{2} \zeta_H(-1)$ (in units where $\hbar = 1$), where $\zeta_H(s) = \sum_{n \in I} \lambda_n^{-s}$ is the zeta-function corresponding to the operator $H$. The formal sum over the eigenvalues is usually ill-defined, and the last step involves analytic continuation, inherent to the definition of the zeta-function itself. These mathematically simple-looking relations involve very deep physical concepts (no wonder that understanding them took several decades in the recent history of quantum field theory, QFT). The zeta-function method is unchallenged at the one-loop level, where it is rigorously defined and where many calculations of QFT reduce basically (from a mathematical point of view) to the computation of determinants of elliptic pseudo-differential operators ($\Psi$DOs, cf. also Pseudo-differential operator) [a2]. It is thus no surprise that the preferred definition of determinant for such operators is obtained through the corresponding zeta-function. When one comes to specific calculations, the zeta-function regularization method relies on the existence of simple formulas for obtaining the analytic continuation above. These consist of the reflection formula of the corresponding zeta-function in each case, together with some other fundamental expressions, such as the Jacobi theta-function identity, Poisson's resummation formula and the famous Chowla–Selberg formula [a2]. However, some of these formulas are restricted to very specific zeta-functions, and it often turned out that for some physically important cases the corresponding formulas did not exist in the literature. This has required a painful process (it has taken over a decade already) of generalization of previous results and derivation of new expressions of this kind [a1], [a5], [a6]. Zeta regularization for integrals The zeta-function regularization may be extended in order to include divergent integrals \begin{equation} \int_{a}^{\infty}x^{m}dx \qquad m >0 \end{equation} by using the recurrence equation \begin{equation} \begin{array}{l} \int\nolimits_{a}^{\infty }x^{m-s} dx =\frac{m-s}{2} \int\nolimits_{a}^{\infty }x^{m-1-s} dx +\zeta (s-m)-\sum\limits_{i=1}^{a}i^{m-s} +a^{m-s} \\ -\sum\limits_{r=1}^{\infty }\frac{B_{2r} \Gamma (m-s+1)}{(2r)!\Gamma (m-2r+2-s)} (m-2r+1-s)\int\nolimits_{a}^{\infty }x^{m-2r-s} dx \end{array} \end{equation} This is the natural extension to integrals of the zeta-regularization algorithm. The recurrence terminates, because for \begin{equation} m-2r < -1 \qquad \int_{a}^{\infty}x^{m-2r}\,dx = -\frac{a^{m-2r+1}}{m-2r+1} \end{equation} the integrals inside the recurrence equation are convergent.
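As a quick numerical illustration of the analytic-continuation step (this example is not part of the original entry): for the simplest spectrum $\lambda_n = n$ the operator zeta-function is the Riemann zeta-function, whose continuation assigns the finite value $\zeta(-1) = -1/12$ to the formally divergent sum of eigenvalues. The sketch below assumes the mpmath Python library is available.

```python
# Sketch: the "sum over eigenvalues" 1 + 2 + 3 + ... is divergent, but the
# zeta-function zeta_H(s) = sum(n**-s) converges for Re(s) > 1 and its analytic
# continuation assigns the regularized value zeta(-1) = -1/12 to the formal sum.
from mpmath import zeta, nsum, inf

print(zeta(2))                          # convergent region: pi**2 / 6 ~ 1.6449
print(nsum(lambda n: n**-2, [1, inf]))  # matches the direct sum
print(zeta(-1))                         # analytic continuation: -1/12
```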
References
[a1] A.A. Bytsenko, G. Cognola, L. Vanzo, S. Zerbini, "Quantum fields and extended objects in space-times with constant curvature spatial section", Phys. Rept. 266 (1996) pp. 1–126
[a2] E. Elizalde, "Multidimensional extension of the generalized Chowla–Selberg formula", Commun. Math. Phys. 198 (1998) pp. 83–95
[a3] S.W. Hawking, "Zeta function regularization of path integrals in curved space time", Commun. Math. Phys. 55 (1977) pp. 133–148
[a4] M. Nakahara, "Geometry, topology, and physics", Inst. Phys. (1995) pp. 7–8
[a5] E. Elizalde, S.D. Odintsov, A. Romeo, A.A. Bytsenko, S. Zerbini, "Zeta regularization techniques with applications", World Sci. (1994)
[a6] E. Elizalde, "Ten physical applications of spectral zeta functions", Springer (1995)
Jose Javier Garcia, "The Application of Zeta Regularization Method to the Calculation of Certain Divergent Series and Integrals", Prespacetime Journal, Vol. 4, No. 3. http://prespacetime.com/index.php/pst/article/view/498
Hello, I've never ventured into char before but cfr suggested that I ask in here about a better name for the quiz package that I am getting ready to submit to ctan (tex.stackexchange.com/questions/393309/…). Is something like latex2quiz too audacious? Also, is anyone able to answer my questions about submitting to ctan, in particular about the format of the zip file and putting a configuration file in $TEXMFLOCAL/scripts/mathquiz/mathquizrc Thanks. I'll email first but it sounds like a flat file with a TDS included in the right approach. (There are about 10 files for the package proper and the rest are for the documentation -- all of the images in the manual are auto-generated from "example" source files. The zip file is also auto generated so there's no packaging overhead...) @Bubaya I think luatex has a command to force “cramped style”, which might solve the problem. Alternatively, you can lower the exponent a bit with f^{\raisebox{-1pt}{$\scriptstyle(m)$}} (modify the -1pt if need be). @Bubaya (gotta go now, no time for followups on this one …) @egreg @DavidCarlisle I already tried to avoid ascenders. Consider this MWE: \documentclass[10pt]{scrartcl}\usepackage{lmodern}\usepackage{amsfonts}\begin{document}\noindentIf all indices are even, then all $\gamma_{i,i\pm1}=1$.In this case the $\partial$-elementary symmetric polynomialsspecialise to those from at $\gamma_{i,i\pm1}=1$,which we recognise at the ordinary elementary symmetric polynomials $\varepsilon^{(n)}_m$.The induction formula from indeed gives\end{document} @PauloCereda -- okay. poke away. (by the way, do you know anything about glossaries? i'm having trouble forcing a "glossary" that is really an index, and should have been entered that way, into the required series style.) @JosephWright I'd forgotten all about it but every couple of months it sends me an email saying I'm missing out. Oddly enough facebook and linked in do the same, as did research gate before I spam filtered RG:-) @DavidCarlisle Regarding github.com/ho-tex/hyperref/issues/37, do you think that \textNFSSnoboundary would be okay as name? I don't want to use the suggested \textPUnoboundary as there is a similar definition in pdfx/l8uenc.def. And textnoboundary isn't imho good either, as it is more or less only an internal definition and not meant for users. @UlrikeFischer I think it should be OK to use @, I just looked at puenc.def and for example \DeclareTextCompositeCommand{\b}{PU}{\@empty}{\textmacronbelow}% so @ needs to be safe @UlrikeFischer that said I'm not sure it needs to be an encoding specific command, if it is only used as \let\noboundary\zzznoboundary when you know the PU encoding is going to be in force, it could just be \def\zzznoboundary{..} couldn't it? @DavidCarlisle But puarenc.def is actually only an extension of puenc.def, so it is quite possible to do \usepackage[unicode]{hyperref}\input{puarenc.def}. And while I used a lot @ in the chess encodings, since I saw you do \input{tuenc.def} in an example I'm not sure if it was a good idea ... @JosephWright it seems to be the day for merge commits in pull requests. 
Does github's "squash and merge" make it all into a single commit anyway so the multiple commits in the PR don't matter, or should I be doing the cherry picking stuff (not that the git history is so important here)? github.com/ho-tex/hyperref/pull/45 (@UlrikeFischer) @JosephWright I really think I should drop all the generation of README and ChangeLog in html and pdf versions; it failed there as the xslt is version 1 and I've just upgraded to a version 3 engine, and it's dropped 1.0 compatibility :-)
Assume a random sample $X_1, \ldots, X_n$ from a normal distribution with mean $\mu$ and variance $\sigma^2$. How do we know the following estimator is unbiased, but inconsistent? When an estimator is consistent, the sampling distribution of the estimator converges to the true parameter value being estimated as the sample size increases. Picking a fixed number of observations from the sample and calculating their average always gives you an unbiased estimate; even if you only pick the first (i.i.d.) observation from the data, it is an unbiased estimator. To ensure that the estimator is consistent, you need $P(|\hat{\mu}-\mu|>\epsilon) \to 0$ as the sample size goes to infinity, for every $\epsilon > 0$. Here, notice that since you are using only the first 30 observations from the data, $\hat{\mu} \sim N(\mu,\frac{\sigma^2}{30})$ even as the sample size goes to infinity. Given that the normal distribution has full support over the real line, for any positive $\epsilon$ you choose this probability does not converge to 0. Hence the estimator is not consistent. If an estimator is unbiased and its variance converges to 0, then the estimator is also consistent; conversely, however, one can construct counterexamples of consistent estimators whose variance does not converge to 0. So we need to think about this question from the definition of consistency and convergence in probability.
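A small simulation makes the distinction visible (a sketch; the "first 30 observations" estimator and the normal parameters below are just illustrative): the fixed-size average stays noisy no matter how large the sample, while the full-sample average concentrates around $\mu$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 5.0

for n in (100, 10_000, 1_000_000):
    x = rng.normal(mu, sigma, size=n)
    first30 = x[:30].mean()      # unbiased, but its distribution never tightens
    full = x.mean()              # unbiased and consistent
    print(f"n={n:>9}  mean of first 30 = {first30:6.3f}   mean of all n = {full:6.3f}")
```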
My question is as follows. Suppose we have a function $f(r)$ and we want to study its asymptotic behavior at infinity ($r\rightarrow \infty$). For example, the function may reduce to $-\frac{a}{r}$ or $b e^{-cr}$ at infinity. How do I identify the constants $a,b$ and $c$ using Mathematica? Or, more generally, how do I identify the asymptote of a function? Can anybody point out useful built-in functions? I am interested in the function: $$ f(r)=-\frac{\sqrt[3]{3} e^{-2 r/3}}{\pi ^{2/3}}-\frac{\sqrt[3]{2 \pi } e^{2 r/3}}{5 \left(\frac{3 \sqrt[3]{\pi } e^{2 r/3} \sinh ^{-1}\left(2 \sqrt[3]{2 \pi } e^{2 r/3}\right)}{5\ 2^{2/3}}+1\right)} $$ or f[r_]:=-((3^(1/3) E^(-2 r/3))/\[Pi]^(2/3)) - (E^(2 r/3) (2 \[Pi])^(1/3))/( 5 (1 + (3 E^(2 r/3) \[Pi]^(1/3) ArcSinh[2 E^(2 r/3) (2 \[Pi])^(1/3)])/(5 2^(2/3)))) I expect this function to have -$\frac{1}{r}$-behavior. How do I check it? I am not interested in a numerical value of the limit (which is 0), but rather in a function the original function reduces to at infinity. P.S. Using Mathematica for a week
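Not a Mathematica answer, but here is a quick numerical cross-check of the expected $-1/r$ behaviour in Python/mpmath (the function below is just a transcription of $f(r)$ above, and the precision setting is an arbitrary choice): if $r\,f(r)$ settles to a constant $-a$ as $r$ grows, then the tail behaves like $-a/r$.

```python
# Numerical check that f(r) ~ -a/r for large r: evaluate r*f(r) and watch it settle.
from mpmath import mp, mpf, exp, asinh, pi, cbrt

mp.dps = 30  # extra precision, since exp(2r/3) grows quickly

def f(r):
    r = mpf(r)
    t = exp(2 * r / 3)
    term1 = -cbrt(3) / (t * cbrt(pi) ** 2)
    denom = 5 * (1 + 3 * cbrt(pi) * t * asinh(2 * cbrt(2 * pi) * t) / (5 * cbrt(2) ** 2))
    term2 = -cbrt(2 * pi) * t / denom
    return term1 + term2

for r in (5, 20, 80, 320):
    print(r, r * f(r))   # tends towards a constant close to -1, i.e. f(r) ~ -1/r
```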
I want to have common subexpression elimination of comlicated functions where the elimination is done by factoring out the expressions. The result is not to be digested by a compiler, it should remain symbolic. I want this to work in general, without choosing things by hand. As an example of what I want to do, take to following expression: -3 a - 2 a^3 + 4 Sqrt[1 + a^2] (5 - 9 Log[2]) + 4 a^2 Sqrt[1 + a^2] (5 - 9 Log[2]) + 12 (1 + a^2)^(3/2) Log[1 + Sqrt[1 + 1/a^2]] - 6 (4 (Sqrt[1 + a^2] - a (2 + a^2 - a Sqrt[1 + a^2])) Log[a] + a Log[1 + a^2]) $-2 a^3+4 \sqrt{a^2+1} a^2 (5-9 \log (2))+12 \left(a^2+1\right)^{3/2} \log \left(\sqrt{\frac{1}{a^2}+1}+1\right)-6 \left(4 \left(\sqrt{a^2+1}-a \left(a^2-\sqrt{a^2+1} a+2\right)\right) \log (a)+a \log \left(a^2+1\right)\right)+4 \sqrt{a^2+1} (5-9 \log (2))-3 a$ This will not change by doing FullSimplify[ExpandAll[%], Assumptions -> _Symbol \[Element] Reals]. What I am after is to transform this expression to something where the complicated terms (e.g. measured in LeafCount) are nested.What I came up with until now is something which mostly works. However in terms of programming it is probably the ugliest function written in recent human history. I would like to ask if somebody has an idea how to do things nicer. Perhaps using OptimizeExpression is actually the wrong thing to do. nestComplicatedTerms[x_] := Block[ {expr, compiled, exprList, exprList2, countList}, {}; compiled = Experimental`OptimizeExpression[x, OptimizationLevel -> 2]; compiled = Map[Hold, compiled, {2}][[1, 2]]; compiled = ToString[InputForm[compiled]]; compiled = StringReplace[ compiled, {"=" -> "->", ";" -> ",", "Hold" -> "List"}]; exprList = ToExpression[compiled]; exprList2 = exprList; countList = Table[( exprList = exprList /. exprList[[i]]; {exprList2[[i]][[1]], LeafCount[exprList[[i]][[2]]]* Boole[Count[exprList2[[-1]], exprList2[[i]][[1]], {-1}] > 0]} ), {i, 1, Length[exprList] - 1}]; countList = SortBy[countList, -Last[#] &]; countList = Table[countList[[i]][[1]], {i, 1, Length[countList]}]; exprList = exprList2; expr = HornerForm[exprList[[-1]], countList]; Do[( expr = expr /. exprList[[i]]; exprList = exprList /. exprList[[i]]; ), {i, 1, Length[exprList] - 1}]; expr ] What it does is to first produce the output from OptimizeExpression. Then comes the ugly step. The output is transformed to a string, replacing some characters and then transfomed back to an expression. I had to do this, because I was unable to extract the replacement rules without actually replacing the variables in the function. This is how the output from OptimizeExpression looks like: Experimental`OptimizedExpression[ Block[{Compile`$938, Compile`$948, Compile`$954, Compile`$955, Compile`$956, Compile`$957, Compile`$958, Compile`$961, Compile`$962, Compile`$963, Compile`$964}, Compile`$938 = a^2; Compile`$948 = a Compile`$938; Compile`$954 = 1 + Compile`$938; Compile`$955 = Sqrt[Compile`$954]; Compile`$956 = Log[2]; Compile`$957 = -9 Compile`$956; Compile`$958 = 5 + Compile`$957; Compile`$961 = Compile`$955 Compile`$954; Compile`$962 = 1/Compile`$938; Compile`$963 = 1 + Compile`$962; Compile`$964 = Sqrt[Compile`$963]; -3 a - 2 Compile`$948 + 4 Compile`$955 Compile`$958 + 4 Compile`$938 Compile`$955 Compile`$958 + 12 Compile`$961 Log[1 + Compile`$964] - 6 (4 (Compile`$955 - a (2 + Compile`$938 - a Compile`$955)) Log[ a] + a Log[Compile`$954])]] It is a mathematica coding expression, which actually is being processed when I want to extract the replacement rules, changing its own content when I do.. 
How could I convert this output to a list of replacement rules, like {Compile`$938 -> a^2, Compile`$948 -> a Compile`$938, ..}) other than doing the forth and back string conversion? The rest of the function is straightforward and does what I want. It is these first couple of lines which I do not like. Oh and what it does to the expression I actually wanted to process? It decreases the LeafCount from 114 down to 89: -3 a - 2 a^3 + 12 (1 + a^2)^(3/2) Log[1 + Sqrt[1 + 1/a^2]] + 48 a Log[a] + 24 a^3 Log[a] + Sqrt[1 + a^2] ((4 + 4 a^2) (5 - 9 Log[2]) - 24 Log[a] - 24 a^2 Log[a]) - 6 a Log[1 + a^2] $-2 a^3+24 a^3 \log (a)-6 a \log \left(a^2+1\right)+12 \left(a^2+1\right)^{3/2} \log \left(\sqrt{\frac{1}{a^2}+1}+1\right)+\sqrt{a^2+1} \left(-24 a^2 \log (a)+\left(4 a^2+4\right) (5-9 \log (2))-24 \log (a)\right)-3 a+48 a \log (a)$ Doing a FullSimplify on it further simplifies the expression to a LeafCount of 72: -3 a - 2 a^3 + 12 (1 + a^2)^(3/2) Log[1 + Sqrt[1 + 1/a^2]] + 48 a Log[a] + 24 a^3 Log[a] - 4 (1 + a^2)^(3/2) (-5 + Log[512] + 6 Log[a]) - 6 a Log[1 + a^2] $-2 a^3+24 a^3 \log (a)-6 a \log \left(a^2+1\right)+12 \left(a^2+1\right)^{3/2} \log \left(\sqrt{\frac{1}{a^2}+1}+1\right)-4 \left(a^2+1\right)^{3/2} (6 \log (a)-5+\log (512))-3 a+48 a \log (a)$ And if one repeats the procedure on this last output by FullSimplify[nestComplicatedTerms[%]] the expression is again reduced to a LeafCount of 61. -3 a + 48 a Log[a] - 4 (1 + a^2)^( 3/2) (-5 + Log[512] - 3 Log[1 + Sqrt[1 + 1/a^2]] + 6 Log[a]) + a^3 (-2 + 24 Log[a]) - 6 a Log[1 + a^2] $a^3 (24 \log (a)-2)-6 a \log \left(a^2+1\right)-4 \left(a^2+1\right)^{3/2} \left(-3 \log \left(\sqrt{\frac{1}{a^2}+1}+1\right)+6 \log (a)-5+\log (512)\right)-3 a+48 a \log (a)$ I do not know why this is not directly done the first time, though. Anyhow comparing to the initial expression it is much shorter and has way less repeating terms.
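For what it's worth, the same idea (obtaining the common-subexpression replacement rules as data rather than via strings) is exposed directly by SymPy's `cse` in Python. This is only a point of comparison under a different system, not a solution inside Mathematica, and the expression below is a shortened stand-in for the one in the question.

```python
# Common subexpression elimination that returns the replacement rules as data (SymPy).
from sympy import symbols, sqrt, log, cse, Rational

a = symbols('a', positive=True)
# A shortened version of the expression from the question, enough to show the
# repeated sqrt(1 + a^2) and (1 + a^2)^(3/2) pieces.
expr = (-3*a - 2*a**3
        + 4*sqrt(1 + a**2)*(5 - 9*log(2))
        + 4*a**2*sqrt(1 + a**2)*(5 - 9*log(2))
        + 12*(1 + a**2)**Rational(3, 2)*log(1 + sqrt(1 + 1/a**2)))

replacements, reduced = cse(expr)
for lhs, rhs in replacements:        # analogue of the Compile`$nnn -> ... rules
    print(lhs, "->", rhs)
print(reduced[0])                    # the expression rewritten in terms of x0, x1, ...
```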
I am a beginner at mathematica, so I have made huge blunders in my code which gave too many errors so I did not think it would be relevant to post it. I will try and explain my target as well as possible. I am trying to trace the motion of a particle in a varying electric field. I have the initial position and initial velocity of the particle. The equations are as follows: $$ r(t)=r(0)+ \int_0^t v(\tau)d\tau $$ $$ v(t)=v(0)+ \int_0^t A.E(r) d\tau $$ $$ E(r)= (B).\frac{p}{pdist^3} + C.z^\frac{1}{3} $$ where $A$, $B$ and $C$ are constants, $p$ is the position vector from the point {2, 3, 4} and $pdist$ is the distance from that point. $r(t),\,v(t)$ and $E(t)$ are in vector forms. vector $r$ is in the form $\{x,y,z\}$ (that is the $z$ which is in equation 3). I have to simultaneously solve these equations of motion to determine the motion of the particle and make a parametric plot of its trajectory. I have to account for the motion for $t=15$ minutes. I made a noob attempt which did not work at all. I tried solving all three using DSolve which gave lots of errors. I am also not familiar with how to work with trajectory vectors in mathematica. Can you please guide me in the direction towards the path which I need to take while solving something like this?
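In case a sketch outside Mathematica helps clarify the structure of the problem: the two integral equations are equivalent to the ODE system $\dot{r} = v$, $\dot{v} = A\,E(r)$, which any numerical integrator can step through directly (in Mathematica the analogous tool is NDSolve with vector-valued unknowns). Everything below is assumed for illustration: the values of A, B, C, the initial conditions, and how the scalar $C z^{1/3}$ term enters the vector field (it is ambiguous in the question, so it is arbitrarily placed on the z-axis here).

```python
# Sketch: particle in the given field, integrated as dr/dt = v, dv/dt = A*E(r).
import numpy as np
from scipy.integrate import solve_ivp

A, B, C = 1.0, 2.0, 0.5          # placeholder constants (not from the question)
p0 = np.array([2.0, 3.0, 4.0])   # the fixed point the field is measured from

def E(r):
    p = r - p0                   # vector from {2, 3, 4} to the particle
    pdist = np.linalg.norm(p)
    return B * p / pdist**3 + C * np.cbrt(r[2]) * np.array([0.0, 0.0, 1.0])

def rhs(t, y):
    r, v = y[:3], y[3:]
    return np.concatenate([v, A * E(r)])

y0 = np.concatenate([[0.0, 0.0, 1.0],    # initial position
                     [0.1, 0.0, 0.0]])   # initial velocity
sol = solve_ivp(rhs, (0.0, 15 * 60.0), y0, max_step=1.0)  # 15 minutes, in seconds
x, y, z = sol.y[:3]
print(x[-1], y[-1], z[-1])       # final position; (x, y, z) can feed a 3-D parametric plot
```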
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6*10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-{\mu}m thick Ultra-Fast Silicon Detectors (UFSD) detectors manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a 90Sr \b{eta}-source . [...] arXiv:1804.05449. - 13 p. Preprint - Full text Registro completo - Registros similares 2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453 Registro completo - Registros similares 2018-08-23 11:31 Registro completo - Registros similares 2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576 Registro completo - Registros similares 2018-08-23 11:31 Registro completo - Registros similares 2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. 
Res., A 548 (2005) 355-363 Registro completo - Registros similares 2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders as LHC and its upgrades in energy and luminosity Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution.Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the in the material and conduces to the increase of the leakage current of the detector, decreases the satisfactory Signal/Noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE Registro completo - Registros similares 2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable mean to improve the radiation hardness of silicon detectors up to fluencies of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976 Registro completo - Registros similares 2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365 Registro completo - Registros similares 2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. 
In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594 Registro completo - Registros similares
Nested Collection Policies in Off-Policy Evaluation Off-policy evaluation allows you to estimate the reward of a policy using feedback data collected from a different policy. The standard method is to use inverse propensity scores (IPS) based on importance sample reweighting, i.e., reweighting with respect to the ratio between the target policy and collection policy. IPS requires the policies to be stochastic and for the probability of actions under the collection policy to be non-zero wherever the target policy is non-zero. But what happens when the collection policy has additional noise? For example, let's say a ghost haunts your production pipeline and at random times during collection it tweaks the collection policy -- assume you have still logged the correct propensity, it's just that it is different to what was intended. It might seem that you would need to correct for this additional noise in the logged propensity somehow. But in fact, counter-intuitively, the standard IPS estimator works fine in this case too. Nested Collection Policy Let's assume the ghost in the machine ensures that the collection policy is actually a mixture of S sub-policies:$$\pi_c = \sum_{s=1}^S w_s \pi_c^s$$ In words, for each action the bandit randomly chooses one of S policies according to the weights \(w\) where \( \sum_s w_s = 1\) and \( w_s \ge 0 \; \forall s \), then randomly selects an action according to the selected policy \( \pi_c^s \). As usual, we are interested in estimating the average reward (r.v. \( R \)) under the target policy \( \pi_t \) as \( \mathbb{E}_{\pi_t}[R] \) using data logged from the collection policy \( (x_i, a_i, r_i) \sim F_X \pi_c F_R \). How to show that the standard IPS estimator (that uses whatever logged propensity was applied in each action) is equal to \( \mathbb{E}_{\pi_t}[R] \) in expectation? That is, how to show:$$ \mathbb{E} \left[ \frac{1}{N} \sum_{n=1}^N \frac{\pi_t(a_n)}{\pi_c^{s_n}(a_n)} r_n \right] = \mathbb{E}_{\pi_t}[R],$$ where I have introduced \( s_n \) as the index of the sub-policy used for action \( n \). The trick here is to separate the data from the different policies into S "sub-experiments" then notice that we have an unbiased estimator for each sub-experiment and use the fact that \( \sum_s w_s = 1\):$$ \mathbb{E} \left[ \frac{1}{N} \sum_{n=1}^N \frac{\pi_t(a_n)}{\pi_c^{s_n}(a_n)} r_n \right] \\ = \mathbb{E} \left[\sum_{s=1}^S \frac{1}{N} \sum_{n: s_n = s}^N \frac{\pi_t(a_n)}{\pi_c^{s_n}(a_n)} r_n \right] \\ = \mathbb{E} \left[ \sum_{s=1}^S \frac{\sum_{n: s_n = s}^N 1}{N} \mathbb{E}_{\pi_t}[ R ] \right ]\\ = \sum_{s=1}^S w_s \mathbb{E}_{\pi_t}[ R ]\\ = \mathbb{E}_{\pi_t}[ R ]. $$ (In the second-to-last step, the expected fraction of actions drawn from sub-policy \( s \) is exactly \( w_s \).) Needless to say, all this requires that none of the sub-policies violates the assumption that the collection policy has non-zero mass wherever the target policy has non-zero mass. Acknowledgements: thanks to Alois Gruson, Christophe Charbuillet, and Damien Tardieu for the helpful discussion.
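A tiny simulation of the claim (a sketch with a made-up two-action bandit, not code from the post): actions are logged under a random mixture of two collection sub-policies, the logged propensity is whichever sub-policy was actually used, and the plain IPS estimate still lands on the target policy's true value.

```python
import numpy as np

rng = np.random.default_rng(1)
true_reward = np.array([0.3, 0.7])        # expected reward per action
pi_t = np.array([0.2, 0.8])               # target policy
subpolicies = np.array([[0.9, 0.1],       # collection sub-policy 1
                        [0.4, 0.6]])      # collection sub-policy 2
w = np.array([0.5, 0.5])                  # mixture weights ("the ghost")

N = 200_000
s = rng.choice(2, size=N, p=w)            # which sub-policy was used for each action
probs = subpolicies[s]                    # per-step action probabilities
a = (rng.random(N) < probs[:, 1]).astype(int)
r = rng.binomial(1, true_reward[a])
logged_propensity = probs[np.arange(N), a]

ips = np.mean(pi_t[a] / logged_propensity * r)
print("IPS estimate :", ips)
print("true value   :", pi_t @ true_reward)   # ~0.62; the two should agree
```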
The transcendental numbers form a field, or so I thought. I'm familiar with the fact that the algebraic numbers form a field which implies that reciprocals of transcendental numbers must be again transcendental (if reciprocal is not transcendental, then the reciprocal of the reciprocal, the transcendental element itself, must be algebraic...). But I was wondering about sums and products of transcendental numbers which are covered in numerous threads here on MSE. However, I came across an awful contradiction after combining certain proofs from here. Let's begin clear. Let $L/K$ be a field extension with $\alpha,\beta\in L$. Then obviously, it is true that $\alpha$ and $\beta$ are algebraic iff $\alpha+\beta$ and $\alpha\beta$ are algebraic; a simple proof of this is given using the polynomial $$f=x^2-(\alpha+\beta)x+\alpha\beta=(x-\alpha)(x-\beta)$$ in combination with the tower rule. I want to prove and disprove that $\alpha\beta$ is transcendental when $\alpha$ and $\beta$ are both transcendental. Let's assume that $\alpha$ and $\beta$ are transcendental. First for the proof: if $\alpha\beta$ is not transcendental, then it must be algebraic and hence $\alpha$ and $\beta$ must be algebraic, but they were assumed to be transcendental. Hence, a contradiction and $\alpha\beta$ must be transcendental. The "result" above is easily disproven: we know by the reasoning from earlier that $\frac{1}{\alpha}$ must also be transcendental; we take this reciprocal as our transcendental $\beta$. Now $\alpha\beta=1$ which is algebraic. Where did I go wrong? Thanks for the time. In addition: if we take the case $\beta\neq\frac{1}{\gamma\alpha}$ where $\gamma$ is algebraic, is it then the case that $\alpha\beta$ is always transcendental given $\alpha$ and $\beta$ transcendental. EDIT:Thanks to the people from the comment section below, I now know what went wrong in my (wrong) argumentation. The answer here tells the story quite well and the part that is wrong in my text is that I also assumed that $\alpha$ and $\beta$ are algebraic iff $\alpha\beta$ algebraic, which is false. I'm going to leave this open so that anyone having the same issue in the future will find more (summarised) info here.
X Search Filters Format Subjects Library Location Language Publication Date Click on a bar to filter by decade Slide to change publication date range The New England Journal of Medicine, ISSN 0028-4793, 11/2017, Volume 377, Issue 18, pp. 1713 - 1722 Fifteen children with spinal muscular atrophy type 1 received gene-replacement therapy with a single dose of adeno-associated virus containing SMN. In marked... SURVIVAL | MEDICINE, GENERAL & INTERNAL | MOUSE MODEL | DISEASE | PHENOTYPE | SMN2 | MOTOR-NEURON | DELIVERY | Spinal Muscular Atrophies of Childhood - genetics | Humans | Infant | Male | Dependovirus | Respiration, Artificial | Spinal Muscular Atrophies of Childhood - therapy | Disease-Free Survival | Liver Diseases - etiology | Survival of Motor Neuron 1 Protein - genetics | Female | Spinal Muscular Atrophies of Childhood - physiopathology | Historically Controlled Study | Infusions, Intravenous | Genetic Vectors | Nutritional Support | Motor Skills | Infant, Newborn | Cohort Studies | Genetic Therapy - adverse effects | Neuromuscular diseases | Intravenous administration | Cell survival | Gene transfer | Neurons | SMN protein | Motor neuron disease | Nervous system | Patients | Children & youth | Spinal muscular atrophy | Proteins | Children | Gene therapy | Neuromuscular system | Prednisolone | Age | Deoxyribonucleic acid--DNA | Index Medicus | Abridged Index Medicus SURVIVAL | MEDICINE, GENERAL & INTERNAL | MOUSE MODEL | DISEASE | PHENOTYPE | SMN2 | MOTOR-NEURON | DELIVERY | Spinal Muscular Atrophies of Childhood - genetics | Humans | Infant | Male | Dependovirus | Respiration, Artificial | Spinal Muscular Atrophies of Childhood - therapy | Disease-Free Survival | Liver Diseases - etiology | Survival of Motor Neuron 1 Protein - genetics | Female | Spinal Muscular Atrophies of Childhood - physiopathology | Historically Controlled Study | Infusions, Intravenous | Genetic Vectors | Nutritional Support | Motor Skills | Infant, Newborn | Cohort Studies | Genetic Therapy - adverse effects | Neuromuscular diseases | Intravenous administration | Cell survival | Gene transfer | Neurons | SMN protein | Motor neuron disease | Nervous system | Patients | Children & youth | Spinal muscular atrophy | Proteins | Children | Gene therapy | Neuromuscular system | Prednisolone | Age | Deoxyribonucleic acid--DNA | Index Medicus | Abridged Index Medicus Journal Article 2. 480. Gene Therapy for Spinal Muscular Atrophy Type 1 Shows Potential to Improve Survival and Motor Functional Outcomes Molecular Therapy, ISSN 1525-0016, 05/2016, Volume 24, pp. S190 - S190 Journal Article 3. AVXS-101 Phase 1 gene therapy clinical trial in SMA Type 1: Interim data demonstrates improvements in supportive care use European Journal of Paediatric Neurology, ISSN 1090-3798, 2017, Volume 21, pp. e14 - e14 Journal Article 4. IP 853. AVXS-101 Phase-1-Gene Therapy Clinical Trial in SMA Type 1: Event-Free Survival and Achievement of Developmental Milestones Neuropediatrics, ISSN 0174-304X, 10/2018, Volume 49, Issue S 02 Conference Proceeding 5. Precision measurement and interpretation of inclusive W+, W−and Z/γ production cross sections with the ATLAS detector The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2017, Volume 77, Issue 6, pp. 1 - 62 High-precision measurements by the ATLAS Collaboration are presented of inclusive W+ -> l(+) nu, W- -> l(-) (nu) over bar and Z/gamma* -> ll (l = e, mu)... 
MONTE-CARLO | DRELL-YAN | INTEGRATOR | PARTON DISTRIBUTIONS | DECAY | PROTON | QCD ANALYSIS | EP SCATTERING | COMBINATION | PHYSICS, PARTICLES & FIELDS | Protons | Luminosity | Large Hadron Collider | Leptons | Scattering cross sections | Distribution functions | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Engineering (miscellaneous); Physics and Astronomy (miscellaneous) | Subatomic Physics | Astrophysics | Settore FIS/01 - Fisica Sperimentale | Settore ING-INF/07 - Misure Elettriche e Elettroniche | Experiment | Nuclear and particle physics. Atomic energy. Radioactivity | Science & Technology | Settore FIS/04 - Fisica Nucleare e Subnucleare | Phenomenology | High Energy Physics | hep-ex, hep-ex | Nuclear Experiment | Subatomär fysik | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences MONTE-CARLO | DRELL-YAN | INTEGRATOR | PARTON DISTRIBUTIONS | DECAY | PROTON | QCD ANALYSIS | EP SCATTERING | COMBINATION | PHYSICS, PARTICLES & FIELDS | Protons | Luminosity | Large Hadron Collider | Leptons | Scattering cross sections | Distribution functions | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Engineering (miscellaneous); Physics and Astronomy (miscellaneous) | Subatomic Physics | Astrophysics | Settore FIS/01 - Fisica Sperimentale | Settore ING-INF/07 - Misure Elettriche e Elettroniche | Experiment | Nuclear and particle physics. Atomic energy. Radioactivity | Science & Technology | Settore FIS/04 - Fisica Nucleare e Subnucleare | Phenomenology | High Energy Physics | hep-ex, hep-ex | Nuclear Experiment | Subatomär fysik | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences Journal Article The Auk, ISSN 0004-8038, 7/2014, Volume 131, Issue 3, pp. 454 - 456 Journal Article AUK, ISSN 0004-8038, 07/2014, Volume 131, Issue 3, pp. 454 - 456 Journal Article 8. Search for triboson $W^{\pm }W^{\pm }W^{\mp }$ production in pp collisions at $\sqrt{s}=8$ $\text {TeV}$ with the ATLAS detector European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 03/2017, Volume 77, Issue 3 This paper reports a search for triboson W±W±W∓ production in two decay channels (W±W±W∓ → ℓ±νℓ±νℓ∓ν and W±W±W∓ → ℓ±νℓ±νjj with ℓ = e,μ) in proton-proton... PHYSICS OF ELEMENTARY PARTICLES AND FIELDS PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article 9. Measurement of W±W± vector-boson scattering and limits on anomalous quartic gauge couplings with the ATLAS detector Physical Review D, ISSN 2470-0010, 01/2017, Volume 96, Issue 1 Journal Article 10. Precision measurement and interpretation of inclusive W+, W- and Z/gamma production cross sections with the ATLAS detector European Physical Journal C, ISSN 1434-6044, 2017, Volume 77, Issue 6 Journal Article 11. Measurement of the W charge asymmetry in the W→μν decay mode in pp collisions at s=7 TeV with the ATLAS detector Physics Letters B, ISSN 0370-2693, 06/2011, Volume 701, Issue 1, pp. 31 - 49 Journal Article 12. Evidence for electroweak production of W±W±jj in pp collisions at sqrt[s] = 8 TeV with the ATLAS detector Physical review letters, ISSN 0031-9007, 10/2014, Volume 113, Issue 14, pp. 141803 - 141803 This Letter presents the first study of W(±)W(±)jj, same-electric-charge diboson production in association with two jets, using 20.3 fb(-1) of proton-proton... 
Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences Journal Article 13. Search for heavy Majorana or Dirac neutrinos and right-handed $W$ gauge bosons in final states with two charged leptons and two jets at $\sqrt{s}$ = 13 TeV with the ATLAS detector Journal of High Energy Physics (Online), ISSN 1029-8479, 09/2018, Volume 2019, Issue 1 Journal Article 14. Search for the Higgs boson produced in association with a W boson and decaying to four b-quarks via two spin-zero particles in pp collisions at 13 TeV with the ATLAS detector The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2016, Volume 76, Issue 11, pp. 1 - 31 This paper presents a dedicated search for exotic decays of the Higgs boson to a pair of new spin-zero particles, $$H \rightarrow aa$$ H → a a , where the... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PAIR PRODUCTION | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PAIR PRODUCTION | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences Journal Article 15. Searches for electroweak production of charginos, neutralinos, and sleptons decaying to leptons and W, Z, and Higgs bosons in pp collisions at 8 TeV The European Physical Journal C, ISSN 1434-6044, 9/2014, Volume 74, Issue 9, pp. 1 - 42 Searches for the direct electroweak production of supersymmetric charginos, neutralinos, and sleptons in a variety of signatures with leptons and... 
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS, PARTICLES & FIELDS | Models | Detectors | Collisions (Nuclear physics) | Leptons | Searching | Collisions | Decay | Higgs bosons | Luminosity | Texts | Signatures | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS, PARTICLES & FIELDS | Models | Detectors | Collisions (Nuclear physics) | Leptons | Searching | Collisions | Decay | Higgs bosons | Luminosity | Texts | Signatures | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics Journal Article
Seminars 2014: December 4, 2014, 10:00 Gerard Freixas (Jussieu, Paris) : Reciprocity laws on Riemann surfaces, connections on Deligne pairings and holomorphic torsio. Abstract: In this talk I will recall classical reciprocity laws on Riemann surfaces and explain how they translate into the language of connections on Deligne pairings. I will give a couple of applications. One explains a construction of Hitchin on hyperkähler varieties of lambda-connections, the other explains a result of Fay on holomorphic extensions of analytic torsion on spaces of characters of the fundamental group of a Riemann surface. The contents of this talk are based on joint work with Richard Wentworth (University of Maryland). December 3, 2014, 10:00 Eleonora Di Nezza (Imperial College London):Regularizing properties and uniqueness of the Kaehler-Ricci flow Abstract: Let X be a compact Kaehler manifold. I will show that the Kaehler-Ricci flow, as well as its twisted version, can be run from an arbitrary positive closed current, and that it is immediately smooth in a Zariski open subset of X. Moreover, if the initial data has positive Lelong number we indeed have propagation of singularities for short time. Finally, I will prove a uniqueness result in the case of zero Lelong numbers. (This is a joint work with Chinh Lu) November 26, 2014, 10:00 Chinh H. Lu (Göteborg): The completion of the space of Kähler metrics. Abstract: We will talk about recent results of T. Darvas concerning the completion of the space of Kähler metrics June 18, 2014, 10:00 Ivan Cheltsov (Edinburgh): Cylinders in del Pezzo surfaces. Abstract: For an ample divisor H on a variety V, an H-polar cylinder in V is an open ruled affine subset whose complement is a support of an effective Q-divisor that is Q-rationally equivalent to H. In the case when V is a Fano variety and H is its anticanonical divisor, this notion links together affine, birational and Kahler geometries. I prove existence and non-existence of H-polar cylinders in smooth and mildly singular (with at most du Val singularities) del Pezzo surfaces for different ample divisors H. In particular, I will answer an old question of Zaidenberg and Flenner about additive group actions on the cubic Fermat affine threefold cone. This is a joint work with Park and Won. May 21, 2014, 10:00 Bo Berndtsson: Yet another proof of the Ohsawa-Takegoshi theorem. Abstract: A very simple proof of optimal lower bounds for the Bergman kernel was given recently by Blocki and Lempert. I will show how their method generalizes to at least one version of the Ohsawa-Takegoshi theorem. May 15, 2014, 10:00 Robert Berman (Göteborg) : A primer on toric manifolds – from the analytic point of view Abstract: In this talk I will give an elementary and “hands-on” introduction to toric manifolds, from an analytic point of view. More precisely, I will explain how to build a polarized compact complex manifold X from a given polytope P (satisfying the Delzant conditions). Topologically, the manifold X is fibered over the polytope P in real tori, in much the same way as the Riemann sphere may be fibered over the unit-interval by embedding it in R^3 and projecting onto a coordinate axis. April 30, 2014, 10:00 Chinh H. Lu (Göteborg): Complex Hessian equations on compact Kähler manifolds. Abstract: The complex Hessian equation is an interpolation between the Laplace and complex Monge-Ampère equation. An analogous version of the Calabi-Yau equation has been recently solved by Dinew and Kolodziej. 
In this talk we first survey some known results about existence and regularity of solutions in the non-degenerate case. We then discuss how to solve the degenerate equation by a variational approach due to Berman-Boucksom-Guedj-Zeriahi. The key (and new) point is a regularization result for m-subharmonic functions. This is joint work with Van-Dong Nguyen (Ho Chi Minh city University of Pedagogy). April 23, 2014, 10:00 Yanir Rubinstein (University of Maryland): Logzooet - beyond elephants Abstract: This is the second talk in the series. It should be of interest to differential/algebraic/complex geometers The compact four-manifolds that admit a Kähler metric with positive Ricci curvature have been classified in the 19th century: they come in 10 families. In analogy with conical Riemann surfaces (e.g., football, teardrop) and hyperbolic 3-folds with a cone singularity along a link appearing in Thurston's program, one may consider 4-folds with a Kähler metric having "edge singularities", namely admitting a 2-dimensional cone singularity transverse to an immersed minimal surface, a `complex edge'. What are all the pairs (4-fold, immersed surface) that admit a Kähler metric with positive Ricci curvature away from the edge? In joint work with I. Cheltsov (Edinburgh) we classify all such pairs under some assumptions. These now come in infinitely-many families and we then pose the "Calabi problem" for these pairs: when do they admit Kähler-Einstein edge metrics? This problem is far from being solved, even in this low dimension, but we report on some initial progress: some understanding of the non-existence part of the conjecture, as well as several existence results. April 9, 2014, 10:00 Yanir Rubinstein (University of Maryland): Logzooet Abstract: This is the first talk out of two and will require little prerequisites. The compact four-manifolds that admit a Kähler metric with positive Ricci curvature have been classified in the 19th century: they come in 10 families. In analogy with conical Riemann surfaces (e.g., football, teardrop) and hyperbolic 3-folds with a cone singularity along a link appearing in Thurston's program, one may consider 4-folds with a Kähler metric having "edge singularities", namely admitting a 2-dimensional cone singularity transverse to an immersed minimal surface, a `complex edge'. What are all the pairs (4-fold, immersed surface) that admit a Kähler metric with positive Ricci curvature away from the edge? In joint work with I. Cheltsov (Edinburgh) we classify all such pairs under some assumptions. These now come in infinitely-many families and we then pose the "Calabi problem" for these pairs: when do they admit Kähler-Einstein edge metrics? This problem is far from being solved, even in this low dimension, but we report on some initial progress: some understanding of the non-existence part of the conjecture, as well as several existence results. April 2, 2014, 10:00 Choi Young-Jun (KIAS): Variations of Kähler-Einstein metrics on strongly pseudoconvex domains Abstract: By a celebrated theorem of Cheng and Yau, every bounded strongly pseudo-convex domain with smooth boundary admits a unique complete Kähler-Einstein metric. In this talk, we discuss the plurisubharmonicity of variations of the Kähler- Einstein metrics of strongly pseudoconvex domains. March 19, 2014, 10.00 Nikolay Shcherbina (Wuppertal University): Bounded plurisubharmonic functions, cores and Liouville type property. Abstract. 
For a domain G in a complex manifold M of dimension n let F be the family of all bounded above plurisubharmonic functions. Define a notion of the core c(G) of G as: c(G) = {z \in G: rank L(z,f) < n for all functions f \in F}, where L(z,f) is the Levi form of f at z. The main purpose of the talk is to discuss a Liouville type property of c(G), namely, to investigate if it is true or not that every function g from F has to be a constant on each connected component of c(G). March 12, 2014, 10.00 Robert Berman (Göteborg): The convexity of the K-energy on the space of all Kähler metrics and applications Abstract: The K-energy is a functional on the space H of all Kähler metrics in a given cohomology class and was introduced by Mabuchi in the 80's. Its critical points are Kähler metrics with constant scalar curvature. As shown by Mabuchi the K-energy is convex along smooth geodesics in the space H. This was result was later put into a the framework of geometric invariant theory in infinite dimensions, by Donaldson. However, when studying the geometry of H one is, in general, forced to work with a weaker notion of geodesics introduced by Chen (due to lack of a higher order regularity theory for the PDE describing the corresponding geodesic equation). In this talk, which is based on a joint work with Bo Berndtsson, it will be shown that the K-energy remains convex along weak geodesics, thus confirming a conjecture by Chen. Some applications in Kähler geometry will also be briefly discussed. February 26, 2014 9.30 Bo Berndtsson (Göteborg): Complex interpolation of real norms Abstract: The method of complex interpolation is a way to, given a family of complex Banach norms, find intermediate or averaged norms. I will describe an extension of this to real norms and also mention the relation to the boundary value problem for geodesics in the space of metrics on a line bundle. This is joint work with Bo'az Klartag, Dario Cordero and Yanir Rubinstein. February 12, 2014, 10.00 Xu Wang (Tongji University, Shanghai): Variation of the Bergman kernels of pseudoconvex domains. Abstract: We shall give a variational formula of the full Bergman kernels associated to smoothly bounded strongly pseudoconvex domains. An equivalent criterion for the triviality of holomorphic motions of planar domains in terms of the Bergman kernel is given as an application. January 27, 2014, 10.00 Håkan Samuelsson Kalm (Göteborg): On analytic structure in maximal ideal spaces Abstract: John Wermer's classical maximality theorem says the following: Let $f$ be a continuous function on the unit circle $b\Delta$, where $\Delta \subset \mathbb{C}$ is the unit disk. Then, either $f$ is the boundary values of a holomorphic function on $\Delta$ or the uniform algebra generated by $z$ and $f$ on the unit circle equals the algebra of all continuous functions on the unit circle. I will discuss to what extent Wermer's maximality theorem extends to the setting of several complex variables. In particular, we will answer a question posed by Lee Stout concerning the presence of analytic structure for a uniform algebra whose maximal ideal space is a manifold. This is joint work with A. Izzo (Bowling Green) and E. F. Wold (Oslo). Seminars 2007, 2008, 2009, 2010, 2011, 2012 2013: See attached pdf below. Links to lists of previous seminars:
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
We consider the structure $X=\langle A,\le\rangle$. A thinnish linear order is an order such that for all $x,y$ with $x<y$ there exist only finitely many $z\in A$ such that $x<z<y$. Prove that there is no set $\Delta$ of first-order formulas such that $\Delta\models X$ $\leftrightarrow$ $X$ is a thinnish linear order. I should use the compactness theorem here. As for my approach, I claim that a thinnish linear order is equivalent to a directed chain (finite or infinite). This is because each linear order may be depicted as a directed chain, and similarly each thinnish linear order may be represented as a directed chain. So the problem is equivalent to: prove that there is no $\Delta$ such that $\Delta\models G\leftrightarrow G$ is a directed chain. I am not sure whether what I have done so far is correct. However, I will try to show the version with graphs. Assume that such a $\Delta$ exists. We extend our signature by constants $a,b$ representing nodes. Then we state: $\phi_i:$ $a$ and $b$ are not reachable from each other within $i$ edges (no matter the direction). Now consider $\Delta\cup\{\phi_i:i\in\mathbb{N}\}$, and take an arbitrary finite subset of this set. It is easy to see that it is satisfiable, because in an infinite chain we can assign nodes to $a,b$ in such a way that $a$ and $b$ are farther apart than the maximum $i$ appearing in the selected finite set of $\phi$'s. Thanks to the compactness theorem we conclude that $\Delta\cup\{\phi_i:i\in\mathbb{N}\}$ is satisfiable. However, it is not possible to interpret $a,b$ in such a way that every $\phi_i$ is satisfied. What do you think about this solution? Is it correct? If not, how do I solve it?
Shear modulus, also known as modulus of rigidity, is a measure of the rigidity of a body, given by the ratio of shear stress to shear strain. It is usually denoted by G, sometimes by S or μ. What is Shear Modulus? Shear modulus of elasticity is one of the measures of the mechanical properties of solids; other elastic moduli are Young's modulus and bulk modulus. The shear modulus of a material gives us the ratio of shear stress to shear strain in a body. It is measured using the SI unit pascal (Pa). The dimensional formula of shear modulus is \(M^1 L^{-1} T^{-2}\). It is denoted by G. It can be used to explain how a material resists transverse deformations, but this is practical for small deformations only, after which the material is able to return to its original state. This is because large shearing forces lead to permanent deformations (the body is no longer elastic). Modulus Of Rigidity Formula \(G=\frac{\tau _{xy}}{\gamma _{xy}}\) Where, \(\tau _{xy}=\frac{F}{A}\) is the shear stress, F is the force acting on the object, A is the area on which the force is acting, \(\gamma _{xy}=\frac{\Delta x}{l}\) is the shear strain, \(\Delta x\) is the transverse displacement, and l is the initial length. Units: The modulus of rigidity is measured using the SI unit pascal (Pa). Commonly it is expressed in gigapascals (GPa). Alternatively, it is also expressed in pounds per square inch (PSI). Modulus Of Rigidity – Overview: The modulus of rigidity is the elastic coefficient when a shear force is applied, resulting in lateral deformation. It gives us a measure of how rigid a body is. The table given below summarises everything you need to know about the rigidity modulus.
Definition: Shear modulus is the ratio of shear stress to shear strain in a body.
Symbol: G (also S or μ)
SI unit: pascal (Pa), i.e. N/m^2
Formula: shear stress / shear strain
Dimension formula: \(M^1 L^{-1} T^{-2}\)
Example of Modulus Of Rigidity The following example will give you a clear understanding of how the shear modulus helps in defining the rigidity of any material. The shear modulus of wood is 6.2×10^8 Pa, and the shear modulus of steel is 7.2×10^10 Pa. Thus, it implies that steel is a lot more (really a lot more) rigid than wood, around 116 times more! Calculation: A block of unknown material kept on a table (the square face is placed on the table) is under a shearing force. The following data is given; calculate the shear modulus of the material. Dimensions of the block = 60 mm x 60 mm x 20 mm. Shearing force = 0.245 N. Displacement = 5 mm. Solution: Substituting the values in the formula we get: Shear stress \(=\frac{F}{A}=\frac{0.245}{60\times 20\times 10^{-6}}=\frac{2450}{12}\) N/m^2. Shear strain \(=\frac{\Delta x}{l}=\frac{5}{60}=\frac{1}{12}\). Thus, shear modulus \(G=\frac{Shear\;stress}{Shear\;strain}=\frac{2450}{12}\times 12 = 2450\) N/m^2. Relation Between Elastic Constants The elastic moduli of a material, like Young's modulus, bulk modulus and shear modulus, are specific forms of Hooke's law, which states that, for an elastic material, the strain experienced is proportional to the stress applied. Thus, we can write the relation between the elastic constants by the following equation: 2G(1+υ) = E = 3K(1−2υ) Where, G is the Shear Modulus, E is the Young's Modulus, K is the Bulk Modulus, and υ is Poisson's Ratio. Frequently Asked Questions (FAQs): Q1) How is the rigidity modulus related to the other elastic moduli?
Ans: The shear modulus is related to the other elastic moduli as 2G(1+υ) = E = 3K(1−2υ), where G is the shear modulus, E is Young's modulus, K is the bulk modulus and υ is Poisson's ratio.

Q2) What happens to the shear modulus if the applied shear force increases?

Ans: Within the elastic limit, nothing happens to it: the shear modulus is a property of the material. Increasing the shear force increases the shear stress and the shear strain in the same proportion, so their ratio G stays the same.

Q3) If the shear modulus of material 1 is x pascals and that of material 2 is 30x pascals, what does it mean?

Ans: It means that material 2 is more rigid than material 1; in fact it is thirty times as rigid.
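As a quick cross-check of the worked example above, here is a minimal Python sketch (the force, dimensions and displacement are those assumed in the example; the choice of sheared area and reference length simply follows the article's convention):

# Shear modulus from the worked example: G = (F/A) / (dx/l)
F = 0.245          # shearing force in N
A = 60e-3 * 20e-3  # sheared area in m^2 (60 mm x 20 mm face, as in the article)
dx = 5e-3          # transverse displacement in m
l = 60e-3          # reference length in m (as used in the article)

shear_stress = F / A           # ~204.17 N/m^2
shear_strain = dx / l          # 1/12
G = shear_stress / shear_strain
print(f"G = {G:.0f} N/m^2")    # ~2450 N/m^2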
The Laplacian operator is called an operator because it does something to the function that follows: namely, it produces or generates the sum of the three second-derivatives of the function. Of course, this is not done automatically; you must do the work, or remember to use this operator properly in algebraic manipulations. Symbols for operators are often (although not always) denoted by a hat ^ over the symbol, unless the symbol is used exclusively for an operator, e.g. \(\nabla\) (del/nabla), or does not involve differentiation, e.g.\(r\) for position. Recall, that we can identify the total energy operator, which is called the Hamiltonian operator, \(\hat{H}\), as consisting of the kinetic energy operator plus the potential energy operator. \[\hat {H} = - \frac {\hbar ^2}{2m} \nabla ^2 + \hat {V} (x, y , z ) \label{3-22}\] Using this notation we write the Schrödinger Equation as \[ \hat {H} \psi (x , y , z ) = E \psi ( x , y , z ) \label{3-23}\] The Hamiltonian The term Hamiltonian, named after the Irish mathematician Hamilton, comes from the formulation of Classical Mechanics that is based on the total energy, \[H = T + V\] rather than Newton's second law, \[\vec{F} = m\vec{a}\] Equation \(\ref{3-23}\) says that the Hamiltonian operator operates on the wavefunction to produce the energy, which is a number, (a quantity of Joules), times the wavefunction. Such an equation, where the operator, operating on a function, produces a constant times the function, is called an eigenvalue equation. The function is called an eigenfunction, and the resulting numerical value is called the eigenvalue. Eigen here is the German word meaning self or own. It is a general principle of Quantum Mechanics that there is an operator for every physical observable. A physical observable is anything that can be measured. If the wavefunction that describes a system is an eigenfunction of an operator, then the value of the associated observable is extracted from the eigenfunction by operating on the eigenfunction with the appropriate operator. The value of the observable for the system is the eigenvalue, and the system is said to be in an eigenstate. Equation \(\ref{3-23}\) states this principle mathematically for the case of energy as the observable. Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
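To make the eigenvalue statement concrete, here is a small numerical aside (not part of the original text): it builds a finite-difference approximation of a one-dimensional Hamiltonian with ħ = m = 1 and a box potential, diagonalises it, and checks that H ψ ≈ E ψ for the lowest eigenfunction. The grid size and the choice of potential are arbitrary illustrative assumptions.

import numpy as np

# 1D particle in a box of length L = 1, hbar = m = 1, V = 0 inside the box
N = 500
L = 1.0
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)

# Kinetic operator -(1/2) d^2/dx^2 via central differences
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)

# Eigenvalue equation: H psi_0 = E_0 psi_0 (to numerical accuracy)
residual = np.max(np.abs(H @ psi[:, 0] - E[0] * psi[:, 0]))
print(E[0], np.pi**2 / 2, residual)   # E_0 is close to pi^2/2 ~ 4.93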
Sum of k Choose m up to n

Contents

Theorem

Let $m, n \in \Z: m \ge 0, n \ge 0$. Then:

$\displaystyle \sum_{k \mathop = 0}^n \binom k m = \binom {n + 1} {m + 1}$

where $\displaystyle \binom k m$ is a binomial coefficient.

Proof

Proof by induction. For all $n \in \N$, let $\map P n$ be the proposition:

$\displaystyle \sum_{k \mathop = 0}^n \binom k m = \binom {n + 1} {m + 1}$

Basis for the Induction

$\map P 0$ says: $\dbinom 0 m = \dbinom 1 {m + 1}$

When $m = 0$ we have by definition: $\dbinom 0 0 = 1 = \dbinom 1 1$

When $m > 0$ we also have by definition: $\dbinom 0 m = 0 = \dbinom 1 {m + 1}$

This is our basis for the induction.

Induction Hypothesis

Now we need to show that, if $\map P r$ is true, where $r \ge 0$, then it logically follows that $\map P {r + 1}$ is true.

So this is our induction hypothesis: $\displaystyle \sum_{k \mathop = 0}^r \binom k m = \binom {r + 1} {m + 1}$

Then we need to show: $\displaystyle \sum_{k \mathop = 0}^{r + 1} \binom k m = \binom {r + 2} {m + 1}$

Induction Step

This is our induction step:

$\displaystyle \sum_{k \mathop = 0}^{r + 1} \binom k m = \sum_{k \mathop = 0}^r \binom k m + \binom {r + 1} m = \binom {r + 1} {m + 1} + \binom {r + 1} m$ (by the Induction Hypothesis) $\displaystyle = \binom {r + 2} {m + 1}$ (by Pascal's Rule)

So $\map P r \implies \map P {r + 1}$ and the result follows by the Principle of Mathematical Induction.

Therefore: $\displaystyle \forall m, n \in \Z, m \ge 0, n \ge 0: \sum_{k \mathop = 0}^n \binom k m = \binom {n + 1} {m + 1}$

$\blacksquare$

Alternative Proof

$\displaystyle \sum_{k \mathop = 0}^n \binom k m = \sum_{0 \mathop \le m + k \mathop \le n} \binom {m + k} m = \sum_{-m \mathop \le k \mathop < 0} \binom {m + k} m + \sum_{0 \mathop \le k \mathop \le n - m} \binom {m + k} m = 0 + \sum_{0 \mathop \le k \mathop \le n - m} \binom {m + k} m$ (as $\dbinom {m + k} m = 0$ for $k < 0$)

$\displaystyle = \binom {m + \paren {n - m} + 1} {n - m} = \binom {n + 1} {n - m}$ (Sum of r+k Choose k up to n)

$\displaystyle = \binom {n + 1} {\paren {n + 1} - \paren {n - m} }$ (Symmetry Rule for Binomial Coefficients)

$\displaystyle = \binom {n + 1} {m + 1}$

$\blacksquare$
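A quick numerical spot-check of the identity (a sanity check only, not part of the proof):

from math import comb

# Verify sum_{k=0}^{n} C(k, m) == C(n+1, m+1) for a range of small m and n
assert all(
    sum(comb(k, m) for k in range(n + 1)) == comb(n + 1, m + 1)
    for m in range(0, 8)
    for n in range(0, 30)
)
print("identity holds for all tested m, n")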
I'm attempting to construct the thermodynamic potential for an economy by elaborate analogy -- demand/output is analogous to energy, price to pressure and supply to volume. What does this help with? For one thing, it leads toward a way to introduce a chemical potential (which after writing this post, I realize might not be necessary). However, it also allows for a way to organize thought around microeconomic and macroeconomic forces (see e.g. here or here). Using the definitions here and here (and writing $N$ for nominal output, $X$ for goods with price $p$ -- which could be taken to be the price level $P$ but we'll leave separate for now, $T$ for the 'economic temperature' and $S$ for the 'economic entropy' -- the latter two being defined at the links), we have for a monetary economy:

$$ N = TS + \kappa P M + \alpha p X $$

$$ N \approx c N/\kappa + \kappa P M + \alpha p X $$

where we use Stirling's approximation (with large $N$, but small changes) and the definitions

$$ S \sim \log N! \approx N \log N - N \;\;\text{ and }\;\; 1/T \sim \log M $$

with $M$ being the money supply (empirically, base money minus reserves) and $\kappa = \kappa (N, M)$ being the information transfer index for the money market. Note that $\kappa \sim 1/T$ so that high $\kappa$ represents a low temperature economy and vice versa. More generally:

$$ N = T S + \kappa P M + \sum_{i} \alpha_{i} p_{i} X_{i} $$

where the sum is over the individual market "generalized forces" (microeconomic forces). For example, we can look at a simple model of an aggregate goods market $A$ and a labor market $L$:

$$ N = T S + \kappa P M + \alpha P A + \beta P L $$

... all prices for labor and goods are taken to be proportional to the price level. This allows us to organize microeconomic and macroeconomic forces:

$$ N = \underbrace{T S + \kappa P M}_{\text{macro forces}} + \underbrace{\alpha P A + \beta P L}_{\text{micro forces}} $$

In truth, the $P M$ component should probably be considered a microeconomic force (since it behaves like one for the most part) and only $TS$ -- the entropic forces -- should be considered macroeconomic forces. However, since $P M$ is a large component of the economy (and would likely be for a commodity money system as well, see footnote [1]) and policy-relevant, I'll keep it in. Understanding this distinction would point towards (using the separation from this earlier post about the financial and government sectors $F$ and $G$):

$$ \text{(1) } N = T S + \kappa P M + \alpha P A + \beta P L + \gamma P G + \epsilon i F $$

where $i$ is a general market index (e.g. the S&P500 could be used). This approach can be compared with the older approaches that use the definition of nominal output:

$$ N = C + I + G + NX $$

where we'd instead write (for example, assuming the prices are all proportional to the price level $P$):

$$ \text{(2) } N = a_{1} P C + a_{2} P I + a_{3} P G + a_{4} P NX $$

The $a_{i}$ are all constants. Comparing equations (1) and (2) we can see that they mostly just represent different partitions of nominal output. Equation (2) lacks an explicit monetary component, but the biggest difference is that it lacks an 'entropic' component $T S$. I'd visualize $T S$ as the additional gains in welfare from exchange -- exchange makes both parties better off and increases the value of whatever it is that is exchanged. Another topic that becomes clearer with the construction (1) is that of monetary vs Keynesian takes on macroeconomic stabilization.
In (1), it becomes clear that a change in $G$ could be offset by a change in the $\kappa P M$ term or even the $T S$ term in general. In practice it depends on the details of the model (specifically the value of $\kappa$ -- if it is near 1 changes in $M$ have limited impact, and if it is near 1/2 you have an almost ideal quantity theory of money). Additionally, the conditions that allow monetary offset of fiscal stimulus to occur also allow the monetary offset of the effects of a financial crisis. At least if (1) is a valid way to build an economy. This last piece is interesting -- it implies that financial crises cause bigger problems in a liquidity trap economy ($\kappa \sim 1$). Assuming the model is correct, the reason the global financial crisis was so bad was because it struck when $\kappa \sim 1$ for a large portion of the world economy: the EU, US, and Japan. Other financial crises (e.g. 1987 in the US or even the dot-com boom) struck at a time when $\kappa < 1$ and were better able to be offset by monetary policy. Footnotes: [1] Actually, the $\kappa P M$ component is like one of the goods markets and in e.g. a commodity money economy, it would be one (and entropy should be defined in terms of that good). However it may be more useful to separate it as a macroeconomic force as is done later in the post.
Definition:Class/Zermelo-Fraenkel

Definition

In $\textrm{ZF}$, classes are written using class builder notation: $\left\{{x : P \left({x}\right)}\right\}$

The following expressions are abbreviations:

$y \in \left\{ {x: P \left({x}\right)}\right\}$ stands for $P \left({y}\right)$

$\left\{ {x: P \left({x}\right)}\right\} \in y$ stands for $\exists z \in y: \forall x: \left({x \in z \iff P \left({x}\right)}\right)$

$\left\{ {x: P \left({x}\right)}\right\} \in \left\{ {y: Q \left({y}\right)}\right\}$ stands for $\exists z: \left({Q \left({z}\right) \land \forall x: \left({x \in z \iff P \left({x}\right)}\right)}\right)$

where: $x, y, z$ are variables of $\textrm{ZF}$ and $P, Q$ are propositional functions.

Through these "rules", every statement involving $\left\{{x : P \left({x}\right) }\right\}$ can be reduced to a simpler statement involving only the basic language of set theory. That such does not lead to ambiguity is proved on Class Membership Extension of Set Membership.

Class Variables

In deriving general results about $\textrm{ZF}$ which mention classes, it is often convenient to have class variables, which denote an arbitrary class. By convention, these variables are taken on $\mathsf{Pr} \infty \mathsf{fWiki}$ to be (the start of) the capital Latin alphabet, i.e. $A, B, C$, and so on.

Caution

Unlike in von Neumann-Bernays-Gödel set theory, it is prohibited to quantify over classes. That is, expressions like: $\forall A: P \left({A}\right)$ are ill-defined; admitting them without further consideration would lead us to Russell's Paradox.
I am having problems minimizing a potential:

$\text{V}(h,\eta)=\gamma \left(-h^2\right) \left(\eta ^2 \cos ^2(\theta )+\eta \cos (\delta ) \sqrt{-\eta ^2-h^2+1} \sin (2 \theta )+\left(-\eta ^2-h^2+1\right) \sin ^2(\theta )\right)$

(Input code below)

V = -h^2 γ (η^2 Cos[θ]^2 + (1 - h^2 - η^2) Sin[θ]^2 + η Sqrt[1 - h^2 - η^2] Cos[δ] Sin[2 θ])

I try to minimize it simply using:

sol = Solve[{D[V, h] == 0, D[V, η] == 0}, {h, η}]

It seems to solve quickly and without problems, giving me 9 solutions; these solutions depend on the parameters $\theta, \delta, \gamma$. However, if for example I try to calculate:

Chop[N[D[V,h] /. sol /. θ -> 1 /. δ -> 1 /. γ -> 1]]

the output it gives is:

{0, -0.104069, 0, 0, -0.484451, 0.104069, 0, 0, 0.484451}

As you can see, extrema 2, 5, 6 and 9 are not even close to zero! It seems Mathematica is solving this incorrectly, or maybe the solutions are only valid in specific regions (not the full domain)? Does anyone have any ideas? I thought maybe the problem was with the square root in the function (otherwise it's a simple polynomial), so I also tried the Lagrange multiplier method, taking a function $V(h,\eta,s)$ with $s=\sqrt{1-h^2-\eta^2}$, but it also doesn't give me all-zero values when I substitute the solutions into the Lagrange equation $\frac{\partial V}{\partial h}+\lambda \frac{\partial g}{\partial h}$, where $g=s-\sqrt{1-h^2-\eta^2}=0$ is the constraint applied to this function. Any ideas what's happening? Has anyone encountered this problem before?
I've been reading up on the construction of derived categories. I understand why we prefer localizing with respect to a localizing class of morphisms (to get a nice representation of morphisms as simple roofs thanks to the Ore conditions). Also, it's clear to me where the proof that quasi-isomorphisms form a localizing class goes wrong if we're in Kom(A) instead of K(A). Does anybody have a simple counterexample, i.e. a wedge of complexes $Y\overset{s}\rightarrow X\overset{f}\leftarrow Z$ with s a quasi-isomorphism and f a morphism of complexes, which we cannot complete to an Ore-square in the category of complexes? Suppose $A$ is an abelian category with enough injectives, and $M,N$ are objects of $A$ with $\operatorname{Ext}^i(M,N)\neq0$ for some $i>0$. Then a non-zero element of $\operatorname{Ext}^i(M,N)$ can be represented by a diagram $$N[i]\overset{s[i]}\rightarrow I_N[i]\overset{f}\leftarrow M,$$ where $s:N\to I_N$ is a quasi-isomorphism from $N$ to an injective resolution. But there can't be a diagram $$N[i]\overset{g}\leftarrow M'\overset{t}\rightarrow M$$ with $t$ a quasi-isomorphism and $ft=s[i]g$ in the category of complexes, as $ft$ would have to be non-zero (as it's non-zero in the derived category), but $ft$ can only be non-zero in degree zero, but $s[i]g$ can only be non-zero in degree $i$.
Suppose the Fresnel equations give us complex reflexion co-effcients $R_p$ and $R_s$ for $p$- and $s$-polarized light, respectively. Then the intensity reflexion co-efficient (power reflexion coefficient) for depolarized light is (in most cases): $\frac{1}{2} (|R_s|^2 + |R_p|^2)$ You do likewise for the transmission co-efficients, so that the transmitted power ratio is: $\frac{1}{2} (|T_s|^2 + |T_p|^2) = 1- \frac{1}{2} (|R_s|^2 + |R_p|^2)$ where $T_p$ and $T_s$ are the Fresnel equation-derived complex transmission co-efficients for $p$- and $s$-polarized light. Forming average square magnitudes like this is often called "incoherent summing". To understand fully how to do your calculation, you need to understand exactly what depolarized light is, and it has quite a complicated description: it is bound up with decoherence and partially coherent light, a topic which Born and Wolf in "Principles of Optics" give a whole chapter to. A classical description, roughly analogous to Born and Wolf's is as follows: if the transverse (normal to propagation) plane is the $x,y$ plane, then we represent the electric field at a point as: $\mathbf{E} = \left(\begin{array}{cc}E_x(t) \cos(\omega t + \phi_x(t))\\E_y(t) \cos(\omega t + \phi_y(t))\end{array}\right)$ where $\omega$ is the centre frequency and the phases $\phi_x(t)$, $\phi_y(t)$ and envelopes $E_x(t)$, $E_y(t)$ are stochastic processes, which can be as complicated as you like. The formulas I cite above just assume that: $E_x$, $E_y$ and $\phi$ behave like independent random variables, and They vary with time swiftly compared to your observation interval (the time interval whereover you gather light in a sensor to come up with an "intensity" measurement) but not so swiftly that the light's spectrum broadened so much that we cannot still think of the light as roughly monochromatic. A simple quantum description is actually conceptually clearer than Born and Wolf's classical one, as long as light states do not become entangled. Each photon can be thought of as a perfectly coherent wave propagating following Maxwell's equations. The Fresnel equations thus apply to each photon as they would to a perfectly coherent wave. For each photon, therefore, you calculate the intensity of reflexion and transmission, and then average this intensity over all photon polarization states - we assume the source is creating "random" pure states. Thus, suppose the Fresnel equations give us complex reflexion co-effcients $R_p$ and $R_s$ for $p$- and $s$-polarized light: the complex amplitude reflexion co-efficient for a general polarization state is then: $R(\alpha, \phi) = \alpha R_p e^{i \frac{\phi}{2}} + \sqrt{1-\alpha^2} R_s e^{-i \frac{\phi}{2}}$ where $\alpha \in [0, 1]$ and $\phi \in [0, 2\,\pi)$. Summing intensities over all values of $\phi$ (assuming all phases equally likely) yields: $\frac{1}{2\pi}\int\limits_0^{2\pi} \left(\alpha^2 |R_p|^2 + (1-\alpha^2) |R_s|^2 + 2 \alpha\sqrt{1-\alpha^2} |R_p| |R_s| \cos\phi\right)\mathrm{d}\phi = \alpha^2 |R_p|^2 + (1-\alpha^2) |R_s|^2$ and then summing intensities over all values of $\alpha^2$ (assuming the $\alpha^2$ is uniformly distributed in $[0,1]$) leaves the formulas above. This will not give a full picture for general entangled polarization states, when you have to resort to more general coherent and cross correlation functions to describe what is going on. Likewise for Born and Wolf's classical description. 
But it is an excellent first approximation and it is probably true to say that it is hard to arrange for it not to hold in the laboratory. Deviations from it are likely to be seen if you sample the light intensities over very short sampling intervals, when you will see complicated, extremely swift fluctuations in scattered and transmitted intensities, often following white noise processes.
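To illustrate the incoherent-summing recipe above, here is a small Python sketch (my own example, not the answerer's code) that computes the unpolarised power reflexion coefficient for a dielectric interface; the refractive indices and angle are arbitrary illustrative values, and only the lossless-dielectric case is handled, so that the transmitted fraction follows from energy conservation as 1 - R.

import numpy as np

def unpolarised_reflectance(n1, n2, theta_i):
    # R for depolarised light: average of |r_s|^2 and |r_p|^2 (lossless dielectrics)
    sin_t = n1 * np.sin(theta_i) / n2           # Snell's law
    if abs(sin_t) > 1:                          # total internal reflection
        return 1.0
    theta_t = np.arcsin(sin_t)
    ci, ct = np.cos(theta_i), np.cos(theta_t)
    r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
    return 0.5 * (abs(r_s)**2 + abs(r_p)**2)

R = unpolarised_reflectance(1.0, 1.5, np.radians(45))
print(R, 1 - R)   # reflected and transmitted power fractions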
I am having a bit of difficulty in trying to understand a paper. The paper uses a spectral method to solve for an eigenvalue that comes from a system of coupled ODEs. I will write out only one equation now, because it is enough to get to the crux of my question(s). The equation is

$V[r] = \frac{e^{-(\nu[r] +\lambda[r])}}{\epsilon[r] + p[r]} \biggl[ (\epsilon[r] + p[r])( e^{\nu[r] +\lambda[r]})r W[r] \biggr]'$

I carry out the derivative and get

(Eq1) $V = \biggl[ \frac{r(\epsilon' +p')}{\epsilon + p} + r(\nu'+\lambda') +1 \biggr] W + r W'$

Now according to the paper I should be able to expand equilibrium quantities ($\epsilon ,p ,\nu ,\lambda$) of the system as Chebyshev polynomials of the form $B[r] = \Sigma_{i=0}^{\infty}b_i T_i[y] - \frac{1}{2} b_0 $, where $T_i[y]$ are the polynomials. I know how to get the $b_i$ using code I wrote in Mathematica. Also $y = 2(r/R) -1$, and the domain of $r$ is $(0,R)$. The paper also states that the functions ($V,W$) can be expanded as $F[r] = (\frac{r}{R})^l \Sigma_{i=0}^{\infty}f_i T_i[y] - \frac{1}{2} f_0 $, and that in general a term like $B[r]F[r]$ can be expressed as

$B[r]F[r] = \frac{1}{2} (\frac{r}{R})^l \Sigma_{i=0}^{\infty} \pi_i T_i[y] - \frac{1}{2} \pi _0 $

where $\pi_i = \Sigma_{j=0}^{\infty}[b_{i+j} + \Theta(j-1)b_{|i-j|} ] f_j$ and $\Theta(k) = 0$ for $k<0$ and equals 1 for $k\geq 0$. With all that being said, let's say I define the equilibrium functions $\frac{r(\epsilon' +p')}{\epsilon + p} = B_1[r] $ and $ r(\nu'+\lambda') = B_2[r]$. Then Eq1 becomes

(Eq2) $ \bigg((\frac{r}{R})^l \Sigma_{i=0}^{\infty}v_i T_i[y] - \frac{1}{2} v_0 \bigg) = \biggl[ B_1[r] + B_2[r] +1 \biggr] \biggl( (\frac{r}{R})^l \Sigma_{i=0}^{\infty}w_i T_i[y] - \frac{1}{2} w_0 \biggr) + r W' $.

Question 1: What do I do with the $(\frac{r}{R})^l$ terms? The polynomials are functions of $y$, so how can I even have an expansion like $B[r]= (\frac{r}{R})^l \times$ (a function of $y$)? Also it seems like I can just divide them out on each side of the equation, so what was the point of introducing that term? I mean, according to the paper this term is supposed to impose the boundary condition that $V,W$ go to zero as $r$ goes to zero.

Question 2: How am I supposed to deal with the $r$ in the $r W'$ term? The paper gives a description of how to handle derivative terms, but what about the $r$ itself? Am I supposed to treat it like an equilibrium quantity and use the rule for terms like $B[r]F[r]$, or should I express this $r$ in terms of $y$? Or should I do something else altogether?
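I am not sure this matches the paper's exact convention (the paper halves the $i=0$ coefficient, whereas NumPy does not), but as a sanity check on the product-of-expansions step, something along these lines confirms that multiplying two Chebyshev series with the $T_i T_j = \frac{1}{2}(T_{i+j}+T_{|i-j|})$ linearisation rule reproduces the pointwise product (the two test functions are arbitrary illustrative choices):

import numpy as np
from numpy.polynomial import chebyshev as C

# Two smooth "equilibrium" functions on y in [-1, 1]
y = np.linspace(-1, 1, 200)
b = C.chebfit(y, np.exp(-y**2), 20)   # Chebyshev coefficients of B(y)
f = C.chebfit(y, np.cos(3 * y), 20)   # Chebyshev coefficients of F(y)

# Product series via the Chebyshev linearisation rule
prod = C.chebmul(b, f)

# Compare against the pointwise product
err = np.max(np.abs(C.chebval(y, prod) - np.exp(-y**2) * np.cos(3 * y)))
print("max error:", err)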
How can I generate the higher-$n$ quantum harmonic oscillator wavefunctions (in position space) numerically? Here, higher means around $n=500$, or say $n=2000$, where $n$ labels the $n$th oscillator wavefunction. The position-space wavefunction of the $n$th state involves Hermite polynomials of order $n$ (see Griffiths' book for the detailed form of the solution). I can generate wavefunctions only up to $n\approx 160$ using the 'gsl' library in C, because the library doesn't offer Hermite polynomials of order higher than $\approx 170$. To get around this limitation, I found that the higher-$n$ wavefunctions can be obtained by using the raising operator, acting with it repeatedly on the previous wavefunction to get the next wavefunction, and so on.

Raising operator: $a^{\dagger} = \frac{1}{\sqrt{2}} \left(x - \frac{d}{dx}\right)$. (Consider the constants in the expression to be 1.)

In the expression, the derivative term, i.e., $d/dx$, needs to be evaluated. It can be computed either by the finite-difference method or by using a Fourier transform to get the 1st derivative (which is more accurate than the finite-difference method). I wrote a Matlab program for $a^{\dagger}$ (shown below) using the 2nd method, i.e., the Fourier transform. After evaluation of states greater than $n=10$, the program becomes unstable, and it gets even worse for higher $n$ (an instability plot is shown below in the link). The same issue appeared when I used the finite-difference method to evaluate the derivative (code is not attached). A solution to the issue or an alternative approach to the problem would be helpful.

UPDATE: I'm also interested to know the reason for the development of the instability while using 'fft' in the code.

sigma = 1.0;
xmin = -10.0;
xmax = 10.0;
npts = 512;
nstates = 14;
dx = (xmax-xmin)/npts;
x = xmin + dx*(0:npts-1);

% -- initial state/wavefunction
psi_init = exp(-0.5*x.^2/sigma^2)*(pi*sigma^2)^(-0.25);
psi = zeros(nstates,npts);   % -- list to store oscillator states
psi(1,:) = psi_init;

for nn=2:nstates
    psi(nn,:) = raising_psi(psi(nn-1,:),xmin,xmax,npts,sigma);
end

function adag_fn = raising_psi(previous_fn,a,b,n,sigma)   % -- raising_operator
    dx = (b-a)/n;
    x = a + dx*(0:n-1);
    % -- going into fourier space using 'fft' library
    fwd_fft = fft(previous_fn);
    k = (2*pi/(b-a))*[0:n/2-1,0,-n/2+1:-1];
    dfk_dx = 1i*k.*fwd_fft;   % -- 1st derivative in fourier space
    df_dx = ifft(dfk_dx);     % -- back into position space
    % -- a^dagger acting on the previous state
    adag_fn = (x.*previous_fn/sigma - sigma*real(df_dx))/sqrt(2);
    norm_fn = adag_fn*transpose(adag_fn);
    adag_fn = adag_fn/sqrt(norm_fn);   % -- normalization
end
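One route around both the overflow and the repeated-FFT round-off build-up (offered as a sketch, not as a drop-in fix for the code above) is the standard three-term recurrence for the normalised oscillator eigenfunctions, which never forms the raw Hermite polynomials: with ħ = m = ω = 1, ψ_{n+1}(x) = x√(2/(n+1)) ψ_n(x) − √(n/(n+1)) ψ_{n−1}(x), so every ψ_n stays of order one. A Python version:

import numpy as np

def oscillator_psi(n, x):
    # Normalised harmonic-oscillator eigenfunction psi_n(x) for hbar = m = omega = 1
    psi_prev = np.zeros_like(x)
    psi = np.pi**-0.25 * np.exp(-0.5 * x**2)       # psi_0
    for k in range(n):
        psi, psi_prev = (x * np.sqrt(2.0 / (k + 1)) * psi
                         - np.sqrt(k / (k + 1.0)) * psi_prev), psi
    return psi

x = np.linspace(-80, 80, 40001)
p = oscillator_psi(2000, x)
dx = x[1] - x[0]
print(np.sum(p**2) * dx)   # ~1.0 if the grid covers the classically allowed region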
We will describe the economic laws of supply and demand as the result of an information transfer model. Much of the description of the information transfer model follows [1]. Following Shannon [3] we have a system that transfers information $I_q$ from a source $q$ to a destination $u$ (see figure above). Any process can at best transfer complete information, so we know that $I_u \leq I_q$. We will follow [1] and use the Hartley definition of information $I= K^s n$ where $K^s=K^0 \log s$ where $s$ is the number of symbols and $K^0$ defines the unit of information (e.g. $1/\log 2$ for bits). If we take a rod of length $q$ (process source) and subdivide it in to segments $\delta q$ (process source signal) then $n_q=q/\delta q$ and we get (defining $\kappa = K_{u}^{s}/K_q^s$ the ideal transfer index) $$(1) \space \kappa \frac{u}{\delta u} \leq \frac{q}{\delta q}$$ Compared to paper [1], we have dropped the absolute values in order to deal with positive quantities $q$, $u$ (and changed some of the notation, e.g. $\Delta q \rightarrow q$). Now we define a process signal detector that relates the process source signal $\delta q$ emitted from the process source $q$ to a process destination signal $\delta u$ that is detected at the process destination $u$ and delivers an output value: $$(2) \space p =\left(\frac{\delta q}{\delta u}\right)_\text{detector}$$ If our source and destination are large compared to our signals ($n_q , n_u \gg 1$) we can take $\delta q \rightarrow dq$, we can re-arrange the information transfer condition: $$(3) \space p=\frac{dq}{du} \leq \frac{1}{\kappa} \frac{q}{u}$$ Next, we derive supply and demand using this model. References [1] Information transfer model of natural processes: from the ideal gas law to the distance dependent redshift P. Fielitz, G. Borchardt http://arxiv.org/abs/0905.0610v2 [2] http://en.wikipedia.org/wiki/Gronwall's_inequality [3] http://en.wikipedia.org/wiki/Noisy_channel_coding_theorem#Mathematical_statement [4] http://en.wikipedia.org/wiki/Entropic_force [5] http://en.wikipedia.org/wiki/Sticky_(economics)
Logblog: Richard Zach's Logic Blog

Last week I gave my decision problem talk at Berkeley. I briefly mentioned the 1917/18 Hilbert/Bernays completeness proof for propositional logic. It (as well as Post's 1921 completeness proof) made essential use of the provable equivalence of a formula with its conjunctive normal form. Dana Scott asked who first gave (something like) the following simple completeness proof for propositional logic:

We want to show that a propositional formula is provable from a standard axiomatic set-up iff it is a tautology. A simple corollary will show that if it is not provable, then adding it as an axiom will make all formulae provable.

If a formula is provable, then it is a tautology. This is because the axioms are tautologies and the rules of inference (substitution and detachment) preserve being a tautology; the argument is by induction on the length of the proof.

If a formula is not provable, then it is not a tautology. We need three lemmata about provable formulae. The symbols $T$ and $F$ are for true and false. We write negation here as $\lnot p$.

(i) $\vdash p \rightarrow [\phi(p) \leftrightarrow \phi(T)]$.

(ii) $\vdash \lnot p \rightarrow [\phi(p) \leftrightarrow \phi(F) ]$.

(iii) If $\phi$ has no propositional variables, then either $\vdash \phi \leftrightarrow T$ or $\vdash \phi \leftrightarrow F$.

All these are proved by induction on the structure of $\phi$ and require checking principles of substitutivity of equivalences. And I am claiming here this is less work than formulating and proving how to transform formulae into conjunctive normal form.

From (i) and (ii) it follows that: \[\vdash \phi(p) \leftrightarrow [ [ p \rightarrow \phi(T) ] \land [\lnot p \rightarrow \phi(F)] ],\] because we can show generally: \[\vdash \psi \leftrightarrow [ [ p \rightarrow \psi ] \land [ \lnot p \rightarrow \psi ] ].\]

Thus, we can conclude: if $\vdash \phi(T)$ and $\vdash \phi(F)$, then $\vdash \phi(p)$. Hence if $\phi(p)$ is not provable, then one of $\phi(T)$, $\phi(F)$ is not provable. So, if a formula $\phi$ has no propositional variables and is not provable, then by (iii), $\vdash \phi \leftrightarrow F$. So it is not a tautology. Arguing now by induction on the number of propositional variables in the formula, if $\phi(p)$ is not provable, then one of $\phi(T)$, $\phi(F)$ is not a tautology. And so $\phi(p)$ is not a tautology. QED

I don't know the answer. Do you?

The only thing it reminded me of was this old paper which shows that all tautologies in $n$ variables can be proved in $f(n)$ steps using the schema of equivalence. It uses a similar idea: evaluate formulas without variables to truth values, and then inductively generate the tautologies by induction on the number of variables and excluded middle. You could turn that proof into a completeness proof by establishing for a given calculus that the required equivalences and formulas are provable. Dana's proof is a lot simpler, though. Thanks to him for allowing me to post his question here.
Subgroup Action is Group Action

Theorem

Let $\struct {G, \circ}$ be a group. Let $\struct {H, \circ}$ be a subgroup of $G$. Let $*: H \times G \to G$ be the subgroup action defined for all $h \in H, g \in G$ as: $\forall h \in H, g \in G: h * g := h \circ g$

Then $*$ is a group action.

Proof

Let $g \in G$. First we note that since $H \subseteq G$ and $G$ is closed under $\circ$, we have $h \circ g \in G$, and it follows that $h * g \in G$.

Next we note: $e * g = e \circ g = g$ and so Group Action Axiom $GA \, 2$ is satisfied.

Now let $h_1, h_2 \in H$. We have:

\(\displaystyle \paren {h_1 \circ h_2} * g = \paren {h_1 \circ h_2} \circ g\) (Definition of $*$) \(\displaystyle = h_1 \circ \paren {h_2 \circ g}\) (Group Axiom $G \, 1$: Associativity) \(\displaystyle = h_1 * \paren {h_2 * g}\) (Definition of $*$)

and so Group Action Axiom $GA \, 1$ is satisfied. Hence the result.

$\blacksquare$
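A concrete instance (purely illustrative, not part of the proof): take $G = \Z_6$ under addition modulo 6 and the subgroup $H = \set {0, 2, 4}$, and check both group action axioms by brute force.

# G = Z_6 under addition mod 6, H = {0, 2, 4} acting on G by h * g = h + g (mod 6)
G = range(6)
H = [0, 2, 4]
act = lambda h, g: (h + g) % 6

# GA1: (h1 o h2) * g = h1 * (h2 * g)
ga1 = all(act((h1 + h2) % 6, g) == act(h1, act(h2, g)) for h1 in H for h2 in H for g in G)
# GA2: e * g = g, with e = 0
ga2 = all(act(0, g) == g for g in G)
print(ga1, ga2)   # True True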
Droop Control

Droop control is a control strategy commonly applied to generators for primary frequency control (and occasionally voltage control) to allow parallel generator operation (e.g. load sharing).

Contents

Background

Recall that the active and reactive power transmitted across a lossless line are:

[math]P = \frac{V_{1} V_{2}}{X} \sin\delta [/math]

[math]Q = \frac{V_{2}}{X} (V_{2} - V_{1} \cos\delta) [/math]

Since the power angle [math]\delta \,[/math] is typically small, we can simplify this further by using the approximations [math]\sin\delta \approx \delta \,[/math] and [math]\cos\delta \approx 1 \,[/math]:

[math]\delta \approx \frac{PX}{V_{1} V_{2}} [/math]

[math](V_{2} - V_{1}) \approx \frac{QX}{V_{2}} [/math]

From the above, we can see that active power has a large influence on the power angle and reactive power has a large influence on the voltage difference. Restated, by controlling active and reactive power, we can also control the power angle and voltage. We also know from the swing equation that frequency in synchronous power systems is related to the power angle, so by controlling active power, we can therefore control frequency. This forms the basis of frequency and voltage droop control, where active and reactive power are adjusted according to linear characteristics, based on the following control equations:

[math]f = f_{0} - r_{p} (P - P_{0}) \, [/math] ... Eq. 1

[math]V = V_{0} - r_{q} (Q - Q_{0}) \, [/math] ... Eq. 2

where

[math]f \, [/math] is the system frequency (in per unit)
[math]f_{0} \, [/math] is the base frequency (in per unit)
[math]r_{p} \, [/math] is the frequency droop control setting (in per unit)
[math]P \, [/math] is the active power of the unit (in per unit)
[math]P_{0} \, [/math] is the base active power of the unit (in per unit)
[math]V \, [/math] is the voltage at the measurement location (in per unit)
[math]V_{0} \, [/math] is the base voltage (in per unit)
[math]Q \, [/math] is the reactive power of the unit (in per unit)
[math]Q_{0} \, [/math] is the base reactive power of the unit (in per unit)
[math]r_{q} \, [/math] is the voltage droop control setting (in per unit)

These two equations are plotted in the characteristics below:

The frequency droop characteristic above can be interpreted as follows: when frequency falls from [math]f_{0}[/math] to [math]f[/math], the power output of the generating unit is allowed to increase from [math]P_{0}[/math] to [math]P[/math]. A falling frequency indicates an increase in loading and a requirement for more active power. Multiple parallel units with the same droop characteristic can respond to the fall in frequency by increasing their active power outputs simultaneously. The increase in active power output will counteract the reduction in frequency, and the units will settle at active power outputs and a frequency corresponding to a steady-state point on the droop characteristic. The droop characteristic therefore allows multiple units to share load without the units fighting each other to control the load (called "hunting"). The same logic above can be applied to the voltage droop characteristic.

Alternative Droop Equations

The basic per-unit droop equations in Eq. 1 and Eq.
2 above can be expressed in natural quantities and in terms of deviations as follows:

[math]r_{p} = \frac{\Delta f}{\Delta P} \times \frac{P_{n}}{f_{n}} [/math]

[math]r_{q} = \frac{\Delta V}{\Delta Q} \times \frac{Q_{n}}{V_{n}} [/math]

where

[math]\Delta f \, [/math] is the frequency deviation (in Hz)
[math]f_{n} \, [/math] is the nominal frequency (in Hz), e.g. 50 or 60 Hz
[math]\Delta P \, [/math] is the active power deviation (in kW or MW)
[math]P_{n} \, [/math] is the rated active power of the unit (in kW or MW)
[math]r_{p} \, [/math] is the frequency droop control setting (in per unit)
[math]\Delta V \, [/math] is the voltage deviation at the measurement location (in V)
[math]V_{n} \, [/math] is the nominal voltage (in V)
[math]\Delta Q \, [/math] is the reactive power deviation (in kVAr or MVAr)
[math]Q_{n} \, [/math] is the rated reactive power of the unit (in kVAr or MVAr)
[math]r_{q} \, [/math] is the voltage droop control setting (in per unit)

Droop Control Setpoints

Droop settings are normally quoted in % droop. The setting indicates the percentage amount the measured quantity must change to cause a 100% change in the controlled quantity. For example, a 5% frequency droop setting means that for a 5% change in frequency, the unit's power output changes by 100%. This means that if the frequency falls by 1%, the unit with a 5% droop setting will increase its power output by 20%. The short video below shows some examples of frequency (speed) droop:

Limitations of Droop Control

Frequency droop control is useful for allowing multiple generating units to automatically change their power outputs based on dynamically changing loads. However, consider what happens when there is a significant contingency such as the loss of a large generating unit. If the system remains stable, all the other units would pick up the slack, but the droop characteristic allows the frequency to settle at a steady-state value below its nominal value (for example, 49.7Hz or 59.7Hz). Conversely, if a large load is tripped, then the frequency will settle at a steady-state value above its nominal value (for example, 50.5Hz or 60.5Hz). Other controllers are therefore necessary to bring the frequency back to its nominal value (i.e. 50Hz or 60Hz), which are called secondary and tertiary frequency controllers.
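To make the 5% droop example above concrete, here is a small sketch (illustrative numbers only) of Eq. 1 rearranged for the change in power output:

# Frequency droop response: delta_P (per unit) = -(delta_f in per unit) / r_p
f_nom = 50.0       # nominal frequency in Hz
r_p = 0.05         # 5% droop setting

def power_change_pu(f_measured_hz):
    df_pu = (f_measured_hz - f_nom) / f_nom
    return -df_pu / r_p

print(power_change_pu(49.5))   # 1% under-frequency -> +0.20 pu (20% of rating)
print(power_change_pu(47.5))   # 5% under-frequency -> +1.00 pu (100% of rating)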
I am trying to understand this paper: http://link.aps.org/doi/10.1103/PhysRevLett.99.236809 (Here is an arXiv version: http://arxiv.org/abs/0709.1274) In the introduction, they mention certain symmetry arguments (the two paragraphs in the second column of the first page). Unfortunately, I am ill-equipped to understand these symmetry arguments. Would it be possible for an expert to walk me through these two paragraphs? I am sorry if this is a poorly worded question (this is my first post here). -- As per the comments, I am copying the relevant paragraphs here: ``Before starting specific calculations, it will be instructive to make some general symmetry analysis. A valley contrasting magnetic moment has the relation $ \mathfrak{m}_v=\chi \tau_z $, where $\tau_z = \pm 1$ labels the two valleys and $\chi$ is a coefficient characterizing the material. Under time reversal, $\mathfrak{m}_v$ changes sign, and so does $\tau_z$ (the two valleys switch when the crystal momentum changes sign). Therefore, $\chi$ can be non-zero even if the system is non-magnetic. Under spatial inversion, only $\tau_z$ changes sign. Therefore $\mathfrak{m}_v$ can be nonzero only in systems with broken inversion symmetry. Inversion symmetry breaking simultaneously allows a valley Hall effect, with $\mathbf j^v = \sigma^v_H \hat{\mathbf z} \times \mathbf E$, where $\sigma^v_H$ is the transport coefficient (valley Hall conductivity), and the valley current $\mathbf j^v$ is defined as the average of the valley index times the velocity operator. Under time reversal, both the valley current and electric field are invariant. Under spatial inversion, the valley current is still invariant but the electric field changes sign. Therefore, the valley Hall conductivity can be non-zero when the inversion symmetry is broken, even if the time reversal symmetry remains.''
I defined such a model for stock price (1)....

$$dS = \mu\ S\ dt + \sigma\ S\ dW + \rho\ S(dH - \mu) $$

, where $H$ is a so-called "resettable Poisson process" defined as (2)....

$$dH(t) = dN_{\lambda}(t) - H(t-)dN_{\eta}(t) $$

, and $\mu := \frac{\lambda}{\eta}$. Is it possible to derive some analytic results similar to the Black-Scholes equation (3)?

(3).... $$ \frac{\partial V}{\partial t} + r\ S \frac{\partial V}{\partial S} + \frac{\sigma^2S^2}{2}\frac{\partial^2 V}{\partial S^2} - r\ V = 0$$

Even better, could we derive something similar to the Black-Scholes formula for call/put option prices? I tried but failed.

In the classic GBM model, to get the Black-Scholes equation (3), the essential steps are:

By Ito's lemma, (4)... $$df = (f_t+\mu f_x+\sigma^2/2\cdot f_{xx})dt + \sigma f_x dW$$

Based on the GBM stock price model, (5)... $$dS = \mu S dt + \sigma S dW$$

We have (6).... $$dV = \left( \frac{\partial V}{\partial t} +\mu S \frac{\partial V}{\partial S}+\frac{\sigma^2 S^2}{2}\frac{\partial^2 V}{\partial S^2}\right)dt + \sigma S \frac{\partial V}{\partial S}dW$$

Putting (5) in (6) again we have (7).... $$dV - \frac{\partial V}{\partial S}dS = \left( \frac{\partial V}{\partial t} +\frac{\sigma^2 S^2}{2}\frac{\partial^2 V}{\partial S^2}\right)dt $$

Then we can define (8)... $$\Pi = V - \frac{\partial V}{\partial S} S$$

, so the LHS of (7) is just $d\Pi$ and it's not related to any random effect, so we have (9)... $$d\Pi = r \Pi dt$$

and then we get (3).

After I introduce the "resettable Poisson process" $H_{\lambda, \eta}(t)$ in the model, I couldn't find a way to cancel both the $dW$ and $dN$.... Do you know how to solve this? Any suggestions are appreciated, I'm stuck here...
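As an aside on definition (2) (my own illustrative sketch, with arbitrary intensities): $H$ jumps up by one at the arrivals of the intensity-$\lambda$ process $N_{\lambda}$ and is reset to zero at the arrivals of the independent intensity-$\eta$ process $N_{\eta}$, so its long-run average should be close to $\mu = \lambda/\eta$. A crude Bernoulli-thinning simulation to check this:

import numpy as np

rng = np.random.default_rng(0)
lam, eta = 3.0, 1.0          # illustrative intensities
T, dt = 500.0, 1e-3
n = int(T / dt)

H = 0
total, count = 0.0, 0
for _ in range(n):
    if rng.random() < eta * dt:      # reset event from N_eta
        H = 0
    if rng.random() < lam * dt:      # up-jump from N_lambda
        H += 1
    total += H
    count += 1

print(total / count, lam / eta)      # time-average of H vs mu = lambda/eta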
The bounty period lasts 7 days. Bounties must have a minimum duration of at least 1 day. After the bounty ends, there is a grace period of 24 hours to manually award the bounty. Simply click the bounty award icon next to each answer to permanently award your bounty to the answerer. (You cannot award a bounty to your own answer.) @Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases. @TheSimpliFire That´s what I´m thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it, it is really "too elementary", but I like surprises, if they´re good. It is in fact difficult, I did not understand all the details either. But the ECM-method is analogue to the p-1-method which works well, then there is a factor p such that p-1 is smooth (has only small prime factors) Brocard's problem is a problem in mathematics that asks to find integer values of n and m for whichn!+1=m2,{\displaystyle n!+1=m^{2},}where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan.== Brown numbers ==Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers:(4,5), (5,11... $\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474. Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ has at most $k$ digits, $m^2-1$ has only at most $2k$ digits by the conditions in the Corollary. The Corollary if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. 
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better) @TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P Yet again a conjecture!Motivated by Catalan's conjecture and a recent question of mine, I conjecture thatFor distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$It is of anticipation that there will be much fewer solutions for incr...
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such simulator is the computational cost. GR calculations with its highly nonlinear nature, might be more costy than a computation of a protein. However I can see some ways approaching it. Recall how Slereah was building some kind of spaceitme database, that could be the first step. Next, one might be looking for machine learning techniques to help on the simulation by using the classifications of spacetimes as machines are known to perform very well on sign problems as a recent paper has shown Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible the solution strategy has some relation with the class of spacetime that is under consideration, thus that might help heavily reduce the parameters need to consider to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I always can talk about things in a non joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (consicous desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream still yet to show any sign of revisiting the exact same idea, and there are no known instance of either sequel dreams nor recurrence dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go? enumaris: From what I understood from the dream, the warp drive showed here may be some variation of the alcuberrie metric with a global topology that has 4 holes in it whereas the original alcuberrie drive, if I recall, don't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? 
(one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging) Well. I'm a begginer in the study of General Relativity ok? My knowledge about the subject is based on books like Schutz, Hartle,Carroll and introductory papers. About quantum mechanics I have a poor knowledge yet. So, what I meant about "Gravitational Double slit experiment" is: There's and gravitational analogue of the Double slit experiment, for gravitational waves? @JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes GWs would interfere just like light wave. Thank you @Secret and @JohnRennie . But for conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then like water waves, and light, we will see the pattern? So, if the source (like a Black Hole binary) are sufficent away, then in the regions of destructive interference, space-time would have a flat geometry and then with we put a spherical object in this region the metric will become schwarzschild-like. if** Pardon, I just spend some naive-phylosophy time here with these discussions** The situation was even more dire for Calculus and I managed! This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying. My opinion is that I need you kaumudi to decrease the probabilty of h bar having software system infrastructure conversations, which confuse me like hell and is why I take refugee in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school ,regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We are also taught the 4 spacebar indentation convention @JohnRennie I wish I can just tab because I am also lazy, but sometimes tab insert 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write matlab code online(for free)? 
Apparently another one of my institutions great inspirations is to have a matlab-oriented computational physics course without having matlab on the universities pcs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa. @Blue Ok, thanks. I found a way by connecting to the servers of the university( the program isn't installed on the pcs on the computer room, but if I connect to the server of the university- which means running remotely another environment, i found an older version of matlab). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Does anyone here understand why he set the Velocity of Center Mass = 0 here? He keeps setting the Velocity of center mass , and acceleration of center mass(on other questions) to zero which i dont comprehend why? @amanuel2 Yes, this is a conservation of momentum question. The initial momentum is zero, and since there are no external forces, after she throws the 1st wrench the sum of her momentum plus the momentum of the thrown wrench is zero, and the centre of mass is still at the origin. I was just reading a sci-fi novel where physics "breaks down". While of course fiction is fiction and I don't expect this to happen in real life, when I tired to contemplate the concept I find that I cannot even imagine what it would mean for physics to break down. Is my imagination too limited o... The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing, in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product.The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe... not exactly identical however Also typo: Wavefunction does not really have an energy, it is the quantum state that has a spectrum of energy eigenvalues Since Hamilton's equation of motion in classical physics is $$\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla H(x,p) \, ,$$ why does everyone make a big deal about Schrodinger's equation, which is $$\frac{d}{dt} \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat H \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} \, ?$$ Oh by the way, the Hamiltonian is a stupid quantity. We should always work with $H / \hbar$, which has dimensions of frequency. @DanielSank I think you should post that question. I don't recall many looked at the two Hamilton equations together in this matrix form before, which really highlight the similarities between them (even though technically speaking the schroedinger equation is based on quantising Hamiltonian mechanics) and yes you are correct about the $\nabla^2$ thing. I got too used to the position basis @DanielSank The big deal is not the equation itself, but the meaning of the variables. The form of the equation itself just says "the Hamiltonian is the generator of time translation", but surely you'll agree that classical position and momentum evolving in time are a rather different notion than the wavefunction of QM evolving in time. If you want to make the similarity really obvious, just write the evolution equations for the observables. The classical equation is literally Heisenberg's evolution equation with the Poisson bracket instead of the commutator, no pesky additional $\nabla$ or what not The big deal many introductory quantum texts make about the Schrödinger equation is due to the fact that their target audience are usually people who are not expected to be trained in classical Hamiltonian mechanics. No time remotely soon, as far as things seem. Just the amount of material required for an undertaking like that would be exceptional. 
It doesn't even seem like we're remotely near the advancement required to take advantage of such a project, let alone organize one. I'd be honestly skeptical of humans ever reaching that point. It's cool to think about, but so much would have to change that trying to estimate it would be pointless currently (lol) talk about raping the planet(s)... re dyson sphere, solar energy is a simplified version right? which is advancing. what about orbiting solar energy harvesting? maybe not as far away. kurzgesagt also has a video on a space elevator, its very hard but expect that to be built decades earlier, and if it doesnt show up, maybe no hope for a dyson sphere... o_O BTW @DanielSank Do you know where I can go to wash off my karma? I just wrote a rather negative (though well-deserved, and as thorough and impartial as I could make it) referee report. And I'd rather it not come back to bite me on my next go-round as an author o.o
Maybe it helps you to see the way to attack the problem more easily. The first thing you could try is to find, or understand, the recursive relation behind the Floyd-Warshall algorithm, namely the following function $f$: $$f(u,v,k) = \begin{cases}w_{u,v} & k = 0\\\min\, (\ f(u,v, k-1),\ f(u,k,k-1) + f(k,v,k-1)) & \text{otherwise}\end{cases}$$ Here $w_{u,v}$ is the weight of the edge from $u$ to $v$ in the directed graph $G$, or $\infty$ if that edge doesn't exist, and $f(u,v,k)$ is the weight of a shortest path from $u$ to $v$ when only the first $k$ vertices are allowed as intermediates. To adapt this function to your problem, I sketched the essence of the problem in a small graph (edges $m_i\in E_0$ drawn as normal lines, edges $l_i\in E_1$ drawn as dashed lines). The curly curve is the shortest path considering the first $k-1$ vertices. You have an alternate path that uses vertex $k$ when you see the pattern (normal-$k$-dashed) in the path, or, with the convention above, a (dashed-$k$-normal) pattern, i.e. an $(m_i, l_j)^*$ or $(l_i, m_j)^*$ pattern in general. The next step is to construct the recursive relation by looking at the connection between subproblems; for that, assume for a moment that your function already solves the problem for smaller instances (the principle of optimality): $$f(u,v,k,i) = \begin{cases}w_{u,v,i} & k = 0\\\min\, (\ f(u,v, k-1,i),\ f(u,k,k-1,\bar i) + f(k,v,k-1,i)) & \text{otherwise}\end{cases}$$ Here $w_{u,v,i}$ is the weight of the edge in $E_i$ (note that I changed the enumeration of $E$ for convenience), or $\infty$ if that edge doesn't exist; $f(u,v,k,i)$ is the weight of a shortest path that ends with an edge in $E_i$ and uses only the first $k$ vertices as intermediates, as in the sketch above; and $\bar i = 0$ if $i = 1$, and $\bar i = 1$ if $i = 0$. So the weight of the shortest alternating path from $u$ to $v$ is $$\min \,(\, f(u, v, |V|, 0),\ f(u, v, |V|, 1) )$$ where $|V|$ is the number of vertices. Of course, after some work, you may notice that it is possible to drop the parameter $k$ in an iterative implementation, or to use memoization. I am omitting some other details (it's late). I wrote some Python code and an input test file you can play with, but that's not essential here; a sketch of the plain recursion follows.
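For concreteness, here is a minimal, self-contained sketch of the plain Floyd-Warshall recursion $f(u,v,k)$ described above, with the parameter $k$ dropped in favour of an in-place table update. The function name and the adjacency-matrix input convention are mine; extending it with the extra index $i$ for the two edge sets is left exactly as sketched in the answer.

```python
import math

def floyd_warshall(w):
    """Plain Floyd-Warshall over an adjacency matrix w, where w[u][v] is the
    weight of the edge u -> v (math.inf if the edge does not exist).
    Returns d with d[u][v] = weight of a shortest path from u to v."""
    n = len(w)
    d = [row[:] for row in w]          # d[u][v] plays the role of f(u, v, k)
    for u in range(n):
        d[u][u] = min(d[u][u], 0)      # the empty path
    for k in range(n):                 # now allow vertex k as an intermediate
        for u in range(n):
            for v in range(n):
                if d[u][k] + d[k][v] < d[u][v]:
                    d[u][v] = d[u][k] + d[k][v]
    return d

INF = math.inf
w = [[0, 5, INF],
     [INF, 0, 2],
     [1, INF, 0]]
print(floyd_warshall(w))   # [[0, 5, 7], [3, 0, 2], [1, 6, 0]]
```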
I'm looking for a hint for solving the following problem: Given an array $A[1, \dots, k]$ of integers where $A[1] < A[2] < \dots < A[k]$, write pseudocode for an algorithm that determines whether $A[j]=j$ for some $j\in \{1,\dots,k\}$. The worst-case running time should be $\Theta(\log k)$. I am supposed to find an easy-to-compute criterion that locates such an index by exploiting the special structure of $A$. Let's say $A = [-3 , 2 , 4, 5, 7, 8, 9, 10]$. My first approach would be to subdivide $A$ into two parts around the median (here $A[4]$). Since $A[4] = 5 > 4$ and the sequence keeps increasing, I don't have to look at $A[4,\dots,8]$. My criterion would then be: for a subarray $A[a, \dots, b]$, if $A[a] > a$, then the solution can't be in that subarray, since $A[a] < \dots < A[b]$ are integers. Am I going in the right direction?
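For reference, a minimal sketch of the binary search this criterion suggests (function name mine). It relies on the stated assumption that the entries are strictly increasing integers, so the difference $A[j]-j$ is non-decreasing and one comparison at the midpoint tells you which half to discard:

```python
def has_fixed_point(A):
    """Return True if A[j] == j for some j (1-based, as in the problem),
    assuming A is a strictly increasing array of integers.  O(log k)."""
    lo, hi = 1, len(A)
    while lo <= hi:
        mid = (lo + hi) // 2
        v = A[mid - 1]        # the problem indexes from 1, Python lists from 0
        if v == mid:
            return True
        elif v > mid:
            hi = mid - 1      # A[mid] > mid implies A[j] > j for all j >= mid
        else:
            lo = mid + 1      # A[mid] < mid implies A[j] < j for all j <= mid
    return False

print(has_fixed_point([-3, 2, 4, 5, 7, 8, 9, 10]))   # True, since A[2] = 2
print(has_fixed_point([2, 3, 4, 5]))                 # False
```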
Consider the tunneling Hamiltonian in the Hubbard model for a 1D lattice of quantum dots. $$\begin{align}\hat{H}_t=t\displaystyle\sum_{i,j,\sigma}c_{i,\sigma}^{\dagger}c_{j\sigma}+c^{\dagger}_{j,\sigma}c_{i,\sigma}\hspace{5mm}\text{where } i\neq j\label{one} \tag{1}\end{align} $$ where $i,j\in {1,2}$ (we will only be looking in the case of 2 dots) are the index of the dots and $\sigma \in {\uparrow,\downarrow} $ is the spin of the electron and $t$ is the tunnel coupling between the dots. This term is also referred to as the "kinetic" term as it conveys the hopping of an electron from one site $i$ to a neighbour site $j=i+1$ or $j=i-1$. From a physical perspective this term doesn't prefer the hopping of a spin up $\sigma=\uparrow$ or a spin down state $\sigma=\downarrow$. However when looking into literature of lateral semi-conductor quantum dots, the hamiltonian for the hopping is conveyed as $$\hat{H}_t=t\displaystyle\sum_{i,j,\sigma}c_{i,\sigma}^{\dagger}c_{j\sigma}-c^{\dagger}_{j,\sigma}c_{i,\sigma}\hspace{5mm}\text{where } i\neq j \tag{2}$$ where there seems to be a distinction between which spin is favourable to hop. For example the Hamiltonian in the following basis: [1][2] $$\psi_i\in\{|\downarrow, \downarrow\rangle,|\uparrow, \downarrow\rangle,|\downarrow, \uparrow\rangle,|\uparrow, \uparrow\rangle,|\uparrow\downarrow, 0\rangle,|0, \uparrow\downarrow\rangle\}$$ is given by: $$H_t=\langle\psi_i|\hat{H}_t|\psi_j\rangle=\left(\begin{array}{cccccc}{0} & {0} & {0} & {0} & {0} & {0} \\ {0} & {0} & {0} & {0} & {t} & {t} \\ {0} & {0} & {0} & {0} & {-t} & {-t} \\ {0} & {0} & {0} & {0} & {0} & {0} \\ {0} & {t} & {-t} & {0} & {0} & {0} \\ {0} & {t} & {-t} & {0} & {0} & {0}\end{array}\right)$$ where $$\begin{align} \langle\psi_2|\hat{H}_t|\psi_5\rangle &=\langle\uparrow, \downarrow|\hat{H}_t|\uparrow\downarrow, 0\rangle =t \\ \langle\psi_2|\hat{H}_t|\psi_6\rangle &=\langle\uparrow, \downarrow|\hat{H}_t|0,\uparrow\downarrow\rangle =t \\ \langle\psi_3|\hat{H}_t|\psi_5\rangle &=\langle\downarrow, \uparrow|\hat{H}_t|\uparrow\downarrow, 0\rangle =-t \\ \langle\psi_3|\hat{H}_t|\psi_6\rangle &=\langle\downarrow, \uparrow|\hat{H}_t|0,\uparrow\downarrow\rangle =-t \end{align}$$ From these equations it seems that when there are two electrons in a single dot $\{|\uparrow\downarrow, 0\rangle,|0, \uparrow\downarrow\rangle\}$, the spin down $\sigma=\downarrow$ state seems to be the preferred electron to hop as the matrix element is $-t$ and not $t$. My question is how is this explained from a physical perspective? How can you create a preference for what spin to hop? Usually in the Hubbard model all the tunneling elements have the same sign because of equation 1. [3] What is exactly the physical interpertation of equation 2 in contrast to equation 1? References:
In physics, line integrals are used, in particular, for computations of the mass of a wire; the center of mass and moments of inertia of a wire; the work done by a force on an object moving in a vector field; the magnetic field around a conductor (Ampere's Law); and the voltage generated in a loop (Faraday's Law of magnetic induction). Let us consider these applications in more detail. Mass of a Wire Suppose that a piece of wire is described by a curve \(C\) in three dimensions. The mass per unit length of the wire is a continuous function \(\rho \left( {x,y,z} \right).\) Then the total mass of the wire is expressed through the line integral of a scalar function as \[m = \int\limits_C {\rho \left( {x,y,z} \right)ds} .\] If \(C\) is a curve parameterized by the vector function \(\mathbf{r}\left( t \right) = \left( {x\left( t \right),y\left( t \right),z\left( t \right)} \right),\) then the mass can be computed by the formula \[m = \int\limits_\alpha^\beta \rho \left( {x\left( t \right),y\left( t \right),z\left( t \right)} \right) \sqrt {{{\left( {\frac{{dx}}{{dt}}} \right)}^2} + {{\left( {\frac{{dy}}{{dt}}} \right)}^2} + {{\left( {\frac{{dz}}{{dt}}} \right)}^2}}\, dt .\] If \(C\) is a curve in the \(xy\)-plane, then the mass of the wire is given by \[m = \int\limits_C {\rho \left( {x,y} \right)ds}\] or, in parametric form, \[m = \int\limits_\alpha^\beta \rho \left( {x\left( t \right),y\left( t \right)} \right) \sqrt {{{\left( {\frac{{dx}}{{dt}}} \right)}^2} + {{\left( {\frac{{dy}}{{dt}}} \right)}^2}}\, dt .\] Center of Mass and Moments of Inertia of a Wire Let a wire be described by a curve \(C\) with a continuous density function \(\rho \left( {x,y,z} \right).\) The coordinates of the center of mass of the wire are defined as \[\bar x = \frac{{{M_{yz}}}}{m},\;\;\; \bar y = \frac{{{M_{xz}}}}{m},\;\;\; \bar z = \frac{{{M_{xy}}}}{m},\] where \[{M_{yz}} = \int\limits_C {x\rho \left( {x,y,z} \right)ds} ,\;\;\; {M_{xz}} = \int\limits_C {y\rho \left( {x,y,z} \right)ds} ,\;\;\; {M_{xy}} = \int\limits_C {z\rho \left( {x,y,z} \right)ds}\] are the so-called first moments. The moments of inertia about the \(x\)-axis, \(y\)-axis and \(z\)-axis are given by the formulas \[{I_x} = \int\limits_C {\left( {{y^2} + {z^2}} \right)\rho \left( {x,y,z} \right)ds} ,\;\;\; {I_y} = \int\limits_C {\left( {{x^2} + {z^2}} \right)\rho \left( {x,y,z} \right)ds} ,\;\;\; {I_z} = \int\limits_C {\left( {{x^2} + {y^2}} \right)\rho \left( {x,y,z} \right)ds} .\] Work The work done by a force \(\mathbf{F}\) on an object moving along a curve \(C\) is given by the line integral \[W = \int\limits_C {\mathbf{F} \cdot d\mathbf{r}} ,\] where \(\mathbf{F}\) is the vector force field acting on the object and \(d\mathbf{r}\) is the vector differential of position along the curve (Figure \(1\)). The notation \({\mathbf{F} \cdot d\mathbf{r}}\) means the dot product of \(\mathbf{F}\) and \(d\mathbf{r}.\) Note that the force field \(\mathbf{F}\) is not necessarily the cause of the motion of the object. It might be some other force acting to overcome the force field that is actually moving the object. In this case the work of the force \(\mathbf{F}\) could result in a negative value.
If the vector field is defined in the coordinate form \[\mathbf{F} = \left( {P\left( {x,y,z} \right), Q\left( {x,y,z} \right), R\left( {x,y,z} \right)} \right),\] then the work done by the force is calculated by the formula \[W = \int\limits_C {\mathbf{F} \cdot d\mathbf{r}} = \int\limits_C {Pdx + Qdy + Rdz} .\] If the object is moved along a curve \(C\) in the \(xy\)-plane, then the following formula is valid: \[W = \int\limits_C {\mathbf{F} \cdot d\mathbf{r}} = \int\limits_C {Pdx + Qdy},\] where \(\mathbf{F} = \left( {P\left( {x,y} \right),Q\left( {x,y} \right)} \right).\) If a path \(C\) is specified by a parameter \(t\) (\(t\) often means time), the formula for calculating work becomes \[W = \int\limits_\alpha^\beta \left[ {P\left( {x\left( t \right),y\left( t \right),z\left( t \right)} \right)\frac{{dx}}{{dt}} + Q\left( {x\left( t \right),y\left( t \right),z\left( t \right)} \right)\frac{{dy}}{{dt}} + R\left( {x\left( t \right),y\left( t \right),z\left( t \right)} \right)\frac{{dz}}{{dt}}} \right]dt ,\] where \(t\) goes from \(\alpha\) to \(\beta.\) If the vector field \(\mathbf{F}\) is conservative, then the work on an object moving from \(A\) to \(B\) can be found by the formula \[W = u\left( B \right) - u\left( A \right),\] where \(u\left( {x,y,z} \right)\) is a scalar potential of the field. Ampere's Law The line integral of a magnetic field \(\mathbf{B}\) around a closed path \(C\) is equal to \(\mu_0\) times the total current flowing through the area bounded by the contour \(C\) (Figure \(2\)). This is expressed by the formula \[\int\limits_C {\mathbf{B} \cdot d\mathbf{r}} = {\mu _0}I,\] where \({\mu _0}\) is the vacuum permeability constant, equal to \(1.26 \times {10^{ - 6}}\,\text{H/m}.\) Faraday's Law The electromotive force \(\varepsilon\) induced around a closed loop \(C\) is equal to the negative of the rate of change of the magnetic flux \(\psi\) passing through the loop (Figure \(3\)): \[\varepsilon = \int\limits_C {\mathbf{E} \cdot d\mathbf{r}} = - \frac{{d\psi }}{{dt}}.\] Solved Problems Example 1 Find the mass of a wire running along the plane curve \(C\) with the density \(\rho \left( {x,y} \right) = 3x + 2y.\) The curve \(C\) is the line segment from point \(A\left( {1,1} \right)\) to point \(B\left( {2,4} \right).\) Example 2 Find the mass of a wire lying along the arc of the circle \({x^2} + {y^2} = 1\) from \(A\left( {1,0} \right)\) to \(B\left( {0,1} \right)\) with the density \(\rho \left( {x,y} \right) = xy\) (Figure \(4\)). Example 3 Find the moment of inertia \({I_x}\) of the circle \({x^2} + {y^2} = {a^2}\) with the density \(\rho = 1.\) Example 4 Find the work done by the force field \(\mathbf{F}\left( {x,y} \right) = \left( {xy,x + y} \right)\) on an object moving from the origin \(O\left( {0,0} \right)\) to the point \(A\left( {1,1} \right)\) along the path \(C,\) where \(C\) is the line segment \(y = x;\) \(C\) is the curve \(y = \sqrt x.\) Example 5 An object with a mass of \(m\) is thrown at an angle \(\alpha\) with the initial velocity \({v_0}\) (Figure \(5\)). Calculate the work performed by the gravitational force \(\mathbf{F} = m\mathbf{g}\) while the object moves until the moment it strikes the ground.
Example 6 Find the magnetic field in vacuum a distance \(r\) from the axis of a long straight wire carrying current \(I.\) Example 7 Evaluate the maximum electromotive force \(\varepsilon\) and the electric field \(E\) induced in a finger ring of radius \(1\,\text{cm}\) when the passenger flies on an airplane in the magnetic field of the Earth with the velocity of \(900\,\text{km/h}.\) Example 1. Find the mass of a wire running along the plane curve \(C\) with the density \(\rho \left( {x,y} \right) = 3x + 2y.\) The curve \(C\) is the line segment from point \(A\left( {1,1} \right)\) to point \(B\left( {2,4} \right).\) Solution. We first find the parametric equation of the line \(AB:\) \[\frac{{x - {x_A}}}{{{x_B} - {x_A}}} = \frac{{y - {y_A}}}{{{y_B} - {y_A}}} = t \;\;\Rightarrow\;\; \frac{{x - 1}}{{2 - 1}} = \frac{{y - 1}}{{4 - 1}} = t \;\;\Rightarrow\;\; \frac{{x - 1}}{1} = \frac{{y - 1}}{3} = t \;\;\;\text{or}\;\;\left\{ {\begin{array}{*{20}{l}} {x = t + 1}\\ {y = 3t + 1} \end{array}} \right.,\] where the parameter \(t\) varies in the interval \(\left[ {0,1} \right].\) Then the mass of the wire is \[m = \int\limits_\alpha^\beta \rho \left( {x\left( t \right),y\left( t \right)} \right) \sqrt {{{\left( {\frac{{dx}}{{dt}}} \right)}^2} + {{\left( {\frac{{dy}}{{dt}}} \right)}^2}}\, dt = \int\limits_0^1 {\left( {3x\left( t \right) + 2y\left( t \right)} \right) \sqrt {{{\left( {\frac{{dx}}{{dt}}} \right)}^2} + {{\left( {\frac{{dy}}{{dt}}} \right)}^2}}\, dt} = \int\limits_0^1 {\left( {9t + 5} \right)\sqrt {{1^2} + {3^2}}\, dt} = \sqrt {10} \int\limits_0^1 {\left( {9t + 5} \right)dt} = \sqrt {10} \left[ {\left. {\left( {\frac{{9{t^2}}}{2} + 5t} \right)} \right|_0^1} \right] = \frac{{19\sqrt {10} }}{2} \approx 30.\]
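As a quick cross-check of Example 1, the same line integral can be evaluated numerically. This is only an illustrative sketch using scipy, not part of the original solution:

```python
import numpy as np
from scipy.integrate import quad

# Density and parametrization of the segment AB from Example 1:
# x = t + 1, y = 3t + 1, t in [0, 1].
rho = lambda x, y: 3 * x + 2 * y

def integrand(t):
    x, y = t + 1.0, 3.0 * t + 1.0
    dxdt, dydt = 1.0, 3.0
    return rho(x, y) * np.hypot(dxdt, dydt)   # rho * sqrt((dx/dt)^2 + (dy/dt)^2)

mass, _ = quad(integrand, 0.0, 1.0)
print(mass, 19 * np.sqrt(10) / 2)   # both approximately 30.04
```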
Two events are independent if the occurrence of one does not affect the other at all. The probability that two independent events both happen is the product of their individual probabilities: $$$p(X=x \& Y=y)=p(X=x) \cdot p(Y=y)$$$ The probability of getting a $$6$$ on each of two independent dice $$A$$ and $$B$$ is: $$$\displaystyle p(X=6 \& Y=6)=p(X = 6 ) \cdot p(Y=6)= \frac{1}{6} \cdot \frac{1}{6} = \frac{1}{36}$$$ The probability of rain on a given day in February in Sevilla is $$35 \%$$, and the probability that Betis wins is $$75 \%$$. What is the probability that it does not rain in Sevilla and Betis wins? $$$ P(\mbox{ not rain })= 1-P(\mbox { rain })=0.65 \\ P(\mbox{ not rain }) \cdot P(\mbox{ Betis wins })=0.65 \cdot 0.75=0.4875 \approx 49\%$$$
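A two-line numerical restatement of the two calculations above, purely for illustration:

```python
# Independence: probability of both events = product of the individual probabilities.
p_six_and_six = (1 / 6) * (1 / 6)
p_no_rain_and_betis_wins = (1 - 0.35) * 0.75

print(p_six_and_six)               # 0.0277... = 1/36
print(p_no_rain_and_betis_wins)    # 0.4875, i.e. about 49%
```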
Sorry to bring up an old thread, but here's something that might be relevant. Let pCFL be the class of permutation-closed CFLs. The equality problem for pCFL is decidable. Given a language $L$ over $\Sigma = \{ \sigma_1 , \dots , \sigma_n \}$, let $W_L = \{ \langle \#_{\sigma_1}(w) , \dots , \#_{\sigma_n}(w) \rangle \mid w \in L\}$, where $\#_{\sigma}(w)$ is the number of occurrences of $\sigma$ in $w$. By Parikh's Theorem, $W_L$ is semilinear whenever $L$ is context-free. Now, if $L$ is in pCFL, we have that $w \in L$ iff $\langle \#_{\sigma_1}(w) , \dots , \#_{\sigma_n}(w) \rangle \in W_L$. Thus, for $L_1 , L_2$ in pCFL, $L_1 = L_2$ iff $W_{L_1} = W_{L_2}$. But equality of semilinear sets is decidable; see: This raises a question to which I would like to know the answer: is it decidable whether a given context-free language is permutation-closed?
It is well known that: $\displaystyle \mathcal{H}_n = \log{n} + \gamma - \sum_{k=1}^\infty \frac{B_{k}}{k \, n^{k}}$ where $\displaystyle B_k$ are the Bernoulli numbers. A similar asymptotic expansion for the harmonic numbers begins with the logarithm shifted slightly: $\displaystyle \mathcal{H}_n = \log{\left(n+\frac{1}{2}\right)} + \gamma + \sum_{k=2}^\infty \frac{A_k}{n^k}$ Is there a closed form for $\displaystyle A_k$ in terms of known number sequences? The first few terms of $\displaystyle A_k$ are: $\displaystyle \frac{1}{24}, -\frac{1}{24}, \frac{23}{960}, -\frac{1}{160}, -\frac{11}{8064}, -\frac{1}{896}, \frac{143}{30720}, -\frac{1}{4608}$ for $\displaystyle k \in \left[2,\cdots,9 \right]$
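A quick numerical sanity check of the first two coefficients (this is only a consistency check with mpmath, not a closed form): the remainder $\mathcal{H}_n - \log(n+\tfrac12) - \gamma$, scaled by $n^2$, should tend to $A_2 = \tfrac{1}{24}$, and the next correction to $A_3 = -\tfrac{1}{24}$.

```python
from mpmath import mp, harmonic, log, euler, mpf

mp.dps = 40   # generous working precision

def remainder(n):
    # H_n - log(n + 1/2) - gamma, which should behave like A_2/n^2 + A_3/n^3 + ...
    return harmonic(n) - log(n + mpf(1) / 2) - euler

for n in [10, 100, 1000, 10000]:
    print(n, remainder(n) * n**2)            # tends to 1/24 = 0.041666...

n = 10000
print(n**3 * (remainder(n) - mpf(1) / 24 / n**2))   # roughly -1/24
```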
First note that "enumerable" usually means "can be effectively enumerated", whereas you seem to be asking whether the set is countable (or denumerable). The conclusion is wrong, so the approach cannot be right: the set is countable. As for the approach, note that you are defining a polynomial with infinitely many non-zero coefficients, which is therefore not a polynomial. Otherwise the argument would be reduced to an even worse state: a misinterpretation of the Cantor diagonal argument. That is, you are listing the polynomials and then finding something not on that concrete list; whereas the actual argument is "given any enumeration, we can find something not on the list". To see that the polynomials are countable, first note that $\mathbb Q$ is countable. This means that if we show that polynomials whose coefficients are in $\mathbb N$ give a countable set, we can use the bijection between $\mathbb Q$ and $\mathbb N$ to prove that the polynomials over $\mathbb Q$ are countable. Now to see that, we can first (as you did) identify the polynomials with finite sequences, and then find explicit bijections between $\mathbb N$ and the set of finite sequences, for example: $$\langle a_0,\ldots,a_n\rangle\mapsto p_0^{a_0}\cdot\ldots\cdot p_n^{a_n}-1$$ where $p_i$ is the $i$-th prime number. Of course, if you are allowed to rely on the fact that countable unions of countable sets are countable, you can take the approach which avoids the explicit map: $\mathbb N\times\mathbb N$ is countable; by induction, if $\mathbb N^k$ is countable then $\mathbb N^{k+1}$ is countable. The set of all finite sequences is equal to $\bigcup_{n\in\mathbb N}\mathbb N^n$, and therefore is a countable union of countable sets. Further reading: The cartesian product $\mathbb{N} \times \mathbb{N}$ is countable Cartesian Product of Two Countable Sets is Countable Proving $\mathbb{N}^k$ is countable
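A small illustration of the prime-power coding mentioned above. The function names are mine, and the decoder is told how many exponents to read back, since trailing zero exponents do not change the code:

```python
from sympy import prime

def encode(seq):
    """Map a finite sequence of natural numbers <a_0, ..., a_n> to
    p_0^{a_0} * ... * p_n^{a_n} - 1, as in the answer above."""
    code = 1
    for i, a in enumerate(seq):
        code *= prime(i + 1) ** a   # sympy's prime(k) is the k-th prime, prime(1) = 2
    return code - 1

def decode(code, length):
    """Recover the first `length` exponents of code + 1 by trial division."""
    m = code + 1
    seq = []
    for i in range(length):
        p, a = prime(i + 1), 0
        while m % p == 0:
            m //= p
            a += 1
        seq.append(a)
    return seq

s = [3, 0, 2]
c = encode(s)             # 2^3 * 3^0 * 5^2 - 1 = 199
print(c, decode(c, 3))    # 199 [3, 0, 2]
```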
I am trying to verify the following formula involving Bessel functions of the first kind and am having no luck. The formula is $$ \int{\omega} J_n(\rho \omega)\mathrm d\omega = \frac {1} {\rho} \left\{ -\omega J_{n-1} (\rho \omega) + n \int{J_{n-1}(\rho \omega)\mathrm d\omega } \right\} $$ I apologize if this is painfully obvious with integration by parts but I couldn't see it. Moreover, I get the impression from this other post about a nearly identical integral that the above may not be right. Any help is greatly appreciated. Also, if there is a simpler way to express/solve this integral, I would also be very grateful for that.
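For what it's worth, the formula does appear to be consistent: differentiating the right-hand side and using the recurrence $J'_{n-1}(x) = \frac{n-1}{x}J_{n-1}(x) - J_n(x)$ gives back the integrand $\omega J_n(\rho\omega)$ on the left, and it is easy to check numerically once both sides are read as definite integrals over the same interval. A short scipy sketch (function name mine):

```python
from scipy.integrate import quad
from scipy.special import jv

def check_identity(n, rho, a, b):
    """Compare both sides of the identity on [a, b]:
    int_a^b w J_n(rho w) dw
      = (1/rho) * ( [-w J_{n-1}(rho w)]_a^b + n * int_a^b J_{n-1}(rho w) dw )"""
    lhs, _ = quad(lambda w: w * jv(n, rho * w), a, b)
    boundary = (-b * jv(n - 1, rho * b)) - (-a * jv(n - 1, rho * a))
    tail, _ = quad(lambda w: jv(n - 1, rho * w), a, b)
    rhs = (boundary + n * tail) / rho
    return lhs, rhs

print(check_identity(n=3, rho=2.0, a=0.5, b=4.0))   # the two numbers should agree
```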
Search Now showing items 1-10 of 20 Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ... Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ... Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE (Springer, 2013-07) The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ... Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV (American Physical Society, 2013-01) Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ... 
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Direction Cosines When a directed line OP passing through the origin makes angles \(\alpha \), \(\beta\) and \( \gamma\) with the \(x\), \(y \) and \(z \) axes respectively, with \(O\) as the reference, these angles are referred to as the direction angles of the line, and the cosines of these angles are its direction cosines. These direction cosines are usually represented as l, m and n. If we extend the line OP in the opposite direction through the origin, the direction angles are replaced by their supplements. It is pretty obvious from this that on reversal of the line OP, the direction cosines of the line also change sign. In a situation where the given line does not pass through the origin, a line parallel to the given line passing through the origin is drawn, and in doing so the angles remain the same as the angles made by the original line. Hence, we get the same direction cosines. Since we are considering a line passing through the origin to figure out the direction angles and their cosines, we can consider the position vector of the point P on the line OP. If \(|OP| = r\), then from Figure 1 we can see that \(x = r \cos \alpha\), \(y = r \cos \beta\), \(z = r \cos \gamma\), where r denotes the magnitude of the vector, given by \(r = \sqrt{(x - 0)^2 + (y - 0)^2 + (z - 0)^2}\) \(\Rightarrow r = \sqrt{x^2 + y^2 + z^2}\). The cosines of the direction angles are \(\cos \alpha\), \(\cos \beta\) and \(\cos\gamma\), and these are denoted by l, m and n respectively. Therefore, the above equations can be reframed as: \(x = r \cos\alpha = lr\)  (1), \(y = r \cos\beta = mr\)  (2), \(z = r \cos \gamma = nr\)  (3). We can also represent \(\mathbf{r}\) in terms of its unit vector components using the orthogonal system: \(\mathbf{r} = x\hat{i} + y \hat{j} + z \hat{k}\). Substituting the values of x, y and z, we have \(\mathbf{r} = lr\hat{i} + mr\hat{j} + nr\hat{k}\) \(\Rightarrow \hat{r} = \frac{\mathbf{r}}{|\mathbf{r}|} = l\hat{i} + m\hat{j} + n \hat{k}\). Therefore, we can say that the cosines of the direction angles of a vector \(\mathbf{r}\) are the coefficients of the unit vectors \(\hat{i}\), \(\hat{j}\) and \(\hat{k}\) when the unit vector \(\hat{r}\) is resolved in terms of its rectangular components. Any set of numbers proportional to the direction cosines is known as the direction ratios of the line. These direction numbers are represented by a, b and c. Also, as \(OP^2 = OA^2 + OB^2 + OC^2 \), in simple terms \(r = \sqrt{x^2 + y^2 + z^2}\). On dividing the equation \(r^2 = x^2 + y^2 + z^2\) by \(r^2\), we have \(\frac{r^2}{r^2} = \frac{x^2}{r^2} + \frac{y^2}{r^2} + \frac{z^2}{r^2}\). Using equations 1, 2 and 3, we get \(\Rightarrow 1 = \left(\frac{x}{r}\right)^2 + \left(\frac{y}{r}\right)^2 + \left(\frac{z}{r}\right)^2 = l^2 + m^2 + n^2\). We can conclude that the sum of the squares of the direction cosines of a line is 1. From the above definition, we can say that \(l ∝ a\), \(m ∝ b\), \(n ∝ c\). From these relations, we get \( l = ka\), \( m = kb\), \( n = kc\). The ratio between the direction cosines and direction ratios of a line is given by \(\frac{l}{a} = \frac{m}{b} = \frac{n}{c} = k\). But we know that \(l^2 + m^2 + n^2 = 1\). From this, we can find that \(k = \pm \frac{1}{\sqrt{a^2 + b^2 + c^2}}\). The value of \(k\) can be chosen as positive or negative depending upon the direction of the directed line. We can obtain any number of sets of direction ratios by altering the value of \(k\).
We are now clear on the concept of the direction ratios and direction cosines of a line.
Qualitative properties of positive solutions for mixed integro-differential equations 1. Departamento de Ingeniería Matemática and Centro de Modelamiento Matemático, Universidad de Chile, Santiago, Chile 2. Department of Mathematics, Jiangxi Normal University, Nanchang, Jiangxi 330022, China $\begin{equation}\left\{ \begin{array}{l} (-\Delta)_x^{\alpha} u+(-\Delta)_y u+u = f(u) \ \ \ \ {\rm in}\ \ {\mathbb{R}}^N\times{\mathbb{R}}^M,\\ u>0\ \ {\rm{in}}\ {\mathbb{R}}^N\times{\mathbb{R}}^M,\ \ \ \lim_{|(x,y)|\to+\infty}u(x,y) = 0, \end{array} \right.\;\;\;\left( {0.1} \right)\end{equation}$ $N\geq 1$, $M\geq 1$, $\alpha\in (0,1)$. Mathematics Subject Classification: 35R11, 35B06, 35B40, 35B50. Citation: Patricio Felmer, Ying Wang. Qualitative properties of positive solutions for mixed integro-differential equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (1) : 369-393. doi: 10.3934/dcds.2019015
I have to diagonalize, within a Fortran-written code, a block tridiagonal Toeplitz Hermitian matrix, e.g. $$ \left[ \begin{array}{ccccc} \ddots & \hat{A} & & & \\ \hat{A}^\dagger & \hat{B} & \hat{A} & & \\ & \hat{A}^\dagger & \hat{B} & \hat{A} & \\ & & \hat{A}^\dagger & \hat{B} & \hat{A} \\ & & & \hat{A}^\dagger & \ddots \end{array} \right] $$ where $B$ is a Hermitian matrix. For the moment, I am just using standard Lapack routine ZHEEV for Hermitian matrices. Do you have any suggestion on how to take advantage of one or more of the properties of this matrix to get a faster computation?
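One structure-exploiting option, though it applies to the periodic-boundary (block-circulant) variant of this matrix rather than to the open chain exactly as written: a block-circulant matrix is block-diagonalized by the discrete Fourier transform, so its spectrum is the union of the spectra of the small Hermitian blocks $\hat B + e^{i\theta_k}\hat A + e^{-i\theta_k}\hat A^\dagger$ with $\theta_k = 2\pi k/N$. For a long chain this can serve as a cheap approximation or sanity check for the open-boundary problem. Below is a NumPy sketch of the idea under that assumption (in Fortran one would simply call ZHEEV once per small block):

```python
import numpy as np

def block_circulant_eigvals(A, B, nblocks):
    """Eigenvalues of the block-circulant (periodic-boundary) analogue of the
    block tridiagonal Toeplitz matrix with diagonal blocks B and off-diagonal
    blocks A / A^dagger.  Each Fourier mode theta_k = 2*pi*k/nblocks decouples
    into the small Hermitian block  B + exp(i*theta_k) A + exp(-i*theta_k) A^H,
    so the cost is O(nblocks * m^3) instead of O((nblocks*m)^3)."""
    eigs = []
    for k in range(nblocks):
        theta = 2.0 * np.pi * k / nblocks
        Hk = B + np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T
        eigs.append(np.linalg.eigvalsh(Hk))   # Hk is Hermitian by construction
    return np.sort(np.concatenate(eigs))

# Small sanity check against a dense build of the periodic matrix.
rng = np.random.default_rng(0)
m, nb = 2, 5
B = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)); B = B + B.conj().T
A = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))

H = np.zeros((m * nb, m * nb), dtype=complex)
for j in range(nb):
    jn = (j + 1) % nb
    H[j*m:(j+1)*m, j*m:(j+1)*m] = B
    H[j*m:(j+1)*m, jn*m:(jn+1)*m] += A
    H[jn*m:(jn+1)*m, j*m:(j+1)*m] += A.conj().T

print(np.allclose(block_circulant_eigvals(A, B, nb),
                  np.sort(np.linalg.eigvalsh(H))))   # True
```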
Classification of Compact One-Manifolds Theorem Every compact one-dimensional manifold with boundary is diffeomorphic to a disjoint union of circles and closed intervals. Corollary Any compact one-manifold has an even number of points in its boundary. Proof Lemma 1 Let $f$ be a function on $[a,b]$ that is smooth and has a positive derivative everywhere except one interior point, $c$. Then there exists a globally smooth function $g$ that agrees with $f$ near $a$ and $b$ and has a positive derivative everywhere. Proof of Lemma 1 Let $r$ be a smooth nonnegative function that vanishes outside a compact subset of $(a,b)$, which equals $1$ near $c$, and which satisfies $\displaystyle \int_a^b r = 1$. Define: $\displaystyle g(x) = f(a) + \int_a^x (k r(s)+f'(s)(1-r(s)))ds$ where the constant $\displaystyle k=f(b)-f(a)-\int_a^b f'(s)(1-r(s))ds$. $\Box$ Now, let $f$ be a Morse function on a one-manifold $X$, and let $S$ be the union of the critical points of $f$ and $\partial X$. Since $S$ is finite, $X-S$ consists of a finite number of one-manifolds, $L_1,L_2,\cdots,L_n$. Lemma 2 $f$ maps each $L_i$ diffeomorphically onto an open interval in $\mathbb{R}$. Proof of Lemma 2 Let $L$ be any of the $L_i$. Because $f$ is a local diffeomorphism and $L$ is connected, $f(L)$ is open and connected in $\mathbb{R}$. We also have $f(L) \subseteq f(X)$, the latter of which is compact, so there are numbers $c$ and $d$ such that $f(L) = (c,d)$. It suffices to show $f$ is one to one on $L$, because then $f^{-1}:(c,d) \to L$ is defined and locally smooth. Let $p$ be any point of $L$ and set $q=f(p)$. It suffices to show that every other point $z \in L$ can be joined to $p$ by a curve $\gamma : [q,y] \to L$, such that $f \circ \gamma$ is the identity and $\gamma (y) = z$. Since $f(z) = y \neq q = f(p)$, this result shows $f$ is one to one. So let $Q$ be the set of points of $L$ that can be so joined. Since $f$ is a local diffeomorphism, $Q$ is both open and closed in the connected set $L$, and hence $Q=L$. $\Box$ Lemma 3 Proof of Lemma 3 Let $g$ be a diffeomorphism $g:(a,b) \to L$ and let $p \in Cl(L)-L$. Let $J$ be a closed subset of $X$ diffeomorphic to $[0,1]$ such that $1$ corresponds to $p$ and $0$ corresponds to some $g(t)$ in $L$. Consider the set $\left\{{ s \in (a,t) | g(s) \in J }\right\}$. This set is both open and closed in $(a,b)$, hence $J$ contains either $g((a,t))$ or $g((t,b))$. $\Box$ Proof of Corollary Follows trivially. $\blacksquare$
$\Gamma(\Upsilon(1S) \to \ell^+\ell^-)$ to NNNLO (in session "Standard Model") ALICE: bottomonium results in p-Pb and Pb-Pb (in session "Quarkonium In Media") ATLAS measurements of associated vector boson plus quarkonium production at the LHC (in session "Production") Bc Physics at LHCb (in session "Spectroscopy") Bottomonium decays into light hadrons at Belle (in session "Decays") Bottomonium in NRQCD (in session "Quarkonium In Media") Bottomonium O(alpha_s^3) NR sum rules (in session "Standard Model") Bottomonium production in heavy-ion collisions with CMS (in session "Quarkonium In Media") Bottomonium radiative transitions (in session "Decays") Bottomonium Spectroscopy (in session "Spectroscopy") Bottomonium(-like) Spectroscopy and Transitions at Belle (in session "Spectroscopy") Bottomonium(like)/Charmonium(like) transitions (in session "Decays") Charmed Baryons (in session "Spectroscopy") Charmed Baryons (in session "Spectroscopy") Charmonium and bottomonium in pA (in session "Quarkonium In Media") Charmonium decays into light hadrons at BES (in session "Decays") Charmonium hadronic transitions at BES (in session "Decays") Charmonium production in heavy-ion collisions with CMS (in session "Quarkonium In Media") Charmonium radiative transitions at BES (in session "Decays") Charmonium rare decays at BES (in session "Decays") Charmonium Spectroscopy (in session "Spectroscopy") Charmonium(-like) States from B Decays at Belle (in session "Spectroscopy") Charmonium(-like) States from ISR at Belle (in session "Spectroscopy") Close CMS results on Bc decays (in session "Decays") CMS results on quarkonium spectroscopy (in session "Spectroscopy") Cold nuclear matter and nPDF (in session "Quarkonium In Media") Cold nuclear matter effects on quarkonium production (in session "Quarkonium In Media") Colour octet static QCD potential at three loop order (in session "Standard Model") Complementary constraints on light Dark Matter from heavy quarkonium decays (in session "Standard Model") Complete next-to-leading-order study on the yield and polarization of Upsilon(1S,2S,3S) at the Tevatron and LHC (in session "Production") Confirmation of Z(4430) and radiative X(3872) decays with LHCb (in session "Spectroscopy") Correlators in lattice QCD (in session "Quarkonium In Media") Critical review of quarkonium production results at hadron colliders (in session "Production") D0, D0* masses and X(3872) Binding Energy (in session "Spectroscopy") Dalitz plot analysis of eta_c and J/psi decays (in session "Decays") Determination of the parameters of J/psi and Y from e+e- experiments and their implications for the search at future facilities (in session "Standard Model") Determinations of m_b, m_c, and alpha_s from quarkonium correlators (in session "Standard Model") Discussion of the Z states in charmonium and bottomonium (in session "Spectroscopy") Discussion on the PDG naming scheme (in session "Discussion on PDG naming scheme") Double differential cross sections of S-wave quarkonia with CMS (in session "Production") E1 transitions (in session "Decays") Enhanced breaking of heavy quark spin symmetry in Y(5S) transitions (in session "Decays") eta_c, J/psi, h_c decay constants from lattice (in session "Spectroscopy") Exclusive photoproduction of J/psi and psi(2S) states in proton-proton collisions at the CERN LHC (in session "Production") Exotic hidden-flavour at ATLAS: Search for $X_b \to \pi+ \pi- \Upsilon(1S)$ (in session "Spectroscopy") Factorized power expansion for high-pT heavy-quarkonium production (in 
session "Production") Fragmentation contributions to J/psi production at the Tevatron and the LHC (in session "Production") From imaginary potential to quarkonium phenomenology (in session "Quarkonium In Media") Gluon fragmentation into quarkonium at next-to-leading order (in session "Production") Hadronic Transitions in e+e- Collisions above 4 GeV at BESIII (in session "Spectroscopy") HALQCD Potentials/Interactions (in session "Spectroscopy") Heavy hadrons in Hidden Valley Models and prospects for detection at the LHC (in session "Beyond the Standard Model") Heavy quark potential at T>0 (in session "Quarkonium In Media") Heavy quarkonium results in pp from ALICE (in session "Production") Heavy-light P-wave mesons contributions to Quarkonium transitions (in session "Decays") HELAC-Onia: An automatic matrix element generator for heavy quarkonium physics (in session "Production") Hybrids (in session "Spectroscopy") J/psi production in p+p and p+A within CGC/NRQCD framework (in session "Quarkonium In Media") J/psi results in p-Pb and Pb-Pb from ALICE (in session "Quarkonium In Media") Light Quark Dependence in X3872 (in session "Spectroscopy") Main results in the charmonium region from KEDR (in session "Spectroscopy") Measuring the $HQ\bar Q$ couplings in Higgs decays to Quarkonia (in session "Standard Model") Molecules (in session "Spectroscopy") Next-to-Leading-Order study on the associated production of J/psi+gamma at the LHC (in session "Production") NNLL heavy quark pair production close to threshold (in session "Standard Model") Omega transitions of the Y(5S) (in session "Decays") Overview of ATLAS quarkonium production measurements (in session "Production") PANDA progress report (in session "Future Opportunities") Polarizations of chi_c1 and chi_c2 at the LHC in non-relativistic QCD (in session "Production") Probing Quarkonium Production Mechanisms with Jet Substructure (in session "Production") Production of P-wave quarkonia in pp collisions at 7 or 8 TeV at CMS (in session "Production") Production of Stoponium at the LHC (in session "Beyond the Standard Model") Production of Tetraquarks at the LHC (in session "Production") Production of the X(3872) and its partners (in session "Production") Psi(2S) results in p-Pb and Pb-Pb from ALICE (in session "Quarkonium In Media") Quantum Thermometer of QGP (in session "Quarkonium In Media") Quarkonium energy levels at NNNLO in perturbative QCD (in session "Standard Model") Quarkonium Polarisation at LHCb (in session "Production") Quarkonium polarization measurements with CMS (in session "Production") Quarkonium prod. 
in U+U, Cu+Au, energy dependent Au+Au and 500 GeV (in session "Quarkonium In Media") Quarkonium Production at CDF (in session "Production") Quarkonium Production at HERA (in session "Production") Quarkonium production at LHCb (in session "Production") Quarkonium production in the LHC era: a polarized perspective (in session "Production") Quarkonium spectral function (in session "Quarkonium In Media") Quarkonium suppression and entropic force: an update (in session "Quarkonium In Media") Quarkonium Van Der Waals Interactions (in session "Spectroscopy") R measurement at BESIII (in session "Standard Model") Radiative Transitions in e+e- Collisions above 4 GeV at BESIII (in session "Spectroscopy") Recent results from LHCb relevant to decays (in session "Decays") Round table on hadronic transitions (in session "Decays") Round table on radiative transitions (in session "Decays") Seminar (in session "Seminar") Some Issues in Quarkonium Production (in session "Production") STAR quarkonium results from BES, U+U collisions and 500 GeV p+p collisions (in session "Quarkonium In Media") Studies of Bc-meson with the ATLAS experiment (in session "Decays") Studies of the Dark Sector at Belle (in session "Standard Model") Subtracting IR ambiguity from the static potential (in session "Standard Model") Tau-charm factories in China and Russia (in session "Future Opportunities") Tetraquarks (in session "Decays") Tetraquarks in large N (in session "Spectroscopy") The line shape of Y(4260) (in session "Decays") The Zc States at BESIII (in session "Spectroscopy") Threshold production of top-quark pairs at NNNLO (in session "Standard Model") UPC results from ALICE in p-Pb and Pb-Pb collisions (in session "Quarkonium In Media") Update on the determination of alpha_s from the QCD static energy (in session "Standard Model") Upsilon leptonic width from lattice QCD (in session "Standard Model") Upsilon production in potential models (in session "Quarkonium In Media") Upsilon(2S) --> eta_b gamma from lattice QCD (in session "Decays") Welcome address XYZ (in session "Spectroscopy") Zc and tetraquarks (in session "Spectroscopy") Zc in CLQCD (in session "Spectroscopy") Include materials from selected contributions
Nick Rowe has a new post up and it inspired me to take up his challenge (entering as a non-economist). Rowe is probably one of the best economist bloggers out there if you want to get more technical than the typical post from Scott Sumner or Paul Krugman. His question is this: Q. Assume an economy where there are (say) 7 markets. Suppose 6 of those markets are in equilibrium (with quantity demanded equal to quantity supplied). Is it necessarily true that the 7th market must also be in equilibrium (with quantity demanded equal to quantity supplied)? I've looked at Walras' law before (e.g. this post). I'm going to answer this using information theory with progressively more complexities, but I'll start with some notation. Define $I(D_{k})$ to be the source (demand) information in the $k^{\text{th}}$ market and $I(S_{k})$ to be the received information (supply). Define aggregate source information (aggregate demand, AD) and aggregate received information (aggregate supply, AS) as $$ I(AD) = I(\sum_{k} D_{k}) \;\;\text{and}\;\; I(AS) = I(\sum_{k} S_{k}) $$ If the information in each market is independent, this becomes: $$ I(AD) = \sum_{k} I(D_{k}) \;\;\text{and}\;\; I(AS) = \sum_{k} I(S_{k}) $$ And lastly, define excess information in the $k^{\text{th}}$ market as $$ \Delta I_{k} \equiv I(D_{k}) - I(S_{k}) $$ Rowe's question becomes $$ \text{If } \Delta I_{k = 1 .. 6} = 0 \text{ then what is } \Delta I_{7} \text{ ?} $$ First is the "Walras' law is correct" version [1] ... We assume that the information in each market is independent and that $I(AD) = I(AS)$, so that $$ 0 = I(AD) - I(AS) = \sum_{k} I(D_{k}) - \sum_{k} I(S_{k}) = \sum_{k} \Delta I_{k} $$ rearranging the terms $$ 0 = \Delta I_{7} + \sum_{k = 1}^{6} \Delta I_{k} = \Delta I_{7} + 0 $$ Therefore, $\Delta I_{7} = 0$. Now the thing is that all we can really say is that $I(AS) \leq I(AD)$ (the market doesn't necessarily transfer all the information), so that brings us to the non-ideal information transfer version [2] ... We assume that the information in each market is independent and that $I(AS) \leq I(AD)$, so that $$ 0 \leq I(AD) - I(AS) = \sum_{k} I(D_{k}) - \sum_{k} I(S_{k}) = \sum_{k} \Delta I_{k} $$ rearranging the terms $$ 0 \leq \Delta I_{7} + \sum_{k = 1}^{6} \Delta I_{k} = \Delta I_{7} + 0 $$ Therefore, $\Delta I_{7} \geq 0$. That means Walras' law doesn't pin down that last market, and says that there can be excess demand. But it's even worse than that, which brings us to the non-independent (i.e. mutual) information version [3] ... Above, we assumed that $$ I(D_{j} + D_{k}) = I(D_{j}) + I(D_{k}) $$ But this isn't necessarily true and in general (e.g. Shannon joint entropy) $$ \text{(1) } I(D_{j} + D_{k}) \leq I(D_{j}) + I(D_{k}) $$ This says for practical purposes that some of the information in the source in one market may be the same as the information in the source in another, hence they do not necessarily add to yield more information. So all we really know is that $$ I(\sum_{k = 1}^{6} D_{k}) \geq I(\sum_{k = 1}^{6} S_{k}) $$ based on the fact that you can't get more information out than you put in. This means that knowing the six markets clear doesn't necessarily even tell us about the aggregate demand of the 6 markets (ignoring the seventh). Nick Rowe basically arrives at this last version -- he says there can be excess demands/supplies of money in each of the six markets so Walras' law can't really tell us anything about the seventh. The information theory argument presented here does not require money, which is consistent with Rowe.
He says that the same result could hold in a barter economy because some good could effectively operate as money and there would be excess demands for various barter goods in each of the individual markets. Rowe says that: Walras' Law is true and useful for the economy as a whole only if there is only one market in the whole economy, where all goods are traded for all goods. This appears to be saying that if you can't decompose $AD = D_{1} + D_{2} + \cdots$ (or the decomposition is trivial), then you get Walras' law back -- and it's true. If you can't decompose the markets, then there are no "joint entropies" that can be formed from their decomposition, so there is no information loss in equation (1) above. This doesn't rule out non-ideal information transfer in version [2] above, but assuming markets work, saying you can't decompose the markets (or the decomposition is trivial) gets you back to version [1] where Walras' law holds.
I've come across this problem while trying to work out a table-formatting algorithm. It's very similar to standard linear programming (though it uses $>$ instead of $<$; I'm not extremely familiar with linear programming, but I believe this doesn't matter much). Let $\vec v = (v_1, \dots, v_n)$ be a vector of positive-integer variables. The problem is to find if there is an assignment to $\vec v$ such that $$ v_1 + \dots + v_n = c $$ and $$ A \vec v \geq \vec w $$ where $c$ is a known constant, $\vec w$ is a known constant, and $A$ is a matrix with the special property that each row of $A$ is of the form $(0, \dots, 0, 1, \dots, 1, 0, \dots, 0)$. That is, each inequality constraint only uses coefficients 1 and 0, and only "consecutive" variables appear in each linear constraint. I think that this problem is NP-complete, but I haven't been able to prove it. I think a reduction to exactly-1-in-3-SAT or set-cover is most likely to succeed (variables would be literals/values respectively and rows in the matrix would correspond to clauses/sets), but the restriction that constraints only refer to consecutive variables doesn't seem strong enough to describe arbitrary clauses/sets. Alternatively, I might be wrong, and this problem actually has an algorithm that I have been missing. (The problem I'm actually interested in solving is finding the smallest $c$ such that constraints remain satisfiable, but I've phrased the problem this way so that it remains a simple decision problem) As an example, here's a small instance of this problem: $$ \begin{array}{} v_1 + v_2 + v_3 + v_4&=& 10 \\ v_1 &\geq& 1 \\ v_2 &\geq& 1 \\ v_2 + v_3 &\geq & 3 \\ v_3 + v_4 & \geq & 4 \\ v_3 &\geq& 1 \\ v_4 &\geq& 1 \end{array} $$ which has a possible assignment $\vec v = (1, 1, 6, 2) $.
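Not an answer to the complexity question, but a tiny brute-force reference implementation of the decision problem can help sanity-check small instances such as the example above. The function name and the (lo, hi, bound) encoding of the consecutive-index constraints are mine, and the search is exponential, so it is only meant for experimentation:

```python
import itertools

def feasible(n, c, constraints):
    """Brute force: find a vector v of n positive integers with sum(v) == c and,
    for every (lo, hi, bound) in constraints (0-based, inclusive indices),
    v[lo] + ... + v[hi] >= bound.  Returns a witness or None."""
    for cuts in itertools.combinations(range(1, c), n - 1):
        # Each choice of cut points gives a composition of c into n positive parts.
        parts = [b - a for a, b in zip((0,) + cuts, cuts + (c,))]
        if all(sum(parts[lo:hi + 1]) >= bound for lo, hi, bound in constraints):
            return parts
    return None

# The example instance from the question: v1 + v2 + v3 + v4 = 10 with the listed constraints.
constraints = [(0, 0, 1), (1, 1, 1), (1, 2, 3), (2, 3, 4), (2, 2, 1), (3, 3, 1)]
print(feasible(4, 10, constraints))   # [1, 1, 2, 6]; the question's (1, 1, 6, 2) is another valid assignment
```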
I'm currently studying for my qualifying exam in algebraic topology, and I'm looking over old exam questions. The first part of the question was to show that $\mathbb{R} P^3$ is not homotopy equivalent to $\mathbb{R} P^2 \vee S^3$, which is fairly straightforward to do using either covering spaces or the cohomology rings. The second part of the question was to show that $\Sigma(\mathbb{R} P^3)$ is not homotopy equivalent to $\Sigma(\mathbb{R} P^2 \vee S^3)$. We can't use the cohomology rings to tell them apart any more (as the cup product structure is trivial), and I don't know of any tools to compute homotopy groups of suspensions without some amount of 'connectivity'. Based on a hint given in the question, I attempted to distinguish these spaces by computing the Bockstein homomorphism $\beta : H^*(X,\mathbb{Z}_2) \to H^{*+1}(X,\mathbb{Z}_2)$ for both spaces coming from the short exact sequence $0 \to \mathbb{Z}_2 \to \mathbb{Z}_4 \to \mathbb{Z}_2 \to 0$, and using the fact that the Bockstein commutes with the suspension isomorphism on cohomology. However, for both spaces I computed that $\beta_1 : H^1(X,\mathbb{Z}_2) \to H^2(X,\mathbb{Z}_2)$ was an isomorphism, while $\beta_2 : H^2(X,\mathbb{Z}_2) \to H^3(X,\mathbb{Z}_2)$ was the zero map. Is this the way I should be trying to distinguish these spaces, or is there something more obvious?
Hello! I'm back from a short vacation and slowly getting to the comments. As I mentioned here, there might be a bit more to the information equilibrium picture of the Solow model than just the basic mechanics -- in particular I pointed out we might be able to figure out some dynamics of the savings rate relative to demand shocks. In the previous post, we built the model: $$ Y \rightarrow K \rightarrow I $$ Where $Y$ is output, $K$ is capital and $I$ is investment. Since information equilibrium (IE) is an equivalence relation, we have the model: $$ p: Y \rightarrow I $$ with abstract price $p$ which was described here (except using the symbol $N$ instead of $Y$) in the context of the IS-LM model. If we write down the differential equation resulting from that IE model $$ \text{(1) }\;\; p = \frac{dY}{dI} = \frac{1}{\eta} \; \frac{Y}{I} $$ There are a few things we can glean from this ... I. General equilibrium We can solve equation (1) under general equilibrium giving us $Y \sim I^{1/\eta}$. Empirically, we have $\eta \simeq 1$: Combining that with the results from the Solow model, we have $$ Y \sim K^{\alpha} \; \text{,} \; K \sim I^{\sigma} \; \text{and} \; Y \sim I $$ which tells us that $\alpha \simeq 1/\sigma$ -- one of the conditions that gave us the original Solow model. II. Partial equilibrium Since $Y \rightarrow I$ we have a supply and demand relationship between output and investment in partial equilibrium. We can use equation (1) and $\eta = 1$ to write $$ I = (1/p) Y \equiv s Y $$ Where we have defined the saving rate as $s \equiv 1/(p \eta)$ to be (the inverse of) the abstract price $p$ in the investment market. The supply and demand diagram (including an aggregate demand shock) looks like this: A shock to aggregate demand would be associated with a fall in the abstract price and thus a rise in the savings rate. There is some evidence of this in the empirical data: Overall, you don't always have pure supply or demand shocks, so there might be some deviations from a pure demand shock view. In particular, a "supply shock" (investment shock) should lead to a fall in the savings rate. III. Interest rates If we update the model here (i.e. the IS-LM model mentioned above) to include the more recent interest rate ($r$) model written in terms of investment and the money supply/base money: $$ (r \rightarrow p_{m}) : I \rightarrow M $$ where $p_{m}$ is the abstract price of money (which is in IE with the interest rate), we have a pretty complete model of economic growth that combines the Solow model with the IS-LM model. The interest rate joins the already empirically accurate production function: Since I inevitably get questions about causality, it is important to note that these are all IE relationships, therefore all relationships are effectively causal in either direction. However it is also important to note that the direct impact of $M$ on $Y$ is neglected in the above treatment (including the interest rates) -- and the direct impact changes depending on the information transfer index in the price level model. Summary A full summary of the Solow + IS-LM model in terms of IE relationships is: $$ Y \rightarrow K \rightarrow I \; \text{,} \; K \rightarrow D $$ $$ Y \rightarrow L $$ $$ 1/s : Y \rightarrow I $$ $$ (r \rightarrow p_{m}) : I \rightarrow M $$ Update 5/27/2015: Forgot first graph; corrected.
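As a purely illustrative check of the general equilibrium claim in section I, solving equation (1) symbolically does give $Y \sim I^{1/\eta}$; a minimal sympy sketch (variable names mine):

```python
import sympy as sp

I = sp.symbols('I', positive=True)        # investment
eta = sp.symbols('eta', positive=True)    # information transfer index
Y = sp.Function('Y')

# Equation (1):  dY/dI = (1/eta) * Y / I
sol = sp.dsolve(sp.Eq(Y(I).diff(I), Y(I) / (eta * I)), Y(I))
print(sol)   # typically Y(I) = C1*I**(1/eta), i.e. Y ~ I^(1/eta)
```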
Exercises Exercise \(\PageIndex{1}\) Find the general solution. (a) \(y'+ay=0\) (\(a\)=constant) (b) \(y'+3x^2y=0\) (c) \(xy'+(\ln x)y=0\) (d) \(xy'+3y=0\) (e) \(x^2y'+y=0\) Exercise \(\PageIndex{2}\) Solve the initial value problem. (a) \({y'+\left({1+x\over x}\right)y=0,\quad y(1)=1}\) (b) \({xy'+\left(1+{1\over\ln x}\right)y=0,\quad y(e)=1}\) (c) \({xy'+(1+ x\cot x)y=0,\quad y\left({\pi\over 2} \right)=2}\) (d) \({y'-\left({2x\over 1+x^2}\right)y=0,\quad y(0)=2}\) (e) \({y'+{k\over x}y=0,\quad y(1)=3 \quad {(k=constant)}}\) (f) \( y'+(\tan kx)y=0,\quad y(0)=2 \quad {(k=constant)}\) Exercise \(\PageIndex{3}\) Find the general solution. Also, plot a direction field and some integral curves on the rectangular region \(\{-2\le x\le2,\ -2\le y\le2\}\). (a) \(y'+3y=1\) (b) \({y'+\left({1\over x}-1\right)y=-{2\over x}}\) (c) \(y'+2xy=xe^{-x^2}\) (d) \({y'+{2x\over1+x^2}y={e^{-x}\over1+x^2}}\) Exercise \(\PageIndex{4}\) Find the general solution. (a) \({y'+{1\over x}y={7\over x^2}+3}\) (b) \({y'+{4\over x-1}y = {1\over (x-1)^5}+{\sin x\over (x-1)^4}}\) (c) \(xy'+(1+2x^2)y=x^3e^{-x^2}\) (d) \({xy'+2y={2\over x^2}+1}\) (e) \(y'+(\tan x)y=\cos x\) (f) \({(1+x)y'+2y={\sin x \over 1 + x}}\) (g) \((x-2)(x-1)y'-(4x-3)y=(x-2)^3\) (h) \(y'+(2\sin x\cos x) y=e^{-\sin^2x}\) (i) \(x^2y'+3xy=e^x\) Exercise \(\PageIndex{5}\) Solve the initial value problem and sketch the graph of the solution. (a) \(y'+7y=e^{3x},\quad y(0)=0\) (b) \({(1+x^2)y'+4xy={2\over 1+x^2},\quad y(0)=1}\) (c) \({xy'+3y={2\over x(1+x^2)},\quad y(-1)=0}\) (d) \({y'+ (\cot x)y=\cos x,\quad y\left({\pi\over 2}\right)=1}\) (e) \({y'+{1\over x}y={2\over x^2}+1,\quad y(-1)=0}\) Exercise \(\PageIndex{6}\) Solve the initial value problem. (a) \({(x-1)y'+3y={1\over (x-1)^3} + {\sin x\over (x-1)^2},\quad y(0)=1}\) (b) \(xy'+2y=8x^2,\quad y(1)=3\) (c) \(xy'-2y=-x^2,\quad y(1)=1\) (d) \(y'+2xy=x,\quad y(0)=3\) (e) \({(x-1)y'+3y={1+(x-1)\sec^2x\over (x-1)^3},\quad y(0)=-1}\) (f) \({(x+2)y'+4y={1+2x^2\over x(x+2)^3},\quad y(-1)=2}\) (g) \((x^2-1)y'-2xy=x(x^2-1),\quad y(0)=4\) (h) \((x^2-5)y'-2xy=-2x(x^2-5),\quad y(2)=7\) Exercise \(\PageIndex{7}\) Solve the initial value problem and leave the answer in a form involving a definite integral. You can solve these problems numerically by methods discussed in Chapter 3. (a) \(y'+2xy=x^2,\quad y(0)=3\) (b) \({y'+{1\over x}y={\sin x\over x^2},\quad y(1)=2}\) (c) \({y'+y={e^{-x}\tan x\over x},\quad y(1)=0}\) (d) \({y'+{2x\over 1+x^2}y={e^x\over (1+x^2)^2}, \quad y(0)=1}\) (e) \(xy'+(x+1)y=e^{x^2},\quad y(1)=2\) Exercise \(\PageIndex{8}\) Experiments indicate that glucose is absorbed by the body at a rate proportional to the amount of glucose present in the bloodstream. Let \(\lambda\) denote the (positive) constant of proportionality. Now suppose glucose is injected into a patient's bloodstream at a constant rate of \(r\) units per unit of time. Let \(G=G(t)\) be the number of units in the patient's bloodstream at time \(t>0\). Then \(G'=-\lambda G+r,\) where the first term on the right is due to the absorption of the glucose by the patient's body and the second term is due to the injection.
Determine \(G\) for \(t>0\), given that \(G(0)=G_0\). Also, find \(\lim_{t\to\infty}G(t)\). Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{9}\) (a) Plot a direction field and some integral curves for equation A: \(xy'-2y=-1\) on the rectangular region \(\{-1\le x\le 1, -.5\le y\le 1.5\}\). What do all the integral curves have in common? (b) Show that the general solution of (A) on \((-\infty,0)\) and \((0,\infty)\) is \(y={1\over2}+cx^2.\) (c) Show that \(y\) is a solution of (A) on \((-\infty,\infty)\) if and only if \(y=1 \over 2+c_1x^2, x\ge 0\) and \(1\over 2+c_2x^2, x < 0,\). where \(c_1\) and \(c_2\) are arbitrary constants. (d) Conclude from \part{c} that all solutions of (A) on \((-\infty,\infty)\) are solutions of the initial value problem \(xy'-2y=-1,\quad y(0)={1\over2}.\) (e) Use part b to show that if \(x_0\ne0\) and \(y_0\) is arbitrary, then the initial value problem \(xy'-2y=-1,\quad y(x_0)=y_0\) has infinitely many solutions on \((-\infty,\infty)\). Explain why this doesn't contradict Theorem~\ref{thmtype:3.3.1} \part{b}. Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{10}\) Suppose \(f\) is continuous on an open interval \((a,b)\) and \(\alpha\) is a constant. (a) Derive a formula for the solution of the initial value problem \(y'+\alpha y=f(x),\quad y(x_0)=y_0, \) where \(x_0\) is in \((a,b)\) and \(y_0\) is an arbitrary real number. (b) Suppose \((a,b)=(a,\infty)\), \(\alpha > 0\) and \(\displaystyle{\lim_{x\to\infty} f(x)=L}\). Show that if \(y\) is the solution of (a), then \(\displaystyle{\lim_{x\to \infty} y(x)=L/\alpha}\). Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{11}\) Assume that all functions in this exercise are defined on a common interval \((a,b)\) (a) Prove: If \(y_1\) and \(y_2\) are solutions of \(y'+p(x)y=f_1(x)\) and \(y'+p(x)y=f_2(x)\) respectively, and \(c_1\) and \(c_2\) are constants, then \(y=c_1y_1+c_2y_2\) is a solution of \(y'+p(x)y=c_1f_1(x)+c_2f_2(x).\) (This is the principle of superposition) (b) Use (a) to show that if \(y_1\) and \(y_2\) are solutions of the nonhomogeneous equation \(y'+p(x)y=f(x),\) then \(y_1-y_2\) is a solution of the homogeneous equation B: \(y'+p(x)y=0\). (c) Use (a) to show that if \(y_1\) is a solution of (A) and \(y_2\) is a solution of (B), then \(y_1+y_2\) is a solution of (A). Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{12}\) Some nonlinear equations can be transformed into linear equations by changing the dependent variable. Show that if \(g'(y)y'+p(x)g(y)=f(x)\) where \(y\) is a function of \(x\) and \(g\) is a function of \(y\), then the new dependent variable \(z=g(y)\) satisfies the linear equation \(z'+p(x)z=f(x).\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{13}\) We've shown that if \(p\) and \(f\) are continuous on \((a,b)\) then every solution of equation A: \(y'+p(x)y=f(x)\) on \((a,b)\) can be written as \(y=uy_1\), where \(y_1\) is a nontrivial solution of the complementary equation for (A) and \(u'=f/y_1\). Now suppose \(f\), \(f'\), \dots, \(f^{(m)}\) and \(p\), \(p'\), \dots, \(p^{(m-1)}\) are continuous on \((a,b)\), where \(m\) is a positive integer, and define \begin{eqnarray*} f_0&=&f,\\ f_j&=&f_{j-1}'+pf_{j-1},\quad 1\le j\le m. \end{eqnarray*} Show that \(u^{(j+1)}={f_j\over y_1},\quad 0\le j\le m.\) Answer Add texts here. Do not delete this text first.
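Before moving on, here is a small symbolic check of the glucose model in Exercise 8 above. It is only a sketch of how one might verify the hand calculation (the exercise still asks for the derivation); the use of sympy is an assumption about tooling, not part of the text.

```python
import sympy as sp

# Glucose model from Exercise 8: G' = -lam*G + r, G(0) = G0, solved symbolically.
t = sp.symbols('t', nonnegative=True)
lam, r, G0 = sp.symbols('lambda r G_0', positive=True)
G = sp.Function('G')

sol = sp.dsolve(sp.Eq(G(t).diff(t), -lam * G(t) + r), G(t), ics={G(0): G0})
print(sol)                           # G(t) = r/lam + (G0 - r/lam)*exp(-lam*t)
print(sp.limit(sol.rhs, t, sp.oo))   # -> r/lam, the limiting glucose level
```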
In Exercises \((3.5E.1)\) to \((3.5E.6)\), find all solutions. Exercise \(\PageIndex{1}\) \(\displaystyle{y'={3x^2+2x+1\over y-2}}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{2}\) \((\sin x)(\sin y)+(\cos y)y'=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{3}\) \(xy'+y^2+y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{4}\) \(y' \ln |y|+x^2y= 0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{5}\) \(\displaystyle{(3y^3+3y \cos y+1)y'+{(2x+1)y\over 1+x^2}=0}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{6}\) \(x^2yy'=(y^2-1)^{3/2}\) Answer Add texts here. Do not delete this text first. In Exercises \((3.5E.7)\) to \((3.5E.10)\), find all solutions. Also, plot a direction field and some integral curves on the indicated rectangular region. Exercise \(\PageIndex{7}\) \(\displaystyle{y'=x^2(1+y^2)}; \; \{-1\le x\le1,\ -1\le y\le1\}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{8}\) \( y'(1+x^2)+xy=0 ; \; \{-2\le x\le2,\ -1\le y\le1\}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{9}\) \(y'=(x-1)(y-1)(y-2); \; \{-2\le x\le2,\ -3\le y\le3\}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{10}\) \((y-1)^2y'=2x+3; \; \{-2\le x\le2,\ -2\le y\le5\}\) Answer Add texts here. Do not delete this text first. In Exercises \((3.5E.11)\) to \((3.5E.12)\), solve the initial value problem. Exercise \(\PageIndex{11}\) \(\displaystyle{y'={x^2+3x+2\over y-2}, \quad y(1)=4}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{12}\) \(y'+x(y^2+y)=0, \quad y(2)=1\) Answer Add texts here. Do not delete this text first. In Exercises \((3.5E.13)\) to \((3.5E.16)\), solve the initial value problem and graph the solution. Exercise \(\PageIndex{13}\) \((3y^2+4y)y'+2x+\cos x=0, \quad y(0)=1\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{14}\) \(\displaystyle{y'+{(y+1)(y-1)(y-2)\over x+1}=0, \quad y(1)=0}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{15}\) \(y'+2x(y+1)=0, \quad y(0)=2\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{16}\) \(y'=2xy(1+y^2),\quad y(0)=1\) Answer Add texts here. Do not delete this text first. In Exercises \((3.5E.17)\) to \((3.5E.23)\), solve the initial value problem and find the interval of validity of the solution. Exercise \(\PageIndex{17}\) \(y'(x^2+2)+ 4x(y^2+2y+1)=0, \quad y(1)=-1\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{18}\) \(y'=-2x(y^2-3y+2), \quad y(0)=3\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{19}\) \(\displaystyle{y'={2x\over 1+2y}, \quad y(2)=0}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{20}\) \(y'=2y-y^2, \quad y(0)=1\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{21}\) \(x+yy'=0, \quad y(3) =-4\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{22}\) \(y'+x^2(y+1)(y-2)^2=0, \quad y(4)=2\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{23}\) \((x+1)(x-2)y'+y=0, \quad y(1)=-3\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{24}\) Solve \(\displaystyle{y'={(1+y^2) \over (1+x^2)}}\) explicitly. Hint: Use the identity \(\displaystyle{\tan(A+B)={\tan A+\tan B\over1-\tan A\tan B}}\). 
Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{25}\) Solve \(\displaystyle {y'\sqrt{1-x^2}+\sqrt{1-y^2}=0}\) explicitly. Hint: Use the identity \(\sin(A-B)=\sin A\cos B-\cos A\sin B\). Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{26}\) Solve \(\displaystyle{y'={\cos x\over \sin y},\quad y (\pi)={\pi\over2}}\) explicitly. Hint: Use the identity \(\cos(x+\pi/2)=-\sin x\) and the periodicity of the cosine. Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{27}\) Solve the initial value problem \begin{eqnarray*} y'=ay-by^2,\quad y(0)=y_0. \end{eqnarray*} Discuss the behavior of the solution if part (a) \(y_0\ge0\); part (b) \(y_0<0\). Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{28}\) The population \(P=P(t)\) of a species satisfies the logistic equation \begin{eqnarray*} P'=aP(1-\alpha P) \end{eqnarray*} and \(P(0)=P_0>0\). Find \(P\) for \(t>0\), and find \(\lim_{t\to\infty}P(t)\). Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{29}\) An epidemic spreads through a population at a rate proportional to the product of the number of people already infected and the number of people susceptible, but not yet infected. Therefore, if \(S\) denotes the total population of susceptible people and \(I=I(t)\) denotes the number of infected people at time \(t\), then \begin{eqnarray*} I'=rI(S-I), \end{eqnarray*} where \(r\) is a positive constant. Assuming that \(I(0)=I_0\), find \(I(t)\) for \(t>0\), and show that \(\lim_{t\to\infty}I(t)=S\). Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{30}\) The result of Exercise \((3.5E.29)\) is discouraging: if any susceptible member of the group is initially infected, then in the long run all susceptible members are infected! On a more hopeful note, suppose the disease spreads according to the model of Exercise \((3.5E.29)\), but there's a medication that cures the infected population at a rate proportional to the number of infected individuals. Now the equation for the number of infected individuals becomes \begin{equation} \label{eq:3.5E.1} I'=rI(S-I)-qI \end{equation} where \(q\) is a positive constant. (a) Choose \(r\) and \(S\) positive. By plotting direction fields and solutions of \eqref{eq:3.5E.1} on suitable rectangular grids \begin{eqnarray*} R=\{0\le t \le T,\ 0\le I \le d\} \end{eqnarray*} in the \((t,I)\)-plane, verify that if \(I\) is any solution of \eqref{eq:3.5E.1} such that \(I(0)>0\), then \(\lim_{t\to\infty}I(t)=S-q/r\) if \(q<rS\) and \(\lim_{t\to\infty}I(t)=0\) if \(q\ge rS\). (b) To verify the experimental results of part (a), use separation of variables to solve \eqref{eq:3.5E.1} with initial condition \(I(0)=I_0>0\), and find \(\lim_{t\to\infty}I(t)\). Hint: There are three cases to consider: part(i) \(q<rS\); part(ii) \(q>rS\); part(iii) \(q=rS\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{31}\) Consider the differential equation \begin{equation} \label{eq:3.5E.2} y'=ay-by^2-q, \end{equation} where \(a\), \(b\) are positive constants, and \(q\) is an arbitrary constant. Suppose \(y\) denotes a solution of this equation that satisfies the initial condition \(y(0)=y_0\). (a) Choose \(a\) and \(b\) positive and \(q<a^2/4b\). 
By plotting direction fields and solutions of \eqref{eq:3.5E.2} on suitable rectangular grids \begin{equation} \label{eq:3.5E.3} R=\{0\le t \le T,\ c\le y \le d\} \end{equation} in the \((t,y)\)-plane, discover that there are numbers \(y_1\) and \(y_2\) with \(y_1<y_2\) such that if \(y_0>y_1\) then \(\lim_{t\to\infty}y(t)=y_2\), and if \(y_0<y_1\) then \(y(t)=-\infty\) for some finite value of \(t\). (What happens if \(y_0=y_1\)?)

(b) Choose \(a\) and \(b\) positive and \(q=a^2/4b\). By plotting direction fields and solutions of \eqref{eq:3.5E.2} on suitable rectangular grids of the form \eqref{eq:3.5E.3}, discover that there's a number \(y_1\) such that if \(y_0\ge y_1\) then \(\lim_{t\to\infty}y(t)=y_1\), while if \(y_0<y_1\) then \(y(t)=-\infty\) for some finite value of \(t\).

(c) Choose positive \(a\), \(b\) and \(q>a^2/4b\). By plotting direction fields and solutions of \eqref{eq:3.5E.2} on suitable rectangular grids of the form \eqref{eq:3.5E.3}, discover that no matter what \(y_0\) is, \(y(t)=-\infty\) for some finite value of \(t\).

(d) Verify your experimental results analytically. Start by separating variables in \eqref{eq:3.5E.2} to obtain \begin{eqnarray*} {y'\over ay-by^2-q}=1. \end{eqnarray*} To decide what to do next you'll have to use the quadratic formula. This should lead you to see why there are three cases. Take it from there!

Because of its role in the transition between these three cases, \(q_0=a^2/4b\) is called a bifurcation value of \(q\). In general, if \(q\) is a parameter in any differential equation, \(q_0\) is said to be a bifurcation value of \(q\) if the nature of the solutions of the equation with \(q<q_0\) is qualitatively different from the nature of the solutions with \(q>q_0\).

Answer Add texts here. Do not delete this text first.

Exercise \(\PageIndex{32}\) By plotting direction fields and solutions of \(y'=qy-y^3\), convince yourself that \(q_0=0\) is a bifurcation value of \(q\) for this equation. Explain what makes you draw this conclusion. (A short numerical sketch of this bifurcation appears after these exercises.)

Answer Add texts here. Do not delete this text first.

Exercise \(\PageIndex{33}\) Suppose a disease spreads according to the model of Exercise \((3.5E.29)\), but there's a medication that cures the infected population at a constant rate of \(q\) individuals per unit time, where \(q>0\). Then the equation for the number of infected individuals becomes \begin{eqnarray*} I'=rI(S-I)-q. \end{eqnarray*} Assuming that \(I(0)=I_0>0\), use the results of Exercise \((3.5E.31)\) to describe what happens as \(t\to\infty\).

Answer Add texts here. Do not delete this text first.

Exercise \(\PageIndex{34}\) Assuming that \(p \not\equiv 0\), state conditions under which the linear equation \begin{eqnarray*} y'+p(x)y=f(x) \end{eqnarray*} is separable. If the equation satisfies these conditions, solve it by separation of variables and by the method developed in Section 2.1.

Answer Add texts here. Do not delete this text first.

Solve the equations in Exercises \((3.5E.35)\) to \((3.5E.38)\) using variation of parameters followed by separation of variables.

Exercise \(\PageIndex{35}\) \(\displaystyle{y'+y={2xe^{-x}\over 1+ye^x}}\)

Answer Add texts here. Do not delete this text first.

Exercise \(\PageIndex{36}\) \(\displaystyle{xy'-2y={x^6\over y+x^2}}\)

Answer Add texts here. Do not delete this text first.
Exercise \(\PageIndex{37}\) \(\displaystyle{y'-y={(x+1)e^{4x}\over (y+e^x)^2}}\)

Answer Add texts here. Do not delete this text first.

Exercise \(\PageIndex{38}\) \(\displaystyle{y'-2y={xe^{2x}\over 1-ye^{-2x}}}\)

Answer Add texts here. Do not delete this text first.

Exercise \(\PageIndex{39}\) Use variation of parameters to show that the solutions of the following equations are of the form \(y=uy_1\), where \(u\) satisfies a separable equation \(u'=g(x)p(u)\). Find \(y_1\) and \(g\) for each equation.

(a) \(xy'+y=h(x)p(xy)\)
(b) \(\displaystyle{xy'-y=h(x)p\left({y\over x}\right)}\)
(c) \(y'+y=h(x)p(e^xy)\)
(d) \(xy'+ry=h(x)p(x^ry)\)
(e) \(\displaystyle{y'+{v'(x)\over v(x)}y=h(x)p\left(v(x)y\right)}\)

Answer Add texts here. Do not delete this text first.
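The following is the short numerical sketch referenced in Exercise \(\PageIndex{32}\) above, illustrating the bifurcation at \(q_0=0\) for \(y'=qy-y^3\). It is only an illustration: the parameter values and initial conditions are arbitrary choices, and the exercise still asks for direction fields and an explanation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# For q < 0 every solution shown decays to 0; for q > 0 solutions starting
# away from 0 approach the new equilibria +/- sqrt(q). Parameter values are
# assumptions chosen only for the demonstration.
def rhs(t, y, q):
    return q * y - y**3

for q in (-1.0, 1.0):
    eq = [0.0] + ([np.sqrt(q), -np.sqrt(q)] if q > 0 else [])
    print(f"q = {q}: equilibria of q*y - y^3 = {eq}")
    for y0 in (-0.5, 0.1, 2.0):
        sol = solve_ivp(rhs, (0.0, 20.0), [y0], args=(q,), rtol=1e-8)
        print(f"  y(0) = {y0:5.2f}  ->  y(20) ~ {sol.y[0, -1]: .4f}")
```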
Simon Wren-Lewis sends us via Twitter to Medium for an exquisite example of my personal definition of mathiness: using math to obscure rather than enlighten. Here's the article in a nutshell: Any proposed government policy is challenged with the same question: “how are you going to pay for it”. The answer is: “by spending the money”. Which may sound counter intuitive, but we can show how by using a bit of mathematics. [a series of mathematical definitions] And that is why you pay for government expenditure by spending the money[1] . The outlay will be matched by taxation and excess saving to the penny after n transactions. Expressing it using mathematics allows you to see what changing taxation rates attempts to do.It is trying to increase and decrease the magnitude of n — the number of transactions induced by the outlay. It has nothing to do with the monetary amount. I emphasized a sentence that I will go back to in the end. But first let's delve into those mathematical definitions, shall we? And yes, almost every equation in the article is a definition. The first set of equations are definitions of initial conditions. The second is a definition of the relationship between $f$ and $T$ and $S$. The third set of equations define $T$. The fourth defines $S$. The fifth defines $r$. The sixth defines the domain of $f$, $T$, and $S$. Only the seventh isn't a definition. It's just a direct consequence of the previous six as we shall see. The main equation defined is this: \text{(1) }\; f(t) \equiv f(0) - \sum_{i}^{t} \left( T_{i} + S_{i}\right) $$ It's put up on the top of the blog post as if it's $S = k \log W$ on Boltzmann's grave. Already we've started some obfuscation because $f(0)$ is previously set to be $X$, but let's move on. What does this equation say? As yet, not much. For each $i < t$, we take a bite out of $f(0)$ that we arbitrarily separate into $T$ and $S$ which we call taxes and saving because those are things that exist in the real world and so their use may lend some weight to what is really just a definition that: K(t) \equiv M - N(t) $$ In fact we can rearrange these terms and say: $$ \begin{align} f(t) \equiv & f(0) - \sum_{i}^{t} T_{i} - \sum_{i}^{t} S_{i}\\ f(t) \equiv & M - T(t) - S(t)\\ K(t) \equiv & M - N(t) \end{align} $$ As you can probably tell, this is about national income accounting identities. In fact, that is Simon Wren-Lewis's point. But let's push forward. The article defines $T$ in terms of a tax rate $0 \leq r < 1$ on $f(t-1)$. However, instead of defining $S$ analogously in terms of a savings rate $0 \leq s < 1$ on $f(t-1)$, the article obfuscates this as a "constraint" f(t-1) - T_{t} - S_{t} \geq 0 $$ Let's rewrite this with a bit more clarity using a savings rate, substituting the definition of $T$ in terms of a tax rate $r$: \begin{align} f(t-1) - r_{t} f(t-1) - S_{t} & \geq 0\\ (1- r_{t}) f(t-1) - S_{t} & \geq 0\\ s_{t} (1- r_{t}) f(t-1) & \equiv S_{t} \; \text{given}\; 0 \leq s_{t} < 1 \end{align} $$ Let's put both the re-definition of $T_{i}$ and this re-definition of $S_{i}$ in equation (1), where we can now solve the recursion and obtain f(t) \equiv f(0) \prod_{i}^{t} \left(1-r_{i} \right) \left(1-s_{i} \right) $$ This equation isn't derived in the Medium article (and it really doesn't simplify the recursive equation without defining the savings rate). Note that both $s_{i}$ and $r_{i}$ are positive numbers less than 1. There's an additional definition that says that $r_{t}$ can't be zero for all times. 
Therefore the product of (one minus) those numbers is another number $0 < a_{i} < 1$ (my real analysis class did come in handy!) so what we really have is:

$$ \text{(2) }\; f(t) \equiv f(0) \prod_{i}^{t} a_{i} $$

And as we all know, if you multiply a number by a number that is less than one, it gets smaller. If you do that a bunch of times, it gets smaller still. In fact, that is the content of all of the mathematical definitions in the Medium post. You can call it the polite cheese theorem. If you put out a piece of cheese at a party, and if people take a non-zero fraction of it each half hour, those pieces will get smaller and smaller but eventually there is nothing left (i.e. somebody takes the last bit of cheese when it is small enough). Which is to say for $t \gg 1$ (with dimensionless time) $X \equiv T + S$ because $f(t) = 0$ with $t \gg 1$. But that's just an accounting identity and the article just obfuscated that fact by writing it in terms of a recursive function. Anyway, I wrote it all up in Mathematica in footnote [2].

Now back to that emphasized sentence above:

Expressing it using mathematics allows you to see what changing taxation rates attempts to do.

No. No it doesn't. If I write $Y = C + S + T$ per the accounting identities, then a change in $T$ by $\delta T$ means [3]

$$ \text{(3) }\; \delta Y = \left( \frac{\partial C}{\partial T}+ \frac{\partial S}{\partial T} + 1 \right) \delta T $$

Does consumption rise or fall with increased taxation rates? Does saving rise or fall with increased taxation rates? Whatever the answers to those questions are, they are either models or empirical regularities. The math just helps you figure out the possibilities; it doesn't specify which occurs (for that you need data). The Medium article claims that all that changes is how fast $f(t)$ falls (i.e. the number of transactions before it reaches zero). However that's just the consequence of the assumptions leading to equation (2). And those assumptions represent assumptions about $\partial C/\partial T$ (and to a lesser extent $\partial S/\partial T$).

Let's rearrange equation (3) and use $G = T + S$ [4]:

$$ \begin{align} \delta Y = & \frac{\partial C}{\partial T}\delta T + \frac{\partial S}{\partial T}\delta T + \delta T \\ \delta Y = & \frac{\partial C}{\partial T}\delta T + \frac{\partial G}{\partial T}\delta T \\ \delta Y = & \frac{\partial C}{\partial T}\delta T + \delta G \end{align} $$

And there's where we see the obfuscation of the original prior. In the Medium article, $f(0) = X$ is first called the "initial government outlay". It's $\delta G$. However, later $f(t-1)$ is called "disposable income". That is to say it's $\delta Y - \delta T$. However those two statements are impossible to reconcile with the accounting identities unless $X$ is the initial government outlay net of taxes, meaning it is $\delta G - \delta T$. In that case we can reconcile the statements, but only if $\partial C/\partial T = 0$, because then

$$ \begin{align} \delta Y - \delta T & = \delta G - \delta T\\ \delta Y & = \delta G \end{align} $$

This was a long journey to essentially arrive at the prior behind MMT: government spending is private income, and government spending does not offset private consumption. It was obfuscated by several equations that I clipped out of the quote at the top of this post. And you can see how that prior leads right to the "counterintuitive" statement at the beginning of the quote:

Any proposed government policy is challenged with the same question: “how are you going to pay for it”.
The answer is: “by spending the money”. Which may sound counter intuitive, but we can show how by using a bit of mathematics. No, you don't need the mathematics. If government spending is private income, then (assuming there is only a private and a public sector) private spending is government "income" (i.e. paying the government outlay back by private spending). Now is this true? For me, it's hard to imagine that $\partial C/\partial T = 0$ or $\delta Y = \delta G$ exactly. The latter is probably a good approximation (effective theory) at the zero lower bound or for low inflation (it's a similar result to the IS-LM model). For small taxation changes, we can probably assume $\partial C/\partial T \approx 0$. Overall, I have no real problem with it. It's probably not a completely wrong collection of assumptions. What I do have a problem with, however, is the unnecessary mathiness. I think it's there to cover up the founding principle of MMT that government spending is private income. Why? I don't know. Maybe they don't think people will accept that government spending is their income (which could easily be construed as saying we're all on welfare)? Noah Smith called MMT a kind of halfway house for Austrian school devotees, so maybe there's some residual shame about interventionism? Maybe MMT people don't really care about empirical data, and so there's just an effluence of theory? Maybe MMT people don't want to say they're making unfounded assumptions just like mainstream economists (or anyone, really) and so hide them "chameleon model"-style a laPaul Pfleiderer. Whatever the reason (I like the last one), all the stock-flow analysis, complex accounting, and details of how the monetary system works serve mainly to obscure the primary point that government spending is private income for us as a society. It's really just a consequence of the fact that your spending is my income and vice versa. That understanding is used to motivate a case against austerity: government cutting spending is equivalent to cutting private income. From there, MMT people tell us austerity is bad and fiscal stimulus is good. This advice is not terribly different from what Keynesian economics says. And again, I have no real problem with it. I'm sure I will get some comments that say I've completely misunderstood MMT and that it's really about something else. But please don't forget to tell us all what that "something else" is. But the statement here that "money is a tax credit" plus accounting really does say, basically, that government spending is our income. There seems to be a substitution of mathematics for understanding. In fact, the Medium article seems to think the derivation it goes through is necessary to derive its conclusion. But how can a series of definitions lead to anything that isn't itself effectively a definition? Let me give you an analogy. Through a series of definitions (which I have done as an undergrad math major in that same real analysis course mentioned above), I can come to the statement \frac{df(x)}{dx} = 0 $$ implies $x$ optimizes $f(x)$ (minimum or maximum). There's a bunch of set theory (Dedekind cuts) and some other theorems that can be proven along the way (e.g. the mean value theorem). This really tells us nothing about the real world unless we make some connection to it however. 
For example, I could call $f(x)$ tax revenue and $x$ the tax rate ‒ and adding some other definitions ($f(x) > 0$ except $f(0) = f(1) = 0$) and say that the Laffer curve is something you can clearly see if you just express it in terms of mathematics. The thing is that the Laffer curve is really just a consequence of those particular definitions. The question of whether or not it's a useful consequence of those definitions depends on comparing the "Laffer theory" to data. Likewise, whether or not "private spending pays off government spending" is a useful consequence of the definitions in the Medium article critically depend on whether or not the MMT definitions used result in a good empirical description of a macroeconomy. Without comparing models to data, physics would just be a bunch of mathematical philosophy. And without comparing macroeconomic models to data, economics is just a bunch of mathematical philosophy. ... Update 5 May 2017: Here's a graphical depiction of the different ways an identity $G = B + R$ can change depending on assumptions. These would be good pictures to use to try and figure out which one someone has in their head. For example, Neil has the top-right picture in his head. The crowding out picture is the bottom-right. You could call the picture on the bottom-left a "multiplier" picture. Update 6 May 2017:Fixed the bottom left quadrant of the picture to match the top right quadrant. ... Footnotes: [1] "And that's why I don't like cricket." [YouTube] This is basically equivalent to what is done in the Medium article. [2] Here you go: [2] Here you go: [3] If someone dares to say something about discrete versus continuous variables I will smack you down with some algebraic topology [pdf]. [4] I think people who reason from accounting identities seem to make the same mistakes that undergrad physics students make when reasoning from thermodynamic potentials. Actually, in the information equilibrium ensemble picture this becomes a more explicit analogy.
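As an illustration of the recursion discussed above (and of why the conclusion is baked into the definitions), here is a minimal sketch with made-up tax and saving rates and an arbitrary outlay. It just confirms numerically that the cumulative taxes plus saving approach the initial outlay $f(0)$, which is the "polite cheese theorem" in action.

```python
# At each step a fraction r is taxed and a fraction s of what's left is
# saved, so f(t) = f(0) * prod_i (1 - r_i)(1 - s_i). The rates and outlay
# below are arbitrary illustrative assumptions.
f0 = 100.0      # initial outlay X
r, s = 0.25, 0.10

f = f0
total_T = total_S = 0.0
for i in range(60):
    T_i = r * f                 # taxed out of the remaining flow
    S_i = s * (f - T_i)         # saved out of what's left after tax
    f -= T_i + S_i              # equivalently f *= (1 - r) * (1 - s)
    total_T += T_i
    total_S += S_i

print(f"remaining f(t):        {f:.6f}")
print(f"taxes + saving so far: {total_T + total_S:.6f}  (outlay was {f0})")
```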
Assume the Earth to be a uniform sphere of mass $M$ and radius $R$. Also let the Earth be stationary initially. Assume further that all other stellar bodies are very far away, so they have no influence in this problem. A ball of mass $m$ is to be thrown from the surface of the Earth so that it ultimately ends up moving in a circular orbit at a height $h$ from the surface of the Earth. How should the ball be thrown; that is, what should be its initial velocity and, in particular, what should be the angle $\theta$ of the initial velocity with the radial vector from the Earth's center to its initial position on the surface of the Earth?

My attempt at the solution: Since the Earth-ball system is isolated, its energy is conserved. Let the initial speed of the ball be $v_0$ and the final speed be $v_f$. We have the following equation: $$ \frac{1}{2}m v_0 ^2 - \frac{GMm}{R} = \frac{1}{2}mv_f^2 - \frac{GMm}{R+h}$$ Next, we know that when the ball is finally in a circular orbit, the gravitational force provides the centripetal acceleration needed to maintain circular motion, which gives the following equation: $$ \frac{mv_f^2}{R+h} = \frac{GMm}{(R+h)^2} $$ Besides, since gravity is a central force, the angular momentum of the ball about the center of the Earth is conserved, which yields the following equation: $$ mv_0R \sin(\theta) = mv_f(R+h) $$ That completes setting up three equations for the three unknowns $v_0, v_f, \theta$. Solving them, we obtain the following values for each: $$ v_0 = \sqrt\frac{GM(R+2h)}{R(R+h)} $$ $$ v_f = \sqrt\frac{GM}{R+h} $$ $$ \theta = \sin^{-1} \Biggl( \frac{1+\frac{h}{R}}{\sqrt{1+\frac{2h}{R}}} \Biggr) $$

However, it is observed that for $\theta$, the argument of $\sin^{-1}$ is always greater than or equal to one, being equal to one only if $h$ is zero. The proof may be carried out by assuming that the argument of $\sin^{-1}$ is less than or equal to one for $h \geq 0$; solving the inequality then yields that $h$ must be less than or equal to zero. So the only acceptable solution is $h = 0$. This just winds up being the case of throwing the ball tangentially with sufficient speed so that it keeps orbiting at the Earth's surface itself. Thus, there is no possible way to just throw a ball so that it ends up in a circular orbit of a certain desired radius.

The problem is that I don't see any physical intuition for why this should be true; either that or I am doing something wrong. Any guidance or help would be greatly appreciated!
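Not an answer, but a quick numerical check of the algebra in the question: the quantity that would have to equal $\sin\theta$, namely $(1+h/R)/\sqrt{1+2h/R}$, is indeed $\geq 1$ for every $h \geq 0$, with equality only at $h = 0$. The grid of $h/R$ values below is an arbitrary choice for illustration.

```python
import numpy as np

# Evaluate the would-be sin(theta) argument on a grid of x = h/R values.
x = np.linspace(0.0, 10.0, 1001)        # x = h/R
arg = (1.0 + x) / np.sqrt(1.0 + 2.0 * x)

print("minimum of the sin(theta) argument:", arg.min())      # 1.0, at h = 0
print("value at h/R = 1:", (1.0 + 1.0) / np.sqrt(3.0))       # ~1.1547 > 1

# Algebraically the same statement: (1+x)^2 - (1+2x) = x^2 >= 0.
```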
Vitali variation

A generalization to functions of several variables of the Variation of a function of one variable, proposed by Vitali in [Vi] (see also [Ha]). The same definition of variation was subsequently proposed by H. Lebesgue [Le] and M. Fréchet [Fr] and it is sometimes called Fréchet variation. However, the modern theory of functions of bounded variation uses a different generalization (see Function of bounded variation and Variation of a function). Therefore the Vitali variation is seldom used nowadays.

Consider a rectangle $R:= [a_1, b_1]\times \ldots \times [a_n, b_n]\subset \mathbb R^n$ and a function $f:R\to \mathbb R$. We define \[ \Delta_{h_k} (f, x) := f (x_1, \ldots, x_k+ h_k, \ldots, x_n) - f(x_1, \ldots, x_k, \ldots, x_n) \] and, recursively, \[ \Delta_{h_1h_2\ldots h_k} (f, x):= \Delta_{h_k} \left(\Delta_{h_1\ldots h_{k-1}} , x\right)\, . \] Consider next the collection $\Pi_k$ of finite ordered families $\pi_k$ of points $t_k^1 < t_k^2< \ldots < t_k^{N_k+1}\in [a_k, b_k]$. For each such $\pi_k$ we denote by $h^i_k$ the difference $t_k^{i+1}- t_k^i$.

Definition We define the Vitali variation of $f$ to be the supremum over $(\pi_1, \ldots, \pi_n)\in \Pi_1\times \ldots \times \Pi_n$ of the sums \begin{equation}\label{e:v_variation} \sum_{i_1=1}^{N_1} \ldots \sum_{i_n=1}^{N_n} \left|\Delta_{h^{i_1}_1\ldots h^{i_n}_n} \left(f, \left(x^{i_1}_1, \ldots, x^{i_n}_n\right)\right)\right|\, . \end{equation} If the Vitali variation is finite, then one says that $f$ has bounded (finite) Vitali variation.

$f$ has finite Vitali variation if and only if it can be written as the difference of two functions for which all the sums of type \eqref{e:v_variation} are nonnegative. This statement generalizes the Jordan decomposition of a function of bounded variation of one variable.

The class of functions with finite Vitali variation may be used to introduce the multi-dimensional Stieltjes integral, as was observed in [Fr]. Fréchet also used it in [Fr1] to study continuous bilinear functionals on the space of continuous functions of two variables of the form $(x_1,x_2)\mapsto \phi (x_1)\phi (x_2)$. The classical Jordan criterion for the convergence of Fourier series can be extended to functions which have finite Vitali variation (see [MT]). In particular, if a function $f$ on $[0,2\pi]^n$ has finite Vitali variation, then the rectangular partial sums of its Fourier series converge at every $x=(x_1, \ldots, x_n)$ to the value \[ \frac{1}{2^n} \sum f(x_1^\pm, \ldots, x_n^\pm)\, . \]

References

[Fr] M. Fréchet, "Extension au cas d'intégrales multiples d'une définition de l'intégrale due à Stieltjes" Nouv. Ann. Math. ser. 4, 10 (1910) pp. 241–256. JFM Zbl 41.0333.02
[Fr1] M. Fréchet, "Sur les fonctionelles bilinéaires" Trans. Amer. Math. Soc., 16 : 3 (1915) pp. 215–234. JFM Zbl 45.0546.01
[Ha] H. Hahn, "Theorie der reellen Funktionen", 1, Springer (1921). JFM Zbl 48.0261.09
[Le] H. Lebesgue, "Sur l'intégration des fonctions discontinues" Ann. Sci. École Norm. Sup. (3), 27 (1910) pp. 361–450.
[MT] M. Morse, W. Transue, "The Fréchet variation and the convergence of multiple Fourier series" Proc. Nat. Acad. Sci. USA, 35 : 7 (1949) pp. 395–399. MR0030587 Zbl 0033.35901
[Ri] F. Riesz, B. Szökefalvi-Nagy, "Functional analysis", F. Ungar (1955). MR0071727 Zbl 0732.47001 Zbl 0070.10902 Zbl 0046.33103
[Vi] G. Vitali, "Sui gruppi di punti e sulle funzioni di variabili reali" Atti Accad. Sci. Torino, 43 (1908) pp. 75–92. JFM Zbl 39.0101.05
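A small numerical illustration of the sums in \eqref{e:v_variation} for $n=2$ may help fix the definition; the test functions and uniform partitions below are illustrative choices only.

```python
import numpy as np

# Sums from the definition above for n = 2 on the square [0,1]^2 with
# uniform partitions. For a smooth function they approach the integral of
# |d^2 f / dx dy|; for f(x,y) = x*y every mixed difference equals h1*h2,
# so the sum is exactly 1 whatever the partition.
def vitali_sum(f, n1, n2):
    x = np.linspace(0.0, 1.0, n1 + 1)
    y = np.linspace(0.0, 1.0, n2 + 1)
    F = f(x[:, None], y[None, :])
    # second mixed difference Delta_{h1 h2}(f, (x_i, y_j)) on each cell
    d = F[1:, 1:] - F[:-1, 1:] - F[1:, :-1] + F[:-1, :-1]
    return np.abs(d).sum()

print(vitali_sum(lambda x, y: x * y, 10, 17))            # -> 1.0
print(vitali_sum(lambda x, y: np.sin(x * y), 200, 200))  # ~ integral of the mixed partial's absolute value
```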
This might be a slightly daft question as I'm a physicist rather than a chemist, but I have a slight problem in using Henry's law which I think I can circumvent using the ideal gas law – I'd be grateful if anyone can confirm or deny my logic! I have an equation that yields the volume of oxygen gas per unit mass of tissue. We'll call this quantity $C$ with S.I units of $\mathrm{m^3\: kg^{-1}}$. We also have the partial pressure $p$ measured in $\mathrm{mmHg}$ and we wish to relate the two; several texts have suggested using Henry's law of the form $C = kp$ where $k$ is Henry's constant for oxygen. However, this causes a problem, as I can find no form of Henry's law with the correct units to allow this conversions (units of $\text{vol} / ({\text{pressure} \times \text{mass}})$). My first question is does such a series of constants exist and does anyone know where? In the interim, I decided to try to circumvent this problem by using the ideal gas law, which I believe holds at the relatively low pressures in very shallow water. Knowing one mole of oxygen gas has a mass of $32\:\mathrm{g}$, and occupies a volume of $22.4\:\mathrm{l}$ at standard temperature and pressure of $760\:\mathrm{mmHg}$, I re-wrote the expression for the mass of $\ce{O_2}$ in a given volume; $m_{\ce{O_2}} = \frac{0.024\,p}{(0.032)760RT} = \frac{0.7\,p}{760RT}$ Using the ideal gas constant, and a temperature of $310.15\:\mathrm{K}$, I get $m_{\ce{O_2}} = 3.572 \times 10^{-7} \cdot p$ If I define $M_{\ce{O_2}}$ as the unitless mass fraction of $m_{\ce{O_2}}$ per unit mass, then knowing the density of oxygen gas is around $1.331\:\mathrm{kg/m^3}$, I can divide this into $M_{\ce{O_2}}$ to get the expression $C = 2.6835 \times 10^{-7} \cdot p$ which has the correct units. My second question is then is this valid, and if not, why not?
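For what it's worth, here is the unit bookkeeping of the ideal-gas detour written out as a short script. It only checks that the conversion from partial pressure to moles, mass, and STP volume of O2 per cubic metre of gas phase is dimensionally consistent; it does not settle whether this is a valid stand-in for Henry's law in tissue, which is exactly the question. The example pressure is an arbitrary assumption.

```python
# Ideal-gas bookkeeping for O2 at a given partial pressure and 310 K.
R = 8.314              # J / (mol K)
M_O2 = 0.032           # kg / mol
V_molar_STP = 0.0224   # m^3 / mol, molar volume quoted in the question
T = 310.15             # K
mmHg_to_Pa = 101325.0 / 760.0

p_mmHg = 100.0                           # arbitrary example partial pressure
p = p_mmHg * mmHg_to_Pa                  # Pa

n_per_m3 = p / (R * T)                   # mol of O2 per m^3 of gas (ideal gas law)
mass_per_m3 = n_per_m3 * M_O2            # kg of O2 per m^3 of gas
vol_STP_per_m3 = n_per_m3 * V_molar_STP  # m^3 of O2 at STP per m^3 of gas

print(f"{p_mmHg} mmHg at 310 K: {n_per_m3:.3f} mol/m^3, "
      f"{mass_per_m3 * 1e3:.2f} g/m^3, {vol_STP_per_m3 * 1e3:.2f} L(STP)/m^3")
```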
Let $f(x)$ be a one-one, polynomial function such that $f(x)f(y)+2=f(x)+f(y)+f(xy) \ \forall \ x,y \in \mathbb R - \{0\}$, $f(1) \neq 1$, $f'(1)=3$. Find $f(x)$. I tried to find the degree of the polynomial from the equation by using suitable substitution, but it didn't work. Also, I found that $f(1)=2$ and then I substituted $y=\dfrac{1}{x}$ to get $f(x)f(\dfrac{1}{x})= f(x) + f(\dfrac{1}{x})$. But I can't simplify further. Also, the answer given in my book is $f(x)=x^3+1$. Any help will be appreciated. Thanks.
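A quick symbolic check (not a derivation) that the book's answer $f(x)=x^3+1$ is consistent with the functional equation and the given conditions:

```python
import sympy as sp

# Verify f(x)*f(y) + 2 == f(x) + f(y) + f(x*y), plus f(1) != 1 and f'(1) = 3.
x, y = sp.symbols('x y', nonzero=True)
f = lambda t: t**3 + 1

lhs = f(x) * f(y) + 2
rhs = f(x) + f(y) + f(x * y)
print(sp.simplify(lhs - rhs))                 # 0  -> the identity holds
print(f(1), sp.diff(f(x), x).subs(x, 1))      # 2 (so f(1) != 1) and f'(1) = 3
```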
Specific Heats, Isotherms, Adiabatics

Introduction: the Ideal Gas Model, Heat, Work and Thermodynamics

The Kinetic Theory picture of a gas (outlined in the previous lecture) is often called the Ideal Gas Model. It ignores interactions between molecules, and the finite size of molecules. In fact, though, these only become important when the gas is very close to the temperature at which it becomes liquid, or under extremely high pressure. In this lecture, we will be analyzing the behavior of gases in the pressure and temperature range corresponding to heat engines, and in this range the Ideal Gas Model is an excellent approximation. Essentially, our program here is to learn how gases absorb heat and turn it into work, and vice versa. This heat-work interplay is called thermodynamics.

Julius Robert Mayer was the first to appreciate that there is an equivalence between heat and mechanical work. The tortuous path that led him to this conclusion is described in an earlier lecture, but once he was there, he realized that in fact the numerical equivalence—how many Joules in one calorie in present day terminology—could be figured out easily from the results of some measurements of gas specific heat by French scientists. The key was that they had measured specific heats both at constant volume and at constant pressure. Mayer realized that in the latter case, heating the gas necessarily increased its volume, and the gas therefore did work in pushing to expand its container. Having convinced himself that mechanical work and heat were equivalent, evidently the extra heat needed to raise the temperature of the gas at constant pressure was exactly the work the gas did on its container. (Historical note: although he did the work in 1842, he didn't publish until 1845, and at first miscalculated—but then gave a figure within 1% of the correct value of 4.2 joules per calorie.)

The simplest way to see what's going on is to imagine the gas in a cylinder, held in by a piston, carrying a fixed weight, able to move up and down the cylinder smoothly with negligible friction. The pressure on the gas is just the total weight pressing down divided by the area of the piston, and this total weight, of course, will not change as the piston moves slowly up or down: the gas is at constant pressure.

The Gas Specific Heats \(C_v\) and \(C_p\)

Consider now the two specific heats of this same sample of gas, let's say one mole:

Specific heat at constant volume, \( C_v \) (piston glued in place),
Specific heat at constant pressure, \( C_p \) (piston free to rise, no friction).

In fact, we already worked out \( C_v \) in the Kinetic Theory lecture: at temperature \(T\), recall the average kinetic energy per molecule is \( \dfrac {3}{2} kT \), so one mole of gas—Avogadro's number of molecules—will have total kinetic energy, which we'll label internal energy, \[ E_{int} = \dfrac{3}{2} kT \cdot N_A = \dfrac {3}{2} RT. \] (In this simplest case, we are ignoring the possibility of the molecules having their own internal energy: they might be spinning or vibrating—we'll include that shortly.) That the internal energy is \( \dfrac{3}{2} RT \) per mole immediately gives us the specific heat of a mole of gas in a fixed volume, \[ C_v = \dfrac{3}{2} R, \] that being the heat which must be supplied to raise the temperature by one degree. However, if the gas, instead of being in a fixed box, is held in a cylinder at constant pressure, experiment confirms that more heat must be supplied to raise the gas temperature by one degree.
As Mayer realized, the total heat energy that must be supplied to raise the temperature of the gas one degree at constant pressure is \( \dfrac{3}{2} k \) per molecule plus the energy required to lift the weight. The work the gas must do to raise the weight is the force the gas exerts on the piston multiplied by the distance the piston moves. If the area of the piston is \(A\), then the gas at pressure \(P\) exerts force \(PA\). If on heating through one degree the piston rises a distance \(\Delta h\), the gas does work \[ PA \cdot \Delta h = P \Delta V. \] Now, for one mole of gas, \( PV = RT \), so at constant \(P\) \[ P \Delta V = R \Delta T. \] Therefore, the work done by the gas in raising the weight is just \( R \Delta T \), and the specific heat at constant pressure, the total heat energy needed to raise the temperature of one mole by one degree, is \[ C_p = C_v + R. \] In fact, this relationship is true whether or not the molecules have rotational or vibrational internal energy. (It's known as Mayer's relationship.) For example, the specific heat of oxygen at constant volume is \[ C_v (O_2) = \dfrac{5}{2} R, \] and this is understood as a contribution of \( \dfrac{3}{2} R \) from kinetic energy, and \(R\) from the two rotational modes of a dumbbell molecule (just why there is no contribution from rotation about the third axis can only be understood using quantum mechanics). The specific heat of oxygen at constant pressure is \[ C_p (O_2) = \dfrac{7}{2} R. \] It's worth having a standard symbol for the ratio of the specific heats: \[ \dfrac {C_p}{C_v} = \gamma. \]

Tracking a Gas in the \((P, V)\) Plane: Isotherms and Adiabats

An ideal gas in a box has three thermodynamic variables: \(P, V, T\). But if there is a fixed mass of gas, fixing two of these variables fixes the third from \(PV=nRT\) (for \(n\) moles). In a heat engine, heat can enter the gas, then leave at a different stage. The gas can expand doing work, or contract as work is done on it. To track what's going on as a gas engine transfers heat to work, say, we must follow the varying state of the gas. We do that by tracing a curve in the \((P, V)\) plane.

Supplying heat to a gas which consequently expands and does mechanical work is the key to the heat engine. But just knowing that a gas is expanding and doing work is not enough information to follow its path in the \((P, V)\) plane. The route it follows will depend on whether or not heat is being supplied (or taken away) at the same time. There are, however, two particular ways a gas can expand reversibly—meaning that a tiny change in the external conditions would be sufficient for the gas to retrace its path in the \((P, V)\) plane backwards. It's important to concentrate on reversible paths, because as Carnot proved and we shall discuss later, they correspond to the most efficient engines. The two sets of reversible paths are the isotherms and the adiabats.

Isothermal behavior: the gas is kept at constant temperature by allowing heat flow back and forth with a very large object (a "heat reservoir") at temperature \(T\). From \(PV=nRT\), it is evident that for a fixed mass of gas, held at constant \(T\) but subject to (slowly) varying pressure, the variables \(P, V\) will trace a hyperbolic path in the \((P, V)\) plane. This path, \(PV=nRT_1\) say, is called the isotherm at temperature \(T_1\). Here are two examples of isotherms:

Adiabatic behavior: "adiabatic" means "nothing gets through"; in this case no heat gets in or out of the gas through the walls. So all the work done in compressing the gas has to go into the internal energy \(E_{int}\).
As the gas is compressed, it follows a curve in the \((P, V)\) plane called an adiabat. To see how an adiabat differs from an isotherm, imagine beginning at some point on the blue 273 K isotherm on the above graph, and applying pressure so the gas moves to higher pressure and lower volume. Since the gas's internal energy is increasing, but the number of molecules is staying the same, its temperature is necessarily rising, so it will move towards the red curve, then above it. This means the adiabats are always steeper than the isotherms. In the diagram below, we've added a couple of adiabats to the isotherms:

Equation for an Adiabat

What equation for an adiabat corresponds to \(PV=nRT_1\) for an isotherm? On raising the gas temperature by \(\Delta T \), the change in the internal energy—the sum of molecular kinetic energy, rotational energy and vibrational energy (if any)—is \[ \Delta E_{int} = C_v \Delta T. \] This is always true: whether or not the gas is changing volume is irrelevant, since all that counts in \(E_{int} \) is the sum of the energies of the individual molecules (assuming, as we do here, that attractive or repulsive forces between molecules are negligible).

In adiabatic compression, all the work done by the external pressure goes into this internal energy, so \[ -P \Delta V = C_v \Delta T. \] (Compressing the gas of course gives negative \( \Delta V \), positive \( \Delta E_{int} \).) To find the equation of an adiabat, we take the infinitesimal limit \[ -P\,dV = C_v\,dT. \] Divide the left-hand side by \(PV\) and the right-hand side by \( nRT \) (since \(PV = nRT \), that's OK) to find \[ - \dfrac{nR}{C_v} \dfrac{dV}{V} = \dfrac {dT}{T}. \] Recall now that \( C_p = C_v + nR \) and \( \dfrac{C_p}{C_v} = \gamma. \) It follows that \[ \dfrac{nR}{C_v} = \dfrac{C_p - C_v}{C_v} = \gamma - 1. \] Hence \[ - (\gamma - 1) \int \dfrac {dV}{V} = \int \dfrac{dT}{T} \] and integrating, \[ \ln T + (\gamma - 1)\ln V = const., \] from which the equation of an adiabat is \[ TV^{\gamma-1} = const. \] From \( PV = nRT \), the \(P, V\) equation for an adiabat can be found by multiplying the left-hand side of this equation by the constant \(PV/T\), giving \[ PV^{\gamma} = const. \] for an adiabat, where \( \gamma = 5/3 \) for a monatomic gas and \( \gamma = 7/5 \) for a diatomic gas.
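As a worked example of these relations, the short script below follows one mole of a monatomic ideal gas (so \( \gamma = 5/3 \)) through an adiabatic compression to half its volume and checks that the final state still satisfies \(PV = nRT\). The initial state is an arbitrary assumption chosen only for illustration.

```python
# Adiabatic compression of one mole of a monatomic ideal gas to half its volume.
R = 8.314            # J / (mol K)
gamma = 5.0 / 3.0    # monatomic gas
n = 1.0              # moles

T1, V1 = 300.0, 0.025            # assumed initial state (K, m^3)
P1 = n * R * T1 / V1             # initial pressure from PV = nRT

V2 = V1 / 2.0
T2 = T1 * (V1 / V2) ** (gamma - 1.0)   # from T V^(gamma-1) = const
P2 = P1 * (V1 / V2) ** gamma           # from P V^gamma = const

print(f"initial: P = {P1 / 1e3:.1f} kPa, T = {T1:.1f} K")
print(f"after halving V adiabatically: P = {P2 / 1e3:.1f} kPa, T = {T2:.1f} K")
print("check PV = nRT afterwards:", abs(P2 * V2 - n * R * T2) < 1e-6)
```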