(int. Algebra) Inequality result question. Thread: (int. Algebra) Inequality result question. Just when I thought I completely understood union and intersection, I ran into something that I just can't figure out. $\displaystyle [5,\infty)$. Is this correct? If not, why? Re: (int. Algebra) Inequality result question. Yes, that is correct. In the first case, you are told that $\displaystyle x\ge -2$ or $\displaystyle x\ge 5$. That is, x could satisfy either one of those inequalities. For example, x = 1 would satisfy that: even though it does not satisfy $\displaystyle x\ge 5$, it does satisfy $\displaystyle x\ge -2$. Of course, any number greater than or equal to 5 is greater than or equal to -2, so it is enough to say $\displaystyle x\ge -2$. On the other hand, if we are told that $\displaystyle x\ge -2$ and $\displaystyle x\ge 5$, x must satisfy both inequalities. x = 1 would not satisfy this, since it does not satisfy $\displaystyle x\ge 5$. Again, any number greater than or equal to 5 is greater than or equal to -2, so it is enough to say $\displaystyle x \ge 5$.
CommonCrawl
The calendars are printed 2-up to fit CD jewel cases -- the output PDF contains a photo of the actual physical output (the bigger one). It can now be compiled with xelatex (updated Dec 18, 2018). Colours, illustrations, fonts, etc. are customisable. The calendars can be marked with events with date ranges, with different markers and styles (updated Dec 18, 2016). Use the sundayweek document class option to make weeks start on Sundays (updated Aug 3, 2015). Localisation is possible with languages supported by babel/translator/datetime2. Tested with british, spanish, french, ngerman, italian, portuges, polish, croatian, greek. Use the nobabel option and make your own customisations for languages not supported by babel and/or translator (Dec 18, 2016). Here are examples for Chinese and Japanese. Note: If you get an error when you change the language, click on the "compile from scratch" option in the error message window. The corresponding calendar to fit into a 3.5" floppy disk jewel case can be found here, while a full-page "giant" version can be found here. Or fork it on Github! Here are the actual printed calendars. The smaller calendar (9\,cm $\times$ 9.5\,cm) fits floppy disk jewel cases, while the bigger one (11.7\,cm $\times$ 13.65\,cm) fits CD jewel cases.
CommonCrawl
What is the common notion of equilibrium in economics? The concept of equilibrium referred to in General Equilibrium Theory is taken from Physics. It coincides with mechanical equilibrium: just as a balance of forces determines a rest position in mechanics, the balance of supply and demand determines equilibrium quantities and prices. In both cases, the mathematical tool is optimization with constraints using the method of Lagrange multipliers. Walras and Pareto explicitly drew inspiration for their pioneering work on General Equilibrium Theory from Physics and mechanical equilibrium. This was made clear by Ingrao and Israel (1990). A different notion of equilibrium, statistical equilibrium, comes instead from statistical physics (see, e.g., the treatment of probability in statistics and theoretical physics by R. von Mises before WWII). Consider a Markov chain with transition probability $P(x,y)$ of moving from state $x$ to state $y$. If a probability distribution $p(x)$ satisfies $\sum_x p(x) P(x,y) = p(y)$ for every state $y$, then $p(x)$ is called a stationary distribution or invariant measure. If the chain starts with states distributed according to $p(x)$, this distribution does not change as time goes by. Note that the states are jumping from one to another one, but the probability of finding the system in a specific state does not change. This is exactly the idea of statistical equilibrium put forward by Ludwig Boltzmann. However, more can and should be said. First of all, the stationary distribution may not exist; secondly, the chain usually starts from a specific state, so that the initial distribution is a vector full of 0's and with a single 1 in the initial state. The latter state of affairs can be represented by a Kronecker delta $\pi(x) = \delta(x,x_0)$, where $x_0$ is the specific initial state. Usually, this is not a stationary distribution and the convergence of the chain to the stationary distribution is not granted at all. For a finite chain that is irreducible and aperiodic, however, the chain always converges to the stationary distribution irrespective of its initial distribution. Here, a state $x$ leads to a state $y$ if $y$ can be reached from $x$ in a finite number of steps with positive probability; $x$ and $y$ communicate if $x$ leads to $y$ and one can prove that also $y$ leads to $x$, and the set of communicating states have a common period. In an irreducible chain all the states communicate and they have a common period $d$. The chain is aperiodic if $d=1$. In that case $\lim_{n\to\infty} P^n(x,y) = p(y)$ irrespective of the initial state $x$. This means that, after a transient period, the distribution of chain states reaches a stationary distribution, which can then be interpreted as an equilibrium distribution in the statistical sense. Why and where may statistical equilibrium be useful in economics? It can be used: to discuss some toy models for the distribution of wealth (not of income!) as in Scalas et al. (2006) and in Garibaldi et al. (2007); to study a simple agent-based model of a financial market with heterogeneous knowledge as in Toth et al. (2007); and to generalize a sectoral productivity model originally due to Aoki and Yoshikawa, in Scalas and Garibaldi (2009). In Scalas et al. (2006), Garibaldi et al. (2007), and Scalas and Garibaldi (2009), we promote the use of a finitary approach to combinatorial stochastic processes which will be the main topic of my presentation in Reykjavik. O. Penrose (1970), Foundations of Statistical Mechanics: A Deductive Treatment, Dover, NY. R. von Mises (1945), Wahrscheinlichkeitsrechnung und ihre Anwendung in der Statistik und theoretischen Physik, Rosenberg, NY. E. Scalas, U. Garibaldi and S. Donadio (2006), Statistical equilibrium in simple exchange games I - Methods of solution and application to the Bennati-Dragulescu-Yakovenko (BDY) game, European Physical Journal B, 53(2), 267-272. U. Garibaldi, E. Scalas and P. Viarengo (2007), Statistical equilibrium in simple exchange games II. The redistribution game, European Physical Journal B, 60(2), 241-246. B. Toth, E. Scalas, J. Huber and M. Kirchler (2007), The value of information in a multi-agent market model - The luck of the uninformed, European Physical Journal B, 55(1), 115-120. E. Scalas and U. Garibaldi (2009), A Dynamic Probabilistic Version of the Aoki–Yoshikawa Sectoral Productivity Model, Economics, The Open-Access, Open-Assessment E-Journal, 3, 2009-15.
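As a concrete illustration of the convergence just described, here is a minimal Python sketch (an illustrative example of my own, not code from the cited papers): it starts a small irreducible, aperiodic chain from a Kronecker-delta initial distribution and shows the state distribution approaching the stationary one.

```python
import numpy as np

# Transition matrix of a small irreducible, aperiodic Markov chain:
# P[x, y] is the probability of moving from state x to state y.
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Stationary distribution: left eigenvector of P with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
p_stat = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
p_stat /= p_stat.sum()

# Start from a Kronecker delta concentrated on state 0.
pi = np.array([1.0, 0.0, 0.0])
for n in range(1, 21):
    pi = pi @ P                      # distribution after n steps
    if n % 5 == 0:
        print(n, np.round(pi, 4), "distance to p:", round(np.abs(pi - p_stat).sum(), 6))

print("stationary distribution:", np.round(p_stat, 4))
```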
CommonCrawl
Abstract: We use the most recent Type-Ia Supernova data in order to study the dark energy - dark matter unification approach in the context of the Generalized Chaplygin Gas (GCG) model. Rather surprisingly, we find that the data allow models with $\alpha > 1$. We have studied how the GCG fits flat and non-flat models, and our results show that the GCG is consistent with the flat case at the 68% confidence level. Actually this holds even if one relaxes the flat prior assumption. We have also analysed what one should expect from a future experiment such as SNAP. We find that there is a degeneracy between the GCG model and an XCDM model with a phantom-like dark energy component.
CommonCrawl
A loosely defined concept, a Complex System presents a behavior nontrivially determined by the interactions between its parts. Complex systems often exhibit emergence phenomena, such as swarming and pattern formation. In such systems, nonlinear interactions can lead to memory and feedback mechanisms, self-organized criticality, and chaotic behavior. Network theory, systems biology, and adaptive/evolutionary systems also fall under this umbrella concept. Are quantum mechanical orbits specified uniquely by a Hamiltonian and initial state? Level sets of the Hamiltonian are the orbits? Poincaré recurrence theorem with irrational frequencies? Good books on Quantum Complexity and its application to Theoretical Physics? Is *every* planar/2D system integrable? Is it possible to say anything about the behaviour of the Kuramoto model for large but finite $N$ based on an analysis of the model obtained in the limit $N\rightarrow\infty$? Can phase transitions occur in open systems? Is it possible for phase transitions to occur in open systems if the required conditions (i.e., temperature and pressure) are met? Are there examples? Why is dynamics first order in phase space? Is a multi-stability potential suitable for interdisciplinary branches of science?
CommonCrawl
In this question, I could not understand why in oscillation both torques would be equal, as given in the solution. I could not understand why they will oscillate and in what direction. Perhaps they do not oscillate. Perhaps this is a static problem, then it is obvious that the torques must be equal and opposite. But in that case the string cannot be parallel to the electric field, as the question states. Another poorly worded question, and confusing solution. If the pendulum does oscillate, it does not have to stop when the string is parallel to the electric and gravitational fields. This seems to be a condition in the question, or an assumption in the answer. It is not clear if the question is asking about the static equilibrium of the pendulum or about its oscillations. In either case we can find the equilibrium angle $\alpha$ of the string to the vertical from a vector diagram. The accelerations due to gravity (g) and due to the constant electric field (qE/m) add as vectors to give a new effective acceleration due to gravity (g'). The pendulum will hang along the direction of g', or oscillate about the line of g', at equal angles to it left and right. g' is only the bisector of g and qE/m if g=qE/m. Then $\alpha=\frac12 \theta$. So the extremes of the swing will only coincide with the directions of g and qE/m if those directions are at equal angles from g'. This requires g=qE/m. The torque calculation is valid, but it assumes that the extremes of oscillation lie along g and E. The question seems to say that the pendulum string rises up to the direction of E. That is impossible statically; it can only happen in the limit $qE/mg \to \infty$. Another interpretation is that the string reaches the direction of E dynamically, at the end of its swing. However, the question says nothing about the other extreme of the swing. In my diagram it will swing past the direction of g, making the same angles left and right of g'. The solution assumes that the 2nd extreme of swing is along g, although there is nothing in the question to justify this assumption. Torque is equal (and opposite) at both ends of the swing for any pendulum. So the calculation in the given solution is valid. The torque calculation could be used for the static case also, i.e. the equilibrium position when the oscillations die out. Then the string lies along g' (left diagram). Torque due to g is $mgl \sin\alpha$ clockwise, torque due to E is $qEl \sin(\theta-\alpha)$ anticlockwise. These must be equal for any static equilibrium situation, not only the special situation in this question, in which the value of E allows the pendulum to swing between the directions of g and E.
CommonCrawl
[SOLVED] How do we know a quantum state isn't just an unknown classical state? [SOLVED] When light reflects off a mirror, does the wave function collapse? [SOLVED] Why does observation collapse the wave function? [SOLVED] Is there a difference between observing a particle and hitting it with another particle? [SOLVED] How does a Wavefunction collapse? [SOLVED] How is it possible that quantum phenomenons (e.g. superposition) are possible when all quantum particles are being constantly observed? [SOLVED] Is there an objective asymmetry between a collapsed and un-collapsed wave function? [SOLVED] In the double-slit experiment of electrons (observed by photons), is it correct to say the collapse is caused by the momentum of the photons? [SOLVED] Does $\sigma_x\sigma_p = 0 \cdot \infty$ after a measurement of particle position? [SOLVED] Is a photon always in a state of superposition while traveling through space? [SOLVED] Is Schrödinger's cat misleading? And what would happen if Planck constant is bigger? [SOLVED] Would every particle in the universe not have some form of measurement occurring at any given time? [SOLVED] What are the strongest objections to be made against decoherence as an explanation of "collapse?"
CommonCrawl
Abstract: In this paper, we investigate the existence of Ulrich bundles on a smooth complete intersection of two $4$-dimensional quadrics in $\mathbb P^5$ by two completely different methods. First, we find good ACM curves and use Serre correspondence in order to construct Ulrich bundles, which is analogous to the construction on a cubic threefold by Casanellas-Hartshorne-Geiss-Schreyer. Next, we use Bondal-Orlov's semiorthogonal decomposition of the derived category of coherent sheaves to analyze Ulrich bundles. Using these methods, we prove that any smooth intersection of two 4-dimensional quadrics in $\mathbb P^5$ carries an Ulrich bundle of rank $r$ for every $r \ge 2$. Moreover, we provide a description of the moduli space of stable Ulrich bundles.
CommonCrawl
The weak tensor product was introduced by Snevily as a way to construct new graphs that admit $\alpha$-labelings from a pair of known $\alpha$-graphs. In this article, we show that this product and the application to $\alpha$-labelings can be generalized by considering, as a second factor of the product, a family $\mathcal{G}$ of bipartite $(p, q)$-graphs, with $p$ and $q$ fixed. The only additional restriction that we should consider is that for every $F \in \mathcal{G}$, there exists an $\alpha$-labeling $f_F$ with $f_F(V(F)) = L \cup H$, where $L, H \subseteq [0, q]$ are the stable sets induced by the characteristic of $f_F$ and they do not depend on $F$. We also obtain analogous applications to near $\alpha$-labelings and bigraceful labelings. Citation: López, S.C.; Muntaner-Batle, F.A. A new application of the $\otimes_h$-product to $\alpha$-labelings. "Discrete Mathematics", 2015, vol. 338, núm. 6, p. 839-843.
CommonCrawl
A simple proof of the generalized strong recurrence for any non-zero parameter (Jun 16 2010, last revised Sep 22 2010). The strong recurrence is equivalent to the Riemann hypothesis. In the present paper, we give a simple proof of the generalized strong recurrence for all non-zero parameters.
On semi-continuity problems for minimal log discrepancies (May 07 2013, last revised Jul 19 2014). We show the semi-continuity property of minimal log discrepancies for varieties which have a crepant resolution in the category of Deligne-Mumford stacks. Using this property, we also prove the ideal-adic semi-continuity problem for toric pairs.
Long-range Scattering Matrix for Schrödinger-type Operators (Apr 16 2018, last revised Nov 18 2018). We show that the scattering matrix for a class of Schr\"odinger-type operators with long-range perturbations is a Fourier integral operator whose phase function is the generating function of the modified classical scattering map.
Smoothability of $\mathbb{Z}\times\mathbb{Z}$-actions on 4-manifolds (Feb 18 2009, last revised Nov 24 2009). We construct a nonsmoothable $\mathbb{Z}\times\mathbb{Z}$-action on the connected sum of an Enriques surface and $S^2\times S^2$, such that each of the generators is smoothable. We also construct a nonsmoothable self-homeomorphism on an Enriques surface.
Simple proof of the functional relation for the Lerch type Tornheim double zeta function (Dec 06 2010, last revised Dec 07 2010). In this paper, we give a simple proof of the functional relation for the Lerch type Tornheim double zeta function. By using it, we obtain simple proofs of some explicit evaluation formulas for double $L$-values.
CommonCrawl
Search Results: 1 - 10 of 489917 matches for " F. M. M.;Pimentel " Abstract: supracrustal rocks of the araí group, together with coeval a-type granites represent a ca. 1.77-1.58 ga old continental rift in brazil. two granite families are identified: the older (1.77 ga) group forms small undeformed plutons, and the younger granites (ca. 1.58 ga) constitute larger, deformed plutons. sr-nd isotopic data for these rocks indicate that the magmatism is mostly product of re-melting of paleoproterozoic sialic crust. initial sr ratios for both granite families are ca 0.726 and 0.720. most tdm model ages are between 2.58 and 1.80 ga. end(t) values are between +3.6 and -11.9. araí volcanics are bimodal, with basalts and dacites/rhyolites interlayered with continental sediments. the felsic volcanics show nd isotopic characteristics which are very similar to the granites, and are also interpreted as reworking of paleoproterozoic crust. detrital sediments of the araí group revealed tdm model ages between 2.4 and 2.16 ga, indicating that they are the product of erosion of paleoproterozoic crust. the data indicate that the araí rift system was established on crust that had just become stable after the paleoproterozoic orogeny. PIMENTEL MáRCIO M.,BOTELHO NILSON F. Abstract: Supracrustal rocks of the Araí Group, together with coeval A-type granites represent a ca. 1.77-1.58 Ga old continental rift in Brazil. Two granite families are identified: the older (1.77 Ga) group forms small undeformed plutons, and the younger granites (ca. 1.58 Ga) constitute larger, deformed plutons. Sr-Nd isotopic data for these rocks indicate that the magmatism is mostly product of re-melting of Paleoproterozoic sialic crust. Initial Sr ratios for both granite families are ca 0.726 and 0.720. Most TDM model ages are between 2.58 and 1.80 Ga. epsilonND(T) values are between +3.6 and -11.9. Araí volcanics are bimodal, with basalts and dacites/rhyolites interlayered with continental sediments. The felsic volcanics show Nd isotopic characteristics which are very similar to the granites, and are also interpreted as reworking of Paleoproterozoic crust. Detrital sediments of the Araí Group revealed T DM model ages between 2.4 and 2.16 Ga, indicating that they are the product of erosion of Paleoproterozoic crust. The data indicate that the Araí rift system was established on crust that had just become stable after the Paleoproterozoic orogeny. Abstract: The increase risk of cancer development in patients with inflammatory intestinal disease (IBD) has already studied for decades. The anti-TNF therapy has changed the treatment strategy of IBD. By using on a larger scale and for a longer time, the anti-TNF raised concern over its potential adverse events. A male Crohn's disease (CD) patient, 55 years old, diagnosed for nine years, treated with infliximab for 6 years. In 2011, he underwent a nupper endoscopy (UE) which showed flat erosive gastritis with moderate intensity in antrum, gastric polyps and gastric erosion. Pathological examination revealed a chronic gastritis in erosive activity and search for Helicobacter pylori resulted positive. In May 2014, the patient was asymptomatic, when it held UE, which showed suggestive lesion of early gastric cancer, measuring 1.5 cm and search for Helicobacter pylori negative. Histopathological exams confirmed the adenocarcinoma. The patient underwent to a laparoscopic surgery (total gastrectomy with lymphadenectomy and reconstruction Roux-en-Y). 
Risk factors for the development of gastric cancer in general population are already well defined. However studying a possible association among CD and the different therapeutic modalities used in the treatment of this disease with gastric cancer appearance is important to set specific assessment strategies, prevention and follow-up. While there is no consensus on a proper monitoring for gastric cancer prevention in these patients, individualized conduct, taking into account individual characteristics, family record and other risk factors, should be adopted to avoid unfavorable outcomes in CD patients. Abstract: We study the persistent current and the Drude weight of a system of spinless fermions, with repulsive interactions and a hopping impurity, on a mesoscopic ring pierced by a magnetic flux, using a Density Matrix Renormalization Group algorithm for complex fields. Both the Luttinger Liquid (LL) and the Charge Density Wave (CDW) phases of the system are considered. Under a Jordan-Wigner transformation, the system is equivalent to a spin-1/2 XXZ chain with a weakened exchange coupling. We find that the persistent current changes from an algebraic to an exponential decay with the system size, as the system crosses from the LL to the CDW phase with increasing interaction $U$. We also find that in the interacting system the persistent current is invariant under the impurity transformation $\rho\to 1/\rho $, for large system sizes, where $\rho $ is the defect strength. The persistent current exhibits a decay that is in agreement with the behavior obtained for the Drude weight. We find that in the LL phase the Drude weight decreases algebraically with the number of lattice sites $N$, due to the interplay of the electron interaction with the impurity, while in the CDW phase it decreases exponentially, defining a localization length which decreases with increasing interaction and impurity strength. Our results show that the impurity and the interactions always decrease the persistent current, and imply that the Drude weight vanishes in the limit $N\to \infty $, in both phases. Abstract: aim: to correlate the sagittal abdominal diameter (sad) and waist circumference (wc) with metabolic syndrome-associated abnormalities in adults. methods: this cross-sectional study included onehundred twelve adults (m = 27, f = 85) aging 54.0 ± 11.2 yrs and average body mass index (bmi) of 30.5 ± 9.0 kg/m2. the assessment included blood pressure, plasma and anthropometric measurements. results: in both men and female, sad and wc were associated positively with body fat% (r = 0.53 vs r = 0.55), uric acid (r = 0.45 vs r = 0.45), us-pcr (r = 0.50 vs r = 0.44), insulin (r = 0.89 vs r = 0.75), insulin resistance homa-ir (r = 0.86 vs r = 0.65), ldl-ox (r = 0.51 vs r = 0.28), ggt (r = 0.70 vs r = 0.61), and diastolic blood pressure (r = 0.35 vs r = 0.33), and negatively with insulin sensibility quicki (r = -0.89 vs r = -0.82) and total cholesterol/tg ratio (r = -0.40 vs r = -0.22). glycemia, tg, and hdl-c were associated significantly only with sad (r = 0.31; r = 39, r = -0.43, respectively). conclusion: though the sad and wc were associated with numerous metabolic abnormalities, only sad correlated with dyslipidemia (tg and hdl-c) and hyperglycemia (glycemia). Abstract: Aim: To correlate the sagittal abdominal diameter (SAD) and waist circumference (WC) with metabolic syndrome-associated abnormalities in adults. 
Methods: This cross-sectional study included onehundred twelve adults (M = 27, F = 85) aging 54.0 ± 11.2 yrs and average body mass index (BMI) of 30.5 ± 9.0 kg/m2. The assessment included blood pressure, plasma and anthropometric measurements. Results: In both men and female, SAD and WC were associated positively with body fat% (r = 0.53 vs r = 0.55), uric acid (r = 0.45 vs r = 0.45), us-PCR (r = 0.50 vs r = 0.44), insulin (r = 0.89 vs r = 0.75), insulin resistance HOMA-IR (r = 0.86 vs r = 0.65), LDL-ox (r = 0.51 vs r = 0.28), GGT (r = 0.70 vs r = 0.61), and diastolic blood pressure (r = 0.35 vs r = 0.33), and negatively with insulin sensibility QUICKI (r = -0.89 vs r = -0.82) and total cholesterol/TG ratio (r = -0.40 vs r = -0.22). Glycemia, TG, and HDL-c were associated significantly only with SAD (r = 0.31; r = 39, r = -0.43, respectively). Conclusion: Though the SAD and WC were associated with numerous metabolic abnormalities, only SAD correlated with dyslipidemia (TG and HDL-c) and hyperglycemia (glycemia). Objetivo: Correlacionar el diámetro abdominal sagital (DAS) y la circunferencia de la cintura (CC) con las anomalías asociadas al síndrome metabólico en adultos. Métodos: Este estudio transversal incluyó a 112 adultos (H = 27, M = 85) con edad de 54,0 ± 11,2 a os y un promedio de índice de masa corporal (IMC) de 30,5 ± 9,0 kg/m2. La evaluación incluía la presión sanguínea y medidas plasmáticas y antropométricas. Resultados: Tanto en hombres como mujeres, DAS y CC se asociaban positivamente con el % grasa corporal (r = 0,53 vs r = 0,55), el ácido úrico (r = 0,45 vs r = 0,45), la us-PCR (r = 0,50 vs r = 0,44), la insulina (r = 0,89 vs r = 0,75), la resistencia a la insulina HOMA-IR (r = 0,86 vs r = 0,65), la LDL-ox (r = 0,51 vs r = 0,28), GGT (r = 0,70 vs r = 0,61), y la presión sanguínea diastólica (r = 0,35 vs r = 0,33), y negativamente con la sensibilidad a la insulina QUICKI (r = -0,89 vs r = -0,82) y el cociente colesterol total/TG (r = -0,40 vs r = -0,22). La glucemia, los TG, y la HDL-c se asociaban significativamente sólo con DAS (r = 0,31; r = 0,39, r = -0,43, respectivamente). Conclusión: Aunque DAS y CC se asociaban con numerosas anomalías metabólicas, sólo DAS se correlacionaba con la dislipemia (TG y HDL-c) y la hiperglucemia (glucemia). Abstract: as shown by pimentel gomes (1965), the theory proves that the use of the arithmetic mean of diameters to estimate basal areas in forestry leads to a bias. this paper evaluates this bias in the computation of cut out basal area in forestry thinnings, by means of theoretical study, samples generated in a computer, and also through the study of actual populations of trees in groves of araucaria angustifolia (bert.) o. ktze, pinus elliottii eng., p. taeda l. and p. caribaea var. hondurensis mor. the study thus carried out showed that the bias indicated can be rather serious.
CommonCrawl
I would like to place the $+$ and $\equiv$ so that they are nearly touching, almost as if they were one symbol. Is it possible to do this? What I would like is for these two symbols to be touching. The gap between them should not be present. Is there a way to do this? Bracing the symbols prevents them from getting their standard meaning of operation or relation; the surrounding \mathrel gives the combination the status of a relation. Is it one of these you want? How/Where to find such special symbol/character? How to write an overarrow between two symbols in formula?
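Following the answer's suggestion that the combined glyph should act as a relation, one way to realize this is sketched below. The macro name \plusequiv and the amount of negative kerning (-6mu) are illustrative choices to be tuned by eye, not a standard command.

```latex
\documentclass{article}
\begin{document}
% Combine + and \equiv into one relation-like symbol.
% The braces make each symbol an ordinary atom, \mathrel makes the whole
% thing a relation, and the negative \mkern pulls the glyphs together.
\newcommand{\plusequiv}{\mathrel{{+}\mkern-6mu{\equiv}}}
\[ a \plusequiv b \]
\end{document}
```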
CommonCrawl
Abstract: The problem of gravitational fluctuations confined inside a finite cutoff at radius $r=r_c$ outside the horizon in a general class of black hole geometries is considered. Consistent boundary conditions at both the cutoff surface and the horizon are found and the resulting modes analyzed. For general cutoff $r_c$ the dispersion relation is shown at long wavelengths to be that of a linearized Navier-Stokes fluid living on the cutoff surface. A cutoff-dependent line-integral formula for the diffusion constant $D(r_c)$ is derived. The dependence on $r_c$ is interpreted as renormalization group (RG) flow in the fluid. Taking the cutoff to infinity in an asymptotically AdS context, the formula for $D(\infty)$ reproduces as a special case well-known results derived using AdS/CFT. Taking the cutoff to the horizon, the effective speed of sound goes to infinity, the fluid becomes incompressible and the Navier-Stokes dispersion relation becomes exact. The resulting universal formula for the diffusion constant $D(horizon)$ reproduces old results from the membrane paradigm. Hence the old membrane paradigm results and new AdS/CFT results are related by RG flow. RG flow-invariance of the viscosity to entropy ratio $\eta /s$ is shown to follow from the first law of thermodynamics together with isentropy of radial evolution in classical gravity. The ratio is expected to run when quantum gravitational corrections are included.
CommonCrawl
There is a long history of studying the logic obtained by assigning probabilities, instead of truth values, to first-order formulas. In a 1964 paper, Gaifman studied probability distributions on countable structures that are invariant under renaming of the underlying set – which he called "symmetric measure-models", and which are essentially equivalent to what today are known as $S_\infty $-invariant measures. In this paper, he asked the question of which first-order theories admit invariant measures concentrated on the models of the theory. We answer this question of Gaifman, a key first step towards understanding the model theory of these measures, which can be thought of as "probabilistic structures". In this talk, we will also discuss related questions, such as how many probabilistic structures are models of a given theory, and when probabilistic structures are almost surely isomorphic to a single classical model. Joint work with Nathanael Ackerman and Rehana Patel.
CommonCrawl
Why doesn't car fuel/energy consumption scale like the cube of the velocity? The power required to overcome drag is proportional to speed cubed. When I'm driving at $100$ km/h my car consumes about $10$ litres/$100$ km. At $200$ km/h the consumption should then be $2^3 \times 10$ litres/$100$ km $= 80$ litres/$100$ km. Obviously, it's a lot less, maybe only double. What is it I don't understand here? There are two extra things to consider here. First, in even the absolute simplest case, your car is not just fighting wind resistance (which indeed follows an $F \propto v^2$ law at these velocities) but also various static friction forces, usually following an $F \propto v^0$ law. And as you might imagine some of these forces drop based on what gear you're in, as some of the static friction is internal to the engine block. You can also read "constant force" as meaning "constant energy expenditure per unit of distance," which clarifies that something like the pistons compressing air, with that now-hot air then being vented out (as it will be), turns out to be a constant force on average. So in summary, one of these factors of 2 is flat-out wrong for calculating fuel efficiency: the power may go as speed cubed, but the energy per unit distance only goes as speed squared in the limit $F_0 = 0,~u = 0$. The other missing factor of 2 probably comes from the fact that in the lower gear the drag forces $F_0$ and $k v^2$ are approximately comparable, whereas in the higher gear you've reduced $F_0$ considerably by upshifting -- but some components of it probably also come from a slight cross-wind that both acts as a linear drag force and redirects the airflow over a less-aerodynamic profile over the car. It does obey the laws of physics. It is a complicated system, and neither the drag force nor the fuel consumption is always proportional to $v^3$ for all values of $v$. The drag force depends on the constants $C_1, C_2, \ldots$ and $v$. For very small velocities, the lower order terms are more significant compared to the higher order terms. For large velocities, the higher powers become more significant. The amount of fuel you need is a function not just of the drag force, but also of other external forces such as friction. The amount of fuel consumed has no simple relationship with the drag force alone. It depends on the time you travelled, how fast you accelerated, etc. Each of these terms varies differently with velocity. Moreover, the efficiency of the engine changes with the speed of rotation and the gear you drove in. There simply isn't such a cute relation between distance travelled and fuel consumed. Could one fire a bullet with sufficient speed to leave the Earth? Why is velocity squared in kinetic energy? What's wrong with this argument that kinetic energy goes as $v$ rather than $v^2$? Why is it so much harder to ride a bike up a hill than to push it? Is the mechanical power output of a car constant?
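To make the first answer's point concrete, here is a small numerical sketch (with made-up, illustrative constants, not measurements of any real car): energy per unit distance is modelled as $F_0 + k v^2$, so doubling the speed multiplies consumption by much less than $2^3$, and by less than $2^2$ whenever the constant term $F_0$ is non-negligible.

```python
# Energy per unit distance modelled as F0 + k*v^2 (illustrative constants only).
F0 = 300.0        # N, speed-independent rolling/internal friction (assumed value)
k = 0.35          # N/(m/s)^2, aerodynamic term (assumed value)

def energy_per_metre(v_kmh: float) -> float:
    v = v_kmh / 3.6            # convert km/h to m/s
    return F0 + k * v**2       # newtons = joules per metre

e100 = energy_per_metre(100)
e200 = energy_per_metre(200)
print(f"energy/m at 100 km/h: {e100:.0f} J")
print(f"energy/m at 200 km/h: {e200:.0f} J")
print(f"ratio: {e200 / e100:.2f}  (naive v^3 reasoning predicts 8, pure v^2 predicts 4)")
```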
CommonCrawl
Let $F$ be a field of characteristic different from 2, $\psi$ a quadratic $F$-form of dimension $\geq 5$, and $D$ a central simple $F$-algebra of exponent 2. We denote by $F(\psi,D)$ the function field of the product $X_\psi\times X_D$, where $X_\psi$ is the projective quadric determined by $\psi$ and $X_D$ is the Severi-Brauer variety determined by $D$. We compute the relative Galois cohomology group $H^3(F(\psi,D)/F,\mathbb{Z}/2\mathbb{Z})$ under the assumption that the index of $D$ goes down when extending the scalars to $F(\psi)$. Using this, we give a new, shorter proof of the theorem [23, Th. 1] originally proved by A. Laghribi, and a new, shorter, and more elementary proof of the assertion [2, Cor. 9.2] originally proved by H. Esnault, B. Kahn, M. Levine, and E. Viehweg. 1991 Mathematics Subject Classification: 19E15, 12G05, 11E81.
CommonCrawl
Probabilist by training. Currently working as a data scientist. 14 Usefulness of Frechet versus Gateaux differentiability or something in between. 12 Why do we care about $L^p$ spaces besides $p = 1$, $p = 2$, and $p = \infty$? 11 Is there an extension of the Arzela-Ascoli theorem to spaces of discontinuous functions?
CommonCrawl
If $x_1$ and $x_2$ are the roots of $$ax^2+bx+c=0$$ then $x_1^3$ and $x_2^3$ are the roots of which equation? I realized it's probably pointless to expand things directly since I wouldn't be able to use it, and I'm out of ideas. Let $B=b/a$ and $C=c/a$. Then $x_1$ and $x_2$ are the roots of $x^2+Bx+C$. Moreover, $x_1+x_2=-B$ and $x_1x_2=C$. Hence $x_1^3+x_2^3=(x_1+x_2)^3-3x_1x_2(x_1+x_2)=-B^3+3BC$ and $x_1^3x_2^3=(x_1x_2)^3=C^3$, so $x_1^3$ and $x_2^3$ are the roots of $x^2+(B^3-3BC)x+C^3=0$. For what interval of $k$ does the equation have one positive and one negative root?
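As a quick sanity check of the two identities used above (a small illustration of my own, assuming SymPy is available; it is not part of the original answer):

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x', nonzero=True)
x1, x2 = sp.solve(sp.Eq(a*x**2 + b*x + c, 0), x)   # quadratic-formula roots
B, C = b/a, c/a

# Sum and product of the cubed roots, compared with the claimed expressions.
print(sp.simplify(x1**3 + x2**3 - (3*B*C - B**3)))  # expected: 0
print(sp.simplify(x1**3 * x2**3 - C**3))            # expected: 0
```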
CommonCrawl
I'd like to explain to you how to draw chessboards by hand in perfect perspective, using only a straightedge. In this post, I'll explain how to construct chessboards of any size, starting with the size of the basic unit square. This post follows up on the post I made yesterday about how to draw a chessboard in perspective view, using only a straightedge. That method was a subdivision method, where one starts with the boundary of the desired board, and then subdivides to make a chessboard. Now, we start with the basic square and build up. This method is actually quite efficient for quickly making very large boards in perspective view. I want to emphasize that this is something that you can actually do, right now. It's fun! All you need is a piece of paper, a pencil and a straightedge. I'll wait right here while you gather your materials. Use a ruler or a chop stick (as I did) or the edge of a notebook or the lid of a box. Sit at your table and draw a huge chessboard in perspective. You can totally do this. Start with a horizon, having two points at infinity (orange), at left and right, and a third point midway between them (brown), which we will call the diagonal infinity. Also, mark the front corner of your chess board (blue). Extend the front corner to the points at infinity. And then mark off (red) a point that will be a measure of the grid spacing in the chessboard. This will the be size of the front square. You can extend that point to infinity at the right. This delimits the first rank of the chessboard. Next, extend the front corner of the board to the diagonal infinity. The intersection of that diagonal with the previous line determines a point, which when extended to infinity at the left, produces the first square of the chessboard. And that line determines a new point on the leading rank edge. Extend that point up to the diagonal infinity, which determines another point on the second rank line. Extend that line to infinity at the left, which determines another point on the leading rank edge. Continuing in this way, one can produce as many first rank squares as desired. Go ahead and do that. At each step, you extend up to the diagonal infinity, which determines a new point, which when extended to infinity at the left determines another point, and so on. If you should now reflect on the current diagram, you may notice that we have actually determined many further points in the grid than we have mentioned — and thanks to my daughter Hypatia for noticing this simplification — for there is a whole triangle of further intersection points between the files and the diagonals. One can construct a perspective chessboard of any size this way, and one can simply continue with the construction and make it larger, if desired. It will look a little better if you add a point at infinity down below (and do so directly below the diagonal point at infinity, but a good distance down below the board), and extend the board downward one level. The corresponding diagram on yesterday's post might be helpful. You can now color the tile pattern, and you'll have a chessboard in perfect perspective view. If you keep going, you can make extremely large chessboards. In time, I hope that you will come to learn how to complete an infinite chess board in finite time. This entry was posted in Exposition, Math for Kids and tagged chess, geometry, kids, three-point perspective, two-point perspective by Joel David Hamkins. Bookmark the permalink. 
I have updated the post to include an efficiency noticed by my daughter (age 12). Previously, I had used the diagonal lines on the left side of the board also, but she noticed that one needn't do that, because of the points already determined on the right (as now described above in the current post). It seems to me that in order to make an $n\times n$ chessboard (with $n+1$ lines in each direction), this method will require $2(n+1)+n=3n+2$ many construction lines in total. This seems likely to be optimal, but can we prove this? This is a critical point. You are right that with straightedge only, you cannot construct the exact midpoint (you can prove this by observing that it is not invariant under the transformations of projective geometry, although these preserve straight lines). The point of the construction is that the chessboard will still look good even if you do not use the exact midpoint. A similar issue affects the other construction method, where you fix the initial square, since the corresponding diagonal point at infinity may not be the midpoint.
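If you would like to experiment numerically, here is a small Python sketch (my own illustration, not part of the post) that produces the same kind of picture by a projective shortcut rather than by the straightedge construction: the perspective image of a planar grid is a homography, so grid point $(i,j)$ maps to $i\,v_{\mathrm{right}} + j\,v_{\mathrm{left}} + \mathrm{corner}$ in homogeneous coordinates. The two vanishing points and the front corner below are arbitrary illustrative choices, and their homogeneous scalings fix the size of the front square.

```python
import numpy as np

# Perspective images of the two grid directions (vanishing points on the horizon)
# and of the front corner, all in homogeneous coordinates [x, y, w].
v_right = np.array([ 4.0, 3.0, 1.0])   # right point at infinity (illustrative numbers)
v_left  = np.array([-4.0, 3.0, 1.0])   # left point at infinity
corner  = np.array([ 0.0, 0.0, 1.0])   # front corner of the board

# A perspective view of a planar grid is a homography: grid point (i, j) maps to
# i*v_right + j*v_left + corner (up to scale). The diagonal vanishing point is
# the image of the direction (1, 1), i.e. v_right + v_left, here midway on the horizon.
H = np.column_stack([v_right, v_left, corner])

def image_of(i, j):
    p = H @ np.array([i, j, 1.0])
    return p[:2] / p[2]                 # back to ordinary picture coordinates

n = 8                                   # 8x8 chessboard
for j in range(n + 1):                  # corners of the perspective grid, row by row
    print([tuple(np.round(image_of(i, j), 2)) for i in range(n + 1)])
```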
CommonCrawl
The topic of matrix stability is very important for determining the stability of solutions to systems of differential equations. We examine several problems in the field of matrix stability, including minimal conditions for a $7\times7$ matrix sign pattern to be potentially stable, and applications of sign patterns to the study of Turing instability in the $3\times3$ case. Furthermore, some of our work serves as a model for a new method of approaching similar problems in the future. Hambric, Christopher, "Potential Stability of Matrix Sign Patterns" (2018). Undergraduate Honors Theses. Paper 1183.
CommonCrawl
Carrithers, J. A., C. C. Carroll, R. H. Coker, Dennis H. Sullivan, and T. A. Trappe. 2007. Concurrent exercise and muscle protein synthesis: implications for exercise countermeasures in space. Aviation, Space, and Environmental Medicine 78:457–462. Yeo, S. E., N. P. Hays, R. A. Dennis, Patrick M. Kortebein, Dennis H. Sullivan, William J. Evans, and R. H. Coker. 2007. Fat distribution and glucose metabolism in older, obese men and women. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences 62:1393–1401. Coker, R. H., R. H. Williams, S. E. Yeo, Patrick M. Kortebein, D. L. Bodenner, P. A. Kern, and William J. Evans. 2009. Visceral fat and adiponectin: associations with insulin resistance are tissue-specific in women. Metabolic Syndrome and Related Disorders 7:61–67. Coker, R. H., B. D. Lacy, P. E. Williams, and D. H. Wasserman. 2000. Hepatic $\alpha$- and $\beta$-adrenergic receptors are not essential for the increase in Ra during exercise in diabetes. American Journal of Physiology-Endocrinology and Metabolism 278:E444–E451. Simonsen, L., R. Coker, N. A. L. Mulla, M. Kjær, and J. Bülow. 2002. The effect of insulin and glucagon on splanchnic oxygen consumption. Liver 22:459–466. Coker, R. H., B. D. Lacy, M. G. Krishna, and D. H. Wasserman. 1999. Splanchnic glucagon kinetics in exercising alloxan-diabetic dogs. Journal of Applied Physiology 86:1626–1631. Coker, R. H., Y. Koyama, B. D. Lacy, P. E. Williams, N. Rhèaume, and D. H. Wasserman. 1999. Pancreatic innervation is not essential for exercise-induced changes in glucagon and insulin or glucose kinetics. American Journal of Physiology-Endocrinology and Metabolism 277:E1122–E1129. Koyama, Y., P. Galassetti, R. H. Coker, R. R. Pencek, B. D. Lacy, S. N. Davis, and D. H. Wasserman. 2002. Prior exercise and the response to insulin-induced hypoglycemia in the dog. American Journal of Physiology-Endocrinology and Metabolism 282:E1128–E1138. Krishna, M. G., R. H. Coker, B. D. Lacy, B. A. Zinker, A. E. Halseth, and D. H. Wasserman. 2000. Glucagon response to exercise is critical for accelerated hepatic glutamine metabolism and nitrogen disposal. American Journal of Physiology-Endocrinology and Metabolism 279:E638–E645. Galassetti, P., Y. Koyama, R. H. Coker, D. B. Lacy, A. D. Cherrington, and D. H. Wasserman. 1999. Role of a negative arterial-portal venous glucose gradient in the postexercise state. American Journal of Physiology-Endocrinology and Metabolism 277:E1038–E1045.
CommonCrawl
We investigate the efficiency of the recently proposed Restricted Boltzmann Machine (RBM) representation of quantum many-body states to study both the static properties and quantum spin dynamics in the two-dimensional Heisenberg model on a square lattice. For static properties we find close agreement with numerically exact Quantum Monte Carlo results in the thermodynamical limit. For dynamics and small systems, we find excellent agreement with exact diagonalization, while for larger systems close consistency with interacting spin-wave theory is obtained. In all cases the accuracy converges fast with the number of network parameters, giving access to much bigger systems than feasible before. This suggests great potential to investigate the quantum many-body dynamics of large scale spin systems relevant for the description of magnetic materials strongly out of equilibrium. 1- The results presented are new. 2- The technicalities are well-covered in the paper. 1 - In addition to the energy and the magnetization, the authors could consider other ground state static observables, e.g., magnetic susceptibility, to check the accuracy of the method. The paper by Fabiani et al. investigates the accuracy of a recently proposed quantum many-body variational method based on the restricted Boltzmann machine (RBM) neural network. As pointed out in the introduction, this method has the potential to efficiently simulate static and dynamic properties of many-body wave functions in any dimension. However, the efficiency of the RBM method in simulations of dynamical properties was not tested in dimensions higher than one. The introduction motivates well this point. In the paper, the RBM method is applied to study static and some dynamical properties of the prototypical two-dimensional Heisenberg model (HM); the results are then validated with other exact (or approximated) methods. The results presented by the authors provide relevant information about the efficiency of the restricted Boltzmann machine in two dimensions. 1 - The authors mention that for larger systems, already for $\alpha = 4$ "convergence is reached within Monte Carlo error". Is this a general feature of the method, i.e., do larger systems require smaller $\alpha$ for convergence? Or is it just a numerical observation for this specific case? I think the authors should comment about this in the manuscript. 1- Timeless investigation of RBM states for strongly-correlated systems. 1) I do not like the nomenclature of ``reinforcement learning'' for the optimization technique. Even though this is a fancy name, the optimisation is just standard, according to the Monte Carlo community (see ref. 26). For the real-time evolution a VMC approach was proposed in Scientific Reports 2, 243 (2012) for a Bose-Hubbard model. I think that this paper should be cited. 2) Are the variational parameters real or complex for the static calculations? Is the Marshall sign imposed? 3) The standard way to define the energy accuracy is to normalize |E_vmc-E_0| by E_0 and not by E_vmc. 1) Do not use ``reinforcement learning'' and add the reference. 2) Specify if W is real or complex for the static calculations. 3) Change the normalization in the accuracy.
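For readers unfamiliar with the variational form under review, the following short sketch (my own illustration, with random rather than optimised parameters, and not code from the paper) evaluates the standard RBM ansatz for a spin-1/2 configuration, $\Psi(s) = \exp(\sum_i a_i s_i)\prod_j 2\cosh(b_j + \sum_i W_{ji} s_i)$, whose number of parameters is controlled by the hidden-unit density $\alpha = M/N$ discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha = 16, 4                 # visible spins and hidden-unit density
M = alpha * N                    # number of hidden units

# Complex variational parameters (random here; optimised in a real calculation).
a = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
b = 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
W = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

def log_psi(s):
    """Log of the RBM amplitude for a spin configuration s with entries +-1."""
    theta = b + W @ s
    return a @ s + np.sum(np.log(2.0 * np.cosh(theta)))

s = rng.choice([-1.0, 1.0], size=N)
print("log Psi(s) =", log_psi(s))
```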
CommonCrawl
Simplify $7^2 \times 7^3$ after first writing it in factor form. In factor form, $7^2 \times 7^3 = (7\times 7)\times(7\times 7\times 7) = 7^5$; using the index law, $7^2 \times 7^3 = 7^{2+3} = 7^5$, which is the same answer. Simplify $4^3 \times 4 \times 4^5$. Many questions will be algebraic, meaning that a pronumeral is used. In such questions we multiply the coefficients and apply the multiplication rule to the pronumeral separately. When there is more than one pronumeral involved in the question, we apply this rule to each pronumeral separately. Simplify $3x^2 \times 2x^5 \times x \times x^3$.
CommonCrawl
Mixture density networks (MDN) (Bishop, 1994) are a class of models obtained by combining a conventional neural network with a mixture density model. We demonstrate with an example in Edward. A webpage version is available at http://edwardlib.org/tutorials/mixture-density-network. """Draws samples from mixture model. Returns 2 d array with input X and sample from prediction of mixture model. We use the same toy data from David Ha's blog post, where he explains MDNs. It is an inverse problem where for every input $x_n$ there are multiple outputs $y_n$. We define TensorFlow placeholders, which will be used to manually feed batches of data during inference. This is one of many ways to train models with data in Edward. We use a mixture of 20 normal distributions parameterized by a feedforward network. That is, the membership probabilities and per-component mean and standard deviation are given by the output of a feedforward network. We use tf.layers to construct neural networks. We specify a three-layer network with 15 hidden units for each hidden layer. """loc, scale, logits = NN(x; theta)""" # sampling is not necessary for MAP estimation anyways. Note that we use the Mixture random variable. It collapses out the membership assignments for each data point and makes the model differentiable with respect to all its parameters. It takes a Categorical random variable as input—denoting the probability for each cluster assignment—as well as components, which is a list of individual distributions to mix over. For more background on MDNs, take a look at Christopher Bonnett's blog post or at Bishop (1994). # specify the neural networks. Here, we will manually control the inference and how data is passed into it at each step. Initialize the algorithm and the TensorFlow variables. Now we train the MDN by calling inference.update(), passing in the data. The quantity inference.loss is the loss function (negative log-likelihood) at that step of inference. We also report the loss function on test data by calling inference.loss and where we feed test data to the TensorFlow placeholders instead of training data. We keep track of the losses under train_loss and test_loss. Note a common failure mode when training MDNs is that an individual mixture distribution collapses to a point. This forces the standard deviation of the normal to be close to 0 and produces NaN values. We can prevent this by thresholding the standard deviation if desired. After training for a number of iterations, we get out the predictions we are interested in from the model: the predicted mixture weights, cluster means, and cluster standard deviations. To do this, we fetch their values from session, feeding test data X_test to the placeholder X_ph. Let's plot the log-likelihood of the training and test data as functions of the training epoch. The quantity inference.loss is the total log-likelihood, not the loss per data point. Below we plot the per-data point log-likelihood by dividing by the size of the train and test data respectively. We see that it converges after roughly 400 iterations. Let's look at how individual examples perform. Note that as this is an inverse problem we can't get the answer correct, but we can hope that the truth lies in area where the model has high probability. In this plot the truth is the vertical grey line while the blue line is the prediction of the mixture density network. As you can see, we didn't do too bad. We can check the ensemble by drawing samples of the prediction and plotting the density of those. 
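For reference, here is a condensed sketch of the model-definition step described above. It is a reconstruction written from the prose rather than the tutorial's exact code; it assumes the Edward 1.x API (Categorical, Normal, Mixture, ed.MAP) together with tf.layers, and the function name neural_network and the layer sizes are simply taken from the description of the 15-unit hidden layers and 20 mixture components.

```python
import tensorflow as tf
import edward as ed
from edward.models import Categorical, Mixture, Normal

K = 20                                   # number of mixture components
X_ph = tf.placeholder(tf.float32, [None, 1])
y_ph = tf.placeholder(tf.float32, [None])

def neural_network(X):
    """Feedforward net returning per-component loc, scale and mixing logits."""
    hidden1 = tf.layers.dense(X, 15, activation=tf.nn.relu)
    hidden2 = tf.layers.dense(hidden1, 15, activation=tf.nn.relu)
    locs = tf.layers.dense(hidden2, K, activation=None)
    scales = tf.layers.dense(hidden2, K, activation=tf.exp)   # keep scales positive
    logits = tf.layers.dense(hidden2, K, activation=None)
    return locs, scales, logits

locs, scales, logits = neural_network(X_ph)
cat = Categorical(logits=logits)
components = [Normal(loc=loc, scale=scale)
              for loc, scale in zip(tf.unstack(tf.transpose(locs)),
                                    tf.unstack(tf.transpose(scales)))]
# Mixture collapses out the component assignments, keeping the model differentiable.
y = Mixture(cat=cat, components=components, value=tf.zeros_like(y_ph))

# MAP estimation of the network weights; batches are fed through the placeholders.
inference = ed.MAP(data={y: y_ph})
inference.initialize()
ed.get_session().run(tf.global_variables_initializer())
# per-step training: info = inference.update(feed_dict={X_ph: X_batch, y_ph: y_batch})
```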
The MDN has learned what we'd like it to learn. We thank Christopher Bonnett for writing the initial version of this tutorial. More generally, we thank Chris for pushing forward momentum to have Edward tutorials be accessible and easy-to-learn.
CommonCrawl
I would like to know how one can solve the following optimization problem using Sage. I would like to have a matrix where every element is an integer from 0 to 9. One can read numbers from the matrix by choosing a starting element and a direction out of the eight directions (horizontal, vertical, diagonal), going in that direction 0 to 5 steps, and concatenating the digits. Now the problem is to find an $n\times m$ matrix with digits from 0 to 9 where one can read the squares of 1 to 100 in the way described above, with $nm$ as small as possible. How can I do that in Sage? Can Sage do it for example in the case of a 12x10 grid? Does for example simulated annealing or a genetic algorithm work here, or is it easy to implement the algorithm in https://stackoverflow.com/questions/9... ? I would like to see how good a result we can achieve. So $2 \times 5$ (or $5 \times 2$ after transposition) is minimal. You can also let the code print all the solutions, if you want. It would be interesting to see other approaches. Thanks for that! I think this proves the minimality of the example. But the case I would like to solve is much bigger. I think it might be solved by a genetic algorithm or by simulated annealing, but I have no experience of implementing those. Stick (line segments) percolation - graph theory? Can sage do symbolic optimization? How to find a minimum number of switch presses to shut down all lamps?
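Since the question asks whether simulated annealing could work here, below is a rough, self-contained Python sketch of that approach (illustrative only; it makes no claim of finding an optimal grid, the full rescoring of the grid at every step is deliberately simple rather than fast, and parameters such as the temperature schedule and iteration count are arbitrary choices). It scores a grid by how many target squares can be read in the eight directions with runs of up to 6 digits, and mutates one cell at a time.

```python
import math
import random

TARGETS = {str(k * k) for k in range(1, 101)}    # squares of 1..100 as digit strings
DIRS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def readable(grid):
    """Return the set of target strings readable from the grid (runs of 1 to 6 cells)."""
    n, m = len(grid), len(grid[0])
    found = set()
    for r in range(n):
        for c in range(m):
            for dr, dc in DIRS:
                s, rr, cc = "", r, c
                for _ in range(6):                       # the start cell plus up to 5 steps
                    if not (0 <= rr < n and 0 <= cc < m):
                        break
                    s += grid[rr][cc]
                    if s in TARGETS:
                        found.add(s)
                    rr, cc = rr + dr, cc + dc
    return found

def anneal(n, m, iters=20000, t0=2.0):
    grid = [[random.choice("0123456789") for _ in range(m)] for _ in range(n)]
    score = len(readable(grid))
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-3                  # crude linear cooling schedule
        r, c = random.randrange(n), random.randrange(m)
        old, grid[r][c] = grid[r][c], random.choice("0123456789")
        new_score = len(readable(grid))                  # full rescore: slow but simple
        if new_score >= score or random.random() < math.exp((new_score - score) / t):
            score = new_score
        else:
            grid[r][c] = old                             # revert the rejected mutation
    return grid, score

grid, score = anneal(12, 10)
print(score, "of", len(TARGETS), "squares readable")
for row in grid:
    print("".join(row))
```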
CommonCrawl
What does this last expression mean? What happens if the condition $ \mathbb E | X | < \infty $ in the statement of the LLN is not satisfied? Will convergence become visible if we take $ n $ even larger? Why does adding independent copies produce a bell-shaped distribution? The distribution used in that illustration is itself a convex combination of three beta distributions. What happens when you replace $ [0, \pi / 2] $ with $ [0, \pi] $? Illustrates the delta method, a consequence of the central limit theorem. The corresponding plot labels are "$N(0, g'(\mu)^2 \sigma^2)$" and "Chi-squared with 2 degrees of freedom".
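As a generic illustration of the "adding independent copies produces a bell shape" question above (my own demonstration, not the lecture's code), the snippet below standardises sums of i.i.d. draws from a decidedly non-normal distribution and compares their histogram with the standard normal density:

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n, reps = 250, 100_000

# i.i.d. exponential draws (mean 1, std 1); standardised sums (X_1+...+X_n - n) / sqrt(n)
samples = rng.exponential(scale=1.0, size=(reps, n))
z = (samples.sum(axis=1) - n) / np.sqrt(n)

grid = np.linspace(-4, 4, 200)
plt.hist(z, bins=80, density=True, alpha=0.5, label="standardised sums")
plt.plot(grid, norm.pdf(grid), label="N(0, 1) density")
plt.legend()
plt.show()
```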
CommonCrawl
Supposing that $\Gamma$ is an infinite, discrete group and that $\beta\Gamma$ is the Stone-Cech compactification of $\Gamma$, the group structure of $\Gamma$ can be extended to a semigroup structure on $\beta\Gamma$ by means of its universal property, for which the right multiplication maps over $\beta\Gamma$ are all continuous. It is well-known that any minimal left ideal (i.e., subsets of the form $(\beta\Gamma)x$ for some $x\in\beta\Gamma$) contains an idempotent. Does $\beta\Gamma\setminus\Gamma$ contain a non-idempotent element? Let $\Gamma=\mathbb Z$, and consider the sequence of all odd numbers, viewed as a filter. Pick an ultrafilter containing this filter. Then this ultrafilter is not idempotent. The reason is simple: odd + odd = even. Not the answer you're looking for? Browse other questions tagged semigroups-and-monoids compactifications stone-cech-compactification or ask your own question. Are semigroups with finite-to-one right multiplication "moving"?
CommonCrawl
J. Math. Study, 52 (2019), pp. 1-17. We investigate traveling fronts, including pulsating ones, of a forced curvature flow in a plane fibered medium. The main topic of this note is an uniqueness issue of such traveling fronts. In addition to line-shaped profiles, we also consider traveling fronts in the form of V-shaped parabolas. J. Math. Study, 52 (2019), pp. 18-29. A parameterized generalized successive overrelaxation (PGSOR) method for a class of block two-by-two linear system is established in this paper. The convergence theorem of the method is proved under suitable assumptions on iteration parameters. Besides, we obtain a functional equation between the parameters and the eigenvalues of the iteration matrix for this method. Furthermore, an accelerated variant of the PGSOR (APGSOR) method is also presented in order to raise the convergence rate. Finally, numerical experiments are carried out to confirm the theoretical analysis as well as the feasibility and the efficiency of the PGSOR method and its variant. J. Math. Study, 52 (2019), pp. 30-37. In this paper, we investigate the complete moment convergence and complete convergence for randomly weighted sums of negatively superadditive dependent (NSD, in short) random variables. The results obtained in the paper generalize the convergence theorem for constant weighted sums to randomly weighted sums of dependent random variables. In addition, strong law of large numbers for NSD sequence is obtained. J. Math. Study, 52 (2019), pp. 38-52. where $0<\alpha \le 1$, is established. Such expression is precisely the classical Taylor's and Cauchy's mean value theorem in the particular case $\alpha=1$. In addition, detailed expressions for $R_n^\alpha (\xi,\eta)$ and $T_n^\alpha (\xi,\eta)$ involving the sequential Caputo fractional derivative are also given. J. Math. Study, 52 (2019), pp. 53-59. The concept of minimality is generalized in different ways, one of which is the definition of k-minimality. In this paper k-minimality is studied for minimal hypersurfaces of a Euclidean space under different conditions on the number of principal curvatures. We will also give a counterexample to Lk-conjecture. J. Math. Study, 52 (2019), pp. 60-74. In this paper, we study the global regularity issue of two dimensional incompressible magnetic Bénard equations with partial dissipation and magnetic diffusion. It remains open whether the smooth initial data produce solutions that are globally regular in time for all values of the parameters involved in the equations. We present conditional global regularity of the solutions. Moreover, we prove the global regularity for the slightly regularized system. J. Math. Study, 52 (2019), pp. 75-97. We first get an existence and uniqueness result for a nonlinear eigenvalue problem. Then, we establish the constant rank theorem for the problem and use it to get a convexity property of the solution. J. Math. Study, 52 (2019), pp. 98-110. The motion of hydro-magnetic fluid can be described by Navier-StokesMaxwell system. In this paper, we prove global existence and uniqueness for the solutions of Navier-Stokes-Maxwell system in 3 dimensional space for small data.
CommonCrawl
Fitting a smooth curve to data points $d_0,\dots,d_n$ lying on a Riemannian manifold $\mathcal M$ and associated with real-valued parameters $t_0,\dots,t_n$ is a common problem in applications like wind field approximation, rigid body motion interpolation, or sphere-valued data analysis. The resulting curve should strike a balance between data proximity and a smoothing regularization constraint. In this talk we present the general framework of optimization on manifolds. We then introduce a variational model to fit a composite Bézier curve to the set of data points $d_0,\dots,d_n$ on a Riemannian manifold $\mathcal M$. The resulting curve is obtained in such a way that its mean squared acceleration is minimal in addition to remaining close the data points. We approximate the acceleration by discretizing the squared second order derivative along the curve. We derive a closed-form, numerically stable and efficient algorithm to compute the gradient of a Bézier curve on manifolds with respect to its control points. This gradient can be expressed as a concatenation of so called adjoint Jacobi fields. Several examples illustrate the capabilities of this approach both for interpolation and approximation.
CommonCrawl
Abstract: On a polarised surface, solutions of the Vafa-Witten equations correspond to certain polystable Higgs pairs. When stability and semistability coincide, the moduli space admits a symmetric obstruction theory and a $\mathbb C^*$ action with compact fixed locus. Applying virtual localisation we define invariants constant under deformations. When the vanishing theorem of Vafa-Witten holds, the result is the (signed) Euler characteristic of the moduli space of instantons. In general there are other, rational, contributions. Calculations of these on surfaces with positive canonical bundle recover the first terms of modular forms predicted by Vafa and Witten.
CommonCrawl
Suppose $T: V \to W$. Why are matrices used as a method of recording the values of the $Tv_j$'s in terms of a basis of $W$? It said matrices are used as an efficient method of recording the values of the $Tv_j$'s in terms of a basis of $W$. Why is it related to the basis of $W$? Does it just mean that after transforming a basis vector of $V$ we get a vector in $W$, and that vector in $W$ can be written as a combination of the basis of $W$? Think about this: the entries of the matrix are determined by the basis of $V$, but the elements of $W$ that you obtain by applying $T$ will be expressed as a linear combination of the basis of $W$, which also determines the particular entries of the matrix of $T$. Insights about $Tv_j=w_j$, linear maps and the basis of the domain. For $T \in \mathcal L (V, W)$ show that $\mathcal M (T)$ has at least dim range $T$ nonzero entries. Is the image of a basis of $V$ under a linear map a basis of $W$? Why do linear maps act like matrix multiplication?
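In symbols, the point of the answer is the following standard relation (stated here for completeness, not quoted from the book): if $v_1,\dots,v_n$ is a basis of $V$ and $w_1,\dots,w_m$ a basis of $W$, then each $Tv_j$ is expanded in the basis of $W$, and those coefficients fill the $j$-th column of the matrix.

```latex
% Defining relation for the matrix A = M(T) with respect to bases
% v_1,...,v_n of V and w_1,...,w_m of W: the j-th column of A stores
% the coordinates of T v_j in the basis of W.
\[
  T v_j \;=\; \sum_{i=1}^{m} A_{i,j}\, w_i ,
  \qquad
  \mathcal{M}(T) \;=\;
  \begin{pmatrix}
    A_{1,1} & \cdots & A_{1,n} \\
    \vdots  &        & \vdots  \\
    A_{m,1} & \cdots & A_{m,n}
  \end{pmatrix}.
\]
```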
CommonCrawl
Is it true that, for any $1$-form on a $3$-mfld $\alpha$, $\alpha \wedge d\alpha=0$? Show that the volume element of $V$ is $ϕ_1\wedge\cdots\wedge ϕ_k$. Show that $1$-form has particular coordinate representation. $2$-dimensional subbundle of tangent bundle of closed $3$-manifold integrable if and only if $\alpha \wedge d\alpha = 0$?
CommonCrawl
This is another iteration of beat the casino. That question did not require a practical, implementable strategy, whereas this one does. The rules are the same and I list them below. However, the OP answer to that question only had a theoretical result with no concrete strategy, and the accepted answer was not much of an improvement on the naive 2/3 method. I am looking for a practical, implementable result that achieves at least a 70% success rate; well within the bounds of the theoretical result. Each round, $A$, $B$, and the casino simultaneously decide to show a $0$ or a $1$. If all three numbers match, $A$ and $B$ win that round. $A$ and $B$ are working cooperatively and can communicate before the game begins. $A$ has a method, just before the game starts, to learn the choices the casino will make over all the rounds. However, after learning this information $A$ cannot communicate with $B$ in any way except by her choices in the game. $A$ and $B$ are trying to maximize the fraction $p$ of rounds they win in the worst case. The game lasts for $n$ rounds. What is the best possible $p$ that $A$ and $B$ can achieve as $n \to \infty$? NOTE: I answered the referenced question (very late) and was able to achieve 67.8% with a relatively easy to describe strategy. I provided exact details on the strategies of each player, which got fairly complicated. If your strategy is easy to describe but complicated to implement, that is fine, so long as you can show the implementation is possible. Edit edit: Reverted an answer due to critical math error. This doesn't achieve 70%, but it's an improvement based on your previous answer, so I'll post it to get started. Assume we have one bit of information, communicating the most common answer in the next 29 rounds. We'll design a method that cycles every N rounds, and passes such a bit forward for each next iteration of the method. For those 29 rounds, Bob will always pick the answer indicated by the majority bit. Whenever Bob will be answering wrong, Alice can communicate information (14 bits). Alice can also communicate 4 bits by getting one answer wrong that Bob gets right: getting the first wrong communicates 0000, the second communicates 0001, all the way up to getting the fifteenth wrong: 1110. 1111 is represented by Alice choosing to get none of the answers wrong. One of the bits will be the majority bit for the next batch. 15 of those bits can be spent to tell Bob the next 15 answers (leaving us with 2 bits unspent), and again Alice can get one of those 15 answers wrong to communicate 4 additional bits (again with 1111 being represented by getting all answers right). We then use the remaining 6 bits to communicate the next 6 answers. So we get 14 of the first 29 right, 14 of the next 15 right, and 6 of the next 6 right: 34/50 = 68%. This can be improved further: consider three of these 50-game batches: we always win games 45-50, 95-100 and 145-150. By choosing to lose one of those games we can communicate an additional 4 bits of information about the answers for 151-154. We could cut the above a little finer still (instead sacrifice two games from [45-50, 95-100, 145-150, 195-200, 245-250] in order to get 8 bits, then look at [251-258, 509-516] and sacrifice one of them to get an additional 4 bits...) but that's a pit of diminishing returns that I doubt will take us above 70%.
CommonCrawl
As an example suppose the set $x_i=1,\ldots,6000$ and $W=24000$. How can I use FrobeniusSolve and apply this constraint to $c_i$ efficiently?
CommonCrawl
SageMath-Combinat is a software project whose mission statement is to improve the open source mathematical system SageMath as an extensible toolbox for computer exploration in (algebraic) combinatorics, and foster code sharing between researchers in this area. SageMath-Combinat is the reincarnation in SageMath of MuPAD-Combinat; see the list of publications citing the latter. Below is a list of publications citing SageMath-Combinat. This list is also available in BibTeX format. The publications listed in each section are sorted in chronological order. Where two or more items are published in the same year, these items are sorted alphabetically by the authors' last names. Dan Drake and Jang Soo Kim. k-distant Crossings and Nestings of Matchings and Partitions. Proceedings of the 21st International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AK, pages 351--362, 2009. Ghislain Fourier, Masato Okado, and Anne Schilling. Kirillov-Reshetikhin crystals for nonexceptional types. Advances in Mathematics, volume 222, number 3, pages 1080--1116, 2009. Tom Denton. A combinatorial formula for orthogonal idempotents in the 0-Hecke algebra of S_N. Proceedings of the 22nd International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AN, pages 701--712, 2010. Dan Drake. Bijections from Weighted Dyck Paths to Schröder Paths. Journal of Integer Sequences, volume 13, number 9, pages 10.9.2, 2010. Ghislain Fourier, Masato Okado, and Anne Schilling. Perfectness of Kirillov-Reshetikhin Crystals for Nonexceptional Types. Contemporary Mathematics, volume 506, pages 127--143, 2010. Florent Hivert, Anne Schilling, and Nicolas M. Thiéry. The biHecke Monoid of A Finite Coxeter Group. Proceedings of the 22nd International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AN, pages 307--318, 2010. Brant Jones and Anne Schilling. Affine Structures and a Tableau Model for E_6 Crystals. Journal of Algebra, volume 324, number 9, pages 2512--2542, 2010. Thomas Lam, Anne Schilling, and Mark Shimozono. K-theory Schubert Calculus of the Affine Grassmannian. Compositio Mathematica, volume 146, number 4, pages 811--852, 2010. Jean-Christophe Novelli, Franco Saliola, and Jean-Yves Thibon. Representation theory of the higher-order peak algebras. Journal of Algebraic Combinatorics, volume 32, number 4, pages 465--495, 2010. Anne Schilling and Qiang Wang. Promotion Operator on Rigged Configurations of Type A. The Electronic Journal of Combinatorics, volume 17, number 1, pages R24, 2010. Jason Bandlow, Anne Schilling, and Mike Zabrocki. The Murnaghan-Nakayama rule for k-Schur functions. Journal of Combinatorial Theory, Series A, volume 118, number 5, pages 1588--1607, 2011. Jason Bandlow, Anne Schilling, and Mike Zabrocki. The Murnaghan-Nakayama rule for k-Schur functions. Proceedings of the 23rd International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AO, pages 99--110, 2011. Daniel Bump and Maki Nakasuji. Casselman's Basis of Iwahori Vectors and the Bruhat Order. Canadian Journal of Mathematics, volume 63, pages 1238--1253, 2011. Marie-Claude David and Nicolas M. Thiéry. Exploration of finite dimensional Kac algebras and lattices of intermediate subfactors of irreducible inclusions. 
Journal of Algebra and Its Applications, volume 10, number 5, pages 995--1106, 2011. Tom Denton. A combinatorial formula for orthogonal idempotents in the 0-Hecke algebra of the symmetric group. Electronic Journal of Combinatorics, volume 18, number 1, pages P28, 2011. Tom Denton, Florent Hivert, Anne Schilling, and Nicolas M. Thiéry. On the representation theory of J-trivial monoids. Séminaire Lotharingien de Combinatoire, volume 64, pages B64d, 2011. A. Blondin Massé, S. Brleka, A. Garona, and S. Labbé. Equations on palindromes and circular words. Theoretical Computer Science, volume 412, number 27, pages 2922--2930, 2011. Gregg Musiker and Christian Stump. A Compendium on the Cluster Algebra and Quiver Package in Sage. Séminaire Lotharingien de Combinatoire, volume 65, pages B65d, 2011. Steven Pon and Qiang Wang. Promotion and evacuation on standard Young tableaux of rectangle and staircase shape. Electronic Journal of Combinatorics, volume 18, number 1, pages P18, 2011. Viviane Pons. Multivariate Polynomials in Sage. Séminaire Lotharingien de Combinatoire, volume 66, number 7, pages B66z, 2011. Anne Schilling and Peter Tingley. Demazure crystals and the energy function. Proceedings of the 23rd International Conference on Formal Power Series and Algebraic Combinatorics. Discrete Mathematics & Theoretical Computer Science, volume AO, pages 861--872, 2011. Marcelo Aguiar, Carlos André, Carolina Benedetti, Nantel Bergeron, Zhi Chen, Persi Diaconis, Anders Hendrickson, Samuel Hsiao, I. Martin Isaacs, Andrea Jedwab, Kenneth Johnson, Gizem Karaali, Aaron Lauve, Tung Le, Stephen Lewis, Huilan Li, Kay Magaard, Eric Marberg, Jean-Christophe Novelli, Amy Pang, Franco Saliola, Lenny Tevlin, Jean-Yves Thibon, Nathaniel Thiem, Vidya Venkateswaran, C. Ryan Vinroot, Ning Yan, and Mike Zabrocki. Supercharacters, symmetric functions in noncommuting variables, and related Hopf algebras. Adv. Math., volume 229, number 4, pages 2310--2337, 2012. Chris Berg, Nantel Bergeron, Steven Pon, and Mike Zabrocki. Expansions of k-Schur Functions in the Affine nilCoxeter Algebra. Electronic Journal of Combinatorics, volume 19, number 2, pages P55, 2012. Chris Berg, Nantel Bergeron, Hugh Thomas, and Mike Zabrocki. Expansion of k-Schur functions for maximal rectangles within the affine nilCoxeter algebra. J. Comb., volume 3, number 3, pages 563--589, 2012. Tom Denton. Canonical Decompositions of Affine Permutations, Affine Codes, and Split k-Schur Functions. Electronic Journal of Combinatorics, volume 19, number 4, pages P19, 2012. Tom Denton. Algebraic and affine pattern avoidance. Sém. Lothar. Combin., volume 69, pages Art. B69c,40, 2012. Valentin Féray and Pierre-Loïc Méliot. Asymptotics of q-plancherel measures. Probability Theory and Related Fields, volume 152, pages 589--624, 2012. Kyu-Hwan Lee and Ben Salisbury. A combinatorial description of the Gindikin-Karpelevich formula in type $A$. J. Combin. Theory Ser. A, volume 119, number 5, pages 1081--1094, 2012. Jennifer Morse and Anne Schilling. A combinatorial formula for fusion coefficients. Proceedings of the 24th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2012). Discrete Mathematics & Theoretical Computer Science, volume AR, pages 735--744, 2012. Steven Pon. Affine Stanley symmetric functions for classical types. Journal of Algebraic Combinatorics, volume 36, number 4, pages 595--622, 2012. Martin Rubey. Maximal 0-$1$-fillings of moon polyominoes with restricted chain lengths and rc-graphs. Adv. in Appl. 
Math., volume 48, number 2, pages 290--305, 2012. Anne Schilling and Peter Tingley. Demazure Crystals, Kirillov-Reshetikhin Crystals, and the Energy Function. The Electronic Journal of Combinatorics, volume 19, number 2, pages P4, 2012. Arvind Ayyer, Steven Klee, and Anne Schilling. Markov chains for promotion operators. Fields Communications Series, Springer, 2013. I. P. Goulden, Mathieu Guay-Paquet, and Jonathan Novak. Polynomiality of monotone Hurwitz numbers in higher genera. Advances in Mathematics, volume 238, number 1, pages 1--23, 2013. Florent Hivert, Anne Schilling, and Nicolas M. Thiéry. The biHecke monoid of a finite Coxeter group and its representations. Algebra & Number Theory, volume 7, number 3, pages 595--671, 2013. Cristian Lenart, Satoshi Naito, Daisuke Sagaki, Anne Schilling, and Mark Shimozono. A uniform model for Kirillov-Reshetikhin crystals. Extended abstract. DMCTS proc, volume AS, pages 25--36, 2013. Cristian Lenart and Anne Schilling. Crystal energy functions via the charge in types A and C. Mathematische Zeitschrift, volume 273, number 1-2, pages 401--426, 2013. Masato Okado, Reiho Sakamoto, and Anne Schilling. Affine crystal structure on rigged configurations of type D_n^(1). Journal of Algebraic Combinatorics, volume 37, number 3, pages 571--599, 2013. Arvind Ayyer, Steven Klee, and Anne Schilling. Combinatorial Markov chains on linear extensions. J. Algebraic Combin., volume 39, number 4, pages 853--881, 2014. Chris Berg, Nantel Bergeron, Franco Saliola, Luis Serrano, and Mike Zabrocki. A lift of the Schur and Hall-Littlewood bases to non-commutative symmetric functions. Canad. J. Math., volume 66, number 3, pages 525--565, 2014. Mathieu Guay-Paquet, Alejandro H. Morales, and Eric Rowland. Structure and enumeration of $(3+1)$-free posets. Ann. Comb., volume 18, number 4, pages 645--674, 2014. Kyu-Hwan Lee and Ben Salisbury. Young tableaux, canonical bases, and the Gindikin-Karpelevich formula. J. Korean Math. Soc., volume 51, number 2, pages 289--309, 2014. Kyu-Hwan Lee, Philip Lombardo, and Ben Salisbury. Combinatorics of Casselman-Shalika formula in type $A$. Proc. Amer. Math. Soc., volume 142, number 7, pages 2291--2301, 2014. Tomoki Nakanishi and Salvatore Stella. Diagrammatic description of $c$-vectors and $d$-vectors of cluster algebras of finite type. Electron. J. Combin., volume 21, number 1, pages Paper 1.3,107, 2014. Arvind Ayyer, Anne Schilling, Benjamin Steinberg, and Nicolas M. Thiéry. Directed Nonabelian Sandpile Models on Trees. Comm. Math. Phys., volume 335, number 3, pages 1065--1098, 2015. Arvind Ayyer, Anne Schilling, Benjamin Steinberg, and Nicolas M. Thiéry. Markov chains, $ℛ$-trivial monoids and representation theory. Internat. J. Algebra Comput., volume 25, number 1-2, pages 169--231, 2015. Chris Berg, Nantel Bergeron, Franco Saliola, Luis Serrano, and Mike Zabrocki. Indecomposable modules for the dual immaculate basis of quasi-symmetric functions. Proc. Amer. Math. Soc., volume 143, number 3, pages 991--1000, 2015. Darij Grinberg and Tom Roby. Iterative Properties of Birational Rowmotion II: Rectangles and Triangles. The Electronic Journal of Combinatorics, volume 22, number 3, pages 3.40, 2015. Cristian Lenart, Satoshi Naito, Daisuke Sagaki, Anne Schilling, and Mark Shimozono. A uniform model for Kirillov-Reshetikhin crystals I: Lifting the parabolic quantum Bruhat graph. Int. Math. Res. Not. IMRN, number 7, pages 1848--1901, 2015. Ben Salisbury and Travis Scrimshaw. A rigged configuration model for $B(\infty)$. J. Combin. Theory Ser. 
A, volume 133, pages 29--57, 2015. Anne Schilling and Travis Scrimshaw. Crystal structure on rigged configurations and the filling map. Electronic J. Combinatorics, volume 22, number 1, pages P1.73, 2015. Darij Grinberg and Tom Roby. Iterative Properties of Birational Rowmotion I: Generalities and Skeletal Posets. The Electronic Journal of Combinatorics, volume 23, number 1, pages 1.33, 2016. Seok-Jin Kang, Kyu-Hwan Lee, Hansol Ryu, and Ben Salisbury. A combinatorial description of the affine Gindikin-Karpelevich formula of type $A_n^(1)$. Lie Algebras, Lie Superalgebras, Vertex Algebras and Related Topics. Amer. Math. Soc., Providence, RI, pages 145--165, 2016. Jennifer Morse and Anne Schilling. Crystal approach to affine Schubert calculus. International Mathematics Research Notices, volume 2016, number 8, pages 2239--2294, 2016. Ben Salisbury and Travis Scrimshaw. Connecting marginally large tableaux and rigged configurations. Algebr. Represent. Theory, volume 19, number 3, pages 523--546, 2016. Travis Scrimshaw. A crystal to rigged configuration bijection and the filling map for type $D_4^(3)$. J. Algebra, volume 448, pages 294--349, 2016. Mike Zabrocki Chris Berg, Nathan Williams. Symmetries on the Lattice of k-Bounded Partitions. Ann. Comb., volume 20, number 2, pages 251--281, 2016. Arvind Ayyer, Anne Schilling, and Nicolas M. Thiéry. Spectral gap for random-to-random shuffling on linear extensions. Exp. Math., volume 26, number 1, pages 22--30, 2017. Emily Gunawan and Travis Scrimshaw. Realization of Kirillov-Reshetikhin crystals $B^1,s$ for $\widehat\mathfraksl_n$ using Nakajima monomials. Sém. Lothar. Combin., volume 78B, pages Art. 47,12, 2017. Tobias Johnson, Anne Schilling, and Erik Slivken. Local limit of the fixed point forest. Electronic Journal of Probability, volume 22, pages 1--26, 2017. Masato Okado, Reiho Sakamoto, Anne Schilling, and Travis Scrimshaw. Type $D_n^(1)$ rigged configuration bijection. J. Algebraic Combin., volume 46, number 2, pages 341--401, 2017. Jianping Pan and Travis Scrimshaw. Virtualization map for the Littelmann path model. Transform. Groups, 2017. Ben Salisbury and Travis Scrimshaw. Rigged configurations for all symmetrizable types. Electron. J. Combin., volume 24, number 1, pages Paper 1.30,13, 2017. Ben Salisbury and Travis Scrimshaw. Using rigged configurations to model $B(\infty)$. Sém. Lothar. Combin., volume 78B, pages Art. 34,12, 2017. Anne Schilling. Richard Stanley through a crystal lens and from a random angle. in: The Mathematical Legacy of Richard P. Stanley, AMS 2016, pages 287--299, 2017. Anne Schilling, Nicolas M. Thiery, Graham White, and Nathan Williams. Braid moves in commutation classes of the symmetric group. European J. Combin., volume 62, pages 15--34, 2017. Travis Scrimshaw. Rigged configurations as tropicalizations of loop Schur functions. J. Integrable Syst., volume 2, number 1, pages xyw015,24, 2017. Ben Salisbury and Travis Scrimshaw. Rigged configurations and the $*$-involution. Lett. Math. Phys., 2018. Nicolas Borie. Calculate invariants of permutation groups by Fourier Transform. PhD thesis, Laboratoire de mathématiques d'Orsay, 2011. Tom Denton. Excursions into Algebra and Combinatorics at q=0. PhD thesis, Department of Mathematics, 2011. Viviane Pons. Combinatoire algèbrique lièe aux ordres sur les permutations. PhD thesis, Marne-la-Valle, 2013. Travis Scrimshaw. Crystals and rigged configurations. PhD thesis, Department of Mathematics, 2015. 
Alexandre Casamayou, Nathann Cohen, Guillaume Connan, Thierry Dumont, Laurent Fousse, François Maltey, Matthias Meulien, Marc Mezzarobba, Clément Pernet, Nicolas M. Thiéry, and Paul Zimmermann. Calcul Mathématique avec Sage. CreateSpace, 2013. Thomas Lam, Luc Lapointe, Jennifer Morse, Anne Schilling, Mark Shimozono, and Mike Zabrocki. k-Schur functions and affine Schubert calculus. Springer, New York; Fields Institute for Research in Mathematical Sciences, Toronto, ON, 2014. William Paulsen. Abstract Algebra: An Interactive Approach, 2nd ed.. Chapman and Hall/CRC Press, 2016. Daniel Bump and Anne Schilling. Crystal Bases: Representations and Combinatorics. World Scientific, 2017. Nicolas M. Thiéry. Algèbre Combinatoire et Effective; des graphes aux algèbres de Kac via l'exploration informatique. Mémoire d'Habilitation à Diriger des Recherches, 2008. Nicolas M. Thiéry. Sage-Combinat, Free and Practical Software for Algebraic Combinatorics. Software demonstration, FPSAC'09, Hagenberg, Austria, 2009. Nicolas Borie and Nicolas M. Thiéry. An evaluation approach to computing invariants rings of permutation groups. arXiv:1110.3849, 2011. Chris Berg, Nantel Bergeron, Franco Saliola, Luis Serrano, and Mike Zabrocki. Multiplicative structures of the immaculate basis of non-commutative symmetric functions. arXiv:1305.4700, 2013. James M. Borger. Witt vectors, semirings, and total positivity. arXiv:1310.3013, 2013. Frédéric Chapoton, Florent Hivert, and Jean-Christophe Novelli. A set-operad of formal fractions and dendriform-like sub-operads. arXiv:1307.0092, 2013. Andrew Mathas. Cyclotomic quiver Hecke algebras of type A. arXiv:1310.2142, 2013. Dan Orr and Mark Shimozono. Specializations of nonsymmetric Macdonald-Koornwinder polynomials. arXiv:1310.0279, 2013. Grégory Chatel Viviane Pons. Counting smaller elements in the Tamari and $m$-Tamari lattices. arXiv:1311.3922, 2013. Chris Berg, Viviane Pons, Travis Scrimshaw, Jessica Striker, and Christian Stump. FindStat - the combinatorial statistics database. arXiv:1401.3690, 2014. Cristian Lenart, Satoshi Naito, Daisuke Sagaki, Anne Schilling, and Mark Shimozono. A uniform model for Kirillov-Reshetikhin crystals II. Alcove model, path model, and $P=X$. arXiv:1402.2203, 2014. Jin Hong, Hyeonmi Lee, and Roger Tian. Rigged Configuration Descriptions of the Crystals $B(\infty)$ and $B(łambda)$ for Special Linear Lie Algebras. arXiv:1604.04357, 2016. Mike Zabrocki Rosa Orellana. Symmetric group characters as symmetric functions. arXiv:1605.06672, 2016. N. Cohen and D. V. Pasechnik. Implementing Brouwer's database of strongly regular graphs. arXiv:1601.00181, 2016-01. Erik Aas, Darij Grinberg, and Travis Scrimshaw. Multiline queues with spectral parameters. arXiv:1810.08157, 2018. If you use Sage-Combinat in a book, paper, website, etc., please email the webmaster and the Sage-Combinat team. Please reference Sage-Combinat as described here. The preferred way to submit publication details is to provide them as a pull request on the dedicated GitHub repository to update the file Sage-Combinat.bib. Also, be sure to find out which components of SageMath, e.g. NumPy, PARI, GAP, that your calculation uses and properly attribute those systems. If you are unsure, ask on the sage-support mailing list. Similarly, consider finding out who wrote the SageMath code you are using and acknowledge them explicitly as well.
CommonCrawl
Abstract: A method is proposed for calculating the matrix elements and cross sections with a wave function with definite angular momentum. The method consists of calculating the matrix element in which the function with definite angular momentum is replaced by the function $\exp(i\mathbf k\cdot\mathbf r-\eta r)$, with subsequent differentiation of the result with respect to $\eta$ and taking the gradients with respect to $\mathbf k$ in accordance with Eq. (5). The method is illustrated for the example of the calculation of the photoeffect cross section from an arbitrary shell without the dipole approximation.
CommonCrawl
I have a proof (sketch) of the Strong Law of Large Numbers, at least the "sufficiency" half of it, that seems a little too easy. This is the version where you only assume i.i.d. random variables, and $E[X]<\infty$. The idea is to derive it ultimately from the special case of i.i.d. Bernoulli r.v.s. That case is relatively elementary to prove: you can get it from the convergence of the binomial to the normal, for instance, or by showing that the 4th central moment is proportional to $n^2$ for the binomial distribution. Then apply Borel-Cantelli, etc.
CommonCrawl
Oftentimes, you will have an image that you want to extract data from. It will most likely be something annoyingly minor, but will somehow turn into the most important piece to the puzzle that you NEED RIGHT NOW! Okay… maybe that's just me. Sometimes it's nice to be able to extract data from an image you found in a journal article to replicate the results. Here, I will go through the steps I took to convert an image into RGB format (using Python - naturally) and convert it into an array of data. From there it gets interesting. A lot of images I come across have been interpolated from relatively sparse data. You don't need all that data! It's a waste of space and you're better off trimming the bits you don't actually need - especially if you would like to do some spatial post-processing.

1. Import the image using the scipy.ndimage module.
2. Query the colour map with many numbers along the interval you've defined and store that in a KDTree.
3. Query the KDTree using the $m \times n \times 3$ ndimage RGB array. The result is an $m \times n$ data array.
4. Reduce the data by taking the first spatial derivative using numpy.diff and removing points where $|\nabla| > 0$.
5. Use sklearn.DBScan from the scikit-learn module to cluster the data and remove even more unnecessary data.

At the end I have a Python class that contains all these steps rolled into one nifty code chunk for you to use on your own projects, if you find this useful.

That looks nice, im is an $m \times n \times 3$ array - i.e. 3 sets of $m$ rows, $n$ columns, that correspond to the number of pixels, for each RGB channel. Now let's create the colour map and chuck everything into a KDTree for nearest-neighbour lookup. Seriously, KDTrees are awesome. srsly.

""" Create a matplotlib scalarmap object from colour stretch """

We scaled the colour map between 0.0 and 0.7, which corresponds to the extent in the image. Optionally, we could mask pixels that are more than some distance, $d$, from the colour map. E.g. data = np.ma.array(value_array[index], mask=(d > 70)). This would mask out the black text and white background in the image. I don't know a better way to map the RGB colour arrays to data than to query entries in a KDTree. If you do, please mention in the comments below.

Now that we have converted an image into an array we can reduce the data so that only the important bits remain. Firstly, we take the first spatial derivative and find where $\nabla = 0$. Here, mask is an $m \times n$ boolean array that contains True values where we have good data. But we can reduce it even further using DBScan. This finds spatial regions of high density and expands clusters from them. I think this is basically a higher order use of KDTrees, but let's humour the sklearn toolkit anyway.

""" find centroid of a cluster in xy space """

Above, we found the centroids for each cluster to substantially reduce the dataset from 16667 points down to 1221. Each colour represents a unique cluster in the image below. mask2 has significantly improved on mask by finding clusters in the data. Now that it's done, let's see if we can reconstruct the image we first started with. This is a good check to make sure we haven't removed too much of the original data. Overall the two look pretty similar! There are some bits that could be refined by adjusting some of the parameters, e.g. eps in DBScan and $\nabla \approx 0$. Also, adjusting the distance threshold in the original RGB $\rightarrow$ data conversion would remove artifacts caused by the coastline (i.e. bits that don't fit on the colour map).
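The docstring left in the post above suggests that the colour map was wrapped in a matplotlib ScalarMappable before being loaded into the KDTree. Below is a hedged reconstruction of that step; the colour-map name ("jet"), the sampling resolution and the pixel scaling are assumptions, not taken from the original post.

```python
import numpy as np
import matplotlib.cm as cm
import matplotlib.colors as colors
from scipy.spatial import cKDTree

# Sample the colour map on the stretch used in the post (0.0 to 0.7) and put
# the RGB triples into a KD-tree for nearest-neighbour lookup.
norm = colors.Normalize(vmin=0.0, vmax=0.7)
smap = cm.ScalarMappable(norm=norm, cmap="jet")   # "jet" is a guess at the map used
values = np.linspace(0.0, 0.7, 1000)
rgb = smap.to_rgba(values)[:, :3]                 # drop the alpha channel
tree = cKDTree(rgb)

# im is the (m, n, 3) RGB array loaded from the image (here assumed scaled to
# [0, 1]); querying the tree gives, for every pixel, the closest colour-map
# entry and hence a data value, plus the distance d used for masking.
# d, index = tree.query(im.reshape(-1, 3))
# value_array = values[index].reshape(im.shape[:2])
# data = np.ma.array(value_array, mask=(d.reshape(im.shape[:2]) > threshold))
```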
CommonCrawl
Notes for Bocconi Applied Math, main part, summarizing lecture notes and exercises. During the last fifteen years, the geometrical and topological methods of the theory of manifolds have assumed a central role in the most advanced areas of pure and applied mathematics as well as theoretical physics. The three volumes of "Modern Geometry - Methods and Applications" contain a concrete exposition of these methods together with their main applications in mathematics and physics. One of the aims of this work is to investigate some natural properties of Borel sets which are undecidable in $ZFC$. The authors' starting point is the following simple, though non-trivial result: consider $X \subset 2^\omega\times 2^\omega$, set $Y=\pi(X)$, where $\pi$ denotes the canonical projection of $2^\omega\times 2^\omega$ onto the first factor, and suppose that $(\star)$: "Any compact subset of $Y$ is the projection of some compact subset of $X$". 15(b). Thus, the assumption there that G is open is essential. We close this section with a circumspective remark. We did not use sequences in this section. Connectedness is one of the only properties of a metric space I know whose examination never uses the concept of a convergent sequence. 2.2, that a subset E of R is an interval if and only if, when a, b ∈ E and a < b, then c ∈ E whenever a < c < b? (3) If (X, d) is connected and f : X → R is a continuous function such that |f(x)| = 1 for all x in X, show that f must be constant. 9 (Urysohn's Lemma). If A and B are two disjoint closed subsets of X, then there is a continuous function f : X → R having the following properties: (a) 0 ≤ f(x) ≤ 1 for all x in X; (b) f(x) = 0 for all x in A; (c) f(x) = 1 for all x in B. Proof. Define f : X → R by f(x) = dist(x, A) / (dist(x, A) + dist(x, B)), which is well defined since the denominator never vanishes. It is easy to check that f has the desired properties. 10. If F is a closed subset of X and G is an open set containing F, then there is a continuous function f : X → R such that 0 ≤ f(x) ≤ 1 for all x in X, f(x) = 1 when x ∈ F, and f(x) = 0 when x ∉ G. While he was there, World War I began, and he was unable to return to France. His health deteriorated further, depression ensued, and he spent the rest of his life on the shores of Lac Léman in Switzerland. It was there that he received the Chevalier de la Légion d'Honneur, and in 1922 he was elected to the Académie des Sciences. He published significant works on number theory and functions. He died in 1932 at Chambéry near Geneva. Exercises 37 The details of this induction argument are left as Exercise 1. Social choice by Craven J.
CommonCrawl
We estimate the accretion rates of 235 Classical T Tauri star (CTTS) candidates in the Lagoon Nebula using $ugri$H$\alpha$ photometry from the VPHAS+ survey. Our sample consists of stars displaying H$\alpha$-excess, the intensity of which is used to derive accretion rates. For a subset of 87 stars, the intensity of the $u$-band excess is also used to estimate accretion rates. We find the mean variation in accretion rates measured using H$\alpha$ and $u$-band intensities to be $\sim$ 0.17 dex, agreeing with previous estimates (0.04-0.4 dex) but for a much larger sample. The spatial distribution of CTTS align with the location of protostars and molecular gas suggesting that they retain an imprint of the natal gas fragmentation process. Strong accretors are concentrated spatially, while weak accretors are more distributed. Our results do not support the sequential star forming processes suggested in the literature.
CommonCrawl
Abstract: The zero sets of harmonic polynomials play a crucial role in the study of the free boundary regularity problem for harmonic measure. In order to understand the fine structure of these free boundaries a detailed study of the singular points of these zero sets is required. In this paper we study how "degree $k$ points" sit inside zero sets of harmonic polynomials in $\mathbb R^n$ of degree $d$ (for all $n\geq 2$ and $1\leq k\leq d$) and inside sets that admit arbitrarily good local approximations by zero sets of harmonic polynomials. We obtain a general structure theorem for the latter type of sets, including sharp Hausdorff and Minkowski dimension estimates on the singular set of "degree $k$ points" ($k\geq 2$) without proving uniqueness of blowups or aid of PDE methods such as monotonicity formulas. In addition, we show that in the presence of a certain topological separation condition, the sharp dimension estimates improve and depend on the parity of $k$. An application is given to the two-phase free boundary regularity problem for harmonic measure below the continuous threshold introduced by Kenig and Toro.
CommonCrawl
Abstract: The chromoelectric and chromomagnetic fields, created by a static gluon-quark-antiquark system, are computed in the quenched approximation of lattice QCD, in a $24^3\times 48$ lattice at $\beta=6.2$. We study two geometries, one with a U shape and another with an L shape. The degenerate case of the two gluon glueball is also studied. This is relevant to understand the microscopic structure of hadrons, in particular of hybrids. This also contributes to understand confinement with flux tubes of the chromoelectric field, and to discriminate between the models of fundamental or adjoint tubes.
CommonCrawl
Abstract: This letter summarises the status of the global fit of the CKM parameters within the Standard Model performed by the CKMfitter group. Special attention is paid to the inputs for the CKM angles $\alpha$ and $\gamma$ and the status of $B_s\to\mu\mu$ and $B_d\to \mu\mu$ decays. We illustrate the current situation for other unitarity triangles. We also discuss the constraints on generic $\Delta F=2$ New Physics. All results have been obtained with the CKMfitter analysis package, featuring the frequentist statistical approach and using Rfit to handle theoretical uncertainties.
CommonCrawl
At this stage we know what the next answer will be (without working it out) because, as one digit is $0$, the product of the digits will be zero, and hence the answer will also be zero. Whenever a digit is zero, the next answer will be zero! Thus if we reach $144$ we stay there however many times we apply this rule. We say that $144$ is fixed by this rule. Now try $233$; what does this go to? What do you notice about $332$ and $233$? What happens if we start with $98$? Can you find some other numbers that go to $144$? There is another number that is fixed by this rule; it is $1$ (because the sum of the digits of $1$ is $1$, and the product of the digits is $1$ so, starting with $1$, the answer is $1\times 1=1$). Now here is something interesting. We only know of one other number (apart from $0$) that is fixed by this rule, and $1$, $144$ and this other number are the only numbers that are fixed by this rule; such numbers are sometimes called SP numbers. What is this other number? You can find it for yourself, but to help you I will tell you that it lies between $110$ and $140$. There are a few numbers that have the property that when we apply the rule repeatedly, we end up at $1$, $144$ or this other number (which by now you should have found). It seems that most numbers will eventually end up at $0$ when we apply the rule repeatedly, but again, no-one has yet proved this. Try some numbers for yourself and see if they end up at $0$. If you find something surprising, show your teacher because you may have discovered something that has never been noticed before! SP Numbers Continued, published in October 1998, is a follow-up (much harder) article on this topic.
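The rule used above multiplies the sum of the digits by the product of the digits. A short script makes it easy to experiment with the iteration and to search for the fixed points mentioned in the text (the starting numbers and the search range below are just examples):

```python
def sp_step(n):
    """Apply the rule described above: (sum of digits) * (product of digits)."""
    digits = [int(c) for c in str(n)]
    total, product = sum(digits), 1
    for d in digits:
        product *= d
    return total * product

def iterate(n, steps=20):
    """Apply the rule repeatedly, stopping early if a fixed point is reached."""
    seen = [n]
    for _ in range(steps):
        n = sp_step(n)
        seen.append(n)
        if n == seen[-2]:       # reached a number fixed by the rule
            break
    return seen

print(iterate(233))   # 233 -> 144 -> 144: 144 is fixed
print(iterate(98))    # 98 -> 1224 -> 144 -> 144
print([n for n in range(1, 1000) if sp_step(n) == n])   # every SP number below 1000
```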
CommonCrawl
A matrix whose number of rows does not equal the number of columns is called a rectangular matrix. A rectangular matrix is a type of matrix whose elements are arranged in a number of rows and a number of columns. The arrangement of elements in the matrix has the shape of a rectangle; hence it is called a rectangular matrix. A rectangular matrix can be expressed in general form as follows.

$$\begin{bmatrix} e_{11} & e_{12} & \cdots & e_{1n} \\ e_{21} & e_{22} & \cdots & e_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ e_{m1} & e_{m2} & \cdots & e_{mn} \end{bmatrix}$$

The elements of this matrix are arranged in $m$ rows and $n$ columns. Therefore, the order of the matrix is $m \times n$. A rectangular shape is possible only if the number of rows is different from the number of columns, that is, $m \ne n$. Therefore, there are two possibilities for forming a rectangular matrix: either the number of rows is greater than the number of columns ($m > n$), or the number of rows is less than the number of columns ($m < n$). The following two cases illustrate the formation of rectangular matrices in matrix algebra.

$A$ is a matrix whose elements are arranged in $3$ rows and $4$ columns. The order of the matrix $A$ is $3 \times 4$. The number of rows is not equal to the number of columns ($3 \ne 4$), and the number of rows is less than the number of columns ($3 < 4$). Therefore, the matrix $A$ is an example of a rectangular matrix.

$B$ is a matrix whose elements are arranged in $5$ rows and $2$ columns. The order of the matrix $B$ is $5 \times 2$. The number of rows is not equal to the number of columns ($5 \ne 2$), and the number of rows is greater than the number of columns ($5 > 2$). So, the matrix $B$ is also a rectangular matrix.
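As a quick check of the two cases, here is a tiny NumPy snippet; the entries are placeholders, only the shapes matter for the example:

```python
import numpy as np

A = np.arange(12).reshape(3, 4)   # 3 rows, 4 columns: m < n
B = np.arange(10).reshape(5, 2)   # 5 rows, 2 columns: m > n

for name, M in (("A", A), ("B", B)):
    m, n = M.shape
    print(name, M.shape, "rectangular" if m != n else "square")
# A (3, 4) rectangular
# B (5, 2) rectangular
```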
CommonCrawl
Did you know, that the string length of my profile name fits perfectly in the allowed space. Can you say conspiracy! LOL.😃 Also, adjoints increases your productivity and makes you drink plenty of water. Rarely to people get really high in math and then want a coffee immediately. I recommend it to mathematicians for the creative thinking benefit. The Earth wants more 🌲🌲🌲's. In everyday waking consciousness your IQ is say $x$ but when you trip on adjoints, your IQ goes to $\infty$. Did you know you can use Unicode Emoji's like 😜 on this site? great mathematicians see analogies between analogies.
CommonCrawl
Consider an $n \times n$ grid whose top-left square is $(1,1)$ and bottom-right square is $(n,n)$. Your task is to move from the top-left square to the bottom-right square. On each step you may move one square right or down. In addition, there are $m$ traps in the grid. You cannot move to a square which has a trap. What is the total number of possible paths? The first input line contains two integers $n$ and $m$: the size of the grid and the number of traps. After this, there are $m$ lines that describe the traps. Each such line contains two integers $y$ and $x$: the location of a trap. You can assume that there are no traps in the top-left and bottom-right square. Print the number of paths modulo $10^9+7$.
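This is the classic grid-path counting problem; a minimal dynamic-programming sketch (input parsing omitted, trap coordinates given 1-indexed as in the statement) could look like this:

```python
MOD = 10**9 + 7

def count_paths(n, traps):
    """traps is a set of (y, x) pairs, 1-indexed like the statement above."""
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    dp[1][1] = 1
    for y in range(1, n + 1):
        for x in range(1, n + 1):
            if (y, x) in traps:
                dp[y][x] = 0
            else:
                dp[y][x] = (dp[y][x] + dp[y - 1][x] + dp[y][x - 1]) % MOD
    return dp[n][n]

# 2x2 grid with no traps has the two paths right-down and down-right.
print(count_paths(2, set()))        # 2
print(count_paths(2, {(1, 2)}))     # 1
```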
CommonCrawl
The system already does this by itself but the algorithm is imperfect and some good oldies escape it and would stay dormant without user intervention. $\ldots$ the only questions I ever saw getting bumped are those that are in 'unanswered' while having an answer, ie, have answer(s) but none with positive score. The context of this comment suggests that things may be different now, but my non-systematic observation of MO leads me to think the comment is essentially valid now. So if I understand correctly, a question with no answer at all is consigned to oblivion if there are no answers or edits, while one with an answer that is not upvoted will be poked from time to time? Surely this is an anomaly? Is there a good reason for this? If there is not, I propose the following. Questions with no answer should be bumped by the system on the same basis as questions with at least one answer but no upvoted answer. Depending on how one reads your post, it can be considered as two or three (or perhaps even more) closely related questions. Could (and should) we change behavior of the community user such that it bumps questions which have no answers? What are the reasons why this feature request currently works the way it does? It seems that the main purpose of this feature is not to bring unanswered questions to the attention of users. It is about bringing attention to answers. I will quote from this post: How can we make the purpose of Community "bumping" more obvious? To be clear, the intent here is to resurface questions that someone has attempted to answer, but which haven't yet attracted any votes to either confirm the usefulness or decry the worthlessness of the answer(s) that've been posted. So changing this to bumping questions without answers would mean using this feature for a different purpose than originally intended. As with any automated system, it is probably difficult to define good criteria how to choose questions which should be bumped. (I.e., questions where additional attention might be useful.) I do not know whether all details how community user selects the questions are publicly known, the algorithm might be quite complicated. For example, only recently I learned that number of views is also taken into account. Is my understanding that community user does not bump questions with no answers correct? I wrote questions with no answers rather than unanswered questions since by the meaning commonly used here on Stack Exchange network, the term unanswered question includes question having an answer, which is neither upvoted nor accepted. After this modification, the answer to your question is: Yes. Some links to the basic info about community user can be found in the tag-wiki. In particular, this post contains many details about what community user does: Who is the Community user? Specifically to your question this meta.SO post seems to be relevant: Community user does not bump questions that never had an answer I will point out that it is tagged as status-bydesign. Are older questions with no answer doomed? So if I understand correctly, a question with no answer at all is consigned to oblivion if there are no answers or edits. Bump by community user is not the only way how a question can be bumped. Already in the answer you linked, apart from editing also bounty is mentioned as a possibility. And even if an old question is not bumped and does not get to the front page, it is not the only way it can get to attention of potential answerers. 
There are even some rather bizarre ways in which the question can get renewed attention. They probably happen rather rarely, but they do happen. For example, your question might be used in a review audit. (Although that gives only one additional view by the reviewer, unless they decide to edit or answer the question.) Or the question might be bumped because a spammer posts an answer advertising some product. (Spam occasionally appears here - some rough stats were mentioned here.) However, spam is probably more frequently posted as questions rather than answers. And there are certainly many ways somebody can get to the question from outside MO. For example, it might be mentioned on some blog or website. Questions from MO definitely appear on social media like Facebook and Twitter. (For example, this Google search returns several MO questions posted on Twitter. And I am pretty sure that many more MO-related tweets exist than the few results found by this simple search.) Or it can be discussed even in an email exchange or a discussion over lunch with a colleague. And probably one of the most frequent ways somebody gets to a question is by finding it using Google (or another search engine). Users with 25k+ reputation can see in the site analytics what the most frequent sources of traffic to MO are. But even without access to site analytics it is easy to see that MO questions typically rank high among the results of Google searches.
CommonCrawl
operators, so that the font can be used seamlessly in documents using both. As of v0.2, this is done automatically when you use \sffamily and \rmfamily. for instance, using it with the Kepler and Biolinum fonts (kpfonts and biolinum). then you may write your equations in the form $α+β$ instead of $\alpha+\beta$. The Current Maintainer of this work is J. A. Ouassou. This work consists of the file eulerpx.sty. v0.1: Initial eulerpx package created. the alphabet used for operators and numbers to match the environment. v0.2.1: Fixed a bug that prevented \infty from displaying correctly. opinion, the newpx brackets are much more aesthetic than the Euler ones. for other encodings than T1 has been removed. *all* environments, and only typeset operators and digits in sans/serif.
CommonCrawl
Password Depot is a powerful and very user-friendly password manager which helps to organize all of your passwords – but also, for instance, information from your credit cards or software licenses. The software provides security for your passwords – in three respects: It safely stores your passwords, guarantees you secure data use and helps you to have secure passwords. However, Password Depot does not only guarantee security: It also stands for convenient use, high customizability, marked flexibility in interaction with other devices and, last but not least, extreme functional versatility. Best possible encryption. In Password Depot, your information is encrypted not merely once but in fact twice, thanks to the algorithm AES or Rijndael 256. In the US, this algorithm is approved for state documents of utmost secrecy! Double protection. You can secure your databases doubly. To start with, you select a master password that has to be entered in order to be able to open the file. Additionally, you can choose to protect your data by means of a key file that must be uploaded to open the file. Protection against brute-force attacks. After every time the master password is entered incorrectly, the program is locked for three seconds. This renders attacks that rely on the sheer testing of possible passwords – so-called "brute-force attacks" – virtually impossible. Lock function. This function locks your program and thereby denies unauthorized access to your passwords. The locking conditions are determined by you yourself, for instance every time the program has not been used for a certain time. Backup copies. Password Depot generates backup copies of your databases. The backups may be stored optionally on FTP servers on the Internet (also via SFTP) or on external hard drives. You can individually define the time interval between the backup copies' creation. Protection from keylogging. All password fields within the program are internally protected against different types of the interception of keystrokes (Key Logging). This prevents your sensitive data entries from being spied out. Traceless Memory. Dealing with your passwords, Password Depot does not leave any traces in your PC's working memory. Therefore, even a hacker sitting directly at your computer and searching through its memory dumps cannot find any passwords. Clipboard protection: Password Depot automatically detects any active clipboard viewers and masks its changes to the clipboard; after performing auto-complete, all sensitive data is automatically cleared from the clipboard. Virtual keyboard. The ultimate protection against keylogging. With this tool you can enter your master password or other confidential information without even touching the keyboard. Password Depot does not simulate keystrokes, but uses an internal cache, so that they can neither be intercepted software- nor hardware-based. Fake mouse cursors. Typing on the program's virtual keyboard, you can also set the program to show multiple fake mouse cursors instead of your usual single cursor. This additionally renders it impossible to discern your keyboard activities. Uncrackable passwords. The integrated Password Generator creates virtually uncrackable passwords for you. Thus in future, you will not have to use passwords such as "sweetheart" anymore, a password that may be cracked within minutes, but e.g. "g\/:1bmVuz/z7ewß5T$x_sb}@<i". Even the latest PCs take millennia to crack this password! Verified password quality.
Let Password Depot check your passwords' quality and security! Intelligent algorithms will peruse your passwords and warn you against 'weak' passwords which you can subsequently replace with the help of the Password Generator. Password policies. You can define basic security requirements that must be met by all passwords which are added or modified. For instance, you can specify the passwords' minimum length and the characters contained therein. Security warnings. Password Depot contains a list of warnings which always keep an eye on your passwords' security. For instance, the program warns you in case you use the unsafe FTP protocol and in this case advises you to use SFTP instead. Protection against dictionary attacks. An important warning featured in Password Depot is the notification in case you are using unsafe passwords. These are passwords which are frequently used, therefore appear in hacker dictionaries and are easily crackable. Warning against password expiry. You can set Password Depot to warn you before your passwords expire, for instance before the expiry date of your credit card. This ensures that your password data always remains up-to-date and valid. Password Depot is very easy to use and spares you a lot of work. User-friendly interface. Password Depot's user interface is similar to that of Windows Explorer. This allows you to effectively navigate through your databases and to quickly find any password you happen to be searching for. Auto-completion. If you wish, Password Depot automatically fills in your password data into websites opened within the common browsers. This function runs via an internal setting on the one hand, and via so-called browser add-ons on the other hand. Automatic recognition. You can set the program to automatically recognize which password information corresponds to the website you have called up and to then pre-select this password entry for you – as well as, if desired, to finally automatically fill this information into the website. Top bar. The program's form can be reduced to a narrow bar whose position may be individually determined: whether freely movable or stuck to the screen edge (Application Desktop Toolbar). In this way, the software is always at your hand without disturbing you. Direct opening of websites. URLs belonging to password entries saved in Password Depot may be opened directly from within the program. This spares you the hassle of having to manually copy website addresses and then paste them into your browser. Usage via mouse click. Using your password information may be done super easily via simple clicks with your mouse cursor. By means of a single mouse click, you can copy data to the clipboard and can even drag it directly into the target field on the website. Hotkeys. Password Depot features keyboard shortcuts for often-used commands in Windows ("Hotkeys"). By means of these hotkeys, you can easily turn Password Depot's format into a top bar or call it into the foreground when minimized to the system tray. Unicode support. Password Depot supports Unicode, the international standard defining a digital code for every character. This allows you to use international characters such as "ä" or "ç" within your password information. Recycle bin. Password Depot features a recycle bin that stores deleted password data and enables their restoration. In this way, data you may have accidentally deleted, for instance, is not irrevocably lost.
You can configure Password Depot individually and in this way adapt it precisely to your needs. Configurable program options. Thanks to many program options, Password Depot may be individually configured to the slightest detail – not only in view of its external layout, but also regarding its internal functions such as the use of browsers or networks. Custom browsers. You can determine yourself the browsers you would like to use the program with. In this way, you are not bound to the common browsers such as Firefox or Internet Explorer but can also use e.g. Opera. Individual user modes. As a new user, you can work with only a few functions in the Beginner Mode, while as expert you can use all functionalities in the Expert Mode or can define your own Custom Mode. Personal favorites. The list of favorites contains the passwords you use most frequently. As you will likely want to have this often-used data always ready at hand, the list of favorites may be accessed directly via the top bar. Custom fields. You can extend the existing data input fields by any number of self-defined fields. This is possible both for a single password entry ("Custom Field") as well as for the entire passwords file ("Global Field"). Password icons. You can save icons for your password entries enabling you to easily find and place them. These icons are even available if you open your passwords file on a different PC as they are saved directly within the passwords file. Individual safety warnings. You yourself can determine the warnings you would like Password Depot to show and which not. Additionally, you may individually set whether the program should warn you in case your passwords expire and, if yes, how many days prior to the expiry. Password statistics. Clear statistics show at a glance how often you have used which password. In this way, you might also realize which entries you do not use at all and can therefore delete in order to keep your passwords file up-to-date. Password Depot is able to work together with many other applications - flexibly and without problems. Enterprise Server. Password Depot features a separate server model enabling several users to access the same passwords simultaneously. The access to the databases may run either via a local network or via the Internet. USB stick. You can copy both your databases and the program Password Depot itself onto a USB stick. In this way, you can carry the files and the software along wherever you go, always having them ready to use. Cloud devices. Password Depot supports web services, among them GoogleDrive, Microsoft OneDrive and Dropbox. In this way, Password Depot enables you to quickly and easily enter the Cloud! TAN support. Password Depot supports the input and management of TAN numbers. In this way, it facilitates the life of all of those users who rely on online banking, securely storing their sensitive banking data. URL placeholders. Entering URLs into Password Depot, you can replace any number of characters by placeholders, namely an asterisk (*). Using this symbol, you can thus match several URLs to a single password entry instead of having to enter one entry for each URL. Cards, identities, licenses. Password Depot protects and manages not only your passwords but also your information from credit cards, EC cards, software licenses and identities. Each information type offers a separate model, with e.g. the credit card window featuring a PIN field. File attachments. To your password entries, you may add file attachments containing e.g.
additional information. These attachments can be opened directly from within Password Depot and may additionally be saved on data storage media. Transfer passwords. You can both import password entries from other password managers into Password Depot as well as export entries from Password Depot. To do so the software offers you special wizards that facilitate importing and exporting password information. Synchronize databases. Password Depot supports you in synchronizing two different databases. This is relevant e.g. if you are using a single passwords file on two different PCs. This being said, the file synchronization works in both directions. Clean-up databases. This function discovers password entries that you have not used for a long time or have even already expired. Afterwards, the found entries can be directly deleted. This guarantees that your databases always remain up-to-date. Search for password entries. By means of this function, you can search any character string within your passwords file – no matter if within the passwords themselves or within e.g. their descriptions and URLs. To refine your results you are able to limit the search to specific areas. Encrypt external files. Password Depot permits you to encrypt external files and to then directly save them as individual entries within the software. In this way, Password Depot enables you to make confidential documents inaccessible to third parties. Self-extracting files. When encrypting external data by means of Password Depot, you can additionally generate encrypted self-extracting files. This method enables other people who do not have Password Depot to also decrypt the core files. Delete external files. With Password Depot you can delete external files, regardless of their format. In doing so, the software does not leave any traces on your hard disk which means that the files cannot be restored by any application however refined.
CommonCrawl
This variable has a subscript $x_0$. These variables also have subscripts: $y_0$, $y_1$. Math in this paragraph is broken. I'm aware that this workaround exists. It's still very confusing that in the first paragraph you don't have to escape the underscore but you do have to escape it in the second paragraph. I suspect this will catch a lot of people.
CommonCrawl
How can I plot functions of the kind $f:\Bbb R\to\Bbb C$ in 3D properly? What can I do to plot functions $f:\mathbb R\to\mathbb C$ in 3D? For example, how can I plot $f(x)=i^x$ in the most direct (and simplest) way? Thank you. But my problem now is that the axes are not scaled properly; the option scale=1 doesn't work. I found a partial solution using the option aspect_ratio=[1,1,1], but now the graph is too tiny (at least when I draw it in sagemathcell). To fix that I tried the option zoom=2 (or the mouse wheel), but it does not work because the graph is not centered on the screen, check this. When zooming a non-centered plot I can't see half of it because it is out of the screen. At this moment I don't know a real fix for this. @kcrisman I updated all the info that I have at the moment, but there are other problems that I don't know how to fix yet. I don't find the zoom to be a problem, because I always zoom in and out anyway, and rotate it etc. - on my laptop using a gesture; I think with a mouse with a wheel you can use that, and without these there is still right-clicking and choosing "zoom". By the way, I don't think that keyword will work without the jmol viewer. @kcrisman indeed it is a problem, the position of the figure is not optimal at all: you zoom it (with the mouse or the option) and the figure is not centered on the screen, so you can't get a good view of it. Rotating the figure is nice, but the figure not being centered is still a very bad thing. Ah, I see what you mean - I am able to center it by rotating but it doesn't start there. I've seen that for others; I'm not sure what causes it, as it isn't easily reproducible in every graph.
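One direct route in Sage is to plot the curve $(x,\operatorname{Re}f(x),\operatorname{Im}f(x))$. The sketch below is an assumption-laden illustration rather than a tested answer to the exact setup above: it relies on the standard Sage functions parametric_plot3d, real_part and imag_part and on the aspect_ratio option already mentioned in the question.

```python
# Sage sketch: f(x) = i^x = e^(i*pi*x/2), drawn as the 3D curve (x, Re f(x), Im f(x)).
x = var('x')
f = exp(I * pi * x / 2)                      # i^x
curve = parametric_plot3d(
    (x, real_part(f), imag_part(f)),         # map R -> R^3
    (x, 0, 8),
    thickness=2,
    aspect_ratio=[1, 1, 1],                  # keep the three axes on the same scale
)
curve.show(frame=True)
```

Plotting the real and imaginary parts as ordinary coordinates sidesteps the question of how to scale complex values themselves.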
CommonCrawl
For more details, refer to this article. Reference: [Multi-Player Bandits Revisited, Lilian Besson and Emilie Kaufmann, 2017], presented at the International Conference on Algorithmic Learning Theory 2018. PDF : BK__ALT_2018.pdf | HAL notice : BK__ALT_2018 | BibTeX : BK__ALT_2018.bib | Source code and documentation. There is another point of view: instead of comparing different single-player policies on the same problem, we can make them play against each other, in a multi-player setting. The basic difference is about collisions: at each time $t$, if two or more users choose to sense the same channel, there is a collision. Collisions can be handled in different ways, from the base station's point of view and from each player's point of view. noCollision is a limited model where all players can sample an arm with collision. It corresponds to the single-player simulation: each player is a policy, compared without collision. This is for testing only, not so interesting. onlyUniqUserGetsReward is a simple collision model where only the players alone on one arm sample it and receive the reward. This is the default collision model in the literature, for instance cf. [Shamir et al., 2015] (collision model 1) or cf. [Liu & Zhao, 2009]. Our article also focusses on this model. rewardIsSharedUniformly is similar: the players alone on one arm sample it and receive the reward, and in case of more than one player on one arm, only one player (uniform choice, chosen by the base station) can sample it and receive the reward. closerUserGetsReward is similar but uses another approach to choose who can emit. Instead of randomly choosing the lucky player, it uses a given (or random) vector indicating the distance of each player to the base station (it can also indicate the quality of the communication), and when two (or more) players are colliding, only the one who is closer to the base station can transmit. It is the most physically plausible. Some naive policies are implemented in the PoliciesMultiPlayers/ folder. So far, there are the Selfish, CentralizedFixed, CentralizedCycling, OracleNotFair and OracleFair multi-player policies. The first one I implemented is the "Musical Chair" policy, from [Shamir et al., 2015], in MusicalChair. Then I implemented the "MEGA" policy from [Avner & Mannor, 2014], in MEGA. But it has too many parameters, and the question is how to choose them. The rhoRand policy and its variants are from [Distributed Algorithms for Learning…, Anandkumar et al., 2010]. Our algorithms introduced in [Multi-Player Bandits Revisited, Lilian Besson and Emilie Kaufmann, 2017] are in RandTopM: RandTopM and MCTopM. We also studied the Selfish policy in depth, without being able to prove that it is as efficient as rhoRand, RandTopM and MCTopM. A simple Python file, configuration_multiplayers.py, is used to import the arm classes and the policy classes, and to define the problems and the experiments. See the explanations given for the single-player case. The multi-player policies are added by giving a list of their children (e.g., Selfish(*args).children), which are instances of the proxy class ChildPointer. Each child's method calls are just passed back to the mother class (the multi-player policy, e.g., Selfish), which can then handle the calls as it wants (centralized or not). Figure 1: Regret, $M=6$ players, $K=9$ arms, horizon $T=5000$, against $500$ problems $\mu$ uniformly sampled in $[0,1]^K$. rhoRand (top blue curve) is outperformed by the other algorithms (and the gain increases with $M$). MCTopM (bottom yellow) outperforms all the other algorithms in most cases. Figure 2: Regret (in loglog scale), for $M=6$ players and $K=9$ arms, horizon $T=5000$, for $1000$ repetitions on the problem $\mu=[0.1,\ldots,0.9]$. RandTopM (yellow curve) outperforms Selfish (green), and both clearly outperform rhoRand. The regret of MCTopM is logarithmic, empirically with the same slope as the lower bound. The $x$ axis on the regret histograms has a different scale for each algorithm. Figure 3: Regret (in logy scale) for $M=3$ players and $K=9$ arms, horizon $T=123456$, for $100$ repetitions on the problem $\mu=[0.1,\ldots,0.9]$. With the parameters from their respective articles, MEGA and MusicalChair fail completely, even when the horizon is known for MusicalChair. These illustrations come from my article, [Multi-Player Bandits Revisited, Lilian Besson and Emilie Kaufmann, 2017], presented at the International Conference on Algorithmic Learning Theory 2018. For a multi-player policy, being fair means that on every simulation with $M$ players, each player accesses each of the $M$ best arms (about) the same amount of time. It is important to highlight that this has to be verified on each run of the MP policy; having this property on average is NOT enough. For instance, the oracle policy OracleNotFair assigns each of the $M$ players to one of the $M$ best arms, orthogonally, but once they are assigned they always pull this arm. It's unfair because one player will be lucky and assigned to the best arm, while the others are unlucky. The centralized regret is optimal (null, on average), but it is not fair. The other oracle policy OracleFair assigns an offset to each of the $M$ players corresponding to one of the $M$ best arms, orthogonally, and once they are assigned they cycle among the best $M$ arms. It's fair because every player will pull the $M$ best arms an equal number of times. And the centralized regret is also optimal (null, on average). Usually, the Selfish policy is not fair: as each player is selfish and tries to maximize her personal reward, there is no reason for them to share the time on the $M$ best arms. Conversely, the MusicalChair policy is not fair either, and cannot be: once each player has attained the last step, i.e., each player sticks to her own arm, orthogonally to the others, they are not sharing the $M$ best arms. The MEGA policy is designed to be fair: when players collide, they all have the same chance of leaving or staying on the arm, and they all sample from the $M$ best arms equally. The rhoRand policy is not designed to be fair for every run, but it is fair on average. Similarly for our algorithms RandTopM and MCTopM, defined in RandTopM.
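To make the configuration step described above concrete, here is a hypothetical sketch of what a minimal configuration_multiplayers.py could look like. The module names, class names and constructor signatures are assumptions inferred from the text (arm classes, policy classes, and the .children list of ChildPointer proxies); this is not a verbatim copy of the project's file.

```python
# Hypothetical configuration_multiplayers.py (names and signatures are assumptions).
from Arms import Bernoulli
from Policies import UCB
from PoliciesMultiPlayers import Selfish

M, K, HORIZON = 6, 9, 5000

configuration = {
    "horizon": HORIZON,
    "repetitions": 100,
    # One problem: K Bernoulli arms with means 0.1, ..., 0.9
    "environment": [{
        "arm_type": Bernoulli,
        "params": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
    }],
    # The multi-player policy is created once; each simulated player receives one
    # of its children (a ChildPointer), whose calls are passed back to the mother.
    "players": Selfish(M, UCB, K).children,
}
```

The important line is the last one: because every player only sees a ChildPointer, centralized and decentralized multi-player policies can be plugged in through the same interface.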
CommonCrawl
How does $E=mc^2$ put an upper limit on the velocity of a body? I have read some articles on the speed of light and they just tell me that it is the maximum velocity that can be acquired by any particle. How is it so? What is violated if $v>c$? $E_0 = m_0 c^2$ is only the equation for the "rest energy" of a particle/object. The total energy of a moving particle is $E=\gamma m_0 c^2$ with $\gamma = 1/\sqrt{1-v^2/c^2}$, where $v$ is the relative velocity of the particle. An "intuitive" answer to the question can be seen by noticing that the particle's energy approaches $\infty$ as its velocity approaches the speed of light. Thus, for the particle to move faster than the speed of light would require it to attain infinite kinetic energy, which can't happen. To complete bclifford's answer, our current equation for the energy-momentum of a particle is $E^2=p^2c^2+m^2c^4$, which is the final expression for $E=\gamma mc^2$, where $\gamma$ is the Lorentz factor obtained from the Lorentz transformations. Hence, for a particle like the photon ($m=0$) this equation yields $E=pc$, which says that the photon has momentum. For particles at rest, $p=0$, which gives the rest energy $mc^2$ of the massive object. You can see that as the speed approaches the speed of light the energy required according to special relativity shoots up compared to what nonrelativistic mechanics would say. It requires an infinite amount of energy for any massive body to reach the speed of light. It doesn't. The equation $E = mc^2$ and the fact that no physical object can be accelerated past the speed of light are two entirely separate conclusions of special relativity. The reason $c$ is an upper bound on the speed of an object has to do with the Lorentz transformations. These are the mathematical expressions that relate positions and times as measured by one observer to positions and times as measured by another observer. Now, suppose an object starts at rest with respect to observer A, and then accelerates until it is at rest with respect to observer B, which is moving at a speed $v$ relative to A. There has to be some Lorentz transformation you can use to convert between A's measurements and B's measurements, or equivalently, between the reference frame of the object pre-acceleration and its reference frame post-acceleration. But there is no Lorentz transformation that will take you from a reference frame in which an object is going slower than light to a reference frame where the same object is going faster than light or at the speed of light.
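A quick numerical illustration of the divergence used in the first answer (the 1 kg mass is an arbitrary choice):

```python
# Total energy E = gamma * m * c^2 blows up as v -> c.
from math import sqrt

c = 299_792_458.0          # speed of light, m/s
m = 1.0                    # a 1 kg test mass (illustrative)

for frac in (0.5, 0.9, 0.99, 0.999, 0.999999):
    v = frac * c
    gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)
    print(f"v = {frac} c   gamma = {gamma:10.3f}   E = {gamma * m * c**2:.3e} J")
# gamma, and hence E, grows without bound, so no finite amount of energy
# brings a massive body to v = c.
```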
CommonCrawl
Inspired by the four colours puzzle. $n$ is not $2$ or $3$. It's trivially possible for $n=1$. It's impossible for $n=2$ and $n=3$, because four distinct colours are needed in order to colour any $2\times2$ square in such a way that no two cells of the same colour meet at an edge or vertex. Let the first row of the $n\times n$ block contain one cell of each colour. Let the second row contain the same colours in the same order but cycled round by two places (e.g. ABCDEF -> CDEFAB). Keep on filling in each row in this way until you reach the bottom of the $n\times n$ block.
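The construction amounts to giving cell $(i,j)$ the colour $(j+2i) \bmod n$. A small brute-force sketch (checking only adjacency inside a single $n\times n$ block, which is what the description above covers) confirms that no two cells of the same colour meet at an edge or a vertex once $n \geq 4$:

```python
# Verify the cyclic colouring (j + 2*i) % n inside one n x n block for several n >= 4.
def block_is_valid(n):
    colour = lambda i, j: (j + 2 * i) % n
    for i in range(n):
        for j in range(n):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di, dj) == (0, 0):
                        continue
                    i2, j2 = i + di, j + dj
                    if 0 <= i2 < n and 0 <= j2 < n and colour(i, j) == colour(i2, j2):
                        return False
    return True

print(all(block_is_valid(n) for n in range(4, 12)))   # expected: True
```

The check works because two neighbouring cells differ in colour by $\pm1$, $\pm2$ or $\pm3$ modulo $n$, and none of these is $0$ when $n \geq 4$.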
CommonCrawl
How could you design an efficient algorithm that uses the least amount of memory space and outputs, in ascending order, the list of integers in the range 0..255 that are not in a randomly generated list of 100 integers? The aim is to output the number of such integers, and determine if the randomly generated list of 100 integers included any repeats. The naive approach would be to compare each number from 0 to 255 against the list of 100 integers. This would not be very efficient. What would be a more efficient approach to solve this? Let $n$ be the number of random elements (in your case $n = 100$), and let $k$ be the number of possible values (in your case $k = 256$). Draconis gave a solution in time and space $O(k)$. An alternative based on sorting: sort the input elements $A[1],\ldots,A[n]$ and let $i \gets 1$. Then, for each $j$ from $0$ to $k-1$: if $i > n$ or $A[i] \neq j$, output $j$; while $i \leq n$ and $A[i] = j$, let $i \gets i + 1$. This is strictly better than the other solution if $n = o(k/\log k)$. Facetious answer: all the sizes involved are constant, so a straightforward algorithm for this will run in $O(1)$. Serious answer: with such small numbers involved, getting too elaborate may just slow you down. Simplicity is your friend. This runs in $O(n+k)$ time and $O(k)$ space (where $n$ is the number of elements in the array and $k$ is the number of possible values). I don't believe it's possible to do better.
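Both approaches are short in practice. Here is a sketch of the two (the variable names and the use of Python are mine, not the original answers'):

```python
import random

N_VALUES = 256                                             # k: the values 0..255
data = [random.randrange(N_VALUES) for _ in range(100)]    # n = 100 random integers

# O(n + k) time, O(k) space: mark what has been seen, then scan 0..255 in order.
seen = [False] * N_VALUES
for v in data:
    seen[v] = True
missing = [j for j in range(N_VALUES) if not seen[j]]

# O(n log n) alternative: sort, then walk the sorted list and 0..255 in lockstep.
missing_by_sort, i, a = [], 0, sorted(data)
for j in range(N_VALUES):
    if i == len(a) or a[i] != j:
        missing_by_sort.append(j)
    while i < len(a) and a[i] == j:
        i += 1

assert missing == missing_by_sort
print(len(missing), "missing values;",
      "repeats present" if len(set(data)) < len(data) else "no repeats")
```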
CommonCrawl
Let's say there's a ball and a (physically ideal - no friction etc.) robotic arm situated in otherwise empty space. The arm takes the ball, moves it around in a circle, and then returns it exactly to where it started. The arm also returns to its starting position. In this case, there is no displacement overall. The ball and the robot arm are in the exact same positions as when they began. Thus, $W = F\cdot 0 \cdot \cos\theta = 0$ So no energy was required to move the ball in a circle. However, this disagrees with my intuition, because if I were to make a robot arm that would do this, I feel like I would need to give it an energy source (for example, a battery), and that by the time the robot was done, I would have lost energy from that battery. Many objects move in circles without needing any energy. This starts with stuff like geostationary satellites, continues with moons that orbit planets, goes on to planets orbiting stars, stars orbiting other stars or black holes, and stars orbiting their galaxies' center of mass. All of these perform thousands, millions, and billions of rotations without any need for energy. Of course, these objects all need to have sufficient kinetic energy to be able to orbit in the first place. An object that's stationary with respect to earth, will just fall down. To get it to orbit earth, it must be accelerated first, and that needs energy. And if you want to get it stationary again after it has completed an orbit you have to decelerated it again. However, while the object is orbiting on a circle, its kinetic energy remains the same all the time, no energy needs to be put in or removed. However, that's not the entire story. Because, when an electrically charged object goes in circles, it emits electromagnetic radiation. A circling charge induces a magnetic field. That is what happens in any electric motor, including the one that spins your computer's fan right now. The reverse process works as well, a changing magnetic field accelerates charges round in circles. That's the working principle of any electrical generator, including your bike's dynamo. You can move stuff around in circles without needing energy, as long as the object is not electrically charged, and losses to gravitational waves are negligible. Which is pretty much always the case. And, at least for the electro-magnetic effects, you personally rely on them every single day. First, you are not equating the work done correctly. This is a good physics lesson. Please understand your equations before you use them. Blindly plugging in numbers will not work out. The equation you give is only true for motion in one dimension and with a constant force. Plugging in $0$ for displacement is not correct here. In general you need to look at infinitesimal displacements $\text d\mathbf x$ and calculate the work $\text d W=\mathbf F\cdot\text d\mathbf x$, then integrate (add up) the total work. Now, I am assuming the ball starts and stops at rest. Therefore, the arm does work to increase the ball's speed, and then it does the same amount of negative work to bring it to rest. So the net work is $0$, but it is because the total change in kinetic energy is $0$ (since $W=\Delta K$), not because the displacement is $0$ around the circle. Now, this is not the same thing as the robot using something like a battery. The robot (neglecting friction) has to apply forces to change the speed, and this requires power from the power supply. Just imagine yourself doing the action of the robot. 
You will need to exert effort to get the ball (and yourself) moving, and you will need to exert effort to get the ball (and yourself) to stop rotating. Does the movement require energy? If you are considering cases where the motion includes some components downward in relation to gravity and some components upward, not every robot will be designed to recover the energy gained by moving downward and use it to move the object back upward, so some energy will be used lifting the object on the upward part of the path. If we can neglect external gravitational or electromagnetic fields then we only need to consider the kinetic energy of the ball. This is zero at the beginning of the motion and zero at the end, so the net change of energy in the ball is zero. If we assume an ideal robot arm (no friction, perfect conductors, no air resistance etc.) then the energy that the robot arm puts into the ball to accelerate it at the start of the motion can be 100% recovered when the ball decelerates at the end of motion. So the net loss of energy from the robot arm is also zero. In practice the robot arm will lose energy due to friction, resistive heating, air resistance etc. Recover back all potential (lifting up the load) and kinetic (accelerating the load from zero initial velocity so it can be moved) energy without loss. A simple electric engine can do this, but obviously not without loss. Move the load without friction (well, you have stated in the question that your robot can do this, but even spacecraft do hit atoms on their way through the vacuum). The robot will need energy in general, especially if the circle stands vertically like a Ferris wheel. Only an ideal robot would be able to recover all the energy back at the end of the loop. A great example of such a machine is the Buzz Aldrin cycler. It is a "space bus" that travels between Earth and Mars essentially for free because it travels in a closed loop. In space, it is relatively easy to satisfy the two conditions above in a significant degree. If the robot arm is extended holding the ball and rotated horizontally, then the energy required would be calculated using torque, using the distance of the centre of mass and the total mass being moved by the motor. Therefore, if we say the arm takes 1 second to complete the circle, the power required is 10 x 2 x pi x 1 = 62.83 Watts (= joules/sec). A typical stepper motor for a robot arm with a reach about 1m would draw 12V, therefore would have to be rated at 6A which in a perfect system would be capable of delivering 72 Watts. The infinitesimal work on an object is $Fd\cosθ$. If any of those quantities aren't constant, then we have to take the integral over some path. However, all paths will the same starting and ending conditions will yield the same answer. If the ball ends up in the same condition that its started in, then it hasn't had any work done on it. So this is not the central issue. What is the central issue is that this is the formula for doing work on an object. Just because the robot arm has done zero work on the ball, that does not mean that the robot arm has expended no energy. It just means that none of the energy expended by the robot arm has gone towards permanently increasing the kinetic energy of the ball. The robot arm could have expended energy otherwise, such as overcoming friction in its internal mechanisms. 
If the robot arm accelerated to move the ball, and then decelerated to stop the ball, then it took energy to accelerate, and then the energy went somewhere when the arm decelerated. If the deceleration happened through friction, then the energy dissipated into heat. But the robot arm could have regenerative braking, in which case some of the energy went back into the battery. In any real world system, there will be some loss of energy to heat. No motor and no regenerative braking system operates with perfect efficiency. But in an ideal system with no friction or other inefficiency, the robot arm could indeed move the ball and end up with the same amount of energy in its battery, and thus this would not use up any energy. However, we would still have to have some energy to start with to power the system, even if that energy isn't used up. So if the direction of movement changes, the velocity (a vector) changes even when the speed does not. And if the velocity changes, there MUST be a force acting upon the object. Maintaining that force, however, costs energy only if the force does work, as the answers above explain.
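For the stepper-motor estimate quoted earlier, the arithmetic is just power = torque × angular speed. The 10 N·m torque figure is taken from that answer and is itself an assumption about the load and the arm length:

```python
from math import pi

torque = 10.0            # N·m, the figure assumed in the answer above
omega = 2 * pi * 1.0     # rad/s: one full revolution per second
power = torque * omega
print(f"power ≈ {power:.2f} W")   # ≈ 62.83 W, within the 12 V × 6 A = 72 W motor budget
```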
CommonCrawl
Abstract: We analyze the structure of quark and lepton mass matrices under the hypothesis that they are determined from a minimum principle applied to a generic potential invariant under the $\left[SU(3)\right]^5\otimes \mathcal O(3)$ flavor symmetry, acting on Standard Model fermions and right-handed neutrinos. Unlike the quark case, we show that hierarchical masses for charged leptons are naturally accompanied by degenerate Majorana neutrinos with one mixing angle close to maximal, a second potentially large, a third one necessarily small, and one maximal relative Majorana phase. Adding small perturbations the predicted structure for the neutrino mass matrix is in excellent agreement with present observations and could be tested in the near future via neutrino-less double beta decay and cosmological measurements. The generalization of these results to arbitrary sew-saw models is also discussed.
CommonCrawl
Does this solve the problem that Steve was trying to address? Using Steve's example as a guide, can you construct a quadratic polynomial which passes through the three points $(1,2), (2, 4), (4, -1)$? Is this the only such quadratic polynomial? Can you construct a cubic polynomial which passes through the four points $(1,2), (2, 4), (3, 7), (4, -1)$? Is this the only such cubic polynomial? Can you write down an expression for a line passing through the two points $(2, 7)$ and $(8,-6)$ using this method? Can you always fit a quadratic polynomial through three points $(x_1, y_1), (x_2, y_2), (x_3, y_3)$? Can you always fit a quartic polynomial through five points $(x_i, y_i)$ ($i=1\dots 5$) where exactly two of the $x_i$ are zero? How many different polynomials can you construct which would pass through the points $(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)$? Explore sets of points through which it is not possible to fit a polynomial. Extension: For various numbers of points and degrees of polynomial you might wish to consider when the fitting is unique, when it is possible with multiple polynomials and when it is impossible.
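One concrete way into the first question is to solve for the coefficients directly. The sketch below (my own illustration, using the three points given above) sets up and solves the $3\times3$ linear system; distinct $x$-values guarantee exactly one solution.

```python
import numpy as np

# Fit y = a*x^2 + b*x + c through (1, 2), (2, 4), (4, -1).
xs = np.array([1.0, 2.0, 4.0])
ys = np.array([2.0, 4.0, -1.0])
V = np.vander(xs, 3)               # rows are [x^2, x, 1]
a, b, c = np.linalg.solve(V, ys)
print(a, b, c)                     # -> -1.5  6.5  -3.0, the unique quadratic

# The cubic through (1,2), (2,4), (3,7), (4,-1) works the same way with np.vander(xs, 4).
```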
CommonCrawl
Speakers: Mauricio Gutiérrez (Tufts University, Medford, MA, USA), Jim Howie (Heriot-Watt University, Edinburgh, Scotland), Olga Macedońska (Silesian University of Technology, Gliwice, Poland), Conchita Martínez (Universidad de Zaragoza, Spain), Tim Riley (Bristol University, England) . Automorphisms of a graph product of abelian groups. Finitely presentable residually free groups. A question of R.Burns on group laws. Virtually soluble groups of type $FP_\infty$.
CommonCrawl
Given $N$ assets, the Markowitz mean-variance model requires expected returns, expected variances and a $N \times N$ covariance matrix. The joint distribution is fully defined by these measures. However I often read that assets are required to be normally distributed for consideration in the mean-variance model. While I understand that a normal joint distribution is fully defined by the statistics described above, I can't really see why normality is required. Can't we simply assume that the distribution is fully described by $\mu$, $\sigma^2$ and $\Sigma$, and not necessarily imply normality? That is, an obvious drawback is not considering higher moments which influence assets, such as skewness and kurtosis, but why is normality an assumption? it doesn't require normality. What it requires is that the investor's decisions are determined by mean and variance. A normal distribution is determined by mean and variance, so if you assume joint normality then there is no point in the investor being interested in anything else. So while the portfolio covariance matrix can always be computed, to the extent that underlying assets have returns which are not normal the optimization is likely to result in spuriously optimal weights. Not the answer you're looking for? Browse other questions tagged mean-variance normal-distribution markowitz covariance-matrix or ask your own question. Why does my posterior mean differs from Idzorek's results? Why does Bloomberg's HRH test the simple returns for normality?
CommonCrawl
William Goldman used projective invariants in order to classify a triangle with three adjacent ones . Choi in " Geometric structures on low-dimensional manifolds " wants to generalize Goldman's results to 3-dimensional manifolds , so he also used the projective invariants so that he could classify a tetrahedron with four adjacent ones. He also introduced some concepts such as " vertex diagram " and " geometric vertex diagram ". In theorem 3 of the above article , he wants to reduce the problem of the existence of real projective structure on a topologically triangulated 3-manifold to one on projective spheres. In his proof of theorem 3 in his article , he said that : Given a tehtrahedron and four adjacent ones in the triangulation of M , we can identify them with ones in the projective space as given by the invariants.such an indentification may not be unique but is unique up to isotopies preserving the triangulatiosn of M. this gives an atlas of charts from M removed with one-skeletons to RP3.since one can send a quintuple of points in general position in RP3 to an arbitrary quintuple by a unique projective map, we see that once we determine the image of the 5-tuples of tetrahedrons , their projective transition maps are determined. if we only consider the interiors of the tetrahedra , we obtain a projective structure on M removed with 1-skeletons.one needs to verify that around the edges , the identifications give us trivial holonomy elemenst . Why one has to show that around the edges , the identifications give us trivial holonomy elements??? There are two kinds of holonomy, which are well contrasted with each other in the opening paragraphs of the link given. The first kind is exemplified by the holonomy of a Riemannian metric: parallel transport of tangent vectors around a closed loop. This kind of holonomy does not need to be locally trivial, in fact the real usefulness of this holonomy is that it can be used to quantify infinitesmal invariants of Riemannian metrics such as the curvature, by considering the holonomy around very tiny closed loops. The other kind is exemplified by the holonomy of a flat connection or a projective structure. In contexts like that the holonomy IS required to be locally trivial, meaning that if the closed loop is contained in an open disc then whatever object is being parallel transported (not usually just a tangent vector, perhaps some more complicated object like some kind of transformation such as a projective map), the parallel transport around the loop is the identity. It follows that holonomy around any homotopically trivial closed loop is the identity. It follows that there is a "holonomy homomorphism" defined on the fundamental group. Sometimes the term monodromy is used in this context instead of holonomy. The reader has to understand the ambiguities of the terminology and which meaning is intended by the writer. In the context of Choi's paper, a projective structure is defined, at the beginning of the article, by local coordinate charts with locally trivial holonomy. He builds up the projective structure first on the complement of the one-skeleton. Then by proving that the holonomy is trivial around each edge, he extends the projective structure to the complement of the zero-skeleton. Then by proving that holonomy is trivial around each vertex, he extends the projective structure to the whole manifold. Singular, holonomy-free connections on Riemannian surfaces? Which criteria guarantee an orthogonal circuit in $\mathbb R^3$ to be rigid?
CommonCrawl
Nature published a comprehensive obituary reviewing the life and work of Vladimir Voevodsky. Vladimir Voevodsky comes from a scientific household: his mother was a chemist, his father an experimental physicist. He began to study Mathematics at the Lomonossov Moscow State University, but, although his academic achievements were later accredited with a Bachelor of Science (1989) he actually gave up formal study. Instead, Voevodsky worked in algebraic geometry on his own. To learn, he sought direct contact with scientists, first of all with Yuri Shabat and later with Mikhail Kapranov (now at Yale University). Kapranov helped Voevodsky later to become a Ph.D. student at Harvard University in the USA where he wrote a doctoral thesis on "Homology of schemes and covariant motif" (Ph.D. 1992). His thesis advisor was David Kazhdan. Voevodsky did research at the Institute for Advanced Study in Princeton (1992-93). He was a junior fellow of the Harvard Society of Fellows at Harvard University (1993- 96) and then associate professor at Northwestern University (1996-99). At the same time he was guest at Harvard (1996-97 and again 2006-08) and at the Max Planck Institute for Mathematics in Bonn (1996-97). He returned to the Institute for Advanced Study (1998-2001), where he has worked as a professor since 2002. Voevodsky has been a member of the European Academy of Sciences since 2003, and he holds an honorary degree from the Wuhan University, China (2004). Vladimir Voevodsky has worked in many areas of mathematics. His earlier work was related to the ideas introduced by Alexander Grothendieck in his famous manuscript "Esquisse d'un Programme", in particular to "Dessin d'enfant", anabelian algebraic geometry and $\infty$-groupoids as models for homotopy types. From 1990 to 2009 most of Voevodsky's work was related to the development of motivic homotopy theory. This development was in part guided and motivated by the conjectures of Beilinson and Lichtenbaum addressing the properties of hypothetical (at that time) "motivic cohomology". In 1995 Voevodsky found a proof of Milnor's Conjecture – a particular case of Beilinson-Lichtenbaum Conjectures on motivic cohomology with finite coefficients which earned him a Fields Medal in 2002. It took him about 12 years, from 1997 to 2009, to work out the details of the proof of the general Beilinson-Lichtenbaum Conjectures for finite coefficients which was published in Annals of Mathematics in 2011. Since about 2002 Voevodsky has also been actively thinking about the problem of computer proof verification in pure mathematics. The first results in the form of the theory of univalent fibrations and its connection with type theory appeared in 2005 but it was only in the fall of 2009 that it became clear that the existing system of Coq is largely adequate for formalization of mathematics based on the univalent ideas. The Univalent Foundations program was formally announced by Voevodsky in the spring of 2010. Since then it has become the main focus of his mathematical work.
CommonCrawl
Is this correct? (I'm surprised it doesn't depend on $\alpha$.) Where can I find more exercises like this one?
CommonCrawl
Now, since we just started programming as a class, our TA wants us to use a heuristic approach to solve the problem - e.g. swap assignments until you reach a local maximum etc. I thought of using the Hungarian algorithm to guarantee an optimal matching in (relatively) feasible time. For this, I constructed a graph with all $\alpha$ having nodes to all $\beta$ and initializing the edge of two same elements to $\infty$. However, as it turns out, the implementation I wrote disregards the symmetric aspect of this matrix - in other words my algorithm does not guarantee that $\alpha$ is also it's the partner of the partner of $\alpha$. I suspect that reason for this is that this is not a bipartite graph anymore. Even though there is a difference between an $\alpha$ on the left and a $\beta$ on the right, it's not a "real" bipartite graph. Is this correct? According to the Math stack exchange I should use the more general blossom algorithm in this case, but somehow I have a feeling that I could make better use of the properties of my matrix. Is there a better algorithm for this case? Your problem is exactly the same as maximum weight perfect matching, solved by the blossom algorithm. Although $\delta$ is not given as symmetric, you can reduce your problem to the symmetric case by replacing $\delta$ with $$ \delta'(\alpha,\beta) = \max(\delta(\alpha,\beta),\delta(\beta,\alpha)). $$ I'll let you figure out how to translate a solution which is optimal with respect to $\delta'$ to one which is optimal with respect to $\delta$.
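As a concrete sketch of the reduction (the toy weights are made up, and networkx's blossom-based matcher is used purely for illustration):

```python
import networkx as nx

# Symmetrised match quality delta'(a, b) = max(delta(a, b), delta(b, a)) on 4 elements.
delta_prime = {
    (0, 1): 5, (0, 2): 2, (0, 3): 3,
    (1, 2): 4, (1, 3): 1,
    (2, 3): 6,
}

G = nx.Graph()
for (a, b), w in delta_prime.items():
    G.add_edge(a, b, weight=w)

# Blossom algorithm; maxcardinality=True insists on a perfect matching when one exists.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)        # e.g. {(0, 1), (2, 3)} with total weight 5 + 6 = 11
```

Each element then ends up with exactly one partner, which is the symmetry the ad-hoc bipartite formulation failed to enforce.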
CommonCrawl
Let $\Gamma$ be a finitely generated subgroup of $PSL_2(\mathbb R)$. 3.- Fuchsian and co-compact (ie. the Riemann surface $\mathbb H/\Gamma$ is compact). I'm not familiar with this field, but I know that there are a lot of works on this question, especially when $\Gamma$ is generated by two elements. In this case, one can certainly find some satisfying criteria or algorithms in Gilman's memoir `Two-generators discrete subgroups of $PSL_2(\mathbb R)$'. Main question: what about the case when $\Gamma$ is generated by more than two elements? Main question (made more precise): assume that $\Gamma$ is generated by $n\geq 3$ elements. Is there a criterion involving the $n$ generators of $\Gamma$ all together that must be verified if this group is Fuchsian (or Fuchsian of the first type, or Fuchsian and cocompact)? Secondary question: in case of two-generators subgroups, are the existing criteria/algorithms about the properties 1, 2 and 3 implemented? If yes, where is it possible to find softwares/codes allowing to make explicit computations? Browse other questions tagged gr.group-theory riemann-surfaces or ask your own question. How do you find the genus of a Fuchsian group derived from a quaternion algebra? Can the finiteness of a Burnside group with two generators be checked algorithmically by using Fuchsian von Dyck groups?
CommonCrawl
however, if $X$ is a random variable and $\alpha$ is a parameter, we have to write $p(X; \alpha)$. I notice several times that the machine learning community seems to ignore the differences and abuse the terms. For example, in the famous LDA model, where $\alpha$ is the Dirichlet parameter instead of a random variable. Shouldn't it be $p(\theta;\alpha)$? I see a lot of people, including the LDA paper's original authors, write it as $p(\theta\mid\alpha)$. I think this is more about Bayesian/non-Bayesian statistics than machine learning vs.. statistics. In Bayesian statistics parameter are modelled as random variables, too. If you have a joint distribution for $X,\alpha$, $p(X \mid \alpha)$ is a conditional distribution, no matter what the physical interpretation of $X$ and $\alpha$. If one considers only fixed $\alpha$s or otherwise does not put a probability distribution over $\alpha$, the computations with $p(X; \alpha)$ are exactly the same as with $p(X \mid \alpha)$ with $p(\alpha)$. Furthermore, one can at any point decide to extend the model with fixed values of $\alpha$ to one where there is a prior distribution over $\alpha$. To me at least, it seems strange that the notation for the distribution-given-$\alpha$ should change at this point, wherefore some Bayesians prefer to use the conditioning notation even if one has not (yet?) bothered to define all parameters as random variables. Argument about whether one can write $p(X ; \alpha)$ as $p(X \mid \alpha)$ has also arisen in comments of Andrew Gelman's blog post Misunderstanding the $p$-value. For example, Larry Wasserman had the opinion that $\mid$ is not allowed when there is no conditioning-from-joint while Andrew Gelman had the opposite opinion. Is the machine learning community abusing "true distribution"? Data naming convention in machine learning and cross-validation?
CommonCrawl
I have a metric space $(X,d)$. I have a physical situation (data) where each physical entity corresponds to an $x \in X$. I want to do some mathematical/statistical modeling of this data, but the problem is that I can't add two elements of this set, as addition is not defined on them or the set is not closed under the addition operation. So I take a strange approach, where I cluster the data $X$ (using the metric $d$) into $N$ clusters, each cluster $C_i$ having a centroid $K_i$. Now I give a vector-space-like representation to each element $x \in X$ as the vector $$x_v = [d(x,K_1),d(x,K_2),\ldots,d(x,K_N)] \in V$$ i.e., $x$ is represented by the set of distances from each of the $N$ centroids. This way we have moved from a metric space to a vector space, thereby enabling us to do some modeling in the vector space. After modeling, when we get a final new vector $p$, it may not have any corresponding element in our data $D$, so we take the $x$ for which $\|x_v - p\|$ is minimal over the entire $D$ or over some selected codebook (a subset of $D$). What I want is some mathematically rigorous formulation of this problem, making any suitable assumptions if necessary. Your idea reminds me of a standard construction which is often used to prove the existence of a completion of an arbitrary metric space. If a metric space $X$ is bounded, you can embed it into the Banach space $C_b(X;\mathbb R)$ of bounded continuous functions from $X$ to $\mathbb R$: simply identify $x\in X$ with the function $f_x(y)=d(x,y)$. If $X$ is not bounded it is easy to modify this construction: fix a reference point $x_0$ and define $f_x(y)=d(x,y)-d(x_0,y)$. There is a concept of the free Banach space over a metric space. This is a canonical way to embed a metric space into a Banach space and was introduced by Arens and Eells in a paper in the Pacific Journal of Mathematics ("On embedding uniform and topological spaces", vol. 6 (1956), 397-403). One way to construct it is as a predual of a suitable space of Lipschitz functions on the metric space. You may be interested in kernel methods, which are not exactly what you describe but have some of the properties you seem to be seeking. In particular they can be viewed as a way to use linear algebraic methods to analyze data which does not naturally live in a vector space.
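The asker's construction is easy to prototype. The sketch below is purely illustrative: it uses Euclidean data and k-means, whereas for a general metric space one would have to pick the $N$ centres using only the metric $d$ (e.g. with k-medoids).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # stand-in for the metric-space data

N = 8                                      # number of clusters / centroids K_1..K_N
km = KMeans(n_clusters=N, n_init=10, random_state=0).fit(X)
centroids = km.cluster_centers_

def embed(x):
    """x -> x_v = [d(x, K_1), ..., d(x, K_N)]"""
    return np.linalg.norm(centroids - x, axis=1)

X_v = np.array([embed(x) for x in X])      # ordinary vector-space modelling applies to X_v

def nearest_data_point(p):
    """Map a model output p back to the data point whose embedding is closest."""
    return X[np.argmin(np.linalg.norm(X_v - p, axis=1))]
```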
CommonCrawl
two shapes in a $2n\times 2n$ grid sheet, can we pick third one? Can anyone help me with this problem? It just popped to my mind!!! we have a $2n\times 2n$ grid sheet and a connected shape $L$ consisting of $2n-1$ grid squares. we've cut two copies of $L$ out of the sheet. Is it always possible to cut a third copy of $L$? I think the answer is yes, but I couldn't solve it. any Ideas? Let $n=4$. Label the squares $(a,b)$, $1\le a\le8$, $1\le b\le8$. Cut out the identical shapes $$(3,3),(4,3),(5,3),(6,3),(7,3),(5,2),(5,4)$$ and $$(2,6),(3,6),(4,6),(5,6),(6,6),(4,5),(4,7)$$ You'll find you can't cut out another copy of this shape. Proof: any copy of this shape must have a row of five, horizontally or vertically. The row of five can't be along an edge of the square because there must be a square on either side of the row of five. No horizontal row, other than an edge, has five contiguous squares, once you have cut out the two shapes. The only columns with five contiguous squares, other than the edge columns, are columns 2 and 7, and those two locations for the 3rd shape are blocked by the missing squares at $(3,3)$ and $(6,6)$, respectively. Number of triangles possible in android lock patterns? How many different ways can a group of students be hired to work a survey?
CommonCrawl
What is correct answer for this IQ task and why? If I answer 1-10 questions correctly and 38 H, I get IQ 92. If I answer 1-10 questions correctly and 38 F, I get IQ 93. So I conclude that correct answer is F. In the corners (cells: 11,13,31,33) we have some objects. In the middle edges (12,21,23,32) we have operators on neighbouring two objects. The centre cell (22) is simply do not used here [this opinion agrees with tasks 4 and 9 at the same test, where it is used similarly]. Let's look on (12) operator (one arrow), it leads to simple vertical stretching of object. Let's look on (21) operator (two arrows), it leads to simple swap of object colours. Now we are looking on (23) and (32). (23), by our assumption, should swap colours of (13), creating the object from varian H, plus some additional operation, since (23) is different from (13). (32), by our assumption, should stretch (31), making variant H from it, plus some additional operation. What are these additional operations? Since they make the same object from the same object (H) that must be the same. Looking on possible answers we see that all differences between objects are: size, swap of the colours, angle. We have notations for first two differences, so additional operation should be rotation. We see that (23) is rotated respectively to (21), by 45 degree clockwise. (32) is rotated respectively to (12) by 45 degree clockwise. So most probably both additional operations means rotation by 45 degree clockwise. This means that the right variant for (33) is F. One arrow and two arrows means completely different things. It is counter intuitive. And though double arrow associated with swap, no doubts, it is also associated with stretching. I hope, someone else will find even better explanation, without mentioned two drawbacks. In the absence of any obvious common rule for all symbols, I conclude that each set follows its own rules. Thus the answer is H (column 3 = tall, row 3 = top left half black). I did wonder if the arrows represented transformations for the squares on either side - the arrow in row 1 column 2 could represent a height increase. The double headed arrows suggest a flip to me but would have to represent a rotation for this to work. But this system breaks down with row 3 column 2 - there are no answers which have been stretched along a diagonal (which would give a parallelogram). Resulting in a color-inverted square. The arrows and $\times$ are just distractions. Though I get it slightly differently. I got this answer before looking at the other answers, just in an attempt to solve it myself. The first thing I noticed is that the arrows are different lengths. The exact length of the arrow is also the exact size of the shape it is referring to. For example, the one at the top stretches the box to exactly that length. I also came to the conclusion that a double arrow means to flip colors. I realized that the [2,1] arrow did not change the size of the box only the colors, this does not mean that the double arrow ONLY changes colors though, it just means it ALSO changes colors. The arrow length is the exact size as the box in [1,1] and in [3,1], which means the size didn't need to change from that arrow. However, since the box at [3,1] stayed the same size we can assume that the arrow at [2,1] only effects the horizontal lines (aka the top and the bottom lines). The arrow at [1,2] only effects the vertical lines (aka the lines on the size of the box). 
Using this logic of it decided the exact length of the box based on arrows, and that the arrows at [1,2] and [3,2] are effecting the side walls and the arrows as [2,1] and [2,3] are effecting the top and bottom sides. We can assume right away it must be a diamond shape, since that is the length of the arrows and how they are shaped. After that I just simply replaced the bottom and top of the box at [1,3] with the length of lines from [2,3], then fliped the colors. Then we combined that with the sides of the box of [3,1] being replaced with the arrows from [3,2] and when we combine we indeed get F. You can see that if you combine those 2 pictures together that is what you end up with. The most important part I saw that was overlooked was that the arrows were always the exact length and direction of the side of the box that was being effected. This is how I came to the conclusion of the answer for this question.
CommonCrawl
Abstract: We analyze the properties of the conditional amplitude operator, the quantum analog of the conditional probability which has been introduced in [quant-ph/9512022]. The spectrum of the conditional operator characterizing a quantum bipartite system is invariant under local unitary transformations and reflects its inseparability. More specifically, it is shown that the conditional amplitude operator of a separable state cannot have an eigenvalue exceeding 1, which results in a necessary condition for separability. This leads us to consider a related separability criterion based on the positive map $\Gamma:\rho \to (Tr \rho) - \rho$, where $\rho$ is an Hermitian operator. Any separable state is mapped by the tensor product of this map and the identity into a non-negative operator, which provides a simple necessary condition for separability. In the special case where one subsystem is a quantum bit, $\Gamma$ reduces to time-reversal, so that this separability condition is equivalent to partial transposition. It is therefore also sufficient for $2\times 2$ and $2\times 3$ systems. Finally, a simple connection between this map and complex conjugation in the "magic" basis is displayed.
CommonCrawl
The talk is about a classification of order $1$ invariants of maps between $3$-manifolds whose increments in generic homotopies are defined entirely by diffeomorphism types of local bifurcations. I will mainly concentrate on the oriented situation. In this case the space of integer invariants has rank $7$ for any source and target, and I will give a geometric interpretation of its basis. The $\mod 2$ setting, with $\mathbb R^3$ as the target, adds another $4$ linearly independent invariants, one of which combines the self-linking of the cuspidal edge of the critical value set with the number of connected components of the edge.
CommonCrawl
I have a group of $n$ events. The successes don't all come in at once, and and I want to try to predict the actual success rate $s$. The number of successes showing in the system at any given time can be $0$ or greater than $0$. Is there a way that Bayes's theorem can be used to give the probability of success, given that we know $p$ as well as $s$? Can I use Laplacian smoothing to predict this probability if $x = 0$? Am I incorrect in assuming that Bayes's theorem can solve this? Is there another way to do so? In this situation it is best to use a member of the beta distribution as the prior for the binomial success probability $\pi.$ First, because a beta distribution has support $(0,1).$ Second the beta prior is 'conjugate' to the binomial likelihood, making it easier to find the posterior distribution. I choose $Beta(\alpha = 38, \beta = 1862)$ as the prior because that distribution has mean $\alpha/(\alpha + \beta) = 0.02,$ median $0.0198,$ mode $0.0195,$ standard deviation approximately $0.019,$ and $$P(0.015 < \pi < 0.025) \approx 0.95.$$ These properties seem a reasonable match to the prior information you provided. The following program in R searches for the parameters $\alpha$ and $\beta$ starting with the relationship $\alpha/(\alpha + \beta) = 0.02$ or $\beta = 49\alpha$ to put enough probability in $(0.015, 0.025)$ to match your historical experience. Below is a histogram of many simulated values of $\pi \sim Beta(38, 1862)$ with the prior density curve (blue) and the best-fitting normal density superimposed. Thus, if there were $x = 3$ successes in 100 trials, we could say that the posterior mean $E(\pi|x) = 0.0205$ and that a 95% posterior probability interval (R code below) for $\pi$ is $(0.0148,\, 0.0271).$ This information about $\pi,$ based on prior information and relatively little data, could be used to predict the success rate during additional trials of the process as you suggest. Notes: (1) On rates: You refer to 'rates' $x$ and $s,$ but it seems to me you are thinking of counts of successes. (2) On selecting a prior distribution: With a prior that is 'conjugate' to (mathematically compatible with) the likelihood, it is easy to identify the kernel of the posterior distribution without tedious computation. If we used the (green) normal distribution as a prior, the problem would become messy. Using a prior such as $Unif(0.015, 0.025)$ would constrain the posterior to the same support. (3) On the effect of the prior on the result: Very roughly speaking, your prior information contains about as much information as getting 38 successes in 2000 trials. Thus the prior distribution has much more to say about the posterior than my hypothetical additional 100 trials. Continual updating of the posterior is indicated as additional data become available. The posterior for one iteration becomes the prior for the next. (4) If the beta family of distributions is unfamiliar to you, please look at the Wikipedia article on 'beta distribution'. Not the answer you're looking for? Browse other questions tagged probability bayesian bayes-theorem or ask your own question. What is considered a good starting prior in Bayes theorem for an event that hasn't happened yet? Why do many textbooks on Bayes' Theorem include the frequency of the disease in examples on the reliability of medical tests?
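Returning to the conjugate update in this answer: the posterior numbers quoted above (mean 0.0205 and the 95% interval (0.0148, 0.0271)) can be reproduced in a few lines. Here is a sketch using scipy in place of the R code referred to in the text:

```python
from scipy import stats

a0, b0 = 38, 1862                  # the Beta prior chosen above
x, n = 3, 100                      # hypothetical data: 3 successes in 100 trials

a1, b1 = a0 + x, b0 + (n - x)      # conjugate update -> Beta(41, 1959)
posterior = stats.beta(a1, b1)

print(posterior.mean())                    # ~0.0205
print(posterior.ppf([0.025, 0.975]))       # ~[0.0148, 0.0271]
```

Re-running the same two lines on the next batch of data, with the current posterior as the new prior, is the continual updating mentioned in note (3).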
CommonCrawl
Let's investigate the area of parallelograms some more. How are the two strategies for finding the area of a parallelogram the same? How they are different? Study the examples and non-examples of bases and heights of parallelograms. Then, answer the questions that follow. Examples: The dashed segment in each drawing represents the corresponding height for the given base. Non-examples: The dashed segment in each drawing does not represent the corresponding height for the given base. Select all statements that are true about bases and heights in a parallelogram. Only a horizontal side of a parallelogram can be a base. Any side of a parallelogram can be a base. A height can be drawn at any angle to the side chosen as the base. A base and its corresponding height must be perpendicular to each other. A height can only be drawn inside a parallelogram. A height can be drawn outside of the parallelogram, as long as it is drawn at a 90-degree angle to the base. A base cannot be extended to meet a height. Five students labeled a base $b$ and a corresponding height $h$ for each of these parallelograms. Are all drawings correctly labeled? Explain how you know. In the applet, the parallelogram is made of solid line segments, and the height and supporting lines are made of dashed line segments. A base ($b$) and corresponding height ($h$) are labeled. Experiment with dragging all of the movable points around the screen. Can you change the parallelogram so that . . . its height is in a different location? it is tall and skinny? it is also a rectangle? it is not a rectangle, and has $b=5$ and $h=3$? Identify a base and a corresponding height, and record their lengths in the table that follows. Find the area and record it in the right-most column. In the last row, write an expression using $b$ and $h$ for the area of any parallelogram. What happens to the area of a parallelogram if the height doubles but the base is unchanged? If the height triples? If the height is 100 times the original? What happens to the area if both the base and the height double? Both triple? Both are 100 times their original lengths? We can choose any of the four sides of a parallelogram as the base. Both the side (the segment) and its length (the measurement) are called the base. If we draw any perpendicular segment from a point on the base to the opposite side of the parallelogram, that segment will always have the same length. We call that value the height. There are infinitely many line segments that can represent the height! Here are two copies of the same parallelogram. On the left, the side that is the base is 6 units long. Its corresponding height is 4 units. On the right, the side that is the base is 5 units long. Its height is 4.8 units. For both, three different segments are shown to represent the height. We could draw in many more! We can see why this is true by decomposing and rearranging the parallelograms into rectangles. Notice that the side lengths of each rectangle are the base and height of the parallelogram. Even though the two rectangles have different side lengths, the products of the side lengths are equal, so they have the same area! And both rectangles have the same area as the parallelogram. Notice that we write the multiplication symbol with a small dot instead of a $\times$ symbol. This is so that we don't get confused about whether $\times$ means multiply, or whether the letter $x$ is standing in for a number. 
In high school, you will be able to prove that a perpendicular segment from a point on one side of a parallelogram to the opposite side will always have the same length. You can see this most easily when you draw a parallelogram on graph paper. For now, we will just use this as a fact. Any of the four sides of a parallelogram can be chosen as a base. The term base can also refer to the length of this side. Once we have chosen a base, then a perpendicular segment from a point on the base of a parallelogram to the opposite side will always have the same length. We call that value the height.
CommonCrawl
Let $X = [P/G]$ be a smooth finite type separated DM-stack over $\mathbb C$ given as the quotient of a smooth projective scheme $P$ by the action of a smooth (finite type separated) reductive group scheme $G$. An atlas of $X$ is an étale morphism $U\to X$ with $U$ an algebraic space. Thm. If the coarse moduli space of $X$ is a scheme (i.e., not just an algebraic space) then every atlas $U$ of $X$ is a scheme. Q. Suppose that every atlas $U$ of $X$ is a scheme. Is the coarse moduli space of $X$ a scheme? I expect the answer to be negative, but can't find a good example. Note that a counterexample cannot be an algebraic space, as an algebraic space with the property that every atlas is a scheme is itself a scheme (the identity morphism being an atlas).
CommonCrawl
This is my understanding of 'energy compaction' and want to know if it is right. Take a vector. The energy of the vector is the sum of the squares of its elements. If A is the transformation matrix that is unitary, it can be proved that the energy in x and Ax are same. Energy conservation property. Energy compaction means that the energy of Ax=y is more concentrated in some elements compared to the distribution of energy in x. DCT is said to have energy compaction property. Does that mean, for any x, if A is DCT matrix, energy of y will be more concentrated when compared to the x. Does this happen to every x or x has to satisfy some properties to get this energy compaction? Yes, I believe that your understanding of energy compaction is correct. Does that mean, for any x, if A is DCT matrix, energy of y will be more concentrated when compared to the x. No, it does not. All that is needed to prove that such is not the case is to show that there is an $x$ that is not compacted by the DCT. White noise, for example, would not be compacted by the DCT. The DCT is useful because in many real life situations the "signal" (e.g. audio, images, videos, etc.) tend to be "pinkish", i.e. tend to have most of their energy in the lower frequencies and so there is a natural assymetry that can be used to our advantage. Another way to look at this is from an information theory perspective. If a signal is not "white" (i.e. it doesn't have the same power at all frequencies) that implies that there is some correlation between samples in the time domain, which means that the samples have "information" about the value of other samples. This mutual information implies that there is redundancy, and thus should be able to reduce the amount of data without losing information. Does this happen to every x or x has to satisfy some properties to get this energy compaction? For frequency-based transforms like the DCT it is clear from the above that what is needed is for the signal to be non-white. The more non-white it is, the more compaction can be achieved. There are other kinds of transforms though, which presumably could compact signals based on other features. That is pretty much the whole point of compressed sensing. The number of coefficients accounting for a given percentage (for example 95%) of the total energy. The number of coefficients whose energy falls below a given percentage of the total energy. Any statistical measure of distribution peakedness - considering the coefficients as samples from a random distribution. Does that mean, for any x, if A is DCT matrix, energy of y will be more concentrated when compared to the x. Does this happen to every x or x has to satisfy some properties to get this energy compaction? This property is clearly not valid for any $x$. Otherwise, the transform could be applied iteratively on its result until data ends up being concentrated in a single coefficient. While this may sound tautological, one could say that DCT has good energy compaction properties on signals which are made by combining a small number of sinusoidal elements; and more generally on signals in which there is an imbalance in the distribution of energy. This happens to be a good description of speech, music and "natural" images. Speech or music sounds are mostly made of a handful of sinusoidal harmonics, with less energy in the high frequencies. Images contain large uniform regions (Consider that a $N \times N$ pixels square has $N^2$ interior pixels and only $4N$ edge pixels). 
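A small numerical experiment makes the contrast concrete (a sketch; the particular test signals are arbitrary choices):

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)

smooth = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)
noise = rng.standard_normal(n)

def energy_in_top_k(x, k=10):
    c = dct(x, norm="ortho")          # orthonormal DCT-II, so total energy is preserved
    e = np.sort(c ** 2)[::-1]
    return e[:k].sum() / e.sum()

print(energy_in_top_k(smooth))        # close to 1: a few coefficients hold almost all the energy
print(energy_in_top_k(noise))         # much lower: white noise is not compacted
```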
CommonCrawl
Is it possible to determine $A$ and $T$ without assuming the Riemann hypothesis? Are there any other known results (with explicit constants) around this question? The proof uses the Borel-Caratheodory theorem, and can be made effective if you really really want it. Titchmarsh has a series of seven successive constants $A_1, A_2,\ldots, A_6, A$, with the final $A$ being the constant you reference above. This is not conditional on the Riemann Hypothesis. It's not clear how your actual question relates to your title: do you want to determine $T$ also? I thought the result should hold for all $T$?
CommonCrawl
$ax^2+bx+c = 0$ is a quadratic equation in general form. It is given that one root is twice the other root, so if the roots are $\alpha$ and $\beta$, then $\beta = 2 \alpha$. Now, eliminate $\beta$ from the two root relations (the sum and the product of the roots) by using this condition. Then replace the value of $\alpha$, obtained from the sum of the roots, in the product relation. This yields the required condition on the coefficients of the quadratic equation.
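One way to carry out the steps just described, using the sum and product of the roots (Vieta's formulas); the resulting condition on the coefficients is $2b^2 = 9ac$:

$$\alpha + \beta = -\frac{b}{a}, \qquad \alpha\beta = \frac{c}{a}, \qquad \beta = 2\alpha$$

$$\Rightarrow\quad 3\alpha = -\frac{b}{a} \quad\Rightarrow\quad \alpha = -\frac{b}{3a}, \qquad 2\alpha^2 = \frac{c}{a} \quad\Rightarrow\quad \frac{2b^2}{9a^2} = \frac{c}{a} \quad\Rightarrow\quad 2b^2 = 9ac.$$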
CommonCrawl
(a) Find all $3 \times 3$ matrices which are in reduced row echelon form and have rank 1. (b) Find all such matrices with rank 2. First we look at the rank 1 case. For a $3 \times 3$ matrix in reduced row echelon form to have rank 1, it must have 2 rows which are all 0s. The non-zero row must be the first row, and it must have a leading 1. For a rank 2 $3 \times 3$ matrix in reduced row echelon form, there must be exactly one row, the bottom one, which has only 0s.
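To make the answer concrete, here is one way to list the possibilities (a sketch; $a$ and $b$ denote arbitrary real numbers). For rank 1, the single non-zero row can have its leading 1 in column 1, 2 or 3:

$$\begin{pmatrix} 1 & a & b \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},\qquad \begin{pmatrix} 0 & 1 & a \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},\qquad \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

For rank 2, the two pivot columns can be $\{1,2\}$, $\{1,3\}$ or $\{2,3\}$:

$$\begin{pmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 0 \end{pmatrix},\qquad \begin{pmatrix} 1 & a & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},\qquad \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$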
CommonCrawl
I got this problem from Rustan Leino, who got it from Jay Misra, who got it from Gerard Huet. Devise an algorithm that, given an even number $N$, partitions the integers from $1$ to $N$ into pairs such that the sum of the two numbers in each pair is a prime number. The algorithm is defined recursively on $N$, as follows. If $N = 0$, use the empty partition. Otherwise, since Bertrand's Postulate holds, you can find a prime number $p$ such that $N < p < 2N$. Let $q = p - N$, and observe that $0 < q < N.$ Since $p$ is prime and exceeds 2, $p$ is odd and thus $q$ is odd and $q-1$ is even. Call the algorithm recursively to partition the integers $1$ to $q-1$. To this partition, add the pairs $(q, N),$ $(q+1, N-1),$ $(q+2, N-2),$ $\ldots,$ $([q+N-1]/2, [q+N+1]/2).$ Each of these added pairs has sum $q+N$, which equals the prime number $p$. So, the updated partition retains the property that the sum of each pair is prime. The resulting partition also contains each integer between $1$ and $N$ exactly once. So, this is the desired partition.
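A small Python sketch of the recursion just described (the primality test is a simple trial-division helper, fine for the small numbers involved):

```python
def is_prime(n):
    """Trial-division primality test, adequate for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_pairs(N):
    """Partition 1..N (N even) into pairs whose sums are prime, following the recursion above."""
    if N == 0:
        return []
    # Bertrand's postulate guarantees a prime p with N < p < 2N
    p = next(m for m in range(N + 1, 2 * N) if is_prime(m))
    q = p - N                      # q is odd, so q - 1 is even
    pairs = prime_pairs(q - 1)     # recursively partition 1..q-1
    lo, hi = q, N
    while lo < hi:                 # add (q, N), (q+1, N-1), ...; each pair sums to p
        pairs.append((lo, hi))
        lo += 1
        hi -= 1
    return pairs

print(prime_pairs(10))
# [(1, 10), (2, 9), (3, 8), (4, 7), (5, 6)] -- every pair sums to the prime 11
```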
CommonCrawl
On a question of Zadrozny (Dec 13 2015). We answer a question of Zadrozny.
Maximally transitive semigroups of $n\times n$ matrices (Dec 30 2011). We prove that, in both real and complex cases, there exists a pair of matrices that generates a dense subsemigroup of the set of $n\times n$ matrices.
Diamond can fail at the least inaccessible cardinal (Jul 04 2016). Starting from suitable large cardinals, we force the failure of diamond at the least inaccessible cardinal. The result improves an unpublished theorem of Woodin.
(Weak) diamond can fail at the least inaccessible cardinal (Jul 04 2016; Oct 31 2016). Starting from suitable large cardinals, we force the failure of (weak) diamond at the least inaccessible cardinal. The result improves an unpublished theorem of Woodin and a recent result of Ben-Neria, Garti and Hayut.
Almost Souslin Kurepa trees (Oct 10 2015). We show that the existence of an almost Souslin Kurepa tree is consistent with $ZFC$. We also prove their existence in $L$. These results answer two questions from Zakrzewski.
On the notion of generic cut for models of ZFC (Jan 28 2015). We define the notion of generic cut between models of ZFC and give some examples.
Hyperbolic Lagrangian coherent structures align with contours of path-averaged scalars (Jan 21 2015; Jul 26 2015). While inequality (9) is mathematically correct, it does not imply alignment between path-averaged scalars and the hyperbolic LCSs.
On the shape of the free boundary of variational inequalities with gradient constraints (Aug 09 2015). In this paper we derive an estimate on the number of local maxima of the free boundary of some variational inequalities with pointwise gradient constraints. This also gives an estimate on the number of connected components of the free boundary.
A counterexample to Herzog's Conjecture on the number of involutions (Feb 20 2018). In 1979, Herzog put forward the following conjecture: if two simple groups have the same number of involutions, then they are of the same order. We give a counterexample to this conjecture.
Cryptanalysis of some protocols using matrices over group rings (Mar 16 2015). We address a cryptanalysis of two protocols based on the supposed difficulty of the discrete logarithm problem on (semi)groups of matrices over a group ring. We can find the secret key and break the protocols entirely.
Definable tree property can hold at all uncountable regular cardinals (Aug 24 2016; Aug 31 2016). Starting from a supercompact cardinal and a measurable above it, we construct a model of ZFC in which the definable tree property holds at all uncountable regular cardinals. This answers a question from .
All uncountable regular cardinals can be inaccessible in HOD (Aug 02 2016). Assuming the existence of a supercompact cardinal and an inaccessible above it, we construct a model of ZFC in which all uncountable regular cardinals are inaccessible in HOD.
CommonCrawl
In a graphical model, we say that sets $A$ and $B$ are conditionally independent given set $C$ if all routes from $A$ to $B$ are blocked. There are multiple ways for the route to be blocked at a node. One instance for the route to be blocked at a node is when neither the descendants nor the node itself is in $C$. There is only one route (or "path") from $A$ to $B$, namely the collider $x_1\to x_2\leftarrow x_3$. Is this path blocked at any node? Yes, it is blocked at node $x_2$ since "neither the node $x_2$ nor its descendants" (which it does not have) are in $C$. So all the paths are blocked, and we have conditional independence. In this particular example we can also argue directly, though it is a bit of a "degenerate case". The Bayes net modeling assumption is "each variable is conditionally independent of its non-descendants, given its parents". Applying it to $x_1$ and $x_3$ we have $P(A,B)=P(A)P(B)$. If $C=\emptyset$ we have the original $P(A,B)=P(A)P(B)$. If $C=A$ (assuming $P(A=x)$ is never zero), then for any values $x$ and $b$, $P(A=x, B=b \mid A=x) = P((A,B)=(x,b))/P(A=x) = P(B=b) = P(B=b \mid A=x)$, where we used $P((A,B)=(x,b))=P(A=x)P(B=b)$ and $P(B=b)=P(B=b|A=x)$ by independence. This is conventionally written $P(A,B|A)=P(A|A)P(B|A)$. If $C=B$ the argument is symmetric (assuming $P(B=b)$ is never zero).
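A quick numerical illustration of this collider structure, as a sketch with made-up Bernoulli variables: marginally the two parents are independent, but conditioning on the collider makes them dependent (the "explaining away" effect):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Collider structure: x1 -> x2 <- x3, with x1 and x3 independent coin flips
x1 = rng.integers(0, 2, n)
x3 = rng.integers(0, 2, n)
x2 = x1 ^ x3                      # x2 depends on both parents

# Marginally, x1 and x3 are (nearly) uncorrelated ...
print("corr(x1, x3) overall:      ", np.corrcoef(x1, x3)[0, 1])

# ... but conditioning on the collider x2 makes them strongly dependent
mask = x2 == 0
print("corr(x1, x3) given x2 = 0: ", np.corrcoef(x1[mask], x3[mask])[0, 1])
```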
CommonCrawl
The area of a rectangle is calculated using the following formula: $$ Area = L \times H $$ In this equation, \(L\) is the length of the rectangle and \(H\) is the height of the rectangle. The area is found simply by taking the product (multiplication) of those two lengths. The area of a square is calculated in the same way. The difference between the two is that a square is a special kind of rectangle where the length is the same as the height. What is the height (\(H\))?
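A tiny check of the rearrangement for the height (the numbers here are made up, since the exercise's specific figure is not reproduced in the text):

```python
# Hypothetical example: a rectangle with area 24 and length 6
area = 24
length = 6
height = area / length   # rearranging Area = L * H gives H = Area / L
print(height)            # 4.0
```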
CommonCrawl
Abstract: We construct the extended flow equations of a new $Z_N$-Toda hierarchy taking values in a commutative subalgebra $Z_N$ of $gl(N,\mathbb C)$. We give the Hirota bilinear equations and tau function of this new extended $Z_N$-Toda hierarchy. Taking the presence of logarithmic terms into account, we construct some extended vertex operators in generalized Hirota bilinear equations, which might be useful in topological field theory and the Gromov–Witten theory. We present the Darboux transformations and bi-Hamiltonian structure of this hierarchy. Using Hamiltonian tau-symmetry, we obtain another tau function of this hierarchy with some unknown mysterious relation to the tau function derived using the Sato theory.
CommonCrawl
It seems there are a lot of respected physicists appearing on pop-sci programs (discovery channel, science channel, etc.) these days spreading the gospel of "we can know, we must know." Three examples, quickly: 1) Many programs feature Michio Kaku saying that he is on a quest to find an equation, "perhaps just one inch long," which will "describe the whole universe." 2) Max Tegmark has come out with a new book in which he expresses the gut feeling that "nothing is off-limits" to science. The subtitle of this book is My Quest for the Ultimate Nature of Reality. 3) In the series Through the Wormhole there is talk about a search for the "God equation." Is there any sense among physicists that it might be impossible to articulate the "ultimate nature of reality" in equations and formal logic? It seems to me that physicists are following in the footsteps of the 19th century mathematicians (led by Hilbert) who were on a similar quest which was put to rest by Gödel's incompleteness theorems in 1931. Is there any appreciation for how the Incompleteness Theorems might apply to physics? Has any progress been made on Hilbert's 6th problem for the 20th century? Shouldn't this be addressed before getting all worked up about a "God equation?" Imho, you can safely discount $99.99 \%$ of that as just chatter. Taking advantage of the fecund atmosphere for science popularization, many people (some expert and some not so) are trying their hand at drumming up some excitement. Interesting link with the opposite trend: What scientific idea is ready for retirement?: We'll Never Hit Barriers To Scientific Understanding. Martin Rees, Edge, 2014. @Siva Yes, but these are respected scientists holding high positions at respected institutions. @ben: Like Emilio Pisanty, I will also refer to links with the opposite trend: Impossibility: The Limits of Science and the Science of Limits by John D Barrow examines, among other things, some of your questions. At this link you may find a video where he talks about parts of his book. @ben: I was going to mention something I'm reading through (Urs' paper Differential Cohomology in a Cohesive $\infty$-topos), and how it approaches Hilbert's 6th problem, but… it seems Urs himself beat me to it. Personally, I think his answer deserves the ol' checkmark.
Do they mean one framework from which multiple equations can arise? I guess I'm answering questions with questions, but it is only to make the point that these "dream theories" can become idealized to the point of myth. It seems to me that in the future what will most likely occur is some framework or language that can be used to describe gravity and dark energy in addition to the other forces. This framework will be applied to some Standard Model version 2 that incorporates dark matter and other observed matter. That does not mean one equation. It just means one unified way of thinking about things. It will likely lead to many equations with a good number of assumptions that are adopted only because they accurately predict experiment. First regarding: Is there any appreciation for how the Incompleteness Theorems might apply to physics? To put this in perspective, imagine Newton said "Oh, looks like my $F = m a$ is pretty much a theory of everything. So now I could know everything about nature if only it were guaranteed that every sufficiently strong consistent formal system is complete." And then later Lagrange: "Oh, looks like my $\delta L = 0$ is pretty much a theory of everything. So now I could know everything about nature if only it were guaranteed that every sufficiently strong consistent formal system is complete." And then later Schrödinger: "Oh, looks like my $i \hbar \partial_t \psi = H \psi$ is pretty much a theory of everything. So now I could know everything about nature if only it were guaranteed that every sufficiently strong consistent formal system is complete."
For instance gauge theory is firmely captured by differential cohomology and Chern-Weil theory, local TQFT by higher monoidal category theory, and so forth. But two things are remarkable here: first, the maths that formalizes aspects of modern fundamental physics involves the crown jewels of modern mathematics, so something deep might be going on, but, second, these insights remain piecemeal. There is a field of mathematics here, another there. One could get the idea that somehow all this wants to be put together into one coherent formal story, only that maybe the kind of maths used these days is not quite sufficient for doing so. This is a point of view that, more or less implicitly, has driven the life work of William Lawvere. He is famous among pure mathematicians as being the founder of categorical logic, of topos theory in formal logic, of structural foundations of mathematics. What is for some weird reason almost unknown, however, is that all this work of his has been inspired by the desire to produce a working formal foundations for physics. (See on the nLab at William Lawvere -- Motivation from foundations of physics). I think anyone who is genuinely interested in the formal mathematical foundations of phyiscs and questions as to whether a fundamental formalization is possible and, more importantly, whether it can be useful, should try to learn about what Lawvere has to say. You might start with the note on the nLab: "Higher toposes of laws of motion" for an idea of what Lawverian foundations of physics is about. A little later this month I'll be giving various talks on this issue of formally founding modern physics (local Lagrangian gauge quantum field theory) in foundational mathematics in a useful way. The notes for this are titled Homotopy-type semantics for quantization. The question Is there any sense among physicists that it might be impossible to articulate the "ultimate nature of reality" in equations and formal logic? is about a belief (like a faith of a religion) that most physicists may or may not have. Just like physicists have many different faith, I think physicists have different believes on this issue. So it is hard to answer yes or no, since physicists do not have a common opinion. The Dao that can be stated cannot be the eternal Dao. The Name that can be given cannot be the eternal Name. Here DAO ~ "ultimate nature of reality". So the point of view is that "ultimate nature of reality" exists. But any (or current) concrete description of "ultimate nature of reality" in terms of equations and formal logic is not a faithful description of the "ultimate nature of reality". Space = a collection of many many qubits. Vacuum = the ground state of the qubits. Elementary particles = collective excitations of the qubits. In other words, all matter are formed by the excitations of the qubits. We live inside a quantum computer. -- this is AN approximate approach to "ultimate nature of reality" (or DAO). I updated my answer for your point. In your link it mentions that gravity is not unified with the others through String-nets. However, I remember that in your paper with Levin you mention that LQG may be a string-net. I believe the program to join the two was called quantum graphity at one point? I know that there was a paper published on it in Phys. Rev. D. Separately Michael Freeman has played with some String-net like ideas "off lattice" using a "quantum gravity" Hamiltonian. Do you know if there is a good reference for how this program has continued? 
I feel that string-net or LQG is a good description of gauge theory. But I still do not see (understand) if string-net or LQG is a good description of gravity or not. Gu and I have a paper on emergent (linearized) gravity, but that is not based on string-net nor LQG (see arxiv.org/abs/0907.1203 ). I think Lao Tzu is rolling in his grave. If we live in a quantum computer, what would be the observable consequence (physical prediction)? The history of physics shows that each generation of physicists, theoretical and experimentally biased ones, believes at some point that they have found the holy grail or are very close to finding it. Certainly this was true in the nineteenth century when mathematics reigned and theories were so complete and beautiful they thought that all that was left was applications of known theories . That is a type of hubris. What changed the game was newer and better experiments that showed up inconsistencies in the predictions of their Theory Of Everything (TOE) . It is fair to suppose that the goal will always be a TOE, and hypothesize that newer and better experimental data will open up again and again the scope of what the TOE describes. Because this has to be said: at the limits of the experimental domains of their applications, newer theories and older theories blend, usually older ones are shown to emerge from the newer ones ( as for example thermodynamics from statistical mechanics ). There is consistency in our theories. Now as for Godel and his theorem, which I remember from my mathematics course in the form "the set of all sets is open" , as applied to a TOE is not inconsistent with the above view. What may happen though, we will reach the limits of our possible experimental verifications and the openness will be a moot point, going towards metaphysics. I don't think that "the set of all sets is open" is one of Gödel's incompleteness theorems. well it was in a set theory course back in 1960 so I may be paraphrasing, the professor might have proven this using G theorem. ... and seems to think that's OK - implicitly I think he was saying to Dedekind that the onus was on the mathematician to check that his or her sets were not, as he called them "infinite or inconsistent multiplicities". What a deft sidestep: "I define my theory to be sound whenever it is sound!": although sounding like a bit of a con, is really quite a stroke of genius. It is a shame that he never published his ideas on "infinite or inconsistent multiplicities", probably because Kronecker and others influential in mathematics publishing were dead against him.
CommonCrawl
This mini-demo gives you the opportunity to play around with the 2-norm condition number of a $2\times 2$ matrix. What happens if you choose the columns of the matrix to be nearly linearly dependent? What happens if you choose the diagonal entries to be very different in magnitude?
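The interactive demo itself is not reproduced here, but the same experiment can be sketched in a few lines of Python:

```python
import numpy as np

def cond2(A):
    """2-norm condition number: ratio of largest to smallest singular value."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

# Nearly linearly dependent columns -> huge condition number
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])
print(cond2(A))                    # on the order of 1e8 or larger

# Very different diagonal magnitudes -> condition number is roughly their ratio
B = np.diag([1.0, 1e-6])
print(cond2(B))                    # about 1e6
```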
CommonCrawl
We will use Young's inequality to prove the significant Hölder's inequality for $L^1(E)$ and for $L^p(E)$ where $1 < p < \infty$. Theorem 1 (Hölder's Inequality for $L^1(E)$ and $L^p(E)$): Let $E$ be a measurable set and let $1 \leq p < \infty$ with $q$ the conjugate index of $p$. If $f \in L^p(E)$ and $g \in L^q(E)$ then $fg \in L^1(E)$ and $\| fg \|_1 \leq \| f \|_p \| g \|_q$. Proof: Observe that if $f = 0$ a.e. on $E$ or $g = 0$ a.e. on $E$ then Hölder's inequality holds trivially, since both sides are zero. So assume that $f \neq 0$ and $g \neq 0$ a.e. on $E$. We break the proof up into a few cases. Suppose first that $\| f \|_p = 1$ and $\| g \|_q = 1$. By Young's inequality, $|f(x)g(x)| \leq \frac{|f(x)|^p}{p} + \frac{|g(x)|^q}{q}$ for almost every $x \in E$; integrating over $E$ gives $\| fg \|_1 \leq \frac{1}{p} + \frac{1}{q} = 1 = \| f \|_p \| g \|_q$. So Hölder's inequality holds if $\| f \|_p = 1$ and $\| g \|_q = 1$.
CommonCrawl
Not sure that there would be a closed formula, as Jim has pointed out. You can easily calculate it in examples because you can determine highest weight vectors in $g$ for $m$. Indeed, if $\alpha_1, \ldots, \alpha_k$ are simple roots for $m$, you just need to find those roots $\beta$ of $g$ such that no $\beta + \alpha_i$ is a root of $g$. This will give you all non-trivial summands. The trivial summands sit in the Cartan and are easy to find as well.
CommonCrawl
Computes Holt-Winters Filtering of a given time series. Unknown parameters are determined by minimizing the squared prediction error. \(alpha\) parameter of Holt-Winters Filter. \(beta\) parameter of Holt-Winters Filter. If set to FALSE, the function will do exponential smoothing. \(gamma\) parameter used for the seasonal component. If set to FALSE, a non-seasonal model is fitted. Character string to select an "additive" (the default) or "multiplicative" seasonal model. The first few characters are sufficient. (Only takes effect if gamma is non-zero). Start periods used in the autodetection of start values. Must be at least 2. Start value for level (a). Start value for trend (b). Vector with named components alpha, beta, and gamma containing the starting values for the optimizer. Only the values needed must be specified. Ignored in the one-parameter case. Optional list with additional control parameters passed to optim if this is used. Ignored in the one-parameter case. The multiplicative Holt-Winters prediction function (for time series with period length p) is $$\hat Y[t+h] = (a[t] + h b[t]) \times s[t - p + 1 + (h - 1) \bmod p].$$ where \(a[t]\), \(b[t]\) and \(s[t]\) are given by $$a[t] = \alpha (Y[t] / s[t-p]) + (1-\alpha) (a[t-1] + b[t-1])$$ $$b[t] = \beta (a[t] - a[t-1]) + (1-\beta) b[t-1]$$ $$s[t] = \gamma (Y[t] / a[t]) + (1-\gamma) s[t-p]$$ The data in x are required to be non-zero for a multiplicative model, but it makes most sense if they are all positive. The function tries to find the optimal values of \(\alpha\) and/or \(\beta\) and/or \(\gamma\) by minimizing the squared one-step prediction error if they are NULL (the default). optimize will be used for the single-parameter case, and optim otherwise. For seasonal models, start values for a, b and s are inferred by performing a simple decomposition in trend and seasonal component using moving averages (see function decompose) on the start.periods first periods (a simple linear regression on the trend component is used for starting level and trend). For level/trend-models (no seasonal component), start values for a and b are x[2] and x[2] - x[1], respectively. For level-only models (ordinary exponential smoothing), the start value for a is x[1]. A multiple time series with one column for the filtered series as well as for the level, trend and seasonal components, estimated contemporaneously (that is at time t and not at the end of the series). C. C. Holt (1957) Forecasting seasonals and trends by exponentially weighted moving averages, ONR Research Memorandum, Carnegie Institute of Technology 52. (reprint at http://dx.doi.org/10.1016/j.ijforecast.2003.09.015). P. R. Winters (1960) Forecasting sales by exponentially weighted moving averages, Management Science 6, 324--342.
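This help page describes base R's HoltWinters function. For readers working in Python, a rough analogue (not the same optimizer or initialization, so results will differ slightly; the series below is made up) is statsmodels' ExponentialSmoothing:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly series with an upward trend and multiplicative seasonality
rng = np.random.default_rng(0)
t = np.arange(120)
y = (50 + 0.3 * t) * (1 + 0.2 * np.sin(2 * np.pi * t / 12)) + rng.normal(0, 1, 120)
series = pd.Series(y, index=pd.date_range("2010-01", periods=120, freq="MS"))

# Additive trend, multiplicative seasonality with period 12;
# smoothing parameters are chosen by minimizing the squared one-step errors
fit = ExponentialSmoothing(series, trend="add", seasonal="mul", seasonal_periods=12).fit()
print(fit.params)         # fitted alpha/beta/gamma analogues and start values
print(fit.forecast(12))   # h-step-ahead predictions
```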
CommonCrawl
Abstract: Ballistic transport through a collection of quantum billiards in undoped graphene is studied analytically within the conformal mapping technique. The billiards show pseudodiffusive behavior, with the conductance equal to that of a classical conductor characterized by the conductivity $\sigma_0=4e^2/\pi h$, and the Fano factor $F=1/3$. By shrinking at least one of the billiard openings, we observe a tunneling behavior, where the conductance shows a power-law decay with the system size, and the shot noise is Poissonian (F=1). In the crossover region between tunneling and pseudodiffusive regimes, the conductance $G\approx (1-F)\times se^2/h$. The degeneracy $s=8$ for the Corbino disk, which preserves the full symmetry of the Dirac equation, $s=4$ for billiards bounded with smooth edges which break the symplectic symmetry, and $s=2$ when abrupt edges lead to strong intervalley scattering. An alternative, analytical or numerical technique, is utilized for each of the billiards to confirm the applicability of the conformal mapping for various boundary conditions.
CommonCrawl
Abstract: Quenched QCD simulations on three volumes, $8^3 \times$, $12^3 \times$ and $16^3 \times 32$ and three couplings, $\beta=5.7$, 5.85 and 6.0 using domain wall fermions provide a consistent picture of quenched QCD. We demonstrate that the small induced effects of chiral symmetry breaking inherent in this formulation can be described by a residual mass ($m_{\rm res}$) whose size decreases as the separation between the domain walls ($L_s$) is increased. However, at stronger couplings much larger values of $L_s$ are required to achieve a given physical value of $m_{\rm res}$. For $\beta=6.0$ and $L_s=16$, we find $m_{\rm res}/m_s=0.033(3)$, while for $\beta=5.7$, and $L_s=48$, $m_{\rm res}/m_s=0.074(5)$, where $m_s$ is the strange quark mass. These values are significantly smaller than those obtained from a more naive determination in our earlier studies. Important effects of topological near zero modes which should afflict an accurate quenched calculation are easily visible in both the chiral condensate and the pion propagator. These effects can be controlled by working at an appropriately large volume. A non-linear behavior of $m_\pi^2$ in the limit of small quark mass suggests the presence of additional infrared subtlety in the quenched approximation. Good scaling is seen both in masses and in $f_\pi$ over our entire range, with inverse lattice spacing varying between 1 and 2 GeV.
CommonCrawl
Let $n\geq 4$. Prove that the number of partitions of $n$ into 4 parts equals the number of partitions of $3n$ into 4 parts of size at most $n-1$. I am stuck on this problem but I suspect I need to establish a bijection, possibly by looking at the conjugate of the Ferrers diagram. Thanks for any help. Proof: Let $n\geq 4$. The number of partitions of $n$ into 4 parts is the number of solutions of the system $x_1+x_2+x_3+x_4=n$ with $x_1\geq x_2\geq x_3\geq x_4\geq 1$. Let $y_1=n-x_4$, $y_2=n-x_3$, $y_3=n-x_2$, and $y_4=n-x_1$. We know that $x_1\geq x_2\geq x_3\geq x_4\geq 1$, which implies $-x_1\leq -x_2\leq -x_3\leq -x_4\leq -1$ and thus $n-x_1\leq n-x_2\leq n-x_3\leq n-x_4\leq n-1$. So, $1\leq y_4\leq y_3\leq y_2\leq y_1\leq n-1$. Hence $y_1+y_2+y_3+y_4=4n-(x_1+x_2+x_3+x_4)=4n-n=3n$, which yields a solution counted by the number of partitions of $3n$ into 4 parts of size at most $n-1$. Thus there is a bijection between the solutions of the system $y_1+y_2+y_3+y_4=3n$ with $n-1\geq y_1\geq y_2\geq y_3\geq y_4\geq 1$ and the solutions of the system $x_1+x_2+x_3+x_4=n$ with $x_1\geq x_2\geq x_3\geq x_4\geq 1$. Therefore, the number of partitions of $n$ into 4 parts equals the number of partitions of $3n$ into 4 parts of size at most $n-1$.
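A brute-force sanity check of the claimed equality for small $n$, as a short Python sketch:

```python
from itertools import combinations_with_replacement

def partitions_into_4(total, max_part):
    """Count partitions of `total` into exactly 4 parts, each between 1 and `max_part`."""
    count = 0
    for parts in combinations_with_replacement(range(1, max_part + 1), 4):
        if sum(parts) == total:
            count += 1
    return count

for n in range(4, 13):
    lhs = partitions_into_4(n, n)            # partitions of n into 4 parts
    rhs = partitions_into_4(3 * n, n - 1)    # partitions of 3n into 4 parts of size <= n-1
    print(n, lhs, rhs, lhs == rhs)           # the last column should always be True
```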
CommonCrawl
Does "typical" mean mean? or median? It appears that "mean" is intended, because the front page of this census says there are 2,325 billionaires globally, with a combined net worth of 7.3 trillion dollars; the quotient is just around 3.1 billion. the median of a Pareto distribution is . Let (i. e. measure money in units of billions of dollars) and you get that , if "typical" means median. the mean of a Pareto distribution is , so you get , or $\alpha = 31/21 \approx 1.48$, if "typical" means mean. The original survey also mentions that there's a "wealth ceiling" around 10 billion USD; see the plot at quartz. But I don't see any really clear evidence for this. There could be such a ceiling, though, a function of the size and growth rate of the world economy, the typical length of human lives, tax rates on the income of the very wealthy, and so on. Next Post How weird is it that three pairs of same-market teams made the playoffs this year?
CommonCrawl
In calculus, limits and continuity form one of the most crucial topics. A limit of a function is a number that the function approaches as the independent variable approaches a given value. We define the limit as the value the function is approaching. A continuous function is one which can be graphed without lifting the pen from the paper. If the function cannot be graphed this way then it is discontinuous. i) A continuous function neither jumps nor blasts to infinity. ii) The graph of such a function can be drawn easily without lifting the pencil from the paper. A function h is continuous at x = c if the following three conditions hold: 1) h (c) is defined. 2) $\lim_{x \to c} h(x)$ exists. 3) $\lim_{x \to c} h(x) = h(c)$. If any of the conditions above is not fulfilled then we say that the function 'h' is discontinuous at x = c. If a function h (x) is continuous on some interval [c, d], then for every value y between h (c) and h (d) there exists some 'a' belonging to [c, d] such that y = h (a). This means that the function 'h' attains all the values between h (c) and h (d) as 'x' moves from c to d. This theorem captures the fact that continuous functions do not have jumps. If a function h (x) is continuous on some interval [c, d], then real numbers p and P exist such that p <= h (x) <= P for every 'x' belonging to [c, d]. In other words, the function 'h' is bounded from above and below on the interval [c, d]. This theorem captures the fact that continuous functions do not blast to infinity, since they are bounded on the interval. Example 1: For what values of c and d is the given function continuous? Solution: It is clear that the function is continuous on (-$\infty$, 2), (2, 3), (3, $\infty$). This is because 'h' is defined on each of these intervals separately. The problem arises at the points 2 and 3 only. For the function to be continuous, (i) and (ii) should be equal. Again, (iii) and (iv) need to be the same for the function to be continuous. Thus, when c = 5 and d = 14, the function 'h' here will be continuous. Example 2: Determine if the following function is continuous at x = 3. Thus, function f is not continuous at x = 3.
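The three conditions above can be checked symbolically. A small sketch with sympy (the piecewise functions from the lesson's own examples are not reproduced in the text, so the function below is a made-up stand-in):

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical piecewise function; the lesson's Examples 1 and 2 would be checked the same way
h = sp.Piecewise((x + 2, x < 1), (x**2 + 2, x >= 1))

value = h.subs(x, 1)                 # condition 1: h(1) is defined
left = sp.limit(h, x, 1, dir='-')    # condition 2: the one-sided limits must exist and agree
right = sp.limit(h, x, 1, dir='+')
print(value, left, right)            # 3, 3, 3 -> condition 3 holds, so h is continuous at x = 1
```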
CommonCrawl
For statistics about Wikipedia, see Wikipedia:Statistics. Statistics is the science of data. It enables the collection, analysis, understanding, and presentation of data. It helps in the study of many other fields, such as medicine, economics, psychology, and marketing. Someone who works in statistics is called a statistician. First, statistics can help describe the data. This is known as descriptive statistics. Descriptive statistics is about finding meaningful ways to summarize the data, because it is easier to use the "summary" than having to use the whole set of data all the time. Summarizing the data also allows us to find common patterns. In statistics, such patterns are called probability distributions. The basic idea is to look at the results of an experiment, and look at how the results are grouped. Once the results have been summarized and described they can be used for prediction. This is called Inferential Statistics. As an example, the size of an animal is dependent on many factors. Some of these factors are controlled by the environment, but others by inheritance. A biologist might therefore make a model that says that there's a high probability that the offspring will be small in size if the parents were small in size. This model probably allows us to predict the size in better ways than by just guessing at random. Testing whether a certain drug can be used to cure a certain condition or disease is usually done by comparing the results of people who are given the drug against those of people who are given a placebo. Statistics have been in use for a long time. The first known statistics are census data. The Babylonians did a census around 3500 BC, the Egyptians around 2500 BC, and the Ancient Chinese around 1000 BC. Before we can describe the world with statistics, we must collect data. The data that we collect in statistics are called measurements. After we collect data, we use one or more numbers to describe each observation or measurement. For example, suppose we want to find out how popular a certain TV show is. We can pick a group of people (called a sample) out of the total population of viewers. Then we ask each one how often they watch the show, or (better) we measure it by attaching a counter to each of their television sets. For another example, if we want to know whether a certain drug can help lower blood pressure, we could give the drug to people for some time and measure their blood pressure before and after. Most often we collect statistical data by doing surveys or experiments. To do a survey, we pick a small number of people and ask them questions. Then, we use their answers as the data. The choice of which individuals to take for a survey or data collection is important, as it directly influences the statistics. When the statistics are done, it can no longer be determined which individuals were taken. Suppose we want to measure the water quality of a big lake. If we take samples next to the waste drain, we will get different results than if the samples are taken in a far away, hard to reach, spot of the lake. If there are many samples, the samples will likely be very close to what they are in the real population. If there are very few samples, however, they might be very different from what they are in the real population. This error is called a chance error (see Errors and residuals in statistics). The individuals for the samples need to be chosen carefully; usually they will be chosen randomly.
If this is not the case, the samples might be very different from what they really are in the total population. This is true even if a great number of samples is taken. This kind of error is called bias. We can reduce chance errors by taking a larger sample, and we can avoid some bias by choosing randomly. However, sometimes large random samples are hard to take. And bias can happen if some people refuse to answer our questions, or if they know they are getting a fake treatment. These problems can be hard to fix. See also standard error. The middle of the data is called an average. The average tells us about a typical individual in the population. There are three kinds of average that are often used: the mean, the median and the mode. The mean is given by [math]\bar{x} = \frac{x_1 + x_2 + \cdots + x_N}{N},[/math] where [math]x_1, x_2, \ldots, x_N[/math] are the data and [math]N[/math] is the population size (see Sigma Notation). This means that you add up all the values, and then divide by the number of values. The problem with the mean is that it does not tell anything about how the values are distributed. Values that are very large or very small change the mean a lot. In statistics, these extreme values might be errors of measurement, but sometimes the population really does contain these values. For example, if in a room there are 10 people who make $10/day and 1 who makes $1,000,000/day, the mean of the data is $90,918/day. Even though it is the average amount, the mean in this case is not the amount any single person makes, and is probably useless. The median is the middle item of the data. To find the median we sort the data from the smallest number to the largest number and then choose the number in the middle. If there is an even number of data, there will not be a number right in the middle, so we choose the two middle ones and calculate their mean. In our example there are 10 items of data, the two middle ones are "57" and "64", so the median is (57+64)/2 = 60.5. Another example, like the income example presented for the mean: consider a room with 11 people who have incomes of $10, $20, $20, $40, $50, $60, $90, $90, $90, $100, and $1,000,000. The median is $60, the middle value. If the extreme value of $1,000,000 is ignored, the median is $55 (the average of the two middle numbers, $50 and $60) and the mean is $57. In this case, the median is close to the value obtained when the extreme value is thrown out. The median solves the problem of extreme values as described in the definition of mean above.
The mode is the only form of average that can be used for data that can not be put in order. Another thing we can say about a set of data is how spread out it is. A common way to describe the spread of a set of data is the standard deviation. If the standard deviation of a set of data is small, then most of the data is very close to the average. If the standard deviation is large, though, then a lot of the data is very different from the average. If the data follows the common pattern called the normal distribution, then it is very useful to know the standard deviation. If the data follows this pattern (we would say the data is normally distributed), about 68 of every 100 pieces of data will be off the average by less than the standard deviation. Not only that, but about 95 of every 100 measurements will be off the average by less that two times the standard deviation, and about 997 in 1000 will be closer to the average than three standard deviations. We also can use statistics to find out that some percent, percentile, number, or fraction of people or things in a group do something or fit in a certain category. For example, social scientists used statistics to find out that 49% of people in the world are males. ↑ Moses, Lincoln E. (1986). Think and Explain with statistics. Addison-Wesley. pp. 1 - 3. This page was last changed on 10 December 2013, at 10:00.
CommonCrawl
Abstract: The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a suborbital rocket experiment that on 3rd September 2015 measured the linear polarization produced by scattering processes in the hydrogen Ly-$\alpha$ line of the solar disk radiation, whose line-center photons stem from the chromosphere-corona transition region (TR). These unprecedented spectropolarimetric observations revealed an interesting surprise, namely that there is practically no center-to-limb variation (CLV) in the $Q/I$ line-center signals. Using an analytical model, we first show that the geometrical complexity of the corrugated surface that delineates the TR has a crucial impact on the CLV of the $Q/I$ and $U/I$ line-center signals. Secondly, we introduce a statistical description of the solar atmosphere based on a three-dimensional (3D) model derived from a state-of-the-art radiation magneto-hydrodynamic simulation. Each realization of the statistical ensemble is a 3D model characterized by a given degree of magnetization and corrugation of the TR, and for each such realization we solve the full 3D radiative transfer problem taking into account the impact of the CLASP instrument degradation on the calculated polarization signals. Finally, we apply the statistical inference method presented in a previous paper to show that the TR of the 3D model that produces the best agreement with the CLASP observations has a relatively weak magnetic field and a relatively high degree of corrugation. We emphasize that a suitable way to validate or refute numerical models of the upper solar chromosphere is by confronting calculations and observations of the scattering polarization in ultraviolet lines sensitive to the Hanle effect.
CommonCrawl
There are 2 white knights and 2 black knights positioned on a (3 X 3) chess board. Find the minimum number of moves required to replace the blacks with whites and the whites with blacks. I tried the above in 19 steps and reckon that I'm wrong. Please help !! What is your take on this? Here is my solution, but it seems that my answer takes a larger number of steps. We want to move the whites from $1\&7$ to $9\&3$, and the blacks from $9\&3$ to $1\&7$. It is straightforward that the minimum movement we should take is shifting them all $4$ times to the right or left. Hence, $4 \times 4 = 16$ moves is the optimal one. Since the path is cyclic, there are working positions reached along the way too, of course. I can get you down to 16 moves. I don't know if I can do any better. Eight moves total, so far.
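A brute-force check of the 16-move claim, as a Python sketch. Cells are assumed to be numbered 1 to 9 row by row (so the whites start on 1 and 7, the blacks on 3 and 9; the centre square 5 is unreachable by knight moves):

```python
from collections import deque

# Knight moves on a 3x3 board, cells numbered 1..9 row by row (cell 5 has no knight moves)
MOVES = {1: (6, 8), 2: (7, 9), 3: (4, 8), 4: (3, 9), 5: (),
         6: (1, 7), 7: (2, 6), 8: (1, 3), 9: (2, 4)}

def min_swap_moves(white, black):
    """Breadth-first search over board states; one 'move' is a single knight move."""
    start = (frozenset(white), frozenset(black))
    goal = (frozenset(black), frozenset(white))
    seen = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return seen[state]
        whites, blacks = state
        occupied = whites | blacks
        for colour, pieces in ((0, whites), (1, blacks)):
            for cell in pieces:
                for target in MOVES[cell]:
                    if target in occupied:
                        continue
                    new_pieces = (pieces - {cell}) | {target}
                    new_state = (new_pieces, blacks) if colour == 0 else (whites, new_pieces)
                    if new_state not in seen:
                        seen[new_state] = seen[state] + 1
                        queue.append(new_state)
    return None

print(min_swap_moves(white=(1, 7), black=(3, 9)))   # prints 16
```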
CommonCrawl
I'm trying to convert some pixel coordinates I have into WCS coordinates, ideally into a WCS region for use in some further analysis. So far I've been able to load and parse a NuSTAR FITS file, do some analysis to make my selections and get the pixel coordinates for the image. For example, after my analysis I would end up having selected a rectangle of x values from pixels 480 to 518, and y values from 478 to 516. After that I tried to use the astropy WCS module to convert them, but it doesn't seem to find the required data in the header to actually do the conversion and just says that pixel 480 gives coordinate 480. That or I'm just doing something wrong. So I looked through the FITS header myself and found that some keys (TCRPX, TCRVL, TCDLT) give the reference pixel, reference pixel degree coordinate and pixel axis scale values, so that I can do the conversion myself and then run the result through the nuproducts tool. TL;DR: Does anybody know how to convert the NuSTAR FITS physical pixel coordinates to FK5 coordinates? EDIT: Wrote some code to extract the coordinates. It converts the pixels to degrees fine, but converting the FK5 $\alpha$ angle to sexagesimal doesn't work correctly for... some reason. EDIT2: Everything works now, here's a notebook where I run through it all in case anybody runs into a similar problem. I go through extracting the right ascension and declination in degrees from the pixel coordinates given by the FITS file, then converting these degrees to sexagesimal (h:m:s) angles. From my tests it seems to work quite well; results are a tad off due to what I assume are some floating point arithmetic errors. You can upload your FITS files to astrometry.net (or use an API) and get the coordinates. You can optionally get new FITS files back with the coordinates included in the metadata. From there you will be able to proceed with your processing. And, instead of uploading everything, you can also install a copy of the astrometry.net package locally if you're running Linux. I did this exercise with IRAF using the functions geomap and geotran. There is a PyRAF version of IRAF, but I have never tried the Python version. Using these routines, you can find the transformation equation regardless of the header. What you need are some standard objects with known WCS and pixel coordinates.
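A minimal sketch of the manual route described above, using astropy. The keyword values and the TCRPXn/TCRVLn/TCDLTn column numbers below are placeholders; they have to be read from the event file's own header for the sky X and Y columns:

```python
from astropy.wcs import WCS
from astropy.wcs.utils import pixel_to_skycoord

# Hypothetical keyword values -- read the real ones from the event extension header
# (TCRPXn / TCRVLn / TCDLTn for the sky X and Y columns; n varies between files).
w = WCS(naxis=2)
w.wcs.crpix = [500.5, 500.5]
w.wcs.crval = [150.0123, -30.4567]          # reference RA, Dec in degrees
w.wcs.cdelt = [-0.0006828, 0.0006828]       # degrees per pixel
w.wcs.ctype = ['RA---TAN', 'DEC--TAN']

# Corners of the selected box; origin=1 because the event-file pixel numbers are 1-based
sky = pixel_to_skycoord([480, 518], [478, 516], w, origin=1)
print(sky)                                  # RA, Dec in degrees
print(sky.to_string('hmsdms'))              # sexagesimal RA/Dec, e.g. for a DS9 region file
```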
CommonCrawl