| doc_id (string, len 9–10) | corpus_id (uint64, 1.75k–252M) | title (string, len 6–300) | abstract (string, len 9–6.28k, nullable ⌀) | index (int32, 0–142k) | retweets (float32, 0–28.8) | count (int32, 1–29) | mentions (float32, 2.18–47.9) |
|---|---|---|---|---|---|---|---|
0704.0304 | 29,780 | The World as Evolving Information | null | 0 | 1.764706 | 3 | 4.765 |
0704.0646 | 9,890,455 | The Mathematical Universe | null | 1 | 1.176471 | 4 | 5.176 |
0706.2488 | 119,656,665 | Should physicists begin experimental study of the God's physical nature? | Inequality of forward and reversed processes in quantum physics means an existence of a memory of quantum system about the initial state. Importance of its experimental study for correct interpretation of quantum mechanics and understanding of a physical base of a consciousness is discussed. | 30 | 0 | 3 | 3 |
0707.0699 | 115,167,148 | A double demonstration of a theorem of Newton, which gives a relation between the coefficient of an algebraic equation and the sums of the powers of its roots | Translation from the Latin original, "Demonstratio gemina theorematis Neutoniani, quo traditur relatio inter coefficientes cuiusvis aequationis algebraicae et summas potestatum radicum eiusdem" (1747). E153 in the Enestrom index. In this paper Euler gives two proofs of Newton's identities, which express the sums of powers of the roots of a polynomial in terms of its coefficients. The first proof takes the derivative of a logarithm. The second proof uses induction and the fact that in a polynomial of degree $n$, the coefficient of $x^{n-k}$ is equal to the sum of the products of $k$ roots, times $(-1)^k$. | 35 | 1.176471 | 1 | 2.176 |
0708.1874 | 121,854,016 | Point estimation with exponentially tilted empirical likelihood | Parameters defined via general estimating equations (GEE) can be estimated by maximizing the empirical likelihood (EL). Newey and Smith [Econometrica 72 (2004) 219--255] have recently shown that this EL estimator exhibits desirable higher-order asymptotic properties, namely, that its $O(n^{-1})$ bias is small and that bias-corrected EL is higher-order efficient. Although EL possesses these properties when the model is correctly specified, this paper shows that, in the presence of model misspecification, EL may cease to be root n convergent when the functions defining the moment conditions are unbounded (even when their expectations are bounded). In contrast, the related exponential tilting (ET) estimator avoids this problem. This paper shows that the ET and EL estimators can be naturally combined to yield an estimator called exponentially tilted empirical likelihood (ETEL) exhibiting the same $O(n^{-1})$ bias and the same $O(n^{-2})$ variance as EL, while maintaining root n convergence under model misspecification. | 50 | 0 | 3 | 3 |
0711.0770 | 62,347,300 | An Exceptionally Simple Theory of Everything | All fields of the standard model and gravity are unified as an E8 principal bundle connection. A non-compact real form of the E8 Lie algebra has G2 and F4 subalgebras which break down to strong su(3), electroweak su(2) x u(1), gravitational so(3,1), the frame-Higgs, and three generations of fermions related by triality. The interactions and dynamics of these 1-form and Grassmann valued parts of an E8 superconnection are described by the curvature and action over a four dimensional base manifold. | 81 | 0.588235 | 2 | 2.588 |
0801.1475 | 153,640,078 | Effect of Asian currency crisis on multifractal spectra | null | 112 | 0.588235 | 3 | 3.588 |
0805.2366 | 16,790,489 | LSST: From Science Drivers to Reference Design and Anticipated Data Products | We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the solar system, exploring the transient optical sky, and mapping the Milky Way. LSST will be a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg2 field of view, a 3.2-gigapixel camera, and six filters (ugrizy) covering the wavelength range 320–1050 nm. The project is in the construction phase and will begin regular survey operations by 2022. About 90% of the observing time will be devoted to a deep-wide-fast survey mode that will uniformly observe an 18,000 deg2 region about 800 times (summed over all six bands) during the anticipated 10 yr of operations and will yield a co-added map to r ∼ 27.5. These data will result in databases including about 32 trillion observations of 20 billion galaxies and a similar number of stars, and they will serve the majority of the primary science programs. The remaining 10% of the observing time will be allocated to special projects such as Very Deep and Very Fast time domain surveys, whose details are currently under discussion. We illustrate how the LSST science drivers led to these choices of system parameters, and we describe the expected data products and their characteristics. | 186 | 10.588235 | 1 | 11.588 |
0807.5082 | 117,764,789 | A Monte Carlo Approach to Joe DiMaggio and Streaks in Baseball | We examine Joe DiMaggio's 56-game hitting streak and look at its likelihood, using a number of simple models. And it turns out that, contrary to many people's expectations, an extreme streak, while unlikely in any given year, is not unlikely to have occurred about once within the history of baseball. Surprisingly, however, such a record should have occurred far earlier in baseball history: back in the late 1800's or early 1900's. But not in 1941, when it actually happened. | 235 | 1.176471 | 2 | 3.176 |
0810.4672 | 16,073,753 | Scientists who engage with society perform better academically | Most scientific institutions acknowledge the importance of opening the so-called ‘ivory tower’ of academic research through popularization, industrial collaboration or teaching. However, little is known about the actual openness of scientific institutions and how their proclaimed priorities translate into concrete measures. This paper gives an idea of some actual practices by studying three key points: the proportion of researchers who are active in wider dissemination, the academic productivity of these scientists, and the institutional recognition of their wider dissemination activities in terms of their careers. We analyze extensive data about the academic production, career recognition and teaching or public/industrial outreach of several thousand scientists, from many disciplines, from France's Centre National de la Recherche Scientifique. We find that, contrary to what is often suggested, scientists active in wider dissemination are also more active academically. However, their dissemination activities have almost no impact (positive or negative) on their careers. | 289 | 17.647058 | 5 | 22.646999 |
0811.0164 | 14,027,607 | A strict non-standard inequality .999... < 1 | Is .999... equal to 1? Lightstone's decimal expansions yield an infinity of numbers in [0,1] whose expansion starts with an unbounded number of digits "9". We present some non-standard thoughts on the ambiguity of an ellipsis, modeling the cognitive concept of generic limit of B. Cornu and D. Tall. A choice of a non-standard hyperinteger H specifies an H-infinite extended decimal string of 9s, corresponding to an infinitesimally diminished hyperreal value. In our model, the student resistance to the unital evaluation of .999... is directed against an unspoken and unacknowledged application of the standard part function, namely the stripping away of a ghost of an infinitesimal, to echo George Berkeley. So long as the number system has not been specified, the students' hunch that .999... can fall infinitesimally short of 1, can be justified in a mathematically rigorous fashion. | 298 | 1.764706 | 1 | 2.765 |
0812.0347 | 119,269,274 | The point spread function of electrons in a magnetic field, and the decay of the free neutron | null | 320 | 0.588235 | 2 | 2.588 |
0812.3367 | 17,973,467 | Scattering in Mass-Deformed N>=4 Chern-Simons Models | We investigate the scattering matrix in mass-deformed N ≥ 4 Chern-Simons models including as special cases the BLG and ABJM theories of multiple M2 branes. Curiously the structure of this scattering matrix in three spacetime dimensions is equivalent to (a) the two-dimensional worldsheet matrix found in the context of AdS/CFT integrability and (b) the R-matrix of the one-dimensional Hubbard model. The underlying reason is that all three models are based on an extension of the (2|2) superalgebra which constrains the matrix completely. We also compute scattering amplitudes in one-loop field theory and find perfect agreement with scattering unitarity. | 331 | 12.352942 | 1 | 13.353 |
0901.0768 | 118,790,642 | Mathematical irrational numbers not so physically irrational | We investigate the topological structure of the decimal expansions of the three famous naturally occurring irrational numbers, $\pi$, $e$, and golden ratio, by explicitly calculating the diversity of the pair distributions of the ten digits ranging from 0 to 9. And we find that there is a universal two-phase behavior, which collapses into a single curve with a power law phenomenon. We further reveal that the two-phase behavior is closely related to general aspects of phase transitions in physical systems. It is then numerically shown that such characteristics originate from an intrinsic property of genuine random distribution of the digits in decimal expansions. Thus, mathematical irrational numbers are not so physically irrational as long as they have such an intrinsic property. | 346 | 0.588235 | 2 | 2.588 |
0901.3003 | 325,280 | Timed Tuplix Calculus and the Wesseling and van den Bergh Equation | We develop an algebraic framework for the description and analysis of financial behaviours, that is, behaviours that consist of transferring certain amounts of money at planned times. To a large extent, analysis of financial products amounts to analysis of such behaviours. We formalize the cumulative interest compliant conservation requirement for financial products proposed by Wesseling and van den Bergh by an equation in the framework developed and define a notion of financial product behaviour using this formalization. We also present some properties of financial product behaviours. The development of the framework has been influenced by previous work on the process algebra ACP. | 357 | 0 | 5 | 5 |
0902.0431 | 115,171,846 | Exceptional Lie groups | We describe simply connected compact exceptional simple Lie groups in a very elementary way. We first construct all simply connected compact exceptional Lie groups G concretely. Next, we find all involutive automorphisms of G, and determine the group structures of the fixed-point subgroups. They correspond to the classification of all irreducible compact symmetric spaces of exceptional type, and they also correspond to the classification of all non-compact exceptional simple Lie groups. Finally, we determine the group structures of the maximal subgroups of maximal rank. At any rate, we would like this book to be used in mathematics and physics. | 372 | 4.705883 | 2 | 6.706 |
0903.3246 | 16,125,716 | Lectures on holographic methods for condensed matter physics | These notes are loosely based on lectures given at the CERN Winter School on Supergravity, Strings and Gauge theories, February 2009, and at the IPM String School in Tehran, April 2009. I have focused on a few concrete topics and also on addressing questions that have arisen repeatedly. Background condensed matter physics material is included as motivation and easy reference for the high energy physics community. The discussion of holographic techniques progresses from equilibrium, to transport and to superconductivity. | 427 | 0.588235 | 2 | 2.588 |
0903.4377 | 115,132,787 | Dark matter axions revisited | We study for what specific values of the theoretical parameters the axion can form the totality of cold dark matter. We examine the allowed axion parameter region in the light of recent data collected by the WMAP5 mission plus baryon acoustic oscillations and supernovae, and assume an inflationary scenario and standard cosmology. We also upgrade the treatment of anharmonicities in the axion potential, which we find important in certain cases. If the Peccei-Quinn symmetry is restored after inflation, we recover the usual relation between axion mass and density, so that an axion mass m_a = (85 ± 3) μeV makes the axion 100% of the cold dark matter. If the Peccei-Quinn symmetry is broken during inflation, the axion can instead be 100% of the cold dark matter for m_a < 15 meV provided a specific value of the initial misalignment angle θ_i is chosen in correspondence to a given value of its mass m_a. Large values of the Peccei-Quinn symmetry breaking scale correspond to small, perhaps uncomfortably small, values of the initial misalignment angle θ_i. | 434 | 1.176471 | 1 | 2.176 |
0903.5321 | 117,703,713 | Time variation of a fundamental dimensionless constant | We examine the time variation of a previously-uninvestigated fundamental dimensionless constant. Constraints are placed on this time variation using historical measurements. A model is presented for the time variation, and it is shown to lead to an accelerated expansion for the universe. Directions for future research are discussed. | 441 | 0.588235 | 3 | 3.588 |
0904.0382 | 116,903,261 | Dynamics of Universe in Problems | To the best of our knowledge, there are no problem books on cosmology yet that include its spectacular recent achievements. We believe there is a strong need for one now, when cosmology is swiftly becoming a strict and vast science, and such a book would be extremely useful for the young researchers pouring into this area. Indeed, the only way to rise above the popular level in any science is to master its alphabet, that is, to learn to solve problems.
Of course, most modern textbooks on cosmology include problems. However, a reader exhausted by high theory may often lack the time and strength to solve them. Might it be worth sometimes changing tactics and just throwing those who wish to learn to swim into the water? We present an updated version of the "Dynamics of the Universe in Problems". We have added the following new sections: "Gravitational Waves", "Interactions in the Dark Sector", "Horizons" and "Quantum Cosmology". A number of new problems have been added to almost every section. The total number of problems exceeds fifteen hundred. Solutions to all the problems can be found at www.universeinproblems.com | 444 | 0 | 3 | 3 |
0904.1426 | 52,994,161 | WHAT ARE THE LIMITS ON COMMERCIAL BANK LENDING | Analysis of the 2007–2008 credit crisis has concentrated on issues of relaxed lending standards, and the perception of irrational behavior by speculative investors in real estate and other assets. Asset backed securities have been extensively criticized for creating a moral hazard in loan issuance and an associated increase in default risk, by removing the immediate lender's incentive to ensure that the underlying loans could be repaid. However significant monetary issues can accompany any form of increased commercial bank lending, and these appear to have been overlooked by this analysis. In this paper we propose a general explanation for credit crises based on an examination of the mechanics of the banking system, and in particular its internal controls on the supply of credit. We suggest that the current credit crisis is the result of multiple failures in the Basel regulatory framework, including the removal of central bank reserve requirements from some classes of deposit accounts within the banking system, allowing financial instruments representing debt to be used as regulatory capital, and in particular the introduction of securitized lending which effectively removed a previously implicit control over the total quantity of lending originating from the banking system. We further argue that the interaction of these problems has led to a destabilizing imbalance between total money and loan supply growth, in that total lending sourced from the commercial bank sector increased at a faster rate than accompanying growth in the money supply. This not only created a multi-decade macro-economic debt spiral, but by increasing the ratio of debt to money within the monetary system acted to increase the risk of loan defaults, and consequentially reduce the overall stability of the banking system. | 452 | 0.588235 | 4 | 4.588 |
0906.4091 | 116,986,055 | Dark Energy Regulation with Approximate Emergent Conformal Symmetry | A cosmic potential which can relax the vacuum energy is proposed in a framework of scalar-tensor gravity. In the phase of the gravity scalar field around the evolution with an approximate emergent conformal symmetry, we have obtained a set of cosmological equations with the dark energy regulated to the order of a conformal anomaly parameter. Through a role of the cosmic potential, the vacuum energy which could be generated in matter Lagrangian does not contribute to the dark energy in the phase. | 563 | 1.176471 | 1 | 2.176 |
0906.5418 | 1,684,374 | Citing and reading behaviours in high-energy physics | null | 576 | 5.882353 | 5 | 10.882 |
0907.0455 | 9,077,554 | The Peter Principle Revisited: A Computational Study | null | 580 | 0 | 5 | 5 |
0907.3282 | 14,300,360 | An optimal execution problem with market impact | null | 603 | 0 | 3 | 3 |
0907.4740 | 8,791,542 | Positional effects on citation and readership in arXiv | arXiv.org mediates contact with the literature for entire scholarly communities, both through provision of archival access and through daily email and web announcements of new materials, potentially many screenlengths long. We confirm and extend a surprising correlation between article position in these initial announcements, ordered by submission time, and later citation impact, due primarily to intentional "self-promotion" on the part of authors. A pure "visibility" effect was also present: the subset of articles accidentally in early positions fared measurably better in the long-term citation record than those lower down. Astrophysics articles announced in position 1, for example, overall received a median number of citations 83% higher, while those there accidentally had a 44% visibility boost. For two large subcommunities of theoretical high energy physics, hep-th and hep-ph articles announced in position 1 had median numbers of citations 50% and 100% larger than for positions 5--15, and the subsets there accidentally had visibility boosts of 38% and 71%.
We also consider the positional effects on early readership. The median numbers of early full text downloads for astro-ph, hep-th, and hep-ph articles announced in position 1 were 82%, 61%, and 58% higher than for lower positions, respectively, and those there accidentally had medians visibility-boosted by 53%, 44%, and 46%. Finally, we correlate a variety of readership features with long-term citations, using machine learning methods, thereby extending previous results on the predictive power of early readership in a broader context. We conclude with some observations on impact metrics and dangers of recommender mechanisms. | 618 | 0 | 3 | 3 |
0908.1926 | 8,552,378 | High Order Discretization Schemes for Stochastic Volatility Models | In usual stochastic volatility models, the process driving the volatility of the asset price evolves according to an autonomous one-dimensional stochastic differential equation. We assume that the coefficients of this equation are smooth. Using Ito's formula, we get rid, in the asset price dynamics, of the stochastic integral with respect to the Brownian motion driving this SDE. Taking advantage of this structure, we propose - a scheme, based on the Milstein discretization of this SDE, with order one of weak trajectorial convergence for the asset price, - a scheme, based on the Ninomiya-Victoir discretization of this SDE, with order two of weak convergence for the asset price. We also propose a specific scheme with improved convergence properties when the volatility of the asset price is driven by an Ornstein-Uhlenbeck process. We confirm the theoretical rates of convergence by numerical experiments and show that our schemes are well adapted to the multilevel Monte Carlo method introduced by Giles [2008a, 2008b]. | 651 | 0 | 3 | 3 |
0908.3091 | 14,509,538 | Computational Understanding and Manipulation of Symmetries | null | 661 | 0 | 3 | 3 |
0909.4765 | 153,643,615 | Linear stochastic volatility models | In this paper we investigate general linear stochastic volatility models with correlated Brownian noises. In such models the asset price satisfies a linear SDE with coefficient of linearity being the volatility process. This class contains among others the Black-Scholes model, a log-normal stochastic volatility model and the Heston stochastic volatility model. For a linear stochastic volatility model we derive representations for the probability density function of the arbitrage price of a financial asset and the prices of European call and put options. Closed-form formulae for the density function and the prices of European call and put options are given for the log-normal stochastic volatility model. We also present some new results for the Heston and extended Heston stochastic volatility models. | 720 | 0.588235 | 4 | 4.588 |
0910.1671 | 220,665,749 | Geometric Arbitrage Theory and Market Dynamics Reloaded | We have embedded the classical theory of stochastic finance into a differential geometric framework called Geometric Arbitrage Theory and show that it is possible to: --Write arbitrage as curvature of a principal fibre bundle. --Parameterize arbitrage strategies by its holonomy. --Give the Fundamental Theorem of Asset Pricing a differential homotopic characterization. --Characterize Geometric Arbitrage Theory by five principles and show that they are consistent with the classical theory of stochastic finance. --Derive for a closed market the equilibrium solution for market portfolio and dynamics in the cases where: -->Arbitrage is allowed but minimized. -->Arbitrage is not allowed. --Prove that the no-free-lunch-with-vanishing-risk condition implies the zero curvature condition. The converse is in general not true and additionally requires the Novikov condition for the instantaneous Sharpe Ratio Dynamics to be satisfied. | 751 | 0 | 8 | 8 |
0910.3926 | 60,078 | A new proof of the density Hales-Jewett theorem | The Hales-Jewett theorem asserts that for every r and every k there exists n such that every r-colouring of the n-dimensional grid {1,...,k}^n contains a combinatorial line. This result is a generalization of van der Waerden's theorem, and it is one of the fundamental results of Ramsey theory. The theorem of van der Waerden has a famous density version, conjectured by Erdos and Turan in 1936, proved by Szemeredi in 1975, and given a different proof by Furstenberg in 1977. The Hales-Jewett theorem has a density version as well, proved by Furstenberg and Katznelson in 1991 by means of a significant extension of the ergodic techniques that had been pioneered by Furstenberg in his proof of Szemeredi's theorem. In this paper, we give the first elementary proof of the theorem of Furstenberg and Katznelson, and the first to provide a quantitative bound on how large n needs to be. In particular, we show that a subset of {1,2,3}^n of density delta contains a combinatorial line if n is at least a tower of 2's of height O(1/delta^3). Our proof is reasonably simple: indeed, it gives what is arguably the simplest known proof of Szemeredi's theorem. | 784 | 0 | 3 | 3 |
0910.4113 | 113,405,438 | Galaxy Zoo: Passive Red Spirals . | We study the spectroscopic properties and environments of red (or passive) spiral galaxies found by the Galaxy Zoo project. By carefully selecting face-on disc-dominated spirals, we construct a sample of truly passive discs (i.e. they are not dust reddened spirals, nor are they dominated by old stellar populations in a bulge). As such, our red spirals represent an interesting set of possible transition objects between normal blue spiral galaxies and red early types, making up ∼6 per cent of late-type spirals. We use optical images and spectra from the Sloan Digital Sky Survey to investigate the physical processes which could have turned these objects red without disturbing their morphology. We find red spirals preferentially in intermediate density regimes. However, there are no obvious correlations between red spiral properties and environment, suggesting that environment alone is not sufficient to determine whether a galaxy will become a red spiral. Red spirals are a very small fraction of all spirals at low masses (M★ < 1010 M⊙), but are a significant fraction of the spiral population at large stellar masses, showing that massive galaxies are red independent of morphology. We confirm that, as expected, red spirals have older stellar populations and less recent star formation than the main spiral population. While the presence of spiral arms suggests that major star formation could not have ceased long ago (not more than a few Gyr), we show that these are also not recent post-starburst objects (having had no significant star formation in the last Gyr), so star formation must have ceased gradually.
Intriguingly, red spirals are roughly four times as likely as the normal spiral population to host an optically identified Seyfert/low-ionization nuclear emission region (LINER; at a given stellar mass and even accounting for low-luminosity lines hidden by star formation), with most of the difference coming from the objects with LINER-like emission. We also find a curiously large optical bar fraction in the red spirals (70 ± 5 versus 27 ± 5 per cent in blue spirals) suggesting that the cessation of star formation and bar instabilities in spirals are strongly correlated. We conclude by discussing the possible origins of these red spirals. We suggest that they may represent the very oldest spiral galaxies which have already used up their reserves of gas – probably aided by strangulation or starvation, and perhaps also by the effect of bar instabilities moving material around in the disc. We provide an online table listing our full sample of red spirals along with the normal/blue spirals used for comparison. | 785 | 0 | 3 | 3 |
0911.1824 | 10,920,772 | Community Structure in Time-Dependent, Multiscale, and Multiplex Networks | Networks are often characterized by clusters of constituents that interact more closely with each other and have more connections to one another than they do with the rest of the components of the network. However, systematically identifying and studying such community structure in complicated networks is not easy, especially when the network interactions change over time or contain multiple types of connections, as seen in many biological regulatory networks or social networks. Mucha et al. (p. 876) developed a mathematical method to allow detection of communities that may be critical functional units of such networks. Application to real-world tasks—like making sense of the voting record in the U.S. Senate—demonstrated the promise of the method. A general mathematical method used to identify closely interacting groups can explain the behavior of complicated networks. Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales. | 854 | 0 | 3 | 3 |
0911.3789 | 117,934,010 | On the Existence Of Consistent Price Systems | We formulate a sufficient condition for the existence of a consistent price system (CPS), which is weaker than the conditional full support condition (CFS). We use the new condition to show the existence of CPSs for certain processes that fail to have the CFS property. In particular this condition gives sufficient conditions, under which a continuous function of a process with CFS admits a CPS, while the CFS property might be lost. | 894 | 0 | 3 | 3 |
0911.3802 | 21,946,226 | A Coupled Markov Chain Approach to Credit Risk Modeling | null | 896 | 0 | 3 | 3 |
0911.5239 | 18,577,919 | Opinion Dynamics With Decaying Confidence: Application to Community Detection in Graphs | We study a class of discrete-time multi-agent systems modeling opinion dynamics with decaying confidence. We consider a network of agents where each agent has an opinion. At each time step, the agents exchange their opinion with their neighbors and update it by taking into account only the opinions that differ from their own less than some confidence bound. This confidence bound is decaying: an agent gives repetitively confidence only to its neighbors that approach sufficiently fast its opinion. Essentially, the agents try to reach an agreement with the constraint that it has to be approached no slower than a prescribed convergence rate. Under that constraint, global consensus may not be achieved and only local agreements may be reached. The agents reaching a local agreement form communities inside the network. In this paper, we analyze this opinion dynamics model: we show that communities correspond to asymptotically connected components of the network and give an algebraic characterization of communities in terms of eigenvalues of the matrix defining the collective dynamics. Finally, we apply our opinion dynamics model to address the problem of community detection in graphs. We propose a new formulation of the community detection problem based on eigenvalues of normalized Laplacian matrix of graphs and show that this problem can be solved using our opinion dynamics model. We consider three examples of networks, and compare the communities we detect with those obtained by existing algorithms based on modularity optimization. We show that our opinion dynamics model not only provides an appealing approach to community detection but that it is also effective. | 913 | 0 | 3 | 3 |
0912.0004 | 118,596,941 | Higgs in space | We consider the possibility that the Higgs can be produced in dark matter annihilations, appearing as a line in the spectrum of gamma rays at an energy determined by the masses of the WIMP and the Higgs itself. We argue that this phenomenon occurs generally in models in which the dark sector has large couplings to the most massive states of the SM and provide a simple example inspired by the Randall-Sundrum vision of dark matter, whose 4d dual corresponds to electroweak symmetry-breaking by strong dynamics which respect global symmetries that guarantee a stable WIMP. The dark matter is a Dirac fermion that couples to a Z' acting as a portal to the Standard Model through its strong coupling to top quarks. Annihilation into light standard model degrees of freedom is suppressed and generates a feeble continuum spectrum of gamma rays. Loops of top quarks mediate annihilation into γZ, γh, and γZ', providing a forest of lines in the spectrum. Such models can be probed by the Fermi/GLAST satellite and ground-based Air Cherenkov telescopes. | 927 | 0 | 3 | 3 |
0912.0238 | 5,262,868 | Spectral ranking | We sketch the history of spectral ranking—a general umbrella name for techniques that apply the theory of linear maps (in particular, eigenvalues and eigenvectors) to matrices that do not represent geometric transformations, but rather some kind of relationship between entities. Albeit recently made famous by the ample press coverage of Google's PageRank algorithm, spectral ranking was devised more than 60 years ago, almost exactly in the same terms, and has been studied in psychology, social sciences, bibliometrics, economy, and choice theory. We describe the contribution given by previous scholars in precise and modern mathematical terms: Along the way, we show how to express in a general way damped rankings, such as Katz's index, as dominant eigenvectors of perturbed matrices, and then use results on the Drazin inverse to go back to the dominant eigenvectors by a limit process. The result suggests a regularized definition of spectral ranking that yields for a general matrix a unique vector depending on a boundary condition. | 934 | 0 | 3 | 3 |
0912.0520 | 118,417,017 | Electroweak stars: How nature may capitalize on the standard model's ultimate fuel | We study the possible existence of an electroweak star — a compact stellar-mass object whose central core temperature is higher than the electroweak symmetry restoration temperature. We found a solution to the Tolman-Oppenheimer-Volkoff equations describing such an object. The parameters of such a star are not substantially different from a neutron star — its mass is around 1.3 Solar masses while its radius is around 8 km. What is different is the existence of a small electroweak core. The source of energy in the core that can at least temporarily balance gravity are standard-model non-perturbative baryon number (B) and lepton number (L) violating processes that allow the chemical potential of B+L to relax to zero. The energy released at the core is enormous, but gravitational redshift and the enhanced neutrino interaction cross section at these energies make the energy release rate moderate at the surface of the star. The lifetime of this new quasi-equilibrium can be more than ten million years. This is long enough to represent a new stage in the evolution of a star if stellar evolution can take it there. | 940 | 0 | 4 | 4 |
0912.1601 | 119,236,931 | Results from the CDMS II experiment | I report recent results and the status of the Cryogenic Dark Matter Search (CDMS II) experiment at the Soudan Underground Laboratory in Minnesota, USA. A blind analysis of data taken by 30 detectors between October 2006 and July 2007 found zero events consistent with WIMPs elastically scattering in our Ge detectors. This resulted in an upper limit on the spin-independent, WIMP-nucleon cross section of 6.6 × 10^-44 cm^2 (4.6 × 10^-44 cm^2 when combined with our previous results) at the 90% C.L. for a WIMP of mass 60 GeV/c^2. In March 2009 data taking with CDMS II stopped in order to install the first of 5 SuperTowers of detectors for the SuperCDMS Soudan project. Analysis of data taken between August 2007 and March 2009 is ongoing. | 964 | 0 | 4 | 4 |
0912.3320 | 117,770,472 | The INTEGRAL/SPI 511 keV signal from hidden valleys in type Ia and core collapse supernova explosions | null | 1,007 | 0.588235 | 5 | 5.588 |
0912.3362 | 102,494,566 | Asymptotic power utility-based pricing and hedging | null | 1,008 | 0.588235 | 3 | 3.588 |
0912.4389 | 15,777,360 | Line Graphs of Weighted Networks for Overlapping Communities | null | 1,037 | 0.588235 | 3 | 3.588 |
1001.0785 | 3,597,565 | On the origin of gravity and the laws of Newton | null | 1,102 | 0.588235 | 8 | 8.588 |
1001.0918 | 118,581,159 | Do Solids Flow? | null | 1,105 | 0 | 3 | 3 |
1001.1199 | 14,229,079 | New ways of scientific publishing and accessing human knowledge inspired by transdisciplinary approaches | Inspired by interdisciplinary work touching biology and microtribology, the authors propose a new, dynamic way of publishing research results, the establishment of a tree of knowledge and the localisation of scientific articles on this tree. The current two-dimensional standard of scientific publications is outdated. Over-information in almost any field is a problem. Therefore, it is suggested that, to succeed and be read, modern publications should be dynamic and use all types of multimedia. Such ways of presenting and managing research results would be accessible by people with different kinds of backgrounds and levels of education, and allow for full use of the ever-increasing number of scientific and technical publications. This approach would dramatically change and revolutionise the way we are doing science, and contribute to overcoming the three gaps between the world of ideas, inventors, innovators and investors as introduced by Gebeshuber, Gruber and Drack in 2009 for accelerated scientific and technological breakthroughs to improve the human condition. Inspiration for the development of the above methods was the fact that tribologists and biologists generally do not see many overlaps between their professions. However, both deal with materials, structures and processes. Tribology is omnipresent in biology, and many biological systems have impressive tribological properties. Tribologists can therefore get valuable input and inspiration from living systems. The aim of biomimetics is knowledge transfer from biology to technology, and successful biomimetics in tribology needs collaboration between biologists and tribologists. Literature search shows that the number of papers regarding biotribology is steadily increasing. However, at the moment, most scientific papers of the other respective field are hard to access and hard to understand, in terms of concepts and specific wording, hindering successful collaboration and resulting in the long time needed to speak and understand the other field's language. For example, there is a plenitude of biology papers that deal with friction, adhesion, wear and lubrication that were written solely for a biology readership and that have high potential to serve as inspiration for tribology if they were available in a language or in an environment accessible to tribologists. The three needs that can be identified regarding successful biomimetics for microtribologists (i.e. a joint language, a joint way of publishing results, and joint seminars, workshops and conferences) are developed further into a general concept concerning the future of scientific publications and the ordering of, as well as access to, the knowledge of our time. | 1,114 | 0.588235 | 2 | 2.588 |
1001.1401 | 8,413,128 | Incorporating characteristics of human creativity into an evolutionary art algorithm | null | 1,116 | 0 | 3 | 3 |
1001.1697 | 119,304,462 | Effect of Sun and Planet-Bound Dark Matter on Planet and Satellite Dynamics in the Solar System | We apply our recent results on orbital dynamics around a mass-varying central body to the phenomenon of accretion of Dark Matter (assumed not self-annihilating) on the Sun and the major bodies of the solar system due to its motion throughout the Milky Way halo. We inspect its consequences on the orbits of the planets and their satellites over timescales of the order of the age of the solar system. It turns out that a solar Dark Matter accretion rate of ≈ 10^-12 yr^-1, inferred from the upper limit ΔM/M = 0.02–0.05 on the Sun's Dark Matter content, assumed somehow accumulated during the last 4.5 Gyr, would have displaced the planets by about 10^-2–10^1 au 4.5 Gyr ago. Another consequence is that the semimajor axis of the Earth's orbit, approximately equal to the Astronomical Unit, would undergo a secular increase of 0.02–0.05 m yr^-1, in agreement with the latest observational determinations of the Astronomical Unit secular increase of 0.07±0.02 m yr^-1 and 0.05 m yr^-1. By assuming that the Sun will continue to accrete Dark Matter over the next billions of years at the same rate as putatively done in the past, the orbits of its planets will shrink by about 10^-1–10^1 au ( ≈ 0.2–0.5 au for the Earth), with consequences for their fate, especially of the inner planets. On the other hand, lunar and planetary ephemerides set upper bounds on the secular variation of the Sun's gravitational parameter GM which are one order of magnitude smaller than ≈ 10^-12 yr^-1. Dark Matter accretion on planets has, instead, less relevant consequences for their satellites. Indeed, 4.5 Gyr ago their orbits would have been just 10^-2–10^1 km wider than now. Dark Matter accretion is not able to explain the observed accelerations of the orbits of some of the Galilean satellites of Jupiter, the secular decrease of the semimajor axis of the Earth's artificial satellite LAGEOS and the secular increase of the Moon's orbit eccentricity. | 1,121 | 1.176471 | 2 | 3.176 |
1001.3253 | 55,449,955 | Bayesian Thought in Early Modern Detective Stories: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes | This paper reviews the maxims used by three early modern fictional detectives: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes. It finds similarities between these maxims and Bayesian thought. Poe's Dupin uses ideas very similar to Bayesian game theory. Sherlock Holmes' statements also show thought patterns justifiable in Bayesian terms. | 1,171 | 2.941176 | 2 | 4.941 |
1001.4243 | 55,166,272 | Forming the Moon from terrestrial silicate-rich material | null | 1,210 | 0 | 3 | 3 |
1002.0621 | 119,287,485 | Transverse momentum and pseudorapidity distributions of charged hadrons in pp collisions at sqrt(s) = 0.9 and 2.36 TeV | null | 1,268 | 0.588235 | 4 | 4.588 |
1002.1936 | 6,400,932 | Making sense of the evolution of a scientific domain: a visual analytic study of the Sloan Digital Sky Survey research | null | 1,312 | 1.176471 | 2 | 3.176 |
1002.2284 | 15,649,030 | Markets are Efficient if and Only if P = NP | I prove that if markets are efficient, meaning current prices fully reflect all information available in past prices, then P=NP, meaning every computational problem whose solution can be verified in polynomial time can also be solved in polynomial time. I also prove the converse by showing how we can “program” the market to solve NP-complete problems. Since P probably does not equal NP, markets are probably not efficient. Specifically, markets become increasingly inefficient as the time series lengthens or becomes more frequent. An illustration by way of partitioning the excess returns to momentum strategies based on data availability confirms this prediction. | 1,328 | 0 | 7 | 7 |
1002.3019 | 54,212,349 | Practical use of variational principles for modeling water waves | null | 1,360 | 0 | 3 | 3 |
1002.3286 | 32,880,913 | Entropic origin of disassortativity in complex networks. | Why are most empirical networks, with the prominent exception of social ones, generically degree-degree anticorrelated? To answer this long-standing question, we define the ensemble of correlated networks and obtain the associated Shannon entropy. Maximum entropy can correspond to either assortative (correlated) or disassortative (anticorrelated) configurations, but in the case of highly heterogeneous, scale-free networks a certain disassortativity is predicted--offering a parsimonious explanation for the question above. Our approach provides a neutral model from which, in the absence of further knowledge regarding network evolution, one can obtain the expected value of correlations. When empirical observations deviate from the neutral predictions--as happens for social networks--one can then infer that there are specific correlating mechanisms at work. | 1,371 | 0 | 3 | 3 |
1002.4290 | 197,476,331 | A weakly universal cellular automaton in the hyperbolic 3D space with three states | In this paper, we significantly improve a previous result by the same author showing the existence of a weakly universal cellular automaton with five states living in the hyperbolic 3D-space. Here, we get such a cellular automaton with three states only. | 1,429 | 0 | 3 | 3 |
1002.4482 | 17,121,184 | Exploring the Limits of GPUs With Parallel Graph Algorithms | In this paper, we explore the limits of graphics processors (GPUs) for general purpose parallel computing by studying problems that require highly irregular data access patterns: parallel graph algorithms for list ranking and connected components. Such graph problems represent a worst case scenario for coalescing parallel memory accesses on GPUs which is critical for good GPU performance. Our experimental study indicates that PRAM algorithms are a good starting point for developing efficient parallel GPU methods but require non-trivial modifications to ensure good GPU performance. We present a set of guidelines that help algorithm designers adapt PRAM graph algorithms for parallel GPU computation. We point out that the study of parallel graph algorithms for GPUs is of wider interest for discrete and combinatorial problems in general because many of these problems require similar irregular data access patterns. | 1,441 | 0 | 3 | 3 |
1002.4615 | 7,833,399 | Effects of mass media action on the Axelrod model with social influence. | The use of dyadic interaction between agents, in combination with homophily (the principle that "likes attract") in the Axelrod model for the study of cultural dissemination, has two important problems: the prediction of monoculture in large societies and an extremely narrow window of noise levels in which diversity with local convergence is obtained. Recently, the inclusion of social influence has proven to overcome them [A. Flache and M. W. Macy, e-print arXiv:0808.2710]. Here, we extend the Axelrod model with social influence interaction for the study of mass media effects through the inclusion of a superagent which acts over the whole system and has non-null overlap with each agent of the society. The dependence with different parameters as the initial social diversity, size effects, mass media strength, and noise is outlined. Our results might be relevant in several socioeconomic contexts and for the study of the emergence of collective behavior in complex social systems. | 1,455 | 0 | 3 | 3 |
1002.4738 | 16,540,479 | An Approach to Ad hoc Cloud Computing | We consider how underused computing resources within an enterprise may be harnessed to improve utilization and create an elastic computing infrastructure. Most current cloud provision involves a data center model, in which clusters of machines are dedicated to running cloud infrastructure software. We propose an additional model, the ad hoc cloud, in which infrastructure software is distributed over resources harvested from machines already in existence within an enterprise. In contrast to the data center cloud model, resource levels are not established a priori, nor are resources dedicated exclusively to the cloud while in use. A participating machine is not dedicated to the cloud, but has some other primary purpose such as running interactive processes for a particular user. We outline the major implementation challenges and one approach to tackling them. | 1,464 | 0 | 3 | 3 |
1003.0115 | 5,530,910 | Opinion dynamics with confidence threshold: an alternative to the Axelrod model | The voter model and the Axelrod model are two of the main stochastic processes that describe the spread of opinions on networks. The former includes social influence, the tendency of individuals to become more similar when they interact, while the latter also accounts for homophily, the tendency to interact more frequently with individuals which are more similar. The Axelrod model has been extensively studied during the past ten years based on numerical simulations. In contrast, we give rigorous analytical results for a generalization of the voter model that is closely related to the Axelrod model as it combines social influence and confidence threshold, which is modeled somewhat similarly to homophily. Each vertex of the network, represented by a finite connected graph, is characterized by an opinion and may interact with its adjacent vertices. Like the voter model, an interaction results in an agreement between both interacting vertices -- social influence -- but unlike the voter model, an interaction takes place if and only if the vertices' opinions are within a certain distance -- confidence threshold. In a deterministic static approach, we first give lower and upper bounds for the maximum number of opinions that can be supported by the network as a function of the confidence threshold and various characteristics of the graph. The number of opinions coexisting at equilibrium is then investigated in a probabilistic dynamic approach for the stochastic process starting from a random configuration ... | 1,492 | 0 | 3 | 3 |
1003.0449 | 35,994,355 | Galaxy Zoo: bars in disc galaxies | We present first results from Galaxy Zoo 2, the second phase of the highly successful Galaxy Zoo project (http://www.galaxyzoo.org). Using a volume-limited sample of 13 665 disc galaxies (0.01 < z < 0.06 and Mr < −19.38), we study the fraction of galaxies with bars as a function of global galaxy properties like colour, luminosity and bulge prominence. Overall, 29.4 ± 0.5 per cent of galaxies in our sample have a bar, in excellent agreement with previous visually classified samples of galaxies (although this overall fraction is lower than that measured by automated bar-finding methods). We see a clear increase in the bar fraction with redder (g−r) colours, decreased luminosity and in galaxies with more prominent bulges, to the extent that over half of the red, bulge-dominated disc galaxies in our sample possess a bar. We see evidence for a colour bimodality for our sample of disc galaxies, with a ‘red sequence’ that is both bulge and bar dominated, and a ‘blue cloud’ which has little, or no, evidence for a (classical) bulge or bar. These results are consistent with similar trends for barred galaxies seen recently both locally and at higher redshift, and with early studies using the RC3. We discuss these results in the context of internal (secular) galaxy evolution scenarios and the possible links to the formation of bars and bulges in disc galaxies. | 1,506 | 1.764706 | 1 | 2.765 |
1003.0469 | 5,908,419 | Information-Sharing and Privacy in Social Networks | We present a new model for reasoning about the way information is shared among friends in a social network, and the resulting ways in which it spreads. Our model formalizes the intuition that revealing personal information in social settings involves a trade-off between the benefits of sharing information with friends, and the risks that additional gossiping will propagate it to people with whom one is not on friendly terms. We study the behavior of rational agents in such a situation, and we characterize the existence and computability of stable information-sharing networks, in which agents do not have an incentive to change the partners with whom they share information. We analyze the implications of these stable networks for social welfare, and the resulting fragmentation of the social network. | 1,509 | 0 | 3 | 3 |
1003.0508 | 11,856,326 | Comments on “The Depth-Dependent Current and Wave Interaction Equations: A Revision” | Equations for the wave-averaged three-dimensional momentum equations have been published in this journal. It appears that these equations are not consistent with the known depth-integrated momentum balance, especially over a sloping bottom. These equations should thus be considered with caution, because they can produce erroneous flows, particularly outside of the surf zone. It is suggested that the inconsistency in the equations may arise from the different averaging operators applied to the different terms of the momentum equation. It is concluded that other forms of the momentum equations, expressed in terms of the quasi-Eulerian velocity, are better suited for three-dimensional modeling of wave–current interactions. | 1,511 | 0 | 3 | 3 |
1003.0575 | 628,749 | The genome is software and evolution is a software developer | The genome is software because it is a set of verbal instructions for a programmable computer, the ribosome. The theory of evolution now reads: evolution is the software developer responsible for the existence of the genome. We claim that this setting, whose official name is genetic programming, is necessary and sufficient to discuss all important questions about evolution. A great effort has been made to pass from wording to science, i.e., from naive theories to robust models to predictions to testing for falsification. | 1,518 | 0 | 3 | 3 |
1003.0692 | 18,411,678 | Centrality scaling in large networks. | Betweenness centrality lies at the core of both transport and structural vulnerability properties of complex networks; however, it is computationally costly, and its measurement for networks with millions of nodes is nearly impossible. By introducing a multiscale decomposition of shortest paths, we show that the contributions to betweenness coming from geodesics no longer than L obey a characteristic scaling versus L, which can be used to predict the distribution of the full centralities. The method is also illustrated on a real-world social network of 5.5 × 10^6 nodes and 2.7 × 10^7 links. | 1,522 | 0 | 3 | 3 |
1003.0931 | 3,105,113 | A student's guide to searching the literature using online databases | A method is described to empower students to efficiently perform general and specific literature searches using online resources. The method was tested on undergraduate and graduate students with varying backgrounds in scientific literature. Students involved in this study showed marked improvement in their awareness of how and where to find accurate scientific information. | 1,537 | 0 | 3 | 3 |
1003.1153 | 119,113,664 | Quantum dating market | null | 1,556 | 0 | 4 | 4 |
1003.1251 | 15,052,027 | Minimum Spanning Tree on Spatio-Temporal Networks | null | 1,563 | 0 | 3 | 3 |
1003.1898 | 476,928 | Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units | null | 1,609 | 0.588235 | 3 | 3.588 |
1003.1983 | 8,492,501 | Cellular Automata, PDEs, and Pattern Formation | State-of-the-art review of cellular automata, cellular automata for partial differential equations, differential equations for cellular automata and pattern formation in biology and engineering. | 1,617 | 0 | 3 | 3 |
1003.2092 | 14,181,239 | Modeling Symbiosis by Interactions Through Species Carrying Capacities | null | 1,626 | 0 | 4 | 4 |
1003.2198 | 13,252,430 | The relation between Eigenfactor, audience factor, and influence weight | We present a theoretical and empirical analysis of a number of bibliometric indicators of journal performance. We focus on three indicators in particular: the Eigenfactor indicator, the audience factor, and the influence weight indicator. Our main finding is that the last two indicators can be regarded as a kind of special case of the first indicator. We also find that the three indicators can be nicely characterized in terms of two properties. We refer to these properties as the property of insensitivity to field differences and the property of insensitivity to insignificant journals. The empirical results that we present illustrate our theoretical findings. We also show empirically that the differences between various indicators of journal performance are quite substantial. | 1,639 | 0 | 4 | 4 |
1003.2424 | 17,376 | Signed networks in social media | Relations between users on social media sites often reflect a mixture of positive (friendly) and negative (antagonistic) interactions. In contrast to the bulk of research on social networks that has focused almost exclusively on positive interpretations of links between people, we study how the interplay between positive and negative relationships affects the structure of on-line social networks. We connect our analyses to theories of signed networks from social psychology. We find that the classical theory of structural balance tends to capture certain common patterns of interaction, but that it is also at odds with some of the fundamental phenomena we observe --- particularly related to the evolving, directed nature of these on-line networks. We then develop an alternate theory of status that better explains the observed edge signs and provides insights into the underlying social mechanisms. Our work provides one of the first large-scale evaluations of theories of signed networks using on-line datasets, as well as providing a perspective for reasoning about social media sites. | 1,651 | 0 | 3 | 3 |
1003.2429 | 7,119,014 | Predicting positive and negative links in online social networks | We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network. | 1,653 | 0 | 3 | 3 |
1003.2469 | 6,794,729 | The Directed Closure Process in Hybrid Social-Information Networks, with an Analysis of Link Formation on Twitter | It has often been taken as a working assumption that directed links in information networks are frequently formed by “short-cutting” a two-step path between the source and the destination — a kind of implicit “link copying” analogous to the process of triadic closure in social networks. Despite the role of this assumption in theoretical models such as preferential attachment, it has received very little direct empirical investigation. Here we develop a formalization and methodology for studying this type of directed closure process, and we provide evidence for its important role in the formation of links on Twitter. We then analyze a sequence of models designed to capture the structural phenomena related to directed closure that we observe in the Twitter data. | 1,655 | 0.588235 | 4 | 4.588 |
1003.2688 | 7,212,434 | Warning: Physics Envy May be Hazardous to Your Wealth! | The quantitative aspirations of economists and financial analysts have for many years been based on the belief that it should be possible to build models of economic systems - and financial markets in particular - that are as predictive as those in physics. While this perspective has led to a number of important breakthroughs in economics, "physics envy" has also created a false sense of mathematical precision in some cases. We speculate on the origins of physics envy, and then describe an alternate perspective of economic behavior based on a new taxonomy of uncertainty. We illustrate the relevance of this taxonomy with two concrete examples: the classical harmonic oscillator with some new twists that make physics look more like economics, and a quantitative equity market-neutral strategy. We conclude by offering a new interpretation of tail events, proposing an "uncertainty checklist" with which our taxonomy can be implemented, and considering the role that quants played in the current financial crisis. | 1,664 | 0 | 3 | 3 |
1003.2807 | 96,428,074 | Spatial correlations in vote statistics: a diffusive field model for decision-making | null | 1,672 | 0 | 3 | 3 |
1003.3122 | 115,180,867 | Knots and links in steady solutions of the Euler equation | Given any possibly unbounded, locally finite link, we show that there exists a smooth diffeomorphism transforming this link into a set of stream (or vortex) lines of a vector field that solves the steady incompressible Euler equation in $\mathbb{R}^3$. Furthermore, the diffeomorphism can be chosen arbitrarily close to the identity in any $C^r$ norm. | 1,698 | 1.764706 | 1 | 2.765 |
1003.3223 | 119,211,425 | Wolfgang Pauli 1900 to 1930: His Early Physics in Jungian Perspective | Wolfgang Pauli's philosophy and physics were intertwined. His philosophy was a variety of Platonism, in which Pauli's affiliation with Carl Jung formed an integral part, but Pauli's philosophical explorations in physics appeared before he met Jung. Jung validated Pauli's psycho-philosophical perspective. Thus, the roots of Pauli's physics and philosophy are important in the history of modern physics. In his early physics, Pauli attempted to ground his theoretical physics in positivism. He then began instead to trust his intuitive visualizations of entities that formed an underlying reality to the sensible physical world. These visualizations included holistic kernels of mathematical-physical entities that later became for him synonymous with Jung's mandalas. I have connected Pauli's visualization patterns in physics during the period 1900 to 1930 to the psychological philosophy of Jung and displayed some examples of Pauli's creativity in the development of quantum mechanics. By looking at Pauli's early physics and philosophy, we gain insight into Pauli's contributions to quantum mechanics. His exclusion principle, his influence on Werner Heisenberg in the formulation of matrix mechanics, his emphasis on firm logical and empirical foundations, his creativity in formulating electron spinors, his neutrino hypothesis, and his dialogues with other quantum physicists, all point to Pauli being the dominant genius in the development of quantum theory. Because Pauli was in a difficult individuation process during his early years, his own writings on philosophy tend to be sparse and often contradictory. My analysis of Pauli's physics and philosophy is based upon published and unpublished sources, and Pauli's later reflections. A pattern has emerged. Pauli changed his mind from relying on high rationality and empiricism, to valuing intuitive metaphysical visualizations. This, coupled with disturbing events in his life, precipitated a breakdown and led Pauli to seek treatment at the Jung Clinic. Pauli's psychological tension diminished after 1932. His physics consistently involved symmetry and invariants. His philosophy allied with Jung's resembled a Platonism of combined psyche and physics. Pauli sought a rational unification and foundation for his philosophy, but that goal was cut short by his untimely death at the age of 58. | 1,702 | 0 | 3 | 3 |
1003.3394 | 118,353,732 | Emergence from Symmetry: A New Type of Cellular Automata | In this paper, a different perspective on constructing CA models is proposed. Its kernel, the Local Symmetric Distribution Principle, relates to some fundamental concepts in physics, which may raise wide interest. With a rich palette of configurations, this model also hints at its capability of universal computation. | 1,713 | 0.588235 | 4 | 4.588 |
1003.3937 | 118,582,915 | Evolution and Earth's entropy | Entropy decreases on the Earth due to day/night temperature differences. This decrease exceeds the decrease in entropy on the Earth related to evolution by many orders of magnitude. Claims by creationists that science is somehow inconsistent with regard to evolution are thus shown to be baseless. | 1,740 | 0.588235 | 3 | 3.588 |
1003.4340 | 11,903,674 | Enhancing the spectral gap of networks by node removal. | Dynamics on networks are often characterized by the second smallest eigenvalue of the Laplacian matrix of the network, which is called the spectral gap. Examples include the threshold coupling strength for synchronization and the relaxation time of a random walk. A large spectral gap is usually associated with high network performance, such as facilitated synchronization and rapid convergence. In this study, we seek to enhance the spectral gap of undirected and unweighted networks by removing nodes because, practically, the removal of nodes often costs less than the addition of nodes, addition of links, and rewiring of links. In particular, we develop a perturbative method to achieve this goal. The proposed method realizes better performance than other heuristic methods on various model and real networks. The spectral gap increases as we remove up to half the nodes in most of these networks. | 1,766 | 0 | 3 | 3 |
1003.4847 | 9,659,057 | A tree-decomposed transfer matrix for computing exact Potts model partition functions for arbitrary graphs, with applications to planar graph colourings | Combining tree decomposition and transfer matrix techniques provides a very general algorithm for computing exact partition functions of statistical models defined on arbitrary graphs. The algorithm is particularly efficient in the case of planar graphs. We illustrate it by computing the Potts model partition functions and chromatic polynomials (the number of proper vertex colourings using Q colours) for large samples of random planar graphs with up to N = 100 vertices. In the latter case, our algorithm yields a sub-exponential average running time of , a substantial improvement over the exponential running time ∼exp (0.245N) provided by the hitherto best-known algorithm. We study the statistics of chromatic roots of random planar graphs in some detail, comparing the findings with results for finite pieces of a regular lattice. | 1,796 | 0.588235 | 3 | 3.588 |
1003.4871 | 119,280,800 | Thermal broadening of the Coulomb blockade peaks in quantum Hall interferometers | We demonstrate that the differential magnetic susceptibility of a fractional quantum Hall disk, representing a Coulomb island in a Fabry-Pérot interferometer, is exactly proportional to the island's conductance and its paramagnetic peaks are the equilibrium counterparts of the Coulomb blockade conductance peaks. Using as a thermodynamic potential the partition functions of the edge states' effective conformal field theory we find the positions of the Coulomb blockade peaks, when the area of the island is varied, the modulations of the distance between them as well as the thermal decay and broadening of the peaks when temperature is increased. The finite-temperature estimates of the peak's heights and widths could give important information about the experimental observability of the Coulomb blockade. In addition, the predicted peak asymmetry and displacement at finite temperature due to neutral multiplicities could serve to distinguish different fractional quantum Hall states with similar zero-temperature Coulomb blockade patterns. | 1,798 | 1.176471 | 1 | 2.176 |
1003.4940 | 38,158,445 | Chaos in small-world networks. | A nonlinear small-world network model is presented to investigate the effect of nonlinear interaction and time delay on the dynamic properties of small-world networks. Both numerical simulations and analytical analysis for networks with time delay and nonlinear interaction show chaotic features in the system response when the nonlinear interaction is strong enough or the length scale is large enough. In addition, the small-world system may behave very differently on different scales. The time-delay parameter also has a very strong effect on properties such as the critical length and response time of small-world networks. | 1,805 | 0 | 3 | 3
1003.4958 | 117,897,721 | Cellular Automata Networks | In this paper, a small-world cellular automaton network is formulated to simulate the long-range interactions of complex networks using unconventional computing methods. Conventional cellular automata use local updating rules. The new type of cellular automata network uses local rules with a fraction of long-range shortcuts derived from the properties of small-world networks. Simulations show that self-organized criticality emerges naturally in the system for a given probability of shortcuts, and that a transition occurs as the probability increases to some critical value, indicating the small-world behaviour of the complex automata networks. Pattern formation in cellular automata networks and a comparison with equation-based reaction-diffusion systems are also discussed. | 1,807 | 0 | 3 | 3
1003.5424 | 115,179,757 | The approach to thermal equilibrium and "thermodynamic normality" --- An observation based on the works by Goldstein, Lebowitz, Mastrodonato, Tumulka, and Zanghi in 2009, and by von Neumann in 1929 | We treat the problem of the approach to thermal equilibrium by only resorting to quantum dynamics of an isolated macroscopic system. Inspired by the two important works in 2009 and in 1929, we have noted that a condition we call "thermodynamic normality" for a macroscopic observable guarantees the approach to equilibrium (in the sense that a measurement of the observable at time $t$ almost certainly yields a result close to the corresponding microcanonical average for a sufficiently long and typical $t$). A crucial point is that we make no assumptions on the initial state of the system, except that its energy is distributed close to a certain macroscopic value. We also present three (rather artificial) models in which the thermodynamic normality can be established, thus providing concrete examples in which the approach to equilibrium is rigorously justified. Note that this kind of results which hold for ANY initial state are never possible in classical systems. We are thus dealing with a mechanism which is peculiar to quantum systems. The present note is written in a self-contained (and hopefully readable) manner. It only requires basic knowledge in quantum physics and equilibrium statistical mechanics. | 1,843 | 0 | 3 | 3 |
1003.5583 | 37,138,600 | Bootstrap percolation on complex networks. | We consider bootstrap percolation on uncorrelated complex networks. We obtain the phase diagram for this process with respect to two parameters: f, the fraction of vertices initially activated, and p, the fraction of undamaged vertices in the graph. We observe two transitions: the giant active component appears continuously at a first threshold. There may also be a second, discontinuous, hybrid transition at a higher threshold. Avalanches of activations increase in size as this second critical point is approached, finally diverging at this threshold. We describe the existence of a special critical point at which this second transition first appears. In networks with degree distributions whose second moment diverges (but whose first moment does not), we find a qualitatively different behavior. In this case the giant active component appears for any f>0 and p>0, and the discontinuous transition is absent. This means that the giant active component is robust to damage, and also is very easily activated. We also formulate a generalized bootstrap process in which each vertex can have an arbitrary threshold. | 1,849 | 0 | 3 | 3 |
1003.5699 | 8,674,839 | Predicting the Future with Social Media | null | 1,856 | 1.176471 | 7 | 8.176 |
1003.6064 | 117,419,473 | Orthographic Correlations in Astrophysics | We analyze correlations between the first letter of the name of an author and the number of citations their papers receive. We look at simple mean counts, numbers of highly-cited papers, and normalized h-indices, by letter. To our surprise, we conclude that orthographically senior authors produce a better body of work than their colleagues, despite some evidence of discrimination against them. | 1,892 | 1.764706 | 5 | 6.765
1003.6087 | 117,850,953 | "How many zombies do you know?" Using indirect survey methods to measure alien attacks and outbreaks of the undead | The zombie menace has so far been studied only qualitatively or through the use of mathematical models without empirical content. We propose to use a new tool in survey research to allow zombies to be studied indirectly without risk to the interviewers. | 1,897 | 0 | 6 | 6 |
1003.6130 | 117,018,937 | LHC card games: Bringing about retrocausality? | The model of Nielsen and Ninomiya claims that "the SSC (Superconducting Supercollider) were stopped by the US Congress due to the backward causation from the big amounts of Higgs particles, which it would have produced, if it had been allowed to run". They also proposed to play a card game and, if the "close LHC" card is drawn (with probability $\sim 10^{-6}$), really close the LHC on the eve of the Higgs particle discovery to avoid more severe bad luck. Crazy? Probably. But paraphrasing Salvador Dali, if you believe that you and I are smarter in physics than Nielsen and Ninomiya, don't read this article, just go right on in your blissful idiocy. Therefore, I will try to make sense of backward causation. It turns out not only that backward causation makes perfect sense in some models of possible reality, but that Nielsen and Ninomiya really have a chance to close the LHC by a card game. The only thing they need is to be smart enough to develop their theory up to the level of brilliance at which it becomes a part of the fabric of reality. We hope, however, that they will use their outstanding abilities to bring about some more interesting future. | 1,900 | 1.176471 | 3 | 4.176
1004.0664 | 119,259,219 | Classical paradoxes of locality and their possible quantum resolutions in deformed special relativity | null | 1,948 | 0 | 3 | 3 |
1004.0810 | 51,813,076 | Biefeld-Brown effect and space curvature of electromagnetic field | Applying the newly proposed electromagnetic gravity Lagrangian together with the Einstein-Hilbert equation, a non-zero space curvature was derived. The curvature gives an "a priori" postulate of the equivalence of the gravitational properties of mass and electromagnetic field. The non-zero trace of the energy-stress tensor of the electric field changes the space curvature of the gravitating mass, which leads to a predicted dependence of capacitor gravitational mass on capacitance and voltage, as observed in the Biefeld-Brown effect. The other, not yet observed, prediction applies to the dependence of coil gravitational mass on inductance and current. A new physical constant, the electromagnetic field gravity constant αg, was introduced to conform with theoretical and experimental data. | 1,964 | 0 | 3 | 3
1004.1035 | 53,533,798 | Estimations of Total Mass and Energy of the Observable Universe | Recent astronomical observations indicate that the expanding universe is homogeneous, isotropic and asymptotically flat. The Euclidean geometry of the universe enables one to determine the total gravitational and kinetic energy of the universe by Newtonian gravity in a flat space. By means of dimensional analysis, we have found the mass of the observable universe close to the Hoyle-Carvalho formula M∼c3/(GH). This value is independent of the cosmological model and implies a size (radius) of the observable universe close to the Hubble distance. It has been shown that almost the entire kinetic energy of the observable universe ensues from the cosmological expansion. Both the total gravitational and kinetic energies of the observable universe have been determined in relation to an observer at an arbitrary location. The relativistic calculations for total kinetic energy have been made and the dark energy has been excluded from the calculations. The total mechanical energy of the observable universe has been found close to zero, which is a remarkable result. This result supports the conjecture that the gravitational energy of the observable universe is approximately balanced by the kinetic energy of the expansion, and favours a density of dark energy ΩΛ≈0.78. | 1,994 | 0 | 3 | 3
1004.1091 | 118,484,129 | The Potato Radius: a Lower Minimum Size for Dwarf Planets | Gravitational and electronic forces produce a correlation between the mass and shape of objects in the universe. For example, at an average radius of ~ 200 km - 300 km, the icy moons and rocky asteroids of our Solar System transition from a rounded potato shape to a sphere. We derive this potato-to-sphere transition radius -- or "potato radius" -- from first principles. Using the empirical potato radii of asteroids and icy moons, we derive a constraint on the yield strength of these bodies during their formative years when their shapes were determined. Our proposed ~ 200 km potato radius for icy moons would substantially increase the number of trans-Neptunian objects classified as dwarf planets. | 2,002 | 0 | 3 | 3 |
1004.1346 | 118,370,357 | Electromagnetism and time-asymmetry | It is a commonplace to note that in a world governed by special or general relativity, an observer has access only to data within her past lightcone (if that). The significance of this for prediction, and thus for confirmation, does not however seem to have been appreciated. In this paper I show that what we regard as our most well-confirmed relativistic theory, Maxwell's theory of electromagnetism, is not at all well-confirmed in the absence of an additional assumption, the assumption that all fields have sources in their past. I conclude that we have reason to believe that there is a lawlike time-asymmetry in the world. | 2,035 | 0 | 3 | 3 |
1004.1670 | 154,303,998 | Any regulation of risk increases risk | We show that any objective risk measurement algorithm mandated by central banks for regulated financial entities will result in more risk being taken by those financial entities than would otherwise be the case. Furthermore, the risks taken by the regulated financial entities are far more systemically concentrated than they would have been otherwise, making the entire financial system more fragile. This result leaves three options for the future of financial regulation: (1) continue regulating by enforcing risk measurement algorithms at the cost of occasional severe crises, (2) regulate more severely and subjectively by fully nationalizing all financial entities, or (3) abolish all central banking regulations, including deposit insurance, thus allowing risk to be determined by the entities themselves and, ultimately, by their depositors through voluntary market transactions, rather than by the taxpayers through enforced government participation. | 2,063 | 0.588235 | 3 | 3.588 |
1004.1701 | 13,258,254 | The danger of pseudo science in Informetrics | Two papers have been archived to which this letter is complementary: 1) Opthof and Leydesdorff arxiv:1002.2769 2) Van Raan et al. arxiv:1003.2113 Van Raan et al. claim that the order of operations (first dividing, then adding) does not apply to citation analysis. In my contribution I discuss a few analogues in Physics and Medicine and argue that in no other field of science where quantities have physical or financial meaning, implying that the numbers have a real unit of measure, would it be allowed to ignore the order of operations. Hence, the claim of CWTS that the order of operations is not relevant places studies ignoring this rule, as done by CWTS, in the category of 'Pseudo Science'. | 2,068 | 0 | 3 | 3
1004.1999 | 8,743,733 | Towards a mathematical theory of meaningful communication | null | 2,091 | 0 | 3 | 3 |