Columns: id (string, length 9-16); abstract (string, length 67-2.61k); cats (sequence); primary (string, length 5-18); secondary (string, length 0-18); strlabel (string, length 5-315); stratlabel (class label, 7.27k classes)
1909.00997
Existing synthetic datasets (FigureQA, DVQA) for reasoning over plots do not contain variability in data labels, real-valued data, or complex reasoning questions. Consequently, proposed models for these datasets do not fully address the challenge of reasoning over plots. In particular, they assume that the answer comes either from a small fixed-size vocabulary or from a bounding box within the image. However, in practice, this is an unrealistic assumption because many questions require reasoning and thus have real-valued answers which appear neither in a small fixed-size vocabulary nor in the image. In this work, we aim to bridge this gap between existing datasets and real-world plots. Specifically, we propose PlotQA with 28.9 million question-answer pairs over 224,377 plots on data from real-world sources and questions based on crowd-sourced question templates. Further, 80.76% of the questions in PlotQA have out-of-vocabulary (OOV) answers, i.e., answers that do not come from a fixed vocabulary. Analysis of existing models on PlotQA reveals that they cannot deal with OOV questions: their overall accuracy on our dataset is in single digits. This is not surprising given that these models were not designed for such questions. As a step towards a more holistic model which can address fixed-vocabulary as well as OOV questions, we propose a hybrid approach: specific questions are answered by choosing the answer from a fixed vocabulary or by extracting it from a predicted bounding box in the plot, while other questions are answered with a table question-answering engine which is fed with a structured table generated by detecting visual elements from the image. On the existing DVQA dataset, our model has an accuracy of 58%, significantly improving on the highest reported accuracy of 46%. On PlotQA, our model has an accuracy of 22.52%, which is significantly better than that of state-of-the-art models.
[ "cs.CV", "cs.AI", "cs.CL" ]
cs.CV
cs.AI
Computer Vision and Pattern Recognition;Artificial Intelligence;Computation and Language
1,503 Computer Vision and Pattern Recognition;Artificial Intelligence;Computation and Language
cond-mat/0505502
For Small World Ising systems of different dimensions, "concentration" dependencies T_C(p) of the Curie temperature upon the fraction p of long-range links have been derived on the basis of simple physical considerations. We have found T_C(p) ~ 1/|ln p| for 1D, T_C(p) ~ p^{1/2} for 2D, and T_C(p) ~ p^{2/3} for 3D.
[ "cond-mat.dis-nn" ]
cond-mat.dis-nn
Disordered Systems and Neural Networks
2,129 Disordered Systems and Neural Networks
2112.03952
We present a calculation of the connected-diagram contributions to the first three non-trivial Mellin moments for the pion and kaon, extracted using local operators with up to 3 covariant derivatives. We use one ensemble of gauge configurations with two degenerate light quarks, a strange quark and a charm quark ($N_f$=2+1+1) of maximally twisted mass fermions with clover improvement. The ensemble has a pion mass $\sim$260 MeV, and a kaon mass $\sim$530 MeV. We reconstruct the $x$-dependence of the PDFs via fits to our results, and find that our lattice data favor a $(1-x)^2$-behavior in the large-$x$ region for both the pion and kaon PDFs. We integrate the reconstructed PDFs to extract the higher moments, $\langle x^n \rangle$, with $4 \leq n \leq 6$. Finally, we compare the pion and kaon PDFs, as well as the ratios of their Mellin moments, to address the effect of SU(3) flavor symmetry breaking.
[ "hep-lat", "hep-ph", "nucl-th" ]
hep-lat
hep-ph
High Energy Physics - Lattice;High Energy Physics - Phenomenology;Nuclear Theory
3,109 High Energy Physics - Lattice;High Energy Physics - Phenomenology;Nuclear Theory
1301.3724
Cosmic explosions dissipate energy into their surroundings on a very wide range of time-scales: producing shock waves and associated particle acceleration. The historical culprits for the acceleration of the bulk of Galactic cosmic rays are supernova remnants: explosions on ~10000 year time-scales. Increasingly however, time-variable emission points to rapid and efficient particle acceleration in a range of different astrophysical systems. Gamma-ray bursts have the shortest time-scales, with inferred bulk Lorentz factors of ~1000 and photons emitted beyond 100 GeV, but active galaxies, pulsar wind nebulae and colliding stellar winds are all now associated with time-variable emission at ~TeV energies. Cosmic photons and neutrinos at these energies offer a powerful probe of the underlying physical mechanisms of cosmic explosions, and a tool for exploring fundamental physics with these systems. Here we discuss the motivations for high-energy observations of transients, the current experimental situation, and the prospects for the next decade, with particular reference to the major next-generation high-energy observatory CTA.
[ "astro-ph.HE" ]
astro-ph.HE
High Energy Astrophysical Phenomena
2,990 High Energy Astrophysical Phenomena
1705.06998
We deduce an analogue of Quillen--Suslin's local-global principle for the transvection subgroups of the general quadratic (Bak's unitary) groups. As an application we revisit the result of Bak--Petrov--Tang on injective stabilization for the K_1-functor of the general quadratic groups.
[ "math.KT" ]
math.KT
K-Theory and Homology
3,775 K-Theory and Homology
1905.06852
As more and more applications and services depend on data collected and provided by Internet of Things (IoT) devices, it is important that such data can be trusted. Data provenance solutions together with blockchain technology are one way to make data more trustworthy. However, current solutions do not address the heterogeneous nature of IoT applications and their data. In this work, we identify functional and non-functional requirements for a generic IoT data provenance framework, and conceptualise the framework as a layered architecture. Using a proof-of-concept implementation based on Ethereum smart contracts, data provenance can be realised for a wide range of IoT use cases. Benefits of a generic framework include simplified adoption and a more rapid implementation of data provenance for the IoT.
[ "cs.CR" ]
cs.CR
Cryptography and Security
1,782 Cryptography and Security
1406.3398
Yeast cells grown in culture can spontaneously synchronize their respiration, metabolism, gene expression and cell division. Such metabolic oscillations in synchronized cultures reflect single-cell oscillations, but the relationship between the oscillations in single cells and synchronized cultures is poorly understood. To understand this relationship and the coordination between metabolism and cell division, we collected and analyzed DNA-content, gene-expression and physiological data, at hundreds of time-points, from cultures metabolically-synchronized at different growth rates, carbon sources and biomass densities. The data enabled us to extend and generalize an ensemble-average-over-phases (EAP) model that connects the population-average gene-expression of asynchronous cultures to the gene-expression dynamics in the single-cells comprising the cultures. The extended model explains the carbon-source specific growth-rate responses of hundreds of genes. Our data demonstrate that for a given growth rate, the frequency of metabolic cycling in synchronized cultures increases with the biomass density. This observation underscores the difference between metabolic cycling in synchronized cultures and in single cells and suggests entraining of the single-cell cycle by a quorum-sensing mechanism. Constant levels of residual glucose during the metabolic cycling of synchronized cultures indicate that storage carbohydrates are required to fuel not only the G1/S transition of the division cycle but also the metabolic cycle. Despite the large variation in profiled conditions and in the time-scale of their dynamics, most genes preserve invariant dynamics of coordination with each other and with the rate of oxygen consumption. Similarly, the G1/S transition always occurs at the beginning, middle or end of the high oxygen consumption phases, analogous to observations in human and Drosophila cells.
[ "q-bio.GN", "nlin.AO", "physics.bio-ph", "q-bio.CB", "q-bio.PE" ]
q-bio.GN
nlin.AO
Genomics;Adaptation and Self-Organizing Systems;Biological Physics;Cell Behavior;Populations and Evolution
7,267 longtail
1004.2864
The ratio of the $\psi'$ over the $J/\psi$ production cross section in the dielectron channel has been measured in $\sqrt{s}=$ 200 GeV $p+p$ collisions with the PHENIX detector at RHIC. The analysis is based on fitting the dielectron invariant mass spectra in the region around the $J/\psi$ and $\psi'$ signals in order to extract a $\psi'$-to-$J/\psi$ ratio of 0.019$\pm 0.005($stat$)\pm 0.002($sys$)$ and a fractional feed-down contribution to $J/\psi$ from $\psi^\prime$ of $8.6 \pm 2.5\%$.
[ "nucl-ex" ]
nucl-ex
Nuclear Experiment
4,855 Nuclear Experiment
2103.16340
This paper studies Makespan Minimization in the secretary model. Formally, jobs, specified by their processing times, are presented in a uniformly random order. An online algorithm has to assign each job permanently and irrevocably to one of m parallel and identical machines such that the expected time it takes to process them all, the makespan, is minimized. We give two deterministic algorithms. First, a straightforward adaptation of the semi-online strategy LightLoad provides a very simple algorithm retaining its competitive ratio of 1.75. A new and sophisticated algorithm is 1.535-competitive. These competitive ratios are not only obtained in expectation but, in fact, for all but a very tiny fraction of job orders. Classically, online makespan minimization only considers the worst-case order. In that setting, no competitive ratio below 1.885 for deterministic algorithms and 1.581 using randomization is possible. The best randomized algorithm so far is 1.916-competitive. Our results show that classical worst-case orders are quite rare and pessimistic for many applications. They also demonstrate the power of randomization when compared to much stronger deterministic reordering models. We complement our results by providing the first lower bounds. A competitive ratio obtained on nearly all possible job orders must be at least 1.257. This implies a lower bound of 1.043 for both deterministic and randomized algorithms in the general model.
[ "cs.DS" ]
cs.DS
Data Structures and Algorithms
1,908 Data Structures and Algorithms
2010.13283
The $0$-trace of a knot is the $4$-manifold represented by the $0$-framing of the knot. In this manuscript, we survey methods constructing a pair of knots with diffeomorphic $0$-traces. In particular, we focus on Gompf-Miyazaki's dualizable pattern, Abe-Jong-Omae-Takeuchi's band presentation, and RGB-diagram given by Piccirillo and named by the author, and we draw the relations among these methods directly. As an application, we give a sufficient condition that two knots obtained by Abe-Jong-Omae-Takeuchi's method coincide.
[ "math.GT" ]
math.GT
Geometric Topology
2,813 Geometric Topology
2010.15543
We characterize the possible groups $E(\mathbb{Z}/N\mathbb{Z})$ arising from elliptic curves over $\mathbb{Z}/N\mathbb{Z}$ in terms of the groups $E(\mathbb{F}_p)$, with $p$ varying among the prime divisors of $N$. This classification is achieved by showing that the infinity part of any elliptic curve over $\mathbb{Z}/p^e\mathbb{Z}$ is a $\mathbb{Z}/p^e\mathbb{Z}$-torsor, of which a generator is exhibited. As a first consequence, when $E(\mathbb{Z}/N\mathbb{Z})$ is a $p$-group, we provide an explicit and sharp bound on its rank. As a second consequence, when $N = p^e$ is a prime power and the projected curve $E(\mathbb{F}_p)$ has trace one, we provide an isomorphism attack to the ECDLP, which works only by means of finite rings arithmetic.
[ "math.NT", "math.AG" ]
math.NT
math.AG
Number Theory;Algebraic Geometry
4,946 Number Theory;Algebraic Geometry
2007.07652
Liquid crystal networks combine the orientational order of liquid crystals with the elastic properties of polymer networks, leading to a vast application potential in the field of responsive coatings, e.g., for haptic feedback, self-cleaning surfaces and static and dynamic pattern formation. Recent experimental work has further paved the way toward such applications by realizing the fast and reversible surface modulation of a liquid crystal network coating upon in-plane actuation with an AC electric field. Here, we construct a Landau-type theory for electrically-responsive liquid crystal networks and perform Molecular Dynamics simulations to explain the findings of these experiments and inform on rational design strategies. Qualitatively, the theory agrees with our simulations and reproduces the salient experimental features. We also provide a set of testable predictions: the aspect ratio of the nematogens, their initial orientational order when cross-linked into the polymer network and the cross-linking fraction of the network all increase the plasticization time required for the film to macroscopically deform. We demonstrate that the dynamic response to oscillating electric fields is characterized by two resonances, which can likewise be influenced by varying these parameters, providing an experimental handle to fine-tune device design.
[ "cond-mat.soft", "cond-mat.mtrl-sci" ]
cond-mat.soft
cond-mat.mtrl-sci
Soft Condensed Matter;Materials Science
6,577 Soft Condensed Matter;Materials Science
2304.09572
This paper presents an ecosystem for personal knowledge graphs (PKG), commonly defined as resources of structured information about entities related to an individual, their attributes, and the relations between them. PKGs are a key enabler of secure and sophisticated personal data management and personalized services. However, there are challenges that need to be addressed before PKGs can achieve widespread adoption. One of the fundamental challenges is the very definition of what constitutes a PKG, as there are multiple interpretations of the term. We propose our own definition of a PKG, emphasizing the aspects of (1) data ownership by a single individual and (2) the delivery of personalized services as the primary purpose. We further argue that a holistic view of PKGs is needed to unlock their full potential, and propose a unified framework for PKGs, where the PKG is a part of a larger ecosystem with clear interfaces towards data services and data sources. A comprehensive survey and synthesis of existing work is conducted, with a mapping of the surveyed work into the proposed unified ecosystem. Finally, we identify open challenges and research opportunities for the ecosystem as a whole, as well as for the specific aspects of PKGs, which include population, representation and management, and utilization.
[ "cs.AI", "cs.IR" ]
cs.AI
cs.IR
Artificial Intelligence;Information Retrieval
413 Artificial Intelligence;Information Retrieval
2010.13227
At thermal equilibrium, intensive quantities like temperature and pressure have to be uniform throughout the system, restricting inhomogeneous systems composed of different phases. The paradigmatic example is the coexistence of vapor and liquid, a state that can also be observed for active Brownian particles steadily driven away from equilibrium. Recently, a strategy has been proposed that allows one to predict phase equilibria of active particles [Phys. Rev. E \textbf{97}, 020602(R)(2018)]. Here we elaborate on this strategy and formulate it in the framework of a van der Waals theory for active discs. For a given equation of state, we derive the effective free energy analytically and show that it yields coexisting densities in very good agreement with numerical results. We discuss the interfacial tension and the relation to Cahn-Hilliard models.
[ "cond-mat.stat-mech" ]
cond-mat.stat-mech
Statistical Mechanics
6,821 Statistical Mechanics
2006.08709
We investigate the relationship between the thermal properties of a micro pulsating heat pipe (MPHP) and the internal flow characteristics. The MPHP consists of an eleven-turn closed-loop of a meandering square microchannel with a hydraulic diameter of $350\ \mu{\rm m}$ engraved on a silicon substrate. The MPHP charged with Fluorinert FC-72 tends to exhibit higher effective thermal conductivities for the coolant temperature of $T_{\rm c} = 40\ ^\circ\mathrm{C}$ compared to $T_{\rm c} = 20\ ^\circ\mathrm{C}$, and provides the highest effective thermal conductivity of about $700\ {\rm W/(m{\cdot}K)}$ for $T_{\rm c} = 40\ ^\circ\mathrm{C}$ and a filling ratio of 48%. Interestingly, we observe two different self-oscillation modes having different thermal conductivities, even for identical heat input rates. This tendency indicates a hysteresis of the effective thermal conductivity, which originates from the difference in the heat input rates at which the MPHP falls into and recovers from dryout. Subsequently, semantic segmentation-based image recognition is applied to the recorded flow images to identify the flow characteristics, successfully extracting four different flow patterns involving liquid slugs, liquid films, dry walls, and rapid-boiling regions. The image recognition results indicate that high effective thermal conductivities of the MPHP relate to stable self-oscillations with large amplitudes and high frequencies, along with long and thin liquid films beneficial for latent heat transfer. Finally, we perform numerical simulations of latent/sensible heat transfer via vapor plugs and of sensible heat transfer via liquid slugs using the extracted flow patterns as inputs. We find that latent heat transfer via liquid films accounts for a considerable portion of the overall heat transfer, while the sensible heat transfer via liquid slugs is much less significant.
[ "physics.app-ph", "physics.flu-dyn" ]
physics.app-ph
physics.flu-dyn
Applied Physics;Fluid Dynamics
329 Applied Physics;Fluid Dynamics
1101.1026
We study large-scale structure formation in the presence of a quintessence component with zero speed of sound in the framework of Eulerian Perturbation Theory. Due to the absence of pressure gradients, quintessence and dark matter are comoving and can be studied as a unique fluid in terms of the total energy density contrast and the common velocity. In this description the clustering of quintessence enhances the linear term proportional to the velocity divergence in the continuity equation by a factor (1+w) Omega_Q / Omega_m. This is responsible for a rapid evolution of the growth rate at low redshifts, and modifies the standard relation between the velocity divergence and the growth factor. For the total fluid, the solutions for the linear growth function and growth rate can be written in integral forms and admit simple fitting formulae, as in the LambdaCDM case. At second order in perturbation theory, we derive an explicit expression for the kernels F_2 and G_2. They receive modifications of the order of the ratio between quintessence and total energy density perturbations, which affect the corresponding tree-level bispectra. We finally compute the cumulative signal-to-noise in the power spectrum, bispectrum and reduced bispectrum, expected for departures from a LambdaCDM cosmology both in the clustering and smooth quintessence scenarios. The reduced bispectrum, in particular, receives appreciable modifications only in the clustering case and can potentially be used to detect or rule out the model.
[ "astro-ph.CO", "gr-qc", "hep-th" ]
astro-ph.CO
gr-qc
Cosmology and Nongalactic Astrophysics;General Relativity and Quantum Cosmology;High Energy Physics - Theory
1,748 Cosmology and Nongalactic Astrophysics;General Relativity and Quantum Cosmology;High Energy Physics - Theory
hep-th/0006154
The CPT anomaly, which was first seen in perturbation theory for certain four-dimensional chiral gauge theories, is also present in the exact result for a class of two-dimensional chiral U(1) gauge theories on the torus. Specifically, the chiral determinant for periodic fermion fields changes sign under a CPT transformation of the background gauge field. There is, in fact, an anomaly of Lorentz invariance, which allows for the CPT theorem to be circumvented.
[ "hep-th" ]
hep-th
High Energy Physics - Theory
3,266 High Energy Physics - Theory
1711.10352
The two underlying requirements of face age progression, i.e. aging accuracy and identity permanence, are not well studied in the literature. In this paper, we present a novel generative adversarial network based approach. It separately models the constraints for the intrinsic subject-specific characteristics and the age-specific facial changes with respect to the elapsed time, ensuring that the generated faces present desired aging effects while simultaneously keeping personalized properties stable. Further, to generate more lifelike facial details, high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales, which simulates the aging effects in a finer manner. The proposed method is applicable to diverse face samples in the presence of variations in pose, expression, makeup, etc., and remarkably vivid aging effects are achieved. Both visual fidelity and quantitative evaluations show that the approach advances the state-of-the-art.
[ "cs.CV" ]
cs.CV
Computer Vision and Pattern Recognition
1,498 Computer Vision and Pattern Recognition
astro-ph/0601155
We discuss the detectability of high-redshift galaxies via [CII] 158 micron line emission by coupling an analytic model with cosmological Smoothed Particle Hydrodynamics (SPH) simulations that are based on the concordance Lambda cold dark matter (CDM) model. Our analytic model describes a multiphase interstellar medium irradiated by the far ultra-violet radiation from local star-forming regions, and it calculates thermal and ionization equilibrium between cooling and heating. The model allows us to predict the mass fraction of a cold neutral medium (CNM) embedded in a warm neutral medium (WNM). Our cosmological SPH simulations include a treatment of radiative cooling/heating, star formation, and feedback effects from supernovae and galactic winds. Using our method, we make predictions for the [CII] luminosity from high-redshift galaxies which can be directly compared with upcoming observations by the Atacama Large Millimeter Array (ALMA) and the Space Infrared Telescope for Cosmology and Astrophysics (SPICA). We find that the number density of high-redshift galaxies detectable by ALMA and SPICA via [CII] emission depends significantly on the amount of neutral gas which is highly uncertain. Our calculations suggest that, in a CDM universe, most [CII] sources at z=3 are faint objects with $S_\nu < 0.01$ mJy. Lyman-break galaxies (LBGs) brighter than R_AB=23.5 mag are expected to have flux densities $S_\nu = 1-3$ mJy depending on the strength of galactic wind feedback. The recommended observing strategy for ALMA and SPICA is to aim at very bright LBGs or star-forming DRG/BzK galaxies.
[ "astro-ph" ]
astro-ph
Astrophysics
463 Astrophysics
2005.12065
A Locality-Sensitive Hash (LSH) function is called $(r,cr,p_1,p_2)$-sensitive, if two data-points with a distance less than $r$ collide with probability at least $p_1$ while data points with a distance greater than $cr$ collide with probability at most $p_2$. These functions form the basis of the successful Indyk-Motwani algorithm (STOC 1998) for nearest neighbour problems. In particular one may build a $c$-approximate nearest neighbour data structure with query time $\tilde O(n^\rho/p_1)$ where $\rho=\frac{\log1/p_1}{\log1/p_2}\in(0,1)$. That is, sub-linear time, as long as $p_1$ is not too small. This is significant since most high dimensional nearest neighbour problems suffer from the curse of dimensionality, and cannot be solved exactly faster than a brute-force linear-time scan of the database. Unfortunately, the best LSH functions tend to have very low collision probabilities $p_1$ and $p_2$, including the best functions for Cosine and Jaccard similarity. This means that the $n^\rho/p_1$ query time of LSH is often not sub-linear after all, even for approximate nearest neighbours! In this paper, we improve the general Indyk-Motwani algorithm to reduce the query time of LSH to $\tilde O(n^\rho/p_1^{1-\rho})$ (and the space usage correspondingly). Since $n^\rho p_1^{\rho-1} < n \Leftrightarrow p_1 > n^{-1}$, our algorithm always obtains sublinear query time, for any collision probabilities at least $1/n$. For $p_1$ and $p_2$ small enough, our improvement over all previous methods can be \emph{up to a factor $n$} in both query time and space. The improvement comes from a simple change to the Indyk-Motwani algorithm, which can easily be implemented in existing software packages.
[ "cs.DS" ]
cs.DS
Data Structures and Algorithms
1,908 Data Structures and Algorithms
1906.02504
We report experimental observation of incoherently coupled dark-bright vector solitons in single-mode fibers. Properties of the vector solitons agree well with those predicted by the respective systems of incoherently coupled nonlinear Schrödinger equations. To the best of our knowledge, this is the first experimental observation of temporal incoherently coupled dark-bright solitons in single-mode fibers.
[ "physics.optics", "nlin.PS" ]
physics.optics
nlin.PS
Optics;Pattern Formation and Solitons
5,217 Optics;Pattern Formation and Solitons
1503.04180
Societies consisting of cooperative individuals seem to require for their continuing success that defectors be policed. The precise connection between punishers and benefits, population structure, and division of labour, however, remains ill-understood. Many models assume costly "peer punishment" to enforce cooperation, but results in the economics literature suggest that this assumption may not be generally valid. In many human and animal societies, there is a division of labour between a purely supportive majority and a dedicated minority of police-like enforcers. Here we present several extensions to the Public Goods Game with punishment which allow for this possibility, and evaluate their influence on the level of cooperative behaviour. We find that a structure of separate subpopulations, which only interact through migration of individuals, can have a strong effect on the evolutionary dynamics of a system and significantly facilitate cooperation. Forcing defectors to contribute and enabling fitness transfers to punishers both have a weak positive effect on cooperation levels. In the presence of group competition, however, evolutionary effects can paradoxically hinder cooperation.
[ "q-bio.PE", "physics.soc-ph" ]
q-bio.PE
physics.soc-ph
Populations and Evolution;Physics and Society
5,666 Populations and Evolution;Physics and Society
0801.0371
If ultra-high-energy cosmic rays (UHECRs) originate from extragalactic sources, understanding the propagation of charged particles through the magnetized large scale structure (LSS) of the universe is crucial in the search for the astrophysical accelerators. Based on a novel model of the turbulence dynamo, we estimate the intergalactic magnetic fields (IGMFs) in cosmological simulations of the formation of the LSS. Under the premise that the sources of UHECRs are strongly associated with the LSS, we consider a model in which protons with E >10^{19} eV are injected by sources that represent active galactic nuclei located inside clusters of galaxies. With the model IGMFs, we then follow the trajectories of the protons, while taking into account the energy losses due to interactions with the cosmic background radiation. For observers located inside groups of galaxies like ours, about 70% and 35% of UHECR events above 60 EeV arrive within ~15 degree and ~5 degree, respectively, of the source position with time delays of less than ~10^7 yr. This implies that the arrival direction of super-GZK protons might exhibit a correlation with the distribution of cosmological sources on the sky. In this model, nearby sources (within 10 - 20 Mpc) should contribute significantly to the particle flux above ~10^{20} eV.
[ "astro-ph" ]
astro-ph
Astrophysics
463 Astrophysics
cond-mat/0601378
We consider the tunneling Density of States (DoS) of superconducting films driven to the paramagnetic phase by the Zeeman splitting. We show that there is a minimum in the DoS whose position depends on the orientation of the applied field. This dependence, not predicted by previous theoretical calculations, is in agreement with a recent experiment.
[ "cond-mat.supr-con" ]
cond-mat.supr-con
Superconductivity
7,066 Superconductivity
2012.00491
Contributing to the need of new graphene nanoribbon (GNR) structures that can be synthesized with atomic precision, we have designed a reactant that renders chiral (3,1)-GNRs after a multi-step reaction including Ullmann coupling and cyclodehydrogenation. The nanoribbon synthesis has been successfully proved on different coinage metals, and the formation process, together with the fingerprints associated to each reaction step, has been studied combining scanning tunnelling microscopy, core-level spectroscopy and density functional calculations. In addition to the GNR chiral edge structure, the substantial GNR lengths achieved and the low processing temperature required to complete the reaction grant this reactant extremely interesting properties for potential applications.
[ "cond-mat.mtrl-sci" ]
cond-mat.mtrl-sci
Materials Science
4,287 Materials Science
2306.12829
In the last decade, the need for storing videos from cataract surgery has increased significantly. Hospitals continue to improve their imaging and recording devices (e.g., microscopes and cameras used in microscopic surgery, such as ophthalmology) to enhance their post-surgical processing efficiency. The video recordings enable many use cases after the actual surgery, for example, teaching, documentation, and forensics. However, videos recorded from operations are typically stored in the internal archive without any domain-specific compression, leading to massive storage space consumption. In this work, we propose a relevance-based compression scheme for videos from cataract surgery, which is based on content specifics of particular cataract surgery phases. We evaluate our compression scheme with three state-of-the-art video codecs, namely H.264/AVC, H.265/HEVC, and AV1, and ask medical experts to evaluate the visual quality of encoded videos. Our results show significant savings, in particular up to 95.94% when using H.264/AVC, up to 98.71% when using H.265/HEVC, and up to 98.82% when using AV1.
[ "cs.MM" ]
cs.MM
Multimedia
4,692 Multimedia
1812.10114
Photoemission driven by a strong electric field of near-infrared or visible light, referred to as strong-field photoemission, produces attosecond electron pulses that are synchronized to the waveform of the incident light, and this principle lies at the heart of current attosecond technologies. However, full access to strong-field photoemission regimes at near-infrared wavelengths based on solid-state materials is restricted by space-charge screening and material damage at high optical-field strengths, which significantly hampers the realization of predicted attosecond technologies, such as ultra-sensitive optical phase modulation. Here, we demonstrate a new type of strong-field photoemission behaviour with extreme nonlinearity -- the photoemission current scales as the 40th power of the optical-field strength -- making use of sub-nanometric carbon nanotubes and 800 nm pulses. As a result, the total photoemission current depends on the carrier-envelope phase with a greatly improved photoemission current modulation depth of up to 100%, which has not previously been achieved. Time-dependent density functional calculations reveal the completely new behaviour of the optical-field induced tunnelling emission process directly from the valence band of the carbon nanotubes, which is an indication of full access to a strong-field photoemission regime. Furthermore, the nonlinear dynamics are observed to be tunable by changing the binding energy of the valence-band maximum, as confirmed by Simpleman model calculations. We believe that such extreme nonlinear photoemission from nanotips offers a new means of producing extreme temporal-spatial resolved electron pulses. These results additionally provide a new design philosophy for attosecond electronics and optics by making use of tunable band structures in nanomaterials.
[ "physics.optics" ]
physics.optics
Optics
5,146Optics
hep-th/9809198
The existence of fluctuations together with interactions leads to scale-dependence in the couplings of quantum field theories for the case of quantum fluctuations, and in the couplings of stochastic systems when the fluctuations are of thermal or statistical nature. In both cases the effects of these fluctuations can be accounted for by solutions of the corresponding renormalization group equations. We show how the renormalization group equations are intimately connected with the effective action: given the effective action we can extract the renormalization group equations; given the renormalization group equations the effects of these fluctuations can be included in the classical action by using what is known as improved perturbation theory (wherein the bare parameters appearing in tree-level expressions are replaced by their scale-dependent running forms). The improved action can then be used to reconstruct the effective action, up to finite renormalizations, and gradient terms.
[ "hep-th" ]
hep-th
High Energy Physics - Theory
3,266High Energy Physics - Theory
cs/0501043
We advocate a declarative approach to proving properties of logic programs. Total correctness can be separated into correctness, completeness and clean termination; the latter includes non-floundering. Only clean termination depends on the operational semantics, in particular on the selection rule. We show how to deal with correctness and completeness in a declarative way, treating programs only from the logical point of view. Specifications used in this approach are interpretations (or theories). We point out that specifications for correctness may differ from those for completeness, as usually there are answers which are neither considered erroneous nor required to be computed. We present proof methods for correctness and completeness for definite programs and generalize them to normal programs. For normal programs we use the 3-valued completion semantics; this is a standard semantics corresponding to negation as finite failure. The proof methods employ solely the classical 2-valued logic. We use a 2-valued characterization of the 3-valued completion semantics which may be of separate interest. The presented methods are compared with an approach based on operational semantics. We also employ the ideas of this work to generalize a known method of proving termination of normal programs.
[ "cs.LO", "cs.PL" ]
cs.LO
cs.PL
Logic in Computer Science;Programming Languages
3,842Logic in Computer Science;Programming Languages
1504.08265
We present optimal online algorithms for two related known problems involving Steiner Arborescence, improving both the lower and the upper bounds. One of them is the well studied continuous problem of the {\em Rectilinear Steiner Arborescence} ($RSA$). We improve the lower bound and the upper bound on the competitive ratio for $RSA$ from $O(\log N)$ and $\Omega(\sqrt{\log N})$ to $\Theta(\frac{\log N}{\log \log N})$, where $N$ is the number of Steiner points. This separates the competitive ratios of $RSA$ and the Symmetric-$RSA$, two problems for which the bounds of Berman and Coulston in STOC 1997 were identical. The second problem is one of the Multimedia Content Distribution problems presented by Papadimitriou et al. in several papers and by Charikar et al. in SODA 1998. It can be viewed as the discrete counterpart (or a network counterpart) of $RSA$. For this second problem we present tight bounds also in terms of the network size, in addition to presenting tight bounds in terms of the number of Steiner points (the latter are similar to those we derived for $RSA$).
[ "cs.DS" ]
cs.DS
Data Structures and Algorithms
1,908Data Structures and Algorithms
1808.05414
Functional data analysis can be seriously impaired by abnormal observations, which can be classified as either magnitude or shape outliers based on their way of deviating from the bulk of data. Identifying magnitude outliers is relatively easy, while detecting shape outliers is much more challenging. We propose turning the shape outliers into magnitude outliers through data transformation and detecting them using the functional boxplot. Besides easing the detection procedure, applying several transformations sequentially provides a reasonable taxonomy for the flagged outliers. A joint functional ranking, which consists of several transformations, is also defined here. Simulation studies are carried out to evaluate the performance of the proposed method using different functional depth notions. Interesting results are obtained in several practical applications.
[ "stat.ME", "stat.CO" ]
stat.ME
stat.CO
Methodology;Computation
4,566Methodology;Computation
2010.05003
In this paper, we propose second-order graph-based neural dependency parsing using message passing and end-to-end neural networks. We empirically show that our approaches match the accuracy of very recent state-of-the-art second-order graph-based neural dependency parsers and have significantly faster speed in both training and testing. We also empirically show the advantage of second-order parsing over first-order parsing and observe that the usefulness of the head-selection structured constraint vanishes when using BERT embedding.
[ "cs.CL", "cs.LG" ]
cs.CL
cs.LG
Computation and Language;Machine Learning
1,237Computation and Language;Machine Learning
physics/0509107
In this paper, we analyze the response of music and book sales to an external field and to buyer herding. We distinguish endogenous and exogenous shocks. We focus on some case studies, whose data have been collected from rankings on amazon.com. We show that an ensemble of equivalent systems quantitatively responds in a similar way to a similar ''external shock'', indicating roads to universality features. In contrast to Sornette et al. [Phys. Rev. Lett. {93}, 228701 (2004)], who seemed to find power law behaviors, in particular at long times (a law interpreted in terms of an epidemic activity), we observe that the relaxation process can equally well be seen as an exponential one that saturates toward an asymptotic state, itself different from the pre-shock state. By studying an ensemble of 111 shocks, on books or records, we show that exogenous and endogenous shocks are discriminated by their short-time behaviour: the relaxation time appears to be about half as long for endogenous shocks as for exogenous ones. We interpret the finding through a simple thermodynamic model with a dissipative force.
[ "physics.soc-ph" ]
physics.soc-ph
Physics and Society
5,463Physics and Society
2208.13275
This study proposes an end-to-end unsupervised diffeomorphic deformable registration framework based on moving mesh parameterization. Using this parameterization, a deformation field can be modeled with its transformation Jacobian determinant and the curl of the end velocity field. The new model of the deformation field has three important advantages: first, it relaxes the need for an explicit regularization term and the corresponding weight in the cost function. The smoothness is implicitly embedded in the solution, which results in a physically plausible deformation field. Second, it guarantees diffeomorphism through explicit constraints applied to the transformation Jacobian determinant to keep it positive. Finally, it is suitable for cardiac data processing, since the nature of this parameterization is to define the deformation field in terms of the radial and rotational components. The effectiveness of the algorithm is investigated by evaluating the proposed method on three different data sets including 2D and 3D cardiac MRI scans. The results demonstrate that the proposed framework outperforms existing learning-based and non-learning-based methods while generating diffeomorphic transformations.
[ "eess.IV", "cs.CV", "cs.LG" ]
eess.IV
cs.CV
Image and Video Processing;Computer Vision and Pattern Recognition;Machine Learning
3,535Image and Video Processing;Computer Vision and Pattern Recognition;Machine Learning
cond-mat/0305311
A model of a carbon nanotube at half filling is studied. The Coulomb interaction is assumed to be unscreened. It is shown that this allows us to develop the adiabatic approximation, which leads to considerable simplifications in calculations of the excitation spectrum. We give a detailed analysis of the spectrum and the phase diagram at half filling and discuss effects of small doping. At small doping several phases develop strong superconducting fluctuations corresponding to various types of pairing.
[ "cond-mat.str-el" ]
cond-mat.str-el
Strongly Correlated Electrons
6,979Strongly Correlated Electrons
1908.05474
Recently, a variety of regularization techniques have been widely applied in deep neural networks, such as dropout, batch normalization, data augmentation, and so on. These methods mainly focus on the regularization of weight parameters to prevent overfitting effectively. In addition, label regularization techniques such as label smoothing and label disturbance have also been proposed with the motivation of adding a stochastic perturbation to labels. In this paper, we propose a novel adaptive label regularization method, which enables the neural network to learn from erroneous experience and update the optimal label representation online. On the other hand, compared with knowledge distillation, which learns the correlation of categories using a teacher network, our proposed method requires only a minuscule increase in parameters without a cumbersome teacher network. Furthermore, we evaluate our method on the CIFAR-10/CIFAR-100/ImageNet datasets for image recognition tasks and the AGNews/Yahoo/Yelp-Full datasets for text classification tasks. The empirical results show significant improvement under all experimental settings.
[ "cs.LG", "stat.ML" ]
cs.LG
stat.ML
Machine Learning;Machine Learning
4,163Machine Learning;Machine Learning
2207.04222
Fluctuations of dynamical quantities are fundamental and inevitable. For the booming research in nanotechnology, huge relative fluctuations come with the reduction of system size, leading to large uncertainty in the estimates of dynamical quantities. Thus, increasing statistical efficiency, i.e., reducing the number of samples required to achieve a given accuracy, is of great significance for accurate estimation. Here we propose a theory as a fundamental solution to this problem by constructing an auxiliary path for each real path. The states on auxiliary paths constitute a canonical ensemble and share the same macroscopic properties with the initial states of the real path. By implementing the theory in molecular dynamics simulations, we obtain a nanoscale Couette flow field with an accuracy of 0.2 {\mu}m/s with relative standard error < 0.1. The required number of samples is reduced by 12 orders of magnitude compared to the conventional method. The predicted thermolubric behavior of water sliding on a self-assembled surface is directly validated by experiment at the same velocity. As the theory only assumes that the system is initially in thermal equilibrium and then driven from that equilibrium by an external perturbation, we believe it could serve as a general approach for extracting accurate estimates of dynamical quantities from large fluctuations, providing insights at the atomic level under experimental conditions, and benefiting studies of mass transport across (biological) nanochannels and fluid film lubrication of nanometer thickness.
[ "stat.CO", "physics.comp-ph", "physics.flu-dyn" ]
stat.CO
physics.comp-ph
Computation;Computational Physics;Fluid Dynamics
7,267longtail
1006.0488
We present a simple estimate of the mass 'deficits' in cored spheroids, as a function of galaxy mass and radius within the galaxy. Previous attempts to measure such deficits depended on fitting some functional form to the profile at large radii and extrapolating inwards; this is sensitive to the assumed functional form and does not allow for variation in nuclear profile shapes. We take advantage of larger data sets to directly construct stellar mass profiles of observed systems and measure the stellar mass enclosed in a series of physical radii (M(<R)), for samples of cusp and core spheroids at the same stellar mass. There is a significant bimodality in this distribution at small radii, and we non-parametrically measure the median offset between core and cusp populations (the deficit Delta_M(<R)). We construct the scoured mass profile as a function of radius, without reference to any assumed functional form. The mass deficit rises in power-law fashion (Delta_M(<R) ~ R^{1.3-1.8}) from a significant but small mass at R<10pc, to asymptote to a maximum ~0.5-2 M_BH at ~100pc. At larger radii there is no statistically significant separation between populations; the upper limit to the cumulative scoured mass at ~kpc is ~2-4 M_BH. This does not depend strongly on stellar mass. The dispersion in M(<R) appears larger in the core population, possibly reflecting the fact that scouring increases the scatter in profile shapes. These results are in good agreement with models of scouring from BH binary systems.
[ "astro-ph.CO", "astro-ph.GA", "astro-ph.HE" ]
astro-ph.CO
astro-ph.GA
Cosmology and Nongalactic Astrophysics;Astrophysics of Galaxies;High Energy Astrophysical Phenomena
1,732Cosmology and Nongalactic Astrophysics;Astrophysics of Galaxies;High Energy Astrophysical Phenomena
1411.0133
The Z(N) dependence of the pure Yang-Mills gluon propagator, in the Landau gauge, is investigated at finite temperature for N=3. Special attention will be given to the behaviour near the critical temperature $T_c$. Our simulations show a complex pattern as expected in a first order phase transition. Furthermore, we identify an order parameter directly associated with the breaking of the SU(3) center symmetry.
[ "hep-lat" ]
hep-lat
High Energy Physics - Lattice
3,092High Energy Physics - Lattice
0906.2086
We prove an equivalence result between the validity of a pointwise Hardy inequality in a domain and uniform capacity density of the complement. This result is new even in Euclidean spaces, but our methods apply in general metric spaces as well. We also present a new transparent proof for the fact that uniform capacity density implies the classical integral version of the Hardy inequality in the setting of metric spaces. In addition, we consider the relations between the above concepts and certain Hausdorff content conditions.
[ "math.AP" ]
math.AP
Analysis of PDEs
205Analysis of PDEs
1809.04807
When a high dimension system of ordinary differential equations is solved numerically, the computer memory capacity may be compromised. Thus, for such systems, it is important to incorporate low memory usage to some other properties of the scheme. In the context of strong stability preserving (SSP) schemes, some low-storage methods have been considered in the literature. In this paper we study 5-stage third order 2N* low-storage SSP explicit Runge-Kutta schemes. These are SSP schemes that can be implemented with 2N memory registers, where N is the dimension of the problem, and retain the previous time step approximation. This last property is crucial for a variable step size implementation of the scheme. In this paper, first we show that the optimal SSP methods cannot be implemented with 2N* memory registers. Next, two non-optimal SSP 2N* low-storage methods are constructed; although their SSP coefficients are not optimal, they achieve some other interesting properties. Finally, we show some numerical experiments.
[ "math.NA" ]
math.NA
Numerical Analysis
5,002Numerical Analysis
2207.06010
Extracting informative representations of molecules using graph neural networks (GNNs) is crucial in AI-driven drug discovery. Recently, the graph research community has been trying to replicate the success of self-supervised pretraining in natural language processing, with several successes claimed. However, we find the benefit brought by self-supervised pretraining on small molecular data can be negligible in many cases. We conduct thorough ablation studies on the key components of GNN pretraining, including pretraining objectives, data splitting methods, input features, pretraining dataset scales, and GNN architectures, to see how they affect the accuracy of the downstream tasks. Our first important finding is that self-supervised graph pretraining does not always have statistically significant advantages over non-pretraining methods in many settings. Second, although noticeable improvement can be observed with additional supervised pretraining, the improvement may diminish with richer features or more balanced data splits. Third, hyper-parameters can have larger impacts on the accuracy of downstream tasks than the choice of pretraining tasks, especially when the scales of downstream tasks are small. Finally, we conjecture that the complexity of some pretraining methods on small molecules might be insufficient, and support this with empirical evidence on different pretraining datasets.
[ "cs.LG", "q-bio.BM" ]
cs.LG
q-bio.BM
Machine Learning;Biomolecules
3,998Machine Learning;Biomolecules
2201.06282
In contrast to SPD matrices, few tools exist to perform Riemannian statistics on the open elliptope of full-rank correlation matrices. The quotient-affine metric was recently built as the quotient of the affine-invariant metric by the congruence action of positive diagonal matrices. The space of SPD matrices had always been thought of as a Riemannian homogeneous space. In contrast, in this work we view SPD matrices as a Lie group and the affine-invariant metric as a left-invariant metric. This unexpected new viewpoint allows us to generalize the construction of the quotient-affine metric and to show that the main Riemannian operations can be computed numerically. However, the uniqueness of the Riemannian logarithm or the Fr{\'e}chet mean is not ensured, which is problematic for computations on the elliptope. Hence, we define three new families of Riemannian metrics on full-rank correlation matrices which provide Hadamard structures, including two flat ones. Thus the Riemannian logarithm and the Fr{\'e}chet mean are unique. We also define a nilpotent group structure for which the affine logarithm and the group mean are unique. We provide the main Riemannian/group operations of these four structures in closed form.
[ "math.DG" ]
math.DG
Differential Geometry
2,010Differential Geometry
2004.00697
A dynamical aspect of quantum gravity on de Sitter spacetime is investigated by holography, or the dS/CFT correspondence. We show that de Sitter spacetime emerges from a free Sp(N) vector model by complexifying the ghost fields and flowing them in parallel to the imaginary axis. We confirm that the emergence of de Sitter spacetime is ensured by conformal symmetry. We also compute the quantum corrections to the cosmological constant up to the next-to-leading order of the 1/N expansion in a proposed holographic approach. As a result, the sub-leading corrections have the opposite sign to the classical value. This implies that quantum gravity on de Sitter spacetime is perturbatively stable and quantum effects make the universe flatter and the cosmological constant smaller.
[ "hep-th", "gr-qc", "hep-lat", "hep-ph" ]
hep-th
gr-qc
High Energy Physics - Theory;General Relativity and Quantum Cosmology;High Energy Physics - Lattice;High Energy Physics - Phenomenology
3,327High Energy Physics - Theory;General Relativity and Quantum Cosmology;High Energy Physics - Lattice;High Energy Physics - Phenomenology
1208.3333
In recent research it was found that the fundamental shear-localizing instability of amorphous solids under external strain, which eventually results in a shear band and failure, consists of a highly correlated array of Eshelby quadrupoles all having the same orientation and some density $\rho$. In this paper we calculate analytically the energy $E(\rho,\gamma)$ associated with such highly correlated structures as a function of the density $\rho$ and the external strain $\gamma$. We show that for strains smaller than a characteristic strain $\gamma_Y$ the total strain energy initially increases as the quadrupole density increases, but that for strains larger than $\gamma_Y$ the energy monotonically decreases with quadrupole density. We identify $\gamma_Y$ as the yield strain. Its value, derived from values of the quadrupole strength based on the atomistic model, agrees with that from the computed stress-strain curves and broadly with experimental results.
[ "cond-mat.soft" ]
cond-mat.soft
Soft Condensed Matter
6,537Soft Condensed Matter
1407.0125
In this work, using a subtraction renormalization mechanism, zero-point quantum fluctuations for bosonic scalar fields in a de Sitter-like background are investigated. Using the observed value of the spectral index, $n_s(k)$, the best value of the first slow-roll parameter, $\epsilon$, is obtained for the massive scalar field. In addition, the energy density of vacuum quantum fluctuations for the massless scalar field is obtained. The effects of these fluctuations on the other components of the Universe are studied. By solving the conservation equation for some different examples, the energy densities of the different components of the Universe are obtained. In the case in which all components of the Universe interact, different dissipation functions, $\tilde{Q}_{i}$, are considered. The time evolution of ${\rho_{DE}(z)}/{\rho_{cri}(z)}$ shows that $\tilde{Q}=3 \gamma H(t) \rho_{m}$ gives the best agreement with observational data, including the CMB, BAO, and SNeIa data sets.
[ "gr-qc" ]
gr-qc
General Relativity and Quantum Cosmology
2,674General Relativity and Quantum Cosmology
1406.7532
Xclaim (x-ray core level atomic multiplets) is a graphical interface for the calculation of core-hole spectroscopy and ground state properties within a charge-transfer multiplet model taking into account a many-body hamiltonian with Coulomb, spin-orbit, crystal-field, and hybridization interactions. Using Hartree-Fock estimates for the Coulomb and spin-orbit interactions and ligand field parameters (crystal-field, hybridization and charge-transfer energy), the program can calculate x-ray absorption spectroscopy (XAS), x-ray photoemission spectroscopy (XPS), photoemission spectroscopy (PES) and inverse photoemission (IPES) for d- and f-valence metals and different absorption edges. The program runs on Linux, Windows and MacOS platforms.
[ "cond-mat.str-el" ]
cond-mat.str-el
Strongly Correlated Electrons
6,979Strongly Correlated Electrons
2307.09139
We report the synthesis of transition-metal-doped ferromagnetic elemental single-crystal semiconductors with quantum oscillations using the physical vapor transport method. The 7.7 atom% Cr-doped Te crystals (Cr_Te) show ferromagnetism, butterfly-like negative magnetoresistance in the low temperature (< 3.8 K) and low field (< 0.15 T) region, and high Hall mobility, e.g., 1320 cm2 V-1 s-1 at 30 K and 350 cm2 V-1 s-1 at 300 K, implying that Cr_Te crystals are ferromagnetic elemental semiconductors. When B // c // I, the maximum negative MR is -27% at T = 20 K and B = 8 T. In the low-temperature semiconducting region, Cr_Te crystals show logarithmic quantum oscillations dominated by strong discrete scale invariance when the direction of the magnetic field B is parallel to the [100] crystallographic direction, and show Shubnikov-de Haas (SdH) oscillations dominated by Landau quantization for the B // [210] direction, which suggests the broken rotation symmetry of the Fermi pockets in the Cr_Te crystals. The finding of the coexistence of multiple quantum oscillations and ferromagnetism in such an elemental quantum material may inspire further study of narrow bandgap semiconductors with ferromagnetism and quantum phenomena.
[ "cond-mat.mtrl-sci" ]
cond-mat.mtrl-sci
Materials Science
4,287Materials Science
2008.07466
We propose a method for controlled narrative/story generation where we are able to guide the model to produce coherent narratives with user-specified target endings by interpolation: for example, we are told that Jim went hiking and at the end Jim needed to be rescued, and we want the model to incrementally generate steps along the way. The core of our method is an interpolation model based on GPT-2 which conditions on a previous sentence and a next sentence in a narrative and fills in the gap. Additionally, a reranker helps control for coherence of the generated text. With human evaluation, we show that ending-guided generation results in narratives which are coherent, faithful to the given ending guide, and require less manual effort on the part of the human guide writer than past approaches.
[ "cs.CL" ]
cs.CL
Computation and Language
1,168Computation and Language
0807.2036
Fermionic mean-field theory and variational Monte Carlo calculations are employed to shed light on the possible uniform ground states of the Heisenberg model on the pyrochlore lattice. Among the various flux configurations, we find the chiral spin states carrying \pm pi/2 flux through each triangular face to be the most stable both within the mean-field theory and the projected wave-function studies. Properties of the spin-spin correlation function and the chirality order parameter are calculated for the projected wave functions. Mean-field band structures are examined.
[ "cond-mat.str-el" ]
cond-mat.str-el
Strongly Correlated Electrons
6,979Strongly Correlated Electrons
1209.3444
We compare deformations of algebras to deformations of schemes in the setting of invariant theory. Our results generalize comparison theorems of Schlessinger and the second author for projective schemes. We consider deformations (abstract and embedded) of a scheme $X$ which is a good quotient of a quasi-affine scheme $X^\prime$ by a linearly reductive group $G$ and compare them to invariant deformations of an affine $G$-scheme containing $X^\prime$ as an open invariant subset. The main theorems give conditions for when the comparison morphisms are smooth or isomorphisms.
[ "math.AG" ]
math.AG
Algebraic Geometry
47Algebraic Geometry
1901.00795
We propose to model mortality hazard rates for a human population using the exponential of the solution of a stochastic differential equation (SDE). The noise in the SDE is a fractional Brownian motion. We use the well-known fractional Ornstein-Uhlenbeck process. Using the Hurst parameter, we show that mortality rates exhibit long-term memory. The proposed model is a generalization of the model introduced in [6], which used an SDE driven by a Brownian motion. We tested our model with the Italian population from 1950 to 2004.
[ "math.PR", "stat.AP" ]
math.PR
stat.AP
Probability;Applications
5,720Probability;Applications
1203.3482
Computing the probability of a formula given the probabilities or weights associated with other formulas is a natural extension of logical inference to the probabilistic setting. Surprisingly, this problem has received little attention in the literature to date, particularly considering that it includes many standard inference problems as special cases. In this paper, we propose two algorithms for this problem: formula decomposition and conditioning, which is an exact method, and formula importance sampling, which is an approximate method. The latter is, to our knowledge, the first application of model counting to approximate probabilistic inference. Unlike conventional variable-based algorithms, our algorithms work in the dual realm of logical formulas. Theoretically, we show that our algorithms can greatly improve efficiency by exploiting the structural information in the formulas. Empirically, we show that they are indeed quite powerful, often achieving substantial performance gains over state-of-the-art schemes.
[ "cs.AI" ]
cs.AI
Artificial Intelligence
361Artificial Intelligence
1601.01576
Social structures emerge as a result of individuals managing a variety of different social relationships. Societies can be represented as highly structured dynamic multiplex networks. Here we study the dynamical origins of the specific community structures of a large-scale social multiplex network of a human society that interacts in a virtual world of a massive multiplayer online game. There we find substantial differences in the community structures of different social actions, represented by the various network layers in the multiplex. Community size distributions are either similar to a power-law or appear to be centered around a size of 50 individuals. To understand these observations we propose a voter model that is built around the principle of triadic closure. It explicitly models the co-evolution of node- and link-dynamics across different layers of the multiplex. Depending on link- and node fluctuation rates, the model exhibits an anomalous shattered fragmentation transition, where one layer fragments from one large component into many small components. The observed community size distributions are in good agreement with the predicted fragmentation in the model. We show that the empirical pairwise similarities of network layers, in terms of link overlap and degree correlations, practically coincide with the model. This suggests that several detailed features of the fragmentation in societies can be traced back to the triadic closure processes.
[ "physics.soc-ph", "cond-mat.stat-mech", "cs.SI" ]
physics.soc-ph
cond-mat.stat-mech
Physics and Society;Statistical Mechanics;Social and Information Networks
5,548Physics and Society;Statistical Mechanics;Social and Information Networks
0802.3213
The fundamental properties of stellar clusters, such as the age or the total initial mass in stars, are often inferred from population synthesis models. The predicted properties are then used to constrain the physical mechanisms involved in the formation of such clusters in a variety of environments. Population synthesis models cannot, however, be applied blindly to such systems. We show that synthesis models cannot be applied in the usual straightforward way to small-mass clusters (say, M < few times 10**4 Mo). The reason is that the basic hypothesis underlying population synthesis (a fixed proportionality between the number of stars in the different evolutionary phases) is not fulfilled in these clusters due to their small number of stars. This incomplete sampling of the stellar mass function results in a non-gaussian distribution of the mass-luminosity ratio for clusters that share the same evolutionary conditions (age, metallicity and initial stellar mass distribution function). We review some tests that can be carried out a priori to check whether a given cluster can be analysed with the fully-sampled standard population synthesis models, or whether, on the contrary, a probabilistic framework must be used. This leads to a re-assessment in the estimation of the low-mass tail in the distribution function of initial masses of stellar clusters.
[ "astro-ph" ]
astro-ph
Astrophysics
463Astrophysics
1907.07279
This paper addresses the problem of target detection and localisation in a limited area using multiple coordinated agents. The swarm of Unmanned Aerial Vehicles (UAVs) determines, as fast as possible, the position of the stack whose effluents disperse into a gas plume in a certain production area; the time variability of the target makes the problem challenging to model and solve. Three different exploration algorithms are designed and compared. Besides the exploration strategies, the paper reports a solution for quick convergence towards the actual stack position once it is detected by one member of the team. Both the navigation and localisation algorithms are fully distributed and based on consensus theory. Simulations on realistic case studies are reported.
[ "cs.RO", "cs.DC" ]
cs.RO
cs.DC
Robotics;Distributed, Parallel, and Cluster Computing
6,369Robotics;Distributed, Parallel, and Cluster Computing
2303.04273
As some of the most compact stellar objects in the universe, neutron stars are unique cosmic laboratories. The study of neutron stars provides an ideal theoretical testbed for investigating both physics at supra-nuclear densities as well as fundamental physics. Their global astrophysical properties however depend strongly on the star's internal structure, which is currently unknown due to uncertainties in the equation of state. In recent years, a lot of work has revealed the existence of universal relations between stellar quantities that are insensitive to the equation of state. At the same time, the fields of multimessenger astronomy and machine learning have both advanced significantly. As such, there has been a confluence of research into their combination and the field is growing. In this paper, we develop universal relations for rapidly rotating neutron stars, by using supervised machine learning methods, thus proposing a new way of discovering and validating such relations. The analysis is performed for tabulated hadronic, hyperonic, and hybrid EoS-ensembles that obey the multimessenger constraints and cover a wide range of stiffnesses. The relations discussed could provide an accurate tool to constrain the equation of state of nuclear matter when measurements of the relevant observables become available.
[ "astro-ph.HE", "gr-qc" ]
astro-ph.HE
gr-qc
High Energy Astrophysical Phenomena;General Relativity and Quantum Cosmology
3,022High Energy Astrophysical Phenomena;General Relativity and Quantum Cosmology
1707.04523
The realization of Dirac and Weyl physics in solids has made topological materials one of the main focuses of condensed matter physics. Recently, the topic of topological nodal line semimetals, materials in which Dirac or Weyl-like crossings along special lines in momentum space create either a closed ring or line of degeneracies, rather than discrete points, has become a hot topic in topological quantum matter. Here we review the experimentally confirmed and theoretically predicted topological nodal line semimetals, focusing in particular on the symmetry protection mechanisms of the nodal lines in various materials. Three different mechanisms: a combination of inversion and time-reversal symmetry, mirror reflection symmetry, and non-symmorphic symmetry, and their robustness under the effect of spin orbit coupling are discussed. We also present a new Weyl nodal line material, the Te-square net compound KCu$_2$EuTe$_4$, which has several Weyl nodal lines including one extremely close to the Fermi level ($<$30 meV below E$_F$). Finally, we discuss potential experimental signatures for observing exotic properties of nodal line physics.
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ]
cond-mat.mtrl-sci
cond-mat.mes-hall
Materials Science;Mesoscale and Nanoscale Physics
4,330Materials Science;Mesoscale and Nanoscale Physics
1407.2344
In this paper, we study the Cauchy problem of the Euler-Nernst-Planck-Possion system. We obtain global well-posedness for the system in dimension $d=2$ for any initial data in $H^{s_1}(\mathbb{R}^2)\times H^{s_2}(\mathbb{R}^2)\times H^{s_2}(\mathbb{R}^2)$ under certain conditions of $s_1$ and $s_2$.
[ "math.AP" ]
math.AP
Analysis of PDEs
205Analysis of PDEs
cond-mat/0109083
We obtain the exact position of the percolation threshold in intentionally damaged scale-free networks.
[ "cond-mat.stat-mech" ]
cond-mat.stat-mech
Statistical Mechanics
6,821Statistical Mechanics
2012.10262
We show that filling an order with a large number of distinct counterparts incurs additional market impact, as opposed to filling the order with a small number of counterparts. For best execution, therefore, it may be beneficial to opportunistically fill orders with as few counterparts as possible in Large-in-scale (LIS) venues. This article introduces the concept of concentrated trading, a situation that occurs when a large fraction of buying or selling in a given time period is done by one or a few traders, for example when executing a large order. Using London Stock Exchange data, we show that concentrated trading suffers price impact in addition to impact caused by (smart) order routing. However, when matched with similarly concentrated counterparts on the other side of the market, the impact is greatly reduced. This suggests that exposing an order on LIS venues is expected to result in execution performance improvement.
[ "q-fin.TR" ]
q-fin.TR
Trading and Market Microstructure
7,254Trading and Market Microstructure
2110.06133
Top-tier hotels are now shifting to digital methods of understanding their customers in order to maintain and ensure satisfaction. Rather than the conventional approach of written reviews or interviews, hotels are now heavily investing in Artificial Intelligence, particularly Machine Learning solutions. Analysis of online customer reviews lets companies make decisions more effectively than conventional analysis does. The purpose of this research is to measure hotel service quality. The proposed approach emphasizes service quality dimensions in reviews of the top-5 luxury hotels in Indonesia that appear on the online travel site TripAdvisor, based on its Best of 2018 section. In this research, we use a model based on a simple Bayesian classifier to classify each customer review into one of the service quality dimensions. Our model was able to separate the classes properly, as measured by accuracy, kappa, recall, precision, and F-measure. To uncover latent topics in the customers' opinions we use Topic Modeling. We found that the most common issue concerns responsiveness, as it received the lowest percentage compared to the other dimensions. Our research provides end customers with a faster overview of hotel rankings based on service quality, summarized from previous online reviews.
[ "cs.IR", "cs.SI", "econ.GN", "q-fin.EC" ]
cs.IR
cs.SI
Information Retrieval;Social and Information Networks;General Economics;Economics
7,267longtail
2110.02023
We study the effect on the parton distribution functions (PDFs) from the inclusion of projected measurements in the Drell-Yan (DY) di-lepton production neutral channel of the angular coefficient associated to the $Z$-boson longitudinal polarisation. The pseudodata, generated assuming two luminosity scenarios, is employed for the profiling of existing PDF sets using the open-source platform xFitter. We find the observable particularly relevant in constraining the gluon PDF, which in turn translates into a reduction of the systematic uncertainties of the Standard Model (SM) Higgs boson production cross section.
[ "hep-ph" ]
hep-ph
High Energy Physics - Phenomenology
3,129High Energy Physics - Phenomenology
2302.06145
The modified Langevin noise formalism has been proposed for the correct characterization of quantum electromagnetic fields in the presence of finite-sized lossy dielectric objects in free space. The main modification to the original one (also known as the Green's function approach, available only for bulk inhomogeneous lossy dielectric media) was to add fluctuating sources in reaction to the radiation loss. Consequently, a resulting electric field operator is now determined by (i) boundary-assisted and (ii) medium-assisted fields on an equal footing, which are fluctuating sources due to radiation and medium losses, respectively. However, due to the lengthy mathematical manipulation and complicated concepts, the validity of the modified Langevin noise formalism has not yet been clearly checked. In this work, we propose and develop a novel numerical framework for the modified Langevin noise formalism by exploiting computational electromagnetic methods (CEM). Specifically, we utilize the finite-element method to numerically solve plane-wave-scattering and point-source-radiation problems whose solutions are boundary-assisted and medium-assisted fields, respectively. Based on the developed numerical framework, we calculate the Purcell factor of a two-level atom inside or outside a lossy dielectric slab. It is numerically proved, for the first time, that one can retrieve the conventional expression of the spontaneous emission rate, viz., the imaginary part of the Green's function. The proposed numerical framework is particularly useful for estimating the dynamics of multi-level atoms near practical plasmonic structures or metasurfaces.
[ "quant-ph" ]
quant-ph
Quantum Physics
5,985Quantum Physics
1705.03189
Every Serre subcategory of an abelian category is assigned a unique type. The type of a Serre subcategory of a Grothendieck category is in the list: $$(0, 0), \ (0, -1), \ (1, -1), \ (0, -2), \ (1, -2), \ (2, -1), \ (+\infty, -\infty);$$ and for each $(m, -n)$ in this list, there exists a Serre subcategory such that its type is $(m, -n)$. This uses right (left) recollements of abelian categories, Tachikawa-Ohtake [TO] on strongly hereditary torsion pairs, and Geigle-Lenzing [GL] on localizing subcategories. If all the functors in a recollement of abelian categories are exact, then the recollement splits. Quite surprisingly, any left recollement of a Grothendieck category can be extended to a recollement; but this is not true for a right recollement. Thus, a colocalizing subcategory of a Grothendieck category is localizing; but the converse is not true. All these results do not hold in triangulated categories.
[ "math.CT" ]
math.CT
Category Theory
757Category Theory
2102.10151
The increasing number of Photovoltaic (PV) systems connected to the power grid are vulnerable to the projection of shadows from moving clouds. Global Solar Irradiance (GSI) forecasting allows smart grids to optimize the energy dispatch, preventing energy shortages caused by occlusion of the sun. This investigation compares the performances of machine learning algorithms (not requiring labelled images for training) for real-time segmentation of clouds in images acquired using a ground-based infrared sky imager. Real-time segmentation is utilized to extract cloud features using only the pixels in which clouds are detected.
[ "eess.IV" ]
eess.IV
Image and Video Processing
3,521Image and Video Processing
0902.1844
The nuclear cosmochronometer suggested by Hayakawa et al. [Phys. Rev. C 77, 065802 (2008)] based on the 138La-138Ce-136Ce abundance ratio in presolar grains would be affected by the existence of a hitherto unknown low-energy 1+ state in 138La. Results of a recent high-resolution study of the 138Ba(3He,t) reaction under kinematics selectively populating 1+ states in 138La through Gamow-Teller transitions provide strong evidence against the existence of such a hypothetical state.
[ "astro-ph.SR", "nucl-ex" ]
astro-ph.SR
nucl-ex
Solar and Stellar Astrophysics;Nuclear Experiment
6,718Solar and Stellar Astrophysics;Nuclear Experiment
2012.13526
The extended scalar-tensor and vector-tensor theories admit black hole solutions with the nontrivial profiles of the scalar and vector fields, respectively. The disformal transformation maps a solution in a class of the scalar-tensor or vector-tensor theories to that in another class, and hence it can be a useful tool to construct a new nontrivial solution from the known one. First, we investigate how the stationary and axisymmetric solutions in the vector-tensor theories without and with the $U(1)$ gauge symmetry are disformally transformed. We start from a stationary and axisymmetric solution satisfying the circularity conditions, and show that in both the cases the metric of the disformed solution in general does not satisfy the circularity conditions. Using the fact that a solution in a class of the vector-tensor theories with the vanishing field strength is mapped to that in a class of the shift-symmetric scalar-tensor theories, we derive the disformed stationary and axisymmetric solutions in a class of these theories, and show that the metric of the disformed solutions does not satisfy the circularity conditions if the scalar field depends on the time or azimuthal coordinate. We also confirm that in the scalar-tensor theories without the shift symmetry, the disformed stationary and axisymmetric solutions satisfy the circularity conditions. Second, we investigate the disformal transformations of the stationary and axisymmetric black hole solutions in the generalized Proca theory with the nonminimal coupling to the Einstein tensor, the shift-symmetric scalar-tensor theory with the nonminimal derivative coupling to the Einstein tensor, the Einstein-Maxwell theory, and the Einstein-conformally coupled scalar field theory. We show that the disformal transformations modify the causal properties of the spacetime.
[ "gr-qc" ]
gr-qc
General Relativity and Quantum Cosmology
2,674General Relativity and Quantum Cosmology
2205.11741
Since it is difficult to apply existing methods of friction and heat flux decomposition on complex surfaces, a combined decomposition method of friction and heat flux with a clear physical interpretation is proposed, which is based on the FIK and RD decomposition methods and can be applied to arbitrary surfaces. Based on this method, the aerothermodynamic characteristics of the bistable states of curved compression ramps are analyzed from the perspective of energy transformation. The results show that the decrease of friction in the interaction region of the attachment state and the minimum values of friction in the separation bubble are all caused by the energy injection of the work done by the adverse pressure gradient. The peak friction is mainly induced by viscous dissipation, and its position is affected by the mechanical energy transport. The peak heat flux is mainly induced by viscous dissipation, and the enthalpy transport of the separation state plays a greater role in the peak heat flux generation than that of the attachment state. These results indicate that reducing viscous dissipation is a potential way to control friction and heat flux simultaneously.
[ "physics.flu-dyn" ]
physics.flu-dyn
Fluid Dynamics
2,452Fluid Dynamics
1001.5060
The class of type Ic supernovae have drawn increasing attention since 1998 owing to their sparse association (only four so far) with long duration gamma-ray bursts. Although both phenomena originate from the core collapse of a massive star, supernovae emit mostly at optical wavelengths, whereas GRBs emit mostly in soft gamma-rays or hard X-rays. Though the GRB central engine generates ultra-relativistic jets, which beam the early emission into a narrow cone, no relativistic outflows have hitherto been found in type Ib/c supernova explosions, despite theoretical expectations and searches. Here we report radio (interferometric) observations that reveal a mildly relativistic expansion in a nearby type Ic supernova, SN 2007gr. Using two observational epochs 60 days apart, we detect expansion of the source and establish a conservative lower limit for the average apparent expansion velocity of 0.6c. Independently, a second mildly relativistic supernova has been reported. Contrary to the radio data, optical observations of SN 2007gr indicate a typical type Ic supernova with ejecta velocities ~6000 km/s, much lower than in GRB-associated supernovae. We conclude that in SN 2007gr a small fraction of the ejecta produced a low-energy mildly relativistic bipolar radio jet, while the bulk of the ejecta were slower and, as shown by optical spectro-polarimetry, mildly aspherical.
[ "astro-ph.HE" ]
astro-ph.HE
High Energy Astrophysical Phenomena
2,990High Energy Astrophysical Phenomena
2012.05813
We study the local limits of uniform high genus bipartite maps with prescribed face degrees. We prove the convergence towards a family of infinite maps of the plane, the q-IBPMs, which exhibit both a spatial Markov property and a hyperbolic behaviour. Therefore, we observe a similar local behaviour for a wide class of models of random high genus maps, which can be seen as a result of universality. Our results cover all the regimes where the expected degree of the root face remains finite in the limit. This follows a work by the same authors on high genus triangulations arXiv:1902.00492.
[ "math.PR", "math.CO" ]
math.PR
math.CO
Probability;Combinatorics
5,726Probability;Combinatorics
2307.06336
Recent compilations of NIRSpec emission line galaxies have shown a mild redshift evolution of the FMR at $z > 4$, indicating that the FMR alone is not fully capable of capturing the redshift evolution of the mass-metallicity relation: $z > 4$ galaxies appear more metal-poor than the FMR predictions. There is evidence that the most metal-deficient high-redshift galaxies are also the most compact. In this work, we further investigate this anti-correlation by leveraging the wealth of data gathered through the first cycle of JWST. We compile a sample of 427 $z > 3$ galaxies covered by both the NIRSpec prism and NIRCam short-wavelength photometry, consisting of 334 galaxies from the publicly available programs and 93 galaxies from the first data release of the JADES program. We use this sample to infer the redshift evolution of the FMR from $z = 3$ to $z \sim 10$, further confirming the previously reported mild redshift evolution. We measure the rest-ultraviolet (UV) sizes of $z > 4$ galaxies, inferring the mass-size relation at $z = 4-10$ with a power-law slope of $0.21 \pm 0.04$. We investigate the redshift evolution of the mass-size relation, finding that at a fixed stellar mass, higher redshift galaxies appear more compact. The degree of this redshift evolution depends on the stellar mass, with the lowest mass galaxies showing the strongest redshift evolution and the most massive galaxies ($\log(M_{\star}/M_{\odot}) > 9$) showing no redshift evolution. We investigate the anti-correlation between the compactness of galaxies and their gas-phase metallicities, finding that the more compact galaxies appear more metal-deficient and therefore more offset from the local calibration of the FMR. (abridged)
[ "astro-ph.GA" ]
astro-ph.GA
Astrophysics of Galaxies
464Astrophysics of Galaxies
1108.5382
A significant fraction of unstable multiple planet systems likely scatter during the transitional disc phase as gas damping becomes ineffectual. Using an ensemble of FARGO hydrodynamic simulations and MERCURY n-body integrations, we directly follow planet-disc and planet-planet interactions through the clearing phase and on through 50 Myr of dynamical evolution. Disc clearing occurs via X-ray driven photoevaporation. The hydrodynamic evolution of individual scattering systems is complex, and involves phases in which massive planets orbit within eccentric gaps, or accrete directly from the disc without a gap. Comparing the results to a gas-free model, we find that the n-body dynamics and hydrodynamics of scattering into one- and two-planet final states are almost identical. The eccentricity distributions in these channels are almost unaltered by the presence of gas. The hydrodynamic simulations, however, also form low eccentricity three-planet systems in long-term stable configurations, and the admixture of these systems results in modestly lower eccentricities in hydrodynamic as opposed to gas-free simulations. The incidence of these three-planet systems is likely a function of the initial conditions; different planet setups (number or spacing) may change the character of this result. We analyze the properties of surviving multiple planet systems, and show that only a small fraction (a few percent) enter mean-motion resonances after scattering, while a larger fraction form stable resonant chains and avoid scattering entirely. Our results remain consistent with the hypothesis that exoplanet eccentricity results from scattering, though the detailed agreement between observations and gas-free simulation results is likely coincidental. We discuss the prospects for testing scattering models by observing planets or non-axisymmetric gas structure in transitional discs.
[ "astro-ph.EP", "astro-ph.SR" ]
astro-ph.EP
astro-ph.SR
Earth and Planetary Astrophysics;Solar and Stellar Astrophysics
2,390Earth and Planetary Astrophysics;Solar and Stellar Astrophysics
1110.1812
We study properties of a Wigner crystal in snaked nanochannels and show that it is characterized by a conducting sliding phase at low charge densities and an insulating pinned phase emerging above a certain critical charge density. We trace parallels between this model problem and the Little suggestion for electron transport in organic molecules. We also show that in the presence of a periodic potential inside the snaked channel the sliding phase exists only inside a certain window of electron densities, which has similarities with the pressure dependence of conductivity in organic conductors. Our studies show the emergence of a dynamical glassy phase in a purely periodic potential in the absence of any disorder, which can explain the enormously slow variations of resistivity in organic conductors. Finally we discuss the KAM concept of superfluidity induced by the repulsive Coulomb interaction between electrons. We argue that the transition from the sliding KAM phase to the pinned Aubry phase corresponds to the superfluid-insulator transition.
[ "cond-mat.str-el" ]
cond-mat.str-el
Strongly Correlated Electrons
6,979Strongly Correlated Electrons
1605.05177
Using femtosecond time- and angle-resolved photoemission spectroscopy we investigate the effect of electron doping on the electron dynamics in Ba(Fe_{1-x}Co_x)_2As_2 in a range of 0 < x < 0.15 at temperatures slightly above the N\'eel temperature. By analyzing the time-dependent photoemission intensity of the pump laser excited population as a function of energy, we found that the relaxation times at 0 < E-E_F < 0.2 eV are doping dependent and about 100 fs shorter at optimal doping than for overdoped and parent compounds. Analysis of the relaxation rates also reveals the presence of a pump fluence dependent step in the relaxation time at E-E_F = 200meV which we explain by coupling of the excited electronic system to a boson of this energy. We compare our results with static ARPES and transport measurements and find disagreement and agreement concerning the doping-dependence, respectively. We discuss the effect of the electron-boson coupling on the energy-dependent relaxation and assign the origin of the boson to a magnetic excitation.
[ "cond-mat.supr-con" ]
cond-mat.supr-con
Superconductivity
7,066Superconductivity
1503.01110
In this paper, we present the first results from the Renaissance Simulations, a suite of extremely high-resolution and physics-rich AMR calculations of high redshift galaxy formation performed on the Blue Waters supercomputer. These simulations contain hundreds of well-resolved galaxies at $z \sim 25-8$, and make several novel, testable predictions. Most critically, we show that the ultraviolet luminosity function of our simulated galaxies is consistent with observations of high-z galaxy populations at the bright end of the luminosity function (M$_{1600} \leq -17$), but at lower luminosities is essentially flat rather than rising steeply, as has been inferred by Schechter function fits to high-z observations, and has a clearly-defined lower limit in UV luminosity. This behavior of the luminosity function is due to two factors: (i) the strong dependence of the star formation rate on halo virial mass in our simulated galaxy population, with lower-mass halos having systematically lower star formation rates and thus lower UV luminosities; and (ii) the fact that halos with virial masses below $\simeq 2 \times 10^8$ M$_\odot$ do not universally contain stars, with the fraction of halos containing stars dropping to zero at $\simeq 7 \times 10^6$ M$_\odot$. Finally, we show that the brightest of our simulated galaxies may be visible to current and future ultra-deep space-based surveys, particularly if lensed regions are chosen for observation.
[ "astro-ph.GA" ]
astro-ph.GA
Astrophysics of Galaxies
464Astrophysics of Galaxies
hep-ph/9704214
If a leptoquark is produced at HERA as a narrow resonance, various effects tend to broaden the measurable mass distribution considerably. These effects are discussed here, with special emphasis on initial- and final-state QCD radiation. A proper understanding is important to assess the significance of data and to devise strategies for better mass reconstruction.
[ "hep-ph" ]
hep-ph
High Energy Physics - Phenomenology
3,129High Energy Physics - Phenomenology
2312.01005
In this paper, we introduce a novel data augmentation methodology based on Conditional Progressive Generative Adversarial Networks (CPGAN) to generate diverse black hole (BH) images, accounting for variations in spin and electron temperature prescriptions. These generated images are valuable resources for training deep learning algorithms to accurately estimate black hole parameters from observational data. Our model can generate BH images for any spin value within the range of [-1, 1], given an electron temperature distribution. To validate the effectiveness of our approach, we employ a convolutional neural network to predict the BH spin using both the GRMHD images and the images generated by our proposed model. Our results demonstrate a significant performance improvement when training is conducted with the augmented dataset while testing is performed using GRMHD simulated data, as indicated by the high R2 score. Consequently, we propose that GANs can be employed as cost-effective models for black hole image generation and can reliably augment training datasets for other parameterization algorithms.
[ "astro-ph.GA", "cs.LG", "eess.IV" ]
astro-ph.GA
cs.LG
Astrophysics of Galaxies;Machine Learning;Image and Video Processing
7,267longtail
hep-th/0510038
In this Letter we have proposed a point particle model that generates a noncommutative three-space, with the coordinate brackets being Lie algebraic in nature, in particular isomorphic to the angular momentum algebra. The work is in the spirit of our earlier works in this connection, {\it {i.e.}} PLB 618 (2005)243 and PLB 623 (2005)251, where the $\kappa $-Minkowski form of noncommutative spacetime was considered. This non-linear and operatorial nature of the configuration space coordinate algebra can pose problems regarding its quantization. This prompts us to embed the model in the Batalin-Tyutin extended space where the equivalent model comprises phase space variables satisfying a canonical algebra. We also compare our present model with the point particle model, previously proposed by us, in the context of $\kappa$-Minkowski spacetime.
[ "hep-th" ]
hep-th
High Energy Physics - Theory
3,266High Energy Physics - Theory
2209.08611
This paper presents a solution to the automatic task planning problem for multi-agent systems. A formal framework is developed based on the Nondeterministic Finite Automata with $\epsilon$-transitions, where given the capabilities, constraints and failure modes of the agents involved, an initial state of the system and a task specification, an optimal solution is generated that satisfies the system constraints and the task specification. The resulting solution is guaranteed to be complete and optimal; moreover a heuristic solution that offers significant reduction of the computational requirements while relaxing the completeness and optimality requirements is proposed. The constructed system model is independent from the initial condition and the task specification, alleviating the need to repeat the costly pre-processing cycle for solving other scenarios, while allowing the incorporation of failure modes on-the-fly. Two case studies are provided: a simple one to showcase the concepts of the proposed methodology and a more elaborate one to demonstrate the effectiveness and validity of the methodology.
[ "cs.RO", "cs.FL" ]
cs.RO
cs.FL
Robotics;Formal Languages and Automata Theory
7,267longtail
0707.3148
We use data from observational cosmology to put constraints on higher-dimensional extensions of general relativity in which the effective four-dimensional dark-energy density (or cosmological "constant") decays with time. In particular we study the implications of this decaying dark energy for the age of the universe, large-scale structure formation, big-bang nucleosynthesis and the magnitude-redshift relation for Type Ia supernovae. Two of these tests (age and the magnitude-redshift relation) place modest lower limits on the free parameter of the theory, a cosmological length scale L akin to the de Sitter radius. These limits will improve if experimental uncertainties on supernova magnitudes can be reduced around z=1.
[ "astro-ph" ]
astro-ph
Astrophysics
463Astrophysics
2010.12261
In this paper we establish the well-posedness of the Muskat problem with surface tension and equal viscosities in the subcritical Sobolev spaces $W^s_p(\mathbb{R})$, where ${p\in(1,2]}$ and ${s\in(1+1/p,2)}$. This is achieved by showing that the mathematical model can be formulated as a quasilinear parabolic evolution problem in $W^{\overline{s}-2}_p(\mathbb{R})$, where ${\overline{s}\in(1+1/p,s)}$. Moreover, we prove that the solutions become instantly smooth and we provide a criterion for the global existence of solutions.
[ "math.AP" ]
math.AP
Analysis of PDEs
205Analysis of PDEs
1403.2634
In [Bl1], it is proved that a subgroup of $PL_{+}(I)$ has a finite height if and only if it is solvable. We prove the "only if" part for any subgroup of Homeo$_{+}(I)$, and present a construction which indicates a plethora of examples of solvable groups with infinite height.
[ "math.GR", "math.DS" ]
math.GR
math.DS
Group Theory;Dynamical Systems
2,939Group Theory;Dynamical Systems
1012.4269
Let $X$ be an analytic space of pure dimension. We introduce a formalism to generate intrinsic weighted Koppelman formulas on $X$ that provide solutions to the $\dbar$-equation. We obtain new existence results for the $\dbar$-equation, as well as new proofs of various known results.
[ "math.CV" ]
math.CV
Complex Variables
1,135Complex Variables
1811.07093
In this manuscript we investigate the long-term behavior of a single-species fishery which is harvested by several fleets. The time evolution of this population is modeled by a discrete-time stochastic age-structured model. We assume that uncertainty only affects the recruitment. First, for the deterministic version of this model, we characterize the equilibrium yields in terms of the fishing mortality. Then, for the stochastic version, we introduce the concepts of maximum expected, log-expected and harmonic expected sustainable yield, and we analyze how the uncertainty affects the behavior of these yields and their stationary distribution. All the numerical simulations are performed with data obtained from the Patagonian toothfish fishery, which is harvested by four different types of fleets: the Chilean industrial fleet, the Chilean artisanal fleet, the Argentinean longline fleet, and the Argentinean artisanal fleet.
[ "q-bio.PE" ]
q-bio.PE
Populations and Evolution
5,627Populations and Evolution
0712.0768
The strange quark mass is determined from a new QCD Finite Energy Sum Rule (FESR) optimized to reduce considerably the systematic uncertainties arising from the hadronic resonance sector. As a result, the main uncertainty in this determination is due to the value of $\Lambda_{QCD}$. The correlator of axial-vector divergences is used in perturbative QCD to five-loop order, including quark and gluon condensate contributions, in the framework of both Fixed Order (FOPT), and Contour Improved Perturbation Theory (CIPT). The latter exhibits very good convergence, leading to a remarkably stable result in the very wide range $s_0 = 1.0 - 4.0 {GeV}^2$, where $s_0$ is the radius of the integration contour in the complex energy (squared) plane. The value of the strange quark mass in this framework at a scale of 2 GeV is $m_s(2 {GeV}) = 95 \pm 5 (111 \pm 6) {MeV}$ for $\Lambda_{QCD} = 420 (330) {MeV}$, respectively.
[ "hep-ph", "hep-lat" ]
hep-ph
hep-lat
High Energy Physics - Phenomenology;High Energy Physics - Lattice
3,218High Energy Physics - Phenomenology;High Energy Physics - Lattice
0810.4991
A branching process in random environment $(Z_n, n \in \N)$ is a generalization of Galton-Watson processes where at each generation the reproduction law is picked randomly. In this paper we give several results which belong to the class of {\it large deviations}. In contrast to the Galton-Watson case, here random environments and the branching process can conspire to achieve atypical events such as $Z_n \le e^{cn}$ when $c$ is smaller than the typical geometric growth rate $\bar L$, and $Z_n \ge e^{cn}$ when $c > \bar L$. One way to obtain such an atypical rate of growth is to have a typical realization of the branching process in an atypical sequence of environments. This gives us a general lower bound for the rate of decrease of their probability. When each individual leaves at least one offspring in the next generation almost surely, we compute the exact rate function of these events and we show that, conditionally on the large deviation event, the trajectory $t \mapsto \frac1n \log Z_{[nt]}, t\in [0,1]$ converges in probability to a deterministic function $f_c :[0,1] \mapsto \R_+$ in the sense of the uniform norm. The most interesting case is when $c < \bar L$ and we allow individuals to have only one offspring in the next generation. In this situation, conditionally on $Z_n \le e^{cn}$, the population size stays fixed at 1 until a time $\sim n t_c$. After time $n t_c$ an atypical sequence of environments lets $Z_n$ grow at the appropriate rate ($\neq \bar L$) to reach $c$. The corresponding map $f_c(t)$ is piecewise linear: it is 0 on $[0,t_c]$ and $f_c(t) = c(t-t_c)/(1-t_c)$ on $[t_c,1]$.
[ "math.PR" ]
math.PR
Probability
5,709Probability
2206.03511
(Abridged) The lifetime of protoplanetary disks around young stars limits the timescale when planets form. A disk dissipation timescale < 10 Myr was inferred from surveys providing the fraction of stars with disks in young stellar clusters with different ages. However, most previous surveys focused on the compact region within ~ 2 pc from the clusters' centers, for which disk fraction information considering the outer part is practically absent. We aim to test whether disk fraction estimates change when inferred from an extended region around the clusters' centers. Gaia EDR3 data and a best-suited Virtual Observatory (VO)-based tool, Clusterix, are used to identify member stars for a representative sample of 19 young clusters considering two concentric fields of view (FOV) with radii ~ 20 pc and ~ 2 pc. Our analysis reveals that the inner disk fractions inferred from the compact and the extended regions are equal within ~ 10%, which does not support a previous hypothesis proposing that disk fractions should be significantly larger when considering extended regions. A list of member and disk stars in each cluster is provided and stored in a VO-compliant archive. Averaged values and plots characterizing the whole clusters are also provided, including HR diagrams based on Gaia colors and absolute magnitudes. Our results cover the largest fields ever probed when dealing with disk fractions for all clusters analysed, and imply that their complete characterization requires the use of wide FOVs. The resulting database is a benchmark for future detailed studies of young clusters, whose disk fractions must be accurately determined by using multi-wavelength analysis potentially combined with data from coming Gaia releases.
[ "astro-ph.SR" ]
astro-ph.SR
Solar and Stellar Astrophysics
6,668Solar and Stellar Astrophysics
2304.05219
Classic no-regret online prediction algorithms, including variants of the Upper Confidence Bound ($\texttt{UCB}$) algorithm, $\texttt{Hedge}$, and $\texttt{EXP3}$, are inherently unfair by design. The unfairness stems from their very objective of playing the most rewarding arm as many times as possible while ignoring the less rewarding ones among $N$ arms. In this paper, we consider a fair prediction problem in the stochastic setting with hard lower bounds on the rate of accrual of rewards for a set of arms. We study the problem in both full and bandit feedback settings. Using queueing-theoretic techniques in conjunction with adversarial learning, we propose a new online prediction policy called $\texttt{BanditQ}$ that achieves the target reward rates while achieving a regret and target rate violation penalty of $O(T^{\frac{3}{4}}).$ In the full-information setting, the regret bound can be further improved to $O(\sqrt{T})$ when considering the average regret over the entire horizon of length $T$. The proposed policy is efficient and admits a black-box reduction from the fair prediction problem to the standard MAB problem with a carefully defined sequence of rewards. The design and analysis of the $\texttt{BanditQ}$ policy involve a novel use of the potential function method in conjunction with scale-free second-order regret bounds and a new self-bounding inequality for the reward gradients, which are of independent interest.
[ "cs.LG", "cs.PF" ]
cs.LG
cs.PF
Machine Learning;Performance
4,238Machine Learning;Performance
0809.0111
If, in the Randall-Sundrum RS1 model, the inverse of the compactification radius, the AdS curvature scale, and the five- and four-dimensional Planck scales are equal in size, as is natural, then the warp factor at the location of the low-energy brane has the value 1/pi. So that all scales derive from locations in the space, we identify the extra dimension with the infinite covering space of the S1/Z2 orbifold. The extra dimension is then essentially a series of connected line intervals, punctuated by branes. Scales on successive branes in the extra dimension descend from the Planck scale in a geometric sequence of common ratio 1/pi. Evidence is provided for such a sequence within the spectrum of particle masses, and for a second geometric sequence, of common ratio 2/pi, which suggests that the AdS spacetime is six-dimensional and doubly warped. The scales of the Standard Model lie at coincident levels within the two sequences. A third sequence, of common ratio 1/e, provides a symmetrical framework for the Standard Model and points to a warped product spacetime.
[ "physics.gen-ph" ]
physics.gen-ph
General Physics
2,645General Physics
2210.06417
The issue of bias (i.e., systematic unfairness) in machine learning models has recently attracted the attention of both researchers and practitioners. For the graph mining community in particular, an important goal toward algorithmic fairness is to detect and mitigate bias incorporated into graph embeddings since they are commonly used in human-centered applications, e.g., social-media recommendations. However, simple analytical methods for detecting bias typically involve aggregate statistics which do not reveal the sources of unfairness. Instead, visual methods can provide a holistic fairness characterization of graph embeddings and help uncover the causes of observed bias. In this work, we present BiaScope, an interactive visualization tool that supports end-to-end visual unfairness diagnosis for graph embeddings. The tool is the product of a design study in collaboration with domain experts. It allows the user to (i) visually compare two embeddings with respect to fairness, (ii) locate nodes or graph communities that are unfairly embedded, and (iii) understand the source of bias by interactively linking the relevant embedding subspace with the corresponding graph topology. Experts' feedback confirms that our tool is effective at detecting and diagnosing unfairness. Thus, we envision our tool both as a companion for researchers in designing their algorithms as well as a guide for practitioners who use off-the-shelf graph embeddings.
[ "cs.HC", "cs.CY", "cs.GR", "cs.SI" ]
cs.HC
cs.CY
Human-Computer Interaction;Computers and Society;Graphics;Social and Information Networks
7,267longtail
1902.09121
We discuss the compatibility of the combined annual modulation effect measured by DAMA/LIBRA-phase1 and DAMA/LIBRA-phase2 with an explanation in terms of inelastic scattering events induced by the most general Galilean-invariant effective contact interaction of a Weakly Interacting Massive Particle (WIMP) dark matter particle of spin 0, 1/2 or 1. We take into account all the possible interferences among operators by studying the intersections among the ellipsoidal surfaces of constant signal of DAMA and other experiments in the space of the coupling constants of the effective theory. In our analysis we assume a standard Maxwellian velocity distribution in the Galaxy. We find that, compared to the elastic case, inelastic scattering partially relieves but does not eliminate the existing tension between the DAMA effect and the constraints from the null results of other experiments. Such tension is very large in all the parameter space with the exception of a small region for WIMP mass $m_{\chi}\simeq$ 10 GeV and mass splitting $\delta\gtrsim$ 20 keV, where it is partially, but not completely relieved. In such region the bounds from fluorine targets are evaded in a kinematic way because the minimal WIMP incoming speed required to trigger upscatters off fluorine exceeds the maximal WIMP velocity in the Galaxy, or is very close to it. As a consequence, we also find that the residual tension between DAMA and other results is more sensitive to the astrophysical parameters compared to the elastic case. We find that the configurations with the smallest tension can produce enough yearly modulation in some of the DAMA bins in compliance with the constraints from other experiments, but the ensuing shape of the modulation spectrum is too steep compared to the measured one. For such configurations the recent COSINE-100 bound is evaded in a natural way due to their large expected modulation fractions.
[ "hep-ph" ]
hep-ph
High Energy Physics - Phenomenology
3,129High Energy Physics - Phenomenology
0904.2089
The third moments of conserved charges, the baryon and electric charge numbers, and energy, as well as their mixed moments, carry more information on the state around the QCD phase boundary than previously proposed fluctuation observables and higher order moments. In particular, their signs give plenty of information on the location of the state created in relativistic heavy ion collisions in the temperature and baryon chemical potential plane. We demonstrate this with an effective model.
[ "nucl-th", "hep-ph", "nucl-ex" ]
nucl-th
hep-ph
Nuclear Theory;High Energy Physics - Phenomenology;Nuclear Experiment
4,919Nuclear Theory;High Energy Physics - Phenomenology;Nuclear Experiment
1709.03517
This paper introduces "Multi-Level Spherical LSH": a parameter-free, multi-level, data-dependent Locality Sensitive Hashing data structure for solving the Approximate Near Neighbors Problem (ANN). This data structure uses a modified version of a multi-probe adaptive querying algorithm, with the potential of achieving an $O(n^p + t)$ query run time for all inputs $n$ where $t \le n$.
[ "cs.DS" ]
cs.DS
Data Structures and Algorithms
1,908Data Structures and Algorithms
0902.3783
A $k$-noncrossing RNA structure can be identified with a $k$-noncrossing diagram over $[n]$, which in turn corresponds to a vacillating tableau having at most $(k-1)$ rows. In this paper we derive the limit distribution of irreducible substructures by studying their corresponding vacillating tableaux. Our main result proves that the limit distribution of the number of irreducible substructures in $k$-noncrossing, $\sigma$-canonical RNA structures is determined by the density function of a $\Gamma(-\ln\tau_k,2)$-distribution for some $\tau_k<1$.
[ "q-bio.BM", "math.CO", "q-bio.QM" ]
q-bio.BM
math.CO
Biomolecules;Combinatorics;Quantitative Methods
7,267longtail
2006.05479
Principal Component Analysis (PCA) minimizes the reconstruction error given a class of linear models of fixed component dimensionality. Probabilistic PCA adds a probabilistic structure by learning the probability distribution of the PCA latent space weights, thus creating a generative model. Autoencoders (AE) minimize the reconstruction error in a class of nonlinear models of fixed latent space dimensionality and outperform PCA at fixed dimensionality. Here, we introduce the Probabilistic Autoencoder (PAE) that learns the probability distribution of the AE latent space weights using a normalizing flow (NF). The PAE is fast and easy to train and achieves small reconstruction errors, high sample quality, and good performance in downstream tasks. We compare the PAE to Variational AE (VAE), showing that the PAE trains faster, reaches a lower reconstruction error, and produces good sample quality without requiring special tuning parameters or training procedures. We further demonstrate that the PAE is a powerful model for performing the downstream tasks of probabilistic image reconstruction in the context of Bayesian inference of inverse problems for inpainting and denoising applications. Finally, we identify latent space density from NF as a promising outlier detection metric.
[ "cs.LG", "stat.ML" ]
cs.LG
stat.ML
Machine Learning;Machine Learning
4,163Machine Learning;Machine Learning
1403.7050
This book is an attempt to help students transform all of the concepts of quantum mechanics into concrete computer representations, which can be constructed, evaluated, analyzed, and hopefully understood at a deeper level than what is possible with more abstract representations. It was written for a Master's and PhD lecture given yearly at the University of Basel, Switzerland. The goal is to give a language to the student in which to speak about quantum physics in more detail, and to start the student on a path of fluency in this language. On our journey we approach questions such as: -- You already know how to calculate the energy eigenstates of a single particle in a simple one-dimensional potential. How can such calculations be generalized to non-trivial potentials, higher dimensions, and interacting particles? -- You have heard that quantum mechanics describes our everyday world just as well as classical mechanics does, but have you ever seen an example where such behavior is calculated in detail and where the transition from classical to quantum physics is evident? -- How can we describe the internal spin structure of particles? How does this internal structure couple to the particles' motion? -- What are qubits and quantum circuits, and how can they be assembled to simulate a future quantum computer?
[ "quant-ph", "physics.ed-ph" ]
quant-ph
physics.ed-ph
Quantum Physics;Physics Education
6,162Quantum Physics;Physics Education
1209.5253
We study the newly discovered Pt phosphides $A$Pt$_3$P ($A$=Sr, Ca, La) [T. Takayama et al., Phys. Rev. Lett. 108, 237001 (2012)] using first-principles calculations and Migdal-Eliashberg theory. Given the remarkable agreement with experiment, we exclude the charge-density wave scenario proposed by previous first-principles calculations, and give conclusive answers concerning the superconducting state in these materials. The pairing increases from La to Ca and Sr due to changes in the electron-phonon matrix elements and low-frequency phonons. Although we find that all three compounds are well described by conventional s-wave superconductivity and spin-orbit coupling of Pt plays a marginal role, we show that it could be possible to tune the structure from centrosymmetric to noncentrosymmetric, opening new perspectives towards the understanding of unconventional superconductivity.
[ "cond-mat.supr-con" ]
cond-mat.supr-con
Superconductivity
7,066Superconductivity
1906.05049
Let $G/H$ be a contractible homogeneous Sasaki manifold. A compact locally homogeneous aspherical Sasaki manifold $\Gamma\big\backslash G/H$ is by definition a quotient of $G/H$ by a discrete uniform subgroup $\Gamma\leq G$. We show that a compact locally homogeneous aspherical Sasaki manifold is always quasi-regular, that is, $\Gamma\big\backslash G/H$ is an $S^{1}$-Seifert bundle over a locally homogeneous aspherical K\"ahler orbifold. We discuss the structure of the isometry group $\mathrm{Isom}(G/H)$ for a Sasaki metric of $G/H$ in relation with the pseudo-Hermitian group $\mathrm{Psh} (G/H)$ for the Sasaki structure of $G/H$. We show that a Sasaki Lie group $G$, when $\Gamma\big\backslash G$ is a compact locally homogeneous aspherical Sasaki manifold, is either the universal covering group of $SL(2,R)$ or a modification of a Heisenberg nilpotent Lie group with its natural Sasaki structure. In addition, we classify all aspherical Sasaki homogeneous spaces for semisimple Lie groups.
[ "math.DG", "math.CV" ]
math.DG
math.CV
Differential Geometry;Complex Variables
2,036Differential Geometry;Complex Variables
2008.02932
Programming language concepts are used to give some new perspectives on a long-standing open problem: is logspace = ptime?
[ "cs.CC", "cs.PL" ]
cs.CC
cs.PL
Computational Complexity;Programming Languages
7,267longtail