Dataset columns:
- id: string (length 9-16)
- abstract: string (length 67-2.61k)
- cats: sequence
- primary: string (length 5-18)
- secondary: string (length 0-18)
- strlabel: string (length 5-315)
- stratlabel: class label (7.27k classes)
2204.06232
The dynamics of the partially ionized solar atmosphere is controlled by the frequent collisions and charge exchange between the predominant neutral hydrogen atoms and charged ions. At signal frequencies below or of the order of either the collision or the charge-exchange frequency, the magnetic stress is {\it felt} by both the charged and neutral particles simultaneously. The resulting neutral-mass loading of the ions leads to a rescaling of the effective ion-cyclotron frequency: it becomes the Hall frequency, and the resulting effective Larmor radius becomes of the order of a few kilometers. Thus the finite Larmor radius (FLR) effect, which manifests as the ion and neutral pressure stress tensors, operates over macroscopic scales. Whereas parallel and perpendicular (with respect to the magnetic field) viscous momentum transport competes with the Ohm and Hall diffusion of the magnetic field in the photosphere-chromosphere, the gyroviscous effect becomes important only in the transition region between the chromosphere and corona, where it competes with ambipolar diffusion. Wave propagation in a gyroviscosity-dominated medium depends on the plasma $\beta$ (the ratio of thermal to magnetic energy). The abundance of free energy makes gyro waves unstable, with an onset condition exactly opposite to that of the Hall instability; the maximum growth rate, however, is identical to that of the Hall instability. For a flow gradient $\sim 0.1 \,\mbox{s}^{-1}$ the instability growth time is one minute. Thus, the transition region may become subject to this fast-growing gyroviscous instability.
[ "astro-ph.SR" ]
astro-ph.SR
Solar and Stellar Astrophysics
6,668 Solar and Stellar Astrophysics
1704.05822
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how it works in MLE and show that DQAEM outperforms EM.
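For context, the baseline EM algorithm that DQAEM extends can be sketched for a two-component 1-D Gaussian mixture as follows (a minimal illustration on synthetic data; this is standard EM, not the DQAEM procedure, and all names and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two well-separated Gaussian components
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 0.5, 200)])

# Initial guesses (EM's sensitivity to these is the paper's motivation)
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each component for each data point
    dens = (pi / (sigma * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print(np.sort(mu))  # close to the true means [-2, 3]
```

DQAEM's quantum-fluctuation term modifies the E-step assignments to escape poor initializations; the classical loop structure above is otherwise the same.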
[ "stat.ML", "cond-mat.stat-mech", "physics.comp-ph", "quant-ph" ]
stat.ML
cond-mat.stat-mech
Machine Learning;Statistical Mechanics;Computational Physics;Quantum Physics
7,267 longtail
astro-ph/0305394
We have carried out a high-resolution spectroscopic survey of the 220-250 Myr old cluster NGC 6475; our main purpose is to investigate Li evolution during the early stages of the main sequence. We have determined Li abundances for 33 late F- to K-type X-ray-selected cluster candidates, extending the samples already available in the literature; for part of the stars we obtained radial and rotational velocities, allowing us to confirm membership and to check for binarity. We also estimated the cluster metallicity, which turned out to be over-solar ([Fe/H]=+0.14 +/- 0.06). Our Li analysis showed that (i) late F-type stars (Teff > 6000 K) undergo a very small amount of Li depletion during the early phases on the ZAMS; (ii) G-type stars (6000 > Teff > 5500 K) instead do deplete lithium soon after arrival on the ZAMS. Whereas this result is not new, we show that the time scale for Li depletion in these stars is almost constant between 100 and 600 Myr; (iii) we confirm that the spread observed in early K-type stars in younger clusters has converged by 220 Myr. No constraints can be put on later-type stars. (iv) Finally, we investigate the effect of metallicity on Li depletion by comparing NGC 6475 with the similar-age cluster M 34, but we show that the issue remains open, given the uncertain metallicity of the latter cluster. Using the combined NGC 6475 + M 34 sample together with the Hyades and the Pleiades, we quantitatively compare Li evolution from the ZAMS to 600 Myr with the theoretical predictions of standard models.
[ "astro-ph" ]
astro-ph
Astrophysics
463 Astrophysics
2204.05708
Recent experiments have shown that CeH9 and (Ce,La)H9 can be synthesized under high pressure between 90 and 170 GPa and become superconductors with a high superconducting critical temperature (Tc) between 100 and 200 K. In this work, we performed a theoretical study of a (Ce,La)H9 compound in which the Ce:La ratio is equal to 1, using density functional theory and the {\it ab initio} molecular dynamics (AIMD) method. The phonon dispersion exhibits some unstable modes around the K point. We then performed an AIMD simulation at around 203 K and found that the compound becomes stable. From the superconducting spectral function, we found that the electron-phonon coupling $\lambda$ is as high as 3.0 at 200 GPa. Using the Allen-Dynes-modified McMillan equation in the strong-coupling regime, we obtained Tc = 87 K at 200 GPa.
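The Tc estimate quoted above comes from the Allen-Dynes-modified McMillan formula; its standard form (taken from the general literature, since the abstract does not reproduce it) is

```latex
T_c = \frac{f_1 f_2\,\omega_{\log}}{1.2}\,
      \exp\!\left[-\frac{1.04\,(1+\lambda)}{\lambda - \mu^{*}\,(1 + 0.62\,\lambda)}\right],
```

where $\lambda$ is the electron-phonon coupling constant, $\mu^{*}$ the Coulomb pseudopotential, $\omega_{\log}$ the logarithmically averaged phonon frequency, and $f_1, f_2$ the strong-coupling and spectral-shape correction factors; the values of $\mu^{*}$, $\omega_{\log}$, $f_1$ and $f_2$ used in the paper are not stated in the abstract.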
[ "cond-mat.supr-con", "physics.comp-ph" ]
cond-mat.supr-con
physics.comp-ph
Superconductivity;Computational Physics
7,069 Superconductivity;Computational Physics
1209.0789
The corona of the Sun is dominated by emission from loop-like structures. When observed in X-ray or extreme-ultraviolet emission, these million-K hot coronal loops show a more or less constant cross section. In this study we show how the interplay of heating, radiative cooling, and heat conduction in an expanding magnetic structure can explain the observed constant cross section. We employ a three-dimensional magnetohydrodynamics (3D MHD) model of the corona. The heating of the coronal plasma is the result of the braiding of magnetic field lines by footpoint motions and the subsequent dissipation of the induced currents. From the model we synthesize the coronal emission, which is directly comparable to observations from, e.g., the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (AIA/SDO). We find that the synthesized observation of a coronal loop seen in the 3D data cube matches actually observed loops in count rate, and that its cross section is roughly constant, as observed. The magnetic field in the loop is expanding and the plasma density is concentrated in this expanding loop; the temperature, however, is not constant perpendicular to the plasma loop. The temperature in the upper outer parts of the loop is so high that this part of the loop lies outside the contribution function of the respective emission line(s). In effect, the upper part of the plasma loop is not bright, and thus the loop actually seen in coronal emission appears to have a constant width. From this we conclude that the underlying field-line-braiding heating mechanism provides the proper spatial and temporal distribution of the energy input into the corona, at least on the observable scales.
[ "astro-ph.SR" ]
astro-ph.SR
Solar and Stellar Astrophysics
6,668 Solar and Stellar Astrophysics
2301.11531
We present a quantum annealing-based solution method for topology optimization (TO). In particular, we consider TO in a more general setting, i.e., applied to structures of continuum domains where designs are represented as distributed functions, referred to as continuum TO problems. According to the problem's properties and structure, we formulate appropriate sub-problems that can be solved on an annealing-based quantum computer. The methodology established can effectively tackle continuum TO problems formulated as mixed-integer nonlinear programs. To keep the resulting sub-problems small enough to be solved on currently accessible quantum computers, which have small numbers of qubits and limited connectivity, we further develop a splitting approach that splits the problem into two parts: the first part can be efficiently solved on classical computers, and the second part, with a reduced number of variables, is solved on a quantum computer. In this way, a practical continuum TO problem of varying scales can be handled on the D-Wave quantum annealer. More specifically, we consider minimum compliance, a canonical TO problem that seeks an optimal distribution of materials minimizing the compliance under a desired material usage. The superior performance of the developed methodology is assessed and compared with state-of-the-art heuristic classical methods, in terms of both solution quality and computational efficiency. The present work hence provides a promising new avenue for applying quantum computing to practical topology design for various applications.
[ "math.NA", "cs.CE", "cs.NA", "math.OC" ]
math.NA
cs.CE
Numerical Analysis;Computational Engineering, Finance, and Science;Numerical Analysis;Optimization and Control
7,267 longtail
2208.13362
Krylov complexity is a novel observable for detecting quantum chaos and an indicator of a possible gravity dual. In this paper, we compute the Krylov complexity and the associated Lanczos coefficients in the SU(2) Yang-Mills theory, which can be reduced to a nonlinearly coupled harmonic oscillator (CHO) model. We show that there exists a chaotic transition in the growth of Krylov complexity. The Krylov complexity shows quadratic growth at early times and then grows linearly. The corresponding Lanczos coefficients satisfy the universal operator growth hypothesis, i.e., they grow linearly at first and then enter a saturation plateau. From the linear growth of the Lanczos coefficients, we obtain an upper bound on the quantum Lyapunov exponent. Finally, we investigate the effect of different energy sectors on the K-complexity and the Lanczos coefficients.
[ "hep-th" ]
hep-th
High Energy Physics - Theory
3,266 High Energy Physics - Theory
math/0209050
Various best-choice problems related to the planar homogeneous Poisson process in a finite or semi-infinite rectangle are studied. The analysis is largely based on properties of the one-dimensional box-area process associated with the sequence of records. We prove a series of distributional identities involving exponential and uniform random variables, and resolve the Petruccelli-Porosinski-Samuels paradox on the coincidence of asymptotic values in certain discrete-time optimal stopping problems.
[ "math.PR" ]
math.PR
Probability
5,709 Probability
1703.02827
In this paper, the containment problem for the defining ideal of a special type of zero-dimensional subschemes of $\mathbb{P}^2$, so-called quasi star configurations, is investigated. Some sharp bounds for the resurgence of these types of ideals are given. As an application of this result, for every real number $0 < \varepsilon < \frac{1}{2}$, we construct an infinite family of homogeneous radical ideals of points in $\mathbb{K}[\mathbb{P}^2]$ whose resurgences lie in the interval $[2- \varepsilon ,2)$. Moreover, the Castelnuovo-Mumford regularity of all ordinary powers of the defining ideal of a quasi star configuration is determined. In particular, it is shown that all of these powers have a linear resolution.
[ "math.AG" ]
math.AG
Algebraic Geometry
47 Algebraic Geometry
1905.05666
A few years ago, the use of standard functional manipulations was demonstrated to imply an unexpected property satisfied by the fermionic Green's functions of QCD, called effective locality. This feature of QCD is non-perturbative, as it results from a full integration of the gluonic degrees of freedom. In this paper, at least in the eikonal and quenched approximations, the relation of effective locality to dynamical chiral symmetry breaking is examined.
[ "hep-th" ]
hep-th
High Energy Physics - Theory
3,266 High Energy Physics - Theory
hep-th/9701076
An earlier proposed theory of quantum gravity with a linear-gonihedric action is reviewed. One can consider this theory as a "square root" of classical gravity, with a new fundamental constant of dimension one. We also demonstrate that the partition function for the discretized version of the Einstein-Hilbert action found by Regge in 1961 can be represented as a superposition of random surfaces with the Euler character as an action and, in the case of linear gravity, as a superposition of three-dimensional manifolds with an action proportional to the total solid-angle deficit of these manifolds. This representation allows one to construct the transfer matrix that describes the propagation of the space manifold. We discuss the so-called gonihedric principle, which allows one to define a discrete version of the higher-derivative terms in quantum gravity and to introduce an intrinsic rigidity of spacetime. This note is based on a talk delivered at the II Meeting on Constrained Dynamics and Quantum Gravity at Santa Margherita Ligure.
[ "hep-th" ]
hep-th
High Energy Physics - Theory
3,266 High Energy Physics - Theory
1107.3657
By considering a continuous pruning procedure on Aldous's Brownian tree, we construct a random variable $\Theta$ which is distributed, conditionally given the tree, according to the probability law introduced by Janson as the limit distribution of the number of cuts needed to isolate the root in a critical Galton-Watson tree. We also prove that this random variable can be obtained as the a.s. limit of the number of cuts needed to cut down the subtree of the continuum tree spanned by $n$ leaves.
[ "math.PR" ]
math.PR
Probability
5,709 Probability
hep-ph/9609311
Measurements of distributions associated with the pair production of top quarks at the LHC can be used to constrain (or observe) the anomalous chromomagnetic dipole moment ($\kappa$) of the top. For example, using either the $t\bar t$ invariant mass or the $p_t$ distribution of the top, we find that sensitivities to $|\kappa|$ of order 0.05 are obtainable with 100 $fb^{-1}$ of integrated luminosity. This is similar in magnitude to what can be obtained at a 500 GeV NLC with an integrated luminosity of 50 $fb^{-1}$ through an examination of the $e^+e^- \to t\bar tg$ process. [To appear in the Proceedings of the 1996 DPF/DPB Summer Study on New Directions for High Energy Physics (Snowmass 96), Snowmass, CO, 25 June-12 July, 1996.]
[ "hep-ph" ]
hep-ph
High Energy Physics - Phenomenology
3,129 High Energy Physics - Phenomenology
2310.17064
As artificial intelligence (AI) gains greater adoption in a wide variety of applications, it has immense potential to contribute to mathematical discovery, by guiding conjecture generation, constructing counterexamples, assisting in formalizing mathematics, and discovering connections between different mathematical areas, to name a few. While prior work has leveraged computers for exhaustive mathematical proof search, recent efforts based on large language models (LLMs) aspire to position computing platforms as co-contributors in the mathematical research process. Despite their current limitations in logic and mathematical tasks, there is growing interest in melding theorem proving systems with foundation models. This work investigates the applicability of LLMs in formalizing advanced mathematical concepts and proposes a framework that can critically review and check mathematical reasoning in research papers. Given the noted reasoning shortcomings of LLMs, our approach synergizes the capabilities of proof assistants, specifically PVS, with LLMs, enabling a bridge between textual descriptions in academic papers and formal specifications in PVS. By harnessing the PVS environment, coupled with data ingestion and conversion mechanisms, we envision an automated process, called \emph{math-PVS}, to extract and formalize mathematical theorems from research papers, offering an innovative tool for academic review and discovery.
[ "cs.AI", "cs.CL", "cs.LG", "cs.LO" ]
cs.AI
cs.CL
Artificial Intelligence;Computation and Language;Machine Learning;Logic in Computer Science
7,267 longtail
1508.00632
We show how to price and replicate a variety of barrier-style claims written on the $\log$ price $X$ and quadratic variation $\langle X \rangle$ of a risky asset. Our framework assumes no arbitrage, frictionless markets and zero interest rates. We model the risky asset as a strictly positive continuous semimartingale with an independent volatility process. The volatility process may exhibit jumps and may be non-Markovian. As hedging instruments, we use only the underlying risky asset, zero-coupon bonds, and European calls and puts with the same maturity as the barrier-style claim. We consider knock-in, knock-out and rebate claims in single and double barrier varieties.
[ "q-fin.MF" ]
q-fin.MF
Mathematical Finance
4,385 Mathematical Finance
cond-mat/0111217
Eu2-xCexRuSr2Cu2O10-d (Ru-2122) is the first Cu-O based system in which superconductivity (SC) in the CuO2 planes and weak ferromagnetism (W-FM) in the Ru sublattice coexist. The hole doping in the CuO2 planes is controlled by appropriate variation of the Ce concentration. SC occurs for Ce contents of 0.4-0.8, with the highest TC = 35 K for Ce = 0.6. The as-prepared non-SC EuCeRuSr2Cu2O10 (x=1) sample exhibits magnetic irreversibility below Tirr = 125 K and orders antiferromagnetically (AFM) at TM = 165 K. The saturation moment at 5 K is Msat = 0.89 muB/Ru, close to the expected 1 muB for the low-spin state of Ru5+. Annealing under oxygen pressure does not affect these parameters, whereas depletion of oxygen shifts Tirr and TM up to 169 and 215 K, respectively. Systematic magnetic studies on Eu2-xCexRuSr2Cu2O10-d show that TM, Tirr and Msat decrease with x, and the Ce-dependent magnetic-SC phase diagram is presented. A simple model for the SC state is proposed. We interpret the magnetic behavior in the framework of our ac and dc magnetic studies, and argue that: (i) the system becomes AFM ordered at TM; (ii) at Tirr < TM, W-FM is induced by the canting of the Ru moments; and (iii) at lower temperatures the appropriate samples become SC at TC. The magnetic features are not affected by the SC state, and the two states coexist.
[ "cond-mat.supr-con" ]
cond-mat.supr-con
Superconductivity
7,066 Superconductivity
1903.03454
The negative hydrogen ion is the first three-body quantum problem whose ground-state energy was calculated using the `Chandrasekhar wavefunction', which accounts for electron-electron correlation. Solving multi-body systems is a daunting task in quantum mechanics, as it involves choosing a trial wavefunction and calculating integrals that become almost intractable for systems of three or more particles. This difficulty can be addressed by quantum computers, which have emerged as tools for tackling electronic structure problems with remarkable efficiency. Here, we present a quantum simulation of the H^{-} ion that calculates its ground-state energy on an IBM quantum computer. The energy is found to be -0.5339355468 Hartree, with an error of 0.8376% compared to the theoretical value. We observe that the quantum computer efficiently prepares the correlated wavefunction of H^{-} and calculates its ground-state energy. We use the recently developed `Variational Quantum Eigensolver' (VQE) algorithm and implement it on IBM's 5-qubit quantum chip `ibmqx2'. The method consists of a quantum part, i.e., state preparation and measurement of expectation values on the quantum computer, and a classical part, i.e., an optimization routine run on a classical computer for energy convergence. The optimization is performed on a classical computer by running quantum chemistry programs and QISKit code to converge the energy to its minimum. We also present a comparison of the different optimization routines and encoding methods used to converge the energy to the minimum. The technique can be used to solve various many-body problems with great efficiency.
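The quantum-classical loop described in this abstract can be illustrated with a minimal classical simulation of VQE on a toy single-qubit Hamiltonian (a sketch only: the Hamiltonian, ansatz, and optimizer below are illustrative assumptions, not the paper's H^{-} encoding or the ibmqx2 hardware run):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Pauli matrices
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Toy single-qubit Hamiltonian (illustrative; not the H^- Hamiltonian)
H = 0.5 * Z + 0.3 * X

def energy(theta):
    # Ansatz: |psi(theta)> = Ry(theta)|0> = (cos(theta/2), sin(theta/2))
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi  # expectation value <psi|H|psi>

# The classical optimizer drives the measured expectation value to its minimum
res = minimize_scalar(energy, bounds=(0.0, 2 * np.pi), method="bounded")
exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy for comparison
print(res.fun, exact)
```

On hardware, `energy` would be replaced by state preparation plus repeated measurement of the Pauli terms; the classical outer loop is unchanged.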
[ "quant-ph" ]
quant-ph
Quantum Physics
5,985 Quantum Physics
astro-ph/0101561
A LiBeB evolution model including Galactic Cosmic Ray nucleosynthesis, the $\nu$-process, novae, AGB stars and C-stars is presented. We have included Galactic Cosmic Ray Nucleosynthesis (GCRN) in a complete chemical evolution model that takes into account 76 stable isotopes from hydrogen to zinc. Any successful LiBeB evolution model should also be compatible with other observational constraints, such as the age-metallicity relation, the G-dwarf distribution, and the evolution of other elements. At the same time, we have checked how different a model would be if it took into account the observations by Wakker et al. (1999) of metal-enriched clouds falling onto the disk, compared with a primordial infall model.
[ "astro-ph" ]
astro-ph
Astrophysics
463 Astrophysics
1707.02676
In hierarchical searches for continuous gravitational waves, clustering of candidates is an important postprocessing step because it reduces the number of noise candidates that are followed up at successive stages [1][7][12]. Previous clustering procedures bundled together nearby candidates, ascribing them to the same root cause (be it a signal or a disturbance), based on a predefined cluster volume. In this paper, we present a procedure that adapts the cluster volume to the data itself and checks that this volume is consistent with what is expected from a signal. This significantly improves the noise-rejection capability at a fixed detection threshold and, at fixed computing resources for the follow-up stages, results in an overall more sensitive search. This new procedure was employed in the first Einstein@Home search on data from the first science run of the advanced LIGO detectors (O1) [11].
[ "gr-qc", "astro-ph.IM", "math.GN" ]
gr-qc
astro-ph.IM
General Relativity and Quantum Cosmology;Instrumentation and Methods for Astrophysics;General Topology
7,267 longtail
1309.4170
Measurements in the radio regime embrace a number of effective approaches for WISP searches, often covering unique or highly complementary ranges of the parameter space compared to those explored in other research domains. These measurements can be used to search for electromagnetic tracers of the hidden photon and axion oscillations, extending down to ~10^-19 eV the range of the hidden photon mass probed, and closing the last gaps in the strongly favoured 1-5 micro-eV range for axion dark matter. This provides a strong impetus for several new initiatives in the field, including the WISP Dark Matter eXperiment (WISPDMX) and novel conceptual approaches for broad-band WISP searches in the 0.1-1000 micro-eV range.
[ "physics.ins-det", "astro-ph.CO", "hep-ex", "hep-ph" ]
physics.ins-det
astro-ph.CO
Instrumentation and Detectors;Cosmology and Nongalactic Astrophysics;High Energy Physics - Experiment;High Energy Physics - Phenomenology
3,639 Instrumentation and Detectors;Cosmology and Nongalactic Astrophysics;High Energy Physics - Experiment;High Energy Physics - Phenomenology
2104.10900
This paper presents a novel preconditioning strategy for the classic 8-point algorithm (8-PA) for estimating an essential matrix from 360-FoV images (i.e., equirectangular images) in spherical projection. To alleviate the effect of uneven key-feature distributions and outlier correspondences, which can decrease the accuracy of an essential matrix, our method optimizes a non-rigid transformation that deforms a spherical camera into a new spatial domain, defining a new constraint and a more robust and accurate solution for the essential matrix. Through several experiments using random synthetic points, 360-FoV images, and fish-eye images, we demonstrate that our normalization can increase camera-pose accuracy by about 20% without significant computational overhead. In addition, we present further benefits of our method through both a constant weighted least-squares optimization that further improves the well-known Gold Standard Method (GSM) (i.e., non-linear optimization using epipolar errors) and a relaxation of the number of RANSAC iterations, both showing that our normalization yields a more reliable, robust, and accurate solution.
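For context, the classical preconditioning that this paper generalizes to spherical images is Hartley-style point normalization for the 8-point algorithm. A minimal planar sketch (illustrative only, with made-up sample points; not the paper's spherical, non-rigid method):

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: translate the centroid to the origin and scale
    so the mean distance from the origin is sqrt(2). Returns the normalized
    points and the 3x3 similarity T (needed to de-normalize the estimate)."""
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / mean_dist
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (T @ pts_h.T).T[:, :2], T

# Hypothetical pixel coordinates of matched key features
pts = np.array([[100.0, 200.0], [150.0, 220.0], [400.0, 50.0], [250.0, 300.0]])
norm_pts, T = normalize_points(pts)
print(norm_pts.mean(axis=0))  # ~ [0, 0]
```

The paper's contribution replaces this fixed similarity transform with an optimized non-rigid deformation of the spherical domain; the de-normalization step plays the same role.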
[ "cs.CV", "cs.RO" ]
cs.CV
cs.RO
Computer Vision and Pattern Recognition;Robotics
1,636 Computer Vision and Pattern Recognition;Robotics
2010.10352
Moving loads such as cars and trains are very useful sources of seismic waves, which can be analyzed to retrieve information on the seismic velocity of subsurface materials using the techniques of ambient noise seismology. This information is valuable for a variety of applications such as geotechnical characterization of the near-surface, seismic hazard evaluation, and groundwater monitoring. However, for such processes to converge quickly, data segments with appropriate noise energy should be selected. Distributed Acoustic Sensing (DAS) is a novel sensing technique that enables acquisition of these data at very high spatial and temporal resolution for tens of kilometers. One major challenge when utilizing the DAS technology is the large volume of data that is produced, thereby presenting a significant Big Data challenge to find regions of useful energy. In this work, we present a highly scalable and efficient approach to process real, complex DAS data by integrating physics knowledge acquired during a data exploration phase followed by deep supervised learning to identify "useful" coherent surface waves generated by anthropogenic activity, a class of seismic waves that is abundant on these recordings and is useful for geophysical imaging. Data exploration and training were done on 130 gigabytes (GB) of DAS measurements. Using parallel computing, we were able to do inference on an additional 170 GB of data (or the equivalent of 10 days' worth of recordings) in less than 30 minutes. Our method provides interpretable patterns describing the interaction of ground-based human activities with the buried sensors.
[ "eess.SP", "cs.LG", "physics.geo-ph" ]
eess.SP
cs.LG
Signal Processing;Machine Learning;Geophysics
7,267 longtail
2106.15588
A dessin d'enfant, or dessin, is a bicolored graph embedded into a Riemann surface, and the monodromy group is an algebraic invariant of the dessin generated by rotations of edges about black and white vertices. A rational billiards surface is a two dimensional surface that allows one to view the path of a billiards ball as a continuous path. In this paper, we classify the monodromy groups of dessins associated to rational triangular billiards surfaces.
[ "math.NT" ]
math.NT
Number Theory
4,945 Number Theory
1403.7158
We prove sharp inequalities for the average number of affine diameters through the points of a convex body $K$ in ${\mathbb R}^n$. These inequalities hold if $K$ is either a polytope or of dimension two. An example shows that the proof given in the latter case does not extend to higher dimensions.
[ "math.MG" ]
math.MG
Metric Geometry
4,601 Metric Geometry
hep-ex/0408129
We report a study of the suppressed decay $B^{-} \to [K^{+}\pi^{-}]_{D}K^{-}$ (and its charge-conjugate mode) at Belle, where $[K^{+}\pi^{-}]_{D}$ indicates that the $K^{+}\pi^{-}$ pair originates from a neutral $D$ meson. A data sample containing 274 million $B\bar{B}$ pairs recorded at the $\Upsilon(4S)$ resonance with the Belle detector at the KEKB asymmetric $e^{+}e^{-}$ storage ring is used. This decay mode can be used to extract the CKM angle $\phi_{3}$ using the so-called Atwood-Dunietz-Soni method. The signal for $B^{-} \to [K^{+}\pi^{-}]_{D}K^{-}$ has $2.7\sigma$ statistical significance, and we set a limit on the ratio of B decay amplitudes $r_B < 0.28$ at the 90% confidence level. We observe a signal with $5.8\sigma$ statistical significance in the related mode, $B^{-} \to [K^{+}\pi^{-}]_{D}\pi^{-}$.
[ "hep-ex" ]
hep-ex
High Energy Physics - Experiment
3,059 High Energy Physics - Experiment
1802.09900
To launch black-box attacks against a Deep Neural Network (DNN) based Face Recognition (FR) system, one needs to build \textit{substitute} models to simulate the target model, so the adversarial examples discovered from substitute models can also mislead the target model. Such \textit{transferability} is achieved in recent studies through querying the target model to obtain data for training the substitute models. A real-world target, like the FR system of law enforcement, however, is less accessible to the adversary. To attack such a system, a substitute model with similar quality as the target model is needed to identify their common defects. This is hard, since the adversary often does not have enough resources to train such a powerful model (hundreds of millions of images and rooms of GPUs are needed to train a commercial FR system). We found in our research, however, that a resource-constrained adversary can still effectively approximate the target model's capability to recognize \textit{specific} individuals, by training \textit{biased} substitute models on additional images of those victims whose identities the attacker wants to cover or impersonate. This is made possible by a new property we discovered, called \textit{Nearly Local Linearity} (NLL), which models the observation that an ideal DNN model produces image representations (embeddings) whose mutual distances truthfully describe the human perception of the differences among the input images. By simulating this property around the victims' images, we significantly improve the transferability of black-box impersonation attacks, by nearly 50\%. In particular, we successfully attacked a commercial system trained on over 20 million images, using 4 million images and 1/5 of the training time, yet achieving 62\% transferability in an impersonation attack and 89\% in a dodging attack.
[ "cs.LG", "cs.CV" ]
cs.LG
cs.CV
Machine Learning;Computer Vision and Pattern Recognition
4,045 Machine Learning;Computer Vision and Pattern Recognition
cond-mat/0204590
The one-dimensional spin-1/2 $XXZ$ model in a transverse magnetic field is studied. It is shown that the field induces a gap in the spectrum of the model with easy-plane anisotropy. Using conformal invariance, the field dependence of the gap at small fields is found. The ground-state phase diagram is obtained. It contains four phases with different types of long-range order (LRO) and a disordered one. These phases are separated by critical lines, where the gap and the long-range order vanish. Using scaling estimates and a mean-field approach, as well as numerical calculations in the vicinity of all critical lines, we find the critical exponents of the gap and the LRO. It is shown that the transition line between the ordered and disordered phases belongs to the universality class of the transverse Ising model.
[ "cond-mat.str-el" ]
cond-mat.str-el
Strongly Correlated Electrons
6,979 Strongly Correlated Electrons
1403.1142
Security APIs, key servers, and protocols that need to track the status of transactions must maintain a global, non-monotonic state, e.g., in the form of a database or register. However, most existing automated verification tools do not support the analysis of such stateful security protocols, sometimes for fundamental reasons, such as the encoding of the protocol as Horn clauses, which are inherently monotonic. A notable exception is the recent tamarin prover, which allows specifying protocols as multiset rewrite (msr) rules, a formalism expressive enough to encode state. As multiset rewriting is a "low-level" specification language with no direct support for concurrent message passing, encoding protocols correctly is a difficult and error-prone process. We propose a process calculus, a variant of the applied pi calculus, with constructs for manipulating a global state from processes running in parallel. We show that this language can be translated to msr rules while preserving all security properties expressible in a dedicated first-order logic for security properties. The translation has been implemented in a prototype tool that uses the tamarin prover as a backend. We apply the tool to several case studies, including a simplified fragment of PKCS\#11, the Yubikey security token, and an optimistic contract-signing protocol.
[ "cs.CR" ]
cs.CR
Cryptography and Security
1,782Cryptography and Security
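The "low-level" nature of multiset rewriting that the abstract above mentions is easy to see in miniature: a transition is just "remove the left-hand-side facts, add the right-hand-side facts". The fact names below are illustrative, not tamarin syntax.

```python
from collections import Counter

def apply_rule(state, lhs, rhs):
    """Apply one multiset-rewrite rule: if every fact in lhs occurs in the
    state with sufficient multiplicity, consume lhs and produce rhs.
    Returns the new state, or None if the rule is not applicable."""
    if any(state[f] < n for f, n in lhs.items()):
        return None
    new = Counter(state)
    new.subtract(lhs)
    new += Counter()          # drop zero/negative counts
    new.update(rhs)
    return new

# Toy key-server step: consume a fresh key, record it in the server's
# store, and output it on the network.
state = Counter({"Fr(k)": 1, "ServerRunning": 1})
rule_lhs = {"Fr(k)": 1}
rule_rhs = {"Store(k)": 1, "Out(k)": 1}
state2 = apply_rule(state, rule_lhs, rule_rhs)
```

Because the whole protocol state lives in one flat multiset, there is no built-in notion of a process or a message channel, which is exactly why encoding protocols at this level by hand is error-prone.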
2112.10070
So far, named entity recognition (NER) has involved three major types, namely flat, overlapped (a.k.a. nested), and discontinuous NER, which have mostly been studied individually. Recently, growing interest has built up around unified NER, tackling the three tasks concurrently with a single model. The current best-performing methods mainly include span-based and sequence-to-sequence models, where unfortunately the former merely focus on boundary identification and the latter may suffer from exposure bias. In this work, we present a novel alternative by modeling unified NER as word-word relation classification, namely W^2NER. The architecture resolves the kernel bottleneck of unified NER by effectively modeling the neighboring relations between entity words with Next-Neighboring-Word (NNW) and Tail-Head-Word-* (THW-*) relations. Based on the W^2NER scheme we develop a neural framework, in which unified NER is modeled as a 2D grid of word pairs. We then propose multi-granularity 2D convolutions for better refining the grid representations. Finally, a co-predictor is used to sufficiently reason over the word-word relations. We perform extensive experiments on 14 widely-used benchmark datasets for flat, overlapped, and discontinuous NER (8 English and 6 Chinese datasets), where our model beats all the current top-performing baselines, pushing forward the state-of-the-art performance of unified NER.
[ "cs.CL" ]
cs.CL
Computation and Language
1,168Computation and Language
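To see how NNW and THW-* relations encode entities (including discontinuous ones), a toy decoder helps. This is a simplified reading of the decoding step, assuming NNW links point from a word to its successor inside an entity and each THW triple marks one entity's (tail, head, label); it is not the authors' code.

```python
def decode_entities(nnw, thw):
    """Recover entity word-index sequences from word-word relations.
    nnw: set of (i, j) Next-Neighboring-Word links (word j follows word i
         inside some entity, i < j);
    thw: set of (tail, head, label) Tail-Head-Word relations, one entity each."""
    succ = {}
    for i, j in nnw:
        succ.setdefault(i, []).append(j)
    entities = set()
    for tail, head, label in thw:
        # walk forward from the head along NNW links until the tail is reached
        stack = [(head, (head,))]
        while stack:
            w, path = stack.pop()
            if w == tail:
                entities.add((path, label))
                continue
            for nxt in succ.get(w, ()):
                if nxt <= tail:
                    stack.append((nxt, path + (nxt,)))
    return entities

# "aching in legs and shoulders": two discontinuous Symptom entities sharing
# the prefix "aching in" (indices 0, 1), with tails "legs" (2) and "shoulders" (4).
ents = decode_entities(nnw={(0, 1), (1, 2), (1, 4)},
                       thw={(2, 0, "Symptom"), (4, 0, "Symptom")})
```

The shared prefix is represented once in the grid, which is what lets one model cover flat, nested, and discontinuous mentions uniformly.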
1811.02668
Recent studies have shown promising results in using Deep Learning to detect malignancy in whole slide imaging. However, they were limited to predicting a positive or negative finding for a specific neoplasm. We attempted to use Deep Learning with a convolutional neural network algorithm to build a lymphoma diagnostic model for four diagnostic categories: benign lymph node, diffuse large B cell lymphoma, Burkitt lymphoma, and small lymphocytic lymphoma. Our software was written in the Python language. We obtained digital whole slide images of Hematoxylin and Eosin stained slides of 128 cases, including 32 cases for each diagnostic category. Four sets of 5 representative images, 40x40 pixels in dimension, were taken for each case. A total of 2,560 images were obtained, from which 1,856 were used for training, 464 for validation and 240 for testing. For each test set of 5 images, the predicted diagnosis was combined from the predictions of the 5 images. The test results showed excellent diagnostic accuracy: 95% for image-by-image prediction and 100% for set-by-set prediction. This preliminary study provides a proof of concept for incorporating an automated lymphoma diagnostic screen into future pathology workflows to augment pathologists' productivity.
[ "cs.CV", "cs.LG", "stat.ML" ]
cs.CV
cs.LG
Computer Vision and Pattern Recognition;Machine Learning;Machine Learning
1,601Computer Vision and Pattern Recognition;Machine Learning;Machine Learning
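The abstract above says the set-level diagnosis is "combined from" the five per-image predictions without pinning down the rule; a minimal sketch, assuming mean-pooling of class probabilities, looks like this.

```python
import numpy as np

def set_prediction(image_probs):
    """Combine per-image class probabilities (shape: 5 x n_classes) into one
    set-level call by averaging and taking the argmax. Mean-pooling is an
    assumption here; majority voting is an equally plausible rule."""
    probs = np.asarray(image_probs)
    return int(np.argmax(probs.mean(axis=0)))

# 4 classes: benign, DLBCL, Burkitt, SLL; most images favour class 1 (DLBCL)
demo = [[0.1, 0.6, 0.2, 0.1],
        [0.2, 0.5, 0.2, 0.1],
        [0.4, 0.3, 0.2, 0.1],
        [0.1, 0.7, 0.1, 0.1],
        [0.3, 0.2, 0.3, 0.2]]
```

Pooling over five fields of view is what lets a set-level call be more accurate than any single 40x40 crop.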
1507.02297
We introduce new methods of equivalence checking and simulation based on Computing Range Reduction (CRR). Given a combinational circuit $N$, the CRR problem is to compute the set of outputs that disappear from the range of $N$ if a set of inputs of $N$ is excluded from consideration. Importantly, in many cases, range reduction can be efficiently found even if computing the entire range of $N$ is infeasible. Solving equivalence checking by CRR facilitates generation of proofs of equivalence that mimic a "cut propagation" approach. A limited version of such an approach has been successfully used by commercial tools. Functional verification of a circuit $N$ by simulation can be viewed as a way to reduce the complexity of computing the range of $N$. Instead of finding the entire range of $N$ and checking if it contains a bad output, such a range is computed only for one input. Simulation by CRR offers an alternative way of coping with the complexity of range computation. The idea is to exclude a subset of inputs of $N$ and compute the range reduction caused by such an exclusion. If the set of disappeared outputs contains a bad one, then $N$ is buggy.
[ "cs.LO" ]
cs.LO
Logic in Computer Science
3,801Logic in Computer Science
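The CRR problem defined in the abstract above can be stated by brute force on a toy circuit: compute the range over all inputs, then over the inputs that remain after an exclusion, and take the difference. The 3-input circuit below is purely illustrative.

```python
from itertools import product

def toy_circuit(x):
    """A 3-input, 2-output combinational circuit N (purely illustrative)."""
    a, b, c = x
    return (a & b, b ^ c)

def circuit_range(inputs):
    """The set of outputs N produces over a set of inputs."""
    return {toy_circuit(x) for x in inputs}

all_inputs = set(product((0, 1), repeat=3))
kept = {x for x in all_inputs if x[1] == 0}        # exclude every input with b = 1
range_reduction = circuit_range(all_inputs) - circuit_range(kept)
```

Here excluding the inputs with b = 1 makes both outputs with a first coordinate of 1 disappear from the range; if one of those were a designated bad output, the exclusion alone would certify the bug, which is the point of CRR-based simulation. Real instances of course cannot enumerate inputs; the abstract's claim is precisely that the reduction can often be computed without the full range.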
1110.3128
In this paper, we consider constructibility of simplicial 3-balls. In many cases, examining 1-dimensional subcomplexes of a simplicial 3-ball is an efficient way to solve the decision problem of whether the simplicial 3-ball is constructible or not. From this point of view, we consider the case where a simplicial 3-ball has spanning edges and present a sufficient condition for nonconstructibility.
[ "math.CO" ]
math.CO
Combinatorics
1,014Combinatorics
1501.04004
We consider cosmological modelling in $f(R)$ theories of gravity, using both top-down and bottom-up constructions. The top-down models are based on Robertson-Walker geometries, and the bottom-up constructions are built by patching together sub-horizon-sized regions of perturbed Minkowski space. Our results suggest that these theories do not provide a theoretically attractive alternative to the standard general relativistic cosmology. We find that the only $f(R)$ theories that can admit an observationally viable weak-field limit have large-scale expansions that are observationally indistinguishable from the Friedmann solutions of General Relativity with $\Lambda$. Such theories do not alleviate any of the difficulties associated with $\Lambda$, and cannot produce any new behaviour in the cosmological expansion without simultaneously destroying the Newtonian approximation to gravity on small scales.
[ "gr-qc", "astro-ph.CO" ]
gr-qc
astro-ph.CO
General Relativity and Quantum Cosmology;Cosmology and Nongalactic Astrophysics
2,701General Relativity and Quantum Cosmology;Cosmology and Nongalactic Astrophysics
1702.05782
The Principle of Maximum Entropy, a powerful and general method for inferring the distribution function given a set of constraints, is applied to deduce the overall distribution of 3D plasmoids (flux ropes/tubes) for systems where resistive MHD is applicable and large numbers of plasmoids are produced. The analysis is undertaken for the 3D case, with mass, total flux and velocity serving as the variables of interest, on account of their physical and observational relevance. The distribution functions for the mass, width, total flux and helicity exhibit a power-law behavior with exponents of $-4/3$, $-2$, $-3$ and $-2$ respectively for small values, whilst all of them display an exponential falloff for large values. In contrast, the velocity distribution, as a function of $v = |{\bf v}|$, is shown to be flat for $v \rightarrow 0$, and becomes a power law with an exponent of $-7/3$ for $v \rightarrow \infty$. Most of these results are nearly independent of the free parameters involved in this specific problem. A preliminary comparison of our results with the observational evidence is presented, and some of the ensuing space and astrophysical implications are briefly discussed.
[ "astro-ph.HE", "astro-ph.SR", "cond-mat.stat-mech", "physics.plasm-ph", "physics.space-ph" ]
astro-ph.HE
astro-ph.SR
High Energy Astrophysical Phenomena;Solar and Stellar Astrophysics;Statistical Mechanics;Plasma Physics;Space Physics
7,267longtail
math/0011177
Using the frame formalism we determine some possible metrics and metric-compatible connections on the noncommutative differential geometry of the real quantum plane. By definition a metric maps the tensor product of two 1-forms into a `function' on the quantum plane. It is symmetric in a modified sense, namely in the definition of symmetry one has to replace the permutator map with a deformed map \sigma fulfilling some suitable conditions. Correspondingly, also the definition of the hermitean conjugate of the tensor product of two 1-forms is modified (but reduces to the standard one if \sigma coincides with the permutator). The metric is real with respect to such modified *-structure.
[ "math.QA" ]
math.QA
Quantum Algebra
5,873Quantum Algebra
1710.10900
Let $\vec{K}_{\mathbb{N}}$ be the complete symmetric digraph on the positive integers. Answering a question of DeBiasio and McKenney, we construct a 2-colouring of the edges of $\vec{K}_{\mathbb{N}}$ in which every monochromatic path has density 0. On the other hand, we show that, in every colouring that does not have a directed path with $r$ edges in the first colour, there is a directed path in the second colour with density at least $\frac1r$.
[ "math.CO" ]
math.CO
Combinatorics
1,014Combinatorics
2209.04598
This paper proposes a data-driven affinely adjustable robust Volt/VAr control (AARVVC) scheme, which modulates the smart inverter reactive power in an affine function of its active power, based on the voltage sensitivities with respect to real/reactive power injections. To achieve a fast and accurate estimation of voltage sensitivities, we propose a data-driven method based on deep neural network (DNN), together with a rule-based bus-selection process using the bidirectional search method. Our method only uses the operating statuses of selected buses as inputs to DNN, thus significantly improving the training efficiency and reducing information redundancy. Finally, a distributed consensus-based solution, based on the alternating direction method of multipliers (ADMM), for the AARVVC is applied to decide the inverter reactive power adjustment rule with respect to its active power. Only limited information exchange is required between each local agent and the central agent to obtain the slope of the reactive power adjustment rule, and there is no need for the central agent to solve any (sub)optimization problems. Numerical results on the modified IEEE-123 bus system validate the effectiveness and superiority of the proposed data-driven AARVVC method.
[ "eess.SY", "cs.SY" ]
eess.SY
cs.SY
Systems and Control;Systems and Control
7,220Systems and Control;Systems and Control
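The affine adjustment rule in the abstract above, where an inverter's reactive power is an affine function of its active power, can be sketched in a few lines. The affine form follows the abstract; the clipping to the inverter's apparent-power rating and all parameter names are assumptions for illustration.

```python
import math

def reactive_setpoint(p, slope, intercept, s_rating=1.0):
    """Affinely adjustable Volt/VAr rule q = slope * p + intercept, clipped
    to the inverter's remaining capability sqrt(S^2 - p^2) at active power p.
    All quantities in per-unit; the ADMM scheme in the paper would decide
    `slope` for each inverter."""
    q = slope * p + intercept
    q_lim = math.sqrt(max(s_rating ** 2 - p ** 2, 0.0))
    return max(-q_lim, min(q, q_lim))
```

A usage example: `reactive_setpoint(0.8, -0.5, 0.1)` applies the rule inside the capability curve, while a steep slope such as `reactive_setpoint(0.8, -2.0, 0.0)` saturates at the rating.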
1203.4553
An isophote comprises a locus of surface points whose normal vectors make a constant angle with a fixed vector. The main objective of this paper is to find the axis of an isophote curve via its Darboux frame and then to give some characterizations of isophotes and their axes. In particular, further characterizations are obtained for isophotes lying on a canal surface.
[ "math.DG" ]
math.DG
Differential Geometry
2,010Differential Geometry
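The defining condition from the abstract above can be written compactly; the decomposition of the axis in the Darboux frame is the standard setup for this kind of problem (the notation is assumed here, not taken from the paper).

```latex
% Isophote on a surface M with unit normal N and fixed unit axis d:
\mathcal{I}_\theta = \{\, p \in M : \langle N(p), d \rangle = \cos\theta \,\},
\qquad \theta \ \text{constant}.
% Along an isophote curve with Darboux frame \{T, g, n\}, g = n \times T,
% the constant unit axis decomposes as
d = a\,T + b\,g + \cos\theta\, n, \qquad a^2 + b^2 = \sin^2\theta ,
% so finding the axis amounts to determining a and b from the Darboux
% curvatures along the curve.
```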
cs/9811020
This paper addresses the issue of making legacy information (that material held in paper format only) electronically searchable and retrievable. We used proprietary software and commercial hardware to create a process for scanning, cataloging, archiving and electronically disseminating full-text documents. This process is relatively easy to implement and reasonably affordable.
[ "cs.DL" ]
cs.DL
Digital Libraries
2,081Digital Libraries
0910.3950
The Constrained Minimal Supersymmetric Standard Model (CMSSM) is one of the simplest and most widely-studied supersymmetric extensions to the standard model of particle physics. Nevertheless, current data do not sufficiently constrain the model parameters in a way completely independent of priors, statistical measures and scanning techniques. We present a new technique for scanning supersymmetric parameter spaces, optimised for frequentist profile likelihood analyses and based on Genetic Algorithms. We apply this technique to the CMSSM, taking into account existing collider and cosmological data in our global fit. We compare our method to the MultiNest algorithm, an efficient Bayesian technique, paying particular attention to the best-fit points and implications for particle masses at the LHC and dark matter searches. Our global best-fit point lies in the focus point region. We find many high-likelihood points in both the stau co-annihilation and focus point regions, including a previously neglected section of the co-annihilation region at large m_0. We show that there are many high-likelihood points in the CMSSM parameter space commonly missed by existing scanning techniques, especially at high masses. This has a significant influence on the derived confidence regions for parameters and observables, and can dramatically change the entire statistical inference of such scans.
[ "hep-ph", "astro-ph.CO" ]
hep-ph
astro-ph.CO
High Energy Physics - Phenomenology;Cosmology and Nongalactic Astrophysics
3,156High Energy Physics - Phenomenology;Cosmology and Nongalactic Astrophysics
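The scanning technique proposed in the abstract above is based on Genetic Algorithms; a minimal real-coded GA (truncation selection, blend crossover, Gaussian mutation) on a toy log-likelihood surface conveys the idea. This is a stand-in under stated assumptions, not a reproduction of the paper's tuned scanner.

```python
import random

def ga_maximise(f, bounds, pop_size=40, n_gen=80, sigma=0.05, seed=1):
    """Maximise f over a box via a minimal elitist genetic algorithm.
    sigma is the mutation scale as a fraction of each parameter's range."""
    rng = random.Random(seed)
    clip = lambda x, lo, hi: min(max(x, lo), hi)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=f, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            w = rng.random()                       # blend crossover weight
            children.append([
                clip(w * x + (1 - w) * y + rng.gauss(0, sigma) * (hi - lo), lo, hi)
                for x, y, (lo, hi) in zip(a, b, bounds)])
        pop = survivors + children
    return max(pop, key=f)

# Toy "log-likelihood" peaked at (1, -2), standing in for the CMSSM surface
best = ga_maximise(lambda p: -((p[0] - 1) ** 2 + (p[1] + 2) ** 2),
                   bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

For profile-likelihood scans the relevant property is exactly the one tested here: the GA hunts the global best-fit point rather than mapping a posterior, which is why it complements Bayesian samplers like MultiNest.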
2111.06718
In this paper we present a short overview of a new Wolfram Mathematica package intended for elementary "in-basis" tensor and differential-geometric calculations. In contrast to alternatives, our package is designed to be easy-to-use, short, all-purpose, and hackable. It supports tensor contractions using Einstein notation, transformations between different bases, a tensor derivative operator, expansion in basis vectors and forms, the exterior derivative, and the interior product.
[ "nucl-th", "cs.MS", "cs.SC", "hep-th", "physics.comp-ph" ]
nucl-th
cs.MS
Nuclear Theory;Mathematical Software;Symbolic Computation;High Energy Physics - Theory;Computational Physics
7,267longtail
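The package itself is Mathematica, but the core operation it advertises, Einstein-notation contraction against a metric, has a direct analogue in NumPy's `einsum`, shown here as a language-neutral illustration (not the package's own syntax).

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+,-,-,-)
v = np.array([2.0, 1.0, 0.0, 0.0])        # contravariant components v^mu

v_low = np.einsum('mn,n->m', g, v)        # lower the index: v_mu = g_{mu nu} v^nu
invariant = np.einsum('m,m->', v, v_low)  # scalar invariant v^mu v_mu
```

Repeated subscripts in the `einsum` string are summed over, which is exactly the Einstein convention the package implements for in-basis components.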
cond-mat/0403387
An exact treatment of the non-equilibrium dynamics of hard-core bosons on one-dimensional lattices shows that, starting from a pure Fock state, quasi-long-range correlations develop dynamically, and that they lead to the formation of quasi-condensates at finite momenta. Scaling relations characterizing the quasi-condensate and the dynamics of its formation are obtained. The relevance of our findings for atom lasers with full control of the wave-length by means of a lattice is discussed.
[ "cond-mat.stat-mech" ]
cond-mat.stat-mech
Statistical Mechanics
6,821Statistical Mechanics
1611.01091
In 1982, Tamaki Yano proposed a conjecture predicting the set of b-exponents of an irreducible plane curve singularity germ which is generic in its equisingularity class. In \cite{ACLM-Yano2} we proved the conjecture for the case in which the germ has two Puiseux pairs and its algebraic monodromy has distinct eigenvalues. In this article we aim to study the Bernstein polynomial for any function with the hypotheses above. In particular the set of all common roots of those Bernstein polynomials is given. We provide also bounds for some analytic invariants of singularities and illustrate the computations in suitable examples.
[ "math.AG" ]
math.AG
Algebraic Geometry
47Algebraic Geometry
1505.01483
We present Atacama Large Millimeter/submillimeter Array (ALMA) observations of GSC 6214-210 A and B, a solar-mass member of the 5-10 Myr Upper Scorpius association with a 15 $\pm$ 2 Mjup companion orbiting at $\approx$330 AU (2.2"). Previous photometry and spectroscopy spanning 0.3-5 $\mu$m revealed optical and thermal excess as well as strong H$\alpha$ and Pa~$\beta$ emission originating from a circum-substellar accretion disk around GSC 6214-210 B, making it the lowest mass companion with unambiguous evidence of a subdisk. Despite ALMA's unprecedented sensitivity and angular resolution, neither component was detected in our 880 $\mu$m (341 GHz) continuum observations down to a 3-$\sigma$ limit of 0.22 mJy/beam. The corresponding constraints on the dust mass and total mass are <0.15 Mearth and <0.05 Mjup, respectively, or <0.003% and <0.3% of the mass of GSC 6214-210 B itself assuming a 100:1 gas-to-dust ratio and characteristic dust temperature of 10-20 K. If the host star possesses a putative circum-stellar disk then at most it is a meager 0.0015% of the primary mass, implying that giant planet formation has certainly ceased in this system. Considering these limits and its current accretion rate, GSC 6214-210 B appears to be at the end stages of assembly and is not expected to gain any appreciable mass over the next few Myr.
[ "astro-ph.EP" ]
astro-ph.EP
Earth and Planetary Astrophysics
2,351Earth and Planetary Astrophysics
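The dust-mass limit quoted in the abstract above follows from the standard optically thin relation M = F_nu d^2 / (kappa_nu B_nu(T)). The sketch below evaluates it at the stated 0.22 mJy limit; the opacity kappa = 3.5 cm^2/g and the Upper Scorpius distance of 145 pc are typical assumed values, not necessarily the paper's exact choices.

```python
import math

H_PLANCK = 6.62607015e-34   # J s
K_B = 1.380649e-23          # J / K
C = 2.99792458e8            # m / s

def planck(nu, T):
    """Planck spectral radiance B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * H_PLANCK * nu ** 3 / C ** 2 / math.expm1(H_PLANCK * nu / (K_B * T))

def dust_mass_kg(flux_jy, dist_pc, T, kappa=0.35, nu=341e9):
    """Optically thin dust mass M = F_nu d^2 / (kappa_nu B_nu(T)).
    kappa in m^2/kg (0.35 m^2/kg = 3.5 cm^2/g, an assumed opacity)."""
    d = dist_pc * 3.0857e16              # parsec -> m
    f = flux_jy * 1e-26                  # Jy -> W m^-2 Hz^-1
    return f * d ** 2 / (kappa * planck(nu, T))

M_EARTH = 5.972e24                        # kg
m10 = dust_mass_kg(0.22e-3, 145, 10.0)    # 3-sigma flux limit, T = 10 K
m20 = dust_mass_kg(0.22e-3, 145, 20.0)    # same limit, T = 20 K
```

With these assumptions the 10 K case lands near the abstract's <0.15 Mearth figure, and the limit scales linearly with the flux, so a deeper map tightens it proportionally.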
1912.08868
We describe the use of Non-Negative Matrix Factorization (NMF) and Latent Dirichlet Allocation (LDA) algorithms to perform topic mining and labelling applied to retail customer communications, in an attempt to characterize the subject of customers' inquiries. In this paper we compare the topic mining performance of both algorithms and propose methods to assign topic subject labels in an automated way.
[ "cs.LG", "cs.CL", "cs.CY", "cs.IR", "stat.ML" ]
cs.LG
cs.CL
Machine Learning;Computation and Language;Computers and Society;Information Retrieval;Machine Learning
7,267longtail
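The NMF step of the comparison above can be sketched from scratch with Lee-Seung multiplicative updates on a toy term-document matrix; the paper presumably used a library implementation, so this is only the algorithmic core.

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates for V ~ W H with nonnegative factors.
    Rows of the returned H are the k topics over the vocabulary."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy document-term matrix: terms = [price, refund, delivery, late];
# docs 0-1 are about billing, docs 2-3 about shipping (rank 2 by construction).
V = np.array([[2.0, 1.0, 0.0, 0.0],
              [4.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 3.0],
              [0.0, 0.0, 2.0, 6.0]])
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H)
```

With cleanly separable inquiries like these, each row of H concentrates on one term group, which is what makes automated topic labelling (e.g. by top-weighted terms) feasible.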
1202.0388
We present colour transformations for the conversion of the Wide-field Infrared Survey Explorer (WISE) W1, W2, and W3 magnitudes to the Johnson-Cousins (BVIc), Sloan Digital Sky Survey (gri), and Two Micron All Sky Survey (JHKs) photometric systems, for red clump (RC) stars. RC stars were selected from the Radial Velocity Experiment (RAVE) Third Data Release (DR3). The apparent magnitudes were collected by matching the coordinates of this sample with different photometric catalogues. The final sample (355 RC stars) was used to obtain both metallicity-dependent and metallicity-free transformations between the WISE and the Johnson-Cousins, SDSS, and 2MASS photometric systems. These transformations, combined with known absolute magnitudes at shorter wavelengths, can be used in space density determinations for the Galactic (thin and thick) discs at distances larger than those reachable with JHKs photometry alone, hence providing a powerful tool for the analysis of Galactic structure.
[ "astro-ph.GA" ]
astro-ph.GA
Astrophysics of Galaxies
464Astrophysics of Galaxies
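A metallicity-dependent colour transformation of the kind described above is a linear least-squares fit of one colour against another colour plus [Fe/H]. The sketch below recovers coefficients from noiseless synthetic data; the functional form, coefficients, and colour ranges are made up for illustration, not the paper's calibration.

```python
import numpy as np

# Assumed model:  B - V = a (W1 - W2) + b [Fe/H] + c
rng = np.random.default_rng(3)
w1w2 = rng.uniform(-0.1, 0.4, 200)        # synthetic WISE colour
feh = rng.uniform(-1.0, 0.2, 200)         # synthetic metallicity
bv = 1.8 * w1w2 - 0.25 * feh + 0.95       # noiseless "truth"

A = np.column_stack([w1w2, feh, np.ones_like(w1w2)])
coef, *_ = np.linalg.lstsq(A, bv, rcond=None)
```

With real photometry the fit would carry scatter from measurement errors and intrinsic RC spread, and dropping the [Fe/H] column yields the metallicity-free variant of the transformation.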
2207.13769
This paper presents the results of calculating Casimir-Lifshitz friction force and heating rate of a small metal particle moving above a metal surface (thick plate) in the case of their different local temperatures. The case of normal nonmagnetic metals (Au) is considered. There is a strong interplay of temperatures, particle velocity and separation distance resulting in an anomalous direction of the heat flux between the bodies and a peak temperature dependence of the friction force at sufficiently low temperatures. In particular, a hot moving particle can additionally receive heat from a cold surface. The conditions for experimental measurement of these effects are discussed.
[ "cond-mat.mes-hall" ]
cond-mat.mes-hall
Mesoscale and Nanoscale Physics
4,450Mesoscale and Nanoscale Physics
2310.20616
Cross-flow turbines harness kinetic energy in wind or moving water. Due to their unsteady fluid dynamics, it can be difficult to predict the interplay between aspects of rotor geometry and turbine performance. This study considers the effects of three geometric parameters: the number of blades, the preset pitch angle, and the chord-to-radius ratio. The relevant fluid dynamics of cross-flow turbines are reviewed, as are prior experimental studies that have investigated these parameters in a more limited manner. Here, 223 unique experiments are conducted across an order of magnitude of diameter-based Reynolds numbers ($\approx 8\!\times\!10^4 - 8\!\times\!10^5$) in which the performance implications of these three geometric parameters are evaluated. In agreement with prior work, maximum performance is generally observed to increase with Reynolds number and decrease with blade count. The broader experimental space identifies new parametric interdependencies; for example, the optimal preset pitch angle is increasingly negative as the chord-to-radius ratio increases. Because these experiments vary both the chord-to-radius ratio and blade count, the performance of different rotor geometries with the same solidity (the ratio of total blade chord to rotor circumference) can also be evaluated. Results demonstrate that while solidity can be a poor predictor of maximum performance, across all scales and tested geometries it is an excellent predictor of the tip-speed ratio corresponding to maximum performance. Overall, these results present a uniquely holistic view of relevant geometric considerations for cross-flow turbine rotor design and provide a rich dataset for validation of numerical simulations and reduced-order models.
[ "physics.flu-dyn" ]
physics.flu-dyn
Fluid Dynamics
2,452Fluid Dynamics
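The two nondimensional quantities at the heart of the abstract above, solidity and tip-speed ratio, are one-line formulas; the rotor numbers below are illustrative, not the paper's test matrix.

```python
import math

def solidity(n_blades, chord, radius):
    """sigma = N c / (2 pi R): total blade chord over rotor circumference."""
    return n_blades * chord / (2 * math.pi * radius)

def tip_speed_ratio(omega, radius, u_inf):
    """lambda = omega R / U_inf: blade speed over free-stream speed."""
    return omega * radius / u_inf

# Two rotors with different blade counts but identical solidity, as in the
# paper's matched-solidity comparisons.
s2 = solidity(2, 0.15, 0.5)
s3 = solidity(3, 0.10, 0.5)
```

The matched pair illustrates the paper's point: two rotors can share a solidity yet differ in blade count and chord-to-radius ratio, which is why solidity alone fails to predict maximum performance even while it predicts the optimal tip-speed ratio well.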
2201.08416
We study the electromechanical response of Janus transition metal dichalcogenide (TMD) nanotubes from first principles. In particular, considering both armchair and zigzag variants of twenty-seven select Janus TMD nanotubes, we determine the change in bandgap and charge carriers' effective mass upon axial and torsional deformations using density functional theory (DFT). We observe that metallic nanotubes remain unaffected, whereas the bandgap in semiconducting nanotubes decreases linearly and quadratically with axial and shear strains, respectively, leading to semiconductor--metal transitions. In addition, we find that there is a continuous decrease and increase in the effective mass of holes and electrons with strains, respectively, leading to n-type--p-type semiconductor transitions. We show that this behavior is a consequence of the rehybridization of orbitals, rather than charge transfer between the atoms. Overall, mechanical deformations form a powerful tool for tailoring the electronic response of semiconducting Janus TMD nanotubes.
[ "cond-mat.mtrl-sci", "physics.chem-ph" ]
cond-mat.mtrl-sci
physics.chem-ph
Materials Science;Chemical Physics
4,301Materials Science;Chemical Physics
1712.00819
We establish a theoretical framework for exploring the quantum dynamics of finite ultracold bosonic ensembles based on the Born-Bogoliubov-Green-Kirkwood-Yvon (BBGKY) hierarchy of equations of motion for few-particle reduced density matrices (RDMs). The theory applies to zero as well as low temperatures and is formulated in a highly efficient way by utilizing dynamically optimized single-particle basis states and representing the RDMs in terms of permanents with respect to those. An energy, RDM compatibility and symmetry conserving closure approximation is developed on the basis of a recursively formulated cluster expansion for these finite systems. In order to enforce necessary representability conditions, two novel, minimally invasive and energy-conserving correction algorithms are proposed, involving the dynamical purification of the solution of the truncated BBGKY hierarchy and the correction of the equations of motion themselves, respectively. For gaining conceptual insights, the impact of two-particle correlations on the dynamical quantum depletion is studied analytically. We apply this theoretical framework to both a tunneling and an interaction-quench scenario. Due to our efficient formulation of the theory, we can reach truncation orders as large as twelve and thereby systematically study the impact of the truncation order on the results. While the short-time dynamics is found to be excellently described with controllable accuracy, significant deviations occur on a longer time-scale in sufficiently far off-equilibrium situations. These deviations are accompanied by exponential-like instabilities leading to unphysical results. The phenomenology of these instabilities is investigated in detail and we show that the minimally invasive correction of the equations of motion can indeed stabilize the truncated BBGKY hierarchy.
[ "quant-ph", "cond-mat.quant-gas" ]
quant-ph
cond-mat.quant-gas
Quantum Physics;Quantum Gases
6,169Quantum Physics;Quantum Gases
2311.00790
Metaphor identification aims at understanding whether a given expression is used figuratively in context. However, in this paper we show how existing metaphor identification datasets can be gamed by fully ignoring the potential metaphorical expression or the context in which it occurs. We test this hypothesis in a variety of datasets and settings, and show that metaphor identification systems based on language models without complete information can be competitive with those using the full context. This is due to the construction procedures to build such datasets, which introduce unwanted biases for positive and negative classes. Finally, we test the same hypothesis on datasets that are carefully sampled from natural corpora and where this bias is not present, making these datasets more challenging and reliable.
[ "cs.CL" ]
cs.CL
Computation and Language
1,168Computation and Language
1708.06602
This paper investigates the energy bounds in modified Gauss-Bonnet gravity with an anisotropic background. The locally rotationally symmetric Bianchi type ${I}$ cosmological model in $f(R,G)$ gravity is considered to meet this aim. Primarily, a general $f(R,G)$ model is used to develop the field equations. In this respect, we investigate the viability of the modified gravitational theory by studying the energy conditions. We take into account four $f(R,G)$ gravity models commonly discussed in the literature. We formulate the inequalities obtained from the energy conditions and investigate the viability of the above mentioned models using the Hubble, deceleration, jerk and snap parameters. Graphical analysis shows that for the first two $f(R,G)$ gravity models, the NEC, WEC and SEC are satisfied under suitable values of the anisotropy and model parameters involved. Moreover, the SEC is violated for the third and fourth models, which predicts the cosmic expansion.
[ "physics.gen-ph" ]
physics.gen-ph
General Physics
2,645General Physics
2009.05138
In many online platforms, customers' decisions are substantially influenced by product rankings as most customers only examine a few top-ranked products. Concurrently, such platforms also use the same data corresponding to customers' actions to learn how these products must be ranked or ordered. These interactions in the underlying learning process, however, may incentivize sellers to artificially inflate their position by employing fake users, as exemplified by the emergence of click farms. Motivated by such fraudulent behavior, we study the ranking problem of a platform that faces a mixture of real and fake users who are indistinguishable from one another. We first show that existing learning algorithms---that are optimal in the absence of fake users---may converge to highly sub-optimal rankings under manipulation by fake users. To overcome this deficiency, we develop efficient learning algorithms under two informational environments: in the first setting, the platform is aware of the number of fake users, and in the second setting, it is agnostic to the number of fake users. For both these environments, we prove that our algorithms converge to the optimal ranking, while being robust to the aforementioned fraudulent behavior; we also present worst-case performance guarantees for our methods, and show that they significantly outperform existing algorithms. At a high level, our work employs several novel approaches to guarantee robustness such as: (i) constructing product-ordering graphs that encode the pairwise relationships between products inferred from the customers' actions; and (ii) implementing multiple levels of learning with a judicious amount of bi-directional cross-learning between levels.
[ "cs.LG", "cs.IR", "stat.ML" ]
cs.LG
cs.IR
Machine Learning;Information Retrieval;Machine Learning
4,156Machine Learning;Information Retrieval;Machine Learning
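One of the ingredients named in the abstract above is a product-ordering graph encoding pairwise relationships inferred from customer actions. A toy version draws an edge u -> v when u wins strictly more recorded comparisons and ranks by out-degree (Copeland score); this is far simpler than the paper's robust algorithms and is only meant to show the graph idea.

```python
def copeland_ranking(wins, items):
    """Rank items from pairwise win counts via a product-ordering graph.
    wins: dict mapping (u, v) -> number of times u was preferred over v."""
    score = {i: 0 for i in items}
    for u in items:
        for v in items:
            if u != v and wins.get((u, v), 0) > wins.get((v, u), 0):
                score[u] += 1                      # edge u -> v in the graph
    return sorted(items, key=lambda i: (-score[i], i))

wins = {("a", "b"): 5, ("b", "a"): 2,
        ("a", "c"): 4, ("c", "a"): 1,
        ("b", "c"): 3, ("c", "b"): 1}
order = copeland_ranking(wins, ["a", "b", "c"])
```

The fragility the paper targets is visible even here: fake users inflating one product's win counts flip edges directly, so a robust method must decide which observed comparisons to trust rather than count naively.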
2007.14328
We present an option pricing formula for European options in a stochastic volatility model. In particular, the volatility process is defined using a fractional integral of a diffusion process, and both the stock price and the volatility processes have jumps in order to capture the market effect known as the leverage effect. We show how to compute a martingale representation for the volatility process. Finally, using It\^o calculus for processes with discontinuous trajectories, we develop a first order approximation formula for option prices. There are two main advantages in using such approximation formulas over traditional pricing methods: first, improved computational efficiency, and second, a deeper understanding of how option prices change with the model parameters.
[ "q-fin.PR", "math.PR" ]
q-fin.PR
math.PR
Pricing of Securities;Probability
5,704Pricing of Securities;Probability
2101.09155
In this paper we derive some Edmundson-Lah-Ribari\v{c} type inequalities for positive linear functionals and 3-convex functions. The main results are applied to the generalized f-divergence functional. Examples with the Zipf-Mandelbrot law are used to illustrate the results. In addition, the obtained results are utilized in constructing some families of exponentially convex functions and Stolarsky-type means.
[ "math.GM" ]
math.GM
General Mathematics
2,639General Mathematics
1809.10196
Objective: Ultrahigh-resolution optical coherence microscopy (OCM) has recently demonstrated its potential for accurate diagnosis of human cervical diseases. One major challenge for clinical adoption, however, is the steep learning curve clinicians need to overcome to interpret OCM images. Developing an intelligent technique for computer-aided diagnosis (CADx) to accurately interpret OCM images will facilitate clinical adoption of the technology and improve patient care. Methods: 497 high-resolution 3-D OCM volumes (600 cross-sectional images each) were collected from 159 ex vivo specimens of 92 female patients. OCM image features were extracted using a convolutional neural network (CNN) model, concatenated with patient information (e.g., age, HPV results), and classified using a support vector machine classifier. Ten-fold cross-validations were utilized to test the performance of the CADx method in a five-class classification task and a binary classification task. Results: An 88.3 plus or minus 4.9% classification accuracy was achieved for five fine-grained classes of cervical tissue, namely normal, ectropion, low-grade and high-grade squamous intraepithelial lesions (LSIL and HSIL), and cancer. In the binary classification task (low-risk [normal, ectropion and LSIL] vs. high-risk [HSIL and cancer]), the CADx method achieved an area-under-the-curve (AUC) value of 0.959 with an 86.7 plus or minus 11.4% sensitivity and 93.5 plus or minus 3.8% specificity. Conclusion: The proposed deep-learning based CADx method outperformed three human experts. It was also able to identify morphological characteristics in OCM images that were consistent with histopathological interpretations. Significance: Label-free OCM imaging, combined with deep-learning based CADx methods, hold a great promise to be used in clinical settings for the effective screening and diagnosis of cervical diseases.
[ "cs.CV" ]
cs.CV
Computer Vision and Pattern Recognition
1,498Computer Vision and Pattern Recognition
1510.00109
We consider a multi-agent confinement control problem in which a single leader has a purely repulsive effect on follower agents with double-integrator dynamics. By decomposing the leader's control inputs into periodic and aperiodic components, we show that the leader can be driven so as to guarantee confinement of the followers about a time-dependent trajectory in the plane. We use tools from averaging theory and an input-to-state stability type argument to derive conditions on the model parameters that guarantee confinement of the followers about the trajectory. For the case of a single follower, we show that if the follower starts at the origin, then the error in trajectory tracking can be made arbitrarily small depending on the frequency of the periodic control components and the rate of change of the trajectory. We validate our approach using simulations and experiments with a small mobile robot.
[ "cs.MA", "cs.SY", "math.OC" ]
cs.MA
cs.SY
Multiagent Systems;Systems and Control;Optimization and Control
4,690Multiagent Systems;Systems and Control;Optimization and Control
chao-dyn/9907032
Separated flows past complex geometries are modelled by discrete vortex techniques. The flows are assumed to be rotational and inviscid, and a new technique is described to determine the streamfunctions for linear shear profiles. The geometries considered are the snow cornice and the backward-facing step, whose edges allow for the separation of the flow and reattachment downstream of the recirculation regions. A point vortex has been added to the flows in order to constrain the separation points to be located at the edges, while the conformal mappings have been modified in order to smooth the sharp edges and let the separation points be free to oscillate around the points of maximum curvature. Unsteadiness is imposed on the flow by perturbing the vortex location, either by displacing the vortex from equilibrium, or by imposing a random perturbation with zero mean on the vortex in equilibrium. The trajectories of passive scalars continuously released upwind of the separation point and trapped by the recirculating bubble are numerically integrated, and concentration time series are calculated at fixed locations downwind of the reattachment points. This model proves to be capable of reproducing the trapping and intermittent release of scalars, in agreement with the simulation of the flow past a snow cornice performed by a discrete multi-vortex model, as well as with direct numerical simulations of the flow past a backward-facing step. The simulation results indicate that for flows undergoing separation and reattachment the unsteadiness of the recirculating bubble is the main mechanism responsible for the intense large-scale concentration fluctuations downstream.
[ "chao-dyn", "nlin.CD" ]
chao-dyn
nlin.CD
Chaotic Dynamics;Chaotic Dynamics
821Chaotic Dynamics;Chaotic Dynamics
cond-mat/0107264
We use the two-electron wavefunctions (geminals) and the simple screened Coulomb potential proposed by Overhauser [Can. J. Phys. 73, 683 (1995)] to compute the pair-distribution function for a uniform electron gas. We find excellent agreement with Quantum Monte Carlo simulations in the short-range region, for a wide range of electron densities. We are thus able to estimate the value of the second-order coefficient of the small interelectronic-distance expansion of the pair-distribution function. The results are generalized to the partially polarized gas.
[ "cond-mat" ]
cond-mat
Condensed Matter
1,697Condensed Matter
1602.00680
We perform the one-loop induced charged lepton flavor violating decays of the neutral Higgses in an extended mirror fermion model with non-sterile electroweak-scale right-handed neutrinos and a horizontal $A_4$ symmetry in the lepton sector. We demonstrate that for the 125 GeV scalar $h$ there is tension between the recent LHC result ${\cal B}(h \to \tau \mu) \sim $ 1% and the stringent limits on the rare processes $\mu \to e \gamma$ and $\tau \to (\mu$ or $e) \gamma$ from low energy experiments.
[ "hep-ph" ]
hep-ph
High Energy Physics - Phenomenology
3,129High Energy Physics - Phenomenology
1906.08916
We present a melody based classification of musical styles by exploiting the pitch and energy based characteristics derived from the audio signal. Three prominent musical styles were chosen which have improvisation as integral part with similar melodic principles, theme, and structure of concerts namely, Hindustani, Carnatic and Turkish music. Listeners of one or more of these genres can discriminate between these based on the melodic contour alone. Listening tests were carried out using melodic attributes alone, on similar melodic pieces with respect to raga/makam, and removing any instrumentation cue to validate our hypothesis that style distinction is evident in the melody. Our method is based on finding a set of highly discriminatory features, derived from musicology, to capture distinct characteristics of the melodic contour. Behavior in terms of transitions of the pitch contour, the presence of micro-tonal notes and the nature of variations in the vocal energy are exploited. The automatically classified style labels are found to correlate well with subjective listening judgments. This was verified by using statistical tests to compare the labels from subjective and objective judgments. The melody based features, when combined with timbre based features, were seen to improve the classification performance.
[ "cs.SD", "eess.AS" ]
cs.SD
eess.AS
Sound;Audio and Speech Processing
6,734Sound;Audio and Speech Processing
2204.06412
We present a simplified way to access and manipulate the topology of massive Dirac fermions by means of a scalar potential. We show systematically how a distribution of scalar potential can manipulate the signature of the gap or the mass term as well as the dispersion, leading to a band inversion via inverse Klein tunnelling. In one dimension it can lead to the formation of edge localisation. In two dimensions this can give rise to an emergent mechanism, which we refer to as the Scalar Hall Effect. This can facilitate a direct manipulation of topological invariants, e.g. the Chern number, and allows local manipulation of the edge states, thus opening new possibilities for tuning physical observables which originate from the nontrivial topology.
[ "cond-mat.mes-hall" ]
cond-mat.mes-hall
Mesoscale and Nanoscale Physics
4,450Mesoscale and Nanoscale Physics
1509.03141
Given a Banach space~$X$ with an unconditional basis, we consider the following question: does the identity on~$X$ factor through every operator on~$X$ with large diagonal relative to the unconditional basis? We show that on Gowers' unconditional Banach space, there exists an operator for which the answer to the question is negative. By contrast, for any operator on the mixed-norm Hardy spaces $H^p(H^q)$, where $1 \leq p,q < \infty$, with the bi-parameter Haar system, this problem always has a positive solution. The spaces $L^p, 1 < p < \infty$, were treated first by Andrew~[{\em Studia Math.}~1979].
[ "math.FA" ]
math.FA
Functional Analysis
2,549Functional Analysis
1711.11039
The fundamental metallicity relation (FMR) is a postulated correlation between galaxy stellar mass, star formation rate (SFR), and gas-phase metallicity. At its core, this relation posits that offsets from the mass-metallicity relation (MZR) at a fixed stellar mass are correlated with galactic SFR. In this Letter, we quantify the timescale with which galactic SFRs and metallicities evolve using hydrodynamical simulations. We find that Illustris and IllustrisTNG predict that galaxy offsets from the star formation main sequence and MZR evolve over similar timescales, are often anti-correlated in their evolution, evolve with the halo dynamical time, and produce a pronounced FMR. In fact, for a FMR to exist, the metallicity and SFR must evolve in an anti-correlated sense which requires that they evolve with similar time variability. In contrast to Illustris and IllustrisTNG, we speculate that the SFR and metallicity evolution tracks may become decoupled in galaxy formation models dominated by globally-bursty SFR histories, which could weaken the FMR residual correlation strength. This opens the possibility of discriminating between bursty and non-bursty feedback models based on the strength and persistence of the FMR -- especially at high redshift.
[ "astro-ph.GA" ]
astro-ph.GA
Astrophysics of Galaxies
464Astrophysics of Galaxies
2207.09061
Feature selection is an important process in machine learning. It builds an interpretable and robust model by selecting the features that contribute the most to the prediction target. However, most mature feature selection algorithms, including supervised and semi-supervised ones, fail to fully exploit the complex potential structure between features. We believe that these structures are very important for the feature selection process, especially when labels are lacking and data is noisy. To this end, we innovatively introduce a deep learning-based self-supervised mechanism into feature selection problems, namely batch-Attention-based Self-supervision Feature Selection (A-SFS). Firstly, a multi-task self-supervised autoencoder is designed to uncover the hidden structure among features with the support of two pretext tasks. Guided by the integrated information from the multi-self-supervised learning model, a batch-attention mechanism is designed to generate feature weights according to batch-based feature selection patterns to alleviate the impacts introduced by a handful of noisy data. This method is compared to 14 major strong benchmarks, including LightGBM and XGBoost. Experimental results show that A-SFS achieves the highest accuracy on most datasets. Furthermore, this design significantly reduces the reliance on labels, with only 1/10 of the labeled data needed to achieve the same performance as those state-of-the-art baselines. Results show that A-SFS is also the most robust to noisy and missing data.
[ "cs.LG", "cs.AI" ]
cs.LG
cs.AI
Machine Learning;Artificial Intelligence
3,892Machine Learning;Artificial Intelligence
1701.08053
Performance evaluation is a key issue for designers and users of Database Management Systems (DBMSs). Performance is generally assessed with software benchmarks that help, e.g., test architectural choices, compare different technologies or tune a system. In the particular context of data warehousing and On-Line Analytical Processing (OLAP), although the Transaction Processing Performance Council (TPC) aims at issuing standard decision-support benchmarks, few benchmarks do actually exist. We present in this chapter the Data Warehouse Engineering Benchmark (DWEB), which allows generating various ad-hoc synthetic data warehouses and workloads. DWEB is fully parameterized to fulfill various data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. We also expand on our previous work on DWEB by presenting its new Extract, Transform, and Load (ETL) feature as well as its new execution protocol. A Java implementation of DWEB is freely available on-line, which can be interfaced with most existing relational DBMSs. To the best of our knowledge, DWEB is the only easily available, up-to-date benchmark for data warehouses.
[ "cs.DB" ]
cs.DB
Databases
1,977Databases
1912.03635
This paper deals with the study of Birkhoff-James orthogonality of a linear operator to a subspace of operators defined between arbitrary Banach spaces. In case the domain space is reflexive and the subspace is finite dimensional, we obtain a complete characterization. For arbitrary Banach spaces, we obtain the same under some additional conditions. For an arbitrary Hilbert space $\mathbb{H}$, we also study orthogonality to a subspace of the space of linear operators $L(\mathbb{H})$, both with respect to the operator norm as well as the numerical radius norm.
[ "math.FA" ]
math.FA
Functional Analysis
2,549Functional Analysis
0907.5604
In this paper, we give the first detailed proof of the short-time existence of Deane Yang's local Ricci flow. Then using the local Ricci flow, we prove short-time existence of the Ricci flow on noncompact manifolds, whose Ricci curvature has global lower bound and sectional curvature has only local average integral bound. The short-time existence of the Ricci flow on noncompact manifolds with bounded curvature was studied by Wan-Xiong Shi in 1990s. As a corollary of our main theorem, we get the short-time existence part of Shi's theorem in this more general context.
[ "math.DG", "math.AP" ]
math.DG
math.AP
Differential Geometry;Analysis of PDEs
2,022Differential Geometry;Analysis of PDEs
2307.01207
Recommender Systems (RS) currently represent a fundamental tool in online services, especially with the advent of Online Social Networks (OSN). In this case, users generate huge amounts of contents and they can be quickly overloaded by useless information. At the same time, social media represent an important source of information to characterize contents and users' interests. RS can exploit this information to further personalize suggestions and improve the recommendation process. In this paper we present a survey of Recommender Systems designed and implemented for Online and Mobile Social Networks, highlighting how the use of social context information improves the recommendation task, and how standard algorithms must be enhanced and optimized to run in a fully distributed environment, as opportunistic networks. We describe advantages and drawbacks of these systems in terms of algorithms, target domains, evaluation metrics and performance evaluations. Eventually, we present some open research challenges in this area.
[ "cs.IR", "cs.LG", "cs.SI" ]
cs.IR
cs.LG
Information Retrieval;Machine Learning;Social and Information Networks
3,614Information Retrieval;Machine Learning;Social and Information Networks
1410.1616
In this paper we study the supersymmetric generalization of the new soft theorem which was proposed by Cachazo and Strominger recently. At tree level, we prove the validity of the super soft theorems in both ${\cal N}=4$ super-Yang-Mills theory and ${\cal N}=8$ supergravity using super-BCFW recursion relations. We verify these theorems exactly by showing some examples.
[ "hep-th" ]
hep-th
High Energy Physics - Theory
3,266High Energy Physics - Theory
0912.4667
We study the substructure statistics of a representative sample of galaxy clusters by means of two currently popular substructure characterisation methods, power ratios and centroid shifts. We use the 31 clusters from the REXCESS sample, compiled from the southern ROSAT All-Sky cluster survey REFLEX with a morphologically unbiased selection in X-ray luminosity and redshift, all of which have been reobserved with XMM-Newton. We investigate the uncertainties of the substructure parameters and examine the dependence of the results on projection effects, finding that the uncertainties of the parameters can be quite substantial. Thus while the quantification of the dynamical state of individual clusters with these parameters should be treated with extreme caution, these substructure measures provide powerful statistical tools to characterise trends of properties in large cluster samples. The centre shift parameter, w, is found to be more sensitive in general. For the REXCESS sample neither the occurrence of substructure nor the presence of cool cores depends on cluster mass. There is a significant anti-correlation between the existence of substantial substructure and cool cores. The simulated clusters show on average larger substructure parameters than the observed clusters, a trend that is traced to the fact that cool regions are more pronounced in the simulated clusters, leading to stronger substructure measures in merging clusters and clusters with offset cores. Moreover, the frequency of cool regions is higher in the simulations than in the observations, implying that the description of the physical processes shaping cluster formation in the simulations requires further improvement.
[ "astro-ph.CO", "astro-ph.HE" ]
astro-ph.CO
astro-ph.HE
Cosmology and Nongalactic Astrophysics;High Energy Astrophysical Phenomena
1,749Cosmology and Nongalactic Astrophysics;High Energy Astrophysical Phenomena
2208.00805
The paper investigates a dial-a-ride problem focusing on the residents of large cities. These individuals have the opportunity to use a wide variety of transportation modes. Because of this, ridepooling providers have to solve the tradeoff between a high pooling rate and a small detour for customers to be competitive. We provide a Branch-and-Cut algorithm for this problem setting and introduce a new technique using information about already fixed paths to identify infeasible solutions ahead of time and to improve lower bounds on the arrival times at customer locations. By this, we are able to introduce additional valid inequalities to improve the search. We evaluate our procedure in an extensive computational study with up to 120 customers and ten vehicles. Our procedure finds significantly more optimal solutions and better lower and upper bounds in comparison with a mixed-integer programming formulation.
[ "math.OC" ]
math.OC
Optimization and Control
5,234Optimization and Control
1510.06083
Variable selection is a fundamental task in statistical data analysis. Sparsity-inducing regularization methods are a popular class of methods that simultaneously perform variable selection and model estimation. The central problem is a quadratic optimization problem with an l0-norm penalty. Exactly enforcing the l0-norm penalty is computationally intractable for larger scale problems, so different sparsity-inducing penalty functions that approximate the l0-norm have been introduced. In this paper, we show that viewing the problem from a convex relaxation perspective offers new insights. In particular, we show that a popular sparsity-inducing concave penalty function known as the Minimax Concave Penalty (MCP), and the reverse Huber penalty derived in a recent work by Pilanci, Wainwright and Ghaoui, can both be derived as special cases of a lifted convex relaxation called the perspective relaxation. The optimal perspective relaxation is a related minimax problem that balances the overall convexity and tightness of approximation to the l0 norm. We show it can be solved by a semidefinite relaxation. Moreover, a probabilistic interpretation of the semidefinite relaxation reveals connections with the boolean quadric polytope in combinatorial optimization. Finally, by reformulating the l0-norm penalized problem as a two-level problem, with the inner level being a Max-Cut problem, our proposed semidefinite relaxation can be realized by replacing the inner level problem with its semidefinite relaxation studied by Goemans and Williamson. This interpretation suggests using the Goemans-Williamson rounding procedure to find approximate solutions to the l0-norm penalized problem. Numerical experiments demonstrate the tightness of our proposed semidefinite relaxation, and the effectiveness of finding approximate solutions by Goemans-Williamson rounding.
[ "cs.LG", "math.NA", "math.OC", "stat.ML" ]
cs.LG
math.NA
Machine Learning;Numerical Analysis;Optimization and Control;Machine Learning
7,267longtail
2004.01553
In this paper, we investigate the almost surely pointwise convergence problem of the free KdV equation, free wave equation, and free elliptic and non-elliptic Schr\"odinger equations, respectively. We first establish some estimates related to the Wiener decomposition of frequency spaces, which are Lemmas 2.1-2.6 in this paper. Secondly, by using Lemmas 2.1-2.6 and 3.1, we establish the probabilistic estimates of some random series, which are Lemmas 3.2-3.11 in this paper. Finally, combining the density theorem in $L^{2}$ with Lemmas 3.2-3.11, we obtain almost surely pointwise convergence of the solutions to the corresponding equations with randomized initial data in $L^{2}$, which requires much less regularity of the initial data than the rough data case. At the same time, we present the probabilistic density theorem, which is Lemma 3.11 in this paper.
[ "math.AP" ]
math.AP
Analysis of PDEs
205Analysis of PDEs
2002.09908
Let $n_0$ be 1 or 3. If a multiplicative function $f$ satisfies $f(p+q-n_0) = f(p)+f(q)-f(n_0)$ for all primes $p$ and $q$, then $f$ is the identity function $f(n)=n$ or a constant function $f(n)=1$.
[ "math.NT" ]
math.NT
Number Theory
4,945Number Theory
2202.00911
To leverage the power of big data from source tasks and overcome the scarcity of the target task samples, representation learning based on multi-task pretraining has become a standard approach in many applications. However, up until now, choosing which source tasks to include in the multi-task learning has been more art than science. In this paper, we give the first formal study on source task sampling by leveraging the techniques from active learning. We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance. Theoretically, we show that for the linear representation class, to achieve the same error rate, our algorithm can save up to a \textit{number of source tasks} factor in the source task sample complexity, compared with the naive uniform sampling from all source tasks. We also provide experiments on real-world computer vision datasets to illustrate the effectiveness of our proposed method on both linear and convolutional neural network representation classes. We believe our paper serves as an important initial step to bring techniques from active learning to representation learning.
[ "cs.LG", "cs.AI" ]
cs.LG
cs.AI
Machine Learning;Artificial Intelligence
3,892Machine Learning;Artificial Intelligence
hep-ph/9608325
In this talk the Higgs boson effects in electroweak precision observables are reviewed and the possibility of indirect information on the Higgs mass from electroweak radiative corrections and precision data is discussed.
[ "hep-ph" ]
hep-ph
High Energy Physics - Phenomenology
3,129High Energy Physics - Phenomenology
2004.07373
This article explores the parallels between improvisational theater (Improv) and teaching in an Active Learning environment. It presents the notions of Active Teaching as a natural complement to Active Learning, and discusses how unexpected situations give rise to valuable Teaching Moments. These Teaching Moments can be strategically utilized following the rules of Improv. This article presents some examples of this in the Mathematics classroom, as well as the implementation of an Improv Seminar in the Mathematics Department at the University of California, Santa Cruz.
[ "math.HO" ]
math.HO
History and Overview
3,426History and Overview
2211.16786
The image recapture attack is an effective image manipulation method to erase certain forensic traces, and when targeting personal document images, it poses a great threat to the security of e-commerce and other web applications. Considering that current learning-based methods suffer from a serious overfitting problem, in this paper, we propose a novel two-branch deep neural network by mining better generalized recapture artifacts with a designed frequency filter bank and a multi-scale cross-attention fusion module. In extensive experiments, we show that our method can achieve better generalization capability compared with state-of-the-art techniques in different scenarios.
[ "cs.CV", "cs.AI" ]
cs.CV
cs.AI
Computer Vision and Pattern Recognition;Artificial Intelligence
1,502Computer Vision and Pattern Recognition;Artificial Intelligence