We present efficient coupling of single organic molecules to a gallium phosphide subwavelength waveguide (nanoguide). By examining and correlating the temporal dynamics of various single-molecule resonances at different locations along the nanoguide, we reveal light-induced fluctuations of their Stark shifts. Our observations are consistent with the predictions of a simple model based on the optical activation of a small number of charges in the GaP nanostructure.
Nanoscopic charge fluctuations in a gallium phosphide waveguide measured by single molecules
Our published paper contains an incorrect statement of a result due to Artin and Zhang. This corrigendum gives the correct statement of their result and includes a new result that allows us to use their result to prove our main theorem. Thus the main theorem of our published paper is correct as stated but its proof must be modified.
Corrigendum to "An equivalence of categories for graded modules over monomial algebras and path algebras of quivers" [J. Algebra, 353(1) (2012) 249-260]
We give an overview of the main features of the CMS trigger and data acquisition (DAQ) system. Then, we illustrate the strategies and trigger configurations (trigger tables) developed for the detector calibration and physics program of the CMS experiment, at start-up of LHC operations, as well as their possible evolution with increasing luminosity. Finally, we discuss the expected CPU time performance of the trigger algorithms and the CPU requirements for the event filter farm at start-up.
The Trigger System of the CMS Experiment
Quantum Mechanics (QM) is a quantum probability theory based on the density matrix. The possibility of applying classical probability theory, which is based on the probability distribution function (PDF), to describe quantum systems is investigated in this work. In a sense this is also the question of whether a Hidden Variable Theory (HVT) of Quantum Mechanics is possible. Unlike Bell's inequality, which needs to be checked experimentally, here HVT is ruled out by theoretical considerations. The approach taken here is to construct explicitly the most general HVT that agrees with all results from experiments on quantum systems (QS), and to check its validity and acceptability. Our list of experimental facts about quantum objects, which all quantum theories are required to respect, includes facts on repeated quantum measurement. We show that these play an essential role in demonstrating that it is very unlikely that a classical theory can successfully reproduce all QS facts, even for a single spin-1/2 object. We also examine and rule out Bell's HVT and Bohm's HVT on the same grounds.
Could a Classical Probability Theory Describe Quantum Systems?
We study the possibility of detecting the charged Higgs bosons predicted in the Minimal Supersymmetric Standard Model $(H^\pm)$, with the reactions $e^{+}e^{-}\to \tau^-\bar \nu_{\tau}H^+, \tau^+\nu_\tau H^-$, using the helicity formalism. We analyze the region of parameter space $(m_{A^0}-\tan\beta)$ where $H^\pm$ could be detected in the limit when $\tan\beta$ is large. The numerical computation is done for the energies expected to be available at LEP-II ($\sqrt{s}=200$ GeV) and at a possible Next Linear $e^{+}e^{-}$ Collider ($\sqrt{s}=500$ GeV).
Detection of Charged MSSM Higgs Bosons at CERN LEP-II and NLC
Quantum-well (QW) states in {\it nonmagnetic} metal layers contained in magnetic multilayers are known to be important in spin-dependent transport, but the role of QW states in {\it magnetic} layers remains elusive. Here we identify the conditions and mechanisms for resonant tunneling through QW states in magnetic layers and determine candidate structures. We report first-principles calculations of spin-dependent transport in epitaxial Fe/MgO/FeO/Fe/Cr and Co/MgO/Fe/Cr tunnel junctions. We demonstrate the formation of sharp QW states in the Fe layer and show discrete conductance jumps as the QW states enter the transport window with increasing bias. At resonance, the current increases by one to two orders of magnitude. The tunneling magnetoresistance ratio is several times larger than in simple spin tunnel junctions and is positive (negative) for majority- (minority-) spin resonances, with a large asymmetry between positive and negative biases. The results can serve as the basis for novel spintronic devices.
Spin-dependent resonant tunneling through quantum-well states in magnetic metallic thin films
The Python library FatGHoL, used in Murri2012 to reckon the rational homology of the moduli space of Riemann surfaces, is an example of a non-numeric scientific code: most of the processing it does is generating graphs (represented by complex Python objects) and computing their isomorphisms (a triple of Python lists; again a nested data structure). These operations are repeated many times over: for example, two of the moduli spaces are triangulated by 4'583'322 and 747'664 graphs, respectively. This is an opportunity for every Python runtime to prove its strength in optimization. The purpose of this experiment was to assess the maturity of alternative Python runtimes in terms of compatibility with the language as implemented in CPython 2.7, and of performance speedup. This paper compares the results and experiences from running FatGHoL with different Python runtimes: CPython 2.7.5, PyPy 2.1, Cython 0.19, Numba 0.11, Nuitka 0.4.4 and Falcon.
Performance of Python runtimes on a non-numeric scientific code
In most machine learning applications, classification accuracy is not the primary metric of interest. Binary classifiers which face class imbalance are often evaluated by the $F_\beta$ score, area under the precision-recall curve, Precision at K, and more. The maximization of many of these metrics can be expressed as a constrained optimization problem, where the constraint is a function of the classifier's predictions. In this paper we propose a novel framework for learning with constraints that can be expressed as a predicted positive rate (or negative rate) on a subset of the training data. We explicitly model the threshold at which a classifier must operate to satisfy the constraint, yielding a surrogate loss function which avoids the complexity of constrained optimization. The method is model-agnostic and only marginally more expensive than minimization of the unconstrained loss. Experiments on a variety of benchmarks show competitive performance relative to existing baselines.
Constrained Classification and Ranking via Quantiles
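The quantile idea behind explicitly modeling the operating threshold can be sketched in a few lines. This is our illustration of the general principle, not the paper's surrogate loss; the function name and tie-breaking convention are ours:

```python
def threshold_for_rate(scores, rate):
    """Return the score threshold at which a classifier predicts exactly
    a `rate` fraction of the examples as positive (predict 1 iff
    score >= threshold). Ties at the threshold may perturb the rate."""
    k = max(1, round(rate * len(scores)))
    # The k-th largest score is the rate-quantile of the score distribution.
    return sorted(scores, reverse=True)[k - 1]

scores = [0.92, 0.81, 0.77, 0.40, 0.33, 0.25, 0.10, 0.05]
t = threshold_for_rate(scores, 0.25)      # top 25% -> k = 2
assert t == 0.81
assert sum(s >= t for s in scores) == 2   # predicted positive rate = 2/8
```

Treating this threshold as a differentiable function of the scores is what lets the constraint be folded into an unconstrained surrogate loss.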
The shift towards an energy Grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the distribution infrastructure. Today it is a hierarchical one designed to deliver energy from large scale facilities to end-users. Tomorrow it will be a capillary infrastructure at the medium and Low Voltage levels that will support local energy trading among prosumers. In our previous work, we analyzed the Dutch Power Grid and made an initial analysis of the economic impact topological properties have on decentralized energy trading. In this paper, we go one step further and investigate how different network topologies and growth models facilitate the emergence of a decentralized market. In particular, we show how the connectivity plays an important role in improving the properties of reliability and path-cost reduction. From the economic point of view, we estimate how the topological evolutions facilitate local electricity distribution, taking into account the main cost ingredient required for increasing network connectivity, i.e., the price of cabling.
Power Grid Network Evolutions for Local Energy Trading
Motivated by the recent achievement of space-based Bose-Einstein condensates (BEC) with ultracold alkali-metal atoms under microgravity and by the proposal of bubble traps which confine atoms on a thin shell, we investigate the BEC thermodynamics on the surface of a sphere. We determine analytically the critical temperature and the condensate fraction of a noninteracting Bose gas. Then we consider the inclusion of a zero-range interatomic potential, extending the noninteracting results at zero and finite temperature. Both in the noninteracting and interacting cases the crucial role of the finite radius of the sphere is emphasized, showing that in the limit of infinite radius one recovers the familiar two-dimensional results. We also investigate the Berezinskii-Kosterlitz-Thouless transition driven by vortical configurations on the surface of the sphere, analyzing the interplay of condensation and superfluidity in this finite-size system.
Bose-Einstein Condensation on the Surface of a Sphere
In this paper, we study the zero sets of the confluent hypergeometric function $_{1}F_{1}(\alpha;\gamma;z):=\sum_{n=0}^{\infty}\frac{(\alpha)_{n}}{n!(\gamma)_{n}}z^{n}$, where $\alpha, \gamma, \gamma-\alpha\not\in \mathbb{Z}_{\leq 0}$, and show that if $\{z_n\}_{n=1}^{\infty}$ is the zero set of $_{1}F_{1}(\alpha;\gamma;z)$, with multiple zeros repeated and ordered by increasing modulus, then there exists a constant $M>0$ such that $|z_n|\geq M n$ for all $n\geq 1$.
On the zeros of Confluent Hypergeometric Functions
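The defining series converges for all $z$, so the linear growth of the zero moduli can be checked numerically. A minimal sketch; the truncation depth and the choice $\alpha=1,\gamma=2$, for which $_{1}F_{1}(1;2;z)=(e^{z}-1)/z$ with zeros at $z=2\pi i k$, $k\neq 0$, are our illustrative assumptions:

```python
import math

def hyp1f1(a, b, z, terms=80):
    """Truncated series 1F1(a;b;z) = sum_n (a)_n / ((b)_n n!) z^n."""
    s, term = 0j, 1 + 0j
    for n in range(terms):
        s += term
        # Ratio of consecutive terms: (a+n) z / ((b+n)(n+1)).
        term *= (a + n) * z / ((b + n) * (n + 1))
    return s

# Zeros of 1F1(1;2;z) are z = 2*pi*i*k, so the n-th modulus is >= pi*n:
assert all(abs(hyp1f1(1, 2, 2j * math.pi * k)) < 1e-8 for k in (1, 2))
zeros = sorted(abs(2j * math.pi * k) for k in (-1, 1, -2, 2, -3, 3))
assert all(abs(z) >= math.pi * (n + 1) for n, z in enumerate(zeros))
```

Here $M=\pi$ works for this parameter choice; the theorem asserts such an $M$ exists for every admissible $(\alpha,\gamma)$.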
Let $K, K'$ be ribbon knottings of $n$-spheres with $1$-handles in $S^{n+2}$, $n\geq 2$. We show that if the knot quandles of these knots are isomorphic, then the ribbon knottings are stably equivalent, in the sense of Nakanishi and Nakagawa, after taking a finite number of connected sums with trivially embedded copies of $S^{n-1}\times S^{1}$.
Ribbon n-knots with isomorphic quandles
We investigate the existence of spiral ordering in the planar spin orientation of skyrmions localised on a face centered rectangular lattice (FCRL). We use the non-linear sigma model (NLSM) to numerically calculate the minimum energy configurations of this lattice around the $\nu=1$ quantum Hall ground state. Our variational ansatz contains an angle $\theta$, characterising the FCRL, and an angle $q$, characterising the orientational order. As $\nu$ is increased towards one, there is a smooth transition from the triangular lattice (TL) characterised by $(\theta,q) = (120^o,120^o)$ to FCRLs with spiral orientational order. The novel feature we find is that these phases are characterised by $(\theta, q)$ values such that $\theta+q = 240^o$ (same as the TL phase). As $\nu$ increases further towards one, there is a sharp transition from the FCRLs to the square lattice (SL), characterised by $(\theta,q)=(90^o,180^o)$. Consequently, the parameter $\theta+q$ jumps sharply at the FCRL-SL transition and can serve as an order parameter to characterise it.
Spiral orientational order in quantum Hall skyrmion lattices
A (conjecturally complete) list of components of complements of discriminant varieties of parabolic singularities of smooth real functions is given. We also promote a combinatorial program that enumerates possible topological types of non-discriminant morsifications of isolated real function singularities and provides a strong invariant of components of complements of discriminant varieties.
Complements of discriminants of real parabolic function singularities
All hypersurface homogeneous locally rotationally symmetric spacetimes which admit conformal symmetries are determined and the symmetry vectors are given explicitly. It is shown that these spacetimes must be considered in two sets: one containing Ellis Class II and the other containing Ellis Class I and III LRS spacetimes. The determination of the conformal algebra in the first set is achieved by systematizing and completing results on the determination of CKVs in 2+2 decomposable spacetimes. In the second set new methods are developed. The results are applied to obtain the classification of the conformal algebra of all static LRS spacetimes in terms of geometrical variables. Furthermore all perfect fluid nontilted LRS spacetimes which admit proper conformal symmetries are determined and the physical properties of some of them are discussed.
Hypersurface homogeneous locally rotationally symmetric spacetimes admitting conformal symmetries
We develop algorithms to turn quotients of rings of integers into effective Euclidean rings by giving polynomial algorithms for all fundamental ring operations. In addition, we study normal forms for modules over such rings and their behavior under certain quotients. We illustrate the power of our ideas in a new modular normal form algorithm for modules over rings of integers, vastly outperforming classical algorithms.
Computing in quotients of rings of integers
We derive a lower bound for moments of random chaoses of order two with coefficients in an arbitrary Banach space F generated by independent symmetric random variables with logarithmically concave tails (which is probably two-sided). We also provide two upper bounds for moments of such chaoses when F = L_q. The first holds under an additional sub-Gaussianity assumption. The second does not require additional assumptions but is not optimal in general. Both upper bounds are sufficient for obtaining two-sided moment estimates for chaoses with values in L_q generated by Weibull random variables with shape parameter greater than or equal to 1.
Moments and tails of Lq-valued chaoses based on independent variables with log-concave tails
Social media corpora pose unique challenges and opportunities, including typically short document lengths and rich meta-data such as author characteristics and relationships. This creates great potential for systematic analysis of the enormous body of users, with implications for industrial strategies such as targeted marketing. Here we propose a novel and statistically principled method, clust-LDA, which incorporates authorship structure into topic modeling, thus accomplishing topical inference across documents on the basis of authorship and, simultaneously, the identification of groupings among authors. We develop an inference procedure for clust-LDA and demonstrate its performance on simulated data, showing that clust-LDA out-performs the "vanilla" LDA on the topic identification task when authors exhibit distinctive topical preferences. We also showcase the empirical performance of clust-LDA on a real-world social media dataset from Reddit.
Clust-LDA: Joint Model for Text Mining and Author Group Inference
We provide an analytic proof of a theorem of Krylov dealing with global $C^{1,1}$ estimates for solutions of degenerate complex Monge-Amp\`ere equations. As an application we show optimal regularity for various extremal functions with nonconstant boundary values.
An analytic proof of the Krylov estimates for the complex Monge-Ampere equation and applications
Observations of non-thermal emission from several supernova remnants suggest that magnetic fields close to the blastwave are much stronger than would be naively expected from simple shock compression of the field permeating the interstellar medium (ISM). We investigate in some detail a simple model based on turbulence generation by cosmic-ray pressure gradients. Previously this model was investigated using 2D MHD simulations. Motivated by the well-known qualitative differences between 2D and 3D turbulence, we further our investigations of this model using both 2D and 3D simulations to study the influence of the dimensionality of the simulations on the field amplification achieved. Further, since the model implies the formation of shocks which can, in principle, be efficiently cooled by collisional cooling we include such cooling in our simulations to ascertain whether it could increase the field amplification achieved. Finally, we examine the influence of different orientations of the magnetic field with respect to the normal of the blastwave. We find that dimensionality has a slight influence on the overall amplification achieved, but a significant impact on the morphology of the amplified field. Collisional cooling has surprisingly little impact, primarily due to the short time which any element of the ISM resides in the precursor region for supernova blastwaves. Even allowing for a wide range of orientations of the magnetic field, we find that the magnetic field can be expected to be amplified by, on average, at least an order of magnitude in the precursors of supernova blastwaves.
Cosmic-ray pressure driven magnetic field amplification: dimensional, radiative and field orientation effects
In this note we consider a certain class of Gaussian entire functions, characterized by some asymptotic properties of their covariance kernels, which we call admissible (as defined by Hayman). A notable example is the Gaussian Entire Function, whose zero set is well-known to be invariant with respect to the isometries of the complex plane. We explore the rigidity of the zero set of Gaussian Taylor series, a phenomenon discovered not long ago by Ghosh and Peres for the Gaussian Entire Function. In particular, we find that for a function of infinite order of growth, and having an admissible kernel, the zero set is "fully rigid". This means that if we know the location of the zeros in the complement of any given compact set, then the number and location of the zeros inside that set can be determined uniquely. As far as we are aware, this is the first explicit construction in a natural class of random point processes with full rigidity.
Rigidity for zero sets of Gaussian entire functions
The distributions of the initial main-sequence binary parameters are among the key ingredients in obtaining evolutionary predictions for compact binary (BH-BH / BH-NS / NS-NS) merger rates. Until now, such calculations were done under the assumption that initial binary parameter distributions were independent. Here, we implement empirically derived inter-correlated distributions of the initial binary parameters: primary mass (M1), mass ratio (q), orbital period (P), and eccentricity (e). Unexpectedly, the introduction of inter-correlated initial binary parameters leads to only a small decrease in the predicted merger rates, by a factor of 2 $-$ 3 relative to the previously used non-correlated initial distributions. The formation of compact object mergers in the isolated classical binary evolution favors initial binaries with stars of comparable masses (q = 0.5 $-$ 1) at intermediate orbital periods (log P (days) = 2 $-$ 4). The new distributions slightly shift the mass ratios towards smaller values with respect to the previously used flat q distribution, which is the dominant effect decreasing the rates. The new orbital periods only negligibly increase the number of progenitors. Additionally, we discuss the uncertainty of merger rate predictions associated with possible variations of the massive-star initial mass function (IMF). We argue that evolutionary calculations should be normalized to a star formation rate (SFR) that is obtained from the observed amount of UV light at wavelength 1500{\AA} (SFR indicator). In this case, contrary to recent reports, the uncertainty of the IMF does not affect the rates by more than a factor of 2. Any change to the IMF slope for massive stars requires a change of SFR in a way that counteracts the impact of IMF variations on the merger rates. In contrast, we suggest that the uncertainty in cosmic SFR at low metallicity can be a significant factor at play.
Impact of inter-correlated initial binary parameters on double black hole and neutron star mergers
Over the course of several decades, organic liquid scintillators have formed the basis for successful neutrino detectors. Gadolinium-loaded liquid scintillators provide efficient background suppression for electron antineutrino detection at nuclear reactor plants. In the Double Chooz reactor antineutrino experiment, a newly developed beta-diketonate gadolinium-loaded scintillator is utilized for the first time. Its large scale production and characterization are described. A new, light yield matched metal-free companion scintillator is presented. Both organic liquids comprise the target and "Gamma Catcher" of the Double Chooz detectors.
Large scale Gd-beta-diketonate based organic liquid scintillator production for antineutrino detection
This chapter reviews standard parameter-estimation techniques and presents a novel gradient-, ensemble-, and adjoint-free data-driven parameter estimation technique in the DDDAS framework. This technique, called retrospective cost parameter estimation (RCPE), is motivated by large-scale complex estimation models characterized by high-dimensional nonlinear dynamics, nonlinear parameterizations, and representational models. RCPE is illustrated by estimating unknown parameters in three examples. In the first example, salient features of RCPE are investigated by considering a parameter estimation problem in a low-order nonlinear system. In the second example, RCPE is used to estimate the convective coefficient and the viscosity in the generalized Burgers equation by using a scalar measurement. In the final example, RCPE is used to estimate thermal conductivity coefficients that relate temporal temperature variation with the vertical gradient of the temperature in the atmosphere.
Retrospective Cost Parameter Estimation with Application to Space Weather Modeling
We investigate how nontrivial topology affects the entanglement dynamics between a detector and a quantum field and between two detectors mediated by a quantum field. Nontrivial topology refers both to that of the base space and to that of the bundle. Using a derivative-coupling Unruh-DeWitt-like detector model interacting with a quantum scalar field in an Einstein cylinder S1 (space) x R1 (time), we see beating behaviors in the dynamics of the detector-field entanglement and the detector-detector entanglement, which are distinct from the results in non-compact (1+1) dimensional Minkowski space. The beat patterns of entanglement dynamics in a normal and a twisted field with the same parameter values are different because of the difference in the spectrum of the field modes. In terms of the kinetic momentum of the detectors, we find that the contribution by the zero mode in a normal field to entanglement dynamics has no qualitative difference from those by the nonzero modes.
Entanglement Dynamics of Detectors in an Einstein Cylinder
Sparse-view computed tomography (CT) -- using a small number of projections for tomographic reconstruction -- enables much lower radiation dose to patients and accelerated data acquisition. The reconstructed images, however, suffer from strong artifacts, greatly limiting their diagnostic value. Current trends for sparse-view CT turn to the raw data for better information recovery. The resultant dual-domain methods, nonetheless, suffer from secondary artifacts, especially in ultra-sparse view scenarios, and their generalization to other scanners/protocols is greatly limited. A crucial question arises: have the image post-processing methods reached the limit? Our answer is not yet. In this paper, we stick to image post-processing methods due to great flexibility and propose global representation (GloRe) distillation framework for sparse-view CT, termed GloReDi. First, we propose to learn GloRe with Fourier convolution, so each element in GloRe has an image-wide receptive field. Second, unlike methods that only use the full-view images for supervision, we propose to distill GloRe from intermediate-view reconstructed images that are readily available but not explored in previous literature. The success of GloRe distillation is attributed to two key components: representation directional distillation to align the GloRe directions, and band-pass-specific contrastive distillation to gain clinically important details. Extensive experiments demonstrate the superiority of the proposed GloReDi over the state-of-the-art methods, including dual-domain ones. The source code is available at https://github.com/longzilicart/GloReDi.
Learning to Distill Global Representation for Sparse-View CT
We present calculations of the two-pion-exchange contribution to proton-proton scattering at 90 degrees using form factors appropriate for representing the distribution of the constituent partons of the nucleon. Talk given at MENU2001, George Washington University, July 26-31, 2001
Two-Pion Exchange in proton-proton Scattering
Matrix Product States (MPS) are a particular type of one dimensional tensor network states, that have been applied to the study of numerous quantum many body problems. One of their key features is the possibility to describe and encode symmetries on the level of a single building block (tensor), and hence they provide a natural playground for the study of symmetric systems. In particular, recent works have proposed to use MPS (and higher dimensional tensor networks) for the study of systems with local symmetry that appear in the context of gauge theories. In this work we classify MPS which exhibit local invariance under arbitrary gauge groups. We study the respective tensors and their structure, revealing constructions that follow known gauging procedures, as well as other possible types of gauge invariant states.
Classification of Matrix Product States with a Local (Gauge) Symmetry
We consider state reconstruction from the measurement statistics of phase space observables generated by photon number states. The results are obtained by inverting certain infinite matrices. In particular, we obtain reconstruction formulas, each of which involves only a single phase space observable.
Density matrix reconstruction from displaced photon number distributions
We incorporate a time-independent gravitational field into the BGK scheme for numerical hydrodynamics. In the BGK scheme the gas evolves via an approximation to the collisional Boltzmann equation, namely the Bhatnagar-Gross-Krook (BGK) equation. Time-dependent hydrodynamical fluxes are computed from local solutions of the BGK equation. By accounting for particle collisions, the fundamental mechanism for generating dissipation in gas flow, a scheme based on the BGK equation gives solutions to the Navier-Stokes equations: the fluxes carry both advective and dissipative terms. We perform numerical experiments in both 1D Cartesian geometries and axisymmetric cylindrical coordinates.
Time-Independent Gravitational Fields in the BGK Scheme for Hydrodynamics
We use an exact solution of the elastic membrane shape equation, representing the curvature, as a quantum potential in the two-dimensional Schrodinger equation for a (quasi-)particle on the surface of the membrane. Surface curvature in the quasi-one-dimensional case is related to an unexpected static formation: on the one hand, the elastic energy has a maximum where the surface curvature has a maximum; on the other hand, the expectation value of finding the (quasi-)particle is concentrated where the elastic energy is concentrated, namely where the surface curvature has a maximum. This represents a particular form of a conformon.
Quantum-elastic bump on a surface
We prove the security of a high-capacity quantum key distribution protocol over noisy channels. By using an entanglement purification protocol, we construct a modified version of the protocol that separates it into two consecutive stages. We prove the security of each stage, and hence the security of the whole protocol.
Proof of Security of a High-Capacity Quantum Key Distribution Protocol
UV frequency metrology has been performed on the a3Pi - X1Sigma+ (0,0) band of various isotopologues of CO using a frequency-quadrupled injection-seeded narrow-band pulsed Titanium:Sapphire laser referenced to a frequency comb laser. The band origin is determined with an accuracy of 5 MHz (delta \nu / \nu = 3 * 10^-9), while the energy differences between rotational levels in the a3Pi state are determined with an accuracy of 500 kHz. From these measurements, in combination with previously published radiofrequency and microwave data, a new set of molecular constants is obtained that describes the level structure of the a3Pi state of 12C16O and 13C16O with improved accuracy. Transitions in the different isotopologues are well reproduced by scaling the molecular constants of 12C16O via the common mass-scaling rules. Only the value of the band origin could not be scaled, indicative of a breakdown of the Born-Oppenheimer approximation. Our analysis confirms the extreme sensitivity of two-photon microwave transitions between nearly-degenerate rotational levels of different Omega-manifolds for probing a possible variation of the proton-to-electron mass ratio, \mu=m_p/m_e, on a laboratory time scale.
UV frequency metrology on CO (a3Pi); isotope effects and sensitivity to a variation of the proton-to-electron mass ratio
Image recognition models that work in challenging environments (e.g., extremely dark, blurry, or high dynamic range conditions) are highly desirable. However, creating training datasets for such environments is expensive and difficult due to the challenges of data collection and annotation. It would be desirable to obtain a robust model without the need for hard-to-obtain datasets. One simple approach is to apply data augmentation such as color jitter and blur to standard RGB (sRGB) images in simple scenes. Unfortunately, this approach struggles to yield realistic images in terms of pixel intensity and noise distribution, as it does not consider the non-linearity of Image Signal Processors (ISPs) and the noise characteristics of image sensors. Instead, we propose a noise-accounted RAW image augmentation method. In essence, color jitter and blur augmentation are applied to a RAW image before applying the non-linear ISP, resulting in realistic intensity. Furthermore, we introduce a noise amount alignment method that calibrates the domain gap in the noise property caused by the augmentation. We show that our proposed noise-accounted RAW augmentation method doubles the image recognition accuracy in challenging environments only with simple training data.
Rawgment: Noise-Accounted RAW Augmentation Enables Recognition in a Wide Variety of Environments
This paper studies the asymptotic power of tests of sphericity against perturbations in a single unknown direction as both the dimensionality of the data and the number of observations go to infinity. We establish the convergence, under the null hypothesis and contiguous alternatives, of the log ratio of the joint densities of the sample covariance eigenvalues to a Gaussian process indexed by the norm of the perturbation. When the perturbation norm is larger than the phase transition threshold studied in Baik, Ben Arous and Peche [Ann. Probab. 33 (2005) 1643-1697] the limiting process is degenerate, and discrimination between the null and the alternative is asymptotically certain. When the norm is below the threshold, the limiting process is nondegenerate, and the joint eigenvalue densities under the null and alternative hypotheses are mutually contiguous. Using the asymptotic theory of statistical experiments, we obtain asymptotic power envelopes and derive the asymptotic power for various sphericity tests in the contiguity region. In particular, we show that the asymptotic power of the Tracy-Widom-type tests is trivial (i.e., equals the asymptotic size), whereas that of the eigenvalue-based likelihood ratio test is strictly larger than the size, and close to the power envelope.
Asymptotic power of sphericity tests for high-dimensional data
Zero mean curvature surfaces in the simply isotropic 3-space $\mathbb{I}^3$ naturally appear as intermediate geometry between geometry of minimal surfaces in $\mathbb{E}^3$ and that of maximal surfaces in $\mathbb{L}^3$. In this paper, we investigate reflection principles for zero mean curvature surfaces in $\mathbb{I}^3$ as with the above surfaces in $\mathbb{E}^3$ and $\mathbb{L}^3$. In particular, we show a reflection principle for isotropic line segments on such zero mean curvature surfaces in $\mathbb{I}^3$, along which the induced metrics become singular.
Reflection principles for zero mean curvature surfaces in the simply isotropic 3-space
A remarkable number of different numerical algorithms can be understood and analyzed using the concepts of symmetric spaces and Lie triple systems, which are well known in differential geometry from the study of spaces of constant curvature and their tangents. This theory can be used to unify a range of different topics, such as polar-type matrix decompositions, splitting methods for computation of the matrix exponential, composition of self-adjoint numerical integrators and dynamical systems with symmetries and reversing symmetries. The thread of this paper is the following: involutive automorphisms on groups induce a factorization at a group level, and a splitting at the algebra level. In this paper we will give an introduction to the mathematical theory behind these constructions, and review recent results. Furthermore, we present a new Yoshida-like technique, for self-adjoint numerical schemes, that allows one to increase the order of preservation of symmetries by two units. Since all the time-steps are positive, the technique is particularly suited to stiff problems, where a negative time-step can cause instabilities.
Symmetric spaces and Lie triple systems in numerical analysis of differential equations
We review our study of a supersymmetric left-right model (SLRM), in which $R$-parity is spontaneously broken. A phenomenologically novel feature of the model is the occurrence of doubly charged particles in the Higgs sector, which may be light enough to be seen at the next linear collider. The detection of doubly charged higgsinos at the next linear collider is discussed.
Supersymmetric Left-Right Model and its Phenomenological Implications
Recent work of Acharya et al. (NeurIPS 2019) showed how to estimate the entropy of a distribution $\mathcal D$ over an alphabet of size $k$ up to $\pm\epsilon$ additive error by streaming over $(k/\epsilon^3) \cdot \text{polylog}(1/\epsilon)$ i.i.d. samples and using only $O(1)$ words of memory. In this work, we give a new constant memory scheme that reduces the sample complexity to $(k/\epsilon^2)\cdot \text{polylog}(1/\epsilon)$. We conjecture that this is optimal up to $\text{polylog}(1/\epsilon)$ factors.
Estimation of Entropy in Constant Space with Improved Sample Complexity
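To make the estimation target above concrete, here is a naive plug-in (empirical) entropy estimator. This is not the constant-memory streaming scheme of the paper: it stores a count for every observed symbol and therefore uses $O(k)$ words of memory, but it illustrates what is being estimated from i.i.d. samples.

```python
import math
import random
from collections import Counter

def plugin_entropy(samples):
    """Naive plug-in entropy estimate (in nats) from i.i.d. samples.

    Stores per-symbol counts, i.e. O(k) memory -- unlike the O(1)-memory
    streaming schemes discussed above; shown only for illustration.
    """
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

random.seed(0)
k = 8
samples = [random.randrange(k) for _ in range(200_000)]
est = plugin_entropy(samples)
true = math.log(k)  # entropy of the uniform distribution over k symbols
assert abs(est - true) < 0.01
```

With 200,000 samples from a uniform distribution over 8 symbols, the plug-in estimate lands within ±0.01 nats of log 8; the streaming results above concern how few samples and how little memory suffice for such accuracy.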
The growing penetration of inverter-based resources and associated controls necessitates system-wide electromagnetic transient (EMT) analyses. EMT tools and methods today were not designed for the scale of these analyses. In light of this emerging need, there is a great deal of interest in developing new techniques for fast and accurate EMT simulations of large power grids, the foundations of which will be built on current tools and methods. However, we find that educational texts covering the fundamentals and inner workings of current EMT tools are limited. As such, there is a lack of introductory material for students and professionals interested in researching the field. To that end, in this tutorial, we introduce the principles of EMT analyses from the circuit-theoretic viewpoint, mimicking how time-domain analyses are performed in circuit simulation tools like SPICE and Cadence. We perform EMT simulations for two examples, one linear and one nonlinear, including an induction motor (IM), from first principles. By the document's end, we anticipate that readers will have a \textit{basic} understanding of how power grid EMT tools work.
Tutorial: Circuit-based Electromagnetic Transient Simulation
We make a brief review of (optical) Holonomic Quantum Computer (or Computation) proposed by Zanardi and Rasetti (quant-ph/9904011) and Pachos and Chountasis (quant-ph/9912093), and give a mathematical reinforcement to their works.
Mathematical Foundations of Holonomic Quantum Computer
Voice activity detection (VAD) improves the performance of speaker verification (SV) by preserving speech segments and attenuating the effects of non-speech. However, this scheme is not ideal: (1) it fails in noisy environments or multi-speaker conversations; (2) it is trained on inaccurate labels that are not tailored to SV. To address this, we propose a speaker verification-based voice activity detection (SVVAD) framework that can adapt the speech features according to which parts are most informative for SV. To achieve this, we introduce a label-free training method with triplet-like losses that completely avoids the performance degradation of SV due to incorrect labeling. Extensive experiments show that SVVAD significantly outperforms the baseline in terms of equal error rate (EER) under conditions where other speakers are mixed at different ratios. Moreover, the decision boundaries reveal the importance of different parts of speech, and these are largely consistent with human judgments.
SVVAD: Personal Voice Activity Detection for Speaker Verification
In recent years, various means of efficiently detecting changepoints in the univariate setting have been proposed, with one popular approach involving minimising a penalised cost function using dynamic programming. In some situations, these algorithms can have an expected computational cost that is linear in the number of data points; however, the worst case cost remains quadratic. We introduce two means of improving the computational performance of these methods, both based on parallelising the dynamic programming approach. We establish that parallelisation can give substantial computational improvements: in some situations the computational cost decreases roughly quadratically in the number of cores used. These parallel implementations are no longer guaranteed to find the true minimum of the penalised cost; however, we show that they retain the same asymptotic guarantees in terms of their accuracy in estimating the number and location of the changes.
Parallelisation of a Common Changepoint Detection Method
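The penalised-cost dynamic programme underlying these methods can be stated compactly: $F(t) = \min_{0 \le s < t} F(s) + \mathcal{C}(y_{s+1:t}) + \beta$. The sketch below implements the serial, quadratic-worst-case version (optimal partitioning) with a squared-error segment cost; the paper's contribution is parallelising this recursion, which is not attempted here.

```python
import numpy as np

def optimal_partitioning(y, beta):
    """Minimise total segment cost plus a penalty beta per changepoint.

    Segment cost = squared error about the segment mean, computed in O(1)
    via cumulative sums. This is the serial O(n^2) dynamic programme that
    pruning (e.g. PELT) and the parallel schemes above accelerate.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    cs = np.concatenate(([0.0], np.cumsum(y)))
    cs2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def seg_cost(s, t):  # cost of y[s:t] (0-indexed, half-open)
        total, total2 = cs[t] - cs[s], cs2[t] - cs2[s]
        return total2 - total * total / (t - s)

    F = np.full(n + 1, np.inf)
    F[0] = -beta                       # so a single segment pays no penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        cands = [F[s] + seg_cost(s, t) + beta for s in range(t)]
        s_star = int(np.argmin(cands))
        F[t], last[t] = cands[s_star], s_star
    cps, t = [], n                     # backtrack changepoint locations
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

y = np.concatenate([np.zeros(50), 5 * np.ones(50)])
assert optimal_partitioning(y, beta=10.0) == [50]
```

On a noiseless two-segment series the programme recovers the single change at index 50; the inner minimisation over `s` is the loop whose work the parallel implementations distribute across cores.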
In this work, we develop an optimization framework for problems whose solutions are well-approximated by Hierarchical Tucker (HT) tensors, an efficient structured tensor format based on recursive subspace factorizations. By exploiting the smooth manifold structure of these tensors, we construct standard optimization algorithms such as Steepest Descent and Conjugate Gradient for completing tensors from missing entries. Our algorithmic framework is fast and scalable to large problem sizes as we do not require SVDs on the ambient tensor space, as required by other methods. Moreover, we exploit the structure of the Gramian matrices associated with the HT format to regularize our problem, reducing overfitting for high subsampling ratios. We also find that the organization of the tensor can have a major impact on completion from realistic seismic acquisition geometries. These samplings are far from idealized randomized samplings that are usually considered in the literature but are realizable in practical scenarios. Using these algorithms, we successfully interpolate large-scale seismic data sets and demonstrate the competitive computational scaling of our algorithms as the problem sizes grow.
Optimization on the Hierarchical Tucker manifold - applications to tensor completion
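The core idea, optimising directly over a structured low-rank factorisation so that no SVD on the ambient space is ever needed, can be illustrated on the much simpler matrix analogue. The sketch below (not the HT tensor format of the paper) completes a rank-$r$ matrix from 50% observed entries by alternating least squares on the factors; all names are illustrative.

```python
import numpy as np

# Simplified matrix analogue of structured low-rank completion: fit a
# rank-r factorization L @ R to the observed entries by alternating
# least squares -- no SVD of the full (ambient) matrix is ever formed.
rng = np.random.default_rng(0)
m, n, r = 30, 30, 2
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5          # ~50% of entries observed

L = rng.standard_normal((m, r))
R = rng.standard_normal((r, n))
I = np.eye(r)
for _ in range(50):                      # alternating least-squares sweeps
    for i in range(m):                   # update row i of L, R fixed
        obs = mask[i]
        Rc = R[:, obs]
        L[i] = np.linalg.solve(Rc @ Rc.T + 1e-9 * I, Rc @ X_true[i, obs])
    for j in range(n):                   # update column j of R, L fixed
        obs = mask[:, j]
        Lr = L[obs]
        R[:, j] = np.linalg.solve(Lr.T @ Lr + 1e-9 * I, Lr.T @ X_true[obs, j])

rel_err = np.linalg.norm(L @ R - X_true) / np.linalg.norm(X_true)
assert rel_err < 1e-3                    # exact rank, noiseless data
```

The per-update cost involves only $r \times r$ systems, mirroring how the HT-manifold algorithms in the paper scale with the (small) ranks rather than with the ambient tensor dimensions.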
Let $K$ be a convex body in $\mathbb{R}^n$ with Santal\'o point at 0. We show that if $K$ has a point on the boundary with positive generalized Gau{\ss} curvature, then the volume product $|K| |K^\circ|$ is not minimal. This means that a body with minimal volume product has Gau{\ss} curvature equal to 0 almost everywhere and thus suggests strongly that a minimal body is a polytope.
A note on Mahler's conjecture
We discuss the tightly bound (hydrino) solution of the Klein-Gordon equation for the Coulomb potential in 3 dimensions. We show that a similarly tightly bound state occurs for the Dirac equation in 2 dimensions. These states are unphysical since they disappear if the nuclear charge distribution is taken to have an arbitrarily small but non-zero radius.
The hydrino and other unlikely states
We calculate angle-resolved above-threshold ionization spectra for diatomic molecules in linearly polarized laser fields, employing the strong-field approximation. The interference structure resulting from the individual contributions of the different scattering scenarios is discussed in detail, with respect to the dependence on the internuclear distance and molecular orientation. We show that, in general, the contributions from the processes in which the electron is freed at one center and rescatters off the other obscure the interference maxima and minima obtained from single-center processes. However, around the boundary of the energy regions for which rescattering has a classical counterpart, such processes play a negligible role and very clear interference patterns are observed. In such energy regions, one is able to infer the internuclear distance from the energy difference between adjacent interference minima.
Interference effects in above-threshold ionization from diatomic molecules: determining the internuclear separation
Using a sample of 68 million KL -> 3pi0 decays collected in 1996-1999 by the KTeV (E832) experiment at Fermilab, we present a detailed study of the KL -> 3pi0 Dalitz plot density. We report the first observation of interference from KL->pi+pi-pi0 decays in which pi+pi- rescatters to 2pi0 in a final-state interaction. This rescattering effect is described by the Cabibbo-Isidori model, and it depends on the difference in pion scattering lengths between the isospin I=0 and I=2 states, a0-a2. Using the Cabibbo-Isidori model, we present the first measurement of the KL-> 3pi0 quadratic slope parameter that accounts for the rescattering effect.
Detailed Study of the KL -> 3pi0 Dalitz Plot
In this paper, we discuss asymptotic relations for the approximation of $\left\vert x\right\vert ^{\alpha},\alpha>0$ in $L_{\infty}\left[ -1,1\right] $ by Lagrange interpolation polynomials based on the zeros of the Chebyshev polynomials of the first kind. The limiting process reveals an entire function of exponential type for which we can present an explicit formula. As a consequence, we further deduce an asymptotic relation for the approximation error as $\alpha\rightarrow\infty$. Finally, we connect our results with recent work of Ganzburg [5] and Lubinsky [10] by presenting numerical results that indicate a possible constructive route to a representation for the Bernstein constants.
Extremal Polynomials and Entire Functions of Exponential Type
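The interpolation scheme above is easy to reproduce numerically. The sketch below interpolates $|x|^{\alpha}$ at the zeros of the Chebyshev polynomial $T_n$ using the stable barycentric formula and samples the sup-norm error on a fine grid; it only illustrates the setup, not the paper's limiting analysis.

```python
import numpy as np

def cheb_interp_error(alpha, n):
    """Sampled sup-norm error of Lagrange interpolation of |x|^alpha
    at the n zeros of the Chebyshev polynomial T_n (first kind),
    evaluated via the barycentric formula."""
    k = np.arange(n)
    theta = (2 * k + 1) * np.pi / (2 * n)
    nodes = np.cos(theta)                 # zeros of T_n
    w = (-1.0) ** k * np.sin(theta)       # barycentric weights, 1st kind
    f = np.abs(nodes) ** alpha
    x = np.linspace(-1, 1, 4001)          # grid misses the nodes for even n
    diff = x[:, None] - nodes[None, :]
    num = (w * f / diff).sum(axis=1)
    den = (w / diff).sum(axis=1)
    return np.max(np.abs(num / den - np.abs(x) ** alpha))

# the error decreases as the number of Chebyshev nodes grows
e20, e40 = cheb_interp_error(1.0, 20), cheb_interp_error(1.0, 40)
assert e40 < e20 < 0.1
```

Tabulating `n * cheb_interp_error(alpha, n)` over increasing even `n` is one way to observe numerically the kind of scaled limits that lead to the Bernstein constants discussed above.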
A discrete-time Quantum Walk (QW) is essentially an operator driving the evolution of a single particle on the lattice, through local unitaries. Some QWs admit a continuum limit, leading to well-known physics partial differential equations, such as the Dirac equation. We show that these simulation results need not rely on the grid: the Dirac equation in $(2+1)$--dimensions can also be simulated, through local unitaries, on the honeycomb or the triangular lattice. The former is of interest in the study of graphene-like materials. The latter, we argue, opens the door for a generalization of the Dirac equation to arbitrary discrete surfaces.
The Dirac equation as a quantum walk over the honeycomb and triangular lattices
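The "local unitaries" structure of a discrete-time QW, a coin unitary acting on the internal space followed by a spin-dependent shift, is easiest to see in one dimension. The sketch below is the standard 1D Hadamard walk (the paper's walks live on 2D honeycomb and triangular lattices); it checks unitarity and the ballistic spreading that distinguishes QWs from classical random walks.

```python
import numpy as np

# One step of the 1D discrete-time quantum walk: a local coin unitary on
# the 2-dimensional internal space, then a spin-dependent shift.
N = 201                                        # lattice sites
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

psi = np.zeros((N, 2), dtype=complex)
psi[N // 2, 0] = 1.0                           # walker localized, spin "up"

steps = 80
for _ in range(steps):
    psi = psi @ H.T                            # coin applied at every site
    up = np.roll(psi[:, 0], 1)                 # spin-up moves right
    down = np.roll(psi[:, 1], -1)              # spin-down moves left
    psi = np.stack([up, down], axis=1)

prob = (np.abs(psi) ** 2).sum(axis=1)
assert abs(prob.sum() - 1.0) < 1e-9            # local unitaries preserve norm
x = np.arange(N) - N // 2
std = np.sqrt((prob * x ** 2).sum())
assert std > 0.3 * steps                       # ballistic, not diffusive (~sqrt(t))
```

The position spread grows linearly in the step count (here well above the diffusive $\sqrt{80}\approx 9$), which is the discrete precursor of the relativistic transport recovered in the continuum (Dirac) limit.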
The conversational recommender systems (CRSs) have received extensive attention in recent years. However, most of the existing works focus on various deep learning models, which are largely limited by the requirement of large-scale human-annotated datasets. Such methods are not able to deal with the cold-start scenarios in industrial products. To alleviate the problem, we propose FORCE, a Framework Of Rule-based Conversational Recommender system that helps developers to quickly build CRS bots by simple configuration. We conduct experiments on two datasets in different languages and domains to verify its effectiveness and usability.
FORCE: A Framework of Rule-Based Conversational Recommender System
We derive the Hessian geometric structure of nonequilibrium chemical reaction networks (CRN) on the flux and force spaces induced by the Legendre duality of convex dissipation functions and characterize their dynamics as a generalized flow. With this structure, we can extend theories of nonequilibrium systems with quadratic dissipation functions to more general ones with nonquadratic ones, which are pivotal for studying chemical reaction networks. By applying generalized notions of orthogonality in Hessian geometry to chemical reaction networks, we obtain two generalized decompositions of the entropy production rate, each of which captures gradient-flow and minimum-dissipation aspects in nonequilibrium dynamics.
Geometry of Nonequilibrium Chemical Reaction Networks and Generalized Entropy Production Decompositions
Locally repairable codes (LRCs) are error correcting codes used in distributed data storage. Besides a global level, they enable errors to be corrected locally, reducing the need for communication between storage nodes. There is a close connection between almost affine LRCs and matroid theory which can be utilized to construct good LRCs and derive bounds on their performance. A generalized Singleton bound for linear LRCs with parameters $(n,k,d,r,\delta)$ was given in [N. Prakash et al., "Optimal Linear Codes with a Local-Error-Correction Property", IEEE Int. Symp. Inf. Theory]. In this paper, an LRC achieving this bound is called perfect. Results on the existence and nonexistence of linear perfect $(n,k,d,r,\delta)$-LRCs were given in [W. Song et al., "Optimal locally repairable codes", IEEE J. Sel. Areas Comm.]. Using matroid theory, these existence and nonexistence results were later strengthened in [T. Westerb\"ack et al., "On the Combinatorics of Locally Repairable Codes", Arxiv: 1501.00153], which also provided a general lower bound on the maximal achievable minimum distance $d_{\rm{max}}(n,k,r,\delta)$ that a linear LRC with parameters $(n,k,r,\delta)$ can have. This article expands the class of parameters $(n,k,d,r,\delta)$ for which there exist perfect linear LRCs and improves the lower bound for $d_{\rm{max}}(n,k,r,\delta)$. Further, this bound is proved to be optimal for the class of matroids that is used to derive the existence bounds of linear LRCs.
Bounds on the Maximal Minimum Distance of Linear Locally Repairable Codes
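For orientation, the generalized Singleton bound of Prakash et al. referenced above is usually stated as $d \le n-k+1-\left(\lceil k/r \rceil - 1\right)(\delta-1)$; the small helper below evaluates it (the formula is quoted from the cited literature, not derived here).

```python
from math import ceil

def lrc_singleton_bound(n, k, r, delta):
    """Generalized Singleton bound on the minimum distance d of an
    (n, k, d, r, delta) locally repairable code, as given by Prakash
    et al.; an LRC meeting it with equality is called perfect above:
        d <= n - k + 1 - (ceil(k / r) - 1) * (delta - 1)."""
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

# delta = 2, r = k recovers the classical Singleton bound d <= n - k + 1
assert lrc_singleton_bound(10, 5, 5, 2) == 6
# stricter locality (smaller r) lowers the achievable minimum distance
assert lrc_singleton_bound(10, 5, 2, 2) == 4
```

The article's question is then for which parameter tuples this upper bound is actually attained by a linear code, i.e. for which $(n,k,d,r,\delta)$ perfect LRCs exist.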
Using the random matrix approach, we calculate analytically the average shot-noise power in a chaotic cavity at an arbitrary number of propagating modes (channels) in each of the two attached leads. A simple relationship between this quantity, the average conductance and the conductance variance is found. The dependence of the Fano factor on the channel number is considered in detail.
Shot noise in chaotic cavities with an arbitrary number of open channels
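For reference, the leading-order (large channel number) random-matrix result for the Fano factor of a chaotic cavity is $F = N_1 N_2/(N_1+N_2)^2$, giving the familiar $F=1/4$ for symmetric leads; the paper above derives the exact finite-$N$ expressions, while this semiclassical limit is quoted only for orientation.

```python
# Semiclassical (large-N) Fano factor of a chaotic cavity with N1 and N2
# open channels in the two leads; the exact finite-N result of the paper
# carries corrections to this leading-order formula.
def fano_factor(n1, n2):
    return n1 * n2 / (n1 + n2) ** 2

assert fano_factor(10, 10) == 0.25                 # symmetric leads: F = 1/4
assert fano_factor(1, 100) < fano_factor(10, 10)   # asymmetry suppresses noise
```

The suppression below the Poisson value $F=1$ reflects the partial opening of the cavity's channels, which is precisely the channel-number dependence the paper analyses in detail.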
In this work we construct a new class of maximal partial spreads in $PG(4,q)$, which we call $q$-added maximal partial spreads. We obtain them by depriving a spread of a hyperplane of some lines and adding $q+1$ lines not in the hyperplane for each removed line. We do this theoretically for every value of $q$, and by computer search for $q$ an odd prime with $q \leq 13$. More precisely, we prove that for every $q$ there are $q$-added maximal partial spreads of every size from $q^2+q+1$ to $q^2+(q-1)q+1$, while by computer search we obtain larger cardinalities.
A new class of maximal partial spreads in PG(4,q)
A different technique is used to study the radiative decay of a metastable state in multiply ionized atoms. With use of a unitary Penning trap to selectively capture Kr$^{17+}$ ions from an ion source at NIST, the decay of the 3d $^2D_{5/2}$ metastable state is measured in isolation at low energy, without any active cooling. The highly ionized atoms are trapped in the fine structure of the electronic ground configuration with an energy spread of 4(1) eV, which is narrower than within the ion source by a factor of about 100. By observing the visible 637 nm photon emission of the forbidden transition from the 3d $^2D_{5/2}$ level to the ground state, we measured its radiative lifetime to be $\tau=$ 24.48 ms +/- 0.28(stat.) ms +/- 0.14(syst.) ms. Remarkably, various theoretical predictions for this relativistic Rydberg atom are in agreement with our measurement at the 1% level.
Measurement of the Kr XVIII 3d $^2D_{5/2}$ lifetime at low energy in a unitary Penning trap
We show among other things how knowing Schauder or Sobolev-space estimates for the one-dimensional heat equation allows one to derive their multidimensional analogs for equations with coefficients depending only on time variable with the {\em same\/} constants as in the case of the one-dimensional heat equation. The method is based on using the Poisson stochastic process. It looks like no other method is available at this time and it is a very challenging problem to find a purely analytic approach to proving such results.
Poisson stochastic process and basic Schauder and Sobolev estimates in the theory of parabolic equations
Polarized $\Lambda_b \to \Lambda \gamma$ decays at the Z pole are shown to be well suited for probing a large variety of New Physics effects. A new observable is proposed, the angular asymmetry between the $\Lambda_b$ spin and photon momentum, which is sensitive to the relative strengths of the opposite chirality and Standard Model chirality $b \to s \gamma$ dipole operators. Combination with the $\Lambda $ decay polarization asymmetry and comparison with the $\Lambda_b$ polarization extracted from semileptonic decays allows important tests of the $V-A$ structure of the Standard Model. Modifications of the rates and angular asymmetries which arise at next-to-leading order are discussed. Measurements for $\Lambda_b \to \Lambda \gamma$ and the CP conjugate mode, with branching ratios of a few times $10^{-5}$, are shown to be sensitive to non-standard sources of CP violation in the $\Lambda_b \to \Lambda \gamma$ matrix element. Form factor relations for heavy-to-light baryon decays are derived in the large energy limit, which are of general interest.
Probing for New Physics in Polarized $\Lambda_b$ decays at the Z
Creativity Support Tools (CST) aim to enhance human creativity, but the deeply personal and subjective nature of creativity makes the design of universal support tools challenging. Individuals develop personal approaches to creativity, particularly in the context of commercial design where signature styles and techniques are valuable commodities. Artificial Intelligence (AI) and Machine Learning (ML) techniques could provide a means of creating 'intelligent' CST which learn and adapt to personal styles of creativity. Identifying what kind of role such tools could play in the design process requires a better understanding of designers' attitudes towards working with AI, and their willingness to include it in their personal creative process. This paper details the results of a survey of professional designers which indicates a positive and pragmatic attitude towards collaborating with AI tools, and a particular opportunity for incorporating them in the research stages of a design project.
Guru, Partner, or Pencil Sharpener? Understanding Designers' Attitudes Towards Intelligent Creativity Support Tools
Real-time dispatch practices for operating the electric grid in an economic and reliable manner are evolving to accommodate higher levels of renewable energy generation. In particular, stochastic optimization is receiving increased attention as a technique for handling the inherent uncertainty in wind and solar generation. The typical two-stage stochastic optimization formulation relies on a sample average approximation with scenarios representing errors in forecasting renewable energy ramp events. Standard Monte Carlo sampling approaches can result in prohibitively high-dimensional systems for optimization, as well as a poor representation of extreme events that challenge grid reliability. We propose two alternative scenario creation strategies, importance sampling and Bayesian quadrature, that can reduce the estimator's variance. Their performance is assessed on a week's worth of 5 minute stochastic economic dispatch decisions for realistic wind and electrical system data. Both strategies yield more economic solutions and improved reliability compared to Monte Carlo sampling, with Bayesian quadrature being less computationally intensive than importance sampling and more economic when considering at least 20 scenarios.
Advanced Scenario Creation Strategies for Stochastic Economic Dispatch with Renewables
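The variance-reduction mechanism behind importance sampling can be shown on a toy stand-in for a dispatch cost dominated by rare ramp events: shift the sampling distribution toward the tail and reweight by the likelihood ratio. All quantities below (the threshold cost, the Gaussian forecast-error model, the shift `mu`) are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cost: nonzero only when the forecast error e exceeds a ramp threshold.
def cost(e, thresh=3.0):
    return np.maximum(e - thresh, 0.0)

n = 10_000
# Plain Monte Carlo: forecast errors drawn from N(0, 1); almost all
# samples land in the benign region and contribute nothing.
mc = cost(rng.standard_normal(n))

# Importance sampling: draw from N(mu, 1) shifted toward the tail and
# reweight each sample by the likelihood ratio
#   p(e) / q(e) = exp(-mu * e + mu**2 / 2).
mu = 3.0
e_is = rng.standard_normal(n) + mu
w = np.exp(-mu * e_is + mu ** 2 / 2)
is_est = cost(e_is) * w

assert np.var(is_est) < np.var(mc)   # same target mean, far lower variance
```

Both estimators are unbiased for the expected cost, but the importance-sampled one concentrates its samples on the extreme events that matter for reliability, which is why fewer scenarios suffice in the stochastic dispatch setting above.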
There have been controversies among statisticians on (i) what to model and (ii) how to make inferences from models with unobservables. One such controversy concerns the difference between estimation methods for the marginal means not necessarily having a probabilistic basis and statistical models having unobservables with a probabilistic basis. Another concerns likelihood-based inference for statistical models with unobservables. This needs an extended-likelihood framework, and we show how one such extension, hierarchical likelihood, allows this to be done. Modeling of unobservables leads to rich classes of new probabilistic models from which likelihood-type inferences can be made naturally with hierarchical likelihood.
Likelihood Inference for Models with Unobservables: Another View
This paper examines the relationship between the spectra of stars of the same spectral type with extremely low reddenings. According to the standard theory, the relationship between the spectra of stars of the same spectral type with small but different reddenings should differ in the optical and in the UV. This difference is not observed: the ratio of the spectra of two stars in directions where the reddening is large enough to be detected but low enough not to give a noticeable 2200 Å bump is an exponential of 1/lambda from the near-infrared to the far-UV. This result is in conformity with the ideas introduced in preceding papers: the exponential optical extinction extends to the UV, and the spectrum of stars with enough reddening is contaminated by light scattered at small angular distances from the stars. An application will be the determination of the spectrum of a non-reddened star from the spectrum of a star of the same spectral type with little reddening.
The standard theory of extinction and the spectrum of stars with very little reddening
A 12 year-long monitoring of the absorption caused by a z=0.89 spiral galaxy on the line of sight to the radio-loud gravitationally lensed quasar PKS 1830-211 reveals spectacular changes in the HCO+ and HCN (2-1) line profiles. The depth of the absorption toward the quasar NE image increased by a factor of ~3 in 1998-1999 and subsequently decreased by a factor >=6 between 2003 and 2006. These changes were echoed by similar variations in the absorption line wings toward the SW image. Most likely, these variations result from a motion of the quasar images with respect to the foreground galaxy, which could be due to a sporadic ejection of bright plasmons by the background quasar. VLBA observations have shown that the separation between the NE and SW images changed in 1997 by as much as 0.2 mas within a few months. Assuming that motions of similar amplitude occurred in 1999 and 2003, we argue that the clouds responsible for the NE absorption and the broad wings of the SW absorption should be sparse and have characteristic sizes of 0.5-1 pc.
Drastic changes in the molecular absorption at redshift z=0.89 toward the quasar PKS 1830-211
Individually addressed Er$^{3+}$ ions in solid-state hosts are promising resources for quantum repeaters, because of their direct emission in the telecom band and compatibility with silicon photonic devices. While the Er$^{3+}$ electron spin provides a spin-photon interface, ancilla nuclear spins could enable multi-qubit registers with longer storage times. In this work, we demonstrate coherent coupling between the electron spin of a single Er$^{3+}$ ion and a single $I=1/2$ nuclear spin in the solid-state host crystal, which is a fortuitously located proton ($^1$H). We control the nuclear spin using dynamical decoupling sequences applied to the electron spin, implementing one- and two-qubit gate operations. Crucially, the nuclear spin coherence time exceeds the electron coherence time by several orders of magnitude, because of its smaller magnetic moment. These results provide a path towards combining long-lived nuclear spin quantum registers with telecom-wavelength emitters for long-distance quantum repeaters.
Coherent control of a nuclear spin via interactions with a rare-earth ion in the solid-state
Using observations from the Chandra X-ray Observatory and Giant Metrewave Radio Telescope, we examine the interaction between the intracluster medium and central radio source in the poor cluster AWM 4. In the Chandra observation a small cool core or galactic corona is resolved coincident with the radio core. This corona is capable of fuelling the active nucleus, but must be inefficiently heated by jet interactions or conduction, possibly precluding a feedback relationship between the radio source and cluster. A lack of clearly detected X-ray cavities suggests that the radio lobes are only partially filled by relativistic plasma. We estimate a filling factor of phi=0.21 (3 sigma upper limit phi<0.42) for the better constrained east lobe. We consider the particle population in the jets and lobes, and find that the standard equipartition assumptions predict pressures and ages which agree poorly with X-ray estimates. Including an electron population extending to low Lorentz factors either reduces (gamma_min=100) or removes (gamma_min=10) the pressure imbalance between the lobes and their environment. Pressure balance can also be achieved by entrainment of thermal gas, probably in the first few kiloparsecs of the radio jets. We estimate the mechanical power output of the radio galaxy, and find it to be marginally capable of balancing radiative cooling.
A deep Chandra observation of the poor cluster AWM 4 - I. Properties of the central radio galaxy and its effects on the intracluster medium
The local variation of grain boundary atomic structure and chemistry caused by segregation of impurities influences the macroscopic properties of polycrystalline materials. Here, the effect of co-segregation of carbon and boron on the depletion of aluminum at a $\Sigma 5\,(3\,1\,0\,) [0\,0\,1]$ tilt grain boundary in an $\alpha$-Fe-$4~at.~\%$Al bicrystal was studied by combining atomic resolution scanning transmission electron microscopy, atom probe tomography and density functional theory calculations. The atomic grain boundary structural units mostly resemble kite-type motifs, and the structure appears disrupted by atomic-scale defects. Atom probe tomography reveals that carbon and boron impurities co-segregate to the grain boundary, reaching levels of >1.5 at.\%, whereas aluminum is locally depleted by approx. 2~at.\%. First-principles calculations indicate that carbon and boron exhibit the strongest segregation tendency and that their repulsive interaction with aluminum promotes its depletion from the grain boundary. It is also predicted that substitutional segregation of boron atoms may contribute to local distortions of the kite-type structural units. These results suggest that the co-segregation and interaction of interstitial impurities with substitutional solutes strongly influence grain boundary composition and, with this, the properties of the interface.
Aluminum depletion induced by complex co-segregation of carbon and boron in a {\Sigma} 5 [3 1 0] bcc-iron grain boundary
Recent studies have proposed that the diffusion of messenger molecules, such as monoamines, can mediate the plastic adaptation of synapses in supervised learning of neural networks. Based on these findings, we developed a model for neural learning in which the signal for plastic adaptation is assumed to propagate through the extracellular space. We investigate the conditions allowing learning of Boolean rules in a neural network. Even fully excitatory networks show very good learning performance. Moreover, the investigation of the plastic adaptation features that optimize performance suggests that learning is very sensitive to the extent of the plastic adaptation and the spatial range of synaptic connections.
Spatial features of synaptic adaptation affecting learning performance
We study the energetic efficiency of navigating microswimmers by explicitly taking into account the geometry of their body. We show that, as their shape transitions from prolate to oblate, non-steering microswimmers rotated by flow gradients naturally follow increasingly time-optimal trajectories. At the same time, they also require larger dissipation to swim. The coupling between body geometry and hydrodynamics thus leads to a generic trade-off between the energetic costs associated with propulsion and navigation, which is accompanied by the selection of a finite optimal aspect ratio. We derive from optimal control theory the steering policy ensuring overall minimum energy dissipation, and characterize how navigation performances vary with the swimmer shape. Our results highlight the important role of the swimmer geometry in realistic navigation problems.
Energetic cost of microswimmer navigation: the role of body shape
We study a holomorphic Poisson structure defined on the linear space $S(n,d):= {\rm Mat}_{n\times d}(\mathbb{C}) \times {\rm Mat}_{d\times n}(\mathbb{C})$ that is covariant under the natural left actions of the standard ${\rm GL}(n,\mathbb{C})$ and ${\rm GL}(d,\mathbb{C})$ Poisson-Lie groups. The Poisson brackets of the matrix elements contain quadratic and constant terms, and the Poisson tensor is non-degenerate on a dense subset. Taking the $d=1$ special case gives a Poisson structure on $S(n,1)$, and we construct a local Poisson map from the Cartesian product of $d$ independent copies of $S(n,1)$ into $S(n,d)$, which is a holomorphic diffeomorphism in a neighborhood of zero. The Poisson structure on $S(n,d)$ is the complexification of a real Poisson structure on ${\rm Mat}_{n\times d}(\mathbb{C})$ constructed by the authors and Marshall, where a similar decoupling into $d$ independent copies was observed. We also relate our construction to a Poisson structure on $S(n,d)$ defined by Arutyunov and Olivucci in the treatment of the complex trigonometric spin Ruijsenaars-Schneider system by Hamiltonian reduction.
A decoupling property of some Poisson structures on ${\rm Mat}_{n\times d}(\mathbb{C}) \times {\rm Mat}_{d\times n}(\mathbb{C})$ supporting ${\rm GL}(n,\mathbb{C}) \times {\rm GL}(d,\mathbb{C})$ Poisson-Lie symmetry
A simple model that can explain the observed vertical distribution and size spectrum of atmospheric aerosol is proposed. The model is based on a new physical hypothesis for the vertical mass exchange between the troposphere and the stratosphere, which takes place through a gravity wave feedback mechanism. There is close agreement between the model-predicted aerosol distribution and size spectrum and the observed distributions.
A New Hypothesis for the Vertical Distribution of Atmospheric Aerosols
The comparison of different atomic transition frequencies over time can be used to determine the present value of the temporal derivative of the fine structure constant alpha in a model-independent way without assumptions on constancy or variability of other parameters. We have measured an optical transition frequency at 688 THz in ^{171}Yb+ with a cesium atomic clock at two times separated by 2.8 years and find a value for the fractional variation of the frequency ratio $f_{\rm Yb}/f_{\rm Cs}$ of $(-1.2\pm 4.4)\cdot 10^{-15}$ yr$^{-1}$, consistent with zero. Combined with recently published values for the constancy of other transition frequencies this measurement sets an upper limit on the present variability of alpha at the level of $2.0\cdot 10^{-15}$ yr$^{-1}$, corresponding so far to the most stringent limit from laboratory experiments.
New limit on the present temporal variation of the fine structure constant
In this paper, the effect of twin boundaries on the crack growth behaviour of single crystal BCC Fe has been investigated using molecular dynamics simulations. The growth of an atomically sharp crack with an orientation of (111)$<$110$>$ (crack plane/crack front) has been studied under mode-I loading at a constant strain rate. In order to study the influence of twin boundaries on the crack growth behaviour, single and multiple twin boundaries were introduced perpendicular to the crack growth direction. The results indicate that the (111)$<$110$>$ crack in single crystal BCC Fe grows in a brittle manner. However, following the introduction of twin boundaries, noticeable plastic deformation has been observed at the crack tip. Further, increasing the number of twin boundaries increased the amount of plastic deformation, leading to better crack resistance and higher failure strains. Finally, an interesting relationship has been observed between the crack growth rate and the flow stress.
Atomistic simulations of twin boundary effect on the crack growth behaviour in BCC Fe
When a particle is placed in a material with a lower bulk melting temperature, intermolecular forces can lead to the existence of a premelted liquid film of the lower melting temperature material. Despite the system being below the melting temperatures of both solids, the liquid film is a consequence of thermodynamic equilibrium, controlled by intermolecular, ionic and other interactions. An imposed temperature gradient drives the translation of the particle by a process of melting and refreezing known as thermal regelation. We calculate the rate of regelation of spherical particles surrounded by premelted films that contain ionic impurities. The impurities enhance the rate of motion thereby influencing the dynamics of single particles and distributions of particles, which we describe in addition to the consequences in natural and technological settings.
Impurity effects in thermal regelation
A coplactic class in the symmetric group S_n consists of all permutations in S_n with a given Schensted Q-symbol, and may be described in terms of local relations introduced by Knuth. Any Lie element in the group algebra of S_n which is constant on coplactic classes is already constant on descent classes. As a consequence, the intersection of the Lie convolution algebra introduced by Patras and Reutenauer and the coplactic algebra introduced by Poirier and Reutenauer is the Solomon descent algebra.
Lie Elements and Knuth Relations
We implement a scale-free version of the pivot algorithm and use it to sample pairs of three-dimensional self-avoiding walks, for the purpose of efficiently calculating an observable that corresponds to the probability that pairs of self-avoiding walks remain self-avoiding when they are concatenated. We study the properties of this Markov chain, and then use it to find the critical exponent $\gamma$ for self-avoiding walks to unprecedented accuracy. Our final estimate for $\gamma$ is $1.15695300(95)$.
Scale-free Monte Carlo method for calculating the critical exponent $\gamma$ of self-avoiding walks
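The pivot algorithm at the heart of the study above can be illustrated with a toy sketch. This is not the authors' scale-free implementation; it is a minimal single-move version for the simple cubic lattice, assuming the standard octahedral symmetry group and Metropolis-style rejection of non-self-avoiding proposals.

```python
import itertools
import random

# Illustrative sketch only -- not the authors' scale-free implementation.
# One pivot move on a 3-D self-avoiding walk stored as a list of lattice sites.

def octahedral_group():
    """The 48 signed coordinate permutations (symmetries of the cubic lattice)."""
    ops = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product((1, -1), repeat=3):
            ops.append((perm, signs))
    return ops

OPS = octahedral_group()

def apply_op(op, v):
    """Apply a signed permutation to a 3-vector."""
    perm, signs = op
    return tuple(signs[i] * v[perm[i]] for i in range(3))

def pivot_move(walk, rng=random):
    """Attempt one pivot move; return the new walk, or the old walk if the
    proposal violates self-avoidance (standard rejection step)."""
    k = rng.randrange(1, len(walk) - 1)   # pivot site (not an endpoint)
    op = rng.choice(OPS)                  # random lattice symmetry
    p = walk[k]
    # Rotate/reflect the tail of the walk about the pivot site.
    new_tail = [tuple(p[i] + u[i] for i in range(3))
                for u in (apply_op(op, tuple(x[i] - p[i] for i in range(3)))
                          for x in walk[k + 1:])]
    proposal = walk[:k + 1] + new_tail
    return proposal if len(set(proposal)) == len(proposal) else walk
```

Because each accepted move changes a macroscopic fraction of the walk, the pivot algorithm decorrelates global observables rapidly, which is what makes high-precision exponent estimates feasible.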
DNA has been discussed as a potential medium for data storage: it could be denser, consume less energy, and be more durable than conventional storage media such as hard drives, solid-state storage, and optical media. However, computing on data stored in DNA is a largely unexplored challenge. This paper proposes an integrated circuit (IC) based on microfluidics that can perform complex operations such as artificial neural network (ANN) computation on data stored in DNA. It computes entirely in the molecular domain without converting data to electrical form, making it a form of in-memory computing on DNA. The computation is achieved by topologically modifying DNA strands through the use of enzymes called nickases. A novel scheme is proposed for representing data stochastically through the concentration of the DNA molecules that are nicked at specific sites. The paper provides details of the biochemical design, as well as the design, layout, and operation of the microfluidics device. Benchmarks are reported on the performance of neural network computation.
Neural network execution using nicked DNA and microfluidics
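The concentration-based stochastic representation described above can be mimicked with a toy model: a value in [0, 1] is encoded as the fraction of molecules nicked at a site, and multiplication corresponds to combining independent nick events. This sketch is purely illustrative (function names and the Boolean abstraction are assumptions, not the paper's biochemical design).

```python
import random

# Toy model of stochastic, concentration-based encoding (illustrative only).
# A value in [0, 1] is the fraction of "nicked" molecules in a pool.

def encode(value, n_molecules, rng):
    """Encode a value as a pool of booleans: nicked (True) with prob. value."""
    return [rng.random() < value for _ in range(n_molecules)]

def decode(pool):
    """Recover the value as the nicked fraction of the pool."""
    return sum(pool) / len(pool)

def stochastic_mult(a_pool, b_pool):
    """AND of independent pools approximates the product of the two values."""
    return [a and b for a, b in zip(a_pool, b_pool)]
```

As in stochastic computing generally, the accuracy of the decoded product improves with pool size roughly as 1/sqrt(N).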
It is well-known that every weakly convergent sequence in $\ell_1$ is convergent in the norm topology (Schur's lemma). Phillips' lemma asserts even more strongly that if a sequence $(\mu_n)_{n\in\mathbb N}$ in $\ell_\infty'$ converges pointwise on $\{0,1\}^\mathbb N$ to $0$, then its $\ell_1$-projection converges in norm to $0$. In this note we show how the second category version of Schur's lemma, for which a short proof is included, can be used to replace $\{0,1\}^\mathbb N$ in Phillips' lemma by any of its subsets that contains all finite sets and has a certain interpolation property for finite sets.
A note on the Schur and Phillips lemmas
With the celebrated success of deep learning, some attempts to develop effective methods for detecting malicious PowerShell programs employ neural nets in a traditional natural language processing setup while others employ convolutional neural nets to detect obfuscated malicious commands at a character level. While these representations may express salient PowerShell properties, our hypothesis is that tools from static program analysis will be more effective. We propose a hybrid approach combining traditional program analysis (in the form of abstract syntax trees) and deep learning. This poster presents preliminary results of a fundamental step in our approach: learning embeddings for nodes of PowerShell ASTs. We classify malicious scripts by family type and explore embedded program vector representations.
AST-Based Deep Learning for Detecting Malicious PowerShell
The paper argues that Fodor and Lepore are misguided in their attack on Pustejovsky's Generative Lexicon, largely because their argument rests on a traditional, but implausible and discredited, view of the lexicon on which it is effectively empty of content. That view stands in a long line of explaining word meaning (a) by ostension and then (b) by means of a vacuous symbol in a lexicon, often the word itself after typographic transmogrification; (a) and (b) share the mistaken belief that to a word there must correspond a simple entity that is its meaning. I then turn to the semantic rules that Pustejovsky uses and argue that, although they have novel features, they are in a well-established Artificial Intelligence tradition of explaining meaning by reference to structures that mention other structures assigned to words that may occur in close proximity to the first. It is argued that Fodor and Lepore's view that there cannot be such rules is without foundation; indeed, systems using such rules have proved their practical worth in computational applications. Their justification descends from a line of argument, whose high points were probably Wittgenstein and Quine, that meaning is not to be understood by simple links to the world, ostensive or otherwise, but by the relationship of whole cultural representational structures to each other and to the world as a whole.
The "Fodor"-FODOR fallacy bites back
We have observed 152 nearby solar-type stars with the Infrared Spectrometer (IRS) on the Spitzer Space Telescope. Including stars that met our criteria but were observed in other surveys, we obtain an overall success rate for finding excesses in the long-wavelength IRS band (30-34 micron) of 11.8% +/- 2.4%. The success rate for excesses in the short-wavelength band (8.5-12 micron) is ~1%, including sources from other surveys. For stars with no excess at 8.5-12 microns, the IRS data set 3-sigma limits of around 1,000 times the level of zodiacal emission present in our solar system, while at 30-34 microns they set limits of around 100 times the level of our solar system. Two stars (HD 40136 and HD 10647) show weak evidence for spectral features; the excess emission in the other systems is featureless. If the emitting material consists of large (10 micron) grains, as implied by the lack of spectral features, we find that these grains are typically located at or beyond the snow line, ~1-35 AU from the host stars, with an average distance of 14 +/- 6 AU; however, smaller grains could be located at significantly greater distances from the host stars. These distances correspond to dust temperatures in the range ~50-450 K. Several of the disks are well modeled by a single dust temperature, possibly indicative of a ring-like structure. However, a single dust temperature does not match the data for other disks in the sample, implying a distribution of temperatures within these disks. For most stars with excesses, we detect an excess at both IRS and MIPS wavelengths. Only three stars in this sample show a MIPS 70 micron excess with no IRS excess, implying that very cold dust is rare around solar-type stars.
Explorations Beyond the Snow Line: Spitzer/IRS Spectra of Debris Disks Around Solar-Type Stars
Learning-based color enhancement approaches typically learn to map from input images to retouched images. Most existing methods require expensive pairs of input-retouched images or produce results in a non-interpretable way. In this paper, we present a deep reinforcement learning (DRL) based method for color enhancement to explicitly model the step-wise nature of the human retouching process. We cast a color enhancement process as a Markov Decision Process where actions are defined as global color adjustment operations. Then we train our agent to learn the optimal global enhancement sequence of the actions. In addition, we present a 'distort-and-recover' training scheme which only requires high-quality reference images for training instead of input and retouched image pairs. Given high-quality reference images, we distort the images' color distribution and form distorted-reference image pairs for training. Through extensive experiments, we show that our method produces decent enhancement results and our DRL approach is more suitable for the 'distort-and-recover' training scheme than previous supervised approaches. Supplementary material and code are available at https://sites.google.com/view/distort-and-recover/
Distort-and-Recover: Color Enhancement using Deep Reinforcement Learning
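The 'distort-and-recover' pair-synthesis step can be sketched in a few lines: random global color distortions are applied to a high-quality reference, yielding (distorted, reference) training pairs. The specific operations and parameter ranges below are illustrative assumptions, not the paper's exact action space.

```python
import numpy as np

# Sketch of 'distort-and-recover' pair synthesis (illustrative assumptions:
# the operations and parameter ranges are not taken from the paper).

def random_distort(img, rng):
    """Apply a random global color distortion to an RGB float image in [0, 1]."""
    out = img.copy()
    out = out * rng.uniform(0.6, 1.4)                  # brightness / exposure
    out = (out - 0.5) * rng.uniform(0.6, 1.4) + 0.5    # contrast about mid-gray
    out = out * rng.uniform(0.7, 1.3, size=3)          # per-channel white balance
    return np.clip(out, 0.0, 1.0)

def make_pairs(reference, n, seed=0):
    """Return n (distorted, reference) training pairs from one reference image."""
    rng = np.random.default_rng(seed)
    return [(random_distort(reference, rng), reference) for _ in range(n)]
```

An agent trained on such pairs learns to invert the distortions step by step, which is why only high-quality references (and no human-retouched targets) are needed.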
Human intelligence has the remarkable ability to adapt to new tasks and environments quickly. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions. To facilitate research in this direction, we propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment. The primary goal of the competition is to approach the problem of how to develop interactive embodied agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment. Understanding the complexity of the challenge, we split it into sub-tasks to make it feasible for participants. This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL). Therefore, the suggested challenge can bring two communities together to approach one of the crucial challenges in AI. Another critical aspect of the challenge is the dedication to perform a human-in-the-loop evaluation as a final evaluation for the agents developed by contestants.
IGLU 2022: Interactive Grounded Language Understanding in a Collaborative Environment at NeurIPS 2022
Full waveform inversion (FWI) delivers high-resolution images of the subsurface by minimizing iteratively the misfit between the recorded and calculated seismic data. It has been attacked successfully with the Gauss-Newton method and sparsity promoting regularization based on fixed multiscale transforms that permit significant subsampling of the seismic data when the model perturbation at each FWI data-fitting iteration can be represented with sparse coefficients. Rather than using analytical transforms with predefined dictionaries to achieve sparse representation, we introduce an adaptive transform called the Sparse Orthonormal Transform (SOT) whose dictionary is learned from many small training patches taken from the model perturbations in previous iterations. The patch-based dictionary is constrained to be orthonormal and trained with an online approach to provide the best sparse representation of the complex features and variations of the entire model perturbation. The complexity of the training method is proportional to the cube of the number of samples in one small patch. By incorporating both compressive subsampling and the adaptive SOT-based representation into the Gauss-Newton least-squares problem for each FWI iteration, the model perturbation can be recovered after an l1-norm sparsity constraint is applied on the SOT coefficients. Numerical experiments on synthetic models demonstrate that the SOT-based sparsity promoting regularization can provide robust FWI results with reduced computation.
Sparse-promoting Full Waveform Inversion based on Online Orthonormal Dictionary Learning
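The core ingredients of the SOT idea, an orthonormal patch dictionary plus hard-thresholded sparse coding, can be illustrated with a simplified batch version. This sketch uses a plain SVD on centered patches in place of the paper's online training, so it is an assumption-laden stand-in, not the authors' algorithm.

```python
import numpy as np

# Simplified sketch of an orthonormal patch dictionary (batch SVD stands in
# for the paper's online SOT training; illustrative only).

def learn_orthonormal_dict(patches):
    """patches: (n_patches, patch_size) array; returns an orthonormal basis
    whose rows are ordered by decreasing explained variance."""
    _, _, Vt = np.linalg.svd(patches - patches.mean(axis=0), full_matrices=False)
    return Vt

def sparse_code(patch, D, k):
    """Transform a patch and keep only its k largest-magnitude coefficients."""
    c = D @ patch
    small = np.argsort(np.abs(c))[:-k]   # indices of all but the top-k entries
    c[small] = 0.0
    return c

def reconstruct(c, D):
    """Inverse transform: orthonormality makes the inverse a transpose."""
    return D.T @ c
```

Orthonormality is what keeps both the forward and inverse transforms cheap, and it is also what makes the l1-constrained Gauss-Newton subproblem in each FWI iteration tractable.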
The description of the strong interaction physics of low-lying resonances is outside the valid range of perturbative QCD. Chiral effective field theories have been developed to tackle this issue. Partial wave dynamics is the systematic tool to decode the underlying physics and reveal the properties of those resonances. It is extremely powerful and helpful for our understanding of the non-perturbative regime, especially when dispersion techniques are utilized simultaneously. Recently, many exotic and ordinary hadrons have been reported by experimental collaborations such as LHCb, Belle, and BESIII. In this review, we summarize the recent progress on the applications of partial wave dynamics combined with chiral effective field theories and dispersion relations, with emphasis on $\pi\pi$, $\pi K$, $\pi N$ and $\bar{K}N$ scattering.
A Review on Partial-wave Dynamics with Chiral Effective Field Theory and Dispersion Relation
We study the issue of black hole entropy in topologically massive gravity. Assuming that the presence of the gravitational Chern-Simons term with coupling $1/\mu$ does modify the horizon radius $\tilde{r}_+$, we propose $\tilde{S}_{BH}=\pi \tilde{r}_+/2G_3$ as the Bekenstein-Hawking entropy. This entropy of the CS-BTZ black hole satisfies the first law of thermodynamics and the area law, but it is slightly different from the shifted entropy $S_c=\pi r_+/2G_3+ (1/\mu l)\pi r_-/2G_3$ based on the BTZ black hole with outer horizon $r_+$ and inner horizon $r_-$. In the case of $r_-=0$, $\tilde{S}_{BH}$ represents the entropy of the non-rotating BTZ black hole with the Chern-Simons term (NBTZ-CS), while $S_c$ reduces to the entropy of the NBTZ black hole. This shows that $\tilde{S}_{BH}$ may be a candidate for the entropy of the CS-BTZ black hole.
Entropy of black holes in topologically massive gravity
Of the major deuterostome groups, the echinoderms with their multiple forms and complex development are arguably the most mysterious. Although larval echinoderms are bilaterally symmetric, the adult body seems to abandon the larval body plan and to develop independently a new structure with different symmetries. The prevalent pentamer structure, the asymmetry of Loven's rule and the variable location of the periproct and madrepore present enormous difficulties in homologizing structures across the major clades, despite the excellent fossil record. This irregularity in body forms seems to place echinoderms outside the other deuterostomes. Here I propose that the predominant five-ray structure is derived from a hexamer structure that is grounded directly in the structure of the bilaterally symmetric larva. This hypothesis implies that the adult echinoderm body can be derived directly from the larval bilateral symmetry and thus firmly ranks even the adult echinoderms among the bilaterians. In order to test the hypothesis rigorously, a model is developed in which one ray is missing between rays IV-V (Loven's schema) or rays C-D (Carpenter's schema). The model is used to make predictions, which are tested and verified for the process of metamorphosis and for the morphology of recent and fossil forms. The theory provides fundamental insight into the M-plane and the Ubisch', Loven's and Carpenter's planes and generalizes them for all echinoderms. The theory also makes robust predictions about the evolution of the pentamer structure and its developmental basis. *** including corrections (see footnotes) ***
A hexamer origin of the echinoderms' five rays
We investigate the steady-state phases of the dissipative spin-1/2 $J_1$-$J_2$ XYZ model on a two-dimensional square lattice. We show that the next-nearest-neighbor interaction plays a crucial role in determining the steady-state properties. By means of the Gutzwiller mean-field factorization, we find the emergence of antiferromagnetic steady-state phases. The existence of such antiferromagnetic steady-state phases in the thermodynamic limit is confirmed by cluster mean-field analysis. Moreover, we find evidence of a limit-cycle phase through the largest quantum Lyapunov exponent in small clusters, and check the stability of the oscillations by calculating the averaged oscillation amplitude up to a $4\times4$ cluster mean-field approximation.
Steady-state phases of dissipative spin-1/2 XYZ model with frustrated interaction
We introduce a binary relation on the finite discrete probability distributions which generalizes notions of majorization that have been studied in quantum information theory. Motivated by questions in thermodynamics, our relation describes the transitions induced by bistochastic maps in the presence of additional auxiliary systems which may become correlated in the process. We show that this relation is completely characterized by Shannon entropy H, which yields an interpretation of H in resource-theoretic terms, and admits a particularly simple proof of a known characterization of H in terms of natural information-theoretic properties.
A generalization of majorization that characterizes Shannon entropy
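For context, the classical notion that the relation above generalizes can be stated compactly (this is the textbook definition, not the paper's generalized relation, which additionally allows correlated auxiliary systems):

```latex
% Classical majorization: for probability vectors p, q on n outcomes,
% with entries sorted in decreasing order (denoted by the down-arrow),
% p is majorized by q, written p \prec q, iff
\[
  \sum_{i=1}^{k} p_i^{\downarrow} \;\le\; \sum_{i=1}^{k} q_i^{\downarrow}
  \quad \text{for all } k = 1, \dots, n,
  \qquad \sum_{i=1}^{n} p_i = \sum_{i=1}^{n} q_i = 1 .
\]
% Equivalently, p = Dq for some bistochastic (doubly stochastic) matrix D,
% which is the form of the condition that the paper's relation extends.
```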
We study the magnetic field response of the Majorana Kramers pairs of a one-dimensional time-reversal invariant (TRI) superconductor (class DIII) with or without a coexisting chirality symmetry. For unbroken TR and chirality invariance, the parameter regimes for nontrivial values of the (Z_2) DIII invariant and the (Z) chiral invariant coincide. However, broken TR may or may not be accompanied by broken chirality, and if chiral symmetry is unbroken, the pair of Majorana fermions (MFs) at a given end survives the loss of TR symmetry in an entire plane perpendicular to the spin-orbit coupling field. Conversely, we show that broken chirality may or may not be accompanied by broken TR, and if TR is unbroken, the pair of MFs survives the loss of chiral symmetry. In addition to explaining the anomalous magnetic field response of all the DIII-class TS systems proposed in the literature, we provide a realistic route to engineer a "true" TR-invariant TS, whose pair of MFs at each end is split by an applied Zeeman field in an arbitrary direction. We also prove that, quite generally, the splitting of the MFs by TR-breaking fields in TRI superconductors is highly anisotropic in spin space, even in the absence of the topological chiral symmetry.
Magnetic Field Response and Chiral Symmetry of Time Reversal Invariant Topological Superconductors
We study C*-algebras generated by left regular representations of right LCM one-relator monoids and Artin-Tits monoids of finite type. We obtain structural results concerning nuclearity, ideal structure and pure infiniteness. Moreover, we compute K-theory. Based on our K-theory results, we develop a new way of computing K-theory for certain group C*-algebras and crossed products.
C*-algebras of right LCM one-relator monoids and Artin-Tits monoids of finite type
The experimental signatures of TeV-mass black hole (BH) formation in heavy ion collisions at the LHC are examined. We find that black hole production results in a complete disappearance of all very high $p_T$ ({$> 500$} GeV) back-to-back correlated di-jets of total mass {$M > M_f \sim 1$}TeV. We show that the subsequent Hawking decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass $\sim M_f$ and the experimental distinguishability of scenarios with BHRs from those with complete black hole decay. Due to the rather moderate luminosity in the first year of LHC running, the best chance for the observation of BHs or BHRs at this early stage will be via ionizing tracks in the ALICE TPC. Finally, we point out that stable BHRs would be interesting candidates for energy production by conversion of mass to Hawking radiation.
Mini Black Holes in the first year of the LHC
Driven by the need to accelerate numerical simulations, the use of machine learning techniques is rapidly growing in the field of computational solid mechanics. Their application is especially advantageous in concurrent multiscale finite element analysis (FE$^2$) due to the exceedingly high computational costs often associated with it and the high number of similar micromechanical analyses involved. To tackle the issue, using surrogate models to approximate the microscopic behavior and accelerate the simulations is a promising and increasingly popular strategy. However, several challenges related to their data-driven nature compromise the reliability of surrogate models in material modeling. The alternative explored in this work is to reintroduce some of the physics-based knowledge of classical constitutive modeling into a neural network by employing the actual material models used in the full-order micromodel to introduce non-linearity. Thus, path-dependency arises naturally since every material model in the layer keeps track of its own internal variables. For the numerical examples, a composite Representative Volume Element with elastic fibers and elasto-plastic matrix material is used as the microscopic model. The network is tested in a series of challenging scenarios and its performance is compared to that of a state-of-the-art Recurrent Neural Network (RNN). A remarkable outcome of the novel framework is the ability to naturally predict unloading/reloading behavior without ever seeing it during training, a stark contrast with popular but data-hungry models such as RNNs. Finally, the proposed network is applied to FE$^2$ examples to assess its robustness for application in nonlinear finite element analysis.
Physically recurrent neural networks for path-dependent heterogeneous materials: embedding constitutive models in a data-driven surrogate
Liquid crystals have emerged as potential candidates for next-generation lubricants due to their tendency to exhibit long-range ordering. Here, we construct a full atomistic model of 4-cyano-4-hexylbiphenyl (6CB) nematic liquid crystal lubricants mixed with hexane and confined by mica surfaces. We explore the effect of the surface structure of mica, as well as lubricant composition and thickness, on the nanoscale friction in the system. Our results demonstrate the key role of the structure of the mica surfaces, specifically the positions of potassium ($\mathrm{K}^+$) ions, in determining the nature of sliding friction with monolayer lubricants, including the presence or absence of stick-slip dynamics. With the commensurate setup of confining surfaces, when the grooves created between the periodic $\mathrm{K}^+$ ions are parallel to the sliding direction we observe a lower friction force as compared to the perpendicular situation. Random positions of ions exhibit even smaller friction forces with respect to the previous two cases. For thicker lubrication layers the surface structure becomes less important and we observe a good agreement with the experimental data on bulk viscosity of 6CB and the additive hexane. In case of thicker lubrication layers, friction may still be controlled by tuning the relative concentrations of 6CB and hexane in the mixture.
Nanoscale Liquid Crystal Lubrication Controlled by Surface Structure and Film Composition
We study completion with respect to the iterated suspension functor on $\mathcal{O}$-algebras, where $\mathcal{O}$ is a reduced operad in symmetric spectra. This completion is the unit of a derived adjunction comparing $\mathcal{O}$-algebras with coalgebras over the associated iterated suspension-loop homotopical comonad via the iterated suspension functor. We prove that this derived adjunction becomes a derived equivalence when restricted to 0-connected $\mathcal{O}$-algebras and $r$-connected $\tilde{\Sigma}^r \tilde{\Omega}^r$-coalgebras. We also consider the dual picture, using iterated loops to build a cocompletion map from algebras over the iterated loop-suspension homotopical monad to $\mathcal{O}$-algebras. This is the counit of a derived adjunction, which we prove is a derived equivalence when restricting to $r$-connected $\mathcal{O}$-algebras and $0$-connected $\tilde{\Omega}^r \tilde{\Sigma}^r$-algebras.
Iterated delooping and desuspension of structured ring spectra
Pairwise sequence alignment is one of the most computationally intensive kernels in genomic data analysis, accounting for more than 90% of the runtime for key bioinformatics applications. This method is particularly expensive for third-generation sequences due to the high computational cost of analyzing sequences of length between 1Kb and 1Mb. Given the quadratic overhead of exact pairwise algorithms for long alignments, the community primarily relies on approximate algorithms that search only for high-quality alignments and stop early when one is not found. In this work, we present the first GPU optimization of the popular X-drop alignment algorithm, that we named LOGAN. Results show that our high-performance multi-GPU implementation achieves up to 181.6 GCUPS and speed-ups up to 6.6x and 30.7x using 1 and 6 NVIDIA Tesla V100, respectively, over the state-of-the-art software running on two IBM Power9 processors using 168 CPU threads, with equivalent accuracy. We also demonstrate a 2.3x LOGAN speed-up versus ksw2, a state-of-art vectorized algorithm for sequence alignment implemented in minimap2, a long-read mapping software. To highlight the impact of our work on a real-world application, we couple LOGAN with a many-to-many long-read alignment software called BELLA, and demonstrate that our implementation improves the overall BELLA runtime by up to 10.6x. Finally, we adapt the Roofline model for LOGAN and demonstrate that our implementation is near-optimal on the NVIDIA Tesla V100s.
LOGAN: High-Performance GPU-Based X-Drop Long-Read Alignment
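The X-drop idea that LOGAN accelerates can be illustrated with a much-simplified ungapped variant (the actual algorithm handles gaps and is GPU-parallelized; this sketch only shows the early-termination heuristic): extend the alignment while tracking the best score, and stop once the running score drops more than X below that best.

```python
def xdrop_extend(s1, s2, match=1, mismatch=-1, xdrop=3):
    """Illustrative ungapped X-drop extension of two sequences from their
    start. Stops when the running score falls more than `xdrop` below the
    best score seen so far; returns (best_score, best_length)."""
    score = best = best_len = 0
    for i, (a, b) in enumerate(zip(s1, s2), start=1):
        score += match if a == b else mismatch
        if score > best:
            best, best_len = score, i
        if best - score > xdrop:          # X-drop termination criterion
            break
    return best, best_len
```

The early termination is what lets approximate aligners skip low-quality alignments, and it is exactly the step that makes the workload irregular and therefore challenging to map efficiently onto GPUs.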
The channeling of the ion recoiling after a collision with a WIMP changes the ionization signal in direct detection experiments, producing a larger signal than otherwise expected. We give estimates of the fraction of channeled recoiling ions in NaI (Tl) crystals using analytic models produced since the 1960's and 70's to describe channeling and blocking effects. We find that the channeling fraction of recoiling lattice nuclei is smaller than that of ions that are injected into the crystal and that it is strongly temperature dependent.
Channeling in direct dark matter detection I: channeling fraction in NaI (Tl) crystals
Modified Newtonian Dynamics (MOND) is an empirical modification of Newtonian gravity at the largest scales, intended to explain the rotation curves of galaxies as an alternative to nonbaryonic dark matter. However, MOND theories can hardly be connected to the formalism of relativistic Friedmann-Robertson-Walker cosmology. The present work posits the possibility of building this connection by postulating a Yukawa-like scalar potential of non-gravitational origin. This potential arises from a simple speculative inversion of the well-known Yukawa potential and is intended to describe the following physical scenarios: null in the near solar system, slightly attractive at interstellar distances, strongly attractive at distances comparable to galaxy clusters, and repulsive at cosmic scales. Introducing this potential into the usual Friedmann equations, we find that the critical matter density is consistent with the observed density (without a dark matter assumption); in addition, MOND is recovered at interstellar scales and would consequently explain rotation curves. It is also shown that the inverse Yukawa potential alters neither the predictions for the Cosmic Microwave Background nor primordial nucleosynthesis in the early universe, and can be useful in explaining large-scale structure formation.
MOND Theory in a Friedmann-Robertson-Walker Cosmology as an Alternative to the Nonbaryonic Dark Matter Paradigm
Bayesian Gaussian Process Optimization can be considered a method for determining model parameters based on experimental data. In soft QCD physics, the processes of hadron and nuclear interactions require phenomenological models containing many parameters. In order to minimize the computation time, the model predictions can be parameterized using Gaussian Process regression, which then provides the input to the Bayesian Optimization. In this paper, Bayesian Gaussian Process Optimization has been applied to the Monte Carlo model with string fusion. The parameters of the model are determined using experimental data on multiplicities and cross sections of pp, pA and AA collisions over a wide energy range. The results provide important constraints on the transverse radius of the quark-gluon string ($r_{str}$) and the mean multiplicity per rapidity from one string ($\mu_0$).
Determination of the quark-gluon string parameters from the data on pp, pA and AA collisions over a wide energy range using Bayesian Gaussian Process Optimization
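The surrogate-plus-acquisition loop described above can be sketched in a minimal form. This is a generic illustration, not the paper's setup: a tiny pure-NumPy Gaussian Process with an RBF kernel stands in for the parameterized model predictions, a lower-confidence-bound acquisition stands in for the paper's acquisition choice, and the 1-D objective is hypothetical.

```python
import numpy as np

# Minimal Bayesian-optimization sketch (illustrative; kernel, acquisition,
# and the toy 1-D objective are assumptions, not the paper's configuration).

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at test points Xs."""
    L = np.linalg.cholesky(rbf(X, X) + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)  # diag(Kss) = 1
    return Ks.T @ alpha, np.sqrt(var)

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
    """Minimize f on an interval with a GP surrogate and LCB acquisition."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(*bounds, 200)
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid)
        x_next = grid[np.argmin(mu - 2.0 * sd)]   # lower confidence bound
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

Each iteration spends its one expensive evaluation where the surrogate predicts either a low mean or high uncertainty, which is why the scheme needs far fewer model runs than a grid scan over the parameter space.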
We present a self-similar, steady-state model describing both a magnetized accretion disc and a magnetohydrodynamic jet. We focus on the role of a hot corona in such a structure. This corona enables the disc to launch various types of jets. By considering the energy conservation, we also present a diagnostic of the luminosity of the magnetized disc, which could explain some observational signatures of galactic objects.
MHD jets around galactic objects
An extended XMM-Newton observation of the Seyfert I galaxy NGC 4051 in 2009 revealed a complex absorption spectrum, with a wide range of outflow velocities and ionisation states. The main velocity and ionisation structure was interpreted in Paper I in terms of a decelerating, recombining flow resulting from the shocking of a still higher velocity wind colliding with the ISM or slower moving ejecta. The high sensitivity of the XMM-Newton observation also revealed a number of broad emission lines, all showing evidence of self-absorption near the line cores. The line profiles are found here to be consistent with emission from a limb-brightened shell of post-shock gas building up ahead of the contact discontinuity. While the broad emission lines remain quasi-constant as the continuum flux changes by an order of magnitude, recombination continua of several H- and He-like ions are found to vary in response to the continuum, providing an important key to scaling the ionised flow.
An extended XMM-Newton observation of the Seyfert galaxy NGC 4051. II. Soft X-ray emission from a limb-brightened shell of post-shock gas