Dataset columns — text: string (lengths 133 to 1.92k); summary: string (lengths 24 to 228)
We investigate the sensitivity of future linear collider experiments to CP-violating $WW\gamma$ couplings in the process $e^{+}e^{-} \to \nu\bar{\nu}\gamma$. We consider several sets of machine parameters: centre-of-mass energies $\sqrt{s}$ = 350, 500 and 800 GeV, operating at different luminosities. From an analysis of the differential cross-section, we estimate that a future 500 GeV LC with an integrated luminosity of $125~\mathrm{fb}^{-1}$ can obtain the 95% C.L. limits $|\tilde{\kappa}_{\gamma}| < 0.18$ and $|\tilde{\lambda}_{\gamma}| < 0.069$, a great improvement over the LEP2 reach, where a sensitivity of order 2 is found for both couplings.
CP-violating Anomalous $WW\gamma$ Couplings in $e^+e^-$ Collisions
The scaling behavior of the order parameter at the chiral phase transition, the so-called magnetic equation of state, of strongly interacting matter is studied within effective models. We explore universal and nonuniversal structures near the critical point. These include the scaling functions, the leading corrections to scaling, and the corresponding size of the scaling window as well as their dependence on an external symmetry breaking field. We consider two models in the mean-field approximation, the quark-meson and the Polyakov loop extended quark-meson (PQM) models, and compare their critical properties with a purely bosonic theory, the $O(N)$ linear sigma model in the $N\rightarrow \infty$ limit. In these models the order parameter scaling function is found analytically using the high temperature expansion of the thermodynamic potential. The effects of a gluonic background on the nonuniversal scaling parameters are studied within the PQM model.
Scaling violation and the magnetic equation of state in chiral models
It has been pointed out in arXiv:2211.17057 that our recent (published) paper might be revised, due to an incorrect evaluation of the diffusion coefficients, $D(E)$, employed in the calculations. Unfortunately, there is no indication of where the incorrectness might lie. Here we offer the opportunity to be more specific by providing the community with a complete description of the equations involved in the calculation of $D(E)$, which is missing from the {\tt arXiv} note. In this context, we note that no \textit{ad hoc} parameterisation has been used in our paper. Furthermore, any assumption about the injection mechanisms is explicitly described in the paper as an input factor and is obviously part of the modelling procedure; hence the final outcome is subject to it. Finally, we discuss in a broader context what we believe is the key message of this calculation of the diffusion coefficient.
Diffuse $\gamma$-ray emission in Cygnus X: Comments to Yan & Pavaskar
Attacks by malicious users have recently become a serious problem for real-world companies. Factorization techniques have been developed to learn predictive models for recommender systems from user-item ratings. In this paper, we propose a general architecture for a factorization model with adversarial training to overcome these issues. Experimental findings on real-world datasets demonstrate the effectiveness of our systems.
Deep Factorization Model for Robust Recommendation
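The adversarial-training idea in the abstract above can be made concrete with a minimal sketch (our own illustration, not the paper's architecture): plain matrix factorization in which the latent factors receive an FGSM-style worst-case perturbation before the loss is recomputed. Function name, hyperparameters, and update rule are illustrative assumptions.

```python
import numpy as np

def adv_mf_step(P, Q, u, i, r, lr=0.02, eps=0.05, lam=1.0):
    """One SGD step of adversarially regularized matrix factorization:
    minimize (r - p.q)^2 + lam*(r - (p+dp).(q+dq))^2, where (dp, dq) is
    an FGSM-style loss-increasing perturbation of L2 norm eps, treated
    as a constant during the descent step."""
    p, q = P[u].copy(), Q[i].copy()
    e = r - p @ q
    # gradient of the clean squared error w.r.t. p and q
    gp, gq = -2.0 * e * q, -2.0 * e * p
    # worst-case perturbations, rescaled to norm eps
    dp = eps * gp / (np.linalg.norm(gp) + 1e-12)
    dq = eps * gq / (np.linalg.norm(gq) + 1e-12)
    e_adv = r - (p + dp) @ (q + dq)
    # descend on the clean plus adversarial loss
    P[u] -= lr * (-2.0 * e * q - 2.0 * lam * e_adv * (q + dq))
    Q[i] -= lr * (-2.0 * e * p - 2.0 * lam * e_adv * (p + dp))
```

Sweeping this step over the observed ratings trains factors that remain accurate under small perturbations of the embeddings, which is the robustness property the adversarial term targets.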
The correctness of Harrod's model in differential form is studied. The inadequacy of exponential economic growth is shown, and an alternative result is obtained. Using the Phillips model as an example, an approach to correcting macroeconomic models (in terms of their initial premises) is generalized. A methodology based on balance relations for modelling economic dynamics, including obtaining forecast estimates, is developed. The problems considered are reduced to the solution of Volterra and Fredholm integral equations of the second kind.
The Problem of Modeling of Economic Dynamics
We test the concept that seismicity prior to a large earthquake can be understood in terms of the statistical physics of a critical phase transition. In this model, the cumulative seismic strain release increases as a power-law time-to-failure before the final event. Furthermore, the region of correlated seismicity predicted by this model is much greater than would be predicted from simple elasto-dynamic interactions. We present a systematic procedure to test for the accelerating seismicity predicted by the critical point model and to identify the region approaching criticality, based on a comparison between the observed cumulative energy (Benioff strain) release and the power-law behavior predicted by theory. This method is used to find the critical region before all earthquakes with $M \geq 6.5$ along the San Andreas system since 1950. The statistical significance of our results is assessed by performing the same procedure on a large number of randomly generated synthetic catalogs. The null hypothesis, that the observed acceleration in all these earthquakes could result from spurious patterns generated by our procedure in purely random catalogs, is rejected with 99.5% confidence. An empirical relation between the logarithm of the critical region radius ($R$) and the magnitude of the final event ($M$) is found, such that $\log R \propto 0.5 M$, suggesting that the largest probable event in a given region scales with the size of the regional fault network.
An Observational Test of the Critical Earthquake Concept
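The comparison between cumulative Benioff strain and the power-law time-to-failure model $\epsilon(t) = A + B\,(t_f - t)^m$ can be sketched as follows. This is a minimal illustration, assuming the standard energy-magnitude relation $\log_{10} E = 1.5M + 4.8$; the paper's full search over region size and failure time is not reproduced.

```python
import numpy as np

def benioff_strain(magnitudes):
    """Cumulative Benioff strain: running sum of sqrt(energy), using
    the energy-magnitude relation log10 E = 1.5*M + 4.8."""
    energy = 10.0 ** (1.5 * np.asarray(magnitudes) + 4.8)
    return np.cumsum(np.sqrt(energy))

def power_law_misfit(times, strain, t_f, m):
    """Least-squares misfit of eps(t) = A + B*(t_f - t)**m against the
    observed cumulative strain; A and B are solved linearly for each
    fixed pair (t_f, m)."""
    x = (t_f - np.asarray(times)) ** m
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, strain, rcond=None)
    return float(np.sum((strain - X @ coef) ** 2))
```

Minimizing the misfit over $(t_f, m)$, and comparing it against a straight-line fit, gives the acceleration diagnostic used to flag a region as approaching criticality.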
The sequence-level learning objective has been widely used in captioning tasks to achieve state-of-the-art performance for many models. Under this objective, the model is trained with a reward on the quality of its generated captions (sequence level). In this work, we show the limitations of the current sequence-level learning objective for captioning tasks from both theory and empirical results. In theory, we show that the current objective is equivalent to optimizing only the precision side of the caption set generated by the model and therefore overlooks the recall side. Empirical results show that models trained with this objective tend to score lower on recall. We propose adding a sequence-level exploration term to the current objective to boost recall; it guides the model to explore more plausible captions during training. In this way, the proposed objective takes both the precision and recall of generated captions into account. Experiments show the effectiveness of the proposed method on both video and image captioning datasets.
Better Captioning with Sequence-Level Exploration
Abstractive text summarization has recently become a popular approach, but data hallucination remains a serious problem, including with quantitative data. We propose a set of probing tests to evaluate how well abstractive summarization models encode quantitative values found in the input text. Our results show that in most cases, the encoders of recent SOTA-performing models struggle to provide embeddings that adequately represent quantitative values in the input compared to baselines; in particular, they outperform random representations in some, but surprisingly not all, cases. Under our assumptions, this suggests that the encoder's performance contributes to the quantity hallucination problem. One model type in particular, DistilBART-CDM, was observed to underperform randomly initialized representations in several experiments, and its performance versus BERT suggests that standard pretraining and fine-tuning approaches for the summarization task may play a role in the underperformance of some encoders.
Probing of Quantitative Values in Abstractive Summarization Models
Territorial control is a key aspect shaping the dynamics of civil war. Despite its importance, we lack data on territorial control that are fine-grained enough to account for subnational spatio-temporal variation and that cover a large set of conflicts. To resolve this issue, we propose a theoretical model of the relationship between territorial control and tactical choice in civil war and outline how Hidden Markov Models (HMMs) are suitable to capture theoretical intuitions and estimate levels of territorial control. We discuss challenges of using HMMs in this application and mitigation strategies for future work.
Measuring Territorial Control in Civil Wars Using Hidden Markov Models: A Data Informatics-Based Approach
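The estimation step — filtering the hidden level of territorial control from observed tactics — can be sketched with a toy HMM. The state and tactic labels and all probabilities below are illustrative assumptions encoding the theoretical intuition that terror tactics are used mainly where rebels do not control territory; they are not parameters from the paper.

```python
import numpy as np

# Hypothetical setup: hidden states are levels of territorial control,
# observations are tactic categories.
STATES = ["rebel", "contested", "government"]
TACTICS = ["conventional", "terrorism"]

# Transition matrix: territorial control changes slowly over time.
A = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
# Emission matrix: terror tactics dominate where rebels lack control.
B = np.array([[0.8, 0.2],   # rebel control  -> mostly conventional
              [0.5, 0.5],   # contested      -> mixed
              [0.2, 0.8]])  # gov. control   -> mostly terrorism
pi = np.full(3, 1.0 / 3.0)  # uniform prior over control levels

def forward_filter(obs):
    """Filtered P(state_t | obs_1..t) via the normalized forward algorithm."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)
```

Feeding a district's tactic sequence through `forward_filter` yields a probability over control levels at each time step, which is the kind of fine-grained territorial-control estimate the abstract describes.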
We show that through a point of an affine variety there always exists a smooth plane curve inside the ambient affine space whose multiplicity of intersection with the variety is at least 3. This result has an application to the study of affine schemes.
Approximation by smooth curves near the tangent cone
We explore the properties of the holographic fermions in an extremal $R$-charged black hole background with a running chemical potential, as well as the dipole coupling between fermions and the gauge field in the bulk. We find that although the running chemical potential affects the location of the Fermi surface, it does not change the type of fermions. We also study the onset of the Fermi gap and how the gap is affected by the running chemical potential and the dipole coupling. The spectral function in the limit $\omega\rightarrow0$ and the existence of the Fermi liquid are also investigated. Together, the running chemical potential and the dipole coupling can turn a non-Fermi liquid into the Landau-Fermi type.
Holographic fermions with running chemical potential and dipole coupling
KATRIN is a very large scale tritium-beta-decay experiment to determine the mass of the neutrino. It is presently under construction at the Forschungszentrum Karlsruhe, and makes use of the Tritium Laboratory built there for the ITER project. The combination of a very large retarding-potential electrostatic-magnetic spectrometer and an intense gaseous molecular tritium source makes possible a sensitivity to neutrino mass of 0.2 eV, about an order of magnitude below present laboratory limits. The measurement is kinematic and independent of whether the neutrino is Dirac or Majorana. The status of the project is summarized briefly in this report.
KATRIN: an experiment to measure the neutrino mass
We derive the expression for the normalized $q$-expectation value based on the density operator to order $1-q$ with the physical temperature in the Tsallis nonextensive statistics of entropic parameter $q$. With the derived expression, we calculate the momentum distribution and the correlation to order $1-q$ as functions of the inverse physical temperature for a free scalar field. To order $1-q$, the momentum distribution derived using the density operator coincides with the momentum distribution derived from the entropic measure described with the distribution, when the physical temperature equals the temperature in the distribution derived from the entropic measure. The correlation depends on the momenta for $q \neq 1$. A factor of two appears in the correlation for equal momenta, indicating that the bosonic effects at $q \neq 1$ and those at $q=1$ are similar for the correlation.
Momentum distribution and correlation for a free scalar field in the Tsallis nonextensive statistics based on density operator
The James Webb Space Telescope (JWST) will enable the search for and characterization of terrestrial exoplanet atmospheres in the habitable zone via transmission spectroscopy. However, relatively little work has been done to use solar system data, where ground truth is known, to validate spectroscopic retrieval codes intended for exoplanet studies, particularly in the limit of high resolution and high signal-to-noise (S/N). In this work, we perform such a validation by analyzing a high S/N empirical transmission spectrum of Earth using a new terrestrial exoplanet atmospheric retrieval model with heritage in Solar System remote sensing and gaseous exoplanet retrievals. We fit the Earth's 2-14 $\mu$m transmission spectrum in low resolution (R=250 at 5 $\mu$m) and high resolution (R=100,000 at 5 $\mu$m) under a variety of assumptions about the 1D vertical atmospheric structure. In the limit of noiseless transmission spectra, we find excellent agreement between model and data (deviations < 10%) that enable the robust detection of H2O, CO2, O3, CH4, N2, N2O, NO2, HNO3, CFC-11, and CFC-12, thereby providing compelling support for the detection of habitability, biosignature, and technosignature gases in the atmosphere of the planet using an exoplanet-analog transmission spectrum. Our retrievals at high spectral resolution show a marked sensitivity to the thermal structure of the atmosphere, trace gas abundances, density-dependent effects, such as collision-induced absorption and refraction, and even hint at 3D spatial effects. However, we used synthetic observations of TRAPPIST-1e to verify that the use of simple 1D vertically homogeneous atmospheric models will likely suffice for JWST observations of terrestrial exoplanets transiting M dwarfs.
Earth as a Transiting Exoplanet: A Validation of Transmission Spectroscopy and Atmospheric Retrieval Methodologies for Terrestrial Exoplanets
We study 2D Maxwell-dilaton gravity on AdS(2). We distinguish two distinctive cases depending on whether the AdS(2) solution can be lifted to an AdS(3) geometry. In both cases, in order to get a consistent boundary condition we need to work with a twisted energy momentum tensor which has non-zero central charge. With this central charge and the explicit form of the twisted Virasoro generators we compute the entropy of the system using the Cardy formula. The entropy is found to be the same as that obtained from gravity calculations for a specific value of the level of the U(1) current. The agreement is an indication of the AdS(2)/CFT(1) correspondence.
Central Charge for 2D Gravity on AdS(2) and AdS(2)/CFT(1) Correspondence
The spin-orbit torques (SOTs) generated from topological insulators (TIs) have gained increasing attention in recent years. These TIs, which are typically formed by epitaxially grown chalcogenides, possess extremely high SOT efficiencies and have great potential to be employed in next-generation spintronic devices. However, epitaxy of these chalcogenides is required to ensure the existence of the topologically-protected surface state (TSS), which limits the feasibility of using these materials in industry. In this work, we show that non-epitaxial Bi$_{x}$Te$_{1-x}$/ferromagnet heterostructures prepared by conventional magnetron sputtering possess giant SOT efficiencies even without the TSS. Through harmonic voltage measurements and hysteresis loop shift measurements, we find that the damping-like SOT efficiencies originating from the bulk spin-orbit interactions of such non-epitaxial heterostructures can reach values greater than 100% at room temperature. We further demonstrate current-induced SOT switching in these Bi$_{x}$Te$_{1-x}$-based heterostructures with thermally stable ferromagnetic layers, which indicates that such non-epitaxial chalcogenide materials can be efficient SOT sources in future SOT magnetic memory devices.
Efficient Spin-Orbit Torque Switching with Non-Epitaxial Chalcogenide Heterostructures
The growing need for a better understanding of nonlinear processes in plasma physics has in the last decades stimulated the development of new and more advanced data analysis techniques. This review lists some of the basic properties one may wish to infer from a data set and then presents appropriate analysis techniques with some recent applications. The emphasis is put on the investigation of nonlinear wave phenomena and turbulence in space plasmas.
Data Analysis Techniques for Resolving Nonlinear Processes in Plasmas : a Review
In transition metal dichalcogenides layers of atomic scale thickness, the electron-hole Coulomb interaction potential is strongly influenced by the sharp discontinuity of the dielectric function across the layer plane. This feature results in peculiar non-hydrogenic excitonic states, in which exciton-mediated optical nonlinearities are predicted to be enhanced as compared to their hydrogenic counterpart. To demonstrate this enhancement, we performed optical transmission spectroscopy of a MoSe$_2$ monolayer placed in the strong coupling regime with the mode of an optical microcavity, and analyzed the results quantitatively with a nonlinear input-output theory. We find an enhancement of both the exciton-exciton interaction and of the excitonic fermionic saturation with respect to realistic values expected in the hydrogenic picture. Such results demonstrate that unconventional excitons in MoSe$_2$ are highly favourable for the implementation of large exciton-mediated optical nonlinearities, potentially working up to room temperature.
Exciton-exciton interaction beyond the hydrogenic picture in a MoSe$_2$ monolayer in the strong light-matter coupling regime
We create an artificial system of agents (attention-based neural networks) which selectively exchange messages with each other in order to study the emergence of memetic evolution and how memetic evolutionary pressures interact with the genetic evolution of the network weights. We observe that the ability of agents to exert selection pressures on each other is essential for memetic evolution to bootstrap itself into a state which has both high-fidelity replication of memes and continuing production of new memes over time. However, in this system there is very little interaction between this memetic 'ecology' and the underlying tasks driving individual fitness: the emergent meme layer appears to be neither helpful nor harmful to the agents' ability to learn to solve tasks. Source code for these experiments is available at https://github.com/GoodAI/memes
Bootstrapping of memetic from genetic evolution via inter-agent selection pressures
In this work, we study the radiative decay of heavy quarkonium states using the effective Lagrangian approach. First, we construct the spin-breaking terms in the effective Lagrangian for the $nP\leftrightarrow mS$ transitions and determine some of the coupling constants by fitting the experimental data. Our results show that in $\chi_{cJ}$, $\psi(2S)$, $\Upsilon(2S)$, and $\Upsilon(3S)$ radiative decays, the spin-breaking effect is so small that it can be ignored. Second, we investigate the radiative decay widths of the $c\bar{c}(1D)$ states and find that if $\psi(3770)$ is a pure $^3D_1$ state, its radiative decay into $\chi_{cJ}+\gamma$ roughly preserves the heavy-quark spin symmetry, while if it is an $S$-$D$ mixing state with mixing angle $12^{\circ}$, the heavy-quark spin symmetry in its radiative decay and in the radiative decay of $\psi(3686)$ will be largely violated. Finally, we show that combining the radiative decay and the light-hadron decay of the $P$-wave $\chi_{bJ}(1,2P)$ states provides another way to extract information on the color-octet matrix element in the context of non-relativistic QCD (NRQCD) effective theory, and our result is consistent with the potential NRQCD hypothesis.
Investigating the heavy quarkonium radiative transitions with the effective Lagrangian method
High-resolution simulations within the GOY shell model are used to study various scaling relations for turbulence. A power-law relation between the second-order intermittency correction and the crossover from the inertial to the dissipation range is confirmed. Evidence is found for the intermediate viscous dissipation range proposed by Frisch and Vergassola. It is emphasized that insufficient dissipation-range resolution systematically drives the energy spectrum towards statistical-mechanical equipartition. In fully resolved simulations the inertial-range scaling exponents depend on both model parameters; in particular, there is no evidence that the conservation of a helicity-like quantity leads to universal exponents.
Links between dissipation, intermittency, and helicity in the GOY model revisited
Plane symmetric cosmological models are investigated with and without dark energy components in the field equations. Keeping an eye on the recent observational constraints concerning the accelerating phase of expansion of the universe, the role of the magnetic field is assessed. In the absence of dark energy components, the magnetic field can favour an accelerating model even if we take a linear relationship between the directional Hubble parameters. In the presence of dark energy components in the form of a time-varying cosmological constant, the influence of the magnetic field is found to be limited.
Cosmic Acceleration and Anisotropic models with Magnetic field
As virtual reality (VR) emerges as a mainstream platform, designers have started to experiment with new interaction techniques to enhance the user experience. This is a challenging task because designers not only strive to provide designs with good performance but also must be careful not to disrupt users' immersive experience. There is a dire need for a new evaluation tool that extends beyond traditional quantitative measurements to assist designers in the design process. We propose an EEG-based experimental framework that evaluates interaction techniques in VR by measuring intentionally elicited cognitive conflict. Through analysis of the feedback-related negativity (FRN) as well as other quantitative measurements, this framework allows designers to evaluate the effect of the variables of interest. We studied the framework by applying it to the fundamental task of 3D object selection using direct 3D input, i.e. a tracked hand in VR. The cognitive conflict is intentionally elicited by manipulating the selection radius of the target object. Our first behavioral experiment validated the framework, showing conflict-induced behavior adjustments in line with those reported in classical psychology experiment paradigms. Our second, EEG-based experiment examines the effect of the appearance of virtual hands. We found that the amplitude of the FRN correlates with the level of realism of the virtual hands, which concurs with the Uncanny Valley theory.
Measuring Cognitive Conflict in Virtual Reality with Feedback-Related Negativity
The sensitivity of direct terahertz detectors based on self-mixing of terahertz electromagnetic waves in field-effect transistors is being improved, with noise-equivalent power approaching that of Schottky-barrier-diode detectors. Here we report that such detectors, based on an AlGaN/GaN two-dimensional electron gas at 77~K, are able to sense broadband and incoherent terahertz radiation. The measured photocurrent as a function of the gate voltage agrees well with the self-mixing model, and the spectral response is mainly determined by the antenna. A Fourier-transform spectrometer equipped with detectors designed for the 340, 650 and 900~GHz bands allows for terahertz spectroscopy in a frequency range from 0.1 to 2.0~THz. The 900~GHz detector at 77~K offers an optical sensitivity of about $1~\mathrm{pW/\sqrt{Hz}}$, comparable to a commercial silicon bolometer at 4.2~K. By further improving the sensitivity, room-temperature detectors would find applications in active/passive terahertz imaging and terahertz spectroscopy.
Detection of incoherent broadband terahertz light using antenna-coupled high-electron-mobility field-effect transistors
We examine the potential impact of Large Language Models (LLMs) on the recognition of territorial sovereignty and its legitimization. We argue that while technology tools, such as Google Maps and LLMs like OpenAI's ChatGPT, are often perceived as impartial and objective, this perception is flawed, as AI algorithms reflect the biases of their designers or the data they are built on. We also stress the importance of evaluating the actions and decisions of AI and the multinational companies that offer it, which play a crucial role in aspects such as legitimizing and establishing ideas in the collective imagination. Our paper highlights the case of three controversial territories: Crimea, the West Bank and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions. We contend that the emergence of AI-based tools like LLMs is leading to a new scenario in which emerging technology consolidates power and influences our understanding of reality. Therefore, it is crucial to monitor and analyze the role of AI in the construction of legitimacy and the recognition of territorial sovereignty.
The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy
A finite range interacting particle system on a transitive graph is considered. Assuming that the dynamics and the initial measure are invariant, the normalized empirical distribution process converges in distribution to a centered diffusion process. As an application, a central limit theorem for certain hitting times, interpreted as failure times of a coherent system in reliability, is derived.
A functional central limit theorem for interacting particle systems on transitive graphs
For a wide family of even kernels $\{\varphi_u, u\in I\}$, we describe discrete sets $\Lambda$ such that every bandlimited signal $f$ can be reconstructed from the space-time samples $\{(f\ast\varphi_u)(\lambda), \lambda\in\Lambda, u\in I\}$.
Reconstruction of Bandlimited Functions from Space-Time Samples
We present empirical fits to the UBVRI light curves of type Ia supernovae. These fits are used to objectively evaluate light curve parameters. We find that the relative times of maximum light in the filter passbands are very similar for most objects. Surprisingly, the maximum at longer wavelengths is reached earlier than in the B and V light curves. This clearly demonstrates the complicated nature of the supernova emission. Bolometric light curves for a small sample of well-observed SNe Ia are constructed by integration over the optical filters. In most objects a plateau or inflection is observed in the light curve about 20-40 days after bolometric maximum. The strength of this plateau varies considerably among the individual objects in the sample. Furthermore, the rise times show a range of several days for the few objects which have observations early enough for such an analysis. On the other hand, the decline rate between 50 and 80 days past maximum is remarkably similar for all objects, with the notable exception of SN 1991bg. The similar late decline rates indicate that the energy release at late times is very uniform; the differences at early times are likely due to the radiation diffusing out of the ejecta. With the exception of SN 1991bg, the range of absolute bolometric luminosities of SNe Ia is found to be at least a factor of 2.5. The nickel masses derived from this estimate range from 0.4 to 1.1 Msun. It seems impossible to explain such a mass range by a single explosion mechanism, especially since the rate of gamma-ray escape at late phases seems to be very uniform.
Epochs of Maximum Light and Bolometric Light Curves of Type Ia Supernovae
One of the oldest outstanding problems in dynamical algebraic combinatorics is the following conjecture of P. Cameron and D. Fon-Der-Flaass (1995). Consider a plane partition $P$ in an $a \times b \times c$ box ${\sf B}$. Let $\Psi(P)$ denote the smallest plane partition containing the minimal elements of ${\sf B} - P$. Then if $p= a+b+c-1$ is prime, Cameron and Fon-Der-Flaass conjectured that the cardinality of the $\Psi$-orbit of $P$ is always a multiple of $p$. This conjecture was established for $p \gg 0$ by Cameron and Fon-Der-Flaass (1995) and for slightly smaller values of $p$ in work of K. Dilks, J. Striker, and the second author (2017). Our main theorem specializes to prove this conjecture in full generality.
Dynamics of plane partitions: Proof of the Cameron-Fon-Der-Flaass conjecture
We propose an algorithm for carrying out joint frame and frequency synchronization in reduced-guard-interval coherent optical orthogonal frequency division multiplexing (RGI-CO-OFDM) systems. The synchronization is achieved by using the same training symbols (TS) employed for training-aided channel estimation (TA-CE), thereby avoiding additional training overhead. The proposed algorithm is designed for polarization division multiplexing (PDM) RGI-CO-OFDM systems that use the Alamouti-type polarization-time coding for TA-CE. Due to their optimal TA-CE performance, Golay complementary sequences have been used as the TS in the proposed algorithm. The frame synchronization is accomplished by exploiting the cross-correlation between the received TS from the two orthogonal polarizations. The arrangement of the TS is also used to estimate the carrier frequency offset. Simulation results of a PDM RGI-CO-OFDM system operating at 238.1 Gb/s data rate (197.6-Gb/s after coding), with a total overhead of 9.2% (31.6% after coding), show that the proposed scheme has accurate synchronization, and is robust to linear fiber impairments.
Robust Frame and Frequency Synchronization Based on Alamouti Coding for RGI-CO-OFDM
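The complementary-correlation idea behind the frame synchronizer can be sketched in a simplified scalar setting (an illustration of the Golay property only, not the paper's PDM/Alamouti scheme): the aperiodic autocorrelations of a Golay pair sum to a delta, so summing the two matched-filter outputs yields a sidelobe-free timing peak.

```python
import numpy as np

def golay_pair(n):
    """Binary Golay complementary pair of length 2**n, built with the
    standard recursion a -> [a b], b -> [a -b]."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def frame_offset(rx_a, rx_b, a, b):
    """Estimate the frame start by cross-correlating each received
    stream with its training sequence and summing the outputs; the
    complementary property cancels all correlation sidelobes."""
    ca = np.correlate(rx_a, a, mode="valid")
    cb = np.correlate(rx_b, b, mode="valid")
    return int(np.argmax(ca + cb))
```

Because the sidelobes cancel exactly, the peak location is unambiguous even for short training sequences, which is one reason Golay sequences are attractive as training symbols.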
We investigate the use of hierarchical phrase-based SMT lattices in end-to-end neural machine translation (NMT). Weight pushing transforms the Hiero scores for complete translation hypotheses, with the full translation grammar score and full n-gram language model score, into posteriors compatible with NMT predictive probabilities. With a slightly modified NMT beam-search decoder we find gains over both Hiero and NMT decoding alone, with practical advantages in extending NMT to very large input and output vocabularies.
Syntactically Guided Neural Machine Translation
We review the construction of gravitational solutions holographically dual to N=1 quiver gauge theories with dynamical flavor multiplets. We focus on the D3-D7 construction and consider the case of finite temperature and finite quark chemical potential, where there is a charged black hole in the dual solution. The physical outputs of the model discussed include its thermodynamics (with susceptibilities) and general hydrodynamic properties.
Holographic Duals of Quark Gluon Plasmas with Unquenched Flavors
Aspect-based sentiment analysis aims to identify the sentiment polarity of a specific aspect in product reviews. We notice that about 30% of reviews do not contain obvious opinion words but still convey clear human-aware sentiment orientation, known as implicit sentiment. However, recent neural network-based approaches have paid little attention to the implicit sentiment entailed in reviews. To overcome this issue, we adopt Supervised Contrastive Pre-training on large-scale sentiment-annotated corpora retrieved from in-domain language resources. By aligning the representation of implicit sentiment expressions to those with the same sentiment label, the pre-training process leads to better capture of both implicit and explicit sentiment orientation towards aspects in reviews. Experimental results show that our method achieves state-of-the-art performance on SemEval2014 benchmarks, and comprehensive analysis validates its effectiveness on learning implicit sentiment.
Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training
The graph coloring problem has been widely investigated in the literature. It is often observed that many neighboring solutions share the same fitness value, but as far as we know, no deep analysis of this neutrality has been conducted. In this paper, we quantify the neutrality of some hard instances of the graph coloring problem. This neutrality property has to be detected as it impacts the search process: indeed, local optima may belong to plateaus that represent a barrier for local search methods. In this work, we also aim to point out the interest of exploiting neutrality during the search. Therefore, NILS, a generic local search dedicated to neutral problems, is run on several hard instances.
Neutrality in the Graph Coloring Problem
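As an illustrative aside to the abstract above, the neutrality of a candidate coloring can be quantified as the fraction of its one-vertex-recoloring neighbors that have identical fitness (number of conflicting edges). A minimal Python sketch; the function names are mine, not taken from the paper:

```python
def conflicts(edges, coloring):
    """Fitness of a coloring: number of monochromatic (conflicting) edges."""
    return sum(1 for u, v in edges if coloring[u] == coloring[v])

def neutral_degree(edges, coloring, k):
    """Fraction of one-vertex recolorings (the usual local-search
    neighborhood for graph coloring) that leave the fitness unchanged."""
    f = conflicts(edges, coloring)
    neutral = total = 0
    for v in range(len(coloring)):
        for c in range(k):
            if c == coloring[v]:
                continue
            trial = list(coloring)
            trial[v] = c
            total += 1
            if conflicts(edges, trial) == f:
                neutral += 1
    return neutral / total
```

A high neutral degree signals plateaus of the kind the abstract describes: many moves exist, but most do not change the fitness.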
In this paper, we consider the following fractional Laplacian system with one critical exponent and one subcritical exponent \begin{equation*} \begin{cases} (-\Delta)^{s}u+\mu u=|u|^{p-1}u+\lambda v, & x\in \mathbb{R}^{N},\\ (-\Delta)^{s}v+\nu v = |v|^{2^{\ast}-2}v+\lambda u, & x\in \mathbb{R}^{N}, \end{cases} \end{equation*} where $(-\Delta)^{s}$ is the fractional Laplacian, $0<s<1$, $N>2s$, $\lambda <\sqrt{\mu\nu}$, $1<p<2^{\ast}-1$, and $2^{\ast}=\frac{2N}{N-2s}$ is the Sobolev critical exponent. By using the Nehari manifold, we show that there exists a $\mu_{0}\in(0,1)$, such that when $0<\mu\leq\mu_{0}$, the system has a positive ground state solution. When $\mu>\mu_{0}$, there exists a $\lambda_{\mu,\nu}\in[\sqrt{(\mu-\mu_{0})\nu},\sqrt{\mu\nu})$ such that if $\lambda>\lambda_{\mu,\nu}$, the system has a positive ground state solution, and if $\lambda<\lambda_{\mu,\nu}$, the system has no ground state solution.
Positive ground state solutions for fractional Laplacian system with one critical exponent and one subcritical exponent
The secular equation for surface acoustic waves propagating on an orthotropic incompressible half-space is derived in a direct manner, using the method of first integrals.
Surface waves in orthotropic incompressible materials
There is a worldwide trend towards application of bibliometric research evaluation, in support of the needs of policy makers and research administrators. However, the assumptions and limitations of bibliometric measurements suggest a probabilistic rather than the traditional deterministic approach to the assessment of research performance. The aim of this work is to propose a multivariate stochastic model for measuring the performance of individual scientists and to compare the results of its application with those arising from a deterministic approach. The dataset of the analysis covers the scientific production indexed in Web of Science for the 2006-2010 period, of over 900 Italian academic scientists working in two distinct fields of the life sciences.
A multivariate stochastic model to assess research performance
We studied the single-dimer dynamics in a lattice diffusive model as a function of particle density in the high-densification regime. The mean square displacement is found to be subdiffusive both in one and two dimensions. The self part of the van Hove correlation function displays a single peak as a function of $r$ and signals a dramatic slowdown of the system at high density. The self intermediate scattering function is fitted to the Kohlrausch-Williams-Watts law. The exponent $\beta$ extracted from the fits is density independent while the relaxation time $\tau$ follows a scaling law with an exponent 2.5.
Stretched exponential relaxation in a diffusive lattice model
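To make the fitting step concrete: the Kohlrausch-Williams-Watts law mentioned above is $\exp(-(t/\tau)^\beta)$, and $\tau$, $\beta$ can be extracted by least squares. A self-contained sketch using synthetic noise-free data and a coarse grid search (an illustration only; the paper's fits are on actual correlation-function data):

```python
import math

def kww(t, tau, beta):
    """Kohlrausch-Williams-Watts stretched exponential exp(-(t/tau)^beta)."""
    return math.exp(-((t / tau) ** beta))

# synthetic, noise-free relaxation data with known tau = 10, beta = 0.6
ts = [0.5 * i for i in range(1, 101)]
data = [kww(t, 10.0, 0.6) for t in ts]

def sse(tau, beta):
    """Sum of squared errors of the KWW form against the data."""
    return sum((kww(t, tau, beta) - y) ** 2 for t, y in zip(ts, data))

# coarse grid search over (tau, beta); a real fit would refine further
best = min(((sse(tau / 10, beta / 100), tau / 10, beta / 100)
            for tau in range(50, 201, 5)
            for beta in range(30, 101, 5)))
_, tau_fit, beta_fit = best
```

Repeating the fit at several densities would reproduce the analysis described in the abstract: a density-independent $\beta$ and a $\tau$ that scales with density.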
Water-free NH4V3O8 microcrystals have been successfully synthesized by a microwave-assisted hydrothermal method. The products were characterized by means of X-ray diffraction, scanning electron microscopy, Fourier transform infrared spectroscopy, thermal gravimetric analysis, cyclic voltammetry, and galvanostatic cycling. The results show phase-pure products whose particle size and morphology can be tailored by varying the reaction conditions, i.e., reaction temperature, synthesis duration, and initial pH value. For instance, at low pH (2.5 to 3), flower-like agglomerates with primary particles of 20 to 30 microm length are found, while at pH = 5.5 single microplates with hexagonal outline (30 to 40 microm) prevail. The sample with the comparably highest specific surface area (11 m2/g) was studied regarding its electrochemical performance. It shows an extraordinary initial discharge capacity of 378 mA h g-1 at 10 mA g-1, which corresponds to the intercalation of 4.2 Li+/f.u.
Microwave-assisted hydrothermal synthesis of NH4V3O8 microcrystals with controllable morphology
The measurement of the large scale distribution of neutral hydrogen in the late Universe, obtained with radio telescopes through the hydrogen 21cm line emission, has the potential to become a key cosmological probe in the upcoming years. We explore the constraining power of 21cm intensity mapping observations on the full set of cosmological parameters that describe the $\Lambda$CDM model. We assume a single-dish survey for the SKA Observatory and simulate the 21cm linear power spectrum monopole and quadrupole within six redshift bins in the range $z=0.25-3$. Forecasted constraints are computed numerically through Markov Chain Monte Carlo techniques. We extend the sampler \texttt{CosmoMC} by implementing the likelihood function for the 21cm power spectrum multipoles. We assess the constraining power of the mock data set alone and combined with Planck 2018 CMB observations. We include a discussion on the impact of extending measurements to non-linear scales in our analysis. We find that 21cm multipoles observations alone are enough to obtain constraints on the cosmological parameters comparable with other probes. Combining the 21cm data set with CMB observations results in significantly reduced errors on all the cosmological parameters. The strongest effect is on $\Omega_ch^2$ and $H_0$, for which the error is reduced by almost a factor four. The percentage errors we estimate are $\sigma_{\Omega_ch^2} = 0.25\%$ and $\sigma_{H_0} = 0.16\%$, to be compared with the Planck only results $\sigma_{\Omega_ch^2} = 0.99\%$ and $\sigma_{H_0} = 0.79\%$. We conclude that 21cm SKAO observations will provide a competitive cosmological probe, complementary to CMB and, thus, pivotal for gaining statistical significance on the cosmological parameters constraints, allowing a stress test for the current cosmological model.
Multipole expansion for 21cm Intensity Mapping power spectrum: forecasted cosmological parameters estimation for the SKA Observatory
Self-supervised learning (SSL) learns to capture discriminative visual features useful for knowledge transfers. To better accommodate the object-centric nature of current downstream tasks such as object recognition and detection, various methods have been proposed to suppress contextual biases or disentangle objects from contexts. Nevertheless, these methods may prove inadequate in situations where object identity needs to be reasoned from associated context, such as recognizing or inferring tiny or obscured objects. As an initial effort in the SSL literature, we investigate whether and how contextual associations can be enhanced for visual reasoning within SSL regimes, by (a) proposing a new Self-supervised method with external memories for Context Reasoning (SeCo), and (b) introducing two new downstream tasks, lift-the-flap and object priming, addressing the problems of "what" and "where" in context reasoning. In both tasks, SeCo outperformed all state-of-the-art (SOTA) SSL methods by a significant margin. Our network analysis revealed that the proposed external memory in SeCo learns to store prior contextual knowledge, facilitating target identity inference in the lift-the-flap task. Moreover, we conducted psychophysics experiments and introduced a Human benchmark in Object Priming dataset (HOP). Our results demonstrate that SeCo exhibits human-like behaviors.
Reason from Context with Self-supervised Learning
For the description of the Universe expansion compatible with observational data, a model of modified gravity, Lovelock gravity with dilaton, is investigated. A D-dimensional space with 3- and (D-4)-dimensional maximally symmetric subspaces is considered. Spaces without matter and with a perfect fluid are under test. In the various forms of the theory considered (third order without dilaton, and second order, i.e. Einstein-Gauss-Bonnet gravity, with and without dilaton), stationary, power-law, exponential and exponent-of-exponent form cosmological solutions are obtained. The last two forms include solutions which clearly describe the accelerating expansion of the 3-dimensional subspace. There is also a set of solutions describing a cosmological expansion which does not tend to isotropization in the presence of matter.
Accelerating cosmologies in Lovelock gravity with dilaton
We use deep Gemini/GMOS-S $g,r$ photometry to study the stellar populations of the recently discovered Milky Way satellite candidates Horologium I, Pictor I, Grus I, and Phoenix II. Horologium I is most likely an ultra-faint dwarf galaxy at $D_\odot = 68\pm3$ kpc, with $r_h = 23^{+4}_{-3}$pc and $\langle $[Fe/H]$ \rangle = -2.40^{+0.10}_{-0.35}$\,dex. Its color-magnitude diagram shows evidence of a split sub-giant branch similar to that seen in some globular clusters. Additionally, Gaia DR2 data suggests it is, or was, a member of the Magellanic Cloud group. Pictor I with its compact size ($r_h = 12.9^{+0.3}_{-0.2}$pc) and metal-poor stellar population ($\langle $[Fe/H]$ \rangle = -2.28^{+0.30}_{-0.25}$) closely resembles confirmed star clusters. Grus I lacks a well-defined centre, but has two stellar concentrations within the reported half-light radius ($r_h = 1.77^{+0.85}_{-0.39}$ arcmin) and has a mean metallicity of $\langle $[Fe/H]$ \rangle = -2.5\pm0.3$. Phoenix II has a half-light radius of $r_h = 12.6\pm2.5$pc and an $\langle $[Fe/H]$ \rangle = -2.10^{+0.25}_{-0.20}$ and exhibits S-shaped tidal arms extending from its compact core. Great circles through each of these substructures intersect at the Large Magellanic Cloud (LMC). This suggests that these objects are, or once were, satellites of the LMC.
On the Nature of Ultra-faint Dwarf Galaxy Candidates. III. Horologium I, Pictor I, Grus I, and Phoenix II
Recently, early superinflation driven by a phantom field has been proposed and studied. The detection of primordial gravitational waves is an important means of probing the state of the very early universe. In this brief report we discuss in detail the gravitational wave background excited during phantom superinflation.
Gravitational Wave Background from Phantom Superinflation
In this study, we investigate the mass spectrum of $\pi$ and $\sigma$ mesons at finite chemical potential using the self-consistent NJL model and the Fierz-transformed interaction Lagrangian. The model introduces an arbitrary parameter $\alpha$ to reflect the weights of the Fierz-transformed interaction channels. We show that when $\alpha$ exceeds a certain threshold value, the chiral phase transition transforms from a first-order one to a smooth crossover, which is evident from the behaviors of the chiral condensates and meson masses. Additionally, at high chemical potential, the smaller the value of $\alpha$, the higher the masses of the $\pi$ and $\sigma$ mesons become. Moreover, the Mott and dissociation chemical potentials both increase with the increase in $\alpha$. Thus, the meson mass emerges as a valuable experimental observable for determining the value of $\alpha$ and investigating the properties of the chiral phase transition in dense QCD matter.
(pseudo)Scalar mesons in a self-consistent NJL model
In this talk we present the recent calculation in all partonic channels of the fully differential single jet inclusive cross section at Next-to-Next-to-Leading Order in QCD. We discuss the size and shape of the perturbative corrections as a function of the functional form of the renormalisation and factorisation scales and compare the predictions at NLO and NNLO to the available ATLAS 7 TeV data. We find significant effects at low-$p_T$ due to changes in the functional form of the scale choice whereas at high-$p_T$ the two most common scale choices in the literature give identical results and the perturbative corrections lead to a substantial reduction in the scale dependence of the theoretical prediction at NNLO.
Differential single jet inclusive production at Next-to-Next-to-Leading Order in QCD
Various results are proved giving lower bounds for the $m$th intrinsic volume $V_m(K)$, $m=1,\dots,n-1$, of a compact convex set $K$ in ${\mathbb{R}}^n$, in terms of the $m$th intrinsic volumes of its projections on the coordinate hyperplanes (or its intersections with the coordinate hyperplanes). The bounds are sharp when $m=1$ and $m=n-1$. These are reverse (or dual, respectively) forms of the Loomis-Whitney inequality and versions of it that apply to intrinsic volumes. For the intrinsic volume $V_1(K)$, which corresponds to mean width, the inequality obtained confirms a conjecture of Betke and McMullen made in 1983.
Reverse and dual Loomis-Whitney-type inequalities
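For reference, the classical Loomis-Whitney inequality that the results above reverse and dualize can be stated as follows (a standard formulation, not quoted from the paper):

```latex
% Classical Loomis-Whitney inequality for a compact set K in R^n:
% the volume is controlled by the (n-1)-volumes of its coordinate projections.
\[
  V_n(K)^{\,n-1} \;\le\; \prod_{i=1}^{n} V_{n-1}\bigl(K \mid e_i^{\perp}\bigr),
\]
% where K | e_i^perp denotes the orthogonal projection of K onto the
% coordinate hyperplane orthogonal to the standard basis vector e_i.
```

The paper's inequalities instead bound the intrinsic volumes $V_m(K)$ from below in terms of the corresponding projections (reverse form) or sections (dual form).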
We give an answer to the following question: for which metrics on an abstract lattice does the completion as a metric space coincide with the completion as a lattice? We obtain the answer for inductive limits of lattices which are complete in both senses. As an application we construct such a metric on the lattice of continuous functions endowed with the uniform convergence.
On Metric Completeness and Completeness in the Sense of Lattices
Individuals are always limited by inelastic resources, such as time and energy, which restrict the effort they can dedicate to social interaction and limit their contact capacity. Contact capacity plays an important role in the dynamics of social contagions, which so far has eluded theoretical analysis. In this paper, we first propose a non-Markovian model to understand the effects of contact capacity on social contagions, in which each individual can only contact and transmit the information to a finite number of neighbors. We then develop a heterogeneous edge-based compartmental theory for this model, and a remarkable agreement with simulations is obtained. Through theory and simulations, we find that enlarging the contact capacity makes the network more fragile to behavior spreading. Interestingly, we find that both continuous and discontinuous dependence of the final adoption size on the information transmission probability can arise, and there is a crossover phenomenon between the two types of dependence. More specifically, the crossover phenomenon can be induced by enlarging the contact capacity only when the degree exponent is above a critical degree exponent, while the final behavior adoption size always grows continuously for any contact capacity when the degree exponent is below the critical value.
Dynamics of social contagions with limited contact capacity
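A toy simulation can illustrate the ingredients the abstract describes: a finite contact capacity per adopter and non-Markovian adoption with memory of cumulative received transmissions. This is my own simplified sketch on an Erdős-Rényi network, not the paper's model or theory:

```python
import random

def spread(n, p_edge, lam, capacity, threshold=2, seed_frac=0.05, seed=1):
    """Toy spreading process with limited contact capacity: each new
    adopter transmits (with probability lam per contact) to at most
    `capacity` random neighbors; a node adopts once it has received
    `threshold` cumulative transmissions (non-Markovian memory)."""
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for u in range(n):                       # Erdos-Renyi contact network
        for v in range(u + 1, n):
            if rng.random() < p_edge:
                nbrs[u].append(v)
                nbrs[v].append(u)
    received = [0] * n
    adopted = set(rng.sample(range(n), max(1, int(seed_frac * n))))
    active = set(adopted)
    while active:
        new_active = set()
        for u in active:
            for v in rng.sample(nbrs[u], min(capacity, len(nbrs[u]))):
                if v in adopted or rng.random() >= lam:
                    continue
                received[v] += 1
                if received[v] >= threshold:
                    adopted.add(v)
                    new_active.add(v)
        active = new_active
    return len(adopted) / n                  # final adoption size

final_size = spread(200, 0.05, 0.5, capacity=3)
```

Sweeping `capacity` and `lam` in such a simulation is one way to observe the qualitative effect reported above, namely that larger contact capacity enlarges the final adoption size.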
We present a semi-analytic method to calculate the dispersion curves and the group velocity of photonic crystal waveguide modes in two-dimensional geometries. We model the waveguide as a homogeneous strip, surrounded by photonic crystal acting as diffracting mirrors. Following conventional guided-wave optics, the properties of the photonic crystal waveguide may be calculated from the phase upon propagation over the strip and the phase upon reflection. The cases of interest require a theory including the specular order and one other diffracted reflected order. The computational advantages let us scan a large parameter space, allowing us to find novel types of solutions.
Semi-analytic method for slow light photonic crystal waveguide design
While the abundance of elemental deuterium is relatively low (D/H ~ a few 1E-5), orders of magnitude higher D/H abundance ratios have been found for many interstellar molecules, enhanced by deuterium fractionation. In cold molecular clouds (T < 20K) deuterium fractionation is driven by the H2D+ ion, whereas at higher temperatures (T > 20-30K) gas-phase deuteration is controlled by reactions with CH2D+ and C2HD+. While the role of H2D+ in driving cold interstellar deuterium chemistry is well understood, thanks to observational constraints from direct measurements of H2D+, deuteration stemming from CH2D+ is far less understood, caused by the absence of direct observational constraints of its key ions. Therefore, making use of chemical surrogates is imperative for exploring deuterium chemistry at intermediate temperatures. Formed at an early stage of ion-molecule chemistry, directly from the dissociative recombination of CH3+ (CH2D+), CH (CD) is an ideal tracer for investigating deuterium substitution initiated by reactions with CH2D+. This paper reports the first detection of CD in the interstellar medium, carried out using the APEX 12m telescope toward the widely studied low-mass protostellar system IRAS 16293-2422. Gas-phase chemical models reproducing the observed CD/CH abundance ratio of 0.016 suggests that it reflects `warm deuterium chemistry' (which ensues in moderately warm conditions of the interstellar medium) and illustrates the potential use of the CD/CH ratio in constraining the gas temperatures of the envelope gas clouds it probes.
First detection of deuterated methylidyne (CD) in the interstellar medium
We consider a mobile impurity particle injected into a one-dimensional quantum gas. The time evolution of the system strongly depends on whether the mass of the impurity and the masses of the host particles are equal or not. For equal masses, the model is Bethe Ansatz solvable, but for unequal masses, the model is no longer integrable and the Bethe Ansatz technique breaks down. We construct a controllable numerical method of computing the spectrum of the model with a finite number of host particles, based on exact diagonalization of the Hamiltonian in the truncated basis of the Bethe Ansatz states. We illustrate our approach on a few-body system of 5+1 particles, and trace the evolution of the spectrum depending on the mass ratio of the impurity and the host particles.
Mobile impurity in a one-dimensional quantum gas: Exact diagonalization in the Bethe Ansatz basis
Dimuonium (the bound system of two muons, $\mu^+\mu^-$-atom) has not been observed yet. In this paper we discuss the electromagnetic production of dimuonium at RHIC and LHC in relativistic heavy ion collisions. The production of parastates is analyzed in the equivalent photon approximation. For the treatment of orthostates, we develop a three photon formalism. We determine the production rates at RHIC and LHC with an accuracy of a few percent and discuss problems related to the observation of dimuonium.
Production of bound {$\mu^{+}\mu^{-}$}-systems in relativistic heavy ion collisions
Large Question-and-Answer (Q&A) platforms support diverse knowledge curation on the Web. While researchers have studied user behavior on the platforms in a variety of contexts, there is relatively little insight into important by-products of user behavior that also encode knowledge. Here, we analyze and model the macroscopic structure of tags applied by users to annotate and catalog questions, using a collection of 168 Stack Exchange websites. We find striking similarity in tagging structure across these Stack Exchange communities, even though each community evolves independently (albeit under similar guidelines). Using our empirical findings, we develop a simple generative model that creates random bipartite graphs of tags and questions. Our model accounts for the tag frequency distribution but does not explicitly account for co-tagging correlations. Even under these constraints, we demonstrate empirically and theoretically that our model can reproduce a number of statistical properties of the co-tagging graph that links tags appearing in the same post.
Modeling and Analysis of Tagging Networks in Stack Exchange Communities
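The generative model sketched in the abstract, a random bipartite tag-question graph constrained only by the tag frequency distribution, can be prototyped in a few lines. The following is my own minimal sketch with a Zipf weight standing in for the empirical tag frequencies; it is not the authors' code:

```python
import random
from collections import Counter

def random_tagging_graph(n_questions, n_tags, tags_per_q=3, s=1.2, seed=0):
    """Random bipartite question-tag graph: each question draws
    `tags_per_q` distinct tags, tag t chosen with Zipf weight (t+1)^-s
    (a stand-in for the empirical tag-frequency distribution)."""
    rng = random.Random(seed)
    weights = [(t + 1) ** (-s) for t in range(n_tags)]
    edges = []
    for q in range(n_questions):
        chosen = set()
        while len(chosen) < tags_per_q:
            chosen.add(rng.choices(range(n_tags), weights=weights)[0])
        edges.extend((q, t) for t in chosen)
    return edges

edges = random_tagging_graph(500, 50)
tag_degree = Counter(t for _, t in edges)   # heavy-tailed, Zipf-like
```

Statistics of the resulting co-tagging graph (tags linked when they appear on the same question) can then be compared against the empirical Stack Exchange data, in the spirit of the analysis above.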
The Next Generation 5G Networks can greatly benefit from the synergy between virtualization paradigms, such as the Network Function Virtualization (NFV), and service provisioning platforms such as the IP Multimedia Subsystem (IMS). The NFV concept is evolving towards a lightweight solution based on containers that, in contrast to classic virtual machines, do not carry a whole operating system and result in more efficient and scalable deployments. On the other hand, IMS has become an integral part of the 5G core network, for instance, to provide advanced services like Voice over LTE (VoLTE). In this paper we combine these virtualization and service provisioning concepts, deriving a containerized IMS infrastructure, dubbed cIMS, providing its assessment through statistical characterization and experimental measurements. Specifically, we: i) model cIMS through the queueing networks methodology to characterize the utilization of virtual resources under constrained conditions; ii) derive an extended version of the Pollaczek-Khinchin formula, which is useful to deal with bulk arrivals; iii) tackle an optimization problem aimed at maximizing the whole cIMS performance in the presence of capacity constraints, thus providing new means for the service provider to manage service level agreements (SLAs); iv) evaluate a range of cIMS scenarios, considering different queuing disciplines including also multiple job classes. An experimental testbed based on the open source platform Clearwater has been deployed to derive some realistic values of key parameters (e.g. arrival and service times).
Statistical Assessment of IP Multimedia Subsystem in a Softwarized Environment: a Queueing Networks Approach
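For background on the formula the paper extends: the classical Pollaczek-Khinchin result gives the mean waiting time of an M/G/1 queue as $W = \lambda E[S^2] / (2(1-\rho))$ with $\rho = \lambda E[S]$. A small sketch of the classical version only (the paper's bulk-arrival extension is not reproduced here):

```python
def pk_mean_wait(lam, mean_s, second_moment_s):
    """Classical Pollaczek-Khinchin mean waiting time for an M/G/1 queue:
    W = lam * E[S^2] / (2 * (1 - rho)), with rho = lam * E[S] < 1."""
    rho = lam * mean_s
    if rho >= 1:
        raise ValueError("unstable queue: rho >= 1")
    return lam * second_moment_s / (2.0 * (1.0 - rho))

# M/M/1 sanity check: exponential service with mu = 1, lam = 0.5
# gives Wq = rho / (mu - lam) = 1
w_exp = pk_mean_wait(0.5, 1.0, 2.0)
# deterministic service of the same mean halves the wait
w_det = pk_mean_wait(0.5, 1.0, 1.0)
```

The deterministic-versus-exponential comparison shows why the second moment of the service time, and hence the service discipline and job-class mix studied in the paper, matters for cIMS dimensioning.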
This paper proposes a Deep Reinforcement Learning algorithm for financial portfolio trading based on Deep Q-learning. The algorithm is capable of trading high-dimensional portfolios from cross-sectional datasets of any size which may include data gaps and non-unique history lengths in the assets. We sequentially set up environments by sampling one asset for each environment while rewarding investments with the resulting asset's return and cash reservation with the average return of the set of assets. This enforces the agent to strategically assign capital to assets that it predicts to perform above-average. We apply our methodology in an out-of-sample analysis to 48 US stock portfolio setups, varying in the number of stocks from ten up to 500 stocks, in the selection criteria and in the level of transaction costs. The algorithm on average outperforms all considered passive and active benchmark investment strategies by a large margin using only one hyperparameter setup for all portfolios.
High-Dimensional Stock Portfolio Trading with Deep Reinforcement Learning
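The reward scheme described above (asset return when invested, average return of the asset set when in cash) can be illustrated with a tabular Q-learning toy, far simpler than the paper's deep network but following the same reward logic. All names and the one-bit state encoding are my own choices for the sketch:

```python
import random

def q_learning_trader(returns, avg_returns, alpha=0.1, gamma=0.9,
                      eps=0.1, episodes=200, seed=0):
    """Tabular stand-in for the Deep Q-learning setup sketched above:
    action 1 invests in the sampled asset (reward: that asset's return),
    action 0 holds cash (reward: average return of the asset set).
    The state is simply the sign of the previous period's return."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (-1, 1) for a in (0, 1)}
    for _ in range(episodes):
        state = 1
        for t in range(1, len(returns)):
            if rng.random() < eps:                    # epsilon-greedy
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda x: q[(state, x)])
            r = returns[t] if a == 1 else avg_returns[t]
            nxt = 1 if returns[t] >= 0 else -1
            target = r + gamma * max(q[(nxt, 0)], q[(nxt, 1)])
            q[(state, a)] += alpha * (target - q[(state, a)])
            state = nxt
    return q

# on an asset that always outperforms the average, investing should win
q = q_learning_trader([0.01] * 50, [0.0] * 50)
```

As in the abstract, rewarding cash with the average return pushes the agent to invest only in assets it expects to perform above average.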
Chandra has recently observed 1E0657-56, a hot merging system at z=0.3 (the ``bullet'' cluster), for 500 ks. I present some of the findings from this dataset. The cluster exhibits a prominent bow shock with M=3.0+-0.4 (one of only two known M>>1 shock fronts), which we use for a first test of the electron-ion equilibrium in an intergalactic plasma. The temperatures across the shock are consistent with instant shock-heating of the electrons; at 95% confidence, the equilibration timescale is much shorter than the collisional Spitzer value. Global properties of 1E0657-56 are also remarkable. Despite being extremely unrelaxed, the cluster fits well on the Lx-T relation, yet its total mass estimated from the M-T relation is more than twice the value measured from lensing. This is consistent with simulations predicting that in the middle of a merger, global temperature and X-ray luminosity may be temporarily boosted by a large factor.
Chandra observation of the most interesting cluster in the Universe
In a wide class of direct and semi-direct gauge mediation models, it has been observed that the gaugino masses vanish at leading order. It implies that there is a hierarchy between the gaugino and sfermion masses, invoking a fine-tuning problem in the Higgs sector via radiative corrections. In this paper, we explore the possibility of solving this anomalously light gaugino problem exploiting strong conformal dynamics in the hidden sector. With a mild assumption on the anomalous dimensions of the hidden sector operators, we show that the next to leading order contributions to the gaugino masses can naturally be in the same order as the sfermion masses. The $\mu/B_\mu$ problem is also discussed.
Light Gauginos and Conformal Sequestering
A family of spin-orbit coupled honeycomb Mott insulators offers a playground to search for quantum spin liquids (QSLs) via bond-dependent interactions. In candidate materials, a symmetric off-diagonal $\Gamma$ term, close cousin of Kitaev interaction, has emerged as another source of frustration that is essential for complete understanding of these systems. However, the ground state of honeycomb $\Gamma$ model remains elusive, with a suggested zigzag magnetic order. Here we attempt to resolve the puzzle by perturbing the $\Gamma$ region with a staggered Heisenberg interaction which favours the zigzag ordering. Despite such favour, we find a wide disordered region inclusive of the $\Gamma$ limit in the phase diagram. Further, this phase exhibits a vanishing energy gap, a collapse of excitation spectrum, and a logarithmic entanglement entropy scaling on long cylinders, indicating a gapless QSL. Other quantities such as plaquette-plaquette correlation are also discussed.
Gapless quantum spin liquid in a honeycomb $\Gamma$ magnet
We study a single vortex in a two-dimensional $p+ip$ Fermi superfluid interacting with a Bose-Einstein condensate. The Fermi superfluid is topologically non-trivial and hosts a zero-energy Majorana bound state at the vortex core. Assuming a repulsive $s$-wave contact interaction between fermions and bosons, we find that fermions are depleted from the vortex core when the bosonic density becomes sufficiently large. In this case, a dynamically-driven local interface emerges between fermions and bosons, along which chiral Majorana edge states should appear. We examine in detail the variation of vortex-core structures as well as the formation of chiral Majorana edge states with increasing bosonic density. In particular, when the angular momentum of the vortex matches the chirality of the Fermi superfluid, the Majorana zero mode and normal bound states within the core continuously evolve into chiral Majorana edge states. Otherwise, a first-order transition occurs in the lowest excited state within the core, due to the competition between counter-rotating normal bound states in forming chiral Majorana edge states. Such a transition is manifested as a sharp peak in the excitation gap above the Majorana zero mode, at which point the Majorana zero mode is protected by a large excitation gap. Our study presents an illuminating example of how topological defects can be dynamically controlled in the context of cold atomic gases.
Chiral Majorana edge states in the vortex core of a $p+ip$ Fermi superfluid
We first introduce and study the notion of multi-weighted blow-ups, which is later used to systematically construct an explicit yet efficient algorithm for functorial logarithmic resolution in characteristic zero, in the sense of Hironaka. Specifically, for a singular, reduced closed subscheme $X$ of a smooth scheme $Y$ over a field of characteristic zero, we resolve the singularities of $X$ by taking proper transforms $X_i \subset Y_i$ along a sequence of multi-weighted blow-ups $Y_N \to Y_{N-1} \to \dotsb \to Y_0 = Y$ which satisfies the following properties: (i) the $Y_i$ are smooth Artin stacks with simple normal crossing exceptional loci; (ii) at each step we always blow up the worst singular locus of $X_i$, and witness on $X_{i+1}$ an immediate improvement in singularities; (iii) and finally, the singular locus of $X$ is transformed into a simple normal crossing divisor on $X_N$.
Logarithmic resolution via multi-weighted blow-ups
During the preparatory phase of the International Linear Collider (ILC) project, all technical development and engineering design needed for the start of ILC construction must be completed, in parallel with intergovernmental discussion of governance and sharing of responsibilities and cost. The ILC Preparatory Laboratory (Pre-lab) is conceived to execute the technical and engineering work and to assist the intergovernmental discussion by providing relevant information upon request. It will be based on a worldwide partnership among laboratories with a headquarters hosted in Japan. This proposal, prepared by the ILC International Development Team and endorsed by the International Committee for Future Accelerators, describes an organisational framework and work plan for the Pre-lab. Elaboration, modification and adjustment should be introduced for its implementation, in order to incorporate requirements arising from the physics community, laboratories, and governmental authorities interested in the ILC.
Proposal for the ILC Preparatory Laboratory (Pre-lab)
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods are applied to inter-calibrate the observers which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational thresholds [S_S] defined such that the observer is assumed to miss all of the groups with an area smaller than S_S and report all the groups larger than S_S. Next, using a Monte-Carlo method we construct, from the reference data set, a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality. We emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
A New Calibrated Sunspot Group Series Since 1749: Statistics of Active Day Fractions
Hadronic form factors for the rare weak transitions $\Lambda_{b}\rightarrow\Lambda^{(*)}$ are calculated using a nonrelativistic quark model. The form factors are extracted in two ways. An analytic extraction using single component wave functions (SCA) with the quark current being reduced to its nonrelativistic Pauli form is employed in the first method. In the second method, the form factors are extracted numerically using the full quark model wave function (MCN) with the full relativistic form of the quark current. Although there are differences between the two sets of form factors, both sets satisfy the relationships expected from the heavy quark effective theory (HQET). Differential decay rates, branching ratios and forward-backward asymmetries (FBAs) are calculated for the dileptonic decays $\Lambda_{b}\rightarrow\Lambda^{(*)}\ell^{+}\ell^{-}$, for transitions to both ground state and excited daughter baryons. Inclusion of the long distance contributions from charmonium resonances significantly enhances the decay rates. In the MCN model the $\Lambda(1600)$ mode is the dominant mode in the $\mu$ channel when charmonium resonances are considered; the $\Lambda(1520)$ mode is also found to have a comparable branching ratio to that of the ground state in the $\mu$ channel.
Rare dileptonic decays of \Lambda_b in a quark model
The determination of heavy element abundances from planetary nebula (PN) spectra provides an exciting opportunity to study the nucleosynthesis occurring in the progenitor asymptotic giant branch (AGB) star. We perform post-processing calculations on AGB models of a large range of mass and metallicity to obtain predictions for the production of neutron-capture elements up to the first s-process peak at strontium. We find that solar metallicity intermediate-mass AGB models provide a reasonable match to the heavy element composition of Type I PNe. Likewise, many of the Se and Kr enriched PNe are well fitted by lower mass models with solar or close-to-solar metallicities. However the most Kr-enriched objects, and the PN with sub-solar Se/O ratios are difficult to explain with AGB nucleosynthesis models. Furthermore, we compute s-process abundance predictions for low-mass AGB models of very low metallicity ([Fe/H] =-2.3) using both scaled solar and an alpha-enhanced initial composition. For these models, O is dredged to the surface, which means that abundance ratios measured relative to this element (e.g., [X/O]) do not provide a reliable measure of initial abundance ratios, or of production within the star owing to internal nucleosynthesis.
Heavy element abundances in planetary nebulae: A theorist's perspective
We consider the time-dependent traveling salesman problem (TDTSP), a generalization of the asymmetric traveling salesman problem (ATSP) that incorporates time-dependent cost functions. In our model, the cost of an arc can change arbitrarily over time (and does not only depend on the position in the tour). The TDTSP turns out to be structurally more difficult than the TSP. We prove it is NP-hard and APX-hard even if a generalized version of the triangle inequality is satisfied. In particular, we show that even the computation of one-trees becomes intractable in the case of time-dependent costs. We derive two IP formulations of the TDTSP based on time-expansion and propose different pricing algorithms to handle the significantly increased problem size. We introduce multiple families of cutting planes for the TDTSP as well as different LP-based primal heuristics, a propagation method and a branching rule. We conduct computational experiments to evaluate the effectiveness of our approaches on randomly generated instances. We are able to decrease the optimality gap remaining after one hour of computations to about six percent, compared to a gap of more than forty percent obtained by an off-the-shelf IP solver. Finally, we carry out a first attempt to learn strong branching decisions for the TDTSP. At the current state, this method does not improve the running times.
Cuts, Primal Heuristics, and Learning to Branch for the Time-Dependent Traveling Salesman Problem
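The time dependence described above can be made concrete with a tiny brute-force solver (illustrative only; the paper's IP formulations and pricing algorithms scale far beyond this). As a simplifying assumption of mine, travel time is taken equal to arc cost, and the cost function below is a hypothetical instance with a congestion term:

```python
from itertools import permutations

def tour_cost(tour, cost, t0=0):
    """Total cost of a closed tour when each arc's cost depends on departure
    time. Travel time is assumed equal to the arc cost (my simplification)."""
    t, total = t0, 0
    path = list(tour) + [tour[0]]
    for a, b in zip(path, path[1:]):
        c = cost(a, b, t)     # time-dependent arc cost at departure time t
        total += c
        t += c
    return total

def solve_tdtsp(n, cost):
    """Brute-force TDTSP for small n: fix city 0 as the start."""
    best = min(permutations(range(1, n)),
               key=lambda p: tour_cost((0,) + p, cost))
    tour = (0,) + best
    return tour, tour_cost(tour, cost)

# toy instance: static base costs plus a congestion surcharge after time 5
base = [[0, 2, 9], [2, 0, 4], [9, 4, 0]]
cost = lambda i, j, t: base[i][j] + (1 if t >= 5 else 0)
tour, c = solve_tdtsp(3, cost)
```

Note how the same arc (2, 0) costs 9 or 10 depending on when it is traversed, which is exactly what makes one-tree bounds fail in the time-dependent setting.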
We present stellar and gaseous kinematics of the inner 120x250pc^2 of the Liner/Seyfert 1 galaxy M81, from optical spectra obtained with the GMOS integral field spectrograph on the Gemini North telescope at a spatial resolution of 10pc. The stellar velocity field shows circular rotation but deviations are observed close to the minor axis which can be attributed to stellar motions possibly associated to a nuclear bar. The stellar velocity dispersion of the bulge is 162km/s leading to a black hole mass of M_BH=5.5x10^7M_sun based on the M_BH-sigma relationship. The gas kinematics is dominated by non-circular motions and the subtraction of the stellar velocity field reveals blueshifts of ~-100km/s on the far side of the galaxy and a few redshifts on the near side. These characteristics can be interpreted in terms of streaming towards the center if the gas is in the plane. On the basis of the observed velocities and geometry of the flow, we estimate a mass inflow rate in ionized gas of ~4.0x10^-3M_sun/year, which is of the order of the accretion rate necessary to power the LINER nucleus of M81. We have also applied the technique of Principal Component Analysis (PCA) to our data, which reveals the presence of a rotating nuclear gas disk within ~50pc from the nucleus and a compact outflow, approximately perpendicular to the disk. The PCA combined with the observed gas velocity field shows that the nuclear disk is being fed by gas circulating in the galaxy plane. The presence of the outflow is supported by a compact jet seen in radio observations at a similar orientation, as well as by an enhancement of the [OI]/Halpha line ratio, probably resulting from shock excitation of the circumnuclear gas by the radio jet. With these observations we are thus resolving both the feeding -- via the nuclear disk and observed gas inflow, and the feedback -- via the outflow, around the nucleus of M81.
Gas Streaming Motions towards the Nucleus of M81
We provide a computer-assisted proof of the holomorphy of the quartic and the octic meromorphic differentials arising in the main Theorem 4.11 of our paper 'The Classification of Branched Willmore spheres in the $3$-Sphere and the $4$-Sphere' (arXiv:1706.01405), using the free mathematical software Sage.
Computer-Assisted Proof of the Main Theorem of 'The Classification of Branched Willmore Spheres in the $3$-Sphere and the $4$-Sphere'
We study time-dependent solutions in M and superstring theories with higher order corrections. We first present general field equations for theories of Lovelock type with stringy corrections in arbitrary dimensions. We then exhaust all exact and asymptotic solutions of exponential and power-law expansions in the theory with Gauss-Bonnet terms relevant to heterotic strings and in the theories with quartic corrections corresponding to the M-theory and type II superstrings. We discuss interesting inflationary solutions that can generate enough e-foldings in the early universe.
Inflation from Superstring/M Theory Compactification with Higher Order Corrections I
Abstract moment maps arise as a generalization of genuine moment maps on symplectic manifolds when the symplectic structure is discarded, but the relation between the mapping and the action is kept. Particular examples of abstract moment maps had been used in Hamiltonian mechanics for some time, but the abstract notion originated in the study of cobordisms of Hamiltonian group actions. In this paper we answer the question of existence of a (proper) abstract moment map for a torus action and give a necessary and sufficient condition for an abstract moment map to be associated with a pre-symplectic form. This is done by using the notion of an assignment, which is a combinatorial counterpart of an abstract moment map. Finally, we show that the space of assignments fits as the zeroth cohomology in a series of certain cohomology spaces associated with a torus action on a manifold. We study the resulting "assignment cohomology" theory.
Assignments and Abstract Moment Maps
In this paper, we provide a systematic analysis of some finite volume lattice Boltzmann schemes in two dimensions. A complete iteration cycle in time evolution of discretized distribution functions is formally divided into collision and propagation (streaming) steps. Considering mass and momentum conserving properties of the collision step, it becomes obvious that changes in the momentum of finite volume cells are due solely to the propagation step. Details of the propagation step are discussed for different approximate schemes for the evaluation of fluxes at the boundaries of the finite volume cells. Moreover, a full Chapman-Enskog analysis is conducted, allowing us to recover the Navier-Stokes equation. As an important result of this analysis, the relation between the lattice Boltzmann relaxation time and the kinematic viscosity of the fluid is derived for each approximate flux evaluation scheme. In particular, it is found that the constant upwind scheme leads to a positive numerical viscosity while the central scheme as well as the linear upwind scheme are free of this artifact.
Chapman-Enskog Analysis of Finite Volume Lattice Boltzmann Schemes
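The distinction between flux evaluation schemes can be sketched on a 1-D scalar conservation law, a toy stand-in (of my own construction, not the paper's 2-D lattice Boltzmann setting) for the propagation step. Both schemes conserve the cell-integrated quantity exactly, but the upwind face value introduces numerical diffusion while the central average does not:

```python
import numpy as np

def step(f, u, dt, dx, scheme="upwind"):
    """One finite-volume step for df/dt + u df/dx = 0 (u > 0), periodic cells.
    The flux at each left cell face is evaluated either with the constant
    upwind (upstream-cell) value or with the central average across the face."""
    left = np.roll(f, 1)                 # value in the cell to the left
    if scheme == "upwind":
        flux = u * left                  # constant upwind: take upstream cell
    else:
        flux = u * 0.5 * (left + f)      # central: average across the face
    # f_i <- f_i - dt/dx * (F_{i+1/2} - F_{i-1/2})
    return f - dt / dx * (np.roll(flux, -1) - flux)

f0 = np.zeros(50)
f0[20:30] = 1.0                          # a step profile
fu = step(f0, u=1.0, dt=0.5, dx=1.0, scheme="upwind")
fc = step(f0, u=1.0, dt=0.5, dx=1.0, scheme="central")
```

Because the face fluxes telescope over the periodic domain, the total mass is conserved to machine precision in both cases; the schemes differ only in the effective (numerical) viscosity, which is the quantity the Chapman-Enskog analysis pins down.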
The finite temperature phase transition of strongly interacting matter is studied within a nonlocal chiral quark model of the NJL type coupled to a Polyakov loop. In contrast to previous investigations which were restricted to the mean-field approximation, mesonic correlations are included by evaluating the quark-antiquark ring sum. For physical pion masses, we find that the pions dominate the pressure below the phase transition, whereas above T_c the pressure is well described by the mean-field approximation result. For large pion masses, as realized in lattice simulations, the meson effects are suppressed.
Effects of mesonic correlations in the QCD phase transition
We demonstrate a 2-grating free electron Mach-Zehnder interferometer constructed in a transmission electron microscope. A symmetric binary phase grating and condenser lens system forms two spatially separated, focused probes at the sample which can be scanned while maintaining alignment. The two paths interfere at a second grating, creating constructive or destructive interference in the output beams. This interferometer has many notable features: positionable probe beams, large path separations relative to beam width, continuously tunable relative phase between paths, and real-time phase information. Here we use the electron interferometer to measure the relative phase shifts imparted to the electron probes by electrostatic potentials, and demonstrate quantitative nanoscale phase imaging of a polystyrene latex nanoparticle.
A Scanning 2-Grating Free Electron Mach-Zehnder Interferometer
We use magnetic force microscopy (MFM) and scanning SQUID susceptometry to measure the local superfluid density $\rho_{s}$ in Ba(Fe$_{0.95}$Co$_{0.05}$)$_2$As$_2$ single crystals from 0.4 K to the critical temperature $T_c=18.5$ K. We observe that the penetration depth $\lambda$ varies about ten times more slowly with temperature than previously published, with a dependence that can be well described by a clean two-band fully gapped model. We demonstrate that MFM can measure the important and hard-to-determine absolute value of $\lambda$, as well as obtain its temperature dependence and spatial homogeneity. We find $\rho_{s}$ to be uniform despite the highly disordered vortex pinning.
Local measurement of the penetration depth in the pnictide superconductor Ba(Fe$_{0.95}$Co$_{0.05}$)$_2$As$_2$
In this paper, we study inverse boundary problems associated with semilinear parabolic systems in several scenarios where both the nonlinearities and the initial data can be unknown. We establish several simultaneous recovery results showing that the passive or active boundary Dirichlet-to-Neumann operators can uniquely recover both of the unknowns, even stably in a certain case. It turns out that the nonlinearities play a critical role in deriving these recovery results. If the nonlinear term belongs to a general $C^1$ class but fulfilling a certain growth condition, the recovery results are established by the control approach via Carleman estimates. If the nonlinear term belongs to an analytic class, the recovery results are established through successive linearization in combination with special CGO (Complex Geometrical Optics) solutions for the parabolic system.
Simultaneous recoveries for semilinear parabolic systems
Intelligent Traffic Monitoring (ITMo) technologies hold the potential for improving road safety/security and for enabling smart city infrastructure. Understanding traffic situations requires a complex fusion of perceptual information with domain-specific and causal commonsense knowledge. Whereas prior work has provided benchmarks and methods for traffic monitoring, it remains unclear whether models can effectively align these information sources and reason in novel scenarios. To address this assessment gap, we devise three novel text-based tasks for situational reasoning in the traffic domain: i) BDD-QA, which evaluates the ability of Language Models (LMs) to perform situational decision-making, ii) TV-QA, which assesses LMs' abilities to reason about complex event causality, and iii) HDT-QA, which evaluates the ability of models to solve human driving exams. We adopt four knowledge-enhanced methods that have shown generalization capability across language reasoning tasks in prior work, based on natural language inference, commonsense knowledge-graph self-supervision, multi-QA joint training, and dense retrieval of domain information. We associate each method with a relevant knowledge source, including knowledge graphs, relevant benchmarks, and driving manuals. In extensive experiments, we benchmark various knowledge-aware methods against the three datasets, under zero-shot evaluation; we provide in-depth analyses of model performance on data partitions and examine model predictions categorically, to yield useful insights on traffic understanding, given different background knowledge and reasoning strategies.
A Study of Situational Reasoning for Traffic Understanding
Hydrophobicity is thought to be one of the primary forces driving the folding of proteins. On average, hydrophobic residues occur preferentially in the core, whereas polar residues tend to occur at the surface of a folded protein. By analyzing the known protein structures, we quantify the degree to which the hydrophobicity sequence of a protein correlates with its pattern of surface exposure. We have assessed the statistical significance of this correlation for several hydrophobicity scales in the literature, and find that the computed correlations are significant but far from optimal. We show that this less than optimal correlation arises primarily from the large degree of mutations that naturally occurring proteins can tolerate. Lesser effects are due in part to forces other than hydrophobicity and we quantify this by analyzing the surface exposure distributions of all amino acids. Lastly, we show that our database findings are consistent with those found from an off-lattice hydrophobic-polar model of protein folding.
Correlation between sequence hydrophobicity and surface-exposure pattern of database proteins
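The basic correlation measure can be illustrated in a few lines. The sketch below is my own toy construction: the Kyte-Doolittle-like scale values and the example sequence are illustrative, not the scales or database analyzed in the paper:

```python
import numpy as np

def hydrophobicity_exposure_corr(seq, exposure, scale):
    """Pearson correlation between per-residue hydrophobicity (from a chosen
    scale) and burial (1 - relative surface exposure)."""
    h = np.array([scale[a] for a in seq])
    burial = 1.0 - np.asarray(exposure)
    return np.corrcoef(h, burial)[0, 1]

# illustrative hydrophobicity values (Kyte-Doolittle-like, not the paper's)
scale = {"I": 4.5, "V": 4.2, "L": 3.8, "A": 1.8, "G": -0.4,
         "S": -0.8, "N": -3.5, "K": -3.9, "D": -3.5, "R": -4.5}

# toy case: hydrophobic residues buried (exposure 0), polar ones exposed
c = hydrophobicity_exposure_corr("IVLKDR", [0, 0, 0, 1, 1, 1], scale)
```

On real structures the correlation is far from this idealized case, which is the paper's central observation about mutational tolerance.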
The starting point of any lattice QCD computation is the generation of a Markov chain of gauge field configurations. Due to the large number of lattice links and due to the matrix multiplications, generating SU(Nc) lattice QCD configurations is a highly demanding computational task, requiring advanced computer parallel architectures such as clusters of several Central Processing Units (CPUs) or Graphics Processing Units (GPUs). In this paper we present and explore the performance of CUDA codes for NVIDIA GPUs to generate SU(Nc) lattice QCD pure gauge configurations. Our implementation in one GPU uses CUDA and in multiple GPUs uses OpenMP and CUDA. We present optimized CUDA codes for SU(2), SU(3) and SU(4). We also show a generic SU(Nc) code for Nc$\,\geq 4$ and compare it with the optimized version of SU(4). Our codes are publicly available for free use by the lattice QCD community.
Generating SU(Nc) pure gauge lattice QCD configurations on GPUs with CUDA
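One building block of such generators can be sketched on the CPU: drawing a uniformly distributed SU(2) element from a unit quaternion, as used in heat-bath-style link updates. This is a minimal NumPy illustration of the group element only, assuming the quaternion parameterization; it does not reproduce the paper's CUDA kernels or the full update algorithm:

```python
import numpy as np

def random_su2(rng):
    """Uniformly distributed SU(2) matrix via a unit quaternion:
    U = a0*I + i*(a1*sx + a2*sy + a3*sz) with (a0,a1,a2,a3) uniform on S^3."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)               # uniform point on the 3-sphere
    a0, a1, a2, a3 = a
    return np.array([[a0 + 1j * a3,  a2 + 1j * a1],
                     [-a2 + 1j * a1, a0 - 1j * a3]])

U = random_su2(np.random.default_rng(0))
```

By construction det(U) = a0^2 + a1^2 + a2^2 + a3^2 = 1 and U is unitary; SU(Nc) updates are commonly built by sweeping such SU(2) elements through embedded subgroups.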
The 3-dimensional Heisenberg group can be equipped with three different types of left-invariant Lorentzian metric, according to whether the center of the Lie algebra is spacelike, timelike or null. Using the second of these types, we study spacelike surfaces of mean curvature zero. These surfaces with singularities are associated with harmonic maps into the 2-sphere. We show that the generic singularities are cuspidal edge, swallowtail and cuspidal cross-cap. We also give the loop group construction for these surfaces, and the criteria on the loop group potentials for the different generic singularities. Lastly, we solve the Cauchy problem for harmonic maps into the 2-sphere using loop groups, and use this to give a geometric characterization of the singularities. We use these results to prove that a regular spacelike maximal disc with null boundary must have at least two cuspidal cross-cap singularities on the boundary.
Maximal surfaces in the Lorentzian Heisenberg group
We obtain and study new $\Phi$-entropy inequalities for diffusion semigroups, with Poincar\'e or logarithmic Sobolev inequalities as particular cases. From this study we derive the asymptotic behaviour of a large class of linear Fokker-Planck type equations under simple conditions, widely extending previous results. Nonlinear diffusion equations are also studied by means of these inequalities. The $\Gamma_2$ criterion of D. Bakry and M. Emery appears as a main tool in the analysis, in local or integral forms.
Phi-entropy inequalities for diffusion semigroups
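For orientation, the standard definitions (not specific to this paper): the $\Phi$-entropy of $f$ under a measure $\mu$, and the generic $\Phi$-entropy inequality, which reduces to the Poincar\'e inequality for $\Phi(x)=x^2$ and to the logarithmic Sobolev inequality for $\Phi(x)=x\log x$:

```latex
\[
\mathrm{Ent}^{\Phi}_{\mu}(f) = \int \Phi(f)\, d\mu - \Phi\!\left(\int f\, d\mu\right),
\qquad
\mathrm{Ent}^{\Phi}_{\mu}(f) \le C \int \Phi''(f)\, |\nabla f|^{2}\, d\mu .
\]
```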
The ability to manipulate the refractive index is a fundamental principle underlying numerous photonic devices. Various techniques exist to modify the refractive index across diverse materials, making performance comparison far from straightforward. In evaluating these methods, power consumption emerges as a key performance characteristic, alongside bandwidth and footprint. Here I undertake a comprehensive comparison of the energy and power requirements for the most well-known index change schemes. The findings reveal that while the energy per volume for index change remains within the same order of magnitude across different techniques and materials, the power consumption required to achieve switching, 100% modulation, or 100% frequency conversion can differ significantly, spanning many orders of magnitude. As it turns out, the material used has less influence on power reduction than the specific resonant or traveling wave scheme employed to enhance the interaction time between light and matter. Though this work is not intended to serve as a design guide, it does establish the limitations and trade-offs involved in index modulation, thus providing valuable insights for photonics practitioners.
Energy and Power requirements for alteration of the refractive index
Using methods of conformal field theory, we conjecture an exact form for the probability that n distinct clusters span a large rectangle or open cylinder of aspect ratio k, in the limit when k is large.
The Number of Incipient Spanning Clusters in Two-Dimensional Percolation
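The quantity in question can be estimated numerically with a union-find pass over site-percolation configurations. This is a generic simulation sketch (my own, not the paper's conformal field theory calculation), counting clusters that connect the left and right edges of a rectangle:

```python
import numpy as np

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def spanning_clusters(occ):
    """Number of distinct clusters of occupied sites connecting the left and
    right edges of a 2-D site-percolation configuration occ (boolean array)."""
    h, w = occ.shape
    parent = list(range(h * w))
    def union(a, b):
        parent[find(parent, a)] = find(parent, b)
    for i in range(h):
        for j in range(w):
            if not occ[i, j]:
                continue
            if i + 1 < h and occ[i + 1, j]:
                union(i * w + j, (i + 1) * w + j)
            if j + 1 < w and occ[i, j + 1]:
                union(i * w + j, i * w + j + 1)
    left = {find(parent, i * w) for i in range(h) if occ[i, 0]}
    right = {find(parent, i * w + w - 1) for i in range(h) if occ[i, w - 1]}
    return len(left & right)
```

Averaging this count over random configurations at criticality, as a function of the aspect ratio k, is what the conjectured exact form describes in the large-k limit.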
We study the dynamic behavior of a Bose-Einstein condensate (BEC) containing a dark soliton separately reflected from potential drops and potential barriers. It is shown that for a rapidly varying potential and in a certain regime of incident velocity, the quantum reflection probability displays a cosine dependence on the deflection angle between the incident and reflected solitons, i.e., $R(\theta) \sim \cos 2\theta$. For a potential drop, $R(\theta)$ is sensitive to the width of the potential drop up to the length of the dark soliton, and the difference of the reflection rates between the orientation angles of the soliton $\theta=0$ and $\theta=\pi/2$, $\delta R_s$, displays an oscillating exponential decay with increasing potential width. However, for a barrier potential, $R(\theta)$ is insensitive for potential widths less than the decay length of the matter wave and $\delta R_s$ shows a simple exponential trend. This discrepancy between the reflectances in the two systems arises from the different behaviors of the matter waves in the region where the potential varies.
Quantum reflection of a Bose-Einstein condensate from a rapidly varying potential: the role of dark soliton
Current semantic segmentation methods focus only on mining "local" context, i.e., dependencies between pixels within individual images, by context-aggregation modules (e.g., dilated convolution, neural attention) or structure-aware optimization criteria (e.g., IoU-like loss). However, they ignore "global" context of the training data, i.e., rich semantic relations between pixels across different images. Inspired by the recent advance in unsupervised contrastive representation learning, we propose a pixel-wise contrastive framework for semantic segmentation in the fully supervised setting. The core idea is to enforce pixel embeddings belonging to a same semantic class to be more similar than embeddings from different classes. It raises a pixel-wise metric learning paradigm for semantic segmentation, by explicitly exploring the structures of labeled pixels, which were rarely explored before. Our method can be effortlessly incorporated into existing segmentation frameworks without extra overhead during testing. We experimentally show that, with famous segmentation models (i.e., DeepLabV3, HRNet, OCR) and backbones (i.e., ResNet, HRNet), our method brings consistent performance improvements across diverse datasets (i.e., Cityscapes, PASCAL-Context, COCO-Stuff, CamVid). We expect this work will encourage our community to rethink the current de facto training paradigm in fully supervised semantic segmentation.
Exploring Cross-Image Pixel Contrast for Semantic Segmentation
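The core idea, same-class pixel embeddings more similar than different-class ones, is a supervised InfoNCE objective over pixels. Below is a minimal NumPy sketch of such a loss under my own simplifications (a flat batch of L2-normalised pixel embeddings, temperature tau = 1); the paper's actual loss, sampling strategy and memory design are richer:

```python
import numpy as np

def pixel_contrastive_loss(z, y, tau=1.0):
    """Supervised pixel-wise InfoNCE: pull same-class pixel embeddings
    together, push different-class ones apart.
    z: (n, d) L2-normalised pixel embeddings; y: (n,) class labels."""
    n = z.shape[0]
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)       # exclude self-similarity
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
    return -(logprob[pos].sum() / pos.sum())

# toy batch: two classes with perfectly separated embeddings
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
y = np.array([0, 0, 1, 1])
loss = pixel_contrastive_loss(z, y)
```

Because the loss only touches the embedding space, it can be attached to any segmentation head during training and dropped at test time, which is why no inference overhead is incurred.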
We construct identities of Pohozhaev type, in the context of elastostatics and elastodynamics, by using the Noetherian approach. As an application, a non-existence result for forced semi-linear isotropic and anisotropic elastic systems is established.
Pohozhaev and Morawetz Identities in Elastostatics and Elastodynamics
Faraday rotation has become a powerful tool in a large variety of physics applications. Most prominently, Faraday rotation can be used in precision magnetometry. Here we report the first measurements of gyromagnetic Faraday rotation on a dense, hyperpolarized $^3$He gas target. Theoretical calculations predict the rotations of linearly polarized light due to the magnetization of spin-1/2 particles are on the scale of 10$^{-7}$ radians. To maximize the signal, a $^3$He target designed for use with a multipass cavity is combined with a sensitive apparatus for polarimetry that can detect optical rotations on the order of 10$^{-8}$ radians. Although the expected results are well above the sensitivity for the given experimental conditions, no nuclear-spin induced rotation was observed.
Limits on Magnetically Induced Faraday Rotation from Polarized $^3$He Atoms
Global-scale quantum communication networks will require efficient long-distance distribution of quantum signals. Optical fibre communication channels have range constraints due to exponential losses in the absence of quantum memories and repeaters. Satellites enable intercontinental quantum communication by exploiting more benign inverse square free-space attenuation and long sight lines. However, the design and engineering of satellite quantum key distribution (QKD) systems are difficult and characteristic differences to terrestrial QKD networks and operations pose additional challenges. The typical approach to modelling satellite QKD (SatQKD) has been to estimate performances with a fully optimised protocol parameter space and with few payload and platform resource limitations. Here, we analyse how practical constraints affect the performance of SatQKD for the Bennett-Brassard 1984 (BB84) weak coherent pulse decoy state protocol with finite key size effects. We consider engineering limitations and trade-offs in mission design including limited in-orbit tunability, quantum random number generation rates and storage, and source intensity uncertainty. We quantify practical SatQKD performance limits to determine the long-term key generation capacity and provide important performance benchmarks to support the design of upcoming missions.
Finite key performance of satellite quantum key distribution under practical constraints
As the computational resolution of modern cosmological simulations comes ever closer to resolving individual star-forming clumps in a galaxy, the need for "resolution-appropriate" physics in galaxy-scale simulations has never been greater. To this end, we introduce a self-consistent numerical framework that includes explicit treatments of feedback from star-forming molecular clouds (SFMCs) and massive black holes (MBHs). In addition to the thermal supernovae feedback from SFMC particles, photoionizing radiation from both SFMCs and MBHs is tracked through full 3-dimensional ray tracing. A mechanical feedback channel from MBHs is also considered. Using our framework, we perform a state-of-the-art cosmological simulation of a quasar-host galaxy at z~7.5 for ~25 Myrs with all relevant galactic components such as dark matter, gas, SFMCs, and an embedded MBH seed of ~> 1e6 Ms. We find that feedback from SFMCs and an accreting MBH suppresses runaway star formation locally in the galactic core region. Newly included radiation feedback from SFMCs, combined with feedback from the MBH, helps the MBH grow faster by retaining gas that eventually accretes on to the MBH. Our experiment demonstrates that previously undiscussed types of interplay between gas, SFMCs, and a MBH may hold important clues about the growth and feedback of quasars and their host galaxies in the high-redshift Universe.
High-redshift Galaxy Formation with Self-consistently Modeled Stars and Massive Black Holes: Stellar Feedback and Quasar Growth
For a team of robots to work collaboratively, it is crucial that each robot have the ability to determine the position of their neighbors, relative to themselves, in order to execute tasks autonomously. This letter presents an algorithm for determining the three-dimensional relative position between two mobile robots, each using nothing more than a single ultra-wideband transceiver, an accelerometer, a rate gyro, and a magnetometer. A sliding window filter estimates the relative position at selected keypoints by combining the distance measurements with acceleration estimates, which each agent computes using an on-board attitude estimator. The algorithm is appropriate for real-time implementation, and has been tested in simulation and experiment, where it comfortably outperforms standard estimators. A positioning accuracy of less than 1 meter is achieved with inexpensive sensors.
Relative Position Estimation Between Two UWB Devices with IMUs
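The geometry behind such range-plus-inertial estimation can be illustrated with a Gauss-Newton fit: if agent A knows its own displacements between keypoints (from dead reckoning) and records the UWB range at each one, the relative position of a static agent B is observable. This is a toy stand-in of my own, not the letter's sliding window filter, and it assumes noise-free ranges and a stationary target:

```python
import numpy as np

def estimate_relative_position(displacements, ranges, x0, iters=30):
    """Gauss-Newton on range residuals r_k = ||p - d_k|| for the unknown
    relative position p, given known displacements d_k and measured ranges."""
    p = np.asarray(x0, float)
    for _ in range(iters):
        diff = p - displacements                 # (k, 3)
        pred = np.linalg.norm(diff, axis=1)
        J = diff / pred[:, None]                 # Jacobian of ||p - d_k|| wrt p
        p = p + np.linalg.lstsq(J, ranges - pred, rcond=None)[0]
    return p

disp = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], float)
p_true = np.array([2.0, 1.0, 0.5])
ranges = np.linalg.norm(p_true - disp, axis=1)   # simulated exact UWB ranges
p_hat = estimate_relative_position(disp, ranges, x0=[1.0, 1.0, 1.0])
```

Non-degenerate (non-coplanar) displacements are essential here, which mirrors the letter's need to combine ranges with acceleration-derived motion rather than a single static measurement.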
In an important series of articles published during the 70's, Krasi\'nski displayed a class of interior solutions of the Einstein field equations sourced by a stationary isentropic rotating cylinder of perfect fluid. However, these solutions depend on an unspecified arbitrary function, which led the author to claim that the equation of state of the fluid could not be obtained directly from the field equations but had to be added by hand. In the present article, we use a double ansatz that we developed in 2021 and implemented at length in a series of recent papers displaying exact interior solutions for a stationary rotating cylindrically symmetric fluid with anisotropic pressure. This ansatz allows us to obtain here a fully integrated class of solutions to the Einstein equations, written with the use of very simple analytical functions, and to show that the equation of state of the fluid follows naturally from these field equations.
Fully integrated interior solutions of GR for stationary rigidly rotating cylindrical perfect fluids
The young O-type star theta1 OriC, the brightest star of the Trapezium cluster in Orion, is one of only two known magnetic rotators among the O stars. However, not all spectroscopic variations of this star can be explained by the magnetic rotator model. We present results from a long-term monitoring to study these unexplained variations and to improve the stellar rotational period. We want to study long-term trends of the radial velocity of theta1 OriC, to search for unusual changes, to improve the established rotational period and to check for possible period changes. We combine a large set of published spectroscopic data with new observations and analyze the spectra in a homogeneous way. We study the radial velocity from selected photospheric lines and determine the equivalent width of the Halpha and HeII4686 lines. We find evidence for a secular change of the radial velocity of theta1 OriC that is consistent with the published interferometric orbit. We refine the rotational period of theta1 OriC and discuss the possibility of detecting period changes in the near future.
Long-term monitoring of theta1 OriC: the spectroscopic orbit and an improved rotational period
We report on our monitoring of the strong-field magnetar-like pulsar PSR J1846-0258 with the Neutron Star Interior Composition Explorer (NICER) and the timing and spectral evolution during its outburst in August 2020. Phase-coherent timing solutions were maintained from March 2017 through November 2021, including a coherent solution throughout the outburst. We detected a large spin-up glitch of magnitude $\Delta\nu/\nu = 3 \times 10^{-6}$ at the start of the outburst and observed an increase in pulsed flux that reached a factor of more than 10 times the quiescent level, a behavior similar to that of the 2006 outburst. Our monitoring observations in June and July 2020 indicate that the flux was rising prior to the Swift announcement of the outburst on August 1, 2020. We also observed several sharp rises in the pulsed flux following the outburst and the flux reached quiescent level by November 2020. The pulse profile was observed to change shape during the outburst, returning to the pre-outburst shape by 2021. Spectral analysis of the pulsed emission of NICER data shows that the flux increases result entirely from a new black body component that gradually fades away while the power-law remains nearly constant at its quiescent level throughout the outburst. Joint spectral analysis of NICER and simultaneous NuSTAR data confirms this picture. We discuss the interpretation of the magnetar-like outburst and origin of the transient thermal component in the context of both a pulsar-like and a magnetar-like model.
A NICER View on the 2020 Magnetar-Like Outburst of PSR J1846-0258
We present a process for the construction of a SWAP gate which does not require a composition of elementary gates from a universal set. We propose to employ direct techniques adapted to the preparation of this specific gate. The mechanism, based on adiabatic passage, constitutes a decoherence-free method in the sense that spontaneous emission and cavity damping are avoided.
Fast SWAP gate by adiabatic passage
Recently, Mobile-Edge Computing (MEC) has arisen as an emerging paradigm that extends cloud-computing capabilities to the edge of the Radio Access Network (RAN) by deploying MEC servers right at the Base Stations (BSs). In this paper, we envision a collaborative joint caching and processing strategy for on-demand video streaming in MEC networks. Our design aims at enhancing the widely used Adaptive BitRate (ABR) streaming technology, where multiple bitrate versions of a video can be delivered so as to adapt to the heterogeneity of user capabilities and the variation of network connection bandwidth. The proposed strategy faces two main challenges: (i) not only the videos but their appropriate bitrate versions have to be effectively selected to store in the caches, and (ii) the transcoding relationships among different versions need to be taken into account to effectively utilize the processing capacity at the MEC servers. To this end, we formulate the collaborative joint caching and processing problem as an Integer Linear Program (ILP) that minimizes the backhaul network cost, subject to the cache storage and processing capacity constraints. Due to the NP-completeness of the problem and the impractical overheads of the existing offline approaches, we propose a novel online algorithm that makes cache placement and video scheduling decisions upon the arrival of each new request. Extensive simulation results demonstrate the significant performance improvement of the proposed strategy over traditional approaches in terms of cache hit ratio increase, backhaul traffic and initial access delay reduction.
Collaborative Multi-bitrate Video Caching and Processing in Mobile-Edge Computing Networks
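The transcoding relationship among bitrate versions is the key twist: a request can be served from the cache, transcoded down from a cached higher bitrate, or fetched over the backhaul. The per-request decision rule below is an illustrative sketch of my own (hypothetical costs, a simple "transcoding load ~ target bitrate" rule), not the paper's ILP or online algorithm:

```python
def serve(request, cache, cpu_capacity, cost_backhaul=1.0, cost_transcode=0.2):
    """Decide how to serve one request (video, bitrate): prefer an exact cache
    hit, then transcoding from a cached higher bitrate if CPU is available,
    then a backhaul fetch. Returns (action, cost)."""
    video, rate = request
    if (video, rate) in cache:
        return "hit", 0.0
    higher = [r for (v, r) in cache if v == video and r > rate]
    if higher and cpu_capacity >= rate:   # assumed: transcode load ~ bitrate
        return "transcode", cost_transcode
    return "backhaul", cost_backhaul

cache = {("v1", 4), ("v2", 2)}            # cached (video, bitrate) versions
```

Even in this caricature, caching only the highest bitrate of a popular video can cover all lower-rate requests at transcoding cost, which is exactly the cache/CPU trade-off the ILP optimizes.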
We study an inverse problem of determining a time-dependent potential appearing in the wave equation in conformally transversally anisotropic manifolds of dimension three or higher. These are compact Riemannian manifolds with boundary that are conformally embedded in a product of the real line and a transversal manifold. Under the assumption of the attenuated geodesic ray transform being injective on the transversal manifold, we prove the unique determination of time-dependent potentials from the knowledge of a certain partial Cauchy data set.
Recovery of a time-dependent potential in hyperbolic equations on conformally transversally anisotropic manifolds
The detection of the high-energy neutrino event, IceCube-170922A, demonstrated that multimessenger particle astrophysics triggered by neutrino alerts is feasible. We consider time delay signatures caused by secret neutrino interactions with the cosmic neutrino background and dark matter and suggest that these can be used as a novel probe of neutrino interactions beyond the Standard Model (BSM). The tests with BSM-induced neutrino echoes are distinct from existing constraints from the spectral modification and will be enabled by multimessenger observations of bright neutrino transients with future experiments such as IceCube-Gen2, KM3Net, and Hyper-Kamiokande. The constraints are complementary to those from accelerator and laboratory experiments and powerful for testing various particle models that explain tensions prevailing in the cosmological data.
Neutrino Echoes from Multimessenger Transient Sources
This paper proposes a Generalized Power Method (GPM) to tackle the problems of community detection and group synchronization simultaneously in a direct non-convex manner. Under the stochastic group block model (SGBM), theoretical analysis indicates that the algorithm is able to exactly recover the ground truth in $O(n\log^2n)$ time, sharply outperforming the benchmark method of semidefinite programming (SDP), which requires $O(n^{3.5})$ time. Moreover, a lower bound on the model parameters is given as a necessary condition for exact recovery by GPM. The new bound breaches the information-theoretic threshold for pure community detection under the stochastic block model (SBM), thus demonstrating the superiority of our simultaneous optimization algorithm over the trivial two-stage method that performs the two tasks in succession. We also conduct numerical experiments on GPM and SDP to corroborate and complement our theoretical analysis.
Non-Convex Joint Community Detection and Group Synchronization via Generalized Power Method
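The core of any generalized power method is an iterate-then-project loop: multiply by a data matrix, then project back onto the feasible set. The sketch below shows that loop for a deterministic toy two-community problem with the sign function as the projection; it is not the paper's joint community-detection/synchronization algorithm over the SGBM, and the centered block matrix is a hand-built illustration rather than a random model draw.

```python
# Minimal generalized-power-method loop (illustrative): power step A @ x,
# then projection onto the feasible set {-1, +1}^n via the sign function.

def gpm_labels(A, x0, iters=20):
    x = list(x0)
    for _ in range(iters):
        y = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
        x = [1.0 if v >= 0 else -1.0 for v in y]   # projection step
    return x

n, half = 6, 3
# centered block "adjacency": +0.5 within a community, -0.5 across,
# a deterministic stand-in for a centered SBM adjacency matrix
A = [[0.5 if (i < half) == (j < half) else -0.5 for j in range(n)]
     for i in range(n)]
x0 = [1.0, 0.5, -0.2, 0.3, -0.4, 0.1]   # arbitrary initial guess
labels = gpm_labels(A, x0)
```

On this toy instance a single power-plus-projection step already lands on the community indicator vector, which is then a fixed point of the iteration; the per-iteration cost is one matrix-vector product plus an entrywise projection, the source of the near-linear running time claimed for GPM.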
Motivated by a series of experiments that revealed a temperature dependence of the dynamic scaling regime of growing surfaces, we investigate theoretically how a nonequilibrium growth process reacts to a sudden change of system parameters. We discuss quenches between correlated regimes through exact expressions derived from the stochastic Edwards-Wilkinson equation with a variable diffusion constant. Our study reveals that a sudden change of the diffusion constant leads to remarkable changes in the surface roughness. Different dynamic regimes, characterized by a power-law or by an exponential relaxation, are identified, and a dynamic phase diagram is constructed. We conclude that growth processes provide one of the rare instances where quenches between correlated regimes yield a power-law relaxation.
Changing growth conditions during surface growth
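A quench of the diffusion constant in the Edwards-Wilkinson equation can be illustrated in its simplest, noise-free limit, where the equation reduces to pure diffusion and a single Fourier mode decays as $\exp(-Dq^2t)$. The sketch below is only that deterministic limit with illustrative parameter values; the paper's analysis of course includes the stochastic noise term that produces the roughness and the power-law versus exponential regimes.

```python
# Noise-free sketch of Edwards-Wilkinson relaxation dh/dt = D d^2h/dx^2
# with a sudden quench of the diffusion constant D at t_quench.
# A sine profile decays ~ exp(-D q^2 t), so quenching to a larger D
# makes the squared surface width W^2 shrink faster.
import math

def width_after(D1, D2, t_quench, t_total, L=64, dt=0.01):
    # explicit finite differences on a ring of L sites (dx = 1);
    # stable since D * dt / dx^2 stays well below 1/2 here
    h = [math.sin(2 * math.pi * i / L) for i in range(L)]
    steps = int(t_total / dt)
    for n in range(steps):
        D = D1 if n * dt < t_quench else D2
        h = [h[i] + dt * D * (h[(i - 1) % L] - 2 * h[i] + h[(i + 1) % L])
             for i in range(L)]
    mean = sum(h) / L
    return sum((x - mean) ** 2 for x in h) / L   # squared width W^2

w_quench = width_after(D1=0.5, D2=2.0, t_quench=5.0, t_total=10.0)
w_no_quench = width_after(D1=0.5, D2=0.5, t_quench=5.0, t_total=10.0)
```

Comparing the two runs shows the quench to a larger diffusion constant leaving a measurably smoother surface at the same final time, the deterministic germ of the roughness changes discussed in the abstract.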
In this note, we consider the line search for a class of abstract nonconvex algorithms that has been deeply studied in the Kurdyka-Lojasiewicz theory. We provide a weak convergence result for the line search in general. When the objective function satisfies the Kurdyka-Lojasiewicz property together with certain additional assumptions, a global convergence result can be derived. An application to L0-regularized least-squares minimization is presented at the end of the paper.
A note on the convergence of nonconvex line search
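A concrete instance of the kind of step-size rule such convergence analyses cover is backtracking line search with the Armijo sufficient-decrease condition on a smooth nonconvex objective. The quartic objective and the parameters `c`, `beta` below are purely illustrative and are not taken from the note, which treats an abstract algorithm class and a nonsmooth L0 application.

```python
# Backtracking (Armijo) line search for gradient descent on a smooth
# nonconvex 1-D objective (illustrative choice of function and parameters).

def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def armijo_descent(x, iters=200, c=1e-4, beta=0.5, t0=1.0):
    for _ in range(iters):
        g = grad(x)
        t = t0
        # shrink the step until the sufficient-decrease condition
        #   f(x - t g) <= f(x) - c t |g|^2   holds
        while f(x - t * g) > f(x) - c * t * g * g:
            t *= beta
        x = x - t * g
    return x

x_star = armijo_descent(2.0)
```

The Armijo test guarantees monotone decrease of the objective along the iterates; for nonconvex objectives this yields convergence of the gradient to zero (a stationary point), while statements about convergence of the iterates themselves are exactly where Kurdyka-Lojasiewicz-type assumptions enter.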
We consider a two-dimensional free harmonic oscillator where the initial position is fixed and the initial velocity can change direction. All possible orbits are ellipses, and their enveloping curve is an ellipse too. We show that the locus of the foci of all elliptical orbits is a Cassini oval. Depending on the magnitude of the initial velocity, we observe all three kinds of Cassini ovals, one of which is the lemniscate of Bernoulli. These Cassini ovals have the same foci as the enveloping ellipse.
Harmonic motion and Cassini ovals
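The stated locus can be checked numerically. With unit frequency, initial position $(d,0)$ and speed $v$, the orbit for launch angle $\theta$ is $\mathbf{r}(t) = (d\cos t + v\cos\theta\,\sin t,\; v\sin\theta\,\sin t)$, an origin-centered ellipse whose semi-axes satisfy $a^2+b^2 = d^2+v^2$ and $ab = dv|\sin\theta|$ (conjugate semi-diameters). The sketch below, with the illustrative values $d=2$, $v=1$, locates each orbit's foci and tests the Cassini condition, taking $(\pm d, 0)$ as the fixed foci; that choice matches the envelope ellipse $x^2/(d^2+v^2)+y^2/v^2=1$, whose focal distance works out to $d$.

```python
# Numerical check (illustrative parameters: omega = 1, d = 2, v = 1) that
# the foci of the orbit ellipses lie on a Cassini oval with foci (+-d, 0):
# the product of distances from an orbit focus to (+-d, 0) is constant.
import math

def orbit_focus(d, v, theta, n=20000):
    # semi-axes from conjugate semi-diameters:
    #   a^2 + b^2 = d^2 + v^2,   a^2 b^2 = (d v sin(theta))^2
    s = d * d + v * v
    p = (d * v * math.sin(theta)) ** 2
    disc = math.sqrt(s * s - 4 * p)
    a2, b2 = (s + disc) / 2, (s - disc) / 2
    c = math.sqrt(a2 - b2)                      # focal distance of the orbit
    # major-axis direction: point of maximal |r(t)| on a half-period grid
    best_r2, best_t = -1.0, 0.0
    for i in range(n):
        t = math.pi * i / n
        x = d * math.cos(t) + v * math.cos(theta) * math.sin(t)
        y = v * math.sin(theta) * math.sin(t)
        if x * x + y * y > best_r2:
            best_r2, best_t = x * x + y * y, t
    x = d * math.cos(best_t) + v * math.cos(theta) * math.sin(best_t)
    y = v * math.sin(theta) * math.sin(best_t)
    r = math.sqrt(best_r2)
    return (c * x / r, c * y / r)               # one focus; the other is its negative

d, v = 2.0, 1.0
fx, fy = orbit_focus(d, v, math.pi / 4)
cassini_product = math.hypot(fx - d, fy) * math.hypot(fx + d, fy)
```

For these values the product of focal distances comes out constant across launch angles (equal to $v^2$ in this parametrization), consistent with the orbit foci tracing a single Cassini oval whose foci coincide with those of the enveloping ellipse.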